Hey there! I'm in a desperate situation - the wrong USB adapter fried a component on the PCB... PLEASE: can anyone tell me what the component is?
Hey again! Can anyone please tell me the labeling/text on the COMPONENT IN THE IMAGE of the previous post? (Mine is fried and I need to replace the component.) Thanks!
I'm worried about whether to create surfaces before recording or post hoc in Player after recording.
Hello y'all, I was just wondering if anyone has been able to get Pupil Mobile to work with an older phone than the Nexus 5x. In my research lab we're trying to get an adapter and a Galaxy Note 4 to work, but it doesn't seem like it will. From past messages I read, it seems you need a direct connection. I'm pretty new to a lot of this and was just wondering why an adapter won't work?
Hi, I installed v3.3-0 on my Mac (see picture below), but I'm not able to open the Pupil Player software. Any suggestions?
What happens if you double-click it?
nothing
Could you please copy the application into the Applications folder and run /Applications/Pupil\ Player.app/Contents/MacOS/pupil_player from a terminal?
let me check!
How do people define/recognize drift and correct it? I have also noticed that in some cases it takes a while for the fixation detection algorithm to work after a blink or a saccade (if the eyelid is droopy). How are others accounting for this, if at all?
Hi. Is there any standard for the value in the area pointed to by the red arrow, that is, which value is usually more appropriate? thank you
Hi there, so I ran an experiment with the Pupil Labs eye tracker and unfortunately didn't define a surface for my computer screen. So now I need to match my eye data with the data from PsychoPy post hoc (I'm running a reaching task). I did record multiple calibrations that I coded into the experiment. Is there an easy way to access these calibrations? Can I export them somehow? (I found them under "Post-hoc gaze calibrations" in Player, but I don't know how to export them...)
The calibration data does not allow you to define surfaces post-hoc. You can set up surfaces post-hoc, but for that you would have had to set up surface markers around the screen edges.
Can anyone help?
Hi
Please, how can we deal with the issue of low precision when doing AOI analysis of targets that are of equal sizes? Can the surfaces created for the targets be of different sizes?
Yes, the surfaces can have different sizes
How can I access the GazeData script?
I am assuming that you are referring to the hmd-eyes project. Please have a look at the included gaze demos.
Thank you for your reply! I didn't use the markers, unfortunately... and I have already collected a large amount of data. Is there any other way to match the eye data with my other data (in scale)? I'm really stuck with this, and I understand that it was a big mistake not to use the markers.
Assuming that there is no head movement, you could try defining an area of the scene camera as the monitor. But you would have to do that in your post-processing of the exported data.
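For that post-processing step, a minimal sketch in Python could look like the following, assuming the standard gaze_positions.csv export; the monitor region and scene camera resolution below are placeholders you would have to measure for your own setup:
import pandas as pd

# Placeholder monitor region in scene-camera pixels (x_min, x_max, y_min, y_max) and camera resolution
MONITOR_PX = (420, 860, 180, 540)
FRAME_W, FRAME_H = 1280, 720

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] >= 0.6]  # drop low-confidence samples first

# Normalized coordinates have their origin bottom-left; image pixels are top-left, hence the y flip
gaze["x_px"] = gaze["norm_pos_x"] * FRAME_W
gaze["y_px"] = (1.0 - gaze["norm_pos_y"]) * FRAME_H

x0, x1, y0, y1 = MONITOR_PX
on_monitor = gaze[gaze["x_px"].between(x0, x1) & gaze["y_px"].between(y0, y1)]
print(f"{len(on_monitor)} of {len(gaze)} gaze samples fall inside the assumed monitor region")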
Hi, does anyone know how Pupil Core acquires 3d gaze data? (Is there any publication?) I want to know how the 3d gaze positions shown in "fixations.csv" are calculated.
are you using python or unity?
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using PupilLabs;

public class Data : MonoBehaviour
{
    // Reference to a PupilLabs.GazeData object; it must be assigned before use,
    // otherwise accessing it throws a NullReferenceException.
    public GazeData gazeData;

    private Vector3 direct;
    private float distance;
    private Vector2 norm;

    // Start is called before the first frame update
    void Start()
    {
        // Read the gaze direction, distance and normalized position once at startup
        direct = new Vector3(gazeData.GazeDirection.x, gazeData.GazeDirection.y, gazeData.GazeDirection.z);
        distance = gazeData.GazeDistance;
        norm = new Vector2(gazeData.NormPos.x, gazeData.NormPos.y);
    }

    // Update is called once per frame
    void Update()
    {
        // Log the values captured in Start()
        Debug.Log(direct);
        Debug.Log(distance);
        Debug.Log(norm);
    }
}
is this the correct way to access Data?
When I run the script in Unity, it says "NullReferenceException: Object reference not set to an instance of an object".
anyone know how to fix it? or is this normal?
Thank you for the information! I use Python.
Pupil Core uses https://docs.pupil-labs.com/developer/core/pye3d/ to generate 3d pupil data which is then mapped into a common coordinate system (scene camera)
Is it possible to calculate the distance of saccades from consecutive values of "gaze_point_3d_x,y,z" in "fixations.csv"? I think these values are dimensionless, so we can only calculate relative values for the saccade distances. Could you tell me whether this idea is correct, or whether there are any limitations?
Thank you very much!
Looks like this topic would fit the 💻 software-dev or 🥽 core-xr channel better. Unfortunately, I am not a Unity expert. If I had to guess, gazeData is not set in your example.
Oh I wasn't aware sorry and thanks!
Hi, just following up, how do I correct for drifts in fixations or gaze data? This came up during calibration too and the RAs made a note of it during data collection. Also, on the export info csv file, what is the difference between absolute and relative time range? I'm interested in calculating total task time (which is how the data was clipped during the export in player). These are the numbers from one of my exports: Relative Time Range 00:02.479 - 20:14.249
Absolute Time Range 1523.005157 - 2734.774731. Thanks!
Given that drift can be highly non-linear, it can be very difficult to correct, even on a case-by-case basis.
If you want to calculate time differences, it might be easier to use the absolute time (in seconds) range, as it is easier to parse
Also, the relative time range is in the following format, right: minutes:seconds.milliseconds?
yes, the time format is correct. Relative time refers to time since the first scene video frame
Thanks! But that gives me a duration value slightly higher than what the exported mp4 video shows: a small 0.9 difference when the values are rounded.
"exported mp4 video duration" How are you getting this value, or which software displays this value?
To be sure, you can also look at the exported world_timestamps.csv. It includes absolute timestamps for each exported frame.
It's Windows Media Player. Got it! Yes, those values line up with the export info.
Yeah, media players are not the most accurate way to display durations. They read this information from the video container's meta information, which is not required to be perfectly in line with the actual stream data. For example, you could have a 10 minute audio and a 5 minute video stream in the same container file, and most players would display a 10 minute duration.
Is there a way to quantify the amount of drift occurring and then decide whether it's worth correcting for? Would you suggest discarding those data? For most (not all) recordings, the RAs noticed drift at the bottom of the screen where the tabs were black and white (my best guess is that the colour scheme was similar to a calibration marker).
You can only quantify it if you have reference data to compare the gaze estimation to. Concretely, you could use the post-hoc calibration to add manual reference locations and run a validation on that particular time section.
Okay, so ideally, if there is good data without drift, I can use its gaze estimation data and compare it with the data with drift. That's problematic because at a particular time, two people can be fixating on different spaces on the screen so the calibration may fail.
I feel like we have different concepts of drift. Gaze estimation errors are usually dependent on the location of the actual gaze (scene camera center is usually more accurate than scene camera borders). If the accuracy for a specific location decreases over time, e.g. due to headset slippage, then I would speak of drift.
So, technically, during a calibration, there can be no drift (as in the definition above).
"two people can be fixating on different spaces on the screen so the calibration may fail" I do not understand this statement. Is this something specific to your experiment setup?
If we are validating a particular section based on reference data, do gaze data have to match exactly based on time? Participants basically scanned a screen while playing a slot machine game.
Okay, yes, I see. I was talking about a shift in gaze. So the participants were looking somewhere else, and the scene camera shows a gaze location that is a few centimetres off from their actual, reported gaze. This came up during calibration and we just carried on collecting data regardless. What to do in this case? As for headset slippage, I am not sure whether that actually occurred or to what extent. Some people definitely moved their heads a lot. Visually, during exports, I have been noticing a lack of fixation markers at the bottom of the screen for a few participants (I have yet to visualize the problematic data, only going through clean data for now).
Usually, you calibrate because you do not have a valid gaze estimation yet. Therefore, it is kind of difficult to talk about gaze estimation errors during calibration. Usually, it is recommended to perform a validation directly after (same procedure as calibration but with different marker locations) to evaluate the calibration quality.
When people look down, their pupil is often occluded by their eyelids. This might cause a lack of high-confidence gaze or fixations at the bottom of your screen.
Okay, so basically there is no way to correct for the slight shift in gaze points that we observed at the bottom of the screen after an otherwise successful calibration? We did perform a validation. What happened was after multiple validations with the same issue, data was just collected anyway because no solution worked.
"What happened was after multiple validations with the same issue, data was just collected anyway because no solution worked." Ok, understood.
"slight shift in gaze points that we observed at the bottom of the screen" Yeah, I would recommend enabling the eye video overlay and checking the pupil detections during these bottom-of-screen periods. You will probably see that the pupil detections do not match the pupil perfectly.
Hi everyone, I am new to Pupil Player and honestly, as an MD, not very IT-talented. We are using Pupil Player to count fixations and measure dwell times for a specific project. I defined my surface, but when I press 'e' the tables remain empty. Everything works well on my colleague's computer. I realised that the 'fixation detector' plugin is missing and nowhere to be found. A quick internet search didn't provide an answer, and uninstalling and reinstalling didn't solve the problem. Thank you in advance!
Hi, is it possible that you are using Pupil Invisible instead of Pupil Core?
thank you for your quick answer! where can I find this information? In the about section I only see that it is Pupil Player V3.3.0.
@user-22c43d starting in version 3.2 we disabled plugins that were not relevant/robust for Pupil Invisible recordings. (I am assuming that you are opening Pupil Invisible recordings). Therefore you will not see the fixation detector plugin in Pupil Player v3.2 and above with Pupil Invisible recordings. Please see release notes: https://github.com/pupil-labs/pupil/releases/tag/v3.2
That being said. We are working on a new fixation detection algorithm that will be available (hopefully soon) for Pupil Invisible recordings. This will be available in Cloud.
Ok, I got it, thank you! And yes, in the general settings section I just saw that these are Pupil Invisible recordings. That's why it is working on my colleague's computer; he has an older version. Is there a possibility to download the older versions in the meantime?
Found the version 3.2 in the links you sent. Thanks!
I downloaded V3.2 now but there is still no fixation detector..🤷♀️
Please allow me to clarify: versions < 3.2 should have the fixation detector for Pupil Invisible recordings.
Please note that it is not recommended to use Pupil Core's fixation detector for Pupil Invisible recordings. ⚠️ The fixation detector in Pupil Core works well for Pupil Core recordings, but was not designed for Pupil Invisible and therefore may not be accurate in classification.
ok. Is there an alternative way to count fixations/dwell-times until you released the new algo?
@user-22c43d there is not currently a recommended method for fixation classification with Pupil Invisible, but we will have a solution soon. Dwell time is a separate topic as it generally concerns duration of gaze on stimulus or area of interest - which would require a surface/reference image as AOI and then gaze data could be used to calculate dwell time per surface/aoi.
ok thanks, will discuss this in our group. Thank you for your help!
@user-22c43d has your group considered using Pupil Cloud BTW?
not yet, as far as I know.
Here is a timestamp showing here
Between the timestamp for a fixation shown in the video and the start timestamp in the fixations file, which is more accurate?
Here is another... I know we can look at the export files, but we are interested in something we thought we could get by manually taking data from the video. Now, what's the difference between these times (durations) in the two pictures?
The first is a single point in time (specifically, relative time since the first scene video frame). The second is a time period. They are not comparable.
Thanks... We are interested in the first one. I think it is not in the exported file, right?
The exported file only includes absolute timestamps. The displayed relative time is just as an intuitive reference for the playback ui.
We have used the difference between displayed timestamps and the difference between fixation durations, and both look similar. I'm sorry, what's an "absolute timestamp"?
I mean we computed the differences between consecutive timestamps, and then the differences between consecutive fixation durations.
Both seem the same
One thing to note is that Player only displays scene video frame timestamps, i.e. which have a limited frequency (30 Hz by default). While fixations are calculated based on gaze (usually 120 Hz or higher). If you use the displayed timestamps in Player, you will have less time precision compared to using the timestamps in the fixations.csv export
Absolute time refers to "pupil time", the timestamps that were recorded during the recording
Thanks
What about the annotation timestamps? Do they align with the fixation data?
Timestamps in annotations.csv and fixations.csv come from the same clock and are therefore comparable
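A minimal pandas sketch of lining the two files up, assuming the standard export columns (timestamp in annotations.csv, start_timestamp and duration in milliseconds in fixations.csv); file paths are placeholders:
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv").sort_values("start_timestamp")
annotations = pd.read_csv("exports/000/annotations.csv").sort_values("timestamp")

# For each annotation, find the last fixation that started at or before it
merged = pd.merge_asof(
    annotations,
    fixations,
    left_on="timestamp",
    right_on="start_timestamp",
    direction="backward",
)
# Keep only annotations that fall within the matched fixation's duration
within = merged[merged["timestamp"] <= merged["start_timestamp"] + merged["duration"] / 1000.0]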
Thanks
Please, how can we sync audio with the Pupil Labs system? Or is there a plugin for audio?
We planned to use an external audio recorder, but I am not sure how that can be synced with the recording.
Hi @user-7daa32. For external audio devices, I would recommend using the Lab Streaming Layer (LSL) framework. This would provide accurate synchronisation between audio and gaze, but takes a bit of work to get setup: 1) Use the AudioCaptureWin App to record audio and publish it via LSL (https://github.com/labstreaminglayer/App-AudioCapture#overview) 2) Publish gaze data during a Pupil Capture recording with our LSL Plugin (https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md) 3) Record the LSL stream with the Lab Recorder App (https://github.com/labstreaminglayer/App-LabRecorder#overview) 4) Extract timestamps and audio from the .xdf and convert to a listenable format -- do whatever post-processing
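For step 4, a rough sketch using the pyxdf and scipy packages; the file path and the audio stream name check are assumptions that depend on how the Lab Recorder output and streams are labeled in your setup:
import numpy as np
import pyxdf
from scipy.io import wavfile

streams, header = pyxdf.load_xdf("recording.xdf")  # placeholder path to the Lab Recorder output

for stream in streams:
    name = stream["info"]["name"][0]
    if "Audio" in name:  # adjust to whatever the AudioCapture app calls its stream
        rate = int(float(stream["info"]["nominal_srate"][0]))
        samples = np.asarray(stream["time_series"], dtype=np.float32)
        wavfile.write("audio.wav", rate, samples)
        print("First audio sample at LSL time", stream["time_stamps"][0])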
@papr or @nmt any thoughts/suggestions here?
Hi everyone, we are considering getting the Pupil Labs add-on for HTC Vive, and were wondering if it will be compatible with the new Vive Pro 2 headset?
Hi there. I have been trying to run pupil_capture on PopOS 20.10 (also on Arch) for the past 4 days and just realized that my user was not in the plugdev group...
Hi 👋 Based on images of the Vive Pro 2, it looks like our existing Vive add-on will not be compatible with the Vive Pro 2. A dedicated Vive Pro 2 add-on is not on our roadmap. If you are purchasing a Pro 2, we can offer a de-cased version of our add-on with cameras and cable tree for custom prototyping. If interested, send an email to [email removed]
@wrp Jumping in - thanks for letting us know. Is there anything else that you can share about your roadmap for VR?
Hi 👋 Sorry you spent so much time on this. For future readers of this thread please check out documentation here: https://docs.pupil-labs.com/core/software/pupil-capture/#linux
thanks for the heads-up, we haven't ordered yet, so good to know, we'll reconsider our options
We are also very much interested in this. More generally, I guess a solution for precisely syncing an external TTL/DIO signal to the system would be incredibly useful, not just for audio but also for displays, button presses, events, etc.
I am currently running an eye-tracking experiment involving screen stimulus, key pressing and audio capture using Psychopy (https://psychopy.org/).
Psychopy is built on Python and can run any Python code during the experiment. This means that I can send Pupil annotations over the network API from PsychoPy on triggers/events etc., as well as sync Pupil with the experiment clock. The audio recording is also done automatically using the sounddevice library and scipy for WAV export. Psychopy's included microphone module caused a lot of bugs, unfortunately.
My set up probably isn't as "in sync" as LSL, but for my experiment the temporal resolution is good enough and in my opinion makes the data a bit more manageable. Reach out on DM if I can help 👍
@user-7daa32 @user-6ef9ea Our Network API also offers flexible approaches to control experiments remotely, send/receive trigger events, and synchronise Pupil Core with other devices (https://docs.pupil-labs.com/developer/core/network-api/#network-api). For example, you can use our annotation plugin to make annotations with a timestamp in the recording on desired events, such as trigger events. Annotations can be sent via the keyboard or programmatically (https://docs.pupil-labs.com/core/software/pupil-capture/#annotations). We also maintain a Plugin for LSL that publishes gaze data in real-time using the LSL framework. This enables unified/time-synchronised collection of measurements with other LSL supported devices (https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md)
Hello! I am trying to install a plugin for Pupil Labs but I have some issues. To install all the Python dependencies, I use the requirements.txt file from the root of the pupil repository. I use this command:
python -m pip install -r requirements.txt and here is the error :
ERROR: Could not find a version that satisfies the requirement pyre (from pupil-invisible-lsl-relay) (from versions: none) ERROR: No matching distribution found for pyre
So I tried to install pyre-check with this command: pip install pyre-check
And here is the error : error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
But I have already installed Microsoft C++ Build Tools following several tutorials; you can check the attached screenshot. Do you know how to solve my problem? Or is there another way to install a plugin?
Apologies, the pyre repository is going through a name change right now, which is conflicting with the requirements.txt file. Please delete the [email removed] lines at the bottom of the requirements.txt file and install ndsi via one of these wheels: https://github.com/pupil-labs/pyndsi/suites/2900326485/artifacts/64999752
Thank you very much !
Hi, I have some quite long recordings (everything from 10-40 minutes) where I want to export the raw data, surface data and annotations. When using Pupil Player I have ran into some unexpected crashes. Here is (parts of) the log from when exporting began until it shut down:
Offline Surface Tracker Exporter - [INFO] surface_tracker.background_tasks: exporting metrics to my_path\Pupil data\participant_502\000\exports\000\surfaces
(........)
Offline Surface Tracker Exporter - [INFO] surface_tracker.background_tasks: Saved surface gaze and fixation data for 'Monitor'
Offline Surface Tracker Exporter - [INFO] background_helper: Traceback (most recent call last):
File "background_helper.py", line 73, in _wrapper
File "surface_tracker\background_tasks.py", line 318, in save_surface_statisics_to_file
File "surface_tracker\background_tasks.py", line 385, in _export_marker_detections
TypeError: 'NoneType' object is not iterable
player - [INFO] raw_data_exporter: Created 'gaze_positions.csv' file.
player - [INFO] annotations: Created 'annotations.csv' file.
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
File "launchables\player.py", line 720, in player
File "surface_tracker\surface_tracker_offline.py", line 287, in recent_events
File "surface_tracker\surface_tracker_offline.py", line 308, in _fetch_data_from_bg_fillers
File "background_helper.py", line 121, in fetch
TypeError: 'NoneType' object is not iterable
player - [INFO] launchables.player: Process shutting down.
Do I have to wait for something to load in Player before exporting? Neither blink detection nor fixation detection was turned on here.
Hi, we tried reproducing the issue but were not successful in doing so. We think it could be related to your specific recording. Would you be able to share it with [email removed]
Which Pupil Player version are you using? @user-764f72, please have a look at this and check whether it is already a known issue.
I did a quick experiment and have a theory on why it's crashing, if it is of any help: I think it's due to exporting before the marker cache is loaded completely. It did not crash when I waited for the cache to "fill", and crashed when I exported before the cache was filled, right after importing the data. See pictures.
v.3.3.0
Hi I have some questions about the picture below. What are the special cases? Thank you
Hello! When I deleted the ndsi @ lines from requirements.txt, I got an error like this: ERROR: uvc-0.14-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform. I tried to install pyuvc 0.14 from other links but it still gives the same error... Do you know what my problem is? I am on Windows 10.
Please be aware that it usually is not necessary to run the software from source. Nearly everything can be done using the plugin or network api of the bundled applications.
Most of our wheels are currently only available for python 3.6 on windows. You are likely running a newer version of python.
Ahhh yes thank you !
Hi @user-4bc389. The natural features calibration is mainly for situations where you have no access to a calibration marker: https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography In general, this method is less reliable as it depends on communication between the wearer and operator
thanks👍
Hi @user-4bad8e. You can calculate visual angle changes between consecutive gaze_point_3d values relatively easily: https://discord.com/channels/285728493612957698/285728635267186688/692710274066677820. The trickier part is using this to classify saccades. Just looking at the data within fixations.csv will likely not be sufficient as the dispersion-based fixation filter is not optimised for saccades – there is no guarantee that the changes between fixations accurately reflect saccadic dispersion.
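A small sketch of that visual-angle calculation under the caveat above, assuming the standard fixations.csv export with gaze_point_3d_x/y/z columns (coordinates relative to the scene camera origin); the file path is a placeholder:
import numpy as np
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
points = fixations[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()

# Angle in degrees between consecutive 3d gaze points, both seen from the scene camera origin
v1, v2 = points[:-1], points[1:]
cos_angle = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
angles_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))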
Sorry, I have an additional question. What is the unit of gaze_point_3d_x,y,z shown in fixations.csv? Pixels?
@user-4bad8e there are community projects that have implemented saccade analysis, e.g. by @user-2be752: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
Hi @nmt. Thank you for your polite responses. That helps me a lot. I understood the limitation of the data for evaluating saccades. I want to analyze the scattering of places that occupants have fixated in a room using the calculated visual angles. Thank you.
I am still trying to digest your last responses to me. Thanks... they are just a bit esoteric for now.
Hi all, I get a very similar error information as the issue https://github.com/pupil-labs/pupil/issues/1764 with version 3.3.0 when I open the pupil eye capture. I have tried the suggested solutions in that issue, but unfortunately they don't work for me. Does anyone have some clues on how to fix this? Thanks!
world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
powershell.exe -version 5 -Command "Start-Process 'D:\pupil\Pupil Capture v3.3.0\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\lanru\AppData\Local\Temp\tmpgt4z7836' -ArgumentList '--vid 1443 --pid 37424 --desc \"Pupil Cam1 ID0\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 669, in world
File "plugin.py", line 398, in __init__
File "plugin.py", line 421, in add
File "video_capture\uvc_backend.py", line 78, in __init__
File "video_capture\uvc_backend.py", line 248, in verify_drivers
File "subprocess.py", line 729, in __init__
File "subprocess.py", line 1017, in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified.
world - [INFO] launchables.world: Process shutting down.
Hi What could be the cause of the problem in the picture? thanks
You need gaze data for the fixation detector to work on. It looks like you have none. This can happen if you selected Post-hoc calibration in the gaze data menu but did not finish setting it up.
It's millimeters within the scene camera coordinate system.
Thank you very much!
Hi, so I am interested in counting fixations and dwell times per AOI. For fixation counts (defined as the number of times spent fixating in an AOI), I am thinking of counting each fixation id (9, 3, 6, etc.) as one and then summing the number of times it occurs within my pre-defined area. We have defined dwell time as the percentage of task time spent fixating in an AOI. So my goal here is to sum the durations of all fixations within an AOI and then calculate what percentage of task time it actually was. Does this make sense? I just want to run the logic by someone to make sure I am thinking about/using the exported values correctly. We are not using gaze data to get dwell time.
Summing the durations of each unique AOI fixation would have been my approach, too.
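A rough pandas sketch of that calculation, assuming the surface export (fixations_on_surface_<name>.csv with fixation_id, duration in ms, and on_surf columns); the surface file name and task duration below are placeholders:
import pandas as pd

df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Monitor.csv")  # placeholder surface name
df = df[df["on_surf"] == True]

# Collapse multi-row fixations to one row per fixation id before counting/summing
per_fixation = df.drop_duplicates(subset="fixation_id")
fixation_count = len(per_fixation)
dwell_s = per_fixation["duration"].sum() / 1000.0  # duration is in ms

task_duration_s = 1211.77  # placeholder: task end minus task start in seconds
dwell_percent = 100.0 * dwell_s / task_duration_s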
As a follow-up, I have been using the fixation scan path code and someone else's code to map fixations. If each fixation can occur over a range of x, y coordinates (which I am guessing is what contributes to dispersion), then what is the merit of plotting all of those points? To get a sense of the number of fixations across AOIs visually, would it be better to filter by id and then use either the average value or the one reported in the fixations.csv file (matching it by the relevant ids of course during data parsing)?
@user-908b50, yes this is a common approach in the literature
yes, that is what I've been doing. I have been noticing this in addition to the fixation marker not showing up on screen. Some are essentially gaps where the person is looking somewhere but we don't know where exactly. From an analysis pov, what do you recommend? Should I get rid of this data? I am thinking if there is some way to salvage this data.
You could always reduce the minimum-data confidence. That will include lesser-confidence data in the fixation detection. But it is likely that this will result in false-negative detections (low confidence tends to jump a lot). You could automatically annotate periods of low confidence and exclude these periods from the dwell-time and task-duration calculations. @nmt What do you think?
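A sketch of how such an automatic exclusion could look, assuming the standard gaze_positions.csv export (gaze_timestamp and confidence columns) and a 0.6 threshold, which is itself a judgment call:
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").sort_values("gaze_timestamp")
low = gaze["confidence"] < 0.6

# Group consecutive samples into runs that share the same low/high confidence state
run_id = (low != low.shift()).cumsum()
runs = gaze.groupby(run_id).agg(
    start=("gaze_timestamp", "first"),
    end=("gaze_timestamp", "last"),
    max_conf=("confidence", "max"),
)
low_runs = runs[runs["max_conf"] < 0.6]
excluded_s = (low_runs["end"] - low_runs["start"]).sum()
print(f"{excluded_s:.1f} s of low-confidence data to exclude from dwell/task-time calculations")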
Hi. What does the place pointed to by the red arrow in the picture mean? Thanks
If a surface is visible/detected within a scene video frame, the software tries to map the gaze for this specific frame onto the surface. This column indicates whether the gaze is on the surface/AOI or not.
@papr In other words, false means that the data does not fall into the AOI, and true means that it falls into the AOI, right?
correct
@papr I see thank you
@user-908b50 This is where things can get quite subjective. Sometimes it's clear that the wearer was looking within an AOI even during periods of lower confidence. And in these cases, reducing the minimum data confidence can work. But if pupil detection quality is too low, the fixation result might be invalid. Deciding what is and isn’t valid can become difficult. I would try reducing the minimum data confidence to see what happens. It might be obvious that the results can be trusted, or vice versa.
great! thanks @papr & @nmt for the suggestions! I will try it out and see where that takes me.
Hello, it's the first time I'm using pupil core and it's showing me this error. I don't know what to do, if there's anyone who can help me
Hi 👋 It looks like the driver installation was not successful. Did you get the request for administrative privileges shortly after starting the application?
No, I did not receive the request for administrative privileges shortly after starting the application
Did you see any other terminal-like windows pop up?
no i don't see any other terminal
Have you connected the headset to the computer? If this is the case, please check the Cameras and Imaging Devices categories in the system's Device Manager application. They should list 3 Pupil Cam cameras.
you mean this ???
Ah, you are using an older model. In this case, you need to install the drivers by hand. Please follow steps 1-7 from these instructions: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Thank you, I will install the drivers and try again.
Hello
Is the frame time in Player in milliseconds? It looks like it starts with milliseconds, then seconds, and then minutes.
It's minutes:seconds - frame index
Thank you
One camera works but the other one doesn't; I can't make it the composite parent.
You have a monocular headset. It only has a scene camera and one eye camera. Our current headsets are binocular, i.e. they have two eye cameras. This is why the software shows two eye windows, but only one connects to a camera. You can simply close the eye1 window.
Hello, I still got stuck in this error(following picture). Could someone has a clue on how to fix it? thanks!
I think it is probably due to the driver; my Pupil cameras under libusbK look weird.
Yeah, there is definitively something going wrong during driver installation. Could you uninstall these again and then try the steps 1-7 from these instructions to manually install the drivers? https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Thanks so much! it works well now👍
I know I have asked this question before. I'm sorry.
Problem: words displayed on a screen are blurry to view in Player. What do you think might have happened, or what do you think can remedy this?
Does the computer screen resolution or the resolution set in Capture matter?
Or must both resolutions align?
Hi, @papr - Do you know if there is a flowchart/something similar floating around, or one that I might be able to get access to, showing the process of how confidence is calculated for the 2d pupil detector?
The most accurate documentation of the 2d detection can be found in our paper from 2014 https://arxiv.org/pdf/1405.0006.pdf
It also includes information about confidence.
Also, the same question, but for the process of pupil fitting
Hi, sorry to bother you again. After I move the pupil cam 1 ID0(composite parent) to the libusbK, the program can start and the two windows for the eyes work normally. But I just found the calibration function doesn't work and gives the information " Calibration requires world capture video input." , like the following picture. Since I am using the HTC Vive add-on, should I also move the HTC Vive (composite parent)to libusbK? Or do you have some clues on the possible reasons leading to this problem? Thanks!
You should check out our hmd-eyes project. It is able to stream the Unity main camera to Capture as a scene camera replacement.
Hello
What should those gaze points before a fixation begins be called?
If there is a gaze point at frame 7 followed by a fixation spanning frames 8-12, what's the name for the gaze point at frame 7?
There are various types of eye movements. If a gaze point does not belong to a fixation, it is most likely a saccade, but might be part of another eye movement type, too.
Thanks... That's my explanation too. Thanks for clarification
They are the same timestamps but with different clock starting points. The UI only displays the timestamps up to a 3-digit precision. That should be sufficient for your use case.
Thanks
You probably need to adjust the focus of the world camera lens to suit the camera-to-screen distance in your experiment
Got this answer here before.... Thanks for your patience
Hi, I am currently trying to run pupil labs capture from source on linux. As far as I'm aware, I have installed all required dependencies, however, when trying to run main.py, i get an error that the OpenGL module wasn't found. I already tried just installing the module with pip, however that fails as pip claims to be unable to find the module - any idea what the issue may be/ how to fix it?
What linux and Python version are you using?
The module to be installed is pip install PyOpenGL.
Okay, thanks - I think I found the issue now: apparently the requirements.txt script did not run through properly and I didn't realise that. The script errors out when trying to install pupil-apriltags==1.0.4 and pye3d==0.0.7, both times because the module six is not found...
That is weird. Six should be available via pypi https://pypi.org/project/six/#description
hm, six is installed, OS is Ubuntu 20.04 based, python version is 3.8.5
Could you provide the full error message then?
sure
Please:
1. git clone --recursive https://github.com/pupil-labs/apriltags.git
2. Add "six" to the requires list in pyproject.toml
3. run pip install <path to repo>
That is the part of the script output where it tries to install apriltags
the error message for pye3d is the same one basically as far as i can tell
That did it 👍
Should i do the same for pye3d?
yes, please
That worked, thank you very much! 😄
Hi there, I ran into a problem with the pupil detection today (this hasn't happened before and I have collected about 20 participants already). The eye tracker was not able to detect the pupils at all and I'm not sure what the reason is for this. This participant had green eyes, could this be a problem? Is there a way to adjust the settings to still detect the pupils in a situation like this?
Thanks! I will check it. But just in case, I would like to make sure my drivers are now installed correctly, since the name pupil Cam1 ID0 --vendor\Pupil looks weird and there are only two Pupil cams under libusbK (plus hidden ones). 🤔 Could you please kindly check it out? Here you can see the picture:
The add-on only has two cameras. Even though the naming looks weird, if you can see the video stream in Capture, everything is working fine.
Why are the width and height of the surfaces created not really adjustable?
Please elaborate on what you are referring to.
And the issue is that you cannot click into the text field? In your previous screenshot, the name field was not filled. I think the software requires you to set a name. Therefore, switching to the width and height field is not possible.
Please describe the behaviour that you are encountering and how it differs from the expected behavior.
There is a name. When I changed the 1.0 in both boxes to 2.0, for instance, I saw no change. Meaning, putting in values there changes nothing.
The size is used to define the internal AOI coordinate system. It makes a difference if you look at the surface-mapped gaze data and the heatmap.
I will check what you mean by internal AOI
I have never seen a clear heatmap except on the surface created. Heatmap outputs are blurry
To adjust the surface definition, click the edit surface button (top right of the surface) and move the corners of your surface into the desired position.
I can adjust it that way. I just want all the surface sizes to be precisely the same.
The "blurryness" is a function of the width/height and the heatmap smoothness. Larger width/height -> more heatmap resolution.
Oooh, so those are a function of the size of the AOI. Thanks. I have to edit and do that manually, as I have been doing.
Also, how do you automatically annotate the data? I have been doing it manually.
Not sure what you mean by automatic, but this can be programmatically done with the network API. Example code here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
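Roughly, the linked helper script boils down to something like this hedged sketch; the label and timing are example values, and the Annotation plugin needs to be enabled in Capture:
import time
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

pupil_remote.send_string("PUB_PORT")           # ask for the port of the IPC publisher
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)                                # give the PUB socket a moment to connect

pupil_remote.send_string("t")                  # current Pupil time, so the annotation lands on the recording clock
pupil_time = float(pupil_remote.recv_string())

annotation = {"topic": "annotation", "label": "trial_start", "timestamp": pupil_time, "duration": 0.0}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))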
Hi, I get this error when I am trying to do the camera intrinsics estimation:
Traceback (most recent call last):
File "launchables\world.py", line 751, in world
File "camera_intrinsics_estimation.py", line 323, in recent_events
cv2.error: OpenCV(3.4.13) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-cg3xbgmk\opencv\modules\core\src\matmul.dispatch.cpp:439: error: (-215:Assertion failed) scn == m.cols || scn + 1 == m.cols in function 'cv::transform'
the world and eye processes shut down and the capture is not responsive
Hello. Is it possible to estimate the depth of the user's viewing location in real space from the world camera using gaze_point_3d_z in fixations.csv?
Hi @papr, since upgrading to version 3.x, gazepoint positions seem to be MUCH noisier than v2.6, has anyone else noticed this? I confirmed this by recording with v2.6 capture, and creating gaze mapping with both 2.6 and 3.1 (both their own post-hoc calibration and new gaze mapping). You can see clearly during the visualization of the gaze position that player 3.1 created with the same recording is jumping around all over the place, while 2.6 is quite stable. This will become a problem with anyone who records with 3.1 and can't go backwards to use the 2.6 algorithm (theoretically this should be possible with video alone, is that correct?)
Mmh, sounds like your 3d model seems to be less stable than before. Could you share the 3.x recording with [email removed] I could give more specific feedback this way.
Sure, thank you. To be specific, the recording was done in 2.6 (since 3.1 recordings aren't backwards compatible), and it was the 3.1 remapping that was the problem. Recording in 3.1 and mapping still has the problem. I will share both the original 2.6 mapping and 3.1 mapping folders. THANKS!
Thank you for sharing the recording. I will come back to you via email early next week.
Hi
Is it possible to visualize the overall scanpath of the task? A static image of the order of fixations?
Hi @user-7daa32. You can add a 'Vis Polyline' in Pupil Player that visualises the scan path of gaze points for the last 3.0 secs. Alternatively, if you used surface tracking in your experiment, you can visualise the entire scan path on surfaces with some coding. Check out this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
@papr, is Pupil Core monocular 200Hz? Thanks.
Hi @user-b772cc. Yes, Pupil Core is able to sample up to 200 Hz (monocular).
Hi all, sorry for asking again about a topic which has been discussed already, but I am trying to use the latest Pupil Capture (3.3.0), on Windows 10, to work with my pupil core device (high-speed 120hz worldcamera + binocular 200hz eye cameras). It seems there is no way to have the device recognized by the PC. I have also activated the Administrator account, but the device is not listed under the "libUSBK Usb Devices": it is rather listed under "USB (Universal Serial Bus) controller", but with the following errors (I try to translate them from Italian): - unknown USB device (failed port setting) - unknown USB device (failed request device descriptor)
When I have tried in the lab, at some point - after installing the software as administrator - it worked for few seconds (and the device was listed under libUSBK Usb Devices) but then it stopped. I read previous posts, and one of the suggestions (may 15th 2021) was to install an old version of PupilCapture: is this the only possible solution? In this case, which is the suggested version?
Thank you very much, best regards nicola
Hi @user-2996ad. In the first instance, please follow the instructions in this message to debug your driver installation: https://discord.com/channels/285728493612957698/285728493612957698/842376938981949440
Probably a simple question, but I just can't find a solution: I'm using the network API (and already had to use the suicidal snail method and restart the subscription, since the data comes in faster than I can work through it).
How do I map the gaze to the world camera? I have normalized coordinates, but those are for the world window, and I can't find a way to get the world window as a topic, so my Python script only has the world_frame, and the normalized position does not correspond to its coordinates.
Maybe in other words: is there any way to get the world window coordinates for the world frame from the network API over ZMQ?
big thank you if somebody has an idea
Hi @user-6e6da9. If you subscribe to the gaze datum, norm_pos is gaze in the world camera image frame in normalised coordinates. If you want the 3d point that the wearer looks at in the world camera coordinate system, look at gaze_point_3d (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format).
If you haven’t already, check out the Pupil Helpers repository for examples of receiving data in real-time: https://github.com/pupil-labs/pupil-helpers/tree/master/python
@user-6e6da9 to extend the response above: Should you be interested in the 2d pixel location within the scene frame, you can use this equation to convert from normalized to pixel locations:
x_pixel = x_norm * frame.width
y_pixel = (1.0 - y_norm) * frame.height
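As a small helper, something like this sketch, assuming you already know the scene frame resolution:
def norm_to_pixels(x_norm, y_norm, frame_width, frame_height):
    # Normalized coordinates have their origin bottom-left; image pixels top-left, hence the y flip
    return x_norm * frame_width, (1.0 - y_norm) * frame_height

# e.g. norm_to_pixels(0.736, 0.482, 1280, 720)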
It would all make sense if I could calibrate Pupil Labs such that the calibrated world window is the same size as the world_frame. As it stands, the calibration area is always a rectangle (or some shape) within the world_frame, and the norm_pos seems to refer only to this small part of the world_frame. That is what I tried, and the problem I have is that it corresponds to the calibrated window in the world frame, but I don't know where that window is. The norm_pos corresponds to the position within this window for me (using the norm_pos of base_data): if I look at the corner of the calibrated window, I already get (1, 0), but that is not the corner of the whole world_frame.
If I look at the 3D data, it is even more off: gaze_normal_3d is almost at the edge of the screen, and the norm_pos of the 3D gaze has a negative normalized position. None of those would correspond to the correct pixel in the world_frame.
Gaze: {'eye_center_3d': [-417.07332897871754, 75.0570080555142, -15.841330035112268], ' gaze_normal_3d': [0.9893230944326201, 0.050810551746338056, 0.1365946655382179], 'gaze_point_3d': [77.58821823759251, 100.46228392868323, 52.456002733996684], 'norm_pos': [0.7362048359425144, -0.08089865886879655], 'topic': 'gaze.3d.0.', ... 'base_data': [{... 'norm_pos': [0.7015782696975317, 0.034688731852176025]...]}
That's what I have done, but since the norm_pos corresponds to the calibrated area in the picture, and not the whole world frame, it yields a wrong position.
"since the norm_pos corresponds to the calibrated area in the picture" That is not correct. Gaze is mapped within the whole scene camera field of view. See https://docs.pupil-labs.com/core/terminology/#coordinate-system for details. The green bounding box does represent the "calibration area", true, but it refers to the area where the gaze is most accurate. It does not define a new coordinate system.
From the link above, you will see that the 3d coordinate system has its origin in the center of the camera, not in the bottom left.
"using the norm_pos of base_data" This sounds like you are using the pupil norm_pos and not the gaze norm_pos. That would be incorrect, since pupil data is relative to the eye cameras and not to the scene camera. https://docs.pupil-labs.com/core/terminology/#pupil-positions
Is there a way to only choose one eye for the results in pupil player? So only results from the left eye should be exported?
If you are just talking about the export, I suggest filtering the data post-hoc. If you want to recalculate everything, including gaze, I suggest renaming the right eye video file and running post-hoc pupil detection and calibration. This will generate monocular pupil and gaze data from the non-modified video file.
Hm, ok. I tried it with the norm_pos from gaze3d and got wrong positions, but that seems to have been because of calibration or a delay in receiving the data... the norm_pos of the base data seemed to be more accurate.
Is there a reason why a normalized position between 0 and 1 can deliver negative numbers? I know floating point can be tricky with fine precision, but this should not really happen if it detected a gaze... Is the correct way in this case to just take the absolute value, or to clamp with if (x < 0) x = 0?
Thanks a lot for the answer 🙂
"norm_pos from gaze3d" Could you share a short example code in 💻 software-dev of how you access and visualize the data exactly? Maybe I can spot the issue.
"Is there a reason why a normalized position between 0 and 1 can deliver negative numbers?" Yes, gaze can lie outside of the scene camera's field of view. In these cases, gaze coordinates can be negative or exceed 1. Most likely, these are from inaccurate pupil detections, though. I suggest filtering by confidence and discarding gaze points that are outside of the [0, 1] range.
Does the filtering of the data work in the fixations_on_surface csv? Or what do you mean exactly by filtering post-hoc?
If you only used the pupil_positions.csv, you could simply delete all entries for eye0. But in all other cases, you need to run the post-hoc processes again after renaming the eye video file. (Please also rename the according timestamps file)
So I'll just rename all eye0 files (the video, the timestamps and the lookup) and run post-hoc. Thanks @papr
Maybe in a few days. Essentially, I'm using the method described in recv_world_video_frames_with_visualization.py, but opening the frames with cv2.imshow takes too long, and I already had to implement the suicidal snail pattern to close and reopen the connection.
In regards to the negative coordinates... that was a misunderstanding I had. I assumed that if a gaze is outside of the world_frame, then it does not get recognized. Knowing that positions < 0 and > 1 are possible explains why I was so confused, since the documentation says it returns a normalized position between [0, 1]. With that knowledge it makes more sense; when you add a suboptimal calibration to the mix, values like 1.4 and -0.3 are rather confusing without it...
The recv_world_video_frames_with_visualization.py script should not need the "suicidal snail pattern". It is built such that it processes the received messages first, before drawing the latest images. You should follow this pattern with the gaze data as well.
I'm receiving the frames first and only drawing them when gaze data arrives, but the drawing/opening still takes long enough to cause problems (on Windows PyCharm and Ubuntu PyCharm). For now I'm more than happy with the help you gave me, and I will report back if I find what's causing the delay, in case somebody has the same problem 👍🏻
Thanks. After validation, the calibration area outlined with a green line goes away and another, smaller one is formed. Does it mean that only gazes within the validation box are accurate?
Exported data such as fixations are calculated as an average of both eyes, right?
"Does it mean that only gazes within the validation box are accurate?" No. It just means your validation area was smaller than your calibration area. Validation does not change the accuracy of your calibration.
Fixations are calculated based on gaze. Gaze can be both: monocular and binocular. The system tries to maximize the number of binocular mappings given these conditions: - both pupil datums from left and right eye have high confidence - both pupil datums are close in time In all other cases, gaze is mapped monocularly.
Hello,I have a pair of pupil core labs for sale, the product is new
if you are interested please contact me
[email removed]
Hi! 👋 I’ve had two participants with intense blue eyes, where the 3D model flickers constantly (between 0.8-1.0). Any idea what causes this? The pupil seems to be well lit and is centered within the cameras FOV, similar to all other participants I’ve had earlier.
As long as 2d pupil detection works well, and there are sufficiently different gaze directions, the eye model should be stable. The iris color should not play a role regarding model fitness. If you have trouble with model stability you can always freeze the 3d eye model to enforce a single model.
Could you provide an example recording with data@pupil-labs.com ? This would allow us to give more specific feedback.
Sent you an email now.
Hey, I have a question regarding the Core eye tracker. I ran an experiment and recorded for about 3 minutes. When I open the sheet that records the 3d diameter in mm during that recording, I get values ranging from 0.01 mm to 3.5 mm. The problem is that I know some of these numbers are implausible (0.01, for example), so I don't know which numbers I should include as real measurements. Also keep in mind that during the experiment the pupil wasn't exactly at the green circle of the eye tracker, so it wasn't always taking accurate measurements. So how can I know the right, real numbers inside the pupil_positions.csv file?
Hi, please see https://docs.pupil-labs.com/core/#_3-check-pupil-detection
I have read it before
But it does not answer my question
"Slowly move your eyes around until the eye model (green circle) adjusted to fit your eye ball" If everything is set up correctly, the green circle should no longer move around.
I know that; what I am saying is that I have already taken the recordings and I don't know at which points in time the green circle was positioned correctly (because the eye was moving).
Once you have a stable model, you will also see more realistic diameter values in the range of 2-9 mm. Of course, it is not always possible to estimate the diameter reliably, e.g. during blinks. It is recommended to discard entries with low confidence, e.g. lower than 0.6.
So i should discard the values less than 0.6 mm
Right?
The export has a column confidence. Entries with a confidence lower than 0.6 should be discarded. If you are interested in further processing options, I recommend having a look at:
Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336–1342. https://doi.org/10.3758/s13428-018-1075-y
If one is interested in saccade times, do we need to worry about low confidence if the confidence during focused attention (fixations) is higher?
Also, it is good practice to freeze your model once it is fit well. If you have already finished all your recordings, you can still try running the post-hoc pupil detection in Player. Start the process until the model fits the eye, pause the processing, freeze the models and hit restart. It will apply the frozen model to the whole recording.
Not necessarily, just the current pupil detection. As I mentioned here, you can run post-hoc pupil detection to improve the detection. If you want, you can share an example recording with data@pupil-labs.com and I can let you know via email what can be improved specifically based on the recorded eye videos.
Awesome man, i will try that
Thanks @papr , you are a very helpful guy
Are the 3d pupil diameter range values shown in Pupil Player accurate?
I am not sure what you are referring to. Are you referring to the values displayed in the timeline?
Can i take it onto account?
That range represents the recorded/detected data with some simple outlier removal. It does not do any sanity check regarding the validity. The resulting diameter values can only be accurate if the 3d model is accurate. Given that the displayed range in your photo is way too small ("normal" pupil physiology has a range of 2-9 mm), it is likely that there is no valid model and diameter data in this recording (given the current detection)
I am uploading a photo of it
So is this recording invalid?
Is there a way to get accurate estimations from this recording?
So what should i do after i run the post-hoc pupil detection?
The improvements need to be done during the post-hoc detection. This includes adjusting 2d pupil detection parameters if 2d detection is bad, etc. It is difficult to say what needs to be done specifically without having access to the eye videos.
I clicked on "redirect"
Okay
So what should i send exactly to that email?
Ideally, you would share a raw Pupil Capture recording (i.e. the folder that you drop ontop of Player to open a recording).
Great
I will send it now
What should i write in the email subject?
No need for a specific subject. I will recognize your email 🙂
Great man😃
I sent you the email
It is from [email removed]
Did i send you a wrong file?
You only shared the export folder. I would need the folder that includes the info.player.json
file
Should i send the whole folder?
Yes, the whole folder please
Or just that file?
Okay
But the file is 600 MB
It is so big, is that okay with you?
Yes, that size is to be expected and is no problem.
Okay
I will send it, but I will need to do the same thing with about 10 different recordings.
So it would be better if you could tell me what to do.
Yes, of course. If all recordings have the same issue, it should be possible to find a solution from a single recording.
Because adjusting those 10 recordings will take all your day
And then duplicate it?
The "solution" will probably be a series of instructions that you can apply in Player to all of your recordings. But I won't be able to tell you until I have seen an example recording. As an example, it could also be that pupil detection is impossible due to bad lighting of the video.
Okay
I am uploading the file
Hey @papr
The file just finished uploading and the message was sent
If you turn on the eye video overlay, you will see the issue. The subject is wearing glasses, causing strong reflections in the eye camera. The pupil is barely visible at times. I doubt that it will be possible to get good pupil data out of this. 😕
Also, usually if you are interested in measuring the psychosensory pupil response, you need to control the lighting of the scenery very carefully. This is not the case in your recording. So even if the pupil detection worked, it would be difficult to interpret the data in any mental effort context. 😕
Okaaaay
I have another video for someone without glasses
Can i send it to you?
Sure. 🙂
Awesome
So is it hopeless for that video?
I will check how much I can get out of it tomorrow.
Great , man
I sent you the other file
And i reallllly appreciate your help @papr 😃
Hi, I have a few questions regarding pupil size measurements: 1) How much of a difference in pupil size (in mm) between eyes can be considered normal? 2) I noticed quite large differences in pupil size measurements between offline & online analysis, which of the 2 methods tends to be more reliable and is there a way to check which method performs more accurately? 3) I have some recordings with pupil size measures frequently being below 2mm (which I thought was the natural lower range of pupil size), even when detection confidence is high. Are these measurements valid?
Hi, is there anywhere I can download the Pupil Mobile app (apk)? I cannot install it through Google Play.
I do not think so. 😕
Mail sent now ✉️
Thanks!
How can I subscribe with Pupil Capture from the remote host?
In newer versions of Capture, the Pupil Mobile device is listed in the Video Source menu -> Activate Device selector. It is only listed if there is a device correctly recognized in the network.
@papr I don't find any "Capture Selection" option in the Pupil Capture interface.
@papr Thanks!
There's something wrong with my macOS version of Pupil Capture. The headset is connected via USB-C; it says a device was found, but I get no video signal from either the world camera or the eye cameras.
Mmh. Could you try selecting the device explicitly from the Active Device selector? And afterwards, share the ~/pupil_capture_settings/capture.log file?
If I select "Local USB" option, I got the following error message
Is it possible that you started Capture multiple times by accident? Or have a different program accessing the cameras? If not, please try restarting and trying again.
@papr I've restarted it and nothing changed. Here's the log file
2021-06-16 18:33:45,207 - world - [WARNING] launchables.world: Process started.
2021-06-16 18:33:45,208 - world - [DEBUG] ndsi.network: Dropping <pyre.pyre_event ENTER from bbff2452a3cc44629a11d4c351cc7baa>
2021-06-16 18:33:45,208 - world - [DEBUG] ndsi.network: Dropping <pyre.pyre_event ENTER from 24cbe3897bcf47b380d2e55f47face19>
2021-06-16 18:33:45,387 - world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
2021-06-16 18:33:45,394 - world - [DEBUG] uvc: Found device that mached uid:'20:4'
2021-06-16 18:33:51,113 - world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
2021-06-16 18:33:51,121 - world - [DEBUG] uvc: Found device that mached uid:'20:4'
2021-06-16 18:33:56,588 - world - [DEBUG] uvc: Found device that mached uid:'20:3'
2021-06-16 18:33:56,589 - world - [ERROR] video_capture.uvc_backend: The selected camera is already in use or blocked.
It is possible that there is an issue with the USB-C-to-USB-C cable. There are different types of USB-C cables and some of them are limited regarding what data they transfer. 😕 Do you have a different cable that you can try?
I've tried every cable I can find and still cannot get the video signal. But now I've installed the Mobile App, and somehow Pupil Capture can read video from the remote host. 🙂
OK, I'll try other cables. many thanks 🙂
Is there any plugin available for the Pupil Core eye tracker? As far as I know, Tobii has developed an SDK for Unity and Unreal Engine. If I want to use the Pupil Core eye tracker in Unreal Engine, how can I do so?
We do not officially have an Unreal Engine plugin - we target Unity 3d and have reference implementations in Python for calibration and interface at: https://github.com/pupil-labs/hmd-eyes/. Pupil Labs VR add-ons are also supported by WorldViz.
There were some community efforts a few years back (e.g. https://github.com/SysOverdrive/UPupilLabsVR ) but these are likely deprecated based on changes we have made to Pupil Core - they may still serve as a useful starting point though. There is also this student thesis: https://www.cg.tuwien.ac.at/research/publications/2019/huerbe-2019-BA/
Hi @papr, I try to get images from Pupil Capture and process them, but the processing speed of my code is less than 30 Hz. The sample code has the following description: "# The subscription socket receives data in the background and queues it for processing. Once the queue is full, it will stop receiving data until the queue is being processed. In other words, the code for processing the queue needs to be faster than the incoming data." I would like to ask if there is a way to get only the latest image frame.
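(Not an official answer, but a minimal sketch of one common workaround: drain the SUB socket and keep only the newest frame message. It assumes Capture runs locally with Pupil Remote on its default port 50020 and the Frame Publisher plugin enabled; the helper name latest_frame is hypothetical.)
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port (assumption: Capture on the same machine, default port 50020)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

def latest_frame(sub_socket):
    # Block until at least one frame message arrives, then drain the queue
    # so that only the most recent message is kept and processed.
    msg = sub_socket.recv_multipart()
    while True:
        try:
            msg = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break
    meta = msgpack.unpackb(msg[1])   # frame metadata (topic, format, width, height, ...)
    raw_image = msg[2]               # raw image bytes; format depends on Frame Publisher settings
    return meta, raw_image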
I think my Pupil Core ML-016 has a little problem: the main body and the eye camera are loosely connected, which sometimes causes an unstable video stream on my computer. After a week of use I can't receive any information from it anymore, so I opened the connection between the eye camera and the main body and found two wires that seem to be disconnected; the white junction is loose, so it can easily be pulled out. I wonder what I can do to fix it, or could you please guide me on how to get such equipment myself (I searched on shopping websites but couldn't find ML-016).
Please contact info@pupil-labs.com in this regard
Hi, I am new here and a rookie at that. I just got a Pupil Core from a senior colleague, and I have been able to do the setup. However, I am confused about how to approach my specific problem, which is to use Pupil Core to collect data on object locations. Say I am holding an object: I want to be able to locate the object I am looking at, hence generating a series of images + labels. So essentially, I want to use Pupil Core to generate data for a model I am about to train. I want to be able to use this data for multimodal fusion. Any pointers would really be appreciated. Thank you.
Hi, welcome! Check out this community fork https://github.com/jesseweisberg/pupil I think it does exactly what you are trying to do
Thank you I will check it out now
Hello folks, I have a question and I would be very thankful if you could answer it for me: when recording with the glasses it's nice to use the "fisheye" lens, because you have more information in the picture. I need to de-distort / equalize the image for object detection. But in theory, if you de-distort the image, the gaze data is not correct anymore, right? It seems like a very common problem, so my question is whether there is a built-in tool to adapt the gaze data to the new de-distorted image?
You can use the iMotions exporter to get undistorted video and gaze 🙂
oh this sounds great thank you very much, I will check this out ! 🙂 🙂
Hi all, I have another question: does gaze calibration affect the accuracy of the pupil size measurements?
No, it does not
The 3d model stability does, though
Thanks! The 3d model stability can be optimized by adjusting the 2D detector settings and giving the model time to stabilize while the participant moves their eyes around, right?
correct
Is it possible to switch to a monocular calibration post-hoc if one eye generally yields lower confidence?
yes, by renaming the video and timestamp file of that eye, running posthoc detection and calibration afterward
Okay, like renaming the eye0 files to something else so that they are not "picked up" by Pupil Player?
Correct
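For anyone searching later, a minimal sketch of that renaming step (assuming a standard Capture recording folder; the exact eye0 file names, e.g. eye0.mp4 vs eye0.mjpeg, depend on your recording):
from pathlib import Path

rec = Path("path/to/recording/000")   # hypothetical recording folder
# Append .bak to every eye0 file so Player's post-hoc detection only "sees" eye1.
# Rename them back to undo.
for f in rec.glob("eye0*"):
    f.rename(f.with_name(f.name + ".bak"))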
Hi, in my recent recordings the blue pupil estimation (?) heavily diverges from the actual measured pupil (red). I wonder what factors may be influencing that?
Setting info:
- auto exposure mode (since the illumination changes constantly today)
- in algorithm mode: pupil intensity range: 3, pupil min: 5, pupil max: 50
- 3D model frozen (after doing a rolling-eye test, so that the green eye model adjusts correctly)
- model sensitivity: 0.997
- resolution: 192x192, frame rate: 120
blue and red circles not matching consistently is an indicator that the 3d model was not frozen in the correct place. Feel free to share an example recording with data@pupil-labs.com such that we can give more specific feedback
oh and using the orange arm extenders
will do!
Hi @papr, thank you for your response yesterday. I just found out my Pupil Core has no world scene camera; is it something I can purchase separately? I also didn't see any slot to plug one in, assuming I can buy it separately. Please see the attached picture.
Pupil Core comes with a scene camera by default. It can be configured otherwise before purchasing, see here https://pupil-labs.com/products/core/configure/ I am not sure if the scene camera can be installed post-hoc. Please contact info@pupil-labs.com in this regard.
Ok thank you
Hi there! I’ve installed for the first time Pupil core on Win10 (I normally use Ubuntu). Everything went on quite fine, however when I start the intrinsic estimation Capture freezes and I get this message. Any help? Thank you all.
Is C:\Users\appveyor your local user directory? Probably not, correct?
No, indeed it is not.
Hi, I did not find these two calibration configuration methods in Capture - have they been removed?
Binocular is used by default. If only one eye camera is available, or the confidence is bad for one eye, Capture falls back to monocular.
Which camera model and resolution did you choose?
Eye: Pupil Cam2 ID0 @ Local USB, Framerate 200, Res. 192, 192; World: Pupil Cam1 ID2 @ Local USB, Framerate 60, Res. 1280, 720.
Apologies, I meant distortion model, not camera model. It is a setting in the plugin's menu.
Fisheye
Hey there, I have to work on recordings that were done with Pupil Capture version 0.7.6-2 and I would like to work on them with the new Pupil Player version, if possible. When I tried to import them, Player said "this recording needs a data format update, open once in pupil player v1.1.7". Now my question is: can I download Pupil Player version v1.1.7 somewhere, or is there another way of updating the data format?
Please use the radial model for the 720p resolution
Ok, thank you !
Hello
I know there are manufacturer's values and recommendations for accuracy. Are there best-practice values when using the 2D or 3D pipelines? Or can we just take an average visual angle from our pilot studies and use it to calculate the stimulus size?
You should use the accuracies reached/measured in your pilot studies. Every experiment is different and it is important to make sure it works as expected.
Thanks
Hi guys, I am wondering if it is possible to calculate saccades when the recording is done using LSL. In the output .xdf file I have several variables. In fact, I have 22 variables like gaze_point_3d_y, confidence, etc. And I suppose these variables are the same as the ones you obtain when you record using Pupil Capture (but I'm not sure about that). Is there a simple way to visualize saccades in the data? Maybe by computing the difference between the center of the screen (or the center of the visual field) and the position of the eye (gaze position, I suppose)? Thanks for the help
"And I suppose these variables are the same as the ones you obtain when you record using Pupil Capture" - that is correct.
In general, saccades can happen everywhere in the field of view. Do you have an experiment setup that expects saccades from the center of the screen? I, personally, would start by visualizing the gaze data itself before looking at higher level eye-movement classifications.
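If it helps, a minimal sketch for eyeballing the gaze signal in an .xdf file (the stream name "pupil_capture" and the channel labels are assumptions - check stream["info"] in your own recording):
import pyxdf
import matplotlib.pyplot as plt

streams, _ = pyxdf.load_xdf("recording.xdf")
# Pick the Pupil LSL relay stream; the name may differ in your setup.
gaze = next(s for s in streams if s["info"]["name"][0] == "pupil_capture")

# Channel labels are stored in the stream description.
channels = gaze["info"]["desc"][0]["channels"][0]["channel"]
labels = [c["label"][0] for c in channels]

t = gaze["time_stamps"]
x = gaze["time_series"][:, labels.index("norm_pos_x")]
y = gaze["time_series"][:, labels.index("norm_pos_y")]

plt.plot(t, x, label="norm_pos_x")
plt.plot(t, y, label="norm_pos_y")
plt.xlabel("LSL time [s]")
plt.legend()
plt.show()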
Hello and help! I was hoping to run an eye tracking study tomorrow with the VR eye tracker (silly, I know) - at the moment I have about half an hour, and it's a 2-hour round trip to pick up an HTC Vive Pro. Question - I just ran the demo and installed the recording software - the demo raycasts are all over the place and not where I am looking - I can't see any errors, but pupil confidence is < .8 - I am wearing contacts, does that affect it? Any quick tips?
One day of preparation is indeed ambitious. 😅 What type of data are you looking for?
Pupil detection is crucial for a good gaze prediction. I suggest making sure that there are no occlusions or anything similar that interferes with detection. Also, when 2d detection works well for you (blue circle fits the pupil well), look around / roll your eyes to fit the 3d model (green outline; should fit the eyeball; red and blue circles should overlap).
VR setups can be tricky to validate if you are on your own. I suggest making a recording in Capture and playing it back using Pupil Player with the eye video overlay plugin enabled. If you want you can share such a recording with data@pupil-labs.com for more specific feedback.
I did some OBS screen capture and the pupil seems well detected to me.
Hi again 🙂 is there an easy way to update the meta file format of already existing recordings?
Hi! Which type of data are you planning to update?
What I'd like to do is to analyse blinks with the new Pupil Player using the plugin, but I have recordings from Pupil Capture version 0.7.6-2, unfortunately.
oh, that is pretty old 😅 I think you need to open the recording with v1.16 (?) once, before you can open it in the most recent version of Player
when trying to open the recordings, pupil player said I need to update the data format of the recording
Yeah, read more about it here https://github.com/pupil-labs/pupil/releases/v1.16
yeah, true 😄 okay, but how do I do that? Can I get this version somewhere?
You can download that version from the link above.
perfect!!
The upgrade happens automatically when opening the recording in that version. Make sure to make a backup of the recording if you still want to open the recording in v0.7. The upgrade changes the format such that v0.7 can no longer open the recording.
when downloading it, will it overwrite the version I already have?
The upgrade happens inplace.
Okay, thanks a lot!!
@papr thanks! It's raycast data for a brain injury population - looking at cognitive function - I already have things set up with the Pro Eye - but given equipment shortages, I thought I'd try to get the Pupil Labs setup working - turns out it will probably take me more than a couple of hours - I am going to sit down properly and read the instructions. 🙂 For now, I think I'll make do with the round trip.
Mahlzeit everyone. I'm still working on installing/implementing pupil-labs on my Win10 machine. I'm trying to follow the documentation step by step, and after the updated requirements.txt I got a bit further, but now I'm stuck at this:
Looks like the libuvc or turbojpeg dll cannot be found. You can spend time debugging this. But did you know that you can run the pre-compiled version of Capture and do nearly any custom development work using user-plugins?
Do you know what i'm maybe missing?
I didn't... where can i find it?
Hello, we want to cite the PyE3D application in our paper. Is Swirski and Dodgson, 2013 appropriate to cite?
Hello! I am having some trouble with the Network API. It does not seem to do anything when I run the pupil_remote_control.py script from my command prompt, nor does it seem to do anything when I run the same code from a Jupyter Notebook. However, it shows that the code is running. Is there something additional that I must do to actually connect to my Pupil Core device?
Please make sure that you are running Pupil Capture and its network api is available at the port configured in script (by default 50020)
How do I check what port I am using?
It is displayed in the Network API menu in Capture
Sorry that I am very new to networking
Ah okay, thank you so much!
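For the record, a minimal connectivity check (assuming Capture runs on the same machine with the default port 50020):
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")   # use the port shown in Capture's Network API menu

pupil_remote.send_string("t")                   # ask for the current Pupil time
print("Pupil time:", pupil_remote.recv_string())

pupil_remote.send_string("SUB_PORT")            # ask for the port used for subscriptions
print("SUB port:", pupil_remote.recv_string())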
Hi, I am trying to do offline blink detection in Pupil Player, but it will not detect any blinks. In the GUI it does not show any activity either, just the horizontal threshold lines. The following was written to log:
player - [WARNING] blink_detection: No blinks were detected in this recording. Nothing to export.
player - [DEBUG] blink_detection: Recalculating took
2.7310sec for 670779 pp
245614.05567290605 pp/sec
size: 0
Thanks in advance 😄
Edit: And just to be clear, the recordings are long (~45 min) and include a lot of blinks and confidence drops. When I import shorter recordings, blinks are detected.
Let me correct myself. It looks like something is off with the timestamps. Basically, the filter width is zero, which is why there is no filter response. The filter width is calculated based on:
filter_size = 2 * round(len(all_pp) * self.history_length / total_time / 2.0)
all_pp are the pupil data (670779 entries in your case)
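A small worked example of how that ends up at zero (history_length = 0.2 s is assumed here as the plugin's filter length; adjust to your setting):
history_length = 0.2     # seconds (assumed filter length setting)
all_pp = 670779          # number of pupil datums, as in your recording

total_time = 2700.0      # a normal ~45 min recording
print(2 * round(all_pp * history_length / total_time / 2.0))   # -> 50 samples, filter works

total_time = 1e9         # timestamps from before a clock reset inflate the first/last difference
print(2 * round(all_pp * history_length / total_time / 2.0))   # -> 0 samples, no filter response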
Can anyone here help on how to build an audio capture plugin into the Capture software?
It would be nice if we had such a plugin
I read a paper that used Pupil Core. They made use of audio capture in their Pupil software
If, after calculating the target size, we get say 2.8 deg x 2.8 deg, can we still account for offset in Player by adding margins when creating surfaces?
Hi, I tried to open the old recordings in Player version 1.16 like you said; unfortunately it tells me it can't open the file, that the directory is missing. Any ideas?
You need to drop the folder containing the info.csv file, not a file
Yes, I dropped the whole folder (e.g. '000')
ok. Could you please let us know what the exact wording of the error message is?
it contains the info file
player - [ERROR] launchables.player: InvalidRecordingException: There is no info file in the target directory.
mmh. Not sure what is going wrong that it thinks that the info file is missing. Please share the recording with data@pupil-labs.com if you want us to investigate the issue.
okay, I'll send you one of the folders thanks!
It looks like you did not give data@pupil-labs.com access permissions for the shared link
This is what I mean by no activity 🤔
Looks like you do not have any pupil data. What type of recording do you have? You might want to run the post-hoc pupil detection.
It's a normal recording directly from Pupil Capture. Gaze and fixations are working.
Could you post a screenshot of the fps and confidence timelines?
total time is calculated based on the first and last timestamp. They seem to be very far apart in your case, such that the result is zero.
Please export the pupil_positions.csv and share it with data@pupil-labs.com if you want us to investigate.
Feel free to try it on the data (participant_503) that I sent you for the surface cache bug. I will send you an exported pupil_positions.csv for another recording as well.
All my recordings are indeed very long, around 2700 s, so that might be the problem.
That should be no problem if you have a sufficient amount of pupil positions. I will have a look
Mail sent now 📬 Thank you.
participant_503 works for me
I am redownloading the recording now. Might take a while 😅
But participant_503/000 is fairly short for me.
@user-597293 Could you follow this step please? It might give us an indication of what is wrong.
Here's a screenshot (with a clear blink present). Pupil confidence and diameter do not show up as continuous lines, but as faint "scatter plots". Not sure why the eye FPS is so fuzzy; I do think it remains fairly stable around 120 FPS throughout the recording.
The pupil_positions.csv looks good. I will have to investigate this in more detail next week
participant_503 should be just short of 45 min 😅 It also has an exported pupil_positions ready in export folder 001 (only 500 MB, hehe).
Found the issue. You are using the R pupil-remote command in your experiment, correct?
The issue in participant_503 is that it contains pupil data from before the time reset. Player sorts the data by time and places these at the end because they have a large timestamp. These large timestamps are then used to calculate the filter size and timeline subsampling.
You can use this notebook and adjust it to remove these samples https://gist.github.com/papr/96f35bebcfea4e9b9552590868ca7f54
- filter_pldata(info_json.parent, "pupil", lambda d_ts_topic: d_ts_topic[1] >= start)
+ duration = info_data["duration_s"]
+ duration += 60.0 # add a generous buffer
+ filter_pldata(info_json.parent, "pupil", lambda d_ts_topic: d_ts_topic[1] <= duration)
For future experiments, I highly recommend using local clock sync as it is demonstrated here https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
Ah, I see! Looking through my code it also seems like I forgot to add a sleep call between setting the time and starting the recording. Thanks a lot 😄
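For anyone hitting the same thing, a minimal sketch of that sequence via Pupil Remote (assuming the default local port 50020; the 1 s sleep is just a generous, arbitrary buffer):
import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def send(cmd):
    pupil_remote.send_string(cmd)
    return pupil_remote.recv_string()

send("T 0.0")              # reset Pupil time to 0
time.sleep(1.0)            # give all processes time to pick up the new clock before recording
send("R my_recording")     # start a recording (the session name is optional)
# ... run the experiment ...
send("r")                  # stop the recording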
@papr @wrp Hey again, I got the IRs soldered on and am now at the stage of getting the cams set up in the Pupil software. However, when I open Pupil Capture (as Administrator), the camera image does not show up. I've tried uninstalling devices in Device Manager as per the guide, but still no image. It says "Camera not found" in red when opening the software, and when looking in the camera menu, it says the camera is blocked. Both laptop and desktop are doing the same thing. What should I do?
Hi! I have two questions: 1. How can I use the Pupil Labs eye tracker in Unreal Engine? 2. According to the VRPN wiki (https://github.com/vrpn/vrpn/wiki/Available-hardware-devices) Pupil Labs eye trackers are unsupported by VRPN. Is there anyone, or any source, for adding a Pupil Labs device driver?
Hi,
Quick question on fixations.csv vs. fixations_on_surface_*.csv. The latter contains several samples per fixation_id, with differences between screen coordinates for each sample. Any reason not to use the average of all samples with the same fixation_id? Fixations are exported with a large dispersion value as I am only interested in "the bigger picture" in terms of gaze position, so minor differences in position won't matter anyway. Thanks in advance 👍
This happens if the surface moves relative to the scene camera during the detected fixation period
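In case it's useful, a minimal sketch of the averaging approach (assuming the export has fixation_id, norm_pos_x, norm_pos_y and on_surf columns; the file name below is just an example):
import pandas as pd

df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")
df = df[df["on_surf"] == True]   # keep only samples mapped onto the surface

# One averaged surface position per fixation.
per_fixation = (
    df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]
      .mean()
      .reset_index()
)
print(per_fixation.head())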
Hello, I have a question: When calibration happens in the Pupil Labs Core software, is that done individually for each eye or is the "cyclopean eye" the one that's calibrated?
The software attempts to do both. It tries to fit two monocular and one binocular model.
Okay, I see. In terms of which of those three models is used to put the gaze location dot on the screen, I'm guessing that depends on the confidence values of each eye in a given frame of input?
If the matcher was able to create a binocular pupil pair, it will use the binocular model. And yes, that mainly depends on the pupil confidence.
Meaning, when one eye's confidence drops, it will switch to the monocular gaze location of the other eye?
Okay, I think I understand. Thank you for the quick answer!
Hello, I had to suspend my project and could not follow along for a while. So please forgive me if I ask questions that have already been answered; I will be glad to follow your links in the chat history, in that case.
@papr @user-597293 I am sorry if this message looks long, but I believe I have to provide some framework to ease whoever tries to help.
I am working with recorded videos with a camera that gets both eyes frontally. My aim is to study the dynamics of eye movement while they are following a target. My currently intended flow is as follows [with issues/questions in square brackets] :
1. I get a .h264 video file (Raspberry Pi + HQ camera + raspivid, setting only resolution=1920x500 and fps=60)
[
- the resulting file, checked with ffplay, is 25 fps, not 60. Any clue?
- what video recording format/codec/settings are better suited for Pupil to then read the video?
- for precision/synchronisation purposes, I'd like to get the actual timestamps in my video; any advice on how to make raspivid save them in the video file?
]
2. I crop the video to obtain two videos, one for each eye
[
- I am using ffmpeg from the command line, but those complex filtergraphs are quite cumbersome to use. Any advice on using a programmatic approach (python/jupyter)?
]
Hi.
Generally, I cannot say anything about raspivid and its output. It is possible that it actually does not run the cam at 60 but at 25 hz. I suggest looking at the pts (presentation timestamps) in the .h264 file for more exact time measurements. You can use https://github.com/PyAV-Org/PyAV to read this information. These timestamps are relative to the video file start and not synchronized between videos. In order to get synchronized ts, you will need to somehow measure the reception time of the frame data. Not sure if raspivid is able to do that.
I would also use pyav to read the frame data, cropping the images with the python code of your choice (e.g. https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.crop), and reencoding the resulting frames using pyav.
I would not recommend this approach. If you are able to generate _timestamps.npy files with synchronized timestamps, you only need a dummy info.player.json file for Player to recognize the recording. Then you can use the full Pupil Player post-hoc detection capabilities.
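A minimal sketch of the PyAV-based reading/cropping described above (file names, crop boxes and the output codec are assumptions; note that a raw .h264 elementary stream may not carry reliable pts, so remuxing into e.g. MP4 first may be necessary):
import av

in_container = av.open("eyes.h264")
in_stream = in_container.streams.video[0]

out_container = av.open("eye0.mp4", mode="w")
out_stream = out_container.add_stream("mpeg4", rate=60)   # assumed output codec/rate
out_stream.width, out_stream.height = 960, 500

timestamps = []
for frame in in_container.decode(in_stream):
    if frame.pts is not None:
        # Presentation timestamp in seconds, relative to the start of this video file.
        timestamps.append(float(frame.pts * in_stream.time_base))
    img = frame.to_image()                    # full 1920x500 frame as a PIL image (needs Pillow)
    left_eye = img.crop((0, 0, 960, 500))     # crop box is an assumption, adjust to your layout
    for packet in out_stream.encode(av.VideoFrame.from_image(left_eye)):
        out_container.mux(packet)

for packet in out_stream.encode(None):        # flush the encoder
    out_container.mux(packet)
out_container.close()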
3. Using pupil_capture or pupil_service, I get pupil x, y, accuracy, eye blinks (then fixations, saccades, etc.) by querying the Pupil Remote service via the Network API
[
- which of the three programs pupil_capture / pupil_service / pupil_player is better suited for doing the modelling/processing? Is there any even better option (e.g. headless)?
- as of my last attempts there was no way to synchronise the timing of the two videos. Has something changed in this respect?
- if I get actual timestamps in the video file, will each eye's output data be referred to those? Or should I rather provide a _timestamps.npy file explicitly?
]
Please feel free to direct me to other links, or tell me that my workflow has to be changed from scratch. I'd like to stay on a well-trodden path and not far from what others are doing. Thank you very much.
@papr hello, I'm sorry if these questions were already asked, but I couldn't find answers to them in the history, so I'm going to ask. 1. Based on what I understood from the documentation, the coordinates of the pupil positions are relative to the camera; is there a way to calculate the positions (horizontal and vertical) of the eye itself? 2. Is it possible to calculate the gaze based on different "conditions"? (I mean, based on the same pupil videos and calibration, can I calculate both the monocular gaze for each eye and the binocular gaze?) 3. Is there a way I can take the current pupil positions that I have (in which the horizontal and vertical components depend on their angle to the cameras) and calculate the eye's horizontal and vertical components from them? (with or without gaze detection)
Thanks a lot!
Hi @user-765368.
1) Yes, in the case of pupil positions, data are relative to the eye camera coordinate system. sphere_center_* is the position of the eyeball sphere (eye camera 3d space units are scaled to mm).
2) gaze_normal*_x,y,z describes the visual axis for each eye (i.e. the axis that goes through the centre of the eyeball and the object that's looked at). Conceptually, this is describing the gaze direction for each eye individually (can be obtained from a binocular calibration). Important to note, this is relative to the world camera coordinate system, which is different from the pupil positions.
3) I'm not sure that I understand this question. Do you want the eyeball viewing direction with respect to the eye camera? If so, look at circle_3d_normal_*, which indicates the direction that the pupil points at in 3d space (relative to the eye camera coordinate system). Note that pupil data from each eye are in separate coordinate systems, i.e. relative to each eye camera.
Coordinate systems: https://docs.pupil-labs.com/core/terminology/#coordinate-system
Data made available: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
Hello. Thank you for your kind support. I am trying to use the offline head pose tracker to calculate subjects' head movements during experiments. Could you tell me under which conditions the world camera's pose data is recorded in head_pose_tracker_poses.csv?
Thanks for the quick reply. I need to measure and compare the vertical and horizontal movements of the eyes (their movement and the difference in their positions). Is there a way to calculate the movements and their ratios between the eyes? I was thinking of using the gaze tracking and the pupil data (probably phi and theta) of the eyes to calculate the ratios; is that possible? (About my third question: I basically need to calculate the eye movements such that they are in the same coordinate system.)
if you want both eye models to be in the same coordinate system, have a look at the gaze data. It includes the eye model data (center + gaze direction) in scene camera coordinates.
Hi @user-4bad8e. Can you elaborate a little on what steps you have taken for head pose tracking, e.g. have you used tracking markers, were they detected okay in the head pose tracker plugin? https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Hi, @nmt. Thank you for your response. I have used tracking markers, and I think these markers were detected in the head pose tracker plugin. However, it seems that only one marker (the red one) was shown in the head pose tracker visualizer.
if you are only interested in relative movements, you can use pupil data. relative changes in theta/phi as well as circle_3d_normals are comparable between both eye cameras
So if I understand you correctly, the size of a movement that results in an increment of 0.1 in the phi value of one eye is equivalent to the size of a movement that causes an increment of the same size (0.1) in the phi value of the other eye? (I'm saying size on purpose, as the increment/decrement is opposite for the two eyes.) Thanks!
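A minimal sketch of comparing relative movements per eye from an export (assuming a pupil_positions.csv with eye_id, pupil_timestamp, phi and theta columns from the 3d detector):
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
df = df.dropna(subset=["phi", "theta"])       # keep 3d-detector rows only

for eye in (0, 1):
    eye_df = df[df["eye_id"] == eye].sort_values("pupil_timestamp")
    d_phi = eye_df["phi"].diff()
    d_theta = eye_df["theta"].diff()
    print(f"eye{eye}: mean |d_phi| = {d_phi.abs().mean():.4f} rad, "
          f"mean |d_theta| = {d_theta.abs().mean():.4f} rad")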
This is a screenshot of the head pose tracker visualizer.
You do not seem to have sufficient scene camera movement in order to triangulate the other markers in reference to the origin marker.
Thank you @papr ! I will try it again.
The good news is that you can reuse the 3d model from other recordings or even Pupil Capture.
You can use Pupil Capture to build a model, too! This can be helpful to get feedback in realtime
I was able to make the 3d model, thank you very much. I have a question about the logged camera pose data in head_pose_tracker_poses.csv. Looking at the timestamp data in the CSV, the data is not recorded at regular intervals. Is there a threshold for camera movement before pose data is recorded, or some other algorithm?
Got around to assembling my Pupil Labs 3D printed eye tracking headset today. Not impressed with the design. Found it brittle/fragile. The cable guides broke off the eye camera mount and, more importantly, the world camera mount snapped when trying to attach it to the headband rod. Does anyone know of an alternative headband design?