Is there a dedicated YouTube series or documentation on how to actually plug eye-tracking information into projects, in code?
I'm interested in using it in MATLAB, but anything would be helpful
Maybe this? https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Hi, is it possible to use the pye3d detector without using Pupil Capture? Our project needs to develop its own UI to visualize both the Pupil eye videos and other sensors. Thank you
In a stand-alone context, we allow the usage of pye3d only for (1) academic use or (2) commercial use in combination with the Pupil Core headset.
Our project is for academic research. Are there any examples/documentation of the stand-alone usage for pye3d-detector?
For now I'm using the Pupil Labs marker to calibrate the device... I suppose the calibration takes into account the 2d image points and the 3d points of the marker... I would like to know if it's possible to replace the marker with my own calibration technique (which also gives me the image points and the 3d points)?
So this is the base choreography class https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/base_plugin.py#L99
It is responsible for aggregating pupil and reference data during calibration/validation. After the calibration/validation is done, it publishes the gathered data via its on_choreography_successfull() function.
~~Pupil data is gathered automatically already.~~ Pupil data is collected best during recent_events()
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/hmd_plugin.py#L68-L72
The most important part is storing the reference data in .ref_list. See this example from the HMD calibration, where Unity sends VR reference data to Capture: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/hmd_plugin.py#L84
Ah, understood. Yes, it is possible since version 2.0. Let me look up the necessary class to subclass in a plugin.
That would be great, thank you!
Hi Pupil Labs team, I was reading your paper and I came across the system latency figure. What I understood from that figure is that gaze mapping only happens on every 4th frame, and the other three frames are ignored? I am confused because it shows a latency of 0.119 s between gaze estimations, and the world pipeline takes about 0.124 s. That would mean the gaze mapping results are only available 0.124 s apart (a gaze mapping result for every 4th frame), and the world frames in between are ignored and only available for offline mapping. Did I understand it correctly?
Each eye video frame generates a pupil datum. The pupil datum is sent to the world process. The world process queues all received pupil data, passes it to the gaze mapper, and publishes the resulting gaze [1]. The gaze mapper tries to match binocular pupil pairs with high confidence. [2]
[1] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_data_relay.py#L32-L41 [2] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/matching.py#L58-L102
Hey, could you be more specific about which paper you are referring to? It is either not ours or out of date.
From the paper: Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction
Yeah, that paper is partially out of date. The 2d detection algorithm description is still accurate.
Thanks @papr for the explanation. But what does the 0.119 s refer to here? I thought pupil data would be mapped to the nearest world frame as soon as it is available, by comparing the timestamps. So why the wait of 0.119 s? With an exposure time of 0.033 s, it then waited (0.119 - 0.033) s.
To be honest, I am not sure where that number comes from. I think it is important to understand the different components contributing to the pipeline delay:
1. Frame time (time it takes to expose the image + compress it)
2. Frame transmission (time it takes to transfer the image data via USB to PC)
3. [eye] Pupil detections (2d + 3d)
4. [eye] publishing results onto IPC
5. [world] queuing all available pupil data
6. [world] run pupil data matching
7. [world] gaze estimation
8. [world] publishing gaze results onto IPC
Delays 3-8 do not play a role for recorded data, as data is processed based on timestamps recorded during 1 (linux + mac) or after 2 (windows).
The world process always visualizes the most recent gaze and the most recent world video frame. It does not buffer frames.
This makes a lot of sense. I got confused because of those numbers. Thank you @papr. As always, a top-notch explanation.
Is there any example of how to read JPEG frames from the Frame Publisher? I changed the frame format in the Python example but it doesn't work.
I think you can use pillow to read jpeg data https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.fromarray
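For reference, a minimal sketch of decoding such a frame with Pillow. This assumes the JPEG bytes arrive as the extra message frame next to the msgpack header, as in the pupil-helpers frame examples; the function and variable names are placeholders:

import io
from PIL import Image

def decode_jpeg_frame(header: dict, jpeg_bytes: bytes) -> Image.Image:
    # header is the unpacked msgpack payload; jpeg_bytes is the raw image frame
    if header.get("format") != "jpeg":
        raise ValueError(f"expected jpeg frames, got {header.get('format')!r}")
    # Pillow decodes the compressed bytes directly from a file-like object
    return Image.open(io.BytesIO(jpeg_bytes))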
Other functionality of our concrete choreography implementations: Circle marker detection + visualization
Exactly what I was looking for!!!! Thanks a ton
I have Pupil Capture v 3.0.7. How can I activate the Marker Tracking plugin? The link here doesn't work: https://github.com/pupil-labs/pupil/blob/marker_tracking/pupil_src/shared_modules/square_marker_detect.py
The plugin is named "Surface Tracker". The linked file implements our legacy marker detection. I strongly advise against using our legacy markers; use AprilTags instead, as stated in our documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Ah ok, I thought it's something new. I have been using Surface Tracker a lot already.
Look under the 'Fixation Filter Thresholds' heading of best practices
Thanks. The maximum dispersion and minimum duration can be set post hoc in Pupil Player to decide preferences. I am currently going through these links.
It looks like we have to set up our timer to guide us on which filter settings to use. If the participant gazes for 400 ms, the setting shouldn't be less than that.
Hi, I ran into a problem while importing Detector2D from pupil_detectors. Any suggestions? Thank you
What is the output of otool -L <path to detector_2d.cpython-38-darwin.so>?
It looks like the linked Opencv lib cannot be found in your anaconda environment.
here is the result
This is my output. As you can see, opencv is linked to absolute paths. Not sure why this is different for you, but I expect this to be due to Anaconda.
detector_2d.cpython-38-darwin.so:
/usr/local/opt/opencv/lib/libopencv_core.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/local/opt/opencv/lib/libopencv_highgui.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/local/opt/opencv/lib/libopencv_videoio.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/local/opt/opencv/lib/libopencv_imgcodecs.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/local/opt/opencv/lib/libopencv_imgproc.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/local/opt/opencv/lib/libopencv_video.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 400.9.4)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
Thank you, I'll check my Anaconda env; maybe this is because I installed opencv from pip.
The detectors need opencv headers and libraries to link against. These do not come with the pip installed opencv (at least not the headers).
In other words, the detectors would not have built if you did not have the requirements installed. But they did. So the necessary files are somewhere on your system but anaconda is not finding them.
Thank you. I did find them under Anaconda, env-name/lib/python3.8/site-packages/../../. As I'm using a Mac, I applied install_name_tool -change [email removed] to the absolute path manually. One can also use install_name_tool -add_rpath to replace [email removed]
hey guys, how many data samples with a confidence above 0.8 should I gather before averaging and using them?
I want to measure pupil dilation and eye vergence, but want to make sure the data is accurate. So what steps should I take in analyzing what's output by the system?
Do I understand it correctly that you were able to fix the problem by replacing [email removed] with absolute paths and it is now working as expected?
Yes, it works now
@papr why is it that subscribing to "gaze" slows down the application extremely? (6-12 seconds of delay)
Which application slows down? Pupil Capture or your own?
My own
If you are rendering every gaze point one-by-one, it is expected to become slow, as rendering (60 Hz for many monitors) is much slower than the incoming signal (up to 200 Hz).
How do I limit it?
Is there a way to get the pupil core time using a function in Matlab?
Yes, using the Pupil Remote t command. Please be aware that there is a transmission delay, which can be estimated by measuring how much time the command takes. See this hmd-eyes implementation: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L66-L71
Once you know the offset between your own and Pupil clock time, you can apply it to new timestamps from your own clock to get it in Pupil time.
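For reference, a minimal Python sketch of that offset estimation, assuming Pupil Remote on the default localhost:50020 (a Matlab version would follow the same logic):

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

def pupil_time_offset():
    # ask Pupil Remote for its clock and assume a symmetric round-trip delay
    t_before = time.monotonic()
    remote.send_string("t")
    pupil_time = float(remote.recv_string())
    t_after = time.monotonic()
    local_time = (t_before + t_after) / 2
    return pupil_time - local_time

offset = pupil_time_offset()
# a local timestamp expressed in Pupil time:
event_in_pupil_time = time.monotonic() + offset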
Using two loops: a render loop and an inner processing loop.
while program_should_run:
    while socket.has_data:
        data = process_data_from(socket)
    render(data)
Consider this PI network example: https://docs.pupil-labs.com/developer/invisible/#network-api
while network.running: is the outer render loop, with the cv2 calls at the end. for sensor in SENSORS.values(): ... for data in sensor.fetch_data(): is the inner loop processing data, storing it in the world_img and gaze variables.
It doesn't work... it throws an error at the dependency level: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
Please try to reinstall numpy
Now it says Devices with outdated ndsi version found. Please update these device
Apologies for not being clear. By "consider this example" I meant looking at it, not running it. The example is meant for streaming Pupil Invisible scene video and gaze data. It uses a different API than Pupil Core. Nonetheless, it showcases how to use the two loops to display the most recent data.
The documentation is extremely confusing and is scattered all over the place in bits and pieces.
I agree that the documentation could be structured in a more user-friendly way. This specific issue is more of a software architecture problem, though, and is not necessarily related to the network API itself. If you want to read more about how the network API works under the hood, check out this part of the ZMQ guide about the PUB-SUB pattern: https://zguide.zeromq.org/docs/chapter5/
Nine gaze samples were being dispatched per frame. Persisting a single gaze sample per frame and ignoring the rest solves the issue. Thank you
Could you please provide a skeleton loop for what the reading-from-Matlab portion of pupil-helpers would look like? What I've done is create a while loop that breaks when the escape key is pressed, and that checks each piece of data against its confidence threshold. When 10 pieces of information with a confidence over 0.8 are gathered, the array with the data is averaged and saved. The problem is, this results in the program falling behind at times. What should I do?
I don't have sufficient experience with Matlab to do that, unfortunately. I can only provide conceptual help. Maybe @user-98789c can help you out here. They are also using the network API with Matlab.
any conceptual help would also be greatly appreciated. For example, I am taking an array of size 10, and averaging out the pieces of data that fill it as data is being gathered (that has a confidence threshold above 0.8). after 10 pieces of data are gathered, I average out the values in the array, and use the information. Is that conceptually fine? note, I'm gathering information using the 2d pupil capture software.
That would mean that you average data over time. You could think about this approach as if it were a simple lowpass filter with a varying window size. One would only apply such a methodology to remove high-frequency components from the gaze signal. And even then, one would probably use a lowpass filter that does not introduce lag, e.g. the One Euro filter.
I might be able to give concrete processing recommendations if you share more details about your goal or issues that you are trying to solve.
Furthermore, in my loop I wait until another 10 pieces of viable data are gathered before I average out the array, effectively meaning that each piece of data is used only once. Should I change this? More importantly, am I missing the picture entirely?
Is this how the pupil detector looks in v3.7?
There is no version 3.7. I guess you are referring to 3.0, correct? What you see is the new 3d detector "algorithm" view, correct. If you are wondering about the different circle colors and their meaning you should read https://docs.pupil-labs.com/developer/core/pye3d/
I have read it before. I will go through it again. Thanks. Another question, please: I still haven't solved the issue of the annotation notification showing up multiple times. I just can't avoid a prolonged press on the keyboard.
Can it be done in such a way that a prolonged press on a keyboard key doesn't increase the number of notifications?
I'm sorry for the late reply. Here's a quick summary of the problem and what we are trying to accomplish. We have an issue where the data we are collecting through the TCP connection falls further and further behind real time. The way this code works is that 10 samples worth of data with 0.8 confidence are gathered and written to a file. A function reads the saved file in real time and averages the results internally. The while loop is looping as fast as Matlab physically allows, and yet the data output is falling behind. There is some backlog of data stored in the buffer that keeps growing, and the issue is that I need to know that the information I'm receiving is current, not from 10 seconds ago.
Writing files is very slow. I guess that that part is slowing down your loop substantially.
If you are falling behind regardless, then Matlab is maybe the wrong tool for your use case. :-/
So I have a few questions, but would also appreciate thoughts and recommendations based on the above description. #1) Can we query from the top rather from the bottom? #2) Can we purge the buffer to keep it from filling up with old data. #3) How can we query faster in order to get all of the data. Can we grab all the data at once or does it have to be sample by sample?
1) No, the sub socket implements a first-in, first-out queue. 2) Yes, by recv()-ing data from it and discarding it immediately. 3) Only possible sample by sample.
I think the issue is that the buffer fills up and we fall behind regardless of saving to disk or packing our array.
Is there a way to extract data from pupil capture without using zmq?
In the end, you are free to write your own plugin that uses what ever method you like to publish the data, similar to the hololens relay.
You can use LSL to receive data from Capture using this plugin https://github.com/labstreaminglayer/App-PupilLabs/tree/v2.1/pupil_capture
Alternatively, you can use the build-in hololens relay to receive the data via a udp socket
I have not used the latter for a while though. Might be that it is not working as expected.
Any other option as this is under LGPL?
Is licensing holding you back from using zmq, too?
Yeah
Apologies, but this confuses me. zmq is ~~BSD~~ MPL licensed. You do not get much more freedom than that, AFAIK. Am I overlooking something?
https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2)
Is it MPL or BSD? When I looked at pyzmq I think it says BSD?
pyzmq is BSD https://github.com/zeromq/pyzmq/blob/master/COPYING.BSD czmq is MPL https://github.com/zeromq/czmq/blob/master/LICENSE
I thought pyzmq were bindings for czmq, but they are for libzmq.
libzmq is licensed under LGPL as it looks (with a static linking exception): http://wiki.zeromq.org/area:licensing
Looks like they are trying to transition to MPL as well https://github.com/zeromq/libzmq/issues/2376
Does using pyzmq come under LGPL or is it solely BSD? We don't want to use LGPL
I guess this leaves you with the custom plugin choice.
Looks like they are transitioning as well https://github.com/zeromq/pyzmq/issues/1039
Can using czmq be an alternative?
Technically, that is a possibility, too.
My apologies, but have you seen any instances where data is extracted from Pupil Core with other message brokers?
Only those that I have mentioned already.
Thanks
Thank you for the reply! I have a quick followup: based on filter_messages.m, there is no way to clear the buffer or discard the information. What command should I use for that?
You need to call recv_message() repeatedly to clear the buffered messages: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L68
Oh I see, so the recv_message function clears it when being called. Alright, so every iteration of my while loop does [topic, note] = recv_message(sub_socket, 2048); which I'm guessing should clear the buffer. If that's not the problem, then the issue is probably in the way the while loop also writes to a file every 10 iterations
It removes a single message from the queue! (in case that was not clear)
ooh. is there a way to clear it entirely, and take the most recent one?
Not entirely sure how to do this with the matlab bindings, but you can check if messages are available with this command: https://github.com/pupil-labs/pupil/blob/bb7d15ca78f3e4f25e2291a02b5670a312260f04/pupil_src/shared_modules/zmq_tools.py#L128
You should loop while this returns true, and process the most recent result
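Not Matlab, but conceptually the loop would look like this Python sketch (it assumes an already-connected and subscribed SUB socket; msgpack decodes the payload as in the pupil-helpers examples):

import msgpack
import zmq

def latest_message(sub_socket):
    # drain everything that is queued and keep only the newest message
    newest = None
    while True:
        try:
            parts = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            # queue is empty now
            return newest
        topic, payload = parts[0].decode(), msgpack.unpackb(parts[1])
        newest = (topic, payload)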
and as a way of replacing the write to file process (to connect the reading of data to a separate MATLAB thread), should I use ports?
I do not have enough experience with Matlab to know what the respective best practices are. I suggest searching for the keywords "matlab multiprocessing communication"
or what's the most efficient way of accomplishing communication via MATLAB, both from pupil-labs to the reading gaze file, and from the read gaze file to other files/processes in my project that need the information
got it. thank you sir!
I really appreciate the time you've taken in answering my questions
Hey there core community! Im having a hardware issue and I'm hoping someone on here can point me in the right direction: I'm trying to get pupil core to work on my Linux laptop. The machine detects the cameras in the usb port, but pupil software can't detect the cameras. I've put the headset on my PC and it works fine. I've tried rebooting, unplugging, different usb ports ( I have 2 on the laptop ), and reinstalling the software.
Hey, your user needs to be added to the plugdev group. Afterward, restart the computer to apply the change.
thanks! I think that's definitely the source of the problem... I'm new to linux, but there seems to be an issue on my machine. I've added user to plugdev, it tells me user "already exists" and then when I look at the groups the user is in, plugdev isn't one of them. There also seems to be a permissions issue where I can't look into the /etc/group directory (permission denied) to make sure plugdev is in there. Anyway, this looks like it's more a software/linux issue than a pupil labs one. Thanks for pointing me in the right direction!
The command to add your current user to the group is sudo usermod -a -G plugdev $USER
(you will need the admin rights to perform that command)
That worked! I already have a successful recording logged. Thanks so much!
Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs: "Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following: -UVC compatible (Chapters below refer to this document) -Support Video Interface Class Code 0x0E CC_VIDEO (see A.1) -Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2) -Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6) -Support UVC_FRAME_FORMAT_COMPRESSED frame format"
None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.
Hi, first time poster here: We have an HTC Vive style Eye-tracking system, that we are testing with our own Android app running on a phone (Pixel 2XL, Android version 10). I'm trying to detect when the cameras attach to the USB bus and am enumerating the PID:VIDs. Unfortunately the cameras come up with the wrong IDs (0x507:0x581), whereas on Linux and on a different smartphone (Galaxy S21, Android version 11) the IDs come up as (0x0c45:0x64ab). On all systems the Microchip hub comes up correct (0x04d8:0x00dd). Any guesses as to what could cause this discrepancy?
Hi, I have a ball on my screen which moves from left to right. How can I detect that my gaze is focused on the ball? Thank you.
You should have a look at surface tracking for this: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Good morning! I have a problem with time synchronization... I'm using the Core glasses to track eye movements in specific tasks drawn in PsychoPy. However, for a specific space/eye location, they don't have the same timestamp... There's always a (different) delay associated... Does anyone know what the problem is here? Thank you!
Hey! What do you mean by "same timestamp"? What are you comparing it to?
I'm using Python code to extract both into two Excel files and then plot in Matlab, but the stimulus (PsychoPy) and the data collected from the glasses don't match...
I mean, when a dot (from PsychoPy) is in a certain place on the screen, the timestamp associated with it does not correspond to the timestamp of the glasses when the eye is looking at that position...
Which clock do you use in Psychopy to measure the timestamp? And how large is the offset between the psychopy timestamp and the pupil timestamp?
How can I check which clock is being used in PsychoPy? The delay between them has been between 8 and 16 seconds, more or less. This number is always different...
Look at the experiment code that was responsible for generating your dot data. It is probably https://www.psychopy.org/api/clock.html
Another thing that I noticed (not directly connected with this problem) is that the clock plugin in Pupil Player (vis_datetime) is giving me one hour less. I travelled between countries with 1 h of difference... is this the reason? I mean, the clock on my computer is right...
The plugin shows datetime in UTC if I remember correctly. That should be independent of your time sync issue.
Did you implement explicit time sync with Capture in your experiment?
I don't think so... I'm using a python code to generate the timestamp from the core data, then it goes for an excel file and then I plot in Matlab...
Regarding "I'm using a python code to generate the timestamp from the core data": ah, ok. Let's move this to the software-dev channel then. Can you share the mentioned code there?
Yes!
Hi, thank you for the response. Surface tracking will tell me whether my gaze is on the surface or not. But how can I detect that my gaze is at a specific object inside the surface? Thanks
I assumed that you have knowledge of the object's current position, such that you could compare it to the gaze coordinates. Pupil Capture does not provide built-in object detection.
Hi, does anyone know of a plugin or a simple python script for pupil data visualisation, e.g. pupil size over time?
I have trials throughout my data and I want to plot the pupil diameter for each trial, all in one figure (with different colors, maybe). For that, should I iterate through pupil timestamps? That makes it a dotted plot rather than a continuous plot. How else should I iterate through trials?
Can you think of a reason, why my recording is saved like this, missing some values?
The rows with the "missing" values are 2d detections (see the method column). This data does not have 3d information. Please be aware that this file includes 2d + 3d data, as well as data from the left and right eyes. Therefore, you will have to group by eye_id and method first, before you can even start visualizing data relative to your trial starts.
In order to do the latter, you need to slice the data into the block sections and subtract the first timestamp of each block from the remaining timestamps. This will give you relative timestamps since block start.
To be able to slice the data into trial blocks, I am about to merge pupil_positions.csv and the "label" column in annotations.csv , which is easy. But my concern is that the timestamps in the two files are not the same; meaning that I don't have pupil data at the timestamps that I have annotations. Is there a way to save annotations in the same csv file as the pupil_positions, or a way to save the exact same timestamps for both? or should I think of a whole other way to sync the two files?
Is there a way to only record with the 3d method?
No. It depends on the 2d method. Usually, there are utilities for removing unwanted rows. You might also want to remove data below a specific confidence threshold.
removing rows from a pandas dataframe (as we use it in the tutorial) is easy. See cell 7 in the "Visualize Pupillometry Data" section of the tutorial.
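For reference, a minimal sketch of that filtering step on the pupil_positions.csv export (the confidence threshold and the exact method string depend on your data and Pupil version):

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")

# keep only 3d detections of eye 0 with a reasonable confidence
eye0_3d = pupil[
    (pupil["eye_id"] == 0)
    & (pupil["method"].str.contains("3d"))
    & (pupil["confidence"] >= 0.8)
]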
and do you have a suggestion about iterating through trials? https://discord.com/channels/285728493612957698/285728493612957698/809383951772549150
Yes, my second paragraph of this message was meant as a response to your question: https://discord.com/channels/285728493612957698/285728493612957698/809419958261907529
Thanksss π
Should my response not satisfy your question, then it is possible that I misunderstood your question. Feel free to tell me, if that was the case, or should it happen anytime in the future.
Sure, I'm looking into implementing your comments, I'll ask again for further guidance, thanks a lot.
It is in the nature of events that they do not happen at the very same moment as the eye camera would record a video frame. Therefore, it is very unlikely that they would have the same timestamps. Instead, I would ~~filter between timestamps~~ slice data between two event timestamps.
Again, based on the tutorial, you can do:
trial_start = ...
trial_end = ...
mask_included = eye0_df['pupil_timestamp'].between(trial_start, trial_end)
trial_data_eye0 = eye0_df.loc[mask_included]
I'm trying to see if I understood it correctly: I have to merge my two files, annotations and pupil_positions. From the annotations, I only need the timestamp and label columns. My annotations' labels are my trial numbers, so I am going to slice pupil_positions where the annotation labels change. For this to happen correctly, I have to merge the pupil_timestamps (from pupil_positions) and timestamps (from annotations) into one column (and of course in chronological order). Is there a way to do this? To combine two columns from two data frames into one column, in ascending chronological order?
hi, why has the eye tracking hardware suddenly stopped working?
Please contact info@pupil-labs.com in this regard
Hello, I'm interested in the IR safety of the Pupil Core, as I have to get Ethics Committee approval. Is there any documentation where the IR safety is specified?
Please contact info@pupil-labs.com in this regard.
What program are you using to process the csv files?
Python and all the related packages (csv, pandas, etc.)
@user-98789c I would not merge the data frames. Since they have different columns, that is difficult to do. Instead, you can do this:
import pandas as pd
pupil = pd.read_csv("pupil_positions.csv")
# creates new column and fills with None
pupil["trial"] = None
# (1) calculate start and end timestamps based on
# annotations.csv
trial_1_timestamp_start = ...
trial_1_timestamp_end = ...
trial_1_label = ...
# (2) boolean mask that is true for rows that are
# between start and end timestamp
trial_boolean_mask = pupil["pupil_timestamp"].between(
trial_1_timestamp_start,
trial_1_timestamp_end,
)
# (3) set `trial` values to `trial_1_label` for rows
# belonging to trial 1
pupil["trial"].loc[trial_boolean_mask] = trial_1_label
# (4) repeat steps 1-3 for all trials
# (5) remove data that does not belong to any trial
mask_trial_not_available = pupil["trial"].isna()
mask_trial_available = ~mask_trial_not_available
pupil_clean = pupil.loc[mask_trial_available]
Thanks, this is a good way to do it. The problem is, I have 100 trials per run, 3 runs per participant and at least 20 participants! So, to calculate the trial start and end timestamps by hand would take a lot of time.. that's why I wanted to merge the two data frames, so that I have the timestamps in one place and slice through them based on labels..
Hey everyone,
I am trying to integrate a custom pupil detector into the Pupil Player. I followed the instructions given in the documentation and found the example file artificial_2d_pupil_detector.py on github (https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c#file-artificial_2d_pupil_detector-py). Now to get started I wanted to see what this artificial detector does and put the file under ~/pupil_player_settings/plugins
However the plugin does not show up in the Plugin Manager. I tested on Pupil Player v3.1 and v2.6 without success. In the log files it says that the python plugin file is imported and scanned but the plugin does not show up in the Plugin Manager. What am I missing?
~~Let's move that discussion to software-dev~~ Actually, the plugin is not supposed to appear in the plugin manager, as this is an eye process plugin. Please check your eye window. It should have a new menu entry for the artificial detector.
Here is the log file from Pupil Player v2.6 after launch
Of course, I would recommend calculating the start and stop timestamps automatically. But that depends on your annotation data structure and it is therefore up to you to implement the start/stop timestamp calculation :).
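For example, if the annotation label equals the trial id (an assumption; adapt this to your own annotation structure), the start/stop timestamps can be derived automatically and plugged into the snippet above:

import pandas as pd

annotations = pd.read_csv("annotations.csv")
pupil = pd.read_csv("pupil_positions.csv")
pupil["trial"] = None

# first/last annotation timestamp per label define the trial range
trial_ranges = annotations.groupby("label")["timestamp"].agg(["min", "max"])

for label, row in trial_ranges.iterrows():
    mask = pupil["pupil_timestamp"].between(row["min"], row["max"])
    pupil.loc[mask, "trial"] = label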
So far I wrote this, and I'm trying to make sure it's working correctly. Can you take a look at it, please?
import os
import pandas as pd
from IPython.display import display

recording_location = 'C:/Users/CVBE/recordings/2021_02_12'

exported_pupil_csv = os.path.join(recording_location, '003', 'exports', '000', 'pupil_positions.csv')
pupil_pd_frame = pd.read_csv(exported_pupil_csv)

exported_annotations_csv = os.path.join(recording_location, '003', 'exports', '000', 'annotations.csv')
annotations_pd_frame = pd.read_csv(exported_annotations_csv)

detector_3d_data = pupil_pd_frame[pupil_pd_frame.method == 'pye3d 0.0.4 real-time']
detector_3d_data["trial"] = None

all_trials = max(annotations_pd_frame['label'])

plt.figure(figsize=(16, 5))

for i in 1,all_trials:
    trial_label = i
    trial_data = annotations_pd_frame[annotations_pd_frame.label == i]
    trial_start_timestamp = min(trial_data['timestamp'])
    trial_end_timestamp = max(trial_data['timestamp'])
    trial_boolean_mask = detector_3d_data['pupil_timestamp'].between(trial_start_timestamp,trial_end_timestamp)
    detector_3d_data["trial"].loc[trial_boolean_mask] = trial_label
    mask_trial_not_available = detector_3d_data["trial"].isna()
    mask_trial_available = ~mask_trial_not_available
    pupil_data_in_trial = detector_3d_data.loc[mask_trial_available]
    trial_data_eye0 = pupil_data_in_trial[pupil_data_in_trial.eye_id == 0]
    trial_data_eye1 = pupil_data_in_trial[pupil_data_in_trial.eye_id == 1]
    plt.plot(trial_data_eye0['pupil_timestamp'], trial_data_eye0['diameter_3d'])
    plt.plot(trial_data_eye1['pupil_timestamp'], trial_data_eye1['diameter_3d'])
@papr Ah sorry, you are correct. This plugin was meant to work with Pupil Capture I guess. Thanks. What I am trying to achieve is to run a custom pupil detection after data collection in the Pupil Player. Is this possible?
That is possible. For that, you will need to put the plugin in ~/pupil_player_settings/plugins and run the post-hoc pupil detection plugin.
Okay thanks a lot! I was just using the wrong plugin template then
Hello, I am currently using Pupil Core for my research. During the last experiment, one of the cameras suddenly stopped working and showed "Init failed, Capture is started in ghost mode". However, when I hold the wires in a certain position, it starts to work again. I think the issue is related to a disconnection of one of the wires shown in the attached image, so I would appreciate it if you could help me fix it.
Please contact info@pupil-labs.com in this regard
Thank you for your response. I already emailed them and I am waiting for a response. However, I was wondering what the service and repair procedure is for hardware issues. As I am under pressure to conduct the experiments as soon as possible, I just wanted to know how long it takes if any repair is needed.
Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs: "Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following: -UVC compatible (Chapters below refer to this document) -Support Video Interface Class Code 0x0E CC_VIDEO (see A.1) -Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2) -Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6) -Support UVC_FRAME_FORMAT_COMPRESSED frame format"
None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.
Good question! @user-a16c82 I am facing the exact same issue. Any help would be great, I'm located in Canada, so I would prefer a hardware option that is accessible from here.
Hi, what is the best way to get the camera pose and orientation relative to the eyeball sphere?
Pupil data, including the eyeball sphere, is in eye camera coordinates. Therefore, this also gives you the camera pose relative to the eyeball
Conceptually, this looks correct.
Syntactically, this line looks incorrect:
for i in 1,all_trials:
You probably meant for i in range(1, all_trials):
I have changed my code to this:
for index in all_indices:
    trial_label = annotations_pd_frame['label'].loc[annotations_pd_frame['index'] == index]
    trial_data = annotations_pd_frame[annotations_pd_frame == str(trial_label)]
    trial_start_timestamp = min(trial_data['timestamp'])
    trial_end_timestamp = max(trial_data['timestamp'])
    trial_boolean_mask_eye0 = eye0_high_conf_df['pupil_timestamp'].between(trial_start_timestamp,trial_end_timestamp)
    trial_boolean_mask_eye1 = eye1_high_conf_df['pupil_timestamp'].between(trial_start_timestamp,trial_end_timestamp)
    eye0_high_conf_df["trial"].loc[trial_boolean_mask_eye0] = trial_label
    eye1_high_conf_df["trial"].loc[trial_boolean_mask_eye1] = trial_label
    mask_trial_not_available_eye0 = eye0_high_conf_df["trial"].isna()
    mask_trial_not_available_eye1 = eye1_high_conf_df["trial"].isna()
    mask_trial_available_eye0 = ~mask_trial_not_available_eye0
    mask_trial_available_eye1 = ~mask_trial_not_available_eye1
    pupil_data_in_trial_eye0 = eye0_high_conf_df.loc[mask_trial_available_eye0]
    pupil_data_in_trial_eye1 = eye1_high_conf_df.loc[mask_trial_available_eye1]
In one run I have 4 conditions. Each condition has a number of trials (not the same number for all). Each trial repeats for 10 times. To slice my data, I can use indices and timestamps. My annotations contain the name of the trial as their labels, as shown in the attached screenshots (10 of each sequence, and the name could be a number or a word).
Now, for a reason that I can not figure out, nothing goes into the trial_data on the 3rd line.
Any idea what I'm doing wrong?
How can I translate the timestamps that are given in my export file to real world time or timestamps in my video length?
Time can be translated to Unix time using code like this https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
The export also includes a world_timestamps.csv file that is a mapping from frame index <-> frame pts <-> pupil timestamp. frame pts are the video file's internal timestamps. Multiplied by the video stream's time base, they give you the video frame's timestamp relative to the video's start.
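For reference, a sketch of the Unix-time conversion used in the linked tutorial: the recording's info.player.json stores the start time both in Pupil time (start_time_synced_s) and in system/Unix time (start_time_system_s), and their difference is a constant offset for the whole recording.

import json

with open("info.player.json") as f:
    info = json.load(f)

# constant offset between the Pupil clock and the system (Unix) clock
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_unix(pupil_timestamp):
    return pupil_timestamp + offset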
Looking at my export file "Fixations on surface..." there is no frame pts in it? Only worldtimestamp, world index, fixation id, start_timestamp, duration...?
The file should be called world_timestamps.csv and is exported together with the world.mp4 by the World Video Exporter plugin.
I want to know why this app cannot work successfully, as shown here.
Could you please translate the error message into English for me? Does it say which file is missing exactly?
In order to make sure Pupil Core actually tracks the pupil diameter: is there some kind of plugin or any platform to visualise the changes in pupil diameter as a simple stimulus is being presented, like a simple flickering box, whose flickering frequency, duration, etc. change so that we can see the influence of these changes on the pupil diameter?
No, there is no such plugin built-in. It might be possible to build a stimulus like that in psychopy.
I have 20 trials here and I'm expecting all of them to be plotted in the same figure, but somehow only the last one is plotted. Python is supposed to hold the plots automatically, right?
It should indeed have plotted 40 graphs. Make sure
- that your for loop is indeed called 20 times
- that trial_data_eye0 and trial_data_eye1 are not empty (you can check by printing their shape, e.g. print(trial_data_eye0.shape))
- that trial_data_eye1['diameter_3d'] contains numeric data (instead of NaN values)
The eyeball center (x,y,z) looks good, but I could not find information on the eyeball center orientation with respect to the camera. I tried to find it using solvePnP between projectedsphere(x,y) and sphere(x,y,z), but those values look off. If this is already provided in the export files, let me know. Thanks
The circle_3d_normal_x/y/z columns contain the current eye ball orientation for each datum
Isn't it the normal vector of the 3D pupil? Maybe there is a misunderstanding. I wanted to know where the eye ball is relative to the camera (or, say, the camera orientation), and I do not care about the gaze direction at this moment.
There might be a misunderstanding, indeed. Could you elaborate on what you mean by "eyeball center orientation"? The eye model has an orientation: its gaze direction. What orientation are you referring to?
Not the gaze direction, but the orientation of the eyeball w.r.t. the camera. Theta and phi in this case.
Theta and phi are just the gaze direction in spherical coordinates
The gaze direction is different, I suppose. Phi_c and theta_c should be the same for a video, provided that there is no headgear slippage.
You can convert the cartesian eye center to spherical coordinates with this function: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/visualizer_pye3d/utilities.py#L14-L19
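An illustrative sketch only: the exact axis/angle convention is the one in the linked utilities.py, so treat the assignments below as an assumption.

import numpy as np

def cart_to_spherical(x, y, z):
    # sphere_center_x/y/z are given in eye-camera coordinates (mm)
    r = np.sqrt(x**2 + y**2 + z**2)  # distance from the camera origin
    theta = np.arccos(z / r)         # polar angle (assumed convention)
    phi = np.arctan2(y, x)           # azimuth (assumed convention)
    return r, theta, phi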
Regarding "same for a video provided that there is no head gear slippage": the same applies to the eye ball center.
That is a helpful drawing, thank you! I will have a closer look on how to calculate this tomorrow.
That would be a great help. Thanks
I tried to find using solvePnP between projectedsphere(x,y) and sphere(x,y,z) but those values looks off.
Hello I'm new to pupil (and to Discord as well, I am more used to Slack).
I am interested in Pupil as an easy way to record eye movements, and I like the idea of 3d tracking very much. What is the most straightforward way to get it running and obtain data from it with the simplest setup? (So far I just have an ordinary webcam.) Are there some docs that I can read to get myself started?
Hey @user-82b4f1. I think it would be helpful if you could elaborate more on your setup and how you are planning to perform eye tracking with your webcam. Are you implementing something like Pupil DIY (https://docs.pupil-labs.com/core/diy/#diy)?
Hello @papr (I see you are the most active), is this the right place to ask for someone just starting? Are there examples ready to run? E.g. one that uses video snippets instead of a real-time camera stream, just to be sure that everything is set up correctly?
Has anyone had an issue with fisheye-model-based undistortion with OpenCV for videos with 1920x1080 resolution? Because the Pupil Player export creates a rectangular video padded with black pixels, my undistortion function behaves badly horizontally, but very well vertically. It performs well in general for the 1280x720 resolution videos, but I need the FOV that 1920x1080 provides. I would sincerely appreciate any tips or tricks people have found.
These are the utility classes that we use in Core to handle camera distortion:
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py
Check out the Fisheye_Dist_Camera class and its undistort() method.
hi, there is a blue circle and a red circle on the pupil, what is the difference? If I only want the blue one, how can I do that?
Blue refers to 2d detection results. Red to 3d detection. If red is less stable this is due to the 3d eye model not being fit well. Roll your eyes to improve its fit. The green outline should be slightly smaller than your eye ball when fit well.
Because I think the blue one is more stable than the red one.
hello, anybody here? Is this about pupil.0.2d and pupil.0.3d?
hey, I realized I am not able to install Pupil Invisible Companion on my phone, a Pixel 3a XL. Would you let me know the reason I am not able to install the app? Thanks!
Hi @user-83ea3f. The Invisible Companion App is only supported on OnePlus 6 and OnePlus 8 devices. The glasses do not work with other models
Also, am I able to connect my Core with the Invisible app? Let me know!
Also, Core is not compatible with the Invisible Companion App. They can be considered as separate entities. If you want to use Core in mobile studies, you might consider using a small form-factor laptop/tablet style PC
@user-82b4f1 To follow up on this note: Please understand that Pupil Capture only works for head-mounted eye trackers. It does not work for remote eye trackers (e.g. webcam placed in front of the subject).
Hi @papr, thank you. Our goal is to use the software with a piece of hardware made by us. So first of all we need to know whether the LGPL licence allows us to use it as it is, before building our client application. Yes, of course having the camera held firmly with respect to the head will make everything much easier. Even if we are not interested in gaze data, but only in eye movements (so no world camera), of course we are tempted to start by buying the headset from your store on Shapeways. But we need to know: of the following options, a) using the software as it is; b) using the headset too, as it is; c) making modifications to the software, or d) to the headset; which would be allowed by the current licensing scheme? Of course, after some R&D, the intention is, if it works, to make a product out of it.
Thanks, @papr, I will have a closer look at this. Additionally, I need the specification of the camera sensor (size, FOV) used for the Pupil Labs Core eye tracker (the one with 640 x 480 resolution). Is it possible to share the details?
If you need the information regarding the sensor size anyway, please contact info@pupil-labs.com in this regard.
You can read the focal length from the camera intrinsics in https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L192-L211
If you just want to convert between pixels and 3d coordinates, use the intrinsics. There is usually no need to know the sensor size.
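For reference, a minimal pinhole sketch of that pixel-to-3d conversion. The matrix values below are placeholders; use the intrinsics for your camera/resolution from the linked camera_models.py, and undistort the pixel first if you have a fisheye lens.

import numpy as np

# placeholder intrinsics: fx, fy, cx, cy
K = np.array([
    [700.0,   0.0, 320.0],
    [  0.0, 700.0, 240.0],
    [  0.0,   0.0,   1.0],
])

def unproject(pixel_xy, K):
    # returns a unit-length 3d viewing direction for an (undistorted) pixel
    x, y = pixel_xy
    ray = np.linalg.inv(K) @ np.array([x, y, 1.0])
    return ray / np.linalg.norm(ray)

print(unproject((320, 240), K))  # principal point maps to roughly [0, 0, 1]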
Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs: "Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following: -UVC compatible (Chapters below refer to this document) -Support Video Interface Class Code 0x0E CC_VIDEO (see A.1) -Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2) -Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6) -Support UVC_FRAME_FORMAT_COMPRESSED frame format"
None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.
Unfortunately, I do not know any specific model that fulfils these requirements. I extracted the requirements by looking at the pyuvc source code. It is possible to use other cameras but it would require you to use a custom version of pyuvc.
Alternatively, you can also use this community-provided video backend https://github.com/Lifestohack/pupil-video-backend that uses OpenCV to access the camera and streams the frames over the network to Pupil Capture. OpenCV supports a lot of different cameras.
@papr I have msgpack=0.5.6 in a conda environment where I also have msgpack-python and msgpack-numpy. Is there a way to make sure my code imports the right version of msgpack? I run into an error that I think is the result of my module importing the wrong version. As an aside, is it still necessary to use msgpack=0.5.6?
msgpack-python has been deprecated in favor of msgpack. I would highly recommend uninstalling msgpack-python. Also, starting with Pupil 3.0, we support newer versions of msgpack.
OK, great. Thanks!
hi, blinks have an influence on my detection, so how can I filter out blinks?
You can use the post-hoc blink detection in Player. Please be aware that it requires good 2d pupil detection to work well. The plugin will export time ranges in which blinks were detected. Afterward, you will have to exclude these time ranges from the pupil/gaze data in your post-processing script.
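For reference, a sketch of that post-processing step. It assumes the exported blinks.csv provides start_timestamp and end_timestamp columns (check your export) and pupil data loaded as in the tutorials:

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
blinks = pd.read_csv("blinks.csv")

keep = pd.Series(True, index=pupil.index)
for _, blink in blinks.iterrows():
    in_blink = pupil["pupil_timestamp"].between(
        blink["start_timestamp"], blink["end_timestamp"]
    )
    keep &= ~in_blink  # drop samples that fall inside a blink range

pupil_without_blinks = pupil[keep]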
@papr and how can I know the distance between my eye and the eye-tracking hardware?
You could compute the norm of the sphere_center_x/y/z vector to get the distance between the eye camera origin and the estimated eye ball center in mm, if that is helpful to you. What do you need this information for? It is the first time someone has asked this question.
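A tiny sketch of that computation on the pupil_positions.csv export (column names as in the 3d detector output; the new column name is just a placeholder):

import numpy as np
import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
centers = pupil[["sphere_center_x", "sphere_center_y", "sphere_center_z"]].to_numpy()
# distance (mm) between eye camera origin and estimated eye ball center, per datum
pupil["eye_to_camera_mm"] = np.linalg.norm(centers, axis=1)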
the distance is one of the parameters in my project, I should test the distance in real time.
And the recommended use of the headset is to place the headset on the subject, adjust the cameras once, and avoid any slippage of the headset as much as possible. Your description would violate this recommendation. Are you sure that the parameter is "headset distance to eyes" and not "stimulus distance to eye"?
The eye sphere requires a series of pupil detections. In other words, it requires time to adjust! If you are moving the camera, sphere_center might not be valid!
Thank you papr, I will think about your suggestions.
@papr we are building a test for our subjects to test eye behavior. For that we have a video to show them. This video is about, let's say, 5 minutes, and in order to understand where the eye is looking in relation to a reference point, we use the post-hoc method and create the reference points ourselves. As you might imagine, that's a lot of points(!). Just to give an example: for a short video (1 min) it's about 800 points. Do you know of a better way to create the reference points?
If you use the circle marker as reference point, it can be detected automatically. The results are cached. You could try to find custom reference points and cache them in the same format. Afterward, Pupil Player should be able to use them.
sorry papr, maybe my expression was not clear. Let me say it again: I need to know the distance after I change the relative position between the eyes and the hardware.
ok, understood. In this case, I suggest adding an explicit model fitting phase* after moving the headset.
the thing is we have a different "marker" that the software doesn't understand. Can we program a new shape to be our new marker, so that the software itself will create the reference points for us?
That is possible too, but you will have to run the software from source. It is not possible to implement this easily as a plugin
thank you papr
unfortunately we don't have the time to write a new plugin, we're a bit low on time
thanks anyway! if any idea comes to mind that will help us a lot!
@papr this is our "marker" in case it helps to understand:
Have you setup surface tracking for the monitor displaying that stimulus?
hello! I have a problem with capture - the EYE 1 camera cannot be found :/
Please contact [email removed] in this regard
this is the error message I am getting:
eye1 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
eye1 - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution (192, 192)!
eye1 - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
eye1 - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
eye1 - [WARNING] launchables.eye: Process started.
ok thank you!
yes we have, why do you ask?
If you know the position of the stimulus in surface coordinates, you can calculate it backwards to scene camera coordinates, which you can then in turn use as reference points. But this would require programming, too.
the thing is we don't really know the position in surface coordinates, because we don't use PsychoPy for the video (it creates a really slow, clunky video); we generate the video using Python and then record it, and use PsychoPy only to manage the timing.
a) LGPL allows (commercial) use as is, see https://tldrlegal.com/license/gnu-lesser-general-public-license-v3-(lgpl-3) b) of course c) allowed under the conditions mentioned in a) d) modifications to the headset are allowed (but void the warranty). You can also buy specific parts only. Contact info@pupil-labs.com in this case.
Please keep in mind that pye3d, our new 3d detector, is not LGPL licensed. Its use is allowed for (1) all academic uses, (2) when used as part of our bundled software releases, or (3) stand-alone in combination with official Pupil Labs hardware.
That's interesting and encouraging. About pye3d, do you confirm that conditions (1), (2) and (3) are all in a logical OR relation?
Correct. Any of the cases allows the use of pye3d
Ok. So, following (2), while we set up our own hardware, is there a way I can use recorded videos of the eye cameras and supply them to pupil_service, and therefore be able to read the data (via the network API, I suppose) as if they were obtained in real time? Is there also maybe a repository with test videos?
Hi! I have a question about the data output of the surface tracker plugin. For a research project, we want to use an AprilTag marker as a reference point in the environment. For this, I need the position of the marker in the camera coordinate frame. The surface position output only gives the transformation matrix of the surface from image to surf or surf to image. Is it possible to calculate the position of the marker in the camera coordinate frame solely with these matrices? It has been a while since I worked with matrices, so my mathematics knowledge needs a recap/update.
You can use the matrices to transform surface coordinates (e.g. corners) to scene image pixel coordinates. See this as a reference https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
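For reference, a minimal sketch of applying such a surf_to_img_trans homography to surface corners, as in the linked tutorial. How you parse the 3x3 matrix out of the export is up to you; np.eye(3) below is just a placeholder.

import cv2
import numpy as np

surf_to_img = np.eye(3)  # replace with the 3x3 surf_to_img_trans matrix

# surface corners in normalized surface coordinates
corners_surf = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float64)
corners_img = cv2.perspectiveTransform(corners_surf.reshape(-1, 1, 2), surf_to_img)
corners_img = corners_img.reshape(-1, 2)  # scene-camera pixel coordinates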
Thank you!
There is a hidden feature where you simply drop a (Capture) recorded video onto the window and it will start using it as input. There is no support for synchronised playback of multiple videos though. You can find example recordings here:
- core example https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
- vr example https://drive.google.com/file/d/11J3ZvpH7ujZONzjknzUXLCjUKIzDwrSp/view?usp=sharing
So, I tried to use the recordings found in the surface tracker. My current setup is:
Hi. Is there a way to better track surfaces that are distorted in the wide FOV camera? As you can see a lot of the screen is cropped out of the surface due to the distortion.
The surface tracker corrects for distortion internally. The preview just connects the corners with straight lines to give a rough idea of the surface area.
Hi, very silly question. Does the Player only replay the recording, or does it run the 2d and 3d detectors again?
You can run pupil detection and calibration post-hoc as well.
So, if I make some changes in the 3d detector code and run player, will it run the modified code? I want to re-use the recorded data and test with the modified code.
It will run the modified code. You can simply verify this by changing the detectors "method" output. That should reflect accordingly in the exported data
Thanks!
Hi, I'm trying to run Player from source on Big Sur. I've installed all the dependencies, and when I run Player I get the following message:
File "/Users/yoncha01/Project/git/pupil/pupil_src/launchables/player.py", line 826, in player_drop
    from pupil_recording.update import update_recording
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/pupil_recording/update/__init__.py", line 15, in <module>
    from video_capture.file_backend import File_Source
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/video_capture/__init__.py", line 40, in <module>
    from .uvc_backend import UVC_Manager, UVC_Source
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/video_capture/uvc_backend.py", line 25, in <module>
    import uvc
File "/Users/yoncha01/Project/git/pupil/pupil_venv/lib/python3.9/site-packages/uvc/__init__.py", line 4, in <module>
    import urlparse
ModuleNotFoundError: No module named 'urlparse'
So if I try to install urlparse, I get:
ERROR: Could not find a version that satisfies the requirement urlparse
ERROR: No matching distribution found for urlparse
Can you help me what I'm doing wrong?
Is it still possible to programmatically freeze the 3d model in the latest version of Pupil Capture? My notifications no longer have any effect.
{
    "subject": "pupil_detector.set_property.3d",
    "name": "model_is_frozen",
    "value": True,  # set to False to unfreeze
    # remove this line if you want
    # to freeze the model for both eyes
    "target": "eye0",
}
Please note that the pupil detector network api changed in Pupil Core v3 https://github.com/pupil-labs/pupil/releases/tag/v3.0
Is it possible that you did not install the Pupil Labs pyuvc? Please use pip install -r requirements.txt to install the requirements, or check its contents if you want to install specific dependencies only.
Now I'm getting the following message, seems like PyAv issue.
looking for avformat_open_input... missing
looking for pyav_function_should_not_exist... missing
looking for av_calloc... missing
looking for av_frame_get_best_effort_timestamp... missing
looking for avformat_alloc_output_context2... missing
looking for avformat_close_input... missing
looking for avcodec_send_packet... missing
looking for AV_OPT_TYPE_INT...found
looking for PYAV_ENUM_SHOULD_NOT_EXIST...missing
looking for AV_OPT_TYPE_BOOL...found
looking for AVStream.index... found
looking for PyAV.struct_should_not_exist... missing
looking for AVFrame.mb_type... missing
We didn't find avformat_open_input in the libraries.
We look for it only as a sanity check to make sure the build
process is working as expected. It is not, so we must abort.
{
'topic': 'notify.pupil_detector.set_properties',
'subject': 'pupil_detector.set_properties',
'values': {'freeze model':True},
'eye_id': 0,
'detector_plugin_class_name': 'Detector3DPlugin',
}
Is this the correct name for the new plugin?
The plugin name is Pye3DPlugin
I recommend subscribing to notify.pupil_detector.properties
and sending
{
'topic': 'notify.pupil_detector.broadcast_properties',
'subject': 'pupil_detector.broadcast_properties',
'eye_id': 0
}
to get an overview over all possible properties and detector classes.
Sorry, I don't think I understand. Subscribe to the topic you mentioned, and then send that notification?
correct. In an extra script perhaps. Also add a short time.sleep()
between the subscription and sending the notification to make sure the subscription goes through.
Got it, thank you!
Are there any studies/Python scripts/plugins on frequency analysis of pupil diameter? For example, pupil diameter is recorded while a participant stares at a flickering stimulus with a predefined frequency, and then the frequency is extracted from the pupil diameter time series by means of FFT, ICA, etc.
Not sure if this is the sort of thing you mean but it may be of interest: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148805
Hey @papr, could you point me to documentation which describes how the confidence/goodness measure is calculated? I'm trying to find the relevant code with no luck so far.
Is there a way to get the 2d diameter of the iris in addition to the pupil?
Hey, the detection algorithms in Pupil Core do not perform any iris detection. Therefore, it is not possible with the built-in tools.
Which parts of the Pupil Capture software source code would need to change to use a different world camera via the USB-C mount? I'd like to use a zed mini (https://www.stereolabs.com/zed-mini/)
You would need to write your own video backend, similar to the realsense plugin (https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29) or https://github.com/Lifestohack/pupil-video-backend/
Hi, I'm a complete noob. The logs show me this.
Anyone got an idea what I can change? I'm not able to see where I'm looking, but eye0 and eye1 are calibrated and recognized.
This issue is unrelated to you not being able to see the gaze prediction. It is related to the underlying user interface library. It is not able to open your "clipboard" (Zwischenablage) which is used to copy/paste text. Not sure why the access would be denied.
If you like, share an example recording of you calibrating with [email removed] and we can have a look and give specific feedback.
Hello everyone, why does surface6 enter and then surface2 also enter? Shouldn't surface6 exit first before surface2 enters?
Surfaces are allowed to overlap. Your sequence of events is possible if Surface 2 is e.g. a subset of Surface 6.
thanks, papr!
Any idea why I get these instances of zero pupil diameter?
Thank you! In what kind of situation would surfaces overlap like that?
This can either be intentional, by setting up the surfaces appropriately (e.g. keyboard on desk: Surface A for the keyboard, Surface B for the desk), or unintentional, if surfaces that should be separate end up overlapping because a surface is incorrectly estimated. This can happen if only a few markers are available.
Looking at your screenshot again, it looks like all events happened within the same scene video frame. I suggest looking at the gaze_on_surface data for that frame to check the exact gaze behaviour.
The diameter is zero during blinks or other low-confidence moments, for example. I suggest filtering your data by confidence before processing it further. See the Plot Pupil Diameter section on how to remove low-confidence data: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
I see, Thank you.
I also have zero pupil size in high-confidence instances. Those must be blinks, right?
These are probably not due to blinks but to other measurement errors. I suggest following preprocessing guides, e.g. https://link.springer.com/article/10.3758%2Fs13428-018-1075-y One of the steps is to only include data points from a physiologically feasible range (e.g. 2.5-9 mm).
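For reference, a minimal pandas sketch of both filtering steps, assuming a Pupil Player export with the usual pupil_positions.csv columns; the thresholds and file path are just examples:
```python
import pandas as pd

# load the exported pupil data (example export location)
pupil = pd.read_csv("exports/000/pupil_positions.csv")

# 1) keep only high-confidence samples (threshold is an example value)
high_conf = pupil[pupil["confidence"] > 0.8]

# 2) keep only physiologically plausible diameters (e.g. 2.5-9 mm)
plausible = high_conf[high_conf["diameter_3d"].between(2.5, 9.0)]
```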
Looks like PyAV is not able to find your ffmpeg correctly. Have you installed ffmpeg via brew?
Yes, I did. I'm getting the following message:
```
Warning: ffmpeg 4.3.2 is already installed and up-to-date.
To reinstall 4.3.2, run: brew reinstall ffmpeg
```
What is your output for `brew --prefix ffmpeg`?
/usr/local/opt/ffmpeg
Looks like the ffmpeg path is used correctly, but it still fails to find the ffmpeg symbols.
What is your output for `python -c "import platform; print(platform.platform())"`?
Is it static vs dynamic library issue?
dynamic. One sec, I think I know a solution.
Please try running `FFMPEG_DIR=$(brew --prefix ffmpeg) pip install "av @ [email removed]"` /cc @user-764f72 please try this one as well.
same as before
```
ERROR: Failed building wheel for av
Failed to build av
ERROR: Could not build wheels for av which use PEP 517 and cannot be installed directly
```
Is there more information that you can share with us? If the error message is very long, please use https://gist.github.com/ or https://pastebin.com/ to share the complete output.
I've copied the complete output to https://gist.github.com/yjaycha/fe7e46a37e20846ceda49b9099566b5c
macOS-11.2.1-x86_64-i386-64bit
Please download this wheel and install it via `pip install <path to wheel>`:
https://drive.google.com/file/d/135iNhqSHlU7BPwKfUfMFaLC2vp2PGMkS/view?usp=sharing
Done. Do I need to run requirements.txt again?
No, that might overwrite your av installation. Please try running Capture to see if anything is missing.
ahh... it is not finding uvc this time...
```
Traceback (most recent call last):
  File "/Users/yoncha01/Project/git/pupil/pupil_src/launchables/world.py", line 140, in world
    from uvc import get_time_monotonic
ModuleNotFoundError: No module named 'uvc'
```
This was the output re. uvc when running requirements.txt; it seemed alright:
```
Building wheel for uvc (PEP 517) ... done
Created wheel for uvc: filename=uvc-0.14-cp39-cp39-macosx_11_0_x86_64.whl size=200040 sha256=c5368201f2ef0d6158dfc381476bf44e0a4c4a591a704eec059797038fce0d81
Stored in directory: /private/var/folders/l5/dxqs5b1x67309qvyym_8b9640000gp/T/pip-ephem-wheel-cache-k6zr8dx_/wheels/cb/a5/c2/4db7d009c02fa34f5b138db80bfe561442e29aebcff314c5e9
```
The installation might not have gone through. Please try `pip install "uvc @ [email removed]"`
Thank you so much!! It works. I had to install pupil_detector and ndsi again after pyuvc. You really saved me today.
Great to hear that this is working now!
- I have a Jupyter session open where I test connecting to the service via the Network API (I think), and where I test commands. I just succeeded in getting some basic output, but no pupil coordinates of any sort yet. More precisely, I could follow the Network API guide, but I am stuck here (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone) where I can't get any output (actually I modified the loop so that it is not infinite, but no luck).
...continued from above:
```python
# Assumes sub_port to be set to the current subscription port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')

topic = 'pupil.0.2d.'
subscriber.subscribe(topic)  # receive all gaze messages

# we need a serializer
import msgpack

i = 0
while i < 10:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    print(f"{topic}: {message}")
    i += 1
```
I think the trailing dot in `pupil.0.2d.` is too much. Subscription works by prefix-matching, i.e. only messages whose topics start with this string will be received. The pupil data topics do not have a trailing `.` as far as I remember.
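If that is the case, dropping the trailing dot in the subscription should be enough, e.g.:
```python
subscriber.subscribe('pupil.0.2d')  # prefix match: all 2d pupil data for eye 0
```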
(sorry, I am still not good at formatting code in messages in Discord...)
There is a great markdown overview by Discord: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline-
Hello EyeTracker, how did you generate this graph? I am trying to do something similar (from Jupyter), but on pupil position, not diameter. But I'm still not able to read eye tracking data after the `subscriber.connect(f'tcp://{ip}:{sub_port}')` and `subscriber.subscribe(topic)` calls. Maybe I'm doing it the wrong way. Any working example would greatly ease my start. Can you point me to some?
I noticed this message only now. sorry. I'd be happy to help in any way I can.
Do you need the data in real time then? @user-98789c is following the pupil tutorials that work with Pupil Player exported csv files https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
Yes, real time would be a plus, but obviously I'd like to reanalyze recorded data as well. Oh great, it works now! It was only that extra dot!
Yes, I found it myself yesterday. Kind of unexpected that the good things can be done only within code blocks, e.g. one cannot make bullet lists or type CTRL+ENTER to make a newline within the message? I must study it better.
You can use SHIFT+ENTER to make new lines.
That's a combination I didn't try.
So a few additional questions:
0. I am interested in eye positions only (unrelated to a world view). Is starting pupil_service and querying the server via the Network API the right thing? Are there leaner options?
1. Is there any tutorial available for real-time reading/graphing/analysis of pupil tracking data?
2. I usually switch the two "Detect eye 0 / 1" buttons on and off in the main window to start capturing the two mp4 files. This way I can only switch one at a time (you mentioned there is no way to sync the two playbacks at the moment). Maybe adding a "start/stop eyes together [ ]" control to that window would ease the sync. Can you point me to the right place in the source where that window is managed?
`start_stop_eye(0, True)` and `start_stop_eye(1, True)` to start both eyes.
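For completeness, a hedged sketch of how both eye processes could also be started together remotely, assuming Pupil Remote on its default port and the `eye_process.should_start` notification subjects used in the pupil-helpers examples:
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

def notify(notification):
    """Send a notification through Pupil Remote and return its confirmation."""
    topic = "notify." + notification["subject"]
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(msgpack.packb(notification, use_bin_type=True))
    return pupil_remote.recv_string()

# start both eye processes together
notify({"subject": "eye_process.should_start.0", "eye_id": 0, "args": {}})
notify({"subject": "eye_process.should_start.1", "eye_id": 1, "args": {}})
```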
Let's hear them!
Is it possible at all with the Pupil Core device to test the vergence of the eyes, i.e. how well they work together? I am interested in testing for subtle skew deviations between the two eyes. I understand the calibration task takes a constant average between the gaze points for higher accuracy, but is there any way around that? What would you suggest? Or is this the wrong device for that? Many thanks in advance.
With Pupil Core, you can access the normal vectors for each eye individually. These describe the visual axis that goes through the eye ball center and the object that is looked at. You can see data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter. Also, this study that examined vergence using Pupil Core might be of interest: https://doi.org/10.1038/s41598-020-79089-1
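As a rough illustration (not an official recipe), the per-sample angle between the two visual axes could be computed from the exported gaze normals. The column names below are taken from the Raw Data Exporter documentation; the file path is an example:
```python
import numpy as np
import pandas as pd

# example export location from Pupil Player
gaze = pd.read_csv("exports/000/gaze_positions.csv")

n0 = gaze[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
n1 = gaze[["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]].to_numpy()

# angle between the two visual axes per sample (only meaningful for binocular data)
cos_angle = np.sum(n0 * n1, axis=1) / (
    np.linalg.norm(n0, axis=1) * np.linalg.norm(n1, axis=1)
)
vergence_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```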
You are comparing the whole data frame to your trial label, instead of comparing it to the trial column
- annotations_pd_frame[annotations_pd_frame == str(trial_label)]
+ annotations_pd_frame[annotations_pd_frame.label == str(trial_label)]
Although I think a nicer solution would be to group by label:
annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])
and loop over that result instead of the indices.
Check out this tutorial for more information on pandas grouping functionality: https://realpython.com/pandas-groupby/
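For example, a minimal sketch of looping over that groupby result, assuming the data frames from the tutorial (annotations_pd_frame, and an eye0_high_conf_df with a pupil_timestamp column):
```python
# one row per trial label, with the first and last annotation timestamp
trial_ranges = annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])

for label, row in trial_ranges.iterrows():
    # mark all pupil samples that fall into this trial's time range
    in_trial = eye0_high_conf_df["pupil_timestamp"].between(row["min"], row["max"])
    eye0_high_conf_df.loc[in_trial, "trial"] = label
```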
Thanks, I'm working on it. Is there a relationship between the indices in annotations.csv and the world indices in pupil_positions.csv? And can they be set to be synchronous?
no, there is not. If you want to associate data of different frequency with each other, you need something similar to https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L359-L373
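For illustration, a small nearest-timestamp matcher in the spirit of that linked code (a sketch, not a verbatim copy; it assumes the target timestamps are sorted):
```python
import numpy as np

def find_closest(target_ts, source_ts):
    """For every timestamp in source_ts, return the index of the closest
    timestamp in target_ts."""
    target = np.asarray(target_ts)
    source = np.asarray(source_ts)
    idx = np.searchsorted(target, source)
    idx = np.clip(idx, 1, len(target) - 1)
    left = target[idx - 1]
    right = target[idx]
    # step back by one where the left neighbour is closer than the right one
    idx -= (source - left) < (right - source)
    return idx
```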
@nmt thank you very much!
Hi there, is it possible to turn on surface tracker and add surface with Pupil Remote?
Hey. Yes, it is possible to turn it on via a notification, but it is not possible to add a surface programmatically. The intended workflow assumes a one-time setup of the surfaces and reusing the definitions across multiple sessions.
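A hedged sketch of such a notification via Pupil Remote: the `start_plugin` subject is the general mechanism for starting plugins remotely, but the plugin class name used here (`Surface_Tracker_Online`) is an assumption and may differ between versions.
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

notification = {
    "subject": "start_plugin",
    "name": "Surface_Tracker_Online",  # assumed class name for Capture's surface tracker
    "args": {},
}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.packb(notification, use_bin_type=True))
print(pupil_remote.recv_string())  # Pupil Remote confirms receipt
```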
Hello @papr, does the pye3d model also publish corneal shape data? Under which topic?
No, it does not. It assumes an average corneal shape (@user-92dca7 please correct me if this is incorrect/inaccurate). The `pupil.0.3d` and `pupil.1.3d` data already contains everything that pye3d publishes.
changed it according to your suggestion:
```python
import csv

all_labels = annotations_pd_frame['label']
label_names = list(all_labels)

eye0_high_conf_df["trial"] = None
eye1_high_conf_df["trial"] = None

all_indices = annotations_pd_frame['index']

groups = annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])

for label in label_names:
    trial_label = label
    trial_start_timestamp = groups['min'].loc[trial_label]
    trial_end_timestamp = groups['max'].loc[trial_label]
    trial_boolean_mask_eye0 = eye0_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    trial_boolean_mask_eye1 = eye1_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    eye0_high_conf_df["trial"].loc[trial_boolean_mask_eye0] = trial_label
    eye1_high_conf_df["trial"].loc[trial_boolean_mask_eye1] = trial_label
    mask_trial_not_available_eye0 = eye0_high_conf_df["trial"].isna()
    mask_trial_not_available_eye1 = eye1_high_conf_df["trial"].isna()
    mask_trial_available_eye0 = ~mask_trial_not_available_eye0
    mask_trial_available_eye1 = ~mask_trial_not_available_eye1
    pupil_data_in_trial_eye0 = eye0_high_conf_df.loc[mask_trial_available_eye0]
    pupil_data_in_trial_eye1 = eye1_high_conf_df.loc[mask_trial_available_eye1]

label_names = list(dict.fromkeys(label_names))
for label in list(label_names):
    plt.figure(figsize=(16, 5))
    plt.plot(pupil_data_in_trial_eye0['pupil_timestamp'], pupil_data_in_trial_eye0['diameter_3d'])
    plt.plot(pupil_data_in_trial_eye1['pupil_timestamp'], pupil_data_in_trial_eye1['diameter_3d'])
    plt.legend(['eye0', 'eye1'])
    plt.xlabel('Timestamps [s]')
    plt.ylabel('Diameter [mm]')
    plt.title('Pupil Diameter in ' + str(label) + ' sequences')
```
Now the grouping is done correctly, but the same data goes into pupil_data_in_trial_eye0; it's not sliced according to 'trial'.
I (or my colleagues) would be happy to dive into code review/help you with custom implementation questions. In order for me to devote the time and attention required, I ask you to consider purchasing a support contract.
So it does not really fit the corneal shape (a first model could be, e.g., an off-centred spherical surface whose main parameters are central protrusion and central curvature); it uses a fixed one from external data.
You are correct @user-82b4f1, we are using population averages of relevant physiological parameters, i.e. assume an average cornea in our correction.
Hi, I have recently started working with the Pupil Core eye tracker for a psychological experiment. For that, I have to record the gaze positions of the participants on the monitor, with respect to the monitor itself, as we have to plot them later.
I am able to extract data from the Python API, but I'm not sure what the data means or what the optimal data would be for my experiment. Can someone guide me as to whether norm_pos_x, gaze_3d_x, or some better data will help me with this?
Hi @user-bf78bf, it sounds like you are trying to map your gaze points to the monitor. Have a look at our surface tracking plugin. You can add fiducial markers (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking) to your monitor and use the Surface Tracker plugin in Pupil Capture to automatically detect the markers, define your monitor as a surface, and obtain gaze positions relative to that. Read about surface tracking and data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
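Once a surface (e.g. named "Monitor") is defined and the data exported from Pupil Player, the gaze-on-surface file can be loaded roughly like this. The file and column names are based on the surface tracker export and should be double-checked against your own export:
```python
import pandas as pd

# example export location; the file name includes your surface name
gaze_on_surf = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Monitor.csv")

# keep only gaze samples that actually landed on the surface
# (depending on how the column is parsed, you may need to compare against the string "True")
on_surf = gaze_on_surf[gaze_on_surf["on_surf"] == True]

# x_norm / y_norm are normalized surface coordinates in [0, 1]
print(on_surf[["gaze_timestamp", "x_norm", "y_norm"]].head())
```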
Can Pupil Core output a log of gaze entering and exiting each surface? Thank you.
After you have used the Surface Tracker plugin in Pupil Capture to automatically detect the markers, and defined your surfaces, you can export the gaze and surface data. The exported file: 'surface_events.csv' contains a log of when gaze enters and exits each surface. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
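A minimal sketch of loading that log, assuming the usual Pupil Player export layout (the path may differ in your setup):
```python
import pandas as pd

# example export location for the surface tracker output
events = pd.read_csv("exports/000/surfaces/surface_events.csv")
print(events.head())  # expect one row per enter/exit event with the surface name
```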
Hello, when I start Capture I am seeing this error and one of the eye cameras is failing to provide an image. It just started happening randomly. The eye camera window says the camera is disconnected, but everything seems to be well-connected.
Hi! I'm experiencing the same issue. Did you manage to get this solved?
Hey, is there a possibility to write the detected blink onsets and offsets into the LSL stream as markers?
Hi, can I run the Player from the command line, giving the path to a recording as an argument, so the Player starts immediately?
Yes, you can.
(Sorry to come into this conversation with @user-98789c.) Besides the translation, it rather seems that there is a ~160° rotation (not a one-axis flip) of the two coordinate systems with respect to each other...
Does this difference in the y axis between the two eyes have any meaning? Can it mean that the participant was tilting their head?
The eye cameras have independent coordinate systems. There is no special meaning in the coordinate differences in this case.
Do you know how much white space I need around the marker for detection?
You need at least a border of 1 "marker pixel". So the required size of the border scales with the displayed/printed size of the marker.
Hi all, does anyone have any data on how the new pye3d detector plugin compares to the old 3d model in terms of the accuracy of absolute units of pupil diameter? My initial impression is that it could be underestimating pupil size (in comparison), but maybe my pupils are really that small.
The old model indeed underestimates the pupil radius. See this as a reference https://www.researchgate.net/profile/Kai-Dierkes/publication/325634500_A_novel_approach_to_single_camera_glint-free_3D_eye_model_fitting_including_corneal_refraction/links/5cd42c3fa6fdccc9dd98b24e/A-novel-approach-to-single-camera-glint-free-3D-eye-model-fitting-including-corneal-refraction.pdf
Hello @papr, I have gotten myself two videos of eyes and am trying to get them "captured" with pupil_capture, but I get the message "Could not playback file! Please check if file exists and if corresponding timestamps file is present." How can I generate such data files? Do I need more data files than that? (In fact, I supposed the purpose of pupil_capture was to generate all files after the actual video is grabbed in real time or, just for test purposes, from a video recording.)
The file backend is primarily used in Pupil Player and is meant to playback Capture recorded videos. These are meant to be recorded from a live video source. You can use the eye videos from this example recording https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing It includes the timestamps. You can also generate timestamps for the video. See the docs for the required format: https://docs.pupil-labs.com/developer/core/recording-format/#timestamp-files
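As an illustration of the timestamp format, here is a hedged sketch that writes a constant-frame-rate `eye0_timestamps.npy` file following the recording-format docs linked above; the frame rate and frame count are placeholders you would need to match to your own video:
```python
import numpy as np

fps = 120          # frame rate of your eye video (assumption)
n_frames = 12000   # number of frames in eye0.mp4 (e.g. counted via ffprobe)

# monotonically increasing timestamps in seconds, one per video frame
timestamps = np.arange(n_frames, dtype=np.float64) / fps
np.save("eye0_timestamps.npy", timestamps)
```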
I tried to supply pupil_capture or pupil_service with my own videos eye0.mp4 and eye1.mp4 (but not world.mp4 so far) and made (identical) timestamps files for each of them. But that doesn't seem to be enough. I get this error message: "No camera intrinsics available for camera eye0 at resolution (1280, 720)!" (which looks strange, as my video is 400x200, in fact a crop of a larger image via ffmpeg). Do I need any other file in the directory where I have the eye files? Do I need a (dummy) world video maybe?
In https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb the only difference between the first and the second plots is that the second one only contains high-confidence values. How come the y-axis label in the second one is pixels, while in the first one it is mm?
That is a mistake. Both should be in mm. Thank you for pointing it out!
Thanks!
Hi @papr, I have questions about surface definition for multiple static face stimuli (e.g. 20) across participants (e.g. 20). I understand that if the AOI is a certain part of the face, e.g. the mouth area, I can only do post hoc surface definition for each face and each participant, which adds up to 20 surfaces per participant. I can't reuse the surface defined for the same face across participants? Is there a more efficient way to define the AOIs? Thank you in advance.