Hello, I followed the instructions on the website to set up the Pupil Core DIY. I have the hardware set up, and both cameras turn on when I try them with my computer's camera app. I installed the Pupil Capture 3.5 software. However, the cameras are not discoverable to the software. Every time I tweak it, it returns "camera is in use or blocked." How do I make the hardware discoverable to the software?
Also, the Microsoft HD-6000 is blurry. It became blurry after I inserted the IR filter under the lens. How do I adjust it to focus?
NB: I am on Windows.
Thanks.
Can you try running Pupil Capture as an administrator?
Did you solve it? How can I find the devices?
Hello! After plugging the Pupil Core device into my personal computer, the system cannot recognize the USB device, so I cannot use Pupil Capture and other programs. I tried updating the driver and other solutions, but it still cannot be recognized. May I ask how else I can solve this problem? My computer is running Windows 11, version 22H2.
hey everyone! I am just getting started with analyzing pupil data.. so this might be a beginner question. Is there a way to get the annotation labels from annotations.csv into fixations.csv other than going in manually?
Hi @user-ca5273 ! Have you seen our python tutorials to work with the data? https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb https://github.com/pupil-labs/pupil-tutorials/blob/master/06_fixation_pupil_diameter.ipynb They demo how you can merge different parameters and could be a great source to show you how you can achieve what you want.
What are the power requirements of the individual Pupil Core eye cameras?
Hi @user-ca4e2e ! Have a look at the previous answer from my colleague. https://discord.com/channels/285728493612957698/285728493612957698/1168794380019179600
Hi @user-d407c1 , yes, I was looking at these notebooks. Definitely a handy resource, but they didn't quite help me with what I want to achieve. I want to get the world labels in annotations into fixations, and the tutorials don't really cover that, unless I am missing it.
I was thinking I could join the two files using the world index columns but they aren't exactly aligned in terms of row count
annotations and fixations both contain timestamps that you can merge similarly as shown here https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb
@user-d407c1 thanks! Two more questions, 1. I considered the timestamps and they don't exactly align between the two files. That's my main challenge and why I didn't go that route. In this case, do I just look at the start and end timestamps and join whatever is in between those?
Just to be 100% sure, you want to know the fixation id per annotation or the annotations within that fixation?
Hi there, I am collecting real-time gaze data using the pyplr module in Python, and I am using the function 'pupil_grabber'. It returns me a dictionary of different values, and I am using the value in b'ellipse' as the gaze location. Is it correct? If so, could anyone tell me in what units it is?
@user-d407c1 see snapshot of the two files I exported. What I want are columns AOI_* and Event_name in annotations to be joined to fixations by either timestamps or world index. We are interested in the AOI columns and the fixations data, and want to join the two somehow
Hi @user-ca5273 ! To achieve that you can do something along these lines:
import pandas as pd

annotations = pd.read_csv('annotations.csv')  # change these to the adequate paths
fixations = pd.read_csv('fixations.csv')

def find_matching_annotations(row):
    # collect all annotations whose timestamp falls within this fixation
    matching_annotations = annotations[
        (annotations['timestamp'] >= row['start_timestamp']) &
        (annotations['timestamp'] <= row['end_timestamp'])
    ]
    unique_aoi = matching_annotations['AOI_name'].unique()
    unique_events = matching_annotations['Event_name'].unique()
    combined_unique_values = list(unique_aoi) + list(unique_events)
    return combined_unique_values

fixations['AOI_Event_Matches'] = fixations.apply(find_matching_annotations, axis=1)
print(fixations.head())
fixations.to_csv('fixations_with_annotations.csv', index=False)
Kindly note that I haven't tested it, so you might need to fine-tune the code. Also, you will still need to set the proper path to each file when reading it, and if you do not have an end_timestamp column, you may have to use start_timestamp and add the duration to it.
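For example, something along these lines could work (untested; I'm assuming the exported duration column is in milliseconds, so double-check against your fixations.csv):
# hypothetical helper: derive end_timestamp if your export lacks it (assumes duration in ms)
fixations['end_timestamp'] = fixations['start_timestamp'] + fixations['duration'] / 1000.0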
Hi Pupil Labs, how do I get real-time gaze data in degrees in Python?
Hi @user-7f68a7 ! I assume you are using Pupil Core, is that right? With the Network API https://docs.pupil-labs.com/developer/core/network-api/ and ZMQ you can subscribe to any topic, including gaze data in degrees. Depending on what you are looking for, you can, for example, subscribe to phi and theta or to the 3d gaze normal.
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
ip = "localhost"
port = 50020
pupil_remote.connect(f"tcp://{ip}:{port}")

# ask Pupil Remote for the SUB port and subscribe to 3d gaze data
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://{ip}:{sub_port}")
subscriber.subscribe("gaze.3d.")

try:
    print("Listening for 'gaze.3d.' messages...")
    while True:
        topic, payload = subscriber.recv_multipart()
        message = msgpack.loads(payload, raw=False)
        eye = "right" if "gaze.3d.0." in str(topic) else "left"
        print(
            f"Eye {eye}:\n"
            f"  gaze_normal_3d: {message['gaze_normal_3d']}\n"
            f"  phi: {message['base_data'][0]['phi']}\n"
            f"  theta: {message['base_data'][0]['theta']}\n"
        )
except KeyboardInterrupt:
    print("Stopped listening for 'gaze.3d.' messages.")
finally:
    subscriber.close()
    ctx.term()
hello! is there any way that i can avoid those gaze point distortions while eye blinking? https://drive.google.com/file/d/1MN8QynRLq856oyMH2l48EHfde_PhUid6/view?usp=drive_link
Hi @user-870276 ! Usually one would remove the blink data when analysing it. As you can see, the confidence drops when there is a blink, so if you remove the data with low confidence you should get rid of this.
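As a rough, untested sketch of that filtering in Python (assuming a Pupil Player export such as gaze_positions.csv; the 0.6 threshold is just a common starting point, adjust it to your data):
import pandas as pd

gaze = pd.read_csv('gaze_positions.csv')  # adjust the path to your export
# blinks show up as stretches of low detection confidence, so drop those samples
gaze_clean = gaze[gaze['confidence'] >= 0.6]
print(f"Kept {len(gaze_clean)} of {len(gaze)} samples")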
Pupil Player causing Desktop Window Manager high GPU usage? I'm using Windows 11 on a laptop with a discrete GPU. When Pupil Player is running, DWM also starts to use the GPU. Interestingly, if I force Pupil Player to use GPU 1, DWM will still use GPU 0. I don't see this problem occurring on my office computer, which is a desktop.
Hi @user-5346ee ! Could you check if your drivers are up to date?
Hello everyone, can someone please tell me what the "norm_pos" data indicates?
Hi @user-7f68a7 ! Those refer to the position of the gaze in the scene camera in normalised coordinates. You can find this and the rest of the parameters here: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv
Thanks Miguel, so is the [0,0] in norm_pos at the center of the screen or at the bottom left of the screen?
The origin of 2D norm space is at the bottom left, you can find this here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
hi
How do I convert timestamps to the computer time in my Excel file from a recording?
Hi @user-956f89. Check out this section of the docs for an overview of timestamps and how to convert them: https://docs.pupil-labs.com/core/terminology/#timestamps
My goal is to represent the data as 3D pupil size against the real (computer) time.
Hi Neil
thanks for the reply
Is it only via Python that I can convert between the two times?
It is complicated and I am stuck with my data now.
i recorded for 2 minutes
several participants
and now unable to process my data
Converting the timestamps can also be done in spreadsheet software. It's essentially just a case of working out the temporal offset and adding it to the timestamps. The more important question is why do you want to convert to system time?
In our study we want to represent the data according to the circadian time of day, so we need to have a real-time representation.
Ah ok. That's a valid motivation. If you were doing it for sync purposes there are certain caveats (outlined in the link I shared). In that case, I recommend implementing the steps in our docs but using spreadsheet software, if you're unable to run the Python example. Each step is explained in the code comments.
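For reference, here is a minimal Python sketch of that conversion (untested; it assumes the start_time_system_s and start_time_synced_s fields in the recording's info.player.json and a pupil_positions.csv export, so adjust names and paths to your data):
import json
import pandas as pd

with open('info.player.json') as f:
    info = json.load(f)

# offset between Pupil time and Unix/system time at the start of the recording
offset = info['start_time_system_s'] - info['start_time_synced_s']

pupil = pd.read_csv('pupil_positions.csv')  # adjust the path to your export
pupil['pupil_timestamp_unix'] = pupil['pupil_timestamp'] + offset
pupil['pupil_timestamp_datetime'] = pd.to_datetime(pupil['pupil_timestamp_unix'], unit='s')
print(pupil[['pupil_timestamp', 'pupil_timestamp_datetime']].head())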
We record at different times during the day, from morning to evening.
The idea is to have a circadian graph or plot of pupil size at specific times of the day.
But I cannot open my csv file with Python from my recording folder, and I do not know how to fix it.
so i am looking for any help or alternative solution
You can open your csv files with any spreadsheet software, such as excel or libreoffice
yes i did with excel
How to relate 70,000 observations to 2 minutes of recording is the challenge I have.
Hello, can I run Pupil Capture on a Raspberry Pi, which has an ARM CPU?
Hey! What are indicators that a calibration would not be good? I mean, if the circles are dark blue on each eye, the confidence shown at the top left is 1.00 at most spots, and the red and yellow colors don't flicker too much, why would it still have low confidence?
Hi, I am a PhD psych student, so please do forgive me if some of this is obvious to fix. I have been looking to install the MATLAB code through the pupil-helpers GitHub repository, however I am running into significant problems with the prerequisite zmq master installation. I am running Matlab2023b on an M2 MacBook Air. I seem to be getting the following main error: "Error using mex ld: warning: -undefined error is deprecated ld: warning: -undefined error is deprecated ld: Undefined symbols: _zmq_version, referenced from: _mexFunction in version.o clang: error: linker command failed with exit code 1 (use -v to see invocation)". I have installed a compiler for mex and ran a setup command, however this error keeps recurring. Any advice would be greatly appreciated. I am hoping to install this so that we can use the remote API to control Pupil via Matlab on a PC separate from our stimulus.
Hi, I wonder if anyone can confirm my understanding. It appears from the documentation that one can gather data from the Pupil Invisible in MATLAB, but for the Pupil Core you need to use Python. Is that right?
I did a recording but unfortunately it says the recording contains no scene video... I'm assuming this is a permissions issue, but there's no chance of recovering it at this point, correct?
Hi @user-b2d077. There are ways to communicate with Pupil Core using MATLAB, although our example code snippets are a bit more limited when compared to our Python examples. Check out this page for reference: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/README.md
please can someone help me to solve this message when trying to load my data
Hi @user-956f89 ! It seems like you are trying to run a Jupyter notebook, but you do not have your environment properly configured. This resource can help you get started with Jupyter Notebooks: https://www.youtube.com/watch?v=h1sAzPojKMg&t=220s
from IPython.display import display Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'IPython'
also how to open info.player.json file
ok
thks
Hello,
I'm using annotations in Pupil Capture to sync data from the eye cameras with other devices. Every 2 seconds I send an annotation to Pupil Capture. I get the right number of annotations in the annotation and annotation_timestamps files, but the timestamps I get are all the same (=2.14). Any idea why this is the case?
Hi @user-b5484c ! It is hard to know without further context; would you mind sharing how you create those annotations? Regarding sync with other devices, have you seen https://docs.pupil-labs.com/core/best-practices/#synchronization ?
Hello, I'm currently engaged in research that involves the development of smartphone-based pupillometry for applications in neurofeedback training, real-time cognitive load assessment, and preliminary screening for ADHD, Autism, and Parkinson's disease. Given the extensive groundwork that already exists in this domain, I am exploring potential collaborations from the computer vision and neuropsychology domains that could enhance the precision and efficacy of my project. My research focuses not only on the measurement of pupil diameter but also on how these measurements can inform us about various neural and cognitive processes. With this in mind, I am keen on understanding how Pupil Labs' technology could be leveraged in my work, especially in terms of data collection and processing capabilities, as well as the algorithmic aspect of measuring pupil size with precision without the need for auxiliary equipment. The main challenges I foresee with smartphone camera integration involve accounting for variables such as gaze angle discrepancies and ambient lighting conditions, which could potentially interfere with the near-infrared (NIR) camera's ability to capture precise pupillary data. Given these constraints, I'm curious to learn about any existing or developing solutions that Pupil Labs may offer to address these issues, particularly for NIR camera usage without a focal lens.
Hello! We have never used our Pupil Core for saccade estimation before. We would like to reduce the size of the output files as much as possible, since the experiment will be very long. Is it possible for us not to record eye video, or is it better to do so in order to later extract some missing coordinates from the video? We would be very grateful for your guidance.
Hi @user-aa03ac ! We highly advise retaining the eye-tracking video data for any subsequent analysis. For optimal results, especially during lengthy experiments, we also recommend splitting your session into blocks not exceeding 20 minutes each and asking the participants to roll their eyes to readjust the 3D eye model at the end of each segment, followed by a calibration. This practice is crucial for maintaining the accuracy of the eye-tracking data throughout the whole session and ensures that the recordings can be opened in Pupil Player later without issues (independent of your hardware). https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks
Hi team, I searched for this error "pyre.pyre_node: Group default-time_sync-v1 not found." here, but my case is slightly different, I think. I am using Pupil Capture 3.5.1 and I am running it from my laptop. Today I also noticed another error saying "pupil group not found". What could cause this problem?
Hi @user-6cf287 ! Is the Pupil Groups plugin activated on your Pupil Capture instance, and are you utilising it?
Regarding the potential impact on data quality, it is challenging to provide a definitive answer, as this often varies with the specific task and individual differences or even with each recording. While generalisations are not feasible, our recommended workflow aims to mitigate any decrease in data quality. That said, it may be acceptable to omit the instruction for subjects to roll their eyes if:
In relation to the eye tracker's performance with diverse populations, our experience indicates that it has been effectively used across various groups. Nevertheless, certain groups may present more challenges; for instance, individuals wearing heavy eye makeup, such as mascara or eyeliner, might complicate pupil detection. Similarly, the facial structure commonly found in Asian ethnicities could pose difficulties as the eyelid tends to obscure the pupil. Is there anything else I can help you with?
Hi team, I searched for this error "pkg-config is required for building PyAV", but my case is different. After successfully executing the command "python3.8 -m pip install pkgconfig", the error won't go away when executing "python3.8 -m pip install -r requirements.txt". The OS platform is Ubuntu 18.04. Any suggestions?
Hi, I found a problem I can't solve: when I am compiling the Pupil Core program, there's an error reported from Python: "FileNotFoundError: Could not find module 'C:\ProgramData\Anaconda3\envs....\pupil_apriltags\lib\apriltags.dll". I have tried adding 'add_dll_directory' before using the 'pupil_apriltags' library, but it still doesn't work. Do you have any suggestions?
Hi @user-fafdae. May I ask why you don't wish to use our pre-compiled bundles?
When I try to calibrate, I get this error message. What should I do?
Hi @user-da5552! This message means that there are no pupil detection data. To calibrate, you'll need to ensure the eye cameras are positioned optimally and pupil detection confidence is high. Check out the getting started guide and let me know if you run into any further issues: https://docs.pupil-labs.com/core/#_3-check-pupil-detection
Hi Neil...
Thank you. When I use the Player, my exported annotations.csv file is empty (only the header is there).
To clarify: for my annotations I'm always sending the same string {"topic":"annotation", "label":"myclock", "timestamp": 2.14} every two seconds. In Pupil Capture, my annotations clearly arrive and have appropriate timestamps (see CaptureAnnotationSaving.png); I'm not sure why they are negative, but they are clearly ~2s apart.
Now when I load my capture directory into PupilPlayer I can also see that the right number of annotations is loaded, but as you can see in PlayerAnnotationExporting.png they all have the same frame number, which already indicates that there is something wrong with the data I'm saving. Indeed when I play the data, all annotations are shown at the end of the session, and all at once. Then when I export the data the annotation.csv file is empty.
The only thing I need are the timestamps that appear in the terminal of Pupil Capture (as in CaptureAnnotationsSaving.png). Is this possible?
PS: I saw in one of your sync examples that you go back and forth getting timestamps from Pupil Capture and estimating clock offsets between your device and Pupil Capture, but I would rather avoid this solution, especially if I can directly have the timestamps saved at the time the annotations arrive.
Thank you in advance.
Moving the convo here: https://discord.com/channels/285728493612957698/446977689690177536/1172146888892497981
Hi, I wanted to confirm: the vector dispersion formula in the code outputs degrees of dispersion, right?
Based on Blignaut's paper, I identified 1.11 degrees as the appropriate parameter for Salvucci's I-DT using their regression model. I want to bounce it off someone here.
Correct. It's degrees of visual angle.
Getting started w. Core
Hello, I'm trying to get the raw data from one eye in real time, without having to record. I'm using this code in Python:
pgr_future = p.pupil_grabber(topic='frame.eye.0', seconds=0.01)
data = pgr_future.result()
frame_bytes = data[0]['raw_data'][0]
frame = [int(byte) for byte in frame_bytes]
print(frame)
This way I'm getting some bytes from the eye camera. However, the number of values in the variable 'frame' is smaller than the set frame dimensions, which are 129x129. I don't know what I'm doing wrong... How could I get the signal captured by the eye camera?
This script shows how you can grab eye video frames: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
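In rough outline it does something like the sketch below (untested; it assumes the Frame Publisher plugin is running in Pupil Capture with BGR format and that the Network API is on the default port). Note that if the frames arrive JPEG-compressed rather than as raw pixels, the byte count will be smaller than width x height, which may explain what you are seeing:
import msgpack
import numpy as np
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://localhost:{sub_port}")
subscriber.subscribe("frame.eye.0")

while True:
    parts = subscriber.recv_multipart()
    payload = msgpack.loads(parts[1], raw=False)
    raw = parts[2]  # the image bytes travel as an extra message part
    if payload.get("format") == "bgr":
        img = np.frombuffer(raw, dtype=np.uint8).reshape(payload["height"], payload["width"], 3)
        print("eye frame", img.shape)
    else:
        # e.g. jpeg frames need decoding first (cv2.imdecode or PyAV)
        print("eye frame", payload.get("format"), len(raw), "bytes")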
Hello! I'm a newbie here. Could you help please? I've recorded some trials on visual tracking of moving objects in real world (so head may move and so on). Opening data in "offline_data\gaze-mapping" results in norm_pos negative values. Same in gaze_positions.csv (obviously).
How should I set up the experiment so this problem won't occur too frequently (https://discord.com/channels/285728493612957698/998187988943110257/998480510571528203)? I've done calibrations several times during each trial with a special marker, as demonstrated here: https://www.youtube.com/watch?v=aPLnqu26tWI&ab_channel=PupilLabs
Also, can head pose monitoring take the shift in the main camera view into account?
Hi @user-a6c91e. Negative values can occur when gaze leaves the FoV of the scene camera. But generally speaking, this coincides with poor quality data. It's difficult to say why you may encounter this without seeing a recording. Can you share a screen capture of your calibration choreography + some of the recording such that we can take a look?
Hello, I am currently using Pupil Core for an experiment and I have been successfully sending trial information online from MATLAB to Pupil Capture. However, I would also like to send gaze information from the Pupil Labs recording to MATLAB. I want to implement a fixation break rule such that the trial restarts when fixation is broken. But I would like to use a specific fixation rule from a previous publication, so I want to get gaze location data and then implement fixation break rule on MATLAB (so I don't want to use the fixation detection from Pupil Capture, and I am also using AprilTags so surface tracker plug-in is also available). The MATLAB computer and the Pupil Labs computer are separate, and they currently communicate with an ethernet cable connection. Can I get some advice on the steps to follow? Can I stream the gaze location on the surface defined by AprilTags? I have zero prior experience with LSLs but I need this for my project.
Are you already using LSL? If so, here's a fork of the LSL plugin that streams surface gazes. It's still a work in progress and not well tested, but it may work for you (and feedback is welcome!) https://github.com/domstoppable/App-PupilLabs
Hey group! When I'm doing the eye tracking for patients with abnormally aligned eyes, the gaze is not being detected properly. Is there any way to resolve this issue?
Hey @user-870276! Can you share a screen capture showing the calibration and some of the recording, such that we can provide concrete feedback?
Also just curious, should the pupil camera be sideways like image 1, as mentioned on your website, to get the best accuracy in gaze detection? Or like image 2? Which one is best for getting higher accuracy?
Hi @user-870276 ! Positioning the camera on the side can prevent obstructing the subject's line of sight. However, for some individuals, this might not be feasible due to corneal reflexes or pupil obstruction.
Ultimately, the placement of the camera largely depends on the unique facial features of each individual, such as eyelid coverage, iris colour, etc. I would recommend finding the most suitable camera position for each person before starting.
Regarding the first video you mentioned, it appears there's an iris coloboma in one of the eyes. This condition could cause the pupil detection system to struggle with recognising these atypically shaped pupils, in combination with the eyelid partially covering the pupil. Therefore, I suggest either trying different pupil detection parameters/detectors https://github.com/pupil-labs/pupil-community?tab=readme-ov-file#pupil-detector-plugins or focusing solely on the unaffected eye. Independently, you can benefit from fitting the 3D model by asking the subject to rotate their eyes around like this https://youtu.be/_1ZRgfLJ3hc
For the second video, where the pupils are notably small, adjusting the minimum size parameter in the pupil detector would be advisable to ensure accurate detection (see the screenshot)
In addition to this, I used the screen calibration method. Do you suggest any other methods of calibration? Angular accuracy is between 3-4 degrees and angular precision is between 0.2 and 0.4 degrees.
Is it possible to extend the recording times of the Core? We used the Invisible and the limiting factor was the battery of the phone; if this is still the bottleneck, is it possible to recharge the phone with a power bank while recording?
Hi @user-005de5. Can you confirm which eye tracker your enquiry is about? Pupil Core or Invisible?
Hey, I'm still fairly new at using pupil core though I'm trying to set up the needed plugins for pupillometry. I found the github link above but can't seem to get the plugins to work right. Could anyone help or point me towards a resource detailing the process?
Hey @user-1caf5c! Pupil Core records pupil diameter both in pixels and mm. You don't need any plugins for this, it happens automatically. I recommend reading our best practices for pupillometry: https://docs.pupil-labs.com/core/best-practices/#pupillometry
I just noticed my mistake, I meant the Neon, sorry.
Np! Neon already records for longer than Invisible, up to 4 hours depending on which settings you have enabled in-app. You can also extend the autonomy should you wish with a USB hub: https://docs.pupil-labs.com/neon/glasses-and-companion/companion-device/#using-a-usb-c-hub
When I open the Pupil Player app to open eye tracking files, it reports an error: Bad file descriptor (c:\projects\libzmq\src\epoll.cpp:100). HELP!
Hi @user-be0bae ! May I ask a couple of questions to help you? What version of Windows are you using? Did you install the bundle with admin rights?
Here are a few things you can try to solve this issue:
- Ensure that the file epoll.cpp exists at the specified path and that you have permission to access it.
- Start Pupil Player with admin rights.
- It seems like you have multiple drives. Could you try installing the bundle on "C:" and/or avoiding non-ASCII characters in the path?
Hi team, I keep getting this warning: uvc: Could not set Value. 'Backlight Compensation'. What does this mean, and would it affect the eye measurements? Thank you
Hi @user-6cf287. Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/818442406159843349
hi everyone, i have a question: Pupil Core uses the dark pupil method, in which the reflected IR light from the pupil is not captured by the eye cameras. What do the cameras capture then?
The cameras capture IR images of the pupils, which are segmented with a pupil detection algorithm. You can read more about that here: https://arxiv.org/abs/1405.0006
How and where do I set the mapper?
Hi @user-c9af80 ! Would you mind elaborating on your question? Which mapper are you looking to set up, and what product are you using (is it Pupil Core)? Do you mean the gaze mappers https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline ?
Hi, I did my plotting successfully following your coding instructions, but I would like some support on how to change the code to get "pupil_timestamp_datetime" on the X axis instead of "pupil_timestamp". What should I change in this script to plot diameter_3d against pupil_timestamp_datetime?
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 5))
plt.plot(eye0_df['pupil_timestamp'], eye0_df['diameter_3d'])
plt.plot(eye1_df['pupil_timestamp'], eye1_df['diameter_3d'])
plt.legend(['eye0', 'eye1'])
plt.xlabel('Timestamps [s]')
plt.ylabel('Diameter [mm]')
plt.title('Pupil Diameter')
Hi @user-956f89 ! The line plt.plot(eye0_df['pupil_timestamp'], eye0_df['diameter_3d']) is in essence plot(x, y), so if you change the first parameter you will get what you are looking for. Dataframes are kind of like a table; if you want to know what options you have in your eye0_df "table", you can use print(eye0_df.columns) https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.columns.html
Hi, thank you for your answer
I already tried that before but was not successful.
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 5))
<Figure size 1600x500 with 0 Axes>
plt.plot(eye0_df['pupil_timestamp_datetime'], eye0_df['diameter_3d'])
[<matplotlib.lines.Line2D object at 0x00000284D8A82E48>]
plt.plot(eye1_df['pupil_timestamp_datetime'], eye1_df['diameter_3d'])
[<matplotlib.lines.Line2D object at 0x00000284D9B80108>]
plt.legend(['eye0', 'eye1'])
<matplotlib.legend.Legend object at 0x00000284D996AA08>
plt.xlabel('Timestamps [s]')
Text(0.5, 0, 'Timestamps [s]')
plt.ylabel('Diameter [mm]')
Text(0, 0.5, 'Diameter [mm]')
plt.title('Pupil Diameter')
Text(0.5, 1.0, 'Pupil Diameter')
plt.show()
can you tell if i have written something wrong in the script
Without seeing the output or error it's hard to know. Can you add the print columns command beforehand, to check what columns you have there?
ok
print(eye0_df.columns) Index(['Unnamed: 0', 'pupil_timestamp', 'world_index', 'eye_id', 'confidence', 'norm_pos_x', 'norm_pos_y', 'diameter', 'method', 'ellipse_center_x', 'ellipse_center_y', 'ellipse_axis_a', 'ellipse_axis_b', 'ellipse_angle', 'diameter_3d', 'model_confidence', 'model_id', 'sphere_center_x', 'sphere_center_y', 'sphere_center_z', 'sphere_radius', 'circle_3d_center_x', 'circle_3d_center_y', 'circle_3d_center_z', 'circle_3d_normal_x', 'circle_3d_normal_y', 'circle_3d_normal_z', 'circle_3d_radius', 'theta', 'phi', 'projected_sphere_center_x', 'projected_sphere_center_y', 'projected_sphere_axis_a', 'projected_sphere_axis_b', 'projected_sphere_angle', 'pupil_timestamp_unix', 'pupil_timestamp_datetime'], dtype='object')
this is the output
I see, so the issue is not that it is not plotting, but rather that you cannot read the labels, right? You can use plt.xticks(rotation=90) to rotate the labels: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xticks.html
As you see, there is no datetime on the X axis.
Where should I write plt.xticks(rotation=90)? Before writing plt.show()?
yes
same thing
I think it's because of the high number of values on the axis.
How can I reduce the axis values to every 10 seconds, for example, instead of plotting every second?
You can use the xaxis.set_major_locator() function to define that.
sorry to bother you Mgg
Please help me with how to write it according to the table I have written below.
pupil_timestamp_datetime  eye_id  confidence  norm_pos_x  norm_pos_y  diameter_3d
1238  2023-06-24 14:53:00.981393408  0  0.819  0.458  0.615  1.214
1240  2023-06-24 14:53:00.992414208  0  0.915  0.459  0.614  1.216
1241  2023-06-24 14:53:01.001674240  0  0.820  0.460  0.613  1.236
1243  2023-06-24 14:53:01.012230144  0  0.890  0.458  0.615  1.191
1245  2023-06-24 14:53:01.019725312  0  0.994  0.458  0.615  1.207
1250  2023-06-24 14:53:01.030388224  0  0.901  0.459  0.615  1.187
1252  2023-06-24 14:53:01.038635264  0  0.731  0.459  0.615  1.189
1255  2023-06-24 14:53:01.047466240  0  1.000  0.459  0.615  1.205
1257  2023-06-24 14:53:01.059594240  0  0.971  0.458  0.615  1.201
1260  2023-06-24 14:53:01.067557376  0  1.000  0.459  0.615  1.200
I want to plot this table, and not all the values.
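Something along these lines might do it (untested; it assumes your eye0_df / eye1_df from before, with pupil_timestamp_datetime parsed as real datetimes, e.g. via pd.to_datetime):
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# eye0_df / eye1_df: the dataframes you loaded earlier
fig, ax = plt.subplots(figsize=(16, 5))
ax.plot(eye0_df['pupil_timestamp_datetime'], eye0_df['diameter_3d'], label='eye0')
ax.plot(eye1_df['pupil_timestamp_datetime'], eye1_df['diameter_3d'], label='eye1')
# one tick every 10 seconds, formatted as HH:MM:SS
ax.xaxis.set_major_locator(mdates.SecondLocator(interval=10))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M:%S'))
plt.xticks(rotation=90)
ax.legend()
ax.set_xlabel('Time of day')
ax.set_ylabel('Diameter [mm]')
ax.set_title('Pupil Diameter')
plt.show()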
Hello, I'm setting up an experiment where the subjects should: 1. Follow a dot moving sinusoidally 2. "Find Waldo" 3. Find the differences in an image
I am having some problems with the last two. For testing purposes, I'm just following the dot and looking at the edge of the images, but especially in trial 2 I get an offset which I can't really explain. Could someone help?
EDIT: I'm using the surface tracker and analyzing the gaze positions. I got this by recording the video and then using Pupil Player. The calibration consistently shows an accuracy of >2 degrees (is that a lot?). Don't mind the legend on the graph: fixations are the red dots and samples the blue stars.
Have you inspected surface_tracker/gui.py? Have a look at this class: https://github.com/pupil-labs/pupil/blob/e9bf7ef1a4c5f2bf6a48a8821a846c5ce7dccac3/pupil_src/shared_modules/surface_tracker/gui.py#L620
From what I see in that function, I don't think you're exposing the image data directly. You create a 4x4 transformation matrix stored in the trans_mat variable and apply it using the world_tex.draw() function, maybe? Correct me if I'm wrong.
Can you tell me what each component represents in the 4x4 OpenGL transformation matrix? Or maybe how to convert the OpenGL matrix to a 3x3 OpenCV matrix, so that I can do the transformation and get the surface tracker image?
Hi @user-4c48eb! Usually an unexpected offset indicates a sub-optimal calibration and/or pupil detection, but it's difficult to say for sure without seeing those things. Can you share a screen capture of your calibration choreography?
Unfortunately I don't have one. I'll do a new one ASAP, and if the problem persists I'll come back to you. Thanks for the reply!
Something like this should work for you. As a quick-and-dirty test, I just put it in Surface_Window.gl_display_in_window, so that I could easily turn it on and off by opening the surface window and closing it. You'll definitely find a better spot for it though.
import cv2  # cv2 and numpy (np) need to be available wherever you paste this
import numpy as np

output_size = 512

# get the corners of the surface in the original image
norm_corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
img_corners = self.surface.map_from_surf(
    norm_corners, self.tracker.camera_model, compensate_distortion=False
)

# define the corners of our output image
output_corners = np.array([[0, 0], [output_size, 0], [output_size, output_size], [0, output_size]], dtype=np.float32)

# calculate the transformation matrix
output_trans_mat = cv2.getPerspectiveTransform(img_corners, output_corners)

# apply the transformation matrix
surface_image = cv2.warpPerspective(self.tracker.current_frame.img, output_trans_mat, (output_size, output_size))

# save the image
cv2.imwrite(f'surface-image-{self.tracker.current_frame.index}.png', surface_image)
Ok, thank you for the answer!
Dear people, as you can see in the screenshot, the brightness of the captured eyes is way too high. Does someone know how I can change that? Thank you for your time :)
Hi @user-1aeacd ! Click on the camera icon at the sidebar, there you will find the options to change the exposure and gain.
Perfect It works!! Thank you!!:)
When I play with the absolute exposure time of the world view camera, it seems to give good results at 126. However, I would like to set it to auto, but when I try to, an error message displays saying "WORLD: Could not set value. 'Auto exposure mode'"
Hi, I am trying to upload all the videos, but it seems like I cannot do it because it keeps loading, and after 3 days it's still at 0%.
You'll want to choose aperture priority mode for auto adjustments (apologies for the confusing naming scheme)
Thank you:)
Is there a guide to pupillometry analyses that someone can share?
Hi team, I noticed that the pupil timestamp and gaze timestamp are recorded differently, as shown. I would like to understand the reason for these different timestamps, and whether it is possible to synchronize them. For the pupil timestamp there are duplicates, and I understand that this is due to the method being used, either 2d or 3d. Please ignore the different time format, as it is somehow getting inconsistent due to the European/UTF format, I think.
Hi @user-6cf287. Contained within the pupil data are left and right eye streams. The eye cameras operate independently and are not perfectly synchronised. Gaze data are generated using pupil data - we utilise a data matching algorithm to match pairs of eye images for these gaze estimations. You can read more about the entire process here: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
Additionally, you may want to alter the formatting in your spreadsheet software: https://discord.com/channels/285728493612957698/1149399816149925888/1149717927944278128
Hi Neil thanks for the explanation but I don't understand why the Pupil export started saving the timestamp as integers instead of decimals. Is this because of some settings in the Pupil player export? As I have downloaded the data before and did not face this issue but it is happening now. Also the person did not explain how he/she changed the formatting, any tips on that? thanks!
Hi! Could you please explain how the coordinates for the gaze position are generated? I'm working on creating heatmaps and would like to understand what the x_norm, y_norm, x_scaled, y_scaled coordinates represent.
Hi @user-be0bae ! Are you using the surface tracker? I think this tutorial does exactly what you are looking for. https://github.com/pupil-labs/pupil-tutorials/blob/master/02_load_exported_surfaces_and_visualize_aggregate_heatmap.ipynb
The norm values are normalised to the surface, and the scaled values are relative to the size of the surface. Have a look at the coordinate system of Pupil Core here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
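If you are using the surface tracker, a minimal sketch of building a heatmap from the normalised coordinates could look like this (untested; the CSV name depends on how you named your surface):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

gaze = pd.read_csv('gaze_positions_on_surface_Surface1.csv')  # adjust to your surface name
gaze = gaze[gaze['on_surf']]  # keep only gaze that actually falls on the surface

# x_norm / y_norm are in surface space: (0, 0) bottom left, (1, 1) top right
heatmap, _, _ = np.histogram2d(gaze['y_norm'], gaze['x_norm'], bins=64, range=[[0, 1], [0, 1]])
plt.imshow(heatmap, origin='lower', cmap='jet')  # origin='lower' keeps the bottom-left origin
plt.colorbar(label='gaze samples per bin')
plt.show()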
Thanks for your reply and materials.
It's nothing to do with Player. Rather, Excel is known to misinterpret .csv data depending on Excel's language settings. For example, 2.5 (2,5 in German notation) is interpreted as 2500 because the . in German is the separator for large numbers. This discussion has some tips on how to solve it: https://stackoverflow.com/questions/11421260/csv-decimal-dot-in-excel
Thanks, the issue was somehow resolved after re-exporting the data. I have no explanation.
Hello, my company is interested in eye-tracking system solutions and we have some questions about your Core product. Is it possible to chat?
Hi @user-fa19c6 ! Sure! What would you like to know? To request an online demo, please contact info@pupil-labs.com
Hello @user-d407c1 , thanks!
• Does the Core product have an integrated microphone?
• What analysis software is available for this product?
• Is it possible to have a real-time analysis solution?
• Is it possible to record analysis sessions? If so, is it saved on an SD card or in the cloud?
Thanks for following up! Pupil Core (https://pupil-labs.com/products/core) does need to be tethered to a computer, and it is on this computer that data is stored (so, no SD card or cloud).
After recording, you can load the recording to Pupil Player (https://docs.pupil-labs.com/core/software/pupil-player/) for analysis.
It does not have a microphone built-in, although it can be synced with an external audio source using LSL.
Regarding real-time analysis, there are many parameters that can be accessed in real time. Could you specify which parameters you're particularly interested in? This will help us provide more targeted information.
If you are looking for something more portable and with stereo microphones built in, you may want to check Neon https://pupil-labs.com/products/neon
Dear Community,
I am currently developing an "Intelligent User Interface with Head-mounted Eye Tracker." To enhance its functionality, I need to implement a feature that streams images with eye gaze markers, similar to those seen in Pupil Capture, to other parts of my application.
Could you please advise if there exists any existing helper function or utility within the Pupil Capture ecosystem that facilitates this? Or would it be necessary for me to custom-develop this functionality?
I appreciate your time and assistance:) Thank you!
Hi, @user-1aeacd - did you find what you need? Your project sounds very interesting and I'd love to hear more about it
Thank you! It could be used in different cases: indoor without natural light, indoor with natural light, outdoor... Our needs are plural, but we need to analyze what the user will see, where their gaze will focus, for how long, and if possible real-time verbatims. The scenarios could involve the user using screens or physical products.
Hi @user-fa19c6 ! thanks for following up, I think it can be beneficial to set up a demo and Q&A video call to discuss these cases and how it could fit. Please send an email to info@pupil-labs.com and we can look for a time that suits you.
Hello, thank you for the possibility you give us to ask questions! I would like to buy the software and the eye tracker that goes with it... I have a few questions:
Hi @user-c39646 ! Which eye-tracker are you interested most in? Pupil Core? The software to record and analyse is included in the price, no matter which, and there are no subscriptions.
We do not offer direct software for stimuli presentation, but you can use our products with Psychopy https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html or present it with Psychotoolbox and send annotations https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-in-matlab/
Is there any shared code for Python or MATLAB that allows me to show visual stimuli?
Thank you very much! I'm interested in the Vive Pro full kit, with which I will have to record pupil dilation.
Could you also tell me where I can find the specifications and requirements of a computer to ensure the best performance of the software, please?
@user-d407c1 responded here: https://discord.com/channels/285728493612957698/285728635267186688/1177248889351446668
mgg
May I ask if there are any learning materials for the code of this project? I would like to call the gaze estimation module and annotate the gaze direction. How can I modify the code
Unfortunately we don't have documentation for this. May I ask why you need to run pye3d standalone?
Hi, I am planning to use the Pupil Core eye tracker to investigate smooth pursuit movements as a subject follows a moving target in 3D space. For example, the target may start in the bottom left corner of the field of view at a distance of 50 cm from the observer, and then move to the top right corner of the field of view at a distance of 400 cm from the observer. I was wondering if you had any tips on the best way to perform the calibration to ensure accurate gaze estimation? I plan to use a physical printed calibration marker.
Hi, I've been running into an error when trying to connect to the Core glasses via MATLAB. I've installed zmq, zmq master and the msgpack as pointed out in pupil-helpers. After plenty of troubleshooting I finally got the mex files for the above compiled. I'm now running into a new error when trying to run pupil remote: error using recv "resource temporarily unavailable". Any guidance would be greatly appreciated.
Hi, @user-f76a69 - is there more of the Matlab output you can share? Also, what version of Matlab are you using?
Hi, I'm doing the experiment on a kitchen top, which involves slight head and body movements to reach the objects. For this setting, does using the surface tracker and head pose tracking increase the accuracy of gaze and fixations?
It won't improve the gaze accuracy, unfortunately, as they are separate entities. Good quality pupil detection and calibration are key here.
Hey @user-a09f5d! The main consideration really is ensuring good pupil detection, and then performing a calibration that covers the field of view of what you want to measure. Have you already collected some pilot data? We'd be happy to take a look at your recording and provide some feedback.
Hi @nmt The experiment is still in development but I am hoping to have some pilot data soon, at which point I would be happy to share a sample with you to check the calibration is as good as it can be. That would be fantastic. Thank you!
Hi @nmt If the offer still stands, what is your preferred method (I assume email) for me to send you an example recording?
Hi @nmt Following up on this message from a few months ago. I now have some pilot recordings and unfortunately the gaze estimation is not as good as we hoped/need. This might be in part due to the method of calibration (using a physical printed calibration target). If the offer still stands it would be very helpful if you or your team are able to offer some feedback on a sample recording? If so, what is the best way to send the recording (it is a large file)?
Which example specifically from the Pupil Helpers repo are you trying to run?
I was trying to run pupil_remote_control, as I'm hoping to be able to remotely start a recording from a separate PC.
Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1145994078161485824
Hi team, I would like to know if there are any tips on syncing the eye tracker timestamps with the timestamps from another device B. Device B has a lower update frequency than the eye tracker. My plan is to postprocess the collected data and only have a resolution of 1 second for every data point. In this case, I am wondering whether I should use the mean of the eye tracker recording or just take the closest data point that matches device B's timestamp? Thanks
Hi @user-6cf287! My first question here: what clock did device B collect its timestamps from? Pupil Core uses its own clock to produce 'Pupil Time'. So it might not be as simple to correlate the two as you think.
Hi, I am currently trying to access the world camera frames of the Pupil Labs Core in Python and display them with OpenCV in real time (as you would with a webcam). My intention is to use the world camera frames for some object recognition tasks. I came across the Network API, but unfortunately I couldn't find anything helpful regarding accessing the world camera data. Maybe I missed something? Another solution is the implementation of a plugin with the Plugin API. However, my question is whether there is a simpler solution. Thanks in advance.
Hi @user-1f606a ! OpenCV is known to be unreliable, often leading to frame drops. We recommend pyav (https://pyav.org/docs/stable/) instead. We also have some tutorials and helpers that might be helpful:
Hi Neil, device B is connected to the computer as well and the data is also recorded using unix timestamp
In that case, you'd first need to convert Pupil time to unix/system time: https://docs.pupil-labs.com/core/developer/#convert-pupil-time-to-system-time. It would then be feasible, as you suggest, to find the nearest timestamps in each dataset.
That said, system/unix time is known to drift. For the best accuracy, you might want to look at our Lab Streaming Layer plugin: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture
Greetings! My apologies if this question has been asked many times. I have an annotation for each relevant fixation in Pupil Player. However, on export the csv file does not offer a fixation duration column. Is there a way to export both the annotation and the fixation duration in one csv file? This would save me a lot of time manually adding them from the fixations csv!
Never mind, I have managed to get it done through JupyterLab!
Hi @user-cdcab0 , I'm using 2023b on Windows 11. The rest of the output looks like this: Error using recv Resource temporarily unavailable. Error in pupil_remote_control (line 33) Result= zmq.core.recv(socket) ;
Thanks! And, just to be sure, you have the Network API plugin enabled and the port numbers match?
@user-cdcab0 I do
You mentioned in your first post that you are using the ZMQ packages referenced in our Pupil Helpers repo, but other users have reported that those do not work with Matlab 2023. Have a look here for a potential solution: https://discord.com/channels/285728493612957698/446977689690177536/1123432094526361753
Thank you. Yes, we already converted the Pupil time using the code snippet in the link you provided. If I understood correctly, the LSL plugin should be included in Pupil Capture before recording, right? As we have already done the recordings, it seems like we have to go with choosing the nearest point for time matching...
Yes, if the recordings are already made then you'll have to use the matching approach
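A rough sketch of that matching with pandas (untested; device_b.csv and its timestamp column are just placeholders for your own data, and it assumes both columns are already in Unix seconds, i.e. Pupil time has been converted as discussed above):
import pandas as pd

eye = pd.read_csv('pupil_positions.csv')  # adjust paths to your exports
other = pd.read_csv('device_b.csv')       # hypothetical file from device B

eye = eye.sort_values('pupil_timestamp_unix')
other = other.sort_values('timestamp')

# for every device-B sample, take the nearest eye tracker sample in time,
# rejecting matches that are more than 0.5 s apart
merged = pd.merge_asof(
    other, eye,
    left_on='timestamp', right_on='pupil_timestamp_unix',
    direction='nearest', tolerance=0.5,
)
print(merged.head())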
Hello, I am trying to track gaze and fixations on a surface. With the 3D gaze mapping I had a lot of error (approx. 3 deg), so I switched to 2D. The problem is that I now have some issues: post-analysis of the Pupil Player data shows some gaze where I have not looked. You can see the spikes in the bottom left corner. Do you know how to fix it?
What's interesting is that in the video the gaze dot doesn't go there
I actually used that fork mentioned in that channel in order to get the mex compiling process for zmq working.
Are you running Matlab and Pupil Capture on the same PC? If not, tell me about your network configuration
Also, can you share your code?
I am running Matlab on one PC and Pupil Capture on a separate laptop, we created a local network via an ethernet cable. We are using the code as is on the Pupil Helpers repo. It's important for our experiment setup to run Matlab on one PC and Pupil capture on a separate laptop.
Aw, that code as-is assumes both are running on the same PC. You need to modify the endpoint so that it matches the IP address of the computer running Pupil Capture
Hi @user-cdcab0 , no I have not found what I need yet. Do you have any suggestions? :)
You can use OpenCV's circle() function to render a circle on the world camera frame at the location of the gaze data (see: https://docs.opencv.org/4.x/d6/d6e/group__imgproc__draw.html#gaf10604b069374903dbd0f0488cb43670)
@user-cdcab0 Thank you very much for the help, will try it right away :)
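A minimal sketch of that overlay (untested; the frame and norm_pos below are placeholders, in practice you would take them from the world-frame and gaze subscriptions shown earlier). Keep in mind that norm_pos has its origin at the bottom left, while image pixels have theirs at the top left:
import cv2
import numpy as np

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # replace with a real world frame
norm_pos = (0.5, 0.5)                             # replace with a gaze datum's 'norm_pos'

h, w = frame.shape[:2]
x_px = int(norm_pos[0] * w)
y_px = int((1 - norm_pos[1]) * h)  # flip y to go from bottom-left to top-left origin
cv2.circle(frame, (x_px, y_px), 20, (0, 0, 255), 3)

cv2.imshow('world + gaze', frame)
cv2.waitKey(0)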