👁 core


user-412dbc 01 October, 2024, 15:02:47

Is it generally good practice, as part of data pre-processing, to exclude fixations that appear near blinks (e.g., 100-150 ms before and after a blink)? If yes, what is the easiest way to do that? Also, is it necessary to apply this? What are the drawbacks of not doing it? The only thing I am interested in for my analyses is the fixation duration within AOIs, and I am using a Windows system. Lastly, are there any common good practices that will guide me in preparing the datasets for the analyses?

Thank you very much, Panos

user-f43a29 01 October, 2024, 16:58:03

@user-412dbc , you may also find this tutorial about merging fixation and blink IDs into the gaze DataFrame useful.

user-f43a29 01 October, 2024, 16:02:59

Hi @user-412dbc , apologies that your earlier message had gotten lost.

  • Research groups may filter data differently based on blinks, depending on the experimental paradigm. Filtering data for 100-150 ms before and after blinks, as you suggest, is commonly seen in the literature. See this post from my colleague, @user-480f4c , for more info: https://discord.com/channels/285728493612957698/633564003846717444/1270305458493849661
  • If you want to filter data during a blink or around a blink, then you find the corresponding timestamps in the other data streams and simply ignore those datapoints in subsequent analysis. How exactly this is done depends on your programming/analysis environment. In Python, the pandas library can be quite helpful for this (see the sketch after this list).
  • Whether such filtering is strictly necessary depends on your research goals. The decision is ultimately yours. A drawback to not filtering data during a blink is the potential to mistakenly conclude that a person looked at something critical when their eyes were actually closed.
  • If you want fixation duration within AOIs, then you may find the Surface Tracker docs and this Pupil Tutorial helpful. It shows how to load, process, and visualize fixation data for a surface. You can then extract the duration of each fixation on the surface.
  • For common practices, I recommend browsing through the published eyetracking literature. We maintain a list of publications that have used our eyetrackers. You can filter and view publications that only used Pupil Core.
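
For illustration, here is a minimal pandas sketch of that filtering, assuming you have exported fixations.csv and blinks.csv from Pupil Player. The column names and units follow the standard exports, and the exports/000 paths are placeholders, so please double-check both against your own files:

import pandas as pd

# Sketch: drop fixations that overlap a window of +/- 150 ms around any blink.
PAD = 0.150  # seconds

fixations = pd.read_csv("exports/000/fixations.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# Fixation duration is exported in milliseconds; derive an end timestamp.
fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

def near_blink(fix):
    # True if the fixation overlaps any padded blink window.
    overlaps = (fix["end_timestamp"] >= blinks["start_timestamp"] - PAD) & (
        fix["start_timestamp"] <= blinks["end_timestamp"] + PAD
    )
    return overlaps.any()

clean = fixations[~fixations.apply(near_blink, axis=1)]
clean.to_csv("fixations_blink_filtered.csv", index=False)

This is only a sketch of one possible criterion (removing any fixation that overlaps the padded window); you can of course adapt it, for example to only trim the 100-150 ms directly before and after each blink.
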
user-412dbc 01 October, 2024, 19:07:22

Thank you very much Rob for your help, I deeply appreciate your and your colleagues' efforts.

I already took a look at some of these papers and noticed that none of the papers I have read so far explicitly stated that they filtered their data for blink artifacts. Nonetheless, I will try to work that out in Python. Alternatively, I am thinking of raising the minimum fixation duration to 130-150 ms, which would also act as a protective factor against the impact of blinks. From your answer I gather that there are no programs that can load Pupil Core's datasets and remove the blink artifacts automatically, right?

user-f43a29 02 October, 2024, 09:03:03

@user-412dbc no problem. We are happy to help!

No, we do not have a tool that automatically filters the data during a blink. There is some variability in how different research groups do this, and leaving it open gives you the flexibility to try the approaches and paradigms that best fit your use case.

user-4514c3 02 October, 2024, 11:30:22

Hello team, we have a question. We're analyzing an experiment with the help of an engineer who is processing the video images to determine where the participants are looking at each moment and to categorize what they're looking at (whether it's the target, a distractor, etc.). However, he has some doubts about which files are best to analyze. He's using 'world.mp4' to project and extract image data, and 'world_timestamps.npy' to retrieve the correspondence between the timestamps and the video frames. With the 'fixations_timestamps.npy' and 'fixations.pldata' files, a function has been used to extract the information and combine it into a single data structure. From these files, located in the main folder, we primarily get the image coordinates. But there are more files related to fixations within the 'offline_data' folder. We want to make sure we're extracting data from the correct files. Thank you very much!

user-4514c3 02 October, 2024, 11:33:13

Another question: fixations have thresholds that determine whether something is considered a fixation or not (based on angle and time). My concern is that during the task the gaze might move quickly, so certain areas would not be counted as fixations when, in fact, they have been viewed. Should we adjust the fixation thresholds to make the detection more accurate? Or, instead of using fixations, should we rely directly on gaze data or another less processed file? Thank you!

nmt 02 October, 2024, 12:07:29

Hi @user-4514c3! I have some follow-up questions to help me better understand what you're trying to achieve and how I can help:

  1. What exactly are you doing with the scene video frames that requires you to consume real-time fixation data stored in binary format (.pldata)?
  2. Regarding gaze vs fixations - what sort of task were your participants doing?

user-4514c3 02 October, 2024, 18:27:38

We have used the files that we found most convenient to work with since we don't have much experience or proficiency with the device. That’s why we wanted to ask in case we were doing something wrong. Thank you very much. The participants' task is simply to pick up a geometric piece of a certain color from a board.

Chat image

user-c30df9 02 October, 2024, 22:14:42

Hello. I installed the Pupil Core software v3.5 on Ubuntu 24.04.1 LTS, but Pupil Capture, Player, and Service are crashing and not starting. I installed pupil_v3.5-8-g0c019f6_linux_x64. The terminal says cysignals failed. How can I get it to work on my system?

user-a5650e 02 October, 2024, 22:14:53

Hi @user-f7a2f7 , can you try the following:

  • Rather than starting it from the terminal, can you use the mouse to click on the App icon in the Applications launcher?
  • If you are on a laptop with an NVIDIA card and you have NVIDIA drivers installed, then make sure to right-click the app icon and choose “Launch with dedicated GPU”
user-614577 02 October, 2024, 22:15:04

Yes, I did use the mouse to click on the App icon; it crashed and I got the attached error. I don't see "Launch with dedicated GPU" when I right-click. Is it missing any dependency? I only see New Window and Open in Terminal options on right-click.

Chat image Chat image

user-f43a29 02 October, 2024, 22:16:29

@user-f7a2f7 Just so you know, I moved our conversation to the 👁 core channel.

user-6afd86 02 October, 2024, 22:15:31

@user-f7a2f7 Are you on a system with an Nvidia graphics card?

user-f7a2f7 02 October, 2024, 22:25:37

My laptop has an AMD Radeon graphics card and the Pupil software crashes right away. I am running Pupil Capture on another computer that has an NVIDIA GeForce RTX™ 4090. On that computer it starts and I can see the two eye cameras, but it crashes when I hit record and nothing is saved.

user-f43a29 02 October, 2024, 22:34:56

On the system with the AMD card, the error message suggests that Ubuntu is using the default Wayland compositor. Wayland is still a relatively new technology that is not yet fully compatible with all software, and this could be the cause. I recommend trying to start an X11/Xorg session and seeing if Pupil Capture starts without issues then. See here for a way to do that.

Your Nvidia machine will default to X11 if you installed the Nvidia drivers. Regarding the crash on that computer, can you send a copy of the Pupil Capture logs? You will find this in your home directory in the pupil_capture_settings folder.

nmt 03 October, 2024, 04:22:12

There's nothing inherently wrong with consuming those files directly. However, they are an intermediate format, saved by Capture and intended to be loaded by Pupil Player. You can then export data into .csv files as needed. If you haven't already, read the Player docs.

Both workflows have been employed by our users. However, if you want to tweak the fixation detector thresholds, you'll need to use Pupil Player. Choosing the right thresholds for your task can be worthwhile. For an overview, read this section of the documentation.

Another reason to use Pupil Player is Surface Tracking. You have Surface Tracking markers visible in the screenshot you've shared. Do you intend to create AOIs and map your fixations to them?

user-4514c3 09 October, 2024, 15:44:56

Thank you. Yes, we are taking the results through 'exports'. We are using the pldata because we believe that, even after post-calibration, this data is updated by Pupil Player. We found the function to parse these files in your code (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py) (the load_pldata_file function). Since we have it integrated this way, it is easier for us to maintain it as such. If you think this could be a problem, we can switch to the CSV files. But if there’s no issue, using pldata should be fine for us.

What we have detected is that it seems some fixations in the data have the same timestamp. Is this correct? We're not sure if it's due to rounding issues in the float when encoded into the file or something else. In any case, it’s just a small amount (in the example we're analyzing, it's 178 out of 14,553 fixations)
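
In case it's useful, this is roughly how we counted them (just a sketch; 'fixations' is the .data list returned by the load_pldata_file function mentioned above, and the "timestamp" key is what we assumed the datums use):

import collections

timestamps = [f["timestamp"] for f in fixations]
dupes = [t for t, n in collections.Counter(timestamps).items() if n > 1]
print(f"{len(dupes)} duplicated timestamp values across {len(timestamps)} fixations")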

user-471c66 06 October, 2024, 13:13:18

I am currently using the Pupil Core eye-tracking device. The setup includes only one pupil detection camera, on the right side. I have been trying to perform calibration, but I consistently receive the following message:

“gaze_mapping.utils: An unexpectedly large amount of pupil data (> 20%) was dismissed due to low confidence. Please check the pupil detection.” “world - [INFO] accuracy_visualizer: Angular accuracy: 3.308 degrees” “world - [INFO] accuracy_visualizer: Angular precision: 0.123 degrees”

I’ve also noticed that when I keep my head still and look towards the upper-left corner, the device has difficulty detecting my pupil. Attached is an image showing my pupil while looking in the upper-left corner. Could this issue be caused by my eyelashes, and is that why there are inaccuracies in the tracking data?

Additionally, I tried adjusting the Outlier Threshold in the Accuracy Visualizer from the default 5.0 degrees to 1.0 degree, but the error did not decrease. Could you provide advice on how to further improve the tracking accuracy?

Thank you for the help. 😉

Chat image Chat image Chat image

nmt 07 October, 2024, 06:09:57

Hi @user-471c66 👋. The message that over 20% of pupil data was dismissed due to low confidence suggests that pupil detection can be significantly improved. Good pupil detection is the foundation of the entire Pupil Core gaze pipeline, so we should start there. Please share a recording so we can provide concrete feedback:

  1. Restart Pupil Capture with default settings.
  2. Start a recording in Capture.
  3. Set up the eye camera as usual, ensuring the pupil is centred in the camera's field of view.
  4. Ensure the eye model fits well. To do this, slowly move the eyes around (e.g., by rolling them) until the eye model (blue circle) adjusts to fit the eyeball and the red ellipse robustly overlays the pupil, as shown in this video.
  5. Perform a calibration routine.
  6. Perform a validation routine.
  7. Stop the recording and share the folder, e.g., 001, with [email removed]

user-d407c1 07 October, 2024, 09:05:59

@user-471c66 On top of @nmt's answer: given that in one of the images you shared it looks like the eyebrow is confounding the pupil detector, you may want to try setting a region of interest to delimit the area where the pupil can be found. More info and a video in this message: https://discord.com/channels/285728493612957698/285728493612957698/1214561339947876392

user-ac0587 08 October, 2024, 04:40:52

Hi there, my lab has been using the raw MP4 videos from the Core to study blink amplitude. I was wondering if it is possible to set the frame rate of the eye cameras to a fixed one, as the variable one means I cannot determine the blink duration outside of Pupil Player.

user-f43a29 09 October, 2024, 08:17:32

Hi @user-ac0587 , Pupil Core's cameras are free-running. This means that it is not possible to set them to a fixed frame rate when using them with Pupil Capture.

When you export the data from Pupil Player, you can also export the timestamps (in Pupil Time) to numpy and CSV files for the world video and eye videos:

  • eyeX_timestamps.csv and eyeX_timestamps.npy
  • world_timestamps.csv and world_timestamps.npy

You can then close Pupil Player and load those files into Python, MATLAB, etc. to calculate the duration between two eye video frames. The variable frame rate does not prevent such calculations.
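
As a small sketch of such a calculation (the file path and the frame indices here are just placeholders):

import numpy as np

# Load the exported eye video timestamps (Pupil Time, in seconds).
ts = np.load("exports/000/eye0_timestamps.npy")

# Interval between consecutive eye video frames.
frame_intervals = np.diff(ts)
print("mean frame interval:", frame_intervals.mean(), "s")

# Duration between any two frames of interest, e.g. a blink onset and offset
# frame that you identified yourself (indices are placeholders).
onset_idx, offset_idx = 100, 130
print("duration:", ts[offset_idx] - ts[onset_idx], "s")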

user-471c66 08 October, 2024, 15:53:59

@nmt Hi Neil, thank you for the response. I have recorded a clip and followed your instructions. The folder is about 36 MB; I have uploaded it to the cloud: https://drive.google.com/file/d/1PsAMMwtR_TOAn7cIkg2zB11C9DoIy4_n/view?usp=sharing Thank you for the help.

user-471c66 08 October, 2024, 15:56:10

@user-d407c1 Hi Miguel, thank you. I tried different angles, but it seems like it always detects my eyebrow. I saw a video on the official website showing that Core can use an extender, and that might help? But I checked my box, and it does not include that item.

user-d407c1 09 October, 2024, 08:29:23

Hi @user-471c66 ! I do not think the extenders would help in this case, but setting the Region of Interest in the eye window definitely would.

user-ac0587 09 October, 2024, 04:34:29

Hi again, I have been trying to find out how the Blink Detector estimates blink duration. Compared to the Tobii eye-trackers and our own algorithm, Pupil Labs' blink detector gives significantly longer blink durations, and I wanted to know if there is some coefficient that is applied, seeing as the Blink Detector uses the pupil signal to determine blink onset and offset?

user-f43a29 09 October, 2024, 08:23:54

@user-ac0587 , for this question, I recommend checking the Blink Detector documentation.

There is no coefficient applied. Blink onset/offset are computed directly from 2D pupil detection confidence and the associated timestamps are used directly.

Note that you can change the default Blink Detector Threshold.

If it helps, the code for the Blink Detector is available on Github here.
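
If you end up comparing against the Pupil Player blink export, here is a minimal sketch for inspecting the exported durations (the column names are assumed from a standard blinks.csv export and the path is a placeholder; please verify both in your file):

import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")
# Recompute duration from onset/offset timestamps and compare it with the
# exported duration column.
blinks["computed_duration"] = blinks["end_timestamp"] - blinks["start_timestamp"]
print(blinks[["duration", "computed_duration"]].describe())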

user-ac0587 09 October, 2024, 08:39:03

Hi Rob, thank you so much for getting back to me and for your response. I will look into the documentation and the timestamps folders.

user-4c48eb 10 October, 2024, 15:49:28

Hello, I was doing a recording when the pupil capture software crashed with no error message. I now have a recording folder but if I try to open it in the pupil player program I get "player - [ERROR] launchables.player: InvalidRecordingException: There is no info file in the target directory.". Is it possible to recover the data in any way?

Also, while watching the following recordings I get the on-screen error "PLAYER: Error dc\PLAYER: error y=97 x=100" and the pupil data disappears. Any idea if this is recoverable as well?

user-f43a29 11 October, 2024, 07:43:29

Hi @user-4c48eb , yes, if Pupil Capture crashed during the recording, then the files in the folder will be in an incomplete state. Depending on the exact state, it can however be possible to recover some of the data. See here: https://discord.com/channels/285728493612957698/285728493612957698/1265299802208338000 and the linked message here: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144

When you say "the following recordings", which recordings are you referring to?

user-fb8431 11 October, 2024, 12:56:47

Hello everyone, I want to add an appearance-based deep learning gaze estimation method to the Pupil Core code, i.e., perform gaze estimation from the binocular eye images. Is there a convenient way to do this, such as developing a plugin? I have read the Plugin API tutorial on the official website, but I still don't fully understand it. Is there a faster way to learn it?

user-f43a29 14 October, 2024, 09:58:58

Hi @user-fb8431 , do you mean that you have developed your own deep learning gaze estimation model?

If so and you want to use it with Pupil Core, then yes, developing a plug-in is the fastest way.

What part of the process is not clear?

If it helps, you can also refer to plug-ins that others have made. The Gazers plug-ins might be of interest to you

user-4c48eb 13 October, 2024, 14:23:53

Sorry, after it crashed the first time, I launched other recordings. That's what I mean by "the following recordings".

user-f43a29 14 October, 2024, 09:55:52

@user-4c48eb Could you try the following:

  • Restart Pupil Player with default settings. This button is found in the "General Settings" tab.
  • Load up one of those recordings again (but not the recording that crashes Pupil Player).
  • How does it work now?

If you still see the error, then please send a copy of the logs in the pupil_player_settings folder. This folder is found in your home directory.

user-4514c3 13 October, 2024, 17:32:07

Hello! We are currently analyzing data for a poster, but we’ve noticed that the fixations are not being marked correctly. We also have a few questions regarding this:

  1. How can we match the fixations to the world_frame while losing as little information as possible?
  2. Which coordinate system is used (I believe fixations and gaze use a bottom-left origin, while the image has a top-left origin)? (We found a related reference here: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb)
  3. Are the image coordinates referring to the distorted image as shown in world.mp4, or is the image already processed in some way?
user-4514c3 13 October, 2024, 17:43:21

I’m attaching a picture in case it’s unclear. When I open Pupil Player, it shows that the subject is looking at one figure, but when analyzing the data with the files we’re using, it indicates they’re looking at a different figure, and at the same time, some fixations are lost.

Chat image

user-3187b1 14 October, 2024, 04:48:08

Hello! We want to run only the code for gaze mapping (producing and mapping the red circle). Which code do we have to run from GitHub?

user-f43a29 14 October, 2024, 09:53:47

Hi @user-3187b1 , are you trying to map gaze to a surface that is defined by AprilTags?

user-11b3f8 14 October, 2024, 12:13:48

Hi! We are a group of robotics university students and we are currently working on using the Pupil Labs Core for some form of control interface. I am wondering if there is a way to access the live camera feed from the Core's cameras outside of Pupil Capture; I haven't been able to find a solution for that, so I thought I'd ask here. Could there also be some compatibility issues between Pupil Capture and OpenCV?

nmt 14 October, 2024, 13:39:00

Hi @user-11b3f8! The easiest way to do this is with our real-time API - this example script shows you how: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py

You might also be interested in our video backend: https://github.com/pupil-labs/pyuvc?tab=readme-ov-file#pyuvc
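
To give you a sense of the Network API approach used in that first helper script, here is a condensed sketch. It assumes Pupil Capture is running locally with its default Pupil Remote port (50020) and that the Frame Publisher plugin is active with the BGR format, as in the helper script:

import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Ask Pupil Capture's Remote interface for the subscription port.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to scene camera frames.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()
    meta = msgpack.loads(payload)
    # The raw image bytes arrive as an additional message part; reshape them
    # into a regular BGR numpy array using the frame metadata.
    img = np.frombuffer(extra[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)

Regarding OpenCV: there should be no fundamental compatibility issue, since the frames arrive as plain numpy arrays that cv2 functions accept directly.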

user-fb8431 14 October, 2024, 13:58:02

Ok, thanks, I will try it.

user-fce73e 14 October, 2024, 15:21:01

Hello! We are making good use of the Pupil Labs Core. However, we have a question. Looking at the CSV file, it seems that gaze_normal0 (x, y, z) and gaze_normal1 (x, y, z) are used to infer norm_pos. We're curious about the procedure for how this happens. Could you point us to which part of the source code we should look at? I think it might be in the gaze mapping section of the shared modules, but it's more difficult to analyze than expected, and it seems challenging to run only that specific part of the code. Also, are there any examples in Pupil Helpers or Pupil Tutorials related to this? As a graduate student in the field, I have gained significant help from your products, and I look forward to your response. Thank you.

user-f43a29 14 October, 2024, 15:53:05

Hello @user-fce73e , yes, for the 3D calibration pipeline, the gaze_normal_0/1 values are relevant for determining norm_pos_x/y.

The relevant part of the gaze mapping (i.e., the one based on the default 3D calibration pipeline) is in gazer_headset.py. It contains methods for monocular and binocular gaze mapping. In particular, check the predict and _predict_single methods. For example, the lines that transform gaze_normal to norm_pos are here.

While there are no specific Helpers or Tutorials for the transformation, some details for the 3D calibration pipeline are contained in the description of pye3d and in the associated publications linked at the bottom of that page. In particular, the publications contain descriptions of the gaze mapping procedure.

It is true that these classes/methods depend on other parts of the Pupil software, so you would need to extract a fair amount to run it separately. May I ask why you need to extract the code rather than use the standard Pupil software or the Network API?

user-f43a29 14 October, 2024, 17:30:15

@user-3187b1 , for a saved recording, the position of the red circle in pixels (i.e., in 2D world camera coordinates) is essentially provided by Pupil Player. You load the recording into Pupil Player, enable the Raw Data Exporter plugin, and scale the resulting norm_pos_x/y value in the exported gaze_positions.csv file. The exact scaling factor will depend on the resolution setting you have chosen for the world camera.

You can also obtain norm_pos_x/y in real-time via the Network API.
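
As a quick example of that scaling (the 1280x720 resolution and the file path here are only assumptions; use the resolution you configured for the world camera):

import pandas as pd

WIDTH, HEIGHT = 1280, 720  # assumed world camera resolution

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze["x_px"] = gaze["norm_pos_x"] * WIDTH
# norm_pos has a bottom-left origin, while image pixels have a top-left
# origin, so the y axis is flipped.
gaze["y_px"] = (1.0 - gaze["norm_pos_y"]) * HEIGHT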

Gaze position in 3D real-world coordinates is more involved. While the Raw Data Exporter plugin also exports gaze_point_3d_x/y/z, the Z component of this value can be unreliable over about 1 meter distance.

However, it is of course possible to determine how gaze maps to 3D world coordinates. One approach is to use computer vision algorithms. I would recommend checking our publication list to see how others have approached this problem with Pupil Core.

user-fb8431 15 October, 2024, 09:28:36

Gaze Plugin

user-76014e 15 October, 2024, 13:40:26

Hello - is there a product suitable for urban outdoor research? Is there a special offer for a university?

user-f43a29 15 October, 2024, 14:59:52

Hi @user-76014e , our latest eyetracker, Neon, is mobile and designed to be used outdoors and indoors. To be sure I can give you the best answer, may I ask exactly what kind of research you are doing?

And yes, we offer academic discounts: a 700 EUR reduction off the cost of Neon bundles, for example.

If you'd like, you can schedule a 30-minute Demo and Q&A call to learn more. Just send an email to info@pupil-labs.com

user-a3c818 16 October, 2024, 14:23:37

Hi everyone! We have just purchased Pupil Labs' Core for our lab, as I was under the impression I could still use the Android app to operate the Core on the go. Does this mean I can only use the Core attached to a laptop/desktop?

user-f43a29 16 October, 2024, 15:41:05

Hi @user-a3c818 , do you mean the Pupil Invisible or the Neon Companion App? Just to clarify, those apps are not compatible with Pupil Core.

Pupil Core can also be used when connected to a single board computer, but the preferred solution for mobile eyetracking is Neon. We can provide a Demo and Q&A call about it, if you'd like.

user-24f54b 16 October, 2024, 22:15:49

Hi all, can someone help me? One of the wires on my bare metal neon's PCB got disconnected. Can I resolder that or do the PCB and wire need to be replaced?

user-f43a29 17 October, 2024, 07:31:39

Hi @user-24f54b , we received your email and will continue communication with you there.

user-335ee0 17 October, 2024, 05:00:11

Hi, Has anyone added a second scene camera to the core?

nmt 17 October, 2024, 08:08:54

Hi @user-335ee0! May I ask what the use case is? Physically, it is of course possible to add a second scene camera to Core's frame, although our software doesn't support capturing from two scene cameras concurrently.

user-fb8431 17 October, 2024, 09:11:11

Hello everyone, I have a question. My custom deep learning gaze estimation plugin inherits from 'PupilDetectorPlugin'. I want to get images of the left and right eyes at the same time, but with 'frame = event.get("frame")' I only receive data from either the left eye or the right eye, seemingly at random. Is there a way to obtain data from both eyes at the same time?

user-cdcab0 17 October, 2024, 10:15:22

Hi, @user-fb8431 - let me see if I can help

I want to get the images of the left and right eyes at the same time

This is fundamentally impossible. The left and right eye cameras are two separate sensors on two separate devices. They will basically never capture images at the exact same moment in time - and if they ever do, it's a coincidence. You will need to capture one frame and save it in a variable to be processed with a frame from the other camera once that one arrives (see the sketch at the end of this message).

...only The data of the left eye or the right eye can be obtained randomly

Well, it's not really random. The cameras operate at their configured sampling rate and, as soon as a sensor has transmitted an image to the PC, you'll see an event for it in your plugin. The frame rates aren't exact, though, so you'll see some fluctuations in the sampling rate, but that's as random as it gets.
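
To illustrate the caching idea, here is a minimal, plugin-agnostic sketch. The function names, the eye-id convention, and the pairing tolerance are all placeholders to adapt to your own plugin:

PAIR_TOLERANCE = 0.010  # seconds; how close in time two frames must be to pair them

latest = {0: None, 1: None}  # most recent (timestamp, image) per eye id

def on_new_frame(eye_id, timestamp, image):
    # Call this whenever a new eye frame arrives in your plugin.
    latest[eye_id] = (timestamp, image)
    other = latest[1 - eye_id]
    if other is not None and abs(other[0] - timestamp) <= PAIR_TOLERANCE:
        process_pair(eye0=latest[0], eye1=latest[1])

def process_pair(eye0, eye1):
    # Run your binocular gaze estimation model on the paired images here.
    pass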

user-a4aa71 18 October, 2024, 11:08:37

Hello everyone, I was wondering: to use the Surface Tracker plugin, is it necessary to use AprilTag markers, or can you also use custom markers to define the surface of interest (for example, colored squares)?

user-cdcab0 18 October, 2024, 11:12:59

Hi, @user-a4aa71 - you do need to use AprilTag markers

user-0da520 18 October, 2024, 12:30:58

Hello! I have installed the Pupil repo on Ubuntu and was able to run python main.py capture. I want a minimalistic run without the world camera and other things. I just want to get the 3D coordinates of the pupils in real time. Is it possible? Do you have code examples for similar tasks that I could learn from?

nmt 18 October, 2024, 12:32:52

Hi @user-d71076. I've moved your message to the appropriate channel. What you suggest is possible. There are a few ways you could achieve this. The easiest might be to run Pupil Service, which is a minimalistic capture software: https://docs.pupil-labs.com/core/software/pupil-service/#pupil-service

Would that meet your requirements?

user-d71076 18 October, 2024, 13:35:55

Thank you.

If I run python main.py service from this repo: https://github.com/pupil-labs/pupil
I see 2 eye images with tracked pupils colored in red. I would like to have the 3D coordinates of the pupils, or the full gaze messages, at 200 fps.

Ideally I would prefer to get it in Python. For example, I tried to use pupil_labs.realtime_api.simple with the following code:

from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
gaze_sample = device.receive_gaze_datum()
left = (gaze_sample.eyeball_center_left_x, gaze_sample.eyeball_center_left_y, gaze_sample.eyeball_center_left_z)
right = (gaze_sample.eyeball_center_right_x, gaze_sample.eyeball_center_right_y, gaze_sample.eyeball_center_right_z)

but it requires a phone with the Neon Companion app installed. I tried to find a way to avoid using the cell phone and to connect the glasses directly to my PC, in order to simplify my setup and to receive the gaze data as fast as possible (without going over the network).

Is it possible to connect the glasses directly to a PC (Ubuntu) and receive gaze data in Python code at 200 fps?

nmt 18 October, 2024, 14:00:38

@user-d71076, so you're actually using Neon? I was thrown off the scent given that you were talking about Pupil Capture, which is predominantly Pupil Core software.

user-b96a7d 18 October, 2024, 14:39:52

Hey andre! I am also an enthusiast of using the sensors directly rather than putting a phone in between, for simplicity 😉

Last week I started writing my own plugins for the capture software. https://github.com/paulijosey/pupillabs_capture_plugins I tried to document the code with everything I figured out (you can also check the "plugin.py" file in the pupil_src/shared_modules dir to get more info).

Honestly, I am not sure what rate (Hz) you can get the gaze data at, but it should be easy to log with those pointers :). Have fun and keep me posted.

(P.S.: if you figure out a way to get IMU data with the Capture software, I'd be grateful 😉)

nmt 18 October, 2024, 14:08:08

It's technically possible to run Pupil Capture/Service software with the Neon system. But note that it is experimental. Let me take a step back - can you elaborate on your end goal with all of this, what's your research question/application?

user-cc6071 21 October, 2024, 05:15:47

Hey everyone! 🍀

We're working on a new research project that involves tracking the eye movements of both a child (aged 3-5) and their parent while they watch cartoons together. We’re using two computers, each connected to a separate eye tracker. We’re doing calibration one after the other—first for the parent, then for the child. So far, everything works smoothly for individual recordings, but we’re about to start testing with dyads, and we want to make sure everything is set up correctly before we dive in 💪 .

I have a few questions and would really appreciate any advice:

Syncing two trackers: Do we need any additional tools to ensure proper synchronization between the two eye trackers running on separate computers? Is there a recommended way to align the data from two separate devices?

Security concerns: We’re using university computers, so we want to make sure that our experimental setup isn’t accidentally tampered with by others. Any tips on how to lock down our setup or prevent accidental interference?

Calibration tips: Do you have any suggestions on how to handle calibration for both participants (child and parent) to get the most accurate data? Should we be doing anything differently for dyads?

Additional tools or plugins: Are there any extra plugins, software, or tweaks we should consider for running two eye trackers simultaneously? Anything that might help us with data collection or processing?

We’re writing everything in Python using PsychoPy, and it’s been great so far for individual tests. Just want to make sure we’re covering all the bases as we move to working with parent-child pairs.

Thanks so much in advance for your help! We’d be super grateful for any insights or recommendations 😊

user-c2d375 22 October, 2024, 10:23:42

Hi @user-cc6071 👋 Thanks for sharing the details of your research project!

Yes, it is possible to synchronize two Pupil Core devices. To start/stop recordings simultaneously on two different computers running Pupil Capture, use the Pupil Groups plugin. For temporal synchronization between the two Pupil Capture instances, I recommend the Time Sync plugin.

As for securing your setup, Pupil Capture does not have built-in features for locking down the experimental environment. However, it keeps any enabled plugins active for future sessions and has a folder structure that prevents accidental overwriting of recordings through progressive numbering. Since you're sharing computers at the university, I strongly recommend backing up both your recordings and pupil_capture_settings folders after each session. This will help ensure that your data and settings are safely stored, preventing unintentional loss. If any settings in Pupil Capture are changed, you can easily restore them by replacing the current settings folder with your backup. Additionally, consider backing up the recordings folder after using Pupil Player to safely store the export folders generated during data export.

user-3187b1 22 October, 2024, 06:07:58

Hi, I have a question about intrinsic/extrinsic parameters. I have learned that intrinsic parameters can be obtained from camera_models.py. After performing calibration, is there a way to export the extrinsic parameters separately? Thank you

user-b96a7d 22 October, 2024, 07:07:57

also interested here 👆

user-f43a29 23 October, 2024, 22:01:06

Hi @user-3187b1 and @user-b96a7d , I assume you are looking for the rotation and translation components of the extrinsics (i.e., the rvecs and tvecs)? Those are computed when you run a Camera Intrinsics Estimation for your Pupil Core cameras:

You will need to run Pupil Capture from source and modify the code to output those values.

May I ask what you will be doing with these values?

user-c2d375 22 October, 2024, 10:23:55

For the calibration, there aren’t any specific tips tailored for dyadic screen-based experiments. However, ensure that the pupil is centered in the camera's field of view when positioning the eye cameras, and that the 3D model is properly fitted around the eye. I also recommend reviewing our best practices for conducting experiments with Pupil Core to ensure accurate data collection.

Regarding data processing, could you elaborate on how you intend to work with your data? This will enable me to provide specific recommendations.

user-3b5a61 23 October, 2024, 19:19:47

Hey all, I would like to use Core on a participant and use an external camera to film them while they wear it. Is there an already-developed way to synchronise both recordings? Or is it possible to get the clock value (or something similar) from Pupil Capture? That way I can left-join the clock values from both and then sync them.

user-f43a29 23 October, 2024, 22:34:53

Hi @user-3b5a61 , you may find this message about syncing with other cameras helpful: https://discord.com/channels/285728493612957698/446977689690177536/903634566345015377

To get the clock value (i.e., timestamps) for data (in a CSV file), you can load a recording into Pupil Player and export the Raw Data. The timestamps will be in Pupil Time, which can be converted to System Time using the code here.
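
For example, here is a small sketch of that conversion using the offsets stored in the recording's info.player.json (the field names are as found in a typical recording, and the path and example timestamp are placeholders; please verify against yours):

import json
import datetime

with open("recording/info.player.json") as f:
    info = json.load(f)

# Offset between System Time (Unix epoch) and Pupil Time for this recording.
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_system(pupil_ts):
    return pupil_ts + offset  # Unix timestamp in seconds

# Human-readable time, useful for matching against the external camera's clock.
print(datetime.datetime.fromtimestamp(pupil_to_system(1234.567)))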

user-eac531 24 October, 2024, 11:32:26

The connector and wiring of the eye camera have come off. Is it possible to reconnect them?

nmt 24 October, 2024, 12:03:39

@user-eac531, can you share a photo so we can better understand the situation and provide feedback?

user-76ebbf 31 October, 2024, 12:05:30

Hi. I have a question regarding calibration. In Core, there are three calibration methods: (1) using a screen marker, (2) using a single marker, and (3) using natural features. When I tried methods (2) and (3), they didn't work well, and it displayed that the confidence was below 20%. Is there any reference video that shows how to actually perform calibration?

user-d407c1 31 October, 2024, 12:14:29

Hi @user-76ebbf ! This confidence value comes from the pupil detection confidence. How do your eye camera images look? Have you adjusted the cameras and built the eye model before calibrating? https://docs.pupil-labs.com/core/getting-started/#_3-check-pupil-detection

user-76ebbf 31 October, 2024, 12:28:10

Hi @user-d407c1 , thank you for your response. The eye camera's video is working without any issues, and I am able to capture the pupil and eyeball. Calibration using the screen marker works fine, with a confidence level of almost 100%. I've checked the documentation about calibration on the website, but it's possible that I might be misunderstanding it. Therefore, I would appreciate it if you could provide a detailed explanation or any reference videos regarding method (2), using a single marker, and method (3), using natural features.

user-d407c1 31 October, 2024, 12:40:23

Thanks for confirming! For the single marker, you can use either a printed or screen-based marker, and there are a couple of ways to perform the calibration choreography:

  1. Have the user fixate on the marker and move their head slowly in a spiral pattern.
  2. Keep the user's head still while you move the marker slowly, prompting them to follow it with their eyes.

For the natural features calibration, the process can be trickier, as it requires clear communication to direct the subject to look at and select specific points.

I suggest that you share a recording with us [email removed] so we can review it and provide more tailored feedback.

If you haven't already, I'd also recommend checking out the best practices guide for additional tips.

End of October archive