πŸ‘ core


user-e16f11 02 June, 2025, 12:32:54

Chat image

user-94183b 03 June, 2025, 06:28:17

Hi everyone, I want to calculate the vergence from Pupil Core data. I have these parameters:

target_x = 643
target_y = 315
gaze_pos_vid_x = 649.946106
gaze_pos_vid_y = 302.057208
distance_to_target = 869.268085
pup_diam_l = None
pup_diam_r = 3.94412

Optical axis left (gaze_normal1)

gaze_normal1_x = 0.196103 gaze_normal1_y = -0.204533 gaze_normal1_z = 0.959015

Eyeball center left

eye_center1_3d_x = -40.468474 eye_center1_3d_y = 17.110107 eye_center1_3d_z = -14.915746

Optical axis right (gaze_normal0)

gaze_normal0_x = -0.067185 gaze_normal0_y = -0.151333 gaze_normal0_z = 0.986197

Eyeball center right

eye_center0_3d_x = 20.005274 eye_center0_3d_y = 14.670975 eye_center0_3d_z = -20.640095

3D gaze point

gaze_point_3d_x = 4.37738 gaze_point_3d_y = -24.66304 gaze_point_3d_z = 203.919871

1. Vergence from eye centers and distance to target:

import math
import numpy as np

eye_center_left = np.array([eye_center1_3d_x, eye_center1_3d_y, eye_center1_3d_z])
eye_center_right = np.array([eye_center0_3d_x, eye_center0_3d_y, eye_center0_3d_z])

interocular_distance = np.linalg.norm(eye_center_right - eye_center_left)
vergence_rad_geom = 2 * np.arcsin(interocular_distance / (2 * distance_to_target))
vergence_deg_geom = math.degrees(vergence_rad_geom)

print(f"Vergence (distance-based): {vergence_deg_geom:.2f}°")  # 4.01°

2. Vergence from optical axis vectors:

gaze_dir_left = np.array([gaze_normal1_x, gaze_normal1_y, gaze_normal1_z])
gaze_dir_right = np.array([gaze_normal0_x, gaze_normal0_y, gaze_normal0_z])

dot_product = np.dot(gaze_dir_left, gaze_dir_right)
vergence_rad = np.arccos(dot_product)
vergence_deg = math.degrees(vergence_rad)

print(f"Vergence (optical-axis-based): {vergence_deg:.2f}°")  # 15.52°

I know that the participant is looking at the target, and I'm confident that distance_to_target is accurate. But I'm getting a large difference between the vergence from distance (4.01°) and from optical axes (15.52°). What could explain this discrepancy?

user-d407c1 03 June, 2025, 07:43:20

Hi @user-94183b !

Note that computing distance from the intersection of the gaze normals is a bit tricky, as they rarely intersect cleanly.

That said, if you know the depth is correctly measured and the participant was looking at that distance, I would suggest first checking that the pye3d model is well fitted, that the IPD you measure resembles the one used in the fitting process (bundle adjustment), and plotting the vectors to get a better idea of what's happening.
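A minimal numpy sketch of that check (not an official Pupil Labs method), reusing the variable names defined in the earlier message, finds where the two optical-axis rays pass closest to each other and how far apart they still are at that point:

import numpy as np

def closest_points_between_rays(p1, d1, p2, d2):
    # Returns the point on each ray where the two rays pass closest to each other.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # near zero if the rays are (almost) parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return p1 + t1 * d1, p2 + t2 * d2

left_pt, right_pt = closest_points_between_rays(eye_center_left, gaze_dir_left,
                                                eye_center_right, gaze_dir_right)
midpoint = 0.5 * (left_pt + right_pt)     # pseudo gaze point where the rays nearly meet
gap = np.linalg.norm(left_pt - right_pt)  # how far the rays miss each other (mm)
print(midpoint, gap)

If the midpoint's z-component is far from the measured distance_to_target, or the gap is large, that points to a poorly fitted eye model rather than a problem with the vergence formulas.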

user-94183b 03 June, 2025, 09:19:00

Hi @user-d407c1

Thank you for the suggestions! I followed your advice to check the model fit and found something interesting.

What's working: The 2D gaze mapping appears accurate - my target was at (643, 315) and Pupil Core's gaze_pos_vid shows (649, 302), so the participant was clearly looking at the correct location.

The problem: The depth estimation is completely wrong. My participant used a chinrest with the target at 800-900mm distance, but gaze_point_3d_z shows only 203.9mm (~20cm). This is a 4x underestimation!

My questions: How is gaze_point_3d_z calculated in the 3D gaze mapping process? Since the 2D accuracy looks good, what specific pye3d model quality metrics should I check to diagnose this depth estimation problem?

Chat image

user-d407c1 03 June, 2025, 10:08:10

How it is computed is described in this post by my colleague @nmt (https://discord.com/channels/285728493612957698/285728493612957698/1098535377029038090). He also commented on why it might be inaccurate and shared some ideas on exploring alternatives for estimating depth: https://discord.com/channels/285728493612957698/285728493612957698/1100125103754321951

As for the model itself, I’d recommend opening the debug view on the eye cameras to qualitatively assess tracking and fit.

user-94183b 03 June, 2025, 10:25:14

Just to clarify my understanding: given the inaccuracy in viewing depth estimation that @nmt has discussed, would reliable vergence measurements not be feasible with Pupil Core? And since Pupil Neon only provides optical axis data, would the attached formula (from https://arxiv.org/pdf/2311.09242) not be applicable, making Neon unsuitable for vergence studies as well?

Chat image

user-d407c1 03 June, 2025, 11:46:12

It's important to distinguish between vergence measurement and depth estimation.

While vergence can be characterized, it changes very little at farther distances, which makes it unreliable for estimating depth. This is due to the non-linear relationship between vergence angle and depth (small angular changes correspond to large differences in depth as distance increases) and to individual characteristics, since the two optical axes may not even intersect.
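As a rough worked example of that non-linearity (not from the reply above; the 63 mm interocular distance is an assumed value), the simple geometric model vergence = 2 * arctan(IPD / (2 * depth)) gives:

import numpy as np

ipd_mm = 63.0  # assumed interocular distance
depths_mm = np.array([250, 500, 1000, 2000, 4000])

# Geometric vergence angle for a fixation point straight ahead at each depth
vergence_deg = np.degrees(2 * np.arctan(ipd_mm / (2 * depths_mm)))

for depth, angle in zip(depths_mm, vergence_deg):
    print(f"{depth:5.0f} mm -> {angle:5.2f} deg")

Doubling the viewing distance from 0.25 m to 0.5 m changes vergence by about 7 degrees, whereas doubling it from 2 m to 4 m changes it by less than 1 degree, so a small angular error translates into a large depth error at far distances.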

Also worth noting, the paper you referenced compared vergence angles at different known distances, but at a quick glance, it did not attempt to estimate depth from those measurements; that is a different objective, IMO.

Finally, Neon provides the optical axis of each eye as part of a 3D informed model that is robust to slippage. Relative changes in the optical axes can shed light on vergence behaviour.

user-d407c1 03 June, 2025, 12:45:22

Is it Neon or Pupil Core? Asking because the plot says "Pupil Neon". Regardless, could you share an example recording with data@pupil-labs.com so we can have a look at the recording's data quality as a first step?

user-94183b 03 June, 2025, 12:52:42

Sorry, I uploaded the wrong plot. This should be the correct one. Please ignore the Neon data (as its vergence is based on the optical axes). Unfortunately, I cannot share the participants' gaze data. What data quality problems could be at play here?

Chat image

user-d407c1 03 June, 2025, 12:58:28

The main concern is the pye3d model that models the eyeball https://docs.pupil-labs.com/core/best-practices/#pye3d-model

user-94183b 03 June, 2025, 13:04:55

I did this step and for all the participants I had "a stable circle that surrounds the modelled eyeball, and this should be of an equivalent size to the respective eyeball." However, is there a way to check the model fit on the recorded data?

user-d407c1 03 June, 2025, 13:10:44

Lemme check and get back to you

user-d407c1 03 June, 2025, 15:39:51

In the meantime, since you can't share the recordings, perhaps you can outline the steps and how you compute it such that we can investigate further. Do you use that formula?

nmt 04 June, 2025, 10:10:22

Vergence Angle

user-e16f11 04 June, 2025, 12:12:15

Hello, I would like to know the average end-to-end delay for the pupil data that is sent over the network. Has it been estimated? With my empirical method of estimating delay (60 fps camera footage, stepping through frame by frame), I was able to measure around 100 ms of delay between the eye movement and the corresponding data being received in Unity. I get around 100 fps in the world camera, 110 fps on each eye camera, and well above that in the Unity scene.

user-f43a29 04 June, 2025, 14:35:03

@user-e16f11 What kind of network are you using to send the data? Also, just to clarify, the FPS of the cameras and the latency of the data transmission are two independent processes.

user-a190ae 04 June, 2025, 17:48:10

Hi. I'm facing an issue with the Pupil Core. I'm trying to calibrate (using Screen Marker), but I always get these error messages after trying to run the calibration, even though both eyes' confidence = 1.

Chat image Chat image

user-f43a29 04 June, 2025, 18:20:25

Hi @user-a190ae , could you share a screenshot of what the eye images look like when you get that message? You can send the screenshot via DM if you prefer and then we can continue conversation here.

user-a190ae 04 June, 2025, 18:30:46

Now I see this, I guess that means it has calibrated right?

Chat image

user-f43a29 04 June, 2025, 18:43:32

Hi @user-a190ae , yes. And you should have briefly seen a Calibration Accuracy display after finishing the calibration. You can then proceed to validate the calibration with the T button, if you would like.

user-e83888 06 June, 2025, 15:13:30

Hi, how can I detect fixations and saccades? Is there any tutorial or GitHub repo for that?

I had extracted the LSL data into separate CSVs and was able to get the gaze coordinates; based on that, we got our Regions of Interest, but fixations and saccades are kind of new to me.

user-f43a29 06 June, 2025, 15:36:57

Hi @user-e83888 , great to hear about your progress!

Then, saccades are conceptually the gaze traces between two consecutive fixations, assuming that your experiment did not elicit smooth pursuit eye movements.
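One way to operationalise that (a sketch only, not an official Pupil Labs tool; it assumes a fixation export such as Pupil Player's fixations.csv, with start_timestamp in seconds and duration in milliseconds) is to treat the gaps between consecutive fixations as candidate saccades:

import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values("start_timestamp").reset_index(drop=True)
fix["end_timestamp"] = fix["start_timestamp"] + fix["duration"] / 1000.0

# A candidate saccade spans from the end of one fixation to the start of the next.
saccades = pd.DataFrame({
    "start_timestamp": fix["end_timestamp"].iloc[:-1].to_numpy(),
    "end_timestamp": fix["start_timestamp"].iloc[1:].to_numpy(),
})
saccades["duration_ms"] = (saccades["end_timestamp"] - saccades["start_timestamp"]) * 1000

# Very long gaps are more likely blinks, pursuit, or tracking loss than saccades.
saccades = saccades[saccades["duration_ms"].between(10, 150)]
print(saccades.head())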

user-9cc9a4 08 June, 2025, 21:18:16

Hello @user-f43a29 , Our lab is planning to purchase a Pupil Labs eye-tracking device. After some research, I found the Surface Tracker plugin in Pupil Capture. Our goal is to automatically classify participants' eye fixations onto a grid on a shelf in a visual world paradigm experiment, and to calculate the proportion of fixations per grid area. In this experiment, the participant, sitting on one side of the shelf, is talking to another person on the other side of the shelf and is following instructions from that person such as "move the big funnel."

I’m wondering whether this is achievable using Surface Tracker. For example, we are considering placing AprilTag markers on the corners of the shelf, then using the x_norm and y_norm values in gaze_positions_on_surface.csv to determine which grid cell the participant is fixating on (as described in this tutorial).

In previous studies, we manually coded the fixated objects (from EyeLink II) in Adobe Premiere. We want to avoid such labor.

Chat image Chat image

user-f43a29 09 June, 2025, 07:58:49

Hi @user-9cc9a4 , we also received your email and responded there. Briefly:

  • A sufficient number of AprilTags needs to be in view for Surface Tracking to work correctly. A quick test can help you find optimal tag positioning.
  • Four AprilTags are the minimum, but for your setup, 6 or 8 will probably be better.

And yes, you can use that tutorial.
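As an informal sketch of that grid-cell idea (not an official script; the 4x3 grid and the 0.8 confidence cut-off are arbitrary assumptions), the surface export can be binned like this:

import numpy as np
import pandas as pd

N_COLS, N_ROWS = 4, 3  # assumed shelf grid layout

gaze = pd.read_csv("gaze_positions_on_surface.csv")

# Keep only samples that fall on the surface with reasonable confidence
# (on_surf may load as bool or string depending on the export; adjust if needed).
gaze = gaze[(gaze["on_surf"] == True) & (gaze["confidence"] > 0.8)]

# x_norm / y_norm are in [0, 1] surface coordinates; bin them into grid cells.
col = np.clip((gaze["x_norm"] * N_COLS).astype(int), 0, N_COLS - 1)
row = np.clip((gaze["y_norm"] * N_ROWS).astype(int), 0, N_ROWS - 1)
gaze["cell"] = row * N_COLS + col

# Proportion of gaze samples per cell (a proxy for the proportion of fixation time).
print(gaze["cell"].value_counts(normalize=True).sort_index())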

user-ddd97c 09 June, 2025, 03:14:15

Hey @user-f43a29 We have the Pupil Core running and we looked up, down, right, and left. This figure shows the LSL data, and we're trying to understand it. We have only one eye camera working. We are confident about the confidence data points, but we are not sure about the other values coming from the LSL matrix. The number displayed is the number of the row from which we took the data. Can you assist us in understanding what each of these values is, and how we can merge them with the head movements that we have in the Motive system (OptiTrack)? We are doing everything in MATLAB, so if there is any code you know of that might help us, please share it. Will the microphone data (IMU) be synchronized with LSL with Neon? Is that type of information being transferred? This will help reassure us about purchasing the Neon.

Chat image

user-f43a29 09 June, 2025, 08:16:03

Hi @user-7c5b51 , the format of LSL's gaze metadata is here. In MATLAB, you can also find the channel layout in the following field:

xdf_data.info.desc.channels.channel

The meaning of each of these channels in the context of Pupil Core is described here and you might find the description of Pupil Core's coordinate systems helpful in interpreting those values.

With respect to Neon, just to clarify, I assume you meant "movement data" when referring to the IMU, and not "microphone data"? In either event, while the IMU and microphone data are not currently streamed to LSL, you can still synchronize them post hoc with little effort.

This is because Neon's Events are also streamed/synced over LSL. If you send an Event, you will find it in Neon's timeline and in LSL's timeline and can use that as an anchor point to align all of Neon's other datastreams. This is possible because all of Neon's data are timestamped by the same high-precision clock. If you would like to see IMU data explicitly streamed over LSL, then feel free to upvote this feature request: https://discord.com/channels/285728493612957698/1323065816501194793/1323065816501194793
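A rough Python sketch of that event-anchoring idea (the event name, stream name, and file names here are assumptions; MATLAB users would do the equivalent with load_xdf):

import pandas as pd
import pyxdf

# Neon side: events.csv contains 'name' and 'timestamp [ns]' columns in Neon's clock.
neon_events = pd.read_csv("events.csv")
neon_t = neon_events.loc[neon_events["name"] == "sync.start", "timestamp [ns]"].iloc[0] * 1e-9

# LSL side: find the same event marker in the recorded XDF file.
streams, _ = pyxdf.load_xdf("recording.xdf")
event_stream = next(s for s in streams if "Event" in s["info"]["name"][0])
idx = next(i for i, v in enumerate(event_stream["time_series"]) if v[0] == "sync.start")
lsl_t = event_stream["time_stamps"][idx]

# The offset maps Neon timestamps into the LSL time base (assumes negligible clock drift).
offset = lsl_t - neon_t
imu = pd.read_csv("imu.csv")  # IMU export, timestamped by the same Neon clock
imu["lsl_time"] = imu["timestamp [ns]"] * 1e-9 + offset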

When it comes to integration with a motion capture system, then regardless of whether you are using Neon or Pupil Core, you need to determine the pose of the headset in the motion capture coordinate system. This requires a calibration process. Do I understand that you have already attached motion capture markers to your Pupil Core? As that will also be necessary. You can check this thread for details on the motion capture calibration process: https://discord.com/channels/285728493612957698/1352339286502150205/1352432635812515991

user-fb8431 10 June, 2025, 03:59:44

Hello guys, I am currently using Core to complete a series of testing tasks, and I have encountered some issues. I urgently need to test the degree of eyelid opening and closing, the data recovery time after gaze tracking loss, the average angular error of the 3D estimation, and other key technical indicators. Does Pupil provide measurements for these technical indicators?

nmt 10 June, 2025, 10:30:56

Hi @user-fb8431. Pupil Core doesn't measure the degree of eyelid opening or closing, but it does include a blink detector. You can read more about that in this section of the documentation. May I ask how you're defining gaze tracking loss? Are you referring to blinks, or something else? In terms of angular error, a measure of accuracy is provided during calibration, and again if you perform a validation routine. Further details here.

user-b02f36 11 June, 2025, 01:40:38

Hello! I recently found that heatmap output is available for Neon. Is it available for Core?

nmt 11 June, 2025, 03:34:34

Hi @user-b02f36! Heatmaps are possible with Core when using the Surface Tracker Plugin. Detailed instructions can be found in this section of the docs.

user-b02f36 11 June, 2025, 07:31:23

I got it! Thank you sincerely, Neil!

user-1d9685 11 June, 2025, 07:35:44

Hi, I am encountering high errors when using Pupil Core; the confidence of both eyes is around 0.82. How do I decrease the error from here onwards? I tried adjusting the intensity and the min/max pupil size, but that didn't seem to work.

user-3e8d91 11 June, 2025, 07:48:19

Hi, I want to use the Pupil Core to detect if the eyes are looking at specific things on a vehicle dashboard, but when using Pupil Capture the error is very large. How do I reduce the error? I have tried calibrating, but it shows poor accuracy. Also, I couldn't set the value for backlight compensation, so the eye video is very 'white'. What should I do?

user-3681ab 11 June, 2025, 07:48:30

Hi @user-1d9685 , first, you want to set the eye cameras back to auto-exposure. You find this setting by clicking on the Camera icon in each eye camera window. If that does not resolve it, then a general Restart with default settings might be worth it, as found in the General settings tab of the main Pupil Capture window.

With respect to the gaze accuracy, you first want to ensure that pupil detection confidence is high before proceeding to calibration. Can you send a screenshot of what Pupil Capture and the eye images look like before you start calibration?

user-f3b922 11 June, 2025, 07:48:39

Hi @user-f43a29 These are the images

Chat image

user-f43a29 11 June, 2025, 08:01:22

Hi @user-1d9685 , thanks for sending the image. It seems that you've run a validation, rather than a calibration, here. To run a calibration, press the C button.

Then, make sure to hold your head relatively still while only moving your eyes to each target. This should already improve calibration.

With respect to pupil detection confidence, while 0.82 is good, we think you can achieve >0.9. It seems that the left eye could be more centrally positioned in the eye camera. In other words, when that eye looks to the side, the pupil will no longer be in view of the eye camera. Also, is the participant potentially wearing mascara?

user-64951d 11 June, 2025, 07:48:49

I set the cameras to autoexposure, so that is fine now

user-45a355 11 June, 2025, 07:48:57

But there is a lot of error

user-5d91c6 11 June, 2025, 07:49:05

When I look at things, for example the edge of the screen, there is about 20-25 mm of error.

user-e16f11 11 June, 2025, 13:06:32

Hello, I was wondering if it is possible to easily change the calibration choreography so that the targets gradually shrink, or something similar? The goal is to get the best calibration results by getting the subject to focus on the middle of the target as much as possible.

user-f43a29 11 June, 2025, 14:40:20

Hi @user-e16f11 , the Pupil source code is completely free & open-source, so you can certainly change that part!

You can also make plugins that implement custom calibration choreographies. The community has contributed some here that you can use as reference.

However, before doing that, can I ask what kind of accuracy you are already getting? And the goal of your research is real-time interaction in 3D, correct? Do you have a rough estimate of your accuracy requirements?

Note that the calibration process depends on detecting the calibration targets in the world camera image and if they are too small, then this could potentially make accuracy estimates worse. I'm also not sure if the shrinking approach will force subjects to have smaller or more precise eye movements? Can you describe how you've determined that?

user-1d9685 11 June, 2025, 16:26:25

Hi @user-f43a29 I actually later achieved 0.99 eye confidence; even after that, there was error. Any potential solutions for that? And I used the calibration mode only, by clicking C.

user-f43a29 11 June, 2025, 16:29:01

Would you be able to make a recording in Pupil Capture of the process and share that with [email removed]? You can put it on Google Drive, for example. That will help us narrow down the issue.

user-fb8431 12 June, 2025, 04:54:24

Hello guys, I want to know how to obtain the error between the estimated gaze point and the actual target point when verifying gaze error by clicking T; I want to compare the distance error between the two points.

nmt 12 June, 2025, 05:03:31

This is what we call 'Accuracy'. It's calculated as the average angular offset (distance) in degrees of visual angle between fixation locations and the corresponding locations of the fixation targets, and it's already provided by Pupil Capture after you run the validation (T) routine.
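For reference (a sketch of the underlying idea, not Pupil Capture's exact implementation), the angular offset between a gaze direction and a target direction, both expressed as 3D vectors in the scene camera coordinate system, can be computed per sample like this:

import numpy as np

def angular_offset_deg(gaze_vec, target_vec):
    # Angle in degrees between a gaze direction and a target direction.
    g = np.asarray(gaze_vec, dtype=float)
    t = np.asarray(target_vec, dtype=float)
    cos_angle = np.dot(g, t) / (np.linalg.norm(g) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical example: gaze_point_3d vs. a known target position, both in scene camera coordinates (mm).
print(angular_offset_deg([4.4, -24.7, 203.9], [0.0, -20.0, 210.0]))

Averaging this over the fixation samples collected during validation gives a number comparable to the Accuracy figure that Pupil Capture reports.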

user-e16f11 12 June, 2025, 07:56:14

I am getting 1 degree of accuracy using the 3D gaze mapping calibration and around 0.3 degrees using the 2D gaze mapping calibration; both are best-case scenarios. Across different users, the average I got was around 2-3 degrees with 3D gaze mapping and 0.6 degrees with 2D gaze mapping.

The issue is that in my use case I need the most accuracy I can get from the calibration process, as my experiments rely heavily on accuracy. 2D gaze mapping is not realistic because my experiments also involve head movement, and 2D gaze mapping does not compensate for head movement that well, especially on the Z-axis (distance from the screen, especially moving away from it). The accuracy I get using 2D gaze mapping is roughly what I need, but I noticed that I can't achieve that with 3D gaze mapping.

So my thought process was to modify the calibration choreography in order to get the most out of that step, before having to process the gaze data, manage outliers, and possibly apply offsets. The shrinking idea was just a proposition; I don't have anything to back it. The idea is to explore ways by which I might be able to improve the calibration results consistently.

user-f43a29 12 June, 2025, 23:03:30

Hi @user-e16f11 , first, just to clarify, the 2D pipeline does not mean "no head movement". The 2D and 3D calibration pipelines are two different ways to convert pupil detection into a gaze ray. Both are compatible with & independent of head movement, although with the 2D you want to be more careful that the headset does not slip. The 2D pipeline is sensitive to slippage, whereas the 3D pipeline is more robust to that.

Are you trying to use gaze_point_3d_z to estimate distance from the monitor? If so, please note that this value is not reliable for distances >= 1m.

With respect to accuracy, 0.3 to 0.6 degrees of accuracy would already be considered excellent in many branches of eyetracking research. Even with a chinrest, ~0.3 degrees would be considered at or close to the limit for a variety of equipment. Having said that, you can certainly try other methods and please let us know how it goes! You may want to give this Nine Point Calibration a try, for example.

Otherwise, note that it is not really possible to achieve <1 deg of accuracy in general with Pupil Core's 3D pipeline. But the 3D pipeline gives you richer data in exchange.

Can you share a screenshot or video of what parts of your stimuli require such levels of accuracy?

user-412dbc 12 June, 2025, 08:29:49

Hey, one of the eye windows shows these rectangles which did not appear before for some reason. Do you know what they are and why they appear? Also, the participant has 4 diopters of myopia in that eye, is this relevant? Thank you in advance!

user-f43a29 12 June, 2025, 22:44:49

Hi @user-412dbc , it seems "algorithm" mode was activated. You can change it back to the default by clicking the 2D icon on the right of each eye camera window and changing the appropriate setting.

user-4a1dfb 13 June, 2025, 19:15:17

Question: if we want to build the Pupil Core device ourselves and have purchased the upgrades (Logitech C615 and the DSL235D-650-F3.2), are we supposed to take out the lens inside the Logitech and screw in the new lens?

Also, the C615 seems to have a built-in autofocus motor; with the new lens, is it still able to correctly focus on the environment?

user-6b8501 13 June, 2025, 20:41:43

Hello, I have recently been considering the Core glasses, but I want to use them mainly for research with handheld phones and tablets. Can the Core glasses' eye tracker tell when I'm looking at parts of my phone screen (e.g., individual menu items)? How small do those items need to be (in pixels or degrees of visual angle) for the tracker to spot my gaze accurately? Thank you!!

user-f43a29 16 June, 2025, 07:55:07

Hi @user-6b8501 and @user-9cc9a4 , yes it is possible to use Pupil Core that way. There is even a post on our Research Blog about it.

May I ask why you prefer Pupil Core for this task? If it helps, our most recent eyetracker, Neon, is easier to use, offers richer data streams, and is more comfortable to wear. We also have a guide on how to use Neon's newer analysis tools to study gaze on phones/tablets.

And, do you mean how large do the items have to be, rather than how small? To clarify, gaze is tracked by the eye cameras. Estimations of gaze do not use info about the world or surfaces in front of the wearer. So, objects can be small or large, or close or far, and gaze is tracked the same. Rather, it is a question of whether the gaze tracking accuracy is sufficient for your purposes and if the human observer can see the small items. The requirements for your experiment can be evaluated by checking publications in your line of research to see what others have done.

user-9cc9a4 14 June, 2025, 23:51:29

I would also like to know whether this is possible.

user-c9ce7b 16 June, 2025, 04:18:32

Hi everyone, I am new to the Pupil Core product and I am interested in using it with pupils while they read a specific exam. Is there anyone I could talk to about how to use the device and how to record the reading of the words, etc.?

user-480f4c 16 June, 2025, 06:27:20

Hi @user-f427b0! You can find more details on how to get started with Pupil Core in our documentation. May I ask - are you planning to work with screen-based reading tasks?

user-4514c3 16 June, 2025, 16:38:38

Hi team! I was wondering if you apply any filters to the gaze data, or if it's the raw data. Thank you!

user-f43a29 16 June, 2025, 17:29:00

Hi @user-4514c3 , we provide you with the raw data.

user-357ae8 16 June, 2025, 18:30:46

Hi everyone, I am new to Pupil Core and I want to try to use the eye tracking glasses for a study. The participants need to be able to move objects around, and I am trying to find the error between where they are looking and where I want them to look. Is there a way to do this in Pupil Player without having actual markers that the participants can see?

user-f43a29 17 June, 2025, 07:16:15

Hi @user-357ae8 , if your intention is to know where they look on different surfaces, then using Pupil Core's Surface Tracking plugin with the AprilTag markers is the standard approach.

Alternatively, you could try applying recent machine learning and DNN algorithms to the world camera feed. Algorithms for determining the 3D shape and position (i.e., the "pose") of objects in the scene could be helpful here.

user-f427b0 19 June, 2025, 05:02:20

Thank you Nadia, yes, I have gone through the documentation. And yes, I am planning to work with reading-based tasks. So any suggestions on data collection, but also on data analysis, would help.

user-480f4c 20 June, 2025, 11:37:20

Hi @user-f427b0! For screen-based tasks, I'd recommend adding markers to your screen and using the Surface Tracker plugin, which enables you to map gaze data directly onto the screen and access it in screen-based coordinates.

I believe you're already in touch with my colleagues and will soon have your onboarding workshop! This topic will be covered in detail during that session as well. 🙂

user-6b8501 24 June, 2025, 00:39:43

Hi Rob, thank you for your reply and for sending the blog post. We are looking at different eye tracking glasses solutions for testing usability in mobile games. So the mobile phone would be handheld and can't be fixed in place. We were looking at Core because we liked that it has no glasses, which would not interfere with gameplay. The mobile device in the blog post seemed to have some dots around it to help with calibration; is that a required setup? Thank you!

user-f43a29 24 June, 2025, 09:00:30

Hi @user-6b8501 , while Pupil Core is very powerful, Neon is an improvement on it in every way. You should actually find less interference with Neon than with Pupil Core. You can sometimes see Pupil Core's eye cameras a bit in your field of view, and if your participants need glasses, it can be awkward to use. Since Pupil Core requires calibration, you also have to be careful that participants don't touch it. If it slips, you have to pause the experiment, reset the headset, and re-calibrate. With Neon, you don't have to think about this. You simply put it on and are eyetracking.

Relative to Pupil Core, Neon's shape has also been improved and optimized to comfortably fit more face physiognomies. With Neon, we offer frames with swappable magnetic lenses to comfortably accommodate those who need glasses. There is also a minimal, Pupil Core-esque frame called "Is this thing on?".

The mobile device in the blog post had AprilTag markers attached to it. They are not used for calibration, but are rather used by both Pupil Core and Neon to detect the position & orientation of the mobile phone's surface and project gaze onto it. The AprilTags are a standard way to map the gaze signal to surfaces with our eyetrackers. You could also use the same approach from that blog post with Neon. If you prefer a markerless approach, then you could try modern DNNs, such as pose estimation networks.

There is also the Reference Image Mapper for Neon, which is markerless, but it assumes that Areas of Interest are stable, not moving. If you know the position of the mobile phone over time, then you could potentially integrate it with Tag Aligner.

user-c455b4 24 June, 2025, 07:04:42

Hi all, I have successfully set up Pupil Capture to work with Core on a Debian 12 machine. Everything works just fine, except Capture does not detect the installed LSL plugin. I placed pupil_capture_lsl_recorder.py and a symlink to pylsl/ into ~/pupil_capture_settings/plugins, yet the plugin does not show up in Pupil Capture's Plugin Manager. Starting pupil_capture from the command line yields the following message: world - [WARNING] plugin: Failed to load 'pupil_capture_lsl_recorder'. Reason: 'module 'pylsl' has no attribute 'StreamInfo''. I have no idea what that means and would be grateful for any hints.

user-f43a29 24 June, 2025, 07:59:02

Hi @user-c455b4 , can you try again with pylsl version 1.16.2?

user-c455b4 24 June, 2025, 08:37:02

Hi @user-f43a29 and thank you for your fast reply. I issued python3 -m pip install --force-reinstall -v "pylsl==1.16.2", and yet get the same message as before: world - [WARNING] plugin: Failed to load 'pupil_capture_lsl_recorder'. Reason: 'module 'pylsl' has no attribute 'StreamInfo''. Any more suggestions?

user-f43a29 24 June, 2025, 08:46:35

You may also want to try a fresh install of that version, rather than a re-install:

pip uninstall pylsl
pip install pylsl==1.16.2
user-f43a29 24 June, 2025, 08:43:21

Hi @user-c455b4 , thanks, you should also try the following:

  • Enter python -m site into the console.
  • Look for the USER_SITE entry in the output.
  • Go to this directory and copy the pylsl directory in there to /home/<your username>/pupil_capture_settings/plugins. This should also include the necessary liblsl.dll file.
  • Make sure that the files from the Pupil Capture LSL plugin are also in that plugins directory. See the image below for how it looks on a Windows machine.
  • Let us know how that works out.

If you instead installed pylsl into a virtual environment, then you can similarly search for the pylsl directory in the .venv/lib directories.

Chat image
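For anyone following the steps above, a small Python sketch of the copy step (not an official script; the paths are assumptions based on the defaults mentioned here):

import shutil
import site
from pathlib import Path

user_site = Path(site.getusersitepackages())  # the USER_SITE directory from `python -m site`
src = user_site / "pylsl"
dst = Path.home() / "pupil_capture_settings" / "plugins" / "pylsl"

# Copy the whole pylsl package (including its bundled liblsl library) into the plugins folder.
shutil.copytree(src, dst, dirs_exist_ok=True)
print(f"Copied {src} -> {dst}")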

user-c455b4 24 June, 2025, 08:54:54

I uninstalled pylsl and installed version 1.16.2, but the message is still the same. The contents of the plugin directory look as expected:

[email removed] ls -l pupil_capture_settings/plugins/
total 8
-rw-r--r-- 1 <myname> <myname> 7388 23. Jun 17:06 pupil_capture_lsl_recorder.py
lrwxrwxrwx 1 <myname> <myname>   29 24. Jun 10:33 pylsl -> /home/<myname>/pyenvs/pylsl/

user-c455b4 24 June, 2025, 08:58:44

OK, so symlinking does not do the job, as opposed to what the docs say. I copied the pylsl directory into the plugins directory, et voila!

user-f43a29 24 June, 2025, 09:03:33

Thanks for the feedback. We will update the Documentation there accordingly.

user-c455b4 24 June, 2025, 08:59:07

Thanks a lot for your assistance!

user-f43a29 24 June, 2025, 09:03:01

You are welcome!

user-1d9685 24 June, 2025, 10:42:07

Hi, I want to export the surface video that I get when clicking "Open Surface in Window", and then overlay the heatmap on it. How can I do that?

user-d407c1 24 June, 2025, 10:50:30

Hi @user-1d9685 👋! Have a look at this notebook.
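Separately from the linked notebook, a bare-bones version of the overlay step could look like this with OpenCV, assuming you already have an exported surface-cropped video (surface.mp4) and a heatmap image of matching aspect ratio (both filenames are hypothetical):

import cv2

cap = cv2.VideoCapture("surface.mp4")
heatmap = cv2.imread("heatmap.png")

fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("surface_with_heatmap.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

# Resize the heatmap once so it matches the video frames.
heatmap = cv2.resize(heatmap, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Alpha-blend the heatmap over each frame (70% frame, 30% heatmap).
    out.write(cv2.addWeighted(frame, 0.7, heatmap, 0.3, 0))

cap.release()
out.release()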

End of June archive