core


user-f1f04a 01 June, 2022, 00:15:00

Hi, I'm looking at using a DIY camera for eye tracking (HD-6000). The camera is picked up by Windows but not by the Pupil Capture software. I assume it has something to do with the default settings not being 200 fps at 192×192. Is there a driver or a setup/config procedure to get it recognised? Thanks

user-ff97c9 01 June, 2022, 11:46:31

Hello. If you have eye 0 or eye 1 detection enabled in the main window, there should be a window open on your system titled "Pupil Capture - eye 0" or "eye 1". In this eye window, go to the "Video Source" tab via the vertical buttons on the right. In the video source pane there is a button for manual camera selection. To use any third-party camera other than the default Pupil Labs cameras, you need to have this enabled; then select your camera from the drop-down menu. I just tried this for you and it works very well, but make sure the IR illumination on the eye is strong, because the illumination is used for calibration, gaze tracking, and dark pupil detection. If the IR is strong enough, you should see the iris as a bright grey colour and the pupil as dark black: hence the term dark pupil detection.

papr 01 June, 2022, 11:55:04

Hi, please follow steps 1-7 of these instructions to manually install drivers for your camera https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-ff97c9 01 June, 2022, 10:55:01

Hello everybody! Just joined the channel 🙂 I'm an interaction and neuroergonomics researcher and have been using your Core and Invisible devices for about 2 years now. Recently, I discovered an issue with one of our Core recordings, and I thought I'd take this opportunity to join the channel and discuss it with others.

The issue I experienced is that the pupil detection algorithm of Pupil Capture loses the pupil momentarily, despite there being no eye movements; the algorithm then resumes normal operation after a couple of fixations. Those fixations were mapped incorrectly as a result.

What we did to fix this in the lab was to post-process the recordings in Pupil Player by running the post-hoc gaze detection, and voilà! The issue was fixed. This is very curious to us as a lab, because we had no idea the online (Capture) and post-hoc (Player) pupil detection algorithms might be different. Given the nature of the issue we experienced with the recorded pupil detection, and because the participant's eye did not move, the algorithm must somehow be different, right? The camera views the detected pupil almost head-on, so it's not the camera angle, and the detection parameters were noted and used identically in the offline analysis and the online recording.

papr 01 June, 2022, 12:01:07

Hey, welcome to the channel! I am happy to see that you are using your experience to help others before your first question has even been answered. Much appreciated 🙏

Capture and Player use the same algorithms for pupil detection and gaze mapping. The only possible difference that I see is the pupil data matching/pairing process that is required for gaze mapping. You can read more about it here: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching The order of the pupil data might have been different in real time compared to the post-hoc case.

user-856af7 01 June, 2022, 15:10:29

Hello everyone, I'm new to using Pupil Core and am trying to conduct an experiment using the Core system in a real-world environment. I've decided to use the single-target calibration method. Despite numerous attempts to calibrate, both on myself and on other people, the system seems to be miscalibrated, at least as far as I can tell from the recordings: the visual circle tends to deviate from the actual targets used for the test. In some cases this deviation also increases as the target moves further away from the center of view. I was mainly wondering if there is something I might be doing wrong in the calibration process.

papr 01 June, 2022, 15:17:52

Hi Daniel, welcome to the community! 👋

Without further information, my first guess would be that the pupil detection needs to be improved. Tuning it requires a bit of practice and can sometimes be difficult to get right if you are in an uncontrolled environment.

If you like you can send a Pupil Capture recording of you performing a calibration to [email removed] After the review, we will be able to give more concrete feedback.

user-856af7 01 June, 2022, 15:20:54

Thanks for the help! I'll try and send the ones I have over when I can make it to the lab again. Would the recordings of the brief tests I did after each calibration also be useful or is just the calibration data needed?

papr 01 June, 2022, 15:27:43

We would primarily need a recording that includes a calibration. Other recordings can be helpful, too.

user-c8e5c4 02 June, 2022, 09:34:35

Hey everyone, I have a question regarding batch exporting. Is there a way to export data from a lot of participants without manually having to open and export each recording in Player? I've seen there's a batch export plugin (https://github.com/tombullock/batchExportPupilLabs) which I've tried to install, but it's not loading in Pupil Player. Could you point me in the right direction? Thanks a lot!

papr 02 June, 2022, 09:36:19

Hi! This is meant to run as a stand-alone script, not as a Player plugin. See also the extract_*.py scripts in the Post-hoc Analyses section.

user-96d41d 02 June, 2022, 09:38:36

Hi everyone! I conducted an experiment using Pupil Core in a real work environment. I am interested in the time participants spend looking at a specific projected pattern on the wall. I just discovered that there is a 'Surface Tracker' plugin which can provide information on how often a person looks towards a specified area. However, for this you need to place markers in the real environment, which unfortunately I did not do before the experiment. Does anyone know whether it is possible to place these markers in the video afterward and still be able to add a surface using the Surface Tracker plugin?

papr 02 June, 2022, 09:41:15

Hi, unfortunately, there is no method to add the markers post-hoc. It might be easier to manually annotate the scene video frames during which the subject looked at the pattern using the Annotation plugin.

user-c8e5c4 02 June, 2022, 09:42:30

alright, thank you 🙂

user-96d41d 02 June, 2022, 13:10:40

Do you think it is possible to edit the world video and place markers in it, to solve the issue of not having placed markers in the real environment? If I want to edit this video, what properties must it have so that I can still open the world video in Pupil Player? And do I need to change the filename in the code somewhere?

papr 02 June, 2022, 13:22:35

Pupil Player expects the following:
- everything listed under Timestamp Files and Video Files here: https://docs.pupil-labs.com/developer/core/recording-format/
- that each packet has one frame, and that each packet's and frame's pts are the same
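
A rough sketch of such a re-encode using PyAV (not an official tool; the file names, codec, and frame rate below are assumptions). It decodes the edited video and re-encodes it so that every packet carries exactly one frame with matching pts. The frame count must still match the entries in world_timestamps.npy.

import av

IN_PATH = "world_edited.mp4"   # hypothetical edited video
OUT_PATH = "world.mp4"

in_container = av.open(IN_PATH)
out_container = av.open(OUT_PATH, mode="w")

in_stream = in_container.streams.video[0]
# Capture records the world video as MJPEG; the rate is an example value
out_stream = out_container.add_stream("mjpeg", rate=30)
out_stream.width = in_stream.codec_context.width
out_stream.height = in_stream.codec_context.height
out_stream.pix_fmt = "yuvj422p"

for frame in in_container.decode(in_stream):
    frame.pts = None  # let the encoder assign monotonically increasing pts
    for packet in out_stream.encode(frame):
        out_container.mux(packet)

for packet in out_stream.encode():  # flush the encoder
    out_container.mux(packet)

out_container.close()
in_container.close()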

user-96d41d 02 June, 2022, 13:24:56

Thank you!

papr 02 June, 2022, 13:28:41

May I ask what your approach for editing the video is?

user-96d41d 02 June, 2022, 13:33:49

I want to try Motion Tracking in Adobe After Effects, but I'm not sure whether this is possible for such a big video (7 GB). My thinking is: if I can place the marker in the video and it moves along with my target area, it can be detected with the surface tracker. I really hope that this can be a solution to my problem, but I don't have much experience with video editing..

papr 02 June, 2022, 13:34:55

Interesting approach! Please let us know how it ends up working for you 🙂

user-96d41d 02 June, 2022, 13:35:31

I will! 😁

user-b14f98 02 June, 2022, 14:42:49

Hi @papr, would you mind answering a few questions about bundle adjustment for me? I know that the 3D model fitting process estimates the eye position within eye-camera space. The next two steps are to estimate the position and orientation of the eye cameras within world-camera space. I know that an explicit value is provided/hardcoded for eye camera position, and that eye camera orientation is estimated through bundle adjustment. My question is: does the normal Player pipeline (most recent version) also refine camera position in world space during bundle adjustment?

papr 02 June, 2022, 14:49:28

For the normal Core headset, we fix the translation and let the bundle adjustment optimize the rotation of the eye cameras. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L71-L84

We do the same for hmd bundle adjustment https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L191-L204

user-b14f98 02 June, 2022, 14:43:51

The issue is that we are seeing an offset in our Blender model of the pupil data that suggests an error in eye positions/orientations within world camera space.

user-b14f98 02 June, 2022, 14:46:46

Chat image

user-b14f98 02 June, 2022, 14:46:56

There's an example of a nasty offset.

user-b14f98 02 June, 2022, 14:49:58

That was my understanding. So, I guess this is a bug on our end.

user-b14f98 02 June, 2022, 14:50:12

That gaze cursor is pointing where it should (at the gaze targets)

papr 02 June, 2022, 14:51:16

Note that the eye model locations may change over time if the model is not frozen.

user-b14f98 02 June, 2022, 14:51:27

Yes, we are accounting for that. Thank you 🙂

user-b8531d 03 June, 2022, 11:45:34

Hi everyone! We have a Pupil Core in my team, and I am really interested in using it, but is there any way to turn it into a wireless device? For example, would some Bluetooth adapter be sufficient? Or a Raspberry Pi communicating through the Network API?

papr 07 June, 2022, 09:37:08

People have used this to stream video from a RPi to a computer running Capture https://github.com/Lifestohack/pupil-video-backend/ Note: It has some known issues with camera intrinsics and I would consider it experimental.

user-219de4 03 June, 2022, 20:24:49

Hello everyone. I am using Pupil Core connected to a Mac with a USB-C cable. I ran into an issue where Pupil Capture can detect the device but captures no image from either the world view or eye0/1. Any solution? Thank you!

nmt 04 June, 2022, 06:49:32

Hi @user-219de4. Are you running on macOS Monterey? If so, you will need to start the application with administrator rights:

sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture

See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-219de4 06 June, 2022, 21:20:15

Thank you so much, Neil! It works now! 😃 I ran into a similar issue with Windows 10 though; could it be restricted by admin access as well?

user-b43398 06 June, 2022, 03:56:30

Hi papr, I just wanted to clarify: do you recommend modifying the blink data extraction script to get the gaze data?

papr 07 June, 2022, 09:34:09

Hi, no, I was referring to the extract pupil data script 🙂

user-a4bd50 06 June, 2022, 07:17:30

Hi @nmt, I'm new to Pupil Labs eye trackers and need some help. I have a question about the Gaze Datum Format (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format). Which value gives the current gaze location on the world camera image? I tried 'norm_pos' but it doesn't seem to be the correct one.

wrp 06 June, 2022, 09:13:39

@user-a4bd50 norm_pos gives the normalized coordinates of the gaze point relative to the world camera's view.

If you are looking at the surface data format's gaze_on_surfaces: there, norm_pos is the normalized gaze coordinate relative to the surface.

user-a4bd50 06 June, 2022, 09:10:59

Must 'norm_pos' relate to a surface? I've calibrated with the Pupil Capture application.

user-a4bd50 06 June, 2022, 09:36:55

Hi wrp, thanks for your reply. The norm_pos I get is quite weird in that it changes very little (about 0.1) when I look from the left edge to the right edge. I used the example from here (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone) and only changed the topic to "gaze.3d.01.". My code below:

# ...continued from above
# Assumes sub_port to be set to the current subscription port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('gaze.3d.01.')  # receive binocular 3d gaze messages

# we need a serializer
import msgpack

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    # print(f"{topic}: {message}")
    print(message.get(b'norm_pos'))
    print('\n')
    sleep(0.5)
user-a4bd50 06 June, 2022, 10:36:08

For more details: I stare at one point and shake my head. x and y in norm_pos should vary within [0, 1], but both of them only vary by about 0.01. Nothing changes even if I mask the eye cameras.

user-a4bd50 06 June, 2022, 11:02:51

Hi wrp, thanks for your help. The problem seems to be caused by sleep(); things work well after I delete it. Why does this happen? Is the data from subscriber.recv_multipart() not real-time?

papr 07 June, 2022, 09:33:02

This is due to subscriber.recv_multipart() not returning the most recent datum but a buffered value. subscriber has an internal queue. If this queue is not processed quickly enough, new items will be dropped and old items will remain in that queue until processed. The goal should always be to query this function as often as possible.
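
A minimal sketch of that pattern, assuming the subscriber socket from the example above: block for one message, then drain the queue so that only the newest datum is processed.

import zmq
import msgpack

def recv_latest(subscriber):
    # block until at least one message is available
    topic, payload = subscriber.recv_multipart()
    # then drain whatever queued up behind it
    while True:
        try:
            topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break  # queue is empty; payload now holds the newest datum
    return topic, msgpack.loads(payload)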

nmt 06 June, 2022, 13:30:52

Hi @user-a4bd50. I'd recommend using this example script to filter gaze on surfaces: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py

user-3a4a19 06 June, 2022, 14:47:44

Hi there, I'm facing issues with Core calibration and eye recognition. I don't know why, but I cannot obtain stable eye tracking. If I stare at a point on screen, the virtual pointer (i.e. the pink one) is far away from it. Can someone tell me what's wrong? If this is not the right place to post this, can you please tell me where I can get support? Thank you!

user-b9005d 06 June, 2022, 16:54:47

To use the monocular gaze plugin, do I just download the file from GitHub and drag it into the plugins folder? Do I do it for both the Capture AND Player folders, or just one or the other?

papr 07 June, 2022, 09:26:50

For realtime calibrations, add it to the Capture folder. For post-hoc calibrations, add it to the Player folder. After restarting the app, the dual-monocular options should show up next to the regular 2D and 3D options.

nmt 06 June, 2022, 19:14:28

Hi @user-3a4a19 👋. Please share a recording that contains a calibration sequence with [email removed] and we will provide some feedback 🙂

user-3a4a19 10 June, 2022, 10:01:36

Hi @nmt, thank you. I will send the recording, hoping for some feedback.

user-3efcd2 07 June, 2022, 09:22:58

Hi! Is it possible to use Pupil Labs software with other industrial cameras?

papr 07 June, 2022, 09:27:54

Yes, but they need to fulfil very specific requirements to work out of the box. See this previous message on that topic https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

papr 07 June, 2022, 09:29:23

On Windows, this issue is related to an incorrect driver installation. Please see https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-219de4 08 June, 2022, 02:50:22

Thanks for your advice. Will keep you updated for further help 👍

user-3efcd2 07 June, 2022, 09:33:34

If my cameras are good, how can I set the software up to work with them?

papr 07 June, 2022, 09:34:29

Are you on Windows?

user-3efcd2 07 June, 2022, 09:34:38

yes!

papr 07 June, 2022, 09:35:19

Please follow steps 1-7 from these instructions to install the drivers https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md Afterward, the cameras should show up in the video source menu.

user-3efcd2 07 June, 2022, 09:36:34

thank you

user-b8531d 07 June, 2022, 09:57:59

Ok, thank you @papr! And you are not aware of any Bluetooth solution?

papr 07 June, 2022, 10:00:17

Bluetooth does not have sufficient bandwidth to transmit the scene and eye video streams 😕

user-eb48c5 07 June, 2022, 10:11:10

Hi, I am not able to get good gaze positions (when a person focuses on a certain point, the gaze point shown by the eye tracker doesn't land on that point), even after calibrating several times. Am I missing something in the process?

papr 07 June, 2022, 10:12:52

The most likely reason for that is that the pupil detection is not stable. See https://docs.pupil-labs.com/core/#_3-check-pupil-detection and https://docs.pupil-labs.com/core/best-practices/#pye3d-model for reference

user-eb48c5 07 June, 2022, 12:04:30

I am following these rules, but could it be because I calibrate on my laptop while my work environment is different?

papr 07 June, 2022, 12:06:13

That should work sufficiently well in most cases. For concrete feedback, please share a recording of you or another user calibrating. Please send the full recording to data@pupil-labs.com

user-7daa32 07 June, 2022, 19:19:42

There are gaze timestamps and world timestamps... which should be used in calculating duration?

user-eb48c5 08 June, 2022, 16:11:38

Is it possible to save surface definitions in the Surface Tracker plugin? I want to use the same environment with different participants, so that I don't need to define the surfaces separately each time (I'm not changing the markers).

user-219de4 08 June, 2022, 19:25:30

Hello! I am new to LabStreamingLayer (LSL) and interested in learning about its synchronization over multiple streams. For your setup, did you have two Pupil Core devices record and stream to separate stations and then unify them under shared LSL timestamps at another location? Or did you record and save all signals at the same site? As for other data streams, have you had another camera stream (e.g., a webcam)? How did you set up the additional camera input in LabRecorder? Thank you in advance for your great help! 😃

papr 09 June, 2022, 09:58:19

Hi, the Pupil Capture LSL Relay plugin will change Pupil Capture's clock to match the LSL clock. That means you can match any natively recorded data to the LSL-recorded data post-hoc. Therefore, there is no need to transfer video data via LSL. I have also not come across a good software solution for that yet.

wrp 09 June, 2022, 08:57:48

@papr any thoughts on this?

user-19bba3 08 June, 2022, 21:03:01

❔ Question here. I've found in the chat history on Discord that when Pupil Capture closes normally, the settings are saved.

That's great. But what I'd really like to do is to save out two different default settings for the different experiments that I'm doing. Searching here and in the docs, I haven't found anything like this. Does anyone know if such a tool already exists?

wrp 09 June, 2022, 03:09:41

Hi @user-19bba3 👋 thanks for searching the chat history before diving into questions 😸

All settings for Pupil Capture and Pupil Player are saved in folders in your user directory: pupil_capture_settings and pupil_player_settings, respectively. One low-tech solution would be to set up Capture as you like for experiment 1 and move that settings folder to another location, then set up Capture as needed for experiment 2 and move that folder to another location as well. Then you can swap the desired settings folder in and out of your user dir as needed (the folder's name needs to be pupil_capture_settings). The same applies to pupil_player_settings.

There might be a more sophisticated way to do this, but this would be one way to achieve your goals; a sketch of the swap follows below.
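
A minimal sketch of that swap, assuming per-experiment copies are kept next to the active folder (the folder names are hypothetical):

import shutil
from pathlib import Path

home = Path.home()
active = home / "pupil_capture_settings"

def activate(experiment: str):
    # replace the active settings folder with the stored copy
    # for the given experiment, e.g. "experiment_1"
    if active.exists():
        shutil.rmtree(active)  # or move it aside to preserve the last state
    shutil.copytree(home / f"pupil_capture_settings_{experiment}", active)

activate("experiment_1")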

user-25d81e 09 June, 2022, 09:50:33

Question: please tell me the required PC specifications for Pupil Core.

user-7daa32 09 June, 2022, 10:43:20

Why is it that the data from gaze_positions_on_surface come out negative or off in my calculations? The gaze_positions data (not on the surface) came out fine. I can't use the latter's dimensions as 0 and 1.

papr 09 June, 2022, 10:55:13

Could you please provide an example?

user-7daa32 09 June, 2022, 11:09:28

After converting the x and y positions to centimeters: X (0,1), Y (0,1) to Y (52.0192 cm), X (28.702 cm). I got negative and offset values even though the points are within the defined surface.

papr 09 June, 2022, 11:10:35

From the shared screenshots, it looks like there is an issue with your conversion algorithm.

user-7daa32 09 June, 2022, 11:16:00

It was okay when I used the data from gaze_positions... not the one from the surface.

papr 09 June, 2022, 11:18:54

Your screenshot showed positive y values before conversion, negative values afterward. That means the error lies in the conversion. Without knowing the implementation, I can't make more concrete recommendations. 😕
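
For reference, a minimal sketch of such a conversion, assuming the surface dimensions quoted above. A plain scaling preserves the sign, so normalized inputs inside [0, 1] cannot come out negative (gaze outside the surface legitimately yields values outside [0, 1]):

SURFACE_W_CM = 28.702   # width, taken from the numbers above
SURFACE_H_CM = 52.0192  # height, taken from the numbers above

def surface_norm_to_cm(x_norm: float, y_norm: float):
    # surface coordinates have their origin at the bottom-left corner
    return x_norm * SURFACE_W_CM, y_norm * SURFACE_H_CM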

user-6586ca 09 June, 2022, 15:09:20

Hi everyone! I'm preparing an experiment with potentially many areas of interest (approximately 60, or even more), using Pupil Core. My question: is there a maximum number of AOIs that can be implemented in one work environment at once, or is it unlimited? Maybe it depends on the maximum number of AprilTags that can be used at once? We will probably use 3 or 4 tags per area. My second question concerns post-hoc AOI editing. While adding a new AOI or editing one, Pupil Player lags and starts to run slowly. Is there any parameter that could be modified to improve this? Thank you for your answer 🙂

papr 09 June, 2022, 18:39:24

Hey! While there is no software limit on the number of surfaces, they require memory and computational resources. That said, the software was not designed to scale to this many surfaces.

The lagging happens especially on Windows due to an underlying implementation detail of how subprocesses are spawned. You should see less lagging on macOS or Linux.

I would also recommend reducing the number of AOIs if possible. For example, if you have multiple coplanar surfaces, you can replace them with one big surface and map its gaze to the sub-AOIs post-hoc, as sketched below.
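
A minimal sketch of that post-hoc mapping, assuming a Pupil Player surface export (the file name, column names, and AOI rectangles here are examples):

import csv

# hypothetical sub-AOIs as (x_min, y_min, x_max, y_max) in normalized
# surface coordinates, origin at the surface's bottom-left corner
SUB_AOIS = {
    "left_half": (0.0, 0.0, 0.5, 1.0),
    "right_half": (0.5, 0.0, 1.0, 1.0),
}

def classify(x, y):
    for name, (x0, y0, x1, y1) in SUB_AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None  # on the surface but outside every sub-AOI

counts = {name: 0 for name in SUB_AOIS}
with open("gaze_positions_on_surface_Board.csv") as f:
    for row in csv.DictReader(f):
        if row["on_surf"] == "True":
            aoi = classify(float(row["x_norm"]), float(row["y_norm"]))
            if aoi:
                counts[aoi] += 1
print(counts)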

user-6586ca 09 June, 2022, 18:57:32

Very well, thank you for your answer 🙂

user-f51d8a 10 June, 2022, 01:43:14

Hi, we are a distributor for Pupil Labs here in Malaysia. Our clients want to know which analysis software is compatible with Pupil Labs devices. As I understand it, we can get raw data by exporting results as spreadsheets. Any advice on how we can present the data from the raw export, maybe as graphs? Is there software for that? Thank you

papr 10 June, 2022, 06:58:25

Hi! We have a series of examples using Python here: https://github.com/pupil-labs/pupil-tutorials
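
As a starting point, a minimal sketch in the spirit of those tutorials, assuming a Pupil Player export folder (the path is an example): it loads gaze_positions.csv and plots the normalized gaze trace over time.

import pandas as pd
import matplotlib.pyplot as plt

gaze = pd.read_csv("exports/000/gaze_positions.csv")

plt.plot(gaze["gaze_timestamp"], gaze["norm_pos_x"], label="x")
plt.plot(gaze["gaze_timestamp"], gaze["norm_pos_y"], label="y")
plt.xlabel("pupil time [s]")
plt.ylabel("normalized gaze position")
plt.legend()
plt.show()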

user-78b456 10 June, 2022, 08:13:24

Hi all, does Pupil Capture stream out video captures or eye data?

I'm trying to see what I have access to in TouchDesigner and I'm struggling

papr 10 June, 2022, 08:17:14

Hey, yes, Pupil Capture streams data via the Network API https://docs.pupil-labs.com/developer/core/network-api/ But I have not heard about TouchDesigner yet. Therefore, I doubt it is compatible.

user-78b456 10 June, 2022, 08:22:33

So I'm understanding it's streamed via tcp://localhost:50020, but the sample code uses the zmq library in Python. Is that parsing a bunch of text into video? Or is it being sent in JPEG format like it says at the bottom of the Network API menu, under Frame Publisher?

TouchDesigner has a "Video Stream In" operator where you just put in the address of the video feed; it also has a "Syphon Spout In" operator. But I'm not getting anything from the TCP address... maybe I haven't started streaming yet?

user-96d41d 10 June, 2022, 08:17:16

Hi! I have a question about the gaze_positions.csv file. I want to know the position of the gaze point within the world video. I think I need to look at the norm_pos_x and norm_pos_y variables. I am wondering where the origin is, and whether it changes when the participant moves their head (so that the world video also changes position).

papr 10 June, 2022, 08:26:15

Hey! Gaze is estimated in scene camera space. So it is relative to the subject's head. The origin of the normalized coordinate system is in the bottom left of the scene camera coord. system. See our documentation here https://docs.pupil-labs.com/core/terminology/#coordinate-system
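
A minimal sketch of going from those normalized coordinates to scene-camera pixels (the resolution is an example). Note the y flip: normalized coordinates have their origin at the bottom-left, while image pixels start at the top-left.

WIDTH, HEIGHT = 1280, 720  # example world camera resolution

def norm_to_pixels(norm_x: float, norm_y: float):
    # flip y: normalized origin is bottom-left, pixel origin is top-left
    return norm_x * WIDTH, (1.0 - norm_y) * HEIGHT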

user-96d41d 10 June, 2022, 08:28:44

And is there data on the position of the head/world camera, so that you could combine the gaze data with the world camera data to know where the gaze is within the world?

papr 10 June, 2022, 08:34:47

The Pupil Core Network API is fairly custom. That is why I doubt that it will work out of the box. Do you have a link to the reference documentation for these operators?

papr 10 June, 2022, 08:35:55

To contextualize gaze in the real world, you need an additional mapping step. For Core, we offer 2d surface mapping https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking and 3d head pose tracking https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

user-96d41d 10 June, 2022, 08:38:00

Thank you!

papr 10 June, 2022, 09:00:17

Neither operator is compatible with Capture. But they have a Python interface https://docs.derivative.ca/Category:Python You might be able to build something like a custom operator that uses the Network API examples to receive data.
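
A minimal sketch of the receiving side that such a custom operator could wrap, assuming Capture's Frame Publisher plugin is enabled with JPEG format and each message arrives as three parts (topic, metadata, image bytes):

import zmq
import msgpack
import numpy as np
import cv2

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    topic, payload, frame_bytes = sub.recv_multipart()
    meta = msgpack.loads(payload, raw=False)
    if meta["format"] == "jpeg":
        img = cv2.imdecode(np.frombuffer(frame_bytes, np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow("world", img)
        if cv2.waitKey(1) == 27:  # Esc quits
            break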

user-78b456 10 June, 2022, 09:01:56

Alright, I'll give it the ol' college try! Thanks 😛

user-7e4e46 10 June, 2022, 11:54:13

Hey everyone, could someone please tell me if licensed iMotions software is essential for acquiring normalised gaze coordinates relative to the egocentric scene video? From what I've seen of the documentation on Pupil Labs Capture and Player, it seems as though this capability is available in the free software.

papr 10 June, 2022, 13:37:43

Hey, Pupil Core software provides gaze relative to the egocentric scene video by default. Could you clarify what you mean by normalised?

user-7e4e46 10 June, 2022, 14:05:45

That's great, thanks for the response. By normalised I just mean a normalised coordinate system with respect to the scene camera's pixels.

papr 10 June, 2022, 14:08:27

Yes. That is provided.

user-d43d31 10 June, 2022, 15:42:23

Does this product work with HoloLens 2?

wrp 12 June, 2022, 02:42:57

Duplicate post. Responded in vr-ar.

user-b9005d 10 June, 2022, 18:53:23

Do you guys still sell the 120 Hz eye cameras (the ones that can be focused)? We currently have the 200 Hz cameras that came with the headsets, but we are interested in adjusting the focus, which these don't allow us to do.

wrp 12 June, 2022, 02:41:31

Hi @user-b9005d, we might still have a few. Please send an email to sales@pupil-labs.com and our Assembly team will get back to you on Monday.

user-430fc1 14 June, 2022, 13:02:15

Hi there, I'm running Pupil Capture version 3.5.8 on Ubuntu 22.04, and so far it has unrecoverably crashed Ubuntu about 1 in 3 times, requiring a hard reset or REISUB. Any ideas what might be causing this? It seems to happen regardless of which USB port Pupil Core is connected to.

papr 14 June, 2022, 13:07:59

Hey, I am sorry to hear that. This is not a known issue; we are running this setup internally without any problems. What changed in your setup in comparison to prior Capture usage? Did you just set up Ubuntu 22?

user-430fc1 14 June, 2022, 13:09:46

Yes, just set up Ubuntu 22 on a new machine

papr 14 June, 2022, 13:10:17

Did you run Capture on the same machine with a different Ubuntu version before?

user-430fc1 14 June, 2022, 13:10:54

It's a completely new setup

papr 14 June, 2022, 13:12:38

This makes it difficult to pinpoint the cause of the issue. 😕 Can you check if there are any system logs that could contain information about the freeze/crash?

user-430fc1 14 June, 2022, 16:26:40

@papr Question: when using Pupil Remote to start a recording and specifying a folder, e.g., 'R my_folder', is there a way to avoid it creating 000-padded subfolders inside that folder?

papr 14 June, 2022, 16:27:50

No, there is not. This feature ensures that Capture records into a fresh folder and does not overwrite anything.
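
For completeness, a minimal sketch of the remote-recording workflow under discussion (commands per the Network API docs; the folder name and duration are examples):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("R my_folder")  # recording lands in my_folder/000, 001, ...
print(pupil_remote.recv_string())

time.sleep(5)  # record for five seconds

pupil_remote.send_string("r")  # stop recording
print(pupil_remote.recv_string())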

user-97ca10 14 June, 2022, 17:14:34

Hello, I have a question about opening the pupil and gaze .pldata files. I need to import these into MATLAB somehow. I'm guessing I need to write some Python script which extracts the data and puts it into a NumPy array or an Excel file which I can then read in MATLAB; I'm just wondering where to start. Thanks!

papr 14 June, 2022, 17:18:17

The recommended workflow is to open and export the recording using Pupil Player. This will generate CSV files that can easily be imported into MATLAB.
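
If reading the .pldata files directly is still needed, here is a minimal sketch based on the recording-format docs (the exact on-disk layout may change between versions, so the Player export above remains the supported route):

import msgpack
import numpy as np

timestamps = np.load("gaze_timestamps.npy")

data = []
with open("gaze.pldata", "rb") as f:
    # each entry is a (topic, serialized datum) pair; the datum itself
    # is msgpack-encoded a second time
    for topic, payload in msgpack.Unpacker(f, raw=False, use_list=False):
        data.append(msgpack.unpackb(payload, raw=False))

# each datum is a dict, e.g. data[0]["norm_pos"]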

user-6433d1 14 June, 2022, 17:16:09

Hi there. I am using Pupil Core glasses on Windows. When I open Pupil Capture, it says it can't connect to the eye0 or eye1 devices and that no camera intrinsics are available for camera (disconnected) at resolution [192 192]. This is a brand-new problem; I have previously been able to connect to both eye0 and eye1. The only thing that has changed is that I downloaded the pupil-core-network-client module. I followed the troubleshooting steps outlined here, with no luck: https://docs.pupil-labs.com/core/software/pupil-capture/

papr 14 June, 2022, 17:19:24

Could you please connect the headset, open the device manager, expand the Cameras and libUSBk categories, make a screenshot of the window, and share it with us?

user-eb48c5 15 June, 2022, 02:55:51

How do I read the transformation matrices from the surf_positions export file? Although they are in the matrix form we write in MATLAB, MATLAB detects those cells as text, hence I can't directly read them as matrices. Is there a way to import those matrices in some programming language?

papr 15 June, 2022, 10:17:56

You should be able to convert the string into a matrix with py.eval, but I am not sure whether that works. I am basing this assumption on this documentation: https://de.mathworks.com/help/matlab/matlab_external/differences-between-matlab-python.html
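
Alternatively, a minimal sketch that parses the matrices in Python and hands MATLAB a .mat file (the column and file names are examples and may differ between export versions):

import re
import numpy as np
import pandas as pd
from scipy.io import savemat

FLOAT_RE = r"[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?"

def parse_matrix(cell: str, shape=(3, 3)):
    # pull every float out of the printed-array string and reshape
    nums = [float(x) for x in re.findall(FLOAT_RE, cell)]
    return np.array(nums).reshape(shape)

df = pd.read_csv("surf_positions_Board.csv")  # hypothetical surface name
mats = np.stack([parse_matrix(c) for c in df["img_to_surf_trans"]])
savemat("surf_transforms.mat", {"img_to_surf_trans": mats})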

user-7e3aa3 15 June, 2022, 10:13:49

Hi. I am using Pupil Capture. Is there a way to change the default user_info fields on startup?

papr 15 June, 2022, 10:16:46

Hey, that is not supported

user-5c7deb 17 June, 2022, 00:03:15

Hi, I am looking to get the diameter and center of the iris. Any suggestions or libraries that could be used?

user-aabf8b 17 June, 2022, 11:13:48

Hello guys 👋 When plugging the Core glasses into our OnePlus 8, the Companion app shows an error message saying that there's a USB problem. Do you know how to resolve that?

papr 17 June, 2022, 12:07:04

Hi. The Companion app is only designed to work with Pupil Invisible. Please use Pupil Capture to drive your Core headset.

user-bb3d42 17 June, 2022, 18:48:30

Hey all, does anyone have any tips on how to calculate angular velocity from the gaze positions file? I am trying to see if participants are making saccades between various timestamps. I was thinking of using the data in the gaze_point_3d x, y, z columns, but if I did this, I would need a Euler rotation matrix, and I do not know how the local coordinates are set up. I was also thinking I could use a simple 2D method, since participants are looking at a board with their head fixed in a chin rest. Just reaching out to see if there are any ideas on Discord. Thanks.
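
One possible approach, sketched under the assumption of a gaze_positions.csv export: the gaze_point_3d vectors originate at the scene camera, so the angle between consecutive unit direction vectors divided by the time step already gives head-relative angular velocity, without any extra rotation matrix.

import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

v = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
v = v / np.linalg.norm(v, axis=1, keepdims=True)  # unit gaze directions

# angle between consecutive directions, in degrees
cos_theta = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
theta_deg = np.degrees(np.arccos(cos_theta))

dt = np.diff(gaze["gaze_timestamp"].to_numpy())
ang_vel = theta_deg / dt  # deg/s; near-duplicate timestamps make dt tiny
                          # and can produce implausible spikes worth filtering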

user-b14f98 20 June, 2022, 14:58:49

@papr at a more reasonable range (say, up to 500 deg/second) you will likely see noise characteristics that might cause a new user to worry that their calculations are wrong.

papr 20 June, 2022, 15:35:39

I would rather be interested in how these larger numbers come to be. Is it that the spatial difference is larger than expected, or the temporal one shorter? Can you reproduce these numbers in your own recordings?

user-b14f98 20 June, 2022, 15:00:24

...unless they notice the difference in the Y axis.

user-9230f4 20 June, 2022, 17:08:35

I searched around for this but wanted to double-check (before adjusting my IRB)! A research assistant and I are working to set up an eye-tracking study with Core. The headset is not picking up her pupils well with her eyeglasses on. Is Core designed to work for users who wear eyeglasses?

papr 20 June, 2022, 17:10:54

Eyeglasses can obstruct the eye cameras' views or (if filming through the glasses) create reflections that degrade the pupil detection algorithm's performance. If you share an example Pupil Capture recording with data@pupil-labs.com we will be able to tell you the exact cause.

user-9230f4 20 June, 2022, 17:13:02

Awesome, thanks. It's a small study; we might amend it to exclude eyeglasses, but I will see if we need to send along a recording. We can definitely tell that tracking is less stable with her glasses than with me not wearing glasses.

user-6e1219 22 June, 2022, 11:51:22

Hi all, is there any way or function to convert gaze coordinates to pupil coordinates? It would be really helpful if someone could help me out with this.

papr 22 June, 2022, 11:58:38

Hey! 🙂 Could you please elaborate on which type of information you are interested in?

user-2e1368 22 June, 2022, 15:51:47

Good morning all, could you please help me download the Pupil Core software for my Windows 10 machine? Thank you!

papr 22 June, 2022, 16:05:18

Hi! You can find it at the bottom of this page https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads

user-6433d1 22 June, 2022, 18:27:48

Hi, we are using Pupil Core glasses to track the gaze positions of a person looking at different lights on a physical board that is angled at 30 degrees with respect to the table it is placed on. I am trying to figure out which of the measures in your gaze_positions CSV file would be most suitable for determining where they are looking. Specifically, our task requires that they fixate on a specific light, and we want to make sure they keep their eyes on it. Do you have any advice on which of the measures from the gaze_positions file would work best?

user-f9a8a4 23 June, 2022, 07:06:54

delete annotations

user-7c6eb3 23 June, 2022, 17:02:41

Hi everyone! Representing my academic design department, we're looking to invest in an eye-tracking system. Are there any resources out there directly comparing the "Core" and "Invisible" products, describing the ideal use cases for each, etc.?

user-04dd6f 24 June, 2022, 15:53:47

Hi, my Pupil Player just keeps crashing for no reason, especially when I am editing a surface (AOI) or adding/removing markers. Hence, I would like to know whether this is a known bug in Pupil Player or if it's only me having this issue?

Many thanks

Note: I'm running Pupil Player v3.5.7 (on macOS Monterey)

user-04dd6f 28 June, 2022, 12:58:54

This is the detailed report of the problem

pupil_player_problem_description.docx

nmt 24 June, 2022, 15:57:17

Hi @user-7c6eb3 👋. We don't have a specific resource that directly compares Invisible and Core online. That said, I can certainly outline some important points here.

Pupil Invisible is well-suited to use-cases ranging from art and design to sports performance, both in the lab and in the real world. It was designed to look and feel like a normal pair of glasses, and uses a real-time neural network to provide stable gaze estimation in all environments, without requiring calibration. This means it's fast and easy to set up, and you can take the glasses on and off again without worrying about re-calibrating. Invisible connects to a smartphone device, which makes the system fully portable.

Pupil Core is ideal for studies demanding high accuracy or pupillometry. It is fully open-source and can be extended and customised to meet research aims. It does, however, require calibration and a controlled environment such as a lab to achieve the best results. Pupil Core connects to a laptop/desktop computer via USB for operation.

Check out our blog to see how Pupil Invisible and Core are being used: https://pupil-labs.com/blog/

user-b9005d 27 June, 2022, 17:53:03

When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye is detected with proper confidence. If I mark a calibration dot where I know that one eye is fixated, how does that affect the gaze estimation once both eyes come back into view?

user-b9005d 29 June, 2022, 18:26:23

Just following up on this

user-7c46e8 28 June, 2022, 14:11:59

Hi channel, we recently recorded some data with a Pupil Core device and we are interested in pupil size. However, when looking at the recorded data, the pupil size differs greatly between the two eyes and seems impossibly small (<2 mm) for both, even though I discarded all data with confidence values <0.6 and the recorded eye videos look normal. Would anyone be willing to have a look at my output, or otherwise help me figure out what is going wrong? (We've had this problem consistently now.)

nmt 28 June, 2022, 14:28:10

Hi @user-7c46e8 👋. Getting accurate pupil size estimates depends on a well-fitting eye model. If you haven't already, I'd recommend checking out our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry

user-2cf61c 29 June, 2022, 02:16:39

Hi everyone! I am interested in the DIY kit, but the HD-6000 and B525 cameras used in it have been discontinued. Are there other cameras that could replace them?

user-6662a5 30 June, 2022, 13:40:36

Hi, I am wondering how I can get a face-blurring feature?

papr 30 June, 2022, 13:42:27

Hey! Which product are you using?

user-6662a5 30 June, 2022, 13:42:47

the eye tracking glasses

papr 30 June, 2022, 13:43:29

Let me clarify: Do you use Pupil Core or Pupil Invisible?

user-6662a5 30 June, 2022, 13:43:52

core

papr 30 June, 2022, 13:46:13

The Pupil Core software does not have such a feature built in. You would need to run a face-blurring algorithm post-hoc on the recorded video. Unfortunately, I don't have experience with any particular face-blurring software, so I cannot give specific recommendations in this regard.

user-6662a5 30 June, 2022, 13:46:32

Ok thanks

user-e45bce 30 June, 2022, 15:04:51

Hi hi! How is everyone?

Do you know if the VR/AR add-on works with the Oculus Quest?

nmt 01 July, 2022, 12:40:28

Hi @user-e45bce 👋. The VR add-on isn't compatible with Oculus Quest. Please see this message for reference: https://discord.com/channels/285728493612957698/285728635267186688/956545909884321793

End of June archive