core


user-6b24c6 01 November, 2020, 07:48:52

Hi, I just got my hands on Pupil Labs' Core glasses. I have also gone through the documentation about connecting to the Network API. The documentation states that the API can be used from any programming language that supports zeromq and msgpack. I want to develop a C# application where I can see the live video coming from the device. Looking at the demonstrations, most of the code is written in Python. I would like to know if that is possible?

user-765368 01 November, 2020, 09:53:44

@papr is there a way to limit the recording time in Capture to a certain time period?

user-e94c74 01 November, 2020, 10:40:50

Hi, I'm trying to manually visualize each eye's pupil video on the same screen. I loaded the numpy arrays of eye0_timestamps and eye1_timestamps for each video and used them to synchronize; however, the eyes are misaligned - the eye videos show a time difference on the same blink. Could you give me some advice on how to align the two eye videos?

user-594d92 01 November, 2020, 10:52:27

Hey, happy Halloween! @papr Previously, Pupil Capture ran normally on my computer. Recently I followed the 'IPC Backbone' steps from the official docs (https://docs.pupil-labs.com/developer/core/network-api/#ipc-backbone) to try to output data in real time. Now when I open Pupil Capture and calibrate, Pupil Capture stops responding as soon as the calibration is over, as shown in the picture. What is the problem and how do I solve it? Thanks!

Chat image

user-d8853d 01 November, 2020, 12:27:40

Hey, I know Pupil's Core software is not made for remote eye tracking, but does it theoretically work? I created a virtual webcam using akvcam, but the Pupil Capture software is not showing any virtual webcam. Is the software actively filtering out virtual webcams?

user-d8853d 01 November, 2020, 21:55:28

I believe libuvc only returns UVC devices. Can a virtual camera from akvcam be made to appear as a UVC device? This is just for research purposes, to see whether remote eye tracking is even possible and what the implications are.

papr 02 November, 2020, 08:31:46

@user-6b24c6 Hey, unfortunately, we only provide the examples in Python. Nonetheless, it is possible to use C# to access the Network API. You can have a look at our hmd-eyes project as a reference; it uses the Network API to integrate our realtime eye tracking into Unity. https://github.com/pupil-labs/hmd-eyes

papr 02 November, 2020, 08:32:40

@user-765368 Not from the user interface of Pupil Capture. Instead, you would have to use a simple script that uses the Network API to remote control Pupil Capture. Read more about it here: https://docs.pupil-labs.com/developer/core/network-api/
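
A minimal sketch of such a script, assuming Pupil Remote is reachable on its default port 50020 (the 10-second duration is just an example):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

pupil_remote.send_string("R")   # start recording
print(pupil_remote.recv_string())

time.sleep(10)                  # desired recording duration in seconds

pupil_remote.send_string("r")   # stop recording
print(pupil_remote.recv_string())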

papr 02 November, 2020, 08:34:49

@user-e94c74 Can you confirm, that the eyes are synced when using Pupil Player's eye overlay plugin? Also, please be aware that it is important how you extract frames from the video. Using OpenCV is not reliable for this. Check out our frame extraction tutorial for details: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb

papr 02 November, 2020, 08:37:11

@user-594d92 Thank you for reporting this. Could you share the following information/files:
- Pupil Capture version
- the capture.log file (you can find it in Home directory -> pupil_capture_settings)
- the exact commands that you send to Capture via the Network API

papr 02 November, 2020, 08:43:49

@user-d8853d Our UVC backend only lists cameras that fulfil a specific set of requirements. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

You will likely have to write your own video backend in order to access your virtual cameras. You can use this Realsense video backend plugin as a reference: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11

I think your best chance would be to write a custom "pupil detector plugin" that processes the remote video, doing the head pose estimation, double pupil detection, etc. Afterward, you will have to write a custom gaze mapping plugin that is able to run a calibration procedure fit for your use case.

user-594d92 02 November, 2020, 12:35:46

@papr
- Pupil Capture version: 2.4.0
- I have uploaded the other files to: https://github.com/mimic777/Pupil-Capture

papr 02 November, 2020, 14:04:52

@user-594d92 Thank you. This is the relevant Github issue: https://github.com/pupil-labs/pupil/issues/1733#issuecomment-554641134 You should be able to work around the issue by running Capture on a user account with ASCII characters or by changing your TEMP directory to a path that only includes ASCII characters.

user-908b50 03 November, 2020, 04:55:48

So I know that a dispersion based algorithm is used to detect fixations. How does that work for surface tracking? Is the same algorithm mapped onto the area?

user-908b50 03 November, 2020, 05:02:20

Also, to clarify, the degree of visual angle (i.e. the default dispersion of 1.50 on player) is where the pupil gaze is with respect to the x and y world coordinates?

user-908b50 03 November, 2020, 05:05:37

Do you know of anyone that has calculated saccades using Pupil Labs output? When defined as the period between two fixations when the eye changes its position at a specific velocity, would it be possible to write code that calculates saccades?

user-908b50 03 November, 2020, 09:12:37

Also, based on my reading of the code, it seems the Salvucci dispersion metric is used? I am using a max threshold of 1.08 with a duration of 150 - 350 ms.

papr 03 November, 2020, 09:20:25

@user-908b50 Hi 👋 Dispersion is calculated based on the 3d gaze directions in scene camera space. Fixations are mapped later to the surface. Therefore, something that looks like a fixation in scene camera space might not be a fixation on a surface, if the surface was moving. Ideally, one would use the surface mapped gaze to calculate the dispersion but this is currently not implemented in Capture/Player. This is mainly because it is not clear how dispersion in visual degrees can be calculated in surface coordinates.

Do you know of anyone that has calculated saccades using Pupil Labs output?
Check out @user-2be752's work https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing

When defined as the period between two fixations when the eye changes its position at a specific velocity, would it be possible to write code that calculates saccades?
It is possible to calculate gaze velocity, yes. It just might be too noisy when using a low confidence threshold and too sparse when using a high confidence threshold.

user-908b50 03 November, 2020, 09:58:29

Hi! Great, thanks for the link Pablo, I will check it out. Not sure about velocity as of now, but I am definitely interested in counting the total number of saccades for my project. Perhaps that would simplify things. Alright, the part about the lack of mapping between visual degrees and surface coordinates makes sense, especially because the surface moves! So basically, the scene camera space (from the world camera recording) provides image vectors (x, y, and z directions) used for offline gaze (and later fixation) detection. So the 3D gaze vectors (as image vectors?) are used to calculate dispersion in visual degrees.

papr 03 November, 2020, 10:18:20

@user-908b50 My point regarding the saccades is that there are more eye movement types than just fixations and saccades. Defining saccades as the period between fixations might not be correct.

image vectors
I would try to avoid this term as it mixes two different coordinate systems: (1) the 2d image plane that includes distortion, and (2) the 3d camera space that does not include distortion.

Gaze norm_pos coordinate system is equivalent to (1). Gaze gaze_point_3d lies in (2). Dispersion is calculated based on normalised gaze_point_3d vectors, i.e. in (2). This is the function that calculates the maximum dispersion based on a set of vectors: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L130-L133

papr 03 November, 2020, 10:18:55

The coordinate descriptions for reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system

user-3c006d 03 November, 2020, 13:45:05

hey folks, I have a problem: I have trouble when I am in binocular mode. I get confidence values of almost 1 for both eyes, but after calibration and validation the error is often more than 30-40%, and I already understood that I have to be below 30% to get good data. It is completely different when I am in monocular mode. When I only track one eye I am working with a 13% error... what am I doing wrong?

papr 03 November, 2020, 13:47:01

the error is often more than 30-40%
Could you specify what the message says that includes these values?

user-3c006d 03 November, 2020, 13:48:34

No... I will try it with my coworker again (maybe it's because I have make-up on? Someone seriously asked me if that could be the reason it doesn't work properly...). I'll get back to you in a few minutes.

papr 03 November, 2020, 13:50:30

@user-3c006d Make up can lead to bad pupil detection, yes. You should be able to verify this by looking at the eye window. When wearing mascara, the pupil detection often finds the eye lashes as false positive detections.

papr 03 November, 2020, 13:51:12

@user-3c006d Nonetheless, I think you are referring to the amount of data that is discarded for the 2d calibration. Can you confirm that you were attempting a 2d calibration?

user-3c006d 03 November, 2020, 13:52:10

I tried both 2d and 3d, and yes, when I switched to the algorithm view it recognised my eye lashes as a possible pupil.

papr 03 November, 2020, 13:52:15

Discarded data is not problematic per se. I think running a validation and looking at the validation accuracy value (measured in degrees) is more important to judge the quality of a calibration.

papr 03 November, 2020, 13:53:26

@user-3c006d You can use the ROI mode in the eye videos to set a Region of Interest that excludes the eye lashes as much as possible. That should help with the false positives.

user-3c006d 03 November, 2020, 13:55:59

Okay, thank you! I tried it with my coworker now and, as he is a man, he doesn't use any make-up... it worked perfectly ^^

user-3c006d 03 November, 2020, 13:56:11

jeez i would never have thought of that 😄

user-8effe4 03 November, 2020, 14:53:38

Hello everyone, I have a small (or big) issue with the Pupil Labs fixation data. I am plotting each norm_pos_x and norm_pos_y on the corresponding frame (I tried multiple ones between start_frame and end_frame), but unfortunately it is not plotted at the exact location on the video where the fixation should be (it does not match the green circle that shows the fixation on the video). Here is a picture to give you an idea. What should I do to solve this problem?

Chat image

papr 03 November, 2020, 14:57:32

@user-8effe4 What are you using for the frame extraction? Did you know that we have a tutorial that shows the recommended way for frame extraction and also draws the gaze point for a single frame? If not, I can recommend having a look: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb

user-8effe4 03 November, 2020, 14:59:22

@papr In fact I tried that, and I am now working in MATLAB but using the same principle. The only problem is that sometimes (like 4% of the time) it's not correct. And I am using fixations instead of gaze.

papr 03 November, 2020, 14:59:51

@user-8effe4 So you extracted the frames with ffmpeg in your case, too?

user-8effe4 03 November, 2020, 15:00:25

@papr Yes I did, and I am using fixations (not gaze).

user-8effe4 03 November, 2020, 15:03:28

I am getting this error with some videos, but they are segmented fine, so I don't think that is the issue here.

Chat image

papr 03 November, 2020, 15:04:37

@user-8effe4 Can you confirm that the tutorial works as expected? (with gaze data)

user-8effe4 03 November, 2020, 15:05:41

@papr okay i will try that and be back

papr 03 November, 2020, 15:08:41

@user-8effe4 Cool! Once you have successfully replicated it, please try it with fixations, too. Basically, the idea is to start from common ground that we know works and then change one variable at a time until we find the cause of the issue. 🙂

user-8effe4 03 November, 2020, 15:23:46

@papr Okay, I just tried it. It is working for most of the cases with gaze (some are still far off). Also, for fixations, I think that even where there is a frame range for a fixation (for example from frame 218 to 224), the fixation sometimes moves within this range. I tried using the first frame, the last frame and the middle frame, but the problem persists.

user-430fc1 03 November, 2020, 17:44:08

Is it possible to freeze the 3d model with a notification?

papr 03 November, 2020, 17:47:13

@user-430fc1 yes

papr 03 November, 2020, 17:47:54

@user-430fc1 https://github.com/pupil-labs/pupil/pull/1575
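
For reference, notifications can be sent to Capture through Pupil Remote with the pattern below; the exact subject and payload for freezing the model are described in the linked PR, so the payload here is only a hypothetical placeholder:

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

def notify(notification):
    """Forward a notification dict to Pupil Capture via Pupil Remote."""
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# Hypothetical payload -- check the PR above for the actual subject and fields.
notify({"subject": "pupil_detector.set_property.3d", "name": "model_is_frozen", "value": True})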

user-430fc1 03 November, 2020, 17:50:00

@papr Thanks!

user-5ef6c0 03 November, 2020, 21:47:13

Hello all. I have a question regarding the resolution and framerate for the eye camera. Does lower resolution affect pupil detection accuracy? If not, is there any merit in using 400x400px instead of 200x200px? I'm assuming that leaving image quality aside, and everything else being equal, having more fps —i.e. lower resolution— is better. Is that assumption correct?

papr 03 November, 2020, 22:01:06

@user-5ef6c0 actually, 192x192 has better pupil detection than 400x400

papr 03 November, 2020, 22:01:28

And it allows the usage of 200Hz. Therefore, it is the recommended setting.

user-5ef6c0 04 November, 2020, 04:20:18

@papr thank you

user-5ef6c0 04 November, 2020, 12:31:44

Two calibration questions: when using the single marker choreography, does the size of the printed marker matter? Also, for my setup (people stacking blocks vertically over a horizontal surface, see pic below): should the marker lie on the table's surface (i.e. horizontal), be displayed vertically, or be perpendicular to the viewing direction (i.e. 30-45 deg above the horizontal plane)?

user-5ef6c0 04 November, 2020, 12:32:16

Chat image

papr 04 November, 2020, 13:51:11

@user-5ef6c0 The calibration is always relative to the scene camera. The marker is detected best if it is perpendicular to the scene camera's viewing direction, i.e. the concentric circles appear circular in the camera image.

papr 04 November, 2020, 13:51:36

The marker size matters for the detection. If it is too small, it will not be recognized.

user-5ef6c0 04 November, 2020, 14:00:35

@papr thank you. The size you provide in your A4 pdf file should work well at the distance shown here, right?

papr 04 November, 2020, 20:33:05

@user-5ef6c0 I think so, yes.

user-908b50 05 November, 2020, 00:08:20

@user-908b50 My point regarding the saccades is that there are more eye movement types than just fixations and saccades. Defining saccades as the period between fixations might not be correct.

I would try to avoid this term as it mixes two different coordinate systems: (1) the 2d image plane that includes distortion, and (2) the 3d camera space that does not include distortion.

Gaze norm_pos coordinate system is equivalent to (1). Gaze gaze_point_3d lies in (2). Dispersion is calculated based on normalised gaze_point_3d vectors, i.e. in (2). This is the function that calculates the maximum dispersion based on a set of vectors: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L130-L133
@papr Good point on saccades! Okay, great! Thanks for correcting me on the term. Based on the function, first the distances between the vectors (x, y, z) are calculated and then the largest distance is subtracted from 1.0. Each dispersion value is then compared to a maximum dispersion value (chosen by the user) to detect a fixation. So basically you use the radius dispersion metric (not Salvucci).

papr 05 November, 2020, 08:35:57

@user-908b50 The reason why we subtract the pdist result from 1.0 is that the cosine metric is defined as

1 - (u @ v) / (norm2(u) * norm2(v))

in order to be a proper distance metric.

https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html

But we do not want the result of the distance metric but the actual angle.
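
A minimal sketch of that idea (illustrative only; the authoritative implementation is the fixation_detector.py function linked above):

import numpy as np
from scipy.spatial.distance import pdist

def max_dispersion_deg(gaze_points_3d):
    vectors = np.asarray(gaze_points_3d, dtype=float)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)        # normalise each gaze vector
    cos_dist = pdist(vectors, metric="cosine")                       # 1 - cos(angle) for every pair
    max_angle = np.arccos(np.clip(1.0 - cos_dist.max(), -1.0, 1.0))  # undo the "1 -" and recover the angle
    return np.degrees(max_angle)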

papr 05 November, 2020, 08:40:38

@user-908b50 I am not sure what you mean by the radius metric. I will use this paper as a basis: https://cms.uni-konstanz.de/fileadmin/archive/informatik-saupe/fileadmin/informatik/ag-saupe/Lehre/WS_2016_2017/ET_TAP/p71-salvucci.pdf

The only occurrence of the term radius is

A circular area is usually assumed, and the mean distance from each sample to the fixation centroid provides an estimate of the radius. on page 74.

This is not what we do. We actually calculate the largest dispersion between all possible gaze vector pairs. But yes, we implemented an I-DT fixation detector (section 3.2, page 74).

I am not sure which algorithm you are referring to by Salvucci. Could you elaborate on that?

user-e94c74 05 November, 2020, 09:09:57

Dear @papr, it seems like surface fixations are calculated based on the original fixation data. Why do we generate multiple fixations by transforming the original one, rather than just calculating new surface fixations?

papr 05 November, 2020, 09:14:05

@user-e94c74 Checkout this comment https://discord.com/channels/285728493612957698/285728493612957698/773114352076455956

user-e94c74 05 November, 2020, 09:20:31

Oh, I missed that comment. I just wondered how to interpret the surface fixation features - since there were several surface fixations derived from the same one, we could use the first element or calculate the mean value among them. Thank you for the fast reply.

papr 05 November, 2020, 09:41:10

@user-e94c74 These locations will only differ significantly if the surface moved relative to the scene camera during the fixation. So you could calculate the mean and the variance. If the variance exceeds a threshold of your choosing you can discard the fixation for too much surface movement.
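
A minimal sketch of that filter, assuming the usual fixations_on_surface export columns (fixation_id, norm_pos_x, norm_pos_y); the file name and variance threshold are illustrative:

import pandas as pd

df = pd.read_csv("fixations_on_surface_<name>.csv")   # illustrative path
grouped = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]

means = grouped.mean()                 # one surface location per fixation
variances = grouped.var().fillna(0.0)  # spread caused by surface movement during the fixation

threshold = 0.001                      # illustrative; tune for your setup
too_much_movement = (variances > threshold).any(axis=1)
stable_fixations = means[~too_much_movement]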

user-19f337 05 November, 2020, 12:26:17

Hello, I exported some data into a fixations on surface csv file. I'm just a bit confused because there are multiple lines with the same fixation id but different x,y coordinates on the surface. It makes no sense that a fixation moves on the surface, so maybe you could explain this to me a bit, as I couldn't find the documentation relating to surface exports. Thanks in advance ^^

Chat image

papr 05 November, 2020, 12:30:18

@user-19f337 actually, it makes sense that it can move as we do not detect fixations on the surface, but map existing ones to the surface. See my comments directly above your question.

user-19f337 05 November, 2020, 13:15:49

If it's a fixation on the surface, the eye should stay on the same spot on the given surface, right? If the surface moves, the eye moves and should follow that same point on the surface as a smooth pursuit.

papr 05 November, 2020, 13:18:28

@user-19f337 This case would result in a false negative fixation detection, as it would look like a smooth pursuit in scene camera coordinates and will therefore not be detected as a fixation.

papr 05 November, 2020, 13:21:28

In order to find fixations on a moving surface, the dispersion needs to be calculated in surface coordinates instead of scene camera coordinates. As mentioned in the linked comment above, it is unclear how one would calculate this though.

user-594d92 05 November, 2020, 13:37:01

Hello, I want to try to develop a plugin. Where should I start? I read the Pupil Labs website and found the following sentence in the plugin.py file:

A simple example Plugin: 'display_recent_gaze.py'. It is a good starting point to build your own plugin. (Line 24) https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/plugin.py#L25-L167
Actually, I don't quite understand this sentence. How should I proceed?

papr 05 November, 2020, 13:37:56

@user-594d92 It is referring to this file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/display_recent_gaze.py

papr 05 November, 2020, 13:38:34

Although, it is actually not the best starting point

papr 05 November, 2020, 13:39:31

try this instead:

from plugin import Plugin
from pyglui.cygl.utils import draw_points_norm, RGBA


class Display_Recent_Gaze(Plugin):
    """
    DisplayGaze shows the three most
    recent gaze positions on the screen
    """

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.order = 0.8
        self.pupil_display_list = []

    def recent_events(self, events):
        for pt in events.get("gaze", []):
            self.pupil_display_list.append((pt["norm_pos"], pt["confidence"] * 0.8))
        self.pupil_display_list[:-3] = []

    def gl_display(self):
        for pt, a in self.pupil_display_list:
            # This could be faster if there would be a method to also add multiple colors per point
            draw_points_norm([pt], size=35, color=RGBA(0.2, 1.0, 0.4, a))

    def get_init_dict(self):
        return {}

papr 05 November, 2020, 13:40:47

I changed two things:
1. Inherit from Plugin: this allows you to enable the plugin from the plugin manager.
2. Changed the color in draw_points_norm: this makes it different from the system plugin.
Please be aware that you need to calibrate to see the visualization.

user-a98526 05 November, 2020, 13:54:06

@papr Hi, I tried to use Pupil 2.5 but I got this error. I used Pupil 2.1 previously. How can I fix this problem? Thank you.

(pupil36) E:\pupil\pupil2.5\pupil\pupil_src>python main.py player
Error calling git: "Command '['git', 'describe', '--tags', '--long']' returned non-zero exit status 128." output: "b'fatal: not a git repository (or any of the parent directories): .git\n'"
Traceback (most recent call last):
  File "main.py", line 35, in <module>
    app_version = get_version()
  File "E:\pupil\pupil2.5\pupil\pupil_src\shared_modules\version_utils.py", line 74, in get_version
    version_string = pupil_version_string()
  File "E:\pupil\pupil2.5\pupil\pupil_src\shared_modules\version_utils.py", line 57, in pupil_version_string
    raise ValueError("Version Error")
ValueError: Version Error

user-594d92 05 November, 2020, 13:55:14

@papr ok, thank you!👍

papr 05 November, 2020, 13:55:34

@user-a98526 ~~Are you running from source?~~ You are running from source but you did not clone the repository. Instead you downloaded it using a different method.

user-a98526 05 November, 2020, 13:56:15

yes

papr 05 November, 2020, 13:56:46

@user-a98526 You need to use git clone ... to get the repository. Please check the developer docs for details.

user-a98526 05 November, 2020, 14:03:21

Thanks for your help. I am trying this installation method. By the way, does Pupil Invisible have applications similar to Pupil Player and Pupil Capture? I found that using Pupil Invisible is not as easy as Pupil Core.

user-a98526 05 November, 2020, 14:09:01

Or can I use the data from Pupil Invisible in Pupil Player?

papr 05 November, 2020, 14:09:30

I found that using Pupil Invisible is not as easy as Pupil Core.
@user-a98526 I am very surprised by this statement. Can you specify which parts you feel are less easy?

Pupil Invisible is meant to be used with the Pupil Invisible Companion app for Android. The app can calculate gaze without calibration in real time. Recordings are uploaded to Pupil Cloud automatically. Pupil Cloud is the primary analysis platform for Pupil Invisible.

user-c780d4 05 November, 2020, 14:14:35

Hi, I do not mean to hijack this conversation, but I'm looking to get a pupil tracker to study attention/engagement while the subjects (students) participate in a lecture (active or passive learning). Without programming and coding knowledge, I'm looking for something that is easy to use and robust in data collection and analysis (ideally wireless). Invisible or Core?

papr 05 November, 2020, 14:16:15

@user-c780d4 I would strongly recommend using Invisible. It is much easier to use for your type of setup. Pupil Core is mostly meant for controlled lab environments.

papr 05 November, 2020, 14:17:02

@user-c780d4 Feel free to contact our sales team at info@pupil-labs.com if you have detailed questions in this regard. 🙂

user-a98526 05 November, 2020, 14:25:04

Perhaps it is because of the plugin functionality of Pupil Core - I have only just started using Pupil Invisible. Pupil has helped me a lot in robot control. Thank you for your help.

user-c780d4 05 November, 2020, 14:59:54

@papr Do I get raw data with Invisible? Do I calculate my own attention/engagement metrics, or would your software do that?

papr 05 November, 2020, 15:01:51

@user-c780d4 You get gaze data in scene camera coordinates, and soon we will be releasing marker-based surface tracking in Pupil Cloud, which allows you to generate heatmaps for predefined areas of interest. Do you have a reference for what you would use as a higher-level metric for attention/engagement?

user-c780d4 05 November, 2020, 15:06:26

@papr From some papers I read, pupil size, saccades and gaze are all being used to analyze attention. I was hoping that, since Invisible isn't a "research device" like the Core, there would be an algorithm to calculate that automatically.

papr 05 November, 2020, 15:09:26

@user-c780d4 This is definitely the vision that we have for Pupil Invisible and Pupil Cloud. But Pupil Cloud is being built from the ground up. It will take a while until we are at the point where we can calculate high-level attention metrics. Pupillometry is currently not available for Pupil Invisible, but we are working on that as well.

user-c780d4 05 November, 2020, 15:11:02

Should I use the Core then?

papr 05 November, 2020, 15:12:44

@user-c780d4 if you need access to pupillometry data, Pupil Core is currently your only option. 😕

user-c780d4 05 November, 2020, 15:14:32

@papr Core isn't wireless, correct? Can data be collected using your software without writing code? I heard that it has to be Python.

papr 05 November, 2020, 15:17:40

@user-c780d4 Please contact info@pupil-labs.com in this regard. You do not need to write code to collect data. But you might need to calculate higher level metrics like attention yourself.

user-8effe4 06 November, 2020, 00:20:10

Hello,

My question might be repetitive but it is really urgent as I ran out of time trying lots of techniques.

I have been trying to plot the "fixation_on_surface" file coordinates on the original picture of the surface, but it never worked (because the surface is always distorted, and Pupil Capture or Player converts these coordinates so that they are correct on the surface, which makes them not directly usable for plotting on a surface image).

So, I have used the SURF method to compare each frame of the video with the original surface image and create an affine model, which I used to transform the fixation coordinates (norm_pos_x and norm_pos_y) to their equivalents on the surface.

But still, even now, when the surface is bigger than the screen or the subject moves a lot, the results are not good (full of errors).

Now, is there any method/technique/algorithm I can apply to solve this problem?

Any help would be really appreciated.

papr 06 November, 2020, 09:05:07

@user-8effe4 You are correct that Player maps gaze and fixations into an undistorted surface space. Either (1) you plot the undistorted mappings on an undistorted image of the surface (recommended), or (2) you use the scene camera-based gaze and fixations (before surface mapping) and plot them on the distorted scene camera video.
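
A minimal sketch of option (1), assuming a reference image that matches the defined surface and the usual export columns; note that surface norm_pos coordinates have their origin at the bottom left, so the y axis needs flipping for image pixel coordinates:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import pandas as pd

img = mpimg.imread("surface_reference.png")            # illustrative file name
height, width = img.shape[:2]

fix = pd.read_csv("fixations_on_surface_<name>.csv")   # illustrative file name
fix = fix[fix["on_surf"] == True]                      # keep fixations that landed on the surface

x_px = fix["norm_pos_x"] * width
y_px = (1.0 - fix["norm_pos_y"]) * height              # flip y: surface origin is bottom-left

plt.imshow(img)
plt.scatter(x_px, y_px, s=40, c="red")
plt.show()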

user-6e3d0f 06 November, 2020, 09:57:05

Hi, is any of you using the Pupil Mobile app for research? I contacted [email removed] and they said it's deprecated but still usable. Anyone in here who is working with it? I want to use it in a strict experimental setting, so I need to make sure it works 100%. Any experiences with that? Otherwise I'll stick to a cable to the computer. Also, when I use Pupil Mobile but stream it to my desktop and start calibration, validation and recording from there, is there anything problematic about that? I would still use Pupil Mobile for the streaming part; any negative points about that?

papr 06 November, 2020, 09:58:20

@user-6e3d0f if you use Pupil Mobile, record on the phone. Recording the stream in Capture is not reliable.

user-6e3d0f 06 November, 2020, 10:01:33

but if I start it from the desktop it is streamed to, is it saved on the computer or on the mobile?

papr 06 November, 2020, 10:58:22

If you hit the R button in Capture on the left, it will record on the computer. If you use the remote recording plugin, it will record on the phone

user-8effe4 06 November, 2020, 14:31:05

@papr So if I plot the "fixation_on_surface" coordinates on the clean image I used for the experiment, they would fit? Or am I required to do some data processing?

papr 06 November, 2020, 14:39:51

If the surface definition is aligned to fit that image then yes.

user-8effe4 06 November, 2020, 14:46:44

@papr I think the surface in the video is a bit rounded and the subject is moving their head a lot. I don't know if that is a deal breaker or not.

papr 06 November, 2020, 15:09:53

@user-8effe4 depends on how well the surface is detected. If you want you can share the recording with data@pupil-labs.com and we can have a look.

user-8effe4 06 November, 2020, 16:08:23

OH OK THANKS I WILL

user-8d1ce2 06 November, 2020, 19:18:37

Is there a "getting started" document for pupil mobile? I've downloaded the app on my phone and can do a remote start/stop from capture. I've got it set (I think) to save data on my phone, but all the files appear to be empty. When I try to open the saved folder in player it crashes the program. I'm sure I'm just missing a step somewhere, but I can't seem to find any document that tells me how to set it up on my phone or on the capture program. Also, if using mobile isn't a great option because it is no longer supported, does anyone have a good recommendation for a mini computer that I could place in a backpack?

user-608a0d 09 November, 2020, 13:14:15

Hi, I have two questions: 1) We have been getting fairly sizeable differences in pupil diameters between left and right eyes when recording binocularly (1-2 mm difference, with some variability). Are we doing something wrong? We have previously worked with SMI mobile eye tracking and never got such big L-R eye differences. 2) We've also been getting very large pupil diameters (6-9 mm). In the same testing environment, our SMI glasses gave much smaller numbers. Does the pupil diameter (diameter_3d in the Pupil Player export) really correspond to proper millimeters?

user-7daa32 09 November, 2020, 14:29:31

Hello

Please, what could be responsible for the camera windows hanging and becoming unresponsive?

Chat image

user-7daa32 09 November, 2020, 14:30:21

Chat image

papr 09 November, 2020, 14:34:52

@user-92dca7 Could you please respond to @user-608a0d when you have time? Message link: https://discord.com/channels/285728493612957698/285728493612957698/775347524859985942

user-da7dca 09 November, 2020, 15:34:29

Hey guys, I would like to use the RealSense (305) as a world cam and tried the following plugin: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11. Unfortunately, the code architecture seems to have changed so much that it is no longer possible to use the plugin, since the following import: from camera_models import load_intrinsics (line 18 in the plugin) is no longer available. Does anyone know how I should proceed? Thanks in advance!

papr 09 November, 2020, 15:40:27

@user-da7dca The easiest way would be to use Pupil v1.23. Alternatively, one would need to adjust the plugin in the following way:

Line 18
- from camera_models import load_intrinsics
+ from camera_models import Camera_Model

Line 332
- self._intrinsics = load_intrinsics(
+ self._intrinsics = Camera_Model.from_file(

user-92dca7 09 November, 2020, 16:00:22

Hi @user-608a0d, yes, pupil diameter estimates are reported in mm. Both eyes are measured independently, with the global scale of each radius depending on the quality of the fit of the respective 3D eye model. If the eye model is estimated too far away from the eye camera, this will lead to an overestimation of the pupil radius. This could explain the differences you observe between the left and right eye and the overall size. If possible, prior to your actual measurement, check that both the left and right eye model looks good and then freeze both models during your experiment (possible in the GUI).

papr 09 November, 2020, 16:03:39

@user-7daa32 Hi, it looks like there is an error that causes the software to hang. Could you share the capture.log file after reproducing the issue? You can find it in the pupil_capture_settings folder in your home directory.

papr 09 November, 2020, 16:10:56

@user-8d1ce2 Could you share the player.log file after reproducing the issue when opening the recording with Pupil Player? You can find it in the pupil_player_settings folder in your home directory.

user-7daa32 09 November, 2020, 16:12:35

@user-8d1ce2 Could you share the player.log file after reproducing the issue when opening the recording with Pupil Player? You can find it in the pupil_player_settings folder in your home directory.
@papr I can't find a folder for version 2.5, just a WinRAR archive file.

papr 09 November, 2020, 16:13:57

@user-7daa32 This folder is not part of the installation or download folder. Please try searching for pupil_capture_settings in your Windows search bar.

user-8d1ce2 09 November, 2020, 16:46:55

@papr What is the best way for me to share the log file?

papr 09 November, 2020, 16:47:11

@user-8d1ce2 just upload it here 🙂

user-8d1ce2 09 November, 2020, 16:49:05

Log from attempt at mobile capture.

player.log

papr 09 November, 2020, 16:51:26

@user-8d1ce2 The logs indicate that the application shuts down but there is no indication of a crash. Could you share the recording with [email removed] such that we can try to reproduce the issue?

user-7daa32 09 November, 2020, 17:05:54

@user-7daa32 This folder is not part of the installation or download folder. Please try searching for pupil_capture_settings in your Windows search bar. @papr

Chat image

user-7daa32 09 November, 2020, 17:05:59

Chat image

papr 09 November, 2020, 17:06:44

@user-7daa32 The above screenshot shows the correct folder. Please share the capture.log file.

user-8d1ce2 09 November, 2020, 17:14:00

@papr OK, I've sent the data

papr 09 November, 2020, 17:42:13

@user-8d1ce2 it looks like something went wrong during the conversion of the original recording? Do you still have a copy of it on the phone? Could you share it as well?

user-8d1ce2 09 November, 2020, 17:45:33

@papr Yes, I still have it on my phone, but it will take me a bit to figure out how to get it to you.

papr 09 November, 2020, 17:48:40

@user-8d1ce2 The easiest way should be to connect to phone to the computer, enable file transfer in the usb setting notification, and copy the recording from Movies -> Pupil Mobile -> local_recording -> ...

user-8d1ce2 09 November, 2020, 17:59:20

@papr That was actually what I did the first time, so I don't know that sending it again will change anything, but I've gone ahead and done that. You should receive it shortly.

papr 09 November, 2020, 18:00:32

@user-8d1ce2 The first shared recording was partially converted by Player. I hope that you were able to copy the unmodified files from the phone to reproduce the issue.

papr 09 November, 2020, 18:00:56

Unless, you opened the recording without copying the recording from the phone to the computer in the first place?

user-8d1ce2 09 November, 2020, 18:07:12

@papr Got it. Hopefully the new version will work then.

papr 09 November, 2020, 18:07:38

@user-8d1ce2 The second recording has also been modified. 😕

papr 09 November, 2020, 18:08:27

An unmodified version must not have an info.player.json or info.mobile.csv file. Instead, there should only be an info.csv file (plus the other video related files).

user-8d1ce2 09 November, 2020, 18:11:48

@papr OK, I'll try it one more time. I didn't delete the folder on my computer before transferring, so the added files were probably still left behind. It should be there in a minute.

user-8d1ce2 09 November, 2020, 18:18:22

@papr Shoot, it's saying the zipped file is too big to e-mail. I just sent you a link to a Google folder.

papr 09 November, 2020, 18:22:30

@user-8d1ce2 This time you shared the original 👍 I was able to open it successfully in Player. Could you give it another try, too? Ideally, install the latest v2.5 version first. This way we can be sure to use the same version.

user-8d1ce2 09 November, 2020, 18:34:09

@papr OK, so it opened, but there isn't any eye data in it. Are you seeing anything other than the world camera?

user-8d1ce2 09 November, 2020, 18:35:52

@papr I am using v.2.5

papr 09 November, 2020, 18:36:44

@user-8d1ce2 Yes, this is expected for a Pupil Mobile recording. Please run the post-hoc processing https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-908b50 09 November, 2020, 23:24:20

@user-908b50 The reason why we subtract the pdist result from 1.0 is that the cosine metric is defined as

1 - (u @ v) / (norm2(u) * norm2(v)) in order to be a proper distance metric.

https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html

But we do not want the result of the distance metric but the actual angle.
@papr Okay, thanks a lot! pdist now makes sense! It was indeed the Salvucci dispersion metric. I used the attached paper to inform the maximum dispersion threshold. It compares different dispersion metrics. I was confused between the radius threshold and the Salvucci dispersion metric based on the formula. Based on the author's suggestions, a range of 0.7 - 1.3 degree radius is best. A 1.0 radius threshold is equal to about a 1.8 Salvucci dispersion metric.

user-908b50 09 November, 2020, 23:25:07

Here it is!

APP.pdf

user-b4120d 10 November, 2020, 04:35:45

Hello everyone - we just purchased a Core with USB-C connector and I was just wondering if the community here has some thoughts on "recommended" cameras to pair with it... couldn't really find this sort of info on the website so thought someone here might have an idea 🙂

user-b4120d 10 November, 2020, 06:04:31

For the world camera I mean 😉

user-e94c74 10 November, 2020, 07:52:39

Hi, currently I'm looking into a timestamp and synchronization problem. It seems like start_time in info.player.json and the timestamp in exports_info are different; the timestamp in exports_info is the time of the first frame of the world camera. The two timestamps differ by a few thousand milliseconds. Here I have a question: I want to obtain the start time as in info.player.json (both Pupil Time and Unix Time) and the first frame time as in exports_info DURING THE RECORDING WITH PYTHON/ZMQ. Is there a way to figure out such values during the recording experiment (I can't find them among the Network API commands)? Or should we find them manually after the experiment? Thank you in advance 😄

user-7daa32 10 November, 2020, 14:45:41

@user-7daa32 The above screenshot shows the correct folder. Please share the capture file. @papr I am unable to open Discord on my laptop. Please send me an email address so I can send the file. Thanks

papr 10 November, 2020, 15:01:28

@user-7daa32 [email removed]

user-8d1ce2 10 November, 2020, 20:26:23

@papr Following up on the post-hoc processing for data collected with mobile. Is it possible to calibrate in Capture using a single marker and save that calibration for post-hoc processing? I like the ability to verify/visualize the accuracy of the calibration that is offered in Capture, so I was hoping that I'd be able to use that calibration, but I don't see a way to save it.

papr 10 November, 2020, 20:30:03

@user-8d1ce2 You can run the validation as part of the post-hoc calibration plugin. No need to save it during the recording.

user-074809 10 November, 2020, 22:02:33

Has anyone transformed gaze vectors from the Pupil Core to a world space? I have motion capture markers on the Pupil eye tracker (to create its vector space relative to the world space) and I have known targets in the real world (to provide me with some calibration and validation data), but my calibration transformations do not seem to be working. If anyone has done this and has a procedure I could look at, I would greatly appreciate it.

user-b4120d 11 November, 2020, 01:46:34

@papr do you know of recommended cameras to use with the Core’s USB-c Mount?

papr 11 November, 2020, 07:02:59

@user-074809 Our head pose tracker does that. Things to keep in mind are that you need to estimate the headset's pose in world space for every scene camera video frame, and that calibration targets need to be in scene camera coordinates if you want to use the built-in calibrations.

papr 11 November, 2020, 07:05:49

@user-b4120d Not really. Originally, it was meant for Intel's realsense cameras.

user-b4120d 11 November, 2020, 09:03:53

Thanks papr that makes a lot of sense

user-8effe4 11 November, 2020, 13:46:12

Hello, I have a question about the surface positions csv file: what is the difference between the columns "surf_to_img_trans" and "surf_to_dist_img_trans"?

papr 11 November, 2020, 13:53:09

@user-8effe4 The former transforms surface coordinates into undistorted 3d camera coordinates, and the latter transforms them into distorted pixel space. See https://docs.pupil-labs.com/core/terminology/#coordinate-system as reference
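
A minimal sketch of how such a transform could be applied, assuming the column holds a 3x3 homography (the parsing of the CSV cell into a matrix is left out, as the exact serialisation may vary):

import numpy as np
import cv2

def map_surface_to_image(H, surface_points):
    """Map Nx2 surface coordinates through a 3x3 homography H (e.g. from surf_to_img_trans)."""
    pts = np.asarray(surface_points, dtype=np.float64).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, np.asarray(H, dtype=np.float64)).reshape(-1, 2)

# Example (H parsed from the surf_to_img_trans column): map the surface corners into the image.
# corners = map_surface_to_image(H, [(0, 0), (1, 0), (1, 1), (0, 1)])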

user-8effe4 11 November, 2020, 13:55:37

oh ok thank you

papr 11 November, 2020, 16:22:37

@user-7daa32 Thank you for providing the log files. The issue is caused when the 3d detector debug window is being minimized on Windows. Other OS are not affected.

As a workaround, please do not minimize the debug window. Either leave it open or close it completely.

We will fix this issue in our next release. Fix https://github.com/pupil-labs/pupil/pull/2047

user-7daa32 11 November, 2020, 16:30:07

@user-7daa32 Thank you for providing the log files. The issue is caused when the 3d detector debug window is being minimized on Windows. Other OS are not affected.

As a workaround, please do not minimize the debug window. Either leave it open or close it completely.

We will fix this issue in our next release. Fix https://github.com/pupil-labs/pupil/pull/2047 @papr That's right! Noted. Thanks

user-2150ee 11 November, 2020, 23:21:05

I had an issue come up that I can't seem to find an answer for anywhere. My core headset has some issues with the camera connectors, and over time the wires have broken going into the connector. I was working with one cameras for a while, but as of today both eye cameras won't connect. I either need to get a new connector, or a full set of wiring I can use to rebuild the harness. Info on where I can get the connectors or order the wiring would be great.

papr 11 November, 2020, 23:21:50

@user-2150ee please contact info@pupil-labs.com in this regard

user-2150ee 11 November, 2020, 23:22:47

Ok, I figured it might be a common thing -- thanks

user-da7dca 12 November, 2020, 10:01:05

Hey, maybe you can help me out: I have 2 custom eye cameras which I want to use in Capture. However, they share the same device name and serial number, which results in a random camera assignment to the eye0 and eye1 windows when initializing a new capture session from old settings. Is there a preferred way to deal with this problem? Thanks in advance!

user-3cff0d 12 November, 2020, 16:50:40

@papr https://youtu.be/D6AIqIkTAQA We thought you might want to see this- some results from the neural network we're using to mask eye videos to process through Pupil software. This is some of the latest generated results, but I've run several prior revisions of the neural network's development alongside Pupil Core with great success.

papr 12 November, 2020, 16:58:23

@user-3cff0d Nice! The pupil detection is pretty solid, even when partially occluded during blinks. The iris acts out from time to time, but is mostly stable 👍

user-3cff0d 12 November, 2020, 17:03:37

This particular model was actually trained on images fairly unlike the video on the left, which is likely why the iris and sclera detection isn't spot on. @user-331121 would know more about the training dataset and process than I

user-178ab2 13 November, 2020, 14:04:13

Hi everyone. I have a question as I'm planning the setup of a new lab and was wondering if you could please give me some advice. We will soon have an immersive CAVE VR system which will have to be integrated with a bunch of wearable devices. The participant will be inside this CAVE while wearing various instruments (like wearable EEG), all of which are wireless. Data from the EEG and the other devices will be recorded wirelessly by a laptop. The main goal is to have the participant free to move in this environment without being tethered to any instrument via wires. The multimodal setup will also include a Pupil Core. A very important point is that all the recordings have to be synchronized, so I'm planning to have a control PC which handles the acquisition of all the various components over the network using LabStreamingLayer. I'm wondering if you have any suggestions on how best to record the eye tracking data without having the participant directly connected to a computer via the USB cable, and how this can be integrated with the PC running Unity. I thought about two options:

1) Have a mini-PC or a tablet running Windows 10 placed inside a backpack worn by the participant. Pupil Core can be connected to it via USB, and the data could then be streamed over the network via LSL.

2) Have Pupil Core connected to an Android mobile phone and use Pupil Mobile to stream data to the PC running Unity - which would also run Pupil Capture or Pupil Service. I've never used the app; I'm wondering if it is stable and able to send data with no or only minor delays?

The eye tracking data will then have to be fed into Unity, I guess using the hmd-eyes Unity plugin. Do you think either one of these two options would work, or do you have any suggestions / see any potential issues?

Thank you all!

papr 13 November, 2020, 14:06:23

@user-178ab2 Can you confirm that you indeed mean a Pupil Core headset and not one of our AR/VR add-ons?

user-178ab2 13 November, 2020, 14:07:44

Yes a pupil core headset, sorry

papr 13 November, 2020, 14:12:36

@user-178ab2 Interesting. Have you checked if there is sufficient space in the VR headset for it? I would worry that this is not the case.

Btw, @user-d8853d is developing a Python script that sends video frames from a Raspberry Pi to Pupil Capture over the network [1]. You could connect the eye cameras to the Raspberry Pi and stream the eye videos to Capture, while Unity uses the hmd-eyes ScreenCast [2] feature to stream the scene video. Capture would run pupil detection and gaze estimation, and publish the results using the LSL relay plugin. You just need to make sure that the Raspberry Pi and Unity use a synchronized clock to timestamp their video frames.

[1] https://github.com/Lifestohack/pupil-video-backend [2] https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#Screencast

user-178ab2 13 November, 2020, 14:14:42

We won't have a VR headset, it's just a CAVE, with projection screens all around. So we will use the Pupil Core headset normally

Thanks so much for your suggestion! I hadn't considered a Raspberry Pi. I will definitely look into that

papr 13 November, 2020, 14:16:43

@user-178ab2 In this case, you should be able to just use the Raspberry Pi. @user-d8853d's backend still requires a time sync feature. I think he would be very happy if you contributed that to his project. We can discuss implementations in software-dev

user-178ab2 13 November, 2020, 14:17:24

Amazing! Yes that would be great

user-6e3d0f 13 November, 2020, 14:36:16

Am I missing something? I want to do a Single Marker Calibration with Pupil Mobile. So I started Pupil Mobile and checked in Pupil Capture that everything is set up correctly. Then I start the calibration and need to move around because my testing marker is not near my monitor. Is there any other way to start the calibration? How do I validate with a single marker?

papr 13 November, 2020, 14:37:06

@user-6e3d0f Did you connect the headset to the computer running Capture, or do you stream the video?

user-6e3d0f 13 November, 2020, 14:37:39

headset is connected to mobile

user-6e3d0f 13 November, 2020, 14:37:42

and streamed onto desktop

papr 13 November, 2020, 14:39:24

@user-6e3d0f So, you are at the computer, the subject stands further away, you start the calibration on the computer, go to the subject, show the marker, perform the procedure, (1) hide the marker, go back to the computer, and stop the calibration. Everything after (1) can be replaced with (2) show the calibration stop marker.

user-6e3d0f 13 November, 2020, 14:40:08

XD

user-6e3d0f 13 November, 2020, 14:40:16

I may have always used the Stop Marker 😄

user-6e3d0f 13 November, 2020, 14:40:20

Thanks @papr

papr 13 November, 2020, 14:40:40

Haha, yeah, that happens 🙂

user-6e3d0f 13 November, 2020, 14:51:35

But how do you validate with single marker calibration?

papr 13 November, 2020, 14:52:15

@user-6e3d0f You do the same, but instead of clicking the C button, you click the T (test/validation) button.

user-6e3d0f 13 November, 2020, 14:52:28

and then perform the same head movements again?

user-6e3d0f 13 November, 2020, 14:52:45

like those head moves?

papr 13 November, 2020, 14:53:36

@user-6e3d0f If you e.g. rotated your head clock-wise during calibration, you could rotate it counter-clock-wise during validation.

user-6e3d0f 13 November, 2020, 14:54:29

Interesting. I'm just curious how it is calculated there. Need to do some research. But if you say this is a valid approach, I'll try it out.

papr 13 November, 2020, 14:54:33

@user-6e3d0f Given the dynamic nature of the single marker calibration, it is unlikely that you will repeat the same scene camera positions during the validation anyway

papr 13 November, 2020, 14:56:15

The concept is always the same: Capture records pupil data (in eye camera coordinates) and gaze target locations (in scene camera coordinates). Afterward, it matches them temporally and calculates a mapping function. The different calibration methods only differ in the type of head/eye movement and how Capture gathers gaze target locations.

user-6e3d0f 13 November, 2020, 15:03:51

I'm just a bit confused, because we try to validate on the same data as we calibrated with. With the screen marker calibration, for example, we validate on other given points. That's what confuses me. But maybe that's just a bit too complex for me, or I need to read more about the calibration/validation part here. Nevertheless, thank you @papr, as always, a great pleasure to get advice from you 🙂

papr 13 November, 2020, 15:10:01

@user-6e3d0f One major difference is that the single marker calibration usually generates a denser "gaze target cloud" (highly depends on the actual head movement) compared to the screen marker calibration, that generates smaller clusters of gaze targets.

papr 13 November, 2020, 15:10:45

The denser the validation gaze target cloud, the more meaningful the validation result will be.

user-6e3d0f 13 November, 2020, 15:12:35

https://i.imgur.com/xG7pKz3.png that looks a bit small, doesn't it?

user-6e3d0f 13 November, 2020, 15:12:47

as the calibration area

papr 13 November, 2020, 15:13:01

@user-6e3d0f It does. Let me check, I had an example video somewhere.

user-6e3d0f 13 November, 2020, 15:13:38

Would be actually great to see a video for it :).

user-6e3d0f 13 November, 2020, 15:14:13

I'll try to measure eye movements on a mirror (a person 2 meters in front of a mirror), so the field of view and calibrated area should be somewhat close to the mirror size; I hope that's possible

papr 13 November, 2020, 15:20:27

@user-6e3d0f I was not able to find it. I recorded quickly a new one and will send it to you via a DM

user-6e3d0f 13 November, 2020, 15:20:54

Thank you very much!

user-222203 15 November, 2020, 13:12:50

Hello. I have a question. I installed the Pupil GUI on Ubuntu 18.04. I already installed the dependencies and the repo. I started the Pupil software, but I got an error like this

user-222203 15 November, 2020, 13:12:51

world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/home/can/pupil/pupil_src/launchables/world.py", line 127, in world
    from pyglui import ui, cygl, version as pyglui_version
  File "pyglui/ui.pyx", line 1, in init pyglui.ui
  File "pyglui/cygl/utils.pyx", line 1, in init pyglui.cygl.utils
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

world - [INFO] launchables.world: Process shutting down.

user-222203 15 November, 2020, 13:14:17

I think that I have a different version of numpy... I want to know the version of numpy which is used by this GUI

user-222203 15 November, 2020, 13:15:40

my numpy version is 1.13.3

user-d8853d 15 November, 2020, 16:03:15

@user-222203 I had a similar problem. I don't know if it will work for you, but you can try upgrading numpy.

user-d8853d 15 November, 2020, 16:03:42

pip install numpy --upgrade

user-d8853d 15 November, 2020, 16:04:12

If it still doesn't work, try: pip install numpy --upgrade --ignore-installed

user-10fa94 15 November, 2020, 22:08:39

hi! with pupil player, is there a way to export the full 360 degree view captured from the wide angle lens?

user-b7ea86 16 November, 2020, 09:41:24

Good morning everyone.

I have Pupil Core glasses and I have a question about them. As far as I know, when you perform a calibration, there is a correction value for both eyes, right? Where do I get this value from? I read the values from the glasses cyclically via a Python script (Network API), but among all these values I didn't find one that puts the offset of the two pupils in relation to each other. Does anybody know where I can get it from?

Best regards Sebastian

user-178ab2 16 November, 2020, 09:45:42

@user-178ab2 Interesting. Have you checked if there is sufficient space in the VR headset for it? I would worry that this is not the case.

Btw, @user-d8853d is developing a python script that sends video frames from a RaspberryPI to Pupil Capture over the network [1]. You could connect the eye cameras to the raspberry pi, and stream the eye videos to Capture, while the unity uses the hmd-eyes ScreenCast [2] feature to stream the scene video. Capture would run pupil detection and gaze estimation, and publish the results using the LSL relay plugin. You just need to make sure that the RaspberryPI and Unity use a synchronized clock to timestamp their video frames.

[1] https://github.com/Lifestohack/pupil-video-backend [2] https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#Screencast
@papr sorry to bother you again. Thanks for all the useful suggestions. I looked into what you recommended and I'm worried about delays if the video is streamed from the Raspberry Pi to the PC running Unity & Capture, which then needs to process it and pass the gaze estimation back to Unity.

Do you think it is possible to use the hmd plugin in Unity & stream to it the gaze data already estimated from another PC? This other PC would run Core and could send the gaze/pupil data over the network to the Unity PC?

papr 16 November, 2020, 09:51:14

@user-178ab2 I was not aware that you needed the gaze signal in real-time. Do you use it for interaction with the environment?

papr 16 November, 2020, 09:52:34

The usage of EEG and LabStreamingLayer suggested that you wanted to process the data post-hoc. In this case, you do not need to worry about the delay as long as the clocks are properly synchronized.

user-178ab2 16 November, 2020, 09:54:47

Yes, sorry, I didn't make myself very clear. I do need the gaze signal in real time to interact with the environment, and I was wondering if that can be an input to hmd-eyes when sent from another PC over the network

papr 16 November, 2020, 09:56:33

@user-178ab2 In this case, you would have to skip using the PI, connect the headset directly with a USB cable to the computer doing the gaze estimation.

papr 16 November, 2020, 09:57:52

@user-178ab2 Be aware that the gaze estimation is relative to the scene camera, i.e. you will probably have to do some additional mapping step to map the gaze from scene camera coordinates to the actual CAVE coordinates (assuming that you need that).

papr 16 November, 2020, 09:58:30

To answer your actual question, the hmd-eyes plugin is able to receive gaze in scene camera coordinates, yes.

user-178ab2 16 November, 2020, 10:08:06

Thanks so much. Yes, I'd need to transform the coordinates from the scene camera space to the CAVE space. I'm taking into consideration all the various options to see which one is more feasible. I might try what you said now in the first instance: have a PC connected to the headset and running the gaze estimation. This PC would then stream this data to the Unity PC, where the hmd-eyes plugin should be able to receive this data from the network. Are there functions already implemented in the hmd-eyes tool to read data from the network? (apologies, these are very basic questions. it's the first time ever I approach all of this)

papr 16 November, 2020, 10:09:26

@user-178ab2 Yes. Read more about it here: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md (You can ignore the calibration part as you would not use the hmd-calibration)

papr 16 November, 2020, 10:09:48

@user-178ab2 Also, I can recommend having a look at https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

user-178ab2 16 November, 2020, 10:15:07

Amazing, thank you so much @papr This is extremely helpful. I will look at those

papr 16 November, 2020, 10:17:02

@user-222203 I can also recommend to uninstall any numpy versions installed via apt-get

papr 16 November, 2020, 10:18:45

@user-10fa94 The wide-angle lens only has a field of view of 139°x83° at 1080p. This is the highest field of view that you can get with the official Pupil Labs cameras. Therefore, we do not support 360° videos.

papr 16 November, 2020, 10:21:46

@user-b7ea86 The mapping of pupil (eye camera coordinates) to gaze data (scene camera coordinates) is more complex than an offset. Subscribe to calibration.result to get the class name of the implemented mapping function and its parameters.

user-b7ea86 16 November, 2020, 10:57:21

@papr thank you very much. You are a great machine!! 🙂

user-a6e660 16 November, 2020, 12:03:56

Hello everybody, I'm just about to connect the glasses to my mobile phone with a USB-C cable. I have installed the Pupil Mobile App and can view the data in the app. I read there is a way to get the data from another PC in the same network. There should be settings in Pupil Capture (PC) that I can't find. Can someone help me with my problem. Thanks

papr 16 November, 2020, 12:06:45

@user-a6e660 You can stream the video from Pupil Mobile to Pupil Capture for monitoring purposes. The recorded data needs to be transferred via USB.

user-a6e660 16 November, 2020, 12:12:31

@papr that's exactly what I want, I've already seen the official video from Pupil Labs (https://www.youtube.com/watch?v=atxUvyM0Sf8&feature=emb_logo), but I can't find the setting option shown in the video. What can that be?

user-a6e660 16 November, 2020, 12:12:47

Chat image

papr 16 November, 2020, 12:29:46

@user-a6e660 When your device is in the same network as the computer running Pupil Capture, go to the Video source menu in Capture, and select your device from the "Activate device" selector.

user-200ca9 16 November, 2020, 12:29:54

Hey guys! We're trying to figure out the accuracy of the glasses. Namely, we are trying to understand the error the glasses produce between the real point (let's say the center of the calibration marker) and the gaze point marked by the glasses. Is there an easy way to extract that information?

papr 16 November, 2020, 12:30:31

@user-200ca9 Yes, checkout these docs https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy

user-200ca9 16 November, 2020, 12:31:06

Thank you! i'll check it out!

user-200ca9 16 November, 2020, 12:32:49

@papr is this data exported anywhere? we've seen this visually in the pupil capture but haven't found the accuracy in any of the CSV files..

papr 16 November, 2020, 12:33:31

@user-200ca9 No, it is not. You can reproduce these values in Pupil Player in the post-hoc calibration plugin.

user-a6e660 16 November, 2020, 12:40:30

@papr The glasses are in any case in the same network and successfully connected to the cell phone, but no device is displayed in Pupil Capture. Any other ideas what could be the cause?

Chat image

papr 16 November, 2020, 12:41:14

@user-a6e660 Let's check the logs. Please share the pupil_capture_settings/capture.log file

user-200ca9 16 November, 2020, 12:41:46

@papr so i've found a short video showing manual calibration with the post-hoc, those the data points that the user chooses during the small sample are then exported?

user-200ca9 16 November, 2020, 12:41:59

does*

user-200ca9 16 November, 2020, 12:43:23

https://www.youtube.com/watch?v=mWyDQHhm7-w

user-200ca9 16 November, 2020, 12:43:37

(this is the video we checked)

papr 16 November, 2020, 12:45:30

@user-200ca9 The reference locations are not exported but saved in an intermediate format. You can read them from offline_data/reference_locations.msgpack using msgpack.
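
For anyone looking this up later: a minimal sketch of how one might load that file with Python and msgpack. The internal layout is an intermediate format and may differ between Pupil versions, so inspect the result before relying on specific keys.

import msgpack

recording = "/path/to/recording"  # hypothetical recording folder

with open(f"{recording}/offline_data/reference_locations.msgpack", "rb") as f:
    reference_locations = msgpack.unpack(f, raw=False)

# Print the raw structure first; key names are not guaranteed to be stable.
print(reference_locations)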

user-200ca9 16 November, 2020, 12:46:26

I see, we will check it out! thanks!

user-a6e660 16 November, 2020, 12:54:38

@papr

capture.log

papr 16 November, 2020, 12:56:29

@user-a6e660 It looks like Capture is choosing the wrong network interface (VMware Virtual Ethernet Adapter) for the connection. Are you running Capture in a VM?

papr 16 November, 2020, 12:58:12

Probably not, correct? In any case, you will need to disable this network interface in your settings such that Capture chooses the correct network interface. Unfortunately, a manual selection is not possible.

user-a6e660 16 November, 2020, 13:37:26

Thanks very much, yes you're right! I will try it in another network.

user-7daa32 16 November, 2020, 13:56:37

Hello

Please, how do you use the Annotation plugin? It's hard to understand from the user guide. Is there any video for it?

Would I click their various hotkeys any time I want to start and end?

If I have Start (S) and Finish (F), do we manually click these buttons?

I don't know if I am making sense again!

papr 16 November, 2020, 14:03:37

@user-7daa32 Correct, just click the buttons if you want to trigger the event. Please be aware that annotations are single points in time, not a period. Therefore, you need dedicated start and end annotations, like you suggested with S and F, to annotate periods.

user-7daa32 16 November, 2020, 14:06:08

@user-7daa32 Correct, just click the buttons if you want to trigger the event. Please be aware that annotations are single points in time, not a period. Therefore, you need dedicated start and end annotations, like you suggested with S and F, to annotate periods. @papr thanks. Can I have many start annotations and end annotations, all in one video?

papr 16 November, 2020, 14:06:36

@user-7daa32 Yes, as many as you like. Annotations are not unique.

papr 16 November, 2020, 14:07:12

Just for completion, you can create annotations using our Network API, too.
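
For reference, a minimal sketch of sending a remote annotation, modeled on the pupil-helpers annotation example. It assumes Capture runs locally with Pupil Remote on the default port 50020 and the Annotation plugin enabled; the label "trial_start" is just an example.

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

# Ask for the PUB port and the current Pupil time so the annotation
# timestamp lives in the same clock as the recording.
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscription a moment to register

annotation = {
    "topic": "annotation",
    "label": "trial_start",  # example label
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))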

user-7daa32 16 November, 2020, 14:07:20

@user-7daa32 Yes, as many as you like. Annotations are not unique. @papr and it does not seem to be exactly accurate in time

papr 16 November, 2020, 14:08:22

@user-7daa32 If you want to be more accurate in time than pressing a button manually, you will have to use a script/programming or create annotations post-hoc in Pupil Player.

user-200ca9 16 November, 2020, 14:55:39

@papr hey, I was going through the user guide, and saw the following (in relation to the markers): - An individual marker can be part of multiple surfaces. - The used markers need to be unique, i.e. you may not use multiple instances of the same marker in your environment. Don't these 2 points contradict one another?

user-7daa32 16 November, 2020, 16:40:21

@user-7daa32 If you want to be more accurate in time than pressing a button manually, you will have to use a script/programming or create annotations post-hoc in Pupil Player. @papr each time I open the annotation plugin in Player, my computer freezes. Player stops responding.

papr 16 November, 2020, 16:42:09

@user-7daa32 I cannot reproduce the issue. Could you share the pupil_player_settings/player.log file, similar to the capture.log file the other day, but with a different file name and in a different folder?

nmt 16 November, 2020, 17:17:34

@user-200ca9 An individual marker can be used as part of multiple surfaces. See this example with 6 markers. You can define 2 surfaces here, where the middle markers are used in both surfaces. What you cannot do is put 2 identical markers in the same recording.

Chat image

user-200ca9 16 November, 2020, 17:18:04

oohhhh ok! got it! thank you!!!

user-d8853d 16 November, 2020, 18:27:39

Congratulations on the new release 2.6 😃

user-7daa32 16 November, 2020, 19:00:43

@user-7daa32 I cannot reproduce the issue. Could you share the pupil_player_settings/player.log file, similar to the capture.log file the other day, but with a different file name and in a different folder? @papr Sent. I am experiencing this in Capture too.

papr 16 November, 2020, 19:00:58

@user-7daa32 Saw it. Please use a newer version of Pupil Player.

papr 16 November, 2020, 19:01:45

@user-7daa32 I do not see any issues in the Capture log

papr 16 November, 2020, 19:03:09

@user-7daa32 Yeah, you are using 2.0.161. The issue was fixed in v2.0-177

papr 16 November, 2020, 19:03:35

@user-7daa32 So, you can either use a 2.0 version from https://github.com/pupil-labs/pupil/releases/v2.0 or use the latest version released today

user-1529a4 17 November, 2020, 10:31:52

hey guys, can someone explain which angle is referred to in this image?

Chat image

papr 17 November, 2020, 10:42:58

@user-1529a4 Accuracy is calculated as the average angular offset (distance) (in degrees of visual angle) between fixation locations and the corresponding locations of the fixation targets. Precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation.

user-1529a4 17 November, 2020, 11:03:19

@papr what do you mean when you say visual angle

user-1529a4 17 November, 2020, 11:03:20

?

papr 17 November, 2020, 11:04:45

@user-1529a4 Imagine two rays, originating from the scene camera. One going through the gaze target (ground truth) and one through the estimated gaze location. By visual angle, we refer to the angle between these two rays.

user-1529a4 17 November, 2020, 11:06:25

hmm I understand what you're saying, but there is a hidden assumption (if I understand correctly) that you are referring to these angle in 2D, because in 3D we have 2 angles to refer to, right?

user-1529a4 17 November, 2020, 11:07:24

this*

user-1529a4 17 November, 2020, 11:08:23

(like the phi and theta we are exporting into the export file..)

papr 17 November, 2020, 11:09:53

@user-1529a4 Not necessarily. You can also measure the smallest angle between the rays. No need to split into spherical components. We use the arccos of the cosine similarity to calculate the angle.

papr 17 November, 2020, 11:10:25

Cosine similarity of A and B: (A @ B) / (||A|| * ||B||)

papr 17 November, 2020, 11:10:42

@ is the dot product in this case.
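
As a small illustration of that formula, here is one way to compute the angular offset between two 3D gaze rays with numpy (the example rays are made up):

import numpy as np

def angular_offset_deg(a, b):
    # Angle between two rays: arccos of the normalized dot product.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical rays in scene camera coordinates: ground truth vs. estimated gaze
print(angular_offset_deg([0.0, 0.0, 1.0], [0.02, 0.0, 1.0]))  # ~1.15 degrees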

user-1529a4 17 November, 2020, 11:10:49

aaahhh I see what you are saying...

user-1529a4 17 November, 2020, 11:11:02

so this angle is of the dot product

user-1529a4 17 November, 2020, 11:11:23

ok got it, thank you!

user-3c006d 18 November, 2020, 15:33:37

Hey, is there any possibility to change the head camera? We would need a camera which is not fish-eyed. I need to be able to read the content the test person looked at in the video. It's not possible with the fish-eye camera.

papr 18 November, 2020, 15:37:27

@user-3c006d Your Pupil Core comes with a narrow lens that has nearly no distortion. Have you given that a try?

user-3c006d 18 November, 2020, 15:39:08

no, I didn't even know about that till now xD

user-3c006d 18 November, 2020, 15:39:11

okay i try it

user-3c006d 18 November, 2020, 15:46:53

thank you! MUCH BETTER!

papr 18 November, 2020, 15:48:01

@user-3c006d After switching lenses, you should reestimate your camera intrinsics

user-3c006d 18 November, 2020, 15:51:34

alrighty! thanks!

user-765368 18 November, 2020, 22:48:14

@papr Is there a tutorial about getting live video from the eye cameras using the network API? It is stated that it is possible but not explained how...

user-d8853d 19 November, 2020, 07:38:34

@user-765368 there is documentation here. https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote

user-765368 19 November, 2020, 08:32:50

it is not there... or at least i couldn't find it...

papr 19 November, 2020, 08:48:08

@user-765368 Have you tried running the code examples linked by @user-d8853d? Were they successful?

user-0ee84d 19 November, 2020, 08:49:55

@papr is there any option to change the camera parameters to improve the image quality?

papr 19 November, 2020, 08:50:42

@user-0ee84d You can change camera related parameters in the Video Source menu of each window.

user-0ee84d 19 November, 2020, 08:52:32

Thanks

user-0ee84d 19 November, 2020, 09:10:56

@papr is it possible to change those settings in Pupil Player?

papr 19 November, 2020, 09:33:28

@user-0ee84d Player just shows the recorded video. Exposure time needs to be adjusted before the recording.

user-765368 19 November, 2020, 10:03:47

@papr yes, the code works, but I only managed to record data and couldn't display it. Is there a way, using the network API, to also live-stream the feed from the eye camera?

papr 19 November, 2020, 10:06:01

@user-765368 this example receives the raw eye images from the eye cameras and uses opencv to visualize them. Is this what you would like to do? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
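
For completeness, a condensed sketch of the same idea for the eye cameras. It assumes Capture runs locally on the default port, the Frame Publisher plugin is enabled and set to BGR format, and that pyzmq, msgpack, numpy and OpenCV are installed.

import zmq
import msgpack
import numpy as np
import cv2

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.eye.")  # eye camera frames only

while True:
    topic, payload, *extra = sub.recv_multipart()
    meta = msgpack.unpackb(payload, raw=False)
    if meta.get("format") != "bgr" or not extra:
        continue  # this sketch only handles raw BGR frames
    img = np.frombuffer(extra[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    cv2.imshow(topic.decode(), img)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break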

user-0ee84d 19 November, 2020, 10:52:29

@user-0ee84d Player just shows the recorded video. Exposure time needs to be adjusted before the recording. @papr I tried to set the auto exposure mode to auto... It throws an error message stating "World: could not set value. Auto exposure mode"

papr 19 November, 2020, 10:53:11

@user-0ee84d The naming of these options might be a bit confusing. Choose "aperture priority mode" for auto exposure.

user-765368 19 November, 2020, 10:58:02

@papr yes, thanks!

user-0ee84d 19 November, 2020, 11:25:55

Thank you. Why does the camera turn into a fish-eye camera if I set the resolution to 1920x1080? If it's 1280x720, it looks normal but with bad quality.

papr 19 November, 2020, 11:28:00

@user-0ee84d The default wide-angle lens is a fish eye lens. If you want you can switch lenses. Checkout my previous conversation with @user-3c006d https://discord.com/channels/285728493612957698/285728493612957698/778644090664255505

720p is a crop of the least-distorted part of the 1920x1080 view.

user-0ee84d 19 November, 2020, 11:29:05

@user-0ee84d The default wide-angle lens is a fish eye lens. If you want you can switch lenses. Checkout my previous conversation with @user-3c006d https://discord.com/channels/285728493612957698/285728493612957698/778644090664255505

720p is a crop of the least-distorted part of the 1920x1080 view. @papr thank you

user-467cb9 20 November, 2020, 08:36:49

Hello, is it possible to get video (a live stream) from the Pupil Core cameras in real time? If so, what should I read to do it?

user-467cb9 20 November, 2020, 08:37:43

I would like to display this video in WPF Application in real time.

user-3ede08 20 November, 2020, 09:35:01

I have recorded some data and would like to have the current date time. How could I transform the timestamps to it?

from datetime import datetime

timestamp = 1970207.964740
dt_object = datetime.fromtimestamp(timestamp)

print("dt_object =", dt_object)

This returns dt_object = 1970-01-23 20:16:47.964740

user-d8853d 20 November, 2020, 09:38:02

@user-3ede08 The timestamp you are trying to convert is a monotonic time in Python, not an epoch time.

user-d8853d 20 November, 2020, 09:41:56

Hello Pupil Labs team, the eye camera is 200 Hz and the DIY camera is 30 Hz. Is there any advantage to having a high-speed eye camera? Does it make eye tracking more accurate? I understand that we will have 6.6 times more data, but does it translate to better quality? Have you done any studies on that?

user-3ede08 20 November, 2020, 10:25:31

Hey @user-d8853d thanks for your answer. My goal is to transform the Pupil recorded timestamps into the actual date time. Could you provide some code lines that do that?

user-8ed000 20 November, 2020, 10:48:02

Hello. I have some trouble getting our Pupil Core running on one of our Microsoft Surfaces. The frame rate is super low, always around 3-4 FPS. I can't really find a reason for this as both processor and RAM are not at max limit and there are no background processes running. Am I missing something, or is the Surface's processor simply too weak? On my normal laptop (which is admittedly a lot stronger) it works OK at around 15 FPS. Any recommendations are welcome 😃

papr 20 November, 2020, 10:52:32

@user-764f72 Could you please link your time sync tutorial for @user-3ede08?

user-3ede08 20 November, 2020, 10:53:23

Is it possible to get the timestamps displayed on the video (world.mp4)?

papr 20 November, 2020, 10:54:08

@user-3ede08 with a custom plugin, yes. I am currently not at work, but once I am, I will be able to share it with you.

user-3ede08 20 November, 2020, 10:54:40

That will be fantastic, thanks

user-3ede08 20 November, 2020, 10:54:59

Will you do that today?

user-764f72 20 November, 2020, 11:07:39

@user-3ede08 hi, this tutorial should help you figure out how to transform the timestamps into datetime objects: https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
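
In short, the idea from that tutorial is to offset Pupil time by the difference between the system and synced start times stored in info.player.json. A rough sketch (the key names are taken from the newer recording format; double-check them against your own recording):

import json
from datetime import datetime, timezone
import numpy as np

recording = "/path/to/recording"  # hypothetical path

with open(f"{recording}/info.player.json") as f:
    info = json.load(f)

# Offset between wall clock (Unix epoch) and Pupil time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

world_ts = np.load(f"{recording}/world_timestamps.npy")
first_frame = datetime.fromtimestamp(world_ts[0] + offset, tz=timezone.utc)
print("first world frame at", first_frame)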

user-3ede08 20 November, 2020, 11:09:02

Hi @user-764f72, thank you for that.

nmt 20 November, 2020, 11:49:46

@user-d8853d It really depends on your requirements. At 30Hz, you will get a general understanding of what the wearer is looking at, but you will miss out on more detailed information on, for example, saccades.

papr 20 November, 2020, 11:58:49

@user-d8853d @nmt Technically, you would calculate eye movement data on the 200Hz eye cam signal. Therefore, you are not missing out in that sense. The only issue is that the spatial resolution in scene coordinates might not be accurate during scene movement.

papr 20 November, 2020, 12:00:24

@user-467cb9 check out the frame publisher examples in our pupil helpers repository. https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py

nmt 20 November, 2020, 12:01:10

Aha I thought @user-d8853d was intending to track the eyes at 30Hz

papr 20 November, 2020, 12:01:59

@nmt oh, you are right. I misread that. I thought he was talking about the scene camera.

user-3ede08 20 November, 2020, 12:24:04

@user-3ede08 with a custom plugin, yes. I am currently not at work, but once I am, I will be able to share it with you. @papr Can you still do it today?

papr 20 November, 2020, 12:34:16

@user-3ede08 Install it to ~/pupil_player_settings/plugins

vis_datetime.py

papr 20 November, 2020, 12:35:46

@user-3ede08 I just noticed that this plugin is a bit older and not yet compatible with the newest recording format. @user-764f72 could you please update it to make it compatible?

user-3ede08 20 November, 2020, 12:47:04

@papr Did you mean that I should download vis_datetime.py and paste the file into ~/pupil_player_settings/plugins? How will I connect it to the file /Users/user_name/recordings/2020_11_20/000/exports/000/world.mp4?

papr 20 November, 2020, 12:50:52

@user-3ede08 This is how you add a plugin. https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Afterward, open the recording in Player, activate the plugin and you should see the datetime in the bottom left.

papr 20 November, 2020, 12:51:20

@user-3ede08 Depending on your recording, you might want to wait for @user-764f72 to update the plugin.

user-d8853d 20 November, 2020, 12:52:56

@papr @nmt So does a higher eye camera frame rate mean more accurate gaze estimation results? Is there any study you guys did?

user-3ede08 20 November, 2020, 13:16:51

@papr The file is now in ~/pupil_player_settings/plugins/vis_datetime.py but does not appear in the plugin list. I am waiting for the new file from @user-764f72. I am using Pupil Player version 2.5.0

Chat image

user-764f72 20 November, 2020, 13:30:31

vis_datetime.py

user-764f72 20 November, 2020, 13:31:56

@user-3ede08 please use the plugin file I've attached above. I've updated the way the recording metadata is read, so it should work now. The plugin you're looking for in the plugin list is called Clock.

user-3ede08 20 November, 2020, 13:41:31

Thank you @user-764f72. Once I activate it, it displays the Player Synced and Player System time in the middle of Pupil Player, which disappears after some seconds. Is that the way it should behave?

user-764f72 20 November, 2020, 13:44:24

Chat image

user-764f72 20 November, 2020, 13:44:55

@user-3ede08 you should see the full date and time text at the bottom of the frame, as shown in the screenshot

user-3ede08 20 November, 2020, 13:45:43

Got it. It was hidden behind the graphs. @user-764f72 thank you so much

user-3ede08 20 November, 2020, 14:51:06

Meanwhile, is it possible to make it more precise? Something like this: 2020-11-20 08:35:33.534631968 (note the extra digits). I have changed ("%Y-%m-%d %H:%M:%S") to ("%Y-%m-%d %H:%M:%S:%M") in vis_datetime.py. It does add something after the seconds, but that number does not change over time. Any suggestions or ideas are welcome.

user-6e3d0f 20 November, 2020, 14:53:24

Chat image

user-6e3d0f 20 November, 2020, 14:53:38

I always get that error in Pupil Mobile. Any info?

papr 20 November, 2020, 14:55:19

@user-3ede08 Here is an overview of the possible formatting options https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior

%M refers to

Minute as a zero-padded decimal number.

You probably want %f

Microsecond as a decimal number, zero-padded on the left.
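
So the corrected format string would look roughly like this:

from datetime import datetime

now = datetime.now()
print(now.strftime("%Y-%m-%d %H:%M:%S.%f"))  # e.g. 2020-11-20 08:35:33.534631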

user-3ede08 20 November, 2020, 14:58:28

Thank you @papr

papr 20 November, 2020, 15:14:14

@user-6e3d0f The KeySensor records keyboard inputs, e.g. from a bluetooth-connected keyboard or presentation remote. Not sure why the error appears in this case though. You should be able to ignore it since it does not relate to the cameras.

user-6e3d0f 20 November, 2020, 15:26:50

👍🏾

user-2162fb 20 November, 2020, 15:55:35

I just received my Pupil Labs Core and it is not being recognized on Windows.

user-2162fb 20 November, 2020, 15:56:04

I followed the troubleshooting instructions from the website, but it is still not connecting...

papr 20 November, 2020, 15:58:58

@user-2162fb Please make sure the cable is connected properly, both in the USB port as well as the headset clip. If it already is, please check the device manager and see if you can find "Pupil Cam" entries in any of the following categories: Cameras, Imaging Devices, libusbk.

user-2162fb 20 November, 2020, 16:00:06

The cables are connected... I checked the device manager, but I can't see the Pupil cams. When I plug the headset in, it says that the USB device is not recognized.

papr 20 November, 2020, 16:00:55

@user-2162fb So there are no entries in the named categories, after connecting the device?

user-2162fb 20 November, 2020, 16:01:11

No. I don't see libusbk either

papr 20 November, 2020, 16:01:33

@user-2162fb ok, please contact info@pupil-labs.com in this regard.

user-2162fb 20 November, 2020, 16:04:44

Just emailed them.

user-3f5fe2 23 November, 2020, 21:26:19

Hi, I'm new to the Pupil Core world and I would like to ask for support on an issue I've found installing Pupil 2.6 on Windows 10.

user-3f5fe2 23 November, 2020, 21:28:34

I installed all the dependencies as indicated on the website and I tried to use the installer. I launched Capture, but when I record a session I don't find any audio in the recording. Also, in the plugin window I don't see any options for audio recording. Would you help me understand where I'm going wrong in using or setting up the software? Thank you very much in advance for your attention. Max

papr 23 November, 2020, 21:29:25

@user-3f5fe2 There is nothing wrong with the software or your installation. We discontinued audio recording support with our Pupil v2.0 release.

user-3f5fe2 23 November, 2020, 21:30:59

Oh, thank you. What do you suggest for recording audio in sync with the camera? I need to mark audio stimuli to see whether pupil variations correlate with them. Thank you

user-3f5fe2 23 November, 2020, 21:31:40

Maybe a script that launches the microphone and Capture?

papr 23 November, 2020, 21:32:14

@user-3f5fe2 Maybe recording audio with a different framework like labstreaminglayer might work better for you.

user-3f5fe2 23 November, 2020, 21:33:03

Could you help in one more thing please?

papr 23 November, 2020, 21:33:21

Sure, I will try 🙂

user-3f5fe2 23 November, 2020, 21:33:56

Why is there a "sound only" or "silent" option in the recording settings even though sound is not recorded anymore?

user-d8853d 24 November, 2020, 10:48:59

Is there any plugin to calculate the visual accuracy? I mean, your eye is looking at an object but the pupil tracking is showing the gaze somewhere else, and you calculate the angle between those two points.

papr 24 November, 2020, 10:49:23

@user-d8853d yes, check out the accuracy visualizer. It is loaded by default.

user-f22600 24 November, 2020, 12:25:35

Hello, I'm considering getting the USB-C mount or high-speed camera configuration. Is it easy to write a plugin for a new sensor, say like Mynteye, and mount it on Core?

nmt 24 November, 2020, 16:20:58

@user-f22600 Our video backend only lists cameras that fulfil a specific set of requirements. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-0ee84d 24 November, 2020, 16:21:09

@papr Is there any python example to obtain the gaze position in the world space/view space?

papr 24 November, 2020, 16:33:41

https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py Just comment line 23 and uncomment line 24. Afterward, you will receive gaze in normalized scene coordinates. See this for a coordinate system reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system

Please be aware that you need to calibrate first, before you can get gaze data.
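
Put differently, the core of that helper script boils down to something like the following sketch (assuming Capture runs locally with Pupil Remote on port 50020; the same pattern works for other topics, e.g. surface-mapped gaze under "surfaces."):

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")  # instead of "pupil." in the linked example

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    print(topic, gaze["norm_pos"], gaze["confidence"])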

user-0ee84d 24 November, 2020, 16:22:02

Also, is there any Python example to load a folder containing previous recordings and retrieve the gaze positions and frames? :)

papr 24 November, 2020, 16:35:43

Usually, you would want to process the recording in Pupil Player. You can write code to read the intermediate data format if you wanted, though. It is documented here: https://docs.pupil-labs.com/developer/core/recording-format/
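
If you do want to read the intermediate files directly, a rough sketch for gaze.pldata might look like this (assuming the layout of msgpack-encoded (topic, payload) pairs described in the recording-format docs):

import msgpack

recording = "/path/to/recording"  # hypothetical path

with open(f"{recording}/gaze.pldata", "rb") as f:
    for topic, payload in msgpack.Unpacker(f, raw=False, use_list=False):
        datum = msgpack.unpackb(payload, raw=False)
        print(topic, datum["timestamp"], datum["norm_pos"])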

papr 24 November, 2020, 16:24:03

@user-f22600 Additionally, to the built-in backend, you could use @user-d8853d's opencv backend, that runs independently of Capture and streams the video to Capture via the network API: https://github.com/Lifestohack/pupil-video-backend/

Writing your own backend will require a lot of work, to be honest. If you are not dependent on the depth data, I would highly recommend buying the high-speed camera configuration.

user-0ee84d 24 November, 2020, 16:30:11

@papr trying to install pupil throws the following error

ERROR: Command errored out with exit status 128: git clone -q https://github.com/zeromq/pyre /tmp/pip-install-muyuu9_p/pyre Check the logs for full command output.

papr 24 November, 2020, 16:37:08

Why are you trying to run Pupil from source? There is nearly no reason to do that anymore. It is much simpler to run the bundled application that you can download here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads

user-0ee84d 24 November, 2020, 16:31:05

Fixed it...works

user-0ee84d 24 November, 2020, 16:31:10

Just pasted an updated url

user-0ee84d 24 November, 2020, 16:34:02

Thank you

user-0ee84d 24 November, 2020, 16:34:09

I will check

user-0ee84d 24 November, 2020, 16:34:41

Which version of ceres is required?

user-0ee84d 24 November, 2020, 16:35:18

Unable to build wheel for pupil-detector

user-0ee84d 24 November, 2020, 16:36:29

Thank you

papr 24 November, 2020, 16:37:53

I am assuming that you know about the bundled applications. I apologize for my tone if this was not the case.

user-0ee84d 24 November, 2020, 16:38:11

I would like to make use of the python api so I was trying to install it in my conda environment

papr 24 November, 2020, 16:40:13

There might be a misunderstanding. There are two types of APIs of which both work with the bundled applications: 1) Network api 2) Plugin api

See this section of the documentation for reference: https://docs.pupil-labs.com/developer/core/overview/#where-to-start

user-0ee84d 24 November, 2020, 16:39:01

I have installed the bundled application. However I need the python api in my conda environment.

user-0ee84d 24 November, 2020, 16:40:16

This seems to be interesting though... It seems that I will get the gaze data remotely while Pupil Capture is running somewhere, if I understand it correctly.

papr 24 November, 2020, 16:40:28

100% correct

user-0ee84d 24 November, 2020, 16:40:43

Perfect. Thanks

user-6e3d0f 25 November, 2020, 15:11:06

Did any of you work with Pupil Mobile? If so, what mobile phones did you use? Any experience with the OnePlus 6T?

user-067553 25 November, 2020, 16:28:53

We have used Xiaomi Mi a2

user-6e3d0f 25 November, 2020, 15:11:20

(it works well on my 7T, but we need a cheaper phone)

papr 25 November, 2020, 15:11:48

@user-6e3d0f Yes, the 1+ phones should all work well in terms of compatibility

user-200ca9 25 November, 2020, 15:12:44

i've dabbled a bit with it, using galaxy s8. it works, but i've haven't dived deep into all the files it generates

user-6e3d0f 25 November, 2020, 15:13:33

And specs should not really make a big difference, right? The 6T is from late 2018.

user-6e3d0f 25 November, 2020, 15:13:46

Pupil mobile homepage says Android device (Nexus 5x or Nexus 6p)

wrp 25 November, 2020, 15:15:19

@user-6e3d0f please note that Pupil Mobile is no longer maintained. It should still be working on OnePlus devices as @papr notes, but we have not evaluated on the devices that you or @user-200ca9 have listed.

user-6e3d0f 25 November, 2020, 15:15:35

We got an old Samsung Galaxy S7, but don't know if that works well enough.

user-6e3d0f 25 November, 2020, 15:15:59

@wrp I know that. I have worked with it for about 2 weeks now, and so far we didn't see any problems with it. For our use case, a mobile setup is pretty much needed.

user-067553 25 November, 2020, 16:28:05

Hi all, I'm having this issue with the recordings that I've made with Pupil Mobile: I have a recording that lasts 40 minutes (more or less), and this information is also shown in the file "info.mobile.csv" in the recording directory, but when I try to open it in Pupil Player there are only 6 minutes. How can I fix that? Thank you

papr 25 November, 2020, 16:31:41

@user-067553 Does this happen for all of your recordings? Sounds like the scene camera got disconnected after 6 minutes. If you want you can share the recording with [email removed] and we can have a look.

user-067553 25 November, 2020, 17:04:54

Thank you @papr Is it possible to send only the info and the .npy files without sending you the whole recording? thanks

papr 25 November, 2020, 17:06:13

@user-067553 That would be a start. We can let you know if we need more files.

user-067553 27 November, 2020, 09:25:56

Hi @papr, I've sent the data to the address data@pupil-labs.com (do you need more information?) thanks

user-765368 26 November, 2020, 10:52:19

hey @papr, I'm trying to write a script that analyses the pupil polar coordinates. What is the origin for theta and phi? (I mean, in 3D space, what are the theta = 0 and phi = 0 directions?)

user-690703 26 November, 2020, 16:46:40

Hi! I'm not sure you are able to help me with this @papr , but I would like to give it a try anyway. (Btw, your tips were extremely helpful so far, I am really grateful for them). I'm trying to replicate a study measuring Pupil Core precision, and in their code the authors refer to a file called "info.csv" (from where I should get the timestamps), but I can't find such a file. Am I missing something, or did it perhaps get renamed in the meantime?

user-d8853d 26 November, 2020, 20:05:50

I am extremely interested in this study. Could you please update me on your study? Perhaps you would like to link your github source code or link to your study?

papr 26 November, 2020, 16:47:39

@user-690703 The new equivalent file is info.player.json

user-690703 26 November, 2020, 16:47:53

Thank you very much!

papr 26 November, 2020, 16:49:19

@user-765368 Unfortunately, this is not well documented 😬 And I do not know it by heart right now.

papr 27 November, 2020, 10:04:46

@user-067553 I won't be able to have a look at it today. I will let you know once I had a chance to do that on Monday. 🙂

user-067553 27 November, 2020, 11:07:47

Meanwhile, if you have any documentation to suggest for this kind of problem, I can also take a look and try to understand whether I can fix it on my own.

user-067553 27 November, 2020, 11:04:28

Ok, thank you

user-594d92 27 November, 2020, 13:35:02

Hello @papr, I'd like to ask you two questions. First: according to the developer documentation, we can get real-time eye movement data via https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone. I want to make sure whether this data is normalized only to points on the screen or to all points that the world camera captures. Second, we are trying to implement online fixation detection. May I ask where the code for the online fixation detection function is in the Pupil Core source code? Thank you!

papr 27 November, 2020, 13:42:58

@user-594d92 Gaze data is in scene camera coordinates. So if I understand your options correctly, the second one. If you want gaze in screen coordinates (first option), you will need to use surface tracking on your monitor and subscribe to surface-mapped gaze.

https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L669

papr 27 November, 2020, 14:50:12

@user-2ab393 Checkout the recent_events() method. The Fixation_Detector class is primarily a plugin, i.e. the glue code between algorithm and application. It reads the gaze data from the events dictionary, buffers it in a running window, and calls the gaze_dispersion function on the buffered data.

user-594d92 27 November, 2020, 13:58:20

So with Surface Tracking, is the data output in real time processed by the Surface Tracking plugin?

papr 27 November, 2020, 13:58:36

@user-594d92 Yes

papr 27 November, 2020, 13:59:13

Nearly all detections and analyses can be done in real time in Capture or post hoc in Player.

user-594d92 27 November, 2020, 14:02:59

ok, thank you! 👍

user-430fc1 27 November, 2020, 14:07:59

Windows / Sophos updates look to have broken my Pupil Capture. Has anyone run into a similar issue? Maybe this is one for local IT support...?

Chat image

papr 27 November, 2020, 14:09:27

@user-430fc1 Is it possible for you to restore the file via sophos? Afterward, Capture should run as usual.

papr 27 November, 2020, 14:11:27

https://support.sophos.com/support/s/article/KB-000036919?language=en_US

Generic ML PUA detections explained: Potentially Unwanted Application (PUA) is a term used to describe applications that, while not malicious, are generally considered unsuitable for business networks.

What to do: [...] If a user needs to allow a PUA to run, it can be done by an admin through the Sophos Central dashboard. Locate the alert for the detection on the device and select Allow from the options. Doing this will restore the application and stop it from being detected again.

user-430fc1 27 November, 2020, 14:13:21

@papr Ah OK, thanks, will probably need to get admin to do this as it's a departmental machine

user-2ab393 27 November, 2020, 14:47:23

Hello @papr, I'd like to ask you one question:

What is the input data of the online fixation detection algorithm?

Thank you!

user-2ab393 27 November, 2020, 14:57:47

@papr OK, thank you. Let me understand it.

user-6e3d0f 27 November, 2020, 15:37:08

Is the output in fixations_on_surface_<name> not handled the same as the export from the fixation detector? The fixation detector gave me 344 fixations, but the on_surface.csv has way more lines and shows shorter durations than what I see in the fixation detector.

papr 27 November, 2020, 15:39:13

Fixations with the same id should have the same duration. Since fixations can span multiple world frames, and the surface location could be different in each one, we map the fixation for each world frame. This is why you see more entries.

user-6e3d0f 27 November, 2020, 15:39:38

Yes, thats the case

user-6e3d0f 27 November, 2020, 15:41:26

If I want to work with this output data on my surface, and for example split my surface so that 0 - 0.5 is left and 0.5 - 1 is right, and I want to classify my fixations as left or right inside the frame, should I simply use every entry or only one entry per fixation_id? Because the position differs by a small amount within the same fixation_id. Or I could go with the average, right?

papr 27 November, 2020, 15:42:59

Group by fixation id, calculate average location, apply left/right classification. The less the surface moves during the fixation, the more valid this approach is
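
As a rough sketch of that recipe in pandas (the file name and the norm_pos_x/norm_pos_y column names are assumptions; check them against the header of your own export):

import pandas as pd

# Hypothetical export path; adjust to your recording
df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

per_fixation = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]].mean()
per_fixation["side"] = per_fixation["norm_pos_x"].apply(lambda x: "left" if x < 0.5 else "right")
print(per_fixation.head())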

user-6e3d0f 27 November, 2020, 15:44:07

Thanks, I will stick to that.

user-6e3d0f 27 November, 2020, 15:44:23

If the fixation is fixed on a wall, does that mean no movement? Or do you mean the less the world camera moves?

papr 27 November, 2020, 15:47:10

It is about the relative movement between surface and scene camera. If either one moves while the other does not, the fixation mapping will result in different locations.

user-2ab393 27 November, 2020, 15:46:02

Hi @papr, may I ask a question? When and how is the recent_events() method called, and what is the meaning of the parameter "events" that is passed in?

papr 27 November, 2020, 15:48:43

Pupil Capture uses a loop to process data. Each iteration processes one scene frame and the most recent gaze data. This is done by calling recent_events() on each plugin once per loop iteration. events contains data added by other plugins, e.g. frame and gaze data.
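
A minimal user plugin sketch illustrating that flow (it assumes the Plugin base class from the plugin API docs; not an official example):

from plugin import Plugin

class Gaze_Printer(Plugin):
    """Prints the gaze datums delivered during each loop iteration."""

    def recent_events(self, events):
        # 'events' is a dict filled by other plugins in this iteration,
        # e.g. events["gaze"] (list of gaze datums) and events.get("frame").
        for gaze in events.get("gaze", []):
            print(gaze["timestamp"], gaze["norm_pos"])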

user-5ef6c0 27 November, 2020, 23:12:23

Hello. I have an eye-tracking session that unfortunately was recorded over a few bad disk sectors. The file affected is world.mp4: when loaded into Pupil Player it works fine as long as you don't play back the 7-20 sec mark. The same with VLC. The rest of the file works fine in both programs. When I try to copy the file to another drive, the process fails at 6% of the copy. I imported the recording into Pupil Player anyway and exported a new world.mp4, skipping the first 840 frames of video. I thought that importing this new world.mp4 into Pupil Player with the rest of the files (the original ones) could work, since perhaps Pupil Player would simply match the shorter video with the corresponding data based on frames or timestamps, or something like that. However, Pupil Player crashes during load. I could forget about all this and just work on the original files (which I'm doing), but I'm afraid the faulty drive may die any minute now. So finally, this is my question: is there a way to match the shorter video with the original data? Either by removing the data corresponding to the first 840 frames from all the original files, or by somehow filling the missing 840 frames of video with blank footage? I actually thought the latter option could work, but then I thought this would probably mess up some of the video metadata and thus render it useless for Pupil Player. Anyway, any suggestions are welcome.

papr 27 November, 2020, 23:51:04

@user-5ef6c0 it should work if you also copy the exported world_timestamps.npy file into the recording folder

user-5ef6c0 28 November, 2020, 00:01:48

So what I just tried is:
a) imported the original files which are on the disk with bad sectors (including the corrupted world.mp4) into Pupil Player
b) in Pupil Player, exported from frame 840 to the final frame. I only included the world video exporter and raw data exporter plugins.
c) copied all original files from the disk with bad sectors into a new folder on a separate hard drive. I didn't include the original world.mp4. Let's call this new folder MIRROR.
d) copied the new world.mp4 and world_timestamps.npy files (which I obtained after exporting from the original in Pupil Player) into the MIRROR folder.
e) imported the MIRROR folder into Pupil Player, but Pupil Player crashed during load.
Is that what you were referring to? (PS: I edited to be more clear)

user-5ef6c0 28 November, 2020, 00:14:00

player - [INFO] camera_models: Loading previously recorded intrinsics...
player - [INFO] video_export.plugins.world_video_exporter: World Video Exporter has been launched.
player - [WARNING] video_capture.file_backend: Advancing frame iterator went past the target frame!
player - [ERROR] video_capture.file_backend: Found no maching pts! Something is wrong!
player - [INFO] video_capture.file_backend: No more video found
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "shared_modules\video_capture\file_backend.py", line 452, in recent_events_external_timing
  File "shared_modules\video_capture\file_backend.py", line 412, in get_frame
video_capture.base_backend.EndofVideoError

user-5ef6c0 28 November, 2020, 00:14:15

that's what the console displays

user-5ef6c0 28 November, 2020, 00:32:23

I will try to insert 839 black frames at the beginning of the exported video using ffmpeg and see if that works

user-7daa32 28 November, 2020, 16:34:24

Unable to add a surface in Pupil Capture

user-7daa32 28 November, 2020, 16:49:38

?? Please why?

papr 28 November, 2020, 16:35:21

@user-5ef6c0 ah, you need to delete the world_lookup.npy file after replacing the other file

papr 28 November, 2020, 16:35:35

it will be regenerated by Player based on the new files

papr 28 November, 2020, 16:50:55

@user-7daa32 You will have to provide more details for us to be able to help you. What is different than usual? Can you share a screenshot of the surface tracking menu and your scene video frame that you are trying to define a surface on?

user-7daa32 28 November, 2020, 16:53:33

Not clickable

Chat image

papr 28 November, 2020, 17:02:15

@user-7daa32 The issue is that surfaces cannot be added when the scene is frozen. Please disable the "Freeze scene" button and try again. There is a warning that should have shown up in this case but it looks like it is rendered behind the frozen scene image, effectively hiding it.

user-7daa32 28 November, 2020, 18:21:02

That's right

user-7daa32 28 November, 2020, 18:22:55

Can we use the annotation plugin and the surface plugin together? The annotation spreadsheet is empty and I don't know why. I was pressing keys on the keyboard. Maybe I need to press the annotation hotkeys while the world window is active?

papr 30 November, 2020, 11:07:50

Yes, the world window needs to be active to receive keyboard events.

user-292135 30 November, 2020, 07:32:06

Hi, we have a long Pupil Mobile recording, 1 h 20 min, and loaded it into Pupil Player 2.5. Pupil Player only shows 25 minutes, without many errors in the log. What can we do to recover the rest?

papr 30 November, 2020, 11:10:44

Your issue seems to be related to @user-067553 's issue here https://discord.com/channels/285728493612957698/285728493612957698/781194512865034270

I still need to check their recording to see if one can restore/access the remaining data.

user-2ab393 30 November, 2020, 08:20:25

Hello @papr, the Pupil Core can output data in real time (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone). I hope to use the real-time output data as the input of a real-time fixation detection algorithm. Now I have two questions. First of all, what does norm_pos in the real-time output refer to? Second, what is the input data of the real-time fixation detection in the eye tracker source code (https://github.com/pupil-labs/pupil/blob/master/pupil_src)?

papr 30 November, 2020, 11:07:22

1. Please see https://docs.pupil-labs.com/core/terminology/#coordinate-system and note that pupil norm_pos and gaze norm_pos are within two different coordinate systems (eye vs scene camera; https://docs.pupil-labs.com/core/terminology/#pupil-positions)
2. Our fixation detector uses gaze data as input
user-3c006d 30 November, 2020, 11:21:07

Hey, I still have trouble with calibration. I do follow the steps, but somehow I can't get the camera to detect the pupils properly. Also, the detected gaze in the recordings is not correct. It is not where it should be and is too wiggly...

papr 30 November, 2020, 11:28:25

@user-3c006d If the pupil detection is inaccurate, the gaze estimation will be, too. Feel free to share an example recording with [email removed] such that we can review the eye camera setup.

user-3c006d 30 November, 2020, 11:29:05

okay i will send it 🙂

user-3c006d 30 November, 2020, 12:03:22

done

user-3c006d 30 November, 2020, 13:12:30

did it arrive? had to send a dropbox link

papr 30 November, 2020, 13:13:03

@user-3c006d it did. We will come back to you via email once we have reviewed it.

user-3c006d 30 November, 2020, 13:13:25

okay thank you!

user-8ed000 30 November, 2020, 13:26:55

In the instructions for Surface Tracking at the very bottom there is a blog post mentioned about Surface Metrics with Pupil Player. That Link is dead though. Do you have the current link to that blog post? https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

papr 30 November, 2020, 13:28:18

@user-8ed000 I think we removed that link a while ago. It is possible that you are seeing a cached version of the documentation.

papr 30 November, 2020, 13:29:00

The blog post is out-of-date which is why we did not replace it with a newer link yet.

papr 30 November, 2020, 13:31:51

@user-8ed000 We have a few surface tracker related tutorials here: https://github.com/pupil-labs/pupil-tutorials

user-8ed000 30 November, 2020, 13:32:33

@papr Thank you, I will have a look at it

papr 30 November, 2020, 13:42:23

@user-292135 Could you please also share all ".npy" files of the recording with [email removed] @user-067553 could you please share the world.mp4 file?

user-292135 01 December, 2020, 04:02:12

I invited data@pupil-labs.com into my Box folder. thanks!

user-067553 30 November, 2020, 13:50:41

I have not exported it, because there is a mismatch between the durations (if you want, I can try to export). I have the world files in the "mjpeg" format; I can send you these. Is that ok? @papr

user-067553 30 November, 2020, 13:44:36

ok! thank you

user-067553 30 November, 2020, 13:52:11

I think I have more than 40 sessions (40 minutes each) with this kind of problem.

papr 30 November, 2020, 13:54:55

@user-067553 There should be no need to export anything. The world.mjpeg files are the correct ones.

user-067553 30 November, 2020, 13:55:51

Ok thanks

user-2ab393 30 November, 2020, 13:56:09

@papr Which of the real-time output data is the "gaze data" you mentioned?

papr 30 November, 2020, 13:57:16

@user-2ab393 Subscribe to gaze to get that data. Please be aware that you need to calibrate first to receive that data

user-2ab393 30 November, 2020, 14:19:40

@papr Why is it that when I use the data as the input of the fixation detection algorithm, the data displayed on the screen is quite different from the real one? Is surface tracking required? If so, can surface tracking be carried out in real time?

papr 30 November, 2020, 14:33:09

Why is it that when I use the data as the input of the fixation detection algorithm, the data displayed on the screen is quite different from the real one? What do you mean? The input to the fixation detector and the displayed gaze are the same. Please be aware that our fixation detector buffers data. It therefore includes a few older datums as well as the most recent gaze (which is displayed).

user-594d92 30 November, 2020, 15:41:36

Hello @papr, because the world camera is slightly higher than the center of the screen, the image of the screen in the camera is not a rectangle. So I would like to know: if the captured surface I get on the screen via the surface tracking plugin is not a rectangle but an ordinary parallelogram, will it cause any deviation when drawing points on the image from normalized coordinates?

Chat image

papr 30 November, 2020, 17:03:41

@user-594d92 The plugin assumes the surface to be a rectangle in real life. If it does not appear rectangular in the image, the plugin assumes it is distorted by perspective. But in this case it is possible that you need to re-estimate your camera intrinsics.

user-6e3d0f 30 November, 2020, 18:23:09

There is currently no way to define surfaces post hoc without markers, right?

papr 30 November, 2020, 20:38:29

@user-6e3d0f no, there is not

End of November archive