core


user-e2056a 01 August, 2019, 01:35:11

@papr Hi, what is the accuracy level of pupil diameter detection?

user-4ffdc3 01 August, 2019, 09:16:50

Dear pupil team, is there any possibility to try out the eyetracking glasses before buying?

papr 01 August, 2019, 11:55:41

@user-07d4db You basically need to create two sets of time ranges for each surface and calculate their intersections: 1. Surface detection range, calculated from surface_events.csv: Duration between enter and exit events for each surface. 2. Gaze on surface, calculated from gaze_positions_on_surface_X.csv: Duration of entries with consecutive positive on_surf values.

  1. refers to the time during which the surface was visible, 2. refers to the time during which the subject looked at the surface.

You can start by just calculating 2., but the file only includes gaze data while the surface was visible, i.e. the results might be incorrect if the surface disappeared while the subject was looking at it.
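
A minimal pandas sketch of both durations, assuming typical export column names (surface_name, event_type, world_timestamp in surface_events.csv; gaze_timestamp, on_surf in the gaze file); adjust them to your actual export:

```
import pandas as pd

events = pd.read_csv("surface_events.csv")
gaze = pd.read_csv("gaze_positions_on_surface_X.csv")

# 1. Surface visibility: sum the spans between enter and exit events.
surf = events[events["surface_name"] == "X"].sort_values("world_timestamp")
enters = surf.loc[surf["event_type"] == "enter", "world_timestamp"].to_numpy()
exits = surf.loc[surf["event_type"] == "exit", "world_timestamp"].to_numpy()
visible_duration = (exits - enters).sum()  # assumes strictly alternating events

# 2. Gaze on surface: sum the spans of runs of consecutive on_surf == True rows.
gaze = gaze.sort_values("gaze_timestamp")
run_id = (gaze["on_surf"] != gaze["on_surf"].shift()).cumsum()
on_surf_duration = sum(
    run["gaze_timestamp"].iloc[-1] - run["gaze_timestamp"].iloc[0]
    for _, run in gaze[gaze["on_surf"]].groupby(run_id)
)
```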

papr 01 August, 2019, 11:58:40

@user-e2056a Check out section 4.3, "Pupil size estimates", of https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf It uses synthetic images to evaluate pupil size estimation, since it is difficult to gather ground truth in this area.

papr 01 August, 2019, 12:09:00

@user-4ffdc3 You should have received a private message from @user-97ae1e in regards to your request.

user-bb9207 01 August, 2019, 14:10:21

Hi - I am using Pupil Mobile. What are the ways in which I can display the location data, i.e. where the participants have walked? And where is this data stored?

papr 01 August, 2019, 14:34:11

@user-bb9207 Hi, currently the location data is neither visualised nor exported. You would have to write a Pupil Player plugin or a custom script to read the location files and export that data.
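
For orientation, a minimal sketch of such a script, assuming the location stream is stored in Pupil's msgpack-based .pldata container (the file name location.pldata is a guess; check what your recording actually contains):

```
import msgpack

with open("recording/location.pldata", "rb") as f:
    # .pldata files are a stream of msgpack-packed (topic, payload) pairs,
    # where payload is itself a msgpack-encoded dict.
    for topic, payload in msgpack.Unpacker(f, raw=False, use_list=False):
        datum = msgpack.unpackb(payload, raw=False)
        print(topic, datum)
```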

user-c87bad 01 August, 2019, 16:32:49

@papr To sync the different software, I used the difference between the different start times, but it seems that the timestamp is not right. It seems there is a delay. Software A recorded a time, then I subtracted the delta of the start times from it. Sorry, I haven't tried the plugin yet.

papr 01 August, 2019, 16:36:48

@user-c87bad How big is the delay? What magnitude are we talking about?

user-c87bad 01 August, 2019, 16:38:32

@papr I don't have an exact figure, but it's less than 1 second.

user-c87bad 01 August, 2019, 16:39:50

What's the common range of the delay?

papr 01 August, 2019, 16:40:01

@user-c87bad Then this might be due to the device's time sync inaccuracy

papr 01 August, 2019, 16:40:23

Under a second sounds correct for normal network time sync

user-c87bad 01 August, 2019, 16:41:27

@papr All right! Thank you so much!

user-07d4db 01 August, 2019, 17:09:58

Dear Papr, I've got another question: I am looking for the coordinates of the AOIs I defined with the surface tracker plugin. Can you tell me where I can find these? I only found an unreadable Excel file called "surface definitions" in the raw data output. A hint from you would be very helpful! Thanks a lot!

user-c87bad 01 August, 2019, 17:12:46

@papr Sorry again. I've checked my timestamps in the file. The start time (system) is 1564678499.62692, the start time (synced) is 30555.503445168, and the time my video appears is 1564678521.14308, but when I check the gaze-on-surface timestamps, they start at 30562.56187. So there is a difference of around 61139.581475168. Is that ms? Also, I used the plugin but found no difference between the fixation and gaze timestamps. Is that right?

papr 01 August, 2019, 17:12:48

@user-07d4db the surfaces are defined as homographies. These basically give you matrices which you can use to transform between points in the world coordinate system and points in the surface coordinate system, and vice versa. I do not know their names by heart, though.

papr 01 August, 2019, 17:13:54

@user-c87bad timestamps are in seconds

papr 01 August, 2019, 17:14:12

Which timestamps did you subtract to get that difference?

user-c87bad 01 August, 2019, 17:14:56

I use the start time system - start time synced.

papr 01 August, 2019, 17:15:17

Also, the first gaze-on-surface timestamp is not the first gaze overall, but the time the surface is detected for the first time. This is not equivalent to the start time in info.csv

user-c87bad 01 August, 2019, 17:15:22

And then use my UNIX timestamp - this difference

user-c87bad 01 August, 2019, 17:16:53

yes, the time the video appears should be the time when the surface is detected

user-07d4db 01 August, 2019, 17:16:54

Thank you papr! But I am still confused: where can I find the coordinates of my surfaces? I want to know them so that I am able to check whether the fixations contained in the fixations.csv file were on the surface or not

user-c87bad 01 August, 2019, 17:20:08

@papr So the final difference is about 14 sec. That's too big.

papr 01 August, 2019, 17:24:44

@user-07d4db Which operating system are you on?

papr 01 August, 2019, 17:25:28

@user-c87bad Mmh, the question is which device is out of sync. Hard to tell 😕

user-07d4db 01 August, 2019, 17:28:13

What do you mean by operating system? I am conducting my master's thesis using Pupil Labs. Now I am trying to figure out how I can match the fixations with the coordinates of the surfaces I defined. Unfortunately I cannot use the fixations-on-surface files due to problems with the reproducibility of the data

papr 01 August, 2019, 17:28:26

The new version will export fixations_on_surface again, which includes the above information already. If you use Windows or Linux, I can link you the prerelease version.

papr 01 August, 2019, 17:28:44

I see, ok.

papr 01 August, 2019, 17:30:02

How Player calculates this is by multiplying the fixation's norm_pos with the world_to_surface transform matrix, yielding a location within the surface coordinate system. Then we simply check if the resulting point is between 0 and 1 for each dimension.

papr 01 August, 2019, 17:30:45

The matrix is exported in surf_positions<name>.csv as the img_to_surf_trans field
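
A minimal numpy sketch of that mapping, assuming the matrix has been parsed from the img_to_surf_trans column into a 3x3 array and operates on the normalized coordinates described above:

```
import numpy as np

def map_to_surface(norm_pos, img_to_surf_trans):
    # Homographies act on homogeneous coordinates: append 1, multiply,
    # then divide by the resulting scale factor.
    x, y, w = img_to_surf_trans @ np.array([norm_pos[0], norm_pos[1], 1.0])
    return x / w, y / w

# Hypothetical values: a fixation's norm_pos and an identity transform.
sx, sy = map_to_surface((0.48, 0.52), np.eye(3))
on_surf = 0.0 <= sx <= 1.0 and 0.0 <= sy <= 1.0
```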

user-07d4db 01 August, 2019, 17:34:46

thank you very much. So is it possible to match the normalised position coordinates of a fixation in the fixations.csv file with the information in surf_positions<name>.csv?

papr 01 August, 2019, 17:35:39

Yes, that is what fixations_on_surface is usually for

user-07d4db 01 August, 2019, 17:38:31

Okay! Mhh, well, we cannot use this file unfortunately. Could you explain further what m_to_screen and m_from_screen mean?

user-07d4db 01 August, 2019, 17:39:04

So are these the two relevant columns of the file?

papr 01 August, 2019, 17:39:46

The first step is to understand what a homography is: https://en.m.wikipedia.org/wiki/Homography_(computer_vision)

papr 01 August, 2019, 17:40:27

They are basically matrices to convert points between coordinate systems.

papr 01 August, 2019, 17:41:23

The coordinate systems in question are the scene image and the surface.

user-c87bad 01 August, 2019, 17:42:09

@papr But I just use time.time() and the keyboard. The difference should not be so large. I wonder if I am doing it the wrong way.

papr 01 August, 2019, 17:45:05

@user-c87bad Ah wait, you are using the same computer to record the data of both "sensors"?

papr 01 August, 2019, 17:46:19

Just have a look at clock.py please. Without having a look at your code it is difficult for me to judge if something goes wrong.

user-c87bad 01 August, 2019, 17:47:02

Yes, I am using the same computer. But is that a big problem?

papr 01 August, 2019, 17:48:51

No, then it is clear that something is wrong, since capture uses time.time() to calculate start time (system)

papr 01 August, 2019, 17:49:11

but wait a second

papr 01 August, 2019, 17:49:29

you said unix time of keyboard press - start time difference, correct?

papr 01 August, 2019, 17:49:41

and this result, what do you compare it to?

papr 01 August, 2019, 17:50:15

You basically need an event that is recorded in capture and externally to be able to calculate the time sync offset
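
A minimal sketch of measuring that offset against a running Capture instance via Pupil Remote (default port 50020; the "t" request returns the current Pupil time):

```
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address

local_before = time.time()
remote.send_string("t")                  # request current Pupil time
pupil_time = float(remote.recv_string())
local_after = time.time()

# Use the midpoint of the round trip as the local time of the measurement.
offset = (local_before + local_after) / 2 - pupil_time
# An externally recorded Unix timestamp can then be converted to Pupil time:
# pupil_ts = unix_ts - offset
```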

user-c87bad 01 August, 2019, 17:54:28

I compare that with the first world timestamp of the gaze on the surface. Because when I press, the video starts, which means the surface will be detected.

papr 01 August, 2019, 17:57:21

If it is about the first surface detection, then checkout surface_events.csv

papr 01 August, 2019, 17:58:25

Also, can you share the recording with data@pupil-labs.com , such that I can try to better understand the setup?

user-0fde83 01 August, 2019, 18:18:34

@papr Concerning @user-07d4db 's question: Did I understand you right that there is a different matrix for every surface to transform the fixations with? Is the surface coordinate system two-dimensional or three-dimensional? And where exactly can those matrices be found?

papr 01 August, 2019, 18:19:14

@user-0fde83 There are two matrices (img_to_surf_trans and surf_to_img_trans) for each surface for each frame

papr 01 August, 2019, 18:20:09

Exported in surf_positions<name>.csv

papr 01 August, 2019, 18:22:33

I need to clarify: This is the pre-v1.13 behavior. The >= v1.13 behavior is more complicated since it takes the distortion of the world camera into account

papr 01 August, 2019, 18:23:15

But to understand the general problem surface mapping tries to solve, it is important to understand the pre v1.13 behavior first

user-0fde83 01 August, 2019, 18:25:28

@papr I am trying to! So the first step would be to take the fixation data and transform it to the surface coordinate system using img_to_surf_trans, is that right?

user-34688d 01 August, 2019, 18:25:51

Hi everyone, I am currently facing an issue with Pupil capture and am desperately trying to find a solution:

https://github.com/pupil-labs/pupil/issues/1570

Problem: The recorded world camera video is often shorter than the eye camera video, which is a consequence of missing frames. These missing frames appear as long grey periods when playing the files in Pupil Player.

example file durations (m:s):
eye - 35:28, world - 18:53
eye - 30:36, world - 27:07

I have not had issues in the past 1-2 months, but I started doing longer recordings (~40 minutes) and it started appearing. I am also tracking 7 surfaces in total, but rarely more than 3 are visible at once.

Configuration:
Pupil Capture: 1.12.17
OS: Ubuntu 18.04, up to date
i5-7200U CPU @ 2.50GHz × 4, 15.6 GiB RAM, Intel HD Graphics 620
Pupil Core with fast world camera and 200Hz monocular eye camera
World camera: 1280x720 at 60fps
Eye camera: 400x400 at 120fps

This is the only thing holding back my study so if anyone has a solution, I would be extremely thankful.

user-e2056a 01 August, 2019, 18:26:55

@papr Thank you, I saw in the document that the pupil radius error is 0.01mm from a true value of 2mm. However, in 2d mode the pupil size unit is not mm but pixels. What is the accuracy level of the pupil diameter measurement in 2d mode?

user-c87bad 01 August, 2019, 18:27:00

@papr Hi! I've sent the data, and the enter time in surface_events is equal to my first gaze time on the surface.

user-c87bad 01 August, 2019, 18:27:12

Thank you so much!

papr 01 August, 2019, 18:27:26

@user-0fde83 No, the actual first step (which we have not talked about explicitly yet) is to correlate fixations to world frames, so that you know which surface transform matrix to use.

papr 01 August, 2019, 18:29:04

@user-34688d I saw that you posted the issue on Github before. Please just link the github issue the next time instead of copy-pasting the issue text, thank you.

user-34688d 01 August, 2019, 18:30:33

@papr added link

papr 01 August, 2019, 18:30:38

@user-34688d I will write an answer on Github for persistence. Please be patient, since you are asking multiple questions that all need time to consider

user-34688d 01 August, 2019, 18:31:19

@papr +1 thanks

papr 01 August, 2019, 18:32:25

@user-0fde83 Regarding correlation of fixations: Fixations often span multiple world frames, which makes it difficult to decide which transform matrix to use.

papr 01 August, 2019, 18:33:58

I think that the surface tracker tries to deduplicate these. Depending on the implementation, this might be the reason for fixations_on_surface not being reliably reproducible.

user-0fde83 01 August, 2019, 18:35:41

@papr What do you think is duplicated there?

papr 01 August, 2019, 18:37:45

It is important to understand that fixations are calculated from the point of view of the scene camera, i.e. if gaze does not move within the scene image, it is considered a fixation. A fixation can span multiple scene frames. Is this clear so far? (Surface tracking is one of the more complex processes in Pupil, so explaining it is not that easy for me 😕 )

user-0fde83 01 August, 2019, 18:40:57

Thank you a lot for trying, I am doing the best I can to understand. I got that so far, yes.

papr 01 August, 2019, 18:41:36

Ok, great. So let's assume we have three consecutive world frames during which we detected a fixation.

papr 01 August, 2019, 18:42:15

And during these we detect a moving surface, yielding three different transformation matrices.

user-0fde83 01 August, 2019, 18:42:24

A fixation with three frames, got it

papr 01 August, 2019, 18:42:51

Therefore, mapping the fixation with these matrices yields three different positions within the surface

papr 01 August, 2019, 18:43:06

i.e. a fixation in scene camera space != fixation in surface space

papr 01 August, 2019, 18:43:50

And looking at the implementation, until now, we have cheated our way around this fact by throwing two of the three mappings away.

user-0fde83 01 August, 2019, 18:44:15

One transformation matrix for each frame, right? How did you choose the frame to keep?

papr 01 August, 2019, 18:44:56

That is exactly the problem: we do not choose an explicit frame, but use a Python dictionary which uses the fixation ids as keys.

papr 01 August, 2019, 18:45:41

It is totally unpredictable which of the three frames is being removed. The only thing that we are sure of is that one surface mapping remains.
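
A minimal sketch of that failure mode (the fixation id and mapped positions below are made up):

```
# Three world frames map the same fixation (id 7) to three surface positions.
mappings = [
    (7, {"world_frame": 100, "norm_pos": (0.42, 0.51)}),
    (7, {"world_frame": 101, "norm_pos": (0.45, 0.49)}),
    (7, {"world_frame": 102, "norm_pos": (0.50, 0.47)}),
]

deduplicated = {}
for fixation_id, mapped in mappings:
    # Each assignment silently overwrites the previous mapping for this id;
    # which one survives depends on processing order, not on an explicit choice.
    deduplicated[fixation_id] = mapped

print(deduplicated[7])  # exactly one of the three mappings remains
```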

papr 01 August, 2019, 18:46:18

And as long as the surface does not move in relation to the scene camera, that is totally fine, since the mapping should result in roughly the same spot.

papr 01 August, 2019, 18:47:01

But if the surface moves during these three frames, we get something like a smooth pursuit movement from the point of view of the surface.

papr 01 August, 2019, 18:47:37

This is a fundamental problem when detecting fixations in scene camera space instead of a real world coordinate system.

papr 01 August, 2019, 18:48:32

The best solution for this would be to do head pose estimation, map gaze locations into the head coordinate system, and calculate fixations within this coordinate system.

user-0fde83 01 August, 2019, 18:49:43

What happens if some surfaces are only detected in a few of these frames? Does a fixation that would usually appear in the document disappear then?

papr 01 August, 2019, 18:51:03

To finish the thought: But at this point you are chaining so many estimation processes to each other, that small estimation errors accumulate quickly to big errors that make the final fixation detection impossible.

papr 01 August, 2019, 18:52:17

I am not sure if I understand the question. If a fixation and a surface detection overlap temporally, the surface mapper will try to map the fixation to the surface.

user-0fde83 01 August, 2019, 18:57:36

A wrong thought, don't bother. Wouldn't it be possible, if I have three frames in the fixation, to cut the fixation into three parts, convert every single part with its matrix, and then merge them into a new fixation after transformation?

papr 01 August, 2019, 19:00:08

A fixation does not have separable parts. A fixation is just the mean of all gaze data during that period.

papr 01 August, 2019, 19:00:48

What I would do instead is to map all gaze independently onto the surface, and then run a fixation algorithm on the result.

papr 01 August, 2019, 19:01:23

The problem is that it is not possible to calculate angular differences between gaze in this coordinate system 😄

user-0fde83 01 August, 2019, 19:02:01

Ok, I get the problem. What is your solution in 1.13 and after?

papr 01 August, 2019, 19:02:48

The surface is a normalised coordinate system. So a fixation algorithm would have to get the real surface size as an input, to judge, if something is a fixation.

papr 01 August, 2019, 19:03:20

v1.13 and above do not handle fixations differently. v1.13 just introduces correct handling of the world camera's distortion.

papr 01 August, 2019, 19:04:07

But what you could also do with the existing tools:

papr 01 August, 2019, 19:05:32
  1. Use surface tracker to map gaze to surfaces
  2. Use the fixation detector to detect fixations
  3. Find all gaze that belongs to a fixation
  4. Find the surface mapping results for these samples
  5. Calculate the mean of the mapped gaze norm_pos values
papr 01 August, 2019, 19:06:37

This gives you a fixation in surface coordinates, which should be an accurate mapping of the scene-coordinate fixation, given that the surface did not move much during the fixation.

user-0fde83 01 August, 2019, 19:08:35

1.-3. I understand without a problem, but which results do you mean by surface mapping results?

papr 01 August, 2019, 19:09:54

gaze_positions_on_surface<name>.csv -> x_norm, y_norm

papr 01 August, 2019, 19:12:24

The appropriate gaze data can be identified by its timestamp

papr 01 August, 2019, 19:14:29

So use the timestamps from 3. to find the gaze mapping results in 1.
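
A minimal pandas sketch of steps 3.-5., assuming typical export column names (id, start_timestamp, and duration in milliseconds in fixations.csv; gaze_timestamp, x_norm, y_norm in the gaze-on-surface file); adjust to your export:

```
import pandas as pd

fixations = pd.read_csv("fixations.csv")
gaze_on_surf = pd.read_csv("gaze_positions_on_surface_Surface_1.csv")

rows = []
for fix in fixations.itertuples():
    start = fix.start_timestamp
    end = start + fix.duration / 1000.0  # duration is exported in milliseconds
    # Steps 3./4.: gaze-on-surface samples that fall within the fixation.
    during = gaze_on_surf[gaze_on_surf["gaze_timestamp"].between(start, end)]
    if len(during):
        # Step 5.: the fixation location in surface coordinates.
        rows.append({"fixation_id": fix.id,
                     "x_norm": during["x_norm"].mean(),
                     "y_norm": during["y_norm"].mean()})

fixations_on_surf = pd.DataFrame(rows)
```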

user-0fde83 01 August, 2019, 19:17:35

As we searched today for an alternative way to get the data we need, we figured gaze_positions_on_surface has the same problem of not being replicable as fixations_on_surface does

papr 01 August, 2019, 19:18:12

Looking at the code, I do not see a reason why this should be the case.

user-0fde83 01 August, 2019, 19:20:51

That is what we saw, I don't know why

papr 01 August, 2019, 19:22:00

Ok, let's do an experiment. I give you a recording that has pre-detected gaze and surfaces. We use Player v1.12-17 to export it, and compare the gaze_positions_on_surface results.

user-0fde83 01 August, 2019, 19:24:28

Sounds good to me

user-0fde83 01 August, 2019, 19:30:24

What Player version should I use?

papr 01 August, 2019, 19:31:47

My result

papr_surface_demo_export-v1.12-7.zip

user-0fde83 01 August, 2019, 19:31:54

Ok, just give me a minute

papr 01 August, 2019, 19:31:59

Sure

user-0fde83 01 August, 2019, 19:40:31

ulysses2009data.zip

user-0fde83 01 August, 2019, 19:41:59

Here is my data

user-0fde83 01 August, 2019, 19:44:25

Doesn't look like the same output to me, also concerning gaze_positions_on_surface, or did I miss something?

papr 01 August, 2019, 19:44:50

I am investigating the differences right now

user-0fde83 01 August, 2019, 19:54:50

I am starting to wonder: they look the same as far as I can tell, but seem to have different sizes in terms of KB

papr 01 August, 2019, 19:55:47

So how did you make your comparisons today?

user-0fde83 01 August, 2019, 19:56:54

Also, because of the file size: with fixations_on_surface it was a pretty good measure to tell whether a file was the same or not. Why do the sizes differ?

papr 01 August, 2019, 20:01:23

This might be due to csv dialect differences

papr 01 August, 2019, 20:01:47

File size is definitely not the way to go when it comes to comparing the contents of a csv file

papr 01 August, 2019, 20:02:09

I am currently trying to read the csv with pandas and to compare values

user-0fde83 01 August, 2019, 20:04:58

Ok, thank you. I will look into our files to see whether there is a difference in the content

papr 01 August, 2019, 20:25:34

@user-0fde83 all gaze_positions_on_surface_* files are equal

user-0fde83 01 August, 2019, 20:36:13

Agreed, I downloaded the data multiple times here too and it always turned out to be the same, but the fixations_on_surface data did as well. We thought the mistake was the same, because sometimes the data is consistent and sometimes it is not. This file seems to be consistent. Do you get the same fixations? If you do, do you have a set of data where you don't, and does gaze_positions_on_surface change there?

papr 01 August, 2019, 20:47:49

I forgot to detect fixations 😄

papr 01 August, 2019, 20:51:24

Now the gaze files are not equal anymore oO

user-0fde83 01 August, 2019, 20:54:00

Where does it differ?

papr 01 August, 2019, 20:54:18
```
column             equal
---
id                  True
start_timestamp     True
duration            True
start_frame         True
end_frame           True
norm_pos_x         False
norm_pos_y         False
x_scaled           False
y_scaled           False
on_srf              True
```

user-0fde83 01 August, 2019, 20:57:52

I guess we proved the gaze files are affected as well 😅

papr 01 August, 2019, 20:58:31

This is due to m_to_screen being different for the two recordings

user-0fde83 01 August, 2019, 20:58:50

Ah, ok, so no proof

papr 01 August, 2019, 20:59:09

But this shouldn't be the case though

user-0fde83 01 August, 2019, 21:03:44

Could this change also be responsible for missing values in fixations_on_surface or is it another mistake?

papr 01 August, 2019, 21:05:10

different homographies indicate either changed surface definitions or different marker detections, where the latter can lead to different amounts of mapped data

papr 01 August, 2019, 21:11:33

ok, it looks like the transformations are different due to white spaces

papr 01 August, 2019, 21:14:13

Oh ok, and the columns in gaze_* that differed only differ by 4.919842311323919e-12

user-0fde83 01 August, 2019, 21:16:43

I just checked our data once more. The number of gazes differs. I have checked how data went missing in fixations_on_surface and noticed that every "bigger" file included all smaller ones, plus some extra fixations, often many fixations that followed each other. Might it altogether be a problem of the markers? If they sometimes are not detected and the surfaces aren't defined, the fixations and gazes would not show up in the files and therefore we would not be able to replicate them; we have no problems at all if all markers are visible at all times

user-0fde83 01 August, 2019, 21:26:01

Data2.zip

user-0fde83 01 August, 2019, 21:26:09

Data1.zip

user-0fde83 01 August, 2019, 21:26:49

These files are from the same person, just downloaded twice. We have more, but unfortunately the files get too big

user-0fde83 01 August, 2019, 21:27:20

We have markers that are a lot harder to detect, and I think that is why the error is getting bigger

papr 01 August, 2019, 21:27:48

Ok, but for our experiment the conclusion is: the files differ, but only by a very, very small value: https://gist.github.com/papr/fbf1c86e21d6594927c443b4b13949a3

papr 01 August, 2019, 21:28:13

I added the maximum difference for each column for each file that was different

user-0fde83 01 August, 2019, 21:28:36

Have you taken a look at the files i just sent?

papr 01 August, 2019, 21:29:14

No, I did not 🙂 It is 23:29 over here. I am going to bed 😉

user-0fde83 01 August, 2019, 21:29:43

Over here too 😉

user-0fde83 01 August, 2019, 21:30:05

Good night, perhaps we can try to figure this out tomorrow

papr 01 August, 2019, 21:31:03

Theoretically, the marker detection should be reproducible. But I know it does not perform well if the markers are small.

papr 01 August, 2019, 21:31:56

In v1.15 we will add support for apriltags. This should eliminate a lot of these "bad detection" cases (given that you use apriltags in your recording)

papr 01 August, 2019, 21:32:46

@user-0fde83 Did you create a Github issue for this already? Please add the files there + the above description of the problem.

papr 01 August, 2019, 21:33:09

I will try to add a summary of my findings as well

user-0fde83 01 August, 2019, 21:34:08

Up to now I have not; I will try to when I can, but I am still getting familiar with Github

papr 01 August, 2019, 21:35:23

https://github.com/pupil-labs/pupil/issues/new

papr 01 August, 2019, 21:35:35

Ok, good night 👋

user-0fde83 01 August, 2019, 21:38:18

Good night

user-ee433b 02 August, 2019, 08:34:03

Hello everyone, I plan to code a piece of software to synchronize Pupil Labs data with other data. I see that you use ZMQ. I looked at the connection.cs code from hmd-eyes to see how to code the receiving part, but I don't understand everything (especially because I don't know how the messages are written). Do you have some docs explaining how to code a receiving part? (or some documented code?) Thx

papr 02 August, 2019, 08:34:43

@user-ee433b Are you bound to use c#?

user-ee433b 02 August, 2019, 08:44:26

Not sure, I'll have to work with some other people and I don't know which language they prefer

user-ee433b 02 August, 2019, 08:45:12

(As I'm the one "bringing" Pupil Labs, I'm in charge of the communication with Pupil Labs)

papr 02 August, 2019, 08:45:26

Understood

user-ee433b 02 August, 2019, 08:45:52

I think they'll use C++, but it's just a guess

papr 02 August, 2019, 08:46:00

You came to the correct place to communicate 👍

user-ee433b 02 August, 2019, 08:46:06

😉

papr 02 August, 2019, 08:47:00

After reading that, the hmd-eyes code should make more sense to you.

user-ee433b 02 August, 2019, 08:48:04

I think I've already read that. To help me, do you have a doc explaining the format of the messages broadcast by Capture?

papr 02 August, 2019, 08:48:54

You mean which fields each type of message body has?

papr 02 August, 2019, 08:50:01

There is no complete overview over that. Gaze data has different fields depending on e.g. if it was mapped binocularly or monocularly

user-ee433b 02 August, 2019, 08:56:50

For example: what is the data from a msgType: frame.eye.0?

papr 02 August, 2019, 09:03:06

So messages published by the frame publisher are special, since they are the only case where the message has three instead of two frames

papr 02 August, 2019, 09:03:55
  1. topic frame.eye.0
  2. msgpack encoded dictionary (includes format description for frame 3)
  3. uint8 frame buffer in format as described in frame 2
papr 02 August, 2019, 09:04:22

The format can be either selected in the frame publisher ui or via the start arguments when started via notification
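
A minimal Python sketch of a receiver for those three frames, assuming Pupil Remote on its default port, an active Frame Publisher, and that the payload dict carries format, width and height fields (exact field names may vary by version):

```
import msgpack
import numpy as np
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote
remote.send_string("SUB_PORT")           # ask where to subscribe
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.eye.0")

while True:
    topic, payload, raw = sub.recv_multipart()   # the three message frames
    meta = msgpack.unpackb(payload, raw=False)   # frame 2: format description
    if meta["format"] == "bgr":                  # frame 3: the pixel buffer
        img = np.frombuffer(raw, dtype=np.uint8).reshape(
            meta["height"], meta["width"], 3
        )
```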

user-ee433b 02 August, 2019, 09:06:49

Ok, so the images from the cam can be streamed alongside the results of the "pupil processing"

papr 02 August, 2019, 09:07:57

correct

papr 02 August, 2019, 09:08:30

Nearly everything that Capture produces as data can be streamed.

user-ee433b 02 August, 2019, 09:09:31

great 😄

user-ee433b 02 August, 2019, 11:10:23

I am trying to understand connection.cs. It seems that the only time you extract data from msg is in 'public void InitializeSubscriptionSocket(string topic)'. I guess I am missing something, because where do you read all received data afterwards? I guess it's with the event/delegate 'OnReceiveData' in 'PupilTools', used in 'PupilDemo' for example, where you get the data from the dictionary. But where do you build this dictionary from the msg?

papr 02 August, 2019, 11:15:26

Connection is only about the connection itself, not about the semantics of the data transmitted through it

papr 02 August, 2019, 11:17:41

@user-ee433b check out gaze and the gaze listener for gaze data

papr 02 August, 2019, 11:18:18

There are separate classes for separate types of messages.

user-99f716 02 August, 2019, 11:30:40

moonshot question here @papr... but does Pupil have any sets in stock it would be willing to lend to a recent grad getting into UX research? I found online that Pupil was one of the most affordable and consumer-friendly headsets, but it's still too expensive for me :(. just moved to Berlin and was pleasantly surprised that Pupil is based out of here! anyway - moonshot question

user-e2056a 02 August, 2019, 20:00:00

Hi @papr , can I ask a couple of questions? 1. I saw in the document that the pupil radius error is 0.01mm from a true value of 2mm. However, in 2d mode the pupil size unit is not mm but pixels. What is the accuracy level of the pupil diameter measurement in 2d mode?

user-e2056a 02 August, 2019, 20:01:29
2. @papr , we are seeing some missing files from our recording folder. If any of these files is missing from the recording (annotation.pldata, annotation_timestamps.npy, blinks.pldata, blinks_timestamps.npy, exports, eye0.mp4, eye0_timestamps.npy, eye1.mp4, eye1_timestamps.npy, fixations.pldata, fixations_timestamps.npy, gaze.pldata, gaze_timestamps.npy, info.csv, notify.pldata, notify_timestamps.npy, offline_data, pupil.pldata, pupil_timestamps.npy, surfaces.pldata, surfaces_timestamps.npy, surface_definitions, user_info.csv, world.intrinsics, world.mp4, world_timestamps.npy), will we be able to get gaze, fixation, blink and pupil size data?
user-e2056a 02 August, 2019, 20:02:22

@papr , related to the last question: which of these files are necessary to ensure the accuracy of gaze, fixation, blink and pupil size data?

user-07d4db 03 August, 2019, 06:13:38

Dear Papr, on Thursday night you had a longer conversation with @user-0fde83, a colleague of mine working with me on my master's thesis project. We are both a bit clueless about how to get reliable and valid data out of my experiment, coping with the issue of non-reproducibility of the data. I want to measure the number and mean duration of fixations within specified AOIs, using the surface tracker plugin. Do you have any proposal for how we can cope with this problem in an efficient way? Thank you very much!

user-bc5d02 05 August, 2019, 14:09:55

@papr good day! Is it possible to open recordings which were interrupted by a Windows error / the laptop discharging / sleep mode onset? The recording folder contains all the files, but Pupil Player can't run it correctly. It shows only a gray screen, as if the world video were not available, but at the same time it shows the confidence dynamics for eye id 0/1 (screenshot attached)

Chat image

papr 05 August, 2019, 20:38:55

@here We are pleased to announce the latest release of Pupil software v1.14! We highly recommend downloading the latest application bundles: https://github.com/pupil-labs/pupil/releases/tag/v1.14

user-e7102b 05 August, 2019, 22:46:07

hi @papr I've just installed Pupil Capture (latest version) on a brand new laptop running Ubuntu 18.04. When I load the software it fails to find any of the eye-tracker cameras (I just get three blank gray screens). One of the error messages on the world cam screen is "init failed. Capture started in ghost mode...". Is there something else I need to install in order for the cameras to be recognized? Thanks!

papr 06 August, 2019, 06:48:08

@user-e7102b are they listed as unknown in the UVC manager menu?

papr 06 August, 2019, 06:48:24

If so, you need to add your user to the plugdev group.

papr 06 August, 2019, 06:48:54

Afterwards, restart the computer and try again.

papr 06 August, 2019, 09:49:53

@user-99f716 Please contact info@pupil-labs.com in this regard

papr 06 August, 2019, 10:00:46

@user-e2056a 1. Please also check out "A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000": https://peerj.com/articles/7086/ 2. Gaze, fixation, blink and pupil size data can be generated after the fact in Player, as long as the world and eye videos and their timestamp files were correctly saved.

papr 06 August, 2019, 10:03:57

@user-bc5d02 Could you share the recording with data@pupil-labs.com ? I can check what can be recovered. But I am afraid that the video is most likely not recoverable.

papr 06 August, 2019, 10:05:00

@user-07d4db I still have to look into the reason why the gaze data on surfaces is not reproducible.

papr 06 August, 2019, 10:12:39

@user-07d4db @user-0fde83 regarding the exports above that have different amounts of data: Do you still have the export_info.csv file for both?

papr 06 August, 2019, 10:14:54

If yes, please share them. If not, please try to reproduce the case where one export is a subset of the other. Please be aware that there was an issue in v1.13 that caused fewer surface detections than in versions < v1.13 -> resulting in a subset of mapped gaze data.

papr 06 August, 2019, 10:16:28

If you want, we can try to rerun our reproduction experiment on your own dataset, instead of the Pupil Labs demo surface tracking experiment.

user-07d4db 06 August, 2019, 15:55:48

Dear @papr, thank you very much for your response! Which export_info.csv file do you mean? The ones you have sent us or the ones we produced with our data? I think @user-0fde83 did the experiment with you with Player version v1.12, so according to your explanation there shouldn't be the issue with the reduced surface detection. Yes, that would be great if we could rerun the experiment with our data and you have a look at it! Which Player version should I use? The newest one, v1.14?

papr 06 August, 2019, 15:56:59

@user-07d4db The files that we have exchanged so far did not include an export_info.csv. We have only exchanged the surfaces subfolder of the actual export result.

papr 06 August, 2019, 15:57:15

The export result includes the export_info.csv file

papr 06 August, 2019, 15:57:37

And yes, please use the newly released v1.14 for the experiment.

user-07d4db 06 August, 2019, 16:00:12

Okay! Unfortunately @user-0fde83 only has the complete data of the experiment you did, and he is currently on vacation, so I cannot send you the original export_info.csv files unfortunately 😦 I will tell him to send them as soon as I can!

user-07d4db 06 August, 2019, 16:01:03

great! I will download the player and retry the experiment. Shall I also send you one of our data folders that I will use for the experiment?

papr 06 August, 2019, 16:01:33

I was not referring to the export_info.csv file from our experiment but to the other uploads (Data1.zip and Data2.zip), which have a different amount of exported gaze samples. If you do not have access right now, that is fine.

papr 06 August, 2019, 16:02:00

Yes, I need one of your recordings

user-07d4db 06 August, 2019, 16:02:16

And I have got a further question: you proposed to us the following steps: 1. Use surface tracker to map gaze to surfaces 2. Use the fixation detector to detect fixations 3. Find all gaze that belongs to a fixation 4. Find the surface mapping results for these samples 5. Calculate the mean of the mapped gaze norm_pos values. Could this be a solution as well? Or only if the gaze_positions_on_surface_<name>.csv is correct?

papr 06 August, 2019, 16:02:35

It should contain existing surface definitions

papr 06 August, 2019, 16:03:14

This is only a solution if the gaze samples are close enough to each other. Missing samples breaks the approach

user-07d4db 06 August, 2019, 16:04:59

Okay! Thank you! I am sorry to ask, but how can I give you access to the complete data folder of a participant, as it is so big?

papr 06 August, 2019, 16:08:05

@user-07d4db see my direct message

user-07d4db 06 August, 2019, 16:09:26

thanks! just give me a minute to install everything and send you the folder 😃

user-07d4db 06 August, 2019, 16:21:02

Do you need the eye and world videos to have a look at the output? Because they need a lot of space and I cannot send less than 2 GB for free

papr 06 August, 2019, 16:21:34

world video yes, eye no

user-07d4db 06 August, 2019, 16:21:45

okay

user-e7102b 06 August, 2019, 19:39:11

@papr Thanks, adding the user to plugdev brought the cameras to life

user-e7102b 06 August, 2019, 20:11:16

Hey @papr , I have another question. @user-dfeeb9 and I wrote a Pupil middleman script last year that we shared with the Pupil community to allow Pupil Capture to be controlled by MATLAB. When we wrote it, the version of Capture was ~v1.5. I recently installed a newer version of Capture on a different machine and noticed that all commands seemed to be working (e.g. start/stop recording) except annotations, which were not being logged in Capture. Presumably this is due to the annotation format change in v1.9? I'd like to update the script so that annotations work with the newer versions of Capture. Given that annotations are no longer special types of notifications, would it just be a case of sending annotations in the same way as we currently send other commands (e.g. start/stop rec)? Thanks

papr 06 August, 2019, 20:57:24

@user-e7102b you basically just need to remove the notify. part from the message topic

papr 06 August, 2019, 20:58:49

There is an example Python script on how to send remote annotations in the Pupil-Helpers repository. You might also need to request a different port 🤔
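
For orientation, a minimal sketch in the spirit of that helper script; it assumes Pupil Remote on its default port and that the Annotation plugin is active in Capture (the label value is a placeholder, and field requirements may vary by version):

```
import time

import msgpack
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote

remote.send_string("PUB_PORT")           # where to publish onto the IPC
pub_port = remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)                          # give the PUB socket time to connect

remote.send_string("t")                  # current Pupil time for the timestamp
pupil_time = float(remote.recv_string())

# Note the plain "annotation" topic: no "notify." prefix anymore.
annotation = {
    "topic": "annotation",
    "label": "stimulus_onset",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```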

user-b37f66 06 August, 2019, 21:08:59

Hi, I'm trying to activate my Pupil device on a Windows 10 machine. The cameras are classified under the 'Cameras' section in the device manager instead of the new category, "libusbK Usb Devices". As a result, the cameras are not recognized and I can't record videos with the device. I followed the "Windows driver troubleshooting" section - https://docs.pupil-labs.com/#troubleshooting - including manual installation with Zadig (https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md), but the drivers still appear under 'Cameras', except for the 'integarated_webcam_hd' which appears under 'libusbK Usb Devices'. Thanks in advance.

papr 06 August, 2019, 21:10:47

@user-b37f66 please contact info@pupil-labs.com with this information.

user-adf88b 06 August, 2019, 21:12:41

papr, I'm modifying the blink_detection.py and I would like some clarification on the following code

```
# Confidence time series of the pupil data currently in the history buffer.
activity = np.fromiter((pp["confidence"] for pp in self.history), dtype=float)
# Step filter: positive weights on the first half, negative on the second.
blink_filter = np.ones(filter_size) / filter_size
blink_filter[filter_size // 2 :] *= -1

if filter_size % 2 == 1:  # make filter symmetrical
    blink_filter[filter_size // 2] = 0.0

# The theoretical response maximum is +-0.5
# Response of +-0.45 seems sufficient for a confidence of 1.
filter_response = activity @ blink_filter / 0.45

if (
    -self.offset_confidence_threshold
    <= filter_response
    <= self.onset_confidence_threshold
):
    return  # response cannot be classified as blink onset or offset
elif filter_response > self.onset_confidence_threshold:
    blink_type = "onset"
else:
    blink_type = "offset"

confidence = min(abs(filter_response), 1.0)  # clamp conf. value at 1.
```

user-7890df 07 August, 2019, 05:39:08

Hello, I noticed that the 3d model is fairly stable at 400x400 pixels (120Hz); however, when increasing the frame rate to 200Hz (192x192 pixels) the model becomes very unstable and changes often, though the headgear has not moved. It would be a feature request, but I think it would be important to 'lock' the model when using real-time measurements, as we can fairly safely assume that in a controlled condition the model shouldn't change significantly - rather than let lower-resolution jitter change the model every few (5-10) seconds. Any other way to decrease the model instability? Thanks!

papr 07 August, 2019, 06:35:55

@user-7890df This is already on our todo list

user-34688d 07 August, 2019, 13:20:26

Hey there, I am facing a problematic situation right now. During recording, the world camera disconnects and reconnects (this has only happened to me on Windows 10). If I open the file in a video player, it opens normally and I can see that there is not too much missing data. If I open it in Pupil Player, the first 50% of the recording is greyed out and the other 50% is ALL of the recording.

papr 07 August, 2019, 13:22:02

@user-34688d which version of Player do you use?

user-34688d 07 August, 2019, 13:24:42

1.12.17 on windows 10

papr 07 August, 2019, 13:25:39

Your issue is actually different from @user-bc5d02 's. Please upgrade to v1.14 which includes a work-around for your issue.

user-34688d 07 August, 2019, 13:25:50

ok will do. btw I updated the issue on github, the fps drop was due to the computer I was using being too slow.

user-34688d 07 August, 2019, 13:25:57

(I posted it a few days ago)

user-34688d 07 August, 2019, 13:26:13

turning off online surface tracking also helped a lot.

user-34688d 07 August, 2019, 13:29:03

I thought I could run it from a laptop, but turns out you need some serious single core performance and good hard drive to run everything at full speed.

user-34688d 07 August, 2019, 13:38:42

@papr minimizing the ID0 window crashes the process.

user-34688d 07 August, 2019, 13:38:49

in capture 1.14

user-34688d 07 August, 2019, 13:41:08

Issue seems to be:
launchables\eye.py, line 795, in eye
shared_modules\gl_utils\utils.py, line 96, in make_coord_system_pixel_based
OpenGL\error.py, line 232, in glCheckError

user-34688d 07 August, 2019, 13:41:44

OpenGL.error.GLError: GLError( err = 1281, description = b'invalid value', baseOperation = glOrtho,

user-34688d 07 August, 2019, 13:41:55

cArguments = (0,0,0,0,-1,1)

user-365094 07 August, 2019, 13:45:46

We recently purchased the Pupil Core headset. I've used it about 3 times and it worked as expected; however, today the two eye cams have stopped working. The world cam works, but the eye cams don't appear under the sources using the 'Local USB' manager. I have tried two other computers using the most recent Pupil Capture software (v1.14) and the eye cams do not load on either computer.

When I turn off eye0 and eye1 and turn them back on, it says 'init failed' and they start in ghost mode. If I run 'system_profiler SPUSBDataType' on all three computers, I only see Cam1 and not Eye0 or Eye1. I have tried a different USB-C and USB cable without luck. I have also confirmed the white connectors for both eye cams are secure.

Does anyone have any suggestions on troubleshooting the two eye cams further?

papr 07 August, 2019, 13:47:46

@user-34688d You can use Capture v1.12 and Player v1.14 btw

papr 07 August, 2019, 13:48:12

@user-365094 You wrote an email to info@pupil-labs.com about the same issue, is that correct?

user-34688d 07 August, 2019, 13:48:35

@papr How will it deal with surfaces? I thought they were updated between version 1.12 and 1.13, and I get a message that they are deprecated.

papr 07 August, 2019, 13:49:33

@user-34688d Just run the surface detection offline and use Capture only for recording the video. Or do you need the surface data in real time?

papr 07 August, 2019, 13:50:18

And do I understand it correctly, that the crash-on-minimizing only happens in Capture v1.14, and not in v1.12?

user-34688d 07 August, 2019, 13:51:43

@papr about the crash yes. v1.12.17 does not crash when minimizing, v1.14.6 does. Both on Windows 10, same hardware config.

user-365094 07 August, 2019, 13:52:10

@papr yes, that's correct

papr 07 August, 2019, 13:52:52

@user-365094 You should receive a response via email shortly

user-34688d 07 August, 2019, 13:54:36

@papr Answering your earlier question, we do not need online tracking right now, but it was very convenient for defining our surfaces. I'll try to do as you said. Record with 1.12.17, process with player v1.14.6

papr 07 August, 2019, 13:56:29

If you want to save further CPU during recording, you can disable realtime pupil detection and run it offline in Player

user-34688d 07 August, 2019, 14:34:20

@papr Is there a way to have surface definitions carried over between measurement sessions?

papr 07 August, 2019, 14:35:14

You should be able to set up the surface definitions in one recording, and copy the surface_definitions file to other recordings
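
A minimal sketch of that copy step (recording paths here are placeholders):

```
import shutil

# Copy the definitions created in recording_A into another recording.
shutil.copy("recording_A/surface_definitions", "recording_B/surface_definitions")
```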

user-34688d 07 August, 2019, 14:35:46

ok will give it a try

user-07d4db 07 August, 2019, 15:02:27

Dear @papr, please see my private message 😃

user-34688d 07 August, 2019, 15:19:05

@papr Copying the surface_definitions file was not a success. New measurements with the copied surface_definitions file cause Pupil Player to stop responding after loading the file.

papr 07 August, 2019, 15:20:19

@user-34688d Mmh, this is unfortunate. Please create a new github issue requesting this feature

user-34688d 07 August, 2019, 15:21:15

@papr ok, and in the meantime do you have a suggestion on how to: 1. record with 1.12.17 2. use Player v1.14.6 without having to redefine all surfaces?

papr 07 August, 2019, 15:22:27

I would run Capture v1.14, define the surfaces, turn off surface tracker, start a recording without minimizing the eye window

papr 07 August, 2019, 15:22:48

Not tested though

user-34688d 07 August, 2019, 15:22:58

yes we tried doing this, but it crashed after entering a surface name+enter

user-34688d 07 August, 2019, 15:23:41

should I open a new issue for that too?

papr 07 August, 2019, 15:24:11

one sec

papr 07 August, 2019, 15:31:29

We cannot reproduce the surface rename issue

papr 07 August, 2019, 15:31:58

Could you share the capture.log file after Capture crashes?

user-34688d 07 August, 2019, 15:35:37

hmm it seems to be working right now.

user-34688d 07 August, 2019, 15:36:09

I'll not minimize the eye window then and we are good.

papr 07 August, 2019, 15:37:04

ok. Just make sure to backup the log files when something crashes

user-14d189 08 August, 2019, 06:08:13

Hi, do you have documentation of the CE norms for your eye tracking cameras with illumination? I am trying to get ethics approval for upcoming experiments and they are very picky. Thanks in advance.

user-14d189 08 August, 2019, 06:08:44

I could not find anything on the webpage.

user-64b0d2 08 August, 2019, 07:35:31

@papr I have some further problems with sending remote annotations to Pupil Capture. When I send the remote annotations while running Pupil Capture it seems to be working fine, and I see correct timestamps in Capture. However, as mentioned before, I cannot export the annotations from the recording with Pupil Player. Now you helped me read the annotation.pldata file manually, and I get the different annotations there; however, they all have the same timestamp, which is incorrect.

My set-up is a bit complicated, as I am communicating from one computer to a second computer via an internet socket; the second computer is running Pupil Capture and sends the annotations upon receiving input from the first computer. I adapted the remote_annotations.py script in order to do this, which still includes the time synchronisation, and I receive the message that the time sync is successful. Do you know what might be going wrong?

Here is my script:

eyetracker_communication

papr 08 August, 2019, 09:30:12

@user-14d189 Please check my private message

user-bb9207 08 August, 2019, 12:04:41

Pupil Player - Does anyone know how I can add annotations with a duration?

user-3e42aa 08 August, 2019, 12:56:19

Sorry if this is a FAQ, but what's the eye camera image size that was used for development of the 2D tracker? I just noticed that if I downscale my 800x600 frames to 400x200 or even 300x150, the tracking results get dramatically better

user-3e42aa 08 August, 2019, 12:59:09

I think this is because discontinuities in the pupil edge outlines increase in scale as the image scales, but I haven't figured out how to scale them for a new resolution. But I'm happy to just downscale the frames as well

user-34688d 08 August, 2019, 18:14:42

@user-3e42aa Just curious here. When you say tracking results, are you talking of the tracking confidence?

user-3e42aa 08 August, 2019, 18:23:46

The confidence yes, but at the bigger resolution it often doesn't even give an estimate (and the confidence drops to 0)

papr 08 August, 2019, 18:55:08

@user-bb9207 unfortunately, this is not possible

papr 08 August, 2019, 18:57:41

@user-3e42aa That might be due to better default values for the pupil-min and pupil-max values in the lower resolution settings. Please change the eye window's mode to "algorithm", activate the higher resolution, and try to make a screenshot of a situation that yields bad detection results.

user-3e42aa 08 August, 2019, 19:15:38

@papr It's not the pupil-min and pupil-max. And I'm actually using directly the C++ interface to Detector2D

papr 08 August, 2019, 19:16:34

@user-3e42aa Ok, but this does not exclude the possibility that the defaults of these variables affect the result

user-3e42aa 08 August, 2019, 19:17:13

I did control for them

user-3e42aa 08 August, 2019, 19:17:24

A sec

papr 08 August, 2019, 19:17:25

But generally, yes, it might be possible that the detector performs better on lower resolution images

papr 08 August, 2019, 19:18:09

@user-3e42aa don't worry, I believe you if you say that you controlled for it

papr 08 August, 2019, 19:18:38

Just wanted to state that the defaults might be a reason for the performance differences.

user-3e42aa 08 August, 2019, 19:21:04

This gives .99 confidence when 400x300

Chat image

user-3e42aa 08 August, 2019, 19:22:20

This one 0.0 with same settings (apart from adjusted min/max pupil for resolution)

Chat image

user-3e42aa 08 August, 2019, 19:22:36

That's full 800x600

user-3e42aa 08 August, 2019, 19:22:49

Excuse the ugly OpenCV gui

papr 08 August, 2019, 19:23:23

Could you try turning off the coarse detection option?

user-3e42aa 08 August, 2019, 19:25:38

I'm not using it, as it's done cython side

papr 08 August, 2019, 19:26:06

@user-3e42aa Ah correct, I was just looking at it

papr 08 August, 2019, 19:27:13

Then I cannot tell you what exactly is causing this issue. I mean the screenshot shows that the pupil edges were detected well. For some reason the ellipse fitting fails

user-3e42aa 08 August, 2019, 19:34:22

I think it's a discontinuity in the curves

papr 08 August, 2019, 19:35:28

But there should be at least an attempt at fitting an ellipse to it

papr 08 August, 2019, 19:35:54

That the result is no ellipse at all is what I am confused about

user-3e42aa 08 August, 2019, 19:44:07

I don't fully understand the algorithm, but my hunch is that it works by following the contours and gives up due to a too large discontinuity

papr 08 August, 2019, 19:45:10

@user-3e42aa We use cv::fitEllipse to fit ellipses on the contour candidates (light blue lines)

user-3e42aa 08 August, 2019, 22:12:29

There's quite a bit of other processing going on also, so at least for me it's hard to say where the resolution-dependency stems from

user-3e42aa 08 August, 2019, 22:20:00

cv::findContours oddly takes no parameters, so there may be some dependency already there. In the larger resolution image the contour is broken, at least at the bottom

user-3e42aa 08 August, 2019, 22:26:04

OpenCV may actually be quite difficult to make resolution independent. E.g. the canny detector aperture is capped at some quite low value. Also the morphological operations are defined in absolute pixel units. I think the way to go is to keep to lower resolution eye videos
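
For what it's worth, a minimal OpenCV sketch of that downscaling step before detection (file name and target size are placeholders):

```
import cv2

frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # 800x600 source
# INTER_AREA is the usual choice when shrinking images.
small = cv2.resize(frame, (400, 300), interpolation=cv2.INTER_AREA)
# Run the 2D detector on `small`; remember to scale pupil-min/pupil-max and
# any returned ellipse coordinates by the inverse factor (2x here).
```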

papr 09 August, 2019, 03:46:22

@user-3e42aa which is more efficient in many ways anyway 👍

user-f7f0ad 09 August, 2019, 11:03:56

Hi, I am trying to export heatmaps with the Surface Tracker Offline plugin in Pupil Player v1.14 (Windows). The data was recorded with Pupil Capture v1.12. I specified the surfaces and added widths and heights, e.g., 200.0 x 500.0. Heatmap Smoothness is set to 0.5. In the preview with the Show Heatmap option on, it looks all fine. But the exported PNGs just have 1-3 KB and resolutions of 31 x 77 px or even less. From the discussion above, I understand that I have to adjust the Heatmap Smoothness parameter to change the number of bins in the histogram which seems to influence the resolution of the heatmaps. I tried smoothness values of 0, 0.5 and 1.0. In all cases the exported heatmaps are tiny. What can I do to get heatmaps with reasonable resolutions, e.g., 200 x 500 px?

papr 09 August, 2019, 11:14:39

@user-f7f0ad I think @marc can answer this

marc 09 August, 2019, 12:06:55

@user-f7f0ad If you set the smoothness to smaller values you should get larger heatmaps. E.g. at zero you should get a heatmap with width 1000 px, which is the largest possible output. The resolution of the heatmap is, as you noted, equal to the number of bins of the histogram. If you choose to have a coarse heatmap with few bins, you will get a small heatmap. You can increase the resolution of the heatmap image (using nearest neighbor interpolation) to increase the size of the heatmap for easier presentation.
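
A minimal sketch of that upscaling step with OpenCV (file names and target size are placeholders):

```
import cv2

heatmap = cv2.imread("heatmap_Surface_1.png", cv2.IMREAD_UNCHANGED)
# Nearest-neighbor interpolation keeps the histogram bins as crisp blocks.
scaled = cv2.resize(heatmap, (200, 500), interpolation=cv2.INTER_NEAREST)
cv2.imwrite("heatmap_Surface_1_scaled.png", scaled)
```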

marc 09 August, 2019, 12:09:25

Since the current scheme for setting resolution and smoothness of the heatmap does not seem to be intuitive for many users we are considering a change to allow users to explicitly specify the resolution in terms of bins as well as the resolution in terms of image pixels for the output image.

user-f7f0ad 09 August, 2019, 12:24:23

@marc Thanks for the feedback. When I run the export with a smoothness of 0.0, the PNGs have a resolution of 1 x 1 px. Do I have to change the width and height settings?

marc 09 August, 2019, 12:33:16

@user-f7f0ad The size of the surface must be greater than zero, and you should not be able to set it to zero in the first place using the UI. I am not able to reproduce this behavior with my recordings. Could you upload an example recording that shows this behavior somewhere and send a link to [email removed] so I can check what happens in detail?

user-f7f0ad 09 August, 2019, 12:35:03

@marc I will try to provide an example recording and send you the link. Thanks.

user-c87bad 09 August, 2019, 17:47:58

Hi!!! Under what conditions can there be a fixation in an area but no gaze points for the same specific time?

user-c87bad 09 August, 2019, 18:19:25

Also, I found there is a difference between the 'world_index' from gaze_positions_on_surface_<name> and the start_frame_index from fixations.csv for the same timestamp; sometimes there is a one-frame difference. To sync, would it be better to use the frame index or the timestamp?

user-91c156 09 August, 2019, 22:44:14

Hi, I have a hardware question about the Pupil Core. Could a development board be used to stream videos over wifi (instead of a phone). If yes what is the smallest board we could use?

user-22adbc 10 August, 2019, 08:02:43

Hi, I have a hardware question. My Pupil Core shows the right eye upside-down. Is there any setting to correct this?

user-6ec304 11 August, 2019, 22:47:30

Hello, I have a bit of a software and hardware question. When running normal calibration procedures (screen marker, manual marker, etc.) we will often get accuracy estimates of around 2-3 degrees, with a precision metric of around 0.05-0.01. On further review of the recorded video, we see that the error (difference between estimated gaze point and actual gaze point) increases in magnitude as gaze moves further away from the center of the world camera FOV, even within the region originally calibrated for. Any idea why this may be? We are using the wide-angle world camera lens that comes with Pupil Core, but we have the same issue with the high speed world camera lens. Re-estimating the camera intrinsics has not helped.

wrp 12 August, 2019, 00:54:32

@user-22adbc the right eye image on Pupil Core is flipped because the sensor is physically upside down. You can leave this as is; it does not affect the pupil detection algorithm or gaze estimation. However, if you'd like to flip the image, you can do so from the eye window's general controls ("flip eye image")

wrp 12 August, 2019, 00:55:17

@user-6ec304 monocular or binocular system?

user-6ec304 12 August, 2019, 01:29:40

@wrp binocular.

user-94ac2a 12 August, 2019, 12:30:48

What is the ideal distance between eye and eye camera to get the most accurate tracking? Maybe the camera's FOV is also important?

user-6ec304 13 August, 2019, 16:29:29

@wrp if it would be useful, I could provide a short recording of the world and eye videos documenting this issue

user-48e1d0 13 August, 2019, 18:07:43

Hey guys, I have a quick question on the proper care of a Pupil Core system. Would alcohol wipes/microfiber cloths be recommended for cleaning both the world camera and 200hz eye cameras used? If not, are there any alternatives?

wrp 14 August, 2019, 03:21:05

@user-6ec304 sure, please upload a short recording that demonstrates this behavior and share the link with data@pupil-labs.com

wrp 14 August, 2019, 03:22:49

@user-94ac2a Ideally the camera captures all movements of the eye, capturing the entire eye region but excluding eyebrows if possible. Small differences in depth should not have a negative impact on the quality of the eye image from the algorithm standpoint

wrp 14 August, 2019, 03:24:18

@user-48e1d0 microfiber cleaning cloths for lenses can be used for the world camera and eye camera lenses. But please be gentle when cleaning lenses. Also a small puff of compressed air is sometimes the best thing for getting dust off of a lens.

user-7890df 15 August, 2019, 04:21:45

Performance Question: what do you think would be the maximum frame rate for the software? So for example if one of the new USB-C 500Hz or 1000Hz cameras could be used, would the software be able to keep up (i.e. algorithm <1ms per eye position)?

user-c87bad 15 August, 2019, 09:32:28

Hi! I found a common error in the file exported from Pupil Player. I have a recording of around 2 and a half minutes. The gaze-positions-on-surface file only contains part of the data; however, the fixation data covers the whole time. How can I get the complete gaze position data? Is this a bug?

gaze_positions_on_surface_Surface_1.csv

user-c87bad 15 August, 2019, 09:32:39

fixations.csv

user-c87bad 15 August, 2019, 09:36:38

Also I am pretty sure that my gaze is on the surface.

user-c87bad 15 August, 2019, 09:39:45

And I noticed there is a notification in Player saying "error dc", and I don't know what that means.

user-c87bad 15 August, 2019, 09:52:15

The error file is:

user-c87bad 15 August, 2019, 09:52:18

Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=85 x=13
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=17 x=70
Export eye0 Video - [ERROR] libav.mjpeg: overread 8
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=64 x=43
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=78 x=48
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=4 x=44
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=66 x=28
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=44 x=45
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=66 x=30
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=42 x=10
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=71 x=61
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=48 x=34
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=8 x=48
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=54 x=42
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=80 x=35

user-c87bad 15 August, 2019, 10:39:11

Another question is about confidence. According to the docs, data with confidence below 0.6 will be ignored, but the gaze-on-surface file still contains the low-confidence entries. If I filter them out myself, will that influence the fixations? I mean, the number of fixations on the surface would be reduced, since I need to map gaze timestamps.

user-7890df 15 August, 2019, 13:03:01

I have another question with regards to "variable frame rates" of the camera system. Is this a limitation of the camera, software or both? Is there any way to have a fixed frame rate (i.e. have the software poll at a fixed interval) as would be desired for reproducible scientific applications? What would be the downsides of doing it? Thank you very much!

user-4a4def 15 August, 2019, 19:32:24

hi all, does anyone know if a certain type of router is preferred/better when using wifi mode on Pupil Mobile on the MotoZ3 Play?

user-3bb376 16 August, 2019, 15:56:28

hi guys, does anybody know whether we can export the gaze mapping and intensity map for a certain duration from Player, and if so, how?

user-7890df 16 August, 2019, 17:55:46

With regards to visualization of saccades in Pupil Player with binocular recordings (I have the 200Hz binocular system): fixations appear singular, but during fast eye movements it looks like it's alternating between two paths. Is that a mismatch between the left and right eye calibrations being plotted?

Chat image

user-7890df 16 August, 2019, 18:00:34

Also I've noted a bug in Pupil Player. When doing repeated pupil detections and calibrations with different "trim video" brackets, it appears that two different gaze positions are plotted, alternating even during fixation. Here are the resulting stills, first from a fixation and then from saccades. I tried deleting the "offline calibration" folder but cannot reverse this behavior. Is there a way to start from scratch? These recordings were done with the recent Pupil Capture 1.13.29 on Windows 10 Pro. At some point in the playback, the fixation becomes "split":

Chat image

user-7890df 16 August, 2019, 18:01:42

Then every subsequent saccade appears to alternate between these two calibrations (is it each eye, or two different calibrations plotting simultaneously?):

Chat image

user-a10852 17 August, 2019, 22:03:24

Hello, I'm currently having an issue playing the eye video in Pupil Player (1.13.29). When I go to play the video, the world camera video works fine; however, the eye video is frozen in place. I have looked at the mp4 file for the eye and it works fine, just not in Pupil Player. I did six separate recordings and while the first worked fine, the rest are all having this issue. Any help on how to fix this and/or to prevent this from occurring in the future would be greatly appreciated. Thanks

user-3eda07 18 August, 2019, 09:13:31

Hello, I am new here, my name is Arshad. I am from Mauritius. I am considering using Pupil as part of my MPhil/PhD research.

user-3eda07 18 August, 2019, 09:14:05

I downloaded the software but cannot find any sample or dummy file to try it out with

user-3eda07 18 August, 2019, 09:15:00

can someone help please? I haven't yet purchased the equipment. Just wanted to try the software, familiarize myself first

user-8779ef 18 August, 2019, 13:21:31

@papr @fxlange at exem?

user-8779ef 18 August, 2019, 13:21:47

Ecem? There is a pupil labs table here.

wrp 18 August, 2019, 13:23:31

@user-8779ef yes there are some members of the Pupil Labs team at ECEM - https://pupil-labs.com/news/ecem-2019

user-8779ef 18 August, 2019, 15:24:41

Yep! Just met them. Good group! The Invisible is very impressive.

user-c87bad 19 August, 2019, 14:22:46

Hi, guys! It's really an emergency. I can get all the fixation data from Player, but not all the gaze data. And I found there may be something wrong with the marker cache. Can anyone tell me how to fix it?

wrp 20 August, 2019, 08:11:11

@user-c87bad Could you please provide a bit more description about the behavior you are observing and steps to reproduce this behavior?

user-deafd0 20 August, 2019, 14:29:09

hi, I cannot get the fixation data from the Player. Also, when I open the image for the heatmap surface, it is empty. Can anyone help me with that?

user-c87bad 20 August, 2019, 21:53:51

@wrp I have a recording of around 2 minutes from Pupil Capture. After loading it into Pupil Player, I found some red on the marker cache line, which means no surface was detected there. However, when playing the video I saw this happened suddenly: the audience didn't move and the markers were detected before, so the markers should have been detected at that time as well. Then I tested with recordings of around 1, 2, and 3 minutes, but after dragging the files into Pupil Player, the markers suddenly could not be detected for roughly the last 10 seconds. Then I tested again with larger markers at 1, 2, and 3 minutes. The gaze data on the surface still wasn't complete (because the markers suddenly couldn't be detected).

user-3ca244 21 August, 2019, 12:57:08

Hello, would anybody be able to tell me what the coordinate system is for the angular output, theta/phi?

Are these just regular spherical polar coordinates, and how are they oriented relative to the Cartesian axes?

user-3ca244 21 August, 2019, 12:57:21

The manual is very unclear on this

user-c5fb8b 21 August, 2019, 13:01:16

Hi @user-3ca244 Are you talking about the raw data export?

user-3ca244 21 August, 2019, 13:08:28

Hi @user-c5fb8b, yes, I think so. I'm talking about the theta and phi output keys when using the 3D gaze mapping mode

user-c5fb8b 21 August, 2019, 13:39:12

@user-3ca244 Theta and Phi are indeed spherical coordinates of the normal of the eye sphere. The corresponding components of the normal vector can be found as 'circle_3d_normal_x', 'circle_3d_normal_y' and 'circle_3d_normal_z' in the export. For conversion we use:

    import numpy as np

    # x, y, z are the circle_3d_normal_* components (eye-camera coordinates)
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)  # length of the normal; 1 for the unit vector
    theta = np.arccos(y / r)               # polar angle, measured from the y axis
    phi = np.arctan2(z, x)                 # azimuthal angle in the x-z plane

Hope that helps!

user-c5fb8b 21 August, 2019, 13:41:03

The normal vector is already normalized, so r is always 1

user-3ca244 21 August, 2019, 14:04:05

Thanks @user-c5fb8b, is theta =0, phi=0 when the eye is looking directly forwards (or looking straight at the camera), or if it were to be somehow pointing at the sky?

user-c5fb8b 21 August, 2019, 15:02:38

@user-3ca244 May I ask for what purpose you need this information? I feel like you might be following a wrong lead. The theta and phi values, as well as the circle_3d_normal_* values, are outputs of the 3d detector. They refer to camera coordinates, for each eye separately. This is actually not information about where people are looking! At least not directly; it is data from a previous processing step.

If you want to have information about whether someone is looking forward, you should take a look at the gaze_normal0_* and gaze_normal1_* (where * is x/y/z) values, which are gaze direction vectors already mapped to world coordinates.

You can find some more information on all exported variables in the docstring of the Raw_Data_Exporter class here: https://github.com/pupil-labs/pupil/blob/9484d32b8420fab725d2bfc70da2e7b543ed92ae/pupil_src/shared_modules/raw_data_exporter.py#L27-L102
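
For illustration, a minimal sketch (assuming a Pupil Player raw data export named gaze_positions.csv, with column names as listed in the docstring linked above) of pulling the per-eye gaze direction vectors:

    import pandas as pd

    # Raw data export from Pupil Player (assumed file name)
    gaze = pd.read_csv("gaze_positions.csv")

    # Per-eye gaze direction vectors, already in world-camera coordinates
    eye0_dir = gaze[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
    eye1_dir = gaze[["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]].to_numpy()
    print(eye0_dir[:5])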

user-ec4bd6 21 August, 2019, 17:12:05

Hello! My name is Ana Clara and I'm new here. I am doing a pilot study with the Pupil and learning how to use it. I would like to know if there is an acceptable data loss limit after calibration, so that I can consider the data collection valid. In the experiment, the participant looks at some images in a 660*550cm frame, 150cm away from her. Thanks!

wrp 22 August, 2019, 06:35:21

Hi @user-ec4bd6 - welcome to the Pupil community! I would suggest that you use the accuracy measurement as reported after calibration. Please see: https://docs.pupil-labs.com/#notes-on-calibration-accuracy

user-8fd8f6 22 August, 2019, 13:53:18

@papr hi, when I increase the max duration of the fixation, the number of fixations drops and some fixations (which were present at the lower duration) don't exist anymore. Why?

user-3ca244 22 August, 2019, 14:49:38

@user-c5fb8b

papr 22 August, 2019, 14:51:07

@user-8fd8f6 This is expected when two consecutive short fixations are detected as one longer fixation.

user-8fd8f6 22 August, 2019, 14:54:11

@papr I know what you mean. But sometimes a fixation at a certain frame doesn't exist anymore. (It was there at the lower max duration but is not at the longer one.)

user-8fd8f6 22 August, 2019, 15:03:12

@papr For example (attached picture): the left one is max duration 220 and the right one is max duration 600. I highlighted some of the disappeared fixations.

user-8fd8f6 22 August, 2019, 15:03:45

Chat image

user-3ca244 22 August, 2019, 15:09:51

Hi @user-c5fb8b, thanks for getting back to me. I am trying to get the pointing direction of the eye (or each eye individually) in the head coordinate system, with the centre of the eyeball at the origin. I'm not actually trying to find out where people are looking in world coordinates, so I think this information is what I need.

I think from the documentation and your explanation that the circle_3d_normal_x/y/z or the theta and phi values are what we want...? I now don't know which way the x/y/z axes are oriented relative to the camera or eyeball. I'm also not sure where the centre of rotation is, but I don't think this matters; however, presumably circle_3d_normal_x/y/z are vectors starting from sphere_center_x/y/z?

Thank you very much for your help so far!

user-c5fb8b 22 August, 2019, 15:20:25

@user-3ca244 In this case you should really use the gaze_normal0_x/y/z and gaze_normal1_x/y/z values as I described above, as they represent the view direction of the eyes in the coordinate system of the world camera. So they are both in the same coordinate system.

The world coordinate system is the same as the OpenCV coordinate system:
- positive x: right
- positive y: down
- positive z: forward

You can get the positions of the eyes in the world coordinate system with eye_center0_3d_x/y/z and eye_center1_3d_x/y/z.

I recommend not using circle_3d_normal and sphere_center as they are in eye-camera coordinates, so the coordinate systems are different for eye0 and eye1 and you cannot directly compare those to each other!

> I am trying to get the pointing direction of the eye (or each eye individually) in the head coordinate system, with the centre of the eyeball at the origin.

Maybe I am misunderstanding you, but as the gaze_normal is just a directional vector, it does not matter where the origin is located.
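
As a rough sketch of what this convention implies (the helper below is made up for illustration and is not part of any Pupil export or API): with positive x right, y down, and z forward, a gaze_normal vector can be converted into horizontal/vertical gaze angles like this:

    import numpy as np

    def gaze_angles(normal):
        # normal: unit gaze_normal in world-camera coordinates
        # (positive x: right, positive y: down, positive z: forward)
        x, y, z = normal
        azimuth = np.degrees(np.arctan2(x, z))                  # + is right of straight ahead
        elevation = np.degrees(np.arctan2(-y, np.hypot(x, z)))  # + is above straight ahead
        return azimuth, elevation

    # Looking straight ahead along +z yields (0.0, 0.0)
    print(gaze_angles((0.0, 0.0, 1.0)))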

user-3ca244 22 August, 2019, 15:45:03

@user-c5fb8b We have purchased a Pupil Core with no world camera and only one eye camera at the moment, as this is sufficient for our research. We do intend to add a world camera and perhaps a second eye camera in future.

I'd therefore like to get the coordinates relative to the person's head. I suppose the direction of the axes must be determined by the orientation of the camera, and will not necessarily be aligned with a purely vertical or purely horizontal axis of rotation of the eyeball.

Regarding the direction of the gaze_normal vector, you're right, it doesn't matter where the origin is. I was worried that it might matter where the origin is for the axes around which the theta and phi angles are measured, but I suppose this should not matter either as I think that the orientation of a rotated object does not depend upon the centre of rotation....

user-ec4bd6 22 August, 2019, 16:46:27

@wrp Thanks! I would like to ask something else. I looked at the output files and couldn't find (or don't know where I could find) such calibration information. Is this data stored somewhere after closing the recording, or can I only see it right after calibration?

papr 22 August, 2019, 16:57:47

@user-ec4bd6 It is shown in the Accuracy Visualizer menu directly after calibration. Alternatively, you can calculate it after the fact using Offline Calibration. https://www.youtube.com/watch?v=aPLnqu26tWI&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh&index=2

user-43fc43 22 August, 2019, 17:36:15

Has anyone experienced the issue of Capture not recording when the monitor is turned off? When turning the monitor back on, the timing of the recording session acts as if it has been recording, but when stopped, it looks like collection halted after the monitor was initially turned off.

papr 22 August, 2019, 20:15:46

@user-8fd8f6 please create a bug report on Github for this

user-4ef728 22 August, 2019, 21:54:25

Hi, I'm a little confused about the exported data that I get. What is the difference between Diameter and Diameter_3D?

user-c5fb8b 23 August, 2019, 07:11:20

Hi @user-4ef728 You can find more specific information about the data in the documentation: https://docs.pupil-labs.com/#detailed-data-format What you find there:
- diameter is the pupil diameter in pixels in the image
- diameter3d is the estimated real diameter in mm
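
As a quick way to see both in an export, a minimal sketch (assuming a raw export named pupil_positions.csv; depending on the software version the 3d column may be written as diameter_3d):

    import pandas as pd

    pupil = pd.read_csv("pupil_positions.csv")
    # diameter: pixels in the eye image (2d); diameter_3d: estimated size in mm
    print(pupil[["diameter", "diameter_3d"]].describe())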

user-deafd0 23 August, 2019, 15:07:45

hi, I have a problem with the Player: some of the recordings didn't open. Can anyone help me with that?

papr 23 August, 2019, 15:09:51

@user-deafd0 Does Player crash or does it show grey frames instead of the recording? In case of crashes, can you share the player.log file after trying to open a recording in Player and it having crashed?

user-deafd0 23 August, 2019, 15:27:20

@papr thank you for your reply,

Chat image

user-deafd0 23 August, 2019, 15:27:57

that's what shows on my screen when I try to open the recording

papr 23 August, 2019, 15:28:54

@user-deafd0 It looks like you have an eye video but no eye timestamps. Can you share the info.csv file of this recording?

user-deafd0 23 August, 2019, 15:32:37

Chat image

user-deafd0 23 August, 2019, 15:32:49

@papr

papr 23 August, 2019, 15:33:26

@user-deafd0 Could you post a screenshot of the recording directory's content?

user-deafd0 23 August, 2019, 15:36:20

@papr

Chat image

papr 23 August, 2019, 18:45:55

@user-deafd0 for some reason the videos do not have any timestamp files. This is an indication that the recording was not terminated properly. Do the videos open in VLC?

user-c1220d 25 August, 2019, 09:45:21

Hi! I get this error when dropping the recording folder into Pupil Player. Is there any chance to recover it? Thank you so much

user-c1220d 25 August, 2019, 09:45:27

Chat image

papr 25 August, 2019, 13:43:15

@user-c1220d this is a problem with this version of Pupil Player. Please upgrade.

user-5e6759 25 August, 2019, 23:35:33

hi, I have a question: How can I launch the Pupil software without the GUI? I want to launch it on a development board which doesn't have an X server running, so an error about opening glfw pops up if I simply start the Pupil software from the console.

wrp 25 August, 2019, 23:37:29

@user-5e6759 there is no windowless Pupil software. You can use Pupil Service, but this still has glfw windows, so not sure this meets your constraints.

user-5e6759 25 August, 2019, 23:42:28

so would it be okay if I comment out all lines related to the glfw windows?

wrp 25 August, 2019, 23:45:09

@user-5e6759 I believe it's going to be a more involved set of changes than just commenting out a few lines of code.

user-1e0eb8 26 August, 2019, 01:03:27

Hi all, we are conducting research on gaze behaviour in sport. We used to have SMI ETG2 eye trackers, but we are not happy with them at all, and recently the smart recorder stopped working. We are looking into the Pupil trackers as an alternative. Does anyone have some information for me in terms of their application to sport (accuracy, usability, etc.)? Thanks

wrp 26 August, 2019, 03:19:17

Hi @user-1e0eb8 there are a number of people in the community using Pupil Labs hardware + software in sports research. I'm not 100% sure if this research has been published yet but you might want to check the citation list: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing You might also want to take a look at some research that Kyle Lindley (and Driveline Baseball) has been doing with Pupil Core with baseball players: https://twitter.com/kylelindley_

user-1e0eb8 26 August, 2019, 03:34:28

Thanks mate.

user-daa4af 26 August, 2019, 18:58:55

Where can I download the Core software for MacOS?

papr 26 August, 2019, 19:05:23

@user-daa4af hi, unfortunately, we do not have a macOS version for the v1.15 release. Please check the v1.14 release and download its macOS version

papr 27 August, 2019, 08:08:35

@user-daa4af We have just uploaded the v1.15 macOS release. Please give it a try.

user-c94ac3 28 August, 2019, 06:51:41

hi pupil community, first time poster here. I was wondering if anyone might be available to help me with an experiment I am conducting

user-c94ac3 28 August, 2019, 06:52:49

I'm using a Pupil eye tracker and software to track my students' eye movements during test taking (I'm an English teacher), but I'm having trouble calibrating my device (monocular) to get the accuracy I need

user-c94ac3 28 August, 2019, 06:53:01

any help would be greatly appreciated

user-c1220d 28 August, 2019, 08:30:59

@papr I tried the 1.15 version but apparently it crashes when I activate the "eye overlay" plugin

player.log

user-c1220d 28 August, 2019, 08:31:39

Pupil Player can actually start, but only visualizes the world camera recording

papr 28 August, 2019, 08:33:58

@user-c1220d Looks like you are missing timestamps for eye0. Could you paste a screenshot of the recording directory?

user-c1220d 28 August, 2019, 08:35:55

Chat image

user-c1220d 28 August, 2019, 08:36:19

here it is

papr 28 August, 2019, 08:40:11

@user-c1220d Yeah, it looks like the eye0 video was somehow corrupted. The video file is empty (0 bytes) and does not have a corresponding timestamp file. Remove the eye0 file and retry the overlay feature.

user-c5fb8b 28 August, 2019, 08:41:21

@papr eye1 is also 0 bytes in @user-c1220d's recording!

user-c1220d 28 August, 2019, 08:41:53

yeah, for some reason it didn't record properly

user-c1220d 28 August, 2019, 08:42:20

thank you for the help, unfortunately the recording failed

papr 28 August, 2019, 08:42:56

Oh yeah, and there are actually timestamp files. I read the screenshot by column instead of row-wise. Oops.

papr 28 August, 2019, 08:43:19

@user-c1220d But yes, it looks like the eye videos were not properly recorded. Could you share the info.csv file with us?

user-c1220d 28 August, 2019, 08:43:59

yes of course

info.csv

papr 28 August, 2019, 08:46:19

@user-c1220d This is a very old version of Pupil Mobile. Have you considered upgrading?

user-c1220d 28 August, 2019, 08:48:08

yeah, maybe I should. Does it have any complications with different Android versions?

papr 28 August, 2019, 08:50:20

@user-c1220d There was a problem with detecting the world cameras on Android 9. But all older versions of Pupil Mobile have this problem. We released a work-around in the beta release.

user-c1220d 28 August, 2019, 09:11:38

ok thanks so much

user-c5dc99 28 August, 2019, 10:34:15

Hi everyone, I'm trying to install all dependencies for Ubuntu 16.04 and I'm facing many errors installing libuvc, as described in this GitHub issue: https://github.com/pupil-labs/pyuvc/issues/53, but I'm not able to fix them.

papr 28 August, 2019, 10:47:43

@user-c5dc99 please post your exact error in the issue above

user-3e42aa 28 August, 2019, 14:31:29

Any idea why we are getting IndexError from _convert_frame_index in pupil player while doing offline pupil detection?

papr 28 August, 2019, 14:32:06

@user-3e42aa No not really. Could you share that recording with data@pupil-labs.com ?

user-3e42aa 28 August, 2019, 14:32:20

It's rather huge

papr 28 August, 2019, 14:33:03

@user-3e42aa ok, then let's start with the complete traceback of the exception

user-3e42aa 28 August, 2019, 14:33:09

The eye videos are 3.6 gigs a piece

user-3e42aa 28 August, 2019, 14:33:31

Ok, a sec

papr 28 August, 2019, 14:33:36

@user-3e42aa you can compress them via ffmpeg -i original.mp4 compressed.mp4

user-3e42aa 28 August, 2019, 14:34:12

Ok. I actually did do that, but wanted to verify with the originals

user-3e42aa 28 August, 2019, 14:35:17

Do you need the scene camera video as well?

papr 28 August, 2019, 14:36:10

@user-3e42aa yes

papr 28 August, 2019, 14:37:07

Could you please share the traceback nonetheless?

user-3e42aa 28 August, 2019, 14:38:01

Coming up, had to get the laptop online

user-3e42aa 28 August, 2019, 14:38:26

eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace:
Traceback (most recent call last):
  File "launchables/eye.py", line 664, in eye
  File "shared_modules/video_capture/file_backend.py", line 281, in run_func
  File "shared_modules/video_capture/file_backend.py", line 454, in recent_events_own_timing
  File "shared_modules/video_capture/file_backend.py", line 281, in run_func
  File "shared_modules/video_capture/file_backend.py", line 405, in get_frame
  File "shared_modules/video_capture/file_backend.py", line 377, in _convert_frame_index
IndexError: index 0 is out of bounds for axis 0 with size 0

papr 28 August, 2019, 14:41:50

@user-3e42aa which version of player do you use?

papr 28 August, 2019, 14:43:30

And please share the info.csv file of the recording.

user-3e42aa 28 August, 2019, 14:43:40

v1.15-4-gbfa1cd9_linux_x64

user-3e42aa 28 August, 2019, 14:44:21

info.csv

user-3e42aa 28 August, 2019, 14:44:31

It was recorded with another version though

user-c87bad 29 August, 2019, 09:21:02

Hi! Can anyone tell me how to fix it?

user-c87bad 29 August, 2019, 09:21:03

I have a recording of around 2 minutes from Pupil Capture. After loading it into Pupil Player, I found some red on the marker cache line, which means no surface was detected there. However, when playing the video I saw this happened suddenly: the audience didn't move and the markers were detected before, so the markers should have been detected at that time as well. Then I tested with recordings of around 1, 2, and 3 minutes, but after dragging the files into Pupil Player, the markers suddenly could not be detected for roughly the last 10 seconds. Then I tested again with larger markers at 1, 2, and 3 minutes. The gaze data on the surface still wasn't complete (because the markers suddenly couldn't be detected).

papr 29 August, 2019, 09:54:15

@user-c87bad Which version of Player do you use and on which operating system?

user-40621b 30 August, 2019, 02:53:24

Hi @papr can you tell me how to get the video from world camera in a range of QR code only?

wrp 30 August, 2019, 02:55:14

@user-40621b could you please explain: what does "in a range of QR code only" mean? Does this mean cropping the video based on the surface defined by markers? If so, this is not supported out of the box by Pupil software, but you should be able to do it post-hoc (or in real time) by subscribing to the surface topic in the network API and cropping the video frames based on the position of the surface.

user-40621b 30 August, 2019, 03:01:44

Yes, that's what I mean @wrp. What is the network API? How do I get it? Thanks.

wrp 30 August, 2019, 03:12:56

Hi @user-40621b please see:
1. https://docs.pupil-labs.com/#interprocess-and-network-communication
2. Examples of how to communicate with Pupil software via simple Python scripts: https://github.com/pupil-labs/pupil-helpers/tree/master/python
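
Following the pattern of those pupil-helpers examples, a rough sketch (field names may differ between versions) of subscribing to surface messages, which carry the surface position needed for cropping:

    import zmq
    import msgpack

    ctx = zmq.Context()

    # Ask Pupil Remote (default port 50020) for the subscription port
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # Subscribe to surface-tracker messages only
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:{}".format(sub_port))
    sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")

    while True:
        topic, payload = sub.recv_multipart()
        surface = msgpack.loads(payload, raw=False)
        # The datum includes the surface name and its transform into image
        # coordinates, which could drive the cropping of the world frames
        print(topic.decode(), surface.get("name"))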

user-10523f 30 August, 2019, 03:23:01

Okay @wrp I'll read first. Thanks

papr 30 August, 2019, 09:33:24

@user-c87bad Hey, I have seen that we did not properly respond to your question the first time you asked it. I also saw that there is another open issue regarding your time measurements. We are currently in the process of handling open issues more systematically. You should receive a response to the time measurement issue next week.

Regarding the surface detection issue: Please share the recording with data@pupil-labs.com such that we can investigate the issue.

user-9cbfb2 30 August, 2019, 19:27:45

@papr Hi there! I'm also trying to integrate Pupil + PsychoPy and I must ask: is it possible to give Pupil Capture the order to start recording from a few lines in PsychoPy? Thank you very much!

papr 30 August, 2019, 19:28:45

@user-9cbfb2 yes it is, but only from the coder view.
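
A minimal sketch of such lines (assuming Pupil Remote is enabled in Capture on its default port 50020; "R" starts a recording and "r" stops it):

    import zmq

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address

    pupil_remote.send_string("R")  # start recording ("R session_name" also works)
    print(pupil_remote.recv_string())

    # ... run the PsychoPy trial here ...

    pupil_remote.send_string("r")  # stop recording
    print(pupil_remote.recv_string())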

user-9cbfb2 30 August, 2019, 21:12:33

@papr I'll try it tomorrow, thank you very much!

End of August archive