💻 software-dev


papr 01 November, 2018, 14:59:55

@user-911c89 This integration is still in the planning phase. Unfortunately, I cannot provide further assistance with this issue yet.

user-62b13b 02 November, 2018, 14:52:59

Hello, I guess my question is a common one. I am currently using the Pupil hardware for a study where people look at stimuli for which it is impossible to use markers. Previously I used the SMI semantic gaze mapping function, which allows you to manually code which part of a reference image people are looking at, in order to calculate dwell time etc. How do people solve this issue with Pupil Labs eye-tracking data?

user-62abb8 04 November, 2018, 15:24:31

Hello, I'm an interaction design student at NTNU (Norway). We are considering buying a few Pupil eye trackers to replace some aging SMI glasses. One of the projects that would justify the expense involves using the world camera as a luminance meter, to eventually allow using pupil size as a metric under different lighting conditions. To use a camera as a luminance meter (cd/m2), it is necessary to either fix the exposure or save the world camera's parameters as they change during the recording (f-stop, ISO, exposure time). Is any of that data available through your software? Thanks

papr 04 November, 2018, 18:26:38

@user-62abb8 Fixing the exposure time is possible through software

user-e194b8 06 November, 2018, 10:29:16

Hi @papr, have you uploaded the final version of the blink detector? If you remember me, I found a bug that forced you to restart the script before restarting the capture. Discord has deleted my last account, so I will use this one.

papr 06 November, 2018, 10:39:51

@user-e194b8 Hey, yes, I have. I have just updated the gist with the latest version

user-e194b8 06 November, 2018, 11:33:18

That's perfect! Another question: can we know the latency between camera images being received by the program and the UDP signal sent to Unity? Is pupil detection faster than blink detection? Thanks!

papr 06 November, 2018, 15:19:02

@user-e194b8 blink detection is based on the detected pupil data. Therefore pupil detection is faster, yes.

papr 06 November, 2018, 15:24:03

The delay depends on different factors, e.g. which plugins are active, your network connection, etc. You can measure it yourself by syncing time between Capture and Unity and measuring the time difference between the pupil data timestamps and their reception in Unity

user-e194b8 07 November, 2018, 10:40:34
You can measure it yourself by syncing time between the Capture and unity

Can I do this with the Time sync plugin?

papr 07 November, 2018, 12:27:50

Yes

user-e194b8 07 November, 2018, 13:46:07

Reading it again, I had not understood what you wanted to tell me. That only gives the difference between the timestamp and Unity time, which I already have. The time I asked about is the period between reading the camera buffer (maybe that is the timestamp) and Pupil finishing analyzing the frame to obtain confidence and so on

user-06a050 09 November, 2018, 09:57:06

Hey! After installing the latest version of everything per the developer docs, Pupil Capture opens with a black window. Only after resizing it is the UI shown. Everything is really slow; macOS also shows the rainbow ball when clicking anything. But when you select "Test image" in the backend manager, the UI responds quickly and no balls are shown. I have no idea where that comes from, and it looks like nothing else is broken, but I wanted to let you know that there is an issue.

papr 09 November, 2018, 09:57:53

@user-06a050 Thank you, I will look into that. Which mac do you use?

user-06a050 09 November, 2018, 09:59:22

macOS 10.14.1, MacBook Pro Retina

papr 09 November, 2018, 09:59:56

Ok, thank you

user-06a050 09 November, 2018, 10:03:00

*Activating "Test image" and fake capture

user-06a050 09 November, 2018, 10:05:05

I'll create an issue on GitHub, maybe I'm also able to create a video from it

papr 09 November, 2018, 10:05:47

ok

user-06a050 09 November, 2018, 10:21:34

https://github.com/pupil-labs/pupil/issues/1381

user-29e10a 09 November, 2018, 13:28:40

@papr One question about the new annotation sending procedure: is it right that I must send my custom annotation with the topic "annotation", so to speak "annotation.blabla", or do I have to use the notify topic with the subject "annotation", as in "notify.annotation.blabla"? The latter works but is not saved to annotation.pldata (and my issue with high-frequency annotations persists), while the former doesn't work; the error is "Req.XSend - cannot send another request". Maybe I have to start a new publisher socket? But how does Pupil know about that? I'm confused

user-29e10a 09 November, 2018, 13:47:25

Annotation Capture Plugin IS running

papr 09 November, 2018, 13:54:24

Are you sending the data to Pupil Remote or the IPC Pub Port?

user-29e10a 09 November, 2018, 13:54:55

pupil remote

papr 09 November, 2018, 13:56:51

The first way would be correct, but you need to: 1. request the PUB_PORT from Pupil Remote, 2. create a PUB/PUSH socket connected to this port, 3. send the message to that socket

papr 09 November, 2018, 13:57:50

@user-29e10a Please check out the new example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py

papr 09 November, 2018, 13:59:17

We will probably adapt Pupil Remote to accept these kinds of messages so that a pub port is not necessary anymore.

Do not forget to call recv on the Pupil Remote socket after sending a message. The REQ-REP pattern requires you to receive the server's response.
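For reference, a minimal sketch of those three steps (ports, commands, and the annotation payload follow the pupil-helpers example linked above; the host and the "blabla" label are placeholders):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# 1. Request the PUB_PORT from Pupil Remote (default port 50020)
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()

# 2. Create a PUB socket connected to that port
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscriptions a moment to register

# Annotation timestamps should be in Pupil time; the "t" command returns it
remote.send_string("t")
pupil_time = float(remote.recv_string())

# 3. Send the annotation as a two-frame message: topic, then msgpack payload
annotation = {
    "topic": "annotation.blabla",  # "blabla" is a placeholder subtopic
    "label": "blabla",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```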

user-29e10a 09 November, 2018, 14:00:04

thank you, I will try that! 😃

user-82e7ab 09 November, 2018, 14:09:10

Hi, I just tried version 1.9 and it seems like you changed the data format sent via Pupil Remote:

1.8: {"topic":"gaze.3d.0.","eye_centers_3d":{0:[x,y,z]},"gaze_normals_3d":{0:[x,y,z]},...
1.9: {"topic":"gaze.3d.0.","eye_center_3d":[x,y,z],"gaze_normal_3d":[x,y,z],...

In the 1.9 structure, eye_center_3d and gaze_normal_3d are no longer dictionaries but simple arrays, containing exactly one vector instead of one or two. Also, in 1.8 it was possible to get gaze data packages like gaze.3d.01., which contained data from both eyes. Is this functionality gone and the aforementioned dictionary structure no longer necessary? If so, you should definitely report such changes in the developer notes of your release message, as it completely broke our Pupil integration O_o

papr 09 November, 2018, 14:16:12

@user-82e7ab Are you using macos?

papr 09 November, 2018, 14:16:39

You are receiving monocular data. It is expected that its fields differ from those of binocular data

user-82e7ab 09 November, 2018, 14:18:07

what is macos?

papr 09 November, 2018, 14:18:25

macOS, the operating system for Apple computers

user-82e7ab 09 November, 2018, 14:18:36

ah sry, no it's Windows

user-82e7ab 09 November, 2018, 14:18:41

(10)

papr 09 November, 2018, 14:20:06

Please make a recording and export the pupil data via the raw data exporter. Let's check if the eye cam clocks are in sync

papr 09 November, 2018, 14:21:17

Alternatively, it is possible that one of the pupil detections yields low confidence. This also results in monocularly mapped data

user-82e7ab 09 November, 2018, 14:23:44

ok, I did a recording via capture

user-82e7ab 09 November, 2018, 14:23:57

where do I find the raw data exporter?

user-82e7ab 09 November, 2018, 14:25:35

Pupil Capture started the Binocular 3D gaze mapper after calibration - and it's still running - not sure if this can still send monocular data?!

user-82e7ab 09 November, 2018, 14:31:20

OK, maybe this just did not happen to me before (receiving monocular data), but if this can happen, I'll add this to our receiver.

papr 09 November, 2018, 15:47:48

You need to open the recording with Player. The Raw Data Exporter will be loaded by default and will export the data to csv.

papr 09 November, 2018, 15:48:08

Yes, you should definitely support receiving monocular data

user-88dff1 09 November, 2018, 16:39:06

Hey. Trying to integrate Pupil with Unity. Got Capture running, imported the HMD_VR asset. Opened the 3d calib demo; it successfully connects to Pupil, and then throws NullReferenceExceptions when PupilGazeTracker tries to instantiate a "CalibrationPointExtendPreview" from Resources - which isn't in any Resources folder.

user-82e7ab 12 November, 2018, 06:41:15

Is there any complete documentation / synopsis / example output of what the sent gaze data can look like?

user-82e7ab 12 November, 2018, 06:50:12

How do I know if a received message contains monocular or binocular data? (The topic seems to be the same.) Or does 1.9 only send binocular data if the topic (gaze.3d.) ends with 01. instead of just 0. or 1.?

user-82e7ab 12 November, 2018, 06:50:46

I've exported the raw data - should I just drop it here, or upload it somewhere else?

papr 12 November, 2018, 06:51:29

You can put it in a gist (gist.github.com)

papr 12 November, 2018, 06:52:46

Yes, .01. is binocular, while the rest is monocular. Please let us know if you find any inconsistencies
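Based on this convention, a receiver might branch on the topic like this (a sketch assuming 3d gaze mapping and Pupil Remote on its default port; field names as reported above):

```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote, default port
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    if topic.startswith(b"gaze.3d.01."):
        # binocular: eye_centers_3d / gaze_normals_3d are dicts keyed by eye id
        centers = datum["eye_centers_3d"]
    else:
        # monocular (gaze.3d.0. or gaze.3d.1.): single plain vectors
        center = datum["eye_center_3d"]
```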

user-82e7ab 12 November, 2018, 06:54:44

OK, then could you add this to the release notes? Because in 1.8 the data was always stored in binocular format, even if the topic was .0. or .1.

papr 12 November, 2018, 06:55:15

@user-82e7ab https://github.com/pupil-labs/pupil/pull/1291

user-82e7ab 12 November, 2018, 06:57:30

Thx, but I was thinking of the "Developer Notes" section here https://github.com/pupil-labs/pupil/releases/tag/v1.9

papr 12 November, 2018, 06:57:32

The release notes only mentioned #1286. You are right that we should have mentioned this PR explicitly

user-82e7ab 12 November, 2018, 06:57:54

because this is the place where I would expect such API changes

user-82e7ab 12 November, 2018, 06:58:33

but that's just an idea - you're always super fast at giving us support - so it's no big problem 😉

papr 12 November, 2018, 06:58:51

I will add it to the release notes as soon as I am in the office. I 100% agree with you that this should have been mentioned.

user-82e7ab 12 November, 2018, 06:59:01

thx

user-82e7ab 12 November, 2018, 07:00:28

oh, you're even working before arriving at the office .. now I feel bad .. thx again ;)

user-82e7ab 12 November, 2018, 07:00:52

just a quick wrap-up for me - from 1.9 on I can rely on the topic to tell monocular from binocular gaze data packages, right?

papr 12 November, 2018, 07:20:27

@user-82e7ab yes, you should be able to rely on that. This was the case for 1.8, too... But it was buggy. ;)

papr 12 November, 2018, 09:39:22

@user-82e7ab I created an API changes label to keep track of api changes within PRs. This should ensure an exhaustive list of all api changes for the next release notes.

For reference: https://github.com/pupil-labs/pupil/issues?utf8=%E2%9C%93&q=label%3A%22API+changes%22

user-82e7ab 12 November, 2018, 11:29:23

perfect! thx @papr

user-b91aa6 12 November, 2018, 18:09:53

Question 1: May I ask what the maturity, solver fit, confidence, performance, and perf. grad values mean in the 3D eye model?

Chat image

user-b91aa6 12 November, 2018, 18:10:00

@papr

user-b91aa6 12 November, 2018, 19:26:27

Question 2: The green circle is the projected sphere in the left image; however, the corresponding 3D eye model is shown in the right image, and obviously they don't match. In the left image the pupil is inside the projected sphere (the green circle), but in the right image the pupil is not inside the sphere. May I ask why? How is the green circle obtained? @papr

Chat image

papr 13 November, 2018, 08:53:25

@user-b91aa6 Q1: These are displayed for debugging and represent internal parameters/intermediate results of the 3d model. They are only partially meaningful. See this paper for details on how the 3d model works: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf

Q2: The debug window displays a 3d scene. You can click+drag within the window to change the camera angle. You will see that the sphere is positioned behind the eye image plane. The perspective makes the sphere look smaller from this angle.

user-b91aa6 13 November, 2018, 09:35:34

To Question 1: Thank you very much for your reply. May I ask if you can explain them? I have read the paper and get the idea of how the 3D eye model works, but my current work requires me to know when the 3D eye model estimation is accurate. So I need to know the meaning of these metrics. @papr

papr 13 November, 2018, 09:43:31

Unfortunately, there is no exact way to tell if the model is accurate.

maturity: the model fits best if there are observations for many different pupil angles. The more angles there are, the higher the maturity

papr 13 November, 2018, 09:53:45

confidence is a mixture of the 2d datum confidence and how well the 2d datum fits the current model.

solver fit: a measure of how well the model fits the data it was fitted on (the training error).

performance is a running average of the confidence values

For more details, please have a look at the code.

user-b91aa6 13 November, 2018, 10:13:31

2D datum means the 2D pupil position?

user-b91aa6 13 November, 2018, 10:13:36

@papr

papr 13 November, 2018, 10:13:57

Correct

papr 13 November, 2018, 10:14:15

A 3d datum is always based on a 2d datum.

user-b91aa6 13 November, 2018, 10:18:21

Thank you very much. To Question 2: The origin of the coordinate system should be the eye-tracking camera, right? The eye image is what the eye-tracking camera sees. So the blue 3D sphere we see on the eye image should be the projected sphere, right?

Chat image

papr 13 November, 2018, 10:20:44

You are correct in regard to the coordinate origin. No, the blue sphere is not being projected in this view. It is the visualization of the actual 3d model sphere. This sphere is used to generate the projected green circle in the eye window.

user-b91aa6 13 November, 2018, 10:24:57

So if the view camera is at the same position as the eye-tracking camera, then, the sphere will be the projected sphere on the eye image, right?

papr 13 November, 2018, 10:28:15

Mmh, yes, this sounds about right. I now understand the issue. You are right to expect both spheres to be equally big. My guess is that the projected sphere (green) is somehow calculated/displayed differently. I will look into that.

user-b91aa6 13 November, 2018, 10:29:10

Can you reply to me after you check this? I need this in my current project. Thank you very much.

papr 13 November, 2018, 10:29:21

Sure

user-b91aa6 13 November, 2018, 10:29:58

Thanks a lot

papr 13 November, 2018, 10:50:14

@user-b91aa6 The issue is that we are using the focal length of the 120Hz cams for the projection. This is a bug. We should be using the focal length for the 200Hz cams. My colleague will create a Github issue for that.

user-b91aa6 13 November, 2018, 11:06:54

But the eye-tracking cameras for the Vive that I use are 120 Hz, and focal_length = 620 in the code. May I ask what the right focal length is? Which one is wrong? Is the green circle in the eye image wrong?

user-b91aa6 13 November, 2018, 11:06:57

@papr

papr 13 November, 2018, 11:38:29

Mmh, I will have to discuss with my colleagues but I am pretty sure that the 3d debug visualizer is buggy.

user-b91aa6 13 November, 2018, 12:38:42

Thank you very much.@papr

user-b91aa6 13 November, 2018, 15:54:51

What does the perf. Grad mean in the 3D eye model?

user-b91aa6 13 November, 2018, 15:54:54

@papr

user-b91aa6 13 November, 2018, 15:55:07

Chat image

papr 13 November, 2018, 15:58:00

@user-b91aa6 it's just the gradient of the performance within one iteration

user-b91aa6 13 November, 2018, 15:59:26

Thanks. Have you found out what the problem with the 3D eye visualizer is?

user-b91aa6 13 November, 2018, 15:59:32

@papr

papr 14 November, 2018, 10:53:07

Not yet

user-e194b8 14 November, 2018, 12:15:59

Hey, I am trying to check the delay for the acquisition of pupil data. I am subtracting a counter that I start when I connect Unity to Pupil Capture and the PupilTresholds. Results are around 110 ms, can that be correct? In Pupil Capture (latest version) I have these plugins enabled: Pupil Remote, Blink Detection (it now works perfectly), and Frame Publisher (which starts when I launch Unity). Thanks!

mpk 14 November, 2018, 12:26:31

@user-e194b8 we typically measure a 10 ms delay from the start of camera exposure until the frame is in Pupil Capture, plus 5 ms for processing; everything else is added outside of Pupil Capture. Maybe Unity3D introduces a lag?

user-e194b8 14 November, 2018, 14:21:43

It could be, I am trying several approaches to reduce it

user-e194b8 14 November, 2018, 14:22:37

Have you measured the delay with your Unity demos?

papr 14 November, 2018, 14:23:54

How do you measure the delay? Do you synchronize clocks? If yes, how?

user-e194b8 14 November, 2018, 14:29:49

I record the first pupil.0 timestamp as time0. When I do this, I start a counter in Update. I subtract this time0 from all future timestamps. Then I simply compute the difference between each timestamp and the current Update counter.

user-e194b8 14 November, 2018, 14:32:54

Approximately

Chat image

user-e194b8 14 November, 2018, 14:33:26

Maybe this is wrong

papr 14 November, 2018, 14:57:36

Thanks for the picture, this helped a lot! I am afraid that you might not be calculating what you are expecting:

  • TS - time0 is the time interval between the recording of two frames. This should be equal to 1 / <fps>.
  • The counter is the time interval between the reception of two data points. If the pipeline had a fixed processing time and the network a fixed transmission time, then the counter would be equal to TS - time0.
  • Under the above conditions, your "delay" is zero and independent of the network transmission time.
papr 14 November, 2018, 14:58:25

Since the processing time and the network transmission time vary, the "delay" value varies as well.

user-e194b8 14 November, 2018, 15:30:51

I think I understand you; the Update timer is also delayed by the processing and network times....

papr 14 November, 2018, 15:35:33

To calculate the delay between frame creation and data reception, I would recommend:

  • synchronizing Pupil Capture to unix time using this plugin: https://gist.github.com/papr/45ec8a48d83338d007c1a5d49a35a966
  • taking the unix timestamp at the time of arrival
  • subtracting these two unix timestamps to calculate the total delay
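A minimal sketch of that measurement (assuming Capture's clock has already been synced to unix time via the plugin above, and Pupil Remote on its default port):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # only meaningful if Capture's timestamps are unix time (see plugin above)
    delay = time.time() - datum["timestamp"]
    print("Delay:", delay)
```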

user-e194b8 14 November, 2018, 15:36:40

Perfect! I am going to try it

papr 14 November, 2018, 15:46:15

These are a few example values I get when calculating the delay using a script running on the same computer as Capture:

Delay: 0.017004728317260742
Delay: 0.014100313186645508
Delay: 0.01119089126586914
Delay: 0.013366222381591797
user-e194b8 14 November, 2018, 16:24:23

Awesome! I get a delay of 25.2952328938744 ms; that's enough for our purpose! Thanks a lot!

papr 14 November, 2018, 16:24:36

👍

user-d45407 15 November, 2018, 15:50:53

Hello, I can't seem to get data off of Pupil Mobile. I drag the file onto Pupil Player, but it displays a grey screen saying "this will take a while". I left it up for hours with no change. How long should this take?

Chat image

mpk 15 November, 2018, 16:34:06

@user-d45407 please make sure that there are no umlauts in your path. Also, what does the log file in your pupil settings folder say?

user-d45407 15 November, 2018, 16:41:14

2018-11-15 09:49:24,850 - MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
2018-11-15 09:49:27,434 - player - [INFO] launchables.player: Starting new session with 'C:\Users\tac0018\recordings\20181114103343557'
2018-11-15 09:49:27,447 - player - [INFO] player_methods: Updating meta info
2018-11-15 09:49:27,448 - player - [INFO] player_methods: Checking for world-less recording
2018-11-15 09:49:27,449 - player - [ERROR] launchables.player: Could not generate world timestamps from eye timestamps. This is an invalid recording.

user-d45407 15 November, 2018, 16:41:27

^from the settings.

mpk 15 November, 2018, 16:41:41

@user-d45407 it looks like the recording is not complete. Did you transfer this from Pupil Mobile?

mpk 15 November, 2018, 16:42:04

Please restart Android before transferring; we found that otherwise not all files were shown in Windows.

mpk 15 November, 2018, 16:42:11

please then re-transfer the files and try again.

user-d45407 15 November, 2018, 16:43:43

will do!

user-d45407 15 November, 2018, 16:48:14

it did the same thing. I am going to try again with a different recording and let you know if that changes anything.

user-d45407 15 November, 2018, 16:54:54

So I think I see what the problem is: the phone I am using (Nexus 5) is saving all of the data in different folders.

mpk 15 November, 2018, 16:55:10

@user-d45407 please then make sure to update Pupil Mobile!

user-d45407 15 November, 2018, 16:55:39

Chat image

user-d45407 15 November, 2018, 17:15:21

Thanks for your help!

user-d45407 15 November, 2018, 17:15:29

Its working now!

mpk 15 November, 2018, 20:16:48

Great to hear!

user-82e7ab 16 November, 2018, 08:46:36

hi, as of release 1.8 all messages (sent via zmq/msgpack) are required to have a topic field, so all sent messages contain a topic. My question is what the subject field is for, as it seems to always hold exactly the same data/string?

papr 16 November, 2018, 09:55:17

Hi @user-82e7ab

Notifications are special messages. Their topic has this strict format: notify.<notification subject>

Notifications are passed to all active plugins via the on_notify() callback. Plugins also have the ability to send notifications via notify_all().

So the answer to your question is that, even if your argument regarding redundant information is valid, we still need the subject for the app-internal plugin API.

user-82e7ab 16 November, 2018, 10:40:10

ok, so is it valid to say that if I'm sending requests to Pupil, I'm safe always filling both topic and subject with the same information (with the notify. prefix for the topic)? I'm asking because e.g. in main.py the topic is checked for eye_process.should_start, while in eye.py the subject is checked for eye_process.should_stop

papr 16 November, 2018, 10:41:01

Notifications should always have both, yes
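A sketch of what that looks like on the wire, following the pupil-helpers remote-control pattern (the eye_process.should_start payload is an assumption based on the fields mentioned above):

```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    """Send a notification via Pupil Remote; REQ-REP means we must recv the reply."""
    topic = "notify." + notification["subject"]
    remote.send_string(topic, flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(notification, use_bin_type=True))
    return remote.recv_string()

# the subject lives in the payload; the topic is the same string prefixed with "notify."
notify({"subject": "eye_process.should_start.0", "eye_id": 0})
```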

user-82e7ab 16 November, 2018, 10:41:19

ok, thx

user-82e7ab 16 November, 2018, 10:44:20

Are you planning to "merge" these two at some point? Because, as you already mentioned, it's quite redundant data .. subject, topic, and the topic again as the first part of a multipart message

papr 16 November, 2018, 11:24:30

We will consider it.

user-cde59c 19 November, 2018, 09:34:51

I'm using a single eye/camera headset to develop gaze-controlled actions. The surface is defined as the full screen of the monitor. Two questions: Q1. Regardless of calibration, there is always an offset between the observed point and the gaze position received from Pupil. Is there any ready-to-use solution to remove this offset? Q2. I'm writing a function to minimize such an offset based on known calibration points. While debugging some strange issues with it, I've just discovered that in some situations (not yet identified) the gaze position may not be normalized (i.e., out of the range [0..1]). What is the meaning of such non-normalized positions? Should I ignore them?

user-b91aa6 19 November, 2018, 11:21:50

Question 1: When the 3D eye model is projected to the eye image, is the eye-tracking camera taken as the pin hole camera? I only see the focus length paratemeter of the camera.

user-b91aa6 19 November, 2018, 11:21:51

Question 2: Because the focus lenth of the eye-tracking camera in the vive can be adjusted. How do you know the focus length?@papr

papr 19 November, 2018, 11:55:48

@user-cde59c 1) This offset is an estimation error, measured as accuracy; see the Accuracy Visualizer for details. This offset is only rarely constant. Reducing the offset means improving the calibration. 2) Not all gaze points are necessarily in the field of view of the camera. Therefore it is technically valid to have negative/exceeding values. These are often outliers, though. Keep an eye on the confidence values: these extreme gaze positions often have low confidence.
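One way to act on that advice is to filter the surface-mapped gaze stream by confidence and range (a sketch only; the 0.8 threshold is an arbitrary choice, and the field names follow the standard gaze datum format):

```python
MIN_CONFIDENCE = 0.8  # arbitrary threshold; tune for your setup

def usable_gaze(gaze_data):
    """Yield gaze positions that are confident and actually on the surface."""
    for g in gaze_data:
        x, y = g["norm_pos"]
        if g["confidence"] < MIN_CONFIDENCE:
            continue  # extreme positions often come with low confidence
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            continue  # technically valid, but outside the surface
        yield x, y
```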

papr 19 November, 2018, 11:59:24

@user-b91aa6 I saw your questions the first time you posted them. I will answer them when I know the answer. Please refrain from reposting your questions. Next time, I will try to acknowledge the question earlier, so that you know that I have read it.

papr 19 November, 2018, 12:23:35

@user-b91aa6 Q1: I don't know. You probably know the code for the 3d visualizer better than me. Q2: I think this is just an estimation and is kept constant.

This visualizer is really just for debugging and most likely buggy. Unfortunately, I do not have the time to look into this problem.

user-b91aa6 19 November, 2018, 13:52:14

Thank you very much.

user-b91aa6 20 November, 2018, 10:03:44

When the 2D pupil is unprojected into 3D space, is the eye-tracking camera regarded as a pinhole camera? Why can it be regarded as a pinhole camera?

user-b91aa6 20 November, 2018, 10:04:08

@papr

papr 20 November, 2018, 10:05:51

They are considered pinhole cameras with distortion. See https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py
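Ignoring distortion for a moment, the unprojection itself is simple: a 2d image point maps to the ray through the optical center and that point on the image plane at z = focal length. A sketch (the image size and centered principal point are assumptions; 620 px is the focal length mentioned earlier in this channel):

```python
import numpy as np

def unproject(u, v, focal_length, cx, cy):
    """Back-project a 2d image point into a 3d viewing ray, assuming an
    ideal (already undistorted) pinhole camera with principal point (cx, cy)."""
    ray = np.array([u - cx, v - cy, focal_length])
    return ray / np.linalg.norm(ray)  # unit direction in camera coordinates

# hypothetical 640x480 eye image with the principal point at its center
print(unproject(400.0, 300.0, 620.0, 320.0, 240.0))
```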

user-b91aa6 20 November, 2018, 10:06:58

Thank you very much

user-96755f 21 November, 2018, 10:31:28

Hello. I'm struggling with surface tracking. I've printed all the markers and put them on a monitor screen, but it seems they aren't detected at all. What can I do?

wrp 21 November, 2018, 10:31:52

@user-96755f can you send a quick image of your setup?

user-96755f 21 November, 2018, 10:35:38

https://imgur.com/a/nmWSGXJ this is how I setup the markers around the monitor

user-96755f 21 November, 2018, 10:36:48

I haven't done any calibration at all right now. I'm working on two different PCs: on one I'm running Pupil, on the other I will give the stimulus trigger

user-96755f 21 November, 2018, 10:37:31

i'm currently using the diy bundle if this could help

user-8be7cd 21 November, 2018, 10:37:38

untick the inverted marker option

user-cde59c 21 November, 2018, 10:38:56

@user-96755f I had a similar problem. You need to print the markers again and cut them out with white space (0.5 cm) around each marker

user-96755f 21 November, 2018, 10:41:31

@user-8be7cd ok, this works, but it starts to detect the markers on the screen, my bad. @user-cde59c So I will try this.

user-96755f 21 November, 2018, 10:44:20

So, a short step-by-step guide: what would it be? What should I do first after running Pupil? Go immediately to the Surface Tracker? When should I calibrate, before or after?

papr 21 November, 2018, 10:58:27

@user-96755f surface tracking is independent of calibration. The issue is that you need a white border as mentioned by @user-cde59c

user-96755f 21 November, 2018, 11:05:58

@papr surface detection works now thanks to the newly printed markers. But what I'm asking is: do I need to calibrate gaze? Because where my subject is looking is not where the dots are. So do I need to do any calibration before or after surface tracking?

papr 21 November, 2018, 11:13:08

@user-96755f you definitely need to calibrate. Surface tracking is just an additional step that maps calibrated gaze from the scene camera coordinate system into the surface coordinate system

user-96755f 21 November, 2018, 11:16:49

What kind of calibration do you suggest? We are working on two different PCs, so no two-screen setup. I'm thinking of printing the big marker and pasting it on the wall behind the screen where the markers are. Is that a good idea?

user-cde59c 21 November, 2018, 11:19:05

@user-96755f just to let you know, I'm working on exactly the same problem 😃

user-96755f 21 November, 2018, 11:21:55

@user-cde59c if you solve it before me, share your findings! I will do the same as well

user-cde59c 21 November, 2018, 11:44:13

@user-96755f I can share my conclusions as they stand at the moment: 1. You need to calibrate the headset with Pupil Player, as it needs to be calibrated. 2. The surface (= the monitor) takes up only part of the world camera image. 3. In your solution or mine, we are interested in receiving data (gaze positions) from the surface. As I understand those data, they are normalized to the surface edges (I kindly ask the Pupil staff to confirm or correct this). 4. So the solution is to run your own calibration procedure on the defined surface. During it, you know the positions of the calibration points and the respective gaze positions. This allows you to calculate the average offset for each calibration point. 5. Finally, you need to write a function that minimizes the offset for any received gaze point, which is simply a weighted average of the offsets collected during your surface calibration. The tricky thing is calculating the weights 😃 (one possible weighting is sketched below)
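For step 5, one possible (assumed, not Pupil-provided) weighting is inverse-distance weighting of the measured offsets, sketched here with made-up numbers in normalized surface coordinates:

```python
import numpy as np

# Hypothetical on-surface calibration: known point positions and the gaze
# offsets measured at each of them, all in normalized surface coordinates
calib_points = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9], [0.9, 0.9]])
offsets = np.array([[0.02, -0.01], [0.01, 0.0], [0.03, -0.02], [0.0, 0.01], [0.02, 0.02]])

def correct(gaze, eps=1e-6):
    """Subtract an inverse-distance-weighted average of the measured offsets."""
    dist = np.linalg.norm(calib_points - gaze, axis=1)
    weights = 1.0 / (dist + eps)  # nearer calibration points count more
    weights /= weights.sum()
    return gaze - weights @ offsets

print(correct(np.array([0.3, 0.2])))
```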

user-96755f 21 November, 2018, 11:52:00

Thank you so much, but that seems too much for my skills. I'm not really good at programming, so I will find a more comfortable way, I think. Maybe by changing something in the setup. Another question: is there a minimum duration for developing a good heatmap? Right now I'm working on short videos, but they all look like they have a red filter. Sorry for all this stuff

wrp 21 November, 2018, 11:55:13

@user-96755f please set X Size and Y Size of each surface to see heatmaps

user-96755f 21 November, 2018, 11:56:15

ok size x and size x should they be real size right?

wrp 21 November, 2018, 11:57:09

Yes, real size is ok. Or just proportions that approximate the proportions of the surface

wrp 21 November, 2018, 11:57:57

If using Player you will need to click the recalculate yase distributions button after changing the surface size

user-cde59c 21 November, 2018, 11:58:58

x, y size in what? cm pr px?

user-cde59c 21 November, 2018, 12:02:25

and where is this 'yaze distributions button'?

user-cde59c 21 November, 2018, 12:37:32

@wrp could you describe what the 'yase distribution button' is, please? I'm working with surfaces too.

papr 21 November, 2018, 12:37:55

I think he meant the (Re-)calculate Gaze Distribution button

user-cde59c 21 November, 2018, 12:39:49

sounds more readable, but I still don't know where it is.

papr 21 November, 2018, 12:40:36

It is in the right-side menu of the Offline Surface Tracker

papr 21 November, 2018, 12:40:47

Above the surface submenus

user-cde59c 21 November, 2018, 12:42:50

Nothing like that in my Pupil Player. I only have the Surface Tracker menu. Nothing starts with 'Offline'

papr 21 November, 2018, 12:46:39

@user-cde59c Ah, I think you are using Pupil Capture. The online surface tracker does not have such a button.

user-96755f 21 November, 2018, 13:04:26

Thank you all! Everything is working fine!

wrp 21 November, 2018, 13:39:28

(thanks @papr for clarifying - was typing on mobile and made some typos)

user-96755f 21 November, 2018, 13:44:22

I just need one clarification: if I want to expose my subject to a series of images, how can I separate the heatmaps and see one for each image?

wrp 21 November, 2018, 13:53:03

@user-96755f you can "edit" each surface to change/offset the boundary from the markers.

user-96755f 21 November, 2018, 14:01:00

I will work on a slide show on screen

papr 21 November, 2018, 14:01:46

@user-96755f You could integrate the markers into the slide show. Just show different markers for each slide and define a surface for each

user-96755f 21 November, 2018, 14:03:15

Ok, just a little work on Photoshop!

user-4a7dd2 23 November, 2018, 10:01:20

Hey everyone, does someone know in which method/class the timestamps (world camera, eye cameras) are generated? Regards, Steve

papr 23 November, 2018, 10:01:44

@user-4a7dd2 depends on the selected backend

papr 23 November, 2018, 10:03:45

In case of the uvc/local usb cameras: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L605

user-3f0708 23 November, 2018, 15:39:39

good afternoon

I would like to know if someone has already used the mouse_controll script to control the mouse with their eyes, and whether the performance was good.

user-4a7dd2 24 November, 2018, 09:56:44

@papr Thank you

user-21d960 26 November, 2018, 22:31:17

So these are some tasks I would like to do with the eye tracker

user-21d960 26 November, 2018, 22:31:18

Chat image

user-21d960 26 November, 2018, 22:31:30

what software would you recommend for doing this with the Pupil Labs tracker?

user-d81c81 27 November, 2018, 02:35:37

hey, I'm having trouble building Boost.Python, can anyone help me?

user-babd94 27 November, 2018, 09:41:44

hi, I want to receive world video frames. I tried using the "Frame Publisher" plugin in Pupil Capture and recv_world_video_frames.py, but I only received eye video frames. Is there anything else I need to do?

https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py

papr 27 November, 2018, 09:45:21

@user-babd94 hi, what version of Capture do you use?

user-babd94 27 November, 2018, 09:47:49

@papr I use version: 1.8.26

papr 27 November, 2018, 09:49:09

@user-babd94 there was a frame publisher related bug in that version. Please update Capture.

user-babd94 27 November, 2018, 10:12:43

@papr thank you! Receiving world video frames now works!

papr 27 November, 2018, 10:12:54

Nice!

user-87fec3 28 November, 2018, 02:11:29

Is it possible to acquire images from the cameras into a .NET environment? In particular, I am looking to use the Pupil Labs goggles to do eye tracking on software we have developed in C#. Has anyone tried this before? If not, would there be anything preventing this from theoretically working? Any issues relating to drivers?

user-21d960 28 November, 2018, 02:12:54

@user-87fec3 I am trying to find good software to develop with the Pupil tracker. I'm sure there's something out there, and it doesn't make much sense to build something from scratch

user-21d960 28 November, 2018, 02:13:08

right now I'm using OkazoLab's EventIDE

wrp 28 November, 2018, 06:34:55

@user-87fec3 you can subscribe to the IPC over the network and receive messages - and also video frames - here is an example of how to subscribe to Pupil and get world frames (reference in Python) that you can adapt for C#: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
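For orientation, the Python reference boils down to the following (a condensed sketch of the linked script; frame messages arrive as three-part messages, and the raw part assumes the Frame Publisher's BGR format):

```python
import numpy as np
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    # topic, msgpack metadata, raw pixel buffer
    topic, payload, raw = sub.recv_multipart()
    meta = msgpack.loads(payload, raw=False)
    img = np.frombuffer(raw, dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
```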

wrp 28 November, 2018, 06:38:25

@user-21d960 I am not familiar with/don't have any experience with Okazolabs - maybe there are some people in the community that use other experiment building tools

user-87fec3 28 November, 2018, 22:35:14

@wrp thanks for the reply. I wanted to confirm -- by IPC do you mean interprocess communication? If so, I am wondering whether there would be any limitations to using this IPC method. For example, let's say I want to record at 120 fps or even 200 fps. I could be wrong, but I get the sense that acquiring images over IPC could run into a bottleneck?

Also, thanks for providing the example Python code. Perhaps I shall give this a test as a proof of concept. I just wanted to confirm how the system should be set up to run: should I be running the Pupil Capture application while running this script? Or should I be using a different configuration that uses the Pupil Labs source code?

papr 28 November, 2018, 22:50:20

@user-87fec3 just run Pupil Capture as usual while running the script.

user-4580c3 29 November, 2018, 02:12:56

Hi there, I'm going to analyze 60 AOI sequences with one device. Can you represent the result of accumulating the 60 datasets, not just one image, as a single picture (AOI sequence)?

user-4580c3 29 November, 2018, 02:13:05

60 people

user-6c7426 29 November, 2018, 10:35:35

hey all! I am trying to develop an application on Microsoft's HoloLens and integrate Pupil to get the user's fixation count at the end of the demo. Is there any way to listen for an event that returns the fixation count? thank you

user-1bcd3e 29 November, 2018, 14:08:48

hi everybody! Does iMotions support recording data from Pupil? Does anyone have experience using Pupil with iMotions??

user-1bcd3e 29 November, 2018, 14:09:11

thank you😇

papr 29 November, 2018, 14:33:15

@user-1bcd3e hi, yes, we have a plugin that exports recordings in a format that is compatible with iMotions

user-6c7426 29 November, 2018, 15:52:44

I am getting this error when running run_capture.bat. It also happens when I run service and player. Can someone help me resolve this issue, please?

Chat image

papr 29 November, 2018, 15:57:12

@user-6c7426 The docs recommend compiling the pupil detector manually in this case, since that gives you better feedback. See the docs for details. Generally, we recommend running the bundled application, though.

user-6c7426 29 November, 2018, 17:14:27

@papr The thing is, when I compile the pupil detector manually it still gives me the same error about cl.exe in Visual Studio's MSVC

papr 29 November, 2018, 17:16:41

Mmh, this is unfortunate to hear. I don't have any experience with the Windows dev environment though, sorry.

user-6c7426 29 November, 2018, 17:24:11

ok thank you

user-fbbb29 30 November, 2018, 13:26:35

Hi, thanks for your software. I'm trying to receive the world video, like @papr and @user-babd94 were discussing earlier here. However, if I add the file below to pupil_capture_settings/plugins/ in v1.9.7 on Ubuntu 18.04, Capture doesn't start anymore. In capture.logs, the last thing it says is: [DEBUG] plugin: Scanning: recv_world_video_frames.py Any ideas? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py

wrp 30 November, 2018, 13:28:19

@user-fbbb29 this script is meant to be run independently - it is not a plugin. So keep the Pupil Capture app running and then execute/run the Python script from a terminal/IDE

user-fbbb29 30 November, 2018, 13:31:27

Thanks!!!

user-6c7426 30 November, 2018, 13:33:53

Sorry for reposting, but can anyone who has had a similar experience with this error help? Thank you again!

Chat image

user-87fec3 30 November, 2018, 14:09:06

@wrp Between using IPC and recording video using Pupil Capture, I am wondering which is the more feasible option. For saved video recordings, are the recordings saved at full resolution and frame rate? For example, if I acquired the data at 120 fps or 200 fps at the respective resolutions, is that how the video files would be saved? Or is it a slower playback at lower resolution?

wrp 30 November, 2018, 14:32:12

@user-6c7426 looks like you are running from source on Windows - the pupil detectors and calibration routines have not been built. Please build these first, prior to running Capture.

user-6c7426 30 November, 2018, 14:33:06

@wrp I tried to build them prior but still got the same error about cl.exe

wrp 30 November, 2018, 14:33:35

@user-6c7426 please see earlier conversation in this channel regarding compiling libs on windows

user-6c7426 30 November, 2018, 14:33:47

@wrp ok thank you

wrp 30 November, 2018, 14:34:14

@user-87fec3 please see the docs section on timestamps: https://docs.pupil-labs.com/#data-format

wrp 30 November, 2018, 14:35:32

Timestamp files ensure accurate playback rates.
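For example, each video in a recording comes with a matching timestamps file, from which the effective capture rate can be checked (a sketch; run inside a recording folder):

```python
import numpy as np

ts = np.load("world_timestamps.npy")  # one timestamp per world frame
print(len(ts), "frames, median rate:", 1.0 / np.median(np.diff(ts)), "fps")
```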

user-6c7426 30 November, 2018, 16:38:10

@wrp Thank you very much. I managed to resolve my issue by using this suggestion: https://discordapp.com/channels/285728493612957698/446977689690177536/500182594235924491

End of November archive