👁 core


user-9cf0e5 02 October, 2017, 20:41:29

Is there a way to save the video frames from both cameras? I tried the following using OpenCV's VideoWriter:

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while True:
    frame = cap.get_frame_robust()

user-9cf0e5 02 October, 2017, 20:41:54

    out.write(frame)

mpk 02 October, 2017, 20:42:33

Try frame.img

user-9cf0e5 02 October, 2017, 20:42:39

but because the frame is not a numpy array but rather a uvc.Frame type, it's not working

mpk 02 October, 2017, 20:43:15

out.write(frame.img)

user-9cf0e5 02 October, 2017, 20:47:48

yeah got it

user-9cf0e5 02 October, 2017, 20:47:49

thanks

papr 02 October, 2017, 20:48:58

Beware that frame.img yields an RGB image. I could imagine that OpenCV requires a BGR image
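
Putting this thread together, a minimal sketch of the recording loop (assuming cap is an already-initialized uvc.Capture source and out is the VideoWriter from above; the color swap follows the caveat about RGB):

import cv2

while True:
    frame = cap.get_frame_robust()  # uvc.Frame, not a numpy array
    # frame.img is a numpy array; convert RGB -> BGR before writing.
    out.write(cv2.cvtColor(frame.img, cv2.COLOR_RGB2BGR))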

user-4a3b48 02 October, 2017, 21:19:57

@papr yeah sure I took care of that...thanks for the info btw

user-2798d6 03 October, 2017, 17:40:40

Hello! When in Player and going fixation by fixation, is there a way to see the time elapsed in the clip on the screen? I am trying to line up fixations with music to see where fixations are in terms of the music being played.

papr 03 October, 2017, 17:42:49

The seek bar should jump forward as you jump between fixations

user-2798d6 03 October, 2017, 17:43:04

It does, but it doesn't tell me how much time has elapsed

user-41f1bf 03 October, 2017, 17:43:47

Do you want the duration of a fixation?

user-2798d6 03 October, 2017, 17:44:18

No, I've got those. I'd like to know that fixation #45 for example happened at 38seconds into the recording

user-41f1bf 03 October, 2017, 17:45:55

Hum.. you need to subtract the start timestamp from the fixation timestamp

user-2798d6 03 October, 2017, 17:45:55

The fixation start time on the excel spreadsheet has a column for timestamp but it's stamps like "327598.1496"

user-41f1bf 03 October, 2017, 17:46:44

Yep, you need to subtract the first timestamp from it

papr 03 October, 2017, 17:46:58

The next release will show all information about the current fixation. This will also show its timestamp

user-2798d6 03 October, 2017, 17:47:42

Cool! Do you have an eta on that release?

user-41f1bf 03 October, 2017, 17:47:45

Doing this you will have time in seconds from the start of the recording

user-2798d6 03 October, 2017, 17:48:03

And in the meantime- how do I know that the timestamp of the very first fixation is also the actual start of the recording?

papr 03 October, 2017, 17:48:12

But @user-41f1bf is correct. You might have to subtract the start timestamp

papr 03 October, 2017, 17:48:34

The first gaze point has the start timestamp

papr 03 October, 2017, 17:48:56

Note: not the first fixation

user-41f1bf 03 October, 2017, 17:49:27

Yes, the first gaze timestamp, not the first fixation

user-2798d6 03 October, 2017, 17:50:14

Ok - and where can I find that first gaze stamp? All I see is fixations

papr 03 October, 2017, 17:50:54

Use the raw data exporter to export gaze data

user-41f1bf 03 October, 2017, 17:50:55

Exporting raw gaze data
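
A minimal sketch of the subtraction described above, run against Pupil Player's raw data export (the file and column names follow the standard export and should be treated as assumptions):

import csv

# The earliest gaze timestamp approximates the recording start.
with open('gaze_positions.csv') as f:
    start = min(float(row['timestamp']) for row in csv.DictReader(f))

with open('fixations.csv') as f:
    for row in csv.DictReader(f):
        # Seconds into the recording at which each fixation started.
        print(float(row['start_timestamp']) - start)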

user-2798d6 03 October, 2017, 17:51:15

Ok, I'll give that a try. Thank you so much!

papr 03 October, 2017, 17:51:24

No problem

user-2798d6 04 October, 2017, 17:55:08

Hello! I am looking at a fixation csv file and wondered what unit of measurement the "start timestamps" are in? For example, my first fixation start timestamp is 327598.1496

papr 04 October, 2017, 17:55:46

The unit is seconds, as are all timestamps in Pupil

user-2798d6 04 October, 2017, 17:56:21

Ok, so then my next question is why it doesn't start at 0?

papr 04 October, 2017, 17:57:54

We use the operating system's monotonic clock. The start time is arbitrary, but often corresponds to the boot of the computer
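
For reference, the same kind of clock can be read from Python (a sketch; only differences between readings are meaningful, not the absolute values):

import time

# Monotonic clock with an arbitrary epoch, often the last boot.
print(time.monotonic())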

user-2798d6 04 October, 2017, 17:59:14

gotcha. Thanks!

papr 04 October, 2017, 17:59:26

No problem

user-8a9ca1 05 October, 2017, 13:47:45

anyone have recommendations for marker sizes when using manual marker calibration?

papr 05 October, 2017, 13:48:09

@user-8a9ca1 this depends on the distance at which you want to show them

user-8a9ca1 05 October, 2017, 13:48:45

is there a particular ratio of distance to size I should be aiming for?

papr 05 October, 2017, 13:49:24

difficult to say, usually bigger is better than smaller

user-8a9ca1 05 October, 2017, 13:50:04

I printed out the provided markers and they filled an entire sheet of paper, that seems like it might be a bit too big 😃

papr 05 October, 2017, 13:50:07

at monitor distance, 4 cm in diameter is more than enough

user-8a9ca1 05 October, 2017, 13:50:55

okay, that should give me a baseline, we'll try a few different sizes based on that

papr 05 October, 2017, 13:51:15

what's important is that the rings are distinguishable in your world camera image

papr 05 October, 2017, 13:51:25

meaning a low resolution requires bigger markers

user-8a9ca1 05 October, 2017, 13:51:31

okay, cool

user-4fa09a 05 October, 2017, 22:34:49

Could someone point me in the direction for the code for how pupil syncs multiple cameras? I've got some projects where I need to sync some usb cameras.

user-41f1bf 06 October, 2017, 00:14:05

Take a look at the zmq zyre code

user-41f1bf 06 October, 2017, 00:14:26

Also, take a look at pupil groups

user-41f1bf 06 October, 2017, 00:31:40

Sorry, do you mean remote time sync? Or local sync?

user-4fa09a 06 October, 2017, 00:34:52

I guess local? The goal is to sync frames together for vision. Although I think I have a handle on how it should work (capture multiple cameras' frames with timestamps in separate processes/threads -> send to a master process/thread which will match up frames), a code example would be helpful

user-4fa09a 06 October, 2017, 00:35:35

Also the uvc trick to get multiple cameras on a single usb bus is also something I'm looking into

user-41f1bf 06 October, 2017, 00:35:59

For local, all you need is a central monotonic time "server"

user-41f1bf 06 October, 2017, 00:36:41

In Linux, they use clock_gettime

user-41f1bf 06 October, 2017, 00:38:38

Each frame receives a timestamp as it arrives from a camera, asynchronously
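
A minimal sketch of that pattern (hypothetical camera interface; one thread per camera stamps frames on arrival with a shared monotonic clock, and a consumer pairs the closest timestamps):

import queue
import threading
import time

frames = queue.Queue()

def capture_loop(cam_id, cap):
    # cap is any object with a blocking read(); hypothetical interface.
    while True:
        frame = cap.read()
        # Stamp on arrival with the shared monotonic clock.
        frames.put((cam_id, time.monotonic(), frame))

# Start one thread per camera, then pair frames whose timestamps are closest:
# threading.Thread(target=capture_loop, args=(0, cap0), daemon=True).start()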

user-41f1bf 06 October, 2017, 00:39:50

For remote, I don't know...

user-4fa09a 06 October, 2017, 00:40:24

ok, I think I can figure it out from that

user-4fa09a 06 October, 2017, 00:40:27

thanks a bunch!

user-41f1bf 06 October, 2017, 00:44:00

Pupil uvc cameras also support hardware timestamping. Right now I don't remember the details; please take a look at the docs

user-54a6a8 06 October, 2017, 14:57:36

Hello. I'm using the Vive add-on. Before I install the camera drivers (or after I uninstall them twice), they show up in control panel as Imaging devices> Pupil Cam1 ID0, and Pupil Cam1 ID1. After I install the drivers, they show up as libusbK USB Devices > Pupil Cam1 ID0, and Pupil Cam1 ID0. i.e., they are both ID0. See the attached image. The consequence of this is that when starting pupil_service, only one window opens. I can send a command over zmq to open the other one and I can use the GUI to select the source, so this isn't a deal breaker, I just thought it was odd and it might be nice if I can fix it.

Chat image

user-54a6a8 06 October, 2017, 15:21:14

Separate question: How much overhead does the eye process visualization take? I'm using pupil_service, and I don't really need to visualize two different high-res video streams while I'm running my experiment, so I'd be happy to close these windows (while gaze continues to be calculated) if it meant decreasing processor load at all.

papr 06 October, 2017, 15:50:18

Minimizing the eye windows will reduce the required load.

papr 06 October, 2017, 15:51:11

I am not sure, but Pupil Capture only opens one eye window by default. The same is possible with Service.

papr 06 October, 2017, 15:52:26

If you open both windows and restart service, then both should open as well.

user-54a6a8 06 October, 2017, 16:33:30

@papr, thank you. IMO pupil_service shouldn't open any windows at all... it's just a service. It should be up to the client to change any settings or visualize the images. In PSMoveService (for tracking PSMove controller pose from its IMU and PSEye camera), whenever there's a client that is requesting the video feed, we write the frames to a shared memory location and then any client that wants to view the frame can. It's still up to the service to do CV on the raw frames, and actually the service writes the CV result (ellipses, ROI boxes, etc) onto the frame in shared memory so the client gets that too.

Anyway, I see that when the eye process window is minimized that it's still processing data but the visualization is not updating so that's good enough for now.

user-3f29e6 07 October, 2017, 23:58:47

Hi guys, I'm trying to build Pupil from source on Linux (on Ubuntu 16.04). I'm currently stuck installing some of the dependencies:

user-3f29e6 07 October, 2017, 23:59:02

$ sudo pip3 install git+https://github.com/pupil-labs/pyuvc

user-3f29e6 08 October, 2017, 00:00:08

Error is attached:

user-3f29e6 08 October, 2017, 00:00:17

pupil_dep.txt

user-3f29e6 08 October, 2017, 00:03:11

Notice the error message: /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libturbojpeg.a(libturbojpeg_la-turbojpeg.o): relocation R_X86_64_32 against `.data' can not be used when making a shared object; recompile with -fPIC

All the dependencies installed fine before your custom libjpegturbo & uvc, but I'm stuck there. Has anyone seen this problem? From the error message perhaps I need to use a different version of GCC? (My default GCC is 5.4.0)

mpk 08 October, 2017, 07:25:20

Please delete libturbojpeg.a and try again.

user-3f29e6 08 October, 2017, 07:33:34

Thanks, I tried deleting that file and rebuilding libturbojpeg and libuvc and then pyuvc but I still get the same error when building pyuvc.

user-3f29e6 08 October, 2017, 07:36:12

I got it working! I noticed I also had another version of libturbojpeg.a at "/usr/lib/x86_64-linux-gnu/libturbojpeg.a" in my system folder. After deleting that file and rebuilding everything, now pyuvc builds 😃 Thanks for your help @mpk!

user-3f29e6 08 October, 2017, 10:58:26

I managed to build all the Linux dependencies, but when I run it I get a runtime crash due to protobuf version. I tried searching the web about it, I get the impression I need to uninstall my system libprotobuf and install a newer version then reconfigure & rebuild OpenCV?

user-3f29e6 08 October, 2017, 10:58:34

/Linux/PupilLabs/pupil/pupil_src $ python3 main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
[libprotobuf FATAL /Linux/opencv/3rdparty/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)

user-2798d6 08 October, 2017, 19:01:33

Hello, I was using the eye tracker to make a few recordings the other day, and something happened with 3 of the 4 videos I made. First, the audio didn't get recorded and then when I brought up the file in Player, it said there were a certain number of fixations, but going fixation by fixation, I was seeing scribbles instead of dots for the fixations. The exported video did the same thing. The time was running, but the video looked frozen. I'm wondering if something happened with my FPS? Would my computer capacity mess up FPS? I also had an external mic plugged in - I don't know if I was doing too much on my computer and that messed up the processing. I wanted to see if anyone else had run into this or if you have any ideas?

papr 08 October, 2017, 19:03:52

Hey @user-2798d6 If you want you can send me the recording and I will have a look at it tomorrow. I will test it with the new fixation detector and see if the old one is simply buggy

user-2798d6 08 October, 2017, 19:05:45

Sure thing! Is there a specific email address I should use or just the info email?

papr 08 October, 2017, 19:06:12

Just upload it to Google Drive and send me the link in a direct message 😃

papr 09 October, 2017, 09:54:06

@user-2798d6 Indeed, it looks like you recorded with an average FPS of 15. This is very likely due to high load. This also explains why you can see fixations even though there are gaze points displayed that do not belong to the fixation. Everything that is displayed (gaze, fixations, eye overlays) is shown during the world frame that they are closest to. If you only have few frames you end up with more data points per frame.

user-380f66 09 October, 2017, 10:50:27

Hi, I am just wondering how to record audio using pupil labs on a laptop? Thanks, Sara

wrp 09 October, 2017, 10:55:08

@user-380f66 you can load the Audio Capture plugin before recording and then select the audio source from within the plugin.

user-380f66 09 October, 2017, 10:55:48

Thanks! Could I get a link to this plugin? I searched but didn't see it.

wrp 09 October, 2017, 10:55:49

here is an example

Chat image

wrp 09 October, 2017, 10:57:36

You load new plugins from the plugin menu in the top of the UI in Pupil Capture's World window

Chat image

wrp 09 October, 2017, 10:58:28

BTW @user-380f66 audio syncing is only available on macOS and Linux at this time

user-380f66 09 October, 2017, 10:59:09

That is fine thanks, I have a mac. I can load it from the window. But do I need to download something first?

wrp 09 October, 2017, 10:59:50

No need for any other downloads other than Pupil software - the plugin is included in Pupil Capture. Please ensure that you're running the latest version of Pupil software: https://github.com/pupil-labs/pupil/releases/latest

user-380f66 09 October, 2017, 11:00:03

Ok, thanks.

wrp 09 October, 2017, 11:00:29

You're welcome 😄

user-380f66 09 October, 2017, 11:01:22

Also, I have a separate question. My external cameras and neuro equipment are all sampling at 25 fps. If I set the eye tracker to sample at this rate as well for a social cog study, would it cause problems with the sampling rate being too low?

user-380f66 09 October, 2017, 11:02:01

I would set both the world and eye cams to sample at this rate

wrp 09 October, 2017, 11:23:19

@user-380f66 I would recommend correlating time-stamped data post-hoc.

wrp 09 October, 2017, 11:23:35

Does your neuro equipment save timestamps?

user-0d187e 09 October, 2017, 11:40:23

Any idea why the python example below does not work with the new version of pupil service (V0.9.14): https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py

papr 09 October, 2017, 11:40:44

@user-0d187e Do you have an eye window open?

user-0d187e 09 October, 2017, 11:40:53

yes

papr 09 October, 2017, 11:41:21

Do you see a live image from the camera or a gray image?

user-0d187e 09 October, 2017, 11:42:52

I have the eye image but the loop breaks in its first round after topic = sub.recv_string()

papr 09 October, 2017, 11:43:40

So, do you mean that you get a Python exception? If yes, which one is it?

user-0d187e 09 October, 2017, 11:45:15

sorry I tested it again. not really! it just waits there. I think it's not connected to the service

user-0d187e 09 October, 2017, 11:45:35

How could I get the ip and port for the service

papr 09 October, 2017, 11:46:29

Ok, good to know. Open Pupil Capture and scroll down to the Pupil Remote menu. It should show you the IP and port to which you have to connect.

user-0d187e 09 October, 2017, 11:51:39

I killed all pupil services running in the background, and now it seems to be working, even though the port number is different from what was mentioned in Pupil Capture

user-0d187e 09 October, 2017, 11:52:10

btw, I accidentally closed the Remote menu. How can I have it shown again?!

user-0d187e 09 October, 2017, 11:52:38

sorry about these stupid questions!

papr 09 October, 2017, 11:52:47

It is a plugin. Use the "Open Plugin" selector at the top part of the main menu

papr 09 October, 2017, 11:54:22

Pupil Service is a stand-alone app. You can run Service and Capture in parallel but they will have two Pupil Remote instances that are independent of each other.

user-0d187e 09 October, 2017, 11:55:17

I see. How to get the ip/port of the service then?

papr 09 October, 2017, 11:56:21

Pupil Service forces port 50020. It does not start if it cannot allocate this port.
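
A minimal sketch of talking to Pupil Remote on that port (the SUB_PORT request is the same one used in the pupil-helpers examples):

import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')  # Pupil Remote; Service forces 50020

# Ask for the port of the IPC backbone's SUB socket.
req.send_string('SUB_PORT')
print('subscribe on tcp://127.0.0.1:' + req.recv_string())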

user-0d187e 09 October, 2017, 11:56:48

Alright. Thanks

user-0d187e 09 October, 2017, 11:57:51

The thing is that when I close the pupil service on my mac it doesn't kill the process

papr 09 October, 2017, 11:58:17

How do you close it? Via a notification?

user-0d187e 09 October, 2017, 11:58:47

😃 right click and quit

papr 09 October, 2017, 12:02:46

Mmh. I see that this is an intuitive way to close it, but the implementation does not support it. =/ We will work on a fix for that.

papr 09 October, 2017, 12:03:56

In the meantime you can use Pupil Capture. It is the same as Service but features an interface which you can use directly. Service is supposed to be controlled via network notifications.

user-0d187e 09 October, 2017, 12:05:40

pupil capture displays the images and that may slow down the processing, is that correct?

papr 09 October, 2017, 12:07:29

It uses exactly the same code to display eye images and calculate pupil positions as Service. You can minimize all windows to get the same performance as Service.

user-8250bf 09 October, 2017, 12:21:07

Hey! Do you have somewhere that I can read through what algorithms are used in the pupil system? I didn't really want to dig through the code, and I assume I'm missing something because the docs are great.

user-0d187e 09 October, 2017, 12:24:17

algorithms for what?

user-0d187e 09 October, 2017, 12:24:37

do you mean for pupil detection?

user-8250bf 09 October, 2017, 12:24:43

Sure

user-0d187e 09 October, 2017, 12:28:30

I cannot give you the exact answer as I don't know the code in detail. But I'm trying to help clarify your question. For pupil detection, part of the code detects the pupil blob in the image, part of it does ellipse fitting, and part of it does some geometrical calculations to estimate the 3D position of the eyeball. You need to explain what you need exactly!

user-8250bf 09 October, 2017, 12:31:32

To be honest, I was really just interested in the general flow from capture of image through to generated output and what underlying algorithms have been implemented. I have assumed they would either have been implemented from research papers, or written up as research papers themselves. The device obviously detects and tracks the pupil, and maybe some other things (I don't have the device). I'm interested in the flow of

user-8250bf 09 October, 2017, 12:31:56

Looking through the documentation on pupil-labs it looks very thorough, including how to cite the use of pupil itself but not what has been used to develop it.

user-8250bf 09 October, 2017, 12:32:04

So I thought I might be missing something obvious

papr 09 October, 2017, 12:36:20

The algorithm has been published as a research paper. https://arxiv.org/pdf/1405.0006.pdf

papr 09 October, 2017, 12:37:25

This is also the paper that you should cite.

user-0d187e 09 October, 2017, 14:44:25

How do I calibrate the cameras in order to have reliable 3d data (e.g. gaze_point_3d, gaze_normal_3d, and eye_center_3d)?

papr 09 October, 2017, 14:46:34

First off, the pupil detection needs to be good. Second, the 3d model should have converged. Third, the subject should have performed well during calibration.

user-0d187e 09 October, 2017, 14:47:20

of course. I mean world camera calibration to obtain the extrinsic camera parameters

user-0d187e 09 October, 2017, 14:47:35

if that's needed?!

user-8250bf 09 October, 2017, 14:47:58

That's great, thanks @papr

user-0d187e 09 October, 2017, 14:50:04

e.g. when I change the orientation of the world camera, it changes the center of the camera. Then how does the tracker know where the new center is when reporting the 3D gaze coordinates relative to that center?

papr 09 October, 2017, 14:53:51

You break the calibration by changing any of the physical relations of the cameras. You will need to recalibrate afterwards

user-0d187e 09 October, 2017, 14:55:27

so the tracker calibrates the camera as well during the gaze calibration?

user-0d187e 09 October, 2017, 14:56:52

But then how does it know the depth of the gaze point in 3d (gaze_point_3d.z) using only a one-time calibration at a single depth?

papr 09 October, 2017, 14:57:22

the 3d calibration mostly calibrates the physical relations of the cameras, yes. Additionally a few physiological parameters are fitted as well.

user-0d187e 09 October, 2017, 14:57:45

Unless the gaze_point_3d is always assumed to be inside the screen plane

papr 09 October, 2017, 14:57:55

The depth is only available using a binocular mapper. It uses the intersection of two mapped gaze normals as depth indicator
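
A sketch of that geometric idea (not necessarily Pupil's exact implementation): two mapped gaze normals rarely intersect exactly, so one takes the midpoint of the shortest segment between the two 3D lines.

import numpy as np

def nearest_intersection(p0, d0, p1, d1):
    """Midpoint of the shortest segment between lines p0+t*d0 and p1+s*d1."""
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    n = np.cross(d0, d1)  # direction of the connecting segment
    # Solve p0 + t*d0 + u*n = p1 + s*d1; raises LinAlgError for parallel lines.
    t, s, _ = np.linalg.solve(np.column_stack((d0, -d1, n)), p1 - p0)
    return ((p0 + t * d0) + (p1 + s * d1)) / 2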

user-0d187e 09 October, 2017, 14:57:56

if you know what i mean?

user-0d187e 09 October, 2017, 14:58:35

I need to think about it

user-0d187e 09 October, 2017, 14:58:38

thanks though

user-0d187e 09 October, 2017, 14:58:40

😃

papr 09 October, 2017, 14:59:19

No problem. Do not hesitate to post further questions as they come up 😃

user-0d187e 09 October, 2017, 14:59:32

sure

user-45d36e 10 October, 2017, 11:05:15

Hi, we are trying to drag copies of our data directories for each tracker into pupil player, but we are getting a notice from pupil player that it needs to update its recording format, then it boots us out of the software (quits without warning). We are just wondering what we can do to troubleshoot this so that we can view our files in pupil player? Thanks!

mpk 10 October, 2017, 11:05:57

Hi,

Can you run pupil from terminal and paste the output?

user-45d36e 10 October, 2017, 11:10:43

Sorry--we're not super tech savvy--can you explain to us how to test this?

mpk 10 October, 2017, 12:05:49

@Sara#2380 on Windows the terminal opens automatically with the app. On Mac and Linux just open the terminal app, type pupil_player, and hit enter.

papr 10 October, 2017, 12:14:27

@user-45d36e A side note: The upgrade window is supposed to close after the upgrade procedure. Afterwards a new window should open. This might take a while though if you have very long recordings.

user-45d36e 10 October, 2017, 13:07:35

Ok, thank you. On a side note, for our data that is not with the phones, can we access audio in the pupil capture playback? Thanks!

user-45d36e 10 October, 2017, 13:08:56

sorry. pupil player playback

papr 10 October, 2017, 13:09:10

Do you mean record audio when you say access audio? If yes, this feature is currently not available on Windows. The audio/video framework that we use under the hood, does not support it reliably.

user-45d36e 10 October, 2017, 13:09:31

Yes, I am able to record audio and get an mp4 file on the mac

user-45d36e 10 October, 2017, 13:09:56

I just can't figure out how to get the audio when I play it in Player, even though the audio file is in the directory

papr 10 October, 2017, 13:10:03

In order to have audio in Pupil Player you will have to record audio in Pupil Capture. You can do so using the Audio Capture plugin, that as mentioned above, does not work on Windows.

papr 10 October, 2017, 13:10:37

I see, you will need to export the video using the Video Exporter plugin to merge video and audio.

user-45d36e 10 October, 2017, 13:11:18

Ok, so I need to export the video while it is recording in pupil capture with that plugin? Or after it has saved in player?

papr 10 October, 2017, 13:12:19
  1. You open the audio capture plugin in Capture and make a recording.
  2. You open the recording in Player and export the video using the Video Exporter.
  3. The exported video should contain the audio.
user-45d36e 10 October, 2017, 13:17:56

Ok. So I recorded with audio and dragged the directory into Pupil Player. Then under audio I added the Video Export Launcher plugin. Then I clicked the down arrow on the menu on the left to export, and it told me it had exported. Then I looked for a new file and also tried to play the current one, but it didn't have audio.

user-45d36e 10 October, 2017, 13:18:02

I think I must be missing something!

user-45d36e 10 October, 2017, 13:18:47

Sorry, under analyzer

papr 10 October, 2017, 13:19:01

Did you open the video in Quicktime Player? It is known to have problems playing the audio of exported videos.

user-45d36e 10 October, 2017, 13:19:20

No, I was still trying to play it in pupil player

papr 10 October, 2017, 13:19:20

We recommend using the VLC media player to playback exported videos.

papr 10 October, 2017, 13:20:30

The exported video is just a video. It is not a valid Pupil Player recording. You will need a media player to playback audio. Audio playback from within Pupil Player is not supported.

user-45d36e 10 October, 2017, 13:20:53

Oh ok I see. Ok, I'll play around with this for a bit.

user-45d36e 10 October, 2017, 13:23:27

Ok, thanks. I got the audio with VLC. However, I think we may need to be able to view the videos with audio in quicktime as well. Is there a format we can convert our files to that might work better with quicktime player?

papr 10 October, 2017, 13:26:19

You can use VLC to convert them by following these instructions: https://en.softonic.com/articles/vlc-media-player-as-video-converter

papr 10 October, 2017, 13:26:47

But test your videos first in quicktime. Sometimes they work out-of-the-box

user-d27861 10 October, 2017, 13:33:17

Ok, thanks. Next issue (because we're on a roll today!). How do I sync 2 worldcam streams that are being recorded on the same macbook pro? I tried adding the time sync plugin, but it set both streams to clock master status and when I hit record on one stream, recording didn't start on the other stream.

papr 10 October, 2017, 13:35:50

Two things. First: To start the recordings simultaneously you will need to activate Pupil Groups and make sure that they are in the same group (which they should be by default). Second: One Capture instance is set to be master randomly. You can increase the bias of one instance to force it to be master.

user-d27861 10 October, 2017, 13:56:42

Hi papr, both ideas worked, thank you!

papr 10 October, 2017, 13:59:48

Nice! No problem.

user-f1d099 10 October, 2017, 22:16:43

Hello, would anyone happen to know the ratio Pupil Capture records in?

papr 10 October, 2017, 22:21:19

@user-f1d099 what do you mean by ratio? The video frame ratio?

user-f1d099 10 October, 2017, 22:24:49

The video scale ratio.

user-f1d099 10 October, 2017, 22:26:01

I am trying to work with the exported video in post and I need the ratio to scale it properly.

papr 10 October, 2017, 22:29:06

That depends on which resolution you recorded the video in. If I remember correctly, it is written down in the info.csv file within the recording folder.

papr 10 October, 2017, 22:30:15

Alternatively, you can look it up yourself by opening the video in the VLC media player and display the file's meta information.

user-f1d099 10 October, 2017, 22:33:31

I will take a look at the csv file first

user-f1d099 10 October, 2017, 22:33:39

thank you for the quick reply

user-45d36e 11 October, 2017, 14:40:47

Hi, I have been trying to drag the folders to the Pupil Player window. The recordings are no more than 6-8 minutes. I have been waiting for almost 5 minutes, but Pupil Player just exits itself. Do you know how long it usually takes for the folders to open in Pupil Player?

mpk 11 October, 2017, 14:41:44

@user-45d36e there seems to be a relevant issue. Maybe related? https://github.com/pupil-labs/pupil/issues/837

mpk 11 October, 2017, 14:42:57

@user-45d36e nevermind this is not related. Sorry.

mpk 11 October, 2017, 14:43:16

Can you send one of your recordings to info[at]pupil-labs.com We will have a look and report back.

user-b79cc8 13 October, 2017, 10:49:40

Hello, I am just wondering if it is possible to manually adjust the max_dispersion and min_duration settings for the 3D Pupil Angle report?

user-2798d6 13 October, 2017, 18:25:58

@papr, I just saw your message from Monday about my low FPS. Would that be from having so many things plugged into my computer USB ports plus trying to run the pupil lab stuff or is my computer just overall not powerful enough?

user-2798d6 13 October, 2017, 19:39:17

I am trying to get a timestamp for each fixation within an audio track. Previously, I subtracted the first gaze position time stamp from the first fixation timestamp to get that. However, in my current file, the first fixation occurred before the first gaze point. What should I do now to get the fixation time points?

user-d08045 16 October, 2017, 09:48:12

Hi all! Does anyone have a link to a bare-bones Unity example to calibrate and then just read 2D/3D gaze points? No recording etc, etc as in the example project that is available via gitHub.

papr 16 October, 2017, 09:51:15

@user-2798d6 It should not matter how many USB devices you have connected to your computer, but rather which other programs you are running in parallel.

papr 16 October, 2017, 09:54:29

@user-2798d6 Are you using the offline fixation detector to get the fixations or do you extract them from the pupil_data file?

papr 16 October, 2017, 09:55:13

@user-d08045 I would recommend asking in the 🥽 core-xr channel. It is dedicated to unity-related questions.

user-d08045 16 October, 2017, 10:00:18

@papr Thanks, will do!

user-9d900a 17 October, 2017, 07:33:24

Hi! I just bought the pupil lab kit and am trying to build pupil on Windows. I have encountered issues with the GLFW library: .. "Failed to load GLFW3 shared library".. Has anybody had that issue?

papr 17 October, 2017, 07:34:53

Hey @user-9d900a We recommend using the bundled app on Windows. You can download it here: https://github.com/pupil-labs/pupil/releases/tag/v0.9.14

user-9d900a 17 October, 2017, 07:46:59

Hey papr, thanks for your prompt reply. I will most likely need to develop plugins for pupil.. so I figured starting from the source code would be the way to go... What would you suggest?

papr 17 October, 2017, 07:49:57

For most use-cases it is enough to use Pupil Capture's network interface. There are only a few cases where it makes sense to write a plugin. Would you mind telling me a bit more about your use-cases? I could give you hints on where to look next.

user-9d900a 17 October, 2017, 07:51:46

I'm interested in building a packaged app that will allow the computation of "cognitive load" metrics from pupil dilation measures...

papr 17 October, 2017, 07:54:28

Are you planning on doing this in an online-fashion with realtime feedback or would it be sufficient to analyse the data after a recording?

user-9d900a 17 October, 2017, 07:55:20

Well, that's a spec we haven't decided on yet. Real-time would be amazing... Offline would be okay for a first version, I suppose.

papr 17 October, 2017, 07:56:46

The offline version could be implemented by making a recording, exporting the pupil data with Pupil Player, and then analysing the exported csv file with your application. In the case of online analytics, you can use Pupil Remote, Pupil Capture's network interface.
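
The offline route boils down to reading the exported CSV, e.g. (a sketch; the export path and the diameter column name follow the standard pupil_positions.csv export and should be treated as assumptions):

import csv

with open('exports/000/pupil_positions.csv') as f:
    for row in csv.DictReader(f):
        # Pupil dilation over time, the raw input for a load metric.
        print(float(row['timestamp']), float(row['diameter']))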

papr 17 October, 2017, 07:58:09

I would recommend reading our Development Overview ( https://docs.pupil-labs.com/#development-overview ) and the Interprocess and Network Communication chapter of our docs: https://docs.pupil-labs.com/#interprocess-and-network-communication

papr 17 October, 2017, 07:59:32

And in our Pupil Helpers repository you can find an example Python script on how to access pupil data via the network interface: https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py
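
For the online route, a minimal sketch of subscribing to pupil data (same pattern as the linked helper; msgpack decoding options vary slightly between library versions):

import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')  # Pupil Remote
req.send_string('SUB_PORT')
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:' + sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')  # pupil positions only

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    print(datum['timestamp'], datum['diameter'])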

user-9d900a 17 October, 2017, 08:57:59

Thanks @papr I'll look into this and may get back to you. Is Discord the best place to talk?

papr 17 October, 2017, 08:59:03

@user-9d900a Yes, it is. Do not hesitate to ask any questions.

user-2798d6 17 October, 2017, 16:12:53

@papr I've been using the offline fixation detector - Screen marker

papr 17 October, 2017, 16:13:40

@user-2798d6 Does that mean that you had Capture and Player running at the same time?

user-2798d6 17 October, 2017, 16:14:12

oh - no, I just have capture going while I'm doing recordings, then I open Player later

papr 17 October, 2017, 16:16:05

@user-2798d6 No, the issue is about the load during the recording. Edit: The question is if you had anything else running during the recording. Would you mind sharing your system details (cpu, memory, harddisk/ssd) if the former is not the case.

user-2798d6 17 October, 2017, 16:18:31

Are we talking about why the FPS was so low? I usually have capture going and then I also had iTunes running for some audio. Then I had an external mic plugged in and my computer was running the audio to bluetooth

user-2798d6 17 October, 2017, 16:20:06

I'm on a MacBook Air with 1.6 GHz processor, 8GB memory

papr 17 October, 2017, 16:22:42

Mmh, ok. Is it possible that you activated the smaller file, more CPU option in the recorder settings? This option transcodes the mjpeg stream to a h264 stream before writing it to the hard disk. This uses a lot of CPU and might be the reason for your low fps.

user-2798d6 17 October, 2017, 16:23:28

It's set on bigger file, less CPU

user-2798d6 17 October, 2017, 16:24:01

So the external mic/bluetooth/iTunes wouldn't be affecting it?

papr 17 October, 2017, 16:25:20

I cannot say it for sure. But I would recommend testing it without running the audio stuff.

user-2798d6 17 October, 2017, 16:33:22

Will do! One other question - I've been having some issues with the screen marker and the manual marker calibration lately. The eye cameras look like everything is set up well - I'm getting bright red pupil circles in different eye directions, but the minute I click the calibration button, it doesn't respond. Or the manual marker doesn't register.

user-2798d6 17 October, 2017, 16:35:43

And I also had the question about the first gaze timestamp when it comes after the first fixation timestamp. I'd like to be able to line up fixation times with my audio recording.

papr 17 October, 2017, 16:36:07

What do you mean by "it does not respond"? Does the white screen with the calibration marker appear? If no, does the whole app freeze? If yes, does the marker only show a red dot instead of a green one?

user-2798d6 17 October, 2017, 16:37:17

sorry-yes, the bullseye appears, but just shows a red dot. No blinking green at all even though the eyes look like they are being well detected

papr 17 October, 2017, 16:39:41

Make sure that the monitor is in the center of the scene video. If this is already the case, try moving the headset closer to the monitor. Also try to increase the monitor brightness to improve the marker's contrast.

user-2798d6 17 October, 2017, 16:40:00

I will try that!

user-45d36e 18 October, 2017, 10:24:31

Hi, how do you play videos recorded with Pupil Mobile on an Android device in Pupil Player?

user-6273aa 18 October, 2017, 19:52:26

Hello, I'm trying to build from source on a Mac. I keep failing the assert pyglui_version >= '1.9' in the world.py file. Looking through my packages, it indicates my version is installed as 1.10. Any help?

$ python3 main.py
MainProcess - [INFO] os_utils: Disabled idle sleep.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/Users/me/Code/pupil/pupil_src/launchables/world.py", line 98, in world
    assert pyglui_version >= '1.9', 'pyglui out of date, please upgrade to newest version'
AssertionError: pyglui out of date, please upgrade to newest version

MainProcess - [INFO] os_utils: Re-enabled idle sleep.

user-3a93aa 18 October, 2017, 19:59:54

Hi folks. My robotics lab is using the eye tracker in conjunction with controlling a robot through ROS. Does anyone have experience with ROS integration in general? I've found a few packages that listen on the ZMQ socket and publish the data as ROS topics, which is useful, but the particular issue I'm currently concerned about is time synchronization with ROS. It looks like pupil uses the linux monotonic clock, whereas ROS uses the wall clock by default. Does anyone have advice/ideas about best practices for rectifying the two? Thanks

papr 19 October, 2017, 06:05:54

@user-6273aa This happens because the assert compares version strings lexicographically, so '1.10' sorts before '1.9'. It will be fixed as soon as we merge an outstanding pull request. Sorry for the inconvenience. You could use this commit as a temporary solution until we have merged the pull request: https://github.com/pupil-labs/pyglui/commit/8c2bcce41b82403ba59720012ff04f3934b46d6f

mpk 19 October, 2017, 06:46:17

@user-3a93aa you can set the pupil clock to whatever timebase you want via pupil remote: https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/pupil_remote_control.py
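
A minimal sketch of that (the T command resets Pupil's clock, as shown in the linked helper; here it is set to the Unix wall clock that ROS uses by default):

import time
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')  # Pupil Remote

# Reset Pupil's timebase to the wall clock.
req.send_string('T {}'.format(time.time()))
print(req.recv_string())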

user-24362d 19 October, 2017, 15:37:05

Quick question: we're getting a unicodeDecodeError on a windows installation when we start the calibration. Suspect it might be an issue with windows itself, just wondering if anyone has any quick/dirty suggestions to try to get it working. Thanks!

user-6273aa 19 October, 2017, 15:50:23

Hello again, I'm having trouble getting IntelRealSense working as the world camera. pyrealsense installation is failing. I've tried the instructions on the pupil docs using pip3 install git+https://github.com/toinsson/pyrealsense and running pip3 install pyrealsense. Seems like it's not finding my rs.h. Running on OS X and it seems like support for RealSense is limited. Anyone have experience getting this working? Thanks!

papr 19 October, 2017, 15:51:49

@user-6273aa did you install librealsense? https://github.com/pupil-labs/librealsense

user-6273aa 19 October, 2017, 16:09:09

Thanks again! I got both installed correctly now. Looks like I missed a step from the librealsense installation compiling from source. It would be helpful to point the librealsense section in the developer docs to the pupil labs fork installation instructions. The docs are currently linked to the librealsense master which are not helpful at all. How do I launch the depth camera as the world camera now? I'm running python3 main.py from pupil_src. The pupil camera is showing by default and working, but I don't see a place to switch to the depth camera anywhere in the settings.

papr 19 October, 2017, 16:10:58

Yeah. We are working on improving the docs. The capture settings have a switch button called Preview Depth

user-6273aa 19 October, 2017, 16:15:37

Is that under the General Settings gear on the right side? I don't see the switch button.

papr 19 October, 2017, 16:16:09

No, it should show under the camera icon

user-24362d 19 October, 2017, 16:18:21

Another question - we're getting significant amounts of Turbojpeg jpeg2yuv errors. Is this typical? The only other reference I can find here was someone with a bad camera. The eye cameras occasionally don't load and occasionally have significant artifacts.

user-6273aa 19 October, 2017, 16:24:06

Hmm, I only see the options for the Pupil Cam1 ID0 controls (sensor settings and image post processing). Should the realsense camera show up in the UVC manager and do I need to change something there to see it? Thanks for the help.

papr 19 October, 2017, 16:28:11

You need to select the Realsense manager and afterwards the camera. Did it automatically select the eye video for the world window?

user-6273aa 19 October, 2017, 16:28:42

It automatically selected the eye video for the world window

user-6273aa 19 October, 2017, 16:29:15

Looking at the console I see an error: world - [ERROR] video_capture.uvc_backend: The selected camera is already in use or blocked.

papr 19 October, 2017, 16:31:44

I had mistakenly assumed that you had already selected the realsense camera.

papr 19 October, 2017, 16:35:29

It looks like you still have the UVC manager selected. Can you switch to the RealSense manager? Above the camera icon there should be a list-like icon. Click on that. The top selection field should list the RealSense manager

user-6273aa 19 October, 2017, 16:38:23

I only see Video File Source, Local USB, Test Image, Pupil Mobile

papr 19 October, 2017, 16:38:47

ok, then pyrealsense is not recognized correctly yet

papr 19 October, 2017, 16:39:52

Try to open a terminal, type python3 and afterwards import pyrealsense. This should hopefully give you a meaningful error message.

user-6273aa 19 October, 2017, 16:40:31
$ python3
Python 3.6.3 (default, Oct  4 2017, 06:09:15) 
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense
>>> 
user-6273aa 19 October, 2017, 16:40:47

No error

user-6273aa 19 October, 2017, 16:41:36
$ pip3 install pyrealsense
Requirement already satisfied: pyrealsense in /usr/local/lib/python3.6/site-packages
Requirement already satisfied: numpy in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: cython in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from pyrealsense)
papr 19 October, 2017, 16:43:01

Please try to use pip3 install -U git+https://github.com/pupil-labs/pyrealsense, restart Pupil Capture and check if the manager is still missing

user-6273aa 19 October, 2017, 16:44:20

It's working! Thanks papr 😃

user-6273aa 19 October, 2017, 16:44:34

Got the depth map

user-6273aa 19 October, 2017, 16:44:54

Appreciate the help

user-24362d 19 October, 2017, 16:53:44

We're getting significant amounts of Turbojpeg jpeg2yuv errors. Is this typical? The only other reference I can find here was someone with a bad camera. The eye cameras occasionally don't load and occasionally have significant artifacts.

capture.log

papr 19 October, 2017, 16:57:31

@user-6273aa No problem. @user-24362d This issue is not critical. I cannot remember what caused them right now. Did you have a look at the github issues?

user-24362d 19 October, 2017, 16:57:59

I did not, I will check there.

user-24362d 19 October, 2017, 16:58:12

There was mention here of it being related to bandwidth

papr 19 October, 2017, 16:58:45

Yes, but you are probably using a usb-c clip, am I right?

user-24362d 19 October, 2017, 17:00:33

yes

user-24362d 19 October, 2017, 17:01:22

so it's not an actual problem or losing any data, and we can use the small plugin listed on github to make it go away?

papr 19 October, 2017, 17:03:14

Mmh. I am not entirely sure. @mpk might be able to answer this one.

user-24362d 19 October, 2017, 17:04:56

in the actual recordings, the frames at the same time as those errors are appearing flat grey, no image data

user-24362d 19 October, 2017, 17:05:35

or partial image data is greyed out

user-24362d 19 October, 2017, 17:06:44

actually, no whole frames are missing, just partial ones

user-41f1bf 19 October, 2017, 18:46:06

Hi, please help me with issue #880

user-41f1bf 19 October, 2017, 18:46:49

For some reason my def init_ui is not being called

user-41f1bf 19 October, 2017, 18:47:34

I am trying to update my custom calibration plugin

papr 19 October, 2017, 18:50:29

@user-41f1bf did you update pyglui? The icon bar is missing. The screen shot does not look like the new version...

user-006924 19 October, 2017, 21:44:47

Hi everyone, is it technically possible to run two instances of pupil labs on one computer?

user-ecbbea 19 October, 2017, 21:53:51

Hello,

user-ecbbea 19 October, 2017, 21:54:05

Is it possible to push gaze_point_3d data over LSL?

wrp 19 October, 2017, 23:36:58

@user-006924 yes, this is possible, but if you have only 1 USB controller you are likely to saturate it. You may have to reduce resolutions in order to be able to run both instances on a less powerful machine.

wrp 19 October, 2017, 23:41:32

@user-ecbbea this would require an update to our plugin in the LSL repo: https://github.com/sccn/labstreaminglayer/blob/master/Apps/PupilLabs/README.md

user-ecbbea 20 October, 2017, 00:59:35

Thanks @wrp, I figured it out. Can I ask why both pupil and gaze data are pushed using the same function when the available fields are different? It makes it hard to push things that are available in one topic but not the other

user-5d12b0 20 October, 2017, 13:17:55

@user-ecbbea and @wrp, I'm one of the more active developers on LSL. I was planning on updating Pupil-LSL plugin in the near future. I haven't looked at the existing plugin in any detail yet, but please let me know how you would like it to work. e.g., I notice that in this channel it is often recommended to write a client app instead of writing a plugin. Is that applicable here too? Is a LSL_Pupil app preferred to the plugin?

papr 20 October, 2017, 13:29:42

@user-5d12b0 Welcome to our community channel! I am the developer of the current Pupil LSL relay plugin. Thank you for your interest in updating the current Pupil LSL integration.

papr 20 October, 2017, 13:48:51

@cboulay Yes, for most cases it is recommended to use our network api instead of writing a plugin. LSL might be one of the exceptions though. I think we would both benefit from a small chat to update each other on our systems' requirements and constraints. Please send an email to pp@pupil-labs.com if you are interested in that as well.

user-29e10a 23 October, 2017, 09:22:49

Hey guys, is it possible to change camera settings such as brightness, contrast etc. over the network, via messages? Setting the video recording path via network would be nice 😃

papr 23 October, 2017, 09:26:25

[email removed] Changing camera settings over the network is not supported for "Local USB" sources yet. But you can set the session name via the recording.should_start notification or by sending R <rec path> via Pupil Remote.

user-fe23df 23 October, 2017, 10:56:59

hi, what's the difference between eye image and world image?

user-fe23df 23 October, 2017, 10:57:36

I'm referring to the definitions of norm_pos_x and norm_pos_y

user-fe23df 23 October, 2017, 10:58:16

in pupil positions it's eye image while in gaze positions it's world image

user-c828f5 23 October, 2017, 15:27:20

Hello guys! I'm new to this channel so I'm not familiar with the protocol for asking quick questions. Here goes: 1. The right eye camera of the tracker is flipped. I can 'flip' the image by clicking in the viewer, but does this mean that the calibration will also use the 'flipped' image? 2. What are your experiences of recording scene+eyes+gaze along with other applications? Have you experienced bottlenecks? I intend on recording data from a 3D sensor in parallel.

papr 23 October, 2017, 15:28:51

Hey @user-c828f5 Welcome to the channel!

papr 23 October, 2017, 15:30:17

The flipping is just a question of visualization. The calibration will use the original image.

papr 23 October, 2017, 15:32:24

The answer to your second question depends on how much computation power your computer has. May I ask what type of 3D sensor we are talking about? Usually, this is a question of trying it out.

user-c828f5 23 October, 2017, 15:34:43

@papr Thanks so much for your quick response! It's great to hear that the calibration is not affected by the viz. I intend on using a ZED stereo camera, i.e, writing out 2 X 1080p images at 30Hz. Along with that, I intend on recording data from an IMU in parallel. I was curious about what strategies does pupil take when the system is overloaded. Does pupil drop frames? Stops execution?

papr 23 October, 2017, 15:37:26

@user-c828f5 Pupil Capture will drop frames if either the computation or writing the frames to disk takes too long. Did you hear about Pupil Mobile? You can connect the headset to an Android phone with USB-C and record locally on the phone. Calibration and detection can be done after the recording in Pupil Player.

user-c828f5 23 October, 2017, 15:41:01

Ah! I'm sorry, I did not research enough before asking that over here. I could try the following and update you/the group with this info: 1. Record Stereo images + IMU + Pupil data on a local machine. 2. In the event that fails, connect Pupil to an Android phone.

From my experience in the past, recording sensor data from 2 different systems causes timing offsets. Fingers crossed that option 1 works for me.

papr 23 October, 2017, 15:45:53

No worries! You are welcome to ask any question here. Our documentation is not perfect yet. We are happy to point you in the right direction. Concerning the potential time offset: Pupil Capture and Pupil Mobile use a time sync protocol to synchronize their clocks. It is open source and we have example scripts which you can use to synchronize time with 3rd-party systems as well. I think you would need to use time sync in both of your cases.

user-768a2c 23 October, 2017, 17:55:40

Hello, is there a channel for doing eye-pupil tracking and gaze detection on a raspberry pi? I have opencv built on there and a small IR camera to do the video capture. https://www.raspberrypi.org/products/pi-noir-camera-v2/

user-c828f5 23 October, 2017, 18:37:16

@papr Thanks a lot!

user-6ac66a 24 October, 2017, 09:07:52

Hello, I modified the script in 'pupil_remote\recv_world_video_frames.py' to get frames coming from the camera.

like this:

topic, msg = recv_from_sub()
if topic == 'frame.world':
    img = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
    img2 = np.frombuffer(msg['__raw_data__'][1], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
    cv2.imshow('test', img)
    cv2.imshow('test2', img2)
    cv2.waitKey(1)

But it doesn't work. How can I get frames from all cameras in the pupil headset?

user-6ac66a 24 October, 2017, 10:57:19

Can somebody help me with using recv_world_video_frames?

papr 24 October, 2017, 10:59:58

@user-6ac66a Did you start the frame publisher plugin in Pupil Capture?

user-6ac66a 24 October, 2017, 11:00:22

frame publisher?

user-6ac66a 24 October, 2017, 11:01:02

Um... I just ran the Pupil Capture app and recv_world_video_frames.py

user-6ac66a 24 October, 2017, 11:01:13

It works when I only want to take world video

user-6ac66a 24 October, 2017, 11:01:33

but.. um... I don't know how to get eye images...

papr 24 October, 2017, 11:04:46

You need to change the topic to which you subscribe and filter. I will look up the correct topic in a bit. I am currently on mobile and will notify you as soon as I have more information for you.

user-6ac66a 24 October, 2017, 11:05:59

Thank you

papr 24 October, 2017, 12:19:38

@user-6ac66a Hey, so receiving frames works in the following way: Frames are a data type as everything in Pupil, e.g. pupil and gaze positions. Each data type has its own "topic". To receive a specific data type you need to subscribe to its topic. The example script does that in line 40:

sub.setsockopt_string(zmq.SUBSCRIBE, 'frame.')

See https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/recv_world_video_frames.py#L40

papr 24 October, 2017, 12:23:01

With this line you will subscribe to all frames, world frames as well as eye frames. In line 64 you filter all incoming frames manually by calling if topic == 'frame.world':. This gives you all world frames. If you want to filter by a specific eye id you will need to change it to if topic == 'frame.eye.0':. Remove the if-statement to have access to all images.

papr 24 October, 2017, 12:24:34

Additionally, you will have to revert your changes. Specifically, img2 = np.frombuffer(msg['__raw_data__'][1], dtype=np.uint8).reshape(msg['height'], msg['width'], 3) will not work.

user-6ac66a 24 October, 2017, 12:25:11

Umm... like... change the last number 3 to 1?

papr 24 October, 2017, 12:27:26

The frames come message by message. msg['__raw_data__'] only has data for one frame. Therefore calling msg['__raw_data__'][1] will fail. You will have to remove all your code that is related to the img2 variable.

user-6ac66a 24 October, 2017, 13:14:07

Hmm.. then is there any route to get both eye images immediately?

papr 24 October, 2017, 13:16:29

No. The eye cameras are captured independently of each other by Pupil Capture. You can correlate the images using msg['timestamp'].

papr 24 October, 2017, 13:18:20

May I ask what your use case is? Maybe I can suggest a possible solution.

user-6ac66a 24 October, 2017, 13:23:52

case of... um...? You mean error code?

papr 24 October, 2017, 13:25:02

Use case in the sense of what your application is. Why do you need two eye images at the same exact time?

user-6ac66a 24 October, 2017, 13:26:57

um, what I want to do is just... 'get the raw world camera & the 2 eye camera images for each frame', and... with that data, I want to test my own algorithm for gaze estimation.

papr 24 October, 2017, 13:33:10

Ah, I understand. Well, this is not quite possible since we cannot force all cameras to give us a picture at the very same time. Instead we grab pictures as they come and tag them with their original timestamp. This way you are able to handle different frame rates as well.

In your case I would introduce three variables: recent_world, recent_eye0, recent_eye1. Each holds the most recent frame data for its specific topic. And then you do calculation depending on which data comes in: if it is an eye0 frame, you do detection on this frame and apply your mapping function onto the recent_world frame. Do you understand what I mean? Instead of assuming that you get the data at the same time, you can cache incoming data and check afterwards if anything updated.
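
A minimal sketch of that caching pattern (assuming recv_from_sub() and the frame decoding from the helper script above):

import numpy as np

recent = {'frame.world': None, 'frame.eye.0': None, 'frame.eye.1': None}

while True:
    topic, msg = recv_from_sub()
    if topic in recent:
        recent[topic] = np.frombuffer(
            msg['__raw_data__'][0], dtype=np.uint8
        ).reshape(msg['height'], msg['width'], 3)
    if topic.startswith('frame.eye') and recent['frame.world'] is not None:
        # Run pupil detection on recent[topic] and map the result
        # onto the most recent world frame.
        pass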

user-6ac66a 24 October, 2017, 13:39:11

hm... could you tell me which parts of the script (recv_world_video_frames) I should change? sorry for my bad english.

papr 24 October, 2017, 13:52:30

@user-6ac66a This would be the recommended changes: https://gist.github.com/papr/5840cdee64a4d20d5139b2dfce77dcd2/revisions

user-6ac66a 24 October, 2017, 14:10:15

Thanks. Running with that script, the error was:

Traceback (most recent call last):
  File "recv_world_video_frames.py", line 72, in <module>
    recent_eye0 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
ValueError: cannot reshape array of size 921600 into shape (640,640,3)

I think msg['height'] should be 480, so I rewrote that line as

recent_eye0 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(480, 640, 3)

user-6ac66a 24 October, 2017, 14:10:56

and I printed out the results, but it seems like the wrong img, I think.

papr 24 October, 2017, 14:15:05

Oh yes, that is a bug in an older Pupil Capture version. This has been fixed already. Just update to the newest version and it should work correctly.

papr 24 October, 2017, 14:16:44

Can you post a screenshot of the image that you think is wrong? Maybe I can tell if this is expected or not.

user-6ac66a 24 October, 2017, 14:17:09

sure, wait a second, please.

user-6ac66a 24 October, 2017, 14:21:10

oops, um... sorry, it seems to be the correct img; it was my mistake...

papr 24 October, 2017, 14:21:45

No worries! I am happy that it seems to work 👍

user-6ac66a 24 October, 2017, 14:22:14

The light of the lab went out and the image looked like 'something wrong'

user-6ac66a 24 October, 2017, 14:22:26

ALL DARK...

user-6ac66a 24 October, 2017, 14:24:01

I owe you big thanks. Thank you!

papr 24 October, 2017, 14:27:09

No problem 😃 Happy to help!

user-cf2773 24 October, 2017, 14:36:47

Hi everyone, any idea why Pupil Capture does not recognise the USBs for a particular user in Ubuntu? It works well for other users...

papr 24 October, 2017, 14:37:41

Hey @user-cf2773, this is due to a user permissions issue. You will need to add the user to a specific group called plugdev.
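
For reference, the usual fix on Ubuntu (assuming the standard plugdev group) is:

sudo usermod -a -G plugdev <username>

where <username> is a placeholder for the affected login; the user needs to log out and back in for the change to take effect.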

user-cf2773 24 October, 2017, 14:38:01

That's the info I was looking for, thanks a lot @papr !

papr 24 October, 2017, 14:38:20

No problem! 😃

user-768a2c 25 October, 2017, 02:36:27

sorry, my message might have gotten lost in the mix. Is there any chance I could run Pupil's software for eye gaze detection on the Raspberry Pi 3 (Python) using the RPi camera? https://www.raspberrypi.org/products/pi-noir-camera-v2/

wrp 25 October, 2017, 04:44:17

@user-768a2c I think you would be limited by the CPU power of the RPi. Even if you do successfully set up all dependencies, I fear that you would get a very low frame rate on the RPi.

wrp 25 October, 2017, 04:45:14

@user-768a2c what are you trying to achieve exactly?

user-6e1816 25 October, 2017, 06:57:07

@wrp How do you get the value of gaze_distance in the Vector_Gaze_Mapper (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_routines/gaze_mappers.py)? Is it just set by you, or calculated by some method?

user-ac5305 25 October, 2017, 11:33:06

what are the exact requirements for OpenGL support?

user-ac5305 25 October, 2017, 11:33:13

re: https://github.com/pupil-labs/pupil/issues/837

user-006924 25 October, 2017, 14:00:32

Hi everyone, does anybody get the error shown in the picture? Pupil Player (the latest version, on Windows 10) was working fine one minute, then suddenly closed and never opened again.

Chat image

papr 25 October, 2017, 14:02:53

Hey @user-006924 See the linked issue above. It is exactly about that problem.

user-006924 25 October, 2017, 14:10:43

@papr Thanks ^^

user-c828f5 26 October, 2017, 19:03:52

Hello all, how do I install more plugins into the Pupil Player app?

user-c828f5 26 October, 2017, 19:04:50

I want to extract blink events (offline). I'm working on my event-detection algorithm which does not handle blinks.

papr 26 October, 2017, 19:06:15

Hey @user-c828f5 We currently do not offer offline blink detection. It is on our todo list though. :)

papr 26 October, 2017, 19:07:06

Please feel free to implement it and to make a pull request. We are happy about each contribution.

user-c828f5 26 October, 2017, 19:09:03

@papr Ah! Yes, I plan on doing that at some point. There are tons of event-detection algorithms which are fairly easy to implement and could be added to your SDK.

I also noticed that while exporting, it links each gaze point to the closest video sample. How do I correlate it with the correct eye image?

papr 26 October, 2017, 19:11:36

The pupil positions have their own timestamps. You can use them to infer the blink event timestamps. And afterwards you correlate them using the same function as for gaze.

user-c828f5 26 October, 2017, 19:15:22

I'm sorry, I don't understand.

user-c828f5 26 October, 2017, 19:15:59

timestamp - timestamp of the source image frame.

This is the time (in ms since the app first started up) for each gaze sample.

user-c828f5 26 October, 2017, 19:16:45

I also have MP4 files for each eye at 120 fps. How do I correlate each gaze sample to its associated eye image?

papr 26 October, 2017, 19:23:42

Each frame gets its own timestamp, exactly. Pupil timestamps are their eye frame's timestamp. To correlate these timestamps to the scene frame timestamps, you will need to find the closest match. There is a method called correlate_data in player_methods.py which does that. Have a look at it to understand how gaze is correlated to the scene camera timestamps. Afterwards you can apply the same method to your blink events.
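
A minimal sketch of the nearest-match idea (not the actual correlate_data implementation, which additionally groups the data per world frame), assuming both timestamp arrays are sorted ascending:

import numpy as np

def closest_frame_indices(event_ts, frame_ts):
    # For each event timestamp, return the index of the closest frame timestamp.
    idx = np.searchsorted(frame_ts, event_ts)
    idx = np.clip(idx, 1, len(frame_ts) - 1)
    # Step back one index wherever the left neighbour is closer.
    closer_left = (event_ts - frame_ts[idx - 1]) < (frame_ts[idx] - event_ts)
    return idx - closer_left

closest_frame_indices(blink_ts, world_ts) would then give the scene frame index for each blink event.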

papr 26 October, 2017, 19:24:53

BTW, each video also has a timestamps file. It is a numpy file, so you can easily extract the timestamps on your own if necessary.
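
For example (a sketch, assuming the standard recording layout of this version):

import numpy as np
eye0_ts = np.load('eye0_timestamps.npy')  # one timestamp per eye0 video frame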

user-c828f5 26 October, 2017, 19:27:09

Got it.

Last question! Is there a way in which I can write out the absolute timestamp? In hours, minutes, seconds, ms, etc.? I need to correlate all of this data with other sources.

papr 26 October, 2017, 19:31:12

This is difficult to do since the absolute start time is not saved in a precise way. It is written in the info.csv file. You can take the difference between this time and the first timestamp to get a fixed time offset which you can use to calculate your dates.

The correct way to do this would be to synchronize the clocks between capture and your other sources before making the recording.
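
A sketch of the offset approach described above; the info.csv key names and date format are assumptions that may differ between versions, so check your own file:

import csv
import datetime
import numpy as np

with open('info.csv') as f:
    info = {row[0]: row[1] for row in csv.reader(f) if len(row) >= 2}

# Second-level precision only, as discussed above.
start = datetime.datetime.strptime(info['Start Date'] + ' ' + info['Start Time'],
                                   '%d.%m.%Y %H:%M:%S')

ts = np.load('world_timestamps.npy')
offset = start.timestamp() - ts[0]  # maps Pupil time onto Unix time

wall_clock = [datetime.datetime.fromtimestamp(offset + t) for t in ts]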

papr 26 October, 2017, 19:33:49

I need to leave now, but I will answer any questions when I am back. :) so feel free to ask for details.

user-c828f5 26 October, 2017, 22:01:35

@papr The problem with that approach is that the starting timestamp is written out in hours, minutes, and seconds only; it doesn't have a ms component. At worst, I'm looking at an error of 1000 ms, which is a lot. I could go in and modify the plugin... I'll update you with results.

As far as data writeout goes: I can record [email removed] along with Scene + Eyes + Gaze data on the same computer without any frame dropping.

papr 26 October, 2017, 22:02:58

Exactly. That is what I meant with not saved in a precise way. You will need to synchronize clocks before recording data.

user-c828f5 26 October, 2017, 23:33:28

Hmm, got it. I also have an off-the-shelf calibration routine, but I'll try to learn how to synchronize clocks. The good thing is that all sources are being recorded on the same system. Also, I noticed that you do have an online blink detector based on confidence values. I could perhaps tap into that and write out blink ON/OFF events.

papr 27 October, 2017, 11:50:28

@user-c828f5 The blink events are actually recorded in the pupil_data file. You can access them by deserializing the file yourself using msgpack or via a Player plugin and accessing g_pool.pupil_data['blinks'].
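
A minimal sketch of the msgpack route (assuming a recording made with this version; the fields inside each blink datum come from the online detector and may vary):

import msgpack

with open('pupil_data', 'rb') as f:
    pupil_data = msgpack.unpack(f, raw=False)

for blink in pupil_data.get('blinks', []):
    print(blink['timestamp'], blink.get('type'))  # e.g. onset/offset events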

user-2798d6 27 October, 2017, 17:23:38

Hello! I have a question about calibration. I am using the glasses on conductors while they look at a musical score on a music stand in front of them. I started by having them calibrate on the computer screen, but the area of the musical score and its distance from the person are a little different than the computer screen's, so I felt like I wasn't getting a good calibration. Now I'm using the manual marker calibration. The distance is a little less than 1.5 meters; will that be OK? And exactly how accurate can I expect the calibration to be when I'm watching on the screen in Capture? Should what they appear to be looking at match what they are actually looking at to within 1 degree of error?

user-b55d70 29 October, 2017, 01:43:05

Hello! I followed the instructions for installing the Windows dependencies. When I run main.py I get this error:

C:\work\pupil\pupil_src>python main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
cl : Command line warning D9025 : overriding '/W3' with '/w'
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.11.25503\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\world.py", line 113, in world
    import pupil_detectors
  File "C:\work\pupil\pupil_src\shared_modules\pupil_detectors\__init__.py", line 16, in <module>
    build_cpp_extension()
  File "C:\work\pupil\pupil_src\shared_modules\pupil_detectors\build.py", line 25, in build_cpp_extension
    ret = sp.check_output(build_cmd).decode(sys.stdout.encoding)
  File "C:\Python36\lib\subprocess.py", line 336, in check_output
    **kwargs).stdout
  File "C:\Python36\lib\subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['C:\\Python36\\python.exe', 'setup.py', 'install', '--install-lib=C:\\work\\pupil\\pupil_src\\shared_modules']' returned non-zero exit status 1.

I have tried reinstalling multiple times but ended up with the same error message. Any help would be appreciated!

user-8250bf 30 October, 2017, 10:40:41

Hey all, how's it going? Has anyone ever replaced the eye camera with an off-the-shelf webcam/camera? While it doesn't look like a huge amount of faff to buy the suggested webcam and alter the filters etc., I was wondering if anyone has found a decent off-the-shelf IR camera.

mpk 30 October, 2017, 12:49:04

@user-8250bf there is not really one. This is why we have this: https://pupil-labs.com/cart/?0_product=e120upgrade&0_qty=1

user-8250bf 30 October, 2017, 14:41:48

Ah that's cool, great idea! Thanks.

user-2798d6 30 October, 2017, 17:04:37

Reposting in case this got lost in the feed: I have a question about calibrating conductors who read a musical score at a little less than 1.5 meters, using the manual marker calibration; see my full question above. Will that distance be OK, and how accurate can I expect the calibration to be?

papr 30 October, 2017, 17:14:26

@user-2798d6 You can use the accuracy visualizer plugin to calculate the error of your calibration in degrees.

user-380f66 30 October, 2017, 18:37:26

Hi, I am just wondering how to improve the accuracy of an offline calibration. I have refined the range down to the right section of the video and it seems to be detecting the marker, but sometimes the fixation appears far away from the marker even though I know the person was looking at it. Thanks!

user-2798d6 30 October, 2017, 20:02:55

@papr: Thanks for the accuracy visualizer suggestion! So my question is: what should it look like if the calibration is good, and what does it look like if it's bad?

user-2798d6 30 October, 2017, 20:07:19

So if it looks like this...what does that mean?

Chat image

user-ef948c 30 October, 2017, 23:56:01

Hi Guys,

user-ef948c 30 October, 2017, 23:56:25

I'm working on a research project that wants to make use of the Pupil Labs eye tracker on an Android platform

user-ef948c 30 October, 2017, 23:56:41

does anyone know whether there is an SDK available to interface the eye tracker with Android, and how?

papr 31 October, 2017, 08:30:36

@user-ef948c We have an Android app called Pupil Mobile. You can connect the headset to USB-C phones and use the app to make recordings and to stream the video to your computer running Pupil Capture. https://play.google.com/store/apps/details?id=com.pupillabs.pupilmobile

user-ef948c 31 October, 2017, 08:31:43

@papr there's no way to do the tracking and calibration on the device itself?

papr 31 October, 2017, 08:34:00

@user-2798d6 the orange lines are the prediction errors. You want them as short as possible. Your case looks good but you should spread the markers more during the calibration.

papr 31 October, 2017, 08:35:52

@user-ef948c No. If you need live interaction, you can stream the video to Pupil Capture which does the detection/mapping and broadcasts the result via Capture's network interface.

user-ef948c 31 October, 2017, 08:37:34

Dang. We had bought the Pupil Labs tracker hoping it would be ideal for an embedded prosthetic solution not coupled to PC hardware. We'll have to reevaluate.

papr 31 October, 2017, 08:39:57

The cameras are connected via the UVC standard. You can implement your own app that does the detection on the phone. The algorithm is open source and has been published in a paper.
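
On a desktop you can test this idea with Pupil Labs' open source pyuvc bindings (a sketch, assuming a connected Pupil headset; on Android you would go through the Java UVC library papr links below):

import uvc

devices = uvc.device_list()            # list of dicts describing attached UVC cameras
cap = uvc.Capture(devices[0]['uid'])   # open the first camera
cap.frame_mode = (640, 480, 30)        # width, height, fps
frame = cap.get_frame_robust()
print(frame.img.shape)                 # raw image as a numpy array
cap.close()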

user-ef948c 31 October, 2017, 08:41:52

Roger. Thanks for the info. Will the Android app source be released at all? Just wondering

papr 31 October, 2017, 08:48:22

No, not as far as I know. But the UVC lib that we use is open source as well https://github.com/saki4510t/UVCCamera

user-0f3eb5 31 October, 2017, 11:49:35

Hi all, has anyone succeeded in aggregating data from multiple participants in order to create a heatmap that highlights the most common gaze locations?

user-0f3eb5 31 October, 2017, 11:50:07

If so, which programming language have you used to achieve that?

papr 31 October, 2017, 11:53:24

@user-0f3eb5 A common way would be to use the Python libraries numpy and matplotlib. Be aware that a gaze position in the field of view is, by itself, meaningless if you do not know what the subject was looking at.

papr 31 October, 2017, 11:54:23

You can solve this by using our surface tracker plugin and aggregating gaze positions within defined surfaces.
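
A rough sketch of the aggregation step in Python; the file and column names follow the surface tracker export of this version and the filename is a placeholder, so check your own export:

import csv
import numpy as np
import matplotlib.pyplot as plt

xs, ys = [], []
with open('gaze_positions_on_surface_Surface_1.csv') as f:
    for row in csv.DictReader(f):
        if row['on_srf'] == 'True':        # keep only gaze that hit the surface
            xs.append(float(row['x_norm']))
            ys.append(float(row['y_norm']))

# 2D histogram of normalized surface coordinates -> heatmap
heat, _, _ = np.histogram2d(ys, xs, bins=64, range=[[0, 1], [0, 1]])
plt.imshow(heat, origin='lower', extent=[0, 1, 0, 1], cmap='hot')
plt.colorbar()
plt.savefig('surface_heatmap.png')

To aggregate across participants, sum the per-recording histograms before plotting.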

user-0f3eb5 31 October, 2017, 11:56:47

@papr Thank you. I haven't learned Python (yet) but will dive into it. Do you know if anyone has used R to do it?

papr 31 October, 2017, 11:59:21

You can export the surface gaze into csv files and process them with R. It is not necessary to do this with Python.

user-0f3eb5 31 October, 2017, 12:02:30

Super cool. I haven't purchased the gear yet because I want to make sure I am able to aggregate the data, preferably with R (glad to hear it doesn't necessarily have to be Python). Do you know if there is example data (CSV exports) from different sessions of the same surface area that I can play around with, to see if I can make it happen?

papr 31 October, 2017, 12:19:29

We have an example data set that includes multiple surfaces but only includes data from one session. This should be enough to start with though.

1. Download and install Pupil Player https://github.com/pupil-labs/pupil/releases/tag/v0.9.14
2. Download the data set https://drive.google.com/uc?export=download&id=0Byap58sXjMVfZUhWbVRPWldEZm8
3. Start Pupil Player
4. Drop the unzipped dataset onto the Player window
5. See https://docs.pupil-labs.com/#pupil-player for an introduction into Pupil Player
6. Open the Offline Surface Tracker
7. Wait for the surfaces to be detected
8. Export the surface data by hitting the e key or clicking the ⬇ thumb on the left
9. The dataset folder should have a folder called export that includes the exported csv files

user-0f3eb5 31 October, 2017, 12:26:00

Thank you, I'll try it out!

papr 31 October, 2017, 12:26:55

Feel free to ask further questions in case you encounter any problems with the above points. 🙂

user-0f3eb5 31 October, 2017, 12:27:44

Thanks a lot!

user-0f3eb5 31 October, 2017, 13:22:35

I have downloaded Pupil Player and the demo data. I went through the steps and exported the Offline Surface Tracker data. The export also contains heatmap PNGs per surface; however, those PNGs are empty. What should I do to be able to export those heatmaps?

user-0f3eb5 31 October, 2017, 13:23:00

as you can see, all of the PNGs are zero bytes

Chat image

papr 31 October, 2017, 13:24:14

@user-0f3eb5 Were the csv files exported correctly? The empty heatmaps look like a bug. I will try to replicate this issue.

user-0f3eb5 31 October, 2017, 13:26:56

I have indicated here which files were empty; it includes some of the CSV files

Chat image

user-0f3eb5 31 October, 2017, 13:27:49

I will also redo the whole process from the start to see if I can replicate it

papr 31 October, 2017, 13:28:22

You do not need to re-download everything. Just start at step 3

user-0f3eb5 31 October, 2017, 13:28:31

Ok

user-0f3eb5 31 October, 2017, 13:29:37

Started from step 3, same outcome

user-0f3eb5 31 October, 2017, 13:33:00

Maybe I should mention that when I open the Offline Surface Tracker, I get this message:

user-0f3eb5 31 October, 2017, 13:33:46

Chat image

user-0f3eb5 31 October, 2017, 13:34:06

"No gaze on any surface for this section"

papr 31 October, 2017, 13:35:41

Ok, this explains a possible reason for the bug. No gaze -> no data for heatmaps -> no heatmaps

user-0f3eb5 31 October, 2017, 13:45:18

Uninstalled and reinstalled Pupil Player just in case; unfortunately, same result

papr 31 October, 2017, 13:45:57

Did you activate the "Offline Calibration" by any chance?

user-0f3eb5 31 October, 2017, 13:47:42

I don't remember doing that, but possibly I did when I was exploring...

papr 31 October, 2017, 13:48:32

Please click the "Restart with default settings" button in the General Settings menu and try to open only the Offline Surface Tracker

papr 31 October, 2017, 13:50:32

Oh, my bad. There is no such button in this version. Please select Gaze From Recording in the Data Source field

user-0f3eb5 31 October, 2017, 13:51:09

that worked!!!

user-0f3eb5 31 October, 2017, 13:51:24

🙏

papr 31 October, 2017, 13:51:32

So the heatmaps are exported correctly?

user-0f3eb5 31 October, 2017, 13:52:41

I think so, here's an example of how they are exported now, is this correct?

Chat image

papr 31 October, 2017, 13:53:37

Nice! The heatmaps in this version are rotated due to a bug which is fixed in the upcoming version

user-0f3eb5 31 October, 2017, 13:54:21

awesome, thanks! I can start merging heatmaps in R now 😃

user-0f3eb5 31 October, 2017, 13:58:44

I just noticed these are still empty in the export

Chat image

papr 31 October, 2017, 13:59:52

Could you upload the files to https://gist.github.com/ ?

user-0f3eb5 31 October, 2017, 14:04:28

not sure if this is what you wanted, but I've put the contents of the first file in here: https://gist.github.com/anonymous/a5f8612b265596300f00b1060c1ecc6e

papr 31 October, 2017, 14:05:56

Oh, my bad. Fixations are only mapped onto surfaces if they are available. Start the Offline Fixation Detector to detect fixations. Afterwards, repeat the export and these files should be filled.

user-0f3eb5 31 October, 2017, 14:12:58

"Gaze Position 2D Fixation Detector" worked 😃

user-2798d6 31 October, 2017, 15:23:29

Thanks @papr! One other question about the Accuracy Test - It gives numbers for angular accuracy and angular precision - what should those numbers be?

papr 31 October, 2017, 15:25:19

@user-2798d6 Do you ask for the meaning of these numbers or which numbers to expect?

papr 31 October, 2017, 15:27:06

You should be mostly interested in Angular Accuracy. Everything below 1 degree is very good. 2-3 degrees is OK, but not great. Above 3 degrees, you should redo the calibration.

user-2798d6 31 October, 2017, 16:32:26

Ok, good to know! Thank you!

user-0f3eb5 31 October, 2017, 21:14:37

@papr heatmaps with R work fine, thanks for your help! 😃

End of October archive