👁 core


papr 01 June, 2018, 05:27:22

@user-f1eba3 This indicates a broken jpeg frame. Did it appear once or consistently multiple times?

mpk 01 June, 2018, 06:42:17

@user-f1eba3 please try a different USB hub if this issue is frequent. An occasional message like this is ok.

user-af87c8 01 June, 2018, 07:59:49

@user-f1eba3 In my experience this can also happen if you have long USB cables or more than one USB hub in between. Two things help: external power to the USB hub & a pure USB 3.0 hub line (works even if the USB clip of the Pupil Labs ET is 2.0 and not USB-C). Changing the USB bus (not hub) might also help

user-af87c8 01 June, 2018, 14:43:12

@mpk + @papr Thanks for the fast fixes! Nice job 😃

user-dbbe31 05 June, 2018, 08:45:40

Hey Pupil, love the work you're doing. I was wondering if I could borrow an eye-tracking headset so that I could validate some designs. I'm currently doing an MSc in Medical Device Design, and having your headset in the surgical theater would be great; it would make sure that the UX is well designed for the surgeons. Any help on this would be greatly appreciated. I have tried to build your open source version of the glasses, but a lack of both time and skill is hampering my completion of it. All the best.

wrp 05 June, 2018, 09:04:21

Hi @user-dbbe31 - Thanks for the kind feedback! We also received your email and have replied to your question in that context.

user-dbbe31 05 June, 2018, 09:19:53

Great thanks for that.

user-006924 05 June, 2018, 17:12:48

Hi Pupil Labs, I bought the new cables that @wrp suggested to use my Google Pixel 2 phone with the HoloLens eye tracker add-on through Pupil Mobile. In the NDSI manager my phone is detected, but I get an error about time sync not being loaded. Please find attached an image of the error message.

Chat image

user-006924 05 June, 2018, 17:13:40

HoloLens, cellphone and computer are all on the same network

user-90270c 05 June, 2018, 17:22:46

Hi, how are Pupil Labs users managing the newer MacBook shift to all-Thunderbolt ports? We are in need of faster laptops and seeking recommendations. Many thanks!

papr 05 June, 2018, 18:11:25

@user-90270c just buy USB-C to USB-C cables. You should be fine using them in combination with the new MacBooks.

papr 05 June, 2018, 18:13:15

@user-006924 don't worry about it. You have time sync enabled.

user-006924 05 June, 2018, 18:14:16

@papr So that's not what's causing the issue of me not getting any feed, either on the cell phone or in Pupil Capture?

user-006924 05 June, 2018, 18:16:47

I basically made sure the three devices are on the same network, opened pupil mobile and chose pupil mobile in pupil capture, the phone and the laptop are correctly detected in pupil capture, but nothing is really happening. Am I missing a step?

papr 05 June, 2018, 18:23:23

@user-006924 did you select the camera from the drop down menu?

user-006924 05 June, 2018, 18:24:52

@papr you mean inside pupil mobile?

papr 05 June, 2018, 18:25:56

No, in capture. In the ndsi manager plugin menu

user-006924 05 June, 2018, 18:27:32

In Pupil Capture, when I open the Pupil Mobile app, my mobile automatically appears under remote host, but nothing happens when I click the drop down in front of "remote host" or "select to activate"

papr 05 June, 2018, 18:28:35

OK. Just to clarify: The select to activate menu is empty?

user-006924 05 June, 2018, 18:29:07

yes

papr 05 June, 2018, 18:29:54

Can you post a screen shot of the pupil Mobile app with the headset connected?

papr 05 June, 2018, 18:30:20

You have connected the headset to the phone already, correct?

user-006924 05 June, 2018, 18:31:46

If by connecting you mean using the USB-C cable to connect the headset and opening the Pupil Mobile app, yes I have. I'll send a screenshot right away.

papr 05 June, 2018, 18:32:09

OK that is what I meant 🙂

user-006924 05 June, 2018, 18:32:47

Chat image

user-006924 05 June, 2018, 18:33:19

The UVC source in Pupil Capture says: Ghost capture, capture initialization failed

papr 05 June, 2018, 18:33:41

OK, the cameras should appear there.

user-006924 05 June, 2018, 18:34:09

inside the app?

papr 05 June, 2018, 18:34:35

Mmh. Can you look for an option called OTG Storage in your phone's settings? And enable it if you find it?

papr 05 June, 2018, 18:34:48

Yes, the cameras should be listed within the app

user-006924 05 June, 2018, 18:35:10

I'll look for it right now

user-006924 05 June, 2018, 18:41:40

I can't find that option on my phone. I just googled Pixel 2 OTG and found that an adapter shipped with the phone should be used. I'll check with the adapter and see if the cameras appear. Thank you

papr 05 June, 2018, 18:43:15

@user-006924 OK, good luck. Otherwise I would not know what the problem could be

user-006924 05 June, 2018, 19:04:50

@papr Thanks for reminding me to check the OTG option. It's working 🙂

user-006924 05 June, 2018, 19:06:04

Just one question: when I'm recording in Pupil Mobile, if I want to have the pupil/gaze data, should I also record in Pupil Capture?

user-006924 05 June, 2018, 19:06:24

I haven't specified a saving location inside the app.

papr 05 June, 2018, 19:06:37

@user-006924 you can copy and import Pupil Mobile recordings in Player

user-006924 05 June, 2018, 19:07:08

Got it, thanks again.

user-c7a20e 05 June, 2018, 23:03:31

Has anyone run Pupil Capture on a Raspberry Pi 3? I'm still not sure how well it can handle camera image processing in real time

user-fcc645 06 June, 2018, 02:19:38

I am getting this error on Windows 10 64-bit when I install pyglui: "pyglui-1.22-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform." Please help

wrp 06 June, 2018, 02:20:36

@user-fcc645 are you using python 3.6?

user-fcc645 06 June, 2018, 02:20:42

yes

wrp 06 June, 2018, 02:21:04

Ok. We can look into this.

user-fcc645 06 June, 2018, 02:21:19

thanks

wrp 06 June, 2018, 02:21:31

Question - is there an absolute necessity to build Pupil from source on Windows?

user-fcc645 06 June, 2018, 02:21:54

yes please I am a windows user

wrp 06 June, 2018, 02:23:27

Understood. Just asking because the win dev environment is quite fragile. Also wanted to point out that one can do a lot with plugins using the app bundle

user-fcc645 06 June, 2018, 02:25:35

I would like to extend this work in the .NET Framework, having multiple streams from multiple devices and doing analysis

user-fcc645 06 June, 2018, 02:27:00

It would be great if you are aware of any such open source project and could direct me to it, cheers

user-fcc645 06 June, 2018, 02:27:43

"C# project"

wrp 06 June, 2018, 02:29:44

I will try to reproduce your pyglui install issue and get back to you

user-fcc645 06 June, 2018, 02:29:56

thanks

wrp 06 June, 2018, 03:01:44

@user-fcc645 let's migrate this discussion to the 💻 software-dev channel

user-fcc645 06 June, 2018, 03:19:08

sure

papr 06 June, 2018, 07:47:07

@user-c7a20e After working with a Raspberry Pi 3b+ on a different project, I would say that its computing power will not be sufficient.

papr 06 June, 2018, 07:48:37

It could be enough to make recordings without pupil detection/gaze estimation and simply record the world/eye videos and open the recording on a different computer. But I did not test that

user-c7a20e 06 June, 2018, 07:50:20

Thanks. Nah, that would be pointless for my project. That said, what are the processing power and RAM needed for two of the 120Hz cameras and real-time gaze estimation? From another abandoned project I also have an Odroid and an Asus Tinkerboard lying around that I could try, if you think there's a point.

papr 06 June, 2018, 07:51:05

Do you only need the pupil detection? Or do you need the gaze estimation pipeline as well?

user-c7a20e 06 June, 2018, 07:51:27

It's been a while since I've used the Pupil API, what was the difference?

papr 06 June, 2018, 07:53:17

Pupil detection just finds the position of the pupil within single eye video frames. Gaze estimation uses detected pupil positions and tries to estimate their associated gaze target within the scene camera

papr 06 June, 2018, 07:54:39

But I would be interested in your actual use-case as well.

user-c7a20e 06 June, 2018, 07:57:51

Here's the goal: I want to put two Pupil cams inside a custom OSVR headset I am working on. I would like to offload as much of the processing as possible from the host machine so it can use its processing resources and RAM on other things such as 3d rendering. A good option seems to be to put something like a Tinkerboard between the PC and the headset. Ideally a Tinkerboard-type board would handle all the Pupil processing and just stream position data to the PC. This would likely also minimize any latency.

user-c7a20e 06 June, 2018, 07:58:41

Since there's already a breakout box like the one used by the PSVR, a Tinkerboard-like board could fit inside of it.

papr 06 June, 2018, 08:05:17

I understand. It would require some/a lot of manual work, but you could create a slimmed-down version of Capture that does not have an interface at all and simply runs pyuvc, the pupil detectors and hmd calibration. I am speaking of using the components and combining them with a custom script

papr 06 June, 2018, 08:05:49

I do not know if the processing power would be enough. But this would be your best shot at getting this working

user-c7a20e 06 June, 2018, 08:06:17

I wasn't thinking of using Capture but the Python API. I think the GUI and preview add too much processing.

papr 06 June, 2018, 08:07:21

Yes, I agree. Unfortunately we do not have a separate Python API. As I said, you would need to extract the components by yourself and use them in a custom Python script

user-c7a20e 06 June, 2018, 08:08:20

What do you mean? Separate API for what?

papr 06 June, 2018, 08:09:27

"I wasnt thinking of using Capture but python API" there is no python api which you could simply use liek a different module, e.g. numpy

user-c7a20e 06 June, 2018, 09:18:45

Sure, but aren't the individual components exposed to Python as an API?

user-c7a20e 06 June, 2018, 09:21:23

(don't want to start an exe GUI program each time for a HMD with integrated pupil cams)

papr 06 June, 2018, 09:21:39

There is no such thing as import pupil_capture in Python

papr 06 June, 2018, 09:22:09

But you can clone the repository, add the shared_modules folder to your Python path, and import single plugins/files by hand

papr 06 June, 2018, 09:23:02

that is what I meant by "you would need to extract the components by yourself"
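
A minimal sketch of that approach, assuming the pupil repository is cloned to ~/pupil; the detector class name and its call signature vary between Pupil versions, so check shared_modules/pupil_detectors for the version you cloned:

```python
# Sketch only: paths and class names are assumptions, and the Cython/C++
# extensions in shared_modules must already be built for this to import.
import os
import sys

sys.path.append(os.path.expanduser("~/pupil/pupil_src/shared_modules"))

from pupil_detectors import Detector_2D  # Cython wrapper around the C++ detector

detector = Detector_2D()
# eye.py calls detector.detect(...) once per eye frame; mirroring that call
# in a custom script is the "headless" usage discussed here.
```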

user-c7a20e 06 June, 2018, 09:23:24

Sounds simple enough

user-c7a20e 06 June, 2018, 09:25:46

That said, can Capture be launched without a GUI?

papr 06 June, 2018, 09:25:54

No

user-c7a20e 06 June, 2018, 09:26:09

Now that's a problem.

papr 06 June, 2018, 09:26:45

That's why I am telling you that you need to extract the components by yourself 😉

user-c7a20e 06 June, 2018, 09:27:31

Wait so Capture is just using the Python modules in the shared_modules folder?

papr 06 June, 2018, 09:28:38

Capture is plugin based. Most of its functionality is separated into multiple plugins. These live in shared_modules

papr 06 June, 2018, 09:29:34

Be aware that these plugins expect to be called from within Capture. Directly calling them might not always work.

user-c7a20e 06 June, 2018, 09:32:39

Capture is pupil_src/main.py right?

papr 06 June, 2018, 09:33:11

main.py is just the launcher that starts the different processes. The different processes can be found in launchables/

user-c7a20e 06 June, 2018, 09:34:35

Yes, but that is what the Capture exe runs, right?

papr 06 June, 2018, 09:35:56

On Windows you need to execute run_capture.bat; on Linux/macOS you start Capture by running python3 main.py, yes

user-c7a20e 06 June, 2018, 09:37:08

Okay, thanks for guiding me through this. I could write a Python script to do what I want by stripping out the parts unnecessary for me and adding the code I need. Is this something others might find useful? I can't be the only one interested in using Pupil this way.

papr 06 June, 2018, 09:38:42

That is the way to go. I am very sure that there would be users interested in that. I would recommend putting this project into its own GitHub repository, e.g. Pupil Capture Headless. When it is done we can add it to our community repository

user-c7a20e 06 June, 2018, 09:38:59

OKay, thanks.

user-c7a20e 06 June, 2018, 09:40:31

Some unrelated questions: 1) Why do you not use a hot mirror adhesive film adhered to plexiglass, so you can have the eye tracking cameras facing the eyes directly with nothing obstructing the user's view? It sounds more practical, but maybe I am missing something

user-c7a20e 06 June, 2018, 09:42:39

2) Has research been done to show that a faster refresh rate at the cost of resolution is justified (only 200x200 at 200Hz)?

papr 06 June, 2018, 09:44:40

Pupil detection works well on 200x200. Higher framerates are very welcome in our field in order to be able to detect fast movements like microsaccades

user-c7a20e 06 June, 2018, 09:47:09

Have you checked the research paper by NVIDIA on camera latency for foveated rendering? They discuss saccades there, and it appears latency is not an issue because of the saccadic blindness phenomenon. Otherwise I think even 300Hz might not have been fast enough for any usage

user-e5aab7 06 June, 2018, 14:44:20

@user-c7a20e I would be interested in it. I am currently trying to get the NoIR module to work on the Pi. I am just trying to get gaze vector data from the tracking

user-c7a20e 06 June, 2018, 17:03:01

@user-e5aab7 Hi, in what exactly?

user-e5aab7 06 June, 2018, 17:34:22

@user-c7a20e from my understanding you are planning to strip the pupil source code down to just focus on capturing (eye tracking)?

user-e5aab7 06 June, 2018, 17:34:56

@user-c7a20e if so, then I would be interested in the stripped down version of pupil

user-156f2b 07 June, 2018, 06:47:06

https://docs.pupil-labs.com/#diy Anyone know the latency of Logitech C525/C512 and Microsoft HD-6000?

user-c7a20e 07 June, 2018, 12:52:52

@user-e5aab7 Yes, but don't expect a clean stripping. I will probably remove a bunch of stuff and add a bunch of comments

user-c23839 07 June, 2018, 15:49:37

Hello, does anyone know if there is a way to export data from a recording directory without using the Pupil Player GUI? I have been looking to use a command line prompt. If there is a Pupil resource on this already, I would appreciate direction to the source. I see a command in batch_exporter.py; however, this does not seem to function without the main.

papr 07 June, 2018, 15:51:18

No, there is no such possibility. But you can load the recording data directly. See file_methods.py on how to deserialize the pupil_data file
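
A minimal sketch of that, following the msgpack pattern used in file_methods.py (the unpack options differ slightly between msgpack versions; older ones use encoding='utf-8' instead of raw=False):

```python
import msgpack

# Path is an assumption: point this at a recording directory.
with open("/path/to/recording/pupil_data", "rb") as f:
    pupil_data = msgpack.unpack(f, raw=False)

# Typically a dict with keys such as 'pupil_positions', 'gaze_positions'
# and 'notifications'.
print(pupil_data.keys())
```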

user-c23839 07 June, 2018, 15:51:56

I will take a look there. Thanks!

user-c23839 07 June, 2018, 16:25:15

I can deserialize pupil_data. I am looking to deserialize additional plugin data files. Can I do this if the plugins are enabled at the point of recording in pupil capture? If so, where would this additional plugin data go? Is this data only generated at the point of export?

user-c23839 07 June, 2018, 16:28:13

I am thinking that exporting this specific data can only be done through a notification to the network

papr 07 June, 2018, 16:28:48

Which plugin are you talking about specifically?

papr 07 June, 2018, 16:29:07

Notifications are stored automatically during a recording if they include record=True

user-dc89dc 07 June, 2018, 16:40:50

Hello everyone! (I'm new here)

user-c23839 07 June, 2018, 16:44:00

No specific plugin. I would like to essentially write a .py script to automate the start/stop recording as well as the export of this data (for example, blink detection, saccade detection, etc) without requiring a human to interface with the GUI

papr 07 June, 2018, 16:44:23

Hey @user-dc89dc Welcome to the channel!

papr 07 June, 2018, 16:45:13

@user-c23839 start/stop recording can be done using Pupil Remote
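
A minimal sketch of that, assuming Capture is running locally with Pupil Remote on its default port 50020:

```python
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

remote.send_string("R")      # start a recording
print(remote.recv_string())  # Pupil Remote replies to every request

# ... run the trial ...

remote.send_string("r")      # stop the recording
print(remote.recv_string())
```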

papr 07 June, 2018, 16:46:09

blink data should be present in pupil_data. But we do not have saccade detection yet

user-dc89dc 07 June, 2018, 16:57:49

So I'm working on identifying cognitive states based on gaze data

user-dc89dc 07 June, 2018, 16:58:46

And the dataset is several minutes of a control experiment and a test experiment, labelled 0 and 1

user-dc89dc 07 June, 2018, 17:00:04

I've put together a binary classifier to predict on a new sequence of gaze data whether the person is not stressed (0) or stressed (1)

user-dc89dc 07 June, 2018, 17:01:39

My features are rolling statistics (over the preceding 5 seconds) of position, speed and fixation position - these all go into the training of an SVM

user-dc89dc 07 June, 2018, 17:02:53

Was wondering if anyone here has feature engineered for gaze data? Would love to hear thoughts!

user-dc89dc 07 June, 2018, 17:03:47

(apologies if this is the wrong channel for this?)

wrp 08 June, 2018, 04:25:43

@user-dc89dc thanks for sharing this information - there are likely quite a few individuals in the community interested in cognitive load. I will continue the discussion in the 🔬 research-publications channel

user-103621 08 June, 2018, 08:27:29

Hello everyone,

user-103621 08 June, 2018, 08:38:42

I wrote an inquiry to the [email removed] email address in German. Is that a problem?

user-b571eb 08 June, 2018, 08:43:04

Hi there. Today my one eye camera suddenly showed me two eyes. I always thought the other side was only cables without function. However, my original right eye image is now always upside down. Also, I was wondering how the data quality would be. Has this happened before? Thank you.

papr 08 June, 2018, 08:43:24

@user-103621 this should not be a problem

papr 08 June, 2018, 08:45:51

@user-b571eb Could you send a screenshot of what you mean by "one eye camera suddenly showed me two eyes"? The right eye image is flipped because the camera is physically flipped. This does not impact pupil detection. You can flip the visualization in the eye window's general settings

user-b571eb 08 June, 2018, 08:47:58

I didn’t expect to have both eye images.

papr 08 June, 2018, 08:48:14

You mean both eye windows?

user-b571eb 08 June, 2018, 08:49:57

Chat image

papr 08 June, 2018, 08:50:27

Yes, this is expected if you have a binocular headset 😃

papr 08 June, 2018, 08:50:58

One camera video feed for each eye

user-b571eb 08 June, 2018, 08:51:24

Oh. I thought I ordered one eye camera only. That’s why I am a bit shocked.

user-b571eb 08 June, 2018, 08:52:10

So the sampling rate is not 200 hertz?

papr 08 June, 2018, 08:52:46

If you click on the camera icon on the right side you will be able to change resolution and frame rate

user-b571eb 08 June, 2018, 08:52:56

Ok. Thanks

wrp 08 June, 2018, 08:53:53

@user-b571eb can you DM me your order id or the name associated with the order - so that I can confirm that we have shipped you the correct hardware?

user-b571eb 08 June, 2018, 08:54:51

I’ll do it per email.

wrp 08 June, 2018, 08:54:56

perfect, thanks

user-ea0ec0 08 June, 2018, 11:13:05

Hi, I'm just starting up with my newly purchased hardware on Windows 7 64-bit, and I'm seeing some strange behavior: when I try to run pupil_capture.exe it fails and then promptly deletes itself as well as all other executable files and many other files from the installation directory.

user-ea0ec0 08 June, 2018, 11:13:53

Has anyone encountered something like this and can help? Thanks!

user-ea0ec0 08 June, 2018, 11:16:07

correction - PupilDrvInst.exe remains, but other executable files are gone

papr 08 June, 2018, 11:16:43

Hey @user-ea0ec0 Windows 7 is not supported! You should be using Windows 10

user-ea0ec0 08 June, 2018, 11:17:20

OK, I'll check on windows 10. Thanks!

user-e5aab7 08 June, 2018, 16:11:00

Is there some way to open just a single eye GUI? When I run python3 main.py, it opens the world and 2 eye windows.

mpk 08 June, 2018, 16:18:06

@user-e5aab7 try running Pupil Service. It's like Capture except it does not run the world camera.

user-e5aab7 08 June, 2018, 16:21:28

@mpk I tried Pupil Service in the launchables directory by doing python3 service.py and didn't get any response

mpk 08 June, 2018, 16:29:16

You need to run python main.py service

mpk 08 June, 2018, 16:29:28

Service is the arg being parsed

user-e5aab7 08 June, 2018, 16:31:14

@mpk Ah yes it worked, thank you.

user-344dcd 08 June, 2018, 19:10:15

Hi all, I am trying to visualize the polyline of the gaze for longer than 5 seconds. I have found this modified version of the scanPath plugin, but it doesn't show the scanPath box once the plugin has been loaded. https://gist.github.com/willpatera/34fb0a7e82ea73e178569bbfc4a08158 Any hint?

wrp 09 June, 2018, 03:24:33

Hey @user-344dcd this is an old gist of mine and is deprecated. Please modify the plug-in from the pupil source code instead of this gist.

user-41f1bf 09 June, 2018, 18:07:51

@user-c23839 a command line tool to export data would be great.

user-41f1bf 09 June, 2018, 18:08:35

Right now there is no such thing as a "Pupil Player Service", only a "Pupil Capture Service"

user-29e10a 11 June, 2018, 11:22:18

Hi guys, is it possible that Pupil Capture needs about 20% more CPU on Windows in v1.7 than in v1.6? I'm running on a Core [email removed] with one VR application in parallel. I just checked the two versions: the total load of my machine on v1.6 is about 80%, and with v1.7 it's 100% (according to Windows Task Manager), and I'm also getting frame drops to below 100 fps on the two eye videos. Hmmm...

user-29e10a 11 June, 2018, 11:22:45

You tweaked the detector algorithms; maybe this consumes a lot more horsepower?

user-593589 11 June, 2018, 11:44:25

@user-41f1bf There is a possibility to export from command line - via batch_exporter main() method https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/batch_exporter.py

user-593589 11 June, 2018, 11:51:31

Can anybody tell me where I can receive live data in Pupil Capture of "gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z" from the 3d eye model? On one hand I think I can use zeromq, but where do I get the data within Pupil Capture using a plugin?

papr 11 June, 2018, 11:52:44

@user-593589 you can access data from the events dictionary passed to your plugin's recent_events function
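
A minimal user-plugin sketch of that; the 'gaze_positions' events key and the 'gaze_normals_3d' field are assumptions based on the 3d gaze datum format of this era, so print a whole datum to confirm the keys for your version:

```python
# Drop into ~/pupil_capture_settings/plugins/ so Capture picks it up.
from plugin import Plugin

class Gaze_Normal_Printer(Plugin):
    def recent_events(self, events):
        for datum in events.get("gaze_positions", []):
            normals = datum.get("gaze_normals_3d")  # dict keyed by eye id
            if normals:
                print(normals)
```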

user-4b5226 11 June, 2018, 11:53:15

We just received the kit for the Vive & are working on metric captures. Excited !

user-cf2773 11 June, 2018, 12:13:37

Hi @wrp, yes I have noticed it was referring to an old version of the pupil source code. I did try to modify the plug-in from the source code (i.e., changing max=5 to max=1000 here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/vis_scan_path.py#L110), but the trace still disappears after a few seconds. Do you know if I have to modify anything else? Thanks

user-103621 11 June, 2018, 13:01:59

Hey, I already wrote an email about this but maybe I could get a quick answer here. Which world camera is used in the high-speed model? Is it the same as in the DIY section? Since the FOV of the high-speed camera with the extra lenses is much higher than that of the 3D model, we are considering getting an additional tracker, but with the high-speed camera instead of the 3D camera. Or do you even use a different camera with your own specs? I need this information for my Master's thesis, so I'd appreciate any help.

user-c7a20e 11 June, 2018, 15:56:15

Hi, is it possible to change the MJPEG compression amount to control USB bandwidth, both with the Pupil cam and a webcam?

mpk 11 June, 2018, 15:59:21

@user-c7a20e it's not possible to change this. It's hardcoded into the embedded firmware.

mpk 11 June, 2018, 16:00:01

the underlying limitations of USB transfer are also more subtle than just fitting streams of size x into bandwidth below y.

user-c7a20e 11 June, 2018, 21:59:02

what would that be?

user-c6649c 12 June, 2018, 00:12:32

Hello, I'm trying to download the Pupil apps on my Windows 10 computer for my internship, but when I want to open Pupil Player or Pupil Capture it doesn't work; it just shuts off. Maybe you can help me? Thank you!

Chat image

mpk 12 June, 2018, 06:26:44

@user-c6649c you will need updated graphics drivers. This will fix the issue.

mpk 12 June, 2018, 06:28:07

@user-c7a20e have a look at this. It's a bit dated (rolling shutter does not apply anymore) https://github.com/pupil-labs/pupil-docs/blob/master/developer-docs/usb-bandwidth-sync.md

user-dc89dc 12 June, 2018, 15:32:41

Are there any smooth pursuit detection algorithms?

mpk 12 June, 2018, 15:35:50

@user-dc89dc not in Pupil but I think there are a few papers that could be implemented.

user-dc89dc 12 June, 2018, 15:38:15

Thanks @mpk - could you link me to a few perhaps?

user-b30222 12 June, 2018, 16:14:16

When running screen marker calibration from the latest bundled release, everything works fine. When I run the same thing from source (downloaded master today), it does not work. A single marker appears on screen but does not move. By adding a few print statements to the source code I can see that it is not detecting any on-screen markers. No errors are written to the console. Any ideas?

papr 12 June, 2018, 16:15:35

I cannot reproduce this issue when running from the current master.

papr 12 June, 2018, 16:16:03

Make sure that the markers are in the field of view. They should display a green dot in the center when detected properly

user-b30222 12 June, 2018, 16:36:03

Strange. No green dot. It works immediately when I run the bundled version. Perhaps a dependency issue? I am running from Anaconda and did not follow the install instructions exactly as in the docs. I will reinstall and see if it works.

user-6302ac 12 June, 2018, 16:43:41

Is there a way to do a calibration without a world camera based on known real-world coordinates? For example I know how to map pixel positions on my screens into real angles in degrees, can I use that to do calibration without needing a separate world cam? Sorry if this is already in the docs, but I couldn't find it.

papr 12 June, 2018, 17:24:47

@user-6302ac You have to use the HMD calibration for that. Be aware that this assumes your headset is fixed within the real world

user-a6a5f2 12 June, 2018, 20:12:18

What is the FOV of the eye cameras, both the older 120hz and new 200hz versions?

user-c7a20e 13 June, 2018, 09:47:34

@mpk Thanks for the link. This raises more questions for me. Since I only need two eye cameras and have 2 separate USB controllers on my target PC, can I tell Pupil to use uncompressed/raw frames instead of MJPEG? I believe this would improve the overall latency by a few ms by eliminating or greatly reducing the latency introduced by encoding/decoding (compressing/uncompressing). It would, I'm guessing, require capturing in grayscale rather than RGB to reduce the size of each frame by 3x.

user-c7a20e 13 June, 2018, 09:50:51

Another question: wouldn't it be a good idea to try to sync the two eye cameras as closely as possible by modulating the left/right eye IR LEDs at the same time, comparing the frames, and turning one of the cameras off/on until the LED illumination can be seen in both the left and right eye frames?

papr 13 June, 2018, 09:52:09

I don't think the firmware allows disabling compression

user-af87c8 13 June, 2018, 13:12:06

@user-dc89dc @mpk I would be interested in smooth pursuit detection algorithms too.

papr 13 June, 2018, 13:16:15

We will look at adapting the implementation from this paper https://www.nature.com/articles/s41598-017-17983-x.pdf

user-dc89dc 13 June, 2018, 19:36:07

Interesting, I was looking at that today too haha

user-dc89dc 13 June, 2018, 19:38:49

@papr do you guys plan on working off of his implementation too? https://gitlab.com/nslr/nslr-hmm/blob/master/nslr_hmm.py

papr 13 June, 2018, 19:41:29

Yes, we still need to evaluate it but we would like to use as much as possible.

user-73ee8f 13 June, 2018, 20:16:03

Does pupil work with Ubuntu 18?

wrp 13 June, 2018, 20:30:05

@user-73ee8f yes Pupil does run on Ubuntu 18.04

user-d180da 13 June, 2018, 23:44:18

Mindwave mobile & pupil lab can be synchronized together?

user-d180da 13 June, 2018, 23:45:10

http://www.alchemytech.com.tw/product_info.php?product=0

user-d180da 13 June, 2018, 23:45:54

@user-e5aab7?

user-37d99a 14 June, 2018, 13:10:36

Just got the headset and I am looking at the raw data - are there any plugins or analysis software that can take the CSV file and easily measure saccades etc.?

user-41f1bf 14 June, 2018, 17:38:42

@user-d180da Pupil is open, so you can sync anything with Pupil timestamps. It really depends on "Mindwave" being flexible enough to be adapted to generate correlated timestamps.

user-11dbde 15 June, 2018, 17:37:14

Hi guys. When do you expect to have the pupil-labs calibration/tracking working properly on Hololens?

user-d3a1b6 15 June, 2018, 18:45:12

@papr FYI: regarding https://www.nature.com/articles/s41598-017-17983-x.pdf, the nslr_hmm functions appear to take degrees of horizontal/vertical rotation as input, but Pupil Player exports normalized XY gaze positions. Because of this I haven't been able to get NSLR working with Pupil Labs output

papr 15 June, 2018, 18:49:39

@user-d3a1b6 This is not a problem. We can convert x/y to degrees within the world camera if the world camera's intrinsics are provided. This is the case for Pupil Cams or if you use the Camera Intrinsics Estimation plugin

user-d3a1b6 15 June, 2018, 18:50:25

@papr thanks for the tip, I will dig a little deeper then

papr 15 June, 2018, 18:51:47

Or you could simply use the 3d pupil vector to calculate angular changes

papr 15 June, 2018, 18:52:10

It would be nicer if the event detection was independent of the gaze mapping

papr 15 June, 2018, 18:53:34

pupil_datum['circle_3d']['normal'] will give you the 3d vector relative to the eye camera
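
A small sketch of computing the angular change between two such normals:

```python
import numpy as np

def angle_deg(n1, n2):
    # n1, n2: 3d vectors from pupil_datum['circle_3d']['normal']
    n1, n2 = np.asarray(n1), np.asarray(n2)
    cos = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```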

user-d3a1b6 15 June, 2018, 19:29:33

are those data in the pupil_positions.csv output from Pupil Player export? I see columns circle_3d_normal_x,y,z with alternating rows for each eye

papr 15 June, 2018, 19:29:46

correct

papr 15 June, 2018, 19:31:09

make sure to filter for high confidence values; low confidence pupil data is very noisy
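
For a Player export, that filter could look like this (the export path is an assumption):

```python
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
df = df[df["confidence"] > 0.8]  # rule of thumb mentioned above
normals = df[["circle_3d_normal_x", "circle_3d_normal_y", "circle_3d_normal_z"]]
```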

user-f1eba3 15 June, 2018, 22:38:04

Hello Pupil Labs, can I use pictures from your website for a presentation (mentioning of course the origin), like the one below?

user-f1eba3 15 June, 2018, 22:38:06

Chat image

papr 16 June, 2018, 07:20:03

Yes, such a screenshot is fine

papr 16 June, 2018, 07:20:43

Alternatively you could make a screenshot of the store page ;)

user-525392 18 June, 2018, 01:11:02

Hey guys, I am trying to build Pupil and it seems that I have a problem with boost. However, the online docs say that when checking out the code there should be a directory called capture in pupil_src, but it does not exist

Chat image

user-525392 18 June, 2018, 01:13:24

Also, .detector3d is looking for this boost lib: boost_python3-vc140-mt-1_65_1. However, this is an MSVC 14 boost build, not MSVC 14.1

user-525392 18 June, 2018, 01:15:01

I am on Windows, just for info

user-525392 18 June, 2018, 01:17:27

Chat image

user-525392 18 June, 2018, 01:20:40

In general I am trying to stream the raw gaze data in real time; that's why I am trying to build from source. Is there a way to do this without having to build?

user-8e4642 18 June, 2018, 04:19:39

Hi. I have problems with the offline fixation detector plugin. In some recordings it is not able to detect fixations. It starts detecting fixations, but at some point it stops and Pupil Player closes suddenly. I work with pupil_v1.7-42-7ce62c8_windows_x64. Does anybody know how I can solve this and get the fixations?

mpk 18 June, 2018, 06:41:21

@user-8e4642 can you share the output from the logfile when this happens? Or share this recording with data@pupil-labs.com ?

mpk 18 June, 2018, 06:43:06

@user-525392 if you want to get access to raw data no need to build from source. Just access the data bus via a simple script like this: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
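
A condensed version of that pattern, assuming the default Pupil Remote address:

```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

remote.send_string("SUB_PORT")       # ask Pupil Remote for the IPC SUB port
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")  # topic prefix, e.g. "pupil." or "gaze"

while True:
    topic, payload = sub.recv_multipart()
    print(topic, msgpack.loads(payload, raw=False))
```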

user-8779ef 18 June, 2018, 12:39:47

Hey folks - having trouble updating pyndsi. Is this a known issue, or am I just special?

user-8779ef 18 June, 2018, 12:40:11

I seem to have an error when building the wheel.

user-8779ef 18 June, 2018, 12:45:22

Nevermind, I found the relevant issue: https://github.com/pupil-labs/pyndsi/issues/35

user-8779ef 18 June, 2018, 12:54:34

...unfortunately, this is a real problem for Mac users. I can't seem to compile 😦 The issue, as pablo identified it, is that the build requires C++11. I'm not quite sure how to control the pip compiler ... working on it, but if anyone has a method, let me know!

user-8779ef 18 June, 2018, 12:54:55

( this MAY be an issue related to the use of Anaconda)

user-8779ef 18 June, 2018, 13:01:52

Ok, here's the fix: CFLAGS=-stdlib=libc++ pip3 install git+https://github.com/pupil-labs/pyndsi

user-73ee8f 18 June, 2018, 15:18:00

Does Pupil Labs know how much of Pupil's CPU usage is due to the GUI? I'm slowly removing features right now, but before I continue I wanted to see if it's even worth going through. I am trying to make a lighter-weight version that could potentially run on a Pi and capture gaze data, but currently Pupil is too heavy for a Pi, and I was hoping a stripped-down version could run more smoothly

mpk 18 June, 2018, 15:42:42

@user-73ee8f I think it's less than 1% for the GUI. Drawing the video frames costs a bit more; you should turn that off. In the world window most of the cost is MJPEG decompression and gaze mapping.

mpk 18 June, 2018, 15:43:00

In the eye process it's ~95% pupil detection.

user-c351d6 18 June, 2018, 15:53:33

Hi guys, is there a way to view and modify the surfaces_definitions file directly? Today we set up a pilot experiment and we found a bug which caused surfaces to be tracked but not displayed in the surface tracking plugin. This leads to the problem that you can't delete the surface anymore.

user-8779ef 18 June, 2018, 16:10:19

So, I'm now getting an error:

/Users/gjdiaz/anaconda/envs/py36/bin/python /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/main.py player
MainProcess - [INFO] os_utils: Disabled idle sleep.
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/launchables/player.py", line 583, in player_drop
    from player_methods import is_pupil_rec_dir, update_recording_to_recent
  File "/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/player_methods.py", line 17, in <module>
    import av
  File "/Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/__init__.py", line 9, in <module>
    from av._core import time_base, pyav_version as version
ImportError: dlopen(/Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/_core.cpython-36m-darwin.so, 2): Library not [email removed]
  Referenced from: /Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/_core.cpython-36m-darwin.so
  Reason: image not found

MainProcess - [INFO] os_utils: Re-enabled idle sleep.

Process finished with exit code 0

user-8779ef 18 June, 2018, 16:11:20

Lib AV is installed. Any thoughts?

user-8779ef 18 June, 2018, 16:13:39

...sorry, pyAV

user-8779ef 18 June, 2018, 16:55:49

Sadly, libavformat appears to be related to ffmpeg.

papr 18 June, 2018, 16:56:50

ffmpeg is just an interface for libav*, so yes it is related

papr 18 June, 2018, 16:57:29

Looks like pyav was not installed correctly. Try rebuilding it

user-8779ef 18 June, 2018, 17:02:35

Yeah, no luck.

user-8779ef 18 June, 2018, 17:02:56

I have successfully installed av-0.4.1.dev0...but the message persists.

papr 18 June, 2018, 17:04:07

What about installing an older version?

user-8779ef 18 June, 2018, 17:07:19

...I can try that.

user-8779ef 18 June, 2018, 17:09:57

pip3 install git+https://github.com/pupil-labs/PyAV/releases/tag/v0.3.1 ?

user-8779ef 18 June, 2018, 17:10:47

Obviously, that doesn't work. Working on how to get at an older version through pip...

papr 18 June, 2018, 17:12:13

git clone <url>
cd pyav
git checkout <old_version>
pip3 install .

user-8779ef 18 June, 2018, 17:18:10

Thanks @papr 😃

user-8e4642 18 June, 2018, 18:59:23

@mpk This is a screenshot of what happens just before Pupil Player stops.

Chat image

papr 18 June, 2018, 19:06:38

I think I know the error. Could you please share the data set so that I can investigate its origin?

user-8e4642 18 June, 2018, 19:14:34

@papr I will send the data set to [email removed] Is that OK?

papr 18 June, 2018, 19:32:01

Yes please

user-73ee8f 18 June, 2018, 20:33:52

@mpk would it be feasible to rewrite eye.py in C/C++ to make the pupil detection run smoother on a Pi? Or would that require rewriting all of Pupil in C++? I know Pupil was written in C++ in some places where speed was needed, and I was just wondering if rewriting where the most CPU is used would help a Pi process it better?

papr 18 June, 2018, 20:37:08

@user-73ee8f the pupil detection is already written in C. Everything else in eye.py is UI code. If you want to spend time optimizing, you will have to look into the pupil_detectors module

user-c351d6 19 June, 2018, 11:18:17

@mpk is there a way to view and modify the surfaces_definitions file directly? Today we set up a pilot experiment and found a bug which caused surfaces to be tracked but not displayed in the surface tracking plugin's list of surfaces. This leads to the problem that you can't delete/rename the surface anymore. You could also copy surface definitions to create AOIs which require the same markers more quickly.

papr 19 June, 2018, 12:08:47

@user-c351d6 This is a msgpack-encoded file. Be aware that bad formatting of that file can cause Capture to crash! I would recommend deleting the file and defining your surfaces again.
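
For inspection only, the file can be read with the same msgpack pattern as file_methods.py; work on a copy, per the warning above:

```python
import msgpack

# surface_definitions lives in the pupil_capture_settings folder.
with open("surface_definitions", "rb") as f:
    definitions = msgpack.unpack(f, raw=False)

print(definitions)
```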

user-c351d6 19 June, 2018, 12:26:43

@papr Ok, I understand this. However, this bug occurs quite often and we are working with a lot of surfaces. Redefining the surfaces costs a lot of time (and causes stress). Defining surfaces with a text editor would be great and also a fast solution. We are able to work around the problem by making backups of the surfaces all the time, but you may want to add this to your list of bugs.

mpk 19 June, 2018, 12:27:10

what about defining the surfaces once and copying the def file into the recordings?

user-c351d6 19 June, 2018, 12:39:31

@mpk Thank you for the advice, we've already planned to do this when we run the experiment. Defining surfaces which lie over the same marker area is especially not easy using the AR interface, because you get quite a lot of menus within the picture, and it seems like Pupil Player / Capture crash more often when having many surfaces in the configuration. Creating the def file once is actually not that easy.

papr 19 June, 2018, 13:35:28

@user-c351d6 what is your reason for defining multiple surfaces over the same markers? Why not define a single surface instead?

user-8779ef 19 June, 2018, 13:51:16

@papr When attempting to install a previous version of PyAV, I get the following error:

user-8779ef 19 June, 2018, 13:51:19

gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/gjdiaz/anaconda3/include -arch x86_64 -I/Users/gjdiaz/anaconda3/include -arch x86_64 -Ibuild/temp.macosx-10.7-x86_64-3.6/include -I/usr/local/Cellar/ffmpeg/4.0.1/include -I/Users/gjdiaz/anaconda3/include/python3.6m -Iinclude -I/Users/gjdiaz/anaconda3/include/python3.6m -Ibuild/temp.macosx-10.7-x86_64-3.6/include -c src/av/plane.c -o build/temp.macosx-10.7-x86_64-3.6/src/av/plane.o
src/av/plane.c:596:10: fatal error: 'libavfilter/avfiltergraph.h' file not found
#include "libavfilter/avfiltergraph.h"

user-8779ef 19 June, 2018, 13:51:50

This is true of any version I've checked, and those extend from the 4.0 baseline to the second-to-last commit.

user-8779ef 19 June, 2018, 13:52:01

(not all of those, but a few from that period).

user-8779ef 19 June, 2018, 13:54:24

avfiltergraph.h is a component of ffmpeg.

user-8779ef 19 June, 2018, 13:58:10

...and I can confirm I have ffmpeg 4.0.1 installed, and opencv 3.4.1_5.

user-c351d6 19 June, 2018, 14:07:36

@papr For instance, when you want to determine how often someone gazed at a particular area of a monitor, you could define many surfaces as AOIs using the same markers. As far as I know, there is no way to define multiple AOIs within a surface without writing a complex script, is there? With this solution I could just parse the logfiles of the surfaces and use the on_srf column to calculate my dwell times. Do you see an easier way to get the dwell times?

user-d3a1b6 19 June, 2018, 14:18:11

@papr re: converting Pupil Labs gaze positions to horizontal/vertical rotations... I reached out to the first author of nslr-hmm and he said:

The pupil labs system gives locations normalized to the scene camera frame (-1 to 1 or 0 to 1, can't remember which). Just multiplying these with the scene camera's FOV angle should work fine; the approximation error is very small and the NSLR-HMM should work just fine with those.

So I took the norm_pos_x and y data from gaze_positions.csv and applied the formula (x-0.5) * 50, where 50 degrees is the FOV of the world camera. These rotations seemed to produce decent results in nslr-hmm, but I wanted to run this approach by you. I bet I’m making a mistake somewhere.
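
The conversion as described could look like this; the 50 degree FOV value is the assumption from the message above and depends on the scene camera lens:

```python
import pandas as pd

FOV_DEG = 50  # assumed horizontal/vertical FOV of the world camera
df = pd.read_csv("exports/000/gaze_positions.csv")
df["x_deg"] = (df["norm_pos_x"] - 0.5) * FOV_DEG
df["y_deg"] = (df["norm_pos_y"] - 0.5) * FOV_DEG
```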

user-73ee8f 19 June, 2018, 15:28:07

@papr in your opinion, do you think optimizing the pupil_detectors module will provide a noticeable difference in performance?

papr 19 June, 2018, 15:58:21

@user-049a7f I think that there is some potential to optimize the code. I don't know it well enough to tell you how much potential exactly. I would try running it as it is and see how many frames you can process with it

papr 19 June, 2018, 16:00:47

@user-d3a1b6 Sounds great! This can be improved in accuracy by using the camera intrinsics, but as the author said, this does not make a lot of difference. The question is how demanding the algorithm is in terms of cpu and memory.

papr 19 June, 2018, 16:01:42

@user-8779ef did you install ffmpeg with anaconda as well? Maybe it is not able to find its location

user-8779ef 19 June, 2018, 16:11:47

I suggest nobody use anaconda / virtual envs for this. I was unable to change the conda compiler to C++11.

user-8779ef 19 June, 2018, 16:11:56

...and so unable to install pyndsi

user-8779ef 19 June, 2018, 16:12:44

I've now found that my python path was set incorrectly (though I'm not sure what the correct path is...), and so pip was not correctly installing certain packages. Trying to reinstall and see what happens.

papr 19 June, 2018, 16:15:51

@user-c351d6 You are right that defining multiple surfaces solves your problem. Nonetheless, I would recommend using a single surface for the complete monitor and setting the surface's size to the monitor's resolution. This way you will have to assign the gaze points to each displayed monitor element after the recording, but you would save the surface tracking CPU usage.

papr 19 June, 2018, 16:16:43

@user-8779ef yeah, conda does everything in an isolated fashion...

user-8779ef 19 June, 2018, 16:18:23

Yeesh.... now this:

user-8779ef 19 June, 2018, 16:18:26

"ImportError: dlopen(/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/boost-python/lib/libboost_python3.dylib Referenced from: /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so Reason: image not found"

user-8779ef 19 June, 2018, 16:18:47

I can confirm that boost_python3 and boost have been "untapped."

user-cd9cff 19 June, 2018, 16:20:33

@papr Hello, I am new to Pupil Labs and am not as experienced. Can you please explain to me how to use AOIs and direct me to any documentation Pupil Labs may have on the topic? I am trying to design an experiment where the eye must not stray from a red dot in the center, and I am using Pupil Labs to monitor the eye and make sure that it does not deviate.

papr 19 June, 2018, 16:22:35

@user-cd9cff https://docs.pupil-labs.com/#surface-tracking

user-cd9cff 19 June, 2018, 16:24:00

@papr Thank you so much!

user-cd9cff 19 June, 2018, 16:33:42

@papr Do you know of any project that has done surface tracking with Matlab?

papr 19 June, 2018, 16:34:50

Not sure, but these guys might have: https://github.com/mtaung/pupil_middleman#pupil-middleman

Otherwise I would recommend extending our Matlab helper script to receive surfaces

user-049a7f 19 June, 2018, 19:59:10

@papr By running, do you mean running Pupil? Or is there some way to run pupil_detector alone? When running Pupil by just doing main.py service, I get sub-15 FPS (but it feels like 5 FPS on average, to be fair)

papr 19 June, 2018, 20:01:06

You can pass images directly via Python, but I meant running Capture without the UI, yes

user-049a7f 19 June, 2018, 20:18:50

@papr Sorry, I am a bit confused. Is there documentation for passing images directly via Python? And I did not know you can run Capture without the UI; how is that done? Service still has some UI, so minimizing it (and even removing video playback) could possibly save some consumption. Those are the issues I am running into

papr 19 June, 2018, 21:03:43

@user-049a7f no, there is no documentation other than the code itself. You will have to manually remove the UI code to run Capture without the UI. Have a look at eye.py and how it calls detect() on the pupil detector. Unfortunately I cannot link code lines since I am on mobile.

papr 19 June, 2018, 21:05:43

You can use pyuvc to access the camera images directly without Capture, btw, and then pass these images directly into the pupil detector instance. You will have to write your own code for that, though.
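
A minimal pyuvc sketch of that; the device index and frame mode are assumptions, so query your camera's available modes first:

```python
import uvc

devices = uvc.device_list()           # list of attached UVC cameras
cap = uvc.Capture(devices[0]["uid"])  # assumption: first device is an eye cam
cap.frame_mode = (192, 192, 120)      # hypothetical width, height, fps

frame = cap.get_frame_robust()
gray = frame.gray  # grayscale numpy array, what the pupil detector consumes
```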

user-3e42aa 19 June, 2018, 22:42:38

Hi all. Noting you've been discussing NSLR and its implementation, I'd like to give a heads up that the current C++ version has issues with non-Linux environments. The Python version works, but is probably too slow for widespread usage. On Windows the problem is that it uses C++17 features too new to build on VC++. On macOS it builds, but apparently it gets stuck on the segmentation for some reason. I don't have access to an OSX environment, so I can't really debug it further. I'd love to see NSLR integrated into Pupil and can try to e.g. port it to C++11 for a more portable build when I find the time.

user-3e42aa 19 June, 2018, 22:44:00

(In my defense, both MS and Apple seem almost actively hostile to people trying to support their platforms without paying them, which I have no intent of doing)

user-3e42aa 19 June, 2018, 23:13:51

@papr regarding the CPU usage, with the C++ version it shouldn't be a problem. It should handle hundreds of thousands of samples per second. Also, if you disable the noise optimization, which you probably can since you know the equipment, the performance is a further 10 times or so faster.

user-3f0708 20 June, 2018, 01:51:32

good evening

user-3f0708 20 June, 2018, 01:53:06

I need help. Has anyone used the mouse_control.py code available on GitHub with the latest version of Pupil?

user-3f0708 20 June, 2018, 02:01:59

Well, I wanted to move the mouse with the movement of the eyes through the code mouse_control.py with the help of the markers, but I cannot. I do the calibration process, and soon after I execute the code mouse_control.py, but the mouse does not move.

papr 20 June, 2018, 07:39:58

@user-3e42aa Hey, nice to hear from you! We would probably wrap the C++ code in a Cython extension instead of using the Python code if the performance increase is so drastic. But I will have to look into the paper + code in the coming weeks. Btw, do I understand it correctly that your code is also released under the Open Access license?

wrp 20 June, 2018, 08:07:05

@user-3f0708 Hi - I think I also received your email. Apologies for the delayed reply. I just tested the code on Linux (Ubuntu 18.04) and it works as designed. You will need to install dependencies zmq, msgpack, and pyuserinput in order to run mouse_control.py. You will also need to define a surface named screen in Pupil Capture using the surface tracking plugin.

user-c351d6 20 June, 2018, 08:52:43

@papr Is there already a program/script/GUI to assign AOIs (display elements) within a single surface? The experiment is conducted at a chair of psychology, and most experimenters have no experience with any programming language.

papr 20 June, 2018, 09:03:05

@user-c351d6 not that I know of. But this would be a matter of comparing values. E.g. you know that an element with size WxH was displayed at (X,Y); then a gaze point lies on that element if both of these conditions hold true:
- X <= gaze_x <= X+W
- Y <= gaze_y <= Y+H
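
The same test as code:

```python
def gaze_on_element(gaze_x, gaze_y, X, Y, W, H):
    # Element of size W x H displayed at (X, Y), gaze point in the same
    # (surface) coordinate system.
    return X <= gaze_x <= X + W and Y <= gaze_y <= Y + H
```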

user-c351d6 20 June, 2018, 09:07:40

@papr Thank you for the information, we will consider doing this. Will a surface be tracked even though it's not completely in the FOV of the world camera?

papr 20 June, 2018, 09:11:34

A surface will be tracked as long as at least 2 markers of the surface are detected.

user-41711d 20 June, 2018, 10:08:54

Hello. I am working on a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?

user-3e42aa 20 June, 2018, 10:08:57

@papr There are Python bindings for the C++ version already included, so no need for Cython wrapping. It's released under AGPL.

user-3e42aa 20 June, 2018, 10:12:00

Well, actually the python implementation is included also in the article as a listing, so it sort of is under CC-BY-4

papr 20 June, 2018, 10:13:59

OK, cool!

papr 20 June, 2018, 10:14:49

I am looking forward to replacing as much of our event detection code as possible with yours. Makes my life easier 😃

user-b6398e 20 June, 2018, 11:38:42

hi everybody,

user-b6398e 20 June, 2018, 11:42:54

I am starting to work with the Pupil headset, but when I try to calibrate it on my Mac (2.6 GHz Intel Core i7, version 10.12.6) using the screen marker calibration, the software crashes. Any idea about this? What should I do?

user-b6398e 20 June, 2018, 11:44:10

I have tried it with Windows, and it worked. I also tried the manual calibration on the Mac, but I was not able to do it.

papr 20 June, 2018, 11:58:15

@user-b6398e Please follow these steps:
1. Start Capture
2. Calibrate/crash the software
3. Upload ~/pupil_capture_settings/capture.log, i.e. in your home folder, look for a folder called pupil_capture_settings and upload the capture.log file that is included

user-3f0708 20 June, 2018, 12:42:58

Is it possible to send a step-by-step procedure to run the mouse_control.py code? I'm using Linux (Ubuntu 16.04)

wrp 20 June, 2018, 14:06:22

@user-3f0708
1. Start Pupil Capture (ensure the Pupil Remote plugin is enabled - it is enabled by default settings)
2. Load the surface tracker plugin from the plugin manager in the GUI
3. Define a surface that corresponds to your computer screen and name it "screen" (you can name it what you like, but note that you will need to update the surface name in mouse_control.py)
4. Calibrate
5. Start mouse_control.py, e.g. python3 mouse_control.py
6. See the cursor move to where you gaze on screen.

user-3f0708 20 June, 2018, 14:26:51

@wrp Can you tell me where in the code of mouse_control.py the surface name is defined, so that I can change it to match mine?

user-3f0708 20 June, 2018, 14:29:45

@wrp thank you

wrp 20 June, 2018, 14:30:00

@user-3f0708 you're welcome 🙂

user-e2056a 20 June, 2018, 15:37:08

I wonder if we could identify a fixed area from the raw data output. We were trying not to use the surface trackers in our experiments since they might be distracting. I know that in the raw data file, the left corner of the world video is the origin. My question was whether we could identify a fixed area while the world camera was moving with the participant's head. Thank you.

papr 20 June, 2018, 15:38:13

@user-e2056a Not with the given tools. You will either have to annotate the fixed area yourself or use some kind of object detection.

papr 20 June, 2018, 15:39:01

The problem is that the relationship between headset and your AOI is not fixed.

user-e2056a 20 June, 2018, 15:40:39

I see, thank you

user-73ee8f 20 June, 2018, 17:43:14

Anyone have an opinion on what would be the best way to add a new video capture option for a stream? For example, if I want to run Pupil on my machine but have it process live footage being broadcast through the network from another machine? Would it be possible to modify one of the backends?

papr 20 June, 2018, 17:45:40

Either you implement your own backend or you implement a pyndsi host. This is a Python example that implements such a host for a connected UVC camera: https://github.com/pupil-labs/pyndsi/blob/master/examples/uvc-ndsi-bridge-host.py

papr 20 June, 2018, 17:47:11

This is the protocol specification: https://github.com/pupil-labs/pyndsi/blob/master/ndsi-commspec.md

user-3f0708 20 June, 2018, 19:24:05

@wrp One doubt about this line of code in mouse_control.py: screen_size = sp.check_output(["./mac_os_helpers/get_screen_size"]).decode().split(","). What does the directory "./mac_os_helpers/get_screen_size" represent?

user-cd9cff 20 June, 2018, 22:29:49

@papr We are using a stereoscopic display and we would like to monitor eye position while the observer is performing the task. The observer is asked to fixate the center of the screen. We would like to flag trials in which eye position deviates outside a tolerance window. The stereoscopic display is optical (see attached image). The observer views two monitors through two mirrors tilted at 45° with respect to the line of sight. The monitors themselves are parallel to the line of sight but displaced about 1 m to the left and right of the line of sight. The left and right eye displays are configured to be a single extended display so that they are synchronized.

Our questions are related to the issue of how best to perform a calibration under these circumstances. Briefly, if we are using our dichoptic setup, can we calibrate with only the right display (seen by the scene camera), and if that is not possible, what is our best alternative: a separate display, physical markers, etc.?

Option 1: Using the right display: The scene camera can only see the right display through the mirror. Does it make sense then to do a monocular calibration with only the right display? The default Pupil Labs calibration is for the entire (extended) display. So if we want to calibrate just the right display, how should we go about it? Is there a modified Pupil Labs calibration that we can use, should we use our own custom calibration routine, or something else such as surface markers on the display? (If we understand the documentation correctly, we will not be able to select the right display by itself, as it is configured to be part of one extended display.)

Option 2: Using another display: Instead, should we calibrate binocularly on another display or a physical surface that is at the same viewing distance and has the same dimensions as each eye's display?

Chat image

user-ced35b 11 March, 2022, 01:11:22

May I ask how you ended up solving this calibration problem?

wrp 21 June, 2018, 07:40:07

@user-3f0708 if you are using Linux this code block will not be executed. If you clone the entire pupil-helpers repository you will find the mac_os_helpers dir. @user-3f0708 let's continue this discussion in 💻 software-dev channel as it is moving towards more development specific questions.

papr 21 June, 2018, 08:10:51

@user-cd9cff I would handle this as if it were a VR headset. You would assume that the subject only sees the screens. In this case you would have to set up your experiment such that it uses the HMD Calibration plugin and provides the screen positions for each displayed marker.

The problem is that you will have issues with slippage as soon as the subject moves.

user-e2056a 21 June, 2018, 13:24:27

Hi, is it possible to find the longest gaze duration on a specific surface?

user-e2056a 21 June, 2018, 13:29:51

One more question: suppose we are using the left corner of the world video as the origin, how do we measure the location of an area of interest? Do we measure the proportion of the AOI relative to the world video, and compare it to the normalized coordinates?

papr 21 June, 2018, 13:29:53

@user-e2056a Not with the built-in tools. Export the surface data for your surface and look at the gaze positions for it. You are looking for the longest sequence of gaze points whose on_srf value is True and whose confidence is higher than a given threshold, e.g. 0.8

papr 21 June, 2018, 13:32:04

@user-e2056a You can use the to_world transformation matrix to convert homogeneous surface coordinates into world coordinates.

user-e2056a 21 June, 2018, 13:33:32

Thanks @papr, where do I find the to-world transformation matrix?

papr 21 June, 2018, 13:35:19

in the srf_positons<surface name>.csv file

user-e2056a 21 June, 2018, 13:36:14

Thank you

user-b91aa6 21 June, 2018, 16:03:19

May I ask why the eye image for the top eye is a little more blurry than for the bottom eye?

Chat image

user-b91aa6 21 June, 2018, 16:03:33

@papr

user-b91aa6 21 June, 2018, 16:04:18

And why do you want to flip the eye image? Thank you very much.

papr 21 June, 2018, 16:06:03

The image is flipped because the camera is flipped. This does not influence pupil detection/gaze mapping.

user-b91aa6 21 June, 2018, 16:06:51

Why is the top eye image a little more blurry than the bottom eye?

user-b91aa6 21 June, 2018, 16:06:56

@papr

papr 21 June, 2018, 16:07:27

Do you use the 120Hz or the 200Hz cameras?

user-b91aa6 21 June, 2018, 16:07:37

120Hz camera

user-b91aa6 21 June, 2018, 16:08:05

This makes my 3D eye detection less stable than for the bottom eye

user-b91aa6 21 June, 2018, 16:09:22

For the bottom eye, the eye contour is very clear, but for the top eye it is not that clear

Chat image

user-b91aa6 21 June, 2018, 16:09:28

@papr

papr 21 June, 2018, 16:09:29

Then you can carefully (!) adjust the focus of the left/eye1 camera.

user-b91aa6 21 June, 2018, 16:10:14

Which direction should I turn the camera?

papr 21 June, 2018, 16:10:19

https://docs.pupil-labs.com/#focus-cameras

user-b91aa6 21 June, 2018, 16:11:33

I tried, but I can't tell which direction is supposed to bring the eye into the focus of the camera

papr 21 June, 2018, 16:13:16

I can't tell you that since I do not know how far the eye is from the camera, nor do I know how far in the lens has been rotated. You will have to try yourself. Alternatively you can try changing the eye camera's distance to the eye.

user-b91aa6 21 June, 2018, 16:13:57

OK. Thanks

user-525392 21 June, 2018, 16:56:13

Hey guys, I am trying the mobile module with my Pupil. When I start Pupil Capture on the host machine it shows that I need to install the time sync module. Do you know where I can find the plugin and how to install it?

papr 21 June, 2018, 16:56:52

No, you just have to activate it. Open the Plugin Manager on the right and enable the Time Sync option

user-73ee8f 21 June, 2018, 17:02:23

What FOV does the eye tracking camera have on the pupil headset?

user-525392 21 June, 2018, 17:05:20

@papr Thank you for the prompt response. On the host machine I get this message when I enable the time sync module.

user-525392 21 June, 2018, 17:05:30

Chat image

papr 21 June, 2018, 17:06:40

This just means that there is no other time sync node yet.

user-cd9cff 21 June, 2018, 17:21:57

@papr In our setup, the two screens are essentially one large screen

Chat image

papr 21 June, 2018, 17:22:16

Yes, that is fine

user-cd9cff 21 June, 2018, 17:22:47

The longer screen we have divided into two sections, and so we need a calibration procedure that will be the same on both the right and left sides of the screen

user-cd9cff 21 June, 2018, 17:23:16

Chat image

user-cd9cff 21 June, 2018, 17:23:31

The present HMD calibration appears only once, in the middle

user-cd9cff 21 June, 2018, 17:23:51

But is there a way that we can put a calibration on the left side and on the right?

papr 21 June, 2018, 17:23:54

You will have to display the markers yourself using your own program. Ideally you show the markers on both "screens" and send their locations to Capture

user-cd9cff 21 June, 2018, 17:24:10

By markers, are you talking about surface markers?

papr 21 June, 2018, 17:24:31

no calibration markers

papr 21 June, 2018, 17:25:16

or anything that you want to, but it should have a clear point of attention whose location you can send to Capture

papr 21 June, 2018, 17:26:14

E.g. you display a cross, tell the subject to look at the cross, and send the cross's center as a reference location to Capture

user-8779ef 21 June, 2018, 17:26:45

Hey papr, any way for me to order the 3D printed HTC clip-on component from shapeways?

user-8779ef 21 June, 2018, 17:27:09

...or some other way? I was recently sent the LED ring on the circular ribbon cable...

papr 21 June, 2018, 17:27:16

@user-8779ef I don't know that, sorry

user-8779ef 21 June, 2018, 17:27:25

( there was a miscommunication ).

user-8779ef 21 June, 2018, 17:27:34

Ok. I'll talk to MPK. Thanks!

user-cd9cff 21 June, 2018, 17:27:51

So, we would display some sort of marker on both "screens", and send those as reference objects to Capture?

papr 21 June, 2018, 17:28:01

correct

user-cd9cff 21 June, 2018, 17:28:07

or is there a special thing that makes a calibration marker different?

papr 21 June, 2018, 17:29:02

No, the only special thing about our calibration markers is that Capture is able to auto-detect them in the world video. But since you provide the locations yourself, you do not need to use our markers

user-cd9cff 21 June, 2018, 17:29:43

Then what is the situation behind the HMD calibration plugin that you were talking about earlier? How does that play a role in this?

papr 21 June, 2018, 17:30:26

I don't know what you are referring to

user-525392 21 June, 2018, 17:31:44

@papr so regarding the Pupil Mobile. I am trying to use it in real-time. So is it possible to stream the eye-camera to the Pupil Capture then the Pupil Capture does the pupil detection, or everything has to be offline. Is the Mobile module just for offline purposes?

user-cd9cff 21 June, 2018, 17:31:58

You told me earlier today...

"I would hanlde this as if it was a vr-headset. You would assume that the subject does only see the screens. In this case you would have to setup your experiment such that it uses the HMD Calibration plugin and provides the screen positions for each displayed marker.

The problem is that you will have issues with slippage as soon as the subject moves."

What is the HMD Calibration plugin and what role would that play in my setup?

papr 21 June, 2018, 17:35:29

The HMD Calibration plugin is one of the calibration methods. Usually, e.g. with screen marker calibration, the plugin collects reference data by auto-detecting the markers in the scene video. In your case this is not possible. Instead you can use the HMD Calibration. It is unique in the sense that it does not collect reference data itself, since we do not have a typical scene video in a VR environment; the scene is the environment itself. Therefore the application that runs the VR environment needs to display the markers. The good thing is that it knows their locations (since it is drawing them) and can send them to Capture, where the HMD Calibration plugin collects the reference data.

user-cd9cff 21 June, 2018, 17:37:07

Where can I get the file for the HMD Calibration?

papr 21 June, 2018, 17:37:59

It comes with Pupil Capture

papr 21 June, 2018, 17:38:35

This is an example client for the hmd calibration: https://github.com/pupil-labs/hmd-eyes/tree/master/python_reference_client

user-cd9cff 21 June, 2018, 17:48:37

So, using HMD, I need to manually code the markers into my environment and send those locations to Capture?

papr 21 June, 2018, 17:49:01

correct

user-cd9cff 21 June, 2018, 17:49:12

How would I send the locations to Capture?

papr 21 June, 2018, 17:49:32

See the example code above

user-cd9cff 21 June, 2018, 17:50:22

I understand now, thank you so much for your patient explanation

papr 21 June, 2018, 17:50:34

I recommend thoroughly understanding the example in order to build on top of it

user-cd9cff 21 June, 2018, 18:10:11

@papr Do you know if there is a script for the HMD calibration client in Matlab?

papr 21 June, 2018, 18:10:53

No, not that I know of. But you can build on top of our Matlab example scripts: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab

user-c7a20e 21 June, 2018, 18:12:11

are the pupil eye cams not running in BW mode but rather RGB? Seems like a waste of refresh rate, bandwidth and latency if so

papr 21 June, 2018, 18:15:59

The cameras transmit jpeg data. We use libturbojpeg to uncompress the YUV data from the jpeg frames and use the Y plane (which is the grey image) for processing

user-cd9cff 21 June, 2018, 18:18:02

@papr Since surface tracking is established in matlab, it feels like it is the right way to go. If I can make the scene camera pick up the surface markers that I have coded into the stimulus on the right display, will the calibration work fine?

i.e. I don't yet understand why surface trackers won't work for my situation. There is code in matlab for them vs. HMD having only python scripts

user-c7a20e 21 June, 2018, 18:18:31

but the camera itself, if capturing in BW mode, should be able to quadruple its fps, or take much less time at the same refresh rate. Sure, the frame compression/decompression time will stay the same with jpeg, but the capture time should decrease

papr 21 June, 2018, 18:19:10

@user-cd9cff The issue is that the world camera does not pick up what the user sees. How do you want to evaluate the user's gaze without knowing where the user looked?

papr 21 June, 2018, 18:20:47

@user-c7a20e I don't have insights into the configuration of the camera firmware/hardware. @mpk is the expert on this.

user-464538 21 June, 2018, 18:23:28

Hi all, does anyone know if there is a way to record the binocular 3d gaze mapper debug window? Or if there is a way to calibrate and record each eye separately in the binocular system?

user-cd9cff 21 June, 2018, 18:24:24

@papr The way the stereoscope is set up, the world camera picks up the display for the right eye perfectly fine. And since the right eye display is the same as the left eye display, the world camera can see what the user sees. If I code surface markers into the right eye display, the world camera will see them too

user-c7a20e 21 June, 2018, 18:24:48

actually looks like I may have been wrong, at least ordinary jpg files can be grayscale

papr 21 June, 2018, 18:29:20

@user-cd9cff oh ok, then I misunderstood the setup. What exactly is it that you want to show in your experiment? Why do you need to split the screens?

user-cd9cff 21 June, 2018, 18:35:09

Physically, we have two monitors. The right monitor is for the right eye and the left monitor is for the left eye. But, we used a DualHead2Go in order to merge both monitors into one large screen. Now, we are running the stimulus on the right half of this large screen and the left half of this large screen. The right half of the large screen fills up the right monitor while the left half of the large screen fills up the left monitor. When I look through the mirrors, since the scene camera is on the right side of the face, the scene camera picks up the image in the right mirror, which is the right monitor. Additionally, the stimulus is being run in Matlab. My idea was to code the surface markers (attached below) into the Matlab stimulus and have Capture recognize these markers and calibrate itself.

user-cd9cff 21 June, 2018, 18:35:14

Chat image

papr 21 June, 2018, 18:40:05

Now I have understood the setup. My question is what you want to achieve. My guess is that this kind of setup is useful if you want to show a specific stimulus to a single eye without showing it to the other. But you already said that both "screens" will be showing the same stimulus. Sorry if I am missing something very obvious here.

user-cd9cff 21 June, 2018, 18:42:05

We want to show the same stimulus to both eyes and simply track how well each individual eye sees the stimulus. We did not split up the screens to show the eyes different stimuli, but to show them the same stimuli separately and have the brain put the images together.

user-cd9cff 21 June, 2018, 18:42:48

Don't worry, it's a confusing setup to understand

papr 21 June, 2018, 18:47:40

Ok. In this case you want to use a dual-monocular gaze mapper. Unfortunately, it is not possible to calibrate it directly; instead a binocular gaze mapper will be created after calibration. We wanted to add an option for that but it did not have high enough priority yet. You will either have to run a modified version of the source code, or record everything, incl. the calibration procedure. As soon as we release this feature you will be able to use the offline calibration function to recalibrate using the dual monocular mapper

user-cd9cff 21 June, 2018, 18:48:56

I'm a little confused, could you explain the necessary procedure a little more clearly please?

papr 21 June, 2018, 18:52:21

So in your case you want to track the eyes individually. This is what a dual monocular gaze mapper does. The binocular mapper merges two pupil positions into a single gaze point. The screen calibration creates a binocular mapper by default. There is no UI option to change that behaviour. This means that you will have to change the source code in order to change that behaviour.

user-cd9cff 21 June, 2018, 18:55:01

@papr Is there any way to put the calibration screen on the right and left displays separately?

papr 21 June, 2018, 18:55:18

Why don't you simply mirror the screens?

papr 21 June, 2018, 18:56:11

Like you would mirror a projector during a presentation

user-cd9cff 21 June, 2018, 18:56:39

The Matlab code that I have only works with this screen setup

user-cd9cff 21 June, 2018, 18:57:07

is there any way to put the calibration on even one half of the large screen?

papr 21 June, 2018, 18:57:25

@user-cd9cff no, not built in

papr 21 June, 2018, 18:57:44

You either run it in full screen or use the window mode.

papr 21 June, 2018, 18:58:11

So window mode might actually work for you

user-cd9cff 21 June, 2018, 18:58:14

Will coding surface markers into the stimulus on the right display work, or am I beating a dead horse with the surface markers?

papr 21 June, 2018, 18:58:27

No, this should work fine

papr 21 June, 2018, 18:58:47

This is the recommended setup if you want to map gaze to a screen

user-cd9cff 21 June, 2018, 18:59:34

So I can simply code the marker (see attached image) into the stimulus and Capture will automatically detect it and calibrate?

user-cd9cff 21 June, 2018, 19:00:02

Chat image

papr 21 June, 2018, 19:01:27

Just to clarify the terms. The square markers are used for surface tracking, i.e. you activate the surface tracker, show the markers within the scene, register a surface for the markers, and Capture will start mapping gaze to that surface.

user-cd9cff 21 June, 2018, 19:01:57

Yea

papr 21 June, 2018, 19:01:57

Separately from that you will have to calibrate.

user-cd9cff 21 June, 2018, 19:02:58

So if we can't use them to calibrate, then I will either have to figure out how to use HMD, or I will have to change the source code for dual-monocular gaze mapping?

papr 21 June, 2018, 19:03:59

Calibration != surface tracking. You will need both for your experiment

user-cd9cff 21 June, 2018, 19:04:40

Last question: What if I calibrated using my personal laptop screen and then used the surface trackers for the stimulus?

user-cd9cff 21 June, 2018, 19:04:44

would that work?

user-cd9cff 21 June, 2018, 19:05:09

Would Capture be able to scale the calibrated view to the stimulus using the surface markers?

papr 21 June, 2018, 19:12:18

Technically you can calibrate in your double screen setup just fine using the screen marker calibration window on the right screen. But you will have two major issues:

  • your subject only sees the marker on the right, therefore the right eye will fixate the marker. But you do not know what the left eye will do. Nonetheless the calibration procedure expects that both eyes fixate the calibration marker. Therefore it is not clear if the left eye will be calibrated correctly. You can fix that by showing the calibration marker on both screens at the same time.

  • the binocular gaze mapper uses vergence to estimate depth, as well as the binocular gaze point. This binocular gaze point is meaningless in your setup since you want to investigate gaze for each eye separately. This is why you need a monocular gaze mapper. And you will have to change the code to use it instead of a binocular mapper.

papr 21 June, 2018, 19:15:26

Further clarification: calibration markers are the concentric circles displayed during screen marker calibration.

The arguments above are independent of the surface tracking/displaying square markers. If you display them and set up a surface based on them, then Capture will take the current gaze, independent of which gaze mapper was configured, and transform it into the coordinate space of the surface that you set up before

user-cd9cff 21 June, 2018, 19:17:10

So, does that mean that I can calibrate on a separate screen and then use surface markers in the stimulus?

user-cd9cff 21 June, 2018, 19:17:48

if that won't work, how do I show the calibration marker on both screens at the same time?

papr 21 June, 2018, 19:17:53

Yes, you can use the surface markers in your stimulus

user-cd9cff 21 June, 2018, 19:18:12

Oh, so it will work even if I have it calibrated on another screen

user-cd9cff 21 June, 2018, 19:18:31

?

papr 21 June, 2018, 19:18:37

But as I said, surface markers are not used for calibration but for surface tracking.

user-cd9cff 21 June, 2018, 19:19:16

Yes, my plan is to do a normal calibration on my personal laptop, and then move my head to the stereoscope and use the surface markers coded into the stimulus

user-cd9cff 21 June, 2018, 19:19:21

so i'll be doing both

papr 21 June, 2018, 19:19:34

This might work, yes.

papr 21 June, 2018, 19:20:01

But the issue with the dual-monocular gaze mapper persists nonetheless

user-cd9cff 21 June, 2018, 19:20:38

The issue being that at the end of the day, I will have a gaze that combines both eyes instead of showing the eyes separately

papr 21 June, 2018, 19:20:47

correct!

user-cd9cff 21 June, 2018, 19:20:49

so I won't be able to track the eyes separately

user-cd9cff 21 June, 2018, 19:21:05

That might not be a problem, I'll have to try this out and see

user-cd9cff 21 June, 2018, 19:21:17

Thank you so much for your invaluable help

papr 21 June, 2018, 19:22:01

No problem! I will see if I can squeeze the option for the dual monocular mapper into the next release 😉

user-cd9cff 21 June, 2018, 19:22:13

Thank you so much!

user-cd9cff 21 June, 2018, 20:45:38

@papr Do you know how to show the calibration marker on both screens? Is there a Pupil setting that can do that?

user-525392 21 June, 2018, 20:51:16

@papr Do you have a resource where I can read more extensively about the Pupil Mobile module? From the documentation it is not clear whether I can use it in real-time or not.

papr 21 June, 2018, 20:53:07

@user-cd9cff This is not built in. You would not have this issue if you mirrored the screens... yes, you would have to change that Matlab script... but this would simplify everything, including the script itself

user-cd9cff 21 June, 2018, 20:53:44

I understand, thank you

papr 21 June, 2018, 20:53:47

@user-525392 https://docs.pupil-labs.com/#pupil-mobile

wrp 21 June, 2018, 23:47:02

@user-525392 you can use Pupil Mobile for real time streaming of video data over WiFi or for local recording of sensor data on the Android device.

user-2feb34 22 June, 2018, 07:19:39

Hey guys, I have problems with drivers for Pupil labs. There is no Libusbk in device manager at all. I am using windows 10 and htc vive pupil addon. Can anybody help?

wrp 22 June, 2018, 08:47:50

@user-2feb34 what version of Pupil Capture are you using?

wrp 22 June, 2018, 08:48:29

Do Pupil Cam devices show up in another category when you connect them via USB? How are you connecting the Pupil cameras to your computer?

user-3f0708 22 June, 2018, 14:37:49

@wrp can I use the mouse_control.py without the markers?

papr 22 June, 2018, 14:39:07

No, you need the markers to detect the screen

user-3f0708 22 June, 2018, 14:41:02

ok

user-e38712 23 June, 2018, 12:29:54

Hello guys, is it possible to detect a surface without QR codes?

papr 23 June, 2018, 13:46:09

The built-in surface tracker only works with these markers. You will have to implement your own object detection if you don't want to use our markers. @user-41f1bf has some work on that

user-41f1bf 23 June, 2018, 16:39:35

You can find the plugin I have been using for tracking screens without fiducial markers here: https://github.com/cpicanco/pupil-plugin-and-example-data

user-41f1bf 23 June, 2018, 16:42:27

You can find a complete description of the approach here: http://doi.org/10.1002/jeab.448

user-b571eb 24 June, 2018, 14:17:34

Dear Pupil Labs Team, I need authorization to use this photo in my application for ethical approval. Is there any formal form I need to prepare? Or would it be enough if I note the copyright as well as the citation in my references? Thank you.

Chat image

user-c351d6 24 June, 2018, 20:50:54

I'm sorry to bother you guys again. We are planning to conduct an experiment with the Pupil Labs eye tracker. Today, I tried to record a session with the Pupil Mobile app and noticed that the duration of a recording is limited to around 10 minutes and a 4GB file size. This is quite a problem because our sessions will take around 20 minutes and there is no possibility to restart a recording. Then I tried to activate the h.264 compression, which decreases the file size dramatically, but Pupil Player (or any other program) is not able to play these recordings. Is there a way to have recordings longer than 10 minutes with the Pupil Mobile app without using Wifi?

wrp 25 June, 2018, 02:54:36

Hi @user-b571eb If you note our company name and website address as credit for the image, we can approve the usage for the ethical approval application. You can also email us at info@pupil-labs.com for further confirmation/approval.

wrp 25 June, 2018, 05:39:55

@user-c351d6 are you using the latest version of Pupil software and Pupil Mobile? What device are you using with Pupil Mobile?

user-c351d6 25 June, 2018, 06:32:26

@wrp I'm using the latest version of both. The device is a OnePlus 5T. I could provide you with a short recording. The experiment will be conducted at the beginning of August and I have to find a solution for this quite fast.

mpk 25 June, 2018, 06:38:58

@user-c351d6 Just to make sure: you should run h264 compression on the world video. This will open and play in Pupil Player. We recommend using mjpeg for the eyes. I think this should stay below 4GB for 20 minutes.

user-c351d6 25 June, 2018, 06:44:09

@mpk Yes, I used h.264 only for the world video and that's the problem: the world video can't be played. After dragging the recording into Pupil Player, the application terminates without an error. After deactivating the compression it works again.

mpk 25 June, 2018, 06:44:34

@user-c351d6 are you using windows?

user-c351d6 25 June, 2018, 06:45:05

@mpk same behaviour on mac and windows

mpk 25 June, 2018, 06:45:12

What's the output from the terminal when the app stops?

mpk 25 June, 2018, 06:45:15

ah ok.

mpk 25 June, 2018, 06:45:21

let me try to reproduce this here.

user-c351d6 25 June, 2018, 06:45:55

I will upload a short recording to a cloud drive.

mpk 25 June, 2018, 06:49:23

@user-c351d6 do you have a hi speed or hi-res world camera?

user-c351d6 25 June, 2018, 06:49:52

hi-res

user-c351d6 25 June, 2018, 06:50:16

However, the recording was at 1280x720 at 30Hz

mpk 25 June, 2018, 06:55:40

that's important input!

mpk 25 June, 2018, 06:55:56

The hi-res camera does camera-side h264 compression.

mpk 25 June, 2018, 06:56:14

this means it is different from the hi-speed camera.

mpk 25 June, 2018, 06:56:36

If you can share a recording I can fix this or let our devs know that this needs fixing.

user-c351d6 25 June, 2018, 06:59:37

@mpk I've sent you a link via PM

user-c351d6 25 June, 2018, 07:01:20

Thanks for the fast support, please let me know how you make progress on this if possible

mpk 25 June, 2018, 07:12:21

@user-c351d6 ok. I see this file is not right. I have delegated this to our devs. I'll get back to you with a fix or an updated release of Pupil Mobile.

user-2feb34 25 June, 2018, 09:12:19

Hi guys! Is there any neat way to integrate pupil-labs with UE4? I saw there are instructions for Unity. Do you have something similar for UE4?

user-e38712 25 June, 2018, 09:22:59

hello guys, any idea what went wrong here?

capture.log

user-e38712 25 June, 2018, 09:23:24

I'm unable to open this recording in Pupil Player

user-e38712 25 June, 2018, 09:26:55

Memory, CPU and drive are going ~100% and nothing happens

papr 25 June, 2018, 09:27:40

Are you trying to open a very big recording?

user-e38712 25 June, 2018, 09:27:50

that's right

papr 25 June, 2018, 09:29:05

Unfortunately I am on mobile and currently not able to open the log file. Does the log say "out of memory error" at the end?

papr 25 June, 2018, 09:30:27

It is a known issue that Player is not able to handle very big recordings well. We are currently working on an approach to fix that.

user-e38712 25 June, 2018, 09:30:49

there are no errors

user-e38712 25 June, 2018, 09:30:57

only warnings, debug and info

user-e38712 25 June, 2018, 09:31:36

2018-06-23 15:11:15,327 - world - [DEBUG] av_writer: Opened 'C:\Users\zezima\recordings\2018_06_23\001/world.mp4' for writing.
2018-06-23 15:11:15,362 - world - [INFO] recorder: Started Recording.
2018-06-23 15:34:59,360 - world - [DEBUG] recorder: Closed media container
2018-06-23 15:34:59,362 - world - [INFO] camera_models: Calibration for camera world at resolution (1280, 720) saved to C:\Users\zezima\recordings\2018_06_23\001/world.intrinsics
2018-06-23 15:35:57,805 - eye1 - [WARNING] uvc: Turbojpeg jpeg2yuv: b'Invalid JPEG file structure: two SOI markers'
2018-06-23 15:36:35,092 - world - [INFO] recorder: Saved Recording.
2018-06-23 15:36:44,515 - eye1 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
2018-06-23 15:36:44,537 - eye1 - [DEBUG] uvc: Stream stopped
2018-06-23 15:36:44,549 - eye1 - [DEBUG] uvc: Stream closed
2018-06-23 15:36:44,550 - eye1 - [DEBUG] uvc: Stream stop.
2018-06-23 15:36:44,544 - eye0 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
2018-06-23 15:36:44,573 - eye1 - [DEBUG] uvc: UVC device closed.
2018-06-23 15:36:44,575 - eye0 - [DEBUG] uvc: Stream stopped
2018-06-23 15:36:44,579 - eye0 - [DEBUG] uvc: Stream closed
2018-06-23 15:36:44,579 - eye0 - [DEBUG] uvc: Stream stop.
2018-06-23 15:36:44,589 - eye0 - [DEBUG] uvc: UVC device closed.
2018-06-23 15:36:44,688 - world - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
2018-06-23 15:36:44,689 - world - [DEBUG] uvc: Stream stopped
2018-06-23 15:36:44,693 - world - [DEBUG] uvc: Stream closed
2018-06-23 15:36:44,693 - world - [DEBUG] uvc: Stream stop.
2018-06-23 15:36:44,727 - world - [DEBUG] uvc: UVC device closed.
2018-06-23 15:38:54,117 - eye1 - [INFO] launchables.eye: Process shutting down.
2018-06-23 15:38:55,460 - eye0 - [INFO] launcha

user-e38712 25 June, 2018, 09:32:57

2018-06-23 15:38:56,250 - world - [DEBUG] plugin: Unloaded Plugin: <accuracy_visualizer.Accuracy_Visualizer object at 0x000000000790E940>
2018-06-23 15:38:56,250 - world - [DEBUG] plugin: Unloaded Plugin: <display_recent_gaze.Display_Recent_Gaze object at 0x000000000790E8D0>
2018-06-23 15:38:56,257 - world - [DEBUG] plugin: Unloaded Plugin: <system_graphs.System_Graphs object at 0x000000000790E6D8>
2018-06-23 15:38:56,259 - world - [DEBUG] plugin: Unloaded Plugin: <plugin_manager.Plugin_Manager object at 0x000000000790E780>
2018-06-23 15:38:56,261 - world - [DEBUG] plugin: Unloaded Plugin: <calibration_routines.screen_marker_calibration.Screen_Marker_Calibration object at 0x000000000790E1D0>
2018-06-23 15:38:56,262 - world - [DEBUG] plugin: Unloaded Plugin: <video_capture.uvc_backend.UVC_Manager object at 0x00000000078FE358>
2018-06-23 15:38:56,263 - world - [DEBUG] plugin: Unloaded Plugin: <log_display.Log_Display object at 0x00000000078FE208>
2018-06-23 15:38:56,265 - world - [DEBUG] plugin: Unloaded Plugin: <surface_tracker.Surface_Tracker object at 0x000000000786DF60>
2018-06-23 15:38:56,266 - world - [DEBUG] plugin: Unloaded Plugin: <fixation_detector.Fixation_Detector object at 0x00000000078FE198>
2018-06-23 15:38:56,267 - world - [DEBUG] plugin: Unloaded Plugin: <calibration_routines.gaze_mappers.Binocular_Vector_Gaze_Mapper object at 0x0000000020916588>
2018-06-23 15:38:56,368 - world - [DEBUG] plugin: Unloaded Plugin: <pupil_remote.Pupil_Remote object at 0x00000000078E9B00>
2018-06-23 15:38:56,368 - world - [DEBUG] plugin: Unloaded Plugin: <pupil_data_relay.Pupil_Data_Relay object at 0x00000000078E98D0>
2018-06-23 15:38:56,371 - world - [DEBUG] plugin: Unloaded Plugin: <video_capture.uvc_backend.UVC_Source object at 0x00000000078E9630>
2018-06-23 15:38:56,474 - world - [INFO] launchables.world: Process shutting down.

user-e38712 25 June, 2018, 09:33:21

ok, so probably the problem is that the files are too big

papr 25 June, 2018, 09:33:37

Ah wait. This is a Capture log but you are describing issues with Player. You need to look for player.log in the player settings folder

user-e38712 25 June, 2018, 09:34:19

I know, I thought that it might be a problem with the recording

user-e38712 25 June, 2018, 09:35:24

2018-06-25 10:54:16,445 - MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
2018-06-25 10:54:17,753 - player - [ERROR] player_methods: No valid dir supplied (C:\Users\zezima\Desktop\MGR\pupil_v1.7-42_windows_x64\pupil_player\pupil_player.exe)
2018-06-25 10:54:28,306 - player - [INFO] launchables.player: Starting new session with 'C:\Users\zezima\recordings\2018_06_23\000'
2018-06-25 10:54:28,315 - player - [INFO] player_methods: Updating meta info
2018-06-25 10:54:28,317 - player - [INFO] player_methods: Checking for world-less recording
2018-06-25 10:54:29,484 - player - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
2018-06-25 10:54:30,261 - player - [INFO] launchables.player: Application Version: 1.7.42
2018-06-25 10:54:30,261 - player - [INFO] launchables.player: System Info: User: zezima, Platform: Windows, Machine: DESKTOP-9U84NDA, Release: 10, Version: 10.0.17134
2018-06-25 10:54:30,546 - player - [DEBUG] video_capture.file_backend: loaded videostream: <av.VideoStream #0 mpeg4, yuv420p 1280x720 at 0x774dc88>
2018-06-25 10:54:30,546 - player - [DEBUG] video_capture.file_backend: No audiostream found in media container
2018-06-25 10:54:30,566 - player - [DEBUG] video_capture.file_backend: Auto loaded 60891 timestamps from C:\Users\zezima\recordings\2018_06_23\000\world_timestamps.npy
2018-06-25 10:54:30,593 - player - [INFO] camera_models: Previously recorded calibration found and loaded!
2018-06-25 10:58:08,983 - player - [INFO] file_methods: C:\Users\zezima\recordings\2018_06_23\000\pupil_data has a deprecated format: Will be updated on save

papr 25 June, 2018, 09:38:41

Please wait after dropping the recording onto the Player window. At some point it should open

user-e38712 25 June, 2018, 09:40:43

ok, I'll give it more time; once I got a bluescreen, we'll see what happens

user-7f5ed2 25 June, 2018, 09:47:12

hello everyone... is there anyone who has done this project or has any idea about it: "cursor movements with eye gaze"?

user-e38712 25 June, 2018, 09:50:01

https://www.youtube.com/watch?v=qHmfMxGST7A

wrp 25 June, 2018, 09:51:19

Hi @user-7f5ed2 yes, that is done using surfaces (you can see these in the video linked above) defined in Pupil Capture and this helper script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py

user-e38712 25 June, 2018, 10:04:03

player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "shared_modules\file_methods.py", line 58, in load_object
  File "msgpack/_unpacker.pyx", line 164, in msgpack._unpacker.unpack (msgpack/_unpacker.cpp:2622)
  File "msgpack/_unpacker.pyx", line 139, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2068)
MemoryError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "launchables\player.py", line 246, in player
  File "shared_modules\file_methods.py", line 64, in load_object
  File "shared_modules\file_methods.py", line 48, in _load_object_legacy
_pickle.UnpicklingError: unpickling stack underflow

player - [INFO] launchables.player: Process shutting down.

user-e38712 25 June, 2018, 10:04:22

@papr you were right, there is a memory error in Player

user-7f5ed2 25 June, 2018, 10:07:04

@wrp there is an error in that code

user-7f5ed2 25 June, 2018, 10:07:06

File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile execfile(filename, namespace)

File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/Richa Agrawal/Downloads/Compressed/Computer_Vision_A_Z_Template_Folder/Code_for_Windows/Code for Windows/circle detection.py", line 7, in <module> from pymouse import PyMouse

File "C:\ProgramData\Anaconda3\lib\site-packages\pymouse__init__.py", line 92, in <module> from windows import PyMouse, PyMouseEvent

ModuleNotFoundError: No module named 'windows'

wrp 25 June, 2018, 10:10:18

@user-7f5ed2 this script depends on zmq, msgpack, and pyuserinput - https://github.com/PyUserInput/PyUserInput

wrp 25 June, 2018, 10:10:46

The error you are seeing appears to originate from pymouse, aka pyuserinput

wrp 25 June, 2018, 10:11:10

please check out pyuserinput docs for installation on windows

wrp 25 June, 2018, 10:11:29

@user-7f5ed2 I have never personally run this script on Windows, only Linux and macOS

user-9c7236 25 June, 2018, 11:07:59

Hey guys, new to the chat, anyone here used pupil + psychopy to analyse pupil dilation data?

user-bfecc7 25 June, 2018, 18:43:33

Hello everyone. Have any of you seen this error when exporting data in Pupil Player? It comes up on several different videos I am trying to export. Also, when I accidentally ran the export without the world timestamps file, I was able to export the video. Any thoughts, or do I need to file a formal question on GitHub?

Chat image

mpk 25 June, 2018, 18:45:04

@user-bfecc7 are you using recordings made with Pupil Mobile while timesync was running?

mpk 25 June, 2018, 18:45:15

we need a bit more info on your setup.

user-bfecc7 25 June, 2018, 18:47:26

Hello, yes my apologies. We are running pupil mobile with timesync running. Collecting data from 2 pupil labs applications running at the same time and connecting them with pupil groups.

mpk 25 June, 2018, 18:48:12

@user-bfecc7 ok. I think something happened that made one of the Pupil Mobile sessions 'forget' the time sync offset. (App crash?)

mpk 25 June, 2018, 18:49:16

We can fix this with a script. It would be best if you can share this recording (just the _timestamps.npy files) with data[at]pupil-labs.com. We can confirm the issue and show you the script to fix it.

mpk 25 June, 2018, 18:49:23

I'll also let our devs know that we need to make this more robust against this kind of failure 😃

mpk 25 June, 2018, 18:52:44

One more question: are you recording on the Pupil Mobile device, or streaming to a Pupil Capture instance and recording there?

user-bfecc7 25 June, 2018, 18:56:44

We are streaming to a computer and having it record there. This is for several videos and we have not had an app crash to my knowledge during a recording session. Also it shouldn't be a network issue as the only two devices on a private network are our pupil labs devices.

mpk 25 June, 2018, 18:57:33

@user-bfecc7 ok. I think I know what's going on. Let me confirm this with our Android dev; I hope we will have this fixed in the next release.

mpk 25 June, 2018, 18:59:05

We can most likely fix your existing recordings. Check out @papr's answer to this issue: https://github.com/pupil-labs/pupil/issues/1203

mpk 25 June, 2018, 18:59:53

Please also send this timestamps file to us so I can double check that my assumptions are indeed correct regarding your failure case.

user-3f0708 26 June, 2018, 03:20:39

Do you have an example of connecting Pupil with the Processing environment (https://processing.org/)?

wrp 26 June, 2018, 03:21:52

@user-3f0708 there are no examples of Pupil + Processing that I am aware of - maybe someone from the community uses Processing. However, if you want to stream data in realtime to Processing (Java) then you would need to install zeromq (zmq) and msgpack in your Processing (Java) client/app.

user-41711d 26 June, 2018, 08:10:51

Hello. I am working on a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?

user-b6398e 26 June, 2018, 10:19:22

Hello, we have the Pupil Labs 200Hz binocular headset. We are doing a calibration and the definition of the eye cameras is worse than in the videos on YouTube. We configured the focus, the distance of the cameras, contrast and gain, but the definition is still the same. Any ideas?

user-b6398e 26 June, 2018, 10:19:31

thank you!

wrp 26 June, 2018, 10:22:36

@user-b6398e please note that the 200Hz eye cameras cannot be focused - they are designed with a set focus for the depths that are used by the hardware. Manually trying to focus the 200Hz binocular cameras will damage them.

wrp 26 June, 2018, 10:22:40

What kind of eye image do you see?

wrp 26 June, 2018, 10:23:01

If you can share an example, we can provide you with concrete feedback

user-b6398e 26 June, 2018, 10:30:45

Thank you wrp,

user-b6398e 26 June, 2018, 10:32:02

We were talking with papr yesterday and we improved some details, but the image is still unfocused

user-b6398e 26 June, 2018, 10:32:24

We sent you videos via WeTransfer

wrp 26 June, 2018, 10:37:19

@user-b6398e thanks for the videos - this helps to clarify.

wrp 26 June, 2018, 10:37:49

Can you please go to Pupil Capture's world window and in the General > Restart with default settings

wrp 26 June, 2018, 10:38:31

Pupil detection in this example looks to have low confidence

wrp 26 June, 2018, 10:39:23

(I'm also seeing that eye1 is running at 30fps - is this intentional?)

papr 26 June, 2018, 10:39:27

I told @user-b6398e yesterday that eye1 seemed a bit too dark and that they should try increasing the camera's gain

wrp 26 June, 2018, 10:40:04

Thanks @papr for following up on this. From the videos @user-b6398e sent it also looks like some other params were manipulated for the pupil detector that may not have been so beneficial to pupil detection

user-b6398e 26 June, 2018, 11:01:22

We tried the gain parameter but realised that was not the key. We tried today with more light in the room, but the definition is still bad. We have recorded another video to clarify the problem with the definition of the eye cameras

user-b6398e 26 June, 2018, 11:04:31

The calibration is worse than the first calibration that we sent you via WeTransfer

user-072005 26 June, 2018, 16:13:15

The Player software is crashing, which it does regularly with any new set of data I have. But now it is doing it on files that I was able to view previously. I even tried opening them in the older versions. Is there something I can do so I can actually read the error message before it closes on me?

user-cd9cff 26 June, 2018, 16:21:25

@papr Hello, I am using the filter messages program from the Matlab helpers repository, and when I request norm_pos, I get back an x and a y value for each eye (pupil 0 and pupil 1). However, is there any way to access gaze data as well? Essentially, is there a way to access gaze data in real-time with Matlab?

user-a8c41c 26 June, 2018, 19:59:08

Hello, I am working on a project with the pupil labs eye trackers and HTC Vive. How does pressing C on the keyboard work to calibrate the trackers? My group members and I are very confused and would love some help. Thank you!

user-2686f2 26 June, 2018, 20:12:33

Is there any documentation, or research papers, that go into detail about the difference between detection & mapping in 2d vs 3d? The pupil-labs website's documentation just has a small paragraph about 3d creating a 3d model of the eye to help with gaze tracking, but doesn't say anything at all about what 2d is doing differently.

user-cd9cff 26 June, 2018, 20:16:25

@user-a8c41c When you start Pupil Capture, the world camera feed will pop up. On the left side of the feed you will see the buttons C, T, R; pressing C on your keyboard is the same as pressing C on the world feed, and that will start the calibration sequence

user-2686f2 26 June, 2018, 20:21:08

@user-d3a1b6 That does a good job of explaining the 3d mapping, and lines up with what I thought it was doing; I feel like I have a good grasp on the 3d mapping mode. I don't really have any intuition about how the 2d mapping mode works. Appreciate the link though.

papr 26 June, 2018, 22:32:56

@user-a8c41c @user-cd9cff the HMD case is a special case. Calibration is started through Unity, since Unity is responsible for providing the scene

papr 26 June, 2018, 22:35:11

@user-cd9cff yes, it is possible to subscribe to gaze. You will have to subscribe to the gaze topic in the same way as you subscribe to the pupil data. Uncomment this line https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L58

papr 26 June, 2018, 22:37:45

@user-2686f2 the paper that @user-d3a1b6 linked has not been implemented. But it is based on Swirski's 3d model, which we use.

user-cd9cff 26 June, 2018, 22:39:55

@papr That doesn't work, the socket comes back empty

user-cd9cff 26 June, 2018, 22:41:18

Also, that line references socket instead of sub_socket

papr 26 June, 2018, 22:58:50

You are right, that is a mistake. It should be sub_socket

user-cd9cff 26 June, 2018, 23:21:26

Even if it is sub_socket, it still returns an empty socket with no data

mpk 27 June, 2018, 06:39:02

we have added a lot of additional logic for gaze mapping and co. We plan to publish a whitepaper on this pipeline.

mpk 27 June, 2018, 06:42:37

@user-b6398e It looks to me like pupil detection is not working right. The 3d model is breaking down, and this leads to broken calibrations... Try adjusting the camera to film the eye from a bit further down. Also please reset the Pupil Capture app. Use algorithm view to confirm that the pupil min and max sizes are set around the actual pupil size observed.

mpk 27 June, 2018, 06:44:28

@user-41711d blink detection can be added to the HoloLens example. Since it is part of Pupil Capture and Service, it's just a question of adding this to the HoloLens relay stream. Where do you need blink detection? Do you need it in realtime?

user-2686f2 27 June, 2018, 07:01:34

@mpk Have you published anything regarding the 2d mapping?

mpk 27 June, 2018, 07:50:39

@user-2686f2 yes we have a white paper that outlines the 2d pipeline. I generally recommend using 3d mode though.

user-54f521 27 June, 2018, 08:12:50

@papr Hello. I am working on a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?

mpk 27 June, 2018, 08:17:04

@user-54f521 do you need this in realtime?

user-54f521 27 June, 2018, 08:35:06

Yes. I would just need to be able to subscribe to it like for gaze, but instead just need the info on whether the user has blinked (for example, to use it as some form of input)

papr 27 June, 2018, 08:35:49

Are you using Capture or Service? And which version?

papr 27 June, 2018, 08:37:35

Technically this should be possible. Please be aware that blinks are not really a good type of user input since it is very difficult to differentiate between voluntary and involuntary blinks...

user-c351d6 27 June, 2018, 08:44:07

Hi guys, is it actually possible to get valid/good calibration results when connecting the eye tracker via Pupil Mobile and wifi to Pupil Capture? It seems like there is quite a delay.

papr 27 June, 2018, 08:45:44

@user-c351d6 the delay does not play a role as long as the frames have correct timestamps. Make sure that you are using Time Sync !

mpk 27 June, 2018, 08:46:57

@user-c351d6 the delay can affect the gaze mapping. To get good results we recommend using fast wifi and a network that is not used by a lot of other clients.

papr 27 June, 2018, 08:48:29

@mpk which effects are you talking about exactly?

mpk 27 June, 2018, 08:48:58

@papr if one eye is delayed the mapping will be monocular, as the binocular pairs cannot be paired in realtime.

papr 27 June, 2018, 08:49:48

Ah yes, I was not aware of this implication

user-c351d6 27 June, 2018, 08:54:57

@mpk Thanks, I will do some tests. Not sure whether we can host our own wifi; it's in a hospital. However, we need a fallback solution that we can use to record more than 10 minutes in case you are not able to fix the compression problem.

mpk 27 June, 2018, 08:56:40

@user-c351d6 understood. Making your own wifi comes down to getting a wifi router for 80 EUR/USD. Our Android dev is working on the c930e issue.

user-54f521 27 June, 2018, 09:06:58

@papr we are using the Unity plugin and Capture. Yes, we are aware that it is difficult, but we would like to experiment with using it.

user-2686f2 27 June, 2018, 09:09:07

@mpk Could you link a pointer to the 2d pipeline? At the moment I'm getting much better results using the 2d mapping and would like to understand the difference between the two modes.

mpk 27 June, 2018, 09:10:11

@user-2686f2 sure: https://arxiv.org/abs/1405.0006

mpk 27 June, 2018, 09:10:51

@user-2686f2 for VR and AR, 2d is the recommended config. For mobile eye tracking I recommend 3d.

user-2686f2 27 June, 2018, 09:14:53

@mpk Thanks very much. My use-case is looking down at a piece of paper while drawing on the paper. Any recommendation of 2d vs 3d for that?

mpk 27 June, 2018, 09:18:41

@user-2686f2 if you don't have a lot of head movement, you might get better results in 2d!

user-2686f2 27 June, 2018, 09:19:09

@mpk Ok, great. You've been a big help, thanks!

user-2a4b02 27 June, 2018, 09:52:16

Hello, I am a college student studying pupil tracking. Is there a document that explains the entire process or the principles of pupil tracking? I could not find one in the pupil docs. I would also like to hear about the GitHub code. Thank you.

user-c351d6 27 June, 2018, 09:58:26

@mpk It's more about the wifi policy of the hospital :/ For some reason you are usually not allowed to host a wifi there.

user-a8c41c 27 June, 2018, 13:32:32

Thank you so much for the help! Could anyone give me a bit of a run through on how to actually initiate the calibration process in Unity? My teammates and I would greatly appreciate it.

user-cd9cff 27 June, 2018, 13:57:27

@papr Do you know how to make the socket return data when querying gaze data? The socket that is returned to me is empty. (This is an extension of my question above.)

papr 27 June, 2018, 13:58:43

I am not sure what goes wrong. I will have to test this on Friday when I have access to Matlab

user-24e31b 27 June, 2018, 20:32:12

Morning, anybody active in chat here?

wrp 28 June, 2018, 02:29:17

Hi @user-24e31b - there are people here - welcome 😺 👋

user-24e31b 28 June, 2018, 02:34:59

Afternoon @wrp, I'm in New Zealand so normally the rest of the world is sleeping or out and busy when I'm here at work! 😦

user-24e31b 28 June, 2018, 02:35:49

I work for the Aviation Security Service and we are interested in purchasing the Pupil Labs Glasses for some in-house research!

user-24e31b 28 June, 2018, 02:36:16

Though I could ask the developers/manufacturers thought I would check in here first

wrp 28 June, 2018, 02:36:43

Nice to meet you and welcome to the community! You are actually speaking with one of the co-founders of Pupil Labs 😸

user-24e31b 28 June, 2018, 02:37:02

Hahaha, slightly biased then, however you will be able to help me out 😄

user-24e31b 28 June, 2018, 02:37:19

Seem like a really good community you have built. Love the open source + docs stuff too!

wrp 28 June, 2018, 02:37:36

I will certainly defer to the community for questions that I feel are better answered by someone with less bias.

wrp 28 June, 2018, 02:37:50

And will be happy to answer any questions that you have

user-24e31b 28 June, 2018, 02:38:11

I have been researching for the last couple of days (a short time, I know, but the budget is coming up!) and just wanted to check that Pupil Labs is capable of what we are looking for, research-wise.

user-24e31b 28 June, 2018, 02:38:55

I have put a business case forward to purchase the high-speed, 200Hz binocular headset for two main usages and research areas.

user-24e31b 28 June, 2018, 02:39:39

1) Have users navigate through our screening points (and the rest of the airport) 2) Use an online training program which displays x-ray images

user-24e31b 28 June, 2018, 02:40:31

I can see from the online videos and testimonials that Pupil Labs is capable of #1, but I'm a little confused about whether it could give us valuable data on #2

wrp 28 June, 2018, 02:44:35

@user-24e31b Responses to your points and notes: 1 - Yes, Pupil is designed for real-world use cases as you noted (e.g. navigating through spaces). For this research application, I would encourage you to take a look at Pupil Mobile so that you can capture raw data on an Android device instead of a laptop/desktop.

1.1. Pupil Mobile - enables you to connect the Pupil eye tracking headset to Android devices (our Pupil Mobile bundle uses the Moto Z2 Play device). With the device we spec/resell you can record up to 4 hours of video locally on the phone, or stream video and sensor data over Wifi. The bundle comes with: Moto Z2 Play (black), hot-swappable Moto power pack, 64gb SD card, USBC-USBC cable, and is pre-loaded with the Pupil Mobile app. (Note: Pupil headsets with 3d world camera are not compatible with Pupil Mobile).

wrp 28 June, 2018, 02:45:23
  2. Screen-based research (I am assuming that the x-ray images are being displayed on a screen).

2.1. Surface Tracking - In order to determine the relationship of an object in the scene and the participant, you can add fiducial markers in your scene and use the surface tracking plugin in Pupil Capture. This plugin will enable you to automatically detect markers, define surfaces, and obtain gaze positions relative to these surfaces in the scene. Read more about surface tracking in the docs: https://docs.pupil-labs.com/#surface-tracking

wrp 28 June, 2018, 02:46:17

You might also want to take a quick look at the citation list that we maintain here: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing

user-24e31b 28 June, 2018, 02:46:41

Great! For using Pupil Mobile, does the app work with other phones that use a USB-C connection? (I saw certain phones listed in the docs; any updates on this?)

wrp 28 June, 2018, 02:47:01

There is actually some recent work in the aviation domain (albeit in control tower/control room - but also uses surface tracking) listed in the citation list

user-24e31b 28 June, 2018, 02:48:09

I did see the surface tracking (with the fiducial markers) and it looks good. Now, I'm no wizard at Python, though I can navigate around a PC; how difficult is the initial setup?

wrp 28 June, 2018, 02:50:02

Other Android devices with USBC may work, but we have only tested the Moto Z2 Play, OnePlus 3/3T/5/5T/6, and Nexus 6p in-house. Other members in the community have reported that some Samsung devices (S8?) work, but we have not verified this on our end. One of the main reasons we recommend the Moto Z2 is its expansion port - what Moto calls "mods" - which you can use to connect and hot swap battery packs. Additionally, this device has an external SD card slot.

wrp 28 June, 2018, 02:50:26

Re coding and Pupil - You do not need to know how to write code to use Pupil.

wrp 28 June, 2018, 02:50:51

If you want to extend Pupil (develop on top of or implement something that is not included in Pupil software) then you will need to write code.

user-24e31b 28 June, 2018, 02:55:18

Cool! Thanks for the info. Is the eye camera powered by USB from the laptop (in the non-mobile package), then? And can the Moto Z2 be bought separately at a later date?

wrp 28 June, 2018, 03:00:01

@user-24e31b The Pupil headset is powered by USB from laptop, desktop, or android device. You can purchase the Pupil Mobile bundle at a later date if desired and you will just plug the headset into the android device via USBC-USBC cable (supplied in the bundle) - no extra hardware required.

user-24e31b 28 June, 2018, 03:07:48

Sounds great. I'm very excited to start getting involved. 👀

user-24e31b 28 June, 2018, 03:08:59

What is the main benefit and difference between the binocular and the single-eye tracker glasses? Is it an accuracy aspect, or capture speed?

wrp 28 June, 2018, 03:12:38

@user-24e31b Some notes: More data - Binocular eye tracking provides more observations of eye movements and observations of both eyes - this "redundancy" of observations is especially beneficial at extreme eye angles, where one eye may be extremely proximal (looking towards the nose and difficult for cameras to detect) and the other eye extremely distal (and easier to detect the pupil).

3D gaze coordinate and parallax compensation - With binocular data, Pupil Capture can estimate a gaze coordinate as a 3d point in space using binocular vergence, and can compensate for parallax error. With a monocular system calibration is accurate for the depth at which you calibrated; this is sufficient for some use cases like screen based work.

Binocular data for classification - We can leverage binocular data for classification of blinks, fixations, and more.

wrp 28 June, 2018, 03:13:18

There are other benefits - but this is the short list (maybe other members of the community can chime in when they get online 😄 )

user-24e31b 28 June, 2018, 03:16:08

Copy that. Thanks! I'm still getting to grips with the concept and the tech behind it. Forgive my ignorant questions =) When the glasses are recording footage, is the video file stored separately from the eye data and then combined in Pupil Player?

wrp 28 June, 2018, 03:16:45

@user-24e31b we welcome any and all questions here 😸

wrp 28 June, 2018, 03:17:13

Regarding data format - https://docs.pupil-labs.com/#data-format

wrp 28 June, 2018, 03:18:55

Short answer is that eye video(s) and world video are saved as videos. Other data is stored in pupil_data file and the data can be visualized, analyzed, and exported with Pupil Player

wrp 28 June, 2018, 03:20:04

You can also download Pupil Player - https://pupil-labs.com/software - and download a sample dataset (like this one: https://drive.google.com/file/d/0Byap58sXjMVfRzhzZFdZTkZuSDA/view?usp=sharing) and take a look

user-24e31b 28 June, 2018, 03:21:00

Will check out those links; on mobile ATM, just wanted clarification on the format used... Alrighty, well that's all I can think of for now. Will submit the business case; I really love the looks of your hardware and service.

user-24e31b 28 June, 2018, 03:21:43

Hopefully I can get back to you with an order confirmation. =)

user-24e31b 28 June, 2018, 03:21:50

Tomorrow!

wrp 28 June, 2018, 03:22:00

@user-24e31b Thanks for the feedback! We look forward to hearing from you 😸

user-a8c41c 28 June, 2018, 13:42:19

@wrp Hello! When my teammates and I entered the hierarchy for the Calibration scene in Unity (we downloaded your online plugins), we press the play button and see nothing but an endless black grid space and a blank loading screen. Therefore, we assume the calibration has not taken place. We are able to press 'C' and to connect to Pupil Service, however we are not able to see anything through the headset. We believe this is due to an error message we are getting from Unity.exe (Unity was running smoothly with other programs we ran). The issue is that we are completely new to eye tracking technology and do not know how to fix this even after reading through your Pupil Docs. Do you have any troubleshooting tips?

user-8306a2 28 June, 2018, 15:37:57

hi everyone

user-8306a2 28 June, 2018, 15:38:14

could someone from the Pupil team explain how exactly the HoloLens integration works?

user-8306a2 28 June, 2018, 15:38:34

AFAIK the HoloLens USB port is not an OTG port, so I don't see how Pupil connects to the HoloLens

user-8306a2 28 June, 2018, 15:42:08

ah, never mind - I finally found that it connects to a separate PC: https://github.com/pupil-labs/hmd-eyes

user-cd9cff 28 June, 2018, 15:55:12

@user-a8c41c I had the same problem when I used the 3D tracking file, but the problem went away when I used the 2D Tracking file

user-464538 28 June, 2018, 18:46:03

Hi all, I research people with intermittent eye turns. I have the headset with binocular eye cameras, and I'm wondering if there is any way to get access to the data that goes into forming the gaze prediction. When the eye turns, the gaze dot drifts off in the direction of the turn, but I would love to have access to where each eye is pointing separately.

papr 28 June, 2018, 18:51:17

@user-464538 Gaze data is based on pupil data. You can access it the same way you access the gaze data
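
Concretely, per-eye pupil data can be subscribed to over the IPC backbone just like gaze. A minimal Python sketch, assuming Pupil Capture is running with Pupil Remote on its default port 50020 (pyzmq and msgpack required):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to pupil data; topics are pupil.0 and pupil.1, one per eye camera
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # each datum carries its eye id, so each eye can be analyzed separately
    print(datum["id"], datum["norm_pos"], datum["confidence"])
```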

user-072005 28 June, 2018, 20:13:55

When I put recordings into Player that used to work fine, it errors, saying something like it can't create the files because they already exist. I tried older versions of the software, but that didn't help. Is there anything I can do? My paper is due in a couple of days and half my data isn't viewable.

papr 28 June, 2018, 20:14:32

Can you post the exact error message?

user-072005 28 June, 2018, 20:15:23

No. I don't really know the correct term for this, but the black box showing the messages closes instantly

user-072005 28 June, 2018, 20:15:36

I only got that bit from causing the error repeatedly until I could catch a glimpse

papr 28 June, 2018, 20:16:03

Which OS do you use?

user-072005 28 June, 2018, 20:16:09

Windows

papr 28 June, 2018, 20:16:45

Please upload the player.log file in the pupil_player_settings folder

user-072005 28 June, 2018, 20:18:47

I don't see a folder named that. It should be in this folder, right? pupil_player_windows_x64_v1.7-42-7ce62c8

papr 28 June, 2018, 20:19:24

No

papr 28 June, 2018, 20:19:40

There is a separate folder in your user folder

user-072005 28 June, 2018, 20:20:14

Oh ok I see it

papr 28 June, 2018, 20:20:16

Just search for player.log and you should find it

user-072005 28 June, 2018, 20:20:53

player.log

user-072005 28 June, 2018, 20:21:18

It had a different error than I saw I guess

papr 28 June, 2018, 20:22:25

mmh, could you please remove all spaces from all folder names?

papr 28 June, 2018, 20:22:32

just replace them with _

user-072005 28 June, 2018, 20:25:11

Does this include the Pupil Cam files?

papr 28 June, 2018, 20:25:37

what do you mean?

user-072005 28 June, 2018, 20:26:00

Chat image

papr 28 June, 2018, 20:26:26

These are Pupil Mobile files; these need to stay as they are

papr 28 June, 2018, 20:26:31

I mean the folder names

user-072005 28 June, 2018, 20:26:32

ok that's what I thought

papr 28 June, 2018, 20:26:55

specifically, 2018-03-19- should not be changed
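
For reference, a minimal Python sketch of this rename (the root path is an assumption; it only touches directory names, leaves file names alone, and skips date-prefixed recording folders such as 2018-03-19-..., which must keep their names):

```python
import re
from pathlib import Path

root = Path("path/to/recordings")  # illustrative
date_prefix = re.compile(r"^\d{4}-\d{2}-\d{2}")  # e.g. 2018-03-19-..., keep as-is

# Rename deepest folders first so parent renames don't invalidate child paths
for folder in sorted(root.rglob("*"), key=lambda p: len(p.parts), reverse=True):
    if folder.is_dir() and " " in folder.name and not date_prefix.match(folder.name):
        folder.rename(folder.with_name(folder.name.replace(" ", "_")))
```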

user-072005 28 June, 2018, 20:27:32

player.log

user-072005 28 June, 2018, 20:27:40

Ok now it is getting the error I saw before

user-072005 28 June, 2018, 20:28:02

well actually

user-072005 28 June, 2018, 20:28:05

ignore that one...

user-072005 28 June, 2018, 20:28:51

player.log

user-072005 28 June, 2018, 20:28:55

that one

papr 28 June, 2018, 20:29:12

ok

papr 28 June, 2018, 20:29:34

not sure why it runs through the upgrade process again

user-072005 28 June, 2018, 20:29:56

It seems to only be doing it on a few of the files since this just happened with half of this folder

user-072005 28 June, 2018, 20:31:54

never mind, it's doing it with all of it now

user-072005 28 June, 2018, 20:32:07

it just happened when I was halfway through analyzing the files

papr 28 June, 2018, 20:32:18

ok, please put the following files in a sub folder named "backup" for each recording that has that issue:
- world
- eye0
- eye1
- audio
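
A minimal Python sketch of this workaround for a single recording (the path and file extensions are assumptions; your recordings may use different containers):

```python
import shutil
from pathlib import Path

rec = Path("path/to/recording")  # illustrative
backup = rec / "backup"
backup.mkdir(exist_ok=True)

# Move the media files out of the way so Player's upgrade can re-create them
for stem in ("world", "eye0", "eye1", "audio"):
    for f in rec.glob(stem + ".*"):  # e.g. world.mp4, eye0.mp4, audio.wav
        shutil.move(str(f), str(backup / f.name))
```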

user-072005 28 June, 2018, 20:32:52

can that be a folder in the folder with the rest of the files, like the offline data folder is?

papr 28 June, 2018, 20:32:57

did you change or replace the info.csv files?

papr 28 June, 2018, 20:33:14

similar to the offline_data folder, yes

user-072005 28 June, 2018, 20:33:17

There's a possibility I did back in March but not recently

papr 28 June, 2018, 20:33:41

such that each recording has its own backup folder including the files above

papr 28 June, 2018, 20:34:49

start with one recording and let's see if it works

user-072005 28 June, 2018, 20:35:18

so I should move them, not copy?

papr 28 June, 2018, 20:35:38

you need to move, exactly

papr 28 June, 2018, 20:35:51

such that they are not in their original place anymore

user-072005 28 June, 2018, 20:36:52

that fixed it

papr 28 June, 2018, 20:38:37

ok, I will make an issue to handle this situation in the next release

user-072005 28 June, 2018, 20:39:01

Great, thanks so much for the help. I was getting a bit panicked about the paper

papr 28 June, 2018, 20:39:13

I can imagine 😄

user-072005 28 June, 2018, 20:39:30

It was data I got while I was abroad, so I really couldn't redo it!

papr 28 June, 2018, 20:46:20

Issue for reference: https://github.com/pupil-labs/pupil/issues/1221

user-cd9cff 28 June, 2018, 22:49:56

@papr I downloaded all of the folders for Pupil in Python following the instructions in the Pupil Docs for Windows dependencies, but I can't seem to open any of the Python scripts even though I have Python installed

papr 29 June, 2018, 07:21:33

@user-cd9cff Sorry, could you recap what you are trying to accomplish by installing the source?

user-d9bb5a 29 June, 2018, 09:48:57

Good afternoon. What can you open these files with: world_timestamps.npy?

papr 29 June, 2018, 09:49:27

Python and numpy
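
A minimal sketch - world_timestamps.npy holds one timestamp (in seconds, Pupil time) per world-video frame; the path below is illustrative:

```python
import numpy as np

ts = np.load("path/to/recording/world_timestamps.npy")
print(ts.shape)  # (number_of_world_frames,)
print(ts[:5])    # timestamps of the first five frames
```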

user-d9bb5a 29 June, 2018, 09:49:55

Thanks)

user-cd9cff 29 June, 2018, 17:53:06

@papr I'm sorry about the previous question; Python is working fine with Pupil on my computer now. However, I would prefer to use Matlab because the stimulus is coded in Matlab. Were you able to figure out the problem with querying gaze data using Matlab?

papr 29 June, 2018, 17:56:37

@user-cd9cff hey, you do not have to run from source to run the Matlab scripts. You can use the bundled application. Unfortunately I was not able to test gaze subscription yet :-/

End of June archive