💻 software-dev


user-66516a 02 October, 2019, 12:51:43

Hello, is there a way to compute eye blinks offline based on the pupil diameter and confidence data (HMD)?

papr 02 October, 2019, 12:52:59

@user-66516a Pupil Player supports offline blink detection using the confidence signal. Are you looking for that or do you want to implement it yourself?

user-66516a 02 October, 2019, 12:54:53

Thanks for the quick answer @papr. Unfortunately, I only have *.csv files on hand, so I'd like to implement it myself

user-66516a 02 October, 2019, 13:07:46

(The team I work with extracted Pupil Labs data into *.csv files using Python and the msgpack library, but they only did so for the timestamp, pupil diameter, confidence and position. I'd like to find a way to compute the number of blinks based on the available data I have.)

papr 02 October, 2019, 13:08:08

@user-66516a this is definitely possible

papr 02 October, 2019, 13:29:27

@user-66516a This is the relevant code from the blink detector: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L344-L367
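
For anyone who only has the exported CSVs at hand, here is a minimal sketch of the idea behind the linked detector: blinks show up as short dips in pupil detection confidence. The column names ("pupil_timestamp", "confidence"), the smoothing window, and all thresholds below are assumptions about a typical export, not the exact parameters of the built-in detector, so adjust them to your data.

import numpy as np
import pandas as pd

df = pd.read_csv("pupil_positions.csv")
ts = df["pupil_timestamp"].to_numpy()
conf = df["confidence"].to_numpy()

# Smooth the confidence signal to suppress single-frame dropouts.
kernel = np.ones(5) / 5
conf_smooth = np.convolve(conf, kernel, mode="same")

# Samples whose confidence drops below the threshold are blink candidates.
is_low = conf_smooth < 0.5

# Group consecutive low-confidence samples and keep groups whose duration
# falls in a plausible blink range (here ~0.05 s to ~0.5 s).
blinks = []
start = None
for i, low in enumerate(is_low):
    if low and start is None:
        start = i
    elif not low and start is not None:
        duration = ts[i - 1] - ts[start]
        if 0.05 <= duration <= 0.5:
            blinks.append((ts[start], ts[i - 1]))
        start = None

print(f"Detected {len(blinks)} blink candidates")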

user-66516a 02 October, 2019, 13:46:57

Thanks for the indication @papr ! I'll implement it shortly

user-1bcd3e 03 October, 2019, 10:51:13

Dear all, is there any object detection plugin that can be added to Pupil?

user-1bcd3e 03 October, 2019, 10:52:00

we need to start using deep learning for object recognition and we wonder if anyone uses something like that

user-1bcd3e 03 October, 2019, 10:52:05

thanks

papr 03 October, 2019, 10:52:24

@user-1bcd3e Check out this third-party fork of Pupil: https://github.com/jesseweisberg/pupil

user-1bcd3e 03 October, 2019, 10:53:57

ok thanks

papr 03 October, 2019, 10:53:59

Please be aware that this fork does not seem to be maintained anymore (last commit Dec 18, 2017).

user-1bcd3e 03 October, 2019, 10:54:25

ok

user-1bcd3e 03 October, 2019, 10:54:49

maybe it will run on an old version of pupil

papr 03 October, 2019, 10:55:42

It is basically an old version of Pupil since they forked the repository. If this will be used for a new project, I highly recommend extracting their changes into a plugin that can be used with a current version of Pupil.

Plugin API docs: https://docs.pupil-labs.com/developer/core/plugin-api/

user-a7d017 07 October, 2019, 01:54:04

Hi everyone, I wanted to modify an existing plugin to add a little bit more functionality. Is there a recommended IDE to import the source code into and run from?

user-a7d017 07 October, 2019, 03:49:24

@papr specifically I'd like to perform some calculations on the raw data and export an additional .csv file at the same time as the gaze/pupil positions. After the raw data exporter plugin is loaded, where is the data held before being written to a .csv?

papr 07 October, 2019, 16:36:42

@user-a7d017 In this case I would recommend a custom plugin: https://docs.pupil-labs.com/developer/core/plugin-api/

Please be aware that the plugin API docs are a work in progress and will be extended in the coming days.

The data is stored in self.g_pool.pupil_positions and self.g_pool.gaze_positions, where self is an instance of your plugin.

You should also implement:

def on_notify(self, notification):
    if notification["subject"] == "should_export":
        self.do_export()
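
Putting the pieces above together, a minimal sketch of what such an export plugin could look like. The class name and CSV columns are only illustrative, and the "export_dir" field of the should_export notification is assumed to point at the export folder (the built-in raw data exporter reads it the same way); drop the file into Player's plugins folder so it gets loaded.

import csv
import os

from plugin import Plugin


class Custom_CSV_Exporter(Plugin):
    """Writes an additional CSV next to the regular export."""

    def on_notify(self, notification):
        if notification["subject"] == "should_export":
            self.do_export(notification["export_dir"])

    def do_export(self, export_dir):
        # self.g_pool.gaze_positions holds the gaze data loaded by Player.
        out_path = os.path.join(export_dir, "my_gaze_export.csv")
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "confidence", "norm_pos_x", "norm_pos_y"])
            for datum in self.g_pool.gaze_positions:
                writer.writerow([
                    datum["timestamp"],
                    datum["confidence"],
                    datum["norm_pos"][0],
                    datum["norm_pos"][1],
                ])
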
user-a7d017 07 October, 2019, 21:26:50

@papr Thank you for the response, very helpful. I have set up the dependencies and dev environment to run from source on Windows, but seeing as Pupil runs both Python and C code, is there a recommended IDE to use?

papr 07 October, 2019, 21:28:16

@user-a7d017 I personally use VS Code. Also, you rarely need to interact with the C code, so you can use any text editor you like, since the relevant code is Python.

user-fc194a 09 October, 2019, 14:31:16

@user-b08428 did you ever find out how to send annotations via Unity?

papr 09 October, 2019, 14:35:45

/cc @fxlange 👆 Python reference: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
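
For reference, the gist of that helper script condensed into a rough Python sketch. The ports, the "annotation" topic, and the timing details are assumptions taken from the linked script rather than an authoritative API description, and annotations are only recorded while the annotation plugin is active in Capture, if I remember correctly.

import time
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the PUB port and the current pupil time.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Publish the annotation as a two-frame message: topic + msgpack payload.
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscriber side a moment to register the publisher

annotation = {
    "topic": "annotation",          # topic naming follows the linked helper
    "label": "my annotation label",
    "timestamp": pupil_time,        # must be in pupil time, not system time
    "duration": 1.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))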

user-fc194a 09 October, 2019, 19:02:40

@papr I already looked at this code. I also looked at the API call for remote annotation here: https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations. What I am not sure about are the key-values I need to send to the function

public void Send(Dictionary<string, object> dictionary) (RequestController, and in turn Request.SendRequestMessage, uses this Dictionary to send via MessagePack)

I originally thought it would be easy enough with {{"subject", "annotation"}, {"label", "my annotation label"},{"timestamp", <pupil time>}, {"duration", 1.0},{"custom_key", "custom value"}}

However it does not seem to get recorded. Any idea where I am going wrong? (Let me know if I should ask this in hmd-eyes forum instead...)

fxlange 09 October, 2019, 20:42:48

Hi @user-fc194a - indeed, let's move this topic to 🥽 core-xr. In short: support for remote annotations directly in hmd-eyes is on our list, and thanks to @user-141bcd we already have a Pull Request implementing it.

user-4d0769 11 October, 2019, 15:56:00

@papr I have my One+ connected to my Pupil Invisible headset. It's on the same network as my MacBook running the Pupil Invisible Monitor. The monitor doesn't seem to detect any device. Am I missing a step?

papr 11 October, 2019, 15:56:49

@user-4d0769 hey, what operating system are you using?

user-4d0769 11 October, 2019, 15:57:17

@papr macOS Mojave (10.14.6)

papr 11 October, 2019, 15:57:41

@user-4d0769 what kind of wifi is it? Are you at a university?

user-4d0769 11 October, 2019, 15:58:00

It's my corporate wifi

papr 11 October, 2019, 15:58:47

They might be blocking UDP traffic.

papr 11 October, 2019, 15:59:14

It is recommended to run a dedicated wifi that you can configure yourself.

user-4d0769 11 October, 2019, 15:59:35

gotcha. Would turning on tethering on the phone and connecting the laptop to it work?

papr 11 October, 2019, 16:00:17

No, that does not work, since the phone isolates the connected peers.

user-4d0769 11 October, 2019, 16:14:49

Ok I got it working on a separate wifi as you suggested 🙂 However it only streams the eye tracking. The picture is not streaming. Any advice?

papr 11 October, 2019, 17:31:36

@user-4d0769 try restarting the phone.

user-a82c20 15 October, 2019, 19:09:26

I have a plugin for Capture that accepts a signal from an external source and timestamps whenever this signal changes. The plugin for Player then exports the signal and its timestamp to signal.csv. I want to be able to easily link the timestamped signals to the pupil_info.csv data. Am I correct in thinking that including world_index in the signal.csv file would be helpful? If so, how do I add the world_index to the signal and timestamp?

user-14d189 16 October, 2019, 06:43:17

Hi, I'm trying to get some particular frames out of the eye videos. I've been somewhat successful with:

user-14d189 16 October, 2019, 06:43:27

import cv2

video = cv2.VideoCapture("eye0.mp4")
total_frames = video.get(7)  # cv2.CAP_PROP_FRAME_COUNT
frame_idx = 100

while True:
    video.set(1, frame_idx)  # cv2.CAP_PROP_POS_FRAMES
    ret, frame = video.read()
    cv2.imshow('frame', frame)
    frame_idx = frame_idx + 1
    key = cv2.waitKey(10)
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

user-14d189 16 October, 2019, 06:44:22

Now matching timestamps is a bit more difficult. Is it possible to access the timestamps inside the original eye video?
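
In case it helps: the timestamps are not embedded in the video itself. In a standard Pupil recording folder they are stored alongside the video as eye0_timestamps.npy, one timestamp per frame, so the frame index from the snippet above can be used directly as an index (assuming no frames were dropped when the file was written):

import numpy as np

# Load the per-frame pupil timestamps that sit next to eye0.mp4.
timestamps = np.load("eye0_timestamps.npy")
frame_idx = 100
print("frame", frame_idx, "was captured at pupil time", timestamps[frame_idx])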

user-ff9c49 16 October, 2019, 13:29:26

Hi! Maybe this information has already been reported but I am new in the Pupil Labs chat... I would like to ask you if you could give me information about how to transform norm_x and norm_y into pixel coordinates of the screen. Thanks!

user-a7d017 16 October, 2019, 15:04:34

@papr Hello again, is it possible to run and debug Pupil Player using an IDE? Currently I run Player using Powershell as described in the Pupil docs and use logging but it is time consuming. I apologize for the newbie question. As part of the custom plugin I'm working on I'd also like to be able to import a file into Player, either by adding a button or by drag-and-drop into window as we do with the recording directory. Would you mind pointing me in the right direction to accomplish these?

wrp 16 October, 2019, 15:29:01

@user-ff9c49 If you want norm coords --> world camera pixels, then multiply by the resolution of your world camera and flip over Y axis: https://github.com/pupil-labs/pupil/blob/1d8ce3200d3c799b67eaf62a73c4053edc2fe5db/pupil_src/shared_modules/methods.py#L535-L546 -- see also docs here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
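
For illustration, a tiny sketch of that conversion; the 1280x720 resolution is just an example, and the linked denormalize helper is the canonical implementation.

def norm_to_pixels(norm_pos, frame_width, frame_height):
    # Scale by the camera resolution and flip y, since norm coordinates
    # have their origin at the bottom left of the image.
    x = norm_pos[0] * frame_width
    y = (1.0 - norm_pos[1]) * frame_height
    return x, y

# e.g. for a 1280x720 world camera:
print(norm_to_pixels((0.5, 0.5), 1280, 720))  # -> (640.0, 360.0)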

user-c5fb8b 16 October, 2019, 15:30:55

Hi @user-a7d017 I can quickly jump in for @papr here: I assume you are running the *.bat files from powershell? You can also run pupil player just via python with

python pupil_src/main.py player

This way you can easily set it up to run via an IDE. The specific setup depends on which IDE you are using. Visual Studio Code, for example, needs launch configurations for your debug targets (https://code.visualstudio.com/docs/editor/debugging#_launch-configurations); here you could use, e.g.:

{
    "name": "Run Player",
    "type": "python",
    "request": "launch",
    "cwd": "${workspaceFolder}/pupil_src",
    "program": "${workspaceFolder}/pupil_src/main.py",
    "args": ["player"],
    "subProcess": true
}

But it should be straightforward to attach any other IDE to this script.

Regarding your extension ideas:

Drag and drop might be a bit difficult to implement for anything other than recordings. You can find the relevant code piece here: https://github.com/pupil-labs/pupil/blob/cbc4aa8f788371ac5a3822d4819ed69b9310d86e/pupil_src/launchables/player.py#L212

I'd recommend creating a custom plugin if you want to have some UI. You can read about custom plugin development here: https://docs.pupil-labs.com/developer/core/plugin-api/

papr 16 October, 2019, 15:32:31

@wrp @user-ff9c49 also be aware that the origin for the norm coordinates is the bottom left of the image.

wrp 16 October, 2019, 15:32:45

revised my answer above, thanks @papr

user-ff9c49 16 October, 2019, 16:29:06

@wrp @papr I am sorry, but it is not entirely clear to me... I will explain my situation: I am showing a visual stimulus (using Psychtoolbox, Matlab) on a screen with a resolution of 1920, 1080 px. At the same time through UDP protocol (python-matlab) I am receiving in real time the coordinates x and y (norm_pos_x, norm_pos_y). Because these coordinates go from 0 to 1 (related to the world camera), I do not know where they are on the screen (location in pixels). So I would like to know when these coordinates are in a specific position on the screen (for example: 480, 540 px). So, to do this I have to convert the values of x,y coordinates into pixel values on the screen where the target appears... Should I first define the screen surface using Apriltags? Thank you very much.

wrp 16 October, 2019, 16:32:19

@user-ff9c49 yes - define a surface with AprilTags as markers. Then you will be able to understand gaze relative to the surface (e.g. your screen) in norm coords, then can map back to pixel space. Relevant example that uses surfaces and maps from norm surface coords to screen coords: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py#L85-L95

user-ff9c49 16 October, 2019, 16:33:37

@wrp thanks a lot! I will work on it!

user-ff9c49 17 October, 2019, 11:36:27

@wrp Hello! I defined the surface using Apriltags and I added it in the surface tracker section (called surface 1). After this...Are norm_pos_x and norm_pos_y (which come from pupil capture in real time) related to that surface?

papr 17 October, 2019, 11:41:42

@user-ff9c49 hey :) the norm_pos meaning depends on the type of data you are looking at. The pupil norm_pos relates to the eye image, the gaze norm_pos relates to the world image, and gaze_on_srf norm_pos relates to the surface. What topic/data type are you subscribed to? To receive data in relation to your surface, you will have to subscribe to "surface".

user-ff9c49 17 October, 2019, 11:57:55

@papr Hi! I was telling pupil labs to send me gaze data... so then I will have to switch to surface data... Once I do this and I have defined the surface using Apriltags (I will save that surface)... I will get x and y coordinates... Will these coordinates represent the previously defined surface? After this I will have to translate those coordinates into pixels... Thanks! Pablo

papr 17 October, 2019, 12:02:06

If you subscribe to the surface data, it will include gaze data that has been mapped onto the surface, yes. You can convert the gaze norm_pos to pixel locations (given your surface is e.g. an image with a known resolution) by multiplying the norm_pos with the resolution.

papr 17 October, 2019, 12:02:32

But yes, your description of the procedure is correct.

user-ff9c49 17 October, 2019, 13:36:42

@papr I have already subscribed through sub.setsockopt_string(zmq.SUBSCRIBE, 'surface'), before I had 'gaze'... so now the data I get come from surface.

The problem is that these data only appear when the Apriltags (with which I have previously defined the screen) are captured by the camera; if there are no Apriltags in the visual field of the camera, I do not receive any data... Should the Apriltags always be present in the visual field in order to receive the data (msg.items)?

user-ff9c49 17 October, 2019, 13:55:21

@papr In other words... as soon as the camera does not detect the markers, it does not send data... This would imply that during a visual experiment, the markers must also be present on the screen where the visual stimulus will appear?

papr 17 October, 2019, 14:03:44

@user-ff9c49 yes, that is correct. Without the markers, the surface cannot be tracked, and therefore it is not possible to map gaze onto it.

user-ff9c49 17 October, 2019, 14:04:10

@papr Thanks!

user-a82c20 17 October, 2019, 15:59:28

My plugin uses local_g_pool.get_timestamp() to get timestamps of its events. Now I need to link these timestamps with the eye event timestamps. Are they directly relatable? I.e., can I just merge my plugin data from the exported csv files with the eye0 data from pupil_positions.csv, and then sort by timestamp to get a sequence of events?

user-ff9c49 17 October, 2019, 16:19:41

@papr Hi again! After running the following part (to receive data from the surface):

surface_name = "screen"
topic, msg = sub.recv_multipart()
gaze_position = loads(msg, encoding="utf-8")
if gaze_position["name"] == surface_name:
    gaze_on_screen = gaze_position["gaze_on_surfaces"]
    print(gaze_on_screen)

I get the following:

[{'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.333638], 'timestamp': 30107.333638},
 {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.338397], 'timestamp': 30107.338397},
 {'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.341707], 'timestamp': 30107.341707},
 {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.346466], 'timestamp': 30107.346466},
 {'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.349776], 'timestamp': 30107.349776},
 {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [1.6850792169570923, 0.20157144963741302], 'confidence': 0.017619357408812316, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.354535], 'timestamp': 30107.354535},
 .......etc.]

Which are the data corresponding to the gaze coordinates (x, y) of the right and left eye? Would they be the following?

Right eye: 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182]
Left eye: 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663]

papr 17 October, 2019, 16:37:02

Yes, the number after gaze.3d. identifies which eye was used to generate the datum. If you see .01. it is a binocular datum, which is composed from both eyes

papr 17 October, 2019, 16:37:54

@user-ff9c49 also be aware of the on_srf field, which tells you if the gaze point was actually on the surface or not.

user-a7d017 21 October, 2019, 03:12:51

@user-c5fb8b @papr Hello again, I am using the Bisector class method by_ts to retrieve one datum from the gaze positions Bisector object. I used the first timestamp of my recording as argument for by_ts and the returned object is a Serialized_Dict:

Serialized_Dict(mappingproxy({'topic': 'gaze.3d.01.', 'eye_centers_3d': mappingproxy({0: (20.774985455709622, 16.915129077756458, -18.54670009963165), 1: (-40.13386724526508, 15.28138313906937, -19.847435981683557)}), 'gaze_normals_3d': mappingproxy({0: (-0.33739927208901044, 0.07517266051180616, 0.9383553710111023), 1: (-0.1724254950770747, 0.10358711446607727, 0.9795606966206931)}), 'gaze_point_3d': (-98.5101095431107, 47.00852398806294, 312.6064929173059), 'confidence': 1.0, 'timestamp': 1584.016145, 'base_data': (mappingproxy({'topic': 'pupil', 'circle_3d': mappingproxy({'center': (-3.0519413693505326, 4.727869870084321, 94.76084000172979), 'normal': (-0.6312904829970989, 0.5401029664294477, -0.5565618669396967), 'radius': 2.3487197325443194}), 'confidence': 1.0, 'timestamp': 1584.017338, 'diameter_3d': 4.697439465088639, 'ellipse': mappingproxy({'center': (76.15720887466523, 126.83202490778612), 'axes': (15.657013090213452, 30.744993262109645), 'angle': -41.10445613827838}), 'norm_pos': (0.39665212955554807, 0.33941653693861396), 'diameter': 30.744993262109645, 'sphere': mappingproxy({'center': (4.523544426614654, -1.753365727069051, 101.43958240500615), 'radius': 12.0}), ....

I am only interested in accessing the values of key 'gaze_point_3d' for my calculations. I tried a few different ways to access the values, such as .get() and trying indexes, but none is working. What should I use to access the values?

user-c5fb8b 21 October, 2019, 09:46:55

@user-a7d017 What happens when you access by key or use .get()? Do you get an error or an exception? Serialized_Dict implements Python's __getitem__(), which is used when indexing an object with square brackets [], and also implements .get(). So I'd like to know what fails for you. Without having looked at it in more detail, I feel like this should work, for example:

mydict = ... # whatever you did to get the dict
print(mydict["topic"]) # should print "gaze.3d.01"
print(mydict.get("topic")) # should print the same
user-a7d017 22 October, 2019, 03:41:39

@user-c5fb8b

With this code:

gaze_bisector = self.g_pool.gaze_positions
gaze_datum = gaze_bisector.by_ts(1584.016145)
print(gaze_datum.get("gaze_point_3d"))

I receive this error:

Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\player.py", line 583, in player
    p.on_notify(n)
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 160, in on_notify
    self.export_data(notification["ts_window"], notification["export_dir"])
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 196, in export_data
    print(gaze_datum.get(gaze_point_3d))
NameError: name 'gaze_point_3d' is not defined

However I have just tried

print(gaze_datum['gaze_point_3d'])

which did work. Thanks

user-a7d017 22 October, 2019, 03:42:25

@user-c5fb8b In the future how can I format the discord posts to have different background/border for code snippets like in your posts?

user-c5fb8b 22 October, 2019, 06:31:41

@user-a7d017 From looking at your error message I figured out what went wrong:

Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\player.py", line 583, in player
    p.on_notify(n)
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 160, in on_notify
    self.export_data(notification["ts_window"], notification["export_dir"])
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 196, in export_data
    print(gaze_datum.get(gaze_point_3d))
NameError: name 'gaze_point_3d' is not defined

See the last line: print(gaze_datum.get(gaze_point_3d)) You simply forgot enclosing quotation marks around the string key. This should have worked as well: print(gaze_datum.get("gaze_point_3d"))

user-ff9c49 22 October, 2019, 15:01:06

@papr So for example --> 'on_surf': False does this mean that the gaze is not within the predefined surface?

papr 22 October, 2019, 15:01:29

@user-ff9c49 correct

papr 22 October, 2019, 15:02:14

This is determined by the norm_pos of the mapped gaze datum

papr 22 October, 2019, 15:03:28

If either of the two coordinates, x or y, is not in the [0, 1] interval, on_srf is set to false.
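
Expressed as code, that check is essentially the following (the first example uses norm_pos values from the surface output pasted above):

def is_on_surface(norm_pos):
    # on_srf is true only if the surface-mapped gaze lies inside the
    # unit square of surface coordinates.
    x, y = norm_pos
    return 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0

print(is_on_surface([1.685, 0.202]))  # -> False (matches on_surf above)
print(is_on_surface([0.4, 0.7]))      # -> True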

user-ff9c49 22 October, 2019, 15:05:51

@papr So [0,1] interval refers only to the defined surface?

papr 22 October, 2019, 15:08:11

These are the surface boundaries in surface coordinates

papr 22 October, 2019, 15:08:34

That is why we check the norm_pos of the surface mapped gaze, not of the original gaze

user-ff9c49 22 October, 2019, 15:28:14

@papr And to convert these norm_pos coordinates (x, y from "gaze_on_surfaces") directly to pixels, it would simply be a matter of knowing the resolution of the screen... for example, with screen_size = 1920x1080 px:

norm_pos coord. (x, y)  ->  screen coord. (x, y)
(0, 0)                  ->  (0, 0)
(0.5, 0.5)              ->  (960, 540)
(1, 1)                  ->  (1920, 1080)

papr 22 October, 2019, 15:28:33

Correct

papr 22 October, 2019, 15:29:28

Please be aware that Pixel coordinates often have their origin in the top left though

papr 22 October, 2019, 15:29:57

Which means 0,0 maps to 0,1080

papr 22 October, 2019, 15:30:14

And 1,1 to 1920,0

user-ff9c49 22 October, 2019, 15:32:09

@papr True! Thanks for the advice!

user-ff9c49 23 October, 2019, 09:33:33

@papr Hi! I guess the use of markers is because you do not assume that the subject is static in front of the area defined by the markers (for example a specific screen). In the case where we know that the subject's position is static (ensuring stabilization with a bite bar) in front of a specific screen, could this screen be defined only through the calibration process (without the use of Apriltags)? Once the calibration area is defined, would the x and y coordinates (from 0 to 1) belong only to that calibration window, or would they still belong to the whole world camera area?

user-c5fb8b 23 October, 2019, 09:54:45

@user-ff9c49 The norm_pos coordinates are always relative to the whole world camera image. Did you record the calibration procedure as well? Then it should be possible to access the coordinates of your calibration area in world camera coordinates by using the offline calibration plugin, which will detect the markers in the recording automatically. But it will be a bit tricky to access the data, and you will then have to do a manual conversion of gaze data into (your custom) surface coordinates. If you did not record the calibration procedure, you can only do this fully manually.

The much easier solution would be to use apriltag markers for your surface, as this is exactly what the surface tracker plugins are designed for. The calibration procedure is not intended to be used as an "area tracker".

user-ff9c49 23 October, 2019, 10:34:55

@papr @user-c5fb8b My situation is as follows (maybe you can advise me regarding the problems I am going to describe):

I am using Pupil Labs to perform a visual experiment in which I intend to measure saccadic eye movements. The projection of the visual stimulus is done through Psychtoolbox (Matlab). So I first made a UDP connection to send the data from Python to Matlab and have the information in real time. The idea is that this information will be used in Matlab to perform a gaze-contingent experiment (through Psychtoolbox - Matlab). Mainly I have two issues:

1) Delay. There is a significant delay in sending and receiving data (from when I send them with Python to when I receive and process them in Matlab). This delay would be critical for the gaze-contingent purpose.

2) Screen definition and norm_pos (x, y) conversion to pixels for gaze contingency. First I thought of placing Apriltags on the frame of the screen; the problem is that the experiment must be performed in low-light conditions and therefore the detection of such Apriltags would not be reliable. I have also thought about projecting them via Matlab (in the corners of the screen while the experiment is running), but in this case I think they could influence the results because they could distract the subject. That is why I asked if there is any way to define the screen without the use of these markers (for example defining the window through calibration, so that after the calibration the x and y variables refer only to that "calibrated area" and not to the world camera).

user-ff9c49 23 October, 2019, 11:25:51

@papr @user-c5fb8b I also wanted to ask the following:

When subscribing with (zmq.SUBSCRIBE, 'gaze'), the data from each eye are received separately in different "frames" (line items) and in random order (for example, I receive 2 lines of data from the right eye, then 1 line from the left eye, etc.). On the other hand, if I subscribe with (zmq.SUBSCRIBE, 'surface'), I get the data from both eyes in the same "frame" (line item).

I know that the reason for this randomness (in the case of subscription 'gaze') is because the processes of both eyes are executed independently of each other.

But why is it that in the 'gaze' subscription the information is received from each eye in different "frames" and in the case of the 'surface' subscription the information is received in the same frame?

For example, using (zmq.SUBSCRIBE, 'gaze') I get:

dict_items([('topic', 'gaze.3d.1.'), ('eye_center_3d', [-8.459387 ....... etc ....... ('norm_pos', [0.534134, 0.9829978])])
dict_items([('topic', 'gaze.3d.1.'), ('eye_center_3d', [-3.312847 ....... etc ....... ('norm_pos', [0.521251251, 0.9712988])])
dict_items([('topic', 'gaze.3d.0.'), ('eye_center_3d', [-5.324567 ....... etc ....... ('norm_pos', [0.534121312, 0.9123177])])

Using (zmq.SUBSCRIBE, 'surface') I get:

dict_items([{'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], .... etc}, {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], .... etc])

So using the 'gaze' subscription I get information from right and left eye in different dict_items lines ("frames"), but in 'surface' subscription I get the information from both eyes in the same dict_item line. Is there a reason? Thanks!

papr 23 October, 2019, 11:45:41

1) The lowest delay that you can get is by subscribing directly instead of having an additional step to send the data via UDP.

2) We currently do not support markerless area-of-interest tracking.

papr 23 October, 2019, 11:52:05

3) The difference must be in your code and how you parse the message. All messages sent by Capture contain at least two frames:

a) topic - a string indicating the content of b)
b) payload - a msgpack-encoded dictionary

It looks like your code for receiving gaze returns information about both frames, topic and payload, while your surface code only returns information about the payload.

Please be aware that the payload includes a field named "topic", which should correspond to the topic in frame a).
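
A rough sketch of receiving and parsing both frames consistently, assuming Pupil Remote is reachable on its default port 50020; the subscription filter is just an example.

import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")

while True:
    # Every message consists of (at least) two frames:
    # a) the topic as a plain string, b) the msgpack-encoded payload dict.
    frames = sub.recv_multipart()
    topic, payload = frames[0].decode(), frames[1]
    message = msgpack.unpackb(payload, raw=False)
    # The payload also carries a "topic" field that should match frame a).
    print(topic, message["topic"], len(message.get("gaze_on_surfaces", [])))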

user-ff9c49 23 October, 2019, 13:18:11

@papr Regarding your point number 2... sorry, but just to clarify: after the calibration step, to which area do the x and y coordinates (0 to 1) correspond? To the area delimited in the calibration or to the area of the entire world camera?

papr 23 October, 2019, 13:18:31

to the area of the entire world camera

papr 23 October, 2019, 13:19:37

Pupil Core is not a remote eye tracker, where you calibrate in relation to a real world area. Instead you calibrate to a coordinate system that is relative to the subject's head (the world camera).

user-ff9c49 23 October, 2019, 13:20:45

@papr Ok! Thanks a lot!

papr 23 October, 2019, 13:21:35

@user-ff9c49 In your case, I would recommend giving displaying the markers in the corners a try. You can make a trial run and see if subjects actually pay attention to them.

papr 23 October, 2019, 13:22:08

Actually, that would be a great study on its own... 🤔

user-ff9c49 23 October, 2019, 13:24:58

@papr Yes... I am currently working on this because it looks like one of the most promising solutions to that issue...

user-ff9c49 24 October, 2019, 16:01:40

@papr Hi again! Regarding the screen detection (using Apriltags):

1) Can the distance at which the world camera is placed and the size of the screen be factors that hinder the detection of Apriltags? I have shown 4 markers, one in each of the corners. At the beginning the camera was placed approx. 57 cm from the screen and the screen dimensions are approx. 38x30 cm. After observing serious problems in the detection, I have changed the size of the markers, their position (closer to the center of the screen), and the distance of the camera several times, and inverted the background and marker colors, but I have not achieved constant and accurate detection.

2) Can the curvature distortion produced by the world camera be an issue in the detection?

3) Any advice?

papr 24 October, 2019, 16:17:06

@user-ff9c49 Please give our new v1.17 release a try. It includes improved marker detection by default.

papr 24 October, 2019, 16:17:30

But yes, distortion and size of the markers matter.

user-ff9c49 25 October, 2019, 12:52:53

@papr Hi! Just to clarify...again a question about the conversion of norm_pos coordinates to screen pixel coordinates...

The screen defined with the markers has a resolution of 1920x1080px and the world camera, 1280x720px. The origin (screen) is: 0, 1080 (means [0, 0] in norm_pos) and 1920,0 (means [1, 1] in norm_pos).

For that conversion, should I take into account the resolution of the screen defined by the markers and the resolution of the world camera, or just the screen resolution? In this case it would be:

x_norm_pos_pixels = norm_pos[0] * x_screen
y_norm_pos_pixels = norm_pos[1] * y_screen * -1

papr 25 October, 2019, 13:56:16

@user-ff9c49 you do not have to care about the world camera resolution

user-2be752 25 October, 2019, 19:27:38

Hi, is there a particular set of apriltags that is detected better than others? Also, I'm printing my markers (I can't show them through Matlab or something like that), so I was wondering if you had recommendations to facilitate their detection. Thanks!

wrp 26 October, 2019, 01:56:19

@user-2be752 If you're not already, please make sure you're using the latest release of Pupil Core software (v1.17). Regarding detection robustness of apriltags, please refer to the AprilTags publications: https://april.eecs.umich.edu/papers/details.php?name=krogius2019iros - note that performance will differ with a Pupil Core headset due to the different camera used.

Quick tips: You might want to ensure that tags are large enough and have enough white border.

user-c9d205 29 October, 2019, 10:48:00

Any idea why I would ever get negative timestamp values?

papr 29 October, 2019, 11:04:10

@user-c9d205 If you sync your time with a different clock, that is definitely possible.

user-c9d205 29 October, 2019, 11:07:39

I am running req.send_string('T 0.0') before fetching the gaze and world data. Not only do they not match (not even close), the gaze timestamps begin negative

user-c9d205 29 October, 2019, 11:07:51

Am I doing something wrong?

papr 29 October, 2019, 11:08:29

@user-c5fb8b could you check if you can reproduce this ☝

user-c9d205 29 October, 2019, 11:08:57

I can send my code in whole, its not very complicated

papr 29 October, 2019, 11:09:05

@user-c9d205 that would be helpful!

user-c9d205 29 October, 2019, 11:10:14

recv_pictures.py

user-c9d205 29 October, 2019, 11:22:21

Any news?

papr 29 October, 2019, 11:22:54

@user-c9d205 We will come back to you as soon as we have performed the test.

papr 29 October, 2019, 11:49:32

@user-c9d205 are all of the timestamps negative? And in what range are they?

user-c5fb8b 29 October, 2019, 11:58:11

@user-c9d205 It might be that the first incoming gaze timestamps are not yet affected by the time sync. After a couple of items, the gaze and world timestamps should be in the same range. Can you check this?

user-c9d205 29 October, 2019, 12:07:48

The thing is I am syncing the time before even subscribing to the topics, I'll check anyway

papr 29 October, 2019, 12:12:31

@user-c9d205 yes, this is what I find weird, too

user-c9d205 29 October, 2019, 12:13:34

Timesync successful.
World timestamp: 12.775317878999886, Gaze timestamp: 12.982123878999573
World timestamp: -0.11712121599975944, Gaze timestamp: 12.986158378999335
World timestamp: -0.08361121600046317, Gaze timestamp: 12.99019287899955
World timestamp: -0.050102216000595945, Gaze timestamp: 12.994227378999767
World timestamp: -0.01659221600039018, Gaze timestamp: 12.998261878999529
World timestamp: 0.016917783999815583, Gaze timestamp: 13.00229637899929
World timestamp: 0.05042778400002135, Gaze timestamp: 13.006330878999506
World timestamp: 0.08393678399988858, Gaze timestamp: 13.010365378999722
World timestamp: 0.11744678400009434, Gaze timestamp: 0.14493528399998468
World timestamp: 0.1509567840003001, Gaze timestamp: 0.1489697840002009
World timestamp: 0.18446678399959637, Gaze timestamp: 0.15300428399996235
World timestamp: 0.21797678399980214, Gaze timestamp: 0.15703878400017857

user-c9d205 29 October, 2019, 12:13:42

These are the first few timestamps

user-c9d205 29 October, 2019, 12:14:39

It does look like after a couple of stamps they sync up, like @user-c5fb8b said

user-c9d205 29 October, 2019, 12:18:09

Can you reproduce this at your end?

user-c5fb8b 29 October, 2019, 12:31:42

@user-c9d205 I see the same behavior with the gaze time syncup delay over here. We will have a look at the cause of this. Can you give us some info on what setup you are using? Particularly:
- operating system
- Pupil version
- type of headset

user-c9d205 29 October, 2019, 12:34:27

Ubuntu 16.04, version 1.17.6. How do I know the type of headset?

user-c5fb8b 29 October, 2019, 12:35:58

Is it a recent version of the Pupil Core headset? There are also some very old versions still around. And there's the virtual reality headset extension for the HTC Vive.

user-c9d205 29 October, 2019, 12:40:39

very recent

user-c5fb8b 29 October, 2019, 12:41:38

Ok thanks! We will take a deeper look at the underlying issue.

user-c9d205 29 October, 2019, 12:47:03

Happy to help 🙂

user-c9d205 29 October, 2019, 12:47:09

Thank you too

user-ff6753 30 October, 2019, 12:03:54

Can someone please give me a simple link that will give me the Pupil Core software v1.17 for my Mac? I can't find it anywhere. Thank you!

user-c5fb8b 30 October, 2019, 12:13:15

@user-ff6753 Here you go: https://github.com/pupil-labs/pupil/releases/download/v1.17/pupil_v1.17-6-g5a872c8c_macos_x64.zip

In the future you can find the bundles on GitHub in the releases section: https://github.com/pupil-labs/pupil/releases There you'll just have to scroll to the end of the current release; there's an Assets section with the bundles for all operating systems.

user-ff6753 30 October, 2019, 12:39:13

Cheers!!!

user-ff6753 30 October, 2019, 13:15:32

Okay. I've got all 3 apps loaded up on my MacBook Pro but none of them will open. I've rebooted but nothing. Glasses plugged in. I've opened my music recording software and it doesn't recognize that anything is plugged in. Hmmmm. Any suggestions? I'm following the Pupil Labs Getting Started page. Started with Capture but nothing.

papr 30 October, 2019, 13:33:26

@user-ff6753 Is there any window opening if you double-click the application?

papr 30 October, 2019, 13:33:43

@user-ff6753 I am looking for any kind of error message you might be getting.

user-ff6753 30 October, 2019, 13:52:28

Absolutely nothing. They are in my apps folder. I double click and nothing. Tried to open all 3 several times.

papr 30 October, 2019, 15:45:40

@user-ff6753 Could you run the following command in a terminal:

/Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture

and let us know what the output is?

papr 30 October, 2019, 15:46:47

Also, please be aware that the bundle is not supported on macOS versions older than macOS Sierra 10.12 or on Macs with an Intel Xeon processor.

user-ff6753 30 October, 2019, 17:28:23

I'm running 10.11.6 El Capitan. Can I get an older version?

papr 30 October, 2019, 17:30:06

@user-ff6753 v1.15 should be supported on your system: https://github.com/pupil-labs/pupil/releases/download/v1.15/pupil_v1.15-71-g30eb56e4_macos_x64.zip

user-ff6753 30 October, 2019, 18:07:46

This version doesn't work either. I made sure to unload the newer versions.

papr 30 October, 2019, 18:54:13

@user-ff6753 Could you run the following command in a terminal:

/Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture

And let us know what the output is?

user-ff6753 30 October, 2019, 21:03:33

[632] Error loading Python lib '/Applications/Pupil Capture.app/Contents/MacOS/Python': dlopen: dlopen(/Applications/Pupil Capture.app/Contents/MacOS/Python, 10): Symbol not found: _clock_getres
  Referenced from: /Applications/Pupil Capture.app/Contents/MacOS/Python (which was built for Mac OS X 10.12)
  Expected in: /usr/lib/libSystem.B.dylib
  in /Applications/Pupil Capture.app/Contents/MacOS/Python

papr 30 October, 2019, 21:15:36

@user-ff6753 Oh, I just saw that I made a mistake when I looked up the macOS 10.11 compatible version. Please try v1.14 https://github.com/pupil-labs/pupil/releases/download/v1.14/pupil_v1.14-9-g20ce19d3_macos_x64_.zip

user-ff6753 30 October, 2019, 21:31:42

Same result unfortunately. Any other options? Thanks again for helping me!

papr 30 October, 2019, 21:33:39

@user-ff6753 I am sorry about that. Could you try this (very) old version, just to check? If this one does not work, there is something else going on... https://github.com/pupil-labs/pupil/releases/download/v1.11/pupil_v1.11-4-gb8870a2_macos_x64_scipy_v1.1.zip

user-ff6753 30 October, 2019, 21:43:40

Same thing. I have a sneaking suspicion that I'm doing something wrong now. I've never had such a difficult time getting any of my multiple MIDI controllers to work.

papr 30 October, 2019, 21:45:19

@user-ff6753 Two questions: 1) Could you please repeat running /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture in a terminal. 2) MIDI controllers? I do not fully understand the connection.

user-ff6753 30 October, 2019, 21:49:09

I'm an electronic musician/educator working with a chiropractor that specializes in clients with severe physical disabilities. We want to introduce music production to some of them using Ableton Live.

papr 30 October, 2019, 21:50:30

@user-ff6753 That sounds really cool! And you want to use eye tracking as a method of interaction with ableton live?

user-ff6753 30 October, 2019, 21:51:04

Exactly!

user-ff6753 30 October, 2019, 21:52:08

Illegal instruction: 4

user-ff6753 30 October, 2019, 21:52:23

That's all it said this time.

papr 30 October, 2019, 21:53:22

Nice! I would like to hear their experience with your setup (once we got Pupil Capture running successfully 😅 )

user-ff6753 30 October, 2019, 21:53:47

We're not getting any younger!

user-ff6753 30 October, 2019, 21:54:24

In my head I see this being really cool for everyone involved.

papr 30 October, 2019, 21:54:48

Could you make a screenshot of the "About This Mac" window? You can open it through the "Apple" menu in the top left of your Mac. Attached you can find my machine as example.

Chat image

user-ff6753 30 October, 2019, 21:58:53

Chat image

papr 30 October, 2019, 22:02:42

@user-ff6753 Mmh, v1.11 should have worked on that machine. Unfortunately, there is no way for me to debug this. There is an alternative though. Instead of running the bundled application, you could run from source. These are the instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-macos.md

papr 30 October, 2019, 22:04:51

Some of these instructions might take a while to complete since Homebrew does not provide pre-compiled versions for macOS 10.11 anymore, as macOS 10.11 has reached its end of life to my knowledge.

user-ff6753 30 October, 2019, 22:07:06

It's getting pretty late here in France and my brain is a little too fried for this. Will tackle tomorrow morning. Will you be around same bat time tomorrow for possible Q&A?

papr 30 October, 2019, 22:07:40

I am based in Berlin, so yes 👍

user-ff6753 30 October, 2019, 22:08:47

Cheers!

End of October archive