💻 software-dev


user-b14f98 04 October, 2021, 15:20:13

Hey guys, I see that the average eyeball size (sphere_radius) has changed from 12 to 10.39230485. The documentation does not yet reflect this change. In pupil_gaze_positions.txt: "sphere_radius - radius of the eyeball. This is always 12mm (the anthropomorphic avg.) We need to make this assumption because of the single camera scale ambiguity."

user-b14f98 04 October, 2021, 15:20:51

We are trying to render the pupil size accurately in our 3D blender model, so any input you can offer would be appreciated.

user-b14f98 04 October, 2021, 15:21:52

...for example, I remember there was a bug in which radius/diameter were confused. I'm not sure when this was addressed, and if it's something we should take into account.

papr 12 October, 2021, 07:24:28

@user-b14f98 @user-3cff0d For your information: https://github.com/pupil-labs/pye3d-detector/pull/36
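For what it's worth, the new default quoted above appears to be numerically 6·√3 mm, i.e. √(12² − 6²). This is purely a numeric observation, not a statement about the model change in the linked PR:

```python
import math

# The new sphere_radius default, as quoted in the message above.
new_default = 10.39230485

# Numerically this matches 6 * sqrt(3) = sqrt(12**2 - 6**2).
candidate = 6 * math.sqrt(3)

print(round(candidate, 8))  # 10.39230485
```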

user-ed7a94 17 October, 2021, 13:14:13

hello everyone, I want to get access to real-time data via the network API. I followed the instructions at this link: https://docs.pupil-labs.com/developer/core/network-api/#pupil-groups. But the Python demo did not show any data; it just led to infinite waiting. The code is like this:

```python
import zmq
import msgpack  # we need a serializer

ctx = zmq.Context()

# The REQ socket talks to Pupil Remote and receives the session's unique IPC SUB PORT
pupil_remote = ctx.socket(zmq.REQ)

ip = 'localhost'  # If you talk to a different machine, use its IP.
port = 50020  # The port defaults to 50020. Set in the Pupil Capture GUI.
pupil_remote.connect(f'tcp://{ip}:{port}')

# Request 'SUB_PORT' for reading data
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

# Request 'PUB_PORT' for writing data
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()

# Assumes sub_port to be set to the current subscription port.
# Once the subscription is successful, you will start receiving data.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('gaze.')  # receive all gaze messages

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    print(f"{topic}: {message}")
```

user-ed7a94 17 October, 2021, 13:14:31

then I set "topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)", and it reports:

```
Traceback (most recent call last):
  File "getbackbonemessages.py", line 37, in <module>
    topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)
  File "D:\Users\splut\Anaconda3\envs\Furhat\lib\site-packages\zmq\sugar\socket.py", line 491, in recv_multipart
    parts = [self.recv(flags, copy=copy, track=track)]
  File "zmq\backend\cython\socket.pyx", line 791, in zmq.backend.cython.socket.Socket.recv
  File "zmq\backend\cython\socket.pyx", line 827, in zmq.backend.cython.socket.Socket.recv
  File "zmq\backend\cython\socket.pyx", line 191, in zmq.backend.cython.socket._recv_copy
  File "zmq\backend\cython\socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
  File "zmq\backend\cython\checkrc.pxd", line 20, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
```

Can anyone help with this?

papr 17 October, 2021, 16:36:43

Ah, please be aware that gaze is only available after a successful calibration. Try subscribing to pupil for testing the script
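To avoid both infinite blocking and the zmq.Again exception mentioned above, one option is a zmq.Poller with a timeout. A minimal self-contained sketch; the in-process publisher, endpoint name, and sample payload are all made up here to stand in for Pupil Capture's real backbone:

```python
import msgpack
import zmq

ctx = zmq.Context.instance()

# In-process publisher standing in for Pupil Capture's IPC backbone (hypothetical).
pub = ctx.socket(zmq.PUB)
pub.bind('inproc://fake-backbone')

subscriber = ctx.socket(zmq.SUB)
subscriber.connect('inproc://fake-backbone')
subscriber.subscribe('pupil.')  # pupil data is available without calibration

poller = zmq.Poller()
poller.register(subscriber, zmq.POLLIN)

sample = {'topic': 'pupil.0', 'confidence': 0.87}  # made-up payload
message = None
for _ in range(100):  # retry until the subscription has propagated
    pub.send_multipart([b'pupil.0', msgpack.packb(sample, use_bin_type=True)])
    if poller.poll(timeout=50):  # wait at most 50 ms instead of blocking forever
        topic, payload = subscriber.recv_multipart()
        message = msgpack.unpackb(payload)
        break

print(message)
```

The poll timeout is the key difference from the blocking recv_multipart() call: the loop can do other work (or give up) between polls.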

user-0e7e72 18 October, 2021, 16:09:42

Could someone point me to the source code of the Marker detection for the surface tracker?

papr 18 October, 2021, 16:21:22

Apriltag or circle markers?

user-b1782d 18 October, 2021, 18:05:39

hey guys, is there any way of using Pupil with my own hardware instead of Pupil Core?

papr 18 October, 2021, 18:15:24

There is. The cameras either need to fulfill specific constraints (https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498), or you can use this third-party backend https://github.com/Lifestohack/pupil-video-backend, or you can write your own video backend.

user-0e7e72 18 October, 2021, 18:36:54

For the Apriltags please

papr 18 October, 2021, 18:37:26

https://github.com/pupil-labs/apriltags

user-55fd9d 18 October, 2021, 19:13:54

Hey everyone! I am trying to get normalized gaze position (on defined surface with apriltags) via python.

```python
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('surfaces.Surface1')

messages = []
start_time = time.time()
while time.time() < start_time + 5:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    messages.append(message)

x = []
y = []
for message in messages:
    for j in range(len(message[b'gaze_on_surfaces'])):
        x.append(message[b'gaze_on_surfaces'][j][b'norm_pos'][0])
        y.append(message[b'gaze_on_surfaces'][j][b'norm_pos'][1])
```

I defined the surface (Surface1) and performed all calibrations. The gaze mapping looks pretty good in the GUI. I collected data for 5 seconds while looking at the middle of the AOI and then averaged the position data (x, y) in 25 ms bins. I tried this several times and every time it looks weird: after a short while the gaze position drifts away from (0.5, 0.5), despite me looking at the same point. I would appreciate any help!

Chat image
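The 25 ms binning described above can be sketched like this, using synthetic timestamped samples in place of real surface gaze data (the sample count and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic (timestamp, x, y) samples: 5 s of noisy gaze around (0.5, 0.5).
t = np.sort(rng.uniform(0.0, 5.0, size=1000))
xy = 0.5 + rng.normal(0.0, 0.01, size=(1000, 2))

# Assign each sample to a 25 ms bin and average x/y per bin.
bin_width = 0.025
bins = (t // bin_width).astype(int)
bin_means = {b: xy[bins == b].mean(axis=0) for b in np.unique(bins)}
```

With real data, the timestamps would come from the gaze datums themselves rather than wall-clock arrival time, which matters given the buffering behaviour discussed later in this thread.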

papr 18 October, 2021, 19:15:53

Please filter low-confidence data points:

```python
for message in messages:
    for gaze in message[b'gaze_on_surfaces']:
        if gaze[b'confidence'] > 0.6:
            x.append(gaze[b'norm_pos'][0])
            y.append(gaze[b'norm_pos'][1])
```

user-55fd9d 27 October, 2021, 14:02:34

Thanks! This is an important point. However, this was not the problem: I was collecting the data from the wrong time points.

```python
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('surfaces.Surface1')
```

After I subscribe to the topic, it buffers the data from that point on. When I later receive, it delivers the oldest data it hasn't sent yet, not the most recent.
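Given that buffering behaviour, if only the newest sample matters, a common pattern is to drain the queue with zmq.NOBLOCK and keep the last message. A sketch with an in-process publisher standing in for the real backbone (endpoint name and payloads are made up):

```python
import msgpack
import zmq

def latest_message(sub):
    """Drain the SUB socket's backlog and return only the newest payload."""
    last = None
    while True:
        try:
            topic, payload = sub.recv_multipart(flags=zmq.NOBLOCK)
            last = msgpack.unpackb(payload)
        except zmq.Again:  # queue empty: return whatever we saw last
            return last

# Demo with an in-process PUB/SUB pair (hypothetical endpoint).
ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind('inproc://drain-demo')
sub = ctx.socket(zmq.SUB)
sub.connect('inproc://drain-demo')
sub.subscribe('surfaces.')

# Wait until the subscription is active, then queue three messages.
poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
while not poller.poll(timeout=50):
    pub.send_multipart([b'surfaces.Surface1', msgpack.packb({'seq': 0})])
for seq in (1, 2, 3):
    pub.send_multipart([b'surfaces.Surface1', msgpack.packb({'seq': seq})])

newest = latest_message(sub)  # older queued messages are read and discarded
print(newest)
```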

user-37ee59 25 October, 2021, 15:26:57

I've got one question regarding this sample. My code freezes when I call "subscriber.recv_multipart()". It is written that I always have to "recv_string()" after sending a string. But do I also always have to send a string in order to receive a message back?

Any ideas why my code freezes at that location?

Chat image

papr 25 October, 2021, 15:30:33

You need to differentiate between the socket types.

To interact with REQ ("request"), use send_string ("send request") and recv_string ("receive response") as pairs.

To interact with SUB ("subscription"), use recv_multipart(). So the code is correct.

One special thing is that you subscribe to gaze. Gaze is only available after calibration. For testing, you can subscribe to other topics that are available without calibration, e.g. pupil

user-37ee59 25 October, 2021, 15:31:44

Wow, perfect! It's working now. Thank you so much!

user-50567f 26 October, 2021, 21:51:41

Is there a code to reconstruct the eye positions from the data that was streamed through LSL?

papr 26 October, 2021, 21:52:55

If the data was recorded by the latest LSL relay plugin and 3d pupil detection was turned on, then yes.

user-50567f 26 October, 2021, 21:53:35

Yes! Where can I find the code?

papr 26 October, 2021, 21:54:42

I mean the data is stored in the xdf file recorded by LSL. Check out the LSL documentation on how to read xdf files. Is there something else you want/need to know to achieve your goal?

user-50567f 26 October, 2021, 21:55:53

The data is stored in XDF file! I’ll go through it! If I run into an issue I’ll get back to you!

user-37ee59 27 October, 2021, 09:27:17

I'm currently looking at data gathered by using Python and subscribing my socket to "pupil.". I am struggling to find out what all the labels, like "theta", "phi", "location", ... stand for. Is there some kind of sheet where I can look up the definitions?

nmt 27 October, 2021, 09:35:21

Check out the online documentation for descriptions of pupil and gaze data made available: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-37ee59 27 October, 2021, 10:17:22

I've got a question regarding the data received when subscribing to "pupil". Does that data already depict the movement of the eye, or does it only tell how the eye is shaped and where it thinks the middle of the iris should be? In other words, would it be possible to write my own eye tracking with this data, or do I need the data that I get when I subscribe to "gaze"? Does "gaze" include the data that tells me what position I am looking at, or just alignment information about the eye?

user-37ee59 27 October, 2021, 10:20:11

What I'm getting at is whether this data, which I got by subscribing to "pupil", corresponds to the movement of the eye, or is just noise.

Chat image

papr 27 October, 2021, 14:06:10

pupil data is eye movement data (to be exact: the current estimation of the eye position and direction) in eye camera coordinates. gaze is the estimated location the subject is looking at, in scene camera coordinates. To translate pupil to gaze you need a calibration function, which you can customize.
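As an illustration of what such a calibration function can look like, here is a least-squares affine fit from pupil coordinates to gaze coordinates. The data and the ground-truth mapping are entirely synthetic, and Pupil's real calibrations use richer models (2D polynomial regression or 3D bundle adjustment); this only shows the shape of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pupil positions in normalized eye-camera coordinates.
pupil = rng.uniform(0.0, 1.0, size=(50, 2))

# Made-up ground-truth affine map standing in for the subject's true geometry.
A = np.array([[1.2, 0.1], [-0.2, 0.9]])
b = np.array([0.05, -0.03])
gaze = pupil @ A.T + b  # scene-camera targets collected during "calibration"

# Fit the calibration: gaze ≈ [pupil, 1] @ W, solved by least squares.
X = np.hstack([pupil, np.ones((len(pupil), 1))])
W, *_ = np.linalg.lstsq(X, gaze, rcond=None)

def map_pupil_to_gaze(p):
    """Apply the fitted calibration to a single pupil position."""
    return np.append(p, 1.0) @ W
```

Because the synthetic data is noiseless and exactly affine, the fit recovers the mapping essentially perfectly; real pupil data is noisy, which is why calibration quality depends on the choreography and number of reference targets.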

user-37ee59 27 October, 2021, 16:28:16

Perfect. Thank you. That already helped me.

user-5d4724 28 October, 2021, 09:43:22

Hello All,

user-5d4724 28 October, 2021, 09:55:36

We are trying to communicate with Pupil Core from Matlab. We can start and stop recording remotely using Python scripts via Matlab. However, we are unable to send annotations using parts of remote_annotation.py from Matlab. We are using functions from this resource: https://github.com/pupil-labs/pupil-helpers/tree/master/python. It would be a great help if someone could share example scripts that help us achieve this; we are new to Python. We also tried sending annotations using ZeroMQ directly, but were unable to install it successfully in Matlab (Windows).

nmt 28 October, 2021, 12:54:42

Hi @user-5d4724 👋 . Is there a reason why you can't use Python alone for these communications?

user-5d4724 28 October, 2021, 16:46:24

Yes, basically the experiment we have designed requires the use of Matlab along with Python. That's why we are not using Python alone.

nmt 29 October, 2021, 12:51:49

Thanks for confirming. In that case, my suggestion would be to try to get ZeroMQ running with Matlab. I've not worked with calling Python functions from within Matlab, so I can't offer assistance in that regard.
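For reference, the annotation that remote_annotation.py sends over the PUB socket is just a msgpack-encoded dict in a two-frame message; building one looks roughly like this (field set based on the pupil-helpers example; the label and timestamp values are made up, and a real timestamp should come from Pupil's clock):

```python
import msgpack

def make_annotation(label, timestamp, duration=0.0):
    """Annotation payload in the shape used by pupil-helpers' remote_annotation.py."""
    return {
        'topic': 'annotation',
        'label': label,
        'timestamp': timestamp,  # should come from Pupil Capture's clock
        'duration': duration,
    }

note = make_annotation('trial_start', 1234.5678)

# What goes over the wire: a topic frame followed by a msgpack frame.
frames = [note['topic'].encode(), msgpack.packb(note, use_bin_type=True)]

# Round-trip check that the payload survives serialization.
decoded = msgpack.unpackb(frames[1])
```

Replicating exactly this two-frame multipart send in Matlab's ZeroMQ binding is the part that needs care; the payload itself is this simple.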

user-5b0955 28 October, 2021, 20:42:37

Hi guys, I have a question: we just bought Pupil Invisible. In our study, we want to capture the egocentric camera view from Pupil Invisible and three camera views from another three cameras (ZED cameras). Before buying the product, one of the Pupil Labs representatives informed us that Pupil Labs has a software application that helps to synchronously collect multiple camera views. I tried to find that software but could not. Could you please share the link to the software application? Thanks.

nmt 29 October, 2021, 13:21:09

Hi @user-5b0955 👋. We have indeed recently implemented open-source tooling that can be used to manually synchronise and display multiple videos (e.g. from a GoPro) alongside Pupil Invisible / Core recordings within Pupil Player. Further details to get this set up are below:

1) Script to generate Pupil-Player-compatible recordings from third-party video files: https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a
- This can be run via python generate_pupil_player_recording.py <path to third-party video>. See the requirements.txt file at the bottom of the linked webpage for the dependencies
- The script generates a folder based on the video file's name, moves the video to the new folder, and generates all necessary files

2) Video overlay plugin for Pupil Player with a manual temporal-alignment option: https://gist.github.com/papr/ad8298ccd1c955dfafb21c3cbce130c8
- See our docs on how to add the plugin to Pupil Player here: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin. If added successfully, you should be able to turn it on in the Plugin Manager menu
- Once you have opened the Pupil Invisible / Core recording in Pupil Player, you can drag & drop the new third-party video file(s), now called 'world.mp4', onto the Player window
- For each third-party video, an overlay should appear on the screen; you can adjust the temporal offset in the plugin's menu to manually align the videos so they are time-synced. Note: you would need an external event (e.g. hand clap, flash of light) with which to synchronize the videos
- The 'World Video Exporter' plugin will then export everything shown in Pupil Player (original recording + third-party video overlays)

Pupil Player can be downloaded from here: https://pupil-labs.com/products/core/ (scroll down to 'Download Desktop Software')

End of October archive