software-dev


user-6b3ffb 02 November, 2021, 11:36:22

Hello guys! I wanted to ask if there is a way to perform the calibration without using a screen. Can I start/stop the calibration process and find out whether it was successful through a ZeroMQ message?

papr 02 November, 2021, 11:41:33

Hi 🙂 You can use the single marker calibration method in physical mode. Then you can print the markers on e.g. paper and show them to the subject that way. After the calibration has been selected in the UI, it can be started via the Network API as usual (sending the C command via Pupil Remote). There will also be a notification telling you whether the calibration succeeded or failed.
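Roughly, a minimal Python sketch of that flow (SUB_PORT and C are standard Pupil Remote commands; the exact calibration notification topic names are my assumption and worth verifying):

import zmq
import msgpack

ip, remote_port = '127.0.0.1', 50020  # default Pupil Remote address

ctx = zmq.Context()

# Pupil Remote (REQ/REP): single-character commands
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect(f'tcp://{ip}:{remote_port}')

# ask for the SUB port and listen for calibration notifications
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('notify.calibration')

# start the currently selected calibration choreography
pupil_remote.send_string('C')
print(pupil_remote.recv_string())

# wait for the outcome notification
while True:
    topic, payload = subscriber.recv_multipart()
    notification = msgpack.loads(payload)
    print(topic.decode(), notification)
    # topic names assumed: notify.calibration.successful / notify.calibration.failed
    if topic in (b'notify.calibration.successful', b'notify.calibration.failed'):
        break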

user-55fd9d 04 November, 2021, 10:24:10

Hi everybody! I am trying to collect data via the network API and I have a problem with the speed of request processing. It takes around 25ms to request and get surface data, but I would really need something below 16ms. Is there a way to do that? Thanks! I tested the speed with the following code:

import time
import zmq
import msgpack

# ip and sub_port as configured for your Capture instance
ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('surfaces.Surface1')

recording_time = 5
messages = []
message_time = []
t0 = time.time()
while time.time() < t0 + recording_time:
    t1 = time.time()
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    t2 = time.time()
    messages.append(message)
    message_time.append(t2 - t1)

user-55fd9d 05 November, 2021, 17:28:40

First it's quick, but then it gets slow

Chat image

papr 06 November, 2021, 10:09:08

Hey, while doing this, could you please check whether the fps graph in the world window drops at the same time as the response time increases?

user-55fd9d 08 November, 2021, 13:52:16

Hi @papr! Thanks for the answer. Indeed, the speed of the requests depends on the frame rate of the world view camera. Is there any way to get data without that limitation? So that if there are no updates of the gaze I just get the most recent update without waiting for the next one to come?

papr 08 November, 2021, 13:55:33

Not sure if I understand. You are interested in surface-mapped gaze, correct? For that, you need two components: gaze and surface-locations. If one of them is not available, the surface gaze mapping will not work.

user-55fd9d 08 November, 2021, 14:01:09

Yes, I want surface-mapped gaze. If I request the position of the gaze on the surface, I get answers at a rate comparable to the frame rate of the world view camera; however, every message contains several "gazes" due to the higher frame rate of the eye cameras. I'd like to get messages at a rate comparable to the frame rate of the eye cameras, not of the world-view camera.

papr 08 November, 2021, 14:03:19

I understand, but in order to surface-map the gaze, the surface needs to be detected first, and this can only happen at the scene-camera sampling rate. Capture buffers gaze until the detection is done and afterwards maps and publishes the gaze as a batch.
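For reference, continuing from the snippet above, each surfaces.<name> message then carries that batch; a minimal sketch (field names such as gaze_on_surfaces, norm_pos and on_surf are taken from the surface datum format and worth double-checking):

# each surface message arrives at scene-camera rate but carries the gaze
# that was buffered while the surface was being detected
topic, payload = subscriber.recv_multipart()
surface_datum = msgpack.loads(payload)

for gaze in surface_datum.get('gaze_on_surfaces', []):
    x, y = gaze['norm_pos']                     # surface-normalized coordinates
    if gaze['on_surf'] and gaze['confidence'] > 0.6:
        print(gaze['timestamp'], x, y)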

user-55fd9d 08 November, 2021, 14:14:26

okay, thank you!

papr 08 November, 2021, 14:20:36

An alternative solution would be to subscribe to surfaces and non-surface-mapped gaze and map the gaze yourself. You could use the last received surface definition instead of waiting for a new one.

This tutorial shows how to convert surface coordinates to scene coordinates https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb You need to do the opposite but the logic is the same.

You would need the img_to_surf_trans matrix that is part of the surface datum to transform scene camera coordinates into surface coordinates.
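A rough sketch of that do-it-yourself mapping (assuming the subscriber setup from the snippet above; the exact coordinate convention that img_to_surf_trans expects, normalized vs. pixel and distorted vs. undistorted, should be checked against the tutorial):

import numpy as np
import cv2
import msgpack
import zmq

ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')   # ip / sub_port as before
subscriber.subscribe('surfaces.Surface1')
subscriber.subscribe('gaze.')                  # raw, non-surface-mapped gaze

last_surface = None
while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    topic = topic.decode()

    if topic.startswith('surfaces.'):
        last_surface = datum                   # remember the latest homography
    elif topic.startswith('gaze.') and last_surface is not None:
        trans = np.array(last_surface['img_to_surf_trans'], dtype=np.float64)
        point = np.array([[datum['norm_pos']]], dtype=np.float64)  # shape (1, 1, 2)
        surf_xy = cv2.perspectiveTransform(point, trans)[0, 0]
        print(datum['timestamp'], surf_xy)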

user-55fd9d 08 November, 2021, 14:21:49

great, that sounds good! Thanks again!

user-73b616 12 November, 2021, 08:55:15

Hi, is it possible to get the unprojected circle normal corrected for refraction using the current implementation? E.g. using the Refractionizer before there is any pye3d model?

Chat image

papr 12 November, 2021, 08:58:18

Yes, this is possible. All 3d fields in the pye3d pupil datum are corrected for refraction, including pupil_datum["circle_3d"]["normal"]. The projected values (e.g. pupil_datum["ellipse"] or pupil_datum["projected_sphere"]) are not corrected for refraction.
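A minimal sketch for reading those fields from the real-time API (the topic name pupil.0.3d for eye 0's pye3d detector is my assumption; adjust to your setup):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')  # Pupil Remote
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
subscriber.subscribe('pupil.0.3d')             # pye3d datums for eye 0 (assumed topic)

topic, payload = subscriber.recv_multipart()
pupil_datum = msgpack.loads(payload)

gaze_normal = pupil_datum['circle_3d']['normal']   # refraction-corrected
sphere_center = pupil_datum['sphere']['center']    # refraction-corrected
ellipse = pupil_datum['ellipse']                   # projected, not corrected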

user-73b616 12 November, 2021, 09:02:10

Thank you. And are those corrections model independent? I.e., given a different 3d model fit, would the circle_3d normal be the same for the same image?

papr 12 November, 2021, 09:08:21

No, the correction is model dependent. The refractionizer basically performs a polynomial regression but uses the sphere.center, circle_3d.normal and circle_3d.radius as input.

user-73b616 12 November, 2021, 09:07:08

Thinking of it again, I'm not sure how that would be possible. I just somehow think that, given the normal, the camera ray and a fixed eye radius, there might be some refraction model that would correct the normal.

papr 12 November, 2021, 09:10:34

See this part of the source code for reference https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/base.py#L267-L272

user-73b616 12 November, 2021, 09:52:19

Thank you. And do you have any idea whether refraction correction is mostly affected by the X,Y coordinates of the sphere center? I.e. incorrect depth does not matter that much?

user-92dca7 12 November, 2021, 10:02:29

@user-73b616 Actually, it is the other way around: gaze estimates are more sensitive to errors in the z-coordinate of the sphere center than to errors in x/y. A quantification of this effect can be found in Fig. 6A of the following publication: https://www.researchgate.net/publication/333490770_A_fast_approach_to_refraction-aware_eye-model_fitting_and_gaze_prediction

papr 12 November, 2021, 09:56:34

@user-92dca7 do you know the answer to the above question?

user-73b616 12 November, 2021, 10:24:15

thank you both

user-b14f98 13 November, 2021, 23:44:04

@papr I've spent some time on the intrinsic matrix issue today. It's a bit premature to be completely confident in my results, but I thought I would share an early report. First, I've found a flip along both the azimuth and elevation. Second, I've found a compression. Again - these are preliminary results, and I need to dig some more. ...if you want to see the jupyter notebook that I used to create this figure, you're welcome to. I've just given you read access to a private repo, pupil-trial-analsis. The jupyter notebook projectionB.ipynb demonstrates my approach.

I would appreciate if you don't share the package without asking.

Chat image

user-b14f98 14 November, 2021, 16:41:45

I can confirm the left / right flip.

user-b14f98 14 November, 2021, 16:42:16

Oh, sorry. let's move this to vr-ar.

user-98789c 15 November, 2021, 16:32:34

Hello all, do you know of any online (real-time) fixation duration/direction processing plugins/software/scripts/etc. used with either Invisible or Core?

nmt 17 November, 2021, 17:14:44

There is a real-time fixation detector that you can run in Pupil Capture. Just enable it in the Plugin menu
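A minimal Python sketch for reading those fixations in real time (the fixations topic and the duration/dispersion field names are assumptions based on the online detector's output; worth verifying):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')  # Pupil Remote
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
subscriber.subscribe('fixations')              # published by the online fixation detector

while True:
    topic, payload = subscriber.recv_multipart()
    fixation = msgpack.loads(payload)
    # assumed fields: duration (ms), dispersion (deg), norm_pos (scene coords)
    print(fixation['timestamp'], fixation['duration'], fixation['norm_pos'])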

user-709f65 17 November, 2021, 15:06:59

Hi everyone, new to Pupil Labs. As far as I know, you record videos with Pupil Capture, import the recording into Pupil Player, and then export the results to obtain the raw data about pupil positions and gaze. Currently, I am able to record videos with Matlab, but I don't know how to export the data to get the actual data, using Matlab. Has any of you tried to obtain data using Matlab? Do you have any documentation that I can look at? Thank you very much 🙂

nmt 17 November, 2021, 17:16:14

Hi @user-709f65 👋. You can subscribe to gaze and pupil data in real-time. Have a look at the Matlab section of the Pupil Helpers repo: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab, and specifically filter_messages.m

user-709f65 25 November, 2021, 19:28:24

Hi @nmt. Thank you so much for your response. I've been trying to get data on the pupil diameter and gaze with the functions in that folder, and specifically with the one you mentioned, but I am not able to get actual numbers. Do you know what I could try?

user-9eeaf8 22 November, 2021, 15:06:50

Hello, I have a data format question: in a Pupil Labs Core recording, what is the data format of world.intrinsics and eye1.intrinsics? How can I open those files in Python? Thanks for any hints.

papr 25 November, 2021, 19:30:46

@user-9eeaf8

import msgpack

with open("world.intrinsics", "rb") as fp:
    intrinsics = msgpack.unpack(fp)

user-9eeaf8 29 November, 2021, 16:50:47

Thank you very much! 🙂

user-709f65 28 November, 2021, 21:29:14

It took longer than expected, but now I understand those functions! Thank you so much for the help 🙂

user-709f65 28 November, 2021, 21:31:10

I have a different question. I want to track the gaze, and I guess the calibration procedure is needed. However, in my current setup I will not use a world camera. Is it possible to perform a calibration, without the world camera?

papr 29 November, 2021, 15:06:41

Technically yes, but you will need to provide calibration reference locations within a virtual and head-fixed coordinate system. E.g. in a VR headset, you can use Unity to display the markers in its main camera (user's point of view) and send the 3d locations to Capture. You would have to reimplement that functionality in Matlab.
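The generic mechanism for sending such data to Capture is a notification over Pupil Remote (the notify helper below follows the pupil-helpers pattern; the calibration.add_ref_data subject and the mm_pos/timestamp fields are taken from the hmd-eyes protocol and are assumptions to verify for your Capture version):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')  # Pupil Remote

def notify(notification):
    # send a notification dict to Capture via Pupil Remote
    topic = 'notify.' + notification['subject']
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# hypothetical example of head-fixed reference data (subject/fields assumed)
notify({
    'subject': 'calibration.add_ref_data',
    'ref_data': [{'mm_pos': (0.0, 0.0, 500.0), 'timestamp': 123456.789}],
})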

user-709f65 29 November, 2021, 17:07:26

I see... it might be more complicated than expected. And is there a way to load a dummy calibration? Via Capture or via Matlab?

papr 29 November, 2021, 17:11:02

Could you maybe elaborate on why you don't need a scene camera? To what visual context do you want to relate the gaze signal?

user-709f65 29 November, 2021, 17:41:59

Yes, sure. We work with a haploscope system, which basically shows a different image to each eye to study depth perception, using two monitors and a set of mirrors. I've attached an image.

We want to measure gaze while subjects perform some visual tasks with moving objects, placing Pupil Labs cameras behind the first mirrors (which are hot mirrors that allow IR light to go through), one for each eye. That's why we don't need a scene camera: each eye will see a different scene (i.e. a different monitor). The main issue now is that Pupil Labs needs a calibration to start collecting gaze data. Hope everything is clear enough.

Thanks !!

Chat image

papr 29 November, 2021, 17:50:20

But in order to understand where the person is looking, you need some kind of mapping between eye state and monitor location. Finding this mapping is what we call calibration. You can get the raw eye state in eye-camera coordinates as "pupil" data. In your case, I would write a custom calibration plugin for Capture that collects reference data from the system that also drives the monitors.
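Just to illustrate the mapping idea (this is not Capture's plugin API, only a rough sketch of fitting raw pupil positions to known monitor targets with a polynomial least-squares fit, using made-up demo data):

import numpy as np

# demo data standing in for real samples: pupil_xy would be raw pupil norm_pos
# values collected while the subject fixates known targets, target_xy the
# corresponding target positions on that eye's monitor
rng = np.random.default_rng(0)
pupil_xy = rng.uniform(0.3, 0.7, size=(50, 2))
target_xy = 2.0 * pupil_xy - 0.5 + rng.normal(0, 0.01, size=(50, 2))

def design_matrix(p):
    x, y = p[:, 0], p[:, 1]
    # second-order polynomial features
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

# least-squares fit: pupil position -> monitor position (fit one mapper per eye)
coeffs, *_ = np.linalg.lstsq(design_matrix(pupil_xy), target_xy, rcond=None)

def map_pupil_to_monitor(p):
    # map raw pupil positions (n, 2) to monitor coordinates (n, 2)
    return design_matrix(np.atleast_2d(p)) @ coeffs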

user-b7a87f 30 November, 2021, 09:24:57

Actually, I wanted to use the Pupil Core world camera like a webcam. Basically, I want to give that as input to YOLO.

user-b7a87f 30 November, 2021, 09:25:24

So how can I take the real-time video stream as input? Any ideas?

End of November archive