software-dev


user-65b830 01 April, 2020, 18:59:42

hello! is there an existing plugin that would enable me to stream info about the level of pupil dilation over the API in real time? thanks!

papr 01 April, 2020, 19:00:45

@user-65b830 This data is already being published. Subscribe to the pupil topic and access the diameter and/or diameter_3d fields.
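A minimal sketch of that subscription flow, assuming Pupil Capture is running locally with Pupil Remote on its default port 50020 (extract_diameters and stream_diameters are illustrative helper names, not part of the Pupil API):

```python
def extract_diameters(datum):
    """Pull the 2D diameter (pixels) and, if present, the 3D diameter
    (mm) out of a decoded pupil datum."""
    return datum.get("diameter"), datum.get("diameter_3d")


def stream_diameters(host="127.0.0.1", port=50020):
    """Subscribe to all pupil data via Pupil Remote and print diameters.
    Requires a running Pupil Capture instance."""
    import zmq      # local imports so extract_diameters works without them
    import msgpack

    ctx = zmq.Context()
    # Ask Pupil Remote for the subscription port.
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect(f"tcp://{host}:{port}")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # Subscribe to all pupil data (both eyes, 2D and 3D detectors).
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://{host}:{sub_port}")
    subscriber.subscribe("pupil.")

    while True:
        topic, payload = subscriber.recv_multipart()
        datum = msgpack.loads(payload)
        print(topic, *extract_diameters(datum))
```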

user-ff9c49 06 April, 2020, 11:30:57

Hi! Could you provide me the focal length and physical dimensions (width and height in mm) of the world scene, under a resolution of the world camera of 1280x720 px?

user-ff9c49 06 April, 2020, 11:43:59

or provide the horizontal and vertical FoV of the world camera?

papr 06 April, 2020, 12:30:01

@user-ff9c49 you can find the prerecorded camera intrinsics here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L45-L60

papr 06 April, 2020, 12:30:37

Alternatively, you can measure them yourself by running the Camera Intrinsics Estimation plugin in Capture.

user-ff9c49 06 April, 2020, 12:36:02

Thanks! I have taken a look at this information (camera_models.py), so it would be 829.35 x 799.57 mm

user-ff9c49 06 April, 2020, 12:38:58

Are these the dimensions of the image plane or the focal lengths?

papr 06 April, 2020, 12:41:36

@user-ff9c49 These values refer to the camera model used in OpenCV: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

papr 06 April, 2020, 12:41:57

fx, fy are the focal lengths expressed in pixel units.
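For an ideal pinhole model, those focal lengths translate directly into a field of view. A sketch using the 1280x720 values quoted earlier in this thread (829.35 and 799.57 px are taken from that message; the real FoV will differ because of lens distortion):

```python
import math

def pinhole_fov_deg(focal_px, size_px):
    """Full field of view in degrees for a pinhole camera,
    given a focal length and an image dimension, both in pixels."""
    return math.degrees(2 * math.atan(size_px / (2 * focal_px)))

# fx, fy for the 1280x720 world camera, as read from camera_models.py
fx, fy = 829.35, 799.57
fov_h = pinhole_fov_deg(fx, 1280)  # horizontal
fov_v = pinhole_fov_deg(fy, 720)   # vertical
print(f"{fov_h:.1f} x {fov_v:.1f} deg")  # ~75.3 x 48.5 deg, ignoring distortion
```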

user-ff9c49 06 April, 2020, 12:51:18

Thanks! And does the world camera have a 100° fisheye field of view?

papr 06 April, 2020, 12:52:40

@user-ff9c49 I do not know exactly how big the field of view is, but if you run the world camera at 1920x1080, you will get fisheye-lens-distorted images

papr 06 April, 2020, 12:53:07

The lower resolutions are cropped and can be modelled with a radial distortion model

user-ff9c49 06 April, 2020, 13:23:15

I was just wondering because I was interested in transferring normalized data in degrees, taking into account the physical dimensions of the image plane

papr 06 April, 2020, 13:24:45

@user-ff9c49 We do that with opencv, too: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L544

user-ff9c49 06 April, 2020, 16:17:33

Thanks a lot! And have you info regarding the sensor size?

user-ff9c49 06 April, 2020, 16:18:16

width and height?

user-ff9c49 06 April, 2020, 16:18:22

in mm

papr 06 April, 2020, 16:20:53

@user-ff9c49 I have sent you information regarding the sensor in a private message.

user-346f9b 10 April, 2020, 20:02:19

Hello! Is it possible to define Static Region of Interest in Pupil code? Instead of getting the surface from markers?

papr 10 April, 2020, 20:18:28

Hi @user-346f9b Could you give an example for a static region that you had in mind? Do you mean something like a computer screen?

user-346f9b 10 April, 2020, 20:19:12

Yes

papr 10 April, 2020, 20:22:17

In this case, you are talking about what we call "markerless area of interest tracking". This is a challenging computer vision problem for mobile eye trackers as they can move freely in relation to their environment and the software needs to keep track of the AOI within the scene video feed.

Unfortunately, this is currently not supported by the Pupil Core software.

user-346f9b 10 April, 2020, 20:23:59

Oh I see. I was thinking whether it is possible to specify the x,y coordinates of the area of interest.

user-346f9b 10 April, 2020, 20:24:25

Instead of processing the AOI from the environment

user-346f9b 10 April, 2020, 20:26:01

@papr I really appreciate your prompt reply!

papr 10 April, 2020, 20:26:21

@user-346f9b That would be possible if the Pupil Core headset did not move in relation to the target. But even with a chin rest this is difficult to achieve.

user-346f9b 10 April, 2020, 20:27:06

Oh I see. Anyways, thank you very much @papr.

papr 10 April, 2020, 20:27:35

@user-346f9b No problem. Is there a reason why you cannot use the markers?

user-346f9b 10 April, 2020, 20:29:21

@papr I wanted to use it in a medical research application where Doctors would be using the eye tracker and it is not possible for me to be there to set up markers.

papr 10 April, 2020, 20:30:18

@user-346f9b I see. This is indeed a very limited environment.

user-346f9b 10 April, 2020, 20:30:38

@papr yes.

user-2be752 14 April, 2020, 20:17:06

Hi, I was wondering if you could point me to the place/file where I can match the frames with the timestamps

user-c5fb8b 15 April, 2020, 06:47:58

Hi @user-2be752, are you talking about export data, are you writing a plugin, or are you parsing the files of a recording on your own?

user-2be752 15 April, 2020, 06:48:53

I'm parsing the files on my own. I am specifically going through the annotations.pldata, but the annotations come with timestamps. I now need to find the video frames where each annotation happens, which is why I need to know how the timestamps and frames match.

user-c5fb8b 15 April, 2020, 06:53:18

@user-2be752 every video has a corresponding *_timestamps.npy file. This gets a bit confusing if we are talking about Pupil Mobile or Pupil Invisible recordings, however, since those can be split into multiple parts. Are you just interested in recordings made with Pupil Capture?

user-2be752 15 April, 2020, 06:55:35

@user-c5fb8b, yes, with pupil capture

user-c5fb8b 15 April, 2020, 06:55:37

@user-2be752 Also, for many analysis tasks it becomes a lot easier if you export the data first, as this creates much simpler data formats. This is only applicable if you do not want to feed anything back into Pupil Player, though.

user-2be752 15 April, 2020, 06:58:06

I see that, but for a lot of the analysis we have to parse the files ourselves anyway, so as to make everything automatic, plus we have so many subjects...

user-c5fb8b 15 April, 2020, 06:59:12

Ok, in that case you can read in the data from the *_timestamps.npy files with

import numpy as np
np.load("path/to/eg/world_timestamps.npy")
user-c5fb8b 15 April, 2020, 06:59:52

This will return an array of timestamps. The index will correspond to the frame number for the given video.
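Putting the two pieces together, matching each annotation timestamp to the nearest world frame can be a simple nearest-neighbour lookup over that array (a sketch; closest_frame_indices is an illustrative helper name):

```python
import numpy as np

def closest_frame_indices(frame_timestamps, event_timestamps):
    """For each event timestamp, return the index of the video frame
    whose timestamp is closest. Assumes frame_timestamps is sorted,
    which Pupil Capture recordings provide in practice."""
    frame_timestamps = np.asarray(frame_timestamps)
    event_timestamps = np.asarray(event_timestamps)
    # Index of the first frame timestamp >= each event timestamp.
    right = np.searchsorted(frame_timestamps, event_timestamps)
    right = np.clip(right, 1, len(frame_timestamps) - 1)
    left = right - 1
    # Pick whichever neighbour is closer in time.
    choose_left = (event_timestamps - frame_timestamps[left]) <= (
        frame_timestamps[right] - event_timestamps
    )
    return np.where(choose_left, left, right)

# Usage, assuming a Pupil Capture recording directory:
# frame_ts = np.load("recording/world_timestamps.npy")
# annotation_ts = [...]  # timestamps parsed from the annotation pldata file
# frame_indices = closest_frame_indices(frame_ts, annotation_ts)
```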

user-2be752 15 April, 2020, 07:00:37

oh, that seems straightforward!!

user-c5fb8b 15 April, 2020, 07:06:04

For Pupil Capture recordings it is, yes 😄

user-2be752 15 April, 2020, 07:06:20

thank you 🙂

user-5ef6c0 20 April, 2020, 14:19:14

Is it possible in Pupil Player to have multiple annotation data files that I can load and unload on the fly by means of renaming or moving them out of the folder? I want to reuse the same key shortcuts for different events on the recordings (e.g. using key H to annotate when a participant looks at a specific object, but then, on a separate set of annotations, using key H to annotate when the participant grabs the same object).

papr 20 April, 2020, 14:22:48

@user-5ef6c0 This will only work if you rename the files while the recording is not opened in Player.

user-5ef6c0 20 April, 2020, 14:28:50

@papr ok, thank you

user-5ef6c0 20 April, 2020, 15:00:47

@papr One more question. It seems there's no way to delete an annotation from inside Pupil Player (or is there?). If I wanted to manually delete some before exporting to csv, what would you suggest? It seems the timestamps and the actual annotations are in separate files; I can open and edit the timestamps with a Python IDE, but I've had no luck opening the pldata file.

papr 20 April, 2020, 15:19:21

@user-5ef6c0 The timestamps file is only an auxiliary file and can be generated from the pldata file. Unfortunately, there is currently no way to delete an annotation from within Player.

papr 20 April, 2020, 15:20:55

You would also have to close the recording before editing the file. But I would not recommend it; I think it is easier to filter the exported csv file.

user-5ef6c0 20 April, 2020, 15:38:10

@papr got it, thank you

user-430fc1 23 April, 2020, 11:14:06

Hi All, I am hoping to use a live stream of the world camera to get the frame number that corresponds to the onset of a bright light in darkness, and then to get the pupil timestamp that corresponds to this frame. I was able to run this script and print the data for each frame to the console: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py. Perhaps I could do a simple calculation on this data (e.g. recent_world.mean()) to find the frame of interest and modify the script to return the actual frame number, then find the closest corresponding timestamp afterwards. Can anyone comment on whether this sounds viable? If so, what kind of latencies would I be dealing with? Thanks and regards, Joel

user-c5fb8b 23 April, 2020, 11:16:54

Hi @user-430fc1 do you need to detect this in real-time while recording (i.e. do you need to respond somehow to the onset of the light in the experiment)? Or would it be sufficient to identify the pupil timestamp best corresponding to the light onset afterwards?

user-430fc1 23 April, 2020, 11:19:13

@user-c5fb8b Thanks for a quick response - It is sufficient to get the pupil timestamp that best corresponds to the light onset

user-c5fb8b 23 April, 2020, 11:31:49

@user-430fc1 I think your idea with frame.mean() for a very simple light onset detection sounds feasible; you'll just have to figure out an appropriate threshold.

Regarding hooking this up with Pupil there are certainly a couple of different ways, subscribing to the messages might be the easiest one. I assume you have found the script for subscribing to the actual frame data as well? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py Frame data is not published by default, since it requires a lot of network resources. The script contains code to enable the FramePublisher plugin as well, which will also put the raw image data on the network. If this stresses the network too much you could think about writing a user plugin for Pupil Capture instead, which will get directly hooked into our event loop.

Have you thought about how to save the data (i.e. the detected timestamp)? You could create an annotation with the detected timestamp of the light onset. This way the data will get saved to the recording automatically and will be part of the export from Pupil Player as well. For reference see https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py (note that the timing parts would not be needed since you would know the appropriate timestamp).

Maybe @papr also has some additional ideas?
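The annotation idea suggested above can be sketched like this, following the datum shape used in pupil-helpers/remote_annotations.py (field names are taken from that helper script; make_annotation and send_annotation are illustrative names):

```python
def make_annotation(label, timestamp, duration=0.0):
    """Build an annotation datum in the shape used by
    pupil-helpers/remote_annotations.py."""
    return {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,  # Pupil time of the detected light onset
        "duration": duration,    # seconds; 0.0 for an instantaneous event
    }

def send_annotation(pub_socket, annotation):
    """Publish an annotation so Capture saves it into the recording.
    pub_socket is a zmq.PUB socket connected to Pupil Remote's PUB_PORT."""
    import zmq      # local imports: only needed when actually publishing
    import msgpack
    pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub_socket.send(msgpack.packb(annotation, use_bin_type=True))
```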

user-430fc1 23 April, 2020, 11:50:47

@user-c5fb8b Thanks for the helpful reply! Yes, that was the script I meant to link to in my original post, which I've edited to avoid confusion. I hadn't thought about the best way to save the detected timestamp, but the way you suggest sounds ideal. I originally sent annotations in my script immediately prior to sending a command to my light device, but with this particular device (which takes HTTP requests) there is on average a window of 200 ms of uncertainty as to when the status of the light actually changes, and I noticed in Pupil Player that the annotation was often associated with a frame that did not correspond closely enough with the frame where the light changed. Given the uncertainty of my light device, I thought that perhaps I would need to start collecting the frames before I send my HTTP request and then stop once the request has been processed (by which time the light will be on). Then I could process the frame data afterwards to find the target timestamp.

user-c5fb8b 23 April, 2020, 11:53:15

This sounds feasible in general. You'll have to play around with this a bit, I guess. Feel free to come back here if you have any questions regarding the Pupil side of this integration 🙂

user-430fc1 23 April, 2020, 11:54:32

@user-c5fb8b thank you!

user-430fc1 23 April, 2020, 12:48:27

@user-c5fb8b quick question - how could I modify the recv_world_video_frames.py example so that it returns the frame number and / or pupil timestamp along with the numpy array of the frame data?

user-c5fb8b 23 April, 2020, 12:51:31

@user-430fc1 Here's the FramePublisher code: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/frame_publisher.py#L80-L90 As you can see, there's a key timestamp in the message, that you can just read from.
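Based on the FramePublisher code linked above, each frame message carries index and timestamp fields alongside the raw image bytes, so the receive loop in recv_world_video_frames.py can be extended along these lines (frame_meta and THRESHOLD are illustrative names; the field names come from frame_publisher.py):

```python
def frame_meta(payload):
    """Extract the frame index and Pupil timestamp from a decoded
    frame message (field names as published by frame_publisher.py)."""
    return payload["index"], payload["timestamp"]

# Inside the receive loop of recv_world_video_frames.py, once the
# msgpack payload has been decoded into `msg` and the image into
# `recent_world`:
#
#     idx, ts = frame_meta(msg)
#     if recent_world.mean() > THRESHOLD:  # THRESHOLD chosen empirically
#         print("light onset at frame", idx, "pupil timestamp", ts)
```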

papr 23 April, 2020, 13:15:29

@user-430fc1 Just to add to @user-c5fb8b's comments.

> You could create an annotation with the detected timestamp of the light onset.

This is a great solution.

I would also recommend using the timestamp instead of the frame index, as Player aligns data by time not by indices.

user-0b619b 24 April, 2020, 07:00:23

Hi! I've met some problems with the installation of Pupil Capture and Pupil Player on Mac. It said "Pupil Capture can't be opened because Apple cannot check it for malicious software. This software needs to be updated. Contact the developer for more information." But I downloaded it from the Pupil Labs website. Could anyone tell me how to install the software?

user-c5fb8b 24 April, 2020, 07:02:34

Hi @user-0b619b I assume you are using macOS Catalina?

user-c5fb8b 24 April, 2020, 07:05:10

The warning you are seeing comes from additional security certifications that Apple introduced in macOS Catalina. We are currently working on getting these ready for our new versions of Pupil. Until then, you can ignore this message and open the app anyway: right-click the app and select "Open" from the context menu. In the following dialog, you should be able to open the app. Sorry for the inconvenience!

user-0b619b 24 April, 2020, 07:12:38

@user-c5fb8b Thanks! 😉 It works!

user-f8c051 24 April, 2020, 10:22:09

hey Pupil team! long time... I've been tinkering a bit more with the pyuvc library, which now works very well with our microscope. However, I'm still having some problems working with the focus; I checked multiple methods and it seems to be a pyuvc limitation.

Basically it seems like the "Absolute Focus" property is only updated upon connection with the device. So unless I explicitly update this value, for example controls_dict['Absolute Focus'].value = 100, it will never update even if the camera focus actually changes (for example if I'm in auto-focus mode). This basically makes it impossible to make a smooth transition from auto-focus to manual focus.

For example, with this library: https://www.npmjs.com/package/uvcc/v/1.0.1 when I run a "get absoluteFocus" command I get the correct updated value: when the camera is in AF mode I run the command and get an X value, I move the camera > the focus auto-adapts > I run the command again and get an updated X value. I also have focus buttons on the scope, and the absoluteFocus value updates if I request it via uvcc, but not via pyuvc.

Is there a way to query the camera and get the actual "Absolute Focus" value in pyuvc?

Many thanks!

user-f8c051 24 April, 2020, 10:38:57

It also seems like when I run a command such as controls_dict['Absolute Focus'].value -= 1, it gets the latest focus value from the scope but updates it to a value from an older state... it's a very strange behavior, a bit hard to explain

user-f8c051 24 April, 2020, 10:45:27

for example when i run this command several times:

print("---------------")
print(controls_dict['Absolute Focus'].value)
controls_dict['Absolute Focus'].value += 1
print(controls_dict['Absolute Focus'].value)
user-f8c051 24 April, 2020, 10:46:12

I'm getting this output:

91
138
---------------
138
92
---------------
92
139
---------------
139
93
---------------
93
140
user-f8c051 24 April, 2020, 10:47:36

The actual focus value in the camera (after querying the camera with UVCC) is actually the top number in each pair

user-f8c051 24 April, 2020, 10:49:13

something is very strange

user-f8c051 24 April, 2020, 10:53:48

it works better if I use a variable to update the focus:

        print("---------------")
        abs_focus += 1
        print(controls_dict['Absolute Focus'].value)
        controls_dict['Absolute Focus'].value = abs_focus
        print(controls_dict['Absolute Focus'].value)
user-f8c051 24 April, 2020, 10:55:23

but the first two updates are still doing the strange "older value" pull:

---------------
104
92
---------------
92
101
---------------
101
102
---------------
102
103
---------------
103
104
---------------
104
105
---------------
105
106
papr 24 April, 2020, 10:56:32

@user-f8c051 Hey, yes, I also suspect that this is an issue in pyuvc. Unfortunately, due to the current situation, our resources are limited in regard to looking into this issue.

user-f8c051 24 April, 2020, 10:56:50

and still, the actual focus after the update is the last number you see, +1

user-f8c051 24 April, 2020, 10:57:19

as if controls_dict['Absolute Focus'].value is always showing a value from the past

papr 24 April, 2020, 10:57:20

I would be happy to review a pull request in case you are able to find the cause of the issue and fix it. 🙂

user-f8c051 24 April, 2020, 10:57:49

interesting proposal! will try to have a look over the weekend!

papr 24 April, 2020, 10:58:43

Btw, uvcc seems like a great reference implementation!

user-f8c051 24 April, 2020, 10:59:31

yes it's good! but our whole project is in Python 😆

user-f8c051 24 April, 2020, 11:00:06

can you give me a hint as to which file I should look into? which one handles the property updates?

user-f8c051 24 April, 2020, 11:03:36

thanks!!!!

papr 24 April, 2020, 11:04:28

@user-f8c051 So this file contains the libuvc variable definitions: https://github.com/pupil-labs/pyuvc/blob/master/cuvc.pxd

This file contains a high level definition of the implemented uvc controls: https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi

Specifically, this function accesses the uvc control values: https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi#L647-L660

This function enumerates the available controls: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L609-L649

user-f8c051 24 April, 2020, 11:05:02

Awesome! I'll dig into it!!

papr 24 April, 2020, 11:05:13

Good luck!

user-f8c051 25 April, 2020, 09:49:52

hey @papr I did some digging around and thinking; the focus is a tricky control. Unlike all the other controls, it takes time to update in the camera: the physical motion of the lens takes up to a few seconds. So I'm not sure the current functionality of pyuvc is actually wrong. The best way IMO is just adding another method which grabs the latest value from the camera upon request. But I guess this is something to discuss with your team. So for now, I made a workaround in my code and summarized my insights and thoughts here: https://github.com/pupil-labs/pyuvc/issues/71

papr 25 April, 2020, 10:05:53

@user-f8c051 thanks for the update! I will give it a detailed read on Monday

user-f8c051 25 April, 2020, 10:07:06

thanks! have a good weekend!

papr 25 April, 2020, 10:07:39

@user-f8c051 you too!

user-6752ca 27 April, 2020, 04:27:09

Hi. Could you please guide me on which software I need for binocular pupil eye tracking?

papr 27 April, 2020, 08:50:21

@user-6752ca Check out our Getting Started documentation. It summarises the most important information: https://docs.pupil-labs.com/core/#getting-started

End of April archive