hello! is there an existing plugin that would enable me to stream info about the level of pupil dilation over the API in real time? thanks!
@user-65b830 This data is already being published. Subscribe to the pupil topic and access the diameter and/or diameter_3d fields.
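A minimal sketch of what that subscription could look like (assumes Pupil Capture is running with Pupil Remote on the default port 50020; requires the pyzmq and msgpack packages):

```python
def extract_diameter(datum):
    """Prefer the physical 3d diameter (mm) when the 3d detector is active,
    else fall back to the 2d diameter (px)."""
    return datum.get("diameter_3d", datum.get("diameter"))

def stream_pupil_diameter(host="127.0.0.1", remote_port=50020):
    # pyzmq and msgpack are only needed at runtime for the network part
    import zmq
    import msgpack

    ctx = zmq.Context.instance()
    # Ask Pupil Remote for the SUB port of the IPC backbone
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{host}:{remote_port}")
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    # Subscribe to all pupil datums (both eyes)
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    sub.subscribe("pupil.")

    while True:
        topic, payload = sub.recv_multipart()
        datum = msgpack.unpackb(payload)
        print(topic.decode(), extract_diameter(datum))
```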
Hi! Could you provide the focal length and physical dimensions (width and height in mm) of the world scene at a world camera resolution of 1280x720 px?
or provide the horizontal and vertical FoV of the world camera?
@user-ff9c49 you can find the prerecorded camera intrinsics here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L45-L60
Alternatively, you can measure them yourself by running the Camera Intrinsics Estimation plugin in Capture.
Thanks! I have taken a look at this information (camera_models.py), so it would be 829.35 x 799.57 mm
Are these dimensions of the image plane or focal length?
@user-ff9c49 These values refer to camera model used in opencv: https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
fx, fy are the focal lengths expressed in pixel units.
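Given fx, fy in pixel units and the image resolution, you can estimate the field of view yourself. A sketch, assuming an ideal pinhole model (it ignores lens distortion, so it is only an approximation for the real lens):

```python
import math

def fov_deg(focal_px, size_px):
    """Field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(size_px / (2 * focal_px)))

# Using the prerecorded 1280x720 intrinsics from camera_models.py
# (fx ~ 829.35 px, fy ~ 799.57 px):
h_fov = fov_deg(829.35, 1280)  # ~75 degrees horizontal
v_fov = fov_deg(799.57, 720)   # ~48 degrees vertical
```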
Thanks! And does the world camera have a 100° fisheye field of view?
@user-ff9c49 I do not know how big the field of view is exactly, but if you run the world camera with 1920x1080, you will get fisheye-lens-distorted images
The lower resolutions are cropped and can be modelled with a radial distortion model
I was just wondering because I was interested in transferring normalized data in degrees, taking into account the physical dimensions of the image plane
@user-ff9c49 We do that with opencv, too: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L544
Thanks a lot! And have you info regarding the sensor size?
width and height?
in mm
@user-ff9c49 I have sent you information regarding the sensor in a private message.
Hello! Is it possible to define Static Region of Interest in Pupil code? Instead of getting the surface from markers?
Hi @user-346f9b Could you give an example for a static region that you had in mind? Do you mean something like a computer screen?
Yes
In this case, you are talking about what we call "markerless area of interest tracking". This is a challenging computer vision problem for mobile eye trackers as they can move freely in relation to their environment and the software needs to keep track of the AOI within the scene video feed.
Unfortunately, this is currently not supported by the Pupil Core software.
Oh I see. I was thinking whether it is possible to specify the x,y coordinates of the area of interest.
Instead of processing the AOI from the environment
@papr I really appreciate your prompt reply!
@user-346f9b That would be possible if the Pupil Core headset did not move in relation to the target. But even with a chin rest this is difficult to achieve.
Oh I see. Anyways, thank you very much @papr.
@user-346f9b No problem. Is there a reason why you cannot use the markers?
@papr I wanted to use it in a medical research application where Doctors would be using the eye tracker and it is not possible for me to be there to set up markers.
@user-346f9b I see. This is indeed a very limited environment.
@papr yes.
Hi, I was wondering if you could point me to the place/file where I can match the frames with the timestamps
Hi @user-2be752, are you talking about export data, are you writing a plugin, or are you parsing the files of a recording on your own?
I'm parsing the files on my own. I am specifically going through the annotations.pldata, but they come with timestamps. I now need to find the video frames where each annotation happens, that's why I need to know how the timestamps and frames match.
@user-2be752 every video has a corresponding *_timestamps.npy file. This gets a bit confusing, if we are talking about Pupil Mobile or Pupil Invisible recordings however, since they can be split into multiple parts. Are you just interested in recordings made with Pupil Capture?
@user-c5fb8b, yes, with pupil capture
@user-2be752 Also, for many analysis tasks it becomes a lot easier if you export the data first. This will create much simpler data formats. This will only be applicable if you do not want to feed anything back into Pupil Player though.
I see that, but for a lot of the analysis we have to parse the files ourselves anyway, so that everything can be automated, plus we have so many subjects...
Ok, in that case you can read in the data from *_timestamps.npy files with
np.load("path/to/eg/world_timestamps.npy")
This will return an array of timestamps. The index will correspond to the frame number for the given video.
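A sketch of matching annotation timestamps to frame numbers based on that (assumes numpy; the file path is just an example):

```python
import numpy as np

def closest_frame_indices(frame_timestamps, event_timestamps):
    """For each event timestamp, return the index of the video frame whose
    timestamp is closest. Indices correspond to frame numbers."""
    frame_ts = np.asarray(frame_timestamps)
    events = np.atleast_1d(event_timestamps)
    # Position where each event would be inserted to keep order
    right = np.searchsorted(frame_ts, events)
    right = np.clip(right, 1, len(frame_ts) - 1)
    left = right - 1
    # Pick whichever neighbor is closer in time
    choose_left = np.abs(events - frame_ts[left]) <= np.abs(frame_ts[right] - events)
    return np.where(choose_left, left, right)

# With a real recording it could look like:
# world_ts = np.load("path/to/recording/world_timestamps.npy")
# frame_idx = closest_frame_indices(world_ts, [annotation_timestamp])
```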
oh, that seems straightforward!!
For Pupil Capture recordings it is, yes 🙂
thank you 🙂
Is it possible in Pupil Player to have multiple annotation data files that I can load and unload on the fly by means of renaming or moving them out of the folder? I want to reuse the same key shortcuts for different events on the recordings (e.g. using key H to annotate when a participant looks at a specific object, but then, on a separate set of annotations, using key H to annotate when the participant grabs the same object).
@user-5ef6c0 This will only work if you rename the files while the recording is not opened in Player.
@papr ok, thank you
@papr One more question. It seems there's no way to delete an annotation from inside Pupil Player (or is there?). If I wanted to manually delete some before exporting to csv, what would you suggest? It seems timestamps and actual annotations are in separate files; I can open and edit the timestamps with a Python IDE, but I've had no luck opening the pldata file.
@user-5ef6c0 The timestamps file is only an auxiliary file and can be generated from the pldata file. Unfortunately, there is currently no way to delete an annotation from within Player.
You would also have to close the recording before editing the file. But I would not recommend it. I think it is easier to filter the exported csv file.
@papr got it, thank you
Hi All, I am hoping to use a live stream of the world camera to get the frame number that corresponds to the onset of a bright light in darkness, and then to get the pupil timestamp that corresponds to this frame. I was able to run this script and print the data for each frame to the console: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py. Perhaps I could do a simple calculation on this data (e.g. recent_world.mean()) to find the frame of interest, modify the script to return the actual frame number, then find the closest corresponding timestamp afterwards. Can anyone comment on whether this sounds viable? If so, what kind of latencies would I be dealing with? Thanks and regards. Joel
Hi @user-430fc1 do you need to detect this in real-time while recording (i.e. do you need to respond somehow to the onset of the light in the experiment)? Or would it be sufficient to identify the pupil timestamp best corresponding to the light onset afterwards?
@user-c5fb8b Thanks for a quick response - It is sufficient to get the pupil timestamp that best corresponds to the light onset
@user-430fc1 I think your idea with frame.mean() for a very simple light onset detection sounds feasible; you'll just have to figure out an appropriate threshold.
Regarding hooking this up with Pupil there are certainly a couple of different ways, subscribing to the messages might be the easiest one. I assume you have found the script for subscribing to the actual frame data as well? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py Frame data is not published by default, since it requires a lot of network resources. The script contains code to enable the FramePublisher plugin as well, which will also put the raw image data on the network. If this stresses the network too much you could think about writing a user plugin for Pupil Capture instead, which will get directly hooked into our event loop.
Have you thought about how to save the data (i.e. the detected timestamp)? You could create an annotation with the detected timestamp of the light onset. This way the data will get saved to the recording automatically and will be part of the export from Pupil Player as well. For reference see https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py (note that the timing parts would not be needed since you would know the appropriate timestamp).
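A rough sketch of the detection step (the threshold value is a placeholder you would have to tune for your setup; assumes the frames arrive as numpy arrays):

```python
import numpy as np

# Placeholder mean-brightness threshold; tune this for your lighting setup
LIGHT_THRESHOLD = 50

def find_onset(timestamps, frames, threshold=LIGHT_THRESHOLD):
    """Return the pupil timestamp of the first frame whose mean brightness
    exceeds the threshold, or None if the light never comes on."""
    for ts, frame in zip(timestamps, frames):
        if frame.mean() > threshold:
            return ts
    return None
```

You would collect (timestamp, frame) pairs around the time you trigger the light, then run this afterwards to find the onset timestamp.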
Maybe @papr also has some additional ideas?
@user-c5fb8b Thanks for the helpful reply! Yes, that was the script I meant to link to in my original post, which I've edited to avoid confusion. I hadn't thought about the best way to save the detected timestamp, but the way you suggest sounds ideal. I originally sent annotations in my script immediately prior to sending a command to my light device, but with this particular device (which takes HTTP requests) there is on average a window of 200 ms of uncertainty as to when the status of the light actually changes, and I noticed in Pupil Player that the annotation was often associated with a frame that did not correspond closely enough with the frame when the light changed. Given the uncertainty of my light device, I thought that perhaps I would need to start collecting the frames before I send my HTTP request and then stop once the request has been processed (by which time the light will be on). Then I could process the frame data afterwards to find the target timestamp.
This sounds feasible in general. You'll have to play around with this a bit I guess. Feel free to come back here if you have any questions regarding the Pupil side of this integration 🙂
@user-c5fb8b thank you!
@user-c5fb8b quick question - how could I modify the recv_world_video_frames.py example so that it returns the frame number and / or pupil timestamp along with the numpy array of the frame data?
@user-430fc1 Here's the FramePublisher code: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/frame_publisher.py#L80-L90
As you can see, there's a timestamp key in the message that you can read from.
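A sketch of parsing a received frame message, based on the keys visible in the linked FramePublisher code (assumes the BGR format used by the helper script, with the raw image bytes arriving as an extra message part):

```python
import numpy as np

def decode_bgr_frame(msg, raw_bytes):
    """Rebuild the image array from a FramePublisher message dict and its
    raw payload; return the pupil timestamp together with the frame."""
    frame = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(
        msg["height"], msg["width"], 3
    )
    return msg["timestamp"], frame
```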
@user-430fc1 Just to add to @user-c5fb8b comments.
You could create an annotation with the detected timestamp of the light onset. This is a great solution.
I would also recommend using the timestamp instead of the frame index, as Player aligns data by time not by indices.
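Following the linked remote_annotations.py helper, the annotation datum itself could be built like this (a sketch; the label name is just an example, and the timestamp must be in pupil time):

```python
def make_annotation(label, timestamp, duration=0.0):
    """Minimal annotation datum in the shape used by the
    remote_annotations.py helper script."""
    return {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,
        "duration": duration,
    }

# e.g. annotation = make_annotation("light_onset", detected_timestamp)
# then msgpack-serialize it and send it via the PUB port as shown in the script.
```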
Hi! I've run into a problem with the installation of Pupil Capture and Pupil Player on Mac. It said: "'Pupil Capture' can't be opened because Apple cannot check it for malicious software. This software needs to be updated. Contact the developer for more information." But I downloaded it from the Pupil Labs website. Could anyone tell me how to install the software?
Hi @user-0b619b I assume you are using macOS Catalina?
The warning you are seeing comes from additional security certifications that Apple introduced in macOS Catalina. We are currently working on getting these ready for our new versions of Pupil. Until then you can ignore this message and open the app anyways, by right-clicking the app and selecting "open" from the context menu. In the following dialog, you should be able to open the app this time. Sorry for the inconvenience!
@user-c5fb8b Thanks! 🙂 It works!
hey Pupil team! long time... I've been tinkering a bit more with the pyuvc library, which now works very well with our microscope. However, I'm still having some problems working with the focus; I checked multiple methods and it seems to be a pyuvc limitation.
Basically it seems like the "Absolute Focus" property is only updated upon connection with the device.
So unless I explicitly update this value, for example controls_dict['Absolute Focus'].value = 100
it will never update even if the camera focus is actually changed (for example if I'm in auto-focus mode). Which basically makes it impossible to make a smooth transition from Auto-Focus to Manual focus.
For example, with this library: https://www.npmjs.com/package/uvcc/v/1.0.1, when I run a "get absoluteFocus" command I get the correct updated value: when the camera is in AF mode I run the command and get a value X; I move the camera, the focus auto-adapts, I run the command again and get an updated value. I also have focus buttons on the scope, and the absoluteFocus value updates if I request it via uvcc, but not via pyuvc.
Is there a way to query the camera and get the actual "Absolute Focus" value in pyuvc?
Many thanks!
It also seems like when I run a command such as 'controls_dict['Absolute Focus'].value -= 1', it is getting the latest focus value from the scope but updating it to a value from an older state... it's a very strange behavior, a bit hard to explain.
for example when i run this command several times:
print("---------------")
print(controls_dict['Absolute Focus'].value)
controls_dict['Absolute Focus'].value += 1
print(controls_dict['Absolute Focus'].value)
I'm getting this output:
91
138
---------------
138
92
---------------
92
139
---------------
139
93
---------------
93
140
The actual focus value in the camera (after querying the camera with UVCC) is actually the top number in each pair
something is very strange
it works better if I use a variable to update the focus:
print("---------------")
abs_focus += 1
print(controls_dict['Absolute Focus'].value)
controls_dict['Absolute Focus'].value = abs_focus
print(controls_dict['Absolute Focus'].value)
but the first two updates still do the strange "older value" pull:
---------------
104
92
---------------
92
101
---------------
101
102
---------------
102
103
---------------
103
104
---------------
104
105
---------------
105
106
@user-f8c051 Hey, yes, I also suspect that this is an issue in pyuvc. Unfortunately, due to the current situation, our resources are limited in regard to looking into this issue.
and still, the actual focus after the update is the last number you see +1, as if controls_dict['Absolute Focus'].value is always showing a value from the past
I would be happy to review a pull request in case you are able to find the cause of the issue and fix it. 🙂
interesting proposal! will try to have a look over the weekend!
Btw, uvcc seems like a great reference implementation!
yes, it's good! but our whole project is in Python 🙂
can you give me a hint as to which file I should look into? which one handles the property updates?
thanks!!!!
@user-f8c051 So this file contains the libuvc variable definitions: https://github.com/pupil-labs/pyuvc/blob/master/cuvc.pxd
This file contains a high level definition of the implemented uvc controls: https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi
Specifically, this function accesses the uvc control values: https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi#L647-L660
This function enumerates the available controls: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L609-L649
Awesome! I'll dig into it!!
Good luck!
hey @papr I did some digging around and thinking. The focus is a tricky control: unlike all the other controls, it takes time to update in the camera, as the physical motion of the lens takes up to a few seconds. So I'm not sure the current functionality of pyuvc is actually wrong. The best way IMO is just adding another method which grabs the latest value from the camera upon request. But I guess this is something to discuss with your team. So for now, I made a workaround in my code and summarized my insights and thoughts here: https://github.com/pupil-labs/pyuvc/issues/71
@user-f8c051 thanks for the update! I will give it a detailed read on Monday
thanks! have a good weekend!
@user-f8c051 you too!
Hi, could you please guide me on which software I need for binocular pupil eye tracking?
@user-6752ca Check out our Getting Started documentation; it summarises the most important information: https://docs.pupil-labs.com/core/#getting-started