πŸ’» software-dev


user-87c4eb 05 November, 2019, 07:39:15

Hi, I have a question concerning the Network API and streaming gaze data (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format). We have the binocular Core tracker with 120Hz for each eye camera, so I expected gaze data to be streamed at 120Hz (combining the 2x120Hz). However, in my software, I receive gaze data at 240Hz with each gaze datum containing two pupil datums. Is it intended to stream at 240Hz? As it seems to be redundant, how can I extract the non-redundant 120Hz signal? (using Pupil Capture 1.17.6). Thanks in advance for your answer!

user-2be752 06 November, 2019, 03:37:47

Hi there, is there anywhere I can get more documentation on the structure of the new surface tracker? I'm trying to go over the code to implement it to my design but I'm having a bit of trouble differentiating each of the functions. Thanks in advance!

papr 06 November, 2019, 10:49:15

@user-87c4eb Pupil Capture only matches two pupil datums into a binocular gaze datum if the confidence of both data points is sufficiently high (0.6) and you have already calibrated successfully. In your case, you see two monocular gaze data streams, each running at 120Hz

I receive gaze data at 240Hz with each gaze datum containing two pupil datums

Do they include one or two pupil datums? Could you check and share the topic of the gaze data that you receive?
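For checking the topic (and the number of linked pupil datums) as papr asks, a sketch along these lines might help. It assumes Pupil Remote is running on its default port (50020) and that the topic naming follows the documented scheme ("gaze.3d.01." for binocular, "gaze.3d.0."/"gaze.3d.1." for monocular); pyzmq and msgpack are required for the network part. Untested against a live Capture instance:

```python
def classify_gaze_topic(topic):
    # Binocular gaze is published as "gaze.3d.01.", monocular gaze as
    # "gaze.3d.0." / "gaze.3d.1." (the eye id is the segment before
    # the trailing dot).
    eye_part = topic.rstrip(".").rsplit(".", 1)[-1]
    if eye_part == "01":
        return "binocular"
    if eye_part in ("0", "1"):
        return "monocular eye" + eye_part
    return "unknown"


def inspect_gaze_stream(n_samples=100, addr="tcp://127.0.0.1:50020"):
    import zmq       # pip install pyzmq
    import msgpack   # pip install msgpack

    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(addr)              # Pupil Remote
    req.send_string("SUB_PORT")    # ask for the subscription port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:" + sub_port)
    sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")
    for _ in range(n_samples):
        topic, payload = sub.recv_multipart()
        datum = msgpack.unpackb(payload, raw=False)
        print(topic.decode(), classify_gaze_topic(topic.decode()),
              "pupil datums:", len(datum["base_data"]))
```

Counting how many samples arrive per topic over a fixed interval then separates "one 240Hz stream" from "two 120Hz streams".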

papr 06 November, 2019, 10:50:05

@user-2be752 Unfortunately, there is no such documentation. πŸ˜• Are you looking to modify something in particular? In this case I might be able to give pointers.

user-2be752 06 November, 2019, 17:48:00

@papr No worries, you guys are super helpful! I've developed my own apriltag detection and a way of matching each tag id to one of 3 surfaces (each one defined by 4 markers), but now I think it might be easier to just use your code. However, i don't seem to find where the actual tag detection is happening. If you could point me to the right part of the code where the marker detection and surface matching is happening that'd be very helpful. Otherwise, if you could point me to where I could pass on my definition of the surfaces by 4 markers that could also work.

user-d77d0f 06 November, 2019, 17:51:44

Hi! I'm trying to run the GazeRaysDemoScene in unity but it doesn't seem to be working (all other demos work perfectly fine). I get an error saying: "NullReferenceException: Object reference not set to an instance of an object PupilLabs.DisableDuringCalibration.Awake()". Any idea of how to solve it?

papr 06 November, 2019, 18:35:35

@user-d77d0f please post unity related questions in πŸ₯½ core-xr :)

user-b13152 07 November, 2019, 03:28:41

Hi @papr! Why is the value of fixation_id the same while the world index and position change? I want to use the data for an area-of-interest analysis. Thanks

Chat image

user-b13152 07 November, 2019, 03:48:55

Hi @wrp! Why is the value of fixation_id the same while the world index and position change? I want to use the data for an area-of-interest analysis. Thanks

Chat image

wrp 07 November, 2019, 04:57:11

@user-40621b A fixation is made up of a cluster of gaze positions. It appears that what you are seeing in fixations_on_surface_<surface_name>.csv are the gaze positions that make up each fixation. Example from your screenshot:

Fixation ID = 74
Fixation starts at world_index frame = 492
Fixation ends at world_index frame = 499

Multiple datums: You could take the mean of the gaze norm_pos_x and norm_pos_y if you want to represent a single position for the fixation.

user-b13152 07 November, 2019, 08:10:24

@wrp, thanks for your response. Can you tell me the fastest way to compute the mean for each fixation? I have many data files.
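The mean-per-fixation step wrp suggests can be scripted with the standard library alone and looped over many export files. The column names (fixation_id, norm_pos_x, norm_pos_y) are assumed from the fixations_on_surface_<surface_name>.csv export discussed above; a sketch, not tested against a real export:

```python
import csv
from collections import defaultdict


def mean_fixation_positions(rows):
    """Average norm_pos_x / norm_pos_y over all rows sharing a fixation_id."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for row in rows:
        entry = sums[row["fixation_id"]]
        entry[0] += float(row["norm_pos_x"])
        entry[1] += float(row["norm_pos_y"])
        entry[2] += 1
    return {fid: (x / n, y / n) for fid, (x, y, n) in sums.items()}


def mean_positions_from_file(path):
    # One fixations_on_surface_<name>.csv export per call;
    # loop over your files and collect the results.
    with open(path, newline="") as f:
        return mean_fixation_positions(csv.DictReader(f))
```

Calling mean_positions_from_file for each export file yields one (x, y) per fixation id per surface.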

user-87c4eb 07 November, 2019, 08:19:42

@papr Thanks for your answer. I looked into it again: when subscribing to "gaze.3d.01." I get between 1500 and 1800 samples in 10 seconds. So it's not 240Hz, but sth. between 150 and 180Hz. Maybe the eye cameras deliver more than 120Hz? My FPS views for the cameras jump between 90 and 180Hz...

user-2be752 07 November, 2019, 19:42:02

@papr any thoughts or clues of where I could find this?

papr 07 November, 2019, 20:26:30

@user-87c4eb Could you share your code such that we can try to reproduce the experiment?

papr 11 November, 2019, 13:53:00

@user-87c4eb We were able to reproduce the issue and fixed it here: https://github.com/pupil-labs/pupil/pull/1728

user-87c4eb 11 November, 2019, 14:14:56

Hi @papr . Great news! Thanks for the update.

papr 11 November, 2019, 17:03:09

@user-87c4eb Please be aware that we are still evaluating the new behaviour. This change will take a few days before it is merged if at all. I will keep you posted.

user-bb648c 11 November, 2019, 21:15:14

Hey! My name is Richard and I'm setting up a Pupil Labs eye tracker for a project in UC Berkeley's Whitney Lab. Right now, we're trying to use the surface_tracker module to detect three surfaces using 12 apriltag markers (we’re using it directly in the code, not through Pupil Player). I currently have all the markers saved into the marker_cache of a Surface_Tracker object, but I'm confused as to how I can detect the presence of one or more surfaces given the markers I saw for that frame (we're allowing users to move their head during recording, so the surfaces could change every frame). Does anyone have any pointers or tips?

papr 12 November, 2019, 20:58:18

@user-c5fb8b Could you please give @user-bb648c and @user-2be752 pointers regarding the surface code structure when you are back in the office?

user-c5fb8b 13 November, 2019, 09:23:51

Hi @user-2be752 and @user-bb648c, let me give you a quick overview of the Surface_Tracker structure. I'll write a combined answer for both of you since there's a lot of overlap.

Marker detection for the Surface_Tracker is happening in surface_marker_detector.py, see here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_marker_detector.py#L232-L267 Note self._detector is an instance of the pupil-apriltags detector. The code of this is not in Pupil, but here: https://github.com/pupil-labs/apriltags
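For anyone prototyping outside of Pupil (as both questioners are): a minimal sketch of detecting tags with the pupil-apriltags package mentioned above and matching tag ids to user-defined surfaces. The SURFACES table, the visible_surfaces helper, and the 3-of-4 threshold are hypothetical illustrations for this thread's "3 surfaces x 4 markers" setup, not part of the Pupil API:

```python
# Hypothetical example: three surfaces, each defined by four apriltag ids.
SURFACES = {
    "screen": {0, 1, 2, 3},
    "desk": {4, 5, 6, 7},
    "door": {8, 9, 10, 11},
}


def visible_surfaces(detected_ids, surfaces=SURFACES, min_markers=3):
    """Return the surfaces for which enough of their markers were detected.

    Requiring only min_markers of the defining tags tolerates partial
    occlusion while the head moves.
    """
    detected = set(detected_ids)
    return [name for name, ids in surfaces.items()
            if len(ids & detected) >= min_markers]


def detect_tag_ids(gray_frame):
    # pip install pupil-apriltags; gray_frame should be a uint8
    # grayscale numpy array (one world-camera frame).
    from pupil_apriltags import Detector
    detector = Detector(families="tag36h11")
    return [d.tag_id for d in detector.detect(gray_frame)]
```

Per frame you would call visible_surfaces(detect_tag_ids(frame)) and only pass a surface's own markers on to its location update.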

user-c5fb8b 13 November, 2019, 09:23:59

The entry point for the main logic is in recent_events of the surface tracker base class: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L423-L432 Here essentially the 3 _update_* functions run all the code. The implementations are a bit different between surface_tracker_online and surface_tracker_offline though. Online works in Capture and tries to optimize for real-time speed, while offline works in Player and calculates an accurate marker cache for the entire video at the beginning in a background thread.

user-c5fb8b 13 November, 2019, 09:24:11

For an easy overview of how the pipeline works, I'd recommend you take a look at surface_tracker_online and surface_online, as there are no caches and background threads, which makes it a lot clearer. Basically the tracker computes the marker locations in a frame and then passes this on to its defined surfaces with Surface.update_location (see implementations in derived classes): https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L277-L280

user-c5fb8b 13 November, 2019, 09:24:18

The surface stores an internal "definition". That is, every surface needs one definition frame with the information about which markers correspond to which surface coordinates. In Pupil you set these definitions by dragging the surface corners in the UI. We store both distorted and undistorted surface locations: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L76-L88 Serialization/deserialization of the surface definitions is managed by surface_file_store.py, which is part of the Surface_Tracker base class and works independently of online/offline mode.

user-5fa537 13 November, 2019, 16:25:50

Hey sports fans... still having some issues getting the Core to work w/ the Realsesne 435i on Linux

user-5fa537 13 November, 2019, 16:26:09

I can get the sensor to work independently, have the right 'stuff' installed in the python-verse

user-5fa537 13 November, 2019, 16:26:38

but capture never 'sees' the sensor.

user-5fa537 13 November, 2019, 16:27:02

tried as root, also have sacrificed a chicken and am presently doing a voodoo dance.

user-5fa537 13 November, 2019, 16:27:33

eyecams are 'seen' no problem

user-5fa537 13 November, 2019, 16:27:46

doesn't work on dedicated iron or VM

user-5fa537 13 November, 2019, 16:29:20

one minor thing- the capture software 'seems' to want me to install -both- pyrealsense -and- pyrealsense2. Only 2 supports the 4xx series IIRC

user-5fa537 13 November, 2019, 16:29:54
world - [INFO] launchables.world: Application Version: 1.18.35
world - [INFO] launchables.world: System Info: User: flip, Platform: Linux, Machine: gibson, Release: 4.15.0-66-generic, Version: #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
user-5fa537 13 November, 2019, 16:31:05

later, it asks for pyrealsense2 to be installed, but it is there in my global python install. Does it need to be localized into the app or does the app use a special PYTHONPATH thingie perchance?

papr 13 November, 2019, 16:34:31

@user-5fa537 The Realsense D400 backend needs pyrealsense2, the Realsense R200 backend needs the pyrealsense module

user-5fa537 13 November, 2019, 16:34:37

yep-

user-5fa537 13 November, 2019, 16:35:00

installed into the system python or some other 'secret' place πŸ™‚ ?

papr 13 November, 2019, 16:35:22

@user-5fa537 If you are running from bundle, the global python env is ignored. The bundle has its own env.

user-5fa537 13 November, 2019, 16:35:35

ahh- OK.

papr 13 November, 2019, 16:35:45

@user-5fa537 You need to install pyrealsense2 into ~/pupil_capture_settings/plugins

user-5fa537 13 November, 2019, 16:35:54

I suspected this!

user-5fa537 13 November, 2019, 16:36:03

@papr thank you! shall do-

user-5fa537 13 November, 2019, 16:36:57

i will work on an IMU plugin once I get it working.

user-5fa537 13 November, 2019, 16:37:15

we need the data for some ML stuff that Gabe's group at RIT did

user-5fa537 13 November, 2019, 16:37:33

so, nothing like necessity being the Mofo of invention and all that

user-5fa537 13 November, 2019, 16:40:03

@papr - py 2.7 or 3.x btw?

papr 13 November, 2019, 16:43:35

@user-5fa537 3.6

user-5fa537 13 November, 2019, 16:45:18

@papr πŸ‘ thanks. figured, but confirmation beats all that πŸ™‚

user-ff9c49 13 November, 2019, 16:48:15

@papr Hi! Is there any way to do the calibration step directly in a secondary screen? Thanks!

papr 13 November, 2019, 16:49:03

@user-ff9c49 You can display the screen calibration markers on a different monitor by selecting the correct monitor in the calibration menu

user-f8c051 13 November, 2019, 17:37:32

Hey All !! I'm having some trouble installing pyuvc I think @papr already helped me overcome a few issues on github but I'm still stuck

user-f8c051 13 November, 2019, 17:38:08

I'm on Catalina, and getting the following error:

uvc.c:15558:103: error: too many arguments to function call, expected 4, have 5
__pyx_v_status = uvc_stream_start(__pyx_v_self->strmh, NULL, NULL, __pyx_v_self->_bandwidth_factor, 0);

user-f8c051 13 November, 2019, 17:39:01

when running python3 setup.py install

papr 13 November, 2019, 17:39:14

@user-f8c051 BTW I have not been able to get Pupil to run from source on Catalina yet

user-f8c051 13 November, 2019, 17:39:31

ah

user-f8c051 13 November, 2019, 17:40:43

but I'm actually not trying to run Pupil, I'm just trying to use pyuvc to control a microscope lol

user-f8c051 13 November, 2019, 17:41:14

Pupil is awesome though! I love it and used it for different research

papr 13 November, 2019, 17:42:28

Ah, cool! But sad to hear that there are more problems than installing pyglui on Catalina...

user-f8c051 13 November, 2019, 17:44:25

I ended up installing libuvc via brew and now just stuck compiling pyuvc

user-f8c051 13 November, 2019, 17:45:17

any ideas what might be the source of that error?

user-f8c051 13 November, 2019, 17:47:00

Chat image

user-2be752 13 November, 2019, 18:13:41

@user-c5fb8b wow this is great! a few questions: recent_events takes self and events, could you explain what exactly is passed onto this? also, if I understood correctly, for us to define a surface without the gui we then need to just pass the markers that make up a particular surface onto Surface.update_location?

papr 13 November, 2019, 18:46:57

@user-f8c051 isn't this the same exact error as before?

user-f8c051 13 November, 2019, 19:01:49

@papr the error I posted on GitHub was regarding compiling libuvc

user-f8c051 13 November, 2019, 19:02:25

but then I just installed libuvc via brew and now Im stuck compiling pyuvc

papr 13 November, 2019, 19:02:47

You need to compile our libuvc, else it won't work.

papr 13 November, 2019, 19:03:02

https://github.com/pupil-labs/pyuvc

papr 13 November, 2019, 19:03:26

You should only install turbojpeg and libusb via brew

user-f8c051 13 November, 2019, 19:03:33

ah I was suspecting it might be the issue

user-f8c051 13 November, 2019, 19:04:28

but your version of libuvc doesn't seem to compile on Catalina

user-f8c051 13 November, 2019, 19:04:49

https://github.com/pupil-labs/libuvc/issues/36

user-f8c051 13 November, 2019, 19:05:03

or maybe Im still missing something

user-f8c051 13 November, 2019, 19:05:09

Im not so good with Cmake

user-f8c051 13 November, 2019, 19:08:22

ok let me try the fixed branch

papr 13 November, 2019, 19:08:28

with these instructions I am actually able to build libuvc on my catalina

user-f8c051 13 November, 2019, 19:11:19

im getting

user-f8c051 13 November, 2019, 19:11:29
CMake Error at /Applications/CMake.app/Contents/share/cmake-3.16/Modules/FindPkgConfig.cmake:511 (message):
  pkg-config tool not found
Call Stack (most recent call first):
  /Applications/CMake.app/Contents/share/cmake-3.16/Modules/FindPkgConfig.cmake:643 (_pkg_check_modules_internal)
  CMakeLists.txt:32 (pkg_check_modules)
papr 13 November, 2019, 19:11:39

brew install pkg-config

user-f8c051 13 November, 2019, 19:12:47

ok seemed to work

user-f8c051 13 November, 2019, 19:13:06

will try to compile pyuvc now

papr 13 November, 2019, 19:13:17

That should work out of the box

user-f8c051 13 November, 2019, 19:15:48

compiled!!! yay!

user-f8c051 13 November, 2019, 19:17:56

Seems to work!!! yay!! will post the solutions on github under the issues i opened!

papr 13 November, 2019, 19:20:59

Great thank you. Please close it afterwards πŸ‘

user-f8c051 13 November, 2019, 19:21:13

Many thanks for the quick response!!!

user-f8c051 13 November, 2019, 19:33:47

@papr one last question, is there a documentation somewhere for pyuvc? Listing all the methods?

papr 13 November, 2019, 19:34:54

no, not really. I can just recommend to have a look at the source code https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx Alternatively, you can always use the dir(obj) function to list all available attributes of an object obj

user-f8c051 13 November, 2019, 19:35:33

alright will tinker around! thanks again!

user-2be752 14 November, 2019, 06:28:23

@user-c5fb8b A bit of an update: per your awesome tips, we now have code that first detects all markers in each frame and stores them in an object tracker.marker_cache (tracker being surface_tracker_offline.Surface_Tracker_Offline(g_pool)). Then we loop through the frames and in each iteration call surface_offline.update_location(), passing the frame index and the markers detected in that frame. My remaining problem is: how is this function going to know which markers make up each surface? I guess I can only pass the markers that make up a particular surface in a bigger loop.

My other problem is that when I pass the markers into surface_offline.update_location() I get this error:

File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 97, in update_location
    self.update_location(frame_idx, marker_cache, camera_model)
File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 84, in update_location
    self._fetch_from_location_cache_filler()
File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 146, in _fetch_from_location_cache_filler
    for cache_idx, location in self.location_cache_filler.fetch():
File "/pupil/pupil_src/shared_modules/background_helper.py", line 115, in fetch
    raise datum
AssertionError: push_url was not set by foreground process

I would appreciate any help πŸ™‚

user-c5fb8b 14 November, 2019, 07:29:36

Hi @user-2be752, glad I could help. The recent_events() function gets called every frame for all plugins. The dictionary events that is passed as a parameter is basically a global dictionary that collects all data from all plugins. The plugins have a specific order, so e.g. the pupil detectors get called first and store their result data in events. Later plugins in the same frame can then use that information from events and do more processing with the data from the previous plugins. In Pupil, basically the only thing that happens is that tracker.recent_events() gets called every frame. The events dict contains information from the video sources, so it also contains the images. Here you can see that the surface tracker base class then does all the necessary further steps in recent_events(): https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L423-L432 I assume you could either try to just use recent_events(), where you mock up an events dict. I think you will only need the frame key in events:

for img in video:
    # img should be a numpy array
    events = {"frame": img}
    tracker.recent_events(events)

Note that this is untested :D The other way would be to recreate the logic that you find in surface tracker base recent_events() yourself and call that for every frame.

user-c5fb8b 14 November, 2019, 07:38:49

@user-2be752 Looking a bit more at the error you are getting, I fear this might come from our multi-threading setup in Pupil. Most of the background tasks rely on the IPC (inter-process communication) setup that we create when starting Pupil (in main.py). Without this setup it might not be easy to use the background processing that e.g. surface_tracker_offline uses for filling the cache. In that case you would have to either try the online tracker or re-engineer the caching procedure to not work asynchronously. Actually, I think you might not even need the caches, as they are only needed for being able to search through the video quickly without delay. In case you just want to process a video once from front to back, you can "just" omit the caching and process the markers immediately.

user-c5fb8b 14 November, 2019, 07:39:23

I'd recommend trying to make the online surface tracker work first and then see if that's already sufficient for your purposes.

user-2be752 14 November, 2019, 07:39:54

if i'm not using the gui at all, in which way using the offline or the online is different?

user-c5fb8b 14 November, 2019, 07:48:56

@user-2be752 Mostly the offline version does caching, which allows it to quickly jump to a specific frame without delays. But also since we are caching, we can do more intense processing in the offline detector. A parameter for detection is the min_marker_perimeter, which basically controls for how tiny markers we search. Setting this parameter smaller will result in more markers detected, but it will also take longer. You can however also set this parameter for the online detector.

user-2be752 14 November, 2019, 07:50:01

I see... the nomenclature is pretty similar for both, right? I can try with the online one and see if it's more straightforward

user-c5fb8b 14 November, 2019, 07:50:29

Yes please try this. A lot of the functions come from the base class Surface_Tracker, so they are the same in both online and offline.

user-2be752 14 November, 2019, 07:50:47

otherwise, how would i be able to omit the caching?

user-c5fb8b 14 November, 2019, 07:51:20

Well. You would have to rewrite the code of the offline tracker πŸ˜…

user-2be752 14 November, 2019, 07:52:29

oh no πŸ˜‚ I much rather use the code, it's so useful!

user-c5fb8b 14 November, 2019, 07:52:37

But I will think about a better solution in that case. But no promises.

user-2be752 14 November, 2019, 07:53:19

sounds good, I'll try tomorrow (I'm in US time) with the online version, cross fingers!

user-2be752 14 November, 2019, 07:55:23

one more thing, we are also trying to write a recalibration function so we can recalibrate offline... there used to be a function called calibrate_and_map in the old API in gaze_producers, do you have an idea of where this may be done in the new API?

user-c5fb8b 14 November, 2019, 07:56:16

Hm, maybe @papr has an idea for this? ☝️

papr 14 November, 2019, 15:53:54

@user-87c4eb We have evaluated several gaze matching algorithms and have decided to go for #1731.

You can find the evaluation here: https://nbviewer.jupyter.org/github/pupil-labs/pupil-matching-evaluation/blob/master/gaze_mapper_evaluation.ipynb

papr 14 November, 2019, 15:57:32

@user-2be752 This is the relevant entry point for offline calibration: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/gaze_from_offline_calibration.py#L66

Offline Calibration consists of three steps: 1. Reference location detection 2. Calibration 3. Mapping

Each step has its own controller.

user-2be752 14 November, 2019, 19:21:13

@user-c5fb8b it seems that using the tracker_online.recent_events() is working so far! quick question: what's the difference between g_pool.rec_dir, g_pool.user_dir?

papr 14 November, 2019, 19:22:02

@user-2be752 rec_dir: Directory of the recording, user_dir: ~/pupil_player_settings

papr 14 November, 2019, 19:22:31

You probably only want to care about rec_dir

user-2be752 14 November, 2019, 20:11:38

awesome, thanks!

user-f8c051 14 November, 2019, 20:41:47

Hey, it's me again. I've been tinkering with pyuvc a bit, but could not figure out how to apply the control settings.

It works quite well, detects my usb microscope and gets 30fps video feed!

in the example.py there are those 3 lines:

# Uncomment the following lines to configure the Pupil 200Hz IR cameras:
controls_dict = dict([(c.display_name, c) for c in cap.controls])
controls_dict['Auto Exposure Mode'].value = 1
controls_dict['Gamma'].value = 200

I tried to modify them to adjust white balance:

controls_dict['White Balance temperature,Auto'].value = 0
controls_dict['White Balance temperature'].value = 2000

but nothing really happened

user-f8c051 14 November, 2019, 20:43:10

I guess I'm missing a few more lines to actually apply those settings?

user-f8c051 14 November, 2019, 20:46:10

Thanks in advance!

papr 14 November, 2019, 20:47:06

To my knowledge this should be sufficient. Does the example script set up a logger?

papr 14 November, 2019, 20:47:12
import logging
logging.basicConfig()
user-f8c051 14 November, 2019, 20:47:25

yeh

papr 14 November, 2019, 20:47:46

logging.basicConfig() is the important part here

papr 14 November, 2019, 20:48:36

You should see errors/warning if you modify controls that are not implemented by the microscope

papr 14 November, 2019, 20:48:51

Maybe it helps to set debug logs? logging.basicConfig(level=logging.DEBUG)

user-f8c051 14 November, 2019, 20:49:45

let me try

user-f8c051 14 November, 2019, 20:51:23

no errors

user-f8c051 14 November, 2019, 20:51:30

trying with Brighness now

user-f8c051 14 November, 2019, 20:51:35

Brightness

user-f8c051 14 November, 2019, 20:51:48

should be more visible

user-f8c051 14 November, 2019, 20:52:31

yes, if I mistype the property name I do get an error

papr 14 November, 2019, 20:52:43

Brightness is one of the post-processing controls IIRC. Try exposure time first

user-f8c051 14 November, 2019, 20:52:46
Traceback (most recent call last):
  File "/Users/yurikleb/Projects/Stablescope/uvc_control_tests.py", line 18, in <module>
    controls_dict['Brightnesss'].value = -100
KeyError: 'Brightnesss'
user-f8c051 14 November, 2019, 20:52:55

ok

papr 14 November, 2019, 20:53:27

That is a crash. That is different from the error log messages that you get when you try to set a control that is not implemented on the hardware

user-f8c051 14 November, 2019, 20:54:46

Chat image

user-f8c051 14 November, 2019, 20:55:06

those are the controls Im getting

papr 14 November, 2019, 20:55:25

Yeah, these are those that are implemented in pyuvc

user-f8c051 14 November, 2019, 20:56:24

but in that file I actually see more controls, like focus etc.

papr 14 November, 2019, 20:57:57

Turns out, pyuvc checks for available controls: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L639

papr 14 November, 2019, 20:58:06

Didn't know that πŸ˜…

user-f8c051 14 November, 2019, 20:58:13

lol

user-f8c051 14 November, 2019, 20:59:47

so before I found pyuvc I was using https://github.com/joelpurra/uvcc and it actually showed me a bit more controls, like focus, and I could change them

user-f8c051 14 November, 2019, 21:00:25

now Im trying to replicate the same functionality with pyuvc

papr 14 November, 2019, 21:01:28

This looks like a great tool!

papr 14 November, 2019, 21:01:49

now Im trying to replicate the same functionality with pyuvc

makes sense

user-f8c051 14 November, 2019, 21:02:12

These are the controls I get with uvcc

[
  "absoluteExposureTime",
  "absoluteFocus",
  "absolutePanTilt",
  "absoluteZoom",
  "autoExposureMode",
  "autoExposurePriority",
  "autoFocus",
  "autoWhiteBalance",
  "backlightCompensation",
  "brightness",
  "contrast",
  "gain",
  "saturation",
  "sharpness",
  "whiteBalanceTemperature"
]
user-f8c051 14 November, 2019, 21:06:45

I think pyuvc might be getting some of the controls wrong πŸ™„

papr 14 November, 2019, 21:07:21

Definitively possible. Feel free to submit a PR with a fix if you find a bug πŸ˜‰

papr 14 November, 2019, 21:08:48

I think pyuvc might be getting some of the controls wrong πŸ™„

But are you seeing any kind of effect?

user-2be752 14 November, 2019, 21:09:02

Another quick question: I've noticed that every time I calibrate during a recording, there are 3 notifications: 1. calibration.calibration_data, 2. calibration.calibration_successful (or sth of the sort), and 3. calibration.calibration_data. Is this 3rd one there on purpose, to mark the end of the calibration? Or is it some kind of mistake?

user-f8c051 14 November, 2019, 21:09:08

not really, just tried a few properties

user-f8c051 14 November, 2019, 21:09:21

thats my code

from __future__ import print_function
import uvc
import logging
import cv2

logging.basicConfig(level=logging.DEBUG)

dev_list = uvc.device_list()
print(dev_list)
cap = uvc.Capture(dev_list[0]["uid"])


controls_dict = dict([(c.display_name, c) for c in cap.controls])
controls_dict['Brightness'].value = -100
# controls_dict['Auto Exposure Mode'].value = 1
# controls_dict['White Balance temperature'].value = 2000

# print(cap.avaible_modes)
# print(controls_dict)

for x in range(1):
    print(x)
    cap.frame_mode = (640, 480, 30)
    for x in range(100):
        frame = cap.get_frame_robust()
        # print(frame.img.shape)
        cv2.imshow("img", frame.bgr)
        cv2.waitKey(1)
cap = None
user-f8c051 14 November, 2019, 21:09:38

was hoping to get a dark img

papr 14 November, 2019, 21:10:03

@user-f8c051 Maybe you can download Pupil Capture and play with the ui? https://github.com/pupil-labs/pupil/releases This might make it easier to interact with the microscope for now.

user-f8c051 14 November, 2019, 21:10:19

interesting idea!

papr 14 November, 2019, 21:12:50

@user-2be752 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/recorder.py#L350-L361 At recording start, the recorder adds the last known calibration to the recording. This way you can remap your gaze, even if you did not record the calibration procedure itself

user-2be752 14 November, 2019, 21:16:53

@papr oh so that first calibration.calibration_data may be from the last known calibration?

papr 14 November, 2019, 21:17:45

correct. you should be able to verify this by looking at its timestamp. It should be prior to the actual recording start

user-2be752 14 November, 2019, 21:18:32

and that would be also followed by the calibration successful notification? or only calibrations during the recording have that part?

papr 14 November, 2019, 21:19:17

Recorder only adds calibration_data

user-2be752 14 November, 2019, 21:20:47

great, thanks!!!

user-f8c051 14 November, 2019, 21:23:47

@papr the Pupil software seems to work with the scope, but the controls behave a bit strangely: there is no focus control, which I actually need (it works in uvcc), and white balance updates but resets itself. Most controls do behave OK, though.

papr 14 November, 2019, 21:25:44

Interesting. I will give uvcc a try on our own devices on Monday.

user-f8c051 14 November, 2019, 21:25:51

thank you!!!

user-f8c051 14 November, 2019, 21:27:48

by the way, what is the pupil service app used for?

papr 14 November, 2019, 21:30:27

It is basically the same as Capture, but it does not map gaze in batches; it maps as soon as there is data available. It comes with the limitation that it only supports a limited set of plugins. Recording data, for example, is not possible.

user-f8c051 14 November, 2019, 21:31:27

oh ok, yeh just found the documentation for it: https://docs.pupil-labs.com/core/software/pupil-service/#talking-to-pupil-service

user-f8c051 14 November, 2019, 21:42:57

@papr the controls do seem to work if I apply them on every frame:

    cap.frame_mode = (640, 480, 30)
    for x in range(300):
        controls_dict['Brightness'].value = 0
        controls_dict['White Balance temperature'].value = 2000
        frame = cap.get_frame_robust()
        # print(frame.img.shape)
        cv2.imshow("img", frame.bgr)
        cv2.waitKey(1)
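The per-frame reapplication above could be factored into a small helper. Control display names differ per device, and whether re-setting each frame is actually necessary seems device-dependent; this is an untested sketch of the pattern, not pyuvc API:

```python
def apply_controls(controls_dict, desired):
    """(Re)apply desired UVC control values; call once per captured frame.

    Some devices appear to silently reset controls between frames, so
    re-pinning them each frame works around that. Returns the names of
    controls that were actually (re)written this call.
    """
    changed = []
    for name, value in desired.items():
        ctrl = controls_dict.get(name)
        if ctrl is None:
            continue  # this device does not expose the control
        if ctrl.value != value:
            ctrl.value = value  # only write when the device drifted
            changed.append(name)
    return changed
```

Usage inside the capture loop would look like apply_controls(controls_dict, {"Brightness": 0, "White Balance temperature": 2000}) just before get_frame_robust(); logging the returned names also reveals which controls the device keeps resetting.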
user-f8c051 14 November, 2019, 21:51:08

only need to figure out the focus control 😁

papr 14 November, 2019, 21:52:17

the controls do seem to work if I apply them on every frame

Interestingly, Capture's UI does this implicitly, too...

user-f8c051 14 November, 2019, 21:52:46

Apply the control values on each frame?

papr 14 November, 2019, 21:53:59

Yeah, but I am not 100% sure right now. Capture's ui works by binding to an attribute of an object, e.g. ui.Slider("value", controls_dict['Brightness'])

papr 14 November, 2019, 21:54:32

And in order to stay synced with it, it fetches its value on each frame and only changes it if it was changed by the user.

papr 14 November, 2019, 21:54:58

So no, Capture does actually not set it each frame πŸ€”

papr 14 November, 2019, 21:56:12

what happens if you set the values only every second loop iteration? Do you get alternating images?

user-f8c051 14 November, 2019, 21:56:33

let me try

papr 14 November, 2019, 21:57:22
for idx, x in enumerate(range(300)):
    if idx % 2 == 0:
        ...
user-f8c051 14 November, 2019, 21:57:53

yeh πŸ™‚

user-f8c051 14 November, 2019, 22:00:56

Brightness seems to stay stable

user-f8c051 14 November, 2019, 22:01:06

White balance is changing

papr 14 November, 2019, 22:02:29

Changing meaning alternating between two values, or something smooth in between? This might be just the automatic white balancing of the microscope?

user-f8c051 14 November, 2019, 22:03:20

trying to disable the AWB

user-f8c051 14 November, 2019, 22:06:36

if i set AWB to 1 controls_dict['White Balance temperature,Auto'].value = 1

then it ignores controls_dict['White Balance temperature'].value = 2000

user-f8c051 14 November, 2019, 22:07:10

oh wait another test...

user-f8c051 14 November, 2019, 22:11:00

never mind :) yeh so WB control does behave very strange

user-f8c051 14 November, 2019, 22:11:44

also in Pupil Capture, it updates for a frame/few frames when i drag the scroller

user-f8c051 14 November, 2019, 22:11:51

then resets

user-f8c051 14 November, 2019, 22:17:27

it seems that some settings work only when applied on each frame inside the for loop

user-f8c051 14 November, 2019, 22:18:09

brightness also seems to work only when inside the for loop

papr 14 November, 2019, 22:18:40

Could you share a successfully captured image? Just to get a reference for what you are looking for.

user-f8c051 14 November, 2019, 22:19:54

contrast as well

user-f8c051 14 November, 2019, 22:20:28

Chat image

user-f8c051 14 November, 2019, 22:20:33

Chat image

user-f8c051 14 November, 2019, 22:20:48

this is my finger with different WB settings, captured via pyuvc

papr 14 November, 2019, 22:21:28

Cool!

user-f8c051 14 November, 2019, 22:21:35

yes!

user-f8c051 14 November, 2019, 22:21:54

focus was applied via the Scope hardware button

papr 14 November, 2019, 22:22:17

Does it trigger auto-focus?

user-f8c051 14 November, 2019, 22:22:19

but its for sure controllable via UVC

user-f8c051 14 November, 2019, 22:22:43

I could not control focus with pyuvc

papr 14 November, 2019, 22:23:06

can you read it out while the scope hardware focuses?

user-f8c051 14 November, 2019, 22:23:19

but via uvcc i can send an autofocus command

user-f8c051 14 November, 2019, 22:23:55

no, nothing appears in the python log

papr 14 November, 2019, 22:24:25

In the log? Are you explicitly fetching its value every loop iteration?

user-f8c051 14 November, 2019, 22:24:39

no

user-f8c051 14 November, 2019, 22:24:51

ah now i got your question

user-f8c051 14 November, 2019, 22:25:30

not sure how to read the serial when i press the focus button on the scope, im not sure it sends any data even

papr 14 November, 2019, 22:26:27
for x in range(300):
    print(controls_dict['autoFocus'].value)
user-f8c051 14 November, 2019, 22:27:54

im not getting an autoFocus key in my dictionary

papr 14 November, 2019, 22:28:25

Ah I misunderstood the control list above

user-f8c051 14 November, 2019, 22:29:20

those are the controls i see via uvcc:

yurikleb-macbook:~ yurikleb$ uvcc --vendor 0x636c --product 0x905e controls
[
  "absoluteExposureTime",
  "absoluteFocus",
  "absolutePanTilt",
  "absoluteZoom",
  "autoExposureMode",
  "autoExposurePriority",
  "autoFocus",
  "autoWhiteBalance",
  "backlightCompensation",
  "brightness",
  "contrast",
  "gain",
  "saturation",
  "sharpness",
  "whiteBalanceTemperature"
]
user-f8c051 14 November, 2019, 22:30:24

but pyuvc shows:

DEBUG:uvc:Found device that mached uid:'20:6'
DEBUG:uvc:Device '20:6' opended.
DEBUG:uvc:avaible video modes: [{'size': (1280, 720), 'rates': [30]}, {'size': (720, 480), 'rates': [30]}, {'size': (640, 480), 'rates': [60, 30, 29, 28, 25]}, {'size': (1920, 1080), 'rates': [20]}]
DEBUG:uvc:Adding "Zoom absolute control" control.
DEBUG:uvc:Adding "Backlight Compensation" control.
DEBUG:uvc:Adding "Brightness" control.
DEBUG:uvc:Adding "Contrast" control.
DEBUG:uvc:Adding "Gain" control.
DEBUG:uvc:Adding "Power Line frequency" control.
DEBUG:uvc:Adding "Hue" control.
DEBUG:uvc:Adding "Saturation" control.
DEBUG:uvc:Adding "Gamma" control.
DEBUG:uvc:Adding "White Balance temperature" control.
DEBUG:uvc:Adding "White Balance temperature,Auto" control.
DEBUG:uvc:Setting mode: 640,480,30
Estimated / selected altsetting bandwith : 309 / 3072. 
!!!!Packets per transfer = 32 frameInterval = 333333
DEBUG:uvc:Stream start.
DEBUG:uvc:Stream stopped
papr 14 November, 2019, 22:33:36

I guess you will have to start debugging a bit πŸ˜• This is the section that parses available controls in pyuvc: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L616-L635

papr 14 November, 2019, 22:33:58

pyuvc probably does something different than uvcc
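
A rough way to compare the two control lists above is a name diff (hypothetical helper; the normalization is a heuristic that bridges the camelCase vs display-name styles and will not catch every difference, e.g. "absoluteZoom" vs "Zoom absolute control"):

```python
def missing_controls(uvcc_names, pyuvc_names):
    """Return uvcc control names that pyuvc did not expose.
    Comparison ignores case, spaces, commas and the word 'absolute'
    to roughly bridge the two naming conventions."""
    def norm(name):
        name = name.lower().replace(" ", "").replace(",", "")
        return name.replace("absolute", "")

    have = {norm(n) for n in pyuvc_names}
    return [n for n in uvcc_names if norm(n) not in have]
```

For instance, "whiteBalanceTemperature" and "White Balance temperature" normalize to the same key, while "autoFocus" shows up as missing from the pyuvc list.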

user-f8c051 14 November, 2019, 22:34:54

😱

user-f8c051 14 November, 2019, 22:35:10

ok Ill keep tinkering tomorrow πŸ™‚

user-f8c051 14 November, 2019, 22:35:47

so each time i change something in uvc.pyx I need to re-compile and re-install?

papr 14 November, 2019, 22:36:37

Correct. πŸ˜• But look, output_terminal is not being parsed at all. Maybe you can start by checking the output terminals?

user-f8c051 14 November, 2019, 22:38:06

omg reverse engineering terminal outputs... that's gonna be a fun weekend

user-f8c051 14 November, 2019, 22:38:11

lol

papr 14 November, 2019, 22:39:01

Who does not love some Cython debugging? πŸ˜„ Paging @user-0f7b55

user-f8c051 14 November, 2019, 22:39:40

😱 🀣

user-f8c051 14 November, 2019, 22:39:50

I'll call it a day

user-f8c051 14 November, 2019, 22:40:02

Thanks for the Support!!!

papr 14 November, 2019, 22:40:13

Me, too, have a nice evening.

user-f8c051 14 November, 2019, 22:40:16

much appreciated!

user-2be752 15 November, 2019, 08:18:24

@user-c5fb8b good news, I managed to write code that calls the surface_tracker_online recent_events function on every frame (as you suggested). Struggled a bit with creating the event dicts, but found out there is a class Frame in fake_backend.py that does the job. I'm now passing the resulting markers into surface_tracker_online.on_add_surface_click(). I'm thinking of passing the ones that belong to each surface separately, that way creating as many surfaces as we predefine. I'll make this publicly available in our repository in case it's useful for anybody πŸ™‚ thank you so much for your help!!!

user-c5fb8b 15 November, 2019, 08:56:35

@user-2be752 Glad we could help! Looking forward to seeing your solutions πŸ™‚

user-57b8ce 21 November, 2019, 18:21:34

Hi there, I somehow recall that pupil labs had the offer of replacing the front facing camera with a rgbd camera, but I don't find it on the webpage. Is this still an option? Thanks in advance.

papr 21 November, 2019, 19:52:47

@user-57b8ce There is no built-in rgbd option anymore, correct. Instead, you can buy a headset with a USB-c mount on which you can mount the rgbd camera of your choice.

user-57b8ce 21 November, 2019, 21:53:21

@papr That sounds great. Thanks for the quick response too πŸ™‚

wrp 22 November, 2019, 02:10:41

@user-57b8ce this configuration of Pupil Core https://pupil-labs.com/cart/?pupil_wusbc_e200b=1 is an example of what @papr was referring to.

user-57b8ce 22 November, 2019, 12:22:41

@wrp Thanks for the link. Yes, I have seen it. But we already have a normal binocular tracker, and it seems costly to buy another whole tracker. Is there any option of adding the rgbd camera mount as a plugin?

papr 22 November, 2019, 12:26:30

@user-57b8ce @user-755e9e might be able to answer this

user-755e9e 22 November, 2019, 13:11:57

@user-57b8ce the High Speed Pupil Core Binocular has a JST SH1 connection for the world camera and uses USB2.0 technology. For this reason it cannot work with a commercial rgbd camera.

user-57b8ce 22 November, 2019, 20:32:22

Alright, got it. Thanks a lot to both of you, especially for the fast responses πŸ™‚

user-c9d205 24 November, 2019, 13:00:38

How do I fetch fixations from the python API?

papr 24 November, 2019, 13:29:33

@user-c9d205 If you want to receive fixations via the network api, you can modify this example script to subscribe to fixation instead of pupil in line 23: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
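
For reference, a minimal sketch of that modification (assumes Pupil Capture is running locally with Pupil Remote on its default port 50020; `topic_matches` and `listen_for_fixations` are names made up for this example):

```python
def topic_matches(topic, prefix):
    """ZMQ SUB sockets filter messages by prefix match on the topic
    frame, so subscribing to 'fixation' also matches 'fixations'."""
    return topic.startswith(prefix)


def listen_for_fixations(ip="127.0.0.1", port=50020):
    """Connect to Pupil Remote, look up the SUB port, and print
    incoming fixation messages. Requires pyzmq and msgpack plus a
    running Pupil Capture instance with Pupil Remote enabled."""
    import zmq       # third-party: pyzmq
    import msgpack   # third-party: msgpack

    ctx = zmq.Context()

    # Pupil Remote (REQ/REP) tells us which port to subscribe on
    requester = ctx.socket(zmq.REQ)
    requester.connect(f"tcp://{ip}:{port}")
    requester.send_string("SUB_PORT")
    sub_port = requester.recv_string()

    # subscribe to the fixation topic on the IPC backbone
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://{ip}:{sub_port}")
    subscriber.subscribe("fixation")

    while True:
        topic, payload = subscriber.recv_multipart()
        message = msgpack.loads(payload)
        print(topic.decode(), message)
```

The only difference from the linked filter_messages.py example is the subscription prefix passed to the SUB socket.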

user-5fa537 27 November, 2019, 17:10:44

@papr - I'm having trouble building your pyav here in linux land. Figured that, before I dig forever into it, I'd check if there was some known thing that I'm missing. Searched discord history, etc, nothing obvious. I think it's just a broken ffmpeg install on my behalf. Well, more specifically, because I'm using anaconda I just brute-force-installed from conda-forge, so I'm good πŸ™‚ presuming y'all haven't made any changes that are crucial

papr 27 November, 2019, 17:11:35

@user-5fa537 What does python -c "import av; print(av.__version__)" give you?

papr 27 November, 2019, 17:12:17

@user-5fa537 There are some additional features, that we implemented, especially for Player, that are not present in the official package... e.g. buffered decoding

papr 27 November, 2019, 17:12:56

What Linux are we talking about btw? πŸ™‚

user-5fa537 27 November, 2019, 17:31:28

(pupil) [email removed] ξ‚° ~ ξ‚° python -c "import av; print(av.__version__)"
6.2.0

user-5fa537 27 November, 2019, 17:31:47

ubuntu 18.04

papr 27 November, 2019, 17:32:35

Yeah, that is the wrong version. You might have problems running Capture, and definitely when running Player. What is the issue that you had? Did you install ffmpeg via apt?

user-5fa537 27 November, 2019, 17:33:09

yeah- which is strange, because it couldn't find a few libs...

papr 27 November, 2019, 17:33:48

Please uninstall the conda-forge av and start over. We might be able to help you find these libs.

user-5fa537 27 November, 2019, 17:33:53

I'm off to lunch with the step-father, I'll inquire more closely after I get back after a few pints, etc. Always a few pints at the American Turkey holiday

user-5fa537 27 November, 2019, 17:34:01

ok- will do!

papr 27 November, 2019, 17:34:43

We will probably not be available when you come back, Berlin timezone πŸ˜‰ But just post the missing dependencies here and we can come back to you tomorrow morning.

user-5fa537 27 November, 2019, 17:35:01

coolness- I'd rather be in Berlin, or even Gießen

user-5fa537 27 November, 2019, 17:35:14

thus, intoxication

user-65eab1 30 November, 2019, 11:16:24

hi all, i am new on pupil labs. I have downloaded pupil capture, player and service. I calibrate after using pupil capture. However, I do not know how to get data from pupil capture. Can you help me to get real time data from pupil capture application? (on windows 10 machine)

End of November archive