Hi, I have a question concerning the Network API and streaming gaze data (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format). We have the binocular Core tracker with 120Hz for each eye camera, so I expected gaze data to be streamed at 120Hz (combining the 2x120Hz). However, in my software, I receive gaze data at 240Hz with each gaze datum containing two pupil datums. Is it intended to stream at 240Hz? As it seems to be redundant, how can I extract the non-redundant 120Hz signal? (using Pupil Capture 1.17.6). Thanks in advance for your answer!
Hi there, is there anywhere I can get more documentation on the structure of the new surface tracker? I'm trying to go over the code to integrate it into my design, but I'm having a bit of trouble differentiating each of the functions. Thanks in advance!
@user-87c4eb Pupil Capture only matches two pupil datums into a binocular gaze datum if the confidence of both data points is sufficiently high (0.6) and you have already calibrated successfully. In your case, you see two monocular gaze data streams, each running at 120Hz
> I receive gaze data at 240Hz with each gaze datum containing two pupil datums
Do they include one or two pupil datums? Could you check and share the topic of the gaze data that you receive?
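To check, here is a minimal sketch along the lines of the pupil-helpers scripts (this assumes Pupil Remote is running on its default port 50020; adapt as needed):
import zmq
import msgpack

ctx = zmq.Context()
# ask Pupil Remote for the port of the IPC backbone publisher
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # topics ending in .01. are binocular, .0./.1. are monocular;
    # base_data holds the pupil datums the gaze datum was mapped from
    print(topic.decode(), len(datum["base_data"]))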
@user-2be752 Unfortunately, there is no such documentation. 🙂 Are you looking to modify something in particular? In this case I might be able to give pointers.
@papr No worries, you guys are super helpful! I've developed my own apriltag detection and a way of matching each tag id to one of 3 surfaces (each one defined by 4 markers), but now I think it might be easier to just use your code. However, I don't seem to find where the actual tag detection is happening. If you could point me to the part of the code where the marker detection and surface matching happen, that'd be very helpful. Otherwise, if you could point me to where I could pass in my definition of the surfaces by 4 markers, that could also work.
Hi! I'm trying to run the GazeRaysDemoScene in unity but it doesn't seem to be working (all other demos work perfectly fine). I get an error saying: "NullReferenceException: Object reference not set to an instance of an object PupilLabs.DisableDuringCalibration.Awake()". Any idea of how to solve it?
@user-d77d0f please post unity related questions in 🥽 core-xr :)
Hi @papr! Why is the value of fixation_id the same, but the world index and position change? I want to use the data for areas of interest. Thanks
Hi @wrp! Why is the value of fixation_id the same, but the world index and position change? I want to use the data for areas of interest. Thanks
@user-40621b A fixation is made up of a cluster of gaze positions. It appears that what you are seeing in fixations_on_surface_<surface_name>.csv are the gaze positions that make up each fixation. Example from your screenshot:
Fixation ID = 74
Fixation starts at world_index frame = 492
Fixation ends at world_index frame = 499
Multiple datums: You could take the mean of the gaze norm_pos_x and norm_pos_y if you want to represent a single position for the fixation.
@wrp, thanks for your response. Can you tell me the fastest way to compute the mean for each fixation? I have many data files.
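If you can use Python, pandas makes this quick. A minimal sketch, assuming the column names from the export discussed above (the folder layout and file names are placeholders for your own setup):
from pathlib import Path

import pandas as pd

# average the gaze positions within each fixation, for every export file
for csv_path in Path("exports").glob("fixations_on_surface_*.csv"):
    df = pd.read_csv(csv_path)
    means = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]].mean()
    means.to_csv(csv_path.with_name(csv_path.stem + "_means.csv"))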
@papr Thanks for your answer. I looked into it again: when subscribing to "gaze.3d.01." I get between 1500 and 1800 samples in 10 seconds. So it's not 240Hz, but something between 150 and 180Hz. Maybe the eye cameras deliver more than 120Hz? My FPS views for the cameras jump between 90 and 180Hz...
@papr any thoughts or clues of where I could find this?
@user-87c4eb Could you share your code such that we can try to reproduce the experiment?
@user-87c4eb We were able to reproduce the issue and fixed it here: https://github.com/pupil-labs/pupil/pull/1728
Hi @papr . Great news! Thanks for the update.
@user-87c4eb Please be aware that we are still evaluating the new behaviour. This change will take a few days before it is merged if at all. I will keep you posted.
Hey! My name is Richard and I'm setting up a Pupil Labs eye tracker for a project in UC Berkeley's Whitney Lab. Right now, we're trying to use the surface_tracker module to detect three surfaces using 12 apriltag markers (we're using it directly in the code, not through Pupil Player). I currently have all the markers saved into the marker_cache of a Surface_Tracker object, but I'm confused as to how I can detect the presence of one or more surfaces given the markers I saw for that frame (we're allowing users to move their head during recording, so the surfaces could change every frame). Does anyone have any pointers or tips?
@user-c5fb8b Could you please give @user-bb648c and @user-2be752 pointers regarding the surface code structure when you are back in the office?
Hi @user-2be752 and @user-bb648c, let me give you a quick overview of the Surface_Tracker structure. I'll do a combined answer for both of you since there's a lot of overlap.
Marker detection for the Surface_Tracker is happening in surface_marker_detector.py, see here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_marker_detector.py#L232-L267
Note that self._detector is an instance of the pupil-apriltags detector. The code for this is not in Pupil, but here: https://github.com/pupil-labs/apriltags
The entry point for the main logic is in recent_events of the surface tracker base class: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L423-L432
Here essentially the 3 _update_* functions run all the code. The implementations differ a bit between surface_tracker_online and surface_tracker_offline though. Online works in Capture and tries to optimize for real-time speed, while offline works in Player and calculates an accurate marker cache for the entire video at the beginning in a background thread.
For an easy overview of how the pipeline works, I'd recommend you take a look at surface_tracker_online and surface_online, as there are no caches and background threads, which makes it a lot clearer. Basically the tracker computes the marker locations in a frame and then passes these on to its defined surfaces with Surface.update_location (see implementations in derived classes): https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L277-L280
The surface stores an internal "definition". That is, every surface needs one definition frame with the information about which markers correspond to which surface coordinates. In Pupil you set these definitions by dragging the surface corners in the UI. We store both distorted and undistorted surface locations: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L76-L88
Serialization/deserialization of the surface definitions is managed by surface_file_store.py, which is part of the Surface_Tracker base class and works independently of online/offline mode.
Hey sports fans... still having some issues getting the Core to work w/ the RealSense 435i on Linux
I can get the sensor to work independently, have the right 'stuff' installed in the python-verse
but capture never 'sees' the sensor.
tried as root, also have sacrificed a chicken and am presently doing a voodoo dance.
eyecams are 'seen' no problem
doesn't work on dedicated iron or VM
one minor thing- the capture software 'seems' to want me to install -both- pyrealsense -and- pyrealsense2. Only 2 supports the 4xx series IIRC
world - [INFO] launchables.world: Application Version: 1.18.35
world - [INFO] launchables.world: System Info: User: flip, Platform: Linux, Machine: gibson, Release: 4.15.0-66-generic, Version: #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
later, it asks for pyrealsense2 to be installed, but it is there in my global python install. Does it need to be localized into the app, or does the app use a special PYTHONPATH thingie perchance?
@user-5fa537 The Realsense D400 backend needs pyrealsense2, the Realsense R200 backend needs the pyrealsense module
yep- installed into the system python or some other 'secret' place 🙂?
@user-5fa537 If you are running from bundle, the global python env is ignored. The bundle has its own env.
ahh- OK.
@user-5fa537 You need to install pyrealsense2 into ~/pupil_capture_settings/plugins
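For example, something like this should work if you use pip (untested sketch; adjust the path to your setup): pip install pyrealsense2 --target ~/pupil_capture_settings/plugins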
I suspected this!
@papr thank you! shall do-
I will work on an IMU plugin once I get it working.
we need the data for some ML stuff that Gabe's group at RIT did
so, nothing like necessity being the Mofo of invention and all that
@papr - py 2.7 or 3.x btw?
@user-5fa537 3.6
@papr 🙂 thanks. figured, but confirmation beats all that 🙂
@papr Hi! Is there any way to do the calibration step directly in a secondary screen? Thanks!
@user-ff9c49 You can display the screen calibration markers on a different monitor by selecting the correct monitor in the calibration menu
Hey all!! I'm having some trouble installing pyuvc. I think @papr already helped me overcome a few issues on GitHub, but I'm still stuck.
I'm on Catalina, and getting the following error
uvc.c:15558:103: error: too many arguments to function call, expected 4, have 5
__pyx_v_status = uvc_stream_start(__pyx_v_self->strmh, NULL, NULL, __pyx_v_self->_bandwidth_factor, 0);
when running python3 setup.py install
@user-f8c051 BTW I have not been able to get Pupil to run from source on Catalina yet
ah
but I'm actually not trying to run Pupil, I'm just trying to use pyuvc to control a microscope lol
although Pupil is awesome! I love it and used it for different research
Ah, cool! But sad to hear that there are more problems than installing pyglui on Catalina...
I ended up installing libuvc via brew and now I'm just stuck compiling pyuvc
any ideas what might be the source of that error?
@user-c5fb8b wow this is great! A few questions: recent_events takes self and events; could you explain what exactly is passed in here? Also, if I understood correctly, for us to define a surface without the GUI we then need to just pass the markers that make up a particular surface to Surface.update_location?
@user-f8c051 isn't this the same exact error as before?
@papr the error I posted on GitHub was regarding compiling libuvc
but then I just installed libuvc via brew and now I'm stuck compiling pyuvc
You need to compile our libuvc, else it won't work.
You should only install turbojpeg and libusb via brew
ah, I suspected that might be the issue
but your version of libuvc doesn't seem to compile on Catalina
or maybe I'm still missing something
I'm not so good with CMake
ok let me try the fixed branch
with these instructions I am actually able to build libuvc on my Catalina machine
I'm getting
CMake Error at /Applications/CMake.app/Contents/share/cmake-3.16/Modules/FindPkgConfig.cmake:511 (message):
pkg-config tool not found
Call Stack (most recent call first):
/Applications/CMake.app/Contents/share/cmake-3.16/Modules/FindPkgConfig.cmake:643 (_pkg_check_modules_internal)
CMakeLists.txt:32 (pkg_check_modules)
brew install pkg-config
ok seemed to work
will try to compile pyuvc now
That should work out of the box
compiled!!! yay!
Seems to work!!! yay!! will post the solutions on github under the issues i opened!
Great, thank you. Please close it afterwards 🙂
Many thanks for the quick response!!!
@papr one last question, is there a documentation somewhere for pyuvc? Listing all the methods?
no, not really. I can only recommend having a look at the source code https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx
Alternatively, you can always use the dir(obj) function to list all available attributes of an object obj
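For example, using only the calls that appear earlier in this chat:
import uvc

# open the first UVC device and inspect what pyuvc exposes on it
cap = uvc.Capture(uvc.device_list()[0]["uid"])
print(dir(cap))  # all attributes/methods of the Capture object
print([c.display_name for c in cap.controls])  # names of the available controls
cap = None  # release the device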
alright will tinker around! thanks again!
@user-c5fb8b A bit of an update: per your awesome tips, we now have code that first detects all markers in each frame and stores them in an object tracker.marker_cache (tracker being surface_tracker_offline.Surface_Tracker_Offline(g_pool)). Then we loop through the frames and in each iteration we call surface_offline.update_location(), passing the frame index and the markers detected in that frame. Still, my problem is: how is this function going to know exactly which markers make up each surface? I guess I can only pass those markers that make up a particular surface in a bigger loop.
My other problem is that when I pass the markers into surface_offline.update_location() I get this error:
File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 97, in update_location
self.update_location(frame_idx, marker_cache, camera_model)
File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 84, in update_location
self._fetch_from_location_cache_filler()
File "/pupil/pupil_src/shared_modules/surface_tracker/surface_offline.py", line 146, in _fetch_from_location_cache_filler
for cache_idx, location in self.location_cache_filler.fetch():
File "/pupil/pupil_src/shared_modules/background_helper.py", line 115, in fetch
raise datum
AssertionError: push_url
was not set by foreground process
I would appreciate any help 🙂
Hi @user-2be752 glad I could help.
The recent_events() function gets called every frame for all plugins. The dictionary events that is passed as a parameter is basically a global dictionary that collects all data from all plugins. The plugins have a specific order, so e.g. the pupil detectors get called first and store their result data in events. Later plugins in the same frame can then use that information from events and do more processing with the data from the previous plugins.
In Pupil the only thing that basically happens is that tracker.recent_events() gets called every frame. The events dict contains information from the video sources, so it also contains the images. Here you can see that the surface tracker base class then does all the necessary further steps in recent_events(): https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L423-L432
I assume you could either try to just use recent_events(), where you mock up an events dict. I think you will only need the frame key in events:
for img in video:
    # img should be a numpy array
    events = {"frame": img}
    tracker.recent_events(events)
Note that this is untested :D
The other way would be to recreate the logic that you find in the surface tracker base recent_events() yourself and call that for every frame.
@user-2be752 Looking a bit more at the error you are getting, I fear this might come from our multi-threading setup in Pupil.
Most of the background tasks rely on the IPC (inter process communication) setup that we set up when starting pupil (in main.py).
Without this setup it might not be easily possible to use the background processing that e.g. surface_tracker_offline uses for filling the cache.
In that case you would have to either try the online tracker or re-engineer the caching procedure to not work asynchronously.
Actually I think you might not even need the caches, as they are only needed for being able to search through the video quickly without delay. In case you just want to process a video only once from front to back you can "just" omit the caching and process the markers immediately.
I'd recommend trying to make the online surface tracker work first and then see if that's already sufficient for your purposes.
if I'm not using the GUI at all, in what way are the offline and online versions different?
@user-2be752 Mostly the offline version does caching, which allows it to quickly jump to a specific frame without delays.
But also since we are caching, we can do more intense processing in the offline detector. A parameter for detection is min_marker_perimeter, which basically controls how tiny the markers are that we search for. Setting this parameter smaller will result in more markers detected, but it will also take longer.
You can however also set this parameter for the online detector.
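In code, that could look like this; the attribute name below is taken from the parameter name mentioned above and may differ in the actual sources, so treat it as a sketch:
# hypothetical attribute name, mirroring the parameter discussed above
tracker.min_marker_perimeter = 60  # larger value = faster, but only bigger markers are found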
I see... the nomenclature is pretty similar for both, right? I can try with the online version and see if it's more straightforward
Yes please try this. A lot of the functions come from the base class Surface_Tracker, so they are the same in both online and offline.
otherwise, how would I be able to omit the caching?
Well. You would have to rewrite the code of the offline tracker 🙂
oh no 🙂 I'd much rather use the code, it's so useful!
But I will think about a better solution in that case. But no promises.
sounds good, I'll try tomorrow (I'm in US time) with the online version, cross fingers!
one more thing, we are also trying to write a recalibration function so we can recalibrate offline... there used to be a function called calibrate_and_map in the old API in gaze_producers, do you have an idea of where this may be done in the new API?
Hm, maybe @papr has an idea for this?
@user-87c4eb We have evaluated several gaze matching algorithms and have decided to go for #1731.
You can find the evaluation here: https://nbviewer.jupyter.org/github/pupil-labs/pupil-matching-evaluation/blob/master/gaze_mapper_evaluation.ipynb
@user-2be752 This is the relevant entry point for offline calibration: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/gaze_from_offline_calibration.py#L66
Offline Calibration consists of three steps: 1. Reference location detection 2. Calibration 3. Mapping
Each step has its own controller.
@user-c5fb8b it seems that using tracker_online.recent_events() is working so far! Quick question: what's the difference between g_pool.rec_dir and g_pool.user_dir?
@user-2be752 rec_dir: Directory of the recording, user_dir: ~/pupil_player_settings
You probably only want to care about rec_dir
awesome, thanks!
Hey, it's me again. I've been tinkering with pyuvc a bit, but could not figure out how to apply the control settings.
It works quite well, detects my usb microscope and gets 30fps video feed!
in example.py there are these 3 lines:
# Uncomment the following lines to configure the Pupil 200Hz IR cameras:
controls_dict = dict([(c.display_name, c) for c in cap.controls])
controls_dict['Auto Exposure Mode'].value = 1
controls_dict['Gamma'].value = 200
I tried to modify them to adjust white balance:
controls_dict['White Balance temperature,Auto'].value = 0
controls_dict['White Balance temperature'].value = 2000
but nothing really happened
I guess I'm missing a few more lines to actually apply those settings?
Thanks in advance!
To my knowledge this should be sufficient. Does the example script set up a logger?
import logging
logging.basicConfig()
yeh
logging.basicConfig() is the important part here
You should see errors/warnings if you modify controls that are not implemented by the microscope
Maybe it helps to set debug logs? logging.basicConfig(level=logging.DEBUG)
let me try
no errors
trying with Brightness now
Brightness should be more visible
yes, if I mistype the property name I do get an error
Brightness is one of the post-processing controls IIRC. Try exposure time first
Traceback (most recent call last):
File "/Users/yurikleb/Projects/Stablescope/uvc_control_tests.py", line 18, in <module>
controls_dict['Brightnesss'].value = -100
KeyError: 'Brightnesss'
ok
That is a crash. That is different from the error log messages that you get when you try to set a control that is not implemented on the hardware
those are the controls I'm getting
Yeah, these are those that are implemented in pyuvc
but in that file I actually see more controls, like focus etc.
Turns out, pyuvc checks for available controls: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L639
Didn't know that 🙂
lol
so before I found pyuvc I was using https://github.com/joelpurra/uvcc and it actually showed me a few more controls, like focus, and I could change them
now I'm trying to replicate the same functionality with pyuvc
This looks like a great tool!
> now I'm trying to replicate the same functionality with pyuvc
makes sense
These are the controls I get with uvcc
[
"absoluteExposureTime",
"absoluteFocus",
"absolutePanTilt",
"absoluteZoom",
"autoExposureMode",
"autoExposurePriority",
"autoFocus",
"autoWhiteBalance",
"backlightCompensation",
"brightness",
"contrast",
"gain",
"saturation",
"sharpness",
"whiteBalanceTemperature"
]
I think pyuvc might be getting some of the controls wrong 🙂
Definitely possible. Feel free to submit a PR with a fix if you find a bug 🙂
> I think pyuvc might be getting some of the controls wrong 🙂
But are you seeing any kind of effect?
Another quick question: I've noticed every time I calibrate during a recording, there are 3 notifications: 1. calibration.calibration_data, 2. calibration.calibration_successful (or something of the sort) and 3. calibration.calibration_data. Is this 3rd one there on purpose to mark the end of the calibration? Or is it some kind of mistake?
not really, just tried a few properties
that's my code:
from __future__ import print_function
import uvc
import logging
import cv2

logging.basicConfig(level=logging.DEBUG)

dev_list = uvc.device_list()
print(dev_list)
cap = uvc.Capture(dev_list[0]["uid"])

controls_dict = dict([(c.display_name, c) for c in cap.controls])
controls_dict['Brightness'].value = -100
# controls_dict['Auto Exposure Mode'].value = 1
# controls_dict['White Balance temperature'].value = 2000
# print(cap.avaible_modes)
# print(controls_dict)

for x in range(1):
    print(x)
    cap.frame_mode = (640, 480, 30)
    for x in range(100):
        frame = cap.get_frame_robust()
        # print(frame.img.shape)
        cv2.imshow("img", frame.bgr)
        cv2.waitKey(1)

cap = None
was hoping to get a dark img
@user-f8c051 Maybe you can download Pupil Capture and play with the ui? https://github.com/pupil-labs/pupil/releases This might make it easier to interact with the microscope for now.
interesting idea!
@user-2be752 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/recorder.py#L350-L361 At recording start, the recorder adds the last known calibration to the recording. This way you can remap your gaze, even if you did not record the calibration procedure itself
@papr oh so that first calibration.calibration_data may be from the last known calibration?
correct. you should be able to verify this by looking at its timestamp. It should be prior to the actual recording start
and that would be also followed by the calibration successful notification? or only calibrations during the recording have that part?
Recorder only adds calibration_data
great, thanks!!!
@papr Pupil software seems to work with the scope, but the controls behave a bit strangely: there is no focus control, which I actually need (it works in uvcc), and white balance updates but resets itself... but most controls do behave OK
Interesting. I will give uvcc a try on our own devices on Monday.
thank you!!!
by the way, what is the pupil service app used for?
It is basically the same as Capture, but does not map gaze in batches but as soon there is data available. It comes with the limitation that it only support a limited amounts of plugins. Recording data for example is not possible.
oh ok, yeh just found the documentation for it: https://docs.pupil-labs.com/core/software/pupil-service/#talking-to-pupil-service
@papr the controls do seem to work if I apply them on every frame:
cap.frame_mode = (640, 480, 30)
for x in range(300):
    controls_dict['Brightness'].value = 0
    controls_dict['White Balance temperature'].value = 2000
    frame = cap.get_frame_robust()
    # print(frame.img.shape)
    cv2.imshow("img", frame.bgr)
    cv2.waitKey(1)
only need to figure out the focus control 🙂
> the controls do seem to work if I apply them on every frame
Interestingly, Capture's UI does this implicitly, too...
Apply the control values on each frame?
Yeah, but I am not 100% sure right now. Capture's ui works by binding to an attribute of an object, e.g.
ui.Slider("value", controls_dict['Brightness'])
And in order to stay synced with it, it fetches its value on each frame and only changes it if it was changed by the user.
So no, Capture actually does not set it each frame 🤔
what happens if you set the values only every second loop iteration? Do you get alternating images?
let me try
for idx in range(300):
    if idx % 2 == 0:
        ...
yeh 🙂
Brightness seems to stay stable
White balance is changing
Changing meaning alternating between two values, or something smooth in between? This might be just the automatic white balancing of the microscope?
trying to disable the AWB
if I set AWB to 1
controls_dict['White Balance temperature,Auto'].value = 1
then it ignores
controls_dict['White Balance temperature'].value = 2000
oh wait another test...
never mind :) yeah, so the WB control does behave very strangely
also in Pupil Capture, it updates for a frame/few frames when I drag the slider
then resets
it seems that some settings work only when applied on each frame inside the for loop
brightness also seems to work only when inside the for loop
Could you share a successfully captured image? Just to get a reference for what you are looking for
contrast as well
this is my finger with different WB settings, captured via pyuvc
Cool!
yes!
focus was applied via the Scope hardware button
Does it trigger auto-focus?
but it's for sure controllable via UVC
I could not control focus with pyuvc
can you read it out while the scope hardware focuses?
but via uvcc I can send an autofocus command
no, nothing appears in the python log
In the log? Are you explicitly fetching its value every loop iteration?
no
ah, now I got your question
not sure how to read the serial data when I press the focus button on the scope; I'm not sure it even sends any data
for x in range(300):
    print(controls_dict['autoFocus'].value)
I'm not getting an autoFocus key in my dictionary
Ah I misunderstood the control list above
those are the controls I see via uvcc:
yurikleb-macbook:~ yurikleb$ uvcc --vendor 0x636c --product 0x905e controls
[
"absoluteExposureTime",
"absoluteFocus",
"absolutePanTilt",
"absoluteZoom",
"autoExposureMode",
"autoExposurePriority",
"autoFocus",
"autoWhiteBalance",
"backlightCompensation",
"brightness",
"contrast",
"gain",
"saturation",
"sharpness",
"whiteBalanceTemperature"
]
but pyuvc shows:
DEBUG:uvc:Found device that mached uid:'20:6'
DEBUG:uvc:Device '20:6' opended.
DEBUG:uvc:avaible video modes: [{'size': (1280, 720), 'rates': [30]}, {'size': (720, 480), 'rates': [30]}, {'size': (640, 480), 'rates': [60, 30, 29, 28, 25]}, {'size': (1920, 1080), 'rates': [20]}]
DEBUG:uvc:Adding "Zoom absolute control" control.
DEBUG:uvc:Adding "Backlight Compensation" control.
DEBUG:uvc:Adding "Brightness" control.
DEBUG:uvc:Adding "Contrast" control.
DEBUG:uvc:Adding "Gain" control.
DEBUG:uvc:Adding "Power Line frequency" control.
DEBUG:uvc:Adding "Hue" control.
DEBUG:uvc:Adding "Saturation" control.
DEBUG:uvc:Adding "Gamma" control.
DEBUG:uvc:Adding "White Balance temperature" control.
DEBUG:uvc:Adding "White Balance temperature,Auto" control.
DEBUG:uvc:Setting mode: 640,480,30
Estimated / selected altsetting bandwith : 309 / 3072.
!!!!Packets per transfer = 32 frameInterval = 333333
DEBUG:uvc:Stream start.
DEBUG:uvc:Stream stopped
I guess you will have to start debugging a bit 🙂 This is the section that parses available controls in pyuvc: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L616-L635
pyuvc probably does something different than uvcc
😱
ok, I'll keep tinkering tomorrow 🙂
so each time I change something in uvc.pyx I need to re-compile and re-install?
Correct. 🙂 But look, output_terminal is not being parsed at all. Maybe you can start by checking the output terminals?
omg reverse engineering terminal outputs... that's gonna be a fun weekend
lol
Who does not love some Cython debugging? 🙂 Paging @user-0f7b55
😱 🤣
I'll call it a day
Thanks for the Support!!!
Me, too, have a nice evening.
much appreciated!
@user-c5fb8b good news, I managed to write code that calls the function recent_events from surface_tracker_online in every frame (as you suggested). Struggled a bit with creating the event dicts, but found out there is a class Frame in fake_backend.py that does the job. I'm now passing the resulting markers into surface_tracker_online.on_add_surface_click(). I'm thinking of passing the ones that belong to each surface separately, that way creating as many surfaces as we predefine. I'll make this publicly available in our repository in case it's useful for anybody 🙂 thank you so much for your help!!!
@user-2be752 Glad we could help! Looking forward to seeing your solutions 🙂
Hi there, I somehow recall that Pupil Labs offered to replace the front-facing camera with an RGBD camera, but I can't find it on the webpage. Is this still an option? Thanks in advance.
@user-57b8ce There is no built-in rgbd option anymore, correct. Instead, you can buy a headset with a USB-C mount on which you can mount the rgbd camera of your choice.
@papr That sounds great. Thanks for the quick response too 🙂
@user-57b8ce this configuration of Pupil Core https://pupil-labs.com/cart/?pupil_wusbc_e200b=1 is an example of what @papr was referring to.
@wrp Thanks for the link. Yes, I have seen it. But we already have a normal binocular tracker, and it seems costly to buy another whole tracker. Is there any option of adding the rgbd camera mount as a plugin?
@user-57b8ce @user-755e9e might be able to answer this
@user-57b8ce the High Speed Pupil Core Binocular has a JST SH1 connection for the world camera and uses USB 2.0 technology. For this reason it cannot work with a commercial RGBD camera.
Alright, got it. Thanks a lot to both of you, especially for the fast responses 🙂
How do I fetch fixations from the python API?
@user-c9d205 If you want to receive fixations via the network api, you can modify this example script to subscribe to fixation instead of pupil, in line 23: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
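The change itself is tiny, something like this (the variable name here is illustrative, see the linked script for the actual code):
# subscribe to fixation data instead of pupil data:
subscriber.setsockopt_string(zmq.SUBSCRIBE, "fixation")  # instead of "pupil."
The received payloads are msgpack-encoded fixation datums, which you can decode with msgpack.loads(payload, raw=False).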
@papr - I'm having trouble building your pyav here in linux land. Figured that, before I dig forever into it, I'd check if there was some known thing that I'm missing here. Searched discord history, etc, nothing obvious. I think it's just a broken ffmpeg install on my behalf; well, more specifically, because I'm using anaconda. So I just brute-force-installed from conda-forge, so I'm good 🙂 presuming y'all haven't made any changes that are crucial
@user-5fa537 What does python -c "import av; print(av.__version__)" give you?
@user-5fa537 There are some additional features, that we implemented, especially for Player, that are not present in the official package... e.g. buffered decoding
What Linux are we talking about btw? 🙂
(pupil) [email removed] ~ $ python -c "import av; print(av.__version__)"
6.2.0
ubuntu 18.04
Yeah, that is the wrong version. You might have problems running Capture, and definitely when running Player. What is the issue that you had? Did you install ffmpeg via apt?
yeah- which is strange, because it couldn't find a few libs...
Please uninstall the conda-forged av and start over. We might be able to help you find these libs
I'm off to lunch with the step-father, I'll inquire more closely after I get back after a few pints, etc. Always a few pints at the American Turkey holiday
ok- will do!
We will probably not be available when you come back, Berlin timezone 🙂 But just post the missing dependencies here and we can come back to you tomorrow morning.
coolness- I'd rather be in Berlin, or even Gießen
thus, intoxication
hi all, I am new to Pupil Labs. I have downloaded Pupil Capture, Player and Service. I calibrated using Pupil Capture. However, I do not know how to get data from Pupil Capture. Can you help me get real-time data from the Pupil Capture application? (on a Windows 10 machine)