Has anyone used the .csv tracking data for graphing to validate data from multiple users?
do you know where I can find the Python file that launches Capture? I'm searching in pupil_src but I can only see the ones for Player and Service
@user-c1220d to launch Pupil apps from source on macOS and Linux you can do the following:
# launch pupil capture
python3 pupil/pupil_src/main.py
# launch pupil player
python3 pupil/pupil_src/main.py player
If you are running from source on Windows you will need to use cmd prompt and run the .bat
files. Example to run Pupil Capture from source on Windows:
cd pupil\pupil_src\
run_capture.bat
ok, but I was searching for it in order to read it and try to understand how it works. I don't need to run it, but thanks
@user-c1220d have a look at main.py
It is the launcher that starts all the subprocesses
If you are interested in Capture in particular have a look at world.py
ok, thanks so much. In addition, I would like to get the raw images from the eye cameras on Linux. Is this possible? The problem is that Pupil Player doesn't provide a clear eye camera video
Pupil Player just plays the recorded eye videos. I don't think you will get images better than that. Nonetheless, have a look at the Frame Publisher plugin. You can subscribe to the frame topic after enabling that plugin in order to receive the image data over the IPC.
You can also send an example recording to [email removed] I can have a look at it tomorrow and give feedback if I find anything to improve quality.
I'm talking about the videos of the eyes, not from the world camera. I would like to record the videos from the eyes that I see when I run Capture (eye0 and eye1 videos). Is this possible? Anyway, thank you for your availability
@user-c1220d yes, eye videos are available through the same mechanism.
frame.eye should give you eye video frames only
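For reference, a minimal subscriber sketch in Python (assuming Pupil Remote is running on its default port 50020; this is illustrative, not an official helper script):
# ask Pupil Remote for the SUB port, then subscribe to eye frames
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.eye")  # prefix match: frame.eye0 and frame.eye1

while True:
    # frame messages are multipart: topic, msgpack payload, raw image buffer(s)
    topic, payload, *raw_buffers = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    print(topic.decode(), msg["width"], msg["height"], msg["format"])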
But Capture already records these video streams in the eye0/1 video files. You should be able to open them using VLC.
yes, I have two .mp4 files from eye0 and eye1 but they don't start in VLC. I thought they were there only to allow Pupil Player to work. Actually I can't use them
Could you send them over to [email removed] such that I can have a look at them tomorrow? Do you have ffmpeg installed? Could you check if a simple conversion helps?
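For reference, a simple remux (assuming ffmpeg is installed; the file names are just placeholders) would be something like:
# rewrite the container without re-encoding the video stream
ffmpeg -i eye0.mp4 -c copy eye0_remuxed.mp4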
ok, I'll check and then send them to you if the conversion doesn't work. Thank you again.
ok @papr it works, thank you so much. I thought the files were inaccessible.
just one other question: do you know anything about the crash of Capture when I load the Frame Publisher plugin? It happens in the Linux version
Which version do you use?
yes, I confirm: Capture crashes when you load the Frame Publisher in both the Windows and Linux versions
with pupil 1.8
the problem didn't exist with 1.7
actually no - 1.7 on Linux crashes as well
Thank you, I will try to recreate the issue
I was able to reproduce the issue on 1.8. This was due to a mistake during refactoring the zmq_tools. ;/ I am pushing a fix right now.
Do you have a log from the 1.7 crash? I cannot recreate that one.
actually I can't find it. Where is it normally placed?
In the pupil_capture_settings folder
But the log will contain the crash trace only if you did not start Capture afterwards.
@papr I forgot that the issue is that the debugger interferes with startup. Stuck at: "player - [INFO] gaze_producers: Calibrating section 1 (Cal 1 Tray) in 3d mode..."
No issue when no debugger is attached.
I'll keep on playing. There are a few ways to attach debuggers to subprocesses in PyCharm.
@papr i sent a mail to data@pupil-labs.com with the log, thank you
All, wondering if there's a way to access the video stream from the world view camera directly in Matlab? thanks
In addition to my previous query: I do receive the gaze data in Matlab using the provided ZMQ protocol.
@user-a74d48 here is a reference script that shows how to subscribe to and display video from the world camera in python
- https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
@user-a74d48 I know that this is not matlab code, but hopefully this should provide you with a simple reference that can be re-implemented in matlab.
@papr @wrp Is there a possibility to control contrast, framerate, gamma and such things of the pupil cams through a plugin or network communication?
@user-29e10a You would have to access the UVC_Source plugin directly via self.g_pool.capture from within your custom plugin. The UVC plugin does not have a notification interface.
@papr ok, thanks a lot, so this is a start. I can implement my notification interface myself (I'm familiar with Pupil Labs plugin development)... you don't have an example for this flying around, do you?
Unfortunately not. Check out the self.g_pool.capture.uvc_capture.controls dictionary though. Its elements are wrappers around the device controls and should be accessed via the .value attribute.
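A rough sketch of what that could look like from within a custom plugin (untested; assumes the world backend is a UVC source and that the control is named exactly as in the Capture UI):
# hypothetical plugin that pins the world camera's gamma via the UVC controls
from plugin import Plugin

class Set_Gamma_Example(Plugin):
    def recent_events(self, events):
        uvc_capture = getattr(self.g_pool.capture, "uvc_capture", None)
        if uvc_capture is None:
            return  # capture backend is not a UVC source
        for control in uvc_capture.controls:
            # each element wraps a UVC device control; read/write via .value
            if control.display_name == "Gamma" and control.value != 120:
                control.value = 120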
ok, I will try this tomorrow and come back to you if I struggle
@papr Is it possible that only the world cam is in the self.g_pool.capture?
I need the eye cameras...
That is correct, since plugins run in the world process only.
I think you will have to modify eye.py to run your custom code.
hm, that means I have to bundle it myself, on Windows... Is there a possibility that you could add a communication stack to the eye cameras via the network? Or at least provide the eye cams in g_pool?
or enhance the plugin system to be able to run in the eye processes
Good morning. I'm having trouble executing the mouse_control.py code available on GitHub: the mouse moves slowly when I track my gaze with Pupil. Can someone tell me why this is happening?
Hi! When executing logger.info("Test") anywhere in my plugins, it won't appear in the console. Neither do logger.debug() messages. I'm not familiar with Python's logging package; what could I be doing wrong?
Log messages have different levels: debug, info, warning, error, fatal. Capture/Player only show messages with a level of info or higher. Therefore your output should be visible.
I get "MainProcess - [INFO] os_utils: Disabled idle sleep." and some more default messages.
But not my own. The logger is created like this: logger = logging.getLogger(__name__)
Are you sure that your plugin is running? Is it listed in the plugin manager? Did you load it from there?
Yes, yes, yes. And restarted multiple times, data is sent on the backbone too (which I check using a different script which subscribes to the ipc backbone).
Ok. Then try using print instead of logger.info and test if this results in any output.
Be aware that logger output in a background process does not work.
print works, nice, thanks! Plugins are executed as a separate thread, aren't they? So logger will never work there?
No, plugins run as part of the main-thread event loop
And there is a big difference between threads and processes in Python. Logging in threads should work; in separate processes it does not.
Ah okay
Still funny that logger.info doesn't work, but print does work just fine.
@user-29e10a We cannot make them available in the world process, since the eye cams run in their own separate processes. The eye process is not designed to be fully modular the way the world/player processes are.
But we are planning on refactoring a lot of code by separating core logic from event-loop related stuff. This could result in something useful for you eventually.
@user-06a050 not sure what the issue is. Did you try using logger.warning()?
logger.warning() works!
Maybe I have to set the level somewhere? Could I have done it by accident?
It might be possible that the log level is set to warning instead of info, yes. You can check by running logging.getLogger().level -- INFO=20, WARNING=30
Yeah it's set to 30
logging.getLogger().setLevel(20) sets the level to 20
But I have to do this every time? Like when starting my plugin?
Do this after import logging
Be aware that this changes the logging level for all plugins, not just yours
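To summarize the above in code (assuming this runs inside your plugin module):
import logging

# the root logger's level gates what reaches the console; 20 == logging.INFO
logging.getLogger().setLevel(logging.INFO)

logger = logging.getLogger(__name__)
logger.info("this message is now visible in the Capture console")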
Okay, thanks again for lightning fast and quality answers!
I must admit that I'm often annoyed that I don't find any resources on the internet for Pupil, but this certainly makes up for it!
Yeah, our documentation definitely needs some work!
Well, I was talking about useful answers on stackoverflow.com or blog posts, that kind of stuff. It's natural that you cannot document every single bit of information. But more or better documentation is always good
Yes, content is one issue. The other one is the structure of the documentation.
@papr good morning! Question: I'm using the Pupil Unity integration with some other stuff in the pipeline. If the video recording is disabled, everything works flawlessly. If I turn on the eye and world recording with the destination on an SSD, everything works flawlessly. If I choose a destination on an HDD, I experience some lag in Unity. I often see the lighthouses for a fraction of a second inside the Vive, as if there is a short tracking loss. This happens when the main thread in Unity can't keep up or waits for another process. I'm not sure how the slower writing to an HDD slows down the main thread of Unity (because Unity itself does not write anything to the hard disk), or if the request/response pattern for NetMQ and Pupil has to wait until Pupil finishes writing to disk and then sends the response. The situation improved from 1.7 to 1.8. I experimented with wrapping the whole networking stuff in async methods (I do not use anything from Pupil inside my VR), but that didn't help, or I missed something. Do you know what I mean? (FYI: I have VGA resolution on all cameras with 120 fps (30 respectively) and have chosen "bigger file, less CPU")
@papr, ok I figured something out. When starting the eye process via the network ("eye_process.should_start") I reverse-engineered the "overwrite_cap_settings" argument I saw in your code. I managed to set the default settings of the eye cams via the network in Python and C#... maybe this could be useful to others and should be added to the docs?
@user-29e10a That is correct! This is a special case and is hard to support as a stable API though.
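For reference, sending such a notification over the network could look roughly like this (a sketch; the exact structure expected in overwrite_cap_settings depends on the Capture version, so check eye.py and treat these values as placeholders):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default address

notification = {
    "subject": "eye_process.should_start.0",
    "eye_id": 0,
    # placeholder settings; reverse-engineered, not a stable API
    "overwrite_cap_settings": {"frame_size": (400, 400), "frame_rate": 120},
}
req.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
req.send(msgpack.dumps(notification, use_bin_type=True))
print(req.recv_string())  # Pupil Remote's confirmation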
I just realised that we are closing in on the one-year anniversary of Pupil v1.0... and disabling Batch Exporter/Auto Trim Marks
is there a way to calibrate without a world camera and without it being in VR? Like, look at a screen marker at a known distance, or anything at all?
essentially I have people wearing augmented reality glasses. I want to know what icons they are looking at... but I don't have access to a world camera due to the system.
@user-24270f You can simply use the HMD calibration. Instead of providing reference locations that are placed in VR, you provide reference locations that are placed on the glasses.
If you mean in software, I don't have that option
@user-24270f do you need the data in real-time?
nope! That's the beauty of it. I can "calibrate" it afterwards
as long as the errors in what they are looking at are just a simple offset, I could adjust for that
but I'm not sure if using it without calibration causes other types of gaze errors
essentially I am running it from a stick PC, and I don't have access to a keyboard or anything while it's in use, just beforehand one time for setup. So I would need to be able to turn it on, have Pupil Capture start up and just start recording by itself until the system is turned off. Then I would like to be left with the CSV of gaze positions for the session. Do we know if that's possible? Starting and ending recording through a script?
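Starting and stopping a recording from a script is possible via the Pupil Remote plugin; a minimal sketch (assuming the default port, and that auto-launch at boot is handled separately, e.g. by a startup script):
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default address

req.send_string("R")   # start recording
print(req.recv_string())

# ... session runs ...

req.send_string("r")   # stop recording
print(req.recv_string())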
I want to get the gaze norm x and norm y values dynamically, so that I can control a gimbal camera as I move my eyeball. What Python code am I looking for?
@user-e711e0 have a look at this: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
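The handshake is the same as in the frame-subscription sketch further up; for gaze specifically (again assuming Pupil Remote on the default port):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:" + req.recv_string())
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    norm_x, norm_y = gaze["norm_pos"]  # normalized coordinates in [0, 1]
    print(norm_x, norm_y)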
Thank you @mpk. I will have a look at this. Looks promising
@user-24270f One way could be to do offline calibration using manually annotated features. But you would have to know where the subject was looking at multiple points in time. This means that you will need either auditory cues in your recording or some other means of annotation. Nonetheless, this workflow is far from optimal and will probably result in inaccurate calibrations. If you want, you can send me details and reasoning about your setup in a personal message and I will try to find ideas to improve the calibration process for you.
What is the best way to capture blink and eye movement events and send them to a website? I have experience programming in Python, if that helps, but is there an API I can connect to easily enough using another language?
hi (again),
how do I receive blink-related data via zmq/msgpack like it is mentioned here?:
https://github.com/pupil-labs/pupil-docs/blob/master/user-docs/pupil-capture.md#blink-detection
The Blink Detection plugin is running and I subscribed to the topics gaze and blink, but receive only gaze messages.
thanks for any help
no idea?
@user-82e7ab Which version of capture are you running?
1.8.26
mmh, could you try subscribing to blinks instead of blink?
no, nothing for blinks either
how does Capture visualize blinks?
I think they are visualized by darkening the screen during blinks
hm, I don't see any darkening effect during blinks. I'm currently using a fake source for the world camera and file sources (recorded with Capture) for both eye cameras.
Do you start the eye videos via script?
the confidence plots (at the top of the Capture main window) drop clearly below 0.5 on blinks and are then rendered in red, but there is no darkening
no, I selected the recordings via gui
Blink detection uses binocular confidence drops. This means the method requires the videos to be synchronized
ah ok
how do I load recordings via script? (that should be more in sync than starting both videos by clicking in the GUI)
(you mentioned starting eye videos via script)
Give me a second. I will compose a quick example script
thx. I now also tried to observe the darkening while wearing the Pupil headset - nothing (by "darkening the screen" you meant darkening the Capture window, right?)
yes
Did you play around with the parameters? I.e., increasing/decreasing the onset/offset thresholds
yes, I did
I'm using 3d detection & mapping mode (if that matters)
Mmh. I just noticed that it is actually not possible to accurately sync the videos using the start_eye_capture notification... The file source is simply not designed to run in a synchronized way in Capture...
ok, that is no big issue - it's no problem to wear the headset for blink testing... as long as I get some blinks
I just realised that this is a bug! The blink detector uses data from the deprecated pupil_positions event field. This issue is not related to desynced file source videos... I will create a PR and a temporary fix that you can use.
how long will it take to be fixed, i.e. when do you plan your next release? It's not yet that urgent for me to have blinks recognized
so you maybe don't need to create a temporary fix just for me
@user-82e7ab https://github.com/pupil-labs/pupil/pull/1283 See the attached temporary solution
thanks @papr now it's working - darkening the Capture window as well as receiving blinks messages
(btw. it also works when subscribing to blink without the s)
Nothing urgent, but there seems to be an issue with the darkening of the screen and the text in the confidence plots in Capture. The text turns into black boxes after the first darkening/blink
@user-82e7ab Yeah, I noticed this visualization bug on some other occasions as well. No idea what is happening there. Great to hear that blinks work again!
another one ... sorry^^
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.8.26
world - [INFO] launchables.world: System Info: User: ###, Platform: Windows, Machine: ###, Release: 10, Version: 10.0.17134
world - [INFO] plugin: Added: <class 'plugin.Analysis_Plugin_Base'>
world - [INFO] plugin: Added: <class 'blink_detection_fixed.Blink_Detection_Fixed'>
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
eye0 - [INFO] camera_models: No user calibration found for camera Pupil Cam2 ID0 at resolution (400, 400)
eye0 - [INFO] camera_models: No pre-recorded calibration available
eye0 - [WARNING] camera_models: Loading dummy calibration
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
eye1 - [INFO] camera_models: No user calibration found for camera Pupil Cam2 ID1 at resolution (400, 400)
eye1 - [INFO] camera_models: No pre-recorded calibration available
eye1 - [WARNING] camera_models: Loading dummy calibration
eye1 - [WARNING] uvc: Could not set Value. 'Backlight Compensation'.
eye0 - [WARNING] launchables.eye: Process started.
Estimated / selected altsetting bandwith : 617 / 800.
!!!!Packets per transfer = 32 frameInterval = 83263
eye1 - [WARNING] launchables.eye: Process started.
Estimated / selected altsetting bandwith : 617 / 800.
!!!!Packets per transfer = 32 frameInterval = 83263
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 482, in world
File "OpenGL\error.py", line 232, in glCheckError
OpenGL.error.GLError: GLError(
err = 1281,
description = b'ung\xfcltiger Wert',
baseOperation = glViewport,
cArguments = (0, 0, 1330, 720)
)
the last few times I start Capture it crashes after a few seconds
Yeah... I saw that too. It was not consistent for me though. My guess is that this is some kind of race condition. Please create a GitHub issue for this and include the traceback.
ok
btw, thanks for the great support... always super fast responses
How do you determine when to darken the screen for blinks in Capture?
I was tempted to declare a blink as the span between successive blinks notifications having type "onset" and "offset", but from the docs I read "onsets and offsets do not appear necessarily as pairs"
(https://github.com/pupil-labs/pupil-docs/blob/master/user-docs/pupil-capture.md#blink-detection)
@user-82e7ab The fixations have an id. All blinks with the same id belong to one blink
Also, we only darken the screen if there is a blink event for the current world frame.
what I get looks like this
{
"topic":"blinks",
"type":"onset",
"confidence":0.63203,
"base_data":[
{
"topic":"pupil.0",
"circle_3d":{
"center":[
-5.06534,
-0.619572,
29.0156],
"normal":[
-0.603412,
0.406124,
-0.686263],
"radius":1.66552},
"confidence":0.999299,
"timestamp":71767.8,
"diameter_3d":3.33105,
"ellipse":{
"center":[
92.4231,
186.168],
"axes":[
41.8154,
71.6205],
"angle":-29.6706},
"norm_pos":[
0.231058,
0.534581],
"diameter":71.6205,
"sphere":{
"center":[
2.17561,
-5.49305,
37.2507],
"radius":12},
"projected_sphere":{
"center":[
236.211,
108.574],
"axes":[
399.455,
399.455],
"angle":90},
"model_confidence":0.963608,
"model_id":1,
"model_birth_timestamp":71705.5,
"theta":1.989,
"phi":-2.29204,
"method":"3d c++",
"id":0},
{
"topic":"pupil.1",
...
I guess all the base_data items are frames that contributed to the blink (onset in this case). But there is no id except the one that identifies the eye/cam.
Oh, you are right. I think I confused this with fixations...
Am I right that the Blink Detection plugin does not hold any state? It just detects the two blink events (on-/offset), right? I'm asking because I receive sequences like this:
onset
onset
offset
onset
onset
onset
offset
onset
onset
offset
onset
offset
onset
onset
offset
offset
onset
onset
offset
offset
I want to get real-time feedback on whether the user is currently doing a blink or not.
To get a continuous blink state I can't use pairs.
Should I just use onsets and wait for a fixed amount of time, or wait for the confidence to increase above a certain level?
Could you suggest something?
You are right. And the only thing the plugin visualizes is onsets
ah ok
Using onsets + confidence thresholds sounds like a good idea!
Do you know if the thresholds set within the plugin are absolute values or differences?
they are normalized absolute values.
assuming an onset confidence threshold of 0.5, does the confidence have to drop below 0.5 (e.g. 0.6 to 0.4) or decrease by about 0.5 (e.g. 0.8 to 0.3)?
, does the confidence has to drop below 0.5 (e.g. 0.6 to 0.4) or decrease about 0.5 (e.g. 0.8 to 0.3)?
just to get the right idea for tweaking the values
So our algorithm looks for sharp confidence drops (onsets) and increases (offsets) over a specific sliding window.
The sharper the decrease/increase the stronger the filter response. This response is what we compare the threshold against.
ok, so it's more like a confidence drop of about 0.5 rather than falling below the confidence value 0.5
thx
correct
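An illustrative sketch of that filter idea (my own reconstruction of the description above, not the actual Pupil implementation):
import numpy as np

def filter_response(confidences, filter_size=20):
    # step kernel: +1 over the first half, -1 over the second half, so the
    # response is roughly mean(earlier confidences) - mean(later confidences)
    kernel = np.ones(filter_size)
    kernel[filter_size // 2:] = -1.0
    kernel /= filter_size / 2
    return np.correlate(confidences, kernel, mode="valid")

confidences = np.array([0.9] * 15 + [0.2] * 15)  # a sharp drop mid-window
responses = filter_response(confidences)
onset_threshold = 0.5
print(np.any(responses > onset_threshold))  # True: the ~0.7 drop triggers an onset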
is it possible to query/request the current offset confidence threshold from a remote app?
unfortunately not
too bad
You could add that functionality to the fixed plugin code though
Something similar to this:
def on_notify(self, notification):
    if notification['subject'] == 'blink_detection.broadcast_offsets':
        self.notify_all({'subject': 'blink_detection.current_offsets',
                         'onset': self.onset_threshold,
                         'offset': self.offset_threshold})
Untested code, might contain typos/wrong variable names
in your external script, subscribe to notify.blink_detection and request by sending a notification called blink_detection.broadcast_offsets
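A sketch of the external-script side (assuming the modified plugin from above is loaded in Capture and Pupil Remote runs on the default port):
import time
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:" + sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.blink_detection")
time.sleep(0.5)  # give the subscription a moment to connect

request = {"subject": "blink_detection.broadcast_offsets"}
req.send_string("notify." + request["subject"], flags=zmq.SNDMORE)
req.send(msgpack.dumps(request, use_bin_type=True))
req.recv_string()  # Pupil Remote's confirmation

topic, payload = sub.recv_multipart()
print(msgpack.loads(payload, raw=False))  # the current thresholds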
thx, this should work for now, but I was more looking for a long-term solution. This would either be gone with the next release, or I would not get updates to the Blink Detection plugin, or I'd have to include this on all our machines after each update =/
Do you think you will add this functionality for this option (and maybe other parts of Pupil), like for SUB_PORT or v (version)?
No, there is very little general use case for this kind of setting retrieval. You would need to maintain your own version, since this is a very specific/custom change.
ok
I'll then keep a separate parameter on our side, to make sure we are always using the latest Pupil version, i.e. blink detection plugin. But thanks anyway!
Maybe there is a chance that the plugin itself will hold a status at some point, instead of or in addition to the events?
What do you mean by "hold status"?
the plugin maintaining a "current blink state" (bool), e.g. switching to true on blink onsets and back to false on offsets, or when exceeding a fixed fallback confidence threshold if the offset event was not detected (not sharp enough)
This would need a clearer definition of a blink in the context of confidence increases. Unfortunately, I cannot give this a lot of priority at the moment. Please create another GitHub issue for this.
no problem, I was just asking ;)
Do you have an idea why the tracking of one eye is consistently worse than the other? Both cam instances share exactly the same settings
Mmh, there is no apparent reason for this. Not sure what the problem is.