Good morning, folks! We're having an issue here, where we cannot get the second camera of the binocular to programmatically start streaming. I was hoping that someone could lend some advice.
Both cameras initiate just fine. We are capturing at [email removed]
The issue we encounter is that the call to start the second stream fails with a UVC_ERROR_IO error.
auto code = uvc_stream_start(stream, nullptr, this, 2.0, 0);
This is accompanied by a Windows dialog complaining that the USB controller does not have enough resources.
Any advice on this? We tried lowering the bandwidth, etc., but to no avail.
N.b. The "Pupil Capture" demo works fine for us. Both cameras capture, no USB resource issue. That's why we believe this is a software issue.
Just an addendum from what we've found today.
uvc_get_stream_ctrl_format_size(*devH, ctrl, UVC_FRAME_FORMAT_ANY, 320, 240, 30);
results in a large bandwidth being set:
Estimated / selected altsetting bandwidth: 3072 / 3072.
If we amend the ctrl to use a different supported size (e.g. [email removed]), the bandwidth is much lower (~800) and we don't get the USB resource issue.
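For intuition on why a bigger mode blows past the controller's budget, here is a back-of-envelope estimate. This is an assumption for illustration, not libuvc's exact algorithm: high-speed USB delivers 8000 microframes per second, and the selected alt setting's per-microframe payload must cover the worst-case (uncompressed YUYV, 2 bytes/pixel) data rate of the negotiated mode.

```python
import math

def payload_per_microframe(width: int, height: int, fps: int) -> int:
    """Worst-case bytes needed per USB high-speed microframe.

    Rough sketch only; libuvc's real estimate differs in detail.
    """
    bytes_per_second = width * height * 2 * fps  # YUYV: 2 bytes per pixel
    return math.ceil(bytes_per_second / 8000)    # 8000 microframes per second

# A modest mode fits comfortably under the 3 x 1024-byte per-microframe cap:
print(payload_per_microframe(320, 240, 30))  # 576
```

Two such streams at a high-bandwidth alt setting (3072 bytes each) can exhaust a single host controller, which matches the Windows dialog you saw.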
@user-ebf6d1 Could you try [email removed]? I think one issue is that [email removed] does not seem to be available.
Hey @papr, yeah, I have looked at that. Unfortunately the version of libuvc provided to us hasn't been built with the LIBUVC_HAS_JPEG flag set. Will we need a rebuild of libuvc? Or do you think there is another means of getting the BGR data from the frame other than uvc_any2bgr (which returns UVC_ERROR_NOT_SUPPORTED)?
I don't have that level of insight into libuvc. I think (70% sure) that you will have to recompile libuvc including that flag.
Hey @papr, It looks like that flag was intentionally removed by you guys in https://github.com/pupil-labs/libuvc/commit/45e267d9c9dd3311e851496e38ffaf8438ae3512#diff-af3b638bc2a3e6c650974192a53c7291
Do you happen to know offhand what the suggested way to get the bgr data is, if the frame format fetched from the stream is UVC_FRAME_FORMAT_MJPEG?
Ah, we use turbojpeg to uncompress the mjpeg data.
ah okay, so we should grab the turbojpeg repo and decompress the uvc_frame_t data?
Do you know of any examples of this?
This is what we do in our uvc wrapper, pyuvc: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L267
self._uvc_frame is of type uvc_frame_t
thanks
Hi all
I have just gotten my Pupil kit today
Trying to calibrate it, but Pupil Capture seems to be crashing every time I open it
Hey @user-d3de40 Welcome to the channel! Which version of Capture do you run? And on which operating system?
I've tried downloading the 3D and 2D calibration build but it fails every time
Windows 10, Pupil v1.8
Obtained from here: https://github.com/pupil-labs/pupil/releases/tag/v1.8
What do you mean by 2d/3d build?
Do I understand it correctly that you can see the live stream of all cameras in Capture, and that the program crashes when you hit C for calibrate?
One sec, looking for the link
Here it is: https://github.com/pupil-labs/hmd-eyes/releases/tag/v0.5.1
Calibration guidance app
So i can get to the whole calibration part but it always fails
Ah, ok, so you got the HMD addon.
yep
Any idea why Capture is crashing?
Because I can't calibrate it
Unfortunately, I do not know if this unity plugin release is up-to-date. Did you have a look at this: https://github.com/pupil-labs/hmd-eyes/tree/v0.5.1/unity_pupil_plugin_vr ?
Please also see our core-xr channel for further AR/VR related questions.
Okay I will. Thanks papr!
How could we flip the eye image frame?
Do you mean flip it such that it is recorded in the flipped manner?
The pupil is detected in the flipped image
and also record it in a flipped manner
I tried to flip the gray of the frame, but frame.gray is not writable
By default we write the original jpeg data to the file.
for example frame.gray=np.fliplr(frame.gray)
Yes, I understand. But just because you modify it, it does not mean that the system will record that data. So even if it was modifiable, it would not be saved, since the default saves the original jpeg data.
This is what you can do:
Actually, I noticed that you need to change a lot of code to get this done. May I suggest using ffmpeg to post-process the video?
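For the ffmpeg route, a minimal sketch; the file names below are placeholders, and it assumes ffmpeg is on the PATH. The `hflip` video filter mirrors the recorded eye video horizontally after the fact:

```python
import subprocess

def flip_command(src: str, dst: str) -> list:
    """ffmpeg command that horizontally mirrors a video via the hflip
    filter. File names are placeholders; use vflip for a vertical flip."""
    return ["ffmpeg", "-i", src, "-vf", "hflip", dst]

cmd = flip_command("eye0.mp4", "eye0_flipped.mp4")
print(" ".join(cmd))  # ffmpeg -i eye0.mp4 -vf hflip eye0_flipped.mp4

# To actually run it (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```

This sidesteps the jpeg/yuv buffer issue entirely, since the flip happens on the decoded video during post-processing.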
For example, if I want the pupil to be detected in the flipped image
Is it complex?
Unfortunately, I need to leave for now, but when I come back I can lay out the frame processing steps that Pupil makes. This should help you to understand the complexity.
Generally: There is no need to flip the image. Pupil detection works the same, independently of the image being flipped or not.
Basically, I need to do a visualization which makes the pupil move the same way as the gaze
In Player, you can flip the eye videos in the Vis Eye Overlay plugin.
The thing is that if I can flip the eye image directly, I don't need to flip the result
detection result
I haven't understood why flipping the image is complex.
I understand. But it is not as easy as flipping some decoded image buffer. You will need to flip the underlying jpeg buffer and its resulting yuv buffer
I need to leave now, sorry. Talk to you later!
OK. Thanks
Hello, trying to receive blinks in Unity, I can't subscribe to "blink" or "blinks" messages. I have read that you found a solution for this previously (https://github.com/pupil-labs/pupil/pull/1283).
The problem is that I can't find pupil_capture_settings/plugins in my pupil_capture_windows_x64_v1.8-26 folder. I'm sure I've missed some step but I do not know where. (I have found a plugin folder in utils/trainer, but when I copy the blink_detection_fixed there, it doesn't appear in the pupil_capture plugin manager to launch it.)
Thank you so much
Hey, the pupil_capture_settings/plugins folder should be in your home folder, not in pupil_capture_windows_x64_v1.8-26
Thank you! I thought I was going crazy
Hi @papr, thanks for pointing me in the right direction, I've managed to get it working
Using the Unity plugin, is there a way for me to get the size of the pupil?
@user-d3de40 This is part of the pupil data. Either you subscribe to pupil explicitly, or you use the pupil data within the gaze data's base_data field.
Thanks @papr. I'm sorry, but where can I find this?
I think the unity plugin subscribes to the gaze data after calibration. This is used to visualize gaze within the scene. The base_data
is part of that. I do not know the exact code location though.
This is a Python example as reference: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
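Patterned after the filter_messages.py helper linked above, here is a sketch of subscribing to pupil data over the IPC. It assumes Pupil Capture is running with Pupil Remote on the default port 50020; the helper names are my own.

```python
import zmq
import msgpack

def pupil_diameter(datum):
    """Prefer diameter_3d (mm, 3d mode); fall back to diameter (pixels)."""
    return datum.get("diameter_3d", datum.get("diameter"))

def subscribe_pupil(addr="127.0.0.1", req_port=50020):
    """Connect to Pupil Remote, look up the SUB port, and yield pupil data."""
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{addr}:{req_port}")
    req.send_string("SUB_PORT")        # Pupil Remote replies with the SUB port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{addr}:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")  # pupil.0 / pupil.1 topics

    while True:
        topic, payload = sub.recv_multipart()
        yield topic.decode(), msgpack.unpackb(payload, raw=False)

# With Capture running, e.g.:
# for topic, datum in subscribe_pupil():
#     print(topic, pupil_diameter(datum))
```

The same pattern works for the gaze data; subscribing to "gaze" instead of "pupil." yields gaze datums whose base_data field contains the underlying pupil datums.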
Okay, thanks a lot again @papr!
Is there a way to make the pupil recordings save in multiple files? Like say I want files in 10 minute blocks - is that possible?
This is not possible yet
Okay, thank you @papr
Morning, is it possible to read HTC Vive input in the Pupil world capture?
@user-5509fe The world window usually captures the scene video. The HTC addon does not have a scene camera. Therefore the world window stays grey when using the addon
Thanks for the answer. It's not possible to have HTC reg (and pupils reg) without modifying the source code of my VR application. Is that correct?
What do you mean by "reg"?
"recording", sorry
The gaze recording is independent of the unity recording. But your vr application needs to provide the coordinate system in which capture can estimate gaze. This is where our unity plugin comes in.
@papr is it possible to start Pupil Service (or Capture) with a remote port other than 50020 via command-line args? So without the need to interact with the GUI or to open it at least once with the standard port.
@user-29e10a this is not possible yet. Please create an issue for that.
Hi again guys. I have tested the fixed blink plugin and it works really well. I think it only has one problem (maybe it's my fault). This is my workflow:
Launch Pupil Capture, start the Blink plugin, launch Unity calibration and blink detection (everything OK), stop Unity, relaunch Unity. At this point the blink detection doesn't work; I need to go to the Pupil Capture plugin list and uncheck and check the plugin again.
Do you have any idea why this is happening? Could I do this plugin reset through Unity?
@user-d3bdd1 Could you run this script while doing the described procedure and let us know about the output? https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py
I managed to launch the .py but, when I went to test Pupil Capture, one of the cameras doesn't initialize. I only have Python 2.7 installed, plus the zmq and msgpack libraries in a secondary folder. This is the error that I get when I try to turn on eye1 (it appears in UVC Manager as unknown):
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
eye1 - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
eye1 - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [320, 240]
eye1 - [INFO] camera_models: No pre-recorded calibration available
eye1 - [WARNING] camera_models: Loading dummy calibration
eye1 - [WARNING] launchables.eye: Process started.
@user-d3bdd1 Wait, does Capture crash if you run the script?
unknown means that the drivers were not installed correctly for that camera
Now it always crashes. I can't start Pupil Capture correctly
(I have rebooted the computer too)
(and restored default settings in the app)
Btw, the script above is not Python 2 compatible
It's strange, I am running it. Maybe it is internally using another Python version; I have 3.6 installed with Visual Studio
Well, yes then. This is the output I was looking for.
Could you repeat that including
start Blink plugin, launch Unity Calibration and Blink detection (everything ok), stop Unity, relaunch Unity.
Ok, 1 minute
(u'notify.start_plugin', {u'topic': u'notify.start_plugin', u'name': u'Blink_Detection_Fixed', u'subject': u'start_plugin'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
(u'notify.eye_process.should_start.1', {u'topic': u'notify.eye_process.should_start.1', u'eye_id': 1, u'subject': u'eye_process.should_start.1'})
(u'notify.eye_process.should_start.0', {u'topic': u'notify.eye_process.should_start.0', u'eye_id': 0, u'subject': u'eye_process.should_start.0'})
(u'notify.eye_process.started', {u'topic': u'notify.eye_process.started', u'eye_id': 1, u'subject': u'eye_process.started'})
(u'notify.eye_process.started', {u'topic': u'notify.eye_process.started', u'eye_id': 0, u'subject': u'eye_process.started'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
(u'notify.eye_process.should_stop.1', {u'topic': u'notify.eye_process.should_stop.1', u'eye_id': 1, u'subject': u'eye_process.should_stop.1'})
(u'notify.eye_process.should_stop.0', {u'topic': u'notify.eye_process.should_stop.0', u'eye_id': 0, u'subject': u'eye_process.should_stop.0'})
(u'notify.eye_process.stopped', {u'topic': u'notify.eye_process.stopped', u'eye_id': 1, u'subject': u'eye_process.stopped'})
(u'notify.eye_process.stopped', {u'topic': u'notify.eye_process.stopped', u'eye_id': 0, u'subject': u'eye_process.stopped'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
(u'notify.eye_process.should_start.1', {u'topic': u'notify.eye_process.should_start.1', u'eye_id': 1, u'subject': u'eye_process.should_start.1'})
(u'notify.eye_process.should_start.0', {u'topic': u'notify.eye_process.should_start.0', u'eye_id': 0, u'subject': u'eye_process.should_start.0'})
(u'notify.eye_process.started', {u'topic': u'notify.eye_process.started', u'eye_id': 0, u'subject': u'eye_process.started'})
(u'notify.eye_process.started', {u'topic': u'notify.eye_process.started', u'eye_id': 1, u'subject': u'eye_process.started'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
(u'notify.set_detection_mapping_mode', {u'topic': u'notify.set_detection_mapping_mode', u'mode': u'2d', u'subject': u'set_detection_mapping_mode'})
And afterwards you are not able to receive any blink data in Unity anymore, correct?
yes
Are you able to see the blink visualizations in Capture, generally? You will need a second person to evaluate that
Where should I see this?
I think the visualization darkens the world video if a blink onset is detected.
I only see this: when you blink, the red circle disappears; I can't see anything more
Are you running Service?
No, I am using Capture with the service plugin
With the service plugin? Can you be more specific?
Pupil remote, sorry
Ok, what do you see in the world window? Is the background gray or colored?
gray
Ok, then the visualization does not work. Please select the UVC Manager icon on the right, select Test Image, and click Activate Test Image.
Afterwards you should see a blue/green gradient as a background.
yes
Now try to blink. The gradient should darken
Yes, it does, but when I turn off Unity and relaunch it, the darkened gradient doesn't appear
If I restart the plugin manually (uncheck and check), it works
I could record a video if you want to check it
No I believe you, I just needed to make sure that blinks would actually not be detected
Yes, it should be a problem in the Blink Detection Fixed plugin. Maybe it breaks when the service goes away? It could explain why this only happens when I turn off Unity
I cannot reproduce the issue by turning the eyes off and on again
I have tried something else,
Maybe you can reproduce this:
- Fake Manager: Test Image -> Activate
- Reboot Pupil Capture
- Blinks will not be detected; instead, you uncheck and check the Blink Detection Fixed plugin
Are you using the blink detection or the blink detection fixed?
I am running the current pupil master, so the fixed version
I cannot reproduce the issue with the alternative method either
Which bundle do you use?
1.8.26 with https://github.com/pupil-labs/pupil/pull/1283
Ok, I did this exact setup on Linux, same behaviour as before. Let me try the windows machine
Not reproducible on Windows either
Can you reproduce this whole thing without running Unity at all?
Yes, I'm going to record it; maybe I'm doing something wrong
Unfortunately, I have to leave now. I might be able to answer questions while on mobile later, though.
You can use this as well to check if blinks can be received: https://gist.github.com/papr/18d7c36c8d811e9e25a0b21db1fbd57b
Note: This will not work after you restarted capture. In this case you need to restart the script as well
Hey, gang - can anyone help me out?
When reading a norm_pos from a gaze.2d.0 topic, in what space do those coordinates exist?
It must have some relation to the calibration, right?
Is it possible to get normalized coordinates of the reference markers from calibration? It seems like I've found them in calibration.calibration_data under ref_list, but the points don't make sense to me. More specifically, I don't understand why screen_pos changes so much. I assume, and it appears, that this is the position of the marker in screen pixel coordinates, but I have values like [644.0690746307373, 454.7461643218994] and [644.1794700622559, 454.43712425231934] right after each other. Are these actually the same point?
I think I figured out that screen_pos represents the marker's position in the world camera's view. Is there any way to retrieve the reference point's screen coordinate?
@user-81072d You are on the right track! All coordinates are in relation to the world camera's coordinate system. screen_pos is the position within the world frame in pixels (origin top/left). norm_pos is the same but with the values scaled between 0 and 1 in their respective dimensions. Additionally, the y-axis is flipped. Therefore the origin of the normalized coordinate system is at the bottom/left of the frame.
You need surface tracking if you want gaze/calibration/reference data in relation to an actual screen.
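The normalized-to-pixel mapping just described can be written as a tiny helper (a sketch; the function name is my own, not part of the Pupil API):

```python
def denormalize(norm_pos, frame_size):
    """Map Pupil's normalized coordinates (origin bottom/left, values 0..1)
    back to pixel coordinates in the world frame (origin top/left)."""
    x, y = norm_pos
    width, height = frame_size
    return x * width, (1.0 - y) * height  # flip the y-axis back

# Center of a 1280x720 world frame:
print(denormalize((0.5, 0.5), (1280, 720)))  # (640.0, 360.0)
```

Note that these are world-camera pixels, not monitor pixels; mapping onto an actual screen still requires surface tracking as mentioned above.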
Thanks, @papr! We've been using surface tracking, but it's very shaky. Participants in the present study are stationary, so we're exploring how reliably we can track screen-gaze w/out surface tracking
@user-81072d @user-41f1bf has a surface tracker that detects screens instead of surface markers. I don't know how well that performs though.
@user-41f1bf - have a github link? ^
@user-81072d can you share a world recording? Surface tracking can be quite robust actually. Maybe the markers are wrong or too small?
Sure, give me a few minutes
@user-81072d https://github.com/cpicanco/capture_plugins
If you are stationary and using a white background, it is very stable.
Actually, the tracker was tested more extensively in Pupil Player, and I did not implement multi-threaded detection, so your Player will block until all frames have been processed
This is a simplified and more comprehensive version.
Is there a way to display surface tracking in pupil player?
The offline surface tracker is much more stable than the online one appeared to be, but I don't see how to display the surface which was being tracked live. Maybe I can just screen-cap the world view window live
Ok, the screen capture software didn't want to grab my full screen, so I just recorded the monitor on my cell phone. Not the best, but it illustrates the problem we're having with the surface tracker.
The image in that album shows the setup I used to record those videos. The pupil tracker is completely stationary. The first video I recorded was with the lights on, but in the study, the room will be dark, so the second video has the lights off
@user-81072d are you referring to the debug window that shows the current content of the surface?
Yeah
The surface tracking debug window or the world view with surface tracking visualizations turned on. Anyway, I think you can see the shakiness in the videos I shared
@user-81072d Yes, this is available in Player as well
@user-81072d I saw the recording and I think the reason is a 'funny' learning of the surface. Can you remove the surface and re-add it in Pupil Player?
your setup should be very easy to track robustly.
@mpk - I'd be happy to try that, but we see similar results on different machines. @papr - is the surface tracker in Player performing the tracking during playback, or is it playing back tracking which occurred during recording? I know it's probably the same code at its base, but the surface tracker in Player seems much more consistent/stable
@user-81072d The surface tracker uses the same logic for online and offline tracking. It does not play back the recorded surfaces.
@papr - So is there any reason why Player seems to be so much more consistent tracking surfaces?
I've noticed that it can be a little difficult to get all of the fiducials focused simultaneously. There's a very small range, and outside of that, it starts losing fiducials intermittently. When it accurately tracks all of them, it seems pretty consistent
@user-81072d it's not different machines. Just that if the surface is learned in a funny way, it may help to remove and re-add the surface in Pupil Player after the fact.
@mpk - right. I've tried adding/removing the surface a few times now, and it didn't seem to make a difference. It seems like it's losing a fiducial or two intermittently. I noticed that the ones it was losing seemed a little out-of-focus, and adjusting the focus seemed to help, but it's hard to keep all of the fiducials well-focused simultaneously. Maybe we need even bigger fiducials?
@user-81072d if you have this many markers, it should not matter if we lose one or two intermittently. The surface should remain stable.
@mpk - that's what I thought, which is why I started with as many as I did. Although if it loses one of the corner markers, the tracked surface skews. If it loses two corner markers, it skews even more. It's much more stable if we only track the four center markers in our configuration, since it never seems to lose those
I'm trying to run Pupil Capture from source. I've followed the extensive setup for Windows dependencies, but I'm getting an error when I run pupil_capture.bat. The stack trace is here: https://pastebin.com/wKj5XcPL
Hi @user-2686f2 May I ask what you are doing that requires building from source? Can you achieve your goals by using plugins or subscribing to the IPC? I ask because building all dependencies on Windows is (as you experienced) very tricky. Here are some notes:
1. Build pupil_detectors and optimization_calibration independently to debug any dependency issues prior to running Capture. Go to the dir pupil_src/shared_modules/calibration_routines/optimization_calibration/ and with python3 run python setup.py build, and do the same for /pupil_src/shared_modules/pupil_detectors
2. Once you have ensured that the pupil detectors and calibration modules are properly built, you should use run_capture.bat to run Pupil Capture
@wrp I plan on developing various plugins. I assumed I needed to build from source for that. If not, could you point me to some documentation that would get me started there? (I'd also like to change the file names of certain files when they are exported from Pupil Player, as a convenience measure. I wasn't sure if that could be done by a plugin).
@user-2686f2 you can just run the app bundle and put your plugins in the pupil_capture_settings/plugins folder (found in your User folder). Please see: https://docs.pupil-labs.com/#plugin-guide
Of course, building from source will provide you with even more access, but you can already get lots of access via IPC from a plugin.
@wrp I tried it on Windows 7. It was normal.
What is the reason?
@user-4580c3 is this question a reference to your discussion in the π core channel?
Please provide some context @user-4580c3
@wrp When I try to run python setup.py build for the pupil_detectors, I get this error: error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe' failed with exit status 2
@user-2686f2 just to be sure, are you using the x64 Native Tools Command Prompt for VS 2017 Preview?
I'm using x64 Native Tools Command Prompt for VS 2017 (no Preview, I have the full VS 2017 community edition installed)
ok
are all paths correctly specified in setup.py?
@wrp do you know TeamViewer? I'm sorry, but can you confirm by remote control?
@wrp Yes, all those paths are correct. Though one warrants clarification: /ceres-windows/Eigen. What should be immediately inside that folder? The documentation wasn't clear about exactly what should be copied there.
@user-2686f2 let me double check that
Eigen is copied directly into ceres-windows
Unzip Eigen and rename the contained eigen directory to Eigen
Copy the Eigen directory _(e.g. the entire directory and its contents)_ into ceres-windows
@wrp Right, that's where I was unclear. The zip file is "eigen-eigen-67e894c6cd8f.zip", which extracts to a folder called "eigen-eigen-67e894c6cd8f". I was unclear whether I should rename that folder to "Eigen" and copy all of its contents, which include other folders like bench, blas, cmake, and a folder called Eigen... Should I just be copying this sub-folder Eigen, or the whole thing?
@user-2686f2 here's the Eigen folder that is within ceres-windows
hope this helps
@wrp Yep, well, it clears that part of the documentation up. I've switched that out and re-built the ceres stuff. Unfortunately, I'm still getting the same error. However, I believe I've just fixed the problem.
@wrp I'm using VS 2017, the full version, not the Preview. Any new devs that try this are probably going to be using the full version; I don't know that I can even find an old copy of the VS 2017 Preview. This means I'm stuck using MSVC 14.15 instead of version 14.1 from the Preview. VS 2017 v15.8 is a new update that came out a week or so ago. I was getting the following error in the large output when I tried 'python setup.py build':
error C2338: You've instantiated std::aligned_storage<Len, Align> with an extended alignment (in other words, Align > alignof(max_align_t)). Before VS 2017 15.8, the member type would non-conformingly have an alignment of only alignof(max_align_t). VS 2017 15.8 was fixed to handle this correctly, but the fix inherently changes layout and breaks binary compatibility (only for uses of aligned_storage with extended alignments). Please define either (1) _ENABLE_EXTENDED_ALIGNED_STORAGE to acknowledge that you understand this message and that you actually want a type with an extended alignment, or (2) _DISABLE_EXTENDED_ALIGNED_STORAGE to silence this message and get the old non-conformant behavior.
I added this line "#define _ENABLE_EXTENDED_ALIGNED_STORAGE" to the following 3 files in the folder '/pupil_src/shared_modules/pupil_detectors': detector_2d.cpp detector_3d.cpp singleeyefitter/EyeModelFitter.cpp
Now it builds fine.
Thanks for the notes @user-2686f2
We will update docs with this feedback. Please make a PR with suggested changes and we can test
Will do.
@wrp One last comment. I needed to install torch as well, which is absent from the docs. I found the wheel here: https://pytorch.org/get-started/locally/, downloaded it, and installed it with pip. Now Pupil Capture and Pupil Player both launch fine from the .bat files.
@user-2686f2 if you have time, could you make a PR to https://github.com/pupil-labs/pupil-docs/ with your changes?
Question 2: How is the maturity computed? How is the confidence computed? How is the performance computed?
@papr
@wrp I intend to make a PR, but I've run into an issue. 2 of the files I changed are .cpp files, which are auto generated by some cython code and so not in source control. I don't know cython well enough to ensure the correct change gets made. I planned on looking into it tonight.
@user-2686f2 Remove them and make sure that they will be ignored using .gitignore in the future. Auto-generated code does not belong into the repository.
@user-b91aa6 The red spheres are potentially new models, while blue is the current model. New models are fitted if new pupil observations do not fit the old model anymore, as is the case in your screenshot.
@user-b91aa6 In regard to question 2: what do you mean by performance?
@papr I think he is referring to the 3d model performance in the debug window
@papr Yes, the two files in question are already in .gitignore. The problem is that a recent update to Visual Studio introduced a new bug. If those two files don't have a specific #define flag in them, then the .cpp files can't compile, breaking the build process. I've tried to understand the .pyx/cython code, but I don't understand them well enough to make them auto-generate the .cpp files with the necessary #define flag.
@user-2686f2 I am not sure if I understand. The cpp code is generated by Cython when setup.py is executed. It is based on the pyx files. Therefore the only files in the repository should be the pyx file and the cpp files should be deleted.
A short question: can I only get the pupil diameter with 3d detection mode?
yeah
@user-ea779f Do you mean in mm?
the dia values aren't changing in 2d detection mode
but yes, in the end i want to get my pupil size
@user-ea779f pupil diameter in 2d mode is only measured in pixels and can therefore vary based on eye camera distance to the eye.
I see, thanks for your quick response. So the best way is using 3d detection mode and subscribing to "pupil"? Then I should get the dia values in mm?
@user-ea779f The graphs only show diameter_3d, which is in mm and only available in 3d mode. Please be aware that the values' accuracy is subject to 3d model fitness and corneal refraction, which is not modelled by our software yet.
Yes, if you want realtime access to these values you need to subscribe to pupil
I do not need any calibration to just subscribe to pupil for the diameter_3d, do I?
@user-ea779f that's correct!
thanks!
@papr I am not trying to add the cpp files to the repository. I'm aware they don't belong there. I know the pyx files generate the cpp files. Because of the bug I mentioned, the cpp files that are generated need to have a new line of code in them. I don't know how to modify the pyx files so that the auto-generated cpp files include the necessary extra line of code.
@user-2686f2 Thank you for the clarification! Did you try to simply add the required #define line of code to the pyx file?
@papr I just tried, but it didn't do anything (I suspect because #define starts with #, which is just the comment character for python, so it's ignored).
@user-2686f2 ah correct. Could you try this? https://stackoverflow.com/a/5705865
@papr This is the line that needs to be added: "#define _ENABLE_EXTENDED_ALIGNED_STORAGE", which is not an integer constant, so that doesn't appear to work.
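An untested suggestion: instead of injecting a #define into the generated .cpp files, declare the macro on the setup.py Extension so the compiler defines it on the command line (equivalent to passing /D_ENABLE_EXTENDED_ALIGNED_STORAGE to cl.exe); the generated code then needs no edits at all. The extension name and sources below are illustrative, not the actual Pupil setup.py.

```python
from setuptools import Extension

# Illustrative extension; the real pupil_detectors setup.py lists
# different names, sources, and include dirs.
detector_2d = Extension(
    "detector_2d",
    sources=["detector_2d.pyx"],
    language="c++",
    define_macros=[("_ENABLE_EXTENDED_ALIGNED_STORAGE", None)],
)

print(detector_2d.define_macros)
```

A `(name, None)` pair in define_macros emits a bare `-D name` / `/Dname`, which is exactly what the MSVC error message asks for.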
I'm using a DIY eye camera. I can't get some of the listed resolutions in the eye process when there are several available resolutions with the same first dimension (such as 1280x720 and 1280x480). I found that the function frame_size() in uvc_backend.py only checks the first dimension of the frame size. Maybe we can check both the first and second dimensions?
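The fix being suggested could look roughly like this; the names are hypothetical and this is not the actual uvc_backend.py code, just a sketch of matching both dimensions (plus the rate) when picking a mode:

```python
def pick_mode(available_modes, target_size, target_rate):
    """Pick a camera mode matching BOTH width and height (hypothetical
    sketch; the real backend logic differs). Modes are (width, height, fps)."""
    for w, h, fps in available_modes:
        if (w, h) == tuple(target_size) and fps == target_rate:
            return (w, h, fps)
    return None  # no exact match

modes = [(1280, 720, 30), (1280, 480, 30), (640, 480, 60)]
print(pick_mode(modes, (1280, 480), 30))  # (1280, 480, 30)
```

Matching on width alone would return 1280x720 here even when 1280x480 was requested, which is the bug described above.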
@user-ec0ec0 Please create a Github issue for that. Please also link the line of code that you are referring to.
@user-2686f2 Did you create this issue? https://github.com/pupil-labs/pupil/issues/1331
Hi! I need some info. I'm going to work with the Pupil DIY project. I've noticed that while the program is OK on Linux, there is an issue with the Microsoft HD cam: it's not detected. So am I obliged to use Win10 or macOS? Thanks for the reply
Has anyone used the mouse_control.py script and had a satisfactory result regarding mouse movement?
I have trouble getting it installed correctly. What steps did you take?
@papr No, but that is the same issue I'm trying to fix (or rather, have fixed, just don't know how to apply it to cython).
@user-2686f2 Could you please comment in the issue that you are affected by this. Would be nice to have an overview over the affected users.
@papr Can do. (github name is also StevCode)
@papr https://github.com/pupil-labs/pupil/issues/1331#issuecomment-430418074
Is there any way to detect two pupils in one frame? My eye cameras are on the same device and the device can only be used by one eye process. Any ideas? Thanks.
@user-ec0ec0 I think the best way would be to stream the video to both eye processes and then use the ROI feature to crop the left/right region. The video backend would need to be changed for that.
@mpk @user-ec0ec0 one could use ndsi to stream one eye cam to both processes as you suggested. This does not require any changes. I will link an example later.
nice idea!
@user-ec0ec0 https://github.com/pupil-labs/pyndsi/blob/master/examples/uvc-ndsi-bridge-host.py I was not able to test it yet, but this script opens one uvc camera and provides it via ndsi.
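Once both eye processes receive the same shared frame (e.g. via the ndsi bridge script above), the per-eye cropping mentioned earlier is plain array slicing. This is a sketch, not Capture's actual ROI code:

```python
import numpy as np

def split_eyes(frame: np.ndarray):
    """Split a side-by-side dual-eye frame into (left, right) halves."""
    mid = frame.shape[1] // 2
    return frame[:, :mid], frame[:, mid:]

dual = np.zeros((240, 640), dtype=np.uint8)  # dummy 640-wide dual-eye frame
left, right = split_eyes(dual)
print(left.shape, right.shape)  # (240, 320) (240, 320)
```

Each eye process would then run pupil detection on its half only, which is effectively what restricting the ROI achieves.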
@papr Good job! Thank you! I'll try it.
Hello. I'm trying to build a basic Python application that gets gaze positions in world coordinates in real time. Here are the steps:
Naïvely I expected the x and y coordinates of norm_pos to go from 0 to 1 (since I'm glancing around the monitor edges), but that doesn't seem to be the case, looking at this data.
In fact, it looks like my calibration wasn't applied. Maybe it's using some previous calibration, or maybe I'm looking at raw data of the pupil position?
My question is how can I do this very basic task? How do I calibrate, and then get calibrated data in world coordinates?
Re-reading the documentation, I suppose that when you subscribe to 'pupil', you get positions in pupil coordinates, not world coordinates. So what do you subscribe to in order to get world coordinates? Is there any example?
@user-52cbe2 subscribe to the gaze topic
Thank you, @wrp.
I now subscribe to 'gaze' messages. However, after doing a screen marker calibration in Pupil Capture, when I run my Python program and look at msg['norm_pos'], it's still nowhere near where I'm looking.
I know this because I display a dot at norm_pos (assuming that (0,0) is the lower left-hand corner and (1,1) the upper right-hand one), and this dot -- although it does move when I make eye movements -- is nowhere near where I'm looking (and the amplitude of the dot's jumps during my saccades is much smaller than my saccade amplitude).
Could anyone tell me what I'm doing wrong? Is there some way I need to tell the system to apply my last calibration? Am I looking at the wrong message, or the wrong part of the message?
Thank you.
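One thing worth double-checking in a setup like the one described above: Pupil's norm_pos has its origin at the bottom-left, while most drawing toolkits put (0, 0) at the top-left, so the y axis needs flipping. A hypothetical helper (name and rounding choice are my own):

```python
def norm_to_pixels(norm_pos, width, height):
    """Map Pupil's normalized coordinates ((0, 0) = bottom-left,
    (1, 1) = top-right) to pixel coordinates with a top-left origin,
    as most GUI toolkits expect. The y axis is flipped."""
    x, y = norm_pos
    return int(round(x * width)), int(round((1.0 - y) * height))
```

Also worth filtering out low-confidence samples before plotting; they make the dot jump around.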
@wrp Could you please comment on this issue? https://github.com/pupil-labs/pupil/issues/1208
@user-52cbe2 could you share the code with me? I could have a look 🙂
@user-324a3b perhaps @user-2686f2 could provide you with some notes regarding a recent setup from source on Windows. There are some changes that need to be made for recent versions of MSVC.
We bundle with an older version of MSVC, so we have not encountered the issues that users are facing with newer versions of MSVC.
So any notes on env changes for the developer notes should be made as a PR to https://github.com/pupil-labs/pupil-docs/
@papr Here is the code. First the pupil_tracker class:
And the program that tries getting the gaze data:
I am looking for a way to export from Pupil Player a video that also shows where the person is looking. I can get the head-mounted camera video to export, but not the gaze location. Is there a way to do this?
@user-d45407 You might need to use offline calibration to generate gaze data. See the docs for details.
I have the gaze data, but I can't get it to convert to mp4. I used a screen recorder and just played it back in Pupil Player, but I would like a better solution.
Simply hit e
or the export button on the left. If the video export launcher is enabled, it will export an mp4 video with the visualizations
It currently only exports the world video.
Correct. We will release an eye video exporter with the next release
Sorry, I misread your initial comment
It's alright, thanks for your help!
@papr Hello, I am trying to extract and view gaze positions in real time at a very basic level using Python. I believe the filter_messages.py script in the pupil_helpers repo should accomplish this (correct? https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py ) after un-commenting line 24. I place this file in the ./pupil_capture_settings/plugins folder and launch Pupil Capture, but the Capture terminal stops after the third line (System Info). After using a KeyboardInterrupt, the most recent call that shows up is: File "zmq\backend\cython\checkrc.pxd", line 12, in zmq.backend.cython.checkrc._check_rc. So it appears there is a backend error that must be occurring in the script's infinite loop? Any advice on this would be helpful.
@user-911c89 Yes, the filter_messages.py script is the correct script to use to subscribe to different topics like gaze, pupil, notifications, and more. This is a script that is intended to be run independently and not as a plugin. It is not intended to be saved in the plugins folder. Instead, you should just start Pupil Capture and then run the filter_messages.py script in another terminal
@user-2686f2 Hi, did you use VS 2017 Community (not Preview), and boost with 1_64_0-msvc-14.1.64.exe as shown in the description?
do you have a doc saying what will be in the next release?
@user-d45407 See https://github.com/pupil-labs/pupil/milestones and https://github.com/pupil-labs/pupil/projects/2
@papr thank you! also I was looking to recreate this: https://pupil-labs.com/blog/2014-07/pupil-player/
but the heat surface doesn't change based on where I am looking.
this is an example. the green dot is my eye, but the red surface doesn't change color.
@user-d45407 What needs to be changed is the surface size (see the submenu for that surface); then the heatmap will update and look better.
ah, thank you! why does it not update as the video plays?
It's an aggregate of all gaze in the video, so it should not change.
Is there a way to get it so it isn't an aggregate?
Hey guys, I just learned about Pupil, and looking at its GitHub master branch I found a lot of header files and only a few cpp ones; C++ files are only 15% of the repository. It looks like most of the low-level stuff comes from Eigen and Ceres, is that right? I'm aiming to integrate it with UE4 projects. Any advice?
@user-3c03d9 there has been previous work on that. Not sure if it was published already
On a similar note: You want to use our network interface to fetch data from Capture. See the Pupil Helpers repository for examples
Thank you @papr , I'm aiming to track which in-game object the user is looking at in order to get statistics. Maybe running the Capture service and reading the data from a UE4 application over localhost. Is that a possibility?
@user-3c03d9 That's more or less the idea. You can get all pupil and gaze data in realtime over localhost from Pupil Capture/Service.
Thank you @mpk
Hello, whenever I generate the fixations_on_surface files they are empty. Are they supposed to be that way or am I setting something up wrong?
Never mind, I labeled the surface and everything came up.
Hello. I'm working on Ubuntu 18.04.1 and I've succeeded in installing all the dependencies. The player starts as well but eventually goes into "ghost mode". I'm using the DIY kit. The webcams are listed as unknown and apparently are blocked or in use. What can I do?
@user-96755f make sure that your user is in the plugdev (?) group.
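In case it helps, a hypothetical way to do that on Ubuntu (the group name may differ per distro, so check first):

```shell
# Add the current user to the plugdev group (assumes the group exists)
sudo usermod -aG plugdev "$USER"
# Log out and back in (or reboot), then verify membership:
groups
```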
ok I'll check. Should I restart after?
That should help, yes
It works! Thanks a lot!
@user-324a3b @wrp Sorry for the late reply. I don't have a suggestion for this. I began with a clean install and have only installed the most up-to-date Visual Studio 2017 Community Edition. I don't have an older version of MSVC to contend with. (The folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\ doesn't exist on my machine). I can only recommend uninstalling the Preview version and using the Community Edition.
@wrp I've done a pull request for updated Windows Dependencies for all the issues I ran into. https://github.com/pupil-labs/pupil-docs/pull/223
@papr @wrp With regards to actually being able to compile the source code, this is still an issue: https://github.com/pupil-labs/pupil/issues/1331, however the author closed it after I posted a comment about a work around.
I'd be happy to include the work around in an updated section of the Windows Dependencies docs, but I don't know how to actually fix the underlying issue.
Thanks @user-2686f2 - I will review and test changes after the forthcoming release and look into resolving the issue for recent versions of MSVC
@user-2686f2 Thanks for your reply. I will format Windows and install the most up-to-date Visual Studio 2017 Community Edition. Then I will come back.
@user-324a3b I've updated the Windows Dependencies doc to include a few other issues I ran into while getting the source to run. My pull request won't be accepted until after the new release, so you can check it out here: https://github.com/pupil-labs/pupil-docs/blob/a3436b4fcd5fc9817c45b5e9e4b6115cbaa1432d/developer-docs/windows.md
@user-2686f2 Thanks for your kindness again. I re-installed Windows 10 Pro 64-bit and VS 2017 Community (not Preview) after formatting, and I've carefully followed your revised instructions. I run into a problem when I enter the command 'python setup.py build'. The error says it cannot detect 'boost_python3-vc140-mt-1_64.lib', because boost generated 'boost_python3-vc141-mt-1_64.lib' in 'c:\work\boost\stage\lib'. When I faced the same problem before, I edited user-config.jam in boost, changing 'using msvc : 14.1 ;' to 'using msvc : 14.0 ;', so boost generated 'boost_python3-vc140-mt-1_64.lib', and then I got the error "fatal error C1007: unrecognized flag '-Ot' in 'p2'". This time I have not changed user-config.jam from 14.1 to 14.0 yet. How can I solve it? P.S.: When I upgraded ceres-2015.sln to VS2017, I accepted Windows SDK version 10.0.17134.0. Is that right?
@user-324a3b Unfortunately, I'm not sure I can be much more help with this issue. I never had a problem with boost specifically; it was just several other things that affected my build. I can say that my user-config.jam does have 'using msvc : 14.1 ;'.
In reply to your P.S., yes, I believe that's what I did.
@papr @mpk We have an example script that publishes a notification to the IPC backbone, and we can successfully receive that notification using the filter_messages.py script when running the two scripts via the terminal. However, once we use this function inside Vizard, the whole program crashes on "pub_port = req.recv_string()". Any ideas on why this may be? We have heard you guys are in the process of integrating with Vizard currently.