@user-b91aa6 This is because the script has not been updated for Python 3 yet. I will update it today.
Thanks. @papr
Why does the DLL load fail in the pupil detector when running the Pupil Labs source code on Windows?
Looks like the pupil_detector was not built correctly/failed building. From the docs:
When starting run_capture.bat, it will build module pupil_detectors. However, if you are debugging, you may want to try building explicitly. From within `pupil/pupil_src/capture/pupil_detectors` run `python setup.py build` to build the pupil_detectors.
Thanks. May I ask what this step means? It doesn't tell us to do anything.
@papr
It is just a note. The following steps refer to it.
But you are right that this step could use some clarification
How do I know whether the pupil detector was built successfully?
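(A quick way to check, assuming the in-tree build layout of that era: if the module imports from the source directory, the build worked.)

```python
# Hypothetical check: run from within pupil/pupil_src/capture so the package is on the path.
try:
    import pupil_detectors  # noqa: F401
    print("pupil_detectors built and importable")
except ImportError as exc:
    print("Build failed or incomplete:", exc)
```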
I ran into the problem again.
It seems that something is wrong with the instruction.
We don't have a capture folder, only a shared_modules folder.
This is outdated information. I have just submitted a PR to change that
I am following the instructions exactly, but the pupil detector still can't be imported.
@user-b91aa6 please note that running from source on Windows is not easy. We cannot give support in this regard beyond trying to answer specific questions within reason. I recommend that you 1) try installing on Linux (much simpler) or 2) purchase our support package for dedicated video support.
Can I run codes using Cygwin or Virtual Machine?
no.
try a live usb stick.
I don't have a second computer to run Pupil on.
@user-b91aa6 you can run Pupil from the bundle on Windows. You can install linux alongside windows on your machine.
I am using C++ to receive data from Pupil. Why is the gaze data obtained in my C++ code always about 37 seconds behind the real-time gaze data in Pupil Capture?
@papr
@user-b91aa6 looks like you need to synchronize your clocks
Thanks for your reply. But the Python example code doesn't do that; it just subscribes to the gaze topic, and the gaze data is received correctly. How do I synchronize the clocks? @papr
There should be a section about that in the docs
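The gist of it: query Capture's clock via Pupil Remote and compute an offset. A minimal sketch, assuming the default Pupil Remote port 50020 and its "t" (get Pupil time) request:

```python
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default port

# Round-trip-compensated offset between the local clock and Pupil time.
t_before = time.monotonic()
remote.send_string("t")                  # ask Capture for its current Pupil time
pupil_time = float(remote.recv_string())
t_after = time.monotonic()

offset = pupil_time - (t_before + t_after) / 2.0
print("clock offset (s):", offset)
# Add `offset` to local timestamps before comparing them with gaze timestamps.
```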
Good afternoon. Has anyone worked with saccades in Pupil Labs? How do you calculate their speed and distance?
@user-d9bb5a You will have to do that manually. There is no official plugin yet that detects saccades.
Thank you. And could you tell me how to calculate them correctly, taking into account our data and equipment?
Look up a paper on saccade detection, implement the algorithm, and run it on the csv data exported by Player. The details depend on your chosen algorithm.
with Windows it can be done?
Oh... but could you show an example? I do not understand how to do it :(
I don't have an example
Thank you very much. I will look for solutions :)
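For anyone reading along, a minimal velocity-threshold sketch over the gaze_positions.csv exported by Player. The column names, the use of norm_pos as a proxy for visual angle, and the threshold value are all assumptions to adapt, not a validated method:

```python
import csv

SPEED_THRESHOLD = 1.5  # normalized scene units per second; arbitrary placeholder

with open("gaze_positions.csv") as f:
    rows = [(float(r["timestamp"]), float(r["norm_pos_x"]), float(r["norm_pos_y"]))
            for r in csv.DictReader(f)]

saccade_samples = []
for (t0, x0, y0), (t1, x1, y1) in zip(rows, rows[1:]):
    dt = t1 - t0
    if dt <= 0:
        continue  # skip duplicate or out-of-order timestamps
    speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
    if speed > SPEED_THRESHOLD:
        saccade_samples.append((t0, t1, speed))

print(len(saccade_samples), "samples above the speed threshold")
```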
@user-fcc645 I just installed pyglui 1.22
with pip install pyglui-1.22-cp36-cp36m-win_amd64.whl
I rebuilt the wheel just in case it makes any difference - and uploaded the rebuilt wheel to https://github.com/pupil-labs/pyglui/releases/tag/v1.22
please could you try to install again with pip install --upgrade pyglui-1.22-cp36-cp36m-win_amd64.whl
is it possible that you have multiple versions of Python installed on your system?
Also, are you using the most recent version of pip?
- `python -m pip install --upgrade pip`
Please check the above points and let me know if you are able to install the wheel
I did have Python installed previously. Now I have deleted all of its folders and will try again.
How many and which versions of Python were installed, @user-fcc645? Additionally, it may be beneficial to use the uninstallers on Windows.
Can you please send me a link to the Python installer that you used for testing, if the executable is different from python-3.6.5-amd64.exe?
I had Python 2.7 installed
Could the amd64 vs. x64 naming cause any issue?
Mine is an x64-based processor.
Tested on a Windows 10 machine with Python v3.6.1
the wheel is for a 64bit machine
ok
I think the issue here is that pip or python2.7 is causing a problem on your machine
therefore the mismatch/error
I have uninstalled everything and will start again
Is it possible to have Pupil Player analysis (graphs etc.) added to Pupil Capture through a plugin or something?
the installation issue is solved thanks for your help
@user-fcc645 That's good to hear, thanks for the update. To confirm, this was related to multiple Python versions installed on your machine, correct?
In response to graphs - you could create a plugin that displays graphs in another window.
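Something along these lines could be a starting point; this skeleton only collects the data, the drawing is up to you (the events key and plugin API details may differ between Pupil versions):

```python
from plugin import Plugin


class Gaze_Graph(Plugin):
    """Hypothetical skeleton: collect gaze samples that a custom window could plot."""

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.samples = []

    def recent_events(self, events):
        # The key is "gaze" in recent versions ("gaze_positions" in older ones).
        for datum in events.get("gaze", []):
            self.samples.append(datum["norm_pos"])
```

Drop the file into the plugins directory and enable it from the Plugin Manager.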
Yes, correct. I had multiple Python versions installed.
Question about the gaze position: why does the gaze position jitter and seem discontinuous?
@papr
Two reasons: 1) pupil detection/gaze estimation error, and 2) eye movement itself is not necessarily continuous.
Is it possible to control the MJPEG compression quality? From what I understand, Pupil cams can only stream MJPEG, as uncompressed video is too much for USB 2, and they don't seem to support USB 3. But I think reducing the MJPEG compression ratio would reduce the processing the cameras have to do, and so maybe squeeze out one or two more fps and lower the latency.
@user-c7a20e At which resolution are you running the cameras?
@user-c7a20e jpeg compression can not be changed.
@mpk you mean the compression ratio cannot be changed?
correct.
the limiting factor is the sensor not the usb bandwidth.
what do you mean? limiting factor to what?
For the amount of incoming fps
you said "pupil cams can only stream mjpeg as uncompressed is too much for USB2" . mpk says that this is not true
I see
About 3D calibration: it seems that I also need to send two parameters called translation_eye0 and translation_eye1. What do they mean?
@papr
Do we have a Python example for 3D calibration?
I am not sure what value needs to be set there. My guess is that it is the distance between the two VR-headset screens. But I would recommend looking that up in the hmd-eyes project.
I do not have an example for 3d HMD calibration, sorry
But I checked many times and couldn't find where the 3D calibration is done in the hmd-eyes project code. Can you help me check which file does the 3D calibration? Thank you very much. @papr
Sorry for interrupting but why is 3d calibration needed for HMDs?
No, I am sorry, but I do not have the time to dive into the hmd-eyes project.
@user-c7a20e We highly recommend using the 2d HMD calibration.
Please explain the difference.
3d calibration requires estimating the geometry between the eye cameras and the scene. This is difficult to do. 2d calibration uses a simple 2d regression and is therefore much simpler to use.
So 2d calibration determines eyeball rotation angle relative to its center?
And according to your explanation, 3d calibration tries to determine exactly which pixel on the screen in front of the eye it is looking at? I don't understand how you could do that, because any misalignment of the HMD would offset the result.
No. The calibration procedure itself just learns the mapping function between pupil positions (eye camera coordinate system) and gaze positions (scene coordinate system).
But yes, slippage is a common problem in eye tracking.
2d pupil detection tries to detect the pupil and fit a 2d ellipse to it. The 2d mapping is a polynomial function that maps (x-eye0, y-eye0, x-eye1, y-eye1) -> (x-scene, y-scene).
3d pupil detection uses a time-series of 2d ellipses to fit a 3d model. The result of this procedure is a 3d pupil vector (relative to the eye cameras). The 3d mapping uses a linear mapping/matrix multiplication to map into the 3d scene space
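To make the 2d case concrete, a toy least-squares fit of such a polynomial mapping (the feature set here is an illustrative choice, not Pupil's exact one):

```python
import numpy as np

def features(p):
    # p = (x_eye0, y_eye0, x_eye1, y_eye1); illustrative polynomial feature set
    x0, y0, x1, y1 = p
    return [1, x0, y0, x1, y1, x0 * y0, x1 * y1, x0 ** 2, y0 ** 2, x1 ** 2, y1 ** 2]

# Calibration data: N binocular pupil positions and N scene target positions
pupil_pts = np.random.rand(50, 4)  # placeholder data
scene_pts = np.random.rand(50, 2)  # placeholder data

A = np.array([features(p) for p in pupil_pts])
coeffs, *_ = np.linalg.lstsq(A, scene_pts, rcond=None)  # one column per scene coordinate

x_scene, y_scene = np.array(features(pupil_pts[0])) @ coeffs
```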
I don't understand the need for 3d pupil detection, unless it is dynamic and solves slippage.
@user-c7a20e you are exactly right. That is what it does. New models are fitted on the fly as soon as new observations do not fit the old model
@papr Is it correct to say that the 3D mapping estimates the vector at the center of the pupil that is orthogonal to the pupil's surface? This alone would be a noisy estimate. So, over time, these vectors are used to estimate the center of the globe on which the pupil lies (the eye center).
...and the benefit is that the additional constraint that the orthogonal vector must also cut through the eye center reduces error.
Hi, I am looking for blink detection using Matlab LSL
Found the parameters
With a mobile eye tracking headset, how do we perform 3D gaze calibration, since we don't know the 3D positions of the reference points? @papr
We assume a specific distance
But during bundle adjustment we allow the optimization to move the points on the z-axis as required
I haven't totally understood the role of bundle adjustment. What is the bundle adjustment optimizing? @papr
It tries to estimate the geometrical relation of the eye and world cameras. The results are two mapping functions/matrices that can map 3d vectors from the eye camera space into the world space.
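In other words, something of this shape (placeholder values, just to illustrate what such a mapping does):

```python
import numpy as np

# Placeholder rigid transform for one eye camera, of the kind bundle adjustment estimates
R = np.eye(3)                    # rotation: eye camera space -> world camera space
t = np.array([0.03, 0.0, 0.0])   # translation between the camera origins

direction_eye = np.array([0.0, 0.0, 1.0])  # 3d pupil vector in eye camera space
direction_world = R @ direction_eye        # directions transform by rotation only
point_world = R @ direction_eye + t        # points also pick up the translation
```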
Thank you very much for your reply. "During bundle adjustment we allow the optimization to move the points on the z-axis as required": what are these points? What do they represent?
@papr
The reference points/calibration targets
It's a technical detail that the optimization is allowed to adjust these. Nothing much to be concerned about.
What's the objective function for the bundle adjustment optimization?
@papr
I don't know this detail
Question about 3D calibration: what does the last gaze distance mean in estimating the 3D gaze position?
@papr
@user-b91aa6 this is the magnitude of the 3d gaze point.
Thank you for your reply. Here is what I don't understand: the 3D gaze point is a point, so why should we care about the last gaze distance? @mpk
Because we cannot estimate depth during monocular mapping. Therefore we reuse the most recent gaze distance
So when the gaze mapper has to do monocular mapping, you assume the current gaze depth is the same as the last gaze's depth? @papr
Thank you very much.
the last gaze distance is updated as soon as we can do a binocular mapping, yes
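Conceptually (names illustrative, not the actual implementation):

```python
def map_monocular(gaze_direction_world, last_gaze_distance):
    # Depth is unobservable with one eye, so the most recent binocular
    # gaze distance is reused as the magnitude of the gaze point.
    return [c * last_gaze_distance for c in gaze_direction_world]
```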
Thanks.
Currently, can I get 2D gaze using the 3D eye tracking model? @papr
@user-b91aa6, I'm guessing not, but I'm very curious to hear @papr's response. The 2D marketplace demo seems to switch from the 3D mapper to the 2D gaze mapper upon initialization. Switching it back to the 3D gaze mapper produced horribly inaccurate results the last time I tried.
@user-b91aa6 I don't quite understand why switching the gaze mapper would lead to inaccuracies in the 2D gaze representation - I would have imagined the gaze mappers to be interchangeable.
However, this is all related to the horrible issue I posted about 14 days ago: https://github.com/pupil-labs/hmd-eyes/issues/47. Data representation and gaze mapper methods are conflated: the 3D scene demo pairs the 3D gaze mapper with an unusably noisy representation of gaze in depth, rather than a ray (as in the 2D demo). It seems we lack the ability to pair the 3D gaze mapper with the more useful 2D gaze representation.
I've been waiting for someone to address or acknowledge this limitation, but it seems HMD Eyes is not receiving much attention right now.
@user-8779ef the reason that 2d detection and mapping are interlinked is indeed somewhat arbitrary. For mobile eye tracking this combination does make sense, but we have not implemented a very good 3d pipeline for HMDs yet. I have some ideas I'd love to try, but we don't have the time for this right now.
A small improvement would be using 3d pupil detection and regressing to gaze data via the normals rather than the norm positions. This should give some degree of slippage compensation for HMDs while also yielding stable 2d gaze data.
@mpk, what are the differences between normals and norm positions?
One is the 3d direction from the eye model.
The other is the position of the pupil center in the eye image.
Must we provide markers with depth in HMD 3D calibration?
@mpk
Yes.
May I ask what the reason is?
@mpk
I apologize for this naive or broad question, but if I wanted to modify the Pupil source code, what IDE would allow me to modify it most freely (any recommendations? VS?)? Or is the console used? I am trying to utilize Pupil's eye tracking to get gaze vector data that can be used in Unity or Unreal. I wanted to use a Pi for the eye tracking, but using the entire Pupil GUI is a no-go.
@mpk Thanks for the response! Keep in mind that the 3D integration market is currently wide open. Tobii is the only competition, and... mEH. MEH MEH MEH.
I have been teaching workshops on eye tracking in 3D (last year at ECEM, this year at VSS). I have been advocating your mobile trackers and telling people that I can't yet advocate the HMD integration. I would truly love to give it my full endorsement.
@user-e5aab7 I have been using Sublime Text for simple plugins; PyCharm, VS, and Geany should do the job too.
What's the purpose of `__init__.py` in launchables?
@user-049a7f This makes the files inside importable. It's a Python-specific detail.
Does all the actual eye tracking occur in eye.py?
Yes.
@mpk Is there any way to just run eye.py on footage without opening the Pupil GUI? Or would the eye tracking code in eye.py have to be extracted and made standalone?
@user-049a7f you will have to change the source code if you want to run the detection stand-alone.
Hi all, new user here. What's the simplest way to start pulling gaze data through C++ or C#? I don't want to use Unity or Python if I can avoid it. I just wish to record x and y positions over the course of a session in the HTC Vive, which I will then use to create my own heatmaps and such.
All of the docs talk about Unity and Python, so I was hoping to be pointed in the right direction for a pure C++ or C# plugin that I can use with Visual Studio.
Hello, I'm planning to buy a Pupil Labs eye tracking add-on for my HTC Vive. I'm an experienced Unreal Engine developer, and it's necessary for me to get it running in Unreal Engine. For my first project I just need to get the pupil dilation. Is there any SDK or plugin that I can use?
@user-ea779f @user-f1eba3 did work on that
@user-24270f look at the HMDeyes project on the Pupil GitHub... it is a Unity project, but you can see how to connect to the network communication with C#. Just isolate this stuff and you're good to go... you always need to run Pupil Capture (or Service) as an executable, just to be clear on this
👍
Thanks for your quick reply! So it's basically possible with low to medium effort to get it running in Unreal Engine. Thanks, that helps a lot.
@user-ea779f I made a way to receive data from Pupil synchronously. If the data is found in the pupil or gaze topic, then you just have to use some of my code to get that dilation.
The plugin also has some calibration mechanism that you would basically want to delete.
So I tried a USB 3.0 extension cable so that I could actually reach the computer, and it wouldn't work. Take it out and use the supplied cable only: now it works. So I bought this thing and the supplied cable is not actually long enough to reach the PC. What gives?
I'm using an HTC Vive, so the Vive HMD connection is USB-C, but the provided cable has a standard USB end, so it must go back to the PC. Has anyone got it working with a specific brand of USB extension cable?
@user-24270f I send an answer in the other chat.
Pupil Capture records and exports a CSV with gaze_positions, which seems to be a good way to get the data. Since I'm using VR, I have no world view to overlay the data on in Player.
Has anyone got a recording of the PC view (of the game you're playing, for example) into Player to replace the world view?
That was my plan: to record gameplay and then overlay the gaze data for post-play analysis. But if I can use the tools provided, all the better.
@user-24270f I'm interested in replacing the world video with the game video, too. I had no time yet, but I have two ideas: creating a fake UVC device which streams from FFMPEG (recording the game view is already implemented in HMDeyes), or making use of the "frame_publisher" plugin from Pupil; you can send custom data over the network and display it as an overlay in the Pupil Capture window. I'm not sure if this will be recorded as the world video, but at least it would be a start.
@user-29e10a This would be a great addition. I worry a bit about adding real-time image compression to your pipeline, which is already quite busy. However, I think some kind of compression would be necessary, or the real-time recording feature will not be widely used.
Hello, I am unable to get the pupil service app to run on Mac OS High Sierra. I get the following error:
2018-06-15 20:06:25,707 - service - [ERROR] launchables.service: Process Service crashed with trace:
Traceback (most recent call last):
  File "launchables/service.py", line 149, in service
  File "shared_modules/plugin.py", line 294, in __init__
  File "shared_modules/plugin.py", line 321, in add
  File "shared_modules/service_ui.py", line 103, in __init__
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
Has anyone run into this error? Thanks!
@Jorge#3168 hey, which Mac do you use?
Hi, sorry for the question... but when is it expected that calibration/tracking will be working properly on HoloLens?
@papr Mac Air from 2015
@user-41b9be which version of Service do you use?
@papr 1.7.42
Hi everyone,
I have a question regarding the IPC backbone of Pupil. I am trying to intercept the raw frame data (sent as a compressed JPEG by the camera over USB, if I understand it correctly) by communicating with the IPC. However, I tried listening to the SUB_PORT, and only the notifications, logs, and gaze+eye position data are sent there. Do you know on which port the raw image data of the world frame is transmitted? Is there anything I can do to grab it manually?
Thank you!
@user-7e60fc you need to turn on the frame publisher plugin
@papr and then the frame will be sent to the IPC and I can just subscribe to catch it?
@user-7e60fc correct. The topics start with frame
if I remember correctly
Hi @papr,
The topic is "frame.world". I was able to retrieve the raw data from the IPC backbone, but I couldn't deserialize it with msgpack. I keep getting an error: "msgpack.exceptions.ExtraData: unpack(b) received extra data". Do you know how to fix this? I use msgpack 0.5.6
@user-7e60fc check out this example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
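Condensed, the pattern in that helper looks like this. The key point for the ExtraData error: frame messages are multipart, with the raw JPEG bytes in an extra frame after the msgpack payload, so each part must be received separately:

```python
import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()
    msg = msgpack.unpackb(payload, raw=False)  # metadata: width, height, format, ...
    jpeg_buffer = extra[0] if extra else None  # raw image bytes, not msgpack-encoded
```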
@mpk Thank you so much!
Hey there, I found that in some of our data, our normalized X and Y pupil positions are outside the bounds of [0, 1]. What does this mean? For example, we had a datapoint where one of the dimensions was reading a value of 12
@user-ecbbea Likely means that the estimated gaze location was beyond the limits of the screen.
ehr, of the scene camera.
Anyone here developing / running from source on a mac?
@user-8779ef I do. Why are you trying to install stuff with Anaconda anyway?
Good evening, I need help. Has anyone used the mouse_control.py code available on GitHub with the latest version of Pupil?
Well, I wanted to move the mouse with the movement of the eyes using mouse_control.py with the help of the markers, but I cannot. I run the calibration process and right afterwards I execute mouse_control.py, but the mouse does not move.
Hi @user-3f0708 I responded in the 👁 core channel. We can continue the discussion in this channel if desired. Please post only in one channel 😄
@papr Don't worry - gave up on anaconda very quickly. However, I'm still unable to run....
So you are still having problems with pyav?
No, now it's this ...
"ImportError: dlopen(/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/boost-python/lib/libboost_python3.dylib Referenced from: /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so Reason: image not found" I can confirm that boost_python3 and boost have been "untapped."
( installed via brew )
Are you sure that you want that to be untapped?
Eh? Maybe I'm using the wrong language. I've installed boost / boost-python3 according to the instructions.
"brew install boost brew install boost-python3"
Brew installs are system-wide, right?
well, python wide
But what do you mean by this?
"I can confirm that boost_python3 and boost have been "untapped.""
As far as I know, untapping in brew means not linking the libs in the places where the linker looks for them.
Huh. I said that because, when I install, I believe it says something about untapping. Perhaps it's pouring. I don't know... they try to be too cute 😛
In any case, the packages are already installed and up to date.
Is there some possibility of path issues related to brew?
Yes. The linker was able to find the lib during compilation, but now it does not find it anymore.
Try to delete all .so files in /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/ and delete the build folder as well. Afterwards start Capture; it should rebuild the module.
Strange path issue now...
Console copy/paste:
gbook:~ gjdiaz$ CFLAGS=-stdlib=libc++ /Users/gjdiaz/anaconda3/envs/Pupil3b/bin/pip install git+https://github.com/pupil-labs/pyndsi
Your PYTHONPATH points to a site-packages dir for Python 3.x but you are running Python 2.x!
PYTHONPATH is currently: "/usr/local/lib/python3.6/site-packages:"
You should unset PYTHONPATH
to fix this.
gbook:~ gjdiaz$
You are using the Anaconda pip to install ndsi. This will install the package to an isolated Anaconda directory.
Yeesh, didn't notice that. Let me see if that's the issue (I thought I installed pip for Python 3, not conda).
Ok, compiled with: "MACOSX_DEPLOYMENT_TARGET=10.13 pip3 install git+https://github.com/pupil-labs/pyndsi"
capture started. Great! Now I'll try player...
If you are not sure which binary you are running, you can always run `which pip3` in the terminal and it will tell you the exact path to the binary you will use. In this case, replace pip3 with whichever binary you are unsure of.
Thanks.
So, deleting that "build" folder did work.
I'm back in the game!
😃
nice!
Thanks, papr! That was killing me.
How do I fetch eye images from Pupil Service?
@user-b91aa6 I think this should work if you change it to receive eye images instead: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
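In that example, that would just mean changing the subscription (assuming the Frame Publisher plugin is running):

```python
# Subscribe to the eye frame topics instead of (or in addition to) the world frame:
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.eye.0")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.eye.1")
```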
Thanks. I adjusted the camera focus of the left eye, but I can't get as clear an eye image as for the right eye. Any ideas how to solve this?
@user-b91aa6 the left eye camera is not focused. I would recommend rotating the lens until it is further out and then slowly working inward until it is in focus.
Given that you have a 120Hz headset
@papr looks like a vive addon.
Correct!
When you measure the accuracy in the Vive, how do you measure it? Because the accuracy is different in the center area and in the periphery. Is only the accuracy in the center measured?
Why is the periphery always blurred for the left eye camera, while the center of the image is sharp?
I could imagine that the lens is simply dirty...
I cleaned the lens surface using a brush. What's the best way to clean it?
I strongly recommend using a microfiber cloth
Can Pupil Service provide fixations in real time?
@user-b91aa6 yes, you can subscribe to the fixation classification plugin for realtime fixation data.
You need to explicitly start it though via a notification
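A sketch of both steps over Pupil Remote; the plugin name below is an assumption, so check the Capture source for the exact class name:

```python
import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote

# 1) Start the fixation detector via a notification.
notification = {"subject": "start_plugin", "name": "Fixation_Detector", "args": {}}
req.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
req.send(msgpack.dumps(notification, use_bin_type=True))
print(req.recv_string())  # confirmation reply

# 2) Subscribe to realtime fixation data.
req.send_string("SUB_PORT")
sub_port = req.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "fixations")

topic, payload = sub.recv_multipart()
print(topic, msgpack.unpackb(payload, raw=False))
```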
Yes, thanks for that clarification @papr
Thank you very much