@user-f1eba3 This indicates a broken jpeg frame. Did it appear once or consistently multiple times?
@user-f1eba3 please try a different USB hub if this issue is frequent. An occasional message like this is ok.
@user-f1eba3 In my experience this can also happen if you have long USB cables or more than one USB hub in between. Two things help: external power to the USB hub & a pure USB 3.0 hub line (works even if the USB clip of the Pupil Labs ET is 2.0 and not USB-C). Changing the USB bus (not hub) might also help.
@mpk + @papr Thanks for the fast fixes! Nice job 😃
Hey Pupil, Love the work you're doing. I was wondering if I could borrow an eye-tracking headset so that I could validate some designs. I'm currently doing an MSc in Medical Device Design and having your headset in the surgical theater would be great; it would make sure that the UX is well designed for the surgeons. Any help on this would be greatly appreciated. I have tried to build your open-source version of the glasses but the lack of both time and skill is hampering my completion of it. All the best.
Hi @user-dbbe31 - Thanks for the kind feedback! We also received your email and have replied to your question in that context.
Great thanks for that.
Hi Pupil Labs, I bought the new cables that @wrp suggested to use my Google Pixel 2 phone with the HoloLens eye tracker add-on through Pupil Mobile. In the NDSI manager my phone is detected, but I get an error about time sync not being loaded. Please find attached an image of the error msg.
HoloLens, cellphone and computer are all on the same network
Hi, How are PupilLabs users managing the newer Macbook shift to all thunderbolt ports? We are in need of faster laptops and seeking recommendations. Many thanks!
@user-90270c just buy usb-c to usb-c cables. You should be fine using them in combination with the new MacBooks.
@user-006924 don't worry about it. You have time sync enabled.
@papr So that's not causing the issue of me not getting any feed either on cell phone or pupil capture?
I basically made sure the three devices are on the same network, opened pupil mobile and chose pupil mobile in pupil capture, the phone and the laptop are correctly detected in pupil capture, but nothing is really happening. Am I missing a step?
@user-006924 did you select the camera from the drop down menu?
@papr you mean inside pupil mobile?
No, in capture. In the ndsi manager plugin menu
in pupil capture, when I open pupil mobile app, it automatically brings my mobile in remote host, but nothing happens when I click the drop down in front of the "remote host" or "select to activate"
OK. Just to clarify: The select to activate menu is empty?
yes
Can you post a screen shot of the pupil Mobile app with the headset connected?
You have connected the headset to the phone already, correct?
If by connecting you mean using the USB-C cable to connect the headset and open the Pupil Mobile app, yes I have. I'll send a screenshot right away.
OK that is what I meant 🙂
The UVC source in Pupil Capture says: Ghost capture, capture initialization failed
OK, the cameras should appear there.
inside the app?
Mmh. Can you look for an option called OTG Storage in your phone's settings? And enable it if you find it?
Yes, the cameras should be listed within the app
I'll look for it right now
I can't find that option in my phone, I just googled pixel 2 OTG and I found that an adapter shipped with the phone should be used. I'll check with the adapter and see if the cameras appear. Thank you
@user-006924 OK, good luck. Else I would not know what the problem could be
@papr Thanks for reminding me to check the OTG option. It's working 🙂
Just one question, when I'm recording in pupil mobile if I want to have the pupil gaze data I should also record in pupil capture as well?
I haven't specified a saving location inside the app.
@user-006924 you can copy and import Pupil Mobile recordings in Player
Got it, thanks again.
Has anyone run Pupil eye capture on a Raspberry Pi 3? I'm still not sure how well it can handle camera image processing in real time.
I am getting this error on windows 10 64 bit when I install pyglui "pyglui-1.22-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform.". Please help
@user-fcc645 are you using python 3.6?
yes
Ok. We can look into this.
thanks
Question - is there an absolute necessity to build Pupil from source on Windows?
yes please I am a windows user
Understood. Just asking because the win dev environment is quite fragile. Also wanted to point out that one can do a lot with plugins using the app bundle
I would like to extend this work in the .NET Framework, having multiple streams from multiple devices and doing analysis
It would be great if you are aware of any such open-source project and could direct me to it. Cheers
"C# project"
I will try to reproduce your pyglui install issue and get back to you
thanks
@user-fcc645 let's migrate this discussion to the 💻 software-dev channel
sure
@user-c7a20e After working with a Raspberry Pi 3b+ on a different project, I would say that its computing power will not be sufficient.
It could be enough to make recordings without pupil detection/gaze estimation and simply record the world/eye videos and open the recording on a different computer. But I did not test that
Thanks. Nah, that would be pointless for my project. That said, what processing power and RAM are needed for two of the 120Hz cameras and real-time gaze estimation? From another abandoned project I also have an Odroid and an Asus Tinkerboard lying around I could try, if you think there's a point.
Do you only need the pupil detection? Or do you need the gaze estimation pipeline as well?
It's been a while since I've used the Pupil API, what was the difference?
Pupil detection just finds the position of the pupil within single eye video frames. Gaze estimation uses detected pupil positions and tries to estimate their associated gaze target within the scene camera
But I would be interested in your actual use-case as well.
Here's the goal: I want to put two Pupil cams inside a custom OSVR headset I am working on. I would like to offload as much of the processing as possible from the host machine so it can use its processing resources and RAM on other things such as 3D rendering. A good option seems to be to put something like a Tinkerboard between the PC and the headset. Ideally a Tinkerboard-type board would handle all the Pupil processing and just stream position data to the PC. This would likely also minimize any latency.
Since there's already a breakout board like the one used by PSVR, a Tinkerboard-like board could fit inside of it.
I understand. It would require some/a lot of manual work, but you could create a slimmed-down version of Capture that does not have an interface at all and simply runs pyuvc, the pupil detectors and hmd calibration. I am speaking of using the components and combining them with a custom script
But I do not know if the processing power would be enough. But this would be your best shot at getting this working
I wasn't thinking of using Capture but the Python API. I think the GUI and preview add too much processing.
Yes, I agree. Unfortunately we do not have a separate python api. As I said, you would need to extract the components by yourself and use them in a custom python script
What do you mean? Separate API for what?
"I wasnt thinking of using Capture but python API" there is no python api which you could simply use liek a different module, e.g. numpy
sure, but arent the individual components exposed to Python as an API?
(don't want to start an exe GUI program each time for a HMD with integrated pupil cams)
There is no such thing as import pupil_capture
in Python
But you can clone the repository, add the shared_modules folder to your Python path and import single plugins/files by hand
that is what I meant by "you would need to extract the components by yourself"
Sounds simple enough
that said, can capture be launched without GUI
No
Now that's a problem.
That's why I am telling you that you need to extract the components by yourself 😉
Wait so Capture is just using the Python modules in the shared_modules folder?
Capture is plugin based. Most of its functionality is separated into multiple plugins. These live in shared_modules
Be aware that these plugins expect to be called from within Capture. Directly calling them might not always work.
Capture is pupil_src/main.py right?
main.py is just the launcher that starts the different processes. The different processes can be found in launchables/
Yes, but it is what the Capture exe runs right?
On Windows you need to execute run_capture.bat
, on Linux/macos you start Capture by running python3 main.py
, yes
Okay, thanks for guiding me through this. I could write a Python script to do what I want by stripping the parts unnecessary for me and adding needed code. Is this something others might find of use? I can't be the only one interested in using Pupil this way.
That is the way to go. I am very sure that there would be users interested in that. I would recommend to put this project into its own github repository, e.g. Pupil Capture headless
. When it is done we can add it to our community repository
OKay, thanks.
Some unrelated questions: 1) Why do you not use a hot mirror adhesive film adhered to plexiglass so you can have the eye tracking cameras facing the eyes directly, with nothing obstructing the user's view? Sounds more practical, but maybe I am missing something
2) Has research been done to show that a faster refresh rate at the cost of resolution is justified (only 200x200 at 200Hz)?
Pupil detection works well on 200x200. Higher framerates are very welcome in our field in order to be able to detect fast movements like microsaccades
have you checked the research paper by NVidia on camera latency for foveated rendering? They discuss saccades there and it appears it is not an issue because of the saccadic blindness phenomenon. Otherwise I think 300Hz might not have been fast enough for any usage either
@user-c7a20e I would be interested in it. I am currently trying to get the NoIR module to work on the Pi. I am just trying to get gaze vector data from the tracking
@user-e5aab7 Hi, in what exactly?
@user-c7a20e from my understanding you are planning to strip the pupil source code down to just focus on capturing (eye tracking)?
@user-c7a20e if so, then I would be interested in the stripped down version of pupil
https://docs.pupil-labs.com/#diy Anyone know the latency of Logitech C525/C512 and Microsoft HD-6000?
@user-e5aab7 Yes, but don't expect a clean stripping. I will probably remove a bunch of stuff and add a bunch of comments
Hello, does anyone know if there is a way to export data from a recording directory without using the Pupil Player GUI? I have been looking to use a command line prompt. If there is a Pupil resource on this already, I would appreciate direction to the source. I see a command in batch_exporter.py; however, this does not seem to function without the main.
No, there is no such possibility. But you can load the recording data directly. See file_methods.py for how to deserialize the pupil_data file
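A minimal sketch of that approach (paths are placeholders and the dict keys are from the 1.x releases; inspect the loaded object yourself, as names can change between versions):
```python
import os
import sys

sys.path.append("/path/to/pupil/pupil_src/shared_modules")  # adjust to your clone
from file_methods import load_object  # Pupil's own (de)serialization helpers

rec_dir = "/path/to/recording"  # a Pupil Capture recording folder
pupil_data = load_object(os.path.join(rec_dir, "pupil_data"))

# Typically a dict with keys such as 'pupil_positions',
# 'gaze_positions' and 'notifications'.
print(pupil_data.keys())
print(pupil_data["gaze_positions"][0])  # first gaze datum
```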
I will take a look there. Thanks!
I can deserialize pupil_data. I am looking to deserialize additional plugin data files. Can I do this if the plugins are enabled at the point of recording in pupil capture? If so, where would this additional plugin data go? Is this data only generated at the point of export?
I am thinking that exporting this specific data can only be done through a notification to the network
Which plugin are you talking about specifically?
Notifications are stored automatically during a recording if they include record=True
Hello everyone! (I'm new here)
No specific plugin. I would like to essentially write a .py script to automate the start/stop recording as well as the export of this data (for example, blink detection, saccade detection, etc) without requiring a human to interface with the GUI
Hey @user-dc89dc Welcome to the channel!
@user-c23839 start/stop recording can be done using Pupil Remote
blink data should be present in pupil_data. But we do not have saccade detection yet
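For the start/stop part via Pupil Remote, a minimal sketch (assumes the default Pupil Remote address/port):
```python
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

remote.send_string("R")        # start a recording ("R my_session" also sets a session name)
print(remote.recv_string())    # always wait for the acknowledgement

# ... run your trial ...

remote.send_string("r")        # stop the recording
print(remote.recv_string())
```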
So I'm working on identifying cognitive states based on gaze data
And the dataset is several minutes of a control experiment and a test experiment, labelled 0 and 1
I've put together a binary classifier to predict on a new sequence of gaze data whether the person is not stressed (0) or stressed (1)
My features are rolling statistics (of preceding 5 seconds), of position, speed and fixation position - these all go into the training of an SVM
Was wondering if anyone here has feature engineered for gaze data? Would love to hear thoughts!
(apologies if this is the wrong channel for this?)
@user-dc89dc thanks for sharing this information - there are likely quite a few individuals in the community interested in cognitive load. I will continue the discussion in the 🔬 research-publications channel
Hello everyone,
I wrote an inquiry to the [email removed] email address in German. Is that a problem?
Hi there. Today my one eye camera suddenly showed me two eyes. I always thought the other side was only cables without function. However, my original right eye image is now always upside down. Also I was wondering how data quality would be. Did it happen before? Thank you.
@user-103621 this should not be a problem
@user-b571eb Could you send a screenshot of what you mean by "one eye camera suddenly showed me two eyes"? The right eye image is flipped because the camera is physically flipped. This does not impact pupil detection. You can flip the visualization in the eye window's general settings
I didn’t expect to have both eye images.
You mean both eye windows?
Yes, this is expected if you have a binocular headset 😃
One camera video feed for each eye
Oh. I thought I ordered one eye camera only. That’s why I am a bit shocked.
So the sampling rate is not 200 hertz?
If you click on the camera icon on the right side you will be able to change resolution and frame rate
Ok. Thanks
@user-b571eb can you DM me your order id or the name associated with the order - so that I can confirm that we have shipped you the correct hardware?
I’ll do per email.
perfect, thanks
Hi, I'm just starting up with my newly purchased hardware on windows 7 64bit, and I'm seeing some strange behavior: When I try to run pupil_capture.exe it fails and then promptly deletes itself as well as all other executable files and many other files from the installation directory.
Has anyone encountered something like this and can help? Thanks!
correction - PupilDrvInst.exe remains, but other executable files are gone
Hey @user-ea0ec0 Windows 7 is not supported! You should be using Windows 10
OK, I'll check on windows 10. Thanks!
is there some way to open just a single eye GUI? When I run python3 main.py it opens world and 2 eyes.
@user-e5aab7 try running Pupil Service. It's like Capture except it does not run the world camera.
@mpk i tried pupil service in the launchables directory by doing python3 service.py and didn't get any response
You need to run python main.py service
Service is the arg being parsed
@mpk Ah yes it worked, thank you.
Hi all, I am trying to visualize the polyline of the gaze for longer than 5 seconds. I have found this modified version of the scanPath plugin, but it doesn't show me the box of scanPath once the plugin has been loaded. https://gist.github.com/willpatera/34fb0a7e82ea73e178569bbfc4a08158 Any hint?
Hey @user-344dcd this is an old gist of mine and is deprecated. Please modify the plug-in from the pupil source code instead of this gist.
@user-c23839 a command line tool to export data would be great.
Right now there is no such thing as "Pupil Player Service", only "Pupil Capture Service"
Hi guys, is it possible that Pupil Capture needs about 20% more CPU on Windows in v1.7 compared to v1.6? I'm running on a Core [email removed] with one VR application in parallel. I just checked the two versions and the total load of my machine on v1.6 is about 80%, and with 1.7 it's 100% (according to Windows Task Manager), and I'm also getting frame drops to below 100 fps on the two eye videos? hmmm...
You tweaked the detector algorithms; maybe this consumes a lot more horsepower?
@user-41f1bf There is a possibility to export from command line - via batch_exporter main() method https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/batch_exporter.py
Can anybody tell me where I can receive live data in Pupil Capture of "gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z" from the 3d eye model? On one hand I think I can use zeromq, but where do I get the data within Pupil Capture using a plugin?
@user-593589 you can access data from the events dictionary passed to your plugins recent_events function
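A toy sketch of such a plugin; the field and key names below are from memory and may differ between Pupil versions, so print a whole datum once to confirm the exact structure:
```python
from plugin import Plugin


class Gaze_Normal_Printer(Plugin):
    """Toy plugin: print the per-eye 3d gaze normal of incoming gaze data."""

    def recent_events(self, events):
        # the key is 'gaze_positions' in the 1.x releases (newer versions use 'gaze')
        for g in events.get("gaze_positions", []):
            normals = g.get("gaze_normals_3d", {})        # binocular 3d mapper only
            normal1 = normals.get("1", normals.get(1))    # right eye (eye1)
            if normal1 is not None:
                print("gaze_normal1 (x, y, z):", normal1)
```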
We just received the kit for the Vive & are working on metric captures. Excited !
Hi @wrp, yes I have noticed it was referring to an old version of the pupil source code. I did try to modify the plug-in from the source code (i.e., changing max=5 to max=1000 here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/vis_scan_path.py#L110), but the trace still disappears after a few seconds. Do you know if I have to modify anything else? Thanks
Hey, I already wrote an email about this but maybe I could get a quick answer here. Which world camera is used in the high-speed model? Is it the same as in the DIY section? Since the FOV for the high-speed with the extra lenses is much higher than the 3D model, we are considering getting an additional tracker but with the high-speed camera instead of the 3D camera. Or do you even use a different camera with your own specs? I need this information for my Master's thesis so I'd appreciate any help.
Hi, is it possible to change the mjpeg compression amount to control USB bandwidth, both with the Pupil cam and a webcam?
@user-c7a20e it's not possible to change this. It's hardcoded into the embedded firmware.
the underlying limitations of USB transfer are also more subtle than just fitting streams of size x into bandwidth below y.
what would that be?
Hello, I'm trying to download the Pupil apps on my Windows 10 computer for my internship, but when I want to open Pupil Player or Pupil Capture it doesn't work, it just shuts off. Maybe you can help me? Thank you!
@user-c6649c you will need updated graphics drivers. This will fix the issue.
@user-c7a20e have a look at this. Its a bit dated (rolling shutter does not apply anymore) https://github.com/pupil-labs/pupil-docs/blob/master/developer-docs/usb-bandwidth-sync.md
Are there any smooth pursuit detection algorithms?
@user-dc89dc not in Pupil but I think there are a few papers that could be implemented.
Thanks @mpk - could you link me to a few perhaps?
When running screen marker calibration from the latest bundled release, everything works fine. When I run the same thing from source (downloaded master today) it does not work. A single marker appears on screen but does not move. By adding a few print statements to the source code I can see that it is not detecting any onscreen markers. No errors are written to the console. Any ideas?
I cannot reproduce this issue when running from the current master.
Make sure that the markers are in the field of view. They should display a green dot in the center when detected properly
Strange. No green dot, Works immediately when I run from the bundled version. Perhaps a dependency issue? I am running from Anaconda and did not follow install instructions exactly as in docs. I will reinstall and see if it works.
Is there a way to do a calibration without a world camera based on known real-world coordinates? For example I know how to map pixel positions on my screens into real angles in degrees, can I use that to do calibration without needing a separate world cam? Sorry if this is already in the docs, but I couldn't find it.
@user-6302ac You have to use the hmd calibration for that. Be aware that this assumes your headset to be fixed within the real world
What is the FOV of the eye cameras, both the older 120hz and new 200hz versions?
@mpk Thanks for the link. This raises more questions for me. Since I only need two eye cameras and have 2 separate USB controllers on my target PC, can I tell Pupil to use either uncompressed mjpeg or raw frames? I believe this would improve the overall latency by a few ms by eliminating or greatly reducing the latency introduced by encoding/decoding (compressing/uncompressing). It would, I'm guessing, require capturing in grayscale rather than RGB to reduce the size of each frame by 3x.
Another question: wouldn't it be a good idea to try to sync the two eye cameras as closely as possible by modulating the left/right eye IR LEDs at the same time, comparing the frames, and turning one of the cameras off/on until the LED illumination can be seen in both the left and right eye frames?
I don't think the firmware allows to disable compression
@user-dc89dc @mpk I would be interested in smooth pursuit detection algorithms too.
We will look at adapting the implementation from this paper https://www.nature.com/articles/s41598-017-17983-x.pdf
Interesting, I was looking at that today too haha
@papr do you guys plan on working off of his implementation too? https://gitlab.com/nslr/nslr-hmm/blob/master/nslr_hmm.py
Yes, we still need to evaluate it but we would like to use as much as possible.
Does pupil work with Ubuntu 18?
@user-73ee8f yes Pupil does run on Ubuntu 18.04
Can Mindwave Mobile & Pupil Labs be synchronized together?
@user-e5aab7?
Just got the headset and I am looking at the raw data - are there any plugins or analysis software that can take the csv file and easily measure for saccades etc.?
@user-d180da Pupil is open, so you can sync anything with Pupil timestamps. It really depends on "Mindwave" being flexible enough to be adapted to generate correlated timestamps.
Hi guys. When do you expect to have the pupil-labs calibration/tracking working properly on Hololens?
@papr fyi: regarding https://www.nature.com/articles/s41598-017-17983-x.pdf, the nslr_hmm functions appear to take degrees of horizontal/vertical rotation as input, but Pupil Player exports normalized XY gaze positions. Because of this I haven't been able to get nslr working with Pupil Labs output
@user-d3a1b6 This is not a problem. We can convert x/y to degrees within the world camera if the world camera's intrinsics are provided. This is true for Pupil Cams or if you use the Camera Intrinsics Estimation plugin
@papr thanks for the tip, I will dig a little deeper then
Or you simply use the 3d pupil vector to calculate angular changes
It would be nicer if the event detection was independent of the gaze mapping
pupil_datum['circle_3d']['normal']
will give you the 3d vector relative to the eye camera
are those data in the pupil_positions.csv output from Pupil Player export? I see columns circle_3d_normal_x,y,z with alternating rows for each eye
correct
make sure to filter for high confidence values, low confidence pupil data is very noisy
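A minimal sketch of using those fields to compute the angular change between two pupil data while filtering low-confidence samples (the 0.8 threshold is just the value suggested elsewhere in this channel):
```python
import numpy as np


def angular_change_deg(pupil_a, pupil_b, min_confidence=0.8):
    """Angle (degrees) between the circle_3d normals of two pupil data,
    or None if either datum falls below the confidence threshold."""
    if pupil_a["confidence"] < min_confidence or pupil_b["confidence"] < min_confidence:
        return None
    n_a = np.asarray(pupil_a["circle_3d"]["normal"], dtype=float)
    n_b = np.asarray(pupil_b["circle_3d"]["normal"], dtype=float)
    cos_angle = np.dot(n_a, n_b) / (np.linalg.norm(n_a) * np.linalg.norm(n_b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```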
Hello Pupil Labs, can I use pictures from your website for a presentation (mentioning of course the origin), like the one below?
Yes, such a Screenshot is fine
Alternatively you could make a Screenshot of the store page ;)
Hey guys, I am trying to build pupil and it seems that I have a problem with boost. However, the online docs say that when checking out the code there should be a directory called capture in pupil_src, but it does not exist
also .detector3d is looking for this boost lib boost_python3-vc140-mt-1_65_1; however, this is an MSVC 14 boost build, not MSVC 14.1
I am on Windows, just for info
In general I am trying to stream the raw gaze data in real time; that's why I am trying to build from source. Is there a way to do this without having to build?
Hi. I have problems with the offline fixation detector plugin. In some recordings it is not able to detect fixations. It starts detecting fixations, but at some point it stops and Pupil Player closes suddenly. I work with pupil_v1.7-42-7ce62c8_windows_x64. Does somebody know how I can solve this and get the fixations?
@user-8e4642 can you share the output from the logfile when this happens? Or share this recording with data@pupil-labs.com ?
@user-525392 if you want to get access to raw data no need to build from source. Just access the data bus via a simple script like this: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
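A minimal sketch of what that script does (assumes the default Pupil Remote port; msgpack and pyzmq installed):
```python
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the port of the data bus
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to gaze data (use "pupil." for raw pupil data instead)
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    print(topic, datum["norm_pos"], datum["confidence"])
```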
Hey folks - having trouble updating pyndsi. Is this a known issue, or am I just special?
I seem to have an error when building the wheel.
Nevermind, I found the relevant issue: https://github.com/pupil-labs/pyndsi/issues/35
...unfortunately, this is a real problem for mac users. I can't seem to compile 😦 The issue, as pablo identified it, is that the build requires c++11. I'm not quite sure how to control the pip compiler ... working on it, but if anyone has a method, let me know!
( this MAY be an issue related to the use of Anaconda)
Ok, here's the fix: CFLAGS=-stdlib=libc++ pip3 install git+https://github.com/pupil-labs/pyndsi
Does Pupil Labs know how much of Pupil's CPU usage is due to the GUI? I'm going through right now slowly removing features, but before I continue I wanted to see if it's even worth going through. I am trying to make a lighter-weight version that could potentially run on a Pi and capture gaze data, but currently Pupil is too heavy for a Pi and I was hoping a stripped-down version could run more smoothly
@user-73ee8f I think it's less than 1% for the GUI. Drawing the video frames costs a bit more. You should turn that off. In the world window most of the cost is mjpeg decompression and gaze mapping.
In the eye process it's ~95% pupil detection.
Hi guys, is there a way to watch and modify the surfaces_definitions file directly? Today we set up a pilot experiment and we found a bug which caused surfaces to be tracked but not displayed in the surface tracking plugin. This leads to the problem that you can't delete the surface anymore.
So, I'm now getting an error...
/Users/gjdiaz/anaconda/envs/py36/bin/python /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/main.py player
MainProcess - [INFO] os_utils: Disabled idle sleep.
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/launchables/player.py", line 583, in player_drop
    from player_methods import is_pupil_rec_dir, update_recording_to_recent
  File "/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/player_methods.py", line 17, in <module>
    import av
  File "/Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/__init__.py", line 9, in <module>
    from av._core import time_base, pyav_version as version
ImportError: dlopen(/Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/_core.cpython-36m-darwin.so, 2): Library not [email removed]
  Referenced from: /Users/gjdiaz/anaconda/envs/py36/lib/python3.6/site-packages/av/_core.cpython-36m-darwin.so
  Reason: image not found
MainProcess - [INFO] os_utils: Re-enabled idle sleep.
Process finished with exit code 0
Lib AV is installed. Any thoughts?
...sorry, pyAV
Sadly, libavformat appears to be related to ffmpeg.
ffmpeg is just an interface for libav*, so yes it is related
Looks like pyav was not installed correctly. Try rebuilding it
Yeah, no luck.
I have successfully installed av-0.4.1.dev0...but the message persists.
What about installing an older version?
...I can try that.
pip3 install git+https://github.com/pupil-labs/PyAV/releases/tag/v0.3.1 ?
Obviously, that doesn't work. Working on how to get at an older version through pip...
git clone <url>
cd pyav
git checkout <old_version>
pip3 install .
Thanks @papr 😃
@mpk This is a screen of what happens just before the pupil player stops.
I think I know the error. Could you please share the data set such that I can investigate its origin.
@papr I will send the data set to [email removed] Is that ok?
Yes please
@mpk would it be feasible to rewrite eye.py in C++/C to make the pupil detection run smoother on a Pi? Or would that require rewriting all of Pupil in C++? I know in some places Pupil was written in C++ where speed was needed and was just wondering if rewriting where the most CPU is used would help a Pi process it better?
@user-73ee8f the pupil detection is already written in C. Everything else in eye.py is UI code. If you want to spend time optimizing you will have to look into the pupil_detectors module
@mpk is there a way to watch and modify the surfaces_definitions file directly? Today we set up a pilot experiment and we found a bug which caused surfaces to be tracked but not displayed in the list of surfaces in the surface tracking plugin. This leads to the problem that you can't delete/rename the surface anymore. You could also copy the surface definitions to create AOIs that require the same markers faster.
@user-c351d6 This is a msgpack-encoded file. Be aware that bad formatting of that file can cause Capture to crash! I would recommend to delete the file and define your surfaces again.
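If you do want to inspect or edit it at your own risk, a minimal sketch using Pupil's own helpers (paths are placeholders; keep a backup, since malformed content can crash Capture):
```python
import sys

sys.path.append("/path/to/pupil/pupil_src/shared_modules")  # adjust to your clone
from file_methods import load_object, save_object

path = "/path/to/pupil_capture_settings/surface_definitions"  # or the copy inside a recording
surfaces = load_object(path)
print(surfaces)  # inspect the registered surfaces

# ... carefully edit the returned structure ...
save_object(surfaces, path)  # malformed data here can crash Capture
```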
@papr Ok, I understand this. However, this bug occurs quite often and we are working with a lot of surfaces. Redefining the surfaces costs a lot of time (and causes stress). Defining surfaces by using a text editor would be great and also a fast solution. We are able to work around the problem by making backups of the surfaces all the time but you may want to add this to your list of bugs.
what about defining the surfaces once and copying the def file into the recordings?
@mpk Thank you for the advice, we've already planned to do this when we run the experiment. Especially defining surfaces which lie over the same marker area is not that easy using the AR interface, because you get quite a lot of menus within the picture and it seems like Pupil Player / Capture crash more often when having many surfaces within the configuration. Creating the def file once is actually not that easy.
@user-c351d6 what is your reason to define multiple surfaces over the same markers? Why not define a single surface instead?
@papr When attempting to install a previous version of PyAV, I get the following error:
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/gjdiaz/anaconda3/include -arch x86_64 -I/Users/gjdiaz/anaconda3/include -arch x86_64 -Ibuild/temp.macosx-10.7-x86_64-3.6/include -I/usr/local/Cellar/ffmpeg/4.0.1/include -I/Users/gjdiaz/anaconda3/include/python3.6m -Iinclude -I/Users/gjdiaz/anaconda3/include/python3.6m -Ibuild/temp.macosx-10.7-x86_64-3.6/include -c src/av/plane.c -o build/temp.macosx-10.7-x86_64-3.6/src/av/plane.o
src/av/plane.c:596:10: fatal error: 'libavfilter/avfiltergraph.h' file not found
#include "libavfilter/avfiltergraph.h"
This is true of any version I've checked, and those extend from the 4.0 baseline to the second-to-last commit.
(not all of those, but a few from that period).
avfiltergraph.h is a component of ffmpeg.
...and I can confirm I have ffmpeg 4.0.1 installed, and opencv 3.4.1_5.
@papr For instance, when you want to determine how often someone gazed at a particular area of a monitor, you could define many surfaces as AOIs using the same markers. As far as I know, there is no way to define multiple AOIs within a surface without writing a complex script, is there? With this solution I could just parse the logfiles of the surfaces and use the on_srf column to calculate my dwell times. Do you see an easier way to get the dwell times?
@papr re: converting Pupil Labs gaze positions to horizontal/vertical rotations... I reached out to the first author of nslr-hmm and he said:
The pupil labs system gives locations normalized to the scene camera frame (-1 to 1 or 0 to 1, can't remember which). Just multiplying these with the scene camera's FOV angle should work fine; the approximation error is very small and the NSLR-HMM should work just fine with those.
So I took the norm_pos_x and y data from gaze_positions.csv and applied the formula (x - 0.5) * 50, where 50 degrees is the FOV of the world camera. These rotations seemed to produce decent results in nslr-hmm, but I wanted to run this approach by you. I bet I'm making a mistake somewhere.
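A minimal sketch of that approximation (the FOV values are placeholders to be replaced with your lens's values; as noted below, using the camera intrinsics is more accurate):
```python
def norm_pos_to_deg(norm_x, norm_y, fov_x_deg=50.0, fov_y_deg=50.0):
    """Approximate gaze angles (degrees) from normalized scene coordinates.

    norm_x / norm_y are the norm_pos_x / norm_pos_y columns (0..1).
    fov_*_deg are the world camera's horizontal/vertical fields of view."""
    return (norm_x - 0.5) * fov_x_deg, (norm_y - 0.5) * fov_y_deg
```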
@papr in your opinion do you think optimizing pupil_detectors modules will provide a noticeable difference in performance ?
@user-049a7f I think that there is some potential to optimize the code. I don't know it well enough to tell you how much potential exactly. I would try running it as it is and see how many frames you can process with it
@user-d3a1b6 Sounds great! This can be improved in accuracy by using the camera intrinsics, but as the author said, this does not make a lot of difference. The question is how demanding the algorithm is in terms of cpu and memory.
@user-8779ef did you install ffmpeg with anaconda as well? Maybe it is not able to find its location
I suggest nobody use anaconda / virtual envs for this. I was unable to change the conda compiler to C++11.
...and so unable to install pyndsi
I've now found that my python path was set incorrectly (though I'm not sure what the correct path is...), and so pip was not correctly installing certain packages. Trying to reinstall and see what happens.
@user-c351d6 You are right that defining multiple surfaces solves your problem. Nonetheless, I would recommend using a single surface for the complete monitor and setting the surface's size to the monitor's resolution. This way you will have to assign the gaze points to each displayed monitor element after the recording. But you would save the surface tracking CPU usage.
@user-8779ef yeah, conda does everything in an isolated fashion...
Yeesh.... now this:
"ImportError: dlopen(/Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/boost-python/lib/libboost_python3.dylib Referenced from: /Users/gjdiaz/PycharmProjects/pupil_pl_2/pupil_src/shared_modules/calibration_routines/optimization_calibration/calibration_methods.cpython-36m-darwin.so Reason: image not found"
I can confirm that boost_python3 and boost have been "untapped."
@papr Hello, I am new to Pupil Labs and am not as experienced. Can you please explain to me how to use AOIs and direct me to any documentation Pupil Labs may have on the topic? I am trying to design an experiment where the eye must not stray from a red dot in the center, and I am using Pupil Labs to monitor the eye and make sure that it does not deviate.
@user-cd9cff https://docs.pupil-labs.com/#surface-tracking
@papr Thank you so much!
@papr Do you know of any project that has done surface tracking with Matlab?
Not sure, but these guys might have: https://github.com/mtaung/pupil_middleman#pupil-middleman
Else I would recommend to extend our matlab helper script to receive surfaces
@papr By running do you mean running Pupil? Or is there some way to run pupil_detectors alone? When running Pupil using just main.py service I get sub-15 FPS (but it feels like 5 FPS average, to be fair)
You can pass images directly via python but I meant running Capture without ui, yes
@papr Sorry, I am a bit confused. Is there documentation for passing images directly via Python? And I did not know you can run Capture without a UI - how is that done? Service still has some UI, so minimizing it (and even removing video playback) could possibly reduce consumption. Issues I am running into:
@user-049a7f no, there is no documentation other than the code itself. You will have to manually remove the ui code to run Capture without ui. Have a look at eye.py and how it calls detect() on the pupil detector. Unfortunately I cannot link code lines since I am on mobile.
You can use pyuvc to access the camera images directly without capture btw and then pass these images directly into the pupil detector instance. You will have to write your own code for that though.
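A minimal sketch of the pyuvc part (following the examples in the pyuvc repository; the camera name filter and frame mode are placeholders to adjust to your hardware, and the detector call is left as a stub):
```python
import uvc  # pyuvc

# Find a Pupil eye camera by name and open it
devices = uvc.device_list()
eye_dev = next(d for d in devices if "ID0" in d["name"])  # e.g. "Pupil Cam1 ID0"
cap = uvc.Capture(eye_dev["uid"])
cap.frame_mode = (320, 240, 120)  # width, height, fps - must be a mode the camera offers

while True:
    frame = cap.get_frame_robust()
    gray = frame.gray  # numpy array, the luminance plane of the mjpeg frame
    # ... pass `gray` (plus a region of interest) to a pupil detector instance here ...
```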
Hi all. Noting you've been discussing NSLR and its implementation, I'd like to give a heads-up that the current C++ version has issues with non-Linux environments. The Python version works, but is probably too slow for widespread usage. On Windows the problem is that it uses too-new C++(17) features to build on VC++. On macOS it builds, but apparently it for some reason gets stuck on the segmentation. I don't have access to an OSX environment, so I can't really debug it further. I'd love to see NSLR integrated in Pupil and can try to e.g. port it to C++11 for a more portable build when I find the time.
(For my defense both MS and Apple seem almost actively hostile to people trying to support their platforms without paying them, which I have no intent of doing)
@papr regarding the CPU usage, with the C++ version it shouldn't be a problem. It should handle hundreds of thousands of samples per second. Also, if you disable the noise optimization, which you probably can as you know the equipment, the performance is further 10 times or so faster.
good evening
I need help. Has anyone used the mouse_control.py code available from GitHub with the latest version of Pupil?
Well, I wanted to move the mouse with the movement of the eyes through the mouse_control.py code with the help of the markers, but I cannot. I do the calibration process and right afterwards I execute the mouse_control.py code, but the mouse does not move.
@user-3e42aa Hey, nice to hear from you! We would probably wrap the C++ code in a Cython extension instead of using the Python code if the performance increase is so drastic. But I will have to look into the paper + code in the coming weeks. Btw, do I understand it correctly that your code is also released under the Open Access license?
@user-3f0708 Hi - I think I also received your email. Apologies for the delayed reply. I just tested the code on Linux (Ubuntu 18.04) and it works as designed. You will need to install the dependencies zmq, msgpack, and pyuserinput in order to run mouse_control.py. You will also need to define a surface named screen in Pupil Capture using the surface tracking plugin.
@papr Is there already a program/script/GUI to assign AOIs (display elements) within a single surface? The experiment is conducted at a chair of psychology; most experimenters have no experience with any programming language.
@user-c351d6 not that I know of. But this would be a matter of comparing values. E.g. you know that an element with size WxH
was displayed at (X,Y)
then gaze points lie on that element if both of these conditions hold true (see the sketch below):
- X <= gaze_x <= X+W
- Y <= gaze_y <= Y+H
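A minimal sketch of that hit test (all values assumed to be in the same coordinate system, e.g. the surface scaled to the monitor resolution as suggested above):
```python
def gaze_on_element(gaze_x, gaze_y, elem_x, elem_y, elem_w, elem_h):
    """True if a surface-mapped gaze point lies inside a rectangular display
    element located at (elem_x, elem_y) with size elem_w x elem_h."""
    return (elem_x <= gaze_x <= elem_x + elem_w) and (elem_y <= gaze_y <= elem_y + elem_h)
```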
@papr Thank you for the information, we will consider doing this. Will a surface be tracked even though it's not completely in the FOV of the world camera?
A surface will be tracked as long as at least 2 markers of the surface are detected.
Hello. I am making a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?
@papr There are Python bindings for the C++ version already included, so no need for cython wrapping. It's released under AGPL.
Well, actually the python implementation is included also in the article as a listing, so it sort of is under CC-BY-4
OK, cool!
I am looking forward to replace as much of our event detection code with yours. Makes my life easier 😃
hi everybody,
I am starting to work with the Pupil headset, but when I try to calibrate it with my Mac (2.6 GHz Intel Core i7, version 10.12.6), using the screen marker calibration, the software crashes. Any idea about this? What should I do?
I have tried it with Windows, and it worked. I also tried the manual calibration on the Mac, but I was not able to do it.
@user-b6398e Please follow these steps:
1. Start capture
2. Calibrate / crash the software
3. Upload ~/pupil_capture_settings/capture.log, i.e. in your home folder, look for a folder called pupil_capture_settings and upload the capture.log file that is included
Is it possible to send a procedure tutorial to run the mouse_control.py code? And I'm using Linux (Ubuntu 16.04)
@user-3f0708
1. Start Pupil Capture (ensure pupil remote plugin is enabled - it is enabled by default settings)
2. Load surface tracker plugin from plugin manager in the GUI
3. Define a surface that corresponds to your computer screen and name it "screen" (you can name it what you like, but note that you will need to update the surface name in mouse_control.py)
4. Calibrate
5. Start mouse_control.py, e.g. python3 mouse_control.py
6. See the cursor move to where you gaze on screen.
Can you tell me where in the mouse_control.py code the surface name is, so I can change it to match?
@wrp Can you tell me where in the mouse_control.py code the surface name is, so I can change it to match?
@wrp thank you
@user-3f0708 you're welcome 🙂
I wonder if we could identify a fixed area from the raw data output. We were trying not to use the surface tracking markers in our experiments since they might be distracting. I know that in the raw data file, the left corner of the world video is the origin. My question was whether we could identify a fixed area while the world camera was moving with the participant's head. Thank you.
@user-e2056a Not with the given tools. You will either have to annotate the fixed area yourself or use some kind of object detection.
The problem is that the relationship between headset and your AOI is not fixed.
I see, thank you
Anyone have an opinion on what would be the best way to add a new video capture option for a stream? For example, if I want to run Pupil on my machine but have it process live footage being broadcast over the network from another machine? Would it be possible to modify one of the backends?
Either you implement your own backend or you implement a pyndsi host. This is a Python example that implements such a host for a connected uvc camera: https://github.com/pupil-labs/pyndsi/blob/master/examples/uvc-ndsi-bridge-host.py
This is the protocol specification: https://github.com/pupil-labs/pyndsi/blob/master/ndsi-commspec.md
@wrp One doubt: in mouse_control.py, this line of code screen_size = sp.check_output(["./mac_os_helpers/get_screen_size"]).decode().split(",") - what does the directory ["./mac_os_helpers/get_screen_size"] represent?
@papr One doubt: in mouse_control.py, this line of code screen_size = sp.check_output(["./mac_os_helpers/get_screen_size"]).decode().split(",") - what does the directory ["./mac_os_helpers/get_screen_size"] represent?
@papr
We are using a stereoscopic display and we would like to monitor eye position while the observer is performing the task. The observer is asked to fixate the center of the screen. We would like to flag trials in which eye position deviates outside a tolerance window.
The stereoscopic display is optical (see attached image). The observer views two monitors through two mirrors tilted at 45° with respect to the line of sight. The monitors themselves are parallel to the line of sight but displaced about 1 m to the left and right of the line of sight. The left and right eye displays are configured to be a single extended display so that they are synchronized.
Our questions are related to the issue of how best to perform a calibration under these circumstances.
Briefly, if we are using our dichoptic setup, can we calibrate with only the right display (seen by the scene camera) and if that is not possible, what is our best alternative - using a separate display, physical markers, etc.
Option 1: Using the Right Display: The scene camera can only see the right display through the mirror. Does it make sense then to do a monocular calibration with only the right display? The default Pupil Labs calibration is for the entire (extended) display. So if we want to calibrate just the right display, how should we go about it? Is there a modified Pupil Labs calibration that we can use, should we use our own custom calibration routine, or use something else such as surface markers on the display? (If we understand the documentation, we will not be able to select the right display by itself as it is configured to be part of one extended display.)
Option 2. Using another display: Instead, should we calibrate binocularly on another display or a physical surface that is at the same viewing distance and has the same dimensions as each eye's display?
May I ask how you ended up solving this calibration problem?
@user-3f0708 if you are using Linux this code block will not be executed. If you clone the entire pupil-helpers repository you will find the mac_os_helpers dir. @user-3f0708 let's continue this discussion in the 💻 software-dev channel as it is moving towards more development-specific questions.
@user-cd9cff I would handle this as if it were a VR headset. You would assume that the subject only sees the screens. In this case you would have to set up your experiment such that it uses the HMD Calibration plugin and provides the screen positions for each displayed marker.
The problem is that you will have issues with slippage as soon as the subject moves.
Hi, is it possible to find the longest gaze duration on a specific surface?
One more question, suppose we are using the left corner of the world video as the origin, how do we measure the location of an area of interest? Do we measure the proportion of the AOI relative to the world video, and compare to the normalized coordinate?
@user-e2056a Not with the built-in tools. Export the surface data for your surface and look at the gaze positions for it. You are looking for the longest sequence of gaze points whose on_srf value is True and whose confidence is higher than a given threshold, e.g. 0.8
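A minimal sketch of that search over an exported surface gaze csv (column names are assumptions based on the export format described here; check the header of your export and adjust):
```python
import csv


def longest_on_surface_run(csv_path, min_conf=0.8):
    """Return (start_ts, end_ts, duration) of the longest uninterrupted run of
    on-surface, high-confidence gaze samples in an exported surface gaze csv."""
    best = (None, None, 0.0)
    run_start = prev_ts = None
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            ts = float(row["gaze_timestamp"])
            keep = row["on_srf"] == "True" and float(row["confidence"]) >= min_conf
            if keep:
                if run_start is None:
                    run_start = ts
                prev_ts = ts
            elif run_start is not None:
                if prev_ts - run_start > best[2]:
                    best = (run_start, prev_ts, prev_ts - run_start)
                run_start = None
    if run_start is not None and prev_ts - run_start > best[2]:
        best = (run_start, prev_ts, prev_ts - run_start)
    return best
```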
@user-e2056a You can use the to_world transformation matrix to convert homogeneous surface coordinates into world coordinates.
Thanks @papr, where do I find the to-world transformation matrix?
in the srf_positons<surface name>.csv file
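A minimal sketch of applying that matrix (assuming to_world is a 3x3 homography; check the exported file for its exact layout):
```python
import numpy as np


def surface_to_world(norm_x, norm_y, to_world):
    """Map a point from normalized surface coordinates into world (scene image)
    coordinates using the surface tracker's to_world homography."""
    H = np.asarray(to_world, dtype=float).reshape(3, 3)
    x, y, w = H @ np.array([norm_x, norm_y, 1.0])
    return x / w, y / w  # de-homogenize
```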
Thank you
May I ask why the eye image for the top eye is a little more blurry than that of the bottom eye?
@papr
And why do you want to flip the eye image? Thank you very much.
The image is flipped because the camera is flipped. This does not influence pupil detection/gaze mapping.
Why is the top eye image a little more blurry than the bottom eye's?
@papr
Do you use the 120Hz or the 200Hz cameras?
120Hz camera
This makes my 3D eye detection not as stable as for the bottom eye
For the bottom eye, the eye contour is very clear. But that of the top eye is not that clear
@papr
Then you can carefully (!) adjust the focus of the left/eye1 camera.
Which direction should I turn the camera?
I tried, but I don't see which direction it is supposed to bring the eye into the focus of the camera
I can't tell you that since I do not know how far the eye is from the camera, nor do I know how far in the lens has been rotated. You will have to try yourself. Alternatively you can try changing the eye camera's distance to the eye.
OK. Thanks
Hey guys, I am trying the mobile module with my Pupil. When I start Pupil Capture on the host machine it shows that I need to install the time sync module. Do you guys know where I can find the plugin and how to install it?
No, you just have to activate it. Open the Plugin Manager on the right and enable the Time Sync option
What FOV does the eye tracking camera have on the pupil headset?
@papr Thank you for the prompt response. On the host machine I get this message when I enable the time sync module.
This just means that there is no other time sync node yet.
@papr In our setup, the two screens are essentially one large screen
Yes, that is fine
The longer screen we have divided into two sections, and so we need a calibration procedure that will be the same on both the right and left sides of the screen
The present HMD calibration is in the middle repeated only once
But is there a way that we can put a calibration on the left side and on the right?
You will have to display the markers yourself using your own program. Ideally you show two markers on both "screens" and send their location to Capture
By markers, are you talking about surface markers?
no calibration markers
or anything that you want to, but it should have a clear point of attention whose location you can send to Capture
E.g. you display a cross, tell the subject to look at the cross, and send the cross's center as the reference location to Capture
Hey papr, any way for me to order the 3D printed HTC clip-on component from shapeways?
...or some other way? I was recently sent the LED ring on the circular ribbon cable...
@user-8779ef I dont know that, sorry
( there was a miscommunication ).
Ok. I'll talk to MPK. Thanks!
So, we would display some sort of marker on both "screens", and send those as reference objects to Capture?
correct
or is there a special thing that makes a calibration marker different?
No, the only special thing about our calibration markers is that Capture is able to auto-detect them in the world video. But since you provide the locations yourself, you do not need to use our markers
Then what is the situation behind the HMD calibration plugin that you were talking about earlier? How does that play a role in this?
I don't know what you are referring to
@papr so regarding Pupil Mobile: I am trying to use it in real time. So is it possible to stream the eye camera to Pupil Capture and have Pupil Capture do the pupil detection, or does everything have to be offline? Is the Mobile module just for offline purposes?
You told me earlier today...
"I would hanlde this as if it was a vr-headset. You would assume that the subject does only see the screens. In this case you would have to setup your experiment such that it uses the HMD Calibration plugin and provides the screen positions for each displayed marker.
The problem is that you will have issues with slippage as soon as the subject moves."
What is the HMD Calibration plugin and what role would that play in my setup?
The HMD Calibration plugin is one of the calibration methods. Usually, e.g. with screen marker calibration, the plugin collects reference data by auto-detecting the markers in the scene video. In your case this is not possible. Instead you can use the HMD Calibration. It is unique in the sense that it does not collect reference data itself, since we do not have a typical scene video in a VR environment. The scene is the environment itself. Therefore the application that runs the VR env needs to display the markers. The good thing is that it knows the locations (since it is drawing them) and can send them to Capture, where the HMD Calibration plugin collects the reference data.
Where can I get the file for the HMD Calibration?
It comes with Pupil Capture
This is an example client for the hmd calibration: https://github.com/pupil-labs/hmd-eyes/tree/master/python_reference_client
So, using HMD, I need to manually code the markers into my environment and send those locations to Capture?
correct
How would I send the locations to Capture?
See the example code above
This includes the whole procedure: https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/hmd_calibration_client.py
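In outline, the procedure boils down to sending notifications to Capture over Pupil Remote. A rough sketch of that flow (notification subjects and field names follow the linked reference client; verify them against the script and your Pupil version before relying on them):
```python
import msgpack
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote


def notify(notification):
    """Send a notification dict to Capture via Pupil Remote."""
    remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(notification, use_bin_type=True))
    return remote.recv_string()


# Load the HMD calibration plugin and start collecting reference data
notify({"subject": "start_plugin", "name": "HMD_Calibration", "args": {}})
notify({"subject": "calibration.should_start",
        "hmd_video_frame_size": (1920, 1080), "outlier_threshold": 35})

# One entry per marker fixation and eye; timestamps must be in Pupil time
# (request "t" from Pupil Remote to read the current Pupil time).
ref_data = [{"norm_pos": (0.5, 0.5), "timestamp": 0.0, "id": 0}]
notify({"subject": "calibration.add_ref_data", "ref_data": ref_data})

notify({"subject": "calibration.should_stop"})
```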
I understand now, thank you so much for your patient explanation
I recommend thoroughly understanding the example in order to build on top of it
@papr Do you know if there is a script for the hmd calibration client in Matlab
?
No, not that I know of. But you can build on top of our Matlab example scripts: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Are the Pupil eye cams not running in BW mode but rather RGB? Seems like a waste of refresh rate, bandwidth and latency if so
The cameras transmit the jpeg data. We use libturbojpeg to uncompress the YUV data from the jpeg frames and use the Y plane (which is the grey image) for processing
@papr Since surface tracking is established in matlab, it feels like it is the right way to go. If I can make the scene camera pick up the surface markers that I have coded into the stimulus on the right display, will the calibration work fine?
i.e. I don't yet understand why surface trackers won't work for my situation. There is code in matlab for them vs. HMD having only python scripts
but the camera itself, if capturing in BW mode, should be able to quadruple its fps, or take much less time at the same refresh rate. Sure, the frame compression/decompression time will stay the same with jpeg, but the capture time should decrease
@user-cd9cff The issue is that the world camera does not pick up what the user sees. How do you want to evaluate the user's gaze without knowing where the user looked?
@user-c7a20e I don't have insights into the configuration of the camera firmware/hardware. @mpk is the expert on this.
Hi all, does anyone know if there is a way to record the binocular 3d gaze mapper debug window? Or if there is a way to calibrate and record each eye separately in the binocular system?
@papr The way the stereoscope is set up, the world camera picks up the display for the right eye perfectly fine. And since the right eye display is the same as the left eye display, the world camera can see what the user sees. If I code surface markers into the right eye display, the world camera will see them too
actually looks like I may have been wrong, at least ordinary jpg files can be grayscale
@user-cd9cff oh ok, then I misunderstood the setup. What exactly is it that you want to show in your experiment? Why do you need to split the screens?
Physically, we have two monitors. The right monitor is for the right eye and the left monitor is for the left eye. But, we used a DualHead2Go in order to merge both monitors into one large screen. Now, we are running the stimulus on the right half of this large screen and the left half of this large screen. The right half of the large screen fills up the right monitor while the left half of the large screen fills up the left monitor. When I look through the mirrors, since the scene camera is on the right side of the face, the scene camera picks up the image in the right mirror, which is the right monitor. Additionally, the stimulus is being run in Matlab. My idea was to code the surface markers (attached below) into the Matlab stimulus and have Capture recognize these markers and calibrate itself.
Now I have understood the setup. My question is what you want to achieve. My guess is that this kind of setup is useful if you want to show a specific stimulus to a single eye without showing it to the other. But you already said that both "screens" will be showing the same stimulus. Sorry if I am missing something very obvious here.
We want to show the same stimulus to both eyes and simply track how well each individual eye sees the stimulus. We did not split up the screens to show the eyes different stimuli, but to show them the same stimuli separately and have the brain put the images together.
Don't worry, it's a confusing setup to understand
Ok. In this case you want to use a dual-monocular gaze mapper. Unfortunately, it is not possible to calibrate it directly. Instead a binocular gaze mapper will be created after calibration. We wanted to add an option for that but it did not have high enough priority yet. You will either have to run a modified version of the source code or record everything, incl. the calibration procedure. As soon as we release this feature you will be able to use the offline calibration function to recalibrate using the dual monocular mapper
I'm a little confused, could you explain the necessary procedure a little more clearly please?
So in your case you want to track the eyes individually. This is what a dual monocular gaze mapper does. The binocular mapper merges two pupil positions into a single gaze point. The screen calibration creates a binocular mapper by default. There is no UI option to change that behaviour. This means that you will have to change the source code in order to change that behaviour.
@papr Is there any way to put the calibration screen on the right and left displays separately?
Why don't you simply mirror the screens?
Like you would mirror a beamer during a presentation
The Matlab code that I have only works with this screen setup
Is there any way to put the calibration even on one half of the large screen?
@user-cd9cff no, not built in
You either run it in full screen or use the window mode.
So window mode might actually work for you
Will coding surface markers into the stimulus on the right display work, or am I beating a dead horse with the surface markers?
No, this should work fine
This is the recommended setup if you want to map gaze to a screen
So I can simply code the marker (see attached image) into the stimulus and Capture will automatically detect it and calibrate?
Just to clarify the terms. The square markers are used for surface tracking, i.e. you activate the surface tracker, show the markers within the scene, register a surface for the markers, and Capture will start mapping gaze to that surface.
Yea
Separately from that you will have to calibrate.
So if we can't use them to calibrate, then I will either have to figure out how to use HMD, or I will have to change the source code for dual-monocular gaze mapping?
Calibration != surface tracking. You will need both for your experiment
Last question: What if I calibrated using my personal laptop screen and then used the surface trackers for the stimulus?
would that work?
Would Capture be able to scale the calibrated view to the stimulus using the surface markers?
Technically you can calibrate in your double-screen setup just fine using the screen marker calibration window on the right screen. But you will have two major issues:
1. your subject only sees the marker on the right, therefore the right eye will fixate the marker. But you do not know what the left eye will do. Nonetheless the calibration procedure expects that both eyes fixate the calibration marker. Therefore it is not clear if the left eye will be calibrated correctly. You can fix that by showing the calibration marker on both screens at the same time.
2. the binocular gaze mapper uses vergence to estimate depth as well as the binocular gaze point. This binocular gaze point is meaningless in your setup since you want to investigate gaze for each eye separately. This is why you need a monocular gaze mapper. And you will have to change the code to use it instead of a binocular mapper.
Further clarification: calibration markers are the concentric circles displayed during screen marker calibration.
The arguments above are independent of the surface tracking/displaying square markers. If you display them and set up a surface based on them, then Capture will take the current gaze, independently of which gaze mapper was configured, and transform it into the coordinate space of the surface that you set up before
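For reference, reading that surface-mapped gaze from your own script is just a zmq/msgpack subscription. A rough, untested sketch (field names like gaze_on_srf follow the docs of this era, so double check against your release):

```python
import zmq
import msgpack  # pip install pyzmq msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the port of the IPC backbone
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')
req.send_string('SUB_PORT')
sub_port = req.recv_string()

# Subscribe to surface events; every message is [topic, msgpack payload]
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'surfaces')

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.unpackb(payload, raw=False)
    # gaze_on_srf holds gaze mapped into the surface's normalized coordinates
    for gaze in surface.get('gaze_on_srf', []):
        print(gaze['norm_pos'], gaze['on_srf'])
```

The surface coordinates are normalized to the registered surface, so (0.5, 0.5) is its center regardless of where the markers sit on your right monitor.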
So, does that mean that I can calibrate on a separate screen and then use surface markers in the stimulus?
if that won't work, how do I show the calibration marker on both screens at the same time?
Yes, you can use the surface markers in your stimulus
Oh, so it will work even if I have it calibrated on another screen
?
But as I said, surface markers are not used for calibration but for surface tracking.
Yes, my plan is to do normal calibration on my personal laptop, and then move my head to the spectroscope and use the surface markers coded in the stimulus
so i'll be doing both
This might work, yes.
But the issue with the dual-monocular gaze mapper persists nonetheless
The issue being that at the end of the day, I will have a gaze that combines both eyes instead of showing the eyes separately
correct!
so I won't be able to track the eyes separately
That might not be a problem, I'll have to try this out and see
Thank you so much for your invaluable help
No problem! I will see if I can squeeze the option for the dual monocular mapper into the next release 😉
Thank you so much!
@papr Do you know how to show the calibration marker on both screens? Is there a Pupil setting that can do that?
@papr, Do you guys have a resource where I can read more extensively about the Pupil Mobile module? It is just that in the documentation it is not clear whether I can use it in real-time or not.
@user-cd9cff This is not built in. You would not have this issue if you mirrored the screens... yes, you would have to change that Matlab script... but this would simplify everything, including the script itself
I understand, thank you
@user-525392 https://docs.pupil-labs.com/#pupil-mobile
@user-525392 you can use Pupil Mobile for real time streaming of video data over WiFi or for local recording of sensor data on the Android device.
Hey guys, I have problems with drivers for Pupil labs. There is no Libusbk in device manager at all. I am using windows 10 and htc vive pupil addon. Can anybody help?
@user-2feb34 what version of Pupil Capture are you using?
Do Pupil Cam devices show up in another category when you connect them via USB? How are you connecting the Pupil cameras to your computer?
@wrp can I use the mouse_control.py without the markers?
No, you need the markers to detect the screen
ok
Hello guys, is it possible to detect surfaces without QR codes?
The built-in surface tracker only works with these markers. You will have to implement your own object detection if you don't want to use our markers. @user-41f1bf has some work on that
You can find the plugin I have been using for tracking screens without fiducial markers here: https://github.com/cpicanco/pupil-plugin-and-example-data
You can find a complete description of the approach here: http://doi.org/10.1002/jeab.448
Dear Pupil Labs Team, I need authorization to use this photo in my application for ethical approval. Is there any formal form I need to prepare? Or would it be enough if I note the copyright as well as the citation in my references? Thank you.
I'm sorry to bother you guys again. We are planning to conduct an experiment with the Pupil Labs eye tracker. Today, I tried to record a session with the Pupil Mobile app and noticed that the duration of a recording is limited to around 10 minutes and a 4 GB file size. This is quite a problem because our sessions will take around 20 minutes and there is no possibility to restart a recording. Then I tried to activate the h.264 compression, which decreases the file size dramatically, but Pupil Player (or any other program) is not able to play these recordings. Is there a way to have recordings longer than 10 minutes with the Pupil Mobile app without using WiFi?
Hi @user-b571eb If you note our company name and website address as credit for the image, we can approve the usage for ethical approval application. You can also email us at info@pupil-labs.com for further confirmation/approval.
@user-c351d6 are you using the latest version of Pupil software and Pupil Mobile? What device are you using with Pupil Mobile?
@wrp I'm using the latest version of both. The device is a OnePlus 5T. I could provide you with a short recording. The experiment will be conducted at the beginning of August and I have to find a solution for this quite fast.
@user-c351d6 just to make sure: you should run h.264 compression only on the world video. This will open and play in Pupil Player. We recommend using MJPEG for the eyes. I think this should stay below 4 GB for 20 minutes.
@mpk Yes, I used h.264 only for the world video and that's the problem: the world video can't be played. After dragging the recording into Pupil Player, the application terminates without an error. After deactivating the compression it works again.
@user-c351d6 are you using windows?
@mpk same behaviour on mac and windows
What's the output from the terminal when the app stops?
ah ok.
let me try to reproduce this here.
I will upload a short recording to a cloud drive.
@user-c351d6 do you have a hi speed or hi-res world camera?
hi-res
However, the recording was at 1280x720 at 30Hz
that's important input!
The hi-res camera does camera-side h264 compression.
this means it is different from the hi-speed camera.
If you can share a recording I can fix this or let our devs know that this needs fixing.
@mpk I've sent you a link via pn
Thanks for the fast support, please let me know how you make progress on this if possible
@user-c351d6 ok. I see this file is not right. I have delegated this to our devs. I'll get back to you with a fix or an updated release for Pupil Mobile.
Hi guys! Is there any neat way to integrate Pupil Labs with UE4? I saw there are instructions for Unity. Do you have something similar for UE4?
hello guys, any idea what went wrong here?
I'm unable to open this recording in Pupil Player
Memory, CPU and drive are going ~100% and nothing happens
Are you trying to open a very big recording?
that's right
Unfortunately I am on mobile and currently not able to open the log file. Does the log say "out of memory error" at the end?
It is a known issue that Player is not able to handle very big recordings well. We are currently working on an approach to fix that.
there are no errors
only warnings, debug and info
2018-06-23 15:11:15,327 - world - [DEBUG] av_writer: Opened 'C:\Users\zezima\recordings\2018_06_23\001/world.mp4' for writing. 2018-06-23 15:11:15,362 - world - [INFO] recorder: Started Recording. 2018-06-23 15:34:59,360 - world - [DEBUG] recorder: Closed media container 2018-06-23 15:34:59,362 - world - [INFO] camera_models: Calibration for camera world at resolution (1280, 720) saved to C:\Users\zezima\recordings\2018_06_23\001/world.intrinsics 2018-06-23 15:35:57,805 - eye1 - [WARNING] uvc: Turbojpeg jpeg2yuv: b'Invalid JPEG file structure: two SOI markers' 2018-06-23 15:36:35,092 - world - [INFO] recorder: Saved Recording. 2018-06-23 15:36:44,515 - eye1 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit. 2018-06-23 15:36:44,537 - eye1 - [DEBUG] uvc: Stream stopped 2018-06-23 15:36:44,549 - eye1 - [DEBUG] uvc: Stream closed 2018-06-23 15:36:44,550 - eye1 - [DEBUG] uvc: Stream stop. 2018-06-23 15:36:44,544 - eye0 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit. 2018-06-23 15:36:44,573 - eye1 - [DEBUG] uvc: UVC device closed. 2018-06-23 15:36:44,575 - eye0 - [DEBUG] uvc: Stream stopped 2018-06-23 15:36:44,579 - eye0 - [DEBUG] uvc: Stream closed 2018-06-23 15:36:44,579 - eye0 - [DEBUG] uvc: Stream stop. 2018-06-23 15:36:44,589 - eye0 - [DEBUG] uvc: UVC device closed. 2018-06-23 15:36:44,688 - world - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit. 2018-06-23 15:36:44,689 - world - [DEBUG] uvc: Stream stopped 2018-06-23 15:36:44,693 - world - [DEBUG] uvc: Stream closed 2018-06-23 15:36:44,693 - world - [DEBUG] uvc: Stream stop. 2018-06-23 15:36:44,727 - world - [DEBUG] uvc: UVC device closed. 2018-06-23 15:38:54,117 - eye1 - [INFO] launchables.eye: Process shutting down. 2018-06-23 15:38:55,460 - eye0 - [INFO] launcha
2018-06-23 15:38:56,250 - world - [DEBUG] plugin: Unloaded Plugin: <accuracy_visualizer.Accuracy_Visualizer object at 0x000000000790E940> 2018-06-23 15:38:56,250 - world - [DEBUG] plugin: Unloaded Plugin: <display_recent_gaze.Display_Recent_Gaze object at 0x000000000790E8D0> 2018-06-23 15:38:56,257 - world - [DEBUG] plugin: Unloaded Plugin: <system_graphs.System_Graphs object at 0x000000000790E6D8> 2018-06-23 15:38:56,259 - world - [DEBUG] plugin: Unloaded Plugin: <plugin_manager.Plugin_Manager object at 0x000000000790E780> 2018-06-23 15:38:56,261 - world - [DEBUG] plugin: Unloaded Plugin: <calibration_routines.screen_marker_calibration.Screen_Marker_Calibration object at 0x000000000790E1D0> 2018-06-23 15:38:56,262 - world - [DEBUG] plugin: Unloaded Plugin: <video_capture.uvc_backend.UVC_Manager object at 0x00000000078FE358> 2018-06-23 15:38:56,263 - world - [DEBUG] plugin: Unloaded Plugin: <log_display.Log_Display object at 0x00000000078FE208> 2018-06-23 15:38:56,265 - world - [DEBUG] plugin: Unloaded Plugin: <surface_tracker.Surface_Tracker object at 0x000000000786DF60> 2018-06-23 15:38:56,266 - world - [DEBUG] plugin: Unloaded Plugin: <fixation_detector.Fixation_Detector object at 0x00000000078FE198> 2018-06-23 15:38:56,267 - world - [DEBUG] plugin: Unloaded Plugin: <calibration_routines.gaze_mappers.Binocular_Vector_Gaze_Mapper object at 0x0000000020916588> 2018-06-23 15:38:56,368 - world - [DEBUG] plugin: Unloaded Plugin: <pupil_remote.Pupil_Remote object at 0x00000000078E9B00> 2018-06-23 15:38:56,368 - world - [DEBUG] plugin: Unloaded Plugin: <pupil_data_relay.Pupil_Data_Relay object at 0x00000000078E98D0> 2018-06-23 15:38:56,371 - world - [DEBUG] plugin: Unloaded Plugin: <video_capture.uvc_backend.UVC_Source object at 0x00000000078E9630> 2018-06-23 15:38:56,474 - world - [INFO] launchables.world: Process shutting down.
ok, so probably the problem is that the files are too big
Ah wait. This is a Capture log but you are describing issues with Player. You need to look for player.log in the player settings folder
I know, I thought that it might be a problem with the recording
2018-06-25 10:54:16,445 - MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version. 2018-06-25 10:54:17,753 - player - [ERROR] player_methods: No valid dir supplied (C:\Users\zezima\Desktop\MGR\pupil_v1.7-42_windows_x64\pupil_player\pupil_player.exe) 2018-06-25 10:54:28,306 - player - [INFO] launchables.player: Starting new session with 'C:\Users\zezima\recordings\2018_06_23\000' 2018-06-25 10:54:28,315 - player - [INFO] player_methods: Updating meta info 2018-06-25 10:54:28,317 - player - [INFO] player_methods: Checking for world-less recording 2018-06-25 10:54:29,484 - player - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend 2018-06-25 10:54:30,261 - player - [INFO] launchables.player: Application Version: 1.7.42 2018-06-25 10:54:30,261 - player - [INFO] launchables.player: System Info: User: zezima, Platform: Windows, Machine: DESKTOP-9U84NDA, Release: 10, Version: 10.0.17134 2018-06-25 10:54:30,546 - player - [DEBUG] video_capture.file_backend: loaded videostream: <av.VideoStream #0 mpeg4, yuv420p 1280x720 at 0x774dc88> 2018-06-25 10:54:30,546 - player - [DEBUG] video_capture.file_backend: No audiostream found in media container 2018-06-25 10:54:30,566 - player - [DEBUG] video_capture.file_backend: Auto loaded 60891 timestamps from C:\Users\zezima\recordings\2018_06_23\000\world_timestamps.npy 2018-06-25 10:54:30,593 - player - [INFO] camera_models: Previously recorded calibration found and loaded! 2018-06-25 10:58:08,983 - player - [INFO] file_methods: C:\Users\zezima\recordings\2018_06_23\000\pupil_data has a deprecated format: Will be updated on save
Please wait after dropping the recording onto the Player window. At some point it should open
ok, I'll give it more time. One time I got a bluescreen, we'll see what happens
hello everyone... has anyone done this project or have any idea about it: "cursor movements with eye gaze"?
Hi @user-7f5ed2 yes, that is done using surfaces (you can see these in the video linked above) defined in Pupil Capture and this helper script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "shared_modules\file_methods.py", line 58, in load_object
  File "msgpack/_unpacker.pyx", line 164, in msgpack._unpacker.unpack (msgpack/_unpacker.cpp:2622)
  File "msgpack/_unpacker.pyx", line 139, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:2068)
MemoryError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "launchables\player.py", line 246, in player
  File "shared_modules\file_methods.py", line 64, in load_object
  File "shared_modules\file_methods.py", line 48, in _load_object_legacy
_pickle.UnpicklingError: unpickling stack underflow
player - [INFO] launchables.player: Process shutting down.
@papr you were right, there is memory error in Player
@wrp there is an error in that code
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Richa Agrawal/Downloads/Compressed/Computer_Vision_A_Z_Template_Folder/Code_for_Windows/Code for Windows/circle detection.py", line 7, in <module> from pymouse import PyMouse
File "C:\ProgramData\Anaconda3\lib\site-packages\pymouse__init__.py", line 92, in <module> from windows import PyMouse, PyMouseEvent
ModuleNotFoundError: No module named 'windows'
@user-7f5ed2 this script depends on zmq, msgpack, and pyuserinput - https://github.com/PyUserInput/PyUserInput
The error you are seeing appears to initiate from pymouse aka pyuserinput
please check out pyuserinput docs for installation on windows
@user-7f5ed2 I have never personally run this script on Windows, only Linux and macOS
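For anyone following along: once the dependencies are installed, the core of a script like mouse_control.py boils down to subscribing to gaze mapped onto a registered surface and feeding it to PyMouse. A compressed, untested sketch (not the literal helper script; it assumes a surface has already been defined in Capture):

```python
import zmq
import msgpack
from pymouse import PyMouse  # provided by PyUserInput

mouse = PyMouse()
screen_w, screen_h = mouse.screen_size()

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')
req.send_string('SUB_PORT')
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'surfaces')

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.unpackb(payload, raw=False)
    for gaze in surface.get('gaze_on_srf', []):
        if not gaze['on_srf'] or gaze.get('confidence', 1.0) < 0.6:
            continue  # skip gaze outside the surface or with low confidence
        x, y = gaze['norm_pos']
        # surface coordinates have their origin at the bottom left,
        # screen coordinates at the top left, hence the flip in y
        mouse.move(int(x * screen_w), int((1.0 - y) * screen_h))
```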
Hey guys, new to the chat, anyone here used pupil + psychopy to analyse pupil dilation data?
Hello everyone. Have any of you seen this error when exporting data in Pupil Player? I'm having it come up on several different videos I am trying to export. Also, when I accidentally ran the export without the world_timestamps file, I was able to export the video. Any thoughts, or do I need to open a formal question on GitHub?
@user-bfecc7 are you using recordings made with Pupil Mobile while timesync was running?
we need a bit more info on your setup.
Hello, yes my apologies. We are running pupil mobile with timesync running. Collecting data from 2 pupil labs applications running at the same time and connecting them with pupil groups.
@user-bfecc7 ok. I think something happened that made one of the Pupil Mobile sessions 'forget' the time sync offset. (app crash?)
we can fix this with a script. Best if you can share this recording (just the _timestamps.npy files) with data[at]pupil-labs.com . We can confirm the issue and show you the script to fix it.
I'll also let our dev know that we need to make this more robust against this kind of failure 😃
One more question: Are you recording on the Pupil Mobile device or streaming to a Pupil Capture instance and recording there?
We are streaming to a computer and having it record there. This is for several videos and we have not had an app crash to my knowledge during a recording session. Also it shouldn't be a network issue as the only two devices on a private network are our pupil labs devices.
@user-bfecc7 ok. I think I know what's going on. Let me confirm this with our Android dev and I hope we have this fixed in the next release.
We can most likely fix your existing recordings. Check out @papr's answer to this issue: https://github.com/pupil-labs/pupil/issues/1203
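(For the curious: such a fix usually amounts to shifting the affected _timestamps.npy files by the missing clock offset. A purely illustrative sketch, not the actual script from the issue, and the offset value here is made up:)

```python
import numpy as np

offset = 123.456  # hypothetical offset between device clock and synced clock, in seconds
ts = np.load('eye0_timestamps.npy')   # example file name from the recording folder
np.save('eye0_timestamps.npy', ts + offset)
```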
Please also send this timestamps file to us so I can double check that my assumptions are indeed correct regarding your failure case.
Do you have an example of integrating Pupil with the Processing environment (https://processing.org/)?
@user-3f0708 there are no examples of Pupil + Processing that I am aware of - maybe someone from the community uses Processing. However, if you want to stream data in realtime to Processing (Java) then you would need to install zeromq (zmq) and msgpack in your Processing (Java) client/app.
Hello. I am working on a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?
Hello, we have the Pupil Labs 200Hz binocular. We are doing a calibration and the image quality of the eye cameras is worse than in the videos on YouTube. We configured the focus, the distance of the cameras, contrast and gain, but the image still looks the same. Any ideas?
thank you!
@user-b6398e please note that 200hz eye cameras can not be focused - they are designed with a set focus for the depths that are used by the hardware. Manually trying to focus the 200hz binocular cameras will damage the cameras.
What kind of eye image do you see?
If you can share an example, we can provide you with concrete feedback
Thank you wrp,
We were talking with papr yesterday and we improved some details, but the image is still unfocused
We sent you videos via WeTransfer
@user-b6398e thanks for the videos - this helps to clarify.
Can you please go to Pupil Capture's world window and, in the General menu, select Restart with default settings
Pupil detection in this example looks to have low confidence
(I'm also seeing that eye1 is running at 30fps - is this intentional?)
I told @user-b6398e yesterday that eye1 seemed a bit too dark and that they should try increasing the camera's gain
Thanks @papr for following up on this. From the videos @user-b6398e sent it also looks like some other params were manipulated for the pupil detector that may not have been so beneficial to pupil detection
We tried the gain parameter but realised that was not the key. We tried again today with more light in the room but the image quality is still bad. We have recorded another video to clarify the problem with the eye cameras
The calibration is worse than the first calibration that we sent you via WeTransfer
The Player software is crashing, which it does regularly with any new set of data I have. But now it is doing it on files that I was able to view previously. I even tried opening them in older versions. Is there something I can do so I can actually read the error message before it closes on me?
@papr Hello, I am using the filter_messages program from the Matlab helpers repository and when I request norm_pos, I am getting back an x and a y value for each eye (pupil 0 and pupil 1). However, is there any way to access gaze data as well? Essentially, is there a way to access gaze data in real-time with Matlab?
Hello, I am working on a project with the pupil labs eye trackers and HTC Vive. How does pressing C on the keyboard work to calibrate the trackers? My group members and I are very confused and would love some help. Thank you!
Is there any documentation, or research papers, that go into detail about the difference between detection & mapping in 2d vs 3d? The pupil-labs website's documentation just has a small paragraph about 3d creating a 3d model of the eye to help with gaze tracking, but doesn't say anything at all about what 2d is doing differently.
perhaps this?: http://delivery.acm.org/10.1145/3210000/3204525/a9-dierkes.pdf?ip=130.64.25.61&id=3204525&acc=OPEN&key=AA86BE8B6928DDC7%2E4579F4D1C4C67060%2E4D4702B0C3E38B35%2E6D218144511F3437&acm=1530065653_2b14257fc22573ae1a500023eea81d98
@user-a8c41c When you start Pupil Capture, the world camera feed will pop up. On the left side of the feed you will see the buttons C, T, R; pressing c on your keyboard is the same as pressing C in the world feed, and that will start the calibration sequence
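Side note for the non-HMD case described above: besides the hotkey, calibration can also be started programmatically through Pupil Remote (default port 50020). A small sketch:

```python
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')

remote.send_string('C')      # same effect as pressing C in the world window
print(remote.recv_string())  # Capture acknowledges the command

# remote.send_string('c')    # would stop the calibration again
```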
@user-d3a1b6 That does a good job of explaining the 3d mapping, and lines up with what I thought it was doing; I feel like I have a good grasp on the 3d mapping mode. I don't really have any intuition about how the 2d mapping mode works. Appreciate the link though.
@user-a8c41c @user-cd9cff the HMD case is a special case. Calibration is started through Unity, since Unity is responsible for providing the scene
@user-cd9cff yes, it is possible to subscribe to gaze. You will have to subscribe to the gaze topic in the same way as you subscribe to the pupil data. Uncomment this line https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L58
@user-2686f2 the paper that @user-d3a1b6 linked has not been implemented. But it is based on Swirski's 3d model, which we use.
@papr That doesn't work, the socket comes back empty
Also, that line references socket instead of sub_socket
You are right, that is a mistake. It should be sub_socket
Even if it is sub_socket, it still returns an empty socket with no data
@user-2686f2 we have not published work on the 3d model fitter. it is based on https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
we have added a lot of additional logic for gaze mapping and co. We plan to publish a whitepaper on this pipeline.
@user-b6398e It looks to me like pupil detection is not working right. The 3d model is breaking down and this leads to broken calibrations... Try adjusting the camera to film the eye from a bit more downward. Also please reset the Pupil Capture app. Use the algorithm view to confirm that the pupil min and max size are set around the actual pupil value observed.
@user-41711d blink detection can be added to the HoloLens example. Since it is part of Pupil Capture and Service, it is just a question of adding this to the HoloLens relay stream. Where do you need blink detection? Do you need it in realtime?
@mpk Have you published anything regarding the 2d mapping?
@user-2686f2 yes we have a white paper that outlines the 2d pipeline. I generally recommend using 3d mode though.
@papr Hello. I am working on a project with Pupil Labs and the HoloLens. I tried to use the blink detection and noticed that it isn't available for the HoloLens. I want to ask why this doesn't work and if it will be added in the future?
@user-54f521 do you need this in realtime?
yes. I would just need to be able to subscribe to it like the gaze. But instead I just need the info on whether the user has blinked (for example, to use it as some form of input)
Are you using Capture or Service? And which version?
Technically this should be possible. Please be aware that blinks are not really a good type of user input since it is very difficult to differentiate between voluntary and involuntary blinks...
Hi guys, is it actually possible to get valid/good calibration results when connecting the eye tracker via Pupil Mobile and WiFi to Pupil Capture? It seems like there is quite a delay.
@user-c351d6 the delay does not play a role as long as the frames have correct timestamps. Make sure that you are using Time Sync!
@user-c351d6 the delay can affect the gaze mapping. To get good results we recommend using fast WiFi and a network that is not used by a lot of other clients.
@mpk which effects are you talking about exactly?
@papr if one eye is delayed, the mapping will be monocular, as the binocular pairs cannot be paired in realtime.
Ah yes, I was not aware of this implication
@mpk Thanks, I will do some tests. Not sure whether we can host our own WiFi, it's in a hospital. However, we need a fallback solution that we can use to record more than 10 minutes in case you are not able to fix the compression problem.
@user-c351d6 understood. Making your own WiFi comes down to getting a WiFi router for 80 EUR/USD. Our Android dev is working on the c930e issue.
@papr we are using the Unity plugin and Capture. Yes, we are aware that it is difficult, but we would like to experiment with using it.
@mpk Could you link a pointer to the 2d pipeline? At the moment I'm getting much better results using the 2d mapping and would like to understand the difference between the two modes.
@user-2686f2 sure: https://arxiv.org/abs/1405.0006
@user-2686f2 for VR and AR, 2d is the recommended config. For mobile eye tracking I recommend 3d.
@mpk Thanks very much. My use-case is looking down at a piece of paper while drawing on the paper. Any recommendation of 2d vs 3d for that?
@user-2686f2 if you don't have a lot of head movements, you might get better results in 2d!
@mpk Ok, great. You've been a big help, thanks!
Hello, I am a college student studying pupil tracking. Is there a document that explains the entire process or the principles of pupil tracking? I could not find one in the pupil docs. I would also like to hear about the GitHub code. Thank you.
@mpk It's more about the WiFi policy of the hospital :/ For some reason you are usually not allowed to host a WiFi there.
Thank you so much for the help! Could anyone give me a bit of a run through on how to actually initiate the calibration process in Unity? My teammates and I would greatly appreciate it.
@papr Do you know how to make the socket return data when querying gaze data? The socket that is returned to me is empty. (This is an extension of my question above)
I am not sure what goes wrong. I will have to test this on Friday when I have access to Matlab
Morning, anybody active in chat here?
Hi @user-24e31b - there are people here - welcome 😺 👋
Afternoon @wrp, I'm in New Zealand so normally the rest of the world is sleeping or out and busy when I'm here at work! 😦
I work for the Aviation Security Service and we are interested in purchasing the Pupil Labs Glasses for some in-house research!
Though I could ask the developers/manufacturers thought I would check in here first
Nice to meet you and welcome to the community! You are actually speaking with one of the co-founders of Pupil Labs 😸
Hahaha, slightly biased then, but you will certainly be able to help me out 😄
Seems like a really good community you have built. Love the open source + docs stuff too!
I will certainly defer to the community for questions that I feel are better answered by someone with less bias.
And will be happy to answer any questions that you have
I have been researching the last couple of days (short I know, but the budget is coming up!) and just wanted to check that Pupil Labs is capable of what we are looking for, research wise.
I have put a business case forward to purchase the high-speed, 200Hz binocular headset for two main usages and research areas.
1) Have users navigate through our screening points (and the rest of the airport) 2) Use an online training program which displays x-ray images
I can see from the online videos and testimonials that Pupil Labs is capable of #1, but I'm a little confused about whether it could give us valuable data on #2
@user-24e31b Responses to your points and notes: 1 - Yes, Pupil is designed for real-world use cases as you noted (e.g. navigating through spaces). For this research application, I would encourage you to take a look at Pupil Mobile so that you can capture raw data on an Android device instead of a laptop/desktop.
1.1. Pupil Mobile - enables you to connect the Pupil eye tracking headset to Android devices (our Pupil Mobile bundle uses the Moto Z2 Play device). With the device we spec/resell you can record up to 4 hours of video locally on the phone, or stream video and sensor data over WiFi. The bundle comes with: Moto Z2 Play (black), hot-swappable Moto power pack, 64gb SD card, USBC-USBC cable, and is pre-loaded with the Pupil Mobile app. (Note: Pupil headsets with 3d world camera are not compatible with Pupil Mobile).
2.1. Surface Tracking - In order to determine the relationship of an object in the scene and the participant, you can add fiducial markers in your scene and use the surface tracking plugin in Pupil Capture. This plugin will enable you to automatically detect markers, define surfaces, and obtain gaze positions relative to these surfaces in the scene. Read more about surface tracking in the docs: https://docs.pupil-labs.com/#surface-tracking
You might also want to take a quick look at the citation list that we maintain here: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing
Great! For using Pupil Mobile, does the app work with other phones that use the USB-C connection? (I saw certain phones listed in the docs, any updates on this?)
There is actually some recent work in the aviation domain (albeit in control tower/control room - but also uses surface tracking) listed in the citation list
I did see the surface tracking (with the fiducial markers) and it looks good. Now, I'm no wizard at Python, though I can navigate around a PC; how difficult is the initial setup?
Other Android devices with USB-C may work, but we have only tested the Moto Z2 Play, OnePlus 3/3T/5/5T/6, and Nexus 6p in-house. Other members in the community have reported that some Samsung devices (S8?) work, but we have not verified this on our end. One of the main reasons we recommend the Moto Z2 is the expansion port - what Moto calls "mods" - that you can use to connect and hot swap battery packs. Additionally this device has an external SD card.
Re coding and Pupil - You do not need to know how to write code to use Pupil.
If you want to extend Pupil (develop on top of or implement something that is not included in Pupil software) then you will need to write code.
Cool! Thanks for the info. Is the eye camera powered by USB from the laptop (in the non-mobile package) then? And can the Moto Z2 be bought separately at a later date?
@user-24e31b The Pupil headset is powered by USB from laptop, desktop, or android device. You can purchase the Pupil Mobile bundle at a later date if desired and you will just plug the headset into the android device via USBC-USBC cable (supplied in the bundle) - no extra hardware required.
Sounds great. I'm very excited to start getting involved. 👀
What is the main benefit and difference between the binocular and single-eye tracker glasses? Is it an accuracy aspect, or capture speed?
@user-24e31b Some notes: More data - Binocular eye tracking provides more observations of eye movements and observations of both eyes - this "redundancy" of observations is especially beneficial at extreme eye angles, where one eye may be extremely proximal (looking towards the nose and difficult for cameras to detect) and the other eye extremely distal (and easier to detect the pupil).
3D gaze coordinate and parallax compensation - With binocular data, Pupil Capture can estimate a gaze coordinate as a 3d point in space using binocular vergence, and can compensate for parallax error. With a monocular system calibration is accurate for the depth at which you calibrated; this is sufficient for some use cases like screen based work.
Binocular data for classification - We can leverage binocular data for classification of blinks, fixations, and more.
There are other benefits - but this is the short list (maybe other members of the community can chime in when they get online 😄 )
Copy that. Thanks! I'm still getting to grips with the concept and the tech behind it. Forgive my ignorant questions =) When the glasses are recording footage, is the video file stored separately from the eye data and then combined in Pupil Player?
@user-24e31b we welcome any and all questions here 😸
Regarding data format - https://docs.pupil-labs.com/#data-format
Short answer is that eye video(s) and world video are saved as videos. Other data is stored in pupil_data
file and the data can be visualized, analyzed, and exported with Pupil Player
You can also download Pupil Player - https://pupil-labs.com/software - and download a sample dataset (like this one: https://drive.google.com/file/d/0Byap58sXjMVfRzhzZFdZTkZuSDA/view?usp=sharing) and take a look
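If you ever want to poke at the data outside of Player: the videos should open in most media players, and pupil_data is a msgpack file in recent versions (older recordings were pickled; the authoritative loader is file_methods.load_object in the Pupil source). A rough, untested sketch with an example path:

```python
import msgpack

# path is just an example; point it at one of your recording folders
with open('recordings/2018_06_23/001/pupil_data', 'rb') as f:
    data = msgpack.unpack(f, raw=False)

print(data.keys())                          # typically pupil_positions, gaze_positions, notifications
print(len(data.get('gaze_positions', [])))  # number of gaze datums in the recording
```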
Will check out those links, on mobile ATM, just wanted clarification on the format used... Alrighty, well that's all I can think of for now. Will submit the business case; really love the look of your hardware and service.
Hopefully I can get back to you with an order confirmation. =)
Tomorrow!
@user-24e31b Thanks for the feedback! We look forward to hearing from you 😸
@wrp Hello! When my teammates and I entered the hierarchy for the Calibration scene in Unity (we downloaded your online plugins), we will press the play button and see nothing but an endless black grid space and blank loading screen. Therefore, we assume the calibration has not taken place. We are able to press 'C' and to connect to Pupil Service, however we are not able to see anything through the headset. We believe this is due to an error message we are getting from Unity.exe (Unity was running smoothly with other programs we ran). The issue is we are completely new to eye tracking technology and do not know how to fix this even after reading through your Pupil Docs, do you have any troubleshooting tips?
hi everyone
could someone from the pupil team explain how exactly does the hololens integration work?
AFAIK the hololens USB port is not an OTG port, so I don't see how pupil connects to the hololens
ah nvm, finally found this - it connects to a separate PC: https://github.com/pupil-labs/hmd-eyes
@user-a8c41c I had the same problem when I used the 3D tracking file, but the problem went away when I used the 2D Tracking file
Hi all, I research people with intermittent eye turns. I have the headsets with binocular eye cameras, and I'm wondering if there is any way to get access to the data that goes into forming the gaze prediction? When the eye turns, the gaze dot drifts off in the direction of the turn, but I would love to have more access to where each eye is pointing separately.
@user-464538 gaze data is based on pupil data. You can access the per-eye pupil data in the same way that you access the gaze data
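A sketch of what that looks like in practice: subscribe to the raw pupil topics and split the stream by eye id, which gives you each eye's direction separately (untested; field names follow the docs of this era, so check your version):

```python
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')
req.send_string('SUB_PORT')

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(req.recv_string()))
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil')  # prefix match catches pupil.0 and pupil.1

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    eye_id = datum['id']  # 0 and 1 identify the two eye cameras
    # norm_pos is the pupil position in the eye image; theta/phi only exist in 3d mode
    print(eye_id, datum['norm_pos'], datum.get('theta'), datum.get('phi'))
```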
When I put files into Player that used to work fine, it errors, saying something like it can't make the files because they already exist. I tried older versions of the software, but that didn't help. Is there anything I can do? My paper is due in a couple of days and half my data isn't viewable.
Can you post the exact error message?
No. I don't really know the correct term for this, but the black box showing the messages closes instantly
I only got that bit from causing the error repeatedly until I could catch a glimpse
Which os do you use?
Windows
Please upload the player.log file in the pupil_player_settings folder
I don't see a folder named that. It should be in this folder, right? pupil_player_windows_x64_v1.7-42-7ce62c8
No
There is a separate folder in your user folder
Oh ok I see it
Just search for player.log and you should find it
It had a different error than I saw I guess
mmh, could you please remove all spaces from all folder names?
just replace them with _
Does this include the Pupil Cam files?
what do you mean?
These are pupil mobile files. these need to stay as they are
I mean the folder names
ok that's what I thought
specifically 2018-03-19- do not change
Ok now it is getting the error I saw before
well actually
ignore that one...
that one
ok
not sure why it runs through the upgrade process again
It seems to only be doing it on a few of the files since this just happened with half of this folder
nevermind, it's doing it with all of it now
it just happened when I was halfway through analyzing the files
ok, please put the following files in a sub folder named "backup" for each recording that has that issue:
- world
- eye0
- eye1
- audio
can that be a folder in the folder with the rest of the files, like the offline data folder is?
did you change or replace the info.csv files?
similar to the offline_data folder yes
There's a possibility I did back in March but not recently
such that each recording has its own backup folder including the files above
start with one recording and lets see if it works
so I should move them, not copy?
you need to move, exactly
such that they are not in their original place anymore
that fixed it
ok, I will make an issue to handle this situation in the next release
Great, thanks so much for the help. I was getting a bit panicked about the paper
I can imagine 😄
It was data I got while I was abroad, so I really couldn't redo it!
Issue for reference: https://github.com/pupil-labs/pupil/issues/1221
@papr I downloaded all of the folders for pupil in Python following the instructions in the pupil docs for Windows dependencies, but I can't seem to open any of the Python scripts even though I have Python installed
@user-cd9cff Sorry, could you recap what you are trying to accomplish by installing the source?
Good afternoon. What can I open these files with: world_timestamps.npy?
Python and numpy
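For example (the path is just a placeholder):

```python
import numpy as np

ts = np.load('recordings/2018_06_23/000/world_timestamps.npy')
print(ts.shape, ts[0], ts[-1])  # one timestamp per world video frame
```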
Thanks)
@papr I'm sorry about the previous question; Python is working fine with Pupil on my computer now. However, I would prefer to use Matlab because the stimulus is coded in Matlab. Were you able to figure out the problem with querying gaze data using Matlab?
@user-cd9cff hey, you do not have to run from source to use the Matlab scripts. You can use the bundled application. Unfortunately I was not able to test the gaze subscription yet :-/