Hi, I'm trying to use the Pupil cam for HoloLens, but the USB device is not recognized by the computer. I think this problem is related to the USB driver. I read the guide, and I know it should work once I execute Pupil Capture, but it doesn't.. Can anyone help me solve this problem?
I think I found the problem.. some cable was cut. thanks
@here We are pleased to announce the latest release of Pupil software v1.10! We highly recommend downloading the latest application bundles: https://github.com/pupil-labs/pupil/releases/tag/v1.10
Hey. I am trying to use the natural features calibration method. The computer on which I use the eye tracking system is separate from the one which runs our psychophysics task. On the other screen I show 9 red dots, one at a time, as stimuli, whose coordinates we know in our screen space. (The 9-dot pattern shown in this link is what I am trying to use: http://www.okazolab.com/faq-blog/how-to-calibrate-an-eye-tracker-within-eventide) The subject is asked to fixate on those dots and I click on those 9 dots in the world camera view. How are pupil positions and gaze positions normalized in this case? I am a little confused about mapping the coordinate system of the eye tracker onto the coordinate system of the screen used for our psychophysics task. Could you help me out with that? And do you recommend a better method of calibration, if not this one? Also, where is the readable calibration file saved for each recording? All the Pupil Capture calibration file gives is the angular accuracy and precision. Where can I get the gain and offset values?
Hey @user-41c874 Please do not repost your questions. A small reminder is usually ok. To your question: Pupil gaze is calibrated to the scene camera's coordinate system. Use the Surface Tracker plugin to get gaze in reference to an area of interest. Are you using the offline calibration feature of Player, or are we talking about online calibrations with Capture?
@user-d9bb5a I still need to review your recording. This might be a bug in Player. If it is, we will create an issue on GitHub where its solution can be tracked.
@papr Thanks for your reply. I tried the online calibration with Capture using the natural features calibration method. Is the data in pupil_positions.csv also calibrated to the world camera's coordinate system at the time of calibration? (Or is that the case only for the data in gaze_positions.csv?) I will try to use the Surface Tracker plugin now and get back to you about that soon. Thanks for the suggestion.
@user-41c874 Pupil data (pupil_positions.csv) is relative to the eye camera's coordinate system and therefore not calibrated. Only gaze (gaze_positions.csv) and data based on gaze (e.g. fixations/gaze on surfaces) are calibrated
Please be aware that the online natural feature calibration can result in low accuracy since it depends on a second person clicking the correct positions in the Capture window while the subject might be moving her/his head
In your case, I recommend using the offline calibration feature or one of the online calibration methods that depend on automatic marker detection (e.g. screen marker/manual marker/single marker calibration)
So, pupil data is immune to movement of the head, but gaze data isn't. Right?
What do you mean by immune?
I need to combine the normalised x and y positions of gaze/pupil with the pixel coordinates of the stimulus I show on a screen. So, if a particular x-y position of the pupil corresponds to a particular x-y position on the screen used to show the stimulus, and the subject moves his head in between, then the same x-y pupil positions will still correspond to the same coordinates on my screen. But since the gaze x-y position depends on the world camera coordinates, I presume the gaze x-y positions will change if the head position changes, even though the subject is still looking at the same place. (Or is this not the case?)
No, this is not the case. We have 3 independent coordinate systems here: eye camera image space, world camera image space, and the monitor pixel grid. We need to learn their relationships before we can translate locations between them. A calibration learns the mapping function between the eye and world cameras. This assumes that the cameras' positions relative to each other are fixed.
The relation between the world camera and the monitor is not fixed since the subject is allowed to move his/her head. Therefore we need to reestimate the current relationship for each recorded world frame. To do this we need the square/surface markers.
Afterwards we can translate pupil positions (eye image coordinates) to gaze (world image coordinates) to mapped surface gaze (monitor coordinates)
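To make the last step concrete, here is a rough Python sketch (not part of Pupil itself) of turning surface-normalized gaze into monitor pixel coordinates. It assumes the tracked surface covers the monitor exactly; the resolution values are placeholders.
```python
# Rough sketch: surface-normalized gaze -> monitor pixel coordinates.
# Assumes the tracked surface covers the monitor exactly (placeholder resolution).
SCREEN_W, SCREEN_H = 1920, 1080  # hypothetical monitor resolution

def surface_norm_to_pixels(x_norm, y_norm):
    # Surface coordinates originate at the lower-left corner, while screen
    # pixel coordinates usually originate at the top-left, hence the y flip.
    return x_norm * SCREEN_W, (1.0 - y_norm) * SCREEN_H

print(surface_norm_to_pixels(0.5, 0.5))  # -> (960.0, 540.0), the screen center
```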
Ah! That really helps clear up the confusion in my head! Thanks a lot. I will try to use surface tracking with these markers (online and offline) and then get back to you. Thanks!!
@papr I sent a message to your email a week ago.
@user-d9bb5a I was not able to reproduce the issue (no exported heatmaps) on mac. I will try to reproduce on Windows. Attached is the exported heatmap.
thank
@user-d9bb5a I was also not able to reproduce the issue on our Windows 10 machine. Please make sure that you are using v1.9. The export included in the recording that you sent to us indicates that you were using v1.8
Hi, when I use a mobile phone, I can record with audio. However, when I use a MacBook Pro, I can't record audio. I opened the audio plugin.
@user-adc157 hi, which version of Capture and macos do you use?
hello... can anyone share a pupil camera recording with me?
@user-6fdb19 do you need a single eye video recording or a complete recording that you open in Player?
single eye recording!
can you send it to my email please? [email removed]
Sure
preferably a video that is at least 3 minutes long
thanks mate!! π
Not sure if our public example is that long. I will check
im going to buy the pupil-labs for my phd... i want to get things started already
@user-6fdb19 In this case I would recommend downloading Player from the github release page and playing around with this dataset: https://drive.google.com/open?id=1OugotQQHsrO42S0CXwvGAa0HDvZ_WChG
thanks mate!
With Pupil Player, can I export the eye camera video to .avi?
You can export the eye and world videos to mp4 using the World Video Exporter plugin (loaded by default) and the Eye Video Exporter plugin (activate in the plugin manager).
Quick question, regarding pupil player. When loading videos that were recorded with opencv, I get a warning about the pts not being evenly spaced despite having a timestamp file. Is the evenly spaced pts assumption used solely to retrieve the images on playback or is it going to ignore the actual timestamps and screw up the gaze mapping?
Also, have a look at our user docs on our website
@user-c1923d The evenly spaced pts are required for well-behaved playback. The gaze mapping itself is not affected by this. Be aware that the visualization might be incorrectly shown.
Ok. So exported gaze positions later on should map correctly to the timestamps file
yes
Thanks!
thanks for the email ppr π
Hi, I had v1.8; today I downloaded the new version.
Just wondering if there was a place where the pupil dilation data would be stored during recording and how I could access that data.
Thanks in advance π
Do I have to have a certain configuration for that information to be recorded?
@user-4878ad do you need to access the information during the recording or after the recording?
@papr after the recording
@user-4878ad open the recording, export it with the raw data exporter, and check the pupil_positions.csv
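Once the export exists, a minimal sketch for reading the pupil size data could look like this (pandas assumed; the path is a placeholder and column names can differ slightly between versions):
```python
import pandas as pd

# Placeholder path: <recording>/exports/000/pupil_positions.csv
df = pd.read_csv("000/exports/000/pupil_positions.csv")
good = df[df["confidence"] > 0.6]                 # drop low-confidence samples
print(good[["timestamp", "diameter"]].head())     # diameter is in image pixels
# If the 3d detector was active, a "diameter_3d" column (in mm) should also exist.
```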
This is what I have from the recording... Sorry I'm not that great with computer stuff... not sure why they assigned this task to me π
@papr
@user-4878ad did you download Pupil Player?
yes!
Great, open it, and drag the 000 folder on top of the gray window
After opening the recording, hit e. Check the folder in the file explorer window. It should have an export folder containing the exported pupil data
I pressed 'e'
there is an export file!! π
sorry, I am working between computers.... the new laptop has pupil labs but no MS office so I am transferring the file over to the other computer
I see in the docs that the recommended threshold for pupil confidence is 0.6. Similarly, is there any recommended threshold for the gaze estimation confidence?
@user-c1923d gaze confidence is inferred from pupil confidence. Take a look at the accuracy visualizer plugin if you want to judge the gaze estimation quality.
@papr this is post hoc for evaluating the gaze quality at the end of the experiment. So I'm just looking for a threshold to consider samples as data loss or for actual measurements
Hi, when I use a mobile phone, I can record with audio. However, when I use a MacBook Pro, I can't record audio. I opened the audio plugin. I selected voice and sound. I have Capture and Player v1.8.
@user-adc157 I can try to reproduce the issue on Monday.
@wrp and @papr Hey. I am having similar issues as @user-7bc627 on 02.10.2018. Pupil Cam ID1 is not appearing in the "select to activate" list; instead it says "unknown". It shows "Capture initialization failed" for eye1 and shows ghost capture. The world camera and Pupil Cam ID0 work fine. I have tried reinstalling drivers and running as an administrator, and I am using the latest version of Pupil Capture (1.10). Any suggestions on what I should try apart from these troubleshooting options?
@user-41c874 Did Pupil Cam ID1 work before, or has it never worked for you?
It worked fine before and then suddenly stopped working .
@user-41c874 Please contact info@pupil-labs.com with the information above as well as your order id. My colleague will take over support from there.
Okay. Thanks.
@here starting on January 11 we will be making changes to how this public server is moderated. In order to post messages you will need to meet the following criteria: 1. That you have a verified email address for your Discord account. 2. That you have been a member of the server for more than 5 minutes.
These changes follow Discord's recommendations for public servers. The primary aim is to reduce the potential for spam/bots from posting off-topic/explicit content.
If, for any reason, you disagree with these changes, please let us know and we can discuss.
hello everyone
does anyone have a link to the pupil detection algorithm used by Pupil Labs?
The original pupil paper has a description, but it seems they also use some tracking now
You can always check the source code
Alternatively, I wrote a brief description in the PuReST paper (don't know if I'm allowed to link that here)
do you mean "Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction
Yes, there is a PETMEI paper and a technical report on arxiv
The other paper I mentioned is "PuReST: robust pupil tracking for real-time pervasive eye tracking"
thanks mate!
@user-c1923d I did not realise that it was you. Welcome to the channel. I think we met personally at ETRA last year. Yes, the "minimum data confidence threshold" (by default 0.6, adjustable in the Player general settings) is the threshold that you are looking for. There is a second, higher threshold that we use to filter pupil data when calibrating.
Hi, I'm trying to run Pupil Capture from source (Windows 10) but I have 2 problems: From the command prompt I get an error in the build.py process for the pupil_detectors (I also get warnings about /W3 being overridden by /w and the c++11 option being ignored). If I try to start main.py I get an error from git (which works fine in the command prompt...). Any help? Thanks
@user-bd800a Hi,
- Regarding the first issue: Are you building the pupil detectors directly by running python setup.py build? This often gives a more detailed error message.
- Regarding the second issue: Did you download the source as zip from github? It does not include the necessary version information that we require. Please use git clone to download the source.
Generally, we recommend running the bundle (especially on Windows). Nearly everything can be done via a custom plugin or the Pupil Remote network api. This avoids the tedious source dependency installation.
Ok I will try the setup.py, I downloaded it using github desktop, so that should have included it, I guess
@user-bd800a Yes, I agree that should be equivalent to git clone. Could you pm me the git error message that you get by executing main.py?
@papr thanks! I'm going with two thresholds based on your docs: >0.6 ("useful data") and >0 (all pupils for which at least something is known)
@user-c1923d The only useful information in pupil data with confidence==0. is the timestamp, I guess. Everything else is default values IIRC
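For reference, a small sketch of applying those two thresholds to an exported file (pandas assumed; path and column names are placeholders):
```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")   # placeholder path
useful = pupil[pupil["confidence"] > 0.6]   # "useful data" threshold
known = pupil[pupil["confidence"] > 0.0]    # anything beyond default values
print(f"data loss at the 0.6 threshold: {1 - len(useful) / len(pupil):.1%}")
```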
@papr makes sense. Anyway, thanks for sponsoring ETRA again! Hopefully I'll see you guys around this year as well
@user-c1923d Yes, we are planning on coming, too! π
hello everyone! I have a question about the Offline Surface Tracker... neither the docs nor the Player interface explains well what the X and Y size values mean, apart from the fact that they are very important for the heatmap visualization
@user-c8c8d0 It is the number of bins (resolution) which will be used for the heatmap generation. E.g. if your surface has a size of 30x20 cm and you set the size to width=15, height=10, the resulting heatmap bins will be of size 2cm x 2cm
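As a generic illustration of how such a binned heatmap can be computed (this is a numpy sketch, not Pupil's actual implementation):
```python
import numpy as np

# Hypothetical surface-normalized gaze samples (0..1 on the surface)
x_norm = np.random.rand(1000)
y_norm = np.random.rand(1000)

# For a 30x20 cm surface, width=15 / height=10 bins yield 2cm x 2cm cells
heatmap, _, _ = np.histogram2d(x_norm, y_norm, bins=(15, 10), range=[[0, 1], [0, 1]])
print(heatmap.shape)  # (15, 10)
```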
cool, the heatmap bins! π thank you very much @papr ..it wasnt very clear
@user-c8c8d0 Yeah, our upcoming version will separate surface unit scaling from heatmap bins. This will make it more clear to use.
Oh, ok.. and is there a release date planned already?
No, there are still some implementational issues that need to be resolved first.
π thank you again!
Hello everyone. I'm trying to setup Pupil Capture using the latest Windows 10 bundle. Both eye capture cams are working fine, but I'm getting no world camera feed. If I click calibration I get an error:
world - [ERROR] calibration_routines.screen_marker_calibration: Calibration requiers world capture video input.
If I then go to the Realsense Manager section and select Intel RealSense R200 from the Activate source dropdown I get an error with trace:
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 596, in world
File "launchables\world.py", line 391, in handle_notifications
File "shared_modules\plugin.py", line 340, in add
File "shared_modules\video_capture\realsense_backend.py", line 213, in __init__
File "site-packages\pyrealsense-2.2-py3.6-win-amd64.egg\pyrealsense\core.py", line 27, in __init__
File "site-packages\pyrealsense-2.2-py3.6-win-amd64.egg\pyrealsense\core.py", line 36, in start
File "site-packages\pyrealsense-2.2-py3.6-win-amd64.egg\pyrealsense\utils.py", line 46, in _check_error
pyrealsense.utils.RealsenseError: rs_create_context(api_version:11203) crashed with: IKsControl::KsProperty(...) returned 0x800706be
Any clues on what's going on here?
@user-026f3a please contact info@pupil-labs.com regarding the use of the R200 on Windows.
@user-026f3a you do have a R200 realsense camera as World camera, don't you?
hello again! I've another question about the Offline Surface Tracker...is it possible to export the heatmaps over the world video?
@user-c8c8d0 No, this is not possible, unfortunately.
cause they are edited with opengl?
Correct. To be more precise: They are drawn with opengl. But all world video visualizations need to be rendered into the frame data. Therefore, rendering heatmaps into the world video is not supported.
@papr Thank you!
maybe replacing opengl with opencv can work
@user-c8c8d0 Rendering the heatmaps with opencv is much slower than drawing with opengl. Nonetheless, we could look into adding this feature for the export. What exactly would be your usecase/benefit of having the heatmaps rendered into the world video? The heatmaps are aggregated over the complete recording. They would not change their values.
@papr It would be for pure visualization purposes, like the circle or other visualizations. But I understand that it isn't worth the effort; instead of exporting the video, one could simply reproduce it in the Player.
Hello again! Quick question. Can the world camera be detached from the headset and mounted elsewhere?
@user-41c874 Do you mean if one can mount it to a different headset?
No. Just whether the world camera can be taken off the headset and mounted somewhere else. (We plan on doing behavioural studies with head-fixed monkeys, and it would be simple to mount it on top of the primate chair.)
@Tarana#6043 yes, technically it is possible. But I am not sure what our repair policy is, if something breaks in this case. Please contact info@pupil-labs.com for details.
Thanks!
Hello, I am following the doc about windows dependencies https://docs.pupil-labs.com/#opencv-to-pupil-external . It shows 'opencv\build\x64\vc14\bin\opencv_world320.dll' but OpenCV has been updated; can I use 'opencv_world400.dll' instead? Can I do the same for other dependencies or wheels? Thanks so much!
For example, I am thinking of using Boost 1.65 instead of 1.64 to avoid an 'Unknown compiler' warning. Would this be fine?
Hi @user-c5bbc4, the windows dependency setup is very fragile and we recommend following the instructions as closely as possible. It might be possible to get it running with other versions but there are no guarantees. Actually, it is often enough to run the bundled application instead of running from source. Did you try running the bundle?
Ah, the software works fine for me. I am trying to work with the code
Is there a specific change you have in mind? Do you need access to specific data?
I
Oops. Pushed the wrong button
I am an MRes student trying to improve the performance of pupil detection
but still new. Now I'm stuck with running the code
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\type_traits(1357): note: see reference to class template instantiation 'std::aligned_union<1,_Ty>' being compiled with [ _Ty=singleeyefitter::Detector2DResult ]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\memory(1820): note: see reference to alias template instantiation 'std::aligned_union_t<1,singleeyefitter::Detector2DResult>' being compiled
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\memory(1866): note: see reference to class template instantiation 'std::_Ref_count_obj<_Ty>' being compiled with [ _Ty=singleeyefitter::Detector2DResult ]
c:\work\pupil\pupil_src\shared_modules\pupil_detectors\detect_2d.hpp(76): note: see reference to function template instantiation 'std::shared_ptr<singleeyefitter::Detector2DResult> std::make_shared<singleeyefitter::Detector2DResult,>(void)' being compiled
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit status 2
this is what I got after trying to run 'python setup.py build' with pupil_detectors
any ideas?
To change the pupil detection algorithm you indeed need to change the code. Unfortunately, that output does not contain an actual error message/failure reason as far as I can see.
oh, you mean I can ignore this message?
No, there is definitively something going wrong. π
π yes. I believe so
cl : Command line warning D9025 : overriding '/W3' with '/w'
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
detector_2d.cpp
Unknown compiler version - please run the configure tests and report the results
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\type_traits(1271): error C2338: You've instantiated std::aligned_storage<Len, Align> with an extended alignment (in other words, Align > alignof(max_align_t)). Before VS 2017 15.8, the member type would non-conformingly have an alignment of only alignof(max_align_t). VS 2017 15.8 was fixed to handle this correctly, but the fix inherently changes layout and breaks binary compatibility (only for uses of aligned_storage with extended alignments). Please define either (1) _ENABLE_EXTENDED_ALIGNED_STORAGE to acknowledge that you understand this message and that you actually want a type with an extended alignment, or (2) _DISABLE_EXTENDED_ALIGNED_STORAGE to silence this message and get the old non-conformant behavior.
does this make any sense? The full output exceeds Discord's length limit, so it's a bit tricky to send it all
Please define [...] (2) _DISABLE_EXTENDED_ALIGNED_STORAGE to silence this message and get the old non-conformant behavior.
^ This looks like the solution
yes. I believe the other way to deal with it is to use Boost 1.65, which is why I am asking whether other versions are acceptable
A bit embarrassing to say, but I do not know how to pass this _DISABLE_EXTENDED_ALIGNED_STORAGE define to the compiler
Oh, I did not notice this issue. Thanks a lot!!
Hello, 'Player' is working now, thanks very much! But 'capture' and 'service' run into the same problem
C:\work\pupil\pupil_src>python main.py service
service - [ERROR] launchables.service: Process Service crashed with trace:
Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\service.py", line 98, in service
    import pupil_detectors
  File "C:\work\pupil\pupil_src\shared_modules\pupil_detectors\__init__.py", line 21, in <module>
    from .detector_3d import Detector_3D
  File "detector_3d.pyx", line 35, in init pupil_detectors.detector_3d
ModuleNotFoundError: No module named 'visualizer_3d'
any ideas about this 'visualizer_3d' module?
My only guess is that there is still something wrong with the detector_3d compilation
It might be, but there is no error message now after running setup.py under pupil_detectors
C:\work\pupil\pupil_src\shared_modules\pupil_detectors>python setup.py build
running build
running build_ext
That's it
The same goes for optimization_calibration. I would assume they are working fine?
I copied the 'visualizer_3d' file from '/singleeyefitter' to '/pupil_detectors'
and it is working now
Hey fellow developers, I've followed the instructions in the developer docs as closely as possible to try to get the Capture module running from source
Unfortunately I've run into an error that has been posted on the pupil labs github help page but has yet to be addressed
any help will be greatly appreciated
This error appeared when I attempted to build the pupil_detector module through 'python setup.py build'
Hi, following on from @user-c5bbc4's issue, how can _DISABLE_EXTENDED_ALIGNED_STORAGE be passed from detector_3d.pyx? Does it fix the build issue? I tried with another version of Boost but it was unsuccessful.
Hi! I have a problem with the second eye camera when using Pupil Mobile. The eye video window (on my laptop) is constantly switching between "not available. Running in ghost mode" and the settings view without a video image. It also does not load in the app. The world camera and one eye camera are ok. I use a binocular (200hz) eye tracker and a Blackview BV9500 smartphone. I understand that the problem could be my smartphone, but maybe somebody has faced this kind of problem with other smartphones too?
Hi all, I have some questions with regards to the Pupil LSL Relay Plugin, is anyone using it? I have tried installing it following the steps described on the github page (https://github.com/labstreaminglayer/App-PupilLabs) however, when I open pupil capture I cannot find the plugin among the other plugins. Any suggestions?
@here We are very excited to announce our newest product - Pupil Invisible.
The first eye tracking device that truly looks and feels like a normal pair of glasses. Machine learning powered. No setup, no adjustments, no calibration.
Learn more and sign up for the closed beta program at https://pupil-labs.com/blog/2019-01/pupil-invisible-beta-launch/
@user-bd800a Hi, this link worked for me to pass the flag: https://github.com/pupil-labs/pupil/issues/1331#issuecomment-430418074. The warning about /W3 being overridden with /w, and the following one which I cannot recall, can be ignored I think.
@user-c5bbc4 but to have the .cpp files you need to build them, which fails. Do you have to let the build fail once to generate the .cpp files, then edit them and rerun the .bat files?
I did not change the boost settings. I was thinking of using another version but I solved it didn't
oops. unfinished. I mean I solved the problem before I reached that far
do you run it with the .bat or through the main?
I think I got the .cpp files after trying to run 'python setup.py build'
Hi there, we're currently experiencing an issue with Pupil Mobile where the mobile app stops recording, and enters gost mode on the desktop application, before coming back and continuing to stream and record to our computer but the mobile recording does not continue. We use 2 pupil cameras, one with an eyetracker and one without, we find that the issue only occurs on the camera with the eyetracker. Do you know what might be causing this issue, and how we can alleviate it? I can also obtain the capture.log file if that's of any assistance?
@user-2968b9 My guess is that the phone's resources are being strained. Did you try recording without streaming?
We have not yet; we'll give that a shot. In practice, though, we would not be able to record solely on the phone/laptop. Is there a way of decreasing the strain that the app puts on the phone, or would it perhaps be a matter of using a more powerful phone? We currently use an LG Nexus (I believe it's the 4). Also, is there anywhere I can find the specs required to run Pupil Mobile, so I could compare them against the phone's specs?
@wrp Great! Can't wait for the HMD version π
Has anyone published any papers using Pupil Labs eye tracking with additional stimuli mediated by Lab Streaming Layer (MATLAB)?
Any idea on how to solve this issue: https://github.com/pupil-labs/pupil/issues/1208
Hello π Does anybody have a recommendation for which values to choose for the minimum/maximum duration in the offline fixation detector, in an experiment with human-robot interaction? Human and robot have to solve a task together. Any recommendations based on experience or concrete literature are welcome! Thank you π
@user-4878ad publications/projects using Pupil are collected in this spreadsheet: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing
Hey! I have another question: instead of the dispersion-based algorithm for defining fixations, is it also possible to use a velocity- or area-based algorithm in Pupil?
Hi @user-07d4db there is no velocity based algorithm in the codebase/app bundles, however if you want to implement a plugin for velocity based fixation detector you might want to start by looking at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py
or copying fixation_detector.py as a starting point and then going from there
Thank you! Where do I have to copy "fixation_detector.py"? And did I understand correctly that there is no pre-programmed velocity-based algorithm in Pupil at the moment, but it is possible to create such a plug-in on my own?
@user-07d4db Hey, yes, it is correct that Pupil does not come with a velocity-based fixation detector. We use a dispersion-duration-based algorithm. Yes, you can extend Pupil's functionality with custom plugins. As wrp said, a good place to start is to copy the linked file into the corresponding plugin folders:
- ~/pupil_capture_settings/plugins
for realtime plugins in Capture
- ~/pupil_player_settings/plugins
for offline plugins in Player that have access to the whole recording
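To give an idea of what a file in one of those folders could look like, here is a bare-bones sketch of a custom plugin (the class and parameter names are made up; copying fixation_detector.py remains the recommended starting point):
```python
# Bare-bones sketch of a custom plugin file dropped into the plugins folder.
from plugin import Plugin


class Velocity_Fixation_Detector(Plugin):
    """Placeholder for a velocity-based fixation detector (hypothetical)."""

    def __init__(self, g_pool, velocity_threshold=30.0):
        super().__init__(g_pool)
        self.velocity_threshold = velocity_threshold  # made-up parameter

    def recent_events(self, events):
        # In Capture, events.get("gaze") holds the most recent gaze data.
        # A real detector would compute gaze velocity here and emit fixations.
        gaze = events.get("gaze", [])
        _ = gaze  # placeholder, no actual detection implemented
```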
Thank you very much @papr π Just one further question: How can I measure the transition time of gazes between different areas of interest?
@user-07d4db each gaze point has a timestamp. Just calculate the difference between the last gaze point before exiting a surface and the first gaze point entering a new surface.
Okay! And is there a possibility that Pupil does this calculation for me, something like an automatic setting? Because doing that for every participant of my experiment on my own would take a lot of time.
@user-07d4db no, this is not automatically calculated. But if you export the surface data, you will get csv files whose data would allow you to calculate the metric in question in an automated way
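As a sketch of what that automated calculation could look like with the exported surface csv files (pandas assumed; file and column names follow the v1.x surface export and may differ in other versions):
```python
import pandas as pd

# One csv per surface, e.g. exports/000/surfaces/gaze_positions_on_surface_<name>.csv
a = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_robot.csv")
b = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_task.csv")

# Keep only samples that actually lie on each surface
# (on_srf may be read as bool or string depending on the csv)
a = a[a["on_srf"].astype(str) == "True"]
b = b[b["on_srf"].astype(str) == "True"]

# Transition time = first timestamp on B after the last timestamp on A
last_on_a = a["gaze_timestamp"].max()
first_on_b = b.loc[b["gaze_timestamp"] > last_on_a, "gaze_timestamp"].min()
print("transition time [s]:", first_on_b - last_on_a)
```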
What fps will Pupil Invisible work with? And how much $$?
Hey there, so we've been having some issues with getting Pupil set up on our computer here at the university. It runs on Windows 10 and is proving to be difficult to set up. I've spent the last couple of days troubleshooting, uninstalling and reinstalling drivers, and attempting to get pupil capture to run, all of which have been unsuccessful.
However, I have a MacBookAir that I installed the pupil application and plugged in the headset. This worked wonderfully, which allows us to use mine for the time being.
My concern is: is there any way to completely uninstall everything Pupil-related from the Windows computer, and is there a process that I can follow that will successfully allow me to run Pupil Capture? Any help with this is appreciated, thanks!
@wrp thanks!!
@user-8903eb the best way to reset the windows machine is to use the 'restart with default settings' button in the world window.
@mpk The issue is that Pupil Capture won't launch. Nothing displays even when it is run as an administrator, so I can't reset it to default settings.
Hi There, has anyone discovered an easy way to print the fixation points onto a matching screen? So the results look something like this:
Hey @papr, do you or your team happen to have any solutions regarding this issue: https://github.com/pupil-labs/pupil/issues/1208
@wrp Just saw the pupil invisible announcement! very exciting, is there any information on the tracking technology used? Neither tracking pupil nor glint and no cameras in sight, but with Inertia Sensor... very mysterious - but very cool! Keep it up π
@user-54bbd5 no, unfortunately not.
Hi all, is there a way to get playback speed back up to 1.0x? (Or render out the video to enable smooth playback?) I am recording via the mobile interface if that makes a difference. Thanks for the help.
@user-af87c8 we will be releasing more information about Pupil Invisible and the core technology that powers the gaze estimation pipeline via the beta program newsletter and our website/blog in the near future. The glasses are still video based eye tracking, but with tiny eye cameras and an entirely new approach. The IMU is not involved in gaze estimation - it is a bonus hardware feature! Hopefully that is slightly less mysterious πΈ
Is the eye camera in the store a regular RGB camera or an IR camera?
@user-94ac2a this is an IR camera. It is the same camera used for our Pupil headsets. It is sold separately so that people with older headsets (with 120hz eye cameras) can upgrade, or if someone has a binocular frame with only one eye camera, they can upgrade and make their headset binocular by buying another eye camera
@wrp Thanks. Does that mean I can replace my own IR cameras with the one comes with the headset?
@user-94ac2a maybe you could provide a bit more context, I'm not sure I understand/follow.
@wrp Say if I have my own IR cameras. Can I use my own IR cameras instead of the original cameras in the headset?
So you would like to build your own headset with your own IR eye cameras, correct? The short answer is that UVC compliant cameras are supported by our software, but you may need to make some customizations to source code to use your cameras. If your cameras are not UVC compliant, then it would mean writing your own backend for the cameras.
@wrp Got it. The DIY tutorial is about building my own headset, right?
correct
@wrp Is there any specific tilt angle for the cameras? Or can the cameras be placed in any position under the eye?
you might also want to read: https://docs.pupil-labs.com/#jpeg-size-estimation-and-custom-video-backends you might also want to see custom video backends in source here: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/video_capture
@user-94ac2a position of the eye camera needs to be near the eye (with the pupil visible in the eye video frame), the system is designed to accommodate different camera positions.
@wrp Thanks a lot!
you're welcome!
@wrp Are there any requirements for the IR camera other than being UVC compliant?
what camera are you considering?
perhaps the community could give you some feedback on your DIY work
@wrp Maybe [email removed]
I mean fps
I will let the rest of the community respond re DIY setups - as there are likely others with some suggestions πΈ
OK
@user-24e31b You note that you are recording with Pupil Mobile? Are you recording onto the Android device or are you using the Android device for streaming and then recording via desktop/laptop that is running Pupil Capture?
What is the minimum resolution requirement for Eye camera?
The one on the store shows: Sensor Global Shutter. [email removed] [email removed] Are these two cameras or one?
@user-94ac2a this is one camera, with multiple spatial/temporal resolutions that can be selected via software
OK
@wrp So this is a stereo camera
no, a single camera per eye
not stereo camera
π
@wrp Two separate cameras connected through one single USB 2.0 port?
@user-94ac2a It is an USB 3 hub that connects the USB 2.0 cameras.
@papr So it's two separate USB 2.0 cameras connected to a USB 3.0 hub, and the USB 3.0 hub connects to the PC?
@user-94ac2a correct
Thanks!
Hi, may pupil be affected by external IR light sources? We want to use the pupil headset in an environment with optitrack motion capture cameras - which emit IR light for tracking just like the pupil cameras. I'm asking because I don't know how pupil's algorithms are working, but afaik many algorithms in eye tracking use corneal reflections (of known IR light sources) and there will be more than one in our case. Thanks for any help π
@user-82e7ab corneal reflections are not used. So, other IR light sources should not be a problem
However, you may notice some artifacts from IR emitters in the eye video (e.g. if you had the HTC Vive lighthouse system running you might see what looks like flicker/banding due to the frequency of the IR emitters in the lighthouse system)
perfect, thx!
as long as these artifacts do not affect your tracking, this is no problem at all
this is kind of old, but https://github.com/mdfeist/OptiTrack-and-Pupil-Labs-Python-Recorder
@user-82e7ab you might also want to check out the citation list for other research/papers that use optitrack (or other pose tracking systems) with Pupil: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing
oh, we already have distinct optitrack and pupil integrations for our framework. We just haven't tested both in combination yet. But I will have a look at the citation list π
sounds good @user-82e7ab
@papr in regards to your response to @user-2968b9 's query - we tried to stream the eye tracker directly with the laptop and it seems to work fine. But when we connect it with the phone and stream it on the laptop, the eye camera shows 0 rate even though it streams on the phone. Do you suggest we change the phone? Jordan has already mentioned the phone model that we are using
Can anyone tell me exactly what the difference is with Robust Detection enabled for surface tracking?
@wrp Yep, I am recording to the Android device then copying the files to the laptop for playback + using offline circle calibration. Can only seem to get playback at 0.5x speed?
@user-24e31b Which version of Player do you use?
pupil_v1.9-7
You can increase and decrease playback speed by hitting the right/left arrow keys during playback
@papr ^ I shall go and test those hotkeys π
@papr Yep! Was as easy as that, thanks! - is there a reference sheet for all the hotkeys for Capture/Player?
@user-24e31b no, unfortunately not. But there are not many:
- space: play/pause
- left/right during playback: decrease/increase playback speed
- left/right during pause: previous/next frame
- e: export
- any other letter as shown in the half-transparent thumb icons on the left
Copy that, should be able to memorize those, they are pretty logically obvious. Thanks again, will no doubt be back here soon as I've just begun testing/bench-marking.
I couldn't find anything in the documentation but is there a recommended way to clean the lenses of the newer 200hz cameras? One camera feed is much blurrier than the other and I think its from some dust that got into the lens or a smudge from adjusting it. I wanted to use rubbing alcohol on a q-tip but didn't know if the material the Hololens add-on kit is printed from would react to the rubbing alcohol.
@papr Is the USB 3.0 hub that connects the two USB 2.0 eye cameras a customized USB 3.0 hub, or is it just a regular USB 3.0 hub?
@user-94ac2a It's a custom-made hub in terms of form factor and connectors, but in function it's "just" a USB 3.0 hub.
@user-e8a795 To clean the lens I would recommend removing the lens and carefully spraying canned air at a distance of 2-3 cm from the sensor.
@mpk So that means I can use any USB 3.0 hub and connect any two USB 2.0 IR cameras to do the same?
In terms of hardware, yes. We do some special things in the driver/USB user-space layer that might not work with your cameras though.
Check out libuvc and pyuvc forks from us for that: https://github.com/pupil-labs/libuvc https://github.com/pupil-labs/pyuvc
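For what it's worth, a minimal pyuvc capture sketch (loosely adapted from the pyuvc README; it assumes at least one UVC camera is connected):
```python
import uvc

devices = uvc.device_list()              # list of connected UVC cameras
cap = uvc.Capture(devices[0]["uid"])     # assumes at least one camera is attached
frame = cap.get_frame_robust()           # grab a frame; frame.img is a numpy array
print(frame.img.shape)
cap = None                               # release the device
```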
Thx!
hello all, i have downloaded the latest pupil software (1.10) but in the Player, I can't find the Vis Scan Path plugin - any clues what I need to do? It's running on a Mac
Hi @user-a951f2 vis scan path has been temporarily disabled in Pupil Player since v1.8. Please see notes here: https://github.com/pupil-labs/pupil/releases/tag/v1.8
@wrp thanks! Hopefully it will be enabled soon π
@papr In the exported files, are the fixation and gaze "x_scaled" and "y_scaled" values in the surface folder using the lower-left corner of the surfaces as the origin? Or are they using the lower-left corner of the world camera image?
@user-e2056a x_scaled = x_norm * surface_width, where surface_width is set by the user in the surface settings. Therefore, the x/y_scaled values originate at the surface origin (lower left corner).
@papr Thank you, there is another gaze position file in the exports, not in the surface folder, though, what is the origin of that one?
The gaze_positions.csv file contains gaze whose norm_pos values originate in the world camera's lower left image corner.
@papr Thank you. In the gaze position file, I saw multiple gaze points (with different x-y positions) occurring at the same timestamp; may I ask why?
@ and why is the world timestamp different from the gaze timestamp?
@user-e2056a Gaze timestamps are inferred from pupil timestamps. If it is monocular gaze, its timestamp equals the pupil timestamp it is based on. If it is binocular gaze, its timestamp is the average of both pupil timestamps it is based on. Pupil timestamps are inferred from the eye video frame on which they were detected. Eye images are acquired at a much higher frequency than world images. Therefore, there are multiple gaze points that belong to a single world frame, and it is normal to see gaze data with the same world timestamp. The gaze timestamp is very unlikely to be exactly the same for two different gaze points.
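If you want to group the exported gaze rows per world frame yourself, here is a rough pandas sketch (column names vary a bit between versions, hence the fallback):
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")   # placeholder path
# The column linking a gaze row to its world frame is called "world_index"
# in some versions and "index" in others.
frame_col = "world_index" if "world_index" in gaze.columns else "index"
samples_per_frame = gaze.groupby(frame_col).size()
print(samples_per_frame.describe())  # usually several gaze samples per world frame
```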
Is there any documentation about playing with the pupil sensor settings? For example, the default resolution is 320x240, which is very small; shouldn't using a higher resolution help the pupil algorithm perform better (other than running at a slower frame rate)?
@user-8b1528 We have never made exact measurements but we have never seen a systematic drop in detection performance when using 320x240. Increasing eye video resolution does not only mean lower frame rate but also higher cpu requirements/detection time.
@user-8b1528 we found that the 320 resolution is superior due to the coarse detection which happens at higher resolutions (which backfires sometimes), and thin eyelashes are not as sharp in 320 (which is good). Far more important are the settings for brightness etc... we use 64 for both brightness and contrast, 140 for gain, 100 gamma, 0 sharpness. We use the hmd integration so no guarantee that these are the best settings for the normal headset. For the hmd version it is also important to slightly increase the distance between eye and screen.
Is there a way to get simulated eye tracker data sent over zeromq? I want to work on the API when I am home and don't have access to the pupil hardware.
@user-29e10a Hi, thanks for the numbers! It gives a really white image, is that what we want? A gain of 140... the interface max is at 100, could you confirm the gain value?
Thanks !
@user-5c7218 you can use the video file source, and playback recorded eye videos instead of using the USB camera
@papr but there is no way to generate simulated data?
You could write a plugin that feeds recorded data into Capture... But this does not exist yet.
@papr okay, thank you
@papr, the default setting of minimum data confidence is 0.5; is it ok for us to use a value lower than 0.5? What is the lowest value allowed without compromising the data accuracy?
@user-8b1528 oh I'm sorry, I meant gamma to 144 and leave gain at 0
Hello all,
@user-e2056a I think it is actually 0.6. Low confidence usually means less accurate. Depending on your use case, you should use different thresholds.
I have a small question regarding the frame publisher plugin. I managed to grab the world frames using zmq in C++ and store them as OpenCV mats - so far so good. Unfortunately the publisher only outputs the raw video format, which is distorted. Even when I calibrate it using the camera intrinsics estimation plugin, the raw format is still published. So I tried to get the intrinsics from the file that your software saves (camera.intrinsics) but I can't get my head around the encoding of that file. Is there a way of a) transmitting the undistorted (corrected) video stream through the publisher, or b) getting the intrinsic camera parameters from the saved file? (That is preferred since I can just integrate them using OpenCV.) Any hints regarding that topic? Best regards
@user-26fef5 The file is encoded using msgpack. You should be able to decode it in a similar fashion as Pupil network messages
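For example, a minimal decoding sketch in Python (msgpack assumed; the file name and the exact key layout depend on the version, so inspect the result first):
```python
import msgpack

# The intrinsics file ("camera.intrinsics" / "world.intrinsics", depending on version)
with open("camera.intrinsics", "rb") as f:
    intrinsics = msgpack.unpack(f, raw=False)

print(intrinsics)  # inspect the structure; it should contain the camera matrix and
# distortion coefficients, which can then be used with cv2.undistort on the C++ side.
```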
Ahhh, I see. Haven't thought of that. Thanks
@user-29e10a Concerning the post about the sensor settings: you finished the post talking about the distance between eye and screen for the HMD version. Is it preferable to have the eyes closer to or farther from the screen? From the tests I've done it seems to be better when close to the screen; could you confirm?
Thanks !
@user-8b1528 Regarding the eye-screen distance: It is most important that the eye is well visible in the eye cameras' fields of view. Since the cameras are attached to the screen, the eye-screen distance is one parameter to adjust eye visibility and therefore pupil detection quality.
@papr Thank you! Another question related to gaze: can I find the duration of each gaze from the gaze position file?
@user-e2056a A gaze datum does not have a duration -- at least not conceptually in Pupil.
Just to be sure, you were not referring to fixations, correct?
@papr Still trying to fine-tune my setup. Even with the camera lenses carefully adjusted to get a sharp image, I noticed that if I press the HMD helmet a bit more onto my face or pull it away, even just a bit, pupil tracking seems to be lost easily when looking at the left/right edges. So I'm concerned about how to use the setup with a lot of different people without having to fine-tune everything (adjust lens focus, for instance) each time, since it seems that sensitive to the distance.
Any trick or info about that ?
Thanks !
@papr correct, Thank you!
Hey. I'm trying to use the manual marker calibration method and displaying the marker on a screen as an image. Does it matter if we change the size of the circular marker? And since it says that it is optimal for 1.5-2 metres, is it okay if we use it at a 30 cm eye-to-screen distance? (Also, where can I set the eye-to-screen distance?)
And the document says something about scaling down the markers for the screen marker calibration: "You can adjust the scale of the pattern for a larger or smaller calibration target." Can I also control where on the screen these markers are shown? (Since our screen is 120 cm x 68 cm, subjects can't fixate on the corner of the screen while sitting 30 cm away from the screen in full screen mode.)
@user-41c874 Hi, no, the markers can be scaled down, especially if your screen is that close. You should position and scale the markers such that they are still visible in the world camera's field of view. The world process gives you feedback about visibility and detection of the markers during the Manual Marker calibration.
Perfect ! Thanks a lot!
Has anyone built a custom physical apparatus modifying the physical Pupil headset? We need to design an interface that allows users to use the pupil glasses during robotic surgery. Currently the pupil apparatus interferes with the user headset of the robotic surgery console. We are interested in speaking with anyone who has physically customized Pupil to work with a head-interface device.
Is there any minimum resolution requirement for the eye camera?
Hi again, when I use a mobile phone, I can record with audio. However, when I use a MacBook Pro, I can't record audio. I opened the audio plugin. I am using the new version 1.10.20
@user-94ac2a I would not recommend going below 200x200 for spatial resolution
Thx
@user-adc157 I will try to replicate the behavior with audio recording on macOS. BTW, what version of macOS?
@user-adc157 And which input device is selected as default input device? The integrated microphone or an external one?
Hey. We ran into the issue that sometimes the pupil doesn't get detected automatically at 192x192 eye camera resolution. I tried to play around with the exposure time, pupil intensity range, and pupil min and max. Could you explain what exactly these settings change and what is ideal for getting good, stable detection of the pupil?
@wrp @papr The macOS version is Mojave 10.14.2. I selected the integrated microphone as the Audio Source. I can't see any audio file when using Pupil Player. However, when I use the mobile phone with an offline recording, I can easily see the audio file in Pupil Player.
@user-41c874
- Higher exposure time naturally increases the image brightness. This has an upper limit based on the selected frame rate.
- The 2d pupil detection algorithm searches for black areas in the image. The pupil intensity range is a threshold that specifies which pixels belong to these black areas and which do not.
- Min and max pupil sizes are lower and upper bounds for filtering potential pupil candidates
Generally, you want the pupil to be as dark as possible and everything else as bright as possible.
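Purely to illustrate what the intensity range does (this is not Pupil's actual detector, just the underlying idea approximated with OpenCV/numpy):
```python
import cv2
import numpy as np

eye = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

# Simplified idea: pixels darker than (darkest image value + intensity range)
# are treated as pupil candidates. The real detector works on the image
# histogram, so this only approximates the setting's effect.
intensity_range = 23                                  # example slider value
mask = (eye < int(eye.min()) + intensity_range).astype(np.uint8) * 255

cv2.imwrite("pupil_candidates.png", mask)
```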
@user-adc157 Thanks for the feedback. I will try to reproduce the issue with an external mic since the Mac Mini that I am working with, does not have an integrated mic.
Is there a reason that it says "no audio"?
@aduckingah, I believe 'no audio' is an option you can select. If you want to record audio, select one of your options listed above the no audio line.
Hi @user-4878ad, @user-8950d7 is correct.
@user-8950d7 @papr you were correct. thanks!
π
@user-4878ad Yes, I can select Built-in Microph from Audio Source. But I can't record an audio file.
I didnt get one either as far as i can see
@user-4878ad @user-adc157 please check your capture.log files for any hints to what the problem could be
@papr .... where might I find this capture.log file? Would it be in the recording folder? π
It is in the pupil_capture_settings folder
Please check it directly after a failed audio recording. Do not restart Capture since this would overwrite the previous log file
Will go try a new recording now
@papr I figured out the recording... but there is a time delay between the video and audio
What kind of exposed (black) film negative should I purchase for DIY project
Does this exposed film negative work? https://www.amazon.com/gp/product/B0152OSM4Y/ref=ppx_yo_dt_b_asin_title_o00__o00_s01?ie=UTF8&psc=1
Hi @user-94ac2a, I think the purpose is to filter all daylight out and transmit just IR light through the filter. I used a color film: pulled it out in daylight and sent it off for development. The dark brown part of the film did the job for me.
You need to cut off the IR filter of the camera.
@user-14d189 You mean cutting off the Coated glass?
Microsoft HD6000
yes, the back of the lens of the eye tracking camera usually has an IR filter, and that needs to go. I wasted one eye tracking camera btw.
I did that
The film doesn't seem to be working?
Which side should I put the film on
I'm not sure about the black and white negative film. CCD chips are sensitive to daylight and IR light. In this case we just want IR.
and you would put the cut out where the IR filter was.
so in between lens and ccd
and the film must be exposed and developed.
How can I expose and develop the film?
Or is there any ready-made one I can just purchase?
Just pull the film out of the case, show it to a light source and then send it in to a photo shop
there should be daylight filters available, but none known to me. I would assume they are made of glass and most likely have the wrong size.
that's why the colour film is an easy solution.
Which one did you purchase?
any will do.
You can start as well with some recordings without the daylight filter. The only problem is that reflections from surrounding daylight might influence the quality of the recording.
How about this one?
Looks alright to me. You just need a piece the size of a hole-punch hole; even one would be enough. I had an ASA 100; the grain is a bit smaller than with ASA 400.
Grain in negative film is the size of the light-sensitive salt particles. 400 is more sensitive and has bigger particles.
Thanks!
For this film, do I still need to expose and develop?
yes. If you do not expose and develop it, the daylight will come through.
I see. Probably I need to go to a photo shop nearby?
I asked a photographer who did film back in the day and he gave me some cut-offs of developed film. That was enough.
Cool!
@papr Thanks !. I'll try to work with this and a few other things and get back to you !
@user-4878ad so there is an actual audio recording? What changed compared to the previous recordings?
Hi all, new to Pupil Labs (using the vive drop-in). Was wondering how people are saving their recordings in Unity or accessing data directly from the first-party software. I see a SaveRecording in the PupilTools script (under networking) but not sure the appropriate way to call it. Suggestions?
@papr the mic was on mute ..... π
but yes, there is a recording now. the only issue is that there is a latency between the audio and the video. do you know if there is a fix for this or if there are things that I should change in settings?
@user-4878ad @user-adc157 I tried to reproduce the audio recording issue. I used Apple headphones that have an integrated microphone as input. When I selected the microphone in the Audio Capture menu, I got a system permissions dialogue which asked me to grant access to the mic. After that, I was able to make a successful audio recording that was in sync with the video. I can recommend filming a device playing this kind of video to test audio/video synchronization: https://www.youtube.com/watch?v=ucZl6vQ_8Uo
@papr Thanks, I will try with an external microphone; I understand that the integrated microphone does not work on macOS systems. I will inform you when I get audio.
Hello π Does Pupil provide, in addition to the dispersion-based algorithm, an area-based algorithm in order to define fixations with respect to specific areas of interest?
@user-07d4db no, we don't. Do you have a specific algorithm in mind? I would be interested in how it works.
@user-07d4db Personally I prefer a velocity-based fixation identification filter. Your project sounds a bit different to fixation identification only. You probably need to identify the area of interest in the world camera image first, to exclude movements between the head and the area of interest. From there you can determine if gaze is in this area and apply a binary decision: fixation yes or no. cpicanco posted a screen detection algorithm here earlier, if you run the test on a screen.
@user-07d4db @papr I prefer a velocity-based fixation filter, because it includes fixations on possibly moving objects too. Not sure what the trade-off is?? Do you have any experience? The filter is easy to design, and takes out data related to blinks and rapid, short eye movements (saccades). It identifies 60-80% of eye movements as fixations. Does anyone have experience with whether that is about right?
Thank you very much for your reply! Unfortunately I am not an expert in programming, so I am not able to design a plugin for a velocity-based algorithm. I am conducting an experiment measuring how visual attention allocation (number and duration of visual intakes/fixations) shifts between different pre-defined AOIs in human-robot interaction (so I have a dynamic task). Do you have any recommendations on how to deal with that using the dispersion-based algorithm?
What are you using currently? Matlab? Excel? Did you have a look at the output file of the fixation detector? You do not necessarily need a plugin. Velocity can be calculated from the gaze data of the 3D model detection. 2D should be possible as well.
There will be 15-20 blinks per minute and 20-60 fixations. At some point you want that to be determined automatically.
@papr thanks!! I will take a look and get back to you with my results
Hello. I am running into the issue that my surface markers aren't being detected consistently. (I am displaying the markers on a screen as an image. They are visible in the world camera (1280x1024 frame size).) Do you think I should increase the frame size of the world camera or maybe increase the size of the surface markers?
@user-41c874 you have two options. Increase the physical size of the markers or reduce the minimum marker perimeter setting in the surface tracker menu
The minimum marker perimeter is already set as low as it can go, i.e. 30. I think I will try to increase the size of the markers. The image in the world camera is quite pixelated with the current settings, so maybe increasing the resolution will help. I'll try both and get back to you. Thanks a lot!!
Hi, I am a MRes student in VR focusing on eye tracking and currently working with a Pupil. I wonder whether anyone here has done some work about error quantification/characterisation on Pupil? Apart from the original Pupil Labs paper, any evaluation about the current Pupil's performance? Thanks very much!
@user-c5bbc4 Hi Yifan, check this out: "Wearable Eye-tracking for Research: Automated dynamic gaze mapping and accuracy/precision comparisons across devices" by Jeff J. MacInnes, Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA (not Pupil Labs related).
I'm also interested if someone else has done some studies or comparisons. cheers
Thanks! Yes, I wonder whether there are some existing work. Otherwise I plan to do it
One more question: any intro documents for Pupil's code?
@user-c5bbc4 Pupil Labs also has a research-and-publication channel, or see the citation listing https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0
Right. I have checked through it. Not much relevant results tho
Good day, if one eye camera feed is working and the other isn't, could it be a setting issue or do i need to fix my camera?
I have already updated the drivers
Hello. I'd like to calibrate two eye cameras without a world camera, so I need a transformation matrix to convert camera coordinates to world coordinates. In 3D detection, the software provides three-axis data such as 'circle_3d' or 'sphere'. What point does the origin of this three-axis space represent? Is it the origin of the camera coordinate system?
Hi @user-c6ccfa If you use Pupil Capture and "restart with default settings" and run Pupil Capture with admin privileges, what do you see in the eye windows?
Hi, is there any work using pupil labs that allows the eye gaze to interact with the computer? Like the Windows Eye Control using Tobii eye tracker.
Hi @user-82e954 would simple mouse control be a good enough starting point?
Yes, as long as the eye movement can interact with objects.
I am thinking of using it as an input for a Unity project.
This is not Unity, but does demonstrate simple mouse control: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py
Is there any example of the implementation of that function? Sorry I am new at this, so I don't really know how to use that.
@user-82e954 this script runs independently. You start Pupil Capture, define a surface and name it "screen" according to this code, and then run the script via terminal/cmd prompt and it will use gaze coordinates on screen to control the mouse position
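(For reference, here is a rough, hedged sketch of the idea behind that helper, not the official script itself. It assumes Pupil Remote is running on its default port 50020 and that a surface named "screen" has been defined; key names such as gaze_on_srf follow the v1.x message format and may differ in other versions.)

```python
# Hedged sketch of surface-gaze-to-mouse mapping (not the official helper).
# Assumes Pupil Remote on tcp://127.0.0.1:50020 and a surface named "screen".
import zmq
import msgpack
import pyautogui  # pip install pyautogui

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to gaze mapped onto the "screen" surface.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.subscribe("surfaces.screen")

screen_w, screen_h = pyautogui.size()

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.unpackb(payload, raw=False)
    for gaze in surface.get("gaze_on_srf", []):  # key name assumed from v1.x
        if gaze["on_srf"] and gaze["confidence"] > 0.8:
            x, y = gaze["norm_pos"]              # normalized surface coordinates
            # Surface origin is bottom-left; screen origin is top-left.
            pyautogui.moveTo(x * screen_w, (1 - y) * screen_h)
```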
Hi @wrp, could you tell me what point the origin of the three-axis data like 'circle_3d' represents in 3D detection?
Is it the origin of the camera coordinate system?
@wrp Oh, I get it. Thanks!
Can anyone help me with the tutorial https://github.com/pupil-labs/hmd-eyes ?
Is this a project where you need to have already set up the Vive with Unity? "This would be a good point to put said device on your head.
Use the displayed realtime videos of your eyes to make sure they are as centered as possible and in focus."
@user-619198 That is correct. This instruction assumes that the connection between the hmd-eyes plugin and Capture has been established already.
Awesome, thanks
@user-c5bbc4 @user-14d189 we are very close (think weeks) to publishing a preprint on an extensive visual test battery (many tasks) with 15 subjects, with direct (simultaneous) recording of an Eyelink-1000 as a "gold standard". This might be the study you are looking for!
Hi, thanks for replying. Could you explain a bit about 'visual test battery'? I guess you mean a study with 15 participants testing the accuracy and precision of an Eyelink-1000?
Not sure whether I get the point. 'Gold standard' refers to the best performance, I guess?
@user-c5bbc4 Not every eye-tracking parameter can be pinned down to a "ground truth", i.e. subjects do not fixate fixation crosses perfectly (e.g. microsaccades), do not track smooth pursuit targets perfectly, etc. We therefore recorded both Pupil Labs glasses & an Eyelink 1000 at the same time. So yes, "gold standard" would be the best possible performance in a sense; of course dual Purkinje or the like would be even better, so "best" (currently) possible performance with a video eye tracker. The test battery contains multiple tasks for "classical" measures like accuracy and precision, but also tasks for smooth pursuit, blinks, microsaccades, pupil dilation, and head motion. Hope that helps!
@user-af87c8 Looking forward to it! 'Dual Purkinje' would be an ideal reference point. May I ask what you are referring to with 'the like'?
@user-14d189 I was thinking of scanning laser ophthalmoscope trackers, where you can record single photoreceptors of the eye. I guess at least for small eye movements these will be even more accurate than dual Purkinje eye trackers.
@user-af87c8 Sounds very good!! So the work will be published in a few weeks? Is the preprint open to the public?
@user-af87c8 I am also looking forward to your work!
@papr Hi, may I ask whether Pupil Labs has done any evaluation work on Pupil apart from the original paper, given that the product has evolved?
@user-c5bbc4 There is a paper by us that evaluates the 3d eye model in regards to the effect of refraction: https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf
Also, check out our Pupil Citation List: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0
If I remember correctly, it includes some independent work that evaluates the Pupil pipeline.
Thanks very much!! Is the glint-free model in the first paper the one used in Pupil Invisible?
The glint-free model is currently used for the 3d detection and mapping pipeline in Pupil Capture.
Oh, okay. Thanks!
Hello! Does anyone know what it means when I get this error in Player?
Thanks in advance!
@user-2798d6 if I remember correctly, this is related to the mjpeg video format and can most likely be ignored.
Hello everybody!
I have a few questions about finding the best setup. I am using an HTC Vive for VR and I am not happy with my calibration. At the moment I am trying to figure out whether 2D or 3D detection works best. There are multiple other variables, like the intensity range, pupil min, pupil max, and all the image post-processing variables, that I am playing around with at the moment. Is there a guide somewhere on how to set these variables? For instance, what do the different rectangles in the debug window mean? How can I use the debug information for my purposes? Any help is appreciated! Thanks a lot!
And also is there an easy way to change the calibration pattern in VR?
Hi @user-97bf6d Do you have an example eye video recording that you could share with us? I could have a look at it and give specific feedback.
Hello papr! Not yet. I'll create one and share it with you
Sorry... you need the eye videos only?
Yes, that is correct. Please share the recording with [email removed]
Done! Thanks a lot
@user-c5bbc4 @papr Yes, the preprint will be openly available in the next few weeks.
I recently purchased a couple of monocular glasses -- while they work great, I realize my study design would require accurate gaze beyond the calibrated depth. My question is: is it possible to upgrade the monocular glasses to binocular by simply adding another eye camera, or do I need to purchase a separate glasses frame (i.e., get the binocular model)?
@user-9e1c96 please contact info@pupil-labs.com with that question
@papr thanks! just emailed them.
Hello! When looking at my gaze position data, I noticed that the difference in gaze timestamps between consecutive sample points is sometimes 4.17ms, instead of 8.33ms as would be expected from my sample rate of 120/s. Can someone help explain why this is occurring?
@user-4c85cf if you are mapping monocularly, then you might see interleaved samples from both eyes and therefore an effective sampling rate of 240 Hz.
@user-4c85cf You can check by looking at the gaze topic. If it ends in 01, it is binocular; if it ends in 0 or 1, it is monocular. In case you have binocular data, please send a copy of the recording to data@pupil-labs.com so that I can have a closer look.
@papr thanks for the quick reply! However, I'm not exactly sure where to find this gaze topic you mentioned.
sorry if this is a FAQ but is it possible to experiment with Pupil and get it working without first buying the expensive glasses?
@user-4c85cf have you been looking at the exported gaze positions CSV or at the realtime ZMQ data?
@papr I've been looking at the csv files
@user-4c85cf Check the base_data field. It should include data in the format xxxxxx-0, xxxxxx-1, or xxxxxx-0 xxxxxx-1.
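(A quick, hedged sketch of how one could check this in the exported file, assuming the base_data column contains space-separated entries of the form above:)

```python
# Hedged sketch: classify exported gaze samples as mono- or binocular
# based on the 'base_data' column (format assumed as described above).
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

def classify(base_data):
    # Each entry looks like "<pupil_timestamp>-<eye_id>"; collect the eye ids.
    eye_ids = {entry.rsplit("-", 1)[-1] for entry in str(base_data).split()}
    if eye_ids == {"0", "1"}:
        return "binocular"
    return "monocular eye " + "".join(sorted(eye_ids))

print(gaze["base_data"].apply(classify).value_counts())
```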
Did I ask a bad question?
@user-c550fd check out https://docs.pupil-labs.com/#diy. The key requirement is that you use UVC-compliant cameras. Alternatively, if you want to bypass hardware altogether, you could run the software against a dataset of recorded eye movements.
@user-c550fd, you can build your own DIY Pupil setup to make your own recordings. Also, you can use the sample data available on the site.
@user-41f1bf can you provide any link for building my own recordings? Let's say I want to stream the world camera to a remote website. Is this possible?
Try writing DIY in the search box
Thanks. I will try that
@user-a39804 streaming world camera (or any camera feed) can be done on a local WiFi network with minimal latency. However, streaming over the internet (Pupil Capture (desktop) --> Server --> Client) would likely experience high latency.
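(For the local-network case, a hedged sketch of one way to receive world frames via the Frame Publisher plugin. It assumes the plugin is enabled in Capture, set to JPEG, and that frame messages arrive as [topic, msgpack payload, raw image bytes], as in the v1.x format; verify this against your version.)

```python
# Hedged sketch: receive world frames from Pupil Capture over the network.
# Assumes the Frame Publisher plugin is enabled (JPEG format) and that frame
# messages arrive as [topic, msgpack payload, raw image bytes] (v1.x format).
import zmq
import msgpack
import numpy as np
import cv2  # pip install opencv-python

CAPTURE_IP = "127.0.0.1"  # replace with the IP of the machine running Capture

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://%s:50020" % CAPTURE_IP)  # Pupil Remote default port
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://%s:%s" % (CAPTURE_IP, sub_port))
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()
    payload = msgpack.unpackb(parts[1], raw=False)
    if payload.get("format") == "jpeg" and len(parts) > 2:
        img = cv2.imdecode(np.frombuffer(parts[2], dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow("world", img)
        if cv2.waitKey(1) == 27:  # press Esc to quit
            break
```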
@papr I would like to ask if I can read the confidence value after calibration. I know there is a threshold which decides whether the calibration is successful or not, but I don't know how to get/modify the threshold, and I don't know how to check my confidence value right after calibration.
Hi @user-1609ba There are a few concepts that need to be distinguished carefully here:
- Confidence: Quality assessment of a pupil datum (eye camera space). You can see the pupil confidence at all times in the top-left graphs of the world window.
- Calibration success: A calibration requires two things to be successful: pupil (eye camera space) and reference (world camera space) data. If this is given, the calibration tries to find the best possible mapping function between pupil data (eye camera space) and gaze data (world camera space). There is no quality assessment in regards to the mapping accuracy (see below)
- Calibration confidence threshold: In order to get the most accurate mapping, we filter the pupil data by confidence before applying the calibration, i.e. we only use high quality data for calibration
- Calibration accuracy+precision: This is measured after the calibration by the Accuracy Visualizer plugin. See the docs for details on this.
Could you specify your questions regarding these terms?
@papr thanks for your explanation. Here is the situation. I am trying to develop an application in which I need to know whether the calibration is successful enough. If the calibration is not good enough (which means the pupil data and gaze data are not consistent enough), then I will probably ask the user to redo the calibration with a slight adjustment of the HMD setting. Therefore, I would like to ask: how can I quantify the calibration's quality? Also, how do I improve the calibration's quality if it is not optimal?
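(One possible approach, sketched under the assumption that Capture publishes notifications with subjects such as calibration.successful and calibration.failed on the notify topic, as in the v1.x API: subscribe to those notifications and prompt the user to recalibrate on failure. The exact subject names should be verified against your Pupil version.)

```python
# Hedged sketch: listen for calibration outcome notifications from Capture.
# Subject names ('calibration.successful' / 'calibration.failed') are assumed
# from the v1.x notification API; verify them for your Pupil version.
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.subscribe("notify.calibration")  # all calibration-related notifications

while True:
    topic, payload = sub.recv_multipart()
    notification = msgpack.unpackb(payload, raw=False)
    subject = notification.get("subject", "")
    if subject == "calibration.successful":
        print("Calibration succeeded:", notification)
    elif subject == "calibration.failed":
        print("Calibration failed; ask the user to adjust the HMD and retry.")
```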
@wrp that will be an issue then because we want to stream in real time with low latency
Looking at heatmaps in Pupil Player, we are seeing a static heatmap. Is there a way to show the heatmap as it is generated in Pupil Player? We have screen-recorded while collecting data to produce this effect, but feel this is not ideal. Thanks.