@papr Thanks for the kind reply last night
Could you let me know where the algorithm code for the depth estimation part is?
Hi! Any news about the VR integration of the new 200 Hz cameras? We're about to order...
So I'm not exactly sure what caused this, but I went to record recently and there was this black ring around the image. Did the forward-facing camera's lens just come loose? Anyone know how to fix this?
@user-e1dd4e This is normal if you record in Full HD (1920x1080). If you set the frame size to 1280x720 the ring should not be visible.
@user-29e10a we have 200hz cameras and Pupil headsets for 200hz cameras now, but are still putting the finishing touches on the new mounts for the 200hz VR integrations and testing them further.
@user-98013c Accuracy - "Accuracy is calculated as the average angular offset (distance) (in degrees of visual angle) between fixation locations and the corresponding locations of the fixation targets."
Precision - "Precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation"
From source code here: https://github.com/pupil-labs/pupil/blob/465f02aa83e2a742091f5640cd69f3ec7d87fb60/pupil_src/shared_modules/accuracy_visualizer.py#L60-L67
Also displayed in the Accuracy Test after running the test
@user-98013c see also docs https://docs.pupil-labs.com/#notes-on-calibration-accuracy
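For reference, the two quoted metrics expressed as a minimal sketch in Python/numpy. This assumes the pairing of fixations with targets and the computation of the angular distances themselves are already done; only the aggregation is shown:

```python
import numpy as np

def accuracy(angular_offsets_deg):
    # average angular offset (in degrees of visual angle) between
    # fixation locations and the corresponding fixation targets
    return np.mean(angular_offsets_deg)

def precision(successive_distances_deg):
    # RMS of the angular distance (in degrees of visual angle)
    # between successive samples during a fixation
    d = np.asarray(successive_distances_deg, dtype=float)
    return np.sqrt(np.mean(d ** 2))
```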
Is there any potential issue with having long recordings (like 30 minutes) where only parts of the recording actually capture the data I want? I'm trying to avoid recalibrating for several different videos and am planning to just do 4 different experimental runs in one big take.
@user-2798d6 if you use Pupil Mobile for this I recommend doing one long recording. If you use Capture, then just calibrate first and do short recordings when needed.
I'm doing offline calibration, so I'd need to do a separate calibration sequence for each recording right?
The online calibration does get the accuracy that I'm looking for
Can I use mapping ranges to do the offline calibration if I do several experimental procedures within one long recording?
yes. mapping ranges would make sense then!
Perfect! Thanks!
Do I include the calibration frames in the mapping ranges or does that not matter?
One other thing - the Eye 1 camera window sometimes does not display the menu/plugins. The sidebar is there, but when I click on the settings icon, for example, it doesn't expand out.
Hello, I have a question about how accurate the binocular eye camera is, even though it has a depth difference compared to a mono eye camera.
Hello everyone, I'm new to eye tracking. I'm doing a project where I will run some eye tracking tests with participants. My question is: Pupil is an eye tracking device, but is it possible to link a cognitive test to the device? For example, a fixation task with random stimuli coming in. Thanks!
@user-2798d6 I strongly suggest you recalibrate once or twice during the 30 minute recording. In MOST trackers, tracking quality degrades due to small movements of the glasses on the head. I wouldn't assume that the Pupil tracker has fixed this issue. In fact, the use of multiple competing eye models (visible in 3d debug mode) means that the tracking quality will change quite a bit over time.
@wrp Will 200 Hz camera integrations be internal (behind the optics) ?
@user-8779ef Are you asking if the new 200Hz cameras use the same lenses as the old eye cameras?
@papr Nope. I'm asking if the HMD integration of the 200 Hz cameras will be behind the HMD optics. The current placement of the 120 Hz cameras is far away from the visual axis, and does not get a good eye image of many of my participants.
To be blunt, it seems that, if there's a bit of puff in the cheeks, the cheeks block the camera's view of the pupil.
Ah, I understand. I did not catch the HMD context. @mpk should be able to answer this. 🙂
@user-8779ef we plan to place the new smaller cameras further into the FOV. However, we cannot move behind the optics.
I hope to get a better angle by moving further up without adding a lot more occlusion.
Hello everyone, I am trying to install pupil from source on Win10 using this guide: https://docs.pupil-labs.com/master/#windows-dependencies. Now I'm on the final step, executing main.py, but it failed. I found https://github.com/pupil-labs/pupil/issues/728 but the fixes are not working for me. Do you have any idea what I can do? Thanks
@user-c0f86d Could you go to the `pupil_src\shared_modules\pupil_detectors` folder within the command prompt and run `python setup.py build`? What is the output?
Hello! I was wondering if there is an ETA on the next iteration of the software? I use a MacBook pro with retina display and the offline calibration is not working right now. GitHub said they would fix it for the next release, so I was just wondering when that might be. Thank you!
@Eva#2951 It looks like the pupil detector build procedure cannot use your boost installation. I would recommend repeating the installation steps for the boost lib.
@user-2798d6 We do not have a release date yet. If you need the new fixes I would recommend running Pupil from source. The macOS install instructions are fairly straightforward. There are far fewer possible hiccups than on Windows or Linux.
@papr thanks. I tried, but while building the boost libs again I got a new error. It seems that this is the fault. I will reset Windows 10 and try it again.
@user-6419ec may I ask what you are doing that requires building Pupil from source on Windows?
Of course. I have a setup of different sensors which work together in an experiment, and now I want to integrate Pupil Labs into the existing setup. That's why I had to write some plugins for the time synchronization, because the existing setup works with its own NTP software and Visual Studio's datetime for the timestamps. I already built Pupil from source on another PC and integrated it into my setup with certain plugins in a proper way, but the power of that PC isn't good enough for binocular tracking, which is why I want to install it on a new PC with more power.
@user-6419ec have you considered writing a plugin and loading it with the app bundle?
I ask, because (as you've noticed) setting up all dependencies on Windows is unfortunately not easy
That's true 🙂 Not until now, because I don't know how this works 🙂
@wrp sorry for my newbie question, but how would this work? How can I load my plugin for the app bundle?
@user-6419ec https://docs.pupil-labs.com/master/#plugin-guide
You can put the plugin in the `pupil_capture_settings/plugins` folder
it will be loaded the next time you start Pupil Capture
@wrp but for that I have to build it from source, right? That's what I did on the old PC with less power
@user-6419ec the great thing about plugins is that you do not have to build Pupil from source
you just write your plugin, and it is loaded by Pupil Capture (app bundle) when you start Pupil Capture
(or Pupil Player if writing a plug-in for Pupil Player for example)
@wrp thanks a lot, this was something I didn't know until now and it makes my life much easier 🙂 I already read the plugin guide, but because of "In the following sections, we assume that you run the Pupil applications from source." I thought I had to run it from source. Thanks again
I apologize for the lack of clarity in the docs
We will work to improve the text in the docs to help clarify
@user-6419ec @wrp I wrote this on purpose, since running custom plugins from the bundle can lead to problems if you use Python (system-)modules that are not included in the bundle.
@papr i see, that makes sense
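For reference, a minimal sketch of what such a bundle-loadable plugin file could look like, assuming Pupil's `Plugin` base class from `plugin.py`; the class name and printed output are purely illustrative. Drop the file into `pupil_capture_settings/plugins` and it should be picked up on the next start of Pupil Capture:

```python
from plugin import Plugin


class Example_Timestamp_Logger(Plugin):
    # illustrative plugin: log the timestamp of every new world frame

    def recent_events(self, events):
        frame = events.get('frame')
        if frame is not None:
            print('world frame at {}'.format(frame.timestamp))
```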
Hi, I have coded a basic Matlab script that allows me to communicate with Pupil Capture via zmq. I feel like there is not much I can do yet other than start the eye process, calibrate, and start/stop the recording. Is there any way to define the acquisition settings more thoroughly (e.g. frames per second, luminosity, etc.)?
@user-9d900a cool! Could you share your existing code with us? I am curious how you implemented the zmq communication.
Sure it's super basic! And certainly not bug proof but it works 🙂
Where should I send it?
You can upload it to http://gist.github.com/ and share the link with us if you like 🙂
I actually relied on existing code (matlab-zmq-master by fagg: https://github.com/fagg/matlab-zmq). Then I just needed to tweak a function for serializing the notifications using msgpack.
@user-9d900a Ah, I see. I guess it would be helpful to write some abstractions around it that make it easier to connect, subscribe, and receive, and maybe even do the serialization. Using this package you should be able to write a very flexible Matlab client (compared to the udp relay version).
I did, and it works.
It's just that I'd like to go beyond simply calibrating, starting and stopping the acquisitions. But there doesn't seem to be a way to really change acquisition settings via the standard notification process, is there?
No, there are no camera setting notifications.
Usually you set them manually once and do not need to automate them.
@papr Hmmm ok. Would you recommend switching to Python then? What would you say is the development strategy if I eventually want a stand-alone solution (an executable with a UI, perhaps Python based) that calls Pupil as well as a number of other processes? No choice but to work on the Pupil source code, right?
No, in this case I would use the notification system. But why would you want to duplicate functionality that is already present in the Pupil ui? edit: Clarify question.
What I mean is that in most cases the Pupil api is used to access data for processing and to push trigger events that are stored during a recording.
@papr Because the solution we have in mind does not have eye-tracking as a principal functionality: we have a UI that deals with sound inputs, so we don't want the user to have to manipulate the Pupil Capture UI on top of that. We'd like all the Pupil acquisition processes to be automated and run in the background of our main application.
What do you say?
@papr, push trigger events? Like watermarks that you can retrieve in the recording file?
trigger events in the sense of annotations, e.g. the start of different experiment conditions. I understand. Then I would suggest using Pupil Service and modifying the eye process in such a way that it does not show any windows anymore. Additionally, you will have to implement notifications for the uvc source and all other UI settings that you want to automate. This is a bit of work, but the base infrastructure is there to build upon.
@papr Ok, let me make sure I have understood (I am quite the newbie). So you suggest I modify Pupil Capture's source code, in particular the code related to the eye processes? Plus you suggest I implement the mechanisms myself so that the video capture module can receive notifications? What does Pupil Service have to do with this? Thanks for your help @papr, much appreciated!
@user-9d900a Do you use the Pupil Headset or the hmd add-ons?
Pupil Service is a version of Pupil Capture that does not have a world window and is dedicated to the hmd integrations.
Yes, you will have to modify the eye process in both cases. If you use the headset you will have to modify Pupil Capture to remove the world window ui.
Pupil Capture is plugin based. Each plugin receives all notifications. Therefore you just need to extend the set of notifications that the uvc source accepts.
The eye process does not have such a plugin structure. Therefore it is a bit trickier to implement all notifications for it, but I think it is doable after reading its source code.
@papr Ok awesome. I currently use the headset. I'll look into how to modify the source code then.. starting by trying to install all the dependencies.. I actually have the choice to either work on mac or on windows.. Is the mac platform easier to work with?
@user-9d900a Installing the Windows dependencies is pure pain. I highly recommend macOS if you have the choice.
@papr Great! I'll get into that then! Wish me luck 🙂
Cool! Post your questions here if you need any pointers. 🙂
Hi, thanks for the previous help around accuracy! We are still having some problems with our calibration, as it seems to fall apart halfway through our recordings. A couple of more specific questions: Is there a way to break down the calibration point by point? That is, if there are calibration points recorded that are not suitable or not detected, is there a way to remove them? In some of the data we collected, the online calibration recording is separate from the rest of the recording (i.e. they are two separate recordings). Is there any way to combine these, or will we have to discard our calibration and use natural features on the second video?
Does anyone have issues with the Pupil software tracking the pupil?
We lose the pupil periodically
In the algorithm mode, we have an issue with the blue box not recognizing the whole eye.
A workaround is to reduce the eye camera resolution.
And to increase the Pupil max value.
So we reduce the resolution in the software right?
Exactly, in the eye window.
Thanks!
Hi! I have 2 questions about a study we are considering about "tracking the eyes of someone who is meditating (with open eyes)". We will focus particularly on pupil size and blinking, I think. 1) We will have the meditator meditating with a reduced amount of light. Do you see any issue with the reduced quantity of light? (I will try, but maybe someone has suggestions anyway)
2) We would like to show the user a visual representation of the eye in real time to see if/how this influences the meditation status. Not the image of the eye from the camera, but a visual representation (an image created based on the current pupil size). What is the best way of doing this? I can imagine (a) doing a separate program accessing data from the IPC backbone https://docs.pupil-labs.com/#the-ipc-backbone or (b) creating a separate plugin (maybe starting from another plugin that already visualizes pupil data in real time)? Are there other ways? What are their relative advantages and disadvantages? Thanks!
Hi, short question: Is the pupil hardware, especially the HMD integration, CE certified? 🙂
> @user-2798d6 We do not have a release date yet. If you need the new fixes I would recommend running Pupil from source. The macOS install instructions are fairly straightforward. There are far fewer possible hiccups than on Windows or Linux.
I'm sorry to be out of the lop, but what do you mean by running from source?
loop*
@user-2798d6 Running from source means to install the dependencies, download the source and to execute the program via the command line without having to create an application bundle.
@user-d60420 1. The eye cameras are sensitive to IR light and carry their own active IR emitters. This means that darkness is not a problem. You will have to adapt some pupil detection parameters though, since the pupils will be very dilated.
2. I would recommend the IPC for your usecase. Just subscribe to the `pupil` and `blink` topics. The pupil data includes the diameter and the blink data tells you about blinks.
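For reference, a minimal sketch of such a subscription over the IPC backbone, assuming Pupil Remote listens on its default port 50020 and that `pyzmq` and `msgpack` are installed (the `raw=False` argument requires msgpack >= 0.5):

```python
import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

# subscribe to the pupil and blink topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:{}'.format(sub_port))
subscriber.setsockopt_string(zmq.SUBSCRIBE, 'pupil')
subscriber.setsockopt_string(zmq.SUBSCRIBE, 'blink')

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    # pupil datums carry e.g. a 'diameter' field; blink datums describe blink events
    print(topic, datum)
```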
Hello again - I am having trouble exporting video from Player. I've tried two different computers just to see if it was a processing speed issue or a specific computer issue, but both are struggling. I am able to export raw data (at least I think I'm getting all of the files) but no video. Do you have any suggestions? It's a fairly long video - about 15 minutes.
Hello, I'd like to isolate some performance issues, so I'm trying to build pupil from source. I got as far as running `python setup.py build` in `pupil\pupil_src\shared_modules\pupil_detectors`, but I'm met with the linker error `cannot open file 'boost_python3-vc140-mt-1_65_1.lib'`.
I have the vc141-mt-1_65_1.lib version of the file but not the vc140 version. Do you happen to know why it is demanding the MSVC 2015 version of the lib instead of the 2017 version?
Is 2017 required? I can go back and redo all the steps with MSVC 2015. If 2015 works then it might be possible to skip some steps.
umm... nevermind? I had forgotten to copy the opencv_world331.dll into pupil_external. It doesn't make sense to me, but after copying a missing opencv dll I can now get past that boost lib error and it seems to build successfully.
By the way, with the latest boost, it is no longer necessary to edit `boost\python\detail\config.hpp`
One more error before I go home:
D:\Tools\VR\Pupil\pupil\pupil_src>python main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "D:\Tools\VR\Pupil\pupil\pupil_src\launchables\world.py", line 113, in world
import pupil_detectors
File "D:\Tools\VR\Pupil\pupil\pupil_src\shared_modules\pupil_detectors\__init__.py", line 19, in <module>
from .detector_3d import Detector_3D
ImportError: DLL load failed: The specified module could not be found.
I'll check back in tomorrow.
@user-2798d6 Is this by any chance a recording of a Pupil Mobile stream?
@user-54a6a8 Thanks for the note on the boost config edit. It looks like the Pupil Detector was not built correctly. Can you delete the `build` directory in `pupil_detectors` and run `python setup.py build` within this folder? Maybe this gives some clues about what is going wrong.
@papr thanks!!!
@papr, when running `python setup.py build` in pupil_detectors, it does seem to terminate a little abruptly, but there are no fatal error messages I can see. The last object it tries to build is `EyeModel.cpp`. Here are some pieces of the output:
Creating library build\temp.win-amd64-3.6\Release\detector_3d.cp36-win_amd64.lib and object build\temp.win-amd64-3.6\Release\detector_3d.cp36-win_amd64.exp
Generating code
d:\tools\vr\pupil\ceres-windows\glog\src\logging.cc(2025) : warning C4722: 'google::LogMessageFatal::~LogMessageFatal': destructor never returns, potential memory leak
Finished generating code
EyeModel.obj : warning LNK4049: locally defined symbol [email removed] (public: __cdecl google::LogMessageFatal::LogMessageFatal(char const *,int,struct google::CheckOpString const &)) imported
[... several more warnings like this ...]
EyeModel.obj : warning LNK4049: locally defined symbol [email removed] (public: class std::basic_ostream<char,struct std::char_traits<char> > * __cdecl google::base::CheckOpMessageBuilder::ForVar2(void)) imported
And I have a pupil_src\shared_modules\pupil_detectors\build\lib.win-amd64-3.6\pupil_detectors\detector_3d.cp36-win_amd64.pyd
Hi all 🙂 I received my pupil headset a week ago and haven't succeeded in getting a correct calibration yet, so I'm posting my question here in hope of your help.
I have:
1) Made sure both eye cameras are in focus 2) The world camera is in focus 3) The green circle is about the size of my eyeballs 4) I don't move my head when calibrating 5) The minimum and maximum iris radius are correct...
Here is an example video that I exported in which you can also see both eye recordings: https://drive.google.com/file/d/1CbyoHwjFf5rMaUe-qceMrCq_qLUrqNfM/view?usp=sharing
The eyes are supposed to be on the index-fingernail.
As you can see, the midline is captured perfectly, but everything above and below the midline is not correct.
Do you know of anything that could explain this?
Are there any docs about what the pupil detection algorithm overlay or debug window is showing? I don't know if this is telling, but my 2D debug window shows up at the top left edge of my screen without a menu bar and I can't move it around; it's difficult to see from within Steam VR's desktop viewer. From what I can see, there's a circle in the upper left corner and it jitters about, but it mostly stays in the upper left corner.
@papr - the recording is not from pupil mobile. It's just a regular recording from Capture on a MacBook Air
Hi all, I've been attempting to install pupil from source on Ubuntu 16.04. I have some really complicated build issues that I cannot solve, so I created a docker container to make everything simpler. I have installed all the dependencies and pupil builds successfully, but I am receiving the error:
```
[email removed] python main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.1.63
world - [INFO] launchables.world: System Info: User: pupil, Platform: Linux, Machine: 04c3d52af60b, Release: 4.4.0-103-generic, Version: #126-Ubuntu SMP Mon Dec 4 16:23:28 UTC 2017
world - [INFO] pupil_detectors.build: Building extension modules...
world - [INFO] calibration_routines.optimization_calibration.build: Building extension modules...
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
world - [INFO] launchables.world: Session setting are from a different version of this app. I will not use those.
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/home/pupil/pupil/pupil_src/launchables/world.py", line 299, in world
    main_window = glfw.glfwCreateWindow(width, height, "Pupil Capture - World")
  File "/home/pupil/pupil/pupil_src/shared_modules/glfw.py", line 522, in glfwCreateWindow
    raise Exception("GLFW window failed to create.")
Exception: GLFW window failed to create.
world - [INFO] launchables.world: Process shutting down.
```
I even tried running with nvidia-docker to see if that would resolve the GLFW window issues. Can anyone offer any insight?
The command to create my container:
sudo nvidia-docker run -itd --runtime=nvidia --privileged -v /etc/localtime:/etc/localtime -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY --device /dev/snd --device /dev/video0 --name pupil nvidia/cuda
Does anyone know how to save "frame.img" in the events as a picture?
@user-6e1816 https://docs.opencv.org/3.0-beta/modules/imgcodecs/doc/reading_and_writing_images.html#imwrite This should work. frame.img is already a BGR array. So there should be no need for conversion.
@papr I thought frame.img was the picture from the world camera, but the result of it is None, so could you tell me the meaning of frame.img and the variable name of the world camera image?
@user-6e1816 Could you tell us the result of `type(frame)`?
@papr <class 'NoneType'>
@user-6e1816 How do you access the frame object? Through the `recent_events` plugin method? Or do you use `pyuvc` directly?
@papr
```python
def recent_events(self, events):
    frame = events.get('frame.img')
    print(type(frame))
```
In this case, if frame is `None` there is no new frame. Just wait for the next call of `recent_events` where frame is not `None`.
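As an aside, a sketch of how saving frames from inside a plugin could look, assuming the frame object is published under the events key `'frame'` (with `img` being an attribute of that object; as noted above, `frame.img` is already a BGR array). The class name and output path are purely illustrative:

```python
import cv2
from plugin import Plugin


class Frame_Saver(Plugin):  # hypothetical plugin name
    def recent_events(self, events):
        frame = events.get('frame')
        if frame is None:
            return  # no new world frame in this cycle
        # frame.img is already BGR, so imwrite needs no color conversion
        cv2.imwrite('/tmp/frame_{}.png'.format(frame.index), frame.img)
```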
Hi, I received the pupil headset but I still haven't succeeded with the calibration on macOS. With my Macbook, calibration in full display doesn't work at all. It only worked in the small window. How can I fix it?
Just in case this got lost: @papr - the recording is not from pupil mobile. It's just a regular recording from Capture on a MacBook Air
Also, I've transferred files over to a MacBook Pro and tried exporting from there and it doesn't work either.
@user-526669 Could you describe what you mean by `doesn't work at all`? Does the app freeze or crash? Is there no marker shown? Is the marker shown but not detected?
Hi everyone. I have a problem with the cameras: pupil capture cannot detect eye01 and starts in ghost mode, and I do not know why. Am I doing something wrong, or is there something wrong with my device?
Hi @papr, could you please take a look at this? I am really struggling to calibrate my pupil headset. I use the screen calibration.
I have:
1) Made sure both eye cameras are in focus 2) The world camera is in focus 3) The green circle is about the size of my eyeballs 4) I don't move my head when calibrating 5) The minimum and maximum iris radius are correct...
Here is an example video that I exported in which you can also see both eye recordings: https://drive.google.com/file/d/1CbyoHwjFf5rMaUe-qceMrCq_qLUrqNfM/view?usp=sharing
The eyes are supposed to be on the index-fingernail, but it's not doing that. Do you know of anything that could explain this?
When I write frame.img out as a picture, there is no red "gaze point" in it (first picture), so how can I save a picture (like the second one) with the red gaze point in real time?
Hi @user-6e1816 the frame.img is only the image frame. If you want to visualize the gaze position into the frame, then you will need to write the gaze position into the image array.
The world view (second screenshot that you posted) shows a gaze position overlaid on the image. This visualization does not manipulate the pixels in the frame, it is an OpenGL point on top of an OpenGL texture.
@wrp Thanks, I see.
@user-6e1816 you can look at Pupil Player to see how this is done. When we export a video with Pupil Player, the `vis_*` plugins can modify the pixels of the frame to render the gaze visualization into the video frame.
See: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L553-L570
And see `vis_circle.py` here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/vis_circle.py#L48-L50
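In that spirit, a rough sketch of rendering a gaze point directly into the frame pixels with OpenCV. The function name is illustrative, and the y-flip assumes Pupil's normalized coordinates with the origin at the bottom left:

```python
import cv2

def draw_gaze(img, gaze_norm_pos, radius=20, color=(0, 0, 255)):
    # denormalize: norm_pos is in [0, 1] with origin at the bottom left,
    # while image pixel coordinates have their origin at the top left
    height, width = img.shape[:2]
    x = int(gaze_norm_pos[0] * width)
    y = int((1.0 - gaze_norm_pos[1]) * height)
    cv2.circle(img, (x, y), radius, color, thickness=2)
    return img
```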
@papr no markers shown. only white display.
@user-0f3eb5 Were you using 3d or 2d mode? The pupil detection looks robust (from what I can see qualitatively from the eye videos). Have you tried re-calibrating offline?
@user-0f3eb5 if you'd like to share a quick sample dataset with eye videos you can email [email removed]
@user-6419ec this looks like a driver issue. You will likely need to manually uninstall drivers that are under the `Imaging Devices` category and re-install drivers with the `install-drivers.exe` within Pupil Capture
@user-526669 could you let us know some specs of your MacBook and OS version?
@user-526669 can you share the pupil_capture_settings/capture.log
file - you can also email this to [email removed]
@user-59f06b this seems like a GLFW and OpenGL issue that is affecting Linux, macOS, and Windows
it looks like there are quite a few fixes that are about to be released for GLFW v3.3 - (apparently v3.3 is 92% complete according to www.glfw.org)
`apt-get install libglfw3-dev` will install v3.1.2-3
@user-59f06b do you have your dockerfile somewhere online - maybe in a Gist so I can take a look?
@papr I send you the mail to [email removed] please check the mail.
@user-526669 Ok, thank you, I will have a look at it.
@wrp unfortunately it is not working. The only imaging devices listed are pupil cam1 ID0 and pupil cam1 ID2
Can you check show hidden devices?
And see if there are any devices installed in the imaging devices category
@user-6419ec
@wrp there are only pupil cam 1 ID0 and pupil cam 1 ID2 if I enable hidden devices
@user-7c4790 They are in the wrong category. Right click and remove the devices from the `Bildverarbeitungsgeraete` (Imaging Devices) category.
Delete these please
I deleted both and they are listed in libusbK USB Devices, but not ID1
@user-7c4790 Is ID0 listed twice?
Or is it possible that you own a monocular headset?
No, they are only listed once, and it worked binocularly in the past
Hello, I was wondering if the Pupil software allows mapping the fixations recorded with Pupil directly onto the stimuli that were shown on the screen. I searched the documentation and experimented with Pupil Player but didn't find an option for this
@user-3f894d You can add surface markers to your monitor, define the monitor as a surface and fixations will be mapped in relation to your monitor surface.
Use the (Offline) Surface Tracker plugin to define/detect the surface.
@papr Thanks. But the fixations will still be mapped to the video from the world camera, right?
Yes, the mapping to the surface happens additionally.
@papr OK, thanks for help
Hi everyone, I am working on a university project which aims at controlling the mouse of a PC with the gaze tracked by the Pupil Labs eye tracking system. We have already implemented the solution provided by Pupil Labs on GitHub, but now want to augment it a little further. Does anyone of you know how to extract the raw gaze data from Pupil Capture? I am talking about the kind of data that is responsible for the mouse movement. We don't really understand which data set from the 'msg' we are supposed to use for our task, and would be delighted if any of you could help us. Thanks in advance and greetings!
It's me again 🙂 Because of the missing pupil ID1: could it be anything concerning the hardware, or am I doing something wrong with the drivers? If it's a bigger thing I can also write an email if that is easier for you (in addition to the pupil hardware we also bought a support contract)
@user-7c4790 Please write an email to info@pupil-labs.com concerning this matter.
@user-131985 The raw gaze data (available by subscribing to `gaze` instead of `surface` in line 44) is relative to the world camera. But in your case you need gaze relative to your screen. You define a surface to find the relation between your screen and the world camera. Gaze is automatically mapped onto this surface (`gaze_on_srf`). Have a look at https://docs.pupil-labs.com/master/#development-overview for an overview of the data structures.
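For reference, a minimal sketch of reading surface-mapped gaze over the IPC, assuming Pupil Remote's default port 50020 and the `gaze_on_srf` field described above (the field names follow the surface datum format as I understand it):

```python
import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the SUB port, then subscribe to surface data
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:{}'.format(sub_port))
subscriber.setsockopt_string(zmq.SUBSCRIBE, 'surface')

while True:
    topic, payload = subscriber.recv_multipart()
    surface_datum = msgpack.unpackb(payload, raw=False)
    for gaze in surface_datum.get('gaze_on_srf', []):
        # norm_pos is relative to the surface: (0, 0) bottom left, (1, 1) top right
        print(gaze['norm_pos'])
```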
@papr Hi, how can I fix the calibration problem? how is it going?
Could I open eye camera 0 and eye camera 1 at the same time, but have the gaze point only move with eye camera 0 (gaze point mapped from eye0 only)? I only need eye camera 1 to provide "confidence" info.
@user-526669 You can disable the full screen mode in the calibration settings. Run the calibration in window mode or the manual marker calibration procedure as a workaround. We will be releasing a new version soon that improves the compatibility with Retina displays.
@papr But it also didn't work on my connected monitor. I tested on my Macbook and the connected monitor, but only small window calibration worked. Is it the same problem? Because of the retina display of the Macbook?
@user-526669 I am not sure. We were not able to reproduce this issue with the upcoming version. Please test it when it is released and report back if the problem is fixed for you.
Hello guys, is it possible to use the Pupil SDK with another camera such as an ordinary webcam instead of the eyewear ones sold on the website? I understand the tracking quality will suffer.
Hi @user-813280 please see https://docs.pupil-labs.com/master/#diy
you may also want to message/discuss with @user-41f1bf about alternate eye cameras for DIY setup
Thank you. I'm not really sure you can have code in LGPL but not let it be used for commercial purposes, might want to change the license?
@user-813280 perhaps there is a mis-understanding/mis-communication somewhere. The code can be used in commercial applications according to the LGPL v3 license
Maybe there is, I got the impression from this "If you are an individual planning on using Pupil exclusively for noncommercial purposes, and are not afraid of SMD soldering and hacking β then, buy the parts, modify the cameras, and assemble a Pupil DIY headset. "
@user-813280 hardware != code
the DIY headset is intended for individuals and non-commercial users
I see. This is derailing from my original question, but if the DIY kit uses an ordinary webcam and USB connection, what's stopping people from making their own 3d printed holder for the camera and LEDs and just using that?
(for commercial stuff)
Good point. Personal integrity/ethics.
Sales from hardware are what keep Pupil Labs afloat and enable Pupil Labs to continue providing free open source software.
So, yes, one could (ab)use our model. And there may be people out there doing so.
Exactly. I don't see much moral issue here to be honest and definitely no legal issue. Perhaps you should consider switching the license for newer versions? I know it sucks but I don't see an alternative solution here. You could also ask for donations and patrons but I don't know if that would help keep the project afloat. There's also some open source software conservation fund that finances such projects but I forgot the name, I'll try to find it.
(off topic: do you by any chance have a donation button already I could use?)
We do not operate via donation and do not have a donation button 🙂
We are pleased with software under LGPL license
well I can't afford a eyewear yet but would like to help. Anyway, these are just my thoughts.
@user-813280 send an email to info@pupil-labs.com and we can discuss further
OK
Anyone here tried running the SDK on a Tinker Board?
@user-813280 I guess to add to this: Our hardware is highly specialized and cannot just be sourced elsewhere. Eye tracking depends in equal parts on good hardware and good software.
OK
@user-813280 if you want to build your own DIY kit and you get our frame on Shapeways, a part of that cost goes towards the Pupil Project. I guess this is a way to support us, even if you can't afford the specialized hardware we sell in our store.
I see. My final project is this: an animatronic robot that looks directly at a person in front of it. There will need to be a camera on its forehead for me to see what it sees, then a Pupil eyewear for me to translate my exact eyeball angles to the animatronic. If I could get Pupil running on a Pi3 or Tinkerboard I could put it inside the head of the animatronic and all I would need would be cable coming out of the animatronic for eyewear, no PC needed. Animatronic is built by me. https://www.youtube.com/watch?v=P6RYITlRgTg Warning, kinda scary.
@raiori, one does violate the EULA of Pupil Labs when using Pupil DIY kits for commercial purposes. In commercial contexts, one also runs into legal issues when exposing people to non-regulated IR devices. Pupil software is not exclusive to Pupil cameras; for obvious reasons, that would not be nice. Even so, such exclusiveness would not violate the LGPL, because your freedom to modify the code, access it, share it, and so on, would not be violated.
You are free to build your own hardware other than the Pupil DIY kit and use it commercially with pupil software. It would not violate any terms at all. However, please remember that even though we are a community larger than Pupil Labs, Pupil Labs is the core of the community, and it would not be nice to leave them with empty hands. If you are on a limited budget, you can help in many ways other than money. @wrp already said that ethics is an important issue. I would like to add that we are social beings, and as such, the word can be spread... People will talk about one's work or lack of work in a social project.
Pupil Labs is doing a great work by promoting accessibility of eye tracking research, and so, contributing to education in this specialized field. I am a concrete example. Without them, my work would not be possible.
Thanks for your support and contributions @user-41f1bf
Hey all
Two quick questions
Do Pupil glasses fit kids sizes?
Ages 7-17
2) Does it work with glasses?
@user-33d9bc, I have been testing with contact lenses
And it works nicely
Some glasses are a little bit difficult to fit, but they should fit better with the new tiny 200hz cameras
Thank you for your help. Do you have any idea about the adjustability of the headset
They have custom headsets for children
So, I have to ask for
Special glasses
I can't use a general one
Do you want a solution for your chield?
Child*
I am running a research project in which I need to use the headset
With ages 7-17
And was wondering if the glasses will fit
The kids' head sizes
I understood you want to use correction glasses and a pupil headset at the same time, sorry about that
That is true as well
I had twi questions
*two
Ok.. So, for your first question, you could ask for a custom headset. And about using glasses, you will have better chances with the new tiny camera
What is different about the custom headset
The frame ?
Yes, the frame
Each person has a different head size. For small heads you increase your chances with a custom headset.
I would guess that 13-17 would not be a problem with the normal one. The custom one should fit 7-12
With the DIY version of the glasses do you get the same performance/accuracy
As the ready to go
Glasses
DIY requires time to be assembled and is less versatile. Unfortunately I have not compared the two concerning their accuracy/precision.
However, I have an intuition. The binocular system is more robust, and that is another story. But chances are that monocular systems are not so different.
Hi all, I have a few questions regarding the pupil mobile system.
How does the offline calibration work? Do you need to run through the manual marker calibration process at the beginning of the recording and then work through something like the natural features calibration with the video after the fact?
Context: I am looking to use eye tracking on train drivers looking at the different screens/controls/buttons in the cab environment as well as vision through the windscreen.
Also, the recordings will probably end up being quite long, around 2 hours end to end for each run. In terms of storage on the phone, the pupil bundle includes a 64GB SD card; how many hours of recording is this likely to hold?
@user-8a8051 regarding offline calibration you can use the marker or natural features. If you're using the marker, the marker needs to be present in the recording. See videos in this section of the docs within the offline calibration sub-section: https://docs.pupil-labs.com/master/#data-source-plugins
Markers will be automatically detected in the video with Pupil Player. If you're using natural features, then you will have to click on points in the scene.
@user-8a8051 re Pupil Mobile - our testing shows that you can achieve 4 hours of continuous recording with the 64GB SD card on Moto Z2 Play (with external battery pack). The limiting factor here is battery.
@user-33d9bc To add on to @user-41f1bf's response: we (Pupil Labs) also make kid sized frames. These are not listed on the website but are indeed available. The kid sized frames should accommodate 6-12 year old kids. After that the regular 'adult' sized headset should work.
@user-33d9bc with a Pupil Labs monocular system, you get a 120hz or 200hz eye camera. A higher frame rate provides more observations of the eye and therefore leads to more robust pupil detection. The 200hz eye camera has a global shutter, so motion blur artifacts are reduced, which can yield more robust pupil detection, especially with fast eye movements.
Great, can I purchase one glasses system
With the two franes
Frames
@user-813280 If I understand correctly, your animatronic robot only needs to receive gaze positions. The Pupil headset will be connected to a computer, and then the Pi3 or other board can connect to WiFi or ethernet and receive gaze and other data in real-time. (BTW - the video is great - thanks for sharing 😸 )
@user-33d9bc The cabling is integrated into the frames. Therefore what we could offer is two frames each with its own world camera, and then you could swap the eye cameras. (World camera is not designed to be swapped)
Do Pupil glasses work well
With people wearing glasses?
@user-33d9bc Pupil + prescription eye glasses - Many members in our community and developers on our team wear prescription lenses. You can put on the Pupil headset first, then eye glasses on top. You can adjust the eye cameras such that they capture the eye region from below the glasses frames. This may not be an ideal solution, but does work for many people.
That is awesome news. Sadly, I have already submitted my grant proposal to the university and got funding for one Pupil headset including the shipping cost. Which one do you advise going with (kids frame or adult), and what would be the added cost for your proposed solution?
@user-33d9bc I will send you a PM with details so we don't overload the group thread 🙂
Oh- sorry.
no problem 😸
@wrp thanks very much,
@user-8a8051 you're welcome.
@user-813280 you may be interested in checking out this project that uses Pupil to communicate with rpi to control prosthetic/robotic hand - with a very nice Readme and demo videos: https://github.com/jesseweisberg/pupil
One of the videos here: https://youtu.be/KYcfLEvbxSc
@wrp quick follow up question regarding pupil mobile and offline calibration, do you find much difference between the accuracy of offline vs screen marker or manual marker calibration?
Hi, I am trying to use the Unity plugin, but it doesn't seem to be recording the video. I am trying to use the demo scene. It looks like it made the folder but never manages to write out a file
@user-8a8051 accuracy should not be different - in fact you may be able to achieve higher accuracy with post-hoc calibrations in Pupil Player, because you can fine tune pupil detection and set calibration sections for different parts of the dataset if desired.
@user-a700b3 Please could you migrate your discussion to the 🥽 core-xr channel and ask @user-e04f56 for assistance/feedback.
@wrp interesting, i had not thought of that, thanks very much.
@Fin#6137 I can confirm that post-hoc calibration increases overall data quality. I would recommend two things... If you are using 3d pupil detection, make sure to ask people to roll their eyes during the recording and before calibration. This way you will have the conditions to fit a good 3d eye model and avoid bad models. Sometimes, 3d detection does not work. So I would recommend always using 9 points for calibration, or more, so you will have some alternatives if anything goes wrong.
Hi, I'm trying to DIY following the instructions. I'm wondering if I can use a camera equipped with an IR-LED as the eye camera instead of making one? Such as this one: http://us.dlink.com/products/connect/hd-wi-fi-camera-dcs936l/ If it is possible, that would make customization much easier!
Hi, is there a way to have multiple calibration points in one video rather than one long section?
*calibration ranges I mean
For example from frame 200-700 & also frames 1500-1700
@user-2798d6 you can make a long section and then use natural features.
@user-41f1bf thanks for the advice
hi guys, is the pupil docs website offline?
@user-0f3eb5 not for me. Can you try again?
thanks for checking, it's still not working for me and my internet is fine (other websites work). I'll try again later, thanks
it's working again for me 🙂
Hi! I have a question about blink data. I have enabled the "Blink Detection" plugin in Capture and recorded a short session. I have then enabled the "Raw data exporter" plugin in Player and exported. However, I cannot find any info about blinks in the csv files. Am I correct in saying that the "Blink Detection" plugin only triggers blink events (as written at https://docs.pupil-labs.com/#blink-detection ) but there is nothing at the moment listening for these events and writing them to a file? Or maybe there is an offline plugin for the player which computes blink onsets and offsets and can export them? If not, how would you suggest getting blink data? Thanks!
@user-943415 You are right that the blink events are currently not exported via the Raw Data Exporter, nor is there an offline blink detection plugin yet. There is an open issue (https://github.com/pupil-labs/pupil/issues/968) which I am currently working on: https://github.com/papr/pupil/tree/offline_blink_detection
Nonetheless, the online blink events are recorded in the `pupil_data` file. You can extract them manually by unpacking the file with msgpack and accessing the `blinks` array.
Yep, thanks! In fact I searched "blink" in this chat and got only my recent questions about blinking, but then I found that there is a "Relevant" label beside "Recent" in the search bar on the right. And there I found a comment by you saying how to do it with msgpack. Trying now, thanks! 🙂
Ok, got blink using load_object from https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py
```
pupil_data = load_object('pupil_data')

In [14]: len(pupil_data['blinks'])
Out[14]: 7
```
Thanks!!!
@wrp
"@user-0f3eb5 were you using 3d or 2d mode. The pupil detection looks robust (from what I can see qualitatively from the eye videos). Have you tried re-calibrating offline?"
I am using 3d mode. I have gone through all the steps several times with no luck. One thing that immediately catches my attention when I compare my world-cam images to the one in this video is that my recordings seem scewed (almost like a fish-eye lens) while the lines in this video seem perfectly straight: https://www.youtube.com/watch?v=PXo0k7WmGYs
Do you think that's related to the problem? Is there a way to achieve the non-skewed view?
@user-0f3eb5 this is a very old demo video - was using a different world camera at that time. The current high speed world camera comes with a wide angle lens (100deg approx) as well as a narrow angle lens (60 deg approx).
We received your email and will follow up there
thanks @wrp 🙂
Hi! I have a question. I'm on Ubuntu 16.04 and pupil capture didn't recognize my headset. After some research, I found this page https://groups.google.com/forum/#!topic/pupil-discuss/5mvfSs5841M and added my user to that group. It didn't work, but with permission issues on my mind I changed my user to root and pupil capture worked!
Is there a way to use pupil capture without being root? what should I do??
did I miss a step?
@user-f1d926 Sometimes it is necessary to reboot after adding the user to the plugdev group.
adding your user to plugdev has the purpose of not needing sudo.
Oh! I forgot that I needed to log out for group changes. Thanks, it was that (:
I got the pupil software working, now I want to develop. But I get an error. I think there is something wrong with the opencv setup.
```
MainProcess - [INFO] os_utils: Disabled idle sleep.
In file included from detector_2d.cpp:614:
In file included from /usr/local/opt/opencv/include/opencv2/core.hpp:52:
/usr/local/opt/opencv/include/opencv2/core/cvdef.h:428:14: fatal error: 'array' file not found
#include <array>
         ^~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/Users/jonathan/Projecten/pupilHR/pupil/pupil_src/launchables/world.py", line 113, in world
    import pupil_detectors
  File "/Users/jonathan/Projecten/pupilHR/pupil/pupil_src/shared_modules/pupil_detectors/__init__.py", line 16, in <module>
    build_cpp_extension()
  File "/Users/jonathan/Projecten/pupilHR/pupil/pupil_src/shared_modules/pupil_detectors/build.py", line 25, in build_cpp_extension
    ret = sp.check_output(build_cmd).decode(sys.stdout.encoding)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 336, in check_output
    **kwargs).stdout
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/Library/Frameworks/Python.framework/Versions/3.6/bin/python3', 'setup.py', 'install', '--install-lib=/Users/jonathan/Projecten/pupilHR/pupil/pupil_src/shared_modules']' returned non-zero exit status 1.
MainProcess - [INFO] os_utils: Re-enabled idle sleep.
```
@user-58cb0c what version of OpenCV are you using?
I used `brew install opencv3 --with-contrib --with-python3 --with-tbb --without-python` to install opencv
Is it possible to change the angle of the camera relative to the eye?
Yes it is
I can confirm that changing the lens angle is also possible. For wide angles use small ROIs.
Thank you. Here is what I am doing: glasses that are covered with an IR reflector. Invisible to the eye, they look like a mirror to the IR camera. The glasses are slightly tilted, and the cameras are tilted as well and positioned above the eyebrows. This allows capturing the eye exactly from the center without putting a camera right in front of the eye. I don't know if this will improve accuracy but I don't see why it won't.
@user-813280 you can use a "hot mirror" without negatively affecting the pupil detection algorithm; the Pupil Labs DK2 add-on does this. Just to note, hot mirror setups can be problematic in natural environments because the environmental light can overpower the IR illuminators and then you will have a degraded image (or no image). Hope this is helpful
Got it, thanks. Final question, is it okay that the mirror (glasses) have some curvature to them. Nothing extreme, similar to this. https://cdn6.bigcommerce.com/s-d8bzk61/images/stencil/1280x1280/products/2528/6970/Pyramex_Intruder_Safety_Glasses_with_Clear_Lens__16824.1474037730.jpg?c=2
@user-813280 a curved mirror is like a lens, if you use a mirror like this I believe you will get a very distorted image.
sure, but maybe the software can compensate for that since eyeballs are spheres and it can undistort based on that
(like it takes into account perspective/angle of eye/camera)
There is no undistortion algorithm for the eye camera.
Recorded video using pupil main, having trouble loading the saved data:
player - [INFO] launchables.player: Application Version: 1.1.63
player - [INFO] launchables.player: System Info: User: willem, Platform: Linux, Machine: willem-laptop, Release: 4.10.0-42-generic, Version: #46~16.04.1-Ubuntu SMP Mon Dec 4 15:57:59 UTC 2017
player - [INFO] camera_models: Previously recorded calibration found and loaded!
player - [INFO] launchables.player: Session setting are a different version of this app. I will not use those.
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
File "/home/willem/side/nystag/pupil/pupil_src/launchables/player.py", line 439, in player
p.recent_events(events)
File "/home/willem/side/nystag/pupil/pupil_src/shared_modules/vis_scan_path.py", line 57, in recent_events
gray_img = frame.gray
File "/home/willem/side/nystag/pupil/pupil_src/shared_modules/video_capture/file_backend.py", line 81, in gray
self._gray = np.frombuffer(self._av_frame.planes[0], np.uint8).reshape(self.height,self.width)
ValueError: total size of new array must be unchanged
player - [INFO] launchables.player: Process shutting down.
any ideas?
I want to use the raw data exporter plugin. Alternatively, if there are any utility functions or ways I can unpickle this data, then I would like to inspect it that way. Edit: use load_object() in file_methods.py
@user-59f06b can you share the video file? The traceback suggests the file cannot be read. Did you make changes to the code or is this running the current release/master?
"There is no undistortion algorithm for the eye camera." Then how do different camera lenses/FOVS and angles relative to the eye work? All these cause optical distortion to the frame.
@raiori#5658 the distortion introduced by your mirror will be different than when using a lens, in the sense that it is non-symmetrical. This could be a problem. I suggest just giving it a shot.
Right, but camera perspective (angle) distortion isn't symmetrical either. I can try, but if there's any developer here they can give a real answer. Testing only shows what apparently works, or what doesn't work, but not always the why.
hey guys! Does anyone have experience with pupil kit and HTC VIVE?
Hi @user-53a623 I responded in the 🥽 core-xr channel
Hi! I have a problem in the interface with the text for confidence and pupil diameter (see image, top left). They are black spots. Do you know why? I'm on Ubuntu. Thanks!
@user-5216fd this is a known openGL issue. We are working on it. Thanks for the report!
ok thanks! I have another one. Just today one eye camera is very strange, very grainy (not sure about the English word...) (see the image). Thanks!
have you tried restarting the service?
I tried restarting pupil capture a few times
@user-5216fd this looks like a hardware issue. What resolution and framerate are you using?
If this happens at VGA and QVGA we should do a repair replacement.
you can write to us at info[at]pupil-labs.com
resolution (1280, 720) / framerate 30
uhm, these are setting for the egocentric camera. were you referring to the eye cameras? where do I see resolution and framerate?
found it, of course it was in the relative eye window!
(320, 240) framerate 120
writing you an email now
hi! is there a problem with the Android app? I've downloaded the app on several phones and it doesn't recognize the pupil 😦
@paolo#0833 in this [email removed] shoot us an email to get this resolved!
@user-ec208e what phones did you try? What cable are you using to connect the headset to the phones?
@mpk I tried with an LG G5 and a Samsung S7. The cable is a USB-C to USB-C. Thanks!
Hi @user-dcd7f2 Thanks for the report. Currently Pupil Mobile is confirmed to run well on Moto Z2 Play, Nexus 5x, Nexus 6P, and One Plus 3. While Pupil Mobile + Pupil may work with other devices it depends on 3 main factors (1) USB Controller hardware quality in the Android device (not all USBC controllers are created equally or up to full USBC spec), (2) Android Version (3) USBC Cable.
Please also note that the USBC-USBC cable that comes with the device usually is designed for charging and possibly data transfer. Cables like this one - https://www.amazon.com/CHOETECH-Hi-speed-Devices-Including-ChromeBook/dp/B017W2RWB8/ref=sr_1_4?ie=UTF8&qid=1511416500&sr=8-4&keywords=CHOETECH+USB+C - have been proven to work.
I tried a Honor 9 once and it was working. I didn't do a lot of tests, just tried once, but everything seemed OK.
Thanks for sharing this report @user-5216fd π
🙂
Hi @wrp! The cable that I use is the same one that you mentioned. I tried with a Galaxy S8 and it worked! But I've another problem: when I record and I don't change the video location it says that it records to "default"; where is "default" located? I've been testing with the cellphone and I didn't find the recording that I did without setting the location. When I change the location I don't have any problem. Thanks in advance for your response 🙂 !
Hi everyone, I am trying to use a Raspberry Pi Zero with a NoIR camera to make a pupil headset. I couldn't find the listed IR emitters in stores. Is there anything special about them? Can I safely use a general 5mm IR LED of the same wavelength?
@user-ec208e if using local storage the files are saved in `/Movies/Pupil Mobile` - on a Nexus 5x this looks like: `/storage/emulated/0/Movies/Pupil Mobile`
To locate the files you can use a file browser app like `Amaze` (or other file browsing apps)
Thank you @wrp ! ☺️