@mpk do you know where I could get the plugin? I'm not sure it's possible to get Pupil and HoloLens to talk to each other yet. I was thinking of building a server for the HoloLens but was told that it wouldn't work very well.
@user-a6b05f mpk was referring to the HololensRelay plugin that is already being shipped in Capture if I am not mistaken
@papr A question regarding the new serialized objects: is it possible to have a "fake-serialized object" that I can use the same way as the legacy lists of dicts? I used to call several pupil functions (e.g. recalibrations) directly from Python. With the new serialized objects introduced in 1.8, I now need to use "Incremental_Legacy_Pupil_Data_Loader" to load my file. But with this, I cannot e.g. change pupil['gaze_positions'] and then apply the recalibration function on the changed object. So I probably need a fake serializer, because I do not want to write my data to disk every time I change something just so that the serializer can re-read it. Or alternatively, is there a way to easily switch between the "lists" and a serialized msgpack object?
I don't know anything about your new way to save data. All my recorded data is pre 1.8
@user-af87c8 I am a bit confused what the exact issue is. Your data is pre 1.8 but you want to load in the 1.8 format? Incremental_Legacy_Pupil_Data_Loader is only for conversion to the new format. You can still use load_object on the old format/pupil_data file.
Hello everyone, I was wondering if someone has done a DIY hardware setup for the VIVE Pro yet using Pupil?
@papr I'm using the monocular headset.
Could you tell me more about the angle data? Angle data for the 'ellipse' and 'projected sphere' are provided, but I'm not sure what 0 deg is based on. And could I get only the angle data without string searching?
@papr Thanks for your response! I was myself a bit confused, so hopefully this is clearer now: Say I want to call the calibration function of Pupil Labs. It expects the new data format from 1.8 (it tries to deserialize in gaze_producer). But I want to change my data before putting it into the calibration function (e.g. remove some portions, split it up, fix timing etc.). Now my data is in list format, but the function expects serialized data and throws an error. I want something that allows me to use lists in new versions (e.g. a list2serialize function).
Hey @papr , do you by any chance know when the new release will be out?
@user-ad8e2d Hey, unfortunately the release is being delayed and there is no release date yet.
What is it exactly that you need from the new release? Maybe I can find a work-around for you.
@user-af87c8 Try this before passing the values into the function.
import msgpack
gaze_serialized = [msgpack.packb(gaze, use_bin_type=True) for gaze in gaze_list]
@user-b70259 Then my guess is that the 'failed to init' call came from the eye1 process. Just close the second (gray) eye window.
The 3d pupil data is relative to the eye coordinate system. The data is msgpack encoded, so there should be no need for string searching.
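For illustration: once a message payload is unpacked, you can index the fields directly instead of searching strings. A minimal sketch (the packed datum here is just a stand-in for the real message bytes):
import msgpack

# stand-in for the raw bytes of a pupil message received over the network
payload = msgpack.packb({"topic": "pupil.0", "theta": 1.57, "phi": -0.35}, use_bin_type=True)

datum = msgpack.unpackb(payload, raw=False)  # on older msgpack versions use encoding='utf-8' instead
print(datum["theta"], datum["phi"])  # spherical eye-model angles, no string searching needed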
@papr thanks! I knew there had to be an "easy" way to become compatible. But I was lacking the msgpack voodoo for it 🙂
@user-af87c8 let me know if there are any other incompatibilities 🙂
@papr it is just subscribing to frame.world on the Frame Publisher on Windows
@user-ad8e2d Did you see the temporary fixed plugin? Were you able to get it working?
@papr No I didn't get it working. I couldn't find the file pupil_capture_settings, but there is a file called plugins in \torch\utils\trainer\plugins. Do I have to build it from source?
pupil_capture_settings is a folder and it should exist directly under your home directory. This folder includes another folder called plugin, in which the temporarily fixed version needs to be placed. The \torch directory is not related to our project.
@papr found it, thanks for your help 🙂
@papr I have included your file and the help message shows up in the capture UI, but I don't receive the frame.world when I subscribe to the frame publisher.
Could it have anything to do with the recent_events function, as it still says frame? (Just guessing, I have no clue.)
There should be two "Frame Publisher"s listed. Make sure to load the "Frame Publisher Fixed" plugin from the plugin manager
@papr That's it! Thanks for your help
Works great!
Nice! Happy to help 🙂
How do I check the accuracy of the tracking? Does the pupil labs platform have any plugins or scripts for this type of verification?
@user-3f0708 check out the accuracy visualizer. It is loaded by default
ok
Hi, I am trying to build boost.python, but I got an error when running bootstrap.bat. The error is "'cl' is not recognized as an internal or external command, operable program or batch file." Does anybody know a solution?
@user-648a71 am I right that you are trying to run from source on windows?
Guys, can anyone help me make my own JST to USB cable for the Pupil cam? I'm hoping it's the standard USB pinout, but sometimes it's different.
@user-6aef01 does this video help? https://youtu.be/nrWDFt_sLDs
I don't think the JST connector to USB pinout is universal among all USB cameras. So I want Pupil-Labs to verify the pinout.
aah. i'll test some things out sometime this week & get back to ya with an update
better ask the devs; if you mix up the pins you can damage the PCB
Has anyone answered at which angle the Pupil circuitry needs to be placed in an Oculus Go to [email removed] ?
Also, does the distance of the Pupils matter when placed within a VR hmd ? Or does the calibration take that into account & adjust accordingly ?
@papr Yes, I am trying to run the source code from Windows.
@user-648a71 may I ask for the particular reason? Running from source is very difficult to get right and is often not required. I am sure that I can find an alternative solution for you.
@papr Thank you! I shall try this out
@user-4b5226 That video is unrelated by the way.
@mpk @papr What is the JST connector pinout for the pupil cam?
@user-6aef01
From top to bottom: Ground, D+, D-, 5V
Hey @papr , I am using C++ on Windows, and when I get gaze.3d.01. data I unpack it into a msgpack::object, then use cout to get this: {"topic":"gaze.3d.01.","eye_centers_3d":{0:[18.8821,13.5578,-25.179],1:[-38.8884,13.211,-24.1582]}... Then when I go to put it into a string, it seems to have a problem with the 0 after "eye_centers_3d" and believes it is the null terminator. As a solution I use stringstream and put "" around the 0. Just wondering if you could put the "" around the 0 before you send it, or if you know a better way to do this, as my solution is not that great. Thanks, Jack
@user-ad8e2d It is totally legal to use numbers as keys in msgpack-key-value-mappings and unpacking is working correctly as well. Therefore I suspect the issue to be specific to how msgpack::object converts the object into a string representation.
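A quick roundtrip in Python confirms the integer keys are valid msgpack, so the fix indeed belongs on the C++ stringification side; a sketch:
import msgpack

packed = msgpack.packb({"eye_centers_3d": {0: [18.88, 13.56, -25.18], 1: [-38.89, 13.21, -24.16]}}, use_bin_type=True)
print(msgpack.unpackb(packed, raw=False))
# {'eye_centers_3d': {0: [18.88, 13.56, -25.18], 1: [-38.89, 13.21, -24.16]}}  # integer keys survive intact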
That's great! I'll look into that thank you
@papr I thought it was a hardware problem, so I checked the cable, and Pupil Capture works well now. Then, after calibration, I got two angles, 'theta' and 'phi'. The eye coordinate system is a 3D polar coordinate system, right? The angle I need is theta, so my problem is solved. Thank you so much!!
Hey, could anyone give me a hint on how they parse the gaze data in C++? The way I am doing it is not great, and I would love it if someone could help. 🙂
Good afternoon. Please tell me what this data means: base data?
I'm worried that I'm not analyzing them correctly ...
The base_data field includes the data on which the current datum is based. E.g. two pupil datums can be combined and mapped to one binocular gaze datum. That gaze datum's base data would include the two pupil datums.
Can you advise where to find information, how to use this data correctly?
or examples
please...
What do you mean by "use [...] correctly"? Correctly in which use case?
In the case of analyzing data obtained during a study with the Pupil equipment. I analyze fixations, saccades, study time, the diameter of the pupils... and the question arose of how to use "base data"
This is the process chain: pupil -> gaze -> fixations. base data is just used to go back within this chain: e.g. you have one fixation and want to know which pupil positions belong to it.
base data is not new data but a reference to existing data. There is no "this is how you analyze base_data" because it is context-dependent data.
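As an illustrative sketch of walking back one step of that chain (assumes a pre-1.8 recording, that pupil_src/shared_modules is importable, and a hypothetical recording path):
from file_methods import load_object  # from pupil_src/shared_modules

pupil_data = load_object("recording/pupil_data")  # hypothetical path to a pre-1.8 recording
for gaze in pupil_data["gaze_positions"]:
    for pupil_datum in gaze["base_data"]:  # the pupil datum(s) this gaze datum was mapped from
        print(pupil_datum["timestamp"], pupil_datum["confidence"])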
Thank you very much! I got myself confused. Tomorrow is a very important study, so here I am preparing...
Can you also tell me which data I can use?
My research will be in the store
Please see my personal message to you.
Ok
help needed for manual marker calibration... please
Hi @user-bf07d4 please let us know what steps you have taken for manual marker calibration so that we can provide you with concrete feedback.
Hi, I want to get the angle between the binocular gaze vectors at the point of convergence (where I am looking). I used the 3d calibration mode and computed scipy.spatial.distance.cosine(s0_normal, s1_normal); I think it is a radian value, so I converted it to an angle using np.arccos(result). Is this the right approach to get the angle between the binocular vectors? @papr
@user-799634 that data is already in the gaze datum. Just look at gaze_point_3d.
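Side note on the arccos approach above: scipy's cosine() returns 1 - cos(angle) rather than an angle, so the vergence angle is better computed from the dot product directly. A sketch, with datum being a received binocular 3d gaze message (key layout may vary by version):
import numpy as np

s0 = np.asarray(datum["gaze_normals_3d"][0])  # one eye's gaze normal
s1 = np.asarray(datum["gaze_normals_3d"][1])  # the other eye's gaze normal
cos_angle = np.dot(s0, s1) / (np.linalg.norm(s0) * np.linalg.norm(s1))
angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # clip guards against rounding errors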
My basic purpose is to get gaze distance, but when I get data using gaze_point_3d_z, that data is not similar to the actual distance I am looking at. In my test, when looking at 500mm, the z value shows approximately 60~80mm. I just printed ['gaze_point_3d'][2] @mpk
@user-799634 yes, the value is not a perfect ruler. You would have to add an additional depth calibration if you need this kind of accuracy. Estimating true depth through vergence can be tricky. Maybe try adding different depth levels during your calibration?
I do not know that part. Actually, I'm a beginner in Python, so could you please advise me a little on how to add different depth levels during calibration @mpk ?
@user-799634 you do not need to be a programmer at all to try different calibration depths. Just position the marker at different depths during the calibration. This is easiest done using a printed marker and using the manual marker calibration procedure.
Hi! What's the minimum CPU power required on Linux | Windows | macOS?
@user-4b5226 depends on the setup. 200 Hz? Binocular? Marker tracking required?
we have Pupil running on an Intel m at 1.2 GHz, but you can also max out an i5 if you want to run with all features active.
I'm deleting this message because I find that I did not install the driver properly. I don't understand installation instructions. I should have contacted my peers who had experience with this sort of thing instead of writing here. I must apologize.
@user-bb47e7 let's try to get it working from bundle (not from source) first
If all cams are blocked it does mean that the drivers were not installed correctly
Please restart capture with administrator rights
Eye cameras working fine. Problem is only with world camera
@user-bb47e7 please write an email to info@pupil-labs.com mentioning that you are having problems getting your realsense camera to work
OK. Thank you for replies.
Hi, I have a question about how Pupil Labs' software works. How can a single eye camera know the 3d transformation of our eye? The software gives us eye center, eye diameter, pupil location etc. How on earth could it figure all of that out just from the image of the eye?
@user-7a9710 The 3d model is based on this paper: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
What's the image sensor in the 200Hz Pupil cam? I'm familiar with the IC and can generate custom firmwares for experimenting with various exposures and modes if I know the sensor as well. I know this voids any warranty but I have enough funding for my research to afford that.
@user-6fba51 Please write an email to ppr@pupil-labs.com concerning this issue.
What do you guys use to measure your camera's latency?
And is it okay that my Pupil Cam resolution is set to 192x192 in Capture rather than 200x200?
@user-fdb87b yes, this is OK. In regard to the camera latency: we have access to two different timestamps: 1. start of exposure, 2. frame availability in software
The difference between these two timestamps is our latency
How do you know start of exposure?
it is provided in the frame header data. libuvc exposes this data to the user
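With pyuvc, a rough per-frame measurement could look like this sketch (attribute names per the pupil-labs pyuvc fork; treat them as assumptions):
import uvc

dev = uvc.device_list()[0]        # first attached UVC camera
cap = uvc.Capture(dev["uid"])
frame = cap.get_frame_robust()
# frame.timestamp is derived from the start-of-exposure hardware timestamp
latency = uvc.get_time_monotonic() - frame.timestamp
print("latency: {:.1f} ms".format(latency * 1000))
cap.close()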
hm, cool. Any way I can add the extra 8 pixels to be used?
Sorry, but I am not sure what you mean by that.
I mean can I use 200x200 frames?
No, I think there are only 192x192 and 400x400 frame sizes available
OK. Maybe you should update the webpages that say frames at 200 Hz are 200x200.
I will forward your feedback. 🙂 Thank you very much!
have a good day
You too!
Does Pupil code use DShow, MSMF or WinRT to access the cameras?
@user-a3341c we use libuvc.
I guess I need to check what that uses internally. I currently use DShow in my code to list cameras by both their names and indexes to choose which camera to use, and I want to integrate the Pupil Labs code with mine. I'll need to modify either my code or the Pupil code, not sure which. If they both used the same frontend (and therefore the same indexes) things would be easier.
@user-a3341c we also use the libusbk drivers on Windows, FYI.
Isn't libusbk just a general interface? Don't you still need a library for accessing UVC cameras specifically?
@user-a3341c yes, this is where libuvc comes in
(if I understood it correctly)
well, I guess I need to modify my code then to use libuvc as well; otherwise I have to modify the Pupil library to not use libuvc, which can lead to many issues
@user-a3341c yes, I recommend that as well
Does libuvc provide methods to list available cameras?
found it, uvc_get_device_list
@user-a3341c be aware that this fn also lists devices that are in use
I think a simple try/except around uvc_open() would do the trick in figuring out which are available
Yes, I think so too
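In pyuvc that could look roughly like this (a sketch; the exact exception type raised for busy devices is an assumption):
import uvc

for dev in uvc.device_list():
    try:
        cap = uvc.Capture(dev["uid"])  # fails if the device is in use or inaccessible
        print("available:", dev["name"])
        cap.close()
    except Exception:
        print("in use / unavailable:", dev["name"])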
okay, thanks for the help
No problem! Let me know if you have any other questions :+1:
um, I actually forgot one. Any idea why a libusbk Pupil cam isn't recognized by any other USB camera program?
I thought the driver shouldn't matter
I don't know any details about that, but my guess would be that other programs look for specific driver categories and libusbk is not one of them.
ok
any suggested camera software to check the Pupil cam without launching the Pupil programs?
You can use pyuvc and write a small test utility if you installed the source dependencies.
This is the official pyuvc example: https://github.com/pupil-labs/pyuvc/blob/master/example.py
neat. thanks
here's a less important question: why do DIY cams also need to be reflashed with a libusbk driver? Looking into libuvc, that doesn't seem to be a requirement
Unfortunately, I do not know that. @user-0f7b55 do you have insight on that?
Reflashing? If you mean modification of firmware, no, this is not required at all
As long as the camera is UVC compliant
I mean swapping the driver from libusb? to libusbk. Dunno if that changes the firmware on the cameras themselves. Why is that required? The moment someone does that, their camera won't work with any other program.
We use libuvc as a cross platform way to list and access cameras. AFAIK it is not possible to list and access devices from other driver categories other than libusbk on Windows. Therefore we need you to install the libusbk drivers for the DIY cams as well.
Installing the drivers does not flash the firmware. "Flashing drivers" is a somewhat confusing term. I suggest we use "flash firmware" and "install drivers" as two separate terms to avoid confusion.
"AFAIK it is not possible to list and access devices from other driver categories other than libusbk on Windows."
Not really true, let me find some code
here: https://pastebin.com/aUv51bfD .This uses Qt to list all cam names and their indexes on Windows an Linux, but if a Python wrapper was generated for MSMF we maybe wouldn't even need the Qt dependency. Still better than making webcams not work with other programs, imo.
You are right. I need to specify my statement: "AFAIK it is not possible to list and access devices from other driver categories other than libusbk on Windows [using libuvc]."
And yes, we could start doing one of the following things:
I don't know if it would provide the features of libuvc. Like libuvc can access info such as start of exposure time? Didn't know cameras provide that info at all.
Yes, that would be an other issue to consider.
So summarizing: We sacrifice non-libuvc cam access on Windows in order to get code maintainability and all camera features
The Pupil cam, though, worked for me without requiring me to install libusbk drivers. How is this possible?
Pupil capture takes care of driver install on windows
You mean in the background when connecting pupil cam? Because I didn't install Pupil Capture, just unzipped.
when you run pupil capture the first time it will replace the driver with libusbk
ok. So this is only a Windows thing?
yes
libusbK is strictly Windows
ok thank you
sure
Looking to use pupil with a portable, small form factor computer (like raspberry pi/beaglebone). Is there any such computer hardware suggested for pupil? (Also, is that at all likely to work and have enough processing power?)
Hello, I'm getting this error when using my DIY camera as eye camera.
This is the log that popped up in the command line window:
eye0 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
eye0 - [INFO] video_capture.uvc_backend: Found device. SIT USB2.0 Camera.
eye0 - [ERROR] uvc: Could not init 'Roll absolute control'! Error: Error: Pipe error.
eye0 - [INFO] video_capture.uvc_backend: Hardware timestamps not supported for SIT USB2.0 Camera. Using software timestamps.
eye0 - [INFO] camera_models: No user calibration found for camera SIT USB2.0 Camera at resolution (320, 240)
eye0 - [INFO] camera_models: No pre-recorded calibration available
eye0 - [WARNING] camera_models: Loading dummy calibration
eye0 - [WARNING] uvc: Could not set Value. 'Absolute Exposure Time'.
eye0 - [WARNING] uvc: Could not set Value. 'White Balance temperature'.
Any ideas?
Hi, I am performing offline calibration on, and exporting, a large number of eye tracking files that range from 20-35 minutes in length. I am having an issue where it takes around an hour and a half to process each video, and often my computer freezes. How powerful does a computer need to be to process these files, and is there an OS that you would recommend? Thanks, Sara
hey @user-9945c8 Which version of Player do you use and which OS did you try?
I am using 1.7 on Windows and Mac
Please upgrade to v1.8. We made major improvements in regards to memory requirements in this version.
Yes, I just did, and processed a video, and it is quite a bit faster/didn't freeze. Thanks, that is a huge step in a better direction! About the OS: how is processing time on Linux compared to the others?
I don't think that the OS matters much in this case.
Also, I would like to get the processing/export time for each video down to about 30 min. Would I be able to achieve this with a regular desktop computer?
This is difficult to say since export time depends on multiple things like exact recording length and if audio export is active.
Ok. So going without audio brings the time down significantly then? It would be helpful for us if we could see a layout of which factors contribute to processing time and by how much, as we are working backwards to try to figure this out now.
I also have an unrelated data collection question. If we have already calibrated, can we adjust the angle of the world cam without affecting the calibration in-session? I know we can't move the eye cam, just not sure about the world cam.
Unfortunately, we do not have any exact numbers on these factors. The biggest factor is the video encoding itself. You will not get around that. Audio export is definitely a factor as well. Audio is automatically exported if an audio.mp4 file was found. Exporting raw data etc. does not consume much time.
You should not change any of the camera relations (incl. world cam) after calibration.
Ok, thanks for your help! Might follow up soon with some questions about offline processing but that gives me a good start.
Sure thing 🙂
Hi
Hello?
@papr hello?
I have a question about Pupil Labs
About a Pupil Labs function
go ahead with your question @user-4580c3
Can you measure the slight tremors of the eye that occur when you gaze at one place, and the movements of the eye that occur to keep the object in clear view?
@user-4580c3 these are referred to as micro-saccades (if I am not mistaken). We do not have a saccade classifier built into Pupil software; however, we do have a fixation classifier. You might want to look at the fixation classifier plugin and the supporting gaze data that is used to classify a fixation, and then calculate movements based on the associated gaze data for each fixation.
@wrp thank u :) Let me ask you one more question. Does it include fixation, saccade, gaze pursuit, and gaze path? And can you output data with that function?
Hey everyone! Sorry to bother you @papr but I have some questions about running Pupil in a Docker container. I've downloaded the Docker files from https://github.com/pupil-labs/pupil-docker-ubuntu; the build goes fine, and after it's built I log inside the container using the commands in that repo. When I start main.py, though, everything fails with the error:
world - [INFO] pupil_detectors.build: Building extension modules...
world - [INFO] calibration_routines.optimization_calibration.build: Building extension modules...
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
world - [INFO] launchables.world: Session setting are from a different version of this app. I will not use those.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/root/pupil/pupil_src/launchables/world.py", line 317, in world
    main_window = glfw.glfwCreateWindow(width, height, "Pupil Capture - World")
  File "/root/pupil/pupil_src/shared_modules/glfw.py", line 522, in glfwCreateWindow
    raise Exception("GLFW window failed to create.")
Exception: GLFW window failed to create.
Actually, I do not know if this is supposed to work. The docker image is mostly meant for building the bundle. @wrp do you have further insight?
@user-5787a9 you can "run" Pupil in Docker, but you will not be able to access sensor streams from within a docker container
The objective of the docker container was for building Pupil apps within a container/reproducible env
Aww too bad. Thanks!!
@user-5787a9 you can use the bundle though if you are looking for an easy way to run Pupil.
Yep, I was trying to build it from source cause I wanted to add a custom plugin
But I had some problems with dependencies (damn ros)
@user-5787a9 you do not need to run from source if you want to add a custom plugin.
'kay, I've followed the docs to install the dependencies and everything went fine. I also installed tensorflow as required for my plugin, then downloaded the Pupil source as written in the docs (https://docs.pupil-labs.com/#download-and-run-pupil-source-code). But when I run main.py I get:
world - [INFO] pupil_detectors.build: Building extension modules...
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ (repeated 10 times)
world - [INFO] calibration_routines.optimization_calibration.build: Building extension modules...
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/home/optolab/Desktop/pupil/pupil_src/launchables/world.py", line 134, in world
    from remote_recorder import Remote_Recorder
  File "/home/optolab/Desktop/pupil/pupil_src/shared_modules/remote_recorder.py", line 26
    sensor: ndsi.Sensor
    ^
SyntaxError: invalid syntax
world - [INFO] launchables.world: Process shutting down.
Which version of Python do you use @user-5787a9 ?
Please be aware that Pupil requires Python 3.5 or higher.
import sys
print(sys.version)
# 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]
(and of course I run python3 make.py)
ok let's try it in another way. I've downloaded the pupil apps from the website, latest version. I want to add this plugin: https://github.com/jesseweisberg/pupil, the object_detector_app specifically. I've read in the docs that I just need to create a folder like that: /home/pupil_capture_settings/plugins/object_detector_app
where basically i just copy the folder from the original repo by jesseweisberg to that plugins folder
I've also noticed that in the __init__.py of the object_detector_app there's a path to Jesse's system (/home/jesse/dev/pupil/pupil_src/shared_modules/object_detector_app), so I've changed it to the path linking to where the object_detector_app is on my machine
but when starting the pupil app the plugin is not in the list
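One thing to check: as far as I know, Capture only lists user plugins whose .py file in the plugins folder defines a Plugin subclass at module level. A minimal skeleton (class name and detection call are placeholders):
from plugin import Plugin


class Object_Detector(Plugin):
    """Hypothetical minimal plugin skeleton."""

    uniqueness = "by_class"

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is None:
            return
        # frame.img is the current scene image (a BGR numpy array);
        # run your detector on it here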
@papr If I change "sensor: ndsi.Sensor" to "sensor = ndsi.Sensor" the problem seems to go away on that line, but I need to comment out the ["sensor"] part. Same thing for the similar instances after that one. But another problem arises with msgpack: "msgpack.exceptions.ExtraData: unpack(b) received extra data." 😩
@user-5787a9 yeah, the sensor stuff is type annotations, but I thought they were compatible with 3.5
@user-5787a9 the other stuff might be related to the Capture version that you are running
When I ran the docker with python 3.6 those errors didn't show up. I'll try a fresh install with it next week to be sure of that... In the meantime I can try to port jesse object detector in a plugin fashion so it can be loaded by the normal app
Thank you for your help ☺ I'll update you as soon as I have news
Hello all! I've tried searching for this, but Discord crashes when I do, so I'll ask here. I'm trying to get more information on the Accuracy Visualizer. Do any of you have ways to improve accuracy with manual marker calibration?
Hello, guys
One other question - when I use fingertip calibration in Capture, my frame rate drops drastically. The scene is moving in slow motion. Is there a fix for that? Thank you!
@user-2798d6 how exactly are you performing the manual marker calibration? How many positions? What accuracy are you seeing? Have you tried the single marker calibration method using the manual marker (select this option from the drop down menu)?
@user-591137 Hi 🙂 welcome to Pupil community chat!
@user-2798d6 fingertip calibration frame rate will drop if you are using a machine without an NVIDIA GPU. If you click V to turn off visualization, the frame rate will speed up even on non-NVIDIA GPU machines. V visualizes the fingertip points within the world view. Clicking V in the world GUI will enable/disable the visualization. You may want to start with visualization on so that you can see that it is working, and then disable it when you calibrate if you want to speed up the frame rate, for example.
@wrp We are performing manual marker calibration with the marker printed on an 8.5 x 11 page. We are standing about 6 feet away from the participant and using about 9 positions (a square and then a center point).
Not sure if this is the place to ask this, but my left eye camera is not working correctly. It started by freezing a lot, and then today was flickering and cutting off entirely. I'm not well versed in working on electronics, so I'm hesitant to mess with it too much myself. Do you all have any suggestions for this? Thank you!
Hi, I'm having a problem with the Pupil Mobile app where the world camera recording goes wrong. It produces a file of, say, 500 MB, but the video says it is 00:00:00 long. This causes the Player to not be able to load the files. In effect, I lose that entire run. Is it possible to recover these files somehow?
@user-2798d6 please write an email to info@pupil-labs.com concerning the hardware issue
@user-072005 which version of Pupil Mobile did you use to make the recording?
@user-2798d6 also to clarify @wrp's statement: What lowers the frame rate is not the visualization itself but the processing of the neural network that does the fingertip detection. The NN processing is turned on if the calibration is active or if the visualization is enabled. Therefore you will probably experience frame drops during the calibration as well. This might result in a worse calibration result.
@papr when I posted, it was the version before the current one, but I went out yesterday after updating the app and tested again, and it happened again
Has anyone else reported this problem? I see it isn't on GitHub. I didn't encounter it earlier when I was collecting data in March, but it became a big problem when I went out again in June, and it continues. If no one else has this problem, could it be because I'm using a Samsung Galaxy S8?
@user-072005 @papr maybe this is related to my issue? https://github.com/pupil-labs/pupil/issues/1275 ... I'm not using Pupil Mobile, but perhaps something in the network pipeline is broken...? We use the Frame Publisher, which generates a high load, too.
this issue is driving us nuts
I'm not using Capture, and I don't see a timestamps file in the files that work, but I'm not the most savvy in this field. It does sound like a similar situation. The problem started at the same time as I started getting a bunch of folders that just have two files, key_00040000.data and key_00040000.time, in them. But this happens both when the video works and when it doesn't. Not sure if that could be related.
@user-072005 do you have time to test it with v1.6? I'm pretty sure our issue exists after 1.7 was introduced...
v1.6 of player?
or... mobile?
The issue is with the recorded files in my phone that I made with pupil mobile
ok, I see, nevermind π
Hey everyone, I am trying to change the gaze mapper to Dual_Monocular_Gaze_Mapper to get gaze data for each of the eyes separately. Is there an easy way to do that? And would it even be the right thing to do? At the end I would like to have gaze data on a surface for each eye separately. Will the export function in pupil player create a fitting gaze_positions.csv or gaze_positions_on_surface.csv with columns for each eye if I just change the gaze mapper? Thank you very much in advance, Andreas
@user-019256 Yes, this would work. You will need to adjust finish_calibration.py since this is the file that selects the Binocular_Mapper over the Dual_monocular_Mapper
What are the principles of using a new scene camera with the Pupil headset's Pupil cams? We have a USB out from our camera, but it is not being detected as a source in Capture.
Heya folks - could anyone be of assistance at this hour?
@user-6dcf68 hey, what's up? 🙂
@user-a08c80 on which operating system is that?
Howdy! Trying to get this installed on Windows 10 for class. https://github.com/pupil-labs/pupil/releases/tag/v1.8
Attempted to unzip. I installed python too. It's not running
@user-6dcf68 You do not need to install python explicitly
The zip files that you can download on the linked github page are bundled applications. They should run without having to install anything manually.
Just unzip the 7zip file and execute Capture.exe. The first time with admin rights to install drivers
sir. you are the greatest. I cannot possibly thank you enough. This app is great!
@wrp @papr Hi, for a class project I want to modify the offline fixation detector to be an offline saccade detector, using a basic saccade detection algorithm. Can someone explain/point to where the offline fixation detector reads the data from a previously recorded video, as when using the fixation detector in Pupil Player rather than Pupil Capture? I've gone over the source code in the fixation detector plugin but I'm not sure.
@user-a7d017 Hey, this sounds like a cool project! 🙂 The fixation_detector.py consists of multiple parts:
This is the detection function that the fixation detector calls in a background process: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L152
It takes gaze data as input and yields detected fixations. You should be able to modify it to yield saccades.
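e.g. a naive velocity-threshold generator in the same style (the threshold, units, and output fields are assumptions, not a validated algorithm):
import numpy as np

def detect_saccades(gaze_data, velocity_threshold=2.0):
    """Yield crude saccade events whenever gaze velocity exceeds the threshold.

    gaze_data: gaze datums with 'norm_pos' and 'timestamp', sorted by time.
    velocity_threshold: in normalized scene coordinates per second.
    """
    for prev, cur in zip(gaze_data, gaze_data[1:]):
        dt = cur["timestamp"] - prev["timestamp"]
        if dt <= 0:
            continue
        displacement = np.linalg.norm(np.subtract(cur["norm_pos"], prev["norm_pos"]))
        velocity = displacement / dt
        if velocity > velocity_threshold:
            yield {
                "topic": "saccade",
                "start_timestamp": prev["timestamp"],
                "duration": dt,
                "velocity": velocity,
            }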
@papr Hey, I've followed the installation instructions on the pupil-docs page; I'm running Ubuntu 16. The OpenCV that gets installed is 4, so running main.py fails because it tries to load opencv2/core.hpp. With OpenCV 4 the correct path is /usr/local/include/opencv4/opencv2, but if I change it to point there it fails again because core.hpp tries to include other files in the same way... I don't think it is a good idea to change every path in the local files, so is there a clean way to correctly load the opencv2 headers?
@user-5787a9 Check out an older OpenCV version after cloning the git repository
e.g. v3.2
I resolved it by making a copy of the entire opencv2 folder on top; now the headers are found
@papr ok, now the error is the following, I don't know what to do:
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.8.46
world - [INFO] launchables.world: System Info: User: optolab, Platform: Linux, Machine: stark, Release: 4.15.0-34-generic, Version: #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018
world - [INFO] pupil_detectors.build: Building extension modules...
world - [INFO] calibration_routines.optimization_calibration.build: Building extension modules...
world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "main.py", line 160, in log_loop
    topic,msg = sub.recv()
  File "/home/optolab/pupil/pupil_src/shared_modules/zmq_tools.py", line 99, in recv
    payload = self.deserialize_payload(remaining_frames)
  File "/home/optolab/pupil/pupil_src/shared_modules/zmq_tools.py", line 110, in deserialize_payload
    payload = serializer.loads(payload_serialized, encoding='utf-8')
  File "msgpack/_unpacker.pyx", line 208, in msgpack._unpacker.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.
the only difference is that I've installed tensorflow-gpu now
don't know if something broke because of that, a minute ago I was able to start everything
@papr we have access to both OSX and Windows 10. Currently mac seems to be working better
yeah, it seems that tensorflow breaks something in msgpack. If I uninstall it, everything works again; the problem is I need tensorflow...
Hi everyone, I'm trying to sync two Pupil Labs eye trackers together wirelessly. I purchased two mobile bundles and have the app running on both phones, but cannot get them to see each other. Is there a setting in the Pupil Service application that allows the simultaneous viewing of two phones?
@user-e267d1 What do you mean see each other?
@user-a08c80 On windows you need to manually install the libusbk drivers for your other camera
@user-5787a9 Are you sending anything to the ipc via pupil remote or something similar?
@papr I wish to view and record the output of two Pupil Mobile phones simultaneously. Pupil Capture (Mac) is configured in a way that only allows me to see the output of one phone (first person view and eye cam). The end goal is to hit record and have both phones capturing footage while synced.
hey, a friend told me about this server
and... he said that I could make my own eye thingy
How can I do that?
@user-591137 please see https://docs.pupil-labs.com/#diy
Oh, that looks pretty cool
Unfortunately, it looks like I must be smart to make that kind of stuff 😦
If you want to go DIY route, the link above provides instructions for building hardware. If you want to get an eye tracking headset (e.g. not build yourself) please see - https://pupil-labs.com/store and then download software from https://github.com/pupil-labs/pupil/releases/latest
@papr nope, nothing. I tried yesterday with version 1.5 of Pupil and it works with both msgpack and tensorflow installed. Today I'll try with version 1.7.
'kay, versions 1.8 and 1.7 of Pupil do not work, but from 1.6 and below the problem goes away. In my opinion it is just a call to a function that exists in both tensorflow and msgpack; downgrading both does not resolve the issue, unfortunately.
I received my Pupil Labs glasses, but I can't get them to work. I get a sound that the device is connected, but there is no response afterwards (i.e., no further installation of any driver). I tried both Apple and Windows machines, but to no avail. Anyone have an idea?
@user-81d8b5 what headset configuration are you using?
@user-81d8b5 What versions of macos and windows?
the binocular and high speed
using latest version of windows and macOS
Can you ensure that the usb c cable is firmly connected to the collar clip.
yes, I even tried different cables
In windows please run pupil capture as admin
The glasses just don't show up, not even in the device manager
It gives a connected sound, but there's no further action
the drivers don't get installed
Are you running the latest version of Pupil Capture and right click and running as admin?
yes i do
could it be that the usb clip is broken?
Ok, we will follow up via email to discuss further
which email?
@user-81d8b5 we've sent further instructions via our info@ email, thanks!
Hi, I'm trying to connect my Pupil Mobile headset to my Mac using the Pupil Labs software; however, when I view the stream, the headcam view keeps entering ghost mode (every ~3ish seconds). Any tips on how to prevent this?
I am also trying to use an external microphone connected to the mobile phones. Is there a setting so that it uses my external microphone and not the phone's internal mic?
Another question that we have is that when using two phones, we want to be able to start recording at the same time. We can do independent recordings, but would like a way to sync the output of both phones, either using timestamps or some other method. Does anyone have experience in this?
@user-e267d1 which version of Pupil Mobile and Capture are you using?
In Pupil, is it possible to obtain the coordinates of fixations for each frame during tracking, with the corresponding time?
@user-3f0708 yes, this information should be available after exporting
I will analyze the files after export
It might be possible that the exported data only includes start and stop frame indices 🤔 but I will have to look up the details
# exponential smoothing: each update moves 35% of the way toward the raw position
smooth_x += 0.35 * (raw_x - smooth_x)
smooth_y += 0.35 * (raw_y - smooth_y)
x = smooth_x
y = smooth_y
Will changing this 0.35 value have much impact on mouse movement during the execution of the mouse_controll.py script?
@user-3f0708 sorry, is this related to your previous question about fixations?
No
Hi all, could someone provide me with a list of psychological experiments using Pupil Labs equipment that investigated changes in pupil dilation in sustained attention tasks? Someone provided this a few months ago and I can't for the life of me find it again
@user-965faf do you mean our citation list? https://docs.google.com/spreadsheets/u/1/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/htmlview#gid=0
Thanks a lot @papr
hi
Can we make a heatmap video from a saved Pupil eye tracker file?
If yes, please tell me how
Hi again.
hi jahoda
Has anyone here managed to successfully integrate PsychoPy and Pupil? I've been looking at the old discussions about this at the start of the year, and have looked at the helper scripts, but I'm a bit lost. If anyone's made a psych experiment analysing PD with PsychoPy and Pupil, I'd appreciate a brief rundown of how you went about it.
please help me
hi all, just want to know if anyone here works with Pupil? I'm from Canada and want details about ordering one.
@user-19417c I think most people in this forum have Pupil headsets. Ordering is pretty straightforward, just go to the website and check out the store. If you want more info about the order process, just write an email to info@pupil-labs.com
Hi everyone, I just bought two Pupil Labs headsets and I have some serious issues when I run Capture on a MacBook Pro
Is this the right place to ask?
Hi @user-f99340 🙂 - Yes this is the correct place to ask questions.
Please could you provide us details about the headset configuration (e.g. high speed world camera + 200hz binocular eye cameras) as well as details about the macOS version you are using. Please also describe the behavior you are observing or the issue you are facing so that we can provide concrete feedback.
Ok
high speed world camera + 200hz binocular eye cameras
On MacBook pro
With High Sierra 10.13.6
Graphics: Intel HD Graphics 630 1536mb. I am mentioning cause might be related
Problem
When I start Capture everything looks all right
When I make the window Full Screen
Capture becomes all blue
If I minimize again it works again
When I press calibration
@user-f99340 thanks for the detailed description and screenshot
It goes all white in full screen
And then when I cancel calibration
All windows are messed up
@user-f99340 just to confirm, you are using the latest version of Pupil Capture - v1.8 - correct?
Yes
@user-f99340 I am not able to reproduce this issue on macOS; however, this could be related to graphics drivers and the Retina display. @papr please could you look into this behavior and make an issue based on @user-f99340's feedback.
@wrp Sure 🙂
@user-f99340 Just out of interest, how do you connect your headset to the laptop?
I tried to reproduce your issue -- my setup: - 13" MacBook Pro, early 2015, Intel Iris Graphics 6100 1536 MB, Retina Display - Bundled Pupil Capture, v1.8.26
Neither of the above issues (1. blue screen on fullscreen, 2. white screen during calibration) happens for me. I think this issue is a duplicate/similar to https://github.com/pupil-labs/pupil/issues/1214
anybody free for a chat?
Hey @user-db5f49 What's up? 🙂
we're in class
from Germany
Hi @user-db5f49 welcome to Pupil community chat π
Hey folks. I'm looking over some old data collected by a student, and see some high-frequency noise in more than one dataset. Any idea where this might have come from? Camera gain up too high, perhaps?
High frequency noise = black dots
@user-8779ef was this data recorded using the 200 Hz cameras?
@user-8779ef this might be related to this https://github.com/pupil-labs/pupil/issues/1116
Hello. I got my eyetracker a little while ago, but am just starting to use it now.
I got pupil_capture, etc. from the github repo, and everything seems to be working fine. Now I would like to do some programming: write a Python program to consume the pupil and gaze data being sent out, I presume, by pupil_capture. I've been looking through the documentation, but can't seem to find a simple working example or instructions to do this. (If I've missed it, I apologize.) Where are the Python libraries I need to install? Are there examples?
NB: I don't want to modify your application or library, I just want to consume data from it using a Python program. Thanks a lot!
This can be used to consume pupil data
Thank you.
Is there a higher-level API that would allow you to do things like
import eyetracker
tracker = eyetracker.Eyetracker()
No, unfortunately not
tracker.get_gaze()
The highest level api would be to use the network interface as in the example
You mean the filter_messages.py example?
Yes
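For reference, the core of that example boils down to something like this (assuming Capture's default Pupil Remote port 50020):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")  # or "pupil." for pupil data

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    print(topic, gaze["norm_pos"], gaze["confidence"])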
OK, I'll try. Thank you.
Hell all. So I have a question: is there any way to get the software to output an error coordinate, or just all 0s, when a frame is dropped? In data I'm working with from a lab, the data is shorter than the recorded task, and chunks of the data are gone; we lost 20 sec in one case (and it's a hand task that only takes 10 sec to do and then repeat). From what I understand, if the surface you denote is gone from the frame, or the eyes go somewhere else, that coordinate isn't recorded and it hops to the next. I need the data to cover the full time it's being recorded, even if cells have no data, so the times match up. Since we are trying to match the eye tracking with kinematics, 20 sec missing from the data shifts the eye data and then it won't line up with the kinematic movement data.
Hello not hell whoops
@user-61ae37 could you describe your setup a bit further? You mentioned that you use surfaces? Which data are you missing exactly?
Hi! I am trying to run Pupil from source on a MacBook Air, but get the following error: ModuleNotFoundError: No module named 'uvc'. I traced the error back to the installation of pyuvc, which shows "Failed building wheel for uvc" and later "ld: library not found for -luvc.0.0.7" along with "error: command 'clang' failed with exit status 1". I think the reason for this is that the installation of libuvc shows the following: "MACOSX_RPATH is not specified for the following targets: uvc".
How could I solve this? And is my assumption right that the whole problem starts at the installation of libuvc?
Thank you very much in advance!
@user-019256 Yes, please make sure that libuvc was installed correctly. The second step is to make sure that pyuvc can find the libuvc installation.
Thank you for the quick answer. Do you have any idea how I could fix my installation of libuvc? The error I get is:
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run "cmake --help-policy CMP0042" for policy details. Use the cmake_policy command to set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
  uvc
I would start by following the instructions and run cmake_policy
🙂
OK, I found the mistake. After installing libuvc again, the error "ld: library not found for -luvc.0.0.7" still came up when trying to install pyuvc. I noticed that only libuvc 0.0.8 was installed on my system, so I changed the line libs = ['turbojpeg', 'uvc.0.0.7'] to libs = ['turbojpeg', 'uvc.0.0.8'] in setup.py in the pyuvc folder. After that everything works. Could it be that the current version of libuvc is not suitable for the current version of pyuvc? Or is this just a problem on my system? Either way, it works now 🙂
@papr From what I can tell, we get full sets of eye coordinates dropped. Say the video shows a recording of 5:00 min; in the Excel export, the full time that is output is 4:30 min. The problem is I'm not sure which parts of the data drop. We do eye tracking during an upper extremity prosthetic task, which has two sides of the table where they have to pick up a disc and transport it. The world view camera, because of head rotation, will lose half the table at points. So what this set of PhDs did was create three surfaces in the AOI: the right side of the table (start), the left side of the table (end), and then the prosthetic. What I think happens is that if the head moves so the prosthetic is out of the world view, and the eyes go to that segment that isn't in the world view, the data doesn't populate and the export just cuts the output short. I'm not 100% sure.
@user-019256 Lib version naming is very tricky to get right on all platforms unfortunately.
Hey @user-61ae37 ! Could you specify in more detail what is missing in what file? Let me summarize what data you can expect to get:
If you are recording the scene with say 30 Hz and the eyes with 200 Hz, you should of course get multiple gaze estimates per scene frame. In the gaze_positions.csv file of your recording you should therefore always see multiple rows with the same index value. The index value identifies the scene video frame the gaze estimate belongs to.
For every gaze estimate that is made, the software tries to map it to all surfaces. The necessary requirement for this mapping is that the surface was detected in the corresponding world frame. All successful mappings are saved in surfaces/gaze_positions_on_surface_<surface_name>_<surface_id>.csv. If your surface is always visible, you should again see multiple rows with the same world_frame_idx. However, if your surface is ever not detected in the scene, no further rows will be added to this file until it reappears. Therefore there can be gaps between the world_frame_idx or timestamp values. If you subtract the set of world_frame_idx values from the set of index values, you get the indices of the frames where the surface detection was not successful.
The file surfaces/srf_positons__<surface_name>_<surface_id>.csv contains the detection results for the surface, i.e. there is exactly one row for each scene frame with a successful surface detection. The frame_idx value identifies the scene frame the detection belongs to.
Maybe this clears things up a bit? If not, please let me know what data exactly is missing in which files so we can trace the source of the problem!
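In pandas, the set subtraction described above could look like this sketch (file names as exported by Player; the export path and surface name here are hypothetical):
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
on_srf = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Table_0.csv")

all_frames = set(gaze["index"])
detected = set(on_srf["world_frame_idx"])
missing = sorted(all_frames - detected)  # scene frames without a successful surface detection
print(len(missing), "scene frames lack surface detection")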
Hi! I am using the pupil headset (200 Hz binocular) for research purposes and I would like to know if you have an EC declaration of conformity (necessary for ethical approvals..)
We have an EC declaration
@papr thanks for your quick answer! can I get this file somehow?
Please contact info@pupil-labs.com for that
@papr done! thank you
Hi, we are not sure about how to resolve this error.
Hey @user-b0c902 You should have gotten a pop-up asking for permissions. Did you deny it?
I think the easiest way would be to reinstall the app.
Hmm, not sure about the pop up. Never saw it before. Will check again and re-install it. Thanks!
Hi, I have a question regarding use of the mobile app. Is there a way for me to calibrate the headset via a connection to a computer, then take it to the field (far from wifi) and still use the calibration? Can I use the user_calibration_data file somehow with Pupil Player offline tracking?
@user-b4961f I recommend to use offline calibration in this case. Just record the calibration procedure on site.
and is there a way to validate it, before running a long recording session?
could I take an offline calibration from one recording and use it for a different recording with offline tracking?
No, unless you run it in parallel.
This is not possible yet but is planned
I highly recommend to recalibrate in regular intervals though.
I see. You mean every 10 minutes or so run a calibration procedure, and the offline tracking will know to disregard an old calibration? Or should I start the Player again for each calibration?
You have two options: 1. one long recording including all calibration procedures, 2. start a new recording before each calibration.
The offline calibration allows you to define multiple calibration and mapping ranges, which is helpful for case 1.
OK, thank you. And you probably can't give me a definite answer, but if I save a user_calibration_data file, might it be useful in the future? That is, do you plan to implement this in the near future?
Actually, the user calibration file is not useful and is only meant as temporary storage of the calibration data.
Capture stores all calibrations during a recording as notifications. And at the start of a recording, the content of an existing user_calibration_data file will be republished as a notification.
And the Player does it differently? Could a plugin in the Player use this republished notification as a calibration?
You mean as an import into another recording? No.
As I said, we are planning to add the functionality to export and import calibrations. This is the open issue as reference: https://github.com/pupil-labs/pupil/issues/1003
Thank you. One more thing, do you see any issue that would not allow me to use the mobile app with the HMD add-on to make it wireless? I asked it in the HMD group, but no one answered..
@user-b4961f You mean by streaming the HMD add-on's cameras to the PC running Capture?
This should work.
Hi,
I want to use my heatmaps for a congress abstract and need a legend to explain the colors of the map. I checked the export folder and couldn't find anything like that. Is there a special way to create one?
Hey @user-ecb916 The heatmaps use the jet colormap: https://docs.opencv.org/3.1.0/d3/d50/group__imgproc__colormap.html#gga9a805d8262bcbe273f16be9ea2055a65ab3f207661ddf74511b002b1acda5ec09
The values for each pixel are normalized by the maximum value for each surface. This means that surface heatmaps are not comparable to each other
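If it helps, a legend image can be generated from the same colormap with OpenCV; a minimal sketch:
import numpy as np
import cv2

# vertical gradient, 255 at the top down to 0 at the bottom, 40 px wide
gradient = np.linspace(255, 0, 256, dtype=np.uint8).reshape(-1, 1)
legend = cv2.applyColorMap(np.repeat(gradient, 40, axis=1), cv2.COLORMAP_JET)
cv2.imwrite("heatmap_legend.png", legend)  # top = highest relative gaze density on that surface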
@user-8779ef We were able to reproduce your PyCharm multiprocessing debug issue!
For reference: https://github.com/pupil-labs/pupil/pull/1306
Is there any way to send messages through ZMQ about whether Pupil Capture is in the process of recording?
@user-988d86 No, you cannot request the current state. Currently, you can only listen for state changes: recording start and stop.
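Listening for those changes could look like this sketch (I believe the notification subjects are recording.started / recording.stopped, but treat that as an assumption):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.recording.")

while True:
    topic, payload = sub.recv_multipart()
    note = msgpack.unpackb(payload, raw=False)
    if note["subject"] == "recording.started":
        print("recording started:", note.get("rec_path"))
    elif note["subject"] == "recording.stopped":
        print("recording stopped")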
Hi, I am trying to use the surface plugin to track whether a worker is looking at a robot during the interaction. As stated in the docs, surface hits are streamed, but are they also recorded? I can't find an array with them in the recordings. I believe it produces a csv with hit counts, but I would be more interested in hit timestamps. Thanks for your help.
Hello, I'm using a DIY setup and trying to detect the pupil properly. Can somebody tell me what's wrong with the image from the eye camera?
@user-ae156c we actually don't record them. Instead you should use the offline surface tracker for that.
@mpk Thanks for your reply. Is there documentation on how to use the offline tracker?
@user-ae156c yes, please see our documentation for details docs.pupil-labs.com
Hello, How do we stream a video from the world camera of the pupil glasses?
Just to clarify: Do you see the video feed in Capture?
I do, but how can I use it in my Python code? I want to do object detection on the live video.
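A sketch of receiving scene frames for your own processing (assumes the Frame Publisher plugin is active and set to BGR):
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    # frame messages carry the pixel buffer as an extra multipart frame
    topic, payload, raw_img = sub.recv_multipart()
    meta = msgpack.unpackb(payload, raw=False)
    img = np.frombuffer(raw_img, dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # img is a BGR array; run your object detector on it here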
Hey guys,
Which file in offline_data holds the calibration points from a natural features calibration?