Is there a way to automate the drag and drop feature in Pupil Player? I have been trying to have a go at it. Could use a few tips!
Hi there, I'm not sure if this is the right place to ask but I'll try anyway.
I'm going through the hmd-eyes documentation. I can run the completed executable demo with successful calibration and eye gaze working with Pupil Service open. When in the Unity editor version of the demo, I get a message saying I've successfully connected to Pupil and it returns my current version (1.23.10); however, the status text still says not connected, and when I press C to begin calibration I get an error saying 'Calibration not possible: not connected!', leading back to the subsCtrl.IsConnected check in the CalibrationController script being false.
Any help or information would be greatly appreciated.
Does anyone know if pupil mobile works with google pixel 4 which runs on android 10?
Is there a complete list of pupil mobile compatible phones?
Hi, I'm trying to run Pupil Capture from source on Ubuntu 18.04. When I choose my RealSense D415 camera as video source, I get an error that "The selected camera is already in use or blocked". I ran ps and there is no other process using the RealSense camera at the time. In addition, when I run Pupil Capture the regular way, everything works fine. I would really appreciate any help on the subject. Yogev
@papr I've got the gps data successfully. Could you tell me the definition of "accuracy and bearing"? And what is the unit of speed? Thank you.
@papr If I want to plot the average scan path across trials in an experiment, while each trial has a different number of timestamps in the fixation data file, how could I find the average scan path instead of plotting one for every single trial?
@user-908b50 Hey, what exactly do you mean with "automating the drag and drop feature"? If you just want to programmatically open a recording with Player, you can just append the recording path when starting Pupil Player from the command line. Depending on whether you run from bundle or source, this will look slightly different, but the idea is the same. Example with source:
python main.py player /path/to/my/recording/folder
@user-b37f66 which version of Pupil Capture are you using?
Hi @user-88b704, I can run Pupil Mobile without problems on a Google Pixel 3a with Android 10. So I assume a Pixel 4 should be good to go as well.
@user-894e55 if I see this correctly, you would have to find a way to attach eye cameras to the device. Afterward, you could use the VR-calibration-approach implemented in our unity integration. This way you would not need to integrate scene-camera access into Pupil Capture. This approach highly depends on what data is being made available by the m300xl in realtime.
@user-b37f66 In addition to @user-c5fb8b 's question: How do you start Capture in this case if not "in the regular way"?
Hi, @user-c5fb8b 1.23.10 @papr As described in the https://github.com/pupil-labs/pupil manual. I'm running "python main.py capture" from the terminal in the "pupil_src" directory
@user-b37f66 are you using the custom realsense backend plugin? Please note that we discontinued official support for the realsense cameras in v1.22. You can read about the reasoning behind this decision in the release notes: https://github.com/pupil-labs/pupil/releases/tag/v1.22 To make transitioning easier, we took the existing code and wrapped it into an external plugin that you can use to interface with the realsense cameras. You can find the link in the release notes or here: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11
@user-905228 The location data is output of the Android location api https://developer.android.com/reference/android/location/Location Just check out the corresponding getter function documentation, e.g. speed -> getSpeed()
@user-c629df You will have to interpolate the location for a given set of timestamps.
1. Calculate relative timestamps by subtracting the reference timestamp (e.g. timestamp of the first gaze datum that is on the AOI) from all following timestamps. This makes the timestamps comparable
2. Generate a set of interpolation timestamps, e.g. using numpy.arange(0.0, duration_in_seconds, time_difference_between_interpolation_points_in_seconds)
3. Use an interpolation function to estimate the gaze positions at the interpolation timestamps, e.g. by feeding the surface gaze norm pos into https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html (do it twice, once for x, once for y). This step will give you estimated gaze locations at the same relative timepoints and can therefore be averaged.
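A minimal sketch of these three steps (assuming a Surface Tracker export with gaze_timestamp, x_norm, y_norm and on_surf columns; the file name and the 10 ms interpolation step are just examples):

```python
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d

# ASSUMPTION: a surface export with gaze_timestamp, x_norm, y_norm, on_surf columns
df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")
df = df[df["on_surf"] == True]

# 1. relative timestamps (reference = first gaze datum on the AOI)
t = df["gaze_timestamp"].to_numpy()
t_rel = t - t[0]

# 2. common interpolation timestamps, e.g. one every 10 ms over the trial duration
t_interp = np.arange(0.0, t_rel[-1], 0.01)

# 3. interpolate x and y separately at the shared relative timepoints
fx = interp1d(t_rel, df["x_norm"].to_numpy(), bounds_error=False)
fy = interp1d(t_rel, df["y_norm"].to_numpy(), bounds_error=False)
x_i, y_i = fx(t_interp), fy(t_interp)

# x_i, y_i from multiple trials now share the same relative timepoints and can be averaged
```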
@papr Got it! Thank you.
I have a problem: when I start my experiment the confidence value is low (lower than .5), and then my confidence value suddenly becomes NaN. Does anyone know what's wrong?
thank you!
Hi @user-ab28f5 what calibration method did you use and what are the reported errors when running a validation afterwards?
I used the Screen Marker Calibration
Could you share a screenshot of your eye windows? Gaze confidence depends on the pupil detection confidence.
@user-ab28f5 I have not seen that before. Could you share the recording with data@pupil-labs.com such that we can try to reproduce this?
@papr Thanks for the detailed reply! 1) In this type of analysis, which data file is most appropriate to use (pupil_position, gaze_position, or fixation_position)? 2) What might be an appropriate duration for the interpolation timestamp? Should it be the same as the time length of a single trial in an experiment? Thanks!
@user-c629df 1) you will need a common reference frame for this to work, i.e. surface mapped gaze or fixations. 2) the duration depends on your experiment. The trial duration is a good starting point, though.
Hello! I was wondering if it was possible for core (or another product of pupil's, though I believe core is what I want) to detect an eye other than a human eye (eg. cat or dog eye). I wasn't sure if the algorithm used for tracking eyes was specifically designed to pick up characteristics of the human eye, like relative size of the pupil/iris, or requiring a completely visible sclera.
@papr Oh I see. Thank you so much! What if each trial has slightly different durations, should I find the average of the duration to set up the interpolation timestamps? This might sacrifice certain gaze positions in some trials, but will the interpolation function be able to compensate for this error?
@user-c629df It really depends on what you are trying to research. E.g. if you want to show that the gaze pattern is the same when the subject looks at the AOI for the first time in comparison to the second time, then it would make sense to take all samples between "gaze enters surface" and "gaze exits surface". This would be an event based approach. If you want to simply show what the gaze pattern looks like in the first ten seconds after the gaze has entered the surface, then a fixed duration of 10 seconds would be appropriate.
@papr Makes sense! Thanks so much for your help!
@papr May I also seek your suggestions on finding saccade data in the export data files? I couldn't find it there, nor among the Pupil Player plugins. I'm wondering if the data is embedded in some other data files?
Pupil Player does not currently perform saccade detection.
I see. Would gaze positions be an effective alternative to that?
@user-c629df Gaze positions are the raw data on which eye movement detection is based. An alternative would be to use the fixation detection and assume that all gaze that does not belong to a fixation is a saccade. This is not completely true though, since there are other eye movement types, like smooth pursuit, which you would then interpret as saccades.
@papr Thanks for the clarification! A follow-up question: if I want to calculate the time interval between moving your eyes away from the fixation point (the center of a screen) to the target location on the screen within a trial of an experiment, which data file should I use to achieve that?
Is there a stimulus change that triggers the saccade? Usually, you would save a trigger/annotation with the timestamp of this event, effectively starting a timer. Then look at the gaze data after that event. As soon as there is significant movement, you can stop your timer.
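A rough sketch of this timer idea using Pupil Player exports (assuming an annotations.csv with a label marking the stimulus change and the standard gaze_positions.csv columns; the label name, confidence cutoff, and movement threshold are made up):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
annotations = pd.read_csv("exports/000/annotations.csv")

# ASSUMPTION: the stimulus change was recorded as an annotation labeled "target_onset"
onset_t = annotations.loc[annotations["label"] == "target_onset", "timestamp"].iloc[0]

# Only consider confident gaze after the stimulus change
g = gaze[(gaze["gaze_timestamp"] >= onset_t) & (gaze["confidence"] >= 0.8)].copy()

# Detect the first "significant movement" away from the starting gaze position
start = g[["norm_pos_x", "norm_pos_y"]].iloc[0].to_numpy()
dist = np.linalg.norm(g[["norm_pos_x", "norm_pos_y"]].to_numpy() - start, axis=1)
threshold = 0.05  # made-up displacement threshold in normalized scene coordinates

moved = g[dist > threshold]
if not moved.empty:
    latency = moved["gaze_timestamp"].iloc[0] - onset_t
    print(f"Saccade latency: {latency:.3f} s")
```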
@papr I see. Thanks for the explanation! I will start with gaze_positions.csv then!
@papr What is the unit of the timestamp? For instance, if there are two timestamps, 1588133997.135 and 1588133948.176, would the difference between the two numbers, which is 48.959, represent the actual time difference between the two timestamps in seconds? It seems that in the surface data file, the timestamps don't match the real length of my experiment. For instance, if a certain trial takes 5 minutes, the difference between the min and max timestamp in the surface file is only 48.959 (presumably seconds), which is drastically different from the real length.
@user-c629df Timestamps are in seconds. Please be aware that gaze is actually only mapped to the surface if the surface was detected in the scene video. If you want you can share the recording with [email removed] and I can have a look.
Hi everyone, I've been using Pupil 👁 Core for scientific purposes for a while now. At the moment we would like to combine a motion capture analysis with eye tracking data. I found that you can integrate/combine the eye tracking data from Ergoneers eye trackers with the Qualisys Motion Capture System. Is there any possibility to combine/integrate the Pupil Core with a motion capture system? Thanks a lot for any answer in advance!
@user-c629df We have received your recording. We will come back to you via email tomorrow.
@papr Awesome! Thanks so much for your help!
@user-78130d Hi there. We did exactly this. We are using ROS to sync the data streams from Core and the Qualisys QTM client. This is based on the message filter package for ROS (timestamp-based synchronisation). We have 3D printed a lightweight marker tree and attached it to the Core frame. We used the C++ ROS API since we are more familiar with that, but the Python API should be fine as well (there is a Python ROS package for Pupil Core already and a Qualisys QTM ROS wrapper)
Hope that this helps a bit
@user-26fef5 Thanks a lot! I will check this out.
@user-c629df I did not realise that we have been in contact via email already. Would you mind sharing the scene video with [email removed] too? Without it, it is difficult to judge what is causing the shorter than expected surface data.
Hello! I was wondering if it was possible for core (or another product of pupil's, though I believe core is what I want) to detect an eye other than a human eye (eg. cat or dog eye). I wasn't sure if the algorithm used for tracking eyes was specifically designed to pick up characteristics of the human eye, like relative size of the pupil/iris, or requiring a completely visible sclera.
Sorry, I think my message from a few days ago may have been overlooked. Just reposting it.
hi @user-6e1b0b the pupil detection algorithm is designed to be used with humans. However, there have been some projects that use modified versions of Pupil Core hardware for canines and non-human primates. I do not have insight into the results of these projects and/or what modifications were made to software for pupil detection.
@papr Thanks! I just emailed it to you!
Hi, I'm using a Pupil Core headset with a RealSense D415 camera on Ubuntu 18.04. I'm trying to run Pupil Capture from source, but the program is running terribly slow and the screen barely changes. I tried versions 1.21 and 1.23 (with the RealSense plugin) and it happened in both of them. When I run Pupil Capture the regular way, everything works just fine. I also checked the memory and CPU usage and there is nothing unusual there. I will really appreciate any help in that matter. Yogev
Hi, I see the minimum requirements (8 GB RAM, i5 CPU). Can I get maximum performance (such as the highest frame rate, etc.) with this?
Hi, has anyone faced an issue using a RealSense D435 camera with Pupil Capture from source? In my case, the camera is recognized (it is listed in video_capture.uvc_backend), but when I click the device in the application, it returns the error "The selected camera is already in use or blocked". Ah, I tested on Linux 16.04 😄
@user-b37f66 you do not need to run Pupil from source in order to use the RealSense plugin. You can also use the prebuilt bundle and just add the plugin to: your home folder > pupil_capture_settings > plugins. You might need to create the plugins folder first. Please give this a try to test whether this runs more smoothly for you!
@user-48e99b Which version of Pupil are you using? Please note that we discontinued official support for the realsense cameras in v1.22. You can read about the reasoning behind this decision in the release notes: https://github.com/pupil-labs/pupil/releases/tag/v1.22 To make transitioning easier, we took the existing code and wrapped it into an external plugin that you can use to interface with the realsense cameras. You can find the link in the release notes or here: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11
Hi, does anybody have experience with Pupil Core and the VICON Nexus software? I am trying to send a trigger from VICON to the Pupil Core, using Pupil Remote, with no luck.
@user-c5fb8b Currently, I run Pupil Capture from source. And I put the file 'pyrealsense2_backend.py' into the 'pupil_capture_settings > plugins' folder. But the application still returns the error 'The selected camera is already in use or blocked' even though it also returns the 'video_capture.uvc_backend: Found device. Intel(R) RealSense(TM) 435' message continuously in the terminal. Could you let me know how I can fix it? Thank you! (The Pupil source version is v1.23)
@user-01d553 Am I correct that you were in contact with us via email? If this is correct, I will come back to you via email to avoid two parallel conversations.
OK... I thought this was another channel....sorry...👍
@user-48e99b (maybe also interesting for @user-b37f66)
The plugin we created with the removed realsense integration code depends on pyrealsense2, which are the python bindings from librealsense. If you run from source, you will have to install this as well into your python environment. A simple pip install pyrealsense2 should work if you're on linux. See the official docs for more info: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python#python-wrapper
@user-c5fb8b I already installed 'pyrealsense2' via 'pip install pyrealsense2'. To make sure, I reinstalled and checked that the terminal returns 'Requirement already satisfied: pyrealsense2 in ...'. But there are still the errors I mentioned: 'The selected camera is already in use or blocked' in the application window, and the 'video_capture.uvc_backend: Found device. Intel(R) RealSense(TM) 435' message in the terminal (it keeps showing). Is there any reason why this is happening? Thank you!
@user-48e99b can you share the log output? From pupil_capture_settings/capture.log
Michael Jo Pupil-D435 test log
@user-48e99b also did you enable the plugin? In the plugin manager menu there should be Realsense2 Source
@user-c5fb8b Ah.. I didn't enable the plugin. It shows the image correctly! Thank you very much 😄
🙂 No worries, glad everything works!
Thank you again! 🙂
hello, I want to be sure about something: Capture is the one calculating the pupil size, gaze, etc. and adding a video recording? If yes, can I turn off the pupil size calculation in it? And on the other hand, Pupil Player is what transforms the data to Excel? And if I have outside video data, can it calculate pupil size and gaze by itself in offline mode?
Hi @user-d9a2e5, I hope I understand you correctly: Pupil Capture records all video streams (eye and world) and by default also runs online pupil detection and gaze mapping, which will be included in the recording. Pupil Player also offers offline analysis options, including re-calculating pupil and gaze data from the recorded video.
1) The pupil size is a by-product of the pupil detection pipeline. You can disable the whole pipeline while recording, which will get rid of the pupil size, but also of the other pupil information as well as the gaze mapping (as no pupil data is available to map). You can do this in the general settings of Capture, by selecting detection & mapping mode: disabled. A common use-case for this is e.g. to reduce CPU load while recording on machines with low hardware specs.
2) If you have a recording without pupil and gaze information, you can still run offline pupil detection and offline gaze mapping in Pupil Player (as with any other recording).
3) Pupil Player does indeed export the data, but not necessarily "to excel". Instead we export all data in a general purpose format (CSV), which can be opened by many different applications, but yes, also by Microsoft Excel.
I hope this answers your questions?
wow , you mega answered my question , thanks!!! you are great guys , always helping me 🙂 , have a good day
Hi, I'm using the Pupil Service to get data via zmq in a C++ application. I have an issue that the data does not seem to reflect what I see in the Capture program. Initially I thought it was a coordinate mapping issue, but I realize it is not, as my data does not even relatively correspond to the movement in Capture, nor is it mirrored/inverted. I scale the normalized data with the screen resolution. I am suspecting that I might be seeing old data. Is there a buffer that can fill up if I don't request information fast enough? Any other ideas what could be causing it? At the moment I'm simply trying to get a reliable reaction to looking up, down, left, and right.
here is how I connect and receive data: https://github.com/mrbichel/eye-read-01/blob/master/src/ofApp.cpp#L21
@user-ae6127 You are subscribing to pupil data, i.e. data in the coordinate system of the right eye camera
please be aware that the right eye camera is physically flipped. Therefore, by default, bigger norm_pos_y values map to a downward movement of the pupil position
Yep I tried both with theese topics: "pupil.0" "gaze.3d.1."
Gaze is mapped pupil data in scene coordinates and requires calibration. Did you calibrate already?
yep, I did calibration, and I did a recording looking around the edges of the screen with that calibration - that is of good quality. I use a chin rest and immediately test with the same calibration in my own software.
I'm aware of the flipped y-axis.
ok, great! There is indeed the possibility that you are receiving old data if you do not process it quickly enough
is there a way to make sure you always receive the newest data, and that older packages are simply dropped? - It was actually my understanding of ZMQ_SUB that you always get the most recent data when requesting?
https://rfc.zeromq.org/spec/29/
The SUB Socket Type ... For processing incoming messages: SHALL silently discard messages if the queue for a publisher is full. SHALL receive incoming messages from its publishers using a fair-queuing strategy. SHALL not modify incoming messages in any way. MAY, depending on the transport, filter messages according to subscriptions, using a prefix match algorithm. SHALL deliver messages to its calling application.
New packages will be dropped if the sub queue is full
I recommend to recv() all available data first, before processing the data points one by one. One processing step would be to drop all but the most recent package
Ok I see, so basically each time I run my receive method I will currently get the oldest message in the queue?
This is how you can check if data is available: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/zmq_tools.py#L128
@user-ae6127 correct
It is on you to empty/process the queue.
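A rough Python sketch of the "drain the queue, keep only the newest" idea (the same approach applies in C++; the function name is illustrative, and the connection follows the standard Pupil Remote network API on its default port):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Request the SUB port from Pupil Remote (default 127.0.0.1:50020)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

def newest_gaze_datum():
    """Drain all queued messages and return only the most recent one (or None)."""
    latest = None
    while True:
        try:
            topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break  # queue is empty
        latest = msgpack.unpackb(payload, raw=False)
    return latest

# Call this once per application frame; older packets are simply discarded.
datum = newest_gaze_datum()
if datum is not None:
    print(datum["norm_pos"])
```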
Ok I see, that is probably my issue then. Any chance you know of a C++ reference / example?
Yeah, having a look at your code you call recv once every app cycle. This is probably 60Hz or less. This is not sufficient if you run pupil detection at 120 or 200Hz.
Yep, I tried to increase my framerate to accommodate, but the graphics processing in my application cannot keep up with that.
Basically you need to modify ofApp::update() to something like:
while pupilZmq.hasData() {
    pupilZmq.receive()
}
ok
The frequency of your gui should not be dependent on your subscription input frequency
cppzmq that I use does not seem to have a hasData method.
Neither does the python implementation. 🙂 This was more of an abstract example to clarify what needs to change 🙂
Check this Python equivalent on how to implement a hasData() function: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/zmq_tools.py#L128
hmm ok, seems I can just keep running recv until it returns false ?
@user-ae6127 No, this won't work, as recv() is a blocking function by default
@user-ae6127 btw, it might be easier to use the higher level http://zeromq.github.io/zmqpp/ over cppzmq which is quite low level
You can even use an event based approach with http://zeromq.github.io/zmqpp/classzmqpp_1_1reactor.html
yeah, might be a good idea. Have a bunch of code written with zmqpp now though.
something like:
while( socket.get( ZMQ_EVENTS ) == ZMQ_POLLIN ) {
    socket.recv(&frame1);
    socket.recv(&frame2);
}
This looks about right. I have no C++ experience though. If you already have a bunch of code and got used to cppzmq, then stick with it. There is nothing wrong with using it.
thanks for the guidance
hey, I have another question. Do you have any suggestions on interpolating gaze data? I'm doing the following: first deleting all data with confidence less than 0.8, and all data which is not on the AOI, then making sure all data is between 0 and 1, and then using an interpolator. Can you tell me if I'm doing something off the charts? I'm asking because I see that my data, which is interpolated to the same frequency as the movie, is not the same as shown in Pupil Player. And the gaze in Pupil Player, is it raw data, or is it interpolated too?
is it possible to do 3d calibration with one eye by doing multiple 2d calibrations?
the idea is to have many calibration profiles and switch to the right distance one (i know the distance they'll be looking at)
@user-50974c No, simply run the 3d calibration to get 3d gaze mapping. There is no simple way to switch between multiple calibrations, too.
thanks for the quick response 👍
hmm, but how would the pupil capture software properly calibrate in 3d with only one eye? I was considering doing an extra calibration layer in my own python script so that swapping (between multiple) would be easy.
The 3d calibration works via bundle adjustment. For this procedure, it does not matter if you have one or two eye cameras. There is one difference though: The gaze_point_3d will show a fixed depth. But you should not rely on this value anyway as it is quite noisy.
@user-c5fb8b Hi, I know that the RealSense plugin can be loaded in the prebuilt bundle and everything works great when I load the plugin that way. I need to run Pupil Capture from source with the RealSense plugin in order to load and use another plugin that I developed, which imports Python packages that aren't available in the prebuilt version. The problem, as I said, is that when I run Pupil Capture from source with the RealSense plugin, the program runs very slowly. In addition, I already installed pyrealsense2 in order to load the RealSense plugin from source. Hope the problem is clearer now.
@papr May I ask you for your suggestion on how to plot better pupil diameter graphs accurately? I followed the coding tutorials on github and it turns out that the plot does not exclude dips as shown below, which influence the overall pattern. Thanks!
@user-b37f66 can you check which frame rate/s is/are selected in the pyrealsense plugin? You should select a frame rate of 30 or higher to make the ui run smoothly. Please also check the fps graph in the top left for the effective fps numbers that you are receiving.
@user-c629df Are you aware that you should be filtering data by confidence? If so, which threshold have you chosen?
Hello, I try to find out where exactly the image plane (physical sensor) of the world camera is located. Is there any specific data on that? Or is it possible to tell me the actual sensor measurements, so that I can deduce the focal length in mm from the intrinsic camera calibration?
@papr Yes you are correct! When I choose the 0.85 as a threshold, the graph looks like the one above. Thanks!
A follow up question is how to explain the many variations within 3s? Also how come one's diameter can go above 160mm?
A typical graph that I see in the literature looks like the one below, which has a much smoother curve:
@user-d9a2e5 I'm not sure what you want to achieve? Why are you interpolating the data at all?
@user-c5fb8b Thanks, that's good to know! I am on the very early stages of data analysis and learning rn. Basically, I want to get exports for each of the recording folder simultaneously instead of getting these exports individually. If I can do the above on python, I should be able to use for loops? I had been building a bundle in a conda env over the past week but I realized python 3.8 won't work so starting over in a 3.6 virtual env.
@user-c629df What are you plotting exactly? diameter or diameter_3d? Such high values speak for an ill-fit eye model. The eye model needs to be fit well in order to produce consistent and good data.
@user-908b50 Unfortunately it is currently not possible to automatically open and export multiple recordings
@user-908b50 There is a pupil-community script that extracts prerecorded data from recordings that you could give a try https://github.com/tombullock/batchExportPupilLabs
@user-c5fb8b I want to check gaze data, and I need the same frequency as the movie, and I need to delete all non-relevant data, no?
@user-d9a2e5 From your previous question: The gaze displayed in Pupil Player is not interpolated.
Also, could you elaborate what you referring to by "movie"?
@user-d9a2e5 do you want to match gaze data to world video frames?
@user-c5fb8b yes, and also the pupil data
@papr I am trying to see movies which affect pupil dilation, and to combine that with gaze data for my project
And I saw that when I check the raw pupil data, its frequency histogram is not what I chose in Pupil Capture - I chose 200, and I get some values even bigger than 200 Hz. Did I do something wrong?
@user-d9a2e5 the exact frame-rate of the eye data can jitter a bit due to transmission lags etc.
How do you analyse the data? If you export your data with Pupil Player into CSV files, they will already contain the matching information, e.g. in pupil_positions.csv you'll find for every pupil datum the corresponding world video frame index. If you remove your outliers here (confidence filter), you might indeed end up with some world frames where you do not have any corresponding pupil data. Is interpolation here really a good idea? This might actually introduce bad data. Maybe you can think rather about discarding world frames that do not have any pupil data available for your analysis?
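A small sketch of this matching idea with the exported CSV (assuming the standard pupil_positions.csv columns; the confidence threshold is just an example):

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# confidence filter to remove low-quality detections (example threshold)
pupil = pupil[pupil["confidence"] >= 0.8]

# group the remaining pupil data by the world video frame it belongs to
per_frame = pupil.groupby("world_index")["diameter"].mean()

# world frames that end up with no pupil data at all can simply be excluded from the analysis
print(per_frame.head())
```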
Please tell me if what I'm saying makes any sense, as I'm not aware of your specific experimental setup 🙂
@user-c5fb8b Basically what I do: I have a video with AOI markers, I watch the movie, then use Player to extract data. I'm using the pupil toolbox that someone made (I am not sure of their name) plus my own gaze data interpolator. That's basically all I did, and then I tried to make it the same size, and now I am trying to put it into the movie I used - gaze and pupil data together. Sorry for my bad English :C.
am I asking too many questions?
@user-d9a2e5 I understand. But doesn't it make sense to not display any gaze data if there is no high confidence gaze data?
@user-c5fb8b it does, but the timing is not the same as my movie video, so I have to make them the same :C, so I kind of feel like I have to do it. Btw, which data is better: gaze_time, or taking the mean over the same world timestamp?
@user-d9a2e5 are you matching the gaze data by time or by index?
@user-c5fb8b I think I'm matching by index
@user-d9a2e5 are you using the exported surface data to match this? Since you seem to be only interested in the gaze on your surface, correct? I'm not entirely sure what the cause of your problems is. My guess would be that you are matching the wrong indices/timestamps somehow, if Pupil Player shows everything correctly. But I fear I can't help you much more on this, especially if you are using some toolbox that someone else wrote. In theory this should be an easy task.
@user-c5fb8b i am using https://github.com/ElioS-S/pupil-size , do you know their code ?
@user-d9a2e5 unfortunately we cannot offer any support for external code/tools. Could you share some screenshots of what you are trying to achieve with your current implementation? Maybe we can understand the underlying issue better from this.
@user-c5fb8b okay 🙂 , i will send it in upcoming days ty for help! and sry for the many questions hahah ops
Hi! I want to buy a Windows tablet to use with the Core eye tracker. Unfortunately I couldn't find any information about technical requirements for laptops and tablets.
Hi, I have a quick sales question. If we buy the Core eye-tracker with the high speed camera, can we still change the configuration to use it with the RealSense camera in the future?
@papr I'm plotting diameter of the pupil in image pixels as observed in the eye image frame. Should I plot diameter_3d instead for a better eye model?
@user-c629df ah OK, then the value range makes more sense as it's pixels, not mm. If you want mm, you need to plot diameter_3d
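A short sketch for plotting diameter_3d with a confidence filter (assuming the standard pupil_positions.csv export from Pupil Player; only rows produced by the 3d detector carry diameter_3d, and the threshold is an example value):

```python
import pandas as pd
import matplotlib.pyplot as plt

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# keep confident 3d detections only (example threshold of 0.85)
pupil = pupil[(pupil["confidence"] >= 0.85) & pupil["diameter_3d"].notna()]

for eye_id, eye_df in pupil.groupby("eye_id"):
    plt.plot(eye_df["pupil_timestamp"], eye_df["diameter_3d"], label=f"eye {eye_id}")

plt.xlabel("pupil_timestamp [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()
```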
Hi @papr , I just wanted to quickly confirm an understanding of the PL core algorithm (Swirski model). When we select 3D pupil mode, the software generates a 3D model based on detected 2D ellipses in the eye imagery yes? In order to calculate norm_pos in the scene video, PL uses the 2D ellipse center and calibration data to map the 2D ellipse center onto a 2D norm_pos value in the scene video yes?
The 3D model does not update the 2D pupil center value right? Rather, it is used to derive a 3D vector?
If you select 3d detection and mapping mode, the gaze mapping will map the pupil circle_3d normals to the gaze_normals and try to intersect them, resulting in the gaze_point_3d. This 3d point is backprojected onto the scene image, resulting in the gaze norm_pos.
The pupil center (pupil norm_pos) is only used in 2d gaze mapping using polynomial regression.
In 3d mode, the 2d ellipse will be projected onto the 3d model. Then the ellipse will be adjusted such that it is a circle that is tangential to the eye model. Afterward, this circle (circle_3d) is backprojected onto the eye image (ellipse). This actually overwrites the original 2d ellipse values. The better the backprojection fits the original values, the higher the model confidence will be.
In other words, the 3d model does update the 2d pupil center.
Perfect. That's what I wanted to know.
Now, has this always been the behavior?
Or was this behavior updated post a particular version?
Now, has this always been the behavior? Yes, as far as I can remember.
In our upcoming 2.0 release, we will store 2d and 3d data, such that no data gets lost. This allows nice visualizations showing 2d and 3d ellipse at the same time. You can use this to check if your eye model is fit well.
I'm a little surprised by this information that it has always been the case .. So does this mean that a single natural features calibration point would calibrate the system? Or do you do a polynomial mapping on gaze norm_pos post natural features calibration (or any calibration for that matter)?
The 2d calibration's polynomial regression uses the pupil norm_pos as input and the norm_pos of the reference target as output. Both methods need multiple reference locations that are spread across the scene camera's field of view in order to work well.
Thank you, this cleared a lot of information for me. Could you point me to a link or webpage where this process is highlighted?
There is a note about this in https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
Mapping Method: 2d uses polynomial regression, or 3d uses bundle adjustment calibration. The terms are not explained further as they are common/public algorithms.
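For illustration only (this is not Pupil's actual feature set or implementation, just the general idea of a 2d polynomial regression from pupil norm_pos to reference-target norm_pos), a sketch with made-up data and a simple second-order feature set:

```python
import numpy as np

# Made-up example data: pupil norm_pos samples (inputs) and reference target norm_pos (outputs)
pupil_xy = np.random.rand(50, 2)
target_xy = np.random.rand(50, 2)

def poly_features(xy):
    # A simple second-order feature set; Pupil's actual polynomial terms may differ.
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

A = poly_features(pupil_xy)
# Solve for coefficients that map pupil features to target x and y (least squares)
coef, *_ = np.linalg.lstsq(A, target_xy, rcond=None)

# Map new pupil positions to gaze norm_pos estimates
gaze_estimate = poly_features(pupil_xy) @ coef
```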
Ok thanks @papr
@papr The frame rate is 30 by default and I didn't change it. I have noticed that in the first few seconds after activating the RealSense plugin the program runs fine, and just after that it becomes slow. The same thing happens when I try to run an earlier version of Pupil Capture (1.21) from source with the former way of activating the RealSense World\Depth camera.
@user-b37f66 Unfortunately, it is very difficult for us to judge what is causing this issue as we do not have access to a realsense camera right now in order to reproduce the issue.
Hello
hello, I am a first year graduate student and a novice in the use of Pupil Labs Core and its software. Please can someone help me on how to get started? The headset has two eye cameras.
Hi @user-7daa32 I would start by reading through the documentation here: https://docs.pupil-labs.com/core/ probably the Getting Started section as well as the User Guide. This should help you getting an idea of what Pupil can do and what to watch out for. Especially the user guide > best practices might be interesting.
@user-c5fb8b done already
I don't know why it's getting difficult to get pupil detection with a confidence of .9 or 1 every day. Today I will get good confidence; at another time I will find it difficult to detect the pupil
It's pretty easy to detect it in one eye but hard to do that for both eyes
@user-7daa32 there are a couple of reasons for why pupil detection might be hard. Generally you want the pupil to be clearly visible with a good contrast. If you have any examples for the pupil detection failing or producing low confidence values only, you can share a screenshot of the eye window with us and we might be able to give you more specific tips on how to improve your setup.
@user-c5fb8b thank you so much. I will take screenshots later today. I actually meant to say that it's hard to position the pupil at the center of the eye window in order to obtain a good confidence. This got me frustrated and most times I tend to twist the frame incorrectly. I have a recording file containing spreadsheet data, video data, and data that I don't know the meaning of. I am aware of parameters like fixation, pupil diameter (I saw this in the Excel file), heatmap, and scan path. I think I have to work on pupil detection and calibration for now
@user-7daa32 ok. Just in case, here are the links to how you can adjust your eye cameras: - sliding: https://docs.pupil-labs.com/core/hardware/#slide-eye-camera - rotating: https://docs.pupil-labs.com/core/hardware/#rotate-eye-camera - switching the extenders: https://docs.pupil-labs.com/core/hardware/#eye-camera-arm-extender
@user-c5fb8b thanks again. I have gone through these several times. I'm trying to say that with the headset fixed on my head, I was unable to detect the pupils. I adjusted the movable parts and still could not detect the pupils. I have read most of the guides on the Pupil Labs website. I will send pictures later today. Thanks
hey there ^_^ is this the right place to ask about vive eye pro questions.
I am having trouble configuring the vive eye pro ^_^ because I am using it in a lab that speaks a foreign language, so it's hard to ask around if I am doing something wrong. If it's the wrong place please direct me to the right place
@user-ab0622 Hi, the Vive Eye Pro uses eye tracking from Tobii. Unfortunately, we cannot help you with that here. I would recommend contacting their support directly.
Nonetheless, you can let us know your questions. Maybe, we can answer them partially.
well
I have a vive with a tracker and a plug. I am not sure if it's from pupil eyes or not
I'll show you the picture
It displays errors as follows
@user-ab0622 This is indeed our product. 🙂
^_^ thought so, that's why I came here. I thought it was a normal vive eye pro but I think it was assembled. First, there's the USB - I am supposed to connect it to the computer, right? And second, do you think the reason why eye tracking isn't working is because I am installing the Tobii eye tracking software? Like, how am I supposed to operate it?
@user-ab0622 Please checkout our getting started section for the add-on https://docs.pupil-labs.com/vr-ar/htc-vive/#install-the-vive-pro-add-on Afterward, checkout the getting started section for our software: https://docs.pupil-labs.com/core/#_1-put-on-pupil-core
Also, checkout the hmd-eyes project which includes a plugin for Unity https://github.com/pupil-labs/hmd-eyes/#hmd-eyes
oh nice, I didn't know there were so many options. I used to think the vive pro eye had many options, but this is an add-on to a normal vive pro, right? also
do you recommend having the Vive installation do all the work, or is it better to do my own custom setup with the software? I'll definitely check all the links.
Please why is the world window screen blurry. As soon as I progress, I will send screenshots
hi, Please where will i click to upload pics here?
@user-7daa32 If you are using Discord on a desktop machine there should be a + sign on the left side of the text field
@user-7daa32 you can also just drag-and-drop them
@user-c5fb8b Thanks so much
here are the screenshots of pupil detection. The confidence values, especially for eye 0, tend to move sharply from 1 to below .5 while trying to look at the wall (supposed stimulus). I didn't use the arm extenders in this case. I have not done the algorithm settings or calibrated, just pupil detection. The eye 1 image was flipped. Is it not bad when the confidence values change like that?
@user-7daa32 in this screenshot the confidence seems to be fine? Do you have an example where the confidence drops? Additionally the pupil might be obstructed partially by the eyelashes here., which will make detection harder. Optimally the cameras should record the eye from slightly below, which makes it less likely that the eyelashes obstruct the image, maybe you can give the extenders a try?
I tried the extenders but I kept getting the pupils out of detection. Right now the screen calibration marker is taking a long time to be detected
@user-7daa32 Please make sure that the screen is centered in the field of view of the scene camera. If the screenmarker calibration shows a red dot in the middle this means that the calibration marker is not being detected.
@papr I am still unable to calibrate with the screen positioned at the center of the scene camera's view. Attached is a screenshot of the world camera window
@papr I don't know why I am getting those things I circled
It looks like there is a smudge on your lens. Please use a microfiber cloth to clean it. If the image is still blurry, please try to rotate the lens carefully. If you turn it too far outwards it will detach.
@papr Thanks so much. I think I am improving. Why is the calibration area very small?
There are also these words: "dismissing 33%". What does that mean?
I would say the best way to help you would be if you could start a recording, make a calibration, stop the recording and send it to data@pupil-labs.com @user-c5fb8b can have a look at it tomorrow.
@papr Okay, thanks
Pupil Player refused to open. Only this opened, without showing the green boot text. I reinstalled and am still having the issue. Please do you know anything I could do to resolve it? thanks
Please delete the user_settings files in the pupil_player_settings folder
and try again
Hello, I am trying to find out information about the headpose tracking and found very limited content on the website. Has anyone worked with real-time headpose tracking with pupil core? Also is there a way to get the data into unity3D?
Hi @user-6779be, do you have any specific questions regarding the headpose tracking?
@user-c5fb8b I would like to know how to subscribe to the head pose data while streaming over Pupil Remote, and what the format of the data is
@user-6779be I realized that we unfortunately don't have any documentation on the online head pose tracker in Pupil Capture.
However, it works very similar to the offline head pose tracker in Pupil Player. You can find the documentation here: https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins
The online head-pose tracker also publishes the data every frame via our normal network interface. You can subscribe to the topic head_pose to receive the data. Here's an example dump for a single frame:
{
'camera_extrinsics': [-2.549196720123291,
-0.05451478064060211,
-0.7083187103271484,
-2.4842631816864014,
-8.648696899414062,
18.81245231628418],
'camera_pose_matrix': [[0.8645263314247131,
-0.08990409970283508,
0.49448105692863464,
-7.932243824005127],
[0.16451121866703033,
-0.879048764705658,
-0.44744759798049927,
1.223649024963379],
[0.4749003052711487,
0.4681779146194458,
-0.745170533657074,
19.247390747070312],
[0.0, 0.0, 0.0, 1.0]],
'camera_poses': [2.549196720123291,
0.05451478064060211,
0.7083187103271484,
-7.932243824005127,
1.223649024963379,
19.247390747070312],
'camera_trace': [-7.932243824005127, 1.223649024963379, 19.247390747070312],
'timestamp': 230.674514,
'topic': 'head_pose'
}
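A minimal Python sketch for subscribing to this topic over the network API (assuming Pupil Remote is running on its default port 50020):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to head pose data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "head_pose")

while True:
    topic, payload = subscriber.recv_multipart()
    head_pose = msgpack.unpackb(payload, raw=False)
    print(head_pose["camera_trace"])
```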
@user-c5fb8b Thanks will check it out
@user-c5fb8b @papr I can't find the user_settings files. What do they look like? Thanks
I have read that while calibrating we should try to keep the head still. Is this also what we should do during recording? I'm glad for the privilege I got as a member of this community. I wish to apologize that I will be asking a lot of novice-like questions
@user-7daa32, you can find the user_settings in: your home folder > pupil_player_settings > user_settings Regarding head movement: once you are calibrated, slippage of the headset can reduce the accuracy of the gaze prediction. As you can read in the docs, this effect is minimized when using the 3D detection and mapping pipeline, but you will still get reduced accuracy if the accumulated slippage becomes too large. No need to apologize, this is the place to ask your questions! 🙂
Hi, I am having problem from the very beginning, namely I cannot find a proper position of core glasses on my face. Is there any tutorial on that?
Hi @user-6e9a97 Have you already checked: https://docs.pupil-labs.com/core/#_3-check-pupil-detection ?
Are you familliar with all the adjustments that the eye cameras can make? https://docs.pupil-labs.com/core/hardware/#headset-adjustments ?
If you are able to share a small video of your eye or screenshot, we might be able to provide some suggestions on how to improve the setup 😸
Hello, I try to do positional tracking with the world camera of the pupil core eye tracker. In order to evaluate my results accurately, I would need the exact projection center. Are there any dimensions I could possibly use for these? Any help with this would be much appreciated.
Hi @user-7fa523, did you take a look at the head pose tracking plugin? Maybe this is already something similar to what you need? Otherwise, what do you mean by projection center? What's your setup and what exactly do you track?
@user-c5fb8b thank you very much for the hint with the plugin. This is also interesting for me, but currently not exactly what I am looking for. My setup: I use a second camera with depth information and estimate the head pose through feature matching in both frames (depth camera and world camera of the eye tracker). I want to evaluate my results with an external camera system (Krypton K600). I track the origin of the world camera coordinate system assuming the pinhole camera model. So by projection center I mean that origin. In reality, I suppose it would be the image sensor. I am interested in physical measurements so that I can define from the outside where the center is inside the camera.
@user-7fa523 What you are looking for are the camera intrinsics. Checkout our prerecorded camera intrinsics here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L27-L76
If you want more accurate results it is recommended to run the camera intrinsics estimation procedure for your camera https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
Hello @papr , I had a question regarding the specifications of the pupil core camera. Does the pupil core eye camera has a global/rolling shutter?
@user-499cde The 200hz eye cams use a global shutter, as far as I know.
yes! Thank you for the information
Thank you @papr! I did a camera intrinsic estimation procedure. But this gives me the measurements in pixel units. I need the position of the optical center in metric units. The camera in question is “Pupil Cam1 ID2”. The physical measurements of the sensor would be helpful to transform the values of the intrinsic camera matrix. Do you have any information about that? However, this calculation could become imprecise. Do you maybe have any measured values or a data sheet for this camera in which the position of the optical center is described as a physical value?
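As a general note (this does not answer what the actual sensor dimensions of Pupil Cam1 ID2 are, which would have to come from a data sheet), the focal length in millimeters relates to the pixel focal length via the physical pixel size. A hedged sketch of that conversion with placeholder numbers:

```python
# Hypothetical values; the actual sensor width must come from the camera's data sheet.
image_width_px = 1280      # horizontal resolution used during intrinsics estimation
sensor_width_mm = 4.8      # ASSUMPTION: physical sensor width in mm (not confirmed for this camera)
f_px = 1000.0              # fx from the estimated camera matrix, in pixels

pixel_size_mm = sensor_width_mm / image_width_px
f_mm = f_px * pixel_size_mm  # focal length in millimeters
print(f_mm)
```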
Hey guys, I am launching pupil_capture from the terminal, but every time I try to select one of the eye cams, I encounter this error message: ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /opt/pupil_capture/pupil_detectors/detector_3d/detector_3d.cpython-36m-x86_64-linux-gnu.so). Could anybody help me out here? I am really stuck because of this and do not know what causes it; I thought it is maybe a problem with the version!?
@user-b11159 Hey, which Ubuntu version are you running?
@papr that would be Ubuntu 16.04.
@user-b11159 Could you try running this sudo apt-get install libstdc++6
@papr yes I tried that, it prompts, that libstdc++6 is already the newest version (5.4.0-6ubuntu1~16.04.12). And it still crashes with the same error
@user-b11159 maybe you can try to reinstall. It looks like one of its dependencies is not correctly installed.
@papr i have reinstalled the whole thing now but it still throws the same error at me.
@user-b11159 What is your output of sudo apt show libstdc++6 ?
@papr That would be the content of this file
Sorry, I want to ask a question~
I have a problem! When I open Pupil Player it shows something like this
What happened?
Hi! I don't see the plugin to count eye movements (fixations, saccades etc.); I only have the fixation detector here. I'm sure I had it on my previous laptop. Have you changed anything in the new version of the software?
Hi @user-ab28f5, please try resetting your user settings: go to your home folder > pupil_capture_settings and delete all user_settings files
Hi @user-370594, we found that the external library we used for eye movement classification did not perform well enough to be useful and removed it from Pupil Player and Capture in v1.17.
Ok, I see. Thanks!
@user-c5fb8b ok, I see thank you
Hello Scholars
I am having trouble understanding the meaning of most of the terminology used in Pupil Labs Core technology, much of which appears here regularly. I read about it on the website and still don't understand. Also, I need a scaffolding tutor to understand these terms. I also need help on how to get pupil dilation and gaze mapping data.
We are doing Chemistry Education Research and it involves asking students to solve chemical problems while looking at chemistry concepts. We need fixation data, scan path, pupil dilation and heatmaps.
Hello. I would like to know if the pupil core works as a marketing research tool
@wrp Dear wrp, thanks for your reply. Here's a link with some pics of my setting... in this way I'm able to calibrate with 30% of missing data due to pupil < 1; however, when I visually inspect my records the precision appears to be very low https://photos.app.goo.gl/z6EdB9SUVj6LXWjk8 I've followed the instructions provided on your website, but I still don't feel comfortable with data recording. Thank you in advance for any hints!
Hello, I'm trying to create a recording from an external eye video file. I wasn't successful when I generated evenly spaced timestamps (the video file frame rate is also consistent), so I'm just wondering how the timestamps are usually generated? When looking at some recordings from my pupil core, I notice there is a difference between the timestamps and the frames on the video file, but I'm not sure what the significance is. Thanks!
Hi, I am a freshman. When I follow the steps at https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md, running pip install git+https://github.com/zeromq/pyre fails with:
Collecting git+https://github.com/zeromq/pyre
Cloning https://github.com/zeromq/pyre to c:\users\admini~1\appdata\local\temp\pip-req-build-fr2df3zp
Running command git clone -q https://github.com/zeromq/pyre 'C:\Users\ADMINI~1\AppData\Local\Temp\pip-req-build-fr2df3zp'
ERROR: Error [WinError 2] The system cannot find the file specified. while executing command git clone -q https://github.com/zeromq/pyre 'C:\Users\ADMINI~1\AppData\Local\Temp\pip-req-build-fr2df3zp'
ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH?
I don't know what's wrong or what to do next. Can someone help me?
Please, I would like to ask this: I got data in my recording file, how can one analyze it?
My primary goal is to be able to record people looking at different items and developing a heat map from their gaze patterns. A secondary goal is to be able to record and graph the pupil diameter information during the above process.
@user-7daa32 regarding your previous message:
Which terms specifically are unclear to you? Please feel free to always ask here if anything is unclear! Regarding the points you mentioned specifically:
- fixations: you can enable the Fixation Detector plugin to work with fixations
- scan path: Pupil previously had a scan path plugin using third party technology, which unfortunately performed rather poorly, so we removed it again. Since v1.22 we offer a gaze history visualization that works much better. You can enable it in the Vis Polyline plugin.
- pupil dilation: when using the 2D pipeline you get the pupil diameter in pixels (of the eye image) which can be useful for relative comparisons. If you need mm, you have to use the 3D pipeline. Please read up on the trade-offs between 2D and 3D in https://docs.pupil-labs.com/core/best-practices/#choose-the-right-pipeline
- heatmaps: you can automatically generate gaze heatmaps from the Surface Tracker plugin. This assumes that you have an area of interest marked with apriltag markers. Please see: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Also please note that we expect very basic knowledge about eye tracking from our users. We can assist with technical or software-related questions, but if you need a complete guide on study/experiment design, we also offer dedicated support packages: https://pupil-labs.com/products/support/
@user-7daa32 Regarding your second message:
Please, I would like to ask this: I have data in my recording folder; how can one analyze it?
My primary goal is to record people looking at different items and to develop a heat map from their gaze patterns. A secondary goal is to record and graph the pupil diameter during this process.
You can open the recording in Pupil Player for common analysis tasks. You can export the data from all enabled plugins with the export function. You will see an exports folder inside the recording's folder. The data is in CSV format, which you can open with e.g. Microsoft Excel or any other data analysis tool. Please read: https://docs.pupil-labs.com/core/software/pupil-player/#export
Please see my previous comment on notes about heatmaps.
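If you want to go beyond Excel, here is a rough sketch of loading an export in Python (assuming pandas and matplotlib are installed; the path and column names follow the current export format from memory and may differ slightly in older versions):
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export location: <recording>/exports/000 is created by Player's export function
export_dir = "/path/to/recording/exports/000"

# pupil_positions.csv holds one row per pupil datum; "diameter" is in eye-image pixels (2D pipeline)
pupil = pd.read_csv(export_dir + "/pupil_positions.csv")
pupil.plot(x="pupil_timestamp", y="diameter")
plt.show()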
Hi @user-a10852, what version of Pupil are you using? In older versions, the eye videos were written with fixed intervals between frames, which did not necessarily correspond to the real timing (so video frame time intervals did not correspond to the intervals in the timestamp file). Since v1.16 the eye videos are recorded with the same timing as the underlying timestamps. Depending on your operating system, the timestamps are either generated directly on the hardware in the moment of the frame recording, or in software upon receiving the data, with some additional logic to compensate for transmission delays.
Either way you should be fine with mocking a timestamps file with evenly spaced timestamps. How are you generating the file? What exactly does not work?
Hi @user-ec60c7, you are following the instructions to run Pupil from source. I'd recommend you try running our pre-built bundles instead. Please see the download button at the very top of: https://docs.pupil-labs.com/core/ You only need to run Pupil from source if you want to make modifications to the Pupil source code.
Hello guys, I posted a question here on Wednesday and I am still stuck with it 😦 Every time I try to launch pupil_capture from the terminal and try to select one of the eye cams, I encounter this error message: ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version GLIBCXX_3.4.22 not found (required by /opt/pupil_capture/pupil_detectors/detector_3d/detector_3d.cpython-36m-x86_64-linux-gnu.so)
I tried reinstalling as @papr suggested, but that did not do the trick ...
Next I checked the output of sudo apt show libstdc++6
also as @papr suggested
The output of this was a very long terminal message:
But that is as far as I've gotten; I am still stuck. Could somebody please look into this again?
@user-b11159 can you try running sudo apt show -a libstdc++6
@user-b11159 also please try upgrading your gcc version with
sudo apt update && sudo apt install gcc
@user-b11159 can you try running
sudo apt show -a libstdc++6
@user-c5fb8b I did and this is the output:
@user-b11159 also please try upgrading your gcc version with
sudo apt update && sudo apt install gcc
@user-c5fb8b I did, but it tells me, that gcc is already up to date with the newest version
In addition, i forgot to mention again, that I am running Ubuntu 16.04.
@user-b11159 When googling quickly, I found a post where a user had a similar problem because his Anaconda installation used a different libstdc++ version than his system and interfered with it. Are you using Anaconda? Maybe an older version?
Also please check if your system version of libstdc++ supports GLIBCXX_3.4.22 by listing the supported versions with:
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
@user-c5fb8b I used it in the past, but currently anaconda is not installed.
I checked the output of the command, and it seems the support stops with 21
The last lines are these:
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_DEBUG_MESSAGE_LENGTH
@user-b11159 ok, so it's indeed the system version. Can you check your version of gcc? gcc --version
That would be:
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
@user-c5fb8b
@user-1a449f Hi, yes, it can be and has been used for this use case. You can look for previous publications on that topic on our website: https://pupil-labs.com/publications/
@user-b11159 It appears GLIBCXX_3.4.22 is only included in gcc 6 and newer. Please try installing gcc-6 with
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-6
Btw, we explicitly delete libstdc++.so from the bundle. Otherwise the NVIDIA OpenGL drivers will fail to load.
@user-c5fb8b So I did run these commands to install gcc-6 and it seemed to work:
$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ gcc-6 --version
gcc-6 (Ubuntu 6.5.0-2ubuntu1~16.04) 6.5.0 20181026
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
But when trying to select one of the eye cameras for pupil detection, the process still crashes with the same error as before :/
eye0 - [ERROR] launchables.eye: Process Eye0 crashed with trace:
Traceback (most recent call last):
File "launchables/eye.py", line 150, in eye
File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
File "shared_modules/pupil_detector_plugins/__init__.py", line 14, in <module>
File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
File "shared_modules/pupil_detector_plugins/detector_2d_plugin.py", line 13, in <module>
File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
File "pupil_detectors/__init__.py", line 28, in <module>
File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
File "pupil_detectors/detector_3d/__init__.py", line 12, in <module>
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version 'GLIBCXX_3.4.22' not found (required by /opt/pupil_capture/pupil_detectors/detector_3d/detector_3d.cpython-36m-x86_64-linux-gnu.so)
@user-b11159 What's the output of strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
now?
@user-c5fb8b It seems to be still the same:
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_DEBUG_MESSAGE_LENGTH
Could you please reboot and check if it is still the same?
Yes I did a reboot and it is still the same 😦
$ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
…
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_DEBUG_MESSAGE_LENGTH
So does that means it is just not compatible with my current setup because the support for the 22 version is missing ?
@user-b11159 No, you should be able to upgrade to the correct version of libstdc++. Sorry for poking around in the dark here, but I don't have an Ubuntu 16.04 machine available to test how to do this properly. I suspect you might have to upgrade libstdc++ again. Please try sudo apt install libstdc++6 once more, now that you have the correct gcc available.
Thank you very much, that resolved the issue 🙂 !
@user-b11159 great to hear! Sorry for the trouble, normally you should be able to run the bundle without any setup steps. We will look into why this was necessary!
Thanks for the response @user-c5fb8b, I'm generating an evenly-spaced timestamps file with:
eye_timestamps = np.arange(0, eye_vid.duration, f)
np.save("eye0_timestamps.npy", eye_timestamps)
I'm able to open the newly created folder in Pupil Player; however, the video does not play correctly (the FPS is changed to 30 as well) and I receive the error notification: "Advancing frame iterator went past the target frame".
@user-a10852 Ah, it's very important that the number of timestamps matches the number of frames in the video; basically every frame has a timestamp. We don't do any resampling or similar. I assume it might be that the combination of your eye video duration and f (probably your fixed frame rate?) does not yield exactly the same number of frames? Please check that.
Also Pupil Player will generate a lookup table for faster access upon loading the video for the first time. You will probably have to delete it (eye0_lookup.npy) after you modify your timestamps file in order for the changes to have any effect.
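To illustrate the point, here is a rough sketch of how you could generate exactly one timestamp per frame (assuming PyAV is available to count the frames; the filenames and the 120 Hz rate are just placeholders for your setup):
import av          # PyAV, used here only to count the frames in the eye video
import numpy as np

container = av.open("eye0.mp4")                       # hypothetical mocked eye video
n_frames = sum(1 for _ in container.decode(video=0))  # count decoded frames explicitly

fps = 120.0                                           # your fixed frame rate
eye_timestamps = np.arange(n_frames) / fps            # spacing of 1/fps, one value per frame
np.save("eye0_timestamps.npy", eye_timestamps)
# Afterwards, delete eye0_lookup.npy so Player rebuilds its lookup table.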
Hi,
for some of my recordings I get a ValueError: Each element in 'data' requires a corresponding timestamp in 'data_ts'
(Traceback below) when trying to feed the folder to Pupil Player. Any ideas on what went wrong and whether this is fixable post-hoc?
The data folder can be downloaded here (170MB): https://keeper.mpdl.mpg.de/d/167b191b2dd3473f9e69/
@user-141bcd which version of Pupil Player are you using?
@papr v 1.22
Thank you.
@user-141bcd We will have a look and come back to you in this regard.
this is the traceback:
data was originally recorded with hmd-eyes + pupil capture
There are no problems with the majority of the recordings. This is only the 2nd folder out of the ~200 I have processed so far where this has come up. @papr thanks already!
Thank you for providing this information. This is very helpful!
@user-141bcd I cannot reproduce the issue with your recording. I called the corresponding code manually on the gaze files instead of opening the complete recording. Could you please check that you did not upload the wrong recording by accident? If it was incorrect, you only need to upload the gaze.* files. If it was correct, I will download the complete recording and give it another try.
@papr Pardon! When checking back, I realized that my local copy of the gaze.pldata file must have been corrupted when originally downloading it from our data server/cloud. The files I shared with you are the ones on the server (there the .pldata is about twice the size, with everything else being equal). After downloading it again, it works perfectly fine.
Totally didn't see that this was a possibility. Thanks for looking into it and sorry for wasting your time!
Don't worry. I am happy to hear that the problem could be fixed that easily :)
Hi @user-c5fb8b, the timestamp file and the video file have the same number of frames (and the video file has the same fixed framerate "f"). Once in Pupil Player (v1.19), I'm still getting the same error message, with the video not playing correctly, and I notice that the time range and index range do not match those of the source eye video file. Thanks again
@user-a10852 if you open it in player... do you have an existing recording where you replace the eye video? Or are you mocking up an entire recording?
@user-a10852 Also, if you say "the time ranges and index range do not match...", are you referring to the ranges of the world video? I feel like there might be some confusion, as Pupil Player always operates relative to the world video. The full time range being displayed will be the time range of the world video. If your generated timestamps for the eye video lie outside of that range, you won't be able to see your eye video, since all of its frames will be squashed into the first or last world frame.
Alternatively, you can delete all world files and Player will try to generate artificial world timestamps based on the available eye timestamps.
@user-c5fb8b, I am mocking up an entire recording with just an eye video (so there is no world video). Sorry for any confusion, when I refer to "time range and [frame] index range" I am referring to what I see in the Pupil Player (e.g. the video playback bar/scrubber, time range export settings) which doesn't match with the source eye video and timestamp file frames.
Hi, I'm trying to generate a heatmap on Pupil Capture but I keep getting the "cannot add a new surface: no markers found in the image" error message even though I have placed markers in each corner of my screen. It doesn't seem to be recognizing it, though I have managed to generate a heatmap a couple weeks ago following the exact same steps, as far as I can remember, and it was actually pretty quick, only this time it's not working. Any ideas what might be happening?
@user-a10852 did you mock an info file for the recording, e.g. info.player.json? Can you post the content?
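For reference, a minimal mocked info file for the newer recording format could look roughly like the sketch below. The key names are given from memory and may differ between versions, and older Player versions (e.g. v1.19) use info.csv instead; all values are placeholders.
import json
import uuid

info = {
    "duration_s": 60.0,                    # length of the mocked recording
    "meta_version": "2.3",
    "min_player_version": "2.0",
    "recording_name": "mocked_recording",
    "recording_software_name": "Pupil Capture",
    "recording_software_version": "1.22.0",
    "recording_uuid": str(uuid.uuid4()),
    "start_time_synced_s": 0.0,            # same clock as the *_timestamps.npy files
    "start_time_system_s": 1590000000.0,   # Unix epoch seconds
    "system_info": "mocked",
}
with open("info.player.json", "w") as f:
    json.dump(info, f, indent=4)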
@user-7d0b66 are the markers recognized? They should have a green transparent overlay. Maybe they are too small/too far away from the camera? Can you post a screenshot of your view in Capture with the markers visible?
@user-c5fb8b here is the mocked info file
and the timestamps
@user-a10852 Hm, from what I can see this should work. Maybe you encountered a bug in Player here. Can you please share the entire recording folder (including the eye video) with [email removed] For example via Google Drive or any other file-sharing service.
Hello guys, I would like to convert image coordinates that I am retrieving from a "gaze on surface" detection into world coordinates. The easiest way to do this would be to use the extrinsic/intrinsic parameters of the world camera, I think. I am not sure, however, how to obtain them. Could you please tell me how to obtain them after calibration?
Hi, a question about using the Core with an Android smartphone: I calibrate the Core connected to the PC using Capture. Then I record with the Core connected to the Android smartphone using the Mobile app. After copying the data to the PC and opening it in Player, does it automatically use the last calibration made? Thanks!
@user-c5fb8b here it is
@user-b11159 You can also just subscribe to gaze and gaze on surface, and match them based on their timestamps. Normal gaze is already in world (scene image) coordinates. Alternatively, you can map the gaze back into world coordinates using the transform matrices that come with each surface detection.
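As a rough sketch of the matching approach (assuming the default Pupil Remote address; the exact topic and field names are written from memory and may vary between versions):
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port (default address shown here)
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")      # gaze in scene-image coordinates
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")  # surface events incl. gaze_on_surfaces

gaze_by_ts = {}
while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    if topic.startswith(b"gaze."):
        gaze_by_ts[msg["timestamp"]] = msg["norm_pos"]
    elif topic.startswith(b"surfaces."):
        for g in msg.get("gaze_on_surfaces", []):
            # base_data references the original gaze datum: (topic, timestamp)
            _, base_ts = g["base_data"]
            if base_ts in gaze_by_ts:
                print("surface:", g["norm_pos"], "<-> scene image:", gaze_by_ts[base_ts])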
Oh and for the record, I'm using the markers that were made available on the company's website.
@user-7d0b66 As I assumed, these are a bit too small. When you get closer to the markers you will notice that they get a green/transparent overlay in Capture, signaling that they were detected. I recommend you print them a bit larger. You will have to experiment with how large they need to be in order to be recognized consistently for your setup. We also added another PDF in our docs that shows only a single marker per page. You can use this to print the markers as big as you need them (e.g. by printing only 6 per page or similar). Here's the link: https://github.com/pupil-labs/pupil-helpers/blob/master/markers_stickersheet/tag36h11_full.pdf?raw=True
Please also always make sure that you have a large-enough white border around the marker. The width of the white border should be about twice the width of one of the blocks/pixels of the apriltags. Otherwise detection might also be less stable.
@user-c5fb8b, sent! Thanks
@user-a10852 I got it, I'll come back to you tomorrow!
@PFA great, thanks!
@user-c5fb8b Thanks! Changing the size of the markers worked (partially). I'm still having some trouble generating the heatmap, as the detection of the markers seems to be very unstable; it goes on and off repeatedly. Any other tips on how I can fix this? Here's how it looks now:
@user-7d0b66 Also, you can't have the markers show up again in the world camera's view of the screen. This 'echo' breaks the tracking.
@user-a10852 I identified the problem: your eye video was not recorded with Pupil and has an incompatible encoding. You might be able to fix this by converting or re-encoding it, e.g. by converting it from H.264 to something like MJPEG. The specific problem is that Pupil can only handle streams where packet PTS equal frame PTS, which is not necessarily the case for H.264-encoded videos. While we also work with H.264 in some places, we ensure that these videos get created with matching PTS.
Additionally, I noticed that your eye video is quite large (500x700 px or something). If you intend to use our offline analysis tools, you will have to shrink the image, as the pipeline is only optimized for images of max. 400x400 pixels.
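A rough sketch of such a conversion (assuming ffmpeg is installed; the filenames and target size are placeholders, not a definitive recipe):
import subprocess

# Re-encode the H.264 eye video to MJPEG (packet PTS then equal frame PTS)
# and scale it down to stay within the ~400x400 px the pipeline is optimized for.
subprocess.run([
    "ffmpeg", "-i", "eye0_original.mp4",
    "-vf", "scale=400:-2",      # width 400 px, keep aspect ratio, even height
    "-c:v", "mjpeg", "-q:v", "2",
    "eye0.mp4",
], check=True)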
Hi, I'm totally new to Pupil Core (it arrived yesterday). I'm having some trouble with pupil detection: no matter how I position the eye cameras, they fail to detect my eyes (pupils) in the center of the image at a good resolution and focus.
I already tried to change some settings but it didn't work
Hi @user-fa8a06 can you share a screenshot of the eye video?
@user-fa8a06 Have you seen this part of the docs yet? https://docs.pupil-labs.com/core/hardware/#headset-adjustments
Yes, I've tried all the possible configurations, with and without the extension arms. The attached screenshot is the best image I've obtained, with the camera arms extended to the maximum.
You should be able to rotate the eye cameras along their long axis upwards (a twisting motion)
@user-fa8a06 The video Rotate Eye Camera shows possible motions that you can do: https://docs.pupil-labs.com/core/hardware/#rotate-eye-camera
Thank you for your answers. My Pupil Core headset is slightly different from the one shown in the videos, and the movement of the cameras is more limited. Is it a different version, or am I doing something wrong?
The headset in the video is an older model, but you can basically move, twist and turn the camera in any direction. Let me try to show an example:
We are also actively working on updating the animations for our newer models.
ok, thank you. I'll try to rotate the cameras more than I already did. Is there a way to change the zoom of the cameras and/or their resolution?
@user-fa8a06 This is the movement that you have to do to center the pupil vertically in the image
You can choose between two different resolutions in the settings (you can see it in the second screenshot you sent). But you can also move the camera closer to or further from the eye by sliding it: https://docs.pupil-labs.com/core/hardware/#slide-eye-camera
now it's working quite well, thanks!
@user-fa8a06 glad we could help! 🙂
Another question, if you can help me: I can't install the application on my macOS version 10.11.13. Is it a version problem?
@user-fa8a06 Yes, currently, the bundles are only supported on macOS 10.12 or higher. All future releases require macOS 10.13 or higher.
macOS 10.12 has reached its end-of-life and it has become too difficult for us to maintain the bundles on this and older versions of macOS.
@user-7d0b66 Also, you can't have the markers show up again in the world camera's view of the screen. This 'echo' breaks the tracking. @mpk So basically, in order for it to work normally, I'd have to already have an image displayed on my screen so the markers can be tracked properly without echoing, is that correct? This might be a silly question, but how can I add an image onto it, then? I need to generate heatmaps from a specific set of images, but I'm not sure exactly how. Could you clarify this for me?
I can find the analysis plugins in Pupil Player. Do we really have them on the Pupil Core headset itself?
Please, why is the config graph covered with black? The markers blinked with black color.