Hi, I just got my hands on Pupil Labs' Core glasses. I have also gone through the documentation about connecting to the Network API, which states that the API can be integrated with any programming language that supports ZeroMQ and msgpack. I want to develop a C# application where I can see live video coming from the device. Looking at the demonstrations, most of the code is written in Python. I want to know whether that is possible?
@papr is there a way to limit the recording time in Capture to a certain time period?
Hi, I'm trying to manually visualize each eye's pupil video on the same screen. I loaded the numpy arrays of eye0_timestamps and eye1_timestamps for each video and used them to synchronize; however, the two eyes are misaligned: the eye videos show a time difference on the same blink. Could you give me some advice on how to align each eye's pupil video?
Hey, happy Halloween! @papr Previously, Pupil Capture ran normally on my computer. Recently, I followed the 'IPC Backbone' steps from the official documentation (https://docs.pupil-labs.com/developer/core/network-api/#ipc-backbone) to try to output data in real time. Now when I open Pupil Capture and calibrate, Pupil Capture stops responding when the calibration ends, as shown in the picture. What is the problem and how can I solve it? Thanks!
Hey, I know the Pupil Core software is not made for remote eye tracking, but does it work in theory? I created a virtual webcam using akvcam, but the Pupil Capture software is not showing any virtual webcam. Is the software actively filtering out virtual webcams?
I believe libuvc returns only UVC devices. Can a virtual camera from akvcam be presented as a UVC device? This is just for research purposes, to see if remote eye tracking is even possible and what the implications are.
@user-6b24c6 Hey, unfortunately, we only provide the examples in Python. Nonetheless, it is possible to use C# to access the Network API. You can have a look at our hmd-eyes project as a reference, which uses the Network API to integrate our realtime eye tracking into Unity: https://github.com/pupil-labs/hmd-eyes
@user-765368 Not from the user interface of Pupil Capture. Instead, you would have to use a simple script that uses the Network API to remote control Pupil Capture. Read more about it here: https://docs.pupil-labs.com/developer/core/network-api/
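A minimal sketch of such a remote-control script, assuming the default Pupil Remote address tcp://127.0.0.1:50020 ("R" and "r" are the documented Pupil Remote commands for starting and stopping a recording; the function name and duration are illustrative):

```python
import time

import zmq


def record_for(duration_s, address="tcp://127.0.0.1:50020"):
    """Start a Capture recording via Pupil Remote, wait, then stop it.

    Pupil Remote listens on port 50020 by default; adjust `address`
    to match the port shown in Capture's Pupil Remote plugin.
    """
    ctx = zmq.Context.instance()
    socket = ctx.socket(zmq.REQ)
    socket.connect(address)
    try:
        socket.send_string("R")          # "R" starts a recording
        ack_start = socket.recv_string()  # Capture acknowledges each command
        time.sleep(duration_s)
        socket.send_string("r")          # "r" stops the recording
        ack_stop = socket.recv_string()
    finally:
        socket.close()
    return ack_start, ack_stop
```

Calling e.g. `record_for(60)` would record for roughly one minute. Note that the actual recording duration also includes the round-trip latency of the two commands.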
@user-e94c74 Can you confirm that the eyes are synced when using Pupil Player's eye overlay plugin? Also, please be aware that it is important how you extract frames from the video. Using OpenCV is not reliable for this. Check out our frame extraction tutorial for details: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
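For the alignment question above, matching frames by nearest timestamp (rather than assuming frame indices line up, since the eye cameras are not frame-locked) is usually the robust approach. A sketch, assuming the eye timestamps have already been loaded as sorted numpy arrays (the function name is illustrative):

```python
import numpy as np


def closest_frame_indices(ref_ts, other_ts):
    """For each timestamp in ref_ts, return the index of the closest
    timestamp in other_ts. Both inputs must be sorted 1-D arrays,
    e.g. loaded from eye0_timestamps.npy / eye1_timestamps.npy.
    """
    # Insertion points of ref_ts into other_ts
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left = other_ts[idx - 1]
    right = other_ts[idx]
    # Step back one index where the left neighbour is closer
    idx -= (ref_ts - left) < (right - ref_ts)
    return idx
```

With this, frame `i` of eye0 would be displayed next to frame `closest_frame_indices(eye0_ts, eye1_ts)[i]` of eye1.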
@user-594d92 Thank you for reporting this. Could you share the following information/files:
- Pupil Capture version
- the capture.log file (you can find it in your home directory -> pupil_capture_settings)
- the exact commands that you send to Capture via the Network API
@user-d8853d Our UVC backend only lists cameras that fulfil a specific set of requirements. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
You will likely have to write your own video backend in order to access your virtual cameras. You can use this Realsense video backend plugin as a reference: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11
I think your best chance would be to write a custom "pupil detector plugin" that processes the remote video, doing the head pose estimation, double pupil detection, etc. Afterward, you will have to write a custom gaze mapping plugin that is able to run a calibration procedure fit for your use case.
@papr - Pupil Capture version: 2.4.0 - I have uploaded the other files to: https://github.com/mimic777/Pupil-Capture
@user-594d92 Thank you. This is the relevant Github issue: https://github.com/pupil-labs/pupil/issues/1733#issuecomment-554641134 You should be able to work around the issue by running Capture on a user account with ASCII characters or by changing your TEMP directory to a path that only includes ASCII characters.
So I know that a dispersion based algorithm is used to detect fixations. How does that work for surface tracking? Is the same algorithm mapped onto the area?
Also, to clarify: the degree of visual angle (i.e. the default dispersion of 1.50 in Player) is where the pupil gaze is with respect to the x and y world coordinates?
Do you know of anyone that has calculated saccades using Pupil Labs output? When defined as the period between two fixations when the eye changes its position at a specific velocity, would it be possible to write code that calculates saccades?
Also, based on my reading of the code, it seems the Salvucci dispersion metric is used? I am using a max threshold of 1.08 with a duration of 150-350 ms.
@user-908b50 Hi 👋 Dispersion is calculated based on the 3d gaze directions in scene camera space. Fixations are mapped later to the surface. Therefore, something that looks like a fixation in scene camera space might not be a fixation on a surface, if the surface was moving. Ideally, one would use the surface mapped gaze to calculate the dispersion but this is currently not implemented in Capture/Player. This is mainly because it is not clear how dispersion in visual degrees can be calculated in surface coordinates.
"Do you know of anyone that has calculated saccades using Pupil Labs output?" Check out @user-2be752's work: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
"When defined as the period between two fixations when the eye changes its position at a specific velocity, would it be possible to write code that calculates saccades?" It is possible to calculate gaze velocity, yes. It just might be too noisy when using a low confidence threshold and too sparse when using a high confidence threshold.
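A rough sketch of such a gaze-velocity computation (the function name and the 0.8 confidence threshold are illustrative, not from Pupil's code base):

```python
import numpy as np


def gaze_velocity_deg_per_s(directions, timestamps, confidences, min_conf=0.8):
    """Angular gaze velocity between consecutive high-confidence samples.

    directions  : (N, 3) array of gaze_point_3d vectors (scene camera space)
    timestamps  : (N,) array of gaze timestamps in seconds
    confidences : (N,) array of gaze confidences
    """
    keep = confidences >= min_conf
    d = directions[keep]
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    t = timestamps[keep]
    # Angle between consecutive unit vectors, in degrees
    cos = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos))
    return angles / np.diff(t)
```

As noted above, the result will be noisy: low-confidence gaps make the time deltas sparse, which inflates or deflates the apparent velocity, so a thresholding-based saccade detector built on this needs careful filtering.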
Hi! Great, thanks for the link Pablo, will check it out. Not sure about velocity as of now, but I am definitely interested in counting the total number of saccades for my project. Perhaps that would simplify things. Alright, the part about a lack of mapping between visual degrees and surface coordinates makes sense, especially because the surface moves! So basically, in scene camera space (from the world camera recording), image vectors (x, y, and z directions) are used for offline gaze (and later fixation) detection. So the 3D gaze vectors (as image vectors?) are used to calculate dispersion in visual degrees.
@user-908b50 My point regarding the saccades is that there are more eye movement types than just fixations and saccades. Defining saccades as the period between fixations might not be correct.
"image vectors": I would try to avoid this term as it mixes two different coordinate systems: (1) the 2d image plane that includes distortion, and (2) the 3d camera space that does not include distortion. The gaze norm_pos coordinate system is equivalent to (1). The gaze gaze_point_3d lies in (2). Dispersion is calculated based on normalised gaze_point_3d vectors, i.e. in (2). This is the function that calculates the maximum dispersion based on a set of vectors: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L130-L133
The coordinate descriptions for reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system
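Based on the linked function, the maximum dispersion calculation can be sketched like this (a simplified re-implementation for illustration, not the exact Pupil source; it undoes scipy's "1 - cosine" distance to recover the actual angle, as discussed further below):

```python
import numpy as np
from scipy.spatial.distance import pdist


def max_dispersion_deg(gaze_points_3d):
    """Largest angular distance (in degrees) between any pair of gaze
    vectors. Expects at least two 3-D vectors.
    """
    vectors = np.array(gaze_points_3d, dtype=float)  # copy, do not mutate input
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    # pdist's cosine metric returns 1 - cos(angle) for each pair;
    # undo the "1 -" to get back the cosines of the pairwise angles.
    cosines = 1.0 - pdist(vectors, metric="cosine")
    # The largest angle corresponds to the smallest cosine.
    return np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0).min()))
```

A fixation detector would then compare this value against the user-chosen maximum dispersion threshold (e.g. 1.50 degrees) for each candidate window of gaze samples.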
hey folks, I've got a problem: I have trouble when I'm in binocular mode. I get confidence values of almost 1 for both eyes, but after calibration and validation the error is often more than 30-40%, and I already learned that I have to be under 30% to get good data. It is completely different in monocular mode: when I only track one eye, I am working with a 13% error... what am I doing wrong?
"the error is often more than 30-40%" Could you specify what the message says that includes these values?
no... I will try it with my coworker again (maybe it's because I have make-up on? someone seriously asked me if that could be the reason it doesn't work properly...). I'll get back to you in a few minutes.
@user-3c006d Make-up can lead to bad pupil detection, yes. You should be able to verify this by looking at the eye window. When wearing mascara, the pupil detection often finds the eye lashes as false-positive detections.
@user-3c006d Nonetheless, I think you are referring to the amount of data that is discarded for the 2d calibration. Can you confirm that you were attempting a 2d calibration?
I tried both 2d and 3d, and yes, when I switched to the algorithm view it recognised my eye lashes as possible pupils.
Discarded data is not problematic per se. I think running a validation and looking at the validation accuracy value (measured in degrees) is more important to judge the quality of a calibration.
@user-3c006d You can use the ROI mode in the eye videos to set a Region of Interest that excludes the eye lashes as much as possible. That should help with the false positives.
okay, thank you! I tried it with my coworker now, and since he is a man he doesn't use any make-up... it worked perfectly ^^
jeez, I would never have thought of that 😄
Hello everyone, I have a small (or big) issue with the Pupil Labs fixation data. I am plotting each norm_pos_x and norm_pos_y on the corresponding frame (I tried multiple ones between start_frame and end_frame), but unfortunately it is not being plotted at the exact location on the video where the fixation should be (it does not match the green circle that shows the fixation on the video). Here is a picture to give you an idea. What should I do to solve this problem?
@user-8effe4 What are you using for the frame extraction? Did you know that we have a tutorial that shows the recommended way to extract frames and also draws the gaze point for a single frame? If not, I recommend having a look: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
@papr in fact I tried that, and I am now working in MATLAB using the same principle. The only problem is that sometimes (about 4% of the time) it's not correct. And I am using fixations instead of gaze.
@user-8effe4 So you extracted the frames with ffmpeg in your case, too?
@papr yes I did, and I am using fixations (not gaze)
I am getting this error with some videos, but they are segmented fine, so I don't think that is the issue here.
@user-8effe4 Can you confirm that the tutorial works as expected? (with gaze data)
@papr okay, I will try that and get back to you
@user-8effe4 Cool! Once you have successfully replicated it, please try it with fixations, too. Basically, the idea is to start from common ground that we know works, and then change one variable at a time until we find the cause of the issue. 🙂
@papr Okay, I just tried it. It is working for most of the cases with gaze (some are still far off). Also, for fixations: I think even where there is a frame range for a fixation (for example from frame 218 to 224), the fixation sometimes moves within this range. I tried using the first frame, the last frame and the middle frame, but still always have this problem.
Is it possible to freeze the 3d model with a notification?
@user-430fc1 yes
@user-430fc1 https://github.com/pupil-labs/pupil/pull/1575
@papr Thanks!
Hello all. I have a question regarding the resolution and framerate for the eye camera. Does lower resolution affect pupil detection accuracy? If not, is there any merit in using 400x400px instead of 200x200px? I'm assuming that leaving image quality aside, and everything else being equal, having more fps —i.e. lower resolution— is better. Is that assumption correct?
@user-5ef6c0 actually, 192x192 has better pupil detection than 400x400
And it allows the usage of 200 Hz. Therefore, these are the recommended settings.
@papr thank you
Two calibration questions: when using the single marker choreography, does the size of the printed marker matter? Also, for my setup (people stacking blocks vertically over a horizontal surface, see pic below): should the marker be on the table's surface (i.e. horizontal), or should it be displayed vertically, or perpendicular to the viewing direction (i.e. 30-45 deg above the horizontal plane)?
@user-5ef6c0 The calibration is always relative to the scene camera. The marker is detected best if it is perpendicular to the scene camera's viewing direction, i.e. the concentric circles appear circular in the camera image.
The marker size matters for the detection. If it is too small, it will not be recognized.
@papr thank you. The size you provide in your A4 pdf file should work well at the distance shown here, right?
@user-5ef6c0 I think so, yes.
@papr Good point on saccades! Okay, great! Thanks for correcting me on the term. Based on the function, first the distances between the vector pairs (x, y, z) are calculated and then the largest distance is subtracted from 1.0. Each dispersion value is then compared to a maximum dispersion threshold (chosen by the user) to detect a fixation. So basically you use the radius dispersion metric (not Salvucci).
@user-908b50 The reason why we subtract the pdist result from 1.0 is that the cosine metric is defined as 1 - (u @ v) / (norm2(u) * norm2(v)) in order to be a proper distance metric: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html But we do not want the result of the distance metric, we want the actual angle.
@user-908b50 I am not sure what you mean by the radius metric. I will use this paper as a basis: https://cms.uni-konstanz.de/fileadmin/archive/informatik-saupe/fileadmin/informatik/ag-saupe/Lehre/WS_2016_2017/ET_TAP/p71-salvucci.pdf The only occurrence of the term "radius" is "A circular area is usually assumed, and the mean distance from each sample to the fixation centroid provides an estimate of the radius." on page 74. This is not what we do. We actually calculate the largest dispersion between all possible gaze vector pairs. But yes, we implemented an I-DT fixation detector (section 3.2, page 74). I am not sure which algorithm you are referring to by "Salvucci". Could you elaborate on that?
Dear @papr, it seems like surface fixations are calculated based on the original fixation data. Why do we generate multiple fixations based on the transformation of the original one, rather than just calculating new surface fixations?
@user-e94c74 Check out this comment: https://discord.com/channels/285728493612957698/285728493612957698/773114352076455956
Oh, I missed that comment. I just wondered how to interpret the surface fixation features; since there were several surface fixations from the same one, we can use the first element or calculate the mean value among the same fixation. Thank you for the fast reply.
@user-e94c74 These locations will only differ significantly if the surface moved relative to the scene camera during the fixation. So you could calculate the mean and the variance. If the variance exceeds a threshold of your choosing you can discard the fixation for too much surface movement.
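A sketch of that mean/variance filtering, assuming a pandas DataFrame loaded from the fixations_on_surface export (the function name and the 0.01 threshold are illustrative, not from Pupil's code base):

```python
import pandas as pd


def stable_surface_fixations(df, max_std=0.01):
    """Collapse fixations_on_surface rows to one row per fixation id,
    discarding fixations whose mapped location varied too much
    (i.e. the surface moved relative to the scene camera).

    df must contain 'fixation_id', 'norm_pos_x', 'norm_pos_y' columns,
    as exported by Pupil Player. max_std is in surface coordinates.
    """
    grouped = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]
    means = grouped.mean()
    stds = grouped.std().fillna(0.0)  # single-sample groups have NaN std
    stable = (stds <= max_std).all(axis=1)
    return means[stable]
```

Using the standard deviation rather than the raw variance keeps the threshold in the same (normalised surface) units as the coordinates themselves.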
Hello, I exported some data in a fixations-on-surface CSV file. I'm just a bit confused because there are multiple lines with the same fixation id but different x, y coordinates on the surface. It makes no sense that a fixation moves on the surface, so maybe you could explain it a bit, as I couldn't find documentation relating to surface exports. Thanks in advance ^^
@user-19f337 actually, it makes sense that it can move, as we do not detect fixations on the surface but map existing ones to the surface. See my comments directly above your question.
If it's a fixation on the surface, the eye should stay on the same spot on the given surface right? The surface moves, the eye moves and should follow that same point on the surface as a smooth pursuit.
@user-19f337 This case would result in a false-negative fixation detection, as it would look like a smooth pursuit in scene camera coordinates and will therefore not be detected as a fixation.
In order to find fixations on a moving surface, the dispersion needs to be calculated in surface coordinates instead of scene camera coordinates. As mentioned in the linked comment above, it is unclear how one would calculate this though.
Hello, I want to try to develop a plugin. Where should I start? I read the Pupil Labs website and found the following sentence in the plugin.py file: "A simple example Plugin: 'display_recent_gaze.py'. It is a good starting point to build your own plugin." (line 24) https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/plugin.py#L25-L167 Actually, I don't quite understand this sentence. How should I proceed?
@user-594d92 It is referring to this file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/display_recent_gaze.py
Although, it is actually not the best starting point. Try this instead:
from plugin import Plugin
from pyglui.cygl.utils import draw_points_norm, RGBA


class Display_Recent_Gaze(Plugin):
    """
    DisplayGaze shows the three most
    recent gaze positions on the screen
    """

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.order = 0.8
        self.pupil_display_list = []

    def recent_events(self, events):
        for pt in events.get("gaze", []):
            self.pupil_display_list.append((pt["norm_pos"], pt["confidence"] * 0.8))
        self.pupil_display_list[:-3] = []

    def gl_display(self):
        for pt, a in self.pupil_display_list:
            # This could be faster if there were a method to add multiple colors per point
            draw_points_norm([pt], size=35, color=RGBA(0.2, 1.0, 0.4, a))

    def get_init_dict(self):
        return {}
I changed two things:
1. Inherit from Plugin: this allows you to enable the plugin from the plugin manager.
2. Changed the color in draw_points_norm: this distinguishes it from the system plugin. Please be aware that you need to calibrate to see the visualization.
@papr Hi, I tried to use Pupil v2.5 but got this error. I used Pupil v2.1 previously. How can I fix this problem? Thank you.
(pupil36) E:\pupil\pupil2.5\pupil\pupil_src>python main.py player
Error calling git: "Command '['git', 'describe', '--tags', '--long']' returned non-zero exit status 128." output: "b'fatal: not a git repository (or any of the parent directories): .git\n'"
Traceback (most recent call last):
  File "main.py", line 35, in <module>
    app_version = get_version()
  File "E:\pupil\pupil2.5\pupil\pupil_src\shared_modules\version_utils.py", line 74, in get_version
    version_string = pupil_version_string()
  File "E:\pupil\pupil2.5\pupil\pupil_src\shared_modules\version_utils.py", line 57, in pupil_version_string
    raise ValueError("Version Error")
ValueError: Version Error
@papr ok, thank you!👍
@user-a98526 ~~Are you running from source?~~ You are running from source but you did not clone the repository. Instead you downloaded it using a different method.
yes
@user-a98526 You need to use git clone ... to get the repository. Please check the developer docs for details.
Thanks for your help. I am trying this installation method. By the way, does Pupil Invisible have applications similar to Pupil Player and Pupil Capture? I found that using Pupil Invisible is not as easy as Pupil Core.
Or can I use the data from Pupil Invisible in Pupil Player?
"I found that using Pupil Invisible is not as easy as Pupil Core." @user-a98526 I am very surprised by this statement. Can you specify which parts you feel are less easy?
Pupil Invisible is meant to be used with the Pupil Invisible Companion app for Android. The app can calculate gaze without calibration in real time. Recordings are uploaded to Pupil Cloud automatically. Pupil Cloud is the primary analysis platform for Pupil Invisible.
Hi, I do not mean to hijack this conversation but I'm looking to get a pupil tracker to study attention/engagement while the subjects (students) participate in lecture (active or passive learning). Without programming and coding knowledge, I'm looking for something that easy to use, robust in data collection and analysis (best if wireless). Invisible or Core?
@user-c780d4 I would strongly recommend using Invisible. It is much easier to use for your type of setup. Pupil Core is mostly meant for controlled lab environments.
@user-c780d4 Feel free to contact our sales team at info@pupil-labs.com if you have detailed questions in this regard. 🙂
Perhaps because of the plugin function of pupil core, I just started using pupil invisible. Pupil has helped me a lot in robot control. Thank you for your help.
@papr Do I get raw data with invisible? Do I calculate my own attention/engagement? Or, your software would do that?
@user-c780d4 You get gaze data in scene camera coordinates, and soon we will be releasing marker-based surface tracking to Pupil Cloud, which allows you to generate heatmaps for predefined areas of interest. Do you have a reference for what you would use as a higher-level metric for attention/engagement?
@papr From some papers I read, pupil size, saccades and gaze are all used to analyze attention. I was hoping that, since Invisible isn't a "research device" like Core, there would be an algorithm to calculate that automatically.
@user-c780d4 This is definitely the vision that we have for Pupil Invisible and Pupil Cloud. But Pupil Cloud is being built from the ground up. It will take a while until we are at the point where we can calculate a high-level attention metric. Pupillometry is currently not available for Pupil Invisible, but we are working on that as well.
Should I use the Core then?
@user-c780d4 if you need access to pupillometry data, Pupil Core is currently your only option. 😕
@papr Core isn't wireless, correct? Can data be collected using your software without writing code? I heard that it has to be Python.
@user-c780d4 Please contact info@pupil-labs.com in this regard. You do not need to write code to collect data. But you might need to calculate higher level metrics like attention yourself.
Hello,
My question might be repetitive, but it is really urgent as I ran out of time after trying lots of techniques.
I have been trying to plot the coordinates from a "fixations_on_surface" file on the original picture of the surface, but it never worked (because the surface is always distorted, and Pupil Capture/Player converts these coordinates in a way that makes them correct on the surface, which makes them not useful for plotting on the original picture).
So, I have used the SURF method to compare each frame of the video with the original surface image and create an affine model, which I used to transform the fixation coordinates (norm_pos_x and norm_pos_y) to their equivalents on the surface.
But still, even now, when the surface is bigger than the screen, or the subject moves a lot, the results are not good (full of errors).
Is there any method/technique/algorithm I can apply to solve this problem?
Any help would be really appreciated.
@user-8effe4 you are correct that Player maps gaze and fixations into an undistorted surface space. Either (1) you plot the undistorted mappings on an undistorted image of the surface (recommended), or (2) you use the scene camera-based gaze and fixations (before surface mapping) and plot them on the distorted scene camera video.
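For option (1), a common pitfall is the coordinate convention: surface coordinates are normalised with the origin at the bottom-left, while image pixels have their origin at the top-left, so the y axis has to be flipped. A small conversion helper (illustrative, assuming the surface definition matches the image):

```python
import numpy as np


def surface_to_pixel(norm_x, norm_y, width, height):
    """Convert surface-normalised coordinates (origin bottom-left)
    to image pixel coordinates (origin top-left).
    """
    px = np.asarray(norm_x, dtype=float) * width
    py = (1.0 - np.asarray(norm_y, dtype=float)) * height
    return px, py
```

The returned pixel coordinates can then be drawn on the surface image, e.g. with matplotlib's plt.imshow(image) followed by plt.scatter(px, py).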
Hi, are any of you using the Pupil Mobile app for research? I contacted [email removed] and they said it's deprecated but still usable. Is anyone here working with it? I want to use it in a strict experimental setting, so I need to make sure it works 100%. Any experiences with that? Otherwise, I'll stick to a cable to the computer. Also, if I use Pupil Mobile but stream it to my desktop and start calibration, validation and recording from there, is there anything problematic about that? I would still use Pupil Mobile for the streaming part; any negative points about that?
@user-6e3d0f if you use Pupil Mobile, record on the phone. Recording the stream in Capture is not reliable.
But if I start it from the desktop where it is streamed to, is it saved on the computer or on the mobile?
If you hit the R button in Capture on the left, it will record on the computer. If you use the remote recording plugin, it will record on the phone
@papr so if I plot the "fixations_on_surface" coordinates on the clean image I used for the experiment, would they fit? Or am I required to do some data processing?
If the surface definition is aligned to fit that image then yes.
@papr I think the surface in the video is a bit rounded, and the subject moves his head a lot. I don't know if that is a deal breaker or not.
@user-8effe4 depends on how well the surface is detected. If you want you can share the recording with data@pupil-labs.com and we can have a look.
OH OK THANKS I WILL
Is there a "getting started" document for pupil mobile? I've downloaded the app on my phone and can do a remote start/stop from capture. I've got it set (I think) to save data on my phone, but all the files appear to be empty. When I try to open the saved folder in player it crashes the program. I'm sure I'm just missing a step somewhere, but I can't seem to find any document that tells me how to set it up on my phone or on the capture program. Also, if using mobile isn't a great option because it is no longer supported, does anyone have a good recommendation for a mini computer that I could place in a backpack?
Hi, I have two questions: 1) We have been getting fairly sizeable differences in pupil diameters between left and right eyes when recording binocularly (1-2 mm difference, with some variability). Are we doing something wrong? We have previously worked with SMI mobile eye tracking and never got such big L-R eye differences. 2) We've also been getting very large pupil diameters (6-9 mm). In the same testing environment, our SMI glasses gave much smaller numbers. Does the pupil diameter (diameter_3d in the Pupil Player export) really correspond to proper millimeters?
Hello
Please, what is responsible for the camera windows hanging and becoming unresponsive?
@user-92dca7 Could you please respond to @user-608a0d when you have time? Message link: https://discord.com/channels/285728493612957698/285728493612957698/775347524859985942
hey guys, I would like to use the RealSense (305) as a world camera and tried the following plugin: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11. Unfortunately, the code architecture seems to have changed so much that it is no longer possible to use the plugin, since the following import (line 18 in the plugin) is no longer available: from camera_models import load_intrinsics. Does anyone know how I should proceed? Thanks in advance!
@user-da7dca The easiest way would be to use Pupil v1.23. Alternatively, one would need to adjust the plugin in the following way:
Line 18
- from camera_models import load_intrinsics
+ from camera_models import Camera_Model
Line 332
- self._intrinsics = load_intrinsics(
+ self._intrinsics = Camera_Model.from_file(
Hi @user-608a0d, yes, pupil diameter estimates are reported in mm. Both eyes are measured independently, with the global scale of each radius depending on the quality of the fit of the respective 3D eye model. If the eye model is estimated too far away from the eye camera, this will lead to an overestimation of the pupil radius. This could explain the differences you observe between the left and right eye and the overall size. If possible, prior to your actual measurement, check that both the left and right eye model looks good and then freeze both models during your experiment (possible in the GUI).
@user-7daa32 Hi, it looks like there is an error that causes the software to hang. Could you share the capture.log file after reproducing the issue? You can find it in the pupil_capture_settings folder in your home directory.
@user-8d1ce2 Could you share the player.log file after reproducing the issue when opening the recording with Pupil Player? You can find it in the pupil_player_settings folder in your home directory.
@papr I can't find such a folder for version 2.5, just a WinRAR archive file.
@user-7daa32 This folder is not part of the installation or download folder. Please try searching for pupil_capture_settings in your Windows search bar.
@papr What is the best way for me to share the log file?
@user-8d1ce2 just upload it here 🙂
Log from attempt at mobile capture.
@user-8d1ce2 The logs indicate that the application shuts down but there is no indication of a crash. Could you share the recording with [email removed] such that we can try to reproduce the issue?
@papr (screenshot attached)
@user-7daa32 The above screenshot shows the correct folder. Please share the capture.log file.
@papr OK, I've sent the data
@user-8d1ce2 it looks like something went wrong during the conversion of the original recording. Do you still have a copy of it on the phone? Could you share it as well?
@papr Yes, I still have it on my phone, but it will take me a bit to figure out how to get it to you.
@user-8d1ce2 The easiest way should be to connect the phone to the computer, enable file transfer in the USB setting notification, and copy the recording from Movies -> Pupil Mobile -> local_recording -> ...
@papr That was actually what I did the first time, so I don't know that sending it again will change anything, but I've gone ahead and done that. You should receive it shortly.
@user-8d1ce2 The first shared recording was partially converted by Player. I hope that you were able to copy the unmodified files from the phone to reproduce the issue.
Unless, you opened the recording without copying the recording from the phone to the computer in the first place?
@papr Got it. Hopefully the new version will work then.
@user-8d1ce2 The second recording has also been modified. 😕
An unmodified version must not have an info.player.json or info.mobile.csv file. Instead, there should only be an info.csv file (plus the other video-related files).
@papr OK, I'll try it one more time. I didn't delete the folder on my computer before transferring, so the added files were probably still left behind. It should be there in a minute.
@papr shoot, it's saying the zipped file is too big to e-mail. I just sent you a link to a Google folder.
@user-8d1ce2 This time you shared the original 👍 I was able to open it successfully in Player. Could you give it another try, too? Ideally, install the latest v2.5 version first. This way we can be sure to use the same version.
@papr OK, so it opened, but there isn't any eye data in it. Are you seeing anything other than the world camera?
@papr I am using v.2.5
@user-8d1ce2 Yes, this is expected for a Pupil Mobile recording. Please run the post-hoc processing https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
@user-908b50 The reason why we subtract the
pdist
result from 1.0 is that the cosine metric is defined as
py 1 - (u @ v) / (norm2(u) * norm2(v))
in order to be a proper distance metric.https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html
But we do not want the result of the distance metric but the actual angle. @papr okay, thanks a lot! pdist now makes sense! It was indeed the Salvucci dispersion metric. I used the attached paper to inform the maximum dispersion threshold; it compares across different dispersion metrics. I was confused between the radius threshold and the Salvucci dispersion metric based on the formula. Based on the authors' suggestions, a range of 0.7-1.3 degrees radius is best; a 1.0 radius threshold is equal to about 1.8 in the Salvucci dispersion metric.
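To make the pdist discussion concrete, here is a minimal sketch (plain NumPy, no SciPy required) of recovering the actual angle from the cosine distance; the function name is my own:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two vectors, recovered from the cosine distance."""
    # cosine distance, as defined by scipy's pdist "cosine" metric:
    cos_dist = 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # undo the "1 -" to get back the cosine similarity, then take the arccos
    return np.degrees(np.arccos(1.0 - cos_dist))

print(angle_deg(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # orthogonal vectors
```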
Here it is!
Hello everyone - we just purchased a Core with USB-C connector and I was just wondering if the community here has some thoughts on "recommended" cameras to pair with it... couldn't really find this sort of info on the website so thought someone here might have an idea 🙂
For the world camera I mean 😉
Hi, currently I'm looking into a timestamp and synchronization problem. It seems like start_time in info.player.json and the timestamp in exports_info are different; the timestamp in exports_info is the time of the first frame of the world camera. The two timestamps differ by a few thousand milliseconds.
Here I have a question: I want to fully utilize the start time in info.player.json (both Pupil Time and Unix Time) and the first-frame time as in exports_info DURING THE RECORDING WITH PYTHON/ZMQ. Is there a way to figure out such values during the recording experiment (I can't find them among the Network API commands)? Or should we find them manually after the experiment? Thank you in advance 😄
@user-7daa32 The above screenshot shows the correct folder. Please share the capture file. @papr I am unable to open Discord on my laptop. Please send an email address and let me send the file. Thanks
@user-7daa32 [email removed]
@papr Following up on the post-hoc processing for data collected with mobile. Is it possible to calibrate in Capture using a single marker and save that calibration for post-hoc processing? I like the ability to verify/visualize the accuracy of the calibration that is offered in Capture, so I was hoping that I'd be able to use that calibration, but I don't see a way to save it.
@user-8d1ce2 You can run the validation as part of the post-hoc calibration plugin. No need to save it during the recording.
has anyone transformed gaze vectors from the pupil core to a world space? I have motion capture markers on the pupil eye tracker (to create its vector space relative to the world space) and I have known targets in the real world (to provide me some calibration and validation data), but my calibration transformations do not seem to be working. if anyone has done this and has a procedure I could look at, I would greatly appreciate it
@papr do you know of recommended cameras to use with the Core’s USB-c Mount?
@user-074809 our head pose tracker does that. Things to keep in mind are that you need to estimate the headset's pose in world space for every scene camera video frame, and that calibration targets need to be in scene camera coordinates if you want to use the built-in calibrations.
@user-b4120d Not really. Originally, it was meant for Intel's realsense cameras.
Thanks papr that makes a lot of sense
Hello, I have a question on the surface position CSV file. What is the difference between the columns "surf_to_img_trans" and "surf_to_dist_img_trans"?
@user-8effe4 The former transforms surface coordinates into undistorted 3d camera coordinates, and the latter transforms them into distorted pixel space. See https://docs.pupil-labs.com/core/terminology/#coordinate-system as reference
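As an illustration of how such a 3x3 surface transform is applied, here is a sketch using a placeholder identity matrix (in practice you would parse the actual matrix from the CSV column):

```python
import numpy as np

# placeholder: in practice, parse this 3x3 matrix from the CSV column
surf_to_img_trans = np.eye(3)

def surface_to_image(points, trans):
    """Map Nx2 surface coordinates through a 3x3 homography."""
    pts_h = np.column_stack([points, np.ones(len(points))])  # homogeneous coords
    mapped = pts_h @ trans.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the homogeneous component

corners = np.array([[0.0, 0.0], [1.0, 1.0]])  # surface corner coordinates
print(surface_to_image(corners, surf_to_img_trans))
```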
oh ok thank you
@user-7daa32 Thank you for providing the log files. The issue is caused when the 3d detector debug window is being minimized on Windows. Other OS are not affected.
As a workaround, please do not minimize the debug window. Either leave it open or close it completely.
We will fix this issue in our next release. Fix https://github.com/pupil-labs/pupil/pull/2047
@papr That's right! Noted. Thanks
I had an issue come up that I can't seem to find an answer for anywhere. My core headset has some issues with the camera connectors, and over time the wires have broken going into the connector. I was working with one cameras for a while, but as of today both eye cameras won't connect. I either need to get a new connector, or a full set of wiring I can use to rebuild the harness. Info on where I can get the connectors or order the wiring would be great.
@user-2150ee please contact info@pupil-labs.com in this regard
Ok, I figured it might be a common thing -- thanks
hey, maybe you can help me out. I have 2 custom eye cameras which I want to use in Capture. However, they share the same device name and serial number, which results in a random camera arrangement in the eye0 and eye1 windows when initializing a new Capture session from old settings. Is there some preferred way to deal with this problem? Thanks in advance!
@papr https://youtu.be/D6AIqIkTAQA We thought you might want to see this- some results from the neural network we're using to mask eye videos to process through Pupil software. This is some of the latest generated results, but I've run several prior revisions of the neural network's development alongside Pupil Core with great success.
@user-3cff0d Nice! The pupil detection is pretty solid, even when partially occluded during blinks. The iris acts out from time to time, but is mostly stable 👍
This particular model was actually trained on images fairly unlike the video on the left, which is likely why the iris and sclera detection isn't spot on. @user-331121 would know more about the training dataset and process than I do.
Hi everyone. I have a question as I'm planning the setup of a new lab and wondering if you would please give me some advice. We are soon going to have an immersive CAVE VR system which will have to be integrated with a number of wearable devices. The participant will be inside this CAVE while wearing various instruments (like wearable EEG) which are all wireless. Data from the EEG and others will be recorded by a laptop wirelessly. The main goal is to have the participant free to move in this environment without being tethered to any instrument via wires. The multimodal setup will also include a Pupil Core. A very important thing is that all the recordings have to be synchronized, so I'm planning to have a control PC which handles the acquisition of all the various components over the network using LabStreamingLayer. I'm wondering if you have any suggestions on how best to record the eye-tracking data without having the participant directly connected to a computer via the USB cable, and how this can be integrated with the PC running Unity. I thought about two options: 1) Have a mini-PC or a tablet running Windows 10 placed inside a backpack worn by the participant. Pupil Core can be connected to that via USB, and data could then be streamed over the network via LSL. 2) Have Pupil Core connected to an Android mobile phone and use Pupil Mobile to stream data to the PC running Unity, which will also run Pupil Capture or Pupil Service. I've never used the app; I'm wondering if it is stable and able to send data with no delays or very minor delays?
The eye-tracking data will then have to be fed into Unity, I guess using the hmd-eyes Unity plugin. Do you think either one of these two options would work, or do you have any suggestions / foresee any potential issues?
Thank you all!
@user-178ab2 Can you confirm that you indeed mean a Pupil Core headset and not one of our AR/VR add-ons?
Yes a pupil core headset, sorry
@user-178ab2 Interesting. Have you checked if there is sufficient space in the VR headset for it? I would worry that this is not the case.
Btw, @user-d8853d is developing a python script that sends video frames from a Raspberry Pi to Pupil Capture over the network [1]. You could connect the eye cameras to the Raspberry Pi and stream the eye videos to Capture, while Unity uses the hmd-eyes ScreenCast [2] feature to stream the scene video. Capture would run pupil detection and gaze estimation, and publish the results using the LSL relay plugin. You just need to make sure that the Raspberry Pi and Unity use a synchronized clock to timestamp their video frames.
[1] https://github.com/Lifestohack/pupil-video-backend [2] https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#Screencast
We won't have a VR headset, it's just a CAVE, with projection screens all around. So we will use the Pupil Core headset normally
Thanks so much for your suggestion! I hadn't considered a Raspberry Pi. I will definitely look into that
@user-178ab2 In this case, you should be able to just use the RaspPI. @user-d8853d 's backend still requires a time sync feature. I think he would be very happy if you contributed that to his project. We can discuss implementations in software-dev
Amazing! Yes that would be great
Am I missing something? I want to do single marker calibration with Pupil Mobile. So I started Pupil Mobile and checked in Pupil Capture that everything is set up correctly. Then I start the calibration and need to move around because my testing marker is not near my monitor. Is there any other way to start the calibration? How do I validate with a single marker?
@user-6e3d0f Did you connect the headset to the computer running Capture, or do you stream the video?
headset is connected to mobile
and streamed onto desktop
@user-6e3d0f So, you are at the computer, the subject stands further away, you start the calibration on the computer, go to the subject, show the marker, perform the procedure, (1) hide the marker, go back to the computer, and stop the calibration. Everything after (1) can be replaced with (2) show the calibration stop marker.
XD
I may have always used the stop marker 😄
Thanks @papr
Haha, yeah, that happens 🙂
But how do you validate with single marker calibration?
@user-6e3d0f You do the same, but instead of clicking the C button, you click the T (test/validation) button.
and then perform the same head movements again?
like those head moves?
@user-6e3d0f If you e.g. rotated your head clock-wise during calibration, you could rotate it counter-clock-wise during validation.
Interesting. I'm just curious how it is calculated there. Need to do some research. But if you say this is a valid approach, I'll try it out.
@user-6e3d0f Given the dynamic nature of the single marker calibration, it is unlikely that you will repeat the same scene camera positions during the validation anyway
The concept is always the same: Capture records pupil data (in eye camera coordinates) and gaze target locations (in scene camera coordinates). Afterward, it matches them temporally and calculates a mapping function. The different calibration methods only differ in the type of head/eye movement and how Capture gathers gaze target locations.
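The temporal matching step described above can be sketched roughly like this (stdlib only; an illustration of the idea, not Capture's actual implementation):

```python
import bisect

def match_by_timestamp(pupil_ts, ref_ts):
    """For each reference timestamp, return the index of the closest pupil sample.

    Assumes pupil_ts is sorted (timestamps are monotonic within a recording).
    """
    matches = []
    for t in ref_ts:
        i = bisect.bisect_left(pupil_ts, t)
        # compare the neighbor on each side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(pupil_ts)]
        matches.append(min(candidates, key=lambda j: abs(pupil_ts[j] - t)))
    return matches

print(match_by_timestamp([0.0, 1.0, 2.0], [0.9, 1.6]))  # → [1, 2]
```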
I'm just a bit confused, because we try to validate on the same data as we calibrated on. If I use the screen marker calibration, we validate on other given points. That's what confuses me. But maybe that's just a bit too complex for me, or I need to read more about the calibration/validation part here. Nevertheless, thank you @papr, as always, a great pleasure to get advice from you 🙂
@user-6e3d0f One major difference is that the single marker calibration usually generates a denser "gaze target cloud" (highly depends on the actual head movement) compared to the screen marker calibration, that generates smaller clusters of gaze targets.
The denser the validation gaze target cloud, the more meaningful the validation result will be.
https://i.imgur.com/xG7pKz3.png that looks a bit small, doesn't it?
as the calibration area
@user-6e3d0f It does. Let me check, I had an example video somewhere.
Would be actually great to see a video for it :).
I'll try to measure eye movements on a mirror (so a person 2 meters in front of a mirror), so the field of view and calibrated area should be somewhat close to the mirror size. I hope that's possible.
@user-6e3d0f I was not able to find it. I quickly recorded a new one and will send it to you via a DM.
Thank you very much!
Hello. I have a question. I installed the Pupil GUI on Ubuntu 18.04. I already installed the dependencies and the repo. I started the Pupil software, but I got an error like this:
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/home/can/pupil/pupil_src/launchables/world.py", line 127, in world
    from pyglui import ui, cygl, version as pyglui_version
  File "pyglui/ui.pyx", line 1, in init pyglui.ui
  File "pyglui/cygl/utils.pyx", line 1, in init pyglui.cygl.utils
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
world - [INFO] launchables.world: Process shutting down.
I think I have a different version of NumPy... I want to know which version of NumPy this GUI uses.
my numpy version is 1.13.3
@user-222203 I had a similar problem. I don't know if it will work for you, but you can try upgrading numpy.
pip install numpy --upgrade
If it still doesn't work, try:
pip install numpy --upgrade --ignore-installed
hi! with pupil player, is there a way to export the full 360 degree view captured from the wide angle lens?
Good morning everyone.
I have Pupil Core glasses and I have a question about them. As far as I know, when you perform a calibration, there is a correction value for both eyes, right? Where do you get this value from? I read the values of the glasses cyclically via a Python script (Network API), but among all these values I didn't find one that puts the offset of the two pupils in relation to each other. Does anybody know where I can get it?
Best regards Sebastian
@papr sorry to bother you again. Thanks for all the useful suggestions. I looked into what you recommended and I'm worried about delays if the video is streamed from the Raspberry Pi to the PC running Unity & Capture, which then needs to process it and pass the gaze estimation back to Unity.
Do you think it is possible to use the hmd plugin in Unity & stream to it the gaze data already estimated from another PC? This other PC would run Core and could send the gaze/pupil data over the network to the Unity PC?
@user-178ab2 I was not aware that you needed the gaze signal in real-time. Do you use it for interaction with the environment?
The usage of EEG and LabStreamingLayer suggested that you wanted to process the data post-hoc. In this case, you do not need to worry about the delay as long as the clocks are properly synchronized.
Yes sorry, I didn't make myself very clear. I do need the gaze signal in real-time to interact with the environment and I was wondering if that can be an input to hmd eyes if sent from another PC over the network
@user-178ab2 In this case, you would have to skip using the PI, connect the headset directly with a USB cable to the computer doing the gaze estimation.
@user-178ab2 Be aware that the gaze estimation is relative to the scene camera, i.e. you will probably have to do some additional mapping step to map the gaze from scene camera coordinates to the actual CAVE coordinates (assuming that you need that).
To answer your actual question, the hmd-eyes plugin is able to receive gaze in scene camera coordinates, yes.
Thanks so much. Yes, I'd need to transform the coordinates from the scene camera space to the CAVE space. I'm taking into consideration all the various options to see which one is more feasible. I might try, as you said, the first instance: have a PC connected to the headset running the gaze estimation. This PC would then stream the data to the Unity PC, where the hmd-eyes plugin should be able to receive it from the network. Are there functions already implemented in the hmd-eyes tool to read data from the network? (apologies, these are very basic questions; it's the first time I've ever approached all of this)
@user-178ab2 Yes. Read more about it here: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md (You can ignore the calibration part as you would not use the hmd-calibration)
@user-178ab2 Also, I can recommend having a look at https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Amazing, thank you so much @papr This is extremely helpful. I will look at those
@user-222203 I can also recommend to uninstall any numpy versions installed via apt-get
@user-10fa94 The wide angle lens only has a field of view of 139°x83° at 1080p. This is the highest field of view that you can get with the official Pupil Labs cameras. Therefore, we do not support 360° videos.
@user-b7ea86 The mapping of pupil data (eye camera coordinates) to gaze data (scene camera coordinates) is more complex than an offset. Subscribe to calibration.result to get the class name of the implemented mapping function and its parameters.
@papr thank you very much. You are a great machine!! 🙂
Hello everybody, I'm just about to connect the glasses to my mobile phone with a USB-C cable. I have installed the Pupil Mobile app and can view the data in the app. I read there is a way to get the data on another PC in the same network. There should be settings in Pupil Capture (PC) that I can't find. Can someone help me with my problem? Thanks
@user-a6e660 You can stream the video from Pupil Mobile to Pupil Capture for monitoring purposes. The recorded data needs to be transferred via USB.
@papr that's exactly what I want, I've already seen the official video from Pupil Labs (https://www.youtube.com/watch?v=atxUvyM0Sf8&feature=emb_logo), but I can't find the setting option shown in the video. What can that be?
@user-a6e660 When your device is in the same network as the computer running Pupil Capture, go to the Video source menu in Capture, and select your device from the "Activate device" selector.
Hey guys! We're trying to figure out the accuracy of the glasses. Namely, we are trying to understand the error the glasses produce between the real point (let's say the center of the calibration marker) and the gaze point marked by the glasses. Is there an easy way to extract that information?
@user-200ca9 Yes, checkout these docs https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
Thank you! i'll check it out!
@papr is this data exported anywhere? we've seen this visually in the pupil capture but haven't found the accuracy in any of the CSV files..
@user-200ca9 No, it is not. You can reproduce these values in Pupil Player in the post-hoc calibration plugin.
@papr The glasses are definitely in the same network and successfully connected to the cell phone, but no device is displayed in Pupil Capture. Any other ideas about what could be the cause?
@user-a6e660 Let's check the logs. Please share the pupil_capture_settings/capture.log file
@papr so I've found a short video showing manual calibration with the post-hoc plugin. Are the data points that the user chooses during the small sample then exported?
(this is the video we checked)
@user-200ca9 The reference locations are not exported but saved in an intermediate format. You can read them from offline_data/reference_locations.msgpack using msgpack.
I see, we will check it out! thanks!
@papr
@user-a6e660 It looks like Capture is choosing the wrong network interface (VMware Virtual Ethernet Adapter) for the connection. Are you running Capture in a VM?
Probably not, correct? In any case, you will need to disable this network interface in your settings so that Capture chooses the correct network interface. Unfortunately, a manual selection is not possible.
Thanks very much, yes you're right! I will try it in another network.
Hello
Please, how do you use the Annotation plugin? It's hard to understand from the user guide. Is there any video for it?
Would I click their various hotkeys anytime I want to start and end?
If I have Start (S) and Finish (F), are we going to manually click these buttons?
I don't know if I am making sense again!
@user-7daa32 Correct, just click the buttons if you want to trigger the event. Please be aware that annotations are single points in time, not a period. Therefore, you need dedicated start and end annotations, like you suggested with S and F, to annotate periods.
@papr thanks. Can I have many start annotations and end annotations, all in one video?
@user-7daa32 Yes, as many as you like. Annotations are not unique.
Just for completion, you can create annotations using our Network API, too.
@papr and they do not seem to be exactly accurate in time
@user-7daa32 If you want to be more accurate in time than pressing a button manually, you will have to use a script/programming or create annotations post-hoc in Pupil Player.
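For the scripted route, the payload Capture expects is a small dict; here is a sketch (field names follow the pupil-helpers annotation example; actually sending it additionally requires pyzmq/msgpack and a running Capture instance, which is not shown here):

```python
def new_annotation(label, timestamp, duration=0.0, **extra):
    """Build an annotation payload for Pupil Capture's Network API."""
    return {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,  # must be in Pupil time, not Unix time
        "duration": duration,
        **extra,
    }

# e.g. serialize with msgpack and send via a zmq socket to Pupil Remote
note = new_annotation("trial_start", 1234.5678)
print(note["label"])
```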
@papr hey, I was going through the user guide and saw the following (in relation to the markers): - An individual marker can be part of multiple surfaces. - The used markers need to be unique, i.e. you may not use multiple instances of the same marker in your environment. Don't these 2 points contradict one another?
@papr each time I activate the annotations plugin in Player, my computer freezes; Player stops responding.
@user-7daa32 I cannot reproduce the issue. Could you share the pupil_player_settings/player.log file, similar to the capture.log file the other day, but with a different file name and in a different folder?
@user-200ca9 An individual marker can be used as part of multiple surfaces. See this example with 6 markers. You can define 2 surfaces here, where the middle markers are used in both surfaces. What you cannot do is put 2 identical markers in the same recording.
oohhhh ok! got it! thank you!!!
Congratulations on the new release 2.6 😃
@papr Sent. I am experiencing this in Capture too
@user-7daa32 Saw it. Please use a newer version of Pupil Player.
@user-7daa32 I do not see any issues in the Capture log
@user-7daa32 Yeah, you are using 2.0.161. The issue was fixed in v2.0-177
@user-7daa32 So, you can either use a 2.0 version from https://github.com/pupil-labs/pupil/releases/v2.0 or use the latest version released today
hey guys, can someone explain which angle is referred to in this image?
@user-1529a4 Accuracy is calculated as the average angular offset (distance) (in degrees of visual angle) between fixation locations and the corresponding locations of the fixation targets. Precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation.
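That RMS definition can be written down directly; a minimal sketch (the helper name is my own, and the input is assumed to already be angular distances between successive samples):

```python
import numpy as np

def precision_rms_deg(successive_angular_distances):
    """RMS of angular distances (deg) between successive samples of a fixation."""
    a = np.asarray(successive_angular_distances, dtype=float)
    return float(np.sqrt(np.mean(a ** 2)))

print(precision_rms_deg([3.0, 4.0]))  # sqrt((9 + 16) / 2)
```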
@papr what do you mean when you say visual angle?
@user-1529a4 Imagine two rays, originating from the scene camera. One going through the gaze target (ground truth) and one through the estimated gaze location. By visual angle, we refer to the angle between these two rays.
hmm, I understand what you're saying, but there is a hidden assumption (if I understand correctly) that you are referring to this angle in 2D, because in 3D we have 2 angles to refer to, right? (like the phi and theta we are exporting into the export file...)
@user-1529a4 Not necessarily. You can also measure the smallest angle between the rays. No need to split into spherical components. We use the arccos of the cosine similarity (i.e. 1 minus the cosine distance) to calculate the angle. Cosine similarity of A and B: (A @ B) / (||A|| * ||B||), where @ is the dot product in this case.
aaahhh I see what you are saying...
so this angle comes from the dot product
ok got it, thank you!
Hey, is there any possibility to change the head camera? We would need a camera which is not fisheye. I need to be able to read the content the test person looked at in the video. It's not possible with the fisheye camera.
@user-3c006d Your Pupil Core comes with a narrow lens that has nearly no distortion. Have you given that a try?
no, I didn't even know about that till now xD
okay, I'll try it
thank you! MUCH BETTER!
@user-3c006d After switching lenses, you should reestimate your camera intrinsics
alrighty! thanks!
@papr Is there a tutorial about live video from the eye cameras using the Network API? It is stated that it is possible but not explained how...
@user-765368 there is documentation here. https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
it is not there... or at least I couldn't find it...
@user-765368 Have you tried running the code examples linked by @user-d8853d? Were they successful?
@papr is there any option to change the camera parameters to improve the image quality?
@user-0ee84d You can change camera related parameters in the Video Source menu of each window.
Thanks
@papr is it possible to change those settings in Pupil Player?
@user-0ee84d Player just shows the recorded video. Exposure time needs to be adjusted before the recording.
@papr yes, the code works, but I only managed to record data and couldn't display it. Is there a way, using the Network API, to also live stream the feed from the eye camera?
@user-765368 this example receives the raw eye images from the eye cameras and uses opencv to visualize them. Is this what you would like to do? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
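The core of that example is reshaping the raw payload bytes into an image; here is a sketch assuming the frame publisher is set to the raw BGR format (field names as used in that helper script; how the raw bytes arrive over zmq is not shown):

```python
import numpy as np

def payload_to_image(msg, raw_bytes):
    """Reshape raw BGR bytes from a 'frame.*' message into an HxWx3 array."""
    if msg.get("format") != "bgr":
        raise ValueError("set the frame publisher to raw BGR format first")
    return np.frombuffer(raw_bytes, dtype=np.uint8).reshape(
        msg["height"], msg["width"], 3
    )

# e.g. display the resulting array with cv2.imshow in a loop
```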
@papr I tried to set the auto exposure mode to auto... It throws an error message stating "World: could not set value. Auto exposure mode"
@user-0ee84d The naming of these options might be a bit confusing. Choose "aperture priority mode" for auto exposure.
@papr yes, thanks!
Thank you. Why does the camera turn into a fisheye camera if I set the resolution to 1920x1080? If it's 1280x720, it looks normal but with bad quality.
@user-0ee84d The default wide-angle lens is a fish eye lens. If you want you can switch lenses. Checkout my previous conversation with @user-3c006d https://discord.com/channels/285728493612957698/285728493612957698/778644090664255505
720p is a crop of the least-distorted part of the 1920x1080 view.
@papr thank you
Hello, is it possible to get live video streams from the Pupil Core cameras in real time? If so, what should I read to do it?
I would like to display this video in a WPF application in real time.
I have recorded some data and would like to have the current date and time. How can I transform the timestamps to it?
from datetime import datetime
timestamp = 1970207.964740
dt_object = datetime.fromtimestamp(timestamp)
print("dt_object =", dt_object)
This returns dt_object = 1970-01-23 20:16:47.964740
@user-3ede08 that is a monotonic timestamp, not Unix epoch time, so you cannot convert it directly like that.
Hello Pupil Labs team, the eye camera is 200 Hz and the DIY camera is 30 Hz. Is there any advantage to having a high-speed eye camera? Does it make eye tracking more accurate? I understand that we will have 6.6 times more data, but does it translate to better quality? Have you done any studies on that?
Hey @user-d8853d, thanks for your answer. My goal is to transform the recorded Pupil timestamps into the actual date and time. Could you provide some lines of code that do that?
Hello. I have some trouble getting our Pupil Core running on one of our Microsoft Surfaces. The frame rate is super low, always around 3-4 FPS. I can't really find a reason for this, as both processor and RAM are not at their limit and there are no background processes running. Am I missing something, or is the Surface's processor simply too weak? On my normal laptop (which is admittedly a lot stronger) it works OK at around 15 FPS. Any recommendations are welcome 😃
@user-764f72 Could you please link your time sync tutorial for @user-3ede08?
Is it possible to get the timestamps displayed on the video (world.mp4)?
@user-3ede08 with a custom plugin, yes. I am currently not at work, but once I am, I will be able to share it with you.
That will be fantastic, thanks
Will you do that today?
@user-3ede08 hi, this tutorial that should help you figure out how to transform the timestamps into datetime objects https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
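The gist of that tutorial, as a sketch (the two start times below are made-up example values; take the real ones from the recording's info.player.json):

```python
from datetime import datetime, timezone

# made-up example values; read the real ones from info.player.json
start_time_system = 1605866133.53  # Unix time at recording start
start_time_synced = 1970200.0      # Pupil (monotonic) time at recording start

offset = start_time_system - start_time_synced

def pupil_ts_to_datetime(ts):
    """Shift a Pupil timestamp onto the Unix epoch, then convert."""
    return datetime.fromtimestamp(ts + offset, tz=timezone.utc)

print(pupil_ts_to_datetime(1970207.964740))
```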
Hi @user-764f72, thank you for that.
@user-d8853d It really depends on your requirements. At 30 Hz, you will get a general understanding of what the wearer is looking at, but you will miss out on more detailed information on, for example, saccades.
@user-d8853d @nmt Technically, you would calculate eye movement data on the 200Hz eye cam signal. Therefore, you are not missing out in that sense. The only issue is that the spatial resolution in scene coordinates might not be accurate during scene movement.
@user-467cb9 check out the frame publisher examples in our pupil helpers repository. https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
Aha I thought @user-d8853d was intending to track the eyes at 30Hz
@nmt oh, you are right. I misread that. I thought he was talking about the scene camera.
@papr Can you still do it today?
@user-3ede08 Install it to ~/pupil_player_settings/plugins
@user-3ede08 I just noticed that this plugin is a bit older and not yet compatible with the newest recording format. @user-764f72 could you please update it to make it compatible?
@papr Did you mean that I should download vis_datetime.py and paste the file into ~/pupil_player_settings/plugins?
How will I connect it to the file /Users/user_name/recordings/2020_11_20/000/exports/000/world.mp4?
@user-3ede08 This is how you add a plugin. https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Afterward, open the recording in Player, activate the plugin and you should see the datetime in the bottom left.
@user-3ede08 Depending on your recording, you might want to wait for @user-764f72 to update the plugin.
@papr @nmt So does more hertz on the eye camera mean more accurate gaze estimation results? Is there any study you did on this?
@papr The file is now in ~/pupil_player_settings/plugins/vis_datetime.py, but it does not appear in the plugin list. I am waiting for the new file from @user-764f72
I am using pupil player version 2.5.0
@user-3ede08 please use the plugin file I've attached above. I've updated the way the recording metadata is read, so it should work now. The plugin you're looking for in the plugin list is called Clock.
Thank you @user-764f72
Once I activate it, it displays the Player Synced and Player System time in the middle of the Pupil Player window, which disappears after some seconds. Is that the way it should behave?
@user-3ede08 you should see the full date and time text at the bottom of the frame, as shown in the screenshot
Got it. I was behind the first graph functions. @user-764f72 thank you so much
Meanwhile, is it possible to make it more precise?
Something like this : 2020-11-20 08:35:33.534631968
(note the digit element)
I have changed ("%Y-%m-%d %H:%M:%S") to ("%Y-%m-%d %H:%M:%S:%M") in vis_datetime.py. It does add something after the seconds, but that number does not change over time.
Any suggestions or idea will be welcome
I always get this error in Pupil Mobile. Any info?
@user-3ede08 Here is an overview over possible formatting options https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
%M refers to "Minute as a zero-padded decimal number". You probably want %f, "Microsecond as a decimal number, zero-padded on the left".
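For example (standard Python strftime behavior):

```python
import datetime

ts = datetime.datetime(2020, 11, 20, 8, 35, 33, 534631)

# %M is minutes (which is why the appended number never changed);
# %f is the zero-padded microsecond field.
without_us = ts.strftime("%Y-%m-%d %H:%M:%S")    # "2020-11-20 08:35:33"
with_us = ts.strftime("%Y-%m-%d %H:%M:%S.%f")    # "2020-11-20 08:35:33.534631"
```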
Thank you @papr
@user-6e3d0f The KeySensor records keyboard inputs, e.g. from a bluetooth-connected keyboard or presentation remote. Not sure why the error appears in this case though. You should be able to ignore it since it does not relate to the cameras.
👍🏾
I just received my pupil-labs core and it is not being recognized on windows.
I followed the troubleshoot instructions from the website, but still not connecting....
@user-2162fb Please make sure the cable is connected properly, both in the USB port as well as the headset clip. If it already is, please check the device manager and check if you can see "Pupil Cam" entries in any of the following categories: Cameras, Imaging Devices, libusbk.
The cables are connected... I checked the device manager, but I can't see the pupil cams.. When I plug the headset in, it says that USB device not recognized
@user-2162fb So there are no entries in the named categories, after connecting the device?
No. I don't see libusbk either
@user-2162fb ok, please contact info@pupil-labs.com in this regard.
Just emailed them.
Hi, I'm new to the Pupil Core world and I would like to ask for support on an issue I've found installing Pupil 2.6 on Windows 10.
I installed all the dependencies as indicated on the website and I tried to use the installer. I launched Capture, but when I record a session I don't find any audio in the recording. Also, in the plugin window I don't see any options for audio recording. Could you help me understand where I'm going wrong in using or setting up the software? Thank you very much in advance for your attention. Max
@user-3f5fe2 There is nothing wrong with the software or your installation. We discontinued audio recording support with our Pupil v2.0 release.
Oh, thank you. What do you suggest for recording audio in sync with the camera? I need to sync an audio stimulus to see possibly correlated pupil variations. Thank you
Maybe a script that launches the microphone and Capture together?
@user-3f5fe2 Maybe recording audio with a different framework like labstreaminglayer might work better for you.
Could you help with one more thing, please?
Sure, I will try 🙂
Why is there a "sound only" or "silent" option in the recording settings even though sound is no longer recorded?
Is there any plugin to calculate the visual accuracy? I mean: your eye is looking at an object but the pupil tracking shows the gaze somewhere else, and you calculate the angle between those two points.
@user-d8853d yes, check out the Accuracy Visualizer. It is loaded by default
Hello, I'm considering getting the USB-C mount or the high-speed camera configuration. Is it easy to write a plugin for a new sensor, say the Mynteye, and mount it on Core?
@user-f22600 Our video backend only lists cameras that fulfil a specific set of requirements. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
@papr Is there any python example to obtain the gaze position in the world space/view space?
https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py Just comment line 23 and uncomment line 24. Afterward, you will receive gaze in normalized scene coordinates. See this for a coordinate system reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system
Please be aware that you need to calibrate first, before you can get gaze data.
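That subscription pattern can be sketched roughly as follows, along the lines of the pupil-helpers examples (assumes Pupil Capture is running locally with Pupil Remote on its default port 50020):

```python
import zmq
import msgpack


def connect_gaze_subscriber(host="127.0.0.1", remote_port=50020):
    """Ask Pupil Remote for the SUB port and return a socket subscribed to gaze."""
    ctx = zmq.Context.instance()
    remote = ctx.socket(zmq.REQ)
    remote.connect(f"tcp://{host}:{remote_port}")
    remote.send_string("SUB_PORT")
    sub_port = remote.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")
    return sub


def decode_datum(payload):
    # Datums on the IPC backbone are msgpack-encoded dicts;
    # raw=False yields str (not bytes) keys.
    return msgpack.loads(payload, raw=False)


# Usage (with Capture running and a calibration done):
#   sub = connect_gaze_subscriber()
#   topic, payload = sub.recv_multipart()
#   gaze = decode_datum(payload)
#   print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```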
Also, is there any Python example to load a folder containing previous recordings and retrieve the gaze positions and the frames? :)
Usually, you would want to process the recording in Pupil Player. You can write code to read the intermediate data format if you wanted, though. It is documented here: https://docs.pupil-labs.com/developer/core/recording-format/
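If you do read the intermediate format yourself, a rough sketch based on the documented layout (a .pldata file is a stream of msgpack-encoded (topic, payload) pairs, with a companion <topic>_timestamps.npy; please verify against file_methods.py in the Pupil source before relying on this):

```python
import msgpack
import numpy as np


def load_pldata(directory, topic):
    """Read one of Pupil's intermediate .pldata files, e.g. topic='gaze'.

    Sketch only: each entry is a (topic, payload) pair where payload is
    itself a msgpack-serialized dict.
    """
    data, topics = [], []
    with open(f"{directory}/{topic}.pldata", "rb") as fh:
        for datum_topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            topics.append(datum_topic)
            data.append(msgpack.unpackb(payload, raw=False))
    timestamps = np.load(f"{directory}/{topic}_timestamps.npy")
    return data, topics, timestamps


# e.g. data, topics, ts = load_pldata("/path/to/recording", "gaze")
```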
@user-f22600 Additionally, to the built-in backend, you could use @user-d8853d's opencv backend, that runs independently of Capture and streams the video to Capture via the network API: https://github.com/Lifestohack/pupil-video-backend/
Writing your own backend will require a lot of work, to be honest. If you are not dependent on the depth data, I would highly recommend buying the high-speed camera configuration.
@papr trying to install pupil throws the following error
ERROR: Command errored out with exit status 128: git clone -q https://github.com/zeromq/pyre /tmp/pip-install-muyuu9_p/pyre Check the logs for full command output.
Why are you trying to run Pupil from source? There is nearly no reason to do that anymore. It is much simpler to run the bundled application that you can download here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads
Fixed it...works
Just pasted an updated url
Thank you
I will check
Which version of ceres is required?
Unable to build wheel for pupil-detector
Thank you
I am assuming that you know about the bundled applications. I apologize for my tone if this was not the case.
I would like to make use of the python api so I was trying to install it in my conda environment
There might be a misunderstanding. There are two types of APIs, both of which work with the bundled applications: 1) Network API 2) Plugin API
See this section of the documentation for reference: https://docs.pupil-labs.com/developer/core/overview/#where-to-start
I have installed the bundled application. However I need the python api in my conda environment.
This seems to be interesting though... It seems that I will get the gaze data remotely while Pupil Capture is running somewhere, if I understand it correctly.
100% correct
Perfect. Thanks
Did anyone of you work with Pupil Mobile? If so, what Mobile phones did you use? Anyone experience with the OnePlus 6T?
We have used Xiaomi Mi a2
(it works well on my 7T, but we need a cheaper phone)
@user-6e3d0f Yes, the 1+ phones should all work well in terms of compatibility
I've dabbled a bit with it, using a Galaxy S8. It works, but I haven't dived deep into all the files it generates
and specs should not really make a big difference, should they? The 6T is from late 2018
Pupil mobile homepage says Android device (Nexus 5x or Nexus 6p)
@user-6e3d0f please note that Pupil Mobile is no longer maintained. It should still be working on OnePlus devices as @papr notes, but we have not evaluated on the devices that you or @user-200ca9 have listed.
We got an old Samsung Galaxy S7, but don't know if that works well enough
@wrp I know that; I've worked with it for about 2 weeks now, and so far we haven't seen any problems with it. For our use case, a mobile setup is pretty much needed.
Hi all, I'm having this issue with the recordings that I've made with Pupil Mobile: I have a recording that lasts 40 minutes (more or less) and this duration is also shown in the file "info.mobile.csv" in the recording directory, but when I try to open it in Pupil Player there are only 6 minutes. How can I fix that? Thank you
@user-067553 Does this happen for all of your recordings? Sounds like the scene camera got disconnected after 6 minutes. If you want you can share the recording with [email removed] and we can have a look.
Thank you @papr Is it possible to send only the info and the .npy files without sending you the whole recording? thanks
@user-067553 That would be a start. We can let you know if we need more files.
Hi @papr, I've sent the data to the address data@pupil-labs.com (do you need more information?) thanks
hey @papr, I'm trying to write a script that does analysis of the pupil polar coordinates. What are the origin coordinates for theta and phi? (I mean, in the 3D space, where do theta = 0 and phi = 0 point?)
Hi! I'm not sure you are able to help me with this @papr , but I would like to give it a try anyway. (Btw, your tips were extremely helpful so far, I am really grateful for them). I'm trying to replicate a study measuring Pupil Core precision, and in their code the authors refer to a file called "info.csv" (from where I should get the timestamps), but I can't find such a file. Am I missing something, or did it perhaps get renamed in the meantime?
I am extremely interested in this study. Could you please update me on your study? Perhaps you would like to link your github source code or link to your study?
@user-690703 The new equivalent file is info.player.json
Thank you very much!
@user-765368 Unfortunately, this is not well documented 😬 And I do not know it by heart right now.
@user-067553 I won't be able to have a look at it today. I will let you know once I've had a chance to do that on Monday. 🙂
Meanwhile, if you have some documents to suggest for this kind of problem, I could take a look to understand whether I can fix it on my own.
Ok, thank you
Hello @papr, I'd like to ask you two questions. First: according to the developer documentation, we can get real-time eye movement data via https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone. I want to make sure whether this data is normalized to points on the screen only, or to all points that the world camera captures. Second, we are trying to implement online fixation detection. May I ask where the code implementing the online fixation detection function is in the Pupil Core source code? Thank you!
@user-594d92 Gaze data is in scene camera coordinates. So if I understand your options correctly, the second one. If you want gaze in screen coordinates (first option), you will need to use surface tracking on your monitor and subscribe to surface-mapped gaze.
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L669
@user-2ab393 Check out the recent_events() method. The Fixation_Detector class is primarily a plugin, i.e. the glue code between algorithm and application. It reads the gaze data from the events dictionary, buffers it in a running window, and calls the gaze_dispersion function on the buffered data.
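In rough outline, a dispersion check of this kind might look like the following simplified 2D sketch. This is a stand-in, not Pupil's implementation: the actual detector in fixation_detector.py works on 3D gaze vectors and angular distances, and the 1.5-degree threshold here is just an illustrative value.

```python
import numpy as np


def gaze_dispersion(gaze_points_deg):
    """Maximum pairwise distance of gaze points (given here in degrees)."""
    pts = np.asarray(gaze_points_deg, dtype=float)
    # All pairwise difference vectors, then the largest norm among them.
    diffs = pts[:, None, :] - pts[None, :, :]
    return float(np.max(np.linalg.norm(diffs, axis=-1)))


def is_fixation(window_deg, max_dispersion_deg=1.5):
    """A buffered window counts as a fixation if its dispersion stays small."""
    return gaze_dispersion(window_deg) <= max_dispersion_deg


# A tight cluster of gaze samples qualifies; a large jump does not.
cluster = [(10.0, 5.0), (10.2, 5.1), (9.9, 4.9)]
saccade = [(0.0, 0.0), (5.0, 5.0)]
```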
So with Surface Tracking, is the data output in real time processed by the Surface Tracking plugin?
@user-594d92 Yes
Nearly all detections and analyses can be done in real time in Capture or post hoc in Player
ok, thank you! 👍
Windows / Sophos updates look to have broken my Pupil Capture. Has anyone run into a similar issue with Pupil Capture? Maybe this is one for local IT support...?
@user-430fc1 Is it possible for you to restore the file via sophos? Afterward, Capture should run as usual.
https://support.sophos.com/support/s/article/KB-000036919?language=en_US
Generic ML PUA detections explained Potentially Unwanted Application (PUA) is a term used to describe applications that, while not malicious, are generally considered unsuitable for business networks.
What to do [...] If a user needs to allow a PUA to run, it can be done by an admin through the Sophos Central dashboard. Locate the alert for the detection on the device and select Allow from the options. Doing this will restore the application and stop it from being detected again.
@papr Ah OK, thanks, will probably need to get admin to do this as it's a departmental machine
Hello @papr, I'd like to ask you one question:
What is the input data of online fixation detection algorithm ?
Thank you!
@papr OK, thank you. Let me understand it.
Is the output in fixations_on_surface_name not handled the same way as the export from the Fixation Detector? The Fixation Detector gave me 344 fixations, but the on_surface.csv has way more lines and shows shorter durations than my sections in the Fixation Detector.
Fixations with the same id should have the same duration. Since fixations can span multiple world frames, and the surface location could be different in each one, we map the fixation for each world frame. This is why you see more entries.
Yes, thats the case
If I want to work with this output data on my surface and, for example, classify my surface such that 0-0.5 is left and 0.5-1 is right, and then classify my fixations as left or right inside the frame, should I simply use every entry or only one per fixation_id? Because entries with the same fixation_id differ in position by a small amount. Or I could go with the average, right?
Group by fixation id, calculate average location, apply left/right classification. The less the surface moves during the fixation, the more valid this approach is
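That group-by-and-average recipe could be sketched with pandas like this. The column names are assumed to match the surface tracker's fixations-on-surface CSV export (fixation_id, norm_pos_x, norm_pos_y); adjust them if your export differs.

```python
import pandas as pd

# Hypothetical miniature of a fixations_on_surface export: fixation 1 was
# mapped on three world frames, fixation 2 on two.
df = pd.DataFrame({
    "fixation_id": [1, 1, 1, 2, 2],
    "norm_pos_x":  [0.21, 0.23, 0.22, 0.74, 0.76],
    "norm_pos_y":  [0.50, 0.51, 0.49, 0.40, 0.41],
})

# One averaged location per fixation, then the left/right split at x = 0.5.
centers = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]].mean()
centers["side"] = centers["norm_pos_x"].map(lambda x: "left" if x < 0.5 else "right")
```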
Thanks, I will stick to that.
If the fixation is fixed on a wall, does that mean no movement? Or do you mean the less the world camera moves?
it is about the relative movement between surface and scene camera. if either one moves while the other does not, the fixation mapping will result in different locations
hi @papr, may I ask a question? When and how is the recent_events() method called, and what is the meaning of the parameter "events" that is passed in?
Pupil Capture uses a loop to process data. Each iteration processes one scene frame and the most recent gaze data. This is done by calling recent_events() on each plugin once per loop iteration. events contains data added by other plugins, e.g. frame and gaze data
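That loop contract can be illustrated with a small self-contained sketch. GazePrinter is a hypothetical stand-in: a real plugin would subclass Plugin from Pupil's shared_modules (see the plugin API docs) rather than this bare class.

```python
class GazePrinter:
    """Sketch of the recent_events() contract only; not a runnable Pupil plugin."""

    def __init__(self, min_confidence=0.8):
        self.min_confidence = min_confidence
        self.seen = []  # collected gaze positions, for illustration

    def recent_events(self, events):
        # Called by the application once per loop iteration. `events` is a
        # dict populated by other plugins, e.g. events.get("frame") for the
        # scene frame and events.get("gaze") for the most recent gaze datums.
        for gaze in events.get("gaze", []):
            if gaze["confidence"] >= self.min_confidence:
                self.seen.append(gaze["norm_pos"])


# Simulating two loop iterations with hand-made gaze datums:
p = GazePrinter()
p.recent_events({"gaze": [{"confidence": 0.9, "norm_pos": (0.5, 0.5)}]})
p.recent_events({"gaze": [{"confidence": 0.1, "norm_pos": (0.0, 0.0)}]})
```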
Hello. I have an eye-tracking session that unfortunately was recorded over a few bad disk sectors. The file affected is world.mp4: when loaded into pupil player it works fine as long as you don't reproduce it between the 7-20sec mark. The same with VLC. The rest of the file works fine in both software. When I try to copy the file to another drive, the process fails at 6% of the copy. I imported the recording into pupil player anyway and exported a new world.mp4 but skipping the first 840 frames of video. I thought that importing this new world.mp4 into pupil player with the rest of the files (the original ones) could work —I thought that perhaps pupil player was simply gonna match the shorter video with the corresponding data based on frames or timestamps, or something like that. However, pupil player crashes during load. I could forget about all this and just work on the original files (which I'm doing), but I'm afraid the faulty drive may die any minute now. So finally, this is my question: is there a way to match the shorter video with the original data? either by removing the data corresponding to the first 840 frames from all the original files, or by somehow filling the missing 840 frames of video with blank footage? I actually thought the latter option could work but I then thought this would probably mess up some of the video metadata and thus render it useless for pupil player. Anyway, any suggestions are welcome
@user-5ef6c0 it should work if you also copy the exported world_timestamps.npy file into the recording folder
So what I just tried is: a) imported the original files which are in the disk with bad sectors (including the corrupted world.mp4) into pupil player b) in pupil player, exported the between frame 840 and the final frame. I only included the world video exporter and raw data exporter plugins. c) copied all original files from the disk with bad sectors into a new folder in a separate hard drive. I didn't include the original world.mp4. Let's call this new folder MIRROR. d) copied the new world.mp4 and world_timestamps.npy files (which I obtained after exporting from the original in pupil player), into the MIRROR folder. e) imported MIRROR folder into pupil player, but pupil player crashed during load. Is that what you were referring to? (PS: I edited to be more clear)
player - [INFO] camera_models: Loading previously recorded intrinsics...
player - [INFO] video_export.plugins.world_video_exporter: World Video Exporter has been launched.
player - [WARNING] video_capture.file_backend: Advancing frame iterator went past the target frame!
player - [ERROR] video_capture.file_backend: Found no maching pts! Something is wrong!
player - [INFO] video_capture.file_backend: No more video found
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "shared_modules\video_capture\file_backend.py", line 452, in recent_events_external_timing
  File "shared_modules\video_capture\file_backend.py", line 412, in get_frame
video_capture.base_backend.EndofVideoError
that's what the console displays
I will try to insert 839 black frames at the beginning of the exported video using ffmpeg and see if that works
Unable to add surface in pupil capture
?? Please why?
@user-5ef6c0 ah, you need to delete the world_lookup.npy file after replacing the other file
it will be regenerated by Player based on the new files
@user-7daa32 You will have to provide more details for us to be able to help you. What is different than usual? Can you share a screenshot of the surface tracking menu and your scene video frame that you are trying to define a surface on?
Not clickable
@user-7daa32 The issue is that surfaces cannot be added when the scene is frozen. Please disable the "Freeze scene" button and try again. There is a warning that should have shown up in this case but it looks like it is rendered behind the frozen scene image, effectively hiding it.
That's right
Can we use the annotations plugin and the surface plugin together? The annotation spreadsheet is empty and I don't know why, as I was pressing the keyboard keys. Maybe I need to press the annotation keys in the world window?
Yes, the world window needs to be active to receive keyboard events.
Hi, we have a long Pupil Mobile recording, 1h 20min, loaded into Pupil Player 2.5. Pupil Player only shows 25 minutes, without many errors in the log. What can we do to recover the rest?
Your issue seems to be related to @user-067553 's issue here https://discord.com/channels/285728493612957698/285728493612957698/781194512865034270
I still need to check their recording to see if one can restore/access the remaining data.
Hello @papr, Pupil Core can output data in real time (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone). I hope to use the real-time output data as the input of a real-time fixation detection algorithm. Now I have two questions. First, what does norm_pos in the real-time output refer to? Second, what is the input data of real-time fixation detection in the eye tracker source code (https://github.com/pupil-labs/pupil/blob/master/pupil_src)?
gaze data as input
Hey, I still have trouble with calibration. I do follow the steps but somehow I can't get the camera to detect the pupils properly. Also, the detected gaze in the recordings is not correct. It is not where it should be, and too wiggly...
@user-3c006d If the pupil detection is inaccurate, the gaze estimation will be, too. Feel free to share an example recording with [email removed] such that we can review the eye camera setup.
okay i will send it 🙂
done
did it arrive? had to send a dropbox link
@user-3c006d it did. We will come back to you via email once we have reviewed it.
okay thank you!
In the instructions for Surface Tracking, at the very bottom, a blog post about Surface Metrics with Pupil Player is mentioned. That link is dead though. Do you have the current link to that blog post? https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-8ed000 I think we removed that link a while ago. It is possible that you are seeing a cached version of the documentation.
The blog post is out-of-date which is why we did not replace it with a newer link yet.
@user-8ed000 We have a few surface tracker related tutorials here: https://github.com/pupil-labs/pupil-tutorials
@papr Thank you, I will have a look at it
@user-292135 Could you please also share all ".npy" files of the recording with [email removed] @user-067553 could you please share the world.mp4
file?
I invited data@pupil-labs.com into my Box folder. thanks!
I have not exported it, because there is a mismatch between the durations (if you want, I can try to export). I have the world files in the "mjpeg" format; I can send these to you. Is that ok? @papr
ok! thank you
I think I have more than 40 sessions (40 minutes each) with this kind of problem.
@user-067553 There should be no need for exporting anything. The world.mjpeg files are the correct ones.
Ok thanks
@papr Which of the real-time output data is the "gaze data" you mentioned?
@user-2ab393 Subscribe to gaze to get that data. Please be aware that you need to calibrate first to receive that data
@papr When I use the data as the input of the fixation detection algorithm and then display it on the screen, it is quite different from the real one. Is surface tracking required? If so, can surface tracking be carried out in real time?
"Why do I use the data as the input of the gaze detection algorithm, and then display the data on the screen is quite different from the real one." What do you mean? The input to the fixation detector and the displayed gaze are the same. Please be aware that our fixation detector buffers data. It therefore includes a few older datums as well as the most recent gaze (which is displayed)
Hello, @papr Because the world camera is slightly higher than the center of the screen, the image of the screen in the camera is not a rectangle. So I would like to know: if the surface I capture on the screen with the Surface Tracking plugin is not a rectangle but a parallelogram, will it cause any deviation when I draw points on the image with normalized coordinates?
@user-594d92 the plugin assumes the surface to be a rectangle in real life. It will assume it to be distorted by perspective if it does not appear rectangular. But in this case it is possible that you need to re-estimate your camera intrinsics
There is currently no way to assign post-hoc surfaces without markers, right?
@user-6e3d0f no, there is not