Happy New Year everyone! I'm currently trying to build the Pupil Labs software from source on an arm64 Linux device. After a bit of stumbling I managed to install all Python libraries as well as the requirements. When starting Pupil Capture, however, I get the following error message: File "/pupil/pupil_src/launchables/world.py", line 537, in world
main_window = glfw.create_window(
glfw.GLFWError: (65542) b'GLX: No GLXFBConfigs returned'
I already tried building GLFW as well as the related Python bindings from source. The weird thing is that the pyGLFW sample application initializes a GLFW window without any problems. The pyglui sample also runs fine (so I don't think it's graphics-driver related). Does anyone know what else I could do to solve my problem? Thanks in advance!
Hi @papr. I recently studied the calibration routine of the software and have some questions about it. 1. There are two "Gazer" classes. Since we are using HMD streaming to provide images of all 3 cameras to Pupil Capture, is our gaze calibration done by the GazerHMD3D class? 2. Also, for post-hoc gaze calibration through Pupil Player, is it done through GazerHMD3D or Gazer3D? 3. In GazerHMD3D, the eye translation with respect to the world camera is supplied externally. Could you please point out which class/method supplies these parameters? Also, can we manually set the eye translation parameter? 4. For both gaze calibration routines, the reference data is essential. Could you please point out which class/method provides the reference data?
Just because you use the HMD streaming video source does not mean you need the HMD calibration or gazer. It depends on the hardware that you are using. Could you remind me what hardware you use?
Player uses Gazer3D by default. If you run this PR from source, you can use the HMD 3d gazer, too. But it is experimental. https://github.com/pupil-labs/pupil/pull/2176
These are typically supplied by the software displaying the reference targets in the virtual world camera, e.g. our unity plugin https://github.com/pupil-labs/hmd-eyes/
In case of the screen marker calibration choreo., the plugin detects the concentric circles within the scene camera video feed. In case of the HMD Calibration, the external program supplies the locations, see 3.
By default, we raise all GLFW warnings as errors. You can add your error code (65542) to this whitelist of ignored error codes: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gl_utils/utils.py#L385-L389
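Conceptually, the whitelist just maps specific GLFW error codes to "ignore instead of raise". Here is a minimal, illustrative sketch of that idea using pyGLFW's error callback; it is not the actual code in gl_utils/utils.py, and the names are made up for illustration:
```python
import glfw

# Illustrative only: error codes we choose to ignore
# (65542 = "GLX: No GLXFBConfigs returned" from the message above)
IGNORED_GLFW_ERROR_CODES = {65542}

def _error_callback(error_code, description):
    # Log whitelisted codes instead of raising, re-raise everything else
    if error_code in IGNORED_GLFW_ERROR_CODES:
        print(f"Ignoring GLFW error {error_code}: {description}")
        return
    raise RuntimeError(f"GLFW error {error_code}: {description}")

glfw.set_error_callback(_error_callback)
glfw.init()
```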
Hey @papr, thanks for your quick reply! I have now whitelisted the error code, which led to the subsequent error message:
glfw.GLFWError: (65545) b'GLX: Failed to find a suitable GLXFBConfig'
If I now also whitelist this one, no window can be created and the startup is aborted at the next assertion (glfwSetWindowPos: Assertion window != ((void *)0)).
We are using Pupil Labs' eye cameras and world camera, but we modified the setup a little such that the original eye translation hardcoded in calibrate_3d.py is inaccurate. Given that, I think I should just run from source and modify the hardcoded eye translation in calibrate_3d.py according to our setup, right?
That is a possibility, yes.
Hi, can we use the Reference Image Marker with Pupil Core? If yes, how can we generate a reference image marker? Thank you
This feature is only available in Pupil Cloud and therefore only for Pupil Invisible recordings.
Hi, is there any way to trim or split a surface post-hoc so that multiple heat maps may be made for it? Or a way to merge identical-sized surfaces?
Hi, could you provide an example use case for the latter? You can define multiple surfaces based on the same markers and adjust them, such that they do not overlap.
Thanks! I made a surface representing an area of a face (like "right eye") and would like to compare fixation distributions between the eyes on other faces. As of now, however, all eyes are treated as the same surface, and each eye appears for only 2 seconds at a time, so would there be a way to trim by time?
You can trim by time, but you would need to do that manually after exporting the data. You could create annotations as reference points for trimming the data.
Are these faces displayed on a screen? Or are we talking real-life faces?
All are on a screen
And are you displaying the AprilTag markers on screen too, or on physically printed papers attached to the screen?
Yes--both faces and the apriltags
Then you could display different apriltags with every face. This way you can define a dedicated right eye surface for every face.
Thanks! I'll see what I can do
hello. Is there an official channel to make requests for Pupil Player?
Hey, feel free to write about your feature ideas/requests here. Many features can easily be added via third-party plugins. Maybe the requested functionality is available already.
Hi everyone, is it possible to set an ROI for the auto exposure mode?
No, unfortunately not. You can either select full auto exposure or one fixed exposure.
Hi, when I pip install -r requirements.txt, the following problem happens
My computer runs Windows 10 and my Python version is 3.9
How can I solve it?
@user-7904e8 hi. To run from source on Windows you need Python 3.6. I recommend running the bundled application. You can get it here: https://github.com/pupil-labs/pupil/releases/latest/#user-content-downloads Nearly every functionality can be added via plugins to the bundled application. So there are only very few use cases where running from source is necessary.
hello everyone, I want to know, is the Pupil labs core suitable for people who wear glasses?
Hi @user-8add12 In principle Pupil Core can be used with glasses, providing that the eyes are visible to the eye cameras behind the lens/frame. This is not optimal in many cases, and depends on the size/shape of the glasses, and head physiology. Pupil Core will work fine with contact lenses though. I hope this helps!
Hello! Does anyone have a solution for our eye tracking data not uploading to the cloud? The data on the app is sitting and trying to upload but will not move past 0%. We have tried changing our wifi connection, restarting the phone and app, neither have worked.
I have responded over in the invisible channel.
Hi, I have a quick question. I have built the DIY Pupil Core, but I can't seem to get it to select the cameras. I have verified that both cameras show up in the Windows camera app. I tried running Pupil Capture as admin and it showed several cameras, but when I try to select any of them it says the camera is already in use. How should I proceed?
Here is the log from the Core
I also have tried uninstalling the device in the device manager
I tried to use a VM running Linux as well, in the VM the pupil capture software picked up the cameras (but had some issues with capturing the output).
Is there a way to manually load the drivers?
I have attempted on both a windows surface pro 7 and a custom desktop both running windows 11
@user-5135bb Hey! Pupil Capture only installs drivers for Pupil Labs cameras by default. To manually install drivers for third-party cameras, please follow steps 1-7 of these instructions: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Thanks!!!
ahhhh okay perfect i will give that a try
Fantastic, it is working great!! Thanks for the help
Great to hear! Let us know how your project goes
Will do!
Hello, I have errors when using the eyetracker. I just received it and I have a computer with windows10. I have installed "pupil_v3.5-1-g1cdbe38_windows_x64.msi", and when I connect the core to the computer via USB, and I launch pupil-capture (v3.5.1), it only launches a black window, and nothing happens. I don't see the interface of the software appearing... even after an hour of time. I have uninstalled, reinstalled, restarted, and still the same thing happens.... Can you help me?
Hey, I am sorry to hear that! There should be two windows, a command line prompt window and the actual application window which seems to stay black in your case. Can you confirm that?
Hi, I have only one window, the command line prompt.
Ok, thank you for clarifying that. So it seems like there is a general problem with running the application. What kind of CPU do you have?
For context: The bundled application is compiled for a specific CPU architecture and using a different one causes the software not to run properly.
i have an alienware laptop. Intel Core i7-7700HQ CPU 2.80 GHz, 16 Go RAM, x64
Ok, that should be compatible. Could you please check which Windows 10 version you have? Please also try opening a new command line prompt and running pupil_capture.exe explicitly from there instead of double-clicking the app icon. I am hoping for some kind of text/log indicating the issue.
My PC is running Windows 10 Professional. I tried to launch pupil_capture.exe from the command line prompt. It launches the same black window as the double-click, and there are no error messages in the command line prompt. Do I need a specific Python version on my computer?
Could you please check if your user directory (C:\Users\<user name>) has a folder pupil_capture_settings containing a capture.log file? If yes, could you please share the log file?
No, the bundle comes with its own python.
the capture.log:
2022-01-13 14:58:48,976 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2022-01-13 15:54:59,801 - world - [DEBUG] launchables.world: Application Version: 3.5.1
2022-01-13 15:54:59,811 - world - [DEBUG] launchables.world: System Info: User: p.labedan, Platform: Windows, Machine: port-labedan, Release: 10, Version: 10.0.18362
2022-01-13 15:54:59,811 - world - [DEBUG] launchables.world: Debug flag: False
2022-01-13 15:54
Is this your current local time or are these from yesterday?
Every time the application launches, these logs are overwritten. So there seems to be something different between yesterday, when the application ran at least partially, and now where it is not even overwriting the logs anymore.
Oh, that's very strange. I just looked at the capture.log again, and the time has changed! There is only one line in the file now: 2022-01-14 11:39:37,657 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
So the application launches, but seems to get stuck before the window is created. Could you please restart the computer (to ensure that there are no dangling Pupil Capture processes running somewhere), delete the user_settings_* and capture.log files in the pupil_capture_settings folder, and then try launching the app. Please wait for a minute and share the newly generated capture.log file with us in case the issue was not resolved.
After a long time, the window appeared! But no camera detected, I think... and the .log was full of text. I put it in the next message:
Ok, here's the result... but not after 1 min (15 min). Only one line in the .log: 2022-01-14 14:01:35,987 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
Indeed. Something was blocking the application for over an hour. Is it possible that you are running an anti-virus or similar administrative software?
There is the Kaspersky antivirus yes. But there is no message from it...
@user-6cbd8b if no camera is detected, then it is likely that the drivers were not installed correctly. Check out https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting
Mac or Windows, which is better for eye tracking?
Pupil Capture for macOS/Linux and Windows only differ in a timestamp implementation detail. Otherwise, the biggest difference is that one needs to install drivers for the cameras on Windows. Warning: On the latest macOS Monterey, one needs to start Pupil Capture with administrator privileges to access the cameras.
Hello
Please I have a question.
In the video, how do you know the 0-1 X and Y coordinates of the screen so we can convert to the actual screen size in centimetres? The 0 and 1 are off the screen.
Hello everyone
I want to start using a Mac; would all my eye tracking files from Windows open on a Mac?
The files are compatible between operating systems
Hello people, In my setup I want to do a Natural Feature Calibration from inside a car. I have placed different markers at different distances and angles. My original plan was to not allow any head movement during the calibration, as I am then calibrating to the largest FoV possible. However, during validation I wanted to allow the head movements to cover the normal glances and head movements like during driving.
So I get an accuracy of 2.58+/-0.09 after calibration and 1.726666667° after validation.
However, I noticed when I do not instruct the subject how to hold his head, I get a much higher accuracy. (Normal Gaze-behavior)
After calibration: 1.03+0.08°. After validation: 1.3144444°.
What do you recommend ?
Allow head movements in calibration and validation Do not allow head movements in both cases or a mix ?
Problem: If I allow head movements (no instructions to subject) , I calibrate to a smaller FoV and can't make any statement about the accuracy at large eye angles, but accuracy is much better and calibrated to normal gaze patterns.
Cheers, Cobe
Hi Cobe, what you could do is to allow head movement during the calibration and disallow it during validation. This way you can measure the effect of calibrating a small area. @nmt what best practices would you otherwise recommend here?
0 and 1 of the surface coordinates are based on the surface definition. Wherever the top right surface corner is, it corresponds to (1,1), and the bottom left to (0,0).
Thanks... So they can't be known if the surface is not defined? There are background images in the videos together with the computer screen. The edges of the stimulus read from 0.22 to 0.78. How would you convert that to the actual stimulus size in centimetres? The coordinate at the edge of the computer screen is not known. Do (0,0) and (1,1) begin from the marker centre or the edge of the marker?
Best practice would be to cover all gaze angles during calibration that you might expect, and want to capture, in your experiment. For example, if stimuli appear in the periphery of vision (e.g. in the wing mirror), the driver may glance at the mirror without moving their head. In such a case, it would be worth having the system calibrated for that.
You could subsequently validate with or without head movement: With - provides accuracy during more typical gaze (majority of eye movements) Without - provides accuracy at larger angles (minority of eye movements, but possible, e.g. glancing at the mirror)
Note that even if accuracy is reduced at larger viewing angles when compared to more typical gaze, ~2.5 degrees is likely sufficient to identify the object being gazed at. This of course depends on the size of the object, but it certainly sounds reasonable in a driving situation.
Alternatively, if you aren't interested in those glances to more extreme angles, and most of the experiment contains gaze accompanied by head movements, then maybe calibrating within the smaller FoV is sufficient.
Thanks! I'll think about it.
Like, I need to define a big surface for the whole stimulus, right? The size of the stimulus spans from 0.22 to 0.78, and converting to an actual distance in centimetres is difficult because we don't know where (0,0) and (1,1) are.
You can adjust the surface relative to the markers. This will move their coordinate systems accordingly. See surface B in the screenshot. The origin is not necessarily the corner of the marker. Therefore, I suggest adjusting the surface to match the size of your stimulus. Then you can calculate the real world coordinates in cm by multiplying the estimated surface gaze location by the size of the stimulus.
E.g. for the center point (0.5, 0.5). Assuming the stimulus has a width of 30 cm and a height of 20 cm. Then the center point would be at 0.5 * 30 cm = 15 cm from the left and 0.5 * 20 cm = 10 cm from the bottom.
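If it helps, here is a minimal sketch of that conversion in Python against the Surface Tracker export; the surface name and the stimulus dimensions are assumptions you would replace with your own:
```python
import pandas as pd

# Assumed stimulus size in cm (replace with your own measurements)
STIMULUS_WIDTH_CM, STIMULUS_HEIGHT_CM = 30.0, 20.0

# Exported by the Surface Tracker; the surface name "Stimulus" is hypothetical
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Stimulus.csv")

# Keep only gaze that actually falls on the surface
on_surf = gaze[gaze["on_surf"] == True].copy()

# Surface coordinates are normalized: (0, 0) bottom left, (1, 1) top right
on_surf["x_cm"] = on_surf["x_norm"] * STIMULUS_WIDTH_CM
on_surf["y_cm"] = on_surf["y_norm"] * STIMULUS_HEIGHT_CM

print(on_surf[["x_cm", "y_cm"]].head())
```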
Hello papr, I tried what you suggested, but it's the same result (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting). The cam is detected by the system, but not used by pupil_capture.exe...
Can you confirm that the device manager lists the Pupil Cam entries? If yes, in which category are they displayed?
This helped... Thanks so much
Credit to @nmt who helped me to come up with that answer.
After the test you recommended, here's the device manager when I connect the eye tracker to the USB. I have been waiting for approximately one hour for pupil_capture.exe to respond... because I remember that the cams were detected differently than the ones on the jpg.
They were likely detected as "unknown". Please contact info@pupil-labs.com in this regard. This is the second case that I have seen where drivers seem to be installed correctly but Capture is not able to access them.
Hi! I have a question about a problem that I started having yesterday and I haven't found a solution. When I open the files of my experiment in the pupil player I can no longer see the video of the world camera, I have been working on this file without problems for a week and suddenly yesterday it stopped working. I can find the video in the folder but even re-importing it into the Pupil Player I still see all gray.
Hi, I am having issues connecting the pupil camera to the pupil capture software. Every time I try to connect it, grey screens appear. I tried using the device manager and uninstalling the drivers and restarting the computer with no luck. I also logged in as an administrator and ran into the same issue. Please let me know if you have any recommendations in order for my device to recognize my pupil camera.
Could you please check in which category the Pupil Cam devices are listed in the device manager? If you have a binocular headset, there should be 3 devices in total.
Hi, I am sorry to hear that. It sounds like your video file got corrupted and Player has difficulties opening it. If the video file is playable in every other media player, please try shutting down Player, deleting the world_lookup.npy file, and restarting Player.
it worked! thanks!
Hi, are there any apps I need to install on my PC so I can transfer my videos from the Companion device?
The 3 cameras are listed under the camera category and libusbK USB devices category.
Do the ones under the libusbK category look slightly more transparent?
Under both categories at the same time?
As long as we created a surface before exporting the data ?
Correct. The export only exports the current state. You need to make a new export if you change anything.
Hello, how should I read the values of the timestamp column? How do I get from the values to seconds? The same question applies to the diameter_3d column: which unit is used? The values differ very much from each other. See example:
Hi @user-d2873b. The unit of the timestamps is seconds in relation to a fixed starting point. You can read more about that in our online documentation: https://docs.pupil-labs.com/core/terminology/#timing. The unit for diameter_3d is mm.
That said, the formatting doesn't look quite right in your screenshot. Sometimes, spreadsheet software auto-formats the timestamps incorrectly when importing the pupil_positions.csv file. For example, the German meanings of ',' and '.' in floating point numbers are opposite to their English meanings. Your spreadsheet software may have parsed the values as integers instead of decimal numbers, and subsequently filled in the 3-digit separators. There should only be one decimal point within each value!
Please try adjusting the format for those columns in your respective software in the first instance. If that doesn't work, try re-exporting and re-opening the data! Let us know how you get on.
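For anyone checking this in Python instead of a spreadsheet, a quick sketch (the export path is an assumption):
```python
import pandas as pd

# Adjust the path to your own export folder
df = pd.read_csv("exports/000/pupil_positions.csv")

# Timestamps are in seconds (float), diameter_3d is in millimetres
print(df[["pupil_timestamp", "diameter_3d"]].dtypes)
print(df[["pupil_timestamp", "diameter_3d"]].head())

# If the file was re-saved by a spreadsheet tool with ',' as the decimal
# separator, re-read it with an explicit decimal argument:
# df = pd.read_csv("pupil_positions.csv", decimal=",")
```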
Hello. The latest Pupil Capture 3.5 doesn't detect the cameras with the latest Pupil Core on Monterey despite running with root privileges: "Could not connect to the device". Has anyone been able to run it successfully? Thanks a lot!
I am able to run it. What exact version of macOS Monterey are you using? To confirm, you are running the application from the terminal to give it the admin. privileges?
Hi all. I searched through the docs and videos, but can't find info on "best practices" for matching the world-view camera to the real "world view" of the user.
Could you give an example of what you are trying to achieve?
Eye tracking of a user observing physical objects like a handheld product.
Using the monitor as a reference leads the user to shift their head to match the world view from the monitor. Lifting the world-view camera up towards the ceiling, asking the user to comfortably watch the center of the monitor, and then realigning the world-view camera (without the user shifting their head) to match the center of the world-view image kinda works. But then we get into the situation where the user can move the object and place it too low in the world-view image during the experiment.
Pupil Capture assumes that the relationship between scene and eye cameras is fixed during and after calibration. Adjusting the scene camera breaks this relationship and requires recalibration.
1 - worldview camera alignment 2 - calibration.
so let's break this in two parts.
It leads to the problem that the user calibrates at ~50 cm distance, but can then move the object closer, to 30-40 cm, or lower than where the world view was set.
With calibration we have no problem - the best practice we found is to calibrate using a monitor set to the same distance as the objects the user interacts with.
The problems are with aligning the world view - before calibration.
You can always use the 1920x1080 resolution which has a huge field of view.
hm.
So using a lower resolution is not scaling but more like a digital zoom - cropping a smaller part of the sensor and making the picture "zoom in"?
yes, 1280x720 is a crop of the full sensor's field of view
But with the huge FOV we "lose" the ability to do a good calibration with a small monitor, as you can't put it "in the user's face". And if you set it at 50+ cm, it occupies a very small part of the picture and the calibration results are not very good.
That is correct. This is exactly the trade-off one needs to make: the bigger the FOV, the more objects one can see in the video, but the fewer pixels are available for these objects.
I see that we need to experiment more and find a better method for our usage scenario.
Hi @papr thanks a lot for getting back quickly. Apparently, the problem is in the USB adapter as the new Mac has only USB C. I tried a bunch of adapters. Some work with the new Pupil Core with single camera while the others work with new Pupil core with two cameras. Thanks a lot once again !
In this case I recommend getting a USB-C to USB-C cable that is suitable for data transmission. Then you can connect the headset to the laptop directly and it should work properly.
I have 2 questions on the world view: 1. Why is it not set directly in the middle of the forehead? Why is it offset? It causes parallax if calibration is done at a longer distance than the objects in the experiment. 2. Where can I read more on the USB-C option for external cameras/sensors?
Feel free to write an email to info@pupil-labs.com if the infos on the website are not sufficient regarding your second question.
@mpk might be able to comment on your first question.
thanks. I'll wait the reply
@user-a07d9f the USB-C world cam option is used for prototyping. You will need to "bring your own" sensor and will need to write your own capture backend. Some researchers have used this for adding a depth sensor (previously using Realsense sensors).
I understand the idea, but I thought you (as Pupil Labs) had some ready solutions for this.
ok - I'll go with fullhd for now.
By the way, is it possible to have a reduced resolution but with the same FOV as 1920x1080? Like 960x540, to get a direct 2x downscale?
That is not possible with our cameras. You would need to scale down the image yourself.
Hm, and how do zoom with pan and all this work then? Is it processed in software on the PC?
Even though some cameras support these UVC controls, they are not implemented in ours. Our cameras provide a fixed set of resolutions that correspond to a specific crop/zoom/scaling of the sensor. This is done on the camera itself, to my knowledge. Pupil Capture just requests the target resolution and processes whatever is returned by the camera.
We tried that with the Intel RealSense camera. But when Intel dropped support for the R200 from one day to the other, we were forced to do so, too. And this is not the user experience we want for our customers. The usb-c mount is a trade-off where we can provide a flexible hardware and software platform without depending on camera manufactures.
I see
Ok, I'll go test all this now. We worked with an old version of Pupil Capture all this time (2.3), and I wonder if switching to 1920x1080 will make the CPU load much greater. Are there any hardware encoding features present in newer releases?
CPU load will increase for detection features like surface tracking or the calibration marker detection. There are no hardware acceleration features.
That makes me sad. "so much HW and so little acceleration"
Modern CPUs should be able to handle it without any issues.
Ok - I'll go check the system recommendations for CPU, RAM, and storage, and also for a mobile setup.
The most important metric is CPU frequency for Intel CPUs. The Apple M1 chips handle it easily.
Hello all, is it correct to say that surfaces created with the surface tracker are defined in the undistorted view of the world camera, rather than the distorted view? I am attempting to use surfaces to track gaze on a curved computer monitor (unless this is not recommended) and am trying to address similar issues as this user was https://github.com/pupil-labs/pupil/issues/1930
Hi, I wanted to quickly follow up with a clarification. The "detection error" described in the linked GitHub issue might look like the issue that you are having with the curved monitor, but in reality it is an additional error source. 1. Scene camera lens distortion - this is corrected for by defining the surface in undistorted camera space and using it for gaze mapping. 2. Assumption of a flat surface - this is where you need to correct for the curvature of the monitor yourself. One workaround could be to set up multiple markers around the monitor and define multiple flat surfaces to approximate the curvature. One would then need to manually combine the mapped gaze results from all sub-surfaces post-hoc.
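For illustration, a rough sketch of that post-hoc combination step, assuming each sub-surface was exported via the Surface Tracker (the surface names are placeholders for your own definitions):
```python
import pandas as pd

# Hypothetical sub-surface names approximating the curved monitor
SUB_SURFACES = ["monitor_left", "monitor_center", "monitor_right"]

frames = []
for name in SUB_SURFACES:
    df = pd.read_csv(f"exports/000/surfaces/gaze_positions_on_surface_{name}.csv")
    df["surface"] = name
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# Keep only samples that were actually mapped onto one of the sub-surfaces
combined = combined[combined["on_surf"] == True]
print(combined.groupby("surface")["gaze_timestamp"].count())
```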
hi, surfaces are actually defined in both coordinates. The previewed outline connects the distorted corners. Gaze is mapped within the undistorted space. Nonetheless, the algorithm assumes that the surface is flat. So mapping onto a curved monitor will introduce some mapping error.
Thanks for the reply - so in the gaze_positions_on_surface file generated by the surface tracker export, on_surf events refer to times when the gaze mapped in the undistorted space is within the surface as defined in the undistorted coordinates?
correct
Hi guys, I want to visualize the gaze angles in horizontal and vertical degrees (not with the angles theta and phi defined by the eye coordinate system), to make it more intuitive. So looking straight ahead would result in 0°/0°. My current approach is to use norm_pos_x and norm_pos_y and scale them with respect to the FOV of the world camera and a defined mid point. Is it possible to do it this way? I am wondering whether the linear scaling is ok.
I am doing it this way because I want to compare typical gaze angles with my target angles during the natural feature calibration. If I subscribe to the calibration data via zmq, I get the ref_list of the targets in norm_pos_x and norm_pos_y as well.
thanks, Cobe
norm_pos are distorted coordinates, i.e. you need to correct for the lens distortion. You can use gaze_point_3d instead, which uses undistorted 3d coordinates. (0, 0, 1) corresponds to looking straight ahead (from the scene camera's point of view). Subtracting this vector from your actual gaze_point_3d gives you a difference that can be divided into horizontal and vertical components and transformed to degrees.
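A minimal sketch of that calculation from the exported gaze_positions.csv (the export path is an assumption; depending on the camera coordinate convention you may want to flip the sign of the vertical component):
```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

x = gaze["gaze_point_3d_x"].to_numpy()
y = gaze["gaze_point_3d_y"].to_numpy()
z = gaze["gaze_point_3d_z"].to_numpy()

# Angles relative to the scene camera's forward axis (0, 0, 1), so looking
# straight ahead corresponds to 0°/0°
horizontal_deg = np.rad2deg(np.arctan2(x, z))
vertical_deg = np.rad2deg(np.arctan2(y, z))  # sign may need flipping (y points down)

print(horizontal_deg[:5], vertical_deg[:5])
```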
Ok, I clearly misunderstood the description given here: "looking directly into the eye camera: (x=0, y=0, z=-1) (cartesian); phi=-π/2, theta=π/2 (spherical); looking up: decreasing y vector component, increasing theta; looking left: increasing x vector component, increasing phi"
What do you mean by "actual gaze_point_3d"? Should I e.g. place a marker straight ahead at 0°/0° H/V, measure the vector, and subtract it from the other gaze_point_3d data? Then I would plot everything as follows:
For better understanding, I just recorded a video only looking horizontally -> theta = vertical angle.
As you said before, you do not want to use theta/phi since they are in eye camera coordinates. Everything within the pupil data is in its respective eye camera coordinate system. What you want to use instead is gaze data, which is in scene camera coordinates. Gaze data includes the gaze_point_3d field that I was talking about. Please be aware that you need a successful calibration to receive gaze data.
Good morning from here
I know I have asked this question before, a very long time ago. I'm so sorry if it's kind of boring.
I want to know the importance of the validation step. How do you know the accuracy has decreased due to slippage? And why do the validation after calibration? Do we take the accuracy values for each participant or for each trial?
Hi @user-7daa32. Please see this response for reference: https://discord.com/channels/285728493612957698/285728493612957698/838705386382032957
Don't forget that you can use the search function in the top right to find answers to your previous posts. You can also filter messages from specific users and in specific channels, e.g. from: <username> in: core <search term>
Yes ok, the generated plot is based on gaze data (gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z), which I exported via Pupil Player -> gaze_positions.csv. The plot is generated by pupil-tutorial 5 - Analyzing Gaze Velocity. So the next step would be as I described here: "Should I e.g. place a marker straight ahead at 0°/0° H/V, measure the vector, and subtract it from the other gaze_point_3d data?" The previous norm_pos_x/y are also based on gaze data. I had not analysed pupil data before, my fault, sorry.
Ah, my bad, then you are doing everything correctly already. The only thing left is to subtract 90 degrees from both the theta and psi signals such that looking straight ahead corresponds to 0/0 deg. The tutorial's implementation has its center at 90/90 deg.
Hi - would it be possible to use the functions to process the eye tracking data in real time on a local processor? I am looking into a project where the data would need to be collected and processed on a moving vehicle, and the world camera video with the gaze position imposed would then be sent wirelessly back to a remote location.
Hi, yes, this can be made possible but requires fairly high processing power from the local unit.
Do you have some specifications for the level of processing required?
Thank you
Something around an Intel Core i5 with >2.8 GHz? Giving an exact number is difficult. Alternatively, you could also use something low-powered, e.g. a Raspberry Pi, and stream the scene and eye videos to a computer with more processing power running Pupil Capture.
Great - thank you. Sounds like the more reasonable approach is to combine the video off the car. Thanks for the info
Hi, does Pupil Labs have multi glint eye tracking systems?
No, we don't. We rely on detecting the pupil outline and fitting an ellipse to it
I will have a look on Monday.
Hi Pablo, did you have a chance to look into this error since the weekend?
Hi there! I am setting up my lab and would like to incorporate eye tracking. I have a question regarding my setup and compatibility with pupil labs
My existing PC has a custom-made software (using LabView compiler) where the subject must track a moving dot. Would the pupil glasses allow interface with a 3rd party stimulus presentation software?
It looks like Pupil Invisible would do this. I am wondering how I would sync up the eye tracking data with the stimulus and other hand movement data I am collecting.
Hi @user-97fc9e. Both Pupil Core and Pupil Invisible have Network APIs that offer flexible approaches to send/receive trigger events and synchronise with other devices (e.g. https://docs.pupil-labs.com/developer/core/network-api/#network-api). We also maintain Lab Streaming Layer (LSL) relays for publishing data that enables unified/time-synchronised collection of measurements with other LSL supported devices (e.g. https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture). So that I can make a concrete product recommendation, it would be helpful to learn more about your moving dot paradigm (and other gaze tasks) - what sort of eye tracking metrics are you looking to extract?
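For reference, the Pupil Core side of the Network API looks roughly like the following sketch using pyzmq (default Pupil Remote port 50020 on the same machine; adjust the address for your setup):
```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
# Default Pupil Remote address; change the IP if Capture runs on another machine
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Capture for its current time to align clocks with your stimulus software
pupil_remote.send_string("t")
print("Pupil time:", pupil_remote.recv_string())

# Start and stop a recording around a trial
pupil_remote.send_string("R my_trial_label")
print(pupil_remote.recv_string())
# ... run trial ...
pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```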
Hello all.
Are there any ways to troubleshoot the player 2.3.0 startup failure on win10?
Please try deleting the user_settings_* files in the pupil_player_settings folder and starting Player again. If I remember correctly, older versions had an issue where the app windows would not appear if they had been used on a no-longer-connected computer screen.
Sorry to say this, but there is no pupil_player_settings folder there.
If the application did start up correctly before, you should be able to find it at C:\Users\<user name>\pupil_player_settings
Thank you - this resolved the issue. I wanted to ask if there are FAQs and troubleshooting guides anywhere in the docs section.
For camera connection issues with Core, we have https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting. We do not keep a troubleshooting list with issues that have been fixed in the latest release. For open issues, see https://github.com/pupil-labs/pupil/issues
By the way - we have several Cores with different revisions. One has eye cam connectors without the plastic thingy that looks like a cap on it, and we had to repair the wires several times. I wonder if it is possible to order those caps?
That might indeed be possible. Please contact info@pupil-labs.com in this regard.
Thank you
Hi Neil, I'd like to measure pupil dilation, duration of gaze fixation, and saccadic movements (trajectory, distance, speed) as the subjects move their eyes along the line. The dot starts at the center of the screen and moves along a straight line in one of 28 directions.
Hi @user-a68b92. Pupil Core lends itself better to the measurements you describe. It records pupil diameter both in pixels (observed in the eye videos) and millimetres (provided by the 3d eye model). We also implement a dispersion-based fixation filter: https://docs.pupil-labs.com/core/terminology/#fixations. While we don't quantify saccades explicitly, we do expose the user to raw data that can be used for this purpose. Check out this page for details of the data made available: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv. With these data it should certainly be possible to examine eye movements in relation to the moving dot. You might also be interested in Pupil Core's surface tracker, which can map gaze to surfaces such as computer screens: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Hi @papr . I want to manually save and load 3D eye models, so I tried to dump self.detector into a file in the pye3d_plugin, but I got a RuntimeError: SynchronizedArray objects should only be shared between processes through inheritance. I also tried creating a method in detector_3d.py (the detector class in pye3d package) to access the 3d eye model, but I still get the same RuntimeError. It seems like an issue related to multiprocessing, but I cannot find anything related to multiprocessing in pye3d class (in file detector_3d.py) or pye3d_plugin.py. Could you please point out how I should solve this issue? Here is my modified pye3d_plugin.py:
Hi @papr, I've got a quick question regarding the unit interpretation of the saccade length. Assuming I'm obtaining the "saccade length" using the equation above (from the columns norm_pos_x & norm_pos_y), I would get a result like the figure above. I would like to know how to interpret the result as an actual length (e.g. mm or cm)? Many thanks~
Hi @user-04dd6f. If I understand correctly, your approach here is to classify saccades as the inter-fixation change in gaze position. Note that during the time between fixations, other ocular events, e.g. smooth pursuits, PSOs etc., may occur. In addition, if there are multiple rapid saccades in between two classified fixations, e.g. if the fixation filter settings weren't optimised for very short fixations, then the inter-fixation change in gaze would not accurately reflect saccadic distance, and it would be difficult to spot these just from the raw data. A more robust way to classify saccades, depending on your experiment, might be to use a velocity-based approach.
How can the distortion be calculated if there is no undistorted video? Using gaze_point_3d seemed okay, but comparing the plotted data points it felt like there is an inversion of the data, and when using both eyes the data points are quite separated.
Hi @user-80316b. I'm not sure that I understand the question fully. Can you elaborate a bit more on exactly what you are trying to do?
I'm not sure if it helps to elaborate my question further. To me it seemed that the graph I plotted from the normal gaze data (blue = eye0, orange = eye1) is inverted compared to the real video. I also got elliptical boundaries. So I did some research and realised that working with the "normal" data doesn't work and we have to use the "3D" data.
In short, I want to define a small area which works as an AOI (which didn't work in Pupil Player, due to missing markers) and only want to measure time in the AOI and outside of the AOI during the measurement time. It seemed to work okay with "normal", but it might not be accurate enough due to distortion. Working with the 3D data doesn't work either.
Hello, I would like to ask something related to the calibration process:
1. It turns out that we have 2 ways of calibrating, and I just want to confirm if I did them correctly: (1) Screen Marker Calibration Choreography: keep the head still while moving the eyes to track the markers, and (2) Single Marker Calibration Choreography: keep the eyes still on the marker while making spiral movements of the head.
2. In terms of accuracy, which calibration approach do you think is better?
3. When I finished the Single Marker Calibration Choreography, I removed the eye tracker and turned off Pupil Capture, but when I open them again it is still able to detect my gaze. Is it true that the calibration parameters from the previous process have been saved?
Hope to receive your support!
Hi @user-d50398. Responses below:
1. These are correct. However, you can also move the single marker around whilst keeping the head still (if using the physical marker).
2. The key difference with a single marker + head movement (or physically moving the marker around) is that you can calibrate a larger area of the visual field, as opposed to the screen-based 5-point calibration, which will only cover the area that the screen occupies. Both approaches should yield consistent accuracy for their respective area.
3. Yes, the previous calibration is saved. Important note: if you have removed the headset from the wearer, we advise you to re-calibrate, as the previous calibration might not be valid.
Hi! I'm using Pupil Core to study the glare experienced by athletes during sports competitions. I am trying to find the angle formed by the eye, the gaze point, and the lighting. Regarding the position of the lighting, the position coordinates (pixels) on the image were obtained by annotating the distortion-free image acquired by iMotions. The gaze position uses Gaze2dX and Gaze2dY acquired by iMotions. The angle is calculated assuming the eye position coordinates are at the center of the image (640,360). Is this calculation method correct? Thank you in advance!
Hi @user-785b0f. The assumption that the eye position coordinates are at the centre of the image is incorrect. I'm not totally familiar with the iMotions export, but eye_center*_3d_x,y,z (*eye0 or eye1) defines the eyeball centre estimates in 3d camera coordinates in our raw data export.
Hey, I'm trying to extract all frames from the world video to jpg format using ffmpeg as described in this tutorial :https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb. However, trying to account for variable frame rate using the -vsync 0 option gives me the following error:
Encoder did not produce proper pts, making some up.
[mjpeg @ 0x7f982370c180] Invalid pts (41) <= last (41)trate=N/A speed=2.12x
Video encoding failed
Conversion failed!
Using png as the output format or using -vsync 2 (variable frame rate) works, but I'm not sure whether the frames are extracted correctly. Could you tell me why this error is produced, or whether -vsync 2 is also a valid option for extracting frames? Thanks a lot!
Hey guys, have you maybe had time to look into my problem? Maybe it's better to ask this question on an ffmpeg forum, I know it's quite specific...
Hi, I feel like distortion is less of an issue in this case. The reason one usually needs markers is that gaze is predicted in scene camera space, but the latter can move relative to your AOI. You won't be able to map gaze to your AOI unless 1) you assume that there is no head movement relative to the AOI or 2) you have some kind of external reference to calculate the relationship between scene camera and AOI over time.
I would also like to emphasize the difference between pupil and gaze data. The former is estimated in eye camera space and does not require a calibration. The latter requires a calibration and is estimated in scene camera space. You seem to have plotted pupil data if I am not mistaken.
Thanks for your fast reply. I think I could approximately calculate the values by defining an area for the lower third of the video frame and one for the two upper thirds. It's a handheld device which is held like a mobile phone, so I can assume that if the values are in the lower third the person is looking at the device, and for values above that the person is looking at the "world". Which data points should I use for this scenario? I would guess it's gaze_point_3d_x and gaze_point_3d_y in gaze_positions.csv, using the (0, 0, 1) vector calculation as you recommend on your homepage?
Thanks for your help, first time using the core
The only way to be sure of this would be via surface tracking markers. Personally, I wouldn't be comfortable making the assumption that the lower vs upper thirds of the video frame are associated with phone and world gaze, respectively, and would instead adopt manual annotation to discern whether or not participants gazed at the mobile device. Of course, this depends on how critical these insights are!
Okay, I had that in mind as well. With the annotations there is a column called duration; how can I use the annotations to get a value? I couldn't figure it out.
The manual annotations in Pupil Player do not have a duration. They are timestamped with the timestamp of the displayed scene video frame. I would rather recommend using two separate events, enter and exit.
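As a sketch of how that could be analysed after export (the label names are just examples, and it assumes strictly alternating enter/exit annotations):
```python
import pandas as pd

ann = pd.read_csv("exports/000/annotations.csv")

# Hypothetical labels used while annotating in Player
enters = ann[ann["label"] == "aoi_enter"]["timestamp"].to_numpy()
exits = ann[ann["label"] == "aoi_exit"]["timestamp"].to_numpy()

# Pair each enter with the following exit; both arrays must be equally long
durations = exits - enters
print("Total time in AOI [s]:", durations.sum())
```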
okay, thanks a lot! Then I'll do it like that.
Thank you @nmt for your information, it is clear to me now. But when doing the calibration with the physical marker, I am not sure when (after how long) we should stop the calibration process. Secondly, after calibration there is a spiral displayed on the visualization, as in the images below; what does it mean? Does it indicate the region of confidence for gaze detection?
To add to @nmt's explanation: The small orange lines are the residual training errors for the collected pupil data. Your calibration accuracy looks very decent from the shared picture alone!
There is no set limit for when you should stop. There is usually a trade-off between how much of the visual field you want to cover and how long the wearer can keep their concentration, i.e. staying fixated on the target! The spiral is showing the area that was covered during the calibration
@nmt Thank you for your fast reply. I also think that the effectiveness of the angle of composition needs to be discussed. Therefore, I am currently studying the relationship between the installation position of the lighting equipment, the pupil contraction rate, and the subjective evaluation. Is it possible to convert eye_center*_3d_x,y,z (*eye0 or eye1) to 2D? I processed the distortion-free 2d image plane output by the iMotions Exporter in Pupil Player.
I think the goal would be to unproject the 2d coordinates of the lighting from pixel to camera space. @papr can you offer more insight into this? I do wonder whether calculating an angle between the viewing direction and the lighting would be the best approach. Such an angle would be relative to the scene camera coordinate system, and therefore dependent on the wearer's head position in space. Have you considered this?
Thank you @nmt , then is it true that it is better to let this spiral cover the whole field of view (of the world camera)?
It is more true that the calibration should incorporate areas you will record in your experiment, which isn't necessarily the whole field of view.
Unfortunately, it fell off my radar. I made a note and will look at it first thing tomorrow morning
No problem! Thanks for your attention.
Hi @papr (Pupil Labs) . I want to manually save and load 3D eye models, so I tried to dump self.detector into a file in the pye3d_plugin, but I got a RuntimeError: SynchronizedArray objects should only be shared between processes through inheritance. I also tried creating a method in detector_3d.py (the detector class in pye3d package) to access the 3d eye model, but I still get the same RuntimeError. It seems like an issue related to multiprocessing, but I cannot find anything related to multiprocessing in pye3d class (in file detector_3d.py) or pye3d_plugin.py. Could you please point out how I should solve this issue?
Hey, sorry, I saw your message yesterday but it fell off my radar. I would recommend writing out the model parameters (x,y,z) explicitly instead of dumping the python object. There should be an attribute to access these values. The object should be a synchronized array, yes, but you should be able to convert it to a tuple and serialize the tuple
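To illustrate just the serialization idea, a minimal sketch with a stand-in multiprocessing array (in the real plugin, the values would come from the pye3d model attribute mentioned above, not from this placeholder):
```python
import json
from multiprocessing import Array

# Stand-in for the model's shared state (e.g. the eye model centre); in the
# plugin, read the synchronized array attribute of the pye3d model instead
shared_params = Array("d", [0.0, 0.0, 35.0])

# Pickling the SynchronizedArray raises the RuntimeError above; converting the
# values to a plain tuple first avoids it
params = tuple(shared_params[:])
with open("eye_model.json", "w") as f:
    json.dump(params, f)

# Restoring: read the plain values back and write them into the shared array
with open("eye_model.json") as f:
    restored = json.load(f)
shared_params[:] = restored
```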
Hi. Sorry for the delay. Multiple people have been using the system. Here is the current state of things:
- In Device Manager, the Pupil devices appear as a single device: "Unknown USB Devices (Device Descriptor Request Failed)", under "Universal Serial Bus controllers".
- We have another, monocular Pupil headset, which when plugged in appears solely as two libusbK devices and works fine with Pupil Capture.
- The binocular headset works perfectly with Pupil Capture on another machine.
This issue started after someone uninstalled the drivers for the binocular headset in Device Manager and attempted to reinstall them by running Pupil Capture as administrator, as recommended on the website. Prior to reinstalling the drivers, the binocular headset worked perfectly. Uninstallation/reinstallation as described on the site does not help.
I guess something about the device configuration isn't getting cleared out upon right-click->uninstall. Since the device is unknown, there's no checkbox to "delete the driver software". Is there a specific directory I should look in to find and remove such files? Thanks again.
I am not sure about this. I will have to discuss this with my colleagues.
I see. Thanks! I will study a little more about the modeling code and try to dump things out in other forms.
This is the model attribute I was talking about. https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/asynchronous.py#L77
I am actually not sure if you could set the values when recovering the model from disk. There is a bit more to the model's state than just its center. It might make sense to write explicit de/serialization functions for the models, but that requires deeper knowledge of the inner workings of pye3d. I can work on that, but unfortunately not until mid/end of February.
May I ask whether it is possible for users wearing contact lenses to use the eye trackers? And what about users wearing eye glasses?
Hi @user-d50398. Responses to your questions:
- Contact lenses + Pupil Core: contact lenses should not negatively impact pupil detection with Pupil Core. If you have users who wear spectacles/prescription lenses and also have contact lenses, we would recommend asking them to wear the contact lenses, if possible.
- Spectacles/prescription eye glasses + Pupil Core: it is possible to use prescription eye glasses with Pupil Core, but not recommended, because it is difficult to set up and does not work for all users due to facial geometry and glasses frame shape. If you do go this route, you will need to capture the eye directly from below the glasses frame and not through the lens of the eye glasses, as this will introduce reflections that negatively impact pupil detection and therefore gaze estimation.
Hi, another quick question: in terms of the distance between the eye and the fixation, is it correct if I use "eye_center0_3d_z" from "gaze_positions.csv" and the average of "gaze_point_3d_z" from "fixations.csv" to obtain the linear distance?
Many Thanks
The depth component of gaze_point_3d is prone to inaccuracies, particularly at longer viewing distances. gaze_point_3d is defined as the intersection (or nearest intersection) of the left and right eyes' visual axes. Predicting depth from these is inherently difficult. For a more accurate measurement of eye-to-fixation distance, I would recommend using the head pose tracker plugin: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Got it, but head pose tracking is only applicable for those who still have the raw data, right? (Because I only have the exported data with me.)
I fixed the typo in the docs but was not able to reproduce the issue. Hence, I created this debug plugin: https://gist.github.com/papr/98636456fd4b4d0621835ac0d3cffc77 Use it instead of the official plugin. It will log an error message when it runs into the issue above instead of crashing. Please share the message with us should you run into the error again!
Here is how you can install the plugin https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin
I opened up Pupil Capture and ran your debug plugin - the intrinsics estimation worked with no errors. So, frustrated that I could now not reproduce the error I was having, I ran the regular camera intrinsics estimation and it crashed as it had before.
I then closed Pupil Capture, opened it again, and ran your debug plugin for intrinsics estimation. The system recorded error messages (the same ones as before) and spiked CPU usage like it was going to crash, but it was able to finish intrinsics estimation. Emailed the logs.
If your exported data has no head pose tracker results, then that option is off the cards. How accurate does your eye-to-fixation point distance need to be? I wouldn't rely on it for anything important, but rather use it as a rough estimate
I need the distance between the eye and the fixation for calculating the saccade amplitude (visual angle), so I assume high accuracy isn't required for this purpose; a rough estimate of the distance is enough~
Note that there is no way to check the quality of the distance measure without access to head pose data
You can work directly with the gaze_normals to calculate the saccade amplitude in degrees.
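A sketch of how that could look with the exported gaze_positions.csv, assuming the standard gaze_normal1_* columns for one eye (monocular rows may contain NaNs, hence the dropna):
```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
cols = ["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]
gaze = gaze.dropna(subset=cols)

# Unit gaze direction vectors for one eye, in scene camera coordinates
normals = gaze[cols].to_numpy()
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Angular change between consecutive samples in degrees; summing these over a
# saccade (or taking the angle between its first and last sample) gives its amplitude
cosines = np.clip(np.sum(normals[:-1] * normals[1:], axis=1), -1.0, 1.0)
angles_deg = np.rad2deg(np.arccos(cosines))
print(angles_deg[:10])
```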
@nmt I just found the explanation of the "gaze_normals", so I assume that the way of obtaining the saccade amplitude is to get the distance directly from "gaze_normal_z"?
please correct if I'm [email removed]
Could you please elaborate on how to use "gaze_normal_z" to get the saccade amplitude (i.e. does the following equation correspond to your answer: saccade amplitude = 2·atan(Size / (2·Distance)))?
So is it correct if I just put "gaze_normal_z" & "eye_center0_3d_z" into the equation above? (I am also wondering about the units of "gaze_normal_z" & "eye_center0_3d_z".)
Thanks~
Hello. I want to export some data again because I changed something. During the export I get an error message: "surface_tracker.surface_tracker_offline: Surface Gaze mapping not finished. No data will be exported". Any idea how to fix this? I tried recalculating the gaze mapper, but with no success.
Hi @user-8ed000. How long is your recording? This error indicates that the Surface Tracker is still mapping gaze to the surface when you hit export
Hi, I would like some advice on entering areas of interest into a video within Pupil Player. Unfortunately, using surface tracking is not possible in our case (we are not able to place markers in the scene). The issue is that we are doing interviews and need to compare the level of eye contact within the interview. Is there any way to manually insert the areas of interest into the video? Unfortunately, I have not found a manual anywhere that addresses this issue. Thanks
Hi @user-9e0d53. It isn't possible to add AOIs or define surfaces in Pupil Player unless markers were placed in the scene. There are a few things you could try instead:
1) Manually annotate important events in the recording, e.g. fixations on eyes: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player (easy to perform but time consuming)
2) Process the world video + gaze overlay with a face detection utility and automatically relate gaze with the facial features (more technically challenging, but could save time in the long run!)
Thanks for sharing the figure - this approach is more useful for defining the size of stimuli, e.g. that are presented on a screen, in terms of visual angle.
If you are interested in saccades, Pupil Core already provides theta and phi for each eye in the pupil_positions.csv export! Their unit is radians. Note that theta and phi are relative to the eye camera coordinate system.
Thanks for the kind response.
Referring to your answer, I found a description in the docs, shown in the figure below. Does the marked sentence mean "looking down: decreasing y vector component; increasing theta" & "looking right: increasing x vector component; increasing phi" if I only have the eye0 data?
The recording is 28 minutes long. Is there an indicator for the progress of the surface tracker's gaze mapping?
A 28 minute recording could take some time to complete, depending on computer specs. You can check the progress of the Marker Cache and mapping for each surface defined within the Player Window (see screenshot: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker)
Hi, how do I calculate common gaze from gaze 0 2d and gaze 1?
Hi @user-4eef49. Please clarify what you mean by "gaze 0 2d".
Gaze from the individual eyes. So I have gaze information from gaze.2d.0. and gaze.2d.1. How do I calculate the common gaze from the gazes of the individual eyes?
Hi, binocular gaze (gaze combined from both eyes) is published under the gaze.2d.01. topic. Please let us know more about your setup/calibration process if you don't receive such data.
It's working brilliantly. What is the meaning of the two numbers in norm_pos?
I'm glad to hear it's working! The gaze datum format is described in detail here: https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format
Also, I would like to know how to measure the saccade amplitude between "fixations" rather than "gaze"; what I understand is that the parameters "theta" & "phi" are based on the gaze.
Hi, what video are you trying to extract frames from exactly? The intermediate scene video or the exported one?
A simple way to check if -vsync 2 works as expected would be to use this Pupil Player plugin: https://gist.github.com/papr/c123d1ef1009126248713f302cd9fac3 It renders the frame index into the exported world video. After transcoding you can check if all frames are there as expected.
I'm using the exported one, thanks for the suggestion, I'll check it out!
Thank you. I will have a look at it tomorrow
Thank you, will talk to you tomorrow. I have sent the logs to data@pupil-labs.com in the interest of privacy and removed them from the previous message. Let me know if you cannot access them that way.
I can recommend reading "Identifying fixations and saccades in eye-tracking protocols": https://doi.org/10.1145/355017.355028 The most basic filter suitable for detecting saccades is probably the I-VT (section 3.1). Section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb shows how to calculate angular velocity from theta and phi. Prior to that operation you might want to convert theta and phi from radians to degrees.
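A minimal I-VT-style sketch on top of that, working on the pupil_positions.csv export (the export path and the velocity threshold are assumptions and need tuning for your data):
```python
import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
# Keep only the 3d detector rows of one eye; theta/phi are eye camera angles
eye0 = pupil[(pupil["eye_id"] == 0) & (pupil["method"].str.contains("3d"))]
eye0 = eye0.sort_values("pupil_timestamp")

theta_deg = np.rad2deg(eye0["theta"].to_numpy())
phi_deg = np.rad2deg(eye0["phi"].to_numpy())
t = eye0["pupil_timestamp"].to_numpy()

# Approximate angular velocity between consecutive samples (deg/s)
velocity = np.hypot(np.diff(theta_deg), np.diff(phi_deg)) / np.diff(t)

# I-VT: samples above the velocity threshold are treated as saccadic
VELOCITY_THRESHOLD = 300.0  # deg/s, an assumption - tune for your setup
is_saccade = velocity > VELOCITY_THRESHOLD
print(f"{is_saccade.sum()} saccadic samples out of {len(velocity)}")
```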
Thanks for the kind reference, I would take a look at it!
Just wanted to follow up to see if your colleagues had any advice on how to go about fixing this issue, thanks!
Hey, thanks for following up. Unfortunately, I do not have anything new on that. Could you please reach out to [email removed] regarding this problem?
Hello. Is it possible to create surfaces with shapes other than a rectangle in the Surface Tracker? For example a U shape? There are only four points available for changing the form; is it possible to add more?
Hey, unfortunately it is not possible to use other shapes. But you can always define multiple surfaces based on the same markers.
Hello all. Is it possible to set the surface mapper to include a bit more surface beyond the marker boundary? Or to use surface markers and adjust the shape (keeping it rectangular) of the area we check?
You can edit surfaces to drag the boundaries beyond the boundaries of the markers as long as the new boundary area stays co-planar with the markers.
Hello, I'm currently trying to build from source, but I am getting an ImportError for uvc. Is there a way to fix this issue?
Hi, apologies for the delayed response. You are missing one of our dependencies: https://github.com/pupil-labs/pyuvc
We offer Python 3.6 wheels on Windows [1] to simplify the install. On other platforms, please follow the corresponding source-install instructions [2].
Please be aware that we also offer easy-to-install application bundles for the Pupil Core applications here [3]. Their functionality can be extended via our powerful plugin API [4]. I can give you more pointers and recommendations on how to implement specific functionality if you let me know your use case.
[1] https://github.com/pupil-labs/pyuvc/releases [2] https://github.com/pupil-labs/pupil/tree/master/docs [3] https://github.com/pupil-labs/pupil/releases [4] https://docs.pupil-labs.com/developer/core/plugin-api/