Hey guys!
I'm wondering if the Pupil Labs Core equipment is capable of dealing with radiation?
Another thing- the application won't let me select the "shutter priority mode" or the "auto mode" - only the "manual mode" and "aperture priority mode" are selectable. Maybe that could be related?
It's also worth restarting Capture with default settings to see if that solves it - you can do that from general settings. Re. the different exposure settings, unfortunately the naming conventions are a bit confusing. 'Aperture priority mode' is actually the auto-exposure mode
@nmt Thank you!
Hi all, I am familiar with the dispersion-based algorithm used in Pupil Player to detect fixations post hoc. I have been reading the code and trying to understand it. So my questions are: 1) does it take into account the sample frequency of the gaze data (frame size?); 2) also, when calculating duration, is missing data considered? If there are more than a few missing timestamps, how is the end time of the fixation then calculated? Thanks!
Hey @user-908b50 - the search for the end of a fixation starts at line 207
in shared_modules/fixation_detector.py
(https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L207).
Starting from the gaze datum that achieves the minimum duration within the maximum dispersion (call this "A"), the detector iterates sample-by-sample until it finds gaze data whose timestamp exceeds the maximum duration (call this "B"). The samples in the range between A and B (but not including B) are searched to determine which one first exceeds the maximum dispersion.
I think that addresses both of your questions, but let me know if you need more clarification
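If it helps to see the idea in code, here's a rough Python sketch of that search (hypothetical helper names, not the actual fixation_detector.py implementation, which handles more edge cases):

# Rough sketch of the end-of-fixation search described above (names are hypothetical)
def find_fixation_end(gaze, start, a, max_duration, max_dispersion, dispersion_of):
    # gaze: list of gaze datums with a 'timestamp' key, sorted by time
    # start: index of the first sample of the candidate fixation
    # a: index "A" - the sample that completes the minimum duration within max dispersion
    b = a
    # walk forward until a sample's timestamp exceeds the maximum duration -> "B"
    while b < len(gaze) and gaze[b]['timestamp'] - gaze[start]['timestamp'] <= max_duration:
        b += 1
    # search the samples in [A, B) for the first one that exceeds the maximum dispersion
    for i in range(a, b):
        if dispersion_of(gaze[start:i + 1]) > max_dispersion:
            return i  # the fixation ends just before this sample
    return b  # dispersion never exceeded; the fixation spans up to (but not including) B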
Hi Pupil Labs team. We need to calibrate the Pupil Core at 1.5 m distance. This works perfectly when using the calibration marker that is sent with the eye tracker, which is sized for 1-2.5 m distance and has a diameter of about 6 cm. We were wondering whether making the calibration marker slightly bigger (printing the marker from the docs at DIN A4 gives a diameter of 8 cm instead of 6 cm) affects the calibration results (accuracy and precision)? A quick test showed no difference in calibration results; do you have any experience here?
Hi, it should not make a difference. The only issue would arise if you printed it too small and it could not be detected in the scene camera.
Hello, Is the screen marker calibration method only for recording gaze on the screen?
Hi @user-fa2527! You can use it for non-screen tasks too. It is more convenient for screen-based work, since the calibrated area matches the screen, but you can also use it for other setups.
I'm a brand new user desperately trying to generate some gaze data. I'm using Mac OS. My recordings show real world video but not gaze. Is that because Pupil Capture was not started with administrative privileges?
Admin rights are required to get the camera streams, so you would need them to run Pupil Capture. If you grant them, you should be able to make a recording and play it in Pupil Player. If you recently bought Pupil Core, you would be eligible for an onboarding session. You can ask for it at info@pupil-labs.com
Wow! Thank you for letting me know about the onboarding. That should really help. Question: in the online documentation, it says to copy the Pupil Capture application to the /Applications folder. Is that the same as the Applications folder?
The procedure does not differ from installing any other app on macOS; it will ask you to drag it into the Applications folder (usually at Macintosh HD > Applications). Once there, you can open the terminal on your Mac and use the command described here: https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer
I am working on a Macbook Pro. Can I plug the glasses directly into it without the USB adapter?
I changed the administrative permissions as recommended and still do not get gaze data. Here is what Pupil Capture looks like - apparently no data from right eye:
From your screenshot it looks like you are receiving right-eye data. In the confidence graph, the left- and right-eye values overlap (the right-eye values are behind the left-eye values)
Hello @nmt. I am using a plugin developed by jesseweisberg (https://github.com/jesseweisberg/pupil) to automatically detect objects from the scene camera. I have placed the plugin files in the Pupil Capture settings folder, and when I try to start the Pupil Capture software I receive the following warnings. Moreover, I cannot see the object detector plugin in the plugin manager (Pupil Capture application). Is this the correct procedure for loading a custom plugin, and if so, why are these warnings showing? How can I use this plugin with a Pupil Core headset?
eye0 - [WARNING] plugin: Failed to load 'object_detector_app'. Reason: 'cannot import name 'Visualizer_Plugin_Base'' eye1 - [WARNING] plugin: Failed to load 'object_detector_app'. Reason: 'cannot import name 'Visualizer_Plugin_Base''
Hi, @user-00cc6a - at first glance that repo was forked a long time ago and hasn't kept up with changes to the main project in several years. Digging just a little deeper and it looks like that fork includes changes to code besides just the addition of the plugin. In short, you are not going to be able to run that plugin on an up-to-date version of the Core software without significant effort. You will probably have a much easier time running that fork from source, although I suspect you will have to also hand-pick dependency versions from that time frame just given how much time has passed since that fork was updated.
Hi everyone! This is my first time connecting Core to my computer (macOS) and I'm having trouble with it. I'm wondering where I can get some help? Thank you!
Hi @user-74c615! Do you have trouble accessing the Pupil Core video streams from Pupil Capture on macOS? If yes, please take a look here (https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer) and let me know how it goes
Okok, thank you!
Getting started
Hey folks, I'm having a little trouble with processing gaze data.
For reference, here is the structure of the gaze datum:
{
# monocular gaze datum
'topic': 'gaze.3d.1.',
'confidence': 1.0, # [0, 1]
'norm_pos': [x, y], # norm space, [0, 1]
'timestamp': ts, # time, unit: seconds
# 3D space, unit: mm
'gaze_normal_3d': [x, y, z],
'eye_center_3d': [x, y, z],
'gaze_point_3d': [x, y, z],
'base_data': [<pupil datum>] # list of pupil data used to calculate gaze
}
And here is the 'base_data':
{
### pupil datum required fields
'id': 0, # eye id, 0 or 1
'topic': 'pupil.0',
'method': '3d c++',
'norm_pos': [0.5, 0.5], # norm space, [0, 1]
'diameter': 0.0, # 2D image space, unit: pixel
'timestamp': 535741.715303987, # time, unit: seconds
'confidence': 0.0, # [0, 1]
### 2D model data
# 2D ellipse of the pupil in image coordinates
'ellipse': { # image space, unit: pixel
'angle': 90.0, # unit: degrees
'center': [320.0, 240.0],
'axes': [0.0, 0.0],
},
### 3D model data
# Fixed to 1.0 in pye3d v0.0.4.
'model_confidence': 1.0,
# pupil polar coordinates on 3D eye model. The model assumes a fixed
# eye ball size. Therefore there is no `radius` key
'theta': 0,
'phi': 0,
# 3D pupil ellipse
'circle_3d': { # 3D space, unit: mm
'normal': [0.0, -0.0, 0.0],
'radius': 0.0,
'center': [0.0, -0.0, 0.0],
},
'diameter_3d': 0.0, # 3D space, unit: mm
# 3D eye ball sphere
'sphere': { # 3D space, unit: mm
'radius': 0.0,
'center': [0.0, -0.0, 0.0],
},
'projected_sphere': { # image space, unit: pixel
'angle': 90.0,
'center': [0, 0],
'axes': [0, 0],
},
}
Hi @user-1c31f4! Check out this page of our docs for a description of each variable: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter and this page for the coordinate systems: https://docs.pupil-labs.com/core/terminology/#coordinate-system (description of theta and phi under the 3d eye model header)
First, I'm curious what the field 'gaze_normal_3d' is.
Second, I am wondering how 'theta' and 'phi' for each eye are established. I.E., Would (theta, phi) = (0, 0) represent the user looking straight forward?
Thanks for any help!
I have the phone buzzing, and I get a message that says Recording Error. When I unplug and connect the wire it seems to work again, but this is pretty annoying. Also the Live Stream works 75% of the time, and does not work the rest of the time. Long time loading up the page (even with the IP address of the companion specified). Sometimes it comes up, and other times it struggles.
Responded to in the invisible channel
Hi everyone! When using Core, I find that it cannot connect to Pupil Capture. The eye images no longer show up on screen. Restarting does not fix the problem. Does anyone know what to do? Thank you!
Hi, I restarted with default settings and the problem is solved!
Hello! I have a Core headset and I'm creating a new plugin where I read the gaze position of the eyes. As I would like to use it on a Raspberry Pi 4b I don't want to show the windows of the eyes and the camera of the world. I do "python main.py capture --hide-ui" but sometimes the eye cameras don't open. If I don't use --hide-ui I can click on the buttons and activate "Detect eye 0" and "Detect eye 1" but I want to run it without showing/using any windows. Is there any way to program it so that it can open the cameras through the python file of the custom plugin? I tried to look for information on the Pupil website and this channel but I couldn't find it. thanks!
I have a similar question to this. My project is going to be embedded, so any interaction with the pupil capture software using its GUI must be avoided. About 5% of the time, the camera feeds for eye 0 and eye 1 don't open, and I have to click the buttons to show them.
Good morning, can you recommend a computer that works well according to your experience with Pupil-Labs? Thank you so much
Hi @user-4514c3! We prefer Apple computers with M1 or M2 chips (+ 16GB unified memory). These get great performance out of Core systems!
Hi @user-6eeda2! If I could ask, what's your reasoning for trying to run Core on a Raspberry Pi? It has pretty limited processing power and you'll likely not be able to do real-time gaze estimation on it.
I was thinking of using a Raspberry Pi because of its small size and weight, so I can process the gaze position while the user moves around, without needing to restrict the user's movement.
Hi guys, I'm looking for the mouse controller in this repo: https://github.com/pupil-labs/pupil. I didn't find it there; googling around, I found this repo: https://github.com/pupil-labs/pupil-helpers with the specific script mentioned previously. When running the code it shows "ModuleNotFoundError: No module named 'windows'", a package that I have already installed via conda and pip. Any ideas on how to solve it? The Python version I currently have is 3.11.3
Hi, @user-9a0705 - it looks like our mouse controller is using a now deprecated PyUserInput
package. It shouldn't be too hard to replace it though. I'll see if I can whip something up
When using the surface tracker plugin, do we need to draw surfaces in the Pupil Capture application before starting the experiment, or can we draw them in the Pupil Player application afterwards and get fixation data on the defined surfaces?
Hi, on the right is my Pupil Capture software, but the LSL relay option is not displayed. How can I solve this? Thank you
Hi, @user-4bc389 - the LSL relay for Pupil Capture has to be installed separately. You can find instructions here: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture
@user-6eeda2 this is another potential (more hacky) solution: https://discord.com/channels/285728493612957698/285728493612957698/1108325281464332338
@user-6eeda2 If you run it from source you can use the flag --hide-ui https://github.com/pupil-labs/pupil#command-line-arguments, although it would only hide the window, the gui would still be there
@user-6eeda2 my understanding is you would like a stripped-down version of Capture software to reduce computational load, is that correct?
It could be nice to have that version.
Hi, may I ask what caused the problem in the video, which resulted in the loss of pupil capture?
Hi @user-4bc389! Would you mind elaborating on what you mean by loss of Pupil Capture? Did it crash? Regarding the video, I see one issue: the py3D model of the eye seems poorly fitted, and it also looks like you froze the 3D model. I would suggest unfreezing it, moving the eye around until the model fits the eye better, and then freezing it again if you need to, for example to obtain pupillometry.
Sorry, my description may not be accurate. Thank you for your suggestion. I will give it a try
Hi @user-4bc389! It's me again. I've looked at the video again in a larger window and the model does look like it is changing (so you probably did not freeze it); apologies for the confusion, you can scratch that. Coming back to the loss of Pupil Capture, would you be referring to the pupil detection and the moments where it lands in an erroneous location? In this video, that occurs because eyelid occlusion prevents the pupil from being properly detected; in fact, if you look at the confidence value you should see a drop. You can try to minimise this by positioning the eye cameras to capture the eye from a lower angle, providing a clearer view of the pupil at instances where the eyelid partially occludes it.
Hi!
Short question. I notice that before my recording I get a good py3D model for pupil size detection if I let my participants look around. Once I have a nice py3D model, I fix the head in place and ask the participants not to move their eyes during trials. I keep close track of how the py3D model changes during my experiment. Yesterday I came across a participant who had a good py3D model, but once I ran the post-hoc pupil detection in Pupil Player, the algorithm struggled to stabilize to a fitting model.
I started recording when the head and eyes were already stationary. Next time, should I let participants look around after starting the recording so that the post-hoc pupil detection can build a fitting py3D model?
I hope this is a sufficient explanation; I might be being vague, sorry in advance.
Cheers
Hi @user-d99595! Yes, we would definitely recommend recording the model fitting process (i.e. when the wearer samples different gaze angles). We would also recommend freezing the model if you have a controlled environment. Have you already read our best practices for doing pupillometry? If not, it's worth a read: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Basically, my question is: should I start the recording before or after obtaining a good py3D model if I want to use post-hoc pupil detection? I'm primarily interested in pupillometry (fixed head, eyes don't move during trials).
When I change the color of the Vis Fixation plugin circle and dot, it remains red. Is this normal?
Hello, @user-cdcab0. I am using surface tracker plugin to capture gaze patterns using the surfaces defined in the pupil player. I am drawing multiple AOIs with the help of april tags and during this process, some AOIs are being overlapped one over the other. So, with the existing plugin, is it possible to deactivate a particular AOI and keep the other AOI active?
No, you can't deactivate a surface/AOI. You can delete one, but then you'd have to recreate it when you need it again
Hi, a quick question: I was looking into the fixation files and I saw that the confidence level is recorded for gaze data. What about fixations?
Hey @user-eb6164! Confidence for each fixation is reported in the export. It's the mean of the base gaze data used to generate a fixation. By default, values <0.6 are discarded from fixation detection.
Hi! When I set up the eye video, I found that when my eyes look down, the pupils cannot be detected well, leading to unstable gaze points that drift easily. Common methods have been tried, but the problem still occurs. Is there any other way to solve it?
Hi @user-4bc389! Thanks for sharing your video. Pupil obscuration like this can be a challenge and is often dependent on the wearer's eye appearance and/or facial characteristics. When the pupil is obscured by the upper eyelid, it's usually possible to find a more optimal camera position. When the pupil is occluded by the lower eyelid, the best course of action would be to tailor the experiment such that large downward shifts of gaze aren't necessary. E.g. keep the stimuli in a more neutral position.
Hi, I installed Pupil Capture on Ubuntu 22.04, and when I start the calibration in fullscreen mode the target display is extremely slow, making it impossible to complete the calibration. The problem does not occur outside of fullscreen mode, nor on Windows 11 (dual boot). I have not found anything about a similar problem; has someone experienced the same issue? Thanks
Hi, @user-6ce1ed - what kind of laptop is that? Depending on your hardware, Ubuntu usually requires an extra post-installation step for graphics drivers. Are your graphics drivers installed and up to date?
Thank you for your answer, it's a lenovo ideapad 5 pro, with integrated graphics of a ryzen 7 5800u cpu. I don't think I have installed any graphics driver after installing ubuntu, is it needed even without a dedicated gpu ?
Yes, some integrated graphics (or APUs as AMD likes to refer to them) may still require kernel modules/drivers. First though, can you confirm that you have pupil detection enabled and the eye cameras turned on?
Ok, I will check, thank you. Yes they are turned on, and the calibration is going smoothly when I don't run it on fullscreen mode
Oh - it's only slow like that when you're running in fullscreen mode? That's interesting and unusual - sorry, I didn't understand that from your earlier message. It's still a hint to me that it may be a driver issue.
After driver installation, it works fine, thank you !
Hi guys, I'm new to eye tracking. I'm just about to start using the Pupil Labs Core for a study that's taking place outside. Unfortunately, the sunlight is interfering so much that the pupil calibration isn't really working. Does anyone have any ideas on how I can do something about this?
The first thing to check when outside is the eye camera exposure. If the images are overexposed (washed out) that will hinder pupil detection
You can adjust this setting in the eye windows
That didn't do much. The detection rate of the pupil then jumps between 0.4 and 0.7
Here are some additional steps you can try:
1. Ask the participant to wear a cap to block direct sunlight
2. Set the ROI to only include the pupil. Note that it is important not to set it too small (watch to the end): https://drive.google.com/file/d/1tr1KQ7QFmFUZQjN9aYtSzpMcaybRnuqi/view
3. Modify the 2D detector settings: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings
4. Adjust gain, brightness, and contrast: in the eye window, click Video Source > Image Post Processing
Hi! I'm trying to control Pupil Core with TCP messages, but having trouble. Do I just need to have Pupil Capture open and then send messages to the ip:port that is set?
Hi @user-75df7c! To programmatically control Core, have a look at our Network API https://docs.pupil-labs.com/developer/core/network-api/ - it uses the ZeroMQ protocol
Thanks! I saw that, but I can't use Python. We have a custom system to run my experiment, and with it I can send TCP messages. Is that not enough?
what language do you use? Perhaps there are already some ZMQ bindings for it
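For what it's worth, the request pattern itself is tiny. A minimal Python sketch using Pupil Remote looks like this (any language with ZeroMQ bindings can do the equivalent; 127.0.0.1:50020 is just the default address, yours may differ):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address and port

pupil_remote.send_string("R")   # start a recording
print(pupil_remote.recv_string())

pupil_remote.send_string("r")   # stop the recording
print(pupil_remote.recv_string())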
Dear Team Pupil_Labs, I am a new kid on the block here and have an enquiry about what might be the best Pupil Labs system for me to purchase. I am interested in recording portable EEG/ExG [I plan to purchase a 32-channel mBrainTrain Smarting Pro system]. As I understand it, I could use LSL to integrate the eye tracking stream from one of your systems with their EEG/ExG data acquisition. I do not have the Smarting Pro system yet - I am making enquiries and trying to work out what would work best overall in terms of eye tracking systems [Core, Invisible or Neon]. My application would be indoors - people walking around an art museum looking at artworks - so there will not be a huge ballistic motion component. I would like to have some future flexibility to also do some task-activated recordings - we use PsychoPy to deliver stimuli in our lab to an existing tethered EEG system. [We perform EEG data analysis using MNE-Python. I would want to record EEG/ExG data in BDF format [24 bits].] I am not wedded to any eye tracking analysis software at this stage, but my main issue is making sure that the acquisition of neuro and eye tracking data is properly synched. I am not a beginner with EEG/ExG, but I am a newbie with eye tracking. Thanks for your consideration, Aina.
Hi @user-443c5a! That sounds really interesting. I think Neon, together with our Reference Image Mapper and our PsychoPy integration, could fit you perfectly, but why don't you write to us at sales@pupil-labs.com and request a demo so we can ensure that's the right fit?
@user-d407c1: thank you for your response. I will do exactly that. It will be good to see the system demo.
Hi, my research team and I were doing a test with Pupil Invisible, and once we made the recordings it gave an error and did not allow us to create a project. Do you know what the problem could be?
Hello. I want to do the camera intrinsics estimation using the narrow-angle lens. I do see the screen with the dots, but when I press "i" on my keyboard nothing happens. When I press the circular I button in the world window, the pattern disappears. Can you please tell me how to fix this? Many thanks
Hi, @user-7ab893 - when you're pressing i
on the keyboard you have to be sure that the main Pupil Capture window (and not the window with the circles) has keyboard focus. Can you give that a shot?
Thanks. How exactly do I do this? The cursor is already on the Pupil Capture window, but still nothing happens when I press "i". When I press the circular "I" button in the world window, the window with the circles still disappears
After showing the circles, click back on the main window in a blank space (not on the circled-I or on any other graphical element). Then try pressing i
on your keyboard.
If you're using a single monitor, you may need to resize the window so that the circles and main window both fit on the screen.
Hi all, I have a quick question regarding the recorded videos from the eye tracker, the 'world' videos that have the views of the cameras combined. We would like to sync these videos with recordings made with GoPro cameras. However, I notice that the recorded frame rate is 28.980 (according to MediaInfo). Is this something that can be adjusted? And is this a variable frame rate? Thanks!
Hi @user-23177e! We do have a plugin that enables synchronisation of Core recordings with third-party video sources, like from a GoPro. It takes a little bit of work to get set up, but it functions really well. Further details here: https://discord.com/channels/285728493612957698/446977689690177536/903634566345015377 @user-2e75f4 has very recently used this with an Azure video!
Thanks, now it is working. However, when we press i, we get the message "World capture 10 calibration patterns". This appears 10 times, but nothing happens. We don't think a pattern is being captured and detected.
The circled "I" should turn blue, and it'll have a counter next to it. The counter should go down by one each time the camera captures an image of the circles. If the number stays at 10, that means it hasn't captured the circles.
You may need to manually adjust the camera exposure down - I have found that a white background on a computer monitor creates a strong bloom effect and washes out nearby edges, making them impossible to detect.
and no data is displayed
Hi! I want to estimate the gaze position of the eyes in millimeters with respect to the scene camera using the Core glasses. I don't want to use the Pupil Capture software to obtain the gaze, since I don't want to display any GUI and I want to run the code on a Raspberry Pi. What I am doing now is accessing the Pupil Labs cameras with pyuvc and then, with pye3d, obtaining for each eye the center of the sphere, the projected sphere, phi, theta, etc. I don't know how to estimate the gaze with respect to the scene camera from the data that pye3d gives me. I tried to look through the code of pupil-labs/pupil but I get lost with all the files. How can I obtain the gaze? Thank you in advance!
Hey guys! Do you know what kind of infrared lights are on the camera for recognizing pupils? And what wavelength spectrum do they have?
Hello, I am having an issue with the export process in pupil player. I have to repeat the process many times so that I can get the surfaces folder and the files inside it. I am worried that even after getting them (after many tries) some data are not exported from surfaces. What might be the issue?
Hello @user-eb6164 , just a quick question, do you get an error when you try to export ? Something like player - [ERROR] surface_tracker.surface_tracker_offline: Marker detection not finished. No data will be exported.
?
you mean when downloading the file on the player gui?
Hi team. I just have a question about data filtering. I am trying to analyze the gaze data. For each gaze datum, I get a value, like 'on_surface_confidence', which represents the probability that the corresponding gaze exists. I wonder if there is an official threshold for this value? For example, should I only keep the data that has more than 0.8 confidence, or do I need to decide that myself? Thanks for considering my question!
Hi @user-956845! First, let me clarify that in the gaze_positions_on_surface export you get an on_surface column, which is a boolean telling you whether the gaze point is on the surface, and a confidence value of the gaze, which relates to the gaze itself and not to the surface.
There is no official threshold for this value, as it depends on the specific use case and the level of accuracy required for your analysis.
However, a common approach is to discard data points with a confidence level lower than 0.6, as it is a trade-off between the accuracy of the data and the amount of data that is discarded.
Ultimately, the threshold you choose will depend on your specific experiment and the level of accuracy required for your analysis. If you have clean data, you can afford to increase the threshold. If your data is noisy, you might need to reduce the threshold to have any data left.
I hope this helps!
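As a quick illustration, filtering an exported surface-gaze file by confidence might look like this in pandas (the file name, column names, and 0.6 threshold are assumptions; check your own export header and use case):

import pandas as pd

# hypothetical export file name; yours will include your surface's name
df = pd.read_csv("gaze_positions_on_surface_Screen.csv")

threshold = 0.6  # adjust to your data quality
# keep only gaze that is on the surface and above the confidence threshold
filtered = df[(df["on_surf"] == True) & (df["confidence"] >= threshold)]
print(f"kept {len(filtered)} of {len(df)} gaze samples")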
Hello... any tips on how to make the process of adjusting the eye tracker and calibration easier? It is taking me a long time for each participant. I am using the single marker because I need a wider area to be covered on the screens, but I always get bad calibration and sometimes it takes me more than 15 minutes to adjust it. Especially the eye camera part: I always have to adjust it several times to fit the user and always get bad confidence.
Hi @user-eb6164! It is a bit hard to provide feedback without further context. Perhaps you can share an example recording with us at data@pupil-labs.com where you also record the calibration choreography? That would help us give you more concrete help.
That said, have you tried the standard on screen calibration? What results do you get?
Are physical markers on a whiteboard better than a single one?
Thank you for your support. Could you please answer the following questions? Below are questions about Pupil Core.
Q1. I have a csv file named "blinks" in Pupil Core. As shown in the attached image (left side), line F is marked as index. What does this index mean?
Q2 In the csv named "blinks," is there an index for the blink fold point (the point where the eye is completely closed)? The blink fold point is the point where the eye is completely closed between the start and the end of a blink, as shown in the right figure in the attached image.
Q3 In the csv named "blinks", there are start_frame and end_frame. For example, which of the following is start_frame? (1) the frame at the moment the eyelid closes (i.e., the blink is recognized from the eyelid border), or (2) the frame at the moment the pupil begins to be hidden by the eyelid.
Q4 Which of the following is the blink recognition method? (1) Recognizing the blink from the eyelid border. (2) Recognizing the blink from whether the pupil is hidden or not.
Q5 Does the frame at start_time in line B of the csv mean the start_frame in line E of the csv? The two values did not exactly match when these correlations were calculated.
Hi @user-5c56d0! Pupil Core has two ways to obtain blink events, depending on whether you require them in real time or post hoc. Most of the questions you have are already answered in the documentation, which you can find here: For Capture, https://docs.pupil-labs.com/core/software/pupil-capture/#blink-detector For Player, https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
I would also recommend reading our best practices about blink detection with Core https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds
But generally speaking, the blink detector uses the confidence of the pupil detection and a filter over it (here is the post-hoc filter: https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/blink_detection.py#L360). On a blink event, there is normally a sudden drop in the confidence, followed by a quick recovery, which is used as a signal.
Thank you for your reply. I am sorry to bother you, but could you also answer the following?
What does 'the median of the blink event.' mean? Does this mean the point where the eye is completely closed between the start and the end of a blink, as shown in the attached image?
As described previously, the blink detector uses the confidence of the pupil detection; this signal is convolved with a filter and the response is named "activity".
Assuming good pupil detection, on a blink, as the eyelid closes and occludes the pupil, the confidence of the detection rapidly decreases and the "activity" signal increases.
This is where the onset threshold comes into play: it is the value the activity response has to exceed for a blink onset to be registered, i.e. roughly the point at which the confidence in the pupil detection starts decreasing. You can define that threshold in the settings. Then, as the confidence in the pupil detection starts increasing again, the activity signal falls, and there you have the offset threshold, which determines how far it has to fall to be considered the end of a blink.
Although the activity response is directly linked to the pupil detection confidence and highly correlated with the openness of the eyelid, this is not a one-to-one relationship, and therefore there is no fold point. You can greatly increase the onset value and lower the offset value to filter out blink events where the eyelid may not have fully closed, but again, nothing guarantees that the confidence drop was caused by the eyelid being fully closed.
The index is the median of the world frame indexes that occur between the onset event and the offset event; in other words, between the start of the blink event, as per the rise in the activity signal, and its end, when the signal falls below the offset threshold.
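To make that concrete, here is a simplified NumPy sketch of the idea (an illustration only, not the actual Pupil Labs filter; the kernel shape and thresholds are assumptions):

import numpy as np

def detect_blinks(confidence, onset_thresh=0.5, offset_thresh=0.5, half_width=5):
    # "activity" rises when confidence drops quickly (onset) and goes negative
    # when confidence recovers quickly (offset)
    kernel = np.concatenate([-np.ones(half_width), np.ones(half_width)])
    activity = np.convolve(confidence, kernel, mode="same") / half_width

    blinks, start = [], None
    for i, a in enumerate(activity):
        if start is None and a > onset_thresh:           # confidence dropping fast
            start = i
        elif start is not None and a < -offset_thresh:   # confidence recovering
            blinks.append((start, i))
            start = None
    return blinks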
Thank you very much [email removed]
I have tried the standard screen calibration; it is better than the single physical marker. I discovered that there is an issue with the eye tracker, so I will send an email to the team about that, because I prefer the single marker since I have a large area to cover. I have another question: in the gaze output, how can we differentiate between unique gaze samples? Is it the world_index?
Hello, I broke the connector on my pupil core headset with the eye0 camera. I would like to resolder it but I could not find the reference of the connector. Does someone know it ?
Hey @user-91a92d! Please reach out to info@pupil-labs.com and a member of the hardware team will help you out!
Hey @user-eb6164! In gaze_positions.csv, each row represents a unique gaze datum. The world_index tracks the scene cam sampling rate, so you'll find multiple gaze datums per world index.
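For example, with pandas you can see this directly (a minimal sketch assuming the standard gaze_positions.csv export from Pupil Player):

import pandas as pd

df = pd.read_csv("gaze_positions.csv")  # standard Raw Data Exporter output

print("total gaze datums:", len(df))
# several gaze datums typically share the same world (scene camera) frame index
print(df.groupby("world_index").size().describe())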
Oh thank you, that clears things up. So for each surface I just have to count the number of rows, and that will give me the number of gaze samples.
On the pupil player is there a way to chop up the recording/trim the recording down?
Hi @user-2e75f4 ! Sure! You can click on the general settings (the icon that looks like a gear wheel) and there you can select the relative time range or frame index range to export
Thanks!
Hi there! I am starting a new project using Pupil Core and am excited to learn more about this eye tracker! I have been playing with pupil player and the sample downloadable data, and had three quick questions: 1) what are the units for gaze_point_3d_x? 2) should gaze_point_3d_x basically be the same as norm_pos_x (both from gaze_positions.csv) except the norm is scaled? 3) if I look at something in the real physical world, keep my eyes there, and then pitch my head down a little bit, what would that look like in the gaze_positions.csv file? Maybe another way to put it is if we get eyes-in-head or eyes-in-real-world at the end? Thank you very very very much for your time!!
Hi, @user-3874a1 - I think you'll find answers to your questions (and more!) in the official documentation. Here are a couple of links that I recommend reviewing: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter https://docs.pupil-labs.com/core/terminology/#coordinate-system
To answer your question about fixating on an object while pitching your head up and down, you would see gaze Y coordinates change
Hi! I am working on a computer vision project with the Pupil Core. For my work, I am trying to record a few clips with the world camera (for analysis), but for some reason the recording is being saved at a lower resolution than what I have set in the settings. Is this a common problem? Can anyone who has faced this issue please help me?
Hi, @user-6bf52d - that's unusual! What resolution are you trying to record at and what resolution are you seeing? Have you tried more than one resolution? It might also help to know what version of the software you're using and on what operating system
Hello! I have a problem importing files from Pupil Labs to iMotions. Previously I did it by downloading the Pupil Player Data, but it is no longer possible to download it from Pupil Cloud. Can you tell me how to import files into iMotions now? There have been some changes in the software and the help center hasn't been updated yet...
Hey @user-e66c8c! I understand that you have the Pupil Invisible glasses, is that right?
exactly
Thanks for clarifying that - let's move this conversation to the invisible channel. I've replied to your question: https://discord.com/channels/285728493612957698/633564003846717444/1121411569608298547
sure, thanks
Dear sir, the Pupil Core eye camera is currently at 120 Hz; can you tell me how to make it 200 Hz?
Hi, I want to calculate the horizontal pupil position/angle relative to the head/world camera origin, in degrees. I'm streaming my data through LSL, which records norm_pos_x/y and gaze_point_3d_x/y/z. At the moment I'm calculating the horizontal angle (psi) in spherical coordinates as in this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb (psi = np.arctan2(gaze_point_3d_z, gaze_point_3d_x)). I was now wondering whether I should use norm_pos_x/y instead, or if my calculation is right?
I think what you have is correct. You wouldn't be able to calculate psi from x/y image coordinates because you have no depth component there.
Also, if you do end up working with those normalized coordinates for something else, keep in mind that they are normalized to the scene image resolution. Because the image width and height are different, the x and y normalized values have different scales
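For reference, a minimal sketch of that angle computation in degrees, following the tutorial's spherical-coordinate convention (this assumes gaze_point_3d is expressed in scene camera coordinates):

import numpy as np

def gaze_angles_deg(x, y, z):
    # spherical coordinates as in tutorial 05: psi is the horizontal angle
    r = np.sqrt(x**2 + y**2 + z**2)
    psi = np.degrees(np.arctan2(z, x))    # horizontal angle
    theta = np.degrees(np.arccos(y / r))  # polar angle measured from the +y axis
    return psi, theta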
great thank you!
Dear sir, the Pupil Core eye camera is currently at 120 Hz; can you tell me how to make it 200 Hz?
Hi @user-5c56d0 ! You should be able to change this setting in the eye camera window under video source
Thank you very much [email removed] I got it.
Is it possible to purchase an HMD add-on camera that supports FHD resolution? I heard that it was available for sale a few years ago.
Hi,
For our project we want to know when someone is looking at a particular surface based on a person's gaze data using the surface tracker feature from PupilLabs Core. However, the timestamps do not seem to match with what we expected. The timestamps of gaze.3d.0._on_surface
are lagging behind the timestamp of the world video frame. Does anyone know if it is supposed to be like this, if not is there a quick way to fix this problem?
The data was logged using a hdf5 logger.
Hey folks, I'm running pupil capture from source on Linux. The software opens correctly and I can see all the camera feeds. But when I try to perform calibration, the calibration screen freezes, and I get the error:
(venv) [email removed] python3 main.py capture
[14:08:37] WARNING eye0 - uvc: Could not set Value. 'Backlight Compensation'. uvc_backend.py:424
WARNING eye1 - uvc: Could not set Value. 'Backlight Compensation'. uvc_backend.py:424
[14:09:31] INFO world - calibration_choreography.base_plugin: Starting Calibration base_plugin.py:537
open(): No such file or directory
[14:09:34] INFO world - calibration_choreography.base_plugin: Stopping Calibration base_plugin.py:574
open(): No such file or directory
ERROR world - gaze_mapping.gazer_base: Calibration Failed! gazer_base.py:203
ERROR world - gaze_mapping.gazer_base: Not sufficient reference data available. gazer_base.py:297
WARNING world - plugin: Plugin Gazer3D failed to initialize
@user-cdcab0 , do you or any of the other pupil labs members have any ideas to help me start fixing this?
I don't think the problem is 'Gazer3D' failing to initialize. I'm pretty sure it's whatever's causing the "open(): no such file or directory" message. I looked at the source code in base_plugin.py and found no calls to an 'open()' method, so I'm quite confused.
Any help would be so appreciated! In return, I can keep you guys updated on running the pupil software on a single board computer π
Hello, can we get the gaze in pixels of the world camera in real time via the network API? Also, what are the 0 and 1 limits of the normalized gaze values? Thanks
The normalized gaze value is mapping your field of view to a 2d plane. 0 would represent looking all the way to the left, and 1 would represent looking all the way to the right. So, 0.5 would represent looking straight forward. The same applies for the vertical axis (looking upwards or downwards).
I'm not sure about getting the gaze in pixels of the world camera. I'm pretty sure the pink dot on the screen is actually using the norm_pos parameter -- which then I suppose could be mapped to the size of the world camera
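If it helps, mapping norm_pos to scene camera pixels is just a scaling. A small sketch, assuming the docs' convention that norm space has its origin at the bottom-left while image pixels start at the top-left (1280x720 is only an example resolution):

def norm_to_pixels(norm_x, norm_y, width=1280, height=720):
    # norm space: origin bottom-left, y up; image space: origin top-left, y down
    px = norm_x * width
    py = (1.0 - norm_y) * height
    return px, py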
Hi Kin, thanks for the reply. The normalized values change between 0.4 and 0.6 even if the pink circle moves from one side of the image to the other; do I need to scale the values?
That's interesting. Maybe my understanding of the normalized values is incorrect. You are looking at the norm_pos value, correct?
Exactly,
Here is the message,
The plot in this video is norm_pos. I am re-normalizing to -1,1 on both axes, but I am getting values between the full range of 0 and 1 before normalizing
I'm not sure why your values only range between 0.4 and 0.6
I am using norm_pos of the gaze data after 3D calibration; would it be helpful to try 2D calibration?
Also, what are the units of the gaze_point_3d? centimeters?
I used 3d calibration too. Not sure about the units
We are using the physical marker calibration (not the screen),
It's probably not actually frozen. Do you see one of the calibration markers on the screen? If you hit escape
, does it close the calibration?
When you start calibration, the program is trying to collect data from both the world scene camera (specifically, the location of the calibration marker in the image) and from pupil detection. Those two pieces of the puzzle are both necessary together - if either of them are missing, then the calibration will not move forward (which may have give you the impression that it's frozen).
So your plugin: Plugin Gazer3D failed to initialize
probably actually is the culprit here
The first calibration marker appears on the screen, but it is greyed out (in the first frame of the 'fade in' animation). If I recall correctly, hitting escape does not do anything, but hitting "alt+tab" to navigate back to the main pupil capture window does close the calibration.
If plugin: Plugin Gazer3D failed to initialize
is the culprit, do you have any tips on where I should start in order to fix this?
@user-cdcab0 I've been looking through the source code to try to find where the error is coming from, and I can't find where the plugin: Plugin Gazer3D failed to initialize logging statement is coming from. I also looked for FileNotFound exceptions that could be logging the 'open(): No such file or directory' messages, but I couldn't find any that would create that message. I also looked for general except exception as e messages, but none seemed relevant.
I hadn't previously noticed the open(): No such file or directory
messages in my log, but I do see them now that you pointed them out (even when calibration is working). I tracked it down to https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/audio/__init__.py#L101
It's just trying to play a little notification sound, but the sound file probably doesn't exist on your PC. It's unlikely to be the cause of what you're experiencing
I can't even find the words 'failed to initialize' anywhere in the repo...
That error message is generated here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/plugin.py#L436
Let me try to break my build in a similar way so I can help you figure out the best way to troubleshoot
I've also searched all of the issues and discussions for the same error message, and none of them are related to my issue of the calibration screen freezing.
Thanks again for any help, by the way!
This https://discord.com/channels/285728493612957698/285728493612957698/1100792975132471326 and this https://discord.com/channels/285728493612957698/285728493612957698/1098517309552869386 appear to be the same issue. I read through the responses to those. All three camera feeds are working and so is pupil detection and the 3d eye model.
@user-1c31f4 - let's start with the basics.
1) Have you made any changes to the source (git status
, git diff
, etc to check)?
2) Have you tried clearing your capture_settings
folder and running with all the defaults? Note that when you're running from source, settings are loaded from the capture_settings
folder in the source tree
I'm going to take a look and make sure I haven't made any changes when I'm back at my lab. But, this evening, I found something interesting: the issue only occurs when calibration is run in full screen mode. And, when holding down alt tab (cycling through all the open windows), the program un-freezes. The same problem occurs with the rate at which the pupil glasses sends packets via the network api -- everything slows to a halt unless I hold alt tab. Pretty bewildering. Must be something to do with the focused window -- possibly a performance setting that throttles background windows. Or, maybe when the window is selected with alt-tab, a refresh is forced -- so by holding alt tab, a refresh is forced at a rapid rate (making it appear to run smoothly).
"the issue only occurs when calibration is run in full screen mode"
Another (linux) user recently had this issue - they just needed to install graphics drivers
I'm going to dig into it a bit more tomorrow, and now I have a good starting point. Thank you so much for your help, @user-cdcab0! It's reassuring that I can move on from the 'no such file' message and the 'plugin3d failed to initialize' message. Whatever is going on is definitely tied to the 'focused window' thing I mentioned in my previous message.
I should mention -- I am running the pupil software on a single board computer, the LattePanda 3 Delta. It runs fantastically smoothly (except for this calibration issue), much faster than my 2020 MacBook Air. This device is likely the best solution for users attempting to use a Raspberry Pi, as the LattePanda 3 Delta is only marginally larger. It's also very easy to get the pupil software set up on it because it's x86, not ARM, so running from source is not even necessary, and all dependencies worked exactly as expected. The real-time pupil detection and gaze calculations work perfectly. (Just putting this here for others' info.)
I also restarted with default settings, also no change
Hm. Okay, it'd be nice to know if the release (non-source) version has the same behavior? Same for the other calibration methods. What about other fullscreen OpenGL apps - do they run properly?
I think the next step is probably investigating the calibration loop, ideally with an interactive debugger. I'd start by stepping through the recent_events
and gl_display
functions in screen_marker_plugin.py
to make sure those are both executing as expected
Made sure I had no changes with git, no change
Hello! I've just installed the Pupil Capture software and plugged in the device for the first time. The application has loaded but no video is coming from the device, just a blank screen. Basically, I'm stuck at step 2 of the "getting started" steps. Any advice would be appreciated!
Hi Sam, what operating system are you using? On my Mac, I had to run pupil capture with administrator privileges for the camera feeds to show.
Hi, @user-c34503 - welcome to the community! @user-1c31f4 is right - if you're running on Mac you will need to run with elevated privileges. Some info and instructions here: https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer
Hello! Would it be possible to record the eye images with a constant frame rate through Pupil Capture?
For example by using a Request-Response method instead of the current method. If this would be possible how high would the maximum frame rate be for 192x192 images? (Currently 200fps for Vive pro add-on)
As far as I am aware, there isn't a way to lock the frame rate of the camera feeds. You may be able to configure your code with the networking API to receive messages at a specific rate, although I am not sure if the images can be transmitted with the network API (maybe @user-cdcab0 can speak on that):
import zmq
import msgpack
import time

ctx = zmq.Context()
ip = "127.0.0.1"     # address of the machine running Pupil Capture
sub_port = "50021"   # request the real SUB port from Pupil Remote ("SUB_PORT")
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('gaze.')  # receive all gaze messages

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    print(f"{topic}: {message}")
    time.sleep(16.66e-3)  # delay for ~16.7 ms for 60 fps
Although that feels a bit hacky. I also don't think you can configure the ctx.socket object to receive packets at a different rate.
What is the reason for this, anyway?
Hi, @user-40d3d5 - can you clarify what you mean by "the current method"? When you're using Pupil Capture's recording functionality, you should be seeing a constant frame rate (which you can configure under the Video Source options for each camera) in the generated video files. Your framerate options are limited to what's supported by the camera at the requested resolution. 200 fps is the max for our eye cameras at their lowest resolution.
With regards to the request-response method, what do you mean exactly? You can use the network API to send commands to pupil capture - such as telling it to record (see https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py for an example). That's effectively the same as just pushing the record button in the Capture app yourself. Either way, you should have a framerate equal to whatever you have configured.
You can also use the network API to receive video frames (see https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py for an example), but if your intention is just to write those frames to a file then you'd just be adding an extra layer of overhead. Note that messages must be read by the subscriber as fast as they are generated - otherwise some messages will be dropped. You may be interested in some of the information here: https://docs.pupil-labs.com/developer/core/network-api/#delivery-guarantees-pub-sub
See https://docs.pupil-labs.com/developer/core/network-api/ if you are not yet familiar with the network API
I don't yet have an intimate understanding of everything that happens on that event loop. It's possible that even if the uvc video stream callback has more consistent timings that other activity in the loop might negate the effect. I think that's unlikely, but not impossible, and it would be disappointing to put in that work only to find no benefit
Hi,
Please may I know if it's possible to trim a recording? I accidentally let Pupil Core continue recording for an additional ~10 minutes and I'd like to trim the recording before analysis. The total duration of the recording is now 30 mins which is too long and would probably crash Pupil Player
thanks!
Hi @user-89d824! Check out our previous message about how to trim recordings: https://discord.com/channels/285728493612957698/285728493612957698/1121020823910764554
30 min should not be an issue for Pupil Player
@user-d99595 Replying to your message from the Neon channel https://discord.com/channels/285728493612957698/1047111711230009405/1123913766594170941 I'd suggest using the Network API (https://docs.pupil-labs.com/developer/core/network-api/#network-api) to programmatically send an annotation each time the participant starts a new trial in PsychoPy. We also offer a PsychoPy integration for Pupil Core (https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html)
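As an illustration of that suggestion, a minimal annotation-sending sketch could look roughly like this (modelled on the pupil-helpers remote annotations example; the address and label are placeholders, and it assumes the Annotation plugin is enabled in Capture):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote (default port)

# ask Capture for its PUB port and connect a PUB socket to send annotations
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscriber side a moment to connect

# get the current Pupil time so the annotation lands on Capture's clock
pupil_remote.send_string("t")
capture_time = float(pupil_remote.recv_string())

annotation = {
    "topic": "annotation",
    "label": "trial_1_start",  # hypothetical label, e.g. sent at each trial start
    "timestamp": capture_time,
    "duration": 0.0,
}
pub.send_multipart([annotation["topic"].encode(), msgpack.packb(annotation, use_bin_type=True)])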
Thanks @user-c2d375 I'll have a look.
So far I've only run Pupil Core Capture from source (v3.6.6), naively assuming it would be more performant with Python 3.11 etc. It usually occupies around 65-75% CPU on an 8-core/16-HT Core i7 Dell workstation (each eye is ~35% and world is ~2%). Much to my surprise, when I downloaded the old (2021) precompiled binary v3.5, it occupies around 16% CPU (~7% for each eye capture and 2% for world capture). This is on Ubuntu 22.04. I tried to make sure the same plugins and camera FPS were used by both (I only have 2 default plugins enabled). Is this expected? EDIT: CPU was measured using the system monitor, which averages across all cores (100% = all cores at 100%); the Pupil Capture per-process CPU values (around 600 for source, per eye) are much higher as they are per core?