Hi! I wonder if it is possible to get the pixel size of the eye camera in Pupil Core. I have got the intrinsic matrix of the eye camera and I want to get the pixel size to calculate the real focal length. Many thanks
Hi @user-b02f36! Please reach out to info@pupil-labs.com in this regard.
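In the meantime, a note on the relationship itself: once you know the physical pixel pitch of the sensor, the focal length in millimeters is simply the focal length in pixels (taken from the intrinsic matrix) multiplied by the pixel pitch. Here is a minimal sketch of that conversion; the intrinsic values and the pixel pitch below are placeholders, not the actual eye camera specs, so replace them with your own numbers and the value you get from Pupil Labs.

import numpy as np

# Placeholder intrinsic matrix of the eye camera (use your own calibration values).
K = np.array([[283.0, 0.0, 96.0],
              [0.0, 283.0, 96.0],
              [0.0, 0.0, 1.0]])

pixel_pitch_mm = 0.003  # assumed sensor pixel size in mm/px; confirm with Pupil Labs

fx_px, fy_px = K[0, 0], K[1, 1]  # focal length in pixels
fx_mm, fy_mm = fx_px * pixel_pitch_mm, fy_px * pixel_pitch_mm
print(f"focal length ~ {fx_mm:.2f} mm x {fy_mm:.2f} mm")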
Hi everyone, one of my students upgraded her computer. She has a MacBook (MacBook Air 13, M1 chip, 2020, 8 GB, macOS Sequoia 15.0.1), and Capture doesn't work anymore. That is, a message appears saying that the cameras are detected, but they are actually disconnected, even though they appear as connected among the USB ports. How can this be fixed?
Hi @user-a4aa71 ! Could she try the following troubleshooting steps? If those don't resolve the issue, could you please open a troubleshooting ticket with additional details, such as the version of Pupil Capture being used?
it works!! Perfect! Thank you so much!
Hi I apologize if this question has already been asked, but I couldn't find it when I searched.
When collecting data with Pupil Core, I take care to detect the pupil, make the necessary adjustments, ask the subject not to make sudden movements with their head, and perform a correct calibration, with good levels of confidence. When analyzing the data in Pupil Player, I noticed that, in some cases, the participant's fixations were not detected (unfortunately, I didn't notice this at the time of recording). Is this normal for some participants? I noticed that the number of fixations was very low and it is possible to see that fixations were not detected during the video (in Pupil Player). Could you explain what might have happened or something I can do? Thank you very much!
Hi @user-7b0c86 , the default parameters of the fixation detector have been chosen to be useful in many scenarios, but you might have an experiment that requires some tweaking of the parameters. Please see here and here for more information.
Hello, Please in my 13 minute study (see attached screenshot), I had 2 calibration points and 2 validation points (So calibration then validation, then calibration, then validation at the end). The part of my data I really care about is between the 2nd calibration and validation (at the end of data) points. If I leave Pupil and Gaze data set to "from recording", how does the exported csv manage these calibrations? Does it simply use the most recent calibration to calculate gaze data until when a new calibration arises and then it uses that one moving forward, or does it somehow mesh and use both calibrations for the entire data span, or does it calculate gaze some other way?
Additionally, I am working with a surface. Please how would this surface gaze data interact with the 2 calibration sessions?
Finally, is there an ideal configuration for extracting post-hoc fixations optimally? or should I just stick to the default settings for the fixation detector plugin in pupil player?
May I ask for more clarification about what you mean by "how does the surface data interact with the 2 calibration sessions"? The surface mapping procedure will still remain the same:
Regarding fixation detection, the "ideal" choice depends on your experimental setup and configuration. The default parameters of the fixation detector are set to be useful in many scenarios. If you find that those parameters are not fitting your use case (e.g., fixations are clearly not being detected), then you can modify the parameters as you see fit. You may find this section of the documentation useful.
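If it helps, here is a minimal sketch for sanity-checking an export: it loads the exported fixations file and looks at the count and duration distribution, which can hint at whether the dispersion/duration thresholds need adjusting for your setup. The file path is a placeholder and the column names follow a typical Pupil Player export, so verify them against your own data.

import pandas as pd

# Placeholder path to a Pupil Player export; adjust to your recording.
fixations = pd.read_csv("exports/000/fixations.csv")

print(f"{len(fixations)} fixations detected")
# 'duration' is reported in milliseconds in typical exports; check your columns.
print(fixations["duration"].describe())

# Very few fixations combined with many near-threshold durations can indicate
# that the detector parameters need tweaking for your experiment.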
Hi @user-b31f13 , if you have not seen it already, I recommend checking out the Post-hoc Calibration documentation.
In your case, try the following to use only the 2nd calibration:
- Open the Post-hoc Gaze Calibration menu
- Select Post-hoc Gaze Calibration as Data Source
- Expand the Calibration section
- Click New Calibration and give it a name
- Use Set from Trim Marks (next to Collect References)
- Recalculate

To now use that calibration for gaze mapping, expand the Gaze Mapper menu and:
- Click New Gaze Mapper and give it a name
- Set Calibration to the post-hoc one you just made
- Recalculate; by default it will then map gaze throughout the whole recording using the chosen calibration. You can limit the mapping to just the second half of your recording, again using Set from Trim Marks.

You can now export the data and note that it will be the data from within the Trim Mark window. Note also that the exported CSV files do not exactly "manage" the calibrations. Rather, Pupil Player manages the calibrations and, after gaze mapping is complete, the exported gaze data will be the result of that mapping.
Also, I noticed that although there are two calibrations and two validations in my data, for some reason it shows that there are 3 calibrations (which is not the case) (please ignore the "Pre-trial calibration", as I manually created that). Please can someone help me understand why this would happen?
Can I use an ESP32 camera module to capture the eye? @user-d407c1
Hi @user-ffeac9! We don't know about the ESP32 camera, specifically. But note that the Capture video backend only supports cameras that are UVC compliant. More details in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Is it possible to track eye movements using a video, by inputting it?
Hi @user-ffeac9 , I'll briefly hop in for Neil: yes, so long as your camera that provides the videos is UVC compatible.
Is there a guideline provided?
Yes, please see the message linked above: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 In this message it says "For other cameras, you could theretically write your own video backend, but that might be very time consuming." Is there anybody who has done it?
any help please?
Hi @user-ffeac9 , it is possible that your camera is not UVC compliant. You could try the steps in this message and see if that resolves it.
@user-f43a29 @nmt Please could someone have a look at my messages above https://discord.com/channels/285728493612957698/285728493612957698/1304594934053212162
Hi, is there a way to remotely tell Pupil Capture to add a surface in the surface plugin, as with sending a request to calibrate through Pupil Remote?
Hi @user-11b3f8! This isn't possible via Pupil remote. The surfaces need to be defined in the Capture software itself. May I ask why you would want to do so?
We are using the Pupil Labs Core for real-time eye tracking to operate an interface. We use the Surface Tracker plugin to get the gaze within the surface, which is our interface. But as we are running it through Docker, Pupil Capture starts anew, so without any extra plugins initiated. So I was just curious to see if there was a way to initialize a plugin and add surfaces without having to set up the program
so by requesting it through Pupil Remote, the setup would be a part of starting up our program
Interesting! Okay, maybe I'll zoom out a little. Technically, what you're asking for is actually possible. It would take a bit of work, but I think it's feasible (no guarantees). Let me try to point you in the right direction.
Firstly, the Capture world settings are stored in the pupil_capture_settings directory on desktop machines. Is there a way to make this persistent in your setup?
yeah we should be able to copy an existing settings file, and place it into the program files on startup
With the right settings, the surface tracker plugin should load automatically.
Also note that defined surfaces are saved in a file called surface_definitions in the pupil_capture_settings directory.
If you restart Capture and the Surface Tracker plugin loads, your surface definitions from previous sessions will be loaded.
I would give that a shot and see if you can make it work
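In case it's useful, here is a minimal sketch of the kind of startup step you could run before launching Capture in the container. The source path is a placeholder for wherever you keep the settings saved from a configured session, and the file names are what a typical pupil_capture_settings folder contains; double-check them against your own folder, as they can vary between versions.

import shutil
from pathlib import Path

# Placeholder location of the settings saved from an already-configured session.
saved_settings = Path("/app/config/pupil_capture_settings")

# Default location Pupil Capture reads from (home directory of the user running
# Capture); adjust if your image uses a different home directory.
target = Path.home() / "pupil_capture_settings"
target.mkdir(parents=True, exist_ok=True)

# Copy the world settings (so the Surface Tracker plugin loads) and the surface
# definitions from previous sessions. File names may differ in newer versions.
for name in ("user_settings_world", "surface_definitions"):
    src = saved_settings / name
    if src.exists():
        shutil.copy(src, target / name)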
we've found the files, we'll see if we can make that work, thank you!
Hello all, a question about how the "diameter_3d" is calculated using diameters from eye 1 and eye 0: if one eye has an outlier reading or missing data, how is it handled in the calculation?
Hello and thanks again for your previous help! As I mentioned, we are planning to conduct research sessions in dyads (parent and child) and we were wondering if you could recommend an optimal screen size for these studies. Given that weβll have both adults and kids interacting, any guidance on screen dimensions that would work well for both would be really appreciated. Thanks so much in advance! Best, Asia
Hi @user-cc6071 , you are welcome!
Without knowing the specific parameters of your environment, as well as the overall experimental design, I don't want to make an assumption and risk giving you a wrong answer for the optimal screen size.
May I ask if you will be using AprilTags to map gaze to the screen?
We maintain a list of publications that used our eyetrackers, where you can see how other researchers who used Pupil Core handled such questions.
Hi team, I would like to know if there is a way to compute the heading and pitch of the gaze without the head pose information? I have exported the gaze data, but there was no head pose information being recorded. Thanks
Hi @user-0f7e53 , you were using Pupil Core, correct? Did you potentially have an alternate way to determine the orientation/pose of the wearer's head?
Do I understand correctly that you were using the Head Pose Tracker?
Hi @user-98789c , we do not reject said data. We provide you with all estimates, as computed by the underlying pye3d model.
Since outlier definition can be dependent on research context, we leave that decision to the user, rather than reduce the data you get. This way, you get as much flexibility and data as possible.
Could you clarify what you mean by "missing data" in this context?
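In the meantime, as an illustration of the kind of filtering that is left to the user, here is a minimal sketch that drops low-confidence pupil samples per eye before aggregating diameters. The file path, confidence threshold, and column names follow a typical pupil_positions.csv export, but treat them as assumptions to verify against your own data.

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # placeholder path

# Keep only reasonably confident 3D samples; 0.6 is a common starting point,
# not an official cutoff.
good = pupil[(pupil["confidence"] >= 0.6) & (pupil["method"].str.contains("3d"))]

# Per-eye mean diameter; if one eye has no valid samples in a given window,
# you can fall back to the other eye instead of averaging in unreliable data.
per_eye = good.groupby("eye_id")["diameter_3d"].mean()
print(per_eye)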
Thank you @user-f43a29, my question is now answered. It was actually a question from reviewer number 2! and I found the function I used to handle missing data from only one eye.
Hi Rob, yes I was using Pupil Core and I did not have any other devices that recorded the orientation of the wearer's head. I also could not use the Head Pose Tracker, as I did not apply any markers because they would not have been easily detectable in a dark environment.
Then, it is indeed tricky, but not strictly impossible. One potential option, if you had the world camera enabled and there was enough structure and light in your environment (and the wearer moved around enough), is to try an open-source Structure from Motion algorithm, but I cannot make guarantees about success or the quality of the final result.
Otherwise, members of the community here might have additional tips π
Okay, hopefully I can get other tips, as we do have the gaze data and the world camera recording as well.
Hi @user-f43a29 as a follow-up to this question, I would like to get your opinion on whether it is possible to estimate the heading and pitch using the gaze_normal and gaze_point_3d values. I would just like to verify whether the steps below from GenAI make sense:
I'm sure this question gets asked all the time, but is the core compatible with contact lenses? I know the neon is. After searching on your website and on this discord chat, I couldn't find the answer for the core specifically. Thanks!
Hi @user-d34f33! For the majority of lenses, yes it works. The lenses do not affect pupil detection. There may be some edge cases, as reported by @user-0f7e53. In such cases, you can try to tweak the 2D detector settings in the Capture software.
This is just from my experience... I think it depends. I had one participant who I think was wearing some kind of thick lenses and I could not detect his pupil. There were others who also wore lenses and it was still able to detect the pupil. I am not sure what the difference was with this particular person's lenses, though.
Hi @user-0f7e53 , I now think that I may have misunderstood your original question. I had the impression that you wanted gaze in a world-based coordinate system. For example, a system where, if you look towards magnetic North, gaze azimuth would be 0 degrees, whereas when looking South, it would be -180 degrees.
However, this latest message suggests rather that you want to know the angular deviation from neutral gaze, regardless of head pose? Do you want, in a sense, data related to the rotational orientation of the eyes within the socket?
Yes, that could work as well, since we simply do not have the head orientation due to the missing head pose data, but getting the eye orientation may work for our scenario. With the approach I showed, I am getting a plot as shown in the image, which is not as expected: the person is expected to look directly ahead most of the time, so most of the points should be centred around 0. So there seems to be a need to find an offset to adjust for, but if there is an easier way to do this, that would be useful as well. I also found this link: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb where the spherical coordinates of the gaze are calculated and the gaze points are visualized, but I'm not sure if this could work.
Hi @user-0f7e53 , so, the gaze ray provided by Pupil Core originates at the center of the world camera, which is offset and can be essentially arbitrarily oriented. The gaze ray is represented in the world camera's 3D space, so it indicates the direction from that camera to the point that a wearer is looking at. This is what gaze_point_3d_x/y/z intend to specify.
The transformation, as done in that tutorial, is fine for determining gaze velocity, as an offset and arbitrary rotation do not change such velocities. However, that transformation alone does not provide a "gaze heading" such that an azimuth and elevation of 0 always corresponds to looking straight ahead (i.e., it does not account for the arbitrary rotation of the world camera)
Rather, you can try this: have the wearer fixate a point straight ahead to establish a neutral gaze reference, and then compute the offset such that this neutral gaze corresponds to elevation = azimuth = 0.
Also, please note that the tutorial uses "traditional" spherical coordinates, where 0° azimuth (i.e., theta) and 90° elevation (i.e., psi) corresponds to "forward" in the world camera coordinate system (not the same as "gaze straight ahead"). An easy validation is to make a recording where you first look left, then up, then right, then down, and you can better understand the relationship to the spherical coordinates.
Also, did you apply the negative_z_values correction?
With respect to the calculations in your earlier post, gaze_point_3d_x/y/z and eye_center0/1_3d_x/y/z are not compatible in that way, so they cannot be directly combined like that to derive an azimuth and elevation for gaze.
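To make the offset idea concrete, here is a minimal sketch (not a validated pipeline): it converts gaze_point_3d_x/y/z to azimuth/elevation in the world camera's coordinate system and then subtracts the median angles from a segment where the wearer was known to look straight ahead, so that neutral gaze ends up near 0°/0°. The file path, the neutral segment, and the coordinate conventions (camera x right, y down, z forward) are assumptions to check against your recording, and subtracting angular offsets only approximates the camera's arbitrary rotation; it works best when the camera tilt is small.

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path

x = gaze["gaze_point_3d_x"].to_numpy()
y = gaze["gaze_point_3d_y"].to_numpy()
z = gaze["gaze_point_3d_z"].to_numpy()
# You may also want to handle samples with negative z, as mentioned above.

# Angles in the world camera's coordinate system (assumed: x right, y down, z forward).
azimuth = np.degrees(np.arctan2(x, z))
elevation = np.degrees(np.arctan2(-y, np.hypot(x, z)))

# Segment where the wearer was instructed to look straight ahead (placeholder
# indices); its median defines the neutral reference.
neutral = slice(0, 500)
azimuth -= np.median(azimuth[neutral])
elevation -= np.median(elevation[neutral])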
Please let me know if this cleared things up
Hi Rob, thanks for the detailed explanation. I think it is a great idea to first get the neutral gaze point, which I can do myself, and then calculate the offset based on the neutral points. I will also try to make a recording where I look in different directions to get a sense of the coordinates for each direction. I will try this method and see if it works. I have sent an email, by the way, with an example, and I will reply to the email when I have tried this method.
I just got a secondhand pair of Pupil Core glasses and am planning to use them for basketball performance analysis, focusing mainly on free throws. I'd also like to explore their use for more dynamic movements. Can these glasses operate without being continuously connected to a laptop, or are there alternative setups for more freedom of movement?
Hi @user-dcf968 , great to hear! If you want to be more mobile with your Pupil Core, then you could try connecting it to a LattePanda (for SBCs with Pupil Core, you want to consider x86-64 architectures). So, essentially, yes, it does need to be connected to a computer.
Since you plan to use it in an athletic setting, then please note that Pupil Core's default eyetracking algorithms can tolerate a certain amount of headset slippage, but not much. If there will be any rapid head motion or shock that causes enough shift or movement/jitter of the headset, then it is advised to pause, reset the device, and re-calibrate. Running some validations can help determine how much is acceptable in your case, if you experience any slippage to begin with. You can learn more about that and other Best Practices here, and let us know if you have any other questions.
If you ever plan to upgrade your eyetracking equipment, then I recommend looking into our latest eyetracker, Neon, which does not have this issue. It is calibration free and slippage resistant.
Iβm planning to do coding in Python on a MacBook Pro with an M1 chip. Could you recommend any useful websites for API references? Also, could you explain any precautions I should take when using an M1-powered computer?
I am preparing for development by following the instructions on https://github.com/pupil-labs/pupil/blob/master/README.md#installing-dependencies-and-code. However, I am having trouble downloading the Pupil Labs libraries listed in the requirements.txt file on a MacBook with an M1 chip. The error "error: subprocess-exited-with-error" occurs.
Hi @user-76ebbf , can I ask what you specifically plan to program? Then, I am better positioned to point you in the right direction.
Hi @user-f43a29 , thank you for your response. I haven't decided on the specific program yet, but eventually, I plan to conduct psychological experiments using Python and PsychoPy. For now, I aim to perform operations such as calibration and recording using programs created in Python.
At this stage, I have created a virtual environment with Python version 3.11, as recommended, and executed the following commands in the terminal:
git clone https://github.com/pupil-labs/pupil.git
cd pupil
git checkout develop
However, when I ran:
python -m pip install -r requirements.txt
I encountered errors while downloading the following libraries:
# Pupil-Labs
ndsi==1.4.
pupil-apriltags==1.0.
pupil-detectors>=2.0.2rc2
pupil-labs-uvc
pye3d>=0.3.2
pyglui>=1.31.1b1
It seems that these libraries are causing issues during installation.
Since I'm using an M1 MacBook, I was able to launch Pupil Capture programmatically by running the following code:
import subprocess

# Command written as a list
command = ["sudo", "/Applications/Pupil Capture.app/Contents/MacOS/pupil_capture"]
# Execute the command
subprocess.run(command)
⚠️ Lines starting with # are comment lines in the code above.
Hi @user-76ebbf, then this might be more work than necessary.
Have you taken a look at the Getting Started guide? We already provide an easy-to-install bundle of the Pupil Core software, so you do not usually need to build it yourself from source at the terminal.
If you will be using standard PsychoPy, Python, & the Pupil Core Bundle software (provided at that link), then you should be fine with your M1 computer; if anything, you have a quite powerful system for using Pupil Core!
If you want to program your experiments directly in Python, then I recommend checking out the Network API documentation and the Pupil Helpers repository. With respect to PsychoPy, their website and community have great documentation and guides on how to get set up. For example, this guide on Pupil Core with PsychoPy Builder.
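As a starting point, here is a minimal sketch of driving Pupil Capture from Python over the Network API (Pupil Remote). It assumes Capture is running on the same machine with the default Pupil Remote port; see the Network API docs and Pupil Helpers for the full set of commands.

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

def send(cmd: str) -> str:
    # Send a Pupil Remote command and wait for the reply.
    pupil_remote.send_string(cmd)
    return pupil_remote.recv_string()

print(send("C"))               # start calibration
# ... wait for the calibration to finish ...
print(send("R my_recording"))  # start a recording with a session name
# ... run your trial ...
print(send("r"))               # stop the recording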
Let us know if you have any other questions.
Hi Folks - haven't fired up the core in a while. I was thinking about an in-class demo, but found that Capture can't find my cams on Sonoma. I don't see Capture listed in "settings / privacy&security/cameras".
Is it simply no longer functional on mac?
Hi @user-8779ef , it still works on newer MacOS. As of MacOS 12, you want to take note of the steps here.
If you installed it as an App bundle via the Download link here, then you can try applying those recommendations to:
/Applications/Pupil Capture.app/Contents/MacOS/pupil_capture
Hi Rob. Thanks for your response. What I meant was: since there are two sets of calibration and validation, the first occurring at the beginning of the experiment and the second occurring midway through, I wanted to know whether the first calibration is used for the first half of the recording, and then, from the point where the second calibration occurs, the first calibration gets discarded and the second calibration is used for all of the recording that occurs after it?
Hi @user-b31f13 , I see now, apologies for the misunderstanding. So, it works like this:
- If you use Gaze Data from Recording, then all gaze samples are based on the most recent calibration, as this is what Pupil Capture uses when saving the data. This corresponds to your statement: "the first calibration is used for the first half of the recording and then from the point where the second calibration occurs, that is then used".
- If you use Post-hoc Gaze Calibration, then it will use whichever Calibration you choose in the Calibration sub-menu, according to the method described earlier: https://discord.com/channels/285728493612957698/285728493612957698/1305676139137859605
Can someone please clarify if this camera will work for the eye capture feature?
Hi @user-ffeac9 , it's possible. If it is UVC compatible, then it should in theory work. Without having tested this specific camera, though, I cannot give a definite answer in this case, so it could be worth testing it out.
eye0_hardcoded_translation = 20, 15, -20
eye1_hardcoded_translation = -40, 15, -20
ref_depth_hardcoded = 500
These are fixed values for the eye positions in the code. If I want to modify the relative position relationship, for example if the world camera is moved up a bit, how can I do that?
I can't currently see exactly what these numbers mean in the world coordinate system because they don't particularly agree with what I'm measuring.
Hi @user-fb8431 ! I might have missed a previous message, but would you mind providing more context on the issue you are facing and what you are trying to achieve?
Kindly note that the parameters you link are only initial parameters; Pupil Core does what we call bundle adjustment to estimate the eyeball parameters.
The gaze data from the eye camera is slightly misaligned. Is it possible to shift it slightly on the x-axis and y-axis by offline calibration after recording? Is it possible to shift it by manual correction?
Hi @user-e5b833!
As mentioned in the ticket, you can use the following Plugin to manually apply an offset correction to your recordings. However, it might be more beneficial to address this issue by improving the calibration process.
Please note that applying an offset correction modifies the gaze point in either the x or y direction (depending on your choice), but adjusting one coordinate alone might also affect both coordinates of the on-surface mapping.
Why does this happen? The surface can occupy different positions in the world camera space. You can visualize this relationship here. One of the steps in converting from scene camera coordinates to surface coordinates involves undistorting both the gaze and the scene camera image. For more details on this process, check out this tutorial on undistortion and unprojection. Thus, depending on where the gaze is, a change in one coordinate can actually modify the on-surface point in both coordinates.
If you're interested in how the surface mapper is implemented, you can find the source code here.
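If it helps to see the idea in code, here is a minimal sketch of the two steps mentioned above: undistortion, then a homography into surface space. The intrinsics, distortion coefficients, and homography below are placeholders, not your camera's actual values; with real values you can see that shifting gaze only along x can move the on-surface point in both coordinates.

import cv2
import numpy as np

# Placeholder scene camera intrinsics, distortion, and surface homography --
# load your own from the recording's camera model and surface definition.
K = np.array([[790.0, 0.0, 640.0],
              [0.0, 790.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.4, 0.2, 0.0, 0.0, 0.0])
H = np.array([[0.001, 0.0, -0.5],
              [0.0, 0.001, -0.3],
              [0.0, 0.0, 1.0]])

def to_surface(gaze_px):
    # Undistort the gaze point (keeping pixel units via P=K), then apply the
    # scene-image-to-surface homography.
    pts = np.array([[gaze_px]], dtype=np.float32)   # shape (1, 1, 2)
    undist = cv2.undistortPoints(pts, K, dist, P=K)
    return cv2.perspectiveTransform(undist, H).ravel()

# Shifting gaze only along x changes both surface coordinates slightly,
# because the undistortion step depends on the distance from the image center.
print(to_surface((700.0, 500.0)))
print(to_surface((760.0, 500.0)))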
Let me know if you have any further questions or need additional clarification!
Hi everyone, I am a human factors engineering student. I'm currently designing an experiment in Python that requires subjects to perform some mouse clicks on a computer. I also want to capture data from their eyes and be able to synchronize it with my mouse data (for example, the eye data will start to be captured only after the subject clicks the start button on the screen). I have both Invisible and Core devices. I don't know which one is easier to program (since my English and programming skills are below average). It would be nice if there is documentation or a project about it! Thanks guys! I really need your help!
Hi @user-74c615! You can programmatically control Pupil Capture and send event annotations using the Network API. While the method for detecting and capturing mouse events is up to you, once captured, you can send event annotations through the Network API to mark these events. See below some resources that may be of interest to you:
If you're planning to implement a gaze-contingent paradigm, the Alpha Lab guide on building gaze-contingent assistive applications might be helpful. Additionally, Pupil Core offers integration with PsychoPy through the PsychoPy plugin for Pupil Core, which can be useful if you're developing your experiment to control stimulus presentation.
Finally one more note, while it's possible to start recording at a specific point, we recommend starting the recording before performing the calibration. Check out our Best Practices for more recommendations on data collection.
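To give a concrete flavor of the annotation route, here is a minimal sketch that sends one annotation when your experiment detects a mouse click. It assumes Capture runs locally with the default Pupil Remote port and that the Annotation plugin is enabled in Capture so the annotations end up in the recording; the label is a placeholder of your choosing.

import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# Ask Pupil Remote for the PUB port and for Capture's current clock,
# so annotation timestamps line up with the recording.
remote.send_string("PUB_PORT"); pub_port = remote.recv_string()
remote.send_string("t"); capture_time = float(remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(0.5)  # give the PUB socket a moment to connect

def send_annotation(label: str, timestamp: float):
    payload = {"topic": f"annotation.{label}", "label": label,
               "timestamp": timestamp, "duration": 0.0}
    pub.send_string(payload["topic"], flags=zmq.SNDMORE)
    pub.send(msgpack.dumps(payload, use_bin_type=True))

# e.g., call this from your mouse-click handler:
send_annotation("start_button_click", capture_time)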
Thank you for your professional and detailed answer! I'll look into it more closely!
Hello, I am trying to figure out the DIY case for starting a high school project. The link to the Shapeways store is broken. Are there any other cameras that are tried and tested for the DIY kit? Thanks
Hi, I am running the Pupil Labs Core source (https://github.com/pupil-labs/pupil) in a Python 3.7 environment, but I get the following error:
Traceback (most recent call last):
  File "/home/~~~~/PycharmProjects/pupil_with_deepvog/pupil_src/main.py", line 39, in <module>
    from version_utils import get_version
  File "<fstring>", line 1
    (get_tag_commit()=)
                     ^
SyntaxError: invalid syntax
How can I resolve this? I want to run it in a Python 3.7 environment.
This happens when you download the source code from GitHub's website, rather than using the git clone command to grab it. Just so you know though, we provide pre-made builds of the Core software, and most users will not need to run from source.
Thank you! I'll try it
Hello everyone. I tried the Network API today and it worked, thanks to your clear documentation! But I still have 2 questions. Can I use a Pupil Remote command to change the name of a subfolder? I only see the use of 'R rec_name' to change the name of the main folder, but I would like to have a lot of recordings in one main folder and be able to name the subfolders myself, instead of using names like 000, 001. The second question is: if I use my computer screen for calibration, does the bottom left corner of my screen correspond to (0, 0) in the world camera? What I'm trying to accomplish is to have my eye position normalized based on screen size. Or do I have to define my screen as a surface? I started wondering about this because I noticed that the coordinates inside 'gaze_positions.csv' (e.g. gaze_point_3d_x) can have negative values, and I don't know what the negative values mean. Thanks again for your work and help!
Hi @user-74c615!
1. If memory serves, that first command should generate a custom name for the recording folder. In a sense, each recording is made in a new folder.
2. If you want screen-mapped gaze, you'll need to use the Surface Tracker Plugin in Pupil Capture. You'll need to set up a surface that outlines your screen. The mapped coordinates are described in the documentation. Briefly: x_norm and y_norm are coordinates between 0 and 1, where (0,0) is the bottom left corner of the surface and (1,1) is the top right corner.
Bonus: You can use the real-time API to get real-time surface-mapped gaze. This example script shows you how!
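For reference, a minimal sketch of the real-time route (along the lines of that example script): subscribe to the surface topic via the Network API and read the normalized gaze on the surface. The surface name is a placeholder for whatever you called your screen surface, and Capture is assumed to run locally with the default Pupil Remote port.

import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT"); sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.Screen")  # placeholder surface name

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    for gaze in msg.get("gaze_on_surfaces", []):
        x_norm, y_norm = gaze["norm_pos"]  # (0,0) bottom-left, (1,1) top-right
        print(f"on_surf={gaze['on_surf']}  x={x_norm:.3f}  y={y_norm:.3f}")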
Thank you for answering and providing detailed information, Neil! Today I will try out how surfaces should be used.
Hi all, is there a data sample available somewhere? https://pupil-labs.com/blog/demo-workspace-walkthrough-part1 leads to a 404.
In particular, I was looking for a monocular recording, ideally at 50hz with a non-moving head (e.g., chin rest)... a few minutes would do. This would be enormously helpful!
Hi @user-7aab89 ! I did reply by email.