πŸ‘ core


user-b02f36 07 November, 2024, 12:11:50

Hi! I wonder if it is possible to get the pixel size of the eye camera in Pupil Core. I have the intrinsic matrix of the eye camera and I want to use the pixel size to calculate the real focal length. Many thanks!

nmt 07 November, 2024, 14:57:39

Hi @user-b02f36! Please reach out to info@pupil-labs.com in this regard 🙂

user-a4aa71 08 November, 2024, 10:53:19

Hi everyone, one of my students upgraded her PC. She has a MacBook (MacBook Air 13, M1 chip, 2020, 8 GB, macOS Sequoia 15.0.1) and Capture doesn't work anymore. That is, a message appears saying the cameras are detected, but they are actually disconnected, even though they appear connected among the USB ports. How can this be fixed?

user-d407c1 08 November, 2024, 11:42:57

Hi @user-a4aa71 ! Could she try the following troubleshooting steps? If those don't resolve the issue, could you please open a 🛟 troubleshooting ticket with additional details, such as the version of Pupil Capture being used?

user-a4aa71 08 November, 2024, 11:49:57

it works!! Perfect! Thank you so much!

user-7b0c86 08 November, 2024, 17:50:15

Hi I apologize if this question has already been asked, but I couldn't find it when I searched.

When collecting data with Pupil Core, I take care to detect the pupil, make the necessary adjustments, ask the subject not to make sudden head movements, and perform a correct calibration with good confidence levels. When analyzing the data in Pupil Player, I noticed that, in some cases, the participant's fixations were not detected (unfortunately, I didn't notice this at the time of recording). Is this normal for some participants? I noticed that the number of fixations was very low, and it is possible to see in the video (Pupil Player) that fixations were not detected. Could you explain what might have happened or what I can do? Thank you very much!

user-f43a29 11 November, 2024, 23:39:52

Hi @user-7b0c86 , the default parameters of the fixation detector have been chosen to be useful in many scenarios, but you might have an experiment that requires some tweaking of the parameters. Please see here and here for more information.

user-b31f13 08 November, 2024, 23:54:30

Hello, in my 13-minute study (see attached screenshot), I had 2 calibration points and 2 validation points (so calibration, then validation, then calibration, then validation at the end). The part of my data I really care about is between the 2nd calibration and the final validation at the end of the data. If I leave Pupil and Gaze data set to "from recording", how does the exported CSV handle these calibrations? Does it simply use the most recent calibration to calculate gaze data until a new calibration arises and then use that one moving forward, does it somehow mesh and use both calibrations for the entire data span, or does it calculate gaze some other way?

Additionally, I am working with a surface. How would the surface gaze data interact with the 2 calibration sessions?

Finally, is there an ideal configuration for extracting post-hoc fixations optimally? Or should I just stick to the default settings for the fixation detector plugin in Pupil Player?

Chat image

user-f43a29 11 November, 2024, 23:33:45

May I ask for more clarification about what you mean by "how does the surface data interact with the 2 calibration sessions"? The surface mapping procedure will still remain the same:

  • It will detect the AprilTag-defined Surface in a World frame
  • It will map the (calibrated) gaze data to that Surface
  • You will receive the mapped data in the Surface Tracker data export

Regarding fixation detection, the "ideal" choice depends on your experimental setup and configuration. The default parameters of the fixation detector are set to be useful in many scenarios. If you find that those parameters are not fitting your use case (e.g., fixations are clearly not being detected), then you can modify the parameters as you see fit. You may find this section of the documentation useful.

user-f43a29 11 November, 2024, 23:30:49

Hi @user-b31f13 , if you have not seen it already, I recommend checking out the Post-hoc Calibration documentation.

In your case, try the following to use only the 2nd calibration:

  • In Pupil Player, go to the Post-hoc Gaze Calibration menu
  • Choose Post-hoc Gaze Calibration as Data Source
  • Expand the Calibration section
  • Choose New Calibration and give it a name
  • Then, set the Trim Marks to only include your second calibration (see attached image)
  • Change any other calibration settings as needed and click Set from Trim Marks (next to Collect References)
  • Click Recalculate

To now use that calibration for gaze mapping, expand the Gaze Mapper menu and:

  • Click New Gaze Mapper and give it a name
  • Set Calibration to the post-hoc one you just made
  • If you click Recalculate, then by default it will map gaze throughout the whole recording using the chosen calibration. You can limit the mapping to just the second half of your recording, again using Set from Trim Marks.

You can now export the data and note that it will be the data from within the Trim Mark window. Note also that the exported CSV files do not exactly "manage" the calibrations. Rather, Pupil Player manages the calibrations and after gaze mapping is complete, the exported gaze data will be the result of that mapping.

Chat image

user-b31f13 09 November, 2024, 01:19:24

Also, I noticed that although there are two calibrations and two validations in my data, for some reason it shows 3 calibrations, which is not the case (please ignore the "Pre-trial calibration", as I manually created that). Can someone please help me understand why this would happen?

Chat image

user-ffeac9 09 November, 2024, 13:26:18

Can I use an ESP32 camera module for the eye capture feature? @user-d407c1

nmt 11 November, 2024, 01:45:51

Hi @user-ffeac9! We don't know about the ESP32 camera, specifically. But note that the Capture video backend only supports cameras that are UVC compliant. More details in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-ffeac9 11 November, 2024, 09:11:08

Is it possible to track eye movements using a video, by inputting it?

user-f43a29 11 November, 2024, 09:16:38

Hi @user-ffeac9 , I'll briefly hop in for Neil: yes, so long as your camera that provides the videos is UVC compatible.

user-ffeac9 11 November, 2024, 09:17:43

Is there a guideline provided?

user-f43a29 11 November, 2024, 09:19:07

Yes, please see the message linked above: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-ffeac9 11 November, 2024, 18:02:41

https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 In this message it says "For other cameras, you could theoretically write your own video backend, but that might be very time consuming." Is there anybody who has done it?

user-ffeac9 11 November, 2024, 18:19:25

any help please?

Chat image Chat image Chat image Chat image Chat image

user-f43a29 11 November, 2024, 23:36:04

Hi @user-ffeac9 , it is possible that your camera is not UVC compliant. You could try the steps in this message and see if that resolves it.

user-b31f13 11 November, 2024, 19:02:30

@user-f43a29 @nmt Please could someone have a look at my messages above https://discord.com/channels/285728493612957698/285728493612957698/1304594934053212162

user-11b3f8 12 November, 2024, 09:55:07

Hi! Is there a way to remotely tell Pupil Capture to add a surface in the Surface Tracker plugin, as with sending a request to calibrate through Pupil Remote?

nmt 12 November, 2024, 10:03:58

Hi @user-11b3f8! This isn't possible via Pupil Remote. The surfaces need to be defined in the Capture software itself. May I ask why you would want to do so?

user-11b3f8 12 November, 2024, 10:06:00

We are using the Pupil Labs Core for real-time eye tracking to use an interface. We use the Surface Tracker plugin to get the gaze within the surface, which is our interface. But as we are running it through Docker, Pupil Capture starts anew, so without any extra plugins initiated. So I was just curious to see if there was a way to initialize a plugin and add surfaces without having to set up the program manually.

user-11b3f8 12 November, 2024, 10:06:44

i.e., by requesting it through Pupil Remote, so the setup would be a part of starting up our program

nmt 12 November, 2024, 10:20:05

Interesting! Okay, maybe I'll zoom out a little. Technically, what you're asking for is actually possible. It would take a bit of work, but I think it's feasible (no guarantees). Let me try to point you in the right direction.

nmt 12 November, 2024, 10:21:29

Firstly, the capture world settings are stored in the pupil_capture_settings directory on desktop machines. Is there a way to make this persistent in your setup?

user-11b3f8 12 November, 2024, 10:22:51

yeah we should be able to copy an existing settings file, and place it into the program files on startup

nmt 12 November, 2024, 10:27:19

With the right settings, the surface tracker plugin should load automatically.

Also note that defined surfaces are saved in a file called surface_definitions in the pupil_capture_settings directory.

If you restart Capture and the Surface Tracker plugin loads, your surface definitions from previous sessions will be loaded.

I would give that a shot and see if you can make it work
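
For illustration, here is a rough sketch (not an official recipe) of pre-seeding those settings before launching Capture inside a container. The backup location, the settings file names, and the launcher path are assumptions based on the defaults discussed above; adjust them to your setup.

import shutil
import subprocess
from pathlib import Path

# Assumed locations; adjust to your Docker image / host setup.
settings_dir = Path.home() / "pupil_capture_settings"
saved_settings = Path("/opt/my_app/pupil_settings_backup")  # hypothetical backup of a configured session

# Copy the saved user settings (including surface_definitions) into place
# before Pupil Capture starts, so the Surface Tracker plugin and your surfaces load automatically.
settings_dir.mkdir(parents=True, exist_ok=True)
for name in ("user_settings_world", "surface_definitions"):  # file names assumed from a desktop install
    src = saved_settings / name
    if src.exists():
        shutil.copy(src, settings_dir / name)

# Launch Pupil Capture (path/command is an assumption; use your install location).
subprocess.run(["pupil_capture"])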

user-11b3f8 12 November, 2024, 10:30:34

we've found the files, we'll see if we can make that work, thank you!

user-98789c 13 November, 2024, 15:27:18

Hello all, a question about how the "diameter_3d" is calculated using diameters from eye 1 and eye 0: if one eye has an outlier reading or missing data, how is it handled in the calculation?

user-cc6071 13 November, 2024, 18:51:18

Hello and thanks again for your previous help! As I mentioned, we are planning to conduct research sessions in dyads (parent and child) and we were wondering if you could recommend an optimal screen size for these studies. Given that we'll have both adults and kids interacting, any guidance on screen dimensions that would work well for both would be really appreciated. Thanks so much in advance! Best, Asia

user-f43a29 14 November, 2024, 08:59:31

Hi @user-cc6071 , you are welcome!

Without knowing the specific parameters of your environment, as well as the overall experimental design, I don't want to make an assumption and risk giving you a wrong answer for the optimal screen size.

May I ask if you will be using AprilTags to map gaze to the screen?

We maintain a list of publications that used our eyetrackers, where you can see how other researchers who used Pupil Core handled such questions.

user-0f7e53 13 November, 2024, 19:54:57

Hi team, I would like to know if there is a way to compute the heading and pitch of the gaze without the head pose information? I have exported the gaze data, but there was no head pose information recorded. Thanks

user-f43a29 14 November, 2024, 08:54:04

Hi @user-0f7e53 , you were using Pupil Core, correct? Did you potentially have an alternate way to determine the orientation/pose of the wearer's head?

Do I understand correctly that you were using the Head Pose Tracker?

user-f43a29 14 November, 2024, 08:53:27

Hi @user-98789c , we do not reject said data. We provide you with all estimates, as computed by the underlying pye3d model.

Since outlier definition can be dependent on research context, we leave that decision to the user, rather than reduce the data you get. This way, you get as much flexibility and data as possible.

Could you clarify what you mean by "missing data" in this context?
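
As an aside, a minimal sketch of how one could filter per-eye diameter_3d estimates by confidence when post-processing the pupil_positions.csv export; the export path and the confidence threshold below are assumptions, not recommended values:

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # assumed export path

# Keep only 3D-model samples with reasonable confidence (threshold is a user choice, not a rule).
good = pupil[pupil["method"].str.contains("3d") & (pupil["confidence"] >= 0.6)]

# Per-eye diameter_3d time series; eye_id 0 and 1 are the two eye cameras.
eye0 = good[good["eye_id"] == 0][["pupil_timestamp", "diameter_3d"]]
eye1 = good[good["eye_id"] == 1][["pupil_timestamp", "diameter_3d"]]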

user-98789c 14 November, 2024, 15:32:56

Thank you @user-f43a29, my question is now answered. It was actually a question from reviewer number 2! And I found the function I used to handle missing data from only one eye.

user-0f7e53 14 November, 2024, 10:58:44

Hi Rob, yes, I was using Pupil Core and I did not have any other devices that recorded the orientation of the wearer's head. I could not use the Head Pose Tracker, as I did not apply any markers, since they would not have been easily detectable in a dark environment.

user-f43a29 14 November, 2024, 11:05:02

Then, it is indeed tricky, but not strictly impossible. One potential option, if you had the world camera enabled and there was enough structure and light in your environment (and the wearer moved around enough), is to try an open-source Structure from Motion algorithm, but I cannot make guarantees about success or the quality of the final result.

Otherwise, members of the community here might have additional tips 🙂

user-0f7e53 14 November, 2024, 11:11:53

Okay, hopefully I can get other tips, as we have the gaze data and the world camera recording as well.

user-0f7e53 14 November, 2024, 15:20:05

Hi @user-f43a29 , as a follow-up to this question, I would like to get your opinion on whether it is possible to estimate the heading and pitch using the gaze_normal and gaze_point_3d values. I would just like to verify whether the steps below from GenAI make sense:

  1. Compute the Eye Center as a Combined Position. If the gaze system doesn't automatically provide a central eye position, calculate it as the average position between the two eyes: eye_center_x = (eye_center0_3d_x + eye_center1_3d_x) / 2, eye_center_y = (eye_center0_3d_y + eye_center1_3d_y) / 2, eye_center_z = (eye_center0_3d_z + eye_center1_3d_z) / 2
  2. Calculate the Heading Vector from Eye Center to Gaze Point. Use the calculated eye center position as the starting point and the gaze point coordinates as the end point. The heading vector is: (gaze_point_3d_x - eye_center_x, gaze_point_3d_y - eye_center_y, gaze_point_3d_z - eye_center_z). Normalize this vector to get a unit vector indicating the direction of the gaze.
  3. Convert the Heading Vector to Angular Coordinates. Calculate azimuth (horizontal angle) and elevation (vertical angle) using some formula.

user-d34f33 14 November, 2024, 22:47:47

I'm sure this question gets asked all the time, but is the Core compatible with contact lenses? I know the Neon is. After searching on your website and on this Discord chat, I couldn't find the answer for the Core specifically. Thanks!

nmt 15 November, 2024, 10:00:45

Hi @user-d34f33! For the majority of lenses, yes it works. The lenses do not affect pupil detection. There may be some edge cases, as reported by @user-0f7e53. In such cases, you can try to tweak the 2D detector settings in the Capture software.

user-0f7e53 15 November, 2024, 07:53:08

This is just from my experience... I think it depends. I had one participant who I think was wearing some kind of thick lenses, and I could not detect his pupil. There were others who also wore lenses and it was still able to detect the pupil. I am not sure what the difference was with this particular person's lenses, though.

user-f43a29 15 November, 2024, 09:51:23

Hi @user-0f7e53 , I now think that I may have misunderstood your original question. I had the impression that you wanted gaze in a world-based coordinate system. For example, a system where, if you look towards magnetic North, gaze azimuth would be 0 degrees, whereas when looking South, it would be -180 degrees.

However, this latest message suggests rather that you want to know the angular deviation from neutral gaze, regardless of head pose? Do you want, in a sense, data related to the rotational orientation of the eyes within the socket?

user-0f7e53 15 November, 2024, 09:56:14

Yes, that could work as well, since we simply do not have the head orientation due to the missing head pose data, but getting at least the eye orientation may work for our scenario. With the approach I showed, I am getting a plot as shown in the image, which is not as expected: the person is expected to look directly ahead most of the time, so most of the points should be centred around 0. So there seems to be a need to find an offset to adjust for, but if there is an easier way to do this, that would be useful as well. I also found this link: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb where the spherical coordinates of the gaze are calculated and the gaze points are visualized, but I'm not sure if this could work.

Chat image

user-f43a29 15 November, 2024, 14:34:27

Hi @user-0f7e53 , so, the gaze ray provided by Pupil Core originates at the center of the world camera, which is offset and can be essentially arbitrarily oriented. The gaze ray is represented in the world camera's 3D space, so it indicates the direction from that camera to the point that a wearer is looking at. This is what gaze_point_3d_x/y/z intend to specify

The transformation, as done in that tutorial, is fine for determining gaze velocity, as an offset and arbitrary rotation do not change such velocities. However, that transformation alone does not provide a "gaze heading" such that an azimuth and elevation of 0 always corresponds to looking straight ahead (i.e., it does not account for the arbitrary rotation of the world camera)

Rather, you can try this:

  • If you had a measure of neutral gaze (i.e., "looking straight ahead"), then you can mark that as elevation = azimuth = 0
  • Then, you apply the azimuth & elevation transformation, as provided in the tutorial, and correct for the offset from said neutral point

Also, please note that the tutorial uses "traditional" spherical coordinates, where:

  • 90° azimuth (i.e., theta) and 90° elevation (i.e., psi) correspond to "forward" in the world camera coordinate system (not the same as "gaze straight ahead")
  • 0° elevation corresponds to "upwards" in this same system

An easy validation is to make a recording where you first look left, then up, then right, then down, and you can better understand the relationship to the spherical coordinates.

Also, did you apply the negative_z_values correction?

With respect to the calculations in your earlier post, gaze_point_3d_x/y/z and eye_center0/1_3d_x/y/z are not compatible in that way, so they cannot be directly combined like that to derive an azimuth and elevation for gaze

Please let me know if this cleared things up
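
For illustration only, a minimal sketch of this idea, assuming the standard gaze_positions.csv export columns and a hypothetical export path; the exact spherical-coordinate convention and the neutral-gaze window should be validated against the tutorial and the look left/up/right/down recording described above:

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # assumed export path

x = gaze["gaze_point_3d_x"].to_numpy()
y = gaze["gaze_point_3d_y"].to_numpy()
z = gaze["gaze_point_3d_z"].to_numpy()
# If needed, apply the negative-z correction mentioned above before converting.

r = np.sqrt(x**2 + y**2 + z**2)
elevation = np.rad2deg(np.arccos(y / r))   # "traditional" spherical coordinates, as in the tutorial
azimuth = np.rad2deg(np.arctan2(z, x))     # ~90 deg / 90 deg corresponds to the camera's forward axis

# Correct for a neutral-gaze reference, e.g. samples where the wearer was known
# to look straight ahead (the index window here is hypothetical).
neutral = slice(0, 200)
azimuth_centered = azimuth - np.nanmean(azimuth[neutral])
elevation_centered = elevation - np.nanmean(elevation[neutral])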

user-0f7e53 15 November, 2024, 14:44:52

Hi Rob, thanks for the detailed explanation. I think it is a great idea to first get the neutral gaze point, which I can do myself, and then calculate the offset based on the neutral points. I will also record myself looking in different directions to get a sense of the coordinates for each direction. I will try this method and see if it works. I have sent an email, by the way, with an example, and I will reply to the email when I have tried this method.

user-dcf968 18 November, 2024, 07:41:36

I just got a secondhand pair of Pupil Core glasses and am planning to use them for basketball performance analysis, focusing mainly on free throws. I'd also like to explore their use for more dynamic movements. Can these glasses operate without being continuously connected to a laptop, or are there alternative setups for more freedom of movement?

user-f43a29 18 November, 2024, 22:25:53

Hi @user-dcf968 , great to hear! If you want to be more mobile with your Pupil Core, then you could try connecting it to a LattePanda (for SBCs with Pupil Core, you want to consider x86-64 architectures). So, essentially, yes, it does need to be connected to a computer.

Since you plan to use it in an athletic setting, please note that Pupil Core's default eyetracking algorithms can tolerate a certain amount of headset slippage, but not much. If there will be any rapid head motion or shock that causes enough shift or movement/jitter of the headset, then it is advised to pause, reset the device, and re-calibrate. Running some validations can help determine how much is acceptable in your case, if you experience any slippage to begin with. You can learn more about that and other Best Practices here, and let us know if you have any other questions.

If you ever plan to upgrade your eyetracking equipment, then I recommend looking into our latest eyetracker, Neon, which does not have this issue. It is calibration free and slippage resistant.

user-76ebbf 18 November, 2024, 08:29:18

I'm planning to do coding in Python on a MacBook Pro with an M1 chip. Could you recommend any useful websites for API references? Also, could you explain any precautions I should take when using an M1-powered computer?

I am preparing for development by following the instructions on https://github.com/pupil-labs/pupil/blob/master/README.md#installing-dependencies-and-code. However, I am having trouble downloading the Pupil Labs libraries listed in the requirements.txt file on a MacBook with an M1 chip. The error "error: subprocess-exited-with-error" occurs.

user-f43a29 18 November, 2024, 23:29:18

Hi @user-76ebbf , can I ask what you specifically plan to program? Then, I am better positioned to point you in the right direction.

user-76ebbf 19 November, 2024, 04:57:10

Hi @user-f43a29 , thank you for your response. I haven't decided on the specific program yet, but eventually, I plan to conduct psychological experiments using Python and PsychoPy. For now, I aim to perform operations such as calibration and recording using programs created in Python.

At this stage, I have created a virtual environment with Python version 3.11, as recommended, and executed the following commands in the terminal:

git clone https://github.com/pupil-labs/pupil.git
cd pupil
git checkout develop

However, when I ran:

python -m pip install -r requirements.txt

I encountered errors while downloading the following libraries:

# Pupil-Labs
ndsi==1.4.
pupil-apriltags==1.0.
pupil-detectors>=2.0.2rc2
pupil-labs-uvc
pye3d>=0.3.2
pyglui>=1.31.1b1

It seems that these libraries are causing issues during installation.

Since I'm using an M1 MacBook, I was able to launch Pupil Capture programmatically by running the following code:

import subprocess

# Command written as a list
command = ["sudo", "/Applications/Pupil Capture.app/Contents/MacOS/pupil_capture"]
# Execute the command
subprocess.run(command)

user-f43a29 19 November, 2024, 09:35:46

Hi @user-76ebbf, then this might be more work than necessary.

Have you taken a look at the Getting Started guide? We already provide an easy to install bundle of the Pupil Core software, so it is not usually needed to build it yourself from source at the terminal.

If you will be using standard PsychoPy, Python, & the Pupil Core Bundle software (provided at that link), then you should be fine with your M1 computer; if anything, you have a quite powerful system for using Pupil Core!

If you want to program your experiments directly in Python, then I recommend checking out the Network API documentation and the Pupil Helpers repository. With respect to PsychoPy, their website and community have great documentation and guides on how to get set up. For example, this guide on Pupil Core with PsychoPy Builder.

Let us know if you have any other questions.
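
To give a flavour of the Network API route, here is a minimal sketch of remotely starting a calibration and a recording via Pupil Remote. It assumes Pupil Capture is running on the same machine with the default Pupil Remote port 50020, and the session name is a placeholder.

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# Start a calibration ("C"); it can be stopped later with "c".
pupil_remote.send_string("C")
print(pupil_remote.recv_string())

# Start a recording with a session name ("R <name>").
pupil_remote.send_string("R my_session")
print(pupil_remote.recv_string())

# Stop the recording.
pupil_remote.send_string("r")
print(pupil_remote.recv_string())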

user-8779ef 21 November, 2024, 15:24:38

Hi folks - haven't fired up the Core in a while. I was thinking about an in-class demo, but found that Capture can't find my cams on Sonoma. I don't see Capture listed in Settings > Privacy & Security > Cameras.

Is it simply no longer functional on Mac?

user-f43a29 21 November, 2024, 15:54:00

Hi @user-8779ef , it still works on newer macOS versions. As of macOS 12, you want to take note of the steps here.

If you installed it as an App bundle via the Download link here, then you can try applying those recommendations to:

/Applications/Pupil Capture.app/Contents/MacOS/pupil_capture

user-b31f13 22 November, 2024, 01:14:52

Hi Rob. Thanks for your response. What I meant was: since there are two sets of calibration and validation, the first occurring at the beginning of the experiment and the second occurring midway through, I wanted to know whether the first calibration is used for the first half of the recording and then, from the point where the second calibration occurs, the first calibration is discarded and the second calibration is used for all of the recording that comes after it?

user-f43a29 25 November, 2024, 11:08:41

Hi @user-b31f13 , I see now, apologies for the misunderstanding. So, it works like this:

  • If you are using the default Gaze Data from Recording, then all gaze samples are based on the most recent calibration, as this is what Pupil Capture uses when saving the data. This corresponds to your statement: the first calibration is used for the first half of the recording and then from the point where the second calibration occurs, that is then used.
  • If you are using Post-hoc Gaze Calibration, then it will use whichever Calibration you choose in the Calibration sub-menu, according to the method described earlier: https://discord.com/channels/285728493612957698/285728493612957698/1305676139137859605

user-ffeac9 24 November, 2024, 11:58:02

Can someone please clarify whether this camera will work for the eye capture feature?

Chat image

user-f43a29 25 November, 2024, 11:13:02

Hi @user-ffeac9 , it's possible. If it is UVC compatible, then it should in theory work. Without having tested this specific camera, though, I cannot give a definite answer in this case, so it could be worth testing it out.
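
If you want to test it, one possible quick check (a sketch only, assuming the pupil-labs-uvc / pyuvc package is installed) is to see whether the camera is enumerated by the same UVC backend that Capture uses:

import uvc  # provided by the pupil-labs-uvc (pyuvc) package

# List all cameras the UVC backend can see; if your camera does not show up here,
# Pupil Capture will not be able to use it either.
for device in uvc.device_list():
    print(device)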

user-ffeac9 24 November, 2024, 11:59:25

Chat image Chat image Chat image

user-fb8431 25 November, 2024, 13:53:03

eye0_hardcoded_translation = 20, 15, -20  # ?
eye1_hardcoded_translation = -40, 15, -20  # ?
ref_depth_hardcoded = 500

These are fixed values for the eye positions in the code. If I want to modify the relative position relationship, for example if the world camera moves up a bit, how do I do it? I currently can't see exactly what these numbers mean in the world coordinate system, because they don't particularly agree with what I'm measuring.

user-d407c1 27 November, 2024, 10:32:23

Hi @user-fb8431 ! I might have missed a previous message, but would you mind providing more context on the issue you are facing and what you are trying to achieve?

Kindly note that the parameters you link are only initial parameters; Pupil Core does what we call a bundle adjustment to estimate the eyeball parameters.

user-e5b833 26 November, 2024, 07:36:58

The gaze data from the eye camera is slightly misaligned. Is it possible to shift it slightly on the x-axis and y-axis by offline calibration after recording? Is it possible to shift it by manual correction?

user-d407c1 27 November, 2024, 10:20:41

Hi @user-e5b833 👋 !

As mentioned in the ticket, you can use the following Plugin to manually apply an offset correction to your recordings. However, it might be more beneficial to address this issue by improving the calibration process.

Please note that applying an offset correction modifies the gaze point in either the x or y direction (depending on your choice), but adjusting one coordinate alone might still affect both coordinates of the on-surface mapping.

Why does this happen? The surface can occupy different positions in the world camera space. You can visualize this relationship here. One of the steps in converting from scene camera coordinates to surface coordinates involves undistorting both the gaze and the scene camera. For more details on this process, check out this tutorial on undistortion and unprojection. Thus, depending on where the gaze is, a change in one coordinate can actually modify both coordinates of the on-surface point.

If you're interested in how the surface mapper is implemented, you can find the source code here.

Let me know if you have any further questions or need additional clarification!
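
As a rough illustration of the undistortion and unprojection step mentioned above (a sketch only; the camera matrix, distortion coefficients, and pixel coordinates below are made-up placeholders, not values from your recording):

import cv2
import numpy as np

# Placeholder intrinsics; in practice, use the values from your scene camera's intrinsics.
camera_matrix = np.array([[794.0, 0.0, 640.0],
                          [0.0, 794.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coefs = np.array([[-0.4, 0.2, 0.0, 0.0, 0.0]])  # placeholder distortion coefficients

# A gaze point in scene-camera pixel coordinates (placeholder value).
gaze_px = np.array([[[800.0, 300.0]]], dtype=np.float64)

# Undistort, then unproject onto the z = 1 plane to obtain a viewing ray.
undistorted = cv2.undistortPoints(gaze_px, camera_matrix, dist_coefs)
ray = np.array([undistorted[0, 0, 0], undistorted[0, 0, 1], 1.0])
ray /= np.linalg.norm(ray)
print(ray)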

user-74c615 27 November, 2024, 06:01:43

Hi everyone, I am a human factors engineering student. I'm currently designing an experiment in Python that requires subjects to perform some mouse clicks on a computer. I also want to capture data from their eyes and be able to synchronize it with my mouse data (for example, the eye data should start being captured only after the subject clicks the start button on the screen). I have both Invisible and Core devices. I don't know which one is easier to program (since my English and programming skills are below average). It would be nice if there is documentation or a project about this! Thanks guys! I really need your help! 🥹 👍 👀

user-d407c1 27 November, 2024, 10:05:46

Hi @user-74c615 👋 ! You can programmatically control Pupil Capture and send event annotations using the Network API. While the method for detecting and capturing mouse events is up to you, once captured, you can send event annotations through the Network API to mark these events. See below some resources that may be of interest to you:

If you're planning to implement a gaze-contingent paradigm, the Alpha Lab guide on building gaze-contingent assistive applications might be helpful. Additionally, Pupil Core offers integration with PsychoPy through the PsychoPy plugin for Pupil Core, which can be useful if you're developing your experiment to control stimulus presentation.

Finally one more note, while it's possible to start recording at a specific point, we recommend starting the recording before performing the calibration. Check out our Best Practices for more recommendations on data collection.
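
To sketch the event-annotation idea (assuming Pupil Capture runs locally with the default Pupil Remote port and the Annotation plugin is enabled in Capture; the label and the click trigger are placeholders):

import time
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the PUB port and open a publisher socket on it.
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect

# Get the current Pupil time so the annotation is on Capture's clock.
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Send an annotation, e.g. when your program registers a mouse click.
annotation = {
    "topic": "annotation",
    "label": "mouse_click",   # placeholder label
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))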

user-74c615 27 November, 2024, 11:37:22

Thank you for your professional and detailed answer! I'll look into it more closely! 👍 🙌 😊

user-02a7e9 27 November, 2024, 12:10:24

Hello, I am trying to figure out the DIY case for starting a high school project. The link to the Shapeways store is broken. Are there any other cameras that are tried and tested for the DIY kit? Thanks

user-fce73e 28 November, 2024, 05:11:35

Hi, I am running the Pupil Labs Core source (https://github.com/pupil-labs/pupil) in a Python 3.7 environment, but I get the following error:

Traceback (most recent call last):
  File "/home/~~~~/PycharmProjects/pupil_with_deepvog/pupil_src/main.py", line 39, in <module>
    from version_utils import get_version
  File "<fstring>", line 1
    (get_tag_commit()=)
                      ^
SyntaxError: invalid syntax

How can I resolve this? I want to run it in a Python 3.7 environment.

user-cdcab0 28 November, 2024, 05:16:05

This happens when you download the source code from GitHub's website rather than using the git clone command to grab it. Just so you know, though, we provide pre-made builds of the Core software, and most users will not need to run from source.

user-fce73e 28 November, 2024, 05:21:00

Thank you! I'll try it

user-74c615 28 November, 2024, 06:21:50

Hello everyone. I tried the Network API today and it worked, thanks to your clear documentation! 🥰 But I still have 2 questions. 🥹 Can I use a Pupil Remote command to change the name of a subfolder? I only see the use of 'R rec_name' to change the name of the main folder, but I would like to have a lot of recordings in one main folder and be able to name the subfolders myself, instead of using names like 000, 001. The second question is: if I use my computer screen for calibration, does the bottom left corner of my screen correspond to (0, 0) in the world camera? What I'm trying to accomplish is to have my eye position normalized based on screen size. Or do I have to define my screen as a surface? I'm asking because I noticed that the coordinates inside 'gaze_positions.csv' (e.g. gaze_point_3d_x) can have negative values, and I don't know what negative values mean. 🧐 Thanks again for your work and help! 🙌 🫂

nmt 28 November, 2024, 10:59:12

Hi @user-74c615! 1. If memory serves, that first command should generate a custom name for the recording folder. In a sense, each recording is made in a new folder. 2. If you want screen-mapped gaze, you'll need to use the Surface Tracker Plugin in Pupil Capture. You'll need to set up a surface that outlines your screen. The mapped coordinates are described in the documentation. Briefly: x_norm and y_norm are coordinates between 0 and 1, where (0,0) is the bottom left corner of the surface and (1,1) is the top right corner.

Bonus: You can use the real-time API to get real-time surface-mapped gaze. This example script shows you how!
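
As a small sketch of that real-time route (assuming the Surface Tracker plugin is running in Capture with a surface named "screen"; the surface name and the Pupil Remote port are assumptions):

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask for the SUB port and subscribe to surface-mapped gaze.
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.screen")  # surface name as defined in the Surface Tracker

while True:
    topic, payload = subscriber.recv_multipart()
    surface_datum = msgpack.loads(payload)
    # Each datum carries gaze points mapped onto the surface, with norm_pos running
    # from (0, 0) at the bottom left to (1, 1) at the top right of the surface.
    for gaze in surface_datum.get("gaze_on_surfaces", []):
        if gaze.get("on_surf"):
            print(gaze["norm_pos"])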

user-74c615 29 November, 2024, 02:15:10

Thank you for answering and providing the detailed information, Neil! Today I will try out how surfaces should be used. 😊 👩‍💻

user-7aab89 29 November, 2024, 11:34:00

Hi all, is there a data sample available somewhere? https://pupil-labs.com/blog/demo-workspace-walkthrough-part1 leads to a 404.

user-7aab89 29 November, 2024, 11:35:20

In particular, I was looking for a monocular recording, ideally at 50 Hz with a non-moving head (e.g., chin rest)... a few minutes would do. This would be enormously helpful!

user-d407c1 03 December, 2024, 12:16:14

Hi @user-7aab89 ! I replied by email.

End of November archive