Hello, I wonder if it is possible to change the Sensor Settings and Image Post Processing options in the eye camera window of Pupil Capture using Python, such as changing the brightness, gamma, and contrast?
Hello, when I build version v3.6.7 from source and package it on Ubuntu 20.04, the software does not work after installation. What could be the problem?
Hi @user-fb8431 , may I ask some clarifying questions:
Hi all, I'm working with an eye dataset collected with Pupil Core. I took over this dataset, and just to note before I ask my question, the data were collected without any AprilTags.
I would like to extract the average gaze position during fixations, and overlay these on what the participants were seeing (varying clips). To extract fixations, I used this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb which relies on the circle_3d_normal_x/y/z variables in the pupil_positions.csv file. So now I have the timestamps of these detected fixations. I'm a bit confused, though, about which variable in the gaze_positions.csv file is best for extracting the position of the eye in the coordinates of the scene during these extracted fixations. In particular, what is the difference between 'gaze_normal0_x/y/z' and 'gaze_point_3d_x/y/z'? I read the description of these variables but it is still unclear to me. Are the 'gaze_normal0_x/y/z' variables derived from the 'gaze_point_3d_x/y/z' variables by normalizing them? Which set of variables do you recommend for extracting gaze during fixations? Many thanks in advance!
Hi @user-57dd7a , the fixation detector in that tutorial is a simple one for learning purposes. You will have better results and an easier analysis experience by using the more robust fixation detector in Pupil Player. It uses the calibrated gaze data from the recording.
The formats of the data files are described in the documentation. Briefly:
gaze_normal0_x/y/z: This is a vector that points from the center of the eye through the center of the pupil. It is also known as the "optical axis" and essentially tells you the orientation of each eyeball. These vectors are not derived from gaze points, but are estimated from the 3D eye model. Calibration can be conceptualized as a process that transforms these vectors into gaze positions in the world camera image. See here for an alternate explanation.

gaze_point_3d_x/y/z: This is an estimate of the point in 3D space that the wearer is gazing at. Note that the z component can be unreliable beyond about 1 meter of distance. Whether you need this value depends on your use case. For many cases, you can just use norm_pos_x and norm_pos_y.

I also recommend taking a general look through our documentation on Pupil Player, as well as the other tutorials, such as this one that should help with correlating gaze and fixations. Feel free to ask any other questions!
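To make the norm_pos suggestion concrete, here is a minimal sketch of averaging gaze per fixation using the Pupil Player exports. The column names (gaze_timestamp, norm_pos_x/y in gaze_positions.csv; id, start_timestamp, duration in fixations.csv) and the millisecond unit for duration are assumptions based on the standard Raw Data / Fixation exporters -- double-check them against your own files:

```python
import pandas as pd

# Pupil Player exports, column names per the standard Raw Data / Fixation exporters
gaze = pd.read_csv("exports/000/gaze_positions.csv")
fixations = pd.read_csv("exports/000/fixations.csv")

rows = []
for fix in fixations.itertuples():
    start = fix.start_timestamp
    end = start + fix.duration / 1000.0  # duration is assumed to be in milliseconds
    window = gaze[(gaze["gaze_timestamp"] >= start) & (gaze["gaze_timestamp"] <= end)]
    rows.append(
        {
            "fixation_id": fix.id,
            "mean_norm_pos_x": window["norm_pos_x"].mean(),
            "mean_norm_pos_y": window["norm_pos_y"].mean(),
        }
    )

fixation_gaze = pd.DataFrame(rows)
print(fixation_gaze.head())
```

Note that the fixation export itself already contains a norm_pos_x/y per fixation, so depending on your needs you may not need to average the gaze samples yourself.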
Hi all! I have a question about using a RealSense D435 as the world camera so that I can project the gaze onto the point cloud within ROS. I hope you can give me some advice. I have a legacy setup in the lab that hasn't been touched in ages that I am trying to revive. It is an old Core headset with the original world camera replaced with a D435. I am running it on Ubuntu 20 with the bundled ros-noetic-realsense2. When running Pupil Capture (bundle) I get feeds of the eyes, but the D435 world camera is not detected, although ROS does detect the D435. I found the realsense2 plugin for v2.3+ and some threads in dev that describe similar crash behaviors to mine (failing to import context from pyrealsense2). I will go fix them as described in the thread. Just in case, I want to double check here if there are more up to date materials? Edit: I thought I had problems with pyrealsense2 on Python 3.8, so I installed 3.6 and the corresponding pyrealsense2, but I am still getting no context in pyrealsense2.
Hi @user-200f75! There have been no updates to the realsense plugin/materials I'm afraid. It's now deprecated. Your best bet is to try and follow the previous thread(s), and use the same software versions (if possible) that those users had.
Hey, I am working on implementing the Pupil Core eye tracker in Unity. I was wondering if this is possible, or if I should just use the Pupil Capture and recording applications?
if it is possible, where's the documentation for it?
So I ended up finding the documentation. The Pupil Core I am using has a broken left eye camera, and I was wondering if there is a way I could modify the demo code for the gaze demo so that it only takes right eye data for calibration?
Hey @user-3c7db0, thanks for reaching out. I'm sorry to hear that one of the eye cameras is broken. May I ask if you've already contacted us via email concerning the broken camera? Otherwise, could you describe the camera's situation and why you think it's broken? Then we can discuss repair. Alternatively, you could buy additional cameras separately.
Btw, you can run Pupil Core monocularly by disabling one of the eye cameras. But this might not be recommended, depending on what you're trying to achieve.
Good morning. The manual offset correction for gaze that you can apply with a post-hoc calibration in Pupil Player, what units is it in? Degrees?
Hi @user-a4aa71, which offset correction are you referring to? Are you using a custom plugin?
Hi again! We've verified that motion capture interferes with eye tracking because the flashing IR sources from the mocap cameras interfere with the Core's ability to calibrate. To solve this, we're wondering whether there's a way to precisely time the start-up of the cameras so we can predict when frame capture will occur?
Hi @user-40931a. This was your set up, correct? https://discord.com/channels/285728493612957698/285728493612957698/1210025943956062268
The Optitrack camera with the IR strobe is very close to and pointing directly at the Core headset. My first question is, could you change the setup so it's not so close to the wearer?
Secondly, did you try turning down the eye camera exposure time in the eye window settings? The images you shared already look overexposed, which I think exacerbates the periods when the strobe is on, causing the pupil detection on our system to fail.
It's not really possible to precisely control the startup of our cameras in the way you describe. However, I can think of a few workarounds. But I'd like to exhaust the above suggestions first, as I believe with the right positioning and exposure settings, you could mitigate the strobe effects.
Those suggestions have both been tried now. The cameras are in their final positions, as far from the eye tracker as possible, and the exposure is the lowest it can be while still being able to pick anything up in the darkness of the recording booth. We have also tried turning the flashing off of just the cameras closest to the screen and adding stable IR sources pointing at the person, both of which interfere with motion capture enough that it isn't looking like a great solution. If you have any more ideas we would be glad to try them.
Theoretically, it's conceivable to send a TTL signal to our system via our real-time API. But I suspect the network latency would be too great to accurately sync the trigger arrival time with the frequency of the strobe.
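For reference, a minimal sketch of sending a timestamped event into Capture over the Network API, so that strobe pulses could at least be marked in the recording. It assumes the default Pupil Remote address and the standard annotation format; the "strobe_pulse" label is hypothetical, and the Annotation plugin needs to be enabled in Capture for the events to be saved:

```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# Connect to Pupil Remote and query the PUB port and the current Pupil time
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Publish an annotation marking the external event
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(0.1)  # give the PUB socket a moment to connect before sending

annotation = {
    "topic": "annotation",
    "label": "strobe_pulse",  # hypothetical label for the strobe/TTL event
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_multipart(
    [annotation["topic"].encode(), msgpack.dumps(annotation, use_bin_type=True)]
)
```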
It could be worth a try. But before going down that route, have you considered post-filtering the noisy data?
That is, calibrate the eye tracker under normal conditions (i.e., no IR strobe), and then make your experimental recording with the mocap system running (with strobe).
You would end up with normal data interspersed with noisy jitters when the pupil detection failed periodically. With some signal processing, it would be possible to filter out the noisy intervals and interpolate the gaps.
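As a rough sketch of that post-filtering idea, assuming the exported gaze_positions.csv with its confidence and norm_pos columns, and a confidence threshold you would tune to your own data:

```python
import numpy as np
import pandas as pd

# Exported gaze data from the Raw Data Exporter
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Treat low-confidence samples (e.g., strobe-on periods) as gaps
CONFIDENCE_THRESHOLD = 0.8  # tune to your data
noisy = gaze["confidence"] < CONFIDENCE_THRESHOLD
gaze.loc[noisy, ["norm_pos_x", "norm_pos_y"]] = np.nan

# Interpolate across the gaps, indexed by time to handle uneven sampling
gaze = gaze.set_index("gaze_timestamp")
cols = ["norm_pos_x", "norm_pos_y"]
gaze[cols] = gaze[cols].interpolate(method="index")
```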
Hi everyone! My working group (clinical research, neurology) and I have a problem we would like to open for discussion. It regards the 3d eye model and pupil size in mm. We did a study with patients and older control subjects and are mainly interested in pupil size. The patients have impaired eye movements, so in some cases the 3d model did not fit well and calibration was sometimes not very accurate. We froze the model before recording the actual data. Data was recorded with the LSL LabRecorder, so there are no eye or world videos of the actual recording. However, we did record the calibration with Pupil Labs (with videos) before starting the experiment. We are currently in the data analysis stage and would, if possible, prefer to use pupil data in mm. Unfortunately we discovered that in some recordings the 3d pupil size is too big or too small due to a badly fitted eye model. So our question is whether there is a way to refit the eye model or correct the badly fitted model, potentially from the calibration video, and then apply the new model to the LSL data stream? Thanks!
If you had the full Pupil Core recording, you could recompute data with an adjusted model and we might be able to find a way to synchronize the data with your other LSL recorded data... but without the Pupil Core recording, I'm afraid you won't be able to recompute anything
Hi everyone! We are DIY-ing our own headset, and although we have manually updated the drivers for our camera, it still shows as 'unknown' when running from the source code. How can we resolve this issue? Additionally, how can we modify the name of the camera displayed in the interface?
Hey Kimmy! Nice to hear you're making your own DIY Pupil Core. Regarding the unknown source, do you know if your cameras are UVC compliant? https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Hello everyone, when developing with the source code, is there any way to make a custom camera show up with a proper name in the interface, like the official cameras do?
Good evening, this is going to be a very stupid question, but I can't find an explanation for this: I performed a single marker calibration in a room, having the subject perform a choreography with the head and then a validation by fixating known points, and I got very bad results (an accuracy of 3.2, absurd) despite the pupil always being detected and the marker always visible in the world camera. There were no changes in brightness due to head movement. I had the same test done in another room, slightly brighter, and got far better results. I have done many tests and cannot understand why it comes out well in one place and drastically worse in another. Does the lighting affect it that much? Is it better for the room to be as bright as possible?
Hi, @user-a4aa71 - lighting can certainly have a dramatic effect like this. The single-marker choreography, especially, is going to be prone to motion blur in darker environments (due to a longer exposure time). This will negatively impact marker detection
Hi everyone, I would like to confirm: Is the 2d pupil size equal to the long axis of the 2d pupil ellipse?
Hi @user-d6701f! That's correct! The 2d diameter is defined by the major (longer) axis of the ellipse. You can find its definition in pupil_detectors.
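As a small illustration, assuming the ellipse_axis_a/ellipse_axis_b and diameter columns of the standard pupil_positions.csv export:

```python
import pandas as pd

# Column names assumed from the standard Raw Data Exporter output
pupil = pd.read_csv("exports/000/pupil_positions.csv")

# The 2d diameter (in pixels) is the larger of the two fitted ellipse axes
major_axis = pupil[["ellipse_axis_a", "ellipse_axis_b"]].max(axis=1)

# For rows produced by the 2d detector, this should match the 'diameter' column
print((major_axis - pupil["diameter"]).abs().describe())
```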
Hello everyone, I have a problem, here is the detailed instructions, https://github.com/pupil-labs/pupil/issues/2352
Hi @user-fb8431! Thanks for reporting this issue. While we might need to see if we can reproduce it, is there any reason why you are trying to build it yourself rather than using the prebuilt bundles?
Hi, @user-fb8431 - this type of error (missing modules after freezing with pyinstaller) is somewhat common with pyinstaller and isn't really specific to our software. Pyinstaller has a system in place to correct for this, but it sometimes breaks on new versions of Python or specific package dependencies.
You can sometimes work around it by listing the missing module as a "hidden import". If you look in the .spec file, you'll see that we do exactly that for several modules already.
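For illustration, the relevant part of a pyinstaller .spec file looks roughly like this ("some_missing_module" is a placeholder for whatever module the frozen bundle failed to find):

```python
# Excerpt from a pyinstaller .spec file (which is itself Python code)
a = Analysis(
    ["main.py"],
    hiddenimports=[
        "some_missing_module",  # placeholder: the module pyinstaller missed
    ],
)
```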
Another possible workaround is to downgrade the problematic package or your Python version.
Does it work when running directly from source, i.e. https://github.com/pupil-labs/pupil?tab=readme-ov-file#installing-dependencies-and-code ?
Yes, it works when running directly from source.
I would think that you'd need to inherit from video_capture.Base_Source rather than plugin.Plugin, no?
No idea, the documentation on plugins is pretty scarce, will try. Edit: could you elaborate on potential inheritance options?
Is it? I see: class Realsense2_Source(
Hello everyone, I have a question, when we use Pupil Core, if we use the single-purpose method, what is the approximate level of accuracy?
Hi @user-fb8431 , when you say "single-purpose method", do you mean the Single Marker Calibration Choreography?
Hi all, I am trying to develop scripts to analyse pupillometry data using the Core system. I have created a custom script to pull csv files directly from the pldata files.
This process is taking an incredibly long time, and I wondered if anyone had an open source script that transposes the pupil data into a more usable format?
Thank you in advance!!
Hey @user-789ddb, thanks for reaching out! Have you seen the community GitHub page yet? There are a few post-hoc analysis scripts, like one for extracting pupil size.
This tutorial concerning pupillometry might also be of interest to you.
Perhaps you could also clarify why you're writing custom scripts to export data, instead of using Pupil Player?
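That said, if you do need to read the intermediate files directly: to my understanding, each <topic>.pldata file is a stream of msgpack-encoded (topic, payload) tuples, with timestamps stored in a sibling <topic>_timestamps.npy file. Here is a hedged sketch using "pupil" as the topic; reusing file_methods.load_pldata_file from the Pupil source is the more robust option:

```python
import msgpack
import numpy as np

# Timestamps live next to the .pldata file, one entry per datum
timestamps = np.load("recording/pupil_timestamps.npy")

data = []
with open("recording/pupil.pldata", "rb") as fh:
    # Each item in the stream is a (topic, msgpack-encoded payload) tuple
    for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
        data.append(msgpack.unpackb(payload, raw=False))

# data[i] is now a plain dict (e.g., a pupil datum) matching timestamps[i]
print(len(data), len(timestamps))
```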
Hi everyone,
I'm having a bit of trouble understanding the units of measurement used in Pupil Core for gaze data collection. When I look at the gaze data in the CSV file, I can't figure out what the units are. Can someone please let me know if they're in centimeters, meters, or something else?
Hi @user-06c973, pupil diameter is in mm -- see the first sentence in the provided link.
For clarification, there are various files that you can export from Pupil Core when using the Raw Data Exporter. May I ask which file you're asking about?
For instance, pupil_positions.csv has pupil diameters in mm, but gaze_positions.csv doesn't.
Hiya! Apologies if this question has already been asked. Is the Python Simple API compatible with the Core tracker? discover_one_device() is not finding anything (running service on 127.0.0.1:50020). I am trying to build up on the realtime-screen-gaze example. Thanks for your input on this!
Hey Chris, thanks for reaching out. I've deleted the duplicate message from the software-dev channel. Please don't double post as it floods the channel. Also, please be patient for a response.
For Pupil Core, you'll need to use the Network API plugin.
The Network API is based on zeromq and uses msgpack for serialization. Both are open-source technologies and provide libraries for a multitude of programming languages. You can find various Python and Matlab examples in our pupil-helpers repository.
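For example, a minimal gaze subscription with the Network API might look like this (default Pupil Remote port 50020 assumed; Capture must be running with the Network API plugin enabled):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the port that publishes data
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```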
Hi @user-07e923 - thank you so much. Got it.
So, the real-time-screen-gaze is out of scope for Core?
This was written for Pupil Invisible and Neon, which use different methods to stream data in real-time.
Here are some more examples of using the Network API with Pupil Core.
Oh, understood. I was quite excited about that.
Just out of curiosity, are you trying to map gaze onto a screen without using April Tags?
With AprilTags
You could use the Surface Tracker plugin, but subscribe to real-time gaze using the Network API. See this script.
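As a rough sketch of that approach (the connection boilerplate is the same as for any Network API subscription; "Screen" is a hypothetical surface name, and the payload fields follow the pupil-helpers surface example):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port, then subscribe to a named surface
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("surfaces.Screen")  # hypothetical surface name

while True:
    topic, payload = subscriber.recv_multipart()
    surface_datum = msgpack.loads(payload)
    for gaze in surface_datum["gaze_on_surfaces"]:
        if gaze["on_surf"]:
            x, y = gaze["norm_pos"]  # gaze in normalized surface coordinates
            print(x, y)
```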
(to minimise calibration)
Sorry, removed my previous message as I was referring to something else. Yes, that's one of our approaches.
This, however, operates on surfaces detected with a single tag, rather than using a QRect representing the surface generated from a number of tags (for precision). There's heavy reliance on the plugin (i.e. to detect edges etc.);
We want to move away from the plugin.
Hey @user-275583, If I may interject, I want to clarify how the surface tracker works in Pupil Capture to avoid any misunderstanding.
You can use multiple tags to define your surface, such as one or more on each corner of a computer screen. You're not limited to using just one tag.
The surface tracker plugin in Pupil Capture, along with the real-time API, provides the same functionality as the real-time screen gaze package would if it were compatible with Core.
Since you need to use Pupil Capture to calibrate the Core system and the surface tracker plugin is integrated into the Capture software, the real-time screen gaze package is largely redundant for Core.
When we saw that API, it was like "wow, that's it!"
But it's a pity Core is not supported.
Thank you for clarifying this @nmt. I see your point.
Hello everyone, I was setting up the Core system earlier, but I encountered the following error only with EYE1:
EYE1: Could not connect to device! No images will be supplied.
EYE1: No camera intrinsics available for camera (disconnected) at resolution [192, 192]!
EYE1: Loading dummy intrinsics, which might decrease accuracy!
EYE1: Consider selecting a different resolution, or running the Camera Intrinsics Estimation!
Could you please advise me on how to resolve this issue? EYE0 is functioning without any problems.
Hey @user-76ebbf, this seems like one of the eye cameras wasn't detected when you plugged Pupil Core into the computer. Could you tell me which operating system you're using?
Also, can you try installing the Core bundled software on a different device and see if this issue persists?
Troubleshooting Core's eye camera
Hello All, while using my pupil core, one of the eye cams stopped working
It has happened a few times, is there any fix for it?
Thank you for clarifying [email removed] I
Hi @nmt, thank you!
Hi, any suggestions for processor/RAM/drive specification of a Windows/PC computer to use Pupil core with? I have the rare opportunity to purchase a new computer specifically for use with the core device.
Hi @user-e3fdf5. It's always nice when those opportunities come up! I have a few questions to help point you in the right direction.
What will the predominant use case be for this computer alongside Pupil Core? Are you focusing on obtaining gaze data alone, or will you be running other plugins, such as the surface tracker plugin, and possibly other third-party software applications as well?
I ask because just running Core on its own mostly taxes the CPU. However, other plugins can start to get RAM heavy depending on factors like the number of surface tracking markers and the length of the recordings that contain these markers.
Hi @nmt ! That's a great question. I am planning to use Pupil Core with PsychoPy tasks and surface tracking with Apriltags.
For Windows/PC, the key specs are CPU and RAM. I can't give you a specific optimal hardware recommendation, but I can say we have achieved good results with a late generation i7 and 16GB RAM. The RAM becomes more important with longer recordings and a higher number of surface tracking markers and surfaces defined. Have you also considered Mac? When running Pupil Capture on Apple Silicon, we easily achieve Core's maximal sampling rate. For surface tracking, you'd also want a decent amount of unified memory, e.g., 16GB or above.