Hello, I wonder if it is possible to change Sensor settings and Image post processing in eye camera window of Pupil Capture using python, such as changing the brightness, gamma, and contrast?
Hello, when I use the source code on Ubuntu 20.04 to package the software (version v3.6.7), the software is not available after installation. What could be the problem?
Hi @user-fb8431 , may I ask some clarifying questions:
Hi all, I'm working with an eye dataset collected with Pupil Core. I took over this dataset, and just to note before I ask my question, the data were collected without any AprilTags.
I would like to extract the average gaze position during fixations, and overlay these on what the participants were seeing (varying clips). To extract fixations, I used this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb which relies on the circle_3d_normal_x/y/z variables in the pupil_positions.csv file. So now I have the timestamps of these detected fixations. I'm a bit confused, though, about which variable in the gaze_positions.csv file is best for extracting the position of the eye in the coordinates of the scene during these extracted fixations. In particular, what is the difference between 'gaze_normal0_x/y/z' and 'gaze_point_3d_x/y/z'? I read the description of these variables but it is still unclear to me. Are the 'gaze_normal0_x/y/z' variables derived from the 'gaze_point_3d_x/y/z' variables by normalizing them? Which set of variables do you recommend using for extracting gaze during fixations? Many thanks in advance!
Hi @user-57dd7a , the fixation detector in that tutorial is a simple one for learning purposes. You will have better results and an easier analysis experience by using the more robust fixation detector in Pupil Player. It uses the calibrated gaze data from the recording.
The formats of the data files are described in the documentation. Briefly:
- gaze_normal0_x/y/z: This is a vector that points from the center of the eye through the center of the pupil. It is also known as the "optical axis" and essentially tells you the orientation of that eyeball. These vectors are not derived from gaze points, but are estimated from the 3D eye model. Calibration can be conceptualized as a process that transforms these vectors to gaze positions in the world camera image. See here for an alternate explanation.
- gaze_point_3d_x/y/z: This is an estimate of the point in 3D space that is being gazed at by the wearer. Note that the z component can be unreliable beyond about 1 meter distance. Whether you need this value depends on your use case. For many cases, you can just use norm_pos_x and norm_pos_y.

I also recommend taking a general look through our documentation on Pupil Player, as well as the other tutorials, such as this one that should help with correlating gaze and fixations. Feel free to ask any other questions!
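In case it's useful, averaging norm_pos within a fixation window can be sketched like this. The column names gaze_timestamp / norm_pos_x / norm_pos_y are those of the exported gaze_positions.csv; the toy numbers below are made up:

```python
def mean_gaze_during_fixation(samples, start_ts, end_ts):
    """Average normalized gaze position within a fixation window.

    `samples` is a list of (gaze_timestamp, norm_pos_x, norm_pos_y)
    tuples, e.g. rows read from an exported gaze_positions.csv.
    """
    win = [(x, y) for ts, x, y in samples if start_ts <= ts <= end_ts]
    mean_x = sum(x for x, _ in win) / len(win)
    mean_y = sum(y for _, y in win) / len(win)
    return mean_x, mean_y

# Toy data standing in for rows of gaze_positions.csv
samples = [(0.0, 0.4, 0.5), (0.1, 0.5, 0.5), (0.2, 0.6, 0.5), (0.3, 0.9, 0.1)]
gx, gy = mean_gaze_during_fixation(samples, 0.0, 0.2)
print(gx, gy)  # approximately (0.5, 0.5)
```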
Hi all! I have a question about using a RealSense D435 as the world camera, so that I can project the gaze onto the point cloud within ROS; I hope you can give me some advice. I have a legacy setup in the lab that hasn't been touched in ages that I am trying to revive. It is an old Core headset with the original world camera replaced with a D435. I am running it on Ubuntu 20 with the bundled ros-noetic-realsense2. When running Pupil Capture (bundle) I get feeds of the eyes, but the D435 world camera is not detected, although ROS does detect the D435. I found the realsense2 plugin for v2.3+ and some threads in dev with crash behaviors similar to mine (failing to import context from pyrealsense2). I will go fix them as described in the thread. Just in case, I want to double-check here if there are more up-to-date materials? Edit: I thought I had problems with pyrealsense2 on Python 3.8, so I installed 3.6 and the corresponding pyrealsense2, but I am still getting no context in pyrealsense2.
Hi @user-200f75! There have been no updates to the realsense plugin/materials I'm afraid. It's now deprecated. Your best bet is to try and follow the previous thread(s), and use the same software versions (if possible) that those users had.
I have a little follow-up question, if you don't mind. Preface: I have used the old materials to run the RealSense D435 as the world camera - it works now. However, I wanted to change the pipeline from a) read camera with pyrealsense2 plugin -> do tracking -> stream world camera and tracking to robot, to b) read camera with robot -> send world image to plugin -> do tracking -> stream tracking to robot. It all kind of works, except I cannot calibrate: pressing the calibrate button doesn't do anything. My plugin looks like this:
```python
def __init__(self, g_pool, **kwargs):
    super().__init__(g_pool)
    self.rsi = Realsense_interface()

def gl_display(self):
    if self.rsi._recent_frame is not None:
        self.g_pool.image_tex.update_from_ndarray(self.rsi._recent_frame)
        gl_utils.glFlush()
        should_flip = getattr(self.g_pool, "flip", False)
        gl_utils.make_coord_system_norm_based(flip=should_flip)
        self.g_pool.image_tex.draw()
        super().gl_display()
        gl_utils.make_coord_system_pixel_based((720, 1280, 3))
```
where rsi._recent_frame is bgr8 image I get from ROS. Could you advise me how to enable calibration? Is my gl_display messing with starting the calibration?
Also, why is recent_events(events) only giving me "dt" and nothing more in events? Meanwhile, I get everything through the IPC.
Hey, I am working on implementing the Pupil Core eye tracker in Unity. I was wondering if this is possible, or if I should just use the Pupil Capture and recording applications?
if it is possible, where's the documentation for it?
So I ended up finding the documentation. The Pupil Core I am using has a broken left eye camera, and I was wondering if there is a way I could modify the demo code for the gaze demo so that it only uses right-eye data for calibration?
Hey @user-3c7db0, thanks for reaching out! I'm sorry to hear that one of the eye cameras is broken. May I ask if you've already contacted us via email concerning the broken camera? Otherwise, could you describe the camera's situation and why you think it's broken? Then we can discuss repair. Alternatively, you could buy additional cameras separately.
Btw, you can run Pupil Core monocularly by disabling one of the eye cameras. But this might not be recommended, depending on what you're trying to achieve.
The professor in the lab I work for contacted Pupil Labs over email, and they said it was unfixable and would need to be replaced with a new left-eye 200 Hz camera. The glue which held the lens cap to the circuit board came undone, and despite my attempting to fasten it back on with tape, we get a confidence value which jumps around a lot between 0.5 and 0.8.
Good morning. The manual offset correction on gaze that you can apply with a post-hoc calibration (in Pupil Player): what units is it in? Degrees?
Hi @user-a4aa71, which offset correction are you referring to? Are you using a custom plugin?
the one described here https://docs.pupil-labs.com/core/software/pupil-player/ in the "Gaze Data And Post-hoc Calibration" section. If you perform a custom calibration, it is possible to compute the gaze mapper from that calibration file and also apply an offset to correct the data
Ah yes. It applies a linear x,y offset to the gaze position. Gaze is given in normalised world image frame coordinates, so the units are image frame width and height.
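If it helps, converting that normalized offset into pixels is just a multiplication by the world frame resolution. A tiny sketch (the 1280x720 resolution here is only an example):

```python
def norm_offset_to_pixels(dx_norm, dy_norm, frame_width, frame_height):
    """Convert a normalized gaze offset to pixels in the world image."""
    return dx_norm * frame_width, dy_norm * frame_height

# Example: a +0.05 horizontal, -0.02 vertical offset on a 1280x720 world frame
dx_px, dy_px = norm_offset_to_pixels(0.05, -0.02, 1280, 720)
print(dx_px, dy_px)  # approximately (64.0, -14.4)
```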
Thank you! It's more clear now
We believe monocular should work for the data we are trying to collect. While I have been able to get it to work monocularly for tracking and recording using Pupil Capture and Player, I'm seeking to use it with Unity. I installed the hmd plugin from GitHub and was messing around with that a bit yesterday; however, I noticed it only supports binocular calibration and doesn't calibrate when I only have one camera plugged in.
Ah, so you'd like to use hmd-eyes with your Core headset. May I ask what your goals are / what you're trying to achieve?
For context, hmd-eyes is designed for using head mounted displays (think VR headsets). It assumes the use of binocular data.
Hi again! We've verified that motion capture interferes with eye tracking: the flashing IR sources from the mocap cameras disrupt the Core's ability to calibrate. To solve this, we're wondering whether there's a way to precisely time the start-up of the cameras so we can predict when frame capture will occur?
Hi @user-40931a. This was your setup, correct? https://discord.com/channels/285728493612957698/285728493612957698/1210025943956062268
The Optitrack camera with the IR strobe is very close to and pointing directly at the Core headset. My first question is, could you change the setup so it's not so close to the wearer?
Secondly, did you try turning down the eye camera exposure time in the eye window settings? The images you shared already look overexposed, which I think exacerbates the periods when the strobe is on, causing the pupil detection on our system to fail.
It's not really possible to precisely control the startup of our cameras in the way you describe. However, I can think of a few workarounds. But I'd like to exhaust the above suggestions first, as I believe with the right positioning and exposure settings, you could mitigate the strobe effects.
Those suggestions have both been tried now. The cameras are in their final positions, as far from the eye tracker as possible, and the exposure is the lowest it can be while still being able to pick anything up in the darkness of the recording booth. We also tried turning the flashing off for just the cameras closest to the screen, and adding stable IR sources pointing at the person; both interfere with motion capture enough that neither looks like a great solution. If you have any more ideas, we would be glad to try them.
Theoretically, it's conceivable to send a TTL signal to our system via our real-time API. But I suspect the network latency would be too great to accurately sync the trigger arrival time with the frequency of the strobe.
It could be worth a try. But before going down that route, have you considered post-filtering the noisy data?
That is, calibrate the eye tracker under normal conditions (i.e., no IR strobe), and then make your experimental recording with the mocap system running (with strobe).
You would end up with normal data interspersed with noisy jitters when the pupil detection failed periodically. With some signal processing, it would be possible to filter out the noisy intervals and interpolate the gaps.
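As a rough sketch of that filtering idea (the 0.6 confidence threshold is an arbitrary example; Pupil's confidence values range from 0 to 1):

```python
def interpolate_low_confidence(timestamps, values, confidence, threshold=0.6):
    """Replace low-confidence samples by linear interpolation between the
    nearest high-confidence neighbours; hold the edge value when a gap
    touches the start or end of the recording."""
    good = [i for i, c in enumerate(confidence) if c >= threshold]
    out = list(values)
    for i, c in enumerate(confidence):
        if c >= threshold:
            continue
        left = max((j for j in good if j < i), default=None)
        right = min((j for j in good if j > i), default=None)
        if left is not None and right is not None:
            w = (timestamps[i] - timestamps[left]) / (timestamps[right] - timestamps[left])
            out[i] = values[left] + w * (values[right] - values[left])
        elif left is not None:
            out[i] = values[left]
        elif right is not None:
            out[i] = values[right]
    return out

# A noisy middle sample (confidence 0.1) gets bridged by its neighbours
ts = [0.0, 1.0, 2.0]
gaze_x = [0.2, 0.9, 0.4]
conf = [0.95, 0.1, 0.92]
smoothed = interpolate_low_confidence(ts, gaze_x, conf)
print(smoothed)  # middle value becomes approximately 0.3
```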
Hi, @user-200f75 - nothing in that snippet jumps out at me. One thing to try, though, would be to disable your plugin and try calibrating, to see if the plugin is indeed causing the problem.
Your workflow is a little confusing/unclear to me, but I would also consider whether this is something that has to be done in real-time, or could you collect all the data and then process it post-hoc?
Hi @user-cdcab0, I can calibrate using the old pyrealsense2 plugin - there are some problems with recording, but the calibration looks OK-ish. Afterwards I can swap plugins and use my pipeline. However, I hope you can help me resolve the problem so that I don't have to swap plugins. Regarding my pipeline: in short, the reason I prefer to send the image from ROS to the plugin to Core is point cloud registration. It needs to be real-time (unrelated to the point clouds, but still). My current problem is reading the gaze position on the world image using the plugin. The recent_events(events) argument only contains "dt". From the Core documentation, I thought events was supposed to contain everything that is sent through the Network API? Edit (follow-up): I have checked with a simple dummy plugin that prints the contents of events, in both the bundle and from-source versions of Capture, and in both the only key available is "dt". Is it actually by design? Edit 2 (solved): the plugin order was 0.0; 0.5 did it.
Hi everyone! My working group (clinical research, neurology) and I have a problem we would like to open for discussion. It regards the 3D eye model and pupil size in mm. We did a study with patients and older control subjects and are mainly interested in pupil size. The patients have impaired eye movement, so in some cases the 3D model did not fit well and calibration was sometimes not very accurate. We froze the model before recording the actual data. Data was recorded with the LSL LabRecorder, so there are no eye or world videos of the actual recording. However, we did record the calibration with Pupil Labs (with videos) before starting the experiment. We are currently in the data analysis stage and would, if possible, prefer to use pupil data in mm. Unfortunately, we discovered that in some recordings the 3D pupil size is bigger or smaller due to a badly fitted eye model. So our question is: is there a way to refit the eye model or correct the badly fitted model, potentially from the calibration video, and then apply the new model to the LSL data stream? Thanks!
If you had the full Pupil Core recording, you could recompute data with an adjusted model and we might be able to find a way to synchronize the data with your other LSL recorded data... but without the Pupil Core recording, I'm afraid you won't be able to recompute anything
I see, thank you for the reply. I expected that. The following study also records with both Pupil Labs and LSL simultaneously, but at first we were worried that would compromise the CPU and therefore the data. Since we froze the eye model before calibrating, would it be a possibility to adjust the model for the Pupil Labs recording we did during calibration and then transfer the new model to the following LSL recording?
> at first we were worried that would compromise the CPU and therefore the data

That's not an unreasonable concern, but I expect some pilot testing would give you a more definitive answer for your hardware/environment
> would it be a possibility to adjust the model for the pupil labs recording we did during calibration and then transfer the new model to the following LSL recording?

I'm not entirely sure what you're asking, but a frozen model is sensitive to movement of the headset on the head (i.e., it will generate erroneous data). Removing and replacing the headset (or even just adjusting it, or slippage), for example, will require you to unfreeze the model, fit a new one, and then freeze again.
Sorry, you're right, that was a bit confusing. That we froze the model shouldn't actually matter in this regard. What I wanted to say is that our experimental setup was very stable: seated position and a fixation cross in front of the participant. I understand that, due to the lack of eye videos from the actual recording, fitting a new eye model for the whole time course is impossible. However, we recorded the data with eye models frozen before calibration, so for the sessions we need to correct, we would basically need one new, well-fitted eye model from right before the recording (based on the calibration recorded with Pupil Labs, with eye videos) that we could then apply to the following LSL recording as well, as if it were frozen. Of course, that wouldn't account for slippage, yes. We would not need correct gaze positions; we are only interested in pupil size. I hope I managed to describe my idea more clearly this time? Is the idea technically imaginable and realizable, or would you advise us to just fall back on 2D data and data normalization?
Hi everyone! We are DIY-ing our own headset, and although we have manually updated the drivers for our camera, it still shows as 'unknown' when running the source code. How can we resolve this issue? Additionally, how can we modify the name of the camera displayed on the interface?
Hey Kimmy! Nice to hear you're making your own DIY Pupil Core. Regarding the unknown source, do you know if your cameras are UVC compliant? https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Yes! It complies with the UVC standard.
Which platform are you using?
When I select unknown usb, the code crashes
ubuntu 22.04
Hello everyone, when developing with the source code, is there any way to have a custom camera show up with a proper name in the UI, like the official cameras do?
Hi @user-d6701f, I'll step in briefly for @user-cdcab0. If you've only recorded pupil size in mm via LSL and don't have the eye videos from the experiment, I'm afraid there's no way to do what you suggest. You would need the eye videos (and ideally the complete recording) for any post-hoc pupil detection and model re-fitting.
Alright. Thanks for the reply @user-4c21e5
Good evening, this is going to be a very stupid question, but I can't find an explanation for this: I performed a single-marker calibration in a room, having the subject perform a choreography with the head, and then a validation by fixating known points, and I got very bad results (accuracy of 3.2, absurd) despite the pupil always being detected and the marker always visible in the world camera. There were no changes in brightness due to head movement. I had the same test done in another room, slightly brighter, and got far better results. I have done many tests and cannot understand why it comes out well in one place and drastically worse in another. Does the lighting affect it that much? Is it better for the room to be as bright as possible?
Hi, @user-a4aa71 - lighting can certainly have a dramatic effect like this. The single-marker choreography, especially, is going to be prone to motion blur in darker environments (due to a longer exposure time). This will negatively impact marker detection
Thanks! Is there any way to see that the marker is detected worse in either case? I also pose another question: do subjects with light eyes create more problems for the tracking? I tested a subject with blue eyes and the eye tracker detected the pupil in portions of the iris... I could not find any parameter setting (brightness, contrast...) that gave me good tracking.
I don't know of a quick/easy way to objectively visualize marker detection, but subjectively these types of problems will be visible in the video. If you can share a recording, we might be able to give some tips.
Regarding eye color - the eye cameras capture light in IR, so eye color differences really should have little impact
Hi everyone, I would like to confirm: Is the 2d pupil size equal to the long axis of the 2d pupil ellipse?
Hello everyone, I have a problem; the detailed description is here: https://github.com/pupil-labs/pupil/issues/2352
Hi, @user-fb8431 - this type of error (missing modules after freezing with pyinstaller) is somewhat common with pyinstaller and isn't really specific to our software. Pyinstaller has a system in place to correct for this, but it sometimes breaks on new versions of Python or specific package dependencies.
You can sometimes work around it by listing the missing module as a "hidden import". If you look in the .spec file, you'll see that we do exactly that for several modules already.
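For illustration, hidden imports are declared in the Analysis section of the .spec file. The module names below are placeholders, not the actual modules missing from your build:

```python
# Excerpt of a PyInstaller .spec file (module names are illustrative only)
a = Analysis(
    ["main.py"],
    hiddenimports=[
        "sklearn.utils._typedefs",   # example: a module PyInstaller missed
        "scipy._lib.messagestream",  # example
    ],
)
```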
Another possible workaround is to downgrade the problematic package or your Python version.
Hi @user-d6701f ! That's correct! The 2D diameter is defined by the greater ellipse axis. You can find its definition in the pupil_detectors.
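As a tiny sketch of that definition (the tuple stands in for the two ellipse axes of the 2D detector result; treat the exact field layout as an assumption and check pupil_detectors for the authoritative definition):

```python
def pupil_diameter_2d(ellipse_axes):
    """Return the 2D pupil diameter: the larger of the two ellipse axes.

    `ellipse_axes` is a pair of axis lengths in pixels, as found in the
    ellipse data of the 2D detector result (assumed layout).
    """
    return max(ellipse_axes)

print(pupil_diameter_2d((38.5, 42.0)))  # the larger axis, 42.0
```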
Hi @user-d407c1 , perfect, thanks!!
Hi @user-fb8431 ! Thanks for reporting this issue. While we might need to see if we can reproduce it, is there any reason why you are trying to build it yourself rather than using the already prebuilt bundles?
Yes, I need to modify some of the code to fit our tasks.
Does it work when running directly from source, i.e. https://github.com/pupil-labs/pupil?tab=readme-ov-file#installing-dependencies-and-code ?
Yes, it runs correctly when started directly from source.
I see you solved one of your challenges - nice work!
Regarding the calibration, it will be a little difficult for me to help, since we don't really support that hardware anymore, and I don't have a realsense camera or experience with it. Under different circumstances we might offer a custom consultancy package, but I'm not sure that we'd even be able to offer that in this case. The best I can probably do is look over your code and maybe offer some tips for debugging
Dear @user-cdcab0, thank you for your help. Below is my plugin, stripped down - all it does is start a ROS node inside the plugin, receive the world camera image from ROS, convert it to a cv2 bgr8 image, and draw it. When comparing with the pyrealsense2_backend, I notice the following: 1) I don't write the image to recent_events(events), 2) I don't have the entire UI shenanigans, 3) I update the image from an ndarray instead of a YUV buffer, 4) I get the "world - video_capture.uvc_backend: Could not connect to device! No images will be supplied." error.
```python
import logging
import sys
from ctypes import *

import cv2
import numpy as np
from OpenGL.GL import *
from OpenGL.GLU import *
from pyglui import cygl
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

import gl_utils
from av_writer import MPEG_Writer
from camera_models import Camera_Model
from plugin import Plugin


class Pupils_ROS(Plugin):
    uniqueness = "by_base_class"
    order = 0.5
    icon_chr = chr(0xE412)
    icon_font = "pupil_icons"

    def __init__(self, g_pool, **kwargs):
        super().__init__(g_pool)
        self.rsi = Realsense_interface()

    def gl_display(self):
        if self.rsi._recent_frame is not None:
            self.g_pool.image_tex.update_from_ndarray(self.rsi._recent_frame)
            gl_utils.glFlush()
            should_flip = getattr(self.g_pool, "flip", False)
            gl_utils.make_coord_system_norm_based(flip=should_flip)
            self.g_pool.image_tex.draw()
            super().gl_display()
            gl_utils.make_coord_system_pixel_based((720, 1280, 3))


class Realsense_interface:
    def __init__(self):
        rospy.init_node("realsense_interface", anonymous=True)
        # Create the bridge before subscribing, so the callback can't fire
        # before self.bridge exists.
        self.bridge = CvBridge()
        self._recent_frame = None
        rospy.Subscriber("/camera/color/image_raw", Image, self.image_callback)

    def image_callback(self, msg):
        self._recent_frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
```
I would think that you'd need to inherit from video_capture.Base_Source rather than plugin.Plugin, no?
No idea; the documentation on plugins is pretty scarce. Will try. Edit: could you elaborate on the potential inheritance options?
Well you're replacing the Realsense2_Source plugin, right? That plugin inherits from Base_Source as do other video capture plugins
Is it? I see: class Realsense2_Source(Old_Base_Source): and class Old_Base_Source(Plugin):
Hello everyone, I have a question, when we use Pupil Core, if we use the single-purpose method, what is the approximate level of accuracy?
Hi @user-fb8431 , when you say "single-purpose method", do you mean the Single Marker Calibration Choreography?
Hi all, I am trying to develop scripts to analyse pupillometry data using the Core system. I have created a custom script to pull csv files directly from the pldata files.
This process is taking an incredibly long time, and I wondered if anyone had an open source script that transposes the pupil data into a more usable format?
Thank you in advance!
Hi everyone,
I'm having a bit of trouble understanding the units of measurement used in Pupil Core for gaze data collection. When I look at the gaze data in the CSV file, I can't figure out what the units are. Can someone please let me know if they're in centimeters, meters, or something else?
Hi @user-06c973, pupil diameter is in mm -- see the first sentence in the provided link.
For clarification, there are various files that you can export from Pupil Core when using the Raw Data Exporter. May I ask which file you're asking about?
For instance, pupil_positions.csv has pupil diameters in mm, but gaze_positions.csv doesn't.
Hey @user-789ddb, thanks for reaching out! Have you seen the community GitHub page yet? There are a few post-hoc analysis scripts, like extracting pupil size.
This tutorial concerning pupillometry might also be of interest to you.
Perhaps you could also clarify why you're writing custom scripts to export data, instead of using Pupil Player?
Hey wee,
Thanks for the info!
I have thousands of files and don't want to use pupil player to export as the process (as far as I am aware) requires me to load an individual file into the player and then export to csv. This would take many, many hours. Instead, I have managed to develop a JS script which reads the pldata files, exporting to json. I then convert to csv to analyse in R.
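For reference, a minimal Python sketch of reading a .pldata file follows. It assumes the msgpack layout used by Pupil's file_methods.py, where each record is a packed (topic, payload) pair with the payload itself msgpack-encoded; please verify this against your own files:

```python
import io

import msgpack  # third-party: pip install msgpack


def iter_pldata(fh):
    """Yield (topic, datum) pairs from a .pldata byte stream.

    Assumes each record is a msgpack tuple of (topic, packed_payload),
    where packed_payload is itself msgpack-encoded (the layout used by
    Pupil's file_methods.py - verify against your own recordings).
    """
    for topic, packed in msgpack.Unpacker(fh, use_list=False, raw=False):
        yield topic, msgpack.unpackb(packed, raw=False)


# Round-trip demo with an in-memory "file" standing in for pupil.pldata
buf = io.BytesIO()
datum = {"timestamp": 1.25, "diameter": 41.0, "confidence": 0.98}
buf.write(msgpack.packb(("pupil.0", msgpack.packb(datum))))
buf.seek(0)
records = list(iter_pldata(buf))
print(records[0][0])  # the topic of the first record
```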
Hiya! Apologies if this question has already been asked. Is the Python Simple API compatible with the Core tracker? discover_one_device() is not finding anything (running service on 127.0.0.1:50020). I am trying to build up on the realtime-screen-gaze example. Thanks for your input on this!
For Pupil Core, you'll need to use the Network API plugin.
The Network API is based on zeromq and uses msgpack for serialization. Both are open-source technologies and provide libraries for a multitude of programming languages. You can find various Python and Matlab examples in our pupil-helpers repository.
Hey Chris, thanks for reaching out! I've deleted the same message from the software-dev channel. Please don't double-post, as it floods the channel. Also, please be patient for a response.
Hi @user-07e923 - thank you so much. Got it.
So, the real-time-screen-gaze is out of scope for Core?
This was written for Pupil Invisible and Neon, which use different methods to stream data in real-time.
Here are some more examples of using the Network API with Pupil Core.
Oh, understood. I was quite excited about that.
Just out of curiosity, are you trying to map gaze onto a screen without using April Tags?
With AprilTags
You could use the Surface Tracker plugin, but subscribe to real-time gaze using the Network API. See this script.
(to minimise calibration)
Sorry, removed my previous message as I was referring to something else. Yes, that's one of our approaches.
This, however, operates on surfaces detected with a single tag, rather than using a QRect representing the surface generated based on x tags (for precision). There's heavy reliance on the plugin (i.e., to detect edges etc.);
We want to move away from the plugin.
Hey @user-275583, If I may interject, I want to clarify how the surface tracker works in Pupil Capture to avoid any misunderstanding.
You can use multiple tags to define your surface, such as one or more on each corner of a computer screen. You're not limited to using just one tag.
The surface tracker plugin in Pupil Capture, along with the real-time API, provides the same functionality as the real-time screen gaze package would if it were compatible with Core.
Since you need to use Pupil Capture to calibrate the Core system and the surface tracker plugin is integrated into the Capture software, the real-time screen gaze package is largely redundant for Core.
When we saw that API, it was like "wow, that's it!"
But it's a pity Core is not supported.
Thank you for clarifying this @user-4c21e5. I see your point. Thanks for clarifying.
Hello everyone, I was setting up the Core system earlier, but I encountered the following error only with EYE1:
```
EYE1: Could not connect to device! No images will be supplied.
EYE1: No camera intrinsics available for camera (disconnected) at resolution [192, 192]!
EYE1: Loading dummy intrinsics, which might decrease accuracy!
EYE1: Consider selecting a different resolution, or running the Camera Intrinsics Estimation!
```
Could you please advise me on how to resolve this issue? EYE0 is functioning without any problems.
Hey @user-e91538, it seems like one of the eye cameras wasn't detected when you plugged Pupil Core into the computer. Could you tell me which operating system you're using?
Also, can you try installing the Core bundled software on a different device and see if this issue persists?
Troubleshooting Core's eye camera
Hello All, while using my pupil core, one of the eye cams stopped working
It has happened a few times, is there any fix for it?
Thank you for clarifying [email removed] I
Hello @user-07e923, thank you for your response regarding the equipment issue. Following your advice, I reattached the camera connector, but EYE1 still would not start. Additionally, the connector for EYE1 became very hot. I have purchased two units, and the other one is working fine under the same conditions. Therefore, I believe it is likely a case of an initial defect.
Hi @user-e91538! I'll jump in for @user-07e923. Can you please open a ticket in the troubleshooting channel, and we can coordinate there?
Hi @user-4c21e5, thank you!
Hi, any suggestions for processor/RAM/drive specification of a Windows/PC computer to use Pupil core with? I have the rare opportunity to purchase a new computer specifically for use with the core device.
Hi @user-e3fdf5! It's always nice when those opportunities come up! I have a few questions to help point you in the right direction.
What will the predominant use case be for this computer alongside Pupil Core? Are you focusing on obtaining gaze data alone, or will you be running other plugins, such as the surface tracker plugin, and possibly other third-party software applications as well?
I ask because just running Core on its own mostly taxes the CPU. However, other plugins can start to get RAM heavy depending on factors like the number of surface tracking markers and the length of the recordings that contain these markers.
Hi @user-4c21e5 ! That's a great question. I am planning to use Pupil Core with PsychoPy tasks and surface tracking with Apriltags.
For Windows/PC, the key specs are CPU and RAM. I can't give you a specific optimal hardware recommendation, but I can say we have achieved good results with a late generation i7 and 16GB RAM. The RAM becomes more important with longer recordings and a higher number of surface tracking markers and surfaces defined. Have you also considered Mac? When running Pupil Capture on Apple Silicon, we easily achieve Core's maximal sampling rate. For surface tracking, you'd also want a decent amount of unified memory, e.g., 16GB or above.