Hi, I'm looking at using a DIY camera for eye tracking (HD-6000). The camera is picked up by Windows but not the Pupil Capture software. I assume it's got something to do with the default settings not being 200fps 192×192. Is there a driver or setup/config procedure to let it get recognised? Thanks
Hello. If you have eye 0 or eye 1 detection enabled in the main window, then there should be a window open on your system titled "Pupil Capture - eye 0" or "eye 1". In this eye window, go to the "Video Source" tab in the right vertical buttons. In the video source pane there is a button for manual camera selection. To use any third-party camera other than the default Pupil Labs cameras, you need to have this enabled, then select your camera from the drop-down menu. I just tried this for you and it works very well, but make sure the IR illumination on the eye is strong, because the illumination is used for calibration, gaze tracking, and dark pupil detection. If the IR is strong enough, you should see the iris as a bright grey colour and the pupil dark black, hence the term dark pupil detection.
Hi, please follow steps 1-7 of these instructions to manually install drivers for your camera https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Hello everybody! Just joined the channel. I'm an interaction and neuroergonomics researcher and have been using your Core and Invisible devices for about 2 years now. Recently, I discovered an issue with one of our Core recordings. I thought I'd take this as an opportunity to join the channel and discuss it with others.
The issue I experienced is that the pupil detection algorithm of Pupil Capture lost the pupil momentarily, even though there were no eye movements; the algorithm then continued normal operation after a couple of fixations. Those couple of fixations were mapped incorrectly as a result.
What we did to fix this in the lab was to post-process the recordings in Pupil Player by running post-hoc gaze detection, and voilà! The issue was fixed. This is very curious to us as a lab because we had no idea the online (Capture) and post-hoc (Player) pupil detection algorithms might be different. Given the nature of the issue with the recorded pupil detection, and because the participant's eye did not move, it must be the algorithm that is somehow different, right? The detected pupil is almost perpendicular to the camera, so it's not the camera angle, and the detection parameters were noted and used identically in the offline analysis.
Hey, welcome to the channel! I am happy to see that you are using your experience to help others before your first question has even been answered. Much appreciated!
Capture and Player use the same algorithms for pupil detection and gaze mapping. The only possible difference that I see is the pupil data matching/pairing process that is required for gaze mapping. You can read more about it here https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching The order of the pupil data might have been different in realtime compared to the post-hoc case.
Hello everyone, I'm new to using Pupil Core and am trying to conduct an experiment using the Core system in a real-world environment. I've decided to use the single target calibration method. Despite numerous attempts to calibrate, both on myself and other people, the system seems to be miscalibrated, at least as far as I can tell from the recordings, as the gaze circle tends to deviate from the actual targets used for the test. In some cases this deviation also increases as the target moves further away from the center of view. I was mainly wondering if there was something I might be doing wrong in the calibration process.
Hi Daniel, welcome to the community!
Without further information, my first guess would be that the pupil detection needs to be improved. Tuning it requires a bit of practice and can sometimes be difficult to get right if you are in an uncontrolled environment.
If you like you can send a Pupil Capture recording of you performing a calibration to [email removed] After the review, we will be able to give more concrete feedback.
Thanks for the help! I'll try and send the ones I have over when I can make it to the lab again. Would the recordings of the brief tests I did after each calibration also be useful or is just the calibration data needed?
We would primarily need a recording that includes a calibration. Other recordings can be helpful, too.
Hey everyone, I have a question regarding batch exporting. Is there a way to export data from a lot of participants without manually having to open and export each recording in Pupil Player? I've seen there's a batch export plugin (https://github.com/tombullock/batchExportPupilLabs) which I've tried to install, but it's not loading in Pupil Player. Could you point me in a direction? Thanks a lot!
Hi! This is meant to run as a stand-alone script, not as a Player plugin. Similarly, see the extract_*.py scripts in the Post-hoc Analyses section.
Hi everyone! I conducted an experiment using Pupil Core in a real work environment. I am interested in the time participants are looking at a specific projected pattern on the wall. I just discovered that there is a plugin 'Surface tracker' which can provide information on how often a person is looking towards a specified area. However, for this, you need to place markers in the real environment. Unfortunately, I did not do this before the experiment. Does anyone know whether it is possible to place these markers afterward in the video and still be able to add a surface using the surface tracker plugin?
Hi, unfortunately, there is no method to add the markers post-hoc. It might be easier to manually annotate the scene video frames during which the subject looked at the pattern using the Annotation plugin.
alright, thank you!
Do you think it is possible to edit the world video and place markers in it, to solve the issue of not having placed markers in the real environment? If I want to edit this video, what properties does it need so that I can still open the recording in Pupil Player? And do I need to change the filename in the code somewhere?
Pupil Player expects the following:
- everything listed under "Timestamp Files" and "Video Files" here: https://docs.pupil-labs.com/developer/core/recording-format/
- each packet has one frame, and each packet's and frame's pts are the same
Thank you!
May I ask what your approach for editing the video is?
I want to try motion tracking in Adobe After Effects, but I'm not sure whether this is possible for such a big video (7 GB). My thinking is: if I can place the marker in the video and it moves along with my target area, it can be detected with the surface tracker. I really hope this can be a solution to my problem, but I don't have much experience with video editing.
Interesting approach! Please let us know how it ends up working for you.
I will!
Hi @papr , would you mind answering a few questions about bundle adjustment for me? I know that the 3D model fitting process estimates the eye position within eye-camera space. The next two steps are to estimate the position and orientation of the eye cameras within world camera space. I know that an explicit value is provided / hardcoded for eye camera position, and eye camera orientation is estimated through bundle adjustment. My question is - does the normal player pipeline (most recent version) also refine camera position in world space during bundle adjustment?
For the normal Core headset, we fix the translation and let the bundle adjustment optimize the rotation of the eye cameras. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L71-L84
We do the same for HMD bundle adjustment https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L191-L204
The issue is that we are seeing an offset in our Blender model of the pupil data that suggests an error in eye positions/orientations within world camera space.
Here's an example of a nasty offset.
That was my understanding. So, I guess this is a bug on our end.
That gaze cursor is pointing where it should (at the gaze targets)
Note that the eye model locations may change over time if the model is not frozen.
Yes, we are accounting for that. Thank you!
Hi everyone! We have a Pupil Core in my team, and I am really interested in using it, but is there any way to turn it into a wireless device? For example, would some Bluetooth adapter be sufficient? Or a Raspberry Pi communicating through the Network API?
People have used this to stream video from a RPi to a computer running Capture https://github.com/Lifestohack/pupil-video-backend/ Note: It has some known issues with camera intrinsics and I would consider it experimental.
Hello everyone. I am using Pupil Core connected to a Mac with a USB-C cable. I ran into an issue where Pupil Capture can detect the device but no image is captured from either the world view or eye0/1. Any solutions? Thank you!
Hi @user-219de4. Are you running on macOS Monterey? If so, you will need to start the application with administrator rights. sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5
Thank you so much, Neil! It works now! I ran into a similar issue with Windows 10 though; could it be restricted by admin access as well?
Hi Papr, I just wanted to clarify: do you recommend modifying the blink data extraction script to get the gaze data?
Hi, no, I was referring to the extract pupil data script.
Hi @nmt, I'm new to Pupil Labs eye trackers and need some help. I have a question about the Gaze Datum Format (https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format). Which value is the current gaze location on the world camera image? I tried 'norm_pos' but it seems not to be the correct one.
@user-a4bd50 norm_pos is the normalized gaze coordinate relative to the world camera's view. If you are looking at the surface data format, gaze_on_surfaces: norm_pos is the normalized gaze coordinate relative to the surface.
Does 'norm_pos' have to relate to a surface? I've calibrated with the Pupil Capture application.
Hi wrp, thanks for your reply. The norm_pos I get is quite weird: it changes very little (about 0.1) when I look from the left edge to the right edge. I use the example from here (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone) and only the topic was changed to "gaze.3d.01.". My code is below:
```python
# ...continued from above
# Assumes `sub_port` to be set to the current subscription port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('gaze.3d.01.')  # receive all gaze messages

# we need a serializer
import msgpack

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    # print(f"{topic}: {message}")
    print(message.get(b'norm_pos'))
    print('\n')
    sleep(0.5)
```
For more details: I stared at one point and shook my head. x and y in norm_pos should vary within [0, 1], but both of them only vary by about 0.01. Nothing changes even if I mask the eye cameras.
Hi wrp, thanks for your help. The problem seems to be caused by sleep(); things work well after I delete it. Why does it cause this? Is the data from subscriber.recv_multipart() not realtime?
This is due to subscriber.recv_multipart() not returning the most recent datum but a buffered value. subscriber has an internal queue. If this queue is not processed quickly enough, new items will be dropped and old items will remain in the queue until processed. The goal should always be to call this function as often as possible.
Hi @user-a4bd50. I'd recommend using this example script to filter gaze on surfaces: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
Hi there, I'm facing issues with Core calibration and eye recognition. I don't know why, but I cannot obtain stable eye tracking. If I stare at a point on screen, the virtual pointer (i.e. the pink one) is far away from it. Can someone tell me what's wrong? If this is not the right place to post this, can you please tell me where I can get support? Thank you!
To use the monocular gaze plugin, do I just download the GitHub file and drag it into the plugins folder? Do I do it for the capture AND player folder or just one or the other?
For realtime calibrations, add it to the Capture folder. For post-hoc calibrations, add it to the Player folder. After restarting the app, the dual-monocular options should show up next to the regular 2D and 3D options.
Hi @user-3a4a19! Please share a recording that contains a calibration sequence with [email removed] and we will provide some feedback.
Hi @nmt thank you, I will send the recording hoping for some feedback
Hi! Is it possible to use the Pupil Labs software with other industrial cameras?
Yes, but they need to fulfil very specific requirements to work out of the box. See this previous message on that topic https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
On Windows, this issue is related to an incorrect driver installation. Please see https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Thanks for your advice. Will keep you updated for further help.
If my cameras are good, how can I set up the software to work with them?
Are you on Windows?
yes!
Please follow steps 1-7 from these instructions to install the drivers https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md Afterward, the cameras should show up in the video source menu.
thank you
Ok, thank you @papr! And you are not aware of any Bluetooth solution?
Bluetooth does not have sufficient bandwidth to transmit the scene and eye video streams.
Hi, I am not able to get good gaze positions (when a person focuses on a certain point, the gaze point shown by the eye-tracker doesn't reflect at that certain point), even after calibrating several times. Am I missing something in the process?
The most likely reason for that is that the pupil detection is not stable. See https://docs.pupil-labs.com/core/#_3-check-pupil-detection and https://docs.pupil-labs.com/core/best-practices/#pye3d-model for reference
I am following these rules, but could it be because I calibrate on my laptop while my work environment is different?
That should work sufficiently well in most cases. For concrete feedback, please share a recording of you or another user calibrating. Please send the full recording to data@pupil-labs.com
There are gaze timestamps and world timestamps... Which should be used when calculating duration?
Is it possible to save surface definitions in the Surface Tracker plugin? I want to use the same environment with different participants so that I don't need to define the surface separately each time (I'm not changing the markers).
Hello! I am new to Lab Streaming Layer (LSL) and interested in learning about its synchronization over multiple streams. For your setup, did you have two Pupil Core devices record and stream to separate stations, then unify them onto shared LSL timestamps at another location? Or did you record and save all signals at the same site? For the other data streams, did you have another camera stream (e.g., a webcam)? How did you set up the additional camera input in LabRecorder? Thank you in advance for your great help!
Hi, the Pupil Capture LSL Relay plugin will change Pupil Capture's clock to match the LSL clock. That means you can match any natively recorded data to the LSL-recorded data post-hoc. Therefore, there is no need to transfer video data via LSL. I have also not come across a good software solution for that yet.
@papr any thoughts on this?
Question here. I've found in the chat history on Discord that when Pupil Capture closes normally, the settings are saved.
That's great. But what I'd really like to do is to save out two different default settings for the different experiments that I'm doing. Searching here and in the docs, I haven't found anything like this. Does anyone know if such a tool already exists?
Hi @user-19bba3, thanks for searching the chat history before diving into questions!
All settings for Pupil Capture and Pupil Player are saved in folders in your user directory: pupil_capture_settings and pupil_player_settings, respectively. One low-tech solution could be to set up Capture as you like for experiment 1 and move that folder to another location, then set up Capture as needed for experiment 2 and move that folder to another location. You can then swap the desired settings folder in/out of your user dir as needed (the folder needs to be named pupil_capture_settings). The same applies to pupil_player_settings.
There might be a more sophisticated way to do this, but this would be one such way to achieve your goals.
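The folder swap above can also be scripted. The sketch below is my own (the function names and the profile-store location are invented for illustration); it assumes Pupil Capture is closed while the folders are swapped:

```python
import shutil
from pathlib import Path


def save_profile(settings_dir, store, name):
    """Snapshot the current settings folder into the profile store."""
    dst = Path(store) / name
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(settings_dir, dst)


def activate_profile(settings_dir, store, name):
    """Replace the active settings folder with a saved profile.

    Run only while Pupil Capture is closed. The currently active
    settings are discarded, so save them first if needed.
    """
    settings_dir = Path(settings_dir)
    if settings_dir.exists():
        shutil.rmtree(settings_dir)
    shutil.copytree(Path(store) / name, settings_dir)
```

For example: `save_profile(Path.home() / "pupil_capture_settings", Path.home() / "capture_profiles", "experiment1")`, then later `activate_profile(...)` with the same arguments to restore it.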
Question: please tell me the required PC specifications for Pupil Core.
Why is it that the data from gaze_positions_on_surface are coming out negative or off in my calculations ? The gaze_positions (not on the surface) came fine. I can't use the latter dimensions as 0 and 1
Could you please provide an example?
After converting the x and y positions from the normalized ranges X (0,1) and Y (0,1) to centimeters (Y: 52.0192 cm, X: 28.702 cm), I got negative and offset values even though they are within the defined surface.
From the shared screenshots, it looks like there is an issue with your conversion algorithm.
It was Okay when I used the data from gaze_positions... Not the one from surface
Your screenshot showed positive y values before conversion and negative values afterward. That means the error lies in the conversion. Without knowing the implementation, I can't make more concrete recommendations.
Hi everyone! I'm preparing an experiment with potentially many areas of interest (approximately 60, or even more), using Pupil Core. My question: is there a maximum number of AOIs that can be implemented in one work environment at once, or is it unlimited? Maybe it depends on the maximum number of AprilTags that can be used at once? We will probably use 3 or 4 tags per area. My second question concerns post-hoc AOI editing. While adding a new AOI or editing it, Pupil Player lags and starts to run slowly. Is there any parameter that could be modified to improve this? Thank you for your answer!
Hey! While there is no software limit on the number of surfaces, they require memory and computational resources. That said, the software was not designed to scale to so many surfaces.
The lagging happens especially on Windows due to an underlying implementation detail on how subprocesses are spawned. You should see less lagging on macOS or Linux.
I would also recommend reducing the number of AOIs if possible. For example, if you have multiple coplanar surfaces, you can replace them with one big surface and map its gaze to the sub-AOIs post-hoc.
Very well, thank you for your answer!
Hi, we are actually a distributor for Pupil Labs here in Malaysia. Our clients want to know which analysis software is compatible with the Pupil Labs system. As I understand it, we can get raw data by exporting results to spreadsheets. Any advice on how we can present the data from the raw spreadsheet? Maybe as graphs? Is there any software for that? Thank you.
Hi! We have a series of examples using Python here: https://github.com/pupil-labs/pupil-tutorials
Hi all, does pupil capture stream out video captures or eye data?
I'm trying to see what I have access to in TouchDesigner and I'm struggling
Hey, yes, Pupil Capture streams data via the Network API https://docs.pupil-labs.com/developer/core/network-api/ But I have not heard about TouchDesigner yet. Therefore, I doubt it is compatible.
So I'm understanding it's streamed via tcp://localhost:50020, but the sample code uses the zmq library in Python. Is that parsing a bunch of text into video? Or is it being sent in JPEG format, like it says at the bottom of the Network API page under Frame Publisher?
TouchDesigner has a "Video Stream In" operator where you just put in the address of the video feed, it also has a "Syphon Spout In" operator. But I'm not getting anything from the TCP address... Maybe I haven't started streaming yet?
Hi! I have a question about the gaze_positions.csv file. I want to know the position of the fixation dot in the video within the world video. I think I need to look for this at the norm_pos_x and norm_pos_y variables. I am wondering where the 0 points are and whether they change when the participant moves its head (so also the world video changes in position).
Hey! Gaze is estimated in scene camera space. So it is relative to the subject's head. The origin of the normalized coordinate system is in the bottom left of the scene camera coord. system. See our documentation here https://docs.pupil-labs.com/core/terminology/#coordinate-system
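As a quick illustration (this helper is my own, not part of the Pupil API): converting a normalized gaze position to pixel coordinates in the scene image only requires scaling and flipping the y axis, since image coordinates conventionally have their origin at the top left with y pointing down.

```python
def norm_to_pixels(norm_pos, frame_width, frame_height):
    """Convert Pupil's normalized coordinates (origin bottom left,
    y pointing up) into image pixel coordinates (origin top left,
    y pointing down)."""
    x, y = norm_pos
    return x * frame_width, (1.0 - y) * frame_height
```

For example, a norm_pos of (0.5, 0.5) maps to the center of the scene frame.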
And is there data on the position of the head/world camera such that you could add the data of the gaze and the world camera to know where the gaze is within the world?
The Pupil Core Network API is fairly custom. That is why I doubt that it will work out of the box. Do you have a link to the reference documentation for these operators?
To contextualize gaze in the real world, you need an additional mapping step. For Core, we offer 2d surface mapping https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking and 3d head pose tracking https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Thank you!
Both operators are not compatible with Capture. But they have a Python interface https://docs.derivative.ca/Category:Python You might be able to build something like a custom operator that uses the Network API examples to receive data.
Alright, I'll give it the ol' college try! Thanks!
Hey everyone, could someone please tell me if the licensed iMotions software is essential for acquiring normalised gaze coordinates relative to the egocentric scene video? From what I've seen of the documentation on Pupil Labs Capture and Player, it seems as though this capability is available in the free software.
Hey, Pupil Core software provides gaze relative to the egocentric scene video by default. Could you clarify what you mean by normalised?
That's great, thanks for the response. By normalised I just mean a normalised coordinate system with respect to the scene camera's pixels.
Yes. That is provided.
Do you guys still sell the 120 Hz eye cameras (the ones that can be focused)? We currently have the 200 Hz cameras that came with the headsets, but we were interested in adjusting the focus, which these ones don't allow us to do.
Hi @user-b9005d, we might still have a few. Please send us an email at sales@pupil-labs.com and our Assembly team will get back to you on Monday.
Hi there, I'm running Pupil Capture version 3.5.8 on Ubuntu 22.04 and so far it has unrecoverably crashed Ubuntu about 1 in 3 times, requiring a hard reset or REISUB. Any ideas what might be causing this? It seems to happen regardless of which USB port Pupil Core is connected to.
Hey, I am sorry to hear that. This is not a known issue; we are running this setup internally without any problems. What changed in your setup compared to prior Capture usage? Did you just set up Ubuntu 22?
Yes, just set up Ubuntu 22 on a new machine
Did you run Capture on the same machine with a different Ubuntu version before?
It's a completely new setup
This makes it difficult to pinpoint the cause of the issue. Can you check if there are any system logs that could contain information about the freeze/crash?
@papr Question - when using pupil remote to start a recording and specifying a folder, e.g., 'R my_folder', is there a way to avoid it creating 000-padded subfolders inside that folder?
No, there is not. This feature ensures that Capture records into a fresh folder and does not overwrite anything.
Hello, I have a question about opening the pupil and gaze .pldata files. I need to import these into Matlab somehow. I'm guessing I need to write some python script which extracts the data and puts it into some nparray or excel file which I can then read in Matlab, just wondering where to start. Thanks!
The recommended workflow is to open and export the recording using Pupil Player. This will generate CSV files that can be easily imported using Matlab
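If it helps, here is a minimal sketch for reading the exported file in Python before handing the data to Matlab. It assumes the standard gaze_positions.csv column names (`gaze_timestamp`, `norm_pos_x`, `norm_pos_y`); adjust if your export differs:

```python
import csv


def load_gaze_positions(path):
    """Read an exported gaze_positions.csv into a list of dicts,
    keeping the timestamp and normalized gaze position as floats."""
    rows = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            rows.append({
                "timestamp": float(row["gaze_timestamp"]),
                "norm_x": float(row["norm_pos_x"]),
                "norm_y": float(row["norm_pos_y"]),
            })
    return rows
```

In Matlab itself, `readtable('gaze_positions.csv')` on the exported file would serve the same purpose.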
Hi there. I am using Pupil Core glasses on Windows. When I open Pupil Capture, it says it can't connect to the eye0 or eye1 devices and that no camera intrinsics are available for camera (disconnected) at resolution [192 192]. This is a brand new problem; I have previously been able to connect to both eye0 and eye1. The only thing that has changed is that I downloaded the pupil-core-network-client module. I followed the troubleshooting steps outlined here, with no luck: https://docs.pupil-labs.com/core/software/pupil-capture/
Could you please connect the headset, open the device manager, expand the Cameras and libUSBk categories, make a screenshot of the window, and share it with us?
How do I read the transformation matrices from the surf_positions export file? Although they are in the matrix form we write in MATLAB, MATLAB detects those cells as text, so I can't directly read them as a matrix. Is there any way to import those matrices in some programming language?
You should be able to convert the string into a matrix with py.eval. But I am not sure if that works. I am basing this assumption on this documentation https://de.mathworks.com/help/matlab/matlab_external/differences-between-matlab-python.html
Hi. I am using pupil capture. Is there a way to change the default user_info fields on startup?
Hey, that is not supported
Hi, I am looking to get the diameter and center of the iris. Any suggestions or libraries that could be used?
Hello guys! When plugging the Core glasses into our OnePlus 8, the Companion app shows an error message saying that there's a USB problem. Do you know how to resolve that?
Hi. The Companion app is only designed to work with Pupil Invisible. Please use Pupil Capture to drive your Core headset.
Hey all, does anyone have any tips on how to calculate angular velocity from the gaze positions file? I am trying to see if participants are making saccades between various timestamps. I was thinking of using the data under the gaze_point_3d x,y,z columns, but if I did this, I would need to use a Euler rotation matrix and I do not know how the local coordinates are set up. I was also thinking I could use a simple 2d method, since participants are looking at a board with their head fixed in a chin rest. Just reaching out to see if there are any ideas on Discord. Thanks.
Hi, check out this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
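In case a standalone version helps: the angle between two consecutive gaze_point_3d vectors can be computed directly from their dot product, with no Euler matrices needed, because both vectors live in the same scene camera coordinate system. This helper is my own sketch along the lines of the linked tutorial:

```python
import math


def angular_velocity(g1, g2, dt):
    """Angular velocity in deg/s between two gaze_point_3d samples
    taken dt seconds apart, both in scene camera coordinates."""
    dot = sum(a * b for a, b in zip(g1, g2))
    norm1 = math.sqrt(sum(a * a for a in g1))
    norm2 = math.sqrt(sum(b * b for b in g2))
    # clamp to avoid math domain errors from floating-point noise
    cos_angle = max(-1.0, min(1.0, dot / (norm1 * norm2)))
    return math.degrees(math.acos(cos_angle)) / dt
```

For example, two gaze vectors 90 degrees apart sampled 0.5 s apart give 180 deg/s.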
@papr at a more reasonable range (say, up to 500 deg/second) you will likely see noise characteristics that might cause a new user to worry that their calculations are wrong.
I would rather be interested in how these larger numbers come to be. Is the spatial difference larger than expected, or the temporal one shorter? Can you reproduce these numbers in your own recordings?
....unless they notice the diff. in Y axis,
I searched around for this but wanted to double check (before adjusting my IRB)! A research assistant and I are working to set up an eyetracking study with Core. The headset is not picking up her pupils well with her eyeglasses on. Is Core designed to work with a user who wears eyeglasses?
Eyeglasses can obstruct the eye cameras' views or (if filming through the lenses) create reflections that cause the pupil detection algorithm to perform worse. If you share an example Pupil Capture recording with data@pupil-labs.com we will be able to tell you the exact cause.
Awesome, thanks. It's a small study, we might amend to exclude eyeglasses, but I will see if we need to send along a recording. We can definitely tell that it's less stable with her glasses than it is with me not wearing glasses.
Hi all, is there any way or function to convert gaze coordinates to pupil coordinates? It would be really helpful if someone could help me out with this.
Hey! Could you please elaborate on which type of information you are interested in?
Good morning all, could you please help me how can I download Pupil Core Software on my Windows 10? Thank you!!
Hi! You can find it at the bottom of this page https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads
Hi, we are using Pupil Core glasses to track the gaze positions of a person looking at different lights on a physical board that is angled at 30 degrees with respect to the table it is placed on. I am trying to figure out which of the measures in your gaze_positions CSV file would be most suitable for figuring out where they are looking. Specifically, our task requires that they fixate on a specific light, and we want to make sure they keep their eyes on it. Do you have any advice on which of the measures in the gaze_positions file would work best?
delete annotations
hi everyone! representing my academic design department, we're looking to invest in an eye tracking system. Are there any resources out there directly comparing the "core" and "invisible" products, describing the ideal use cases for each etc?
Hi, my Pupil Player keeps crashing for no reason, especially when I am editing the surfaces (AOIs) or adding/removing markers. I would like to know whether this is a known bug in Pupil Player or whether it's just me having this issue.
Many thanks!
Note: I'm running Pupil Player v3.5.7 (on macOS Monterey)
This is the detailed report of the problem
Hi @user-7c6eb3. We don't have a specific resource that directly compares Invisible and Core online. That said, I can certainly outline some important points here.
Pupil Invisible is well-suited to use-cases ranging from art and design to sports performance, both in the lab and in the real world. It was designed to look and feel like a normal pair of glasses, and uses a real-time neural network to provide stable gaze estimation in all environments, without requiring calibration. This means it's fast and easy to set up, and you can take the glasses on and off again without worrying about re-calibrating. Invisible connects to a smartphone device, which makes the system fully portable.
Pupil Core is ideal for studies demanding high accuracy or pupillometry. It is fully open-source and can be extended and customised to meet research aims. It does, however, require calibration and a controlled environment such as a lab to achieve the best results. Pupil Core connects to a laptop/desktop computer via USB for operation.
Check out our blog to see how Pupil Invisible and Core are being used: https://pupil-labs.com/blog/
When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye's pupil is detected with proper confidence. If I mark a calibration dot where I know that one eye is fixated, how does that affect the gaze estimation once both eyes come back into view?
Just following up on this
Hi channel, we recently recorded some data with a pupil core device and we are interested in pupil size. However, when looking at the recorded data the pupil size of both eyes differs largely and seems impossibly small (<2mm) for both eyes. This, even though I discarded all data with confidence values <0.6 and the recorded videos of the eyes look normal. Would anyone be willing to have a look at my output, or in another fashion can help me figure out what is going wrong? (We've had this problem consistently now)
Hi @user-7c46e8 π. Getting accurate pupil size estimates depends on a well-fitting eye model. If you haven't already, I'd recommend checking out our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hi everyone! I am interested in the DIY kit, but the HD-6000 and B525 cameras used in it have been discontinued. Are there other cameras that can be used instead?
Hi, I am wondering how I can get a face blurring feature?
Hey! Which product are you using?
the eye tracking glasses
Let me clarify: Do you use Pupil Core or Pupil Invisible?
core
The Pupil Core software does not have such a feature built in. You would need to run a face blurring algorithm post-hoc on the recorded video. Unfortunately, I don't have experience with any particular face blurring software, so I cannot give specific recommendations in this regard.
Ok thanks
Hi hi! How are you all?
Do you know if the VR/AR add-ons work with the Oculus Quest?
Hi @user-e45bce. The VR add-on isn't compatible with Oculus Quest. Please see this message for reference: https://discord.com/channels/285728493612957698/285728635267186688/956545909884321793