Hi, we would like to buy a Core suite; just wondering how long it will take to ship to the UK? Many thanks 🙂
Hey, that depends on various factors. Please contact info@pupil-labs.com with details about the purchasing entity (personal/academic/commercial).
When exporting fixations on a surface, why are there multiple entries for the same fixation_id?
Yes, that is because fixations usually span multiple scene video frames and are estimated in scene video coordinates. For every scene video frame, we get a new scene-to-surface mapping function. That means that we need to map the same fixation multiple times to the surface, once for each frame in which the surface was recognized. This is only an issue if the surface moved during the fixation (in which case it will look like the fixation moved on the surface).
So if I want to get the coordinates of a single fixation, is it viable to iterate over the rows with the same id and just take the average x and y position? Because looking at my data, the position changes slightly.
Correct. Small changes are expected due to the noise in the surface tracking.
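(If you're doing this from the CSV export, a minimal pandas sketch of that averaging could look like the following; the surface name in the file name is illustrative, and it assumes the export's fixation_id / norm_pos_x / norm_pos_y columns.)

import pandas as pd

# Collapse the per-frame surface mappings to one position per fixation.
df = pd.read_csv("fixations_on_surface_Screen.csv")  # hypothetical surface name
fix_pos = df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]].mean()
print(fix_pos.head())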
Hello, how can I calculate gaze amplitude from phi and theta? Since we have these values:
x = data.gaze_point_3d_x
y = data.gaze_point_3d_y
z = data.gaze_point_3d_z
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)  # for elevation angle defined from Z-axis down
psi = np.arctan2(z, x)
Hi @user-ef3ca7 🙂. Have a look at the first three lines of code in section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
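Building on that tutorial, here is a minimal sketch of computing the amplitude between consecutive gaze samples from those spherical angles; the helper names are my own, not from the tutorial:

import numpy as np

def spherical_to_unit(theta, psi):
    # Inverts the conversion above: theta = arccos(y / r), psi = arctan2(z, x)
    x = np.sin(theta) * np.cos(psi)
    y = np.cos(theta)
    z = np.sin(theta) * np.sin(psi)
    return np.stack([x, y, z], axis=-1)

def gaze_amplitude(theta, psi):
    # Angular distance (radians) between consecutive gaze samples.
    v = spherical_to_unit(np.asarray(theta), np.asarray(psi))
    dots = np.clip(np.sum(v[:-1] * v[1:], axis=-1), -1.0, 1.0)
    return np.arccos(dots)

# Example: two samples 10 degrees apart in azimuth at theta = 90 deg.
print(np.degrees(gaze_amplitude([np.pi / 2, np.pi / 2], [0.0, np.radians(10)])))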
Hello,
In a research protocol, we would like to compare two conditions: one with AR-HMD wearing and one without. We are thinking of using the HoloLens add-on for the first condition and the Core for the second. Is the hardware (the 3 cameras) on the two devices exactly the same (for data consistency's sake)?
Thanks in advance.
Hey, the hardware is the same. 🙂
Thank you for the response !
When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye is tracked with proper confidence. If I mark a calibration dot where I know that one eye is fixating, how does that affect the gaze estimation once both eyes come back into focus?
Hey 🙂 This is what happens under the hood:
1. Player collects high-confidence pupil data for each eye in the specified calibration range.
2. Player creates three combinations of pupil and target data: a) left pupil data + targets, b) right pupil data + targets, c) left + right pupil data + targets.
3. For each combination, it fits a separate model, i.e. two monocular models and one binocular model.
If only one eye is detected, the corresponding monocular model is used. If both eyes are detected, the binocular model will be used.
By placing a target marker while only one eye is detected well, you improve the corresponding monocular model. The binocular model is not affected, as there is no valid left-right eye pair that could be combined with the target marker.
I hope this was clear.
This does help, thank you so much!
Hi! I have three surfaces, one big one that spans my entire screen and two smaller ones inside the big one. The small ones overlap with the big one but are very separated from each other. Any idea why my surface_events.csv looks like this? It's always entering and exiting all three at the exact same times, which is impossible.
You might want to look at the gaze_on_surface* data instead 🙂
Hi 🙂 These enter and exit events do not refer to gaze entering/exiting the surfaces but the surfaces being detected within the image. If the surfaces are defined based on the same set of markers, the above behavior is expected.
Thank you so much!
On that note, what does "on_surf" refer to? And any tips for converting the timestamp values into more useful numbers? Thanks again :)
1) on_surf is a boolean indicating if the gaze point was on the corresponding surface or not. 2) See this tutorial on post-hoc time sync https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
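For reference, the core of that time-sync tutorial boils down to offsetting pupil time by the clock difference stored in info.player.json; a minimal sketch, assuming a standard Player export layout:

import json
import pandas as pd

# Offset between system (wall-clock) time and synced pupil time at recording start.
with open("info.player.json") as f:
    info = json.load(f)
offset = info["start_time_system_s"] - info["start_time_synced_s"]

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze["wall_clock"] = pd.to_datetime(gaze["gaze_timestamp"] + offset, unit="s")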
Looking for some advice here. In the first image, we have norm_pos represented by the green sphere, and gaze_point_3d by the blue sphere. We would like to project a gaze vector from the cyclopean eye onto the 2D world image. We assume this vector will pass through gaze_point_3d (blue sphere), but what is the assumed origin here? We assume that the origin is the world camera, but there is an offset that you can see in the second image, which shows that gaze_point_3d does not align with norm_pos. This could be due to incorrect intrinsics... any other thoughts?
We're also quite confused as to why gaze_point_3d is at 20 cm from the ~~head~~ world camera, when the calibration plane (the wall w/checkerboard) was most definitely at a further distance. We would like to use geometric camera calib on the world video to extract the distance (we don't have a good record of it, unfortunately). ...but, I believe we need to know the sensor size to estimate the checkerboard distance.
So, the second question is: any ideas why PL estimates a depth of about 20 cm? (across multiple recordings)
Depth estimates are inaccurate due to noise in the gaze_normals. And yes, gaze_point_3d, norm_pos, and scene camera origin should lie on one line.
Yep. Well, my student is double checking the FOV against the intrinsics stored in the pupil labs data folder. Let's hope that explains it...
Could you elaborate on how the second image was generated? If it is the image plane of the scene camera, the norm_pos should be on the plane, i.e. the red lines should cross in the plane. But in this picture they cross further into the scene.
Yes, that's the concern that prompted the question. I would have assumed they cross in the plane, but they cross at 20 cm.
I thought gaze_point_3d was at 20cm? From the first picture it looks like gaze_point_3d is further away than norm_pos
Let me put together a quick description with my student. I'll share later today.
gaze_point_3d is at 20 cm from the world camera.
and norm_pos?
It is the blue sphere
Norm_pos, the green sphere, is ... (asking the student)
Ah, I got those mixed up, my bad.
since we're using the normalized image coords, it's set to the depth of the image plane
...wouldn't you expect the monocular eye vectors to cross at the depth of calibration?
Here, they are calibrated to the depth of the checkerboard. I would have assumed that bundle adjustment adjusts rotation to minimize error between the mono gaze vectors and the calibration points... and that they cross near the real-world depth of the calibration targets at the time of calibration.
In this case, it is expected that the lines cross in front of the scene cam image plane, and not in it. What is unexpected is that the green sphere is so far out.
Note that this question is from a second line of work with the core, not the stuff Kevin is working on in the HMD.
The data being monocular explains the depth of 20
But that should be mm not cm
Ooh, this might explain a thing or two.
I believe we're using the calibration matrices for the binocular calibration
My student suggested the bino calibration had individual matrices for the left/right eyes
...and we're using those matrices for the position and rotation of the local eye/eye camera space.
within world camera space
So, to be clear, you're assuming vergence to a distance of 20 mm at the time of calibration.
and the real-world distance of the calibration targets will not affect that assumption
For monocular post-hoc HMD calibrations, yes. That is the estimated distance between the eyes and the HMD. Real-time HMD calibrations take the depth from the 3d target data. Normal Core 3d calibrations assume a distance of 500 mm.
Yes, I think that last sentence is the relevant one here. https://github.com/pupil-labs/pupil/pull/2176/files#diff-1db06b7632c7441082988af78c11f0d28b6371a07779e1db309fef09c4622d04 Thanks. So, then, the ~20 cm estimated fixation distance is just a coincidence.
Hi:
I just received a binocular eye tracker today, and I was trying to export the gaze position data independently (eye0 and eye1); hence, I dragged the dual-monocular gazer plug-in into Pupil Player.
However, I somehow couldn't find the plug-in in the list. Is the "dual-monocular gazer" located somewhere else in Pupil Player?
Thanks~
Hey, you need to perform post-hoc calibration to leverage that plugin. See the Gaze Data menu and https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
I am still learning the messaging format for communicating via the IPC backbone. Is there a way to call specific values from the real-time gaze data? (i.e. instead of printing the whole string like in the pic, could I extract and print only certain coordinates or only the timestamp?)
Hi, yes there is! After decoding the message, you get a Python dictionary. Read more about them here https://realpython.com/python-dicts/
Thank you! So in this case, would the name of the dictionary that I call into be 'gaze.3d.0'?
You can use extracted_obj.keys() to print the top-level keys. Note the b in front of the strings. That means that you need to prefix the strings with a b when accessing the fields as well:
extracted_obj[b"name"]
Alternatively, you should be able to upgrade your msgpack version; that should give you normal strings without the need for the b. Just give it a try 🙂
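For context, a minimal subscription sketch (assuming Capture runs locally with the default Pupil Remote port, 50020):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")     # ask Pupil Remote for the SUB port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")            # all gaze topics, e.g. gaze.3d.0

topic, payload = subscriber.recv_multipart()
gaze = msgpack.unpackb(payload)          # a Python dict

# Depending on your msgpack version, keys are bytes (b"...") or str.
ts_key = "timestamp" if "timestamp" in gaze else b"timestamp"
print(gaze[ts_key])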
Hi, I am trying to get my Pupil Core to work on my new Mac, and no footage is loading in Capture; the settings say Local USB disconnected. Can someone help with this issue please?
Hi @user-a02d16 🙂. You will need to start the application with administrator rights: sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture. See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5
Hi, are there any known helper scripts that extract the closest world, eye0, and eye1 images at a given Pupil time?
I want to check the eye condition and world situation at a specific moment.
e.g. python helper_script.py --pupil_time=120.101 --path=./images/
-> generate ./images/120_100_eye0.jpg, 120_102_eye1.jpg, 120_091_world.jpg
Hi @user-292135 🙂. Check out this tutorial that shows how to extract specific frames from the scene video: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. It's currently set up to find the 30-second mark (see section 8), but you can use specific pupil timestamps if required.
The tutorial will also work with eye videos. Note that it requires a *_timestamps.csv file. You'll need to run the 'Eye Video Exporter' plugin in Pupil Player to get these (one for each eye video). Alternatively, you can adapt the script to load the *_timestamps.npy file using numpy.load instead (section 4 of the tutorial).
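The closest-frame lookup itself is a one-liner once the timestamps are loaded; a minimal sketch against the raw recording files:

import numpy as np

timestamps = np.load("world_timestamps.npy")  # likewise eye0/eye1_timestamps.npy
target = 120.101                              # pupil time in seconds
frame_idx = int(np.argmin(np.abs(timestamps - target)))
print(frame_idx, timestamps[frame_idx])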
Hi! We have purchased two Core devices. On one of them, the World camera won't turn on. It says "could not connect to device"! What could be wrong with the device? One is working but the other is not.
Hi! We have purchased one Pupil Core device and we are thinking of purchasing two more eye cameras and attaching them to that device. How can we get all four cameras working at once?
While you can record video from all 5 cameras with two Pupil Capture instances, the software is not designed to make use of the two extra cameras. The eye cameras available in our shop are meant for replacement or upgrading older eye cameras.
Hi Niki. Please follow these steps to debug the driver installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Thank you! But this operation did not help. Now the device search doesn't complete; it just displays this:
Hi again, We're using blink detection for our experiment and have run into some issues with the system registering blinks for too long, or registering errant blinks, even when the eyes are open and the pupils of the participant are clearly visible. Do you have any recommendations on how we might go about fixing this, and is there a way to correct for it post-hoc as part of the test?
Hi @user-856af7. You might need to adjust the blink detector thresholds in Pupil Player. You can read about this and how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
Hi, I was wondering how the gaze_normal data column was normalized, as in, what denominator was used?
It is normalized by dividing by its original length, such that the result has a length of 1.
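In code, with an illustrative raw direction vector:

import numpy as np

v = np.array([12.0, -3.5, 480.0])    # raw gaze direction, arbitrary length
gaze_normal = v / np.linalg.norm(v)  # divide by its length -> unit vector
print(np.linalg.norm(gaze_normal))   # 1.0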
To follow up on this response:
Our lab is interested in knowing where the gaze is located. But in the case of people with eye deviations, we're concerned with where each eye is pointing individually. If we want to get an accurate gaze reading, would we need to do monocular calibrations of some kind?
Also, if we have really good binocular data during calibration, does this also feed into the monocular models? We were worried that there may not be enough monocular calibration points to create an accurate model when monocular gaze does occur.
For the calibration, binocular data is the intersection of high quality left and right eye data for which each left-right pair is close in time. The monocular models are fitted on all high quality samples from their corresponding eye. So, yes, data used for binocular calibration is also used for monocular calibration.
Regarding eye deviations: The calibration assumes that both eyes are able to fixate the target at the same time. That assumption is rarely true but the further one eye deviates from the gaze target the less accurate the gaze mapping will be.
We have a calibration plugin that allows you to calibrate one eye after the other, but it is designed for external head-mounted displays and you would need to implement the visualization of the gaze targets yourself.
Does the heatmap image export map onto the world camera surface?
Hey, could you please share a screenshot of the device manager on the affected device with the following categories expanded (if available): Cameras, Imaging Devices, libUSBk. Please also enable "Show hidden devices" in the view option.
The camera fell off again! Could it be shutting down due to overheating?
Could you please also clarify if the two headsets are being a) connected to two different computers or b) to the same machine.
Thanx!
I don't know what you did, but the device unexpectedly started working. Maybe the drivers kicked in somehow, but it hadn't worked for a very long time.
"Fell off" as in physically fell off? Or do you mean that the scene video stopped working?
Yes, the camera spontaneously stopped displaying images
That looks like all cameras are connected. Can you confirm that the scene video is not being displayed even though the device manager is showing all three devices? (mind the difference to a disconnected device in the device manager)
Hi, do you mind clarifying what you mean? We were hoping to understand what was set as the origin, what is the max value, and what does a negative value mean for these coordinates. I will include an image of the coordinates we are inquiring about.
Hi @user-26b243 🙂. norm_pos_x/y are gaze coordinates in pixel space normalised by the width and height of the camera image. Bottom left: 0,0; top right: 1,1. Negative values can occur when the gaze point leaves the scene camera image whilst the pupils are still tracked by the eye cameras; this is usually low quality data. You can read more about Pupil Core's coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
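For example, converting norm_pos to pixel coordinates (assuming a 1280x720 scene image; flip y if you want a top-left image origin):

norm_pos_x, norm_pos_y = 0.25, 0.8   # example values
x_px = norm_pos_x * 1280
y_px = (1 - norm_pos_y) * 720        # top-left origin; drop the flip for bottom-left
print(x_px, y_px)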
Hello, we are coming across another problem where it seems that the gaze recording is a bit off from where the participant was actually looking. We ran a quick experiment where we specifically looked at all 4 black and white boxes (as seen in the included images) for about 5 secs to see what gaze data would be recorded. We noticed that for multiple trials the side furthest from the center (left in this case) was the side that had the most variability. For example, picture 1 (screenshot 21) is when the subject was looking at the top left square and picture 2 (screenshot 22) is when the subject was looking at the bottom left square. For these two trials the calibration's angular accuracy was 1.46. Also, for reference, the board that the squares are on is set up at an angle with respect to the subject.
Would you be able to share a recording with [email removed] so that we can provide concrete feedback? Please include the calibration choreography in the recording 🙂
It seems that *_lookup.npy is more useful. Is there any critical difference between *_timestamps.npy and *_lookup.npy?
Hi 🙂 Some types of recordings can have multiple video files (parts) from the same camera. Each part has its own *_timestamps.npy.
*_lookup.npy is a cache file that contains the video timestamps and pts of all parts. It also contains interpolated timestamps between parts.
Pupil Capture recordings typically only have one part, i.e. there is no big difference between the lookup and timestamp file.
Note: The tutorial is based on the *_timestamps.csv file exported by the video exporter, not the npy. The csv contains timestamps and pts as well.
Hey there, I am wondering how to exclude blinks from raw pupil data (csv format). I understand the 'blink detection' you guys built lets you select the filter, onset, and offset. Since I want to do this post-hoc though I figured I could remove all my data points with a confidence below a certain threshold. What number do you suggest using? Also if you could point me towards documentation on how the confidence value is established that would be helpful. Thanks.
Hi @user-97ca10 Check out this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb It demonstrates how you can relate the different exports to each other by time. This should allow you to write a short routine that finds pupil datums that belong to a blink and discards them.
Confidence is mainly calculated within the 2d pupil detector. It looks for pupil edges and fits an ellipse to them. In summary, the confidence is high if the fitted ellipse matches the detected edges well. For more information, see http://arxiv.org/abs/1405.0006 If you are interested in the exact implementation, I can look that up, too.
Hey there, where can I find the recorded data files? Thank you!
Pupil Capture creates a recordings folder in your home folder. Each recording is a folder with multiple files. You can use Pupil Player to view the recording https://docs.pupil-labs.com/core/software/pupil-player/
Hi there, may I know if there's a time limit for recording? My experiment will likely last up to at least 1.5 hours. Is that going to be a problem for Pupil Core?
We'd definitely recommend splitting your experiment into smaller chunks. If you haven't already, check out the best practices page: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks
Sorry, I should've been more specific with my question. The study will be broken up into a few blocks and there will be calibration at the beginning of each block. However, the total duration of the experiment is going to be at least 1.5 hours. I'm just wondering if that will be a problem, since it'll be 1.5 hours of almost continuous use.
Hi @user-89d824. I would recommend splitting into several blocks rather than just a few, if your experiment allows 🙂. It's definitely worth doing some pilot testing to get a feel for the kinds of data you'll get, how accurate the calibration is over time, etc. That'll help you decide on an appropriate plan.
Thank you for your response! Please see the screenshot showing the unknown files.
Hi, these are files from the intermediate recording format. They are documented here https://docs.pupil-labs.com/developer/core/recording-format/
Note, the recommended way to use them is to open the folder in Pupil Player and to export the data to CSV. https://docs.pupil-labs.com/core/software/pupil-player/#export
Hi,
I had a question regarding the Pupil Invisible glasses. I saw that the link for applying offset correction wasn't working.
https://docs.pupil-labs.com/invisible/how-tos/tools/apply-offset-correction
Is there an alternative place where I could look up the procedure?
Thanks. 🙂
Hi @user-e91538! We have made a mistake leaving that link in the docs! We were considering such an article, but have opted for in-app documentation instead. You can find instructions on how to do the offset correction in the Companion app by opening the wearer profile of the currently active wearer (by clicking on the wearer name on the home screen) and then clicking "Adjust".
We'll remove the link from the docs. Thanks again for reporting the error!
Let me try to fix that 🙂
Ah cool. Thank you. And I just realized I posted in the wrong thread. I'll try to do it right next time. 🙂
Hi:
I've got a question regarding the gaze accuracy.
While I can see the gaze accuracy and gaze precision after each calibration, I would like to know if it's possible to find them in the exported files (.csv), or whether they are already somewhere in the folder?
Many Thanks
They are not stored as part of the recording. The results are only published as logging messages, which you would need to receive/parse in real time.
Thanks~
Hi, I want to do eye tracking on a mobile phone. For this, should I calibrate on the laptop with screen markers, or should I print out Pupil calibration markers and attach them to the phone screen? And in what range should the angular accuracy and angular precision values be in order for me to get the best result? When I calibrate and test with screen markers I get around 2.6 angular accuracy and 0.2 angular precision. Is this useful?
eye movement calculation
Hi @user-aaa726 🙂. I'd recommend using the physical marker, making sure to cover the area of the visual field that the phone occupies. Also, calibrate at the viewing distance you'll be recording, e.g. arm's length. For instructions on how to use the physical marker, see this page: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker.
You can determine how much calibration accuracy you'll need with this calculation: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246
Note that investigating gaze in the context of a phone screen will require robust pupil detection and good calibration. Here is a reasonable best-case example of gazing on a mobile phone screen using Core: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing
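I can't reproduce the linked message here, but the back-of-envelope presumably amounts to the visual angle the phone subtends at viewing distance, which you can compare against your calibration accuracy; a sketch with illustrative numbers:

import math

size_cm, distance_cm = 17.27, 40.0  # phone screen height, viewing distance
angle_deg = math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))
print(f"screen subtends ~{angle_deg:.1f} deg")  # ~24.4 deg across the screen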
If I could get a calibration like in the video, it would be perfect. What path should I follow to achieve this result (also without a phone holder)? Do I have to define an AOI with AprilTags? I tried to do that, but in the video you shared they don't. Or is it sufficient to calibrate with physical markers? Sorry for the stupid questions 😩
Many thanks for your answer. Now I have tried to calculate the calibration accuracy, but without a valid result. In my experiment (it's my PhD thesis) I use a portable phone holder at a distance of ca. 40 cm to the user; the smartphone is 17.27 cm. When I add the values to the formula I get NA.
Hey guys, newbie here. I was wondering, is it possible to integrate Pupil Capture into a custom mobile app (call some API or something)? Basically, whenever the app is launched I want to start recording. Thanks
Hi 🙂 Pupil Capture has a network API that you could call from your app: https://docs.pupil-labs.com/developer/core/network-api/ Note that Capture will require a desktop PC to run.
Does it have a Java binding? Also, my research is on the move outside, so I can't use a PC.
Have you considered using Pupil Invisible instead of Pupil Core? https://pupil-labs.com/products/invisible/
Yes, the university does not have Pupil Invisible; that's why I am thinking about whether it's possible. But assuming I do get one, how would I integrate it into my custom app? Or do I maybe have to run both apps,
the Companion app and my custom Android app?
If you need to use Pupil Core, you can use a tablet running a desktop operating system, e.g. a Windows Surface tablet. You might not be able to run the pupil detection at the full frame rate, though. There are other challenges when recording outside with Core, e.g. IR reflections from the sun inhibiting the pupil detection.
I see. thanks for the clarification. I will see what I can do
In the case of Invisible, you need to use a different API, see https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html (starting/stopping a recording is a simple HTTP request). But yes, both apps would need to be running.
Nice. Again, to use this REST API I need a Java/Kotlin binding, correct? I am new to Android dev as well.
I have no experience with Android dev. Any library able to make HTTP requests will do.
Okay, I will look more into it. This is just for curiosity's sake 🙂 Running both apps will suffice for me, I think. Thanks again, I appreciate your help! 🙂
You referenced that there is a binocular and monocular model under the hood. Do we have access to this model and would we be able to export it? We were hoping to do some spatial transformations of the eye tracker data externally from pupil software
Yes, the parameters are announced via the network API and stored to the notify.pldata file during a recording. Otherwise, you can infer which model was used for mapping based on its base_data field: if it only has one pupil datum, it was mapped monocularly, otherwise binocularly.
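A minimal sketch of that inference on an exported gaze_positions.csv, assuming base_data is a space-separated list of the pupil datums used for each gaze point:

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
n_datums = gaze["base_data"].str.split().str.len()
gaze["mapping"] = n_datums.map({1: "monocular", 2: "binocular"})
print(gaze["mapping"].value_counts())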
Would calibration data files suffice, or is a video of the process preferred?
We will need an actual recording, i.e. set up the Core system as usual, start a recording in Capture, perform a calibration, stop the recording, then zip the folder and share it with us 🙂
Hello! Does anyone have experience sending triggers from MATLAB to the Pupil cam? Can the exported file save the triggers along with the timestamps? Thank you!!!!
Hi @user-219de4 🙂. Check out our Matlab helpers repository: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/send_annotation.m
And, we tried to use remote annotations to send triggers from Python (following your manual: https://docs.pupil-labs.com/developer/core/network-api/); we do see the annotations during streaming, but cannot see the triggers in the exported files. Where would the triggers be saved?
Thank you! We tried to set it up but ran into difficulties installing matlab-zmq; it is an error with the mex function: "Error using mex ld: library not found for -l./libzmq clang: error: linker command failed with exit code 1 (use -v to see invocation)"
I think there might be a typo on the page https://docs.pupil-labs.com/core/software/pupil-capture/, in the Plugins -> Surface Tracking -> Defining a Surface part. This is the text, and I'm not sure if it is correct: "If markers have been erroneously added or left out, click the add/remove markers button and afterwards onto the according marker to add/remove them from your surface."
I think instead of "according" the word might have been "corresponding".
Hi, I am an MSc Computer Science student using Pupil Labs Core for research. I am using a mobile phone with the Pupil Mobile app for capture; hence I need to do post-hoc calibration. In my data, the calibration marker was accidentally left in the frame and is causing the gaze data to be off. I'm trying to calibrate using only a subset of the video (using trim marks), but when I come to collect the references (detected circle markers in the recording) I can't seem to 'set from trim marks'. I've moved the trim marks to the correct places but the button does nothing and continues to say 'Collect References in: 00:0'. Can anyone advise on this?
Hi! Sounds like you are doing everything correctly. If you increase the width of the menu the full message should appear.
Indeed it does! Although the gaze data doesn't follow the marker very well when the participant's head moves in circles (the detected fixations don't stay in the middle of the marker, but circle around the marker as the head moves).
In the gaze mapping section, there is a validation menu. Please set it to the same trim marks and run the validation. What does the accuracy say?
For more concrete feedback, please share the recording with data@pupil-labs.com. There are many factors that can influence the quality of the gaze estimation.
To visualize pupil detection quality, it can be helpful to enable the eye video overlay.
Looks like the issue is low-confidence pupil data. 90% of your samples have been discarded for the calibration calculation. Please check if you can get better pupil data by tweaking the parameters during post-hoc pupil detection.
Thanks! Looks like in the first few seconds of the data the pupil detection struggles to settle. Once settled, the detection is much better. I removed these first few seconds and the gaze data is already visually much closer, and the validation reflects that!
Small trick for post-hoc pupil detection: You can restart pupil detection while it is still running and it will keep the current eye model. This way you can apply a more stable model from the beginning.
Cheers for all the help!
Hi, I am trying to figure out how to include the software methodology for calculating the post-hoc gaze validation function in Pupil Player in a research paper. I have not found anything explaining that on the website nor on discord so far. Can you point me in the right direction? Thank you!
I have also seen that you have contacted us via email with similar questions. Regarding how accuracy and precision are calculated, we will add information about this here: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
For Pupil Core, the calibration methodology is the same for real-time and post-hoc. I suggest referring to the corresponding documentation web pages.
Okay, thank you for your help! Sorry for the double contact, I wasn't sure which would be best.
Hi, check out the Pupil Core tech report https://arxiv.org/abs/1405.0006 See the Spatial Accuracy and Precision section. That document can also be referenced in your paper
Perfect! Thank you so much!!
I'm using the surface tracker plugin to develop code that returns the specific surface the gaze point falls on when three surfaces defined in a single world frame are visible. For example, if the gaze point is on surface2, surface2 is output.
I need your advice on the following issues.
Gaze on surface
Hey 🙂 They are stored in annotation.pldata in the intermediate format. Use Player's Annotation plugin to export them to csv.
And one last issue: we are having problems registering the camera in Pupil Capture on Windows 10. We were following the guidelines, so we installed libusbK and Zadig to try to find the connection, but Zadig can't find it, so we are stuck. Can you help us with registering the Pupil camera in Windows 10 Pupil Capture? Many thanks
Thank you so much, we found it! In the annotation file, what do the columns represent? For example, in the first column "index", does the world frame index refer to the frame count (starting from the recording, not the session)?
Hi, I was using Pupil Player for post-hoc head pose tracking w/ AprilTags, and I was unable to find the head_pose_tracker_poses.csv file, which contains the pose of the world camera. Where does this file get saved?
Hi @user-1bda7f 🙂. Please ensure that you've:
1. Gone through all steps using the Head Pose Tracker plugin, i.e. detect markers, 3d model and camera localisation, and that they were successful
2. Run the raw data exporter with the Head Pose Tracker plugin enabled (check out the export docs here: https://docs.pupil-labs.com/core/software/pupil-player/#export)
Then the head_pose_tracker_poses.csv will be in the export folder
It's difficult to make a suggestion here. I'd recommend reaching out to the authors of matlab-zmq.
Thanks for your comments, we will. We now followed your Python script to record triggers and it works. But we still have the Windows registration issue for receiving video input in Pupil Capture. Is there anything you could suggest here? Thank you so much!
Yeah, so we've been following the instructions https://zenodo.org/record/201933#.Ytlhjy-B10s
Firstly, please make sure you are using the latest version of Pupil Capture (https://github.com/pupil-labs/pupil/releases/tag/v3.5). If the scene video isn't showing, follow these driver debugging steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
but Zadig is not finding our headcams
Also, as a separate question: we are trying to run a dual headcam set. Can we run two headcam inputs from one Pupil Capture session?
If you do not need eye video or pupil data, you could select the second scene camera as an eye camera and use Player's Eye Video Overlay. If you need the recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances.
Hi Neil, many thanks for your response. We tried running as admin with no success. When we check for libUSBk devices in the device manager, there are none, even with hidden devices shown.
That means that the drivers were not installed successfully :/
Hi! I am trying to do a calibration with the Vive headset in Unity but I keep getting this message in Pupil Capture. Also, it stops responding in the middle of the calibration. Does anyone know how to fix this? I have added the gaze tracker to an existing Unity project.
this is what I am getting
Is there any way to install Capture, Player, and Service on a computer that runs Windows 11?
Have you also downloaded the app from this link? The linked version is very old. The driver instructions should be up to date though.
Which app? Zadig? We tried versions 2.5 and 2.7.
@user-e3f20f Forwarding a question from @user-6ec20c:
Is there any way to install Capture, Player, and Service on a computer that runs Windows 11?
Should be the same as on Windows 10
Thanks @user-e3f20f! @user-6ec20c have you tried going to this page and clicking "Download Desktop Software" already? https://pupil-labs.com/products/core/
@user-6ec20c 🙂 Users have reported being able to use Pupil desktop software on Windows 11. See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/933755742509465610 Just try searching for Windows 11 in Discord; there are other messages for reference too 🙂
Hello, I am new to this community and I have a couple of questions concerning Pupil Labs. Where can I post these questions?
Welcome to the community, @user-eb6164 🙂. Feel free to post your questions here! (if you want some tips, check out the community-guidelines: https://discord.com/channels/285728493612957698/983612525176311809)
Thank you so much 🙂
I tried to find my answer but could not. I am working on my research and still in the process of developing my research questions and designing my experiments. I am working on a driving simulator that contains three screens. We need to extract specific eye metrics (such as fixations, visual dispersion) from different AOIs such as billboards, road signs, vehicles, mirrors, the road. I have read that setting AOIs can only be done with markers. Taking into consideration that markers will not help me here, since these objects are moving and located inside the screen (very small): is there any way I can set dynamic AOIs? Any free tool that can help us extract these data, or should we only code manually? Also, if I am working on three screens, should I calibrate on each screen?
That's not helpful. We have installed libusbK 3.0.7 following the instructions; the install is on the C drive, so what do you mean it's not properly installed?
Can you confirm that you are running Windows 10, and not the newer version 11?
I understand your frustration. My response should have been more constructive. What I meant to say was: the instructions should be correct and have worked well for others in the past. If the devices remain listed in another Device Manager category (e.g. Cameras or Imaging Devices), something must have gone wrong while using Zadig (either the headset was not connected or something else that is specific to your setup). After a successful driver installation, the libUSBk category should list the device entries. We do not know of any specific conditions that would cause an error.
Please note that Zadig is a third-party software and not maintained by us. Therefore, it is very difficult for us to make a statement about what might be going wrong without any specific error messages. Even with Zadig error messages, it is likely that we would not be able to help you with them. I would kindly ask you to refer Zadig specific questions to the Zadig maintainers/support.
That said, if you find out what went wrong, please let us know! Then we can update the documentation and help future users who run into the same issue.
Can you please clarify which version of Pupil Capture you are using?
3.5.1
You'll likely want to present four markers on each screen (one in each corner). These markers can be used to generate multiple AOIs on each screen. I'd recommend downloading this example recording that has markers in view and loading it into Pupil Player. That'll give you a better sense of how surface tracking + AOIs work: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view
With that said, you are presenting moving stimuli, so static AOIs might not be appropriate. This leads to the question, do you know the coordinates on-screen of what is being presented?
If so, you could have one big AOI that covers each screen, and you'll get x,y coordinates of gaze relative to each screen. Then it would be a case of correlating gaze with your on-screen stimuli coordinates, thereby automating your analysis.
If not, then manual coding is certainly an option, albeit a time-consuming one. Check out our annotation plugin: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player
You should only need to calibrate using one screen: when using the 3d pipeline, the calibration will extrapolate outside the calibrated area.
When you say I should compare the coordinates of the object (AOI) and the coordinates of eye gaze, do you mean I should check the exported Excel file for these values (gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z) and compare them with my object coordinates inside the screen? Also, is there any documentation that helps me understand how to read eye metrics? I am new to this. Thank you.
Yes, they are moving stimuli, since there will be a driving scenario; although the stimuli themselves are static, the screen content will change while driving forward. So static AOIs will not help. And yes, I think I can get the coordinates of the objects, at least the road signs and standing pedestrians.
@user-219de4 One other suggestion would be to perform the procedure, step by step, with the exact versions of the linked software, on a fresh system. As mentioned before, it might be that something in your existing setup is interfering with the driver installation.
Hello, please help! I do not know what's wrong with the eye-tracking device. There is no view in the eye 1 window and the world window shows "EYE1: could not set the value. 'Backlight Compensation'". You can see the pictures for specific information. Thank you so much! @user-e3f20f
Hi everyone! I'm preparing an experiment with an AOI, using Pupil Core. I have a question about the gaze position data. The "confidence" value varies from 0 to 1, where 1 indicates perfect confidence in a gaze position. However, can you advise any threshold for useful data? I found that for pupil positions, useful data carries a confidence value greater than ~0.6. Is that relevant for gaze too? Thank you for your answer and your help!
Hi @user-eb6164 🙂 If you use Surface Tracking to define each screen, you can export gaze and fixation data relative to the surface/screen. Follow this link in our documentation for reference: https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system. Data is exported as .csv files. For a breakdown of the .csv exports, please follow this link: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter In your case you would need to match the locations your stimuli appear at on the screen with the gaze/fixation_positions_on_surface.csv
Great, I will work on that and try to learn about it. Thank you so much!
Hi @user-be0bae 🙂 This warning does not affect functionality, so it is not likely to be the cause of the issue. Please carefully check the eye camera cable connection, try re-starting Pupil Capture with default settings (in the main settings menu), and see if that resolves it.
Hello @user-e91538 @user-d407c1 @user-4c21e5 ! I'm having the same problem as @user-be0bae , but restarting Pupil Capture with default settings and checking the eye camera cable connection has not resolved it, unfortunately. I am running Pupil Capture with Pupil Core on a Windows 11 laptop -- I had to install the libusbK drivers for the pupil cameras (following these instructions: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md)
The error message is "Camera disconnected. Reconnecting.... '___ Found device. Could not set Value. 'Backlight Compensation', and the camera feeds are not showing up ('pupil detection', 'detect eye 0', and 'detect eye 1' are all checked).
I am setting this up to run on Linux (Ubuntu-22.04) as a subsystem of Windows 11. This headset's cameras are functioning properly on MacOS, so I don't think it's the Pupil Core hardware.
Do you have any other suggestions?
Hi @user-6586ca 🙂 The default confidence threshold is set at 0.6. This is inherited by gaze data after you have calibrated.
Thank you very much for your answer !
Hi! I have tried the Windows troubleshooting from this site (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting) and now Pupil Capture terminates by itself. I have checked the hidden devices in the device manager and I can't find "libUSBK Usb Devices" anymore. Is there a way to fix this?
Hi @user-8b2591 🙂 Have you tried deleting the pupil_capture_settings folder from your user directory? This will make Pupil Capture start from its default settings next time you open it.
My scene camera gets down to an FPS of 7, or sometimes even 5, and as a result the recording is choppy. Am I correct in saying that the data (e.g., fixation count) derived from this recording is going to be inaccurate?
Hey! I want to import my simulated environment. I have an fbx file. I want to be able to import it into Pupil Capture. How do I do that?
Hi @user-138f84 🙂. Pupil Capture is our desktop recording software used to capture eye tracking data in real-time. There's no functionality to load fbx files. If I can ask, what's your overall goal with such a use case?
Hello! I did my best to look for an answer but couldn't find it in the docs/git issues/Discord/archives. If it has been answered already, mea culpa. I am trying to recreate a gaze vector in a virtual environment. We recreated the world camera in the virtual environment using the intrinsic and extrinsic matrices. But the virtual camera is perfect in the sense that it cannot reproduce fisheye distortion, thus creating a perfect, non-distorted image of what the world camera captures. Therefore, my question is: do the exported gaze data norm_y and norm_x take distortion into account? (Meaning: is the exported data already undistorted, as if the world camera had no distortion?) Or do I have to undistort the gaze data using the camera matrix in order for norm_x and norm_y to make sense in the virtual environment? Thank you to anyone reading this. I hope you have a nice day.
Hi @user-89d824. Fixations are computed from Pupil Core's gaze data, which will more than likely have a higher sampling rate. You can calculate the sampling rate using the timestamp column of the gaze_positions.csv export.
That said, 5 fps is very low. What are the specs of your CPU? If your computer does not have sufficient computational resources, the software will dismiss/drop samples in order to keep up with the incoming real-time data. If you don't have access to a more powerful machine, one workaround is to record your experiment with real-time pupil detection disabled. That should speed things up. You can then run pupil detection and calibration in a post-hoc context: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
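A minimal sketch of that sampling-rate estimate, assuming the standard export layout:

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
dt = np.diff(gaze["gaze_timestamp"].to_numpy())
print(f"~{1.0 / np.median(dt):.1f} Hz")  # median is robust to dropped samples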
We have an Intel Xeon W-2135 CPU @ 3.70GHz, so it should be powerful enough for Pupil Core, but we do have a few things plugged in. Thanks for the suggestion. May I ask if it's necessary for the brightness level to be uniform during calibration and during the experiment?
Hi Neil! I've a simulated environment (fbx) in which I need to keep track of where the user is looking. The eye tracking data is great, but I'm unable to overlay its circle on the simulated environment, as I can't find a way to import the fbx into Pupil Capture or Pupil Player.
Thanks for clarifying! There's no support for fbx files in Pupil Player/Capture. I'd recommend having a look at our Surface Tracker plugin. This will allow you to obtain gaze in screen-based coordinates. You'd need to add AprilTag markers to your computer screen (presented digitally or physically). Further details here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Hi everyone! Does anyone know how to debug the numbers in the red circle in this diagram? Thank you very much if someone could answer my question! 🥹
Hi @user-e6ed07 🙂 The numbers you refer to indicate the fixation id in your recording - it's an index of the fixation number from the start of the recording when you have the Fixation Detector plugin enabled. Each fixation id will correspond to a row in the fixations.csv export. You can see the fixation index increase/decrease if you skip through fixations using the Next Fixation/Previous Fixation buttons in the Pupil Player window.
So I need to export fixations.csv first, then I can see the index, right? 🥲
You can see the fixation id if you select the fixation detector in Pupil Player (current fixation). This will correspond to a row in the export for that fixation.
I've found it! I can finally see it! Thank you so much! 🙂
Hi! Is it possible to run Pupil Capture under ARM?
Hi admins, everyone. Can someone provide pyuvc usage docs?
Hi, pyuvc expects specific functionality from the cameras. See https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 It is possible that your camera does not fulfil all these requirements.
How do I resolve this error? I have correctly configured my UVC camera driver as the libusbK one.
If the computer is running multiple sensors/devices, then that would likely explain the low sampling rate. Uniform ambient illumination isn't a requirement.
Thank you for confirming
Have you installed any custom plugins? Which plugins are you using?
These are the ones I have. Should I turn them all off during recording to improve the FPS?
Crash. Instruction not valid?
Hi @user-f93379. Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/956583277429420052
Does anyone know about processing the data in the pupil_positions.csv file?
Hi @user-e91538 🙂 Could you clarify which metrics you would like to extract from the pupil_positions.csv file? Pupil positions refers to the location of the pupil within the eye camera coordinate system. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions and find a breakdown of the export here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
Hi, does Pupil Labs have monocular glasses?
Hi @user-13f46a 🙂 It would be possible for us to provide you with a Pupil Core headset with only one eye camera for monocular tracking. Is there a particular reason you need this?
Hi, all. Does pyuvc have functionality to control the exposure of the camera?
Hi 🙂 Yes, if the camera exposes its exposure setting as a UVC control. See this example: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L631-L640
Thanks for the quick reply. In the Windows installation of pyuvc, I can't find the file uvc_backend.py. Where is this file located? I installed the .whl
To get an uvc.Capture instance, start by looking at https://github.com/pupil-labs/pyuvc/blob/master/example.py
uvc_backend.py is part of the Pupil Core application that uses pyuvc. In the example, self.uvc_capture is an instance of uvc.Capture.
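Putting those two references together, a minimal sketch of listing and setting UVC controls with pyuvc; the control's display name differs per camera, so "Absolute Exposure Time" is an assumption:

import uvc

dev = uvc.device_list()[0]      # first connected UVC camera
cap = uvc.Capture(dev["uid"])

for c in cap.controls:          # inspect the controls the camera exposes
    print(c.display_name, c.value)

controls = {c.display_name: c for c in cap.controls}
if "Absolute Exposure Time" in controls:   # name is camera-dependent
    controls["Absolute Exposure Time"].value = 100

cap.close()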
We need an eye tracker, but binocular glasses seem to be over our budget
I wanted to see the price difference
Please contact sales@pupil-labs.com to request a quote 🙂
Is there any way to make uncompressed formats like .bmp be supported by pyuvc?
Yes, but you would need to modify the pyuvc source code and rebuild the package. Currently, it only supports mjpeg
Surface Tracker and Head Pose Tracker are the most resource-intensive plugins in that list. You can run both plugins post-hoc, too.
Thank you!
Hey @user-e91538, I am working on my project and I need to find the pupil dilation from the csv file. The main objective of the project is to check whether the pupil size changes when someone reads sad, happy, or neutral articles.
Check out pupillometry best practices here: https://docs.pupil-labs.com/core/best-practices/#pupillometry You can get the diameter both in millimetres (provided by the 3d eye model, as @user-e3f20f referred you to) and in pixels (observed in the eye videos: diameter).
In this case, have a look at the diameter_3d column which returns the diameter in 3d. Note that you need a well-fit eye model and no slippage for these values to be accurate. Have also a look at the other fields here https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
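A minimal plotting sketch for that kind of analysis (standard Player export assumed; the 0.6 confidence threshold follows the best practices):

import pandas as pd
import matplotlib.pyplot as plt

pupil = pd.read_csv("exports/000/pupil_positions.csv")
pupil = pupil[pupil["confidence"] > 0.6]   # drop low-confidence samples

for eye, df in pupil.groupby("eye_id"):
    plt.plot(df["pupil_timestamp"], df["diameter_3d"], label=f"eye {eye}")
plt.xlabel("pupil time [s]")
plt.ylabel("pupil diameter [mm]")
plt.legend()
plt.show()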
@user-e3f20f and @user-e91538 thank you so much 🙂
@user-e3f20f @user-e91538 I am getting this type of output. Is it the right way to do the analysis?
You could try reproducing the attached figure. This was from a simple setup maintaining central fixation whilst the screen alternates from black to white. The change in luminance corresponds to an increase/decrease in pupil dilation.
Hello, since today I am facing an issue with the eye tracker. I can see that my gaze is not being detected correctly; when I look down, the eye tracker points somewhere completely far from where I am looking, and even calibration is not working anymore. I tried to reset the software, which did not work; my eye confidence is 1.00. We even noticed that fixations are detected when my eyes are closed. Any leads about what might be the issue? And how to fix/troubleshoot it?
Hi @user-eb6164 🙂. Have you changed anything with your setup, any Capture settings, swapped scene cam lenses etc.?
Hello, not at all 😦
How can I detect outliers and eye blinks from the pupil_positions.csv file?
You can use the blink detector to find blinks and apply the methodology presented in this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb
Otherwise, I would recommend discarding data points with a confidence of 0.6 or lower.
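A simplified take on the linked tutorial's approach, combining both suggestions (blink intervals from blinks.csv plus the confidence threshold):

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# Mark pupil samples that fall within any detected blink interval.
in_blink = pd.Series(False, index=pupil.index)
for _, b in blinks.iterrows():
    in_blink |= pupil["pupil_timestamp"].between(b["start_timestamp"], b["end_timestamp"])

clean = pupil[~in_blink & (pupil["confidence"] > 0.6)]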