Hi, we would like to buy a Core suite; just wondering how long it will take to ship to the UK? Many thanks.
Hey, that depends on various factors. Please contact info@pupil-labs.com with details about the purchasing entity (personal/academic/commercial).
When exporting fixations on a surface, why are there multiple entries for the same fixation_id?
Yes, that is because fixations usually span multiple scene video frames and are estimated in scene video coordinates. For every scene video frame, we get a new scene-to-surface mapping function. That means that we need to map the same fixation multiple times to the surface, once for each frame in which the surface was recognized. This is only an issue if the surface moved during the fixation (in which case it will look like the fixation moved on the surface).
So if I want to get the coordinates of a single fixation, is it viable to iterate over the entries with the same id and just take the average of the x and y positions? Looking at my data, the position changes slightly.
Correct. Small changes are expected due to the noise in the surface tracking.
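If it helps, here is a minimal sketch of that averaging using pandas. The file and column names (fixations_on_surface_<name>.csv, fixation_id, norm_pos_x, norm_pos_y) are assumptions based on a typical surface export, so adjust them to match your own files.

```python
import pandas as pd

# Load the surface fixation export (adjust the path/surface name to your export)
df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Screen.csv")

# Average the surface coordinates of each fixation across all frames it spans
fixation_positions = (
    df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]
    .mean()
    .reset_index()
)
print(fixation_positions.head())
```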
Hello, how can I calculate gaze amplitude from phi and theta? Since we have these values:
x = data.gaze_point_3d_x
y = data.gaze_point_3d_y
z = data.gaze_point_3d_z
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)  # for elevation angle defined from Z-axis down
psi = np.arctan2(z, x)
Hi @user-ef3ca7. Have a look at the first three lines of code in section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
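For context, one way to turn those values into amplitudes between successive gaze samples is to normalize the 3d gaze points into unit vectors and take the angle between consecutive vectors. This is only a sketch, assuming the standard gaze_positions.csv export with gaze_point_3d_x/y/z columns:

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Stack the 3d gaze points and normalize them to unit direction vectors
vectors = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# Amplitude between consecutive samples = angle between consecutive unit vectors
dots = np.clip(np.sum(vectors[:-1] * vectors[1:], axis=1), -1.0, 1.0)
amplitude_deg = np.degrees(np.arccos(dots))
print(amplitude_deg[:10])
```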
Hello,
In a research protocol, we would like to compare two conditions: one with an AR-HMD worn and one without. We are thinking of using the HoloLens add-on for the first condition and the Core for the second. Is the hardware (the 3 cameras) on the two devices exactly the same (for data consistency's sake)?
Thanks in advance.
Hey, the hardware is the same.
When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye is properly confident in frame. If I mark a calibration dot where I know that one eye is fixated, how does that affect the gaze estimation once both eyes come back into focus?
Hey, this is what happens under the hood:
1. Player collects high-confidence pupil data for each eye in the specified calibration range.
2. Player creates three combinations of pupil and target data: a) left pupil data + targets, b) right pupil data + targets, c) left + right pupil data + targets.
3. For each matching, it fits a separate model, i.e. two monocular models and one binocular model.
If only one eye is detected, the corresponding monocular model is used. If both eyes are detected, the binocular model will be used.
By placing a target marker while only one eye is detected well, you improve the corresponding monocular model. The binocular model is not affected, as there is no valid left-right eye pair that could be combined with the target marker.
I hope this was clear.
Hi! I have three surfaces, one big one that spans my entire screen and two smaller ones inside the big one. The small ones overlap with the big one but are very separated from each other. Any idea why my surface_events.csv looks like this? It's always entering and exiting all three at the exact same times, which is impossible.
Hi. These enter and exit events do not refer to gaze entering/exiting the surfaces but to the surfaces being detected within the image. If the surfaces are defined based on the same set of markers, the above behavior is expected.
You might want to look at the gaze_on_surface* data instead.
Looking for some advice here. In the first image, we have norm_pos represented by the green sphere, and gaze_point_3d by the blue sphere. We would like to project a gaze vector from the cyclopean eye onto the 2D world image. We assume this vector will pass through gaze_point_3d (blue sphere), but what is the assumed origin here? We assume the origin is the world camera, but there is an offset, which you can see in the second image: gaze_point_3d does not align with norm_pos. This could be due to incorrect intrinsics... any other thoughts?
We're also quite confused as to why gaze_point_3d is at 20 cm from the ~~head~~ world camera, when the calibration plane (the wall w/checkerboard) was most definitely at a further distance. We would like to use geometric camera calib on the world video to extract the distance (we don't have a good record of it, unfortunately). ...but, I believe we need to know the sensor size to estimate the checkerboard distance.
So, the second question is, any ideas why PL estimates a depth of about 20 cm ? (across multiple recordings)
Depth estimates are inaccurate due to noise in the gaze_normals. And yes, gaze_point_3d, norm_pos, and scene camera origin should lie on one line.
Let me put together a quick description with my student. I'll share later today.
gaze_point_3d is at 20 cm from the world camera.
and norm_pos?
It is the blue sphere
Norm_pos, the green sphere, is ... (asking the student)
Ah, I got those mixed up, my bad.
since we're using the normalized image coords, it's set to the depth of the image plane
...wouldn't you expect the monocular eye vectors to cross at the depth of calibration?
Here, they are calibrated to the depth of the checkerboard. I would have assumed that bundle adjustment adjusted rotation to minimize error between the mono gaze vectors and the calibration points... and that they cross near the real-world depth of the calibration targets at the time of calibration.
In this case, it is expected that the lines cross in front of the scene cam image plane, and not in it. What is unexpected is that the green sphere is so far out.
Note that this question is from a second line of work with the core, not the stuff Kevin is working on in the HMD.
The data being monocular explains the depth of 20
But that should be mm not cm
Ooh, this might explain a thing or two.
I believe we're using the calibration matrices for the binocular calibration
My student suggested the bino calibration had individual matrices for the left/right eyes
...and we're using those matrices for the position and rotation of the local eye/eye camera space.
within world camera space
So, to be clear, you're assuming vergence to a distance of 20 mm at the time of calibration.
for monocular post-hoc hmd calibrations, yes. That is the estimated distance between eyes and the hmd. Real-time hmd calibrations take the depth from the 3d target data. Normal Core 3d calibrations assume a distance of 500mm.
and the real-world distance of the calibration targets will not affect that assumption
Hi:
I just received a binocular eye tracker today, and I was trying to export the gaze position data independently (eye0 and eye1), so I dragged the dual-monocular gazer plugin into Pupil Player.
However, I somehow couldn't find the plugin in the list. Is the "dual-monocular gazer" located somewhere else in Pupil Player?
Thanks~
Hey, you need to perform post-hoc calibration to leverage that plugin. See the Gaze Data menu and https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
Thanks~
I am still learning the messaging format for communicating via the IPC backbone. Is there a way to call specific values from the real-time gaze data? (i.e. instead of printing the whole string like in the pic, could I extract and print only certain coordinates or only the timestamp?)
Hi, yes there is! After decoding the message, you get a Python dictionary. Read more about them here https://realpython.com/python-dicts/
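For example, a minimal sketch of pulling only the timestamp and gaze position out of each gaze message (assuming the default Pupil Remote address and msgpack-encoded payloads):

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data only
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)  # a regular Python dict after decoding
    # Print only the fields you care about instead of the whole datum
    print(gaze["timestamp"], gaze["norm_pos"], gaze["confidence"])
```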
hi, i am trying to get my pupil core to work on my new mac, and no footage is loading through capture... settings say Local usb disconnected. Can someone help with this issue please?
Hi @user-a02d16. You will need to start the application with administrator rights: sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture. See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5
Hi, are there any known helper scripts that extract the closest world, eye0, and eye1 images at a given Pupil time?
I want to check the eye condition and world situation at a specific moment.
e.g. python helper_script.py --pupil_time=120.101, --path="./images/
-> generate ./images/120_100_eye0.jpg, 120_102_eye1.jpg, 120_091_world.jpg
Hi @user-292135. Check out this tutorial that shows how to extract specific frames from the scene video: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. It's currently set up to find the 30-second mark (see section 8), but you can use specific pupil timestamps if required.
The tutorial will also work with eye videos. Note that it requires a *_timestamps.csv file. You'll need to run the 'Eye Video Exporter' plugin in Pupil Player to get these (one for each eye video). Alternatively, you can adapt the script to load the *timestamps.npy file using numpy.load instead (section 4 of the tutorial).
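The core of the lookup is just a nearest-timestamp search. A minimal sketch, assuming you load one of the *_timestamps.npy files from the standard recording layout:

```python
import numpy as np

target_time = 120.101  # Pupil time in seconds

# world_timestamps.npy, eye0_timestamps.npy, eye1_timestamps.npy live in the recording folder
timestamps = np.load("recording/world_timestamps.npy")

# Index of the frame whose timestamp is closest to the target time
frame_index = int(np.argmin(np.abs(timestamps - target_time)))
print(frame_index, timestamps[frame_index])
```

With that index you can grab the matching frame from the corresponding video file, as shown in the tutorial.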
Hi! We have purchased two Core devices. On one of them, the World camera won't turn on. It says "could not connect to device"! What could be wrong with the device? One is working but the other is not.
Could you please also clarify whether the two headsets are a) connected to two different computers or b) connected to the same machine?
Hi Niki. Please follow these steps to debug the driver installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Hi! We have purchased one Pupil Core device and we are thinking of purchasing two more eye cameras and attaching them to that device. How can we get all four cameras working at once?
While you can record video from all 5 cameras with two Pupil Capture instances, the software is not designed to make use of the two extra cameras. The eye cameras available in our shop are meant for replacement or upgrading older eye cameras.
Hi again, We're using blink detection for our experiment and have run into some issues with the system registering blinks for too long, or registering errant blinks, even when the eyes are open and the pupils of the participant are clearly visible. Do you have any recommendations on how we might go about fixing this, and is there a way to correct for it post-hoc as part of the test?
Hi @user-856af7. You might need to adjust the blink detector thresholds in Pupil Player. You can read about this and how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
Hi, I was wondering how the gaze_normal data column is normalized, i.e. what denominator is used?
It is normalized by dividing by its original length, such that the result has a length of 1.
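In code terms, that is just dividing the vector by its Euclidean norm, e.g.:

```python
import numpy as np

gaze_normal = np.array([0.1, -0.2, 0.97])               # example (x, y, z) values
unit_normal = gaze_normal / np.linalg.norm(gaze_normal)
print(np.linalg.norm(unit_normal))                      # -> 1.0
```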
Does the heatmap image export map onto the world camera surface?
That looks like all cameras are connected. Can you confirm that the scene video is not being displayed even though the device manager is showing all three devices? (Note the difference compared to how a disconnected device appears in the device manager.)
Hello, we are coming across another problem where it seems that the gaze recording is a bit off from where the participant was actually looking. We ran a quick experiment where we specifically looked at all 4 black and white boxes (as seen by the included images) for about 5 secs to see what gaze data would be recorded. We noticed that for multiple trials the side furthest from the center (left in this case) was the side that had the most variability. For example, picture 1 (screenshot 21) is when the subject was looking at the top left square and picture 2 (screenshot 22) is when the subject was looking at the bottom left square. For these two trials the calibration for angular accuracy was 1.46. Also, for reference the board that the squares are on is set up at an angle in respect to the subject.
Would you be able to share a recording with [email removed] so that we can provide concrete feedback? Please include the calibration choreography in the recording.
Hey there, I am wondering how to exclude blinks from raw pupil data (csv format). I understand the 'blink detection' you guys built lets you select the filter, onset, and offset. Since I want to do this post-hoc though I figured I could remove all my data points with a confidence below a certain threshold. What number do you suggest using? Also if you could point me towards documentation on how the confidence value is established that would be helpful. Thanks.
Hi @user-97ca10. Check out this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb It demonstrates how you can relate the different exports to each other by time. This should allow you to write a short routine that finds pupil datums that belong to a blink and discards them.
Confidence is mainly calculated within the 2d pupil detector. It looks for pupil edges and fits an ellipse to them. In short, the confidence is high if the fitted ellipse matches the detected edges well. For more information, see http://arxiv.org/abs/1405.0006 If you are interested in the exact implementation, I can look that up, too.
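A minimal sketch of that post-hoc filtering, assuming the standard pupil_positions.csv and blinks.csv exports (column names may differ slightly between versions):

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# 1. Drop low-confidence samples (0.6 is the default threshold used by the software)
pupil = pupil[pupil["confidence"] >= 0.6]

# 2. Drop samples that fall inside a detected blink interval
for _, blink in blinks.iterrows():
    in_blink = pupil["pupil_timestamp"].between(
        blink["start_timestamp"], blink["end_timestamp"]
    )
    pupil = pupil[~in_blink]
```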
Hey there, where can I find the data files? Thank you!
Pupil Capture creates a recordings folder in your home folder. Each recording is a folder with multiple files. You can use Pupil Player to view the recording: https://docs.pupil-labs.com/core/software/pupil-player/
Hi there, may I know if there's a time limit for recording? My experiment will likely last up to at least 1.5 hours. Is that going to be a problem for Pupil Core?
We'd definitely recommend splitting your experiment into smaller chunks. If you haven't already, check out the best practices page: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks
Hi,
I had a question regarding the pupil invisible glasses. I saw that the link to apply offset correction wasn't working.
https://docs.pupil-labs.com/invisible/how-tos/tools/apply-offset-correction
Is there an alternative place where I could look up the procedure?
Thanks.
Let me try to fix that.
Hi @user-d4d4bc! We have made a mistake leaving that link in the docs! We were considering such an article, but have opted for in-app documentation instead. You can find instructions on how to do the offset correction in the Companion app by opening the wearer profile of the currently active wearer (by clicking on the wearer name on the home screen) and then clicking "Adjust".
We'll remove the link from the docs. Thanks again for reporting the error!
Ah cool. Thank you. And I just realized I posted this in the wrong thread. I'll try to do it right next time.
Hi:
I've got a question regarding the gaze accuracy.
While I can see the gaze accuracy and precision after each calibration, I would like to know whether it's possible to find them in the exported files (.csv), or whether they are already somewhere in the folder?
Many Thanks
They are not stored as part of the recording. The results are only published as logging messages, which you would need to receive/parse in real time.
Hi, I want to do eye tracking on a mobile phone. For this, should I calibrate on the laptop with screen markers, or should I print out Pupil calibration markers and attach them to the phone screen? And in what range should the angular accuracy and precision values be for me to get the best result? When I calibrate and test with screen markers I get around 2.6 angular accuracy and 0.2 angular precision. Is this useful?
Hi @user-aaa726. I'd recommend using the physical marker, making sure to cover the area of the visual field that the phone occupies. Also, calibrate at the viewing distance you'll be recording, e.g. arm's length. For instructions on how to use the physical marker, see this page: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker.
You can determine how much calibration accuracy you'll need with this calculation: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246
Note that investigating gaze in the context of a phone screen will require robust pupil detection and good calibration. Here is a reasonable best-case example of gazing on a mobile phone screen using Core: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing
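As a rough worked example of that calculation (the phone size and viewing distance here are assumptions, not measurements):

```python
import math

phone_width_cm = 7.0   # assumed phone width
distance_cm = 40.0     # assumed viewing distance (roughly arm's length)

# Visual angle subtended by the phone screen width
visual_angle = 2 * math.degrees(math.atan(phone_width_cm / (2 * distance_cm)))
print(round(visual_angle, 1))  # ~10 degrees

# Fraction of the screen width covered by a 2.6 degree accuracy error
print(round(2.6 / visual_angle, 2))  # ~0.26
```

In other words, with 2.6° accuracy the gaze error would span roughly a quarter of the phone's width in this assumed setup, which is likely too coarse for distinguishing small on-screen elements.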
eye movement calculation
Hey guys, newbie here. I was wondering whether it is possible to integrate Pupil Capture into a custom mobile app (call some API or something): basically, whenever the app is launched I want to start recording. Thanks
Hi. Pupil Capture has a network API that you could call from your app: https://docs.pupil-labs.com/developer/core/network-api/ Note that Capture will require a desktop PC to run.
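For reference, a minimal sketch of starting and stopping a recording over that network API (assuming Capture is running locally with the default Pupil Remote port):

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default address

# Start a recording (optionally pass a session name, e.g. "R my_session")
pupil_remote.send_string("R")
print(pupil_remote.recv_string())

# ... later, stop the recording
pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```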
the companion app and my custom android app
In the case of Pupil Invisible, you need to use a different API, see https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html (starting/stopping a recording is a simple HTTP request). But yes, both apps would need to be running.
Hello! Did anyone have experience sending triggers from MATLAB to pupil cam? Can the exported file save the triggers along with the timestamp? Thank you!!!!
Hi @user-219de4. Check out our Matlab helpers repository: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/send_annotation.m
I think there might be a typo on page https://docs.pupil-labs.com/core/software/pupil-capture/. In the Plugins -> Surface Tracking -> Defining a Surface part. This is the text, and I'm not sure if this is correct. If markers have been erroneously added or left out, click the add/remove markers button and afterwards onto the according marker to add/remove them from your surface.
I think instead of "according" the word might have been "corresponding".
Hi, I am an MSc Computer Science student using Pupil Core for research. I am using a mobile phone with the Pupil Mobile app for capture, hence I need to do post-hoc calibration. In my data, the calibration marker was accidentally left in the frame and is causing the gaze data to be off. I'm trying to calibrate using only a subset of the video (using trim marks), but when I come to collect the references (detected circle markers in the recording) I can't seem to 'set from trim marks'. I've moved the trim marks to the correct places but the button does nothing and continues to say 'Collect References in: 00:0'. Can anyone advise on this?
Hi! Sounds like you are doing everything correctly. If you increase the width of the menu the full message should appear.
Indeed it does! Although the gaze data doesn't follow the marker very well when the participant's head moves in circles (the detected fixations don't stay in the middle of the marker, but circle around the marker as the head moves).
In the gaze mapping section, there is a validation menu. Please set it to the same trim marks and run the validation. What does the accuracy say?
For more concrete feedback, please share the recording with data@pupil-labs.com There are many reasons that can influence the quality of the gaze estimation.
Looks like the issue is low confidence pupil data. 90% of your samples have been discarded for the calibration calculation. Please check if you can get better pupil data by tweaking the parameters during post-hoc pupil detection
To visualize pupil detection quality, it can be helpful to enable the eye video overlay.
Thanks! Looks like for the first few seconds of the data the pupil detection is struggling to settle. Once settled, the detection is much better. I have removed these first few seconds and the gaze data is already visually much closer, and the validation reflects that!
Small trick for post-hoc pupil detection: You can restart pupil detection while it is still running and it will keep the current eye model. This way you can apply a more stable model from the beginning.
Cheers for all the help!
Hi, I am trying to figure out how to include the software methodology for calculating the post-hoc gaze validation function in Pupil Player in a research paper. I have not found anything explaining that on the website nor on discord so far. Can you point me in the right direction? Thank you!
For Pupil Core, the calibration methodology is the same for realtime and post-hoc. I suggest referring to the corresponding documentation web-pages.
I have also seen that you have contacted us via email with similar questions. Regarding how accuracy and precision are calculated, we will add information about this here: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
Hi, check out the Pupil Core tech report: https://arxiv.org/abs/1405.0006 See the Spatial Accuracy and Precision section. That document can also be referenced in your paper.
Okay, thank you for your help! Sorry for the double contact, I wasn't sure which would be best.
I'm using the surface tracker plugin to develop code that returns the specific surface the gaze point falls on when three surfaces defined for a single world frame are visible. For example, if the gaze point is on surface2, surface2 is output.
I need your advice on the following issues.
Gaze on surface
Hi, I was using Pupil Player for post-hoc head pose tracking with AprilTags, and I was unable to find the head_pose_tracker_poses.csv file, which contains the pose of the world camera. Where does this file get saved?
Hi @user-1bda7f. Please ensure that you've:
1. Gone through all steps using the Head Pose Tracker plugin, i.e. detect markers, 3d model and camera localisation, and that they were successful
2. Run the raw data exporter with the Head Pose Tracker plugin enabled (check out the export docs here: https://docs.pupil-labs.com/core/software/pupil-player/#export)
Then the head_pose_tracker_poses.csv will be in the export folder.
Yeah, so we've been following the instructions at https://zenodo.org/record/201933#.Ytlhjy-B10s
Firstly, please make sure you are using the latest version of Pupil Capture (https://github.com/pupil-labs/pupil/releases/tag/v3.5). If the scene video isn't showing, follow these driver debugging steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Have you also downloaded the app from this link? The linked version is very old. The driver instructions should be up to date though.
but Zadig is not finding our headcams
Also, as a separate question: we are trying to run a dual-headcam setup - can we run two headcam inputs from one Pupil Capture session?
If you do not need eye video or pupil data, you could select the second scene camera as an eye camera and use Player's Eye Video Overlay. If you need the recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances.
Hi! I am trying to do calibration with a Vive headset in Unity but I keep getting this message in Pupil Capture. Also, it stops responding in the middle of the calibration. Does anyone know how to fix this? I have added the gaze tracker to an existing Unity project.
this is what I am getting
Is there any way to install capture, player and service on a computer that operates using Windows 11?
@papr Forwarding a question from @user-6ec20c:
Is there any way to install capture, player and service on a computer that operates using Windows 11?
Should be the same as on Windows 10
Thanks @papr! @user-6ec20c have you tried going to this page and clicking "Download Desktop Software" already? https://pupil-labs.com/products/core/
@user-6ec20c Users have reported being able to use Pupil desktop software on Windows 11. See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/933755742509465610 Just try searching "Windows 11" in Discord; there are other messages for reference too.
Hello, I am new to this community and I have a couple of questions concerning Pupil Labs. Where can I post these questions?
Welcome to the community, @user-eb6164. Feel free to post your questions here! (If you want some tips, check out the community guidelines: https://discord.com/channels/285728493612957698/983612525176311809)
Thank you so much!
I tried to find my answer but could not. I am working on my research and am still in the process of developing my research questions and designing my experiments. I am working on a driving simulator that contains three screens. We need to extract specific eye metrics (such as fixations and visual dispersion) from different AOIs such as billboards, road signs, vehicles, mirrors, and the road. I have read that setting AOIs can be done only with markers. Markers will not help me here, since these objects are moving and located inside the screen (very small). Is there any way I can set dynamic AOIs? Is there any free tool that can help us extract these data, or should we do manual coding only? Also, if I am working on three screens, should I calibrate on each screen?
You'll likely want to present four markers on each screen (one in each corner). These markers can be used to generate multiple AOIs on each screen. I'd recommend downloading this example recording that has markers in view and loading it into Pupil Player. That'll give you a better sense of how surface tracking + AOIs work: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view
With that said, you are presenting moving stimuli, so static AOIs might not be appropriate. This leads to the question: do you know the on-screen coordinates of what is being presented?
If so, you could have one big AOI that covers each screen, and you'll get x,y coordinates of gaze relative to each screen. Then it would be a case of correlating gaze with your on-screen stimuli coordinates, thereby automating your analysis (a small sketch of this follows below).
If not, then manual coding is certainly an option, albeit a time-consuming one. Check out our annotation plugin: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player
You should only need to calibrate using one screen. When using the 3d pipeline, the calibration will extrapolate beyond the calibrated area.
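To illustrate the screen-wide AOI idea mentioned above: here is a minimal sketch that converts the surface gaze export to screen pixels and checks it against an on-screen bounding box. The file/column names (gaze_positions_on_surface_<name>.csv, x_norm, y_norm, on_surf), the screen resolution, and the billboard box are all assumptions for the example:

```python
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen1.csv")
gaze = gaze[gaze["on_surf"].astype(str) == "True"]

# Normalized surface coordinates -> screen pixels
# (surface y typically starts at the bottom, screen pixel y at the top)
gaze["px"] = gaze["x_norm"] * SCREEN_W
gaze["py"] = (1 - gaze["y_norm"]) * SCREEN_H

# Hypothetical on-screen bounding box of a stimulus at one moment in time
billboard = {"x": 600, "y": 200, "w": 300, "h": 150}
on_billboard = gaze[
    gaze["px"].between(billboard["x"], billboard["x"] + billboard["w"])
    & gaze["py"].between(billboard["y"], billboard["y"] + billboard["h"])
]
print(len(on_billboard), "gaze samples on the billboard")
```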
@user-219de4 One other suggestion would be to perform the procedure, step by step, with the exact versions of the linked software, on a fresh system. As mentioned before, it might be that something in your existing setup is interfering with the driver installation.
Hello, please help! I do not know what's wrong with the eye-tracking device. There is no view in the eye 1 window, and the world window showed "EYE1: Could not set the value. 'Backlight Compensation'". You can see the pictures for specific information. Thank you so much! @papr
Hi @user-be0bae. This warning does not affect functionality, so it is not likely to be the cause of the issue. Please carefully check the eye camera cable connection, try re-starting Pupil Capture with default settings from the main settings menu, and see if that resolves it.
Hi everyone! I'm preparing an experiment with an AOI, using Pupil Core. I have a question about the gaze position data. The "confidence" value varies from 0 to 1, where 1 indicates perfect confidence in a gaze position. Can you advise a threshold for useful data? I found that for pupil positions, useful data carries a confidence value greater than ~0.6. Is that relevant for gaze too? Thank you for your answer and your help!
Hi @user-6586ca. The default confidence threshold is set at 0.6. This is inherited by gaze data after you have calibrated.
Hi! I have tried the Windows troubleshooting from this site (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting) and now Pupil Capture terminates by itself. I have checked the hidden devices in Device Manager and I can't find "libUSBK Usb Devices" anymore. Is there a way to fix this?
Hi @user-8b2591, have you tried deleting the pupil_capture_settings folder from your user directory? This will make Pupil Capture start from its default settings the next time you open it.
My scene camera gets to an FPS of 7 or sometimes even 5, and as a result the recording is choppy. Am I correct in saying that the data (e.g., fixation count) derived from this recording is going to be inaccurate?
Hi @user-89d824. Fixations are computed from Pupil Core's gaze data, which will more than likely have a higher sampling rate. You can calculate the sampling rate using the timestamp column of the gaze_positions.csv export.
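A minimal sketch of that calculation, assuming the standard export with a gaze_timestamp column:

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
timestamps = gaze["gaze_timestamp"].to_numpy()

# Effective sampling rate = number of samples / recording duration
duration = timestamps[-1] - timestamps[0]
print(f"~{len(timestamps) / duration:.1f} Hz")

# The distribution of inter-sample intervals is also informative
print(np.median(np.diff(timestamps)))
```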
That said, 5 fps is very low. What are the specs of your CPU? If your computer does not have sufficient computational resources, the software will dismiss/drop samples in order to keep up with the incoming real-time data. If you don't have access to a more powerful machine, one workaround is to record your experiment with real-time pupil detection disabled. That should speed things up. You can then run pupil detection and calibration in a post-hoc context: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Hey! I want to import my simulated environment. I have an fbx file. I want to be able to import it into Pupil Capture. How do I do that?
Hi @user-138f84. Pupil Capture is our desktop recording software used to capture eye tracking data in real time. There's no functionality to load fbx files. If I can ask, what's your overall goal with such a use case?
Hello ! I did my best to look for an answer but couldn't find it in the doc/git issues/discord/archives. If it has been already answered, mea culpa. I am trying to recreate a gaze vector in a virtual environment. We recreated the world camera in the virtual environment using intrinsic and extrinsic matrix. But the virtual camera is perfect in the sense that it can not reproduce fisheye distortion, thus creating a perfect, non-distorted image of what the world camera captures. Therefore, my question is : does the exported gaze data norm_y and norm_x take distortion into account ? (Meaning : is the exported data already undistorted as if the world camera had no distortion ?) Or do I have to undistort gaze data using camera matrix, in order for norm_x and norm_y to make sense in the virtual environment ? Thank you for anyone reading this. I hope you have a nice day.
Hi everyone! Does anyone know how to debug the numbers in the red circle in this diagram? Thank you very much if someone could answer my question!
Hi @user-e6ed07. The numbers you refer to indicate the fixation id in your recording - it's an index of the fixation number from the start of the recording when you have the Fixation Detector plugin enabled. Each fixation id will correspond to a row in the fixations.csv export. You can see the fixation index increase/decrease if you skip through fixations using the Next Fixation/Previous Fixation buttons in the Pupil Player window.
Hi! Is it possible to run Pupil Capture under ARM?
Crash. Instruction not valid?
Hi admins, everyone. Can someone provide pyuvc usage docs?
Hi, pyuvc expects specific functionality from the cameras. See https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 It is possible that your camera does not fulfil all these requirements.
How do I resolve this error? I have correctly configured my UVC camera driver as the libusbk one.
Does anyone know about processing the data in the pupil_positions.csv file?
Hi @user-183822. Could you clarify which metrics you would like to extract from the pupil_positions.csv file? Pupil positions refer to the location of the pupil within the eye camera coordinate system. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions and find a breakdown of the export here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
Hi, does Pupil Labs have monocular glasses?
Hi @user-13f46a. It would be possible for us to provide you with a Pupil Core headset with only one eye camera for monocular tracking. Is there a particular reason you need this?
Hi all, does pyuvc have functionality to control the camera's exposure?
Hi, yes, if the camera exposes its exposure setting as a UVC control. See this example: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L631-L640
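A minimal sketch of controlling exposure via pyuvc, based on the pattern in that linked backend. The control names here ("Auto Exposure Mode", "Absolute Exposure Time") and the values are assumptions; which controls exist depends on your camera:

```python
import uvc

# Pick the first connected UVC camera (match by name/uid in a real script)
device = uvc.device_list()[0]
cap = uvc.Capture(device["uid"])

# Controls are exposed by display name; print what your camera offers
controls = {c.display_name: c for c in cap.controls}
print(list(controls))

# Switch to manual exposure and set a value, if these controls are available
if "Auto Exposure Mode" in controls:
    controls["Auto Exposure Mode"].value = 1  # manual mode on many UVC cameras
if "Absolute Exposure Time" in controls:
    controls["Absolute Exposure Time"].value = 60

cap.close()
```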
We need an eye tracker, but binocular glasses seem to be over our budget.
I wanted to see the price difference.
Please contact sales@pupil-labs.com to request a quote.
@papr and @user-9429ba thank you so much!
@papr @user-9429ba I am getting this type of output. Is this the right way to do the analysis?
You could try reproducing the attached figure. This was from a simple setup maintaining central fixation whilst the screen alternates from black to white. The change in luminance corresponds to an increase/decrease in pupil dilation.
Hello, I am facing an issue with the eye tracker that started today. I can see that my gaze is not being detected correctly: when I look down, the eye tracker points somewhere completely far from where I am looking, and even calibration is not working anymore. I tried resetting the software, which did not work; my eye confidence is 1.00. We even noticed that fixations are detected when my eyes are closed. Any leads about what might be the issue, and how to fix/troubleshoot it?
Hi @user-eb6164. Have you changed anything with your setup, any Capture settings, swapped scene cam lenses, etc.?
Could you share a recording with us that demonstrates the issue? Ideally, it would include the calibration choreography. Please share it with data@pupil-labs.com
How can I detect outliers and eye blinks in the pupil_positions.csv file?
You can use the blink detector to find blinks and apply the methodology presented in this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb
Otherwise, I would recommend discarding data points with a confidence of 0.6 or lower.