core


user-3cee2f 04 July, 2022, 13:05:44

Hi, we would like to buy a Core suite, just wondering how long it will take to ship it to the UK? Many thanks πŸ™‚

papr 04 July, 2022, 13:27:57

Hey, that depends on various factors. Please contact info@pupil-labs.com with details about the purchasing entity (personal/academic/commercial).

user-6e3d0f 05 July, 2022, 09:52:30

When exporting fixations on a surface, why are there multiple entries for the same fixation_id?

papr 05 July, 2022, 10:08:10

Yes, that is because fixations usually span multiple scene video frames and are estimated in scene video coordinates. For every scene video frame, we get a new scene-to-surface mapping function. That means that we need to map the same fixation multiple times to the surface, once for each frame in which the surface was recognized. This is only an issue if the surface moved during the fixation (in which case it will look like the fixation moved on the surface).

user-6e3d0f 05 July, 2022, 10:12:18

So if I want to get the coordinates of a single fixation, is it viable to iterate over the rows with the same id and just take the average of, say, the x and y positions? Because looking at my data, the position changes slightly.

papr 05 July, 2022, 10:13:29

Correct. Small changes are expected due to the noise in the surface tracking.
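For reference, a minimal sketch of that averaging step, assuming pandas and the column names used by recent Player surface exports (fixation_id, norm_pos_x, norm_pos_y, on_surf); the file path is a placeholder, so check your own export header:

```python
import pandas as pd

# Placeholder path - point this at your own surface export
df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Surface1.csv")

# Keep only rows where the fixation actually fell on the surface
df = df[df["on_surf"] == True]

# There is one row per scene-video frame in which the surface was detected,
# so average the mapped position per fixation_id to get a single coordinate
per_fixation = (
    df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]]
    .mean()
    .reset_index()
)
print(per_fixation.head())
```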

user-ef3ca7 05 July, 2022, 19:16:39

Hello, how can I calculate gaze amplitude from phi and theta? We have these values:
x = data.gaze_point_3d_x
y = data.gaze_point_3d_y
z = data.gaze_point_3d_z
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)  # for elevation angle defined from Z-axis down
psi = np.arctan2(z, x)

nmt 06 July, 2022, 06:35:40

Hi @user-ef3ca7 πŸ‘‹. Have a look at the first three lines of code in section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
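In case the notebook is unavailable, here is a rough sketch of the idea: the amplitude between two gaze samples is the angle between their 3d gaze directions, which can be computed from gaze_point_3d without going through phi/theta (the sample values below are made up):

```python
import numpy as np

# Made-up gaze_point_3d samples - replace with your exported data
gaze_points = np.array([
    [10.0, -5.0, 400.0],
    [12.0, -4.0, 410.0],
    [60.0, 20.0, 380.0],
])

# Unit gaze direction vectors (origin = scene camera)
directions = gaze_points / np.linalg.norm(gaze_points, axis=1, keepdims=True)

# Amplitude between consecutive samples = angle between the two directions
dots = np.clip(np.sum(directions[:-1] * directions[1:], axis=1), -1.0, 1.0)
amplitude_deg = np.degrees(np.arccos(dots))
print(amplitude_deg)
```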

user-be99cd 06 July, 2022, 07:42:20

Hello,

In a research protocol, we would like to compare two conditions: one with an AR-HMD worn and another without. We are thinking of using the HoloLens add-on for the first condition and the Core for the second. Is the hardware (the 3 cameras) on the two devices exactly the same (for data consistency's sake)?

Thanks in advance.

papr 06 July, 2022, 07:43:36

Hey, the hardware is the same. πŸ™‚

user-b9005d 06 July, 2022, 12:30:28

When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye is properly confident in frame. If I mark a calibration dot where I know that one eye is fixated, how does that affect the gaze estimation once both eyes come back into focus?

papr 06 July, 2022, 12:37:12

Hey πŸ‘‹ This is what happens under the hood:
1. Player collects high confidence pupil data for each eye in the specified calibration range.
2. Player creates three combinations of pupil and target data: a) left pupil data + targets, b) right pupil data + targets, c) left + right pupil data + targets.
3. For each matching, it fits a separate model, i.e. two monocular models and one binocular model.

If only one eye is detected, the corresponding monocular model is used. If both eyes are detected, the binocular model will be used.

By placing a target marker while only one eye is detected well, you improve the corresponding monocular model. The binocular model is not affected, as there is no valid left-right eye pair that could be combined with the target marker.

I hope this was clear.
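To illustrate the resulting mapping behaviour, a simplified sketch of the selection logic (this is illustrative pseudologic, not the actual Pupil source; the model interface and the 0.6 threshold are assumptions):

```python
def map_gaze(left_pupil, right_pupil, models, conf_threshold=0.6):
    """Pick which fitted model maps a given pupil-datum pair to gaze."""
    left_ok = left_pupil is not None and left_pupil["confidence"] >= conf_threshold
    right_ok = right_pupil is not None and right_pupil["confidence"] >= conf_threshold

    if left_ok and right_ok:
        return models["binocular"].map(left_pupil, right_pupil)
    if left_ok:
        return models["left"].map(left_pupil)
    if right_ok:
        return models["right"].map(right_pupil)
    return None  # no usable pupil datum for this moment
```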

user-75df7c 07 July, 2022, 08:21:31

Hi! I have three surfaces, one big one that spans my entire screen and two smaller ones inside the big one. The small ones overlap with the big one but are very separated from each other. Any idea why my surface_events.csv looks like this? It's always entering and exiting all three at the exact same times, which is impossible.

Chat image

papr 07 July, 2022, 08:26:06

Hi πŸ‘‹ These enter and exit events do not refer to gaze entering/exiting the surfaces but the surfaces being detected within the image. If the surfaces are defined based on the same set of markers, the above behavior is expected.

papr 07 July, 2022, 08:26:54

You might want to look at the gaze_on_surface* data instead πŸ™‚
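For example, a small sketch of working with that export; the file and column names follow recent Player surface exports (gaze_positions_on_surface_<name>.csv with x_norm, y_norm, on_surf), so double-check your own header:

```python
import pandas as pd

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")

# Keep only gaze samples that actually landed on the surface
on_surface = gaze[gaze["on_surf"] == True]

# Rough dwell estimate: fraction of exported gaze samples on this surface
print(len(on_surface) / len(gaze))
```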

user-b14f98 07 July, 2022, 13:50:18

Looking for some advice here. In the first image, we have norm_pos represented by the green sphere, and gaze_point_3d by the blue sphere. We would like to project a gaze vector from the cyclopean eye onto the 2D world image. We assume this vector will pass through gaze_point_3d (blue sphere), but what is the assumed origin here? We assume that the origin is the world camera, but there is an offset that you can see in the second image, which shows that gaze_point_3d does not align with norm_pos. This could be due to incorrect intrinsics... any other thoughts?

Chat image Chat image

user-b14f98 07 July, 2022, 13:55:09

We're also quite confused as to why gaze_point_3d is at 20 cm from the ~~head~~ world camera, when the calibration plane (the wall w/checkerboard) was most definitely at a further distance. We would like to use geometric camera calib on the world video to extract the distance (we don't have a good record of it, unfortunately). ...but, I believe we need to know the sensor size to estimate the checkerboard distance.

user-b14f98 07 July, 2022, 13:55:43

So, the second question is, any ideas why PL estimates a depth of about 20 cm ? (across multiple recordings)

papr 07 July, 2022, 14:17:12

Depth estimates are inaccurate due to noise in the gaze_normals. And yes, gaze_point_3d, norm_pos, and scene camera origin should lie on one line.

user-b14f98 07 July, 2022, 14:32:33

Let me put together a quick description with my student. I'll share later today.

user-b14f98 07 July, 2022, 14:35:03

gaze_point_3d is at 20 cm from the world camera.

papr 07 July, 2022, 14:35:14

and norm_pos?

user-b14f98 07 July, 2022, 14:35:15

It is the blue sphere

user-b14f98 07 July, 2022, 14:35:29

Norm_pos, the green sphere, is ... (asking the student)

papr 07 July, 2022, 14:36:01

Ah, I got those mixed up, my bad.

user-b14f98 07 July, 2022, 14:35:57

since we're using the normalized image coords, it's set to the depth of the image plane

user-b14f98 07 July, 2022, 14:36:46

...wouldn't you expect the monocular eye vectors to cross at the depth of calibration?

user-b14f98 07 July, 2022, 14:37:23

Here, they are calibrated to the depth of the checkerboard. I would have assumed that bundle adjustment adjusts rotation to minimize error between the mono gaze vectors and the calibration points ... and that they cross near the real-world depth of the calibration targets at the time of calibration.

papr 07 July, 2022, 14:37:26

In this case, it is expected that the lines cross in front of the scene cam image plane, and not in it. What is unexpected is that the green sphere is so far out.

papr 07 July, 2022, 14:38:35

https://github.com/pupil-labs/pupil/pull/2176/files#diff-db1863be9d238abfc9904434bd833f9418b8707ec391f1cbcca7099a5ee9c4a3R192

user-b14f98 07 July, 2022, 14:45:36

Note that this question is from a second line of work with the core, not the stuff Kevin is working on in the HMD.

papr 07 July, 2022, 14:38:56

The data being monocular explains the depth of 20

papr 07 July, 2022, 14:39:09

But that should be mm not cm

user-b14f98 07 July, 2022, 14:39:21

Ooh, this might explain a thing or two.

user-b14f98 07 July, 2022, 14:39:22

I believe we're using the calibration matrices for the binocular calibration

user-b14f98 07 July, 2022, 14:39:46

My student suggested the bino calibration had individual matrices for the left/right eyes

user-b14f98 07 July, 2022, 14:40:09

...and we're using those matrices for the position and rotation of the local eye/eye camera space.

user-b14f98 07 July, 2022, 14:40:19

within world camera space

user-b14f98 07 July, 2022, 14:44:29

So, to be clear, you're assuming vergence to a distance of 20 mm at the time of calibration.

papr 07 July, 2022, 14:47:39

for monocular post-hoc hmd calibrations, yes. That is the estimated distance between eyes and the hmd. Real-time hmd calibrations take the depth from the 3d target data. Normal Core 3d calibrations assume a distance of 500mm.

user-b14f98 07 July, 2022, 14:44:51

and the real-world distance of the calibration targets will not affect that assumption

user-04dd6f 07 July, 2022, 16:49:30

Hi:

I just received a binocular eye tracker today, and I was trying to export the gaze position data independently (eye0 and eye1), hence I dragged the dual-monocular gazer plugin into Pupil Player.

However, I somehow couldn't find the plugin in the list. Is the "dual-monocular gazer" located somewhere else in Pupil Player?

Thanks~

Chat image

papr 07 July, 2022, 18:58:17

Hey, you need to perform post-hoc calibration to leverage that plugin. See the Gaze Data menu and https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration

user-04dd6f 07 July, 2022, 20:25:36

Thanks~

user-c01a3e 07 July, 2022, 17:03:19

I am still learning the messaging format for communicating via the IPC backbone. Is there a way to call specific values from the real-time gaze data? (i.e. instead of printing the whole string like in the pic, could I extract and print only certain coordinates or only the timestamp?)

Chat image

papr 07 July, 2022, 18:59:30

Hi, yes there is! After decoding the message, you get a Python dictionary. Read more about them here https://realpython.com/python-dicts/
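A minimal sketch of that, based on the documented Network API; the address and port assume the default Pupil Remote settings with Capture running on the same machine:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default: 127.0.0.1:50020) for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data only
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)  # -> Python dict
    # Extract only the fields you are interested in
    print(gaze["timestamp"], gaze["norm_pos"], gaze["confidence"])
```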

user-a02d16 09 July, 2022, 10:48:22

Hi, I am trying to get my Pupil Core to work on my new Mac, and no footage is loading in Capture... the settings say Local USB is disconnected. Can someone help with this issue please?

nmt 11 July, 2022, 08:46:00

Hi @user-a02d16 πŸ‘‹. You will need to start the application with administrator rights. sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture. See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-292135 11 July, 2022, 07:21:52

Hi, are there any known helper scripts that extract the closest world, eye0, and eye1 images at a given Pupil time? I want to check the eye condition and world situation at a specific moment, e.g. python helper_script.py --pupil_time=120.101 --path="./images/" -> generates ./images/120_100_eye0.jpg, 120_102_eye1.jpg, 120_091_world.jpg

nmt 11 July, 2022, 13:10:44

Hi @user-292135 πŸ‘‹. Check out this tutorial that shows how to extract specific frames from the scene video: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. It's currently set up to find the 30-second mark (see section 8), but you can use specific pupil timestamps if required.

The tutorial will also work with eye videos. Note that it requires a *_timestamps.csv file. You'll need to run the 'Eye Video Exporter' plugin in Pupil Player to get these (one for each eye video). Alternatively, you can adapt the script to load the *timestamps.npy file using numpy.load instead (section 4 of the tutorial).
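A rough sketch of the alternative approach using the raw recording files directly (world.mp4 / eye0.mp4 / eye1.mp4 plus their *_timestamps.npy files); note that frame seeking with OpenCV can be inaccurate for some codecs, which is why the tutorial uses pyav:

```python
import cv2
import numpy as np

recording = "path/to/recording"  # placeholder - your raw recording folder
target_time = 120.101            # Pupil time in seconds

def save_closest_frame(video_name, out_name):
    timestamps = np.load(f"{recording}/{video_name}_timestamps.npy")
    idx = int(np.argmin(np.abs(timestamps - target_time)))

    cap = cv2.VideoCapture(f"{recording}/{video_name}.mp4")
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek; may be approximate for some codecs
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(out_name, frame)
        print(f"{video_name}: frame {idx} at t={timestamps[idx]:.3f}")

for name in ("world", "eye0", "eye1"):
    save_closest_frame(name, f"{name}_at_{target_time}.jpg")
```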

user-f93379 11 July, 2022, 16:35:42

Hi! We have purchased two Core devices. On one of them, the World camera won't turn on. It says "could not connect to device"! What could be wrong with the device? One is working but the other is not.

papr 12 July, 2022, 09:06:18

Could you please also clarify if the two headsets are being a) connected to two different computers or b) to the same machine.

user-f93379 11 July, 2022, 16:36:01

Chat image

nmt 11 July, 2022, 17:12:53

Hi Niki. Please follow these steps to debug the driver installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-b07d3f 11 July, 2022, 16:45:40

Hi! We have purchased one Pupil Core device and we are thinking of purchasing two more eye cameras and attaching them to that device. How can we get all four cameras working at once?

papr 12 July, 2022, 07:09:19

While you can record video from all 5 cameras with two Pupil Capture instances, the software is not designed to make use of the two extra cameras. The eye cameras available in our shop are meant for replacement or upgrading older eye cameras.

user-856af7 11 July, 2022, 18:38:38

Hi again, We're using blink detection for our experiment and have run into some issues with the system registering blinks for too long, or registering errant blinks, even when the eyes are open and the pupils of the participant are clearly visible. Do you have any recommendations on how we might go about fixing this, and is there a way to correct for it post-hoc as part of the test?

nmt 12 July, 2022, 06:29:04

Hi @user-856af7. You might need to adjust the blink detector thresholds in Pupil Player. You can read about this and how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector

user-26b243 11 July, 2022, 19:01:14

Hi, I was wondering how gaze_normal data column was normalized as in what denominator was used?

papr 12 July, 2022, 06:34:06

It is normalized by dividing by its original length, such that the result has a length of 1.
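In other words (numbers made up):

```python
import numpy as np

gaze_normal = np.array([0.12, -0.05, 0.99])       # example raw direction
unit = gaze_normal / np.linalg.norm(gaze_normal)  # divide by its own length
print(np.linalg.norm(unit))                       # -> 1.0
```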

user-9e284e 12 July, 2022, 01:49:32

Does the heatmap image export map onto the world camera surface?

user-f93379 12 July, 2022, 09:00:46

Chat image

user-f93379 12 July, 2022, 10:34:31

Chat image

papr 12 July, 2022, 11:49:13

That looks like all cameras are connected. Can you confirm that the scene video is not being displayed even though the device manager is showing all three devices? (mind the difference to a disconnected device in the device manager)

user-26b243 12 July, 2022, 17:58:59

Hello, we are coming across another problem where it seems that the gaze recording is a bit off from where the participant was actually looking. We ran a quick experiment where we specifically looked at all 4 black and white boxes (as seen in the included images) for about 5 seconds to see what gaze data would be recorded. We noticed that for multiple trials the side furthest from the center (left in this case) was the side that had the most variability. For example, picture 1 (screenshot 21) is when the subject was looking at the top left square and picture 2 (screenshot 22) is when the subject was looking at the bottom left square. For these two trials the calibration angular accuracy was 1.46. Also, for reference, the board that the squares are on is set up at an angle with respect to the subject.

Chat image Chat image

nmt 12 July, 2022, 19:59:18

Would you be able to share a recording with [email removed] such that we can provide concrete feedback? Please include the calibration choreography in the recording πŸ™‚

user-97ca10 13 July, 2022, 17:17:34

Hey there, I am wondering how to exclude blinks from raw pupil data (csv format). I understand the 'blink detection' you guys built lets you select the filter, onset, and offset. Since I want to do this post-hoc though I figured I could remove all my data points with a confidence below a certain threshold. What number do you suggest using? Also if you could point me towards documentation on how the confidence value is established that would be helpful. Thanks.

papr 14 July, 2022, 07:21:05

Hi @user-97ca10 Check out this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb It demonstrates how you can relate the different exports to each other by time. This should allow you to write a short routine that finds pupil datums that belong to a blink and discards them.

Confidence is mainly calculated within the 2d pupil detector. It looks for pupil edges and fits an ellipse to them. Summarized, the confidence is high if the fitted ellipse matches the detected edges well. For more information, see http://arxiv.org/abs/1405.0006 If you are interested in the exact implementation, I can look that up, too.
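A small sketch of such a routine, assuming pandas and the standard pupil_positions.csv / blinks.csv exports (column names taken from recent Player versions; verify against your own files):

```python
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# Drop low-confidence samples (0.6 is the commonly used default threshold)
pupil = pupil[pupil["confidence"] > 0.6]

# Drop samples that fall inside any detected blink interval
for _, blink in blinks.iterrows():
    inside = pupil["pupil_timestamp"].between(
        blink["start_timestamp"], blink["end_timestamp"]
    )
    pupil = pupil[~inside]

print(f"{len(pupil)} samples remain after filtering")
```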

user-2e1368 13 July, 2022, 23:55:25

Hey there, where can I find Data file? thank you!

papr 14 July, 2022, 07:22:11

Pupil Capture creates a recordings folder in your home folder. Each recording is a folder with multiple files. You can use Pupil Player to view the recording https://docs.pupil-labs.com/core/software/pupil-player/

user-89d824 14 July, 2022, 15:58:11

Hi there, may I know if there's a time limit for recording? My experiment will likely last up to at least 1.5 hours. Is that going to be a problem for Pupil Core?

nmt 14 July, 2022, 16:18:40

We'd definitely recommend splitting your experiment into smaller chunks. If you haven't already, check out the best practices page: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks

user-d4d4bc 15 July, 2022, 07:33:19

Hi,

I had a question regarding the pupil invisible glasses. I saw that the link to apply offset correction wasn't working.

https://docs.pupil-labs.com/invisible/how-tos/tools/apply-offset-correction

Is there an alternative place where I could look up the procedure?

Thanks. πŸ™‚

papr 15 July, 2022, 07:34:25

Let me try to fix that πŸ™‚

marc 15 July, 2022, 08:38:45

Hi @user-d4d4bc! We have made a mistake leaving that link in the docs! We were considering such an article, but have opted for in-app documentation instead. You can find instructions on how to do the offset correction in the Companion app by opening the wearer profile of the currently active wearer (by clicking on the wearer name on the home screen) and then clicking "Adjust".

We'll remove the link from the docs. Thanks again for reporting the error!

user-d4d4bc 15 July, 2022, 07:35:43

Ah cool. Thank you. And I just realized I posted it on the wrong thread. I'll try to do it right the next time. πŸ™‚

user-04dd6f 15 July, 2022, 11:43:08

Hi:

I've got a question regarding the gaze accuracy.

While I can see the gaze accuracy and gaze precision after each calibration, I would like to know whether it's possible to find them in the exported files (.csv), or whether they are already somewhere in the export folder?

Many Thanks

papr 15 July, 2022, 11:46:46

They are not stored as part of the recording. The results are only published as logging messages, which you would need to receive/parse in real time.
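If you do want to capture them, a hedged sketch: Capture relays its log records over the Network API under the logging topic, so you can subscribe to it the same way as to gaze data and look for the accuracy/precision lines. The default address/port and the exact fields of the decoded record are assumptions; inspect the dict you receive.

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("logging.")  # log records are relayed on this topic

while True:
    topic, payload = subscriber.recv_multipart()
    record = msgpack.loads(payload, raw=False)
    # Look for the accuracy/precision messages logged after calibration/validation
    print(topic.decode(), record.get("msg"))
```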

user-aaa726 16 July, 2022, 13:30:10

Hi, I want to do eye tracking on a mobile phone. For this, should I calibrate on the laptop with screen markers, or should I print out pupil calibration markers and attach them to the phone screen? And in what range should the angular accuracy and angular precision values be in order for me to get the best result? When I calibrate and test with screen markers I get around 2.6 angular accuracy and 0.2 angular precision. Is this useful?

nmt 18 July, 2022, 06:22:16

Hi @user-aaa726 πŸ‘‹. I’d recommend using the physical marker, making sure to cover the area of the visual field that the phone occupies. Also, calibrate at the viewing distance you’ll be recording, e.g. arm's length. For instructions on how to use the physical marker, see this page: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker.

You can determine how much calibration accuracy you’ll need with this calculation: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

Note that investigating gaze in the context of a phone screen will require robust pupil detection and good calibration. Here is a reasonable best-case example of gazing on a mobile phone screen using Core: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing
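As a rough illustration of that kind of calculation (all numbers are made up; substitute your own phone size and viewing distance):

```python
import math

phone_width_cm = 7.0         # assumed phone width
viewing_distance_cm = 60.0   # assumed arm's-length viewing distance

# Visual angle subtended by the phone width
angle_deg = math.degrees(2 * math.atan(phone_width_cm / (2 * viewing_distance_cm)))
print(f"phone spans ~{angle_deg:.1f} deg")  # ~6.7 deg

# With ~2.6 deg accuracy, gaze can land almost half a phone-width away from
# the true target, so small on-screen AOIs would not be reliably separable.
```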

user-bd1280 17 July, 2022, 11:22:42

eye movement calculation

user-25da8c 18 July, 2022, 12:34:22

Hey guys, newbie here. I was wondering, is it possible to integrate Pupil Capture into a custom mobile app (call some API or something)? Basically, whenever the app is launched I want to start recording. Thanks

papr 18 July, 2022, 12:35:29

Hi πŸ‘‹ Pupil Capture has a network api that you could call from your app https://docs.pupil-labs.com/developer/core/network-api/ Note, that Capture will require a desktop pc to run.
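For the recording use case specifically, a minimal sketch using Pupil Remote (the address assumes Capture running on the same machine with the default port):

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# Start a recording (optionally with a session name)
pupil_remote.send_string("R my_session")
print(pupil_remote.recv_string())

# ... your app runs ...

# Stop the recording
pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```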

user-25da8c 18 July, 2022, 12:44:15

the companion app and my custom android app

papr 18 July, 2022, 12:46:57

In case of Invisible, you need to use a different API, see https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html (starting/stopping a recording is a simple http request.) But yes, both apps would need to be running.

user-219de4 18 July, 2022, 23:24:05

Hello! Did anyone have experience sending triggers from MATLAB to pupil cam? Can the exported file save the triggers along with the timestamp? Thank you!!!!

nmt 19 July, 2022, 06:30:26

Hi @user-219de4 πŸ‘‹. Check out our Matlab helpers repository: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/send_annotation.m

user-d4d4bc 19 July, 2022, 10:33:00

I think there might be a typo on page https://docs.pupil-labs.com/core/software/pupil-capture/. In the Plugins -> Surface Tracking -> Defining a Surface part. This is the text, and I'm not sure if this is correct. If markers have been erroneously added or left out, click the add/remove markers button and afterwards onto the according marker to add/remove them from your surface.

I think instead of according the word might have been corresponding.

user-90ba8c 19 July, 2022, 13:13:51

Hi, I am an MSc Computer Science student using Pupil Core for research. I am using a mobile phone with the Pupil Mobile app for capture, hence I need to do post-hoc calibration. In my data, the calibration marker was accidentally left in the frame and is causing the gaze data to be off. I'm trying to calibrate using only a subset of the video (using trim marks), but when I come to collect the references (Detected circle markers in the recording) I can't seem to 'set from trim marks'. I've moved the trim marks to the correct places but the button does nothing and continues to say 'Collect References in: 00:0'. Can anyone advise on this?

Chat image

papr 19 July, 2022, 13:19:26

Hi! Sounds like you are doing everything correctly. If you increase the width of the menu the full message should appear.

user-90ba8c 19 July, 2022, 13:27:21

Indeed it does! Although the gaze data doesn't follow the marker very well when the participant's head moves in circles (the detected fixations don't stay in the middle of the marker, but circle around the marker as the head moves).

papr 19 July, 2022, 13:31:06

In the gaze mapping section, there is a validation menu. Please set it to the same trim marks and run the validation. What does the accuracy say?

For more concrete feedback, please share the recording with data@pupil-labs.com There are many reasons that can influence the quality of the gaze estimation.

user-90ba8c 19 July, 2022, 13:34:18

Chat image

papr 19 July, 2022, 13:35:32

Looks like the issue is low confidence pupil data. 90% of your samples have been discarded for the calibration calculation. Please check if you can get better pupil data by tweaking the parameters during post-hoc pupil detection

papr 19 July, 2022, 13:36:18

To visualize pupil detection quality, it can be helpful to enable the eye video overlay.

user-90ba8c 19 July, 2022, 13:42:44

Thanks! Looks like in the first few seconds of the data the pupil detection is struggling to settle. Once settled, the detection is much better. I've removed these first few seconds and the gaze data is already visually much closer, and the validation reflects that!

Chat image

papr 19 July, 2022, 13:45:03

Small trick for post-hoc pupil detection: You can restart pupil detection while it is still running and it will keep the current eye model. This way you can apply a more stable model from the beginning.

user-90ba8c 19 July, 2022, 13:47:46

Cheers for all the help!

user-990e57 20 July, 2022, 07:16:03

Hi, I am trying to figure out how to describe, in a research paper, the methodology Pupil Player uses for the post-hoc gaze validation function. I have not found anything explaining this on the website nor on Discord so far. Can you point me in the right direction? Thank you!

papr 20 July, 2022, 07:18:44

For Pupil Core, the calibration methodology is the same for realtime and post-hoc. I suggest referring to the corresponding documentation web-pages.

papr 20 July, 2022, 07:22:57

I have also seen that you have contacted via email with similar questions. Regarding how accuracy and precision are calculated, we will add information about this here https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy

papr 20 July, 2022, 07:46:23

Hi, check out the Pupil Core tech report https://arxiv.org/abs/1405.0006 See the Spatial Accuracy and Precision section. That document can also be referenced in your paper

user-990e57 20 July, 2022, 07:24:48

Okay, thank you for your help! Sorry for the double contact, I wasn't sure which would be best.

user-0b9182 20 July, 2022, 10:46:19

I'm using the surface tracker plugin to develop code that returns the specific surface the gaze point falls on when three surfaces defined for a single world frame are visible. For example, if the gaze point indicates surface2, surface2 is output.

I need your advice on the following issues.

https://github.com/pupil-labs/pupil/issues/2247

papr 20 July, 2022, 12:58:26

Gaze on surface

user-1bda7f 21 July, 2022, 00:07:54

Hi, I was using Pupil Player for post-hoc head pose tracking with AprilTags, and I was unable to find the head_pose_tracker_poses.csv file, which contains the pose of the world camera. Where does this file get saved?

nmt 21 July, 2022, 06:45:07

Hi @user-1bda7f πŸ‘‹. Please ensure that you've:
1. Gone through all steps using the Head Pose Tracker plugin, i.e. detect markers, 3d model and camera localisation, and that they were successful
2. Run the raw data exporter with the Head Pose Tracker plugin enabled (check out the export docs here: https://docs.pupil-labs.com/core/software/pupil-player/#export)
Then head_pose_tracker_poses.csv will be in the export folder.

user-219de4 21 July, 2022, 14:24:26

Yeah, so we've been following the instructions at https://zenodo.org/record/201933#.Ytlhjy-B10s

nmt 21 July, 2022, 14:39:25

Firstly, please make sure you are using the latest version of Pupil Capture (https://github.com/pupil-labs/pupil/releases/tag/v3.5). If the scene video isn't showing, follow these driver debugging steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

papr 22 July, 2022, 07:24:04

Have you also downloaded the app from this link? The linked version is very old. The driver instructions should be up to date though.

user-219de4 21 July, 2022, 14:25:13

but Zadig is not finding our headcams

user-219de4 21 July, 2022, 14:26:32

Also, as a separate question: we are trying to run a dual headcam setup - can we run two headcam inputs from one Pupil Capture session?

nmt 21 July, 2022, 14:43:37

If you do not need eye video or pupil data, you could select the second scene camera as an eye camera and use Player's Eye Video Overlay. If you need the recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances.

user-8b2591 21 July, 2022, 16:46:10

Hi! I am trying to do calibration with a Vive headset in Unity, but I keep getting this message in Pupil Capture. Also, it stops responding in the middle of the calibration. Does anyone know how to fix this? I have added the gaze tracker to an existing Unity project.

user-8b2591 21 July, 2022, 16:48:51

this is what I am getting

Chat image

user-6ec20c 21 July, 2022, 19:03:59

Is there any way to install capture, player and service on a computer that operates using Windows 11?

marc 22 July, 2022, 07:29:49

@papr Forwarding a question from @user-6ec20c:

Is there any way to install capture, player and service on a computer that operates using Windows 11?

papr 22 July, 2022, 07:31:04

Should be the same as on Windows 10

marc 22 July, 2022, 07:36:17

Thanks @papr! @user-6ec20c have you tried going to this page and clicking "Download Desktop Software" already? https://pupil-labs.com/products/core/

user-9429ba 22 July, 2022, 08:38:46

@user-6ec20c πŸ‘‹ Users have reported being able to use Pupil desktop software on Windows 11. See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/933755742509465610 Just try searching Windows 11 in discord, there are other messages for reference too πŸ™‚

user-eb6164 22 July, 2022, 13:37:18

Hello I am new to this community and I have a couple of questions concerning pupil lab. Where I can post these questions?

nmt 22 July, 2022, 13:46:37

Welcome to the community, @user-eb6164 πŸ‘‹. Feel free to post your questions here! (if you want some tips, check out the community-guidelines: https://discord.com/channels/285728493612957698/983612525176311809)

user-eb6164 22 July, 2022, 13:48:33

Thank you so much πŸ™‚

user-eb6164 22 July, 2022, 14:00:37

I tried to find my answer but could not. I am working on my research and still in the process of developing my research questions and designing my experiments. I am working on a driving simulator that contains three screens. We need to extract specific eye metrics (such as fixations and visual dispersion) from different AOIs such as billboards, road signs, vehicles, mirrors, and the road. I have read that setting AOIs can only be done with markers. Taking into consideration that markers will not help me here, since these objects are moving and located inside the screen (very small): is there any way I can set a dynamic AOI? Is there any free tool that can help us extract these data, or should we do manual coding only? Also, if I am working on three screens, should I calibrate on each screen?

nmt 22 July, 2022, 14:47:02

You’ll likely want to present four markers on each screen (one in each corner). These markers can be used to generate multiple AOIs on each screen. I’d recommend downloading this example recording that has markers in view and loading it into Pupil Player. That’ll give you a better sense of how surface tracking + AOIs work: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view

With that said, you are presenting moving stimuli, so static AOIs might not be appropriate. This leads to the question, do you know the coordinates on-screen of what is being presented?

If so, you could have one big AOI that covers each screen, and you’ll get x,y coordinates of gaze relative to each screen. Then it would be a case of correlating gaze with your on-screen stimuli coordinates, thereby automating your analysis.

If not, then manual coding is certainly an option, albeit a time-consuming one. Check out our annotation plugin: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player

You should only need to calibrate using one screen – when using the 3d pipeline, the calibration will extrapolate outside the calibrated area.
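To make the "one big AOI per screen" idea concrete, here is a hedged sketch of mapping the exported surface-normalized gaze to screen pixels so it can be correlated with stimulus coordinates. The column names follow recent surface exports and the bottom-left origin convention of surface coordinates is assumed; verify both against your own data.

```python
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # placeholder screen resolution

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen1.csv")
gaze = gaze[gaze["on_surf"] == True]

# Surface coordinates are normalized with the origin at the bottom-left,
# so flip y to obtain conventional top-left screen pixel coordinates.
gaze["x_px"] = gaze["x_norm"] * SCREEN_W
gaze["y_px"] = (1.0 - gaze["y_norm"]) * SCREEN_H
```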

papr 22 July, 2022, 15:32:45

@user-219de4 One other suggestion would be to perform the procedure, step by step, with the exact versions of the linked software, on a fresh system. As mentioned before, it might be that something in your existing setup is interfering with the driver installation.

user-be0bae 25 July, 2022, 03:38:23

Hello, please help! I do not know what's wrong with the eye-tracking device. There is no view in the eye1 window, and the world window showed "EYE1: could not set the value 'Backlight Compensation'". You can see the pictures for specific information. Thank you so much! @papr

Chat image Chat image

user-9429ba 25 July, 2022, 14:58:05

Hi @user-be0bae πŸ‘‹ This warning does not affect functionality, so it is not likely to be the cause of the issue. Please carefully check the eye camera cable connection, try re-starting Pupil Capture with default settings from the main settings menu, and see if that resolves it.

user-6586ca 25 July, 2022, 07:47:11

Hi everyone! I'm preparing an experiment with an AOI, using Pupil Core. I have a question about the gaze position data. The "confidence" value varies from 0 to 1, where 1 indicates perfect confidence in a gaze position. However, can you advise on a threshold for useful data? I found that for pupil positions, useful data carries a confidence value greater than ~0.6. Is that relevant for gaze too? Thank you for your answer and your help!

user-9429ba 25 July, 2022, 15:03:05

Hi @user-6586ca πŸ‘‹ The default confidence threshold is set at 0.6. This is inherited by gaze data after you have calibrated.

user-8b2591 25 July, 2022, 16:32:36

hi! I have tried window troubleshooting from this site (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting) and now pupil capture terminates by itself. I have checked the hidden devices on device manager and I can't find "libUSBK Usb Devices" anymore. Is there a way to fix this?

wrp 26 July, 2022, 05:49:09

Hi @user-8b2591 πŸ‘‹ have you tried deleting the pupil_capture_settings folder from your user directory? This will make Pupil Capture start from its default settings next time you open it.

user-89d824 26 July, 2022, 11:24:03

My scene camera gets to an FPS of 7 or sometimes even 5, and as a result the recording is choppy. Am I correct in saying that the data (e.g., fixation count) derived from this recording is going to be inaccurate?

nmt 27 July, 2022, 10:57:49

Hi @user-89d824. Fixations are computed from Pupil Core's gaze data, which will more than likely have a higher sampling rate. You can calculate the sampling rate using the timestamp column of the gaze_positions.csv export. That said, 5 fps is very low. What are the specs of your CPU? If your computer does not have sufficient computational resources, the software will dismiss/drop samples in order to keep up with the incoming real-time data. If you don't have access to a more powerful machine, one workaround is to record your experiment with real-time pupil detection disabled. That should speed things up. You can then run pupil detection and calibration in a post-hoc context: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
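A quick sketch of that sampling-rate check, assuming pandas and the gaze_timestamp column of the standard export:

```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
timestamps = gaze["gaze_timestamp"].to_numpy()

duration_s = timestamps[-1] - timestamps[0]
rate_hz = (len(timestamps) - 1) / duration_s  # mean gaze sampling rate
print(f"~{rate_hz:.1f} Hz over {duration_s:.1f} s")
```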

user-138f84 26 July, 2022, 17:48:18

Hey! I want to import my simulated environment. I have an fbx file. I want to be able to import it in pupil capture. How do I do that?

nmt 27 July, 2022, 12:44:58

Hi @user-138f84 πŸ‘‹. Pupil Capture is our desktop recording software used to capture eye tracking data in real-time. There's no functionality to load fbx files. If I can ask, what's your overall goal with such a use case?

user-3a5451 27 July, 2022, 08:58:05

Hello! I did my best to look for an answer but couldn't find it in the docs/GitHub issues/Discord/archives. If it has already been answered, mea culpa. I am trying to recreate a gaze vector in a virtual environment. We recreated the world camera in the virtual environment using the intrinsic and extrinsic matrices. But the virtual camera is perfect in the sense that it cannot reproduce fisheye distortion, thus creating a perfect, non-distorted image of what the world camera captures. Therefore, my question is: do the exported gaze data norm_y and norm_x take distortion into account? (Meaning: is the exported data already undistorted, as if the world camera had no distortion?) Or do I have to undistort the gaze data using the camera matrix in order for norm_x and norm_y to make sense in the virtual environment? Thank you to anyone reading this. I hope you have a nice day.

user-e6ed07 27 July, 2022, 15:25:23

Hi everyone! Does anyone know how to debug the numbers in the red circle in this diagram? Thank you very much if someone could answer my question!πŸ₯Ή

Chat image

user-9429ba 27 July, 2022, 15:50:26

Hi @user-e6ed07 πŸ‘‹ The numbers you refer to indicate the fixation id in your recording - it's an index of the fixation number from the start of the recording when you have the Fixation Detector plugin enabled. Each fixation id will correspond to a row in the fixations.csv export. You can see the fixation index increase/decrease if you skip through fixations using the Next Fixation/Previous Fixation buttons in the Pupil Player window.

user-f93379 28 July, 2022, 10:49:03

Hi! Is it possible to run Pupil Capture under ARM?

user-f93379 28 July, 2022, 12:58:02

Crash. Instruction not valid?

Chat image

user-3eccb3 28 July, 2022, 11:05:13

Hi admins, everyone. Can someone provide pyuvc usage docs?

user-3eccb3 28 July, 2022, 11:11:31

Chat image

papr 28 July, 2022, 13:12:58

Hi, pyuvc expects specific functionality from the cameras. See https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 It is possible that your camera does not fulfil all these requirements.

user-3eccb3 28 July, 2022, 11:12:27

How do I resolve this error? I have correctly configured my UVC camera driver as the libusbK one.

user-183822 28 July, 2022, 19:32:01

Does anyone know about pupil_positions.csv file data processing?

user-9429ba 29 July, 2022, 13:15:47

Hi @user-183822 πŸ‘‹ Could you clarify which metrics you would like to extract from the pupil_positions.csv file? Pupil Positions refers to the location of the pupil within the eye camera coordinate system. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions and find a break down of the export here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-13f46a 29 July, 2022, 00:22:04

Hi, does Pupil Labs have monocular glasses?

user-9429ba 29 July, 2022, 13:20:28

Hi @user-13f46a πŸ‘‹ It would be possible for us to provide you with a Pupil Core headset with only one eye camera for monocular tracking. Is there a particular reason you need this?

user-3eccb3 29 July, 2022, 12:20:07

Hi, all. Does pyuvc have functionality to control exposure of camera?

papr 29 July, 2022, 12:20:59

Hi πŸ‘‹ Yes, if the camera exposes its exposure setting as a UVC control. See this example https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L631-L640
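A small sketch along the lines of that example; the control display names vary between cameras, so the ones below are assumptions - print the list first:

```python
import uvc

devices = uvc.device_list()
cap = uvc.Capture(devices[0]["uid"])

# Controls are addressed by display name; exact names depend on the camera
controls = {c.display_name: c for c in cap.controls}
print(list(controls))

if "Auto Exposure Mode" in controls:
    controls["Auto Exposure Mode"].value = 1       # manual exposure on many UVC cams
if "Absolute Exposure Time" in controls:
    controls["Absolute Exposure Time"].value = 60  # units are camera-specific

cap.close()
```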

user-13f46a 29 July, 2022, 13:21:39

We need an eye tracker, but binocular glasses seem to be over our budget

user-13f46a 29 July, 2022, 13:22:06

I wanted to see the price difference

user-9429ba 29 July, 2022, 13:26:14

Please contact sales@pupil-labs.com to request a quote πŸ™‚

user-183822 29 July, 2022, 14:49:29

@papr and @user-9429ba thank you so much πŸ™‚

user-183822 29 July, 2022, 15:22:56

@papr @user-9429ba I am getting this type of output. Is it the right way to do the analysis?

Chat image

user-9429ba 01 August, 2022, 07:23:22

You could try reproducing the attached figure. This was from a simple setup maintaining central fixation whilst the screen alternates from black to white. The change in luminance corresponds to an increase/decrease in Pupil dilation.

Chat image

user-eb6164 29 July, 2022, 19:36:50

Hello, I am facing an issue with the eye tracker that started today. I can see that my gaze is not being detected correctly: when I look down, the eye tracker points somewhere completely far from where I am looking, and even calibration is not working anymore. I tried to reset the software, which did not work; my eye confidence is 1.00. We even noticed that fixations are detected when my eyes are closed. Any leads about what the issue might be? And how to fix/troubleshoot it?

nmt 30 July, 2022, 09:04:11

Hi @user-eb6164 πŸ‘‹. Have you changed anything with your setup - any Capture settings, swapped scene cam lenses, etc.?

papr 01 August, 2022, 06:27:09

Could you share a recording with us that demonstrates the issue? Ideally, it would include the calibration choreography. Please share it with data@pupil-labs.com

user-183822 31 July, 2022, 14:37:49

How can I detect outliers and eye blinks from the pupil_positions.csv file?

papr 01 August, 2022, 06:25:45

You can use the blink detector to find blinks and apply the methodology presented in this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb

Otherwise, I would recommend discarding data points with a confidence of 0.6 or lower.

End of July archive