πŸ‘ core


user-3cee2f 04 July, 2022, 13:05:44

Hi, we would like to buy a Core suite, and we're just wondering how long it will take to ship it to the UK? Many thanks πŸ™‚

user-e3f20f 04 July, 2022, 13:27:57

Hey, that depends on various factors. Please contact info@pupil-labs.com with details about the purchasing entity (personal/academic/commercial).

user-6e3d0f 05 July, 2022, 09:52:30

When exporting fixations on a surface, why are there multiple entries for the same fixation_id?

user-e3f20f 05 July, 2022, 10:08:10

Yes, that is because fixations usually span multiple scene video frames and are estimated in scene video coordinates. For every scene video frame, we get a new scene-to-surface mapping function. That means that we need to map the same fixation multiple times to the surface, once for each frame in which the surface was recognized. This is only an issue if the surface moved during the fixation (in which case it will look like the fixation moved on the surface).

user-6e3d0f 05 July, 2022, 10:12:18

So if I want to get the coordinates of a single fixation, is it viable to iterate over the same id and just take the average value for, say, the x and y position? Because looking at my data, the position changes slightly.

user-e3f20f 05 July, 2022, 10:13:29

Correct. Small changes are expected due to the noise in the surface tracking.
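If useful, that averaging can be done with pandas along these lines — a sketch that assumes the column names of a fixations_on_surface export; the sample rows are made up:

```python
import pandas as pd

# Made-up rows mimicking a fixations_on_surface_<name>.csv export:
# one row per (fixation_id, scene frame) in which the surface was detected.
df = pd.DataFrame({
    "fixation_id": [7, 7, 7, 8, 8],
    "norm_pos_x": [0.41, 0.43, 0.42, 0.70, 0.72],
    "norm_pos_y": [0.55, 0.54, 0.56, 0.30, 0.31],
})

# Average the per-frame surface mappings to get one position per fixation.
fixation_positions = (
    df.groupby("fixation_id")[["norm_pos_x", "norm_pos_y"]].mean().reset_index()
)
print(fixation_positions)
```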

user-ef3ca7 05 July, 2022, 19:16:39

Hello, how can I calculate gaze amplitude from phi and theta? Since we have these values:

x = data.gaze_point_3d_x
y = data.gaze_point_3d_y
z = data.gaze_point_3d_z
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)  # for elevation angle defined from Z-axis down
psi = np.arctan2(z, x)

user-4c21e5 06 July, 2022, 06:35:40

Hi @user-ef3ca7 πŸ‘‹. Have a look at the first three lines of code in section 4 of this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
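For reference, the angle computation from the snippet above can be sketched in numpy, together with the amplitude between two samples taken from the dot product of their unit vectors (the gaze points here are invented):

```python
import numpy as np

# Invented gaze_point_3d samples (x, y, z) in scene camera coordinates.
points = np.array([
    [10.0, 5.0, 400.0],
    [60.0, 5.0, 400.0],
])

x, y, z = points[:, 0], points[:, 1], points[:, 2]
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)   # elevation
psi = np.arctan2(z, x)     # azimuth

# Amplitude between the two samples: the angle between their unit vectors.
u = points / r[:, None]
amplitude_deg = np.degrees(np.arccos(np.clip(np.dot(u[0], u[1]), -1.0, 1.0)))
print(amplitude_deg)
```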

user-e91538 06 July, 2022, 07:42:20

Hello,

In a research protocol, we would like to compare two conditions: one with AR-HMD wearing and one without. We are thinking of using the HoloLens add-on for the first condition and the Core for the second. Is the hardware (the 3 cameras) on the two devices exactly the same (for data consistency's sake)?

Thanks in advance.

user-e3f20f 06 July, 2022, 07:43:36

Hey, the hardware is the same. πŸ™‚

user-e91538 06 July, 2022, 07:57:09

Thank you for the response !

user-b9005d 06 July, 2022, 12:30:28

When doing post-hoc calibration of videos, sometimes there are sections of video where only one eye is properly confident in frame. If I mark a calibration dot where I know that one eye is fixated, how does that affect the gaze estimation once both eyes come back into focus?

user-e3f20f 06 July, 2022, 12:37:12

Hey πŸ‘‹ This is what happens under the hood:

1. Player collects high confidence pupil data for each eye in the specified calibration range
2. Player creates three combinations of pupil and target data: a) left pupil data + targets, b) right pupil data + targets, c) left + right pupil data + targets
3. For each matching, it fits a separate model, i.e. two monocular models and one binocular model.

If only one eye is detected, the corresponding monocular model is used. If both eyes are detected, the binocular model will be used.

By placing a target marker while only one eye is detected well, you improve the corresponding monocular model. The binocular model is not affected, as there is no valid left-right eye pair that could be combined with the target marker.

I hope this was clear.

user-b9005d 06 July, 2022, 12:41:25

This does help, thank you so much!

user-75df7c 07 July, 2022, 08:21:31

Hi! I have three surfaces, one big one that spans my entire screen and two smaller ones inside the big one. The small ones overlap with the big one but are very separated from each other. Any idea why my surface_events.csv looks like this? It's always entering and exiting all three at the exact same times, which is impossible.

Chat image

user-e3f20f 07 July, 2022, 08:26:54

You might want to look at the gaze_on_surface* data instead πŸ™‚

user-e3f20f 07 July, 2022, 08:26:06

Hi πŸ‘‹ These enter and exit events do not refer to gaze entering/exiting the surfaces but the surfaces being detected within the image. If the surfaces are defined based on the same set of markers, the above behavior is expected.

user-75df7c 07 July, 2022, 08:27:05

Thank you so much!

user-75df7c 07 July, 2022, 08:28:53

On that note, what does "on_surf" refer to? And any tips for converting the timestamp values into more useful numbers? Thanks again :)

user-e3f20f 07 July, 2022, 08:29:47

1) on_surf is a boolean indicating if the gaze point was on the corresponding surface or not. 2) See this tutorial on post-hoc time sync https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
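As an illustration of the on_surf flag, filtering a gaze-on-surface export down to samples that actually hit the surface might look like this (column names as in the gaze_positions_on_surface export; sample values invented):

```python
import pandas as pd

# Invented rows mimicking gaze_positions_on_surface_<name>.csv.
gaze = pd.DataFrame({
    "gaze_timestamp": [100.01, 100.02, 100.03, 100.04],
    "x_norm": [0.50, 1.20, 0.40, -0.10],
    "y_norm": [0.50, 0.80, 0.60, 0.20],
    "on_surf": [True, False, True, False],
})

# Keep only the samples that landed on the surface.
on_surface = gaze[gaze["on_surf"]]
print(len(on_surface))
```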

user-b14f98 07 July, 2022, 13:50:18

Looking for some advice here. In the first image, we have norm_pos represented by the green sphere, and gaze_point_3d by the blue sphere. We would like to project a gaze vector from the cyclopean eye onto the 2D world image. We assume this vector will pass through gaze_point_3d (blue sphere), but what is the assumed origin here? We assume that the origin is the world camera, but there is an offset that you can see in the second image, which shows that gaze_point_3d does not align with norm_pos. This could be due to incorrect intrinsics... any other thoughts?

Chat image Chat image

user-b14f98 07 July, 2022, 13:55:09

We're also quite confused as to why gaze_point_3d is at 20 cm from the ~~head~~ world camera, when the calibration plane (the wall w/checkerboard) was most definitely at a further distance. We would like to use geometric camera calib on the world video to extract the distance (we don't have a good record of it, unfortunately). ...but, I believe we need to know the sensor size to estimate the checkerboard distance.

user-b14f98 07 July, 2022, 13:55:43

So, the second question is, any ideas why PL estimates a depth of about 20 cm ? (across multiple recordings)

user-e3f20f 07 July, 2022, 14:17:12

Depth estimates are inaccurate due to noise in the gaze_normals. And yes, gaze_point_3d, norm_pos, and scene camera origin should lie on one line.

user-b14f98 07 July, 2022, 14:27:35

Yep. Well, my student is double checking the FOV against the intrinsics stored in the pupil labs data folder. Let's hope that explains it...

user-e3f20f 07 July, 2022, 14:31:29

Could you elaborate on how the second image was generated? If it is the image plane of the scene camera, the norm_pos should be on the plane, i.e. the red lines should cross in the plane. But in this picture they cross further into the scene.

user-b14f98 07 July, 2022, 14:32:00

Yes, that's the concern that prompted the question. I would have assumed they cross in the plane, but they cross at 20 cm.

user-e3f20f 07 July, 2022, 14:33:32

I thought gaze_point_3d was at 20cm? From the first picture it looks like gaze_point_3d is further away than norm_pos

user-b14f98 07 July, 2022, 14:32:33

Let me put together a quick description with my student. I'll share later today.

user-b14f98 07 July, 2022, 14:35:03

gaze_point_3d is at 20 cm from the head world camera.

user-e3f20f 07 July, 2022, 14:35:14

and norm_pos?

user-b14f98 07 July, 2022, 14:35:15

It is the blue sphere

user-b14f98 07 July, 2022, 14:35:29

Norm_pos, the green sphere, is ... (asking the student)

user-e3f20f 07 July, 2022, 14:36:01

Ah, I got those mixed up, my bad.

user-b14f98 07 July, 2022, 14:35:57

since we're using the normalized image coords, it's set to the depth of the image plane

user-b14f98 07 July, 2022, 14:36:46

...wouldn't you expect the monocular eye vectors to cross at the depth of calibration?

user-b14f98 07 July, 2022, 14:37:23

Here, they are calibrated to the depth of the checkerboard. I would have assumed that bundle adjustment adjusted rotation to minimize error between the mono gaze vectors and the calibration points... and that they cross near the real-world depth of the calibration targets at the time of calibration.

user-e3f20f 07 July, 2022, 14:37:26

In this case, it is expected that the lines cross in front of the scene cam image plane, and not in it. What is unexpected is that the green sphere is so far out.

user-e3f20f 07 July, 2022, 14:38:35

https://github.com/pupil-labs/pupil/pull/2176/files#diff-db1863be9d238abfc9904434bd833f9418b8707ec391f1cbcca7099a5ee9c4a3R192

user-b14f98 07 July, 2022, 14:45:36

Note that this question is from a second line of work with the core, not the stuff Kevin is working on in the HMD.

user-e3f20f 07 July, 2022, 14:38:56

The data being monocular explains the depth of 20

user-e3f20f 07 July, 2022, 14:39:09

But that should be mm not cm

user-b14f98 07 July, 2022, 14:39:21

Ooh, this might explain a thing or two.

user-b14f98 07 July, 2022, 14:39:22

I believe we're using the calibration matrices for the binocular calibration

user-b14f98 07 July, 2022, 14:39:46

My student suggested the bino calibration had individual matrices for the left/right eyes

user-b14f98 07 July, 2022, 14:40:09

...and we're using those matrices for the position and rotation of the local eye/eye camera space.

user-b14f98 07 July, 2022, 14:40:19

within world camera space

user-b14f98 07 July, 2022, 14:44:29

So, to be clear, you're assuming vergence to a distance of 20 mm at the time of calibration.

user-b14f98 07 July, 2022, 14:44:51

and the real-world distance of the calibration targets will not affect that assumption

user-e3f20f 07 July, 2022, 14:47:39

for monocular post-hoc hmd calibrations, yes. That is the estimated distance between eyes and the hmd. Real-time hmd calibrations take the depth from the 3d target data. Normal Core 3d calibrations assume a distance of 500mm.

user-b14f98 07 July, 2022, 14:48:55

Yes, I think that last sentence is the relevant one here. https://github.com/pupil-labs/pupil/pull/2176/files#diff-1db06b7632c7441082988af78c11f0d28b6371a07779e1db309fef09c4622d04 Thanks. So, then the ~20 cm estimated fixation distance is just a coincidence.

user-04dd6f 07 July, 2022, 16:49:30

Hi:

I just received a binocular eye tracker today, and I was trying to export the gaze position data independently (eye0 and eye1); hence, I dragged the dual-monocular gazer plug-in into Pupil Player.

However, I somehow couldn't find the plug-in on the list, is the "dual-monocular gazer" located somewhere else in pupil player?

Thanks~

Chat image

user-04dd6f 07 July, 2022, 20:25:36

Thanks~

user-e3f20f 07 July, 2022, 18:58:17

Hey, you need to perform post-hoc calibration to leverage that plugin. See the Gaze Data menu and https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration

user-c01a3e 07 July, 2022, 17:03:19

I am still learning the messaging format for communicating via the IPC backbone. Is there a way to call specific values from the real-time gaze data? (i.e. instead of printing the whole string like in the pic, could I extract and print only certain coordinates or only the timestamp?)

Chat image

user-e3f20f 07 July, 2022, 18:59:30

Hi, yes there is! After decoding the message, you get a Python dictionary. Read more about them here https://realpython.com/python-dicts/

user-c01a3e 07 July, 2022, 19:04:27

Thank you! So in this case, would the name of the dictionary that I call into be 'gaze.3d.0'?

user-e3f20f 07 July, 2022, 19:10:42

You can use extracted_obj.keys() to print the top-level keys. Note the b in front of the strings. That means that you need to prefix the strings with a b as well when accessing the fields:

extracted_obj[b"name"]

Alternatively, you should be able to upgrade your msgpack version and that should give you normal strings without the need for the b. Just give it a try πŸ™‚
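To illustrate the point about the b prefix, here is a made-up decoded gaze message with bytes keys (what an older msgpack version would return) and how to pull out individual fields:

```python
# Made-up decoded message; older msgpack versions return bytes keys
# (i.e. what msgpack.unpackb(payload) gives without raw=False).
extracted_obj = {
    b"topic": b"gaze.3d.0.",
    b"timestamp": 100.25,
    b"norm_pos": [0.48, 0.52],
    b"confidence": 0.97,
}

# Inspect the available fields, then extract only what you need.
print(list(extracted_obj.keys()))
timestamp = extracted_obj[b"timestamp"]
x, y = extracted_obj[b"norm_pos"]
print(timestamp, x, y)
```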

user-a02d16 09 July, 2022, 10:48:22

Hi, I am trying to get my Pupil Core to work on my new Mac, and no footage is loading through Capture... the settings say "Local USB disconnected". Can someone help with this issue please?

user-4c21e5 11 July, 2022, 08:46:00

Hi @user-a02d16 πŸ‘‹. You will need to start the application with administrator rights. sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture. See the release notes for details: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-292135 11 July, 2022, 07:21:52

Hi, are there any known helper scripts that extract the closest world, eye0, and eye1 images at a given Pupil time? I want to check the eye condition and world situation at a specific moment. e.g. python helper_script.py --pupil_time=120.101 --path="./images/" -> generates ./images/120_100_eye0.jpg, 120_102_eye1.jpg, 120_091_world.jpg

user-4c21e5 11 July, 2022, 13:10:44

Hi @user-292135 πŸ‘‹. Check out this tutorial that shows how to extract specific frames from the scene video: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. It's currently set up to find the 30-second mark (see section 8), but you can use specific pupil timestamps if required.

The tutorial will also work with eye videos. Note that it requires a *_timestamps.csv file. You'll need to run the 'Eye Video Exporter' plugin in Pupil Player to get these (one for each eye video). Alternatively, you can adapt the script to load the *timestamps.npy file using numpy.load instead (section 4 of the tutorial).
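The closest-frame lookup itself boils down to a binary search over the sorted timestamps, e.g. (timestamps invented; in practice they would come from the loaded npy file or the exported csv):

```python
import numpy as np

# Invented, sorted world camera timestamps in Pupil time (seconds).
timestamps = np.array([119.95, 119.99, 120.03, 120.08, 120.12])
target = 120.101

# Index of the closest timestamp.
i = int(np.searchsorted(timestamps, target))
if i > 0 and (i == len(timestamps)
              or abs(timestamps[i - 1] - target) < abs(timestamps[i] - target)):
    i -= 1
print(i, timestamps[i])
```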

user-f93379 11 July, 2022, 16:35:42

Hi! We have purchased two Core devices. On one of them, the World camera won't turn on. It says "could not connect to device"! What could be wrong with the device? One is working but the other is not.

user-f93379 11 July, 2022, 16:36:01

Chat image

user-b07d3f 11 July, 2022, 16:45:40

Hi! We have purchased one Pupil Core device and we are thinking of purchasing two more eye cameras and attaching them to that device. How can we get all four cameras working at once?

user-e3f20f 12 July, 2022, 07:09:19

While you can record video from all 5 cameras with two Pupil Capture instances, the software is not designed to make use of the two extra cameras. The eye cameras available in our shop are meant for replacement or upgrading older eye cameras.

user-4c21e5 11 July, 2022, 17:12:53

Hi Niki. Please follow these steps to debug the driver installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-f93379 12 July, 2022, 08:59:32

Thank you! But this operation did not help. Now the device search doesn't complete; it just displays this:

user-856af7 11 July, 2022, 18:38:38

Hi again, We're using blink detection for our experiment and have run into some issues with the system registering blinks for too long, or registering errant blinks, even when the eyes are open and the pupils of the participant are clearly visible. Do you have any recommendations on how we might go about fixing this, and is there a way to correct for it post-hoc as part of the test?

user-4c21e5 12 July, 2022, 06:29:04

Hi @user-856af7. You might need to adjust the blink detector thresholds in Pupil Player. You can read about this and how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector

user-26b243 11 July, 2022, 19:01:14

Hi, I was wondering how the gaze_normal data column was normalized, as in, what denominator was used?

user-e3f20f 12 July, 2022, 06:34:06

It is normalized by dividing by its original length, s.t. the result has a length of 1.
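In numpy terms, a sketch of that normalization (example vector invented):

```python
import numpy as np

# An invented, unnormalized 3d gaze direction.
v = np.array([3.0, 0.0, 4.0])

# Divide by the vector's own length; the result has length 1.
gaze_normal = v / np.linalg.norm(v)
print(gaze_normal, np.linalg.norm(gaze_normal))
```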

user-b9005d 11 July, 2022, 19:23:23

To follow up on this response:

Our lab is interested in knowing where the gaze is located. But in the case of people with eye deviations, we're concerned with where each eye is pointing individually. If we want to get an accurate gaze reading, would we need to do monocular calibrations of some kind?

Also, if we have really good binocular data during calibration, does this also feed into the monocular models? We were worried that there may not be enough monocular calibration points to create an accurate model when monocular gaze does occur.

user-e3f20f 12 July, 2022, 07:17:53

For the calibration, binocular data is the intersection of high quality left and right eye data for which each left-right pair is close in time. The monocular models are fitted on all high quality samples from their corresponding eye. So, yes, data used for binocular calibration is also used for monocular calibration.

Regarding eye deviations: The calibration assumes that both eyes are able to fixate the target at the same time. That assumption is rarely true but the further one eye deviates from the gaze target the less accurate the gaze mapping will be.

We have a calibration plugin that allows you to calibrate one eye after the other, but it is designed for external head-mounted displays and you would need to implement the visualization of the gaze targets yourself.

user-9e284e 12 July, 2022, 01:49:32

Does the heatmap image export map onto the world camera surface?

user-f93379 12 July, 2022, 09:00:46

Chat image

user-e3f20f 12 July, 2022, 09:05:09

Hey, could you please share a screenshot of the device manager on the affected device with the following categories expanded (if available): Cameras, Imaging Devices, libUSBk. Please also enable "Show hidden devices" in the view option.

user-f93379 12 July, 2022, 10:26:49

The camera fell off again! Could it be shutting down due to overheating?

user-e3f20f 12 July, 2022, 09:06:18

Could you please also clarify if the two headsets are being a) connected to two different computers or b) to the same machine.

user-f93379 12 July, 2022, 10:20:25

Thanx!

user-f93379 12 July, 2022, 10:19:54

I don't know what you did, but the device unexpectedly started working. Maybe the drivers kicked in somehow, but it hadn't worked for a very long time.

user-e3f20f 12 July, 2022, 10:28:35

"Fell off" as in physically fell off? Or do you mean that the scene video stopped working?

user-f93379 12 July, 2022, 10:29:33

Yes, the camera spontaneously stopped displaying images

user-f93379 12 July, 2022, 10:34:31

Chat image

user-e3f20f 12 July, 2022, 11:49:13

That looks like all cameras are connected. Can you confirm that the scene video is not being displayed even though the device manager is showing all three devices? (mind the difference to a disconnected device in the device manager)

user-26b243 12 July, 2022, 17:32:25

Hi, do you mind clarifying what you mean? We were hoping to understand what was set as the origin, what is the max value, and what does a negative value mean for these coordinates. I will include an image of the coordinates we are inquiring about.

Chat image

user-4c21e5 12 July, 2022, 19:57:24

Hi @user-26b243 πŸ‘‹. norm_pos_x/y are gaze coordinates in pixel space normalised by the height and width of the camera image. Bottom left: 0,0; top right: 1,1. Negative values can occur when the gaze point leaves the scene camera image whilst the pupils are still tracked by the eye cameras – usually low quality data. You can read more about Pupil Core's coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
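For convenience, converting norm_pos back to pixel coordinates can be sketched like this — note the y-flip, since image pixel coordinates usually have their origin at the top left (the 1280x720 resolution is just an assumed example):

```python
# Assumed example resolution of the scene camera image.
WIDTH, HEIGHT = 1280, 720

def denormalize(norm_x, norm_y, width=WIDTH, height=HEIGHT):
    """Map bottom-left-origin normalised coords to top-left-origin pixels."""
    return norm_x * width, (1.0 - norm_y) * height

print(denormalize(0.5, 0.5))   # image center
print(denormalize(0.0, 0.0))   # bottom-left corner -> (0, height)
```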

user-26b243 12 July, 2022, 17:58:59

Hello, we are coming across another problem where it seems that the gaze recording is a bit off from where the participant was actually looking. We ran a quick experiment where we specifically looked at all 4 black and white boxes (as seen in the included images) for about 5 secs to see what gaze data would be recorded. We noticed that for multiple trials the side furthest from the center (left in this case) was the side that had the most variability. For example, picture 1 (screenshot 21) is when the subject was looking at the top left square and picture 2 (screenshot 22) is when the subject was looking at the bottom left square. For these two trials the calibration's angular accuracy was 1.46. Also, for reference, the board that the squares are on is set up at an angle with respect to the subject.

Chat image Chat image

user-4c21e5 12 July, 2022, 19:59:18

Would you be able to share a recording with [email removed] such that we can provide concrete feedback? Please include the calibration choreography in the recording πŸ™‚

user-292135 13 July, 2022, 02:14:29

It seems that *_lookup.npy is more useful. Any critical difference between *_timestamp.npy and *_lookup.npy ?

user-e3f20f 13 July, 2022, 06:56:39

Hi πŸ‘‹ Some type of recordings can have multiple video files (parts) from the same camera. Each part has its own *_timestamp.npy.

*_lookup.npy is a cache file that contains the video timestamps and pts of all parts. It also contains interpolated timestamps between parts.

Pupil Capture recordings typically only have one part, i.e. there is no big difference between the lookup and timestamp file.

Note: The tutorial is based on the video-exported *_timestamps.csv file, not the npy. The csv contains timestamps and pts as well.

user-97ca10 13 July, 2022, 17:17:34

Hey there, I am wondering how to exclude blinks from raw pupil data (csv format). I understand the 'blink detection' you guys built lets you select the filter, onset, and offset. Since I want to do this post-hoc though I figured I could remove all my data points with a confidence below a certain threshold. What number do you suggest using? Also if you could point me towards documentation on how the confidence value is established that would be helpful. Thanks.

user-e3f20f 14 July, 2022, 07:21:05

Hi @user-97ca10 Check out this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb It demonstrates how you can relate the different exports to each other by time. This should allow you to write a short routine that finds pupil datums that belong to a blink and discards them.

Confidence is mainly calculated within the 2d pupil detector. It looks for pupil edges and fits an ellipse to them. In summary, the confidence is high if the fitted ellipse matches the detected edges well. For more information, see http://arxiv.org/abs/1405.0006 If you are interested in the exact implementation, I can look that up, too.
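A minimal sketch of the confidence-threshold filtering on a pupil_positions export (sample rows invented; 0.6 is a common starting threshold, but tune it on your own data):

```python
import pandas as pd

# Invented rows mimicking pupil_positions.csv.
pupil = pd.DataFrame({
    "pupil_timestamp": [10.00, 10.01, 10.02, 10.03, 10.04],
    "diameter": [40.1, 39.8, 12.0, 5.0, 40.3],
    "confidence": [0.95, 0.92, 0.35, 0.10, 0.90],
})

CONFIDENCE_THRESHOLD = 0.6  # assumption; adjust per recording

# Discard low-confidence samples (e.g. during blinks).
clean = pupil[pupil["confidence"] >= CONFIDENCE_THRESHOLD]
print(len(clean))
```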

user-2e1368 13 July, 2022, 23:55:25

Hey there, where can I find Data file? thank you!

user-e3f20f 14 July, 2022, 07:22:11

Pupil Capture creates a recordings folder in your home folder. Each recording is a folder with multiple files. You can use Pupil Player to view the recording https://docs.pupil-labs.com/core/software/pupil-player/

user-89d824 14 July, 2022, 15:58:11

Hi there, may I know if there's a time limit for recording? My experiment will likely last up to at least 1.5 hours. Is that going to be a problem for Pupil Core?

user-4c21e5 14 July, 2022, 16:18:40

We'd definitely recommend splitting your experiment into smaller chunks. If you haven't already, check out the best practices page: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks

user-89d824 14 July, 2022, 16:40:45

Sorry, I should've been more specific with my question. The study will be broken up into a few blocks and there will be calibration at the beginning of each block. However, the total duration of the experiment is going to be at least 1.5 hours. I'm just wondering if that will be a problem, since it'll be 1.5 hours of almost continuous use.

user-4c21e5 15 July, 2022, 06:25:52

Hi @user-89d824. I would recommend splitting into several blocks rather than just a few, if your experiment allows πŸ™‚. It's definitely worth doing some pilot testing to get a feel for the kinds of data you'll get, how accurate the calibration is over time, etc. That'll help you decide on an appropriate plan.

user-2e1368 14 July, 2022, 19:39:49

Thank you for your response! please see the screenshot that shows me unknown files.

Chat image

user-e3f20f 15 July, 2022, 10:02:44

Hi, these are files from the intermediate recording format. They are documented here https://docs.pupil-labs.com/developer/core/recording-format/

Note, the recommended way to use them is to open the folder in Pupil Player and to export the data to CSV. https://docs.pupil-labs.com/core/software/pupil-player/#export

user-e91538 15 July, 2022, 07:33:19

Hi,

I had a question regarding the pupil invisible glasses. I saw that the link to apply offset correction wasn't working.

https://docs.pupil-labs.com/invisible/how-tos/tools/apply-offset-correction

Is there an alternative place where I could look up the procedure?

Thanks. πŸ™‚

user-4a6a05 15 July, 2022, 08:38:45

Hi @user-e91538! We have made a mistake leaving that link in the docs! We were considering such an article, but have opted for in-app documentation instead. You can find instructions on how to do the offset correction in the Companion app by opening the wearer profile of the currently active wearer (by clicking on the wearer name on the home screen) and then clicking "Adjust".

We'll remove the link from the docs. Thanks again for reporting the error!

user-e3f20f 15 July, 2022, 07:34:25

Let me try to fix that πŸ™‚

user-e91538 15 July, 2022, 07:35:43

Ah cool. Thank you. And I just realized I posted it on the wrong thread. I'll try to do it right the next time. πŸ™‚

user-04dd6f 15 July, 2022, 11:43:08

Hi:

I've got a question regarding the gaze accuracy.

While I can see the gaze accuracy and gaze precision after each calibration, I would like to know whether it's possible to find the gaze accuracy and precision in the exported files (.csv), or whether they are already somewhere in the folder?

Many Thanks

user-e3f20f 15 July, 2022, 11:46:46

They are not stored as part of the recording. The results are only published as logging messages, which you would need to receive/parse in real time.

user-04dd6f 15 July, 2022, 12:14:07

Thanks~

user-aaa726 16 July, 2022, 13:30:10

Hi, I want to do eye tracking on a mobile phone. For this, should I calibrate on the laptop with screen markers, or should I print out Pupil calibration markers and attach them to the phone screen? And in what range should the angular accuracy and angular precision values be in order for me to get the best result? When I calibrate and test with screen markers I get around 2.6 angular accuracy and 0.2 angular precision. Is this useful?

user-bd1280 17 July, 2022, 11:22:42

eye movement calculation

user-4c21e5 18 July, 2022, 06:22:16

Hi @user-aaa726 πŸ‘‹. I’d recommend using the physical marker, making sure to cover the area of the visual field that the phone occupies. Also, calibrate at the viewing distance you’ll be recording, e.g. arm's length. For instructions on how to use the physical marker, see this page: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker.

You can determine how much calibration accuracy you’ll need with this calculation: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

Note that investigating gaze in the context of a phone screen will require robust pupil detection and good calibration. Here is a reasonable best-case example of gazing on a mobile phone screen using Core: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing

user-aaa726 18 July, 2022, 11:10:45

If I could do the calibration like in the video, it would be perfect. What path should I follow to achieve this result (also without a phone holder)? Do I have to define an AOI with AprilTags? I tried to do that, but in the video you shared they don't. Or is it sufficient to calibrate with physical markers? Sorry for the stupid questions 😩

user-aaa726 18 July, 2022, 11:00:46

Many thanks for your answer. Now I have tried to calculate the calibration accuracy, but without a valid result. In my experiment (it's my PhD thesis) I use a portable phone holder at a distance of ca. 40 cm from the user; the smartphone is 17.27 cm. When I add the values to the formula I get NA.

user-e91538 18 July, 2022, 12:34:22

Hey guys, newbie here. I was wondering: is it possible to integrate Pupil Capture into a custom mobile app (call some API or something)? Basically, whenever the app is launched I want to start recording. Thanks

user-e3f20f 18 July, 2022, 12:35:29

Hi πŸ‘‹ Pupil Capture has a network api that you could call from your app https://docs.pupil-labs.com/developer/core/network-api/ Note, that Capture will require a desktop pc to run.
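A minimal Pupil Remote sketch of such a call — this assumes Capture runs on the same machine with Pupil Remote on its default port 50020; without Capture running, the request simply times out:

```python
import zmq  # pyzmq

ctx = zmq.Context.instance()
socket = ctx.socket(zmq.REQ)
socket.RCVTIMEO = 2000  # ms; don't block forever if Capture isn't running
socket.LINGER = 0       # don't hang on exit with undelivered messages
socket.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default address

def send_command(cmd):
    """Send a Pupil Remote command; returns the reply, or None on timeout."""
    socket.send_string(cmd)
    try:
        return socket.recv_string()
    except zmq.Again:
        return None

# "R" starts a recording, "r" stops it.
print(send_command("R"))
```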

user-e91538 18 July, 2022, 12:38:35

Does it have a Java binding? Also, my research is on the move outside, so I can't use a PC.

user-e3f20f 18 July, 2022, 12:40:36

Have you considered using Pupil Invisible instead of Pupil Core? https://pupil-labs.com/products/invisible/

user-e91538 18 July, 2022, 12:43:47

Yes, the university does not have a Pupil Invisible; that's why I'm wondering whether it's possible. But assuming I do get one, how would I integrate it into my custom app? Or do I have to run both apps, maybe?

user-e91538 18 July, 2022, 12:44:15

the companion app and my custom android app

user-e3f20f 18 July, 2022, 12:46:16

If you need to use Pupil Core, you can use a tablet running a desktop operating system, e.g. Windows Surface tablet. You might not be able to run the pupil detection at the full frame rate, though. There are other challenges when recording outside with Core, e.g. IR reflections due to sun, inhibiting the pupil detection.

user-e91538 18 July, 2022, 12:48:18

I see. thanks for the clarification. I will see what I can do

user-e3f20f 18 July, 2022, 12:46:57

In case of Invisible, you need to use a different API, see https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html (starting/stopping a recording is a simple http request.) But yes, both apps would need to be running.
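A sketch of such a request with the Python standard library — the phone IP is hypothetical and the endpoint names follow the under-the-hood guide linked above, so double-check them against the current docs:

```python
from urllib import request

PHONE_IP = "192.168.1.21"  # hypothetical; check the Companion app's network info

def api_url(action):
    # e.g. "recording:start" or "recording:stop_and_save"
    return f"http://{PHONE_IP}:8080/api/{action}"

def start_recording():
    # Requires the phone to be reachable on the local network.
    req = request.Request(api_url("recording:start"), method="POST")
    with request.urlopen(req, timeout=5) as resp:
        return resp.status

print(api_url("recording:start"))
```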

user-e91538 18 July, 2022, 12:51:41

Nice. Again, to use this REST API I need a Java/Kotlin binding, correct? I am new to Android dev as well.

user-e3f20f 18 July, 2022, 12:52:21

I have no experience with android dev. Any library able to make http requests will do.

user-e91538 18 July, 2022, 12:54:40

Okay, I will look more into it. This is just for curiosity's sake πŸ˜„ Running both apps will suffice for me, I think. Thanks again, I appreciate your help! πŸ‘

user-b9005d 18 July, 2022, 17:27:26

You referenced that there is a binocular and monocular model under the hood. Do we have access to this model and would we be able to export it? We were hoping to do some spatial transformations of the eye tracker data externally from pupil software

user-e3f20f 18 July, 2022, 17:37:26

Yes, the parameters are announced via the network api and stored to the notify.pldata file during a recording. Otherwise, you can infer which model was used for mapping based on its base_data field. If it only has one pupil datum, it was mapped monocularly, otherwise binocularly
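As an illustration of the base_data heuristic on an exported gaze_positions.csv (rows invented; base_data holds space-separated "timestamp-eye_id" entries for the pupil datums behind each gaze sample):

```python
import pandas as pd

# Invented rows mimicking gaze_positions.csv.
gaze = pd.DataFrame({
    "gaze_timestamp": [10.00, 10.01, 10.02],
    "base_data": ["10.00-0 10.00-1", "10.01-0", "10.02-0 10.02-1"],
})

# One pupil datum -> mapped monocularly, two -> binocularly.
gaze["binocular"] = gaze["base_data"].str.split().str.len() == 2
print(gaze["binocular"].tolist())
```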

user-26b243 18 July, 2022, 17:34:35

Would calibration data files suffice, or is a video of the process preferred?

user-4c21e5 18 July, 2022, 18:15:19

We will need an actual recording, i.e. set up the Core system as usual, start a recording in Capture, perform a calibration, stop the recording, then zip the folder and share it with us πŸ™‚

user-219de4 18 July, 2022, 23:24:05

Hello! Did anyone have experience sending triggers from MATLAB to pupil cam? Can the exported file save the triggers along with the timestamp? Thank you!!!!

user-4c21e5 19 July, 2022, 06:30:26

Hi @user-219de4 πŸ‘‹. Check out our Matlab helpers repository: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/send_annotation.m

user-219de4 20 July, 2022, 15:12:22

and, we tried to use remote annotations to send triggers from Python (from your manual: https://docs.pupil-labs.com/developer/core/network-api/); we do see the annotations during streaming but cannot see the triggers in the exported files. Where would the triggers be saved?
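For reference, the annotation payload sent over the network has this shape — a sketch based on the network-api docs; the trial field is a custom extra, and the Annotation plugin must be active in Capture for annotations to end up in the recording:

```python
# Sketch of an annotation payload as described in the network-api docs.
# Extra keys (like "trial") are allowed and exported alongside the annotation.
def make_annotation(label, timestamp, duration=0.0, **extra):
    return {"topic": "annotation", "label": label,
            "timestamp": timestamp, "duration": duration, **extra}

trigger = make_annotation("stimulus_onset", timestamp=102.54, trial=3)
print(trigger)
```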

user-219de4 20 July, 2022, 15:09:51

Thank you! We tried to set it up but met difficulties with installing matlab-zmq; it is an error with the mex function: "Error using mex: ld: library not found for -l./libzmq; clang: error: linker command failed with exit code 1 (use -v to see invocation)"

user-e91538 19 July, 2022, 10:33:00

I think there might be a typo on page https://docs.pupil-labs.com/core/software/pupil-capture/, in the Plugins -> Surface Tracking -> Defining a Surface part. This is the text, and I'm not sure if it is correct: "If markers have been erroneously added or left out, click the add/remove markers button and afterwards onto the according marker to add/remove them from your surface."

I think instead of "according" the word might have been "corresponding".

user-90ba8c 19 July, 2022, 13:13:51

Hi, I am an MSc Computer Science student using Pupil Core for research. I am using a mobile phone with the Pupil Mobile app for capture, hence I need to do post-hoc calibration. In my data, the calibration marker was accidentally left in the frame and is causing the gaze data to be off. I'm trying to calibrate using only a subset of the video (using trim marks), but when I come to collect the references (detected circle markers in the recording) I can't seem to 'set from trim marks'. I've moved the trim marks to the correct places but the button does nothing and continues to say 'Collect References in: 00:0'. Can anyone advise on this?

Chat image

user-e3f20f 19 July, 2022, 13:19:26

Hi! Sounds like you are doing everything correctly. If you increase the width of the menu the full message should appear.

user-90ba8c 19 July, 2022, 13:27:21

Indeed it does! Although the gaze data doesn't follow the marker very well when the participant's head moves in circles (the detected fixations don't stay in the middle of the marker, but circle around the marker as the head moves)

user-e3f20f 19 July, 2022, 13:31:06

In the gaze mapping section, there is a validation menu. Please set it to the same trim marks and run the validation. What does the accuracy say?

For more concrete feedback, please share the recording with data@pupil-labs.com. There are many factors that can influence the quality of the gaze estimation.

user-90ba8c 19 July, 2022, 13:34:18

Chat image

user-e3f20f 19 July, 2022, 13:36:18

To visualize pupil detection quality, it can be helpful to enable the eye video overlay.

user-e3f20f 19 July, 2022, 13:35:32

Looks like the issue is low confidence pupil data. 90% of your samples have been discarded for the calibration calculation. Please check if you can get better pupil data by tweaking the parameters during post-hoc pupil detection

user-90ba8c 19 July, 2022, 13:42:44

Thanks! Looks like the first few seconds of the data the pupil detection is struggling to settle. Once settled the detection is much better. Have removed these first few seconds and the gaze data is already visually much closer and validation reflects that!

Chat image

user-e3f20f 19 July, 2022, 13:45:03

Small trick for post-hoc pupil detection: You can restart pupil detection while it is still running and it will keep the current eye model. This way you can apply a more stable model from the beginning.

user-90ba8c 19 July, 2022, 13:47:46

Cheers for all the help!

user-990e57 20 July, 2022, 07:16:03

Hi, I am trying to figure out how to describe, in a research paper, the methodology Pupil Player uses for the post-hoc gaze validation function. I have not found anything explaining that on the website nor on Discord so far. Can you point me in the right direction? Thank you!

user-e3f20f 20 July, 2022, 07:22:57

I have also seen that you have contacted us via email with similar questions. Regarding how accuracy and precision are calculated, we will add information about this here: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy

user-e3f20f 20 July, 2022, 07:18:44

For Pupil Core, the calibration methodology is the same for realtime and post-hoc. I suggest referring to the corresponding documentation web-pages.

user-990e57 20 July, 2022, 07:24:48

Okay, thank you for your help! Sorry for the double contact, I wasn't sure which would be best.

user-e3f20f 20 July, 2022, 07:46:23

Hi, check out the Pupil Core tech report https://arxiv.org/abs/1405.0006 See the Spatial Accuracy and Precision section. That document can also be referenced in your paper

user-990e57 20 July, 2022, 07:47:48

Perfect! Thank you so much!!

user-0b9182 20 July, 2022, 10:46:19

I'm using the Surface Tracker plugin to develop code that returns the specific surface the gaze point lands on when three surfaces defined for a single world frame are visible. For example, if the gaze point indicates surface2, then surface2 is output.

I need your advice on the following issues.

https://github.com/pupil-labs/pupil/issues/2247

user-e3f20f 20 July, 2022, 12:58:26

Gaze on surface
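The selection logic from the linked issue (which of several tracked surfaces does the gaze fall on?) could be sketched like this. The on_surf and confidence field names follow the Surface Tracker's gaze-on-surface format; the helper itself is a hypothetical illustration:

```python
def surface_for_gaze(gaze_on_surfaces):
    """Pick the surface a gaze point falls on.

    gaze_on_surfaces: mapping of surface name -> gaze-on-surface dict
    with 'on_surf' (bool) and 'confidence' (float) fields.
    Returns the name of the hit surface with the highest confidence,
    or None if the gaze is on no surface.
    """
    candidates = [
        (datum["confidence"], name)
        for name, datum in gaze_on_surfaces.items()
        if datum["on_surf"]
    ]
    return max(candidates)[1] if candidates else None
```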

user-e3f20f 20 July, 2022, 15:28:19

Hey πŸ‘‹ They are stored in annotation.pldata in the intermediate format. Use Player's Annotation plugin to export them to CSV.

user-219de4 20 July, 2022, 15:52:34

And one last issue: we are having problems registering the camera in Pupil Capture on Windows 10. We were following the guideline, so we installed libUSBK and Zadig to try to find the connection, but Zadig can't find it, so we are stuck. Can you help us with registering the Pupil camera in Windows 10 Pupil Capture? Many thanks

user-219de4 20 July, 2022, 15:49:21

Thank you so much, we found it! In the annotation file, what do the columns represent? For example, in the first column, "index", does the world frame index refer to the frame count (starting from the recording, not the session)?

user-1bda7f 21 July, 2022, 00:07:54

Hi, I was using Pupil Player for post-HOC head pose tracking w/ April Tags, and I was unable to find the head_pose_tracker_poses.csv file, which contains the pose of the world camera. Where does this file get saved?

user-4c21e5 21 July, 2022, 06:45:07

Hi @user-1bda7f πŸ‘‹. Please ensure that you've: 1. Gone through all steps using the Head Pose Tracker plugin, i.e. detect markers, 3d model and camera localisation, and that they were successful 2. Run the raw data exporter with the Head Pose Tracker plugin enabled (check out the export docs here: https://docs.pupil-labs.com/core/software/pupil-player/#export) Then the head_pose_tracker_poses.csv will be in the export folder

user-4c21e5 21 July, 2022, 06:51:42

It's difficult to make a suggestion here. I'd recommend reaching out to the authors of matlab-zmq

user-219de4 21 July, 2022, 13:40:57

Thanks for your comments, we will. We now follow your Python script to record triggers and it works. But we still have the Windows driver registration issue, so we cannot receive video input in Pupil Capture. Is there anything you could suggest here? Thank you so much!

user-219de4 21 July, 2022, 14:24:26

Yeah, so we've been following the instructions https://zenodo.org/record/201933#.Ytlhjy-B10s

user-4c21e5 21 July, 2022, 14:39:25

Firstly, please make sure you are using the latest version of Pupil Capture (https://github.com/pupil-labs/pupil/releases/tag/v3.5). If the scene video isn't showing, follow these driver debugging steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-219de4 21 July, 2022, 14:25:13

but zadig is not finding our headcams

user-219de4 21 July, 2022, 14:26:32

Also, as a separate question, we are trying to run a dual headcam setup - can we run two headcam inputs from one Pupil Capture session?

user-4c21e5 21 July, 2022, 14:43:37

If you do not need eye video or pupil data, you could select the second scene camera as an eye camera and use Player's Eye Video Overlay. If you need the recordings to be complete, i.e. including pupil detection and eye video, use two Capture instances.

user-219de4 21 July, 2022, 15:14:17

hi Neil, many thanks for your response. We tried running as admin with no success. When we check for libUSBK devices in Device Manager there are none, even with hidden devices shown

user-e3f20f 22 July, 2022, 07:24:46

That means that the drivers were not installed successfully :/

user-8b2591 21 July, 2022, 16:46:10

hi! I am trying to do calibration with a Vive headset in Unity but I keep getting this message in Pupil Capture. Also, it stops responding in the middle of the calibration. Does anyone know how to fix this? I have added the gaze tracker to an existing Unity project.

user-8b2591 21 July, 2022, 16:48:51

this is what I am getting

Chat image

user-6ec20c 21 July, 2022, 19:03:59

Is there any way to install capture, player and service on a computer that operates using Windows 11?

user-e3f20f 22 July, 2022, 07:24:04

Have you also downloaded the app from this link? The linked version is very old. The driver instructions should be up to date though.

user-219de4 22 July, 2022, 13:56:12

which app? Zadig? we tried version 2.5 and 2.7

user-4a6a05 22 July, 2022, 07:29:49

@user-e3f20f Forwarding a question from @user-6ec20c:

Is there any way to install capture, player and service on a computer that operates using Windows 11?

user-e3f20f 22 July, 2022, 07:31:04

Should be the same as on Windows 10

user-4a6a05 22 July, 2022, 07:36:17

Thanks @user-e3f20f! @user-6ec20c have you tried going to this page and clicking "Download Desktop Software" already? https://pupil-labs.com/products/core/

user-e91538 22 July, 2022, 08:38:46

@user-6ec20c πŸ‘‹ Users have reported being able to use Pupil desktop software on Windows 11. See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/933755742509465610 Just try searching Windows 11 in discord, there are other messages for reference too πŸ™‚

user-eb6164 22 July, 2022, 13:37:18

Hello, I am new to this community and I have a couple of questions concerning Pupil Labs. Where can I post these questions?

user-4c21e5 22 July, 2022, 13:46:37

Welcome to the community, @user-eb6164 πŸ‘‹. Feel free to post your questions here! (if you want some tips, check out the community-guidelines: https://discord.com/channels/285728493612957698/983612525176311809)

user-eb6164 22 July, 2022, 13:48:33

Thank you so much πŸ™‚

user-eb6164 22 July, 2022, 14:00:37

I tried to find my answer but could not. I am working on my research and still in the process of developing my research questions and designing my experiments. I am working on a driving simulator that contains three screens. We need to extract specific eye metrics (such as fixations, visual dispersion) from different AOIs such as billboards, road signs, vehicles, mirrors, and the road. I have read that setting AOIs can be done only with markers, but markers will not help me here since these objects are moving and located inside the screen (very small). Is there any way I can set dynamic AOIs? Is there any free tool that can help us extract these data, or should we do manual coding only? Also, if I am working on three screens, should I calibrate on each screen?

user-219de4 22 July, 2022, 14:02:47

That's not helpful. We have installed libUSBK 3.0.7 following the instructions - the install is on the C drive, so what do you mean it's not properly installed?

user-e3f20f 22 July, 2022, 15:30:31

Can you confirm that you are running Windows 10, and not the newer version 11?

user-e3f20f 22 July, 2022, 15:23:17

I understand your frustration. My response should have been more constructive. What I meant to say was: the instructions should be correct and have worked well for others in the past. If the devices remain listed in another Device Manager category (e.g. Cameras or Imaging Devices), something must have gone wrong while using Zadig (either the headset was not connected or something else that is specific to your setup). After a successful driver installation, the libUSBk category should list the device entries. We do not know of any specific conditions that would cause an error.

Please note that Zadig is a third-party software and not maintained by us. Therefore, it is very difficult for us to make a statement about what might be going wrong without any specific error messages. Even with Zadig error messages, it is likely that we would not be able to help you with them. I would kindly ask you to refer Zadig specific questions to the Zadig maintainers/support.

That said, if you find out what went wrong, please let us know! Then we can update the documentation and help future users who run into the same issue.

user-4c21e5 22 July, 2022, 14:26:38

Can you please clarify which version of Pupil Capture you are using?

user-219de4 22 July, 2022, 14:34:42

3.5.1

user-4c21e5 22 July, 2022, 14:47:02

You’ll likely want to present four markers on each screen (one in each corner). These markers can be used to generate multiple AOIs on each screen. I’d recommend downloading this example recording that has markers in view and loading it into Pupil Player. That’ll give you a better sense of how surface tracking + AOIs work: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view

With that said, you are presenting moving stimuli, so static AOIs might not be appropriate. This leads to the question, do you know the coordinates on-screen of what is being presented?

If so, you could have one big AOI that covers each screen, and you’ll get x,y coordinates of gaze relative to each screen. Then it would be a case of correlating gaze with your on-screen stimuli coordinates, thereby automating your analysis.

If not, then manual coding is certainly an option, albeit a time-consuming one. Check out our annotation plugin: https://docs.pupil-labs.com/core/software/pupil-player/#annotation-player

You should only need to calibrate using one screen – when using the 3d pipeline, the calibration will extrapolate outside the calibrated area.
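The "correlate gaze with on-screen stimulus coordinates" idea above can be sketched as a point-in-rectangle test in surface coordinates. This assumes gaze is already mapped to a screen-sized surface (normalized 0..1, as in gaze_positions_on_surface.csv) and that the stimulus bounding box is known in the same coordinates; the function name and box representation are made up for illustration:

```python
def gaze_hits_stimulus(x_norm, y_norm, stim_box):
    """Check whether a surface-mapped gaze sample lands on a stimulus.

    x_norm, y_norm: gaze in normalized surface coordinates (0..1).
    stim_box: (x_min, y_min, x_max, y_max) of the stimulus in the same
    normalized coordinates (hypothetical representation).
    """
    x_min, y_min, x_max, y_max = stim_box
    return x_min <= x_norm <= x_max and y_min <= y_norm <= y_max
```

Running this per gaze row and per stimulus per frame would automate the AOI analysis for moving stimuli whose on-screen coordinates are logged by the simulator.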

user-eb6164 24 July, 2022, 21:38:56

When you say I should compare the coordinates of the object (AOI) with the coordinates of the eye gaze, do you mean I should check the exported file for these values (gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z) and compare them with my object coordinates inside the screen? Also, is there any documentation that helps me understand how to read eye metrics? I am new to this. Thank you.

user-eb6164 22 July, 2022, 14:50:31

Yes, they are moving stimuli since there is a driving scenario; although the stimuli themselves are static, the screen content changes while driving forward, so a static AOI will not help. And I think yes, I can get the coordinates of the objects, at least the road signs and the pedestrians that are standing.

user-e3f20f 22 July, 2022, 15:32:45

@user-219de4 One other suggestion would be to perform the procedure, step by step, with the exact versions of the linked software, on a fresh system. As mentioned before, it might be that something in your existing setup is interfering with the driver installation.

user-be0bae 25 July, 2022, 03:38:23

Hello, please help! I do not know what's wrong with the eye-tracking device. There is no view in the eye 1 window and the world window shows "EYE1: could not set the value. 'Backlight Compensation'". You can see the pictures for specific information. Thank you so much! @user-e3f20f

Chat image Chat image

user-6586ca 25 July, 2022, 07:47:11

Hi everyone! I'm preparing an experiment with an AOI, using Pupil Core. I have a question about the gaze position data. The "confidence" value varies from 0 to 1, where 1 indicates perfect confidence in a gaze position. However, can you advise any threshold for useful data? I found that for pupil positions, useful data carry a confidence value greater than ~0.6. Is that relevant for gaze too? Thank you for your answer and your help!

user-e91538 25 July, 2022, 14:47:43

Hi @user-eb6164 πŸ‘‹ If you use Surface Tracking to define each screen you can export gaze and fixation data relative to the surface/screen. Follow this link in our documentation for reference: https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system. Data is exported as .csv files. For a break down of .csv exports please follow this link: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter In your case you would need to match the locations your stimuli appear on the screen with the gaze/fixation_positions_on_surface.csv

user-eb6164 25 July, 2022, 15:46:18

great I will work on that and try to learn about it, thank you so much!

user-e91538 25 July, 2022, 14:58:05

Hi @user-be0bae πŸ‘‹ This warning does not affect functionality, so it is not likely to be the cause of the issue. Please carefully check the eye camera cable connection, try re-starting Pupil Capture with default settings from the main settings menu, and see if that resolves it.

user-cdb45b 13 March, 2023, 18:06:44

Hello @user-e91538 @user-d407c1 @user-4c21e5 ! I'm having the same problem as @user-be0bae , but restarting Pupil Capture with default settings and checking the eye camera cable connection has not resolved it, unfortunately. I am running Pupil Capture with Pupil Core on a Windows 11 laptop -- I had to install the libusbK drivers for the pupil cameras (following these instructions: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md)

The error message is "Camera disconnected. Reconnecting.... '___ Found device. Could not set Value. 'Backlight Compensation', and the camera feeds are not showing up ('pupil detection', 'detect eye 0', and 'detect eye 1' are all checked).

I am setting this up to run on Linux (Ubuntu-22.04) as a subsystem of Windows 11. This headset's cameras are functioning properly on MacOS, so I don't think it's the Pupil Core hardware.

Do you have any other suggestions?

Chat image

user-e91538 25 July, 2022, 15:03:05

Hi @user-6586ca πŸ‘‹ The default confidence threshold is set at 0.6. This is inherited by gaze data after you have calibrated.

user-6586ca 25 July, 2022, 15:09:53

Thank you very much for your answer !

user-8b2591 25 July, 2022, 16:32:36

hi! I have tried the Windows troubleshooting from this site (https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting) and now Pupil Capture terminates by itself. I have checked the hidden devices in Device Manager and I can't find "libUSBK Usb Devices" anymore. Is there a way to fix this?

wrp 26 July, 2022, 05:49:09

Hi @user-8b2591 πŸ‘‹ have you tried deleting the pupil_capture_settings folder from your user directory? This will make Pupil Capture start from its default settings next time you open it.

user-89d824 26 July, 2022, 11:24:03

My scene camera gets to an FPS of 7 or sometimes even 5, and as a result the recording is choppy. Am I correct in saying that the data (e.g., fixation count) derived from this recording is going to be inaccurate?

user-138f84 26 July, 2022, 17:48:18

Hey! I want to import my simulated environment. I have an fbx file. I want to be able to import it in pupil capture. How do I do that?

user-4c21e5 27 July, 2022, 12:44:58

Hi @user-138f84 πŸ‘‹. Pupil Capture is our desktop recording software used to capture eye tracking data in real-time. There's no functionality to load fbx files. If I can ask, what's your overall goal with such a use case?

user-e91538 27 July, 2022, 08:58:05

Hello ! I did my best to look for an answer but couldn't find it in the doc/git issues/discord/archives. If it has been already answered, mea culpa. I am trying to recreate a gaze vector in a virtual environment. We recreated the world camera in the virtual environment using intrinsic and extrinsic matrix. But the virtual camera is perfect in the sense that it can not reproduce fisheye distortion, thus creating a perfect, non-distorted image of what the world camera captures. Therefore, my question is : does the exported gaze data norm_y and norm_x take distortion into account ? (Meaning : is the exported data already undistorted as if the world camera had no distortion ?) Or do I have to undistort gaze data using camera matrix, in order for norm_x and norm_y to make sense in the virtual environment ? Thank you for anyone reading this. I hope you have a nice day.

user-4c21e5 27 July, 2022, 10:57:49

Hi @user-89d824. Fixations are computed from Pupil Core's gaze data, which will more than likely have a higher sampling rate. You can calculate the sampling rate using the timestamp column of the gaze_positions.csv export. That said, 5 fps is very low. What are the specs of your CPU? If your computer does not have sufficient computational resources, the software will dismiss/drop samples in order to keep up with the incoming real-time data. If you don't have access to a more powerful machine, one workaround is to record your experiment with real-time pupil detection disabled. That should speed things up. You can then run pupil detection and calibration in a post-hoc context: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
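Computing the sampling rate from the timestamp column, as suggested above, might look like this (a sketch; timestamps are assumed to be in seconds and monotonically increasing):

```python
def mean_sampling_rate(timestamps):
    """Estimate the mean sampling rate (Hz) from a timestamp column,
    e.g. the timestamp column of gaze_positions.csv (seconds)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    duration = timestamps[-1] - timestamps[0]
    # n samples span n-1 inter-sample intervals
    return (len(timestamps) - 1) / duration
```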

user-89d824 28 July, 2022, 10:26:37

We have an Intel Xeon W-2135 CPU @ 3.70 GHz, so it should be powerful enough for Pupil Core, but we do have a few things plugged in. Thanks for the suggestion. May I know if it's necessary for the brightness level to be uniform during calibration and during the experiment?

user-138f84 27 July, 2022, 13:05:47

Hi Neil! I've a simulated environment (fbx) in which I need to keep track of where the user is looking. The eye tracking data is great but I'm unable to overlay its circle on the simulated environment, as I can't find a way to import the fbx into Pupil Capture or Pupil Player

user-4c21e5 27 July, 2022, 14:21:08

Thanks for clarifying! There's no support for fbx files in Pupil Player/Capture. I'd recommend having a look at our Surface Tracker Plugin. This will allow you to obtain gaze in screen-based coordinates. You'd need to add April Tag markers to your computer screen (presented digitally or physically). Further details here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-e6ed07 27 July, 2022, 15:25:23

Hi everyone! Does anyone know how to debug the numbers in the red circle in this diagram? Thank you very much if someone could answer my question!πŸ₯Ή

Chat image

user-e91538 27 July, 2022, 15:50:26

Hi @user-e6ed07 πŸ‘‹ The numbers you refer to indicate the fixation id in your recording - it's an index of the fixation number from the start of the recording when you have the Fixation Detector plugin enabled. Each fixation id will correspond to a row in the fixations.csv export. You can see the fixation index increase/decrease if you skip through fixations using the Next Fixation/Previous Fixation buttons in the Pupil Player window.

user-e6ed07 27 July, 2022, 16:06:56

So I need to export fixations.csv first, then I can see the index? Right?πŸ₯²

user-e91538 27 July, 2022, 16:24:22

You can see the fixation id if you select the fixation detector in Pupil Player (current fixation). This will correspond to a row in the export for that fixation.

Chat image

user-e6ed07 27 July, 2022, 16:44:54

I've found it! I can finally see it! Thank you so much!πŸ˜†

user-f93379 28 July, 2022, 10:49:03

Hi! Is it possible to run Pupil Capture under ARM?

user-3eccb3 28 July, 2022, 11:05:13

Hi admins, everyone . Can someone provide pyuvc usage docs ?

user-3eccb3 28 July, 2022, 11:11:31

Chat image

user-e3f20f 28 July, 2022, 13:12:58

Hi, pyuvc expects specific functionality from the cameras. See https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498 It is possible that your camera does not fulfil all these requirements.

user-3eccb3 28 July, 2022, 11:12:27

How do I resolve this error. I have correctly configured my uvc camera driver as libusbk one

user-4c21e5 28 July, 2022, 11:43:35

If the computer is running multiple sensors/devices, then that would likely explain the low sampling rate. Uniform ambient illumination isn't a requirement.

user-89d824 29 July, 2022, 13:32:00

Thank you for confirming

user-e3f20f 28 July, 2022, 11:49:05

Have you installed any custom plugins? Which plugins are you using?

user-89d824 29 July, 2022, 13:34:03

These are the ones I have. Should I turn them all off during recording to improve the FPS?

Chat image

user-f93379 28 July, 2022, 12:58:02

Crash. Instruction not valid?

Chat image

user-4c21e5 28 July, 2022, 13:10:41

Hi @user-f93379. Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/956583277429420052

user-e91538 28 July, 2022, 19:32:01

Does anyone know about pupil_positions.csv file data processing?

user-e91538 29 July, 2022, 13:15:47

Hi @user-e91538 πŸ‘‹ Could you clarify which metrics you would like to extract from the pupil_positions.csv file? Pupil Positions refers to the location of the pupil within the eye camera coordinate system. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#pupil-positions and find a break down of the export here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv

user-13f46a 29 July, 2022, 00:22:04

Hi, does Pupil Labs have monocular glasses?

user-e91538 29 July, 2022, 13:20:28

Hi @user-13f46a πŸ‘‹ It would be possible for us to provide you with a Pupil Core headset with only one eye camera for monocular tracking. Is there a particular reason you need this?

user-3eccb3 29 July, 2022, 12:20:07

Hi, all. Does pyuvc have functionality to control exposure of camera?

user-e3f20f 29 July, 2022, 12:20:59

Hi πŸ‘‹ Yes, if the camera exposes its exposure setting as a UVC control. See this example https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L631-L640

user-3eccb3 29 July, 2022, 12:59:28

Thanks for the quick reply. In the Windows installation of pyuvc, I can't find the file uvc_backend.py. Where is this file located? I installed the .whl

user-e3f20f 29 July, 2022, 13:08:16

To get an uvc.Capture instance, start by looking at https://github.com/pupil-labs/pyuvc/blob/master/example.py

user-e3f20f 29 July, 2022, 13:07:25

uvc_backend.py is part of the Pupil Core application that uses pyuvc. In the example, self.uvc_capture is an instance of uvc.Capture.

user-13f46a 29 July, 2022, 13:21:39

We need an eye tracker, but binocular glasses seem to be over our budget

user-13f46a 29 July, 2022, 13:22:06

i wanted to see price difference

user-e91538 29 July, 2022, 13:26:14

Please contact sales@pupil-labs.com to request a quote πŸ™‚

user-3eccb3 29 July, 2022, 13:29:19

Is there any way to make uncompressed formats like .bmp be supported by pyuvc?

user-e3f20f 29 July, 2022, 13:33:07

Yes, but you would need to modify the pyuvc source code and rebuild the package. Currently, it only supports mjpeg

user-e3f20f 29 July, 2022, 13:35:41

Surface Tracker and Head Pose Tracker are the most resource-intensive plugins in that list. You can run both plugins post-hoc, too.

user-89d824 29 July, 2022, 13:37:38

Thank you!

user-e91538 29 July, 2022, 13:48:57

hey @user-e91538, I am working on my project and I need to find the pupil dilation from the CSV file. The main objective of the project is to check if the pupil size changes when someone reads sad, happy, or neutral articles.

user-e91538 29 July, 2022, 13:59:43

Check out pupillometry best practices here: https://docs.pupil-labs.com/core/best-practices/#pupillometry You can get diameter both in millimetres (provided by 3d eye model, as @user-e3f20f referred you to), and in pixels (observed in the eye videos: diameter)

user-e3f20f 29 July, 2022, 13:52:44

In this case, have a look at the diameter_3d column which returns the diameter in 3d. Note that you need a well-fit eye model and no slippage for these values to be accurate. Have also a look at the other fields here https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
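A minimal sketch of the suggested analysis: average diameter_3d over samples that pass the confidence threshold. The column names match the pupil_positions.csv export; the helper itself is hypothetical, and the 0.6 threshold follows the default mentioned elsewhere in this channel:

```python
def mean_pupil_diameter_mm(rows, min_confidence=0.6):
    """Mean pupil diameter in mm from high-confidence samples.

    rows: iterable of dicts with 'confidence' and 'diameter_3d' keys,
    as in pupil_positions.csv (diameter_3d comes from the 3d eye model).
    """
    values = [r["diameter_3d"] for r in rows if r["confidence"] > min_confidence]
    if not values:
        raise ValueError("no samples above the confidence threshold")
    return sum(values) / len(values)
```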

user-e91538 29 July, 2022, 14:49:29

@user-e3f20f and @user-e91538 thank you so much πŸ™‚

user-e91538 29 July, 2022, 15:22:56

@user-e3f20f @user-e91538 I am getting this type of output. is it the right way to do the analysis?

Chat image

user-e91538 01 August, 2022, 07:23:22

You could try reproducing the attached figure. This was from a simple setup maintaining central fixation whilst the screen alternates from black to white. The change in luminance corresponds to an increase/decrease in Pupil dilation.

Chat image

user-eb6164 29 July, 2022, 19:36:50

Hello, starting today I am facing an issue with the eye tracker. I notice that my gaze is not being detected correctly: when I look down, the eye tracker points somewhere completely far from where I am looking, and even calibration is not working anymore. I tried resetting the software but it did not work; my eye confidence is 1.00. We even noticed that fixations are detected when my eyes are closed. Any leads about what might be the issue, and how to fix/troubleshoot it?

user-4c21e5 30 July, 2022, 09:04:11

Hi @user-eb6164 : πŸ‘‹. Have you changed anything with your setup, any Capture settings, swapped scene cam lenses etc.?

user-eb6164 30 July, 2022, 19:44:13

Hello not at all 😦

user-e91538 31 July, 2022, 14:37:49

How can I detect outliers and eye blinks from the pupil_positions.csv file?

user-e3f20f 01 August, 2022, 06:25:45

You can use the blink detector to find blinks and apply the methodology presented in this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb

Otherwise, I would recommend discarding data points with a confidence of 0.6 or lower.
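As a rough complement to the Blink Detector, low-confidence stretches in pupil_positions.csv can be flagged with a simple run-length scan. This is only a heuristic sketch (candidate blinks/outliers), not the detector's actual algorithm; the function name and parameters are made up:

```python
def low_confidence_runs(confidences, threshold=0.6, min_len=3):
    """Return (start, end) index pairs (end exclusive) of runs of
    consecutive samples with confidence <= threshold. Runs shorter
    than min_len samples are ignored."""
    runs, start = [], None
    for i, c in enumerate(confidences):
        if c <= threshold:
            if start is None:
                start = i  # run begins
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None  # run ends
    if start is not None and len(confidences) - start >= min_len:
        runs.append((start, len(confidences)))  # run reaches the end
    return runs
```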

End of July archive