👁 core


user-7daa32 01 August, 2020, 19:00:14

Hello, sorry. What is the reason for the lack of fixations in Pupil Player? The gaze circle didn't appear throughout.

papr 03 August, 2020, 07:27:59

@user-7daa32 Please see my notes below:

  • Fading 3d eye model - The 3d eye model fades out if the 3d confidence is low. This can have two reasons: 1) the 2d confidence is low, or 2) the eye model is not fit well. In case 1), the blue circle should be fading, too; check the algorithm view for possible reasons. In case 2), the blue circle should be stable but the red circle does not match it. Roll your eyes to sample better pupil positions for the 3d model. Once it is fit well, you can freeze the model in the 3d detector settings. You can share a recording with info@pupil-labs.com if you want more specific feedback.

  • White background / writing not visible - I am not sure if I understand the issue/setup. Was the scene camera working? Was it pointing at the text? Was the lens in focus? A picture of the setup or an example recording would help us to understand the situation better.

  • Lack of fixations - Fixations require gaze data. If this was a Pupil Mobile recording you will need to run the post-hoc pupil detection and gaze mapping first.

papr 03 August, 2020, 07:32:56

@user-7daa32 Did you see this message in Player or Capture?

did not collect enough data for mapping accuracy

user-6e9a97 03 August, 2020, 13:25:06

Hi, I'd need to know the distance (in mm) between a given fixation and a given area. Is there a straightforward method to do that? Thanks!

Chat image

user-c5fb8b 03 August, 2020, 13:26:33

@user-6e9a97 in your image, I assume the circle labeled "EYE" should be labeled "FIXATION"?

user-6e9a97 03 August, 2020, 13:44:28

@user-6e9a97 in your image, I assume the circle labeled "EYE" should be labeled "FIXATION"? @user-c5fb8b Yes, sorry, EYE is FIXATION (with x and y coordinates), while AOI is just an area I've created within a larger area (such as the area of the screen).

user-c5fb8b 03 August, 2020, 13:48:14

@user-6e9a97 and the fixation is on the same planar surface as the AOI rectangle?

user-6e9a97 03 August, 2020, 15:33:01

Hi! Yes, I want to calculate the distance between a fixation and an area; all my stimuli are presented on a PC monitor (in my picture, the monitor can be imagined as the white area, while the blue AOI is just a given picture, e.g. a face, that I want to present to participants). Thanks!

user-c5fb8b 03 August, 2020, 15:34:45

@user-6e9a97 are you tracking the AOI with Pupil's Surface Tracker plugin? Or is Pupil not interacting with the AOIs?

user-6e9a97 03 August, 2020, 15:40:21

Well, at the moment I'm working on this solution: cover the whole screen with a whole-screen AOI generated through 4 AprilTags on the 4 corners of the screen; then I use the Surface Tracker plugin and enter, for the whole-screen AOI, the physical width and height of my screen in mm, so I can get the X and Y position (in mm) of a given fixation within the screen. Then, offline, I have obtained the X and Y coordinates of the smaller blue AOI (in px, then converted to mm). With these values, I'm trying to calculate the distance between the fixation and the area. But I'm wondering whether there is another solution.
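
For reference, the fixation-to-AOI distance in that setup boils down to a point-to-rectangle computation; a minimal sketch (the helper name and coordinates are illustrative, everything in the same mm-based screen coordinate system):

```python
import math

def fixation_to_aoi_distance(fx, fy, left, top, right, bottom):
    """Distance in mm from a fixation (fx, fy) to the nearest point of a
    rectangular AOI; returns 0 if the fixation lies inside the AOI."""
    # Clamp the fixation onto the rectangle to find the closest point.
    nearest_x = min(max(fx, left), right)
    nearest_y = min(max(fy, top), bottom)
    return math.hypot(fx - nearest_x, fy - nearest_y)

# Example: fixation at (120.0, 85.5) mm, AOI spanning x 200-300 mm, y 50-150 mm
print(fixation_to_aoi_distance(120.0, 85.5, 200.0, 50.0, 300.0, 150.0))  # 80.0
```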

papr 03 August, 2020, 15:41:24

@user-6e9a97 This sounds like a good solution.

user-6e9a97 03 August, 2020, 15:41:40

ok, thanks!

user-c5fb8b 03 August, 2020, 15:41:49

@user-6e9a97 I agree with @papr

user-6e9a97 03 August, 2020, 15:43:40

ok, I'll try with this! In this specific case (I'm interested in eye movements made towards stimuli appearing on a fixed-distance PC monitor) do you suggest to use 2D or 3D calibration? (I'm not using a chinrest or similar). And a monocular or binocular registration? Thanks.

papr 03 August, 2020, 17:51:23

@user-6e9a97 definitely binocular. Usually, we recommend using 3D. 2D can be more accurate in a very controlled environment.

user-a98526 04 August, 2020, 13:03:54

Hi @papr, how do I cite Pupil Core in my paper?

user-a8f443 04 August, 2020, 15:28:11

@papr - whew, made it back to the Discord after a year away. Our lab is still using the Intel depth sensor. I don't follow the dev stuff much because of other priorities... but I understand from @user-b14f98 that there is no longer explicit support in the toolchain. Is there a rogue group of folks out there who still need/use this functionality that I should be connecting with?

user-b14f98 04 August, 2020, 15:29:34

(btw, bok_bok is also performLabRit). This is my personal / non-professional account.

user-a8f443 04 August, 2020, 15:30:30

You have been unmasked!!!!

papr 04 August, 2020, 15:31:05

Back to the topic. There is a community plugin for it now https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11

user-a8f443 04 August, 2020, 15:31:16

Thanks @papr

user-a8f443 04 August, 2020, 15:32:28

I'll give it a go. I like that it fits in a Gist.

papr 04 August, 2020, 15:32:53

@user-a8f443 You still need to install pyrealsense though

user-a8f443 04 August, 2020, 15:33:11

makes sense - a small sacrifice

user-a8f443 04 August, 2020, 15:33:39

it makes sense to 'peel it out' actually, as best as I can see.

papr 04 August, 2020, 15:33:44

It means that you need to symlink pyrealsense into your user plugins folder or run Pupil from source

user-a8f443 04 August, 2020, 15:34:12

OK- yeah, I remember having to do that to get some of the stuff happening in the older versions as well

user-a8f443 04 August, 2020, 15:34:34

I have a Docker VM with it set up for the older version, so I can learn from my mistakes.

user-a8f443 04 August, 2020, 15:36:12

Thanks again - now if I can only get a solution for my 'flip has really squinty eyes' problem. I was thinking of an external prosthesis like used to clamp eyelids open for surgical intervention

user-a8f443 04 August, 2020, 15:36:22

Or, you know, gene editing

user-430fc1 05 August, 2020, 17:03:23

Hello, is it possible that bright light from LEDs can affect the quality of pupil measurements?

Chat image

user-430fc1 05 August, 2020, 17:04:11

I'm measuring a PLR, looking into an integrating sphere

Chat image

user-430fc1 05 August, 2020, 17:04:12

The light seems to cause problems with the pupil data. I'm wondering if it is because it makes the pupil too small, or if there is something happening at the sensor level. Apologies if this is not clear...

Chat image

user-b14f98 06 August, 2020, 14:45:51

@user-430fc1 Fellow user - not with PLabs. Make sure to explore the options available to you in the eye video view. This includes adjusting the region of interest (ROI) in which the pupil lives, and also several parameters that affect the pupil fitting algorithm, including minimum pupil size. Adjust those parameters while in the "algorithm view," so that you can see their effect.

user-b14f98 06 August, 2020, 14:47:22

It could be that your min pupil size is too large for the smaller, constricted pupil.

user-b14f98 06 August, 2020, 14:49:56

My thought is that the light you're turning on should not affect the image quality unless it is adding a lot of infrared light, and saturating the eye cameras. Even then, it should not matter as long as the pupil is in high contrast to the iris.

user-b14f98 06 August, 2020, 14:50:58

In fact, in both of those images, the pupil seems to be tracked well, and the eyeball model indicated by the larger green circle seems reasonably sized and positioned. Those are only images, of course.

user-c5fb8b 06 August, 2020, 14:52:22

@user-430fc1 is your graph showing pupil diameter? Make sure to filter for pupil detection confidence (e.g. > 0.8) before using the data, as it will also include frames where the detection failed (indicated by low confidence).
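
For context, such a filter on a Pupil Player export could look like this (a sketch; it assumes the standard pupil_positions.csv columns and a hypothetical export path):

```python
import pandas as pd

# Load a Pupil Player export (adjust the path to your export folder).
df = pd.read_csv("exports/000/pupil_positions.csv")

# Keep only samples where the detector was confident.
df_ok = df[df["confidence"] > 0.8]
print(f"kept {len(df_ok)} of {len(df)} samples")

diameter = df_ok["diameter_3d"]  # 3d diameter in mm; "diameter" is 2d pixels
```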

user-430fc1 06 August, 2020, 14:59:27

@user-b14f98 Thanks for the input. The ROI is the full camera and I have good settings for the max/min pupil size. The light is from 10 LED channels spanning 420-740 nm. It seems to happen at the offset of the light. EDIT - sorry, the picture is not clear. It's basically showing an artifact consistently in the same place.

Chat image

user-430fc1 06 August, 2020, 15:01:38

@papr Yes, pupil diameter, and the problematic data are low in confidence. I'm trying to figure out why detection is problematic at this particular time

user-b14f98 06 August, 2020, 15:01:45

@user-430fc1 Consider that a small, constricted pupil is more likely to overlap in size with other "false alarm" pupils in the image. You can reduce this possibility by playing with the pupil fitting parameters: pupil min size, and the parameter that controls the histogram split (I forget what it's called, but it's the top parameter in the window).

user-b14f98 06 August, 2020, 15:02:04

Also, I strongly suggest that you debug this using algorithm view, and not only derived time series.

user-b14f98 06 August, 2020, 15:03:16

@user-430fc1 The images you presented us with look like good, high confidence fits. Were they actually low confidence?

user-b14f98 06 August, 2020, 15:03:31

both have confidence of 1.00

user-430fc1 06 August, 2020, 15:04:43

@user-b14f98 Those images were screenshotted just before and after light onset. The vertical blue line on the pupil plot corresponds to the onset of a 1 s pulse of light. After light offset, there are issues with detection.

user-b14f98 06 August, 2020, 15:05:06

Oh, OK. Yes, your lights that span 420-740 nm reach into the near-IR range, which may cause some issues.

user-b14f98 06 August, 2020, 15:05:37

Maybe test the camera's sensitivity to the emitted range by covering up the light source on the Pupil Labs hardware and turning your light on/off.

papr 06 August, 2020, 15:07:19

The easiest way to give concrete feedback would be by providing an example recording. This way we can open it in Player and rerun pupil detection to check what is going wrong.

user-b14f98 06 August, 2020, 15:08:02

Papr and PFA are the real pros here. I would follow their advice.

papr 06 August, 2020, 15:08:40

@user-b14f98 Thank you nonetheless for stepping in and providing good answers!

user-b14f98 06 August, 2020, 15:08:59

One more thing to consider, if it does turn out that the light source is the issue. I'm not sure if it would interfere with your experimental design to do so, but you could have the observer wear a welding instructor's mask, which filters out near IR. The Pupil Labs Core will fit underneath it.

user-b14f98 06 August, 2020, 15:09:04

Link incoming

user-430fc1 06 August, 2020, 15:10:36

@papr OK, will link to one soon. Another thing, the integrating sphere has a high Lambertian reflectance coating on the inside. Could that cause issues with scattered IR from the emitters?

Chat image

papr 06 August, 2020, 15:11:56

@user-430fc1 We do not use the attached IR LEDs as features for pupil positions. The algorithm relies on fitting an ellipse to the pupil edge. If there is a strong reflection that falls on top of that border, it will reduce the confidence of that sample, as the pupil edge is interrupted.

user-b14f98 06 August, 2020, 15:16:53

Here's an example of a coated faceshield. https://www.amazon.com/dp/B07PPCVFXW/ref=sr_1_42?dchild=1&keywords=welding+mask+infrared&qid=1596726934&sr=8-42

user-b14f98 06 August, 2020, 15:17:20

The coatings can be less dramatic than this one. I'm having trouble tracking down the model I bought, which was visually quite transparent and without tint.

user-b14f98 06 August, 2020, 15:18:33

@user-430fc1 I'm curious, what's the goal here? Some kind of multispectral analysis of eye / skin region?

user-430fc1 06 August, 2020, 15:21:36

@user-b14f98 Thank you! Yes, all things PLR basically. We hope to explore melanopsin function after some light-source optimisation

user-b14f98 06 August, 2020, 15:21:45

Wonderful.

user-b14f98 06 August, 2020, 15:22:34

Keep in mind that because melanin gives color to both the iris and the skin, it can be problematic to segment the two using standard RGB or IR emitters.

user-b14f98 06 August, 2020, 15:23:22

I've tossed around doing this kind of work myself in the past - not because it's an area of my own expertise, but because I work in a department with a group of remote sensing scientists that specialize in hyper and multi-spectral sensing.

user-b14f98 06 August, 2020, 15:24:29

Best of luck!

user-b14f98 06 August, 2020, 15:25:55

Oh, it looks like I misunderstood your aims just a bit there. I read too quickly, and misread melanopsin as melanin.

user-b14f98 06 August, 2020, 15:26:06

So, don't worry if what I said was confusing.

user-b14f98 06 August, 2020, 15:26:10

🙂

user-430fc1 06 August, 2020, 15:26:50

@user-b14f98 Yes, we are interested in the photopigment, present in a small population of retinal ganglion cells 😏

user-b14f98 06 August, 2020, 15:27:13

inprgcs?

user-b14f98 06 August, 2020, 15:27:16

iprgcs

user-b14f98 06 August, 2020, 15:27:36

Which lab?

user-430fc1 06 August, 2020, 15:28:12

Perception Lab at Oxford

user-b14f98 06 August, 2020, 15:28:44

PI?

user-b14f98 06 August, 2020, 15:29:17

Smithson?

user-430fc1 06 August, 2020, 15:30:11

Yes, she is head. PM if you want to know more so we don't spam this channel!

user-430fc1 06 August, 2020, 15:32:47

@papr https://www.dropbox.com/sh/3rqpyvnrmj666hw/AAAL3NYhhLqnIjP0bgSI1hzaa?dl=0 - here is an example recording where the issue occurs

papr 06 August, 2020, 15:33:55

@user-430fc1 Is this a monocular recording on purpose?

user-430fc1 06 August, 2020, 15:34:28

@papr Yes, only have one eyecam on this headset

papr 06 August, 2020, 15:42:01

@user-430fc1 From what I can see, 2d detection looks fine, but 3d detection struggles in scene frames 143-154. Is this what you are referring to?

user-430fc1 06 August, 2020, 15:42:56

@papr Yes, that is generally the issue. I deleted older examples where it was much more noticeable

papr 06 August, 2020, 15:56:45

@user-430fc1 To be honest, I am not sure what causes the 3d ellipse to jump there. The model is stable. The 2d detection looks solid. I ran the offline pupil detection and the jump reduced to a single scene frame. It will take some time to look into this issue.

user-430fc1 06 August, 2020, 15:59:02

@papr OK, no problem. Thanks for your help.

user-430fc1 06 August, 2020, 16:26:08

@papr Another Q - I have a routine which uses the worldcam to timestamp the light. It is a subclass of threading.Thread with an overridden .run() method, and it gets the timestamp associated with the first frame where luminance passes a threshold (and then sends an annotation). I also have a similar class which grabs and stores pupil.1.3d data for immediate access, which I start running at the same time in my script. However, the exact timestamp of the light-detection frame does not exist in the grabbed pupil data, and I have to find the nearest. Is this to be expected, and if so, why? Both cameras are running at 120 Hz (320, 240).

papr 06 August, 2020, 16:28:03

@user-430fc1 This is to be expected as the cameras are not synchronized. Finding the closest sample in time is your best bet.
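
A sketch of that nearest-sample lookup, assuming the grabbed pupil timestamps are kept sorted in an array (the helper name is hypothetical):

```python
import numpy as np

def closest_sample_index(pupil_timestamps, target_timestamp):
    """Index of the pupil sample closest in time to a world-frame timestamp.
    pupil_timestamps must be sorted in ascending order."""
    ts = np.asarray(pupil_timestamps)
    i = np.clip(np.searchsorted(ts, target_timestamp), 1, len(ts) - 1)
    # Pick the nearer of the two neighbours around the insertion point.
    if ts[i] - target_timestamp < target_timestamp - ts[i - 1]:
        return i
    return i - 1
```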

user-7aaf5c 07 August, 2020, 12:08:07

Hello everyone, I have a question that confuses me; I would be glad if you could help. Is it possible to use this software with a standard webcam, or do I necessarily need one of the Pupil Labs headsets?

user-c5fb8b 07 August, 2020, 12:10:33

Hi @user-7aaf5c, the Pupil software works with head-mounted eye-trackers only. It can work with other cameras than the ones we sell with our hardware as well. Generally there are 2 prominent types of eye-tracking: head-mounted vs remote eye-tracking. For remote eye-tracking you normally have a camera attached to e.g. a computer screen. Maybe you are referring to this? Our software does not work for remote eye-tracking setups unfortunately.

user-7aaf5c 07 August, 2020, 12:33:47

@user-c5fb8b Thanks for your answer. Basically I have an external camera which is connected via USB. My main goal is to track the pupil with close-up images. Can I achieve my goal with this software?

user-c5fb8b 07 August, 2020, 12:36:29

@user-7aaf5c not without significant effort. You will also at least need 2 cameras, one for the eye and one for the "view" of the wearer. Depending on the type of camera, it might not even be compatible with Pupil. Please note that we offer instructions to build your own DIY version of a headset, but it will also require significant technical work to build one: https://docs.pupil-labs.com/core/diy/

papr 07 August, 2020, 13:07:01

Technically, if you are only interested in tracking the pupil, without the gaze mapping part, you only need one camera pointed at the eye. In order to use the built-in video backend your camera needs to fulfill these requirements: https://discordapp.com/channels/285728493612957698/285728493612957698/725357994379968589

user-7aaf5c 07 August, 2020, 13:43:31

@papr Okay, thanks. So I need a camera like those given in this link: https://docs.pupil-labs.com/core/diy/ ? For example, is the Logitech C525 okay for my study?

papr 07 August, 2020, 13:44:45

Other cameras that fulfill the requirements work, too. And yes, the C525 fulfills the requirements.

papr 07 August, 2020, 13:47:30

Please keep in mind that the algorithm is optimized for IR-light filtered images. It works on visible-light images too, but not as well, specifically if your subject has a dark-colored iris. Following the DIY guide might be the easiest way to success if you do not want to buy the official headset.

user-430fc1 07 August, 2020, 17:49:50

@papr Here is a link to a recording that illustrates the issue much more clearly. The ellipse jumps around for about a second, despite stable detection. https://www.dropbox.com/sh/tofhqi30urlujou/AAA7SjLrIxfn2i0DTjXbr06Da?dl=0

user-7aaf5c 07 August, 2020, 21:40:02

I am getting this error message. Does my camera model cause this problem? I understand that my camera does not fulfill the requirements that you mentioned before.

Chat image

user-7aaf5c 09 August, 2020, 14:30:33

Hello again, my previous problem has been nearly solved. But I want to ask one more thing. I just want to use my camera to track the pupil, but as a result of the changes I made, my camera is seen as both the world and the pupil camera. So, is there any way to ignore the world camera and use my camera only as a pupil camera?

user-26fef5 09 August, 2020, 16:39:53

@user-7aaf5c hey there, I am not a Pupil Labs member, but I'll nonetheless try to answer. You can just set the video source to fake capture (via the drop-down menu in the video source selection) - you will not be able to calibrate the gaze due to the missing video feed for marker-based calibration.

papr 09 August, 2020, 21:18:29

@user-26fef5 @user-7aaf5c the fake source was removed a few releases ago.

user-26fef5 09 August, 2020, 21:22:13

Ok. I see.

user-c5fb8b 10 August, 2020, 10:28:35

@user-7aaf5c currently there's no way to "disable" the world camera. I assume you are not interested in gaze mapping?

I'm not sure whether using the same camera from both windows might cause data loss (as in, some frames get sent to world, some to eye). In that case, maybe you have some USB webcam available that you could hook up? You could then just switch the world window to that other camera and basically ignore it. Please be aware that not all USB cameras are compatible with Pupil.

papr 10 August, 2020, 18:38:41

I just realized, if you are not interested in recording but only the real time data you can use Pupil Service instead of Capture.

user-7daa32 11 August, 2020, 14:12:13

@user-7daa32 Did you see this message in Player or Capture? @papr Thanks. I can't recall, but I will try out another sampling.

user-7daa32 11 August, 2020, 14:19:47

ok, I'll try with this! In this specific case (I'm interested in eye movements made towards stimuli appearing on a fixed-distance PC monitor) do you suggest to use 2D or 3D calibration? (I'm not using a chinrest or similar). And a monocular or binocular registration? Thanks. @user-6e9a97 how can one specify? I thought the 2D and 3D run concurrently?

user-c5fb8b 11 August, 2020, 14:22:00

@user-7daa32 the 2D and 3D pupil detection run concurrently. There's also 2D vs 3D gaze mapping. You can specify which gaze mapping pipeline you want in the calibration menu. Please see this section in the docs for an explanation: https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline

user-7daa32 11 August, 2020, 14:22:37

Hello, is it possible that bright light from LEDs can affect the quality of pupil measurements? @user-430fc1 How did you get that graph? In the new updated software?

user-c5fb8b 11 August, 2020, 14:23:54

@user-7daa32 the graphs from @user-430fc1 are not made with the Pupil software. From the looks I assume he created them with matplotlib in python.

user-7daa32 11 August, 2020, 14:26:06

Thank you @user-c5fb8b

user-b3403f 12 August, 2020, 14:13:43

Hello! I'm wondering if it's possible to export only the world mp4 video from a detected surface rather than the whole world view. It is my understanding that you can view the surface alone and the gaze position within the surface in its own window, but is there a way to export this as its own video file? Thanks!

user-c5fb8b 13 August, 2020, 07:55:41

Hi @user-b3403f this is not possible from within Pupil.

We do however export the relevant homographies, which can be used to convert pixel positions between surface coordinates and coordinates of the original camera image. You can use these to crop the surface from the distorted scene image and get something similar to our surface debug view. The homographies are exported as two additional columns in the surf_positions_<name>.csv export file:
- dist_img_to_surf_trans
- surf_to_dist_img_trans
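
As a rough illustration of the cropping idea (not the exact Player code): assuming you have parsed the 3x3 dist_img_to_surf_trans matrix from the CSV into a numpy array H, and frame is the matching distorted scene image:

```python
import cv2
import numpy as np

# H maps distorted scene-image pixels to normalized surface coords (0..1).
out_w, out_h = 640, 480  # output crop size in pixels, your choice

# Compose with a scale so normalized surface coords land on output pixels.
scale = np.array([[out_w, 0, 0],
                  [0, out_h, 0],
                  [0,     0, 1]], dtype=np.float64)

crop = cv2.warpPerspective(frame, scale @ H, (out_w, out_h))
# Depending on the surface's y-axis convention (origin bottom-left),
# the result may need a vertical flip: crop = cv2.flip(crop, 0)
```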

user-56be96 13 August, 2020, 10:16:24

Hi! Trying to record on the PC from Pupil Mobile results in an error - "Recording Pupil Mobile stream requires a connection". How can it be fixed?

Chat image

user-c5fb8b 13 August, 2020, 10:20:00

Hi @user-56be96 Pupil Mobile is not intended to be recorded via Pupil Capture. The network latency normally results in very low fps data and timing issues. Instead you should record on the phone and later transfer the recording to the PC via USB to open in Pupil Player.

You can however use Pupil Capture with the Pupil Remote functionality to start the recording (on the phone) via Pupil Capture (on the PC), which is very handy for some situations. Are your phone and PC in the same network and are there any network restrictions? From our experience the network communication between Pupil Mobile and Pupil Capture will sometimes not work in e.g. university networks, in this case we recommend setting up your own wifi network.

user-56be96 13 August, 2020, 10:52:41

Hi @user-56be96 Pupil Mobile is not intended to be recorded via Pupil Capture. […] @user-c5fb8b Yes, they are in the same wifi and the PC can see the stream from the phone. And how do I calibrate if recording on Mobile only? Post-hoc in Pupil Player?

user-c5fb8b 13 August, 2020, 10:54:19

@user-56be96 correct. May I ask which Pupil version you are using?

user-56be96 13 August, 2020, 10:55:34

@user-c5fb8b latest software and Binocular headset. Phone is Oneplus 3T

user-c5fb8b 13 August, 2020, 10:57:55

@user-56be96 so you say the PC sees the stream from the phone? As in: you selected the Pupil Mobile device as video source and you see the 3 streams (world, eye0, eye1) in capture?

user-56be96 13 August, 2020, 11:24:06

@user-c5fb8b Yes, but recording did not start

user-c5fb8b 13 August, 2020, 11:25:34

@user-56be96 theoretically it should start (even though we recommend recording on the phone). I'll see if I can reproduce this.

user-c5fb8b 13 August, 2020, 11:40:25

@user-56be96 I was able to reproduce the problem. It appears when you try to start recording with big network lags. I turned down the frame rate of all 3 cameras such that the wifi is capable of handling the data throughput and then I could start the recording from PC.

user-c5fb8b 13 August, 2020, 11:41:05

Also turning down the world camera resolution helps a lot

user-c5fb8b 13 August, 2020, 11:41:19

You can find these settings in the Video Source menu

user-56be96 13 August, 2020, 11:49:45

@user-c5fb8b ok thx!

user-c5fb8b 13 August, 2020, 11:53:04

@user-56be96 Again, I would recommend recording on the phone. You can run the calibration in parallel in Capture on the stream while recording on the phone, to get a rough idea of how well the calibration worked. Note that the offline calibration will be more accurate in the end since it's not influenced by network lag.

user-c5fb8b 13 August, 2020, 11:54:08

I would recommend the "Single Marker" calibration with a printed marker for that.

user-7daa32 13 August, 2020, 12:21:48

Hello, I don't know why I am getting this after minimizing or bringing back the eye windows.

user-c5fb8b 13 August, 2020, 12:23:09

@user-7daa32 which version of Pupil are you using? Are you on Windows? Can you share the log file with us right after the error occurs? You can find it in your home folder > pupil_capture_settings > capture.log

user-7daa32 13 August, 2020, 12:25:23

@user-c5fb8b Pupil hardware or software? I am using the current version of the Pupil Labs software 👁 core... as for the file, I will send it when I get to my office.

user-c5fb8b 13 August, 2020, 12:27:22

@user-7daa32 Pupil Capture version. We have released v2.2 just 8 days ago, where we fixed a lot of UI bugs. Are you already using v2.2?

user-7daa32 13 August, 2020, 12:28:20

@user-c5fb8b Yes, I started using it yesterday. The first time, I got that error.

user-c5fb8b 13 August, 2020, 12:29:06

Ok, I will see if I can reproduce the issue on Windows. Are you experiencing any other problems when minimizing the eye windows? Or is it just the error message?

user-c5fb8b 13 August, 2020, 13:06:07

@user-7daa32 I was able to reproduce the error message when minimizing the eye window. This should however have no effect and you can safely ignore it.

user-7daa32 13 August, 2020, 13:24:40

Okay, thanks

user-b3403f 13 August, 2020, 14:15:26

@user-c5fb8b thanks for the help. I actually just opened the surface window from Pupil Player and did a screen recording as it played through the whole video. I now have a great .mov file of only the surface and nothing else from the world view. You can use this for future reference!

user-c5fb8b 13 August, 2020, 14:16:01

@user-b3403f ha, good idea indeed! 👍

user-7e517e 13 August, 2020, 14:22:28

Will windshield heating in vehicles affect the scene camera attached to the eye-tracking glasses (making the video 'short-sighted')?

user-c5fb8b 13 August, 2020, 14:48:47

Hi @user-7e517e, we have seen your question in the 🕶 invisible channel as well. Are you specifically asking for Pupil Invisible glasses? We'll double check with our team to see if someone can give a well-founded answer to your question or otherwise might run a quick test ourselves!

user-7e517e 14 August, 2020, 10:09:32

@user-c5fb8b, yes I'm specifically asking for Pupil Invisible glasses. Thanks!

user-56be96 14 August, 2020, 10:10:20

@user-c5fb8b Yesterday we decided to record on the PC. Today we got a critical error at Pupil Capture launch: "Instruction at 0x00007FF86E88A916 referenced memory at 0x0000000000113DA40. The memory could not be written." What might be the reason?

user-c5fb8b 14 August, 2020, 10:18:59

@user-56be96 I have not seen this error before. Does it always happen when you start Pupil Capture now? After seeing an error, please send us the logfile. You can find it at your home folder > pupil_capture_settings > capture.log. Please note that this file gets overwritten every time you restart Capture, so you need to send/copy it directly after encountering an error.

Again I want to stress that we don't recommend recording with Capture when using Pupil Mobile. You can certainly play around with it, but for actual data acquisitions we only recommend recording on the phone.

user-56be96 14 August, 2020, 10:19:38

@user-c5fb8b Ok, no we are recording directly, without phone 🙂

user-c5fb8b 14 August, 2020, 10:21:34

Ah, so with the Core headset connected to the PC via USB?

user-56be96 14 August, 2020, 10:26:38

yes

user-7aaf5c 14 August, 2020, 13:18:14

Hello again. I have one more question. Is there a way to define a region of interest (ROI) for my camera?

user-c5fb8b 14 August, 2020, 13:19:08

Hi @user-7aaf5c can you describe in a bit more detail what you want to do?

user-7aaf5c 14 August, 2020, 13:42:16

@user-c5fb8b I want to track the pupil, but my camera does not focus at short distances. So, I want to shrink the region of interest to minimize the errors at a slightly greater distance.

user-c5fb8b 14 August, 2020, 13:47:04

@user-7aaf5c Pupil does indeed have a ROI mechanism to mask the eye windows. However, I'm not sure if it will help you with the pupil being out of focus, as it is only intended to mask out very dark or very bright areas which can throw off our pupil detection algorithm. Are you using a Pupil Labs headset, or are you using your own cameras? But you can give it a try: in the "General Settings" menu in the eye windows, select "mode: ROI". You should then see 4 colored points, one in each corner of the image, which you can drag around to select your ROI. If you select "mode: algorithm" you can see that the pupil detection only runs on that slice of the image. But as I said, I'm not sure if it will help you with focusing problems.

user-196692 14 August, 2020, 19:21:18

Hi team. I'm setting up a Pupil Core (MacOS Catalina 10.15.6, Pupil Capture 2.2.0) and notice that the world camera frame rate drops dramatically (from 60 Hz at 1280x720, into the low teens) when the Pupil Capture world camera window loses focus. No plugins are active. The eye cameras chug along fine at 200 Hz. Doesn't matter what window I give focus to: a Finder window, empty terminal, or even the eye camera display windows. Doesn't seem like it's a load issue, because if I start another process and then click on the world camera window to give it focus back everything does fine. I've tried elevating the nice priority of the Pupil Capture processes, without effect. Anybody know how to prevent this? Or else trick MacOS into thinking Pupil Capture still has focus? Thanks!

user-7aaf5c 15 August, 2020, 21:37:20

I apologize if I missed this while searching; can you explain to me what this red drawing is?

Chat image

user-c5fb8b 17 August, 2020, 06:37:50

@user-7aaf5c the blue circle is the result of the 2D pupil detector, the red ellipse is the result of the 3D pupil detector. The result of the 3D detector (red) seems far off, but as you can see on the outside, the 3D eye model (green circle) is not yet well fit. The green circle should roughly lie on the eyeball for the 3D detector to work. You will need a couple of seconds of slowly looking around your field of view at the start for the 3D model to build up. Afterwards the result of the 3D detector should be roughly identical to the 2D one, if not even better. Please have a look at our getting started section in the docs about pupil detection: https://docs.pupil-labs.com/core/#_3-check-pupil-detection Especially the video will show how the 3D model fits the eyeball after some looking around.

papr 17 August, 2020, 08:18:55

You will need a couple of seconds of ~~slowly~~ looking around your field of view at the start for the 3D model to build up. I would like to add that it is not necessary to do this slowly but can be done with a succession of quick eye movements, too.

user-196692 17 August, 2020, 21:42:26

Hello! I'm working on an experiment that needs real-time surface tracking, and I'd like to apply the most recent surface homography matrix to the freshest gaze data as it's published. So I'm trying to use the img_to_surf_trans homography matrix published by the surface tracker, and apply it to new gaze samples. The norm_pos coordinates published along with the surface data look reasonable, but when I try to apply the img_to_surf_trans matrix to the norm_pos base data that was published as gaze, I don't get the same answer, even for the same samples of base data. My understanding from reading the docs is that I should be able to multiply the homography matrix by the gaze vector in normalized coordinates: [norm_pos[0], norm_pos[1], 1] and get out the norm_pos in surface coordinates. Am I normalizing something wrong? I've tried the full group of transpositions to see if the matrix is transposed or reversed but no dice. Thanks in advance!

papr 17 August, 2020, 21:51:30

@user-196692 It is actually a bit more complicated since we map gaze in undistorted camera space https://github.com/pupil-labs/pupil/blob/bf9c2c5819c6dff1aae1a004cf71ba790d6e01bd/pupil_src/shared_modules/surface_tracker/surface.py#L248-L269
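
In outline, the linked code maps normalized gaze to pixels, undistorts them onto the image plane, and only then applies the homography. A rough sketch of that pipeline (K/D are the scene camera intrinsics from the recording, img_to_surf is the homography for the undistorted image; note that Pupil's wide-angle scene camera uses a fisheye model, for which cv2.fisheye.undistortPoints would apply instead):

```python
import cv2
import numpy as np

def gaze_to_surface(norm_pos, K, D, img_to_surf, width, height):
    """Map one gaze point (Pupil normalized coords, origin bottom-left)
    into surface coordinates; a sketch mirroring Surface.map_to_surf."""
    # Normalized -> pixel coordinates (flip y: image origin is top-left).
    px = np.array([[[norm_pos[0] * width, (1.0 - norm_pos[1]) * height]]],
                  dtype=np.float64)
    # Undistort, but keep the result on the image plane (P=K).
    undist = cv2.undistortPoints(px, K, D, P=K)
    # Apply the homography into surface space.
    return cv2.perspectiveTransform(undist, img_to_surf).reshape(2)
```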

user-196692 17 August, 2020, 23:11:13

@papr thanks! That's really illuminating. If I want fresh samples in surface space in my application code in MATLAB or Python, any opinion on whether it'd be easier to recapitulate the undistortion and homography in my application code, or write a Pupil plugin extending the surface tracker to output each gaze-tracked sample?

user-de8356 18 August, 2020, 08:47:50

Hi there. I represent a cognitive neuroscience lab in Ukraine. We have allocated some funds for an eye-tracking device; however, we planned to buy the previous version of Pupil Core, which cost around 1300 EUR with the academic discount. We have circa 1700-1800 EUR (in Ukrainian currency; it may differ depending on exchange rates). Is there any way of buying the previous version of Pupil Core, or is it possible to get an additional academic discount for the current version, so we can buy it?

user-c5fb8b 18 August, 2020, 11:51:19

Hi @user-de8356, please contact info@pupil-labs.com with this question again and our sales team will take over from there.

user-de8356 18 August, 2020, 11:52:05

@user-c5fb8b Great, thanks. Couldn't find the contact email on the site.

user-ee33ec 19 August, 2020, 04:02:15

Hello, I'm not sure if this is the right place to ask, but I'll try anyway.

user-ee33ec 19 August, 2020, 04:09:47

The cable of eye 0 broke during the experiment (the black one). Could you please give me some advice? I tried to push it back in but it didn't work.

Chat image

papr 19 August, 2020, 06:24:09

@user-ee33ec please send this picture in an email to [email removed] Our colleagues will follow up with you.

user-70a9d1 19 August, 2020, 11:20:20

Hi, the world camera is not showing in Capture. I get the following error in the capture log: "world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied." I am using Windows 10 with a Pupil Labs headset, an Intel RealSense2 world camera, and version 2.2 of Core. If I run the RealSense viewer the camera shows up, so the connection and drivers are working. The eye cameras work fine. The pyrealsense2 plugin loads fine. Any suggestions? It worked before.

user-c5fb8b 19 August, 2020, 11:28:53

Hi @user-70a9d1 we have removed official support for RealSense cameras in v1.22, you can read about the decision in the release notes in the section Removed Built-in RealSense Video Backend: https://github.com/pupil-labs/pupil/releases/tag/v1.22 However you can still use the RealSense cameras, as there is a custom community plugin which ports the previous backend to later Pupil versions: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11

user-70a9d1 19 August, 2020, 11:33:08

Thanks @user-c5fb8b. I did this and had it working. The log file suggests that the plugin loaded fine.

user-c5fb8b 19 August, 2020, 11:33:42

@user-70a9d1 Are you running from source or from bundle?

user-70a9d1 19 August, 2020, 11:33:54

bundle

user-c5fb8b 19 August, 2020, 11:34:27

Can you share the whole log file with us?

user-70a9d1 19 August, 2020, 11:35:00

Sure. Sorry, how do I do that?

user-c5fb8b 19 August, 2020, 11:35:40

Just upload it here, or send an email to data@pupil-labs.com The file is in your home folder > pupil_capture_settings > capture.log

user-70a9d1 19 August, 2020, 11:38:51

Sent you an email, thanks.

user-c5fb8b 19 August, 2020, 12:21:23

@user-70a9d1 did you enable the Realsense2 Source plugin in the plugin manager?

user-70a9d1 19 August, 2020, 13:37:12

@user-c5fb8b Hah. That was it. Must have gotten turned off somehow. Thanks for your patience!

papr 19 August, 2020, 13:37:41

@user-70a9d1 After updating the software, the session settings are reset. 🙂

user-908b50 19 August, 2020, 20:09:13

For some reason, my Player crashes after I drag and drop the recording folder into Player (or even if I do it through the path-to-recording way from the terminal). I am not sure what to do. The data was collected using v1.18. I am analyzing the data using v2.x (the latest one). I am using Ubuntu 20.04. Installation was successful (finally, after many hiccups). Edit: It says "Updating recording format. This may take a while!" for some time before the crash.

user-908b50 19 August, 2020, 20:58:59

Edit: It worked for another recording. Loading the recording took quite a while. Exporting 16946 frames also took 27 minutes. Any idea how to speed up processing?

papr 19 August, 2020, 21:35:10

@user-908b50 Hi, please provide the ~/pupil_player_settings/player.log file after attempting to open the problematic recording. 16946 frames are a lot; it is expected that it will take some time, as the video encoding requires a lot of CPU.

user-00fa16 20 August, 2020, 02:06:50

Why can't the 'green circle' of the eye camera display fully?

user-00fa16 20 August, 2020, 02:34:35

Could anyone tell me about ioHub with Pupil Labs? For example, the configuration files (in YAML format)?

user-908b50 20 August, 2020, 05:07:25

@papr The export ended up working through a process of trial and error. I am getting the errors below for two other exports. I can still see the exported files. I looked at gaze_positions.csv for both of these exports and found that some of the variables weren't calculated for quite a few timeframes. Not sure if that is because of the error below?

`2020-08-19 21:19:03,428 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "/home/fiza/pupil/pupil_src/launchables/player.py", line 672, in player
    p.recent_events(events)
  File "/home/fiza/pupil/pupil_src/shared_modules/task_manager.py", line 84, in recent_events
    self._manage_current_tasks()
  File "/home/fiza/pupil/pupil_src/shared_modules/task_manager.py", line 88, in _manage_current_tasks
    self._update_running_tasks()
  File "/home/fiza/pupil/pupil_src/shared_modules/task_manager.py", line 112, in _update_running_tasks
    result = managed_task.most_recent_result_or_none()
  File "/home/fiza/pupil/pupil_src/shared_modules/task_manager.py", line 258, in most_recent_result_or_none
    for result in self.task_proxy.fetch():
  File "/home/fiza/pupil/pupil_src/shared_modules/background_helper.py", line 121, in fetch
    raise datum
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 38490 and the array at index 1 has size 3965

2020-08-19 21:19:03,428 - player - [INFO] launchables.player: Process shutting down.

/background_helper.py", line 121, in fetch
    raise datum
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 39114 and the array at index 1 has size 2618

2020-08-19 21:19:03,428 - player - [INFO] launchables.player: Process shutting down.`

user-908b50 20 August, 2020, 05:22:34

This is the error pertaining to my original post. It's hit and miss with recordings; some work and some don't.

2020-08-19 22:19:13,373 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2020-08-19 22:19:13,594 - player - [INFO] numexpr.utils: Note: NumExpr detected 24 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2020-08-19 22:19:13,595 - player - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
2020-08-19 22:19:13,708 - player - [ERROR] launchables.player: InvalidRecordingException: Target at path does not exist: /home/fiza/pupil/pupil_src/mnt/*****/**/****/***/***/*/*
2020-08-19 22:19:58,012 - player - [INFO] launchables.player: Starting new session with '/mnt/*****/**/****/***/***/*/*'
2020-08-19 22:19:58,035 - player - [INFO] pupil_recording.update.new_style: Checking for world-less recording...

user-908b50 20 August, 2020, 05:35:38

@user-908b50 hey, is it possible that the recordings that do not work have any type of brackets in their path? Or non-ascii characters? @papr not really!

papr 20 August, 2020, 05:36:15

Sorry, that issue was definitely not related to brackets. This is why I deleted my message.

user-908b50 20 August, 2020, 05:36:50

I think it has something to do with the OS and the actual program. This is because even after I load it and export it, the player actually crashes after a while. I am not able to play the recording.

papr 20 August, 2020, 05:37:44

Would you mind sharing an example recording with data@pupil-labs.com so that we can check that it is not the recording?

papr 20 August, 2020, 05:38:49

@user-00fa16 when the green circle is not around your eyeball, it means that your eye model is not fit well. Roll your eyes before calibrating to fit a good eye model.

user-908b50 20 August, 2020, 05:38:58

Uhh sure, will do! More than one recording is doing that. I can share two of them.

papr 20 August, 2020, 05:39:54

@user-00fa16 would you mind clarifying what you mean by ioHub or specify which config files you are talking about? Ideally post a link to them.

papr 20 August, 2020, 05:41:42

@user-908b50 that would be great! I see that you are running from source. Have you made sure to wait for the actual crash before checking the logs? For long recordings, it is normal that it can take quite a while to open them the first time.

user-908b50 20 August, 2020, 05:42:47

@papr Yes to both! I also click wait once/twice before giving up and force quitting. I am trying again (final try) and then I will email you.

papr 20 August, 2020, 05:44:39

If there is no python traceback after the crash, then this indicates a low-level crash, likely in pyav

papr 20 August, 2020, 05:45:09

In this case I would try to reinstall ffmpeg and pyav and test if this improves the situation

user-00fa16 20 August, 2020, 05:46:26

I want to link Pupil Labs with PsychoPy. Someone suggested that I should use the API called ioHub (https://www.psychopy.org/api/iohub.html) and write the configuration files, a text in YAML format, to describe information about my eye tracker for ioHub to read.

papr 20 August, 2020, 05:51:22

@user-00fa16 thanks for the clarification. Pupil Labs does not offer official support for ioHub yet. Actually, I was not aware of this psychopy feature. We will have a look at it and evaluate if it is suitable for Pupil.

user-00fa16 20 August, 2020, 06:15:08

So if I want to link with my PsychoPy script to realize self-adaptation, the 'network API' is a verified way?

papr 20 August, 2020, 06:23:24

@user-00fa16 Correct
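
A minimal sketch of driving Capture from a script over the network API (Pupil Remote listens on TCP port 50020 by default; the session name is illustrative):

```python
import zmq

# Connect to Pupil Remote in Capture.
ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

remote.send_string("R my_session")  # start a recording with a session name
print(remote.recv_string())
# ... run a trial in your experiment script here ...
remote.send_string("r")             # stop the recording
print(remote.recv_string())
```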

user-430fc1 21 August, 2020, 09:01:44

This is only the case if you use the same resolution for both. I would highly recommend doing that in your case if possible.

Not at the moment. Let us gather some measurements to give you a more concrete answer. Once we have that, we can share the measurement script for you to replicate the measurement.

Unfortunately, I do not have a concrete number for that either. Are you analysing the 2d or 3d pupil diameter? @papr Just wondering if you have had the chance to look at the timing issue in further detail? I recently migrated to Mac, and am quite keen to get some further information about timing / camera latencies.

papr 21 August, 2020, 09:10:37

@user-430fc1 Unfortunately, we did not have time to look into this topic in more detail yet as we have been concentrating on fixing a series of user-facing issues in the last couple of weeks.

user-430fc1 21 August, 2020, 09:17:06

@papr OK, no problem! It's just that after moving to Mac I noticed that some of my pupil-latency measures had shifted and that there was some variability / physiological implausibility (e.g. pupil constriction onset of <150 ms following light onset). This is despite the fact that the world-cam timestamps are associated with the first frame where an increase in luminance becomes apparent in Pupil Player. So I'm just trying to figure out what's going on, and I remembered what you said previously about hardware vs. software latencies and adding fixed amounts to the timestamps.

papr 21 August, 2020, 09:18:55

@user-430fc1 Correct, the macOS version uses hardware timestamps and the shift in your measurements is very likely to be related to this change

user-430fc1 21 August, 2020, 09:19:42

@papr Does the Windows version use different timestamps?

papr 21 August, 2020, 09:20:16

@user-430fc1 Windows uses software timestamps

user-430fc1 21 August, 2020, 09:23:30

@papr OK. So the method I have for timestamping lights, while useful for general trial extraction, is not going to be suitable for getting accurate reports on time-critical measures?

papr 21 August, 2020, 09:25:33

@user-430fc1 Could you remind me at which frame rate you are running the world cam?

papr 21 August, 2020, 09:26:44

Ok, so if I understood correctly, you only depend on the increased luminance in the scene video, correct?

user-430fc1 21 August, 2020, 09:27:37

@papr Eye and World running at 120 Hz and the lowest resolution, with exposure mode set to manual.

papr 21 August, 2020, 09:28:34

Did you try sending annotations that are timestamped with the moment when you turn on the light? I am not sure if we talked about this solution before.

user-430fc1 21 August, 2020, 09:30:14

@papr Yes - an increase in luminance. Before turning the light on, I started grabbing frames and comparing the mean value to the previous frame. When this exceeds a threshold, I immediately send an annotation with the timestamp from that frame.
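
For context, sending such a timestamped annotation over the network API looks roughly like this (adapted from the pupil-helpers examples; it requires the Annotation plugin to be enabled in Capture, and in the described setup the timestamp field would carry the detected frame's timestamp rather than "now"):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

remote.send_string("t")                  # current Pupil time, for illustration
pupil_time = float(remote.recv_string())
remote.send_string("PUB_PORT")           # port where Capture accepts published data
pub_port = remote.recv_string()

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(0.1)  # give the subscription a moment to register

annotation = {
    "topic": "annotation.light_onset",
    "label": "light_onset",
    "timestamp": pupil_time,  # replace with the detected frame's timestamp
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```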

papr 21 August, 2020, 09:31:59

@user-430fc1 Do you control the light manually or via a script? If the light was controlled by a script, the script could send the exact timestamp of turning the light on to Capture. This requires the script to be synchronized in time to Capture.

user-430fc1 21 August, 2020, 09:35:39

The light is controlled programmatically, but via http requests. So it has its own variability. It can take 100-200 ms before the light comes on after sending a command. So the script time immediately before or after sending that command is not going to be an accurate timestamp.

papr 21 August, 2020, 09:38:21

@user-430fc1 And you do not have access to the http server processing the request?

user-430fc1 21 August, 2020, 09:39:19

So that's why I went down the route of using the world cam for detection. I also tried an Arduino-photoresistor circuit controlled with PyFirmata, but that wasn't as good. The 'ground truth' being how close the annotation appears to the to the light onset in Pupil Player

user-430fc1 21 August, 2020, 09:40:26

@user-430fc1 And you do not have access to the http server processing the request? @papr It's a BeagleBone running Linux in a branded shell. I don't think I can mess arround with that

papr 21 August, 2020, 09:42:42

Mmh, this is unfortunate to hear.

physiological implausibility (e.g. pupil constriction onset of <150 ms following light onset) What are more plausible values? Do you have a reference for that?

papr 21 August, 2020, 09:43:33

Also, is the light source visible in the eye cameras as well? Maybe, you could use that to correct potential variability in the camera clocks.

user-430fc1 21 August, 2020, 09:44:27

@papr Should be about 230 ms, without much variability in healthy subjects.

Winston, M., Zhou, A., Rand, C. M., Dunne, E. C., Warner, J. J., Volpe, L. J., Pigneri, B. A., Simon, D., Bielawiec, T., Gordon, S. C., Vitez, S. F., Charnay, A., Joza, S., Kelly, K., Panicker, C., Rizvydeen, S., Niewijk, G., Coleman, C., Scher, B. J., … Weese-Mayer, D. E. (2019). Pupillometry measures of autonomic nervous system regulation with advancing age in a healthy pediatric cohort. Clinical Autonomic Research. https://doi.org/10.1007/s10286-019-00639-3

papr 21 August, 2020, 09:44:52

Thanks for the reference! 👍

papr 21 August, 2020, 09:45:22

Were you able to reproduce these values with timestamps on windows?

user-430fc1 21 August, 2020, 09:49:19

The LAT column in Table 1 is most relevant. On Windows, the latencies were larger, if anything, but still variable. I calculate latency based on the negative acceleration peak following the timestamp.
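
One way to sketch that latency computation, assuming a confidence-filtered diameter trace (the function name and smoothing width are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def constriction_latency(t, diameter, onset_time, sigma=3):
    """Latency (s) from light onset to the peak negative acceleration
    of the pupil trace. t: timestamps in seconds, diameter: mm or px."""
    d = gaussian_filter1d(diameter, sigma)  # smooth before differentiating
    vel = np.gradient(d, t)                 # first derivative
    acc = np.gradient(vel, t)               # second derivative
    post = t >= onset_time                  # search window after onset
    idx = np.argmin(acc[post])              # most negative acceleration
    return t[post][idx] - onset_time
```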

user-430fc1 21 August, 2020, 09:56:02

Also, is the light source visible in the eye cameras as well? Maybe, you could use that to correct potential variability in the camera clocks. @papr I have not noticed this or thought to check because the light we are using is not IR

user-7daa32 21 August, 2020, 18:21:04

Hello everyone

I am trying to look at objects off the screen, mostly around the lab. Which type of calibration should I use? Are the AprilTags mandatory for calibration in order to look at objects outside the screen? What type of calibration measure is appropriate for this?

  1. Please, do you have an idea how I can measure the distance between specific objects and also know the time spent looking at an individual object? Thanks

user-7daa32 21 August, 2020, 18:22:01

I can see the X and Y coordinates; do we use these to measure distance, and if yes, how?

user-7aaf5c 21 August, 2020, 21:46:08

Hello again. Currently I am using a camera called the A4 Tech Full HD 1080P PC Camera for pupil tracking. I obtained satisfactory results with this camera, but there are some focusing problems with it. Now, I want to use another camera, the PGR Chameleon CMLN-13S2C, for pupil tracking. Is this possible? I also have another question: is there a way to use a video file as input to the pupil tracker, without using any camera, for just pupil tracking?

user-7aaf5c 23 August, 2020, 15:41:22

I have one more question. I am getting this error and have installed all the DLLs. What do I need to do?

File "C:\work\pupil\pupil_src\launchables\world.py", line 133, in world
    from uvc import get_time_monotonic
ImportError: DLL load failed: The specified module could not be found.

user-0e8b56 23 August, 2020, 18:21:07

Hi guys, I'm looking for a 3D Mesh model of the glasses so I can mount a Realsense depth sensor over the glasses. Does anybody know where I can find a mesh file?

papr 23 August, 2020, 18:25:57

@user-0e8b56 Check out these. https://github.com/pupil-labs/pupil-geometry

user-0e8b56 23 August, 2020, 18:44:57

Thank you! This is really helpful!! Would there also happen to be an STL file of the frame? If not I think I can just work off this

papr 23 August, 2020, 18:45:27

@user-0e8b56 No, I do not think that we provide that publicly.

user-0e8b56 23 August, 2020, 19:03:25

No problem, thanks for the files

user-86bf3d 23 August, 2020, 19:45:27

@user-7aaf5c I have the same problem. I would be glad if you help.

user-c5fb8b 24 August, 2020, 06:27:20

@user-7daa32 if you want to measure the time a participant looks at each object, you will have to use the surface tracking with apriltags. What type of objects are we talking about? Are they stationary? You will have to place apriltags around each object and define a rectangular surface around the object in Pupil with the Surface Tracker plugin. Pupil will give you the information if the gaze is in any of the rectangular surfaces for every frame. This can be used to calculate the time looking at an object.

For calibration when not looking at a screen you should use the Single Marker Calibration Choreography with "Marker display mode: Physical". For this you have to print out a calibration marker: https://docs.pupil-labs.com/assets/img/v0.4_calib_marker_02.4b9f83a6.jpg Then place the marker at a distance to the participant which resembles the distance of the objects you want to track. After starting calibration, participants should keep their gaze focused on the marker and move their head around, e.g. in a spiral pattern, to sample different gaze directions.
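
Once exported, the per-object look time can then be estimated from the Surface Tracker export, for example like this (a sketch; the file name follows Player's export layout, and the 0.1 s gap cap is a heuristic to ignore off-surface stretches):

```python
import pandas as pd

# One export file per surface: gaze_positions_on_surface_<name>.csv
df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Object1.csv")

# Keep confident gaze samples that actually fall on the surface.
on = df[df["on_surf"] & (df["confidence"] > 0.8)]

# Sum the gaps between consecutive on-surface samples.
ts = on["gaze_timestamp"].to_numpy()
gaps = ts[1:] - ts[:-1]
dwell = gaps[gaps < 0.1].sum()
print(f"time on surface: {dwell:.2f} s")
```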

user-c5fb8b 24 August, 2020, 06:35:36

@user-7aaf5c regarding the camera: for your camera to be supported by our video backend, it needs to fulfill the following criteria (chapter references are to the UVC specification: http://www.cajunbot.com/wiki/images/8/85/USB_Video_Class_1.1.pdf):
1) UVC compatible
2) Support Video Interface Class Code 0x0E CC_VIDEO (see A.1)
3) Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2)
4) Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (see A.6)
5) Support the UVC_FRAME_FORMAT_COMPRESSED frame format
For other cameras, you could theoretically write your own video backend, but that might be very time consuming.

Regarding your DLL issue: we had the same error message once in the past, where a user did not set up Pupil's dependencies correctly when running from source. I assume you are running from source on Windows? Which version of Pupil are you running? The latest master branch? It might be that you haven't correctly set up pyuvc; did you download and install the wheel as instructed in the documentation? I recommend you try to clean up everything and start from the beginning by following the setup guide closely: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md

user-3eccb3 29 July, 2022, 13:29:19

Is there any way to make uncompressed formats like .bmp , be supported by pyuvc ?

papr 24 August, 2020, 08:42:05

@user-7aaf5c @user-86bf3d More specifically, we have also noticed issues when installing pyuvc into Python virtual environments on Windows.

user-86bf3d 24 August, 2020, 11:56:16

@user-c5fb8b Yes, I am using Windows 10 and the latest version of Pupil's source code. I downloaded and installed the pyuvc wheel as given in the document and installed Microsoft Visual C++ 2010. The problem still continues.

user-86bf3d 24 August, 2020, 11:57:26

@papr so do you suggest that I install everything outside the virtual environment?

papr 24 August, 2020, 11:57:59

@user-86bf3d yes

user-86bf3d 24 August, 2020, 11:59:15

@papr Okay, I will try it. Thanks.

user-7daa32 24 August, 2020, 12:15:07

@user-c5fb8b Yes, stationary objects. Thanks. Can I use the X and Y coordinates to draw a line graph? Maybe I was talking about the scanpath. I want to know the distances between individual objects in Player. I remember we asked about scanpath analysis.

user-7daa32 24 August, 2020, 12:15:39

Something like this

Chat image

user-c5fb8b 24 August, 2020, 12:45:01

@user-7daa32 given the limited information I have about your use-case, I think Pupil should provide all the data that you will need to produce your visualization. But it won't create the visualization for you, you will have to do this yourself with some other tool.

user-6e9a97 24 August, 2020, 16:27:38

Hi, an eye camera (eye0) is no longer detected by Pupil Capture, and its window is just gray. If I unplug and replug the camera cable, Windows 10 says that the "new USB device" cannot be recognized. Any suggestion? Thanks.

user-6e9a97 24 August, 2020, 16:28:04

I'm using the Pupil Core system with the latest software.

papr 24 August, 2020, 16:29:39

@user-6e9a97 Please contact info@pupil-labs.com in this regard.

user-6e9a97 24 August, 2020, 16:29:49

ok thanks

user-cd5b8b 24 August, 2020, 19:22:45

@papr Hi, is there any chance for me to add AOIs to existing recorded trials? I didn't have any markers when I recorded.

user-9ddeda 24 August, 2020, 19:34:07

Hi everyone, I am wondering if anyone knows what the model number of the eye camera is? Or if anyone knows of any good alternatives with comparable physical size and frame rates?

user-7daa32 24 August, 2020, 20:07:03

I noticed a cut while playing the video in Pupil Player, although I was just practising using the eye tracker. Do you know the reason for this?

user-7daa32 24 August, 2020, 20:08:31

Something like this

Chat image

wrp 25 August, 2020, 02:50:25

@user-cd5b8b if there were no markers included in your scene, then there is no option to define AOIs post-hoc in Pupil Player. Did I understand your question correctly?

user-c5fb8b 25 August, 2020, 07:19:07

Hi @user-9ddeda, we have some do-it-yourself instructions on our website in case you want to build your own headset. There are references for the specific cameras to buy: https://docs.pupil-labs.com/core/diy/#getting-all-the-parts

user-c5fb8b 25 August, 2020, 07:19:44

@user-7daa32 is the grey gap right after the calibration period?

papr 25 August, 2020, 07:31:14

@user-7daa32 Generally, this means that no frames were recorded during that period. This can have different reasons, e.g. a device disconnect, or the software being very busy computing something, such as the calibration, as @user-c5fb8b suggested.

user-b292f7 25 August, 2020, 10:00:14

Hello, I have a Pupil eye tracker, and I'm using Pupil Mobile and Pupil Player to extract the data.

I would like to know if it is possible to get information about the coordinates of the head/camera in the world.

Best, Roni

papr 25 August, 2020, 10:02:55

@user-b292f7 yes, with a bit of an extra setup, this is possible: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking

Edit: Thanks @user-c5fb8b I made my link more specific

user-c5fb8b 25 August, 2020, 10:03:55

@user-b292f7 specifically you can do that with the Head Pose Tracker plugin. Here's a demo video, showcasing how to use it with Pupil Player: https://www.youtube.com/watch?v=9x9h98tywFI

user-b292f7 25 August, 2020, 10:55:11

Thank you! Can I mark the area myself now that I already have the videos? I cannot do the detection process.

user-c5fb8b 25 August, 2020, 11:09:14

@user-b292f7 no, unfortunately our Head Pose Tracker plugin will only work if you place the black-and-white apriltag markers in your environment. There might be other ways to estimate the camera movement in the world from a plain video, but this is not a feature of Pupil Player.

user-b292f7 25 August, 2020, 11:23:12

Oh... can I send you an email with a picture of my data and my problem? Maybe you can help me with that...

user-c5fb8b 25 August, 2020, 11:24:55

@user-b292f7 you can also post it here, but if you prefer an email, please send it to info@pupil-labs.com

user-b292f7 25 August, 2020, 12:01:25

👍

user-9ddeda 25 August, 2020, 14:36:35

@user-c5fb8b Thanks for the suggestion of looking at the DIY page. I have already read it and looked into the suggested HD-6000 eye camera; however, it does not seem very suitable for the use case. From what I have read, it has a max frame rate of 30 fps @ 1280 x 720, compared to the 200 fps at 192 x 192 of the Pupil eye camera. I am looking for something with closer specifications, specifically frame rate. Do you know of any other cameras that have been used?

user-7daa32 25 August, 2020, 14:42:27

@user-7daa32 is the grey gap right after the calibration period? @user-c5fb8b I calibrated before recording

user-7daa32 25 August, 2020, 14:43:57

@user-7daa32 Generally, this means that no frames were recorded during that period. This can have different reasons, e.g. a device disconnect, or the software being very busy computing something, such as the calibration, as @user-c5fb8b suggested. @papr There must be a reason, which I am trying to identify to avoid loss of data during the actual data collection. I will check next time that I have a good hardware connection.

user-c5fb8b 25 August, 2020, 14:50:49

@user-7daa32 if you are using a Pupil Core headset connected via USB, you should normally not get any data loss during recording. The only known exception to us is if you are also recording the calibration period; then you will commonly see a few seconds of grey while the calibration is computed. Can you describe your specific hardware setup? PC specs, which Pupil headset you are using and how it is connected? Also please run a few more tests to see if this happens regularly. This might indicate a broken connection in your headset, in which case we would refer you to our hardware team.

user-7daa32 25 August, 2020, 14:55:24

@user-7daa32 if you are using a Pupil Core headset connected via USB, you should normally not get any data loss during recording. The only known exception to us is if you are also recording the calibration period; then you will commonly see a few seconds of grey while the calibration is computed. Can you describe your specific hardware setup? PC specs, which Pupil headset you are using and how it is connected? Also please run a few more tests to see if this happens regularly. This might indicate a broken connection in your headset, in which case we would refer you to our hardware team. @user-c5fb8b I noticed it for the first time when not recording and calibrating concurrently. It is probably due to the USB not being plugged in well, or the hardware not being fully intact. I will certainly run another test. It is not a usual issue, though.

user-c5fb8b 25 August, 2020, 14:59:33

@user-9ddeda the HD-6000 was indeed used for earlier prototypes at pupil labs. Since then our headsets contain integrated CMOS sensors, so I unfortunately cannot recommend any "regular" camera with a USB connection here. Maybe you can find someone in the community with more experience in building more performant DIY solutions.

user-9ddeda 25 August, 2020, 15:07:14

@user-c5fb8b Ok, thanks for the help

user-6e9a97 26 August, 2020, 13:11:49

Hi, on our Pupil Core (binocular) one of the two cameras is no longer working, but we are due to start an experiment in the next days. I'm wondering whether ONE camera could be enough. Our task is a screen-based experiment, with the participants sitting in front of a PC monitor where all the stimuli will appear. We are planning to use a chinrest as well. Would a monocular 2D (or 3D? but the head is stabilised) recording be enough to get where the participant is looking? Many thanks.

user-c5fb8b 26 August, 2020, 13:34:53

Hi @user-6e9a97, running monocularly should be fine in both 2D and 3D mode. We used to sell purely monocular headsets. There might be a very minor decrease in accuracy compared to a binocular setup, but you should be absolutely fine to work with it.

user-c5fb8b 26 August, 2020, 13:36:49

Also, there's nothing you have to do to run in monocular mode; Pupil will recognize this automatically and switch to monocular gaze mapping.

user-6e9a97 26 August, 2020, 14:21:11

Hi @user-6e9a97, running monocularly should be fine in both 2D and 3D mode. We used to sell purely monocular headsets. There might be a very minor decrease in accuracy compared to a binocular setup, but you should be absolutely fine to work with it. @user-c5fb8b Hi, thanks! So, if the participant's head is stabilised with a chinrest, do you suggest using 2D or 3D? Cheers.

user-c5fb8b 26 August, 2020, 14:23:52

@user-6e9a97 as you already suggested, a chinrest is a good way to minimize slippage, in which case the 2D pipeline can produce more accurate results. Please also have a look at our best practices section, where all the other trade-offs between 2D and 3D are explained as well: https://docs.pupil-labs.com/core/best-practices/

user-6e9a97 26 August, 2020, 14:25:16

@user-6e9a97 as you already suggested, a chinrest is a good way to minimize slippage, in which case the 2D pipeline can produce more accurate results. Please also have a look at our best practices section, where all the other trade-offs between 2D and 3D are explained as well: https://docs.pupil-labs.com/core/best-practices/ @user-c5fb8b Many thanks for helping me! Best

user-bbee68 26 August, 2020, 16:52:33

Anyone know how to increase the accuracy of the Pupil Core gaze?

papr 26 August, 2020, 19:21:55

@user-bbee68 what accuracy are you looking for? You can send an example recording that shows the best accuracy that you were able to achieve and we can comment on how to improve based on that.

user-7aaf5c 26 August, 2020, 21:37:50

Is there a way to use a video file as input to Pupil, without using any camera, for just pupil tracking?

user-6b3ffb 27 August, 2020, 12:20:15

Hello guys. I have an issue when trying to connect the Intel RealSense D435i as the world/scene camera. I'm using version 1.21-5 of Pupil Core. The camera is recognised in the drop-down list of the device manager, but when I select it I get an error:

[ERROR] video_capture.realsense2_backend: get_frames: Timeout! world - [WARNING] video_capture.realsense2_backend: Realsense failed to provide frames.Frame didn't arrived within 500

Do you know if I could increase the timeout? Also, when I disconnect the camera and restart the application, it searches for the Intel camera and does not connect to the default one. When I try to select another camera, the window closes. How can I change the camera back to the default? My PC is Windows 10.

user-fd5a69 27 August, 2020, 20:55:09

Hello guys, I am planning to run a study using one Pupil Core device and one Pupil Invisible device to record simultaneously. Both are recorded on the phones. I wonder whether there is any way to make use of the export_info file to synchronize these two recordings? Or please let me know if you have any suggestions on how to synchronize the two recordings. Thank you!

user-6b3ffb 28 August, 2020, 08:24:25

Hello guys. I have an issue when trying to connect the Intel RealSense D435i as the world/scene camera. I'm using version 1.21-5 of Pupil Core. The camera is recognised in the drop-down list of the device manager, but when I select it I get an error:

[ERROR] video_capture.realsense2_backend: get_frames: Timeout! world - [WARNING] video_capture.realsense2_backend: Realsense failed to provide frames.Frame didn't arrived within 500

Do you know if I could increase the timeout? Also, when I disconnect the camera and restart the application, it searches for the Intel camera and does not connect to the default one. When I try to select another camera, the window closes. How can I change the camera back to the default? My PC is Windows 10. @user-6b3ffb To self-answer my question: I resolved the issue by running v1.23 from source and putting realsense2_backend.py into the plugins folder.

papr 28 August, 2020, 08:44:25

@user-7aaf5c Yes, it is possible if you set up your video files as part of a Pupil recording. See details about it here: https://docs.pupil-labs.com/developer/core/recording-format/ Afterward, you can open the recording in Pupil Player.

papr 28 August, 2020, 08:48:58

@user-fd5a69 Pupil Invisible records all its timestamps in Unix epoch. Pupil Mobile records the recording start time point in Unix epoch and in Pupil epoch. This allows you to calculate the difference between the epochs and manually synchronize the recordings by applying the time difference to the timestamps of one of the recordings.
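To illustrate, here is a minimal sketch of that synchronization. It assumes the Pupil Mobile recording has been opened once in Pupil Player so that info.player.json exists, and that the key names start_time_system_s (Unix epoch) and start_time_synced_s (Pupil epoch) match your recording-format version; verify both against your files.

```python
import json

import numpy as np

# Read both clocks from the upgraded Pupil Mobile recording's meta file.
with open("mobile_recording/info.player.json") as f:
    info = json.load(f)

# Offset that maps the Pupil epoch onto the Unix epoch.
offset = info["start_time_system_s"] - info["start_time_synced_s"]

# Apply the offset to any timestamp array from the Core/Mobile recording,
# e.g. the world timestamps, so they share the Unix clock that Pupil
# Invisible already records in.
world_ts = np.load("mobile_recording/world_timestamps.npy")
np.save("mobile_recording/world_timestamps_unix.npy", world_ts + offset)
```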

user-1ce4e3 28 August, 2020, 18:40:06

Any suggestions for dealing with a consistent gaze offset? I tried all 3 types of calibration; the reported accuracy is 1.00, yet visibly the displayed gaze is ~2 inches off from the actual gaze.

user-7aaf5c 28 August, 2020, 23:31:20

@papr thank you for your answer, but I don't understand exactly what to do. I just have an mp4 file. Is it possible to create the files given in the link, like the meta, pldata, and timestamp files?

user-86bf3d 30 August, 2020, 00:50:14

Hello everyone, I recorded videos with Pupil Capture and exported them with Pupil Player, obtaining a pupil_positions.csv file. I want to get the pupil radius as a function of time in milliseconds, but I don't understand what "pupil timestamp" is. Is there a way to convert it to milliseconds?

papr 31 August, 2020, 09:05:21

@user-196692 Hi, we tried to reproduce this issue on two separate Macs but were not able to. Instead, we noticed fps drops when the windows were on a "Space" that was not visible.

papr 31 August, 2020, 09:08:41

@user-7aaf5c In this case, you will have to create a dummy info.player.json file and generate a timestamp file that fulfills the recording format requirements. This represents a minimal recording that can be opened in Player. Afterward, you can run the offline pupil detection in Player to extract pupil data.
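A rough sketch of such a minimal recording is shown below. It assumes the mp4 should be treated as an eye video (renamed to eye0.mp4) for post-hoc pupil detection; the folder name and the meta values are placeholders, and the exact set of required keys is an assumption here; the authoritative list is in the recording-format documentation.

```python
import json
import uuid

import av  # PyAV, used here only to count frames and read the fps
import numpy as np

FOLDER = "my_recording"       # hypothetical recording folder
VIDEO = f"{FOLDER}/eye0.mp4"  # your mp4, renamed to the eye-video scheme

# Read frame count and fps so we can synthesize evenly spaced timestamps.
container = av.open(VIDEO)
stream = container.streams.video[0]
fps = float(stream.average_rate)
n_frames = stream.frames  # may be 0 for some containers; check your file
container.close()

# Synthetic timestamps starting at 0 (the Pupil epoch is arbitrary anyway).
timestamps = np.arange(n_frames) / fps
np.save(f"{FOLDER}/eye0_timestamps.npy", timestamps)

# Dummy meta file. The required keys are listed in the recording-format
# docs; the values below are placeholders, not meaningful metadata.
info = {
    "duration_s": float(timestamps[-1]) if n_frames else 0.0,
    "meta_version": "2.3",
    "min_player_version": "2.0",
    "recording_name": "my_recording",
    "recording_software_name": "Pupil Capture",
    "recording_software_version": "2.0",
    "recording_uuid": str(uuid.uuid4()),
    "start_time_synced_s": 0.0,
    "start_time_system_s": 0.0,
    "system_info": "unknown",
}
with open(f"{FOLDER}/info.player.json", "w") as f:
    json.dump(info, f, indent=4)
```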

papr 31 August, 2020, 09:09:18

@user-86bf3d Read more about timing here: https://docs.pupil-labs.com/core/terminology/#timing
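Concretely for the milliseconds question: pupil timestamps are seconds on the Pupil clock, so subtracting the recording start (start_time_synced_s in info.player.json, assuming a recent recording format) and multiplying by 1000 gives milliseconds since recording start. A small sketch:

```python
import csv
import json

# Recording start on the Pupil clock (key name assumed from recent formats).
with open("recording/info.player.json") as f:
    t0 = json.load(f)["start_time_synced_s"]

with open("recording/exports/000/pupil_positions.csv", newline="") as f:
    for row in csv.DictReader(f):
        ms = (float(row["pupil_timestamp"]) - t0) * 1000.0
        # `diameter` is in image pixels; `diameter_3d` (if present) is in mm.
        print(f"{ms:.1f} ms -> diameter {row['diameter']} px")
```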

papr 31 August, 2020, 10:29:54

@user-1ce4e3 If you are working with a Pupil Capture recording, you can use this plugin to offset the gaze manually. https://gist.github.com/pfaion/7b83b0daa31edc4b678610bb3213c9be

Alternatively, you can use the built-in offset correction of the post-hoc calibration: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
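For a rough idea of what such a constant offset correction does, here is a sketch that shifts exported gaze data; it is not the linked plugin, and the DX/DY values are arbitrary examples in normalized scene-camera coordinates.

```python
import csv

DX, DY = 0.02, 0.03  # example offsets; tune against your own recording

with open("exports/000/gaze_positions.csv", newline="") as f_in, \
        open("gaze_positions_offset.csv", "w", newline="") as f_out:
    reader = csv.DictReader(f_in)
    writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Shift every gaze point by a fixed amount in normalized coords.
        row["norm_pos_x"] = str(float(row["norm_pos_x"]) + DX)
        row["norm_pos_y"] = str(float(row["norm_pos_y"]) + DY)
        writer.writerow(row)
```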

user-196692 31 August, 2020, 15:51:48

@papr Thanks for checking it out. I'm on macOS Catalina 10.15.6 and haven't tried different "Spaces". Interestingly, even giving focus to one of the pupil eye-camera windows will cause the world camera to drop frames. I've got a workaround with a script that gives focus back to the world-camera window if it loses it, but I wonder if other people are going to have this issue once they upgrade to Catalina?

papr 31 August, 2020, 15:53:34

@user-196692 As mentioned above, we were not able to reproduce on two different Macs running Catalina and we have also not received any other reports in this regard. Are you running any other special software on your mac? e.g. a window manager, etc. Also, what Mac are you using?

user-196692 31 August, 2020, 16:00:00

@papr, System info posted below. I discovered this while working on Pupil integration with MATLAB R2020a, but it happens without MATLAB running too. Otherwise no special software or window manager running. As I'm writing this I'm realizing that I've only ever tried this while connected to a 2nd monitor, and I wonder if that's a factor. I'll be in lab tomorrow and will see if that changes the behavior and report back.

Chat image

papr 31 August, 2020, 18:55:25

@user-196692 I tested it with 2 monitors. It did not make a difference.

End of August archive