core


user-9211a2 01 September, 2020, 01:54:45

Hi, I'm a total newbie with eye trackers. I have tried it and downloaded some data, but I don't know the unit and coordinate axes of gaze point 3d. Is the axis aligned with the world camera or with the normal of the face plane? Do you have any guide to the coordinate system?

papr 01 September, 2020, 07:02:27

@user-9211a2 Hi, we have received your email as well. We will follow up via email.

user-9211a2 01 September, 2020, 15:54:20

@papr Thank you so much! I will check it out right away.

user-196692 01 September, 2020, 19:24:37

@papr Yeah, I tested it with only one monitor after a system restart, no other applications running, and no plugins running, with the same behavior. I also tried upgrading from Pupil Capture 2.2.0 to 2.3.0 with the same behavior. Just to try to convince you I'm not crazy, here's a video of the screen showing frame drops in the world camera when giving focus to either eye-camera window or the Finder.

papr 01 September, 2020, 19:27:05

@user-196692 Don't worry, we take such feedback seriously. I apologize if my previous responses conveyed the opposite! I just wanted to let you know that we unfortunately were not able to reproduce this issue, which in turn makes it very difficult for us to debug and fix.

user-196692 01 September, 2020, 19:29:34

@papr No sweat! I realize it seems to be unreproducible, and - yeah - I've got no idea how to debug it if you guys can't reproduce it, lol. Here's the vid:

papr 01 September, 2020, 19:30:54

@user-196692 Thank you very much for the video. This makes the issue very clear and confirms that your description is correct.

user-196692 01 September, 2020, 19:35:55

@papr One potential clue (don't know if this is related or not) is that Pupil Capture crashes on exit for me almost every single time. Both 2.2.0 and 2.3.0 do this. I haven't been able to figure out what is special about the few times it doesn't crash. And it doesn't particularly bother me, because it doesn't crash while I'm using it. Crash log attached.

crashLog.txt

papr 01 September, 2020, 19:38:46

@user-196692 This is a separate issue with libtbb.dylib that I have encountered a few times before, but not regularly. The difficulty with this one is that I have not been able to reproduce this when running from source. The few times it happened, it happened when running the bundle.

user-196692 01 September, 2020, 19:42:40

@papr Interesting. I haven't tried running from source (for either of these). I'll give it a shot.

papr 01 September, 2020, 19:43:40

@user-196692 My feeling is that the throttle will be reproducible when running from source. But I am looking forward to your test results!

user-196692 01 September, 2020, 20:42:38

@papr So I tried running from source and both the throttle with changing window focus and my frequent crashes on exit are gone! Thanks again for your help. If you want more info on the system configuration let me know...

papr 01 September, 2020, 21:35:48

@user-196692 That is very unexpected! Thank you for giving it a try and letting us know that it worked.

user-5ac103 02 September, 2020, 07:04:48

Hey there. I have an issue with my Pupil Labs eye tracking glasses. The cameras are not being installed anymore (libusbK USB Devices is not listing any cameras for the Pupil Labs glasses) since I updated my Pupil Capture to 2.3 and 2.2 respectively.

It's not working on either the PC (used for rendering) or the laptop (normally not used to run experiments on).

My Pupil Labs glasses are set up and ready to use on both machines and their cameras are being installed properly.

I tried the troubleshooting guide for Pupil Core and restarted the system multiple times; nothing seems to work...

Any ideas that might enlighten me?

(I am using Windows 10 1803/1908)

papr 02 September, 2020, 07:05:56

@user-5ac103 Hey, do you see the cameras being listed in a different category? e.g. Cameras or Imaging Devices?

user-5ac103 02 September, 2020, 07:09:01

@papr unfortunately not

papr 02 September, 2020, 07:09:34

@user-5ac103 ok, thank you for checking. Please contact [email removed] with this information

user-5ac103 02 September, 2020, 07:10:15

@papr thank you for your quick reply

user-0cc7c5 02 September, 2020, 13:52:15

hey there, I exported some data from a video to analyze fixation duration (with pre-defined settings: dispersion = 1.5, min = 80ms, max = 220 ms).

Interestingly, most of the fixations' durations are exactly 217.864 ms (or 217.863, see below). Therefore, I changed the maximum duration to 200 ms and the following exported file showed a duration of 197.691 ms for most of the fixations. How come? And is this data even valid?

Chat image

papr 02 September, 2020, 13:58:09

@user-0cc7c5 the algorithm aggregates as many gaze points as possible that fit into the [min, max] window. If a fixation was detected for that data, the length of the fixation is determined by last gaze timestamp - first gaze timestamp. Therefore, it is likely that the duration is not exactly the maximum window length.

user-0cc7c5 02 September, 2020, 13:59:22

so, this suggests that there might actually be a lot of longer fixations?

papr 02 September, 2020, 14:00:11

@user-0cc7c5 also, I do not know if this shows the full precision of the duration. Would you mind sharing the fixations file?

so, this suggests that there might actually be a lot of longer fixations? If you have multiple fixations that follow each other, it is very possible that the actual fixation is longer.

user-0cc7c5 02 September, 2020, 14:01:14

here it is

fixations.csv

papr 02 September, 2020, 14:37:09

@user-0cc7c5 We have evaluated the correctness of the fixation detection a while ago. At that time, different settings resulted in inconsistent fixations. The "new"/fixed version is more consistent. https://nbviewer.jupyter.org/gist/papr/0160845a6eb04bd09acfe0874072a223

user-0cc7c5 02 September, 2020, 14:40:23

alright, I'll try that. thanks a lot!

user-7aaf5c 04 September, 2020, 14:55:29

Hello everyone, I want to ask a question. I want to draw Pupil Diameter (pixel) vs Time (sec) graph. Firstly, I used the Excel file that I exported from Pupil Player and I have successfully drawn my graph with this Excel. But, I want to draw my graph in real time. So, is there a way to obtain the pupil diameter data when I started the recording in Pupil Capture?

papr 04 September, 2020, 14:56:25

@user-7aaf5c Check out our real-time network api https://docs.pupil-labs.com/developer/core/network-api/
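
For example, here is a minimal sketch (following the pupil-helpers pattern, not a ready-made script; it assumes Capture runs locally with Pupil Remote on its default port 50020) that subscribes to pupil data and prints the 2D diameter in pixels as it arrives:

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")  # all pupil datums from both eyes

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    print(datum["timestamp"], datum["diameter"])  # diameter is in image pixels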

user-7d4a32 05 September, 2020, 09:06:37

Hello, I can't find the offline calibration controls in the Windows 10 version (pupil player v2.3.0). Specifically I am talking about markers calibration (like those in regular calibration in Pupil Capture). Is there a video that shows me how to do it or can you point me in the right direction?

Thanks

user-7d4a32 05 September, 2020, 11:04:10

Additionally, is it possible to delete specific reference points (I am getting noise detections but I cannot clearly see where on the screen). Is it possible to delete according to time? This is a sample of what is happening:

Chat image

user-7daa32 05 September, 2020, 18:19:46

@papr > @user-0cc7c5 the algorithm aggregates as many gaze points as possible that fit into the [min, max] window. If a fixation was detected for that data, the length of the fixation is determined by last gaze timestamp - first gaze timestamp. Therefore, it is likely that the duration is not exactly the maximum window length. @papr Before I ask about what has been bothering me since our group meeting, I want to ask why we set a maximum duration at all. Is the fixation duration supposed to be set in advance (this may be kind of a novice question)? If participants are allowed to view or perform a task freely with no time limit, why are we setting a maximum fixation duration? I want to know if what is being said here is a different thing.

Now to why I am here. I honestly don't understand the exported spreadsheet data. Let's assume we have three AOIs: A, B & C. How can I know the dwell time for the initial fixation in A before the transition to a fixation in B, then the dwell time for the fixation in B, and so on? If I have a string like A1B1CA2BC1, how would I know the dwell time for each visit?

Note I am not talking about coding. I know nothing about coding! I just want to know how to identify the dwell time for each fixation in the scanpath.

user-7a04f9 05 September, 2020, 23:04:07

Hi Team, I'm a brain injury researcher here in Auckland NZ. Just a quick series of questions, because we are finding that the 3D calibration is not getting our angular precision down to where we need it for the participant to follow a moving dot. The 2D calibration is also not ideal because it requires a chin rest to maintain an accurate recording (correct me if I'm wrong). So we are trying to find ways around the calibration issues. It appears that turning the eye camera up to 200 Hz tends to really affect the accuracy of the data and pupil detection, unfortunately.

1) Is it safe to assume that the Z axis of the 3D eye model is the polar axis in the system? i.e. if we think of circle_3D_centre_x as the eye ball's longitude and circle_3D_centre_Y as latitude.

user-7a04f9 05 September, 2020, 23:04:40

** 2) Are theta and phi unaffected by calibration? How assumption-free are theta and phi and what is used to calculate this?

user-c5fb8b 07 September, 2020, 07:37:54

Hi @user-7d4a32

You can run the offline calibration from the Gaze Data menu. We have renamed offline calibration a while ago to Post-Hoc Gaze Calibration, which you can select in the Data Source field. Here's an example video with the old UI, so some of the labels and buttons might be different, but the general workflow is the same. First in the video there is Pupil Detection, the Post-Hoc Gaze Calibration starts at 1:55 https://www.youtube.com/watch?v=_Jnxi1OMMTc&t=1m55s

Once you have run the detection for Reference Locations in the Post-Hoc Gaze Calibration, you can manually edit them. Go to the menu for calibration, expand Reference Locations and enable Manual Edit Mode. You can then click on the world video to toggle the marker detection for the current frame.

user-c5fb8b 07 September, 2020, 09:15:23

Hi @user-7a04f9, here are a few answers and new questions regarding your questions :)

  • What angular precision values do you get out of the 3D calibration? I assume you are using a second validation step?

  • The 2D calibration does not necessarily require a chin rest, but it is sensitive to slippage. I recommend reading our best practices section (if you haven't already), especially the "Choose the Right Gaze Mapping Pipeline" section: https://docs.pupil-labs.com/core/best-practices/

  • How does turning the eye camera to 200Hz affect your accuracy? From what base value, 120Hz?

  • The coordinate system of the eye model is the eye camera coordinate system. The origin is at the camera sensor and the z axis points along the view direction of the camera; the coordinate system is defined by the camera, not by the eye itself. You can observe this by enabling the debug view in the 3D detector plugin in the eye window. If you look straight at the eye camera, you get a view direction vector close to (0, 0, -1) in eye camera coordinates.

  • Theta and Phi are not affected by the calibration. The eye model is built purely on the eye image and the result of the 2D pupil detector for that image. Calibration only takes care of learning a mapping between eye data (2D pupil for 2D calibration, or parameters of the 3D model for 3D calibration) and coordinates in the world camera image. Theta and Phi are calculated from the "eye-normal", i.e. the viewing direction of the eye in eye camera coordinates according to the following formulas:

r = sqrt(x*x + y*y + z*z)
theta = acos(y / r)
phi = atan2(z, x)

So again if you look straight at the eye camera, you get a view direction vector of (0, 0, -1) which corresponds to a (phi, theta) of (-π/2, π/2).
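
A quick sanity check of these formulas in plain Python (this is just the math above, not Pupil code):

import math

def direction_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(y / r)
    phi = math.atan2(z, x)
    return phi, theta

# looking straight at the eye camera
print(direction_to_spherical(0.0, 0.0, -1.0))  # (-1.5707..., 1.5707...) i.e. (-pi/2, pi/2)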

user-7a04f9 07 September, 2020, 21:07:22

Thank you for your reply. Certainly a step in the right direction. Much appreciated! My concern with the offline calibration is that we will now need to incorporate calibration targets at the beginning of our eye tracking experiment. I presume this calibration must be a part of the recording to enable post-hoc editing. We will therefore write some more targets into the code to show at the beginning of the experiment. Our biggest problem is that when the participant performs a circular smooth pursuit, the gaze is always off-target by a couple of degrees, as if the participant is looking 1 inch to the left of the circle at all times.

The 3D calibration is getting us angular precision values of around 2-3 degrees. For 2D calibration, this is at 0.18 - 0.2 degrees.

When I turn the eye camera up to 200Hz, it always seems to delete more data during the calibration and when I open the debug window, my pupil appears to jump all over the place about 5-10% of the time, despite adjusting exposure, etc.

user-c5fb8b 08 September, 2020, 06:13:37

@user-7a04f9 Thanks for reporting back those values!

I can again only recommend that you read through the best practices, as recording the calibration and validation procedures is exactly what we recommend! https://docs.pupil-labs.com/core/best-practices/

Having a consistent offset bias in your data happens sometimes for some subjects, you can correct this in the post-hoc gaze mappers. The gaze mappers have a manual correction menu, where you can adjust flat x/y offsets.

An angular precision of 2-3 degrees is nothing unusual for the 3D calibration. If you need higher precision, I would recommend you try the 2D calibration and record another validation session at the end of each block. That way you get an idea of how much slippage might have accumulated during the experiment and how valid your data is.

What do you mean by "delete more data during the calibration"? Theoretically you get frame drops if your computer is not powerful enough to process the data at 200Hz. I would not open the debug window when actually recording/calibrating, as it might lead to additional frame drops, normally just looking at the green eyeball overlay should give you a reasonable hint at how good your 3D model is fit.

user-10fa94 08 September, 2020, 15:17:21

Hi! I am a researcher new to using Pupil hardware and software. I was wondering if there is a way to extract the world and eye camera settings (exposure, aperture, brightness, etc.)? Are the OpenCV methods the way to access this information, or are there already pre-built methods that are specific to the Pupil hardware?

papr 08 September, 2020, 15:30:34

Hi @user-10fa94 Welcome! Most of the data streams can be accessed in realtime via our network api https://docs.pupil-labs.com/developer/core/overview/

You can use this api to set camera settings as well. Unfortunately, you cannot use it to read the current values. Instead, you would have to either modify the source code or write a custom user plugin: https://docs.pupil-labs.com/developer/core/plugin-api/

user-10fa94 08 September, 2020, 15:35:40

@papr Thank you, excited to work with this community! I have the Pupil Core headset with the high speed world camera. Apologies if this question is already answered in the docs, but do these cameras auto-adjust the aperture/exposure time settings based on environmental conditions, or can I set them to a fixed known value if I need to know the specific settings?

papr 08 September, 2020, 15:37:31

@user-10fa94 All cameras have a fixed focus and use automatic exposure by default. You can set them to use a fixed exposure time in the Video Source menus.

user-7a04f9 09 September, 2020, 00:38:17

@user-c5fb8b That's perfect, thank you very much. I meant 'deleting more data during the calibration' as in: after the calibration, it says something like 34% of the data was deleted for calibration purposes. The CPU issue may be just that, although it is a brand new gaming laptop. I'll investigate further. Many thanks.

user-ee33ec 09 September, 2020, 07:10:15

Excuse me, I'm using pupil core hololens add-on. I followed the manual to set eye cameras and eye images, but i still have a very bad accuracy (especially when i look downwards). It seems that my pupils aren't tracked as well as the demo video in manual shows. Is there anything I can do to possibly increase the tracking accuracy?

papr 09 September, 2020, 07:11:27

@user-ee33ec Would you mind posting a screenshot of the eye windows? Or even a small recording where you look at different directions? This would help us to give more specific feedback.

papr 09 September, 2020, 07:23:15

@user-7daa32 Please see my notes below - Fixation detection - The literature often defines a time range (minimum and maximum duration) for naturally occurring fixations during free viewing tasks. This is reason 1 for setting a maximum duration. Reason 2 is that setting a maximum duration allows us to search very efficiently for fixations in the gaze signal. Of course you can ask a subject to fixate a specific target for longer than the naturally occurring maximum duration. But in this case the algorithm will detect two successive fixations.

- Dwell time on surfaces - Pupil Player does not calculate dwell times for you. You will have to calculate these yourself based on the "gaze_on_surface" csv files.

user-ee33ec 09 September, 2020, 07:23:21

user-ee33ec 09 September, 2020, 07:24:23

sometimes the red dot and blue dot are not consistent. I checked the algorithm image and made sure my pupil is fully covered by blue area.

papr 09 September, 2020, 07:30:15

@user-ee33ec Thank you for the recording. You have done a good job of making sure that the 2d and 3d pupil detection works as well as possible! The issue is that when you look downward, your lower eye lid obscures the pupil partially.

The algorithm is tuned to find fully visible pupils. If only parts of it can be found, the confidence of the output will be lower. This is what you are seeing in 0:04-0:05.

I am not sure about the incorrect 2d detection (blue circle) at 0:03. Your pupils are really wide. Did you increase the pupil max parameter?

user-ee33ec 09 September, 2020, 07:33:24

I am not sure how to set the pupil max parameter properly. Do I need to set the max parameter so that the outer boundary is as close as possible to the biggest detected pupil, or should I leave some space in between?

papr 09 September, 2020, 07:34:13

I would leave some space. The pupil min and max parameters are mostly used to remove extreme outliers.

user-ee33ec 09 September, 2020, 07:37:27

Thank you. I'll try to change the parameter. Also, I notice that my pupil looks larger than in the demo video in the manual. Will experimenting in a brighter environment (in which hopefully my pupils get smaller) help increase accuracy?

papr 09 September, 2020, 07:39:48

@user-ee33ec The pupil detection and gaze mapping pipeline accuracy should be invariant to pupil size as long as the pupil min and max parameters are set correctly and there are not any other physical factors that influence the detection. For example, if your pupils are smaller, your lower eye lid would cover less area while looking down. This could improve accuracy.

user-ee33ec 09 September, 2020, 08:02:42

@papr Thanks for the information. I'll try😃

user-430fc1 09 September, 2020, 09:50:19

Hello, when I start and stop recordings using pupil remote, the pupil traces don't appear in the Pupil Player timeline graphs, whereas they do appear for recordings that I manually start and stop in Pupil Capture. Am I missing something?

papr 09 September, 2020, 09:54:52

@user-430fc1 Starting via Pupil Remote is equivalent to hitting the R button in the user interface. Have you made sure to enlarge the timeline area or scroll within it to make sure the timeline is not just outside of the visible area? Also, make sure to select "Pupil From Recording" as data source in the Pupil Data menu.

user-430fc1 09 September, 2020, 09:58:34

@papr yes, and they still don't seem to be there

Chat image

papr 09 September, 2020, 10:00:10

@user-430fc1 Is this the eye overlay plugin that is active?

user-430fc1 09 September, 2020, 10:00:33

@papr yes

papr 09 September, 2020, 10:00:59

Ah, actually, check out the center of the timeline. It looks like all points are clustered in the center

user-430fc1 09 September, 2020, 10:01:13

@papr when I do a short recording from capture the data are there

Chat image

user-430fc1 09 September, 2020, 10:02:59

@papr I thought that could be what happened... how do I decluster? Is it something to do with timesync? I usually have pupil_remote.send_string('T {}'.format(time())) at the start of my scripts

papr 09 September, 2020, 10:04:59

This will be the difference. Unix epoch has a lot less precision than the default pupil epoch. It might be related to that. Also, the timeline can only draw x positions with 32-bit floats (OpenGL constraint). This loses additional timestamp precision.

papr 09 September, 2020, 10:07:09
>>> t = time.time(); print(np.float64(t), np.float32(t))
1599646008.833998 1599646000.0
papr 09 September, 2020, 10:07:52

Yeah, the timeline loses precision on the order of seconds when converting the timestamps to float32

user-430fc1 09 September, 2020, 10:13:59

@papr ah ok, that makes sense. Perhaps I don't have a need for the Unix epoch, then. I could just leave it as is or set it to 0, and then if I need pupil time I can say

t = pupil_remote.send_string('t')
papr 09 September, 2020, 10:14:37

@user-430fc1 If you do not need Unix epoch explicitly, I suggest not changing the Pupil epoch

papr 09 September, 2020, 10:15:34

t = pupil_remote.send_string('t')
This will give you a timestamp that includes transmission delay

papr 09 September, 2020, 10:16:18

This is how you get rid of that delay: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L66-L71

user-430fc1 09 September, 2020, 10:18:35

@papr OK, thanks!

user-430fc1 09 September, 2020, 10:53:43

@papr just to clarify, is unix epoch only required when sending annotations?

papr 09 September, 2020, 10:55:34

@user-430fc1 No. If you want to send annotations in real time, their timestamps are required to be in Pupil epoch. You can either adjust the Pupil epoch to your clock (Unix epoch) or adjust your clock to the Pupil epoch (see the hmd-eyes link).

user-430fc1 09 September, 2020, 11:15:41

@papr Would this be the Python implementation?

tBefore = time()
pupil_remote.send_string('t')
pupilTime = pupil_remote.recv_string()
tAfter = time()
delay = (tBefore + tAfter) / 2.0
timestamp = pupilTime - delay
papr 09 September, 2020, 11:22:18

I think the variable name delay is incorrect.

delay     = (tAfter - tBefore) / 2.0
unix_time = (tBefore + tAfter) / 2.0

unix_time is the estimated time point at which Pupil Capture measures pupilTime.

Please be aware: delay is not the time difference between the epochs but the estimated transfer duration for the pupil remote command
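
Putting it together, a minimal sketch of the offset estimation (assuming pupil_remote is the connected zmq REQ socket from the pupil-helpers examples; mirrors the hmd-eyes TimeSync logic):

import time

t_before = time.time()
pupil_remote.send_string('t')                   # request current Pupil time
pupil_time = float(pupil_remote.recv_string())
t_after = time.time()

delay = (t_after - t_before) / 2.0              # estimated one-way transfer duration
unix_time = (t_before + t_after) / 2.0          # local time when Capture measured pupil_time
offset = unix_time - pupil_time                 # Unix epoch minus Pupil epoch

# later, e.g. when sending an annotation, convert a local timestamp to Pupil time:
annotation_timestamp = time.time() - offset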

papr 09 September, 2020, 11:22:42

@user-430fc1 I would try to follow the hmd-eyes implementation naming-wise for later reference

user-abc746 09 September, 2020, 11:43:08

Hey everyone

user-abc746 09 September, 2020, 11:43:28

does anyone else have problems connecting the glasses to a computer?

user-abc746 09 September, 2020, 11:43:53

I have three pairs of glasses and none of them can get a good connection

papr 09 September, 2020, 11:44:38

@user-abc746 Are you using the same cable for all of them? Have you tried using a different computer to reproduce the issue?

user-abc746 09 September, 2020, 11:44:54

there is always one of the three cams that doesn't work, and I am not sure whether this is a contact failure

user-abc746 09 September, 2020, 11:45:15

I only have one computer at my disposal

papr 09 September, 2020, 11:45:25

What operating system does it run?

user-abc746 09 September, 2020, 11:45:31

I tried changing cables

user-abc746 09 September, 2020, 11:45:35

Windows 10

papr 09 September, 2020, 11:46:28

@user-abc746 Can you check the Device manager and check if all three cameras appear there in the libUSBk category?

user-abc746 09 September, 2020, 11:47:01

OK, where is the Device Manager? Is it a Windows thing or part of Pupil Core?

papr 09 September, 2020, 11:47:34

It's a program that comes with Windows. I want to test at which software level the issue appears

papr 09 September, 2020, 11:48:01

Btw, for a given headset, is the not-working camera always the same? Or is it inconsistent within a single device as well?

user-abc746 09 September, 2020, 11:50:50

no, it changes between cameras

user-abc746 09 September, 2020, 11:51:11

I see in the Device Manager that ID1 is not appearing

user-abc746 09 September, 2020, 11:51:22

sometimes it does appear, when it works

user-abc746 09 September, 2020, 11:51:55

so it looks like a contact failure, but that would mean that all three of my glasses have a contact failure?

papr 09 September, 2020, 11:52:01

Do you see cameras in the Cameras or Imaging Device category?

papr 09 September, 2020, 11:52:46

so it looks like a contact failure, but that would mean that all three of my glasses have a contact failure? This could be a possibility. I would like to ask you to contact [email removed] in this regard.

user-abc746 09 September, 2020, 11:54:23

these category are still in Windows ?

papr 09 September, 2020, 11:55:37

Yes, these should be categories in the Device Manager. If they do not show up, it means there are no devices in these categories. In this case, it looks like the drivers are correctly installed. Please contact [email removed] My colleagues will be able to help you with that.

user-abc746 09 September, 2020, 11:58:52

OK, I see the three (sometimes two) cameras in the libusbK USB Devices category

user-abc746 09 September, 2020, 11:59:35

but the cameras do not seem to appear in the Cameras or Imaging Devices categories

user-abc746 09 September, 2020, 12:05:06

so does that mean the devices are not installed correctly?

papr 09 September, 2020, 12:05:50

No, it means the drivers are installed correctly and it is likely a hardware-related issue. Please contact [email removed] in this regard.

user-7daa32 09 September, 2020, 12:31:34

I see that my previous question was responded to, I will check it after this😊.

I have been seeing in the best practices the need to "validate". Please excuse what may seem a lazy question, but how do you validate?

Usually I just check whether the accuracy is more than 2.0; if it is, I just move on with the tracking.

papr 09 September, 2020, 13:24:37

@user-7daa32 You can validate by hitting T after the calibration. The procedure is similar to the calibration but with different points. The accuracy result is more meaningful than directly after the calibration.

user-1ce4e3 09 September, 2020, 14:05:15

Pupil Player v2.3.0 keeps crashing when I try to drop recording directories onto it. Any troubleshooting tips?

papr 09 September, 2020, 14:06:09

@user-1ce4e3 Could you share the player.log file in the pupil_player_settings folder?

papr 09 September, 2020, 14:06:43

@user-1ce4e3 Also what type of recording is it?

user-1ce4e3 09 September, 2020, 14:08:42

@papr thanks for your quick response!! It is a recording with the surface tracking module from December 2019 (when I tried to use the earlier version of Pupil Player it said I needed >2.0)

user-1ce4e3 09 September, 2020, 14:09:35

The player.log file:

2020-09-09 10:07:41,753 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2020-09-09 10:07:42,395 - player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
2020-09-09 10:07:47,179 - player - [INFO] launchables.player: Starting new session with 'C:\Users\cmvannelli\recordings\2019_12_13\006'
2020-09-09 10:07:47,231 - player - [INFO] pupil_recording.update.new_style: Checking for world-less recording...
2020-09-09 10:07:47,255 - player - [ERROR] libav.mjpeg: unable to decode APP fields: Invalid data found when processing input
2020-09-09 10:07:47,261 - player - [INFO] camera_models: Previously recorded intrinsics found and loaded!
2020-09-09 10:07:47,261 - player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 909, in player_drop
  File "shared_modules\pupil_recording\update\__init__.py", line 79, in update_recording
  File "shared_modules\pupil_recording\update\__init__.py", line 105, in _generate_all_lookup_tables
  File "shared_modules\video_capture\file_backend.py", line 248, in __init__
  File "shared_modules\camera_models.py", line 329, in from_file
TypeError: __init__() takes 3 positional arguments but 5 were given

papr 09 September, 2020, 14:10:07

@user-1ce4e3 This is a known issue and will be fixed in our upcoming release. You should be able to use v2.2 though

user-1ce4e3 09 September, 2020, 14:28:18

Thanks!! Is v2.2 linked to the github?

papr 09 September, 2020, 14:28:35

@user-1ce4e3 https://github.com/pupil-labs/pupil/releases/v2.2#user-content-downloads

user-1ce4e3 09 September, 2020, 14:28:41

Thanks so much!!

user-7daa32 09 September, 2020, 19:11:43

@user-7daa32 You can validate by hitting T after the calibration. The procedure is similar to the calibration but with different points. The accuracy result is more meaningful than directly after the calibration. @papr Single marker calibration is what I use. Should I also hit T and focus on the single calibration marker?

papr 09 September, 2020, 19:13:52

@user-7daa32 For the single marker calib you will need to do a viewing pattern, same as for the calibration

papr 09 September, 2020, 19:14:38

ideally you do a different pattern for the validation

user-7daa32 09 September, 2020, 19:16:47

Head rotation is the only pattern I know. I think I read about side-to-side movement. This means that our study participants will be doing both calibration and validation. This might increase fatigue

papr 09 September, 2020, 19:17:11

But you might need more accurate data 🙂

user-7daa32 09 September, 2020, 19:17:32

👌😊

user-ed87f1 09 September, 2020, 19:38:25

Hi all. How can I contact pupil labs for customer support (US-based) on a hardware issue?

papr 09 September, 2020, 19:48:23

@user-ed87f1 Please contact info@pupil-labs.com

user-00fa16 10 September, 2020, 05:32:42

Hi all. Why is the program not responding when I want to get real time 'gaze' information?

user-00fa16 10 September, 2020, 05:34:46

Even when I get them in a child thread like following code

user-00fa16 10 September, 2020, 05:34:51

import threading
import queue

q = queue.Queue()

def th_work():
    # Request 'SUB_PORT' for reading data
    pupil_remote.send_string('SUB_PORT')
    sub_port = pupil_remote.recv_string()
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f'tcp://{ip}:{sub_port}')
    subscriber.subscribe('gaze.')  # receive all gaze messages
    while True:
        topic, payload = subscriber.recv_multipart()
        message = msgpack.loads(payload)
        print(f"{topic}: {message}")
        break
    q_put(message)

t1 = threading.Thread(target=th_work)
t1.setDaemon(True)  # daemon
t1.start()  # start child thread
t1.join()
while not q.empty():
    result.append(q.get())
print(result)

user-c5fb8b 10 September, 2020, 07:22:44

@user-00fa16 what happens when you run this code and what do you expect to happen?

By the way, you can get nice formatting for code (easier to read), by wrapping your code like this: ```python

your python code

print("Hello, world!") ```

This e.g. produces:

# your python code
print("Hello, world!")
user-1ccccf 10 September, 2020, 07:32:19

@user-c5fb8b Hello, I encountered some performance differences when processing the same eye video on Ubuntu 18.04 and Windows 10. I installed pupil-detectors with pip (pip install pupil-detectors) on both Ubuntu 18.04 and Windows 10. The differences are as follows:

  1. The 3D detection shows performance differences. The first picture shows the video processed on Windows 10; notice that frames No. 440-460 perform badly. The second picture shows the video on Ubuntu 18.04, where frames No. 440-460 perform normally. The eye video was recorded during the gaze calibration. The part of the video with bad performance on Windows 10 was recorded when the user looked to the bottom left and bottom right.
  2. The 2D detection shows the same performance. I processed the video with the 2D detection algorithm and noticed the same good performance on Ubuntu 18.04 and Windows 10.

I also noticed that you said "When building the package on your own, you can experience severe performance differences when not having setup your dependencies correctly", but I installed the prebuilt wheels. Why are there such differences? And how can I get the same performance on Windows 10? Looking forward to your reply.

Chat image

user-1ccccf 10 September, 2020, 07:32:32

Chat image

user-00fa16 10 September, 2020, 07:33:13

What I want to achieve is getting real-time information from Pupil Capture using the network API. Before the code I supplied, I had already achieved remote control (recording, calibration, and sometimes gaze information), but while the program can sometimes run, most of the time it is unresponsive. I also tried using a child thread to get the gaze information, and it was unresponsive again (if I remove the part that gets gaze information, there are no unresponsive cases). So is it caused by running out of memory?

user-1ccccf 10 September, 2020, 07:49:59

@papr @user-c5fb8b Sorry, in my previous description I forgot to say that the white mask was added to the eye picture because the left part of the original picture was black due to lack of light.

user-1ccccf 10 September, 2020, 08:27:34

@papr @user-c5fb8b 😭 Looking forward to your prompt answers. I have tried to debug the source code these days on Ubuntu 18.04 and Windows 10, but due to the use of threads and the complicated code structure, it is difficult to figure out where the difference is.

user-c5fb8b 10 September, 2020, 08:33:53

Hi @user-1ccccf,

the notes about "performance" specifically relate to "processing time". If you build opencv without the performance tools that we use, processing a frame might take a lot longer, which might negatively impact your real-time experience e.g. in Pupil Capture. It should not have a noticeable effect on "accuracy" or "results" though, even though it might be that different versions of opencv might differ slightly in the results they produce (basically numerical jitter by different bit-widths and running algorithms on GPU vs CPU, etc.). Other than that: did you install pupil-detectors on Linux or did you use the output from the Pupil Player bundle? The thing is that we only offer pre-built wheels of pupil-detectors for Windows, on Linux running pip install pupil-detectors will actually built the code from source. However, if you use the prebuild bundles of Pupil, it will include a prebuilt version of pupil-detectors as well, which we optimized for Linux.

However, all of the considerations above do not explain the worse confidence on Windows. Do I understand correctly, that you have a single recording and run the offline pupil detection once on Windows and once on Linux on that recording? Do you get the same output (conf < 1.0) on Windows when you run the offline detection multiple times?

I would suggest you share the recording with [email removed] e.g. through Dropbox/GoogleDrive/etc. and we can take a look at what's going on.

user-c5fb8b 10 September, 2020, 08:50:43

@user-00fa16 I'm sorry I do not fully understand where your expectations diverge from the results of your script. Having a look at the code you posted, it also appears not to be syntactically correct Python code.

Please come up with a minimal working example. Explain the specific setup you use and which behavior and output you expect from running the program. Then explain what actual behavior and output you get from running the program.

This way we can help you identify whether you are making the wrong assumptions about how Pupil's network API works or whether you actually run into an issue with our software.

user-c5fb8b 10 September, 2020, 08:52:58

@user-00fa16 also please note that from v2.0 on, Pupil does not publish gaze data if you haven't calibrated yet. So when subscribing to the gaze topic you won't receive any data until you run the calibration.

user-1ccccf 10 September, 2020, 08:54:55

@user-c5fb8b Thanks for your reply. 1. I use the pupil-detectors wheel on Windows, so I don't need to build opencv. 2. Before I ran pip install pupil-detectors on Linux, I ran pip uninstall pupil-detectors to delete the previously installed version, so I'm sure pupil-detectors is the newest. 3. I ran the offline detection multiple times on Windows and Linux, and I always get the same output. 4. Can I send my recording to your email? It is only 605 KB. I use time.time() as my timestamps, so the timestamps are always different, but I don't think that has a big impact.

user-c5fb8b 10 September, 2020, 08:57:00

@user-1ccccf in that case you can also just send the recording, yes. What do you mean, you use time.time() as timestamps? Did you not create the recording with Pupil Capture?

user-1ccccf 10 September, 2020, 08:58:49

@user-c5fb8b Yes, I created the recording with the OpenCV VideoWriter, so I didn't record timestamps. Can you tell me your email?

user-c5fb8b 10 September, 2020, 08:59:15

@user-1ccccf please send the recording to data@pupil-labs.com

user-00fa16 10 September, 2020, 09:00:09

Could I send my code to your email too?

user-c5fb8b 10 September, 2020, 09:00:58

@user-00fa16 as I mentioned above, your code is not syntactically correct Python code. Please come up with a minimal working example. Explain the specific setup you use and which behavior and output you expect from running the program. Then explain what actual behavior and output you get from running the program.

user-1ccccf 10 September, 2020, 09:01:28

@user-c5fb8b OK, thank you very much. Looking forward to your reply. Do I need to repeat my description in the email?

user-c5fb8b 10 September, 2020, 09:01:48

@user-1ccccf no this is fine

user-1ccccf 10 September, 2020, 09:02:28

@user-c5fb8b Ok I'll send it right away.

user-00fa16 10 September, 2020, 09:17:16

Sorry for the confusion. My program consists of three parts: the first one remote-controls 'recording' and 'calibration' of Pupil Capture, using the code you provide here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py ; the second part is stimulus presentation, 8 rectangles in randomized order; the last and most important part subscribes to real-time gaze information from the IPC Backbone (please see the pictures for details) in each frame (the code in the picture is put into the stimulus presentation code), but it keeps being unresponsive at run time and reports no error (please ignore the child thread I mentioned above, it was just an attempt to solve this problem)

user-00fa16 10 September, 2020, 09:17:56

Chat image

user-00fa16 10 September, 2020, 09:18:19

Chat image

user-c5fb8b 10 September, 2020, 09:20:01

@user-00fa16 did you see my message above regarding calibrated gaze? Did you calibrate before subscribing to the gaze topic?

user-c5fb8b 10 September, 2020, 09:21:35

also please note that from v2.0 on, Pupil does not publish gaze data if you haven't calibrated yet. So when subscribing to the gaze topic you won't receive any data until you run the calibration.

user-1ccccf 10 September, 2020, 09:23:56

@user-c5fb8b Hi, the data was sent successfully. I also attached the test code. Thank you very much.

user-00fa16 10 September, 2020, 09:42:15

It works now; indeed, the problem was caused by the calibration not being completed. Thanks for your attention and patience~

user-c5fb8b 10 September, 2020, 09:42:41

@user-00fa16 I'm glad we could help!

user-1ccccf 10 September, 2020, 10:08:58

@user-c5fb8b If there are any conclusions or solutions, please notify me immediately in Discord. I will wait for your reply. I really appreciate your help.

user-c5fb8b 10 September, 2020, 10:27:48

@user-1ccccf we will have a look at your recording as soon as possible and come back to you once we have some further information. If you prefer, we can also respond to your email, so you don't have to "wait" in Discord.

user-1ccccf 10 September, 2020, 10:37:02

@user-c5fb8b Thank you for your help. We can communicate in time through the Discord.

user-c5fb8b 10 September, 2020, 11:14:58

@user-1ccccf Did you record the eye video with a Pupil Labs eye-tracker? Did you record it with the Pupil Capture software?

user-1ccccf 10 September, 2020, 11:20:08

@user-c5fb8b Hi, I recorded the eye video with customized hardware. I recorded it with OpenCV in C++. The video performs normally on Linux, so I don't think the hardware or the recording software affects the result.

papr 10 September, 2020, 11:46:21

@user-1ccccf Please run detect.py on your Windows and linux machine: https://gist.github.com/papr/c763ec6ac938b59fdf62f509886fcec5

It requires https://github.com/pupil-labs/pyav as well as pandas

papr 10 September, 2020, 11:47:06

I have cleaned up the code, using pyav instead of OpenCV and using the internal video timestamps instead of time.time() for consistency

user-1ccccf 10 September, 2020, 11:48:54

@papr Thank you very much. I will try it right away.

papr 10 September, 2020, 11:49:20

My results for reference. You can use the vis.py file to visualize all results

results_macOS-10.15.6-x86_64-i386-64bit.csv

user-1ccccf 10 September, 2020, 11:50:50

@papr Thank you very much. I will try it on Windows 10 and find whether it performs normally.

papr 10 September, 2020, 11:51:34

@user-1ccccf If possible, please share the resulting csv file, too

user-1ccccf 10 September, 2020, 11:52:14

@papr OK, I will. And did you try it on Windows 10?

papr 10 September, 2020, 11:52:56

@user-1ccccf I do not have access to a windows machine right now which is why I would like to have your results for comparison

user-1ccccf 10 September, 2020, 11:53:24

@papr OK, I understand.

user-1ccccf 10 September, 2020, 12:18:35

Wow, the results have shown huge differences.

results_Linux-4.15.0-117-generic-x86_64-with-Ubuntu-18.04-bionic.csv

user-1ccccf 10 September, 2020, 12:18:40

results_Windows-10-10.0.18362-SP0.csv

user-1ccccf 10 September, 2020, 12:19:43

Results from the three systems for comparison.

Chat image

user-1ccccf 10 September, 2020, 12:21:10

Linux

Chat image

user-1ccccf 10 September, 2020, 12:21:53

Windows

Chat image

user-1ccccf 10 September, 2020, 12:23:38

@papr @user-c5fb8b And the macOS.

Chat image

user-1ccccf 10 September, 2020, 12:28:04

@papr @user-c5fb8b Why does pupil-detectors on Linux show the best 3D detection performance among these systems? And how can I improve the performance on Windows?

papr 10 September, 2020, 12:47:22

@user-1ccccf lesson number one: don't use OpenCV to extract images from a video.

papr 10 September, 2020, 12:48:26

@user-1ccccf you are using pupil-detectors 1.1.1, correct?

user-1ccccf 10 September, 2020, 12:49:58

@papr I use pyAV to extract images, as your code shows.

user-1ccccf 10 September, 2020, 12:51:22

@papr Yes, I only installed the prebuilt version via pip install pupil-detectors.

user-1ccccf 10 September, 2020, 12:51:58

And all versions are 1.1.1.

papr 10 September, 2020, 12:52:30

I am not sure what is going on with the 3d model though

user-7d4a32 10 September, 2020, 13:13:25

hey, in the fixation csv what is the "duration" and how is it different from the difference between the current world timestamp and the previous world timestamp?

user-1ccccf 10 September, 2020, 13:19:02

@papr yes, I also want to know what is happening in the code. Yesterday I tried to debug EyeModel.cpp and EyeModelFitter.cpp and printed the output of the intermediate steps. I noticed that EyeModel.cpp lines 155-179 change the sphere more frequently on Windows than on Linux.

papr 10 September, 2020, 13:20:20

@user-1ccccf this is possibly a result of the lower confidence

user-1ccccf 10 September, 2020, 13:32:29

@papr I used Pupil v1.0 on Windows 10 and noticed the same performance as pupil-detectors on Windows 10. I haven't tried it with Pupil v2.2 because I can't find the Video File Source like in Pupil v1.0. So is there any possible solution to improve the performance? I will try it even if it may fail.

user-1ccccf 10 September, 2020, 13:33:54

@papr Thank you very much for your attention and patience.👍

papr 10 September, 2020, 13:53:27

@user-1ccccf Since the video source rework, you can simply drop video files onto Capture and it will start the file source. This requires an appropriate _timestamps.npy file, though, and the approach is not recommended. Instead, I would create a Pupil Player compatible recording.

user-1ccccf 11 September, 2020, 01:30:31

@papr Thanks for the reminder. I have tried Pupil Capture v2.2 on Windows 10 with this video and obtained the same result as with pupil-detectors on Windows.

user-00fa16 11 September, 2020, 01:56:11

Sorry, I made a mistake yesterday: I put the code for subscribing to gaze information at the end of the experiment, but I meant to run it on every frame (subscribing to gaze information every frame), so the problem was not caused by an incomplete calibration.

user-00fa16 11 September, 2020, 01:56:56

@user-c5fb8b

user-00fa16 11 September, 2020, 03:53:44

Why does the Pupil Labs software take up so much memory?

user-7daa32 11 September, 2020, 13:22:23

Hello everyone,

Please, I am still trying to locate the dwell times for fixations on all the surfaces. I don't understand the exported recording files.

What is the difference between gaze_on_surface... and fixations_on_surface...?

user-7daa32 11 September, 2020, 13:25:51

How can I locate the dwell time at point A? A transition will occur, then the dwell time in C, another transition, then the dwell time in B, and so on.

I.e. the dwell time from one fixation to the next.

Chat image

user-7daa32 11 September, 2020, 14:01:20

The fixations on surfaces don't follow the times shown on the frame. In the picture, fixation 123 is showing but the frame counter says 130.

Chat image

papr 11 September, 2020, 14:20:53

@user-7daa32 130 refers to the scene image index, 123 to the fixation id. These numbers do not refer to the same thing and are therefore in most cases different.

papr 11 September, 2020, 14:26:35

@user-7daa32 Difference between gaze and fixations:
- One fixation is always composed of multiple gaze points
- Gaze points do not have a duration, they only have a single timestamp
- Fixations have a duration

So if A, C, B are fixations in your example, the dwell time might just be the duration of the fixation. But I am not sure what your definition of dwell time is.

papr 11 September, 2020, 15:54:14

@user-00fa16 You do not need to subscribe every frame. One subscription is enough. What you need to do regularly/often is recv() as you have done it in your background thread
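
For illustration, a rough sketch of that pattern (not a drop-in fix for your script; it assumes ctx, ip and pupil_remote are set up as in the pupil-helpers examples):

import threading
import queue
import zmq
import msgpack

q = queue.Queue()

def gaze_worker():
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://{ip}:{sub_port}")
    subscriber.subscribe("gaze.")           # subscribe once
    while True:                             # recv() continuously, do not break
        topic, payload = subscriber.recv_multipart()
        q.put(msgpack.loads(payload, raw=False))

t = threading.Thread(target=gaze_worker, daemon=True)
t.start()

# in the main/stimulus loop: drain whatever has arrived so far without blocking
while not q.empty():
    gaze = q.get()
    print(gaze["timestamp"], gaze["norm_pos"])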

user-7daa32 11 September, 2020, 16:54:16

@user-7daa32 Difference between gaze and fixations: - One fixation is always composed of multiple gaze points - Gaze points do not have a duration, they only have a single timestamp - Fixations have a duration

So if A, C, B are fixations in your example, the dwell time might just be the duration of the fixation. But I am not sure what your definition of dwell time is. @papr A, B and C are gaze points on different surfaces. Sorry, when I said dwell time, I meant the time of the initial visit to a surface. We can have a scanpath comprising transitions from gaze point to gaze point across all the surfaces. I just want to know the times for all the gaze points.

E.g. A is the initial visit to Surface 1, B is the initial visit to Surface 2, and C is the initial visit to Surface 3. I notice the times for the gaze points differ from what we have on the timeframe.

user-ef3ca7 12 September, 2020, 17:04:21

@papr Hi, I have recorded some trials last year. I didn't use markers that time. Is it possible for me to draw heatmaps?

papr 12 September, 2020, 17:05:18

@user-ef3ca7 No, unfortunately, the surface tracking does not work without markers.

user-ef3ca7 12 September, 2020, 17:06:43

Is there any way I can draw heatmaps manually?

papr 12 September, 2020, 17:07:35

@user-7daa32 Yes, gaze timestamps are based on the eye videos original timestamps while the scene video has its own timestamps. To work around this issue, Pupil Player matches each gaze point to its closest scene video frame. That is what the "world index" refers to in the exported gaze-on-surface csv file

papr 12 September, 2020, 17:10:45

@user-7daa32 What you need to do is to merge the gaze-on-surface csv files for all surfaces, remove the entries whose "on_surf" entries are "False", and sort everything by the gaze timestamps.
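
For example, a rough pandas sketch of that merge (the export path and column names such as gaze_timestamp and on_surf are assumed to match your surface tracker export):

import glob
import pandas as pd

frames = []
for path in glob.glob("exports/000/surfaces/gaze_positions_on_surface_*.csv"):
    df = pd.read_csv(path)
    df["surface"] = path  # keep track of which surface each sample belongs to
    frames.append(df)

gaze = pd.concat(frames, ignore_index=True)
gaze = gaze[gaze["on_surf"] == True]        # keep only gaze that actually hit a surface
gaze = gaze.sort_values("gaze_timestamp")   # chronological order across all surfaces

From this merged table you can then take, for each run of consecutive rows on the same surface, the last minus the first gaze_timestamp to get a dwell duration for that visit.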

user-701cbd 13 September, 2020, 19:00:53

Hello! Please help with a DIY setup: I've bought two identical cameras for World and Eye and installed the drivers as needed. Both cameras work fine separately (if I connect only one), but I cannot get them to work together: Pupil Capture lists two cameras, but does not allow choosing one for World and the other for Eye. If I select one in World, it disconnects the other in Eye. It looks like it lists two cameras but operates only one of them. I use Windows 10 64-bit, Pupil Capture 2.03. Please give me any hint where to look. Thanks in advance!

Chat image

papr 13 September, 2020, 19:03:04

@user-701cbd Looks like there is a name conflict. Check if you can somehow rename one of them (on system level) and restart capture

user-701cbd 13 September, 2020, 19:05:34

@user-701cbd Looks like there is a name conflict. Check if you can somehow rename one of them (on system level) and restart capture @papr Thank you, that is it, I guess. I changed everything I was able to in the Windows registry, assigned different "FriendlyNames", etc., but apparently Capture communicates with something at a lower level than I can change.

papr 13 September, 2020, 19:06:41

@user-701cbd I have never done this, so I am not even sure if it is possible 😕

user-701cbd 13 September, 2020, 19:09:26

Do you think there is a chance to change something in the Capture scripts (for example, telling it to address the camera by a different parameter), or is the only way to change something on the camera properties side? Just to locate where to look for a solution 🙂

papr 13 September, 2020, 19:12:28

@user-701cbd I would start by changing this line https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L926 to

label=f"{device['name']} [{device['uid']}] @ Local USB",

This way you have a way to differentiate the two cameras. If they have the same uid (shouldn't be possible) then you might be in trouble.

papr 13 September, 2020, 19:13:37

This is how you run from source on Windows https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md

user-701cbd 13 September, 2020, 19:14:03

Thanks very much for the help! Will try

user-f4f19b 14 September, 2020, 11:31:53

Hello,

Does anyone have an idea of how I can get a rough "human timeline" for pupil measurements?

I tried using the Unix start_time_system_s (from info.player.json), which should match the beginning of my recording session, together with the pupil_timestamp. Since the sampling frequency of the eye camera is 200 Hz, I innocently thought that each interval of 200 pupil_timestamp values (per eye) should be equivalent to 1 second ... which turned out to be totally inaccurate, obviously.

user-c5fb8b 15 September, 2020, 07:35:24

Hi @user-f4f19b, your observation is correct, the frame-rate will not be constantly frame-perfect, that's why we record the timestamps for each frame. You can use the start_time_system (unix format) and start_time_synced (pupil time) to calculate the offset for converting pupil timestamps to unix time, then you can generate a "human-readable" time for each frame by applying the offset to each frame's pupil time.
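
A minimal sketch of that conversion (assuming a new-style recording whose info.player.json contains start_time_system_s and start_time_synced_s, and using eye0_timestamps.npy as the frame timestamps):

import json
import datetime
import numpy as np

with open("recording/info.player.json") as f:
    info = json.load(f)

# offset between Unix time and Pupil time for this recording
offset = info["start_time_system_s"] - info["start_time_synced_s"]

pupil_timestamps = np.load("recording/eye0_timestamps.npy")
unix_timestamps = pupil_timestamps + offset

# human-readable wall-clock time of the first eye frame
print(datetime.datetime.fromtimestamp(unix_timestamps[0]))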

user-8effe4 15 September, 2020, 07:48:20

Hi guys, I am post-processing data collected using the Pupil Core and I'm having some issues using the surface tracker to map the gaze coordinates. The participant moves their head at times, so the markers on the screen are not always visible and this deforms the resulting surface (see screenshot below). I've tried editing the surface or using fewer markers which are visible for most of the trial, but the result is actually worse. Moreover, in other, less extreme cases, I see in the surface window that some motion persists over the length of a trial; is this corrected for in terms of the gaze coordinates?

Chat image

papr 15 September, 2020, 08:22:59

For the sake of completeness, we have already followed up with @user-8effe4 via email. Our general recommendations are:
- to use AprilTags instead of Legacy Markers
- to run the camera intrinsics estimation to improve the undistortion of the gaze data
Also, using more markers usually results in more stable surface tracking.

user-8effe4 15 September, 2020, 08:34:31

Thank you. The only problem is that we have already collected lots of valuable data, so I was wondering if there is a way to solve the problem without AprilTags. Also, for the camera intrinsics, the distortion problem seems to be almost solved in Pupil Capture; however, the changes are not reflected in Player (the world.intrinsics file seems to be loaded in Player, but the image still looks a bit curved).

papr 15 September, 2020, 08:37:55

@user-8effe4 You are correct, the effects of the intrinsics are usually not visible as we are not undistorting the scene image (which is very slow). Instead, we undistort the marker locations and gaze data internally. If you want to check if the intrinsics work well for your recording, you can export parts of the recording using the iMotions Exporter. This plugin actually undistorts the exported video using the intrinsics.

user-8effe4 15 September, 2020, 08:40:48

Oh okay, so just to recapitulate: if I export the gaze or fixation data (using the iMotions Exporter) and plot it, then I'll notice a difference in the positions?

papr 15 September, 2020, 08:42:52

@user-8effe4 Exporting using the iMotions exporter is mostly to verify the intrinsics visually by looking at the exported video. Edges that are straight in real life (e.g. the edge of your computer screen, visible in the screenshot) should look straight in the exported video as well.

user-8effe4 15 September, 2020, 08:44:12

Ah okay I will try that out. Thank you 😃

user-8effe4 15 September, 2020, 13:12:04

Okay, we have just tried that. It worked on the newly recorded videos, but when we tried to export our old data, it gave the following error (we tried multiple versions like 1.23 and 1.10).

Chat image

user-10fa94 15 September, 2020, 13:20:14

Hi, in the video source window, we are able to set the fps of the world video feed to either 30 fps or 60 fps depending on the resolution. However, when I convert the video into individual frames, I saw in the tutorial pages that "Most video files generated by Pupil will have a variable frame rate, mirroring the exact frame rate of the incoming frames from the cameras." and noticed the frame rate varied. Is there a way to set the frame rate?

papr 15 September, 2020, 13:27:00

@user-10fa94 no, unfortunately, there is no way to get perfectly even frame rates.

papr 15 September, 2020, 13:28:15

@user-8effe4 have you used the same headset for the old recordings? In this case, it is enough to export a new recording. Bear in mind that an intrinsics file is only accurate for a single camera with a specific lens.

user-8effe4 15 September, 2020, 13:29:33

@papr Yeah i used the same headset since we only have one 😃

papr 15 September, 2020, 13:30:07

@user-8effe4 OK, then you are fine with exporting a single recording and inspecting it

user-8effe4 15 September, 2020, 13:31:21

@papr That's the problem. I can't export them; I'm getting the error shown in the picture above.

papr 15 September, 2020, 13:37:04

@user-8effe4 you said that it worked on a new recording, correct? That is all you need

user-8effe4 15 September, 2020, 13:39:32

@papr I know, but we need to use this technique on old data (videos) too, and that didn't work.

papr 15 September, 2020, 13:41:16

@user-8effe4 no, I think you misunderstood me there. Once you have verified the intrinsics on a new recording, you can copy and paste them to the old recordings. (you need to redo the surface definitions, ideally with all markers for that surface visible)

papr 15 September, 2020, 13:43:53

But the intrinsics remain the same for all recordings from the same device and scene camera lens.

user-8effe4 15 September, 2020, 13:59:42

Yeah, I got you. I did copy them to the old recordings. However, now when I go to the iMotions exporter and push the export button, it gives me the error. Am I missing something?

papr 15 September, 2020, 14:00:50

@user-8effe4 you do not need to run the iMotions exporter anymore. The only purpose to run it, is to verify the intrinsics by visually inspecting the export result and ensuring that straight lines are indeed straight.

user-8effe4 15 September, 2020, 14:03:55

Ohh ok, so even though the video looks distorted (fish-eyed), the newly exported gaze would be correct if I plot it on my surface? Since I will be plotting it in MATLAB on the same surface.

papr 15 September, 2020, 14:06:41

@user-8effe4 First, you will need to redefine the surface as the surface definition uses the intrinsics to build an undistorted surface model. If the intrinsics change, the old surface definitions are no longer valid.

Then, when mapping gaze from the scene video to the surface, the gaze will be mapped to the undistorted surface (gaze_on_surface*.csv files). Visualizing the gaze is only valid on an image that is also undistorted.
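
To illustrate the last point, here is a minimal sketch of plotting surface-mapped gaze on an undistorted reference image of the stimulus. File and column names follow the Pupil Player surface export; the surface name ("Screen"), the export path, and the stimulus image are assumptions.

```python
import matplotlib.pyplot as plt
import pandas as pd

img = plt.imread("stimulus.png")  # undistorted reference image (assumption)
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
gaze = gaze[gaze["on_surf"] == True]  # keep only gaze that actually hit the surface

h, w = img.shape[:2]
plt.imshow(img)
# surface coordinates are normalized with the origin at the bottom-left corner
plt.scatter(gaze["x_norm"] * w, (1 - gaze["y_norm"]) * h, s=5, alpha=0.3, color="r")
plt.show()
```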

user-8effe4 15 September, 2020, 14:07:42

Okay, I'll try that and see. Thanks

user-7aaf5c 16 September, 2020, 12:32:57

Hello, I have an mp4 video file and I want to track the pupil in this video. I know I have to create info.player.json and eye0_timestamps.npy files, but I don't know how to create them. To obtain these two files, I took a recording with Pupil Capture and used the files from that recording. But when I gave these files to Pupil Player, I observed that most of the video did not contain images, and the part that did contain images played very quickly. When I press the redetect button, it also finishes very quickly. Probably my eye0_timestamps.npy file is not okay for this video. What would you suggest?

user-1ccccf 16 September, 2020, 13:21:59

@papr Hi, I find that in detector_3d.pyx of pupil_detectors there is a default focal_length value of 620.0. Do I need to modify this value to improve the performance for another IR camera?

papr 16 September, 2020, 13:23:54

@user-1ccccf which eye camera do you use? You can read the appropriate focal length defaults from here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L163-L201

papr 16 September, 2020, 13:24:33

@user-1ccccf And yes, the correct focal lengths will give you more accurate eye model positioning. This does not have a noticeable effect on gaze estimation though.

user-c5fb8b 16 September, 2020, 13:27:54

@user-1ccccf specifically: if you calibrate with a wrong focal length, then estimating gaze with the same (wrong) focal length will still produce correct results

papr 16 September, 2020, 13:28:43

@user-7aaf5c You can create timestamp files based on the videos internal timestamps. Part of the logic can be found here: https://gist.github.com/papr/c763ec6ac938b59fdf62f509886fcec5#file-detect-py-L26-L31

You just need to accumulate all timestamps into a numpy array and store it into the npy file
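
A minimal sketch of that idea using pyav, in the spirit of the linked gist: read the packet timestamps from the video and save them as a numpy array. Note that these timestamps are relative to the start of the video, not to Pupil time, and the file names are assumptions.

```python
import av
import numpy as np

container = av.open("eye0.mp4")  # the video you want to run detection on
stream = container.streams.video[0]

timestamps = []
for packet in container.demux(stream):
    if packet.pts is None:  # skip flush/empty packets
        continue
    # convert the packet's presentation timestamp to seconds via the stream time base
    timestamps.append(float(packet.pts * stream.time_base))

# sort in case packets are not in presentation order, then store as npy
np.save("eye0_timestamps.npy", np.sort(np.asarray(timestamps, dtype=np.float64)))
```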

user-1ccccf 16 September, 2020, 13:32:35

@papr @user-c5fb8b Thanks for the reply. I am still thinking about the question mentioned a few days ago. I wonder if it may be caused by different focal lengths, although the same focal length is used in the code on both Linux and Windows.

@user-c5fb8b Hello, I encountered some performance differences when I processed the same eye video on Ubuntu 18.04 and Windows 10. I installed pupil-detectors with pip (pip install pupil-detectors) on both Ubuntu 18.04 and Windows 10. The differences are as follows:

  1. The 3D detection shows performance differences. The first picture shows the video processed on Windows 10; we can see that frames 440-460 perform badly. The second picture shows the video processed on Ubuntu 18.04, where frames 440-460 perform normally. The eye video was recorded during the gaze calibration. The part of the video with bad performance on Windows 10 was recorded when the user looked at the bottom left and bottom right.
  2. The 2D detection shows the same performance. I processed the video with the 2D detection algorithm and noticed that the performance is equally good on Ubuntu 18.04 and Windows 10.

I also noticed that you said that "When building the package on your own, you can experience severe performance differences when not having setup your dependencies correctly", but I installed the prebuilt wheels. Why is there such a difference? And how can I get the same performance on Windows 10? Looking forward to your reply.

papr 16 September, 2020, 13:34:30

@user-1ccccf the performance comparison code should be using the same focal length unless configured differently explicitly.

user-c5fb8b 16 September, 2020, 13:35:05

@user-1ccccf I don't think this is the issue

user-1ccccf 16 September, 2020, 13:36:05

@user-c5fb8b So do you think the focal length doesn't affect the performance of eye model fitting?

user-c5fb8b 16 September, 2020, 13:38:09

it will affect the 3D coordinate of the eye model, but won't impact the model confidence or gaze mapping if used correctly

user-1ccccf 16 September, 2020, 13:46:21

@user-c5fb8b Ok, thank you. I will try to find out why it shows these differences between the two systems, or how to improve the performance on Windows.

user-d1dd9a 17 September, 2020, 11:14:30

Does the Pupil Mobile app also store the recordings locally on the Android device, so that I can export these files to Pupil Player afterwards?

user-d1dd9a 17 September, 2020, 11:18:41

Asking because, in our use case, it is a disadvantage not to have a mobile recording unit with the Pupil Core headset.

user-c5fb8b 17 September, 2020, 11:27:01

@user-d1dd9a exactly, this is actually the only recommended way (to use Pupil Mobile): Record locally on the android device with Pupil Mobile and then transfer the recording to the computer for further analysis/export with Pupil Player.

user-d1dd9a 17 September, 2020, 11:35:21

Thanks PFA for the answer. We are searching for an "alternative" to the SMI glasses that we are currently using... I know, it's hard to compare.

user-c5fb8b 17 September, 2020, 11:38:22

@user-d1dd9a From what you are saying I think that Pupil Mobile will help you to solve your problem of needing a mobile recording unit. We're happy to answer all further questions that help you to decide if Pupil Core + Pupil Mobile would suit your needs. Did you have a look at Pupil Invisible?

user-d1dd9a 17 September, 2020, 11:51:42

Yes, also very interesting. Convenient for the wearer and easy to use. In this case, it is similar to the SMI glasses. The disadvantage, however, is that I only get the projected 2D gaze position as data. We also need the ability to calculate depth, so we need 3D gaze data. If I'm not wrong, only the Pupil Core headset delivers that.

papr 17 September, 2020, 12:45:02

@user-d1dd9a I would recommend you to contact info@pupil-labs.com in this regard.

user-d1dd9a 17 September, 2020, 13:01:11

OK, I will do this, papr. Is it possible to contact them in German there?

user-29208e 17 September, 2020, 18:12:41

Could you guys please link me to a basic pipeline to measure saccades/fixations/blinks, etc? Many thanks!

user-908b50 17 September, 2020, 23:48:06

In this case I would try to reinstall ffmpeg and pyav and test if this improves the situation @papr Just following up on this!! I did not uninstall and reinstall either of those packages. It took me so long to work through problems after my PC's OS crash, and then through kinks with pyav and glfw, that I actually prefer the easy workaround. So, if I ignore the message, it disappears and things continue, or I just restart the terminal and start over. One disadvantage of not updating is that opening a recording from the terminal gets me an error, but I just drag and drop and it works fine. I have tried these tricks out and so far it's good. I have exported almost a quarter of my data so far. I didn't think it would affect the quality of my exports, but I saw another post about different results from exporting the same file on two different OSes and about possible performance errors. So, I just thought to test the installation and I got 10 errors. I think that's because I probably didn't clone pupil detectors separately... not that it should affect the data analysis process. Kindly advise!!
Edit: (I have Pupil version 2.0.363 (source) for data analysis and used bundled version 1.16 for data collection)

user-0f4533 17 September, 2020, 23:48:30

Hello! I am working on a project where I'm using Pupil Core to watch dance and using its data as choreographic material. Participants will be watching dance on a computer screen. I'm having trouble with the initial set-up. I was wondering which method of calibration makes more sense for the project, and where to validate my calibration as mentioned in the best practices. I also wanted to know if there were any guidelines around how the eye tracker should be positioned, because it could not track my participants' pupils unless they didn't move them. When it did work for one of the participants, the data was skewed: it was always looking to the left when you could see that my participant's eyes were looking somewhere else. Any recommendations will be greatly appreciated!

user-908b50 18 September, 2020, 00:11:38

@user-0f4533 I'm interested in seeing what the developers recommend. I'll tell you what we did. We started collection way before AprilTags were introduced and couldn't change the protocol midway. Mind you, the software and documentation are so much improved now!! Way, way better. In our manual calibration, we made sure to keep an eye on confidence. Personally, we agreed to keep the confidence threshold at 99% (this can be set in the settings in Capture), so if that did not work during calibration, we fixed it. After successful calibration, we validated it by asking the participants to look at different points in the room in their line of vision without moving their head. We made sure the eye tracker (both world camera and eye cameras) did what it was supposed to do. In case of problems, we literally just restarted the program and started over. If that didn't work, we restarted the PC too. Basic things that we could do with the participant present. Most of the time, this worked. Hope it helps!! Good luck

user-908b50 18 September, 2020, 00:14:52

@user-0f4533 also, there are certain things in the environment that helped. Keeping lights on worked better for the tracker, especially at that time. It did not work in the dark or even with low lights on (from what I had been told when I joined the team after the set-up etc). We also asked the participants to not wear heavy mascara or fake mascara. And, to not move their heads.

user-908b50 18 September, 2020, 00:53:34

Re: these are my errors.

PytestResults_Sep17

wrp 18 September, 2020, 02:55:49

Hi @user-29208e - I think we also communicated via email. Pupil Core software includes fixation and blink classifier plugins. Please see docs: - Fixation Detector: https://docs.pupil-labs.com/core/terminology/#fixations and https://docs.pupil-labs.com/core/software/pupil-capture/#fixation-detector - Blink Classifier: https://docs.pupil-labs.com/core/software/pupil-capture/#blink-detection

user-00fa16 18 September, 2020, 08:44:02

Hello everyone! I have 3 questions: 1) Do you have any standard settings for infants, such as marker size in calibration, sample duration, fixation, blink, and sensor settings? 2) How is 'gaze' defined?

user-00fa16 18 September, 2020, 09:22:16

And 3) how do I open the .npy and .pldata data formats?

user-2ff80a 18 September, 2020, 12:08:18

Hello, I have a problem regarding the pupil detection part. I have followed the steps indicated in the Pupil docs, but the green circle around the eyeball doesn't stay still when I perform the experiment, like it does in the example video in the Pupil docs. Is there any special trick to follow here?

user-664974 18 September, 2020, 13:18:25

hi all, I am wondering if there is a way to batch export the pupil lab recordings (like a script) instead of opening the pupil player and manually export the data. Thanks in advance

user-908b50 18 September, 2020, 20:59:13

@user-664974 https://github.com/tombullock/batchExportPupilLabs

user-908b50 18 September, 2020, 21:27:11

If you have version 2.x, there is a module named batch_exporter. I am still testing it out. It's slightly different from what is on GitHub.

user-908b50 18 September, 2020, 21:42:54

Hi, does anyone know how to use the batch_exporter? I got an error saying there isn't any exporter module, so I changed that line to import raw_data_exporter from a similarly named module. But now I am getting an import error for is_pupil_rec_dir from player_methods. I looked at player_methods and it doesn't have this function. Please help!

user-7daa32 20 September, 2020, 18:38:14

Hello,

Can I get the computer or software to calculate the movement time (saccade), as in the time to move from one AOI to another AOI?

papr 21 September, 2020, 08:52:27

@user-29208e You can find a general documentation here: https://docs.pupil-labs.com/core/

You can turn on specific features, e.g. fixation detection, by enabling the corresponding plugins in the Plugin Manager menu. Please be aware that we are currently not offering saccade detection.

papr 21 September, 2020, 08:58:36

@user-0f4533 Good pupil detection is the foundation for all other processing steps that come after, e.g. gaze estimation. For this, it is often necessary to physically adjust the camera positions. Read more about it here: https://docs.pupil-labs.com/core/#_3-check-pupil-detection Should you continue having issues with the gaze data being skewed in one direction even though pupil detection works well, you can use the "Manual Offset Correction" feature in Pupil Player to correct it.

papr 21 September, 2020, 09:02:43

@user-2ff80a You can freeze the eye model once it is fit well. Please be aware that this makes the model susceptible to headset slippage. Should you detect headset slippage, we recommend unfreezing the model, refit it by rolling the eyes and freezing it again. You can freeze the model via the network api or the 3d pupil detector menu.

papr 21 September, 2020, 09:15:08

@user-00fa16 1) There are no special infant settings. Nonetheless, you should check the usual subject-specific parameters and make sure they work for the subject, e.g. pupil min and max values. 2) gaze is the output of the gaze estimation. Location of the subject's gaze within the world coordinate system. Specifically, it is the result from mapping pupil data (eye camera coordinates) to scene/world camera coordinates. You can read more about terminology here: https://docs.pupil-labs.com/core/terminology/ 3) npy files are numpy files and contain usually timestamps in the context of Pupil Core. They can be loaded with this python function: https://numpy.org/doc/stable/reference/generated/numpy.load.html pldata files are a Pupil Core specific files. You can read about the data structure here: https://docs.pupil-labs.com/developer/core/recording-format/#pldata-files You can use this function to read the files: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L139-L155 Both type of files are an intermediate data format and we usually do not recommend reading them directly.
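
For reference, a minimal sketch of reading both file types, assuming the Pupil source's shared_modules directory is on the Python path (for file_methods) and a recording folder containing gaze data:

```python
import numpy as np
from file_methods import load_pldata_file  # ships with the Pupil source (shared_modules)

# npy: plain numpy array of timestamps in seconds (Pupil time)
gaze_ts = np.load("recording/gaze_timestamps.npy")

# pldata: msgpack-serialized datum dicts plus their topics and timestamps
gaze = load_pldata_file("recording", "gaze")
print(len(gaze.data), gaze.topics[0], gaze.data[0]["confidence"])
```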

papr 21 September, 2020, 09:44:10

@user-7daa32 As mentioned above, we do not support detecting saccades. Please be aware that saccades are a specific type of eye movement that does not necessarily correlate to eye movement between AOIs.

papr 21 September, 2020, 09:53:01

@user-908b50 Please see my notes below: - source version - The provided source version (2.0.363) looks like it is not based on the most recent tags. Please use git fetch --tags to fetch the most recent tags. If you want to be more accurate, you can provide the git hash of your current commit. Use git show to display information about your current commit. - batch exporter - The batch exporter has been disabled due to some technical details that were introduced with post-hoc detections. Unfortunately, there is no easy way to reenable the batch exporter functionality without adding a lot of complexity to the software. - errors - Thank you for providing these detailed errors. These are related to your setup. The syntax errors are due to using an old version of Python. Please be aware that we require Python 3.6.1 or higher. The other error refers to an incomplete pyglui installation.

user-8effe4 21 September, 2020, 13:49:08

Hi, we are having a problem with Pupil Capture. For some reason, the screen looks big (zoomed) in the camera. Is there a way to make it look smaller (without having to move farther away from it)?

papr 21 September, 2020, 13:56:02

@user-8effe4 Have you switched the scene camera lens? It is possible that you changed from the wide angle to the narrow angle lens.

user-c563fc 21 September, 2020, 16:09:59

Hey everyone. Can someone tell me how I can export blink plugin data to CSV? The regular export does not include the activity data.

user-7daa32 21 September, 2020, 16:28:35

@user-7daa32 As mentioned above, we do not support detecting saccades. Please be aware that saccades are a specific type of eye movement that does not necessarily correlate to eye movement between AOIs. @papr Not clear. Did you mean that if I calculate the duration it takes a participant to move from one AOI to another AOI, it won't correlate with eye movement between AOIs? The movement in between AOIs could have one or more gaze points (fixations outside the AOI) outside the surfaces. That's what I meant when I said saccades.

papr 21 September, 2020, 16:31:54

@user-7daa32 Saccades are a specific type of eye movement. Movements from one AOI to an other can include saccades but not all movements from one AOI to the other are necessarily saccades. Is this clearer?

papr 21 September, 2020, 16:32:18

The movement in between AOIs could have one or more gaze points(fixation outside the AOI) outside surfaces. Absolutely correct

papr 21 September, 2020, 16:34:03

@user-c563fc Are you looking for the raw blink activity data? This data is only used internally to create discrete blink events

user-c563fc 21 September, 2020, 16:38:20

@user-c563fc Are you looking for the raw blink activity data? This data is only used internally to create discrete blink events @papr Yeah, I need raw data. I can see the graph in Overview.

Chat image

user-c563fc 21 September, 2020, 16:39:19

@papr I need the data which is represented in the photo above in green color = activity

papr 21 September, 2020, 16:39:34

@user-c563fc yeah, that is not being exported. If you have some Python experience, you can create a user plugin that subclasses the offline blink detector and extends it to export the raw data

papr 21 September, 2020, 16:39:58

@user-c563fc ~~Actually, the green data is the 2d pupil confidence.~~

papr 21 September, 2020, 16:41:48

@user-c563fc https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L190 this is the class to subclass.

This is the function to extend https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L263

self.filter_response contains the green data
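
A minimal sketch of such a user plugin (dropped into pupil_player_settings/plugins/). Only self.filter_response is confirmed above; the added method name and the CSV layout are illustrative, and hooking it into Player's export cycle would mean extending the export function linked above.

```python
import csv
import os

from blink_detection import Offline_Blink_Detection  # class referenced above


class Raw_Blink_Activity_Exporter(Offline_Blink_Detection):
    """Offline blink detector that can additionally dump the raw activity signal."""

    def export_raw_activity(self, export_dir):
        # self.filter_response holds the full filter response (the green curve)
        path = os.path.join(export_dir, "blink_activity_raw.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["sample_index", "filter_response"])
            writer.writerows(enumerate(self.filter_response))
```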

user-c563fc 21 September, 2020, 16:44:58

If filter_response is the green data, then it's already included in the exported CSV: id, start_timestamp, duration, end_timestamp, start_frame_index, index, end_frame_index, confidence, filter_response, base_data

user-c563fc 21 September, 2020, 16:49:03

@papr BTW, what is this base_data?

papr 21 September, 2020, 16:50:01

@user-c563fc The exported filter_response is not the raw signal but the response in the range of the specific blink

papr 21 September, 2020, 16:51:01

Basically a slice. Base data includes the original pupil data from which the filter response is generated https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L405-L406

user-908b50 21 September, 2020, 18:16:44

@papr Thank you for the detailed response.
- I am using Python 3.8.5. I am not sure why the pyglui installation was incomplete. I can re-install it again.
- This is what git show gets me:

  Merge: b3dda8cb aec6a18f
  Author: Fiza [email removed]
  Date: Thu Aug 6 14:56:35 2020 -0700

      Merge pull request #1 from pupil-labs/master

      Updating my fork with changes

user-908b50 21 September, 2020, 18:25:04

@papr based my git commit, what do you suggest? Do I need to update my fork?

papr 21 September, 2020, 18:28:11

@user-908b50 Keeping your fork up-to-date is generally recommended.

user-8effe4 21 September, 2020, 20:08:38

@user-8effe4 Have you switched the scene camera lens? It is possible that you changed from the wide angle to the narrow angle lens. @papr No, I haven't modified anything in the hardware, if that is what you meant. On the other hand, I think one of the teammates played with the resolutions without realizing it, but I don't know if that is a thing in Pupil Labs.

papr 21 September, 2020, 20:09:55

@user-8effe4 The 1080p resolution has a much wider field of view than the other resolutions (Default is 720p). Maybe you were referring to that?

user-8effe4 21 September, 2020, 20:11:18

@papr Yeah, it would be nice if I could play with that. I want my screen to take up only a small part of the video to make sure I won't miss any markers if the subject moves their head.

papr 21 September, 2020, 20:16:31

@user-8effe4 be aware that the high distortion at 1080p might result in marker detection issues.

papr 21 September, 2020, 20:17:00

@user-8effe4 Also, if the subject is turning their head that far, it is likely the subject was not looking at the surface anyway.

user-8effe4 21 September, 2020, 20:22:54

@papr The subject's task is to look at the surface. However, since they are only about 40 cm from the screen, they would miss the markers by just slightly moving their head. For that reason, it is a must to make it a bit smaller, if there is a way.

papr 21 September, 2020, 20:25:00

@user-8effe4 Not all markers need to be visible at all times for the surface tracking to work. Generally: The more markers are visible, the more stable the result will be. How large is the stimulus?

user-8effe4 21 September, 2020, 20:31:26

it takes the entire screen and the markers are surrounding it

papr 21 September, 2020, 20:35:05

@user-8effe4 ok, then my question would be: how large is the screen. I am trying to understand the relations to get a feeling if your concerns are justified. Did you run a pre-test? Maybe testing 2-3 subjects on a shorter version of the experiment can give you enough insight to make decisions in this regard. Also, do you expect the subject to look somewhere else? Why would they turn their head while looking at the stimulus? (might be part of experiment procedure?)

papr 21 September, 2020, 20:40:58

I know that a 27 inch monitor fits the 720p resolution (wide angle lens) when I have my head centred. People tend to have a centre bias, meaning when they look to the right, they will move their head to the right as well. This would likely move parts of the 27 inch monitor out of the camera's field of view. A dense use of markers around the screen should counteract any issues regarding surface tracking.

user-8effe4 21 September, 2020, 20:44:24

Well, we are running a bells test where the subject selects shapes on a touch screen. The resolution as it is now makes the screen so big that a slight head movement makes part of the screen disappear from the camera, and we ended up losing data for lots of subjects this way. My question is how to change the resolution to make the screen look smaller.

papr 21 September, 2020, 20:45:55

@user-8effe4 Understood. In the scene/world window, go to the "Video Source" (might be named differently in earlier versions) menu on the right, and change the resolution to 1920 x 1080.

papr 21 September, 2020, 20:49:00

@user-8effe4 Out of curiosity, how many markers around the screen have you been using for the tests until now?

user-8effe4 21 September, 2020, 20:51:16

Oh thanks. I think 12 so far.

papr 21 September, 2020, 20:52:12

@user-8effe4 12 sounds reasonable

user-8effe4 21 September, 2020, 20:56:54

@papr yeah

user-430fc1 22 September, 2020, 10:16:02

Hello, would it be possible to get Pupil Labs' detection algorithms working with a raspberry pi and the NoIR camera?

papr 22 September, 2020, 12:08:14

@user-430fc1 I think people have tried. But there are two possible issues: You need to compile a lot from scratch as raspberry pis use ARM. The other issue is that the CPU power of the raspberry might not be sufficient to run the pupil detection at a high frame rate

user-430fc1 22 September, 2020, 12:19:40

Ah ok, thanks. If it was just for pupillometry do you think it would be easier to make recordings and perform post-hoc analysis?

papr 22 September, 2020, 12:21:02

@user-430fc1 It is not necessarily easier but an alternative approach if you need data with maximum sampling rate (and do not require it in real time)
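
For the post-hoc route, the standalone pupil-detectors package can be run over a recorded eye video frame by frame. A minimal sketch (the video file name is an assumption; the 2d detector reports the diameter in pixels):

```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()
cap = cv2.VideoCapture("eye0.mp4")  # previously recorded IR eye video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = detector.detect(gray)
    print(result["confidence"], result["diameter"])
```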

user-4648c3 22 September, 2020, 15:56:42

Bit late to the party but could you (probably @wrp) explain why hot mirrors would give fresnel lens distortions? I only found something about French scientists who made a very compact IR lens using the old Fresnel trick. https://discord.com/channels/285728493612957698/285728493612957698/352475442659328000

user-3cff0d 23 September, 2020, 01:13:09

Hello everyone! I've spent the last while setting Pupil up on Windows, and I've got most of the installation process completed. There is one problem that I'm running into: I can't seem to install the pupil-detectors wheel via pip install pupil-detectors. pip wheel pupil-detectors also fails to install with the same error:

src/pupil_detectors/detector_2d/detector_2d.cpp(689): fatal error C1083: Cannot open include file: 'opencv2/core.hpp': No such file or directory
  c:\users\kbark\appdata\local\programs\python\python38-32\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'zip_save'
    warnings.warn(msg)
  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2
  ----------------------------------------
  ERROR: Failed building wheel for pupil-detectors

If anyone knows what I might be able to do to fix this, I would really appreciate it!

user-3cff0d 23 September, 2020, 01:15:29

Also, I should mention that on the windows dependencies list at https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md, the link to the ffmpeg shared binaries no longer works as the site http://ffmpeg.zeranoe.com/ was taken down a few weeks ago. I had to compile my own binaries manually

user-a98526 23 September, 2020, 01:41:12

Hi @papr, is there a way I can calibrate the Pupil Core on a blackboard?

user-c5fb8b 23 September, 2020, 07:34:18

Hi @user-3cff0d, which Python version are you using? Pupil only supports version 3.6 on Windows currently. It seems your Python tries to build pupil-detectors when you install it, which requires a lot of other dependencies set up correctly. Because of this, we build wheels for Python 3.6 on Windows for pupil-detectors, which should get installed automatically when you run pip install pupil-detectors from Python 3.6 on Windows.

user-c5fb8b 23 September, 2020, 07:40:25

Hi @user-a98526, I hope I understand your question correctly. You can also calibrate Pupil Core without a monitor, by using printed markers that you present to the subject. You will have to use the Single Marker Calibration Choreography in Physical Marker mode. Please refer to these sections in the documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker To get a better idea of how to use physical markers, I also recommend having a look at the following video. It actually showcases how to use post-hoc analysis in Pupil Player, but the video that is analysed shows usage of physical printed markers, which are presented to the subject by another person: https://www.youtube.com/watch?v=_Jnxi1OMMTc

user-3cff0d 23 September, 2020, 16:50:05

@user-c5fb8b Thanks, that fixed it!

user-1ce4e3 23 September, 2020, 18:54:14

Is there a plugin to validate calibration? I have printed the calibration markers and am looking to validate the mapping using different marker locations after the calibration routine

papr 23 September, 2020, 18:55:26

@user-1ce4e3 Yes, this functionality is built-in. Just hit the T button on the left to start the validation

user-1ce4e3 23 September, 2020, 18:56:48

thank you!!

user-a98526 24 September, 2020, 02:16:44

@user-c5fb8b Thank you for your patient help, it is useful.

user-4648c3 24 September, 2020, 09:18:17

Could you give a hint as to why hot mirrors would give Fresnel lens distortions? https://discord.com/channels/285728493612957698/285728493612957698/352475442659328000

papr 24 September, 2020, 09:22:03

@user-4648c3 I think @wrp was assuming that the camera/hot mirror setup would be placed behind the headset's fresnel lenses. These lenses would cause fresnel lens distortion in the eye camera image, not the hot mirror.

user-4648c3 24 September, 2020, 09:23:30

Ah I got it, the fresnel lenses in a VR device. Thanks!!

user-48dffb 24 September, 2020, 17:27:11

@papr I had a question regarding the initial utc-0 timestamp of the pupil time in the info.player.json file. Is the initial time set by the wall clock time (e.g., System.currentTimeMillis()) or the internal monotonic time (System.elapsedRealTime())?

papr 24 September, 2020, 17:44:25

@user-48dffb all timestamps are in seconds. The system start time has Unix epoch. The Pupil start time has an arbitrary epoch.
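
For a Pupil Core recording in the Player format, this means Pupil timestamps can be converted to Unix time using the two start times stored in info.player.json. A minimal sketch (the recording path is an assumption):

```python
import json
import numpy as np

with open("recording/info.player.json") as f:
    info = json.load(f)

# offset between the Unix-epoch system clock and the arbitrary-epoch Pupil clock
offset = info["start_time_system_s"] - info["start_time_synced_s"]

pupil_ts = np.load("recording/world_timestamps.npy")
unix_ts = pupil_ts + offset  # seconds since the Unix epoch
```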

user-48dffb 24 September, 2020, 17:50:53

@papr I should give more context. I'm actually working with @user-c6717a , so we're finally testing the synchronization of the pupil recordings to other sensors that are synchronized over NTP.

Our NTP mechanism synchronizes Android sensors by calculating the drift of the internal Android monotonic clock (elapsedRealTime). I understand that the start time is arbitrary relative to pupil timestamps, but that little detail is important to us.

Concretely, we developed a side application running in parallel with the Pupil app that is recording the drift of the Android internal monotonic clock (elapsedRealTime) relative to NTP. If you are setting the initial pupil start time with elapsedRealTime, that would be very convenient for us. But if it is just the arbitrary wallclock time (e.g., System.currentTimeMillis()), it will be slightly more non-deterministic.

papr 24 September, 2020, 17:59:01

@user-48dffb Are you using Pupil Mobile or Pupil Capture to record recordings?

user-48dffb 24 September, 2020, 17:59:08

@papr Pupil Mobile

papr 24 September, 2020, 18:02:12

@user-48dffb Please be aware that we no longer maintain this app. We still offer it for legacy reasons, though.

I will look up the exact clock to determine start times in a second.

user-48dffb 24 September, 2020, 18:05:30

@papr Sorry, I realize that we had started this thread when we were using Pupil Mobile. We have since shifted to the Pupil Invisible, and I figured this had the same mechanism.

papr 24 September, 2020, 18:08:04

@user-48dffb

In the Android companion app this is implemented by sampling System.currentTimeMillis() * 1e6 in UTC once at start of capture and adding differences using a monotonic ns clock

Source: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=171999674

papr 24 September, 2020, 18:08:38

Also:

Time is counted in nanoseconds
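
In other words, the wall clock is sampled once and all later timestamps are derived from a monotonic clock, so they cannot jump if the system clock is adjusted. A Python sketch of that scheme (the app itself uses the Android clock APIs):

```python
import time

start_wall_ns = time.time_ns()       # UTC wall clock, sampled once at start
start_mono_ns = time.monotonic_ns()  # arbitrary-epoch monotonic clock

def capture_timestamp_ns():
    # start time plus monotonic differences, in nanoseconds
    return start_wall_ns + (time.monotonic_ns() - start_mono_ns)
```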

user-48dffb 24 September, 2020, 18:09:12

awesome! Thanks for the info!

user-8effe4 25 September, 2020, 12:38:18

Hello,
I have a simple question this time, but it requires lots of writing: We are conducting an experiment where the user has to circle some shapes on the surface while wearing the Pupil Core headset.
- The problem is that the fixation data we are getting is always a bit shifted (I think this is due to the fish-eye distortion, which gets even worse once the subjects start moving their heads).
- The only way to plot the fixation at the correct position would be to take a screenshot of the surface video at the exact moment of the fixation (so it is not a handy solution at all).
- To solve this problem, we tried the camera intrinsics estimation ( https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation ), but the results still show the same problem.
- We also found a file named surface_positions_unnamed.csv that seemed interesting and we wanted to try it, but the more we think about it, the more illogical it seems (we wanted to use the homography matrices).
- Now we are at the point where we have run out of solutions. What do you think we should do?

user-8b7bfd 25 September, 2020, 15:19:07

Does anyone know if it is possible to convert the x,y coordinates for the pupil positions and gaze to visual angle if I didn't precisely measure the distance from my participants' eyes to an object in view?

papr 25 September, 2020, 15:20:38

@user-8b7bfd You can use the scene camera intrinsics to unproject the x/y coordinates to normalized viewing directions within the scene camera coordinate system

papr 25 September, 2020, 15:21:38

@user-8b7bfd Pupil uses the same approach to calculate the dispersion between gaze angles in visual angles for the fixation detection https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L140-L148
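
A minimal sketch of that unprojection, using OpenCV directly. The intrinsics below are placeholders; in practice, read them from the recording's world.intrinsics file or the camera_models defaults, and convert exported normalized gaze coordinates to pixels first (x_norm * width, (1 - y_norm) * height).

```python
import cv2
import numpy as np

K = np.array([[794.0, 0.0, 640.0],   # placeholder camera matrix
              [0.0, 794.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # placeholder distortion coefficients

def unproject(points_px):
    """Map scene-camera pixel coordinates to normalized 3D viewing directions."""
    pts = np.asarray(points_px, dtype=np.float64).reshape(-1, 1, 2)
    undist = cv2.undistortPoints(pts, K, dist).reshape(-1, 2)
    dirs = np.column_stack([undist, np.ones(len(undist))])
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

# visual angle between two gaze points given in pixels
a, b = unproject([(600, 350), (700, 400)])
angle_deg = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
print(f"{angle_deg:.2f} deg")
```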

user-8b7bfd 25 September, 2020, 15:27:11

@papr Ok, thanks. I can't say I totally understand now, but this gives me a place to start. Is this something I can do within pupil player or would I write code to process exported data?

papr 25 September, 2020, 15:30:55

@user-8b7bfd You would have to write code to process the exported data. You should be able to reuse the logic from the linked source code lines above.

user-8b7bfd 25 September, 2020, 15:31:43

@papr oh ok, then I think I can do that. That is much easier too since I already spent many hours exporting the segments I need. Thanks!

user-54b8f4 25 September, 2020, 22:10:30

Hi. I am having an issue with Pupil Player (2.4.0) crashing when I drop recordings collected with Pupil Capture into it. The window displays "Updating recording format" and crashes. This is happening for only a few files. I have attached the player.log file.

player_log_pupil_crash.txt

papr 25 September, 2020, 22:38:58

@user-54b8f4 Hey 👋 would you mind sharing the info.csv file of your recording?

user-54b8f4 25 September, 2020, 23:48:18

Thank you for the quick reply! Here is the info.csv from one of the folders. Let me know if you would like to see the other two.

info.csv

user-908b50 26 September, 2020, 00:18:07

Also, how does the minimum data confidence work within Pupil Player? How is it computed and used, for example, for fixation detection and location, and for saccades?

papr 26 September, 2020, 07:16:46

@user-54b8f4 The file is incomplete. Looks to me like the recording was interrupted, not stopped properly, e.g. recording software crashed. If you want you can share the recording with [email removed] and we can have a look to check if we can recover the recording.

papr 26 September, 2020, 07:19:26

@user-908b50 "Minimum data confidence" is a user setting with 0.6 as its default value. Confidence mostly generated by the pupil detection algorithm and then passed along to data calculated based on it, e.g. gaze data inherits confidence from its base pupil data. More high level detection algorithms, e.g. fixation detection, use the minimum data confidence to discard low-confidence data before processing it.

papr 26 September, 2020, 07:23:53

@user-8effe4 Check out our tutorial notebook on how to use the surface_positions csv file to visualize surface positions in pixel space: https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb

Regarding the "data shift" issue, I will need more information. Is the fixation data shifted in the scene image, on the surface, or both? Is the gaze shifted as well? Feel free to share an example recording with [email removed] such that we can have a closer look.

user-908b50 26 September, 2020, 08:53:03

Thanks, papr! I am using a higher confidence limit of 0.80, so I wondered if there was any documentation to get a sense of how liberal/stringent the processing is. Would you recommend a higher value? Also, is there a way to use the newer surface tracker plugin with data collected using the older Pupil Capture version that used manual markers instead of AprilTags? Could you point me towards relevant resources?

user-908b50 26 September, 2020, 12:03:50

nvm about confidence interval calculation. I found information on the algorithm in the white paper. I think I figured out the surface tracker's next steps. I'm trying out the use of inverted legacy markers. Let's see if that works out. fingers crossed! Note: we collected data using v1.11

user-20b83c 28 September, 2020, 02:20:57

Hello, where can I get the catalogue for the Pupil Core and the HTC Vive add-on?

user-a98526 28 September, 2020, 04:25:13

@papr Hello, I am going to ask my teacher to purchase Pupil Invisible. I would like to ask about the accuracy of Pupil Invisible compared to Pupil Core, because in some experiments the accuracy of the Pupil Core I used could not meet the demand. In addition, I would also like to ask about the complexity of the calibration process, because it is very difficult for patients to perform repeated calibrations in patient experiments. Thank you!

user-52b71c 28 September, 2020, 10:29:31

Hello, I am working with @user-8effe4 trying to figure out the reliability of our data. Our participants are performing a task on a screen (we've placed markers along the screen borders) but often move their head, and therefore we were unsure whether the gaze position coordinates shifted in time when our surface shifts due to this motion. We've come to understand that the gaze coordinates on the surface are correct; however, if we try to plot these on a static image (what we have projected on the screen), the correspondence is not accurate. I.e. the gaze positions within the surface coordinates are OK, but the features of the image within the surface move during a trial due to changes in the camera's point of view (see screenshots). Do you have any suggestion on how we could correct for the shift of the visual elements within the surface? We're thinking of writing a script to extract the features from each frame and match them to our input image, but we thought we'd ask first in case there's a simpler method already available. Also because this would mean having to create a new video with only the surface (like the debug window in the surface tracker), since we wouldn't be able to use the world camera video, and this is an extra challenge. Unless there's also a way to export the surface video? Thank you!

Chat image

user-52b71c 28 September, 2020, 10:29:34

Chat image

user-430fc1 29 September, 2020, 08:58:27

@user-a98526 Just a regular user here... In case you have not seen it: https://discord.com/channels/285728493612957698/733230031228370956/753204772928094280

user-b292f7 29 September, 2020, 09:15:14

Hi, can you tell me all the variables that the Pupil software can give me? (fixation, gaze... what more?)

papr 29 September, 2020, 09:18:03

@user-b292f7 Please have a look at the export section of our Pupil Player software: https://docs.pupil-labs.com/core/software/pupil-player/#export

papr 29 September, 2020, 09:26:07

@user-52b71c

We're thinking of writing a script to extract the features from each frame and match them to our input image, but we thought we'd ask first in case there's a simpler method already available

Unfortunately, this would be the way to go. Have a look at our tutorials though. You can extract the video frames from the video using FFMPEG [1] and find the surface locations within each frame [2]. This is how the debug window draws the surface [3].

[1] https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb [2] https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb [3] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/gui.py#L617-L624

papr 29 September, 2020, 09:26:52

@user-20b83c If you are looking for the technical specifications, please have a look at our website: https://pupil-labs.com/products/

papr 29 September, 2020, 09:28:25

@user-a98526 I agree with @user-430fc1 that you should have a look at the linked paper. It explains the accuracy of Pupil Invisible across its FOV in a lot of detail.

user-b292f7 29 September, 2020, 09:57:52

Thank you! So there is no information about saccades?

papr 29 September, 2020, 09:58:33

@user-b292f7 No, unfortunately we do not currently support saccade detection.

user-b292f7 29 September, 2020, 10:06:01

Thank you. And if I did 2d detection and I want data about the diameter of the pupil, do I have to do 3d detection and export a new file?

papr 29 September, 2020, 10:24:23

@user-b292f7 The 2d pupil detection provides the diameter in pixels, the 3d pupil detection provides the diameter in millimeters. The usual workflow is to make a recording using Pupil Capture and afterward to process/export it using Pupil Player. Does this answer your questions?
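
In the Player export, both values end up in pupil_positions.csv. A minimal sketch of comparing them (paths and column names as in recent Player exports; the exact method labels depend on the version):

```python
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
df_3d = df[df["method"].str.contains("3d")]           # rows produced by the 3d detector
print(df_3d[["diameter", "diameter_3d"]].describe())  # diameter: px, diameter_3d: mm
```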

user-b292f7 29 September, 2020, 10:53:18

I have the recording and export, but I did it in 2d, and now I want data about the diameter in millimeters, so I wonder whether to start the detection again in 3d now...

papr 29 September, 2020, 11:59:58

@user-b292f7 if you recorded the eye videos, you can run the offline pupil detection in 3d using Player

user-9d8c3a 29 September, 2020, 14:37:38

Hi, could you tell me about payment method change? (I want to know how can I cancel the last payment and recharge using different card)

user-c5fb8b 29 September, 2020, 14:39:54

Hi @user-9d8c3a, please contact sales@pupil-labs.com in this regard!

user-9d8c3a 29 September, 2020, 14:41:12

I already contacted that email. I hope to find out whether it is possible or not T_T...

user-9d8c3a 29 September, 2020, 14:42:23

Sorry for bothering you, I think this channel is about development issues, right?

user-9d8c3a 29 September, 2020, 14:44:43

Thanks for pointing out the mail address. Last question: within how many days can I expect an answer?

papr 29 September, 2020, 15:22:14

@user-9d8c3a We generally try to respond to all questions within less than 3 business days.

user-8effe4 29 September, 2020, 15:30:43

@user-52b71c Unfortunately, this would be the way to go. Have a look at our tutorials though. You can extract the video frames from the video using FFMPEG [1] and find the surface locations within each frame [2]. This is how debug window draws the surface [3].

[1] https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb [2] https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb [3] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/gui.py#L617-L624

@papr Thank you for the links, [1] and [2] worked so far. I am interested in getting the surface debugger pictures, but point [3] seemed a bit complex. Is there a function in the Pupil Labs API to do that? Our main goal is to apply SIFT/SURF on the two pictures (the static one and the Pupil Labs debugger one) and, using these two feature matrices, apply an affine alignment method to map the surface coordinates to more accurate ones.

papr 29 September, 2020, 15:36:26

@user-8effe4 Unfortunately, there is not such an api.

user-9d8c3a 29 September, 2020, 22:16:06

@papr Thank you for your help. Have a good day!

user-8effe4 30 September, 2020, 08:14:29

@user-8effe4 Unfortunately, there is not such an api. @papr Oh okay. Thank you

user-c563fc 30 September, 2020, 17:58:58

I am trying to run pupil player from source code and I am getting this error

user-c563fc 30 September, 2020, 17:58:59

ModuleNotFoundError: No module named 'pyglui'

user-c563fc 30 September, 2020, 17:59:11

Can someone help?

user-c563fc 30 September, 2020, 18:02:41

I am using windows 10

user-c563fc 30 September, 2020, 18:06:08

with python 3.8

user-b292f7 30 September, 2020, 20:03:12

Hi, do you know why the dispersion output is just between 1 and 1.6, or is that specific to my data?

End of September archive