core


user-c5fb8b 01 November, 2019, 12:04:16

@user-0eef61 Hey, moving this conversation from vr-ar to here. Can you please paste the exact error message you get when running the jupyter notebook?

user-2be752 01 November, 2019, 17:18:13

Hi, I've been using the Pupil glasses for a year now and they have always worked great. Lately I've been using them from Ubuntu, and all of a sudden, when I try to plug them into a Mac or a Windows machine, it will not open the cameras; it reports that they are in 'ghost mode'. They still work fine in Ubuntu though... any thoughts?

user-908b50 01 November, 2019, 21:00:07

@papr I've attached a few pictures of what the setup is. Basically, we track eye movement as they play slot machines, and we calibrate it manually.

Chat image

user-908b50 01 November, 2019, 21:00:54

That's what we have started doing this past week because of issues with calibration. @papr

Chat image

user-908b50 01 November, 2019, 21:01:19

And we use this to calibrate manually! @papr

Chat image

user-908b50 01 November, 2019, 21:02:54

This was the setup previously. The calibration had gotten worse. Our new setup (see picture 2) is being tested on the go. It isn't 100% (it won't work when someone has mascara on), but it is better than before. @papr

Chat image

user-908b50 01 November, 2019, 22:31:03

@papr also we don't track offline data as we are not interested in that kind of analysis for now. I'm more interested in the calibration not being accurate before I even begin the recording. Do you recommend updating the software?

user-908b50 01 November, 2019, 22:39:29

Also, is there a way to log history for each person and have that saved into the individual folders, with the relevant warning messages included? The capture settings just update the same file every single time, though it does include real-time warning messages for later perusal.

user-0eef61 03 November, 2019, 18:55:45

Hi, I am trying to run the fixation detector code from: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb I ran the previous code cells with my data files and all worked correctly, but when I try the fixation detector it gives this error:

%time fixations = list(detect_fixations(pupil_pd_frameL1VisualControl))
^
IndentationError: unindent does not match any outer indentation level

user-0eef61 03 November, 2019, 18:56:27

pupil_pd_frameL1VisualControl is the name I have given to my pupil_positions.csv file. It is just like pupil_pd_frame in the fixation detector code.

user-0eef61 03 November, 2019, 18:57:59

I am trying to find out how I can get the number of fixations in my recordings from pupil_positions.csv and how I can plot that number in a bar graph. I would also like to get the number of microsaccades, if that is possible.

user-c5fb8b 04 November, 2019, 08:20:08

@user-0eef61 It seems you are mixing tabs and spaces in the indentation of your code. Python cannot deal with a mix of tabs and spaces and will give you this error instead. The Python style guidelines (PEP 8) recommend 4 spaces per indentation level, which is also what you have in the notebook code on GitHub. I guess the code editor you are using is configured to use tabs instead.
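
The mixed-indentation problem can be fixed mechanically. A minimal sketch (a hypothetical helper, not part of the notebook) that expands leading tabs to 4 spaces so the parser sees one consistent style:

```python
# Hypothetical helper: expand leading tabs to spaces so Python's parser
# sees one consistent indentation style.
def normalize_indentation(source: str, tab_width: int = 4) -> str:
    fixed = []
    for line in source.splitlines():
        body = line.lstrip("\t ")
        indent = line[: len(line) - len(body)]
        # Replace each tab in the leading indent with `tab_width` spaces.
        fixed.append(indent.replace("\t", " " * tab_width) + body)
    return "\n".join(fixed)

# A tab-indented snippet becomes space-indented:
print(normalize_indentation("def f():\n\tx = 1\n\treturn x"))
```

Most editors (including Spyder) can also do this via their built-in "convert tabs to spaces" command.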

user-c5fb8b 04 November, 2019, 10:17:37

@user-0eef61 We unfortunately don't have any code examples on how to compute microsaccades.

user-af938a 04 November, 2019, 11:45:10

Hi all. Where can I find the minimum requirements for Pupil Core, please?

papr 04 November, 2019, 12:27:35

@user-af938a Recommended computer specs: the key specs are CPU and RAM. We suggest at least an Intel i5 CPU (i7 preferred) with a minimum of 8 GB of RAM (16 GB is better if possible). We support macOS (minimum v10.12), Linux (minimum Ubuntu 16.04 LTS), and Windows 10.

user-0eef61 04 November, 2019, 12:31:57

@user-c5fb8b Thanks for your reply. OK, so I am guessing it is because I am using Spyder and not Jupyter. Now I tried Jupyter Notebook and just copied and pasted the code as it was from the Pupil Labs tutorials. It shows this message:

Chat image

user-0eef61 04 November, 2019, 12:33:11

@user-c5fb8b What I don't like about Jupyter is that it takes a lot of time to run the code and make the plots, and that's why I use Spyder instead.

user-c5fb8b 04 November, 2019, 12:34:11

@user-0eef61 You can change the default indentation settings of spyder to use spaces instead of tabs. See here: https://stackoverflow.com/questions/36187784/changing-indentation-settings-in-the-spyder-editor-for-python

papr 04 November, 2019, 12:35:25

@user-0eef61 Apparently, there are empty fixations, which cause the warnings

papr 04 November, 2019, 12:37:36

@user-0eef61 With np.seterr("raise") you can make this raise an exception instead. That should help you track down what is going wrong.

Call np.seterr("raise") once at the beginning of the notebook.
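
As a minimal illustration of that suggestion: np.seterr("raise") escalates NumPy's silent floating-point warnings into exceptions, so the failing cell stops with a traceback at the offending operation.

```python
import numpy as np

# Escalate NumPy floating-point warnings (divide, invalid, ...) into
# exceptions instead of printed RuntimeWarnings.
np.seterr("raise")  # equivalent to np.seterr(all="raise")

try:
    np.zeros(1) / np.zeros(1)  # 0/0 -> "invalid value" -> exception
except FloatingPointError as err:
    print("caught:", err)
```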

papr 04 November, 2019, 12:42:35

@user-0eef61 Also, please be aware that the data is filtered by confidence before the calculation. If all your data's confidence is too low, the script will operate on an empty list and show that warning.
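
A small sketch of what that filtering looks like (synthetic rows; the 0.8 threshold is an assumption for illustration, check the notebook for the actual value):

```python
import pandas as pd

# Synthetic stand-in for a few rows of pupil_positions.csv.
pupil_pd_frame = pd.DataFrame({
    "pupil_timestamp": [0.00, 0.05, 0.10, 0.15],
    "diameter": [40.0, 41.0, 39.5, 40.5],
    "confidence": [0.95, 0.40, 0.88, 0.10],
})

threshold = 0.8  # assumed cutoff for illustration
high_conf = pupil_pd_frame[pupil_pd_frame["confidence"] >= threshold]
print(len(high_conf))  # if this is 0, the detector sees an empty list
```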

papr 04 November, 2019, 12:43:28

We will add a check to the code to ensure that this does not happen.

user-0eef61 04 November, 2019, 12:51:08

Many thanks for the quick responses @papr @user-c5fb8b. I think Jupyter is still running the code, but it is taking some time, so I am thinking of running the code again in Spyder and changing the indentation as @user-c5fb8b mentioned. I went to the advanced settings in Spyder and it gives this information:

Chat image

user-0eef61 04 November, 2019, 12:51:16

I think it is currently set to use spaces, but is there any option I need to change?


user-c5fb8b 04 November, 2019, 12:52:58

@user-0eef61 This is looking good, I guess. Maybe something went wrong while copying the code. I am not using Spyder, but maybe you can copy the code over and then try to search-and-replace tab characters with 4 spaces?

papr 04 November, 2019, 12:53:40

To be honest, I do not know why the calculations in the Jupyter notebook should be slower than in Spyder... In the end, they are both just interfaces to the IPython kernel in the background.

user-c5fb8b 04 November, 2019, 13:03:47

Also @user-0eef61, how long is your recording? The fixation detector can take some time for larger recordings. The sample recording is 36 seconds and took 5 seconds to process on my machine. Note that the complexity of the algorithm seems to be quadratic, so a recording that is 10 times longer will take roughly 100 times longer to process. This will also depend on your machine, of course.
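
A back-of-envelope estimate based on those numbers (assuming strictly quadratic scaling, which is only a rough model):

```python
# Scale the "36 s recording -> 5 s processing" data point quadratically.
BASE_RECORDING_S = 36.0
BASE_PROCESSING_S = 5.0

def estimated_processing_s(recording_s: float) -> float:
    ratio = recording_s / BASE_RECORDING_S
    return BASE_PROCESSING_S * ratio ** 2

# A 30-minute recording: 1800/36 = 50x longer, so ~2500x the time.
print(estimated_processing_s(30 * 60))  # 12500.0 seconds, ~3.5 hours
```

On that model, a 30-minute recording would take hours rather than seconds, which would explain a notebook that appears to hang.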

user-0eef61 04 November, 2019, 13:18:49

My recording is approximately 30 minutes per person, and I also put many participants into one data file. However, I now tried with a file for one participant only, and it shows this in Jupyter:

Chat image

user-c5fb8b 04 November, 2019, 13:27:19

@user-0eef61 Can you check that the data you are inputting has the same shape as the data that is passed in the original sample notebook? I mean the notebook runs without problems, so there must be something wrong with the data you are passing I assume...

user-0eef61 04 November, 2019, 13:36:54

OK, so the problem was that I forgot to save the .ipynb file to the folder where my data is, but now it runs and shows this:

Chat image

user-0eef61 04 November, 2019, 13:37:12

which is great but I would still be more keen on making this work in Spyder

user-0eef61 04 November, 2019, 13:37:49

May I ask what Text(0.5, 1.0, 'Occurrences of Fixations') means?

user-0eef61 04 November, 2019, 13:37:59

can I know from this the number of fixations?

user-c5fb8b 04 November, 2019, 13:43:19

No, this is the output of the last line of code in this cell, which is the matplotlib Text object of the title element. How experienced are you with data analysis in Python? You will have to do the heavy lifting of the data analysis yourself in the end. I recommend you read up on how to work effectively with pandas DataFrames and NumPy arrays. Regarding Spyder: as I said, you should have no problems if you manually find-and-replace your tabs with spaces. I am not sure why there are tabs anyway, since you configured it to use spaces already.

user-0eef61 04 November, 2019, 13:47:33

I am not very experienced with Python, but I know other programming languages, so I do understand Python, just not perfectly. The figure is very nice and it gives information about the fixations, but I was interested in understanding how I can draw conclusions from my results, or how to interpret the figure. For this reason, I think it would be nice to also have the number of fixations.

user-0eef61 04 November, 2019, 13:47:44

But I will have a look and see how I can achieve that

papr 04 November, 2019, 13:48:26

@user-0eef61 in this case, there is a variable called fixations. It is a list containing all fixations.

user-0eef61 04 November, 2019, 13:50:37

@papr is that a column in the pupil_positions.csv or gaze_positions.csv data files? I am not sure if I have that

papr 04 November, 2019, 13:51:15

@user-0eef61 check the cell that threw the warning in your screenshot above

papr 04 November, 2019, 13:51:23

fixations = ...

papr 04 November, 2019, 13:51:50

It is the output of the detect_fixations() function

user-0eef61 04 November, 2019, 13:51:58

Ah yes 👍

user-0eef61 04 November, 2019, 13:52:57

so the fixations variable stores a list of all the fixations that occurred

papr 04 November, 2019, 13:55:27

correct
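
Counting the detections is then a one-liner; a minimal sketch (stand-in dicts here; the real entries returned by detect_fixations() carry more fields):

```python
# Stand-in for the list produced by detect_fixations().
fixations = [
    {"duration": 0.28},
    {"duration": 0.41},
    {"duration": 0.33},
]

num_fixations = len(fixations)  # number of detected fixations
mean_duration = sum(f["duration"] for f in fixations) / num_fixations
print(num_fixations, round(mean_duration, 2))
```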

user-0eef61 04 November, 2019, 13:57:08

Chat image

user-0eef61 04 November, 2019, 13:57:14

I think I managed to get the number of fixations

papr 04 November, 2019, 13:57:28

Very good 👍

user-0eef61 04 November, 2019, 13:57:48

That's great, thanks a lot for the advice 🙂

user-2be752 04 November, 2019, 18:16:07

Hi, sorry to resend this again. I've been using the Pupil glasses for a year now and they have always worked great. Lately I've been using them from Ubuntu, and all of a sudden, when I try to plug them into a Mac or a Windows machine, it will not open the cameras; it reports that they are in 'ghost mode'. They still work fine in Ubuntu though... any thoughts?

papr 04 November, 2019, 18:17:39

@user-2be752 Hey, sorry for not responding earlier. I must have lost track of your previous question.

papr 04 November, 2019, 18:18:30

Do I understand correctly that you were able to use the Pupil Core headset successfully on all three platforms before and now only Ubuntu remains functional?

user-2be752 04 November, 2019, 18:20:10

yes, exactly

papr 04 November, 2019, 18:26:13

I have an explanation for Windows but not for macOS... On Windows, the drivers can be overwritten with the operating system's default drivers after a Windows update. On macOS, though, there is no need to install drivers.

papr 04 November, 2019, 18:26:25

@user-2be752 What mac hardware and operating system are you using?

user-2be752 04 November, 2019, 18:26:38

I recently updated my Mac... maybe that's why?

papr 04 November, 2019, 18:27:17

@user-2be752 Updated to what operating system version exactly?

user-2be752 04 November, 2019, 18:28:00

macOS Catalina, version 10.15

papr 04 November, 2019, 18:28:45

I updated my Mac to that version, too. But I did not test Capture with an actual headset yet. I can tell you more tomorrow.

user-2be752 04 November, 2019, 18:28:53

okay

user-2be752 04 November, 2019, 18:28:59

it isn't working on Windows either, though

papr 04 November, 2019, 18:29:44

Regarding troubleshooting Windows see https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting

user-2be752 04 November, 2019, 18:30:17

thanks! Will wait for your update on Mac tomorrow!

papr 05 November, 2019, 08:50:29

@user-2be752 I was able to connect a Pupil Core headset successfully to Pupil Capture on my MacBook (Pro 2015) with macOS Catalina 10.15.1

user-2be752 05 November, 2019, 16:27:19

@papr weird... so what do you recommend I do? It is still only working in Ubuntu.

user-908b50 05 November, 2019, 18:34:33

Just following up: would you recommend updating the software midway through data collection? We have been having issues with calibration. Our setup changes helped, so the problem is likely intermittent, but we would like it to be solved.

user-c6717a 06 November, 2019, 01:47:01

Hi, we are trying to record all data to Pupil Mobile and then do offline calibration in Pupil Player. When we download the data from the Moto Z3 (running Pupil Mobile 1.2.3) and drag/drop it into Player (v1.18), we get the attached error. We've tried reinstalling Player and updating Pupil, but we still get the same problem. This is all on a Windows 10 computer. When we load the same data on a Mac (Player version 1.17), it loads properly. Any ideas on how to solve this issue? Thanks!

Chat image

papr 06 November, 2019, 06:40:24

@user-c6717a could you share the info.csv file of that recording with us?

user-c5fb8b 06 November, 2019, 09:04:27

Hi @user-c6717a we were able to reproduce the issue and are preparing a bugfix release for v1.18, which will be up in the next couple of hours. Thanks for reporting the issue!

papr 06 November, 2019, 10:34:59

@user-c6717a We have updated the Pupil release to v1.18-4. You should be able to open your Pupil Mobile recordings with it as usual.

papr 06 November, 2019, 10:37:32

@user-2be752 Please update your operating system and restart your Mac. Unfortunately, it is difficult for us to assess the situation since we cannot duplicate the issue.

Also, have you followed the Windows troubleshooting instructions?

papr 06 November, 2019, 10:44:03

@user-908b50 Sorry for the delayed response.

1) By changing your setup, do you mean that you cover parts of the display with paper?
2) There have been no changes to the pupil detection, calibration marker detection, or gaze mapping methodology in a while. I doubt that a newer version would help.

Still, I can only recommend downloading the newer version and giving it a try in a personal test.

user-c6717a 06 November, 2019, 20:08:40

Great news! Here's the info.csv file in case it helps.

info.csv

user-c6717a 06 November, 2019, 20:18:36

Happy to report the update worked on my end. Thanks!

user-c6717a 06 November, 2019, 20:29:35

I also wanted to suggest a potential new feature for Pupil Mobile. Do you think it would be possible to add a Network Time Protocol (NTP) timestamp column (from your favorite NTP server, like time.google.com) to the output data, synced with the time data that Pupil Mobile logs? We are trying to use NTP times to sync the Pupil Mobile devices with other phones and a Raspberry Pi. Would love to discuss how best to do this! Thanks!

user-c6717a 06 November, 2019, 20:35:45

We are also interested in using the Pupil Core outside. Any suggestions on how to make sure the IR cameras don't 'white out'? We are thinking of using a baseball cap, but still need to test whether that will work. Thanks!

user-908b50 06 November, 2019, 22:53:40

@papr Yes, we cover the bottom part of the screen because it shows black and white text tabs that, for some reason, may interfere with calibration. Okay, I will give it a try. Also, is there a way to generate a log message for each participant? Something that includes warning messages and calibration-related ones as well.

papr 06 November, 2019, 23:02:56

@user-908b50 no, per-recording log messages are not implemented yet

papr 06 November, 2019, 23:03:47

@user-c6717a this can be implemented based on the difference between system time (ntp synced) and synced time (pupil time). I can provide more details tomorrow.

user-c6717a 07 November, 2019, 00:03:17

Excellent, thank you! To use the difference between the times, I think we'd just need to know the origin of the pupil time; then we can log both times in a separate app on the Android phone to get the difference. Thanks!

user-abc667 07 November, 2019, 00:10:52

@papr FYI/FYA (for your amusement): If you try to calibrate using the manual marker mode by yourself, sitting with the laptop screen in the field of view, the system will constantly blurt out error messages about moving the marker too fast. Actually, it's seeing double (it sees the world camera's view of the marker on the screen as well), and it took a while before I noticed the error message about two markers being detected. Ah well, live and learn. Offering this in case someone else trips over it.

user-5d1626 07 November, 2019, 00:28:18

Hello, how can I purchase the Pupil Labs Core from China?

user-c6717a 07 November, 2019, 01:15:05

Just stumbled across some fairly good advertising for Pupil in a YouTube video from the Royal Institution that was uploaded today: https://youtu.be/4pxYVlgSzCg at about 24 min in

user-a48e47 07 November, 2019, 01:28:48

@wrp I was referring to your previous mention. What I wanted was a "manual correction" for the "gaze mappers" in Pupil Player. Thank you.

wrp 07 November, 2019, 01:38:06

@user-5d1626 Pupil Labs ships products worldwide (including China 😸). You can make a quote/order request via the Pupil Labs website - https://pupil-labs.com/products - or alternatively you can send an email to sales@pupil-labs.com

wrp 07 November, 2019, 01:38:13

@user-c6717a thanks for the link!

wrp 07 November, 2019, 02:27:49

@user-a48e47 you can apply a manual correction post-hoc by defining a new calibration in Pupil Player. See the screenshot (specifically the Manual Correction section). It seems like you might have already found this. Does this answer your prior question?

Chat image

user-a48e47 07 November, 2019, 03:03:46

@wrp Yes, that's what I asked. Another question: if I modify and export the x and y coordinates, will the modified coordinates appear in the exported raw data (e.g. opened in Excel)?

wrp 07 November, 2019, 03:35:06

@user-a48e47 If you make a new gaze mapper in Pupil Player, then the exported gaze coordinates will be generated by the new gaze mappers that you have created. So, yes, if you manually modify the x and y in your gaze mapper, that will be applied to the exported data as well.

user-a48e47 07 November, 2019, 16:02:54

@wrp But there is a problem. If the data is adjusted via the gaze coordinates, the fixation detector is not executed. I just want to shift the coordinates of both the fixation and gaze data, and I wonder why one is shifted and not the other.

papr 07 November, 2019, 16:08:12

@user-a48e47 Can you confirm that you performed these steps?
1. Changed the offset
2. Hit recalculate
3. Waited for it to finish
4. The fixation detector should start detecting
5. Waited for the fixation detector to finish

papr 07 November, 2019, 16:09:04

And if steps 4-5 do not happen, can you confirm that you see the yellow fixation visualizations in one place and the gaze data in an offset location?

user-c6717a 07 November, 2019, 22:17:57

@papr The updated Pupil Player build (1.18-4) now loads the folder from Pupil Mobile, but when I go to perform Offline Pupil Detection I get the following error:

user-c6717a 07 November, 2019, 22:18:00

Running PupilDrvInst.exe --vid 1443 --pid 37424 eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace:
Traceback (most recent call last):
  File "launchables\eye.py", line 457, in eye
  File "shared_modules\plugin.py", line 323, in __init__
  File "shared_modules\plugin.py", line 356, in add
  File "shared_modules\video_capture\uvc_backend.py", line 72, in __init__
  File "shared_modules\video_capture\uvc_backend.py", line 171, in verify_drivers
  File "subprocess.py", line 729, in __init__
  File "subprocess.py", line 1017, in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified

user-c6717a 07 November, 2019, 22:19:32

I was able to run Offline Pupil Detection and Offline Gaze Calculation on my MacBook Pro yesterday, but am having trouble getting that to work on my Windows computer now. Any suggestions? The data is from Pupil Mobile (1.2.3).

papr 07 November, 2019, 22:20:31

@user-c5fb8b Please have a look at this tomorrow morning 👆 Please verify that the Windows bundle includes the driver installer exe

user-c6717a 08 November, 2019, 02:18:04

Also wanted to follow up on how to use the pupil sync times to synchronize with an NTP server ("system time"). RE: "@user-c6717a this can be implemented based on the difference between system time (ntp synced) and synced time (pupil time). I can provide more details tomorrow."

user-175561 08 November, 2019, 04:30:09

Hey guys. How is everyone? I'm in a bit of a pickle right now and just found this server, and was hoping to get some help. I'm working on a project that focuses on analyzing eyes at the extremes (we're trying to observe and classify severe nystagmus) and I'm finding that my confidence levels are extremely low. At the moment I have about 120 recordings where most of them have very low confidence levels during these periods of nystagmus. What would be the best way to increase these confidence levels after the fact? Sorry if this is a simple question, but I've been struggling a lot with it.

user-a48e47 08 November, 2019, 06:17:11

@papr Would you mind helping me via Chrome Remote Assistance?

papr 08 November, 2019, 07:00:12

@user-a48e47 I am sorry but we cannot provide this type of support on Discord. If you feel the need for a personal video support call, please consider getting a support contract from our website.

Nonetheless, we will of course try to help you figure out the problem. :)

Could you open the "Calibrations" and "Gaze Mappers" menu and share a screenshot of them?

papr 08 November, 2019, 07:01:54

@user-175561 Is it possible for you to share a short example with data@pupil-labs.com? We could have a look at the videos and check that the pupil detection works as expected.

user-a48e47 08 November, 2019, 07:28:13

@papr Thank you very much. I am sharing four screenshots: 1) Fixation Detector, 2) "Calibration" in the default state, 3) "Gaze Mapper" in the default state, 4) modified fixation (after modifying the coordinates)

Chat image

user-a48e47 08 November, 2019, 07:28:13

Chat image

user-a48e47 08 November, 2019, 07:28:14

Chat image

user-a48e47 08 November, 2019, 07:28:15

Chat image

papr 08 November, 2019, 07:29:25

@user-a48e47 if you check the gaze mapper, it says that it is not calculated yet. Could you please hit recalculate and see if you get any error messages?

papr 08 November, 2019, 07:30:01

Without gaze data, the fixation detector won't work :)

user-a48e47 08 November, 2019, 07:35:27

OMG my mistake. Thank you very much. Now I can see the yellow circle. If you don't mind, can I ask you one more question?

user-c5fb8b 08 November, 2019, 07:43:28

@user-c6717a Regarding your error, please try opening Pupil Capture v1.18-4 once before opening Player. It seems the file for installing the drivers is missing in Player. Can you tell me whether this worked?

papr 08 November, 2019, 07:46:25

Correction, player does not need this file. There is something else going wrong

papr 08 November, 2019, 07:54:18

We were able to isolate the issue. This is a Windows specific issue. We will let you know as soon as we have uploaded a corrected bundle.

papr 08 November, 2019, 07:56:01

@user-a48e47 Great to hear that it worked out. Feel free to ask as many questions as you like. :)

user-a48e47 08 November, 2019, 08:27:57

@papr 1) If I move the x, y coordinates, the result is not displayed immediately. The position is the same even after Pupil Player is done... 😦

2) I know that yellow is fixation and green is gaze data. However, I can see that the two circles do not match in the recorded video. How should this be interpreted? I'll upload the screenshot.

Chat image

papr 08 November, 2019, 08:40:13

@user-a48e47 after adding an offset and recalculating the mapper, it takes a while for the fixation detector to recalculate. You should see a progress indicator around the menu icon during this period. Only afterwards will the visualization be correct.

user-a48e47 08 November, 2019, 08:49:07

@papr It seems my questions today did not come out clearly. I'll have to look into it in more detail. Thank you for sharing your precious time.

user-a4a77a 08 November, 2019, 09:11:58

Hello, does anyone know how to solve the "Install pyrealsense to use the Intel RealSense backend" error while opening Pupil Player?

Chat image

user-c5fb8b 08 November, 2019, 09:23:02

Hi @user-a4a77a This is not an error, just an info note. Do you have an Intel RealSense 3D camera that you want to use together with Pupil? If not, you can ignore this message.

user-c5fb8b 08 November, 2019, 09:25:34

Hi @user-e10103 are you asking where you can download the application? You can find the latest release on GitHub; see the following link. Scroll down to the bottom to Assets; there are bundled versions for macOS/Linux/Windows that you can download. https://github.com/pupil-labs/pupil/releases/latest

user-a4a77a 08 November, 2019, 09:41:49

Hi @user-c5fb8b, thanks for the reply. Actually, Pupil Player is not opening, and it shows the following screen:

user-c5fb8b 08 November, 2019, 09:46:40

@user-a4a77a which version of Pupil Player are you running? And which operating system? Are you running from source or from a bundle?

papr 08 November, 2019, 09:50:38

@user-a4a77a please delete the user_settings files in the pupil_capture_settings folder and try starting Player again

papr 08 November, 2019, 09:51:50

@user-c5fb8b the issue seems to be that sometimes the restored player window is offscreen without the option to recover it

user-a4a77a 08 November, 2019, 09:57:39

@user-c5fb8b Pupil Player was working yesterday, but today when I try to open it, it's not working; as @papr said, it's offscreen. Pupil Capture is working fine, though.

user-a4a77a 08 November, 2019, 10:00:56

@papr There is no such folder named settings, and I can't find the user_settings files in other folders either.

user-a4a77a 08 November, 2019, 10:09:27

And the Pupil version is 1.18-4, recently downloaded from the web.

user-a4a77a 08 November, 2019, 10:22:09

@papr Thanks! It's working now after deleting the user files; I found the files in the recording folder.

papr 08 November, 2019, 10:31:20

@user-a4a77a in this case, starting a newer version fixed the issue, not the deletion of any files in the recording folder. I highly recommend not deleting anything in the recording folder unless you are 100% sure it can be deleted.

user-a4a77a 08 November, 2019, 11:39:38

Thanks 👍

user-86d8ec 08 November, 2019, 20:14:01

Hi, I currently have a Tobii X3-120 eye tracker and iMotions software. I would like to purchase the Pupil Labs Core and was wondering if I should purchase the iMotions software for the Pupil Labs Core. What value will it add? Can I do without it? Thanks

user-175561 09 November, 2019, 02:57:02

@papr Sent it. Thank you so much!

user-e637bd 09 November, 2019, 04:20:44

I saw a message last year that the OnePlus 6T does not work with Pupil Mobile because of Android version 9. Is this resolved? Would the new Pixel 4 work?

user-121d2c 09 November, 2019, 16:25:46

Hi, our research group has collected some data with our two HTC Vive add-ons (to start), but upon inspection, the data appears to be horrendously messy, with over 60% of the generated CSVs containing inaccurate, low-confidence reads. Our test brings the eyes to the peripheries by design, involves changing light levels, and also involves drugs which affect the eyes, so maybe this is the cause. But the videos themselves are very crisp, and at all times I can identify the pupil position easily, whereas I see the algorithm messing up and missing the pupil. What steps can I take to generate accurate CSVs from the videos?

wrp 11 November, 2019, 03:29:25

Hi @user-121d2c did you record the calibration procedure? If so, then you can do post-hoc pupil detection, modify the parameters there, and then re-calibrate (post-hoc gaze estimation), all in Pupil Player.

user-c5fb8b 11 November, 2019, 08:11:00

@user-c6717a Sorry for coming back to you so late; are you still having trouble with your time synchronization, or were you able to come up with a solution? I don't know your exact setup, but from what I understand, you should calculate the difference between pupil time and system time a couple of times for incoming data (and average). This time difference should stay constant while Pupil is running, so afterwards you can apply it to all recorded pupil-time timestamps to convert them to system time.
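
The offset arithmetic can be sketched like this (time.monotonic() stands in for the pupil clock here; the real clock differs, but the conversion logic is the same):

```python
import time

def measure_offset(samples: int = 5) -> float:
    """Average the difference between system time and the 'pupil' clock."""
    diffs = [time.time() - time.monotonic() for _ in range(samples)]
    return sum(diffs) / len(diffs)

offset = measure_offset()

# Post-hoc conversion: pupil timestamp + constant offset = system time.
pupil_ts = time.monotonic()
system_ts = pupil_ts + offset
```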

papr 11 November, 2019, 08:54:03

@user-121d2c If you share a small example with data@pupil-labs.com, we might be able to give concrete recommendations for the pupil detection parameters.

"involves drugs which impact the eyes" - I guess this means that the subjects have either very small or very big pupils. In these cases, you need to adapt the "min pupil"/"max pupil" parameters in the pupil detector menu accordingly.

papr 11 November, 2019, 08:54:40

@user-e637bd Pupil Mobile supports Android version 9 by now, yes. I do not know about the Pixel 4, though.

papr 11 November, 2019, 10:32:39

@user-86d8ec I am not sure you even need another copy of the iMotions software. You might be able to import Pupil recordings just fine already. I think iMotions support should be able to answer your questions in detail.

papr 11 November, 2019, 12:55:32

@user-c6717a In Pupil v1.18-35, we fixed the issue that caused Pupil Player to crash when running Offline Pupil Detection on Windows.

user-abc667 11 November, 2019, 19:52:38

Our work involves measuring the cognitive status of people, in part by having them take a variety of pen-and-paper tests. We've had some success, but often run into the problem that people who are looking down at the table top appear to Pupil as having closed their eyes. We have the extenders in place, but even so, eyelids and sometimes lashes block the view of the pupil. (The subject is still looking at the page, but is gazing down sufficiently that the eye cameras can't see their pupils.) I know this is a challenging use of any eye tracker, but wondered if there's any advice about how to deal with this. Are there longer extenders, for example? Any ideas, suggestions, and especially experience, would be great to hear. Thanks.

user-baf003 11 November, 2019, 21:05:12

Hello, I'd be happy if you could answer a quick question. I record with 200 Hz eye cameras and a 120 Hz world camera in 2D mode. If I understand correctly, the exported data contains approximately 400 norm_pos_x and norm_pos_y gaze positions per second. Is this because the eye cameras do not record in a synchronized manner, so that when the eye timestamps are matched to the world camera timestamps, there are two similar timestamps of the same eye across several world timestamps, and because they differ for the other eye, the number of gaze points per second is doubled? Thank you!

user-00a1c8 11 November, 2019, 23:31:11

Hi, I ran into a problem with a Core setup with 2 x 200 Hz eye cameras while using the v1.13 software: the timing of the eyes was not in sync. This happened 2 times in a row. I have since updated to 1.18 and have not had a problem with the recording. Is there any way I can go back and fix the timing of the previous recordings?

user-c6717a 12 November, 2019, 08:14:58

@user-c5fb8b thank you! Ideally I'd like to store the NTP (system) time in a log file every time the Pupil Mobile system makes a pupil-time stamp. Is there an easy way to integrate this into the output of Pupil Mobile? We have some Python code that is currently doing this on an rPi and a different Android phone. I think the main issue we are trying to solve is how to get the pupil timestamp written at the same time as an NTP timestamp. If we have that, even just a few instances, we could use your method to average and get the constant difference. Does that make sense? Thank you again for your time and help! I feel like adding an NTP timestamp option when Pupil Mobile is connected online would be very useful for many applications. It's one of the main methods our network and embedded systems engineers use to sync multiple networked systems (human brain implant, cell phone IMU, GoPro, rPi, etc.).

user-300fbb 12 November, 2019, 12:50:15

Hello, I just started using the glasses. I noticed that one eye-camera is upside down (see eye0 on the attached screenshot). Is this normal?

Chat image

user-300fbb 12 November, 2019, 13:03:53

(never mind, I found my answer!)

papr 12 November, 2019, 13:29:06

@user-c6717a When Pupil Mobile starts a recording, it records the starting time in system and in pupil time. We basically measure the same point in time with the two different clocks. Now, assuming the clocks do not drift, you can calculate the difference between these two measurements and apply it post-hoc to all pupil timestamps

papr 12 November, 2019, 13:30:54

This is the simplest way. Alternatively, you would have to sync Capture to the unix epoch, and sync Pupil Mobile to Capture. This way Pupil Mobile stores everything in unix epoch
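The post-hoc correction described above can be sketched in a few lines of Python. The info.csv field names ("Start Time (System)" / "Start Time (Synced)") and the numbers are assumptions based on a typical recording; the conversion assumes the two clocks do not drift:

```python
import csv

# Sketch: shift pupil (monotonic) timestamps onto the unix epoch using the
# two start times stored in a recording's info.csv. Field names are assumed.

def read_info_csv(path):
    """Parse info.csv into a {key: value} dict."""
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f) if len(row) >= 2}

def pupil_to_unix(pupil_timestamps, start_system, start_synced):
    # Both start times measure the same instant; their difference is the
    # constant offset between the clocks (assuming no drift).
    offset = start_system - start_synced
    return [t + offset for t in pupil_timestamps]

# illustrative numbers only
unix_ts = pupil_to_unix([1677.583900708, 1678.083900708],
                        start_system=1573001832.048,
                        start_synced=1677.583900708)
```

The first converted timestamp equals the system start time, since it is the same instant measured by the other clock.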

user-abc667 12 November, 2019, 14:58:32

@papr "When Pupil Mobile starts a recording, it records the starting time in system and in pupil time. We basically measure the same point in time with the two different clocks. Now, assuming the clocks do not drift, you can calculate the difference between these two measurements and apply it post-hoc to all pupil timestamps" Very useful info -- where are these values recorded/stored so I can get to them? Thanks

papr 12 November, 2019, 15:00:11

@user-abc667 in the info csv file

user-abc667 12 November, 2019, 15:00:50

@papr Excellent; thanks.

user-d3d852 12 November, 2019, 15:20:03

Hi everyone, we have recently started using the glasses for UX research, primarily on mobile devices. We're having real trouble with the accuracy of the glasses. Does anyone have any experience using the glasses for UX research and also any tips on improving accuracy? The eye cameras appear to show that the pupils are tracking fine (red line around the pupils).

user-a4a77a 12 November, 2019, 15:55:28

Hello, how can I extract a smooth heatmap surface? The extraction is pixelated, and the heatmap smoothness level is not working in my case.

Chat image

user-4a4530 12 November, 2019, 17:06:17

Hello, we are working on a machine learning application with Pupil Labs Core and Python. We need to record the pupil diameter in real time and combine these measurements with those of other sensors, all managed by Python. The problem is that Pupil records with a little delay and then drags all the other instruments. Do you have any suggestions for me and my team? Thank you in advance for your help.

user-9d7bc8 12 November, 2019, 20:20:22

Hello. I'm trying to use the Fixation Detector plugin with the Network API, but I can't find what topic name it sends data under, if it even does at all. Looking at the source code, it looks like it should be "fixations", but subscribing to that topic yields no data at all. Does that plugin send data over the network api?

papr 12 November, 2019, 20:49:12

@user-a4a77a The smoothness only relates to the size of the Gaussian filter used on the heatmap. The pixelation comes from the fact that we scale the image using a nearest-neighbours approach, which visually preserves the original bin size of the heatmap.

papr 12 November, 2019, 20:52:24

@user-d3d852 Have you checked the accuracy field in the Accuracy Visualizer menu? What does it say after a calibration? Depending on your used detection and mapping approach, values below 1.5 degrees are considered good.

papr 12 November, 2019, 20:54:36

@user-4a4530 How much delay are you measuring? Does Capture run on the same computer as the script that receives the data?

papr 12 November, 2019, 20:57:16

@user-9d7bc8 Please make sure the fixation detector is running and that you calibrated successfully. You should see a yellow circle in the world window when a fixation was detected. Also, try subscribing to fixation (singular, not plural). Are you able to receive other data, e.g. pupil data?

user-2be752 12 November, 2019, 21:35:21

Hi there, in the older data format, pupil had a notifications dictionary with builtin notifications for calibration starting and finishing, is this also happening for the newer pupil data format? I can't seem to find it. Is it now something that we need to build ourselves? Thank you so much!!

papr 12 November, 2019, 21:36:07

@user-2be752 That is still there. All notifications are saved to notify.pldata.

papr 12 November, 2019, 21:36:42

@user-2be752 You can use this code to load the data from the file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L137

user-2be752 12 November, 2019, 21:36:56

mmh oh okay! maybe i'm not loading it correctly then! I can only see data, topics and timestamps. And for topics it's only pupil.0 and pupil.1

papr 12 November, 2019, 21:37:26

@user-2be752 Then you are looking at pupil.pldata. That is a different file ๐Ÿ™‚

user-2be752 12 November, 2019, 21:37:58

oh!! totally my bad, thanks! you are the best 🤘

papr 12 November, 2019, 21:38:12

The old pupil_data file contained everything in one file. Due to technical reasons, we decided to split each data type into a separate file

user-4a4530 12 November, 2019, 21:40:41

@papr Capture runs on the same computer. The delay at the beginning of the recording is very low and then rises drastically. Where can I find out more about this?

papr 12 November, 2019, 21:41:32

@user-4a4530 this is very unusual! Does it keep increasing endlessly over time?

papr 12 November, 2019, 21:42:06

I would be interested in how you measure the delay. Doing these time measurements can be very tricky.

user-4a4530 12 November, 2019, 21:42:47

@papr No, it starts very low, then rises and stabilizes.

user-4a4530 12 November, 2019, 21:43:50

@papr At the moment I'm not in the lab, I'll try to write to you tomorrow so that I can give you more precise information!

papr 12 November, 2019, 21:44:54

@user-4a4530 ok, thank you 🙂

user-0767a7 13 November, 2019, 00:46:24

hi all. I'm hoping to talk to someone about the fixation detection algorithm - I'm getting some really inconsistent output, and I have no idea why that would be

user-0767a7 13 November, 2019, 00:53:47

also, FYI, the documentation for the fixation detector (here: https://docs.pupil-labs.com/core/software/pupil-capture/#fixation-detector) is just broken links and infinite loops.

wrp 13 November, 2019, 03:44:16

@user-0767a7 thanks for the feedback. We have an open issue for fixing the links and fixation detector documentation here: https://github.com/pupil-labs/pupil-docs/issues/320

wrp 13 November, 2019, 03:45:01

@user-0767a7 could you define inconsistent/provide some more concrete information?

user-0767a7 13 November, 2019, 03:45:52

sure. I'll describe my protocol first, so you'll know where I'm coming from.

user-0767a7 13 November, 2019, 03:47:16

I'm looking at "the length of the fixation prior to event X". X is defined by a frame in the world output video, so I'm looking for the length of the fixation that immediately precedes a certain point in the video, defined by a given frame.

wrp 13 November, 2019, 03:49:19

@user-abc667

Our work involves measuring cognitive status of people in part by having them take a variety of pen and paper tests. We've had some success, but often run into the problem that people who are looking down at the table top appear to the Pupil as having closed their eyes. We have the extenders in place, but even so eyelids and sometimes lashes block the view of the pupil. (The subject is still looking at the page, but is gazing down sufficiently that the eye cameras can't see their pupils. I know this is a challenging use of any eye tracker, but wondered if there's any advice about how to deal with this. Are there longer extenders, for example? Any ideas, suggestions, and especially experience, would be great to hear. Thanks.

I would suggest trying to angle the eye cameras up (orbiting about their ball joints) if you haven't done this already. Additionally, we do document the geometry of the mounts if you wanted to develop/prototype your own custom extenders: https://github.com/pupil-labs/pupil-geometry

user-0767a7 13 November, 2019, 03:49:49

there's a "maximum duration" setting for exporting fixations; my problem is that I'm getting completely inconsistent output depending on that max duration. if it's set to 1000ms, I'll find totally different fixations (different lengths, start frames, end frames, positions) than if I set it to 1100 ms.

wrp 13 November, 2019, 03:52:05

@user-0767a7 are you using 2d or 3d mode?

user-0767a7 13 November, 2019, 03:55:02

@wrp good question. how can I tell?

wrp 13 November, 2019, 03:56:03

@user-0767a7 Without looking at your data, I can only say getting different results might make sense given the change in parameters. Fixations are classified based on the dispersion and duration parameters. If the params are changed, then you will likely end up with different fixations being classified.

wrp 13 November, 2019, 03:58:20

If you're curious, you can see the source code of the fixation detector here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py
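For reference, a toy dispersion-and-duration (I-DT-style) detector - a simplified sketch, not Pupil's actual implementation - shows why changing the duration limit re-segments the same gaze stream into different fixations:

```python
# Toy dispersion-threshold fixation detector. Illustrative only: Pupil's
# real detector (fixation_detector.py) differs in details.

def dispersion(points):
    """Spread of a window of (t, x, y) samples: (max-min) in x plus y."""
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=0.02,
                     min_duration=0.1, max_duration=0.4):
    """gaze: list of (timestamp, x, y), sorted by timestamp.
    Returns a list of (start_ts, end_ts) fixation windows."""
    fixations = []
    i = 0
    while i < len(gaze):
        j = i + 1
        # grow the window while it stays compact and within max_duration
        while (j < len(gaze)
               and gaze[j][0] - gaze[i][0] <= max_duration
               and dispersion(gaze[i:j + 1]) <= max_dispersion):
            j += 1
        duration = gaze[j - 1][0] - gaze[i][0]
        if duration >= min_duration:
            fixations.append((gaze[i][0], gaze[j - 1][0]))
            i = j  # continue after the fixation
        else:
            i += 1
    return fixations

# 0.6 s of perfectly stable gaze sampled at 100 Hz
gaze = [(t / 100, 0.5, 0.5) for t in range(60)]
short = detect_fixations(gaze, max_duration=0.3)  # capped windows
long_ = detect_fixations(gaze, max_duration=1.0)  # one window
```

With identical input, the 0.3 s cap splits one stable gaze period into two fixations while the 1.0 s cap reports a single one - different parameters, different segmentation.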

user-0767a7 13 November, 2019, 03:59:32

I'm not sure that explains it. Here's an example of my problem: I set the max duration to 300 ms. I'll find 3 consecutive fixations; the first two are 300ms, the last one is 100. So one fixation of 700ms total. Now I set the max duration to 800 ms. I won't find a 700 ms fixation, but rather something completely different: two 500ms fixations, non-adjacent, or a single 200ms fixation, and then nothing for another few seconds.

wrp 13 November, 2019, 03:59:57

2d vs 3d mode - this is set when you make a recording in Pupil Capture > General settings menu. You can take a look at exported gaze_positions.csv data from Pupil Player and if it has columns *3d - then the 3d mode was used.

user-0767a7 13 November, 2019, 04:00:36

i

user-0767a7 13 November, 2019, 04:00:41

'm using 3d mode

wrp 13 November, 2019, 04:00:45

ok, thanks

wrp 13 November, 2019, 04:01:08

@user-0767a7 maybe you would like to send a small sample recording to data@pupil-labs.com and someone from my team can follow up with you

user-0767a7 13 November, 2019, 04:01:27

if i max out the fixation duration (4k ms), I find almost nothing in a three minute video except for a few scattered 100ms fixations.

user-0767a7 13 November, 2019, 04:02:11

thanks for the source code, I'll bookmark that and look at it later

user-0767a7 13 November, 2019, 04:09:39

@wrp sure thing. I'll write up an email and send you a brief recording tomorrow

user-0767a7 13 November, 2019, 04:09:54

thanks, I appreciate the help. cheers

user-abc667 13 November, 2019, 16:50:50

@wrp Thanks for the info and the pointer. Yes, the first thing we tried was to angle the cameras up as much as possible. Still not enough. We'll try making a longer one and see if it does the trick. Meanwhile, if anyone else reading this has any experience to share, that would be great.

papr 13 November, 2019, 16:56:38

@user-abc667 Just to be sure, are you using the fish-eye lens that is shipped with the Pupil Core headset?

user-860618 13 November, 2019, 22:03:39

I've asked the research publications community, but I thought maybe one of you can help me.
Hello all, I am planning on using the Pupil Labs software for a research project, and I have been following the DIY instructions from the Pupil Labs website. I have a few questions for the community. So far, I have trouble with the webcams. I keep getting a timestamp error... Does anyone know how I can fix this? I don't have the error in front of me but I can get the exact message another time. I am using Ubuntu 18.04, kernel version 5.0.0. Also, I have had trouble finding the correct UVC-compliant webcams. For those that have made their own devices, what webcam models have you had success with? Much appreciated.

user-860618 13 November, 2019, 22:06:29

I should point out that I have the software up and running but I am coming across the issues mentioned above.

user-c6717a 15 November, 2019, 07:06:12

Hey @papr: to follow up on your last response. Where can we 'measure' or record the pupil clock from pupil mobile? We would need to know the pupil clock time and the 'system' clock time at the same time to calculate that difference. I'm just not sure how to get both times logged simultaneously for even 1 point to calculate the difference between clocks. Do you agree? Or are we thinking about different solutions? Thank you again for your time and help!

papr 15 November, 2019, 07:09:49

@user-c6717a My solution only works after the effect since it requires the info.csv of an existing recording. Pupil Mobile stores the start time of the recording in both clocks there.

user-c6717a 15 November, 2019, 07:12:28

Ah I see. I missed that there were two times. Do you know the official definitions of the System Time and the Synced Time in that case? What do they correspond to on the phone?

user-c6717a 15 November, 2019, 07:20:36

So system time seems to be the Unix system time to the thousandths of a second. How is the 'synced' time generated relative to that? For instance, the start time (system) for 1 of my recordings is: 1573001832.048. The start time (Synced) for the same recording is 1677.583900708. What units is the Synced time in? Seconds? And what are these units relative to? For instance, what would be a 0 (synced) time? Is it time since opening the app? Sorry for all of the questions, but just trying to understand how to take it from here. Thank you again!

papr 15 November, 2019, 07:54:57

@user-c6717a synced time is a monotonic clock whose start point is either undefined or synchronized to a Pupil Capture instance.

papr 15 November, 2019, 07:55:39

Units are always seconds.

papr 15 November, 2019, 07:56:48

Monotonic clocks are used for precise time measurements, where the difference between two clock measurements is more important than the absolute position in time (e.g. unix clock) of the time measurement.
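A minimal illustration of the two clock types, using Python's stdlib clocks as stand-ins for the pupil (monotonic) and system (unix) clocks:

```python
import time

# Two clocks measuring the same instants: the unix wall clock (time.time)
# and a monotonic clock (time.monotonic). Their difference stays (nearly)
# constant, which is why a single measured offset is enough to convert
# monotonic timestamps to absolute time after the fact.
offset_1 = time.time() - time.monotonic()
time.sleep(0.05)
offset_2 = time.time() - time.monotonic()

drift = abs(offset_2 - offset_1)  # tiny on a healthy system
```

Note that a monotonic clock's absolute value is meaningless on its own (its zero point is arbitrary, e.g. system boot); only differences between two readings carry information.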

user-c14158 15 November, 2019, 15:02:00

hello, i have a issue using pupil mobile version 1.2.3. I had no issue with the 1.2.2 version. Now when i record data i have multiple file stored for the different camera (for example world_001,world_002... and so on ). And when i replay the data on the player i have after some time this issue : 2019-11-15 14:23:02,523 - player - [ERROR] video_capture.file_backend: Found no maching pts! Something is wrong! 2019-11-15 14:23:02,524 - player - [INFO] video_capture.file_backend: No more video found

user-c14158 15 November, 2019, 15:02:40

i tried with the player version 1.16 (that i was using before) and the new version 1.18 on windows 10 with still the same issue

user-c14158 15 November, 2019, 15:03:35

the offline pupil detection seems to works fine although the eye video are also in multiple part

user-c14158 15 November, 2019, 15:04:37

if i dont do any processing i am able to export the world video in mp4 and i can play this file without problem

user-c14158 15 November, 2019, 15:05:00

do you have any insight on this issue ?

user-c14158 15 November, 2019, 15:05:46

alternatively is there a way to go back to the 1.2.2 version ?

user-c14158 15 November, 2019, 15:06:01

thanks in advance

user-9228ee 15 November, 2019, 19:29:14

Hey everyone, For a project I need to run the software on a NVIDIA Jetson AGX Xavier. Does anyone have any experience with installing the dependencies and the pupil software?

user-c6717a 15 November, 2019, 20:35:35

@papr Ok. I think I understand and will talk with our engineers about it. Do you know where on the phone the Pupil Mobile app pulls this monotonic clock information from to write it to the csv file? I think we can just link our NTP time to the system time, and then the system time is already linked to the Synced (monotonic clock) time. Is that what you're suggesting? Thanks!

papr 15 November, 2019, 21:22:21

@user-c6717a please be aware that the monotonic clock might use an additional offset while being time synced... Do you need to be time synced in real time or do you just need to synchronize your clocks after the effect? There are two clear solutions for both of these use cases. Let me know which one you need.

user-e637bd 15 November, 2019, 23:04:39

Hello, I am having issues with pupil mobile; i can change the settings of the camera, such as making the eye camera 400x400px, and exposure time, etc. However when recording it always defaults back and does not record with the same settings. Any idea how to solve it?

user-abc667 16 November, 2019, 00:46:00

@papr Yes, using the headset exactly as it was sent to us.

user-abc667 16 November, 2019, 01:47:59

@papr System crash (not sure if this is the right place to report it). Running Pupil Capture v1.18.4 (the version of capture in the v1.18.35 release), windows x64. When detecting a surface, if I hit Add/Remove markers I get a reliable crash with the attached stack trace.

Crash.txt

user-c5fb8b 18 November, 2019, 08:37:31

@user-abc667 Are you using the legacy square markers?

user-f3a0e4 18 November, 2019, 09:53:49

@papr Hi. I am just looking for some advice. I am currently trying to collect gaze data from children (8 years old) using Pupil Labs, but am finding it much more difficult to effectively capture their pupils compared to adults. From what I can see, this seems to be caused by having to have the camera further away from the eye to account for their smaller heads. This seems to produce an image that is much darker and more shadowed. Using the arm extenders helps to get squarer on with the eye, but subsequently causes an even darker image with even more shadow. Has anyone faced this before and know how to get around it?

user-f3a0e4 18 November, 2019, 09:54:33

I have tried changing the ROI but this does not remedy the issue

papr 18 November, 2019, 10:08:58

@user-f3a0e4 Adjusting the RoI would have been my suggestion, too. Can you provide an example image?

user-f3a0e4 18 November, 2019, 10:37:11

This is an example from the weekend. I don't have an example using the extenders as I decided to remove them before recording, but this example provided similarly poor pupil detection, especially when looking more downwards.

Chat image

papr 18 November, 2019, 10:39:40

@user-f3a0e4 The pupils seem comparably big, too. I suggest you use the lower resolution settings (320x240 for 120Hz eye cameras, or 192x192 for 200Hz eye cameras) and increase the Pupil max parameter in the pupil detector menu of the eye windows.

papr 18 November, 2019, 10:42:46

When using the lowest resolution, the algorithm does not perform the coarse detection which can result in false negative detections if the pupil is comparably large in the image.

user-f3a0e4 18 November, 2019, 10:43:46

Okay, thanks! Are there any other settings that might improve pupil capture in these instances? For example, brightness or contrast etc?

papr 18 November, 2019, 10:45:52

@user-f3a0e4 Regarding the UVC post-processing parameters, I would recommend adjusting the gain.

user-f3a0e4 18 November, 2019, 10:46:48

Oh, I wasn't aware you could adjust the UVC settings post-hoc? Where are the settings for that?

papr 18 November, 2019, 10:48:45

Sorry, this is a misunderstanding. I was talking about UVC settings that are applied after the image exposure, but before the image is transferred to the computer. The UVC Source menu lists them as "Post-processing" since they are "camera post-processing" parameters.

user-f3a0e4 18 November, 2019, 10:49:48

Okay, so I cannot adjust these settings once the recording is made?

user-f3a0e4 18 November, 2019, 10:50:36

And on another note, just to pick your brains. Often with the children I have troubles getting a good calibration out of them, likely because they don't keep their head still/aren't looking where they're supposed to. Do you know of any measures I can take to counter this issue?

papr 18 November, 2019, 10:55:10

@user-f3a0e4 correct, these values cannot be readjusted by Player after the effect.

user-abc667 18 November, 2019, 16:15:56

@user-c5fb8b Yes, we're using legacy square markers -- we have testing forms that have been in use previously and for consistency we need to maintain the appearance. The problem got a lot better (ie no need to edit the surface) when I was careful about never using a marker on more than one form (not even in a different combination of markers). Now surfaces are recognized quickly and no need to edit.

user-c5fb8b 18 November, 2019, 16:18:39

@user-abc667 we were able to reproduce the issue you reported, thanks a lot for that! We are preparing a bugfix release to be released later this week, where we will address the issue. Glad to hear you are managing with this version until then.

user-abc667 18 November, 2019, 16:19:49

@user-c5fb8b And thanks for the rapid response.

user-2be752 18 November, 2019, 23:26:56

Hello! I'm having some issues installing Capture on Ubuntu 14.04. I've installed it but it won't open, and I wondered whether there are some issues with this version that I'm not aware of?

user-771cfd 19 November, 2019, 03:15:51

Hi everyone! My lab have bought some eye cameras recently and the resolution(400x400) is much lower than the ones we bought two years ago(1920x1080). Is it possible to increase the resolution or buy old versions? Thanks!

user-860618 19 November, 2019, 06:24:26

@user-771cfd I don't mean to sound flippant but can you use the old ones lol. If you need a lower resolution you can always downsample the 1080p cameras.

user-860618 19 November, 2019, 06:24:36

It's in the options

wrp 19 November, 2019, 06:25:21

@user-771cfd please send an email to sales@pupil-labs.com with your request

user-771cfd 19 November, 2019, 06:31:16

@user-860618 @wrp Thanks a lot! Will send an email, we didn't buy enough cameras last time. 😀

user-abc746 19 November, 2019, 13:52:42

Hi everybody, do you know where to find further information about calibration? I use Pupil Labs Core to measure particular characteristics of vergence eye movements, and I would like to know which kind of calibration is best for this kind of measurement.

user-abc746 19 November, 2019, 13:52:53

am i in the right place for this question?

user-abc667 19 November, 2019, 21:52:46

@papr OK, seriously naive question, but it matters -- it's possible to put the eye camera arm extender onto the appropriate part of the frame in either of two orientations. The bottom of the extender has horseshoe-like cuts in it. When the extender is put on the arm, does it go on horseshoe-end first? We had been using it this way but having trouble getting a view of the pupil when people are looking down on the table (where we have them taking a pen and paper test). By reversing it the camera sits lower (and closer to the cheek), and seems to give us a much better pupil view for those looking-down situations. Is this backwards? Even so, it is ok to use it this ways? Thanks!

papr 19 November, 2019, 21:55:01

@user-abc667 The point of the extenders is to provide a better view of the pupil during pupil detection. It does not matter in which direction you use it, as long as you are getting better results than in the other configurations.

user-abc667 19 November, 2019, 23:09:02

@papr Excellent; thanks.

user-92a820 20 November, 2019, 01:18:55

Hey pupil labs! We've got a Vive add-on that is no longer being recognized by any PC at all - USB malfunction, device not recognised, code 43 "device descriptor failed", which happens regardless on all PCs tried, whether the device has been previously installed or not. Apart from the usual USB troubleshooting, is there anything you guys would recommend? I'm thinking reflashing the firmware somehow?

wrp 20 November, 2019, 03:21:19

@user-92a820 I would recommend that you contact sales@pupil-labs.com so that our Hardware team can get in touch with you to perform remote diagnostics and/or repair of this unit.

user-92a820 20 November, 2019, 03:22:20

Great, thanks.

user-4f0036 20 November, 2019, 16:12:24

Hi I would like to ask you, is it possible to upgrade mobile bundle Motorola Z3 to Android 9 Pie? Pupil Labs Mobile will work under it?

user-86d8ec 20 November, 2019, 20:54:09

Hi, my group is looking to purchase Core, I was wondering if there was a tutorial available as well as sample eyetracking data I could practice using pupil player with as well?

user-31df78 21 November, 2019, 00:41:56

Hi, is there a plugin that allows for manual coding of video by frame or by fixation? Like let's say I want to assign a label representing some AOI to each frame as I go through them, is this functionality built into any plugin already?

papr 21 November, 2019, 08:34:35

@user-31df78 None of the built-in plugins provide manual AOI annotation functionality. Pupil Capture and Player only provide automatic AOI tracking using fiducial markers: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

papr 21 November, 2019, 08:35:18

@user-86d8ec You can download a sample recording here: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing

user-31df78 21 November, 2019, 08:36:08

Great, thanks for the response @papr Just looking to confirm for PI.

papr 21 November, 2019, 08:36:58

@user-86d8ec We do also have some Jupyter notebooks that show case how to process exported data by Pupil Player https://github.com/pupil-labs/pupil-tutorials

papr 21 November, 2019, 08:38:22

@user-31df78 In this case, I would recommend to move your question to the invisible channel.

papr 21 November, 2019, 08:39:29

@user-4f0036 Yes, the recent Pupil Mobile v1.2.3 version does work on Android 9 to our knowledge.

user-e637bd 21 November, 2019, 14:10:35

Sorry if its out there, but I can't find any documentation on telling pupil player what the name of the cameras should be. We recorded on the pupil mobile app, and it let us change the name of the cameras from eye 0 to Right Eye, etc. Now dragging the folder into pupil player, it says it cannot find any world or eye camera. Where is the name settings?

papr 21 November, 2019, 14:12:16

@user-e637bd Hey 👋 you will have to rename the video and respective .time files before opening the recording in Player

user-e637bd 21 November, 2019, 14:14:30

On another note, I asked before about recording camera settings on the phone. So far the OnePlus 6T is working well for us, if others are also interested. We only find some issues with the app not recording the settings we stored for each camera. I think there are two possibilities, but I'm not totally sure. The first: it seems to work if we set the settings, start recording, and then open the camera image to check again that the settings are right - then it records correctly. The other, maybe more accurate, explanation is that sometimes the cameras connect and disconnect (the wire seems very fragile on the headset), and because of this the app falls back to default settings rather than keeping the user settings for the camera. If it is the latter, and it's not possible now, maybe a feature could be to store user settings and auto-apply them?

user-e637bd 21 November, 2019, 14:16:30

@papr thanks for the fast response! That is a lot of renaming, I guess this was a mistake haha. Is the name stored somewhere in a python script by any chance, in a single location, so I can just edit that?

papr 21 November, 2019, 14:19:37

If you run from source, yes, let me look it up

user-c5fb8b 21 November, 2019, 16:24:52

@here 📣 Pupil Software Release v1.19 📣 This release primarily addresses improvements/changes to the Fixation Detector and Binocular Gaze Mapper.

Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.19

user-9212ea 22 November, 2019, 00:54:47

Hello, we need to use eye tracking technology to run UX usability tests (web and mobile apps). Can someone recommend the best option and configuration?

user-35de32 22 November, 2019, 21:33:55

Hi everyone, Is there a study that shows the eye tracking accuracy vs the eye image resolution? Since it matters for large scale data collection going from 192x192 resolution to 400x400 for example. Thanks

user-a10852 22 November, 2019, 23:00:58

Hello! Does anyone know if there is a way to have the offline pupil detection applied to only a small section of the recording? Is it possible to trim the entire pupil recording (I know it is possible to trim what is exported, I mean the entirety of what we see in pupil player). Thanks!

user-c629df 23 November, 2019, 05:35:10

Hi everyone! I'm currently using Pupil Labs for an experiment where we require participants to look down on a computer screen. I'm wondering if there is a way to avoid disturbance from eyelashes during the process? Thank you!

wrp 23 November, 2019, 05:52:54

@user-c629df have you tried setting Region of Interest in eye windows? Have you tried eye camera arm extenders?

user-c629df 23 November, 2019, 06:13:53

@wrp Thank you for your response! May I ask how to set Region of Interest in eye windows?

user-c629df 23 November, 2019, 06:16:31

@wrp Oh nevermind! I figured it out!

papr 23 November, 2019, 11:33:22

@user-a10852 This would mean removing data permanently from the original recording. Pupil Capture and Player follow the policy to never delete previously recorded/exported data. Therefore, trimming the entire recording is not possible in Pupil Player.

user-abc667 23 November, 2019, 19:26:07

@papr Have set up several surfaces (actually paper forms our test subjects write on) and wondered about supplying their dimensions, so that gaze is accurately translated to surface position. Is it the distance between the edges of the tags or their centers? If the edges, inside or outside edge? From the appearance of the surface on the screen, it looks like top left to top right corners for width and top left to bottom left for height. Yes? Thanks.

user-36b0a3 24 November, 2019, 11:17:51

Hey all, I'm trying to go back through some old data that I originally pulled in Capture 0.3.9.

I installed Player 1.19 (macOS) and dragged the directory to it, and it instructed me to use 1.17 to update the format.

I got a copy of 1.17, dragged the directory over, and it worked no problem.

I then tried with the next one. "Invalid Recording / There is no info file in the target directory". This makes no sense, since all the same types of files are in the new directory.

I try another. Same.

I try all of the others. Same.

Since it doesn't seem likely that just happened to start with the only recording that could be used (it wasn't the first or the last from the study; it was one in the middle), I assumed something was messed up with the Player 1.17 file or something. I installed Player 1.17 on Windows and tried there. Same behavior.

These all still open in 0.3.9.

I really need to get into these things, and will likely need the current version of the software. Any thoughts?

user-36b0a3 24 November, 2019, 11:20:37

Holy crap I just figured it out.

user-36b0a3 24 November, 2019, 11:21:05

I realized that I hadn't tried and failed to open the recordings past the first one in 1.19.

user-36b0a3 24 November, 2019, 11:21:32

I just tried another one, first doing it in 1.19, then following the direction to do it in 1.17. It worked.

user-36b0a3 24 November, 2019, 11:21:36

That is weird...

user-36b0a3 24 November, 2019, 11:24:08

Okay, so what I have to do is try and fail to open it in 1.19, then open it in 1.17, then open it again in 1.19 to fully update.

papr 24 November, 2019, 13:27:26

@user-36b0a3 This procedure is unfortunate but correct. There is a bug in v1.16/v1.17 which prevents Player from correctly detecting the recording version of these old recordings. Please see the v1.16 release notes section on "Deprecated Recordings" for details on why these recordings are not supported in Pupil Player versions newer than v1.17 https://github.com/pupil-labs/pupil/releases/tag/v1.16

user-00cf0f 25 November, 2019, 01:16:37

@papr I noticed in June that you said the head pose tracker is not supported on Windows yet... has that been updated?

user-e7102b 25 November, 2019, 02:22:15

Hi @user-fbd5db I noticed some messages from you earlier in the year inquiring about creating a batch exporter for pupil data. I'm curious if you ever developed this code, and if so, would you be willing to share? I have scripts that will batch export annotation data and pupil diameter, but I'm struggling to add the functionality to export gaze and surface data.

user-e7102b 25 November, 2019, 02:22:27

Thanks

user-e7102b 25 November, 2019, 02:31:36

Hi @papr are you aware of any other users that have developed batch exporter scripts? I'm attempting to adapt the scripts that I originally forked from another user (https://github.com/tombullock/batchExportPupilLabs) to export surface and gaze data in addition to pupil diameter, but am struggling due to lack of python experience. Cheers

user-36b0a3 25 November, 2019, 06:29:37

Hey folks, back atcha with another question (got all the videos updated and tables exported; thanks!):

I will likely need to explain the pupil detection confidence value. The Kassner et al. (2014) paper states:

Detect edges using Canny [14] to find contours in eye image. Filter edges based on neighboring pixel intensity. Look for darker areas (blue region). Dark is specified using a user set offset of the lowest spike in the histogram of pixel intensities in the eye image. Filter remaining edges to exclude those stemming from spectral reflections (yellow region). Remaining edges are extracted into contours using connected components [29]. Contours are filtered and split into sub-contours based on criteria of curvature continuity. Candidate pupil ellipses are formed using ellipse fitting [16] onto a subset of the contours looking for good fits in a least square sense, major radii within a user defined range, and a few additional criteria. An augmented combinatorial search looks for contours that can be added as support to the candidate ellipses. The results are evaluated based on the ellipse fit of the supporting edges and the ratio of supporting edge length and ellipse circumference (using Ramanujan's second approximation [18]). We call this ratio "confidence".
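
In code (a rough illustration only, not the actual Pupil implementation; the function names and the cap at 1.0 are my own), the ratio could look like:

```python
import math

def ellipse_circumference(a, b):
    # Ramanujan's second approximation for an ellipse with semi-axes a, b
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def pupil_confidence(supporting_edge_length, major_radius, minor_radius):
    # Ratio of the edge length supporting the ellipse fit to the fitted
    # ellipse's circumference; 1.0 means the edges trace the full outline.
    circumference = ellipse_circumference(major_radius, minor_radius)
    return min(supporting_edge_length / circumference, 1.0)
```

So a pithy description might be: "the fraction of the fitted pupil ellipse's outline that is actually supported by detected image edges."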

Can anyone propose a pithy, more-accessible description of this ratio?

user-36b0a3 25 November, 2019, 07:08:38

โ€”Or point me to another paper to cite?

user-abc667 25 November, 2019, 18:45:45

@user-c629df Regarding looking down -- we have the same problem and are experimenting with 3-d printing camera arm extenders that lower the cameras. Happy to share the results. One other trick is to put the existing extenders on backwards; this significantly lowers the cameras, but also brings them close in to the face, possibly too close for some people.

user-c629df 25 November, 2019, 20:04:22

@user-abc667 That's a good idea! Thanks for recommending!

user-c629df 25 November, 2019, 20:04:57

Hi everyone! I'm wondering if there is any documentations for pupil labs eye tracking data analysis? Thanks for helping!

papr 25 November, 2019, 21:19:34

@user-c629df what data are you interested in? I might be able to point you in the right direction.

user-abc667 25 November, 2019, 21:35:46

@papr Have set up several surfaces (actually paper forms our test subjects write on) and wondered about supplying their dimensions, so that gaze is accurately translated to surface position. Is it the distance between the edges of the tags or their centers? If the edges, inside or outside edge? From the appearance of the surface on the screen, it looks like top left to top right corners for width and top left to bottom left for height. Yes? Thanks.

papr 25 November, 2019, 21:39:02

@user-abc667 you can adjust the surface relative to the markers. This way you do not need to worry about the positioning of the markers.

user-abc667 25 November, 2019, 21:47:11

@papr Sorry, I'm being unclear. I have carefully put the markers at the four corners of the page, so there's never any question about where they are, or where the surface is (the forms get re-used for multiple subjects). As I understand it, the conversion from normed x and y positions to scaled x and y depends on the form "dimension", which I take to be defined by the position of the fiducial markers, yes? As we need the most precision we can get, I was checking that the dimensions of the form are determined by the outer edges of the fiducials (using the AprilTags 36h11 square markers), yes? (Rather than, say, the center of the fiducials.) Or am I misunderstanding you?

papr 25 November, 2019, 21:51:48

@user-abc667 by default, if you add a surface without editing it, it will wrap around the four outermost corners of all detected markers. The marker definition is previewed by the green(?) boxes. I do not remember the exact color.

user-abc667 25 November, 2019, 21:53:46

@papr Yes, that's what I thought, and that's what I needed to know -- outermost corners. Thanks!

user-abc667 25 November, 2019, 22:32:58

@papr We're going to have camera extender arms 3d printed that lower the cameras (per some of my earlier posts). Do you happen to have a recommended material to use?

papr 25 November, 2019, 22:49:47

@user-abc667 I do not have enough knowledge in this field to make recommendations. But @user-755e9e might be able to tell you more. Also, are you aware of https://github.com/pupil-labs/pupil-geometry/blob/master/Pupil%20Headset%20triangle%20mount%20extender.stl ?

user-abc667 25 November, 2019, 23:03:10

@papr Yes, I have that file, and the CAD savy folks in my Lab modified it to make extenders that are the same shape but drop the camera lower. Sending the file out for printing even as we speak (type). Thanks.

user-c37dfd 25 November, 2019, 23:26:07

Hi. I have been getting a couple errors when using pupil mobile. I have two phones that are both fully up-to-date and have the most up-to-date app. Only one of the phones is giving me these errors and I wanted to see if anyone else has encountered this issue. Thanks.

Chat image

user-c37dfd 25 November, 2019, 23:26:14

second error

Chat image

user-755e9e 26 November, 2019, 06:41:49

Hi @user-abc667, I would recommend an SLS 3D print since it's strong and flexible.

papr 26 November, 2019, 07:03:33

@user-c37dfd I have encountered these on Android 10. Is it possible that the other phone is running Android 9?

user-c9d205 26 November, 2019, 10:11:08

Is there a guide on surface tracking you could link me to?

user-b8789e 26 November, 2019, 16:37:18

Hello there! I'm from Rio, Brazil. Is anyone offering eye tracking services here, or in Latin America? We have a project starting in 10 days, and we need to make sure that everything will run fine. We do not have a Pupil Core or Invisible yet, but it is possible to buy a used one.

user-e7102b 26 November, 2019, 20:26:43

Hey @papr, I'm sorry to keep bothering you about this, but I really need to figure out how to batch export pupil labs data. Currently I'm able to export annotations and pupil diameter using the scripts in this repo (which is linked to pupil labs community) https://github.com/tombullock/batchExportPupilLabs . I just need to add surface and gaze data export functionality. Essentially, all I want to do is have the script automate an export from pupil player with the "annotation player", "offline surface tracker" and "raw data exporter" plugins activated. I've been attempting to edit the scripts in the repo to add this functionality but have not gotten very far (I've uploaded my edits to a new folder in the repo). Is there perhaps an easier way to do this that I'm missing? Presumably these functions already exist in pupil_src, so it should just be a case of writing a wrapper function to loop through all data files in a directory? Any guidance would be appreciated. Thanks.

user-a98526 27 November, 2019, 04:17:03

Hi @papr, I want to use gaze coordinates from Pupil Labs to control a robot, and I want to know which of Core and Invisible is more accurate. In addition, there is another question: which of Core and Invisible is easier to develop with?

papr 27 November, 2019, 08:58:53

@user-c9d205 This is the link to our surface tracking documentation https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

papr 27 November, 2019, 09:48:12

@user-e7102b

> offline surface tracker

Well, this would not be as easy, because this requires a series of different steps, namely 1. marker detection, 2. surface definitions/tracking, 3. gaze mapping.

You might be able to extract the surface data that was recorded in realtime to the surfaces.pldata file. Would this be sufficient?

papr 27 November, 2019, 10:00:18

@user-a98526 In general, your application would receive gaze via our network apis. The network apis for Pupil Invisible and Pupil Core are slightly different, but for both there are examples on how to access the data in Python.

Gaze coordinates are relative to the world camera of the glasses/headset that the subject is wearing. So you could implement a simple "look to the left" -> "robot steers to the left" control software. For this approach, I suggest using Pupil Invisible due to the minimum amount of setup that it would require.

In case you want a "looking at door" -> "robot drives to door" control mechanism, the setup would be far more complex. I can give more details if necessary, but in general this would require running Pupil Capture with the head pose estimation plugin. It is still possible to use both types of eye trackers here.
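
For the simple "look to the left" variant, the decision logic itself is tiny. A sketch of the mapping step (thresholds and the deadband are illustrative guesses; the norm_pos and confidence fields come from the gaze data you receive over the network API):

```python
def steer_command(norm_x, confidence, deadband=(0.4, 0.6), min_conf=0.6):
    """Map a gaze point's normalized x coordinate (0 = left edge of the
    world image, 1 = right edge) to a steering command. Returns None
    inside the deadband or when the sample's confidence is too low."""
    if confidence < min_conf:
        return None
    lo, hi = deadband
    if norm_x < lo:
        return "left"
    if norm_x > hi:
        return "right"
    return None
```

In practice you would feed this from a subscription to the "gaze." topic and forward the result to the robot.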

user-e7102b 27 November, 2019, 16:42:08

@papr yes, extracting the surface data recorded in realtime to surfaces.pldata would actually be ideal. I only suggested offline surface detection because in order to export surface data I'm required to activate the "Surface Tracker" plugin when I export data via the gui, and this plugin appears to use offline surface detection. I'm attempting to extract and write out the realtime surface data in the following script (lines 189-278) and I think this is nearly correct, but it seems to only write out one row of surface data per world camera frame, and fails for some datasets (https://github.com/tombullock/batchExportPupilLabs/blob/master/Add_Surface_Gaze_Export_Unfinished/extract_diameter.py)

papr 27 November, 2019, 16:43:46

@user-e7102b one surface datum per surface per world frame in which the surface was detected

user-e7102b 27 November, 2019, 18:09:11

@papr ok, so perhaps that's why it's currently failing for some datasets, if the gaze was off the surface? If I edit my script and write out datum['gaze_on_srf'] it seems like the entire data structure is being written out (which is what I want), but now the formatting of the outputted .csv is messed up, with multiple data rows per row in the .csv. Is there an example in pupil labs src that would show me how to write this out correctly? I've searched but haven't been able to find anything. Thanks

papr 27 November, 2019, 18:12:28

@user-e7102b can you send me an example pldata file with timestamps for which it fails?

user-e7102b 27 November, 2019, 18:28:01

@papr sure. This one breaks the current version of my script. However, even when it does work, I'm still only getting one surface datum per world frame (when detected). If I change my script to output the full datum, this data file does not fail, but the outputted .csv is all messed up. https://www.dropbox.com/sh/t70f8nq1micuc5p/AADW2ULoj5fi_A3o39XoNbW1a?dl=0

user-dfeeb9 27 November, 2019, 19:01:04

We had a recording that accidentally went for far too long, it also looks like while the video recording itself is intact, some data is missing. Specifically, the timestamp numpy files. We have eye1, eye0, gaze.pldata, info.csv, and pupil.pldata

Is there any way we can create a new set of timestamp files from this information or load the current data into pupil player without it crashing? There may also be a problem with memory given that the video files themselves are gigantic

papr 27 November, 2019, 19:38:40

@user-dfeeb9 you can reproduce eye timestamps from pupil.pldata. What about world? Do you have a world file? You might want to modify this function to only read part of the data: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L145-L146

papr 27 November, 2019, 19:46:58

@user-e7102b

> I'm still only getting one surface datum per world frame

Since the recording only includes one detected surface, this is expected.

user-dfeeb9 27 November, 2019, 19:49:48

@papr We don't have world, and I don't know if it was necessarily recorded because the experiment in mind only concerns the pupils

papr 27 November, 2019, 19:50:43

@user-dfeeb9 Ok, that is fine. Then you will also have to edit the duration in info.csv. In a world-less recording the Player timeline will be set to recording_start to recording_start+duration

papr 27 November, 2019, 19:51:36

But the complete data is loaded nonetheless. So you might need to edit the pupil file before opening it in player

papr 27 November, 2019, 19:52:16

Btw what stopped the recording? Did it crash due to insufficient memory? Or disk space?

user-dfeeb9 27 November, 2019, 19:53:31

I'm looking into this too, because if it crashed the video headers are probably busted which is an extra pain

papr 27 November, 2019, 19:53:42

correct

user-dfeeb9 27 November, 2019, 19:53:47

For the record this isn't a study i'm running but one I'm called in to help with whenever pupil stuff goes wrong

user-dfeeb9 27 November, 2019, 19:54:05

Thanks a tonne for the help at this hour pablo

user-dfeeb9 27 November, 2019, 19:54:08

One last question

papr 27 November, 2019, 19:54:42

> video headers are probably busted which is an extra pain

This is very likely

user-dfeeb9 27 November, 2019, 19:54:55

The func you linked me looks independent of any classes etc. What would be your advised method to call this function

papr 27 November, 2019, 19:56:26

I would not call it directly as it is as it loads all the serialized data in memory which will likely exhaust your memory. How big is the pupil.pldata file?

user-dfeeb9 27 November, 2019, 19:56:55

I don't have it on hand so I'll have to grab it and check, but you should know that the data in time is at least a half hour

user-dfeeb9 27 November, 2019, 19:57:06

possibly/likely quite a bit larger than that in fact

user-dfeeb9 27 November, 2019, 19:57:27

approximately 6gb for the whole directory including both pupils

user-dfeeb9 27 November, 2019, 19:57:32

video files etc.

papr 27 November, 2019, 19:57:44

ah well, but this should be doable

papr 27 November, 2019, 19:57:58

The reason Player crashes is not the length but the missing files

user-dfeeb9 27 November, 2019, 19:58:36

I figured that was probably the case, so i presume that if the mp4s are intact I can replace the missing timestamp files and it'll go

user-e7102b 27 November, 2019, 20:00:52

@papr so does the single surface datum that we get per world frame contain multiple gaze positions? When I export the data using the GUI I get a gaze_positions_on_surface_XXX.csv file. This is essentially exactly what I'm looking to output with my script.

user-8bf70c 27 November, 2019, 20:01:01

I'm running Capture on a Linux machine and am unable to record audio. The 'voice and sound' is selected on settings, and the audio capture plugin is selected. But on the audio capture, the dropdown only shows 'no audio' rather than options for an input device. The laptop's built in mic works fine, but it doesn't seem to be communicating with the Pupil software. Any thoughts?

papr 27 November, 2019, 20:03:13

@user-e7102b yes, all gaze pos during a frame will be mapped to the surface
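
So flattening is just a nested loop over gaze_on_srf. A minimal sketch (field names follow the surface datum format as I remember it; please verify them against your own surfaces.pldata):

```python
def flatten_surface_datum(datum):
    # `datum` is one deserialized surface entry from surfaces.pldata.
    # Yields one row per gaze point that was mapped onto this surface.
    for gaze in datum.get("gaze_on_srf", ()):
        x, y = gaze["norm_pos"]
        yield (datum["name"], datum["timestamp"], x, y,
               gaze["on_srf"], gaze["confidence"])
```

Writing each yielded tuple as a csv row should give you one row per gaze position per surface per world frame.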

papr 27 November, 2019, 20:04:35

@user-8bf70c We are currently investigating a series of audio recording issues on all platforms. We will keep the channel up-to-date regarding developments in this regard.

papr 27 November, 2019, 20:07:34

@user-dfeeb9

import os
import msgpack
import numpy as np

def restore_pldata_timestamps(directory, topic):
    msgpack_file = os.path.join(directory, topic + ".pldata")

    ts = []
    with open(msgpack_file, "rb") as fh:
        # Each pldata record is a (topic, payload) pair; use a separate
        # name for the record topic to avoid shadowing the `topic` argument
        # (otherwise the output filename below would be wrong).
        for record_topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            data = msgpack.unpackb(payload, raw=False, use_list=False)
            ts.append(data["timestamp"])

    ts_file = os.path.join(directory, topic + "_timestamps.npy")
    np.save(ts_file, ts)

user-dfeeb9 27 November, 2019, 20:08:20

wow thanks!

user-e7102b 27 November, 2019, 20:08:30

@papr ok, so essentially all I need to do is load the surfaces.pldata file, loop through those gaze positions and output each position to my .csv file? If that's the case, then is there an example in pupil src that shows how to do that (because presumably the exporter performs something similar)?

user-dfeeb9 27 November, 2019, 20:08:52

This also means that the video files are not read in any way while generating the timestamp numpy files - presumably because the pldata has all the information anyway

user-dfeeb9 27 November, 2019, 20:09:04

Makes sense seeing as it's just more expensive to parse the video inputs again, neat

user-dfeeb9 27 November, 2019, 20:09:21

Unless i'm mistaken of course

papr 27 November, 2019, 20:11:17

@user-dfeeb9 No, the video is not used to restore the time information in this case.

user-dfeeb9 27 November, 2019, 20:11:25

excellent

user-e7102b 27 November, 2019, 20:14:56

@papr great - thanks!

user-e7102b 27 November, 2019, 20:31:21

@papr sorry - one more question - can you direct me towards where the "_export_gaze_on_surface" function is called?

papr 27 November, 2019, 20:32:15

give me a sec, I am working on a version that does not require to replicate the complete Player pipeline

papr 27 November, 2019, 21:17:45

@user-e7102b https://nbviewer.jupyter.org/gist/papr/87157c5da93d838012444f4f6ece6bcc

user-e7102b 27 November, 2019, 21:34:35

@papr awesome - thank you!

user-a98526 28 November, 2019, 14:58:53

@papr Thank you very much, I just want to

user-a98526 28 November, 2019, 15:00:08

"looking at door" -> "robot drives to door", I would love to get some information about this

user-dfeeb9 28 November, 2019, 18:06:17

@papr Hi, regarding the recovery job from yesterday. It does indeed seem the recordings of both eyes are corrupt. I'm primarily on Linux but have access to Windows too; do you have any header recovery methods you use yourselves for cases like this that you might recommend? No worries otherwise, I'll get around it myself. Sorry I also pinged you on the wrong welcome channel

papr 28 November, 2019, 18:09:21

@user-dfeeb9 I cannot give recommendations in this regard

user-e7102b 28 November, 2019, 18:19:50

Hey @papr thanks again for the help yesterday with creating the surface export script. This works great. The only issue is that all my data are recorded in the old pupil data format (I think v1.5) and need updating before the surface export will run. I use a function from update_methods.py (update_recording_to_recent) in my original batch export script to automate this update. However, update methods seems to have disappeared in the latest version of pupil_src. Can you tell me where the update function lives now? Thanks

papr 28 November, 2019, 18:24:23

@user-e7102b https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/pupil_recording/update

Check out the __init__.py specifically

papr 28 November, 2019, 19:54:31

@user-a98526 First, you need to set up a reference coordinate system in which one can track the head pose. Once you have this set up, one can map gaze into the reference system. Check out our video tutorial on this: https://www.youtube.com/watch?v=9x9h98tywFI

Then, your robot needs to learn to place itself in this coordinate system. The simplest way to do so, would be to add a camera to the robot and to run a second Pupil Capture instance with the same headpose tracking algorithm in the same reference system.

You basically need to make sure that there is always at least one marker visible in the headset's and the robot's camera field of view.

Reading this now, it actually does not sound that complicated to set up 🤔

papr 28 November, 2019, 20:04:15

@user-a98526 @marc also had the idea for a relative control system. Instead of setting up an absolute coordinate system as in the approach above, you can simply put such a marker on the robot itself. When the robot is in the field of view of the eye tracker, one can take the inverse of the head pose (the 3D location of the marker) and calculate the necessary relative movement for the "move" command. This would require that the robot's marker and the move target are visible to the eye tracker at the same time.
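
Ignoring the marker's orientation, the core of that relative calculation is a plain vector difference (a toy sketch; both positions are assumed to be (x, y, z) in the headset camera's coordinate system):

```python
def relative_move(marker_pos, target_pos):
    # marker_pos: the robot's marker position in camera coordinates
    # target_pos: the gazed-at target position in the same coordinates
    # The move command is the target expressed relative to the robot.
    return tuple(t - m for m, t in zip(marker_pos, target_pos))
```

A real implementation would additionally need the marker's orientation to rotate this vector into the robot's own frame.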

user-e7102b 28 November, 2019, 22:17:52

@papr thank you. It seems like I need "recording_update_to_latest_new_style" from the new_style.py script. When I try to import "new_style" it tells me I need pyglui. Given that I'm not using the gui, is there a way around this?

user-d672d4 28 November, 2019, 22:31:52

One of our pupil cameras is significantly darker than the other, with identical settings (confidence value holds constant at 0)

user-d672d4 28 November, 2019, 22:32:19

No setting changes seem to remedy this. Is replacement hardware available?

papr 29 November, 2019, 08:04:19

@user-e7102b I can't find the location where it is required. Maybe because it tries to import video_capture?

papr 29 November, 2019, 08:04:36

@user-d672d4 Please contact info@pupil-labs.com in this regard

user-0767a7 29 November, 2019, 19:40:51

hi all - I collected some gaze behaviour a while back with a binocular Pupil Core headset, and the data from one of the eyes is terrible. It's throwing off the detection of fixations of the other (good) eye. Is there a way to tell Pupil Player to ignore data from one eye when calculating fixations?

user-d672d4 30 November, 2019, 07:47:55

@papr Updating to the latest software version and resetting back to default settings was actually able to remedy the issue, but thanks for the response!

user-65eab1 30 November, 2019, 11:15:47

hi all, I am new to Pupil Labs. I have downloaded Pupil Capture, Player and Service, and I calibrated using Pupil Capture. However, I do not know how to get data out of Pupil Capture. Can you help me get real-time data from the Pupil Capture application? (on a Windows 10 machine)

user-aaa87b 30 November, 2019, 14:19:17

Hi all, I'm having a problem with Pupil Capture on my MacBook Pro (Sierra 10.12.6). Installation went quite ok, and both the world and eye cameras of our DIY headset are working. Everything seems to work ok, but whenever I run the calibration, Capture crashes at the end of the procedure and only the eye-camera window remains open. Any help? Thank you.

user-e7102b 30 November, 2019, 20:18:52

@papr yes, the problem is that when I try to import new_style, it tries to import functions from video_capture. This then fails on my machine because I don't have pyglui, and I'm unable to get pyglui to install successfully, so I'm stuck again. The old function "update_recording_to_recent" didn't require pyglui. Perhaps there is a way to run the new update script without needing pyglui? Thanks

user-e2056a 30 November, 2019, 23:51:12

Hi @papr, I asked earlier about adding the surface back in Pupil Player; the process was supposed to be the same as adding a surface in Pupil Capture. However, since Pupil Player doesn't show the world camera view, it's hard to tell whether the markers are actually recognized. Should I use Pupil Capture at the same time for this process?

End of November archive