@user-0eef61 Hey, moving this conversation from vr-ar to here. Can you please paste the exact error message you get when running the jupyter notebook?
Hi, I've been using the Pupil glasses for a year now and they have always worked great. Lately I've been using them from Ubuntu, and all of a sudden, when I try to plug them into a Mac or a Windows machine, it will not open the cameras; it reports that they are in 'ghost mode'. They still work fine on Ubuntu though. Any thoughts?
@papr I've attached a few pictures of the setup. Basically, we track eye movement while participants play slot machines, and we calibrate manually.
That's what we have started doing this past week because of issues with calibration. @papr
And we use this to calibrate manually! @papr
This was the setup previously. The calibration had gotten worse. Our new setup (cc picture 2) is being tested on the go. It isn't 100% (it won't work when someone is wearing mascara) but it is better than before. @papr
@papr also we don't track offline data as we are not interested in that kind of analysis for now. I'm more interested in the calibration not being accurate before I even begin the recording. Do you recommend updating the software?
Also, is there a way to log the history for each person and have it saved into the individual folders, with the relevant warning messages included? The capture log just gets overwritten in the same file every time, even though it does include real-time warning messages that would be useful for later perusal.
Hi, I am trying to run the fixation detector code from: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb I ran the previous code cells with my data files and all worked correctly, but when I try the fixation detector it gives this error: `%time fixations = list(detect_fixations(pupil_pd_frameL1VisualControl))` ^ `IndentationError: unindent does not match any outer indentation level`
`pupil_pd_frameL1VisualControl` is the name I have given my pupil_positions.csv file. It is just like `pupil_pd_frame` in the fixation detector code
I am trying to find how I can get the number of fixations in my recordings from pupil_positions.csv and how I can plot that number in a bar graph. I would also like to get the number of microsaccades as well if that is possible?
@user-0eef61 It seems you are mixing tabs and spaces to indent your code. Python cannot deal with a mix of tabs and spaces and gives you this error instead. The style guidelines recommend 4 spaces per indentation level, which is also what the notebook code on GitHub uses. I guess the code editor you are using is configured to use tabs instead.
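As an aside, a tiny illustration of the workaround (the string below is made up for the example, not code from the notebook):

```python
# A file that mixes tabs and spaces triggers IndentationError in Python.
# One pragmatic fix: replace every tab with four spaces before running it.
src = "def f():\n\tif True:\n\t\treturn 1\n"  # hypothetical tab-indented code
fixed = src.replace("\t", " " * 4)

# the converted source now uses a consistent 4-space indentation level
print(fixed)
```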
@user-0eef61 We unfortunately don't have any code examples on how to compute microsaccades.
Hi all. Please where i can find minimum requirements for pupil core?
@user-af938a Recommended Computer Specs - The key specs are CPU and RAM. We suggest at least an Intel i5 CPU (i7 preferred) with a minimum of 8GB of ram (16GB is better if possible). We support macOS (minimum v10.12), Linux (minimum Ubuntu 16.04 LTS), and Windows 10.
@user-c5fb8b Thanks for your reply. OK, so I am guessing it is because I am using Spyder and not Jupyter. I have now tried Jupyter Notebook and just copied and pasted the code as it was from the Pupil Labs tutorials. It shows this message
@user-c5fb8b What I don't like about Jupyter is that it takes a lot of time to run the code and make the plots and that's why I use Spyder instead
@user-0eef61 You can change the default indentation settings of spyder to use spaces instead of tabs. See here: https://stackoverflow.com/questions/36187784/changing-indentation-settings-in-the-spyder-editor-for-python
@user-0eef61 Apparently, there are empty fixations, which cause the warnings
@user-0eef61 With `np.seterr("raise")` you can cause this to raise an exception. This should help you look into what is going wrong. Call `np.seterr("raise")` once at the beginning of the notebook.
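To make the suggestion concrete, here is a small self-contained demo of what `np.seterr("raise")` does (the division below is just an artificial trigger, not the notebook's actual computation):

```python
import numpy as np

# a positional argument to np.seterr applies the behavior to all error
# classes (divide, over, under, invalid)
np.seterr("raise")

try:
    np.float64(0.0) / np.float64(0.0)  # 0/0 triggers an "invalid" FP error
    raised = False
except FloatingPointError:
    raised = True

print("FloatingPointError raised:", raised)
```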
@user-0eef61 Also, please be aware that data is filtered by confidence before the calculation. If all your data's confidence is too low, the script will operate on an empty list and show that warning
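For illustration, a sketch of that confidence-filtering step (the arrays and the 0.8 cutoff are assumptions for the example, not values from the notebook):

```python
import numpy as np

confidence = np.array([0.95, 0.20, 0.88, 0.10, 0.91])  # made-up pupil data
diameter = np.array([30.1, 5.0, 29.8, 4.2, 30.5])

threshold = 0.8  # hypothetical cutoff; low-confidence datums are dropped
mask = confidence >= threshold
filtered = diameter[mask]

# if every datum falls below the threshold, `filtered` is empty and any
# mean/std computation on it produces the warning discussed above
print(len(filtered), "of", len(diameter), "samples kept")
```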
We will add a check to the code to ensure that this does not happen
Many thanks for the quick responses @papr @user-c5fb8b I think that Jupyter is still running the code, but it is taking some time, so I am thinking of running the code again in Spyder and changing the indentation as @user-c5fb8b mentioned. I went to the advanced settings in Spyder and it gives this information:
I think it is currently set to use spaces, but is there any option I need to change?
@user-0eef61 This looks good, I guess. Maybe something went wrong while copying the code. I am not using Spyder, but maybe you can copy the code over and then search-and-replace tab characters with 4 spaces?
To be honest, I do not know why the calculations in the Jupyter notebook should be slower than in Spyder... In the end, they are both just an interface to the ipykernel in the background
Also @user-0eef61 how long is your recording? The fixation detector can require some time for larger recordings. The sample recording is 36 seconds and took 5 seconds to process on my machine. Note that the complexity of the algorithm seems to be quadratic, so having a recording that is 10 times longer will take roughly 100 times longer to process. This will also depend on your machine of course.
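Written out, the scaling estimate above looks like this (a back-of-the-envelope sketch; the 30-minute figure is an example, and the quadratic assumption comes from the observation in the message):

```python
# numbers from the message above: a 36 s recording took ~5 s to process
base_recording_s = 36.0
base_processing_s = 5.0

recording_s = 30 * 60  # hypothetical 30-minute recording
factor = recording_s / base_recording_s  # 50x longer

# assuming roughly quadratic complexity, processing time grows with factor^2
estimated_s = base_processing_s * factor ** 2
print(f"~{estimated_s / 3600:.1f} hours")  # ~3.5 hours
```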
My recordings are approximately 30 minutes per person, and I also put many participants into one data file. However, I have now tried with a file for one participant only, and it shows this in Jupyter:
@user-0eef61 Can you check that the data you are inputting has the same shape as the data that is passed in the original sample notebook? I mean the notebook runs without problems, so there must be something wrong with the data you are passing I assume...
ok, so the problem was that I forgot to save the .ipynb file to the folder where my data is but now it runs and shows this:
which is great but I would still be more keen on making this work in Spyder
May I ask what `Text(0.5, 1.0, 'Occurrences of Fixations')` means?
can I know from this the number of fixations?
No, this is the output of the last line of code in this cell, which is the matplotlib Text object of the title element. How experienced are you with data analysis in Python? You will have to do the heavy lifting of the data analysis yourself in the end. I recommend you read up on how to work effectively with Pandas dataframes and Numpy arrays. Regarding Spyder: as I said, you should have no problems if you manually find-and-replace your tabs with spaces. I am not sure why there are tabs anyway, since you already configured it to use spaces.
I am not very experienced with Python, but I know other programming languages, so I understand Python, just not perfectly. The figure is very nice and gives information about the fixations, but I was interested in understanding how to draw conclusions from my results, or how to interpret the figure. For that reason, I think it would be nice to also have the number of fixations
But I will have a look and see how I can achieve that
@user-0eef61 In this case, there is a variable called `fixations`. It is a list containing all fixations.
@papr is that as a column in the pupil_positions.csv or gaze_positions.csv data files? I am not sure if I have that
@user-0eef61 check the cell that threw the warning in your screenshot above
`fixations = ...` is the output of the `detect_fixations()` function
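A minimal sketch of how to get the count from that variable (the dicts below are made-up stand-ins; check the notebook for the actual keys of each fixation):

```python
# hypothetical fixations, shaped as dicts with a duration in milliseconds
fixations = [
    {"duration": 280.0, "norm_pos": (0.4, 0.5)},
    {"duration": 410.0, "norm_pos": (0.6, 0.3)},
    {"duration": 150.0, "norm_pos": (0.5, 0.5)},
]

n_fixations = len(fixations)  # the fixation count is just the list length
mean_duration = sum(f["duration"] for f in fixations) / n_fixations
print(n_fixations, "fixations, mean duration", mean_duration, "ms")
```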
Ah, yes!
so `fixations` stores a list of all the fixations that occurred?
correct
I think I managed to get the number of fixations
Very good!
That's great, thanks a lot for the advice!
Hi, sorry to send this again. I've been using the Pupil glasses for a year now and they have always worked great. Lately I've been using them from Ubuntu, and all of a sudden, when I try to plug them into a Mac or a Windows machine, it will not open the cameras; it reports that they are in 'ghost mode'. They still work fine on Ubuntu though. Any thoughts?
@user-2be752 Hey, sorry for not responding earlier. I must have lost track of your previous question.
Do I understand correctly that you were able to use the Pupil Core headset successfully on all three platforms before and now only Ubuntu remains functional?
yes, exactly
I have an explanation for Windows but not for macOS... On Windows, the drivers can be overwritten by the operating system's default drivers after a Windows update. On macOS, though, there is no need to install drivers.
@user-2be752 What mac hardware and operating system are you using?
I recently updated my mac.. that's maybe why?
@user-2be752 Updated to what operating system version exactly?
macOS catalina version 10.15
I updated my Mac to that version, too. But I did not test Capture with an actual headset yet. I can tell you more tomorrow.
okay
it isn't working on windows either, though
Regarding troubleshooting Windows see https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting
thanks! will wait for your update on mac tomorrow !
@user-2be752 I was able to connect a Pupil Core headset successfully to Pupil Capture on my MacBook (Pro 2015) with macOS Catalina 10.15.1
@papr Weird... so what do you recommend I do? It is still only working on Ubuntu.
Just following up: would you recommend updating the software midway through data collection? We have been having issues with calibration. Our setup changes helped, so the problem is likely intermittent, but we would like it to be solved.
Hi, we are trying to record all data to Pupil Mobile and then do offline calibration in Pupil Player. When we download the data from the Moto Z3 (running Pupil Mobile 1.2.3) and drag/drop it into Player (v1.18), we get the attached error. We've tried reinstalling Player and updating Pupil, but we still get the same problem. This is all on a Windows 10 computer. When we load the same data on a Mac (Player v1.17), it loads properly. Any ideas on how to solve this issue? Thanks!
@user-c6717a could you share the info.csv file of that recording with us?
Hi @user-c6717a we were able to reproduce the issue and are preparing a bugfix release for v1.18, which will be up in the next couple of hours. Thanks for reporting the issue!
@user-c6717a We have updated the Pupil release to v1.18-4. You should be able to open your Pupil Mobile recordings with it as usual.
@user-2be752 Please update your operating system and restart your Mac. Unfortunately, it is difficult for us to assess the situation since we cannot duplicate the issue.
Also, have you followed the Windows troubleshooting instructions?
@user-908b50 Sorry for the delayed response.
1) By changing your setup you mean that you cover parts of the display with a paper? 2) There were no changes to the pupil detection/calibration marker detection/gaze mapping methodology in a while. I doubt that a newer version would help.
Also, I can only recommend downloading the newer version and giving it a try in a personal test.
Great news! Here's the info.csv file in case it helps.
Happy to report the update worked on my end. Thanks!
I also wanted to suggest a potential new feature for Pupil Mobile. Do you think it would be possible to add a Network Time Protocol (NTP) column (from your favorite NTP server, e.g. time.google.com) to the output data, synced with the time data that Pupil Mobile logs? We are trying to use NTP times to sync the Pupil Mobile devices with other phones and a Raspberry Pi. Would love to discuss how best to do this! Thanks!
We are also interested in using the Pupil Core outside. Any suggestions on how to make sure the IR cameras don't 'white' out? We are thinking of using a baseball cap, but still need to test whether that will work. Thanks!
@papr Yes, we cover the bottom part of the screen because it uses black and white text tabs that, for some reason, may interfere with calibrating. Okay, I will give it a try. Also, is there a way to generate a log message for each participant? Something that includes warning messages and calibration-related ones as well.
@user-908b50 no, per-recording log messages are not implemented yet
@user-c6717a this can be implemented based on the difference between system time (ntp synced) and synced time (pupil time). I can provide more details tomorrow.
Excellent. Thank you! To use the difference between the times, I think we'd just need to know the origin of the pupil time, then we can log both times in a separate app on the android phone to get the difference? Thanks!
@papr FYI/FYA (for your amusement): If you try to calibrate using the manual marker mode, by yourself, sitting with the laptop screen in the field of view, the system will constantly blurt out error msgs about moving the marker too fast. Actually, it's seeing double (the world cam view of it on the screen as well), and it took a while before I noticed the error msg about two markers detected. Ah well, live and learn. Offering this in case someone else trips over it.
Hello, how can I purchase the Pupil Labs Core from China?
Just stumbled across some fairly good advertising for Pupil on a YouTube video from the Royal Institute that was uploaded today: https://youtu.be/4pxYVlgSzCg at about 24 min in
@wrp I was referring to your previous mention. What I wanted was the "manual correction" of "gaze mappers" in Pupil Player. Thank you.
@user-5d1626 Pupil Labs ships products worldwide (including China). You can make a quote/order request via the Pupil Labs website - https://pupil-labs.com/products - alternatively you can send an email to sales@pupil-labs.com
@user-c6717a thanks for the link!
@user-a48e47 you can apply a manual correction post-hoc by defining a new calibration in Pupil Player. See the screenshot (specifically the Manual Correction section). It seems like you might have already found this. Does this answer your prior question?
@wrp Yes. That's what I asked. Another question: if I modify and export the x and y coordinates, will the modified coordinates appear in the exported raw data (e.g. the CSV files)?
@user-a48e47 If you make a new gaze mapper in Pupil Player, then the exported gaze coordinates will be generated by the new gaze mappers that you have created. So, yes, if you manually modify the x and y in your gaze mapper, that will be applied to the exported data as well.
@wrp But there is a problem. If the gaze coordinates are adjusted, the fixation detector will not run. I just want to shift the coordinates of the fixation and gaze data, and I wonder why one is adjusted and not the other.
@user-a48e47 Can you confirm that you performed these steps? 1. Changed the offset 2. Hit recalculate 3. Waited for it to finish 4. The fixation detector should start detecting 5. Waited for the fixation detector to finish
And if steps 4-5 do not happen, can you confirm that you see the yellow fixation visualizations in one place and the gaze data in an offset location?
@papr The updated Pupil Player build (1.18-4) now loads the folder from Pupil Mobile, but when I go to perform Offline Pupil Detection I get the following error:
Running PupilDrvInst.exe --vid 1443 --pid 37424 eye1
[ERROR] launchables.eye: Process Eye1 crashed with trace:
Traceback (most recent call last):
  File "launchables\eye.py", line 457, in eye
  File "shared_modules\plugin.py", line 323, in init
  File "shared_modules\plugin.py", line 356, in add
  File "shared_modules\video_capture\uvc_backend.py", line 72, in init
  File "shared_modules\video_capture\uvc_backend.py", line 171, in verify_drivers
  File "subprocess.py", line 729, in init
  File "subprocess.py", line 1017, in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified
I was able to get Offline Pupil Detection and Offline Gaze calculation to perform on my MacBook Pro yesterday, but am having trouble getting that to work on my Windows computer now. Any suggestions? Data is from Pupil Mobile (1.2.3).
@user-c5fb8b Please have a look at this tomorrow morning. Please verify that the Windows bundle includes the driver installer exe
Also wanted to follow up on how to use the pupil sync times to synchronize with a NTP server ("system time"). RE: @user-c6717a this can be implemented based on the difference between system time (ntp synced) and synced time (pupil time). I can provide more details tomorrow.
Hey guys. How is everyone? I'm in a bit of a pickle right now and just found this server, and was hoping to get some help. I'm working on a project that focuses on analyzing eyes at the extremes (we're trying to observe and classify severe nystagmus) and I'm finding that my confidence levels are extremely low. At the moment I have about 120 recordings where most of them have very low confidence levels during these periods of nystagmus. What would be the best way to increase these confidence levels after the fact? Sorry if this is a simple question, but I've been struggling a lot with it.
@papr Would you mind helping me via Chrome Remote Assistance?
@user-a48e47 I am sorry but we cannot provide this type of support on Discord. If you feel the need for a personal video support call, please consider getting a support contract from our website.
Nonetheless, we will of course try to help you figure out the problem. :)
Could you open the "Calibrations" and "Gaze Mappers" menu and share a screenshot of them?
@user-175561 Is it possible for you to share a short example with data@pupil-labs.com ? We could have a look at the videos and check the pupil detection works as expected.
@papr Thank you very much. I am sharing four screenshots: 1) Fixation Detector 2) "Calibration" in default state 3) "Gaze Mapper" in default state 4) Modified fixation (after modifying coordinates)
@user-a48e47 if you check the gaze mapper, it says that it is not calculated yet. Could you please hit recalculate and see if you get any error messages?
Without gaze data, the fixation detector won't work :)
OMG, my mistake. Thank you very much. Now I can see the yellow circle. If you don't mind, may I ask one more question?
@user-c6717a Regarding your error, please try opening Pupil Capture v1.18.4 first once before opening Player. It seems the file for installing the drivers is missing in Player. Can you tell me whether this worked?
Correction, player does not need this file. There is something else going wrong
We were able to isolate the issue. This is a Windows specific issue. We will let you know as soon as we have uploaded a corrected bundle.
@user-a48e47 Great to hear that it worked out. Feel free to ask as many questions as you like. :)
@papr 1) If I move the x, y coordinates, the result is not displayed immediately. The position is the same even after Pupil Player is done...
2) I know that yellow is fixation and green is gaze data. However, I can see that the two circles do not match in the recorded video. How should this be interpreted? I'll upload the screenshot.
@user-a48e47 After adding an offset and recalculating the mapper, it takes a while for the fixation detector to recalculate. You should see a progress indicator around the menu icon during this period. Only afterwards will the visualization be correct
@papr It seems my questions today did not come out clearly. I'll have to look into it in more detail. Thank you for sharing your precious time.
Hello, does anyone know how to resolve the "Install pyrealsense to use the Intel RealSense backend" message when opening Pupil Player?
Hi @user-a4a77a This is not an error, just an info note. Do you have an Intel RealSense 3D camera that you want to use together with Pupil? If not, you can ignore this message.
Hi @user-e10103 are you asking where you can download the application? You can find the latest release on GitHub, see the following link. Scroll down to the bottom to Assets, there are bundled versions for macos/linux/windows that you can download. https://github.com/pupil-labs/pupil/releases/latest
Hi @user-c5fb8b, Thanks for the reply, actually pupil player is not opening and it shows the following screen,
@user-a4a77a which version of Pupil Player are you running? And what operating system? Are you running from source or from a bundle?
@user-a4a77a please delete the user_settings files in the pupil_capture_settings folder and try starting Player again
@user-c5fb8b the issue seems to be that sometimes the restored player window is offscreen without the option to recover it
@user-c5fb8b Pupil Player was working yesterday, but today when I try to open it, it is not working; as @papr said, it is offscreen, although Pupil Capture is working fine.
@papr There is no folder with that name, and I can't find the user_settings files in other folders either
and pupil version is 1.18-4, recently downloaded from the web
@papr Thanks, it is working now after deleting the user files. I found the files in the recording folder.
@user-a4a77a in this case, starting a newer version fixed the issue, not the deletion of any files in the recording folder. I highly recommend not deleting anything in the recording folder unless you are 100% sure it can be deleted.
Thanks!
Hi, I currently have a Tobii X3-120 eye tracker and iMotions software. I would like to purchase the Pupil Labs Core and was wondering whether I should also purchase the iMotions software for the Pupil Labs Core. What value will it add? Can I do without it? Thanks
@papr Sent it. Thank you so much!
I saw a message last year that the oneplus 6t does not work with pupil mobile because of the android version 9. Is this resolved? Would the new pixel 4 work?
Hi, our research group has collected some data with our two HTC vive add ons (to start), but upon inspection of the data, it appears to be horrendously messy - with over 60% of the csvs generated containing inaccurate and low confidence reads. Our test brings the eyes to the peripherals by design and involves changing light levels, and also involves drugs which impact the eyes, so maybe this is the cause, but the videos themselves are very crisp and at all times I can identify the pupil position easily, whereas I see the algorithm messing up and missing the pupil. What steps can I take to generate accurate csvs from the videos?
Hi @user-121d2c did you record the calibration procedure? If so, you can do post-hoc pupil detection, modify the parameters there, and then re-calibrate (post-hoc gaze estimation), all in Pupil Player.
@user-c6717a Sorry for coming back to you so late, are you still having trouble with your time synchronization, or were you able to come up with a solution? I don't know what your exact setup is, but from what I understand, you should calculate the difference between pupil-time and system-time a couple of times for incoming data (and average). This time difference should stay constant while pupil is running, so afterwards you can apply it to all recorded pupil-time timestamps to convert them to system-time.
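A sketch of that conversion (all numbers below are invented for the example):

```python
from statistics import mean

# hypothetical paired clock readings taken at (nearly) the same instants:
# (pupil_time_s, system_time_s)
pairs = [
    (12.001, 1573000000.003),
    (13.502, 1573000001.505),
    (15.004, 1573000003.006),
]

# average offset between the two clocks, assumed constant while running
offset = mean(sys_t - pup_t for pup_t, sys_t in pairs)

# convert recorded pupil timestamps to system time post-hoc
pupil_timestamps = [12.001, 12.051, 12.101]
system_timestamps = [t + offset for t in pupil_timestamps]
print(system_timestamps[0])
```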
@user-121d2c If you share a small example with data@pupil-labs.com we might be able to give concrete recommendations for the pupil detection parameters.
"involves drugs which impact the eyes": I guess this means that the subjects have either very small or very large pupils. In these cases, you need to adapt the "min pupil"/"max pupil" parameters in the pupil detector menu accordingly.
@user-e637bd Pupil Mobile supports android version 9 by now, yes. I do not know about the Pixel 4 though.
@user-86d8ec I am not sure you even need another copy of the iMotions software. You might be able to import Pupil recordings just fine already. I think iMotions support should be able to answer your questions in detail.
@user-c6717a In Pupil v1.18-35, we fixed the issue that caused Pupil Player to crash when running the Offline Pupil Detection on Windows.
Our work involves measuring the cognitive status of people, in part by having them take a variety of pen and paper tests. We've had some success, but often run into the problem that people who are looking down at the table top appear to Pupil to have closed their eyes. We have the extenders in place, but even so, eyelids and sometimes lashes block the view of the pupil. (The subject is still looking at the page, but is gazing down sufficiently that the eye cameras can't see their pupils.) I know this is a challenging use of any eye tracker, but wondered if there's any advice about how to deal with this. Are there longer extenders, for example? Any ideas, suggestions, and especially experience, would be great to hear. Thanks.
Hello, I'd be happy with an answer to a quick question: I record with 200 Hz eye cameras and a 120 Hz world camera in 2D mode. If I understand correctly, the exported data contains approximately 400 norm_pos_x and norm_pos_y gaze positions per second. Is this because the eye cameras do not record in a synchronized manner, so that when the eye timestamps are matched to the world camera timestamps, each eye contributes its own samples, and the number of gaze points per second is doubled? Thank you!
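For reference, the arithmetic behind the ~400 samples per second (a sketch; it assumes each eye sample is mapped to its own monocular gaze datum, which may not match every pipeline configuration):

```python
eye_camera_rate_hz = 200  # sampling rate per eye camera
n_eye_cameras = 2

# if every eye sample produces a gaze datum independently, the exported
# gaze stream can contain up to:
max_gaze_rate_hz = eye_camera_rate_hz * n_eye_cameras
print(max_gaze_rate_hz, "gaze datums per second")
```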
Hi, I ran into a problem with the Core setup with 2 x 200 Hz eye cameras while using the v1.13 software: the timing of the two eyes was not in sync. This happened twice in a row. I have since updated to 1.18 and have not had a problem with recordings. Is there any way I can go back and fix the timing of the previous recordings?
@user-c5fb8b thank you! Ideally I'd like to store the NTP (system) time in a log file every time the Pupil Mobile system makes a pupil-time stamp. Is there an easy way to integrate this into the output on Pupil Mobile? We have some Python code that is currently doing this on an rPi and a different Android phone. I think the main issue we are trying to solve is how to get the pupil timestamp written at the same time as an NTP timestamp. If we have that, even only a few instances, we could use your method to average and get the constant difference. Does that make sense? Thank you again for your time and help! I feel like adding an NTP timestamp option when Pupil Mobile is connected online would be very useful for many applications. It's one of the main methods our network and embedded systems engineers use to sync multiple networked systems (human brain implant, cell phone IMU, GoPro, rPi, etc.).
Hello, I just started using the glasses. I noticed that one eye-camera is upside down (see eye0 on the attached screenshot). Is this normal?
(never mind, I found my answer!)
@user-c6717a When Pupil Mobile starts a recording, it records the starting time in system and in pupil time. We basically measure the same point in time with the two different clocks. Now, assuming the clocks do not drift, you can calculate the difference between these two measurements and apply it post-hoc to all pupil timestamps
This is the simplest way. Alternatively, you would have to sync Capture to the unix epoch, and sync Pupil Mobile to Capture. This way Pupil Mobile stores everything in unix epoch
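A sketch of that start-time method (the variable names and values are assumptions for the example; check your recording's info file for the actual keys):

```python
# two measurements of the same instant, one per clock (made-up values)
start_time_system_s = 1573000000.0  # wall clock at recording start
start_time_synced_s = 2203.125      # pupil time at recording start

# assuming the clocks do not drift, this offset is constant
offset = start_time_system_s - start_time_synced_s

# apply post-hoc to every pupil timestamp in the recording
pupil_ts = [2203.125, 2203.130, 2203.135]
wall_ts = [t + offset for t in pupil_ts]
print(wall_ts[0])
```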
@papr "When Pupil Mobile starts a recording, it records the starting time in system and in pupil time. We basically measure the same point in time with the two different clocks. Now, assuming the clocks do not drift, you can calculate the difference between these two measurements and apply it post-hoc to all pupil timestamps" Very useful info -- where are these values recorded/stored so I can get to them? Thanks
@user-abc667 in the info csv file
@papr Excellent; thanks.
Hi everyone, we have recently started using the glasses for UX research, primarily on mobile devices. We're having real trouble with the accuracy of the glasses. Does anyone have experience using the glasses for UX research, and any tips on improving accuracy? The eye cameras appear to show that the pupils are tracking fine (red line around the pupils).
Hello, how can I export a smooth heatmap for a surface? The export is pixelated, and the heatmap smoothness setting is not working in my case.
Hello, we are working on a machine learning application with Pupil Labs Core and Python. We need to record the pupil diameter in real time and combine these measurements with those of other sensors, all managed by Python. The problem is that Pupil records with a slight delay, which then holds back all the other instruments. Do you have any suggestions for me and my team? Thank you in advance for your help.
Hello. I'm trying to use the Fixation Detector plugin with the Network API, but I can't find what topic name it sends data under, if it even does at all. Looking at the source code, it looks like it should be "fixations", but subscribing to that topic yields no data at all. Does that plugin send data over the network api?
@user-a4a77a The smoothness setting only relates to the size of the gaussian filter used on the heatmap. The pixelation comes from the fact that we scale the image using a nearest-neighbours approach, which visually preserves the original bin size of the heatmap.
@user-d3d852 Have you checked the accuracy field in the Accuracy Visualizer menu? What does it say after a calibration? Depending on your used detection and mapping approach, values below 1.5 degrees are considered good.
@user-4a4530 How much delay are you measuring? Does Capture run on the same computer as the script that receives the data?
@user-9d7bc8 Please make sure the fixation detector is running and that you calibrated successfully. You should see a yellow circle in the world window when a fixation is detected. Also, try subscribing to `fixation` (singular, not plural). Are you able to receive other data, e.g. `pupil` data?
Hi there, in the older data format, Pupil had a notifications dictionary with built-in notifications for calibration starting and finishing. Is this also available in the newer Pupil data format? I can't seem to find it. Is it now something we need to build ourselves? Thank you so much!!
@user-2be752 That is still there. All notifications are saved to `notify.pldata`.
@user-2be752 You can use this code to load the data from the file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L137
mmh, oh okay! Maybe I'm not loading it correctly then! I can only see data, topics and timestamps, and the only topics are pupil.0 and pupil.1
@user-2be752 Then you are looking at `pupil.pldata`. That is a different file!
oh!! totally my bad, thanks! you are the best
The old `pupil_data` file contained everything in one file. For technical reasons, we decided to split each data type into a separate file
@papr Capture run on the same computer, the delay at the beginning of the recording the delay is very low and then rises drastically. Where can I find out more about this?
@user-4a4530 this is very unusual! Does it keep increasing endlessly over time?
I would be interested in how you measure the delay. Such time measurements can be very tricky.
@papr No, it starts very low, then rises and stabilizes.
@papr At the moment I'm not in the lab, I'll try to write to you tomorrow so that I can give you more precise information!
@user-4a4530 ok, thank you!
hi all. I'm hoping to talk to someone about the fixation detection algorithm - I'm getting some really inconsistent output, and I have no idea why that would be
also, FYI, the documentation for the fixation detector (here: https://docs.pupil-labs.com/core/software/pupil-capture/#fixation-detector) is just broken links and infinite loops.
@user-0767a7 thanks for the feedback. We have an open issue for fixing the links and fixation detector documentation here: https://github.com/pupil-labs/pupil-docs/issues/320
@user-0767a7 could you define inconsistent/provide some more concrete information?
sure. I'll describe my protocol first, so you'll know where I'm coming from.
I'm looking at "the length of the fixation prior to event X". X is defined by a frame in the world output video, so I'm looking for the length of the fixation that immediately precedes a certain point in the video, defined by a given frame.
@user-abc667
Our work involves measuring cognitive status of people in part by having them take a variety of pen and paper tests. We've had some success, but often run into the problem that people who are looking down at the table top appear to the Pupil as having closed their eyes. We have the extenders in place, but even so eyelids and sometimes lashes block the view of the pupil. (The subject is still looking at the page, but is gazing down sufficiently that the eye cameras can't see their pupils. I know this is a challenging use of any eye tracker, but wondered if there's any advice about how to deal with this. Are there longer extenders, for example? Any ideas, suggestions, and especially experience, would be great to hear. Thanks.
I would suggest trying to angle the eye cameras up (orbiting about their ball joints) if you haven't done this already. Additionally, we do document the geometry of the mounts if you wanted to develop/prototype your own custom extenders: https://github.com/pupil-labs/pupil-geometry
there's a "maximum duration" setting for exporting fixations; my problem is that I'm getting completely inconsistent output depending on that max duration. if it's set to 1000ms, I'll find totally different fixations (different lengths, start frames, end frames, positions) than if I set it to 1100 ms.
@user-0767a7 are you using 2d or 3d mode?
@wrp good question. how can I tell?
@user-0767a7 Without looking at your data, I can only say getting different results might make sense given the change in parameters. Fixations are classified based on the dispersion and duration parameters. If the params are changed, then you will likely end up with different fixations being classified.
If you're curious, you can see the source code of the fixation detector here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py
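To make the dispersion/duration interplay concrete, here is a minimal I-DT-style sketch. This is not Pupil's actual implementation (see the linked fixation_detector.py for that), and the function name and parameters are illustrative only, but it shows why changing the max-duration cap re-segments the data into entirely different fixations:

```python
def detect_fixations(gaze, max_dispersion_deg=1.5, min_duration=0.1, max_duration=0.3):
    """Toy dispersion-based fixation classifier.

    gaze: list of (timestamp_sec, x_deg, y_deg) samples, sorted by time.
    Returns a list of (start_ts, end_ts) tuples.
    """
    fixations = []
    i, n = 0, len(gaze)
    while i < n:
        j = i
        # Grow the window until it exceeds max_duration or the dispersion limit.
        while j + 1 < n and gaze[j + 1][0] - gaze[i][0] <= max_duration:
            j += 1
            xs = [g[1] for g in gaze[i:j + 1]]
            ys = [g[2] for g in gaze[i:j + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_deg:
                j -= 1  # last sample broke the dispersion limit; back off
                break
        if gaze[j][0] - gaze[i][0] >= min_duration:
            fixations.append((gaze[i][0], gaze[j][0]))
            i = j + 1  # continue after this fixation
        else:
            i += 1  # no fixation starting here; slide the window
    return fixations
```

Note how a long stable gaze gets chopped at the max-duration boundary, so raising the cap shifts every subsequent window, which is consistent with seeing "completely different" fixations after a parameter change.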
I'm not sure that explains it. Here's an example of my problem: I set the max duration to 300 ms. I'll find 3 consecutive fixations; the first two are 300ms, the last one is 100ms. So, one fixation of 700ms duration. Now I set the max duration to 800 ms. I won't find a 700ms fixation, but rather something completely different: two 500ms fixations, non-adjacent, or a single 200ms fixation, and then nothing for another few seconds.
2d vs 3d mode - this is set when you make a recording in the Pupil Capture > General settings menu. You can take a look at exported gaze_positions.csv data from Pupil Player; if it has columns ending in *3d, then the 3d mode was used.
I'm using 3d mode
ok, thanks
@user-0767a7 maybe you would like to send a small sample recording to data@pupil-labs.com and someone from my team can follow up with you
if i max out the fixation duration (4k ms), I find almost nothing in a three minute video except for a few scattered 100ms fixations.
thanks for the source code, I'll bookmark that and look at it later
@wrp Sure thing. I'll write up an email and send you a brief recording tomorrow
thanks, I appreciate the help. cheers
@wrp Thanks for the info and the pointer. Yes, the first thing we tried was to angle the cameras up as much as possible. Still not enough. We'll try making a longer one and see if it does the trick. Meanwhile, if anyone else reading this has any experience to share, that would be great.
@user-abc667 Just to be sure, are you using the fish-eye lens that is shipped with the Pupil Core headset?
I've asked the research publications community, but I thought maybe one of you can help me.
Hello all, I am planning on using the Pupil Labs software for a research project, and I have been following the DIY instructions from the Pupil Labs website. I have a few questions for the community. So far, I have trouble with the webcams: I keep getting a timestamp error. Does anyone know how I can fix this? I don't have the error in front of me, but I can get the exact message another time. I am using Ubuntu 18.04, kernel version 5.0.0. Also, I have had trouble finding the correct UVC-compliant webcams. For those that have made their own devices, what webcam models have you had success with? Much appreciated.
I should point out that I have the software up and running but I am coming across the issues mentioned above.
Hey @papr: to follow up on your last response. Where can we 'measure' or record the pupil clock from pupil mobile? We would need to know the pupil clock time and the 'system' clock time at the same time to calculate that difference. I'm just not sure how to get both times logged simultaneously for even 1 point to calculate the difference between clocks. Do you agree? Or are we thinking about different solutions? Thank you again for your time and help!
@user-c6717a My solution only works after the effect since it requires the info.csv of an existing recording. Pupil Mobile stores the start time of the recording in both clocks there.
Ah I see. I missed that there were two times. Do you know the official definitions of the System Time and the Synced Time in that case? What do they correspond to on the phone?
So system time seems to be the Unix system time to the thousandths of a second. How is the 'synced' time generated relative to that? For instance, the start time (system) for 1 of my recordings is: 1573001832.048. The start time (Synced) for the same recording is 1677.583900708. What units is the Synced time in? Seconds? And what are these units relative to? For instance, what would be a 0 (synced) time? Is it time since opening the app? Sorry for all of the questions, but just trying to understand how to take it from here. Thank you again!
@user-c6717a synced time is a monotonic clock whose start point has been either not defined or synchronized to a Pupil Capture instance.
Units are always seconds.
Monotonic clocks are used for precise time measurements, where the difference between two clock measurements is more important than the absolute position in time (e.g. unix clock) of the time measurement.
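A small sketch of the post-hoc conversion described above: since info.csv stores the recording start in both clocks, their difference gives a per-recording offset that converts any "synced" (monotonic) timestamp to Unix time. The numeric values are the example start times quoted earlier in this thread:

```python
# Start times as stored in a recording's info.csv (example values from above):
start_time_system = 1573001832.048   # "Start Time (System)", Unix epoch seconds
start_time_synced = 1677.583900708   # "Start Time (Synced)", monotonic seconds

# The offset between the two clocks is constant for the recording.
offset = start_time_system - start_time_synced

def synced_to_unix(synced_ts):
    """Convert a synced (monotonic) timestamp from this recording to Unix time."""
    return synced_ts + offset
```

With this, `synced_to_unix(start_time_synced)` recovers the Unix start time, and the same shift applies to every timestamp in the recording's data files.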
Hello, I have an issue using Pupil Mobile version 1.2.3; I had no issue with the 1.2.2 version. Now when I record data I have multiple files stored for the different cameras (for example world_001, world_002, and so on). And when I replay the data in Player I get this error after some time: 2019-11-15 14:23:02,523 - player - [ERROR] video_capture.file_backend: Found no maching pts! Something is wrong! 2019-11-15 14:23:02,524 - player - [INFO] video_capture.file_backend: No more video found
i tried with the player version 1.16 (that i was using before) and the new version 1.18 on windows 10 with still the same issue
The offline pupil detection seems to work fine, although the eye videos are also in multiple parts
If I don't do any processing, I am able to export the world video to mp4, and I can play this file without problems
Do you have any insight on this issue?
Alternatively, is there a way to go back to the 1.2.2 version?
Thanks in advance
Hey everyone, For a project I need to run the software on a NVIDIA Jetson AGX Xavier. Does anyone have any experience with installing the dependencies and the pupil software?
@papr Ok. I think I understand and will talk with our engineers about it. Do you know where on the phone the Pupil Mobile app pulls this monotonic clock information to write it to the csv file? I think we can just link our NTP time to the system time, and then the system time is already linked to the Synced (monotonic clock) time. Is that what you're suggesting? Thanks!
@user-c6717a please be aware that the monotonic clock might use an additional offset while being time synced... Do you need to be time synced in real time or do you just need to synchronize your clocks after the effect? There are two clear solutions for both of these use cases. Let me know which one you need.
Hello, I am having issues with Pupil Mobile: I can change the settings of the camera, such as making the eye camera 400x400px, exposure time, etc. However, when recording it always defaults back and does not record with the same settings. Any idea how to solve it?
@papr Yes, using the headset exactly as it was sent to us.
@papr System crash (not sure if this is the right place to report it). Running Pupil Capture v1.18.4 (the version of capture in the v1.18.35 release), windows x64. When detecting a surface, if I hit Add/Remove markers I get a reliable crash with the attached stack trace.
@user-abc667 Are you using the legacy square markers?
@papr Hi. I am just looking for some advice. I am currently trying to collect gaze data from children (8 years old) using Pupil Labs, but am finding it much more difficult to effectively capture their pupils compared to when using Pupil on adults. From what I can see, this seems to be caused by having to have the camera further away from the eye to account for their smaller heads. This seems to produce an image that is much darker and shadowed. Using the arm extenders helps to get squarer on with the eye, but subsequently causes an even darker image with even more shadow. Has anyone faced this before and know how to get around it?
I have tried changing the ROI but this does not remedy the issue
@user-f3a0e4 Adjusting the RoI would have been my suggestion, too. Can you provide an example image?
This is an example from the weekend. I don't have an example using the extenders as I decided to remove them before recording, but this example provided similarly poor pupil detection, especially when looking more downwards.
@user-f3a0e4 The pupils seem comparably big, too. I suggest you use the lower resolution settings (320x240 for 120Hz eye cameras, or 192x192 for 200Hz eye cameras) and increase the Pupil max parameter in the pupil detector menu of the eye windows.
When using the lowest resolution, the algorithm does not perform the coarse detection which can result in false negative detections if the pupil is comparably large in the image.
Okay, thanks! Are there any other settings that might improve pupil capture in these instances? For example, brightness or contrast etc?
@user-f3a0e4 Regarding the UVC post-processing parameters, I would recommend adjusting the gain.
Oh, I wasn't aware you could adjust the UVC settings post-hoc? Where are the settings for that?
Sorry, this is a misunderstanding. I was talking about UVC settings that are applied after the image exposure, but before the image is transferred to the computer. The UVC Source menu lists them as "Post-processing" since they are "Camera post-processing" parameters.
Okay, so I cannot adjust these settings once the recording is made?
And on another note, just to pick your brains. Often with the children I have troubles getting a good calibration out of them, likely because they don't keep their head still/aren't looking where they're supposed to. Do you know of any measures I can take to counter this issue?
@user-f3a0e4 correct, these values cannot be readjusted by Player after the effect.
@user-c5fb8b Yes, we're using legacy square markers -- we have testing forms that have been in use previously and for consistency we need to maintain the appearance. The problem got a lot better (ie no need to edit the surface) when I was careful about never using a marker on more than one form (not even in a different combination of markers). Now surfaces are recognized quickly and no need to edit.
@user-abc667 we were able to reproduce the issue you reported, thanks a lot for that! We are preparing a bugfix release to be released later this week, where we will address the issue. Glad to hear you are managing with this version until then.
@user-c5fb8b And thanks for the rapid response.
Hello! I'm having some issues installing Capture on Ubuntu 14.04. I've installed it, but then it won't open, and I wondered whether there are some issues with this version I am not aware of?
Hi everyone! My lab bought some eye cameras recently, and the resolution (400x400) is much lower than the ones we bought two years ago (1920x1080). Is it possible to increase the resolution or buy the old versions? Thanks!
@user-771cfd I don't mean to sound flippant, but can you use the old ones lol. If you need a lower resolution you can always downsample the 1080p cameras.
It's in the options
@user-771cfd please send an email to sales@pupil-labs.com with your request
@user-860618 @wrp Thanks a lot! Will send an email; we didn't buy enough cameras last time.
Hi everybody, do you know where I can find further information about calibration? I use Pupil Labs Core to measure particular characteristics of vergence eye movements, and I would like to know which kind of calibration is best for this kind of measurement.
Am I in the right place for this question?
@papr OK, seriously naive question, but it matters -- it's possible to put the eye camera arm extender onto the appropriate part of the frame in either of two orientations. The bottom of the extender has horseshoe-like cuts in it. When the extender is put on the arm, does it go on horseshoe-end first? We had been using it this way but having trouble getting a view of the pupil when people are looking down at the table (where we have them taking a pen and paper test). By reversing it, the camera sits lower (and closer to the cheek), and seems to give us a much better pupil view for those looking-down situations. Is this backwards? Even so, is it ok to use it this way? Thanks!
@user-abc667 The point of the extenders is to provide a better view of the pupil during pupil detection. It does not matter in which direction you use them, as long as you are getting better results than in the other configuration.
@papr Excellent; thanks.
Hey Pupil Labs! We've got a Vive add-on that is no longer being recognized by any PC at all - USB malfunction, device not recognized, Code 43 device descriptor failed, which happens regardless on all PCs tried, whether the device has been previously installed or not. Apart from the usual USB troubleshooting, is there anything you guys would recommend? I'm thinking of reflashing the firmware somehow?
@user-92a820 I would recommend that you contact sales@pupil-labs.com so that our Hardware team can get in touch with you to perform remote diagnostics and/or repair of this unit.
Great, thanks.
Hi, I would like to ask: is it possible to upgrade the mobile bundle Motorola Z3 to Android 9 Pie? Will Pupil Mobile work under it?
Hi, my group is looking to purchase Core. I was wondering if there is a tutorial available, as well as sample eye tracking data I could use to practice with Pupil Player?
Hi, is there a plugin that allows for manual coding of video by frame or by fixation? Like let's say I want to assign a label representing some AOI to each frame as I go through them, is this functionality built into any plugin already?
@user-31df78 None of the built-in plugins provide manual AOI annotation functionality. Pupil Capture and Player only provide automatic AOI tracking using fiducial markers: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-86d8ec You can download a sample recording here: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
Great, thanks for the response @papr Just looking to confirm for PI.
@user-86d8ec We do also have some Jupyter notebooks that show case how to process exported data by Pupil Player https://github.com/pupil-labs/pupil-tutorials
@user-31df78 In this case, I would recommend to move your question to the invisible channel.
@user-4f0036 Yes, the recent Pupil Mobile v1.2.3 version does work on Android 9 to our knowledge.
Sorry if it's out there, but I can't find any documentation on telling Pupil Player what the names of the cameras should be. We recorded with the Pupil Mobile app, and it let us change the name of the cameras from eye 0 to Right Eye, etc. Now, dragging the folder into Pupil Player, it says it cannot find any world or eye camera. Where are the name settings?
@user-e637bd Hey, you will have to rename the video and respective .time files before opening the recording in Player
On another note, I asked before about recording camera settings on the phone. So far the OnePlus 6T is working well for us, in case others are interested. We only find some issues with the app not recording the settings we stored for each camera. I think there are two possibilities, but I'm not totally sure. The first seems to be that it works if we set it, start recording, and then open the camera image to check again that the settings are right; then it records correctly. The other, and maybe more accurate, problem is that sometimes the cameras connect and disconnect (the wire seems very fragile on the headset); because of this, it defaults to some settings rather than storing the user settings for the camera. So if it is the latter, and it's not possible now, maybe a feature could be to store user settings and auto-apply them?
@papr thanks for the fast response! That is a lot of renaming, I guess this was a mistake haha. Is the name stored somewhere in a Python script by any chance, in a single location, so I can just edit that?
If you run from source, yes, let me look it up
@user-e637bd https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_recording/update/mobile.py#L114-L118
@here Pupil Software Release v1.19! This release primarily addresses improvements/changes to the Fixation Detector and Binocular Gaze Mapper.
Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.19
Hello, we need to use eye tracking technology to run UX usability tests (web and mobile apps). Can someone recommend the best option and configuration?
Hi everyone, is there a study that shows eye tracking accuracy vs. eye image resolution? It matters for large-scale data collection, e.g. going from 192x192 to 400x400 resolution. Thanks
Hello! Does anyone know if there is a way to have the offline pupil detection applied to only a small section of the recording? Is it possible to trim the entire pupil recording (I know it is possible to trim what is exported, I mean the entirety of what we see in pupil player). Thanks!
Hi everyone! I'm currently using Pupil Labs for an experiment where we require participants to look down on a computer screen. I'm wondering if there is a way to avoid disturbance from eyelashes during the process? Thank you!
@user-c629df have you tried setting the Region of Interest in the eye windows? Have you tried eye camera arm extenders?
@wrp Thank you for your response! May I ask how to set Region of Interest in eye windows?
@wrp Oh nevermind! I figured it out!
@user-a10852 This would mean removing data permanently from the original recording. Pupil Capture and Player follow the policy to never delete previously recorded/exported data. Therefore, trimming the entire recording is not possible in Pupil Player.
@papr Have set up several surfaces (actually paper forms our test subjects write on) and wondered about supplying their dimensions, so that gaze is accurately translated to surface position. Is it the distance between the edges of the tags or their centers? If the edges, inside or outside edge? From the appearance of the surface on the screen, it looks like top left to top right corners for width and top left to bottom left for height. Yes? Thanks.
Hey all, I'm trying to go back through some old data that I originally pulled in Capture 0.3.9.
I installed Player 1.19 (macOS) and dragged the directory to it, and it instructed me to use 1.17 to update the format.
I got a copy of 1.17, dragged the directory over, and it worked no problem.
I then tried with the next one. "Invalid Recording / There is no info file in the target directory". This makes no sense, since all the same types of files are in the new directory.
I try another. Same.
I try all of the others. Same.
Since it doesn't seem likely that just happened to start with the only recording that could be used (it wasn't the first or the last from the study; it was one in the middle), I assumed something was messed up with the Player 1.17 file or something. I installed Player 1.17 on Windows and tried there. Same behavior.
These all still open in 0.3.9.
I really need to get into these things, and will likely need the current version of the software. Any thoughts?
Holy crap I just figured it out.
I realized that I hadn't tried and failed to open the recordings past the first one in 1.19.
I just tried another one, first doing it in 1.19, then following the direction to do it in 1.17. It worked.
That is weird...
Okay, so what I have to do is try and fail to open it in 1.19, then open it in 1.17, then open it again in 1.19 to fully update.
@user-36b0a3 This procedure is unfortunate but correct. There is a bug in v1.16/v1.17 which prevents Player from correctly detecting the recording version of these old recordings. Please see the v1.16 release notes section on "Deprecated Recordings" for details on why these recordings are not supported in Pupil Player versions newer than v1.17 https://github.com/pupil-labs/pupil/releases/tag/v1.16
@papr I noticed in June that you said the head pose tracker is not supported on Windows yet... has that been updated?
Hi @user-fbd5db I noticed some messages from you earlier in the year inquiring about creating a batch exporter for pupil data. I'm curious if you ever developed this code, and if so, would you be willing to share? I have scripts that will batch export annotation data and pupil diameter, but I'm struggling to add the functionality to export gaze and surface data.
Thanks
Hi @papr are you aware of any other users that have developed batch exporter scripts? I'm attempting to adapt the scripts that I originally forked from another user (https://github.com/tombullock/batchExportPupilLabs) to export surface and gaze data in addition to pupil diameter, but am struggling due to lack of python experience. Cheers
Hey folks, back atcha with another question (got all the videos updated and tables exported; thanks!):
I will likely need to explain the pupil detection confidence value. In the Kassner et al. 2014 paper, it states:
Detect edges using Canny [14] to find contours in eye image. Filter edges based on neighboring pixel intensity. Look for darker areas (blue region). Dark is specified using a user set offset of the lowest spike in the histogram of pixel intensities in the eye image. Filter remaining edges to exclude those stemming from spectral reflections (yellow region). Remaining edges are extracted into contours using connected components [29]. Contours are filtered and split into sub-contours based on criteria of curvature continuity. Candidate pupil ellipses are formed using ellipse fitting [16] onto a subset of the contours looking for good fits in a least square sense, major radii within a user defined range, and a few additional criteria. An augmented combinatorial search looks for contours that can be added as support to the candidate ellipses. The results are evaluated based on the ellipse fit of the supporting edges and the ratio of supporting edge length and ellipse circumference (using Ramanujan's second approximation [18]). We call this ratio "confidence".
Can anyone propose a pithy, more-accessible description of this ratio?
Or point me to another paper to cite?
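For what it's worth, the ratio in the quoted paragraph can be written out directly: roughly, "the fraction of the fitted pupil ellipse's circumference that is covered by supporting edge pixels", so 1.0 means a fully edge-supported fit. A sketch (function names are mine, not Pupil's):

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's second approximation for an ellipse with semi-axes a, b."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def pupil_confidence(supporting_edge_length, a, b):
    """Ratio of supporting edge length to fitted-ellipse circumference.

    Approaches 1.0 when edges trace the whole fitted ellipse, and drops
    toward 0.0 when only a small arc of the fit is backed by image edges.
    """
    return supporting_edge_length / ellipse_circumference(a, b)
```

So a pithy description might be: "confidence measures how much of the fitted pupil ellipse is actually traced by detected edges in the eye image."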
@user-c629df Regarding looking down -- we have the same problem and are experimenting with 3-d printing camera arm extenders that lower the cameras. Happy to share the results. One other trick is to put the existing extenders on backwards; this significantly lowers the cameras, but also brings them close in to the face, possibly too close for some people.
@user-abc667 That's a good idea! Thanks for recommending!
Hi everyone! I'm wondering if there is any documentation on Pupil Labs eye tracking data analysis? Thanks for helping!
@user-c629df what data are you interested in? I might be able to point you in the right direction.
@papr Have set up several surfaces (actually paper forms our test subjects write on) and wondered about supplying their dimensions, so that gaze is accurately translated to surface position. Is it the distance between the edges of the tags or their centers? If the edges, inside or outside edge? From the appearance of the surface on the screen, it looks like top left to top right corners for width and top left to bottom left for height. Yes? Thanks.
@user-abc667 you can adjust the surface relative to the markers. This way you do not need to worry about the positioning of the markers.
@papr Sorry, I'm being unclear. I have carefully put the markers at the four corners of the page, so there's never any question about where they are, or where the surface is (the forms get re-used for multiple subjects). As I understand it, the conversion from normed x and y positions to scale x and y depends on the form "dimension", which I take to be the defined by the position of the fiducial markers, yes? As we need the most precision we can get, I was checking that the dimensions of the form are determined by the outer edges of the fiducials (using the apriltags 36 11 square markers), yes? (Rather than say the center of the fiducials.) Or am I misunderstanding you?
@user-abc667 by default, if you add a surface without editing it, it will wrap around the 4 outermost corners of all detected markers. The marker definition is previewed by the green(?) boxes. I do not remember the exact color.
@papr Yes, that's what I thought, and that's what I needed to know -- outermost corners. Thanks!
@papr We're going to have camera extender arms 3d printed that lower the cameras (per some of my earlier posts). Do you happen to have a recommended material to use?
@user-abc667 I do not have enough knowledge in this field to make recommendations. But @user-755e9e might be able to tell you more. Also, are you aware of https://github.com/pupil-labs/pupil-geometry/blob/master/Pupil%20Headset%20triangle%20mount%20extender.stl ?
@papr Yes, I have that file, and the CAD-savvy folks in my lab modified it to make extenders that are the same shape but drop the camera lower. Sending the file out for printing even as we speak (type). Thanks.
Hi. I have been getting a couple errors when using pupil mobile. I have two phones that are both fully up-to-date and have the most up-to-date app. Only one of the phones is giving me these errors and I wanted to see if anyone else has encountered this issue. Thanks.
second error
Hi @user-abc667, I would recommend an SLS 3d print since it's strong and flexible.
@user-c37dfd I have encountered these on Android 10. Is it possible that the other phone is running Android 9?
Is there a guide on surface tracking you could link me to?
Hello there! I'm from Rio, Brazil. Is anyone offering eye tracking services here, or in Latin America? We have a project starting in 10 days, and we need to make sure that everything will run fine. We do not have a Pupil Core or Invisible yet, but it is possible to buy a used one.
Hey @papr, I'm sorry to keep bothering you about this, but I really need to figure out how to batch export pupil labs data. Currently I'm able to export annotations and pupil diameter using the scripts in this repo (which is linked to pupil labs community) https://github.com/tombullock/batchExportPupilLabs . I just need to add surface and gaze data export functionality. Essentially, all I want to do is have the script automate an export from pupil player with the "annotation player", "offline surface tracker" and "raw data exporter" plugins activated. I've been attempting to edit the scripts in the repo to add this functionality but have not gotten very far (I've uploaded my edits to a new folder in the repo). Is there perhaps an easier way to do this that I'm missing? Presumably these functions already exist in pupil_src, so it should just be a case of writing a wrapper function to loop through all data files in a directory? Any guidance would be appreciated. Thanks.
Hi @papr, I want to use gaze coordinates from Pupil Labs to control a robot, and I want to know which of the Core and the Invisible is more accurate. In addition, there is another problem: which of Core and Invisible is easier to develop with?
@user-c9d205 This is the link to our surface tracking documentation https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-e7102b
> offline surface tracker
Well, this would not be as easy, because it requires a series of different steps, namely 1. marker detection, 2. surface definition/tracking, 3. gaze mapping
You might be able to extract the surface data that was recorded in realtime to the surfaces.pldata file. Would this be sufficient?
@user-a98526 In general, your application would receive gaze via our network apis. The network apis for Pupil Invisible and Pupil Core are slightly different, but for both there are examples on how to access the data in Python.
Gaze coordinates are relative to the world camera of the glasses/headset that the subject is wearing. So you could implement a simple "look to the left" -> "robot steers to the left" control software. For this approach, I suggest using Pupil Invisible due to the minimum amount of setup that it would require.
In case that you want a "looking at door" -> "robot drives to door" control mechanism, the setup would be far more complex. I can give more details if necessary, but in general this would require to run Pupil Capture with the head pose estimation plugin. It is still possible to use both types of eye trackers here.
@papr yes, extracting the surface data recorded in realtime to surfaces.pldata would actually be ideal. I only suggested offline surface detection because in order to export surface data I'm required to activate the "Surface Tracker" plugin when I export data via the gui, and this plugin appears to use offline surface detection. I'm attempting to extract and write out the realtime surface data in the following script (lines 189-278) and I think this is nearly correct, but it seems to only write out one row of surface data per world camera frame, and fails for some datasets (https://github.com/tombullock/batchExportPupilLabs/blob/master/Add_Surface_Gaze_Export_Unfinished/extract_diameter.py)
@user-e7102b one surface datum per surface per world frame in which the surface was detected
@papr ok, so perhaps that's why it's currently failing for some datasets, if the gaze was off the surface? If I edit my script to write out datum['gaze_on_srf'], it seems like the entire data structure is being written out (which is what I want), but now the formatting of the outputted .csv is messed up, with multiple data rows per row in the .csv. Is there an example in the Pupil Labs src that would show me how to write this out correctly? I've searched but haven't been able to find anything. Thanks
@user-e7102b can you send me an example pldata file with timestamps for which it fails?
@papr sure. This one breaks the current version of my script. However, even when it does work, I'm still only getting one surface datum per world frame (when detected). If I change my script to output the full datum, this data file does not fail, but the outputted .csv is all messed up. https://www.dropbox.com/sh/t70f8nq1micuc5p/AADW2ULoj5fi_A3o39XoNbW1a?dl=0
We had a recording that accidentally went for far too long, it also looks like while the video recording itself is intact, some data is missing. Specifically, the timestamp numpy files. We have eye1, eye0, gaze.pldata, info.csv, and pupil.pldata
Is there any way we can create a new set of timestamp files from this information or load the current data into pupil player without it crashing? There may also be a problem with memory given that the video files themselves are gigantic
@user-dfeeb9 you can reproduce eye timestamps from pupil.pldata. What about world? Do you have a world file? You might want to modify this function to only read part of the data: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L145-L146
@user-e7102b
> I'm still only getting one surface datum per world frame

Since the recording only includes one detected surface, this is expected.
@papr We don't have world, and I don't know if it was necessarily recorded because the experiment in mind only concerns the pupils
@user-dfeeb9 Ok, that is fine. Then you will also have to edit the duration in info.csv. In a world-less recording the Player timeline will span recording_start to recording_start+duration.
But the complete data is loaded nonetheless. So you might need to edit the pupil file before opening it in player
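Editing the duration in info.csv can be scripted. A minimal sketch, assuming the old-style info.csv is a two-column key,value file whose duration key is literally named "Duration Time" (verify both assumptions against your own recording first):

```python
import csv


def set_info_duration(info_csv_path, new_duration):
    """Rewrite the 'Duration Time' entry of an old-style info.csv.

    Assumes a two-column key,value layout and a key named exactly
    'Duration Time' -- check this against your own info.csv before use.
    """
    with open(info_csv_path, newline="") as fh:
        rows = list(csv.reader(fh))
    for row in rows:
        if row and row[0] == "Duration Time":
            row[1] = new_duration  # e.g. "00:30:00"
    with open(info_csv_path, "w", newline="") as fh:
        csv.writer(fh).writerows(rows)
```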
Btw what stopped the recording? Did it crash due to insufficient memory? Or disk space?
I'm looking into this too, because if it crashed the video headers are probably busted which is an extra pain
correct
For the record this isn't a study i'm running but one I'm called in to help with whenever pupil stuff goes wrong
Thanks a tonne for the help at this hour pablo
One last question
> video headers are probably busted which is an extra pain

This is very likely
The function you linked looks independent of any classes etc. What would be your advised way of calling it?
I would not call it directly as-is, since it loads all the serialized data into memory, which will likely exhaust your memory. How big is the pupil.pldata file?
I don't have it on hand so I'll have to grab it and check, but you should know the recording is at least half an hour long
possibly/likely quite a bit larger than that in fact
approximately 6gb for the whole directory including both pupils
video files etc.
ah well, but this should be doable
The reason Player crashes is not the length but the missing files
I figured that was probably the case, so i presume that if the mp4s are intact I can replace the missing timestamp files and it'll go
@papr so does the single surface datum that we get per world frame contain multiple gaze positions? When i export the data using the gui I get a gaze_positions_on_surface_XXX.csv file. This is essentially exactly what I'm looking to output with my script.
I'm running Capture on a Linux machine and am unable to record audio. The 'voice and sound' is selected on settings, and the audio capture plugin is selected. But on the audio capture, the dropdown only shows 'no audio' rather than options for an input device. The laptop's built in mic works fine, but it doesn't seem to be communicating with the Pupil software. Any thoughts?
@user-e7102b yes, all gaze pos during a frame will be mapped to the surface
@user-8bf70c We are currently investigating a series of audio recording issues on all platforms. We will keep the channel up-to-date regarding developments in this regard.
@user-dfeeb9
```python
import os

import msgpack
import numpy as np


def restore_pldata_timestamps(directory, topic):
    msgpack_file = os.path.join(directory, topic + ".pldata")
    ts = []
    with open(msgpack_file, "rb") as fh:
        # each pldata entry is serialized as a (topic, payload) pair;
        # don't name the loop variable `topic`, or it would shadow the
        # function argument and corrupt the output filename below
        for _entry_topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
            data = msgpack.unpackb(payload, raw=False, use_list=False)
            ts.append(data["timestamp"])
    ts_file = os.path.join(directory, topic + "_timestamps.npy")
    np.save(ts_file, ts)
```
wow thanks!
@papr ok, so essentially all I need to do is load the surfaces.pldata file, loop through those gaze positions and output each position to my .csv file? If that's the case, then is there an example in pupil src that shows how to do that (because presumably the exporter performs something similar)?
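The "multiple data rows per row" symptom typically comes from writing a nested list into a single CSV cell; the fix is one `writerow` per gaze datum. A rough sketch of that loop, operating on already-deserialized surface dicts -- the field names ('gaze_on_srf', 'timestamp', 'confidence', 'norm_pos', 'on_srf') are assumptions about the surface datum format, so check them against your own data:

```python
import csv


def write_gaze_on_surface_csv(surface_datums, csv_path):
    """Flatten surface datums into one CSV row per gaze point.

    `surface_datums` is an iterable of deserialized surface dicts.
    Field names used here are assumptions -- verify against your data.
    """
    with open(csv_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(
            ["surface_name", "gaze_timestamp", "confidence", "norm_x", "norm_y", "on_srf"]
        )
        for datum in surface_datums:
            # one output row per gaze datum, not per surface datum
            for gaze in datum.get("gaze_on_srf", []):
                x, y = gaze["norm_pos"]
                writer.writerow(
                    [datum.get("name", ""), gaze["timestamp"], gaze["confidence"], x, y, gaze["on_srf"]]
                )
```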
This also means that the video files are not read in any way while generating the timestamp numpy files - presumably because the pldata has all the information anyway
Makes sense seeing as it's just more expensive to parse the video inputs again, neat
Unless i'm mistaken of course
@user-dfeeb9 No, the video is not used to restore the time information in this case.
excellent
@papr great - thanks!
@papr sorry - one more question - can you direct me towards where the "_export_gaze_on_surface" function is called?
give me a sec, I am working on a version that does not require replicating the complete Player pipeline
@user-e7102b https://nbviewer.jupyter.org/gist/papr/87157c5da93d838012444f4f6ece6bcc
@papr awesome - thank you!
@papr Thank you very much, I just want to implement something like "looking at door" -> "robot drives to door"; I would love to get some information about this
@papr Hi, regarding the recovery job from yesterday. It does indeed seem recording of both eyes are corrupt. I'm primarily on linux but have access to windows too, do you have any header recovery methods you guys use yourselves for cases like this that you might recommend? No worries otherwise, I'll get around it myself. Sorry I also pinged you on the wrong welcome channel
@user-dfeeb9 I cannot give recommendations on this regard
Hey @papr thanks again for the help yesterday with creating the surface export script. This works great. The only issue is that all my data are recorded in the old pupil data format (I think v1.5) and need updating before the surface export will run. I use a function from update_methods.py (update_recording_to_recent) in my original batch export script to automate this update. However, update_methods.py seems to have disappeared in the latest version of pupil_src. Can you tell me where the update function lives now? Thanks
@user-e7102b https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/pupil_recording/update
Check out the __init__.py specifically
@user-a98526 First, you need to set up a reference coordinate system in which to track the head pose. Once you have this set up, you can map gaze into the reference system. Check out our video tutorial on this: https://www.youtube.com/watch?v=9x9h98tywFI
Then, your robot needs to learn to place itself in this coordinate system. The simplest way to do so, would be to add a camera to the robot and to run a second Pupil Capture instance with the same headpose tracking algorithm in the same reference system.
You basically need to make sure that there is always at least one marker visible in the headset's and the robot's camera field of view.
Reading this now, it actually does not sound that complicated to set up
@user-a98526 @marc also had the idea for a relative control system. Instead of setting up an absolute coordinate system as in the approach above, you can simply put such a marker on the robot itself. When the robot is in the field of view of the eye tracker, one can take the inverse of the head pose (the 3d location of the marker) and calculate the necessary relative movement for the "move" command. This would require that the robot's marker and the move target are visible to the eye tracker at the same time.
@papr thank you. It seems like I need "recording_update_to_latest_new_style" from the new_style.py script. When I try to import "new_style" it tells me I need pyglui. Given that I'm not using the gui, is there a way around this?
One of our pupil cameras is significantly darker than the other, with identical settings (confidence value holds constant at 0)
No setting changes seem to remedy this. Is replacement hardware available?
@user-e7102b I can't find the location where it is required. Maybe because it tries to import video_capture
?
@user-d672d4 Please contact info@pupil-labs.com in this regard
hi all - I collected some gaze behaviour a while back with a binocular pupil core headset, and the data from one of the eyes is terrible. it's throwing off the detection of fixations of the other (good) eye. is there a way to tell pupil player to ignore data from one eye when calculating fixations?
@papr Updating to the latest software version and resetting back to default settings was actually able to remedy the issue, but thanks for the response!
hi all, i am new to pupil labs. I have downloaded Pupil Capture, Player and Service. I calibrate using Pupil Capture. However, I do not know how to get data out of Pupil Capture. Can you help me get real-time data from the Pupil Capture application? (on a Windows 10 machine)
Hi all, I'm having a problem with Pupil Capture on my MacBook Pro (Sierra 10.12.6). Installation went quite ok, and both world- and eye-cameras of our DIY headset are working. Everything seems to work ok, but whenever I start the calibration, at the end of the procedure Capture crashes and only the eye-camera window remains open. Any help? Thank you.
@papr yes, the problem is that when I try to import new_style, it tries to import functions from video_capture. This then fails on my machine because I don't have pyglui, and I'm unable to get pyglui to install successfully, so I'm stuck again. The old function "update_recording_to_recent" didn't require pyglui. Perhaps there is a way to run the new update script without needing pyglui? Thanks
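One possible workaround (an assumption, not a supported Pupil Labs mechanism) is to register empty placeholder modules under the names that fail to import before importing new_style. This only works if the update code paths you actually run never call into the stubbed module; the names in the loop below are examples -- stub whatever the ImportError complains about:

```python
import sys
import types


def stub_module(name):
    """Register an empty placeholder so `import name` succeeds.

    Workaround sketch for running headless code that imports GUI-only
    dependencies (e.g. pyglui). Only safe if the code paths you exercise
    never actually use the stubbed module.
    """
    mod = types.ModuleType(name)
    sys.modules[name] = mod
    # also attach to the parent package so `from pkg import sub` works
    if "." in name:
        parent_name, _, child = name.rpartition(".")
        parent = sys.modules.get(parent_name) or stub_module(parent_name)
        setattr(parent, child, mod)
    return mod


# example names only -- adjust to match the failing imports on your machine
for name in ("pyglui", "pyglui.cygl", "pyglui.cygl.utils"):
    stub_module(name)
```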
Hi @papr, I asked earlier about adding the surface back in Pupil Player, and the process was supposed to be the same as adding a surface in Pupil Capture. However, since Pupil Player doesn't show the world camera view, it's hard to tell whether the markers are actually being recognized. Should I use Pupil Capture at the same time for this process?