Hi, I want to record eye movement while playing a video. I'm using Pupil Capture, Pupil Player, and SteamVR, and I cannot play video through any of them. I've checked all the YT tutorials and I'm still missing something. Can anybody help?
Hi @user-e94d2a 👋 May I ask if you are using Unity3D or directly playing games through SteamVR?
The reason I am asking is because our existing VR integration is for Unity3D, and you'll need that to stream the content to Pupil Capture. If you haven't already, please check out our hmd-eyes repository (https://github.com/pupil-labs/hmd-eyes), where you'll find some demo scenes that might be helpful for you.
Please keep in mind that in order to stream the VR scene to Pupil Capture, you'll need to enable the screencast feature. You can find further details here: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#screencast
Hi, any tips on why I couldn't receive the video stream and got this message instead? Many thanks!
Hi @user-fae66d! Can you double-check that the connections on your Core headset are all fully secure, especially at the cameras? Then follow these debugging steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Hi, can someone tell me what that CPU value indicates? I notice that it is different on different machines and builds.
Hi @user-8619fb! This is indicative of the CPU usage on the computer running Pupil Capture. It will differ across machines for reasons such as which CPU the machine has and how many CPU resources are available.
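If you want to compare that value against an independent reading of system load, here's a minimal sketch using the third-party psutil package (pip install psutil); note this is not part of the Pupil software:

```python
import psutil

# Sample system-wide CPU usage over one second and report core count;
# high values here mean less headroom for Pupil Capture.
print(f"CPU usage: {psutil.cpu_percent(interval=1):.1f}%")
print(f"Logical cores: {psutil.cpu_count()}")
```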
So is it possible to provide a solution that has already been given? Also, I can't detach the cable from the Pupil Core right now; is this cable fixed to the frame (not removable)?
Hi @user-37a2bd 👋. Re. https://discord.com/channels/285728493612957698/1047111711230009405/1166306678229192704, the output you've shared looks like opencv is unable to load the reference image. Are you sure you input the path correctly?
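A quick way to check, a minimal sketch where the file name is a placeholder for the path you pass to the script:

```python
import cv2
from pathlib import Path

# "reference_image.jpg" is a placeholder; substitute your actual path
ref_path = Path("reference_image.jpg")
print("Exists on disk:", ref_path.exists())

img = cv2.imread(str(ref_path))
if img is None:
    # cv2.imread returns None (rather than raising) when the path is wrong
    # or the file cannot be decoded
    print("OpenCV could not load the image; check the path and file format")
else:
    print("Loaded image with shape:", img.shape)
```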
When I go through the documentation for defining AOIs on your website, there is only partial code, and I input all of that code into my Python application. The documentation says you can find all the required code via the hyperlink, but I am unable to find the full Python code. When I run the bits of code in Python, it gives me that error message, but there was no prompt to select the path for the reference image.
I used the following link from your website - https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/
I tried searching for the whole code in this link - https://github.com/pupil-labs/pupil-docs/tree/master/src/alpha-lab/gaze-metrics-in-aois/
Couldn't find any other code.
Yes, running each code snippet standalone might not work as expected. I just checked the docs and the link to the full code is indeed working. Perhaps there's some confusion as to the format of the code. It's contained within a Jupyter notebook. You'll need to download and run that, and you can find some instructions on how to do so here: https://docs.jupyter.org/en/latest/running.html
I have Jupyter installed on my PC. Could you point out which file I should download from GitHub? I am a bit rusty with Python; it has been a while since I used it.
The README.ipynb is the code, but it doesn't appear to render correctly on GitHub unless you look at that file directly. If you look here: https://github.com/pupil-labs/pupil-docs/blob/master/src/alpha-lab/gaze-metrics-in-aois/README.ipynb, you'll see that this file includes code, notes, and sample output.
You should clone and download that entire folder (which includes the README.ipynb and the data subfolder) and open the README.ipynb in Jupyter Notebook (or a compatible editor). Run each cell in order and you should get the same results.
If you aren't comfortable with Jupyter, you can generally take the code from each cell and combine them (in order!) into a single Python file to get to the same end result. Having said that, Jupyter is definitely worth learning, especially if you work with data a lot.
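If you do go the single-script route, here's a rough sketch of the overall shape. The AOI rectangles and CSV column names below are illustrative assumptions (based on a typical Pupil Cloud fixations export), not the exact tutorial code:

```python
import pandas as pd

# Illustrative sketch only: the column names follow a typical Pupil Cloud
# fixations export, and the AOI rectangles are made-up pixel coordinates.
fixations = pd.read_csv("fixations.csv")

aois = {
    "aoi_1": (0, 0, 400, 300),    # (x_min, y_min, x_max, y_max) in pixels
    "aoi_2": (400, 0, 800, 300),
}

def label_aoi(row):
    x, y = row["fixation x [px]"], row["fixation y [px]"]
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

# Tag each fixation with the AOI it falls in, then aggregate a simple metric
fixations["AOI"] = fixations.apply(label_aoi, axis=1)
print(fixations.groupby("AOI")["duration [ms]"].sum())
```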
I already tried combining the code in order in PyCharm, but it threw the error I shared a screenshot of in the Neon chat room. If I were to just run the code in Python, wouldn't I have to modify it so that it accepts a file path from my computer instead? Also, can you be specific about which folder to clone? The README.ipynb is under the data tab on GitHub. Do I just clone and copy the whole alpha-lab folder?
I downloaded the whole git. Opened via a jupyter notebook and I am getting the error shown in the attached image.
Hi @user-37a2bd ! Python 3.12 is pretty new; I'm not sure, but Jupyter might not be compatible yet. Nevertheless, if you would like to use a vanilla version of the AOI code, it's available here: https://gist.github.com/mikelgg93/a250811d59885e791cbeeb99fd12ef55
Thanks for that Miguel. The code seems to work without a flaw. I also tried running the code to generate static and dynamic scanpaths with the reference image mapper from the following link - https://docs.pupil-labs.com/alpha-lab/scanpath-rim/ I got the following error. Could you help me out with this as well?
Hi @user-37a2bd ! My colleague @user-c2d375 could probably assist you better with this, since she wrote the code. Otherwise, I could help you next week, if that's okay. That said, from the terminal output, there is some issue with the pandas library; perhaps it does not yet support 3.12, or the pandas version changed how the loc function works.
@user-37a2bd My colleague is right. This script has been tested to function with Python 3.7 or newer versions. However, I have not specifically tested with Python 3.12. In my recent experience, I have been using it effectively with Python 3.10.
I checked my Python version; it says I have 3.7.6, and I executed the code in PyCharm (if that makes a difference).
Like I mentioned earlier, I checked in cmd and the Python version I currently have is 3.7.6. Could there be another issue?
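One thing worth ruling out: PyCharm projects often use their own virtual environment, so the interpreter PyCharm runs can differ from the one cmd reports. Running this snippet inside PyCharm shows which interpreter actually executes the code:

```python
import sys

# The interpreter path and version PyCharm is really using; compare this
# against what "python --version" reports in cmd.
print(sys.executable)
print(sys.version)
```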
I'll try to replicate this issue with Python 3.7.6 and I will inform you as soon as possible
Thank you. A little more information: I ran the script and it asked me to select a folder. I selected a folder on my local disk from a mini project that we performed with the Neon glasses. It threw the error after selecting the folder.
Can you confirm you selected a Reference Image Mapper export folder?
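If it helps, here's a quick sketch to sanity-check the selected folder's contents before running the script; the expected file names are assumptions based on a typical Reference Image Mapper download:

```python
from pathlib import Path

# The file names below are assumptions; adjust to match your export.
folder = Path(r"C:\path\to\rim_export")  # the unzipped folder you selected
expected = ["gaze.csv", "fixations.csv", "sections.csv"]
missing = [name for name in expected if not (folder / name).exists()]
print("Missing files:", missing or "none")
print("Contents:", sorted(p.name for p in folder.iterdir()))
```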
The folder I selected was one that I extracted to my local disk. Does it need to be the .zip folder? Yes, it was the Reference Image Mapper folder.
The folder you've unzipped is the correct one. Thank you for providing that additional context. Would you be able to share the Reference Image Mapper folder with me? This will allow me to precisely replicate the issue you're facing and assist you in finding a solution. Please feel free to use any file-sharing service of your preference and share the folder link with me via DM
I downloaded the file from the downloads section on the Pupil Cloud from our project. The folder has the reference image, 3 csv files and the txt file.
DM'ed the link. Let me know if you have any issues downloading
Unfortunately, I was unable to replicate your issue with Python 3.7.6. Based on the log, the issue seems to be related to the pandas library. I suggest confirming that pandas is properly installed and attempting a reinstallation or an upgrade to the latest version.
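A quick way to confirm what's installed before reinstalling:

```python
import pandas as pd

# If this import fails or reports a very old version, reinstall with:
#   pip install --upgrade pandas
print(pd.__version__)
```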
Hi there! I am trying to put the recorded eye videos side by side in a third-party movie editor. However, I noticed that the videos are not synced, i.e. one eye moves faster than the other. Is there a way to have them run simultaneously and at the same rate?
Hi @user-8619fb. Which model of eye tracker was used to make the recordings?
Hi @nmt , it's the AR/VR HTC Vive lens set
Hi @&288503824266690561 , we were going through the Python code for defining AOIs and calculating the gaze metrics. We adapted the code for our needs and wanted your opinion on whether we manipulated the data correctly. We added code for three outputs that we require, and I am attaching the files here. Can you take a look and let me know whether the output we got matches our interpretation? I am attaching a Word document with the code and what we expect from running it, plus the outputs for each part attached separately as Excel files. Please let us know as soon as possible whether we read and manipulated the data correctly. Note that we executed this code with the existing examples in the link - https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/ . We had uncommented the predefined AOIs and executed the code with those AOIs. Thanks in advance.
If you guys would like the whole code file, please let me know.
Hi @user-37a2bd 👋. Thank you for sharing. We would be happy to dive into code reviews/help you with custom implementation questions. However, in order for us to devote the time and attention required, we would ask you to consider purchasing a support contract. You can review the details of our packages here: https://pupil-labs.com/products/support
Hello, Can I use the Neon Companion app on my iPhone because I don't have an Android phone? Also, when I upload the recording to the pupil cloud, do I still need to operate the app on my phone when processing the data on my computer? 🙏
Hi @user-e757d2 ! Unfortunately, the device only works with the Companion Device (a OnePlus 10 Pro or a OnePlus 8T, both Android phones).
The reason for this is that we fine-tune our neural network to work best on those architectures and because Android does not have the same restrictions as iOS does.
But no worries, our bundle already includes the Companion Device (phone), and everything you need to perform eye-tracking, so you do not have to worry about it.
Once the recording is on Cloud, you can delete it from the phone and work with and analyse the data from your computer.
Hey @nmt, any chance you would know if there is anything I could do to make the videos run synchronously side by side when I merge them together to the same video?
If I understand your question correctly, this can be accomplished with the hstack filter using ffmpeg. E.g.,
ffmpeg -i left.mp4 -i right.mp4 -filter_complex hstack=inputs=2 combined-output.mp4
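If the two eye videos were recorded at different effective frame rates (which would explain one eye appearing to move faster), you can also resample both streams to a common rate before stacking. Here's a sketch that drives ffmpeg from Python; the 120 fps target and the file names are assumptions, so adjust them to your recording:

```python
import subprocess

# Resample both eye videos to a constant, shared frame rate, then place
# them side by side with hstack; fps value and file names are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "left.mp4",
    "-i", "right.mp4",
    "-filter_complex",
    "[0:v]fps=120[l];[1:v]fps=120[r];[l][r]hstack=inputs=2",
    "combined-output.mp4",
], check=True)
```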