I want to connect Pupil Labs with PsychoPy using the IPC backbone, so I copied and pasted the code from the Network API documentation, but the program keeps loading after a bunch of results appear and cannot exit
Hi @user-00fa16, just copy-and-pasting the example will probably not work. It is important to understand the code structure of the PsychoPy-generated experiment and to insert the parts from the example at the right places. You can read more about the example's parts and their meaning in our Network API documentation: https://docs.pupil-labs.com/developer/core/network-api/. If this does not work, it sounds like you might need more dedicated help. I would recommend reaching out to info@pupil-labs.com for that.
Would you tell me why the script keeps going on like this and fails to respond?
The script is the 'reading from the IPC Backbone' example
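For what it's worth, here is a minimal sketch of how such a script is usually structured, assuming Pupil Capture's default Pupil Remote address `tcp://127.0.0.1:50020` (the address, topic, and timeout below are illustrative). A receive timeout like the one here is what keeps a script from hanging forever when Capture is not reachable:

```python
import zmq


def request_with_timeout(addr, msg, timeout_ms=1000):
    """Send one request to Pupil Remote; return the reply string, or None on timeout."""
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.setsockopt(zmq.LINGER, 0)  # do not block on close if the request is pending
    req.connect(addr)
    req.send_string(msg)
    if req.poll(timeout_ms, zmq.POLLIN):  # wait for a reply, but not forever
        reply = req.recv_string()
    else:
        reply = None  # Pupil Capture did not answer in time
    req.close()
    return reply


if __name__ == "__main__":
    # Ask Pupil Remote for the SUB port of the IPC backbone
    sub_port = request_with_timeout("tcp://127.0.0.1:50020", "SUB_PORT")
    if sub_port is None:
        print("No reply from Pupil Capture - is it running?")
    else:
        # Subscribe to gaze data on the returned port
        ctx = zmq.Context.instance()
        sub = ctx.socket(zmq.SUB)
        sub.connect(f"tcp://127.0.0.1:{sub_port}")
        sub.subscribe("gaze.")
        topic, payload = sub.recv_multipart()  # blocks until a gaze datum arrives
        print(topic)
```

Without a timeout, a plain `recv()` on a socket that gets no answer is one common way a script "keeps loading" and never exits.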
Hi, the video image of our eyetracker suddenly turned black and white. The color information seems to get lost. Maybe you have an idea what could have caused the problem? The image is black and white in the preview of the app, as well as when we record the data. In addition, the connection to the eyetracker was interrupted in two cases when we used it in an experiment. There was no error, but no data was recorded (once after a few seconds and the other time after ~15 minutes). We suspect a problem with the cable between the eyetracker and the phone?
@user-abbed5 please contact info@pupil-labs.com in this regard
Hi @papr , sorry for maybe a bit naive question. What are the reasons for having an accelerometer and gyroscope in the Invisible? Do you use it for calibration purposes? Thanks, Stijn
Hey @user-4648c3! The IMU data is not utilized by the gaze estimation pipeline. It is just another sensor available in the device that essentially allows you to measure changes in head pose, which can be interesting in different applications.
We are having issues with Pupil Player (2.3.0) crashing when we try to load some newly collected data from a Pupil Invisible (no crash screen to share). It says it's updating the format and then crashes with the icon in the toolbar disappearing. Any ideas of what to look at?
Actually looking into this further, it looks like the world video did not get captured on the Pupil Invisible. It's just a gray screen when we look at it on Pupil Cloud and in Pupil Player 2.1. The world.mp4 seems to show no data (zero bytes). Any idea why this might have happened? I double checked the video feed and calibration right before running the experiment, so it seems like it should have recorded. Thanks!
Turns out we had a conflicting app on the phone. Not sure what the conflict is (we have a secondary app capturing the phone audio and a time drift measure for synchronization with other wearables), but have isolated it to that. Will let you know if we find anything interesting.
@user-c6717a We are aware of the issue with opening Pupil Invisible recordings in Player v2.3 that do not have any valid scene video files. Our upcoming v2.4 release will include a fix for this issue.
we have a secondary app @user-c6717a Could you be more specific about which app this is? Is this app available publicly, or is it a self-developed app?
Hey, great work with your Pupil Invisible white paper! Nice analysis, and it is easy to follow your different steps. Thank you very much! One question regarding the recorded ground truth (and the corresponding markers): are you just placing specific markers (e.g., ArUco tags) somewhere in the room and calculating the ground-truth gaze ray based on the marker detection in the scene camera?
@user-92dca7 can you respond to @user-fb5b59 ☝️ ?
Hi @user-fb5b59, thanks for your positive feedback! We are happy to hear that you enjoyed reading the white paper. You are right, ground-truth gaze direction is determined based on an analysis of marker tags in the scene camera video.
Hey, I have a question regarding external triggers in Pupil Invisible recordings. How can I access information about when and which triggers were sent? I send triggers through a Python script running on a laptop on the same local network.
@user-bf7a13 Triggers sent during a recording are stored to the events.txt/.time files: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793
@papr thank you for the quick reply. I already found the documentation, but when sending triggers in my recording I only see one string in the event.txt file (e.g. "marker") and can't open the event.time file
@user-bf7a13 Then you might not be sending the events correctly. Events sent to the phone will be echoed via the network API event sensor. You can use it to receive a confirmation that everything worked correctly: https://docs.pupil-labs.com/developer/invisible/#network-api Simply change the GAZE_TYPE from gaze to event
Also, how are you attempting to open the event.time file?
Edit: corrected link
@papr we try to open the event.time file with a normal text editor (Notepad)
@user-bf7a13 The .time files are binary files. Check out the "timestamps" sheet/tab of the recording format link above for details
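To illustrate, a .time file is essentially a flat array of 64-bit integer timestamps. A minimal sketch for reading one in Python, assuming little-endian unsigned 64-bit nanosecond values as described in the timestamps tab (check the sheet for the exact layout of your recording version):

```python
import struct


def read_time_file(path):
    """Read a .time file as a list of little-endian uint64 nanosecond timestamps."""
    with open(path, "rb") as f:
        data = f.read()
    count = len(data) // 8  # 8 bytes per timestamp
    return list(struct.unpack(f"<{count}Q", data[: count * 8]))


if __name__ == "__main__":
    # Demo with a synthetic file: two timestamps, one second apart
    with open("demo.time", "wb") as f:
        f.write(struct.pack("<2Q", 1_600_000_000_000_000_000, 1_600_000_001_000_000_000))
    print(read_time_file("demo.time"))
```

The same pattern (`numpy.fromfile(path, dtype="<u8")`) works if you prefer NumPy.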
@papr thank you, we managed to retrieve the timestamp information with a Python script
I am now trying to import ndsi after successfully installing it. Unfortunately I get a DLL load error
@user-bf7a13 How did you install it?
Did you install it via the wheel? https://github.com/pupil-labs/pyndsi/releases/download/v1.3/ndsi-1.3-cp36-cp36m-win_amd64.whl
I followed the following instructions: https://github.com/pupil-labs/pyndsi
@user-bf7a13 Please give the wheel a try. Download it and run pip install <path to wheel>
unfortunately i get the following error:
@user-bf7a13 Ah, this wheel is specifically for Python 3.6
i have python 3.7.9
@user-bf7a13 in this case, it might make sense to proceed with running from source. Make sure to include the location of the libturbojpeg and ffmpeg DLL files in your system PATH. That step might not be as explicitly mentioned in the docs as it should be
i'll try python 3.6 first
adding the file to the system path made the difference. Thank you very much for your help!
and after successfully installing ndsi I can send markers to my Android device! Great support!
One question regarding synchronization: when is the world camera frame and corresponding gaze frame timestamp generated? Is it generated at the time the image is taken, or when the data frame is actually streamed?
Edit: Or is it taken when the recorded image is received by the mobile device?
One more question: is it possible to remove, for example, the lower part of each eyeglass frame? It might be nice (for other face algorithms) to have as little occlusion of the face as possible.
@user-fb5b59 Currently, it is not possible to remove the lower part of the glasses' frame. I will look into the timestamp question and come back to you.
@user-fb5b59 It is the last one, when the frame is received on the device.
@papr Thank you very much! The reason why I'm asking: I would need synchronization between a camera running on the computer and the simultaneously running Pupil Invisible.
@user-fb5b59 The timestamps are measured in nanoseconds since the Unix epoch. When using Python, you can use https://docs.python.org/3/library/time.html#time.time_ns to get a synchronized clock (this assumes that your computer is synced via NTP).
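Since both clocks count nanoseconds since the Unix epoch, device and local timestamps can be compared directly. A small sketch (the 5 ms device timestamp below is made up for the demo):

```python
import time

NS_PER_MS = 1_000_000


def latency_ms(device_ts_ns, local_ts_ns):
    """Difference between a device timestamp and a local clock reading, in milliseconds."""
    return (local_ts_ns - device_ts_ns) / NS_PER_MS


if __name__ == "__main__":
    local_now = time.time_ns()  # nanoseconds since the Unix epoch, from the local clock
    # Hypothetical device timestamp taken 5 ms before the local reading
    device_ts = local_now - 5 * NS_PER_MS
    print(f"latency: {latency_ms(device_ts, local_now):.1f} ms")
```

With both machines synced via NTP, the residual offset between the two clocks is typically in the low-millisecond range.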
has anyone experimented with running a Pupil Invisible data receiver on iOS? I found a Zyre library for iOS, but I haven't had much luck yet implementing the NDSI protocol by hand.
Hi! We are trying to extend the OnePlus recording time by using an additional power bank. However, the problem is that when using an external adapter, either power delivery or data transfer works, but not both (we've tried different types). Are there any software restrictions that prevent simultaneous power and data transfer? Does anybody have experience with a similar setup? I know that the recommended way to extend recording time is to use multiple mobile phones, but unfortunately this is not a solution in our use-case scenario.
Hi, I have a question regarding gaze calibrations. Yesterday, we made some recordings of the gaze behaviour of basketball players using Pupil Invisible. Before we started the recording, we used the live screen in the app to correct for the offset. However, while playing the recordings in Pupil Player, I found out that the detected gaze is still a bit off from the markers we used. Is it possible to correct for this offset afterwards? And if so, how can I do that?
are there any non-python reference implementations of NDSI? interested in C/C++ in particular
Hello!
I tried building pupil monitor from source and ran into a problem afterwards
I'm working on a Windows 10 machine, installed Python 3.6, and followed the instructions laid out in the readme
cloned the repo from git; the installation itself didn't produce any errors, but I'm not sure what to do afterwards. An executable was placed in the Scripts folder of the Python installation
and when I'm calling pupil_invisible_monitor.exe now, I'm getting DLL import errors. It looks like the site-packages that were installed can't be found? And I'm not sure why
Hi @user-1391e7, is there a reason why you need to build from source and cannot just use the app? It also sounds like you are mixing up building from source and running the app, as building and running from source does not involve any .exe file
it's just about the visualization, the chosen size of the gaze circle
I'd like it to be smaller, so I wanted to edit that a little and build it again, see what it looks like then
Ok, please be aware that the instructions to run from source will not create a .exe file.
Instead, it will install Pupil Invisible Monitor as an executable into the Python environment that you used for building.
As mentioned in the instructions, you can just type pupil_invisible_monitor into your terminal after having installed the library.
This should start the application.
right, I called python -m pupil_invisible_monitor
or this, yes
and this does not work?
that leaves me with the ImportError yeah
Also what did you mean with:
an executable was placed in the Scripts folder of the python installation
just what happened after I ran "python -m pip install ."
Ok!
did I screw up the installation itself? 🙂
We had reports about missing DLLs in the past, that were caused by a missing Windows package. Please try downloading and installing this package and test if the problem still persists: https://www.microsoft.com/en-us/download/details.aspx?id=14632
is it a problem if I get the error that I have a newer version of said redistributable already installed?
can you post the exact message?
just a sec
a screenshot would be fine as well
10.0.40219 is what windows is telling me
meaning my version of the redistributable
that's the error I'm getting when I'm trying to run from source
It seems I sent you a link to an old version. Can you try downloading vc_redist.x64.exe from this page: https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads It's listed under the headline "Visual Studio 2015, 2017 and 2019"
got it, updated
Do you still get the error when trying to run Pupil Invisible Monitor?
yes, same error
Can you run python -m pip freeze and post the output?
@user-1391e7 I found the reason for this issue:
where did I go wrong? 🙂
It was not an error on your side. Pupil Invisible Monitor also requires you to install FFMPEG binaries on your computer, which we do not mention in the docs.
You can download the latest FFMPEG Windows build from: https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full-shared.zip
Please extract the ffmpeg-release-full-shared.zip somewhere where you can keep it around, as you will need these files to run Pupil Invisible Monitor.
Then you will need to add the bin folder from the extracted archive to your PATH environment variable in the Windows settings.
Please try running Pupil Invisible Monitor again afterwards, if it still does not work, you might need to restart your computer once for it to find the FFMPEG library.
We will make sure to include this information in the future, thanks for bringing it to our attention!
thank you for the help! I'll do that, reboot and let you know if that fixed it
I'm still getting the same error
You need to reopen the terminal after setting the PATH environment variables. Else it will not recognize the new values.
I did reboot
@user-1391e7 could you post a screenshot of your PATH environment values?
@user-1391e7 I'm not sure what the problem is here. Installing ffmpeg should have fixed the error. Can you try resetting your Python environment and reinstalling all Python libraries?
I'll try it on my personal machine, maybe something interferes locally that I'm not aware of.
thank you for the assistance in any case! I'll inform you if I happen to get it to work tomorrow
Hey guys :) I want to get a gaze vector located at the eye position (or mean eye position, e.g., between both eyes) and NOT at the world camera position of the Pupil Invisible. Do you have any idea how to deal with this problem? I know that I can use the camera intrinsics to calculate the gaze ray originating at the Pupil Invisible world camera. Do you have any idea how to logically shift this vector to another position?
Hi folks, not sure if anyone here has had the issue I am having (too many threads to scroll through). So here goes. Unboxed the Invisible today (yay!). All good until some test recordings were made. No scene video is being captured. Noticed the cloud space only allows storage for now. No bother. Downloaded the Player 2.4 package and still no joy. Eye videos show when I select post-hoc etc., but still no world video capture, no fixations showing, etc. Really confused, guys. Any help would be awesome.
@user-df1f44 You should see gaze data (green circles) without using any of the post-hoc features in Player
And yes, all cables connected, all instructions followed to a tee. The app UI shows the world and eye cameras capturing during recording, but no cigar afterwards when viewing. The file itself shows something is there, but no viewing
So you do not see the red circle indicating the gaze point, do I understand correctly? (within the Android app)
@user-df1f44 if this is the case, please share the original recording with data@pupil-labs.com and we will have a look.
Within the Android app, the scene shows and the red circle shows; however, post-recording, no joy with reviewing the captured information.
Will send sample data to the email address as stipulated.
@user-df1f44 Please open the recording in Player, and "Restart from default settings" in the general settings menu
cool. will try that now and see
@papr - just did - no joy
@user-df1f44 this looks like the scene video was not recorded. Once we have reviewed the recording, we will come back to you via email.
@papr OK, thanks - data coming through to the email address shortly. Looking forward to your thoughts in due course
@papr - Data uploading with a Drive link for you guys. But just in case, I made a slightly shorter recording of 20 secs to send over. Thought I'd give it another quick try to see. Still no joy as before, but this time I also noticed a warning when placing the recording folder into the Player interface. Error message "Player: moov atom not found" in my latest trial. Does this say anything to you? Thanks
@user-df1f44 This error message indicates missing meta data in the video file. Our Android development team is already investigating possible causes for this issue. We will keep you up-to-date in this regard.
@papr So, did a good ol' reboot of the Android device and I have been able to capture 2 recordings successfully...😂 . Although all previous recordings are still kaput as before...😕 . Either way, all looks good over here... Anyone else have moov atom issues? Restart the Android device.. Lol.... Fingers crossed it stays this way.. PICNIC...
@user-fb5b59 The problem with calculating this is that the distance of the 3D gaze point is unknown. As you correctly say, you can use the camera intrinsics to calculate the 3D gaze vector originating from the scene camera. If you had the distance as well, you could determine the 3D gaze position. With an estimate of the eye pose in relation to the scene camera, you could then calculate the gaze ray originating from the eye. Without knowing the distance, which Pupil Invisible unfortunately cannot calculate, you cannot solve this problem of epipolar geometry. Maybe your application allows you to estimate the distance somehow, though?
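If a distance estimate is available, the shift itself is straightforward vector geometry. A sketch, with all numbers hypothetical (the eye position relative to the scene camera would have to come from your own measurement of the headset geometry):

```python
import math


def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)


def gaze_ray_from_eye(cam_dir, distance, eye_offset):
    """Shift a gaze ray from the scene-camera origin to an eye-centered origin.

    cam_dir:    unit gaze direction from the scene camera (via camera intrinsics)
    distance:   estimated distance to the gaze target, in meters
    eye_offset: eye position in scene-camera coordinates, in meters
    Returns (origin, direction) of the ray from the eye to the 3D gaze point.
    """
    # 3D gaze point, assuming the estimated distance along the camera ray
    gaze_point = tuple(distance * c for c in cam_dir)
    # Unit direction from the eye position to that point
    direction = normalize(tuple(g - e for g, e in zip(gaze_point, eye_offset)))
    return eye_offset, direction


if __name__ == "__main__":
    cam_dir = normalize((0.1, 0.0, 1.0))  # gaze ray from the scene camera
    eye_offset = (0.03, 0.02, -0.01)      # hypothetical eye position (m)
    origin, direction = gaze_ray_from_eye(cam_dir, 1.5, eye_offset)
    print(origin, direction)
```

Note that for large distances the two rays become nearly parallel, which is why the error from a rough distance estimate shrinks the further away the gaze target is.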
@user-9a1bb2 Currently, it is unfortunately not possible to change the offset value post-hoc. This is a feature we are looking to add to Pupil Cloud, but we do not have a release date yet. Since this is really just a fixed offset, it would however be possible for you to estimate it yourself and manually add it to the gaze data reported in the CSVs you can download from Pupil Cloud. Sorry for the inconvenience until the feature becomes available in Pupil Cloud!
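Applying such a fixed offset manually could look like the sketch below, operating on rows as produced by `csv.DictReader`. The column names and offset values are hypothetical; check the header of your actual export:

```python
def apply_offset(rows, dx, dy, x_col="gaze x [px]", y_col="gaze y [px]"):
    """Add a fixed pixel offset to the gaze columns of exported gaze rows.

    Column names are hypothetical - check the header of your own CSV export.
    """
    out = []
    for row in rows:
        row = dict(row)  # copy so the input rows stay untouched
        row[x_col] = str(float(row[x_col]) + dx)
        row[y_col] = str(float(row[y_col]) + dy)
        out.append(row)
    return out


if __name__ == "__main__":
    # Synthetic two-row example in place of a real Cloud export
    rows = [
        {"gaze x [px]": "512.0", "gaze y [px]": "384.0"},
        {"gaze x [px]": "520.5", "gaze y [px]": "390.0"},
    ]
    corrected = apply_offset(rows, dx=-12.0, dy=8.0)
    print(corrected[0])
```

The offset itself would have to be estimated from recordings where the true gaze target is known, e.g. by comparing reported gaze to marker positions.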
@user-e6124a No, there is unfortunately no other reference implementation of NDSI.
@marc Thanks for your answer! That is actually the way I'm doing this at the moment... either just defining a specific "standard" distance or using some markers with known positions with respect to the camera. But as you/I said, it is not possible to get the gaze vector originating at the pupil when someone is just looking around without knowing any additional (depth) information
@user-fb5b59 I see! Unfortunately there is no easy way around this problem. Using markers (or even surfaces as available in Pupil Player) may be a practical way of getting rough distance estimates.
@user-16f325 While I personally do not have the expertise to explain to you why this is not possible, I am aware that there are non-trivial issues that prevent it. I may be able to offer an alternative solution, however: We will soon switch to the OnePlus 8 phone as the new Companion device for Pupil Invisible. Using that, the battery life extends to ~2.5 hours. Would that be enough to enable your use case? If so, let me know and I can give you a few details on what you would have to do to switch to this device.
Hi Marc, thank you for coming back to me about this. Unfortunately, even ~2.5 h will not really do the trick in our case. Do you know whether there are any hardware restrictions for the intended solution? To me, the problem seems to be that the Invisible Companion software prevents charging and data transfer at the same time. Maybe you could check that issue with your tech guys?
@user-16f325 This is not a limitation that is enforced by the app but the operating system.
Aah, it's the operating system. Thank you, this was really helpful and, at the least, saves us from searching further for a hardware configuration that would work. Just out of interest: is this a general Android issue, or is it due to the specific implementation in the OnePlus phones?