Hello! Is there a way to specify the IP address when using the Network API? It seems like the wrong network interface is searched, since I am using a separate WLAN to communicate with the phone and my LAN internet connection is searched instead.
@user-abbed5 You need to disable the LAN connection and restart the desktop application.
Thank you for your answer, but I am looking for a way that does not require disabling the LAN connection. So there is no way to specify the IP address? I am using the example from https://docs.pupil-labs.com/developer/invisible/#recording-format.
@user-abbed5 Unfortunately, it is not possible to manually set the IP.
Are you on Windows?
Okay, thank you. I am on Linux (Ubuntu 16.04).
Hello again, I have another question. We are using the Pupil Invisible eye tracker and want to receive data in a ROS node using the Network API (example from https://docs.pupil-labs.com/developer/invisible/#network-api). Since we are using Python 2.7 in our project, we tried to modify ndsi to make it compatible by passing abc.ABCMeta instead of abc.ABC, removing type annotations, etc., mainly in network.py, sensor.py, and formatter.py. When executing the Network API example code, we now get a "TypeError: __new__() got an unexpected keyword argument 'notify_endpoint'" when network.handle_event() is called. You can see the full error message attached. We suspect that ABCMeta.__new__() cannot handle keyword arguments. Maybe you have an idea what causes the error or suggestions for a workaround?
Hi @user-abbed5 In Python 2, meta classes had to be defined slightly differently:
class MyClass:
    __metaclass__ = ABCMeta
Alternatively, you can get rid of the ABC inheritance completely if you like.
By using class MyClass(ABCMeta) you are actually subclassing the metaclass instead of telling the system that MyClass is an abstract class, which causes the unexpected keyword argument error.
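A minimal sketch of the mix-up (shown in Python 3 syntax, since Python 2 is end-of-life; on Python 2 you would set `__metaclass__ = ABCMeta` in the class body instead of the `metaclass=` keyword — class and argument names here are stand-ins, not ndsi's actual code):

```python
import abc

# Wrong: this subclasses the *metaclass* itself, so Wrong is a metaclass
# (a subclass of type), not an abstract class.
class Wrong(abc.ABCMeta):
    pass

# Instantiating it therefore calls ABCMeta.__new__, which expects
# (name, bases, namespace) -- so passing a sensor constructor keyword
# like notify_endpoint raises a TypeError.
try:
    Wrong(notify_endpoint="tcp://...")
except TypeError as err:
    print("TypeError:", err)

# Right: declare ABCMeta as the metaclass.
class Right(metaclass=abc.ABCMeta):
    pass
```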
Thank you! I changed this, but then get another error.
@user-abbed5 Please be aware that in Python 2.7 it also makes a difference how you define your class:
class WithoutParentheses:
# Py2 -> Old Style, Py3 -> New Style
pass
class WithParentheses():
# Py2 -> New Style, Py3 -> New Style
pass
Please make sure to use the new style (with parentheses) for all class definitions.
@user-abbed5 How did you replace the SensorType enum?
You could use this backport instead of changing ndsi code https://pypi.org/project/enum34/
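On Python 2.7, pip install enum34 provides the same enum module API as the Python 3 standard library, so the enum definitions in ndsi can stay unchanged. A small sketch (member names here are illustrative, not necessarily ndsi's actual SensorType members):

```python
# On Python 3 this is the stdlib `enum` module; on Python 2.7 the
# `enum34` backport installs a module with the same name and API.
import enum

class SensorType(enum.Enum):
    HARDWARE = "hardware"
    GAZE = "gaze"
    VIDEO = "video"

# Look up a member by value, as ndsi does when parsing sensor messages.
print(SensorType("gaze").name)  # GAZE
```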
Okay thanks, I will check again all class definitions and try again. I am already using enum34.
Hi, I checked all class definitions and still get the same error when running the Network API example. The Remote Control example from https://docs.pupil-labs.com/developer/invisible/#remote-control already works. I had to change the classes GazeValue, IMUValue, etc. in formatter.py to
GazeValue = collections.namedtuple('GazeValue', ['x', 'y', 'timestamp'])
but this should be equivalent? I appreciate any ideas as to what I could have missed.
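For reference, a quick check that the two definitions behave the same at runtime (ndsi defines these values with typing.NamedTuple; the field names below follow this thread — check ndsi's formatter.py for the exact fields your version uses):

```python
import collections
import typing

# typing.NamedTuple (functional form), as used in ndsi's formatter.py ...
GazeValueTyped = typing.NamedTuple(
    "GazeValueTyped", [("x", float), ("y", float), ("timestamp", float)]
)
# ... and the plain collections.namedtuple replacement from this thread.
GazeValue = collections.namedtuple("GazeValue", ["x", "y", "timestamp"])

a = GazeValueTyped(0.5, 0.5, 1234.0)
b = GazeValue(0.5, 0.5, 1234.0)
# Same tuple contents, same field names -- runtime-equivalent.
assert tuple(a) == tuple(b)
assert a._fields == b._fields
```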
@user-abbed5 Hi, apologies for the delayed response. Yes, using collections.namedtuple should be equivalent. Can you confirm that your error message is still generated by the typing module? As you are using collections.namedtuple, there should be no reason why typing._generic_new is being called.
@user-abbed5 What does your SensorFetchDataMixin class definition look like? What changes have you made?
The remote control example only creates a hardware sensor, which inherits from Sensor, while the gaze sensor also inherits from SensorFetchDataMixin. The SensorFetchDataMixin inherits from typing.Generic[SensorFetchDataValue].
Instead of removing the typing references, it might work to install the backport with pip install typing.
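A rough sketch of what such a generic mixin looks like (the names below are stand-ins, not ndsi's actual code; on Python 2.7 the typing backport provides the same Generic/TypeVar API, so the class can be kept instead of stripping the annotations):

```python
import typing

T = typing.TypeVar("T")

# Stand-in for ndsi's SensorFetchDataMixin: generic over the value type
# that fetch_data yields (GazeValue, IMUValue, ...).
class FetchDataMixin(typing.Generic[T]):
    def fetch_data(self):
        raise NotImplementedError

# A concrete sensor parameterizes the mixin with its value type.
class GazeSensor(FetchDataMixin[tuple]):
    def fetch_data(self):
        return (0.5, 0.5)  # placeholder gaze point

print(GazeSensor().fetch_data())
```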
I am new to eye tracking with the Invisible. I want to track human motion using an IMU-based body suit and the eye tracker in the Unity3D environment. Could you tell me if there are such assets for the Invisible without an HMD?
@marc You mentioned a paper on performance coming out soon. Is it available? We are considering purchasing the Invisible but would first like to know a bit about performance and what code we can access via open source or API. Thanks.
@user-90270c we hope to release the paper within this month. We are in the final stages of editing. Regarding the API: there is a network-based API that you can use - see more: https://docs.pupil-labs.com/developer/invisible/#network-api
@user-c5f657 sounds like an interesting project. There are no Unity assets available for Pupil Invisible, but you can see how to use the network API - https://docs.pupil-labs.com/developer/invisible/#network-api (python example) to receive gaze data.
@wrp thank you for the link.
Hello. I use the Pupil Invisible sitting deep in a car bucket seat. Depending on the shape of the user's head and the way he sits, the folded over part of the included USB would hit the headrest of the seat, causing the glasses to float or shift. I tried other USBs, but I couldn't get it to record properly. Is there any solution to this problem?
Hey @user-584c4a! Since the data transmission requirements for the USB connection are very high, many third-party USB cables will not work robustly with Pupil Invisible. A very high-quality cable is required, and we have tested the cable we ship extensively. Unfortunately, we cannot refer you to another cable with a more suitable connector for your use case that we are confident would work robustly.
We have, however, had positive experiences with L-shaped adapters in the past. One of my colleagues has successfully used one under a motorcycle helmet, which would otherwise have been a tight fit as well. This would decrease the length of the temple at least a bit. We have not properly tested something like this though, so please make sure to test it yourself before relying on it!
What could also be helpful for you is the sports strap we offer in the accessories section: https://pupil-labs.com/products/invisible/accessories/ This strap allows you to fixate the glasses more tightly to the head of the subject. It was originally intended for very dynamic use-cases, but it might also help with the floating behavior you describe.
@marc Thank you. I was thinking of eventually using it with the helmet on, so I'll also refer to the L-shaped adapter.
I had a generic sports strap and tried it out. It was difficult to use: the glasses do come back into place after slipping, but the strap did not solve the slipping itself.
Hi all - appreciate this Discord channel. It has been extremely helpful as we work with our Pupil Invisible glasses. I had a quick question for the experts here - as we are still beginners at working with eye tracking software of this caliber. We are researching how baseball batters utilize their eyes while in the batter's box. In order to be as granular and data-driven as possible, we're attempting to create surfaces in post using the april tags - this will allow us to control for significant head movement/rotation and develop data+heat map video overlays. Our main issue however, is that it appears the tags must be quite close (~5-10 ft) in order for the glasses to pick them up and register them for surface creation, and our true object of interest is ~60 ft away. I had two potential solves, but wanted to ask the group if there's another option I'm not thinking of, or if we're going about this the wrong way. One solve is to create incredibly large tags so they work from a further distance. The second is to put the tags closer to the batter (10 ft) and create surfaces that approximate where the batter is looking roughly ~60 ft away, though that may not give us the granularity we need. Open to any alternative routes or features I haven't yet explored. Many thanks in advance !
@user-9513c2 Hey there, even though I am not part of the Pupil Labs team, here are some general ideas on that topic. IIRC the Invisible glasses have a built-in IMU; you might use that data to estimate head motion (at least the orientation part). That data will unfortunately drift over time and with temperature, so depending on your desired accuracy it might be sufficient. You might also go for a third-party IMU and attach it to the frame, or via a headband to the user's head - XSENS, for example, sells high-accuracy IMUs that limit drift (in magnetically disturbed environments) to 3°/h.
Hey @user-9513c2! I am glad to hear that you find this channel helpful! As @user-26fef5 correctly suggested (thanks for that! 👍 ), there is an IMU integrated into the Pupil Invisible glasses that could be used to track head motion. Attaching a third-party IMU, or integrating with e.g. a third-party motion capture system, would of course also be possible if you have access to them, but might require some effort.
Alternatively, you could also consider the head pose tracking plugin in Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking This plugin is also based on AprilTags, but you could place them anywhere in your environment, i.e. also close to the camera, and as long as they are detected in the image, the plugin will tell you the pose of the glasses in relation to the markers.
To also get heatmaps using surfaces, I think you have already correctly summarized the two options you have, although making markers that are big enough to be detected at a 60 ft distance does not sound particularly feasible. I would recommend the second option you mentioned, i.e. to build a "frame" of markers around the batter's gaze target that is placed closer to the batter and orthogonal to his viewing direction. The heatmaps you generate this way should be equivalent to the ones you would get with large markers placed at the actual distance.
Hello, I have not been able to install pyndsi on my computer (I tried on both Linux and Windows 10). Any help regarding the following error?
@user-c5f657 Hi, apologies. It looks like the list of dependencies is incomplete. Please install turbojpeg following these instructions for your appropriate operating system: https://github.com/pupil-labs/pupil/tree/master/docs
We will update the pyndsi docs in the coming days
@papr Yes, now it is working. I think it will be good if there is any comment in the Readme regarding these dependencies. Thank you!
Hi, I'm trying to download a recorded video from Pupil Cloud, but it always fails when it is almost done. Is there a known issue?
@user-bea039 apologies for the delayed reply. @user-53a8c4 can you please look into this issue?
@wrp I discussed with @user-bea039 and we are resolving the issue
@papr Hi, thanks for your answer, but I gave up on trying to make it work with Python 2.7 and now start the ROS node in a conda environment with Python 3. So now everything works fine. However, I have another question: I need the 3D gaze vector in space instead of only the 2D gaze point on the camera image. Is there a possibility to get this data with the Pupil Invisible eye tracker? Or do you have any suggestions on how to use the available data to calculate the gaze vector in space?
@user-abbed5 Using Python 3 is more future-proof anyway 🙂 We use the camera intrinsics to unproject 2d gaze points into 3d cyclopean gaze vectors. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L523-L552
You can use the pre-recorded intrinsics (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L130-L151) but I would recommend running the camera intrinsics estimation plugin in Capture and use the custom-fit intrinsics that will be stored in ~/pupil_capture_settings/PI world v1.intrinsics
instead.
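A minimal pinhole unprojection sketch of the idea (this ignores lens distortion, which Pupil's camera_models.py additionally corrects using the recorded distortion coefficients; the intrinsics matrix K below is a made-up placeholder — use the values from your own intrinsics file instead):

```python
import numpy as np

# Placeholder camera matrix: focal length and principal point are
# illustrative, NOT the real PI world camera intrinsics.
K = np.array([[770.0,   0.0, 544.0],
              [  0.0, 770.0, 544.0],
              [  0.0,   0.0,   1.0]])

def unproject(pixel_xy):
    """Map a 2D pixel gaze point to a unit 3D gaze ray in camera coordinates."""
    px, py = pixel_xy
    # Homogeneous pixel -> ray through the camera center.
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])
    return ray / np.linalg.norm(ray)

v = unproject((544.0, 544.0))  # gaze at the principal point
print(v)  # approximately [0, 0, 1], i.e. straight ahead
```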
@user-abbed5 Depending on your use case, head pose tracking might be useful for you, too https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins
Thank you again for your help!