invisible

user-abbed5 01 July, 2020, 14:33:26

Hello! Is there a way to specify the IP address when using the Network API? It seems like the wrong network interface is being used: I am using a separate WLAN to communicate with the phone, but my LAN internet connection is searched instead.

papr 01 July, 2020, 14:34:21

@user-abbed5 You need to disable the LAN connection and restart the desktop application.

user-abbed5 01 July, 2020, 14:40:37

Thank you for your answer, but I am looking for a way that does not require disabling the LAN connection. So there is no way to specify the IP address? I am using the example from https://docs.pupil-labs.com/developer/invisible/#recording-format.

papr 01 July, 2020, 14:44:32

@user-abbed5 Unfortunately, it is not possible to manually set the IP.

papr 01 July, 2020, 14:44:39

Are you on Windows?

user-abbed5 01 July, 2020, 14:46:22

Okay, thank you. I am on Linux (Ubuntu 16.04).

user-abbed5 02 July, 2020, 10:34:44

Hello again, I have another question. We are using the Pupil Invisible eye tracker and want to receive data in a ROS node using the Network API (example from https://docs.pupil-labs.com/developer/invisible/#network-api). Since we are using Python 2.7 in our project, we tried to modify ndsi to make it compatible by passing abc.ABCMeta instead of abc.ABC and removing type annotations, etc., mainly in network.py, sensor.py, and formatter.py. When executing the Network API example code, we now get a "TypeError: __new__() got an unexpected keyword argument 'notify_endpoint'" when network.handle_event() is called. You can see the full error message attached. We suspect that ABCMeta.__new__() cannot handle keyword arguments. Maybe you have an idea what causes the error or suggestions for a workaround?

user-abbed5 02 July, 2020, 10:34:53

Chat image

papr 02 July, 2020, 10:37:45

Hi @user-abbed5 In Python 2, metaclasses have to be declared slightly differently:

from abc import ABCMeta

class MyClass(object):
    # Python 2 equivalent of inheriting from abc.ABC in Python 3
    __metaclass__ = ABCMeta

papr 02 July, 2020, 10:38:33

Alternatively, you can get rid of the ABC inheritance completely if you like.

papr 02 July, 2020, 10:42:53

By using class MyClass(ABCMeta) you are actually subclassing the metaclass instead of telling the system that MyClass is an abstract class, which is what causes the unexpected keyword argument error.
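
To illustrate (a minimal sketch; the class names are placeholders, not the actual ndsi classes):

from abc import ABCMeta

# Incorrect: this subclasses the metaclass itself. Instantiating it routes
# keyword arguments into ABCMeta.__new__(), which does not expect them --
# hence "__new__() got an unexpected keyword argument".
class BrokenSensor(ABCMeta):
    pass

# Correct Python 2 form: declare ABCMeta as the metaclass instead.
class FixedSensor(object):
    __metaclass__ = ABCMeta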

user-abbed5 02 July, 2020, 11:28:59

Thank you! I changed this, but now I get another error.

Chat image

papr 02 July, 2020, 11:35:06

@user-abbed5 Please be aware that in Python 2.7 it also makes a difference how you define your class:

class OldStyle:
    # Py2 -> old-style class (even with empty parentheses), Py3 -> new-style
    pass

class NewStyle(object):
    # Py2 -> new-style class, Py3 -> new-style
    pass

Please make sure that all class definitions explicitly inherit from object (directly or indirectly) so that they are new-style classes under Python 2.

papr 02 July, 2020, 11:36:49

@user-abbed5 How did you replace the SensorType enum?

papr 02 July, 2020, 11:38:45

You could use this backport instead of changing the ndsi code: https://pypi.org/project/enum34/

user-abbed5 02 July, 2020, 11:54:28

Okay, thanks. I will check all class definitions again and retry. I am already using enum34.

user-abbed5 06 July, 2020, 10:16:04

Hi, I checked all class definitions and still get the same error when running the Network API example. The Remote Control example from https://docs.pupil-labs.com/developer/invisible/#remote-control already works. I had to change the classes GazeValue, IMUValue, etc. in formatter.py to

GazeValue = collections.namedtuple('GazeValue', ['x', 'y', 'timestamp'])

but this should be equivalent? I appreciate any ideas as to what I could have missed.

papr 07 July, 2020, 13:46:13

@user-abbed5 Hi, apologies for the delayed response. Yes, using collections.namedtuple should be equivalent. Can you confirm that your error message is still generated by the typing module? Since you are using collections.namedtuple, there should be no reason for typing._generic_new to be called.
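
For reference, the original definition is roughly of this form (a sketch, not the verbatim formatter.py code; field names taken from your snippet). Under Python 3.6+ the two variants are interchangeable for the attribute and tuple access ndsi uses; only the second form works on Python 2.7:

import collections
import typing

# Python 3 style (approximately how formatter.py declares it)
class GazeValue3(typing.NamedTuple):
    x: float
    y: float
    timestamp: float

# Python 2 compatible replacement, as in your change
GazeValue2 = collections.namedtuple("GazeValue", ["x", "y", "timestamp"])

assert GazeValue3(0.5, 0.5, 1234.0).x == GazeValue2(0.5, 0.5, 1234.0).x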

papr 07 July, 2020, 13:50:19

@user-abbed5 What does your SensorFetchDataMixin class definition look like? What changes have you made there?

papr 07 July, 2020, 13:52:33

The Remote Control example only creates a hardware sensor, which inherits from Sensor, while the gaze sensor also inherits from SensorFetchDataMixin.

papr 07 July, 2020, 13:52:48

SensorFetchDataMixin in turn inherits from typing.Generic[SensorFetchDataValue], which is why the typing module shows up in your traceback.

papr 07 July, 2020, 13:57:45

Instead of removing the typing references, it might also work to install the backport: pip install typing

user-c5f657 09 July, 2020, 14:27:54

I am new to eye tracking with Pupil Invisible. I want to track human motion using an IMU-based body suit together with the eye tracker in a Unity3D environment. Could you tell me whether there are such assets for Invisible without an HMD?

user-90270c 09 July, 2020, 22:47:26

@marc You mentioned a paper on performance coming out soon. Is it available? We are considering purchasing the Invisible but would first like to know a bit about its performance and what code we can access via open source or the API. Thanks.

wrp 10 July, 2020, 02:19:20

@user-90270c we hope to release the paper within this month. We are in the final stages of editing. Regarding the API: there is a network-based API that you can use - see more: https://docs.pupil-labs.com/developer/invisible/#network-api

wrp 10 July, 2020, 02:20:34

@user-c5f657 sounds like an interesting project. There are no Unity assets available for Pupil Invisible, but you can see how to use the network API - https://docs.pupil-labs.com/developer/invisible/#network-api (python example) to receive gaze data.

user-c5f657 10 July, 2020, 08:54:48

@wrp thank you for the link.

user-584c4a 14 July, 2020, 09:58:45

Hello. I use the Pupil Invisible sitting deep in a car bucket seat. Depending on the shape of the user's head and the way they sit, the folded-over part of the included USB cable hits the headrest of the seat, causing the glasses to float or shift. I tried other USB cables, but I couldn't get it to record properly. Is there any solution to this problem?

marc 14 July, 2020, 10:47:56

Hey @user-584c4a! Since the data transmission requirements for the USB connection are very high, many third-party USB cables will not work robustly with Pupil Invisible. A very high-quality cable is required, and we have tested the cable we ship extensively. Unfortunately, we cannot point you to another cable with a more suitable connector for your use case that we are confident would work robustly.

We have, however, had positive experiences with L-shaped adapters in the past. One of my colleagues has successfully used the following under a motorcycle helmet, which would otherwise have been a tight fit as well. This would decrease the length of the temple at least a bit. We have not properly tested something like this, though, so please make sure to test it yourself before relying on it if you want to try it out!

Chat image

marc 14 July, 2020, 10:49:59

What could also be helpful for you is the sports strap we offer in the accessories section: https://pupil-labs.com/products/invisible/accessories/ This strap allows you to secure the glasses more tightly to the subject's head. It was originally intended for very dynamic use cases, but it might also help with the floating behavior you describe.

user-584c4a 15 July, 2020, 04:31:01

@marc Thank you. I was thinking of eventually using it with a helmet on, so I'll also look into the L-shaped adapter.

I had a generic sports strap and tried it out. It does pull the glasses back when they slip, but it did not solve the slipping itself, so it was difficult to use.

user-9513c2 15 July, 2020, 04:31:23

Hi all - appreciate this Discord channel. It has been extremely helpful as we work with our Pupil Invisible glasses. I had a quick question for the experts here, as we are still beginners at working with eye tracking software of this caliber. We are researching how baseball batters use their eyes while in the batter's box. In order to be as granular and data-driven as possible, we're attempting to create surfaces in post-processing using the AprilTags - this will allow us to control for significant head movement/rotation and develop data + heat map video overlays. Our main issue, however, is that the tags apparently must be quite close (~5-10 ft) for the glasses to pick them up and register them for surface creation, while our true object of interest is ~60 ft away. I have two potential solutions, but wanted to ask the group if there's another option I'm not thinking of, or if we're going about this the wrong way. One is to create incredibly large tags so they work from a greater distance. The second is to put the tags closer to the batter (10 ft) and create surfaces that approximate where the batter is looking roughly ~60 ft away, though that may not give us the granularity we need. Open to any alternative routes or features I haven't yet explored. Many thanks in advance!

user-26fef5 15 July, 2020, 09:48:33

@user-9513c2 Hey there, even though I am not part of the Pupil Labs team, here are some general ideas on the topic. IIRC the Invisible glasses have a built-in IMU; you could use that data to estimate head motion (at least the orientation part). That data will unfortunately drift with time and temperature, so depending on your desired accuracy it may or may not be sufficient. You could also go for a third-party IMU and attach it to the frame, or via a headband to the user's head - XSENS, for example, sells high-accuracy IMUs that limit drift (in magnetically disturbed environments) to 3°/h.

marc 15 July, 2020, 13:47:23

Hey @user-9513c2! I am glad to hear that you find this channel helpful! As @user-26fef5 correctly suggested (thanks for that! 👍), there is an IMU integrated into the Pupil Invisible glasses that could be used to track head motion. Attaching a third-party IMU or integrating with e.g. a third-party motion capture system would of course also be possible if you have access to them, but might require some effort.

Alternatively, you could also consider the head pose tracking plugin in Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking This plugin is also based on AprilTags, but you could place them anywhere in your environment, i.e. also close to the camera, and as long as they are detected in the image, the plugin will tell you the pose of the glasses in relation to the markers.

To also get heatmaps using surfaces, I think you have already correctly summarized the two options you have, although making markers big enough to be detected at a 60 ft distance does not sound particularly feasible. I would recommend the second option you mentioned, i.e. building a "frame" of markers around the batter's gaze target that is placed closer to the batter and orthogonal to his viewing direction. The heatmaps you generate this way should be equivalent to the ones you would get with large markers placed at the actual distance.

user-c5f657 15 July, 2020, 14:16:01

Hello, I could not successfully install pyndsi on my computer (I tried on both Linux and Windows 10). Any help regarding the following error?

Chat image

papr 15 July, 2020, 14:18:52

@user-c5f657 Hi, apologies. It looks like the list of dependencies is incomplete. Please install turbojpeg following these instructions for your appropriate operating system: https://github.com/pupil-labs/pupil/tree/master/docs

papr 15 July, 2020, 14:19:18

We will update the pyndsi docs in the coming days

user-c5f657 15 July, 2020, 14:30:42

@papr Yes, now it is working. I think it would be good to mention these dependencies in the README. Thank you!

user-bea039 16 July, 2020, 14:03:00

Hi, I'm trying to download a recorded video from Pupil Cloud, but it always fails when it is almost done. Is there a known issue?

wrp 20 July, 2020, 05:03:43

@user-bea039 apologies for the delayed reply. @user-53a8c4 can you please look into this issue?

user-53a8c4 20 July, 2020, 05:40:45

@wrp I discussed this with @user-bea039 and we are resolving the issue.

user-abbed5 21 July, 2020, 12:40:27

@papr Hi, thanks for your answer, but I gave up on trying to make it work with Python 2.7 and now start the ROS node in a conda environment with Python 3. Now everything works fine. However, I have another question: I need the 3D gaze vector in space instead of only the 2D gaze point on the camera image. Is there a way to get this data with the Pupil Invisible eye tracker? Or do you have any suggestions on how to use the available data to calculate the gaze vector in space?

papr 21 July, 2020, 12:44:42

@user-abbed5 Using Python 3 is more future-proof anyway 🙂 We use the camera intrinsics to unproject 2d gaze points into 3d cyclopean gaze vectors. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L523-L552

You can use the pre-recorded intrinsics (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L130-L151), but I would recommend running the camera intrinsics estimation plugin in Capture and using the custom-fit intrinsics that will be stored in ~/pupil_capture_settings/PI world v1.intrinsics instead.
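
A rough sketch of that unprojection done directly with OpenCV, in case it helps with the ROS integration (this assumes you have loaded the camera matrix K and distortion coefficients D from the intrinsics; the function name is just a placeholder, the reference implementation is the camera_models.py code linked above):

import cv2
import numpy as np

def unproject_gaze(gaze_px, K, D):
    # gaze_px: 2d gaze points in pixel coordinates of the scene camera image
    pts = np.asarray(gaze_px, dtype=np.float64).reshape(-1, 1, 2)
    # Undistort and convert to normalized image coordinates (z = 1 plane)
    normalized = cv2.undistortPoints(pts, K, D).reshape(-1, 2)
    # Append z = 1 and normalize to obtain unit-length 3d gaze direction vectors
    rays = np.column_stack((normalized, np.ones(len(normalized))))
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)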

papr 21 July, 2020, 12:47:17

@user-abbed5 Depending on your use case, head pose tracking might be useful for you, too https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins

user-abbed5 23 July, 2020, 10:33:59

Thank you again for your help!

wrp 30 July, 2020, 08:33:05
End of July archive