💻 software-dev


user-c9d205 01 March, 2020, 08:50:18

@user-c9d205 We have merged a fix to master. Could you give it a try please? @papr Gave it a try, did not fix the problem, unless I have the wrong version, is it 1.22.7? Commit bcfbf7d

papr 01 March, 2020, 09:14:50

@user-c9d205 that is the correct version. Are you using Pupil Mobile or a direct USB connection to the device?

user-c9d205 01 March, 2020, 09:26:21

usb

user-2be752 01 March, 2020, 21:08:52

@papr Is gaze_data in detect_fixations() all of the gaze data available, or gaze.data, or each individual gaze datum gaze.data[x]?

user-f8c051 02 March, 2020, 10:39:34

Hey Pupil team, thanks for all the support so far! I've been doing some further testing with pyuvc, it works really well on Catalina with our microscope, super handy!

However, I'm now testing it on a Raspberry Pi 4. I think I managed to compile and install all the dependencies listed under "Dependencies Linux" here: https://github.com/pupil-labs/pyuvc

But when running the example code I'm getting an import error: /usr/local/lib/libuvc.so.0: undefined symbol: libusb_hadnle_events_completed

user-f8c051 02 March, 2020, 10:40:43

I also wasn't sure if I need to run python setup.py install on Linux, as it only seems to appear under the macOS dependencies.

user-f8c051 02 March, 2020, 11:14:22

I tried following the suggestion here: https://github.com/pupil-labs/pyuvc/issues/45 But it seems like I already have pkg-config installed on the Pi, so I guess there is no point re-compiling libuvc and then pyuvc.

user-f8c051 02 March, 2020, 12:33:30

thanks!!

user-5c16b8 03 March, 2020, 12:14:00

Hi Pupil team! We are currently working on measuring torsion based on the eye images, and preliminary data are looking promising. However, for now I have all the data in the reference frame of the eye cam, and I would like to transform it into the world cam reference frame. How can I calculate the rigid transform (at least I assume it is a rigid transform) to get from the 3D base data in the eye cam reference frame to the 3D gaze data in the world cam reference frame? I have been trying some things and I have found some relevant code in pupil_calibrate_3D.py and associated files, but I haven't succeeded yet, and I don't understand the fixed translation vector you're using. Can you help me with this?

papr 03 March, 2020, 21:28:39

@user-f8c051 looks like there is a typo. Did you copy and paste the error message? If yes, there might be something wrong in our libuvc.

papr 03 March, 2020, 21:32:58

@user-5c16b8 could you elaborate on how torsion relates to the transform between eye and world camera? I can link you to the relevant transformation code when I am back in the office.

papr 03 March, 2020, 21:34:01

@user-c9d205 could you share the capture.log file after reproducing the issue?

papr 03 March, 2020, 21:34:18

@user-2be752 I will have to look this up when I am back in the office

user-f8c051 04 March, 2020, 04:17:08

Hey @papr, thanks. It was my typo, sorry. I just logged into Discord on the Pi so I can copy-paste the errors. Here is the actual error:

>>> import uvc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/libuvc.so.0: undefined symbol: libusb_handle_events_completed

user-f8c051 04 March, 2020, 04:18:55

Seems like libuvc can't find libusb?

user-5c16b8 04 March, 2020, 09:59:59

@papr That would be great, thanks! For now we have the 3D eye movements, including torsion, as a quaternion in the eye reference frame (calculated using the pupil norm 3D vector for the horizontal and vertical eye movement and the torsion estimated based on custom computer vision algorithms in Matlab applied to the eye video images). But in order to say something about the orientation of Listing's plane, we would like to get to a quaternion in the world reference frame. Then we first need to get it in the world camera reference frame. I hope you can help us with it.

user-79ce5d 04 March, 2020, 16:44:43

Hi, we have been using the Pupil Cam with a Samsung tablet. However, in December last year the tablet's software was updated, and since then the camera has not been working well with the tablet. The video capture page freezes, and after recording, when I try to access the files on the computer, there is no MP4 video file and the audio file is empty. I tried to uninstall the software update on the tablet, but it couldn't be done. Can you help with this?

user-79ce5d 04 March, 2020, 16:47:53

This is what the files look like now after recording with the Pupil Cam:

Chat image

user-79ce5d 04 March, 2020, 16:48:50

This is what we have before the software update on the tablet

Chat image

wrp 05 March, 2020, 06:39:00

Hi @user-79ce5d, what Android version was your Samsung tablet using in December last year vs now? I ask because this may be related to an issue in Android v10.

user-acc960 05 March, 2020, 11:55:28

Hi, I am working on a project with the Pupil Labs Invisible. I want to programmatically start and stop recording. How can I achieve this?

papr 05 March, 2020, 12:17:19

@user-acc960 Please checkout this section of our documentation. It includes an example, too. https://docs.pupil-labs.com/developer/invisible/#remote-control

user-acc960 05 March, 2020, 12:19:45

@papr Thank you for the suggestion. I already looked into that, but that only works if the devices are on the same WiFi network I think. Our solution will end up using LTE and thus won't have the same WiFi network. How can I connect my devices without them being on the same WiFi network?

papr 05 March, 2020, 12:27:20

@user-acc960 The application uses the ZRE protocol (a local network discovery protocol) for its remote control. It is not designed to work outside of the local network. Therefore, you won't be able to remote control the phone via the mobile LTE connection. 😕

user-acc960 05 March, 2020, 12:46:00

@papr Currently we are working on a project for a company, and they require us to create an application that, in short, combines the data of multiple input devices. Among these devices are two pairs of Pupil Labs Invisible glasses. We have to use LTE because the application will be used in cars and on the street. We have the flexibility to use something like a Raspberry Pi as a back-end device. This 'in between' device will allow us to gather all required data more easily, as it doesn't have to be multi-platform yet. From this back-end we will 'stream' the data via e.g. a socket protocol to the front-end. The front-end will be multi-platform and the final product for our product owners. I was wondering if you know if this is actually possible, as the ZRE protocol seems to be quite a limiting factor for us. If it is possible, I would like to know how you would take on this task if you had to create something similar. Thanks in advance!

papr 05 March, 2020, 13:05:52

@user-acc960 I see the following option as a possible setup:
- Use the Raspberry Pi to host a local WiFi network,
- connect your two Companion phones to the network,
- receive the streams from the phones on the Pi,
- use the Pi to forward the streams to wherever you need the data.
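The forwarding step on the Pi could be sketched, for example, as a generic zmq proxy. This is a hypothetical sketch assuming pyzmq; the function name, addresses, and port are illustrative and not part of any Pupil protocol:

```python
import zmq


def run_forwarder(local_sub_addrs, publish_port=5600):
    """Sketch of a stream forwarder: subscribe to data streams
    published on the local network and re-publish them on a single
    port that a remote front-end can connect to.

    local_sub_addrs: iterable of zmq endpoints, e.g.
        ["tcp://192.168.0.10:5555", "tcp://192.168.0.11:5555"]
    (addresses and port are hypothetical).
    """
    ctx = zmq.Context.instance()

    frontend = ctx.socket(zmq.XSUB)   # receives from the local publishers
    for addr in local_sub_addrs:
        frontend.connect(addr)

    backend = ctx.socket(zmq.XPUB)    # re-publishes to downstream consumers
    backend.bind(f"tcp://*:{publish_port}")

    zmq.proxy(frontend, backend)      # blocks, relaying messages both ways
```

XSUB/XPUB (rather than plain SUB/PUB) lets subscription messages from downstream consumers propagate back to the publishers, which is the standard zmq forwarder pattern.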

user-acc960 05 March, 2020, 13:12:57

@papr Thanks a lot. This confirms that it is possible and gives us a better grasp of the project. We'll be working it out. 👍

user-045b36 05 March, 2020, 13:31:09

Hello! How can I find out the FPS of the eye cameras from an application?

user-045b36 05 March, 2020, 13:33:36

I have a problem: I constantly receive msgpack data with a timestamp < 0 for the world camera.

user-79ce5d 05 March, 2020, 17:46:05

Hi @user-79ce5d, what Android version was your Samsung tablet using in December last year vs now? I ask because this may be related to an issue in Android v10. @wrp I'm not entirely sure right now. I'll confirm later, but I think it updated to Android v9, and it does say that the Pupil Mobile software is not compatible with that version. Is there any way around this, or any solution?

user-5c16b8 06 March, 2020, 14:14:50

@papr That would be great, thanks! For now we have the 3D eye movements, including torsion, as a quaternion in the eye reference frame (calculated using the pupil norm 3D vector for the horizontal and vertical eye movement and the torsion estimated based on custom computer vision algorithms in Matlab applied to the eye video images). But in order to say something about the orientation of Listing's plane, we would like to get to a quaternion in the world reference frame. Then we first need to get it in the world camera reference frame. I hope you can help us with it. @papr , did you have time to look into this?

wrp 07 March, 2020, 01:45:59

@user-79ce5d you should be able to use Pupil Mobile with Android v9. Please check the Android version on your tablet to check when possible.

wrp 07 March, 2020, 01:46:36

@user-5c16b8 a member of our team will be able to reply to your questions early next week. Apologies for the delay.

user-5c16b8 09 March, 2020, 10:35:16

@wrp No problem, thanks for the update

user-79ce5d 09 March, 2020, 19:45:51

@user-79ce5d you should be able to use Pupil Mobile with Android v9. Please check the Android version on your tablet to check when possible. @wrp Hi. So I just confirmed: it is currently Android v9 (it used to be v8). The app also states in the Google Play Store that "this app may not be optimized for your device".

user-79ce5d 11 March, 2020, 14:49:39

is there an email I can reach you guys on?

papr 11 March, 2020, 14:55:04

@user-79ce5d You can reach us via [email removed] too

user-79ce5d 11 March, 2020, 16:07:54

@user-79ce5d You can reach us via [email removed] too @papr Thanks

papr 11 March, 2020, 16:36:04

@user-5c16b8 After a successful calibration, Capture publishes the calibration.calibration_data notification. It contains a field called mapper_args. In the case of a 3d calibration, it contains the following subfields:

- eye_camera_to_world_matrix (monocular calibration), or
- eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 (binocular calibration)

where matrix[0:3, 0:3] corresponds to the rotation matrix and matrix[0:3, 3] to the translation vector. (Note: this indexing starts at 0; slices include the starting value but exclude the end value.)
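As a minimal sketch of that rotation/translation split (assuming numpy; the matrix values here are invented for illustration, not taken from a real calibration):

```python
import numpy as np

# Hypothetical 4x4 homogeneous transform, shaped like the
# eye_camera_to_world_matrix from mapper_args (values invented).
eye_camera_to_world_matrix = np.array([
    [1.0, 0.0, 0.0,  20.0],
    [0.0, 1.0, 0.0, -15.0],
    [0.0, 0.0, 1.0, -10.0],
    [0.0, 0.0, 0.0,   1.0],
])

R = eye_camera_to_world_matrix[0:3, 0:3]  # rotation part
t = eye_camera_to_world_matrix[0:3, 3]    # translation part

# Map a 3D point from the eye-camera frame to the world-camera frame.
point_eye = np.array([0.0, 0.0, 50.0])
point_world = R @ point_eye + t           # -> array([ 20., -15.,  40.])
```

The same rigid transform applies to 3D gaze directions, except that a pure direction vector is rotated only (R @ v), without adding the translation.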

papr 11 March, 2020, 16:37:26

@user-045b36 FPS setting: Go to the Video Source (earlier versions UVC Source) menu of the eye processes. It includes a selector for the camera frame rate.

papr 11 March, 2020, 16:38:02

@user-045b36 Is it possible that you have synced your Pupil Capture instance with a different clock?

user-c50a5c 11 March, 2020, 21:18:17

Hey guys, I am still getting the TurboJPEG error that was discussed a long, long time ago. I have the DIY headset, and I am considering working with the source code and changing the bandwidth. Do I just need a better computer? What's the bottleneck with getting the feed from two webcams into the program? CPU hits 65+%. I changed the world cam to a Microsoft HD-6000, which does not give the TurboJPEG error, but I get other issues, so I will soon change back to the Logitech world camera.

user-5e6759 11 March, 2020, 21:38:12

Hi, when using the surface tracking plugin, I can see the information of the surface I added in events['surfaces']. However, I couldn't find a way to obtain the position of each marker in the world camera image. I checked the documentation, and it seems this information is stored when using the recorder, but may I ask if there's a way to get it in real time?

user-5e6759 11 March, 2020, 21:38:15

Thanks

user-acc960 12 March, 2020, 08:12:33

My Pupil Labs Invisible sends the data to the Pupil Cloud. Is there a way that I can access cloud videos programmatically?

papr 12 March, 2020, 09:05:22

@user-c50a5c TurboJPEG errors appear if the jpeg payload was not transferred correctly. This can happen either due to a misconfigured USB bandwidth or due to a damaged physical connection. The computer is not the bottleneck here, as frames would simply be dropped in case of insufficient CPU resources.

papr 12 March, 2020, 09:05:42

@user-acc960 Yes, there is: https://github.com/pupil-labs/pupil-cloud-client-python/

papr 12 March, 2020, 09:07:56

@user-5e6759 The detected markers are currently not published nor exported in Pupil Player. We will add this functionality to one of our upcoming releases.

wrp 12 March, 2020, 10:14:42

@user-acc960 I'd like to add to what @papr linked: you will need to be logged in to Pupil Cloud in order to view API endpoints in the browser at https://api.cloud.pupil-labs.com

user-045b36 12 March, 2020, 10:27:10

@user-045b36 Is it possible that you have synced your Pupil Capture instance with a different clock? @papr I don't know; I use the default Pupil Capture settings. About the eye camera FPS: how do I find out the FPS of the eye and world cameras from my C++ application?

papr 12 March, 2020, 10:30:20

@user-045b36 Could you elaborate on what your C++ application does?

user-5c16b8 12 March, 2020, 11:38:19

@papr Thanks!

user-045b36 12 March, 2020, 11:49:50

@user-045b36 Could you elaborate on what your C++ application does? @papr It creates one live video from the images captured by the eye cameras and the world camera. I need to know the cameras' FPS from the C++ application for correct operation.
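One language-agnostic option (not confirmed by the thread, which records no answer to this question): since each received frame carries a timestamp, an application can estimate the effective frame rate itself. A sketch in Python; the function name is illustrative, and the same arithmetic carries over directly to C++:

```python
def estimate_fps(timestamps):
    """Estimate the effective frame rate from a list of consecutive
    frame timestamps (in seconds)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    duration = timestamps[-1] - timestamps[0]
    # N timestamps span N - 1 frame intervals.
    return (len(timestamps) - 1) / duration

# Four frames spaced 0.5 s apart -> 2.0 frames per second.
estimate_fps([0.0, 0.5, 1.0, 1.5])  # -> 2.0
```

Averaging over a sliding window of recent timestamps smooths out jitter from dropped or delayed frames.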

user-c50a5c 12 March, 2020, 15:21:30

@user-c50a5c TurboJPEG errors appear if the jpeg payload was not transferred correctly. This can happen either due to a misconfigured USB bandwidth or due to a damaged physical connection. The computer is not the bottleneck here, as frames would simply be dropped in case of insufficient CPU resources. @papr Thank you for the tip! I'll delve into learning the code and try out some new connection options for the cameras. I had been using 3 ft Amazon USB extension cables.

papr 12 March, 2020, 15:36:44

@user-c50a5c Ah, yes, passive USB extension cables are known to cause this issue, too. You might have more luck with active extensions.

user-0ef155 13 March, 2020, 11:57:19

Hi, I am not familiar with Python at all. Is there a C++ variant of the NDSI package?

papr 13 March, 2020, 12:51:09

@user-0ef155 Hi. Not that I know of. 😕

user-0ef155 13 March, 2020, 12:52:06

@papr Thanks for the answer. I'll look into rewriting the packages possibly or learning Python then. 🙃

papr 13 March, 2020, 12:52:44

@user-0ef155 in which context do you want to use NDSI?

user-0ef155 13 March, 2020, 12:55:43

@papr Every minute, I want to save the data of the incoming 'world' video with a marker on it (indicating where the user is looking) for the Pupil Labs Invisible.

user-0ef155 13 March, 2020, 12:57:36

@papr The file that is being uploaded to the Pupil Labs Cloud is way too big for me, and it only uploads at the end. To speed up this process, I only want to upload the relevant video to the cloud, and upload it more regularly.

papr 13 March, 2020, 12:57:37

@user-0ef155 if latency is not an issue for you, you could also use Pupil Capture to relay the Pupil Invisible video stream. The access to the image stream via Capture is easier.

papr 13 March, 2020, 12:59:08

Just select your PI device in the "Video Source" menu of Pupil Capture and subscribe to the "Frame Publisher" plugin's data. This is our python example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
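The linked example boils down to roughly the following sketch (assuming pyzmq and a running Pupil Capture instance; 50020 is Capture's default Pupil Remote port, and the function name is illustrative):

```python
import zmq


def subscribe_world_frames(host="127.0.0.1", remote_port=50020):
    """Ask Pupil Remote for the subscription port, then subscribe to
    world-camera frames published by the Frame Publisher plugin.

    Requires Pupil Capture to be running with Pupil Remote enabled.
    """
    ctx = zmq.Context.instance()

    # Request the SUB port from Pupil Remote.
    remote = ctx.socket(zmq.REQ)
    remote.connect(f"tcp://{host}:{remote_port}")
    remote.send_string("SUB_PORT")
    sub_port = remote.recv_string()

    # Subscribe to world-frame messages; payloads are msgpack-encoded
    # metadata followed by the raw image buffer.
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")
    return sub
```

From there, calling recv_multipart() on the returned socket in a loop yields the frames, as shown in the linked recv_world_video_frames.py example.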

user-0ef155 13 March, 2020, 13:00:36

@user-0ef155 Thank you for your help. I'll look into it! 👍

user-acc960 24 March, 2020, 08:07:23

@user-c5fb8b Thank you for your reply. The error is saying: Socket operation on non-socket. I don't quite get it as the python library does what is shown in the attached image. Do you perhaps know how I can extract 'self._zmq_socket' from my zyre node?

Chat image

user-c5fb8b 24 March, 2020, 08:29:38

@user-acc960 No, sorry. I also have no experience with the Matlab wrapper; I just wanted to point you to the original zmq API in order to investigate your error. Maybe someone with more Matlab experience can help here. Please also note that neither matlab-zmq nor pyzmq is developed by us, so you can also always open issues on their GitHub pages and ask for help.

user-acc960 24 March, 2020, 08:30:01

@user-c5fb8b I am sorry for bothering you. I found it after literally hours of searching. Apparently I had to

Chat image

user-c5fb8b 24 March, 2020, 08:41:32

@user-acc960 awesome! Hope I could help!

user-808722 24 March, 2020, 13:12:48

Hi! Could Pupil Player save its settings (open plugins and their settings) and open with those the next time it starts?

user-c5fb8b 24 March, 2020, 13:14:05

Hi @user-808722, yes, Player stores its settings (and open plugins) when it is closed normally (not when e.g. killing the process).

user-808722 24 March, 2020, 13:14:55

Hm, interesting. I'll check on that; mine (on two separate laptops) seems to start erasing the past settings.

user-dfa378 25 March, 2020, 21:31:07

@user-dfa378 Checkout the info.player.json file. It contains the recording start time in system time (unix epoch) and pupil time. You can use these two timestamps to calculate the offset between unix epoch and pupil time and apply it to your other data. @papr Hello once again. I did this, but I'm really sceptical that the value in 'start_time_system_s' is the actual start time. Moreover, it is inconsistent over multiple recordings (e.g. I made two recordings 45 minutes apart, but the info.player.json values differ by 37:42 minutes). Is this a known issue?

papr 26 March, 2020, 11:32:10

@user-dfa378 It is the output of time.time() https://docs.python.org/3/library/time.html#time.time It is therefore subject to system clock changes.
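The offset method quoted earlier in the thread can be sketched like this (a hedged sketch: start_time_synced_s is assumed here as the pupil-time counterpart of start_time_system_s in info.player.json, and the sample values are invented):

```python
import json


def unix_epoch_offset(info):
    """Offset between unix epoch and pupil time, computed from the two
    start timestamps in a recording's info.player.json."""
    return info["start_time_system_s"] - info["start_time_synced_s"]


# In a real recording: info = json.load(open("info.player.json"))
# Sample values below are invented for illustration.
info = {"start_time_system_s": 1583000000.0, "start_time_synced_s": 1234.5}
offset = unix_epoch_offset(info)

# Convert any pupil timestamp from the same recording to unix epoch
# seconds by adding the offset:
gaze_unix = 1300.0 + offset  # -> 1583000065.5
```

Because start_time_system_s comes from time.time(), the offset is only as trustworthy as the system clock at recording start, which is exactly the caveat raised above.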

user-dfa378 26 March, 2020, 15:59:39

@papr thanks once again for the quick response. I really appreciate that. I can't think of a reason for the system time to have changed. But I'll dig a little deeper and let you know if I find something worth sharing

papr 26 March, 2020, 16:01:31

@user-dfa378 I can't think of anything either. 😕

End of March archive