@user-c9d205 We have merged a fix to master. Could you give it a try please?
@papr Gave it a try, it did not fix the problem, unless I have the wrong version. Is it 1.22.7? Commit bcfbf7d
@user-c9d205 that is the correct version. Are you using Pupil Mobile or a direct USB connection to the device?
usb
@papr Is gaze_data in detect_fixations() all of the available gaze data, i.e. gaze.data, or each individual gaze datum, i.e. gaze.data[x]?
Hey Pupil team, thanks for all the support so far! I've been doing some further testing with pyuvc, it works really well on Catalina with our microscope, super handy!
However, I'm now testing it on a Raspberry Pi 4. I think I managed to compile and install all the dependencies listed under "Dependencies Linux" here: https://github.com/pupil-labs/pyuvc
But when running the example code I'm getting an import error:
/usr/local/lib/libuvc.so.0: undefined symbol: libusb_hadnle_events_completed
I also wasn't sure if I need to run python setup.py install on Linux, as it seems to appear only under the macOS dependencies.
I tried following the suggestion here: https://github.com/pupil-labs/pyuvc/issues/45 But it seems like I already have pkg-config installed on the Pi, so I guess there is no point in re-compiling libuvc and then pyuvc.
thanks!!
Hi Pupil team! We are currently working on measuring torsion based on the eye images, and preliminary data are looking promising. However, for now I have all the data in the reference frame of the eye cam, and I would like to transform it into the world cam reference frame. How can I calculate the rigid transform (at least I assume it is a rigid transform) to get from the 3D base data in the eye cam reference frame to the 3D gaze data in the world cam reference frame? I have been trying some things and I have found some relevant code in pupil_calibrate_3D.py and associated files, but I haven't succeeded yet, and I don't understand the fixed translation vector you're using. Can you help me with this?
@user-f8c051 looks like there is a typo. Did you copy and paste the error message? If yes, there might be something wrong in our libuvc.
@user-5c16b8 could you elaborate on how torsion relates to the transform between eye and world camera? I can link you to the relevant transformation code when I am back in the office.
@user-c9d205 could you share the capture.log file after reproducing the issue?
@user-2be752 I will have to look this up when I am back in the office
hey @papr thanks, it was my typo, sorry. I just logged into Discord on the Pi so I can copy-paste the errors. Here is the actual error:
import uvc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/libuvc.so.0: undefined symbol: libusb_handle_events_completed
seems like libuvc can't find libusb?
@papr That would be great, thanks! For now we have the 3D eye movements, including torsion, as a quaternion in the eye reference frame (calculated using the pupil norm 3D vector for the horizontal and vertical eye movements, and the torsion estimated with custom computer vision algorithms in Matlab applied to the eye video images). But in order to say something about the orientation of Listing's plane, we would like to get a quaternion in the world reference frame. For that, we first need to get it into the world camera reference frame. I hope you can help us with this.
Hi, we have been using the Pupil Cam with a Samsung tablet. However, in December last year the tablet's software was updated, and since then the camera has not been working well with the tablet. The video capture page freezes, and after recording, when I try to access the file on the computer, there is no MP4 video file and the audio file is empty. I tried to uninstall the software update on the tablet, but it couldn't be done. Can you help with this?
this is what the files look like now after recording with the Pupil Cam
This is what we had before the software update on the tablet
Hi @user-79ce5d what Android version was your Samsung tablet using in December last year vs now? I ask because this may be related to an issue in Android v10.
Hi, I am working on a project with the Pupil Labs Invisible. I want to programmatically start and stop recording. How can I achieve this?
@user-acc960 Please checkout this section of our documentation. It includes an example, too. https://docs.pupil-labs.com/developer/invisible/#remote-control
@papr Thank you for the suggestion. I already looked into that, but that only works if the devices are on the same WiFi network I think. Our solution will end up using LTE and thus won't have the same WiFi network. How can I connect my devices without them being on the same WiFi network?
@user-acc960 The application uses the ZRE protocol (a local network discovery protocol) for its remote control. It is not designed to work outside of the local network. Therefore, you won't be able to remote control the phone via the mobile LTE connection. 😕
@papr Currently we are working on a project for a company, and they require us to create an application that can, in short, combine data from multiple input devices. Among these devices are two pairs of Pupil Labs Invisible glasses. We have to use LTE because the application will be used in cars and on the street. We have the flexibility to use something like a Raspberry Pi as a back-end device. This 'in between' device will allow us to gather all the required data more easily, as it doesn't have to be multi-platform yet. In this back-end we will 'stream' the data via e.g. a socket protocol to the front-end. The front-end will be multi-platform and the final product for our product owners. I was wondering if you know if this is actually possible, as the ZRE protocol seems to be quite a limiting factor for us. If it is possible, I would like to know how you would take on this task if you had to create something similar. Thanks in advance!
@user-acc960 I see the following option as a possible setup:
- use the Raspberry Pi to host a local wifi network,
- connect your two Companion phones to the network,
- receive the streams from the phones on the Pi,
- use the Pi to forward the streams to wherever you need the data (sketched below).
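As a purely hypothetical illustration of that last forwarding step: if the streams end up as ZMQ publications on the Pi (for example via Pupil Capture's IPC backbone), a small proxy could relay them to remote consumers. The port numbers below are placeholders, not official defaults.

```python
# Hypothetical sketch: relay ZMQ publications from a local source on the
# Pi to remote consumers. Both port numbers are placeholders.
import zmq

ctx = zmq.Context()

# Local publisher, e.g. Pupil Capture's SUB port (query it via Pupil Remote).
frontend = ctx.socket(zmq.XSUB)
frontend.connect("tcp://127.0.0.1:5555")

# Remote consumers connect here.
backend = ctx.socket(zmq.XPUB)
backend.bind("tcp://*:5559")

# Blocks forever, forwarding messages and subscriptions between the two.
zmq.proxy(frontend, backend)
```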
@papr Thanks a lot. This confirms that it is possible and gives us a better grasp on the project. We'll be working it out. 👍
Hello! How can I find out the FPS of the eye cameras from an application?
I have a problem: I constantly receive msgpack messages with a timestamp < 0 for the world camera.
@wrp I'm not entirely sure right now. I'll confirm later, but I think it updated to Android v9, and it does say that the Pupil Mobile software is not compatible with that version. Is there any way around this or any solution?
@papr, did you have time to look into this?
@user-79ce5d you should be able to use Pupil Mobile with Android v9. Please check the Android version on your tablet when possible.
@user-5c16b8 a member of our team will be able to reply to your questions early next week. Apologies for the delay.
@wrp No problem, thanks for the update
@wrp Hi... So I just confirmed, and it is currently Android v9; it used to be v8. The app also states in the Google Play Store that "this app may not be optimized for your device".
is there an email I can reach you guys on?
@user-79ce5d You can reach us via [email removed] too
@papr Thanks
@user-5c16b8 After a successful calibration, Capture publishes the calibration.calibration_data notification. It contains a field called mapper_args. In case of a 3d calibration, it contains the following subfields:
- eye_camera_to_world_matrix (monocular calibration), or
- eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 (binocular calibration)
where matrix[0:3, 0:3] corresponds to the rotation matrix and matrix[0:3, 3] to the translation vector. (Note: this indexing starts at 0, and slices include the starting value but exclude the end value.)
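To make the indexing concrete, here is a minimal sketch (not an official Pupil example) of how one might split such a matrix into rotation and translation and map a point from eye to world coordinates. It assumes numpy and that the matrix arrives as a nested list in the notification payload:

```python
# Minimal sketch, assuming mapper_args delivers eye_camera_to_world_matrix
# as a nested list representing a 4x4 homogeneous transform.
import numpy as np

def eye_to_world(eye_camera_to_world_matrix, point_3d_eye):
    """Map a 3D point from eye camera coordinates to world camera coordinates."""
    m = np.asarray(eye_camera_to_world_matrix)
    rotation = m[0:3, 0:3]    # 3x3 rotation matrix
    translation = m[0:3, 3]   # 3-element translation vector
    return rotation @ np.asarray(point_3d_eye) + translation
```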
@user-045b36 FPS setting: go to the Video Source (in earlier versions, UVC Source) menu of the eye processes. It includes a selector for the camera frame rate.
@user-045b36 Is it possible that you have synced your Pupil Capture instance with a different clock?
hey guys, I am still getting the turboJPEG error that was discussed a long, long time ago. I have the DIY headset and I am considering working with the source code and changing the bandwidth. Do I just need a better computer? What's the bottleneck in getting the feed from two webcams into the program? CPU hits 65+%. I changed the world cam to a Microsoft HD-6000, which does not give the turboJPEG error, but I get other issues, so I will soon change back to the Logitech world camera.
Hi, when using the surface tracking plugin, I can see the information for the surfaces I added in events['surfaces']. However, I couldn't find a way to obtain the position of each marker in the world camera image. I checked the documentation, and it seems this information is stored when using the recorder, but may I ask if there's a way to get it in real time?
Thanks
My Pupil Labs Invisible sends the data to the Pupil Cloud. Is there a way that I can access cloud videos programmatically?
@user-c50a5c TurboJPEG errors appear if the JPEG payload was not transferred correctly. This can happen either due to a misconfigured USB bandwidth or due to a damaged physical connection. The computer is not the bottleneck here, as frames would simply be dropped in case of insufficient CPU resources.
@user-acc960 Yes, there is: https://github.com/pupil-labs/pupil-cloud-client-python/
@user-5e6759 The detected markers are currently neither published nor exported in Pupil Player. We will add this functionality in one of our upcoming releases.
@user-acc960 I'd like to add to what @papr linked: you will need to be logged in to Pupil Cloud in order to view the API endpoints in the browser at https://api.cloud.pupil-labs.com
@papr I don't know; I use Pupil Capture's default settings. About the eye camera FPS: how do I find out the FPS of the eye and world cameras from my C++ application?
@user-045b36 Could you elaborate on what your C++ application does?
@papr Thanks!
@papr It creates one live video from the images captured by the eye cameras and the world camera. I need to know the cameras' FPS from the C++ application for correct operation.
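One general approach, sketched in Python here (the same logic carries over to C++), is to estimate the effective frame rate from the timestamps of the frames the application actually receives, rather than relying on a configured value:

```python
# Sketch: estimate a camera's effective FPS from consecutive frame
# timestamps, e.g. the timestamps delivered with each frame.
from collections import deque

class FPSEstimator:
    def __init__(self, window=90):
        # Keep the most recent `window` timestamps.
        self.timestamps = deque(maxlen=window)

    def push(self, ts):
        self.timestamps.append(ts)

    @property
    def fps(self):
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        # (n - 1) frame intervals over the observed time span.
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0
```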
@papr Thank you for the tip! I'll delve into learning the code and try out some new connection options for the cameras. I had been using Amazon USB extension cables, 3 ft.
@user-c50a5c Ah, yes, passive USB extension cables are known to cause this issue, too. You might have better luck with active extensions.
Hi, I am not familiar with Python at all. Is there a C++ variant of the NDSI package?
@user-0ef155 Hi. Not that I know of. 😕
@papr Thanks for the answer. I'll look into rewriting the packages possibly or learning Python then. 🙃
@user-0ef155 in which context do you want to use NDSI?
@papr Every single minute I want to save the data of the incoming 'world' video, with a marker on it (indicating where the user is looking), for the Pupil Labs Invisible.
@papr The file that is being uploaded to the Pupil Labs Cloud is way too big for me and only uploads at the end. To speed up this process, I only want to upload the relevant video to the cloud and upload it more regularly.
@user-0ef155 if latency is not an issue for you, you could also use Pupil Capture to relay the Pupil Invisible video stream. Accessing the image stream via Capture is easier.
Just select your Pupil Invisible device in the "Video Source" menu of Pupil Capture and subscribe to the "Frame Publisher" plugin's data. This is our Python example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
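A condensed version of that pattern might look like the sketch below. It assumes Pupil Remote is listening on its default port 50020 and that the Frame Publisher plugin is active; see the linked helper script for the complete, maintained example.

```python
# Condensed sketch of subscribing to world frames via Pupil Capture's
# network API. Assumes Pupil Remote on the default port 50020 and an
# active Frame Publisher plugin; see recv_world_video_frames.py for the
# full example.
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the port of the IPC backbone's SUB socket.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to world camera frames.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    parts = subscriber.recv_multipart()
    topic = parts[0].decode()
    meta = msgpack.unpackb(parts[1])  # frame metadata (width, height, ...)
    raw = parts[2:]                   # raw image bytes; format depends on plugin settings
    print(topic, meta.get("timestamp"))
```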
@user-0ef155 Thank you for your help. I'll look into it! 👍
@user-c5fb8b Thank you for your reply. The error is saying: Socket operation on non-socket. I don't quite get it as the python library does what is shown in the attached image. Do you perhaps know how I can extract 'self._zmq_socket' from my zyre node?
@user-acc960 no, sorry. I also have no experience with the Matlab wrapper; I just wanted to point you to the original ZMQ API in order to investigate your error. Maybe someone with more Matlab experience can help here... Please also note that neither matlab-zmq nor pyzmq are developed by us, so you can always open issues on their GitHub pages and ask for help.
@user-c5fb8b I am sorry for bothering you. I found it after literally hours of searching. Apparently I had to
@user-acc960 awesome! Hope I could help!
Hi! Could Pupil Player save its settings (plugins and their settings) and open with them the next time it is opened?
Hi @user-808722 yes, Player stores its settings (and open plugins) when it is closed normally (not when e.g. killing the process).
hm interesting, I'll check on that; mine (on two separate laptops) seems to start by erasing the past settings
@user-dfa378 Check out the info.player.json file. It contains the recording start time in system time (Unix epoch) and in Pupil time. You can use these two timestamps to calculate the offset between Unix epoch and Pupil time and apply it to your other data.
@papr hello once again. I did this, but I'm really sceptical that the value in 'start_time_system_s' is the actual start time. Moreover, it is inconsistent over multiple recordings (e.g. I made two recordings 45 min apart, but the info.player.json files differed by 37:42 min). Is this a known issue?
@user-dfa378 It is the output of time.time() (https://docs.python.org/3/library/time.html#time.time). It is therefore subject to system clock changes.
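For reference, a minimal sketch of the offset calculation described above; the field names follow the current recording format, and the file path is a placeholder:

```python
# Sketch: convert Pupil timestamps to Unix epoch using the two start
# times stored in info.player.json. Adjust the path to your recording.
import json

with open("recording/info.player.json") as f:
    info = json.load(f)

# Offset between the Unix epoch and the Pupil clock at recording start.
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_unix(pupil_ts):
    """Map a Pupil timestamp to Unix epoch seconds."""
    return pupil_ts + offset
```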
@papr thanks once again for the quick response, I really appreciate it. I can't think of a reason for the system time to have changed, but I'll dig a little deeper and let you know if I find something worth sharing.
@user-dfa378 I can't think of anything either. 😕