Is there a way to save the video frames from both cameras? I tried the following using OpenCV's VideoWriter:
# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while True:
    frame = cap.get_frame_robust()
    out.write(frame)
Try frame.img
but because the frame is not a numpy array but rather a uvc.Frame type, it's not working
out.write(frame.img)
yeah got it
thanks
Beware that frame.img yields an RGB image. I could imagine that OpenCV requires a BGR image
@papr yeah sure I took care of that...thanks for the info btw
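For reference, a rough sketch of the approach worked out above, assuming a pyuvc capture for one of the cameras and converting RGB to BGR per the note about frame.img (whether the conversion is needed depends on your setup; device selection and frame mode are illustrative):

import cv2
import uvc  # pyuvc

# Open the first available camera (pick the uid of the camera you want).
dev_list = uvc.device_list()
cap = uvc.Capture(dev_list[0]['uid'])
cap.frame_mode = (640, 480, 30)  # must match the VideoWriter frame size

fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 30.0, (640, 480))

while True:
    frame = cap.get_frame_robust()                     # uvc.Frame, not a numpy array
    bgr = cv2.cvtColor(frame.img, cv2.COLOR_RGB2BGR)   # OpenCV expects BGR
    out.write(bgr)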
Hello! When in Player and going fixation by fixation, is there a way to see the time elapsed in the clip on the screen? I am trying to line up fixations with music to see where fixations are in terms of the music being played.
The seek bar should jump forward when you jump between fixations
It does, but it doesn't tell me how much time has elapsed
Do you want the duration of a fixation?
No, I've got those. I'd like to know that fixation #45 for example happened at 38seconds into the recording
Hmm... You need to subtract the start timestamp from the fixation timestamp
The fixation start time on the excel spreadsheet has a column for timestamp but it's stamps like "327598.1496"
Yep, you need to subtract from the first timestamp
The next release will show all information about the current fixation, including its timestamp
Cool! Do you have an eta on that release?
Doing this you will have time in seconds from the start of the recording
And in the meantime- how do I know that the timestamp of the very first fixation is also the actual start of the recording?
But @user-41f1bf is correct. You might have to subtract the start timestamp
The first gaze point has the start timestamp
Note: not the first fixation
Yes, the first gaze timestamp, not the first fixation
Ok - and where can I find that first gaze stamp? All I see is fixations
Use the raw data exporter to export gaze data
Exporting raw gaze data
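A minimal sketch of the subtraction described above, assuming an export folder layout like exports/000/ with gaze_positions.csv and fixations.csv (the column names follow the Pupil Player exports; adjust them if your version differs):

import csv

def first_timestamp(path, column='timestamp'):
    # Read the timestamp of the first data row from an exported csv file.
    with open(path, newline='') as f:
        reader = csv.DictReader(f)
        return float(next(reader)[column])

# First gaze timestamp marks the start of the recording.
recording_start = first_timestamp('exports/000/gaze_positions.csv')

with open('exports/000/fixations.csv', newline='') as f:
    for row in csv.DictReader(f):
        elapsed = float(row['start_timestamp']) - recording_start
        print('fixation {} starts {:.2f} s into the recording'.format(row['id'], elapsed))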
Ok, I'll give that a try. Thank you so much!
No problem
Hello! I am looking at a fixation csv file and wondered what unit of measurement the "start timestamps" are in? For example, my first fixation start timestamp is 327598.1496
The unit is seconds, as all timestamps in Pupil
Ok, so then my next question is why it doesn't start at 0?
We use the operating system's monotonic clock. The start time is arbitrary, but often corresponds to the boot time of the computer
gotcha. Thanks!
No problem
anyone have recommendations for marker sizes when using manual marker calibration?
@user-8a9ca1 this depends on the distance at which you want to show them
is there a particular ratio of distance to size I should be aiming for?
difficult to say, usually bigger is better than smaller
I printed out the provided markers and they filled an entire sheet of paper, that seems like it might be a bit too big 😃
at monitor distance, 4 cm in diameter is more than enough
okay, that should give me a baseline, we'll try a few different sizes based on that
what's important is that the rings are distinguishable in your world camera image
meaning a low resolution requires bigger markers
okay, cool
Could someone point me in the direction for the code for how pupil syncs multiple cameras? I've got some projects where I need to sync some usb cameras.
Take a look at the zmq zyre code
Also, take a look at pupil groups
Sorry, do you mean remote time sync? Or local sync?
I guess local? The goal is to sync frames together for vision. I think I have a handle on how it should work: capture multiple cameras' frames with timestamps in separate processes/threads, then send them to a master process/thread which matches up frames. A code example would be helpful.
Also the uvc trick to get multiple cameras on a single usb bus is also something I'm looking into
For local, all you need is a central monotonic time "server"
In linux, they use clock_gettime
Each frame will receive a timestamp as it arrives from a camera, asynchronously.
For remote, I dont know...
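Not Pupil's actual implementation, but a minimal sketch of that local-sync idea with pyuvc, where time.monotonic plays the role of the central clock (the frame mode and matching logic are illustrative):

import queue
import threading
import time

import uvc  # pyuvc

timestamped_frames = queue.Queue()

def grab(uid, cam_id):
    cap = uvc.Capture(uid)
    cap.frame_mode = (640, 480, 30)  # pick a mode the camera supports
    while True:
        frame = cap.get_frame_robust()
        # Tag each frame with the shared monotonic clock as it arrives,
        # asynchronously; a consumer can later match frames by timestamp.
        timestamped_frames.put((cam_id, time.monotonic(), frame))

for cam_id, dev in enumerate(uvc.device_list()):
    threading.Thread(target=grab, args=(dev['uid'], cam_id), daemon=True).start()

while True:
    cam_id, ts, frame = timestamped_frames.get()
    print('cam', cam_id, 'frame at', ts)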
ok, I think I can figure it out from that
thanks a bunch!
Pupil uvc cameras also support hardware timestamping. Right now I don't remember the details, please take a look at the docs
Hello. I'm using the Vive add-on. Before I install the camera drivers (or after I uninstall them twice), they show up in control panel as Imaging devices> Pupil Cam1 ID0, and Pupil Cam1 ID1. After I install the drivers, they show up as libusbK USB Devices > Pupil Cam1 ID0, and Pupil Cam1 ID0. i.e., they are both ID0. See the attached image. The consequence of this is that when starting pupil_service, only one window opens. I can send a command over zmq to open the other one and I can use the GUI to select the source, so this isn't a deal breaker, I just thought it was odd and it might be nice if I can fix it.
Separate question: How much overhead does the eye process visualization take? I'm using pupil_service, and I don't really need to visualize two different high-res video streams while I'm running my experiment, so I'd be happy to close these windows (while gaze continues to be calculated) if it meant decreasing processor load at all.
Minimizing the eye windows will reduce the required load.
I am not sure, but Pupil Capture only opens one eye window by default. The same is possible with Service.
If you open both windows and restart service, then both should open as well.
@papr, thank you. IMO pupil_service shouldn't open any windows at all... it's just a service. It should be up to the client to change any settings or visualize the images. In PSMoveService (for tracking PSMove controller pose from its IMU and PSEye camera), whenever there's a client that is requesting the video feed, we write the frames to a shared memory location and then any client that wants to view the frame can. It's still up to the service to do CV on the raw frames, and actually the service writes the CV result (ellipses, ROI boxes, etc) onto the frame in shared memory so the client gets that too.
Anyway, I see that when the eye process window is minimized that it's still processing data but the visualization is not updating so that's good enough for now.
Hi guys, I'm trying to build Pupil from source on Linux (on Ubuntu 16.04). I'm currently stuck installing some of the dependencies:
$ sudo pip3 install git+https://github.com/pupil-labs/pyuvc
Error is attached:
Notice the error message: /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libturbojpeg.a(libturbojpeg_la-turbojpeg.o): relocation R_X86_64_32 against `.data' can not be used when making a shared object; recompile with -fPIC
All the dependencies installed fine before your custom libjpegturbo & uvc, but I'm stuck there. Has anyone seen this problem? From the error message perhaps I need to use a different version of GCC? (My default GCC is 5.4.0)
Please delete libturbojpeg.a and try again.
Thanks, I tried deleting that file and rebuilding libturbojpeg and libuvc and then pyuvc but I still get the same error when building pyuvc.
I got it working! I noticed I also had another version of libturbojpeg.a at "/usr/lib/x86_64-linux-gnu/libturbojpeg.a" in my system folder. After deleting that file and rebuilding everything, now pyuvc builds 😃 Thanks for your help @mpk!
I managed to build all the Linux dependencies, but when I run it I get a runtime crash due to protobuf version. I tried searching the web about it, I get the impression I need to uninstall my system libprotobuf and install a newer version then reconfigure & rebuild OpenCV?
/Linux/PupilLabs/pupil/pupil_src $ python3 main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
[libprotobuf FATAL /Linux/opencv/3rdparty/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Hello, I was using the eye tracker to make a few recordings the other day, and something happened with 3 of the 4 videos I made. First, the audio didn't get recorded and then when I brought up the file in Player, it said there were a certain number of fixations, but going fixation by fixation, I was seeing scribbles instead of dots for the fixations. The exported video did the same thing. The time was running, but the video looked frozen. I'm wondering if something happened with my FPS? Would my computer capacity mess up FPS? I also had an external mic plugged in - I don't know if I was doing too much on my computer and that messed up the processing. I wanted to see if anyone else had run into this or if you have any ideas?
Hey @user-2798d6 If you want you can send me the recording and I will have a look at it tomorrow. I will test it with the new fixation detector and see if the old one is simply buggy
Sure thing! Is there a specific email address I should use or just the info email?
Just upload it to Google Drive and send me the link in a direct message 😃
@user-2798d6 Indeed, it looks like you recorded with an average FPS of 15. This is very likely due to high load. This also explains why you can see fixations even though there are gaze points displayed that do not belong to the fixation. Everything that is displayed (gaze, fixations, eye overlays) is shown during the world frame that they are closest to. If you only have few frames you end up with more data points per frame.
Hi, I am just wondering how to record audio using pupil labs on a laptop? Thanks, Sara
@user-380f66 you can load the Audio Capture plugin before recording and then select the audio source from within the plugin.
Thanks! Could I get a link to this plugin? I searched but didn't see it.
here is an example
You load new plugins from the plugin menu in the top of the UI in Pupil Capture's World window
BTW @user-380f66 audio syncing is only available on macos and linux at this time
That is fine thanks, I have a mac. I can load it from the window. But do I need to download something first?
No need for any other downloads other than Pupil software - the plugin is included in Pupil Capture. Please ensure that you're running the latest version of Pupil software: https://github.com/pupil-labs/pupil/releases/latest
Ok, thanks.
You're welcome 😄
Also, I have a separate question. My external cameras and neuro equipment are all sampling at 25 fps. If I set the eye tracker to sample at this rate as well for a social cog study, would it cause problems with the sampling rate being too low?
I would set both the world and eye cams to sample at this rate
@user-380f66 I would recommend correlating time-stamped data post-hoc.
Does your neuro equipment save timestamps?
Any idea why the python example below does not work with the new version of pupil service (V0.9.14): https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py
@user-0d187e Do you have an eye window open?
yes
Do you see a live image from the camera or a gray image?
I have the eye image but the loop breaks in its first round after topic = sub.recv_string()
So, do you mean that you get a Python exception? If yes, which one is it?
sorry I tested it again. not really! it just waits there. I think it's not connected to the service
How could I get the ip and port for the service
Ok, good to know. Open Pupil Capture and scroll down to the Pupil Remote menu. It should show you the IP and port to which you have to connect.
I killed all pupil services running in the background, and now it seems to be working, even though the port number is different from what was mentioned in Pupil Capture
btw, I accidentally closed the Remote menu. How can I have it shown again?!
sorry about these stupid questions!
It is a plugin. Use the "Open Plugin" selector at the top of the main menu
Pupil Service is a stand-alone app. You can run Service and Capture in parallel but they will have two Pupil Remote instances that are independent of each other.
I see. How to get the ip/port of the service then?
Pupil Service forces port 50020. It does not start if it cannot allocate this port.
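For reference, a minimal sketch of how the filter_messages.py helper connects: ask Pupil Remote (on the default port 50020 mentioned above) for the SUB port and then subscribe to the topics you care about:

import zmq

ctx = zmq.Context()

# Pupil Remote (Capture or Service) listens on port 50020 by default.
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')

# Ask Pupil Remote for the port of the IPC SUB backbone.
remote.send_string('SUB_PORT')
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')  # e.g. pupil data only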
Alright. Thanks
The thing is that when I close the pupil service on my mac it doesn't kill the process
How do you close it? Via a notification?
😃 right click and quit
Mmh. I see that this is an intuitive way to close it, but the implementation does not support it. =/ We will work on a fix for that.
In the mean time you can use Pupil Capture. It is the same as Service but features an interface which you can use directly. Service is supposed to be controlled via network notifications.
pupil capture displays the images and that may slow down the processing, is that correct?
It uses exactly the same code to display eye images and calculate pupil positions as Service. You can minimize all windows to get the same performance as Service.
Hey! Do you have somewhere that I can read through what algorithms are used in the pupil system? I didn't really want to dig through the code and assume i'm missing something because the docs are great.
algorithms for what?
do you mean for pupil detection?
Sure
I cannot give you the exact answer as I don't know the code in detail. But I'm trying to help clarify your question. For pupil detection, part of the code detects the pupil blob in the image, part of it does ellipse fitting, and part of it does some geometrical calculations to estimate the 3D position of the eyeball. You need to explain what you need exactly!
To be honest, I was really just interested in the general flow from capture of image through to generated output and what underlying algorithms have been implemented. I have assumed they would either have been implemented from research papers, or written up as research papers themselves. The device obviously detects and tracks the pupil, and maybe some other things (I don't have the device). I'm interested in the flow of
Looking through the documentation on pupil-labs it looks very thorough, including how to cite the use of pupil itself but not what has been used to develop it.
So I thought I might be missing something obvious
The algorithm has been published as a research paper. https://arxiv.org/pdf/1405.0006.pdf
This is also the paper that you should cite.
How do I calibrate the cameras in order to have reliable 3d data (e.g. gaze_point_3d, gaze_normal_3d and eye_center_3d)?
First off, the pupil detection needs to be good. Second, the 3d model should have converged. Third, the subject should have performed well during calibration.
of course. I mean world camera calibration to obtain the external camera parameters
if that's needed?!
That's great, thanks @papr
e.g. when I change the orientation of the world camera, it changes the center of the camera. Then how does the tracker know where the new center is when reporting the 3D gaze coordinates relative to that center?
You break the calibration by changing any of the physical relations of the cameras. You will need to recalibrate afterwards
so the tracker calibrates the camera as well during the gaze calibration?
But then how does it know the depth of the gaze point in 3d (gaze_point_3d.z) using only a one-time calibration at a single depth?
the 3d calibration mostly calibrates the physical relations of the cameras, yes. Additionally a few physiological parameters are fitted as well.
Unless the gaze_point_3d is always assumed to be inside the screen plane
The depth is only available using a binocular mapper. It uses the intersection of two mapped gaze normals as depth indicator
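Not Pupil's exact code, but a minimal numpy sketch of that geometric idea: take the two mapped gaze normals as rays from the eye centers and use the midpoint of the shortest segment between them as the 3d gaze point (the example coordinates below are made up):

import numpy as np

def nearest_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays p1 + t*d1 and p2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # near zero if the gaze normals are (almost) parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return ((p1 + t * d1) + (p2 + s * d2)) / 2

# Hypothetical eye centers and gaze normals in world camera coordinates:
gaze_point_3d = nearest_intersection(
    np.array([-30.0, 0.0, 0.0]), np.array([0.2, 0.0, 1.0]),
    np.array([30.0, 0.0, 0.0]), np.array([-0.2, 0.0, 1.0]))
print(gaze_point_3d)  # roughly (0, 0, 150): depth follows from the normals' intersection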
if you know what i mean?
I need to think about it
thanks though
😃
No problem. Do not hesitate to post further questions as they come up 😃
sure
Hi, we are trying to drag copies of our data directories for each tracker into pupil player, but we are getting a notice from pupil player that it needs to update its recording format, then it boots us out of the software (quits without warning). We are just wondering what we can do to troubleshoot this so that we can view our files in pupil player? Thanks!
Hi,
Can you run pupil from terminal and paste the output?
Sorry--we're not super tech savvy--can you explain to us how to test this?
@Sara#2380 on Windows the terminal opens automatically with the app. On Mac and Linux, just open the terminal app, type pupil_player and hit enter.
@user-45d36e A side note: The upgrade window is supposed to close after the upgrade procedure. Afterwards a new window should open. This might take a while though if you have very long recordings.
Ok, thank you. On a side note, for our data that is not with the phones, can we access audio in the pupil capture playback? Thanks!
sorry. pupil player playback
Do you mean record audio when you say access audio? If yes, this feature is currently not available on Windows. The audio/video framework that we use under the hood, does not support it reliably.
Yes, I am able to record audio and get an mp4 file on the mac
I just can't figure out how to get the audio when I player even though the audio file is in the directory
In order to have audio in Pupil Player you will have to record audio in Pupil Capture. You can do so using the Audio Capture plugin, that as mentioned above, does not work on Windows.
I see, you will need to export the video using the Video Exporter plugin to merge video and audio.
Ok, so I need to export the video while it is recording in pupil capture with that plugin? Or after it has saved in player?
Ok. So I recorded with audio and dragged the directory into pupil player. Then under audio I added the Video Export Launcher plugin. Then I clicked on the down arrow on the menu on the left to export, and it told me it had exported. Then I looked for a new file and also tried to play the current one but didn't have audio.
I think I must be missing something!
Sorry, under analyzer
Did you open the video in Quicktime Player? It is known to have problems playing the audio of exported videos.
No, I was still trying to play it in pupil player
We recommend using the VLC media player to playback exported videos.
The exported video is just a video. It is not a valid Pupil Player recording. You will need a media player to playback audio. Audio playback from within Pupil Player is not supported.
Oh ok I see. Ok, I'll play around with this for a bit.
Ok, thanks. I got the audio with VLC. However, I think we may need to be able to view the videos with audio in quicktime as well. Is there a format we can convert our files to that might work better with quicktime player?
You can use VLC to convert them by following these instructions: https://en.softonic.com/articles/vlc-media-player-as-video-converter
But test your videos first in quicktime. Sometimes they work out-of-the-box
Ok, thanks. Next issue (because we're on a roll today!). How do I sync 2 worldcam streams that are being recorded on the same macbook pro? I tried adding the time sync plugin, but it set both streams to clock master status and when I hit record on one stream, recording didn't start on the other stream.
Two things. First: To start the recordings simultaneously you will need to activate Pupil Groups and make sure that they are in the same group (which they should be by default). Second: One Capture instance is set to be master randomly. You can increase the bias of one instance to force it to be master.
Hi papr, both ideas worked, thank you!
Nice! No problem.
Hello, would anyone happen to know the ratio Pupil Capture records in?
@user-f1d099 what do you mean by ratio? The video frame ratio?
The video scale ratio.
I am trying to work with the exported video in post and I need the ratio to scale it properly.
Depends in which resolution you recorded the video. If I remember correctly, it is written down in the info.csv file within the recording folder.
Alternatively, you can look it up yourself by opening the video in the VLC media player and display the file's meta information.
I will take a look at the csv file first
thank you for the quick reply
Hi, I have been trying to drag the folders to the pupil capture windows. The recordings are no more than 6-8 minutes. I have been waiting for almost 5 minutes but pupil capture just exits itself. Do you know how long it usually takes for the folders to open in pupil player?
sorry pupil player windows, and not pupil capture*
@user-45d36e there seems to be a relevant issue. Maybe related? https://github.com/pupil-labs/pupil/issues/837
@user-45d36e nevermind this is not related. Sorry.
Can you send one of your recordings to info[at]pupil-labs.com We will have a look and report back.
Hello, I am just wondering if it is possible to manually adjust the max_dispersion and min_duration settings for the 3D Pupil Angle report?
@papr, I just saw your message from Monday about my low FPS. Would that be from having so many things plugged into my computer USB ports plus trying to run the pupil lab stuff or is my computer just overall not powerful enough?
I am trying to get a timestamp for each fixation within an audio track. Previously, I subtracted the first gaze position time stamp from the first fixation timestamp to get that. However, in my current file, the first fixation occurred before the first gaze point. What should I do now to get the fixation time points?
Hi all! Does anyone have a link to a bare-bones Unity example to calibrate and then just read 2D/3D gaze points? No recording etc, etc as in the example project that is available via gitHub.
@user-2798d6 It should not matter how many USB devices you have connected to your computer, but rather which other programs you are running in parallel.
@user-2798d6 Are you using the offline fixation detector to get the fixations or do you extract them from the pupil_data file?
@user-d08045 I would recommend asking in the 🥽 core-xr channel. It is dedicated to unity-related questions.
@papr Thanks, will do!
Hi! I just bought the pupil lab kit and am trying to build pupil on windows. I have encoutered issues with the GLFW library: .. "Failed to load GLFW3 shared library".. Anybody has had that issue?
Hey @user-9d900a We recommend using the bundled app on Windows. You can download it here: https://github.com/pupil-labs/pupil/releases/tag/v0.9.14
Hey papr, thanks for your prompt reply. I will most likely need to develop plugins for pupil.. so I figured starting from the source code would be the way to go... What would you suggest?
For most use-cases it is enough to use Pupil Capture's network interface. There are only a few cases where it makes sense to write a plugin. Would you mind telling me a bit more about your use-cases? I could give you hints on where to look next.
I'm interested in building a packaged app that will allow the computation of "cognitive load" metrics from pupil dilation measures...
Are you planning on doing this in an online-fashion with realtime feedback or would it be sufficient to analyse the data after a recording?
Well, that's a spec we haven't decided on yet. Real-time would be amazing... Offline would be okay for a first version, I suppose.
The offline version could be implemented by making a recording, exporting the pupil data with Pupil Player and then analysing the exported csv file with your application. In case of online analytics you can use Pupil Remote, Pupil Capture's network interface.
I would recommend reading our Development Overview ( https://docs.pupil-labs.com/#development-overview ) and the Interprocess and Network Communication chapter of our docs: https://docs.pupil-labs.com/#interprocess-and-network-communication
And in our Pupil Helpers repository you can find an example Python script on how to access pupil data via the network interface: https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py
Thanks @papr I'll look into this and may get back to you. Is Discord the best place to talk?
@user-9d900a Yes, it is. Do not hesitate to ask any questions.
@papr I've been using the offline fixation detector - Screen marker
@user-2798d6 Does that mean that you had Capture and Player running at the same time?
oh - no, I just have capture going while I'm doing recordings, then I open Player later
@user-2798d6 No, the issue is about the load during the recording. Edit: The question is whether you had anything else running during the recording. Would you mind sharing your system details (cpu, memory, harddisk/ssd) if that's not the case?
Are we talking about why the FPS was so low? I usually have capture going and then I also had iTunes running for some audio. Then I had an external mic plugged in and my computer was running the audio to bluetooth
I'm on a MacBook Air with 1.6 GHz processor, 8GB memory
Mmh, ok. Is it possible that you activated the smaller file, more CPU option in the recorder settings? This option transcodes the mjpeg stream to an h264 stream before writing it to the hard disk. This uses a lot of CPU and might be the reason for your low fps.
It's set on bigger file, less CPU
So the external mic/bluetooth/iTunes wouldn't be affecting it?
I cannot say it for sure. But I would recommend testing it without running the audio stuff.
Will do! One other question - I've been having some issues with the screen marker and the manual marker calibration lately. The eye cameras look like everything is set up well - I'm getting bright red pupil circles in different eye directions, but the minute I click the calibration button, it doesn't respond. Or the manual marker doesn't register.
And I also had the question about the first gaze timestamp when it comes after the first fixation timestamp. I'd like to be able to like up fixation times with my audio recording.
What do you mean by "it does not respond"? Does the white screen with the calibration marker appear? If not, does the whole app freeze? If it does appear, does the marker only show a red dot instead of a green one?
sorry-yes, the bullseye appears, but just shows a red dot. No blinking green at all even though the eyes look like they are being well detected
Make sure that the monitor is in the center of the scene video. If this is already the case, try moving the headset closer to the monitor. Also try to increase the monitor brightness to improve the marker's contrast.
I will try that!
Hi, how do you load videos recorded with Pupil Mobile on an Android device into Pupil Player?
Hello, I'm trying to build from source on a Mac. I keep hitting the assertion pyglui_version >= '1.9' in the world.py file. Looking through my packages, it indicates my version is installed as 1.10. Any help?
$ python3 main.py
MainProcess - [INFO] os_utils: Disabled idle sleep.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "/Users/me/Code/pupil/pupil_src/launchables/world.py", line 98, in world
    assert pyglui_version >= '1.9', 'pyglui out of date, please upgrade to newest version'
AssertionError: pyglui out of date, please upgrade to newest version
MainProcess - [INFO] os_utils: Re-enabled idle sleep.
Hi folks. My robotics lab is using the eye tracker in conjunction with controlling a robot through ROS. Does anyone have experience with ROS integration in general? I've found a few packages that listen on the ZMQ socket and publish the data as ROS topics, which is useful, but the particular issue I'm currently concerned about is time synchronization with ROS. It looks like pupil uses the linux monotonic clock, whereas ROS uses the wall clock by default. Does anyone have advice/ideas about best practices for rectifying the two? Thanks
@user-6273aa This will be fixed as soon as we merge an outstanding pull request. Sorry for the inconvenience. You could use this commit as a temporary solution until we have merged the pull request: https://github.com/pupil-labs/pyglui/commit/8c2bcce41b82403ba59720012ff04f3934b46d6f
@user-3a93aa you can set the pupil clock to whatever timebase you want via pupil remote: https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/pupil_remote_control.py
or timesync: https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_sync/pupil_time_sync_master.py
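For the ROS case, a minimal sketch of the Pupil Remote approach from pupil_remote_control.py: send "T <new time>" to rebase Pupil's clock onto the wall clock that ROS uses by default (network round-trip latency adds a small offset; for tighter sync use the time sync protocol linked above, and in a ROS node you would typically pass rospy.get_time() instead of time.time()):

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')  # Pupil Remote's default port

# Set Pupil's time base to the current wall-clock time.
remote.send_string('T {}'.format(time.time()))
print(remote.recv_string())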
Quick question: we're getting a unicodeDecodeError on a windows installation when we start the calibration. Suspect it might be an issue with windows itself, just wondering if anyone has any quick/dirty suggestions to try to get it working. Thanks!
Hello again, I'm having trouble getting Intel RealSense working as the world camera. The pyrealsense installation is failing. I've tried the instructions in the pupil docs using pip3 install git+https://github.com/toinsson/pyrealsense and running pip3 install pyrealsense. It seems like it's not finding my rs.h. Running on OS X, and it seems like support for RealSense is limited. Anyone have experience getting this working? Thanks!
@user-6273aa did you install librealsense? https://github.com/pupil-labs/librealsense
Thanks again! I got both installed correctly now. Looks like I missed a step from the librealsense installation compiling from source. It would be helpful to point the librealsense section in the developer docs to the pupil labs fork installation instructions. The docs are currently linked to the librealsense master which are not helpful at all. How do I launch the depth camera as the world camera now? I'm running python3 main.py from pupil_src. The pupil camera is showing by default and working, but I don't see a place to switch to the depth camera anywhere in the settings.
Yeah. We are working on improving the docs. The capture settings have a switch button called Preview Depth
Is that under the General Settings gear on the right side? I don't see the switch button.
No, it should show under the camera icon
Another question - we're getting significant amounts of the Turbojpeg jpeg2yuv errors. Is this typical? The only other reference I can find here was someone with a bad camera. The eye cameras occasionally don't load and occasionally have significant artifacts.
Hmm, I only see the options for the Pupil Cam1 ID0 controls (sensor settings and image post processing). Should the realsense camera show up in the UVC manager and do I need to change something there to see it? Thanks for the help.
You need to select the Realsense manager and afterwards the camera. Did it automatically select the eye video for the world window?
It automatically selected the eye video for the world window
Looking at the console I see an error: world - [ERROR] video_capture.uvc_backend: The selected camera is already in use or blocked.
I had mistakenly assumed that you had already selected the realsense camera.
It looks like you still have the UVC manager selected. Can you switch to the RealSense manager? Above the camera icon, there should be a list-like icon. Click on that. The top selection field should list the RealSense manager
I only see Video File Source, Local USB, Test Image, Pupil Mobile
ok, then pyrealsense is not recognized correctly yet
Try to open a terminal, type python3 and afterwards import pyrealsense. This should hopefully give you a meaningful error message.
$ python3
Python 3.6.3 (default, Oct 4 2017, 06:09:15)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense
>>>
No error
$ pip3 install pyrealsense
Requirement already satisfied: pyrealsense in /usr/local/lib/python3.6/site-packages
Requirement already satisfied: numpy in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: cython in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from pyrealsense)
Please try to use pip3 install -U git+https://github.com/pupil-labs/pyrealsense, then restart Pupil Capture and check if the manager is still missing
It's working! Thanks papr 😃
Got the depth map
Appreciate the help
We're getting significant amounts of the Turbojpeg jpeg2yuv errors. Is this typical? The only other reference I can find here was someone with a bad camera. The eye cameras occasionally don't load and occasionally have significant artifacts.
@user-6273aa No problem. @user-24362d This issue is not critical. I cannot remember what caused them right now. Did you have a look at the github issues?
I did not, I will check there.
There was mention here of it being related to bandwidth
Yes, but you are probably using a usb-c clip, am I right?
yes
so it's not an actual problem or losing any data, and we can use the small plugin listed on github to make it go away?
Mmh. I am not entirely sure. @mpk might be able to answer this one.
in the actual recordings, the frames at the same time as those errors are appearing flat grey, no image data
or partial image data is greyed out
actually, no whole frames are missing, just partial ones
Hi, please help me with issue #880
For some reason my def init_ui is not being called
I am trying to update my custom calibration plugin
@user-41f1bf did you update pyglui? The icon bar is missing. The screen shot does not look like the new version...
Hi every one, is it technically possible to run two instances of pupil labs on one computer?
Hello,
Is it possible to push gaze_point_3d data over LSL?
@user-006924 yes, this is possible, but if you have only one USB controller you are likely to saturate it. You may have to reduce resolutions in order to be able to run both instances on a less powerful machine.
@user-ecbbea this would require an update to our plugin in the LSL repo: https://github.com/sccn/labstreaminglayer/blob/master/Apps/PupilLabs/README.md
Thanks @wrp, I figured it out. Can I ask why both pupil and gaze data are pushed using the same function when the available fields are different? It makes it hard to push things that are available in one topic but not the other
@user-ecbbea and @wrp, I'm one of the more active developers on LSL. I was planning on updating Pupil-LSL plugin in the near future. I haven't looked at the existing plugin in any detail yet, but please let me know how you would like it to work. e.g., I notice that in this channel it is often recommended to write a client app instead of writing a plugin. Is that applicable here too? Is a LSL_Pupil app preferred to the plugin?
@user-5d12b0 Welcome to our community channel! I am the developer of the current Pupil LSL relay plugin. Thank you for your interest in updating the current Pupil LSL integration.
@cboulay Yes, for most cases it is recommended to use our network api instead of writing a plugin. LSL might be one of the exceptions though. I think we would both benefit from a small chat to update each other on our systems' requirements and constraints. Please send an email to pp@pupil-labs.com if you are interested in that as well.
Hey guys, is it possible to change camera settings such as brightness, contrast, etc. over the network, via messages? Setting the video recording path via the network would be nice 😃
[email removed] Changing camera settings over the network is not supported for "Local USB" sources yet. But you can set the session name via the recording.should_start notification or by sending R <rec path> via Pupil Remote.
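For reference, a minimal sketch of the Pupil Remote variant (the session name 'my_session' is just an example; Pupil Remote listens on port 50020 by default):

import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')

# Start a recording whose session name / path is 'my_session'.
remote.send_string('R my_session')
print(remote.recv_string())

# ... later, stop the recording.
remote.send_string('r')
print(remote.recv_string())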
hi, what's the difference between eye image and world image?
I'm referring to the definitions of norm_pos_x and norm_pos_y
in pupil positions it's eye image while in gaze positions it's world image
Hello guys! I'm new to this channel so I'm not familiar with the protocol for asking quick questions. Here goes: 1. The right eye camera of tracker is flipped. I can 'flip' the image by clicking in the viewer but does this mean that the calibration will also use the 'flipped' image? 2. What are your experiences of recording scene+eyes+gaze along with other applications? Have you experienced bottlenecks? I intend of recording data from a 3D sensor in parallel.
Hey @user-c828f5 Welcome to the channel!
The flipping is just a question of visualization. The calibration will use the original image.
The answer to your second question depends on how much computation power your computer has. May I ask what type of 3D sensor we are talking about? Usually, this is a question of trying it out.
@papr Thanks so much for your quick response! It's great to hear that the calibration is not affected by the viz. I intend on using a ZED stereo camera, i.e, writing out 2 X 1080p images at 30Hz. Along with that, I intend on recording data from an IMU in parallel. I was curious about what strategies does pupil take when the system is overloaded. Does pupil drop frames? Stops execution?
@user-c828f5 Pupil Capture will drop frames if either the computation or writing the frames to disk takes too long. Did you hear about Pupil Mobile? You can connect the headset to an Android phone with USB-C and record locally on the phone. Calibration and detection can be done after the recording in Pupil Player.
Ah! I'm sorry, I did not research enough before asking that over here. I could try the following and update you/the group with this info: 1. Record Stereo images + IMU + Pupil data on a local machine. 2. In the event that fails, connect Pupil to an Android phone.
From my experience in the past, recording sensor data from 2 different systems causes timing offsets. Fingers crossed that option 1 works for me.
No worries! You are welcome to ask any question here. Our documentation is not perfect yet. We are happy to point you in the right direction. Concerning the potential time offset: Pupil Capture and Pupil Mobile use a time sync protocol to synchronize their clocks. It is open source and we have example scripts which you can use to synchronize time with 3rd-party systems as well. I think you would need to use time sync in both of your cases.
Hello, is there a channel for doing eye-pupil tracking and gaze detection on a raspberry pi? I have opencv built on there and a small IR camera to do the video capture. https://www.raspberrypi.org/products/pi-noir-camera-v2/
@papr Thanks a lot!
Hello, I modified the script 'pupil_remote\recv_world_video_frames.py' to get frames coming from the camera,
like this:
topic, msg = recv_from_sub()
if topic == 'frame.world':
    img = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
    img2 = np.frombuffer(msg['__raw_data__'][1], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
    cv2.imshow('test', img)
    cv2.imshow('test2', img2)
    cv2.waitKey(1)
But it doesn't work. How can I get frames from all cameras in the pupil headset?
Can somebody help me with using recv_world_video_frames?
@user-6ac66a Did you start the frame publisher plugin in Pupil Capture?
frame publisher?
Um... I just run Pupil Capture app and recv_world_video_frames.py
It works when I only want to take world video
but.. um... I don't know how to get eye images...
You need to change the topic to which you subscribe and filter. I will look up the correct topic in a bit. I am currently on mobile and will notify you as soon as I have more information for you.
Thank you
@user-6ac66a Hey, so receiving frames works in the following way: Frames are a data type as everything in Pupil, e.g. pupil and gaze positions. Each data type has its own "topic". To receive a specific data type you need to subscribe to its topic. The example script does that in line 40:
sub.setsockopt_string(zmq.SUBSCRIBE, 'frame.')
See https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/recv_world_video_frames.py#L40
With this line you will subscribe to all frames, world frames as well as eye frames. In line 64 you filter all incoming frames manually by calling if topic == 'frame.world': — this gives you all world frames. If you want to filter by a specific eye id you will need to change it to if topic == 'frame.eye.0': — remove the if-statement to have access to all images.
Additionally, you will have to revert your changes. Specifically, img2 = np.frombuffer(msg['__raw_data__'][1], dtype=np.uint8).reshape(msg['height'], msg['width'], 3) will not work.
Umm... like... change the last number 3 to 1?
The frames come message by message. msg['__raw_data__'] only has data for one frame. Therefore calling msg['__raw_data__'][1] will fail. You will have to remove all your code that is related to the img2 variable.
Hmm.. then is there any route to get both eye images immediately?
No. The eye cameras are captured independently of each other by Pupil Capture. You can correlate the images using msg['timestamp'].
May I ask what your use case is? Maybe I can suggest a possible solution.
case of... um...? You mean error code?
Use case in the sense of what your application is. Why do you need two eye images at the same exact time?
um, What I want to do was just... 'get raw world camera & 2 eye camera in each frame' and... with that data, I wanna test my own algorithm on gaze estimation.
Ah, I understand. Well, this is not quite possible since we cannot force all cameras to give us a picture at the very same time. Instead we grab pictures as they come and tag them with their original timestamp. This way you are able to handle different frame rates as well.
In your case I would introduce three variables: recent_world, recent_eye0, recent_eye1. Each holds the most recent frame data for its specific topic. And then you do the calculation depending on which data comes in: if it is an eye0 frame, you do detection on this frame and apply your mapping function onto the recent_world frame. Do you understand what I mean? Instead of assuming that you get the data at the same time, you can cache incoming data and check afterwards if anything updated.
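Roughly, that caching pattern looks like the sketch below (recv_from_sub and the socket setup are assumed to come from the recv_world_video_frames.py helper; this is not the gist linked further down):

import numpy as np

recent_world = recent_eye0 = recent_eye1 = None

while True:
    topic, msg = recv_from_sub()  # helper defined in recv_world_video_frames.py
    if not topic.startswith('frame.'):
        continue
    img = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(
        msg['height'], msg['width'], 3)

    # Cache the most recent frame per topic.
    if topic == 'frame.world':
        recent_world = img
    elif topic == 'frame.eye.0':
        recent_eye0 = img
    elif topic == 'frame.eye.1':
        recent_eye1 = img

    # Whenever an eye frame updates, run detection on it and map the result
    # onto the most recent cached world frame.
    if recent_world is not None and recent_eye0 is not None:
        pass  # your own gaze estimation goes here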
hm... could you tell me which parts of the script (recv_world_video_frames) I should change? sorry for my bad english.
@user-6ac66a This would be the recommended changes: https://gist.github.com/papr/5840cdee64a4d20d5139b2dfce77dcd2/revisions
Thanks. Running with that script,
Traceback (most recent call last):
  File "recv_world_video_frames.py", line 72, in <module>
    recent_eye0 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
ValueError: cannot reshape array of size 921600 into shape (640,640,3)
the error was like this. I think msg['height'] should be 480, so I rewrote that line into
recent_eye0 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(480, 640, 3)
and I printed out the results, but it seems like the wrong image, I think.
Oh yes, that is a bug in an older Pupil Capture version. This has been fixed already. Just update to the newest version and it should work correctly.
Can you post a screenshot of the image that you think is wrong? Maybe I can tell if this is expected or not.
sure, wait a second, please.
oops, um... sorry, I seems correct img, it was my mistake...
No worries! I am happy that it seems to work 👍
The light of the lab went out and the image looked like 'something wrong'
ALL DARK...
I really appreciate it. Thank you!
No problem 😃 Happy to help!
Hi everyone, any idea why Pupil Capture does not recognise the USBs for a particular user in Ubuntu? It works well for other users...
Hey @user-cf2773 this is due to a user permissions issue. You will need to add the user to a specific group called plugdev.
That's the info I was looking for, thanks a lot @papr !
No problem! 😃
sorry my message might have gotten lost in the mix. Is there by any chance I could run Pupil's software for eye gaze detection on the Raspberry Pi 3 (Python) using the rpi camera? https://www.raspberrypi.org/products/pi-noir-camera-v2/
@user-768a2c I think you would be limited by CPU power of the RPI. Even if you do successfully set up all dependencies, I fear that you would have very low frame rate on the RPI.
@user-768a2c what are you trying to achieve exactly?
@wrp How do you get the value of the gaze_distance in the Vector_Gaze_Mapper (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_routines/gaze_mappers.py)? Is it just set by you or calculated by some method?
what are the exact requirements for the opengl support?
Hi everyone, does anybody get the error in the picture? Pupil Player (the latest version on Windows 10) was working fine one minute and then suddenly closed and never opened again.
Hey @user-006924 See the linked issue above. It is exactly about that problem.
@papr Thanks ^^
Hello all, how do I install more plugins to the pupil-player app?
I want to extract blink events (offline). I'm working on my event-detection algorithm which does not handle blinks.
Hey @user-c828f5 We currently do not offer offline blink detection. It is on our todo list though. :)
Please feel free to implement it and to make a pull request. We are happy about each contribution.
@papr Ah! Yes, I plan on doing that at some point. There are tons of event-detection algorithms which are fairly easy to implement and could be added to your SDK.
I also noticed that while exporting, it links each gaze point to the closest video sample. How do I correlate it with the correct eye image?
The pupil positions have their own timestamps. You can use them to infer the blink event timestamps. And afterwards you correlate them using the same function as for gaze.
I'm sorry, I don't understa
understand*
timestamp - timestamp of the source image frame.
This is the time (in MS since the app first started up) for each gaze sample.
I also have MP4 files for each eyes at 120 fps. How do I correlate each gaze sample to it's associated eye image?
Each frame gets its own timestamp, exactly. Pupil timestamps are their eye frame's timestamp. To correlate these timestamps to the scene frame timestamps, you will need to find the closest match. There is a method called correlate_data in player_methods.py which does that. Have a look at it to understand how gaze is correlated to the scene camera timestamps. Afterwards you can apply the same method to your blink events.
BTW, each video has also a timestamps file. It is a numpy file. Therefore you can easily extract timestamps on your own if necessary.
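A minimal sketch of that nearest-timestamp correlation, assuming you load the timestamp arrays from the recording folder (e.g. world_timestamps.npy and eye0_timestamps.npy); this is the same idea as correlate_data, not Pupil's exact implementation:

import numpy as np

world_ts = np.load('world_timestamps.npy')
eye0_ts = np.load('eye0_timestamps.npy')

def closest_index(target_ts, reference_ts):
    # For each target timestamp, the index of the closest reference timestamp.
    idx = np.searchsorted(reference_ts, target_ts)
    idx = np.clip(idx, 1, len(reference_ts) - 1)
    left, right = reference_ts[idx - 1], reference_ts[idx]
    return idx - (target_ts - left < right - target_ts)

# e.g. which world frame each eye frame (and hence each pupil/blink datum) belongs to:
eye_to_world = closest_index(eye0_ts, world_ts)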
Got it.
Last question! Is there way in which I can write out the absolute time stamp? In hours, minutes, seconds, ms etc? I need to correlate all of this data with other sources.
This is difficult to do since the absolute start time is not saved in a precise way. It is written in the info.csv file. You can take the difference between this time and the first timestamp to calculate a fixed offset which you can use to compute your dates.
The correct way to do this would be to synchronize the clocks between capture and your other sources before making the recording.
I need to leave now, but I will answer any questions when I am back. :) so feel free to ask for details.
@papr The problem with that approach is that the starting timestamp is written out in hours, minutes, and seconds; it doesn't have a millisecond component. At worst, I'm looking at an error of 1000 ms, which is a lot. I could go in and modify the plugin... I'll update you with results.
As far as data writeout goes: I can record [email removed] along with Scene + Eyes + Gaze data on the same computer without any frame dropping.
Exactly. That is what I meant with not saved in a precise way. You will need to synchronize clocks before recording data.
Hmm, got it. I also have an off-the-shelf calibration routine, but I'll try and learn how to synchronize clocks. The good thing is that all sources are being recorded on the same system. Also, I noticed that you do have an online blink detector based on confidence values. I could perhaps try tapping into that and write out blink ON/OFF events.
@user-c828f5 The blink events are actually recorded in the pupil_data file. You can access them by deserializing the file yourself using msgpack or via a Player plugin and accessing g_pool.pupil_data['blinks'].
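A minimal sketch of the do-it-yourself route, run from inside the recording folder (the encoding argument matches older msgpack versions; newer ones use raw=False instead, and the exact keys of each blink datum depend on your Pupil version):

import msgpack

# The recording's pupil_data file is one msgpack-serialized dictionary.
with open('pupil_data', 'rb') as f:
    pupil_data = msgpack.unpack(f, encoding='utf-8')

blinks = pupil_data.get('blinks', [])
print(len(blinks), 'blink events')
for blink in blinks[:5]:
    print(blink)  # inspect the available fields, e.g. timestamps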
Hello! I have a question about calibration - I am using the glasses on conductors while they look at a musical score that is on a music stand in front of them. I started by having them calibrate on the computer screen, but the area of the musical score and the distance of the score away from the person is a little different than the computer screen, so I felt like I wasn't getting a good calibration. Now, I'm using the manual marker calibration. The distance is a little less than 1.5 meters - will that be ok? Exactly how accurate can I expect the calibration to be when I'm looking at the screen in Capture? Should it be that what it looks like they're looking at is exactly what they're looking at, down to 1 degree of error?
Hello! I followed the instructions for installing windows dependencies. When I run main.py I got this error:
C:\work\pupil\pupil_src>python main.py
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
cl : Command line warning D9025 : overriding '/W3' with '/w'
cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.11.25503\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "C:\work\pupil\pupil_src\launchables\world.py", line 113, in world
import pupil_detectors
File "C:\work\pupil\pupil_src\shared_modules\pupil_detectors\__init__.py", line 16, in <module>
build_cpp_extension()
File "C:\work\pupil\pupil_src\shared_modules\pupil_detectors\build.py", line 25, in build_cpp_extension
ret = sp.check_output(build_cmd).decode(sys.stdout.encoding)
File "C:\Python36\lib\subprocess.py", line 336, in check_output
**kwargs).stdout
File "C:\Python36\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['C:\\Python36\\python.exe', 'setup.py', 'install', '--install-lib=C:\\work\\pupil\\pupil_src\\shared_modules']' returned non-zero exit status 1.
I have tried reinstalling multiple times but ended up the same error message. Any help would be appreciated!
Hey all, how's it going. Has anyone ever replaced the eye camera with an off the shelf webcam/camera? Whilst it doesn't look like a huge amount of faff to buy the suggested webcam and alter the filters etc I was wondering if anyone has found a decent off-the-shelf IR camera
@user-8250bf there is not really. This is why we have this: https://pupil-labs.com/cart/?0_product=e120upgrade&0_qty=1
Ah that's cool, great idea! Thanks.
Reposting in case this got lost in the feed: Hello! I have a question about calibration - I am using the glasses on conductors while they look at a musical score that is on a music stand in front of them. I started by having them calibrate on the computer screen, but the area of the musical score and the distance of the score away from the person is a little different than the computer screen, so I felt like I wasn't getting a good calibration. Now, I'm using the manual marker calibration. The distance is a little less than 1.5 meters - will that be ok? Exactly how accurate can I expect the calibration to be when I'm looking at the screen in Capture? Should it be that what it looks like they're looking at is exactly what they're looking at, down to 1 degree of error?
@user-2798d6 You can use the accuracy visualizer plugin to calculate the error of your calibration in degrees.
Hi, I am just wondering how to improve the accuracy of an offline calibration. I have refined the range down to the right section of the video and it seems to be detecting the marker, but sometimes the fixation appears far away from the marker and I know the person was looking at it. thanks!
@papr: Thanks for the accuracy visualizer suggestion! SO my question is what should that look like if the calibration is good and what does it look like if it's bad?
So if it looks like this...what does that mean?
Hi Guys,
I'm working on a research project that wants to make use of the pupil labs eye tracker on an Android platform
does anyone know how and if there is an sdk available to interface the eye tracker with Android?
@user-ef948c We have an Android app called Pupil Mobile. You can connect the headset to USB-C phones and use the app to make recordings and to stream the video to your computer running Pupil Capture. https://play.google.com/store/apps/details?id=com.pupillabs.pupilmobile
@papr There's no way to do the tracking and calibration on the device itself?
@user-2798d6 the orange lines are the prediction errors. You want them as short as possible. Your case looks good but you should spread the markers more during the calibration.
@user-ef948c No. If you need live interaction, you can stream the video to Pupil Capture which does the detection/mapping and broadcasts the result via Capture's network interface.
Dang. We had bought the pupil labs tracker hoping it would be ideal for an embedded prosthetic solution not coupled to PC hardware. We'll have to reevaluate
The cameras are connected via the UVC standard. You can implement your own app that does the detection on the phone. The algorithm is open source and has been published in a paper.
Roger. Thanks for the info. Will the Android app source be released at all? Just wondering
No, not as far as I know. But the UVC lib that we use is open source as well https://github.com/saki4510t/UVCCamera
Hi all, has anyone succeeded in aggregating data from multiple participants in order to create a heatmap that highlights the most common gaze locations?
If so, which programming language have you used to achieve that?
@user-0f3eb5 A common way would be to use the Python libraries numpy and matplotlib. Be aware that a gaze position in the field of view by itself is meaningless if you do not know what the subject was looking at.
You can solve this by using our surface tracker plugin and aggregating gaze positions within defined surfaces.
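As a rough illustration of the numpy/matplotlib route, assuming surface exports with x_norm/y_norm columns (the file names below are placeholders; exact names and columns depend on your Pupil Player version, and the same aggregation can be done in R on the exported csv files):

import csv

import matplotlib.pyplot as plt
import numpy as np

xs, ys = [], []
# Concatenate surface-mapped gaze from several participants' exports.
for path in ['p1/gaze_positions_on_surface_Screen.csv',
             'p2/gaze_positions_on_surface_Screen.csv']:
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            xs.append(float(row['x_norm']))
            ys.append(float(row['y_norm']))

# 2D histogram of normalized surface coordinates as a simple heatmap.
heatmap, _, _ = np.histogram2d(ys, xs, bins=50, range=[[0, 1], [0, 1]])
plt.imshow(heatmap, origin='lower', extent=[0, 1, 0, 1], cmap='hot')
plt.colorbar(label='gaze samples')
plt.show()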
@papr Thank you. I haven't learned Python (yet) but will dive into it. Do you know if anyone has used R to do it?
You can export the surface gaze into csv files and process them with R. It is not necessary to do this with Python.
Super cool. I haven't purchased the gear yet because I want to make sure I am able to aggregate the data, preferably with R (glad to hear it doesn't necessarily have to be Python). Do you have / know if there is example data (csv exports) from different sessions of the same surface area that I can play around with to see if I can make it happen?
We have an example data set that includes multiple surfaces but only includes data from one session. This should be enough to start with though.
1. Download and install Pupil Player https://github.com/pupil-labs/pupil/releases/tag/v0.9.14
2. Download the data set https://drive.google.com/uc?export=download&id=0Byap58sXjMVfZUhWbVRPWldEZm8
3. Start Pupil Player
4. Drop the unzipped dataset onto the Player window
5. See https://docs.pupil-labs.com/#pupil-player for an introduction into Pupil Player
6. Open the Offline Surface Tracker
7. Wait for the surfaces to be detected
8. Export the surface data by hitting the e key or clicking the ⬇ thumb on the left
9. The dataset folder should have a folder called export that includes the exported csv files
Thank you, I'll try it out!
Feel free to ask further questions in case you encounter any problems with the above points. 🙂
Thanks a lot!
I have downloaded Pupil Player and the demo data. I went through the steps and exported the Offline Surface Tracker data. In there, there are also heatmap PNGs per surface; however, those PNGs are empty. What should I do to be able to export those heatmaps?
as you can see all of the png's are Zero Bytes
@user-0f3eb5 Were the csv files exported correctly? The empty heatmaps look like a bug. I will try to replicate this issue.
I have indicated here which files were empty, it includes some of the CSV files
I will also redo the whole process from the start to see if I can replicate it
You do not need to re-download everything. Just start at step 3
Ok
Started from step 3, same outcome
Maybe I should mention that when I open the Offline Surface Tracker, I get this message:
"No gaze on any surface for this section"
Ok, this explains a possible reason for the bug. No gaze -> no data for heatmaps -> no heatmaps
Uninstalled and re-installed pupil player just in case, unfortunately same result
Did you activate the "Offline Calibration" by any chance?
I don't remember doing that, but possibly I did when I was exploring...
Please click the "Restart with default setttings" button in the General Settings menu and try to open the Offline Surface Tracker only
Oh, my bad. There is no such button in this version. Please select Gaze From Recording in the Data Source field
that worked!!!
🙏
So the heatmaps are exported correctly?
I think so, here's an example of how they are exported now, is this correct?
Nice! The heatmaps in this version are rotated due to a bug which is fixed in the upcoming version
awesome, thanks! I can start merging heatmaps in R now 😃
I just noticed these are still empty in the export
Could you upload the files to https://gist.github.com/ ?
not sure if this is what you wanted, but I've put the contents of the first file in here: https://gist.github.com/anonymous/a5f8612b265596300f00b1060c1ecc6e
Oh, my bad. Fixations are only mapped onto surfaces if they are available. Start the Offline Fixation Detector to detect fixations. Afterwards repeat the export and these files should be filled.
"Gaze Position 2D Fixation Detector" worked 😃
Thanks @papr! One other question about the Accuracy Test - It gives numbers for angular accuracy and angular precision - what should those numbers be?
@user-2798d6 Do you ask for the meaning of these numbers or which numbers to expect?
You should be mostly interested in Angular Accuracy. Everything below 1 degree is very good. 2-3 degrees is ok, but not great. Above 3 degrees you should redo the calibration.
Ok, good to know! Thank you!
@papr heatmaps with R work fine, thanks for your help! 😃