@user-057596 Do you need real-time or post hoc access to the data?
Post hoc: If you log into your Pupil Cloud account, you will be able to see all recordings that have been uploaded from your Pupil Invisible Companion device. You can download them from there just with right-click + Download.
Real-Time: Similarly to Pupil Capture, the Pupil Invisible Companion app also publishes all generated data to the local network. See the documentation here: https://docs.pupil-labs.com/developer/invisible/#recording-format
Good afternoon, I am very interested in buying the new Pupil Labs device, but I would like more information on the camera used, if possible.
Hi @user-3f0708 ! What information would you like to get? The technical specifications are available here: https://pupil-labs.com/products/invisible/tech-specs
Hi Marc, thanks for that reply, I can pass the information on to our developers. Regarding the Pupil Cloud account: is that the one which was set up using the Google email account when initially setting up the Companion device? Also, our developer asks: do you have a C/C++ binding for NDSI?
Hi, I used the exported gaze_positions.csv file and manually rendered a new video with a gaze crosshair. What I noticed was that 1) the manually rendered video's and the Invisible exported video's crosshairs are not identical (sometimes very different); 2) at the end of both videos, the manually rendered video's crosshair disappeared while the Invisible video still had a gaze crosshair. Is that a bug, or is there a smoothing function for rendering the Invisible exported video? Is there any way to get the x and y gaze values of the Invisible exported video?
@user-fd5a69 The x and y values of the gaze in the exported video should correspond to the data in `gaze_positions.csv`. The framerate at which gaze samples are generated (about 55 Hz) and the framerate of the scene camera (30 Hz) are not the same. Instead, both data streams are timestamped, and when correlating them, samples should be matched by timestamp. Did you do that for your manual version? If there are multiple gaze points per frame of scene video, Pupil Player will render multiple crosshairs in the exported video (this might be hard to see though).
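A minimal sketch of this timestamp-based correlation, assuming each stream is just a sorted list of timestamps in seconds (the column names and exact layout of the real CSV files are not reproduced here):

```python
import bisect

def match_gaze_to_frames(frame_ts, gaze_ts):
    """For each scene-frame timestamp, collect the indices of all gaze
    samples that are closer in time to that frame than to any
    neighbouring frame. Both input lists must be sorted ascending."""
    matches = [[] for _ in frame_ts]
    for gi, t in enumerate(gaze_ts):
        # Index of the first frame timestamp >= t
        i = bisect.bisect_left(frame_ts, t)
        if i == 0:
            fi = 0
        elif i == len(frame_ts):
            fi = len(frame_ts) - 1
        else:
            # Pick whichever neighbouring frame is closer in time
            fi = i if frame_ts[i] - t < t - frame_ts[i - 1] else i - 1
        matches[fi].append(gi)
    return matches

# Example: ~30 Hz scene frames, denser gaze samples
frames = [0.0, 0.033, 0.066]
gaze = [0.0, 0.018, 0.036, 0.054, 0.072]
print(match_gaze_to_frames(frames, gaze))  # → [[0], [1, 2], [3, 4]]
```

With a mapping like this, every scene frame ends up with zero or more gaze samples, matching the behaviour described above where Pupil Player may render several crosshairs on one frame.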
@user-057596 Yes, the Pupil Cloud account would be the same one you signed in with on the Companion device. Authenticating with Google is one of the supported options, so that sounds correct. We currently do not have a C/C++ binding for NDSI.
@user-057596 The NDSI protocol definition is public though: https://github.com/pupil-labs/pyndsi/blob/master/ndsi-commspec.md Please let your developer know that he/she can contact us in software-dev regarding protocol specific questions.
Hi @marc, what I did was use the world_index column in the gaze_positions.csv file to correlate the gaze x/y values with the scene camera frame number. If multiple rows correspond to one frame number, I ignored the x=0 and y=1 values and took the average of the rest. I just noticed that if I divide the number of frames in the world_index column by the duration of the video, the frame rate is around 27.5 frames per second, not 30 fps.
@user-fd5a69 The framerate can drop below 30 Hz if the camera is recording in darker environments. In that case, the auto-exposure algorithm of the camera sets the exposure time to a large value in order to guarantee a bright enough image. At those large exposure times, a 30 Hz framerate can no longer be reached. The way you describe your correlation process sounds like it should work. Would you be able to share your recording with us at [email removed] so we can take a detailed look at what is going on?
Thanks @marc and @papr. Our analysis software streams data from Pupil Capture. Is it possible to stream the data from the Pupil Invisible Companion device to the Pupil Capture software rather than to the Pupil Monitor desktop app? This would allow us to use our software right away with the Invisible glasses, rather than having to spend further time and significant cost adapting our software to connect with them.
@marc I did another test and I want to share the result with you. This time, instead of using the world_index, I used the timestamp column to correlate the two data streams and rendered a new video with a gaze crosshair. The rendered crosshair looks the same as the crosshair in the exported video. I think this result, together with the crosshair mismatch I mentioned earlier, might suggest that the world_index column does not correctly indicate the closest associated world frame number.
@user-fd5a69 Hi, I am glad to hear that using the timestamps does work for you.
> manually rendered a new video with gaze crosshair
Could you share a bit more information on how you have done this? Have you extracted single frames from the video using `ffmpeg`, rendered the crosshair on each image, and stitched them back together? Or have you followed a different approach?
@user-057596 You can use this Capture plugin to receive realtime gaze data from your Companion device: https://github.com/pupil-labs/pi_preview/ Additionally, you can stream the scene video to Capture by selecting your Companion device in the Pupil Mobile video backend (requires Pupil Capture ≥ v1.17).
Thanks for that @papr. I seem to have a problem playing a recording downloaded from Pupil Cloud in Pupil Player. I unzipped the file and tried to load it into Pupil Player, but it says it's not a valid recording. I've photographed the contents of the folder, the information on the Pupil Player control panel, and the version of Pupil Player, which does play files recorded using Pupil Capture.
@user-057596 Player v1.11 does not support Pupil Invisible recordings. I think v1.11 was released way before there was Pupil Invisible 😄
Please use a recent version of Pupil Player to open your downloaded recordings: https://github.com/pupil-labs/pupil/releases
Thanks @papr. That's working perfectly now. Can I ask about the Pupil Invisible Gaze Preview at https://github.com/pupil-labs/pi_preview: what file do I actually download, and where do I download it to, in order for the PI Preview plugin to appear in the Plugin Manager menu? Sorry about all the questions, our developer is away at the moment and we have testing in the next week or so, which means we are trying to get everything up and going before then. Cheers, Gary
Actually, you need to download all of them and put them into a subfolder within the plugins folder
The description could use some improvement. I will look at that tomorrow.
Where exactly is the plugins folder located, and what should the subfolder be called? I can't see any plugins folder in the latest pupil_capture_windows folder, but there is a plugins folder inside pupil_capture_settings, which was created in September 2018 when the Capture software was installed; it contains one file, frame_publisher_fixed.py.
@user-057596 `~/pupil_capture_settings/plugins` is the correct location. Just create a folder called `pi_preview` in it and copy the files from the repository to it.
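The steps above can be sketched as shell commands (the path layout is taken from the messages above; the clone step is only an example of how to obtain the repository files):

```shell
# 1) Create the plugin subfolder inside the Pupil Capture settings directory
mkdir -p ~/pupil_capture_settings/plugins/pi_preview

# 2) Copy the plugin files from a clone of the repository into it, e.g.:
#    git clone https://github.com/pupil-labs/pi_preview
#    cp pi_preview/*.py ~/pupil_capture_settings/plugins/pi_preview/
```

After restarting Pupil Capture, the plugin should then be listed in the Plugin Manager.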
Hi @papr, sorry for the late reply. Yes, what I did was extract each frame, render the crosshair, and then stitch them back into a video.
@user-fd5a69 We have noticed that in some cases ffmpeg extracts more frames than the original video contains. This happens if ffmpeg encounters long durations between frames; as a result, it duplicates frames. We did not find the reason why/when ffmpeg encounters these long durations between frames. There might be an option to disable this duplication of frames. ffmpeg will also print warnings during the extraction when it duplicates frames.
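On the possible option to disable duplication: as far as I know, ffmpeg's `-vsync 0` (passthrough mode) does exactly this, emitting one image per decoded input frame instead of duplicating frames to hold a constant output rate. A sketch, assuming the scene video is called world.mp4 (verify the flag against your ffmpeg version):

```shell
mkdir -p frames
# Only run the extraction if ffmpeg and the video are actually present.
# -vsync 0 = passthrough: one output image per decoded frame, no duplication.
if command -v ffmpeg >/dev/null && [ -f world.mp4 ]; then
  ffmpeg -i world.mp4 -vsync 0 frames/frame_%05d.png
fi
```

Note that with duplication disabled, the extracted frame count should match the number of entries in the recording's world timestamps, which makes index-based correlation less error-prone.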
@papr I see, but I rendered the crosshair twice, the first time using the world index and the second time using the timestamps. I did the rendering based on the same set of extracted frames. The first attempt, using the world_index column, failed, and the second attempt, using timestamps, worked. So I think that the timestamp is not correctly mapped to the world index.
@user-fd5a69 that is a valid argument, we will look into it.
Hi @papr. Thanks for that information. I've done as instructed and installed the PI Preview plugin, which is now showing gaze locations, but there is no world-view camera stream. I've included photos of the Pupil Capture control screens to let you know what's happening. Thanks, Gary
@user-057596 The plugin only displays gaze. To receive the world video, select "Pupil Mobile" as manager (see the UVC Manager menu), select your device as host, and activate the world camera.
@papr I've now set it up to receive the world video, but the gaze tracking is lost. Is that what is meant to happen, that you can only have the world view or gaze but not both?
@user-057596 No, this is not meant to happen. Can you disable and reenable the preview plugin?
That's it fixed, @papr. Thank you so much for your help, and @marc too, it's most appreciated and has been excellent.
Is there a limit to file sizes that can be uploaded from the Companion device to Pupil Cloud? We have tracked some regional anaesthesia training which lasted 35 mins, and the resulting 5.5 GB file won't upload; I assume the file is too big. How can we transfer the recording from the Companion device? Thanks, Gary
@user-057596 There is no limit. We have uploaded 5 h recordings. It just takes a while. If you get a red '!', then there might be another issue.
Thanks mpk. There was a red exclamation symbol, which appears and then disappears, leaving the upload circle. I tried another long recording of 17 mins, which uploaded to Pupil Cloud, but for some reason the 35 min recording performed on Monday, and two new wearers, will not sync with the cloud.
Hi Pupil community! I've been using apriltags to define an AOI in a recording done with the Invisible. I am looking for the variable "Gaze History Length" that is mentioned in the Pupil Player documentation, to define the part of the test session where the participant is attending to the task involving the surface. The surface is sometimes visible in the remainder of the test session, but this is not supposed to be included in the calculation of the heatmap. This is what I am seeing:
@user-ea78b7 `gaze history length` only applies to Pupil Capture, i.e. real-time surfaces (AOIs).
If you want to change the duration/section of the recording that is used for the heatmap, you should move the trim marks in the timeline.
(Note: trim marks are the green rounded rectangles at either end of the timeline. Click and drag to adjust them. Setting trim marks will affect the section of the data that is exported by other plugins as well.)
@wrp Thank you, I did not know about the trim marks! Works like a charm.
Hello, I'm currently using the Invisible and it's working great, but I have a question about surfaces in Player. I have 16 surfaces per video, and I'm running into an issue where, after adding about 10 surfaces, the next "new surface" doesn't show up in the plugin for me to edit. It appears it's just beneath the edge of the screen, because it shows when I add the next surface. Is there a way to make it so I can scroll down further to edit the surface? Currently I am just adding an additional surface that I don't measure, so that the one just before it will show up. It's difficult to explain, so let me know if I can explain further. Attached is a photo as an example. In this case, there is an additional surface below "psych" that I can't scroll to in order to edit until I add a new surface.
Hi @user-83ceb7 which operating system are you using?
I sent an email to Niall [email removed] last week to ask about shipment of the Pupil Invisible world camera, however there is still no reply. Could someone ask him to reply to my email? @wrp @papr
@user-bea039 I will forward your request. We will be in touch via email.
@papr Thank you!
@papr I got a reply, thank you!
@user-c5fb8b macOS Mojave (10.14.6) with the newest version of Pupil Player
@user-83ceb7 Am I assuming correctly that you are using a Retina display?
Hi all, we have a problem running a recording on the Pupil Invisible; it gets stuck on 'syncing templates'.
Does anyone know how to fix it?
@papr Correct - would it be better on a regular HDMI screen?
@user-83ceb7 In that case, I think you have encountered a UI bug with Retina displays, yes. Could you try using a non-Retina display, e.g. a regular HDMI screen, and confirm that this solves the issue?
Sure thing. Will report back soon
@papr Seems to work! Thanks for the tip
@user-83ceb7 thanks! We will look into fixing the issue.
@user-7505e4 Is the 'syncing templates' issue happening after account creation?
Good morning, could I please ask you if you have a contact email for the sales team? Thank you.
@user-e86d49 info@pupil-labs.com
Thank you! @papr