Hi, colleagues! We ran into problems during recordings: suddenly the recording stops and data is missing. Unfortunately the problem keeps repeating, and we can hardly complete half of our sessions. The error report is attached. What would you advise? We are in the middle of a project.
Hi @user-35fbd7! I see you have also contacted us via [email removed]. We will answer your request via email!
I am waiting for your reply because the issue is really urgent and our recording workflow is extremely unstable.
My colleagues should be responding any minute. I can already tell you that this appears to be a hardware failure and we will facilitate a replacement asap.
Marc, I still haven't got any answer.
Hi @user-35fbd7, we have responded via email.
I have two questions: 1. Are the heat-map generation scripts for Invisible recordings available? 2. Is there a way to allocate a pixel number and a heat amount to each pixel in a surface defined for Invisible recordings?
also about this, which of the resulting csv files of an Invisible recording is used for the generation of the heat map?
and another question about this, we get heat maps in the form of a square even if our surface had other shapes. Is there a way to match the resulting heat map's dimensions to the surface?
Can I please get some guidance about my questions?
Hi @user-98789c! The code for the heatmap generation is not directly available anywhere currently, but there is nothing proprietary/special happening. I'll see if we can find a way to publish code like this!
What the algorithm does is calculate a 300x300 histogram of the gaze data and then apply a Gaussian blur on top, with a standard deviation corresponding to the given scale parameter.
The raw gaze data is used for this, which is the same data that would be contained in the gaze.csv files of a Raw Data Export.
Currently, there is no way to change the aspect ratio of the heatmap. It would not be incorrect, however, to simply resize it to the correct aspect ratio.
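For reference, here is a minimal sketch of the approach described above (not the actual Pupil Cloud code; the bin count and sigma values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surface_heatmap(gaze_x, gaze_y, bins=300, scale=25):
    """gaze_x, gaze_y: raw gaze positions in surface coordinates (0..1)."""
    # 300x300 histogram of the gaze samples on the surface
    hist, _, _ = np.histogram2d(gaze_y, gaze_x, bins=bins, range=[[0, 1], [0, 1]])
    # Gaussian blur whose standard deviation plays the role of the scale parameter
    return gaussian_filter(hist, sigma=scale)
```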
Does this answer all your questions?
thanks @marc for the explanations.
a. I'm not sure if I got an answer to question 2, can you please explain more?
b. So to make and tweak a heat map based on a recording while looking at a defined surface, I should use gaze_positions_on_surface?
c. If I have a picture I want to define as my surface, would you say it would make a better histogram if I convert it to a size of 300 by 300?
and as you said you'd kindly see if the code can be published, can that also be the case for the overlay of surface and heat map?
a) I am not 100% sure I understand the question. The gaze coordinates that fall onto the surface are translated into surface coordinates, which range from 0 to 1. Within those surface coordinates the 300x300 histogram is calculated. The heatmap is not directly connected to the pixel values of the original world camera images. The heat value assigned to each bin corresponds to the histogram value of that bin.
b) Yes, for the heatmap on a surface you would use the gaze in that file, which contains raw gaze in surface coordinates.
c) No, this does not really make a difference. If you have a non-square surface and calculate a square histogram on it, this will mean that the bins are not square and instead have the same aspect ratio as the image. You should simply resize the resulting heatmap to the target aspect ratio, which will also correct the bins to become square.
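As a rough example of that resize step (assuming OpenCV; the 600x300 target size is just an illustration for a surface that is twice as wide as it is tall):

```python
import cv2
import numpy as np

heatmap = np.random.rand(300, 300).astype(np.float32)  # placeholder for the blurred histogram
# Stretch the square heatmap to the surface's aspect ratio; this also makes the bins square
resized = cv2.resize(heatmap, (600, 300), interpolation=cv2.INTER_LINEAR)
```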
I'll see if we can get that code released quickly!
Hi Pupil Labs, I was just wondering why my recordings have this frequent problem... showing a warning sign / still uploading after 10 days. I tried downloading them and uploading them to iMotions as well, but it's indicating that a couple of files are missing. Could you help me with this?
^
Thanks again @marc for your explanations, it's clearer now. I looked through my Invisible recordings, in the gaze_positions_on_surface csv file, and there always seem to be identical values for x_norm and x_scaled, and also for y_norm and y_scaled.
If I want to make my own heat map out of this csv file, which columns should I use?
I should only use the rows whose on_surf column value is True, right?
also about this, @marc if you could let me know: the histogram is calculated using the number of gaze coordinates that fall onto the surface? Is this how the gaze duration in each bin is calculated?
Judging from this it seems like you are using the Surface Tracker of Pupil Player rather than the Marker Mapper enrichment in Pupil Cloud. They only differ slightly.
The Surface Tracker allows you to specify the size of your surface (in a unit of your choice). By default it is set to 1x1. The scaled output will use the given size to give an output in the same unit. The normalized output simply outputs coordinates from 0 to 1. If the size is not specified they are equal.
You can use either of the values to calculate your own heatmap. It will only influence the unit of the edge coordinates of your histogram bins.
Filtering for on_surf == True would make sense unless you further restrict the histogram somehow. You do not want points outside of the surface to be counted towards the histogram.
Yes, the number of gaze coordinates within a bin is what is visualized in the histogram.
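Putting those pieces together, a hedged sketch of building your own heatmap from the surface export (the CSV file name is hypothetical; x_norm, y_norm, and on_surf are the columns discussed above):

```python
import pandas as pd
import numpy as np
from scipy.ndimage import gaussian_filter

df = pd.read_csv("gaze_positions_on_surface_Surface1.csv")  # hypothetical file name
df = df[df["on_surf"] == True]  # keep only gaze that actually falls on the surface

# Histogram of gaze counts in normalized surface coordinates (0..1)
hist, _, _ = np.histogram2d(df["y_norm"], df["x_norm"], bins=300, range=[[0, 1], [0, 1]])
heatmap = gaussian_filter(hist, sigma=25)  # sigma plays the role of the scale parameter
```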
All code from Pupil Player is already open-source. The heatmap generation code is here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L615
thanks a lot @marc
We have been looking into your failed recordings. Both were affected by one-off errors that should not reoccur (at least not if you are using the newest version of the app!). We were able to immediately resolve one of them (that recording should be available for download and further processing now), but the other is more difficult. Do you need both recordings for your work?
Hi Marc, thanks for the help. It's alright for now as I made sure to have extra recordings in case this happens! May I also ask why the device sometimes frequently requires me to detach the glasses? Is it a problem with overheating? Thanks
Hello, can anyone offer tips to help make Surface Tracking more reliable? Below is a link to a recording from an Invisible inside Pupil Player. The surface markers are very visible, but only the bottom-right marker stays stable, which is making data collection very difficult. https://youtu.be/zDwCCRqAOMM
So I managed to fix this myself. I adjusted the lighting in the room so there was more backlight and the surface markers had more light on them. I also pasted the surface markers onto black cards so that they had a border. Not sure which of these had more of an effect, but it's super reliable now.
For completeness: While the white border around the markers is important, the black outer border should not have an effect on the algorithm. Besides lighting that allows for good contrast on the markers, what should also help is making them larger.
Hi @user-ffb13b. Yes, ambient illumination can be important for marker detection, particularly when the scene camera is pointing toward a bright monitor and the markers aren't presented digitally on screen. In such instances, the camera's exposure time will be reduced.
Overheating is not really possible. Is the phone vibrating and showing you a message asking you to reconnect? In that case this may be due to a connection problem within the hardware. Does this happen frequently?
Hi, yes it happened frequently during one of my sessions.
@user-046edb In that case we can either initiate a repair right away, or you can wait and see if it happens again. Either way, you would just have to contact [email removed] to get the repair started.
Hi, does anybody have experience with anonymizing world camera videos from Pupil Invisible while preserving the gaze information? We did recordings in a public space and need to blur passers-by. Blurring the world video and trying to overlay the gaze information using Pupil Player causes an error with the timestamp information from the original video.
Hi Martina! We will actually add a face blurring feature to Pupil Cloud very soon! Within the next ~4 weeks it should become available. It will allow you to blur faces on upload, such that the original videos will not be stored in the cloud (except very briefly to perform the blurring).
Did you implement your own blurring using OpenCV? OpenCV sometimes skips frames while reading a video file, and if you save the video again using OpenCV the resulting video can end up with fewer frames than the original. This can then cause an error in Pupil Player, as the number of frames and timestamps is not equal.
Hi Marc, thank you for your prompt reply! Great news that you are adding face blurring! And you guessed right: we are using OpenCV. Thanks a lot for the hint!
In order to use the surface tracker, do I absolutely need one frame in each trial containing all of the tags? I have a great surface to cover (a trampoline) with multiple tags and in my case, it is not simple to get this type of view with the world camera.
You don't need to have all markers visible in the same frame; you can add more markers by scrubbing to a different time and adding them there. The process is slightly different in Cloud vs Player, but it is possible with both.
Thank you very much, I'll have a look at it.
Easy question: how much storage do we have on the Pupil Cloud, and is there any way to check how much we are currently using?
Currently, the storage is unlimited! We might limit it in the future at some point, but if we do we will make sure to give sufficient notice!
Hi @marc . Is this ready? Thank you very much
No, not yet. I expect a release within the next 2-4 weeks!
@marc ok. thanks for the info
Hi! Does anyone know how to get a single frame with all the corresponding files and data from a folder placed in Pupil Player?
@user-b62d99 This is unfortunately not immediately possible in Pupil Player. You can only export temporal sections of a recording. The CSV files contain a column for the world frame index though, which makes it relatively easy to filter for a specific frame. You would need to extract the corresponding frame from the video file yourself though.
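For illustration, a small sketch of that workflow (the export paths and the world_index column name are assumptions; check your own export files):

```python
import cv2
import pandas as pd

FRAME = 1234  # hypothetical world frame index you are interested in

# Gaze rows belonging to that frame (the Player export has a world frame index column)
gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze_for_frame = gaze[gaze["world_index"] == FRAME]

# Pull the matching image out of the exported scene video
cap = cv2.VideoCapture("exports/000/world.mp4")
cap.set(cv2.CAP_PROP_POS_FRAMES, FRAME)
ok, image = cap.read()
if ok:
    cv2.imwrite(f"frame_{FRAME}.png", image)
cap.release()
```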
Thank you for your response! Do you have any idea how long it takes to resolve the issue?
Sorry, but no. I am an academic, not with Pupil Labs. But I can't imagine they could give you an estimate even if they wanted to, or they would have already. This is a difficult issue.
Hi! We plan to use Pupil Invisible at our lab. If an unspecified number of people use 1 pupil invisible, is it possible to share the same account on multiple Pupil Invisible Companion App?
Yes, that is possible.
Hi @marc, I want to know why the gaze data I exported from Pupil Player is different from the data from Enrichments.
This is a partial result
The calculation formula I used is: px = 1088 * norm_pos_x, py = 1080 * (1 - norm_pos_y)
Make sure to compare gaze from the same timestamps. The export from Player might cover different time ranges than Cloud.
Thanks for your answer @papr! I want to know how to align the timestamps provided by Pupil Player and Cloud. If I use the results provided by Pupil Player to calculate the pixel coordinates of the fixation point, will there be errors?
The alignment of Player and Cloud is possible but slightly complicated due to the transformations that Player applies when opening a PI recording. I suggest using only one of the two: the Cloud export or the Player export.
I want to obtain each frame and the corresponding gaze point coordinates. Can this be obtained using the Cloud?
Yes, Cloud exports the necessary information to find the gaze location for each frame in the scene video. Due to the different sampling rates between scene and eye cameras, there is no 1-to-1 mapping though. There are two alignment approaches:
A) One gaze sample per scene video frame: for every timestamp in world_timestamps.csv, find the closest timestamp in gaze.csv that has the same recording id.
B) All gaze samples for each scene video frame: for every timestamp in gaze.csv, find the closest timestamp in world_timestamps.csv with the same recording id.
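A hedged pandas sketch of approach A (the column names "timestamp [ns]" and "recording id" are assumptions about the Cloud export; check the headers of your gaze.csv and world_timestamps.csv):

```python
import pandas as pd

world = pd.read_csv("world_timestamps.csv")
gaze = pd.read_csv("gaze.csv")

# merge_asof requires both frames to be sorted on the join key
world = world.sort_values("timestamp [ns]")
gaze = gaze.sort_values("timestamp [ns]")

# One gaze sample per scene frame: nearest gaze timestamp within the same recording id
matched = pd.merge_asof(
    world,
    gaze,
    on="timestamp [ns]",
    by="recording id",
    direction="nearest",
)
```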
@user-b14f98 @user-2d66f7 Please update your Companion apps. We have released an update that should improve the IMU timing.
That's good to know! Thank you
Thanks!
Hello @papr, can you help with my questions?
Hello! I updated my Companion app and it seems that uploads are not being pushed through to the cloud. Tried basic troubleshooting & uninstalling, they just spin around at 0%. Any advice appreciated, but also may not be back in the office until Monday. Thank you!
Using the OnePlus6 companion device btw
Did some searching, and clearing the app data fixed it! Sorry for the spam, reposting just in case.
Hi, may I ask where I could find the full tech specs of the eye-tracking cameras on Invisible? I can see some on your website, but I would also like to know the manufacturer, focal length, sensor size, etc. Thanks.
Hello, I am 99% sure this question has been asked and answered already but I searched for 30 mins and could not find it - I have 112 separate pupil invisible recordings downloaded from cloud. I wish to use just the gaze_positions.csv for each file. Is there a way to do this without having to open each recording in Pupil Player and manually exporting the gaze positions? Thanks in advance
Unfortunately, Player was not designed for batch processing Pupil Cloud recordings. In this case, a custom script for extracting this data might be the easiest approach. The recording format is documented here. https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793
If you need further technical help feel free to reach out to info@pupil-labs.com
Thanks for the reply! Unfortunately I don't have much experience in Python, and it would probably be quicker for me to manually export the gaze_positions.csv from Pupil Player for each recording than to write a script.
Hi, just a quick follow up question: Are you aware that Pupil Cloud started offering CSV exports? See https://docs.pupil-labs.com/cloud/enrichments/#raw-data-exporter The only difference to the Player export would be that Player exports gaze in normalized coordinates and uses seconds as time units.
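If it helps, a minimal batch-collection sketch, assuming each downloaded Cloud export folder contains a gaze.csv (the folder layout is an assumption; adjust the glob pattern to your structure):

```python
import pandas as pd
from pathlib import Path

frames = []
for csv_path in Path("downloads").glob("**/gaze.csv"):
    df = pd.read_csv(csv_path)
    df["source"] = csv_path.parent.name  # keep track of which recording each row came from
    frames.append(df)

all_gaze = pd.concat(frames, ignore_index=True)
all_gaze.to_csv("all_gaze_combined.csv", index=False)
```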
Hi, I'm using Pupil Invisible and exporting a gaze-overlaid world video using Pupil Player.
May I ask if I can plot an edited gaze position file (csv, for example) on the original world video? For example, if I want all the gaze to shift up a little bit, and then export the shifted gaze overlaid on the world video, how would I do that?
Thanks.
Hi, unfortunately, that is only possible with a custom script. Alternatively, I can share a Pupil Player plugin that enables you to apply a fixed offset to the gaze data within the application. Would that be helpful, too?
[email removed] I would appreciate it if I could try the plugin.
May I also ask: if I want to explore the custom script option, would that be a solution within the app, or an external script using something like OpenCV?
I can share the plugin soon. Let me quickly make sure it still works with the latest Player version.
Technically, both are possible. A custom importer might be technically easier, but it requires knowledge of how the application internals work. An external script might be easier to implement without knowledge of the app.
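As a rough illustration of the external-script route (the paths, the world_index column name, and the bottom-left origin of norm_pos_y are assumptions based on the discussion above, not a confirmed Pupil Labs recipe):

```python
import cv2
import pandas as pd

OFFSET_X, OFFSET_Y = 0.0, 0.05  # example: shift gaze up by 5% of the frame height

gaze = pd.read_csv("exports/000/gaze_positions.csv")
cap = cv2.VideoCapture("exports/000/world.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter("world_shifted_gaze.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # norm_pos_* are in 0..1; the (1 - y) flip assumes the origin is at the bottom left
    for _, row in gaze[gaze["world_index"] == frame_idx].iterrows():
        x = int((row["norm_pos_x"] + OFFSET_X) * w)
        y = int((1 - (row["norm_pos_y"] + OFFSET_Y)) * h)
        cv2.circle(frame, (x, y), 20, (0, 0, 255), 3)
    out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```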
Is anyone here familiar with using the 'saccades' package within R that I could ask a few questions about?
[email removed] any luck with the plugin?
Another question: I see there are around 10-20 more timestamps than there are world video frames (frames read by OpenCV) before the Pupil Player export. But after exporting, world.mp4's frame and timestamp counts are the same. May I ask what changed during the export to make the frame count increase to match the timestamp count?
Thanks.
Download this file to your pupil_player_settings/plugins folder, start Player, open the recording, and select "Offset Corrected Gaze Data" from the gaze data menu.
Apologies, I lost track of that. I will test the plugin now.
In the past, I have found that OpenCV is not always accurate when it comes to reading all frames. I recommend using https://github.com/PyAV-Org/PyAV to read our videos.
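For example, a small PyAV sketch for decoding frames and their timestamps (the file name is illustrative):

```python
import av

container = av.open("world.mp4")
stream = container.streams.video[0]

timestamps = []
for frame in container.decode(stream):
    # frame.pts * stream.time_base is the presentation time in seconds
    timestamps.append(float(frame.pts * stream.time_base))

print(f"Decoded {len(timestamps)} frames")
```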