πŸ•Ά invisible


user-35fbd7 02 August, 2021, 09:40:47

Hi, colleagues! We have run into problems during recordings. Suddenly the recording stops and data is missing. Unfortunately the problem keeps repeating, and we can hardly complete half of our sessions. The error report is attached. What would you advise? We are in the middle of a project.

Chat image

marc 02 August, 2021, 11:52:54

Hi @user-35fbd7! I see you have also contacted us via [email removed]. We will answer your request via email!

user-35fbd7 02 August, 2021, 12:01:32

I am waiting for your reply because the issue is really urgent and our working process is extremely unstable.

marc 02 August, 2021, 12:13:35

My colleagues should be responding any minute. I can already tell you that this appears to be a hardware failure, and we will facilitate a replacement ASAP.

user-35fbd7 02 August, 2021, 13:42:42

Marc, I still haven't got any answer.

nmt 02 August, 2021, 14:26:15

Hi @user-35fbd7 πŸ‘‹, we have responded via email

user-98789c 05 August, 2021, 10:00:38

I have two questions: 1. Are the heat-map generation scripts for Invisible recordings available? 2. Is there a way to allocate a pixel number and a heat amount to each pixel in a surface defined for Invisible recordings?

user-98789c 05 August, 2021, 11:36:33

Also about this: which of the resulting CSV files of an Invisible recording is used for the generation of the heat map?

user-98789c 06 August, 2021, 08:43:34

And another question about this: we get heat maps in the form of a square even if our surface has another shape. Is there a way to match the resulting heat map's dimensions to the surface?

user-98789c 06 August, 2021, 09:58:06

Can I please get some guidance about my questions?

marc 06 August, 2021, 11:31:22

Hi @user-98789c! The code for the heatmap generation is not directly available anywhere currently, but there is nothing proprietary/special happening. I'll see if we can find a way to publish code like this!

What the algorithm does is calculate a 300x300 histogram of the gaze data, and then apply a Gaussian blur on top with a standard deviation corresponding to the given scale parameter.

The raw gaze data is used for this, which is the same data that would be contained in the gaze.csv files of a Raw Data Export.

Currently, there is no way to change the aspect ratio of the heatmap. It would not be incorrect to simply resize it to the correct aspect ratio, however.
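In Python terms, a minimal sketch of that procedure might look like this (illustrative only, not the actual Cloud code; `gaze_x`, `gaze_y`, and `scale` are assumed inputs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_heatmap(gaze_x, gaze_y, scale=10.0, bins=300):
    # gaze_x, gaze_y: normalized surface coordinates in [0, 1]
    # 300x300 histogram of the gaze samples over the surface
    hist, _, _ = np.histogram2d(gaze_y, gaze_x, bins=bins, range=[[0, 1], [0, 1]])
    # Gaussian blur with standard deviation given by the scale parameter
    return gaussian_filter(hist, sigma=scale)
```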

Does this answer all your questions?

user-98789c 06 August, 2021, 11:50:25

thanks @marc for the explanations.

a. I'm not sure if I got an answer to question 2, can you please explain more?

b. So for me to make and tweak a heat map based on a recording while looking at a defined surface, I should use gaze_positions_on_surface?

c. If I have a picture I want to define as my surface, would you say it would make a better histogram if I converted it to a size of 300 by 300?

And as you said you'd kindly see if the code can be published, could that also be the case for the overlay of surface and heat map?

marc 06 August, 2021, 12:32:32

a) I am not 100% sure I understand the question. The gaze coordinates that fall onto the surface are translated into surface coordinates, which range from 0 to 1. Within those surface coordinates the 300x300 histogram is calculated. The heatmap is not directly connected to the pixel values of the original world camera images. The heat value assigned to each bin corresponds to the histogram value of that bin.

b) Yes, for the heatmap on a surface you would use the gaze in that file, which contains raw gaze in surface coordinates.

c) No, this does not really make a difference. If you have a non-square surface and calculate a square histogram on it, the bins will not be square and will instead have the same aspect ratio as the image. You should simply resize the resulting heatmap to the target aspect ratio, which will also correct the bins to become square.
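For example, stretching the square heatmap to a 4:3 surface with OpenCV might look like this (a sketch; the target size is an arbitrary assumption):

```python
import cv2
import numpy as np

heatmap = np.random.rand(300, 300).astype(np.float32)  # stand-in for the histogram result

# Resize to the surface's real aspect ratio (e.g. 4:3); cv2.resize takes (width, height)
resized = cv2.resize(heatmap, (400, 300), interpolation=cv2.INTER_LINEAR)
```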

I'll see if we can get that code released quickly!

user-046edb 07 August, 2021, 15:31:08

Hi Pupil Labs, I was just wondering why my recordings frequently have this problem... showing a warning sign/still uploading after 10 days. I tried downloading them and uploading them to iMotions as well, but it's indicating that a couple of files are missing there too. Could you help me with this?

user-046edb 07 August, 2021, 15:32:06

^

Chat image

user-98789c 08 August, 2021, 10:26:53

Thanks again @marc for your explanations, it's clearer now. I looked through my recordings with Invisible, in the gaze_positions_on_surface csv file, and there always seem to be identical values for x_norm and x_scaled, and also for y_norm and y_scaled.

  1. If I want to make my own heat map out of this csv file, which columns should I use?

  2. I should only use the rows whose on_surf column value is True, right?

user-98789c 09 August, 2021, 08:36:40

Also about this, @marc, if you could let me know: is the histogram calculated using the number of gaze coordinates that fall onto the surface? Is this how the gaze duration in each bin is calculated?

marc 09 August, 2021, 11:42:22

Judging from this, it seems like you are using the Surface Tracker of Pupil Player rather than the Marker Mapper enrichment in Pupil Cloud. They only differ slightly.

The Surface Tracker allows you to specify the size of your surface (in a unit of your choice). By default it is set to 1x1. The scaled output will use the given size to produce output in the same unit. The normalized output simply outputs coordinates from 0 to 1. If the size is not specified, they are equal.

You can use either of the values to calculate your own heatmap. It will only influence the unit of the edge coordinates of your histogram bins.

Filtering for on_surf == True would make sense unless you further restrict the histogram somehow. You do not want points outside of the surface to be counted towards the histogram.

Yes, the number of gaze coordinates within a bin is what is visualized in the histogram.

All code from Pupil Player is already open-source. The heatmap generation code is here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L615
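Putting the pieces together, a do-it-yourself heatmap from the surface export could look roughly like this (the file name is a placeholder for your own surface export):

```python
import numpy as np
import pandas as pd

# Surface export from Pupil Player; adjust the file name to your surface
df = pd.read_csv("gaze_positions_on_surface_Surface1.csv")

# Count only gaze samples that actually fall on the surface
df = df[df["on_surf"]]

# Each bin's heat value is the number of gaze samples that fell into it
hist, _, _ = np.histogram2d(df["y_norm"], df["x_norm"], bins=300, range=[[0, 1], [0, 1]])
```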

user-98789c 11 August, 2021, 08:07:46

thanks a lot @marc

marc 09 August, 2021, 12:30:00

We have been looking into your failed recordings. Both were affected by one-off errors that should not re-occur (at least not if you are using the newest version of the app!). We were able to immediately resolve one of them (that recording should be available for download and further processing now), but the other is more difficult. Do you need both recordings for your work?

user-046edb 11 August, 2021, 18:04:08

Hi Marc, thanks for the help. It's alright for now, as I made sure to have extra recordings in case this happens! May I also ask why the device frequently requires me to detach the glasses? Is it a problem with overheating? Thanks πŸ˜†

user-ffb13b 11 August, 2021, 04:42:18

Hello, can anyone offer tips to help make Surface Tracking more reliable? Below is a link to my recording from an Invisible inside Pupil Player. The surface markers are very visible, but only the bottom-right marker stays stable, which is making data collection very difficult. https://youtu.be/zDwCCRqAOMM

user-ffb13b 11 August, 2021, 06:40:57

So I managed to fix this myself. I adjusted the lighting in the room so there was more backlight and the surface markers had more light on them. I also pasted the surface markers onto some black cards so that they had a border. Not sure which of these had more of an effect, but it's super reliable now.

Chat image

marc 11 August, 2021, 07:59:15

For completeness: While the white border around the markers is important, the black outer border should not have an effect on the algorithm. Besides lighting that allows for good contrast on the markers, what should also help is making them larger.

nmt 11 August, 2021, 07:39:56

Hi @user-ffb13b. Yes, ambient illumination can be important for marker detection, particularly when the scene camera is pointing toward a bright monitor and the markers aren't presented digitally on screen. In such instances, the camera's exposure time will decrease.

marc 11 August, 2021, 18:20:45

Overheating is not really possible. Is the phone vibrating and showing you a message asking you to reconnect? In that case this may be due to a connection problem within the hardware. Does this happen frequently?

user-046edb 11 August, 2021, 18:21:29

Hi, yes it happened frequently during one of my sessions.

marc 11 August, 2021, 18:26:01

@user-046edb In that case we can either initiate a repair right away, or you can wait and see if this happens again. Either way, you would just have to contact [email removed] to get the repair started.

user-e91538 16 August, 2021, 10:57:08

Hi, does anybody have experience with anonymization of world camera videos from Pupil Invisible while preserving the gaze information? We did recordings in public spaces and need to blur passers-by. Blurring the world video and then trying to overlay the gaze information using Pupil Player causes an error with the timestamp information from the original video.

marc 16 August, 2021, 12:23:53

Hi Martina! We will actually add a face blurring feature to Pupil Cloud very soon! It should become available within the next ~4 weeks. It will allow you to blur faces on upload, so that the original videos will not be stored in the cloud (except very briefly, to perform the blurring).

Did you implement your own blurring using OpenCV? OpenCV sometimes skips frames while reading a video file, and if you save the video again using OpenCV, the resulting video can have fewer frames than the original. This can then cause an error in Pupil Player, as the numbers of frames and timestamps are not equal.

user-e91538 17 August, 2021, 13:30:13

Hi Marc, thank you for your prompt reply! Great news that you are adding face blurring! And you guessed right: we are using OpenCV. Thanks a lot for the hint!

user-e0a93f 16 August, 2021, 16:05:15

In order to use the surface tracker, do I absolutely need one frame in each trial containing all of the tags? I have a large surface to cover (a trampoline) with multiple tags, and in my case it is not simple to get this type of view with the world camera.

wrp 16 August, 2021, 23:46:05

You don't need to have all markers visible in the same frame; you can add more markers by scrubbing to a different time and adding them there. The process is slightly different in Cloud vs Player, but it is possible with both.

user-e0a93f 17 August, 2021, 12:41:03

Thank you very much, I'll have a look at it πŸ™‚

user-ffb13b 17 August, 2021, 05:44:19

Easy question: how much storage do we have on the Pupil Cloud, and is there any way to check how much we are currently using?

marc 17 August, 2021, 07:15:34

Currently, the storage is unlimited! We might limit it in the future at some point, but if we do we will make sure to give sufficient notice!

user-0defc6 17 August, 2021, 11:57:26

Hi @marc . Is this ready? Thank you very much

marc 17 August, 2021, 12:21:06

No, not yet. I expect a release within the next 2-4 weeks!

user-0defc6 17 August, 2021, 12:21:47

@marc ok. thanks for the info

user-b62d99 17 August, 2021, 12:51:13

Hi! Does anyone know how to get a single frame with all the corresponding files and data from a folder opened in Pupil Player?

marc 17 August, 2021, 13:41:31

@user-b62d99 This is unfortunately not immediately possible in Pupil Player; you can only export temporal sections of a recording. The CSV files contain a column for the world frame index, though, which makes it relatively easy to filter for a specific frame. You would need to extract the corresponding frame from the video file yourself.
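If it helps, a rough sketch of doing that with PyAV and the Player export (the frame index here is a hypothetical example):

```python
import av
import pandas as pd

FRAME_INDEX = 100  # hypothetical frame of interest

# Rows of the Player export that belong to that scene video frame
gaze = pd.read_csv("gaze_positions.csv")
frame_gaze = gaze[gaze["world_index"] == FRAME_INDEX]

# Decode the scene video up to the matching frame and save it
with av.open("world.mp4") as container:
    for i, frame in enumerate(container.decode(video=0)):
        if i == FRAME_INDEX:
            frame.to_image().save(f"frame_{FRAME_INDEX}.png")
            break
```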

user-2d66f7 18 August, 2021, 09:51:14

Thank you for your response! Do you have any idea how long it takes to resolve the issue?

user-b14f98 23 August, 2021, 12:26:08

Sorry, but no. I am an academic, not with Pupil Labs. But I can't imagine they could give you an estimate even if they wanted to, or they would have done so already. This is a difficult issue.

user-c6dd21 21 August, 2021, 02:04:54

Hi! We plan to use Pupil Invisible in our lab. If an unspecified number of people use one Pupil Invisible, is it possible to share the same account across multiple Pupil Invisible Companion apps?

papr 22 August, 2021, 18:45:31

Yes, that is possible.

user-a98526 21 August, 2021, 02:43:49

Hi @marc, I want to know why the gaze data I exported from Pupil Player is different from the data from Enrichments.

user-a98526 21 August, 2021, 02:44:34

This is a partial result

Chat image

user-a98526 21 August, 2021, 02:45:35

The calculation formula I used is: px = 1088 * norm_pos_x, py = 1080 * (1 - norm_pos_y)

papr 22 August, 2021, 18:45:15

Make sure to compare gaze from the same timestamps. The export from Player might cover a different time range than Cloud.

user-a98526 23 August, 2021, 01:53:33

Thanks for your answer @papr. I want to know how to align the timestamps provided by Pupil Player and Cloud. If I use the results provided by Pupil Player to calculate the pixel coordinates of the fixation point, will there be errors?

papr 24 August, 2021, 08:53:21

The alignment of Player and Cloud is possible but slightly complicated due to the transformations that Player applies when opening a PI recording. I suggest using only one of the two: the Cloud export or the Player export.

user-a98526 23 August, 2021, 01:54:43

I want to obtain each frame and the corresponding gaze point coordinates. Can this be obtained using the Cloud?

papr 24 August, 2021, 09:09:04

Yes, Cloud exports the necessary information to find the gaze location for each frame in the scene video. Due to the different sampling rates of the scene and eye cameras, there is no 1-to-1 mapping though. There are two alignment approaches:

A) One gaze sample per scene video frame: for every timestamp in world_timestamps.csv, find the closest timestamp in gaze.csv that has the same recording id.

B) All gaze samples for each scene video frame: for every timestamp in gaze.csv, find the closest timestamp in world_timestamps.csv with the same recording id.
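For approach A, a sketch with pandas (column names follow the Cloud Raw Data Export; double-check them against your own files):

```python
import pandas as pd

world = pd.read_csv("world_timestamps.csv")
gaze = pd.read_csv("gaze.csv")

# merge_asof needs sorted keys; match each scene frame to the nearest
# gaze sample in time, within the same recording id (approach A)
world = world.sort_values("timestamp [ns]")
gaze = gaze.sort_values("timestamp [ns]")

matched = pd.merge_asof(world, gaze, on="timestamp [ns]", by="recording id", direction="nearest")
```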

papr 23 August, 2021, 12:28:47

@user-b14f98 @user-2d66f7 Please update your Companion apps. We have released an update that should improve the IMU timing.

user-2d66f7 23 August, 2021, 13:14:18

That's good to know! Thank you

user-b14f98 23 August, 2021, 12:46:38

Thanks!

user-a98526 24 August, 2021, 08:22:34

Hello @papr, can you resolve my doubts?

user-d52987 27 August, 2021, 20:25:39

Hello! I updated my Companion app, and it seems that uploads are not being pushed through to the cloud. I tried basic troubleshooting and uninstalling; they just spin at 0%. Any advice appreciated, but I may not be back in the office until Monday. Thank you!

user-d52987 27 August, 2021, 20:27:30

Using the OnePlus6 companion device btw

user-d52987 27 August, 2021, 20:34:13

Did some searching, and clearing the app data fixed it! Sorry for the spam, reposting just in case πŸ™‚

Chat image

user-bbf437 29 August, 2021, 16:09:12

Hi, may I ask where I could find the full tech specs of the eye-tracking cameras on Invisible? I can see some on your website, but I would like to know the manufacturer, focal length, sensor size, etc. as well. Thanks.

user-ffb13b 30 August, 2021, 05:01:20

Hello, I am 99% sure this question has been asked and answered already, but I searched for 30 minutes and could not find it. I have 112 separate Pupil Invisible recordings downloaded from Cloud. I wish to use just the gaze_positions.csv for each recording. Is there a way to do this without having to open each recording in Pupil Player and manually export the gaze positions? Thanks in advance.

papr 30 August, 2021, 07:47:18

Unfortunately, Player was not designed for batch processing Pupil Cloud recordings. In this case, a custom script for extracting this data might be the easiest approach. The recording format is documented here: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793

If you need further technical help feel free to reach out to info@pupil-labs.com
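For anyone who does want a script, a rough sketch of pulling the native gaze data out of many recording folders (the file naming and binary layout are taken from the format document above; verify both against your own recordings):

```python
from pathlib import Path
import numpy as np

root = Path("recordings")  # hypothetical folder holding the downloaded recordings

for rec in sorted(root.iterdir()):
    for raw_file in rec.glob("gaze ps*.raw"):
        # Per the format docs: little-endian float32 (x, y) pairs in scene-camera pixels
        xy = np.fromfile(raw_file, dtype="<f4").reshape(-1, 2)
        # Matching .time file: int64 timestamps in nanoseconds
        ts = np.fromfile(raw_file.with_suffix(".time"), dtype="<i8")
        print(rec.name, raw_file.name, len(xy), len(ts))
```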

user-ffb13b 30 August, 2021, 07:50:25

Thanks for the reply! Unfortunately I don't have much experience in Python, and it would probably be quicker for me to manually export the gaze_positions.csv from Pupil Player for each recording than to write a script.

papr 30 August, 2021, 08:47:19

Hi, just a quick follow-up question: are you aware that Pupil Cloud has started offering CSV exports? See https://docs.pupil-labs.com/cloud/enrichments/#raw-data-exporter The only difference from the Player export is that Player exports gaze in normalized coordinates and uses seconds as time units.

user-bbf437 30 August, 2021, 08:33:15

Hi, I'm using Pupil Invisible and exporting gaze-overlaid world video using Pupil Player.

May I ask if I can plot an edited gaze position file (a CSV, for example) on the original world video? For example, if I want all the gaze to shift toward the top a little bit and then export the shifted gaze overlaid on the world video, how would I do that?

Thanks.

papr 30 August, 2021, 08:49:22

Hi, unfortunately that is only possible with a custom script. Alternatively, I can share a Pupil Player plugin that enables you to apply a fixed offset to the gaze data within the application. Would that be helpful, too?

user-bbf437 30 August, 2021, 08:57:49

[email removed] I would appreciate it if I could try the plugin.

May I also ask: if I want to explore the custom script option, would that be a solution within the app, or an external script using something like OpenCV?

papr 30 August, 2021, 09:01:06

I can share the plugin soon. Let me quickly make sure it still works with the latest Player version πŸ™‚

papr 30 August, 2021, 09:00:36

Technically, both are possible. A custom importer might be technically easier but requires knowledge of how the application internals work. An external script might be easier to implement without knowledge of the app.
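As a taste of the external-script route, shifting all gaze a bit toward the top of the frame in a Player export could be as simple as this (a sketch; the 0.05 offset is an arbitrary example value):

```python
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# In Player's normalized coordinates the origin is at the bottom left,
# so shifting gaze "up" means increasing norm_pos_y; 0.05 = 5% of frame height
gaze["norm_pos_y"] = (gaze["norm_pos_y"] + 0.05).clip(0, 1)

gaze.to_csv("gaze_positions_shifted.csv", index=False)
```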

user-ffb13b 31 August, 2021, 14:47:48

Is anyone here familiar with using the 'saccades' package within R that I could ask a few questions about?

user-bbf437 31 August, 2021, 15:51:53

[email removed] any luck with the plugin?

Another question: I see around 10-20 more timestamps than world video frames (frames read by OpenCV) before the Pupil Player export. But after exporting, world.mp4's frame and timestamp counts are the same. May I ask what changed during the export to make the frame count increase to match the timestamp count?

Thanks.

papr 31 August, 2021, 16:05:31

Download this file to your pupil_player_settings/plugins folder, start Player, open the recording, and select "Offset Corrected Gaze Data" from the gaze data menu.

gaze_with_offset_correction.py

papr 31 August, 2021, 15:54:34

Apologies, I lost track of that. I will test the plugin now.

In the past, I have seen that OpenCV is not always accurate when it comes to reading all frames. I recommend using https://github.com/PyAV-Org/PyAV to read our videos.
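For comparing counts, a quick PyAV sketch that decodes every frame (container metadata alone can be off):

```python
import av

# Count frames by actually decoding them, rather than trusting metadata
with av.open("world.mp4") as container:
    n_frames = sum(1 for _ in container.decode(video=0))

print("decoded frames:", n_frames)
```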

End of August archive