invisible



user-5aeab9 02 February, 2021, 14:17:54

Hi, I have a problem with the Pupil Invisible hardware. The scene camera and eye cameras are not being recorded or recognized in the Companion app. I think the OnePlus 6 firmware was upgraded. How can I restore the phone? Regards

marc 02 February, 2021, 14:19:09

Hi @user-5aeab9! Could you let me know which version of Android is installed on your OnePlus 6 device?

marc 02 February, 2021, 14:21:07

You can find that info in Settings -> About Phone.

user-5aeab9 02 February, 2021, 14:35:59

Thanks @marc. The Android version is 8.1.

marc 02 February, 2021, 14:36:53

@user-5aeab9 That version is supported and should not be the cause of the problem. Can you confirm that you have OTG enabled when trying to connect the glasses and the phone?

user-5aeab9 02 February, 2021, 14:38:30

@marc, yes, OTG is enabled.

marc 02 February, 2021, 14:42:50

Okay, last question: Are you using the USB-C cable that was included with your Pupil Invisible glasses, or a different cable? I am assuming you made very sure this is plugged in well!

marc 02 February, 2021, 14:44:27

If the answer is yes, then there is most likely a hardware defect in your glasses. Please reach out to [email removed] so we can get a repair going!

user-5aeab9 02 February, 2021, 14:47:44

Ok, it's the correct cable. I will contact [email removed]. Best regards

user-137f16 03 February, 2021, 12:01:12

Hi everyone. We would like to track the position of the eyes on a monitor screen and synchronize it with a video. We are interested in knowing whether there was a fixation at a specific frame, and the position of this fixation normalized to the screen. What is the best setup? Can you tell me how I should do it? Thank you very much.

marc 03 February, 2021, 12:03:52

Hi @user-137f16! You could do something like this using the Marker Mapper enrichment. Have you checked that out already?

user-137f16 03 February, 2021, 12:19:14

No, I am reading about it right now. Thanks. I guess that somehow I will have to correlate the fps of the Invisible with the fps of the video. Is that something I can do directly from Projects in Pupil Cloud?

marc 03 February, 2021, 12:30:32

From the Marker Mapper you get gaze mapped to the screen with timestamps. If you know the timestamps of the individual frames you are presenting, you can use those to correlate them. You can get the Marker Mapper data as CSV and will have to do the rest in e.g. a Python script.
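
For illustration, here is a minimal sketch of that correlation step. The column names ("timestamp [ns]", "gaze position on surface x [normalized]", ...) and file names are assumptions and should be adjusted to the actual Marker Mapper export; the frame timestamps are something you log yourself while presenting the video.

```python
"""Rough sketch: match Marker Mapper gaze samples to presented video frames.
Column and file names are assumptions -- adjust to the actual export."""
import pandas as pd

gaze = pd.read_csv("gaze.csv")                # Marker Mapper export (assumed name)
frames = pd.read_csv("frame_timestamps.csv")  # your own log: one timestamp [ns] per presented frame

gaze = gaze.sort_values("timestamp [ns]")
frames = frames.sort_values("timestamp [ns]")

# For every presented frame, take the closest gaze sample in time (within 50 ms).
matched = pd.merge_asof(
    frames, gaze, on="timestamp [ns]", direction="nearest",
    tolerance=int(50e6),  # 50 ms expressed in nanoseconds
)

# Normalized on-screen gaze position per frame (NaN where no sample was close enough).
print(matched[["timestamp [ns]",
               "gaze position on surface x [normalized]",
               "gaze position on surface y [normalized]"]].head())
```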

user-137f16 03 February, 2021, 13:13:30

Perfect. That helps me a lot. Thank you very much @marc .

marc 03 February, 2021, 13:14:06

You're welcome! 👍

user-d20949 07 February, 2021, 14:51:35

Hey everybody, a question concerning the raw data obtained from Pupil Invisible. Does the raw data contain gaze coordinates only, or also things like fixations, blink data, etc.? Would I have to calculate the other measurements myself using the raw data? Thanks in advance.

marc 08 February, 2021, 08:49:48

Hi @user-d20949! The raw data contains the gaze point at ~55 Hz. After uploading a recording to Pupil Cloud the signal is automatically recomputed at 200 Hz. We are working on adding both fixation and blink detection to Pupil Cloud, but as of now these are not yet available. You could of course measure those yourself using the raw gaze data and appropriate algorithms.
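
As a rough illustration of the "appropriate algorithms" point, below is a generic dispersion-threshold (I-DT style) fixation sketch over exported gaze data. This is not Pupil Labs' own detector; the thresholds and the column names ("timestamp [ns]", "gaze x [px]", "gaze y [px]") are assumptions to adapt to your export.

```python
"""Generic dispersion-threshold fixation sketch -- NOT Pupil Labs' detector."""
import pandas as pd

MAX_DISPERSION_PX = 25        # spatial spread allowed within one fixation (assumed)
MIN_DURATION_NS = int(80e6)   # 80 ms minimum fixation duration (assumed)

gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]").reset_index(drop=True)
ts = gaze["timestamp [ns]"].to_numpy()
x = gaze["gaze x [px]"].to_numpy()
y = gaze["gaze y [px]"].to_numpy()

fixations = []
start = 0
for i in range(1, len(gaze)):
    win_x, win_y = x[start:i + 1], y[start:i + 1]
    dispersion = (win_x.max() - win_x.min()) + (win_y.max() - win_y.min())
    if dispersion > MAX_DISPERSION_PX:
        # the sample at index i broke the window; samples start..i-1 are a candidate
        duration = ts[i - 1] - ts[start]
        if duration >= MIN_DURATION_NS:
            fixations.append({"start [ns]": ts[start],
                              "duration [ns]": duration,
                              "x [px]": x[start:i].mean(),
                              "y [px]": y[start:i].mean()})
        start = i  # begin a new candidate window

print(pd.DataFrame(fixations).head())
```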

user-d20949 08 February, 2021, 09:36:08

Ahh okay I understand. Thanks a lot for answering @marc 👍

user-17da3a 10 February, 2021, 11:23:04

Hey guys, I have a question regarding reusing the 3D model for head movement. Using AprilTags around my target space (a big wall projection) in a bright room, I have already defined my 3D head-movement model post hoc through Pupil Player. As my experiments are supposed to be recorded in a dark room while projecting the stimuli on the wall, I cannot redefine the model for each recording because the room is dark and the tags cannot be detected. I would greatly appreciate it if you could tell me how I can reuse the model I have for the rest of my experiments. Thanks in advance,

marc 11 February, 2021, 08:32:37

Hi @user-17da3a! After defining the model, a file that contains the definition is created in the recording folder. The name of the file depends on what name you specify for it in the UI. The default name is default and the file would then be called default.plmodel. Do I understand you correctly that you plan to turn off the light at some point during your experiment, such that the markers can no longer be detected? Please note that while the markers cannot be detected, head pose tracking will not work either. Detecting the markers is not only necessary for model definition but also to run the head pose estimation within the model.
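
A minimal sketch of reusing one pre-defined model across recordings by copying the .plmodel file into each recording folder before opening it in Player. The paths below are hypothetical, and the exact location where Player created the file on your system should be checked first.

```python
"""Copy a pre-defined head-pose model (.plmodel) into several recording folders.
Paths are placeholders; verify where Pupil Player stored "default.plmodel"."""
import shutil
from pathlib import Path

model_file = Path("/path/to/reference_recording/default.plmodel")  # hypothetical path
recordings = Path("/path/to/experiment_recordings")                # folder containing the other recordings

for rec in recordings.iterdir():
    if rec.is_dir():
        shutil.copy(model_file, rec / model_file.name)
        print(f"Copied model into {rec.name}")
```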

user-17da3a 11 February, 2021, 13:55:45

Hi Marc! Thank you so much for your reply. I see now.

- One more question in this regard: to open the model file, can I use the .plmodel extension to visualize the model data?

- As for the experimental setting: you are right. The experiment will be presented via wall projection (the projection space is 270 cm wide and 118 cm high) and I have positioned the AprilTags around my projection space. I just noticed one more thing: not only in the dark room, but also when I keep the light on, the markers cannot be detected if the projector is projecting onto the wall (i.e., the markers can only be detected when there is no wall projection). Could that be an interference caused by the projection light reaching the scene camera?

- Is there any possible solution to realize head pose estimation in such a setting? Thank you!

marc 11 February, 2021, 15:29:53

@user-17da3a You would need to find a way to make the markers detectable. To improve the detection rate you could try to improve the contrast at which the markers are recorded by improving the lighting conditions (the markers should not be over- or underexposed) or potentially the material you print on (I'd imagine a matte surface would be better if a projector might shine on it). You could also try to print the markers larger to make them easier to detect, or improve their positioning. The markers do not necessarily have to be at the exact edge of the surface, but could also have some distance to it. The important thing is that they are in plane with the surface and that they are visible whenever the surface is viewed.

I am not sure what you are referring to with "use the function .plmodel". You can visualize the model in Player after you have copied the model to a recording.

user-17da3a 11 February, 2021, 17:54:13

Thank you so much!

user-17da3a 12 February, 2021, 13:22:37

Hi, I am facing a problem with the Pupil Invisible Companion app. After I record the videos, they cannot be uploaded to the cloud and I always have to transfer the files to my laptop via cable. I press upload; it shows 0% and does not proceed any further. I need to upload them to the cloud to be able to define events and get the 200 Hz gaze data. Please let me know what the problem with uploading is. Thank you!

marc 12 February, 2021, 13:26:48

@user-17da3a To state the obvious, you can confirm that the phone is connected to the internet, right? Could you also try to restart the app by holding the app icon -> click "app info" -> click "force stop", then restart the app and see if the uploads start happening? If that does not work, could you try to pause the upload for the recording(s) where it does not work and start it for others, to see if this is connected to a specific recording that is blocking the queue?

user-17da3a 12 February, 2021, 13:37:32

Hey Marc! Thanks for your reply. Yes, the Companion phone is connected to the internet. I force-stopped the app as you suggested, but that did not work. I also paused all recordings that are still not uploaded and checked them one by one; that did not work either.

papr 12 February, 2021, 13:42:42

Could you please try logging out and back in again?

user-17da3a 12 February, 2021, 13:54:45

Hi Papr! I did so and also rebooted the phone, but it still does not work.

user-53a8c4 12 February, 2021, 15:16:18

@user-17da3a please try clearing the app data (this will reset the app but will not delete existing recordings/wearers)

- Long press Invisible Companion icon
- Click App Info
- Click Storage
- Click Clear Data

user-17da3a 12 February, 2021, 15:51:07

Thanks Dan! I did so. It's working now 🙂

user-2652b9 13 February, 2021, 15:18:03

Hello, I'd like to know if it is possible to record without the Pupil glasses being connected to the phone via cable? I read this in the guide, but I'm not sure how that'd work, and maybe I'm misunderstanding (English isn't my native language) 🙂 It says "you could start recording without Pupil Invisible glasses connected. Then at a later point in time plug in Pupil Invisible glasses, and Pupil Invisible Companion app will automatically start recording video and gaze data once connected."

wrp 15 February, 2021, 07:28:15

Pupil Invisible glasses must be connected to the Companion Device (Android device). Power for the sensors on the glasses comes directly from the Companion Device (no batteries in the glasses) and all computing happens on the Companion Device.

In the docs we are trying to communicate that if the Pupil Invisible glasses are disconnected from the Companion Device, the app can continue recording - but there will be no scene video or gaze data while the glasses are disconnected. When the glasses are re-connected, the video, audio, gaze data, IMU, etc. will resume automatically.

user-17da3a 14 February, 2021, 21:02:15

Hey guys, I have a question regarding AprilTags. Since on the one hand our experimental setting works best in the dark, and on the other hand we need to position AprilTags on the wall (which cannot be detected when the room is dark) to measure head pose, we have an idea and would like to check whether it could work. The idea is to print the AprilTags on large "neon papers", which would allow the tags to stay bright in a dark setting. Please see the attached link for the kind of paper I mean: Premium Neon Paper – Pack of 5 Luminous/Phosphorescent Printer Paper A4 Neon Paper for Inkjet Printer/Inkjet Printer, UV Glow (Pack of 5 Coloured Card, A4 Size): Amazon.de: Bürobedarf & Schreibwaren. Do you know if this idea is basically feasible? Or might the colour of the paper obstruct detection of the tags? Thank you so much!

marc 15 February, 2021, 08:27:45

We have never tried the markers in such a setup and it is a bit hard to judge whether that would work. If the room is dark, the scene camera exposure will be long, which might lead to blurry edges on the markers if they are actively glowing. It would be worth a try though! Another potential idea: if the sections during which the light is off were sufficiently short, you could also fill the gaps using the IMU. The IMU suffers from drift error, so absolute head pose cannot be measured for prolonged time periods, but for shorter intervals that could work. I am not sure if something like that could be accommodated in your experiment design.

user-17da3a 15 February, 2021, 13:57:42

Dear Marc! Thanks for your reply. I have recorded several times in our setting. By pre-defining the model with the lights on and then applying the model to the rest of the recordings (i.e., recordings in the dark condition), I get output with more head-pose data points. I also increased the number & size of the AprilTags to increase the chance of the model being defined properly. I have a question now regarding the model itself. The model I want to use for head data is a rectangular plane on the wall, as I told you before. To define this model, I walked slowly in the room and looked at the AprilTags from various angles. Almost all markers were detected (around 30 tags) but the 3D model looked quite strange (not a rectangle). I attached the model to be clearer. I once defined a rectangular model, so it is possible, but after I positioned bigger tags and increased their number, I cannot get a rectangle anymore. Could you please let me know what I am doing wrong? And can I properly define a rectangular model to look for head data inside this plane on the wall? Thank you so much!

Chat image

user-73a5fe 15 February, 2021, 08:44:52

Hi, quick question. Is there a limit to the cloud storage (e.g. 100 GB? 1 TB?) Thanks!

marc 15 February, 2021, 12:36:01

Hi @user-73a5fe! As of now there is no limit on how much can be uploaded to Pupil Cloud. We will however introduce a limit in the future together with the option to purchase more storage. I can't give you exact numbers or a date when this will happen yet, but we will make sure to announce that sufficiently in advance such that everybody has a chance to accommodate those changes.

papr 15 February, 2021, 15:10:17

You have not reused any of the tags, have you? In other words, can you confirm that every one of the printed markers has a unique ID / pattern?

user-17da3a 15 February, 2021, 15:13:04

Not really. Actually, I guess I have some tags with the same ID; I was not aware that I cannot use tags with the same ID.

papr 15 February, 2021, 15:16:10

In this case, we will have to improve our head pose tracker documentation. This is indeed an important requirement, as the software uses the marker ID to look it up within the 3D model. If you use an ID twice at different locations, the algorithm gets confused.

user-17da3a 15 February, 2021, 15:21:19

Thanks a lot Papr! I see now.

user-0b8ac6 15 February, 2021, 17:05:49

How far can the invisible eye tracking glasses reach?

marc 15 February, 2021, 17:06:41

Hi @user-0b8ac6! Reach in what way? Do you mean the maximum distance a gaze target may have?

user-0b8ac6 15 February, 2021, 17:32:50

Yes the maximum distance a gaze target may have. I am looking to use them in an arena

marc 15 February, 2021, 18:12:05

I see! While at very close distances problems may occur, there is no limit to how far away the gaze target can be. I see no problem with using Pupil Invisible in a stadium!

user-0b8ac6 15 February, 2021, 18:12:31

thank you!

user-16e6e3 16 February, 2021, 14:07:08

Hi @marc! I have one question related to the last one that was asked. What problems occur at closer distances? @user-17da3a and I are working on a project where we present a big projection with images in front of the participant. The eye-to-projection distance is approximately 65 cm. Would this be a problem?

marc 16 February, 2021, 14:09:30

@user-16e6e3 At closer distances a parallax error is introduced. This is a constant offset in the prediction that is dependent on the distance to the gaze target. It starts to become noticeable at distances < 100 cm. Given that in your setup the subjects remain at a (more or less) constant distance to the projection, it should be no problem however to correct for this parallax error using the offset correction feature.

user-16e6e3 16 February, 2021, 14:22:21

Thanks @marc! We'll keep doing the offset calibration every time before the recording then. One more question regarding our setup: The goal is to track both eye and head movements while participants view the presented stimuli. We surrounded the projection area with AprilTags, and also did a separate recording to pre-define the model that will be used for head pose tracking (lights on, all markers were detected). When we record in our conditions (we tried it with both lights on and off), only 1-2 markers can be detected and we can export 103 head poses in a 30-second recording (~3 Hz). When we apply the pre-defined model of the projection space in Player to the same recordings, we can increase the number of head pose data points to 840 Hz (~

papr 16 February, 2021, 14:28:51

The head pose tracking requires the markers to be visible in order to orient itself. If you already have a model, the algorithm can infer the head pose from a single frame. On the other hand, to build the model, it needs to triangulate each marker position in relation to its neighbouring markers. For that, it needs to have seen the markers from different perspectives. If you have bad marker detection, e.g. due to low light, building a model is very difficult as far less data is available. The localization will be worse accordingly.

user-16e6e3 16 February, 2021, 14:24:18

28 Hz, which is enough for us). However, since we don't really understand the algorithm behind it, we wanted to make sure that this increase in data quantity when applying the pre-defined model to the recording means that the data quality is also good (despite only 1-2 markers being detected during the actual recording)?

user-16e6e3 16 February, 2021, 14:24:42

*sorry, 840 points not Hz

marc 16 February, 2021, 14:32:13

@user-16e6e3 So in other words, if you put in a high-quality model (created under good lighting conditions), this makes the tracking easier and can thus explain why you get more data using the pre-recorded model. However, if only very few markers are detected at every given point in time, there is a chance that the calculated head pose signal is noisy. Our recommendation is to have at least 3 markers visible at all times. You could plot e.g. the head translation signal and see if it is too noisy for your application.
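
For the suggested noise check, a small plotting sketch over the head pose export is shown below. The file and column names ("head_pose_tracker_poses.csv", "translation_x", ...) are assumptions to verify against your actual Pupil Player export.

```python
"""Quick visual noise check on the exported head-pose translation signal.
File and column names are assumptions -- adjust to your Player export."""
import pandas as pd
import matplotlib.pyplot as plt

poses = pd.read_csv("exports/000/head_pose_tracker_poses.csv")

fig, ax = plt.subplots()
for axis in ("translation_x", "translation_y", "translation_z"):
    ax.plot(poses["timestamp"], poses[axis], label=axis)
ax.set_xlabel("timestamp")
ax.set_ylabel("translation [model units]")
ax.legend()
plt.show()
```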

user-16e6e3 16 February, 2021, 16:37:31

Thank you both! So putting a high-quality pre-recorded model into a recording with only a few detected markers does not necessarily give you good data. With our setup (215x118 projection on the wall, ~65 cm distance to the projection, markers around the projection), we can see at least 2-3 markers in the scene camera video at all times, but they are still not detected, probably because they are quite far away and participants do not look at them directly? What we decided to do now is to combine the model definition (lights on, looking at the markers from different perspectives) with the actual experimental trials (dimmer lights so that the projection is more visible, sitting in front of the projection) in one recording. This way the model could be defined with all the markers, and although the markers were not detected in all the frames, the data looked fine when we plotted it.

papr 16 February, 2021, 16:40:53

I would highly recommend using the same model for all recordings instead of fitting it for each recording. This way the head pose estimations are truly comparable. If you refit the model for each recording, the resulting models are likely to be similar but not identical.

user-16e6e3 16 February, 2021, 16:42:59

Okay, hmm... then we don't really know how to solve the problem of only a few markers being detected during the actual experimental recording. We already increased the number of markers surrounding the projection, and we cannot put any markers on the projection itself since that would interfere with the visual stimulus.

papr 16 February, 2021, 16:44:27

The issue is not the total amount of detected markers but the bad detection during the dim light / projection phase. Your improved results should not come from having included a predefined model.

papr 16 February, 2021, 16:46:20

Can you share an example world frame from the dim light condition where some of the markers are visible?

user-17da3a 16 February, 2021, 18:49:07

Hi! Please find the world camera frames related to the above question attached. One recording is without light, but the markers are still visible as you can see. The other recording has one light on. I should also mention that the big tags you see in the corners have been removed now, because I could not define the model using tags of different sizes.

user-16e6e3 16 February, 2021, 16:55:19

Sure, Parishad will share them here later. It's a dilemma because with brighter light we can probably increase the number of detected markers, but we will decrease stimulus quality. We also had the idea of printing the markers on neon paper to make them "glow" in the dim light, but we are not sure if the color plays a role in detection. Do they have to be black and white to be detected?

papr 16 February, 2021, 16:59:48

The recorded frame is converted to a gray image before detection. As long as the contrast in the gray image is large enough, that should work. But I have never tried that myself. Therefore, I cannot tell if the contrast will be sufficient. I would estimate a 70-80% chance that it could work.

user-17da3a 16 February, 2021, 18:52:01

One more question: I just noticed that in some of my recordings the world camera video is named "PI world v1 ps1.mp4" instead of "world.mp4". Could you please explain what this means? Why don't they have the same file names?

marc 17 February, 2021, 09:19:17

@user-17da3a When opening a Pupil Invisible recording in Pupil Player, the data format of the recording is changed to be more in line with Pupil Core recordings, which Pupil Player was originally designed to work with. Part of this is renaming the world video files to world.mp4. So the recordings that feature this name are the ones you have opened in Pupil Player before! The setup in your videos looks fine to me, as long as enough markers are detected. Enough markers seem to be visible at all times.

user-16e6e3 17 February, 2021, 10:05:25

Hi @marc! Yes, even though the markers seem to be visible all the time, they are still not detected (only 1-2 are detected). So we will print them out on neon paper and see if it increases the number of detected markers. We also read that marker family 16h5 has a higher chance of being detected, so we'll try to use that instead of the current 36h11. But any other ideas on how to improve the detection in this setup would be highly appreciated! 🙂

marc 17 February, 2021, 09:31:30

Hi @user-94f03a!

1) Currently, a surface can only be created based on one (or multiple) Pupil Invisible recordings. The background image used for heatmap generation is a crop that is taken when clicking the "Define surface" button during creation. When using a jpg-image, would you want to do this mostly because it would make the creation workflow easier for you or because this would increase the quality of the image used for heatmap generation? Thank you very much for the feedback here! I'd be very interested in understanding in more detail what workflow you would find ideal here!

2) All enrichments are defined with a start and end event. The processing of the enrichment will happen on every recording that is part of the project that contains those events. If a recording contains those events multiple times, all the according sections of the video will be processed. If additional recordings are added or the events are edited, you will be offered to re-compute the enrichment / add the computation needed for the additional sections. Note the "Sections" column in the enrichment overview that summarizes how many sections have been found across all recordings in the project.

user-94f03a 23 February, 2021, 04:55:42

Hi @marc thank you for your reply.

1) Well, that would be interesting for both reasons. For our experiment materials, the user reads several pages with information and we want to know exactly how much time they spent looking at each page + gaze behaviour. We integrated the AprilTags into the production of the stimuli, so now each 'page' is defined by 4 different AprilTags. We then print it all together. We also have a record of which AprilTag is associated with which stimulus. So:

1.1) Given all our stimuli are printed material, we could easily upload a jpg/png to the surface tracker; we don't need to wait for the eye-tracking data. That would allow for clearer graphics under the heatmap.

1.2) In another case, we could even define that AprilTag XYZ is associated with event 'W', e.g. flash an AprilTag to 'start' a task and another to 'end' a task. That way we can have a tight integration between various stimuli and quickly annotate the video stream.

Hope that is useful, happy to discuss this further.

2) OK makes sense, thanks!

marc 17 February, 2021, 11:29:58

@user-16e6e3 Playing around with different marker families is a good idea. We have on average had the best results with 36h11 but it also depends on the use-case. I just saw that the original AprilTag repo was updated a while ago and now recommends using tagStandard41h12 as the default, so that might be worth trying too.

user-17da3a 17 February, 2021, 12:14:29

Thanks Marc! Can the markers of these families be found online? Or do we need to create them ourselves? So far we could not find them online.

marc 17 February, 2021, 12:16:01

@user-17da3a You can find them here: https://docs.pupil-labs.com/core/software/pupil-capture/#markers

user-17da3a 17 February, 2021, 12:38:10

Thanks for the link. Actually, when I download the ready-made .png files from the link, the tags are very small and blurry.

marc 17 February, 2021, 12:40:10

Yeah, that is a bit inconvenient. You need to scale them up using "nearest neighbor interpolation".

marc 17 February, 2021, 12:40:45

This can be done with a script or a graphics editing program.
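
One way to do the nearest-neighbor upscaling in a script, assuming Pillow is installed (the file names below are placeholders):

```python
"""Scale a small marker PNG up without blurring the edges, using
nearest-neighbour resampling with Pillow. File names are placeholders."""
from PIL import Image

tag = Image.open("tag36h11_0.png")           # small source marker
big = tag.resize((800, 800), Image.NEAREST)  # nearest-neighbour keeps edges crisp
big.save("tag36h11_0_large.png")
```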

papr 17 February, 2021, 13:09:40

@user-17da3a please see this for reference https://github.com/pupil-labs/pupil-helpers/tree/master/markers_stickersheet

user-17da3a 17 February, 2021, 13:15:01

Thanks a lot!

papr 17 February, 2021, 13:10:38

You can change the marker family here https://github.com/pupil-labs/pupil-helpers/blob/master/markers_stickersheet/create_marker_sheet.py#L9

user-17da3a 18 February, 2021, 15:18:47

Hi! By using nearest-neighbor interpolation, I was able to scale the AprilTags up to our planned size. The tags are from family 41h12. You can see the image of the tags attached. The problem is that none of the tags from this family could be detected, even during model definition with the lights on; marker detection failed. Is there any specific reason for that? Thanks!

Chat image

papr 18 February, 2021, 15:19:56

Are you using Pupil Cloud or Pupil Player for detection?

user-17da3a 18 February, 2021, 15:20:23

Pupil Player

papr 18 February, 2021, 15:21:51

Have you changed the marker type accordingly in the "Marker Detection Parameters" menu already?

user-17da3a 18 February, 2021, 15:28:07

Through the offline head pose tracker menu?

papr 18 February, 2021, 15:30:01

Ah, apologies, I was thinking you were using the surface tracker. The head pose tracker currently only supports the default 36h11 family.

marc 18 February, 2021, 15:32:52

@user-17da3a My bad! I missed that this is only a settable parameter for the surface tracker, but not for the head pose tracker.

user-17da3a 18 February, 2021, 15:35:26

I see now. I changed the tag family in the surface tracker, and now the markers turn green in the marker cache. So we have no option other than using tag family 36h11.

papr 18 February, 2021, 15:36:00

Unfortunately, this is correct for the head pose tracker.

user-17da3a 18 February, 2021, 15:44:57

Thanks. I also projected the tags onto the wall, but that does not work either. I am actually thinking now about painting or printing the tags on a thick board. We have to make it work. I still have no idea why tags from family 36h11 are always visible in the world view but cannot be detected.

papr 18 February, 2021, 16:05:46

A list of typical issues is:
- Markers too small
- Bad lighting, causing the contrast between black and white areas to be too low
- Missing white border around tags
- Motion blur

user-17da3a 19 February, 2021, 13:15:06

Hi guys! I have a question regarding the Pupil Invisible connector. When I was recording, at some points I noticed that there was no scene camera image, just gray frames in the video (i.e., I recorded but the recording did not include the world scene). I understood that the problem is with the cord attachment to the glasses, because sometimes by putting a bit of pressure on the attachment part I could activate the scene camera again. Is this a technical problem? Or how can I make the attachment more stable and be sure I will not lose the scene camera in any of my recordings? Thanks!

marc 19 February, 2021, 13:22:46

Hi @user-17da3a! Yeah, if there are gray frames in the video when playing it back in Pupil Player (or Cloud), this means that the scene camera was not properly connected at that time. Usually the USB-C connection is pretty stable simply because the connector is quite stable by design. The cable or the phone's socket can however in some cases "wear out" a bit, making the connection less stable. What helps with the connection is of course minimizing the strain on it (e.g. while having the phone in your pocket and walking around, there might be some strain on the cable). As the subject is seated in your setup, as far as I understand, it could be helpful to have the phone lying on a table rather than putting it in the subject's pocket. If the connection is not stable even with no or little strain on the cable, this would indicate that the cable or the phone's socket is somehow broken and we should investigate this further.

user-17da3a 19 February, 2021, 13:26:07

Thanks a lot Marc! Good idea, I would put the phone on the table then and see if this happens again.

user-7c714e 20 February, 2021, 14:46:19

@marc Hi, is it now possible to download the data from the Invisible in CSV format in Pupil Player and Pupil Cloud?

marc 22 February, 2021, 08:48:42

@user-7c714e We are currently working on that! I expect this to become available within the next ~4 weeks!

user-17da3a 22 February, 2021, 10:46:41

Hi! I have a question regarding Pupil Player. I want to know what exactly the marker cache is. What is the difference between the marker cache and marker detection? It would also be great to know the meaning of the color codes (green / red) in the marker cache timeline.

user-0e7e72 22 February, 2021, 10:53:48

Good morning, while recording a Pupil Invisible session, I need to mark the gaze at the moment I press a button on the keyboard. I guess I should develop something similar to https://docs.pupil-labs.com/developer/invisible/#events but it is not clear to me how to define the event "user pushed a button". Thank you very much!

marc 22 February, 2021, 11:09:05

Yes, this would be exactly the tool for it. An event is basically a string (the event_message) that is associated with a timestamp. You could for example build your script to send the event message "button pressed" automatically whenever the button is pressed. The event will then be saved in a running Pupil Invisible recording (together with the according timestamp) and will later show up in Pupil Cloud. The string you choose is arbitrary and could also be something like "pressed key: d". If you later want to aggregate over those events with enrichments, it would make sense to use the same string for all the sets of events you want to aggregate over.
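
A sketch of what such a script could look like is shown below. Note that it uses the pupil-labs-realtime-api package, which postdates the NDSI-based interface the linked docs described at the time; check which API your Companion app version supports before relying on it.

```python
"""Send a 'button pressed' event into a running Pupil Invisible recording.
Uses the pupil-labs-realtime-api package (an assumption -- older Companion
app versions may require the NDSI-based interface instead)."""
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion phone on the local network

try:
    while True:
        input("Press Enter to send the event (Ctrl+C to quit)...")
        # The event string is arbitrary; keep it consistent across recordings
        # if you want to aggregate over it with enrichments later.
        device.send_event("button pressed")
except KeyboardInterrupt:
    pass
finally:
    device.close()
```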

marc 22 February, 2021, 11:03:57

@user-17da3a Detecting the markers in a large recording takes a bit of computation, and in order to not do this computation multiple times the results are cached. So the cache filling up in Pupil Player simply corresponds to the algorithm going through the recording and detecting markers for every frame. The timeline visualization of the cache indicates when markers have been detected. Green refers to "markers detected" and red to "no markers detected".

marc 23 February, 2021, 08:58:36

@user-94f03a Thank you very much for the details, that is very helpful! We have internally discussed the benefits of being able to upload images of the surface before and your input reinforces the idea. 👍 We will most likely add this functionality down the line. I will also discuss the option of batch-uploading images for definition of multiple surfaces with the team!

user-2d66f7 23 February, 2021, 10:21:03

Hi! I have some questions on the data of the IMU sensor. As you have mentioned on this page, the IMU suffers from drift. The absolute head pose can, therefore, only be measured with short measurements. Do you maybe have a definition of short measurements? Are these 1-minute time periods, or can they be 10-minute time periods? Are you going to improve the data output of the IMU sensor in the future with, for example, an AHRS and/or V-SLAM algorithm?

marc 23 February, 2021, 11:47:00

@user-2d66f7 We have so far not exactly quantified the measurement drift of the IMU. The acceptable error of course also depends on the application, but I would expect the maximum time intervals to be more in the ballpark of ~1 min or less, rather than 10 minutes. We have an implementation of the Madgwick algorithm (an AHRS algorithm) that removes gyro drift in the pitch and roll axes by using accelerometer feedback to monitor position relative to gravity. Using that, the absolute head pose can be calculated at least in pitch and roll. You can find it here: https://gist.github.com/N-M-T/ec8071bd211db287f4879e0b48874505

Independent of the IMU, we will actually start a beta phase for a V-SLAM implementation in Pupil Cloud soon (probably late March). More info on that will be coming soon!

Edit: I mixed up the terminology! The drift in pitch and roll is fixed by the Madgwick algorithm, the yaw/heading is still affected by drift.

user-26fef5 24 February, 2021, 08:02:17

Sorry to hijack this answer - just to make sure no one misinterprets this: you mean pitch and roll instead of the yaw angle, right? The yaw angle (or heading) is typically the one causing trouble in pure IMU cases (no heading reference).

user-2d66f7 23 February, 2021, 11:51:18

Cool! Thank you for the information

user-ce4c1d 23 February, 2021, 16:49:37

Hi there, I currently use Tobii Glasses 2 and 3. I would love some more information about how your glasses differ and what we could expect if we made a switch.

marc 23 February, 2021, 17:22:58

Hi @user-ce4c1d! We offer two models of eye tracking glasses, Pupil Core and Pupil Invisible.

Pupil Core is using a traditional gaze estimation pipeline based on pupil detection and a 3D eye model. As such, it has a similar workflow and similar strengths/weaknesses as the Tobii Glasses 2/3. That is, it requires a calibration procedure before every use and it achieves high accuracy in controlled conditions. In more challenging conditions, e.g. outdoors or in applications with a lot of subject movement, the accuracy and availability of the signal can drop significantly.

Pupil Invisible is based on a deep learning pipeline for gaze estimation that is optimized for robustness. It does not require a calibration but works right away after putting on the glasses. The gaze estimation is robust even in bright outdoor conditions and with heavy subject movement. The peak accuracy of Pupil Invisible in controlled lab conditions is however worse than that of Pupil Core. A detailed report on accuracy can be found here: https://arxiv.org/pdf/2009.00508 One limitation is that no pupillometry data is available for Pupil Invisible (while for Pupil Core it is). An additional advantage is that Pupil Invisible is probably the most natural-looking eye tracker currently available, which can be helpful when subjects are required to behave naturally. In terms of robustness and ease of use we believe Pupil Invisible to be the best device currently available.

user-94f03a 24 February, 2021, 03:53:46

Hi @marc In terms of the Invisible, I agree the form factor is very useful. I just wanted to follow up on the point about accuracy (I have seen the paper). Does that difference in accuracy come down to the calibration procedure? i.e. if we follow the calibration procedure, can we improve accuracy? Or is it a matter of eye-camera positioning, etc.?

marc 23 February, 2021, 17:23:00

What is also worth mentioning is that we put a lot of emphasis on the accessibility of data. All the sensor data is available in real-time and in recorded form with no restrictions. The Pupil Core software is open-source. Pupil Invisible recordings are also compatible with Pupil Cloud, our online analysis and data management platform. Using Pupil Cloud makes data logistics much easier and offers different ways of processing eye tracking recordings (e.g. tracking of areas of interest in the scene video).

I hope this gives an idea of what you could expect. Let me know if you have further questions @user-ce4c1d!

user-ce4c1d 23 February, 2021, 17:29:10

Thank you Marc. Is there some way we could jump on a call at some point? I feel like the amount of questions I have would be better answered verbally. We do a lot of eye tracking, so switching glasses is a huge decision for us.

marc 23 February, 2021, 17:33:42

Sure! There is indeed a lot to consider! Please schedule something either with me or my colleague Will through one of the following links: <links removed>

user-ce4c1d 23 February, 2021, 17:34:07

Will do, thank you Marc.

user-be55fc 24 February, 2021, 07:02:16

Hi! Not sure where to ask this, but for Pupil Invisible, how do I fix the issue of the eye tracking circular marker being "stuck" at the top-left corner of the calibration screen on the phone, and only being able to be dragged to somewhere around 40% of the way towards the middle of the screen?

marc 24 February, 2021, 08:56:40

@user-26fef5 You are absolutely correct, thank you for the correction! The drift in pitch and roll is fixed, the yaw/heading is still affected by drift as you describe. Sorry for the mixup! @user-2d66f7 Please note this correction.

marc 24 February, 2021, 09:00:41

@user-94f03a For Pupil Core the quality of the calibration and the positioning of the eye cameras does affect the accuracy a lot. A good calibration and camera positioning is a requirement for reaching high accuracy (and higher accuracy than Pupil Invisible). For Pupil Invisible there is no general calibration procedure, the offset correction can improve the accuracy in some cases though.

marc 24 February, 2021, 09:03:16

Hi @user-be55fc! This is a good place to ask! Could you let me know what phone you are using (OnePlus 6 or 8?) and what version of the app (click and hold the app icon and select "app info" in the pop-up menu to find the app's version number)? Also, could you try to log out of the app, log back in and see if that already fixes it?

user-be55fc 24 February, 2021, 11:18:35

We are using a OnePlus 8, app version 1.0.3-prod. We will try updating the app and logging out. Currently, the inner cameras have also stopped working.

user-7c714e 24 February, 2021, 10:06:18

Hi Marc, how can I change the method from 2D gaze to 3D gaze (Pupil Invisible)?

Chat image

marc 24 February, 2021, 10:18:36

Hi @user-7c714e! The contents of the screenshot look like an export from Pupil Player's fixation detector, is that correct? Pupil Core can provide 3D gaze data as well as 2D gaze data, and the fixation detector allows you to choose which stream should be used. Pupil Invisible only provides 2D data, so this cannot be changed for a Pupil Invisible recording. Note however that the fixation detector is not compatible with Pupil Invisible recordings and will not yield accurate results. It will be disabled for Pupil Invisible recordings in future releases.

user-7c714e 24 February, 2021, 10:23:32

But how would I detect the fixations? This is the most important thing.

marc 24 February, 2021, 10:27:06

Currently, only the raw gaze signal is available for Pupil Invisible recordings. We are working on fixation detection for Pupil Invisible, but right now it can not be achieved with the tools we offer.

user-7c714e 24 February, 2021, 10:30:21

Does this mean that the duration is not correct and will not be available anymore?

Chat image

marc 24 February, 2021, 10:33:01

Yes, the raw gaze signal is simply the gaze location at 200 Hz and does not have a concept of duration. The fixation detection algorithm of Pupil Player is not tuned to the noise patterns of the Pupil Invisible gaze signal, which makes false detections very likely. For most applications the fixation detector will thus yield inaccurate results.

user-7c714e 24 February, 2021, 10:34:49

Does the surface tracker at least function with markers?

user-7c714e 24 February, 2021, 10:35:50

When are you going to release the fixation detection?

marc 24 February, 2021, 10:37:26

Yes, the surface tracker does work with markers. It is compatible with both Pupil Core and Pupil Invisible recordings and is available in Player and Cloud.

marc 24 February, 2021, 10:38:44

I cannot give you an accurate release date for the fixation detection yet. It is most likely at least 2-3 months out.

user-7c714e 24 February, 2021, 11:17:22

That's nice, I hope it comes soon.

user-17da3a 24 February, 2021, 13:30:41

Hi guys! I am writing to ask for some more advice regarding our setting. In the meantime we made some changes to the setting to increase the chance of the markers being detected (e.g., printing bigger tags with bigger borders around them on neon paper, using small LEDs around the space, using a study light on the right side of the projection, as well as creating a white border around the image mosaic stimulus to set the stimulus apart from the tags). Printing the tags on neon paper actually worked and the markers could be detected, and we defined the 3D head model with them, but they only work when the big lights in the room are on, like last time. I would be very thankful for any suggestion that could increase the chance of marker detection. We still cannot detect many markers, although the room is already much brighter.

Chat image

papr 24 February, 2021, 13:38:49

Short note: The white border of the stimulus overlaps with the marker's white border. That might cause detection issues as they are unevenly lit.

user-1cf7f3 24 February, 2021, 18:29:24

Hi! When downloading a video from the cloud, I realised some files that are usually present in the folder were not there, namely the files PI world p1 ps1.time and PI world p1 ps1.mp4. Apparently this was a problem when wanting to import the folder into iMotions (the software we use for analysis). During that recording, at some point the cable connecting the glasses and the phone came off, but we simply put it back on without stopping the recording and it seemed to be working fine. Also, when watching the video in the cloud we could see the video including the gaze (besides the grey frames when the connection was lost).
My question is therefore: was it indeed the lost connection that caused those files to be missing? If so, it would probably have been better to stop the recording and begin a new one, I guess. Do you see any other possible cause?

wrp 25 February, 2021, 02:59:29

Hi @user-1cf7f3 - Pupil Invisible Companion App is designed to handle exactly the situation you encountered with the inadvertent unplug. Your recording protocol seems good as is 😸

If you can see the world video during playback in Pupil Cloud, then it should be included in the download as well. Can you please try downloading again and follow up with results?

user-1cf7f3 25 February, 2021, 09:26:20

Hi @wrp Indeed it worked! I think it was more an internal problem within our team folder 😄 I do see P1 and P2 versions of the same files. Is that because of the disconnection, so a second file was created?

marc 25 February, 2021, 09:30:16

@user-1cf7f3 That is correct, whenever any of the sensors reconnects the recording will continue in a new file. FYI since this can be a bit cumbersome to work with, we will soon offer an additional export option for recordings where those multi-part files will be merged and the formats will overall get a little easier to work with (single-part CSV and MP4 files).

End of February archive