invisible


user-b10192 01 March, 2021, 11:07:54

Hi! I could not install pupil invisible companion on my Samsung note 10 phone. It said "Your device isn't compatible with this version". Is there any version that is compatible with my phone?

user-0f7b55 01 March, 2021, 11:08:32

@user-b10192 Invisible Companion works only on the OnePlus 6 (Android 8/9), OnePlus 8, and OnePlus 8T (both Android 11).

user-b10192 01 March, 2021, 22:58:39

I checked my phone android version and it is Android version 11. Please find attached screenshots.

Chat image

user-b10192 01 March, 2021, 11:10:20

Thanks

user-0f7b55 01 March, 2021, 11:08:43

No other device is supported.

user-b10192 01 March, 2021, 22:58:42

Chat image

user-0f7b55 01 March, 2021, 23:06:36

@user-b10192 as I said above we don't support Samsung. Only the OnePlus models I listed above.

user-b10192 01 March, 2021, 23:13:15

Gotcha.

user-3315ca 02 March, 2021, 14:07:02

Hello, after downloading Pupil Player, the computer says the file is damaged. I repeated the download twice. It is a macOS 10.13.6 High Sierra machine. I cannot find anything about this here on Discord, and I find no software limitations on the site. What is wrong?

papr 02 March, 2021, 14:08:21

Hi, our latest release no longer supports macOS High Sierra. Please use Pupil 3.0 instead: https://github.com/pupil-labs/pupil/releases/v3.0

user-3315ca 02 March, 2021, 14:10:00

Thank you for the link!

papr 02 March, 2021, 14:14:50

I recommend upgrading to macOS 10.15 Catalina if possible, though. There is a lot of software out there that no longer supports macOS High Sierra. On the other hand, there are many applications that do not have full macOS 11 Big Sur support yet. Personally, I think Catalina is the best option compatibility-wise at the moment.

user-2d66f7 03 March, 2021, 09:30:07

Hi, I have two questions about the Pupil Invisible glasses. 1. If I am correct, the IMU doesn’t contain a magnetometer. Did you have a specific reason not to include a magnetometer in the IMU? 2. The software of the Pupil Core shows the confidence of the pupil detection. Is it also possible to get the confidence of the pupil detection with the Pupil Invisible?

marc 03 March, 2021, 10:20:22

Hi @user-2d66f7!

1) Correct, the IMU does not include a magnetometer. There was no specific reason to not include it and we might consider changing that in future hardware iterations. There is no concrete plan as of now though.

2) The gaze estimation pipeline of Pupil Invisible is an "end-to-end" machine learning approach and does not include an explicit pupil detection step. Accordingly there is no confidence value for pupil detection results. There is no confidence value for the gaze estimation quality as a whole either, if that is what you were ultimately interested in!

user-2d66f7 03 March, 2021, 11:06:28

Hi @marc , thanks for your response. Is it possible to get the pupil invisible with magnetometer on request? Or should we wait for changes in the future?

user-be55fc 05 March, 2021, 01:27:35

Hi, there’s a red light flashing on the right side of the Pupil Invisible inner cameras. It’s also failing to record because it can’t detect the inner cameras. Anyone know what causes this?

wrp 05 March, 2021, 05:20:39

@user-be55fc send an email to info@pupil-labs.com with a screenshot of the error dialogue and we can assist.

marc 05 March, 2021, 08:58:05

@user-2d66f7 Sorry for the delayed reply! Unfortunately we can not accommodate a custom IMU upon request. You can of course wait for it, but I do not expect anything to become available before the end of the year!

user-2d66f7 15 March, 2021, 16:51:56

Sorry for my late reply. Thank you for the information!

user-26fef5 06 March, 2021, 23:20:20

@user-2d66f7 Sorry to hijack the question again; you might want to try a visual-inertial odometry or SLAM approach (maybe ORB-SLAM3, using the world camera and IMU from the Invisible glasses?)

user-2d66f7 15 March, 2021, 16:52:29

Thanks for the tip! I will look into it

marc 10 March, 2021, 12:51:03

Sorry, forgot to answer! We already have some experience with SLAM approaches and will actually release a tool for AOI tracking without markers very soon. The problem with that technology, in our experience, is that it is either very computationally intensive or not sufficiently robust. The scene video often features fast rotational movements, which is usually a problem. But we are actively looking at those technologies and trying to make them work in an eye tracking context. If you have further references you think we should look at, I would be curious to check them out!

user-a98526 08 March, 2021, 06:48:30

Hi @marc, I am trying to run pupil-invisible-monitor from source on Windows, but the following error appears:

user-a98526 08 March, 2021, 06:48:35

(pupil_invisible) E:\pupil\pupil_invisible\pupil-invisible-monitor>pupil_invisible_monitor
Traceback (most recent call last):
  File "e:\anaconda\envs\pupil_invisible\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "e:\anaconda\envs\pupil_invisible\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\Anaconda\envs\pupil_invisible\Scripts\pupil_invisible_monitor.exe\__main__.py", line 4, in <module>
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\pupil_invisible_monitor\__main__.py", line 11, in <module>
    from .models import Host_Controller
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\pupil_invisible_monitor\models.py", line 4, in <module>
    import ndsi
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\ndsi\__init__.py", line 24, in <module>
    from ndsi.formatter import DataFormat
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\ndsi\formatter.py", line 10, in <module>
    from ndsi.frame import JPEGFrame, H264Frame, FrameFactory
ImportError: DLL load failed:

user-a98526 08 March, 2021, 06:49:40

I have followed the method of https://github.com/pupil-labs/pupil-invisible-monitor/issues/24 to fix it, but still failed to solve this problem.

user-32f69e 08 March, 2021, 08:18:38

Is it safe to use the Pupil Invisible glasses on children?

nmt 08 March, 2021, 11:37:48

@user-32f69e We have found that Pupil Invisible works well for ages 3 and up. We recommend using a head strap to ensure a secure fit, particularly during more dynamic movements

papr 08 March, 2021, 09:16:20

@user-a98526 Please see this conversation for reference: https://discord.com/channels/285728493612957698/446977689690177536/796747481693028443 If you have further questions regarding running the software from source, please post them to software-dev

user-a98526 09 March, 2021, 01:48:08

Hi @papr, I found that when using Invisible, sometimes the user is not wearing Pupil Invisible but gaze points are still generated. Is there a confidence value, similar to Pupil Core, that can be used to determine whether a user is wearing Pupil Invisible? There is a "worn ps1.raw" file in the recording format. Can it be used to determine whether there is a user? If so, how can I get this data?

user-a350f5 09 March, 2021, 08:48:53

Hello! I have two questions regarding the PI: 1) Is the tracking robust in users wearing contact lenses? In other eye-tracking systems based on corneal reflection, wearing contact lenses (especially those with cylindric correction) can distort the data and/or introduce heavy jitter. How does the PI handle this?

2) does the tracking algorithm account for ocular dominance? Usually, when performing monocular tracking, the dominant eye is preferred, but since the PI has effectively a combined binocular tracking, I was wondering if it is able to tell which eye is the dominant one thus weighing it differently.

marc 09 March, 2021, 08:56:55

Hi @user-a350f5! 1) The predictions are not affected by contact lenses. We have a small section on this in our white paper: https://arxiv.org/pdf/2009.00508 (see Figure 5 A) 2) There is no model for ocular dominance used in the gaze pipeline of Pupil Invisible or Core.

user-a350f5 09 March, 2021, 09:00:55

1) That's great! for future testing you might want to consider comparing different types of lenses, as spherical correction (i.e. nearsightedness) doesn't nearly have the same impact as cylindrical (i.e. astigmatism)

2) Cool, thanks for letting me know 🙂

marc 09 March, 2021, 09:02:23

@user-a98526 The "worn" data does is a boolean timeseries that indicating if the glasses are worn or not, but the estimate is not perfect but works most of the time. In the app this estimate is used to not show a gaze point in live preview while the headset is not worn. The app will still publish "best guess" gaze data though. You can read the worn data files in Python with np.fromfile(file_path, dtype="uint8").

papr 09 March, 2021, 09:09:11

@user-a98526 to clarify, to my knowledge, the worn data is not being streamed in realtime.

marc 09 March, 2021, 09:02:56

@user-a350f5 Thanks for the info! We will try to keep that in mind! 👍

user-a98526 09 March, 2021, 09:08:01

@marc Is there a way to obtain this data in real time?

user-a98526 09 March, 2021, 09:12:28

Okay, thank you for your notice.

user-17da3a 09 March, 2021, 11:22:21

Hi guys, I have a question regarding the Pupil Invisible scene camera. I recorded a video which lasted 15 minutes. I am actually quite sure I saved the recording properly and I had both the camera and the eye sensors activated. However, while the recording details say the duration is 15 minutes, the scene video was recorded only for the first 4 seconds. Please let me know what the reason could be and whether the video can be retrieved, if it exists at all. Thanks!

marc 09 March, 2021, 11:29:30

Hi @user-17da3a! If the scene camera was not properly connected, it would in theory be possible to make a recording of that duration without the scene camera recording. One way to tell whether the world video was recorded but the file is somehow corrupted is to check the size of the world video file. For a 15 min recording, the "PI world v1 ps1.mp4" file in the folder should be at least several MB in size, whereas a real 4 s recording should be much smaller. The circular animation in the Companion App also visualizes whether both the eye cameras and the world video camera are successfully recording.
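
As a quick sanity check, something along these lines (the path is illustrative; point it at the file inside your recording folder):

import os

size_mb = os.path.getsize("PI world v1 ps1.mp4") / 1e6
print(f"{size_mb:.1f} MB")  # a real 15 min recording should be at least several MB, a real 4 s one much smaller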

user-17da3a 09 March, 2021, 11:59:51

Thanks! I have checked the size of the recording, which is 4 MB. Do you think I can access the video data? Is 4 MB enough for 15 minutes?

user-17da3a 09 March, 2021, 12:03:22

Unfortunately, if I cannot retrieve the scene video and this happens again at any point in time without me being aware of it, I will lose the whole data from my experiment. I also left the phone still on the table to make sure the attached cord was stable.

marc 09 March, 2021, 12:26:48

@user-17da3a This should of course not happen. Would you be able to share the recording with [email removed] so we can take a look at what went wrong?

user-17da3a 09 March, 2021, 12:28:16

Sure, will do so.

user-5882af 09 March, 2021, 19:02:23

Hello, I have a couple of questions regarding analyzing Pupil Invisible data in Pupil Player. I am conducting an experiment that records two-hour chunks of gaze data from the Pupil Invisible, and I am also trying to analyze 16 surfaces with AprilTags. I am currently struggling to process the data at any reasonable speed: initial video processing on my personal computer takes 9+ hours since it has a pretty old processor, and my university computer cannot even load the file because it does not have enough RAM. When I add/move/adjust markers for surfaces, I have to wait 3-5 minutes while it processes the change. My main question is: which operating system is Pupil Player currently most optimized for, and what computer configuration works best for Pupil Player? I was not able to find any recommendations on the website. However, I could just be blind. If it just takes this long to analyze files this large, that is fine as well.

papr 09 March, 2021, 19:07:54

Hey, editing surfaces is indeed slower on Windows compared to MacOS or Linux. I recommend using Linux in this case.

user-5882af 09 March, 2021, 19:59:40

@papr Thank you, just to confirm pupil player effectively uses multiple cores of a CPU and is not limited to just one core per process it is completing?

papr 09 March, 2021, 20:03:15

It uses background processes to calculate the surface gaze mapping and heatmap. But initiating them takes longer on Windows than on Unix operating systems.

user-5882af 09 March, 2021, 20:04:57

Okay, Thank you!!

papr 09 March, 2021, 20:21:10

More RAM is more important than a lot of CPU cores, though.

user-0e7e72 10 March, 2021, 11:46:47

Good afternoon! I am experiencing an issue with Pupil Invisible. The connection to the OnePlus is unstable, so the video and gaze data feeds get interrupted. It seems to me that the problem is the connection cable rather than the PI connector. I then thought to get a simple USB-C replacement cable, but I read somewhere that this is not recommended. Why so? Thank you very much!

marc 10 March, 2021, 12:54:57

@user-0e7e72 We are maxing out the USB-C connection, and in our experience lower/medium quality cables often lead to unstable connections. If it is just about checking whether the connection remains stable for ~20 seconds, using other cables should be fine, but for longer time periods connection problems are somewhat expected when using other cables. If you think this is a hardware issue, please reach out to [email removed] so we can issue a repair or get you a new cable!

user-0e7e72 11 March, 2021, 11:04:50

Thanks marc! Are cables which meet PI specifications available on the market? If so, could you point me to one example?

user-26fef5 10 March, 2021, 13:04:02

@marc Hey Marc, no worries. I have some experience regarding V-SLAM and inertial pose estimation. As my last post mentioned, I would definitely recommend ORB-SLAM3 right now, because it features mono/stereo/RGB-D + inertial pose estimation and is, from my experience, sufficient for head pose estimation (depending on the application though; very fast movements might still be a problem). It is computationally expensive, yes, but I know from experience that several small machines are capable of dealing with this and give great results (e.g. Nvidia Jetson boards). I am not entirely sure if it has been implemented for Android phones yet, but this will happen sooner or later. Oh, and also: it is fully open source!

marc 10 March, 2021, 14:03:27

Okay, thanks a lot for the input! We'll keep looking into it!

user-0e7e72 10 March, 2021, 20:04:24

Hi again! When I use the Cloud API to download the data, the gaze data at 200 Hz is not available. Is there a way to obtain it through the API? Thanks! For reference, I am using the following code:

with open(last_recording.name + '.zip', 'wb') as f:
    f.write(api.download_recording_zip(last_recording.id, _preload_content=False).data)

papr 11 March, 2021, 07:37:56

Hey, our Cloud API development team deployed a fix and you should be able to retry without any issues. Should you encounter recordings that are missing the 200hz gaze data anyway, please contact info@pupil-labs.com with the recording ID.

papr 10 March, 2021, 20:07:08

I will forward your question to our Cloud API development team. 👍

user-17da3a 11 March, 2021, 15:07:01

Hi guys! I have a question regarding surface definition in the enrichment section of Cloud. As far as I understood, in this section one should first make sure that all markers are blue (i.e., included in the model). However, with all markers in the model, the lines that connect the edges are not defined automatically, so I have to draw the model visually and attach the lines myself, which seems unstable in some frames. 1. How reliable is a surface defined in this way? (When I look at the heatmaps, their edges do not look the same, although I tried to keep the surface identical in all defined sections.) 2. Would it be possible to define the surface once in Cloud for the whole recording and then apply it to all sections I define later? Thanks!

marc 11 March, 2021, 15:15:45

@user-0e7e72 The only other compatible cable I know of that was on the market is no longer being produced. You can however get additional cables from us. Please reach out to [email removed] for that!

marc 11 March, 2021, 15:32:25

Hey @user-17da3a! You are correct that all markers you want to include in the model should appear blue (instead of red) during definition. You can move around the corners of the defined surface to any location you like and the surface edges do not need to be aligned with the markers.

To get robust tracking of the surface you need to ensure two things: 1) The markers need to be "co-planar" with the surface you want to track, i.e. they can not be tilted at an angle to the surface or elevated from the surface. In your case the markers are placed on the wall you do the projection on, so that is co-planar!

2) At least ~3 markers need to be detected in every frame to ensure a stable localisation of the surface. In frames where just 1-2 markers are detected the localisation will be less accurate which would introduce noise.

If you create an enrichment based on a start and end event, you can easily calculate the enrichment on more recordings in your project by defining the same start and end events in all your recordings. If you define the same pair of start and end events multiple times within the same recording, you can also get multiple sections out of the recording using the same surface definition.

user-16e6e3 11 March, 2021, 16:11:52

Hi @marc! In the recordings @user-17da3a and I work with, at most 2-3 markers can be detected. In most frames so far, at least 2 markers are detected and the surface definition seems correct, i.e. it matches the projection space. When 1 or no markers are detected, it is of course quite inaccurate. Is there an automated way to extract/exclude the frames with 1 or no detected markers when we export the gaze data as CSV? This way we would only analyze gaze in frames with at least 2 detected markers, so that the surface definition fits our projection.

papr 11 March, 2021, 16:59:41

Hey, would it be possible to share one of your surface exports from Cloud which exhibits this issue? I might have an idea on how to filter the data even if the marker information is not available at the moment. (Can be shared privately as well. No videos or gaze related data necessary.)

marc 11 March, 2021, 16:54:08

The exports of the Surface Tracker currently do not include which markers were detected or how many of them, so automated filtering of that is unfortunately not easily possible.

marc 11 March, 2021, 16:57:07

This is a valid feature request though and I will forward this to our engineering team.

papr 12 March, 2021, 13:08:38

@user-16e6e3 I cannot give an estimate for that yet.

user-16e6e3 11 March, 2021, 17:13:43

Thanks @marc! That would be great. @papr, sure, attached is an example surface export from Cloud.

gaze.csv

papr 11 March, 2021, 18:09:13

My idea was to plot the AOI corner coordinates over time and to detect outliers using a median absolute deviation (MAD) approach. Ideally, the outliers would be present for all 4 corners at the same time. But unfortunately, most of the time the AOI corners are not visible in the scene frame.

Note: The corner names (per row) might not be correct. I was not sure about the corner order.

Chat image
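
For illustration, a rough sketch of the MAD-based filter I had in mind (the 3.5 cutoff and 0.6745 scaling follow the common modified z-score convention; applying it per corner coordinate column and intersecting the flags across corners is an assumption about the aoi_positions export):

import numpy as np

def mad_outliers(values, threshold=3.5):
    # Flag samples whose modified z-score exceeds the threshold.
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        return np.zeros(values.shape, dtype=bool)
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

# Apply this to each corner coordinate column and keep only rows
# flagged as outliers for all four corners at the same time.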

user-16e6e3 11 March, 2021, 17:13:44

aoi_positions.csv

user-16e6e3 11 March, 2021, 17:13:45

sections.csv

user-16e6e3 12 March, 2021, 12:55:06

@papr thanks for trying it out! So if the AOI corners are not visible in the scene frames, then the surface is not defined properly and we cannot use it for our gaze data? The way our experiment is set up, participants sit quite close to the projection space (because we want to elicit head movements as well), so the projection (which should ideally match the defined surface) is not fully visible in each frame, only a part of it and a part of the surrounding markers. Wondering if that's a problem for surface definition?

papr 12 March, 2021, 12:59:17

No, there is a slight correction I have to make. The surface can be perfectly valid even if none of its corners is visible in the scene frame. I had hoped that the corner location signal was "clean" enough in your case such that one could detect outliers easily. But given your setup, it is totally expected that the corners are outside of the scene video frame. And that is fine as long as sufficient markers are visible. You will need access to the number of detected markers to correctly filter the outliers.

user-16e6e3 12 March, 2021, 13:06:44

Alright, thanks for clarifying! How soon could this new feature on the number of detected markers per frame be implemented? Just so we can plan the study better.

user-94f03a 15 March, 2021, 07:53:15

Hi, is there a way to improve calibration after the end of the recording on the pupil invisible, on the cloud or the pupil player?

user-82e5bd 15 March, 2021, 08:47:17

I have a question.

I want to use this file and put it in a folder called pupil_player_settings and then in the plugin folder, but I cannot find it in my Pupil Player workspace (I know this is an old version). From that I also have the question: where can I find the new custom offset (because I have no Python, only R), and how can I launch the Player again so that it works?

custom_offset.py

user-17da3a 15 March, 2021, 09:01:54

Hey guys, I am trying to load a recording into Pupil Player but I am receiving an error that says "Cannot load file containing pickled data when allow_pickle=False", as you can see on the screenshot. What does this error mean? I have loaded this recording into Player before and it was fine. Why is it not working now? Thanks a lot!

Chat image

papr 15 March, 2021, 09:04:42

Could you please post a list of the files included in this recording?

user-17da3a 15 March, 2021, 09:11:32

Sure, Here is the list of all files included in the folder.

Chat image

papr 15 March, 2021, 09:13:20

Could you please repeat that for the offline_data folder?

user-17da3a 15 March, 2021, 09:21:02

Chat image

user-17da3a 15 March, 2021, 09:21:05

Chat image

papr 15 March, 2021, 09:31:07

Apologies for being unclear. The first way you screenshotted it was fine; I would like you to repeat the same method for the contents of the offline_data folder that is listed in this recording.

It looks like the software is having issues reading the marker_detection_timestamps.npy file. Please delete it and the marker_detection.pldata file. Afterward, open the recording in Player and rerun the marker detection.

user-17da3a 15 March, 2021, 09:37:17

I did so and it loaded this time. Thanks a lot!

user-17da3a 15 March, 2021, 09:34:29

Oh, sorry, I see now. Please find it attached.

Chat image

papr 15 March, 2021, 09:36:21

All expected files are there. Might have been an issue when writing the marker detection cache to disk.

user-0e7e72 15 March, 2021, 10:44:21

Good morning! I am using code to send events to Pupil Companion as described in https://docs.pupil-labs.com/developer/invisible/#event-network-interface I want to check whether the timestamps of the events are well synchronised with the gaze data. If I understand correctly, this should be explained in https://docs.pupil-labs.com/developer/invisible/#time-synchronization , but I do not understand what happens in point 3, "Compare the timestamp in the echo with your target time while taking into account the round-trip-time.", or, as in the code, time_diff = (event_time_received + event_time_send) / 2 - event_time_phone, which gives me an output of about 2 seconds. Thank you!

marc 15 March, 2021, 11:09:42

You can add the following custom plugin to Player, which introduces post-hoc offset calibration (you can add plugins by copying them into the plugin folder inside the pupil_player_settings folder). This is currently the only way to add post-hoc "calibration": https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433

user-94f03a 15 March, 2021, 12:27:35

thanks Marc!

user-fb5b59 15 March, 2021, 12:44:51

Hey guys! Recordings saved on the mobile device should have 30 FPS for the world video, am I right? Is there any possibility that this changes to 20 FPS at night time (dark environment)?

marc 15 March, 2021, 12:46:21

@user-fb5b59 Yes, in very dark environments the framerate will go below 30 FPS. This is because the required exposure time in dark environments is actually > 1/30 sec.

user-fb5b59 15 March, 2021, 12:57:14

Thanks for the fast response!

marc 15 March, 2021, 12:56:45

@user-0e7e72 This line is trying to check whether the phone and the computer are both accurately synced to UTC. We assume that the phone receives the event notification roughly in the middle between sending the event and receiving the answer on the desktop machine. The phone saves its timestamp in event.timestamp, thus the calculated time_diff value should be very low (<<1 sec). If it is 2 seconds in your case, this indicates that either the phone or the computer is not properly synced to UTC. You can trigger synchronization by disabling and re-enabling automated time sync in the OS settings. For Android you can find the instructions in the section "Forcing NTP timesync" here: https://docs.google.com/document/d/16JpIUUXNQvJ74FqfVJI6PAUUAV2nrbcCpdYamrJif_M/edit#heading=h.8qa9c4k2uxa7 For the computer it depends on the OS you use, but you will find it in the time settings. The devices need to be connected to the internet during the synchronization (no internet is required while recording afterwards).
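
To make the round-trip reasoning concrete, a small sketch of the check (variable names mirror the snippet you quoted; how you obtain event.timestamp from the echo depends on your setup):

def estimate_clock_offset(event_time_send, event_time_received, event_time_phone):
    # All three timestamps are seconds since the Unix epoch (UTC).
    # The phone should handle the event roughly halfway through the round trip,
    # so this difference estimates the computer-to-phone clock offset.
    return (event_time_received + event_time_send) / 2 - event_time_phone

# e.g. estimate_clock_offset(1615806000.00, 1615806000.10, 1615806001.95) -> about -1.9 s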

user-0e7e72 15 March, 2021, 15:07:52

Thank you marc, very clear! I re-triggered the sync on both the OnePlus and macOS but did not gain much; now I am at 1.78 seconds. I tried to change the sync server on macOS from Apple to Google, but the result is the same. Unfortunately, there is no way to change the time server in Android.

marc 15 March, 2021, 15:09:43

Okay. I'll check with the engineering team what might be the issue here!

marc 16 March, 2021, 12:24:55

@user-0e7e72 After discussing this with the engineering team: one possible reason could be that the two devices use different NTP servers to sync with, and those servers might not be perfectly in sync with each other. On macOS it is easy to specify which NTP server should be used, so you can try to set it to what Android is/should be using. Could you repeat the test after setting the NTP server on macOS to one of the following values, depending on your region?

Asia
NTP_SERVER=asia.pool.ntp.org

Europe
NTP_SERVER=europe.pool.ntp.org

North America
NTP_SERVER=north-america.pool.ntp.org
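
On macOS you can also set this from the terminal, e.g. with sudo systemsetup -setnetworktimeserver europe.pool.ntp.org (requires admin rights; pick the server matching your region).
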
user-0e7e72 16 March, 2021, 14:11:03

Thanks marc! I have set up the NTP server to Europe and now I get about -0.1 seconds

user-fb5b59 16 March, 2021, 17:13:51

Is it possible to get absolute timestamps of a processed recording? In the "world_timestamps.csv" there are only relative ones, am I right?

user-94f03a 17 March, 2021, 13:26:54

@user-0e7e72 and @marc I think it used to be possible to set up a 'local' NTP server (I used that in the past), and a quick search shows it is still feasible on Windows (I don't know about Mac). You could set one of the two devices as the NTP server so that all the latency is 'within the room'.

papr 17 March, 2021, 13:39:02

On macOS, you can simply set the NTP server address in the user settings. How are you setting the NTP server for Android?

marc 17 March, 2021, 13:31:02

@user-94f03a Thanks for the tip! That should indeed improve the synchronization further if needed. I am not sure how difficult it would be to specify the NTP server Android should be using, as this is not settable through the UI.

marc 17 March, 2021, 13:34:14

@user-fb5b59 The timestamps in the original raw recording data are saved as UTC timestamps. Within the next ~2-3 weeks we will release an update to Pupil Cloud which will allow you to export recordings in CSV format, including those timestamps, from Cloud. Until then, you could try to read the raw recording data directly. See this Python gist for an example of how to do that: https://gist.github.com/marc-tonsen/d230301c6043c4f020afeed2cc1f51fe
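
As a minimal sketch of reading one of the raw timestamp files directly (this assumes the "<sensor> ps1.time" files store UTC timestamps as uint64 nanoseconds; treat the gist above as the authoritative reference):

import numpy as np
from datetime import datetime, timezone

# Read the raw scene camera timestamps from a recording folder.
ts_ns = np.fromfile("PI world v1 ps1.time", dtype="<u8")  # nanoseconds since the Unix epoch (assumed)
ts_s = ts_ns / 1e9
print(datetime.fromtimestamp(ts_s[0], tz=timezone.utc))  # wall-clock time of the first frame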

user-94f03a 17 March, 2021, 13:51:42

@papr It's been a while since I did that, but from looking at my notes, you create a "local" Wi-Fi network from the laptop, then connect to it, and specify the IP of the laptop as the NTP address. It's been a while since I wrote this little app to do that in order to compute the time offset between an Android and a Windows laptop: https://github.com/pmavros/insynch – feel free to fork it of course! It was inspired by LSL anyway, which could be another way to solve this and integrate with https://github.com/sccn/labstreaminglayer/ (maybe you already do? not sure).

papr 17 March, 2021, 13:57:03

Does inSynch change the Android's clock? That is what would be necessary to sync Pupil Invisible Companion to the NTP server. I was wondering if there was a system setting which allows users to manually set NTP servers, but it looks like one needs specialised apps for that.

LSL uses its own time synchronization system. The Pupil Invisible LSL integration assumes synchronized clocks of the Android and LSL devices. Therefore, their timesync is not really a solution for this issue (given the current integration).

user-94f03a 17 March, 2021, 14:00:51

Feature request 1: when exporting a recording from the cloud, also export any annotations/events done manually? Event annotation is so much easier in the Cloud than in Pupil Player for now, as far as I can tell. I can see ourselves doing some basic annotation in Cloud (e.g. task start, task end).

Feature request 2: Pupil Player caches all the surfaces detected with the surface mapper, but the cache is not (human) readable. In some setups it would be useful to have access to it as CSV or plain text. (I can elaborate if that's any use.)

papr 17 March, 2021, 14:02:31

re 2) what files/data do you have in mind that is not being exported as a CSV by Player?

marc 17 March, 2021, 14:02:43

Regarding feature 1, I have good news: the release we'll have in ~2-3 weeks that I mentioned just above for CSV downloads from Cloud will also include event annotations! 👍

user-94f03a 17 March, 2021, 14:05:46

No, that was not possible (at least for me back then), but what you could do is probe the NTP server on the laptop side multiple times and compute the time offset, then subtract the offset from the Android timestamps as a separate process.

papr 17 March, 2021, 14:31:39

In this case, just for clarification for other users reading this, inSync does not help with improving the time sync. After talking to our Android development team, it does not look like it is possible to manually select a NTP server for system clock sync on Android (neither as an app or as a user). Therefore, one has to sync all other devices to the same NTP server that the Android device is using. See this message for reference https://discord.com/channels/285728493612957698/633564003846717444/821358368101761064

That said, testing the time sync is a best practice and should be done before any experiment. This can be achieved with inSync or the script linked in the documentation: https://docs.pupil-labs.com/developer/invisible/#time-synchronization

user-94f03a 17 March, 2021, 14:06:58

square_marker_cache, surface_definitions_v01

e.g. if I want to avoid defining all the surfaces manually. Also, having access to when the markers appear in view is useful regardless of whether the person looked at them (e.g. for the timing of events in an experiment, presence of info in the visual field, etc.)

Conversely, adding definitions from a script (essentially a human-readable/editable 'surface_definitions_v01') could be useful too.

papr 17 March, 2021, 14:22:06

I can see the need for exporting detected markers. I noted this down as a feature request. Surface definitions are usually only necessary once per project and can be shared between recordings. They also only include the defining marker IDs and their relative positions in surface coordinates. I am not sure how this would be helpful for further processing. Also, even if they were exported, they would not help you define new ones.

Surface locations are already exported for every frame in which they were detected, "regardless of whether the person looked at them".

user-94f03a 17 March, 2021, 14:25:48

Indeed they are exported, but only after defining them. Defining a few surfaces is fine if you only have a few objects in a scene (for example a blackboard), but they can be very powerful if the markers are on the stimuli (that's our approach). This way we can tell when users are looking at the screen and which stimulus was being presented. But with manual definitions it is a bit cumbersome. Luckily, for now we only have a few objects, but scripted definitions of surfaces could enable new paradigms.

papr 17 March, 2021, 14:56:02

Apologies, this is of course a valid use case. Unfortunately, not one that Player was designed for. Player uses the intermediate files only as a cache. While it is running it keeps and modifies the definitions in memory. On shutdown, the current state is written into the file, overwriting the previous state. To apply changes programmatically, one has to apply them while Player is not running.

The file is a binary file on purpose to increase writing/loading speeds and to ensure people modifying it know what they are doing. Errors in the file can cause Player to crash as it expects a correct format and does not do additional error checking. This is a necessary tradeoff to keep the software maintainable.

Nonetheless, the binary format is open source and the file can be edited programmatically if necessary. You can read and write it using this code https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L38-L64 This is how Player processes the content of this file https://github.com/pupil-labs/pupil/blob/668ceeb0594c674ffc6e113c60f0231c72208999/pupil_src/shared_modules/surface_tracker/surface_file_store.py#L81-L109 (beware this is comparably complex, as we had to add multiple abstraction layers to support the different surface definition formats that developed over time.)

An alternative would be to use our surface-tracker Python module: https://github.com/pupil-labs/surface-tracker It implements the complete surface tracker pipeline. See this example that shows each step one-by-one https://nbviewer.jupyter.org/github/pupil-labs/surface-tracker/blob/master/examples/visualize_surface_in_image.ipynb (I recommend running this locally to see the intermediate cell outputs) (Player does not make use of this library as it does not support legacy surface definition formats, etc)
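
As a rough sketch of editing the definitions while Player is closed (this assumes you have pupil_src/shared_modules on your PYTHONPATH so that file_methods is importable, and that your recording uses the surface_definitions_v01 file name):

from file_methods import load_object, save_object

path = "path/to/recording/surface_definitions_v01"
definitions = load_object(path)  # msgpack-backed dict with the surface entries
# ... inspect/modify `definitions` here; keep a backup, as a malformed file can crash Player ...
save_object(definitions, path)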

user-94f03a 17 March, 2021, 15:00:31

Thanks @papr! We are using the surface-tracker already and it works well; it is just a matter of generating all the definitions and porting them across (I guess the Pupil Cloud enrichment aims to resolve that, but there too, having many separate surface definitions could be cumbersome). I will have a closer look at the materials you sent.

user-29d6bb 22 March, 2021, 07:46:15

Hi, we've been using two Invisibles for several months to record video, gaze, and audio. We've been having trouble, with both devices, with audio-video sync in the mp4 files -- both the original world.mp4 ones recorded on the phone and the ones that are loaded and rendered by the pupil player. We've tried viewing the mp4s in Quicktime, VLC, and Premiere Pro, and they all seem to be out of sync (although the synchronization seems to be different in different players). Are we doing something wrong? Thank you!

marc 22 March, 2021, 09:08:26

Hi @user-29d6bb! Sorry to hear that! Would you be able to share an example recording that shows the issue with [email removed] so we can have a look?

user-29d6bb 22 March, 2021, 16:06:41

Thank you @marc ! We've mostly noticed it in long (~1 hr) recordings -- is it okay to send data that big? Also I'm not sure if this is related, but we've had multiple cases recently where the video files seem to be corrupted -- when we try to load into the Pupil Player, we get a "missing moov atom" error, and the mp4 files can't be read by VLC or quicktime either. I deleted and reinstalled the Invisible Companion app and thought that had fixed the problem, but maybe it's causing the audio synchronization problem also?

marc 22 March, 2021, 16:19:52

@user-29d6bb Would you be able to upload the recording somewhere, such that you can share a link with us? We are aware of an issue that existed in older versions of the app, that could cause the "missing moov atom" error. What version of the app have you been using to record the affected recordings? (this info can be found in the included info.json file).

user-29d6bb 22 March, 2021, 16:25:05

Thank you! I'll send a link. For the version, it looks like it's 1.1.4-prod

marc 22 March, 2021, 16:30:26

Thanks @user-29d6bb! In order to not share the recording publicly you can share the link with [email removed]

Just to clarify, 1.1.4-prod is the version that caused the moov atom issue not the audio/video sync problem, right?

marc 22 March, 2021, 16:34:35

@user-29d6bb In that case it would be great if you could share a recording featuring the moov atom issue as well.

user-29d6bb 22 March, 2021, 17:13:04

Thank you @marc! I sent an email with two recordings that have the audio/video sync problem (taken at the same time with two devices). The sync problem happened with 1.1.4-prod. The missing moov atom problem happened with 1.1.1-prod. I'll try to find one to send you. Thank you!

marc 22 March, 2021, 17:20:47

Thanks @user-29d6bb! We'll take a look and get back to you asap!

user-17da3a 25 March, 2021, 12:06:27

Hi guys, I have some questions regarding surface definition via Pupil Player. • What is the measurement unit for the width and height of the surface? In mm or cm? • There is a triangle in the middle of the surface; does its upper angle point to the top of the surface? • How can I apply an already defined surface via Player to the rest of the recordings? Thanks!

marc 25 March, 2021, 12:44:41

Hi @user-17da3a! You can pick the unit you prefer for the surface size. The results are given in normalized coordinates (so with no unit) as well as scaled to the size you have defined. The scaled version will be in whatever unit you have specified the size in.

Yes, the triangle is pointing upwards and defines the orientation of the surface definition like that.

To use a pre-defined surface from one recording in another you need to copy the surface_definitions file from the first recording to the other.
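
For example (paths are placeholders, and the file may be named surface_definitions_v01 depending on your Player version):

import shutil

# Re-use the surfaces defined in recording_A for recording_B (run this while Player is closed).
shutil.copy("recording_A/surface_definitions", "recording_B/surface_definitions")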

user-5e8fad 26 March, 2021, 10:28:18

Hi everyone! We recently bought an Invisible device. Although our recordings are being uploaded from the smartphone to the Cloud, we cannot access Cloud via the PC. The sign-in via Google account doesn't seem to work. Is there any known issue?

marc 26 March, 2021, 11:44:31

Hi @user-5e8fad! We are not aware of any issues! Could you send me a DM with the email address of the Google account you are using?

user-1391e7 29 March, 2021, 12:00:45

hello hello!

user-1391e7 29 March, 2021, 12:01:52

I'm not sure if I'm having a hardware or software issue, maybe you have encountered the error message before

user-1391e7 29 March, 2021, 12:02:08

when using the pupil invisible glasses in combination with the companion app

user-1391e7 29 March, 2021, 12:02:52

I'm not getting any errors prior to recording, but once I hit save, I get a recording error that prompts me to reconnect the glasses

user-1391e7 29 March, 2021, 12:03:25

OTG is enabled, both world cam & eye tracking sensor indicators light up in the app

user-1391e7 29 March, 2021, 12:09:39

after I hit save everything seems fine for a moment, then I'm getting the Recording failure

user-1391e7 29 March, 2021, 12:10:13

the message is "We experienced a problem during recording! Please stop recording, disconnect PI and connect it again! Sensor: PI left v1"

marc 29 March, 2021, 12:14:54

hello @user-1391e7! So you get this message after hitting save, while no recording is going on anymore? Do you have any trouble playing the affected recordings in Pupil Cloud or Pupil Player? The message you receive should in theory only appear if there was an error with one of the sensors while recording. If this happens after the recording has technically finished this might be a software bug.

user-1391e7 29 March, 2021, 12:16:25

I can play the recordings, but the gaze doesn't show up, since the left eye data seems to be missing from the recording

user-1391e7 29 March, 2021, 12:17:11

it sounds like a hardware issue, but I'm not 100% sure in regards to the timing

marc 29 March, 2021, 12:20:20

@user-1391e7 When you say the left eye data is missing, do you mean the PI left v1 ps1.mp4 file is missing entirely from the recording folder, or is the file empty/unreadable?

user-1391e7 29 March, 2021, 12:22:34

PI left v1 ps1.mjpeg & PI left v1 ps1.time

user-1391e7 29 March, 2021, 12:22:47

they exist, but have nothing inside

user-1391e7 29 March, 2021, 12:26:29

I just tested it on the second phone we bought; it happens there as well, same behaviour. Makes me think a hardware issue is more likely.

user-1391e7 29 March, 2021, 12:28:52

two different versions of the app are installed on the phones:

user-1391e7 29 March, 2021, 12:29:00

0.8.24-prod

user-1391e7 29 March, 2021, 12:29:02

1.1.4-prod

user-1391e7 29 March, 2021, 12:46:15

sadly I wasn't present when the recordings went from fine to not fine. there aren't any visible damages on either the cable or the glasses, the colleagues who performed the recordings didn't mention any incidents either

user-1391e7 29 March, 2021, 12:50:01

I could uninstall and reinstall the companion app, but that may remove the ability to reproduce the error, in case it isn't hardware related

marc 29 March, 2021, 12:56:58

@user-1391e7 Indeed this does sound more like a hardware issue then and we should initiate a repair. You can try to reinstall the app but I doubt this will fix the problem in this case. Please reach out to [email removed] to initiate the repair mentioning the serial of the device (you can find it at the tip of the left arm of the glasses) and your address!

user-1391e7 29 March, 2021, 12:58:52

thank you, will do!

End of March archive