Hi! I could not install pupil invisible companion on my Samsung note 10 phone. It said "Your device isn't compatible with this version". Is there any version that is compatible with my phone?
@user-b10192 Invisible Companion works only on the OnePlus 6 (Android 8/9), OnePlus 8, and OnePlus 8T (both on Android 11).
I checked my phone android version and it is Android version 11. Please find attached screenshots.
Thanks
No other device is supported.
@user-b10192 as I said above we don't support Samsung. Only the OnePlus models I listed above.
Gotcha.
Hello, after downloading Pupil Player, the computer says the file is damaged. I repeated the download twice. It is a macOS 10.13.6 High Sierra machine. I cannot find anything here on Discord, and I find no software limitations on the site. What is wrong?
Hi, our latest release no longer supports MacOS High Sierra. Please use Pupil 3.0 instead https://github.com/pupil-labs/pupil/releases/v3.0
Thank you for the link!
I recommend upgrading to macOS 10.15 Catalina if possible though. There is a lot of software out there that no longer supports macOS High Sierra. On the other hand, there are many applications that do not have full macOS 11 Big Sur support yet. Personally, I think Catalina is the best option compatibility-wise at the moment.
Hi, I have two questions about the Pupil Invisible glasses. 1. If I am correct, the IMU doesn’t contain a magnetometer. Did you have a specific reason not to include a magnetometer in the IMU? 2. The Pupil Core software shows the confidence of the pupil detection. Is it also possible to get the confidence of the pupil detection with Pupil Invisible?
Hi @user-2d66f7!
1) Correct, the IMU does not include a magnetometer. There was no specific reason to not include it and we might consider changing that in future hardware iterations. There is no concrete plan as of now though.
2) The gaze estimation pipeline of Pupil Invisible is an "end-to-end" machine learning approach and does not include an explicit pupil detection step. Accordingly there is no confidence value for pupil detection results. There is no confidence value for the gaze estimation quality as a whole either, if that is what you were ultimately interested in!
Hi @marc , thanks for your response. Is it possible to get the pupil invisible with magnetometer on request? Or should we wait for changes in the future?
Hi, there’s a red light flashing on the right side of the Pupil Invisible inner cameras. It’s also failing to record because it can’t detect the inner cameras. Anyone know what causes this?
@user-be55fc send an email to info@pupil-labs.com with a screenshot of the error dialogue and we can assist.
@user-2d66f7 Sorry for the delayed reply! Unfortunately we cannot accommodate a custom IMU upon request. You can of course wait for it, but I do not expect anything to become available before the end of the year!
Sorry for my late reply. Thank you for the information!
@user-2d66f7 Sorry to hijack the question again, but you might want to try a visual-inertial odometry or SLAM approach (maybe ORB-SLAM3), using the world camera and IMU from the Invisible glasses?
Thanks for the tip! I will look into it
Sorry, forgot to answer! We already have some experience with SLAM approaches and will actually release a tool for AOI tracking without markers very soon. The problem with that technology in our experience is that it is either very computationally intensive or not sufficiently robust. The scene video often features fast rotational movements which is usually a problem. But we are actively looking at those technologies and try to make them work in an eye tracking context. If you have further references you think we should look at I would be curious to check them out!
Hi @marc, I am trying to run pupil-invisible-monitor from source on Windows, but the following error appears:
(pupil_invisible) E:\pupil\pupil_invisible\pupil-invisible-monitor>pupil_invisible_monitor
Traceback (most recent call last):
  File "e:\anaconda\envs\pupil_invisible\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "e:\anaconda\envs\pupil_invisible\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\Anaconda\envs\pupil_invisible\Scripts\pupil_invisible_monitor.exe\__main__.py", line 4, in <module>
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\pupil_invisible_monitor\__main__.py", line 11, in <module>
    from .models import Host_Controller
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\pupil_invisible_monitor\models.py", line 4, in <module>
    import ndsi
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\ndsi\__init__.py", line 24, in <module>
    from ndsi.formatter import DataFormat
  File "e:\anaconda\envs\pupil_invisible\lib\site-packages\ndsi\formatter.py", line 10, in <module>
    from ndsi.frame import JPEGFrame, H264Frame, FrameFactory
ImportError: DLL load failed:
I have followed the method in https://github.com/pupil-labs/pupil-invisible-monitor/issues/24, but it still did not solve the problem.
Is it safe to use the Pupil Invisible glasses on children?
@user-32f69e We have found that Pupil Invisible works well for ages 3 and up. We recommend using a head strap to ensure a secure fit, particularly during more dynamic movements
@user-a98526 Please see this conversation for reference: https://discord.com/channels/285728493612957698/446977689690177536/796747481693028443 If you have further questions regarding running the software from source, please post them to software-dev
Hi @papr, I found that while using Invisible, gaze points are sometimes still generated even when the user is not wearing Pupil Invisible. Is there a confidence value, similar to Pupil Core's, that can be used to determine whether a user is wearing Pupil Invisible? There is a worn ps1.raw file in the recording format. Can it be used to determine whether there is a user? If so, how can I get this data?
Hello! I have two questions regarding the PI: 1) is the tracking robust in users wearing contact lenses? in other eye-tracking systems based on corneal reflection, wearing contact lenses (especially those with cylindric correction) can distort the data and/or introduce heavy jitter. How does the PI handle this?
2) does the tracking algorithm account for ocular dominance? Usually, when performing monocular tracking, the dominant eye is preferred, but since the PI has effectively a combined binocular tracking, I was wondering if it is able to tell which eye is the dominant one thus weighing it differently.
Hi @user-a350f5 ! 1) The predictions are not affected by contact lenses. We have a small section on this in our white paper: https://arxiv.org/pdf/2009.00508 (see Figure 5 A) 2) There is no model for ocular dominance used in the gaze pipeline of Pupil Invisible or Core.
1) That's great! For future testing you might want to consider comparing different types of lenses, as spherical correction (i.e. nearsightedness) doesn't have nearly the same impact as cylindrical (i.e. astigmatism)
2) Cool, thanks for letting me know 🙂
@user-a98526 The "worn" data does is a boolean timeseries that indicating if the glasses are worn or not, but the estimate is not perfect but works most of the time. In the app this estimate is used to not show a gaze point in live preview while the headset is not worn. The app will still publish "best guess" gaze data though. You can read the worn data files in Python with np.fromfile(file_path, dtype="uint8")
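For reference, a minimal sketch of reading that file based on the np.fromfile tip above (the file name and the "non-zero means worn" encoding are my assumptions, not confirmed specifics):

```python
import numpy as np

def load_worn(file_path):
    """Read a Pupil Invisible 'worn ps1.raw' file as a boolean array.

    Assumption: one uint8 per sample, with a non-zero value meaning
    "worn" according to the app's estimate.
    """
    raw = np.fromfile(file_path, dtype="uint8")
    return raw > 0
```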
@user-a98526 to clarify, to my knowledge, the worn data is not being streamed in realtime.
@user-a350f5 Thanks for the info! We will try to keep that in mind! 👍
@marc Is there a way to obtain this data in real time?
Okay, thank you for your notice.
Hi guys, I have a question regarding the Pupil Invisible scene camera. I recorded a video which lasted 15 minutes. I am actually quite sure I saved the recording properly and had both the scene camera and eye sensors activated. However, while the recording details show a duration of 15 minutes, the scene video was recorded only for the first 4 seconds. Please let me know what the reason could be and whether the video can be retrieved, if it exists at all. Thanks!
Hi @user-17da3a! If the scene camera was not properly connected, it would in theory be possible to make a recording of that duration without the scene camera recording. One way to tell whether the world video was recorded but the file is somehow corrupted is to check the size of the world video file. For a 15 min recording, the PI world v1 ps1.mp4 file in the folder should be at least several MB in size, whereas a real 4 s recording should be much smaller.
The circular animation in the Companion App should also visualize to you if both the eye cameras and world video camera are successfully recording.
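For anyone who wants to automate that file-size sanity check, here is a rough sketch; the MB-per-minute threshold is my own assumption for spotting obviously truncated files, not an official figure:

```python
import os

def looks_truncated(video_path, duration_s, min_mb_per_min=1.0):
    """Heuristic: flag a scene video whose file size is implausibly
    small for the claimed recording duration.

    `min_mb_per_min` is an assumed lower bound on the bitrate; tune it
    for your own recordings.
    """
    size_mb = os.path.getsize(video_path) / 1e6
    return size_mb < (duration_s / 60) * min_mb_per_min
```

For example, a 4 MB file for a claimed 15-minute recording would be flagged, while the same 4 MB for a genuine 4-second recording would not.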
Thanks! I have checked the size of the recording, which is 4 MB. Do you think I can access the video data? Should 4 MB be enough for 15 mins?
Unfortunately, if I cannot retrieve the scene video, and this happens again at any point in time without me being aware of it, I will lose all the data from my experiment. I also left the phone still on the table to be sure the attached cord was stable.
@user-17da3a This should of course not happen. Would you be able to share the recording [email removed] so we can take a look at what went wrong?
Sure, will do so.
Hello, I have a couple of questions regarding analyzing Pupil Invisible data in Pupil Player. I am conducting an experiment that records two-hour chunks of gaze data from the Pupil Invisible, and I am also trying to analyze 16 surfaces with AprilTags. I am currently struggling to process the data at any reasonable speed: initial video processing on my personal computer takes 9+ hours, as it has a pretty old processor, but my university computer cannot even load the file because it does not have enough RAM. When I add/move/adjust markers for surfaces, I have to wait 3-5 minutes while it processes the change. My main question is: which operating system is Pupil Player currently most optimized for, and what computer configuration works best for Pupil Player? I was not able to find any recommendations on the website. However, I could just be blind. If it simply takes this long to analyze files of this size, that is fine as well.
Hey, editing surfaces is indeed slower on Windows compared to macOS or Linux. I recommend using Linux in this case.
@papr Thank you, just to confirm pupil player effectively uses multiple cores of a CPU and is not limited to just one core per process it is completing?
It uses background processes to calculate the surface gaze mapping and heatmap. But initiating them takes longer on Windows than on Unix operating systems.
Okay, Thank you!!
More RAM is more important than a lot of CPU cores, though.
Good afternoon! I am experiencing an issue with Pupil Invisible. The connection to the OnePlus is unstable, so the video and gaze data feeds get interrupted. It seems to me that the problem is the connection cable rather than the PI connector. I then thought to get a simple USB-C replacement cable, but I read somewhere that this is not recommended. Why so? Thank you very much!
@user-0e7e72 We are maxing out the USB-C connection, and in our experience lower/medium quality cables often lead to unstable connections. If it is just about checking whether the connection remains stable for ~20 seconds, using other cables should be fine, but for longer time periods connection problems are somewhat expected with other cables. If you think this is a hardware issue please reach out to [email removed] so we can issue a repair or get you a new cable!
Thanks marc! Are cables which meet PI specifications available on the market? If so, could you point me to one example?
@marc Hey Marc, no worries. I have some experience regarding V-SLAM and inertial pose estimation. As my last post mentioned, I would definitely recommend ORB-SLAM3 right now, because it features mono-/stereo-/RGB-D + inertial pose estimation and is in my experience sufficient for head pose estimation (depending on the application though - very fast movements might still be a problem). It is computationally expensive, yes, but I know from experience that there are several small machines capable of dealing with this that give great results (e.g. Nvidia Jetson boards). I am not entirely sure if it has been implemented for Android phones yet, but this will happen sooner or later. Oh, and also - it is fully open source!
Okay, thanks a lot for the input! We'll keep looking into it!
Hi again! When I use the cloud API to download the data, the gaze data at 200hz is not available. Is there a way to obtain it through the API? Thanks! For reference I am using the following code:
with open(last_recording.name + '.zip', 'wb') as f:
    f.write(api.download_recording_zip(last_recording.id, _preload_content=False).data)
Hey, our Cloud API development team deployed a fix and you should be able to retry without any issues. Should you encounter recordings that are missing the 200hz gaze data anyway, please contact info@pupil-labs.com with the recording ID.
I will forward your question to our Cloud API development team. 👍
Hi guys! I have a question regarding surface definition in the enrichment section of Cloud. As far as I understand, in this section one should first make sure that all markers are blue (i.e., included in the model). With all markers in the model, the lines that connect the edges are not drawn automatically, and I have to draw the model visually and attach the lines myself, which seems unstable in some frames. 1. How reliable would a surface defined this way be? (When I look at the heatmaps, their edges don’t look the same, although I tried to keep the surface identical in all defined sections.) 2. Would it be possible to define the surface once on Cloud for the whole recording and then apply it to all sections I define later? Thanks!
@user-0e7e72 The only other compatible cable I know of that was on the market is no longer being produced. You can however get additional cables from us. Please reach out to [email removed] for that!
Hey @user-17da3a! You are correct that all markers you want to include in the model should appear blue (instead of red) during definition. You can move around the corners of the defined surface to any location you like and the surface edges do not need to be aligned with the markers.
To get robust tracking of the surface you need to ensure two things: 1) The markers need to be "co-planar" with the surface you want to track, i.e. they can not be tilted at an angle to the surface or elevated from the surface. In your case the markers are placed on the wall you do the projection on, so that is co-planar!
2) At least ~3 markers need to be detected in every frame to ensure a stable localisation of the surface. In frames where just 1-2 markers are detected the localisation will be less accurate which would introduce noise.
If you create an enrichment based on a start and end event, you can easily calculate the enrichment on more recordings in your project by defining the same start and end events in all your recordings. If you define the same pair of start and end events multiple times within the same recording, you can also get multiple sections out of the recording using the same surface definition.
Hi @marc ! In the recordings @user-17da3a and I work with max 2-3 markers could be detected. In most frames so far, at least 2 markers are detected and the surface definition seems correct, i.e. it matches the projection space. When 1 or no markers are detected, then it's quite inaccurate of course. Is there an automated way to extract/exclude the frames with 1 or no detected markers when we export the gaze data as csv? This way we would only analyze gaze in frames with at least 2 detected markers, so that the surface definition fits our projection.
Hey, would it be possible to share one of your surface exports from Cloud which exhibits this issue? I might have an idea on how to filter the data even if the marker information is not available at the moment. (Can be shared privately as well. No videos or gaze related data necessary.)
The exports of the Surface Tracker currently do not include which markers were detected, or how many of them, so automated filtering based on that is unfortunately not easily possible.
This is a valid feature request though and I will forward this to our engineering team.
@user-16e6e3 I cannot give an estimate for that yet.
Thanks @marc! That would be great. @papr, sure, attached is an example surface export from Cloud.
My idea was to plot the AOI corner coordinates over time and detect outliers using a median absolute deviation (MAD) approach. Ideally, the outliers would be present for all 4 corners at the same time. But unfortunately, most of the time the AOI corners are not visible in the scene frame.
Note: The corner names (per row) might not be correct. I was not sure about the corner order.
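The MAD-based outlier idea described above can be sketched like this; the 3.5 cutoff on the modified z-score is a common convention, and applying it per corner coordinate is my assumption about how one would use it here:

```python
import numpy as np

def mad_outliers(x, thresh=3.5):
    """Flag outliers in a 1-D signal via the median absolute deviation.

    Returns a boolean mask (True = outlier) using the modified z-score
    0.6745 * (x - median) / MAD compared against `thresh`.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        # Constant signal: no spread, nothing to flag.
        return np.zeros(x.shape, dtype=bool)
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > thresh
```

One would run this over each corner coordinate column of the surface export and, ideally, keep only frames where no corner is flagged.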
@papr thanks for trying it out! So if the AOI corners are not visible in the scene frames, then the surface is not defined properly and we cannot use it for our gaze data? The way our experiment is set up, participants sit quite close to the projection space (because we want to elicit head movements as well), so not all of the projection (which should ideally match the defined surface) is visible in each frame, only a part of it and a part of the surrounding markers. I'm wondering if that's a problem for surface definition?
No, there is a slight correction I have to make. The surface can be perfectly valid even if none of its corners is visible in the scene frame. I had hoped that the corner location signal was "clean" enough in your case that one could detect outliers easily. But given your setup, it is totally expected that the corners are outside of the scene video frame. And that is fine as long as sufficient markers are visible. You will need access to the number of detected markers to correctly filter the outliers.
Alright, thanks for clarifying! How soon could this new feature on the number of detected markers per frame be implemented? Just so we can plan the study better.
Hi, is there a way to improve calibration after the end of the recording on the pupil invisible, on the cloud or the pupil player?
I have a question.
I want to use this and put it in a folder called pupil_player_settings and then plugin, but I cannot find it in my Pupil Player workspace (I know this is an old version). Following from that, I also have the question: where can I find the new custom offset (because I don't have Python, only R), and how can I launch Player again so that it works?
Hey guys, I am trying to load a recording into Pupil Player but I am receiving an error that says: "Cannot load file containing pickled data when allow_pickle=False", as you can see on the screenshot. What does the error mean? I have loaded this recording into Player before and it was fine. Why is it not working now? Thanks a lot!
Could you please post a list of the files included in this recording?
Sure, Here is the list of all files included in the folder.
Could you please repeat that for the offline_data folder?
Apologies for being unclear. The first way to screenshot it was fine. I would like you to repeat the same method for the contents of the offline_data folder that is listed in this recording.
It looks like the software is having issues reading the marker_detection_timestamps.npy file. Please delete it and the marker_detection.pldata file. Afterward, open the recording in Player and rerun the marker detection.
I did so and it could be uploaded this time. Thanks a lot!
Oh, sorry, I see now. Please find it attached.
All expected files are there. Might have been an issue when writing the marker detection cache to disk.
Good morning! I am using code to send events to Pupil Companion as described in https://docs.pupil-labs.com/developer/invisible/#event-network-interface I want to check whether the timestamps of the events are well synchronised with the gaze data. If I understand correctly, this should be explained in https://docs.pupil-labs.com/developer/invisible/#time-synchronization , but I do not understand what happens in point 3, "Compare the timestamp in the echo with your target time while taking into account the round-trip-time.", or, as in the code, time_diff = (event_time_received + event_time_send) / 2 - event_time_phone, which gives me an output of about 2 seconds. Thank you!
You can add the following custom plugin to Player, which introduces post-hoc offset calibration (you can add plugins by copying them into the plugin folder inside of the pupil_player_settings folder). This is currently the only way to add post-hoc "calibration".
https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
thanks Marc!
Hey guys! Recordings saved on the mobile device should have 30 FPS for the world video, am I right? Is there any possibility that this changes to 20 FPS at night time (dark environment)?
@user-fb5b59 Yes, in very dark environments the framerate will go below 30 FPS. This is because the required exposure time in dark environments is actually > 1/30 sec.
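To make the arithmetic behind that explicit, the achievable framerate is bounded by the reciprocal of the exposure time (a hypothetical helper, not part of any Pupil Labs API, and it ignores sensor readout overhead):

```python
def max_framerate(exposure_s):
    """Upper bound on framerate when each frame needs `exposure_s`
    seconds of exposure (ignoring readout and processing overhead)."""
    return 1.0 / exposure_s
```

So, for instance, once the scene is dark enough to require a 50 ms exposure, the camera is capped at 20 FPS.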
Thanks for the fast response!
@user-0e7e72 This line is trying to check if the phone and the computer are both accurately synced to UTC. We assume that at the midpoint between sending the event and receiving the answer on the desktop machine, the phone should roughly be receiving the event notification. The phone saves the timestamp in event.timestamp, thus the calculated time_diff value should be very low (<<1 sec). If it is 2 seconds in your case, this indicates that either the phone or the computer is not properly synced to UTC.
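The midpoint check can be sketched as follows; `send_event_and_get_phone_time` is a hypothetical stand-in for whatever call sends the event to the Companion app and returns the phone's UTC timestamp for it:

```python
import time

def estimate_clock_offset(send_event_and_get_phone_time):
    """Estimate the offset between this machine's clock and the phone's
    clock using the midpoint of the round trip (all times in UTC seconds).

    If both clocks are well synced to UTC, the phone's timestamp should
    fall close to the midpoint, so the returned value should be << 1 s.
    """
    t_send = time.time()
    t_phone = send_event_and_get_phone_time()
    t_recv = time.time()
    return (t_send + t_recv) / 2 - t_phone
```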
You can trigger synchronization by disabling and re-enabling automated time sync in the OS settings. For Android you can find the instructions here, in the section "Forcing NTP timesync": https://docs.google.com/document/d/16JpIUUXNQvJ74FqfVJI6PAUUAV2nrbcCpdYamrJif_M/edit#heading=h.8qa9c4k2uxa7
For the computer it depends on the OS you use but you will find it in the time settings. The devices need to be connected to the internet during the synchronization (no internet required while recording afterwards).
Thank you marc, very clear! I re-triggered the sync on both the OnePlus and macOS but did not gain much; now I am at 1.78 seconds. I tried to change the sync server of macOS from Apple to Google but the result is the same. Unfortunately there is no way to change the time server on Android.
Okay. I'll check with the engineering team what might be the issue here!
@user-0e7e72 After discussing this with the engineering team: one possible reason could be that the two devices use different NTP servers to sync with, and those servers might not be perfectly in sync with each other. On macOS it is easy to specify which NTP server should be used, so you can try to set it to whatever Android is/should be using. Could you repeat the test after setting the NTP server on macOS to one of the following values, depending on your region?
Asia
NTP_SERVER=asia.pool.ntp.org
Europe
NTP_SERVER=europe.pool.ntp.org
North America
NTP_SERVER=north-america.pool.ntp.org
Thanks marc! I have set up the NTP server to Europe and now I get about -0.1 seconds
Is it possible to get absolute timestamps of a processed recording? In the "world_timestamps.csv" there are only relative ones, am I right?
@user-0e7e72 and @marc I think it used to be possible to set up a 'local' NTP server (I used that in the past), and a quick search shows it is still feasible on Windows (don't know for Mac). You could set one of the two devices as the NTP server so all the latency is 'within the room'.
On macOS, you can simply set the NTP server address in the user settings. How are you setting the NTP server for Android?
@user-94f03a Thanks for the tip! That should indeed improve the synchronization further if needed. I am not sure how difficult it would be to specify the NTP server Android should be using, as this is not settable through the UI.
@user-fb5b59 The timestamps in the original raw recording data are saved as UTC timestamps. Within the next ~2-3 weeks we will release an update to Pupil Cloud which will allow you to export recordings in CSV format including those timestamps from cloud. Until then you could try to read the raw recording data directly. See this Python gist for an example on how to do that: https://gist.github.com/marc-tonsen/d230301c6043c4f020afeed2cc1f51fe
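Building on that, a minimal sketch of reading the raw timestamps directly; it assumes (as I believe the gist does) that the raw `.time` files store little-endian uint64 nanoseconds since the Unix epoch:

```python
import numpy as np

def load_utc_timestamps(time_file_path):
    """Read a raw Pupil Invisible `.time` file (e.g. 'PI world v1 ps1.time').

    Assumption: little-endian uint64 nanoseconds since the Unix epoch (UTC).
    Returns timestamps in seconds as float64.
    """
    ns = np.fromfile(time_file_path, dtype="<u8")
    return ns / 1e9
```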
@papr It's been a while since I did that, but from looking at my notes, you create a "local" wifi network from the laptop, then connect to it, and specify the IP of the laptop as the NTP address. It's been a while since I wrote this little app to do that in order to compute the time offset between an Android device and a Windows laptop: https://github.com/pmavros/insynch - feel free to fork it of course! It was inspired by LSL anyway, which could be another way to solve this and integrate with https://github.com/sccn/labstreaminglayer/ (maybe you already do? not sure).
Does inSynch change the Android clock? This is what would be necessary to sync Pupil Invisible Companion to the NTP server. I was wondering if there was a system setting that allows users to manually set NTP servers. But it looks like one needs specialised apps for that.
LSL uses its own time synchronization system. The Pupil Invisible LSL integration assumes synchronized clocks on the Android and LSL devices. Therefore, their timesync is not really a solution for this issue (given the current integration).
re 2) what files/data do you have in mind that is not being exported as a CSV by Player?
Regarding Feature 1 I have good news: The release we'll have in ~2-3 weeks I mentioned just above for CSV downloads from cloud will also include event annotations! 👍
No, that was not possible (at least for me back then), but what you could do is probe the NTP server on the laptop side multiple times and compute the time offset; then subtract the offset accordingly from the Android timestamps as a separate process.
In this case, just for clarification for other users reading this, inSynch does not help with improving the time sync. After talking to our Android development team, it does not look like it is possible to manually select an NTP server for system clock sync on Android (neither via an app nor as a user). Therefore, one has to sync all other devices to the same NTP server that the Android device is using. See this message for reference https://discord.com/channels/285728493612957698/633564003846717444/821358368101761064
That said, testing the time sync is best practice and should be done before any experiment. This can be achieved with inSynch or the script linked in the documentation https://docs.pupil-labs.com/developer/invisible/#time-synchronization
square_marker_cache and surface_definitions_v01
e.g. if I want to avoid defining all the surfaces manually; having access to when they appear in view is useful regardless of whether the person looked at them (e.g. for the timing of events in an experiment, presence of info in the visual field, etc.)
The reverse, adding definitions from a script (essentially a human-readable/editable 'surface_definitions_v01'), could be useful too.
I can see the need for exporting detected markers. I noted this down as a feature request. Surface definitions are usually only necessary once per project and can be shared between recordings. They also only include the defining marker IDs and their relative positions in surface coordinates. I am not sure how this would be helpful for further processing. Also, even if they were exported, they would not help you define new ones.
Surface locations are already exported for every frame in which they were detected, regardless of whether the person looked at them.
Indeed they are exported, but only after defining them. Defining a few surfaces is fine if you only have a few objects in a scene (e.g. a blackboard), but they can be very powerful if the markers are on the stimuli (that's our approach). This way we can tell when users are looking at the screen and which stimulus was being presented. But with manual definitions it is a bit cumbersome. Luckily, for now we only have a few objects, but scripted definitions of surfaces could enable new paradigms.
Apologies, this is of course a valid use case. Unfortunately, not one that Player was designed for. Player uses the intermediate files only as a cache. While it is running it keeps and modifies the definitions in memory. On shutdown, the current state is written into the file, overwriting the previous state. To apply changes programmatically, one has to apply them while Player is not running.
The file is a binary file on purpose to increase writing/loading speeds and to ensure people modifying it know what they are doing. Errors in the file can cause Player to crash as it expects a correct format and does not do additional error checking. This is a necessary tradeoff to keep the software maintainable.
Nonetheless, the binary format is open source and the file can be edited programmatically if necessary. You can read and write it using this code https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L38-L64 This is how Player processes the content of this file https://github.com/pupil-labs/pupil/blob/668ceeb0594c674ffc6e113c60f0231c72208999/pupil_src/shared_modules/surface_tracker/surface_file_store.py#L81-L109 (beware: this is comparably complex, as we had to add multiple abstraction layers to support the different surface definition formats that developed over time.)
An alternative would be to use our surface-tracker Python module: https://github.com/pupil-labs/surface-tracker It implements the complete surface tracker pipeline. See this example that shows each step one-by-one https://nbviewer.jupyter.org/github/pupil-labs/surface-tracker/blob/master/examples/visualize_surface_in_image.ipynb (I recommend running this locally to see the intermediate cell outputs) (Player does not make use of this library as it does not support legacy surface definition formats, etc)
Thanks @papr! We are using the surface-tracker already and it works well; it is just a matter of generating all the definitions and porting them across (I guess the Pupil Cloud enrichment aims to resolve that, but there too, having many separate surface definitions could be cumbersome). I will have a closer look at the materials sent.
Hi, we've been using two Invisibles for several months to record video, gaze, and audio. We've been having trouble, with both devices, with audio-video sync in the mp4 files -- both the original world.mp4 ones recorded on the phone and the ones that are loaded and rendered by the pupil player. We've tried viewing the mp4s in Quicktime, VLC, and Premiere Pro, and they all seem to be out of sync (although the synchronization seems to be different in different players). Are we doing something wrong? Thank you!
Hi @user-29d6bb! Sorry to hear that! Would you be able to share an example recording that shows the issue with [email removed] so we can have a look?
Thank you @marc ! We've mostly noticed it in long (~1 hr) recordings -- is it okay to send data that big? Also I'm not sure if this is related, but we've had multiple cases recently where the video files seem to be corrupted -- when we try to load into the Pupil Player, we get a "missing moov atom" error, and the mp4 files can't be read by VLC or quicktime either. I deleted and reinstalled the Invisible Companion app and thought that had fixed the problem, but maybe it's causing the audio synchronization problem also?
@user-29d6bb Would you be able to upload the recording somewhere, such that you can share a link with us? We are aware of an issue that existed in older versions of the app that could cause the "missing moov atom" error. What version of the app have you been using to record the affected recordings? (This info can be found in the included info.json file.)
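If it helps, you can read the app version out of a recording programmatically. A minimal sketch below; the "app_version" field name is an assumption based on typical Invisible recordings, so check the keys in your own info.json. The demo writes a stand-in file so the snippet runs anywhere:

```python
import json
from pathlib import Path

# Demo info.json standing in for the one inside a real recording folder.
# The "app_version" key is an assumption; inspect your own file's keys.
recording = Path("demo_recording")
recording.mkdir(exist_ok=True)
(recording / "info.json").write_text(json.dumps({"app_version": "1.1.4-prod"}))

# Load the metadata and pull out the Companion app version.
info = json.loads((recording / "info.json").read_text())
print(info.get("app_version"))  # 1.1.4-prod
```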
Thank you! I'll send a link. For the version, it looks like it's 1.1.4-prod
Thanks @user-29d6bb! In order to not share the recording publicly you can share the link with [email removed]
Just to clarify, 1.1.4-prod is the version that caused the moov atom issue, not the audio/video sync problem, right?
@user-29d6bb In that case it would be great if you could share a recording featuring the moov atom issue as well.
Thank you @marc! I sent an email with two recordings that have the audio/video sync problem (taken at the same time with two devices). The sync problem happened with 1.1.4-prod. The missing moov atom problem happened with 1.1.1-prod. I'll try to find one to send you. Thank you!
Thanks @user-29d6bb! We'll take a look and get back to you asap!
Hi guys, I have some questions regarding surface definition via Pupil Player. • What is the measurement unit for the width and height of the surface? Is it mm or cm? • There is a triangle in the middle of the surface; does its upper angle point to the top of the surface? • How can I apply a surface already defined via Player to the rest of the recordings? Thanks!
Hi @user-17da3a! You can pick the unit you prefer for the surface size. The results are given in normalized coordinates (so with no unit) as well as scaled to the size you have defined. The scaled version will be in whatever unit you have specified the size in.
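To illustrate, going from the normalized surface coordinates to your chosen unit is just a multiplication by the size you entered (a small sketch; the surface size and gaze point below are made up):

```python
def norm_to_scaled(norm_x, norm_y, width, height):
    """Scale normalized surface coordinates (0..1) by the user-defined size."""
    return norm_x * width, norm_y * height

# If you defined the surface as 210 x 297 (say, an A4 sheet in mm),
# a normalized gaze point of (0.5, 0.5) lands at the sheet's center:
print(norm_to_scaled(0.5, 0.5, 210, 297))  # (105.0, 148.5)
```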
Yes, the triangle points upwards and thereby defines the orientation of the surface.
To use a pre-defined surface from one recording in another, you need to copy the surface_definitions file from the first recording into the second.
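That copy step can be scripted if you have many recordings. A sketch below; the folder names are placeholders, and the demo creates stand-in recording folders so it runs anywhere:

```python
import shutil
from pathlib import Path

# Stand-in recording folders; replace these with your real recording directories.
src_rec = Path("recording_with_surfaces")
dst_rec = Path("recording_without_surfaces")
src_rec.mkdir(exist_ok=True)
dst_rec.mkdir(exist_ok=True)
(src_rec / "surface_definitions").write_bytes(b"demo definitions")

# Copy the surface_definitions file so Player picks up the same surfaces.
shutil.copy(src_rec / "surface_definitions", dst_rec / "surface_definitions")
print((dst_rec / "surface_definitions").exists())  # True
```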
Hi everyone! We recently bought an invisible device. Although our recordings are being uploaded from the smartphone to the cloud we cannot access cloud via the pc. The sign in via google account doesn't seem to work. Is there any known issue?
Hi @user-5e8fad! We are not aware of any issues! Could you send me a DM with the email address of the Google account you are using?
hello hello!
I'm not sure if I'm having a hardware or software issue, maybe you have encountered the error message before
when using the pupil invisible glasses in combination with the companion app
I'm not getting any errors prior to recording, but once I hit save, I get a recording error that prompts me to reconnect the glasses
OTG is enabled, both world cam & eye tracking sensor indicators light up in the app
after I hit save everything seems fine for a moment, then I'm getting the Recording failure
the message is "We experienced a problem during recording! Please stop recording, disconnect PI and connect it again! Sensor: PI left v1"
hello @user-1391e7! So you get this message after hitting save, while no recording is going on anymore? Do you have any trouble playing the affected recordings in Pupil Cloud or Pupil Player? The message you receive should in theory only appear if there was an error with one of the sensors while recording. If this happens after the recording has technically finished, this might be a software bug.
I can play the recordings, but the gaze doesn't show up, since the left eye data seems to be missing from the recording
it sounds like a hardware issue, but I'm not 100% sure in regards to the timing
@user-1391e7 When you say the left eye data is missing, do you mean the PI left v1 ps1.mp4 file is missing entirely from the recording folder, or is the file empty/unreadable?
PI left v1 ps1.mjpeg & PI left v1 ps1.time exist, but have nothing inside
I just tested it on the second phone we bought, and it happens there as well, same behaviour. That makes me think a hardware issue is more likely
two different versions of the app are installed on the phones:
0.8.24-prod
1.1.4-prod
sadly I wasn't present when the recordings went from fine to not fine. There isn't any visible damage on either the cable or the glasses, and the colleagues who performed the recordings didn't mention any incidents either
I could uninstall and reinstall the companion app, but that may remove the ability to reproduce the error, in case it isn't hardware related
@user-1391e7 Indeed, this does sound more like a hardware issue then, and we should initiate a repair. You can try to reinstall the app, but I doubt it will fix the problem in this case. Please reach out to [email removed] to initiate the repair, mentioning the serial number of the device (you can find it at the tip of the left arm of the glasses) and your address!
thank you, will do!