πŸ•Ά invisible


user-50082f 01 June, 2024, 05:41:31

Neon Companion has not detected a disconnection with the glasses. Also, I can't see the image from the eye camera in the Neon Companion app. I'm not using a third-party USB cable when connecting, and I haven't updated the version of Android. Please let me know if you have any suggestions on how to deal with this.

user-082c1d 01 June, 2024, 19:23:09

Hi! May I ask why the video is not able to be uploaded to the cloud? I have several videos that have been uploading for many days. The phone says they are uploaded, but I am not able to see them on the cloud. Thanks!

Chat image

user-07e923 02 June, 2024, 06:11:25

Hi @user-082c1d, could you open a ticket in πŸ›Ÿ troubleshooting and describe the issue there? We'll also need the recording IDs of the affected recordings. You can find the IDs by right-clicking > view recording information.

user-870276 01 June, 2024, 22:33:40

Hey Pupil Labs, I am currently working on eye and hand tracking during natural tasks. For the hand tracking, we are using Xsens and Invisible for the eye. In this, we are having issues with the time synchronization between both devices with a delay of around 10 seconds for some subjects. The difference is inconsistent between the subjects and for the same subjects too. There might be inconsistencies with clicking the recording button, but it won't be 10 seconds. Is there a solution to work on these issues, as the task duration itself is a maximum of 10 seconds for most of them? This is stopping us because it's not possible to do the time-related analysis.

nmt 03 June, 2024, 03:56:42

Hi @user-870276. Would you be able to provide more information about how you're syncing the two systems? That way we can try to provide some concrete feedback.

user-870276 03 June, 2024, 05:01:26

I'm creating a 'Go' event in the cloud, exporting it, and using the Go-signal epoch value from events.csv as the Go timestamp, correlating it with the Xsens recording's first-frame start epoch values. I'm observing an 8-10 second delay in start times between Xsens and Pupil. This shouldn't be the case, as Xsens is started first, then Pupil starts recording, and a 4-count is given before the Go signal. But even then, the recording start of Xsens is 8-10 seconds ahead of the Pupil Go-signal epoch value. The difference is also inconsistent among different subjects, which is causing problems in time-related analyses like latencies.

nmt 03 June, 2024, 05:47:58

How did you sync the xsens clock with the Companion Device's clock?

user-870276 03 June, 2024, 10:15:01

Before this, we did a test run. In that test run, there wasn't this much of a time delay, so we hadn't paid much attention to synchronizing the clocks beforehand.

user-870276 03 June, 2024, 10:24:28

As I said, the difference is also not constant.

nmt 03 June, 2024, 10:52:49

If the clocks haven't been synchronised in any way, then you cannot rely on comparing absolute timestamps from both systems. The temporal offset between them could be arbitrary.

nmt 03 June, 2024, 10:57:12

What sync capabilities does the xsens system have?

user-870276 03 June, 2024, 12:05:27

It takes its time from its companion laptop, just as Pupil does.

user-870276 03 June, 2024, 12:05:55

In that case, the time variation should be constant, right?

user-870276 03 June, 2024, 12:06:25

This is the variation I'm observing

Chat image

nmt 03 June, 2024, 12:33:37

No, not necessarily. In fact, it's difficult to say much from that data. The Xsens system may well take timestamps from a laptop, but do you know what options Xsens provides to sync its time with other systems?

user-870276 03 June, 2024, 17:04:53

Global timestamps are the preferred way for Xsens.

nmt 04 June, 2024, 00:24:21

If you plan to work with the global timestamps from both systems, you'll need to find a way to synchronise them, or determine the temporal offset between them and apply that to your calculations. Our Companion Device can sync to a master clock, so that could be an option if the Xsens can too. You can read more about that in this section of the docs. Another way to calculate the offset would be to record a common event with both systems. As an aside, I took a quick look at the Xsens docs, and they have quite a few ways to sync with other devices. So you definitely have options.
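
For the "common event" route, here is a minimal sketch of how the offset could be estimated and applied, assuming you can read the event's epoch time from both systems. The file and column names are illustrative assumptions, not the actual Xsens export format:

```python
# Sketch of the "common event" approach: both systems record the same physical
# event (e.g., an LED flash or clap); the timestamp difference estimates the
# clock offset, which is then applied to one of the streams.
import pandas as pd

event_pupil_s = 1717400000.123   # epoch time of the event from events.csv (Pupil Cloud)
event_xsens_s = 1717400008.956   # epoch time of the same event from the Xsens export

offset_s = event_pupil_s - event_xsens_s   # maps the Xsens clock onto the Pupil clock

xsens = pd.read_csv("xsens_data.csv")                         # hypothetical file/columns
xsens["timestamp_pupil_s"] = xsens["timestamp_s"] + offset_s  # Xsens data on the Pupil clock
```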

user-870276 04 June, 2024, 00:54:42

Thank you, Neil! I also want to ask two things: why is there a delay between the start of the IMU data and the start of the gaze data and world recording, and why is the sampling frequency not consistent in gaze.csv?

nmt 04 June, 2024, 01:23:11

  • Not all sensors onboard the Invisible start at exactly the same time. So you might expect delays.
  • Are you referring to gaze data downloaded from Pupil Cloud?

user-870276 04 June, 2024, 00:55:30

The sampling frequency is not consistently 200 Hz; it's fluctuating.

user-870276 04 June, 2024, 01:36:48

Yes downloaded from the cloud

nmt 04 June, 2024, 01:44:55

What variability exactly do you see in the sampling rate?

user-870276 04 June, 2024, 02:22:40

So, for example, for the first timestamp it's 100 Hz, the second 150 Hz, the third 50 Hz, and the fourth 190 Hz. On average it is 200 Hz, but there is fluctuation.

nmt 04 June, 2024, 03:09:28

It is expected that the inter-sample durations have a distribution, but they should average out to roughly 200 Hz. For context, each Gaze datum is generated using left/right-eye image pairs and inherits the left eye image's timestamp. Timestamps are generated by the Companion App when the eye image is received, and there is usually a small transmission delay over USB (from the camera to the app). We account for this with a fixed offset, but the actual transmission can be variable, which accounts for most of the variability seen.

See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/862718351268511795
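
To see this distribution for yourself, here is a minimal sketch, assuming the Cloud timeseries export's gaze.csv with a "timestamp [ns]" column (adjust the column name if your export differs):

```python
# Inspect inter-sample intervals and the effective sampling rate of gaze.csv.
import pandas as pd

gaze = pd.read_csv("gaze.csv")
dt_s = gaze["timestamp [ns]"].diff().dropna() * 1e-9   # seconds between consecutive samples

print("mean rate:", 1.0 / dt_s.mean(), "Hz")            # should average roughly 200 Hz
print("instantaneous rate stats:")
print((1.0 / dt_s).describe())                          # shows the per-sample spread
```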

user-0a5287 04 June, 2024, 10:01:44

Hello! We had a problem with the cable connecting the phone and the glasses. Apparently it was bent, so the recording is periodically interrupted. Please tell me the requirements for this cable so that we can buy a similar one.

user-07e923 04 June, 2024, 11:26:23

I just wanted to put the response here, in case other people might be interested.

For those interested in getting a spare USB-C cable, please try to get high-quality cables that offer both power and data transfer. Lower-quality cables can sometimes cause intermittent signal loss. For example, this cable has the L-shape connector, and is suitable for Invisible.

user-1c97d7 07 June, 2024, 00:13:00

Hi all! I am hoping someone might be able to help! I have been given a fixations.csv generated from a Pupil Labs Invisible set of glasses. I am trying to analyse the head rotation, and the docs suggest that the azimuth and elevation should give this. However, although the participant did a full rotation, the azimuth values only range from -40 to 50, which suggests they don't hold a position in degrees? Am I missing something? Thank you!

nmt 07 June, 2024, 00:15:45

Hi @user-d91c62! Would you be able to share some more details on the task the wearer was doing? It’s just that fixations, and the elevation and azimuth values, describe eye movements, not head movements per se. Are you trying to infer head rotation based on the eye movements?

user-d91c62 07 June, 2024, 09:00:16

Hi Neil, Many thanks for the reply! I am basically trying to get the orientation of the participant's head. In the recording they are walking around a city, and I need to know which direction they are looking in, and the elevation. I had been working with the fixations.csv, but I now believe I misunderstood in thinking that the azimuth and elevation would give me this. I also have an imu.csv file which seems to have some gyro information along with roll and pitch (but no yaw, although this could be an error from the person giving me the data). I can match the timestamps from the fixations.csv to the imu.csv. If I had pitch, yaw, and roll in there, would that give me the head rotation (pose)? If not, is there a way to reconstruct it from the gyro data? Thank you!

user-f43a29 13 June, 2024, 11:55:57

Hi @user-d91c62 , I'm briefly stepping in for @nmt here.

You are correct that the imu.csv file contains values related to head rotation, but Pupil Invisible only provides pitch and roll, as it does not contain a magnetometer. You can find more information about the various coordinate systems here and the formats of the data files are specified here.

For example, the raw gyro values provide angular velocity about the Y-axis of Pupil Invisible's IMU, but this cannot be used to infer absolute yaw orientation. The published literature on IMUs contains more details about that.

user-3c26e4 07 June, 2024, 13:34:11

Hi, is there a way to hide previous fixations (sometimes 7-8 fixations) from the video in Pupil Cloud, so that I can only see the present fixation? Thanks in advance.

user-07e923 07 June, 2024, 13:50:01

Hi @user-3c26e4, there's currently no way to change the number of fixations shown on Pupil Cloud. What you can do is to disable the fixation visualization completely, but leave the gaze (red circle) on. To do this, click on the eye icon next to fixation.

There's an alternative method using Neon Player, but the visualization looks slightly different. May I know what's the goal of showing only the current fixation?

Chat image

user-3c26e4 07 June, 2024, 13:58:57

Unfortunately the red circle is not precise..

user-3c26e4 07 June, 2024, 13:51:47

Hi @user-07e923 , we are counting the number of fixations to different objects in a dynamic situation.

user-07e923 07 June, 2024, 14:03:31

In that case, the other method might not be feasible, because it won't show you the fixation number. But it'll show you which objects were gazed on, depending on how you configure the gaze history.

I'll still share with you the idea in case this is something you might be interested in.

This method only works on individual recordings, because it uses Pupil Player, which is part of the Pupil Core bundle. You'll need to download the native recording data from Pupil Cloud (or export them directly from the Invisible Companion device) and load them onto the software.

What you'll need is to enable the visualization plugins. Specifically, you'll need the circle, the polyline, and the light point visualizations. You can then configure the gaze history (i.e., the duration of past gaze) to be visualized for the polyline.

Chat image

user-3c26e4 07 June, 2024, 14:08:22

I will give it a try and will come back to you. Thank you very much.

user-3c26e4 07 June, 2024, 14:10:45

But can I export data from the Invisible Glasses and use Neon Player?

user-07e923 07 June, 2024, 14:13:07

Oh, my apologies. I mistook the channel. This is the πŸ•Ά invisible channel, and not for Neon.

In this case, you'll need to use Pupil Player instead, which is part of the Pupil Core bundle.

I'll revise my above response for Pupil Player.

I'd like to also point out that there's no fixation detector for Pupil Invisible when using Pupil Player. So what you're seeing is only the gaze history, and not the fixation history.

user-5be4bb 14 June, 2024, 09:08:36

Hi, I am exporting the raw data of a recording using Pupil Player, and the time is only shown in seconds relative to the start of the recording. How can I get the Unix timestamp if I don't have the exact time (hh:mm:ss:ff) when the recording started?

user-07e923 14 June, 2024, 10:08:47

Hi @user-5be4bb, you could try processing the exported data using this repo, instead of putting them into Pupil Player.
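
If you prefer to keep working with the Pupil Player export, another option is to offset the recording-relative times by the recording's start time. Here is a minimal sketch, assuming the native recording's info.json has a "start_time" field in nanoseconds since the Unix epoch (check your own info.json first) and that the exported times are seconds relative to recording start, as described above; the CSV file and column names are hypothetical:

```python
# Sketch: convert recording-relative seconds to absolute Unix timestamps using
# the start time stored in the native recording's info.json.
import json
import pandas as pd

with open("info.json") as f:
    start_s = json.load(f)["start_time"] * 1e-9       # ns since epoch -> seconds

export = pd.read_csv("exported_timestamps.csv")        # hypothetical Player export file
export["timestamp_unix_s"] = start_s + export["relative_time_s"]  # hypothetical column name
```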

user-d91c62 14 June, 2024, 10:15:46

Hi Rob, Many thanks for coming back to me! I am very grateful for the reply! I don't actually need the absolute yaw value, just the value relative to the start position. I can see I have gyro data for X, Y, Z. If I assume that the participant starts the recording with the glasses facing north, for example, is the gyro data enough to calculate the relative direction the user's head is facing at each fixation point, relative to the start position?

user-f43a29 14 June, 2024, 11:59:46

Hi @user-d91c62 , that is also not possible with the gyroscope data, unfortunately, but please note that you can still do a lot with the gyroscope Y data, such as compute variability of head movements or peak velocity about the Y axis.

Regardless of whether the starting position is known or assumed, any integration of the gyroscope Y data alone will quickly undergo exponential drift. The IMU literature contains more details about that. Pitch and roll are available because the gyroscope data are 'fused' with the accelerometer values, which tells the system which way is down. Without also 'fusing' the gyroscope data with signals from a magnetometer or visual odometry, it will not be possible to extract absolute or relative yaw from Pupil Invisible's IMU.
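
As an illustration of the analyses mentioned above, here is a minimal sketch computing variability and peak angular velocity about the Y axis, assuming the Cloud timeseries imu.csv with a "gyro y [deg/s]" column (adjust the header if yours differs):

```python
# Sketch: head-movement variability and peak velocity about the IMU's Y axis.
import pandas as pd

imu = pd.read_csv("imu.csv")
gyro_y = imu["gyro y [deg/s]"]

print("std of angular velocity about Y :", gyro_y.std(), "deg/s")
print("peak angular velocity about Y   :", gyro_y.abs().max(), "deg/s")
```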

user-df855f 14 June, 2024, 10:20:27

It seems like the video renderer takes a lot of time to be created, could that be true?

user-d407c1 14 June, 2024, 10:31:46

Hi @user-df855f ! The time it takes for an enrichment/visualisation to process depends on the number and length of the recordings to be enriched, and on the queue of enrichments/visualisations currently being run.

In other words, if many people submit jobs at the same time, the jobs go into a queue and are processed as soon as processing power becomes available.

If, after refreshing the webpage and waiting some time, it still seems to be stuck, please reach out to us with the enrichment/visualisation ID in πŸ›Ÿ troubleshooting and we will check it. The ID can be found in the enrichment/viz view by clicking on the 3 dots next to the name.

user-5be4bb 14 June, 2024, 12:55:34

Thanks @user-07e923 , I exported the data using this code. Previously, I also calculated timestamps based on the timestamps in the world_timestamps file from the Pupil Player export and on frames where we showed the time from an atomic clock; that is more or less accurate in terms of seconds and milliseconds. When comparing the results to check accuracy, pl-rec-export gives me a time 2 hours behind, but the same minutes and seconds: for example, it gives me 9:58:18 am while my calculated time (and the actual time from the Unix timestamp when the test was done) is 11:58:17 am. Why does the code give timestamps 2 hours behind? If I know the reason, we can account for whatever is causing this issue.

user-07e923 14 June, 2024, 13:24:47

May I know how you're reading or converting the timestamps? UTC time, your system's time, or something else?

user-5be4bb 14 June, 2024, 14:25:10

I will share with you this excel file containing the two methods with brief explanation. Kindly find it attached. Thank you.

converted_timestamps.xlsx

user-07e923 14 June, 2024, 15:04:45

It seems like there might not be any flags to use UTC time. Could you try to flag UTC in method 1? pd.to_datetime(..., utc=True)

user-5be4bb 14 June, 2024, 15:07:37

This code doesn't consider the time zone difference. Do you mean accounting for the time zone difference? Because this test was done in Italy (UTC+2).

user-07e923 15 June, 2024, 04:49:54

Hi @user-5be4bb, I'd like to check something. Could you try running pd.to_datetime(..., utc=True) and exporting the data as .csv instead of Excel? Do you see the time difference in this case?
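
For reference, the 2-hour gap described above is consistent with one method displaying UTC and the other displaying Italian local time (UTC+2). A minimal sketch of the two readings of the same epoch instant, assuming the exported values are nanoseconds since the Unix epoch (the example value is illustrative):

```python
# Sketch: the same epoch instant rendered as UTC vs. Italian local time (UTC+2).
import pandas as pd

ts_ns = pd.Series([1714730297_000000000])                # illustrative epoch value

utc_time = pd.to_datetime(ts_ns, unit="ns", utc=True)    # -> 2024-05-03 09:58:17 UTC
local_time = utc_time.dt.tz_convert("Europe/Rome")       # -> 2024-05-03 11:58:17 +02:00

print(utc_time.iloc[0])
print(local_time.iloc[0])
```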

user-5be4bb 15 June, 2024, 10:34:10

Also, I have now tried this code, which transforms a Unix epoch timestamp to datetime format, 'datetime_obj = datetime.datetime.fromtimestamp(epoch_timestamp)', on the timestamps exported by pl-rec-export, and it gives me the actual local time: the formatted date and time is 2024-05-03 11:58:17. So I think the problem was with the first code I tried πŸ˜…

user-5be4bb 15 June, 2024, 10:28:29

Hi @user-07e923, yes, the output after running the code is like this: 2024-05-03 09:58:18.439769983+00:00.

user-07e923 15 June, 2024, 11:25:34

Great! I'm glad the issue is resolved πŸ˜„

user-5be4bb 17 June, 2024, 08:20:31

Thanks a lot.

user-61e6c1 18 June, 2024, 08:55:56

Hello! I have a question within our project preparations: How is the connection between the cloud and the companion app, or the glasses, established? How can I imagine this in practice? Are the glasses automatically assigned to my cloud account or do I have to set up the connection? Thank you in advance!

user-480f4c 18 June, 2024, 09:04:40

Hi @user-61e6c1. You need to use the same email to access Pupil Cloud and the Invisible Companion App. You need to sign up for Pupil Cloud, and then simply connect your Companion Device to Wi-Fi when you first use the App. The Companion App and your Cloud account will be then synced automatically. I recommend having a look at these instructions on how to make your first recording with Invisible. πŸ™‚

user-61e6c1 18 June, 2024, 09:14:47

@user-480f4c Thank you Nadia for this information! The thing is that we plan to buy one Invisible and borrow another one from a project partner. So, as described in the instructions, will it not be possible to link data collected with the borrowed glasses to the Cloud account that is connected to the glasses we own?

user-480f4c 18 June, 2024, 09:17:18

You can simply log in with your email/password in the borrowed Companion Device and the App will reflect the information of your account (e.g., your workspace/wearers/templates)

user-61e6c1 18 June, 2024, 09:18:44

@user-480f4c Great! Thank you so much for sharing this valuable information with me πŸ™‚ we would love to buy more than one but budget is too low

user-480f4c 18 June, 2024, 09:20:15

happy to help @user-61e6c1! I hope you have a great experience collecting data with Pupil Invisible. πŸ™‚

user-35fbd7 20 June, 2024, 07:38:09

Hi, I am trying to download recordings for further analysis in iMotions. For that, a recording needs to have files such as gaze ps1.raw, gaze ps1.time, PI world v1 ps1.mp4, extimu ps1.raw, extimu ps1.time, and info.json. I tried different types of downloading but don't see the required files. Where can I find this type of download? I would be grateful for your prompt response.

user-07e923 20 June, 2024, 07:47:36

Hi @user-35fbd7, thanks for reaching out πŸ™‚ Are you trying to download a Pupil Invisible recording from Pupil Cloud?

user-35fbd7 20 June, 2024, 07:47:46

yes

user-07e923 20 June, 2024, 07:48:24

Great! On Pupil Cloud, right click the recording and select "Native Recording Data".

user-35fbd7 20 June, 2024, 07:49:45

I don't see this option

user-07e923 20 June, 2024, 07:50:36

Do you see "Pupil Player format" then?

user-35fbd7 20 June, 2024, 07:51:02

yes, this i see

Chat image Chat image

user-07e923 20 June, 2024, 07:51:32

Yes, that's the one. Pupil Player format contains the raw data (e.g., gaze ps1.raw, etc.).

user-35fbd7 20 June, 2024, 07:54:49

but iMotions said it is the wrong format

Chat image

user-07e923 20 June, 2024, 07:59:48

Could you try to download the raw data onto a folder that isn't linked to OneDrive? OneDrive has caused some unexpected behavior in the past.

user-35fbd7 20 June, 2024, 08:00:35

It's already OK now. I see where my mistake was. Thank you for the support!

user-5be4bb 21 June, 2024, 14:42:05

Hi! Is it possible to download blink data and pupil size data for a Pupil Invisible recording? I tried the pl-rec-export code on GitHub; it gives me a file called "blinks.csv", but it is empty.

user-07e923 21 June, 2024, 15:38:21

Hi @user-5be4bb, you can download blink data on Pupil Cloud. Right-click on the desired recording and select "Timeseries data". Pupil size data is unavailable for Pupil Invisible. It is not part of the data stream.

user-5be4bb 21 June, 2024, 15:53:29

Hi @user-07e923, okay thank you for your help !

user-f6ea66 26 June, 2024, 12:50:58

Hi, I'm new to Pupil Labs eye tracking. It seems like a great system! Is it possible to have two different users on the same set of glasses/phone, to do two different scientific projects? In my case:
- The phone is already set up with a different user who has her own Pupil Cloud workspace connected.
- I want to have a different Pupil Cloud account (my own) connected as well, to the same device, so that we do not need access to the same Pupil Cloud account. It's about data safety and not risking messing things up, so quite important.
- Currently, when I open the "Invisible Companion" app, she is logged in with her personal workspace (on her personal Pupil Cloud account). (picture 1)
- If I try to change workspace, it seems I can only add extra workspaces on the same account and not link the app to a different Pupil Cloud account, i.e. a different user altogether (picture 2).

Is it correct that this is not possible and the same phone can only be linked to one Pupil Cloud account, or did I miss some option or setting? Thanks!!

user-480f4c 26 June, 2024, 12:58:38

Hi @user-f6ea66! Thanks for reaching out and welcome to the community πŸ™‚

Let me see if I understand correctly:

Your friend (let's call her User1) has an account on Pupil Cloud with email [email removed] . She uses this email to access the Invisible Companion App, and the recordings she makes are uploaded to her workspace, User1 Workspace.

You are User2, you already have an account on Cloud with email [email removed] , and you want to upload your recordings to User2 Workspace.

In that case, you simply need to log out from the Invisible App if your friend is logged in, and use your own email and password. Then you'll be able to upload any recordings you make to your workspace.

user-f6ea66 26 June, 2024, 12:51:33

Chat image Chat image

user-f6ea66 26 June, 2024, 12:59:39

100% correctly understood and thanks for the welcome πŸ˜€

user-f6ea66 26 June, 2024, 13:01:11

I just don't see a log out button, but I'm used to a samsung android phone, so I might be a little 'dyslexic' here.. I'll look some more.

user-f6ea66 26 June, 2024, 13:01:38

(ah, at the bottom of the settings menu)

user-f6ea66 26 June, 2024, 13:22:53

Ok, that worked perfectly, thank you. But, now I made two test recordings as a new user and it automatically uploaded them to my Pupil Cloud, which is excellent. However, it won't let me create a project or view either of the videos? It doesn't seem like it's still processing them (the upload is complete and there seemingly isn't any other activity going on in the Pupil Cloud). Not sure why the recordings would have errors. I just put the glasses on, used a computer for a short while and stopped the recording again?

user-f6ea66 26 June, 2024, 13:23:13

Chat image Chat image

user-ab605c 26 June, 2024, 13:31:32

Hi, I have the exact same upload problem that @user-f6ea66 is facing as of now: for the last hour or so I cannot upload any recordings to the cloud using pupil invisible. The recordings are fine on the phone, but on the cloud they show the same error, cannot be replayed/downloaded, show blank snapshot images on the list, etc.

user-f6ea66 26 June, 2024, 13:32:41

(by the way, thanks for the super swift and helpful reply)

user-480f4c 26 June, 2024, 13:46:56

@user-f6ea66 @user-ab605c can you create a ticket in πŸ›Ÿ troubleshooting so that we can assist you better?

user-ce2025 27 June, 2024, 05:56:01

Hello! I am a researcher using Invisible products. Here is my issue: I uploaded a video shot with the eye tracker to Pupil Cloud, but after the upload and processing were completed, I tried to download the blink information for the video and got an error where it was displayed as an empty folder and the video could not be played. I would like to re-upload the video to Pupil Cloud; is there any way to do this? Even if I remove the video from Pupil Cloud, it doesn't seem possible to upload it again, because it has already been uploaded from the mobile phone app. Is there another way?

Chat image Chat image

user-07e923 27 June, 2024, 06:56:39

Hi @user-ce2025, thanks for reaching out πŸ™‚ It's not possible to re-upload recordings from the app. Could you tell me what sort of error you saw? Also, were you trying to download the timeseries data or the Pupil Player format data?

Have you deleted the recording from Pupil Cloud? Could you check if it's still in the trash? If so, can you restore it and provide the recording ID so that I can look into the issue?

To check the trash: click on the 3 dots, then select "show trashed"

user-ce2025 27 June, 2024, 07:34:41

Thank you. However, we have never moved the file to the Recycle Bin. The recording ID you mentioned is 91b9db68-3361-4b0d-ad7d-11600b951799. Please confirm.

user-f6ea66 27 June, 2024, 11:32:20

Hi again, thanks for fixing my previous issue! I am now trying to create Marker Mapper enrichments for my test videos. This will be essential to my research. I can see that I need AprilTags, which I googled, and I found some GitHub libraries for related stuff. However, I'm not sure how to proceed. First of all, I want to print out 4 markers to position around a computer screen. Are there some optimal/suggested markers for this? (Recordings will be one hour long, and I want to know where people look on the screen during that hour.) In your brief guide shown when I press Marker Mapper (see picture), you refer to learning more "here" twice, but there are no links or similar:

Chat image

user-480f4c 27 June, 2024, 11:34:43

Hi @user-f6ea66! Thanks for spotting this!

You can find the markers and instructions on how to use this tool on our Marker Mapper documentation.

user-f6ea66 27 June, 2024, 11:44:10

Thanks!

user-f6ea66 27 June, 2024, 13:34:19

I have another question of a different sort.. I intend to do an educational/health sciences study but without any patients. Only voluntarily participating medical students. So only "medical student" eye data, so to speak. The videos of their eye movements are included in the upload to the pupil cloud. My group and I are wondering (since Pupil Labs seems to be used in many research projects) if you have some sort of data security statements or agreements or other "legal" considerations in place already, regarding getting ethics/data security approval for studies? Thanks.

user-480f4c 27 June, 2024, 13:37:55

Hi @user-f6ea66. Pupil Cloud is secure and GDPR compliant. You can read more about the privacy policies and security measures in this document

user-f6ea66 27 June, 2024, 13:38:45

Excellent. I will take a look : )

user-7a517c 28 June, 2024, 11:18:21

Hello! We use Pupil Invisible for our study, in which we have two people conversing. We want to align (synchronise) the two recordings. The phones themselves don't seem to be synchronised -- in the cloud it appears that we started the two recordings minutes apart from each other, when in fact we did so only seconds apart, so the UTC timestamps are not reliable. So we decided to use an auditory trigger. However, in the cloud it is extremely laborious to find this trigger to the millisecond and add an event, since we cannot visualise the spectrogram like in an audio analyser. One idea we had was to download the scene videos and find the auditory trigger (a Hollywood clap) using the spectrogram. However, this time point does not seem to align with the point where we hear the clap in the cloud video (i.e., the cloud video has a lag). Would you have any solutions for us, assuming it's not possible to upload our cut videos back into the cloud for analysis? Many many thanks!

nmt 29 June, 2024, 03:06:15

Hi @user-7a517c - I'm stepping in for @user-07e923 here briefly!

I'd like to understand your setup better, and perhaps we can offer some concrete ways to synchronise your existing recordings, as well as advice for future recordings.

I'm assuming the recordings you already have are important and you'd like them synced, so let's tackle those first.

Using UTC timestamps from two Companion Devices to synchronise recordings is a valid approach. However, it only works if the two devices were synced to the same master clock/NTP server before making the recordings, as outlined in this section of the docs.

So my first question is, were you able to do that? If not, there may be a temporal offset between the devices and UTC timestamps. (Note it should definitely still be possible nonetheless to figure out a temporal offset via your clap signal!)

user-07e923 28 June, 2024, 11:26:46

Hi @user-7a517c, can I check whether the video playback in the app (on the device) has similar issues to what you've described?

user-7a517c 28 June, 2024, 11:32:32

Hi @user-07e923 , how can we check this?

user-07e923 28 June, 2024, 11:33:34

On each device, navigate to the Invisible Companion app, then tap the recordings and play them back from the phone.

user-7a517c 28 June, 2024, 11:39:12

I see, but the devices don't allow me to navigate to exact milliseconds so I cannot really find the time point at which the clap occurs

user-7a517c 28 June, 2024, 11:41:07

Between the downloaded videos and the cloud videos, the lag is small (probably around 0.5 seconds), but due to the nature of our study, we would like to synchronise the recordings to the millisecond as much as possible.

user-07e923 28 June, 2024, 12:23:27

I see. Invisible's scene camera operates only at 30 Hz, which translates to 1/30 = 0.033 seconds per frame. So this won't be precise enough if you want millisecond precision. We recommend synchronizing both devices using the real-time API or with something like Lab Streaming Layer.

Please note that the scene camera sampling rate doesn't affect the audio, because the audio is sampled at a different rate and is stored in the audio channel.

As for the data you already have, here's a suggestion:

If you already know at which timestamp the clap happens, you can send events programmatically through our Cloud API. Find the timestamp when the clap happens, then use the API to send an event to both recordings on Pupil Cloud. Edit: You'll need to request a developer token to use the Cloud API.
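
For future recordings, here is a minimal sketch of the real-time API option mentioned above, assuming the pupil-labs-realtime-api Python package is installed and both phones are reachable on the same network (the IP addresses are placeholders). The idea is to write an event carrying the same laptop wall-clock timestamp into both recordings, which then serves as a common reference point for aligning them:

```python
# Sketch only: mark a shared reference moment in two simultaneously running
# Invisible recordings via the real-time API. IPs/ports are placeholders.
import time
from pupil_labs.realtime_api.simple import Device

device_a = Device(address="192.168.1.101", port="8080")
device_b = Device(address="192.168.1.102", port="8080")

now_ns = time.time_ns()  # one shared wall-clock timestamp from the laptop
device_a.send_event("sync.reference", event_timestamp_unix_ns=now_ns)
device_b.send_event("sync.reference", event_timestamp_unix_ns=now_ns)

device_a.close()
device_b.close()
```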

user-3c26e4 30 June, 2024, 17:40:20

Hi, I have a problem with visualizing the AOI Heatmap. I always get the message "Internal Server Error". How can I solve this problem?

user-3c26e4 30 June, 2024, 17:40:23

Chat image

user-480f4c 01 July, 2024, 07:40:22

Hi @user-3c26e4! Can you please create a ticket in the πŸ›Ÿ troubleshooting channel? We'll assist you there in a private chat.

End of June archive