The Neon Companion app has not detected a disconnection from the glasses. Also, I can't see the eye camera image in the Neon Companion app. I'm not using a third-party USB cable for the connection, and I haven't updated the Android version. Please let me know if you have any suggestions on how to deal with this.
Hi! May I ask why my videos are not being uploaded to the cloud? I have several videos that have been loading for many days. The phone says they are uploaded, but I am not able to see them in the Cloud. Thanks!
Hi @user-082c1d, could you open a ticket in the troubleshooting channel and describe the issue there? We'll also need the recording IDs of the affected recordings. You can find the IDs by right-clicking > View recording information.
Hey Pupil Labs, I am currently working on eye and hand tracking during natural tasks. For the hand tracking we are using Xsens, and Invisible for the eye tracking. We are having issues with the time synchronization between the two devices, with a delay of around 10 seconds for some subjects. The difference is inconsistent between subjects and even for the same subject. There might be some inconsistency in clicking the recording button, but that wouldn't be 10 seconds. Is there a solution for these issues, given that the task duration itself is at most 10 seconds for most trials? This is blocking us because it makes time-related analysis impossible.
Hi @user-870276. Would you be able to provide more information about how you're syncing the two systems? That way we can try to provide some concrete feedback.
I'm creating a Go event in the Cloud, exporting it, and using the Go signal's epoch value from events.csv as the Go timestamp, correlating it with the epoch value of the first frame of the Xsens recording. I'm observing an 8-10 second delay in start times between Xsens and Pupil. This shouldn't be the case, as Xsens is started first, then Pupil starts recording, and there is a four-count before the Go signal is given. But even then, the recording start of Xsens is 8-10 seconds ahead of the Pupil Go signal epoch value. The difference is also inconsistent between subjects, which is causing problems in time-related analyses like latencies.
How did you sync the Xsens clock with the Companion Device's clock?
Before this, we did a test run. In that test run, there wasn't this much of a time delay, so we hadn't cared much about synchronizing the clocks beforehand.
As I said, the difference also isn't constant.
If the clocks haven't been synchronized in any way, then you cannot rely on comparing absolute timestamps from the two systems. The temporal offset between them could be arbitrary.
What sync capabilities does the Xsens system have?
It takes its time from its companion laptop, just as Pupil does.
In that case the time offset should be constant, right?
This is the variation I'm observing
No, not necessarily. In fact, it's difficult to say much from that data. The Xsens system may well take timestamps from a laptop, but do you know what options Xsens provides to sync its time with other systems?
Global timestamps are the preferred approach for Xsens.
If you plan to work with the global timestamps from both systems, you'll need to find a way to synchronise them, or determine the temporal offset between them and apply that to your calculations. Our Companion Device can sync to a master clock, so that could be an option if the Xsens can too. You can read more about that in this section of the docs. Another way to calculate the offset would be to record a common event with both systems. As an aside, I took a quick look at the Xsens docs, and they have quite a few ways to sync with other devices. So you definitely have options.
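For example, here's a minimal sketch (in Python) of the common-event approach; the timestamp values below are hypothetical placeholders, and you'd substitute the ones you measure from your own shared event:

```python
# Minimal sketch: align Xsens timestamps to the Pupil/Companion clock using a
# shared event (e.g., a clap or LED flash) recorded by both systems.
# The values below are hypothetical placeholders.
common_event_pupil_s = 1714736297.123  # event time from events.csv, converted to seconds
common_event_xsens_s = 1714736305.987  # the same event on the Xsens recording's clock

offset_s = common_event_xsens_s - common_event_pupil_s  # Xsens clock minus Pupil clock


def xsens_to_pupil_time(t_xsens_s: float) -> float:
    """Map an Xsens timestamp onto the Pupil/Companion clock."""
    return t_xsens_s - offset_s
```

Note that this assumes the offset stays roughly constant over the recording; if the clocks drift, you'd want to repeat the shared event at the start and end and interpolate.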
Thank you, Neil! I also want to ask about the delay between the start of the IMU data and the start of the gaze data and world recording, and why the sampling frequency is not consistent in gaze.csv.
The sampling frequency isn't consistently 200 Hz; it's fluctuating.
Yes, downloaded from the Cloud.
What variability exactly do you see in the sampling rate?
So, for example, for the first timestamp it's 100 Hz, for the second 150 Hz, for the third 50 Hz, and for the fourth 190 Hz. On average it is 200 Hz, but there is fluctuation.
It is expected that the inter-sample durations have a distribution, but they should average out to roughly 200 Hz. For context, each Gaze datum is generated using left/right-eye image pairs and inherits the left eye image's timestamp. Timestamps are generated by the Companion App when the eye image is received, and there is usually a small transmission delay over USB (from the camera to the app). We account for this with a fixed offset, but the actual transmission can be variable, which accounts for most of the variability seen.
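If you'd like to check the distribution yourself, here's a quick sketch. I'm assuming the Cloud timeseries export with a "timestamp [ns]" column; adjust the column name if your file differs:

```python
# Inspect the distribution of inter-sample intervals in gaze.csv.
import pandas as pd

gaze = pd.read_csv("gaze.csv")

# Differences between consecutive timestamps, converted from nanoseconds to seconds.
dt_s = gaze["timestamp [ns]"].diff().dropna() * 1e-9

print(f"Mean sampling rate: {1 / dt_s.mean():.1f} Hz")   # should be roughly 200 Hz
print(f"Interval spread (std): {dt_s.std() * 1e3:.2f} ms")
```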
See this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/862718351268511795
Hello! We had a problem with the cable connecting the phone and glasses. Apparently it was bent, so the recording is periodically interrupted. Please tell me the requirements for this cable so that we can buy a similar one.
I just wanted to put the response here, in case other people might be interested.
For those interested in getting a spare USB-C cable, please try to get high-quality cables that offer both power and data transfer. Lower-quality cables can sometimes cause intermittent signal loss. For example, this cable has an L-shaped connector and is suitable for Invisible.
Hi all! I am hoping someone might be able to help! I have been given a fixations.csv generated from a set of Pupil Labs Invisible glasses. I am trying to analyse head rotation, and the docs suggest that the azimuth and elevation should give this. However, although the participant did a full rotation, the azimuth values only range from -40 to 50, which suggests they don't hold a position in degrees? Am I missing something? Thank you!
Hi @user-d91c62! Would you be able to share some more details on the task the wearer was doing? It's just that fixations, and the elevation and azimuth values, describe eye movements, not head movements per se. Are you trying to infer head rotation based on the eye movements?
Hi Neil, many thanks for the reply! I am basically trying to get the orientation of the participant's head. In the recording they are walking around a city, and I need to know which direction they are looking in and the elevation. I had been working with the fixations.csv, but I now believe I misunderstood and that the azimuth and elevation won't give me this. I also have an imu.csv file which seems to have some gyro information along with roll and pitch (but no yaw, although this could be an error by the person giving me the data). I can match the timestamps from the fixations.csv to the imu.csv. If I had pitch, yaw, and roll in there, would that give me the head rotation (pose)? If not, is there a way to reconstruct it from the gyro data? Thank you!
Hi @user-d91c62 , I'm briefly stepping in for @nmt here.
You are correct that the imu.csv file contains values related to head rotation, but Pupil Invisible only provides pitch and roll, as it does not contain a magnetometer. You can find more information about the various coordinate systems here, and the formats of the data files are specified here.
For example, the raw gyro values provide angular velocity about the Y-axis of Pupil Invisible's IMU, but this cannot be used to infer absolute yaw orientation. The published literature on IMUs contains more details about that.
Hi, is there a way to hide previous fixations (sometimes 7-8 fixations) from the video in Pupil Cloud, so that I can only see the present fixation? Thanks in advance.
Hi @user-3c26e4, there's currently no way to change the number of fixations shown on Pupil Cloud. What you can do is disable the fixation visualization completely, but leave the gaze (red circle) on. To do this, click on the eye icon next to Fixations.
There's an alternative method using Neon Player, but the visualization looks slightly different. May I know what's the goal of showing only the current fixation?
Unfortunately the red circle is not precise...
Hi @user-07e923 , we are counting the number of fixations to different objects in a dynamic situation.
In that case, the other method might not be feasible, because it won't show you the fixation number. But it'll show you which objects were gazed on, depending on how you configure the gaze history.
I'll still share with you the idea in case this is something you might be interested in.
This method only works on individual recordings, because it uses Pupil Player, which is part of the Pupil Core bundle. You'll need to download the native recording data from Pupil Cloud (or export them directly from the Invisible Companion device) and load them onto the software.
What you'll need to do is enable the visualization plugins. Specifically, you'll need the circle, the polyline, and the light point visualizations. You can then configure the gaze history (i.e., the duration of past gaze) to be visualized for the polyline.
I will give it a try and will come back to you. Thank you very much.
But can I export data from the Invisible Glasses and use Neon Player?
Oh, my apologies. I mistook the channel. This is the invisible channel, not the one for Neon.
In this case, you'll need to use Pupil Player instead, which is part of the Pupil Core bundle.
I'll revise my above response for Pupil Player.
I'd like to also point out that there's no fixation detector for Pupil Invisible when using Pupil Player. So what you're seeing is only the gaze history, and not the fixation history.
Hi, I am exporting the raw data of a recording using Pupil Player, but the time is only shown in seconds relative to the start of the recording. How can I get the Unix timestamp if I don't have the exact time (hh:mm:ss:ff) when the recording started?
Hi @user-5be4bb, you could try processing the exported data using this repo, instead of putting them into Pupil Player.
Hi Rob, many thanks for coming back to me! I am very grateful for the reply! I don't actually need the absolute yaw value, just the yaw relative to the start position. I can see I have gyro data for X, Y, Z. If I assume that the participant starts the recording with the glasses facing north, for example, is the gyro data enough to calculate the relative direction that the user's head is facing at each fixation point, relative to the start position?
Hi @user-d91c62 , that is also not possible with the gyroscope data, unfortunately, but please note that you can still do a lot with the gyroscope Y data, such as compute variability of head movements or peak velocity about the Y axis.
Regardless of whether the starting position is known or assumed, any integration of the gyroscope Y data alone will quickly undergo exponential drift. The IMU literature contains more details about that. Pitch and roll are available because the gyroscope data are 'fused' with the accelerometer values, which tell the system which way is down. Without also 'fusing' the gyroscope data with signals from a magnetometer or visual odometry, it will not be possible to extract absolute or relative yaw from Pupil Invisible's IMU.
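As an illustration, here's a minimal sketch of the kind of descriptive statistics I mean. I'm assuming the Cloud export's imu.csv has a gyroscope column named something like "gyro y [deg/s]"; adjust to the column names in your file:

```python
# Descriptive statistics of head rotation speed about the IMU's Y axis.
import pandas as pd

imu = pd.read_csv("imu.csv")
gyro_y = imu["gyro y [deg/s]"]  # angular velocity about the Y axis, in deg/s

print(f"Peak angular speed about Y: {gyro_y.abs().max():.1f} deg/s")
print(f"Variability (std) about Y:  {gyro_y.std():.1f} deg/s")
```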
It seems like the Video Renderer takes a lot of time to be created. Could that be true?
Hi @user-df855f! The time it takes an enrichment/visualisation to process depends on the number and length of the recordings being enriched, and on the queue of enrichments/visualisations currently running.
In other words, if many people submit jobs at the same time, those jobs go into a queue and are processed as soon as processing power becomes available.
If, after refreshing the webpage and waiting some time, it still seems to be stuck, please reach out to us with the enrichment/visualisation ID in the troubleshooting channel and we will check it. The ID can be found in the enrichment/visualisation view by clicking on the 3 dots next to the name.
Thanks @user-07e923. I exported the data using this code. Previously, I had also calculated timestamps based on the timestamps in the world_timestamps file exported from Pupil Player, together with frames in which we showed the time from an atomic clock; that method is more or less accurate in terms of seconds and milliseconds. When comparing the results to check accuracy, pl-rec-export gives me a time 2 hours behind, but with the same minutes and seconds. For example, it gives me 9:58:18 am, while my calculated time (and the actual time from the Unix timestamp when the test was done) is 11:58:17 am. Why does the code give a timestamp 2 hours behind? If I know the reason, we can account for the problem.
May I know how you're reading or converting the timestamps? UTC time, your system's time, or something else?
I will share with you this excel file containing the two methods with brief explanation. Kindly find it attached. Thank you.
It seems like there might not be any flags to use UTC time. Could you try to flag UTC in method 1?
pd.to_datetime(..., utc=True)
This code doesn't take the time zone difference into account. Do you mean I should consider the time zone difference? This test was done in Italy (UTC+2).
Hi @user-5be4bb, I'd like to check something. Could you try running pd.to_datetime(..., utc=True)
and exporting the data as .csv instead of excel? Do you see the time difference in this case?
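Something along these lines should show both the UTC time and your local (Italian) time. This is a minimal sketch assuming a "timestamp [ns]" column; adjust the column name to whatever your export uses:

```python
import pandas as pd

df = pd.read_csv("gaze.csv")

# Parse the nanosecond Unix timestamps as timezone-aware UTC datetimes.
t_utc = pd.to_datetime(df["timestamp [ns]"], unit="ns", utc=True)

# Convert to local Italian time (UTC+2 during summer time).
t_local = t_utc.dt.tz_convert("Europe/Rome")

print(t_utc.iloc[0], t_local.iloc[0])
```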
Also, I just tried this code that transforms a Unix epoch timestamp to datetime format, 'datetime_obj = datetime.datetime.fromtimestamp(epoch_timestamp)', on a timestamp exported from pl-rec-export, and it gives me the actual local time. The formatted date and time is: 2024-05-03 11:58:17. So I think the problem was with the first code I tried.
Hi @user-07e923, yes, the output after running the code is like this: 2024-05-03 09:58:18.439769983+00:00.
Great! I'm glad the issue is resolved.
Thanks a lot.
Hello! I have a question within our project preparations: How is the connection between the cloud and the companion app, or the glasses, established? How can I imagine this in practice? Are the glasses automatically assigned to my cloud account or do I have to set up the connection? Thank you in advance!
Hi @user-61e6c1. You need to use the same email to access Pupil Cloud and the Invisible Companion App. You need to sign up for Pupil Cloud, and then simply connect your Companion Device to Wi-Fi when you first use the App. The Companion App and your Cloud account will then be synced automatically. I recommend having a look at these instructions on how to make your first recording with Invisible.
@user-480f4c Thank you, Nadia, for this information! The thing is that we plan to buy one Invisible and borrow another one from a project partner. So, going by the instructions, it won't be possible to link data collected with the borrowed glasses to the Cloud account that is connected to the glasses we own, will it?
You can simply log in with your email/password in the borrowed Companion Device and the App will reflect the information of your account (e.g., your workspace/wearers/templates)
@user-480f4c Great! Thank you so much for sharing this valuable information with me. We would love to buy more than one, but the budget is too low.
Happy to help @user-61e6c1! I hope you have a great experience collecting data with Pupil Invisible.
Hi, I am trying to download recordings for further analysis in iMotions. For that, a recording needs to contain files such as gaze ps1.raw, gaze ps1.time, PI world v1 ps1.mp4, extimu ps1.raw, extimu ps1.time, and info.json. I tried different types of downloads but don't see the required files. Where can I find this type of download? I'll be grateful for your prompt response.
Hi @user-35fbd7, thanks for reaching out! Are you trying to download a Pupil Invisible recording from Pupil Cloud?
yes
Great! On Pupil Cloud, right click the recording and select "Native Recording Data".
I don't see this option
Do you see "Pupil Player format" then?
Yes, I see that one
Yes, that's the one. Pupil Player format contains the raw data (e.g., gaze ps1.raw, etc.).
but iMotions said it is the wrong format
Could you try to download the raw data onto a folder that isn't linked to OneDrive? OneDrive has caused some unexpected behavior in the past.
It's already OK now. I see where my mistake was. Thank you for the support!
Hi! Is it possible to download blink data and pupil size data for a Pupil Invisible recording? I tried the pl-rec-export code on GitHub; it gives me a file called blinks.csv, but it is empty.
Hi @user-5be4bb, you can download blink data on Pupil Cloud. Right-click on the desired recording and select "Timeseries data". Pupil size data is unavailable for Pupil Invisible. It is not part of the data stream.
Hi @user-07e923, okay thank you for your help !
Hi, I'm new to Pupil Labs eye tracking. Seems like a great system! Is it possible to have two different users on the same set of glasses/phone, for two different scientific projects? In my case: - The phone is already set up with a different user who has her own Pupil Cloud workspace connected. - I want to have a different Pupil Cloud account (my own) connected as well, to the same device, so that we do not need access to the same Pupil Cloud account. It's about data safety and not risking messing things up, so it's quite important. - Currently, when I open the Invisible Companion app, she is logged in with her personal workspace (on her personal Pupil Cloud account) (picture 1). - If I try to change workspace, it seems I can only add extra workspaces on the same account and not link the app to a different Pupil Cloud account, i.e. a different user altogether (picture 2).
Is it correct that this is not possible and the same phone can only be linked to one Pupil Cloud account, or did I miss some option or setting? Thanks!!
Hi @user-f6ea66! Thanks for reaching out and welcome to the community!
Let me see if I understand correctly:
Your friend (let's call her User1) has an account on Pupil Cloud with email [email removed]. She uses this email to access the Invisible Companion App, and the recordings she makes are uploaded to her workspace, User1 Workspace.
You are User2; you already have an account on Cloud with email [email removed], and you want to upload your recordings to User2 Workspace.
In that case, you simply need to log out from the Invisible App if your friend is logged in, and log in with your own email and password. Then you'll be able to upload any recordings you make to your workspace.
100% correctly understood, and thanks for the welcome!
I just don't see a log out button, but I'm used to a samsung android phone, so I might be a little 'dyslexic' here.. I'll look some more.
(ah, at the bottom of the settings menu)
Ok, that worked perfectly, thank you. But, now I made two test recordings as a new user and it automatically uploaded them to my Pupil Cloud, which is excellent. However, it won't let me create a project or view either of the videos? It doesn't seem like it's still processing them (the upload is complete and there seemingly isn't any other activity going on in the Pupil Cloud). Not sure why the recordings would have errors. I just put the glasses on, used a computer for a short while and stopped the recording again?
Hi, I have the exact same upload problem that @user-f6ea66 is facing as of now: for the last hour or so I cannot upload any recordings to the cloud using pupil invisible. The recordings are fine on the phone, but on the cloud they show the same error, cannot be replayed/downloaded, show blank snapshot images on the list, etc.
(by the way, thanks for the super swift and helpful reply)
@user-f6ea66 @user-ab605c Can you create a ticket in the troubleshooting channel so that we can assist you better?
Hello! I am a researcher using Invisible products. I uploaded a video shot with the eye tracker to Pupil Cloud, but after the upload and processing were complete, I tried to download the blink information for the video and got an error: the download showed up as an empty folder and the video could not be played. I would like to re-upload the video to Pupil Cloud; is there any way to do that? Even if I remove the video from Pupil Cloud, it doesn't seem possible to upload it again from the mobile phone app, because it has already been uploaded. Is there another way?
Hi @user-ce2025, thanks for reaching out! It's not possible to re-upload recordings from the app. Could you tell me what sort of error you saw? Also, were you trying to download the Timeseries data or the Pupil Player format data?
Have you deleted the recording from Pupil Cloud? Could you check if it's still in the trash? If so, can you restore it and provide the recording ID so that I look into what the issue is?
To check the trash: click on the 3 dots, then select "show trashed"
Thank you. However, we have never moved the file to the trash. The recording ID you asked about is 91b9db68-3361-4b0d-ad7d-11600b951799. Please confirm.
Hi again, thanks for fixing my previous issue! I am now trying to create Marker Mapper enrichments for my test videos. This will be essential to my research. I can see that I need AprilTags, which I googled, and I found some GitHub libraries for related things. However, I'm not sure how to proceed. First of all, I want to print out 4 markers to position around a computer screen. Are there some optimal/suggested markers for this? (Recordings will be one hour long, and I want to know where people look on the screen during that hour.) In your brief guide that appears when I press Marker Mapper (see picture), you refer to learning more "here" twice, but there are no links or similar:
Hi @user-f6ea66! Thanks for spotting this!
You can find the markers and instructions on how to use this tool on our Marker Mapper documentation.
Thanks!
I have another question of a different sort.. I intend to do an educational/health sciences study but without any patients. Only voluntarily participating medical students. So only "medical student" eye data, so to speak. The videos of their eye movements are included in the upload to the pupil cloud. My group and I are wondering (since Pupil Labs seems to be used in many research projects) if you have some sort of data security statements or agreements or other "legal" considerations in place already, regarding getting ethics/data security approval for studies? Thanks.
Hi @user-f6ea66. Pupil Cloud is secure and GDPR compliant. You can read more about the privacy policies and security measures in this document.
Excellent. I will take a look : )
Hello! We use Pupil Invisible for our study, in which we have two people conversing. We want to align (synchronise) the two recordings. The phones themselves don't seem to be synchronised: it appears in the Cloud that we started the two recordings minutes apart from each other, when in fact we started them seconds apart, so the UTC timestamps are not reliable. So we decided to use an auditory trigger. However, in the Cloud it is extremely laborious to find this trigger to the millisecond and add an event, since we cannot visualise the spectrogram like in an audio analyser. One idea we had was to download the scene videos and find the auditory trigger (a Hollywood clap) using the spectrogram. However, this time point does not seem to align with the point where we hear the clap in the Cloud video (i.e. the Cloud video has a lag). Would you have any solutions for us, assuming it's not possible to upload our cut videos back into the Cloud for analysis? Many, many thanks!
Hi @user-7a517c - I'm stepping in for @user-07e923 here briefly!
I'd like to understand your setup better, and perhaps we can offer some concrete ways to synchronise your existing recordings, as well as advice for future recordings.
I'm assuming the recordings you already have are important and you'd like them synced, so let's tackle those first.
Using UTC timestamps from two Companion Devices to synchronise recordings is a valid approach. However, it only works if the two devices were synced to the same master clock/NTP server before making the recordings, as outlined in this section of the docs.
So my first question is: were you able to do that? If not, there may be a temporal offset between the devices' UTC timestamps. (Note that it should still be possible to figure out the temporal offset via your clap signal!)
Hi @user-7a517c, can I check whether the video playback in the app (on the device) has similar issues to what you've described?
Hi @user-07e923 , how can we check this?
On each of the devices, navigate to the Invisible Companion app, then tap the recordings and play them back on the phone.
I see, but the devices don't allow me to navigate to exact milliseconds so I cannot really find the time point at which the clap occurs
Between the downloaded videos and the Cloud videos the lag is small (probably around 0.5 seconds), but due to the nature of our study we would like to synchronise the recordings to the millisecond as much as possible.
I see. Invisible's scene camera operates only at 30 Hz, which translates to 1/30 = 0.033 seconds. So this won't be as precise if you want millisecond precision. We recommend synchronizing both the devices using the real-time API or with something like Lab-Streaming Layer.
Please note that the scene camera sampling rate doesn't affect the audio, because the audio is sampled at a different rate and is stored in the audio channel.
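For future recordings, here's a rough sketch of sending one shared event to both devices with the real-time API. The IP addresses are hypothetical placeholders; you'd read the real ones from each phone's streaming/API screen, and both phones need to be reachable from your computer on the same network:

```python
# Send the same event, with one shared wall-clock timestamp, to both
# Companion Devices over the real-time API (pip install pupil-labs-realtime-api).
import time

from pupil_labs.realtime_api.simple import Device

# Hypothetical addresses; replace with the ones shown on each phone.
devices = [
    Device(address="192.168.1.21", port="8080"),
    Device(address="192.168.1.22", port="8080"),
]

timestamp_ns = time.time_ns()  # one common timestamp for both recordings

for device in devices:
    device.send_event("sync-trigger", event_timestamp_unix_ns=timestamp_ns)
    device.close()
```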
As for the data you already have, here's a suggestion:
If you already know at which timestamp the clap happens, you can send events programmatically through our Cloud API. Find the timestamp of the clap, then use the API to send an event to both recordings on Pupil Cloud. Edit: you'll need to request a developer token to use the Cloud API.
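And in case locating the clap to the millisecond is the hard part, here's a rough sketch (not an official tool) that finds the loudest transient in the downloaded scene video's audio and converts it to an absolute timestamp. The start-time value is a hypothetical placeholder; take the recording's start time from info.json (the exact field name may differ in your export):

```python
# Locate a clap in the scene video's audio track and turn it into an absolute
# timestamp. Assumes mono audio was extracted first, e.g. with:
#   ffmpeg -i "PI world v1 ps1.mp4" -ac 1 clap.wav
import numpy as np
from scipy.io import wavfile

# Hypothetical placeholder: recording start time in nanoseconds from info.json.
RECORDING_START_NS = 1714730297000000000

rate, samples = wavfile.read("clap.wav")
samples = samples.astype(np.float64)

# The clap should be the loudest transient in the track.
clap_index = int(np.argmax(np.abs(samples)))
clap_offset_s = clap_index / rate

clap_ns = RECORDING_START_NS + int(clap_offset_s * 1e9)
print(f"Clap at {clap_offset_s:.3f} s after recording start -> {clap_ns} ns")
```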
Hi, I have a problem with visualizing AOI Heatmap. I always get the message "Internal Server Error" How can I solve the problem?
Hi @user-3c26e4! Can you please create a ticket in the troubleshooting channel? We'll assist you there in a private chat.