Hi, I wonder if there is a way to scrub through the recording video in Pupil Cloud with millisecond precision, to sync with some stimuli appearing in the video. Thanks! Also, is there a way to directly show the UTC timestamp in Pupil Cloud?
Hi @user-e93961! The closest thing might be to use the keyboard shortcuts "," and ".", which seek backwards/forwards with a step size of 30 ms, roughly matching the frame interval of the 30 Hz scene video.
Regarding the UTC timestamp, no, unfortunately this is currently not possible. It's an issue/request we have seen before and we are considering implementing this.
Hi @marc, thank you for your reply! So may I assume the systematic bias of this kind of sync approach is 30 ms?
Depends on what data stream you sync to. E.g. the gaze signal is recorded at 200 Hz, so it would allow for tighter synchronization, but for the scene camera that is the limitation. Although I guess the synchronization error would be 15 ms on average rather than 30 ms.
Yeah you're right, thank you! About the UTC timestamp, so the first timestamp in the world_timestamps.csv corresponds to the recording.begin in the pupil cloud, right?
Correct
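For reference, if you want to turn that first timestamp into a UTC wall-clock time yourself, here is a minimal sketch, assuming the Cloud timeseries export and its "timestamp [ns]" column (nanosecond Unix epoch timestamps):

```python
from datetime import datetime, timezone

import pandas as pd

# world_timestamps.csv from the Cloud timeseries export; one row per scene frame.
world = pd.read_csv("world_timestamps.csv")

# The first entry corresponds to recording.begin; values are UTC Unix time in ns.
first_ns = world["timestamp [ns]"].iloc[0]
print(datetime.fromtimestamp(first_ns / 1e9, tz=timezone.utc))
```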
I see, thank you!
Hi! In gaze.csv, what are azimuth and elevation?
For example, in this diagram, is azimuth the angle between the orange vector and the z axis, and elevation the angle between the orange vector and the y axis? (Imagine the eye is the scene camera.)
Hello, I would like to ask a question about post-hoc gaze calibration. These are my steps: I record my data on a Raspberry Pi, then process the data post hoc on a desktop PC. I calibrate the eye tracker on the Raspberry Pi and uncheck pupil detection during the recording. Then, on the desktop side, I select post-hoc pupil detection. My situation is: when I only select Post-Hoc Gaze Calibration, I do not get data on gaze direction. I only get gaze direction data when I press the button Detect References (see figure below). My question is: is this the right way to obtain gaze direction data, or do I need to do other steps? Thank you in advance for your instructions, have a nice day.
Hi @user-80123a! Could you first clarify what eye tracking device you are using? Are you using Pupil Invisible? Using the gaze pipeline implemented in Pupil Capture/Player was designed for Pupil Core and it is expected to deliver very poor results for Pupil Invisible.
Good morning Pupil Labs! I have a short question: how can I download the video with the red circle from Pupil Cloud to my PC?
Good morning @user-f1fcca! If you want the red circle to be rendered into the scene video, you will have to create a gaze overlay enrichment. Steps for this are: 1) create a project, 2) add the recording you want to render out to the project, 3) create the gaze overlay enrichment, 4) start computing the enrichment and wait until it finishes, 5) download the finished video.
Let me know if you have further questions!
Ahh wonderful! Can I set the starting time of the enrichment (the red circle) to be at the start of the LSL LabRecorder?
Yes. If you have the event already created, you can select it when generating the enrichment; it lets you choose the start and end events. By default these are recording.begin and recording.end, which, as you can guess, correspond to the recording start and end. If you want another event, you can also add it manually. https://docs.pupil-labs.com/invisible/basic-concepts/events/
By default the Events list has lsl.time_sync entries. Is this the right event, or should I follow the LSL time alignment procedure to get the JSON file and then create the corresponding event?
Hi @user-f1fcca! The lsl.time_sync event is generated by the relay and is used for post-hoc synchronisation; it is created at the point where the LSL stream starts. If you only want to visualise the gaze overlay from the beginning of the LSL recording, that is totally fine. The correction you link to is done to match the gaze timestamps recorded via LSL (at 60 Hz) with the ones provided by the Cloud (at 200 Hz), such that you can use the 200 Hz sampled data with the LSL time sync.
Ahh, so this event with the red arrow is the beginning of the LSL! That means it is the beginning of the LabRecorder? Because the LabRecorder works with LSL!
Hey, I know this gets asked many times, but we are having issues reliably connecting to pi.local:8080 and pi-1.local:8080. We can usually connect to either pi.local or the IP address without issues the first time, but if we close the web browser and try to open the link back up, we just get a white screen.
We are not using an institutional router, and we've reserved the IP address so we could create a URL shortcut on our desktop to quickly open it during our study.
Hi @user-2ecd13 - have you tried hard-refreshing the browser page? Out of curiosity, why do you need to close the web browser?
@nmt I've tried refreshing the browser and it just stays white. It's not even just from closing the browser; it happens when refreshing the browser, closing it, etc. We work primarily with kids, so sometimes things come unplugged, and we are just trying to understand how we can reliably get the monitoring to work (should we need to troubleshoot/close something).
For example, yesterday in testing I got both monitoring apps to work, but after closing the browsers to test again, I couldn't pull up the browser interface. The webpage just stayed white. It only worked if I opened Chrome (for the first time) and tried, but even that stopped working after closing that browser.
Thanks for confirming. I've just tested this out with Chrome + my home network and I'm unable to replicate the behaviour you describe. I can open multiple browser instances, refresh and close/reopen. Do you have any other apps running on the phone?
Hi Pupil Labs Team! I saw that it's possible to buy a "Pupil Invisible Companion" package from your website. I was just wondering: what does it include compared to simply buying a OnePlus 8 smartphone myself?
If you mean the "Pupil Invisible Companion" that is mentioned in the accessories section here, it's exactly the same, except for the added USB-C cable and the phone case. There is no functional change to the phone itself. You could absolutely buy it somewhere else! https://pupil-labs.com/products/invisible/accessories/
Hello. We share an Invisible between two labs. Can we both have our own workspace? Now the Companion is showing only one Workspace (xx Owner). Can I create a new workspace that is completely separate from the owner's workspace?
Hi @user-d209ef! Yes, you can create entirely separate workspaces within the Pupil Cloud UI. You might also want to create separate Pupil Cloud accounts for each lab (or every user) for access control.
Thanks @marc ! Could you direct me to a link / documentation on how to do this? So far I have only found https://docs.pupil-labs.com/invisible/glasses-and-companion/companion-device/
Hi @user-d209ef! Here you can find more information about workspaces: https://docs.pupil-labs.com/invisible/basic-concepts/projects-and-workspaces/#workspaces. To create a new workspace, go to Pupil Cloud, click on your profile icon at the bottom left corner of the page, and click on "Create a new workspace" in the menu.
@user-c2d375 I can create a new workspace, but how do I make sure that my recordings end up in my workspace?
You need to switch to the desired workspace in the Invisible Companion App. Tap on the current workspace name (the default one is called [name surname]'s Workspace), and then on "switch workspace" to activate it. Once you have activated the desired workspace, any new recordings will be uploaded to that workspace.
Hi, we use the Pupil Invisible for a research project, and we cannot upload the data to Pupil Cloud because some of it is sensitive. We would like to use the timestamps saved by the Companion App on the phone; we found the documentation on how they are encoded, but we are struggling to decode and read them. Could you provide us with some tips or a procedure to decode these timestamps, preferably using Python? Thank you in advance for your help!
Hi! You can still load Pupil Invisible recordings into Pupil Player for convenience, or see here for how to read them in Python: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_recording/update/invisible.py#L494 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_recording/update/invisible.py#L460
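As a rough sketch of that decoding in Python, assuming the on-phone .time files, which per the linked code store one unsigned 64-bit UTC timestamp in nanoseconds per frame (the file name below is just an example from a typical recording folder):

```python
from datetime import datetime, timezone

import numpy as np

# Each .time file is a flat array of uint64 UTC timestamps in nanoseconds,
# one entry per frame/sample of the matching sensor file.
timestamps_ns = np.fromfile("PI world v1 ps1.time", dtype="uint64")

# Convert the first timestamp to a human-readable UTC datetime.
first_utc = datetime.fromtimestamp(timestamps_ns[0] / 1e9, tz=timezone.utc)
print(len(timestamps_ns), "timestamps, recording starts at", first_utc)
```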
Hi, I am trying to change my workspace but the device (phone) is taking a lot of time to sync. Please help.
Can you try logging out of the Companion App (tap the hamburger menu, then Settings, scroll down and select Log out) and then log in again? If the issue persists after that, can you try https://speedtest.cloud.pupil-labs.com/ and let us know your results?
Hello! I was wondering, if I have a specific timestamp, is there a way to know the exact time in the video it corresponds to? For example, for the timestamp in ns 1677583793620556032, what is the closest frame of the video that it corresponds to? Thanks in advance!
Good morning! How am I able to check what point of the video a specific gaze sample refers to?
Hi @user-f1fcca! Have a look at the following how-to guide, which discusses this problem! https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
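For a quick lookup without going through the whole guide, here is a minimal sketch, assuming the Cloud timeseries export with world_timestamps.csv and its "timestamp [ns]" column (the query value is the timestamp from the question):

```python
import numpy as np
import pandas as pd

# Scene-video frame timestamps (UTC, in nanoseconds), one row per frame.
world = pd.read_csv("world_timestamps.csv")
frame_ts = world["timestamp [ns]"].to_numpy()

query_ns = 1677583793620556032  # the timestamp to locate in the video

# Index of the scene frame whose timestamp is closest to the query.
frame_idx = int(np.argmin(np.abs(frame_ts - query_ns)))

# Offset from the start of the recording, handy for seeking in a video player.
offset_s = (frame_ts[frame_idx] - frame_ts[0]) / 1e9
print(f"closest frame: {frame_idx}, at {offset_s:.3f} s into the scene video")
```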
Hello! After putting some markers on the surface of interest, I wonder if there is a way to obtain the surface coordinates during the recording. I'm assuming I would use the real-time API to get the gaze data and frame image, and process the data on the host side, like the Marker Mapper does.
Hi @user-e93961! Technically this is possible, because the implementation of the surface tracker (that is what this feature is called in the Pupil Core software; it's algorithmically the same as the Marker Mapper) is open-source. https://github.com/pupil-labs/surface-tracker
Practically speaking, though, the implementation currently has more or less no documentation and is thus very difficult to use. It's definitely on our roadmap to write documentation for the repo, but it's going to take a while until we get around to it, I'm afraid.
If you are up for the challenge, I may have a couple of additional code snippets that might be helpful. Let me know!
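As a very rough sketch of the real-time side, here is how receiving matched scene frames and gaze looks with the pupil-labs-realtime-api package's simple interface; the actual marker detection and surface mapping would then have to come from the surface-tracker repo above, which is only indicated as a comment here:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network (assumes one device is reachable).
device = discover_one_device()
if device is None:
    raise SystemExit("No device found on the local network")

try:
    while True:
        # Scene camera frame together with the gaze sample closest to it in time.
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()

        # gaze.x / gaze.y are in scene-camera pixels; frame.bgr_pixels is the image.
        # At this point you would detect the markers in frame.bgr_pixels and map
        # (gaze.x, gaze.y) into surface coordinates, e.g. with the surface-tracker code.
        print(gaze.x, gaze.y)
finally:
    device.close()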
Hi @marc! Is there a way to evaluate the gaze data (which parameters?) in different AOIs? What I mean is % gaze, number of fixations, fixation duration, frequency of gaze shifts between AOIs, etc. I only have the heatmaps for every single AOI, but I need more data.
Hi @user-7c714e! If you download the enrichment results you get gaze.csv and fixations.csv files with the data mapped onto the AOI. From these values you can calculate all the metrics you listed per AOI. Frequency of gaze shifts between AOIs might be a little more difficult to compute, but the others can be aggregated directly from the CSV files. Let me know if this is still not clear enough!
Hi @marc, thank you for the reply! I would like to have a try; could you please provide the additional code snippets?
Yes, I will DM you!
See e.g. the documentation of the Marker Mapper exports here: https://docs.pupil-labs.com/invisible/reference/export-formats/#marker-mapper
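For illustration, a small sketch of that kind of per-AOI aggregation with pandas, assuming the Marker Mapper export files and column names from the page above (e.g. "gaze detected on surface" and "duration [ms]"; please verify them against your own export):

```python
import pandas as pd

gaze = pd.read_csv("gaze.csv")
fixations = pd.read_csv("fixations.csv")

# Share of gaze samples that landed on this surface/AOI
# (assumes pandas parses the true/false column as booleans).
pct_gaze_on_aoi = gaze["gaze detected on surface"].mean() * 100

# Fixations on the surface: count and mean duration.
on_aoi = fixations[fixations["fixation detected on surface"]]
n_fixations = len(on_aoi)
mean_duration_ms = on_aoi["duration [ms]"].mean()

print(f"{pct_gaze_on_aoi:.1f}% of gaze on AOI, "
      f"{n_fixations} fixations, mean duration {mean_duration_ms:.0f} ms")
```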
Thanks @marc, I'll try it.
Hello developers, I'm Rohan. I've recently started using the Pupil Invisible for a test experiment, and I have an issue. The test goes as follows: I wore the glasses and looked at five individual dots presented sequentially on a screen, all separated by a good amount of distance.
Looking at the data/recording, it seems as if the gaze markers are off by a significant amount. Even though I'm looking at the rightmost dot on the screen, the gaze markers are stuck to the left and at most reach the second marker from the left.
I'm still an amateur, trying to learn how to calibrate, make things accurate, and ultimately learn. I would appreciate any help from the community. Thanks.
Hi @user-e2db0a. Check out this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/999986149378506832
Hi, we have used the Marker Mapper on many occasions, and I've always placed the markers in the same orientation as they are shown on the Pupil page. Do they have to be placed this way to work, or does it not matter which side is on top? We are just about to use them in a medical training research project where the clinicians will be setting up, so we want to make the process as simple as possible.
Hi @user-057596! Marker orientation does not matter. Please note that it's essential to use distinct markers to avoid any duplicates.
Thanks Eleonora, that will make it so much easier for them to use.
Hi, I am loading the exported raw data into Pupil Player but I am unable to see any video; there is nothing visible. I even tried reinstalling Player, but I get the same issue. I can see the data in the Cloud but not in Player. Can I get some help?
Hi Anuj! How are you downloading the recording? To be opened by Player, you should right-click on the recording and download it as "Pupil Player Format". If you did so, what issues are you finding? What does the download folder contain?
Yes, I am downloading it for Pupil Player.
This is the error I am receiving.
Hi @user-f01a4e. Please share the full player.log file. Search on your machine for 'pupil_player_settings' - inside that folder is the log file.
@nmt This one?
Hi @user-f01a4e. It seems that the world video is missing from what you tried to load into Pupil Player. Can you please try re-downloading freshly from Cloud?
Hello! Please tell me, in the Pupil Invisible Monitor application, should the sound from the glasses' microphone be streamed?
Hello! I would like to stream my Invisible to my PC. Works fine at home via pi.local:8080 but not at work (academic institution). Could this be due to the fact that this connection is considered "not safe" (http instead of https)? Is there a workaround?
Large public networks usually block the type of traffic that is required for streaming the data. Given that you won't be able to change the network configuration, you need to use a different network. This could be another, less strict network, a dedicated router, or a hotspot hosted by the phone or computer.
@nmt Hey, thanks for responding. I have done that multiple times and I am getting the same results.
@user-f01a4e are you using Safari?
@user-d407c1 I am using chrome
Could you please write an email to info@pupil-labs.com with the recording ID, stating the contents of the downloaded folder?
Sure I'll do that. Thank you.
Hello, I have a few questions. 1. After downloading the whole recording folder from Pupil Cloud using the API, I noticed that the timeseries data is not included, and I can't seem to find an appropriate API request to get it. 2. The recordings include files in .time format and .raw format, and I am not sure how to decode them. I would like to synchronize them and export all 3 cameras frame by frame, synchronized and labeled. I have seen the "Raw Data Exporter" in Pupil Player, which may partly solve this, but I would rather not have to use a GUI. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/raw_data_exporter.py
Hi @user-60d698! The documentation of the Cloud API is a bit lacking, unfortunately. But you can use the following endpoint to download recordings in the convenient format also used by the raw data exporter:
https://api.cloud.pupil-labs.com/v2/workspaces/<workspace_id>/recordings:raw-data-export?ids=<recording_id_1>&ids=<recording_id_2>
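A minimal sketch of calling that endpoint from Python; the api-key header and token handling here are assumptions based on how Cloud API tokens are typically used, so please double-check against your account settings:

```python
import requests

WORKSPACE_ID = "<workspace_id>"
RECORDING_ID = "<recording_id>"
API_TOKEN = "<your Cloud API token>"  # assumption: a developer token from your Cloud account

url = (
    "https://api.cloud.pupil-labs.com/v2/workspaces/"
    f"{WORKSPACE_ID}/recordings:raw-data-export?ids={RECORDING_ID}"
)

# The download arrives as a zip archive containing the exported CSV files and videos.
response = requests.get(url, headers={"api-key": API_TOKEN}, timeout=600)
response.raise_for_status()

with open("raw-data-export.zip", "wb") as f:
    f.write(response.content)
```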
Hi, I forgot to change the wearer between my recordings, and I calibrated for each wearer at the beginning. Is there a way to change the wearer afterwards so that the correct calibration compensation can be applied?
Hi @user-6e1fb1! There's no way to retrospectively apply offset corrections to different recordings. However, if you set the offset with each new wearer and then made some recordings, the offset will have been applied to those recordings.
@user-d407c1 Hi, I have sent you the mail as per your request. Kindly have a look and let me know how I can resolve the issue.
I've answered there; basically you will need the Pupil Player Format in order to use it with Pupil Player, not the raw data enrichment download.
@user-d407c1 Sorry, I think I attached the wrong one then. I'll send the Pupil Player Format.
@user-d407c1 It's working now, thanks!
@user-60d698 Okay, got it! The timestamps included in gaze.csv coincide with the timestamps of the left eye video, i.e. there should be one row per video frame.
For the right eye this is more complicated, as the timestamps are not included in this download. Instead you will indeed have to decode the .time files.
The raw sensor files are not really meant to be consumed directly, so this is a bit inconvenient. The timestamps of the left eye camera video are in PI left v1 ps1.time and the ones for the right eye in PI right v1 ps1.time.
To read them in Python you could use:

import numpy as np

# one uint64 UTC timestamp in nanoseconds per frame of the corresponding sensor file
timestamps = np.fromfile(file_path, dtype="uint64")
Thank you!
Hello again. The size of the arrays in the [camera_name].time files does not match the frame count of the MP4 videos (using ffmpeg.probe); how do you deal with this? Sometimes there are fewer frames than timestamps, sometimes there are more frames than timestamps. I need to be precise in matching timestamps to frames in order to synchronize the frames from the cameras to each other. Do all 3 cameras take their first frame at exactly the same time? If that is the case, I can just cut off the last ones.
Hello! I can't calibrate the Invisible. The red circle is not visible, so I can't adjust the gaze, and the errors in the recordings are significant. Could you advise me how to make the recordings more precise?
Can you please make sure the app is up to date in the Google Play store, then try logging out and back into the app? If that doesn't work, please reach out to info@pupil-labs.com
I followed your advice, logged out and logged in, and as a result the app doesn't reply at all. What the hell?! It is unreliable for work. I have a recording scheduled for today. What do I have to do? Reject the project? Or is there some possibility to get back to a working state?
It doesn't work. "The recording is failed. To disconnect P1". That's all.
Hello @nmt, @marc! Our glasses get hot (about 55 °C). Please tell me, what is the normal temperature for the glasses when working for an hour?
Hi @user-0a5287. The Invisible glasses do get warm, but nothing hot should ever touch the user's skin and it should never be uncomfortable. The temperature reached after several minutes is the maximum temperature, and the glasses can be used continuously for around 150 minutes before the phone runs out of battery.
Hello! Is it possible to calibrate the fixations in Pupil Player after making a recording?
Hi! Are you using Pupil Invisible? Fixations for Pupil Invisible are available through the Cloud; you can download the .csv file there and apply any offset correction you want to the reported x and y coordinates.
Yes, I am using Invisible. Can you point me to any existing code or a formula that I could use to make those corrections in the .csv file?
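For reference, here is a minimal sketch of such a correction with pandas, assuming the Cloud fixations.csv export with "fixation x [px]" / "fixation y [px]" columns and a constant offset expressed in scene-camera pixels (column names should be checked against the export format docs):

```python
import pandas as pd

# Constant offset to apply, in scene-camera pixels (estimated e.g. from a
# validation recording where the wearer looks at known targets).
OFFSET_X_PX = 15.0
OFFSET_Y_PX = -10.0

fixations = pd.read_csv("fixations.csv")

# Shift the reported fixation coordinates and save a corrected copy.
fixations["fixation x [px]"] += OFFSET_X_PX
fixations["fixation y [px]"] += OFFSET_Y_PX
fixations.to_csv("fixations_corrected.csv", index=False)
```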
Hi! I conducted an experiment with the Invisible glasses and want to upload it to the Cloud. The experiment room had limited WiFi, and thus the upload stayed at ~0%. Now the "upload circle" is just empty. When I click on "upload" no visible changes occur, and it is not uploaded to Pupil Cloud. Is there a trick to make the upload still work? I can see the whole recording in the Companion app by clicking on it; it is just not uploading. I am also now in an environment with regular WiFi strength.
Hi @user-ace7a4! In the recordings view, clicking on "upload" should be sufficient to restart the upload; please try somewhere where you have a stable connection. You can test your connection speed to the Cloud here: https://speedtest.cloud.pupil-labs.com/
I should be in an environment with a stable connection now. Following the link gives me a "Site can't be reached; the connection was reset" error.
Are you connected to an institutional network? If so, the firewall might be blocking the connection. Would you mind testing on a different network?
Ah yes, that was indeed the issue. Thanks!
Hi, I want to uninstall Pupil Player v3.4.0 so that I can install v3.5.0, but I keep getting this error prompt. Please help.
Hi @user-9c87a0. This is a Windows permission error if I'm not mistaken. Does your user have the necessary privileges to uninstall/install software?
Hey guys, I swear I saw a Pupil guide for the Invisible on running a study on mobile applications / dynamic screens. Does anyone know where I can find that?
Hey @user-cd03b7, I understand you mean the tutorial on our Alpha Lab website. Here it is: https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/ - with this tutorial you can map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the Reference Image Mapper enrichment.
Hi! I am trying to run the dense-pose Colab notebook. I uploaded the raw folder to my Drive and ran the notebook. Unfortunately, I do not get a new folder that holds the dense-pose video and CSV files. I renamed the raw folder from Pupil Cloud to something meaningful; could this be the issue? I restarted the notebook and ran it again, but I still have the same "error". Of course I adapted the path, and it seems to be correct.
Hi @user-ace7a4 Where do you get the error in Colab?
My folder structure looks like this. Do I need to restructure it? I ran the notebook again but I still do not get a new folder in my google drive.
I get the runtime error, which is, according to the comments, "normal", but other than that every cell gets a green tick. The issue I have is that I do not get the new results folder in Drive that holds the video and the two CSV files.
On the last cell, click on "Show code"; there you will have the possibility to change the output path. Please let me know if you still have issues.
While this indeed changed something (a % value indicating the current processing progress was printed), it took ~4-5 hours to reach 76% and then the notebook crashed. Is this due to a poor network connection on my end, or is this still "normal"? Also, an mp4 called densepose was created; I cannot open it and it is only 48 KB.
I think I know where the problem occurs; I will look at it tomorrow and update the notebook.
Alright, I figured it out and it worked, thank you very much! One follow-up question: In your documentation, I saw that it makes sense to cut out portions where participants are looking somewhere else (e.g., their smartphone). Do you have a rule of thumb regarding how long of a 'distracted gaze' may be acceptable? For example, if we are talking about a 60-second clip of someone looking at a supermarket shelf and one second of that is spent looking at a smartphone or a person standing next to them, would you still recommend cutting out that part or would you say it doesn't matter too much since it makes up less than 2% of the overall video length? I hope my question makes sense. Thanks!
It's great to hear that you were able to figure it out! In terms of cutting out portions where participants are looking somewhere else, there is no hard and fast rule regarding how long of a 'distracted gaze' may be acceptable. And it makes total sense as a question
The reason why we ask you to use events to make smaller sections, where you expect subjects to gaze only at the reference image, is mostly computational.
Not only will it take longer for your enrichment to finish, since it will have to localise the whole video, but also, since you already know that the participant is looking somewhere else (e.g. at the cashier), it makes no sense to compute that part of the recording.
Kindly note also that the number of points/features used to build the "3D environment" and match them between the scanning recording and the experiment recordings is fixed. Thus, if you cover a large area (for example the whole supermarket), the points will be dispersed across the whole supermarket and only a few of them will be on the shelf. When matching those points, you will have fewer points to match and the matching might be less accurate. If instead you select only a portion of the video where you expect to see the shelf, more points will be concentrated there and matching them will be easier/better.
I hope I explained myself clearly.
Hi there! I wonder if there is an easy way to get each frame in the scene video with its corresponding timestamp.
Hi @user-e93961! In the export file world_timestamps.csv you can easily access the timestamp of every world video frame.
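If you also need the decoded frames paired with those timestamps, here is a short sketch, assuming OpenCV for decoding and that the scene video from the same export has one row in world_timestamps.csv per decodable frame (the video file name may differ in your download):

```python
import cv2
import pandas as pd

timestamps_ns = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
video = cv2.VideoCapture("scene_video.mp4")  # scene video file from the same export

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok or frame_idx >= len(timestamps_ns):
        break
    ts_ns = timestamps_ns[frame_idx]
    # frame is the BGR image of this scene frame; ts_ns is its UTC timestamp in ns.
    frame_idx += 1

video.release()
print(f"paired {frame_idx} frames with timestamps")
```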
@user-ace7a4 I've updated the repo and rolled back the notebook, although I was not able to replicate the problem.
A few things. First, please kindly make a copy of the notebook if you want to modify it, rather than disabling the test mode.
Second, would you mind trying it with a small section of your video using events?
Please create events in the Cloud and download the raw enrichment again, which should include those events in the events.csv file. Then define them in the last cell as arguments using --start "event.name" and --end "event.name".
If you see any error, please let me know what the error states. If the error is a Drive timeout, you will need to not use Google Drive: upload the folder via the side bar and change the input and output paths in Colab accordingly.
If that still doesn't work, you can try substituting !python -m pip install 'git+https://github.com/pupil-labs/densepose-module.git' in the 3rd cell with !python -m pip install [email removed], which will use an older version of the pl-densepose module.
Also kindly note:
Google Colab notebooks have an idle timeout of 90 minutes and an absolute timeout of 12 hours. This means that if the user does not interact with their Google Colab notebook for more than 90 minutes, its instance is automatically terminated. Also, the maximum lifetime of a Colab instance is 12 hours.
If you want to run it locally https://github.com/pupil-labs/densepose-module
Hello, I am new here and not sure where to start, but this thread is immediately related to what I am trying to do. Short story: I have Pupil Invisible recordings that we downloaded from the phone, and we exported the relevant sections via the Player software. I've uploaded the folder to Google Drive and tried to run the Colab notebook. It took everything, but upon clicking the button to execute it, it finished immediately with no output file, no error message, and nothing in the output folder. 1) Is there maybe a demo video that I could use to make sure I am doing it right? 2) I read that one should upload data to the Cloud (which I'd need permission for first). Is that really necessary? I mean, I have exported data that should contain everything. Thanks a lot for your help!
Hi there! We discovered a problem: after every recording we get the message "You have an unsaved recording from a previous session", and we can only start a new recording after closing and restarting the Companion app. It also occurs after deleting all recordings. Is this a known problem, or is there a solution to this? I hope you can help me.
Hi @user-5f1b97! Can you try clearing the app's cache data? Hold the app icon, press App Info, then Storage and cache, and then tap the Clear cache button.
(Also, just to confirm that that isn't the culprit: I can work with the data transferred from the phone and exported via Player, right?)
That's what the override flag is meant for.
@user-c9d495 Unfortunately, the change above means no override in Colab for now. I will try to fix it and update. But if you have a Linux or Mac machine, feel free to try it locally.
@user-d407c1 Thanks for the update! Update from my side: I was able to run it and it is currently running locally (I noticed that while the Colab cell didn't show outputs, downloading the notebook and opening it in Jupyter showed errors, which helped me fix it). So, as said, it is running now. However, your message suggests that next time I disconnect (which happens regularly on Colab, as you will know), it will no longer run with the --override? That's unfortunate. Is there a chance for me to change the line "!python -m pip install 'git+https://github.com/pupil-labs/densepose-module.git'" to some older version that would be the one I am currently executing (branch, flag)?
You can save your notebook and use it. I did not modify the repo, only changed the notebook to use an older version. The line you mention will pull the latest one, which does have the override flag. Thanks for the update too!
Me again: feel free to use the latest version of the notebook, it works.
Google Colab seems to skip logging sometimes, but at least now you should see some progress bars.
I can't find an easy fix for the logging output, so I will leave it as it is; the code works and you get the output, but the display of the current state may fail.
I also added a few more parameters to choose the confidence value and the start and end events, for convenience.
Hello! Is there a way (or a roadmap item) to fix gaze offset errors in Pupil Cloud? I think we can do this offline (Pupil Player), but that means downloading all recordings and losing the event annotations we have made in the Cloud interface. Cheers!
You can check the roadmap here https://pupil-labs.canny.io/pupil-cloud
Feel free to suggest this feature
Hi, I'm new to using Pupil Labs Invisible, but I ran some successful trials with it last week. I'm struggling to get the app to sync; in particular, the wearers are not syncing and it's not uploading to the Cloud even though I'm connected to the internet. I'm still able to record footage, just not able to sync. Help appreciated!
I uninstalled the app and re-installed it, and that seemed to fix it.