The csv file is exactly what I want, but I couldn't find the gaze/fixation data on the reference image. According to the output, it seems to only export data for the scanning video and the original video (main gaze video).
Hi @user-cc9ad6, that looks like the time series data. Could you click on the right side panel, where it says "Nimat" under "Enrichment Data"? That should get you the right data.
@user-cc9ad6 which folder did you download? If you download the enrichment data "Nimat" (right panel on your Downloads view), you should be able to get the data you need.
I found that! Thanks a lot!
Hi, do you know why sometimes we can't calibrate? As you can see, the 'Adjust' button is greyed out.
Hi @user-cc9ad6, can you please clarify if the glasses are connected and if the scene and eye cameras stream fine?
Forgot to connect the glasses... Thanks very much, Nadia! Hope you have a nice day!
Hi, I was trying to run the code at https://github.com/pupil-labs/real-time-blink-detection/ on my recording with the Pupil Invisible, since the documentation said that it works for both Invisible and Neon. However, the code doesn't run. Is there any modification I can make to solve this problem? Waiting for your answer, thank you a lot.
Hi @user-3e88a5! The real-time API for Pupil Invisible does not stream the eye cameras, so you won't be able to detect blinks in real time.
You can use the blink detector to compute blinks offline, but in that case I'd recommend that you use this code.
Hi, I am using the Pupil Invisible glasses for a study. In the Invisible Companion app, I am able to see the eye movements and gaze with the red circle, but when the data is exported to a computer, the red circle is not there anymore; instead, the eye gaze is exported in 2 different files. I was wondering if there is a way to export the real-world recording with the red circle (the gaze)? Also, I have downloaded Pupil Player and other plugins, but I am not able to view any of the recordings in Pupil Player. I would love some clarification, thank you in advance!
Hi @user-b434c8! The raw data does not include the gaze overlay in the video directly. You can:
1. Create such a visualisation in Pupil Cloud using the Video Renderer visualisation and download the video.
2. Load the recording in Pupil Player (note that timestamps and other data are converted), then add visualisation plugins and export it. Note that to load recordings in Pupil Player, you first need to export them from the Companion app.
3. Create a custom visualisation using the gaze data and video in any programming language of your choice.
Let us know if you need help with any of this.
Hi, I drew AOIs after using the Reference Image Mapper, and now I want to find the pixel positions of the AOI areas, but it seems that the Cloud doesn't offer that?
Hi! I'm having issues logging into my Pupil Cloud account. Everything was working well while I was logged in. After logging out to check another workspace, I'm unable to log into either of them. I don't receive any error messages. When I click the login button, the box refreshes, but nothing happens. I tried resetting my password, but when I enter my email I get the message "unable to send the email". Is there maintenance going on? Or is it just me?
Hi @user-fb64e4! Did you sign up with Google OAuth or did you use an email? Could you please let us know, either here or by email [email removed], what email address you are using to log in? Thanks in advance.
Hi @user-cc9ad6! I will pass on the feedback. There is a way to obtain the mask as a base64-encoded image using the Cloud API, but it is a bit cumbersome, as you will then get some sort of binary mask.
This would be the endpoint: /workspaces/{workspace_id}/projects/{project_id}/enrichments/{enrichment_id}/aois/{aoi_id}. If you are unfamiliar with the API and not too keen on programming, I would recommend looking at our custom consultancy options.
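For anyone curious what that call might look like, here is a minimal sketch in Python. The endpoint path is the one quoted above; the base URL, the auth header, and the name of the mask field in the response are assumptions, so treat this as illustrative rather than a working recipe:

```python
import base64
import requests

# Hypothetical sketch of fetching an AOI and decoding its mask. The base
# URL, auth header name, and "mask" field are assumptions; check the
# Cloud API docs for the real values.
API_BASE = "https://api.cloud.pupil-labs.com/v2"  # assumed base URL
API_TOKEN = "<your-api-token>"
workspace_id = "<workspace-id>"
project_id = "<project-id>"
enrichment_id = "<enrichment-id>"
aoi_id = "<aoi-id>"

url = (
    f"{API_BASE}/workspaces/{workspace_id}/projects/{project_id}"
    f"/enrichments/{enrichment_id}/aois/{aoi_id}"
)
resp = requests.get(url, headers={"api-key": API_TOKEN})
resp.raise_for_status()

# Decode the base64-encoded binary mask and save it to disk.
mask_b64 = resp.json()["mask"]  # assumed field name
with open("aoi_mask.png", "wb") as f:
    f.write(base64.b64decode(mask_b64))
```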
Thanks a lot, I will try!
Thank you for the update! I exported the videos from the app, but I am still not able to view them in Pupil Player. Unfortunately, due to privacy concerns and the nature of the study, I am not able to upload the data to Pupil Cloud.
What is the issue that you are facing loading the recordings in Player? Does it say anything?
Thank you for this! What's a good email I can invite you or Miguel with?
I have sent you a friend request and I will follow up there with my email
Hi, I am new to pupil labs and want to figure out how to use the demo videos for practice before I work with actual videos recorded through the device. When I open the cloud and try to add an event (so I can experiment with enrichments) it says "disabled by permissions". Is this normal?
It would say "no recordings found". I'm trying to upload the mp4 files.
Did you follow these steps to export and transfer the recordings to your computer? If so, you need to drag the whole recording folder onto the grey window of Pupil Player.
Hi @user-b9d000 ! That is normal, the demo workspace is available to everyone, so we disable certain functions to prevent this demo from becoming a mess. You should be able to create events in your recordings.
is there a way I can make my own copy of the demo to work with and explore features?
No, there is no way to do so 🙂. May I ask what you are missing/want to explore that is not currently in the demo space?
I see. Yes, I wanted to try the enrichments and auto-detection features on some test videos. Is there any way to do this?
Hi! I would like to know the uncertainty with which the duration of each fixation is estimated. Where can it be found? Thanks!
Hi @user-c07f5f! There are no uncertainty values for the duration of fixations. You can read about the algorithm employed here and even modify it if you need to.
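As an aside, while no per-fixation uncertainty is reported, you can at least inspect the distribution of durations yourself. A minimal sketch, assuming the Cloud download's fixations.csv uses a "duration [ms]" column (adjust the column name if your export differs):

```python
import pandas as pd

# Summarize fixation durations from a Pupil Cloud download.
# Assumes fixations.csv has a "duration [ms]" column.
fixations = pd.read_csv("fixations.csv")
durations = fixations["duration [ms]"]

print(f"fixations:     {len(durations)}")
print(f"mean duration: {durations.mean():.1f} ms")
print(f"median:        {durations.median():.1f} ms")
print(f"std:           {durations.std():.1f} ms")
```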
thanks!
I have and it's still not working. Is there anyone I can call to help troubleshoot this?
Hi @user-b434c8! Sharing the 'player.log' will help us figure out what's going on. Please try to load a recording into Pupil Player, then search on your computer for 'pupil_player_settings'. Inside that folder is the 'player.log' file.
Hey @user-b9d000 🙂. So, while the demo workspace isn't user-editable, it does demonstrate a lot of Cloud's capabilities in terms of data logistics, aggregate analyses, and so on. Our aim with this was to showcase what's possible and enable people to replicate it using their own eye trackers and studies. Invisible is fairly easy to use, so I'd definitely recommend making some test recordings with it and experimenting 🙂
Thanks! Would it be possible for me to upload an older video recorded with Pupil into Pupil Cloud? I haven't been able to successfully upload any videos as it doesn't seem to be permitted.
Hi Neil, I am physically not able to upload or even drag the files or folder into Pupil Player. Even after saving the folder to the desktop, it still won't let me load it there.
Hi @user-b434c8 🙂! What version of Windows are you using? Is it some sort of virtual machine? Could you try installing Pupil Player with admin rights?
Hi Neil, as mentioned, I am physically not able to upload or even drag the files or folder into Pupil Player, even after saving the folder to the desktop; that's my main issue. The Pupil Player log says this (see attached image).
Hi, could you please recommend a device other than the Anker PowerExpand that would let me charge the Companion phone from a power bank while running gaze measurements with Pupil Invisible?
Hi @user-3c26e4 🙂! This is the only one that we have tested properly. That said, any USB-C hub with a port that supports both Power Delivery and data transfer should work.
Hi, when I create a reference image using a scan and an image, can I use it with pl-dynamic-rim for different videos (taken at the same spot)? Also, when I download the data for the enrichment, I get a gaze.csv file. Which gaze does it refer to? It only contains the scanning video, which has no gaze in it.
Hi @user-1af4b6! I am not sure I am following your first question. You can add multiple recordings to your Reference Image Mapper enrichment and use pl-dynamic-rim with all the videos of the same enrichment.
The neural network will attempt to provide you with gaze data even when you are not wearing the glasses.
You can discard that data, as it has no value. In fact, in that file you will probably see a "worn" column set to False, indicating that nobody was wearing the glasses; the gaze returned by the neural network will also always be around the center.
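If you want to drop those samples programmatically, here is a minimal sketch, assuming the gaze.csv export with the "worn" column mentioned above:

```python
import pandas as pd

# Keep only gaze samples recorded while the glasses were actually worn.
# Assumes gaze.csv from the Cloud export with a boolean "worn" column.
gaze = pd.read_csv("gaze.csv")
worn_gaze = gaze[gaze["worn"]]

print(f"kept {len(worn_gaze)} of {len(gaze)} samples")
worn_gaze.to_csv("gaze_worn_only.csv", index=False)
```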
Hi @user-b9d000! Could you elaborate on what you mean by not being permitted? Do you have Cloud upload enabled in the Companion app settings?
Yes, so I do not have the app yet; the video was recorded by a previous researcher who has it on their computer but can send it to me to analyze. Would I need to get the app to upload that video?
Recordings can only be uploaded from the Companion Device. If your collaborator has it in the Cloud, they can invite you to their workspace.
Thank you so much
Hi guys, I need to re-install Invisible Companion on one of the devices. It doesn't appear in the Google Play Store any more. I tried to download it from the link on the website, and it says that the Android version is not compatible. The version on the device is 13. It was working fine before. Could you tell me how to install it again? Many thanks.
Hi @user-1e8b04 🙂! The Invisible app is optimized to only work with certain Android versions. If the device is on version 13, then it needs to be rolled back to an older Android version. Check our instructions for rolling back the device, then try the Google Play Store link again and let us know if that has cleared things up.
Hi Rob!! Thanks so much! We will do it! Many thanks!
Hi! I'm using Pupil Invisible for a research project and I need the heatmaps, but in Pupil Cloud, when I run the Reference Image Mapper enrichment, it seems to start and then the progress remains at 0%, even after 2 hours. I think I did everything necessary: I chose the video, the reference image, and the start and end events. Is there something I could do?
Hi @user-72baa8 🙂!
First of all, could you do a hard refresh of the tab? Some web browsers aggressively put tabs to sleep, so they don't get updated. If it is still at 0% after that, there are a few things to consider.
The time it takes an enrichment to process depends on the length and number of recordings being enriched, and on the queue of enrichments running at the moment. I.e., if many people submit jobs, they go into a queue and are processed as soon as processing power is available.
If, after refreshing the webpage and waiting some time, it still seems stuck, please reach out to us with the enrichment ID and we will check it. The ID can be found in the enrichment view by clicking on the 3 dots next to the enrichment name.
In particular, I'm recording the videos from a fixed point from which I observe the surrounding environment, as I need to see what can be observed from a certain spot, so in theory the observer should not move. Is this okay?
Thank you for the answer. I tried refreshing the webpage and waiting some time, but nothing changes and it's still at 0%. There is only one recording to be enriched right now and it's only 11 seconds, so it's not long. I will send you the enrichment ID, thank you: ca2eb5e3-df2c-4c98-92c1-891cb2892283
6ca9bdac-cd64-4322-b927-aec686ff2cdc This is another one, of 20 seconds with more fixations; maybe it is more complete if you want to check this one.
Hub 10Gbps Typ C Hub 100W Power Delivery mit 4 Datenports USB C 3.1 Gen2 Ultra Slim Datenhub 5 in 1 USB C Hub auf USB C Adapter für MacBook Pro/Air M2 M1, iPad Air/Pro, Dell, HP, Surface https://amzn.eu/d/9dqKKdQ
Is this hub going to do the job for Pupil Invisible, so that the Companion can be charged by a power bank while recording with Invisible?
Hey @user-3c26e4! We can't say for sure that it'd work. Not sure if you've seen this already, but we have a link to one that we already tested in our docs: https://docs.pupil-labs.com/invisible/hardware/using-a-usb-hub/#using-a-usb-c-hub
Hi, sorry to write again. This is the ID of one Reference Image Mapper enrichment I'm trying to process, but it's not working (it's always at 0%). Could you tell me if there is something wrong with the recording video or the reference image? 63f99dbd-c658-479b-9114-3c579ebd077d Thank you
Hi @user-72baa8 ! You need at least 2 videos in your project to be able to perform a successful reference image mapper.
These enrichments seem to have failed quite early. Would you mind inviting me to your workspace so that I can check the videos and give you more feedback on why they might have failed?
Yes, of course. Do you mean two videos of the same environment as the reference image? How can I invite you to the workspace? I mean, under which name?
I mean a scanning recording and a regular video. You can do so by inviting this account: [email removed]
But how can I put a regular video into Pupil Cloud? I tried but I couldn't. Do I always need to do that with the Pupil Invisible Companion app?
Sorry, it seems I expressed myself wrongly. You need a scanning recording and a regular recording, mainly because the scanning recording does not contribute to the Reference Image Mapper heatmaps or AOI heatmaps. Both videos need to be made with Pupil Invisible.
I have now looked at your workspace. Firstly, I cannot find those enrichments you refer to as being at 0%. Perhaps you can point me to them? All of your enrichments seem to have failed and show an error warning, letting you know that they failed because of the scanning recording and reference image.
It is expected that they failed, as you did not move at all during the scanning recording. You need to capture the area you want to scan from different perspectives. Please have a look here for examples and best practices when performing a scanning recording.
Also, this video shows you how to perform the scanning recording. It hasn't been updated with the new Cloud interface, but you can get the idea. https://youtu.be/ygqzQEzUIS4
I sent you the invite.
Yes, now they show the error warning, but only after I closed Pupil Cloud; before, they were just loading at 0%. Sorry if I keep asking, but I don't really understand the difference between the scanning recording and the regular one. If they are both taken with the glasses, are they not both scanning videos?
I replied to your DM, but I will post it here, too, in case somebody has the same question as you do. You need two types of recordings, i.e., normal eye-tracking recordings and the scanning recording.
Both have to be taken with the glasses, but you can make the scanning recording with the glasses in your hand, as the eye tracking data won't be used.
Scanning recording: The purpose of this video is only to create a 3D model of the environment that we can later use to find the image you provide us and match it. For that, you need to capture the environment moving around and record it from different perspectives/angles. This recording has to be less than 3 min long.
Normal eye-tracking recordings: After that, you can have as many normal eye-tracking recordings as you want. They can last as long as you need, and you can be static, if you will. We will try to find the reference image on those recordings based on the image and the scanning recording you provided us.
Hi, besides your website, do you offer any workshops or classes teaching how to use the eye tracker?
Hi @user-58c1ce 🙂! Yes, if you need more than the documentation provides, we offer training in the form of workshops. Please check our support page for more information. If you purchased the device after April 2022, you also have access to a free 30 min onboarding call, if you have not used it already.
Hi, I have a problem with recording. The app stops recording with the announcement "Sensor Failure. The Inertial Measurement Unit has stopped providing data." It happens 12-20 seconds in. When the Invisible is connected to the phone, the screen flickers, so it doesn't work at all. I haven't used it for several months, and I was asked to update the app. Could that be the reason? Or what should I do?
Hi @user-35fbd7! Could you please write to [email removed] so that we can follow up with some debugging/troubleshooting steps?
Yes, I am writing.
Hi, I have a major problem with the automatically generated screenshot in Pupil Cloud, onto which I'm defining the areas of interest. This screenshot is made only at the beginning of the video, while riding a bicycle. For the first event section this is okay. In the same video I have another event section starting from the middle of the whole video, and there I want to define other areas of interest according to the different view. Unfortunately, a screenshot appears from the beginning of the video, even though I set the new start and end events. This doesn't make sense, because I cannot define the areas of interest according to the new view. How can this be done? If this is not possible, then I would advise you to implement it, because it's a very important thing for research. Thanks for your answer in advance.
Hi @user-3c26e4 ! Are you using the reference image mapper?
There is something else: why can't I extend the four points of the surface boundary to cover the whole visible picture?
No, I am using the video recording. Can Pupil Cloud generate a screenshot according to other events in the video? If not, can I cut the recording somehow? How can I use the Reference Image Mapper?
I guess, if you are not using the Reference Image Mapper, then you are using the Marker Mapper, ~~but I doubt you used markers while riding on the bike. Is that the case?~~
Just to be sure we are on the same page, to define Areas of Interest in the Cloud, you currently need to either use the reference image mapper enrichment or the marker mapper.
Both enrichments serve to remap the gaze from the scene camera onto a picture or surface (i.e., a 2D space). The difference is that one requires you to use AprilTag markers (some sort of QR codes), and the other needs you to scan the environment and provide a picture.
In the case of the Marker Mapper, the image is selected from the content within the surface (as defined using the markers) between the start and end events. For the Reference Image Mapper, you provide the image.
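To give a flavour of the marker side of things, here is a minimal detection sketch using the open-source pupil-apriltags package. It only illustrates how AprilTags are found in a frame; it is not the Marker Mapper enrichment itself, and the tag36h11 family is an assumption:

```python
import cv2
from pupil_apriltags import Detector

# Detect AprilTag markers in a single scene frame.
frame = cv2.imread("scene_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector expects grayscale

detector = Detector(families="tag36h11")  # assumed tag family
for tag in detector.detect(gray):
    print(f"marker id {tag.tag_id} at center {tag.center}")
```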
Yes, I am using the marker mapper and as you can see I have markers.
Hi, I just inserted the lenses, and on the first recording, this message showed up: "Recording error. We have detected an error during recording!" The second message is a pop-up window titled "Sensor failure" and reads: "The Inertial Measurement Unit has stopped providing data. Please stop recording. Unplug the USB from your Pupil Invisible Glasses and then plug it back in. If this behavior persists, it may indicate a Glasses or Companion Device issue. Reach out to our support team for help." Can you assist with it, please?
Hi @user-58c1ce! Could you please contact us through info@pupil-labs.com with your glasses' serial number? We will follow up with some troubleshooting steps, but this sounds like a sensor may have been damaged while changing the lenses.
Hi! I'm interested in using the face mapper enrichment in research. What we want to do is determine where on the face an individual is looking, for how long, etc. and compare this across participants. I have two questions: 1) It seems the best way to do this would be to compare fixation coordinates with the coordinates of the face features (e.g., left eye, right eye) and see when they match. Just wanted to check that this is the best way to do this and I'm not missing some kind of data that actually tells you where on the face an individual is fixating. 2) In the face_detections readout, it lists the coordinates of facial features as they change in real time. I was wondering about the temporal granularity of these coordinate changes -- basically, what triggers the recording of a new set of coordinates for the features? Is it when the x and y coordinates of the bounding box rectangle change?
Hi @user-05d680! My colleague @user-480f4c is working on something like that for an Alpha Lab article. We will be able to share more details in the upcoming week. Regarding the second point, face location and landmarks are computed for every scene camera frame, i.e., at 30 FPS.
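Regarding the first point, until the article is out, here is one rough way you might match fixations to face detections. The column names below ("timestamp [ns]", "fixation x [px]", "p1 x [px]", and so on) are assumptions about the export format, so verify them against your own face_detections.csv and fixations.csv:

```python
import pandas as pd

# Pair each fixation with the nearest-in-time face detection, then test
# whether the fixation lands inside the face bounding box (p1 = top-left,
# p2 = bottom-right corner). Column names are assumptions; check your export.
faces = pd.read_csv("face_detections.csv").sort_values("timestamp [ns]")
fixations = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")

merged = pd.merge_asof(
    fixations,
    faces,
    left_on="start timestamp [ns]",
    right_on="timestamp [ns]",
    direction="nearest",
)

on_face = (
    merged["fixation x [px]"].between(merged["p1 x [px]"], merged["p2 x [px]"])
    & merged["fixation y [px]"].between(merged["p1 y [px]"], merged["p2 y [px]"])
)
print(f"{on_face.sum()} of {len(merged)} fixations landed on a face")
```

The same bounding-box test could be repeated against individual landmarks (e.g., a small region around each eye) to break the result down by facial feature.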
Hello Pupil Labs, we have finished collecting data and are working on trying to analyze it but have a couple of questions:
Hi @user-b9d000 🙂. Invisible recordings can only be uploaded to Cloud from the Companion App for reasons of robustness, stability, and data security. It's not possible to modify them outside of Cloud and then re-upload them. As regards tracking a person in the scene video, you might want to look at this Alpha Lab article we wrote 🙂
Hey there, I'm having trouble downloading scene videos from Pupil Cloud. The videos display properly in the browser, but when I attempt to download the scene video, no video file is downloaded. Does someone know what might be the issue?
Hi @user-ed2612 🙂! Which browser are you using? And which download link are you clicking when that happens?
Hi Rob, I'm using Safari and the download button that shows up when you right-click on a video in Pupil Cloud. Switching to Chrome resolved the download issue. Thanks!
Hello, when we've been using the glasses, after 1-6 minutes the phone vibrates and we get this error message; the recording stops (it won't continue recording) and won't save.
Hey @user-14536d 🙂. Please reach out to info@pupil-labs.com and someone will respond with some debugging steps!
Hello Pupil Labs, your answers have been immensely helpful. Can we run the Reference Image Mapper or something similar in Python? This tutorial exists (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/), but it requires us to use Pupil Cloud's Reference Image Mapper beforehand, which we cannot do because we have been correcting the gaze in Pupil Player. Is there any way we can use the Reference Image Mapper outside of Cloud?
In addition, because we are using Pupil Player to correct videos saved in Pupil Cloud, we were wondering if the fixations automatically calculated in Cloud will also be transferred to Player and corrected. I am aware that Pupil Player does not compute fixations for Invisible, which is why it would be great if there were a way to preserve the enrichments/corrections to the fixations computed by Cloud even after moving to Pupil Player (is merging the fixation dataframes in Python a viable option prior to gaze correction?)
Hi all. I ran a study that yielded a very high average fixation duration (~1500 milliseconds) for several participants using Invisible. Is it possible for you to take a look at some of our recordings and see if the tracker failed?
Hi @user-27eb9f! Would you mind inviting us to your workspace to have a look at this data? To invite me to your workspace, please click on the dropdown next to the workspace name, select Invite Collaborators, and type my email [email removed]
Hi... the enrichment I've just created has been running for 1 hr... is that normal?
Does anyone know if the blinks detected by Pupil Labs count winks? Would a blink only be a blink if both eyes are closed? What if they close one eye only (a tic or something)?
The blink detector does take both left and right eye video frames as input. You can read about the algorithm in the blink detector whitepaper. Do you have a specific use-case in mind?
Hi! I'm using the realtime API to stream video from the scene camera, and I've noticed that some frames have a poor quality (see picture). I would guess it has something to do with the encoding, but I didn't find much documentation on this. Any advice would be greatly appreciated 🙂
Hi @user-3c0775! Could you provide some further information about the network you're running this on? Is it an institutional network/an environment with lots of network traffic?
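For reference, here is a minimal scene-streaming sketch with the simple real-time API that can serve as a baseline while debugging, assuming the pupil-labs-realtime-api package is installed and the device is reachable on the same network:

```python
import cv2
from pupil_labs.realtime_api.simple import discover_one_device

# Stream scene camera frames and display them; press Esc to quit.
device = discover_one_device()
try:
    while True:
        frame = device.receive_scene_video_frame()
        cv2.imshow("Scene camera", frame.bgr_pixels)
        if cv2.waitKey(1) & 0xFF == 27:
            break
finally:
    device.close()
    cv2.destroyAllWindows()
```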
We work with children, and we noticed that some of our young kiddos (4 yo) partially close their eyes or blink quite often, but not with both eyes, and we were wondering if those one-eye blinks would change anything.
Interesting - thanks for explaining! I don't have a solid handle on what you might encounter in these cases. But I would expect there might be at least some false positive and/or negative blink detections. With certain participants, would it be of interest to you to manually inspect the blink detection result, for example, when viewing the eye video in parallel with the blink result?
Hey Neil, I was wondering if there is also an academic price for the Invisible glasses. I just found the regular price of €4750. I'm a psychology Master's student and would like to buy (or rent?) these for our research. Thanks in advance.
Hi @user-fbac90! Unfortunately, the price you saw was a bug on our website. It has been fixed to reflect the correct price of Pupil Invisible, which is €5900 (€5200 with the academic discount). Apologies for that!
Please bear in mind that we are discontinuing Pupil Invisible. We would highly recommend Neon instead since it offers improvements in accuracy, modularity, provided metrics, scene camera field of view and more (please have a look at the comparison table). Note that the cost difference is only 50 EUR between the two!
If you'd like to see both products in action, feel free to DM me or send us a mail to info@pupil-labs.com and we can schedule a demo and Q&A session via video call 🙂
thanks
Hi Pupil Labs!
I am trying to get real-time IMU data from the glasses. I used to do it using the ndsi lib, but now that is not working. The new real-time API seems to have a dedicated function for this, but it is blocking and not working either.
I've read in a post by Marc that the new Companion version for the smartphone will support streaming of IMU data in real time. Is that so? Is it possible to know a release date for that new phone app supporting real-time IMU data streaming?
Cheers, Tomás.