Hi! I would like to analyze joint visual attention with Pupil Invisible. We are using AprilTags to detect the surfaces of the table. So far, my idea for the analysis has been to first upload the data to Pupil Cloud and use the Marker Mapper for each video to detect the surfaces and get relative x and y coordinates for the fixations. Then I am planning to export the data, but I noticed that I would probably encounter some problems with the coordinates. I am aiming to synchronize data from four people who are sitting in front of or next to each other. Any ideas on how to do that if the coordinates are reversed? I also found that I cannot open the recordings with Pupil Player; do you know what the reason could be? I get the error "No camera intrinsics available for camera world at resolution (1280, 720)!"
Hi @user-2251c4 👋. Do I understand correctly that you want to map everyone's gaze onto the same area defined by the Marker Mapper?
Also, which version of Player do you have installed?
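In the meantime, on the reversed coordinates: if one wearer's surface ended up defined with a flipped orientation relative to the others, the normalized Marker Mapper coordinates can simply be mirrored in post-processing before synchronizing the four datasets. A minimal sketch, assuming pandas and hypothetical column names (check your actual export headers):

```python
import pandas as pd

# Hypothetical Marker Mapper export with normalized surface coordinates (0-1).
gaze = pd.DataFrame({"x_norm": [0.1, 0.9], "y_norm": [0.25, 0.75]})

# If one wearer's surface was defined mirrored relative to the others,
# flip the normalized axes so all datasets share one frame of reference.
gaze["x_norm"] = 1.0 - gaze["x_norm"]
gaze["y_norm"] = 1.0 - gaze["y_norm"]
print(gaze)
```

Flip only the axis that is actually mirrored in your setup; a quick sanity check is to have each wearer fixate a known corner of the table and confirm all four datasets report it at the same normalized position.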
Hi! I want to put a high-quality picture of the Pupil Invisible glasses on a conference poster. Are any such images available?
@nmt @user-480f4c would really appreciate your help! Thanks in advance 🙏
Hey @user-eea97f 👋. Reach out to info@pupil-labs.com and we can coordinate some images for you from there 🙂
Thanks will do!
Hi there! I am using Invisible for research in my lab. We recently discovered that the eye tracking cameras on our glasses had stopped functioning. The world view camera still functions properly. I was wondering if there is anything that could be done to troubleshoot the eye-tracking cameras or if a replacement will be necessary?
Hi @user-ed2612 ! Sorry to hear this, would you mind writing to info@pupil-labs.com so we can follow up with some debugging steps and/or replacement if necessary?
Hi @user-d407c1, I would like to know if there has been any progress on real-time fixation indexing and blink indexing for Invisible.
Hi @user-a98526 ! The standalone blink detector currently works for Neon, and we are working to make it compatible with Invisible; I will let you know as soon as it is ready. You can, nonetheless, run it from here: https://github.com/pupil-labs/pl-rec-export/ EDIT: Real-time blink detection won't be possible with Invisible, since you cannot stream the eye cameras.
Hi, can I do post-hoc calibration on Pupil Cloud? I have some recordings on Cloud in which the gaze hasn't been properly calibrated; I'm wondering if I can edit them on Cloud.
Hi @user-fb64e4 ! Unfortunately, post-hoc offset correction is not possible on Cloud, but you can do so with Pupil Player https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html and this plugin https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
Hi, do we need to update to the newest OxygenOS version, or will that cause problems?
Hi @user-6d90d6 ! We recommend not updating to the latest version: https://docs.pupil-labs.com/invisible/glasses-and-companion/companion-device/#android-os
Regarding the notification, would you mind sending an email to [email removed] so we can follow up with debugging steps?
On one of our Pupil Invisibles we get the notification "System failure during recording, no connection". Is there something we can do to solve that problem?
Hi Pupil team, I received a phone (OnePlus 8) with my Pupil Invisible that was already on Android 12. I'm following the steps to go back to a previous version according to the documentation (https://docs.pupil-labs.com/invisible/troubleshooting/). However, my access is denied when I try to download the Rollback APK (I attached a screenshot). Could you help with that? Thank you!
Hi @user-20a5eb ! Seems like the page from One Plus is down, feel free to grab the APK from here https://drive.google.com/file/d/1QY0J_h9Ds-A_zNp6Rmm9Li4DgRSizj7P/view?usp=sharing
I did a recording, but unfortunately it says the recording contains no scene video... I'm assuming this is a permissions issue, but there's no chance of recovering it at this point, correct?
Hi @user-91d7b2 👋🏽 ! Is it possible for you to share the recording's raw data (Download > Pupil Player Format) with [email removed]? This might help us understand what went wrong. May I also ask if this happened with only one recording? Have you experienced this issue more than once?
Hey Pupil Labs, I have an unusual question: I am trying to use the Invisible eye tracker with a patient in a minimally conscious state. One eye of the patient is non-responsive, which I assume affects the accuracy of the gaze position estimate. Nevertheless, I am more interested in estimating the direction of the gaze shifts (up-down and left-right) than the exact gaze position. Do you think that the readings I am getting (of these shifts) are accurate?
Hi @user-5b371f 👋. Invisible's gaze estimation pipeline utilises data from both eyes to compute its gaze estimates and was not specifically trained to handle conditions like the one you describe. Therefore, if used with the patient, you can likely anticipate reduced gaze accuracy. Unfortunately, there's no definitive way to ascertain if the gaze estimates are accurate enough, as the patient can't voluntarily adopt certain eye movements. We may have a method that could assist here, although it would come with specific constraints. Please contact info@pupil-labs.com and reference our conversation.
Hello! I want to record how people interact with a website that includes scrolling. I have used AprilTags before, and I have read the "map your gaze to a 2D screen" article. My question is whether there is a way to map the gaze to the actual webpage 'space' (imagine a long webpage) so that the gaze coordinates are matched to the content, not the screen.
Is this already possible with some plugin? Or perhaps via some integration with 3rd-party software?
Thanks!
Hi @user-94f03a ! Have a look at https://github.com/pupil-labs/real-time-screen-gaze ; this will help you get gaze in screen coordinates in real time, so you could adapt it to track a web page.
Are you analyzing/using the gaze data in real-time or post-hoc? If real-time, I'd use the package @user-d407c1 linked. With it, you could either:
1. Embed AprilTag markers along the length of the page such that they scroll with it. You will need to know the x/y coordinates of each marker to give to the real-time-screen-gaze library. If you're using multiple or unknown devices/monitors/browsers to render the webpage, this could be a little tricky. For just a single, known web-browsing environment, it's simple though.
2. Fix AprilTag markers in static positions on the screen (e.g., the four corners). This is a bit more of a traditional approach, but it would require you to record the scroll position of the webpage over time and use that as an offset to the surface-gaze position to get the gaze position on the webpage content.
If you're doing it post-hoc with data from Pupil Cloud, the approach for #1 wouldn't work the same way, because I believe all the markers for a surface have to be visible simultaneously when defining a surface, which obviously won't work if you have markers that scroll in and out of view. Instead, you'd have to define multiple surfaces on the webpage. The approach for #2 would work the same, though.
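The scroll-offset idea in #2 can be sketched in a few lines of pandas. This is a minimal illustration, not a turnkey implementation: the column names, the scroll log, and the viewport height are all hypothetical stand-ins for whatever your setup produces:

```python
import pandas as pd

# Surface-mapped gaze (hypothetical columns; normalized 0-1 coordinates
# on the on-screen surface, as produced by a Marker Mapper-style export).
gaze = pd.DataFrame({
    "timestamp": [0.0, 1.0, 2.0],
    "x_norm": [0.5, 0.5, 0.5],
    "y_norm": [0.2, 0.8, 0.5],
})

# Scroll position of the page in pixels, logged by the browser over time.
scroll = pd.DataFrame({
    "timestamp": [0.0, 1.5],
    "scroll_y_px": [0, 600],
})

SCREEN_H_PX = 1080  # visible viewport height in pixels (assumption)

# Pair each gaze sample with the most recent scroll sample at or before it.
merged = pd.merge_asof(gaze, scroll, on="timestamp")

# Gaze in full-page coordinates = on-screen position + scroll offset.
merged["page_y_px"] = merged["y_norm"] * SCREEN_H_PX + merged["scroll_y_px"]
print(merged[["timestamp", "page_y_px"]])
```

The same `merge_asof` pairing works for the x axis if the page scrolls horizontally; in practice you would also want to align the browser's scroll-log clock with the recording clock first.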
Thinking about it a little more, although the realtime-screen-gaze package was developed with real-time use cases in mind, there's no reason why it wouldn't work with pre-recorded data. You would need to decode the video yourself (via something like pyav, decord, etc.) and manually pair those frames with the gaze data
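The frame/gaze pairing step can be sketched with NumPy. In a real pipeline the frame timestamps would come from decoding with pyav (e.g., `frame.time` while iterating `container.decode(video=0)`); the synthetic values below are just for illustration:

```python
import numpy as np

# Frame timestamps in seconds. With pyav these would come from `frame.time`
# while iterating the decoded video; synthetic ~30 fps values here.
frame_ts = np.array([0.0, 0.033, 0.066, 0.100])

# Gaze timestamps in seconds (Pupil Invisible gaze runs much faster
# than the scene video, so several samples map to each frame).
gaze_ts = np.array([0.005, 0.020, 0.040, 0.095, 0.102])

# For each gaze sample, find the index of the nearest video frame.
idx = np.searchsorted(frame_ts, gaze_ts)
idx = np.clip(idx, 1, len(frame_ts) - 1)
prev_closer = (gaze_ts - frame_ts[idx - 1]) < (frame_ts[idx] - gaze_ts)
idx = np.where(prev_closer, idx - 1, idx)
print(idx)  # frame index paired with each gaze sample
```

With the pairing in hand, you feed each frame plus its matched gaze samples into the same per-frame processing you would run live.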
Hello, I'm not sure if this is the right place to post this, but I have been getting "internal server errors" on Pupil Cloud all morning, and now I can't seem to download videos / time series data. The download will run for a random time (between 5 s and 5 min) and then fail. I have tried different computers, networks, and browsers, but the problem persists... Does anyone have any advice?
Hi @user-3c0775 ! Please see my message on the Neon channel; we will update you as soon as the problems are fixed. https://discord.com/channels/285728493612957698/1047111711230009405/1174014973446455366
Hello, I need help with an app. I have reconfigured my phone, and now the Invisible app is not working, nor is the scene camera.
Hey @user-918434 👋. Which model of Companion phone do you have, and which Android version is currently installed on it?
I did buy a new cable. Now I am trying to install system updates. Maybe that can fix my problem?
Thank you
Hello, I have had many problems recording gaze with Pupil Invisible for a couple of months. Some recordings cannot be stopped on the phone after completion, and then the video of the entire measurement is not available. It is due to either a loose connection on the cable or the phone. Is this a warranty case? What can I do? This ruins my measurements, as the subjects cannot repeat the road section, firstly for time reasons and secondly because they already know what is being measured (scenario). Please give me some solution. Thanks in advance. Kind regards.
Hey @user-3c26e4. That's definitely unexpected behavior, and could well be due to a faulty USB cable or connection. Please reach out to info@pupil-labs.com and someone will coordinate debugging!
Thank you for your fast response. If it is the cable, it is not a problem, but what can I do if it is the phone connection?
There are some debugging steps we can take to determine the root cause - I see we've received your email so will coordinate from there.
Hi @nmt. I sent you a description of the debugging process and asked you some questions in a second mail, but I forgot to ask whether it is a problem that there are 140 recordings on the phone. There is no lack of storage on the phone, though (43% free).
Hey hey,
I work with Invisible in an eye-tracking research centre. We did a whole study in which the subjects rode a tram simulator. We're supposed to see the differences between experts and novices; simple stuff.
Importantly, we used QR markers to mark the AOIs of interest (like the windscreen, or the right rear-view mirror).
We've got all the data nicely on Pupil Cloud, and now there's a problem, because the data we download in .csv format doesn't contain information about which AOI each fixation is in. Where can we get such data? It must be possible; otherwise, why would there be a function to define AOIs based on QR codes?
I am really asking for help, as we have been agonising over this for two weeks now.
Hi @user-06b8c5 👋. Did you run the Marker Mapper enrichment successfully before downloading the data?
Hey ho!
I'm experiencing some issues recording recently and I'm having trouble pinning the problem down.
Hey @user-1391e7 👋🏽 ! Sorry to hear you're having issues with your recordings. Please send an email to [email removed] referring to this conversation, and a member of the team will follow up with debugging steps to help you resolve this issue.
hm, it might be the usb cable
then again, maybe it isn't.. it's weird
when I reboot the phone and start the app again, I can't get it to disconnect even if I whip my head around wildly. If it really was an issue with the cable, I'd get that all the time, I think?
I've seen odd popups after a few minutes: a permission error with the sensor, and a while later I sometimes even get an error that the IMU stopped responding and that I have to stop the recording, reconnect the cable, etc.
I'm on the OnePlus 6, app version 1.4.30-prod
I'll upload the recordings to the cloud, see if I can view them there, see what happened
thank you, will do!
Hey, short question: receiving the single eye images from the Pupil Invisible is not possible via the new RTSP connections, only via the old/deprecated NDSI implementation, am I right?
Hi @user-fb5b59, that's correct! The eye cameras on Pupil Invisible are placed at the sides, and the view is quite angled. While these images are useful for our deep neural network to estimate gaze, they are not so useful for, e.g., pupillometry.
Thus, we did not deem it worthwhile to port the eye camera functionality to Pupil Invisible's newest real-time API. These eye cameras are better placed in Neon, which allows you to capture the pupil, and that's one of the reasons why eye videos are available to stream in real time with Neon.
That said, you can still access the eye videos in the raw data, or, if you need them in real time, use the old API to stream them. 🙂
I hate that my initial instinct was correct and it ended up being the USB cable. The time wasted! The pain of it! 😄
Hi there! I would like to export my eye-tracking data using PupilPlayer for iMotions. When I open the plugin manager and select iMotions exporter, then press 'e', I get the error: "The iMotions exporter does not yet support Pupil Invisible recordings!" In my research, I am not allowed to use PupilCloud, so I must export the files locally. Appreciate any guidance
@user-6e200f, it's my understanding Pupil Invisible recordings don't need to go through Player to be compatible with iMotions. That should be documented in the iMotions support centre, if memory serves.
Hi Neil, thanks for the response. My imports into iMotions are not working because I don't have the required files for the iMotions import based off an export via USB to the PC. My exported recordings do not have an info.json file, just an info.invisible.json and info.player.json file. Even after renaming them in various combinations (and deleting) for importing into iMotions, I still am unable to import. Is it possible my exports are messed up somehow? iMotions says it needs PI world v1 ps1.time and .mp4, but I don't have those files either, though I do have PI left v1_sae_log_1bin.bin, the right one, and world.time for example. To sum up, I don't have these three files in any of my exports, and info.json (at least at first) is the one preventing my importing into iMotions.
【Data loss issue with Pupil Invisible and Pupil Cloud】 Hello, I have been having trouble with the smartphone app, and I tried to restart it several times, but it did not open (in fact, it seemed to open and close repeatedly). So, I uninstalled the app and reinstalled it, but when I logged in with the same ID, all my data was gone. Moreover, all the data that I had previously uploaded to Pupil Cloud was also gone. I am very confused and frustrated because it was very important data. I would appreciate it if you could tell me if there is any way to recover the data.
Hi @user-11d0d6 ! When you say your data is gone, do you mean that you don't see the recordings in the "Recordings" view? If so, can you go to Settings > Data Usage > Import recordings, then click "Yes" and "Use this folder"? This should reimport the recordings from your previous installation.
We were able to recover our data! I am so relieved to have my precious data back before the analysis. I am filled with gratitude to you. Thank you so much.
Hello, I'm using the Pupil Invisible device to track gaze data for a research project in an unsupervised environment (a room) where people are working or moving around. I would like to find the AOIs and how long the subject is looking at them. I saw some examples on Alpha Lab, but all of them refer to a static environment. Do you think there's a way to do it in a more dynamic environment as well?
Hi @user-3e88a5 👋🏽 ! For the Reference Image Mapper, indeed, the scene needs to have relatively static features in the environment. If there is a lot of movement, or the objects change in appearance or shape, the mapping might not work. Could you elaborate on your specific setup? What would you like to define as an AOI in the room where you will be conducting your experiment?
I'm working with a surgeon performing arthroscopic surgery, and I would like to know whether he is looking at: the arthroscopic monitor (in a fixed position in front of him); the patient's knee being operated on (where the surgeon is using instruments); or other people assisting him during the work.
In that case, you might want to try out the following tools for your specific AOIs:
a) For the monitor placed in front of the surgeon, you can place AprilTags at the corners of the monitor and then use the Marker Mapper on Cloud. This would allow you to define the monitor as an AOI and get the normalized gaze positions on this surface. See also here: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/marker-mapper/#selecting-markers-in-the-cloud
b) For the patient's knee: we offer an Alpha Lab tutorial that allows you to map gaze onto body parts: https://docs.pupil-labs.com/alpha-lab/dense-pose/#map-gaze-onto-body-parts-using-densepose. This tool can work even in a dynamic environment. However, please note that we do not offer a turnkey solution for tracking moving objects (e.g., the instruments that the surgeon will be using). A workaround would be to mark the relevant points of the recording using Events (e.g., start_using_instrument_A and stop_using_instrument_A) and then analyze the periods where the surgeon is using each instrument: https://docs.pupil-labs.com/invisible/data-collection/events/.
c) To map gaze onto other people, you can use the tutorial mentioned in my previous point (mapping gaze onto body parts) and/or the Face Mapper enrichment, which automatically detects gaze and fixations on faces appearing in the scene camera: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/face-mapper/
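The event-based workaround from point (b) can be sketched against the Cloud CSV exports. A minimal pandas example, using the hypothetical start_using_instrument_A / stop_using_instrument_A event names from above; the column names mirror the Cloud exports but may differ between versions, so check your actual files:

```python
import pandas as pd

# Stand-ins for events.csv and fixations.csv as exported from Pupil Cloud.
events = pd.DataFrame({
    "name": ["start_using_instrument_A", "stop_using_instrument_A"],
    "timestamp [ns]": [1_000, 5_000],
})
fixations = pd.DataFrame({
    "fixation id": [1, 2, 3],
    "start timestamp [ns]": [500, 2_000, 6_000],
    "duration [ms]": [300, 450, 200],
})

# Look up the period boundaries from the annotated events.
start = events.loc[events["name"] == "start_using_instrument_A",
                   "timestamp [ns]"].iloc[0]
stop = events.loc[events["name"] == "stop_using_instrument_A",
                  "timestamp [ns]"].iloc[0]

# Fixations that began while the instrument was in use.
in_period = fixations[fixations["start timestamp [ns]"].between(start, stop)]
print(in_period["duration [ms]"].sum())  # total fixation time in the period
```

The same filter can be applied per instrument and per AOI export (e.g., Face Mapper or Marker Mapper fixations) to get dwell time per AOI within each usage period.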
I hope this helps!
I'm trying other imaging techniques, such as segmentation with a neural network, or the new AI models; I asked just to understand whether a faster way was available or whether the examples provided on Alpha Lab could be suitable
Thank you a lot for the suggestions
They'll be useful for sure!!!
Hi, I keep getting 500 errors returned when trying to download heatmaps on Pupil Cloud. Can anybody help with this?
Here is the request url: https://api.cloud.pupil-labs.com/v2/workspaces/1a9b70fd-eacf-4b08-a6e6-2755c82d6074/heatmap/
And the Response:
{
"code": 500,
"message": "internal server error",
"request_id": "b3612a5d6886b6e59e4b4f3288b87a9b",
"status": "error"
}
Also the sentry Id for your Dev team: {"id":"d9f246710b7d47e5b982c1ed0ba53a3f"}
Hi @user-795302 ! Thanks for the information, I have passed it to the Cloud team, so they can further investigate it. I will keep you posted
Hi @user-795302 ! It should be fixed now, apologies for the inconvenience
Thank you! It's much appreciated!
One more thing: while debugging, the Cloud team found that duplicate markers were detected. Could it be that you reused the same marker, or that a mirror reflected them? Note that this may affect your data quality.
Hi, I have an upload error for a 5-min-long recording. It replays perfectly fine on the Companion device, and the recording's status shows as uploaded. On Pupil Cloud, however, it has shown as "uploading 100.0%" for the last six hours, and I cannot replay the recording or see the thumbnail image. Is there a way to fix the issue? Thanks!
Hey @user-ab605c, can you try refreshing your browser window to rule out any weird caching behaviour? Does that solve it?
iMotions compatible recording
Hello, is there any tutorial video for the Invisible mobile app?
Hi @user-228c95 👋🏽 ! You can find our documentation for Pupil Invisible here: https://docs.pupil-labs.com/invisible/ - Are you looking for a tutorial on something specific?
Thank you, but I've seen this documentation and read its content. If there is a video related to the application, it would be very helpful for me. There is no specific topic; a video on the initial setup of the device and the mobile app in general would be very useful.