Hello! Who can I contact with a question about the technical side of the Pupil Invisible eye tracker glasses?
Hi @user-680341, this is Wee from the Product Specialist team. I've moved your message to this channel for all things related to Invisible. Feel free to ask your questions here.
Hi, please tell me ASAP why collaborators can't log in to a workspace when I invite them. It says "Access is denied" all the time. Do they have to register with the email address? In that case they cannot access the workspace I share either.
Hi @user-3c26e4! Yes, they need to have a registered Pupil Cloud account first, with the same email address that was used to invite them. Then, they need to go back and click the link in the invite email again. Depending on how long it has been, they might need a new invite. Has this process already been tried?
Hi @user-f43a29, yes, we still have problems with a few workspaces, but it worked with some of them. I have another question. Could you please advise how the QR codes can be recognised when they are lit up? I was hoping to solve the problem in a dark environment, but they are not recognised by the system. Can I try darkening the video, and how? Is there a solution? As you can see, the QR codes that are not lit up are found sometimes (green in the upper left corner), but the lit-up ones are not.
Marker Mapper will have trouble when the AprilTag markers are illuminated in this way. It works best when the markers are uniformly illuminated, such that the white borders can be clearly seen and there is good contrast with the background. Post-processing the video would not be recommended in this case, as re-uploading the videos to Pupil Cloud is not a supported workflow. Is it possible to change the code of your driving simulator to display the AprilTags on the monitor display? If not, you could give the Reference Image Mapper a try with a test recording, but the scene might need a few more static/stable features.
Thank you @user-f43a29 ! What if the lamps are positioned higher than that and at a bigger angle, so that the QR codes are fully lit up from their front? Do you think that might work? Otherwise I have to ensure that it is bright enough in the room, so that the normal QR codes can be clearly seen. I don't think that it will be that easy to display the AprilTags on the monitor display.
Yes, this is worth a try with a test recording. When the white borders are clearly visible in the scene camera video and are uniformly illuminated (i.e., have roughly the same brightness around the whole border), then you should get better results.
Great, I will try it and will let you know. Thanks a lot.
Hello! I wanted to ask in which format the eye video is recorded and then saved on the Cloud, since I could only find information regarding the scene video, for which your description states mp4. Thank you very much!
Hi @user-61e6c1, can I clarify with you if you want access to the eye video data?
Invisible's eye videos are only available offline. You'll need to access them using Pupil Player and enable the eye video exporter plugin. Exported eye videos are in .mp4 format.
Hi @user-07e923, thank you for your message! My question was regarding a data management plan I'm currently preparing for our project. I have to declare where and how our data will be stored. For the scene video I think I'm covered, but I didn't really understand how it works for the eye video. So my question is: in which format does the eye video get stored on the Cloud?
Ah I see. The eye videos are stored as .mp4 format on the Cloud. This is the same raw data file that you get after recording on the Invisible Companion app.
@user-61e6c1 minor correction: the eye videos are stored as .mp4, not .raw on Pupil Cloud.
Thank you so much @user-07e923! But the scene video is stored as mp4, right? And I've got another question regarding the Companion app: as I understood it, the data gets transferred to the Cloud simultaneously as it is collected, right? So am I right in assuming that on the Companion the data is just stored temporarily, or will it remain there too?
Yes to mp4. And yes, the data is also stored locally on the Companion device until you delete it locally. On Cloud, files that are moved to trash are deleted automatically 30 days later, but you can also delete them immediately from the trash. We won't be able to recover deleted files from Cloud once they're gone.
Ok, now I got it! Thank you very much for your quick help @user-07e923
@user-61e6c1 Just to add on top of my colleague's response: data is not transferred simultaneously to Cloud but rather uploaded after the fact (assuming an internet connection is available).
Hi there! I am having some problems with the upload to Pupil Cloud. I am using the Pupil Invisible. The phone attached to it states that the upload has been completed; however, in the Cloud it states that the upload has failed. Now, I am able to export the recording from the phone through a USB cable. However, it is not in the Cloud, and when I load it into Pupil Player it doesn't show the fixation point. What could be the cause of the upload error, and if it occurs, how can I make sure I still know where the person wearing the glasses is focusing?
Hi @user-df855f, could you open a ticket in the troubleshooting channel? We can proceed from there.
Thank you Miguel! But the upload is done nevertheless automatically if connected to the internet?
Yes, as long as Cloud uploads are enabled in the settings of the app (on by default).
Hi, what is the optimal size of the AprilTags for monitors that are approx. 1.30 m from the eyes? 12 cm x 12 cm, or more?
Hi @user-3c26e4, there's no single recommended size for AprilTags. This requires a bit of trial and error, as many factors can affect the visibility of the tags, e.g., viewing distance, ambient lighting, and the medium (digital/printed) in which the tags are displayed.
Ideally, you'd want them to be large enough to be visible and have good contrast. What's also important is that the white border around the tags must be visible for the tags to be reliably detected. See source.
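As a rough aid for sizing decisions like the 12 cm question above, you can compute the visual angle a tag subtends at a given viewing distance. This is a generic geometry sketch, not an official Pupil Labs recommendation; the function name is made up:

```python
import math

def angular_size_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of size_m at distance_m."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 12 cm tag viewed from 1.3 m subtends roughly 5.3 degrees.
print(round(angular_size_deg(0.12, 1.3), 1))  # prints 5.3
```

A larger visual angle generally makes detection easier, but as noted above, contrast and the visibility of the white border matter just as much.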
Hi, I'm using Pupil Invisible and have a question about Pupil Cloud. I have been invited several times to different workspaces so that I can edit recordings. I have accepted the invitations, but only one workspace is displayed. What could be the problem? Is it only possible to accept one invitation? Whenever I accept an invitation, I get the following message: "Unable to accept invitation. This workspace invitation has already been accepted or has expired." I have to join the other workspaces to work on the recordings.
Hi @user-75e0ea, invitations to multiple workspaces are allowed. Just to be sure: are the email address that was invited and the email address that you use to log into Pupil Cloud the same?
yes the email is the same
And when you click on the arrow next to the current workspace name, you only see your personal workspace and the demo workspace in the list?
I can see my personal one, the demo, and one more, but there should be more than three.
Ok, how old are the invitations?
The last invitation is from Tuesday, and I also accepted it right away on Tuesday.
Ok, are you comfortable with sending an email to info@pupil-labs.com from the same email address that you use to access Pupil Cloud? I would then follow up with an invitation to a test workspace.
ok
Hi, I have a major problem: some of the students cannot see the workspace, though they have already registered and verified their email. Please tell me ASAP what they have to do, because it has been more than 2 hours now that they haven't been able to work.
Hi @user-7c714e, can you please open a ticket in the troubleshooting channel so that we can assist you there in a private chat?
how to trim the video on pupil cloud
Hi @user-870276! Do you mean the visualisation? You can create a Video Renderer, then go to the advanced settings > temporal selection and use events.
?
can we do Reference image mapper with Invisible?
Hi @user-2b5543. Yes, you can. Check out this section of the docs
Hey there! I tried that: I marked the beginning and end of the video and selected them in the temporal selection, but it's not changing anything.
Well, based on that screenshot, you are using recording.begin and recording.end as the events for start and end, which will essentially take the whole recording. What happens if you select indor.begin as the start? Regarding your other recording.end event, try not to use the same name for two events.
I want to know if we can do the same as in Pupil Core's Player: trimming the video with the slider bar, exporting, and getting the updated CSV files.
Unfortunately, that's not possible, but feel free to request it in the features-requests channel.
This looks like a bug. Could you create a ticket in the troubleshooting channel so we can follow up and tackle it? When creating it, please add the enrichment ID (which you can obtain by clicking on the three dots above the Settings label on the right side).
In the meantime, I would suggest creating a fresh Video Renderer if you haven't tried that already.
I tried renaming, but it's not reflected in the temporal selection to trim the video.
So you mean we cannot manually trim the video at the go signal, export it, and get fresh exports?
Yes, in Cloud you cannot crop the CSV exports. That said, the events.csv contains timestamps which you can use to crop the other files. You can see an example of how to do so using Python here.
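The idea in the linked example can be sketched with plain stdlib Python: look up the start/end event timestamps in events.csv, then keep only the rows of another export that fall inside that window. The inline file contents and the go.begin/go.end event names below are made up for illustration; the column names follow the Pupil Cloud raw export format:

```python
import csv
import io

# Toy stand-ins for events.csv and gaze.csv (normally you'd open the real files).
events_csv = """name,timestamp [ns]
go.begin,1000
go.end,3000
"""
gaze_csv = """timestamp [ns],gaze x [px],gaze y [px]
500,10,10
1500,20,20
2500,30,30
3500,40,40
"""

# Look up the start/end timestamps from events.csv ...
events = {row["name"]: int(row["timestamp [ns]"])
          for row in csv.DictReader(io.StringIO(events_csv))}
start, end = events["go.begin"], events["go.end"]

# ... and keep only gaze rows that fall inside that window.
cropped = [row for row in csv.DictReader(io.StringIO(gaze_csv))
           if start <= int(row["timestamp [ns]"]) <= end]
print(len(cropped))  # prints 2 (the rows at 1500 and 2500 ns)
```

The same filtering works for fixations.csv or any other export that carries a nanosecond timestamp column.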
Hi there, I am having an issue: when I attempt to download my recordings in Pupil Player format from Pupil Cloud, only the mp4 file downloads, and I am unable to open it in Pupil Player. I am able to view gaze positions in the browser, but they do not download with the data. Does anyone know what the problem might be?
Hi @user-ed2612! Did you select the Pupil Player format download option? This is the folder you need to get from Cloud to be able to load the recording in Pupil Player. Once downloaded, it should be a zip folder. Could you clarify which files you see when unzipping the folder?
Hi @user-480f4c, yes, I clicked the download Pupil Player format option. When the file downloads, it just contains a folder with the world.mp4 in it.
Which browser and OS are you using? For unzipping, we recommend 7-zip. Which software are you using for unzipping?
My apologies for the delay in my response. I was having trouble accessing our account on another browser. My initial errors were using safari. Switching to chrome solved the problem. Thanks for your help!
Hello! Has anyone conducted research with videos or films using Pupil Invisible? How, in this case, is the enrichment loaded, and how can these videos be analysed further? In the enrichment -> Reference Image Mapper it is only possible to load a reference image; there is no option to load a reference video...
Hi @user-a3c8db , while it is not possible to upload reference videos to the Reference Image Mapper, you can still use it to conduct research with videos and films. Check out our Alpha Labs article for mapping gaze onto dynamic screen content, which shows how to use the output from the Reference Image Mapper to map gaze onto videos/films. Please also let me know if I misunderstood your question.
Hi @user-66797d , following up on your question (https://discord.com/channels/285728493612957698/285728493612957698/1243253128862634074), maybe give a different browser a try. It also sounds like your network connection could be unstable, so trying a different network connection is also worth a test. You can visit our speedtest to see if you have a poor connection with our servers.
Good evening! I'm sorry to bother you again, but I was trying to follow the instructions you sent me and I got this error: "self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: no display name and no $DISPLAY environment variable" while downloading pl-dynamic-rim. Have you ever encountered the same error? If you can help me, I would be really grateful. Thanks in advance!
Hi @user-a3c8db , how have you installed Python? What Operating System are you using?
I have installed Python from the official site and I'm using Linux (Ubuntu).
Ok. That is a Linux error that typically indicates that the monitor (or an underlying GUI system) cannot be found, but it can also be due to specific configuration choices. Just to confirm, this is not Ubuntu running within Windows Subsystem for Linux? If you are running a dedicated Ubuntu installation, are you running it on a laptop with an Nvidia GPU?
Hi guys, lately I have this problem: while Invisible Companion shows that I'm recording the scene video and capturing eye movements, after ending the recording via my automated Python script, it does not contain any information. The recording seems to be connected to my script (kind of, as it stops after the expected time) but contains no scene video, no event data, no data whatsoever. However, I don't get an error message or anything, neither from the software (script) nor the hardware. The only thing I did encounter was that the mobile and glasses get incredibly hot at times. The mobile even shows a warning that it is overheating. However, that doesn't happen all the time (I think; I can't check the recording and app during the experiment I use the glasses for). I am not sure if it's connected at all. Is it a hardware issue?
Hi @user-5ab4f5, could you open a ticket in the troubleshooting channel so that we can help you with debugging?
Also, it does not happen all the time with the empty recording, just occasionally.
Yes, I'm using Ubuntu 20.04 running within Windows Subsystem for Linux.
Hi @user-a3c8db , while WSL does have GUI support and GPU "passthrough" and ideally it should support this approach, it is also likely to be causing your issue here.
Can you try running pl-dynamic-rim in a Windows terminal using a Windows copy of Python?
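For reference, the Linux-side error above can be anticipated with a quick environment check before launching any tkinter-based tool. The helper below is just an illustrative sketch (on WSL, WSLg normally sets DISPLAY or WAYLAND_DISPLAY automatically; the function name is ours):

```python
import os

def display_available(env=None):
    """Return True if a display server appears to be configured (Linux/WSL)."""
    if env is None:
        env = os.environ
    return bool(env.get("DISPLAY") or env.get("WAYLAND_DISPLAY"))

if not display_available():
    print("No $DISPLAY set - tkinter-based tools like pl-dynamic-rim will fail here.")
```

On native Windows this check is irrelevant, since tkinter talks to the OS GUI directly rather than to an X/Wayland server.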
I use the 'Reference Image Mapper' to map gaze from the video onto one image, but after that the timestamps in gaze.csv have changed. Could you tell me why? In theory, they should be the same, right?
Hi @user-cc9ad6! Would you mind clarifying which gaze.csv file you are referring to? Is it the one from the enrichment? If so, note that the Reference Image Mapper needs to localise the image using the scene camera, which has a 30 fps sampling frequency.
Yes, it is from the enrichment. For example, the original gaze.csv (without the mapper, from the video, not from the enrichment) has 152356 rows; after the mapper, the gaze.csv from the enrichment only includes 147964 rows, so 4392 rows are lost. Could you tell me how the 30 fps sampling causes the loss of those rows?
Hi @user-cc9ad6! That should not be the case. Could you create a troubleshooting ticket and privately share the gaze.csv files, or send them by email to data@pupil-labs.com, so that we can investigate further?
Thanks for your reply. As the file is too big, I can't upload it to the private channel. Instead, I have sent it by email. Looking forward to your feedback :)
Hi @user-cc9ad6! I have just looked at your files; the only missing timestamps I could find are those from when the reference image was not localised, which is expected. Can you confirm that this is the case?
So in a nutshell, the gaze.csv from the Reference Image Mapper only contains gaze points for which the reference image was localised, plus a boolean value indicating whether you gazed upon it or not.
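Working with that boolean column needs only stdlib Python. The inline file content below is a toy stand-in, and the column names are assumptions based on the Reference Image Mapper export format:

```python
import csv
import io

# Toy stand-in for the enrichment's gaze.csv (normally you'd open the real file).
gaze_csv = """timestamp [ns],gaze detected in reference image,gaze position in reference image x [px]
100,true,512
200,false,
300,true,498
"""

rows = list(csv.DictReader(io.StringIO(gaze_csv)))
# Keep only the samples where gaze actually landed on the reference image.
on_image = [r for r in rows if r["gaze detected in reference image"] == "true"]
print(len(on_image))  # prints 2
```

Rows where the image was detected but gaze fell outside it are flagged false and have empty position fields, so filtering like this is usually the first step before any surface-based analysis.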
Hi all! I love the interface of Pupil Cloud; the quality of the videos is great, and the fixation patterns (the small blue circles) are very interesting and insightful. I was wondering if it is possible to download the videos as such? As I don't have Pupil Cloud access myself, I have to delete my recordings afterwards, but I feel it would be a waste if I can't access these great videos anymore. The quality in Pupil Player, for instance, is much lower.
Hi @user-df855f! You can download them using the Video Renderer, which allows you to trim them, plot fixations, and tune the gaze circle colour and size.