Okay, I was assuming it to be this way. Thanks a lot, I cannot stress enough how helpful you guys are!
Hello developers, I had some issues while using the Invisible glasses.
I observed some problematic behavior with the connection between the eyeglasses and the companion device. After connecting the glasses and phone, when I try to wear them, the "World/Scene Camera" shuts down/disconnects. Even with the slightest flex in the glasses, this keeps happening. I believe the companion phone is around 30-40 percent charged when this happens.
The gaze offset adjustment: I followed the reference message: https://discord.com/channels/285728493612957698/633564003846717444/1085099673125138492 and did what is intended as shown in the app, but the gaze markers are still way off and don't seem to be improving no matter how many times I repeat the procedure. Is there something I'm missing?
I really appreciate any help. Thanks.
Hi @user-e2db0a!
1) This is not expected and sounds like there might be a hardware issue with your device. Please reach out to [email removed] about it to facilitate a repair.
2) The offset correction is limited in what it can compensate for. It is literally correcting for an offset in the predictions. This is helpful when dealing with parallax error, which occurs at close distances (< 1 m), but it does not help compensate for other, more complex errors. Repeating the correction procedure therefore also does not improve the accuracy further. Generally, accuracy can vary a bit between subjects. Have you tried it with multiple people? Without seeing example data it is a bit hard to say whether the accuracy you see is in the expected range. If you are able to share something, I'd be happy to take a look!
Has there been a fix for the permissions bug in android?
We've also lost some data because an RA accidentally clicked out of the permissions request box and didn't get the prompt again, despite restarting the device, resetting the permissions, and unplugging the USB-C cable. They confirmed it was recording data via the display circles and preview mode, but 5 minutes after starting data collection, the permission box popped up and we lost the video data.
Hi @user-2ecd13! If you click the checkbox of the dialog before accepting, the dialog should in theory not pop up again. If you do not check the box, the permissions are given only temporarily and it can happen that they get revoked during a recording.
To force all dialogs to reappear you need to clear the data of the app. To do this tap and hold the icon -> click app info -> click storage usage -> clear data.
Note that after this you will have to log in to the app again. None of the existing recordings on the phone will be deleted by this action, but you will have to import them from the filesystem again via the settings view.
Let me know if you still have questions!
Hi Pupil Labs, I would like to buy a new cable for the eye tracker-smartphone connection. Is there anything I have to pay attention to, or do you even have a suggested cable?
Hi @user-4771db! In principle the cable we use is a regular USB-C cable. Note however that the data throughput via the cable is high and that lower quality cables will often suffer from intermittent disconnects.
So one option is to purchase a high-quality cable from a 3rd party. In theory that should work fine, but I'd recommend testing the stability before relying on it. The second option is to purchase it from us. To get a quote please reach out to info@pupil-labs.com
Let me know if you have further questions!
Hi @marc (Pupil Labs), I have the following questions about Pupil Labs Invisible:
1. In the results I get for every AOI while cycling (fixations.csv), I have fixation x [normalized] and fixation y [normalized] with the values you see as an example, which should be in [px]. How should I understand the values? Are fixations in EVERY AOI defined through normalized coordinates between (0;0) in the upper left corner and (1;1) in the lower right corner?
2. Same question as 1. for the gaze.csv, but there are NEGATIVE values for gaze position on surface x [normalized]. How can I interpret the gaze area?
3. The heatmaps I get from Pupil Cloud are shown on a random screenshot, automatically chosen by the program, which I don't like. Isn't it possible to choose a screenshot?
4. It seems it is not possible to calculate the fixations in Pupil Player, and I thought you wrote that this has already been implemented. The fixations_on_surface_xxxx.csv files are all empty! How can I get information about the fixations? I need more AOIs (which is easier in Pupil Player), and in Pupil Cloud I cannot connect the AOIs right.
5. Is there a way to define an AOI on an object like a traffic light when a vehicle is approaching, or a moving pedestrian? As it is a dynamic environment, I can't use the Reference Image Mapper like you show in your examples with the pictures.
Thank you for your help.
Hi @user-7c714e!
1) + 2) When using the Marker Mapper (or the Surface Tracker in Pupil Player), the mapped gaze and fixation data is always given in normalized coordinates. Your understanding ((0;0) upper left corner and (1;1) lower right corner) is correct. To convert that to pixel coordinates of a screen (I am assuming your target is a screen?), you'd simply multiply the values by the screen resolution. This assumes you have defined the corners of the surface to be congruent with the corners of the screen.
It is possible to get negative values. This happens when the surface was detected successfully, so relating it to the gaze/fixation information was possible, but the gaze/fixation point was simply outside of the surface. You may want to filter such values out using the gaze detected on surface column if you are only interested in samples on the surface, e.g. as in the sketch below.
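For illustration, a rough sketch of the conversion and filtering (not official code; please double-check the column names against your own CSV header, and swap in your actual screen resolution):
```python
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # example resolution; use your own screen's values

gaze = pd.read_csv("gaze.csv")  # Marker Mapper / surface gaze export

# keep only samples where the gaze point actually lies on the surface
gaze = gaze[gaze["gaze detected on surface"] == True]

# (0;0) = upper left corner, (1;1) = lower right corner, so scaling by the
# screen resolution yields pixel coordinates in the screen's image convention
gaze["x [px]"] = gaze["gaze position on surface x [normalized]"] * SCREEN_W
gaze["y [px]"] = gaze["gaze position on surface y [normalized]"] * SCREEN_H
```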
3) Currently you unfortunately cannot really influence which frame is used for the crop underneath the heatmap overlay. However, the heatmap download contains a semi-transparent image of only the "heat" and no underlying image. You could overlay this yourself onto any screenshot of your choosing.
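For example, a minimal sketch using Pillow (the file names are placeholders; this assumes the downloaded heatmap is a transparent PNG with the same aspect ratio as your screenshot):
```python
from PIL import Image

screenshot = Image.open("my_screenshot.png").convert("RGBA")
heat = Image.open("heatmap.png").convert("RGBA")

# resize the heat layer to match the screenshot, then composite using its alpha channel
heat = heat.resize(screenshot.size)
Image.alpha_composite(screenshot, heat).save("heatmap_on_screenshot.png")
```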
4) Pupil Player is only capable of calculating fixations for Pupil Core. The algorithm implemented there is not compatible with Pupil Invisible. Could you elaborate a bit on what makes you prefer Pupil Player for the AOI work? I'd love to understand how Pupil Cloud is lacking there!
You could in theory calculate fixations in Pupil Cloud and then manually map them onto the surface detections found by Pupil Player. There is however no easy/automated way to do this.
5) That is a tough one! You could of course use markers again, but that is probably not feasible in practice. We are currently implementing a tool for manually mapping gaze data onto a reference. This would allow you to solve the problem, albeit with some manual labor required. The ETA for this is around May.
Tracking arbitrary objects in a video is not a solved problem in computer vision. However, for traffic situations specifically there are some AI tools that are decent at segmenting images. You could use those to detect and segment traffic lights and then map gaze onto the results. This would of course require some implementation effort and potentially some manual corrections in the end because the tools are not perfect. For example Detectron comes to mind: https://github.com/facebookresearch/Detectron
Hi Pupil Labs, on my device the videos in the Invisible Companion app won't upload to Pupil Cloud. I have turned the Wi-Fi on and off many times and restarted the devices many times, but it didn't help. Any suggestions for solving this issue?
Hey @user-6e2996! Please try logging out and back into the app. This usually triggers the upload.
Hi @marc, thank you so much for your kind answers. Do I understand correctly that in 1) the (0;0) upper left corner and (1;1) lower right corner applies to EACH AOI while driving? One more question: I would like to define an AOI covering the whole field of view in Pupil Cloud and Pupil Player. How could I do this, given that in a dynamic environment while driving, subjects rotate their heads a lot?
Concerning your question about 4): What Pupil Cloud is lacking IMO is the definition of multiple AOIs in the video, as I do it in Pupil Player. When you define one AOI at a time, you can't really correlate it with the next few AOIs.
You're very welcome! Yes, every surface/AOI has its own normalized coordinate system.
Not sure I understand what you mean by the whole field of view. Do you mean you have a subject sitting in the car and you would like an AOI that covers 360 degrees all around them (or maybe the more relevant frontal ~180 degrees)?
That's exactly what I am doing with Pupil Player, and therefore I think it would be great if fixations could be calculated there. Creating multiple surfaces that collectively span the entire scene in Pupil Cloud is difficult and not precise enough, because one doesn't have a reference to all the other AOIs, having to define them one by one without seeing the others. But it's a great idea to create a 3D model! How can I do it? Just record a video once with the Invisible? Please advise.
You can find documentation on the head-pose tracker here: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Note that the head-pose tracker plugin itself will not generate a high-fidelity model of the car. It will only localize the markers in 3D. You'd have to use 3rd party software to create a 3D model to align this with. I don't really have a concrete recommendation I am afraid. There are probably smartphone apps that make this pretty easy, but I do not have first-hand experience.
I understand the issue with surfaces in Pupil Cloud. If you can only ever see one surface at a time, it's hard to place them correctly. I think we will be able to improve that aspect a lot with our upcoming UI updates!
Hi, I was wondering if there was a way to calculate real-time 3D gaze direction (using Pupil Invisible) across a recording with respect to, say, the first frame of the video, to then eventually calculate gaze velocity using the tutorial. My current strategy was to detect markers using OpenCV (at least two markers per frame), calculate the camera movement with respect to the first frame, and convert the 2D gaze point to 3D using the camera intrinsics and the estimated pose from the above step with respect to the first frame (F_pos^-1 * F_intrinsics^-1 * gaze_pos). Is that the appropriate way to obtain a 3D gaze direction? Or, another way could be to compare each frame of the video to a reference image containing all of the markers and then use a homography to transform gaze points onto the reference image. But then the velocity would be in pixels/s. I was wondering what the recommended way would be.
Hi @user-8664dc! Let me see if I understand this correctly: You want to calculate 3D gaze directions in absolute coordinates.
- "Unprojecting" the 2D gaze point using the intrinsics provides 3D gaze directions in relative coordinates.
- Using markers you want to track the scene camera pose/extrinsics.
- You want to transform the relative gaze directions into absolute directions defined by the coordinate system of the first scene video frame.
Is that correct? I do not yet understand why the transformation into an absolute coordinate system would be necessary to calculate gaze velocity. Often, gaze velocity essentially describes how fast the eyeball rotates and is measured in degrees/second. You could already calculate this in the relative system though. Is your goal to calculate a more complicated version of gaze velocity than this?
The approach you describe should in principle work. Using methods like cv2.solvePnP you could measure changes in pose between the first frame and any other frame.
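A rough, hedged sketch of that pipeline (marker positions, intrinsics, and the gaze pixel are placeholders; the real intrinsics should come from your recording's camera calibration):
```python
import cv2
import numpy as np

# placeholder intrinsics; use the actual values from your recording
camera_matrix = np.array([[766.0,   0.0, 544.0],
                          [  0.0, 766.0, 540.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # placeholder; the real scene camera has distortion

def gaze_dir_in_first_frame(gaze_px, obj_pts, img_pts_first, img_pts_current):
    """gaze_px: (x, y) gaze in scene pixels of the current frame.
    obj_pts: Nx3 marker corner positions in an arbitrary world frame (N >= 4).
    img_pts_first / img_pts_current: Nx2 detected corners in each frame."""
    # camera pose relative to the markers in the first and the current frame
    _, rvec0, _ = cv2.solvePnP(obj_pts, img_pts_first, camera_matrix, dist_coeffs)
    _, rvec1, _ = cv2.solvePnP(obj_pts, img_pts_current, camera_matrix, dist_coeffs)
    R0, _ = cv2.Rodrigues(rvec0)
    R1, _ = cv2.Rodrigues(rvec1)

    # unproject the gaze pixel to a unit ray in current-camera coordinates
    xy = cv2.undistortPoints(np.array([[gaze_px]], dtype=np.float64),
                             camera_matrix, dist_coeffs).reshape(2)
    ray_cam = np.array([xy[0], xy[1], 1.0])
    ray_cam /= np.linalg.norm(ray_cam)

    # rotate into the marker/world frame, then into the first frame's camera frame
    return R0 @ (R1.T @ ray_cam)
```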
What might help speed things up for you is the head-pose tracker available in Pupil Player. It uses markers placed in the environment to define a 3D coordinate system around an "origin marker" and then provides you with the pose of the camera within this coordinate system. Conceptually it's similar to the solvePnP approach, but it does a larger global bundle adjustment to fit the 3D model and locate the camera poses.
You can learn more here:
https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Hi Marc, I have a question regarding the Pupil Invisible Monitor. It seems like everything works fine, but it's not possible to start the recording, because the template needs input. We do indeed use a template with an input field, but is there a way to fill out the template from the Monitor itself? Or to at least start and end the recording and fill out the template just before saving on the smartphone?
Hi @user-4771db! Filling out the template via the Monitor app (or the real-time API) is currently not possible. What you could do though is use a template that has no fields marked as "required". When using such a template the Monitor app should have no issue starting and stopping a recording and you will be able to fill it out on the phone (before stopping the recording). When doing that the app will of course no longer enforce that this field is filled out before stopping a recording.
Hi Pupil Labs-Team! We have recently conducted an archaeological psychological study in Pompeii (Italy) on the archaeological excavation to investigate the perception of ancient houses. Among other things, we would like to use the Reference Image Mapper Enrichment to visualise the results. The automatically generated heatmaps already look very good, but as psychologists we are very interested in the exact data used by the Enrichment to gain a deeper understanding of the data. This way we can better understand the algorithm used or create a customised heatmap from the data using our own resources. Is there a way to manually access this pre-sampled data that the Enrichment uses? We thought of a 2D image on which the fixation densities of our subjects are displayed as individual points or point clouds (without the abstraction provided by the heat maps). The alternative would be that we manually note down each fixation by hand, which would be a considerable extra effort. Thanks in advance for your answer and greetings from Germany! Jonas
Hi @user-df468f! That sounds like a fun data collection!
If you download the enrichment data in CSV format from the enrichment list, the download will include gaze.csv and fixations.csv files, which contain the individual gaze samples and fixations in reference image coordinates. Using this data, you could generate any custom visualization or aggregation.
More info is available in the docs here: https://docs.pupil-labs.com/invisible/reference/export-formats/#gaze-csv
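For example, here is a minimal sketch of a custom point-based visualization (the file and column names follow the export format docs, but please verify them against your own download; the reference image file name may differ):
```python
import pandas as pd
import matplotlib.pyplot as plt

ref_img = plt.imread("reference_image.jpeg")  # reference image from the download
fix = pd.read_csv("fixations.csv")

# keep only fixations that were successfully mapped onto the reference image
fix = fix[fix["fixation detected in reference image"] == True]

plt.imshow(ref_img)
plt.scatter(fix["fixation x [px]"], fix["fixation y [px]"],
            s=fix["duration [ms]"] / 10,  # point size scaled by fixation duration
            c="red", alpha=0.4)
plt.axis("off")
plt.show()
```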
Is this what you were looking for?
url = f"https://api.cloud.pupil-labs.com/v2/workspaces/{self.workspace_id}/recordings:raw-data-export?ids={recording_id}" res = requests.get(url, headers={"api-token": self.api_key}) print(res.content)
Output:
'{"code":401,"message":"The server could not verify that you are authorized to access the URL requested. You either supplied the wrong credentials (e.g. a bad password), or your browser doesn\'t understand how to supply the credentials required...}
Apologies @user-13f46a, I made a small mistake in the code! While it's called API token within the UI, the header needs to be called api-key. I have edited the example above!
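For reference, a corrected version of the snippet could look like this (the IDs and token are placeholders, and I'm assuming the export is returned as a zip archive):
```python
import requests

workspace_id = "<your-workspace-id>"
recording_id = "<your-recording-id>"
api_key = "<your-API-token-from-the-Cloud-UI>"

url = (f"https://api.cloud.pupil-labs.com/v2/workspaces/{workspace_id}"
       f"/recordings:raw-data-export?ids={recording_id}")
res = requests.get(url, headers={"api-key": api_key})  # header is "api-key", not "api-token"
res.raise_for_status()  # raises if authentication still fails (e.g. 401)

with open("raw-data-export.zip", "wb") as f:
    f.write(res.content)
```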
It is accessible via the browser (pasting the URL into Chrome and hitting enter), but the Python request doesn't work.
I suppose there's an issue with the authentication phase in Python
Hello, I'm new to Discord so I'm a little lost. I just have a question about the resolution of the Pupil Labs Invisible video recording. This doesn't seem like the right place to ask the question, but when I clicked "Chat" it brought me here. Using the Pupil Labs Invisible, would you be able to see writing on a whiteboard from the back of the class in the video recording?
Hi @user-e34e91! This is exactly the right place for this question! On paper, the resolution of Pupil Invisible's scene video is 1088 x 1080 px and the field of view is 82°x82°. Understanding what this means in practice is of course not straightforward. Whether you'd be able to make out writing on a whiteboard from the back of a classroom really depends a lot on the font size and the size of the classroom. I have attached an example screenshot from a demo dataset available in Pupil Cloud so you can get a rough idea. If you create a Pupil Cloud account (which is completely free), you can access the entire demo dataset and get a better impression!
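If it helps, a back-of-envelope estimate (not an official spec, just simple geometry using the numbers above) of how many scene-camera pixels a letter of a given height covers at a given distance:
```python
import math

def letter_height_px(letter_height_m, distance_m, px=1088, fov_deg=82.0):
    """Approximate pixel height of a letter, assuming ~1088 px across ~82 degrees."""
    letter_deg = math.degrees(2 * math.atan(letter_height_m / (2 * distance_m)))
    return letter_deg * px / fov_deg

print(round(letter_height_px(0.05, 8.0), 1))  # 5 cm letters seen from 8 m -> roughly 5 px
```
Text that spans only a handful of pixels will typically be hard to read back out of the video, so small handwriting from the back of a large room is likely at the limit.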
Hi! I have some questions on using the real-time API. How is it possible to connect a OnePlus device via code? I tried connecting to the same Wi-Fi network and using Bluetooth, but it does not work. This is the code I used:
```python
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
```
and
```python
from pupil_labs.realtime_api.simple import Device

device = Device("pi.local", 8080)
```
Thank You!
Hi @user-5d429a! Please check out this troubleshooting section on the real-time API https://docs.pupil-labs.com/invisible/troubleshooting/#i-cannot-connect-to-devices-using-the-real-time-api
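In case it helps while you work through that list, here is a hedged variant of your snippet that searches a bit longer and falls back to connecting by IP address (the IP below is a placeholder; the actual address is shown in the Companion app's streaming section):
```python
from pupil_labs.realtime_api.simple import Device, discover_one_device

# give mDNS discovery more time; it returns None if nothing is found
device = discover_one_device(max_search_duration_seconds=30)

if device is None:
    # discovery can fail on restrictive networks; connect directly by IP instead
    device = Device(address="192.168.1.42", port="8080")

print(device)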
If your problems persist, let me know!
Hi! We experience an increasing number of failures of the companion app or the recording itself. In the two videos you can see two examples: 1. The app gets stuck on a white screen, and we have to turn the entire phone off and on again. 2. The phone vibrates and tells us that the recording start failed. From the troubleshooting documentation page, it's difficult to say what the problem is. Can you say whether the problem is more likely due to the phone, the cable, or the Invisible itself? What can we do to find out what the problem is?
Hi @user-4771db! Please contact info@pupil-labs.com about this.
Hi, we encountered a problem when using Invisible. The actual collection time is 2 minutes and 49 seconds, but the scene video shows 1 minute and 23 seconds. What could be the problem here?
Hi @user-4bc389! Could you please check if the uploaded version in Pupil Cloud plays the whole video? Alternatively, if you are not uploading the recordings to the Cloud, can you check whether the recording folder contains multiple scene video files (i.e. named ps1, ps2)?
Thank you for your reply. Currently, I have encountered some issues using Invisible. I have sent these issues to your email so you can see what the specific problems are. Thank you.
https://discord.com/channels/285728493612957698/285728493612957698/1098955560658927617 Hi @user-732eb4 ! I am replying to you on the Invisible thread, for housekeeping. It sounds like your scene camera might be broken, but can you contact info@pupil-labs.com such that we can assist you better?
Thanks for the fast reply. Sorry, I didn't notice I was posting in the Core channel. I will do so.
Hi, should there be a problem with enrichment when recording with LSL?
Hey @user-535417! Our LSL integration really just streams raw data over the network - that in itself will not affect enrichments. This is because enrichments are computed on recordings stored on the phone and subsequently uploaded to Pupil Cloud - independently of the LSL stream.
Hi Pupil Labs Team! I have a quick question: Have you published the algorithm that creates the heatmaps from the Reference Image Mapper Enrichment? I would like to recreate this for myself and adapt it if necessary. Greetings, Jonas
Hey @user-df468f. The algorithm that creates the heatmap from the Reference Image Mapper enrichment is the same one that is implemented in the public surface-tracker repository. You can find it here: https://github.com/pupil-labs/surface-tracker/blob/10a14b9ca6f1e1e3374d291a1b596983ce7a86eb/src/surface_tracker/heatmap.py#L85
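In case a standalone reference is useful: the general idea is roughly a 2D histogram of the mapped gaze/fixation points that gets blurred and color-mapped. A hedged sketch of that idea (bin sizes and blur are arbitrary here; see the linked code for the exact parameters Pupil Labs uses):
```python
import cv2
import numpy as np

def simple_heatmap(points_xy, width, height, blur_px=51):
    """points_xy: Nx2 array of gaze/fixation positions in reference-image pixels."""
    # 2D histogram of the points (rows = y, cols = x)
    hist, _, _ = np.histogram2d(points_xy[:, 1], points_xy[:, 0],
                                bins=(height // 10, width // 10),
                                range=[[0, height], [0, width]])
    # upscale to image size, smooth, normalize, and apply a color map
    hist = cv2.resize(hist, (width, height), interpolation=cv2.INTER_LINEAR)
    hist = cv2.GaussianBlur(hist, (blur_px, blur_px), 0)
    hist /= hist.max() + 1e-9
    return cv2.applyColorMap((hist * 255).astype(np.uint8), cv2.COLORMAP_JET)
```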
Hi Pupil Labs team, I work to improve public transit systems, and we just had a full day of data collection in the field with focus groups wearing the Pupil Invisible glasses. I have a question regarding the upload behaviour of Pupil Cloud.
Problem description:
1. I made a recording in the field using Pupil Invisible. When playing it back in the Invisible Companion app, it appears fine (with world video and the eye tracking circle mapped onto it).
2. It uploaded successfully to Pupil Cloud, but when I tried to play it back there in the Cloud, it stays frozen on a black screen with an error message along the lines of "this is either a network problem or incompatible file format" (I didn't manage to take a screenshot of the error).
3. I tried to troubleshoot this by trashing the recording on Pupil Cloud, thinking that the Invisible Companion app would detect this and automatically try to re-upload the recording. However, the recording still appears as fully synced to the Cloud (cloud icon with checkmark).
4. Next, I went into the trash bin of Pupil Cloud and deleted the recording, hoping to achieve the same. On the Invisible Companion app, it still shows the same fully-synced status (cloud icon with checkmark).
5. I took the final step of uninstalling and reinstalling the Invisible Companion app on the phone. This time, the recording list turned up empty (along with all other recordings I had on the app at the time - luckily, those had been synced to Pupil Cloud earlier). I'm afraid that I have permanently lost the troubled recording in question.
Questions:
1. Is there a way to recover the recording I've deleted from Pupil Cloud?
2. In the Android Gallery app of the companion device, I can still see the 3 video files of the recording in question (left eye, right eye, and world scene video). Is there a way to 'reconstruct' the full recording and have it uploaded to Pupil Cloud using these 3 videos I still have on the phone?
Update: I went into the Android Files app and managed to find the recording directory, with all the different scene videos and supporting files intact (ps1, ps2, PI world, gaze, event.time...). Is there a way to reconstitute these and sync the recording back into Pupil Cloud?
Hey @user-e65190, I'm sorry to hear that you were experiencing issues with your recordings. Please send us your workspace ID/name at info@pupil-labs.com - if the recording was deleted recently, we can still recover it.
Hi, I've got someone out in the field who's had the message in this photo. The device is connected to the internet and has been for some time. They've restarted and the message has not gone away, so they are unable to create a new wearer. Any advice? Many Thanks
Hi @user-bdf59c! Please visit https://speedtest.cloud.pupil-labs.com/ on the phone and execute the speed test to ensure the phone is able to access the Pupil Cloud servers well. Next, try to reset the app with the following instructions in order to provoke a fresh download of relevant data from the Pupil Cloud servers:
- Click and hold the app icon and select "App info"
- Click "Force stop"
- Go to "Storage usage" and hit "Clear data"
- Start the app while connected to the internet and see if the issue is fixed.
Note: No recording data is deleted when "clearing" the app data, but they will no longer be shown in the app. To get them back into the app you'd have to import them by going to the settings view in the app and selecting "Import recordings".
If this did not fix the problem, please send me a DM with the serial number of the scene camera module. To find it detach the scene camera module and check the golden plaque on the back.
@user-e65190 sounds like a cool project! Just sent you a PM, would love to chat.
Has anyone tried using the pupil glasses with users who have low vision? I'm curious how people with low vision view/interact with their environment, but something tells me the hardware only works with people who have full ocular capabilities...
Hi @user-cd03b7! Low vision per se should not be detrimental to Invisible's gaze estimation, but there are other eye conditions that might be. What did you have in mind?