Hi to all! I don't know if this has already been reported, but the Companion app seems to be working fine on the OnePlus 8 Pro. Though, I had to upgrade to the latest OS version (Oxygen OS 11.0.8.8), as I had connection issues with the eye cameras, it seems (the serial number of the glasses was not shown, only the one of the scene camera; the "Play" button remained greyed out).
To clarify, did upgrading to Oxygen OS 11.0.8.8 fix your issue?
Hi everyone, does anyone know how to decode the .imu file that is output from the system? It appears to output the IMU values on the phone attached to the Invisible glasses. Is there a specific script or algorithm that can be used to unpack the hex that I am seeing in the .imu file?
@user-f75e38 you can open it in Pupil Player - alternatively, you can export IMU data from Pupil Cloud with the Raw Data Exporter enrichment.
@user-f75e38 In addition, you can find the definition of the raw format here https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793 (in case you want to process them without our software)
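In case it helps anyone processing the .imu file without Pupil software: the authoritative field layout is in the spreadsheet linked above. The sketch below only illustrates the general approach with Python's `struct` module — the layout assumed here (little-endian uint64 nanosecond timestamp followed by six float32s for gyroscope and accelerometer axes) is an assumption that must be verified against that sheet before use.

```python
import struct

# ASSUMED sample layout -- verify against the linked spreadsheet:
# little-endian uint64 timestamp (ns) + 6 x float32 (gyro x/y/z, accel x/y/z)
SAMPLE = struct.Struct("<Q6f")

def parse_imu(raw: bytes):
    """Yield (timestamp_ns, gyro_xyz, accel_xyz) tuples from a raw .imu blob."""
    for ts, gx, gy, gz, ax, ay, az in SAMPLE.iter_unpack(raw):
        yield ts, (gx, gy, gz), (ax, ay, az)

# Usage (path is illustrative):
# with open("recording.imu", "rb") as f:
#     samples = list(parse_imu(f.read()))
```

`iter_unpack` requires the blob length to be an exact multiple of the sample size, so a failure there is a quick sign the assumed layout is wrong.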
Thank you @wrp and @papr (Pupil Labs) !
Hi Pupil Labs! Where can I find the serial number for the product? Thank you
Hi Pupil Labs, is the OnePlus 8 update to 11.0.8.8IN21AA compatible with Pupil Invisible?
Yes, it is. Make sure to update the Companion app to version 1.2.3 first, before installing the Android update.
Hi Pupil Labs, we paused our eyetracking-endeavours for quite some time and just today checked if everything was working. Is there any reason to upgrade the OnePlus OS when the most up-to-date Companion App is working fine? Otherwise, we'd just leave it be due to adverse experiences in the past, because we want to use it tomorrow and time is of the essence. Thanks!
Thank you!
Hi @papr, our lab is interested in purchasing the Pupil Invisible and wanted to know if we can extract measures like fixation length, saccades, blinks and pupil size. Thanks.
Hi 🙂 Currently, Pupil Invisible only estimates the raw gaze signal in scene camera coordinates. These recordings can be enriched via Pupil Cloud https://docs.pupil-labs.com/cloud/enrichments/ While we are actively working on fixation and blink detection for Pupil Invisible recordings, these features are not quite ready yet. Due to the lateral placement of the eye cameras, the pupil is often only partially visible making pupil size measurements very difficult.
Alternatively, I can direct you to our Pupil Core product for which fixation and blink detection, as well as pupil size estimations, are available today: https://pupil-labs.com/products/core/
Thanks, we already have a Core but were looking into the Invisible as well; it seems to be more user-friendly for the participants and doesn't require calibration. I guess it might be difficult for you to estimate, but do you have some timeline on when fixation and blink detection would be available? What about saccades, will that be available in the near future?
I've been using the pupil invisible and pulling out fixation information using data exports from pupil player and the 'saccades' package in Rstudio. Once Pupil Labs makes fixation information available, I am interested to see how close my estimations are to theirs.
Is there any way to replace the wired usb c to c cable with a wireless one for Pupil Invisible
No, this is not possible. The Pupil Invisible Glasses themselves do not have a battery or e.g. a bluetooth chip that could facilitate the communication.
Fixations should become available in October. Depending on how you define saccades, you could consider the time between fixations to be saccades. A more fine-grained classification between e.g. fixations, saccades, micro-saccades, VOR and smooth pursuit will initially not be possible though. Blinks will take a bit longer, but will hopefully still be released this year.
Thanks, I think the timeline should be OK for us
Regarding blinks, what information do you need for research exactly?
We just need the blink rate
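For reference, once blink events are available (from Pupil Cloud or a custom detector), blink rate reduces to a count over the recording duration. A trivial sketch, assuming blink onset timestamps in seconds (the helper name and input format are illustrative, not part of any Pupil export):

```python
def blink_rate_per_minute(blink_onsets_s, duration_s):
    """Blinks per minute, given onset timestamps (s) and recording length (s)."""
    return 60.0 * len(blink_onsets_s) / duration_s
```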
Thanks, are you referring to https://cran.r-project.org/web/packages/saccades/index.html. ?
yes
Have you also calculated saccade metrics like velocity, direction using this or any other package?
No, and the package doesn't either. Perhaps a person more savvy with R could write functions to determine them, but that's beyond my skillset.
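For anyone wanting the velocity and direction metrics discussed above: once you have a saccade's start and end samples (e.g. taking the gaps between detected fixations as saccades, as suggested earlier), the geometry is straightforward. A minimal Python sketch, assuming positions already converted to degrees of visual angle and timestamps in seconds (the function name and units are illustrative assumptions, and this gives mean rather than peak velocity):

```python
import math

def saccade_metrics(x0, y0, t0, x1, y1, t1):
    """Amplitude (deg), mean velocity (deg/s) and direction (deg, CCW from +x)
    for one saccade, given its start/end gaze positions and timestamps.
    Convert pixel or normalized coordinates to degrees first."""
    amplitude = math.hypot(x1 - x0, y1 - y0)
    velocity = amplitude / (t1 - t0)  # mean velocity, not peak
    direction = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return amplitude, velocity, direction
```

Peak velocity would require the full sample trace within the saccade, not just its endpoints.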
Hi, is there anywhere I can find troubleshooting advice for uploads to pupil cloud stopping throughout the upload?
@user-bdf59c Please see this previous message ☝️
Thank you!
Hi, can anyone advise if the OnePlus 6 OS software update 10.3.12 is compatible with Invisible Companion?
@user-bdf59c please do not update to 10.3.12. There is a known bug in Android OS that will block USB access which is needed for Pupil Invisible glasses. Android OS v8, v9, and v11 are supported. But v10 is not.
Thank you! The reason I ask is that after correcting the gaze offset in preview mode, the red circle on the screen cannot be seen in the preview or in the recording itself. I'm trying to troubleshoot this.
@user-bdf59c can you confirm the Android version you are using? Also, could you update to the latest version of the Pupil Invisible Companion app?
Troubleshooting:
1. Log out of the app
2. Clear the cache (long press on the app icon > App info > Storage & cache > Clear cache)
3. Relaunch the app & log in
Looking forward to resolving this 😸
Hi, I have tried this, but my colleague reports still seeing no red circle in preview mode at all. Are there any other troubleshooting tips for this?
I will find the Android version that we're using; a colleague has made the mistake of updating to v10. But we do have another OnePlus device that we can use in the meantime. I will pass this on to our researcher currently using the glasses and hopefully it will fix it. Thank you!
You can roll back the Android version; see: https://discord.com/channels/285728493612957698/633564003846717444/654683156972175360
Amazing, thank you!
Please back up all data prior to rollback.
Hello, I'm new to Pupil and have one question: I'm trying to use the Reference Image Mapper enrichment for a 10 meter projected pathway (see attached picture). For some reason I keep getting the same error: "The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image". I've tried hundreds of different ways to scan the surface and to take pictures, but I keep getting the same error. Could you tell me if it's possible to use the enrichment for such a long surface on the ground?
Hi @user-b79e7d 👋 . The reference image mapper ultimately prefers a feature rich environment. The visual properties of the flooring in your reference image are quite uniform, which is a potential reason as to why the enrichment fails. In this reference image taken from a basic obstacle negotiation task, gaze has been mapped well, due in part to the abundance of unique features. Concretely, I would try to start with a smaller area of the walkway, and perhaps add some different features (e.g. floor markings) if protocol allows.
Hello! I am new to Pupil Invisible and I was wondering if you can provide a sample of the output data? What format does it output as?
What I'm really trying to figure out at the moment as I look at multiple headsets is what the data outputs from each device are, that way I know what to expect from each headset. For instance, for the gaze data, is it one point that is the average gaze point between the two eyes, or is there a gaze point for both eyes? Or does the data output both the average gaze and the two individual eye gaze points?
If you could provide a rundown of the data outputs, or provide a sample of data output, that would be extremely helpful!
Hi all, I've just tested again and we have no red circles still. We really need to get this sorted as we have fieldwork due to start on Monday. Any ideas?
Would it be possible for you to share an example recording with us? You can either export it from the app or download from pupil cloud. Please share it with data@pupil-labs.com
Thanks Papr, should be shared with you now.
I will come back to you later today, once I reviewed the recording.
Thanks Papr. As previously mentioned, we really need to fix these today so we can get them back out later today.
Hi @user-31ef4c 👋. Pupil Invisible employs a neural network that provides stable gaze estimation using images from both eye cameras (binocular). Gaze is provided in normalized world-camera coordinates. You can read more about the gaze estimation here: https://arxiv.org/abs/2009.00508 In addition, there are several enrichment tools that you can use to analyse recordings within Pupil Cloud, and each have different outputs (as well as the raw gaze output). Check out our online documentation for a comprehensive overview: https://docs.pupil-labs.com/cloud/enrichments/#enrichments
Thank you very much @nmt , this is extremely helpful!
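For anyone post-processing the raw gaze output mentioned above: mapping normalized coordinates onto the scene image is a one-liner, but note the y-axis flip. A minimal sketch, assuming a bottom-left origin for the normalized coordinates and a 1088×1080 scene camera resolution — both assumptions worth verifying against the docs for your particular export:

```python
def norm_to_pixels(norm_x, norm_y, width=1088, height=1080):
    """Map normalized gaze (assumed bottom-left origin) to scene-image
    pixel coordinates (top-left origin). Default resolution is assumed
    for Pupil Invisible's scene camera; adjust for your device."""
    px = norm_x * width
    py = (1.0 - norm_y) * height  # flip y: image rows grow downward
    return px, py
```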
@user-468386 @user-bdf59c I have jumped in and taken a look to speed things up. The recorded eye videos in your sample recordings are unfortunately defective, recording only black video, which indicates a hardware failure of either the eye cameras or the IR LEDs.
To get you back up and running asap we will have to replace the hardware. I have advised my colleagues to send you an immediate replacement with express delivery today. They will inform you about how to facilitate the rest via email. I am sorry for the inconvenience and hope the effect on your data collection will be manageable!
Is the same address as for the original delivery okay, or would you prefer something else?
Hi Marc, thanks for taking a look! The external camera recordings appear to be working fine so is it just the eye facing recordings that aren't working? Let me just discuss delivery with my colleagues. When would you expect these to be delivered?
The recording of the scene camera is fine, but the eye camera videos are not. In Pupil Player you can play back the eye videos and see that they only show up black. You are based in the UK, right? The delivery will leave our office shortly after we have determined the address and should arrive on Monday. Post-Brexit there is a chance of hiccups in customs though.
@user-468386 Please send the desired address as well as a shipping contact person and phone number as a follow-up to your previous email once you have it!
@user-468386 just so you are aware, we will need the info in the next 60 minutes, since the customs office will close early on Friday; after that, the next chance to export is Monday.
Hi Moritz - Can we please send to Eve Duncan, Swain House, Main Street, Bishop Wilton, YO42 1SR
These will be going directly to one of our team. Will we need any setup once they arrive (e.g. syncing with the phones we have etc) or should it just be a case of swapping the glasses in for the old set?
Thank you! No setup will be needed, they should work out of the box!
Sorry can you amend the name to Steve & Eve Duncan please
Perfect, thanks Marc!
Could you still let me know a phone number to tell the courier in case of any problems? Please send it via direct message!
Thanks Marc - should be with you now!
@nmt or anyone else, do the Pupil Invisible glasses output pupil diameter in the raw data? In the Raw Data Exporter enrichment, I don't see anything mentioning pupil diameter.
Pupil Invisible does not currently produce any pupil diameter measurements, so this will not be available. Pupillometry data is available using the Pupil Core device. Pupil Core was designed for this use-case. It delivers high quality images of the pupil and pupil detection, and pupil diameter estimation is already available in the accompanying software.
I was looking for the answer to exactly that question. I also assume Invisible can't be easily enriched with pupillometry data by attaching the IR cameras to it?
Hi. Pupil Invisible already uses IR cameras. But due to the lateral placement of the eye cameras, the pupil is often only partially visible making pupil size measurements very difficult.