Thanks a lot. One more thing: I added the reference picture and clicked the run button, but it is not processing.
Hi, I have an issue. I'm not sure if this should be a troubleshooting question. I have the Invisible Companion (IC) app and I had two functioning workspaces in it. After some app start-up issues I "deleted the cache" in the phone's app settings for the IC. Now it only sees the default workspace, which isn't the one I need to work in.
In Pupil Cloud my workspace is still there and seemingly working fine.
I tried to google solutions, but nothing specific came up.
In the app it says I should create a new workspace in Pupil Cloud, which I tried. But it doesn't seem to ask me to connect to a specific phone or anything similar. I don't remember how I did it when I first made the Pupil Cloud account.
So the question is: how do I reconnect a workspace to a specific phone app? Alternatively, just making an additional workspace would be fine as well.
Thanks!
@user-f6ea66 can you please create a ticket in the troubleshooting channel?
Yes
Did it, you can erase the above if you want.
Hi, I have the following question: Is it possible for Invisible to calculate the gaze direction vector in world coordinates? In the IMU data, there is no yaw available, since no magnetometer is built into the glasses. However, is it somehow possible to recalculate the gaze direction in world coordinates using the gaze and IMU data? Maybe using the video? Thank you in advance!
Hey @user-529f78, thanks for getting in touch! Regarding yaw computation, it's not possible if you want to compute absolute yaw. You can try to compute relative yaw, but this will be affected by drift over time. See https://discord.com/channels/285728493612957698/633564003846717444/1030029919473913886
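To illustrate the relative-yaw idea (and its drift), here is a minimal sketch that integrates the gyroscope's z-axis rate from an IMU export. The column names are assumed from the standard timeseries export format, and the sign convention depends on how the IMU axes are oriented, so treat this as a starting point rather than a definitive implementation.

```python
# Minimal sketch: relative yaw by integrating the gyroscope z-axis rate.
# Assumes an imu.csv export with "timestamp [ns]" and "gyro z [deg/s]"
# columns (check your export for the exact names). The result drifts
# over time because integration accumulates gyro bias and noise.
import numpy as np
import pandas as pd

imu = pd.read_csv("imu.csv")

t = imu["timestamp [ns]"].to_numpy() * 1e-9  # timestamps in seconds
gyro_z = imu["gyro z [deg/s]"].to_numpy()    # angular rate around z

dt = np.diff(t, prepend=t[0])                # per-sample interval (first is 0)
relative_yaw = np.cumsum(gyro_z * dt)        # degrees, relative to the start

imu["relative yaw [deg]"] = relative_yaw
```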
As for your other question, the issue here is that gaze is always provided with reference to the wearer. If you don't have a known position/coordinate in the environment (like a marker or something), then it's difficult to calculate this without using some kind of sophisticated object-detection/face-detection algorithm. So when that object moves out of the scene camera, then you don't know where it is, how far it is from the scene camera, etc.
To map gaze onto faces, you could try using the face mapper, but it won't work if the face moves out of the scene camera. It also won't tell you how far the faces are from the scene camera.
Actually, what I want to find in the end is the distance (in degrees) to an object (e.g., a face), even when the object is outside the field of view. Maybe there is another, "easier" way to find it. I would be happy if you could help me with it.
@user-07e923 thanks for the fast reply. Yes, I see. When the object is out of the scene camera (and it is also moving), then of course it is difficult to calculate the distance. Yes, I am going to use the Face Mapper to calculate the distance when it is in the scene camera. My thought was that maybe it is possible to interpolate the distance to the face when it is outside the field of view, assuming that the object is still there. What do you think, how large is the drift?
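While the face is still inside the scene image, one way to express the gaze-to-face distance in degrees is to back-project both pixel positions to viewing rays and take the angle between them. A minimal sketch under a simple pinhole model follows; the intrinsics FX, FY, CX, CY are placeholders, not the calibrated values of the Invisible scene camera, and for accuracy you would undistort the points with the real camera model first.

```python
# Minimal sketch: angular distance (deg) between the gaze point and a face
# position in the scene image, using a pinhole camera model. FX/FY/CX/CY
# are placeholder intrinsics -- substitute your scene camera's calibration.
import numpy as np

FX, FY = 770.0, 770.0  # assumed focal lengths in pixels (placeholders)
CX, CY = 544.0, 540.0  # assumed principal point (placeholder)

def pixel_to_ray(x, y):
    """Back-project a pixel to a unit 3D viewing ray."""
    v = np.array([(x - CX) / FX, (y - CY) / FY, 1.0])
    return v / np.linalg.norm(v)

def angular_distance_deg(gaze_px, face_px):
    """Angle in degrees between the gaze ray and the face ray."""
    r1, r2 = pixel_to_ray(*gaze_px), pixel_to_ray(*face_px)
    return np.degrees(np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0)))

print(angular_distance_deg((800, 600), (300, 400)))
```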
I also have another question, which confuses me due to the orientation of the axes. In the IMU data, are pitch and roll the rotations around the y and x axes respectively, as in the standard definition, or around x and z, since z points down? How are they calculated from the accelerometer and gyroscope data?
Regarding the IMU, I think this figure showing how the IMU is oriented in Invisible should help clarify the data.
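For what it's worth, the conventional gravity-based estimate of pitch and roll from a single accelerometer sample looks like the sketch below. The device itself fuses accelerometer and gyroscope data, and the signs depend on the axis orientation shown in the figure, so this only illustrates the standard textbook formulas, not the exact on-device computation.

```python
# Minimal sketch: conventional pitch/roll estimate from one accelerometer
# sample, assuming gravity is the only acceleration. Signs depend on the
# IMU axis orientation; the device's fused output may differ.
import numpy as np

def pitch_roll_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees from accelerometer readings."""
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    roll = np.degrees(np.arctan2(ay, az))
    return pitch, roll

# Example: device level, gravity along +z
print(pitch_roll_from_accel(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
```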
Regarding the message: does Pupil Player work with Invisible data?
Yes, Pupil Player works with Invisible data. Please download the "Pupil Player format" or the "Native Recording data" from Pupil Cloud.
Hi, I downloaded Pupil Player. My question is whether I can somehow add the Marker Mapper enrichment in Pupil Player, or if that is only possible in Pupil Cloud? Thx
Hi again @user-f6ea66! Yes, Pupil Player has similar functionality, called Surface Tracker. To use it, simply download the recording from Cloud (in Pupil Player format) or transfer it directly from the phone to your laptop/PC, load the recording in Pupil Player, and select the Surface Tracker from the Plugin Manager. This plugin detects the markers and allows you to define surfaces, similar to what you can do in Cloud with Marker Mapper.
Ok thanks!
Hi, I have the following question: I am currently analyzing Timeseries Data exported from Pupil Cloud, and I've noticed that there are instances in the gaze.csv file where the fixation id and blink id overlap. In such cases, which one should be considered correct?
Hi @user-f03094! It is possible for fixations and blinks to overlap. During certain phases of the blink sequence, such as when the eyelid begins to close or starts to reopen after a blink, the pupil may still be visible. This can result in overlapping classifications of fixation and blink.
In general, I recommend excluding gaze points that are close to blink events. A common practice is to add buffers of 100-150 milliseconds around blinks (e.g., 100 milliseconds before the start of a blink and 100 milliseconds after the end of the detected blink).
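A minimal sketch of that buffering approach on the timeseries exports is below. The column names ("timestamp [ns]" in gaze.csv, "start timestamp [ns]"/"end timestamp [ns]" in blinks.csv) are assumed from the standard export format, so check your files' headers.

```python
# Minimal sketch: drop gaze samples within a 100 ms buffer around each
# detected blink, using Pupil Cloud timeseries exports.
import pandas as pd

BUFFER_NS = 100_000_000  # 100 ms expressed in nanoseconds

gaze = pd.read_csv("gaze.csv")
blinks = pd.read_csv("blinks.csv")

keep = pd.Series(True, index=gaze.index)
for _, blink in blinks.iterrows():
    start = blink["start timestamp [ns]"] - BUFFER_NS
    end = blink["end timestamp [ns]"] + BUFFER_NS
    keep &= ~gaze["timestamp [ns]"].between(start, end)

gaze_clean = gaze[keep]  # gaze data with blink-adjacent samples removed
```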
@user-480f4c Thanks for the fast reply. You've clarified my question, and I appreciate it. On a different note, I have also emailed @user-07e923 regarding the reason for the fluctuations in the timestamp [ns] values in the gaze.csv file within the Timeseries Data, which range approximately between 4,000,000 and 8,000,000 ns. If you have time, I would appreciate any insights you can provide on this matter as well. Thank you!
Hi @user-f03094, as described in the email, I'm still collecting information regarding this, and I can only give you a response next week.
@user-07e923 Thank you for your response. I'll wait for your reply next week. I appreciate your help!
Hey guys
can the Pupil glasses record and be used without a phone?
does the online platform have the capability of conducting recordings?
Hey @user-bef103, thanks for reaching out! Pupil Invisible must be connected to the Invisible Companion phone and app for power and for recording, respectively.
It's not possible to record on Pupil Cloud, because Pupil Cloud is for storage and analysis.
Yeah, I thought so as well
so it's not even possible to use the companion app without a OnePlus phone?
For Pupil Invisible, the compatible devices are the OnePlus 6, 8, and 8T. We don't recommend using other devices because we've only done rigorous testing on these.
Okay, interesting, but in theory other Android devices do work?
Like I said, we don't recommend using any other devices.
A colleague of mine is telling me that the app cannot be used on other devices.
Yeah, I get that
Thanks
@user-07e923 sorry if I am bothering you a lot. Does this also apply to the Neon glasses?
regarding the recommended OnePlus phones
Hi @user-bef103, if I can interject, both the Invisible and Neon companion apps can only be downloaded from the Google Play Store on the supported phones and Android versions. They cannot be used on other devices or Android versions.
So it's also just the recommended OnePlus phone?
Here are the recommended devices for using Neon. Please note that if you have questions concerning Neon, please post them in the neon channel. Thanks.
Will do in the future, thanks a lot, guys
Hi all! When I go into the Pupil Invisible Companion app to create a new "wearer", I keep getting a message saying "Syncing wearers." This message has been on the phone for nearly 24 hours. I have tried the following steps (all of which have been unsuccessful): disconnecting from wifi, restarting the app, clearing the app cache, and restarting the phone. Any troubleshooting tips?
Hi @user-5553ff! We received your email and my colleague has already replied. But since you reached out here as well, would you mind trying to log out and log in again in the app?
Hello! Can you tell me where I can ask a technical question? I am interested in the following: is it possible to calibrate the Pupil Invisible glasses?
Hi @user-680341! This is the right channel for Pupil Invisible technical questions.
Pupil Invisible uses a neural network-based gaze estimation system. That means it has been trained on a large cohort of eyes and conditions and can generate gaze data for each eye camera image without the need for calibration.
Thus, while direct calibration isn't possible, what you can do is apply an offset correction to adjust the gaze estimate.
This can be done prior to making the recording (have a look here), or, if you have already made the recordings, you can apply it post-hoc in Pupil Cloud.
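If you would rather experiment with the exported data directly, a minimal sketch of applying a constant offset by hand is below. The offset values are placeholders you would estimate yourself (e.g., from a fixation on a known target), and the column names are assumed from the standard Cloud export; the built-in offset correction in the app and in Pupil Cloud remains the supported route.

```python
# Minimal sketch: manually shift exported gaze coordinates by a constant
# offset. OFFSET_X_PX / OFFSET_Y_PX are placeholder estimates.
import pandas as pd

OFFSET_X_PX = 12.0   # assumed horizontal offset in pixels (placeholder)
OFFSET_Y_PX = -8.0   # assumed vertical offset in pixels (placeholder)

gaze = pd.read_csv("gaze.csv")
gaze["gaze x [px]"] += OFFSET_X_PX
gaze["gaze y [px]"] += OFFSET_Y_PX
gaze.to_csv("gaze_offset_corrected.csv", index=False)
```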
Ok, I see. I will try this. Thanks!