šŸ•¶ invisible



user-ebd8d5 05 September, 2025, 14:48:25

Hi, I'm having some issues running the Reference Image Mapper enrichment on the cloud. I have collected scanning videos with and without an AprilTag. However, the enrichment runs for about 2 minutes before crashing with a server error. Could I check what could be going wrong?

user-4c21e5 05 September, 2025, 16:37:01

Hi @user-ebd8d5! Can you invite cloud-support@pupil-labs.com to your workspace so we can take a look?

user-ebd8d5 05 September, 2025, 16:47:50

Hi @user-4c21e5, I have invited them to the workspace, thank you

user-c2d375 08 September, 2025, 11:29:43

Reference Image Mapper support

user-ebd8d5 16 September, 2025, 12:57:17

Hi, could I get some information on what is saved in the poses.p file after RIM? Specifically, what do the start_timestamp and end_timestamp refer to, and how are they linked to the timestamps in gaze?

user-f43a29 16 September, 2025, 13:35:38

Hi @user-ebd8d5, the pose timestamps align with the scene camera timestamps. For each scene camera frame, Reference Image Mapper estimates a pose, but a pose might not be estimated for some frames. This means that the poses can start after the first scene camera frame but end no later than the last scene camera frame.

All data from Pupil Invisible are timestamped by the same high precision clock, so the timestamps for gaze, scene, and pose can all be directly compared to each other.
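
Since everything shares one clock, a nearest-timestamp join is enough to pair each gaze sample with a pose. A minimal sketch, assuming the poses and gaze data have already been loaded into pandas DataFrames; the column names used here are assumptions for illustration, not the exact export schema:

```python
import pandas as pd

# Assumed inputs: two DataFrames sharing the same nanosecond clock.
# `poses` with a "start_timestamp" column (one row per scene frame with a pose),
# `gaze` with a "timestamp" column (higher sampling rate than the scene camera).
poses = pd.DataFrame({"start_timestamp": [100, 200, 300], "frame": [0, 1, 2]})
gaze = pd.DataFrame({"timestamp": [90, 140, 210, 260, 310],
                     "gaze_x": [0.1, 0.2, 0.3, 0.4, 0.5]})

# Both sides must be sorted by the join key for merge_asof.
poses = poses.sort_values("start_timestamp")
gaze = gaze.sort_values("timestamp")

# For every gaze sample, pick the nearest pose in time.
matched = pd.merge_asof(
    gaze, poses,
    left_on="timestamp", right_on="start_timestamp",
    direction="nearest",
)
print(matched)
```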

user-ebd8d5 16 September, 2025, 13:50:18

Hi @user-f43a29: Thank you. Just to clarify: 1) Under what conditions is the pose not computed for certain frames? 2) It seems that the number of poses is almost equal to the number of world timestamps (discarding the frames where the pose was not calculated). However, since gaze is sampled at a higher frequency than the world timestamps, would I need to select the appropriate gaze samples using the world timestamps to plot them in world coordinates (as the pose is calculated for that specific frame)?

user-f43a29 16 September, 2025, 14:00:56

Hi @user-ebd8d5 , here are answers:

1) It depends on several factors: motion blur, whether there is enough color/texture variation, whether the scanned region is detected in the uploaded recording, and so on. It is a complex process that can vary from environment to environment, so it is not really possible to give a fixed answer, but so long as you follow our Best Practices, you should be good to go.

2) Yes, the poses are derived from the scene camera frames (i.e., the world camera frames), so each pose corresponds to a world timestamp. Gaze is indeed sampled at a higher frequency, so correct: you select the gaze data that is appropriate for your situation. That could be a single gaze point or the mean gaze for the given scene frame, as is done in the Tag Aligner Visualization tools.
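
As an illustration of the mean-gaze-per-frame approach (not the Tag Aligner code itself), here is a small sketch that assigns each gaze sample to its nearest scene frame and averages; all names and values are illustrative:

```python
import numpy as np

# Illustrative data: scene (world) frame timestamps and higher-rate gaze samples,
# all in the same nanosecond clock.
world_ts = np.array([100, 200, 300])            # one timestamp per scene frame
gaze_ts = np.array([95, 150, 205, 250, 305])    # gaze timestamps
gaze_xy = np.array([[0.1, 0.2], [0.2, 0.3], [0.3, 0.4], [0.4, 0.5], [0.5, 0.6]])

# Assign each gaze sample to the scene frame whose timestamp is closest.
# searchsorted gives the insertion index; we then compare the two neighbours.
idx = np.searchsorted(world_ts, gaze_ts)
idx = np.clip(idx, 1, len(world_ts) - 1)
left_closer = (gaze_ts - world_ts[idx - 1]) < (world_ts[idx] - gaze_ts)
frame_idx = np.where(left_closer, idx - 1, idx)

# Mean gaze per scene frame (frames without any gaze simply do not appear).
mean_gaze = {f: gaze_xy[frame_idx == f].mean(axis=0) for f in np.unique(frame_idx)}
print(mean_gaze)
```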

user-ebd8d5 16 September, 2025, 14:08:28

@user-f43a29 : thank you. Is that the function find_gaze_index_by_timestamp in playback.py (in the tag aligner project)?

Just to give some background: I have developed a 3D model using NeRF (based on the pyflux project) with poses computed using COLMAP. I would now like to project the gaze onto the 3D model (similar to the Tag Aligner project). However, I do not have the poses file that is obtained from the recording, or rather I would prefer not to use the poses file (which leads me towards working with the raw data from the Cloud). Therefore, I do not have the start and end timestamps.

user-f43a29 16 September, 2025, 14:17:53

By that function, or if using the Blender plugin, by these lines.
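
For reference, the idea behind that kind of lookup is just a nearest-timestamp search. The sketch below is not the actual Tag Aligner implementation, only a minimal illustration of the concept:

```python
from bisect import bisect_left

def nearest_gaze_index(gaze_timestamps, query_ts):
    """Return the index of the gaze sample whose timestamp is closest to query_ts.

    Illustrative only; gaze_timestamps must be sorted in ascending order.
    """
    i = bisect_left(gaze_timestamps, query_ts)
    if i == 0:
        return 0
    if i == len(gaze_timestamps):
        return len(gaze_timestamps) - 1
    # Pick whichever neighbour is closer in time.
    before, after = gaze_timestamps[i - 1], gaze_timestamps[i]
    return i - 1 if (query_ts - before) <= (after - query_ts) else i

# Example: look up the gaze sample nearest to a scene frame timestamp.
gaze_ts = [100, 105, 110, 115, 120]
print(nearest_gaze_index(gaze_ts, 108))  # -> 2
```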

May I ask: if you are not working with data from Pupil Cloud or the Companion Device, then what data source are you using?

user-ebd8d5 16 September, 2025, 14:20:22

I am working with data from Pupil Cloud. However, I am not performing any enrichment, which I believe provides the poses file. I download the timeseries + scene video from each recording and I am trying to use the gaze data from that folder.

user-f43a29 16 September, 2025, 14:39:01

I see. Then you may want to reference how we did it for COLMAP and NERFStudio.
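
If the poses come from COLMAP instead of the RIM poses file, the same mapping idea applies. A rough sketch, assuming a COLMAP image pose given as a quaternion (qw, qx, qy, qz) and translation in COLMAP's world-to-camera convention, and a gaze direction already expressed as a unit ray in the scene-camera frame (how that ray is obtained, e.g. via the camera intrinsics, is left to your pipeline):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_ray_to_world(qvec, tvec, gaze_ray_cam):
    """Map a gaze ray from camera coordinates into the COLMAP world frame.

    qvec: (qw, qx, qy, qz) and tvec: (tx, ty, tz) as stored by COLMAP,
    i.e. x_cam = R_wc @ x_world + t (world-to-camera). Illustrative sketch only.
    """
    qw, qx, qy, qz = qvec
    R_wc = R.from_quat([qx, qy, qz, qw]).as_matrix()  # scipy expects (x, y, z, w)
    R_cw = R_wc.T                                     # camera-to-world rotation
    cam_center_world = -R_cw @ np.asarray(tvec)       # camera position in world frame
    ray_world = R_cw @ np.asarray(gaze_ray_cam)       # rotate the ray into world frame
    return cam_center_world, ray_world / np.linalg.norm(ray_world)

# Example: a gaze ray along the optical axis (straight ahead in camera coordinates).
origin, direction = gaze_ray_to_world(
    qvec=(1.0, 0.0, 0.0, 0.0), tvec=(0.0, 0.0, 0.0), gaze_ray_cam=(0.0, 0.0, 1.0)
)
print(origin, direction)
```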

user-ebd8d5 16 September, 2025, 14:51:51

Hi, I have used that as the base and I have developed the 3D model. However, it also uses the poses file to compute gaze, which is where I am currently stuck.

user-ebd8d5 22 September, 2025, 13:40:27

Hi, I have another question (sorry): I am trying to calculate the transformation from the scene camera to the IMU coordinate frame (or vice versa). There is a function for the transformation from the scene camera to the IMU coordinate system for the Neon system (https://docs.pupil-labs.com/alpha-lab/imu-transformations/). I would like to calculate something similar for the Invisible:

1) Are there exact measurements for the rotation/translation between the IMU and the scene camera for the Invisible?
2) Am I right to assume there is no rotation difference between the Invisible's IMU coordinate system and the scene camera's coordinate system, as the X, Y and Z axes are similar (X is right, Y is downward, and Z is in the direction of the optical axis)?
3) Is there an IMU reference system such as ENU or NED?

Thanks a lot

user-f43a29 22 September, 2025, 13:54:19

Hi @user-ebd8d5 , nothing to be sorry about!

So, Pupil Invisible does not provide Yaw, only Roll and Pitch; that is, you do not get absolute "left/right" orientation with respect to magnetic North.

From my understanding of your use case, you would also want left/right, correct?

user-ebd8d5 22 September, 2025, 14:01:54

Hi @user-f43a29 : Thanks for your reply. I am trying to implement visual-inertial SLAM using the scene camera and IMU. Following the steps in this link (https://uk.mathworks.com/help/vision/ref/monovslam.html), I need the gyroscope and accelerometer values (which we obtain from IMU.csv), a camera-to-IMU transformation, and the reference frame for the IMU. For the overall use case I would like absolute left/right orientation, but for just viSLAM I do not think it is necessary.
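
As a small aside, preparing the IMU stream for a SLAM front end mostly comes down to unit conversion. A hedged sketch, assuming gyroscope values in deg/s and accelerometer values in g; the column names below are assumptions, so check them against the header of your imu.csv:

```python
import numpy as np
import pandas as pd

# Assumed column names; adjust to match the header of your imu.csv export.
imu = pd.read_csv("imu.csv")

timestamps_ns = imu["timestamp [ns]"].to_numpy()

# Gyroscope: deg/s -> rad/s, as most SLAM front ends expect SI units.
gyro_rad_s = np.deg2rad(
    imu[["gyro x [deg/s]", "gyro y [deg/s]", "gyro z [deg/s]"]].to_numpy()
)

# Accelerometer: g -> m/s^2.
G = 9.80665
accel_m_s2 = G * imu[
    ["acceleration x [G]", "acceleration y [G]", "acceleration z [G]"]
].to_numpy()

print(timestamps_ns.shape, gyro_rad_s.shape, accel_m_s2.shape)
```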

user-f43a29 22 September, 2025, 14:03:41

I see.

The reference frame for the IMU is diagrammed here.

I would need to check with the team about its pose relative to the world camera coordinate system. I will update you once I have that info.

user-ebd8d5 22 September, 2025, 14:04:12

Thanks a lot. Are there absolute values for the distance of the IMU from the world scene camera?

user-f43a29 22 September, 2025, 14:23:18

Hi @user-ebd8d5 , so you can use this diagram showing the positioning of the IMU to approximate the relative distances. You can simply use a ruler for this with your Pupil Invisible.

The reason is that its arms can flex differently for different head sizes, so it is not really possible to provide a fixed number.

As can be seen here, the coordinate system labelling of the scene camera is the same as that of the IMU:

  • X is to the right
  • Y is downwards
  • Z is forwards

Assuming that they are aligned should be sufficient for viSLAM purposes, although it has not been tested.
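
Under that assumption (aligned axes, ruler-measured offset), the camera-to-IMU transform reduces to a pure translation. A minimal sketch; the numbers are placeholders for your own measurements, not official values:

```python
import numpy as np

# Assumption from the discussion above: scene camera and IMU axes are aligned
# (X right, Y down, Z forward), so the rotation is taken as identity.
R_cam_to_imu = np.eye(3)

# Ruler-measured offset of the IMU relative to the scene camera, in meters.
# These numbers are placeholders, not official values for Pupil Invisible.
t_cam_to_imu = np.array([0.03, 0.0, -0.01])

# Homogeneous 4x4 transform: p_imu = T @ p_cam (points as [x, y, z, 1]).
T_cam_to_imu = np.eye(4)
T_cam_to_imu[:3, :3] = R_cam_to_imu
T_cam_to_imu[:3, 3] = t_cam_to_imu

point_cam = np.array([0.0, 0.0, 1.0, 1.0])  # a point 1 m ahead of the scene camera
print(T_cam_to_imu @ point_cam)
```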

user-f43a29 22 September, 2025, 14:06:15

I will also be checking with the team about that šŸ‘

user-ebd8d5 22 September, 2025, 14:29:54

Hi @user-f43a29 : thanks. I have used a ruler to get approximate measurements and shall continue with the same. Thank you for the clarification

user-870276 24 September, 2025, 03:56:17

Hi Pupil Labs, I'm trying to better understand the coordinate system of the Pupil Invisible. On your website, it says that the origin is at the top-left corner of the image. Is this origin fixed for every frame throughout the recording, or does it change with each frame?

I basically wanted to know this because I want to convert the pixel coordinates to meters in the world view.

user-f43a29 24 September, 2025, 07:00:52

Hi @user-870276 , the coordinate system of the scene camera is fixed, so the origin is always at the top left corner of the image for every frame and does not change with movement of the head.

To determine where they are looking in world coordinates, that coordinate system is a start, but some extra steps are needed: you also need to be able to localize the wearer in the environment or estimate the distance from image properties. We have tools that help with this. You can either:

  • Use our Tag Aligner tool and choose meters as the units for your coordinate system. Although it was built for Neon, it is in principle also compatible with Pupil Invisible.
  • Try monocular depth estimation networks, as shown in the Map Gaze Onto Anything Alpha Lab. Again, although developed for Neon, it is in principle also compatible with Pupil Invisible.
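
For a rough sense of the math involved, here is a minimal sketch of turning a gaze pixel into a metric point in the scene-camera frame, assuming a pinhole camera matrix and some estimate of gaze depth (e.g. from a depth network). Lens distortion is ignored for brevity, and the intrinsic values are placeholders:

```python
import numpy as np

# Placeholder intrinsics; use the actual scene-camera matrix from your export/calibration.
K = np.array([
    [770.0,   0.0, 544.0],
    [  0.0, 770.0, 544.0],
    [  0.0,   0.0,   1.0],
])

def pixel_to_camera_point(gaze_px, depth_m, K):
    """Back-project a gaze pixel to a 3D point in the camera frame (meters).

    Pinhole model only; lens distortion is ignored in this sketch.
    depth_m is the assumed distance along the optical axis (Z).
    """
    u, v = gaze_px
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction with Z normalized to 1
    return depth_m * ray                            # scale by depth to get meters

# Example: gaze at the image center, assumed to be 2 m away.
print(pixel_to_camera_point((544.0, 544.0), depth_m=2.0, K=K))
```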

user-9e276a 30 September, 2025, 10:12:09

Hi @user-f43a29 , I am trying to re-upload the downloaded data from Pupil Cloud back to the cloud. Could you guide me on how to do that?

user-f43a29 30 September, 2025, 10:13:50

Hi @user-9e276a , re-uploading data is not a supported workflow. You can use our offline tools, such as Pupil Player, for that data.

user-9e276a 30 September, 2025, 10:14:13

Could you guide me on how to get the offline tool?

user-f43a29 30 September, 2025, 10:14:55

Sure. It is here and the Documentation is here.

user-9e276a 30 September, 2025, 10:30:23

Hi @user-f43a29 , I uploaded the folder in Pupil Player, but no video is showing on screen even though it's playing. What could be the reason?

End of September archive