🕶 invisible



user-1e429b 03 March, 2026, 14:15:13

Hi, everyone! I've decided to get back to my recordings from Pupil Invisible, but all the recordings seem to be locked for analysis. I got a notification about full storage and an update to the Cloud Plan. Could anyone explain, please, what this is? Is it necessary to update my subscription to work with old recordings?

user-d407c1 03 March, 2026, 14:28:13

Moving forward, you can:

  • Acquire an Unlimited Analysis Plan: You can add an unlimited analysis plan to your Cloud account (see here https://pupil-labs.com/products/cloud/pricing) to work with the data without any restrictions.

  • Work under the free 2h quota: You can analyze and download the data in batches of 2h; after that, you would need to remove recordings from your workspace (including the trash) to make more space available. Kindly note that recordings can't be re-uploaded, and you won't be able to aggregate more than those 2h into enrichments and visualizations.

  • Work completely offline: Pupil Cloud is optional; you can operate completely offline by transferring your recordings via USB and using Pupil Player. But note that gaze is densified in Cloud to provide the full 200Hz sampling rate with Pupil Invisible, as the real-time pipeline can only reach up to 120Hz.

If you have any further questions, please don't hesitate to reach out.

user-d407c1 03 March, 2026, 14:26:29

Hi @user-1e429b 👋 ! To keep providing all Cloud capabilities sustainably, back in July 2024, we introduced Cloud Plans (formerly known as Add-ons) and the Early Adopters Club (see announcement https://discord.com/channels/285728493612957698/733230031228370956/1292784609938898954).

As part of that club, we provided a year of unlimited storage free of charge for devices purchased before October 1st, 2024.

That period concluded in October last year, and recordings exceeding the 2-hour quota are temporarily restricted, though no data has been deleted.

Your account should have received several notifications about this topic over the past year; we also posted about it here, and a banner was shown in Cloud for several months, covering how this would affect your Cloud storage and how you can keep working with the data.

user-ac2210 05 March, 2026, 12:07:54

Hi everyone! I have a question regarding real-time tracking with Pupil Invisible. Is it possible to track and record the 3D position/pose of a 2D plane (a monitor screen) in real-time? For context: I'm defining the screen surface using tags (AprilTags). I want to dynamically track its position in 3D space relative to the wearer. So, for example, if I physically move away from the monitor, the system registers that the plane has moved further away and its relative scale has changed. Is this achievable out-of-the-box in real-time via the Network API or any existing plugins? Thanks in advance for any tips!

user-d407c1 05 March, 2026, 12:35:59

Hi @user-ac2210 👋 ! That should be possible even with a single AprilTag marker of a known size.

Essentially, you can leverage the pupil-apriltags library. In the detector call, you would need to enable the estimate_tag_pose flag and pass the camera parameters and the physical size of the tag in meters.

from pupil_apriltags import Detector

at_detector = Detector(families="tag36h11")

# camera_params = (fx, fy, cx, cy); TAG_SIZE = tag edge length in meters
tags = at_detector.detect(
    gray_img,
    estimate_tag_pose=True,
    camera_params=camera_params,
    tag_size=TAG_SIZE,
)

I am attaching a snippet that does the same, but for 👓 neon . To adapt it to Pupil Invisible, you would need to download the scene_camera.json and read the camera parameters from there, as they are not stored on the device.

https://docs.pupil-labs.com/alpha-lab/undistort/#reading-from-the-cloud-download-json-file

detect_markers.py
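
For the Pupil Invisible adaptation, a minimal sketch of reading the intrinsics could look like this (it assumes the camera_matrix field described in the linked guide, with the file name as downloaded from Cloud):

import json
import numpy as np

# scene_camera.json as downloaded from Pupil Cloud
with open("scene_camera.json") as f:
    intrinsics = json.load(f)

K = np.array(intrinsics["camera_matrix"])
camera_params = (K[0, 0], K[1, 1], K[0, 2], K[1, 2])  # fx, fy, cx, cy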

user-ac2210 05 March, 2026, 14:36:58

Thank you so much for the detailed explanation and the code snippet! This is exactly what I was looking for. I'll make sure to download the scene_camera.json for my Pupil Invisible and adapt the script as you suggested. Really appreciate the help!

user-ab65d6 08 March, 2026, 14:25:14

Hello, I'm facing the same issue as described here: https://github.com/pupil-labs/pupil/issues/733 I have a Pupil Invisible and am trying to run the Pupil Capture app on Windows. I can see the camera feed as a standard USB camera in the standard Photos application on Windows, but when I launch Pupil Capture I get the error: "[ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied."

The video feed is then gray and various errors are printed on the screen.

user-ab65d6 08 March, 2026, 14:26:35

I have tried to reinstall the drivers, run the application in Admin mode, but it doesn't change anything. I think the hardware is OK because it works fine on Android (but the battery drains really quickly hence why I'm trying Windows).

user-ab65d6 08 March, 2026, 14:27:58

Would appreciate any info you have on this particular bug, if it was fixed in other builds etc. I have tested Pupil Capture 3.0.7 and 3.5.1 which I got from here: https://github.com/pupil-labs/pupil/releases Thanks in advance!

user-f43a29 08 March, 2026, 14:56:00

Hi @user-ab65d6 , do you have a Pupil Core or a Pupil Invisible? To clarify, Pupil Invisible is used with the Invisible Companion app on an Android smartphone to make recordings.

user-ab65d6 08 March, 2026, 14:58:27

It's a Pupil Invisible. Yes I've tried it with the supplied Android phone (I got the kit from my university's lab) which works fine. However I'd rather run the software on Windows so it can integrate easily with the rest of my test, and also the battery on Android drains really quickly when using it.

user-f43a29 08 March, 2026, 15:00:37

Hi @user-ab65d6 , using it directly over USB with a PC like that is not supported. Rather, you connect to it over WiFi, Ethernet, or hotspot via the Real-time API. If the Android phone's battery is draining quickly, it could be that the phone needs repair, or you can simply source a new phone locally. The supported Android phones and Android versions are listed here.

user-ab65d6 08 March, 2026, 15:03:05

OK I see, thanks for clarifying. Can I connect to Invisible over Wifi then? Is there a wifi chip inside? Then I could connect my PC to it by Wifi, and would the Pupil Capture software work then?

user-f43a29 08 March, 2026, 15:06:56

Hi @user-ab65d6 , as documented & diagrammed here, Pupil Invisible connects to the phone and within the phone, there is a WiFi chip. The glasses themselves contain only the sensors that collect the raw data.

You would no longer use Pupil Capture then. That software is for our original eyetracker, Pupil Core. Rather, you would use Python in most cases to connect to Pupil Invisible through the Real-time API, once the PC and the phone are connected to the same WiFi network. (Note that we do not recommend University or work WiFi, but rather use a dedicated local WiFi router; it does not need Internet access).
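
For example, a minimal sketch with the pupil-labs-realtime-api Python package (using its simple blocking interface; it assumes the PC and phone are already on the same network):

from pupil_labs.realtime_api.simple import discover_one_device

# Discover the Companion phone on the local network
device = discover_one_device(max_search_duration_seconds=10)
print(f"Connected to {device.phone_name} at {device.phone_ip}")

# Receive a scene frame together with the gaze sample matched to it
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
print(f"Gaze at ({gaze.x:.1f}, {gaze.y:.1f}) px in the scene image")

device.close()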

What is your experimental goal? You might not even need to connect your PC to the device, if you only need to record data.

user-ab65d6 08 March, 2026, 15:03:31

It's not super clear to me...

user-ab65d6 08 March, 2026, 15:05:15

On the other hand, the link you sent suggests to try a USB hub with the phone to supply power, so maybe I can try that at least

user-f43a29 08 March, 2026, 15:07:47

Yes, this is also a viable option.

user-ab65d6 08 March, 2026, 15:14:38

OK I see. So there is no provided software for Invisible that runs on Windows then. My goal is to capture the video feed from the camera mounted on the glasses and to record it on the PC. There is also a Unity application running on the PC at the same time, I want to record the screen so both Unity and the camera feed are captured (the Unity software is a driving simulator).

If I just use the raw video feed from Invisible then it's OK, I can already access this in Windows and could even integrate it with Unity directly. But for the eye-tracking data, I thought I needed to use the specific Pupil software to capture it. It seems the only way to have eye-tracking data in Windows then would be via the Python API?

user-f43a29 08 March, 2026, 15:27:41

And to clarify, you can run a recording in the app on the phone to get all data in a high-fidelity format that is immediately usable with all the tools in the Pupil Invisible ecosystem. You can then either upload it to Pupil Cloud and/or export it via USB cable to your PC. That should be easier in this case than trying to record data via the Real-time API and construct your own format.

user-f43a29 08 March, 2026, 15:24:06

Hi @user-ab65d6 , you should not need to capture the video feed from Pupil Invisible in that manner to achieve that goal. You can use the Marker Mapper Enrichment on Pupil Cloud to map gaze to the simulator surface. The Reference Image Mapper on Pupil Cloud can also be used to this end. If you do not use Pupil Cloud, then you can load the recordings into Pupil Player to do Surface Tracking. This will let you know where the wearer is looking in pixel coordinates on the monitor.

To synchronize the high-res Unity capture with the eyetracking data, you could freshly sync both devices to the same clock via NTP before each experiment session. So long as you save some Unity timestamps in UTC nanosecond format throughout the experiment, you can post-hoc sync the timelines. That is probably sufficient in your case, or how much time sync precision do you need?
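
As a sketch of that post-hoc alignment in Python (the Unity log columns here are hypothetical; the gaze export uses a "timestamp [ns]" column):

import pandas as pd

gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")
unity = pd.read_csv("unity_log.csv").sort_values("timestamp_ns")  # hypothetical log

# Match each gaze sample to the nearest Unity frame within 50 ms
merged = pd.merge_asof(
    gaze, unity,
    left_on="timestamp [ns]", right_on="timestamp_ns",
    direction="nearest", tolerance=50_000_000,  # ns
)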

Additionally, you can use Events for sync. Note that the Real-time API is language agnostic and can also be used from C# in Unity. You would only need to send an HTTP POST request from Unity to the Events endpoint.
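
From Python, for example, the simple client exposes events directly (a minimal sketch; the C# route would be the HTTP POST described above):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
# The event is timestamped on arrival at the phone and saved with the recording
device.send_event("trial_start")
device.close()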

user-ab65d6 08 March, 2026, 15:52:31

Thanks for these ideas! I'm just getting started with Pupil so wasn't aware of these features. Just so I'm clear, the Marker Mapper Enrichment and Reference Image Mapper both require the camera feed to be captured with the phone or via the Pupil API? They won't work after the test is done with just raw video data? (I guess not, as there is no metadata stored in that case.)

user-f43a29 08 March, 2026, 16:13:27

No problem. You caught me as I was at the computer 😉 To use those tools, you want to use the record button in the app on the phone. You also have 2 free hours of recording storage on Pupil Cloud, which you can manage as you see fit for analysis. Let us know how it goes and if you have any other questions.

user-ab65d6 08 March, 2026, 15:53:07

I don't need super-high precision, 50ms or so should be OK. For synchronising with the Unity logs, I'm thinking to just do it using UI events (eg something is displayed on the screen, or a timer runs on the screen) that the Pupil will also see in its feed, so I can sync the 2 timelines/logs later

user-ab65d6 08 March, 2026, 15:56:50

In any case it seems that capturing on the phone will be the best way to go, to get all the correct data and analyze later. I'll add the Marker Mappers to each corner of the TV display also. If I can get the hub to power the phone, this is prob the easiest way. I don't have that much time to spend on custom integration with APIs etc.

user-ab65d6 08 March, 2026, 16:04:08

I'll try this and let you know if any other questions come up. Many thanks for your support, especially on a Sunday! 🙂

user-f6ea66 18 March, 2026, 14:25:20

Hi, I have a question regarding the data output from Pupil Invisible recordings.

This is a sample of the fixation duration data from 1 participant doing a 60ish minute procedure where she's looking at a screen, doing a surgical simulation. Some of the "fixation durations" are about 25 seconds long (25000ms). That seems unrealistic for humans - how are the fixation durations calculated? Are they somehow "merged" - representing time intervals or so? Concatenated? Post-processed somehow? Inaccurate outliers?

I don't need to know the exact numeric details, but I need to know how these numbers are arrived at in principle, so that if reviewers of my paper comment that the numbers can't be right, I can explain the reason.

Data overview: (notice one fixation of 23000+)

"Df.fixation_duration.describe() count 1940.000000 mean 1383.844330 std 2342.983792 min 61.000000 25% 269.000000 50% 489.000000 75% 1237.000000 max 23305.000000 Name: fixation_duration, dtype: float64"

It is also evident in this plot (from a slightly different procedure):

user-f43a29 19 March, 2026, 08:46:25

Hi @user-f6ea66 , the parameters and procedure of the fixation detector are in our white paper. A high level overview of fixation detection is here and the details behind the algorithm are in this published Behavior Research Methods paper. All of this can be shared with reviewers.

To answer your questions:

  • The fixation duration is simply calculated as the difference between detected fixation start and detected fixation end, based on the output of the fixation detector.
  • The fixations are not merged. May I ask what you mean by "representing time intervals" with respect to "merging"?
  • The fixations are not concatenated.
  • The fixations are not post-processed. You are given the raw output of the detector.
  • Yes, you could have outliers.
  • A fixation ID is simply a number that helps distinguish one fixation from another. They increase in sequential order and yes, one ID is for one fixation.

The fixation is reported as being that long because the participant continued to hold their gaze in a way that satisfied all the parameters of the detector. You can simply filter out fixations that are over a duration threshold of your choice. We give you the full raw data so that you are free to process it as you see fit, rather than making broad decisions that would affect all users.
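
For example, a minimal pandas sketch of such a filter (the column name assumes a Pupil Cloud fixations.csv export; adapt it to your fixation_duration column):

import pandas as pd

fixations = pd.read_csv("fixations.csv")
# Drop fixations longer than a chosen threshold, e.g. 2000 ms
fixations = fixations[fixations["duration [ms]"] <= 2000]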

If you want us to investigate the recording, you can also share it with data@pupil-labs.com and we can give more direct feedback.

user-f6ea66 18 March, 2026, 14:25:27

Chat image

user-f6ea66 18 March, 2026, 14:26:32

I suppose another question can be: what exactly is a "fixation ID"? Just supposed to be a single fixation (so we get the duration of that single fixation) - if so, how could it be appr. 25000ms?

user-f6ea66 19 March, 2026, 11:39:47

Well, by this, "May I ask what you mean by 'representing time intervals' with respect to 'merging'?", I mean that, since the fixation durations seem unrealistically long, I guessed that they (fixation IDs) might each represent the sum of fixation durations in a time interval. Or perhaps several fixations within a very narrow field.

user-f43a29 19 March, 2026, 11:43:17

Ah, the fixation IDs are simply labels.

user-f6ea66 19 March, 2026, 11:42:57

Thanks for the answer. I took a look at your links. But may I ask, since you're the expert: From the overall sources I've found, it seems that the general consensus is that fixation durations over ~2000ms are probably artifacts or represent "dwell time" rather than fixation durations.

From your answer above, I guess you'd strongly disagree? In this case, people are working with surgical simulation displayed on screen, so they do have to focus a lot.

As an estimation, what is the approximate longest length you'd say a fixation duration would be?

user-f43a29 19 March, 2026, 11:45:35

Ah, I do not disagree. It is merely that a "fixation" in our pipeline is not a direct physiological measurement, but the result of an algorithmic classification applied to the gaze signal, as my colleague @user-d407c1 has put it.

The approximate longest length is dependent on task and the experimental conditions, and even on conventions in different fields. The published literature that is most relevant to your task will be worth reviewing in this respect.

user-f6ea66 19 March, 2026, 11:43:31

In general, and also from your knowledge about your software's calculations.

user-f6ea66 19 March, 2026, 11:44:29

E.g. in my one-person data in the plot above, plenty of the fixation durations are over 10,000ms - so I suppose that might be a lot of artifacts/outliers.

user-f43a29 19 March, 2026, 11:46:36

This could also be 3 or 4 quite close fixations that are below the thresholds, so not necessarily artifacts or outliers.

user-f6ea66 19 March, 2026, 11:44:51

But I might be totally wrong about the generally accepted "realistic" fixation durations.

user-f6ea66 19 March, 2026, 11:46:18

This is the simulator view (with some learning support)

Chat image

user-f6ea66 19 March, 2026, 11:46:30

(just in case it might make a difference)

user-f6ea66 19 March, 2026, 11:50:44

Ok - interesting.

So: "A โ€œfixationโ€ in our pipeline is not a direct physiological measurement, but the result of an algorithmic classification applied to the gaze signal, as my [email removed] / Miguel (Pupil Labs) has put it." 1. Where is this specific definition exactly, please? (sorry if it's somewhere in the links you provided

  2. So this: "This could also be two or 3 quite close fixations that are below the thresholds, so not necessarily artifacts or outliers." Hmm, so is this (in layman's terms) that, e.g., 3 fixations are so close together on the screen that the algorithm considers them the same fixation and therefore bundles/merges their length/duration?

user-d407c1 19 March, 2026, 11:57:46

Hi @user-f6ea66 👋 ! Please allow me to step in for my colleague @user-f43a29 . You're right to flag those long fixation durations, as they can look counterintuitive at first glance.

In principle, a "fixation" in our pipeline is not a direct physiological measurement, but the result of an algorithmic classification applied to the gaze signal.

Fixations are detected by grouping consecutive gaze samples that meet certain criteria; in the case of Pupil Invisible, via a velocity-based algorithm with optic-flow compensation.

You can find more info on these messages:

https://discord.com/channels/285728493612957698/1047111711230009405/1280892486360764416 https://discord.com/channels/285728493612957698/1047111711230009405/1281180585792114718

Or, as pointed out by my colleague, in our fixation detector whitepaper or the publication in Behavior Research Methods, which you can cite to address reviewers' concerns.

Because of this, long fixation durations (e.g. ~20–25 seconds) can occur if:

  • The gaze remains relatively stable over time (e.g. looking at a screen with minimal movement), and
  • The algorithm does not detect sufficient motion to segment that period into multiple fixations

So rather than being "merged" post hoc, these long fixations are effectively continuous segments that were never split by the fixation detection algorithm.

This can happen in tasks like surgical simulations or screen-based tasks, where:

  • visual attention is sustained on a small area
  • eye movements are subtle
  • micro-saccades may not cross the detection thresholds

user-d407c1 19 March, 2026, 11:58:41

As a result, what is labeled as a single "fixation" may actually correspond to a longer period of stable gaze behavior, rather than a classical short fixation in vision science terms, as you noted.

But feel free to open a ticket on 🛟 troubleshooting and share the recording so we can have a look.

user-d407c1 19 March, 2026, 12:03:28

That said, on Neon the velocity-based algorithm also leverages the IMU sensor. In the case of Pupil Invisible, there is no magnetometer, and it uses optic flow from the scene camera alone, which might be less precise on screen-based tasks.

In your use case, the image probably does not vary at all; in practical terms it might be like looking at a blank/empty wall.

What can you do to mitigate this? Well, our fixation detector is open-source, you can find it here, so you can tune the parameters to your liking (adjust the velocity threshold, for example). We use what we found to be the best parameters across a general dataset of different activities, but you may get better results in your specific application with other values.
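
To illustrate how such a threshold shapes segmentation (a toy sketch only, not the actual Pupil Labs implementation), a bare-bones velocity-threshold detector might look like this; lowering max_velocity splits long stable periods more aggressively:

import numpy as np

def segment_fixations(ts_s, x_deg, y_deg, max_velocity=30.0, min_duration=0.06):
    # Angular velocity between consecutive gaze samples (deg/s)
    velocity = np.hypot(np.diff(x_deg), np.diff(y_deg)) / np.diff(ts_s)
    slow = velocity < max_velocity

    fixations, start = [], None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = i  # candidate fixation begins
        elif not is_slow and start is not None:
            if ts_s[i] - ts_s[start] >= min_duration:
                fixations.append((ts_s[start], ts_s[i]))
            start = None
    if start is not None and ts_s[-1] - ts_s[start] >= min_duration:
        fixations.append((ts_s[start], ts_s[-1]))
    return fixations  # list of (start_s, end_s) tuples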

Alternatively, Pupil Core also has an open-source dispersion-based algorithm that you can try. As it is dispersion-based and not velocity-based, it might also yield better results on your screen-based task.

user-f6ea66 19 March, 2026, 12:05:49

Ok, thank you both.

That makes sense. But yeah, it does make it a little more complex/specific to your algorithm.

Btw, I already completed the data collection, and I think adjusting parameters is beyond my scope at this point 🙂

I wonder how that should/might influence how to correctly interpret the fixation durations (and mean fixations per second) from this data - perhaps compared to overall/general 'vision science' (you know better than I). E.g., how to interpret what fixation durations mean for cognitive load (which is the concern in my paper) - if the fixation durations here should be interpreted differently.

user-d407c1 19 March, 2026, 12:22:35

You could still try tweaking the params here and report the thresholds you used, or use a different fixation detector algorithm altogether as mentioned.

That said, if the data collection is already complete and reprocessing is out of scope, you are right to be cautious about interpreting those fixation durations in the same way you would in classical vision science and to avoid over-interpreting the absolute duration values, especially the long tail.

For the paper, the safest framing is probably to treat fixation duration as an operational metric of gaze stability within this pipeline, rather than as a direct measure of discrete physiological fixations in the classical sense, unless you tweak it to that sense.

So if you discuss cognitive load, I'd recommend framing it carefully:

  • shorter vs. longer fixation durations may still reflect differences in viewing behavior or attentional strategy within your dataset
  • but the absolute values should probably not be compared too directly to canonical fixation durations from vision science or other papers using different methods
  • and very long fixations should be acknowledged as likely reflecting the behavior of the segmentation algorithm in low-motion conditions

Similarly, for mean fixations per second, I'd interpret that as the rate at which the algorithm segmented stable gaze periods, rather than the true physiological fixation rate.

Note that you can also accompany this data with a report of blink rate.

For future studies, you may want to consider 👓 neon , especially for screen-based tasks, as it has better accuracy and more robust fixation detection, and it also offers more metrics, such as pupillometry.

user-f6ea66 19 March, 2026, 12:06:37

I'll try to open a ticket, and I'd appreciate it if you'd have a look, just to check if things look right.

user-f6ea66 19 March, 2026, 12:42:47

Thank you for the exceptionally thorough support!

user-ac7203 23 March, 2026, 16:12:15

Hi! I don't have access to my Pupil Labs Cloud. It says "Cloud Plan Expired". What should I do to restore access?

user-f43a29 23 March, 2026, 16:13:31

Hi @user-ac7203 , accessing Pupil Cloud and expiration of a Plan are independent and distinct. You should still be able to access your Workspaces if you use the same username and password as before, even if a Plan expires. If not, could you please open a Support Ticket in 🛟 troubleshooting ? Thanks.

user-ac7203 23 March, 2026, 16:17:12

Sorry, that's not fully clear to me. How can I open a Support ticket?

user-ac7203 23 March, 2026, 16:18:55

I was told that I don't have the right to write messages in the Troubleshooting channel.

user-f43a29 23 March, 2026, 16:19:37

Hi @user-ac7203 , no worries. You successfully opened a ticket here (https://discord.com/channels/285728493612957698/1485673902058242139/1485673903513665689). Click that link and we can continue the discussion there.

End of March archive