🕶 invisible


user-3c26e4 11 May, 2025, 08:50:03

Hi everyone, can I export recordings via USB and load them directly into Pupil Player when using the Invisible Glasses, without going through Pupil Cloud?

user-d407c1 11 May, 2025, 10:20:53

Hi @user-3c26e4 👋 ! Thanks for following up here. Yes! You can transfer recordings via USB without needing Cloud at all. Please find detailed instructions on how to transfer recordings via USB here. Similar instructions are also available for Neon and Neon Player here.

user-3c26e4 15 May, 2025, 11:37:52

Thank you very much, I will try it.

user-9e276a 15 May, 2025, 12:00:41

@user-f43a29 Hi, I am trying to use Pupil Invisible for a study in which a person's eye movements on a video need to be studied. There will be moving scenes, and the person has to look at the video. I have put AprilTags at the four corners of the curved projector screen on which the video is displayed. During data collection, the AprilTags sometimes go out of frame, and at other times they are not detected in Pupil Cloud during analysis. How can we fix this problem? Should we use smaller AprilTags? Or is it related to the lighting conditions?

Chat image

user-f43a29 15 May, 2025, 16:16:42

Hi @user-9e276a , thanks for providing the image. A few points of feedback:

  • AprilTag detection is more reliable when the markers are clearly visible, so you don't want to make them smaller. In your case, the size could probably be increased.
  • Surface detection is improved when you have more markers. You could probably hang 2 or 4 more AprilTags.
  • The bottom right AprilTag is curled and not fully taped. You want them all to be flat.
  • Yes, illumination plays a role. You want them all uniformly and well illuminated. The tag at the top left is perhaps too dark, and the tag at the top right is not uniformly illuminated (i.e., it is partly in dark shadow).

After improving those points, feel free to let us know how it works out for you!
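To get a rough sense of how tag size and viewing distance trade off, here is a minimal pinhole-camera sketch. The default resolution and field-of-view values below are assumptions for illustration, not the exact Invisible scene-camera specs; plug in your own camera parameters:

```python
import math

def tag_pixels(tag_size_m, distance_m, image_width_px=1088, hfov_deg=82.0):
    """Approximate on-image width (in pixels) of a flat tag viewed
    head-on, using a simple pinhole-camera model.

    image_width_px and hfov_deg are illustrative defaults, not
    guaranteed scene-camera specifications.
    """
    # Focal length in pixels from the horizontal field of view.
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    # Small-angle approximation: apparent size scales linearly
    # with physical size and inversely with distance.
    return tag_size_m / distance_m * focal_px
```

For example, doubling the printed tag size doubles its pixel footprint at the same distance, which is usually what makes the difference between a tag the detector finds reliably and one it misses.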

user-d5b3a8 16 May, 2025, 15:49:56

Hi everyone, I'm conducting a study using the Pupil Invisible with the aim of investigating how visual advertising stimuli — such as shop windows, political posters, or illuminated ads — influence our attention to safety elements and the act of crossing roads.

At the moment, I’m running some tests at a crosswalk to get familiar with the Pupil Cloud software, as this is my first time using this technology. However, I'm having some issues creating an enrichment for the pedestrian crossing light. I'm using an image along with the corresponding scene recording, but Pupil Cloud keeps giving me an error.

Has anyone experienced this before or knows how to solve it? Any help would be greatly appreciated!

Chat image

user-480f4c 16 May, 2025, 15:53:51

Hi @user-d5b3a8, may I ask which enrichment you are using? Is it the Reference Image Mapper? If you could also share more details on the error you get, that would help!

user-d5b3a8 16 May, 2025, 15:56:18

Sorry! I'm trying to use the reference image mapper and the error I got says the following: "The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image." I have tried this multiple times but still got the same error.

user-480f4c 16 May, 2025, 16:18:23

Thanks for sharing more details! Most likely the scanning recording and the reference image were not optimal, and this is why the mapping failed. Have you checked our documentation on scanning recording best practices: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/#scanning-best-practices ?

Is it possible that the scanning recording didn't fully comply with the recommendations outlined in our documentation?

user-d5b3a8 16 May, 2025, 16:45:51

Yes, I've been having trouble understanding which of the examples in that documentation I should follow. For example, in this case, should I upload a photo of the crossing light from the other side of the road, or one like the picture above? And what about the scanning video: should it be around 45 seconds, or more like 2 minutes?

nmt 16 May, 2025, 17:30:38

Hi @user-d5b3a8! Would you be able to invite us to your workspace so we can look at an example recording and provide concrete feedback? If so, please invite [email removed]

user-d5b3a8 16 May, 2025, 17:32:57

Surely, thank you so much! I just have to confirm with my supervisor first and I'll let you know.

user-d5b3a8 19 May, 2025, 14:25:59

@nmt hi! I've just invited you to my workspace. The project that I'm talking about is named "AdsAndSafety", and the enrichment in question is called "Semáforo".

user-480f4c 20 May, 2025, 18:59:57

Hi @user-d5b3a8! 👋🏼 Jumping in for Neil here. I've taken a look at your recordings and wanted to share a few suggestions:

  • Feature-rich context: While your scene does include the traffic light, it would benefit from incorporating a broader area. Effective mapping relies on varied visual features. Including more of the surrounding environment—like the nearby building, the “Escola de Condução” sign, or other static elements—can improve accuracy.

  • Improve scanning recording: The current scanning recording contains several moving elements (cars, pedestrians, and the flickering red light), which can negatively affect mapping. Try to capture a wider, more stable portion of the scene, and ensure your reference image reflects that same view. For best results, please follow the guidelines listed in our best practices documentation.

  • Events: Use Events to mark relevant segments. Some recordings don't show the traffic light at all. Including those in the enrichment process can slow things down unnecessarily. Using events to mark when the scene is actually visible to the participant can help streamline processing.

Lastly, I highly recommend checking out this tutorial, which walks through how to use the Reference Image Mapper enrichment effectively with Events.
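To illustrate the Events idea, here is a minimal sketch of trimming data to event-bounded segments. The event names and the flat timestamp/sample tuples are hypothetical placeholders, not the actual Pupil Cloud API:

```python
def select_between_events(samples, events, start_name, end_name):
    """Keep only samples whose timestamp falls inside any
    [start_name, end_name] event pair.

    samples: list of (timestamp, datum) tuples, e.g. gaze points.
    events:  list of (name, timestamp) tuples, sorted by time,
             with matching begin/end pairs.
    """
    starts = [t for n, t in events if n == start_name]
    ends = [t for n, t in events if n == end_name]
    kept = []
    for ts, datum in samples:
        # Keep the sample if it lies inside any begin/end interval.
        if any(s <= ts <= e for s, e in zip(starts, ends)):
            kept.append((ts, datum))
    return kept
```

Restricting the enrichment to segments where the traffic light is actually in view (marked with events such as a hypothetical "light.visible.begin"/"light.visible.end" pair) keeps processing time down and avoids mapping frames that cannot match the reference image.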

Let us know if you have any follow-up questions!

End of May archive