πŸ•Ά invisible


user-3c26e4 01 February, 2025, 11:23:39

Hi @user-f43a29 I can't access your explanations in the two different conversation threads under invisible anymore. Could you please turn them on again so I can read your suggestions? The Alpha Lab one was important, but unfortunately I didn't save it.

nmt 01 February, 2025, 14:00:46

Hi @user-3c26e4! If you hover over the invisible channel name with your mouse, the 'threads' in that channel will appear. If you click 'See All', you'll be able to find those you've been active in. More information about using threads here: https://support.discord.com/hc/en-us/articles/4403205878423-Threads-FAQ

user-3c26e4 01 February, 2025, 14:51:08

Oh yes, thanks a lot.

user-3857bf 05 February, 2025, 17:26:22

Hello. I am doing research for the Emotional Cities Project using the Invisible eye tracker. Because of a different set-up, the data was not uploaded to the server, so I could not automatically detect fixations and the other metrics. I have just the camera gaze coordinates and timestamps, plus the recording. Can you help with this matter? I have tried to use the repo available on the Pupil Labs website without success. Thank you in advance, Leonardo

user-f43a29 05 February, 2025, 19:08:47

Hi @user-3857bf , when you say "the recording", do you mean the native recording data (Pupil Player Format, as it is called) or do you mean only the scene camera MP4 file?

May I also ask which repo you are specifically referring to?

user-d407c1 06 February, 2025, 08:11:15

@user-df855f Although originally intended for Neon and egocentric cameras, this tutorial/code can serve as a reference on how to translate gaze from the Pupil Invisible scene camera onto a third-party camera, like the higher-frame-rate one you mention.

Kindly note that if your camera does not share a similar point of view, that code won't work. If that's the case, and depending on your system, you may want to sync the camera and Pupil Invisible using Lab Streaming Layer (LSL). You can use this third-party snippet to stream your high-frame-rate camera to LSL and our relay to stream Pupil Invisible data, and later sync based on timestamps.
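
A rough sketch of that timestamp-based sync (not an official Pupil Labs example, and untested) could look like the following with pylsl; the stream properties and names below are placeholders that you would adapt to whatever the camera snippet and the relay advertise.

```python
# Rough sketch: pull samples from two LSL streams and pair them via the shared LSL clock.
# Stream properties/names are placeholders; adjust them to your camera snippet and the
# Pupil Invisible LSL relay.
from pylsl import StreamInlet, resolve_byprop

gaze_info = resolve_byprop("type", "Gaze", timeout=10)[0]          # Pupil Invisible relay stream
cam_info = resolve_byprop("name", "HighFpsCamera", timeout=10)[0]  # hypothetical camera stream

gaze_inlet, cam_inlet = StreamInlet(gaze_info), StreamInlet(cam_info)

gaze_samples, cam_samples = [], []
for _ in range(1000):  # collect a short segment for illustration
    gaze, t_gaze = gaze_inlet.pull_sample(timeout=1.0)
    frame_id, t_cam = cam_inlet.pull_sample(timeout=1.0)
    if gaze is not None:
        gaze_samples.append((t_gaze, gaze))
    if frame_id is not None:
        cam_samples.append((t_cam, frame_id))

# Both timestamp lists are on the same LSL clock, so each gaze sample can later be
# matched offline to the camera frame with the nearest timestamp.
```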

Regarding post-hoc calibration: this is just a linear offset. There is a plugin for manual offset correction in Pupil Player, and it is also available in Cloud, if that's what you are looking for.

Lemme know if you need any clarification

user-df855f 06 February, 2025, 10:36:30

Thank you for the suggestion! I am going to look into Lab Streaming Layer. The other camera is recording the participant's movements, so the point of view is different. I already found the plugin and that works perfectly, so thank you!

user-3857bf 06 February, 2025, 13:53:43

Hi Rob. I am referring to this repo https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py for the fixation detector. As for the video, I only have the scene camera. Thanks

user-f43a29 06 February, 2025, 15:03:47

I see! Just to be sure, is it not possible to get the full original recording folder with all files? That will make things much easier than trying to extract and run the fixation detection code from the Pupil repository.

If not, then we can discuss alternative solutions.

user-3857bf 06 February, 2025, 21:06:13

Unfortunately, with the set-up used, I have just the streaming data, so basically what I mentioned previously. Yes please, I'd be keen to discuss alternative solutions. Thank you for your help

user-f43a29 07 February, 2025, 09:06:11

Alright!

So, first, you'll want to use the fixation detector from pl-rec-export:

  • The fixation detector in the Pupil Player software is designed for Pupil Core.
  • For Pupil Invisible, the fixation detector in pl-rec-export would be preferred here, as it accounts for the stabilizing eye movements that occur when a wearer fixates and makes a head motion while walking around.

However, pl-rec-export expects the data to be in the raw binary format, as provided by the Invisible Companion app.

Are you comfortable with using Python? If so, you could potentially convert your data into a pl-rec-export compatible format and try running the _process_fixations function on it. It only needs the scene video, gaze coordinates, and the corresponding timestamps for both of those files.

You do not need to extract that function into a separate file. You can use pl-rec-export like a normal Python package.
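
As a purely generic illustration (this is not the pl-rec-export format itself, which is discussed further down in this thread), a quick sanity check that your streamed gaze data and the scene MP4 cover the same time span could look like this; the file names and column names are hypothetical.

```python
# Hedged sketch: check that streamed gaze data and the scene video overlap in time
# before attempting a conversion. File names and column names are placeholders.
import cv2
import pandas as pd

gaze = pd.read_csv("gaze.csv")                # streamed gaze coordinates + timestamps
video = cv2.VideoCapture("scene_camera.mp4")  # the scene (world) camera recording

n_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
fps = video.get(cv2.CAP_PROP_FPS)
video.release()

video_duration_s = n_frames / fps
gaze_duration_s = (gaze["timestamp [ns]"].iloc[-1] - gaze["timestamp [ns]"].iloc[0]) / 1e9
print(f"scene video: {video_duration_s:.1f} s ({n_frames} frames at {fps:.1f} fps)")
print(f"gaze stream: {gaze_duration_s:.1f} s ({len(gaze)} samples)")
```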

user-3857bf 07 February, 2025, 15:45:21

Hi Rob, thank you for your help. Before doing the conversion, can you tell me what format the dataframe is expected to be in? Thanks

user-3857bf 07 February, 2025, 16:46:07

Also, I am trying to compute the optical flow correction that the script needs, but I am not sure whether it can be computed just from the recording or whether it requires other data.

user-f43a29 08 February, 2025, 00:42:34

Hi @user-3857bf , the optic flow is computed from the recording (i.e., the world MP4 video that you have). The _process_fixations function will do it for you automatically.

Once the data has been converted to a compatible format, you should not need to do anything more than run _process_fixations to get the fixation data. I will inform you about the proper format by Monday.
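
For intuition only, here is a hedged sketch of what grid-based Lucas-Kanade optic flow on a scene video looks like with OpenCV. This is not the pl-rec-export implementation, just the general idea; the video path is a placeholder.

```python
# Hedged sketch: grid-based Lucas-Kanade optic flow over a scene video with OpenCV.
# Track a regular grid of points between consecutive frames; the mean displacement
# of the grid approximates the global (camera/head) motion.
import cv2
import numpy as np

cap = cv2.VideoCapture("scene_camera.mp4")  # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Regular grid of points to track, one point every 64 px
ys, xs = np.mgrid[32:prev_gray.shape[0]:64, 32:prev_gray.shape[1]:64]
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, grid, None)
    good = status.ravel() == 1
    flows.append((new_pts[good] - grid[good]).reshape(-1, 2).mean(axis=0))
    prev_gray = gray
cap.release()
```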

user-3857bf 10 February, 2025, 11:20:25

Thank you Rob. Apparently the optical flow was not computed, as the "stream = WorldSensor" call was returning an empty object. However, I have been changing one of the functions, "get_global_grid_based_optic_flow_LK", using OpenCV. Looking forward to hearing from you. Thanks, Leonardo

user-f43a29 10 February, 2025, 13:29:17

Hi @user-3857bf , this sounds like the files first need to be in the proper format. I will do a quick test here today, to be sure I point you in the right direction, and then update you.

user-f43a29 10 February, 2025, 22:25:12

Hi @user-3857bf , give this a try, but please note that I cannot make a 100% guarantee.

It should hopefully serve as a headstart to get you to your final goal.

user-3c26e4 11 February, 2025, 11:28:37

Hi @user-f43a29 , can I delete all Events for all participants at once, rather than deleting them one by one?

user-f43a29 11 February, 2025, 11:40:25

Hi @user-3c26e4 , this is currently not possible in Pupil Cloud, but if you'd like to see it added, then feel free to make a πŸ’‘ features-requests !

user-b14b09 11 February, 2025, 14:08:58

Hi community, I am working on gaze transfer from the Pupil camera to a GoPro camera. I am trying a method using homography, but it's not working. If anyone has done this, please can you share the code with me at [email removed]

user-f43a29 12 February, 2025, 09:46:19

Hi @user-b14b09 , have you seen our Egocentric Camera Gaze Mapping guide? It was tested with a GoPro and is in principle applicable to Pupil Invisible recordings.
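
If it helps to prototype the idea, here is a heavily simplified, hedged sketch of the general homography approach (not the guide's actual pipeline, and untested): match features between a Pupil Invisible scene frame and the corresponding GoPro frame, estimate a homography, and project the gaze point through it. File names and the gaze coordinates are placeholders, and frame synchronisation and lens distortion are ignored.

```python
# Hedged sketch: map one gaze point from a Pupil Invisible scene frame onto a GoPro
# frame via a feature-based homography. Paths and the gaze point are placeholders.
import cv2
import numpy as np

scene = cv2.imread("invisible_scene_frame.png", cv2.IMREAD_GRAYSCALE)
gopro = cv2.imread("gopro_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(scene, None)
kp2, des2 = orb.detectAndCompute(gopro, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

gaze_scene = np.float32([[600.0, 400.0]]).reshape(-1, 1, 2)  # placeholder gaze in scene pixels
gaze_gopro = cv2.perspectiveTransform(gaze_scene, H)
print("gaze in GoPro frame:", gaze_gopro.ravel())
```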

user-35fbd7 14 February, 2025, 16:24:43

Hi! I have a recording with a length of 39 minutes, but when I imported it into iMotions, the recording has a length of only 9 minutes. How can I import the whole recording, or at least the second part? I see that there are 3 PI world files and the first one finishes at 9 minutes. The file PI world ps3 is the second part of my recording. Could you help me?

user-878a5a 17 February, 2025, 12:28:14

Hello everyone, I am trying to get the fixations to show on my Reference Image Mapper, but I can't seem to find them on Pupil Cloud. Can someone tell me where I can find them? Also, the metrics such as total fixations and time to first fixation, are those also available on the website?

user-f43a29 17 February, 2025, 14:25:21

Hi @user-878a5a , to see the results of your Reference Image Mapper Enrichment in real time, you can open it up and click the playback button for a selected recording.

To produce a plot, you then want to run a Visualization, such as an AOI Heatmap Visualization. You'll find these under the Visualization tab in the Project view.

The metrics, such as total fixation and time to first fixation, can then be visualized on Pupil Cloud, and you can also download the values in a CSV file.

Please let us know if that clears things up!

user-878a5a 18 February, 2025, 19:05:29

Hello everyone, I have the following question regarding an experiment. I am doing a navigation task inside a model that is being projected onto a wall. The person wearing the glasses will be somewhat stationary in terms of their distance from the wall, though head movements might be involved. My question is this: is it possible to use the Reference Image Mapper when the participant is only watching a projection on a wall? Or does it only work when the participant moves inside a physical space? I have already run a couple of tests, and the Reference Image Mapper gives me an error after many attempts. I am wondering if that is because the approach (projection on a wall) is simply wrong, or if there is a mistake in my post-processing. I would appreciate a conclusive answer about whether projections on a wall can be tracked at all. Thank you in advance.

nmt 19 February, 2025, 02:22:48

Hi @user-878a5a! In principle, what you describe can work with the Reference Image Mapper. I think it would be helpful if you could invite us to your workspace so that we can provide concrete feedback. Would that be possible? If so, please invite [email removed] to your workspace.

user-878a5a 19 February, 2025, 09:44:39

Thank you for your answer, Neil. I have invited the above-mentioned email; the latest video is the one I am talking about. Please let me know how I can assist, in terms of the reference image and screenshots.

nmt 20 February, 2025, 01:48:11

Thanks for inviting us to the workspace, @user-878a5a. It's very helpful to see the testing environment.

Things are a bit more complicated in this case. The projection shows a first-person view from an avatar navigating a virtual space, which is constantly changing as it’s being navigated. The Reference Image Mapper works well if there are sufficient static features in an environment. If this were a real space being navigated, it would work well. However, what you effectively have is the Invisible wearer standing in a physical space, looking into a virtual space, making direct mapping to the virtual space difficult.

It would, nevertheless, be possible to use the Reference Image Mapper to map gaze onto the projection wall itself. From there, you could potentially work into the virtual space. My question is: do you know the coordinates where the stimuli of interest will be presented on the 2D projection image?

user-269e8c 19 February, 2025, 10:40:42

Hi there, I have a question that might be basic, but I want to clarify. We're using Pupil Invisible and exploring the data with Pupil Cloud and its enrichments. I also noticed that there are two local tools, e.g., Pupil Player. If I transfer recordings via USB, I understand that Pupil Player is required. What are the advantages of using Pupil Capture and Pupil Player? Thanks!

user-d407c1 19 February, 2025, 14:52:16

Hi @user-269e8c πŸ‘‹ ! Pupil Capture is only meant to be used for data capture together with Pupil Core ( πŸ‘ core ). On the other hand, Pupil Player was initially written as a player to visualise, export, and analyse data from Pupil Core, but it later received support for loading Pupil Invisible recordings.

If you are simply looking to export the data locally into a tabular format like CSV files, I would rather recommend using our library pl-rec-export, as Pupil Player converts the recording into the format Pupil Core expects, which might complicate things if you are not used to it.

May I ask what you are trying to achieve?

user-269e8c 19 February, 2025, 16:13:01

Thank you, Miguel. We are analyzing data on people cooking in the kitchen, focusing on where participants look and interact.

  1. I've been experimenting with Pupil Cloud and find that analysing data with the Reference Image Mapper works quite well. What do you think? Maybe already enough for us.
  2. Also, I noticed the documentation mentions that offsets often occur when participants are very close to objects. This is relevant to our case, as participants will be near cooking utensils. Do you think we should be concerned about this?

user-d5a41b 19 February, 2025, 16:18:39

Hi! I am wondering if there is a way to run the pl-rec-export tool on data that has been opened with Pupil Player or Neon Player. I know that the folder structure is changed after opening files with these tools, but I wonder which steps I would need to take to make the folders compatible with pl-rec-export again. Thank you!

user-d407c1 19 February, 2025, 16:28:55

@user-d5a41b May I ask which one it was opened with? The changes vary.

user-d407c1 19 February, 2025, 16:28:07

@user-269e8c It all depends on your needs, but Cloud offers more enrichment and analysis tools than Pupil Player. If Cloud meets your requirements, there’s no need to use Pupil Player unless you prefer to.

Regarding offset correction, you can adjust it in Cloud for a specific distance. However, parallax error is unavoidable due to the scene camera positioning. If this is a significant issue, you may want to consider upgrading to Neon, which not only provides better accuracy but also eliminates parallax error.

user-d5a41b 19 February, 2025, 21:21:22

Both. I have Pupil Invisible data that was opened with Pupil Player and Neon data that was opened with Neon Player. But it seems like the export tool works fine with data from Neon Player?

user-d407c1 20 February, 2025, 07:30:11

We generally recommend saving a copy before opening or modifying files.

Neon Player creates a separate folder (neon_player) before conversion, as you noted, so it works out of the box.

However, Pupil Player directly converts the recording, making it more complicated to revert changes. All modifications made to the recording are described here. To restore the original recording, you would need to manually revert those changes.
If you happen to have a copy/backup on the phone, in Cloud, or elsewhere, it might be easier to get it that way.

user-b14b09 22 February, 2025, 05:37:04

Hi community, I have a problem at hand: when I download a video captured from Pupil Cloud and extract it, the video is not getting extracted. I am getting an error saying "There was an error while extracting".

user-480f4c 24 February, 2025, 07:29:18

Hi @user-b14b09! Could you try using 7-zip for unzipping the folder?

user-b14b09 22 February, 2025, 05:38:30

Chat image

user-17d0c0 23 February, 2025, 17:56:00

Subject: Inquiry About Calibration Issues with Pupil Invisible Eye Tracker

Dear Pupil Labs Support Team,

My name is Ilya Chapliev, and I am a student at Pavlov First Saint Petersburg State Medical University. I am part of the student scientific society in neurology at our university. Our department has a Pupil Invisible eye tracker produced by your company, and I am currently using it for my research. I have created a stimulus presentation using PsychoPy (displaying various points) specifically for this device.

However, I am experiencing a problem with calibration. After uploading the recorded video to the cloud and reviewing it, I notice that the fixation overlay does not align with the presented points. According to your website, the calibration is supposed to occur automatically. I have also seen that your website often refers to GitHub for potential programming-related solutions. Unfortunately, I am not very experienced in coding.

Could you please advise if there is an alternative way to resolve this calibration issue without relying heavily on programming? Is it possible to perform a manual or preliminary calibration to ensure accurate data?

I apologize for any inconvenience and thank you in advance for your assistance. I look forward to hearing from you.

Sincerely, Ilya Chapliev

user-7e9cde 27 February, 2025, 12:33:28

Hi Ilya, have you set the offset? I'm doing something similar: I added a fixation cross at the beginning of each trial and let subjects confirm via a key press when they are fixating the cross. The key press triggers an event in the recording, so I can check the offset again in the saved recording.
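
In case it is useful, a minimal sketch of that setup (untested; window settings and the event name are placeholders) using PsychoPy together with the Pupil Labs real-time API could look like this. It assumes the Companion app is already recording and reachable on the local network.

```python
# Hedged sketch: fixation cross confirmed by key press, logged as an event in the
# running recording via the Pupil Labs real-time API. Event name is a placeholder.
from psychopy import visual, event
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()                 # find the Companion device on the network
win = visual.Window(fullscr=True, color="grey", units="pix")
cross = visual.TextStim(win, text="+", height=60, color="white")

cross.draw()
win.flip()
event.waitKeys(keyList=["space"])              # participant confirms they are fixating
device.send_event("fixation_confirmed")        # timestamped event appears in the recording

win.close()
device.close()
```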

user-480f4c 24 February, 2025, 07:27:05

Hi @user-17d0c0!

Could you clarify what you mean by "the fixation overlay does not align with the presented points"? Are you referring to a scenario where participants were instructed to fixate on specific points on the screen, but their recorded fixations did not match those locations? If so, it would be helpful if you could share a screenshot or screen recording illustrating the issue.

Please note that Pupil Invisible does not require calibration. This is because gaze estimation with Pupil Invisible relies on a deep learning approach that provides calibration-free data that is robust to headset slippage and works in any environment. You can find more information about Pupil Invisible's gaze estimation in our documentation: https://docs.pupil-labs.com/invisible/data-collection/data-streams/#gaze

user-a578d9 24 February, 2025, 14:57:27

Hi. I have a question regarding reference image mapping when observing firearms units during their clearing drills. Specifically, I'm interested in whether the 360-degree perspective of an entire room could potentially cause any issues.

For example, how would reference image mapping work when the reference image is taken from the doorway looking into an entire room? Has anyone here worked with similar scenarios or have any insights on potential challenges and solutions?

Thanks in advance for your help!

user-480f4c 25 February, 2025, 10:46:44

Hi @user-a578d9! You can use the Reference Image Mapper for any area of interest that is static. If you use a reference image from the doorway looking into a room, then the scanning recording would need to capture this view.

If you haven't checked it out already, please refer to this guide on how to use multiple Reference Image Mapper enrichments to map gaze on different areas of interest as the person moves within an entire room: https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment

user-6d90d6 24 February, 2025, 14:59:48

Good afternoon, the images from the world cam of the Invisible are darker than they used to be, and it is not because of the cloudy days here in the Netherlands :D. Is there something I can do to make the images brighter?

user-480f4c 25 February, 2025, 10:51:43

Hi @user-6d90d6! Have you tried adjusting the exposure of the scene camera in the Settings of the Invisible Companion App? You could tweak the manual exposure and try to optimize that value for best exposure of your scene. Please also refer to this relevant message: https://discord.com/channels/285728493612957698/633564003846717444/1120999521799901256

user-6d90d6 18 March, 2025, 18:53:12

Thanks, that did the trick! Is there a possibility to correct a dark video that has already been made with the wrong exposure setting? I tried to find settings in Pupil Player, but could not find anything. Do you have a suggestion?

user-bc092c 27 February, 2025, 11:02:22

Hi community, I have outdoor measurements for a research project. However, some light rain is expected. How waterproof is the Pupil Invisible, and can I use it while measuring in light rain?

nmt 27 February, 2025, 11:48:03

Hi @user-bc092c! You'll want to ensure that your Invisible glasses remain dry. So no exposure to rain. Perhaps you can make use of a rain cap?

End of February archive