@kar, if I understand your goals, no
Hello! I'm using the Pupil Invisible for a pro and anti-saccade test. I've noticed that with the Pupil Core, it's possible to use markers (QR codes) to gather additional data during recordings. Is it possible to use these markers with the Pupil Invisible? I haven't found any information regarding this in the Pupil Invisible documentation, although there is some mention of it in the Pupil Core documentation (https://docs.pupil-labs.com/core/software/pupil-capture/).
Hi @user-3303bd 👋 Just to clarify, are you referring to using AprilTag markers to define a surface, such as the display where the pro and anti-saccade experiment is presented, in order to obtain gaze data mapped within that surface?
If so, Pupil Cloud offers the Marker Mapper enrichment to facilitate this. This enrichment detects the markers displayed in the Pupil Invisible scene camera video, allowing you to define surfaces and extract gaze and fixation data mapped within that surface.
Once the enrichment is completed, you also have the option to generate heatmaps and define areas of interest (AOIs) to gain further insights.
Hi! Yes, I'd like to use AprilTags. I'll look into the Marker Mapper on Pupil Cloud. Thanks for your response!
Hello! I'm using the realtime-python-api (https://github.com/pupil-labs/realtime-python-api). I can get gaze data, but I can't get IMU data. The realtime-python-api has a stream_imu.py example, but can't Pupil Invisible use it? A few years ago there was mention of a roadmap for IMU data acquisition. Are there still no plans?
Hi @user-35d5ed ! The IMU sensor data from Pupil Invisible is not streamed in a format that's compatible with our current realtime API, unlike Neon.
If you need real-time eye images or IMU data with Pupil Invisible, you'll have to use our legacy API. You can find more information on how to request the IMU sensor data here and access the legacy API documentation here.
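For reference, here's a minimal sketch of what IMU streaming can look like with the legacy NDSI-based API, modeled on the pyndsi examples. The sensor-type string and event keys here are assumptions; please check the legacy API docs for your version:

```python
import time
import ndsi  # legacy NDSI-based API package

SENSOR_TYPE = "imu"  # assumption: IMU sensors announce themselves with this type
sensors = {}

def on_sensor_event(sensor, event):
    print(sensor, event)

def on_network_event(network, event):
    # Attach to newly announced IMU sensors and enable streaming
    if event["subject"] == "attach" and event["sensor_type"] == SENSOR_TYPE:
        sensor = network.sensor(event["sensor_uuid"], callbacks=(on_sensor_event,))
        sensor.set_control_value("streaming", True)
        sensor.refresh_controls()
        sensors[event["sensor_uuid"]] = sensor
    elif event["subject"] == "detach" and event["sensor_uuid"] in sensors:
        sensors.pop(event["sensor_uuid"]).unlink()

network = ndsi.Network(formats={ndsi.DataFormat.V4}, callbacks=(on_network_event,))
network.start()
try:
    while True:
        while network.has_events:
            network.handle_event()
        for sensor in sensors.values():
            while sensor.has_notifications:
                sensor.handle_notification()
            for datum in sensor.fetch_data():
                print(datum)  # IMU samples as provided by the sensor
        time.sleep(0.05)
finally:
    network.stop()
```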
I understand that IMU data can't be obtained via the realtime API. Thank you!
Hi! Is there a way to do post-hoc gaze offset correction with data collected on Pupil Invisible without using Pupil Cloud?
Hi @user-d5a41b ! You can use this plugin in Pupil Player: Offset Correction Plugin. Alternatively, you could apply the offset correction manually to your files if you need specific adjustments or do not want to convert the files to Pupil Core format.
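For the manual route, here's a minimal sketch, assuming a Pupil Cloud gaze.csv export with 'gaze x [px]'/'gaze y [px]' columns and an offset you've determined from your own validation:

```python
import pandas as pd

OFFSET_X_PX, OFFSET_Y_PX = 15.0, -10.0  # placeholder offset in scene-camera pixels

gaze = pd.read_csv("gaze.csv")
gaze["gaze x [px]"] += OFFSET_X_PX  # column names as in Cloud timeseries exports
gaze["gaze y [px]"] += OFFSET_Y_PX
gaze.to_csv("gaze_offset_corrected.csv", index=False)
```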
Hi everyone,
I've recently started to integrate Pupil Invisible into our existing setup, where Psychtoolbox is running in Matlab to present stimuli and syncs to a Qualisys motion tracking system.
My main question is: what is the better/more accurate way to sync the Pupil Labs recording to the stimulus presentation and QTM motion recordings: send an event trigger from Matlab to Pupil Labs for the start and end of the stimulus presentation, OR use GetSecs('AllClocks') (http://psychtoolbox.org/docs/GetSecs-AllClocks) in Psychtoolbox to get the UTC timestamps on the lab computer for these moments and match afterwards?
I found a deprecated solution for triggering events from Matlab in PupilLabs (https://github.com/pupil-labs/realtime-matlab-experiment), which also hints towards a newer version (https://github.com/pupil-labs/pl-neon-matlab). There's a commented-out part in the code that loads AprilTags, which would be used for the Marker Mapper if I understand correctly. Unfortunately, the asset seems to be gone (https://docs.pupil-labs.com/assets/img/apriltags_tag36h11_0-23.37196546.jpg). Edit: I think the jpg files I was looking for can be found here: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/marker-mapper/
Thanks, Alex
Hi @user-7e9cde 👋 ! When working with multiple data sources (i.e. more than two), the easiest way to synchronize them is by using Lab Streaming Layer (LSL), provided the devices support it. In your case, this should work well:
Using LSL will ensure everything is aligned on the same master clock.
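As an illustration, pushing stimulus markers into LSL is only a few lines. Here's a sketch in Python (the stream name and marker strings are placeholders; liblsl also has MATLAB bindings you can call from Psychtoolbox):

```python
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(
    name="StimulusMarkers",   # hypothetical stream name
    type="Markers",
    channel_count=1,
    nominal_srate=0,          # irregular rate: samples only at events
    channel_format="string",
    source_id="psychtoolbox-markers",
)
outlet = StreamOutlet(info)

# Call at stimulus onset/offset; LSL timestamps the sample on the shared clock
outlet.push_sample(["stimulus_onset"])
```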
For using MATLAB with Pupil Invisible, please use the latest package: Pupil Labs Neon MATLAB. You’ll also find example code in the examples folder. And as for the Apriltags, instead of loading images statically, this package includes a marker generator, which is generally a better approach: Gaze Mapping Example.
Hi @user-d407c1, I am trying to use the latest examples from the Neon MATLAB package with our Invisible, as you suggested. The examples get stuck after "Searching for device... Device found!", meaning no recording is started. Control via the Monitor app works fine. Any advice on what the issue could be?
Thanks for the fast reply, Miguel. I'll check LSL out 👍
Hi @user-d407c1 Today, when I was recording data with the Pupil Invisible eye tracker, there was a problem saving the recording: the Invisible Companion app was not responding, and an error appeared saying a sensor had an error. Later, when I started and saved another recording, a message appeared saying that I had not saved the previous recording, and I then saved it. But the main issue is this: I uploaded the recording from the Companion app to Pupil Cloud, and the recorded duration shows 52 minutes, but when I play the video in the Companion app it is only 39.12 minutes. Also, the uploaded video on Pupil Cloud does not open. When I go to Downloads and try to download the Timeseries Data, it says the upload is not finished, but the Companion app shows the whole recording has been uploaded to the Cloud. Can you please help me find a solution? If any further information is required, please let me know.
Hi @user-787054 ! Let’s go through this step-by-step.
Multipart Recordings:
Sometimes, recordings are split into multiple parts. This can happen for a few reasons, such as reaching a certain file size limit or encountering an error that required the app to start a new file. This isn't an issue, as both Pupil Cloud and Pupil Player can automatically merge these parts and handle multipart recordings. However, the embedded player in the Companion app can only display the first part of a recording; it isn't intended for viewing whole recordings, but rather for a quick preview.
Sensor Error and Cloud Issues:
For the sensor error, as well as the issues with playback and downloading from Pupil Cloud, could you please open a ticket in the 🛟 troubleshooting channel? When you open the ticket, please include the recording ID so we can assist you more effectively.
Please, can someone from Pupil Labs help me? I recorded two videos yesterday. The Companion app shows that the recordings were uploaded to Pupil Cloud, and the recordings exist in Pupil Cloud, but they show as processing and I am unable to open them. Why is this happening?
Greetings, @user-787054! I see you've also sent us an email. I have responded there.
@nmt Thank you for your reply. The processing issue is now resolved.
@Neil (Pupil Labs) There was another issue for which I created a ticket. Could you please take a look and let me know whether that problem is resolved? @user-d407c1 replied there, but the issue is still not resolved.
Hello everyone, I have a question I'd like to ask. I have a Pupil Invisible device, but my phone is a Google Pixel 8(a). The app store indicates that the software is not compatible with my device. However, I tried installing the Invisible Companion app using an APK, but it keeps crashing. Is there any way to solve this issue? Thank you!
Hi @user-b63d4c! The Invisible Companion App is only compatible with certain models of phone. These are: OnePlus 6 (Android 8 and 9), and OnePlus 8 and 8T (Android 11).
If I flash the Google Pixel 8 to Android version 11, will it still be incompatible?
The phones listed in the docs are the only supported models - they have been carefully tested to ensure optimal compatibility.
Cool, thanks for the reply!
Hey! Some of our recordings are not fully uploaded to the cloud (according to the Cloud at least) but show up in the App as uploaded. Is there any way to force reupload them?
Hi @user-dc5cba ! Would you mind opening a ticket in 🛟 troubleshooting and share a recording ID for one of the affected recordings? This will help us investigate the issue more effectively
I created a ticket, thanks!
Are you using an institutional network? You can try to connect to the device directly using the IP address rather than relying on mDNS.
I've connected the phone and PC directly to the router via Ethernet. The device seems to connect fine, but controlling it seems to be an issue. Setup is: Windows 10 + MATLAB 2023b + Python 3.10
If I enter the IP address and port that I am using for the control/Monitor app, there's an error: "Cannot find device at given IP address and port."
Hi @user-7e9cde , I'm stepping in for @user-d407c1 . May I ask whether using Windows 10 with a direct Ethernet connection is the only option? mDNS support on that OS is reportedly incomplete. Windows 11, in contrast, has complete and tested mDNS support.
If you need to use Windows 10, then is it instead possible to connect your Neon via Ethernet to a router and then connect your Windows 10 computer via Ethernet to that same router? The router does not need to be connected to the Internet. Then, device discovery should work as expected and you will still have the low transmission latencies that Ethernet provides.
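Also, for reference, with our Python realtime API you can skip discovery entirely and pass the IP address and port shown in the Companion app directly (placeholder address below); the MATLAB wrapper exposes the same idea:

```python
from pupil_labs.realtime_api.simple import Device

# Replace with the address/port shown in the Companion app's streaming screen
device = Device(address="192.168.1.50", port="8080")
print(device.phone_name)
device.recording_start()
```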
Also, just to be sure, when it gets stuck, is there an error message?
Hi Rob, thanks for helping. I've updated the info above. Both devices are plugged into a local router (without internet access). I can access the live feed and control functions in the browser. I can also ping the phone's IP address and get a response.
When running the demo "basic_recording.m", all seems fine, as it says "Device found!". But then nothing happens.
When I try to use Device(ip,port), I get the error message that no device can be found.
Edit: I am using Pupil Labs Invisible
Thanks for the clarifications.
May I ask, after it shows "Device found!" and it is stuck, what does MATLAB show?
It shows "Busy"
Hi guys, sorry, but I'm new to using Pupil eyewear and to this world in general. I am currently working with the Pupil Invisible glasses and using the Companion app on the mobile device. I attempted to download the recorded data from the mobile device directly to my PC via USB cable and open the complete folder in Pupil Player. However, I was only able to view data from the Pupil Invisible device itself, without any information from the mobile device, such as blink and fixation detection.
Is it expected that blink and fixation data are unavailable in this direct-download approach? Would uploading the recorded data to the Pupil Cloud be necessary to access the complete dataset, including metrics like blink and fixation detection?
Hi @user-e28b58! It is expected that blink and fixation data are unavailable when recordings are directly exported from the Companion phone and loaded into Pupil Player. Pupil Player doesn't have these plugins for Invisible.
The easiest way to get blinks and fixations is to upload recordings to Pupil Cloud. That said, if you want to stay offline, you could also use a command-line tool called pl-rec-export, which generates blinks and fixations from recordings transferred directly from the phone. You can find it here: https://github.com/pupil-labs/pl-rec-export
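Usage is a single command pointed at the recording folder transferred from the phone (see the repo README for installation and current options):

```
pl-rec-export /path/to/recording
```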
Hi guys, we are using the Invisible eye glasses for a project at uni right now. We are new to this and our professor doesn't have a lot of experience either. We can't use the attached scene camera because we want to wear the glasses underneath ski goggles (the camera doesn't fit then), and we then want to map the field of view onto the ski goggles.
Now we are wondering, first of all, whether you think it is possible to map the gaze data onto a 2D picture of the ski goggles with Matlab (our teacher requested we use Matlab, and right now the gaze data is plotted in coordinates of the camera image, but the ski goggles do not have the same dimensions). Secondly, our professor would also like us to obtain the angle of each single eye, as the azimuth and elevation of the gaze point will be different from the azimuth and elevation of each single eye. Do we get single-eye angle information from the Invisible glasses?
Thanks in advance! Sarah from Vienna
Hey @user-001c52! In principle, what you're asking is possible. However, I'm unsure why you'd want to map gaze onto a 2D image of the ski goggles. Could you elaborate on your objective? This would help me provide more specific feedback. With Invisible, single eye vectors are not available; the 2D gaze point is generated from binocular eye image pairs. There are no data for independent eyes.
Hi @nmt! Our objective is to map the field of view/heatmap onto the ski goggles to show where the athlete is looking during skiing/snowboarding. Eventually, the goal is to know where the quality of the goggle lens is more important and where it is less important. Ideally, the product can then be adapted according to the data, i.e., which areas of the goggles need to be clear at all times to be safe during sports, while reducing the quality in the other areas, if that makes any sense. Alright, thank you, that already helps a lot!
Who do I contact regarding an issue I'm having with a Workspace on Pupil Cloud? I want to turn the 'Scene video upload' setting on, but it just says 'Contact us to change settings'. Thanks
Hi @user-86c9bb , please send an email to info@pupil-labs.com with the Workspace ID (found in the Pupil Cloud URL when it is loaded in your browser)
Hello everyone, I am having a problem with the face mapper enrichment. I want to apply it to a specific recording but it includes all recordings in the enrichment. Do you know what the problem is? Thank you!
Hi @user-fc5504!
If your project includes multiple recordings and you apply a Face Mapper enrichment using the default temporal selection of `recording.begin` and `recording.end`, the enrichment will be applied to all recordings containing these events. To apply the enrichment to a specific recording only, you can add two recording-specific events (e.g., `face_start`, `face_stop`) to that recording and use these events as the temporal selection when starting the enrichment. This way, the enrichment will be limited to the epoch marked by these events, and only this recording will be included.
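If you'd rather create those events at recording time instead of adding them afterwards in the Cloud player, here's a minimal sketch with our realtime API (the event names match the example above):

```python
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
device.send_event("face_start")  # mark the start of the epoch of interest
# ... run the face-to-face portion of the session ...
device.send_event("face_stop")   # mark the end
```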
Thank you very much Nadia!
Hello everyone. I am currently having problems uploading my recordings to the Pupil cloud. There is no problem with the data left on my phone, but when I check Pupil cloud, empty data is uploaded as shown in the image. Do you know how to solve this problem? Thank you.
Hi @user-13b643 , could you open a support ticket in 🛟 troubleshooting ?
Hi @user-f43a29 , ok, thanks for the reply.
Hello, is it possible to separate the recording into left eye and right eye, to see the difference between the two eyes?
Hi @user-741cd5 ! Pupil Invisible utilizes images from both eyes simultaneously to estimate gaze robustly. Monocular estimations are not available, and due to the eye camera angles, pupil detection and eye states cannot be robustly measured or offered.
If you'd like to analyze the eye cameras yourself, you can access them using the native recording format or the legacy API. More information is available here: Pupil Invisible Legacy API Documentation.
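As a starting point, here's a sketch for reading an eye video and its frame timestamps from the native recording format. It assumes the documented layout, i.e. an eye video file (e.g. `PI left v1 ps1.mp4`, or `.mjpeg` depending on the recording) next to a `.time` file holding one uint64 UTC timestamp in nanoseconds per frame:

```python
import cv2
import numpy as np

# Paths into the native recording folder (adjust names to your recording)
video_path = "PI left v1 ps1.mp4"
timestamps_ns = np.fromfile("PI left v1 ps1.time", dtype="<u8")  # ns since epoch

cap = cv2.VideoCapture(video_path)
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ts_ns = timestamps_ns[frame_index]
    frame_index += 1
    # analyze the eye image 'frame' together with its timestamp ts_ns here
cap.release()
```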
Some others have managed to perform estimations using alternative methods. For example, you might want to check out PISTOL by Fuhl et al. Here's the paper and the code repository.
Hey,
I think you can see the QR code on the computer screen in the attached image. I wonder where I can check whether the coordinates (in this fixations.csv file) are mapped to the 2D screen or not. That is, are the pixel coordinates in global (scene camera) coordinates or in computer screen coordinates?
Hi @user-a97d77 👋! To get the coordinates of fixations on the surface, you'll need to add the recordings you're interested in to a project and run the Marker Mapper enrichment.
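Once the enrichment has run, the Marker Mapper export gives coordinates normalized to the surface (0 to 1), which you can then scale to your screen resolution yourself. A rough sketch, assuming the enrichment's fixations.csv column names and a bottom-left origin (verify both against your export):

```python
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # placeholder: your display resolution

# fixations.csv from the Marker Mapper enrichment export (assumed column names)
fixations = pd.read_csv("fixations.csv")
x_px = fixations["fixation x [normalized]"] * SCREEN_W
# Flip y if your screen convention uses a top-left origin
y_px = (1 - fixations["fixation y [normalized]"]) * SCREEN_H
```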
Let me know if you have any questions or need further guidance!
Hi @user-d407c1 , I have a very important question and a request. In the AOIs, Pupil Cloud often reports fixations with durations well under 200 ms (for example 51, 60, 81, or 153 ms). Such fixation durations are pointless, as no information can be obtained during them. How does the algorithm for calculating fixations and fixation durations work? What is the lowest duration a fixation can have? The main problem is that some very short fixations pull the mean fixation duration down considerably, even when longer fixations are present. Please correct this in the algorithm so that the minimum fixation duration is 200 ms. I assume that users cannot change/adjust anything in this algorithm.
Hi @user-3c26e4! The minimum fixation duration for Pupil Invisible is 60 ms. The filter itself is adaptive, in that it uses optic flow to compensate for head movements, and includes event-based post-processing filters. You can read more about that in the Pupil Labs fixation detector whitepaper or our publication in Behavior Research Methods. Note that the filter is open source, so you can absolutely change/adjust the parameters in the algorithm and run it in an offline context. You can find it in this GitHub repo. Alternatively, you might choose to remove specific rows from the Cloud exports if that's feasible. Finally, if you would like to see user-modifiable thresholds in the Pupil Cloud UI, you could make a request in: https://discord.com/channels/285728493612957698/1203965124759519252
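In the meantime, removing short fixations from the export is straightforward; here's a sketch assuming the Cloud fixations.csv with its 'duration [ms]' column:

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")
# Keep only fixations of at least 200 ms, per your criterion
fixations = fixations[fixations["duration [ms]"] >= 200]
fixations.to_csv("fixations_min200ms.csv", index=False)
```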
Hi everyone, I am a human factors engineering student. I'm currently designing an experiment in Python that requires subjects to perform some mouse clicks on a computer. I also want to capture data from their eyes and be able to synchronize it with my mouse data. (For example, eye data should only start being captured after the subject presses the start button on the screen.) I have both Invisible and Core devices. I don't know which one is easier to program (since my English and programming skills are below average). It would be nice if there were documentation or a project about this! Thanks guys! I really need your help!🥹 👍 👀
Hi @user-74c615! What you describe certainly sounds possible with both Pupil Invisible and Pupil Core. But before making any recommendations, I'd like to learn a little more about your screen-based task and eye data. What kind of eye data exactly do you want to collect?
Edit: I see you've also posted in the 👁 core channel.
Thank you for your work. Pupil Invisible's eye camera MP4 video cannot be downloaded from Pupil Cloud. Do I need to load the Pupil Player format data into Pupil Player to get the eye camera MP4?
Hi @user-5c56d0 , if you open the ZIP file that you get after downloading the Pupil Player Format, do you see these `PI left/right ... .mp4` files? Those are the eye videos, and you can open them directly if you prefer.
Thank you for your reply. I see that the eye camera video exists separately for each eye, left and right, as shown in your image. I'm attaching my download folder, but it does not contain either of them. I'm uploading the following two folders:
Folder 1: Data recorded a week ago. I used the genuine cable.
Folder 2: Data recorded yesterday. I used a non-genuine cable (because I lost my genuine cable last week). ※The folders are at the following URL: https://drive.google.com/drive/folders/1jssq_N7SQ8o6dCE4yFTqs8_1G1h9iwWv?usp=sharing
Q2. I lost the genuine USB cable. Are cable specs available? If data is being stored, can the cable be considered appropriate (e.g., USB 2.0 or higher)? Note that Folder 1 above was recorded with the genuine cable, but there is still no eye camera video.
This is a screencap. The actual contents of the folder are in the url I sent you before.
Hi @user-5c56d0! I've looked at your recordings and they are intact. I have been able to play them just fine. Note that the eye videos are in `.mjpeg` format, e.g. `PI right v1 ps1.mjpeg`.
If you're struggling to play them back, you can load the recording into Pupil Player.
Most high-quality USB cables will work. This one is suitable, for example.