🕶 invisible


user-7e5889 02 January, 2024, 10:45:12

I would like to know how long is the delay between sending a command and the start of recording when controlling Pupil Invisible via API ? Assuming that there is a time difference of about 1 second between the clocks of the computer that sends the command and the cell phone that receives the command, is the timestamp recorded in 'events.csv' downloaded from Pupil Cloud using the time of the cell phone or the time of the computer that sends the command?

nmt 03 January, 2024, 03:43:02

Hi @user-7e5889! It depends on how you use our API. For example, you can opt to send a string, and the event will be timestamped on the phone. Alternatively, you can send a timestamp from your computer. There are some code snippets that show how in this section of the docs.

Important note: the accuracy of the first option is bound by network latency, whilst the second is bound by any temporal offset between your computer and phone clocks. Luckily, there are methods to measure and account for these factors. I recommend reading our docs on achieving super-precise time sync!
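For reference, a minimal sketch of both options using the simple realtime API (pupil-labs-realtime-api package) might look like the following; treat it as a starting point rather than a drop-in script:

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Option 1: send only a name; the event is timestamped on the phone on arrival
device.send_event("stimulus onset")

# Option 2: supply a timestamp (Unix epoch, nanoseconds) taken on this computer
device.send_event("stimulus onset", event_timestamp_unix_ns=time.time_ns())

device.close()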

user-86eebf 05 January, 2024, 19:46:30

We are having trouble calibrating/finding instructions on calibrating the invisible PupilLab glasses. Can anyone help? 📣

nmt 07 January, 2024, 05:31:50

Hi @user-86eebf 👋. The Invisible system doesn't have a calibration procedure in the conventional sense, but it does have an offset correction feature. You can read more about that, and how to use it, in this section of the docs

user-e40297 06 January, 2024, 16:05:49

Hi there,

I'm confused... I'm trying to write a program that overlays an image on the world-camera recording of the Invisible or Neon (post processing), with the image position depending on gaze. I've used the snippet code provided to read the raw data. Somehow I see large differences when comparing the snippet using 'gaze ps1.raw' with the CSV coordinates provided after the Pupil Player export. I was hoping to use the raw data since it does not require a time-consuming export. I thought it used to work fine, but now it does not... Or is there any way to run a command using Python to obtain the gaze and IMU files?

user-d407c1 08 January, 2024, 08:40:49

Hi @user-e40297! Pupil Player does transform the timestamps to Pupil Time; this is a legacy conversion that keeps recordings compatible with Pupil Core.

You may want to use pl-rec-export, a CLI utility that exports raw data into CSV files, keeping the same structure and timestamps as Cloud does (without converting them). It also allows you to compute blinks and fixations offline.

Finally, this code shows an example of how you can work with both. In that case, the override flag basically makes some conversions so the data is compatible with the Pupil Player format.
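If it helps, here is a rough sketch of reading the raw gaze data directly from a recording folder. The assumed layout (float32 x/y pairs in 'gaze ps1.raw', int64 nanosecond UTC timestamps in 'gaze ps1.time') is based on the raw recording format, so please double-check it against your own recordings:

from pathlib import Path
import numpy as np

rec = Path("/path/to/recording")  # hypothetical recording folder

# Assumed layout: float32 (x, y) scene-camera pixels and int64 UTC timestamps [ns]
xy = np.fromfile(rec / "gaze ps1.raw", dtype="<f4").reshape(-1, 2)
ts = np.fromfile(rec / "gaze ps1.time", dtype="<i8")

print(f"{len(ts)} gaze samples")
print("first sample:", xy[0], "at", ts[0], "ns")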

user-e40297 08 January, 2024, 08:45:16

Thanks! I'll look into it

user-5fb5a4 08 January, 2024, 13:09:39

Hello. I have conducted a study with Pupil Invisible glasses. The participants were sitting in a simulator and completing a simulated excavation task. I have 3 AOIs in my experiment, and when I export the enrichment data I notice that a lot of data is missing. This is an example with raw fixation data merged with the data for the 3 AOIs in one file (n/a are missing values). The missing data also does not match between AOIs, meaning that the data for a given fixation may be present for one AOI while missing completely for another.

Chat image

user-5fb5a4 08 January, 2024, 13:10:22

To be noted is that I am using the marker mapper enrichment to define AOI

Chat image

user-5fb5a4 08 January, 2024, 13:21:46

Is it possible to have a better data export?

user-d407c1 08 January, 2024, 13:57:45

Hi @user-5fb5a4! Would you mind sharing whether you created the enrichment and then added the recordings to that project? Also, could you share the enrichment IDs so that we can investigate further?

user-5fb5a4 08 January, 2024, 14:02:20

Hello. I created the enrichments after adding the recordings

user-d407c1 09 January, 2024, 07:45:54

Hi @user-5fb5a4! A few things regarding the Marker Mapper enrichments that you shared. There is indeed missing data due to markers not being detected/recognised during some of the fixations/parts of the recordings. Therefore, if no surface is detected, it is normal not to get a surface coordinate.

Currently, NaNs are not ignored or replaced by empty strings. Moving forward, we are changing this so that gaze.csv shows an empty string if the gaze coordinates are NaN, and we will skip the row in fixations.csv if both fixation coordinates are empty, to match the format.

What does this mean for you? Simply ensure that the markers are properly recognised (e.g. good illumination and size, and avoid motion blur) and treat NaNs as markers not being recognised. So, for example, in your fixation ID 104, the screen surface is recognised and the fixation falls in the screen surface, but the controls surface is not found.
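As a rough illustration (not an official script), you could merge the per-surface fixation exports and treat NaNs/missing rows as "surface not detected". The column names below ("fixation id", "fixation detected on surface") and the file paths are assumptions based on the Marker Mapper export and may need adjusting to your files:

import pandas as pd

surfaces = {
    "screen": "screen/fixations.csv",      # hypothetical export paths
    "controls": "controls/fixations.csv",
}

merged = None
for name, path in surfaces.items():
    df = pd.read_csv(path)[["fixation id", "fixation detected on surface"]]
    df = df.rename(columns={"fixation detected on surface": f"on_{name}"})
    merged = df if merged is None else merged.merge(df, on="fixation id", how="outer")

# A missing value here simply means the surface's markers were not detected for that fixation
merged = merged.fillna(False)
print(merged.head())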

user-5fb5a4 08 January, 2024, 14:02:39

40f0b276-d7ce-45a6-85f3-c7f7a87dd5ad

user-5fb5a4 08 January, 2024, 14:02:50

62f70e95-e5fa-41ab-905e-b0cc12f63aab

user-5fb5a4 08 January, 2024, 14:02:59

d3abd7dc-4ada-4f47-b1c7-102c86f6a13d

user-22adbc 08 January, 2024, 16:21:24

Hello, I have a question about cloud upload. Is there any expected delay between upload time and the data shows up in cloud? Today, I uploaded data but can't see it yet in cloud.

user-480f4c 08 January, 2024, 16:28:54

Hi @user-22adbc 👋🏽 ! I have some follow-up questions:

  • Is "Cloud upload" enabled in your Settings and are you connected to wifi?
  • Does the recording appear as fully uploaded in the Recordings folder in the App? This should be indicated by a tick symbol (✔) for this recording
  • When did the recording upload start? Do these recordings appear as uploading on Pupil Cloud?
user-22adbc 08 January, 2024, 16:30:07

@user-480f4c Yes, Yes, and No. Interestingly, everything looks fine, and the videos uploaded at 2023-12-25_14:23:32 look just fine.

user-480f4c 08 January, 2024, 16:38:09

Do I understand correctly that the recordings do not appear as uploaded in the App? In that case, please follow the steps below:

  • Can you please try logging out and back into the Neon Companion app and let us know if this triggers the upload?
  • Please visit https://speedtest.cloud.pupil-labs.com/ on the phone's browser and execute the speedtest to ensure the phone can access the Pupil Cloud servers.
user-22adbc 08 January, 2024, 16:38:38

@user-480f4c It looks properly uploaded in the app, but I can't see it in the cloud.

user-480f4c 08 January, 2024, 16:44:37

Thanks for clarifying! I saw your DM - I'll reply there to request the recording IDs!

user-14536d 09 January, 2024, 01:01:51

Hello! Using the Invisible model for wayfinding and signage research, I often find that the gaze point will disappear for a second or two throughout the video recordings. Is there any way we can prevent this from happening?

user-d407c1 09 January, 2024, 07:48:58

Hi @user-14536d! Does it occur in Cloud or also on the phone? Could it be that you are observing blinks? Kindly note that there is now a little eye icon next to blinks that toggles the gaze visualisation on/off during blinks.

Chat image

user-2251c4 09 January, 2024, 13:09:46

Hi! I am running experiments with four pairs of Pupil Invisible glasses and I have managed to remote control one pair of glasses with the Pupil Invisible Monitor. I was wondering if it is somehow possible to remote control all pairs of glasses with the same computer?

user-d407c1 09 January, 2024, 13:44:47

Hi @user-2251c4! While it is possible to run multiple instances of the Monitor App (one per device) in different tabs and control them, each tab has access to only one device at a time (you can swap them, but not trigger them from the same tab). Also, I would not recommend this if you have a weak wifi connection.

The best way to achieve this is programmatically, using the realtime API.
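As a rough sketch (assuming the pupil-labs-realtime-api package), discovering and triggering several devices from one script could look like this:

from pupil_labs.realtime_api.simple import discover_devices

devices = discover_devices(search_duration_seconds=10.0)
print(f"Found {len(devices)} device(s)")

# Start a recording on every discovered device
recording_ids = {}
for device in devices:
    recording_ids[device.phone_name] = device.recording_start()

# ... run your experiment ...

# Stop and save on all devices, then release the connections
for device in devices:
    device.recording_stop_and_save()
    device.close()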

user-d407c1 09 January, 2024, 13:54:24

@user-2251c4 kindly note that I have edited my response to be more precise.

user-d9f5f9 10 January, 2024, 04:43:52

Hi all, I am using Pupil Invisible. Is it possible to do AprilTag detection offline, locally? I know the current process is to upload the data to Pupil Cloud, but I sometimes need to do post-hoc corrections on the gaze data, which I cannot do in Pupil Cloud. So I am exploring ways to do the correction offline on the raw data and then run AprilTag detection to map the data to coordinates relative to the surface.

nmt 10 January, 2024, 13:09:15

It's possible to work with AprilTag markers locally. One way is with Pupil Player, our free desktop software. Look at the 'Surface Tracker' plugin.

You can also do a post-hoc offset correction with Player, but you'll need to add a new plugin to enable this. The process of adding plugins is a bit involved, but manageable for most users. You can download the plugin from this GitHub gist. This section of the docs outlines how to add plugins.

One thing to bear in mind: Pupil Player does not compute Fixations for Invisible.
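If you prefer to script the marker detection yourself, a minimal hedged sketch with the pupil-apriltags package and OpenCV might look like this (the "tag36h11" family and the file name are assumptions; use whatever family your printed markers belong to):

import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")

frame = cv2.imread("scene_frame.png")            # hypothetical scene-video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # detector expects a grayscale image

for tag in detector.detect(gray):
    print(f"marker {tag.tag_id} at corners {tag.corners.tolist()}")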

user-4fcb6b 10 January, 2024, 14:09:46

We used the Pupil Invisible in an urban setting, having participants stand for several minutes at different public squares. Now we'd like to analyse where the participants were looking while standing in the public space (cars, pedestrians, buildings, green areas, etc.). I am wondering what the easiest approach to gather these data would be and would be very happy to get some insights on that.

user-480f4c 10 January, 2024, 14:18:42

Hi @user-4fcb6b 👋🏽! Have you checked our Pupil Cloud enrichments? They allow you to map gaze and fixations onto static features of the environment (see our Reference Image Mapper enrichment) or onto people's faces appearing in the scene (see our Face Mapper enrichment). I also recommend having a look at our Alpha Lab guides, which might be useful for your use case.

user-9894cd 10 January, 2024, 21:15:06

Hi! I'm working on combining the Invisible with a position tracker to calculate a full eye-gaze vector in 3D space. For this, I would need the yaw & pitch rotations for the eyes, ideally for each eye, but combined is also okay. How do you suggest I go about this? Can I reliably convert the gaze x & y coordinates I get through the streaming API into a yaw & pitch, or are there better ways?

user-d407c1 11 January, 2024, 13:06:19

Hi @user-9894cd! Are you looking for a realtime solution, or would post hoc work? And are azimuth and elevation what you are looking for?
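In case the conversion itself is what you're after, here is a rough sketch (not an official implementation) of turning a gaze point in scene-camera pixels into azimuth/elevation using the scene camera intrinsics. The intrinsics below are placeholders and should be replaced with the values from your recording's scene_camera.json:

import numpy as np
import cv2

# Placeholder intrinsics; use your recording's scene_camera.json values instead
camera_matrix = np.array([[790.0,   0.0, 560.0],
                          [  0.0, 790.0, 540.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(8)

def pixel_to_angles(x_px, y_px):
    # Undistort and unproject the pixel to a ray in scene-camera coordinates
    pt = np.array([[[x_px, y_px]]], dtype=np.float64)
    x, y = cv2.undistortPoints(pt, camera_matrix, dist_coeffs)[0, 0]
    ray = np.array([x, y, 1.0])
    # One common spherical convention: azimuth positive to the right,
    # elevation positive upwards (image y points down)
    azimuth = np.degrees(np.arctan2(ray[0], ray[2]))
    elevation = np.degrees(np.arctan2(-ray[1], np.hypot(ray[0], ray[2])))
    return azimuth, elevation

print(pixel_to_angles(560, 540))  # roughly (0.0, 0.0) at the principal point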

user-5ab4f5 11 January, 2024, 12:26:20

Hi, I am trying to create a program in which I start recording with the "space" key and stop it with the "x" key. Starting the recording works well, but stopping doesn't. Whenever I press "x" the pygame window stops responding, I can't exit the program without restarting the kernel, and the recording on Invisible Companion also isn't stopped.

The only problem VS Code shows me is: Import "pupil_labs.realtime_api.simple" could not be resolved and Import "psychopy" could not be resolved

I have no problem connecting to the device to start, though; it's only stopping that fails.

user-5ab4f5 11 January, 2024, 12:26:41

I can send the code too, if that helps

user-5ab4f5 11 January, 2024, 12:53:38

I noticed I generally have the problem that about 4 seconds after the recording is started, the program gets stuck on "not responding", and any key I press kills the program, so I have to restart the kernel to end it

user-d407c1 11 January, 2024, 13:03:27

Hi @user-5ab4f5! Without having a look at the code it is a bit hard to provide more specific feedback, but did you check the async version of our API? It allows for non-blocking operations, enabling concurrent execution, while non-async code executes sequentially and can block on IO-bound or long-running operations.

Also note that to start & stop recordings you could simply send a POST request to the corresponding endpoint.

like this:

import requests

# Replace YOUR_IP with the Companion phone's IP address shown in the app
url = "http://YOUR_IP:8080/api/recording:start"
response = requests.post(url)
user-5ab4f5 11 January, 2024, 13:13:09

This was basically my code. @user-d407c1 Do you see anything off about it? Otherwise I'll just try to rewrite it with the information you have given me and hope it works out

MusicEyeTrack_Execute.ipynb

user-d407c1 12 January, 2024, 07:44:12

Hi @user-5ab4f5! On a quick look, you call nest_asyncio(), which suggests that you are using asynchronous operations, but then you use the simple realtime API rather than the async version, and I do not see any async operations. Is this intended? Why don't you try a simple script that starts the recording, sleeps for 2 seconds, sends an event, and then stops and saves? This way you can deduce whether the issue lies in your code using the realtime API or elsewhere.
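Something along these lines (a minimal blocking sketch with the simple realtime API, no asyncio/pygame involved) should be enough for that sanity check:

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found")

recording_id = device.recording_start()
print("recording started:", recording_id)

time.sleep(2)
device.send_event("test event")

device.recording_stop_and_save()
device.close()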

user-af52b9 11 January, 2024, 16:08:52

Hi, when I tried to run the script on my home wifi, the device connected and worked as it should. But when I try to connect to the wifi at my friend's house (Maayan) or to a hotspot from my phone, it doesn't work and there is no connection to the device. On the other hand, when Maayan connects to the device from the wifi in her house, it does work for her.

** It is important to note that I made sure that the computer (on which I ran the script) and the mobile device were on the same wifi.

This is the relevant part of my code:
from pupil_labs.realtime_api.simple import discover_one_device

# Look for devices. Returns as soon as it has found the first device.
print("Looking for the next best device...")
device = discover_one_device(max_search_duration_seconds=15)
if device is None:
    print("No device found.")
    raise SystemExit(-1)

The output: No device found. 
nmt 12 January, 2024, 04:02:48

Hi @user-af52b9 👋. You might want to double-check there are no firewalls blocking the connections on those networks.

user-228c95 12 January, 2024, 07:27:54

Hello! Is there a field or article where I can learn in detail about pupillary data concepts, such as azimuth angle? I am trying to better understand exactly which data these concepts correspond to. If you have any information on this and can help, I would be very happy.

user-d407c1 12 January, 2024, 07:30:32

Hi @user-228c95! There is no pupillary data with Pupil Invisible; would you mind confirming that this is the eye tracker that you are using? Regarding azimuth and elevation, you can find some information here

user-5fb5a4 15 January, 2024, 10:21:41

Hello. When downloading the raw data from the cloud is there also a file with saccade data? I see fixation, gaze, and blinks data but not saccades

user-480f4c 15 January, 2024, 13:30:08

Hi @user-5fb5a4 👋🏽 ! Saccade data is not provided as a separate file yet. However, we are planning to add this information to our cloud download soon, but we don't have a concrete release date for that just yet.

In the meantime, please note that within Pupil Cloud there is a fixation detector. If you have an experimental setup that does not produce smooth pursuit movements, you could treat the inter-fixation intervals (the gaps between fixations) as saccades. Using the available data, you could then calculate saccade amplitudes and velocities.
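As a rough sketch of that calculation (column names such as "start timestamp [ns]" and "azimuth [deg]" are assumptions based on the Cloud fixations.csv, so double-check them against your export; it only makes sense if smooth pursuit is negligible):

import numpy as np
import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values("start timestamp [ns]")

# Gap between the end of one fixation and the start of the next, in seconds
dur_s = (fix["start timestamp [ns]"].values[1:] - fix["end timestamp [ns]"].values[:-1]) / 1e9
d_az = np.diff(fix["azimuth [deg]"].values)
d_el = np.diff(fix["elevation [deg]"].values)

amplitude_deg = np.hypot(d_az, d_el)      # angular distance between fixation centroids
velocity_deg_s = amplitude_deg / dur_s    # mean "saccade" velocity over the gap

print(pd.DataFrame({"amplitude [deg]": amplitude_deg,
                    "mean velocity [deg/s]": velocity_deg_s}).describe())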

user-5fb5a4 15 January, 2024, 13:45:29

hello, thank you for the response. What would happen in the case there are actually smooth pursuit movements? I guess that they are just counted as fixations by the algorithm

user-480f4c 16 January, 2024, 11:46:03

Our fixation algorithm detects fixations, but not smooth pursuit movements. See also this relevant post: https://discord.com/channels/285728493612957698/633564003846717444/1013737914015952978

user-c20bf6 17 January, 2024, 10:18:31

Is someone from the sales team available ?

user-d407c1 17 January, 2024, 10:20:28

Hi @user-c20bf6 ! You can direct your sales queries to sales@pupil-labs.com

user-c20bf6 17 January, 2024, 10:20:54

Thanks

user-dce61b 19 January, 2024, 10:50:57

Hi, we've been thinking of using the Invisible for a simulated driving study, but have strict restrictions about cloud data. Is there a way to directly integrate data from the capture in iMotions and then export it with the other sensors, as opposed to post-importing it into iMotions and linking it to the sensors? We currently have the Invisible set available, but are open to other models if something else is better. Thanks for the info!

nmt 22 January, 2024, 02:54:22

Hi @user-dce61b 👋. Local transfer of recordings from our Companion App to iMotions software should be possible, i.e. there's no strict requirement to upload them to Pupil Cloud beforehand. The iMotions documentation covers this in more detail 🙂.

user-e65190 21 January, 2024, 14:05:44

Hello, I am event-tagging some of my videos recorded with Invisible on Pupil Cloud, but when I seek through the video using the playhead, I keep getting this error message and the video stops buffering. I have to refresh the page, after which it works for two or three 'seeks' before the error message reappears and the whole cycle repeats. As a result, it's impossible to do the tagging efficiently.

Chat image

user-e65190 21 January, 2024, 14:06:04

Chat image

user-e65190 21 January, 2024, 14:06:41

I can confirm that this is only happening to one particular video in my project - other videos don't seem to have this problem. Can anyone give some pointers on how this can be overcome?

nmt 22 January, 2024, 03:06:48

Hey @user-e65190 👋. That's definitely not expected! Is this still happening with your recording? If so, can you please forward the recording ID to [email removed]? To find the ID, right-click on the recording and select 'View recording information'.

user-00729e 22 January, 2024, 14:13:38

Hi there. I am using the real-time API. I am trying to get the eyes video frame using device.receive_eyes_video_frame(). While I can get the scene video frame and gaze data, this does not return anything. Do you know why this could be?

user-d407c1 22 January, 2024, 14:25:47

Hi @user-00729e! Eye video streaming through the realtime API is only supported for Neon.

In Pupil Invisible, the eye cameras are positioned on the side and they are normally of not much use other than as input to our neural network. Since the view of the eyes is quite lateral, it is hard to get meaningful information from that view, and for this reason, the streaming of eye videos for Pupil Invisible was never migrated to the new realtime API.

That said, if you want to obtain eye images (although they might not be of much use, as I mentioned), you can still use the legacy API.

user-91d7b2 22 January, 2024, 20:34:04

when trying to create a reference image - what does "form not ready" mean regarding the recording?

user-d407c1 23 January, 2024, 09:13:39

Hi @user-91d7b2! This usually means that you tried clicking Run but either the name of the enrichment or one of the required fields (scanning recording, reference image, ...) was not ready. Were you able to create it? Or do you need further assistance?

user-1af4b6 23 January, 2024, 11:58:39

Hi, I'm currently trying to use "pl-dynamic-rim" on one of my videos and I get the error: No valid gaze data in RIM gaze data for recording ID 39cf919f-1785-403c-9b04-371a79ed2dbb

This issue seems to persist with another video as well, resulting in two corrupted videos out of a total of five. I have attached the gaze data (reference image) file for your reference.

gaze.rar

user-d407c1 23 January, 2024, 12:15:09

In that gaze file that you shared, you have the answer. There are no gaze points on the reference image for that recording.

If you filter by recording ID equal to that one, and the "gaze detected on the reference image" column equal to true, you will see there is no data.

What does this mean?

  • The reference image does not appear in that recording
  • Or, it appears, but you never gazed at it

If you inspect the recording in Cloud and find that the reference image is present and you did gaze at it, but the download says otherwise, please let us know.
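For example, a quick (unofficial) check with pandas; the exact column names ("recording id", "gaze detected in reference image") are assumptions based on the Reference Image Mapper export:

import pandas as pd

gaze = pd.read_csv("gaze.csv")
rec_id = "39cf919f-1785-403c-9b04-371a79ed2dbb"

# Keep only the samples of this recording that were mapped onto the reference image
on_image = gaze[(gaze["recording id"] == rec_id) &
                (gaze["gaze detected in reference image"] == True)]
print(f"{len(on_image)} gaze samples mapped onto the reference image")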

user-d407c1 24 January, 2024, 07:41:04

Hi @user-91d7b2! Could you please indicate the steps you follow? E.g. Create Enrichment - Reference Image Mapper - enter a name for the enrichment - select the scanning recording - upload the reference image and click Run?

Can you confirm it started after you clicked on run? Alternatively, you can invite me to your workspace to have a look.

user-91d7b2 24 January, 2024, 17:19:45

Yes, I will invite you to the workspace. How do I do that?

user-cc9ad6 24 January, 2024, 12:46:30

Hi, there is a problem using the Reference Image Mapper: it always shows 'running' and I have waited for more than 20 min, even though the video is very short (30 seconds). Could you tell me what's wrong?

Chat image

user-480f4c 24 January, 2024, 13:35:23

Hi @user-cc9ad6 👋🏽 ! This error is likely the consequence of a poor scanning recording. A good scanning recording needs to record the object of interest from all possible angles and from all distances a subject may view it. I highly recommend checking our scanning recording best practices. https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/#scanning-best-practices

user-cc9ad6 24 January, 2024, 12:50:51

In the end, it reports there is an error. Could you tell me how to fix it?

Chat image

user-8db87d 24 January, 2024, 22:22:15

Hey everyone. I am new to eye-tracking and Pupil labs. I have an idea for an upcoming thesis topic. We have the Pupil Invisible glasses in our lab. I would like to extract the pupil diameter as part of my data. However, as per the documentation, Invisible does not collect PD data directly. Does anyone know a way where it can be obtained from a video recording?

nmt 25 January, 2024, 02:38:09

Hi @user-8db87d 👋. Pupil Invisible does not provide data for pupillometry. The eye cameras are off-axis and for some gaze angles, the pupils of the wearer aren't even visible. This is not a problem for gaze estimation as higher-level features in the eye images are leveraged by the neural network. However, it does make it difficult to do pupillometry robustly.

user-5d2994 25 January, 2024, 00:29:10

Hi, I am also a new user of Invisible, and would like to ask about the meaning of each column in the fixations file.

user-cdcab0 25 January, 2024, 00:51:20

Hi, @user-5d2994 - welcome to the community! You can find descriptions of different fields in our online documentation. Let us know if you have other questions 🙂

user-5d2994 25 January, 2024, 00:29:52

What is the meaning of "azimuth[deg]" ,"elevation[deg]" ?

user-5d2994 25 January, 2024, 00:30:14

If there are any instruction documents that mention this, please let me know.

user-5d2994 25 January, 2024, 00:59:07

Thank you Dom. I appreciate it. Then, these degrees are measured from the observer's eye point, not from the previous fixation to the next fixation, am I right?

user-d407c1 25 January, 2024, 10:11:58

The azimuth and elevation are the positions of that specific fixation given in degrees, like the image attached here.

The origin point (observer in the image) is the scene camera rather than the eye point, and the "star" would be the fixation point.

Hope this clarifies it.

Chat image

user-3c26e4 25 January, 2024, 10:04:59

Hi @marc, could you please tell me which adapter I should use to be able to charge the OnePlus while I am measuring gaze behaviour with Invisible? I would need a USB-C to 2x USB-C adapter I guess, but what about the transfer rate? Thanks in advance.

user-d407c1 25 January, 2024, 10:07:28

Hi @user-cc9ad6! The scanning recording is a normal recording made with the glasses; the difference is that you do not need to be wearing them, it has to be under 3 minutes, and it is not used for the metrics. You can read more here

user-4c17e6 29 January, 2024, 16:18:45

Hi, I'm trying to use the enrichments. I uploaded the reference image and the scanning recording, and I chose the events where I want the enrichment, but after minutes of waiting the system gives me an error... I can't understand where I'm going wrong

user-480f4c 29 January, 2024, 16:22:37

Hi @user-4c17e6! Would you mind sharing the details of the error? Based on the image you shared, it seems that the enrichment is still running, but I can't see any errors

user-4c17e6 29 January, 2024, 16:21:51

....this is the situation

Chat image

user-4c17e6 29 January, 2024, 16:25:25

no particular message....the little wheel on the top left keeps on spinning around and after many minutes (maybe 15) it stops and I see the word Error

Chat image

user-480f4c 29 January, 2024, 16:37:59

@user-4c17e6 thanks for sharing more details about the error. This error is likely the consequence of a poor scanning recording and/or a suboptimal reference image. A good scanning recording needs to record the object of interest (in this case the painting) from all possible angles and from all distances a subject may view it. I highly recommend checking our scanning recording best practices. https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/#scanning-best-practices

user-4c17e6 29 January, 2024, 16:47:09

thanks for your reply and suggestion.....I'll try with new scanning recordings

user-e40297 30 January, 2024, 11:39:45

When looking again at the source, it is not mirrored. As I said, I'm confused... Maybe I'm just mistaken.

I'm trying to calculate it to the worldcam times... For the downloaded format from the cloud I used the time stamps of the worldcam

For the pupil player format I'm using the index of the worldcam.

I'm taking the average

user-d407c1 30 January, 2024, 12:01:38

I am not sure I follow what you intend to do. Do you simply want to compare Pupil Player vs what you get from Cloud? Or something else? As I mentioned, there are some transformations when loading the recording into Pupil Player, which would make a frame-by-frame comparison a bit trickier.

Please let me know how I can assist you.

user-e40297 30 January, 2024, 13:14:18

I'm trying to place an image over the world video. The location of the image is gaze dependent. In order to do so, as a first step, I'm trying to determine whether the gaze as calculated from the spreadsheet is identical to what I see happening in Cloud/Pupil Player. Somewhere I'm making an error

user-d407c1 30 January, 2024, 13:32:47

Have a look at this snippet to render over the scene camera using gaze. https://discord.com/channels/285728493612957698/1047111711230009405/1187427456534196364
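For completeness, a rough sketch of that idea (independent of the linked snippet) could look like this; the file and column names ("gaze.csv", "world_timestamps.csv", "timestamp [ns]", "gaze x [px]", "gaze y [px]", "scene_video.mp4") are assumptions based on the Cloud download:

import cv2
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].values

cap = cv2.VideoCapture("scene_video.mp4")
out = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      cap.get(cv2.CAP_PROP_FPS),
                      (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                       int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

gaze_ts = gaze["timestamp [ns]"].values
for ts in world_ts:
    ok, frame = cap.read()
    if not ok:
        break
    i = np.argmin(np.abs(gaze_ts - ts))  # nearest gaze sample in time
    gx, gy = gaze.loc[i, "gaze x [px]"], gaze.loc[i, "gaze y [px]"]
    if not (np.isnan(gx) or np.isnan(gy)):  # gaps (e.g. blinks) are skipped
        cv2.circle(frame, (int(gx), int(gy)), 20, (0, 0, 255), 3)
    out.write(frame)

cap.release()
out.release()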

user-cc9ad6 31 January, 2024, 13:13:22

Hi Nadia, could you tell me how to get the gaze position on the image after using the Reference Image Mapper? Currently I can only get the gaze position on the original video, not the image.

user-480f4c 31 January, 2024, 13:54:58

Hi @user-cc9ad6! Just to clarify, do you mean you'd like to export a video of the static reference image with the gaze points overlaid on the image? If so, this is not offered directly through Cloud.

However, you get all the csv files with gaze/fixation data along with the reference image within the Reference Image Mapper export, so you could create your own rendering to achieve that.

In case it's helpful, I encourage you to have a look at this tutorial that generates a scanpath on the reference image based on the fixation csv file.
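A stripped-down, hedged version of what that tutorial does could look like the following; the file and column names ("reference_image.jpeg", "fixation detected in reference image", "fixation x [px]", "fixation y [px]") are assumptions based on the Reference Image Mapper export:

import cv2
import pandas as pd

img = cv2.imread("reference_image.jpeg")
fix = pd.read_csv("fixations.csv")
fix = fix[fix["fixation detected in reference image"] == True]

# Draw one circle per fixation mapped onto the reference image
for _, row in fix.iterrows():
    center = (int(row["fixation x [px]"]), int(row["fixation y [px]"]))
    cv2.circle(img, center, 15, (0, 0, 255), 2)

cv2.imwrite("scanpath.png", img)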

End of January archive