🕶 invisible


user-695fcf 01 February, 2022, 09:54:11

Hi! I've been running into some issues with the Template editor in the cloud for the past week or so. I can create new templates and edit recording names, but as soon as I try to add questions or headers, I get an Error 400. Nothing happens when I press either option to ignore or reload. If I keep ignoring it and fill in questions etc., the template never saves. I'm using Chrome as my browser, if that's relevant to the issue.

Chat image

user-53a8c4 01 February, 2022, 10:16:43

@user-695fcf thanks for reporting this issue, this has been fixed if you'd like to try again

user-695fcf 01 February, 2022, 10:18:14

Perfect, thank you!

user-0ee84d 01 February, 2022, 11:34:07

hello.. is there any plug-in available to replay recorded data and also send the frames over the network, so I can simulate real-time data streaming?

papr 01 February, 2022, 11:35:53

Hi, yes, check out https://github.com/psychopy/psychopy/pull/4216/#issuecomment-906379208

Edit 1: Note, the replay script replaces Pupil Capture altogether.
Edit 2: Note, this does not support streaming video (yet).
Edit 3: To support streaming video, these functions need to be adjusted to load video data on demand https://gist.github.com/papr/49f58a894364dd94b23c53e6bc6929d0#file-simulate_ipc_backend-py-L179-L202 and this code needs to be extended to send the raw image data https://gist.github.com/papr/49f58a894364dd94b23c53e6bc6929d0#file-simulate_ipc_backend-py-L43-L44

user-0ee84d 01 February, 2022, 11:50:58

@papr should I place this in the plugins folder?

papr 01 February, 2022, 11:51:15

No, it is an external Python script.

user-0ee84d 01 February, 2022, 11:54:14

should I pass the recorded dataset path to "capture_recording_loc"?

papr 01 February, 2022, 11:54:29

correct 👍

papr 01 February, 2022, 11:55:37

@user-0ee84d My apologies, I just realized that you might be interested in simulating the Pupil Invisible Companion network API, not the Pupil Capture network API. Is that correct?

user-0ee84d 01 February, 2022, 11:55:50

yes.. It is pupil invisible

papr 01 February, 2022, 11:56:49

In this case, the script is not of any use to you as it only simulates Pupil Capture's network API. 😕 There is no simulator of the Pupil Invisible network API.

user-0ee84d 01 February, 2022, 11:55:57

not the pupil core recording

user-0ee84d 01 February, 2022, 11:57:04

😦

user-0ee84d 01 February, 2022, 11:59:44

feature request: Please provide an option to replay recordings from both Core and Invisible to simulate the network API.

marc 01 February, 2022, 12:02:57

Hey @user-0ee84d! Thanks a lot for the feedback! Could you describe your use-case for this a bit so I can understand what you are going for? For example, why do you need to replay recordings rather than generating actual real-data? And why would you prefer this to run via the API rather than reading the recordings from disk directly?

user-0ee84d 01 February, 2022, 12:07:28

I record a dataset and use the recorded dataset to write my logic. Later I prefer to test the logic in real-time mode. I can't ask a person to wear the Invisible every time while I'm writing the logic. @marc

marc 01 February, 2022, 12:33:34

Thanks for sharing! What I understood is that you essentially need this as a tool to support you during development. You are writing an application that is using real-time data, but during implementation it is impractical for you to constantly generate actual real-time data for testing, and would thus like to be able to repeatedly replay pre-made recordings as real-time test data.

We have been working on a new version of the real-time API, which will be released soon. Writing a simulation for the new version might be very easy. We will discuss that internally!

user-0ee84d 01 February, 2022, 12:09:04

usually, I record hours of data and use them for testing. It heavily relies on the frames (I process these images) and other metadata such as timestamps, 2D gaze positions, etc.

user-0ee84d 01 February, 2022, 12:09:22

once it works offline, I test it with real people

user-0ee84d 01 February, 2022, 12:31:46

Either updating the Pupil Invisible Companion to simulate the network API, or updating Pupil Capture to simulate it, would really help.

papr 01 February, 2022, 12:32:24

How are you currently interacting with the realtime API? Do you use Python and pyndsi?

user-0ee84d 01 February, 2022, 12:33:17

I'm using pyndsi

user-0ee84d 01 February, 2022, 12:36:01

@marc exactly... That would make development a lot easier...

user-0ee84d 01 February, 2022, 12:36:04

thank you!... that would really help until the new version is released. I would be looking forward to it.

user-0ee84d 01 February, 2022, 13:54:21

I could read the recorded data. if I have to send an image to the client via pyndsi, how do I do so?

papr 01 February, 2022, 13:59:09

Doing this for pyndsi is very involved. I would much rather wait for the new API. The preview of the new Python client can be found here: https://pupil-labs-realtime-api.readthedocs.io/en/latest/examples/simple.html Building a mock-up client that reproduces the streaming functionality from a recording will be much easier for that than for pyndsi.

user-0ee84d 01 February, 2022, 13:55:49

i see that pyndsi is used to send the images while zmq is used to send the meta data

papr 01 February, 2022, 14:04:54

pyndsi is built on top of zmq. Internally, all messages are sent via zmq. But what these messages look like is a matter of the protocol, aka NDSI.

user-0ee84d 01 February, 2022, 14:05:25

can realtime-api be installed already with pip?

papr 01 February, 2022, 14:06:37

pip install git+https://github.com/papr/realtime-api.git should work. We have not published it to PyPI yet and the Companion app does not serve this API yet. In other words, you won't be able to test it yet.

user-0ee84d 01 February, 2022, 14:06:08

I see a few examples for receiving data. Are there any examples for sending data without having to use the device?

papr 01 February, 2022, 14:08:45

It would require new code that has the same interface/functions as the api linked above, but does not actually send anything over the network. It would just read data from disk while behaving the same as the actual api. Similar to pyndsi, it would be a very complex task to replicate the actual API including all network messaging. You would have to basically replicate the whole app.
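
To illustrate the idea only (this is not any actual Pupil Labs API), a toy stand-in could read an exported gaze file from disk and yield samples paced by their original timestamps; the class, method, and column names below are purely hypothetical:

```python
import time

import pandas as pd


class RecordedGazeSource:
    """Toy stand-in for a real-time gaze client (hypothetical interface).

    Reads an exported gaze file from disk and yields samples paced by their
    original timestamps, so downstream logic sees "real-time-like" data.
    """

    def __init__(self, gaze_csv):
        self.gaze = pd.read_csv(gaze_csv)

    def stream(self):
        ts = self.gaze["timestamp [ns]"].to_numpy() * 1e-9  # ns -> s
        start_wall, start_rec = time.time(), ts[0]
        for i, row in enumerate(self.gaze.itertuples(index=False)):
            # sleep until this sample is "due" relative to playback start
            due = start_wall + (ts[i] - start_rec)
            time.sleep(max(0.0, due - time.time()))
            yield row


for sample in RecordedGazeSource("gaze.csv").stream():
    pass  # feed `sample` into the same logic you would use on live data
```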

user-0ee84d 01 February, 2022, 14:09:33

I will use zmq for now

user-61fa47 02 February, 2022, 09:37:09

Good morning. We have a Pupil Invisible and a Pupil Core. For an experiment, we need to acquire data with these two devices at different times, but we need to synchronize their timestamps with the clock of the same PC. I know that a PC cannot act as a Companion device, so I'm interested in suggestions on how to manage this. Maybe the Companion device could be synchronized with a specific time server? Thank you very much

papr 02 February, 2022, 09:52:37

Hi 🙂 Pupil Invisible Companion uses the operating system's time synchronization feature (NTP) to record time in Unix epoch (time elapsed since Jan 1, 1970). For Pupil Core, you can either 1) use this plugin to adjust Pupil Capture's time base to Unix epoch, too https://gist.github.com/papr/87c4ab1f3b533510c4585fee6c8dd430, or 2) synchronize the recorded timestamps post hoc. See this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb

Both methods assume that the corresponding devices (Android phone and PC) are synchronized using NTP. You can read more about NTP here https://en.wikipedia.org/wiki/Network_Time_Protocol#Clock_synchronization_algorithm

Since you are not recording at the same time, I would assume that you do not need millisecond-accurate time sync, is that correct? In this case, the methods above should be sufficient for your use case.
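
A minimal sketch of the post-hoc approach (option 2), assuming a recent Pupil Capture recording whose info.player.json contains start_time_system_s and start_time_synced_s (older recordings store these values in info.csv; see the linked tutorial for details):

```python
import json

import numpy as np

# Compute the offset between Pupil time and Unix epoch, then apply it post hoc.
with open("recording/info.player.json") as f:
    info = json.load(f)

offset = info["start_time_system_s"] - info["start_time_synced_s"]

pupil_ts = np.load("recording/gaze_timestamps.npy")  # Pupil time, in seconds
unix_ts = pupil_ts + offset  # now comparable to Invisible's Unix timestamps
```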

user-0ee84d 02 February, 2022, 10:20:07

may I know when I can expect the initial release of the real-time API?

marc 02 February, 2022, 10:21:33

We are in the process of running final tests. Unless we find major issues, I expect we can release in the next 2-3 weeks.

user-0ee84d 02 February, 2022, 10:20:58

simulating the network API with zmq works, but not with the same latency as I would get from a real-time feed.

user-0ee84d 02 February, 2022, 10:21:40

while processing in real time, my code is absolutely devoid of lags, but when I simulate over zmq, it crashes after a while.

user-0ee84d 02 February, 2022, 10:21:54

thanks! i’ll be looking forward to it

user-61fa47 07 February, 2022, 08:23:53

Dear dr Paper, we are not recording at the same time, but we may need ms-accurate sync. In this case, what can we do to obtain this precision, in your opinion? Thank you for your answer

marc 07 February, 2022, 08:40:39

Immediately after NTP synchronization you usually have an error of ~0-20 ms. If that is not sufficient, you can estimate the remaining clock offset and compensate for it manually. See this section of the docs for more information: https://docs.pupil-labs.com/developer/invisible/#time-synchronization
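
A generic sketch of such an offset estimation (the echo transport itself is only a placeholder here; see the Time Synchronization section of the linked docs for the actual protocol):

```python
import time


def measure_offset(request_remote_time, n=50):
    """Median clock offset (remote - local) from repeated echo round-trips.

    `request_remote_time` is a placeholder callable that asks the Companion
    device for its current clock reading (in seconds) and returns it.
    """
    offsets = []
    for _ in range(n):
        t0 = time.time()
        t_remote = request_remote_time()
        t1 = time.time()
        # assume a symmetric network delay: the remote reading corresponds
        # to the midpoint of the local round-trip
        offsets.append(t_remote - (t0 + t1) / 2)
    offsets.sort()
    return offsets[len(offsets) // 2]
```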

user-bbf384 08 February, 2022, 02:07:42

Hello! I'm from Singapore and my lab recently purchased a pupil invisible which will be used in our experiments soon. I just had a question about the disinfecting procedure. It was mentioned on the website that: "The disinfectant we use is made from 22.0g Ethanol, 21.0g 2-Propanol, 8.0g 1-Propanol per 100 grams of solution. One such brand name that we use is "Vibasept"." However, we are unable to find its equivalent with a similar composition so I wanted to ask if it is safe to use ordinary alcohol wipes for the frames? If not, could you suggest some alternatives?

wrp 08 February, 2022, 07:13:08

Hi @user-bbf384 👋

You can use another brand of disinfecting wipe to clean the glasses.

Important note: Try to squeeze the wipes so they are almost dry before touching them to the glasses. The wipes should be moist but not wet. We do not want liquid to get inside the frames, which could damage the electronics (the electronics are not water resistant). Consider cleaning only the contact points of the glasses frames. Do not wipe the eye cameras, world camera, or any electronic contact points.

user-bbf384 08 February, 2022, 12:21:47

Hello! Thank you for your quick reply. May I know if using regular 70% alcohol wipes would be okay?

marc 08 February, 2022, 12:28:46

Yes, when keeping the notes @wrp mentioned in mind, this is okay!

user-bbf384 08 February, 2022, 12:29:11

Great! Thanks for the prompt replies :)

user-df9629 08 February, 2022, 16:35:27

Hi! Is it possible to use the Fixation Detector released with Pupil Cloud (v5.6) on prior recordings that are already in the cloud? If yes, can somebody please tell me how? Thanks!

marc 08 February, 2022, 16:39:23

Hi @user-df9629! This is possible, but only on demand via our support! Please send me a DM with the email address of your Pupil Cloud account and the recording IDs you would like to get fixations for. If it is many recordings, you can also send project IDs instead.

user-df9629 08 February, 2022, 16:46:11

Hello @marc ! Got it. I will create a project, add the recordings to it, and email the details ASAP. Thank you 🙂

user-98789c 10 February, 2022, 15:07:43

Hello, I need to know if it is possible to do an enrichment with the Marker Mapper on more than one recording? I need the average heatmap of multiple recordings.

marc 10 February, 2022, 15:09:00

Yes, you can calculate an enrichment on as many recordings as you want. The recordings just have to be within the same project and all contain the corresponding events you used to define the enrichment.

user-98789c 10 February, 2022, 15:12:01

I also need to know: for a recording with Invisible, do I get fixation durations in CSV format? I need them for my heatmap analysis.

marc 10 February, 2022, 15:12:39

Yes, fixation data is calculated on upload to Pupil Cloud and is available in CSV format as part of the raw data exporter as well as the marker mapper and reference image mapper

user-98789c 10 February, 2022, 15:14:12

perfect, thank you

user-98789c 10 February, 2022, 15:39:29

Is the heatmap generation script publicly available?

marc 11 February, 2022, 09:03:57

No, the script is not directly available. It is just using numpy.histogram2d to calculate a 2D histogram of the gaze points and cv.GaussianBlur to blur the result.
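
For reference, a minimal sketch along those lines (not the actual Cloud script; bin counts, blur sigma, and color map are arbitrary choices):

```python
import cv2
import numpy as np


def gaze_heatmap(gaze_x, gaze_y, width, height, bins=(36, 64), sigma=25):
    """2D histogram of gaze points (in image pixels), blurred into a heatmap."""
    hist, _, _ = np.histogram2d(
        gaze_y, gaze_x, bins=bins, range=[[0, height], [0, width]]
    )
    hist = cv2.resize(hist.astype(np.float32), (width, height))
    hist = cv2.GaussianBlur(hist, (0, 0), sigma)
    if hist.max() > 0:
        hist *= 255.0 / hist.max()
    return cv2.applyColorMap(hist.astype(np.uint8), cv2.COLORMAP_JET)
```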

user-98789c 10 February, 2022, 15:50:31

if yes, I would like to look at it now 🙂

user-a9b74c 10 February, 2022, 18:12:54

Hi, is it possible to show the fixation ID on the gaze overlay video? Or is there any way to connect the fixation data in the CSV to the video? Thank you!

marc 11 February, 2022, 09:10:31

Hi @user-a9b74c! The gaze overlay is currently only able to visualize the raw gaze signal and does not consider fixation data. Using the following script you could however generate your own fixation visualization, which includes IDs, based on a raw data export. The script shows a fixation scan path and uses optical flow to compensate for head movement: https://gist.github.com/marc-tonsen/d8e4cd7d1a322a8edcc7f1666961b2b9

user-e0a93f 10 February, 2022, 19:51:39

Hi 🙂 I saw that it is possible to correct for the fisheye effect using the iMotions export plugin for the Core, but not for the Invisible glasses. Is there a way to undistort the images from the Invisible glasses if we want our lines to be straight?

marc 11 February, 2022, 12:58:37

Hi @user-e0a93f! If you are only interested in undistorted scene video with gaze overlay, you can get this from the Gaze Overlay enrichment in Cloud. It has an option to undistort the scene video. The raw undistorted video is currently not directly available, but you could generate it yourself using this script: https://gist.github.com/marc-tonsen/0bff75f776da16c594ced1eef9fdd397
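
If you want to roll your own instead, a rough sketch with OpenCV (not the linked script; the video and intrinsics file names, JSON field names, and frame rate are assumptions, so check your raw download):

```python
import json

import cv2
import numpy as np

# Load the scene camera intrinsics shipped with the recording.
with open("scene_camera.json") as f:
    intrinsics = json.load(f)
K = np.array(intrinsics["camera_matrix"])
D = np.array(intrinsics["dist_coefs"])

cap = cv2.VideoCapture("PI world v1 ps1.mp4")  # scene video file name may differ
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    undistorted = cv2.undistort(frame, K, D)
    if writer is None:
        h, w = undistorted.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("world_undistorted.mp4", fourcc, 30, (w, h))
    writer.write(undistorted)
cap.release()
if writer is not None:
    writer.release()
```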

user-98789c 10 February, 2022, 23:25:03

  1. When recording with Invisible, what do fixation x [px] and fixation y [px] in fixations.csv refer to? Coordinates on the surface?
  2. When recording while a participant is staring at an image, how do I get the fixation coordinates exactly equal to image pixels?

marc 11 February, 2022, 09:05:57

The code here is doing pretty much the same thing: https://github.com/pupil-labs/surface-tracker/blob/master/src/surface_tracker/heatmap.py#L87

marc 11 February, 2022, 09:19:16

  1. If you are referring to the fixations.csv file from the Marker Mapper export, then yes! See the doc entry here for details: https://docs.pupil-labs.com/cloud/enrichments/#export-format

  2. You are already using the Marker Mapper, right? If you track the screen with it and have edited the surface to exactly coincide with the screen corners, then you are almost done. The mapped gaze on the screen surface is given in normalized coordinates, where the top left corner of the surface is defined as (0, 0) and the bottom right corner as (1, 1). Thus, you simply need to multiply those values by your screen resolution to convert to pixels.
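
A small sketch of that conversion, assuming a Marker Mapper fixations.csv export (the exact column names may differ in your export, so check the file first):

```python
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # your screen resolution

fix = pd.read_csv("fixations.csv")
fix["fixation x [screen px]"] = fix["fixation x [normalized]"] * SCREEN_W
fix["fixation y [screen px]"] = fix["fixation y [normalized]"] * SCREEN_H
```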

user-98789c 11 February, 2022, 11:24:32

thank you @marc 🙂

user-3e7550 15 February, 2022, 03:53:02

Hello from BKK, Thailand... We have had a Pupil Invisible since 2020. But we have a problem with inaccurate focus points, even though we use QR codes at the 4 corners in the first frame. We also have an issue with heatmap accuracy. Could you advise us?

marc 15 February, 2022, 08:25:39

Hi @user-3e7550! Could you share an example recording with [email removed] that demonstrates the inaccuracy?

user-3e7550 15 February, 2022, 08:27:42

Ok

user-3e7550 15 February, 2022, 09:26:52

@marc I sent you 2 sets of recordings from our device.

marc 15 February, 2022, 09:27:45

Thanks @user-3e7550, I'll take a look! I see you also reached out via email, we will respond to you there!

user-a98526 16 February, 2022, 08:29:04

Hi @marc @papr, I would like to ask whether Pupil Invisible can provide the information needed to calculate gaze velocity (e.g., eye images, or calculating gaze velocity via a regression method). I'm trying to process gaze data using Kalman filtering and I need measurements for position and velocity.

papr 16 February, 2022, 08:33:12

Yes, that is possible by running custom code on the exported data. You can adapt this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb For Invisible data, you need to replace steps 1 and 2 with code that converts the 2D gaze pixel locations into 3D viewing directions. Afterward, you can use the tutorial as is.
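
A rough sketch of what replacing steps 1 and 2 could look like (the camera matrix and distortion coefficients below are placeholders for your scene camera's calibration, and the gaze.csv column names are assumed from a Cloud raw export):

```python
import cv2
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
px = gaze[["gaze x [px]", "gaze y [px]"]].to_numpy(dtype=np.float64)
ts = gaze["timestamp [ns]"].to_numpy() * 1e-9  # ns -> s

# Placeholder intrinsics: replace with your scene camera's calibration.
K = np.array([[766.0, 0.0, 544.0], [0.0, 766.0, 540.0], [0.0, 0.0, 1.0]])
D = np.zeros(5)

# Steps 1-2 replacement: undistort and unproject pixels to unit viewing directions.
norm = cv2.undistortPoints(px.reshape(-1, 1, 2), K, D).reshape(-1, 2)
dirs = np.column_stack([norm, np.ones(len(norm))])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Angular velocity: angle between consecutive directions divided by the time step.
cos_a = np.clip(np.sum(dirs[1:] * dirs[:-1], axis=1), -1.0, 1.0)
velocity_deg_s = np.degrees(np.arccos(cos_a)) / np.diff(ts)
```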

user-a98526 16 February, 2022, 08:56:32

Thanks for your reply, but I still have some questions. What I need is pixel speed (pixel/sec), though there is no essential difference: the method in the tutorial calculates velocity from positions (pos_diff/ts_diff). What I want to try is to use other information (e.g., eye image differences) to calculate a position-independent gaze velocity. This velocity would then be used to filter the position.

user-a98526 16 February, 2022, 08:57:10

But as far as I know, pupil invisible doesn't provide eye images.

marc 16 February, 2022, 08:58:45

Eye images are provided as part of the raw binary data download (in Drive right-click a recording and select download)

marc 16 February, 2022, 09:01:19

If you want the speed in pixel/sec, can you simply take the position difference of neighboring frames and divide that by the difference in timestamps? The tutorial is doing the same thing but converting the pixels into degrees first.
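
In code, that would be something like (column names assumed from the Cloud raw export):

```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
dx = gaze["gaze x [px]"].diff()
dy = gaze["gaze y [px]"].diff()
dt = gaze["timestamp [ns]"].diff() * 1e-9  # ns -> s
speed_px_s = np.hypot(dx, dy) / dt
```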

user-a98526 16 February, 2022, 09:10:35

Yes, this can calculate gaze speed. What I want to do is to use other methods to calculate an additional gaze speed for comparison.

user-e40297 21 February, 2022, 15:55:10

Not sure whether this is an appropriate message for this forum. I'm new to the Pupil Invisible. We made some successful recordings, but since one of my colleagues used the Invisible, it doesn't display my gaze any more. I read an e-mail about an update that wasn't supposed to be installed? Anyone?

marc 21 February, 2022, 18:16:27

Hey @user-e40297! Android version 10 has a bug that makes it impossible for the app to access USB devices. You can check in the system settings which version is installed. Other than that, does your Pupil Invisible show up as connected when it is attached to the phone? The camera and eye icons at the top of the home screen would then be in color rather than gray.

user-e40297 21 February, 2022, 19:44:45

Hi, thanks for your reaction. It's Android version 11, security update 1-12-2021. The camera and eye icons are coloured (purple and green).

marc 22 February, 2022, 08:53:07

I am assuming you are using a OnePlus8 device, is that correct? In that case Android 11 is fine. The sensors connecting successfully is of course also a good sign. So the issue you see is that while you wear the glasses and open the live preview, no gaze is visualized? If you make a recording, do you see gaze in it? If not, could you please share an example recording showing this behavior with [email removed]

user-e40297 22 February, 2022, 11:20:57

I've sent a very small video by mail. Indeed, it is a OnePlus8 device.

user-cd3e5b 22 February, 2022, 14:30:54

Hello, I'm currently working on accessing gaze data in realtime, but I'm also interested in obtaining the gaze data wrt a surface in realtime. Is this doable? In other words, would it be feasible to use a script to determine a surface based on surface markers in realtime then extract the gaze data in realtime? If so, does anyone have some examples to point me in the right direction? Thank you.

marc 22 February, 2022, 14:44:54

Hi @user-cd3e5b! Yes, that is in principle possible. It's not supported super well, but you have two options:

1) You can use the real-time Surface Tracking in Pupil Capture: Using this plugin you stream Pupil Invisible to Pupil Capture: https://github.com/pupil-labs/pi_preview

There you can track surfaces in real-time using the Surface Tracker plugin, which makes its results available to the local network. See the available documentation here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking https://docs.pupil-labs.com/developer/core/overview/#surface-datum-format https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py

2) Integrate with the surface tracking source code: The code used for surface tracking is open source. The repo with the code used by the Marker Mapper in Pupil Cloud is available here: https://github.com/pupil-labs/surface-tracker

You could write a custom script that uses this in real time. The biggest issue is that this repo is mostly undocumented and you might have a hard time finding your way around it. It is on our todo list to make this repo easier to integrate with, but for the moment only this example exists: https://github.com/pupil-labs/surface-tracker/blob/master/examples/visualize_surface_in_image.ipynb
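
For option 1, a minimal sketch of receiving surface-mapped gaze from Pupil Capture's network API, in the spirit of the linked filter_gaze_on_surface.py helper (assumes Capture runs locally with the Surface Tracker enabled and a surface named "screen"):

```python
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.screen")  # surface name as defined in the Surface Tracker

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)
    for gaze in datum["gaze_on_surfaces"]:
        if gaze["on_surf"] and gaze["confidence"] > 0.6:
            print(gaze["norm_pos"])  # gaze in normalized surface coordinates
```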

user-cd3e5b 22 February, 2022, 14:46:53

Hello @marc , thank you so much for your response. This sounds like a great starting point!

user-562b5c 22 February, 2022, 21:38:05

Hi, is there a Companion app for Windows OS? Can I use, let's say, a Windows laptop to record instead of the provided smartphone? Thanks.

marc 22 February, 2022, 22:33:56

Hi @user-562b5c! No, a Companion app for Windows is not available. The phone needs to be connected to calculate gaze and make recordings. You can however stream the resulting data to a computer in real-time. See here for details on that: https://docs.pupil-labs.com/developer/invisible/#network-api

user-562b5c 22 February, 2022, 22:37:39

Thanks Marc, is calculate gaze a windows software?

user-562b5c 23 February, 2022, 05:12:39

Lol, I read your message again, sorry, I get it now.

End of February archive