invisible



user-cd03b7 01 November, 2022, 21:22:50

Also, if we didn't place surface marker mappers (the QR codes) in the corners of each surface of interest, is there a manual method to build a heatmap?

user-cd03b7 02 November, 2022, 00:06:28

Awesome, great to hear.

user-cd03b7 02 November, 2022, 00:07:42

Is there a pro/con to using the reference image mapping technique over the marker mapper technique (to develop heatmaps)? Is one method preferred over the other? iiuc they both seem to accomplish the same task, no?

marc 02 November, 2022, 07:58:03

They do indeed accomplish the same task. The Marker Mapper requires adding Markers to the scene though, which is not always desired. The Reference Image Mapper on the other hand works in natural scenes, but requires them to have a minimum level of visual features, so it is not applicable in every environment.

Our general recommendation would be to try the Reference Image Mapper first with some pilot recordings and fall back on the Marker Mapper if it is not successful.

user-ace7a4 02 November, 2022, 11:19:25

Chat image

papr 02 November, 2022, 12:14:56

Ah, I see from the chat history that you calculated these timestamps yourself. Make sure to always subtract the same value from all timestamps. Otherwise you end up with misaligned timelines. A good value would be the start time in the info.json file

user-ace7a4 02 November, 2022, 11:19:52

Chat image

papr 02 November, 2022, 12:12:41

What is the source of these files? As in, which program generated them?

user-ace7a4 02 November, 2022, 11:20:18

as can be seen here, the timestamps of both blink ids do not overlap. the first screenshot is from the blink.csv file, the second from the gaze.csv file.

user-2d66f7 02 November, 2022, 11:22:14

Hi! I tested the frequency of my data after upgrading the Companion app and found that it was even around 200Hz instead of 120Hz. Is this possible and accurate?

papr 02 November, 2022, 12:09:48

Yes, the device will run at the highest possible speed. But that does not always mean 200Hz. We advertise 120 Hz as a minimum bound.

user-ace7a4 02 November, 2022, 12:17:10

ahh I didn't even think of a mistake in the code. I will take a second look at it, thanks! I thought there had been a mistake somewhere in the recording.🫠

papr 02 November, 2022, 12:18:35

Both are valid reasons. You can easily check by validating the original timestamps. If they make sense, the error is likely in the transformation code

user-ace7a4 02 November, 2022, 12:18:54

I applied the same function to both blinks.csv and fixations.csv, and for the fixation file it turned out right; that's why I was confused.

papr 02 November, 2022, 12:19:40

Does the function take a start point in nanoseconds? Or do you just use the smallest available timestamp?

user-ace7a4 02 November, 2022, 12:21:05

It uses the start point in nanoseconds

papr 02 November, 2022, 12:21:46

Start point of what?

user-ace7a4 02 November, 2022, 12:22:21

I'll do an extensive look at the code and will then update you!

user-ace7a4 02 November, 2022, 12:39:56

okay so: First, all of the nanosecond values are transformed into seconds. Then, the very first "start timestamp" is subtracted from all the time values (in seconds) in the start timestamp column. After that, the duration [ms] column is also transformed into seconds. This duration in seconds is added to each respective start timestamp so the end timestamp is calculated. The same function is applied both to blinks.csv and fixations.csv. For fixations.csv it makes a lot of sense and matches the gaze dataframe, but for blinks it does not.

papr 02 November, 2022, 12:49:43

That assumes that the start timestamp is always the same. But if the first blink happens at 3.0 seconds and the first fixation at 0.0 seconds, then you will see a discrepancy. Also, to calculate the end timestamps, you can apply the same approach as before, i.e. subtract the same reference timestamp as for the start timestamps.

Also, note that due to how floats are usually implemented, it is better to subtract the nanosecond timestamp first and then convert the value to seconds. This preserves timestamp accuracy.

user-ace7a4 02 November, 2022, 12:54:58

Okay, maybe to clarify (in response to your first sentence): I applied it to the blink csv file and then to the fixation csv file, and saved it in two different variables. So basically x = function(blinks), y = function(fixations). Hence the start timestamp was not the same for both files. Or did I misunderstand you?

papr 02 November, 2022, 12:58:59

Your understanding is correct. You want x = function(blinks, start_time=T) and y = function(fixations, start_time=T). Use T within the function for the subtraction and ensure T has the same value for both function calls.

As mentioned above, a good value for T is the recording start time:

import json
import pathlib

path_to_json = pathlib.Path("<path to the recording's info.json>")
start_time_ns = json.loads(path_to_json.read_text())["start_time"]
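
For illustration, here is a minimal sketch of the normalisation papr describes above, reusing start_time_ns from the snippet; the column names "start timestamp [ns]" and "duration [ms]" are assumptions based on the Cloud export and may need adjusting:

import pandas as pd

def to_relative_seconds(df, start_time_ns):
    out = pd.DataFrame()
    # subtract in nanoseconds first, then convert to seconds (preserves precision)
    out["start [s]"] = (df["start timestamp [ns]"] - start_time_ns) / 1e9
    out["end [s]"] = out["start [s]"] + df["duration [ms]"] / 1e3
    return out

blinks = to_relative_seconds(pd.read_csv("blinks.csv"), start_time_ns)
fixations = to_relative_seconds(pd.read_csv("fixations.csv"), start_time_ns)
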
user-ace7a4 02 November, 2022, 12:55:25

Thanks for the tip on first subtracting and then converting!

user-ace7a4 02 November, 2022, 13:02:29

Ahhh! Okay, this makes a lot of sense! So it probably was just coincidence that it worked so well with the fixations?

papr 02 November, 2022, 13:03:27

Coincidence in the sense that the gaze data starts with a fixation, yes. Also, fixations are more frequent than blinks, making it more likely for it to work out.

user-ace7a4 02 November, 2022, 13:14:13

thank you very much. this helped me tremendously

papr 02 November, 2022, 13:14:21

Happy to help!

user-ace7a4 02 November, 2022, 13:14:28

appreciate the support you guys are doing a lot!

user-f1fcca 03 November, 2022, 10:28:59

Hi! I'm trying to get the video with the gaze into Lab Recorder, in real time while streaming, but there is no video option for Pupil Labs in Lab Recorder; it shows only pupil_invisible_Event & pupil_invisible_Gaze. I have tried with and without starting a recording from the monitor app. Is there any way to synchronize it through Lab Recorder? Thanks in advance!

papr 03 November, 2022, 11:30:44

Hi, streaming the video to LSL is not supported. Mostly due to a lack of software that would be able to play that back.

user-f1fcca 03 November, 2022, 11:20:36

Or in other words, while recording I could create an event (timestamp), retrieve it through LSL, and then synchronize the other data streams through this timestamp?

papr 03 November, 2022, 11:31:36

That's exactly what the LSL relay does for you! See https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html

user-f1fcca 03 November, 2022, 12:15:53

In Pupil Cloud, in the project editor, I created a 23 sec video with 2 events. In the middle of events 1 and 2 there is an annotation lsl.time_sync.66. What does this mean? And why is it at that specific time?

papr 03 November, 2022, 12:18:43

Those events are generated by the relay and are used for the post-hoc sync linked above. The frequency of those events is configurable via the LSL relay command-line arguments. The exact time of the event only matters for the relay.

user-f1fcca 03 November, 2022, 13:19:16

Thank you so much! For sure more questions will come up in the future!

user-f1fcca 04 November, 2022, 14:56:43

Hi again! I'm trying to verify the values of the raw data export from Pupil Cloud against the values from the Lab Recorder. I saw that the csv file from the cloud is rounding all the gaze x and y values, while the Lab Recorder isn't rounding at all. That means the Lab Recorder gives me a number like 672274658203125 and the csv file gives 672275000000000. That makes a difference when I try to identify the exact start point of the recording through the Companion app and the identical part inside the Lab Recorder xdf file. Is there a way I can get the unrounded official gaze values through the cloud? Thanks in advance!

marc 04 November, 2022, 15:09:15

Hi @user-f1fcca! The timestamps exported with the raw data exporter should be unix timestamps in nanoseconds. The value 672275000000000 is much too small of a number to fit though, so something is going on here. Are you doing any processing of the timestamps coming from the raw data exporter, or are those the raw values in the CSV file? Could you let us know the recording ID of a recording that is showing such timestamps and give us permission to inspect the CSV files to investigate? Also, in theory there should not be any rounding of this magnitude, which would essentially round the timestamps to seconds.

user-f1fcca 04 November, 2022, 15:49:21

Hi @marc ! I'm not doing any processing, at least none that I'm aware of. I just open the gaze.csv file first with Excel to see if everything is all right there, and then I read it with matrixread in Matlab.

papr 04 November, 2022, 15:51:26

Please note that LSL time != Pupil time. LSL records timestamps in seconds relative to an arbitrary clock start. Pupil time is in nanoseconds since the Unix epoch, as Marc said already.

user-f1fcca 04 November, 2022, 15:51:02

These values are not timestamps; these are the values that the gaze.csv file gives in the gaze x [px] and gaze y [px] columns.

papr 04 November, 2022, 15:52:44

Ah, gaze locations are given in floating-point pixels. Depending on your language setting, the float values might be misinterpreted.

user-f1fcca 04 November, 2022, 15:54:50

True, there is a difference in Greek between the comma (,) and the dot (.), but what program is making this setting difference? Excel by default? Also, why doesn't Matlab do it?

papr 04 November, 2022, 15:57:04

I think Excel tries to be smart about it but fails. Matlab just uses the internal float string interpretation. I think you can tell Excel to use that, too, but I don't know how.
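
As a practical workaround, reading the CSV programmatically sidesteps the locale issue entirely; a minimal sketch with pandas (the column names are taken from the export and may need adjusting):

import pandas as pd

# pandas parses the dot-decimal floats regardless of the system locale
gaze = pd.read_csv("gaze.csv")
print(gaze[["gaze x [px]", "gaze y [px]"]].head())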

user-f1fcca 04 November, 2022, 15:57:59

Ah alright! Thanks again guys !!

user-f1fcca 07 November, 2022, 12:47:13

Hi again! I have a question: the Lab Recorder gives me slightly different values than the csv file from the Pupil Cloud raw data, not only the rounding issue but a difference of 4 or 5 units; for example, the Lab Recorder gives 504.5231 and the csv gives 501.8492. Do we have any idea why? I'm speaking about the gaze coordinates on the x axis!

papr 07 November, 2022, 12:51:36

Hi, am I assuming correctly that you are comparing rows that have the same timestamp?

user-f1fcca 07 November, 2022, 13:06:13

Chat image

papr 07 November, 2022, 13:09:07

Note that Pupil Cloud reprocesses the eye videos with their full temporal resolution, yielding more gaze data than what can be calculated in real time.

user-f1fcca 07 November, 2022, 13:06:39

the lab recording and the cloud have different time

papr 07 November, 2022, 13:07:42

That is expected. In the above message, how did you determine that the two values were coming from the same gaze datum?

user-f1fcca 07 November, 2022, 13:09:34

For example, in this pic I show different peaks; the time series are very, very close to each other but not identical.

papr 07 November, 2022, 13:09:58

Unless you align the timelines, you might not be comparing the same gaze points

papr 07 November, 2022, 13:11:22

See this guide to align the timelines https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html

user-f1fcca 07 November, 2022, 13:14:38

Okay, so if the timelines are aligned, do the points from the Lab Recorder and Pupil Cloud have exactly the same values?

papr 07 November, 2022, 13:16:20

To clarify: LabRecorder can only record a subset of the gaze datums compared to Cloud. This subset should have the same gaze locations in LabRecorder and Cloud, yes.
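
For reference, a minimal sketch of how one might compare the two sources after aligning the timelines, matching each LSL sample to the nearest Cloud sample by timestamp; the file names and column labels are assumptions:

import pandas as pd

# hypothetical inputs: the Cloud gaze export and the LSL gaze already converted
# to the same clock/units via the time-alignment guide linked above
cloud = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")
lsl = pd.read_csv("lsl_gaze_aligned.csv").sort_values("timestamp [ns]")

# match every LSL sample to the nearest Cloud sample in time
matched = pd.merge_asof(lsl, cloud, on="timestamp [ns]",
                        direction="nearest", suffixes=("_lsl", "_cloud"))
print((matched["gaze x [px]_lsl"] - matched["gaze x [px]_cloud"]).abs().describe())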

user-f1fcca 07 November, 2022, 13:15:31

Just for curiosity Im asking

user-f1fcca 07 November, 2022, 13:18:07

Okey thanks a lot!

user-f1fcca 07 November, 2022, 13:41:14

In the terminal, when I try to run the command lsl_relay_time_alignment --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg --C:\Users\Nick\Desktop\Synchronize pupil recorder with lab recoder\2022-11-04_11-42-27-new, the terminal says: 'lsl_relay_time_alignment' is not recognized as an internal or external command, operable program or batch file.

user-f1fcca 07 November, 2022, 13:42:51

I have installed the dependencies and the paths are correct to the xdf file and the raw data file with the events respectively

papr 07 November, 2022, 13:44:13

Please try to replace lsl_relay_time_alignment with python -m pupil_labs.invisible_lsl_relay.xdf_cloud_time_sync

user-f1fcca 07 November, 2022, 14:08:01

C:\Users\Nick\anaconda3\python.exe: No module named pupil_labs.invisible_lsl_relay.xdf_cloud_time_sync

user-f1fcca 07 November, 2022, 13:59:20

is there a difference between the terminal window, PowerShell, and cmd?

user-f1fcca 07 November, 2022, 14:03:58

I mean, should I repeat the process in the terminal? Because everything I did was in cmd.

papr 07 November, 2022, 14:04:47

One needs to differentiate between the UI and the shell parts. The UI provides the window and the possibility to type text. Once you hit enter, the command is interpreted by the underlying shell. cmd.exe and powershell.exe are different shell programs that work slightly differently in regard to the available commands and the scripting syntax. In terms of executing a Python program, they are the same.

user-f1fcca 07 November, 2022, 14:10:08

I really need to offer you a coffee sometime for all this help 😊 , sorry for the inconvenience😅

papr 07 November, 2022, 14:11:12

No worries, happy to help!

user-f1fcca 07 November, 2022, 14:13:35

Error: Invalid value for 'PATH_TO_XDF': File 'C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg' is a directory.

papr 07 November, 2022, 14:15:04

This looks much better already. Do not pass -- and instead add the full file name including the file extension at the end
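
Based on the error above, the tool appears to take the two paths as plain positional arguments (the xdf file first, then the Cloud export with the events), so the call would look roughly like this; the placeholder paths are only illustrative:

python -m pupil_labs.invisible_lsl_relay.xdf_cloud_time_sync "<path to the recording .xdf file>" "<path to the Cloud raw data export folder>"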

user-f1fcca 07 November, 2022, 14:14:23

if I put the -- before it, I get a different error, e.g.: --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg

user-f1fcca 07 November, 2022, 14:14:39

Error: No such option: --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg

user-f1fcca 07 November, 2022, 14:15:00

If I put the xdf file on the desktop, what should I put in the path?

papr 07 November, 2022, 14:15:24

You can simply drag and drop the xdf file onto the terminal window to get the path

user-f1fcca 07 November, 2022, 14:24:19

Chat image

papr 07 November, 2022, 14:25:49

Ah, the time alignment tool assumes that the recording was made using an up-to-date version of the LSL relay. It adds meta data that is important for the time alignment. The error handling for these older recordings needs to be improved!

user-f1fcca 07 November, 2022, 14:24:46

Still no time_alignment .json in the folder with the events, but I think we're getting close ;p

user-f1fcca 07 November, 2022, 14:30:06

Okay, no worries, this recording was a test one. Should I just repeat the recording process with the up-to-date LSL relay, which I just updated with the command python -m pip install -U pupil-invisible-lsl-relay[pupil_cloud_alignment] ?

papr 07 November, 2022, 14:30:26

Correct!

user-f1fcca 07 November, 2022, 14:32:40

Lovely! I'll do it maybe in Thursday , so see you then if some more errors occur ! Have a nice evening !

papr 07 November, 2022, 14:34:04

To you, too!

user-8247cf 07 November, 2022, 22:51:22

So, I am sorry I am late. I have been using the Dlib library and it is giving me the coordinates; it's detecting the pupils' location for the left and right eyes, respectively. I have attached a picture showing this. The coordinates being printed are the locations my pupils are currently at... and simultaneously, if you see the cursor, those refer to 34.6, 40.3.....

user-8247cf 07 November, 2022, 22:51:50

Chat image Chat image

user-8247cf 08 November, 2022, 14:50:41

Here on the right!(black)

user-8247cf 07 November, 2022, 22:54:41

So, I need to synchronize those two sets of coordinates, which you can see printed.... (1115,633), (1024,809) for example..... So how can I do that? I am looking at the world 3D coordinates to image 2D references, but can't figure it out.... I mean, I can add the z-coordinate, which is the distance between the eye and the camera.....

user-8247cf 07 November, 2022, 22:55:36

for example for the left eye:- it's going to be....

user-8247cf 07 November, 2022, 22:55:37

(981, 562, 100) (963, 584, 100) (968, 613, 100)

user-8247cf 07 November, 2022, 22:55:45

how shall i proceed?

user-8247cf 07 November, 2022, 22:55:56

Please help me....

marc 08 November, 2022, 07:48:11

Hi @user-8247cf! Can you confirm if you are using Pupil Invisible or Pupil Core? If I understand you correctly you are using Dlib to implement pupil detection and are now looking for a mapping from pupil coordinates to screen coordinates?

First, I'd recommend using the existing gaze estimation algorithms of Pupil Invisible and Pupil Core respectively, which have lots of advantages over a direct mapping on pupil positions.

The recommended way for mapping a gaze signal to screen coordinates is to use one of the available gaze mapping algorithms, i.e. Reference Image Mapper or Marker Mapper in Pupil Cloud, or the Surface Tracker Plugin in Pupil Player. Using those you can track the screen and convert the gaze signal into pixel coordinates. From there you can convert them to stimulus coordinates by controlling how the stimulus is presented.

The approach you suggest, to calculate a direct mapping from raw gaze signal to target image coordinates works as well in theory, but is very prone to errors due to head movements. I.e. you could calculate a simple polynomial mapping from gaze/pupil position to the target coordinate space. But the validity of that mapping depends on the subject holding their head perfectly still.

I hope that clears things up a little bit!

user-8247cf 08 November, 2022, 13:56:46

Yes, I was using the Dlib library, not the Pupil code. Yes, correct, I was trying to map those coordinates which are being printed to the ones I showed. Can you please send the source code (specifically which one) that does this task? I want to use only a webcam, so is it possible through a webcam only? Also, can you send the source code for the algorithm and the mapping from camera pupil coordinates to screen coordinates? I am grateful to you...

user-fb5b59 08 November, 2022, 08:07:31

Hey, short questions regarding the single eye images and compression: 1) Are the single eye videos saved on the mobile device in RAW format or is there any compression applied? 2) The single eye images received by streaming from the mobile device are compressed by using H264, am I right?

papr 08 November, 2022, 08:08:13

The eye cameras provide mjpeg video and that is streamed via ndsi

user-8247cf 08 November, 2022, 14:13:44

Also, is the code available for mapping, in Python?☺️

user-8247cf 08 November, 2022, 14:19:00

Thanks! I will get back to you. I was checking that GitHub, which one in that is specifically for that purpose?

user-8247cf 08 November, 2022, 14:20:21

But I can use the algorithm and try to deploy it. As we only have the option of a webcam (the computer's one), what should the solution be in that case? Like, what's the basis of the algorithm?

marc 08 November, 2022, 14:21:07

The surface tracking algorithm's source code is located here: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/surface_tracker

user-8247cf 08 November, 2022, 14:21:21

I mean specifically the polynomial mapping?

user-8247cf 08 November, 2022, 14:21:48

Do you have any resources/code for that?

user-8247cf 08 November, 2022, 14:22:04

Thanks a lot for your help, @marc

marc 08 November, 2022, 14:24:51

The polynomial mapping would map pupil positions to screen coordinates. If you are using a webcam with custom pupil detection, you'd have to implement a custom calibration scheme as well. You can implement a polynomial regression using e.g. sklearn, but I can't point you to much more specific resources, as custom remote eye tracking like this is not really supported with our software.

nmt 08 November, 2022, 14:25:25

@user-8247cf, a bit late to the party. But have you checked out webgazer: https://webgazer.cs.brown.edu/ ?

user-8247cf 08 November, 2022, 14:29:59

I thought that was python:(

user-8247cf 08 November, 2022, 14:27:47

I will check it out! Thanks!! I will get back to you soon. Wait, polynomial regression: how is this linked to the mapping? I am so confused now. I mean, we need the z-distance (we already have that), plus the focal length, and then?

marc 08 November, 2022, 14:30:48

Polynomial regression is the technical term for what I meant with polynomial mapping. The idea is to map 2D pupil positions to 2D screen coordinates. Z-Distance/Depth is not considered here.

See here for an intro on polynomial regression: https://towardsdatascience.com/polynomial-regression-with-scikit-learn-what-you-should-know-bed9d3296f2
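
For illustration, a minimal sketch of such a polynomial mapping with scikit-learn, fitting 2D pupil positions against known 2D screen targets; all of the numbers below are made up and you would need your own calibration procedure:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# hypothetical calibration data: pupil positions recorded while the subject
# fixated known on-screen targets, head held still; in practice collect many
# more targets (e.g. a 3x3 or 5x5 grid)
pupil_xy = np.array([[981, 562], [963, 584], [968, 613], [990, 570]])
screen_xy = np.array([[100, 100], [960, 540], [1820, 980], [960, 100]])

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(pupil_xy, screen_xy)

# map a new pupil position to screen coordinates
print(model.predict([[975, 590]]))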

marc 08 November, 2022, 14:34:17

In general, implementing remote eye tracking with a webcam is hard though. Using a simple polynomial mapping like this and custom pupil detection will have extremely poor robustness.

If you need a tool that actually works robustly, I'd recommend working off of an existing tool like WebGazer. Webcam-based eye tracking works relatively poorly compared to head-mounted or proper remote eye trackers, even for the best implementations. If you google for webcam-based eye tracking, I am sure you'll find something in Python as well.

user-8247cf 08 November, 2022, 14:34:29

I mean, I know polynomial regression, but I was not sure how to use it in this respect!! Cool

user-8247cf 08 November, 2022, 14:34:38

I will check em out!

user-8247cf 08 November, 2022, 14:48:15

I did. I mean, can't I just convert (map) those coordinates to the screen ones? Will that not be fine?

marc 08 November, 2022, 14:49:13

That works assuming the mapping is well calibrated (i.e. sufficient data to fit the mapping) and there is no head-movement.

user-8247cf 08 November, 2022, 14:49:19

The coordinates that the webcam is outputting, (1779, 978), are pupil coordinates, right?

papr 08 November, 2022, 14:49:58

Can you clarify which product you are referring to? webgazer?

user-8247cf 08 November, 2022, 14:50:19

The Dlib library.....

papr 08 November, 2022, 14:50:47

Please understand that we cannot provide technical support for any third party products, e.g. webgazer or dlib.

user-8247cf 08 November, 2022, 14:51:31

Please see the picture. I mean, I am just asking: the mapping in that scenario will have error, right?

papr 08 November, 2022, 14:52:59

We are missing a lot of context here which would allow us to answer your questions accurately, e.g. how the data was recorded, etc. In addition, we do not have any experience with Dlib itself. I would like to ask you to consult the corresponding technical Dlib documentation.

user-8247cf 08 November, 2022, 14:53:37

Sure, let me figure out, I will get back to you..

user-6dfe62 08 November, 2022, 15:51:30

Hi! I have a question regarding Pupil Cloud while using Pupil Invisible: We are using the glasses for an experiment regarding emotion recognition in certain environments and we started having participants a few days ago. We had 5 recordings when I stopped using the glasses and the app on that day (automatic transfer to the cloud is on). I checked multiple times and the recordings were saved. My lab director messaged me today saying that the app and the cloud say we have no recordings for the last 3 months (as if the app went back in time 3 months and deleted everything from then to now). She might've deleted them by accident or something else happened; either way, does anyone have an idea on how to recover those recordings? Is there something like a trash can where the deleted data gets stored for a number of days before permanently getting deleted?

papr 08 November, 2022, 15:53:09

Could you share the account email address with me via a private message?

user-8247cf 08 November, 2022, 16:02:53

Papr, so I mean, the data is not recorded; this is in real time, the user is just watching the screen, and it keeps being printed on the screen. Just asking, I mean, how do I get the camera intrinsic parameters?

papr 08 November, 2022, 16:04:32

To clarify, are you using any of the head-mounted Pupil Labs eye trackers or the webcam to record eye movements?

user-8247cf 08 November, 2022, 16:07:35

I am just asking from an OpenCV/research perspective, nothing personal :) A webcam, specifically, to record the eye movements..... Like, I was looking at the mapping, so I mean, it says intrinsic camera matrix..... How do I determine that? Do you wish to have a look at the link? I am so grateful that you're helping me..

papr 08 November, 2022, 16:10:25

I am asking because my answer depends on that. For example, for Pupil Invisible glasses, we offer an API to access prerecorded values. Since we did not ship the webcam, we don't know the values and you will need to measure them yourself. Check out the OpenCV documentation on how to do that.

user-8247cf 08 November, 2022, 16:09:03

I wish we had the money for a tracker :/ I am so sorry, I am bothering y'all

user-8247cf 08 November, 2022, 16:11:51

Question: how do you measure the camera matrix? So the matrix has fx, fy; are those already known, right?

papr 08 November, 2022, 16:15:58

This is the opencv documentation that I was referring to in my previous message https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
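
For completeness, a condensed sketch of the chessboard calibration from that tutorial; fx and fy are not known in advance for an arbitrary webcam, they come out of this calibration (the image folder and pattern size are examples):

import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of a printed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calibration_images/*.jpg"):  # ~15-20 webcam shots of the board
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(camera_matrix)  # contains fx, fy and the principal point cx, cy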

user-8247cf 08 November, 2022, 16:16:34

oh cool, let me have a look.... thanks a lot, i will study that, and get back...

user-8247cf 08 November, 2022, 16:50:20

One thing, guys, that I want to know: the mapping process I am trying is not wrong, right? It just needs some initial constraints, like the z axis should be constant and the user is assumed to have their head position fixed, right? What I am trying to ask is: the mapping is valid, right?

papr 08 November, 2022, 16:52:13

Could you clarify what mapping process you are using?

user-cd03b7 09 November, 2022, 20:33:53

We're running into glasses usability issues: the USB-C cord seems to be causing drag and head rotation limitations. Even worse, the weight/drag created by the USB cable seems to be inadvertently lifting the glasses off the faces of testers, jostling the glasses and causing disruptions in recording. Our tests are being conducted in transit stations, so we inherently have a decent amount of head rotation & movement. Have you guys run into similar issues? How can I reduce glasses jostling?

papr 09 November, 2022, 21:57:26

Have you tried using a cable clip to fix the cable e.g. on the subject's jacket and such that there is leeway for the cable?

nmt 10 November, 2022, 10:45:47

Hi @user-cd03b7 👋. Good to hear from you again, and thanks for the feedback. Please reach out to [email removed] mentioning the issue and my name 🙂

user-cd03b7 09 November, 2022, 21:01:49

note - we're using Invisible glasses ^

user-cd03b7 09 November, 2022, 21:58:22

We tried that, it helps a bit but cable weight/lack of cable malleability is still present

marc 10 November, 2022, 08:11:20

In our experience it matters a lot how you route the cable along the subject's body to avoid the cable bending over itself and thereby putting force on the glasses. But I imagine you have played around with that already. We do offer a head strap that might help fixate the glasses on the head better. Have you considered that? The Pupil Invisible glasses are transmitting a lot of data through the cable and most 3rd party USB-C cables will not be able to carry that load robustly. However, if you had access to a softer, high-quality cable, it could be worth trying it out. The issue you would see with insufficient cables is intermittent disconnects in the recording.

user-87b26c 10 November, 2022, 11:19:19

Hi, has anyone ever experienced a cloud upload from Pupil Invisible taking too long?

user-87b26c 10 November, 2022, 11:20:06

for instance, I have 13 min. of recording but it has been uploading since noon. Still unfinished.

marc 10 November, 2022, 11:21:35

Hi @user-87b26c! Depending on the overall load sometimes there is waiting time. If you could let me know the recording ID of one of the affected recordings, I can double check if there is a problem!

marc 10 November, 2022, 11:22:29

How many hours have the recordings been processing? "Since noon" depends a lot on your timezone 🙂

user-87b26c 10 November, 2022, 11:22:32

Hi Marc, here it is. 6ece8c62-b479-42ab-af2f-8fbf84f747b8

user-87b26c 10 November, 2022, 11:23:07

I would say about 6 hours. still unfinished. Noon time Bangkok GMT +7

marc 10 November, 2022, 11:23:29

Thanks! That is indeed much longer than expected! I'll check what is going on.

user-87b26c 10 November, 2022, 11:24:23

In the past, it was way faster than this. Thanks

marc 10 November, 2022, 11:48:14

I see the status of this recording is indeed still "uploading" rather than "processing". The load on our servers is currently not unusually high, but the upload speed for those recordings is very slow. The most likely explanation would simply be that the available internet connection for the phones is weak. Could you confirm that the phones are correctly connected to the internet and maybe execute a speed test?

user-87b26c 10 November, 2022, 11:49:09

We uploaded it via Wi-Fi at our office. The speed is fast.

user-87b26c 10 November, 2022, 11:49:35

Is it about the OS? I still have a OnePlus 6.

user-87b26c 10 November, 2022, 11:49:51

Never updated according to the instruction.

marc 10 November, 2022, 11:52:06

No, the OS should not be an issue. Are the phones still located in the office with access to the fast connection in theory?

marc 10 November, 2022, 11:53:19

Could being outside the wifi range or something similar be an issue?

marc 10 November, 2022, 11:53:44

Executing a speed test would be useful to rule out this issue.

user-87b26c 10 November, 2022, 11:54:15

It was fast when we started the uploading. Office internet is real fast through leased-line optic. Should not be the issue.

user-87b26c 10 November, 2022, 11:58:54

We waited for 2 hours and less than 20% was uploaded. No internet issue, as we are doing streaming as well. We have plenty of bandwidth.

marc 10 November, 2022, 12:01:15

Okay, we'll think about other explanations.

marc 10 November, 2022, 12:40:48

@user-87b26c when you have access to the phones again, please try the following:
- From the recordings list in the app, pause the upload via the context menu of the affected recordings and then start it again, and see if that helps.
- Confirm that you can access https://api.cloud.pupil-labs.com/ from your office's internet connection. This is the server the app ultimately communicates with for uploading. When opening this in a browser you should see a page with API documentation.
- To be safe, do a speed test via e.g. https://fast.com/. While I do not doubt your internet connection is fast, WiFis can often be quirky, so I want to rule this out for sure.

user-87b26c 11 November, 2022, 10:33:49

Chat image

user-87b26c 11 November, 2022, 10:34:26

Chat image

user-87b26c 10 November, 2022, 15:52:56

Hi Marc. Thanks. I will check this tomorrow.

user-b28eb5 11 November, 2022, 03:24:45

Hi! We are working with two Invisible glasses and we have two questions: 1. Can we increase the gain of the audio in each pair of glasses? The volume that we get in the video is low. 2. We have a bit of an offset in the gaze with one of the glasses; do you have any idea how to solve it? Thank you!!

papr 11 November, 2022, 08:19:44

Hi, the app receives and stores the audio as it receives it from the operating system. That said, are both participants low in volume in both recordings? Regarding the offset, the app has a built-in offset correction. You should be able to access it through the wearer profile.

marc 11 November, 2022, 10:41:05

@user-87b26c Thank you! Does the problem persist after pausing + restarting the upload?

user-87b26c 11 November, 2022, 10:56:43

@marc the problem still exists. We have 2 recordings today and they are still uploading slowly.

marc 11 November, 2022, 11:14:10

If we can't get it fixed soon we'll come up with an alternative approach for getting this up into cloud so you don't get delayed too much by this! Please also execute the test here from the phone: https://speedtest.cloud.pupil-labs.com/

user-df1f44 12 November, 2022, 19:10:52

Just here to say - kudos on the naming convention update for the cloud downloads guys. As an early adopter of invisible cloud, this is very very much appreciated. Makes such an improvement to our pipeline. 🤓 ... So yeah, thanks for all the great work you all are doing over there @ Pupil Labs.

user-4771db 14 November, 2022, 11:50:43

Hi Marc, thanks for the prompt response. I tried it with the orange charging cable, but it didn't work either. I will try to update everything in the next days and try once again. I would like to charge the phone while recording. I haven't ever tried the ethernet connection.

marc 14 November, 2022, 11:52:09

I just got the info that the problem is the old version of Android. So unfortunately this will only work when using a OnePlus 8 device, sorry! 😦

user-4771db 15 November, 2022, 07:31:59

ok no problem! Thanks for the info!

user-ad947a 14 November, 2022, 21:51:46

Hi, we have just set up the system and are trying to understand if it's accurate enough to be used for text reading. Can we map the fixations and saccades onto a page of text or not? Can the red circle be shrunk, as it currently encompasses several words at a time?

marc 15 November, 2022, 08:26:52

Hi @user-ad947a! For classical reading studies the accuracy requirements are usually exceptionally high and Pupil Invisible will most likely not suffice! To map fixations onto a page you can use either the Reference Image Mapper or Marker Mapper. See the documentation here: https://docs.pupil-labs.com/invisible/explainers/enrichments/reference-image-mapper/ https://docs.pupil-labs.com/invisible/explainers/enrichments/marker-mapper/

The red circle can currently not be shrunk within the Pupil Cloud UI directly. Using the Gaze Overlay enrichment you could export snippets of recordings with a smaller gaze circle, but this would of course require some extra steps. See here: https://docs.pupil-labs.com/invisible/explainers/enrichments/gaze-overlay.html

user-ad947a 15 November, 2022, 09:47:53

Hi Marc, thank you for your response. Have a great day!

marc 15 November, 2022, 14:15:25

@everyone We want your feedback!! 😀 We are looking for feature requests and opinions from the community to shape our development roadmap for Pupil Cloud and Pupil Invisible! Help us to focus on what you need most. Any idea is welcome!

For this purpose, we have set up a platform for you to submit feature requests and vote on requests submitted by others. You can log in using your existing Pupil Cloud accounts!

https://feedback.pupil-labs.com

user-cb1972 15 November, 2022, 17:19:33

Hello, I'm setting up the Pupil Invisible, does it work with Pupil Capture?

user-d407c1 15 November, 2022, 21:17:14

Hi @user-cb1972 No, Pupil Invisible requires the Companion Device to run. That said, you can trigger start/stop and event annotations from a web browser using the monitor app https://docs.pupil-labs.com/invisible/getting-started/understand-the-ecosystem/#pupil-invisible-monitor You can also load Pupil Invisible recordings into Pupil Player.

user-946395 15 November, 2022, 17:37:11

Hi there! I am discussing a research study that used Pupil Labs CORE glasses. This is my students' first conversation about eye tracking details in my psychology of music class. I am looking for a post-hoc video that might illustrate fixations, saccades, etc. There are several videos on the Pupil Labs channel of post-hoc analysis and rendering, but there was no audio in the videos. Thanks in advance!

user-946395 15 November, 2022, 17:40:03

I downloaded the demo videos from this link, https://pupil-labs.com/products/core/tech-specs/, but I cannot get them to load in any video apps that I have. Any help is appreciated!

papr 15 November, 2022, 17:48:38

Please see my response in the core channel.

user-946395 15 November, 2022, 17:58:35

Thanks! Figured it all out! 🙂

user-47f6b0 16 November, 2022, 12:17:23

Hi, what is the accuracy of the Pupil Invisible eye tracker?

marc 16 November, 2022, 12:28:57

Hi @user-47f6b0! You can find a detailed evaluation of accuracy in our white paper here: https://arxiv.org/pdf/2009.00508.pdf

Regarding blink recovery time, is this about how long it takes for the gaze samples to yield high-quality data again after a blink event? This was never formally evaluated, but Pupil Invisible's gaze estimates are independent for every pair of eye images, so it should be instant as soon as the eye is open again. Or rather gaze estimates should reach high quality again already during the opening motion of the eye lids.

user-47f6b0 16 November, 2022, 12:20:31

I'm also wondering what is the blink recovery time and gaze recovery ?

user-47f6b0 16 November, 2022, 12:32:09

So sorry, the blink and gaze recovery are not the parameters that I'm interested in.

user-47f6b0 16 November, 2022, 12:32:30

Thank you for the link!

user-94f03a 17 November, 2022, 02:31:26

Hello, we are using the Pupil together with an EEG headset (smarting). This means that occasionally conductive gel (from the mastoid electrodes) comes in contact with the arms of the Pupil Invisible. Does that pose any risk for the electronics? We are wiping with a dry cloth.

marc 17 November, 2022, 08:08:19

Hi @user-94f03a! As long as the gel stays clear from the USB socket at the tip of the arm and clear from the eye cameras this should not be a problem.

user-d407c1 17 November, 2022, 08:54:45

@user-94f03a you can try placing some insulating tape (electrical tape) around the USB connector - cable at the tip of the glasses to better insulate the connector from the gel, albeit the glasses may become less comfortable to wear

user-94f03a 17 November, 2022, 08:55:56

thanks! will do that

user-2196e3 17 November, 2022, 09:03:37

Hi, how to convert pupil timestamp to real time? The first timestamp in pupil.csv is 224215.9444. I tried to convert to real time, but the converted real time year is 1970. It's weird, we did the experiment on Sep 1, 2022.

user-2196e3 17 November, 2022, 09:08:11

Thank you! I will read that.

user-2196e3 17 November, 2022, 09:46:47

@papr Could you please recommend some references to preprocess the pupil, blink and gaze data? Thanks!

papr 17 November, 2022, 09:49:36

Hi, just for clarification, are you working with Pupil Invisible or Pupil Core data?

user-2196e3 17 November, 2022, 09:58:23

I don't know. What is the difference? I work with pupil.csv, gaze.csv and blink.csv data.

papr 17 November, 2022, 09:59:52

The products use two different data formats. Does your recording folder have a info.json file? Or alternatively, a info.player.json file?

user-2196e3 17 November, 2022, 10:01:59

Oh, I think it is Pupil core data. Because we use pupil core device

papr 17 November, 2022, 10:02:23

Then let's move the discussion to core

user-2196e3 17 November, 2022, 10:03:48

Ok, thank you!

user-0ee84d 17 November, 2022, 14:54:18

@papr does pupil invisible also output the cognitive workload?

papr 17 November, 2022, 14:56:42

Not directly. You can estimate cognitive load through various metrics. One of them would be blink rate, which can be calculated based on Pupil Cloud's blink detection results.
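
For example, a rough blink-rate estimate could be computed from the Cloud raw data export along these lines; the file and column names are assumptions and may need adjusting:

import pandas as pd

blinks = pd.read_csv("blinks.csv")  # Pupil Cloud raw data export
gaze = pd.read_csv("gaze.csv")

# approximate the recording length from the gaze timestamp span (nanoseconds)
recording_minutes = (gaze["timestamp [ns]"].max() - gaze["timestamp [ns]"].min()) / 1e9 / 60
print(f"blink rate: {len(blinks) / recording_minutes:.1f} blinks/min")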

user-c661c3 17 November, 2022, 19:03:46

Hi, I have a problem with the raw data export on pupil_cloud.

I recorded a video with the invisible glasses and marked various live events using the web tool provided (pi.local:8080). The recording was uploaded on the pupil_cloud platform but when I try to create and download a raw data export enrichment, the folder created contains only the info.json file and not the .csv files.

The others enrichments don't seem to be a problem.

Would you have a solution to this problem?

user-c661c3 17 November, 2022, 19:20:59

Chat image

user-c661c3 17 November, 2022, 20:07:42

Hi I have a problem with the raw data

user-342d6f 18 November, 2022, 12:44:39

Hi. Is it possible to stream eye images captured by Invisible using Realtime Python API or Network API?

marc 18 November, 2022, 12:45:38

Hi! Streaming eye video is currently possible only via the legacy API: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/legacy-api.html#legacy-api [FIXED LINK]

user-342d6f 18 November, 2022, 12:47:12

Can I have access to this API? I need to provide a username and password.

user-342d6f 18 November, 2022, 12:48:20

Actually, I need to detect blinks in real-time. Is it feasible in any other way?

papr 18 November, 2022, 13:07:15

There will be some delay (specific to your network connection). I suggest you give it a try and see if the transport is fast enough for your use case.

user-342d6f 18 November, 2022, 13:47:29

Thank you

user-455cca 19 November, 2022, 20:27:01

Hi! I'm trying to re-load the "head_pose_tracker_model.csv" for my second recording so that I don't need to repeat the 3D-model build-up process. I found the .csv file but am now figuring out where I could apply/upload my template model in Pupil Player for other recordings. Thanks!

papr 20 November, 2022, 09:47:53

Hi, the csv file is just the export, not the model. If I remember correctly, it should be either a msgpack or a pldata file.

user-94f03a 21 November, 2022, 12:31:01

Hi all! We want to use the Pupil Invisible in a completely mobile setup (i.e. no laptop/tablet) together with Smarting EEG (which also supports LSL). Is this possible as far as you know? For instance, the Smarting mobile app can receive events via UDP; is there anything supported by the Pupil Companion? Thanks

marc 21 November, 2022, 12:34:37

Hi @user-94f03a! Pupil Invisible will always need to be connected to a Companion phone, but that is of course mobile. Pupil Invisible is compatible with LSL, but you'd need to run the LSL Relay for it somewhere: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/

Using the real-time API you can send events to the phone as well: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/ https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/track-your-experiment-progress-using-events/
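
For reference, a minimal sketch of sending an event with the pupil-labs-realtime-api Python client documented at the links above; it assumes the computer and the Companion phone are on the same network, and the event name is only an example:

from pupil_labs.realtime_api.simple import discover_one_device

# find a Companion device on the local network
device = discover_one_device()

# annotate the ongoing recording with a named event
device.send_event("condition_a_start")

device.close()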

user-94f03a 21 November, 2022, 12:34:59

Hi @marc thanks for this

user-94f03a 21 November, 2022, 12:35:08

Yes we are using the relay on a desktop setup.

user-94f03a 21 November, 2022, 12:35:31

Was wondering if there is any options to do this fully mobile, meaning without a laptop (assuming relay runs there).

papr 21 November, 2022, 15:45:50

Hi, the relay is written in Python and does not run by itself on Android. That said, it should be possible to reimplement it in a mobile app using https://github.com/labstreaminglayer/liblsl-Android

user-94f03a 21 November, 2022, 12:37:00

i suppose an option is to use a laptop to send a single 'start' marker to both streams (pupil, and EEG) and then do the synchronisation offline?

papr 21 November, 2022, 15:47:06

That is partially what the relay does for you (in addition to streaming gaze data via LSL). https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html

user-94f03a 22 November, 2022, 02:21:35

Thanks @papr for the suggestion. I don't think we can write this for android ourselves at the moment. We will have to find another workaround

user-06f807 22 November, 2022, 10:29:19

Hey, under https://docs.pupil-labs.com/invisible/explainers/data-streams/#fixations you mention that your algorithms for saccades, fixations and blinks will be open-source soon and that there will be a white paper as well. Is there a roadmap for that? Or a way to get a preview? The work might be relevant for my research project.

marc 22 November, 2022, 11:50:26

Hi @user-06f807! Yeah, we are a bit overdue on that! The paper on the fixation detector will be peer-reviewed and that is taking some time. We might be able to give you a confidential pre-print though. I'll DM you regarding that!

The issue with the blink detection algorithm is mostly a lack of documentation and some reliance on internal tools in the implementation that we need to get rid of. There will be no white paper to accompany it, but the algorithm is also very simple. We need to find the resources to get this done!

There is no explicit saccade detection algorithm. In some scenarios one could interpret the absence of a fixation as a saccade though!

user-06f807 22 November, 2022, 13:37:57

Okay, thanks for your quick response! If you're allowed to hand out the pre-print, it would be highly appreciated.

user-4bc389 23 November, 2022, 05:44:37

Hello, I want to use the Pupil Invisible Monitor app, but this website (http://pi.local:8080/) cannot be opened. Is there any other way to use this function? I'm from China.

nmt 23 November, 2022, 07:53:15

Hi @user-4bc389. Here are a few things to check:
1. Ensure the companion device is connected to the same network as your computer. Note that this does not require an internet connection.
2. Enable HTTP Streaming in the companion app.
3. Try with pi.local:8080 first, and if that doesn't work, enter the address shown in the HTTP Streaming section of the app.
If the streaming isn't working, it might be related to the network firewall. For example, institutional networks can have firewalls that block the connection. In such a case, the easiest thing would be to set up a local hotspot on your computer and connect the phone to it, or use your own dedicated router.

user-e29e0d 25 November, 2022, 08:50:02

Hi! Is the web app at pi.local open source? I was wondering how I could stream multiple cameras at the same time on the same web page using another backend such as Flask.

papr 25 November, 2022, 08:53:50

Hi, the pi.local webpage is just one client for the Realtime Network API. You can write your own client. See the under-the-hood description of the API here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html

The pi.local client code is not open source at the moment.

user-e29e0d 25 November, 2022, 17:06:16

File "./multi_stream_venv/lib/python3.9/site-packages/pupil_apriltags/bindings.py", line 468, in detect assert len(img.shape) == 2

Hi, when I'm trying to stream on the webpage I face this error. Can you explain to me what is happening?

papr 25 November, 2022, 17:17:00

The detector expects a gray image. You are likely passing an RGB/BGR color image.
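
In practice the fix is usually to convert the frame to grayscale before handing it to the detector; a minimal sketch (the input image path is only an example):

import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
frame = cv2.imread("scene_frame.jpg")           # hypothetical BGR color frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector expects a 2D gray image
detections = detector.detect(gray)
print(len(detections), "tags found")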

user-e0a93f 25 November, 2022, 22:09:40

Hi guys 🙂 I am using the gaze angles from the API and it works really well, thanks for that! I was just wondering if I can trust that the zero is positioned at a 90° angle from the subject ? In other words, I am comparing the max angles between subjects and would like to be sure that I am allowed to do that.

papr 25 November, 2022, 22:21:19

Note, the gaze angle is relative to the scene camera, not the subject. I.e. For some subjects zero might be pointing further down than for others. You could select an offset for each subject though to align all of them into one common coordinate system.
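
As an illustration of that per-subject offset correction, a minimal sketch assuming you work with the azimuth/elevation columns of a Cloud gaze export; the column names and the baseline window are assumptions:

import pandas as pd

gaze = pd.read_csv("gaze.csv")  # one subject
# baseline estimated while the subject fixates a straight-ahead target
# (here crudely approximated by the first ~500 samples)
baseline = gaze.loc[:500, ["azimuth [deg]", "elevation [deg]"]].median()
aligned = gaze[["azimuth [deg]", "elevation [deg]"]] - baseline
print(aligned.abs().max())  # per-subject maximum angles in a common frame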

user-cd03b7 28 November, 2022, 19:47:19

Hey all, quick question - I need to generate heatmaps but I'm worried I don't have enough testers lined up. How many testers have you guys used to build effective heatmaps? Is there a Pupil proprietary number? Academia doesn't have consistent testing numbers, online resources say 39 testers is optimal for heatmap construction (but with no citations), and I don't see anything definitive in this chat

user-d407c1 29 November, 2022, 08:04:45

Hi @user-cd03b7 ! Heatmaps are a powerful qualitative visualisation tool; the bigger the sample you aggregate, the more representative of your population it is. As with any behavioural data, sample sizes can be difficult. Small sample sizes may trick you, and you may get type II errors. If you want quantitative data, I recommend that you split your image into areas of interest and compute hit rate (whether participants looked at an area), dwell time (how long they stared at it) and time to first contact (when they noticed it). Check out this tutorial on how you can obtain such metrics, as sketched below. https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/

Once you have these metrics on pilot data, you could use them to estimate the required sample size more easily, using https://www.calculator.net/sample-size-calculator.html
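
For reference, a minimal sketch of those three metrics for a single rectangular AOI, assuming fixations that have already been mapped onto the reference image; the column names are assumptions and may differ in your export:

import pandas as pd

fix = pd.read_csv("fixations.csv")   # mapped fixations from the enrichment export
x0, y0, x1, y1 = 100, 200, 400, 500  # example AOI rectangle in reference-image pixels

in_aoi = fix["fixation x [px]"].between(x0, x1) & fix["fixation y [px]"].between(y0, y1)
hit = in_aoi.any()                                                # hit rate contribution for this participant
dwell_ms = fix.loc[in_aoi, "duration [ms]"].sum()                 # dwell time
first_contact_ns = fix.loc[in_aoi, "start timestamp [ns]"].min()  # time to first contact (absolute)
print(hit, dwell_ms, first_contact_ns)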

user-a50a12 30 November, 2022, 13:27:56

Hi, we would like to record audio with the glasses (to be able to use it locally), but not upload the audio to the cloud, for data protection reasons. Is there any way to achieve this? Or to delete the audio track later within Pupil Cloud?

papr 30 November, 2022, 16:18:53

As a temporary workaround, you could record both the audio from an external microphone and time-sync events using LSL, and sync both post-hoc to the Invisible recording. This would require you to do some programming, though.

marc 30 November, 2022, 13:51:25

Hi @user-a50a12! That is currently unfortunately not possible. The audio track is a part of the scene video file that is uploaded to Cloud and neither the app nor the cloud platform is currently able to remove the audio.

This is an interesting issue! I took the liberty of adding a feature request on feedback.pupil-labs.com for this and will discuss with the team what we could do! https://feedback.pupil-labs.com/pupil-cloud/p/block-audio-data-from-being-stored-in-pupil-cloud

End of November archive