🕶 invisible


user-969912 04 March, 2020, 16:19:39

Hello, I'd like to know if the Pupil Invisible comes with software to analyze the data?

wrp 05 March, 2020, 06:37:10

Hi @user-969912 👋 - Currently you can use Pupil Player (desktop software) to perform visualization and "light-weight" analysis of Pupil Invisible recordings. We are actively working on adding enrichment functionality to Pupil Cloud and hope to be able to share an update with the public by the end of this quarter or early Q2.

user-969912 05 March, 2020, 08:42:55

@wrp Thanks for answering. Do you have any software recommendations for market research, then?

wrp 07 March, 2020, 01:39:54

@user-969912 it really depends on what you are looking to achieve. There is a lot that can be done using Pupil Player already 😺 However, if you are looking for a turnkey solution today, you might want to consider using Pupil Invisible for data capture and then iMotions for post-hoc analysis. Again, we are working on adding enrichment functionality to Pupil Cloud - so this might also be a viable option for you in the near future.

wrp 09 March, 2020, 23:33:41

Announcement: One of our service providers for Pupil Cloud (Digital Ocean) is experiencing latency issues and some service interruptions. As a result, you may have experienced slow or incomplete uploads from Pupil Invisible Companion to Pupil Cloud. Details from the service provider are documented here: https://status.digitalocean.com/incidents/qq6wf8ny1c1n

wrp 09 March, 2020, 23:34:26

We hope that this will be resolved very soon, and apologize for any inconvenience this has caused you in using Pupil Cloud.

wrp 10 March, 2020, 02:42:45

According to the link above, this issue has been resolved. ✅

user-bab6ad 10 March, 2020, 11:32:02

@wrp @papr a quick question, because I did not find detailed info on this: I found in the git log that you can somehow get the Pupil Invisible to stream to the Capture software. However, I did not find how to do that. Is that possible with the Pupil Invisible software on the OnePlus, or do I need another app? How exactly would I connect it? I have both on the same WiFi, but I do not see an option to stream from Invisible in Capture.

user-bab6ad 10 March, 2020, 12:30:14

OK, I just needed to pay more attention to my WiFi config (I was by accident in a VPN with all network traffic tunneled on the laptop, um, sorry xD). However: now it connects, but it crashes often. Also, I cannot switch on the eye cameras. It works stably with the Pupil Invisible standalone app. All I actually want for now is the pupil diameter, if that is possible. If I take the code of the standalone app, does the gaze info also contain the pupil diameter for the Invisible, or does the Invisible not calculate that info like the Core does?

user-16e6e3 10 March, 2020, 13:06:04

Hello! I cannot open the recording.zip file downloaded from Pupil Cloud. It says the compressed folder is invalid. Is it part of the (current) issue with the Cloud service, or something else?

marc 10 March, 2020, 13:57:08

Hey @user-bab6ad! Pupil Capture was not designed to be compatible with Pupil Invisible. You can make it work to some degree (as you seem to have done), but it is not recommended. You can use the Pupil Invisible Monitor app to receive data generated by Pupil Invisible on a desktop computer. However, there is no pupil dilation signal provided. We are working on adding pupil dilation, but this will still take a while and we do not have a release date yet. The pupil dilation signal you might get through Pupil Capture will not be very reliable: due to the positioning of the eye cameras in Pupil Invisible, we get very challenging eye images, and the pupil detection algorithms in Pupil Capture do not work robustly on those. If you are interested in a pupil dilation signal, we would currently recommend using Pupil Core!

user-bab6ad 10 March, 2020, 14:54:27

@marc thanks a lot for the info, I already thought so. We have an old Pupil Core, but one of the side cameras broke, so we thought of the Invisible as a quick replacement. Anyway, we are currently ordering a replacement, so that is handled.

user-0eb381 11 March, 2020, 14:21:39

Hi, can I change the eye tracking sampling frequency?

user-0eb381 11 March, 2020, 14:23:43

If not, what is the sampling frequency?

user-16e6e3 11 March, 2020, 15:38:45

Hi! I downloaded Pupil Player from GitHub and am now trying to load a PI recording that I just downloaded from Pupil Cloud. How do I drop a recording directory onto the Pupil Player window? I tried to drag and drop the downloaded folder, but it does not work. On https://docs.pupil-labs.com/core/software/pupil-player/#load-a-recording it says it needs to be a triple-digit folder? I did not find one when I downloaded the recording from Cloud.

wrp 11 March, 2020, 15:40:24

@user-16e6e3 please unzip the downloaded folder and drag the recording folder into Player. The naming scheme is different for recordings downloaded from Cloud vs. the example in the docs.

user-16e6e3 11 March, 2020, 15:43:38

thanks! That worked

marc 11 March, 2020, 16:01:00

Hi @user-0eb381! The eye cameras record video at 200 Hz. The real-time gaze estimation on the phone works at 55 Hz. In about one month we will release a feature on Pupil Cloud that will allow you to obtain gaze data at the full 200 Hz post hoc. There are no options to configure the framerates on the phone; they will always be as high as possible.

user-0eb381 11 March, 2020, 17:26:15

thanks! @marc

user-a9ff0d 12 March, 2020, 07:28:39

Hi, we're now thinking of using the Lab Streaming Layer to synchronize data from Pupil Invisible and other data sources. I found a GitHub page (https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_invisible_lsl_relay) about the LSL plugin for PI, but unfortunately, as I'm new to LSL, it's not very clear to me how to use it. I wonder if there's any introduction on how to use the plugin. Thanks in advance, Toru

papr 12 March, 2020, 09:00:32

@user-a9ff0d LSL works by collecting/recording data in a central place (https://github.com/labstreaminglayer/App-LabRecorder) from multiple sources (LSL applications). The Pupil Invisible LSL Relay is such an LSL application and is meant to be installed/used via the terminal/command prompt. Depending on your operating system, you might need to install Python first (https://www.python.org/downloads/).
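Once the relay is running, a minimal receiver sketch with pylsl can verify that data is arriving. The stream type "Gaze" is an assumption here; check what the relay actually advertises on your setup.

```python
# Minimal LSL receiver sketch. Assumes the Pupil Invisible LSL Relay
# advertises a stream of type "Gaze" -- verify the actual name/type
# on your setup before relying on this.
from pylsl import StreamInlet, resolve_stream

streams = resolve_stream("type", "Gaze")  # blocks until a matching stream appears
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()  # one gaze sample + LSL timestamp
    print(timestamp, sample)
```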

user-7aee47 13 March, 2020, 05:54:28

Hi! I'm Madan. Recently we got a HoloLens 2 to do research on eye tracking, but unfortunately we couldn't access the IR camera due to some security concerns. Luckily, I landed on the Pupil Labs website and got to know about the binocular Invisible product. Is there any way to attach it to the HoloLens 2?

marc 13 March, 2020, 08:32:37

Hi @user-7aee47! We have not yet had the opportunity to test the compatibility of the HoloLens 2 with Pupil Invisible, so we can't give you a definitive answer. AFAIK the HoloLens 2 is supposed to be compatible with all regular glasses. Since Pupil Invisible has the form factor of a regular pair of glasses, I think there is a good chance that they are compatible. The scene camera's view is probably obstructed to some degree, but the gaze estimation data would be available. You could test the setup yourself within our 30-day trial period after purchase. If it does not work, you can send the hardware back and get a refund.

user-a9ff0d 13 March, 2020, 08:41:09

Hi @papr. I have Python (version 3.6.8) installed on my local Windows PC and followed the installation instructions, but I got an error message saying "fatal: not a git repository (or any of the parent directories): .git"

papr 13 March, 2020, 09:39:59

@user-a9ff0d you might be missing the cd App-PupilLabs/ step. I have just noticed that the git checkout ... step is out of date and no longer necessary. I have removed it from the readme file.

user-a9ff0d 13 March, 2020, 10:43:49

Hi, separately from the LSL issue, I also have a problem with the offline gaze calibration. I included a calibration process using the Pupil Calibration Marker v0.4 and tried the gaze from offline calibration. Even though the calibration marker seems to be accurately detected, I got the following error:

Create calibration Default Calibration - [INFO] calibration_routines.data_processing: Collected 0 monocular calibration data.
Create calibration Default Calibration - [INFO] calibration_routines.data_processing: Collected 0 binocular calibration data.
Create calibration Default Calibration - [ERROR] calibration_routines.finish_calibration: Not enough ref points or pupil data available for calibration.
Create calibration Default Calibration - [ERROR] gaze_producer.worker.create_calibration: Calibration failed: Not enough ref points or pupil data available for calibration.
player - [ERROR] gaze_producer.controller.gaze_mapper_controller: You first need to calculate calibration 'Default Calibration' before calculating the mapper 'Default Gaze Mapper'

Any help?

reference_locations.msgpack

papr 13 March, 2020, 10:44:55

@user-a9ff0d Please be aware that you do not need to perform Offline Calibration for Pupil Invisible recordings. Just use the "Gaze from recording" option.

user-a9ff0d 13 March, 2020, 10:53:56

@papr I'm aware of it, but the gaze from recording doesn't look right. While the calibration marker is presented, the detected gaze is always positioned a bit off from the marker.

user-a9ff0d 13 March, 2020, 10:55:46

Looks like this

Chat image

papr 13 March, 2020, 11:00:43

@user-a9ff0d Please see the most recent "pinned message" by @marc in this regard.

user-a9ff0d 13 March, 2020, 11:02:33

@papr I saw the post, and I used the gaze adjust function before recording this video. By the way, when you adjust the gaze using the wearer setting, are both the corrected and uncorrected data saved, or only the corrected data?

wrp 13 March, 2020, 11:08:30

@user-a9ff0d When you adjust gaze with Offset Correction in the Pupil Invisible Companion app, the gaze data file is saved with the offset applied. The original signal is not saved. The offset information is saved within info.json

user-16e6e3 16 March, 2020, 17:59:58

Good evening! I am still getting familiar with PI. When I use the raw data exporter in PP, the pupil_positions.csv is empty (only the header is there). Do I need to enable something for it to include the raw data?

papr 16 March, 2020, 19:40:39

@user-16e6e3 Hi, Pupil Invisible does not yet generate pupillometry data. If you are looking for the gaze data, check the gaze_positions.csv file.

user-16e6e3 16 March, 2020, 20:17:47

thanks!

user-808722 17 March, 2020, 11:51:02

Hi all! Is there a way to get stronger compression for Invisible recordings? A 1.7 GB file for a 10 min video seems excessively large.

user-808722 17 March, 2020, 11:53:01

Also, the PI is not recording audio - no microphone in there?

user-808722 17 March, 2020, 11:53:21

If necessary, could the Companion device's microphone be used?

papr 17 March, 2020, 11:57:05

@user-808722 Hey. Audio recording is an opt-in feature. You can enable it in the settings. @user-0f7b55 might be able to give further insight regarding the video compression.

user-808722 17 March, 2020, 12:00:46

@papr thanks, my bad, found the sound setting! @user-0f7b55 is there a way to get stronger/better video compression? The goal is to have smaller output files, like 200-300 MB max for 10 min, not 2 GB.

user-0f7b55 17 March, 2020, 12:20:14

@user-808722 we are working on that.

user-808722 17 March, 2020, 12:30:15

@user-0f7b55 thank you! Also, how can I easily export the video WITH the gaze overlay to my desktop, to edit/cut the video?

user-0f7b55 17 March, 2020, 12:34:04

You can use Pupil Player.

user-0f7b55 17 March, 2020, 12:34:33

@papr can help you with that further

user-808722 17 March, 2020, 12:35:11

@user-0f7b55 found it, thanks. Will explore the feature. I can only export the full recording, there is no cutting feature, right?

papr 17 March, 2020, 12:36:35

@user-808722 You can use the trim marks to export only parts of the recording. Check out this section of our docs; it should help you find your way around the application: https://docs.pupil-labs.com/core/software/pupil-player/#player-window

user-808722 17 March, 2020, 12:36:52

thanks a lot guys!

user-a9ff0d 18 March, 2020, 08:10:28

Hi, currently on the Companion phone, the folders (one folder corresponds to a single measurement) are named like this: ebf4153e-4583-4698-9b2e-1455fbbb10e4. Is there any way of setting folder naming rules, such as recording date and time?

papr 18 March, 2020, 08:20:43

@user-a9ff0d Again, @user-0f7b55 is the expert here, but as a workaround: the folders have a creation date. You should be able to sort the folders by this creation date within your file viewing application.

user-0f7b55 18 March, 2020, 08:41:55

@user-a9ff0d sorry, the folder naming scheme is not going to change.

user-a9ff0d 18 March, 2020, 09:20:45

@papr @user-0f7b55 I am aware that you can sort them by creation date, but we often use statistical software such as R to read the output files, in which case these folder names cannot be used to narrow the folder search... but okay, I understand the situation. Thanks!

papr 18 March, 2020, 09:22:49

@user-a9ff0d If you are accessing the recordings programmatically anyway, I recommend reading the info.json file. It includes meta information, e.g. the recording date. This should help you filter recordings efficiently.

user-c5fb8b 18 March, 2020, 09:24:53

@user-a9ff0d you could also write a script that renames the folders based on the recording date found in info.json, e.g. prepend the date to the UUID.
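A sketch of that renaming approach, assuming the recording folders sit under a single root and that info.json stores the start time in a field named start_time in nanoseconds since the epoch (inspect one of your recordings to confirm the field name and unit):

```python
# Rename UUID folders to "<date>_<uuid>" using the recording date
# from info.json. Field name "start_time" and its unit (nanoseconds
# since epoch) are assumptions -- check your own info.json first.
import json
from datetime import datetime
from pathlib import Path

RECORDINGS_ROOT = Path("recordings")  # hypothetical root folder

for folder in RECORDINGS_ROOT.iterdir():
    info_file = folder / "info.json"
    if not folder.is_dir() or not info_file.exists():
        continue
    info = json.loads(info_file.read_text())
    start = datetime.fromtimestamp(info["start_time"] / 1e9)  # ns -> s
    folder.rename(folder.with_name(f"{start:%Y-%m-%d_%H-%M-%S}_{folder.name}"))
```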

user-a9ff0d 18 March, 2020, 09:28:24

@papr @user-c5fb8b thanks, I see that renaming the folders may be the best option for now. My laziness just drove me to ask you if there was any way to work around it 😛

papr 18 March, 2020, 09:32:48

@user-a9ff0d We are big fans of laziness. 😄 This task is definitely something that you want to automate to avoid human errors. 🙂

user-a9ff0d 18 March, 2020, 09:37:47

By the way, is it possible to record while the phone is charging? We wondered if, by using a USB-C hub or a splitter cable, it's possible to record and charge at the same time.

user-0f7b55 18 March, 2020, 09:40:20

we do not officially support nor recommend such a practice

user-a9ff0d 18 March, 2020, 09:49:26

@user-0f7b55 I see. We would appreciate it if you came up with a solution for this someday, given the amount of charge the PI uses while recording.

user-16e6e3 19 March, 2020, 14:21:34

Hi! I currently have trouble opening Pupil Player. All I see is an empty command line and an empty window; I click on it, but it doesn't open. Is it my laptop or something else?

papr 19 March, 2020, 14:23:38

@user-16e6e3 This is a known issue on Windows. Unfortunately, we do not know the underlying cause. But there is a workaround. Please close the application, delete the Home directory -> pupil_player_settings -> user_settings_* files, and start Player again.
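The same workaround as a small sketch, in case it needs to be applied repeatedly:

```python
# Delete Pupil Player's cached user settings so the window state is
# rebuilt on the next start (the workaround described above).
from pathlib import Path

settings_dir = Path.home() / "pupil_player_settings"
for f in settings_dir.glob("user_settings_*"):
    f.unlink()
    print("deleted", f)
```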

user-16e6e3 19 March, 2020, 14:34:47

this worked. thank you!

user-16e6e3 19 March, 2020, 15:56:44

Another question: can you do surface tracking with PI recordings? If yes, how do you set the markers?

user-acc960 20 March, 2020, 09:42:41

The Pupil Invisible Monitor works on my Windows OS but not on my Debian-based Linux OS. Is this a known issue? Do I have to install certain packages for the Monitor to work?

user-acc960 20 March, 2020, 09:43:15

Output on Debian: it can find my device but still only shows a gray screen in the output window.

Chat image

papr 20 March, 2020, 09:44:19

@user-acc960 Hey, this is not a known issue. Just to make sure: is this the IP address of your WiFi interface?

papr 20 March, 2020, 09:46:35

Actually, it looks like it selects your ethernet connection instead of your WiFi interface for broadcasting.

papr 20 March, 2020, 09:47:21

Is it possible for you to disable the ethernet connection? Unfortunately, there is currently no user option to tell Pupil Invisible Monitor which interface to use.

user-acc960 20 March, 2020, 09:48:10

@papr All right, thanks, I'll check it out. Oddly enough, my own implementation shows the sensors:

user-acc960 20 March, 2020, 09:48:14

Chat image

user-acc960 20 March, 2020, 09:50:45

@papr You were right indeed. Now it is showing up. Thank you for the help.

papr 20 March, 2020, 09:52:02

@user-acc960 Glad that I was able to help! May I ask what you are working on? 🙂

user-acc960 20 March, 2020, 09:54:36

@papr Yes, I am creating an application for a Raspberry Pi that streams data via the NDSI v4 protocol. The incoming data will be processed with OpenGL to show a marker on the scene video. The OpenGL context will be converted to one-minute videos using the H.264 codec. Every time a one-minute video is finished, it will be uploaded automatically to our cloud.

papr 20 March, 2020, 09:56:04

Ah nice! Let us know if you run into any issues using NDSI 🙂

user-acc960 20 March, 2020, 09:56:49

@papr Haha, I will. 😜
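For reference, a rough NDSI client outline based on the example code in the pyndsi repository (https://github.com/pupil-labs/pyndsi); exact API names may differ between versions, so treat this as a sketch rather than a recipe:

```python
# NDSI v4 client sketch, closely following the pyndsi example code.
# API details (Network, DataFormat, sensor callbacks) may vary by
# pyndsi version -- compare against the examples in the repository.
import time
import ndsi

sensors = {}

def on_network_event(network, event):
    # Attach to newly announced sensors; drop sensors that disappear.
    if event["subject"] == "attach":
        sensors[event["sensor_uuid"]] = network.sensor(event["sensor_uuid"])
    elif event["subject"] == "detach":
        sensors.pop(event["sensor_uuid"], None)

network = ndsi.Network(formats={ndsi.DataFormat.V4}, callbacks=(on_network_event,))
network.start()
try:
    while network.running:
        if network.has_events:
            network.handle_event()
        for sensor in sensors.values():
            if sensor.has_notifications:
                sensor.handle_notification()
            for data in sensor.fetch_data():
                pass  # hand frames off to the OpenGL/H.264 pipeline here
        time.sleep(0.01)
finally:
    network.stop()
```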

user-808722 20 March, 2020, 17:37:31

Hi, good afternoon! Would the Pupil Core software, namely the Player for playing and exporting Invisible recordings, work on Windows 7 or ONLY on Windows 10?

user-808722 20 March, 2020, 17:38:14

So, I want to play and export Invisible recordings, that's the only task; can it be done on Windows 7?

papr 20 March, 2020, 17:39:23

@user-808722 We have only tested the applications on Windows 10. It might work on Windows 7, but I cannot guarantee it.

user-808722 20 March, 2020, 17:40:13

Thank you! I supposed this was the case. So I'll give it a go tomorrow.

user-808722 22 March, 2020, 22:37:38

@papr FYI, I tried to run the Player on Windows 7 and it gave me numerous errors and did not run. Didn't try any further; upgraded to Win10. 👌

user-808722 22 March, 2020, 22:39:05

@papr new question: what causes the recording to stop and lose signal seemingly randomly, for no reason, multiple times?

Chat image

user-acc960 24 March, 2020, 07:30:36

For what reason does this method return -1 (an error code) no matter what I do? The node has been started and has joined the v4 group. For some reason, it works properly in the Python variant.

Chat image

user-c5fb8b 24 March, 2020, 07:48:42

Hi @user-acc960, please consider posting code-related questions in the 💻 software-dev channel. Anyway, you can always look up the official zmq API, which is what the Matlab package should implement. As you can see here, a return value of -1 indicates one of 4 possible errors (scroll to the very bottom, "Return value"): http://api.zeromq.org/master:zmq-getsockopt You should be able to use zmq_errno() and zmq_strerror() to debug your issue further: http://api.zeromq.org/3-2:zmq-errno http://api.zeromq.org/3-2:zmq-strerror

papr 24 March, 2020, 13:02:34

@user-808722 This happens due to physical disconnects between the phone and the glasses and/or the scene camera. Please make sure that your USB-C cable is correctly connected to the phone and glasses. If you keep seeing this type of issue, please contact info@pupil-labs.com

user-808722 24 March, 2020, 13:03:14

@papr thank you! So it's certainly a hardware issue, most likely the cable connection at either end?

user-808722 24 March, 2020, 13:04:22

@papr I'm not super familiar with the USB-C connector; if it's plugged in, can the data transmission still disconnect and reconnect easily?

user-808722 24 March, 2020, 13:05:33

@papr separate question: is it possible to record and charge the Companion device at the same time, e.g. from a battery pack via a USB-C splitter?

papr 24 March, 2020, 13:07:50

@user-808722 Difficult to say. If you notice disconnects although both ends of the cable are well connected, the disconnects might be caused by a different issue. In this case, please contact our hardware support at info@pupil-labs.com

Regarding "is it possible to record and charge the Companion device, from a battery pack for example? so like a USB-C splitter?" - to cite @user-0f7b55: we do not officially support nor recommend such a practice.

user-808722 24 March, 2020, 13:12:12

@papr thank you

user-16e6e3 25 March, 2020, 16:50:39

Hi! I am working with hemianopic patients (partial blindness after brain damage), and I was wondering if it's possible to superimpose the area of a patient's visual field deficit as an AOI with Pupil Invisible?

marc 25 March, 2020, 22:56:50

Hi @user-16e6e3! Am I assuming correctly that the portion of the visual field that cannot be perceived by the patients moves with the gaze direction? E.g. always the left half of whatever my eyes are pointing at is not perceived? Pupil Invisible is of course not capable of measuring which portion of the visual field is affected, but if one knew the size and position of that portion, one could potentially visualize it in the scene camera, at least approximately. Pupil Invisible estimates gaze as "cyclopean gaze", i.e. it does not estimate the gaze direction of each eye separately, but estimates a single "gaze ray" that originates in the scene camera. Visualizing a given masked-out area that has a fixed relation to this cyclopean gaze direction would be possible.

user-8779ef 25 March, 2020, 22:58:42

@marc This sort of thing is typically measured using a perimetry test.

user-8779ef 25 March, 2020, 23:01:40

I know that cortical damage from stroke can lead to neglect in dedicated portions of the visual field (see homonymous hemianopia).

user-8779ef 25 March, 2020, 23:02:24

...but there are certainly other types of neglect. I would hit Google Scholar 🙂

user-16e6e3 26 March, 2020, 08:14:35

@marc Thanks for your answer! Yes, exactly, the blind field moves with the gaze. I am doing perimetry tests beforehand to determine which part of the visual field is impaired. How would you visualize a masked area fixed in relation to gaze (and, if possible, also get raw data referring to that masked area)? Does this work in Pupil Player?

papr 26 March, 2020, 08:35:15

@user-16e6e3 I am not sure what kind of raw data you are looking for. As far as I understood it, the mask would just be a visualization on top of the scene video that is dependent on the current gaze point. Do you know the "Vis Light Points" visualization in Pupil Player? It masks a circular area around the gaze point. Instead, I would mask the area left of the gaze point in your case (assuming the left half of the field of view is impaired). Meaning, if the subject looks at the center of the scene image, the left half of the image would be masked.

Please excuse my naive approach here. I am very sure that masking everything left of the gaze point is an oversimplification and would not be sufficient for your use case. Do you have an overview of which "shapes" the impairments can take? (if this term is even applicable...)
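To make the idea concrete, here is a minimal OpenCV sketch of such a mask, with a placeholder frame path and a hypothetical gaze sample in normalized scene-camera coordinates:

```python
# Black out everything left of the gaze point in one scene frame --
# a crude simulation of a left visual field deficit. Frame path and
# gaze sample are placeholders.
import cv2

frame = cv2.imread("scene_frame.png")  # hypothetical exported scene frame
norm_pos_x = 0.6                       # hypothetical gaze sample (0..1, left to right)

h, w = frame.shape[:2]
gaze_px = int(norm_pos_x * w)

masked = frame.copy()
masked[:, :gaze_px] = 0                # everything left of the gaze point goes black
cv2.imwrite("scene_frame_masked.png", masked)
```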

marc 26 March, 2020, 09:19:51

@user-8779ef Thanks for the input!

marc 26 March, 2020, 09:20:17

@user-16e6e3 This would probably not be something that can be done immediately with the available tools. However, the data generated by Pupil Invisible would allow you to implement a visualization like that yourself, or to adapt one of the available visualizations in Pupil Player to your needs, as @papr mentioned.

user-16e6e3 26 March, 2020, 13:24:25

thanks for your answers! @papr In many cases, masking the entire left half of the image would be an accurate depiction of hemianopia. The data I would like to get is, for example, the number of fixations/saccades towards that blind half. I have checked out "Vis Light Points" before. How could I adjust it to mask one half of the image? Does this and @marc's suggestion involve software development? I'm not familiar with it, unfortunately 🙈

papr 26 March, 2020, 13:28:46

@user-16e6e3 A very simple approach would be to check the norm_pos_x field in gaze_positions.csv and fixations.csv and test whether the value is smaller or bigger than 0.5 (the horizontal center of the scene camera image). This would not require any special plugins. Regarding the custom visualization, please contact info@pupil-labs.com - I think we can come up with a prototype by next week.
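That check, sketched with pandas against a Pupil Player export (the export path is a placeholder):

```python
# Count exported fixations left vs. right of the scene camera's
# horizontal center (norm_pos_x == 0.5), as suggested above.
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")  # placeholder export path
left = (fixations["norm_pos_x"] < 0.5).sum()
right = (fixations["norm_pos_x"] >= 0.5).sum()
print(f"fixations left of center: {left}, right of center: {right}")
```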

user-16e6e3 26 March, 2020, 13:31:52

@papr cool! I've looked at norm_pos_x before, and it's definitely a valid and simple approach. I was just wondering if a gaze-contingent analysis would also be possible. Will contact [email removed] Thank you!

papr 26 March, 2020, 13:38:45

@user-16e6e3 Do you have a reference on how this would/could work? Is it even possible for a subject to fixate the blind area? From what I understood, the blind area moves with the eye, i.e. if the subject looks to the left, the impairment moves left, too.

papr 26 March, 2020, 13:39:16

Also, is this impairment just left/right or is it possible that there are top/bottom impairments, too?

user-16e6e3 26 March, 2020, 13:44:03

@papr That's true, it moves with the eye, which is why it's important for patients to make eye movements to explore more of what's hidden in their blind field (e.g. with left-sided hemianopia, they should make more eye movements to the left so they can bring the "hidden" left part of the environment into the normally sighted field and thereby compensate for their deficit). Patients do not have problems fixating objects in general, because enough of the central part of their visual field is spared for them to do so.

user-16e6e3 26 March, 2020, 13:45:22

There are also impairments of the top or bottom quarter of the visual field, called quadrantanopia; they are quite common among my patients as well.

papr 26 March, 2020, 13:47:19

@user-16e6e3 So if you build a heatmap / 2D histogram of the fixation or gaze norm_pos_x/norm_pos_y data, you should see significantly more data points in the area of the impairment, correct?

user-16e6e3 26 March, 2020, 13:51:59

@papr In patients who compensate well, yes. Others would look more towards their sighted field.
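Such a heatmap can be built directly from the exported gaze data, e.g. as a 2D histogram with matplotlib (the export path is a placeholder):

```python
# 2D histogram of gaze positions in normalized scene-camera
# coordinates -- a quick stand-in for a gaze heatmap.
import matplotlib.pyplot as plt
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder export path
plt.hist2d(gaze["norm_pos_x"], gaze["norm_pos_y"], bins=40, range=[[0, 1], [0, 1]])
plt.xlabel("norm_pos_x")
plt.ylabel("norm_pos_y")
plt.colorbar(label="samples")
plt.show()
```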

user-16e6e3 26 March, 2020, 13:54:21

I was also thinking about using markers (AprilTags) on the left side of the room (in the case of left hemianopia); this would also enable creating a heat map of gaze distributions, right? It would not be gaze-contingent, but could still be an option.

papr 26 March, 2020, 14:01:15

@user-16e6e3 The difference would be the coordinate system. Gaze/fixation data is by default in scene camera coordinates (which represent the subject's field of view). If you use the surface tracker to generate heatmaps, you will get data that is relative to a specific region of interest and independent of the subject's head movement.
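For comparison, the same kind of histogram in surface coordinates; the surface name and export path here are hypothetical, and the surface tracker writes one such CSV per defined surface:

```python
# Gaze heatmap relative to a tracked surface instead of the scene
# camera. Surface name "left_wall" and the export path are made up.
import matplotlib.pyplot as plt
import pandas as pd

path = "exports/000/surfaces/gaze_positions_on_surface_left_wall.csv"
gaze = pd.read_csv(path)
on_surface = gaze[gaze["on_surf"]]  # keep only samples that hit the surface

plt.hist2d(on_surface["x_norm"], on_surface["y_norm"], bins=40, range=[[0, 1], [0, 1]])
plt.xlabel("x_norm (surface)")
plt.ylabel("y_norm (surface)")
plt.colorbar(label="samples")
plt.show()
```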

user-16e6e3 26 March, 2020, 14:08:21

that's good to know, thanks!

End of March archive