🕶 invisible


user-5a6ece 01 April, 2022, 09:53:07

Hi! I'm pretty new to Pupil Labs and an intern at a research facility. My advisor said that I should learn how to work with Pupil Labs. Now we're already having trouble viewing/playing the recordings in Pupil Cloud, although it seemed to work some time ago. Any tips?

nmt 01 April, 2022, 13:08:02

Hi @user-5a6ece 👋. Please refresh the page and then try playing the recordings again.

user-dd0253 01 April, 2022, 11:01:31

Hello, I have an issue with Pupil Invisible. I don't see the gaze/red circle anymore after the recording and during the calibration. Why? How can I fix it? thanks

nmt 01 April, 2022, 13:09:18

@user-dd0253 in the first instance, please try logging out and back into the Invisible Companion App. If that doesn't work, reach out to info@pupil-labs.com

user-9429ba 01 April, 2022, 13:07:23

Hi @user-092531! Welcome to the Pupil Labs community 🙂 Replying to your message about eye tracking underwater: https://discord.com/channels/285728493612957698/285728493612957698/958790379539415040 This would be a fantastic development! What product are you currently prototyping with? Would it be possible to see an image of the underwater hardware you are developing? You mention a "lens over the camera as part of the water proof enclosure". Is this completely sealed and watertight?

user-5a6ece 01 April, 2022, 13:16:42

It works now! Crazy, may I know what the issue was?

nmt 01 April, 2022, 13:42:00

Good to hear it's working for you 🙂. There was a minor UI error that we just resolved 👍

user-092531 01 April, 2022, 17:57:49

Hi Richard, we are doing the development with the Pupil Core. Can't give you a photo of the hardware just yet, but will be happy to once it's finished. It is completely watertight.

user-9429ba 04 April, 2022, 07:59:17

Thanks for the info! Lots of things to consider. But as a starting point, Pupil Core connects to a laptop via USB. Do you have a plan for sealing/water-proofing the whole system?

user-9429ba 04 April, 2022, 07:48:46

Hi @user-1abb3f Replying to your message from the Core channel here https://discord.com/channels/285728493612957698/285728493612957698/959555127507828826 Pupil Invisible connects to the OnePlus 8T mobile Companion Device that ships with the glasses. Please use the USB cable provided. The companion device runs the Invisible Companion App - which is exclusively used to record and/or stream gaze behavior. You cannot record data by connecting the glasses to a laptop.

user-a9b74c 06 April, 2022, 07:20:57

Thank you for your help. There's no error reported during the running process now, but I still have an issue: I can't output the full-length video after running the code. Is there any suggested way to deal with this situation?

papr 06 April, 2022, 07:35:37

Can you confirm that the visualization works correctly while the process is running but that the generated video scan_path_visualization.avi does not play back correctly?

papr 06 April, 2022, 07:23:46

Hi, could you clarify what you mean by "I can't completely output the full-length video"?

user-1391e7 06 April, 2022, 12:07:17

https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/ has a couple of broken links in it, just in case it's not known.

https://docs.pupil-labs.com/invisible/how-tos/tools/monitor-your-data-collection-in-real-time https://github.com/pupil-labs/pupil-docs/src/invisible/how-tos/integrate-with-the-real-time-api/introduction/ https://docs.pupil-labs.com/invisible/how-tos/applications/track-your-experiment-progress-using-events

marc 06 April, 2022, 12:15:25

Thanks a ton for reporting those @user-1391e7!! 👍 🚀 I have fixed them just now and the new version should be online in a few minutes. It may require a hard refresh of the browser window for them to show up (Ctrl+Shift+R).

user-1391e7 06 April, 2022, 12:15:46

thank you!

user-f408eb 06 April, 2022, 18:15:07

Hi everyone, my university got Pupil Labs Invisible for a scientific paper. In it we compare the duration needed to look from one area marked by Aruco markers to another. My question, since I am relatively new to working with eye tracking, is whether there is already an applicable script for this, or one that could be built upon.

marc 07 April, 2022, 08:25:03

Hi @user-f408eb! Have you seen the Marker Mapper enrichment in Pupil Cloud yet? It uses AprilTags rather than Aruco markers, but it is used for exactly the purpose of tracking surfaces with markers and mapping gaze onto them. The export of that enrichment should make it easy to detect and time transitions.

https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper

user-f408eb 18 April, 2022, 18:58:03

Thank you very much! That's really helping our study. I will be looking into that, and if there are any issues I will contact you again.

user-a9b74c 07 April, 2022, 18:14:05

Thank you for your reply. The situation is: the full recording is 15 min, but the output AVI is only 6 min.

papr 07 April, 2022, 18:15:30

Would you be able to share the raw recording with [email removed]? This way we can debug the recording directly.

user-3d239a 08 April, 2022, 12:37:47

@nmt Hi again Neil. Accurate eye tracking is what we need to make our treatment of eye cancer non-invasive. We currently have 4 cameras directed at the patient during treatment. Do you think we can use our information in combination with your network for the estimation of gaze angle?

marc 08 April, 2022, 12:40:43

Hi @user-3d239a! The gaze estimation network is specifically trained for the eye camera angles supplied by the Pupil Invisible glasses and is unfortunately not easily transferable to other cameras. A change in image appearance generally makes a major difference for machine learning models.

user-3d239a 08 April, 2022, 12:47:54

I see. To integrate your network with our cameras would require some modification of the network, correct? Is there any way we could get a look at your network? We might be able to assign some professionals to adapt your network to our needs, given the opportunity.

marc 08 April, 2022, 16:28:36

No, unfortunately we cannot release the full details of the network and training procedure we use. The required effort for these changes would also be tremendous. Depending on how much the camera angles change, significant modifications may be necessary. Also, it would have to be trained from scratch with a huge dataset that would need to be made available. For most research projects such changes would be far out of scope.

user-d3756a 08 April, 2022, 16:16:16

Is there a way to use the pupil invisible for multiple hours? For example, is it feasible to use a USB-C splitter to charge the phone and have the eye tracker connected?

marc 08 April, 2022, 16:31:26

Yes, this is possible using a powered USB-C hub! The hub you use must be OTG compatible and even then some hubs are not compatible with some versions of Android. But we have tested this successfully with e.g. this hub: https://www.amazon.de/gp/product/B08HZ1GGH9/

user-d3756a 08 April, 2022, 16:32:07

awesome, thanks!

user-4771db 14 November, 2022, 11:38:12

Hi all - has this ever worked for anyone? I tried this exact device, which unfortunately didn't work with my Invisible (device: OnePlus 6, Android 8.1.0). The first problem was that this USB-C hub does not provide another USB-C port besides the one for the charger; I guess one would need another adapter to connect the glasses? Yet even using a USB-C to USB 3 adapter, it didn't work. Do you have any idea?

user-413ab6 12 April, 2022, 05:57:32

Hi. When I use raw data export on a recording made in Pupil Invisible, I am looking at the gaze.csv file. I find that some values of gaze x [px] and gaze y [px] are negative. Why does that happen? I am looking at https://docs.pupil-labs.com/invisible/explainers/data-streams/#gaze for the coordinate system.

marc 12 April, 2022, 07:37:39

Hi @user-413ab6! The field of view of Pupil Invisible is quite large, but some subjects manage to look at things outside of it.

Pupil Invisible is still able to track their gaze to a small degree outside of the scene image, which leads to gaze coordinates outside the bounds of the scene video. Those could be negative values, or values larger than 1088 on the other end.

This margin is relatively small; you should not see values more than 100-200 pixels away from the image, and only if subjects actually look that way.

user-413ab6 12 April, 2022, 07:41:06

Hi Marc. My minimum value was -101.9. Is this normal? I tried manually drawing the marker on the scene video using OpenCV but it doesn't seem to work. Should I share my data/code?

user-413ab6 12 April, 2022, 07:51:16

Also, the gaze seems to not be detected correctly. There seems to be a constant upward and leftward displacement. Is there something to be done for this? I can share the video to show you what I mean

papr 12 April, 2022, 08:07:39

Hi, is it possible that you set up offset correction for a previous subject/wearer?

user-413ab6 12 April, 2022, 08:12:08

I have not set any such offset and I have been observing it for multiple users

user-413ab6 12 April, 2022, 08:08:30

I have not done it. But perhaps doing it might solve my problem? How to set up offset correction?

marc 12 April, 2022, 08:21:18

@user-413ab6 In that case it would be great if you could share a recording with [email removed]. Then we can take a look at whether everything looks OK and whether offset correction might be a solution!

user-413ab6 12 April, 2022, 08:24:08

Sure, will do. Thanks!

nmt 12 April, 2022, 12:15:30

Hi @user-413ab6. Thanks for sharing the recording! It looks like there is a fairly constant bias in the gaze prediction. In this case, we'd recommend using the offset correction feature. To use this, open the live preview in the Companion app, instruct the wearer to gaze at a specific object/target, then touch and hold the screen and move your finger to specify the correction. The offset will be applied to all future recordings for that wearer, until you change or reset it, of course. Note that the offset is most valid at the viewing distance it was set at. E.g. like in the case of stationary subjects gazing at a marker board.

user-413ab6 12 April, 2022, 12:47:11

Thank you! That worked. Can you also help me with my other query? (the code)

nmt 12 April, 2022, 13:01:28

@user-413ab6 please also have a look at this message for an overview of the causes of bias: https://discord.com/channels/285728493612957698/633564003846717444/801049310866440212

papr 12 April, 2022, 12:52:35

Hi, I would recommend discarding these out-of-bounds data points from your visualization.
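For reference, a minimal sketch of that filtering step in Python (assuming the Cloud raw data export's gaze.csv with "gaze x [px]" / "gaze y [px]" columns and the 1088x1080 scene frame; file names here are placeholders):

import cv2
import pandas as pd

FRAME_W, FRAME_H = 1088, 1080  # Pupil Invisible scene camera resolution

gaze = pd.read_csv("gaze.csv")

# Keep only samples that fall inside the scene image bounds
in_bounds = (
    gaze["gaze x [px]"].between(0, FRAME_W - 1)
    & gaze["gaze y [px]"].between(0, FRAME_H - 1)
)
gaze = gaze[in_bounds]

# Draw the remaining gaze points onto a scene frame (placeholder image path)
frame = cv2.imread("scene_frame.png")
for x, y in zip(gaze["gaze x [px]"], gaze["gaze y [px]"]):
    cv2.circle(frame, (int(x), int(y)), 20, (0, 0, 255), 4)
cv2.imwrite("scene_frame_with_gaze.png", frame)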

user-413ab6 12 April, 2022, 12:54:27

Sure, that's what I am doing now. But the markers I have drawn are not showing the gaze correctly. (I shared my code in the mail) I think I am making some basic silly mistake

user-413ab6 12 April, 2022, 13:02:40

Yes, I have checked, thanks. I notice that some users don't face this issue. For me, the offset is large. Then again, I have nystagmus and a host of other eye problems. That may be affecting it!

nmt 13 April, 2022, 07:49:26

Thanks for sharing that feedback. I'm glad to hear that it worked for you. In the recording that you shared, the gaze estimation looks pretty consistent even at the different viewing distances you recorded.

user-5b0955 12 April, 2022, 20:20:16

URGENT: We need urgent help. We are collecting data using Pupil Invisible. It was working perfectly, but today it suddenly stopped recording. Here is the scenario: we started the recording, it recorded for approximately one minute, then it stopped working and gave the following error (please check the screenshot). Here are the things we tried: we uninstalled and reinstalled the Pupil Companion app, restarted the device a couple of times, and disconnected and reconnected the Pupil Invisible glasses multiple times. Could you please let me know what the issue is? We had to stop the data collection session for this reason. As we are in the middle of collecting data, any urgent help is appreciated. Thank you very much.

Chat image

user-5b0955 12 April, 2022, 20:51:58

URGENT

user-9429ba 13 April, 2022, 08:54:04

Hi @user-d1072e Confidence values from 0.0-1.0 reflect the certainty of pupil detection for a given eye image with Pupil Core. Pupil Invisible, in contrast, does not rely on Pupil Detection. The eye cameras are off-axis and for some gaze angles, the pupils aren't visible. This is not a problem for the gaze estimation pipeline as higher-level features in the eye images are leveraged by the neural network.

papr 13 April, 2022, 08:56:24

@user-d1072e That said, Pupil Invisible provides a boolean estimate whether the glasses are being worn or not for each gaze sample. When opening an Invisible recording in Pupil Player, these booleans are converted to 0.0/1.0 confidence values respectively.

user-d1072e 13 April, 2022, 09:29:05

Ah OK, it makes sense now. Thank you both a lot for the explanation

user-3ab99c 14 April, 2022, 15:18:41

Hello. In Pupil Cloud, after creating a new project and a new enrichment, the enrichment does not load; the blue bar remains white (even after several hours for a 10-minute video). Do you know how to solve this problem? Thank you in advance.

user-53a8c4 15 April, 2022, 08:54:57

Could you please DM me the enrichment ID/name or the email used for Pupil Cloud so that we can debug the issue further?

user-413ab6 16 April, 2022, 06:32:51

Hi. I would like to record the GPS data in Invisible Companion. This doc https://docs.pupil-labs.com/invisible/explainers/data-streams/#gps says that it is there in the settings, but I can't find the setting. Can you help me?

user-cd3e5b 18 April, 2022, 22:51:59

Good evening, I'm working on integrating a Pupil Invisible eye tracker with EEG (BrainVision LiveAmp). I've found that LSL might be the way to go for time sync and centralizing data gathering. I am using this:

https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/

However, the device is not found. I have the OnePlus 8 device connected to the same WiFi network, the Companion app is open, and the glasses are plugged into the phone. I've verified that my connection works, as I am using a code example related to extracting surface gaze data in real time. Is there something I'm missing in my configuration to connect the PI tracker to LSL? Thank you in advance.

papr 19 April, 2022, 06:34:58

Hi, happy to see that this new package is receiving some traction already! Can you clarify which of the two points below is accurate?
1) The Companion device is not listed during the device selection process of the LSL relay.
2) There is no PI LSL stream being listed in your LSL recording software.

marc 19 April, 2022, 07:40:42

Unfortunately the docs are a bit ahead of their time 😬 This section was already written while working on the recent doc update, but the feature is not yet implemented in the app, and we mistakenly released it! While it is on the roadmap, an app update for this is also not planned within the next couple of weeks. I know there are other third-party apps that can track GPS on the phone, but I can't recommend a specific one.

Sorry for the inconvenience! I will remove the GPS section from the docs now!

user-413ab6 19 April, 2022, 07:41:27

Aww. Thanks for the update 🙂

user-f408eb 19 April, 2022, 11:28:55

@marc hello again! According to the documentation, gaze data that is not within the boundaries of the markers is not interpreted in x/y coordinates. Is it really possible to export the time span needed for the transitions from one area defined with markers to the other? Also, does the Invisible support Aruco markers?

marc 19 April, 2022, 11:50:02

Hi! I am not sure what exactly you mean by "not interpreted in x/y coordinates". Could you point me to the documentation you are referring to? Gaze in surface coordinates is always given in normalized "surface coordinates". See here: https://docs.pupil-labs.com/invisible/explainers/enrichments/#surface-coordinates

If gaze is outside of a localized surface, it will be reported with coordinates outside the normalized range of (0,0) - (1,1).

Aruco markers are not supported, only AprilTag tag36h11.

user-cd3e5b 19 April, 2022, 12:34:00

Hello @papr, item 1) is what I'm having an issue with.

papr 19 April, 2022, 12:35:49

Thanks for the clarification! Could you tell us a bit about the network that you are using? Is it a private or a public one, e.g. university?

user-cd3e5b 19 April, 2022, 12:50:08

Sure, it's my private (home) network.

papr 19 April, 2022, 12:56:51

And just to be sure, hitting Enter to refresh the list does not show the device, correct?

user-cd3e5b 19 April, 2022, 12:57:33

That's correct

papr 19 April, 2022, 13:01:20

Unfortunately, we do not have any logs in the relay yet. To debug this, we will need to run some custom code. Could you please run this example in your terminal? https://pupil-labs-realtime-api.readthedocs.io/en/latest/examples/async.html#device-discovery

I would need you to add the following lines in line 25 (after the if __name__ ... part):

import logging
logging.basicConfig(level=logging.DEBUG)

and share the logs printed to the terminal with us.

user-cd3e5b 19 April, 2022, 13:05:55

Thanks, I'll post the output

user-cd3e5b 19 April, 2022, 13:19:47

This is what I have:

DEBUG:asyncio:Using proactor: IocpProactor
Looking for the next best device...
DEBUG:pupil_labs.realtime_api.discovery:ServiceStateChange.Added myrouter._http._tcp.local. None

All devices after searching for additional 5 seconds: ()

Starting new, indefinitive search... hit ctrl-c to stop.
DEBUG:pupil_labs.realtime_api.discovery:ServiceStateChange.Added myrouter._http._tcp.local.

papr 19 April, 2022, 17:38:44

And please make sure to use the latest app release.

papr 19 April, 2022, 16:57:34

Mmh, can you ping pi.local? Not sure why the auto discovery is failing. We will add an option to manually set a device soon.

user-f408eb 19 April, 2022, 18:25:47

Ah, that's exactly what I meant. Please excuse the confusion. So if the duration of the transition of the gaze from one surface to the other is to be determined, is this possible with the enrichment? Even if the gaze between the surfaces is reported with values outside the range?

Thank you very much!

marc 20 April, 2022, 10:08:35

You could simply remove all samples of mapped gaze that are outside of the surface's range. There is an extra column called "gaze detected on surface" that encodes exactly this. Then you would have to detect the gaze entry and exit points in time in the remaining data and compare them to one another to calculate transition speeds!
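A rough pandas sketch of that idea for a single surface export (the column names "gaze detected on surface" and "timestamp [ns]" are assumptions about the enrichment CSV; check them against your own export). For transitions between two surfaces, you would pair the exit times from one surface's export with the next entry times in the other's:

import pandas as pd

gaze = pd.read_csv("gaze.csv")  # Marker Mapper export for one surface

on_surface = gaze["gaze detected on surface"].astype(bool)
t_ns = gaze["timestamp [ns]"]

# +1 marks an entry (off -> on), -1 marks an exit (on -> off)
changes = on_surface.astype(int).diff()
entry_times = t_ns[changes == 1].to_numpy()
exit_times = t_ns[changes == -1].to_numpy()

# Duration of each off-surface gap: exit until the next entry, in seconds
gap_durations_s = []
for t_exit in exit_times:
    later = entry_times[entry_times > t_exit]
    if len(later):
        gap_durations_s.append((later[0] - t_exit) * 1e-9)

print(gap_durations_s)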

user-cd3e5b 19 April, 2022, 19:48:25

That did it...I guess I didn't have auto-update on the companion app. Eek. The device is found now - thank you so much @papr

papr 19 April, 2022, 20:21:55

Happy to hear that this was solved so easily. Props to @user-b506cc for noting that this might be the issue!

user-ee70bf 20 April, 2022, 10:25:48

Hi there, I have a question regarding saccades please. I am using Pupil Core to record eye movements and am interested in the saccade count (number of saccades) and saccade duration. I have read that I can consider the time between two fixations to be a saccade - but what about blinks? How can I take those into account / out of the equation to calculate my saccades? Thanks in advance. Joanna

marc 20 April, 2022, 10:35:36

Hi @user-ee70bf! With Pupil Core this is a bit more complicated than with Pupil Invisible:
- Pupil Player does have a blink detection algorithm which you can use to detect blinks. It is less reliable, though, than the algorithm for Pupil Invisible.
- The fixation detection algorithm currently does not take blinks into account, and thus blinks may lead to false detections of fixation endings during the blink.
- The fixation detection algorithm of Pupil Core also does not compensate for head movements, so for robust detection a setting with little head movement is required. Otherwise small VOR movements will lead to false detections.
- There is no detection for smooth pursuit movements, so scenarios that include a lot of those will also lead to false detections.

Besides those points though, one can consider all gaze samples that do not belong to a fixation to be saccade samples.

papr 20 April, 2022, 10:42:14
  • The fixation detection algorithm currently does not take blinks into account, and thus blinks may lead to false detections of fixation endings during the blink.

Pupil Core's fixation detector only uses high-confidence data for the classification. Confidence typically drops during blinks. Therefore, one can argue that the algorithm handles blinks implicitly. The argument that Core's fixation detector is less reliable compared to the one employed by Pupil Cloud still stands.

user-ee70bf 20 April, 2022, 10:59:32

Thanks @marc for all of those clarifications! This is much clearer. Therefore, I do not need to "subtract" the blink durations to calculate my saccades?

marc 20 April, 2022, 11:57:12

You're welcome! As @papr mentioned, the blinks should actually already be compensated for in the fixation detection (my initial statement was a bit inaccurate), so no subtraction should be necessary. The potential issue is that some blinks might not get detected and thus not compensated for. This could only be fixed by manually annotating the blinks.

papr 20 April, 2022, 12:40:46

@user-ee70bf Short follow-up: @marc and I discussed some edge cases, e.g. Fixation -> Blink -> Fixation. Usually, these will be merged into one fixation. It is possible, though, that this is classified as two separate fixations. ~~Therefore, our recommendation would be: count all sections between fixations that do not contain a blink as saccades~~. This would rely on following @nmt's recommendations above.

nmt 20 April, 2022, 12:39:10

@user-ee70bf to add to these points, if you go down the route of classifying saccades based on inter-fixation duration, you will need to consider a few settings within Pupil Player, as these will affect outcomes such as saccade number and duration:
1. Fixation filter thresholds. These thresholds will have a significant influence on the number and duration of fixations classified (and thus saccades). You can read more about the fixation filter here: https://docs.pupil-labs.com/core/terminology/#fixations and setting the thresholds here: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds
2. Blink detection thresholds. You can check the quality of blink classifications within Pupil Player, and adjust the detection thresholds where required. It's worth reading more about how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector and setting the thresholds here: https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds

papr 20 April, 2022, 12:44:30

(This also assumes that one is not able to blink during a saccade. @nmt do you have further insight into the correctness of this assumption?)

nmt 20 April, 2022, 12:54:59

Incorrect assumption. Blinks can occur with saccades for a variety of reasons.

papr 20 April, 2022, 12:57:08

Thanks for the correction! So, assuming we have this case: Fixation -> Saccade -> Blink -> Saccade -> Fixation. Would you count this as one saccade? If yes, would you subtract the blink duration from the saccade duration? Or more interestingly: What would be your approach regarding handling blinks between fixations?

nmt 20 April, 2022, 13:09:22

I think this is totally context dependent. Blinks accompanying saccades can be, e.g., reflexive (due to some perturbation), voluntary, and more commonly evoked by larger shifts of gaze (such as when turning to look at something). In a controlled environment with minimal head movements, you're less likely to find blinks accompanying saccades. In this case, I'd be more willing to rely on the inter-fixation duration. In more dynamic environments, you're more likely to encounter blinks. Things become more difficult and manually inspecting the data is important. So to answer your question, you'd have to assume that to be one saccade. I wouldn't subtract the blink duration from the saccade duration. But really who knows. Maybe there were two saccades 😕

papr 20 April, 2022, 13:11:27

Thank you very much for your insight! @user-ee70bf Does this give you enough input to continue your work?

user-ee70bf 20 April, 2022, 15:43:05

@nmt @papr Thanks very much for all your very quick and precise feedback. My experiment relies on a very controlled environment, in which participants are asked to recall personal memories in front of a white wall. Therefore, fixations tend to be longer, and I have trouble finding a reference in the scientific literature to help me set my minimum and maximum fixation durations... As you suggested, I am relying on inter-fixation duration to calculate saccade duration (i.e., timestamp of fixation 2 - (timestamp of fixation 1 + duration of fixation 1) = duration of saccade 1). However, when I calculate this, my saccade duration ranges from 18 ms to 1400 ms for the same participant, and the higher values seem to coincide with the moments where blinks appear, which is why I was surprised that blinks were taken into account within the fixations! Maybe my settings are wrong... But then again, if I consider that blinks can fully be part of saccadic movements, then it should be okay? In other words, I am now trying to understand the best practices for choosing fixation thresholds for a neutral environment (white wall) and an autobiographical memory task.
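For what it's worth, the inter-fixation computation described above could be sketched roughly like this (assuming Pupil Player's fixations.csv export with a start_timestamp column in seconds and a duration column in milliseconds; verify the column names and units in your own export):

import pandas as pd

fixations = pd.read_csv("fixations.csv")

fix_start = fixations["start_timestamp"]           # seconds
fix_end = fix_start + fixations["duration"] / 1e3  # duration assumed in milliseconds

# Saccade n = gap between the end of fixation n and the start of fixation n+1
saccade_durations_s = (fix_start.shift(-1) - fix_end).dropna()

print(saccade_durations_s.describe())
# Very long "saccades" likely contain undetected blinks or data loss and
# may need to be inspected manually.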

user-08ada6 20 April, 2022, 14:44:28

Hi there, I have a question regarding the remote connection please. I'm using Pupil Invisible and I need to start the data recording from Matlab, using the remote connection. I found this https://github.com/pupil-labs/pupil-helpers/tree/master/matlab, but I have a big problem with the zmq package. Can you help me? Thanks in advance. E

papr 20 April, 2022, 14:48:38

Hi, the linked package only works for the Pupil Core network API. We currently do not offer any Matlab client implementations for the Pupil Invisible network API. You can read more about it here https://github.com/pupil-labs/realtime-network-api and here https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html

That said, can you tell us a bit more about your use case? What is your motivation to process the eye tracking data in realtime in Matlab?

user-08ada6 20 April, 2022, 14:54:25

OK, clear, thank you! I need to do an experiment on product packaging using eye tracking and EEG. Since the researchers I am working with created the project in Matlab, I wanted to add Pupil Invisible to it. If you need more details or if I didn't explain myself well, I am here.

papr 20 April, 2022, 15:26:07

So, if I understand correctly, you want to do the analysis in Matlab. My question would be whether the data collection needs to happen in real time. The recommended workflow would be to upload the recordings to Pupil Cloud, apply enrichments, e.g. our Reference Image Mapper, and then download and process the CSV export. (See https://docs.pupil-labs.com/invisible/explainers/enrichments/ for the possibilities)

Yes, using Python to receive the data in realtime and forwarding it to Matlab is possible. But you will need to perform the full analyses on your own.

user-08ada6 20 April, 2022, 15:16:14

Whereas in Python I can get the eye tracking data in real time? Because I could use Python and afterwards call it from Matlab.

papr 20 April, 2022, 15:26:40

What type of metrics/data are you planning on analysing?

user-08ada6 20 April, 2022, 16:45:43

Yes, I wanted to record the data in Matlab in real time. I need to do an experiment where participants observe images for a few seconds on a PC monitor, and I want to record their eye movements during the observation. I would like to have a CSV file with the fixation coordinates and timestamps, without recording the video and uploading it to the Cloud.

papr 20 April, 2022, 16:52:27

You can use the realtime Python API to receive the scene video and use this Python module to do the AOI tracking: https://github.com/pupil-labs/surface-tracker. Unfortunately, this package is missing some good examples and documentation.
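For orientation, a minimal sketch of receiving matched scene frames and gaze with the realtime Python API could look like this (based on the pupil_labs.realtime_api.simple interface; the actual surface/AOI mapping with surface-tracker is left out here):

import cv2
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network
device = discover_one_device()

try:
    while True:
        # Scene frame and gaze sample matched by timestamp
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        img = frame.bgr_pixels
        cv2.circle(img, (int(gaze.x), int(gaze.y)), 30, (0, 0, 255), 5)
        # This is where a surface tracker would map (gaze.x, gaze.y)
        # from scene camera coordinates into monitor coordinates.
        cv2.imshow("scene with gaze", img)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
finally:
    device.close()
    cv2.destroyAllWindows()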

papr 20 April, 2022, 16:48:17

So you are interested in PC monitor coordinates, right? For that, you will need to process the scene video in any case, because gaze is estimated in scene camera coordinates. Probably something similar to the Marker Mapper enrichment, but in real time.

user-08ada6 20 April, 2022, 16:52:49

Yes, exactly, I'm interested in the coordinates of the PC monitor. Do you recommend that I record the PC screen video and then upload it to the Cloud? The only doubt I had was about the timestamps: if I could start the Pupil Invisible from inside Matlab, the timestamps would be consistent. If I start the video separately, there are seconds of difference between my project in Matlab and the video recording.

papr 20 April, 2022, 16:54:46

You can only upload to Cloud via the app. Pupil Invisible records everything in Unix epoch time. You can time-sync everything via the system-provided network time sync (via NTP).

user-08ada6 20 April, 2022, 16:57:21

Ok thank you so much for the valuable information and the link you sent me, I will look into it. If I have other doubts I will write to you.

papr 20 April, 2022, 16:59:30

We will need to work on an example anyway. I might try to find some time for that next week. What does your timeline look like?

user-08ada6 20 April, 2022, 17:06:29

Yes, thank you! You would be doing me a great favor! I wanted to start my experiment in June. Let me know if it's too soon, please.

papr 20 April, 2022, 17:08:16

That is plenty of time!

user-08ada6 20 April, 2022, 17:10:51

Talk to you soon, thanks 😀

papr 20 April, 2022, 17:11:39

You might want to start thinking about how you want to get the data from Python into your Matlab. The easiest would be storing the data in a file and reading that post hoc into Matlab.

user-08ada6 20 April, 2022, 17:16:06

Hmm, okay, but I would want the Pupil data recording to start when I start the experiment in Matlab. Do you still recommend that I record in Python and then read the data into Matlab? Sorry, maybe I didn't understand that.

papr 20 April, 2022, 17:28:49

You can do that via the Python client. Maybe this is an option: https://uk.mathworks.com/products/matlab/matlab-and-python.html

user-08ada6 20 April, 2022, 17:30:18

Ok, thank you very much, I will look into it.

user-b14f98 20 April, 2022, 18:49:41

Is it still true that the IMU and gaze data are subject to a non-constant temporal offset? I.e., there is latency between the two by an amount that changes over time?

papr 20 April, 2022, 19:08:00

If I remember correctly the issue was improved in v1.2.2

user-5b0955 21 April, 2022, 04:56:47

Hi guys, we are trying to sync data from an Azure Kinect and Pupil Invisible. Could you please suggest the best way to do that? Any pointer is highly appreciated. Thanks.

user-1391e7 21 April, 2022, 07:13:50

I don't know about the best way, but would writing a piece of software that receives sync events via the network API suffice? https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/
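As a concrete illustration, a small sketch that saves timestamped sync events into the running Pupil Invisible recording via the realtime API (using the pupil_labs.realtime_api.simple client; the event names are arbitrary placeholders):

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Timestamped by the Companion device on arrival (Unix epoch, nanoseconds)
device.send_event("kinect.capture.start")

# ... trigger the Azure Kinect capture here ...

# Or pass an explicit timestamp taken on this machine, if both clocks are
# synchronized (e.g. via NTP)
device.send_event("kinect.frame.0", event_timestamp_unix_ns=time.time_ns())

device.close()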

user-9429ba 21 April, 2022, 13:51:08

Hi @user-ee70bf This sounds like an eye movement reinstatement paradigm, where fixation-saccade sequences are specifically reproduced when recalling a past experience from episodic memory. If so, I am wondering if analysis of saccades is even necessary. Typically it is the relative direction of eye movements between fixations which is important. See this paper: https://psycnet.apa.org/doi/10.1037/a0026585.

For more info on saccades in mobile eye tracking compared to other contexts, I can highly recommend this article: https://royalsocietypublishing.org/doi/10.1098/rsos.180502

user-ee70bf 22 April, 2022, 11:37:29

Thanks very much Richard for those interesting articles. In this paradigm, I haven't got access to the eye movements that were produced during the encoding of the memory, so I am not comparing encoding and recall. I will look into the importance of saccades... Typically, in this new field of research (eye movements and autobiographical memory), it is considered that saccades enable us to search images through our visual system and fixations enable us to activate and explore those mental images. But the more I read on the subject (I am a first year PhD student), the more I discover that depending on the field of study, we use very different variables (AOIs, fixations, saccades...)

user-59c143 21 April, 2022, 14:02:14

We have a Pupil Invisible which is currently running into problems. Near the connection of the world camera, the glasses are overheating. We suspect there is a short circuit inside. Please advise how to fix this. At this moment we do not dare to open the glasses ourselves (yet) for inspection.

papr 21 April, 2022, 14:09:15

Please contact info@pupil-labs.com in this regard.

user-59c143 21 April, 2022, 14:14:12

Thanx, I will

user-5b0955 23 April, 2022, 06:56:50

Hi guys, we recorded approx. 600 videos, which are uploaded to Pupil Cloud. We need to download the PI World Video (egocentric) from all of these recordings. We can download each recording individually and extract the zip file to get the egocentric video data, but this approach is very time-consuming. We are wondering whether there is an easier way to download all the egocentric video data from the Cloud recordings? Thanks.

marc 25 April, 2022, 08:04:14

Hi @user-5b0955! There are two more convenient options:

  • You could create a project with all of those recordings. The creation process would still be tedious, but using the raw data exporter enrichment you could then download the recording data including the scene video in a single download.

  • You could use Pupil Cloud's API to download those recordings programmatically. This would require writing a corresponding script, and you would need a strategy for identifying those recordings (e.g. their names follow a specific scheme, or they share recording dates). If this might be of interest to you, I can give you details on how that would work.

user-5b0955 25 April, 2022, 10:23:19

I think downloading using the API will be convenient for us, as I think we would still need to add each video manually to a project. Could you please provide resources on this? Suggestion: I think it would be great if the Cloud had a feature to select and download a set of recordings.

user-53a8c4 25 April, 2022, 10:45:44

  • Suggestion: I think it would be great if the Cloud had a feature to select and download a set of recordings.

This feature already exists: it is possible to select all the recordings on a page at once by clicking the first in the list, holding Shift, and clicking the last. Ctrl+click can be used to select/deselect individual recordings from that selection.

marc 25 April, 2022, 10:30:41

Thanks a lot for the feedback! I can definitely see how this would be very helpful for use-cases like yours! I have asked the engineering team to provide a minimal example of how to achieve this with the API and will forward it to you asap. This may take until tomorrow.

user-5b0955 25 April, 2022, 12:08:02

Thank you very much 🙂

user-e0a93f 25 April, 2022, 14:43:33

Hi 🙂 My question concerns timestamps. I am conducting a motion capture (Xsens IMUs) + eye tracking (Pupil Invisible) study. I need to sync the data from both systems, which is not trivial. My goal was to match the acceleration recorded from Xsens' head IMU and the glasses' IMU. My experiment is on a trampoline, so I have a pretty good idea of the shape of the acceleration profile. Unfortunately, it seems like Pupil's gaze and acceleration data are not synced? Is it possible to know how the timestamp for the IMU in the glasses is assigned?

papr 25 April, 2022, 14:46:33

There is a known issue in older app versions (< 1.2.2) where these two data streams were out of sync for some subjects. In what stage of the study are you at the moment?

user-e0a93f 25 April, 2022, 14:50:40

Ah, darn! I have 16/20 subjects collected. Is it a fixed lag or an unknown lag?

papr 26 April, 2022, 10:38:31

Unfortunately, at the time, the lag was not fixed and even drifted over time. 😕

user-55ae67 25 April, 2022, 18:07:18

Hello, we are collecting data while our participants are using a touchscreen in a dark environment, but the resolution of the world camera on Invisible is not enough to clearly see the components on the screen. Is it possible to adjust the camera settings so that the text can be read clearly, or can we connect an external world camera to Invisible? If yes, do you have any recommendations? We got similar results with Core too. Thanks in advance.

marc 27 April, 2022, 07:29:36

You could also try setting the scene camera to manual exposure in the settings and optimizing that value for the best exposure of the screen. This does not increase the resolution but would remove effects of over-exposure of the screen.

papr 26 April, 2022, 10:42:19

If you find an external camera and software that supports video capture via labstreaminglayer (LSL), you could use it in combination with our PI LSL Relay https://pupil-invisible-lsl-relay.readthedocs.io/en/latest/. It emits events to LSL that allow you to synchronize the external camera feed with the Pupil Invisible recording post hoc.

user-e0a93f 26 April, 2022, 10:58:43

Thanks for your answer @papr. You can understand that I am really disappointed. If I understand correctly, there is nothing we can do about it?

papr 26 April, 2022, 11:04:24

There would be a way to correct for that, but it requires some coding. One could calculate the scene video's optical flow, which correlates highly with IMU data. Then you can use the two streams to find the offset and compensate for it.
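A rough sketch of that approach (assuming OpenCV for the optical flow and a gyroscope-magnitude signal resampled to the scene video's frame rate; this only illustrates the idea, not a ready-made pipeline):

import cv2
import numpy as np

def flow_magnitude_per_frame(video_path):
    """Mean optical-flow magnitude for each scene video frame."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    return np.array(magnitudes)

def best_lag(flow_mag, gyro_mag):
    """Lag (in samples) that best aligns the two signals via cross-correlation."""
    a = (flow_mag - flow_mag.mean()) / flow_mag.std()
    b = (gyro_mag - gyro_mag.mean()) / gyro_mag.std()
    corr = np.correlate(a, b, mode="full")
    return int(corr.argmax()) - (len(b) - 1)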

user-e7240e 26 April, 2022, 17:21:50

Hi, I recorded data using Pupil Invisible and now I am trying to analyze the data. I am not sure how I should start with the data analysis. I came across a video explaining how to use Pupil Player to create a heatmap, but when I try to create a surface it says that no marker is found, and I am not sure how to define the markers. I would appreciate it if someone could help me.

papr 27 April, 2022, 06:22:08

Hi, I recommend this section of our documentation to get an idea of how the different products (Pupil Invisible, Pupil Cloud, Pupil Player) play together: https://docs.pupil-labs.com/invisible/getting-started/understand-the-ecosystem/ I would specifically recommend having a look at Pupil Cloud and its enrichments: https://docs.pupil-labs.com/invisible/getting-started/analyse-recordings-in-pupil-cloud/

user-e0a93f 26 April, 2022, 21:33:53

I have another question while we are at it. I have seen the notion of gaze angle appear multiple times. It is information that I need: I would like to know the orientation of the gaze in terms of angles (side, up/down) relative to the glasses' reference frame. I am guessing I could compute it from the gaze_point_3d, eye_center, and gaze_normal fields of gaze_position.csv (exported from the app). However, these fields remain empty after export. Am I doing something wrong? Or is there any other way to access gaze angles?

papr 27 April, 2022, 07:29:48

gaze_point_3d, eye_center, and gaze_normal are Pupil Core specific outputs. They are not available for Pupil Invisible. That said, it is possible to calculate the gaze direction (similar to gaze_point_3d) for Pupil Invisible. You can do so by undistorting and unprojecting the 2d gaze estimates using the scene camera intrinsics. I am working on an example that shows you how to do that.

papr 27 April, 2022, 08:23:17

Please see this example on how to get directional gaze estimates in spherical coordinates based on the 2d pixel locations estimated by Pupil Invisible: https://gist.github.com/papr/164e310047f7a73130485694d037abad
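The core of the linked example boils down to something like the following (the camera matrix and distortion coefficients below are placeholders; the real values come from the recording's scene camera intrinsics, and the gist itself is the authoritative reference):

import cv2
import numpy as np

# Placeholder intrinsics - replace with the scene camera intrinsics of your recording
camera_matrix = np.array([[766.0, 0.0, 544.0],
                          [0.0, 766.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(8)  # placeholder distortion coefficients

def gaze_px_to_spherical(x_px, y_px):
    """Convert a 2D gaze point (pixels) to azimuth/elevation angles in degrees."""
    point = np.array([[[x_px, y_px]]], dtype=np.float64)
    # Undistort and unproject: the result is a normalized ray (x, y, 1)
    undistorted = cv2.undistortPoints(point, camera_matrix, dist_coeffs)
    x, y = undistorted[0, 0]
    direction = np.array([x, y, 1.0])
    direction /= np.linalg.norm(direction)
    azimuth = np.degrees(np.arctan2(direction[0], direction[2]))     # positive = right
    elevation = np.degrees(np.arctan2(-direction[1], direction[2]))  # positive = up
    return azimuth, elevation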

user-e0a93f 27 April, 2022, 16:56:38

Thanks a lot!

user-ace7a4 27 April, 2022, 09:20:11

Is it possible to get pupil data when using the Invisible eye tracker? Meaning, does the Invisible eye tracker also provide parameters like pupil diameter, similar to the Core device?

papr 27 April, 2022, 09:22:44

Hi, currently, Pupil Invisible is not able to provide pupillometry data. Due to the oblique camera angles, we do not get a good view of the eyes, making pupillometry very difficult.

user-8cef73 27 April, 2022, 10:46:29

Hi, I'm having an issue getting videos from the Pupil Companion app to the cloud. All videos stay at 0%, no matter how long I leave it. I have tried restarting the app and phone, have tried multiple WiFi and data connections, and nothing has worked. I could previously upload videos with no issue.

marc 27 April, 2022, 10:47:23

Hi @user-8cef73! Could you try logging out and back in to the app? You can log out via the settings view.

user-8cef73 27 April, 2022, 10:51:52

Thank you, that worked! I'm also having an issue where I'm getting notifications that I can't action, some of them saying there are errors with the glasses. Is there any other way I can action them? One is about calibration and the other is a recording error but the notification is cut off so I can't actually read what the issue is.

marc 27 April, 2022, 11:20:07

Could you provide a screenshot of those notifications?

user-1936a5 27 April, 2022, 11:14:30

Hi, we are researchers from a university in China. We want to buy some of your products, but we have some problems with import licensing and channels. Who can help?

marc 27 April, 2022, 11:21:52

Hi @user-1936a5! Please write an email to [email removed]. The sales team will be able to help you!

user-1936a5 27 April, 2022, 11:50:51

@marc thank you!

user-f93379 27 April, 2022, 14:11:14

Are you planning to connect the Invisible to PsychoPy?

papr 27 April, 2022, 17:05:56

Currently not. PsychoPy experiments are usually screen-based, and to estimate gaze in screen coordinates in real time we need marker mapping. But this feature is currently not available (in real time) for Pupil Invisible. We do have an integration for Pupil Core, though, which is suitable for this kind of controlled, lab-based experiment.

user-b811bd 28 April, 2022, 12:52:55

Hello friends, I have some issues with my recordings:
1. Are the recordings time sensitive, or are there any space limitations for the videos other than the phone itself? When I recently tried to make a recording, although I had the color trail the whole time, my video did not record completely!
2. For audio output, I don't have any audio after I use Pupil Player. Is that normal?
3. The main concern is with Pupil Player. I try to export the merged video, and I can usually see it while it's playing in the player. But when I try to download it, I get an mp4 file that cannot be played, or when I push the download button it says the 2D or 3D data for visualization is missing!

papr 28 April, 2022, 12:55:03

Could you let us know which Companion app and Pupil Player versions you are using?

papr 28 April, 2022, 12:54:08

~~Hey, are you using the iMotions exporter by any chance?~~ This should not be possible for Invisible recordings.

user-b811bd 28 April, 2022, 12:54:36

No

user-b811bd 28 April, 2022, 12:55:26

Player is v3.5.1

user-b811bd 28 April, 2022, 12:55:44

My app is 8 I believe

papr 28 April, 2022, 12:56:10

Do you mean 0.8?

user-b811bd 28 April, 2022, 12:56:24

Yes, the newer one

user-b811bd 28 April, 2022, 12:56:36

That is much easier to use

papr 28 April, 2022, 12:58:29

Could you please share a recording (not the export/download) that is supposed to have audio with [email removed]?

user-b811bd 28 April, 2022, 12:59:58

Sure

user-e7cf71 28 April, 2022, 14:57:32

Hey, I have a question regarding the PI timestamp. Does it denote the time the image was taken or the time it was received on the mobile device?

papr 28 April, 2022, 15:42:56

It is the reception timestamp minus a fixed amount to correct for the transmission time.

user-e7cf71 29 April, 2022, 07:23:08

Thank you!

End of April archive