Hi! I'm pretty new to Pupil Labs and an intern at a research facility. My advisor said that I should learn how to work with Pupil Labs. We're already having trouble viewing/playing the recordings in Pupil Cloud, although it seemed to work some time ago. Any tips?
Hi @user-5a6ece! Please refresh the page and then try playing the recordings again.
Hello, I have an issue with Pupil Invisible. I don't see the gaze/red circle anymore after the recording and during the calibration. Why? How can I fix it? thanks
@user-dd0253 in the first instance, please try logging out and back into the Invisible Companion App. If that doesn't work, reach out to info@pupil-labs.com
Hi @user-092531! Welcome to the Pupil Labs community! Replying to your message about eye tracking underwater: https://discord.com/channels/285728493612957698/285728493612957698/958790379539415040 This would be a fantastic development! What product are you currently prototyping with? Would it be possible to see an image of the underwater hardware you are developing? You mention a "lens over the camera as part of the water proof enclosure". Is this completely sealed and watertight?
It works now! Crazy, may I know what the issue was?
Good to hear it's working for you. There was a minor UI error that we just resolved.
Hi Richard, we are doing the development with the pupil core. Can't give you a photo of the hardware just yet but will be happy to once it's finished. It is completely watertight.
Thanks for the info! Lots of things to consider. But as a starting point, Pupil Core connects to a laptop via USB. Do you have a plan for sealing/water-proofing the whole system?
Hi @user-1abb3f Replying to your message from the Core channel here https://discord.com/channels/285728493612957698/285728493612957698/959555127507828826 Pupil Invisible connects to the OnePlus 8T mobile Companion Device that ships with the glasses. Please use the USB cable provided. The companion device runs the Invisible Companion App - which is exclusively used to record and/or stream gaze behavior. You cannot record data by connecting the glasses to a laptop.
Thank you so much for your help! There's no error during the run now, but I still have an issue: I can't output the full-length video after running the code. Is there a suggested way to deal with this situation?
Can you confirm that the visualization works correctly while the process is running, but that the generated video scan_path_visualization.avi does not play back correctly?
Hi, could you clarify what you mean by "I can't completely output the full-length video"?
https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/ has a couple of broken links in it, just in case it's not known.
https://docs.pupil-labs.com/invisible/how-tos/tools/monitor-your-data-collection-in-real-time https://github.com/pupil-labs/pupil-docs/src/invisible/how-tos/integrate-with-the-real-time-api/introduction/ https://docs.pupil-labs.com/invisible/how-tos/applications/track-your-experiment-progress-using-events
Thanks a ton for reporting those @user-1391e7! I have fixed them just now and the new version should be online in a few minutes. It may require a hard refresh of the browser window for them to show up (Ctrl+Shift+R).
thank you!
Hi everyone, my university got Pupil Labs Invisible for a scientific paper in which we compare the time needed to shift gaze from one area marked by ArUco markers to another. Since I am relatively new to working with eye tracking, my question is whether there is already an applicable script for this, or one that could be built upon.
Hi @user-f408eb! Have you seen the Marker Mapper enrichment in Pupil Cloud yet? It uses AprilTags rather than ArUco markers, but it is built for exactly this purpose: tracking surfaces with markers and mapping gaze onto them. The export of that enrichment should make it easy to detect and time transitions.
https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper
Thank you very much! That really helps our study. I will look into it and contact you again if there are any issues.
Thank you for your reply. The situation is: the full recording is 15 min, but the output AVI is only 6 min.
Would you be able to share the raw recording with [email removed]? This way we can debug the recording directly.
@nmt Hi again Neil. Accurate eye tracking is what we need to make our treatment of eye cancer non-invasive. We currently have 4 cameras directed at the patient during treatment. Do you think we can use our information in combination with your network for the estimation of gaze angle?
Hi @user-3d239a! The gaze estimation network is specifically trained for the eye camera angles supplied by the Pupil Invisible glasses and is unfortunately not easily transferable to other cameras. A change in image appearance generally makes a major difference for machine learning models.
I see. Integrating your network with our cameras would require some modification of the network, correct? Is there any way we could get a look at your network? We might be able to assign some professionals to adapt your network to our needs, given the opportunity.
No, unfortunately we cannot release the full details of the network and training procedure we use. The required effort for these changes would also be tremendous. Depending on how the camera angles change, significant modifications may be necessary. The network would also have to be trained from scratch with a huge dataset that would first need to be made available. For most research projects such changes would be far out of scope.
Is there a way to use the pupil invisible for multiple hours? For example, is it feasible to use a USB-C splitter to charge the phone and have the eye tracker connected?
Yes, this is possible using a powered USB-C hub! The hub you use must be OTG compatible and even then some hubs are not compatible with some versions of Android. But we have tested this successfully with e.g. this hub: https://www.amazon.de/gp/product/B08HZ1GGH9/
awesome, thanks!
Hi all - has this ever worked for anyone? I tried this exact device, but unfortunately it didn't work with my Invisible (device: OnePlus 6, Android 8.1.0). The first problem was that this USB-C hub does not provide another USB-C port besides the one for the charger, so I guess one would need another adapter to connect the glasses? But even using a USB-C to USB 3 adapter, it didn't work. Do you have any idea?
Hi. When I use the raw data export on a recording made with Pupil Invisible and look at the gaze.csv file, I find that some values of gaze x [px] and gaze y [px] are negative. Why does that happen? I am looking at https://docs.pupil-labs.com/invisible/explainers/data-streams/#gaze for the coordinate system.
Hi @user-413ab6! The field of view of Pupil Invisible is quite large, but some subjects manage to look at things outside of it.
Pupil Invisible is still able to track their gaze to a small degree outside of the scene image, which leads to gaze coordinates outside the bounds of the scene video. Those could be negative values, or values larger than 1088 on the other end.
This margin is relatively small; you should not see values more than 100-200 pixels outside of the image bounds, and only if subjects actually look that way.
Hi Marc. My minimum value was -101.9. Is this normal? I tried manually drawing the marker on the scene video using OpenCV but it doesn't seem to work. Should I share my data/code?
Also, the gaze seems to not be detected correctly. There seems to be a constant upward and leftward displacement. Is there something to be done for this? I can share the video to show you what I mean
Hi, is it possible that you set up offset correction for a previous subject/wearer?
I have not set any such offset and I have been observing it for multiple users
I have not done it. But perhaps doing it might solve my problem? How to set up offset correction?
@user-413ab6 In that case it would be great if you could share a recording with [email removed] Then we can take a look if everything looks ok and if offset correction might be a solution!
Sure, will do. Thanks!
Hi @user-413ab6. Thanks for sharing the recording! It looks like there is a fairly constant bias in the gaze prediction. In this case, we'd recommend using the offset correction feature. To use this, open the live preview in the Companion app, instruct the wearer to gaze at a specific object/target, then touch and hold the screen and move your finger to specify the correction. The offset will be applied to all future recordings for that wearer, until you change or reset it, of course. Note that the offset is most valid at the viewing distance it was set at. E.g. like in the case of stationary subjects gazing at a marker board.
Thank you! That worked. Can you also help me with my other query? (the code)
@user-413ab6 please also have a look at this message for an overview of the causes of bias: https://discord.com/channels/285728493612957698/633564003846717444/801049310866440212
Hi, I would recommend discarding these out-of-bounds data points from your visualization.
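For anyone looking for a concrete starting point, here is a minimal pandas sketch for discarding out-of-bounds gaze samples. The column names ("gaze x [px]", "gaze y [px]") and the 1088x1080 scene resolution are assumptions based on the raw data export discussed above; adjust them to your own files.

```python
# Minimal sketch: drop out-of-bounds gaze samples from a Pupil Invisible raw export.
# Column names and the 1088x1080 scene resolution are assumptions; adjust if your
# export differs.
import pandas as pd

SCENE_WIDTH, SCENE_HEIGHT = 1088, 1080

gaze = pd.read_csv("gaze.csv")
in_bounds = (
    gaze["gaze x [px]"].between(0, SCENE_WIDTH - 1)
    & gaze["gaze y [px]"].between(0, SCENE_HEIGHT - 1)
)
gaze_filtered = gaze[in_bounds]
print(f"Kept {len(gaze_filtered)} of {len(gaze)} samples")
```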
Sure, that's what I am doing now. But the markers I have drawn are not showing the gaze correctly. (I shared my code in the mail) I think I am making some basic silly mistake
Yes, I have checked, thanks. I notice that some users don't face this issue. For me, the offset is large. Then again, I have nystagmus and a host of other eye problems. That may be affecting it!
Thanks for sharing that feedback. I'm glad to hear that it worked for you. In the recording that you shared, the gaze estimation looks pretty consistent even at the different viewing distances you recorded.
We need urgent help. We are collecting data using Pupil Invisible. It was working perfectly, but today it suddenly stopped recording. Here is the scenario: we started the recording, it recorded for approximately one minute, then it stopped working and gave the following error (please check the screenshot). Here is what we tried: we uninstalled and reinstalled the Pupil Companion app, restarted the device a couple of times, and disconnected and reconnected the Pupil Invisible glasses multiple times. Could you please let me know what the issue is? We had to stop the data collection session for this reason. As we are in the middle of collecting data, any urgent help is appreciated. Thank you very much.
URGENT
Hi @user-d1072e Confidence values from 0.0-1.0 reflect the certainty of pupil detection for a given eye image with Pupil Core. Pupil Invisible, in contrast, does not rely on Pupil Detection. The eye cameras are off-axis and for some gaze angles, the pupils aren't visible. This is not a problem for the gaze estimation pipeline as higher-level features in the eye images are leveraged by the neural network.
@user-d1072e That said, Pupil Invisible provides a boolean estimate whether the glasses are being worn or not for each gaze sample. When opening an Invisible recording in Pupil Player, these booleans are converted to 0.0/1.0 confidence values respectively.
Ah OK, it makes sense now. Thank you both a lot for the explanation
Hello, in Pupil Cloud, after creating a new project and a new enrichment, the enrichment does not load; the blue bar remains white (even after several hours for a 10-minute video). Do you know how to solve this problem? Thank you in advance.
Could you please dm me the enrichment id/name or the email used for Pupil Cloud so that we can debug the issue further
Hi. I would like to record the GPS data in Invisible Companion. This doc https://docs.pupil-labs.com/invisible/explainers/data-streams/#gps says that it is there in the settings, but I can't find the setting. Can you help me?
Good evening, I'm working on integrating a pupil invisible eye tracker with eeg (brainvision liveamp). I've found that LSL might be the way to go for time sync and centralizing data gathering. I am using this:
https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/
however, the device is not found. I have the oneplus8 device connected to the same wifi network, the companion app is open and glasses are plugged into the phone. I've verified my connection works as I am using a code example related to extracting surface gaze data in realtime. Is there something I'm missing in my configuration to connect PI tracker to LSL? Thank you in advance.
Hi, happy to see that this new package is receiving some traction already! Can you clarify which of the two points below is accurate? 1) The Companion device is not listed during the device selection process of the LSL relay 2) There is no PI LSL stream being listed in your LSL Recording software
Unfortunately the docs are a bit ahead of their time. This section was written while working on the recent doc update, but the feature is not yet implemented in the app and we mistakenly released the docs for it! It is on the roadmap, but an app update for this is not planned within the next couple of weeks. I know there are 3rd-party apps that can track GPS on the phone, but I can't recommend a specific one.
Sorry for the inconvenience! I will remove the GPS section from the docs now!
Aww. Thanks for the update!
@marc hello again! According to the documentation, gaze data that is not within the boundaries of the markers is not interpreted in x/y coordinates. Is it really possible to export the time span needed for transitions from one area defined with markers to the other? Also, does the Invisible support ArUco markers?
Hi! I am not sure what exactly you mean by "not interpreted in x y coordinates". Could you point me to the documentation you are referring to? Gaze in surface coordinates is always given in normalized "surface coordinates". See here: https://docs.pupil-labs.com/invisible/explainers/enrichments/#surface-coordinates
If gaze is outside of a localized surface, it will be reported with coordinates outside the normalized range of (0,0) - (1,1).
ArUco markers are not supported, only AprilTag tag36h11.
Hello @papr, item 1) is what I'm having an issue with.
Thanks for the clarification! Could you tell us a bit about the network that you are using? Is it a private or a public one, e.g. university?
Sure, it's my private (home) network.
And just to be sure, hitting Enter to refresh the list does not show the device, correct?
That's correct
Unfortunately, we do not have any logs in the relay yet. To debug this, we will need to run some custom code. Could you please run this example in your terminal? https://pupil-labs-realtime-api.readthedocs.io/en/latest/examples/async.html#device-discovery
I would need you to add the following lines at line 25 (after the if __name__ ... part):
import logging
logging.basicConfig(level=logging.DEBUG)
and share the logs printed to the terminal with us.
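For reference, a sketch of what the modified script might look like (this is not the verbatim docs example). It assumes the async discovery helper is discover_devices() from pupil_labs.realtime_api.discovery:

```python
# Sketch of the device-discovery example with debug logging enabled.
# discover_devices() and its timeout_seconds argument are assumed from the
# realtime API docs linked above; follow the docs example if it differs.
import asyncio
import logging

from pupil_labs.realtime_api.discovery import discover_devices


async def main():
    # Print every device announced on the local network for ~5 seconds.
    async for device_info in discover_devices(timeout_seconds=5):
        print(device_info)


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)  # the lines papr asked to add
    asyncio.run(main())
```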
Thanks, I'll post the output
This is what I have:
DEBUG:asyncio:Using proactor: IocpProactor
Looking for the next best device...
DEBUG:pupil_labs.realtime_api.discovery:ServiceStateChange.Added myrouter._http._tcp.local. None
All devices after searching for additional 5 seconds: ()
Starting new, indefinitive search... hit ctrl-c to stop.
DEBUG:pupil_labs.realtime_api.discovery:ServiceStateChange.Added myrouter._http._tcp.local.
And please make sure to use the latest app release.
Mmh, can you ping pi.local? Not sure why the auto discovery is failing. We will add an option to manually set a device soon.
Ah, that's exactly what I meant. Please excuse the confusion. So if the duration of the gaze transition from one surface to the other is to be determined, is this possible with the enrichment? Even if gaze between the surfaces is reported with values outside the range?
Thank you very much!
You could simply remove all samples of mapped gaze that are outside of the surface's range. There is an extra column called gaze detected on surface that encodes exactly this. Then you would have to detect the gaze entry and exit points in time in the remaining data and compare them to one another to calculate the transition durations!
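A minimal pandas sketch of that idea, using a single Marker Mapper gaze export. The column names ("timestamp [ns]", "gaze detected on surface") are assumptions based on the export described above; for two surfaces you would repeat this per export and match exit times from one surface against entry times of the other.

```python
# Sketch: time the gaps between "on-surface" gaze periods in a Marker Mapper export.
# Column names are assumptions; check the headers of your own gaze.csv.
import pandas as pd

gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")
on_surface = gaze["gaze detected on surface"].astype(bool)

# Detect state changes: first off-surface sample (exit) and first on-surface sample (entry).
change = on_surface.ne(on_surface.shift())
exits = gaze.loc[change & ~on_surface, "timestamp [ns]"]
entries = gaze.loc[change & on_surface, "timestamp [ns]"]

for exit_ts in exits:
    later_entries = entries[entries > exit_ts]
    if not later_entries.empty:
        duration_s = (later_entries.iloc[0] - exit_ts) * 1e-9  # ns -> s
        print(f"Transition took {duration_s:.3f} s")
```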
That did it...I guess I didn't have auto-update on the companion app. Eek. The device is found now - thank you so much @papr
Happy to hear that this was solved so easily. Props to @user-b506cc for noting that this might be the issue!
Hi there, I have a question regarding saccades please. I am using Pupil Core to record eye movements and am interested in the saccade count (number of saccades) and saccade duration. I have read that I can consider the time between two fixations to be a saccade - but what about blinks? How can I take those into account / out of the equation when calculating my saccades? Thanks in advance. Joanna
Hi @user-ee70bf! With Pupil Core this is a bit more complicated than with Pupil Invisible:
- Pupil Player does have a blink detection algorithm which you can use to detect blinks. It is less reliable, though, than the algorithm for Pupil Invisible.
- The fixation detection algorithm does not currently take blinks into account, and thus blinks may lead to false detections of fixation endings during the blink.
- The fixation detection algorithm of Pupil Core also does not compensate for head movements, so for robust detection a setting with little head movement is required. Otherwise small VOR movements will lead to false detections.
- There is no detection of smooth pursuit movements, so scenarios that include a lot of those will also lead to false detections.
Besides those points though, one can consider all gaze samples that do not belong to a fixation to be saccade samples.
> The fixation detection algorithm does not currently take blinks into account, and thus blinks may lead to false detections of fixation endings during the blink.
Pupil Core's fixation detector only uses high-confidence data for the classification. Confidence typically drops during blinks. Therefore, one can argue that the algorithm handles blinks implicitly. The argument that Core's fixation detector is less reliable compared to the one employed by Pupil Cloud still stands.
Thanks @marc for all of those clarifications! This is much clearer. So I do not need to subtract the blink durations to calculate my saccades?
You're welcome! As @papr mentioned, the blinks should actually already be compensated for in the fixation detection (my initial statement was a bit inaccurate), so no subtraction should be necessary. The potential issue is that some blinks might not get detected and thus not compensated for. This could only be fixed by manually annotating the blinks.
@user-ee70bf Short follow up: @marc and I discussed some edge cases, e.g. Fixation -> Blink -> Fixation. Usually, these will be merged into one fixation. It is possible though that this is classified as two separate fixations. ~~Therefore, our recommendation would be: Count all sections between fixations as saccades that do not contain a blink~~. This would rely on following @nmt 's recommendations above.
@user-ee70bf to add to these points, if you go down the route of classifying saccades based on inter-fixation duration, you will need to consider a few settings within Pupil Player, as these will affect outcomes such as saccade number and duration:
1. Fixation filter thresholds
These thresholds will have a significant influence on the number and duration of fixations classified (and thus saccades). You can read more about the fixation filter here: https://docs.pupil-labs.com/core/terminology/#fixations and setting the thresholds here: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds
2. Blink detection thresholds
You can check the quality of blink classifications within Pupil Player, and adjust the detection thresholds where required. It's worth reading more about how the blink detector works here: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector and setting the thresholds here: https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds
(This also assumes that one is not able to blink during a saccade. @nmt do you have further insight into the correctness of this assumption?)
Incorrect assumption. Blinks can occur with saccades for a variety of reasons.
Thanks for the correction! So, assuming we have this case: Fixation -> Saccade -> Blink -> Saccade -> Fixation. Would you count this as one saccade? If yes, would you subtract the blink duration from the saccade duration? Or more interestingly: What would be your approach regarding handling blinks between fixations?
I think this is totally context dependent. Blinks accompanying saccades can be, e.g. reflexive (due to some perturbation), voluntary, and more commonly evoked by larger shifts of gaze (such as when turning to look at something). In a controlled environment with minimal head movements, you're less likely to find blinks accompanying saccades. In this case, I'd be more willing to rely on the inter-fixation duration. In more dynamic environments, you're more likely to encounter blinks. Things become more difficult and manually inspecting the data is important. So to answer your question, you'd have to assume that to be one saccade. I wouldn't subtract the blink duration from the saccade duration. But really who knows. Maybe there were two saccades.
Thank you very much for your insight! @user-ee70bf Does this give you enough input to continue your work?
@nmt @papr Thanks very much for all your quick and precise feedback. My experiment relies on a very controlled environment, in which participants are asked to recall personal memories in front of a white wall. Therefore, fixations do tend to be longer, and I have trouble finding a reference in the scientific literature to help me set my minimum and maximum fixation durations... As you suggested, I am relying on inter-fixation duration to calculate saccade duration (i.e., timestamp of fixation 2 - (timestamp of fixation 1 + duration of fixation 1) = duration of saccade 1). However, when I calculate this, my saccade durations range from 18 ms to 1400 ms for the same participant, and the higher values seem to coincide with the moments where blinks appear, which is why I was surprised that blinks were taken into account within the fixations! Maybe my settings are wrong... But then again, if I consider that blinks can fully be part of saccadic movements, then it should be okay? In other words, I am now trying to understand the best practices for choosing fixation thresholds for a neutral environment (white wall) and an autobiographical memory task.
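As a practical companion to the inter-fixation formula above, here is a small sketch that computes saccade durations from a Pupil Player fixation export and flags gaps that overlap a detected blink. The column names ("start_timestamp" in seconds, "duration" in ms, blink "end_timestamp") are assumptions; check your own export headers.

```python
# Sketch: saccade duration = gap between consecutive fixations, flagging gaps that
# overlap a detected blink. Column names are assumptions about Pupil Player's
# fixations.csv / blinks.csv exports.
import pandas as pd

fix = pd.read_csv("fixations.csv").sort_values("start_timestamp")
blinks = pd.read_csv("blinks.csv")

fix["end_timestamp"] = fix["start_timestamp"] + fix["duration"] / 1000.0  # ms -> s

for prev, nxt in zip(fix.itertuples(), fix.iloc[1:].itertuples()):
    gap_start, gap_end = prev.end_timestamp, nxt.start_timestamp
    saccade_duration = gap_end - gap_start
    overlaps_blink = (
        (blinks["start_timestamp"] < gap_end) & (blinks["end_timestamp"] > gap_start)
    ).any()
    print(f"{saccade_duration * 1000:.1f} ms  overlaps blink: {overlaps_blink}")
```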
Hi there, I have a question regarding the remote connection please. I'm using Pupil Invisible and I need to start the data recording from Matlab, using the remote connection. I found this https://github.com/pupil-labs/pupil-helpers/tree/master/matlab, but I have a big problem with the zmq package. Can you help me? Thanks in advance. E
Hi, the linked package only works for the Pupil Core network API. We currently do not offer any Matlab client implementations for the Pupil Invisible network API. You can read more about it here https://github.com/pupil-labs/realtime-network-api and here https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html
That said, can you tell us a bit more about your use case? What is your motivation to process the eye tracking data in realtime in Matlab?
OK, clear, thank you! I need to do an experiment on product packaging using Eye tracking and EEG. Since the researchers I am working with created the project on Matlab, I wanted to add Pupil Invisible. If you need more details or if I didn't explain myself well, I am here.
So, if I understand it correctly, you want to do the analysis in Matlab. My question would be if the data collection needs to happen in real time? The recommended workflow would be to upload the recordings to Pupil Cloud, apply enrichments, e.g. our reference image mapper, and then download and process the CSV export. (See https://docs.pupil-labs.com/invisible/explainers/enrichments/ for the possibilities)
Yes, using Python to receive the data in realtime and forwarding it to Matlab is possible. But you will need to perform the full analyses on your own.
So with Python I can get the eye tracking data in realtime? Because I could use Python and afterwards call it from Matlab.
What type of metrics/data are you planning on analysing?
Yes, I wanted to record the data in Matlab in real time. I need to do an experiment where participants observe images for a few seconds on a PC monitor, and I want to record their eye movements during the observation. I would like to have a CSV file with the fixation coordinates and timestamps, without recording the video and uploading it to the Cloud.
You can use the realtime Python API to receive the scene video and use this Python module to do the AOI tracking: https://github.com/pupil-labs/surface-tracker. Unfortunately, this package is missing some good examples and documentation.
So you are interested in pc monitor coordinates, right? For that, you will need to process the scene video in any case because the gaze is estimated in scene camera coordinates. Probably something similar to the marker mapper enrichment but in realtime.
Yes exactly, I'm interested in the coordinates of the PC monitor. Do you recommend that I record the video and then upload it to the Cloud? The only doubt I had was about the timestamps. If I could start Pupil Invisible from inside Matlab, the timestamps would be consistent. If I start the recording separately, there are seconds of difference between my project in Matlab and the video recording.
You can only upload to cloud via the app. Pupil Invisible records everything in unix epoch. You can time sync everything via the system provided network time sync (via NTP).
Ok thank you so much for the valuable information and the link you sent me, I will look into it. If I have other doubts I will write to you.
We will need to work on an example anyway. I might try to find some time for that next week. What does your timeline look like?
Yes, thank you! You would be doing me a great favor! I wanted to start my experiment in June. Let me know if it's too soon, please.
That is plenty of time!
Talk to you soon, thanks!
You might want to start thinking how you want to get the data from Python into your Matlab. Easiest would be storing the data into a file and reading that post-hoc into Matlab.
Hmm, okay, but I would want the Pupil data recording to start when I start the experiment in Matlab. Do you still recommend that I record in Python and then read them into Matlab? Sorry maybe I didn't understand that.
You can do that via the Python client. Maybe this is an option: https://uk.mathworks.com/products/matlab/matlab-and-python.html
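A minimal sketch of the Python side of that workflow: receive realtime gaze from Pupil Invisible and append it to a CSV that Matlab can read post-hoc. It assumes the simple client of pupil-labs-realtime-api (discover_one_device / receive_gaze_datum) and the gaze field names shown in the comments; verify them against the package's documentation.

```python
# Sketch: write realtime Pupil Invisible gaze to a CSV for post-hoc use in Matlab.
# Field names (x, y, worn, timestamp_unix_seconds) are assumed from the simple API.
import csv
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # phone and computer must be on the same network
print(f"Connected to {device}")

with open("gaze_for_matlab.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_unix_seconds", "x_px", "y_px", "worn"])
    t_end = time.time() + 10  # record for 10 seconds as a demo
    while time.time() < t_end:
        gaze = device.receive_gaze_datum()
        writer.writerow([gaze.timestamp_unix_seconds, gaze.x, gaze.y, gaze.worn])

device.close()
```

In Matlab, the resulting file can then be loaded with readtable("gaze_for_matlab.csv").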
Ok thank you very much, I will look into it.
Is it still true that the IMU and gaze data are subject to non-constant temporal offset? There is latency between the two by an amount that changes over time?
If I remember correctly the issue was improved in v1.2.2
Hi Guys, we are trying to sync data from Azure Kinect and pupil invisible. Could you please suggest what is the best way to do that? Any pointer is highly appreciated. Thanks.
I don't know about best, but would writing a piece of software, which receives sync events via the network api suffice? https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/
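One hedged sketch of how such a sync could look: send a timestamped event to the Companion app at the moment the Azure Kinect capture starts, so the two recordings can be aligned post-hoc. It assumes the simple realtime client's send_event(); the event name and the Kinect-side timestamping are placeholders for illustration.

```python
# Sketch: send a sync event to the Companion app when the Kinect capture starts.
# send_event() is assumed from pupil-labs-realtime-api's simple client.
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# ... start the Azure Kinect capture here ...
kinect_start_ns = time.time_ns()  # both clocks should be NTP-synced

device.send_event("kinect.capture.start", event_timestamp_unix_ns=kinect_start_ns)
device.close()
```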
Hi @user-ee70bf This sounds like an eye movement reinstatement paradigm, where fixation-saccade sequences are specifically reproduced when recalling a past experience from episodic memory. If so, I am wondering if analysis of saccades is even necessary. Typically it is the relative direction of eye movements between fixations which is important. See this paper: https://psycnet.apa.org/doi/10.1037/a0026585.
For more info on saccades in mobile eye tracking compared to other contexts, I can highly recommend this article: https://royalsocietypublishing.org/doi/10.1098/rsos.180502
Thanks very much Richard for those interesting articles. In this paradigm, I haven't got access to the eye movements that were produced during the encoding of the memory, so I am not comparing encoding and recall. I will look into the importance of saccades... Typically, in this new field of research (eye movements and autobiographical memory), it is considered that saccades enable us to search images through our visual system and fixations enable us to activate and explore those mental images. But the more I read on the subject (I am a first year PhD student), the more I discover that depending on the field of study, we use very different variables (AOIs, fixations, saccades...)
We have a Pupil Invisible which is currently running into problems. The glasses are overheating near the connection of the world camera. We suspect there is a short circuit inside. Please advise on how to fix this. At the moment we do not dare to open the glasses ourselves (yet) for inspection.
Hi guys, we recorded approx. 600 videos, which are uploaded to Pupil Cloud. We need to download the PI World Video (egocentric) for all of these recordings. We can download each recording individually and extract the zip file to get the egocentric video data, but this approach is very time-consuming. Is there an easier way to download all the egocentric video data from the Cloud recordings? Thanks.
Hi @user-5b0955! There are two more convenient options:
You could create a project with all of those recordings. The creation process would still be tedious, but using the raw data exporter enrichment you could then download the recording data including the scene video in a single download.
You could use Pupil Cloud's API to download those recordings programmatically. This would require writing a script, and you would need a suitable strategy for identifying those recordings (e.g. their names follow a specific scheme, or they share recording dates). If this might be of interest to you, I can give you details on how that would work.
I think downloading via the API will be more convenient for us, since I think we would still need to add each video manually to the project. Could you please provide resources on this? Suggestion: I think it would be great if the Cloud had a feature to select and download a set of recordings.
> Suggestion: I think it would be great if the Cloud had a feature to select and download a set of recordings.
This feature exists already: it is possible to select all the recordings on a page at once by clicking the first in the list, holding Shift, and clicking the last. Ctrl-click can be used to select/deselect individual recordings from that selection.
Thanks a lot for the feedback! I can definitely see how this would be very helpful for use-cases like yours! I have asked the engineering team to provide a minimal example of how to achieve this with the API and will forward it to you asap. This may take until tomorrow.
Thank you very much!
Hi! My question concerns timestamps. I am conducting a motion capture (Xsens IMUs) + eye tracking (Pupil Invisible) study. I need to sync the data from both systems, which is not trivial. My plan was to match the acceleration recorded from Xsens' head IMU with the glasses' IMU. My experiment is on a trampoline, so I have a pretty good idea of the shape of the acceleration profile. Unfortunately, it seems like Pupil's gaze and acceleration data are not synced? Is it possible to know how the timestamps for the IMU in the glasses are assigned?
There is a known issue in older app versions (< 1.2.2) where these two data streams were out of sync for some subjects. In what stage of the study are you at the moment?
Oh no! I have collected 16/20 subjects. Is it a fixed lag or an unknown lag?
Unfortunately, at the time, the lag was not fixed and even drifted over time.
Hello, we are collecting data while our participants are using a touchscreen in a dark environment but the resolution of the world camera on Invisible is not enough to clearly see the components on the screen. Is it possible to adjust the camera settings so that the texts can be read clearly or can we connect an external world camera to Invisible? If yes, do you have any recommendations? We got similar results with Core too. Thanks in advance.
You could also try setting the scene camera to manual exposure in the settings and optimizing that value for the best exposure of the screen. This does not increase the resolution but would remove the effects of over-exposure of the screen.
If you find an external camera and a software that supports video capture via labstreaminglayer (LSL) you could use it in combination with our PI LSL Relay https://pupil-invisible-lsl-relay.readthedocs.io/en/latest/. It emits events to LSL that allow you to synchronize the external camera feed with the Pupil Invisible recording post-hoc.
Thanks for your answer @papr. You can understand that I am really disappointed. If I understand well, there is nothing we can do about it?
There would be a way to correct for that, but it requires some coding. One could calculate the scene video's optical flow, which correlates highly with the IMU data. Then you can use the two streams to find the offset and compensate for it.
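A rough sketch of that approach, under stated assumptions: estimate per-frame scene-camera motion via dense optical flow, then cross-correlate it with the IMU gyro magnitude to find the lag. File names, IMU column names, and the sign interpretation of the offset are placeholders for illustration, not the Pupil Labs pipeline.

```python
# Rough sketch: estimate the temporal offset between scene video and IMU data
# by cross-correlating optical-flow magnitude with gyro magnitude.
import cv2
import numpy as np
import pandas as pd

# 1. Per-frame optical flow magnitude from the scene video (downscaled for speed)
cap = cv2.VideoCapture("scene_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
prev, flow_mag = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(cv2.resize(frame, (272, 270)), cv2.COLOR_BGR2GRAY)
    if prev is not None:
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flow_mag.append(np.linalg.norm(flow, axis=2).mean())
    prev = gray
cap.release()
flow_mag = np.asarray(flow_mag)

# 2. IMU rotation-speed magnitude, resampled to the video frame rate (assumed columns)
imu = pd.read_csv("imu.csv")
gyro = np.linalg.norm(imu[["gyro x [deg/s]", "gyro y [deg/s]", "gyro z [deg/s]"]], axis=1)
imu_t = (imu["timestamp [ns]"] - imu["timestamp [ns]"].iloc[0]) * 1e-9
gyro = np.interp(np.arange(len(flow_mag)) / fps, imu_t, gyro)

# 3. Offset = lag that maximizes the cross-correlation of the normalized signals
a = (flow_mag - flow_mag.mean()) / flow_mag.std()
b = (gyro - gyro.mean()) / gyro.std()
lags = np.arange(-len(a) + 1, len(a))
offset_s = lags[np.argmax(np.correlate(a, b, mode="full"))] / fps
print(f"Estimated IMU-to-video offset: {offset_s:.3f} s")
```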
Hi, I recorded data using Pupil Invisible and am now trying to analyze it. I am not sure how to start with the data analysis. I came across a video explaining how to use Pupil Player to create a heatmap, but when I try to create a surface it says that no markers are found, and I am not sure how to define the markers. I would appreciate it if someone could help me.
Hi, I recommend this section of our documentation to get an idea of how the different products (Pupil Invisible, Pupil Cloud, Pupil Player) are playing together https://docs.pupil-labs.com/invisible/getting-started/understand-the-ecosystem/ I would recommend having a look at Pupil Cloud and its enrichments specifically https://docs.pupil-labs.com/invisible/getting-started/analyse-recordings-in-pupil-cloud/
I have another question while we are at it. I have seen the notion of gaze angle appear multiple times. It is information that I need: I would like to know the orientation of the gaze in terms of angles (left/right, up/down) relative to the glasses' reference frame. I am guessing I could compute it from the gaze_point_3d, eye_center, and gaze_normal fields of gaze_position.csv (exported from the app). However, these fields remain empty after export. Am I doing something wrong? Or is there another way to access gaze angles?
gaze_point_3d, eye_center, and gaze_normal are Pupil Core-specific outputs. They are not available for Pupil Invisible. That said, it is possible to calculate the gaze direction (similar to gaze_point_3d) for Pupil Invisible. You can do so by undistorting and unprojecting the 2d gaze estimates using the scene camera intrinsics. I am working on an example that shows you how to do that.
Please see this example on how to get directional gaze estimates in spherical coordinates based on the 2d pixel locations estimated by Pupil Invisible https://gist.github.com/papr/164e310047f7a73130485694d037abad
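For orientation, here is a hedged sketch of the undistort-and-unproject idea (this is not the linked gist). The camera matrix and distortion coefficients below are placeholders; use the intrinsics of your own scene camera, and note that the angle conventions may differ from the gist's.

```python
# Hedged sketch: convert a 2d gaze point in scene-camera pixels into azimuth/elevation
# relative to the scene camera. camera_matrix / dist_coeffs are placeholder values.
import cv2
import numpy as np

camera_matrix = np.array([[766.0, 0.0, 544.0],
                          [0.0, 766.0, 540.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)                    # placeholder distortion

def gaze_px_to_spherical(x_px, y_px):
    # Undistort and normalize the pixel coordinate, then unproject to a 3d ray.
    pt = np.array([[[x_px, y_px]]], dtype=np.float64)
    nx, ny = cv2.undistortPoints(pt, camera_matrix, dist_coeffs)[0, 0]
    ray = np.array([nx, ny, 1.0])
    ray /= np.linalg.norm(ray)
    azimuth = np.degrees(np.arctan2(ray[0], ray[2]))     # + to the right of the camera axis
    elevation = np.degrees(np.arctan2(-ray[1], ray[2]))  # + above the camera axis
    return azimuth, elevation

print(gaze_px_to_spherical(544, 540))  # roughly (0, 0) at the assumed image center
```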
Thanks a lot!
Is it possible to get pupil data when using the Invisible eye tracker? Meaning, does the Invisible eye tracker also provide parameters like pupil diameter, similar to the Core device?
Hi, currently, Pupil Invisible is not able to provide pupillometry data. Due to the oblique camera angles, we do not get a good view of the eyes, making pupillometry very difficult.
Hi, I'm having an issue getting videos from the Pupil Companion app to the cloud. All videos stay at 0%, no matter how long I leave it. I have tried restarting the app and phone, have tried multiple WiFi and data connections, and nothing has worked. I could previously upload videos with no issue.
Hi @user-8cef73! Could you try logging out and back in to the app? You can log out via the settings view.
Thank you, that worked! I'm also having an issue where I'm getting notifications that I can't action, some of them saying there are errors with the glasses. Is there any other way I can action them? One is about calibration and the other is a recording error but the notification is cut off so I can't actually read what the issue is.
Could you provide a screenshot of those notifications?
Hi, we are researchers from a university in China. We want to buy some of your products, but we have some problems with import licensing and channels. Who can help?
Hi @user-1936a5! Please write an email to [email removed] The sales team will be able to help you!
@marc thank you!
Are you planning to connect Pupil Invisible to PsychoPy?
Currently not. PsychoPy experiments are usually screen-based, and to estimate gaze in screen coordinates in realtime we need marker mapping. This feature is currently not available (in realtime) for Pupil Invisible. We do have an integration for Pupil Core though, which is suitable for this kind of controlled, lab-based experiment.
Hello friends, I have some issues with my recordings: 1. Are the recordings time sensitive, or is there any space limitation, other than the phone itself, for the videos? When I recently tried to record, although I had the color trail the whole time, my video did not record completely! 2. For audio output, I don't have the audio after I use Pupil Player. Is that normal? 3. The main concern is with Pupil Player. I try to export the merged video, and I usually can see it while it's playing in the player. But when I try to download it, I get an mp4 file that cannot be played, or when I push the download button it says the 2d or 3d data for visualization is missing!
Could you let us know which Companion app and Pupil Player versions you are using?
~~Hey, are you using the iMotions exporter by any chance?~~ This should not be possible for Invisible recordings.
No
Player is v3.5.1
My app is 8 I believe
Do you mean 0.8?
Yes, the newer one
That is much easier to use
Could you please share a recording (not the export/download) that is supposed to have audio with [email removed]
Sure
Hey, I have a question regarding the PI timestamps. Do they denote the time the image was taken or the time it was received on the mobile device?
It is the reception timestamp minus a fixed amount to correct for the transmission time.
Thank you!