@user-acc6b6 If you haven't already, please get in touch with info@pupil-labs.com and we can follow up to help resolve any issues.
@user-acc6b6 Thanks for all the info. To add to what @wrp has said: it seems like there is a connection issue within your Pupil Invisible glasses. To get a repair going, please contact [email removed] referencing our conversation here.
Hi @marc, I want to know which IMU is used by Pupil Invisible. Can I get the head pose (IMU orientation) data in real time? If not, how can I calculate the head pose?
Hi @user-a98526! The IMU we use has the model number BMI160. It is a gyroscope and accelerometer, so it gives you the angular velocity and linear acceleration of the headset. This translates into relative measures of the head pose, but not absolute ones. For absolute ones you can use the head pose tracker plugin in Pupil Player, which is based on environment markers.
https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
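For illustration, here is a minimal sketch (not from Pupil Labs) of integrating gyroscope samples into a relative orientation estimate. Variable names are hypothetical, and this is a crude per-axis Euler integration that drifts over time, not the plugin's method:
import numpy as np

def integrate_gyro(gyro_deg_s, timestamps_s):
    # gyro_deg_s: (N, 3) angular velocity samples in deg/s
    # timestamps_s: (N,) sample times in seconds
    # Returns the accumulated rotation per axis in degrees, relative to the first
    # sample. Crude approximation: no quaternions, no drift correction.
    dt = np.diff(timestamps_s, prepend=timestamps_s[0])
    return np.cumsum(gyro_deg_s * dt[:, None], axis=0)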
@wrp @marc Thank you for your reply. I will try it.
Hi all!
I am trying to find some comparative measures of gaze-estimation accuracy between the Invisible and other eye tracking devices. From my understanding 0.5 degrees and below is recommended, while the gaze accuracy of the Invisible is reported at around 4 degrees. I understand that other devices are heavily constrained in their accuracy assessments, so they may not necessarily be comparable? Essentially, I just want to ensure that the data from the Invisible is high quality and on a similar level to other devices, and could use some reassurance π
Hi @user-1bfcbc! My biased opinion as a Pupil Labs employee on this is as follows π
For some experiments angular error below 1 degree is essential (e.g. reading studies). Obtaining this accuracy requires a very constrained environment for eye trackers from all vendors (e.g. limited subject head movement, controlled lighting). Such settings are typically used in accuracy evaluations (see related work in Pupil Invisible white paper). With Pupil Invisible this level of accuracy is currently not possible, but with Pupil Core it is.
If your application requires the subjects to be in a less restricted environment (e.g. they need to be able to move or be outdoors) it is generally not realistic to get errors below 1 degree. Depending on the environment you can either expect 2-5 degrees of error, or often the eye tracker might not work at all (especially outdoors). Pupil Invisible was designed for such challenging environments and achieves errors of 3-5 degrees in more or less any environment.
So if you need very high accuracy and can provide a constrained environment, Pupil Core or another traditional eye tracker is the best choice. If you can sacrifice a little accuracy (how much this matters depends highly on your application), you can take advantage of the increased robustness of Pupil Invisible to unlock recording environments that would otherwise not be possible. It's a trade-off.
Another aspect is the ease of use in not having to do a calibration, which simplifies the setup. This is useful for some applications.
Hi @marc
Thank you for your informative reply, it seems that I have a lot to consider!
Our participants will be older, so it is likely that their vision will be corrected with glasses. Could I adapt the Pupil Core to be worn over glasses?
@user-1bfcbc Yeah, it's not a totally straightforward decision. Regarding glasses and Pupil Core, the answer is a fuzzy "it works about half the time". You need to have the eye cameras look at the eyes without the frame blocking the view. Sometimes this works by having them look through the glasses or underneath the frame, but for some frames it is just not possible. Contact lenses are not a problem at all, however. With Pupil Invisible you could insert corrective lenses into the glasses.
How can I find information about how heatmaps are generated and interpreted? Is there a mapping from the colors to the duration of the gaze?
@user-98789c our documentation of that is definitely still lacking. It is computed with the following steps:
- Calculate a 2D histogram of all the included gaze points at 300x300 resolution.
- Apply a Gaussian blur according to the parameter set by the user.
- Apply the colormap chosen by the user. Except for the traffic light map, all color maps are standard maps that can be found in most tools/libraries.
The minimum/maximum of the colormaps thus correspond to the minimum/maximum bins of the histogram. We do not plot a colorbar yet, so for a more in-depth analysis it might help to generate the heatmap yourself from the CSV files.
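For reference, a minimal sketch of reproducing those steps from an exported gaze CSV (file and column names are assumptions; adjust them to your export):
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

# File and column names are placeholders; adjust to the actual export.
gaze = pd.read_csv("gaze_on_surface.csv")
x = gaze["x_norm"]  # normalized surface coordinates in [0, 1]
y = gaze["y_norm"]

# 1) 2D histogram of all gaze points at 300x300 resolution
hist, _, _ = np.histogram2d(y, x, bins=300, range=[[0, 1], [0, 1]])

# 2) Gaussian blur; sigma plays the role of the user-set smoothing parameter
blurred = gaussian_filter(hist, sigma=10)

# 3) Apply a standard colormap; the colorbar shows the min/max bin counts
plt.imshow(blurred, cmap="viridis", origin="lower")
plt.colorbar()
plt.savefig("heatmap.png")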
Hello! Is it possible to record GPS coordinates with the Invisible app?
If not, we could use a third party app to collect GPS, but then we would have to sync with the gaze data. As far as I can see, all timestamps in the CSV files are in time relative to the start of the recording, not in UNIX time or something we can match with another app.
Any suggestions? Thanks!
Hi @user-94f03a! Currently the app is not able to track GPS, so you would need to do that with another app. The timestamps used in raw Pupil Invisible recordings are originally UTC timestamps given in nanosecond integers. If you export the recordings to CSV using Pupil Player, they do however get converted to floating point seconds, with time 0 being the start time of the recording. The info.invisible.json file does contain the start time of the recording in UTC though, so using that you can convert the sensor timestamps back to UTC as well. UTC timestamps should allow you to sync with other services!
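For reference, a minimal sketch of converting exported timestamps back to UTC using that start time. The "start_time" field (assumed to be UTC nanoseconds) and the "gaze_timestamp" column name are assumptions; check your info.invisible.json and export:
import json
import pathlib

import pandas as pd

rec = pathlib.Path("path/to/recording")

# Assumption: the recording start time is stored under "start_time" as UTC nanoseconds.
info = json.loads((rec / "info.invisible.json").read_text())
start_ns = int(info["start_time"])

# Assumption: Player gaze export with a "gaze_timestamp" column in seconds relative to start.
gaze = pd.read_csv("gaze_positions.csv")
gaze["utc_ns"] = start_ns + (gaze["gaze_timestamp"] * 1e9).round().astype("int64")
gaze["utc_datetime"] = pd.to_datetime(gaze["utc_ns"])
print(gaze[["gaze_timestamp", "utc_datetime"]].head())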
Hello everyone! To my understanding, gaze data and world video recorded through Pupil Invisible are stored on the OnePlus device and uploaded to the cloud. Then, if one wants to get the data, one needs to 1) download the data from the cloud and 2) convert it through Pupil Player to obtain the world video and the gaze data. For our application this procedure is not ideal, as we would need to have world and gaze data straight away for quality checks. Could this data be sent in some way, ready to consume, to our local machine or to our cloud provider? Thank you very much!
Hello @user-0e7e72! There are a couple of different ways to access the data:
- Upload to Pupil Cloud for later download: That is the option you describe, but it is entirely optional. Data can also stay offline and local.
- Download directly from the phone: The data is stored on the phone like on a USB device, so it can be downloaded directly via e.g. USB.
- Real-time streaming: All data can be streamed within the local network, so it is available in real time. If this is of interest you might want to check out Pupil Invisible Monitor, but you could also stream directly to your own software. https://github.com/pupil-labs/pupil-invisible-monitor
We also offer an API to Pupil Cloud that allows you to download recordings programmatically, which might help streamline your setup.
Pupil Player is needed to convert raw data to CSV, but it is also possible to use the raw data right away (albeit a bit more technically involved). I can give you instructions if that is of interest.
Would one of those options work for your use-case?
Hi @marc, that's great, thanks for the detailed answer!
- Streaming the data directly would be very useful; unfortunately Pupil Invisible Monitor does not work on my MacOS Catalina. Anyway, how would you stream directly to your own software? What I have been doing is to open Pupil Capture and use this script https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py to stream the data to OpenCV. Is there a simpler way to stream while avoiding Pupil Capture?
- It would also be very useful to use the raw data right away; I'd be glad to get the instructions.
Thanks again!
@user-0e7e72 Regarding streaming please check out the documentation here: https://docs.pupil-labs.com/developer/invisible/#network-api
We are not yet aware of a problem between Pupil Invisible Monitor and MacOS Catalina. Could you give me some more detail on the issue you are having?
Hi Marc, I have checked the network API and I managed to get it up and running. The example provided for the gaze data works correctly, do you have by chance also an example on how to stream the world video data? Thanks a lot!
"unfortunately Pupil Invisible Monitor does not work on my MacOS Catalina"
Hey @user-0e7e72, to add to @marc's question, could you please clarify in what way Pupil Invisible Monitor behaves differently from your expectations? Also, could you please confirm you meant Catalina and not Big Sur, as Big Sur is currently not supported yet.
@user-0e7e72 Regarding the format of the raw recording please check out the following doc. It summarizes how the binary data is set up. I could also provide you with an example in Python on how to open the files if that is helpful. https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793
Thanks Marc, also the example would be useful!
I have checked again; it was simply blocked by the OS because it was not from a recognized developer
Ah, yes, this is indeed a known issue. Our next release of the app should not yield this error anymore.
Hi, I'm trying to use the ndsi package in Python 3.6.8. I used the .whl from GitHub to install the package and added the ffmpeg and libjpeg DLLs to my system path. Unfortunately, I still get this error:
Let's move that discussion to π» software-dev
@user-0e7e72 See this gist for an example of how to open raw PI recordings. Note that in raw recordings the sensor data may be split into multiple files. This happens whenever the sensor disconnects. Further, the timestamps and the actual data are split into separate files (ending with .time and .raw). These things make it a bit inconvenient to work with the raw data, but for just a quality check it shouldn't be too much of an issue. The gist shows how to open a single part of the sensor streams. Opening the videos is not demonstrated but can be done with any standard library (e.g. OpenCV).
https://gist.github.com/marc-tonsen/d230301c6043c4f020afeed2cc1f51fe
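For the videos, a minimal sketch using OpenCV (the filename is only an example of how raw Pupil Invisible video parts are typically named; adjust it to your recording):
import cv2

# Example filename; raw PI recordings typically contain parts like "PI world v1 ps1.mp4".
cap = cv2.VideoCapture("PI world v1 ps1.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video part
    cv2.imshow("world", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()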
@user-0e7e72 I have extended the example in the docs to also include world and eye video in the following gist: https://gist.github.com/marc-tonsen/93b4b1ff0756a5a4be2acdfea1862cd2
Great, thank you, amazing support!
You're welcome π
Hi, I was wondering whether there is experience with the Invisible system with children, say 5-10 years old or younger. Do they fit? Or do you have other options for research with children?
Hi @user-a50a12! Unlike the Pupil Core system, the Pupil Invisible system is not available in a children's size. We have not extensively tested the system with children, but have tried it with children down to an age of 3 years on a couple of occasions. The gaze prediction seems to work just fine on children, but the glasses of course sit very loosely on smaller children. This can be improved to some degree using the sports strap, which helps secure the glasses on smaller heads. I am afraid I cannot give you a recommendation for a minimum age given our limited tests. We do however have a 30-day return policy, during which you could test the product.
thanks a lot, that helps
Hi @marc, I want to know if Pupil Invisible can provide real-time fixation detection like Pupil Capture. Besides, is there any way to obtain pupil data in real time like with Pupil Capture, for example diameter_3d?
Hi @user-a98526! No, Pupil Invisible currently cannot do real-time fixation detection. Also, Pupil Invisible does not provide pupillometry data. Both of these are currently only available with the Pupil Core product.
Hi everyone, I have a question. My research group is doing research in medical field (endoscopy) and we might collect a lot of patient data with the glasses. Because of patient data protection we would like to know if we can store the data offline, on the phone, but not on the cloud.
Is there a way to go around the cloud and not use it at all?
Many thanks in advance. Boban
Hi @user-d33e76! The upload of recording data that might include patient data is 100% optional. The data can also be downloaded from the phone directly, so it never has to touch the cloud. The option to disable cloud uploads is presented during initial setup. However, a cloud account has to be created in the beginning to log in to the app. Also, some features are only available with cloud usage, e.g. the 200 Hz gaze signal: the phone itself can only provide ~45 Hz real-time gaze data, and the 200 Hz signal becomes available only after downloading a recording from Cloud.
Thank you very much for the info!
You're welcome @user-d33e76!
I can't install the Pupil Invisible Monitor Desktop App on my computer (Windows 10, 64 bit). Please advise. Thanks a lot.
Could you please elaborate on what behaves differently from your expectations?
The program just cannot be installed. The window stays grey, and the CMD's last 2 lines say:
2021-01-14 18:52:09 [DEBUG] (220c98) JOIN name=5edf1e group=pupil-mobile-v4
2021-01-14 18:52:09 [DEBUG] Dropping <pyre.pyre_event ENTER from 5edf1eda2ac64b73ae3d74beb24bb3ad>
So, the program does not have an explicit install procedure. You just start it as you did already. Can you share the pi_monitor_settings -> pi_monitor.log file with us so that I can have a look at the debug information?
Dear all, I tried to check the accuracy of the eye tracker (Invisible) with some of the measurements presented in Niehorster et al. (2020), "The impact of slippage on the data quality of head-worn eye trackers". I know that they used the Pupil Core, but I thought the Invisible should be equally good or even better. So a person is standing in front of the stimulus grid (distance 1.5 m), looking at different markers. For all test measurements, the coordinates of the fixations are far away from the markers. It looks like an offset, because it is mostly in the same direction and at some distance. Does anyone have tips on how I can improve data quality? Since one cannot calibrate the Invisible eye tracker, I do not know how to improve it.
The same with my glasses! I just wanted to ask the same.
@user-7a8fe7 @user-7c714e In case you have not seen it already, we have published an accuracy evaluation of Pupil Invisible here: https://arxiv.org/pdf/2009.00508.pdf While the peak accuracy possible with Pupil Invisible is lower compared to Pupil Core or other eye trackers, the robustness to slippage should be much better. So whatever predictions and errors you get should be unaffected by slippage.
Getting a largish constant offset in the predictions is a known phenomenon that can occur for 10-20% of subjects. Some physiological parameters of the eye (e.g. the offset between visual and optical axis) cannot be observed with the eye cameras. Subjects for whom those parameters lie far away from the population average are affected by the offset. Since this offset is constant and independent of the gaze direction, it can however be corrected fairly easily.
The Companion app offers a feature called "offset correction" to make this correction. To access it enter the live preview of the app, click and hold the screen and then specify the present offset by dragging your finger. I hope this helps, let me know if you have further questions!
Dear Marc, thank you so much for your answer. In my data, though, it isn't only 10-20% of the subjects but all subjects, who are, in my opinion, no special cases (young people, some with and most without contact lenses). Did I get it right that the offset correction is possible only before the measurement and not afterwards? At least I couldn't click and hold the screen while watching the recorded video in the app.
@marc Thanks a lot! I will try it in the next days.
@user-7a8fe7 If it is a lot more than 20% of the subjects, the offset can in theory not be explained by the physiological parameters anymore. In that case, make sure that the error is actually a constant offset, otherwise you run the risk of making the predictions worse on average when applying the offset. Contact lenses should not affect the predictions.
Another thing that can introduce a constant offset is parallax error, which becomes noticeable for gaze targets at a distance of <1 meter.
You are correct, currently the offset can only be applied before starting a recording. We are planning to make post-hoc corrections possible within Pupil Cloud, but do not have a planned release date for that yet.
Dear @marc again, thank you very much for your support!
@marc How can I turn back exactly to the preset after I have made an "offset correction"?
Hi marc! It turns out I'd need to use the API to Pupil Cloud as well, would you point me to it? Thanks a lot!
We also have a Python client that implements the API https://github.com/pupil-labs/pupil-cloud-client-python/
https://api.cloud.pupil-labs.com/ You will need to create an api token in Settings -> Developer (top left) -> Generate New Token
@user-7c714e If you set an offset correction it is saved in the wearer profile and will automatically be applied to future recordings made with the same wearer. If you choose to reset the offset in the UI it will reset to the raw uncorrected predictions.
Dear all, I would like to know if I can reset my smartphone from Pupil Labs to factory settings (in German: zurücksetzen auf Werkseinstellungen), or is this a problem due to the pre-installed apps? Thank you so much in advance
@user-7a8fe7 Resetting the phone is not a problem. You can simply reinstall the app from the Play Store. Note that of course all recordings still present on the phone would be deleted!
thanks @marc
@marc I don't see how to reset the offset in the UI. Is this only possible when making a new user's profile? What bothers me a lot is that the correction of the offset for objects at a near distance (up to 3 m, not only under 1 m) affects the precision of the fixations at far distance, so I don't seem to find a sweet spot for any of the subjects who wear Pupil Invisible. This is very important when driving a car and looking at the dashboard and the mirrors (near distance) as well as at the road (far distance). When I look at the fixations in Pupil Player, there is always an offset from the objects one looks at, and the offset is not constant. Do you experience these issues when trying the glasses, and is it possible that the problems I experience are due to a defect in Pupil Invisible (position of sensors, scene camera etc.)? On page 4 of your paper you write: "Due to tolerances in the manufacturing process of Pupil Invisible glasses, both the relative extrinsics of the eye cameras and scene camera (i.e. their spatial relationships), as well as the intrinsics of the scene camera module vary slightly from hardware instance to hardware instance. For each hardware instance, the respective quantities are measured during production and thus can be assumed to be known".
@user-7c714e You can find the button for resetting the offset associated with a wearer in the wearer's profile. You can access the profile of the currently activated wearer by clicking on the wearer name in the app's home screen.
When looking at it in detail, there are two sources of constant offsets and other sources of non-constant error:
Physiological parameters: As mentioned, in theory this only affects ~20% of subjects whose parameters are furthest away from the average. The error is a constant offset that is barely affected by gaze angle and distance to the gaze target.
Parallax error: This error is due to the scene camera sitting off to the side of the frame and affects all subjects. It introduces a constant error that depends on the distance to the gaze target. For distances of around 1 meter and more this error is not really noticeable, but for closer distances it becomes increasingly large. This implies that it is not possible to find an offset correction that corrects the gaze signal perfectly at both close and far distances.
Other error: Some of the error is of course also not just a constant offset. This error is characterized in more detail in the white paper.
Would you be able to share an example recording with [email removed]? Then I could take a look to gauge whether there might be a defect.
@marc I will share a recording by the end of the week. Thank you.
Hello! Unfortunately, I am not able to find where to generate the api token, thanks!
The link refers to the API. The instructions for generating the token refer to https://cloud.pupil-labs.com/
Oh, I've just seen it. The settings icon is on the bottom left; it is not really visible as it blends with the color of the left grey bar. Thank you!
Could you share a screenshot of that? I can relay this feedback to our design team.
here it is π
Got it, thanks
Hey guys, just one question regarding a notification in the Pupil Invisible app: I always get the warning "Calibration not synced! Tap for instructions". Actually, the mobile phone is connected to the internet, but I think nothing is downloaded. What can I do?
What happens if you tap on the notification? Do you get the instruction as described?
Yes. "Update HW Calibration Data.. To improve..."
And can I easily update the phone? Or are there any problems with the PI app? I just remember that there have been issues in the past.
Could you please contact [email removed] and provide the serial number of the scene camera? We would like to verify the calibration can actually be synced.
Please refrain from updating your operating system for now.
I have relayed your question to our Android team. We will let you know their response π
Good afternoon! I have been playing with the Python client and I have an issue when I try downloading the recordings. If I execute the script below:
from pupilcloud import Api, ApiException
api_key = XXX
api = Api(api_key=api_key, host="https://api.cloud.pupil-labs.com")
data = api.get_recordings().result
last_recording = data[-1]
saved_path = api.download_recording_zip(last_recording.id)
I get
Let's move this discussion to π» software-dev π
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/api/recordings_api.py", line 277, in download_recording_zip
    return self.download_recording_zip_with_http_info(recording_id, **kwargs)  # noqa: E501
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/api/recordings_api.py", line 345, in download_recording_zip_with_http_info
    return self.api_client.call_api(
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/api_client.py", line 338, in call_api
    return self.__call_api(resource_path, method,
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/api_client.py", line 170, in __call_api
    response_data = self.request(
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/api_client.py", line 363, in request
    return self.rest_client.GET(url,
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/rest.py", line 235, in GET
    return self.request("GET", url,
  File "/Users/enrico/.conda/envs/ReperioConda/lib/python3.8/site-packages/pupilcloud/rest.py", line 223, in request
    r.data = r.data.decode('utf8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc8 in position 10: invalid continuation byte
I have tried with different files, getting the same error; maybe I am not using the correct ID? As a workaround I have tested the following, which works fine:
url = last_recording.download_url
cmd = 'curl -O "'+url+'" -H "accept: application/octet-stream" -H "api-key:'+api_key+'"'
proc = subprocess.Popen(cmd, shell=True)
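If you prefer to stay in Python, a minimal sketch of the same workaround using the requests library (it reuses last_recording and api_key from the snippet above; the header names are taken from the curl command):
import requests

# Same download_url and api-key header as the curl workaround above.
response = requests.get(
    last_recording.download_url,
    headers={"accept": "application/octet-stream", "api-key": api_key},
    stream=True,
)
response.raise_for_status()
with open(f"{last_recording.id}.zip", "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 20):
        f.write(chunk)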
Dear all, yesterday I took some measurements. While some are good (a), there are a lot where Pupil Player shuts down by itself (b). I can't see a difference between those measurements and the ones from (a). There are also measurements where the fixation in Pupil Player is not correct (it sits in the top left corner for the whole measurement), although in the preview in Pupil Cloud the fixation moves to where the person is looking (c). So in (c) there seems to be something wrong with the fixation calculation... Has anyone had the same problem and could help? Thanks a lot in advance!
I have good news regarding the c) issue: This seems to happen in only a few cases. Our Android team is aware of it and investigating the cause. Luckily, Pupil Cloud post-processes the eye videos to generate the full 200Hz gaze signal, which is not affected by the issue. Pupil Player will be able to read the 200Hz signal in our next release. You will have full access to the recording once that release is published.
Regarding b), could you please share the pupil_player_settings folder -> player.log file with us after Player shuts down due to b), without restarting Player afterwards? (Restarting resets the log file.)
Dear @papr, thank you very much for your answer. Attached you will find the player.log after testing a measurement from case (b), where Pupil Player shuts down.
Regarding c), it would be most helpful if you could share an example with [email removed] such that we can have a closer look.
Thank you very much @papr I will contact them!
Please also share an example recording for b) with [email removed]. The log shows an unknown issue that we will have to investigate.
@papr Is it possible to share it via my own cloud? Because the file is too big.
Unfortunately, not yet. Please share the downloaded recordings from cloud via a service like Google Drive or something similar.
ok thanks, I will do it later because I'm in a meeting now
Hi, I have a question about the gaze coordinates data. So, let's say I have a monitor in front of me, I put the QR markers on its corners and create a surface on it (Marker Mapper). I have read somewhere in the documentation that you can map the gaze points onto the surface. But can I extract the data as coordinate values of a gaze point for every frame? An example of what I mean: // Frame1, (16,65); // Frame2, (19,61); // Frame3, (15,55); etc.
Is it possible to get these coordinates in relation to one of the markers I put on the display?
Maybe it is a confusing explanation, but I believe that there are many people who did something similar. Many thanks and kind regards. π
By relation to the markers I mean... I create a coordinate system in the top-left corner (marker), which has the value (0,0). And from there, I extract the gaze coordinates on the surface.
@user-d33e76 You can define the surface corners to align with the corners of the display. The Marker Mapper reports mapped gaze in a normalized coordinate system, where (0,0) is the bottom left corner and (1,1) is the top right corner. So to convert the mapped gaze into pixel coordinates of your display you simply need to multiply the normalized coordinates with the resolution of your screen.
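For reference, a minimal sketch of that conversion (the file and column names follow a typical Pupil Player surface export and are assumptions, and the screen resolution is only an example):
import pandas as pd

SCREEN_W, SCREEN_H = 1920, 1080  # example display resolution; use your own

# Assumed export name and columns; adjust to your surface export.
gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")
gaze["x_px"] = gaze["x_norm"] * SCREEN_W
# (0,0) is the bottom left corner, so flip y if you want top-left pixel coordinates
gaze["y_px"] = (1 - gaze["y_norm"]) * SCREEN_H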
Ok, that is great. Thanks!
And the surface area and coordinate system, do they adjust all the time while I move my head?
In other words, I would need to have surface corners "fixed" to physical corners of the screen, despite my head movement. Do they stay "fixed"? Head movement would be minor and the defined surface would never leave the camera sight.
We have received your email. We will probably be able to come back to you by next week.
@papr many thanks!!
@user-d33e76 Note that the corners of the surface do not need to be aligned exactly with the markers. You can define the surface corners to be anywhere within the plane the markers are on.
@user-d33e76 Consider for example this surface. It is defined such that the mapped gaze can be converted into coordinates exactly aligned with the white board. The corners of the surface do not match exactly with the markers.
@user-d33e76 Yes, the markers are detected independently in every scene camera frame, allowing accurate detection in every frame. Thus, the mapping is robust to head movement.
That is excellent! Thank you very much for the info! π
You're welcome! π
Hey @papr! I have a question kind of related to your last message. My understanding: we can get 200Hz gaze data only if we upload to Pupil Cloud, and if we load the downloaded folder in the current version of Player, it goes down to ~45Hz. The folder I get from the Cloud download contains gaze_200hz.raw and gaze_200hz.time files. But I don't get how and where to read in these files to actually extract the gaze data, so that I can get something like the gaze_positions.csv from Pupil Player but at 200Hz. Is there a way to do that or shall we just wait until the next Player release?
Your understanding is correct. You can read the raw 200 hz gaze data using:
import pathlib

import numpy as np
import pandas as pd

# rec = <path to recording folder>
rec = pathlib.Path(rec)

# Gaze is stored as interleaved little-endian float32 (x, y) pixel coordinates
# in the scene camera image (not normalized)
gaze = np.fromfile(rec / "gaze_200hz.raw", dtype="<f4")
gaze.shape = -1, 2

# Timestamps are uint64 UTC nanoseconds (unix epoch)
ts = np.fromfile(rec / "gaze_200hz.time", dtype="<u8")
ts_datetime = pd.to_datetime(ts)  # interprets the integers as nanoseconds since epoch

gaze_200hz_signal = pd.DataFrame(
    {
        "x_pixels": gaze[:, 0],
        "y_pixels": gaze[:, 1],
        "timestamp_nanoseconds": ts,
        "datetime": ts_datetime,
    }
)
gaze_200hz_signal.to_csv("gaze_200hz.csv")
Be aware that this uses original Pupil Invisible data (time as nanoseconds; unix epoch; location in pixels, not normalized) and is not comparable to other Pupil Player generated data. The actual transformation from Invisible to Player recording is more complex, and you will have to wait for the next Player release to get full Player data compatibility.
Thanks a lot!
Hi, I have a question about gaze plots: how can I obtain one? I found the tutorial explained in the link, but I wonder if there is a simpler way (for non-developers basically, like an export setting or so) π Ideas? https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
Hi @user-1cf7f3! Within Pupil Player you can get a scan path visualization that shows the scan path of the last 5 seconds (independently of the surfaces used in the tutorial you reference). Within Pupil Player and Cloud you can also get a heatmap for a tracked surface. The exact visualization created in this tutorial is not available "off the shelf" however.
Hi @marc Thanks for your reply! Can you explain further how to obtain the last 5 seconds scan path visualization? Thanks
Hello. I am not sure whether this is an issue to be solved here or somewhere in my Google account. I logged into the Pupil Invisible Companion App with two different Google accounts. But now I need to delete/remove the other account from the app. Tried different things in the app, and googled the issue, but I just can't figure out how to remove it. Can you help?
@user-1cf7f3 This is done using the Vis Polyline plugin in Pupil Player. You can set a Duration for how much of the history should be plotted. The maximum is actually 3 sec, not 5 sec as I previously said.
Got it! Thanks @marc
@user-2652b9 Have you logged in by previously adding the Google account in Android?
@marc Sorry, not sure what you mean exactly?
I created the account on the Android while signing up on the app.
And the second account is one I already own, and I need that one to be removed.
@user-2652b9 Okay, then the steps to remove the account would be to a) log out of the app and b) remove the account from Android. If there are still recordings from the account that is to be deleted on the phone, you can delete those either before logging out and removing the account, or afterwards via the Android filesystem.
Removing the account from Android works via the system settings
Let me know if you need further clarifications with one of the steps @user-2652b9!
Oh wow, not used to Android. It worked. Thank you very much!
You are welcome! π
Hey guys, I have a question regarding Pupil Capture. I want to localize a 3D model for head pose tracking with Pupil Invisible via real-time streaming in Pupil Capture. Now the question is how I can connect both Pupil Invisible and my laptop (which has Pupil Capture running on it) at the same time to the Pupil Companion app? I would greatly appreciate your response.
If both devices are in the same wifi network, you should be able to select the Pupil Invisible world camera in the Video Source menu in Pupil Capture.
Thank you so much!
Just tried it out with both my laptop and the Pupil Invisible Companion phone connected to the same WiFi. But when I go to Video Source in Pupil Capture, the world camera from Pupil Invisible is not detected there. Is there any other way to sync them? Thanks,
Could you share the pupil_capture_settings -> capture.log file with us?
Thanks, Here is the file you asked for.
Is it possible that you are connected to a University/Enterprise network?
I am trying it out at the university, but the eduroam connection is not good; that is why I used my personal hotspot to try to connect both devices.
Ah, Pupil's auto discovery does not work for hotspots or university networks. They employ security mechanisms that block the required communication. We recommend setting up a dedicated wifi network (with a dedicated wifi router) for best results.
Oh, I see. Then is it possible to localize the head-pose 3D model post hoc when I am not at the university? If not, considering that my 3D head-pose model is the same in all my measurements, is the localization procedure through Pupil Capture a one-time 3D model identification (i.e., I only need to define my model once and then I have it each time I run the experiment), or must I re-identify the model each time?
You define the model once and use it for real-time and post-hoc localization.
Thank you so much for your help!
@marc How can I get all data (number of fixations, fix. duration, gyroscope and accelerometer data etc.) from the Invisible glasses in a .csv or .xls, etc.? Which software should I use? Which markers should I use for defining the AOIs?
@user-7c714e The fixation detector algorithm we use for Pupil Core is not yet compatible with Pupil Invisible. We are actively working on providing fixation detection for Pupil Invisible as well, but do not have a release date for that feature yet.
We are currently adding the ability to download IMU data in CSV format to both Pupil Player and Pupil Cloud. It will become available within the next couple of weeks. Until then you could already parse the raw binary data yourself; the format is documented here: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit?usp=sharing
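Until that CSV export lands, here is a rough sketch of parsing the raw IMU stream along the lines of the gaze example further above. The file names and the 6-float32-per-sample layout (gyro x/y/z, accel x/y/z) are assumptions and should be verified against the format document linked above:
import pathlib

import numpy as np
import pandas as pd

rec = pathlib.Path("path/to/recording")

# Assumed layout: 6 little-endian float32 values per sample (gyro x/y/z, accel x/y/z).
# Verify the field order and file names against the format spreadsheet linked above.
imu = np.fromfile(rec / "extimu ps1.raw", dtype="<f4").reshape(-1, 6)
ts = np.fromfile(rec / "extimu ps1.time", dtype="<u8")  # UTC timestamps in nanoseconds

df = pd.DataFrame(imu, columns=["gyro_x", "gyro_y", "gyro_z", "accel_x", "accel_y", "accel_z"])
df["timestamp_ns"] = ts
df.to_csv("imu.csv", index=False)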
Regarding the markers please check the documentation of the Surface Tracker here: https://docs.pupil-labs.com/core/software/pupil-capture/#markers The same markers are used for the Marker Mapper in Pupil Cloud and are linked in the instructions there as well.
Hi @papr! Parishad and I are on the same team. Just wondered if this would work with an internet stick / surf stick as well? Setting up a new wifi network with a dedicated wifi router is quite complicated from an organizational point of view, so we'd like to avoid that.
That is likely not to work. Do you need the data in real time for your experiment?
Just for defining the 3d model for head tracking, since for that Pupil Capture and Pupil Invisible have to be connected to the same network.
Do you really have to create a new 3d model for each subject or do all subjects perform the experiment in the same space?
Did you know that you can reuse the 3d model definition across multiple recordings?
All subjects perform the experiment in the same place. But even to define the 3d model once, we'd need to connect the devices to the appropriate network, right? And setting up a new wifi network is quite complicated at our uni at the moment.
@user-16e6e3 You could also record a thorough video of your experiment environment (including all the markers) and use that recording to build the 3D model post-hoc using Pupil Player.
The model does not necessarily have to be generated live in Capture.
Oh ok. That's good to know, thanks! We'll try that
You can also connect the Pupil Invisible glasses to Pupil Capture. Change to manual selection mode in the Video Source. Afterward, select the world camera and use it to build the 3d model.