Hey, I saw that you can receive a response from the mobile device for things like IP address, battery level, serial number of the glasses and so on (more or less your discover_one_device() functionality in your real-time API). Is it noted anywhere in the documentation which commands have to be sent to the phone and what messages will be received in response? We can currently start/stop local recordings on the phone remotely from our tooling, but I would like to receive the additional information as well. Thanks!
Hey! Are you using the simple or async/advanced API in your tooling? (Are you planning on using the Python API at all?)
No, it is a C++ integration which is not using any of your python scripts
Ok, got it! There are two ways to get this kind of device information:
1. Via HTTP requests to the /api/status endpoint
2. Via websocket updates
See the Under The Hood guide for details https://pupil-labs-realtime-api.readthedocs.io/en/latest/guides/under-the-hood.html#get-current-status
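For a quick test of the HTTP route, something like the following should work from any machine on the same network (a minimal sketch in Python; the exact JSON layout of the response is described in the guide linked above):

import json
import urllib.request

# Pull the current device status once via the HTTP endpoint.
# Assumes the phone is reachable as pi.local on port 8080.
with urllib.request.urlopen("http://pi.local:8080/api/status") as response:
    status = json.load(response)

# Inspect phone and sensor info (IP address, battery level, ...)
print(json.dumps(status, indent=2))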
The implementation with ndsi (and e.g. the related zyre stack) is somewhat outdated and does not support such things, I guess?
ndsi is a whole different API and has been deprecated. The only reason to use it atm is to receive eye video and IMU data. But we will be adding that to the new API, too.
Okay, thanks. Is there any date for when the NDSI integration will no longer be working?
We don't have an end-of-life date yet.
@user-fb5b59 That said, and given that there is no NDSI c++ reference implementation, I would not recommend investing any efforts into building a c++ integration for it.
Already did this two years ago, and receiving the world video, eye gaze, single eye images, and IMU data is working fine. Just saw some updates on your page (e.g., regarding the battery status and so on) and was thinking about integrating them... but yes, won't do this now, and we might need a refactoring later on to support the new libs.
I am able to connect to the ws://pi.local:8080/api/status and will receive the status message on connection. Is there any message I can send via WebSocket so I get a new status update in response?
No, I don't think so. But the connection will push updates as soon as something changes, e.g. a sensor disconnect. If you want to explicitly pull the current status, use the HTTP GET request.
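For reference, a minimal sketch of listening for those pushed updates in Python (assumes the third-party websockets package, pip install websockets):

import asyncio
import websockets

async def listen():
    # The server sends the full status on connection; afterwards a new
    # message arrives whenever something changes (e.g. a sensor disconnect).
    async with websockets.connect("ws://pi.local:8080/api/status") as ws:
        async for message in ws:
            print(message)

asyncio.run(listen())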
Btw, did you open source the client somewhere?
No
Just one additional question regarding the IMU usage in the future with the new API: will it still be sent in chunks of size 60 or directly live streamed?
@user-fb5b59 We will keep sending in chunks of 60. This is actually the IMU doing the chunking when sending the data via I2C to the USB bridge, not the companion app or the network API host.
Okay, got it. Thank you very much for your support, have a nice weekend.
Hi folks, I recently changed from Pupil Core to Pupil Invisible. In my studies using Pupil Core, I could easily use the Pupil Player software to calculate fixation positions. However, with the Invisible that does not seem possible. How can I calculate fixations using the Invisible and Pupil Player?
Hi @user-3b5a61! The fixation detector implemented in Pupil Player is not compatible with Pupil Invisible's gaze signal. The signal contains a bit more noise, which the algorithm cannot handle.
We have developed a new algorithm which works great with Pupil Invisible and is also much better at compensating for head movements of the subject. This algorithm is however only available in Pupil Cloud. Fixation data will be calculated on upload, and the recording downloads will include the fixation data.
Thanks in advance!
Thank you so much, @marc :)
Dear sir
Thank you for your support. Excuse me, could you answer the following questions? Best regards
My questions are as follows
Q1 What are the steps to follow when connecting the eyewear to a smartphone? Is there anything else besides "Enable OTG"?
Q2 Does the eyewear normally work as soon as it is connected to the smartphone? My eyewear has been difficult to get recognised as connected ever since I first purchased it.
Q3. Is there a solution to my problem?
My situation is as follows.
- When the eyewear (Invisible) is connected to the smartphone (the smartphone enclosed with the purchase), it takes a very long time for the eyewear to be recognized (on some days it cannot be recognized at all). Experiments cannot be carried out.
- The Scene Camera icon and Eye Camera icon in the attached image do not light up.
- The eyewear gets hot, so electricity is flowing.
- Yesterday morning, the eyewear worked after about 10 minutes of being connected. In the afternoon it never worked. Today it also didn't work.
- My use of the Invisible eyewear so far is about one hour or less (total recording time is less than one hour).
Hi @user-5c56d0 thanks for getting in touch and for the detailed report.
A number of questions: 1. What version of Android is running? 2. Which device is it, OnePlus 6 or OnePlus 8/8T? If OnePlus 6, you will need OTG enabled; OnePlus 8/8T does not have this requirement. 3. Are you using the original cable we supplied to connect the glasses to the phone?
Thank you for your answer. The following is my status.
What version of Android is running? Futami: It is 8.1.0.
What device, OnePlus 6 or OnePlus 8/8T? If OnePlus 6, you will need OTG enabled; OnePlus 8/8T does not have this requirement. Futami: It is a OnePlus 6A.
Are you using the original cable we supplied to connect the glasses to the phone? Futami: Yes, it is.
Q4 Is the operation "Enable Application Lock" necessary?
Thanks for the response @user-5c56d0. 1. Android v8.1.0 is supported. 2. OnePlus 6A: 2.1. OTG: Required. 2.2. Enable Application Lock: Required to enable the app to run in the background even when the phone screen is off.
@user-5c56d0 If you are not able to get up and running, I suggest that you contact us at info@pupil-labs.com to follow up so that our Hardware team can help debug and diagnose the issue.
@wrp Thank you.
Could you tell me how to do the "Enable Application Lock" operation? I did it once last year, but I don't know how to do it now.
To enable the application lock please follow this instruction video:
Thank you very much. That really saved me.
I'm sorry.
Could you tell me the following: in my attached image, how do I go from (1) to (2)?
The card at the bottom of image (2) only appears during the initial setup of the app (when you open it for the first time). This is only a tutorial though; you do not need to access this view. You can simply execute the steps for locking the app from the screen in image (1)!
Thank you very much.
Is the following action correct? I have to change from "un-lock" to "lock" (i.e., from (3) to (4) in my attached image). My original one was "un-lock".
I can't see images 3 and 4, but yes, this is exactly what you need to do.
I'm sorry. This is my intended image.
Thank you
In addition, the other application "Pupil Mobile" shows the following error.
"com. pupillabs. exceptions. Has No Sensor Permission Exception" in my attached image.
@user-5c56d0 Pupil Mobile is not compatible with Pupil Invisible and is deprecated as an application.
Thank you for your reply. I understand.
I'm sorry for the trouble. Could you answer the following?
Q1 In general, how long after connecting the eyewear to the smartphone is the eyewear recognized?
Q2. I have set the application lock to ON (i.e. locked), but 10 minutes have passed without the eyewear being recognized. Regarding the operation "Enable Application Lock", is it correct to change the application from unlocked to locked? The reason I ask is that the eyewear worked yesterday morning with the application unlocked.
The application icon should always be locked. Otherwise there is a chance that the app shuts down while recording.
The hardware should connect to the app almost instantly. The behaviour you describe, where it sometimes connects, sometimes it does not, and sometimes it takes a while, indicates that there might be a connection issue in the hardware.
This might be an issue with the cable, or an issue within the Glasses themselves. Please contact [email removed] and ask them for a repair. You can reference this conversation. The hardware team will then diagnose the issue with you and provide a hardware replacement if necessary.
I hope we can get you up and running again quickly! Sorry for the inconvenience!
Thank you.
The eyewear was not recognized when a Thunderbolt cable was used. Is a USB-C to USB-C cable the appropriate cable, such as the following products?
Thank you. From the following, I assumed that the problem was with the eyewear. I was also surprised to hear that the eyewear should be recognized as soon as it is connected to the smartphone via USB. The reason is that the eyewear has not been recognized easily since I first purchased it.
- Your original cable could not be used for the eyewear, but my headphones could be charged with it. Therefore, I can assume that your original cable is not broken.
- I tried the following two types: (1) one Thunderbolt cable I have and (2) two Type-C cables that are not Thunderbolt cables. As a result, the eyewear was not recognized by the smartphone in all three cases.
Hi @user-5c56d0. We are responding via email.
Hi! I have questions about setting up an experiment with 3 Invisible eye tracking systems in a conference room setting (4.1 m x 7.5 m). Three subjects sit in the positions shown in the figure. We want to track their head and eye movements. We have tried putting QR codes (AprilTags) on the wall and the table. However, when we loaded the data into Pupil Player and ran the camera localization, the camera pose was not fully developed. We want to know how big the markers need to be and how many of them we need in a room of this size so that we can get a solid 3D model and head pose. Is it possible to track three pairs of glasses at the same time in this big environment? What is your advice for working in this big environment with several pairs of glasses? Thanks in advance!
Hi! To get a solid 3d model, I recommend making a dedicated "scanning recording" * which you can use to build the head pose tracker's model. Afterward, copy the file to the other recordings. This has the advantage that all three PI recordings will be localized within the same model.
* See the tutorial for reference https://youtu.be/9x9h98tywFI?t=15
Thanks for the response!! How do I copy and paste the model to the next recording?
hi! I'm using the enrichment named Reference Image Mapper to convert Pupil Invisible videos to a heatmap, but it always fails and warns me to use another video. I tried several times but it still failed. How can I get it to succeed?
Hi @user-7e5889. Would you be able to describe the environment in which you're using the Reference Image Mapper?
I used both Windows 10 64-bit with Chrome (cable) and macOS Monterey with Safari (wifi).
@user-c23af2 You can visualize fixations (rather than raw gaze) as a polyline in Pupil Cloud. You need to open a recording in the player and then enable the Fixations toggle in the bottom right. (Note: we are only just now releasing this feature and it might take until about next Monday to become available for all recordings.) You cannot export this visualization yet, however.
A visualization of a polyline of the raw gaze is not directly available and you'd have to generate it yourself. You can download the gaze data in CSV format and then visualize it, e.g. using Python and matplotlib.
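A minimal sketch of such a visualization (the gaze.csv file name and the "gaze x [px]"/"gaze y [px]" column names are assumptions - check the header row of your export and adjust if they differ):

import matplotlib.pyplot as plt
import pandas as pd

gaze = pd.read_csv("gaze.csv")
# Draw the raw gaze as one continuous polyline in scene camera pixels
plt.plot(gaze["gaze x [px]"], gaze["gaze y [px]"], linewidth=0.5)
plt.gca().invert_yaxis()  # image coordinates: origin at the top-left
plt.title("Raw gaze polyline")
plt.show()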
Hi @user-7e5889! @nmt 's question was aiming at something different. The Reference Image Mapper does not work in every physical recording environment. It requires the environment to be sufficiently rich in visual features. So e.g. tracking an empty table top in an empty room would probably not work. Could you describe your application and recording environment a little bit? If possible, you could also share your scanning recording with [email removed] so we can take a direct look.
I tested 2 objects: a painting and a whiteboard in my lab.
After I created an enrichment and pressed the 'start' button, it seemed to be loading (p1) but soon went back to the previous status (p2). Then I refreshed the webpage and it seemed to be in progress again (p3). But after several minutes, I refreshed the webpage one more time and it showed error 100% (p4).
The details are shown in the screenshot below and the enrichment ID is 4ff6da06-5525-4aca-8c41-29e4e69afb25
Hi Pupil Labs, I am carrying out an experiment and I am using Pupil Invisible. I would like to know how to visualize the gaze positions with a polyline for each gaze position.
Hi @user-c23af2! Please see my response slightly above: https://discord.com/channels/285728493612957698/633564003846717444/994185790944985118
Thanks @user-7e5889! This make sit much clearer! If you get this error state, this means that either the environment you're in is unsuitable, or the scanning video of the environment is insufficient. I have not seen your scanning videos, but I think in your case the environment is the issue. With the plain white board, the solid gray floor and the rather texture less background objects and walls the algorithm does not have enough visual features to hold on to. The image you try to track is also a rather fuzzy/blurry one with not too many hard features.
Possible fixes for this would be: 1) make sure the object you are trying to track is more feature-rich. E.g. if it's possible you could swap to a different painting that is less fuzzy. Depending on the research question swapping out the object of interest is of course not always possible.
2) Make sure the object of interest is located in a feature rich environment. If you could move the whiteboard in front of a wall that has large texture that would help. Feature rich objects next the object of interest would also help.
Regarding paintings: we are about to release an example project next week demonstrating the reference image mapper in a gallery with paintings. Maybe that can serve as an example.
Thank you for your reply. I'm considering changing the environment and then conducting the test one more time. And I wonder whether the scanning video in the third step should show the environment with or without the target object?
Had a recording with the invisible today. Unfortunately, I found out later that no scene video was recorded.
Could there be an error in the export?
I didn't get an error message from the companion app while recording.
Can you right-click the recording in Cloud, go to Download -> Download Raw Data, and list the files in the recording? Does it include any files with the word "world" in them?
that surprises me
I don't use the cloud service. But the two "PI world" files are missing in the exported directory.
In that case, this means the scene camera was simply not connected during the recording, which is a supported use case, so the app does not warn you about it.
But the camera was attached the whole time.
Or do you mean it was disabled in the app?
One cannot disable the scene camera on its own via software. When you open the recording in Player, does it include gaze data?
I can load the recording into the player but when I play the recording I see and hear nothing.
No green circles? That would most likely mean that the glasses were not connected at all or (in case of OnePlus 6) OTG was not enabled.
I think that was the problem. OTG was disabled.
I'm surprised I was able to record anyway.
The app is designed to work even if the device disconnects during the recording. That includes situations where the device is "not connected" during the start. Regarding OTG, the app will display a red usb icon in the main view if OTG is disabled.
I can see where your expectation comes from, though. I will forward your feedback to our Android and Design team!
Thank you for that. If things have to be done quickly on site, then unfortunately you make mistakes. I'll pay attention to that in the future.
Also, that means all three subjects do not need to do the scanning process, right? We just need to create that ahead of time, then anyone could wear the goggles and walk into the scene. Am I getting this right?
Correct!
I don't remember the name of the file. Please check the head pose tracker docs. If this information is missing I will update the docs accordingly tomorrow
I got it. Thank you!!
Let us know if you run into trouble again with the next test! The object of interest must be included in the environment when recording the scanning video!
I have changed the environment and ran another test on this painting. It failed again. Then I tried a table full of objects. It has been at 'in progress 100%' status for 5 minutes. Maybe it failed again.
The painting is suffering from the same problems again: the painting itself is fuzzy and the background is pretty monotone. I would expect the test with the table to succeed though! The table is visually very busy and should work for the algorithm. The computation (if it is successful) can take quite a while, definitely longer than 5 min, so give it some time!
Thank you! I think the UI should be more user-friendly and show the real status instead of showing '100%' all the time. About the painting: actually, my experiment is about the appreciation of a painting on a white wall. Maybe this enrichment doesn't support my needs?
sad.
Given that this failed as well, we should take a look at the scanning video and see if anything is going wrong there!
It depends on the painting! Again, we are releasing a demo dataset using the Reference Image Mapper in an art gallery, where it worked very well. I recommend you check it out once it is out and see what painting we have used. This should be the right tool for you.
I'll check the demo and choose the experiment paintings and then run another test. But could you tell me why the table with a lot of objects failed? I'll send the video to the email address soon.
Could you share the scanning video you made for the table with [email removed]?
We will need to have a look at the scanning recording to tell!
I've sent the email named "The scanning video of the failed reference image mapper reported by bubblepepper". Please check the attachment.
Thanks! I reviewed it and I would recommend the following: In your recording, you hold the glasses in multiple static poses while moving very quickly between them. Please try moving more, but slowly and continuously. This allows the algorithm to leverage more points of view. Holding a single pose is not beneficial. Also, try to include top-down views and side-views as well. This will help the algorithm to build a more stable model.
Thank you for your advice. I'll test one more time soon.
A bit more variation in distance to the table could also help, i.e. moving a bit closer and further away during the video.
Hi @user-7e5889! Check out the video in our documentation on how to make a scanning recording: https://docs.pupil-labs.com/invisible/explainers/enrichments/#setup-2 I would also recommend watching this explainer video https://www.youtube.com/watch?v=ygqzQEzUIS4 to get a better grasp of how the Reference Image Mapper algorithm works.
thank you! I have read and watched them before, but I didn't notice that there are so many points to pay attention to. I'll read and watch them again. If some tips were shown in the user interface during the process, instead of on a separate webpage, it would be more noticeable and useful.
@user-7e5889 Hi again! You can now see some real examples from an art gallery in our new Demo Workspace: https://docs.pupil-labs.com/invisible/explainers/basic-concepts/#demo-workspace. Check out the Reference Image Mapper enrichments with paintings, and the accompanying scanning recordings there.
thanks! But I am afraid that I can't find this #demo-workspace on this page. Could you send a screenshot to me?
There's a link to Demo Workspace on the page. Let me know if you still have trouble accessing
I think there is no area named 'demo workspace' on this page. Is there an issue with caching or a version release?
Also, here is the direct link to the demo workspace for you to checkout: https://cloud.pupil-labs.com/workspace/78cddeee-772e-4e54-9963-1cc2f62825f9/recordings
Please try a "force reload" of the page. Your browser is caching an out-of-date version of the web page. You can force reload with command+shift+r
Thanks! I have accessed the page, but there was no reaction after I clicked 'heatmap'. I recorded a video; I can send it to your email if needed.
The issue with displaying heatmaps in Safari has been fixed
This works on Firefox if you want to use it in the meantime.
Which browser do you use? Safari?
yes
Let me try to reproduce the issue
I am able to reproduce the issue. I will forward it to our Cloud development team. Thanks for reporting the issue!
Thanks! I accessed it on Chrome and it worked. I'll refer to these successful examples and try again tomorrow.
I had a use case where I didn't have access to the internet. Is a constant online connection required to use the Companion App or for recording?
Hi, an active internet connection is only necessary for the initial setup and for uploading recordings to Pupil Cloud. Once set up, it can be used without internet.
Or is it only used for login?
OK. So if I'm logged in, can I use the app offline?
Yes, that is correct
thx.
To use the Pupil Invisible Monitor, I tried to make the OnePlus a hotspot and connect the tablet to it. Unfortunately, after successfully connecting, I cannot reach the Pupil Invisible Monitor.
Hi! My apologies, but I am not quite sure I follow the issue correctly. Can you clarify whether you are referring to the new web app or the deprecated desktop app? And what do you mean by "cannot call up"?
Yes, I mean the new web application.
There seems to be a problem with the DNS. Unfortunately, I also have no information about which IP the OnePlus has if it establishes a hotspot itself.
Have you checked the app's Streaming view? It displays URLs (incl. one with the phone's IP) which can be used to connect to the phone.
So that I can call IP-OnePlus:8080 in the browser.
Hi, I have 2 Invisible glasses. They seem to be intermittently failing to connect between the Android phone and the glasses after several sessions of data collection. Most of the time, both the scene and eye camera icons are inactive (grey) when connected to the phone. Otherwise it takes a long time to connect. I've updated the phone and the app; it's not working. What can I do?
Hi @user-95de6f! I have a couple of follow-up questions:
Are the Android phones you are using OnePlus 6 or OnePlus 8 devices? For OnePlus 6 there would be a couple of things to watch out for, on OnePlus 8 connection should work out of the box.
Are you using the cable that was included in the box to connect the glasses and the phone, or another 3rd party cable?
Do both sets of glasses and phones behave the same? I.e. do both lose the connection sometimes?
When you say "it stops collecting data", what exactly does that mean? Does the sensor show up as disconnected in the app and the circles stop filling up on the home screen? Or did you observe something else to come to this conclusion?
hi @marc sure, i'll ping separately
I tried that. Unfortunately, the browser returns the message that it cannot access the page.
Could you share a screenshot of the tablet's browser window attempting to connect to the phone's ip address and monitor port?
I am able to make the setup work (hotspot on phone, connect with external device).
Just to be sure: Have you tried force stopping and relaunching the app after setting up the hotspot on the phone?
Are you able to ping the device's ip address from the terminal?
Unfortunately, I have no way of doing this with the tablet or OnePlus.
What operating system is running on the tablet?
It's an android (android 11) device.
When I check the IP of the tablet, it says something like 192.168..., but the IP of the OnePlus is supposed to be 10.59... etc. That is what the streaming option in the companion app shows me.
The IP of the tablet does not matter. The IP that matters is the one displayed in the app's Streaming menu.
O.k.
Does your hotspot have internet access?
It does, but that should not matter for the local connection of the web monitor app.
Connection Problems with 2 devices
OK. I have now tried it, as you said. With the streaming option, it is currently looking for the DNS service. Without internet, this could be a problem.
I am able to reproduce the issue. Here is what's happening when the phone opens a hotspot: A. The phone has a public IP address that identifies it within whichever network it is currently connected to (might be none; in your case 10.59.x.y.z; this one can't be used to connect the monitor). B. The phone gets a "hotspot" IP address that identifies the phone to all devices connected to the hotspot (this is what we are looking for).
Unfortunately, the phone's UI does not tell us (B), the hotspot IP. We need to use the tablet's UI to find it.
The "router" ip address corresponds to (B) and is the ip that you need to type into the browser (don't forget to add the port number! :8080
)
It is waiting for DNS service...
I have the IP. However, when I enter it in the browser, I get the message: Website not available. ERR_CONNECTION_REFUSED
You might have forgotten to specify the port
I tried an HTTP connection to the "router" IP:8080.
Have you installed any other third-party apps, e.g. a screen share application? Please try incrementing the port number by 1, e.g. 8081, 8082, etc.
And can you confirm that the app is running?
Yes, the app is running. I have now tried up to port 8085 without success.
Now it works! The error was that the RTSP service was not running.
Does the streaming menu continue to wait for dns?
Does this service start automatically?
How did you determine the service was not running? Did you do anything specific to enable it?
What I found is that the RTSP service only starts automatically when an internet connection is detected.
I get a notification that the service has started.
I spoke to our Android engineering team. The recommended setup is to have an external wifi of some sort, e.g. another phone providing the hotspot. Hosting the hotspot on the companion device technically works but is not officially supported. For me, the rtsp service is immediately available after setting up the hotspot (without wifi or internet connection) and fully restarting the app (force quit + launch).
If I do the same, then I only have the background service and the upload service. The upload service runs even though I turned off the automatic cloud upload.
But are you able to connect via the router IP and the corresponding port? It is very possible the RTSP service is only launched once requested via the HTTP entrypoint API.
Thanks for the extensive information and help.
I had to set it up this way because my tablet is not an LTE device, so I can't set up a hotspot with it. But I've now used my private smartphone for this, and it worked perfectly.
You are welcome!
This will help me during my next session to check that everything is working correctly while recording.
Is anyone aware of how to disable the system update notifications? I tried this, but they still appear :-( https://www.nstec.com/how-to-stop-software-update-notifications-on-android/
I am not aware of a method to disable them, unfortunately 😦
I managed to do it in a safer way with https://adbappcontrol.com/ 🥳 To use it, you need to enable dev mode and then USB debugging on the phone.
Afterwards I disabled the following package to get rid of the notifications and prevent any accidental system update:
* com.oneplus.opbackup
I also uninstalled some bloatware, namely:
* com.oneplus.membership (OnePlus Red Cable Club)
* com.oneplus.gameinstaller (OnePlus game stuff)
* com.oneplus.gamespace (OnePlus game stuff)
* com.facebook.services (Facebook)
* com.netflix.mediaclient (Netflix)
* com.netflix.partner.activation (Netflix)
So far everything seems to work fine. If I run into any problems, I'll report them here as an FYI.
Have you ever done any tests with LineageOS?
hello, I am trying to run code that uses pupil_apriltags on Windows 10, but I have a problem installing the package. First it was about "problem building wheel", which got solved by changing a line in setup.py. Now I'm getting another error: "python setup.py develop did not run successfully. note: This error originates from a subprocess, and is likely not a problem with pip." And my pip is up to date. Can anyone help me solve this?
The software is not tested on OSes other than the specific OxygenOS versions for Android 8/9 (1+6) and 11 (1+8).
I understand that this isn't officially supported; I just wanted to know whether you ever tried at all. I assume I might try it and see if any of the issues from the troubleshooting guide come up.
Cool! Thanks for sharing it here @user-28ac9f!
Hey, I don't know if this is the correct channel for my question, but I'd be happy to move it if need be.
We are using Pupil Invisible glasses and Pupil Player for analyses. Our lab needs to have the experimental room video footage in the background with the MET data overlaid on top. Is there any way to set this up?
Right now, if we drag the MET data folder into Pupil Player first, we can't resize the video. However, if we drag the background footage in first, then we can't get the MET data overlaid on top; we can only see the MET video data without gaze data.
Hi @user-2ecd13. Am I correct in assuming that you downloaded the recording from Pupil Cloud? If so, did you click 'Download Recording' or 'Download Raw Data'? The former contains human-readable files, not meant to be opened in Player. The latter contains binary files that can be opened in Player (i.e. you'll see the video + gaze overlay). If you'd like to open the recording in Player, it's 'Download Raw Data' that you need (if in doubt, this is the larger download).
Hi guys, I am having some issues with the Invisible companion app. When I start recording, it seems to work until I try to stop the recording. It shows an error message saying "recording error" and then the app freezes. Is anyone able to help troubleshoot what is going on? I have tried clearing the cache, resetting the app, and turning the phone off and on again.
Attached is a photo of the error message.
The recordings seem to be fine for shorter sessions of 2-3 minutes. However, when participants spend 16-20 minutes recording, that is when we get the error message.
Hi @user-8d7781. Would you be able to share another photo of the error message? I can't seem to view the current one.
When I received this message, the recording had cut off halfway through the experiment. It also did not let me import the recording into Pupil Player, and said there was no available fixation data.
Hi, Please note that Pupil Player does not support detecting fixations from Pupil Invisible recordings.
Thanks for the screenshot @user-8d7781. Could you share the affected recording with [email removed] so we can take a look at it?
Hi Marc, sure thing. I will be at work tomorrow and will email the recording through then. Thanks for this.
Hi,
We are planning to use the Invisible Glasses for tracking where the driver was looking while driving a vehicle. We had a few questions regarding that:
Thanks!
And is it possible to connect Pupil Invisible glasses with the Pupil Core Capture software? I would guess not, but just to confirm.
Hi @user-d4d4bc!
1) + 2) The raw gaze signal is 200 Hz and the estimates are made independently per sample. That means that no stabilization is necessary and looking at the same point for a longer time does not affect the quality of the estimate. For fixation detection the gaze signal needs to remain stable on a target for a minimum duration though.
3) Again, the duration does not influence the accuracy. However, the distance of the object of regard does to a degree. If the subject looks at a target that is < 1 meter away, the gaze estimate will suffer from a "parallax error", which gets worse the closer the target is. This error introduces a constant offset in the predictions. For distances > 1 meter this error does not have an effect. (See the rough numeric sketch after this list.)
4) No further setup is necessary, the device works at full quality as soon as you put it on.
5) You can open recordings made with Pupil Invisible in Pupil Player. A bunch of features like obtaining fixations, blinks and gaze mapping are however only available in Pupil Cloud. It is not possible to use Pupil Core's gaze estimation pipeline in Pupil Capture after connecting Pupil Invisible directly to a computer.
Let me know if you have further questions!
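To make point 3 concrete, here is a rough numeric sketch of why the parallax offset shrinks with distance; the 3 cm camera/eye offset used below is a made-up illustrative value, not the actual geometry of the glasses:

from math import atan2, degrees

offset_m = 0.03  # hypothetical scene-camera/eye offset, for illustration only
for distance_m in (0.3, 0.5, 1.0, 2.0):
    error_deg = degrees(atan2(offset_m, distance_m))
    print(f"target at {distance_m:.1f} m -> ~{error_deg:.1f} deg parallax offset")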
hello guys, does Invisible calculate pupil diameter?
Hey, Pupil Invisible's eye camera angles are not suited for that. Your current option for pupillometry is Pupil Core.
hey @papr, turns out the university does have an Invisible. So Core with a tablet then for my custom app?
If you are dependent on pupillometry, then yes. Note that uncontrolled light environments can make it difficult to get clean pupillometry data due to the pupillary light reflex. Maybe there are other non-pupillometry metrics that can help answer your research question (allowing the use of Invisible).
well, the basis of the research is pupillometry, but I will talk to my advisor to see if there are other options. thanks again!
Hello Pupil Labs team, our Pupil Invisible OnePlus 8 companion app flickers and the screen stays bright. Is this issue known?
Hi @user-f36bd4! No, I don't think I have heard of such an issue before. Could you maybe share a video of this behavior?
@nmt Hey Neil, we have been exporting the raw eye-tracking data from the phones to the computer. So far we have not used cloud services.
For the video footage inside our room, we are running the video data through a video conversion script that was provided by pupil.
Hi @user-2ecd13. Are you referring to the script that enables third-party videos to be loaded in Pupil Player and then manually synchronised with the Invisible recording?
Hi Marc, I just shared the google drive folder with that email. The folder was too large to send via email.
Hi, I had a look at why Player was not able to open the recording. Turns out, the shared recording is a partially upgraded recording. Pupil Player crashed during the initial recording upgrade due to PI world v1 ps2.mp4 being corrupted. To fix this issue, follow these steps:
1. Delete PI world v1 ps2.mp4 and PI world v1 ps2.time
2. Delete info.player.json
3. Rename info.invisible.json to info.json
4. Open the recording in Pupil Player
Player will re-attempt the recording upgrade and this time it should show you video from the first part that was recorded before the error appeared.
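If you have to repair several recordings, the steps above could also be scripted; a minimal sketch (the recording_dir path is a placeholder you'd adjust):

from pathlib import Path

recording_dir = Path("path/to/recording")  # placeholder, adjust to your folder

# 1. Delete the corrupted scene video part and its timestamp file
for name in ("PI world v1 ps2.mp4", "PI world v1 ps2.time"):
    (recording_dir / name).unlink(missing_ok=True)

# 2. Delete info.player.json
(recording_dir / "info.player.json").unlink(missing_ok=True)

# 3. Rename info.invisible.json to info.json
(recording_dir / "info.invisible.json").rename(recording_dir / "info.json")

# 4. Now open the folder in Pupil Player; it will re-attempt the upgrade.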
I am trying to set up the Monitor. I have to figure out how to do this in our IT security setup.
AFAIK creating the hotspot on the companion device will not work at all, as mDNS is not supported in Android 11 (which is mandatory for the connection to the glasses to work) [1][2].
Thus I am trying to use a hotspot from my company phone, but it seems to partially break sometimes as well (the streaming works, but none of the actions do; probably due to UDP blocking?)
[1] https://blog.esper.io/android-dessert-bites-26-mdns-local-47912385/ [2] https://issuetracker.google.com/issues/140786115
"but all actions will not" - Could you clarify what you mean by that?
Hey, creating the hotspot on the companion device is not recommended. Rather, use a third device to create the hotspot and connect the companion and monitor device to that hotspot.
@papr this process should also work if instead of the cloud data I use the data obtained directly from the device, correct? If I'm not mistaken, I am downgrading from 200Hz (cloud) to ~66Hz (local), and that's pretty much it, right?
correct!
Note that since a recent update, the real-time gaze estimation rate is much higher than 66 Hz. It's >120 Hz now.
thank you @marc and @papr !
Hi, was just wondering what the cloud storage capacity was for the Pupil Invisible Glasses?
Hi @user-3b418f! The storage capacity in Pupil Cloud is unlimited!
Oh cool, do you think there would ever be any plans to change that if Pupil Labs got bigger, etc.? Trying to work out data management from the perspective of higher education.
We might limit the capacity in the future, or set up something like a paid subscription for very large accounts. But nothing is currently planned, and we'd make sure to announce a change like this early!
can I make a request in advance that if you do make plans, you create pop-ups or disclaimers that appear on the cloud website itself? We never check the generic institutional email we used for the account creation 🤣
Thanks for the feedback! Yes, we'll make sure to make it obvious within the app itself as well!
Haven't used the Invisible in a while. I'm delighted to see the new fixation detector!
So basically, the way to apply the various visual interpretations of the gaze data is by "enriching" them. First create a project from the footage you want "enriched", then within that project select "add new enrichment", which is the blue button at the top. Then follow the process of adding the enrichment you want. For the little circle that tracks the gaze data, select Gaze Overlay. Etc...
...at the same time, I see it in the cloud, but not when I download the recording?
What's going on there?
I right click, and select "download recording" (and not "download raw data")
I would have expected the gaze and fixation overlay on the raw video.
It is as if the option to download recording is just returning the raw data.
it returns a downloadable pupil labs folder "raw-data-export"
...and, why can I enable fixations when viewing one video, but not another?
Then once added, you can select the play button to process the new enrichment that should appear on the screen, and then download that.
I have created a project and have tried this. Something seems to be malfunctioning. And what about this issue that downloading only returns raw data, but never the processed video?
It looks like the application of the gaze overlay enrichment stalled for some reason, which may explain why one video lacked the overlay. One source of confusion is that another video had the overlay without being in a project, and without enrichments applied to it.
Hi Papr, thanks for this... I will try it shortly. On a side note, do we know why the PI world v1 ps2.mp4 was corrupted? It keeps happening for every recording now: when we go to stop any recording, the app freezes and continues to record despite clicking stop. We aren't able to use this device because it keeps producing corrupted files, so I would love to figure out why it's happening 😦 Thanks
Could you please use a second phone and film the steps to reproduce the issue? Regarding the corrupted scene video: There is one known issue where a disconnect while the in-app preview is opened can cause the app to crash.
I just tried deleting those files and renaming the info file, but when I go to put this into Pupil Player it says: 'Invalid Recording - There is no info file in the target directory'
@user-b14f98 Since the introduction of the fixation detector, fixation data is computed for every recording on upload to Pupil Cloud. Fixation data has also been calculated post hoc for all recordings already in Pupil Cloud before the release. If you download a recording in CSV format (i.e. "Download Recording" rather than "Download Raw Data"), the files should include a fixations.csv file that contains the fixation data. Let me know if that is not actually the case for any of your recordings!
The scene video included in this download is intentionally the unaltered original scene video with no gaze overlay visualizations applied. The idea is that you could use it to create any custom visualization on top of it, without having to deal with visualizations already in there. If you are interested in a video with gaze overlay, using the "Gaze Overlay" enrichment as @user-3b418f said would be the recommended way. If the computation of this enrichment is not finishing, something might be wrong with it. Could you let me know its enrichment ID (visible in "View Details") so we can take a look?
Outside of using the Gaze Overlay enrichment, there is currently no other way of obtaining such a video, except maybe screen-recording your browser. The scanpath visualization for fixations, which is now available in the recording player, can currently not be exported as video. We are aware that the options for exporting scene video with visualizations are still pretty limited. We are currently working on two things to improve on this:
1) We will soon publish a Python library together with guides that make the handling of video, timestamp matching, and the manual creation of custom visualizations much easier. 2) A bit further down the road still, we plan on adding a bigger video rendering tool, which will allow you to create all kinds of visualizations on top of the scene video, including gaze, fixations, eye video overlay, face/surface detections, etc. Release will not be before 2023 though.
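In the meantime, a do-it-yourself overlay is possible with e.g. OpenCV and pandas. A rough sketch: the file names and column names below are illustrative, and the naive constant-rate index matching stands in for the proper timestamp matching a real script would need:

import cv2
import pandas as pd

gaze = pd.read_csv("gaze.csv")  # adjust column names to your export
video = cv2.VideoCapture("scene.mp4")
fps = video.get(cv2.CAP_PROP_FPS)
size = (
    int(video.get(cv2.CAP_PROP_FRAME_WIDTH)),
    int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)),
)
writer = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Naive matching: pick the gaze sample by index ratio, assuming ~200 Hz gaze
    gaze_idx = min(int(frame_idx / fps * 200), len(gaze) - 1)
    x = int(gaze["gaze x [px]"].iloc[gaze_idx])
    y = int(gaze["gaze y [px]"].iloc[gaze_idx])
    cv2.circle(frame, (x, y), 20, (0, 0, 255), 3)  # red gaze circle
    writer.write(frame)
    frame_idx += 1

video.release()
writer.release()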
Sounds like there might be a spelling mistake in there. The renamed file should be info.json. If this is already the case, could you please share it here?
Hi Papr, I went to upload the new folder and saw that I was calling it info.json.json (.json is added by default) - sorry, my mistake, I'm still very new to all of this. Thanks for your patience; it seems to be working fine now. Appreciate it!
Hello, I am using Pupil Invisible glasses and I want to use the scene video in my code, but I have a problem connecting to the device. I can see the video at http://pi.local:8080/ but I get this in Python using the real-time API (I use Windows 10 and Python 3.10.4):
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device(max_search_duration_seconds=10)
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'receive_matched_scene_video_frame_and_gaze'
Can someone help me with this?
Thanks, Marc. To reiterate, the issue is that the top video in the image below gave me the "fixation" toggle without any additional work, before it was in a project. However, I cannot seem to activate the same "fixation" toggle in the bottom video, with the enrichment ID 22e8850b-e664-48c3-b173-d14f498117ca
After noticing the difference I did add both videos to the same project ("cycling") with the same "gaze overlay" enrichment applied from the start to the end of the video. The issue was not resolved.
Notice that the bottom video has an "!" for an icon. Something may be wrong with it.
Thanks for the screenshot, that's helpful! Indeed this exclamation mark indicates that something went wrong during the processing of this recording. Given that the fixation toggle is not showing, the error seems to have happened during the fixation calculation. Via the enrichment ID you shared we should be able to identify the recording, see what went wrong, and fix it! I'll keep you up to date!
@marc Dear Ladies and Gentlemen, after the University of Passau bought the Pupil Labs Invisible, we are now conducting a study with it. The following error message was displayed. I hope you can help us with this:
Hi @user-f408eb. Thanks for sharing the screenshot. Please reach out to [email removed] and a member of the team will help you.
If device is None, that means that no device was discovered. Could you share details about your network setup?
I connected the companion device and my PC to the same wifi router. I thought it was okay since the local app worked. Is any other setup needed for connecting?
We have restarted the processing of the failed recording, which has fixed the issue. The exclamation mark should now be gone and you should be able to toggle the fixation visualisation!
Thanks!
alternatively to
device = discover_one_device(max_search_duration_seconds=10)
you can also use
from pupil_labs.realtime_api.simple import Device
device = Device("pi.local", 8080) # or use ip address instead of domain name
Does this mean that the device is not connected? Device(ip=pi.local, port=8080, dns=None)
The device is still 'NoneType'; I think it did not work
We want to use the Invisible for user testing on a digital screen. We want to track what the users see first and how they navigate through the prototype. Currently it seems that the glasses have a slight shift to the left if we look at the fixations. Is there a way to improve this? We have tested with several people. Another question we have is whether a certain distance is needed between the glasses and the screen?
Is it possible to get the realtime stream of the gaze data into Unity3D for pupil invisible?
You can use our real-time network API (https://docs.pupil-labs.com/invisible/getting-started/understand-the-ecosystem/#real-time-api) to access Pupil Invisible's data streams. However, we have no Unity3D integration for Pupil Invisible!
Hi @user-011cbf. Thanks for your questions!
For users looking at a digital screen positioned less than 1 m away, there may be some parallax error. You can use the offset correction feature to compensate for this. To access this feature, click on the name of the currently selected wearer on the home screen and then click "Adjust". There are instructions in-app. The offset you set will be saved in the wearer profile and applied to future recordings of this wearer automatically. Important to note: the offset is only valid at the viewing distance at which it was set, which should be okay for screen-based work.
Have you also checked out the Marker Mapper Enrichment? You can add April Tag markers to your screen and use the enrichment to obtain gaze positions relative to the screen: https://docs.pupil-labs.com/invisible/explainers/enrichments/#marker-mapper
Thanks so much this helped a lot to solve it.
Thank you for the response. The real-time stream works fine, and I wanted to get the marker IDs using AprilTags. However, I have some issues installing pupil-apriltags and could not fix them.
C:\Users\Admin>python Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.
from pupil_apriltags import Detector
at_detector = Detector(families='tag36h11', ... nthreads=1, ... quad_decimate=1.0, ... quad_sigma=0.0, ... refine_edges=1, ... decode_sharpening=0.25, ... debug=0) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\Admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pupil_apriltags\bindings.py", line 285, in init self.libc = ctypes.CDLL(str(hit)) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\ctypes__init__.py", line 374, in init self._handle = _dlopen(self._name, mode) FileNotFoundError: Could not find module 'C:\Users\Admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pupil_apriltags\lib\apriltag.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Any tip to fix the issue? I tried to update bindings.py as some people recommended, but it does not take effect.
This I do not know. Do you have any insight, @papr?
Could you please share the file contents of C:\Users\Admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pupil_apriltags\lib\ ?
Here it is please!
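For anyone hitting the same apriltag.dll error: on Python 3.8+ on Windows, DLL dependencies are no longer resolved via PATH, so explicitly registering the directory before the import sometimes helps. A sketch of this workaround, using the path from the traceback above:

import os

# Register the directory containing apriltag.dll before importing the package
# (path copied from the traceback above; adjust it to your machine).
os.add_dll_directory(
    r"C:\Users\Admin\AppData\Local\Packages"
    r"\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache"
    r"\local-packages\Python39\site-packages\pupil_apriltags\lib"
)

from pupil_apriltags import Detector  # the DLL lookup should now succeed

at_detector = Detector(families="tag36h11")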
Is it possible to send annotation events over the network to Pupil Invisible? If I remember correctly, this was possible with Pupil Core via 0MQ; does a similar solution exist for the Invisible companion app?
Hi @user-cc819b! Yes, this is possible via the real-time API. You can find an introduction here: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/
And then follow that with - https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/track-your-experiment-progress-using-events/
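A minimal sketch of what sending an event looks like with the Python simple API (based on the guides above; the event name is just an example):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
# By default the event is timestamped on arrival at the phone; for precise
# syncing you can also pass your own clock via event_timestamp_unix_ns.
device.send_event("stimulus onset")
device.close()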
thanks!
Hi,
I have a question. We have an application where we would like to do both head pose tracking and surface tracking simultaneously in the same vehicle.
For surface tracking, we were planning to use different-sized markers.
For head pose tracking, as I understand, we are required to use same-sized markers.
I was thinking what we could do is use 36h11 markers for head pose tracking and run surface tracking only with Circle21h7 markers.
As I understand, head pose tracking is currently compatible only with 36h11 markers.
Would this be okay? As in, would there be a problem with the head pose tracking algorithm owing to the different-sized Circle21h7 markers placed in the vehicle?
And is it planned that the head pose tracker will start detecting Circle21h7 markers in the future, which could lead to problems with this functionality?
Thanks!
Hi @user-d4d4bc! This might work just fine, but some of the marker families can have false positive detections with markers of other families. As this is combining square and circular markers, I imagine this would not be the case, but I'd recommend validating this. If we change the compatibility of the head pose tracker to other families, we would most likely allow the user to select the family, so this should not be an issue.
An alternative to using different families would be to split the markers into two sets by their ID. I.e. use constant size markers for head pose tracking with a specific set of IDs, say 1-16. And varying size markers for surface tracking with IDs 17-32. For the surface tracking you can easily select which of the detected markers should be used for surface tracking. For the head pose tracker you'd have to generate the model without the other markers present (or covered).
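A minimal sketch of splitting detections by ID along those lines (the ID ranges and the input image file name are illustrative):

import cv2
from pupil_apriltags import Detector

HEAD_POSE_IDS = set(range(1, 17))   # constant-size markers for head pose
SURFACE_IDS = set(range(17, 33))    # varying-size markers for surfaces

detector = Detector(families="tag36h11")
gray = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative input
detections = detector.detect(gray)

head_pose_markers = [d for d in detections if d.tag_id in HEAD_POSE_IDS]
surface_markers = [d for d in detections if d.tag_id in SURFACE_IDS]
print(len(head_pose_markers), "head pose markers,",
      len(surface_markers), "surface markers")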
Hi there,
I'm having some really annoying issues while using the glasses with Invisible. I think there might be a hardware connection issue, as the glasses seem to randomly disconnect from the phone, with error messages and phone vibrations. This happens approximately 9 times out of 10, and my team and I have not been able to record any usable signal from our experiments.
Is there any way to fix this issue, or do we need a hardware replacement?
Thank you
Hi @user-2c09ac! This sounds like a hardware issue. Please reach out to [email removed] for a diagnosis and potential repair/replacement!
Thanks Marc for the quick reply.
Thanks Marc.
Hi! Question about Pupil Cloud: I have created a project and added two enrichments: raw data export and gaze overlay. I have downloaded this data to my local storage. Now when I add a new recording to the project and compute the enrichments, is there any way to download only the enrichments for the newer recording? Is there a way to download enrichments for each recording separately?
Hi @user-413ab6! You will need to download the entire enrichment. For Raw Data Exporter it's likely not that big of a deal because the download size should remain small. However, for Gaze Overlay the download can get quite large as it consists of videos, which can take a while.
This is good feedback and we will incorporate a way to make more granular downloads in future updates to Cloud.
Thanks for the prompt response.
Hello! Question about the blink detection: is there any chance to get the detected blinks as a .csv file for data collected before the introduction of this feature in the Cloud? I am specifically talking about data collected in January 2022. Thanks for your support!
Hi @user-4f3037! On release of this feature, we initially did not retroactively calculate blink data for all recordings in Pupil Cloud. Recently, however, we reversed this decision and added blink and fixation data to all existing recordings. So in theory your recordings should already have this data, and the blinks.csv file should be included when selecting Download -> Download Recording or when using the Raw Data Export enrichment.
If this is not the case for your recordings, then please let me know the recording IDs in question, so we can invoke the calculation. Alternatively, make a new project with all affected recordings and let me know the project ID if that is easier!
Is it possible to use Invisible with Android phone other than OnePlus 8 or 6?
Hi @user-8a4f8b! No, only those two models are currently supported. We have very specific requirements for the hardware and operating system and these things vary wildly between manufacturers and models, so we need to limit our support to just those devices. You can use any OnePlus 6 or 8 device though (assuming a compatible Android version), they do not need to be purchased from us.
thanks. I live in Japan, and OnePlus phones do not have the Technical Conformity Mark (https://www.tele.soumu.go.jp/e/adm/monitoring/illegal/monitoring_qa/purchase/purchase.htm), which is necessary to legally use any smartphone in Japan. Do you know of any smartphone that is legal to use in Japan for the Invisible?
@marc or would it be possible to transfer/stream recorded data from the OnePlus via USB to a PC (i.e. without using wifi or bluetooth)?
Update: I can confirm that if you purchase a OnePlus 8 device in Japan, it does have the required certification. In case you have not made your Pupil Invisible purchase yet, you could also buy it without the phone from us for a discounted price.
Hi @user-8a4f8b! It is possible to transfer recordings via USB to a computer and to operate Pupil Invisible fully offline (except at the very beginning, when you need to create an account and log in to the app with internet). However, this way you would not be able to use Pupil Cloud, which means you'd not have access to e.g. blink and fixation data, or various gaze mapping tools, so generally this is not recommended.
There is no other model of phone that is currently compatible. If you purchased a OnePlus 8 device in Japan, might it have the required Japanese certification then? I am personally not an expert in how this certification works, but I know that we have a lot of Pupil Invisible users in Japan, who must have a way of dealing with the issue 🤔 I'll consult with the team to see if anyone else has more insight into this!
I just found this. so it does seem possible: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html
Please note that you cannot stream data in realtime via USB. You can only transfer the recordings once they are exported via the app.
Hello, what's the end-to-end sample delay for real-time streaming of gaze data with the Invisible, and at what frequency can the gaze data be accessed?
The delay will depend on your connection. You can either use wifi or a USB-C hub with ethernet to connect your streaming client to the phone. I can try to measure the delay on the wired connection later. The real-time API is able to receive data at the full frame rate that the phone is producing.
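A minimal sketch for measuring the effective rate on the receiving side with the Python simple API:

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
n_samples, t0 = 0, time.monotonic()
while time.monotonic() - t0 < 5.0:  # count gaze samples for five seconds
    device.receive_gaze_datum()  # blocks until the next sample arrives
    n_samples += 1
print(f"~{n_samples / 5.0:.0f} gaze samples per second")
device.close()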