Hi, I have some problems with some data. There are some videos where the gaze circle is stuck in one place, or where I don't have the gaze circle at all.
@user-82e5bd Hi! Please send an email to info@pupil-labs.com along with your serial number and/or order ID and we can provide support.
Hello. Just after the update, the glasses stopped being recognized by the mobile phone. However, they also do not work on the PC with Pupil Capture. I tried another (data transfer) cable and it doesn't work either. Any suggestions? Thanks
Please contact info@pupil-labs.com in this regard.
Hello, I'm trying to use the Network API for the Pupil Invisible. For this purpose I installed the ndsi module. But when I try to import ndsi there is an error because a module named formatter can't access the frame module. Here is the repository with the files: https://github.com/pupil-labs/pyndsi/tree/master/ndsi. Could you help me with this? Thanks
Please see this conversation for reference
https://discord.com/channels/285728493612957698/633564003846717444/732964406509436969
Thanks, but I managed to install pyndsi (with ffmpeg and turbojpeg). It says that the module was installed successfully. So I don't understand how it can't find the frame file.
Could you please post the complete error message?
Sure
First I thought this was because it couldn't import the frame.pyx file, so I tried import pyximport; pyximport.install(), but it still doesn't work
There are build requirements and runtime requirements. The former are fulfilled. The latter are not. Please see https://discord.com/channels/285728493612957698/446977689690177536/796747481693028443
Ok thanks, I'll see that
So I noticed that I don't have a frame.pyd file. Maybe that is the problem. But I don't know how to get this file. Could you help me please?
Were you able to find the ndsi folder in the site-packages? If so, could you please share the contents of that folder?
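For example, something like this sketch would list the installed package contents without importing it (a plain import ndsi would just fail with your error again):

    import importlib.util
    import pathlib

    # Locate the installed ndsi package without executing it
    spec = importlib.util.find_spec("ndsi")  # None if ndsi is not installed at all
    package_dir = pathlib.Path(spec.origin).parent
    # List every file, so we can see whether a compiled frame.* extension is present
    print(sorted(p.name for p in package_dir.iterdir()))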
Sure
frame.cp39-... is the file that you need to drag onto Dependencies_Gui
Ok, so it says the same as in the discussion you sent; it must be the same problem
Thanks for your help
Were you able to solve it based on the previous conversations?
Yes thanks
Hello all! I have a question regarding the power supply; I couldn't find an answer online yet. The question is: if I want to make long recordings using the Pupil Invisible, is there a possibility to have an additional external power supply attached to the phone while the Pupil Invisible is recording? Something like a Y USB-C cable or similar? Did anybody try this?
Hi @user-9d72d0, it is not possible to charge the phone while using the USB connection. Based on our limited feedback, those who have tried using y adaptors have run into issues where the Android operating system switches to charging mode automatically thus blocking our connection. We usually recommend hot swapping to a freshly charged phone to extend battery life, or with careful planning you can leverage the warp speed charger's fast charging capabilities in between recording sessions.
@nmt thank you very much for your detailed answer! By hot swapping, do you mean it's possible to continue a running recording session on a second Android device if the first needs to be recharged? Does the second Android device need to be bought at Pupil Labs, or is any Android phone satisfying specific requirements fit to run the Pupil Invisible?
@user-9d72d0 it is not possible to continue the same recording on the second phone. You would have to start a new recording, but the process of switching the phones is very quick. You can use any OnePlus 8 (or the no longer available OnePlus 6) phone. Those are the only models supported by the Companion app. You can purchase one through us or any other retailer.
Thank you also very much for your answer!
Hey, I am using the Pupil Labs wearable eye tracker on Windows, and when I use the Pupil Capture program, it tells me that it cannot connect to the device
@user-42a6f7 looks like you double posted to both software-dev and here. I will remove your post in software-dev
Are you using Pupil Invisible or Pupil Core eye tracker?
Pupil core
ok, let's migrate to your post in the core channel. In the future, please just post once in a single channel.
Okay awesome
Hi, I have a few queries: 1] In the documentation, gaze ps1.raw fps is 65 Hz but its respective .time file is 55 Hz. [I opened both the files and the .raw file had twice the number of datapoints as in .time file, which is (as expected and) correct.]. My query here is what is the sampling rate? 2] I also noticed gaze_200.raw and .time, are you just interpolating the data? If yes, any reason for choosing 200 Hz? Can we get more options to choose for interpolation like 30 Hz or 100 Hz or 500 Hz on the cloud? Thanks!
Hi @user-df9629! 1) The real-time framerate you get depends on the Companion device you use. On the OnePlus 8 it is ~65 Hz, on the OnePlus 6 it is ~55 Hz. We did update our documentation to 65 Hz once we started using the OnePlus 8, but I guess we missed a spot! Do you remember where you saw the 55 Hz? I'll also try to find it myself!
2) The eye cameras of Pupil Invisible are recording at 200 Hz. After a recording is uploaded to Pupil Cloud, the gaze data will automatically be recomputed at 200 Hz based on the full framerate eye videos. Thus, the data is not interpolated but is actual 200 Hz. The real-time signal computed on the phone is limited by the phone's computational power. This data could of course be further interpolated to 500 Hz (or downsampled), but we do not offer a feature that would do that for you automatically.
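If you need other rates for your analysis, downsampling the 200 Hz export yourself is straightforward. A minimal sketch with pandas; the column names follow the Cloud raw data export and are assumptions, so adjust them to your file:

    import pandas as pd

    # Load the gaze CSV downloaded from Pupil Cloud (column names assumed)
    gaze = pd.read_csv("gaze.csv")
    gaze["time"] = pd.to_datetime(gaze["timestamp [ns]"], unit="ns")
    gaze = gaze.set_index("time")

    # Average within 10 ms bins to obtain an approximately 100 Hz signal
    gaze_100hz = gaze[["gaze x [px]", "gaze y [px]"]].resample("10ms").mean()
    print(gaze_100hz.head())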
Thank you @marc ! 1) That clarification helps a LOT. In the documentation file, for the file_name gaze ps1.time ~fps mentioned is 55 while ~fps for gaze ps1.raw is 65 Hz. 2) I am VERY pleased to know it is not interpolating! Thank you once again!
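For reference, the raw sensor files can also be inspected directly with numpy. A sketch assuming the layout from the recording format documentation (little-endian float32 x/y pairs in the .raw file, uint64 nanosecond timestamps in the .time file), which also matches the twice-the-datapoints observation above:

    import numpy as np

    # Each gaze sample is an (x, y) pair of little-endian float32 values
    gaze = np.fromfile("gaze ps1.raw", dtype="<f4").reshape(-1, 2)
    # One uint64 Unix timestamp in nanoseconds per sample
    ts = np.fromfile("gaze ps1.time", dtype="<u8")
    assert len(gaze) == len(ts)  # two floats per timestamp

    duration_s = (ts[-1] - ts[0]) * 1e-9
    print(f"{len(ts)} samples over {duration_s:.1f} s, ~{len(ts) / duration_s:.0f} Hz")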
Hey guys, does the exported data in gaze_positions.csv have the offset applied?
...and question #2: Does Pupil Labs have access to the cloud data?
...beyond automated processing to calculate gaze? From the perspective of a researcher, this has implications for privacy of human subjects data that could get a bit sticky on our end.
Can you get 200Hz data without the cloud? Asking for someone in China.
Hi @user-e0b992! No, currently only recordings uploaded to Pupil Cloud receive the 200 Hz gaze data through post-hoc cloud processing.
Are there any workarounds for working from China?
Hey! Will the IMU data be streamed in the future or will it be only accessible via the post processing later?
It is already being streamed, to my knowledge
Ah okay, because at first only the gaze and the image were streamed and the IMU was "only" implemented, but no data was streamed
Are you referring to the example from the docs?
I did an implementation to receive the gaze and image live from the PI. During this time I also integrated the IMU sensor streaming, but as far as I know the IMU data was not streamed at that time
Please give it a try again and let us know if you have trouble receiving the data.
https://discord.com/channels/285728493612957698/633564003846717444/834345115304067092 Here I also only got a response regarding using recorded data, and my current implementation does not receive any IMU data
Hi @user-fb5b59. IMU data is streamed live and you should be able to receive it by adding a few lines of code to the network API example: https://docs.pupil-labs.com/developer/invisible/#network-api (I have just tested this). What is your current implementation?
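For reference, here is a minimal sketch of that extension, adapted from the network API example in the docs; treat any detail that differs from the linked example as an assumption and compare against the docs:

    import time
    import ndsi

    SENSOR_TYPES = ["imu"]  # the docs example uses ["video", "gaze"]; "imu" is added here
    SENSORS = {}

    def on_network_event(network, event):
        # Attach to newly announced IMU sensors and enable streaming
        if event["subject"] == "attach" and event["sensor_type"] in SENSOR_TYPES:
            sensor = network.sensor(event["sensor_uuid"])
            sensor.set_control_value("streaming", True)
            sensor.refresh_controls()
            SENSORS[event["sensor_uuid"]] = sensor
        # Clean up sensors that disappear from the network
        if event["subject"] == "detach" and event["sensor_uuid"] in SENSORS:
            SENSORS.pop(event["sensor_uuid"]).unlink()

    network = ndsi.Network(formats={ndsi.DataFormat.V4}, callbacks=(on_network_event,))
    network.start()
    try:
        while network.running:
            if network.has_events:
                network.handle_event()
            for sensor in list(SENSORS.values()):
                while sensor.has_notifications:
                    sensor.handle_notification()
                for datum in sensor.fetch_data():
                    print(datum)  # batched IMU values: timestamps plus gyro/accel readings
            time.sleep(0.01)
    finally:
        network.stop()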
Thanks! Actually, I just missed that I had set the IMU sensor to "not supported yet" in my code. Now it can connect to the sensor, but sometimes I get value errors, even in the smartphone app, if I activate the sensor. I will recheck this.
Edit: The error values correspond to the IMU sensor (e.g., SensorUuid 456d57f5-daef-4607-a593-7f4583b1974a for the IMU sensor, and on the smartphone "Value 456d57f5-daef-4607-a593-7f4583b1974a of t..." is displayed)
@nmt Thanks for confirming that it works!
Hey folks, I have to ask these questions again, because it looks like they were lost to the sands of time. 1) Does the exported data in gaze_positions.csv have the offset applied? 2) Does Pupil Labs have access to the cloud data? This is important with regard to the privacy of participants in our studies.
I'm not sure that I understand how you are receiving the data stream. Please can you describe your implementation in more detail?
https://docs.pupil-labs.com/developer/invisible/#network-api I am using this code, but converted to C++. It is working fine for the gaze and video sensors. In this example, the gaze and world video are enabled (see SENSOR_TYPES = ["video", "gaze"] and the sensor name comparison with "PI world v1" and "Gaze"). Now I added the IMU sensor (sensor name comparison with "BMI160 IMU") and it is not working well for me, but this might be an implementation issue on my side. Of course, it would be helpful to additionally get an example including the IMU sensor
You should be able to extend the example by subscribing to the "imu" sensor type and processing the sensor data. Please note that there is a known bug in pyndsi which causes the IMU timestamps to not be converted to seconds. The v4 protocol publishes timestamps in nanoseconds. For backwards compatibility, pyndsi usually converts these to seconds (but not for IMU data at the moment).
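Until that is fixed, you can scale the IMU timestamps yourself; a tiny sketch (the example value is arbitrary):

    # The v4 protocol publishes nanoseconds since the Unix epoch
    ts_ns = 1620000000123456789  # arbitrary example value as delivered for IMU data
    ts_s = ts_ns * 1e-9  # manual ns -> s conversion that pyndsi skips for IMU data
    print(ts_s)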
Please provide full error messages and a clear description of the issues such that we can get more precise feedback.
Actually, the error is fixed. The problem was just using the wrong flags in "set_control_value" and "refresh_controls" for communicating with the device. Thank you very much for your fast support
How do I delete an annotation within Pupil Player? (I annotated a section of my video with the wrong annotation, so I need to delete and correct it)
I have Invisible eye-tracking glasses. I had an individual wear them in an arena, and the signs that were 150 feet away are blurry; you cannot see what the signage says. Any way to fix this to make the video footage sharp?
Unfortunately, there is currently no option to delete annotations in Pupil Player. You would have to discard/delete the incorrect entries from the annotations.csv file after exporting.
That seems silly thank you!
Hello all! Regarding the IMU Sensor, I also have a question: What are the units for the gyro and accel data recorded to the .raw files? In the documentation I only found the graphic showing the IMU coordinate system orientation relatively to the pupil invisible itself.
Please see raw recording data format here: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793
This doc is also linked via Pupil Invisible dev docs: https://docs.pupil-labs.com/developer/invisible/
Ah yes, sorry and thank you very much!
You can also get IMU data in convenient CSV format if you use cloud raw data exporter enrichment: https://docs.pupil-labs.com/cloud/enrichments/#raw-data-exporter (scroll down to IMU section)
All right, thanks also for pointing this out!
Hi @user-b14f98! I am sorry if we have missed your previous post!
1) Yes, the gaze offset is applied everywhere where gaze data is accessible, including the CSV download in Player.
2) From a purely technical point of view Pupil Labs does have access to the data, since we control the server where it is stored. This access is however strictly regulated. Only members of our server reliability team have access and all activity is logged. We have a strict policy of not accessing user data unless on user request or to investigate bugs. Legally this is regulated in our terms of use and privacy policy, which you can find here: https://pupil-labs.com/legal/
Thanks, Marc!
@user-9d72d0 are you using Pupil Cloud / Have you tried any of the enrichment features yet?
Not yet, but I will surely try it out for testing purposes.
Hello, the lab I'm in recently moved, and we're trying to decide whether to buy a new Invisible or to buy the Invisible we purchased a year ago from our previous institution at a slightly discounted price. So we were wondering, have there been any significant hardware changes in the last year?
Hello @user-7192ac! No, there have been no significant hardware changes. New devices ship with a OnePlus 8 phone, which is more performant than the OnePlus 6, but you could also purchase that phone separately (from us or anywhere else).
one more question, do you expect any updates to the hardware in the near future?
@marc Thank you for your speedy answer!
I'm just about to update our laptops to the latest Pupil Core software and reconnect with our Pupil Invisible glasses. However, I've completely forgotten where you get the latest update for pupil_invisible_monitor.exe. Thanks, Gary
Hi @user-057596! You can find it on our Github page here: https://github.com/pupil-labs/pupil-invisible-monitor/releases/tag/v1.3#user-content-downloads
Thank you Marc, you can tell how long my mind has been on furlough
Hey! What is the maximum data rate at which IMU data can be streamed? Currently I can receive IMU data at only approx. 3 Hz.
I have Invisible eye-tracking glasses. I had an individual wear them in an arena, and the signs that were 150 feet away are blurry; you cannot see what the signage says. Any way to fix this to make the video footage sharp?
How do I delete an annotation within Pupil Player? (I annotated a section of my video with the wrong annotation, so I need to delete and correct it)
Help please ^^
Please see my response above https://discord.com/channels/285728493612957698/633564003846717444/841940203861311488
Do you know anything about this? I have Invisible eye-tracking glasses. I had an individual wear them in an arena, and the signs that were 150 feet away are blurry; you cannot see what the signage says. Any way to fix this to make the video footage sharp?
We do not have any solutions built into Pupil Cloud. This is indeed a general challenge, as the signs are very far away. It would seem that you want to "enhance" the images or increase the spatial resolution post-hoc in order to have a "sharp" image of the far-away text/image. There are methods to do this that fall into the category of Super Resolution. There has been quite a bit of progress on this topic recently using machine learning. See: https://paperswithcode.com/task/image-super-resolution/latest
Image super-resolution (SR) techniques reconstruct a higher-resolution image or sequence from the observed lower-resolution images
I am not 100% sure what the SotA is for video, but it's an interesting problem. Perhaps others here have some thoughts on the topic.
Among all the lenses with different vision corrections, is there a way to know which is which? Or is it only their left/right shape and their thickness that differ?
Yes, the left/right shape differs, as does the thickness, although the latter is difficult to see. Thus, it is important to not mix them up in the box.
Just want to ask this question again: do you guys expect any updates to the invisible hardware in the near future? I'd really appreciate an answer, even if it's "We can't answer that question." Thanks!
We do not expect any changes within the next couple of months!
@papr @marc & team cheers for including the changes on how surface_markers are detected and reported in the new pupil_player release. Makes a big difference!
Thanks for your input on that end!
I'm trying to run Pupil Invisible Monitor. Any idea how to resolve this error:
C:\Users\CVBE\Downloads\pupil-invisible-monitor-master>python setup.py install
Error calling git: "Command '['git', 'describe', '--tags', '--long']' returned non-zero exit status 128."
output: "b'fatal: not a git repository (or any of the parent directories): .git\n'"
Traceback (most recent call last):
File "setup.py", line 29, in <module>
version=str(get_version()),
File "C:\Users\CVBE\Downloads\pupil-invisible-monitor-master\deployment\_packaging\utils.py", line 74, in get_version
version = pupil_version()
File "C:\Users\CVBE\Downloads\pupil-invisible-monitor-master\deployment\_packaging\utils.py", line 58, in pupil_version
raise ValueError("Version Error")
ValueError: Version Error
You need to use git clone if you want to run from source, instead of downloading the zip file. I highly recommend using the bundled application though: https://github.com/pupil-labs/pupil-invisible-monitor/releases/download/v1.3/pupil_invisible_monitor_windows_x64_v1.3-10-g1f171da.zip
perfect, thanks
Hey guys, I am looking at your documentation at docs.pupil-labs.com right now to figure out the options for integrating a Pupil Invisible in our measurement environment. My current understanding is as follows: using a Pupil Invisible and the Companion app, I will be able to access all data (camera video, gaze + IMU data) via the NDSI v4 API. Is this correct? I'd appreciate an answer; I'm getting a little confused by the load of different documentation pages right now
Generally correct, NDSI is for realtime access though. Alternatively, you can access the data post-hoc programmatically via our Pupil Cloud API.
Perfect. Realtime access would be the goal as it will enable us to sync the data with our other systems. Could you give me a ballpark estimate on the latency induced by the gaze detection? Another question: Is a purely offline usage a problem for any of the systems?
I do not have an estimate on the latency right now. You can check it by running this ndsi script https://docs.pupil-labs.com/developer/invisible/#time-synchronization
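As a rough sketch of the idea: once the host and Companion clocks are synchronized (e.g., via NTP, as described in the time-sync docs), you can compare the arrival time against the datum timestamp. The attribute name on the datum is an assumption; check what the network API example yields:

    import time

    def latency_ms(datum_timestamp_s: float) -> float:
        # One-way delay estimate; only meaningful if host and Companion
        # clocks are synchronized, since it compares two different clocks
        return (time.time() - datum_timestamp_s) * 1000

    # Usage inside the example's fetch_data() loop, assuming gaze timestamps
    # arrive in seconds since the Unix epoch after pyndsi's conversion:
    #     print(latency_ms(datum.timestamp))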
By pure offline usage, do you mean not using Pupil Cloud?
I already found the script, unfortunately we are still in the planning-phase and I didn't get my hands on one of your devices yet. Will check asap.
Not using Pupil Cloud would be the idea. Just Pupil Invisible + the Companion app and additionally some other code to retrieve data via the API while the experiment is running. For various reasons we will not be able to allow any data transfer to external servers.
Is there a plugin available for real-time or post-hoc heat map extraction from Invisible recordings?
Thank you marc, one final question directly from the PI of my lab: "If we want to buy some models in the next 6 months, should we wait?"
There will be no hardware update in the next 6 months. We are of course working on things, but nothing has a foreseeable release date yet.
Hi @user-b10192! Since the Pupil Invisible gaze estimation pipeline is not based on pupil detection, there is no corresponding confidence signal either. We do actually have a classifier in place that makes reasonable estimates about whether or not the glasses are being worn, but this data is currently not available in real-time. The classifier results are saved in the worn ps1.raw files and are immediately available after finishing a recording.
Hi there, we are currently displaying the images from the two eye cameras via the NDSI network to check whether the participant is wearing the glasses correctly. However, we cannot rely on manual checking. If we could calculate/retrieve a confidence value from the left and right eye camera data, that would be really helpful for us. Is there any way to calculate/retrieve the confidence of the pupils from the eye camera data? We are using "data.bgr" to display the eye and world cameras. Are there any other attributes of "data", such as confidence or something like that?
any replies here?
You can use the marker mapping feature in Pupil Cloud or the surface tracker in Pupil Player (both are post-hoc; real-time is currently not supported for Invisible)
Is there some kind of server codes or settings that I can apply so that Pupil Capture and Pupil Invisible Monitor recognize my Invisible device also on public networks?
No, usually public networks are set up to not allow peer-to-peer communication for security reasons.
You can usually set up a WiFi network on the PC running Invisible Monitor and connect the Companion device to that, thus bypassing the public network.
do you maybe have a link to instructions on how to do this in Windows?
Thank you so much for taking the time to answer these questions! We love the Invisible and look forward to seeing what other amazing eye trackers you release in the future! P.S. I have my hopes up for a kid-friendly version of the Invisible
Hi @user-7192ac, for reference, we have found that Pupil Invisible works well for ages 3 and up. The head strap: https://pupil-labs.com/cart/?pi_strap=1 can be used to ensure a secure fit.
@user-7192ac Further improving the support for children is definitely on our list for future iterations! Thanks for the feedback!
Hello Pupil Labs Team! I'm trying to stream data from Pupil Invisible to my computer in real-time. Receiving the world video and the gaze data works fine. However, I can receive data from the IMU only every ~300 ms, and there are 60 messages in each "data batch". Is it possible to reduce the size of this data batch in order to increase the rate at which IMU data is sent? Or is there another method for real-time transmission of the IMU data? For my tests I used this example script: https://docs.pupil-labs.com/developer/invisible/#network-api
I'll duplicate my question in case it was overlooked
@user-98789c The keyword to look for is "mobile hotspot". Essentially, the laptop acts as a router to create its own WiFi, and other devices can connect to that WiFi. These should be the corresponding instructions: https://support.microsoft.com/en-us/windows/use-your-pc-as-a-mobile-hotspot-c89b0fad-72d5-41e8-f7ea-406ad9036b85
Perfect, thanks. I was trying the same thing by creating a hotspot; I had set the public WiFi as trusted (private), and it didn't work. Now I set it back to public, and it's working!
Hi guys! I have two questions regarding the IMU data I received via the Pupil Cloud enrichment. How can we get head rotation in degrees from the IMU, given that the recorded values are angular speeds rather than head rotation in degrees? Also, as mentioned before, relative head-pose measurements usually have a drift error; could you please let me know how big the drift is and what the possible ways to correct it are? Thanks!
Hi @user-17da3a! The IMU we use does have an accelerometer but no magnetometer. Thus, two out of three dimensions (pitch and roll) of the relative head-pose can be determined without drift. We have an implementation of the Madgwick algorithm, which calculates those drift-free poses from the acceleration data, available in a Pupil Player plugin. This is not yet directly compatible with the CSV download of the IMU data from Pupil Cloud, but it does work with raw recordings downloaded from Cloud. https://gist.github.com/N-M-T/ec8071bd211db287f4879e0b48874505
The yaw cannot be determined drift-free without a magnetometer. We do not, however, have an evaluation based on which I could quantify exactly how large this drift is.
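To illustrate the accelerometer-only part: while the head is not accelerating, pitch and roll can be estimated from the gravity vector alone. A minimal sketch; the axis convention is an assumption and needs to be mapped to the IMU coordinate system shown in the docs:

    import numpy as np

    def pitch_roll_from_accel(ax, ay, az):
        # Assumes a right-handed sensor frame with z roughly along gravity;
        # swap/negate axes to match the Pupil Invisible IMU orientation
        pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
        roll = np.degrees(np.arctan2(ay, az))
        return pitch, roll

    # Sensor level and upright: gravity acts only on z, so pitch = roll = 0
    print(pitch_roll_from_accel(0.0, 0.0, 9.81))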
Sorry for the slightly delayed response, I had to consult with our engineering team on that topic. Using the current NDSI protocol there is no way to change the batch size or to increase the rate of sending the data. We are currently working on a new streaming protocol, which will make it easier to configure such parameters and might make it possible to increase the data rate. It will still take at least several weeks for the new protocol to be released though. The underlying issue here is a buffer within the IMU sensor that makes it difficult to pull the data in real-time.
Thank you for your detailed answer!
Hello guys! I'm trying to stream Pupil Invisible video with LSL, but as I'm totally new to LSL I couldn't find out how to do it. If you could link me a tutorial, it would be great!
Hi @user-59adb5! We do not have a tutorial I could point you to, but you can find an implementation of an LSL "relay" here that should allow you to integrate with Pupil Invisible: https://github.com/labstreaminglayer/App-PupilLabs/
You're welcome!
Thanks!
I also highly recommend reading their documentation at https://labstreaminglayer.readthedocs.io/info/intro.html It is important to get a rough idea on how it works behind the scenes in case you run into any issues.
Hi! My team encountered an issue with a few recordings. We did a total of 24 observations that day, but we can only see the first 10 in the cloud. It seems like after a point the recordings weren't being saved anymore, even though we recorded as usual. If we do trial recordings now, they do work and are properly stored. Any ideas on what could have gone wrong?
Hi @user-1cf7f3! If you look into the list of recordings on the phone, are the recordings missing there as well?
yes, they are also not there in the phone
@user-1cf7f3 Could you check what version of the app is installed on the phone? (to check press and hold the app icon -> app info)
Also, if you connect the phone to a computer and browse the internal storage of the phone (you need to switch the phone to file transfer mode to access the internal storage; let me know if you need instructions for that), what are the contents of the Pupil Invisible folder on the phone?
Hi @marc! I just repeat my first question, which might have been overlooked. How can we get head rotation in degrees from the IMU, given that the recorded values are angular speeds rather than head rotation in degrees? I also want to know a bit more about the IMU timestamps, which are in nanoseconds. I need the time points in seconds, but when I convert them to seconds the timestamp is still just a value like "1616688788".
Hi @user-17da3a! I tried to answer your question here: https://discord.com/channels/285728493612957698/633564003846717444/844179767326277652
The timestamps are given in nanoseconds since the UNIX epoch, so 1616688788 seconds would translate to 25 March 2021, 16:13:08 UTC. This is a fairly common format for timestamps, and libraries like Pandas can typically convert those values to more date-like formats easily: pd.to_datetime(timestamps)
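For example (Pandas interprets integer input as nanoseconds since the Unix epoch by default):

    import pandas as pd

    print(pd.to_datetime(1616688788 * 10**9))  # 2021-03-25 16:13:08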
Hello guys, I have this error when I try to install Pupil Gaze Relay. Does someone know how to fix it, please?
@marc this is the odd thing I see. And it is compatible with the time it stopped working (basically after 13:00 for the afternoon)
On another note: what is the storage limit? I read here some time ago that there is no limit in the cloud, but in the app I do see one; right now, for example, I still have 10 GB / 1 hour 47 mins left. This brings me to a second question: what is the difference between the cloud and the storage system the app refers to?
Hi, I saw that you also contacted [email removed]. I will respond to all your questions via email.
Okay! Thanks a lot!
@user-1cf7f3 The storage on the phone is limited by the phone's hard drive, which is either 128 GB or 256 GB in size. Our recordings have a size of ~5 GB/hour, so this determines how much can be stored in total. Recordings have to be deleted manually on the phone to free up space. The storage capacity in the cloud is currently unlimited. Since the phone is not actually entirely full, this should not have caused the phone to stop recording. I will consult with our engineering team regarding your screenshot.
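(As a worked example, ignoring the space taken by the OS and apps: 128 GB / (5 GB/hour) ≈ 25 hours of recordings, or ≈ 51 hours for the 256 GB model, before recordings need to be deleted from the phone.)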
Thank you so much Marc!
@user-1cf7f3 The recording you posted the screenshot of indeed looks broken. This may have been caused by a connection error during the recording or a hardware error in the Pupil Invisible device. If you see such behavior again we should consider sending your device in for repairs.
The missing recordings are more difficult to explain. Whenever a recording is started, a folder is created on the phone and the only way of removing this folder is explicit deletion. If the recording folder is missing entirely, this strongly suggests that either the recordings have not actually been started or they have been removed. Missing recording folders due to a software error are very unlikely.
Reasons why a recording may not start are:
- Out of storage. This does not seem to be the case for you, as you reported >1 hour of leftover storage. Also, a warning would have been displayed and the recording animation would not have started.
- Out of battery. If the battery is very low, the app does not start a recording but instead displays a warning to the user that the phone is about to power off.
- Hardware error. Again, this would display an error message.
In all those cases there should have been an error message and the recording would not have been stoppable and savable at the end, since it never started.
Reasons why a recording may have been deleted afterwards:
- On recording-end the app asks if the data should be saved or discarded. Discarding requires an extra confirmation.
- The recording may have been deleted via the UI or a file browser.
Did you 1) not see any error messages on the phone and 2) were you able to properly stop and save the recordings that are now missing?
Hey guys - feature request. Save both gaze offset and raw (not offset) data to the output gaze estimates.
I know that this can be recovered by reading in the invisible json.
That is correct. Isn't it sufficiently easy to use that? Writing two files means double the storage used for the same data. It also means more IO usage of the phone during the recording. And the phone's IO is already pretty busy with writing 3 videos.
I thought it was practically free to do, and it would be convenient and make it more explicit that the offset was already applied to the data stream. If IO bandwidth is an issue, then yes I understand why you aren't doing it.
FWIW, adding the offset to the data output, but not the raw output, seems a bit inconsistent with the ethos adopted with respect to the Core pipeline, which was to only output raw data.
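For anyone who wants the uncorrected signal back: since the offset correction is a constant shift, it can in principle be undone post-hoc. A sketch with hypothetical field names; the actual key in the recording's info.json and the units of the offset may differ, so inspect your own recording first:

    import json
    import pandas as pd

    # "gaze_offset" is a hypothetical key name; check your info.json for the real one
    with open("info.json") as f:
        offset_x, offset_y = json.load(f)["gaze_offset"]

    gaze = pd.read_csv("gaze_positions.csv")
    # The export has the offset applied; subtract it to recover the raw estimate
    # (assumes the offset is expressed in the same normalized coordinates)
    gaze["raw_x"] = gaze["norm_pos_x"] - offset_x
    gaze["raw_y"] = gaze["norm_pos_y"] - offset_y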
Can I ask where I can download Pupil Player? Sorry, new in town.
Hi @user-19ed58! To download Pupil Player please visit https://pupil-labs.com/products/core/ and scroll down a bit.
I just got my Invisible, but its gaze seems a bit off. Is there any way I can test it or calibrate it? Or whom should I talk to? Please advise, thanks.
There is no classical calibration for Pupil Invisible. The accuracy you can expect from Pupil Invisible is evaluated here: https://arxiv.org/pdf/2009.00508 For some subjects there is a constant bias in the predictions, and this may be what you see. There is a feature called offset correction that allows you to correct for this bias. To access it, open the live preview in the Companion app, touch and hold the screen, and move your finger to specify the correction.
Hi Marc. We didn't get any errors at that point and we were able to save the files (or at least that was the impression; we could click on the save button). At the moment, though, we are experiencing something else funny. We get quite some error messages and broken recordings. Here's a list of what happens:
What I can think of is overuse of the phone or the glasses. What makes me think that is that we are carrying out a study where we need to do a 30-minute recording, a 15-minute stop, another 30-minute recording, a 1-hour break, then the same again. The phone gets really warm too. Ideas?
Usually this amount of usage is not a problem. The phone does get warm as it is working at maximum capacity, but this should not lead to any errors. This looks very much like a hardware error to me, where e.g. some sensor has a loose connection. We should organize a repair of your device. Could you please send an email to [email removed] referencing this conversation and mentioning the serial number of your device (see the tip of the left arm of the glasses) to ask for a repair?
Thanks, @marc
Hi there... I installed the Pupil Invisible Companion app on the OnePlus 6 companion device. When I want to start the app, I get the message that the Companion app isn't compatible with the Android version.
I found out that the OnePlus 6 companion device came preinstalled with Android 10. Now I read that this version isn't compatible with the Companion app. But how is it possible that you deliver a OnePlus 6 companion device with an incompatible Android version?
Hi! I've also been regularly getting the second error lately (i.e., companion app stopped working). Based on the contents of the error, it seems to me like it would sooner be a software error in the phone/app. I've checked the data from when it happened, and the gaze data seems to be recorded (red circle visible), but not the data from the world cam (only seeing a grey screen). The phone also has a system update available (OnePlus8_O2_BETA_10), but I'm not sure if it's wise to install it (I'm worried the Companion app may no longer be compatible then...)
Hi @user-997dee! Could you share a recording that features this error with [email removed] such that we could take a look at what has gone wrong?
Regarding the update: It looks like you are using a OnePlus8. For that device Android 11 is officially available and the USB bug is fixed again in that version. So if you update, just make sure to update all the way to Android 11!
@user-d1dd9a All OnePlus6 devices ship with the OS version that is installed from factory, which is Android 8. Android 10 is the highest version that is available for the OnePlus6, so if that is what is installed the phone must have been updated after shipping. Unfortunately, Android 10 is not compatible with the Pupil Invisible Companion app as you mentioned, because this Android version contains a bug related to how USB devices can connect with the phone.
To get the phone working again you will have to downgrade the Android version to 8 or 9. Please follow these instructions for that:
Rollback process:
1. Download the latest ROM package from http://oxygenos.oneplus.net.s3.amazonaws.com/OnePlus6Oxygen_22_OTA_015_all_1808102118_downgrade_68586451f7334c4d.zip.
2. Copy ROM rollback package to the mobile phone storage root directory.
3. Settings -> system updates -> top right corner icon -> local upgrade -> click on the installation package -> immediately upgrade -> system upgrade to 100%.
4. Select Reboot system now.
5. Update successfully.
Please note: This rollback will clear all contents of the phone, please make sure to back up.
After rollback to the stable version of O (Android 8), you can update to the P version (Android 9) via OTA after the global rollout.
Thx marc for that info. I tested the phone fresh out of the newly delivered pupil invisible box.
Hi, we're having some trouble with the PI since last week. We are running experiments, and all of a sudden there are issues with the phone / recording side of the PI. We are getting the following error message: "Recording failure" the PI stopped working ... please reach out to our support team" (this seems like a generic segfault catch, btw). Note, we are only using it for this purpose and have not touched the phone otherwise. However, there is a software update for the phone available, but we are hesitant to update...
Ah, I see that @user-997dee, my colleague, has already started a thread...
That is peculiar! I am sorry if there was a mix-up on our side that led to this undesired update. Our usual protocol is to leave the Android version as is.
No problem. I did the downgrade as you posted. All works, but I figured out that I can't charge with the Dash charger anymore.
@user-d1dd9a I have not heard of an interaction like this with the Android version before. Are you using the USB-C cable that came with the charger?
Yes, I used the bundled USB-C cable. With the Android 10 version I didn't have this problem. Now I can only do a normal charge through USB with this cable. I think it's a software problem.