Hi @nmt, unfortunately not. Restarting the browser (Firefox), the laptop (macOS), and the Companion device did not resolve the issue. I just checked in another browser (Chrome) without any luck!
Please send the recording ID to info@pupil-labs.com and the Cloud team will investigate the issue!
How do I generate a heat map?
I am unable to do it.
Hi @user-ba21ea. Could you please provide more details about your use case? For instance, what type of data have you collected and in what context/environment? How far have you progressed with building enrichments, etc.? Any screenshots would be particularly helpful. Sharing this information will enable us to give specific advice and get you up and running with heatmap visualisations!
Hi! I'm trying to run a visualization, but it continually shows an error. What should I do to complete the analysis?
Hey @user-6aefe3. Same as this message: https://discord.com/channels/285728493612957698/633564003846717444/1181108076640743424
Hi @nmt! I'm not sure that message provides more detail. In my case, I recorded about 5 minutes of driving data and am trying to run the visualization.
Thanks for clarifying - that's definitely not expected! Please can you reach out to info@pupil-labs.com and send the enrichment ID? Right click on the enrichment and click 'View recording information' to get the ID.
Hello, I'm trying to generate a heatmap and an error keeps popping up. I don't understand exactly what video or image I need to upload to Pupil Cloud so that the heatmap can be generated.
Hi @user-5bff5d! Could you please provide more details about your use case? For instance, what type of data have you collected and in what context/environment? How far have you progressed with building enrichments, etc.? Any screenshots would be particularly helpful. Sharing this information will enable us to give specific advice and get you up and running with heatmap visualisations.
I have a pair of Pupil Labs glasses and I had no problems beforehand, but now the recording won't upload to Pupil Cloud (with Invisible Companion). Instead it's stuck at 0% upload. I can pause and start again, but nothing changes.
Hi @user-5ab4f5! Can you try logging out of the app and then logging back in? This may help resolve this issue. Logging out is possible from the Settings.
Ah, thank you. It worked ^^" @user-480f4c
Hi Neil, thank you for the response. I'm attaching a screenshot from Pupil Cloud.
Hi @user-5bff5d! It seems that your enrichment did not complete successfully, which is why the heatmap cannot be generated. This error is likely the consequence of a poor scanning recording.
A good scanning recording needs to record the object of interest (in this case the painting) from all possible angles and from all distances a subject may view it. I highly recommend checking our scanning recording best practices. https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/#scanning-best-practices
I hope this helps, but let us know if you have more questions on that!
Hi, I'm using Pupil Invisible for a project and I'm having a problem: when I download a Marker Mapper enrichment from Pupil Cloud, the surface_positions file has fewer rows than the gaze file, and I can't understand why. Is it possible to link the data between these two files?
I also tried downloading in Pupil Player format and processing it with Pupil Player, but the fixation_on_surface file is empty.
Hello @user-cc52b5 ! With Invisible, gaze is approximately 200 Hz after Cloud upload, whereas surface positions (as detected in the scene video) can only be roughly 30 Hz due to the lower sampling frequency (~30 Hz) of the scene camera. This accounts for the different number of rows you see. The gaze file in the marker mapper enrichment download already includes surface-mapped gaze. You can learn more about it here: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/marker-mapper/#gaze-csv
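If you ever do need to align the two files yourself, a nearest-timestamp match is a common approach. Below is only a rough sketch, assuming both files provide sorted nanosecond timestamps; verify the actual column names (e.g. `timestamp [ns]`) against your own export:

```python
import bisect

def nearest_match(gaze_ts, surface_ts):
    """For each surface timestamp, return the index of the nearest
    gaze timestamp. Both lists are assumed to be sorted ascending,
    as in the Cloud exports."""
    matches = []
    for ts in surface_ts:
        i = bisect.bisect_left(gaze_ts, ts)
        # candidates: the gaze sample just before and just after ts
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gaze_ts)]
        matches.append(min(candidates, key=lambda j: abs(gaze_ts[j] - ts)))
    return matches

# toy data: gaze at ~200 Hz (5 ms spacing), surface positions at ~30 Hz (~33 ms)
gaze_ts = list(range(0, 100, 5))
surface_ts = [0, 33, 66, 99]
print(nearest_match(gaze_ts, surface_ts))  # -> [0, 7, 13, 19]
```

Each ~30 Hz surface row then maps to roughly every sixth or seventh gaze sample, which is exactly why the row counts differ.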
Thank you for the response
Good afternoon. I recorded 57 participants using Pupil Invisible (a short ride on the tram simulator). Then, in Pupil Cloud, I defined 7 enrichments based on markers. Now I need to check how many fixations each participant had in each area designated by the markers. The fixations file contains all the fixations of my participants. I have 7 enrichments, so I have 7 files. I believe that each file should contain the same number of fixations; the difference is that each file corresponding to a specific enrichment has a column with either true or false, depending on whether the fixation was present in that area or not. But the total number of fixations should be the same.
Hi @user-23899a! Just to fully grasp it: you have a project with 57 recordings and 7 enrichments inside, with the same start and end events but different markers chosen, is that right? When you download the recording data, the fixations.csv should have an ID for each fixation matching the ones in the enrichment data.
If that's not the case, would you mind inviting us to your workspace so we can investigate it?
Yes, and in each fixations.csv file, I should have the same number of participants and the same number of fixations. The problem is that I don't, and I don't know why.
Could you add me to your workspace (mgg@pupil-labs.com) so I can further assist you, or send the files to data@pupil-labs.com?
Tramwaje2022 project
invitation sent
Received, I will have a look and get back to you
Hello, I have been having issues downloading the raw data export zip file from Pupil Cloud for my Pupil Invisible recordings. I have tried different browsers, wired and wireless connections, and different locations. The download starts and then stops after it gets to a certain point, every time. This has happened on two different videos as well.
Hey @user-5882af! Do you get an error message in Pupil Cloud or does it seem to just stop silently?
@nmt It stops silently, no error.
Recording downloads
It seems like the Reference Image Mapper doesn't work for files longer than 3 minutes. Any way around this? The visual helps!
Hi @user-91d7b2! It seems there is a misconception: the 3-minute limit of the Reference Image Mapper applies only to the scanning recording. That is, you can have one scanning recording of less than 3 minutes, and then longer recordings where your subjects wear the glasses for as long as required.
I have only one 6-minute recording. I think I understand what you mean, but I'm not sure how to proceed. When asked for the scanning recording, it says my file is more than 3 minutes. I have events within this 6-minute recording that I'd like to generate heatmaps for.
Hi, I'm using Pupil Invisible for a project. I downloaded the Pupil Player format data in order to use Pupil Player (because of the different sampling frequency of the surface data) and get the gaze on surface already scaled. However, the number of rows in "gaze" (downloaded from Pupil Cloud) and in "gaze position on surface" (from Pupil Player) sometimes aren't the same. So I'm trying to match the timestamps, but I'm having a problem: the first "gaze timestamp" in "gaze position on surface" and the first "start timestamp" in "fixations" are different. What can I do?
Hi, @user-06bb78 and welcome to the community!
Gaze data is computed from the eye cameras which operate at 200 Hz. Surface gazes combine data from the eye cameras and the world camera which operates at 30Hz. Fixations are behavioral events and do not have a reliable frequency. Does that answer your question?
Hi
We have been working with the Pupil Labs Invisible for a few hours and sessions, and a few days ago it stopped working. This includes not showing the fixation on the video, and the application crashing (it stops responding during recording) after a few seconds of seemingly recording, so no video gets created. We tried testing everything with the new pair that we just initiated today, and that one does not work at all either. We are not sure why this suddenly happened or what the cause of the problem could be.
We would like to ask for your assistance as soon as possible since we do not want our work to be delayed and this is an inherent part of our work
Thanks and best wishes, Maayan
Hi @user-1af4b6! May I ask what error is shown in the app? And could you follow up with an email to info@pupil-labs.com for further debugging steps? Thank you!
Sometimes we record and, when we try to watch the video, we get "No scene video". At other times, when we try to record, the app just doesn't respond (we can't stop the recording and then have to close the app).
Thanks for following up! Please contact us at the email above, and we will follow up as quickly as possible with further steps to identify what could be happening and how to prevent this.
Hello, can I copy or move some recordings to another workspace in Pupil Cloud? I don't want all students to have access to all other recordings. Kind regards and thank you.
Hi @user-3c26e4! Unfortunately, moving recordings across workspaces is something we cannot support as of now.
However, note that you can create as many workspaces as you want and invite your students to them with different roles. Simply select the appropriate workspace before making the recording.
https://docs.pupil-labs.com/neon/pupil-cloud/workspaces/#frequently-asked-questions
I have a question about Invisible Companion. The app shows the two grey circles with the eye and camera symbols. Sometimes the speed of the symbols going around the circle (it's hard to explain, sorry) differs, even though the recording time is accurate. For example, if I run my first program, it takes one minute for them to complete one circle; sometimes it takes three minutes (when I run the second program). Does that have any influence on the data, or what is going on?
Hi @user-5ab4f5! I am not sure I fully grasped the issue. Generally, the Companion App will show you a greyed-out camera or eye icon if one of the sensors is disconnected.
When Pupil Invisible is plugged in, they should get coloured.
When you start a recording, they will start moving around. The speed does not matter; what matters is that if, for example, the glasses or the scene camera get disconnected and reconnected, you will see a gap in the ring.
However, sometimes the second program shows the same pattern in the app as the first one. And generally, sometimes if I just press record in the app itself, it needs three minutes for one circle and other times one minute again.
When I looked into the data, it looked normal to me, but I don't know.
@user-d407c1 Alright, if the speed doesn't matter, everything is fine ^^. I just noticed it sometimes moves differently (speed-wise), but I never saw a gap or anything. And it also normally colours the grey circles during recording. So I guess it'll be okay. Thank you!
Hi @user-3c26e4! Sorry to hear that. Please upvote this feature if you would like to see it implemented in the future.
In the meantime, yes, you can create a visualisation and export it. To do so, add all recordings you want to create the visualisation to a project. Then, enter the project, and choose Analysis at the bottom left. Once on the Analysis page select "+ New visualisation" and choose "Video renderer". You will then be on a new page that allows you to modify the appearance of the gaze overlay, show fixations and even undistort the video.
Fantastic, I will give it a try! Thank you so much!
Hi all, I have 3 questions: 1) My OnePlus tells me to update the OS. Can I go ahead? Will the app work with the new Android? 2) How can I move a video from a workspace to a new one? 3) I don't know what happened, but in the "Recordings" section of the app there isn't a recording I took. However, I can find the video inside the directory (see photo attached). Thanks a lot.
Hi @user-4c17e6! In response to your questions:
1) We recommend that you don't allow Android system updates on your device. The currently supported Android versions are as listed here: https://docs.pupil-labs.com/invisible/hardware/compatible-devices/#android-os
2) Moving recordings from one workspace to another is not currently possible. Please upvote this feature if you would like to see it implemented in the future.
3) Is there any chance that this recording was accidentally deleted from the recording folder? If you can find the recording in the documents folder on the phone, you can re-import it. To get it back into the app you'd have to import the recording by going to the settings view in the app and selecting "Import recordings".
I hope this helps!
Hi, I have recorded data using Invisible, exported it into folders, and transferred it to a PC. I want to know: 1. How can I calculate fixation state, saccade state, blink state, and eye-movement metrics like velocity based on these data and Pupil Player? 2. I have deleted all the raw data in the OnePlus app, so the Cloud method cannot be used, right?
Hi @user-86d342! Regarding your questions:
1) Pupil Player is only capable of calculating fixations for Pupil Core. The algorithm implemented there is not compatible with Pupil Invisible. May I ask if there is a specific reason why you prefer Pupil Player over Pupil Cloud? The latter provides fixation/blink data automatically for all recordings.
2) Recordings that are deleted from the Invisible Companion App, e.g. to free up storage space, cannot be transferred back to the Invisible Companion App from your backup location. This means that if you delete the recordings prior to uploading them to Pupil Cloud, they cannot be uploaded at a later date. See also here: https://docs.pupil-labs.com/invisible/data-collection/transfer-recordings-via-usb/
I'm a bit confused... I'm using the Invisible and reading in the video and the bin data, but sometimes the video appears to be mirrored, like I'm riding my bike in England. Any ideas?
Hi @user-e40297! What software are you using to read the video file?
python open cv
Could you kindly post the code you are using? Might you, by any chance, be calling cv2.flip() in your code?
I see... sorry for that
No worries!
Hi @user-86d342! You can use this CLI to compute blinks and fixations in the same way as Pupil Cloud does, with the difference that Pupil Cloud would reprocess the recordings and recompute gaze at 200 Hz, whereas with this utility you will be limited to the sampling rate you had at recording time.
thank you for your guidance!
However, when trying to use this CLI to compute and export, it reports errors as shown in the error_log.txt file.
And I found that the "export" folder was created and four files ("events.csv", "gaze.csv", "imu.csv", "template.json") exist there, but the two columns "fixation id" and "blink id" are blank in gaze.csv, which is important for my results.
can you help me figure this out?
Hi, we are trying to make a recording with the Invisible, but after 5 seconds a pop-up appears saying "Failed to start recording, please disconnect PI!" Any ideas?
Hi @user-e40297! I am sorry to hear that you are having issues with Pupil Invisible. Would you mind writing to info@pupil-labs.com about this issue? Please include all possibly relevant information, such as the app version, the serial number, and the Android version of your Companion device. We will follow up with further debugging steps.
Hi, we recorded with Pupil Invisible at night. Unfortunately, the QR codes in the video are too dark and Pupil Player only recognizes them in some places. 1. The AOIs are only partially defined, so they jump around in Pupil Player when I play the video (they deform depending on the view). Is there a way to minimize this? 2. Even if the AOIs are only partially defined, I can download a table where they are all listed. Why can't they be displayed visibly in the video? 3. Is there a way to define the frontal area for the AOIs in Pupil Cloud, even if the QR codes are too dark? Thank you very much.
Hi, is it possible that you can look into my problem today? Thank you!
Hi @user-cd7b85! Getting robust AOIs with this tool really depends on the markers being well-detected. In the image you shared, the marker on the left is visible and seems to be well-detected. However, the marker on the right appears partially occluded and is not. How many markers do you use in total? Using only two seems limited for the area you're attempting to cover. Once defined, the AOIs will still appear in the export even if they are of poor quality or disappear at some point during the recording. Unfortunately, there's no easy way to generate the AOIs without the markers.
Hello,
I am currently working on a project that involves using an 'invisible' device in a golf setting. This device is worn by players while they are on the golf course. I have several questions regarding its functionalities and capabilities:
Pupil Dilation Information: Does this version of the device provide any data or insights related to pupil dilation?
Time Tracking with Cloud Data: Is there a mechanism in place to determine timestamps or track time using the data we receive from the cloud?
Reference Image Compilation: Can we aggregate or compile zones of interest from 18 different videos to create a comprehensive reference image?
Neon Version Advantages: I am considering the Neon version of the device. Would this version offer additional information or enhanced features compared to the invisible device we are currently using?
Thank you for your assistance.
Hi @user-58c1ce! Nice to hear that you are considering Neon.
Pupillometry: Pupil Invisible doesn't offer pupil dilation measurements, due to the positioning of its eye cameras.
Timestamps and tracking time: Yes! Every sensor is timestamped, so you can tell how long a recording was or when something happened.
Reference Image Mapper: Yes! You can aggregate multiple recordings using the Reference Image Mapper, and you can also use multiple reference images.
Neon: Neon will offer you better accuracy, better camera positioning, frame modularity, and pupil size measurements.
If you want to see how it works or ask the Product Specialist team any questions, you can book a call here.
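To illustrate the timestamp point above: the Cloud exports carry nanosecond timestamps, so working out a duration is straightforward. This is only a sketch; the `timestamp [ns]` column name is an assumption you should check against your own export's header:

```python
import csv
import io

def recording_duration_seconds(gaze_csv, ts_column="timestamp [ns]"):
    """Duration between the first and last sample in a gaze CSV.
    The column name is an assumption -- check your export's header."""
    ts = [int(row[ts_column]) for row in csv.DictReader(gaze_csv)]
    return (max(ts) - min(ts)) / 1e9

# toy example with an in-memory CSV (two samples, 2.5 s apart)
sample = "timestamp [ns]\n1000000000\n3500000000\n"
print(recording_duration_seconds(io.StringIO(sample)))  # -> 2.5
```

The same idea applies to any of the exported sensor files: take the difference between the first and last timestamp to locate or measure events.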
Hey guys, today I used the Pupil Labs glasses and something went wrong, so I had to make two recordings when I wanted one. I'm wondering now if I can trim the first recording to the point where the data is usable, so I only have to download the raw data up to that point. That would be easier than searching for the timestamp and trying to delete all the data after that point from every dataframe. Is there a way to cut a video at a certain point and afterwards merge it with the second recording? (I know I would need to change fixation IDs and blink IDs myself, but I would like to at least trim the first video, because the last seconds are not usable.) Is there such an option, or do I have to do it in code or delete rows from the CSVs by hand?
Hi @user-5ab4f5 ! Cutting recordings and merging them is not supported in Pupil Cloud.
Pupil Player would allow you to easily trim the recording, albeit support for Pupil Invisible recordings is limited: you won't have fixations there, and the data will be converted to Pupil Player format.
The easiest way would be to do it yourself, either in code or by deleting rows as you mentioned. You can use events to help determine the timestamps.
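As a starting point for the coding route, a sketch like the following could trim an export CSV at an event timestamp. The `timestamp [ns]` column name is an assumption and fixation files use start/end timestamp columns instead, so adjust per file:

```python
import csv
import io

def trim_rows(in_file, out_file, cutoff_ns, ts_column="timestamp [ns]"):
    """Copy only the rows at or before cutoff_ns from one CSV to another.
    Returns the number of rows kept."""
    reader = csv.DictReader(in_file)
    writer = csv.DictWriter(out_file, fieldnames=reader.fieldnames)
    writer.writeheader()
    kept = 0
    for row in reader:
        if int(row[ts_column]) <= cutoff_ns:
            writer.writerow(row)
            kept += 1
    return kept

# toy example with in-memory files: keep everything up to timestamp 20
src = io.StringIO("timestamp [ns]\n10\n20\n30\n")
dst = io.StringIO()
print(trim_rows(src, dst, 20))  # -> 2
```

In practice you would open the real gaze.csv, fixations.csv, etc. with `open(...)` instead of `io.StringIO`, and take the cutoff from an event's timestamp.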
@user-d407c1 @user-480f4c really sorry to bother you guys...
Do you have any possible solutions or explanations for the problems? Can I expect this to be solved in a day or two?
Hi @user-86d342! Sorry, I missed your reply. This issue seems to be related to the blink detection. Can you try running it without blinks, using the --no-blinks flag, to confirm?
Usage: pl-rec-export [OPTIONS] [RECORDINGS]...
Options:
-e, --export-folder TEXT Relative export path [default:
(<recording>/export)]
-f, --force Overwrite an existing export
-v, --verbose Show more log messages (repeat for even more)
--blinks / --no-blinks
--fixations / --no-fixations
--help Show this message and exit.
@user-86d342 Scratch that, I misread the logs. It is in _process_world().
Could you try this branch?
You can install it like this: pip install -e [email removed]
Then you can run it: pl-rec-export -f your/path
This branch seems to be working well, really thank you for this! You saved the batch of data I recorded over the last half month! Next time I will definitely choose Pupil Cloud.
Besides, there are still several questions I'd like to ask:
Hi, today I tried to install this branch on another device, but it seems this address returns a 404. Do you know what's wrong?
We used 4 markers, but mostly only 2 or 3 are visible in the video.
Hi @user-cd7b85 , I removed my answer because I saw my colleague also replied to you.
As you can see, our answers go along the same lines: if the markers are too dark to be detected, the Marker Mapper and Surface Tracker will struggle.
Detecting the markers is key to defining the surface/AOI, and usually at least 2 markers need to be detected to properly define it. If a marker is lost, then the surface definition will "jump".
A way to prevent this would be to place more markers across the cockpit/dashboard that can be recognised, making the surface definition more robust.
You can also try to better illuminate these markers, perhaps placing some kind of backlight?
If you really need this data, may I suggest a "hacky" way to try to save some of it? Since you are using Pupil Player, you can make a backup of the raw recording and then attempt to use FFmpeg to increase the brightness of the scene camera video files. That said, as my colleague mentioned, the right marker seems to be partially occluded, so it may not work.
Hi, is FFmpeg a program, and how does it work?
Thank you, I will try that. Is there a way to brighten up the video file in Pupil Cloud?
Hi @user-cd7b85 ! Unfortunately, there is not. If you would like to see some kind of post processing tools in Cloud, feel free to suggest them at https://feedback.pupil-labs.com/
I have the problem that sometimes my Invisible Companion shows that I am recording (I start it through software), but in the end I don't have a video or audio I can look at. One time it did work for the first program (but with no audio or scene video, and also no error message in Python/VS Code), and the second, different program was fine. The second time, neither the first nor the second program worked.
Also, I have audio issues in general. It seems like the audio is sometimes really, really quiet. We tried taking off the camera and putting it back again, and that worked for a while, but now it's not working anymore. Sometimes the audio is loud, sometimes it's quieter. Do you know what could be the problem? Is it perhaps more hardware-related?
@nmt Yes, I have a program with which I start and stop recording. And that normally works very well, except for these 2 times (out of 24).
Hi @user-cd7b85! You can find it here.
The command you are looking for should be something like this:
ffmpeg -i input.mp4 -vf "eq=brightness=value" output.mp4
where value ranges from -1 to 1, with 0 meaning no change.
Hello, is there an example of an article about basketball players?
Hi @user-228c95. We keep a list of publications that have used our products. You can find it here: https://pupil-labs.com/publications. A quick search shows one involving basketball.
About the monitor app, could you elaborate on the problem you're facing? It'd also be helpful if you could share more details about your setup, like the device/browser you're using the monitor app on, and the type of network, etc.
We cannot operate the monitor system. Do you have a solution for this?
Hi, I'm currently trying to use "pl-dynamic-rim" through the CMD. During the process, the tool asks me for a "corners_screen" file, and I'm a bit uncertain about the format this file should follow. I've already interactively selected screen corners as instructed in the code but I want to run it through the CMD. Could you please provide guidance on the expected format of the "corners_screen" file?
Hi @user-1af4b6. The author of this tool will be able to assist you in the new year!
Hi @user-1af4b6! Apologies for the delay in answering, and happy new year! To pass them, you need to use the --corners_screen argument in the CLI.
Points are defined as a dictionary; you can see an example here.
I am not sure if I used this feature only while writing/testing the scripts or if it was finally implemented, but it should be trivial to add. You can save args.corners_screen to a file after defining it and add a loading function.
This library is due for a refactor, but I can add this loading/saving step if you don't know how to.
Hi, I tried running the following code to check whether the device connects to my laptop, but unfortunately I get the error "No device found".

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=15)
if device is None:
    print("No device found.")
    raise SystemExit(-1)
*** My laptop: HP Pavilion Laptop 14-dv0xxx; device name LAPTOP-TJJSK9DU; 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz; 16.0 GB RAM (15.8 GB usable); 64-bit OS, x64-based processor; no pen or touch input
Windows: Windows 11 Home, version 22H2
Python: Python 3.12
Hi @user-af52b9! Firstly, could you try connecting your device to our monitor app? Does that work for you?
Dear Pupil Labs team, I am writing the methods part of a paper. Is there a document specifying how fixations, dwell times, blinks, and blink durations are detected and computed within Pupil Cloud / Pupil Invisible?
Hi @user-08c118! You can find the details about the calculation of fixations and blinks in our fixation detector and blink detector whitepapers.