Is there a lib for converting Pupil Player/Capture raw data to CSV??
If you export a recording from Pupil Player, it will be in CSV format! https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
@marc Thanks! We are aware that it converts raw data to CSV, but our dataset is divided into blocks and participants. There are too many recordings to drag & drop individually. I was looking for a method that can automatically convert them to CSV programmatically (Python).
Hi, what kind of data are you looking to extract?
Hi, are there any general delays known regarding the timestamps? Maybe a time offset in the world and single eye timestamps?
Hi @user-e7cf71! The Pupil Invisible Companion App runs its own monotonic clock for timestamping all data streams. Read more in the documentation: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/legacy-api.html#time-synchronization
We accidentally updated the OnePlus 6 operating system (so the app no longer works). Is there a simple method to restore the old system? Should we copy any files before trying?
@papr Event triggers, pupil size, gaze angle, saccades etc.
Ok, looks like these are Pupil Core recordings. See this example of how to extract pupil diameter values from multiple recordings without opening Pupil Player: https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972 You can adapt it to extract events (annotations.pldata) and gaze data (gaze.pldata). The recording format is documented here: https://docs.pupil-labs.com/developer/core/recording-format/
For saccades, you will need to apply custom code to detect them based on the gaze data.
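(For illustration, a minimal sketch of how such a batch export could look. It assumes the documented recording layout, where each .pldata file is a stream of msgpack-encoded (topic, payload) pairs; the folder layout, output file name, and gaze fields used below are assumptions for illustration, and the linked gist is the more robust starting point.)
import csv
import pathlib

import msgpack  # pip install msgpack

def load_pldata(path):
    # yield one dict per datum stored in a Pupil Core .pldata file
    with open(path, "rb") as f:
        for _topic, payload in msgpack.Unpacker(f, use_list=False):
            yield msgpack.unpackb(payload)

root = pathlib.Path("recordings")  # hypothetical participant/block folder root
for recording in sorted(p.parent for p in root.glob("*/*/gaze.pldata")):
    with open(recording / "gaze_export.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp", "norm_x", "norm_y", "confidence"])
        for datum in load_pldata(recording / "gaze.pldata"):
            writer.writerow([datum["timestamp"], *datum["norm_pos"], datum["confidence"]])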
Currently it is divided into trial blocks
Each participant has 12 folders. So I'm looking for a method that could automate this process
Hello and thanks for the answer. Are the frame rates of the exported world video and the single eye videos the same as when streaming them with the real-time API? There appears to be a small time delay (~1 second) between the timestamp we receive from the mobile phone and the reception timestamp of the images in our system - but only for the single eye video stream.
The system will attempt to stream the data at the full frame rate, but if the bandwidth is insufficient, it will drop frames. Streaming is independent, and the app will always record video at the full frame rate.
Where are you receiving the eye video data? In a custom script or in Pupil Capture? Can you elaborate on how the delay becomes visible?
Hi, this sounds like you are using ndsi, is that correct?
Yes
I'm receiving the video streams in a custom script and the delay becomes visible while comparing the timestamp from the mobile phone and from the system clock we receive the images.
So what you are measuring is the transport delay, correct? When processing data via ndsi, it is important to know how the frame buffering works under the hood: Incoming data is buffered up to a specific point in time. Once the internal queue is full, ndsi (or better: zmq) drops new incoming frames. It does not drop the oldest frames from the queue.
In other words: You might not be receiving the most recent frame and therefore perceive the stream with a delay. The solution is something like this:
most_recent = None
for datum in sensor.fetch_data():  # drain everything currently buffered
    most_recent = datum            # keep only the newest frame
if most_recent is not None:
    process(most_recent)
The code above will only process the most recent frame.
Hey, please check out https://gist.github.com/papr/40d332498bfacb5980a754c5692068ec
Sorry for the delay! Thank you very much, very useful for me. I am currently using Matlab to call the Python script. I await other updates in the future for the Invisible.
Hi folks, I have a few questions regarding the smartphone and other ways to use the glasses. My current project involves people wearing the glasses all day. Is there a way to charge the phone while using the glasses? Also, I understand that the amount of data from this collection will be massive, therefore, is it possible to determine the glasses to turn on (say) for 10 minutes every 30 minutes or something? Thanks, Victor
Hi @user-3b5a61! You can connect a "powered USB-C hub" to the phone and connect the Pupil Invisible Glasses and a power source in parallel to record while charging.
The alternative would be to swap phones mid recording.
The app itself does not allow you to set a recording schedule like this. However you can remote control the phone using the real-time API, which allows you to start and stop recordings. This would make it relatively easy to write a custom script to realize such a schedule.
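(As a rough illustration of such a script, here is a sketch assuming the pupil-labs-realtime-api Python package and its simple client; the exact function names should be verified against the real-time API documentation.)
import time

# assumes the pupil-labs-realtime-api package (pip install pupil-labs-realtime-api);
# the names below come from its "simple" client -- verify against the current docs
from pupil_labs.realtime_api.simple import discover_one_device

RECORD_MINUTES = 10
PAUSE_MINUTES = 20  # i.e. a 10 minute recording every 30 minutes

device = discover_one_device()  # phone and computer must be on the same network
try:
    while True:
        device.recording_start()
        time.sleep(RECORD_MINUTES * 60)
        device.recording_stop_and_save()
        time.sleep(PAUSE_MINUTES * 60)
finally:
    device.close()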
Thank you so much, Marc!
Where can I find this API that you mention?
You can find an introduction here: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/
Thx again :)
Hello Pupil team, I have a OnePlus 6 with Android 11. The Invisible Companion app does not work; it shows that the version of Android is not compatible with Invisible Companion. It was Android 8.1.0 before and it was working, but somehow the phone updated itself and the app is not working anymore. I have tried to delete the app data & cache, downgrade the app, and restore the phone to factory settings, but it did not help. Any help please? Thanks
Hi, you will need to roll back the operating system. Follow the instructions of this link for that https://discord.com/channels/285728493612957698/633564003846717444/654683156972175360
I also have a OnePlus 8 with Android 11 and the app is working on it. Is there anything I can do?
The link is not working anymore, but I am looking for it, thank you so much.
I meant the download link, but it works now :)
thanks
@papr hello! For some reason I have the audio now for my videos, but could you please let me know what this type of error is, as I want to download the video to my PC?
Hey, Pupil Invisible does not perform any pupil detection. As a result, its recording does not contain pupil data and this warning is expected for Invisible recordings.
@papr I have the scene and I have the eye cameras, I have the red circle of tracking when I play the merged video on PC, but I can't download them
So why can't I download the video that I have in Pupil Player?! That's my issue
I am not sure I understand which video you are referring to? And are you referring to a Pupil Cloud download or a Pupil Player export?
I have exported the folders for each of my recordings on my laptop. I have installed Pupil Player. I drag the folder into the player and I can see the merged scene/eye camera recordings with the red dot/circle as an indicator of the point of view. Then, when I want to download (have this complete video) on my own PC, it fails. Sometimes I get the error I sent, sometimes the downloaded video, which is MP4, shows up as an interrupted file…
If the video is shown as interrupted/incomplete/corrupted, then the export process is still in progress. The World Video Exporter menu has a progress indicator that lets you know when the export is finished.
Regarding "exported the folders for each of my recordings on my laptop": exported as in exported via the Companion app and then copied via USB?
True
~~Can you confirm that the "world video exporter" is enabled in the plugin manager?~~ From the shared screenshot, I can see that this is the case.
Hello, I'm getting back to you about a technical query. As I have already indicated, I am currently working on a study in which the time required to look from one area marked with markers to another area marked with markers is surveyed. This will then be compared between the experimental and control groups. Unfortunately, so far I have only been able to mark one area at a time with the Marker Mapper Enrichment when I add an enrichment to the recording. So I am not sure how to get the data of the times between the last viewpoint on one marked area and the first viewpoint on the other marked area. I would be very pleased if you could offer help with this.
Hi @user-f408eb! To track multiple surfaces you need to create multiple Marker Mapper enrichments. The corresponding exports will share the same timestamps, such that you can merge and compare the data after exporting. Let me know in case this does not yet clear things up for you!
Hi, I was wondering how the fixations in the raw data exported from Pupil Cloud were calculated? What is the maximum degree and minimum duration? Can these two values be adjusted?
Hi @user-64c4d3! I am a Research Engineer at Pupil Labs. Regarding your question:
The novel fixation detection algorithm is essentially a classic velocity-based algorithm, however with an additional stage that subtracts components from the gaze velocity which are in line with the optical flow of the image. This is because optic flow approximates the expected slip of the gaze point when the user is moving their head while fixating a target at the same time. If the user is not moving their head, this head-motion compensation stage is not doing anything.
After that calculation, a velocity threshold of 900 px/s or around 68°/s is applied. Then, we filter out small saccades ("micro-saccades") which are likely to be misdetections, and merge the neighboring fixations. Saccades are removed if they have a smaller amplitude than 1.5° (amplitude = distance from start to end point) and if they are shorter than 60 ms. Finally, we also remove fixations which are likely to be misdetections, which are fixations shorter than 60 ms.
How did we obtain these values? The exact values represent an optimal parameter set for our fixation detector when tuned on an annotated in-house dataset covering various use-cases, such as observing visual stimuli on a screen, but also highly dynamic scenarios, such as performing a search task in a real-world environment while wearing the Pupil Invisible headset. So they are explicitly tuned toward best fixation detection performance in various research settings.
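(For intuition only, a much simplified sketch of the velocity-thresholding stage described above; it ignores the optic-flow head-motion compensation and the saccade/fixation filtering entirely, and is not the actual Cloud implementation.)
import numpy as np

def simple_fixation_mask(x_px, y_px, t_ns, velocity_threshold_px_s=900.0):
    # simplified I-VT style detector: no optic-flow compensation,
    # no micro-saccade merging, no minimum-duration filter
    dt = np.diff(t_ns) * 1e-9  # nanoseconds -> seconds
    velocity = np.hypot(np.diff(x_px), np.diff(y_px)) / dt  # px/s between samples
    below_threshold = velocity < velocity_threshold_px_s
    return np.append(below_threshold, below_threshold[-1])  # pad to input length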
In older versions of Pupil Player, the fixation values seemed to be adjustable
Hi @user-64c4d3! Fixations are calculated using a novel velocity-based algorithm that uses optical flow to compensate for head movements. It's not the same algorithm used in Pupil Player. We are currently working on a peer-reviewed paper on the algorithm. For the exact values of the velocity thresholds etc. I need to ask my colleagues, who will be available again on Monday. The values are not adjustable.
Ok, please let me know after you confirm the exact values, thank you
Hi! I'm wondering what the FPS from IMU should be. While using ndsi I'm receiving a framerate of around 3.5 - is that correct?
Hi! ndsi transmits multiple IMU samples in one frame. So, yes, the received frame rate is low but the contained sample rate is higher.
Thanks for the fast reply :). What exactly do you mean by "contained sample rate"? Is this from the mobile device, or are we receiving bigger packages (such as receiving a package three times per second with multiple data points)?
If you are using pyndsi, pyndsi should be taking care of the unpacking for you.
Each package should contain multiple data points
Thank you papr I'll take a look at it!
Hi folks, quick question about the 'heat mapping' enrichment feature. In order for this to be applied to a video, am I right in thinking that the only way to apply this enrichment is if those marker icons were printed off and stuck to the focus area prior to recording with the glasses? Is there no way to add the markers retrospectively?
Hi @user-3b418f and @user-ce3bd9. There are two enrichments you can use in Pupil Cloud to generate heatmaps:
1. Marker Mapper - requires AprilTag markers to be placed in the environment. If these weren't present during a recording, then you won't be able to use the Marker Mapper to generate heatmaps.
2. Reference Image Mapper - doesn't require markers, but rather depends on what we call a 'scanning video' of the feature/region of the environment that you're interested in, in addition to a photo. Depending on where you made your recordings, it might be possible to go back to the location and generate these. The main thing to consider is that the feature/region of the environment should be the same now as when you collected your original data. You can read more about how to set up this enrichment here: https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper
I am interested in the same question...
I would like to create a "heat map" but I didn't print the marker icons... Is there nothing I can do now?
Hi @papr, I get excited every time I communicate with you. The documentation claims: the exact framerate of this signal depends on the model of phone you use as Companion device. On a OnePlus 6 device, the framerate is around 50 Hz, while on a OnePlus 8 it is around 65 Hz. I want to know whether this frequency can be increased if I use a OnePlus 9, and whether this frequency can be fixed.
Hi, thank you for the compliment. Much appreciated! Please be aware that we currently only support two Pupil Invisible Companion devices: the OnePlus 6 and OnePlus 8. Other devices are currently not supported. The effective frame rates vary due to hardware constraints. If it is a higher realtime gaze sampling rate that you want, I recommend staying tuned and keeping an eye on the announcements channel. We will be able to share some good news in this regard very soon.
Thank you very much, I will be watching this announcement closely.
This fixations.csv file comes from a recording with Pupil Core. I need to know if and how I can get the same file, most importantly including start_frame_index and end_frame_index, for a recording using Pupil Invisible?
Hello everyone! I'll be working with the Pupil Invisible glasses in the upcoming days and wanted to ask if it's alright to post any questions regarding error messages/connection issues here on Discord? And: is English the preferred language?
Have a nice day!
Hi @user-42203b! Yes, you are very welcome to post any questions here! And yes, English is the preferred language. You too have a nice day!
Fixation data for Pupil Invisible can be obtained via Pupil Cloud. It's included in the recording download and the raw data export. The format is a bit different though. The full documentation is here: https://docs.pupil-labs.com/invisible/reference/export-formats.html#fixations-csv
The fixations there have a start and end timestamp corresponding to the contained gaze samples. A frame index is not directly given, but using the fixation timestamps in combination with the scene video timestamps the index can be inferred.
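(For illustration, a small sketch of how that inference could look, assuming the Cloud raw data export with fixations.csv and world_timestamps.csv; the column names used below are assumptions based on the export format and should be double-checked against your files.)
import numpy as np
import pandas as pd

fixations = pd.read_csv("fixations.csv")
world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()

# index of the last scene frame that started at or before each fixation start/end
fixations["start_frame_index"] = (np.searchsorted(world_ts, fixations["start timestamp [ns]"], side="right") - 1).clip(0)
fixations["end_frame_index"] = (np.searchsorted(world_ts, fixations["end timestamp [ns]"], side="right") - 1).clip(0)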
thank you @marc
Hi there, I am a researcher using the Pupil Invisible for gaze tracking in my research experiments that last about 1 to 2 hours long. During which the glasses get really warm for the wearer causing discomfort. Are there any solutions for this? If not, I was looking to 3D print an insulating cover to slip on that could reduce the heat transferred to the wearer's skin. To do so, I was hoping I could get some CAD files of just the exterior frame arms to use as a reference for the 3D printed cover.
Hi @user-42fccc! Let's schedule a video call to discuss options! I'll DM you a scheduling link.
@marc hi, could you please let me know what the issue is with such files that makes Pupil Player crash? Thanks
That would be my responsibility. Could you please share the full traceback/error message with us?
I have 4 recordings that make the player crash
This is just the photo I sent. As soon as I drag the folder in, it loads for a couple of seconds and then Player crashes by itself
Does this happen for all of your recordings or only this one in particular? Could you please share the player.log file? You can find it in the pupil_player_settings folder.
Hi @papr, I found that the scene image obtained by Pupil Invisible seems to be somewhat distorted. Does this require calibration?
Pupil Cloud and Pupil Player take care to correct for the distortion behind the scenes, e.g. for gaze estimation or marker mapping. It is normal that the video preview looks like this.
Here is a picture example.
I want to perform object detection on the scene image (in other words: YOLO). Do I need to correct the scene image? If Pupil Cloud corrects the image for gaze mapping, does that mean I also need to correct it for object detection?
@user-a98526 What you see in the video is lens distortion due to the large field of view of the camera. This distortion can be compensated using the camera intrinsic values, which are measured for every Pupil Invisible device during manufacturing. YOLO would probably work decently on the distorted video, but undistorting the video often improves the results of such algorithms. We have not tested YOLO explicitly and I'd recommend just trying it out on the distorted video first.
If you want to undistort the video, you have two options:
Option 1) You could download the intrinsic values for your device using the following link, where you need to insert the serial number of your scene camera, which you can find in the info.json file or on the back of the scene camera module. You need to be logged in to Pupil Cloud in the browser for the link to work!
https://api.cloud-staging.pupil-labs.com/v2/hardware/<serial number>/calibration.v1?json
And then you can use e.g. OpenCV's cv2.undistort method to undistort the images:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga69f2545a8b62a6b0fc2ee060dc30559d
Option 2) You can use the gaze overlay enrichment! This enrichment usually allows you to render a custom gaze overlay visualization. One of the options it provides is however to undistort the video in the process. You can set the gaze overlay to be transparent to effectively yield the raw but undistorted video. The enrichment is basically implementing Option 1) for you.
@papr This situation happens for all of my recordings
I found these two files:
One thing I forgot to mention: the gaze data is in distorted image coordinates. Since you probably want to relate the gaze data to the YOLO detections, you would need to undistort the gaze points as well if you undistort the video, because the YOLO detections would then be in undistorted coordinates.
The gaze overlay hack does unfortunately not handle this for you, but you can use the intrinsics as above and the cv2.undistortPoints method:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
Thanks for your explanation @marc. I have some doubts: do you mean the gaze data from the Raw Data Export is distorted?
What I mean is that the gaze data is in the original scene camera image coordinates. If you change the scene camera images by undistorting them, you need to equally change the gaze data as well.
This might sound more complicated than it is though. In Python pseudo code it would look something like this:
K, D = download_camera_intrinsics()  # placeholder: load the intrinsics downloaded as above
undistorted_image = cv2.undistort(image, K, D)
# pass P=K so the undistorted points stay in pixel coordinates (otherwise they are normalized)
undistorted_gaze = cv2.undistortPoints(gaze.reshape(-1, 1, 2), K, D, P=K)
Plotting gaze onto image would be correct, but the image looks distorted. Plotting undistorted_gaze onto undistorted_image would also be correct, and the image does not look distorted.
Plotting gaze onto undistorted_image would be wrong, and the gaze data would be slightly off.
This is very helpful and it really reduces errors in my work!
Besides that, I have another question. I want to utilize the eye images obtained by Pupil Invisible, but I found an interesting thing. The number of gaze points and the number of eye images do not match. In other words, eye_image_number (Pupil Player) < gaze_points (Pupil Cloud Raw Data Export) < eye_image_number (.mjpeg file).
The left and right eye cameras are running independently, i.e. they do not record images at the exact same time. To calculate gaze, the algorithm needs to build pairs of eye images first. This is done by iterating through the left eye camera images and picking the closest available image from the right eye camera. Thus, the number of gaze samples should be equal to the number of frames in the left eye video.
I am not exactly sure how Pupil Player calculates the eye image number and how it should relate to the number of gaze samples. Maybe @papr can explain that?
@user-a98526 how do you calculate the number of frames in the MJPEG file? Using PyAV or ffmpeg should yield the correct number, but OpenCV can run into issues when reading videos frame-wise.
1. For Pupil Player, I used the exported eye0.mp4.
eye0 is the right eye. You would need to count the frames in eye1. Those should be equal to the number of gaze samples, or, if you use OpenCV to count, potentially lower, but not higher.
Is this from the intermediate recording format or exported from Player using the eye video exporter?
It is from Player using the eye video exporter.
    frame_count = frame_count + 1
    # frame_buff.append(frame)
return frame_count
Yeah, as @marc mentioned, OpenCV is not reliable in this regard. Let me look up an example with pyav.
The issue with OpenCV is that it sometimes skips frames and would thus yield a frame count that is too low. This could not explain why you find more eye frames than gaze samples.
@user-a98526
pip install av
import av

container = av.open("eye1.mp4")
count = 0
for frame in container.decode(video=0):  # iterate over decoded frames of the video stream
    count += 1
print(count)
In fact, the results are as follows: OpenCV count: 3761, av count: 3761, gaze.csv: 3566
Can I upload my data for a detailed analysis?
Yes, that would be helpful! Please share with [email removed]
I have sent an email. In fact, I found that my Pupil Invisible companion produces a 1-2s gap per recording, which is ignored when using Pupil Player but not in Pupil Cloud.
Yes, there is a short delay between the start-recording button-press in the Companion app and the scene camera actually starting to record. Pupil Cloud starts its playback at the button press and Pupil Player at the first scene video frame.
This gap is a solid gray image of 1-2 seconds, and I wonder whether the gaze data during this time is ignored by the Raw Data Exporter.
Pupil Cloud gaze data includes gaze that was generated during this period.
All sensors have some start up time. I recommend starting the recording before starting the condition/task in your experiment.
Yes, I actually did, my remaining doubts are mostly inconsistencies in gaze counts and eye images, and I've shared my data to [email removed] hope this helps.
I will have a look this afternoon
Thank you very much.
@user-a98526 Please see my notes below:
- after pressing the start button, the eye cameras take a short time to start (~0.43 seconds in your recording)
- after pressing the stop button, the eye cameras take a short time to stop (0.5-1.0 seconds)
- Pupil Cloud matches a right eye frame to every left eye frame as long as it is not further away than 5 ms. Left eye frames without a matching right eye frame are not processed for gaze
- gaze.csv only contains gaze data between the presses of the start and end buttons
The discrepancy between the number of gaze samples in gaze.csv and the number of frames in eye1.mp4 is due to the last point. You can verify this by plotting the timestamps (see attached plot).
What this also means is that there is no issue with the data, all of this is actually expected!
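(If you ever want to reproduce that pairing offline, here is a small sketch under the assumption that the raw recording contains PI left v1 ps1.time and PI right v1 ps1.time files with int64 nanosecond timestamps.)
import numpy as np

left_ts = np.fromfile("PI left v1 ps1.time", dtype="int64")
right_ts = np.fromfile("PI right v1 ps1.time", dtype="int64")
MAX_DIFF_NS = 5_000_000  # the 5 ms matching threshold mentioned above

# for each left eye frame, find the index of the closest right eye frame
idx = np.clip(np.searchsorted(right_ts, left_ts), 1, len(right_ts) - 1)
take_previous = (left_ts - right_ts[idx - 1]) < (right_ts[idx] - left_ts)
idx = np.where(take_previous, idx - 1, idx)
matched = np.abs(right_ts[idx] - left_ts) <= MAX_DIFF_NS  # unmatched left frames get no gaze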
Is there a way to get the eye image corresponding to the gaze point?
Yes, the eye frames and the gaze samples have the same timestamps, which you can use to match them.
From your explanation @papr, I understand why the number of eye images is greater than the number of gaze points, which resolves my doubts.
More precisely, the eye1.time file contains the corresponding timestamps of the left eye frames, which correspond to the gaze ps1.time timestamps.
eye0.time contains timestamps of the right eye, which correspond to the timestamps in gaze_right ps1.time.
The timestamps in gaze.csv only contain the timestamps of the left eye (a subset of gaze ps1.time as described above) and would only allow you to match the left eye.
You can open the time files in Python using np.fromfile("gaze ps1.time", dtype="int64").
Okay, I'll try it, thank you very much for your help today. This will be very helpful for my research.
Hi Pupil friends!
Just a quick question about the Invisible and Markers vs Image Mapper. For "art-like pictures" on a white background, which method would you recommend for analysis?
Markers or Image Mapper?
Thanks for your opinion, Philipp
Hey @user-0f58b0!
We have recently recorded an example dataset (which will be released very soon!) in a gallery, where we used the Reference Image Mapper with great success! So I would for sure recommend it!
It depends a little bit on what the pictures look like though. If it was very uniform artwork, say paintings of just solid color, the algorithm will not work well. But for most types of art it should work well!
I can send you a teaser of the dataset if you are interested in comparing it with the setup you are going to have.
thanks Marc, you guys/girls are the best. We'll give it a try! Also the Stream thingy is awesome!
@papr
the player file is what I need
could you please try to open one of the problematic recordings and then upload the file here?
This is what you want?!
@user-b811bd That's not quite what we need! Please first open one of the problematic recordings in Pupil Player, which will log the error in the player.log file. Then, upload the player.log file here. Please upload the actual file rather than taking an image of it!
Ok, got it
Hello, can I disable the sound in PupilCloud when creating Gaze Overlay as .mp4?
ok, thank you, this is very helpful. This was an issue on older Pupil Invisible Companion app versions when it attempted to recover an unfinished recording. Please share the info.json files of the affected recordings with [email removed] We can fix them for you!
No, that is not an option currently. You would have to use an external tool to remove the audio from the video.
Thanks Marc
sounds great, thank you so much
Those recordings that have info.player.json and info.invisible.json files have a separate issue. Please attempt to open these recordings, too, and share the player.log file again. Note that the log file is overwritten each time Player is restarted. Therefore, it is important to make a backup of the file between opening different recordings.
Hi, I need to know if I can use Pupil Labs open source software with Gazepoint GP3 hardware? Thank you so much
Hi @user-328d3b GazePoint only sells remote eye trackers. Pupil Labs software is designed for head-mounted eye trackers and is therefore not conceptually compatible with their hardware.
Sorry to bother you again @papr @marc, I tried your method, but I found that the number of data points in gaze ps1.time (767, from the raw data download) is much smaller than the number of eye1 images (3761). Also, the timestamps in gaze.csv (from the Pupil Cloud Raw Data Export) and the timestamps in eye1.time are not equal. So I'm still not sure how to match them.
It is expected that gaze ps1.* contains fewer samples, as this is the realtime-estimated gaze at ~55-65 Hz. And as explained, gaze.csv only contains data up to the stop-recording button press, while the eye video can go a bit longer.
Every timestamp [ns] in gaze.csv should be present in PI left v1 ps*.time. Its index corresponds to the frame index.
This is how the matching could work roughly
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
left_eye_time = np.fromfile("PI left v1 ps1.time", dtype="int64")
left_eye_video = video("PI left v1 ps1.mp4")  # placeholder for your preferred video reader

for _, row in gaze.iterrows():
    # every gaze timestamp is also a left eye timestamp, so this finds the exact match
    idx_in_left_eye = int(np.searchsorted(left_eye_time, row["timestamp [ns]"]))
    left_eye_img = left_eye_video.frame_for_index(idx_in_left_eye)  # placeholder accessor
Hello everyone! I have a technical problem with pupil invisible. I am using OnePlus 6. The system has been updated by mistake and (as expected) the app was not working anymore. We managed to reinstall the older version (Android version 8.1.0), however, we are still having some issues. Eyes are not detected and the gaze pointer during offset correction is not visible or fixed on the scene video. The recording seems to start correctly, but it will then give an error message. Any idea about how to fix this problem? Please, let me know if you need any more info. Thanks a lot!
Hi @user-4f3037 Sorry to hear about the issues. Could you please contact info@pupil-labs.com in this regard and attach the error message? Ideally, please attach the recording's android.log.zip file as well.
Thanks, I'll do that. I have the error message given by the app. Could you please tell me where I can find the android.log.zip file?
Was the recording uploaded to Pupil Cloud?
When the error message is given, the recording is not saved (there is no file in the recording folder). For some reason, sometimes the recording is saved, but without gaze. I can try to get one of those and upload it to Pupil Cloud
Does that mean that you usually copy the recordings from the phone via USB?
I usually uploaded them to the Cloud when the app used to work. But now it just does not record anything and gives me an error message
Thanks @papr . By the way, I'm trying to use https://api.cloud-staging.pupil-labs.com/v2/ to get the camera parameters, but I don't seem to have access.
For the API to work you need to be logged in to Pupil Cloud. So please first visit cloud.pupil-labs.com, log in, and then visit the API URL!
Just to clarify, you get the error message when attempting to start the recording, not later after the recording has started already, correct? What is the error message?
yes, correct! This is the error I get. I get it just after the recording started
I tried, but it still doesn't seem to work. My serial number is 7PQ45.
That does not seem to be a scene camera serial number but rather that of the glasses frame. You can find the former on the inside of the scene camera (you need to detach the camera to see it).
@user-a98526 ~~Please try https://api.cloud.pupil-labs.com/v2/ instead (without the -staging part)~~ Looks like the UI does not show the hardware endpoint.
Is it YWSCB?
Yes, that is a valid serial. In the URL you need to spell it without caps though, i.e. please visit: https://api.cloud.pupil-labs.com/v2/hardware/ywscb/calibration.v1?json
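(Once you have saved that JSON response locally, loading it for use with OpenCV could look roughly like the sketch below; the field names camera_matrix and dist_coefs are assumptions on my part, so inspect the actual response and adjust.)
import json
import numpy as np

with open("calibration.json") as f:  # the API response saved to disk
    calib = json.load(f)

# field names are assumptions -- check the keys of your downloaded JSON
K = np.array(calib["camera_matrix"], dtype=np.float64).reshape(3, 3)
D = np.array(calib["dist_coefs"], dtype=np.float64)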
This is very useful, thanks!
I have a small suggestion that I hope will help you improve the Pupil Invisible experience: the Gaze Overlay enrichment in Pupil Cloud could add functionality similar to Pupil Player's eye video overlay.
The gaze overlay enrichment exports the scene video with the closest gaze point. The issue here is that the scene video is recorded at a much lower frequency than the eye videos/gaze. Overlaying the eye videos would mean only rendering the closest eye image as well. As a result, the gaze overlay video would not contain all gaze/eye video images and would surely not be the right tool for your annotation task.
Thank you for the feedback! Could you elaborate on what you are using the overlay for? Or in other words: What is your use case for visualizing the eye videos?
Hello everyone! I'm using the Pupil Invisible glasses for an interview, and while checking the recording in the Invisible Companion app, I've noticed that the ET circle doesn't move at all. Any ideas what I can do to prevent this from happening again? Thank you very much in advance. EDIT: I just checked the Cloud and thankfully the recording seems to be fine there!!!
Could you specify when (during vs after) and where (in-app vs Cloud) you checked the recording?
My application is eye activity classification (fixation, saccade, smooth pursuit, PSO, etc.), which will require experienced users to label the data. During the tagging process, eye images and gaze points can provide the user with a more accurate judgement. This is why I am so obsessed with matching gaze points to eye images.
Ok, thank you for elaborating!
In-app and during - but it only appears that way when checking the recording in-app; I just viewed the video in the cloud and it seems to be fine there
To monitor the recording in realtime, make sure to check out this newly released tool: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/monitor-your-data-collection-in-real-time.html
From what age onwards would you recommend using Pupil Invisible?
Hi Kerstin! The official answer would have to be that we have never properly evaluated usage with children and can't make solid recommendations. But we can share some anecdotal advice.
The biggest issue when using Pupil Invisible with children is that the glasses are not sitting well on their small heads. This can be partially circumvented using the head strap, which gives some stability.
In all cases we have tested, the algorithms have been working well even on children as young as 2-3 years. But in that age group the glasses sit very loose. More creative ways of fixating the glasses may help. At maybe 8-10 years I'd expect the issues to stop as the glasses should start to sit well when using the head strap.
Thank you!
Hi @marc, I found an interesting thing. Sometimes an eye0(t) image will be closest to two adjacent eye1(t) and eye1(t+1) images, while eye0(t+1) is not closest to any eye1 image. Does this mean that some eye0 images (i.e., eye0(t)) will be used twice in the gaze point estimation?
Yes, that can happen.
Thanks for the answer, this is a very interesting situation!
Hi there! I have a problem as well with the Pupil Invisible glasses and the recordings in the Invisible Companion app. The ET circle seems to stop moving after 10 seconds to 2.5 minutes, even though the recording overall continues. Is there a possibility that I can prevent this from happening so I can check the recordings in the app (not live or in the cloud) with a working ET circle? Thank you in advance!
Hi @user-bd34eb! Does the gaze circle only stop moving when you play the recording back in the app, or also during playback in Pupil Cloud/Pupil Player/ Live Preview? Would you be able to share the recording with [email removed] so we can take a look?
Hi! Thank you for the fast response! I have sent you a recording via that email address. I'm unsure whether the ET circle also stops when looking at the live feed during recording, since we only use the recording in the app...
The recording of the circle most often stops at 1:13 min, but sometimes after 10 seconds or 2:15 min.
Thanks @user-bd34eb! After a first look it seems like there is nothing wrong with the recording itself. The gaze data is fully available in the recording. I have forwarded the recording to the Android team to investigate why it might not play correctly in the app. We'll keep you up to date!
Hi Marc, we might have the same issue. The ET circle stops moving while the recording continues. We use these recordings in the rehabilitation to give feedback on scanning behaviour, so having recordings with a moving ET circle is crucial for us. If you have found the problem I would like to hear it as well!
Thank you very much!
Thanks for the report @user-2d66f7! We are still investigating the issue! If you wouldn't mind, could you also share one of the affected recordings with [email removed] so we have another instance to work with?
Yes, I will do that as soon as possible.
@user-2d66f7 @user-bd34eb We were able to locate the problem and fixed it with a new release, so please update your apps. The release should become available tomorrow! @user-a9589e If you could anyway still send us your example recording that would be great to double check!
Hello. We are a small company considering buying Pupil Labs Invisible. We cannot find anything regarding the warranty of the products though. Can you inform me of the warranty? Cheers, Frederik
@user-710092 We give 12 months of warranty. In case of a defect, the unit is shipped back to us, we repair it, and ship it back to you for free. Past warranty we have a repair service; our repair pricing is fair: it covers parts and labor costs only.
Is it possible to get the companion app outside the play store ?
No, generally not. Can't you use the Play Store?
You need a google account for that. It would be great if the apk would also be made available outside of the play store.
Okay, I see! Thanks for the feedback, I'll forward it!
For an official educational institution such an account binding is difficult. We use the OnePlus in a tightly defined way, only as a (local) recorder of our eye tracking data. So it would be really great to have another option to update the Companion app.
What stops you from using a "throw-away account" whose only purpose is to download the app?
I'm not so happy with that, but I did that just now.
Hi, I noticed something weird in my data. I always get a 1.0 confidence score for all of the frames. However, sometimes my participant's eyes are almost closed (not enough to be a blink) and in these moments the gaze estimate is off laterally. I highly doubt that the gaze orientation is right at these instants. Do you have an idea how to detect these instants, other than my feeling that this is not right? Is the confidence score usually reliable?
Hi @user-e0a93f. Please see this message for what confidence means in the context of an Invisible recording: https://discord.com/channels/285728493612957698/633564003846717444/963724325586870303
Thanks for your answer, it explains why it is always 1.0. Do you have an idea on how to detect half-closed eyes?
@papr or anyone, is it possible to get a confidence score for the AI prediction of the gaze orientation? Otherwise, I guess I will have to train my own AI to detect when the eyes are half closed. Did you do something like that for the blink detection which could be reused in my case?
Hi! Is the video frame rate of the pupil invisible world camera variable?
Hi, yes, it is
In our videos we observe a slight increase and then decrease back to 'normal' of the playback speed. This happens for a few seconds at points throughout the raw world video.
Could that be possible?
The framerate should be Gaussian-distributed around 30 FPS, and a local average should always yield ~30 FPS. The changes in speed should not be perceivable when playing back the recordings. Are you playing back the recordings using either the Companion App, Pupil Cloud or Pupil Player? If you could share an affected recording with [email removed] I'd be happy to take a look if something appears wrong!
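(If you want to verify this on your recording, here is a quick sanity check, assuming the Cloud raw data export's world_timestamps.csv with a timestamp [ns] column.)
import numpy as np
import pandas as pd

ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
dt_s = np.diff(ts) * 1e-9  # nanoseconds -> seconds between consecutive scene frames

print("mean FPS:", 1.0 / dt_s.mean())
# local average over a sliding window of 30 frames (~1 second)
local_fps = 1.0 / np.convolve(dt_s, np.ones(30) / 30, mode="valid")
print("local FPS min/max:", local_fps.min(), local_fps.max())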
Does anyone know how to resolve issues with uploading to the cloud -- files have been stuck at 0% for several days?
Files/videos of similar length have uploaded to the Cloud perfectly fine in the past, and the specific files that are not uploading can be viewed with overlaid gaze by clicking on the file. I have stopped the app completely and rebooted the phone provided by Pupil Labs, but have not had any luck getting the files to upload.
Hi @user-6a91a7. Please try logging out of the app and logging back in again.
Thank you @nmt ! That worked!