Hey there! I have a question about the Pupil Invisible eye tracker and its output. We have recorded specific eye movements in a static scene and now want to generate a video file with the best possible resolution. Since the image quality is suboptimal due to the poor lighting and rather poor camera quality, we wanted to ask whether the video stream could be replaced by a static photo of the scene. The aim is therefore to superimpose the eye movements as a video over the photo (instead of over the real video stream...).
Hi @user-48a758 ! This is exactly what you can achieve with the Reference Image Mapper enrichment, have you tried it?
@user-d407c1 thank you for your reply! Yes, but we don't know how to export this as an MP4 or other video file...
Understood! You cannot currently download the overlay you see on the right. Feel free to suggest it in features-requests if you would like that video to be downloadable.
That said, replicating it in Python is relatively trivial. If you are interested in the fixation scanpath, there is a click-and-run tutorial here.
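If you want the plain gaze overlay rather than the scanpath, a minimal sketch could look like this. It assumes the Reference Image Mapper download, i.e. the reference image plus a gaze.csv with "timestamp [ns]" and "gaze position in reference image x [px]" / "y [px]" columns; please double-check the column names against your own export.

```python
# Sketch: render mapped gaze as a ring over the static reference image and
# save it as an MP4. Assumes a Reference Image Mapper export containing
# reference_image.jpeg and gaze.csv; column names may differ per export.
import cv2
import pandas as pd

FPS = 30
image = cv2.imread("reference_image.jpeg")
height, width = image.shape[:2]

gaze = pd.read_csv("gaze.csv")
# Drop samples that were not mapped onto the reference image.
gaze = gaze.dropna(subset=[
    "gaze position in reference image x [px]",
    "gaze position in reference image y [px]",
])

# Bin gaze samples into video frames of 1/FPS seconds each.
t0 = gaze["timestamp [ns]"].min()
gaze["frame"] = ((gaze["timestamp [ns]"] - t0) * FPS // 1_000_000_000).astype(int)

writer = cv2.VideoWriter(
    "gaze_overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), FPS, (width, height)
)
for frame_idx in range(int(gaze["frame"].max()) + 1):
    frame = image.copy()
    for _, row in gaze[gaze["frame"] == frame_idx].iterrows():
        x = int(row["gaze position in reference image x [px]"])
        y = int(row["gaze position in reference image y [px]"])
        cv2.circle(frame, (x, y), 20, (0, 0, 255), 3)  # red ring (BGR)
    writer.write(frame)
writer.release()
```

This renders at a fixed 30 fps; you can of course raise the frame rate or draw fixations instead of raw gaze.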
Is it possible to connect an Invisible headset to a personal Android smartphone? The phones we have that came with the Invisible headsets are experiencing difficulties.
Hi @user-3cff0d, could you open a ticket in the troubleshooting channel? It is possible that we can get your phone(s) working again. Please note that we only support specific phones for Pupil Invisible. If your phone is a supported model (i.e., OnePlus 6, OnePlus 8, or OnePlus 8T), then yes, you can use your personal phone.
Hello, I'm trying to use the Reference Image Mapper enrichment, but I don't fully understand the correct procedure. I took a picture and then scanned the area of interest with the glasses in my hand. I then created a project with this video and ran the Reference Image Mapper enrichment. Even if I didn't wear the glasses, I see gaze on the image. My other question is: once I created the enrichment with the video scan and the image, how can I apply it to a recording that I captured (of course in the same environment)?
Hi @user-3e88a5! Let me address your questions one by one.
Even if I didn't wear the glasses, I see gaze on the image
This is normal. The neural network attempts to estimate gaze direction even without eye images. You can filter out such data later to remove instances where the glasses were not worn or when the person was blinking, using the respective columns. Also, we automatically exclude data from the scanning recording, so there's no need to worry about it.
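As a quick illustration, the filtering could look like this, assuming the standard Pupil Cloud gaze.csv with "worn" and "blink id" columns (please verify the column names against your own export):

```python
# Sketch: keep only gaze samples recorded while the glasses were worn and
# outside of blinks. Assumes Pupil Cloud's gaze.csv with a "worn" flag and a
# "blink id" column; adjust the names if your export differs.
import pandas as pd

gaze = pd.read_csv("gaze.csv")
worn = gaze["worn"].astype(bool)          # True while the glasses are on the face
not_blinking = gaze["blink id"].isna()    # no blink id assigned to the sample
gaze_clean = gaze[worn & not_blinking]
gaze_clean.to_csv("gaze_filtered.csv", index=False)
```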
My other question is, once I created the enrichment with the video scan and the image, how can I apply it to a recording that I captured (in the same environment)?
Simply add the recordings to your project before running the enrichment. The reference image mapper will attempt to map all recordings in the project to that image.
If you want to exclude any recordings, you can use events to delimit which parts of a recording are included.
For more details, see this post https://discord.com/channels/285728493612957698/1047111711230009405/1229721600115478640 or refer to our documentation
Very clear, thank you a lot !!
Hi everyone! For our research we need to specify at which point the wearer looked at the eyes of their counterpart. In the end we want to measure eye contact in a dyad. I know that there is an enrichment called Face Mapper, but it only gives a boolean variable for whether the wearer looked at a face. Is there an efficient way to create such a variable for the eye region? Maybe we can define an AOI for the eyes? Thank you all very much!
Hi @user-fc5504! We have an Alpha Lab tutorial built on top of the Face Mapper that shows exactly how to achieve what you're looking for. Feel free to take a look!
Hi Miguel. I was able to run the code and get a new CSV file with landmarks. However, the timestamps in this CSV do not match the timestamps I had in my gaze-on-face CSV file from the enrichment. Why is that? My dyad is perfectly synced in time in the gaze-on-face data (because I added a post hoc event marking the start), but after extracting landmarks they don't match anymore. Another question: is there an easy way to find the time points where eye contact happened? It's been a long question, but thank you so much in advance.
Thank you very much Miguel, it works! I had another question. Since we are recording a dyad, we have to make sure that both devices are perfectly synced in time so we can detect eye contact. However, when I look at our data I see different timestamps for each participant, and they do not exactly match. Do you know how we can make sure that both glasses record on the same clock? Thank you so much again.
Hi @user-fc5504 ! Generally, I would sync both phones to the same master clock https://docs.pupil-labs.com/invisible/data-collection/time-synchronization/ and send an event to both of them programmatically at the same time. You can also use some other stimuli to achieve that post-hoc.
Thank you Miguel I will try that
Could you please elaborate on what you mean by sending an event at the same time? How can I achieve this during the experiment or post hoc?
That would be during the experiment, using the realtime API https://docs.pupil-labs.com/invisible/real-time-api/track-your-experiment-progress-using-events/
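For example, a rough sketch using the realtime API's simple interface could look like this. The IP addresses are placeholders for your two Companion phones, and 8080 is the default port:

```python
# Sketch: send the same wall-clock-timestamped event to two Companion phones.
# The IP addresses are placeholders; both phones should already be synced to
# the same master clock (e.g. via NTP) for the timestamps to be comparable.
import time
from pupil_labs.realtime_api.simple import Device

devices = [
    Device(address="192.168.1.21", port="8080"),  # participant A's phone
    Device(address="192.168.1.22", port="8080"),  # participant B's phone
]

# Use one shared timestamp so the event marks the same moment in both recordings.
timestamp_ns = time.time_ns()
for device in devices:
    device.send_event("sync.start", event_timestamp_unix_ns=timestamp_ns)
    device.close()
```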
Sanity check please: when viewing the data in Pupil Cloud, gaze is shown as a red or grey ring. Can I double-check that the red colour represents the pupil being detected and grey not detected (e.g. a blink)? I couldn't find this definition in your white papers. Thanks!
Hi @user-ee4205! The circle always represents gaze. The default color is red, and it turns gray when there is a blink. You can disable this behaviour by clicking on the icon next to blinks in the timeline.
Thank you! I've played with the buttons and see what the colours mean.
Hi! Can I do post-hoc offset correction with the data coming from the Pupil Invisible?
Hi @user-3e88a5 , yes, you can do Offset Correction in Pupil Cloud.
Thank you a lot!
I would like to ask what I should do if a video is not completely uploaded. My phone shows that the upload is complete.
Hi @user-de220f, could you open a ticket in the troubleshooting channel about this?
I can see the full movie on my phone but not on my computer
The phone shows that the upload has completed, and the video opens normally on the phone.
Hello! Is there any open-source code that contains the calculations of the AOI metrics computed after the Reference Image Mapper, or any paper that describes the algorithm used?
Hi @user-fc5504 - Can you explain what you mean by the timestamps not matching your enrichment data? The tool works with the data of an individual recording - it doesn't take into account any synchronization that you might have done between different recordings.
As for your second question, as my colleague, @user-d407c1, also mentioned, eye contact would require synchronization and manual annotation of events when gaze of both participants falls on the other person's eyes.
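To give you an idea, once both recordings are on a shared clock, a rough sketch of the pairing step could look like the following. The file and column names are placeholders for whatever your face/landmark processing produces, not actual Pupil Cloud exports.

```python
# Sketch: flag candidate eye-contact samples in a dyad. Assumes you have
# already brought both recordings onto a shared clock and derived, for each
# participant, a table with "timestamp [ns]" and a boolean "gaze_on_eyes"
# column. File and column names here are placeholders.
import pandas as pd

TOLERANCE_NS = 50_000_000  # pair samples that are within 50 ms of each other

a = pd.read_csv("participant_a_eyes.csv").sort_values("timestamp [ns]")
b = pd.read_csv("participant_b_eyes.csv").sort_values("timestamp [ns]")

# For each sample of participant A, find participant B's nearest-in-time sample.
paired = pd.merge_asof(
    a, b, on="timestamp [ns]", suffixes=("_a", "_b"),
    direction="nearest", tolerance=TOLERANCE_NS,
)

# Eye contact: both participants look at the other's eye region at ~the same time.
on_eyes_b = paired["gaze_on_eyes_b"].fillna(False).astype(bool)
paired["eye_contact"] = paired["gaze_on_eyes_a"].astype(bool) & on_eyes_b
print(paired.loc[paired["eye_contact"], "timestamp [ns]"].head())
```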
Thank you very much for your response, Nadia. I am basically trying to do the same thing to calculate eye contact. However, since, as you said, the tool does not take into account the synchronization I made on the videos (via event markers for starting points), the timestamps of the individual recordings in a dyad do not match, and therefore I cannot ascertain whether eye contact happened. In contrast, the data I get from the Face Mapper (where I used the marked events as starting points to sync the recordings) provides identical timestamps for the synced videos. The problem is that I need to use the facial landmark tool, but its output is not synced.
Hello
Hope you are doing fine!
Please let me introduce myself first. I am Laurens, a doctoral student at Ghent University. Markus Schöneberg and Daniel O'Young (from iMotions) told me I could contact you in case of a problem with the Pupil Invisible.
Last year I worked with a master's student on a pilot study on gaze behavior in football coaches and scouts. However, when I received the Pupil Invisible glasses back, the software of the OnePlus 8T had updated to Android 12, making it impossible to work with the Pupil Invisible Companion app (gaze behavior data did not work). Moreover, video data in the "My Files" section also got removed during this update to Android 12. I read something in the FAQ of Pupil Invisible about going back to Android 11, but this did not work out well. Are there any other solutions you might think of?
How do you think I should proceed from here? I hope you can help!
Thanks in advance and enjoy the rest of your day.
Hi @user-6669c1 - we received your email. Indeed, the Invisible Companion App is not compatible with Android 12 on OnePlus8T. You can downgrade to Android 11 following the instructions available in our docs.
As for your recordings, did you upload them to Pupil Cloud?
Hi @user-480f4c! We tried to do this, but it did not work out well. The downgrade only ended up resetting the OnePlus 8T to factory settings. We did indeed upload the recordings to Pupil Cloud, so we have backups. Should I just retry the downgrade?
Can you open a ticket in the troubleshooting channel so we can assist you further there in a private chat?
I will, thank you!
Hi, I just started using the Pupil Invisible. During data recording I received a surprising number of errors related to device failure. Is it normal for the device to overheat at 28 °C indoors?
Another question I have is related to pupil size metrics. Is there a module or a command to mass-export pupil data from project files? I found a paper discussing "Pistol: Pupil Invisible Supportive Tool", but it seems to take in one project file at a time.
Hi @user-e6b995! It sounds like you might have an issue with the USB cable or connector if you're running into multiple errors. The device does heat up, but overheating shouldn't be causing what you describe. Please open a ticket in the troubleshooting channel and a member of my team will coordinate debugging steps with you.
Re. mass export, yes, this is possible via Pupil Cloud
Hi, a while back I asked a question about how to extend the battery life of the Companion Device during a recording. It was suggested to use a USB-C hub, but my current Companion Device (OnePlus 6) does not support charging and data transfer through the charging port at the same time.
Would this problem be solved if I were to upgrade to another Companion Device like the OnePlus 8/8T, and does that model support both charging and data transfer at the same time?
Hi, I have a question about downloading recordings from Pupil Cloud. I saved my recordings there and used the "Pupil Player format" button to download them to my PC. However, when I try to drag the extracted file from the zip into Pupil Player, it won't accept the recording. Can someone assist me with this issue?
Hi @user-266087 - What is the error you get? May I also ask if you could preview this recording on Pupil Cloud?
This is the error: player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
@user-266087 can you please open a ticket in the troubleshooting channel? We can help you there in a private chat.
Hi all!
Hope you are well! We will be running an experiment in November with 23 Pupil Invisible devices as well as 23 AntNeuro eeGo EEGs. Last year we managed to reliably stream the EEGs to LSL, and we would like to do the same this year for the Pupil Invisible gaze so we can better sync with the EEG data.
I noticed there is already a solution to this problem with this lsl script: GitHub - pupil-labs/lsl-relay
And so I was wondering if I could ask for some pointers before I start hacking: the current version can only accommodate one device at a time. If we want to have multiple devices, how would we proceed? Do you have a suggestion for automatically connecting to the devices as they appear on the network? Since setup will be staggered, it might be useful to connect to them as soon as possible.
I also noticed the Neon Companion app handles LSL right out of the box. In a perfect world we would have that for the Invisible glasses as well. Is there any way we could connect the Invisible to the Neon Companion app? Is it possible to write a script that would live on the phone to stream the data via LSL?
Thanks a lot for your help!
Hi, @user-d23b52! This sounds like a big project! Let's see if I can help.
If we want to have multiple devices, how would we proceed?
You would need to run multiple instances of the lsl-relay software (one for each Invisible device). It may be useful to launch each instance with a unique value for the --outlet_prefix parameter, so that you can distinguish the different streams wherever they are received.
Do you have a suggestion for automatically connecting to the devices as they appear on the network?
You'd have to do some programming. The Companion app broadcasts over mDNS, so you could run a loop that queries mDNS and, when it sees an Invisible Companion, launches the lsl-relay with the --device_address IP:PORT parameter specified (see the sketch below).
Is there any way we could connect the Invisible to the Neon Companion app?
No, I'm afraid they aren't compatible in that way.
Is it possible to write a script that would live on the phone to stream the data via LSL?
It's certainly possible, but it would be more of a full-fledged app than a simple script. LSL publishes Android bindings, but I believe they are outdated and the Java bindings are recommended instead.
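For the discovery loop, here is a rough sketch of how it could look. It assumes the pupil-labs-realtime-api package for mDNS discovery and the relay installed as a CLI; the relay command name and the device attributes used for the prefix are assumptions, so adjust them to your setup.

```python
# Sketch: discover Invisible Companion devices over mDNS and start one
# lsl-relay process per device, each with its own outlet prefix.
# The relay command name is a placeholder; check what your installed
# lsl-relay version provides.
import subprocess
import time

from pupil_labs.realtime_api.simple import discover_devices

RELAY_CMD = "lsl_relay"  # placeholder for the relay's CLI entry point
running = {}             # device address -> relay subprocess

while True:
    for device in discover_devices(search_duration_seconds=10.0):
        address, port, name = device.address, device.port, device.phone_name
        device.close()  # we only needed the connection details
        if address in running:
            continue
        proc = subprocess.Popen([
            RELAY_CMD,
            "--device_address", f"{address}:{port}",
            "--outlet_prefix", name,  # unique prefix, assuming distinct phone names
        ])
        running[address] = proc
        print(f"Started relay for {name} at {address}")
    time.sleep(5)
```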