Also, if we didn't place surface marker mappers (the QR codes) in the corners of each surface of interest, is there a manual method to build a heatmap?
Awesome, great to hear.
Is there a pro/con to using the reference image mapping technique over the marker mapper technique (to develop heatmaps)? Is one method preferred over the other? If I understand correctly, they both seem to accomplish the same task, no?
They do indeed accomplish the same task. The Marker Mapper requires adding Markers to the scene though, which is not always desired. The Reference Image Mapper on the other hand works in natural scenes, but requires them to have a minimum level of visual features, so it is not applicable in every environment.
Our general recommendation would be to try the Reference Image Mapper first with some pilot recordings and fall back on the Marker Mapper if it is not successful.
Ah, I see from the chat history that you calculated these timestamps yourself. Make sure to always subtract the same value from all timestamps. Otherwise you will end up with misaligned timelines. A good value would be the start time in the info.json file.
What is the source of these files? As in, which program generated them?
As can be seen here, the timestamps of the two blink IDs do not overlap. The first screenshot is from the blinks.csv file, the second from the gaze.csv file.
Hi! I tested the frequency of my data after upgrading the Companion app and found that it was even around 200 Hz instead of 120 Hz. Is this possible and accurate?
Yes, the device will run at the highest possible speed. But that does not always mean 200 Hz. We advertise 120 Hz as a lower bound.
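If you want to verify the effective rate yourself, it can be computed from the exported gaze timestamps; a minimal sketch, assuming a sorted list of nanosecond timestamps:

```python
def effective_rate_hz(timestamps_ns):
    # Effective sampling rate: samples per second over the recorded span
    duration_s = (timestamps_ns[-1] - timestamps_ns[0]) / 1e9
    return (len(timestamps_ns) - 1) / duration_s

# Synthetic example: 5 ms spacing corresponds to 200 Hz
ts = [i * 5_000_000 for i in range(1000)]
rate = effective_rate_hz(ts)  # close to 200.0
```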
Ahh, I didn't even think of a mistake in the code. I will take a second look at it, thanks! I thought there had been a mistake somewhere in the recording. 🫠
Both are valid reasons. You can easily check by validating the original timestamps. If they make sense, the error is likely in the transformation code
I applied the same function to both blinks.csv and fixations.csv, and for the fixations file it turned out right; that's why I was confused.
Does the function take a start point in nanoseconds? Or do you just use the smallest available timestamp?
It uses the start point in nanoseconds
Start point of what?
I'll do an extensive look at the code and will then update you!
Okay, so: First, all of the nanosecond values are transformed into seconds. Then the very first start timestamp is subtracted from all the time values (in seconds) in the start timestamp column. After that, the duration [ms] column is also transformed into seconds. This duration in seconds is added to each respective start timestamp to calculate the end timestamp. The same function is applied to both blinks.csv and fixations.csv. For fixations.csv it makes a lot of sense and matches the gaze dataframe, but for blinks it does not.
That assumes that the start timestamp is always the same. But if the first blink happens at 3.0 seconds and the first fixation at 0.0 seconds, then you will see a discrepancy. Also, to calculate the end timestamps, you can apply the same function as before, subtracting the same timestamp as for the start timestamps.
Also, note that due to how floats are usually implemented, it is better to subtract the nanosecond timestamps first and then convert the values to seconds. This preserves timestamp accuracy.
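To illustrate why the order matters, a small sketch with made-up nanosecond timestamps: float64 carries only about 15-16 significant digits, so converting a full Unix-epoch nanosecond timestamp to seconds before subtracting throws away sub-microsecond precision:

```python
t_ns = 1_667_554_800_123_456_789      # hypothetical gaze timestamp (ns since Unix epoch)
start_ns = 1_667_554_800_000_000_000  # hypothetical recording start (ns since Unix epoch)

good = (t_ns - start_ns) / 1e9      # subtract in integer nanoseconds, then convert
bad = t_ns / 1e9 - start_ns / 1e9   # convert first, then subtract

# good is 0.123456789 exactly; bad is off by roughly 1e-7
# because t_ns / 1e9 no longer fits in a float64 at full precision
```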
Okay, maybe to clarify (in response to your first sentence): I applied it to the blink csv file and then to the fixation csv file, and saved the results in two different variables. So basically x = function(blinks), y = function(fixations). Hence the start timestamp was not the same for both files. Or did I misunderstand you?
Your understanding is correct. You want x = function(blinks, start_time=T) and y = function(fixations, start_time=T), using T within the function for the subtraction and ensuring T has the same value for both function calls. As mentioned above, a good value for T is the recording start time:
import json
import pathlib
path_to_json = pathlib.Path("<path to the recording's info.json>")
start_time_ns = json.loads(path_to_json.read_text())["start_time"]
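Putting the advice above together, a sketch of such an alignment function; plain Python with simplified inputs, not the exact export format:

```python
def align(start_timestamps_ns, durations_ms, recording_start_ns):
    # Subtract the shared recording start in integer nanoseconds first,
    # then convert to seconds, to preserve timestamp precision
    starts_s = [(t - recording_start_ns) / 1e9 for t in start_timestamps_ns]
    # End timestamp = start timestamp + duration converted to seconds
    ends_s = [s + d / 1e3 for s, d in zip(starts_s, durations_ms)]
    return starts_s, ends_s

# Use the SAME recording start for every file so the timelines align
start_time_ns = 1_000_000_000_000
blink_starts, blink_ends = align([start_time_ns + 3_000_000_000], [150], start_time_ns)
# This blink starts 3.0 s into the recording and ends around 3.15 s
```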
Thanks for the tip on first subtracting and then converting!
Ahhh! Okay, this makes a lot of sense! So it probably was just a coincidence that it worked so well with the fixations?
A coincidence in the sense that the gaze data starts with a fixation, yes. Also, fixations are more frequent than blinks, making it more likely for it to work out.
Thank you very much, this helped me tremendously!
Happy to help!
appreciate the support you guys are doing a lot!
Hi! I'm trying to get the video with the gaze into Lab Recorder in real time while streaming, but there is no Pupil Labs video option in Lab Recorder; it shows only pupil_invisible_Event & pupil_invisible_Gaze. I have tried with and without starting a recording from the monitor app. Is there any way to synchronize it through Lab Recorder? Thanks in advance!
Hi, streaming the video to LSL is not supported. Mostly due to a lack of software that would be able to play that back.
Or in other words: while recording I could create an event (timestamp), retrieve it through LSL, and then synchronize the other data streams through this timestamp?
That's exactly what the LSL relay does for you! See https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html
In Pupil Cloud, in the project editor, I created a 23-second video with 2 events. Between events 1 and 2 there is an annotation lsl.time_sync.66. What does this mean? And why is it at that specific time?
Those events are generated by the relay and are used for the post-hoc sync linked above. The frequency of those events is configurable via the LSL relay's command line arguments. The exact time of the events only matters to the relay.
Thank you so much! I'm sure more questions will come up in the future!
Hi again! I'm trying to verify the values of the raw data export from Pupil Cloud against the values from Lab Recorder. I saw that the csv file from the cloud seems to round all the gaze x and y values, while Lab Recorder isn't rounding at all. That means Lab Recorder gives me a number like 672274658203125 while the csv file gives 672275000000000. That makes a difference when I try to identify the exact start point of the recording from the Companion app and the identical part inside the Lab Recorder xdf file. Is there a way I can get the unrounded official gaze values through the cloud? Thanks in advance!
Hi @user-f1fcca! The timestamps exported with the raw data exporter should be Unix timestamps in nanoseconds. The value 672275000000000 is much too small to be one though, so something is going on here.
Are you doing any processing of the timestamps coming from the raw data exporter, or are those the raw values in the CSV file?
Could you let us know the recording ID of a recording that is showing such timestamps and give us permission to inspect the CSV files to investigate?
Also, in theory there should not be any rounding of this magnitude, which would essentially round the timestamps to seconds.
Hi @marc! I'm not doing any processing, at least none I'm aware of. I just open the gaze.csv file with Excel first to see if everything is all right there, and then I read it with matrixread in Matlab.
Please note that LSL time != Pupil time. LSL records timestamps in seconds relative to an arbitrary clock start. Pupil time is in nanoseconds since the Unix epoch, as Marc said already.
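A sketch of moving between the two clocks, assuming you have already estimated the offset (e.g. from the relay's time-sync events):

```python
def lsl_to_unix_ns(lsl_time_s, offset_s):
    # offset_s = unix_time_s - lsl_time_s, estimated from matching sync events.
    # Returns an integer nanosecond timestamp on the Unix/Pupil clock.
    return round((lsl_time_s + offset_s) * 1e9)

# Hypothetical numbers: LSL clock reads 12.5 s, estimated offset ~1.6675548e9 s
unix_ns = lsl_to_unix_ns(12.5, 1_667_554_800.0)
```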
These values are not timestamps; these are the values that the gaze.csv file gives in gaze x [px] and gaze y [px].
Ah, gaze locations are given in floating-point pixels. Depending on your language setting, the float values might be misinterpreted.
True, there is a difference in Greek between the comma (,) and the dot (.), but which program is applying this setting? Excel by default? Also, why doesn't Matlab do it?
I think Excel tries to be smart about it but fails. Matlab just uses the internal float string interpretation. I think you can tell Excel to use that, too, but I don't know how.
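If you need to parse decimal-comma strings yourself, a small sketch (assumes no thousands separators):

```python
def parse_locale_float(s):
    # Normalize a decimal comma to a dot before parsing;
    # this assumes values never contain thousands separators
    return float(s.replace(",", "."))

a = parse_locale_float("504,5231")  # decimal-comma input
b = parse_locale_float("504.5231")  # decimal-dot input, unchanged
```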
Ah alright! Thanks again guys !!
Hi again! I have a question: Lab Recorder gives me slightly different values than the csv file from the Pupil Cloud raw data export, not only the rounding issue but also a difference of 4 or 5 units. For example, Lab Recorder gives 504.5231 and the csv gives 501.8492. Do you have any idea why? I'm talking about the gaze coordinates on the x axis!
Hi, am I assuming correctly that you are comparing rows that have the same timestamp?
Note that Pupil Cloud reprocesses the eye videos at their full temporal resolution, yielding more gaze data than can be calculated in real time.
The lab recording and the cloud have different timelines.
That is expected. In the above message, how did you determine that the two values were coming from the same gaze datum?
For example, in this picture I show different peaks; the time series are very close to each other but not identical.
Unless you align the timelines, you might not be comparing the same gaze points
See this guide to align the timelines https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html
Okey, so if the timelines are aligned the points of lab recorder and pupil cloud have exactly the same values ?
To clarify: LabRecorder can only record a subset of the gaze datums compared to Cloud. This subset should have the same gaze locations in LabRecorder and Cloud, yes.
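Once the timelines are aligned, one way to compare rows is to match each LabRecorder sample to the closest Cloud sample by timestamp; a sketch, assuming sorted timestamp lists on the same (already aligned!) clock:

```python
import bisect

def nearest_timestamp_indices(reference_ts, query_ts):
    # For each query timestamp, return the index of the closest
    # reference timestamp. Both lists must be sorted.
    out = []
    for t in query_ts:
        i = bisect.bisect_left(reference_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(reference_ts)]
        out.append(min(candidates, key=lambda j: abs(reference_ts[j] - t)))
    return out

matches = nearest_timestamp_indices([0, 10, 20, 30], [1, 9, 26])
```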
Just for curiosity Im asking
Okey thanks a lot!
In the terminal when i try to run the command : lsl_relay_time_alignment --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg --C:\Users\Nick\Desktop\Synchronize pupil recorder with lab recoder\2022-11-04_11-42-27-new , the terminal says : 'lsl_relay_time_alignment' is not recognized as an internal or external command, operable program or batch file.
I have installed the dependencies and the paths are correct to the xdf file and the raw data file with the events respectively
Please try to replace lsl_relay_time_alignment with python -m pupil_labs.invisible_lsl_relay.xdf_cloud_time_sync
C:\Users\Nick\anaconda3\python.exe: No module named pupil_labs.invisible_lsl_relay.xdf_cloud_time_sync
is there a difference in terminal Window Powershell and the cmd ?
I mean i should repeat the process in terminal? Because everything i did was in cmd.
One needs to differentiate between the UI and the shell. The UI provides the window and the possibility to type text. Once you hit enter, the command is interpreted by the underlying shell. cmd.exe and powershell.exe are different shell programs that differ slightly in their available commands and scripting syntax. In terms of executing a Python program, they are the same.
I really need to offer you a coffee sometime for all this help 😊 , sorry for the inconvenience😅
No worries, happy to help!
Error: Invalid value for 'PATH_TO_XDF': File 'C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg' is a directory.
This looks much better already. Do not pass -- and instead add the full file name, including the file extension, at the end.
If I put the -- before it, I get a different error, e.g. with --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg it says: Error: No such option: --C:\Users\Nick\Documents\CurrentStudy\sub-P001\ses-S001\eeg
If i put the xdf file in the desktop what should i put in the path ?
You can simply drag and drop the xdf file onto the terminal window to get the path
Ah, the time alignment tool assumes that the recording was made using an up-to-date version of the LSL relay, which adds metadata that is important for the time alignment. The error handling for these older recordings needs to be improved!
Still no time_alignment.json in the folder with the events, but I think we're getting close ;p
Okay, no worries, this recording was a test one. I should just repeat the recording process with the up-to-date LSL relay, which I just updated with the command python -m pip install -U pupil-invisible-lsl-relay[pupil_cloud_alignment]?
Correct!
Lovely! I'll do it, maybe on Thursday, so see you then if more errors occur! Have a nice evening!
To you, too!
So, I am sorry I'm late. I have been using the Dlib library, and it is giving me coordinates; it's detecting the pupil locations for the left and right eyes, respectively. I have attached a picture showing this. The coordinates being printed are the locations my pupils are currently at, and simultaneously, if you see the cursor, it is at 34.6, 40.3...
Here on the right!(black)
So I need to synchronize those two sets of coordinates, which you can see printed, (1115,633), (1024,809) for example. How can I do that? I am looking at mapping world 3D coordinates to image 2D references but can't figure it out. I mean, I can add the z-coordinate, which is the distance between the eye and the camera...
for example for the left eye:- it's going to be....
(981, 562, 100) (963, 584, 100) (968, 613, 100)
how shall i proceed?
Please help me....
Hi @user-8247cf! Can you confirm if you are using Pupil Invisible or Pupil Core? If I understand you correctly you are using Dlib to implement pupil detection and are now looking for a mapping from pupil coordinates to screen coordinates?
First, I'd recommend to use the existing gaze estimation algorithms of Pupil Invisible and Pupil Core respectively, which have lots of advantages over a direct mapping on pupil positions.
The recommended way for mapping a gaze signal to screen coordinates is to use one of the available gaze mapping algorithms, i.e. the Reference Image Mapper or Marker Mapper in Pupil Cloud, or the Surface Tracker plugin in Pupil Player. Using those you can track the screen and convert the gaze signal into pixel coordinates. From there you can convert them to stimulus coordinates by controlling how the stimulus is presented.
The approach you suggest, to calculate a direct mapping from raw gaze signal to target image coordinates works as well in theory, but is very prone to errors due to head movements. I.e. you could calculate a simple polynomial mapping from gaze/pupil position to the target coordinate space. But the validity of that mapping depends on the subject holding their head perfectly still.
I hope that clears things up a little bit!
Yes, I was using the Dlib library, not the Pupil code. Yes, correct, I was trying to map the coordinates that are being printed to the ones I showed. Can you please send the source code (specifically which one) that is doing this task? I want to use only a webcam; is it possible through a webcam only? Also, can you send the source code for the algorithm and the mapping from camera pupil coordinates to screen coordinates? I am grateful to you...
Hey, short questions regarding the single eye images and compression: 1) Are the single eye videos saved on the mobile device in RAW format or is there any compression applied? 2) The single eye images received by streaming from the mobile device are compressed by using H264, am I right?
The eye cameras provide MJPEG video, which is streamed via NDSI.
Also, is the code available for mapping, in Python?☺️
Thanks! I will get back to you. I was checking that GitHub, which one in that is specifically for that purpose?
But I can use the algorithm and try to deploy it, as we have the option of only web-cam (computer one), so that case what should be the solution for the same? Like what's the base of the algorithm?
The surface tracking algorithm's source code is located here: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/surface_tracker
I mean specifically the polynomial mapping?
Do you have any resources/code for that?
Thanks a lot for your help, @marc
The polynomial mapping would map pupil positions to screen coordinates. If you are using a webcam with custom pupil detection, you'd have to implement a custom calibration scheme as well. You can implement a polynomial regression using e.g. sklearn, but I can't point you to much more specific resources, as custom remote eye tracking like this is not really supported by our software.
@user-8247cf, a bit late to the party. But have you checked out webgazer: https://webgazer.cs.brown.edu/ ?
I thought that was python:(
I will check it out! Thanks!! I will get back to you soon. Wait, polynomial regression: how is this linked to the mapping? I am so confused now. I mean, we need the z-distance (we already have that), plus the focal length, and then?
Polynomial regression is the technical term for what I meant with polynomial mapping. The idea is to map 2D pupil positions to 2D screen coordinates. Z-Distance/Depth is not considered here.
See here for an intro on polynomial regression: https://towardsdatascience.com/polynomial-regression-with-scikit-learn-what-you-should-know-bed9d3296f2
In general, implementing remote eye tracking with a webcam is hard though. Using a simple polynomial mapping like this with custom pupil detection will have extremely poor robustness.
If you need a tool that actually works robustly, I'd recommend working off an existing tool like WebGazer. Webcam-based eye tracking works relatively poorly compared to head-mounted or proper remote eye trackers, even for the best implementations. If you google for webcam-based eye tracking, I am sure you'll find something in Python as well.
I mean, I know polynomial regression, but I was not sure how to use it in this respect!! Cool
I will check em out!
I did. I mean, can't I just convert (map) those coordinates to the screen ones? Will that not be fine?
That works, assuming the mapping is well calibrated (i.e. sufficient data to fit the mapping) and there is no head movement.
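For illustration only, here is a minimal sketch of the polynomial-mapping idea discussed above, using NumPy on synthetic data. A real calibration would need many well-distributed samples, and the caveats about head movement still apply:

```python
import numpy as np

def design_matrix(pupil_xy):
    # Quadratic polynomial features: [1, x, y, xy, x^2, y^2]
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_mapping(pupil_xy, screen_xy):
    # Least-squares fit of screen coordinates on pupil-position features
    coeffs, *_ = np.linalg.lstsq(design_matrix(pupil_xy), screen_xy, rcond=None)
    return coeffs

def apply_mapping(coeffs, pupil_xy):
    return design_matrix(pupil_xy) @ coeffs

# Synthetic "calibration" data: screen = 2 * pupil + 10
pupil = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
                  [2, 1], [1, 2], [3, 2], [2, 3]], dtype=float)
screen = 2 * pupil + 10
coeffs = fit_mapping(pupil, screen)
pred = apply_mapping(coeffs, np.array([[1.5, 1.5]]))  # close to (13, 13)
```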
The coordinates that the webcam is outputting, (1779,978), those are pupil coordinates, right?
Can you clarify which product you are referring to? webgazer?
The Dlib library.....
Please understand that we cannot provide technical support for any third party products, e.g. webgazer or dlib.
Please see the picture. I mean, I am just asking: the mapping in that scenario will be erroneous, right?
We are missing a lot of context here which would allow us to answer your questions accurately, e.g. how the data was recorded, etc. In addition, we do not have any experience with Dlib itself. I would like to ask you to consult the corresponding Dlib technical documentation.
Sure, let me figure out, I will get back to you..
Hi! I have a question regarding Pupil Cloud while using Pupil Invisible. We are using the glasses for an experiment on emotion recognition in certain environments, and we started having participants a few days ago. We had 5 recordings when I stopped using the glasses and the app that day (automatic transfer to the cloud is on). I checked multiple times and the recordings were saved. My lab director messaged me today saying that the app and the cloud show no recordings for the last 3 months (as if the app went back in time 3 months and deleted everything from then to now). She might have deleted them by accident or something else, but does anyone have an idea on how to recover those recordings? Is there a trash can where deleted data gets stored for a number of days before being permanently deleted?
Could you share the account email address with me via a private message?
Papr, so I mean, the data is not recorded; this is real-time, the user is just watching the screen, and it keeps printing on the screen. Just asking, how do I get the camera intrinsic parameters?
To clarify, are you using any of the head-mounted Pupil Labs eye trackers, or a webcam, to record eye movements?
I am just asking from an OpenCV/research perspective, nothing personal :) A webcam, specifically, to record the eye movements... I was looking at the mapping, and it mentions an intrinsic camera matrix. How do I determine that? Do you want to have a look at the link? I am so grateful that you're helping me.
I am asking because my answer depends on that. For example, for the Pupil Invisible glasses we offer an API to access prerecorded values. Since we did not ship the webcam, we don't know the values and you will need to measure them yourself. Check out the OpenCV documentation on how to do that.
I wish we had the money for a tracker :/ I am so sorry I am bothering y'all
Question: how do you measure the camera matrix? The matrix has fx and fy; those are already known, right?
This is the opencv documentation that I was referring to in my previous message https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
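For context, the intrinsic camera matrix K that such a calibration estimates has this structure; the numbers below are made-up placeholders, not values for any real camera:

```python
import numpy as np

# fx, fy: focal lengths in pixels; (cx, cy): principal point.
# These are ESTIMATED by calibration (e.g. cv2.calibrateCamera),
# not known a priori for an arbitrary webcam.
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Projecting a 3D point in camera coordinates to pixel coordinates
X = np.array([0.1, 0.05, 1.0])  # (x, y, z), z = depth along the optical axis
u, v, w = K @ X
pixel = (u / w, v / w)  # approximately (720.0, 400.0)
```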
oh cool, let me have a look.... thanks a lot, i will study that, and get back...
One thing, guys: the mapping process I am trying is not wrong, right? It just needs some initial constraints, like the z axis being constant and the user's head position assumed to be fixed, right? What I am trying to ask is: the mapping is valid, right?
Could you clarify what mapping process you are using?
We're running into glasses usability issues: the USB-C cord seems to be causing drag and limiting head rotation. Even worse, the weight/drag created by the USB cable seems to be inadvertently lifting the glasses off the faces of testers, jostling the glasses and causing disruptions in recording. Our tests are being conducted in transit stations, so we inherently have a decent amount of head rotation & movement. Have you guys run into similar issues? How can I reduce glasses jostling?
Have you tried using a cable clip to fix the cable, e.g. on the subject's jacket, such that there is leeway for the cable?
Hi @user-cd03b7 👋. Good to hear from you again, and thanks for the feedback. Please reach out to [email removed] mentioning the issue and my name 🙂
note - we're using Invisible glasses ^
We tried that, it helps a bit but cable weight/lack of cable malleability is still present
In our experience it matters a lot how you route the cable along the subject's body, to avoid the cable bending over itself and thereby putting force on the glasses. But I imagine you have played around with that already. We do offer a head strap that might help fixate the glasses on the head better. Have you considered that? The Pupil Invisible glasses transmit a lot of data through the cable, and most third-party USB-C cables will not be able to carry that load robustly. However, if you had access to a softer high-quality cable, it could be worth trying it out. The issue you would see with insufficient cables is intermittent disconnects in the recording.
Hi, has anyone ever experienced a cloud upload from Pupil Invisible taking too long?
For instance, I have a 13 min recording, but it has been uploading since noon. Still unfinished.
Hi @user-87b26c! Depending on the overall load sometimes there is waiting time. If you could let me know the recording ID of one of the affected recordings, I can double check if there is a problem!
How many hours have the recordings been processing? "Since noon" depends a lot on your timezone 🙂
Hi Marc, here it is. 6ece8c62-b479-42ab-af2f-8fbf84f747b8
I would say about 6 hours, still unfinished. Noon time in Bangkok, GMT+7.
Thanks! That is indeed much longer than expected! I'll check what is going on.
In the past, it was way faster than this. Thanks
I see the status of this recording is indeed still "uploading" rather than "processing". The load on our servers is currently not unusually high, but the upload speed for those recordings is very slow. The most likely explanation would simply be that the available internet connection for the phones is weak. Could you confirm that the phones are correctly connected to the internet and maybe execute a speed test?
We uploaded it via wi-fi at our office. the speed is fast.
Is it about the OS? I still have a OnePlus 6.
I never updated it according to the instructions.
No, the OS should not be an issue. Are the phones still located in the office with access to the fast connection in theory?
Could being outside the wifi range or something similar be an issue?
Executing a speed test would be useful to rule out this issue.
It was fast when we started the upload. Office internet is really fast through a leased optic line. That should not be the issue.
We waited for 2 hours and it had uploaded less than 20%. No internet issue, as we are doing streaming as well. We have plenty of bandwidth.
Okay, we'll think about other explanations.
@user-87b26c when you have access to the phones again, please try the following: - From the recordings list in the app, via the context menu of the affected recordings, pause the upload and then start it again and see if that helps. - Confirm that you can access https://api.cloud.pupil-labs.com/ from your office's internet connection. This is the server the app ultimately communicates with for uploading. When opening this in a browser you should see a page with API documentation. - To be safe, do a speed test via e.g. https://fast.com/. While I do not doubt your internet connection is fast, WiFi networks can often be quirky, so I want to rule this out for sure.
Hi Marc. Thanks. I will check this tomorrow.
Hi! We are working with two Invisible glasses and we have two questions: 1. Can we increase the audio gain on each pair of glasses? The volume that we get in the video is low. 2. We have a bit of an offset in the gaze with one of the glasses; do you have any idea how to solve it? Thank you!!
Hi, the app stores the audio as it receives it from the operating system. That said, are both participants low in volume in both recordings? Regarding the offset, the app has a built-in offset correction. You should be able to access it through the wearer profile.
@user-87b26c Thank you! Does the problem persist after pausing and restarting the upload?
@marc the problem still exists. We made 2 recordings today and they are still uploading slowly.
If we can't get it fixed soon we'll come up with an alternative approach for getting this up into cloud so you don't get delayed too much by this! Please also execute the test here from the phone: https://speedtest.cloud.pupil-labs.com/
Just here to say - kudos on the naming convention update for the cloud downloads guys. As an early adopter of invisible cloud, this is very very much appreciated. Makes such an improvement to our pipeline. 🤓 ... So yeah, thanks for all the great work you all are doing over there @ Pupil Labs.
Hi Marc, thanks for the prompt response. I tried it with the orange charging cable, but it didn't work either. I will try to update everything in the next few days and try once again. I would like to charge the phone while recording. I haven't ever tried the ethernet connection.
I just got the info that the problem is the old version of Android. So unfortunately this will only work when using a OnePlus 8 device, sorry! 😦
ok no problem! Thanks for the info!
Hi, we have just set up the system and are trying to understand if it's accurate enough to be used for text reading. Can we map the fixations and saccades onto a page of text or not? Can the red circle be shrunk, as it currently encompasses several words at a time?
Hi @user-ad947a! For classical reading studies the accuracy requirements are usually exceptionally high and Pupil Invisible will most likely not suffice! To map fixations onto a page you can use either the Reference Image Mapper or the Marker Mapper. See the documentation here: https://docs.pupil-labs.com/invisible/explainers/enrichments/reference-image-mapper/ https://docs.pupil-labs.com/invisible/explainers/enrichments/marker-mapper/
The red circle can currently not be shrunk within the Pupil Cloud UI directly. Using the Gaze Overlay enrichment you could export snippets of recordings with a smaller gaze circle, but this would of course require some extra steps. See here: https://docs.pupil-labs.com/invisible/explainers/enrichments/gaze-overlay.html
Hi Marc, thank you for your response. Have a great day!
@everyone We want your feedback!! 😀 We are looking for feature requests and opinions from the community to shape our development roadmap for Pupil Cloud and Pupil Invisible! Help us to focus on what you need most. Any idea is welcome!
For this purpose, we have set up a platform for you to submit feature requests and vote on requests submitted by others. You can log in using your existing Pupil Cloud accounts!
Hello, I'm setting up the Pupil Invisible, does it work with Pupil Capture?
Hi @user-cb1972! No, Pupil Invisible requires the Companion Device to run. That said, you can trigger start/stop and event annotations from a web browser using the monitor app https://docs.pupil-labs.com/invisible/getting-started/understand-the-ecosystem/#pupil-invisible-monitor You can also load Pupil Invisible recordings into Pupil Player.
Hi there! I am discussing a research study that used Pupil Labs CORE glasses. This is my students' first conversation about eye tracking details in my psychology of music class. I am looking for a post-hoc video that might illustrate fixations, saccades, etc. There are several videos on the Pupil Labs channel of post-hoc analysis and rendering, but there was no audio in the videos. Thanks in advance!
I downloaded the demo videos from this link, https://pupil-labs.com/products/core/tech-specs/, but I cannot get them to load in any video apps that I have. Any help is appreciated!
Hi, what is the accuracy of the Pupil Invisible eye tracker?
Hi @user-47f6b0! You can find a detailed evaluation of accuracy in our white paper here: https://arxiv.org/pdf/2009.00508.pdf
Regarding blink recovery time, is this about how long it takes for the gaze samples to yield high-quality data again after a blink event? This was never formally evaluated, but Pupil Invisible's gaze estimates are independent for every pair of eye images, so it should be instant as soon as the eye is open again. Or rather gaze estimates should reach high quality again already during the opening motion of the eye lids.
I'm also wondering, what are the blink recovery time and gaze recovery?
So sorry, the blink and gaze recovery are not the parameters that I'm interested in.
Thank you for the link!
Hello, we are using the Pupil together with an EEG headset (smarting). This means that occasionally conductive gel (from the mastoid electrodes) comes in contact with the arms of the Pupil Invisible. Does that pose any risk for the electronics? We are wiping with a dry cloth.
Hi @user-94f03a! As long as the gel stays clear from the USB socket at the tip of the arm and clear from the eye cameras this should not be a problem.
@user-94f03a you can try placing some insulating tape (electrical tape) around the USB connector - cable at the tip of the glasses to better insulate the connector from the gel, albeit the glasses may become less comfortable to wear
thanks! will do that
Hi, how do I convert a Pupil timestamp to real time? The first timestamp in pupil.csv is 224215.9444. I tried to convert it to real time, but the converted year is 1970, which is weird; we did the experiment on Sep 1, 2022.
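Pupil Core timestamps count seconds from an arbitrary clock start ("Pupil epoch"), not from the Unix epoch, which is why a naive conversion lands in 1970. A common approach is to shift them using the two start times stored in the recording's info.player.json; the field names and example values below are assumptions for illustration:

```python
def pupil_to_unix_s(pupil_ts_s, start_time_system_s, start_time_synced_s):
    # start_time_system_s: recording start as Unix time (seconds)
    # start_time_synced_s: recording start on the Pupil clock (seconds)
    # Shifting by their difference moves a Pupil timestamp onto the Unix clock
    return pupil_ts_s + (start_time_system_s - start_time_synced_s)

# Hypothetical values: Pupil clock start 224000.0 s, recording began ~Sep 2022
unix_ts = pupil_to_unix_s(224215.9444, 1_662_000_000.0, 224_000.0)
```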
Thank you! I will read that.
@papr Could you please recommend some references to preprocess the pupil, blink and gaze data? Thanks!
Hi, just for clarification, are you working with Pupil Invisible or Pupil Core data?
I don't know. What is the difference? I work with pupil.csv, gaze.csv and blink.csv data.
The products use two different data formats. Does your recording folder have an info.json file? Or alternatively, an info.player.json file?
Oh, I think it is Pupil Core data, because we use a Pupil Core device.
Then let's move the discussion to core
Ok, thank you!
@papr does pupil invisible also output the cognitive workload?
Not directly. You can estimate cognitive load through various metrics. One of them would be blink rate, which can be calculated based on Pupil Cloud's blink detection results.
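As a sketch, the blink rate metric mentioned here could be computed from the exported blink timestamps like this (reading the columns out of blinks.csv is left out):

```python
def blink_rate_per_minute(blink_start_ts_ns, recording_duration_s):
    # Blinks per minute over the whole recording; blink_start_ts_ns is
    # the list of blink start timestamps, recording_duration_s the
    # recording length in seconds
    return len(blink_start_ts_ns) / (recording_duration_s / 60.0)

# 12 blinks in a 60-second recording -> 12 blinks per minute
rate = blink_rate_per_minute(list(range(12)), 60.0)
```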
Hi, I have a problem with the raw data export on pupil_cloud.
I recorded a video with the invisible glasses and marked various live events using the web tool provided (pi.local:8080). The recording was uploaded on the pupil_cloud platform but when I try to create and download a raw data export enrichment, the folder created contains only the info.json file and not the .csv files.
The other enrichments don't seem to have this problem.
Would you have a solution to this problem?
Hi I have a problem with the raw data
Hi. Is it possible to stream eye images captured by Invisible using Realtime Python API or Network API?
Hi! Streaming eye video is currently possible only via the legacy API: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/legacy-api.html#legacy-api [FIXED LINK]
Can I have access to this API? I need to provide a username and password.
Please use this link instead https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/legacy-api.html#legacy-api
Actually, I need to detect blinks in real-time. Is it feasible in any other way?
There will be some delay (specific to your network connection). I suggest you give it a try and see if the transport is fast enough for your use case.
Thank you
Hi! I'm trying to re-load the "head_pose_tracker_model.csv" for my second recording so that I don't need to repeat the 3D-model build-up process. I found the .csv file, but I'm now trying to figure out where I can apply/upload my template model in Pupil Player for other recordings. Thanks!
Hi, the csv file is just the export, not the model. If I remember correctly, it should be either a msgpack or a pldata file.
Hi all! We want to use the Pupil Invisible on a completely mobile setup (i.e. no laptop/tablet) together with smarting EEG (which also supports LSL). Is this possible as far as you know? For instance, the smarting mobile app can receive events in UDP is there anything supported by the Pupil Companion? Thanks
Hi @user-94f03a! Pupil Invisible will always need to be connected to a Companion phone, but that is of course mobile. Pupil Invisible is compatible with LSL, but you'd need to run the LSL Relay for it somewhere: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/
Using the real-time API you can send events to the phone as well: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/ https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/track-your-experiment-progress-using-events/
Hi @marc thanks for this
Yes we are using the relay on a desktop setup.
Was wondering if there are any options to do this fully mobile, meaning without a laptop (assuming the relay runs there).
Hi, the relay is written in Python and does not run by itself on Android. That said, it should be possible to reimplement it in a mobile app using https://github.com/labstreaminglayer/liblsl-Android
i suppose an option is to use a laptop to send a single 'start' marker to both streams (pupil, and EEG) and then do the synchronisation offline?
That is partially what the relay does for you (in addition to streaming gaze data via LSL). https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html
Thanks @papr for the suggestion. I don't think we can write this for android ourselves at the moment. We will have to find another workaround
Hey, under https://docs.pupil-labs.com/invisible/explainers/data-streams/#fixations you mention that your algorithms for saccades, fixations and blinks will be open-source soon and that there will be a white paper as well. Is there a roadmap for that? Or a way to get a preview? The work might be relevant for my research project.
Hi @user-06f807! Yeah, we are a bit overdue on that! The paper on the fixation detector will be peer-reviewed and that is taking some time. We might be able to give you a confidential pre-print though. I'll DM you regarding that!
The issue with the blink detection algorithm is mostly a lack of documentation and some reliance on internal tools in the implementation that we need to get rid of. There will be no white paper to accompany it, but the algorithm is also very simple. We need to find the resources to get this done!
There is no explicit saccade detection algorithm. In some scenarios one could interpret the absence of a fixation as a saccade though!
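Interpreting the gaps between consecutive fixations as candidate saccades can be sketched like this (a rough proxy only, since gaps can also contain blinks or tracking loss):

```python
def gaps_between_fixations(fixations):
    """Return (start, end) intervals between consecutive fixations.

    `fixations` is a list of (start_ts, end_ts) tuples sorted by start time,
    e.g. parsed from a fixations.csv export. Each returned gap is a
    candidate saccade interval.
    """
    return [
        (prev_end, next_start)
        for (_, prev_end), (next_start, _) in zip(fixations, fixations[1:])
        if next_start > prev_end
    ]
```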
Okay, thanks for your quick response! If you're allowed to hand out the pre-print, it would be highly appreciated
Hello, I want to use the Pupil Invisible Monitor app, but this website (http://pi.local:8080/) cannot be opened. Is there any other way to use this function? I'm from China.
Hi @user-4bc389. Here are a few things to check:
1. Ensure the companion device is connected to the same network as your computer. Note that this does not require an internet connection
2. Enable HTTP Streaming in the companion app
3. Try with pi.local:8080 first, and if that doesn't work, enter the address shown in the HTTP Streaming section of the app
If the streaming isn't working, it might be related to the network firewall. For example, institutional networks can have firewalls that block the connection. In such a case, the easiest thing would be to set up a local hotspot on your computer and connect the phone to it, or use your own dedicated router
Hi! Is the web app at pi.local open source? I was wondering how I could stream multiple cameras at the same time on the same web page, using another backend such as Flask.
Hi, the pi.local webpage is just one client for the Realtime Network API. You can write your own client. See the under-the-hood description of the API here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html
The pi.local client code is not open source at the moment.
File "./multi_stream_venv/lib/python3.9/site-packages/pupil_apriltags/bindings.py", line 468, in detect assert len(img.shape) == 2
Hi when I'm trying to stream on the webpage I face this error. can you explain to me what is happening?
The detector expects a grayscale image. You are likely passing an RGB/BGR color image
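The fix is to collapse the frame to two dimensions before calling `detect()`. With OpenCV that is typically `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)`; the equivalent NumPy-only sketch:

```python
import numpy as np

def to_gray(img_bgr):
    """Collapse an (H, W, 3) BGR frame to the 2-D gray image that
    pupil_apriltags' detect() asserts on (len(img.shape) == 2).

    Uses standard BGR luminance weights; in practice
    cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) does the same job.
    """
    weights = np.array([0.114, 0.587, 0.299])  # B, G, R
    return (img_bgr.astype(np.float32) @ weights).astype(np.uint8)
```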
Hi guys 🙂 I am using the gaze angles from the API and it works really well, thanks for that! I was just wondering if I can trust that the zero is positioned at a 90° angle from the subject? In other words, I am comparing the max angles between subjects and would like to be sure that I am allowed to do that.
Note, the gaze angle is relative to the scene camera, not the subject. I.e. For some subjects zero might be pointing further down than for others. You could select an offset for each subject though to align all of them into one common coordinate system.
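Such a per-subject offset correction could be sketched as follows, assuming you collect a short calibration segment in which each subject fixates a known straight-ahead target (all names here are illustrative):

```python
def subject_zero(azimuths, elevations):
    """Estimate a subject's straight-ahead reference (in degrees) as the
    mean gaze angle over samples recorded while they fixate a known
    straight-ahead target."""
    return sum(azimuths) / len(azimuths), sum(elevations) / len(elevations)

def normalize(az, el, zero_az, zero_el):
    """Shift a gaze sample so the subject's straight-ahead maps to (0, 0),
    putting all subjects into one common coordinate system."""
    return az - zero_az, el - zero_el
```

After normalizing each subject with their own zero, maximum angles become comparable across subjects.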
Hey all, quick question - I need to generate heatmaps but I'm worried I don't have enough testers lined up. How many testers have you guys used to build effective heatmaps? Is there a Pupil proprietary number? Academia doesn't have consistent testing numbers, online resources say 39 testers is optimal for heatmap construction (but with no citations), and I don't see anything definitive in this chat
Hi @user-cd03b7! Heatmaps are a powerful qualitative visualisation tool: the bigger the sample you aggregate, the more representative of your population it is. As with any behavioural data, sample sizes can be difficult; small samples may trick you, and you may get type II errors. If you want quantitative data, I recommend splitting your image into areas of interest and computing hit rate (whether participants looked at an area), dwell time (how long they stared at it) and time to first contact (when they first noticed it). Check out this tutorial on how you can obtain such metrics: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/
Once you have these metrics from pilot data, you can use them to estimate the required sample size more easily, e.g. with https://www.calculator.net/sample-size-calculator.html
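The three AOI metrics mentioned above can be sketched for a single rectangular AOI like this (field names are illustrative; a real fixations export uses different column names):

```python
def aoi_metrics(fixations, aoi):
    """Hit rate indicator, dwell time, and time to first contact for one AOI.

    fixations: list of dicts with keys x, y (pixels in the reference image),
               start_ts and duration (seconds).
    aoi: (x_min, y_min, x_max, y_max) rectangle in the same pixel space.
    """
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1]
    return {
        "hit": bool(hits),                                   # did they look at it?
        "dwell_time": sum(f["duration"] for f in hits),      # total time on the AOI
        "time_to_first_contact":                             # when they noticed it
            min(f["start_ts"] for f in hits) if hits else None,
    }
```

Averaging `hit` across participants gives the AOI's hit rate; the per-participant dwell times and first-contact times feed directly into the sample-size estimate.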
Hi, we would like to record audio with the glasses (to be able to use it locally), but not upload the audio to the cloud, for data protection issues. Is there any way to achieve this? Or delete the audio track later within pupil cloud?
As a temporary workaround, you could record the audio with an external microphone along with time-sync events using LSL, and sync both post hoc to the Invisible recording. This would require some programming, though.
Hi @user-a50a12! That is currently unfortunately not possible. The audio track is a part of the scene video file that is uploaded to Cloud and neither the app nor the cloud platform is currently able to remove the audio.
This is an interesting issue! I took the liberty of adding a feature request on feedback.pupil-labs.com for this and will discuss with the team what we could do! https://feedback.pupil-labs.com/pupil-cloud/p/block-audio-data-from-being-stored-in-pupil-cloud