Given the different lengths of pupil_positions.csv files, even for recordings of the same design, how do you propose to average multiple recordings from different sessions and participants? (mainly focusing on pupil diameter)
Kret et al. propose resampling the data at a specific sampling frequency. This makes it possible to compare pupil size at different timestamps across subjects and conditions.
Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336–1342. https://doi.org/10.3758/s13428-018-1075-y
Is there a way to automate loading recordings on Player and exporting them?
No, unfortunately not. But there are some community scripts that provide that function (with limitations) https://github.com/pupil-labs/pupil-community
In https://peerj.com/articles/7086/, I have come across this sentence:
Note that this step does not eliminate the inherent delay of 10 ms of the Pupil Labs' cameras (personal communication with Pupil Labs).
Is it the single trip to Capture that I already know about?
No, they are talking about the time between image exposure and transfer from camera to computer, if I am not mistaken.
Reposting in case anyone has some advice to provide. Thanks in advance! We recently bought a Pupil Invisible device. I have two questions regarding the LSL plugin. 1. The GitHub repo says that the coordinates are normalized to [0, 1], however the recordings contain much larger values (e.g. 900 or so). Why does this happen? 2. Is there any way to also push the video through LSL?
So yes, it is in pixels. https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_invisible_lsl_relay/pi_gaze_relay.py#L64 Could you please point me to the documentation saying it is in normalized coordinates, such that I can correct it?
Apologies for the delayed response. 1) It looks like this data is in pixels then. I would have to check. 2) Technically possible, but not implemented in our plugin.
I have been trying to use https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.resample.html to resample my pupil diameter data, but apparently the data structure is not compatible with how the function works. Any suggestions for resampling?
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")
df.set_index("datetime)".resample("5ms")
Hello, I'm having a bit of trouble making my plugins work on the precompiled version of Pupil Labs Core.
Firstly, my plugins require some external Python packages that Pupil Labs Core doesn't include natively (namely, PyTorch). Running the .exe doesn't seem to use any Python version on my system, so I'm wondering if there's a way to get packages installed where it is able to reach them when initializing plugins.
Hi, the bundle ships with its own Python distribution. Any dependencies need to be placed within the plugins directory. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Make sure to install the dependencies for the corresponding bundled Python versions: macOS bundle: Python 3.9.1; Windows bundle: Python 3.6; Linux bundle: Python 3.6.
Secondly, my plugins (which are custom pupil detector plugins) override the default gl_display function of the PupilDetectorPlugin class in order to change the color of the 2d pupil ellipse on the eye window display. To do this, I'm importing the draw_pupil_outline function from within visualizer_2d. However, on the precompiled version of Pupil Labs, visualizer_2d doesn't seem to exist as a file to import from. Is there presently a way to get access to it?
It should be available using this import:
from pupil_detector_plugins.visualizer_2d import draw_pupil_outline
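For reference, here is a rough sketch of the pattern you describe. Treat it only as a sketch: the draw_pupil_outline signature and the attribute holding the last detection result are assumptions that should be checked against the bundled source of your Pupil version.

from pupil_detector_plugins.visualizer_2d import draw_pupil_outline

def display_custom_outline(plugin):
    # Call this from your detector plugin's gl_display() override to recolor the 2D outline.
    result = getattr(plugin, "_recent_detection_result", None)  # assumed attribute name
    if result is not None:
        draw_pupil_outline(result, color_rgb=(0.0, 0.8, 1.0))  # assumed signature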
hi, I am making a plugin to broadcast the fixation on surface using LSL. The code I wrote to access the fixation on surface is as follows:
events["surfaces"][0]["fixations_on_surfaces"]
Since the fixation is detected after the gaze position is nearly fixed for 300 ms, I am wondering whether the moment when the line above returns a value is actually 300 ms after the beginning of a fixation? Thanks.
Roughly, yes, but I would not make any assumptions about that. Instead, I recommend using the data's timestamp field, which corresponds to the start of the fixation. If you are using the existing LSL Relay plugin, this timestamp is already generated by the local LSL clock and can be used in StreamOutlet.push_sample() to indicate the time of creation.
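For illustration, a minimal sketch of pushing surface-mapped fixations with their own timestamps. The stream name and parameters are placeholders, and it assumes the fixation datums carry norm_pos and timestamp fields; if you are not using the LSL Relay plugin's clock, you would additionally need to convert Pupil time to LSL time.

from pylsl import StreamInfo, StreamOutlet

# Placeholder stream: two float channels (x, y on the surface), irregular sampling rate.
info = StreamInfo("fixations_on_surface", "Gaze", 2, 0, "float32", "fixation_surface_demo")
outlet = StreamOutlet(info)

def publish_fixations(events):
    # Call this from the plugin's recent_events() callback.
    for surface in events.get("surfaces", []):
        for fixation in surface["fixations_on_surfaces"]:
            x, y = fixation["norm_pos"]
            # The timestamp marks the start of the fixation (see above).
            outlet.push_sample([x, y], fixation["timestamp"])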
I do this for resampling:
pupil_data_in_trial_eye0['datetime'] = pd.to_datetime(pupil_data_in_trial_eye0['pupil_timestamp'], unit = 's')
pupil_data_in_trial_eye0_resampled = pupil_data_in_trial_eye0.set_index('datetime').resample('5ms').mean()
pupil_data_in_trial_eye0_interpolated = pupil_data_in_trial_eye0_resampled.interpolate(method='linear', limit_direction='forward', axis=1)
and for some reason, from the 1st line to the 2nd, my 'trial' column disappears. Any idea why?
Since you resampled at fixed intervals, method="linear" works, but given a datetime index I would always recommend using method="time" instead.
Calling mean() will set rows with no data to NaN, causing the interpolation to skip (I think) this portion.
I think .resample("5ms").interpolate(method="time") should work. You might want to add an additional bfill() to fill NaN values at the beginning? That is application dependent though.
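Putting that together, a minimal sketch (the diameter_3d column name is an assumption; whether bfill() is appropriate depends on your application):

import pandas as pd

pupil_data_in_trial_eye0["datetime"] = pd.to_datetime(
    pupil_data_in_trial_eye0["pupil_timestamp"], unit="s"
)
resampled = (
    pupil_data_in_trial_eye0.set_index("datetime")["diameter_3d"]  # column name assumed
    .resample("5ms")
    .mean()                      # empty 5 ms bins become NaN ...
    .interpolate(method="time")  # ... and are filled based on the datetime index
    .bfill()                     # optional: fill any leading NaN values
)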
I did the resampling without .mean() and the interpolation without any arguments, and the 'trial' column stays, although no data is passed through.
Thank you, this has been working great! However, I'm having an issue with a dependency I'm attempting to install, specifically torchvision. torchvision uses the import from PIL import ImageOps, which doesn't seem to be in the native installation of PIL that comes bundled with Pupil Labs. Is it possible that the version in use is out of date, or is it the original PIL module rather than the more up-to-date Pillow module that shares the same import name?
You can check the bundled version with from PIL import __version__ in your plugin. Please let me know if the package is indeed out of date and which version number you would require. A temporary workaround could be to install an older version of torchvision.
I did try installing the updated Pillow module like I have the rest of the dependencies, in the plugins/ folder as you suggested. However, since it shares the same import name (PIL) as the one Pupil Labs comes bundled with, as well as many of the same functions, it doesn't seem to work correctly.
The version of PIL being used in Pupil Labs is 5.2.0; the most up-to-date version for Python 3.6 is 8.2.0.
You have a working setup to run from source, correct?
Yes, I do
ok, I wanted to ask you if you could test those features requiring PIL with the up-to-date version, but this won't be necessary. The only reference to PIL I can see is PyAV's to_image() function, which we do not use. I might be able to remove PIL altogether from the bundle. This way you can install whichever version you like for your plugin.
Right now I'm trying to get my plugins working on the prebuilt version
I've already confirmed that they work when running from source
(fortunately!)
Sounds good, if it's that simple!
I hope so. We will see when it comes to testing the next release bundle.
@papr I read it here https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture maybe I am mistaken and it does refer to the LSL plugin we are using
Ah, this part of the docs only applies to the Pupil Capture LSL plugin and does not apply to Pupil Invisible.
I see, thanks for the info and sorry for the misinformation!
No worries!
Hi, I have written a Python script that uses pupil_remote to start and stop recordings and Remote Annotations to save the timing of events. I now need to modify the script so that I can check the confidence of the 3d models in real time, to make sure it is sufficiently high before starting the experiment (and to pause the experiment if it drops below threshold). However, I am unsure how to best integrate the code for connecting to the IPC Backbone with the code I am already using to connect to pupil_remote and Remote Annotations.
The code I used to set up the connection with pupil_remote and Remote Annotations is:
ctx = zmq.Context()
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = zmq.Socket(ctx, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))
time_fn = time.time
pupil_remote.send_string("T {}".format(time_fn()))
print(pupil_remote.recv_string())
time.sleep(1)
Your developers page says to use the code below to connect to the IPC Backbone:
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
ip = 'localhost'
port = 50020
pupil_remote.connect(f'tcp://{ip}:{port}')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()
What is the best way to integrate this code with my existing code, given that some of the lines for the IPC Backbone appear to override some of the lines for pupil_remote? E.g. will pupil_remote = zmq.Socket(ctx, zmq.REQ) and pupil_remote = ctx.socket(zmq.REQ) conflict with each other?
Please be aware that this is no longer the best practice regarding time sync. I will try to update that code next week
Those lines are redundant. You can choose one and use it for both.
Interesting. So out of interest, what is the difference between pupil_remote = zmq.Socket(ctx, zmq.REQ) and pupil_remote = ctx.socket(zmq.REQ)? Why do they differ if they are redundant? The same for pupil_remote.connect('tcp://127.0.0.1:50020') vs pupil_remote.connect(f'tcp://{ip}:{port}')? I am pretty new to Python, so please forgive me if I am missing something obvious.
I assume you are referring to
time_fn = time.time
pupil_remote.send_string("T {}".format(time_fn()))
print(pupil_remote.recv_string())
?
This is good to know. Thanks! Can I please ask why or will the reason be explained in the update?
Correct. It is about which clock is being synchronized. T changes the Pupil clock, and there are just too many hidden side effects that users have run into in the past. The new way requires more code but is more explicit about how the time sync works, hopefully causing fewer issues.
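As a preview, a minimal sketch of the more explicit approach, assuming only that Pupil Remote answers a lowercase "t" request with its current Pupil time (the updated reference code may differ in detail):

import time

def request_pupil_time(pupil_remote):
    # Ask Capture for its current time instead of resetting its clock with "T".
    pupil_remote.send_string("t")
    return float(pupil_remote.recv_string())

local_clock = time.perf_counter
# Offset between the local clock and Pupil time; re-measure occasionally to follow drift.
clock_offset = request_pupil_time(pupil_remote) - local_clock()

def local_to_pupil_time(local_timestamp):
    return local_timestamp + clock_offset

# Example: timestamp an annotation in Pupil time without touching Pupil's clock.
annotation_timestamp = local_to_pupil_time(local_clock())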
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote = ctx.socket(zmq.REQ)
The zmq api just provides two ways of creating this socket. The scripts were written by different people, each preferring a different call. The result is the same.
'tcp://127.0.0.1:50020' # fixed string
f'tcp://{ip}:{port}' # formatted string that inserts the values of the ip and port variables into the string
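To make that concrete, here is a sketch that merges both snippets into one script: a single REQ socket requests both ports, the PUB socket is kept for remote annotations, and a SUB socket subscribes to pupil data so fields such as confidence can be inspected in real time.

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)  # equivalent to zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

pub_socket = ctx.socket(zmq.PUB)  # for remote annotations, as before
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")

sub_socket = ctx.socket(zmq.SUB)  # for receiving pupil data
sub_socket.connect(f"tcp://127.0.0.1:{sub_port}")
sub_socket.subscribe("pupil.")

topic, payload = sub_socket.recv_multipart()[:2]
datum = msgpack.unpackb(payload)
print(topic, datum["confidence"])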
Cool. Thanks for the explanation. Good to know. Hopefully I'll be able to add the update next week before I start data collection.
Maybe, if @user-98789c has time, she could help you in the meantime. We implemented and validated the new approach in Matlab for her experiment. The code logic should be the same for Python.
sure, I'd be happy to, @user-a09f5d please send me a private message and we'll take it from there.
Thank you!
Instead of toc, you can use time.perf_counter() as a clock (it does not require a tic).
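For example, with nothing Pupil-specific involved:

import time

start = time.perf_counter()  # read the monotonic clock; no explicit "tic" needed
# ... run a block of the experiment ...
elapsed = time.perf_counter() - start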
Wonderful! Thank you! @user-98789c I'll drop you a private message now. Thanks to you too @papr. As always you have been a really big help.
@papr I'm using the network API to receive the frames and I'm performing object detection on them. For some reason, the frames are stacking up if the object detection is slow. How can I prevent the network API from stacking up the frames?
Please see this issue for reference https://github.com/pupil-labs/pupil/issues/2116#issuecomment-803621800
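One common pattern, sketched here under the assumption that each frame message carries the msgpack payload as its second part and the raw image bytes after that (the linked issue may recommend a different variant):

import zmq
import msgpack

def latest_frame(sub_socket):
    # Drain everything queued on the SUB socket and keep only the newest frame,
    # so slow object detection cannot let messages pile up.
    newest = None
    while True:
        try:
            parts = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            return newest
        payload = msgpack.unpackb(parts[1])
        payload["__raw_data__"] = parts[2:]  # raw image bytes follow the msgpack payload
        newest = payload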
Hi again @papr, I have been able to get my script to print out all pupil. messages. As I have mentioned, I am interested in extracting the model confidence for each eye so that I can check the quality of the model in real time. However, when I look at the printed messages the value for model_confidence is always 1.0. I found a line in the "Pupil Datum Format" section that says that model_confidence is fixed.
### 3D model data
# Fixed to 1.0 in pye3d v0.0.4.
'model_confidence': 1.0,
Does this mean I can't extract the model confidence in real time?
Currently, the pye3d module does not have a way to calculate a meaningful model confidence. Not in real time, not post-hoc.
The closest to that value is the RMS residual https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/base.py#L186-L204 But its range is [0, inf[ and, while there are ways to map that range to [0, 1], it is not clear how to do that best.
The rms value is currently not exposed in real time
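Purely for illustration, one possible (by no means canonical) way to squash a non-negative residual into [0, 1]:

import math

def residual_to_confidence(rms_residual, scale=1.0):
    # exp(-x) maps a residual of 0 to 1.0 and large residuals toward 0.0; the scale is arbitrary.
    return math.exp(-rms_residual / scale)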
So where does the value for model confidence come from in the exported pupil data? Or did you mean to say that it can only be calculated post-hoc?
It is fixed at 1.0, as in: it is 1.0 by default.
Mine is not fixed at one.
Ah, then you are using the legacy 3d detector, that was shipped prior to v3.0
Aww, okay! Forgive my confusion. I did not realize that this had changed in the update given our conversation the other day (https://discord.com/channels/285728493612957698/285728493612957698/827246333827481620)
So to clarify. You can access the model confidence in real time in v2.x. In v3.x it is fixed to 1.0. Apologies for not differentiating this earlier.
So moving forward with the latest update, is the best course of action to make sure that subjects roll their eyes often during the experiment, to ensure the 3d model is as accurate as possible?
correct
Thanks for that. Since I am interested in the raw pupil position values (circle_3d_normal_x,y,z) and have had a problem with low model confidence values in the past that would have affected the measurement had they not been removed, is it best for me to use v2 instead of v3 in order to check the model's fit? Or is the modeling better in v3, meaning model_confidence is less important?
I think inspecting the visualization is sufficient to make sure the model is well fit. Fitting is also much quicker in v3.
While inspecting the visualization of the model for each eye in v3 of capture I notice that the green circle continuously changes shape and/or size every few seconds. Does this mean that the model is continuously changing/updating? It appears to happen more frequently than it did in v2.
That is correct.
@papr could you suggest some papers that do temporal and spatial analysis of gaze data using Pupil Labs eye trackers? Thank you!
@nmt You might have a better overview of the gaze analysis literature than me. Maybe you could even link some of your own research?
Hi @user-c09cdd. In my recent paper: https://doi.org/10.1016/j.humov.2021.102774 we investigated the temporal relationship between gaze and stepping actions as people negotiated stairs. We also looked at the spatial dimension of how far people looked ahead in relation to where they stepped. With that said, temporal and spatial analysis are broad terms encompassing many kinds of research. Perhaps if you can be more specific I can think of some more pertinent research papers.
Thank you! I'm more interested in cognitive load analysis using spatio-temporal gaze patterns, or any other better approach if possible.
As you have read on https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales, there are three different eye models in play: short term, long term and ultra long term.
If you open the 3d detector debug window, you can see the outline of all three, see https://discord.com/channels/285728493612957698/285728493612957698/829631637012480071
The short-term model is updated on every frame.
Real-time mode: Since updating the long-term model usually takes longer than a few frames, the model is not updated on every frame. Instead, a new update is started once the previous one is finished. I.e. the update rate depends on your CPU power. Better CPU, more updates.
Post-hoc mode: In Player, the model is updated once per "recorded second", i.e. after seeing one second worth of pupil data. This makes the post-hoc detection process reproducible.
When you freeze the eye model, only the long-term models are being frozen. The short-term models continue being updated.
Thanks @papr
hi @papr, I built a plug-in before which can broadcast the gaze position on a surface with LSL by accessing:
events["surfaces"][0]["gaze_on_surfaces"]
but recently I pulled the newest version of Pupil and found that events["surfaces"] will raise a KeyError now. I am wondering, has the data format changed? Thank you
Is it possible that your surface tracker is not enabled? Session settings reset on upgrades. You should always check whether the key is available or not.
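A small sketch of such a guard (push_gaze is a hypothetical stand-in for your LSL forwarding code):

def recent_events(self, events):
    # Avoid a KeyError when the surface tracker is disabled or no surface is defined.
    surfaces = events.get("surfaces", [])
    if not surfaces:
        return
    for gaze in surfaces[0].get("gaze_on_surfaces", []):
        self.push_gaze(gaze)  # hypothetical helper that forwards the datum over LSL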
@papr I have checked the surface tracker is on and I can see the marker detected and the red triangle showing the surface direction
Please try the singular 'surface'
@papr umm, it's kind of weird. I removed the whole capture_settings folder and put the plugin back, and now it works like before (using events['surfaces']). So everything is good now. Thank you though.
haha, ok
Hi All, new to using Pupil (received a Core a couple of days ago). I'm making a tool for a colleague's company. I am using Java, and have ~80% of his app done, rolling in support for the Pupil Core now. I have a question. I have the glasses set up, I've added the JeroMQ jar, and can query the glasses. I have the pub/sub ports and can subscribe. The responses have me puzzled right now, as they are not JSON, packed binary, or a traditional payload. I see a lot of 0x3F's (63's) in there; are those the delimiters? I see the field names with a 0x3F, then 3 values (0x00 as I am not wearing the glasses), then a 0x3F. Are the 3 bytes the value? If so, are they a type of float? Also, if anyone does this in Java, is there a jar I can roll in to unpack with? I can keep beating my face against this, but if anyone has been down this path and can share, it would be appreciated. Thanks,
It is https://msgpack.org/ encoded data.
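For reference, any msgpack library can decode the payload; here is a Python sketch of reading one message (msgpack also has a Java implementation that plays the same role):

import msgpack

# parts[0] is the topic (e.g. b"pupil.0.3d"), parts[1] the msgpack-encoded payload.
parts = sub_socket.recv_multipart()
datum = msgpack.unpackb(parts[1])
print(parts[0], datum["timestamp"], datum["confidence"])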
Hi, I'm trying out the htc vive add-on with Unity, so I'm running the GazeDemoScene and I'm successfully connecting to the add-on after launching either PupilCapture or PupilService, but it's saying that it's running at 125 fps, but isn't it supposed to run at 200 fps?
Did you change the target frame rate to 200Hz in the eye windows' Video Source menus?
ah, i see; I didn't see that setting for some reason, my bad. Thank you!
I am integrating the pupil core glasses into an R&D product for a company. They told me the glasses can give a measurement to whatever surface is in front of the person wearing the glasses. I've looked through the docs and do not see this as a topic I can subscribe to. Is there one I can get this from?
@user-2d2b31 see docs here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@wrp thanks. I'll dig through. I setup ZMQ & MessagePack with Java. Not sure if others ask, but it can be done and is easy.
Hi, is there any way to tell (before sending) if the calibration data collected per point is good or bad? (I'm using the htc vive add-on)
unfortunately not.
Hello, I am wondering: are there any scripts in Python or another language for calculating eye movement parameters (like fixation count, fixation duration, and saccades)?
Check out this community project https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
Thanks
Hi, with the htc vive add-on, is it possible to redo a calibration point without having to go through the whole calibration process again?
i.e., if I do something like
double sampleTimeStamp = timeSync.ConvertToPupilTime(Time.realtimeSinceStartup);
AddSample(sampleTimeStamp);
targetSampleCount++;
if (targetSampleCount >= settings.samplesPerTarget)
{
calibration.SendCalibrationReferenceData();
}
and then do it again for the same point, will the second time through just replace the calibration data for the first time?
Hey, no, it does not work that way. All sent data is used.
Okay, thank you