πŸ’» software-dev


user-98789c 01 April, 2021, 14:43:22

Given the different lengths of pupil_positions.csv files, even for recordings of the same design, how do you propose to average multiple recordings from different sessions and participants? (mainly focusing on pupil diameter)

papr 01 April, 2021, 14:57:12

Kret et al. propose resampling the data at a fixed sampling frequency. This makes it possible to compare pupil size at corresponding timestamps across subjects and conditions.

Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336–1342. https://doi.org/10.3758/s13428-018-1075-y
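
To illustrate the idea, a minimal sketch with pandas (pupil_timestamp and diameter_3d are the column names in Player's pupil_positions.csv export; verify against your own files):

import pandas as pd

def resample_diameter(csv_path, freq="10ms"):  # 10 ms bins = 100 Hz
    df = pd.read_csv(csv_path)
    # express time relative to recording start so that recordings of
    # different lengths and start times share one common grid
    rel = df["pupil_timestamp"] - df["pupil_timestamp"].iloc[0]
    df["time"] = pd.to_timedelta(rel, unit="s")
    return df.set_index("time")["diameter_3d"].resample(freq).mean()

# aligned series can then be averaged across recordings/participants:
# mean_diameter = pd.concat([resample_diameter(p) for p in paths], axis=1).mean(axis=1)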

user-98789c 02 April, 2021, 11:51:55

Is there a way to automate loading recordings on Player and exporting them?

papr 02 April, 2021, 11:52:56

No, unfortunately not. But there are some community scripts that provide that functionality (with limitations): https://github.com/pupil-labs/pupil-community

user-98789c 04 April, 2021, 13:56:44

In https://peerj.com/articles/7086/, I have come across this sentence:

Note that this step does not eliminate the inherent delay of 10 ms of the Pupil Labs’ cameras (personal communication with Pupil Labs).

Is this the single-trip delay to Capture that I already know about?

papr 04 April, 2021, 14:29:57

No, they are talking about the time between image exposure and transfer from camera to computer, if I am not mistaken.

user-5e8fad 05 April, 2021, 13:46:14

Reposting in case anyone has some advice to provide. Thanks in advance! We recently bought a Pupil Invisible device. I have two questions regarding the LSL plugin. 1. The GitHub repo says that the coordinates are normalized to [0-1], however the recordings contain much larger values (e.g. 900 or so). Why does this happen? 2. Is there any way to also push the video through LSL?

papr 07 April, 2021, 17:49:48

Apologies for the delayed response. 1) It looks like this data is in pixels then. I would have to check. 2) Technically possible, but not implemented in our plugin.

papr 07 April, 2021, 17:51:11

So yes, it is in pixels. https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_invisible_lsl_relay/pi_gaze_relay.py#L64 Could you please point me to the documentation saying it is in normalized coordinates, so that I can correct it?

user-98789c 05 April, 2021, 21:04:13

I have been trying to use https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.resample.html to resample my pupil diameter data, but apparently the data structure is not compatible with how the function works. Any suggestions for resampling? πŸ™‚

papr 05 April, 2021, 21:34:19
import pandas as pd
df["datetime"] = pd.to_datetime(df["timestamp"], unit="s")
df.set_index("datetime").resample("5ms")  # chain an aggregation, e.g. .mean()
user-3cff0d 05 April, 2021, 23:25:07

Hello, I'm having a bit of trouble making my plugins work on the precompiled version of Pupil Labs Core.

user-3cff0d 05 April, 2021, 23:27:28

Firstly, my plugins require some external Python extensions that Pupil Labs Core doesn't require natively (namely, PyTorch). Running the .exe doesn't seem to use any Python version on my system, so I'm wondering if there's a way to install packages where the bundle can reach them when initializing plugins.

papr 06 April, 2021, 09:49:48

Hi, the bundle ships with its own Python distribution. Any dependencies need to be placed within the plugins directory. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Make sure to install the dependencies for the corresponding bundled Python versions:
- macOS: Python 3.9.1
- Windows bundle: Python 3.6
- Linux bundle: Python 3.6
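
For example, something along these lines may work (an assumption, not an official workflow; run pip under the same Python minor version as the bundle, and adjust the plugins path for your platform):

# e.g. for a Capture plugin; use a pip matching the bundled Python version
pip install --target="$HOME/pupil_capture_settings/plugins" <some-package>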

user-3cff0d 05 April, 2021, 23:30:41

Secondly, my plugins (which are custom pupil detector plugins) override the default gl_display function of the PupilDetectorPlugin class in order to change the color of the 2d pupil ellipse on the eye window display. To do this, I'm importing the draw_pupil_outline function from within visualizer_2d. However, on the precompiled version of Pupil Labs, visualizer_2d doesn't seem to exist as a file to import from. Is there presently a way to get access to it?

papr 06 April, 2021, 09:54:08

It should be available using this import:

from pupil_detector_plugins.visualizer_2d import draw_pupil_outline
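
For reference, a rough sketch of such an override (only the import above is confirmed; the base class path, the detect() signature, and the draw_pupil_outline() signature are assumptions to check against your bundled version):

from pupil_detector_plugins.detector_2d_plugin import Detector2DPlugin
from pupil_detector_plugins.visualizer_2d import draw_pupil_outline

class ColoredOutlineDetector(Detector2DPlugin):
    def detect(self, frame, **kwargs):
        # keep the last result around so gl_display can re-draw it
        self._last_result = super().detect(frame, **kwargs)
        return self._last_result

    def gl_display(self):
        # check visualizer_2d for the exact signature, e.g. whether your
        # version accepts a color argument
        if getattr(self, "_last_result", None):
            draw_pupil_outline(self._last_result)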
user-5e6759 06 April, 2021, 05:13:09

hi, I am making a plugin to broadcast the fixation on surface using LSL. The code I wrote to access the fixations on a surface is as follows:

events["surfaces"][0]["fixations_on_surfaces"]

Since the fixation is detected after the gaze position is nearly fixed for 300 ms, I am wondering whether the moment when the line above returns a value is actually 300 ms after the beginning of a fixation? Thanks.

papr 06 April, 2021, 10:13:10

Roughly, yes, but I would not make any assumptions about that. Instead, I recommend using the data's timestamp field, which corresponds to the start of the fixation. If you are using the existing LSL Relay plugin, this timestamp is already expressed in the local LSL clock and can be used in StreamOutlet.push_sample() to indicate the time of creation.
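
A rough sketch of that pattern (the stream metadata, field names, and the offset parameter are illustrative assumptions, not the Relay plugin's exact code):

import pylsl

info = pylsl.StreamInfo("FixationsOnSurface", "Gaze", channel_count=3)
outlet = pylsl.StreamOutlet(info)

def relay_fixations(events, pupil_to_lsl_offset):
    for surface in events.get("surfaces", []):
        for fix in surface["fixations_on_surfaces"]:
            sample = [fix["norm_pos"][0], fix["norm_pos"][1], fix["confidence"]]
            # fix["timestamp"] marks the fixation start in Pupil time;
            # convert it to the LSL clock so consumers see the true onset,
            # not the (roughly 300 ms later) moment of detection
            outlet.push_sample(sample, fix["timestamp"] + pupil_to_lsl_offset)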

user-98789c 06 April, 2021, 09:25:31

I do this for resampling:

pupil_data_in_trial_eye0['datetime'] = pd.to_datetime(pupil_data_in_trial_eye0['pupil_timestamp'], unit='s')
pupil_data_in_trial_eye0_resampled = pupil_data_in_trial_eye0.set_index('datetime').resample('5ms').mean()
pupil_data_in_trial_eye0_interpolated = pupil_data_in_trial_eye0_resampled.interpolate(method='linear', limit_direction='forward', axis=1)

and for some reason, from the 1st line to the 2nd, my 'trial' column disappears. Any idea why?

papr 06 April, 2021, 09:45:21

Since you resampled to fixed intervals, method="linear" works, but given a datetime index, I would always recommend using method="time" instead.

Calling mean() will set rows with no data to NaN, causing the interpolation to skip (I think) this portion. (mean() also silently drops non-numeric columns, which is likely why your 'trial' column disappears.)

I think .resample("5ms").interpolate(method="time") should work. You might want to add an additional bfill() to fill NaN values at the beginning? That is application-dependent, though.
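
Putting those suggestions together, a sketch of the full chain (column names as in the earlier snippets):

import pandas as pd

# df: pupil_positions data with 'pupil_timestamp' and numeric value columns
df["datetime"] = pd.to_datetime(df["pupil_timestamp"], unit="s")
resampled = (
    df.set_index("datetime")
    .resample("5ms")
    .mean()                      # empty bins become NaN
    .interpolate(method="time")  # fill interior gaps using the time index
    .bfill()                     # fill leading NaNs; application-dependent
)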

user-98789c 06 April, 2021, 09:42:18

I did the resampling without .mean() and the interpolation without any arguments, and the 'trial' column stays, although no data is passed through.

user-3cff0d 06 April, 2021, 15:40:46

Thank you, this has been working great! However, I'm having an issue with a dependency I'm attempting to install, torchvision specifically. torchvision uses the import from PIL import ImageOps, which doesn't seem to exist in the PIL that comes bundled with Pupil Labs. Is it possible that the version in use is out of date, or is it the PIL module rather than the more up-to-date Pillow module that shares the same import name?

papr 06 April, 2021, 15:44:51

You can check the bundled version with from PIL import __version__ in your plugin. Please let me know if the package is indeed out of date and which version number you would require. A temporary workaround could be to install an older version of torchvision.

user-3cff0d 06 April, 2021, 15:43:05

I did try installing the updated Pillow module like the rest of the dependencies, in the plugins/ folder as you suggested. However, since it shares the same import name (PIL) as the one Pupil Labs comes bundled with, as well as many of the same functions, it doesn't seem to work correctly.

user-3cff0d 06 April, 2021, 15:56:28

The version of PIL being used in Pupil Labs is 5.2.0; the most up-to-date version for Python 3.6 is 8.2.0.

papr 06 April, 2021, 15:57:20

You have a working setup to run from source, correct?

user-3cff0d 06 April, 2021, 15:57:26

Yes, I do

papr 06 April, 2021, 16:02:39

ok, I wanted to ask you if you could test those features requiring PIL with the up-to-date version, but this won't be necessary. The only reference to PIL I can see is PyAV's to_image() function, which we do not use. I might be able to remove PIL altogether from the bundle. That way you can install whichever version you like for your plugin.

user-3cff0d 06 April, 2021, 15:58:06

Right now I'm trying to get my plugins working on the prebuilt version

user-3cff0d 06 April, 2021, 15:58:20

I've already confirmed that they work when running from source

user-3cff0d 06 April, 2021, 15:58:34

(fortunately!)

user-3cff0d 06 April, 2021, 16:03:16

Sounds good, if it's that simple!

papr 06 April, 2021, 16:03:43

I hope so πŸ˜… We will see when it comes to testing the next release bundle.

user-5e8fad 07 April, 2021, 18:27:28

@papr I read it here: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture Maybe I am mistaken and it does refer to the LSL plugin we are using.

papr 07 April, 2021, 18:28:06

Ah, this part of the docs only applies to the Pupil Capture LSL plugin and does not apply to Pupil Invisible.

user-5e8fad 07 April, 2021, 18:28:53

I see, thanks for the info and sorry for the misinformation!

papr 07 April, 2021, 18:32:16

No worries!

user-a09f5d 08 April, 2021, 15:09:03

Hi, I have written a Python script that uses pupil_remote to start and stop recordings and Remote Annotations to save the timing of events. I now need to modify the script so that I can check the confidence of the 3d models in real time, to make sure it is sufficiently high before starting the experiment (and to pause the experiment if it drops below threshold). However, I am unsure how to best integrate the code for connecting to the IPC Backbone with the code I am already using to connect to pupil_remote and Remote Annotations.

The code I used to set up the connection with pupil_remote and Remote Annotations is:

ctx = zmq.Context()
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = zmq.Socket(ctx, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))
time_fn = time.time
pupil_remote.send_string("T {}".format(time_fn()))
print(pupil_remote.recv_string())
time.sleep(1)

Your developers page says to use the code below to connect to the IPC Backbone:

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
ip = 'localhost'
port = 50020
pupil_remote.connect(f'tcp://{ip}:{port}')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()

What is the best way to integrate this code with my existing code, given that some of the lines for the IPC Backbone appear to override some of the lines for pupil_remote? E.g. will pupil_remote = zmq.Socket(ctx, zmq.REQ) and pupil_remote = ctx.socket(zmq.REQ) conflict with each other?

papr 08 April, 2021, 15:20:28

Please be aware that this is no longer the best practice regarding time sync. I will try to update that code next week

papr 08 April, 2021, 15:19:32

These lines are redundant. You can choose one version and use it for both purposes.

user-a09f5d 08 April, 2021, 15:23:13

Interesting. So out of interest, what is the difference between pupil_remote = zmq.Socket(ctx, zmq.REQ) and pupil_remote = ctx.socket(zmq.REQ)? Why do they differ if they are redundant? The same for pupil_remote.connect('tcp://127.0.0.1:50020') vs pupil_remote.connect(f'tcp://{ip}:{port}')? I am pretty new to Python so please forgive me if I am missing something obvious.

user-a09f5d 08 April, 2021, 15:25:39

I assume you are referring to time_fn = time.time; pupil_remote.send_string("T {}".format(time_fn())); print(pupil_remote.recv_string())? This is good to know. Thanks! Can I please ask why, or will the reason be explained in the update?

papr 08 April, 2021, 15:29:47

Correct. It is about which clock is being synchronized. T changes the Pupil clock, and there are just too many hidden side effects that users have run into in the past. The new way requires more code but is more explicit about how the time sync works, hopefully causing fewer issues.
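
For reference, a minimal sketch of the offset-based approach (it ignores the network round-trip time, which a careful implementation would measure and compensate for):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

local_clock = time.perf_counter  # any monotonic clock works

# 't' asks Capture for its current Pupil time instead of changing it
pupil_remote.send_string('t')
pupil_time = float(pupil_remote.recv_string())
offset = local_clock() - pupil_time

# later: map a local timestamp into Pupil time, e.g. for annotations
pupil_timestamp = local_clock() - offset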

papr 08 April, 2021, 15:28:21
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote = ctx.socket(zmq.REQ)

The zmq API just provides two ways of creating this socket. The scripts were written by different people, each preferring a different call. The result is the same.

'tcp://127.0.0.1:50020'  # fixed string
f'tcp://{ip}:{port}'  # formatted string that inserts the values of the ip and port variables into the string
user-a09f5d 08 April, 2021, 15:36:19

Cool. Thanks for the explanation. Good to know. Hopefully I'll be able to add the update next week before I start data collection.

papr 08 April, 2021, 15:38:21

Maybe, if @user-98789c has time, she could help you in the meantime. We implemented and validated the new approach in Matlab for her experiment. The code logic should be the same for Python.

user-98789c 08 April, 2021, 15:40:51

sure, I'd be happy to, @user-a09f5d please send me a private message and we'll take it from there.

papr 08 April, 2021, 15:41:44

Thank you!

papr 08 April, 2021, 15:41:05

Instead of toc, you can use time.perf_counter() as a clock (it does not require a tic)

user-a09f5d 08 April, 2021, 15:43:16

Wonderful! Thank you! @user-98789c I'll drop you a private message now. Thanks to you too @papr. As always you have been a really big help.

user-c09cdd 08 April, 2021, 18:09:08

@papr I'm using the network API to receive the frames and I'm performing object detection on them. For some reason, the frames are stacking up if the object detection is slow. How can I prevent the network API from stacking up the frames?

papr 08 April, 2021, 18:10:55

Please see this issue for reference https://github.com/pupil-labs/pupil/issues/2116#issuecomment-803621800
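
The underlying issue is that a slow consumer lets messages queue up on the ZMQ subscription. One common mitigation (a sketch, not necessarily the exact approach from the linked issue) is to drain the queue and only process the newest frame:

import zmq

def latest_multipart(sub_socket):
    """Block for one message, then drain the queue, keeping only the newest."""
    msg = sub_socket.recv_multipart()
    while True:
        try:
            msg = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            return msg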

user-a09f5d 08 April, 2021, 19:03:36

Hi again @papr I have been able to get my script to print out all pupil. messages. As I have mentioned, I am interested in extracting the model confidence for each eye so that I can check the quality of the model in real time. However, when I look at the printed messages, the value for model_confidence is always 1.0. I found a line in the "Pupil Datum Format" section that says that model_confidence is fixed:

### 3D model data
# Fixed to 1.0 in pye3d v0.0.4.
'model_confidence': 1.0,

Does this mean I can't extract the model confidence in real time?

papr 08 April, 2021, 19:05:27

Currently, the pye3d module does not have a way to calculate a meaningful model confidence. Not in real time, not post-hoc.

papr 08 April, 2021, 19:08:53

The closest to that value is the RMS residual https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/base.py#L186-L204 But its range is [0, inf), and while there are ways to map that range to [0, 1], it is not clear how best to do that.

The RMS value is currently not exposed in real time.

user-a09f5d 08 April, 2021, 19:12:01

So where does the value for model confidence come from in the exported pupil data? Or did you mean to say that it can only be calculated post-hoc?

papr 08 April, 2021, 19:12:32

It is fixed at 1.0 πŸ˜… as in: It is 1.0 by default.

user-a09f5d 08 April, 2021, 19:13:44

Mine is not fixed at one.

Chat image

papr 08 April, 2021, 19:14:48

Ah, then you are using the legacy 3d detector, which was shipped prior to v3.0

user-a09f5d 08 April, 2021, 19:18:55

Aww, okay! Forgive my confusion. I did not realize that this had changed in the update given our conversation the other day (https://discord.com/channels/285728493612957698/285728493612957698/827246333827481620)

papr 08 April, 2021, 19:32:49

So to clarify. You can access the model confidence in real time in v2.x. In v3.x it is fixed to 1.0. Apologies for not differentiating this earlier.

user-a09f5d 08 April, 2021, 19:29:30

So moving forward with the latest update, is the best course of action to make sure that subjects roll their eyes often during the experiment, to ensure the 3d model is as accurate as possible?

papr 08 April, 2021, 19:32:58

correct

user-a09f5d 08 April, 2021, 19:41:06

Thanks for that. Since I am interested in the raw pupil position values (circle_3d_normal_x,y,z) and have previously had problems with low model confidence values that would have affected the measurement had they not been removed, is it best for me to use v2 instead of v3 in order to check the model's fit? Or is the modeling better in v3, meaning model_confidence is less important?

papr 08 April, 2021, 19:45:54

I think inspecting the visualization is sufficient to make sure the model is well fit. Fitting is also much quicker in v3.

user-a09f5d 13 April, 2021, 16:15:57

While inspecting the visualization of the model for each eye in v3 of capture I notice that the green circle continuously changes shape and/or size every few seconds. Does this mean that the model is continuously changing/updating? It appears to happen more frequently than it did in v2.

papr 13 April, 2021, 16:20:03

That is correct.

user-c09cdd 13 April, 2021, 16:20:57

@papr could you suggest some papers that do temporal and spatial analysis of gaze data using Pupil Labs? Thank you!

papr 13 April, 2021, 17:46:51

@nmt You might have a better overview of the gaze analysis literature than me. Maybe you can even link some of your own research?

nmt 14 April, 2021, 07:47:33

Hi @user-c09cdd. In my recent paper: https://doi.org/10.1016/j.humov.2021.102774 we investigated the temporal relationship between gaze and stepping actions as people negotiated stairs. We also looked at the spatial dimension of how far people looked ahead in relation to where they stepped. With that said, temporal and spatial analysis are broad terms encompassing many kinds of research. Perhaps if you can be more specific I can think of some more pertinent research papers.

user-c09cdd 14 April, 2021, 07:52:20

Thank you! I'm more interested in cognitive load analysis using spatio-temporal gaze patterns, or any other better approach if possible.

papr 14 April, 2021, 08:41:49

As you have read on https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales, there are three different eye models in play: short term, long term and ultra long term.

If you open the 3d detector debug window, you can see the outline of all three, see https://discord.com/channels/285728493612957698/285728493612957698/829631637012480071

The short-term model is updated on every frame.

Real-time mode: Since updating the long-term model usually takes longer than a few frames, the model is not updated on every frame. Instead, a new update is started once the previous one is finished. I.e. the update rate depends on your CPU power. Better CPU, more updates.

Post-hoc mode: In Player, the model is updated once per "recorded second", i.e. after seeing one second worth of pupil data. This makes the post-hoc detection process reproducible.

When you freeze the eye model, only the long-term models are being frozen. The short-term models continue being updated.

user-a09f5d 14 April, 2021, 13:03:32

Thanks @papr

user-5e6759 16 April, 2021, 20:58:00

hi @papr , I built a plugin before which can broadcast the gaze position on a surface with LSL by accessing:

events["surfaces"][0]["gaze_on_surfaces"]

but recently I pulled the newest version of Pupil and found that

events["surfaces"]

now raises a KeyError. I am wondering, has the data format changed? Thank you

papr 16 April, 2021, 21:12:07

Is it possible that your surface tracker is not enabled? Session settings reset on upgrades. You should always check whether the key is available or not.
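
A defensive access pattern along those lines (a sketch; the dict layout follows the snippets above):

def recent_events(self, events):
    # 'surfaces' is only present while the surface tracker plugin is active
    for surface in events.get("surfaces", []):
        for gaze in surface["gaze_on_surfaces"]:
            ...  # relay via LSL, etc.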

user-5e6759 16 April, 2021, 21:20:40

@papr I have checked the surface tracker is on and I can see the marker detected and the red triangle showing the surface direction

papr 16 April, 2021, 21:21:16

Please try the singular 'surface'

user-5e6759 16 April, 2021, 22:06:43

@papr umm, it's kind of weird. I removed the whole capture_settings folder and put the plugin back, and now it works like before (using events['surfaces']). So everything is good now. Thank you tho. πŸ˜„

papr 16 April, 2021, 22:07:05

haha, ok πŸ˜„

user-2d2b31 18 April, 2021, 01:01:01

Hi All, New to using Pupil (received a Core a couple of days ago). I'm making a tool for a colleague's company. I am using Java, and have ~80% of his app done, rolling in support for the Pupil Core now. I have a question. I have the glasses set up, I've added the jeromq jar, and can query the glasses. I have the pub/sub ports and can subscribe. The responses have me puzzled right now, as they are not JSON, packed binary, or a traditional payload. I see a lot of 0x3F's (63's) in there, are those the delimiters? I see the field names with a 0x3F, then 3 values (0x00 as I am not wearing the glasses), then a 0x3F. Are the 3 bytes the value? If so, are they a type of float? Also, if anyone does this in Java, is there a jar I can roll in to unpack with? I can keep beating my face against this, but if anyone has been down this path and can share, it would be appreciated. Thanks,

papr 18 April, 2021, 07:55:34

It is msgpack-encoded data: https://msgpack.org/
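
In Python the decoding looks like this (a sketch; in Java, any MessagePack library, e.g. msgpack-java, should decode the same payloads):

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

topic, payload = sub.recv_multipart()
gaze = msgpack.unpackb(payload, raw=False)  # bytes -> dict with str keys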

user-d1efa8 20 April, 2021, 13:48:18

Hi, I'm trying out the HTC Vive add-on with Unity. I'm running the GazeDemoScene and successfully connecting to the add-on after launching either Pupil Capture or Pupil Service, but it says it's running at 125 fps. Isn't it supposed to run at 200 fps?

papr 20 April, 2021, 13:49:00

Did you change the target frame rate to 200Hz in the eye windows' Video Source menus?

user-d1efa8 20 April, 2021, 13:49:42

Ah, I see; I didn't see that setting for some reason, my bad. Thank you!

user-2d2b31 21 April, 2021, 03:49:51

I am integrating the pupil core glasses into an R&D product for a company. They told me the glasses can give a measurement to whatever surface is in front of the person wearing the glasses. I've looked through the docs and do not see this as a topic I can subscribe to. Is there one I can get this from?

wrp 21 April, 2021, 03:50:39

@user-2d2b31 see docs here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
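
Continuing the msgpack subscription sketch from earlier in this channel, surface data is published under a per-surface topic (the exact fields in a surface datum depend on your Pupil version; print one message to inspect it):

sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")  # one topic per named surface

topic, payload = sub.recv_multipart()
surface = msgpack.unpackb(payload, raw=False)
print(surface.keys())  # e.g. gaze_on_surfaces, fixations_on_surfaces, timestamp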

user-2d2b31 21 April, 2021, 03:53:05

@wrp thanks. I'll dig through. I set up ZMQ & MessagePack with Java. Not sure if others ask, but it can be done and is easy.

user-d1efa8 22 April, 2021, 19:07:27

Hi, is there any way to tell (before sending) if the calibration data collected per point is good or bad? (I'm using the htc vive add-on)

papr 26 April, 2021, 09:20:13

unfortunately not.

user-3c4ff0 26 April, 2021, 10:19:18

Hello, I am wondering, are there any scripts in Python or another language for calculating eye movement parameters (like fixation count, fixation duration, and saccades...)?

papr 26 April, 2021, 10:20:37

Check out this community project: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing

user-3c4ff0 26 April, 2021, 10:24:34

Thanks

user-d1efa8 28 April, 2021, 12:54:10

Hi, with the htc vive add-on, is it possible to redo a calibration point without having to go through the whole calibration process again?

i.e., if I do something like

double sampleTimeStamp = timeSync.ConvertToPupilTime(Time.realtimeSinceStartup);
AddSample(sampleTimeStamp);
targetSampleCount++;
if (targetSampleCount >= settings.samplesPerTarget)
{
  calibration.SendCalibrationReferenceData();
}

and then do it again for the same point, will the second time through just replace the calibration data for the first time?

papr 29 April, 2021, 08:06:10

Hey, no, it does not work that way. All sent data is used.

user-d1efa8 29 April, 2021, 10:53:02

Okay, thank you

End of April archive