πŸ’» software-dev


user-b14f98 04 May, 2021, 20:21:07

So... is it me, or do the eye images arrive at the pupil detection plugin upside down?

papr 04 May, 2021, 20:21:41

Right eye is upside down because the cam is upside down

user-b14f98 04 May, 2021, 20:21:44

Yes, I know that one camera is upside down.

user-b14f98 04 May, 2021, 20:21:51

Sorry, I should have expanded.

user-b14f98 04 May, 2021, 20:22:19

So, when I view the raw eye videos using VLC ...eye_1 is upside down, and eye_0 is right-side up.

user-b14f98 04 May, 2021, 20:22:37

...but, when I view the images inside the plugin, or save them out, it is the opposite.

user-b14f98 04 May, 2021, 20:22:43

Known issue? Am I crazy?

papr 04 May, 2021, 20:22:57

Image coordinate system origin is top left, not bottom left.

user-b14f98 04 May, 2021, 20:23:15

OK, thx.

user-b14f98 04 May, 2021, 20:23:32

Trying to decide how to handle this, since our trained neural network expects right-side up images.
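For anyone hitting the same mismatch: a vertical flip is cheap. A minimal sketch, assuming the frame arrives as a numpy array as it does inside a Pupil eye plugin; `model` is a hypothetical stand-in for the trained network:

    import numpy as np

    # inside a detector plugin's detect()/recent_events()
    gray = frame.gray               # eye image as a 2-D numpy array, origin top left
    upright = np.flipud(gray)       # flip rows, i.e. gray[::-1]
    prediction = model.predict(upright)  # hypothetical network call
    # map predicted y back into image coordinates if needed:
    # y_image = gray.shape[0] - 1 - y_upright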

user-b14f98 04 May, 2021, 20:24:37

Thanks for the pointer. Already on it πŸ™‚

user-b14f98 04 May, 2021, 20:25:02

...and, does HMD-Eyes change any pupil detection settings from the default values I see in player?

papr 04 May, 2021, 20:25:54

Not sure

user-b14f98 04 May, 2021, 20:27:18

Happy to do some digging. Any idea where I should look?

papr 04 May, 2021, 20:28:28

Easiest way to check would be to subscribe to all notifications and look at how the eye processes etc. are being started
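A minimal sketch of that pattern, using the standard Pupil Remote handshake (pyzmq and msgpack assumed installed; tcp://127.0.0.1:50020 is the default address, adjust to your setup):

    import msgpack
    import zmq

    ctx = zmq.Context()

    # ask Pupil Remote for the subscription port
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # subscribe to every notification
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
    subscriber.setsockopt_string(zmq.SUBSCRIBE, "notify.")

    while True:
        topic, payload = subscriber.recv_multipart()
        print(topic.decode(), msgpack.unpackb(payload, raw=False))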

user-b14f98 06 May, 2021, 13:15:14

I had the idea of searching the repository for 'pupil_detector', a string that I believe would be part of any notification involved in changing the ROI or pupil detector settings. There are no instances, and so I assume that the detector uses default values.

user-b14f98 04 May, 2021, 20:28:36

Hurmn.

user-b14f98 04 May, 2021, 20:28:44

Ok. Thanks, @papr. As always, very helpful.

user-b14f98 04 May, 2021, 20:29:12

Man, it's like 10pm in Germany right now, isn't it? That's dedication.

papr 04 May, 2021, 20:30:00

10:30 πŸ˜‰ happy to help

user-358511 05 May, 2021, 06:01:27

Hi @papr and everyone, I wonder what method you recommend for synchronizing the eye data with separately recorded task events, after a session. Is there any way to flash detectable markers on the screen (say at the beginning of each trial) so that the timestamps of these events appear in the exported data?

papr 05 May, 2021, 07:11:19

I suggest using remote annotations https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
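The gist of that approach, sketched after the pupil-helpers remote-annotation example (see the linked docs for the authoritative version; "trial_start" is a placeholder label):

    import time
    import msgpack
    import zmq

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

    # open a publisher socket on Capture's PUB port for the annotations
    pupil_remote.send_string("PUB_PORT")
    pub_port = pupil_remote.recv_string()
    pub = ctx.socket(zmq.PUB)
    pub.connect(f"tcp://127.0.0.1:{pub_port}")
    time.sleep(1.0)  # give the PUB socket a moment to connect

    # ask Capture for its clock so the annotation lands on the recording timeline
    pupil_remote.send_string("t")
    pupil_time = float(pupil_remote.recv_string())

    annotation = {
        "topic": "annotation",
        "label": "trial_start",  # your event name
        "timestamp": pupil_time,
        "duration": 0.0,
    }
    pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub.send(msgpack.packb(annotation, use_bin_type=True))

The Annotation plugin needs to be enabled in Capture, and a recording running, for the events to end up in the exported data.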

user-37e73e 05 May, 2021, 07:10:10

Hey everyone, is there a supported way of exporting lots of videos without dropping every single recording folder into the player gui?

papr 05 May, 2021, 07:13:15

Unfortunately, no. There are some community projects that batch export data, but I do not think they export the video. https://github.com/pupil-labs/pupil-community

What part of the data do you need exported?

user-37e73e 05 May, 2021, 07:14:34

@papr I'm using the fixations and world_timestamps CSV files. The world_timestamps I can read in without exporting, as a numpy array.

papr 05 May, 2021, 07:21:51

If you recorded fixations in real time, you could read the fixations.pldata file using this function https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972#file-extract_diameter-py-L71-L96

But you will need to aggregate them, as the online fixation detector output is slightly different from the post-hoc detector.
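For anyone without the gist at hand, the core of it is small. A sketch assuming the usual .pldata layout (a stream of msgpack-encoded (topic, payload) tuples, where the payload is itself msgpack-encoded):

    import msgpack

    def load_pldata(path):
        """Yield (topic, datum) pairs from a Pupil .pldata file."""
        with open(path, "rb") as fh:
            for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
                yield topic, msgpack.unpackb(payload, raw=False)

    for topic, fixation in load_pldata("recording_dir/fixations.pldata"):
        print(fixation["timestamp"], fixation.get("duration"))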

papr 05 May, 2021, 07:44:22

Yes, in this case, the tutorial uses 3d pupil detector data, but I recommend using the gaze data instead, in order to be comparable to the Player output. You can read pupil and gaze data without exporting it, using the function linked here

user-37e73e 05 May, 2021, 07:23:30

unfortunately I'm not using the online detector, as I tend to get more frames the fewer plugins I run in online mode

papr 05 May, 2021, 07:24:43

You can use the recorded gaze data and run the post-hoc fixation detector at the bottom of this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb

user-37e73e 05 May, 2021, 07:26:44

That's a great tip, thanks a lot! I'll try that out.

user-37e73e 05 May, 2021, 07:25:11

As I need to sync the world video with another camera, I need close to 60 frames per second at all times

user-37e73e 05 May, 2021, 07:41:56

I had a look at the tutorial; it seems that the CSV table used for everything is something I get from a 'raw data export'. Is that right?

    exported_pupil_csv = os.path.join(recording_location, 'exports', '000', 'pupil_positions.csv')

user-37e73e 05 May, 2021, 07:43:17

Is there a recommended way of calling the raw_data_export plugin from outside the GUI?

user-37e73e 05 May, 2021, 07:57:37

Thank you again! I'm gonna have a go at it.

user-37e73e 06 May, 2021, 09:53:24

So, I changed the code that you linked so that I get all the columns (from pupil.pldata) that I need for the tutorial code. But what I get for the fixations is only one fixation for the whole duration of the video. Any idea what the error might be?

Also, you're saying that I should compute the fixations directly from the gaze.pldata file? Which columns would I need for that? (my guess would be: [confidence, diameter, circle_3d:normal(x,y,z)] and norm_pos(x,y) for the actual point looked at in the world video)

papr 06 May, 2021, 09:55:31

You need gaze_point_3d, confidence (remove all rows with confidence < 0.6), timestamp
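In pandas terms, that filtering could look like this (a sketch; gaze_records stands for dicts read from gaze.pldata, and the gp3d_* column names are illustrative):

    import numpy as np
    import pandas as pd

    gaze = pd.DataFrame(gaze_records)            # rows collected from gaze.pldata
    gaze = gaze[gaze["confidence"] >= 0.6]       # "remove all rows with confidence < 0.6"
    gp3d = np.array(gaze["gaze_point_3d"].tolist())
    gaze["gp3d_x"], gaze["gp3d_y"], gaze["gp3d_z"] = gp3d.T
    fixation_input = gaze[["timestamp", "gp3d_x", "gp3d_y", "gp3d_z"]]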

papr 06 May, 2021, 09:57:07

Difficult to tell what the issue might be. I might need to have a quick look at the code to check.

user-37e73e 06 May, 2021, 10:06:21

I changed this:

    def extract_eyeid_diameters(pupil_datum):
        """Extract data for a given pupil datum

        Returns: tuple(eye_id, confidence, diameter_2d, diameter_3d)
        """
        return (
            pupil_datum["id"],
            pupil_datum["confidence"],
            pupil_datum["diameter"],
            pupil_datum.get("diameter_3d", 0.0),
        )

To this:

    def extract_eyeid_diameters(pupil_datum):
        """Extract data for a given pupil datum

        Returns: tuple(eye_id, method, confidence, norm_pos_x, norm_pos_y,
                       diameter_2d, diameter_3d, normal_x, normal_y, normal_z)
        """
        return (
            pupil_datum["id"],
            pupil_datum["method"],
            pupil_datum["confidence"],
            pupil_datum["norm_pos"][0],
            pupil_datum["norm_pos"][1],
            pupil_datum["diameter"],
            pupil_datum.get("diameter_3d", 0.0),
            pupil_datum.get("normal", (0.0, 0.0, 0.0))[0],
            pupil_datum.get("normal", (0.0, 0.0, 0.0))[1],
            pupil_datum.get("normal", (0.0, 0.0, 0.0))[2],
        )

The tutorial code was used as is

papr 06 May, 2021, 10:13:55

To load gaze data, you need to pass topic="gaze" to load_and_yield_data(). Also, you need to adjust the extracted fields accordingly.

user-37e73e 06 May, 2021, 10:29:13

Sure, I changed everything so it works with gaze :) I thought you wanted to see the changes I applied to the pupil.pldata reading I used for the tutorial

papr 06 May, 2021, 10:16:24

Please run the tutorial as is with the accompanying data and check if the generated fixations match the original output. Afterward, try again with your own recording.

user-37e73e 06 May, 2021, 13:09:42

That works with my pupil data extraction. The only big difference that I see between the tutorial's and my data files is that your pupil.method is '3d c++' while my data has pupil.method = 'pye3d 0.0.4 real-time', so I changed that for my data. Maybe that isn't interchangeable?

user-37e73e 06 May, 2021, 10:29:38

All right, I'll try that!

user-358511 06 May, 2021, 10:42:43

Thanks. Has anyone integrated Pupil Labs APIs with MWorks?

papr 06 May, 2021, 10:43:50

Not to my knowledge

user-b14f98 06 May, 2021, 13:16:07

This reminds me - because there is no parallax error in VR due to an offset between the eyes and the scene camera, I question the importance of calibrating to multiple depths.

user-b14f98 06 May, 2021, 13:16:40

The only reason to do that would be if the pupil changed size with fixation depth, but due to the fixed focal distance (because the optics are fixed), there is no change in accommodation with changes in depth.

papr 06 May, 2021, 13:16:45

It is!

user-37e73e 06 May, 2021, 14:22:24

On another of my recordings it just worked πŸ˜„ I'm gonna try around for a bit more I guess ^^ Again thank you very much for all of your help!

user-b14f98 06 May, 2021, 13:17:26

So, I think you guys are wasting time with the default setting of calibrating to multiple depth planes. My advice? Instead of changing depths, change the clear-color of the background so that you vary gray levels and get a variety of pupil sizes.

user-b14f98 06 May, 2021, 13:18:21

I should say that I'm not sure if it has been published that pupil size varies with depth of accommodation, and does not vary with fixation depth under vergence / accommodation conflict. This is an assumption of mine, tested by looking at pupils during calibration at different target depths along the same angular direction.

user-b14f98 06 May, 2021, 13:19:46

They are just very, very stable.

user-b14f98 06 May, 2021, 13:27:35

Well, I should acknowledge that the fixation target's position will change subtly within the screen of the left/right eye as it changes in depth, so it's not a total waste to change depth planes, but it would be more effective to just stick to a single plane and vary spatial position more evenly across the field.

user-b14f98 06 May, 2021, 14:18:53

Looks like I may be wrong about the stability of pupil size when changing fixation depth without accommodation! https://link.springer.com/content/pdf/10.1007/s00417-017-3770-2.pdf ...but, I remain skeptical, because that's just not what I see in these eye videos.

user-b14f98 06 May, 2021, 14:32:04

Ok, another issue: when switching between pupil plugins (I am developing one), I often get an error:

    eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace:
    Traceback (most recent call last):
      File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\launchables\eye.py", line 776, in eye
        consume_events_and_render_buffer()
      File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\launchables\eye.py", line 272, in consume_events_and_render_buffer
        p.gl_display()
      File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\shared_modules\pupil_detector_plugins\pye3d_plugin.py", line 193, in gl_display
        draw_eyeball_outline(result)
      File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\shared_modules\pupil_detector_plugins\visualizer_2d.py", line 58, in draw_eyeball_outline
        if pupil_detection_result_3d["model_confidence"] <= 0.0:
    KeyError: 'model_confidence'

papr 06 May, 2021, 14:33:59

Were you switching from ours to your plugin or the other way around? It looks like the pupil datum in question did not have a model_confidence value. Given that you are currently developing the plugin, it is possible that your detector does not create that field yet.

user-b14f98 06 May, 2021, 14:32:14

This is solved by erasing pupil player settings.

user-b14f98 06 May, 2021, 14:32:43

...however, I can't help but feel this is common enough to be considered a bug. I will post this as an issue...

user-b14f98 06 May, 2021, 14:34:22

Thanks for the heads up. I'll double-check.

user-b14f98 06 May, 2021, 14:34:41

That was an issue in the past - but I cleaned out the data folder of offline data. I'm surprised that that kind of info is stored in player settings.

papr 06 May, 2021, 14:36:37

The session settings store which plugins were active and their settings. On launch, it reloads the previously active plugins. If the loaded plugin is faulty, it will crash and not write out new session settings. i.e. the next time, the faulty plugin will be loaded again.

user-b14f98 06 May, 2021, 14:37:00

Thanks.

user-b14f98 06 May, 2021, 14:37:19

I'm guessing that you have hit the nail on the head, so I'll hold off on posting the issue.

papr 06 May, 2021, 14:39:17

Should you run into the issue again after making sure that your detector fills this field, feel free to create a github issue. Since you are running from source, it would also be helpful to print out the pupil_detection_result_3d value to check which datum is causing issues.
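In other words, the datum the custom plugin publishes needs to carry that key. A hypothetical sketch of the relevant fields (only model_confidence is confirmed by the traceback above; the other keys mirror what the stock pye3d plugin publishes, and eye_id, confidence, model_confidence stand for values your detector computes):

    datum = {
        "topic": f"pupil.{eye_id}.3d",
        "method": "custom-3d",                 # your detector's identifier
        "timestamp": frame.timestamp,
        "confidence": confidence,              # per-frame detection confidence
        "model_confidence": model_confidence,  # the key draw_eyeball_outline() reads
        # ... ellipse, sphere, circle_3d, diameter_3d, etc.
    }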

user-37e73e 06 May, 2021, 16:04:53

Got it running now on the gaze.pldata file as well! One quick question though: the fixation plot looks rather different when I compute it with the pupil- vs with the gaze-data. Is that normal?

papr 10 May, 2021, 08:09:16

Some differences are expected, as the fixations in scene camera space are conceptually different from fixations using the eye model rotation. Also, if you do such comparisons, make sure to align the figure axes (e.g. setting x-axis limits to the same values).
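The easiest way to keep the axes aligned in matplotlib is to share them between subplots. A minimal sketch with dummy data:

    import matplotlib.pyplot as plt

    # sharex/sharey keep both fixation plots on identical ranges,
    # so visual differences come from the data, not the axis limits
    fig, (ax_pupil, ax_gaze) = plt.subplots(2, 1, sharex=True, sharey=True)
    ax_pupil.plot([0.0, 1.5, 3.2], [180, 220, 150], "o")  # dummy pupil-based fixations
    ax_gaze.plot([0.1, 1.6, 3.0], [200, 190, 160], "o")   # dummy gaze-based fixations
    plt.show()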

user-37e73e 06 May, 2021, 16:05:08

fixations for pupil data

Chat image

user-37e73e 06 May, 2021, 16:05:16

fixations for gaze data

Chat image

user-e3a1fe 06 May, 2021, 19:36:03

I want to know what eye tracking data looks like from the Pupil Labs tracker. Is it possible to get a sample file through here?

papr 10 May, 2021, 08:10:48

Download a raw sample recording here https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing

The simplest way to view and export it is using Pupil Player https://github.com/pupil-labs/pupil/releases/latest

user-37e73e 10 May, 2021, 14:55:00

That is a very valid point, thank you. I'll make sure to do that if I decide to put such a comparison in my thesis. For now I just wanted to make sure that the output I produced is reasonable. Also, if I were to use the pupil export plugin, the output I would get should be the one from the gaze data, right? Just asking so that I can be sure that it is reasonable to compare my output with the one from the plugin.

papr 10 May, 2021, 15:00:36

Correct. There are two different ways in which the gaze direction vectors are calculated. Either, if you calibrated in 3d, the gaze_point_3d data is used, or, if you calibrated in 2d, the norm_pos is used and unprojected into a direction vector. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L137-L149

user-1391e7 11 May, 2021, 13:12:03

hello hello!

user-1391e7 11 May, 2021, 13:13:05

I'm currently busy trying to match the gaze data of pupil invisible recordings to video frame data and I'm trying to understand the logged data a little bit better. Could you maybe help me with that?

user-1391e7 11 May, 2021, 13:13:25

I've parsed both time files for gaze and video frame values

user-1391e7 11 May, 2021, 13:14:02

then I went over all video frame timestamps and looked for the closest matching gaze timestamp
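That matching step, for reference, as a vectorized sketch (assumes both timestamp arrays are sorted ascending and share the same clock):

    import numpy as np

    def closest_gaze_indices(frame_ts, gaze_ts):
        """For each frame timestamp, the index of the nearest gaze timestamp."""
        idx = np.searchsorted(gaze_ts, frame_ts)   # insertion points into gaze_ts
        idx = np.clip(idx, 1, len(gaze_ts) - 1)
        left, right = gaze_ts[idx - 1], gaze_ts[idx]
        # step back one index wherever the left neighbour is closer
        return idx - ((frame_ts - left) < (right - frame_ts))

    # usage: gaze_for_frame = gaze_data[closest_gaze_indices(frame_ts, gaze_ts)]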

user-1391e7 11 May, 2021, 13:14:41

when I then visualize the result, it looks like I am a little ahead on the gaze data

papr 11 May, 2021, 13:15:10

How do you visualize the result?

user-1391e7 11 May, 2021, 13:16:08

it's a little app I wrote in unity. could be that I screwed something up in there of course and I'll look at that some more for sure. but just to get a better sense for the logged data

user-1391e7 11 May, 2021, 13:16:48

is it normal that I get quite a few extra entries in the gaze timestamp logfile, because they maybe don't finish exactly at the same time?

papr 11 May, 2021, 13:17:11

By logfile, do you mean the .time file?

user-1391e7 11 May, 2021, 13:17:14

yeah

user-1391e7 11 May, 2021, 13:18:12

maybe 1-3 seconds after my last video frame timestamp

papr 11 May, 2021, 13:20:56

This is indeed normal. You can discard the excess timestamps.

user-1391e7 11 May, 2021, 13:22:06

alright nice, thanks for the info!

user-1391e7 12 May, 2021, 10:22:22

I do think I'm doing the correct thing, but I'm slightly ahead and I don't really have an explanation for that

user-1391e7 12 May, 2021, 10:23:21

ahead meaning the gaze values I show on the corresponding closest frame are a few frames ahead of where the Companion app shows they should be (and yours is correct too)

papr 12 May, 2021, 10:23:36

So, you are visualizing the data in realtime, correct?

user-1391e7 12 May, 2021, 10:24:24

post processing for now, to later add information from other sensor data

user-1391e7 12 May, 2021, 10:25:36

just wrote a parser to match the gaze timestamps to video timestamps, let the video play, ask each frame which frame we are at, look up the match, draw the gaze on top

papr 12 May, 2021, 10:27:45

So, you could use Pupil Player to compare the result, correct? Have you tried that? The question would be, which of the sensors is providing the incorrect data.

user-1391e7 12 May, 2021, 10:28:50

just a sec, I didn't know about pupil player πŸ™‚

user-1391e7 12 May, 2021, 10:30:10

really neat, thanks!

user-1391e7 12 May, 2021, 10:37:32

so I was trying to reinvent the wheel, in a shoddier version πŸ™‚

papr 12 May, 2021, 10:38:30

Let us know if Pupil Player is missing anything that you need.

user-1391e7 12 May, 2021, 10:44:08

would rectifying the video frames result in gaze points not reflecting what you looked at? or I guess one could rectify those as well

papr 12 May, 2021, 10:45:33

You would have to rectify gaze as well. But that is totally possible.

user-1391e7 12 May, 2021, 10:44:44

not that the camera distortion bothers me, it's just something we tried out with a tool we developed in-house

user-1391e7 12 May, 2021, 10:53:01

but this is perfect, thank you very much. now I can adapt the format of what I have to match yours, see where I went wrong

papr 12 May, 2021, 10:54:02

Please note that Player does a bunch of timestamp transformations. The data might not be immediately comparable. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_recording/update/invisible.py#L139-L149
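Roughly what those transformations amount to, sketched under the assumption that PI .time files hold one little-endian uint64 UTC timestamp in nanoseconds per sample, and that everything gets rebased to seconds relative to the recording start (file names are examples from a typical PI recording; the linked invisible.py is the authoritative version):

    import numpy as np

    gaze_ts_ns = np.fromfile("recording/gaze ps1.time", dtype="<u8")
    world_ts_ns = np.fromfile("recording/PI world v1 ps1.time", dtype="<u8")

    # stand-in for the recording start time that Player reads from the info file
    start_ns = min(gaze_ts_ns[0], world_ts_ns[0])
    gaze_ts_s = (gaze_ts_ns - start_ns) / 1e9    # seconds since recording start
    world_ts_s = (world_ts_ns - start_ns) / 1e9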

user-1391e7 12 May, 2021, 10:56:23

another thing that happened is that the original files disappeared or got renamed πŸ™‚

papr 12 May, 2021, 10:56:55

Correct. All part of transforming an Invisible recording to the Player recording format.

papr 12 May, 2021, 10:57:25

I recommend keeping a backup of the original PI recording.

user-1391e7 12 May, 2021, 10:57:57

thanks for the tip!

user-1391e7 12 May, 2021, 11:04:04

hoooo, there's even plugins that do all kinds of things, how great is this! πŸ™‚

papr 12 May, 2021, 11:04:27

You can also write your own plugins.

user-1391e7 12 May, 2021, 11:10:36

might be difficult to do in this project, but looks more appealing than what I had going so far πŸ™‚

user-1391e7 12 May, 2021, 11:11:07

time to find out how long the conversion for the maximum invisible recording length takes on this old machine

user-1391e7 12 May, 2021, 11:11:38

98 minutes, ~4gb video

papr 12 May, 2021, 11:11:57

Well, Pupil Player is not optimized for long recordings. You might see serious performance issues with this kind of recording length. πŸ˜•

user-1391e7 12 May, 2021, 11:12:59

looks just fine, no worries πŸ™‚

user-1391e7 12 May, 2021, 11:14:25

oh, just noticed. I used the eye video overlay

user-1391e7 12 May, 2021, 11:14:42

I can flip both vertically and horizontally

user-1391e7 12 May, 2021, 11:14:51

but 90Β° is what it's off by I think?

papr 12 May, 2021, 11:15:14

Correct. Rotation is currently not supported.

user-1391e7 12 May, 2021, 11:17:13

but this is great still. already showed a flaw in the way we use some of the external applications

papr 12 May, 2021, 11:17:43

What are you referring to by external applications?

user-1391e7 12 May, 2021, 11:18:27

a couple of psych tests that participants had to periodically go through

user-1391e7 12 May, 2021, 11:18:57

cognitive tests of a sort, usually on tablets in the field

user-1391e7 12 May, 2021, 11:19:31

but where those are positioned matters. if people naturally almost fully close their eyes, hard to get good eyetracking data πŸ™‚

user-1391e7 12 May, 2021, 11:24:11

moving to a specific frame in these very long recordings gets a little tougher

papr 12 May, 2021, 11:25:30

That is what I was referring to by performance issues.

user-1391e7 12 May, 2021, 11:26:17

it's so cool that I can start exporting from a specific frame on

user-1391e7 12 May, 2021, 11:26:26

but can I stop that before the end as well?

user-1391e7 12 May, 2021, 11:26:48

oh or does it start from the beginning anyways and I misinterpreted

papr 12 May, 2021, 11:27:09

Yes, there are trim marks left and right of the timeline to adjust the export range. You can change the range in the general settings as well

user-1391e7 12 May, 2021, 11:27:40

oh perfect, thank you again πŸ™‚

papr 12 May, 2021, 11:28:12

You are welcome. There is more information here as well https://docs.pupil-labs.com/core/software/pupil-player/

user-1391e7 12 May, 2021, 11:29:07

right I even had it open in a tab already, got a little excited when I had the player open

user-3cff0d 12 May, 2021, 22:49:11

Hello! Did the removal of PIL happen in this latest release?

papr 12 May, 2021, 22:51:03

Sorry, I lost sight of that change idea/proposal. Which OS do you use? I could get you a bundle tomorrow without PIL for testing.

user-3cff0d 12 May, 2021, 22:51:28

Windows 10

papr 13 May, 2021, 08:16:34

@user-3cff0d I have sent you a download link via DM. Please let me know if you run into any issues with this bundle.

user-3cff0d 13 May, 2021, 23:00:00

That does seem to have solved that problem, thank you!

user-d1efa8 14 May, 2021, 17:11:26

sorry, I didn't see this in the docs and I'm not sure if it's there, but is there any way to only use the 2d pupil detector (for calibration, data collection, etc.)?

papr 17 May, 2021, 08:00:21

Is this about saving computational resources? You can disable the 3d pupil detector via a notification or custom plugin.

user-d1efa8 19 May, 2021, 16:22:33

What's the name of the 2d choreography that I can put into the calibration settings (unity) ?

papr 19 May, 2021, 16:23:17

The current hmd-eyes version only supports 3d calibration.

user-d1efa8 19 May, 2021, 16:24:18

is there any way to work around this? the 2d detection for us is really good, but the 3d detection is just really unstable and unreliable

papr 19 May, 2021, 16:28:43

For that, you will have to use hmd-eyes 0.62 or earlier. It is compatible with Pupil Capture v1.23 or lower. Be aware that this will use a dual-monocular gaze mapper instead of a binocular one.

Feel free to share a recording of your 3d calibration with [email removed] We can have a look and give specific feedback. For example, it could be possible that the 3d eye model is not fit well. That would explain an unstable gaze signal.

user-d1efa8 19 May, 2021, 16:33:36

Where can I find installers for these?

Also, does the Unity demo save the calibration data in a specific folder, or do I have to write code to output it?

user-d1efa8 19 May, 2021, 16:43:47

nvm, I found it, but as for the calibration data, I still need to find out about that

papr 19 May, 2021, 16:53:01

I do not remember the details for hmd-eyes <v1.0 but Capture should be creating recordings including gaze data. Not sure if it stores calibration parameters.

user-d1efa8 19 May, 2021, 17:05:05

ah, I have to use Capture; I've been using Service with Unity

Another question: would I be able to receive 2d pupil detection data? If so, I might just try to do my own calibration in Unity using raw 2d data

papr 19 May, 2021, 17:05:47

You definitely can, from both Capture and Service πŸ™‚

user-d1efa8 20 May, 2021, 12:28:07

I asked before if I should subscribe to "gaze.2d", but it looks like I should subscribe to "pupil", right? Or use a PupilListener to parse pupilData whenever it comes in and extract the position in camera 3d space

papr 20 May, 2021, 12:29:29

Pupil and gaze data are in different coordinate systems. But you seem to be interested in gaze data.

user-98789c 20 May, 2021, 14:21:02

Has anyone here worked with the application AttentionVisualizer that can stream eye tracking data from Invisible and Core?

user-d1efa8 20 May, 2021, 14:29:30

So, how would I go about getting gaze data from the 2d model?

papr 20 May, 2021, 14:32:04

Subscribing to gaze should work

user-d1efa8 20 May, 2021, 18:24:13

I'm having some trouble with this; in the GazeListener.cs script, it subscribes to gaze.3d; so would I just subscribe to gaze.2d instead?

papr 20 May, 2021, 18:24:40

You should be using the older hmd-eyes version which does not subscribe to 3d gaze

user-d1efa8 20 May, 2021, 20:10:41

Oh, is there no way to get 2D data in the newer version?

papr 20 May, 2021, 20:12:20

2d pupil data yes, 2d gaze data no.
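Picking up the subscription sketch from earlier in this channel: 2d pupil datums can be filtered by topic suffix (topic layout as in Pupil Capture 3.x; the hmd-eyes C# side mirrors this pattern):

    # assuming `subscriber` is a connected zmq.SUB socket, as in the
    # notification example further up
    subscriber.setsockopt_string(zmq.SUBSCRIBE, "pupil.")  # pupil.<eye_id>.<method>

    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    if datum["topic"].endswith(".2d"):
        x, y = datum["norm_pos"]  # normalized eye-image coordinates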

user-ef3ca7 23 May, 2021, 18:43:53

Hi, we have recorded 2D data with Pupil Labs Core. How can we extract pupil diameter in mm, since we do not have perspective-corrected data? Is there any Python code I can use for diameter extraction?

nmt 24 May, 2021, 09:56:05

Hi @user-ef3ca7. Pupil Core records pupil diameter both in pixels (observed in the image) and millimetres (provided by 3d eye model). You can access the latter in the β€œpupil_positions.csv” export, under the heading: β€œdiameter_3d” (this is corrected for perspective).
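For example, with pandas (a sketch; column names as in the Player export):

    import pandas as pd

    df = pd.read_csv("exports/000/pupil_positions.csv")
    df = df[df["confidence"] >= 0.6]           # discard low-confidence samples
    mm = df[df["method"].str.contains("3d")]   # rows produced by the 3d eye model
    print(mm[["pupil_timestamp", "diameter_3d"]].head())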

user-ef3ca7 24 May, 2021, 18:36:00

Thank you for your answer. It says "is not corrected for perspective" for the 2D diameter. If I want to use the 2D diameter, do I need to consider some other parameters (e.g. off-axis distortion and so on), or can I use it as is?

papr 24 May, 2021, 18:37:11

If you still have the raw recording you can run the post-hoc pupil detection in Player to get the 3d pupil data

user-ef3ca7 24 May, 2021, 18:40:57

I must have eye0.mp4 and eye1.mp4 for post-hoc. Correct?

papr 24 May, 2021, 18:46:39

And the info.json etc

user-ef3ca7 24 May, 2021, 20:46:02

Because eye0.mp4 and eye1.mp4 weren't recorded, extracting the size of the pupil in mm must be forgotten. Correct?

papr 24 May, 2021, 20:48:45

You can still use https://github.com/pupil-labs/pye3d-detector/ manually and push your 2d data in via a python script.

user-ef3ca7 25 May, 2021, 15:45:41

Thank you. Is there any tutorial or example of what information I should push to the pye3d-detector?

papr 25 May, 2021, 15:51:06

You need to create 2d datums based on the Player export and pass them to the detector, like here https://github.com/pupil-labs/pye3d-detector/blob/create-integration-tests/tests/integration/test_synthetic_metrics.py#L287 And yes, you will also need the eye videos as I just noticed.
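In outline, the manual route looks like this (a sketch under the assumption that the pye3d names used in the linked test, Detector3D, CameraModel, and the update_and_detect() call, are stable; focal length and resolution are example values and must match your eye camera, and datums_2d/eye_frames stand for your own data):

    import numpy as np
    from pye3d.detector_3d import CameraModel, Detector3D

    camera = CameraModel(focal_length=283.0, resolution=(192, 192))
    detector = Detector3D(camera=camera)

    # datums_2d: one 2d result per frame, shaped like the 2d detector output, e.g.
    # {"ellipse": {"center": (cx, cy), "axes": (a, b), "angle": deg},
    #  "confidence": conf, "timestamp": t}
    # eye_frames: the matching grayscale eye images
    for datum_2d, gray_frame in zip(datums_2d, eye_frames):
        result_3d = detector.update_and_detect(datum_2d, gray_frame)
        print(result_3d["timestamp"], result_3d["diameter_3d"])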

user-37e73e 28 May, 2021, 07:58:11

I have another question regarding the export of data. Is there a way to call the export function from my own code without needing to use the GUI (Pupil Player)?

    def export(rec_dir, user_dir, min_data_confidence,
               start_frame=None, end_frame=None, plugin_initializers=(),
               out_file_path=None, pre_computed={}):

(from exporter.py)

user-37e73e 28 May, 2021, 08:00:08

What I now noticed is that I need these corrected previous gaze points, which are computed in the recent_events function of vis_scan_path.py, if I want to draw lines between the gazes from the last few milliseconds for visualisation purposes

papr 03 June, 2021, 07:26:35

I think the plugin caches these values in the offline_data folder. You might be able to read it directly from disk.

End of May archive