So... is it me, or do the eye images arrive at the pupil detection plugin upside down?
Right eye is upside down because the cam is upside down
Yes, I know that one camera is upside down.
Sorry, I should have expanded.
So, when I view the raw eye videos using VLC ...eye_1 is upside down, and eye_0 is right-side up.
...but, when I view the images inside the plugin, or save them out, it is the opposite.
Known issue? Am I crazy?
Image coordinate system origin is top left not bottom left.
OK, thx.
Trying to decide how to handle this, since our trained neural network expects right-side up images.
Thanks for the pointer. Already on it 🙂
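For now the plan is to just flip the frames inside my plugin before handing them to the network. A rough sketch of the idea (assuming numpy frames; which eye actually needs flipping should be checked visually rather than hard-coded):

    import numpy as np

    def to_network_orientation(eye_image: np.ndarray, needs_flip: bool) -> np.ndarray:
        # Hypothetical helper: flip the frame vertically so it matches the
        # right-side-up orientation our network was trained on.
        # The image origin is top left, so a vertical flip just reverses the row order.
        return np.flip(eye_image, axis=0) if needs_flip else eye_image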
...and, does HMD-Eyes change any pupil detection settings from the default values I see in player?
Not sure
Happy to do some digging. Any idea where I should look?
Easiest way to check would be to subscribe to all notifications and look how eye processes etc are being started
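For example, something along these lines (a minimal sketch, assuming Capture/Service is running locally with Pupil Remote on its default port 50020):

    import msgpack
    import zmq

    ctx = zmq.Context()
    # Ask Pupil Remote where the IPC backbone publishes its data
    remote = ctx.socket(zmq.REQ)
    remote.connect("tcp://127.0.0.1:50020")
    remote.send_string("SUB_PORT")
    sub_port = remote.recv_string()

    # Subscribe to all notifications and print them as they come in
    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://127.0.0.1:{sub_port}")
    sub.subscribe("notify.")

    while True:
        topic, payload = sub.recv_multipart()
        print(topic.decode(), msgpack.unpackb(payload, raw=False))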
I had the idea of searching the repository for 'pupil_detector', a string that I believe would be part of any notification involved in changing the ROI or pupil detector settings. There are no instances, and so I assume that the detector uses default values.
Hurmn.
Ok. Thanks, @papr. As always, very helpful.
Man, it's like 10pm in Germany right now, isn't it? That's dedication.
10:30 🙂 happy to help
Hi @papr and everyone, I wonder what method you recommend for synchronizing the eye data with separately recorded task events, after a session. Is there any way to flash detectable markers on the screen (say at the beginning of each trial) so that the timestamps of these events appear in the exported data?
I suggest using remote annotations https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
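Roughly like this (a sketch based on the docs above, assuming Capture is running locally and the Annotation plugin is enabled so the annotations end up in the recording):

    import time
    import msgpack
    import zmq

    ctx = zmq.Context()
    remote = ctx.socket(zmq.REQ)
    remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default port

    remote.send_string("PUB_PORT")           # where to publish annotations
    pub_port = remote.recv_string()
    pub = ctx.socket(zmq.PUB)
    pub.connect(f"tcp://127.0.0.1:{pub_port}")
    time.sleep(1.0)                          # give the PUB socket time to connect

    remote.send_string("t")                  # current Pupil time for the timestamp
    pupil_time = float(remote.recv_string())

    annotation = {
        "topic": "annotation",
        "label": "trial_start",              # your event label
        "timestamp": pupil_time,
        "duration": 0.0,
    }
    pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub.send(msgpack.packb(annotation, use_bin_type=True))

The annotations then show up with their timestamps in the annotation export.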
Hey everyone, is there a supported way of exporting lots of videos without dropping every single recording folder into the player gui?
Unfortunately, no. There are some community projects that batch export data, but I do not think they export the video. https://github.com/pupil-labs/pupil-community
What part of the data do you need exported?
@papr I'm using the fixations and world_timestamps csv files. The world_timestamps I can read in as a numpy array without exporting.
If you recorded fixations in real time, you could read the fixations.pldata file using this function https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972#file-extract_diameter-py-L71-L96
But you will need to aggregate them, as the online fixation detector output is slightly different from the post-hoc detector.
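In case the gist is not accessible, the underlying idea is small. A minimal sketch of a .pldata reader (assuming the msgpack framing used by Pupil's file_methods module, i.e. packed (topic, payload) pairs where the payload is itself a msgpack-encoded dict):

    import msgpack

    def iter_pldata(path):
        """Yield (topic, datum) pairs from a Pupil .pldata file."""
        with open(path, "rb") as fh:
            for topic, payload in msgpack.Unpacker(fh, raw=False, use_list=False):
                yield topic, msgpack.unpackb(payload, raw=False)

    # e.g. collect all online fixations from a recording folder
    fixations = [
        datum
        for topic, datum in iter_pldata("recording/fixations.pldata")
        if topic.startswith("fixation")
    ]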
Yes, in this case, the tutorial uses 3d pupil detector data, but I recommend using the gaze data instead, in order to be comparable to the Player output. You can read pupil and gaze data without exporting it, using the function linked here
unfortunately I'm not using the online detector, as I tend to get more frames the fewer plugins I run in online mode
You can use the recorded gaze data and run the post-hoc fixation detector at the bottom of this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
That's a great tip, thanks a lot! I'll try that out.
As I need to sync the world video with another camera, I need close to 60 frames per second at all times
I had a look at the tutorial; it seems that the csv table used for everything is something I get from a 'raw data export'. Is that right?

    exported_pupil_csv = os.path.join(recording_location, 'exports', '000', 'pupil_positions.csv')
Is there a recommended way of calling the raw_data_export plugin from outside the gui?
Thank you again! I'm gonna have a go at it.
So, I changed the code that you linked so that I get all the columns (from pupil.pldata) that I need for the tutorial code. But what I get for the fixations is only one fixation for the whole duration of the video. Any idea what the error might be?
Also, you're saying that I should compute the fixations directly from the gaze.pldata file? Which columns would I need for that? (My guess would be: [confidence, diameter, circle_3d:normal(x,y,z)] and norm_pos(x,y) for the actual point looked at in the world video.)
You need gaze_point_3d, confidence (remove all rows with confidence < 0.6), and timestamp.
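In code, roughly (a sketch assuming you read the gaze datums as dicts, e.g. from gaze.pldata):

    MIN_CONFIDENCE = 0.6  # drop low-confidence rows, as mentioned above

    def extract_fixation_fields(gaze_datum):
        """Return (timestamp, confidence, x, y, z) or None for low-confidence data."""
        if gaze_datum["confidence"] < MIN_CONFIDENCE:
            return None
        x, y, z = gaze_datum["gaze_point_3d"]
        return (gaze_datum["timestamp"], gaze_datum["confidence"], x, y, z)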
Difficult to tell what the issue might be. I might need to have a quick look at the code to check.
I changed this:

    def extract_eyeid_diameters(pupil_datum):
        """Extract data for a given pupil datum

        Returns: tuple(eye_id, confidence, diameter_2d, diameter_3d)
        """
        return (
            pupil_datum["id"],
            pupil_datum["confidence"],
            pupil_datum["diameter"],
            pupil_datum.get("diameter_3d", 0.0),
        )
To this:

    def extract_eyeid_diameters(pupil_datum):
        """Extract data for a given pupil datum

        Returns: tuple(eye_id, method, confidence, norm_pos_x, norm_pos_y,
                       diameter_2d, diameter_3d, normal_x, normal_y, normal_z)
        """
        # the 3d eyeball normal lives under circle_3d -> normal in the pupil datum
        normal = pupil_datum.get("circle_3d", {}).get("normal", (0.0, 0.0, 0.0))
        return (
            pupil_datum["id"],
            pupil_datum["method"],
            pupil_datum["confidence"],
            pupil_datum["norm_pos"][0],
            pupil_datum["norm_pos"][1],
            pupil_datum["diameter"],
            pupil_datum.get("diameter_3d", 0.0),
            normal[0],
            normal[1],
            normal[2],
        )
The tutorial code was used as is
To load gaze data, you need to pass topic="gaze" to load_and_yield_data(). Also, you need to adjust the extracted fields accordingly.
Sure, I changed everything so it works with gaze :) I thought you wanted to see the changes I applied to the pupil.pldata reading I used for the tutorial
Please run the tutorial as is with the accompanying data and check if the generated fixations match the original output. Afterward, try again with your own recording.
That works with my pupil data extraction. The only big difference I see between the tutorial's data files and mine is that your pupil.method is '3d c++' while my data has pupil.method = 'pye3d 0.0.4 real-time', so I changed that for my data. Maybe they aren't interchangeable?
All right, I'll try that!
Thanks. Has anyone integrated Pupil Labs APIs with mWorks?
Not to my knowledge
This reminds me - because in VR there is no parallax error arising from an offset between the eyes and the scene camera, I question the importance of calibrating to multiple depths.
The only reason to do that would be if the pupil changed size with fixation depth, but due to the fixed focal distance (because the optics are fixed), there is no change in accommodation with changes in depth.
It is!
On another of my recordings it just worked 🙂 I'm gonna try around for a bit more I guess ^^ Again thank you very much for all of your help!
So, I think you guys are wasting time with the default setting of calibrating to multiple depth planes. My advice? Instead of changing depths, change the clear-color of the background so that you vary gray levels and get a variety of pupil sizes.
I should say that I'm not sure if it has been published that pupil size varies with depth of accommodation and does not vary with fixation depth under vergence/accommodation conflict. This is an assumption of mine, tested by looking at pupils during calibration at different target depths along the same angular direction.
They are just very, very stable.
Well, I should acknowledge that the fixation target's position will change subtly within the screen of the left/right eye as it changes in depth, so it's not a total waste to change depth planes, but it would be more effective to just stick to a single plane and vary spatial position more evenly across the field.
Looks like I may be wrong about the stability of pupil size when changing fixation depth without accommodation! https://link.springer.com/content/pdf/10.1007/s00417-017-3770-2.pdf ...but, I remain skeptical, because that's just not what I see in these eye videos.
Ok, another issue: when switching between pupil plugins (I am developing one), I often get an error: eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace:
Traceback (most recent call last):
File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\launchables\eye.py", line 776, in eye
consume_events_and_render_buffer()
File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\launchables\eye.py", line 272, in consume_events_and_render_buffer
p.gl_display()
File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\shared_modules\pupil_detector_plugins\pye3d_plugin.py", line 193, in gl_display
draw_eyeball_outline(result)
File "C:\Users\gabri\Documents\GitHub\pupil\pupil_src\shared_modules\pupil_detector_plugins\visualizer_2d.py", line 58, in draw_eyeball_outline
if pupil_detection_result_3d["model_confidence"] <= 0.0:
KeyError: 'model_confidence'
Were you switching from ours to your plugin or the other way around? It looks like the pupil datum in question did not have a model_confidence value. Given that you are currently developing the plugin, it is possible that your detector does not create that field yet.
This is solved by erasing pupil player settings.
...however, I can't help but feel this is common enough to be considered a bug. I will post this as an issue...
Thanks for the heads up. I'll double-check.
That was an issue in the past - but I cleaned out the data folder of offline data. I'm surprised that that kind of info is stored in player settings.
The session settings store which plugins were active and their settings. On launch, it reloads the previously active plugins. If the loaded plugin is faulty, it will crash and not write out new session settings. i.e. the next time, the faulty plugin will be loaded again.
Thanks.
I'm guessing that you have hit the nail on the head, so I'll hold off on posting the issue.
Should you run into the issue again after making sure that your detector fills this field, feel free to create a github issue. Since you are running from source, it would also be helpful to print out the pupil_detection_result_3d value to check which datum is causing issues.
Got it running now on the gaze.pldata file as well! One quick question though: the fixation plot looks rather different when I compute it with the pupil- vs with the gaze-data. Is that normal?
Some differences are expected, as the fixations in scene camera space are conceptually different from fixations using the eye model rotation. Also, if you do such comparisons, make sure to align the figure axes (e.g. setting x-axis limits to the same values).
fixations for pupil data
fixations for gaze data
I want to know what eye tracking data from the Pupil Labs tracker looks like. Is it possible to get a sample file through here?
Download a raw sample recording here https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
The simplest way to view and export it is using Pupil Player https://github.com/pupil-labs/pupil/releases/latest
That is a very valid point, thank you. I'll make sure to do that if I decide to put such a comparison in my thesis. For now I just wanted to make sure that the output I produced is reasonable. Also, if I were to use the pupil export plugin, the output I would get should be the one from the gaze data, right? Just asking so that I can be sure that it is reasonable to compare my output with the one from the plugin.
Correct. There are two different ways in which the gaze direction vectors are calculated: either, if you calibrated in 3d, the gaze_point_3d data is used, or, if you calibrated in 2d, the norm_pos is used and unprojected into a direction vector. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L137-L149
hello hello!
I'm currently busy trying to match the gaze data of pupil invisible recordings to video frame data and I'm trying to understand the logged data a little bit better. Could you maybe help me with that?
I've parsed both time files for gaze and video frame values
then I went over all video frame timestamps and looked for the closest matching gaze timestamp
when I then visualize the result, it looks like I am a little ahead on the gaze data
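The matching step itself is just nearest-neighbour on timestamps. My app does the equivalent of this Python sketch (both timestamp arrays assumed sorted and on the same clock, in seconds):

    import numpy as np

    def closest_gaze_indices(frame_ts: np.ndarray, gaze_ts: np.ndarray) -> np.ndarray:
        # For each video frame timestamp, index of the closest gaze timestamp
        right = np.searchsorted(gaze_ts, frame_ts)
        left = np.clip(right - 1, 0, len(gaze_ts) - 1)
        right = np.clip(right, 0, len(gaze_ts) - 1)
        pick_right = np.abs(gaze_ts[right] - frame_ts) < np.abs(gaze_ts[left] - frame_ts)
        return np.where(pick_right, right, left)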
How do you visualize the result?
it's a little app I wrote in unity. could be that I screwed something up in there of course and I'll look at that some more for sure. but just to get a better sense for the logged data
is it normal that I get quite a few extra entries in the gaze timestamp logfile, because they maybe don't finish exactly at the same time?
By logfile, do you mean the .time file?
yeah
maybe 1-3 seconds after my last videoframe timestamp data
This is indeed normal. You can discard the excess timestamps.
alright nice, thanks for the info!
I do think I'm doing the correct thing, but I'm slightly ahead and I don't really have an explanation for that
ahead meaning the gaze values I show for the corresponding closest frame are a few frames ahead of where the companion app shows they should be (and yours is correct too)
So, you are visualizing the data in realtime, correct?
post processing for now, to later add information from other sensor data
just wrote a parser to match the gaze timestamps to video timestamps, let the video play, ask each frame which frame we are at, look up the match, draw the gaze on top
So, you could use Pupil Player to compare the result, correct? Have you tried that? The question would be, which of the sensors is providing the incorrect data.
just a sec, I didn't know about pupil player 🙂
really neat, thanks!
so I was trying to reinvent the wheel, in a shoddier version 🙂
Let us know if Pupil Player is missing anything that you need.
would rectifying the video frames result in gaze points not reflecting what you looked at? or I guess one could rectify those as well
you would have to rectify gaze as well. But that is totally possible.
not that the camera distortion bothers me, it's just something we tried out with a tool we developed in-house
but this is perfect, thank you very much. now I can adapt the format of what I have to match yours, see where I went wrong
Please note that Player does a bunch of timestamp transformations. The data might not be immediately comparable. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_recording/update/invisible.py#L139-L149
another thing that happened is that the original files disappeared or got renamed 🙂
correct. All part of transforming Invisible to Player recording format.
I recommend keeping a backup of the original PI recording.
thanks for the tip!
hoooo, there are even plugins that do all kinds of things, how great is this! 🙂
You can also write your own plugins.
might be difficult to do in this project, but looks more appealing than what I had going so far 🙂
time to find out how long the conversion for the maximum invisible recording length takes on this old machine
98 minutes, ~4gb video
Well, Pupil Player is not optimized for long recordings. You might see serious performance issues with recordings of this length. 🙂
looks just fine, no worries π
oh, just noticed. I used the eye video overlay
I can flip both vertically and horizontally
but 90° is what it's off by, I think?
Correct. Rotation is currently not supported.
but this is great still. already showed a flaw in the way we use some of the external applications
What are you referring to by external applications?
a couple of psych tests that participants had to periodically go through
cognitive tests of a sort, usually on tablets in the field
but where those are positioned matters. If people naturally almost fully close their eyes, it's hard to get good eye tracking data 🙂
moving to a specific frame in these very long recordings gets a little tougher
That is what I was referring to by performance issues.
it's so cool that I can start exporting from a specific frame on
but can I stop that before the end as well?
oh or does it start from the beginning anyways and I misinterpreted
Yes, there are trim marks left and right of the timeline to adjust the export range. You can change the range in the general settings as well
oh perfect, thank you again 🙂
You are welcome. There is more information here as well https://docs.pupil-labs.com/core/software/pupil-player/
right I even had it open in a tab already, got a little excited when I had the player open
Hello! Did the removal of PIL happen in this latest release?
Sorry, I lost sight of that change idea/proposal. Which OS do you use? I could get you a bundle tomorrow without PIL for testing.
Windows 10
@user-3cff0d I have sent you a download link via DM. Please let me know if you run into any issues with this bundle.
That does seem to have solved that problem, thank you!
sorry, I didn't see this in the docs and I'm not sure if it's there, but is there any way to only use the 2d pupil detector (for calibration, data collection, etc)?
Is this about saving computational resources? You can disable the 3d pupil detector via a notification or custom plugin.
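The general mechanism is sending a notification over the network API. Something like the sketch below; note that the exact subject string and payload keys depend on your Capture version, so treat "pupil_detector.set_enabled" and "detector_plugin_class_name" as assumptions to verify against the pupil_detector_plugins source:

    import msgpack
    import zmq

    ctx = zmq.Context()
    remote = ctx.socket(zmq.REQ)
    remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default port

    notification = {
        # Assumed subject/keys - check your Capture version before relying on this
        "subject": "pupil_detector.set_enabled",
        "value": False,
        "detector_plugin_class_name": "Pye3DPlugin",
    }
    remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
    remote.send(msgpack.packb(notification, use_bin_type=True))
    print(remote.recv_string())  # Pupil Remote acknowledges the notification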
What's the name of the 2d choreography that I can put into the calibration settings (Unity)?
The current hmd-eyes version only supports 3d calibration.
is there any way to work around this? the 2d detection for us is really good, but the 3d detection is just really unstable and unreliable
For that, you will have to use hmd-eyes 0.62 or earlier. It is compatible with Pupil Capture v1.23 or lower. Be aware that this will use a dual-monocular gaze mapper instead of a binocular one.
Feel free to share a recording of your 3d calibration with [email removed]. We can have a look and give specific feedback. For example, it could be possible that the 3d eye model is not well fit. That would explain an unstable gaze signal.
where can i find installations for these?
also, does the unity demo save the calibration data in a specific folder, or do i have to write code to output it?
nvm, I found it, but as for the calibration data, still need to find out about that
I do not remember the details for hmd-eyes <v1.0 but Capture should be creating recordings including gaze data. Not sure if it stores calibration parameters.
ah, i have to use capture; i've been using service with unity
another question: would i be able to receive 2d pupil detection data? if so, i might just try to do my own calibration in unity using raw 2d data
You definitely can, from both Capture and Service 🙂
i asked before if i should subscribe to "gaze.2d", but it looks like i should subscribe to "pupil", right? or use a PupilListener to parse pupilData whenever it comes in and extract the position in camera 3d space
Pupil and gaze data are in different coordinate systems. But you seem to be interested in gaze data.
Has anyone here worked with the application AttentionVisualizer that can stream eye tracking data from Invisible and Core?
so, how would i go about getting gaze data from the 2d model?
Subscribing to gaze should work
I'm having some trouble with this; in the GazeListener.cs script, it subscribes to gaze.3d; so would I just subscribe to gaze.2d instead?
You should be using the older hmd-eyes version which does not subscribe to 3d gaze
Oh, is there no way to get 2D data in the newer version?
2d pupil data yes, 2d gaze data no.
Hi, we have recorded 2D data with Pupil Labs Core. How can we extract pupil diameter in mm, since we do not have perspective-corrected data? Is there any Python code I can use for diameter extraction?
Hi @user-ef3ca7. Pupil Core records pupil diameter both in pixels (observed in the image) and millimetres (provided by the 3d eye model). You can access the latter in the 'pupil_positions.csv' export, under the heading 'diameter_3d' (this is corrected for perspective).
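For example, with pandas (a minimal sketch; the column names below match the Player raw data export):

    import pandas as pd

    pupil = pd.read_csv("exports/000/pupil_positions.csv")
    pupil_3d = pupil[pupil["method"].str.contains("3d")]       # rows from the 3d detector
    diameter_mm = pupil_3d[["pupil_timestamp", "eye_id", "diameter_3d"]]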
Thank you for your answer. It says "is not corrected for perspective" for the 2D diameter. If I want to use the 2D diameter, do I need to consider some other parameters (e.g. being off the distortion axis and so on), or can I use it as is?
If you still have the raw recording you can run the post-hoc pupil detection in Player to get the 3d pupil data
I must have eye0.mp4 and eye1.mp4 for post-hoc. Correct?
And the info.json etc
Because eye0.mp4 and eye1.mp4 weren't recorded, the exact size of the pupil in mm must be forgotten. Correct?
You can still use https://github.com/pupil-labs/pye3d-detector/ manually and push your 2d data in via a python script.
Thank you. Is there any tutorial or example of what information I should push to the pye3d-detector?
You need to create 2d datums based on the Player export and pass them to the detector, like here https://github.com/pupil-labs/pye3d-detector/blob/create-integration-tests/tests/integration/test_synthetic_metrics.py#L287 And yes, you will also need the eye videos as I just noticed.
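Very roughly, the loop looks like this (a sketch based on the pye3d tests; constructor arguments and the exact 2d datum fields may differ between pye3d versions, and eye_frames / datums_2d / the intrinsics values below are placeholders for your own video frames, exported 2d data, and eye camera parameters):

    from pye3d.detector_3d import CameraModel, Detector3D

    # Eye camera intrinsics: focal length in pixels and frame resolution (placeholders)
    camera = CameraModel(focal_length=283.0, resolution=(192, 192))
    detector = Detector3D(camera=camera)

    for gray_frame, datum_2d in zip(eye_frames, datums_2d):
        # datum_2d needs at least "ellipse" (center/axes/angle in pixels),
        # "confidence" and "timestamp", matching the 2d detector output
        result_3d = detector.update_and_detect(datum_2d, gray_frame)
        print(datum_2d["timestamp"], result_3d["diameter_3d"])  # diameter in mm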
I have another question regarding the export of data. Is there a way to call the export function from my own code without needing to use the GUI (Pupil Player)? From exporter.py:

    def export(rec_dir, user_dir, min_data_confidence, start_frame=None, end_frame=None,
               plugin_initializers=(), out_file_path=None, pre_computed={}):
What I now noticed is that I need these corrected previous gaze points, which are computed in the recent_events function of vis_scan_path.py, if I want to draw lines between the gaze points from the last few milliseconds for visualisation purposes.
I think the plugin caches these values in the offline_data folder. You might be able to read it directly from disk.