@user-911c89 This integration is still in the planning phase. Unfortunately, I cannot provide further assistance with this issue yet.
Hello, I guess my question is a common one. I am currently using the Pupil hardware for a study where people are looking at things where it is impossible to use the markers. Previously I used the SMI semantic gaze mapping function, which allows you to manually code which part of a reference image people are looking at in order to calculate dwell time etc. How do people solve this issue with Pupil Labs eye tracking data?
Hello, I'm an interaction design student at NTNU (Norway). We are considering buying a few Pupil eye trackers to replace some aging SMI glasses. One of the projects that would justify the expense involves using the world camera as a luminance meter, to eventually allow using pupil size as a metric under different lighting conditions. In order to use a camera as a luminance meter (cd/m2) it is necessary to either fix the exposure or save the world camera's parameters as they change during the recording (f-stop, ISO, exposure time). Is any of that data available through your software? Thanks
@user-62abb8 fixing the exposure time is possible through software
Hi @papr, have you uploaded the final version of the blink detector? If you remember, I found a bug that forces you to restart the script before restarting the capture. Discord deleted my last account, so I will use this one.
@user-e194b8 Hey, yes, I do. I have just updated the gist with the latest version
That's perfect! Another question: could we know the latency between the camera images being received by the program and the UDP signal being sent to Unity? Is pupil detection faster than blink detection? Thanks!
@user-e194b8 blink detection is based on the detected pupil data. Therefore, pupil detection is faster, yes.
The delay depends on different factors, e.g. which plugins are active, your network connection, etc. You can measure it yourself by syncing time between Capture and Unity and measuring the time difference between the pupil data timestamps and their reception in Unity.
Can I do this with the Time sync plugin?
Yes
Reading it again, I had not understood what you meant. That is just the difference between the timestamp and the Unity time, which I already have. The time I was asking about is the period between reading the camera buffer (maybe that is the timestamp) and Pupil analyzing the frame to obtain confidence and so on.
Hey! When installing the latest version of everything from the developer docs, Pupil Capture opens with a black window. The UI only shows up when resizing the window. Everything is also really slow; macOS shows the spinning rainbow ball when clicking anything. But when you select "Test image" in the backend manager, the UI responds fast and no rainbow balls are shown. I have no idea where that comes from and it looks like nothing else is broken, but I wanted to let you know that there is an issue.
@user-06a050 Thank you, I will look into that. Which mac do you use?
macOS 10.14.1, MacBook Pro Retina
Ok, thank you
*Activating "Test image" and fake capture
I'll create an issue on GitHub, maybe I'm also able to create a video from it
ok
@papr One question about the new annotation sending procedure: Is it right that I must send my custom annotation with the topic "annotation", so to speak "annotation.blabla", or do I have to use the notify topic with the subject "annotation", as in "notify.annotation.blabla"? The latter works, but is not saved to annotation.pldata (and my issue with high frequency annotations persists), while the former doesn't work; the error is: "Req.XSend - cannot send another request". Maybe I have to start a new publisher socket? But how does Pupil know about that? I'm confused
Annotation Capture Plugin IS running
Are you sending the data to Pupil Remote or the IPC Pub Port?
pupil remote
The first way would be correct, but you need to 1. request the PUB_PORT from Pupil Remote, 2. create a pub/push socket connected to this port, 3. send the message to that socket
@user-29e10a Please checkout the new example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
We will probably adapt Pupil Remote to accept these kinds of messages directly, so that a pub port is not necessary anymore.
Do not forget to call recv on the Pupil Remote socket after sending a message. The REQ-REP pattern requires you to receive the server's response.
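Roughly, the steps above look like this in Python (a minimal sketch based on the linked example; the "blabla" label and the local address are placeholders, and in practice the timestamp should be converted to Pupil's clock):
```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# 1. Request the PUB_PORT from Pupil Remote (REQ-REP: always recv the reply)
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()

# 2. Create a PUB socket connected to that port
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:{}".format(pub_port))
time.sleep(1.0)  # give the subscription a moment to register

# 3. Send the annotation to that socket, using the "annotation." topic prefix
annotation = {
    "topic": "annotation.blabla",  # placeholder custom label
    "label": "blabla",
    "timestamp": time.time(),      # should be in Pupil time in practice
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```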
thank you, I will try that! 😃
Hi, I just tried version 1.9 and it seems like you changed the data format sent via Pupil Remote:
1.8: {"topic":"gaze.3d.0.","eye_centers_3d":{0:[x,y,z]},"gaze_normals_3d":{0:[x,y,z]},...
1.9: {"topic":"gaze.3d.0.","eye_center_3d":[x,y,z],"gaze_normal_3d":[x,y,z],...
In the 1.9 structure, eye_center_3d and gaze_normal_3d are no longer dictionaries but simple arrays, containing exactly one vector instead of one or two.
Also, in 1.8 it was possible to get gaze data packages like gaze.3d.01., which was a package containing data from both eyes. Is this functionality gone, and is the aforementioned dictionary structure no longer necessary?
If so, you should definitely report such changes in the developer notes of your release message, as it completely broke our pupil integration O_o
@user-82e7ab Are you using macos?
You are receiving monocular data. It is expected that the fields are different from binocular data.
what is macos?
macOS, Apple's operating system
ah sry, no it's Windows
(10)
Please make a recording and export the pupil data via the raw data exporter. Let's check if the eye cam clocks are in sync
Alternatively, it is possible that one of the pupil detections yields low confidence. This also results in monocularly mapped data.
ok, I did a recording via capture
where do I find the raw data exporter?
Pupil Capture started the Binocular 3D gaze mapper after calibration - and it's still running - not sure if this can still send monocular data?!
OK, maybe this just did not happen to me before (receiving monocular data), but if this can happen, I'll add this to our receiver.
You need to open the recording with Player. The Raw Data Exporter will be loaded by default and will export the data to csv.
Yes, you should definitely support receiving of monocular data
Hey. Trying to integrate Pupil with Unity. Got Capture running, imported the HMD_VR asset. Opened the 3D calib demo; it successfully connects to Pupil, and then throws NullReferenceExceptions when PupilGazeTracker tries to instantiate a "CalibrationPointExtendPreview" from Resources, which isn't in any Resources folder.
Is there any complete documentation / synopsis / example output of what the sent gaze data can look like?
How do I know if a received message contains monocular or binocular data? (The topic seems to be the same)
Or does 1.9 only send binocular data if the topic (gaze.3d.) ends with 01. instead of just 0. or 1.?
I've exported the raw data - should I just drop it here, or upload it somewhere else?
You can put it in a gist.github.com
Yes, .01. is binocular, while the rest is monocular. Please let us know if you find any inconsistencies.
OK, then could you add this to the release notes?
Because in 1.8 the data was always stored in binocular format, even if the topic was .0. or .1.
@user-82e7ab https://github.com/pupil-labs/pupil/pull/1291
Thx, but I was thinking of the "Developer Notes" section here https://github.com/pupil-labs/pupil/releases/tag/v1.9
The release notes only mentioned #1286. You are right that we should have mentioned this PR explicitly.
because this is the place where I would expect such API changes to be listed
but that's just an idea - you're always super fast with giving us support - so it's no big problem 😉
I will add it to the release notes as soon as I am in the office. I 100% agree with you that this should have been mentioned.
thx
oh, you're even working before arriving at the office .. now I feel bad .. thx again ; )
just a quick wrap-up for me - from 1.9 on I can rely on the topic to tell monocular from binocular gaze data packages, right?
@user-82e7ab yes, you should be able to rely on that. This was the case for 1.8, too... But it was buggy. ;)
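For anyone else reading along, a tiny sketch of that check (my own illustration, not an official API):
```python
def is_binocular(gaze_datum):
    # "gaze.3d.01." is binocular; "gaze.3d.0." / "gaze.3d.1." are monocular
    return gaze_datum["topic"].endswith(".01.")
```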
@user-82e7ab I created an "API changes" label to keep track of API changes within PRs. This should ensure an exhaustive list of all API changes for the next release notes.
For reference: https://github.com/pupil-labs/pupil/issues?utf8=%E2%9C%93&q=label%3A%22API+changes%22
perfect! thx @papr
Question 1: May I ask what the maturity, solver fit, confidence, performance, and perf. Grad values mean in the 3D eye model?
@papr
Question 2: The green circle is the projected sphere in the left image; however, the corresponding 3D eye model is shown in the right image, and obviously they don't match. In the left image the pupil is inside the projected green circle, but in the right image the pupil is not inside the projected sphere. May I ask why? How is the green circle obtained? @papr
@user-b91aa6 Q1: These are displayed for debugging and represent internal parameters/intermediate results of the 3D model. They are only partially meaningful. See this paper for details on how the 3D model works: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
Q2: The debug window displays a 3D scene. You can click+drag within the window to change the camera angle. You will see that the sphere is positioned behind the eye image plane. The perspective makes the sphere look smaller from this angle.
To Question 1: Thank you very much for your reply. May I ask you to explain them? I have read the paper and get the idea of how the 3D eye model works, but my current work requires me to know when the 3D eye model estimation is accurate. So I need to know the meaning of these metrics. @papr
Unfortunately, there is no exact way to tell if the model is accurate.
maturity: The model fits best if there are observations for many different pupil angles. The more angles there are, the higher the maturity.
confidence: a mixture of the 2D datum confidence and how well the 2D datum fits the current model.
solver fit: a measure of how well the model fits the data it was fitted on (the training error).
performance: a running average of the confidence values.
For more details, please have a look at the code.
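Just to illustrate the "running average" idea behind performance (this is only a sketch of the concept, not the actual detector code, which may use a different window or decay):
```python
def update_performance(performance, confidence, alpha=0.05):
    # exponential moving average of incoming pupil confidence values
    return (1.0 - alpha) * performance + alpha * confidence
```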
2D datum means the 2D pupil position?
@papr
Correct
A 3d datum is always based on a 2d datum.
Thank you very much. To Question 2: The origin of the coordinate system should be the eye-tracking camera, right? The eye image is what the eye-tracking camera sees. So the blue 3D sphere we see on the eye image should be the projected sphere, right?
You are correct regarding the coordinate origin. No, the blue sphere is not being projected in this view. It is the visualization of the actual 3D model sphere. This sphere is used to generate the projected green circle in the eye window.
So if the view camera is at the same position as the eye-tracking camera, then the sphere will match the projected sphere on the eye image, right?
Mmh, yes, this sounds about right. I now understand the issue. You are right to expect both spheres to be equally big. My guess is that the projected sphere (green) is somehow differently calculated/displayed. I will look into that.
Can you reply to me after you check this? I need this in my current project. Thank you very much.
Sure
Thanks a lot
@user-b91aa6 The issue is that we are using the focal length of the 120Hz cams for the projection. This is a bug. We should be using the focal length for the 200Hz cams. My colleague will create a Github issue for that.
But the eye-tracking cameras for the Vive that I use are 120 Hz, and focal_length = 620 in the code. May I ask what the right focal length is? Which one is wrong? Is the green circle in the eye image wrong?
@papr
Mmh, I will have to discuss with my colleagues but I am pretty sure that the 3d debug visualizer is buggy.
Thank you very much.@papr
What does the perf. Grad mean in the 3D eye model?
@papr
@user-b91aa6 it's just the gradient of the performance within one iteration
Thanks. Have you found out what the problem with the 3D eye visualizer is?
@papr
Not yet
Hey, I am trying to check the delay for the acquisition of pupil data. I am computing the difference between a counter that I start when I connect Unity to Pupil Capture and the PupilTresholds. Results are around 110 ms, could that be correct? In Pupil Capture (latest version) I have these plugins active: Pupil Remote, Blink Detection (it works perfectly now) and Frame Publisher (which starts when I launch Unity). Thanks!
@user-e194b8 we typically measure a 10 ms delay from the start of camera exposure until the frame is in Pupil Capture, plus 5 ms for processing; everything else is added outside of Pupil Capture. Maybe Unity3D introduces a lag?
It could be, I am trying several approaches to reduce it
Have you measured the delay with your Unity demos?
How do you measure the delay? Do you synchronize clocks? If yes, how?
I record the first pupil.0 timestamp = time0. When I do this I start a counter in the Update. I subtract this time0 from all the future timestamps. Then it's only a subtraction between each timestamp and the current Update counter.
Approximately
Maybe this is wrong
Thanks for the picture, this helped a lot! I am afraid that you might not be calculating what you expect: TS - time0 is the time interval between the recording of two frames. This should be equal to 1 / <fps>. Since the processing time and network transmission time vary, the "delay" value varies as well.
I think I understand you; the Update timer is also delayed by the processing and network times...
To calculate the delay between frame creation and data reception, I would recommend: - synchronizing Pupil Capture to Unix time using this plugin https://gist.github.com/papr/45ec8a48d83338d007c1a5d49a35a966 - taking the Unix timestamp at the time of arrival - subtracting these two Unix timestamps to calculate the total delay
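A rough sketch of that measurement (it assumes Capture's clock has already been set to Unix time with the plugin above and that Pupil Remote runs on the default local port):
```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# request the SUB port from Pupil Remote
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# subscribe to pupil data
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    arrival = time.time()                 # Unix time at reception
    delay = arrival - datum["timestamp"]  # both are Unix time after syncing
    print("Delay:", delay)
```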
Perfect! I am going to try it
These are a few example values I get when calculating the delay using a script running on the same computer as Capture:
Delay: 0.017004728317260742
Delay: 0.014100313186645508
Delay: 0.01119089126586914
Delay: 0.013366222381591797
Awesome! I get a delay of 25.2952328938744 ms, which is enough for our purpose! Thanks a lot!
👍
Hello, I can't seem to get data off of Pupil Mobile. I drag the file onto Pupil Player but it displays a grey screen saying "this will take a while". I left it up for hours with no change. How long should this take?
@user-d45407 please make sure that there are no umlauts (non-ASCII characters) in your path. Also, what does the log file in your pupil settings folder say?
2018-11-15 09:49:24,850 - MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
2018-11-15 09:49:27,434 - player - [INFO] launchables.player: Starting new session with 'C:\Users\tac0018\recordings\20181114103343557'
2018-11-15 09:49:27,447 - player - [INFO] player_methods: Updating meta info
2018-11-15 09:49:27,448 - player - [INFO] player_methods: Checking for world-less recording
2018-11-15 09:49:27,449 - player - [ERROR] launchables.player: Could not generate world timestamps from eye timestamps. This is an invalid recording.
^from the settings.
@user-d45407 it looks like the recording is not complete. Did you transfer this from Pupil Mobile?
Please restart Android before transferring; we found that otherwise not all files were shown in Windows.
please then re-transfer the files and try again.
will do!
it did the same thing. I am going to try again with a different recording and let you know if that changes anything.
So I think I see what the problem is. The phone I am using (Nexus 5) is saving all of the data in different folders.
@user-d45407 please then make sure to update Pupil Mobile!
Thanks for your help!
It's working now!
Great to hear!
hi, as of release 1.8 all messages (sent via zmq/msgpack) are required to have a topic field, so all sent messages contain a topic. My question is what the subject field is for, as it seems to always hold exactly the same data/string?
Hi @user-82e7ab
Notifications are special messages. Their topic has this strict format: notify.<notification subject>
Notifications are passed to all active plugins via the on_notify() callback. Plugins also have the possibility to send notifications via notify_all().
So the answer to your question is that, even if your argument regarding redundant information is valid, we still need it for the app-internal plugin API.
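As an illustration (a sketch following the pupil-helpers convention, not a new API), a notification sent through Pupil Remote carries the subject both in the message topic and in the payload:
```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default local Pupil Remote port

def notify(notification):
    # topic is "notify." + subject; the payload repeats subject (and topic)
    topic = "notify." + notification["subject"]
    notification["topic"] = topic
    remote.send_string(topic, flags=zmq.SNDMORE)
    remote.send(msgpack.packb(notification, use_bin_type=True))
    return remote.recv_string()  # REQ-REP: always receive the reply

# e.g. start eye process 0 (extra payload fields here are illustrative)
notify({"subject": "eye_process.should_start.0", "eye_id": 0, "args": {}})
```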
ok, so is it valid to say that if I'm sending requests to Pupil, I'm safe always filling both topic and subject with the same information (with the notify. prefix for topic)?
I'm asking because, e.g., in main.py the topic is checked for eye_process.should_start, while in eye.py the subject is checked for eye_process.should_stop
Notifications should always have both, yes
ok, thx
Are you planning to "merge" these two at some point?
Because, as you already mentioned, it's quite redundant data: subject, topic, and the topic again as the first part of a multipart message
We will consider it.
I'm using a single eye/camera headset to develop gaze-controlled actions. The surface is defined as the full screen of the monitor. Two questions: Q1. Regardless of calibration, there is always an offset between the observed point and the gaze position received from Pupil. Is there any ready-to-use solution to remove this offset? Q2. I'm writing a function to minimize such an offset based on known calibration points. While debugging some strange issues with that, I've just discovered that in some situations (not identified yet) the gaze position may not be normalized (i.e. out of the range [0..1]). What is the meaning of such non-normalized positions? Should I ignore them?
Question 1: When the 3D eye model is projected onto the eye image, is the eye-tracking camera treated as a pinhole camera? I only see the focal length parameter of the camera.
Question 2: The focal length of the eye-tracking camera in the Vive can be adjusted. How do you know the focal length? @papr
@user-cde59c 1) This offset is an estimation error, measured as accuracy; see the Accuracy Visualizer for details. This offset is only rarely constant. Reducing the offset means improving the calibration. 2) Not all gaze points are necessarily within the field of view of the camera. Therefore it is technically valid to have negative/exceeding values. These are often outliers though. Keep an eye on the confidence values; these extreme gaze positions often have low confidence.
@user-b91aa6 I saw your questions the first time that you posted them. I will answer them when I know the answer to them. Please refrain from reposting your questions. Next time, I will try to acknowledge the question earlier, such that you know, that I have read it.
@user-b91aa6 Q1: I don't know. You probably know the code for the 3d visualizer better than me. Q2: I think this is just an estimation and is kept constant.
This visualizer is really just for debugging and most likely buggy. Unfortunately, I do not have the time to look into this problem.
Thank you very much.
When the 2D pupil is unprojected into 3D space, is the eye-tracking camera regarded as a pinhole camera? Why can it be regarded as a pinhole camera?
@papr
They are considered pinhole cameras with distortion. See https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py
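For intuition only (not the camera_models.py implementation, and ignoring distortion), unprojecting a pixel under a pinhole model simply turns it into a viewing ray:
```python
import numpy as np

def unproject_pixel(x, y, focal_length, cx, cy):
    # pinhole model: pixel -> normalized 3D viewing ray in camera coordinates
    ray = np.array([(x - cx) / focal_length, (y - cy) / focal_length, 1.0])
    return ray / np.linalg.norm(ray)
```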
Thank you very much
Hello. I'm struggling with surface tracking. I've printed out all the markers and put them around a monitor screen, but it seems that they aren't detected at all. What can I do?
@user-96755f can you send a quick image of your setup?
https://imgur.com/a/nmWSGXJ this is how I setup the markers around the monitor
I haven't done any calibration at all right now. I'm working on 2 different PCs: on one I'm running Pupil, on the other I will give the stimulus trigger.
I'm currently using the DIY bundle, if that helps.
untick the inverted marker option
@user-96755f I had a similar problem. You need to print the markers again and cut them out with white space (0.5 cm) around each marker.
@user-8be7cd ok, this works, but then it starts to detect the markers on the screen, my bad. @user-cde59c So I will try this.
So, as a short step-by-step guide, what would it be? What should I do first after running Pupil? Go immediately to the Surface Tracker? When should I calibrate, before or after?
@user-96755f surface tracking is independent of calibration. The issue is that you need a white border as mentioned by @user-cde59c
@papr surface detection works now thanks to the newly printed markers. But what I'm asking is: do I need to calibrate gaze? Because where my subject is looking is not where the dots are. So do I need to do the calibration after surface tracking or before?
@user-96755f you definitely need to calibrate. Surface tracking is just an additional step that maps calibrated gaze from the scene camera coordinate system into the surface coordinate system
What kind of calibration do you suggest? We are working on 2 different PCs, so no 2-screen setup. I'm thinking about printing the big marker and then pasting it on the wall behind the screen where the markers are. Is that a good idea?
@user-96755f just to let you know I'm just working exactly on the same problem 😃
@user-cde59c if you are going to solve it before me, share your findings! I will do the same as well
@user-96755f I can share my conclusions as they stand at the moment:
1. You need to calibrate the headset with Pupil Player, as it needs to be calibrated;
2. The surface (= the monitor) takes up only part of the world camera image;
3. In your or my solution we are interested in receiving data (gaze positions) from the surface. As I understand it, those data are normalized to the surface edges (I kindly ask the Pupil staff to confirm or deny);
4. So, the solution is to run your own calibration procedure on the defined surface. During it you know the positions of the calibration points and the corresponding gaze positions. This allows you to calculate the average offset for each calibration point;
5. Finally you need to write a function to minimize the offset for any received gaze point, which is simply a weighted average of the offsets collected during your surface calibration. The tricky thing is to calculate the weights 😃
Thank you so much, but that seems too much for my skills. I'm not really good at programming, so I will find a more comfortable way, I think. Maybe by changing something in the setup. Another question: is there a minimum recording time for developing a good heatmap? Right now I'm working on short videos but they all look like they have a red filter. Sorry for all this stuff.
@user-96755f please set X Size and Y Size of each surface to see heatmaps
ok, X size and Y size, they should be the real size, right?
Yes, real size is ok. Or just proportions that approximate the proportions of the surface
If using Player you will need to click the recalculate yase distributions button after changing the surface size
X, Y size in what? cm or px?
and where is this 'yase distributions button'?
@wrp could you describe what the yase distribution button is, please? I'm working with surfaces too.
I think he meant the (Re-)calculate Gaze Distribution button
sounds more readable, but I still don't know where it is.
It is in the right-hand menu of the Offline Surface Tracker, above the surface submenus.
There is nothing like that in my Pupil Player. I only have the Surface Tracker menu. Nothing starts with 'Offline'.
@user-cde59c Ah, I think you are using Pupil Capture. The online surface tracker does not have such a button.
Thank you all! Everything is working fine!
(thanks @papr for clarifying - was typing on mobile and made some typos)
I just need one clarification: if I want to expose my subject to a series of images, how can I separate the heatmaps and see one for each image?
@user-96755f you can "edit" each surface to change/offset the boundary from the markers.
I will work on a slide show on screen
@user-96755f You could integrate the markers into the slide show. Just show different markers for each slide and define a surface for each.
Ok, just a little work on Photoshop!
Hey everyone, does someone know in which method/class the timestamps (world camera, eye cameras) are generated? Regards, Steve
@user-4a7dd2 depends on the selected backend
In case of the uvc/local usb cameras: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L605
good afternoon
I would like to know if someone has already used the mouse_controll script to control the mouse with their eyes, and if so, whether the performance was good?
@papr Thank you
So these are some tasks I would like to do with the eye tracker
what software would you recommend to do this with the pupillab?
hey, I'm having trouble building boost.python, can anyone help me?
hi, I want to receive world video frames. I tried to use the "Frame Publisher" plugin in Pupil Capture and recv_world_video_frames.py, but I only received eye video frames. Is there anything else I need to do?
https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
@user-babd94 hi, what version of Capture do you use?
@papr I use version: 1.8.26
@user-babd94 there was a frame publisher related bug in that version. Please update Capture.
@papr thank you! Receiving the world video frames succeeded!
Nice!
Is it possible to acquire images from the cameras into a .NET environment? In particular, I am looking to use the Pupil Labs goggles to do eye tracking on software we have developed in C#. Has anyone tried this before? If not, would there be anything preventing this from theoretically working? Any issues relating to drivers?
@user-87fec3 I am trying to find good software to develop with the Pupil tracker. I'm sure there's something out there and it doesn't make much sense to build something from scratch.
right now I'm using OkazoLab's EventIDE
@user-87fec3 you can subscribe to the IPC over the network and receive messages - and also video frames - here is an example of how to subscribe to Pupil and get world frames (reference implementation in Python), but you can adapt it for C# - https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
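Condensed, the pattern in that script looks roughly like this (a sketch; it assumes the Frame Publisher plugin is active in Capture with BGR format and that the raw image bytes arrive as an extra message part), and it should map onto a .NET ZeroMQ + MessagePack library:
```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# ask Pupil Remote for the subscription port
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# subscribe to world frames
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *extra = sub.recv_multipart()
    meta = msgpack.unpackb(payload, raw=False)
    # raw image bytes are appended as additional message part(s)
    img = np.frombuffer(extra[0], dtype=np.uint8)
    img = img.reshape(meta["height"], meta["width"], 3)  # BGR, assumed format
```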
@user-21d960 I am not familiar with/don't have any experience with Okazolabs - maybe there are some people in the community that use other experiment building tools
@wrp thanks for the reply. I wanted to confirm -- by IPC do you mean interprocess communication? If so, I am wondering if there would be any limitation using this IPC method? For example, let's say I want to record at 120 fps or even 200 fps. I could be wrong, but I get the sense that acquiring images over IPC could run into a bottleneck?
Also thanks for providing the example python code. Perhaps I shall give this a test as proof of concept. Just wanted to confirm how I should have the system set up to run. Should I be running Pupil Capture application while running this script? Or should I be using a different configuration that uses the Pupil Labs source code?
@user-87fec3 just run Pupil Capture as usual while running the script.
Hi there, I'm going to analyze 60 AOI sequences with one device. Can you represent the result of accumulating the 60 datasets, not one image, as a single picture (AOI sequence)?
60 people
hey all! I am trying to develop an application on Microsoft's HoloLens and integrate Pupil to get the user's fixation count at the end of the demo. Is there any way to listen for an event that will return the fixation count? thank you
hi everybody! Does iMotions support recording data from Pupil? Does anyone have experience with Pupil and iMotions?
thank you😇
@user-1bcd3e hi, yes, we have a plugin that exports recordings in a format that is compatible with iMotions
I am getting this error when running run_capture.bat. It also happens when I run service and player. Can someone help me resolve this issue, please?
@user-6c7426 The docs recommend compiling the pupil detector manually in this case, since that gives you better feedback. See the docs for details. Generally, we recommend running the bundled application though.
@papr The thing is, when I compile the pupil detector manually it still gives me the same error for cl.exe in Visual Studio's MSVC.
Mmh, this is unfortunate to hear. I don't have any experience with the windows dev environment though, sorry.
ok thank you
Hi, thanks for your software. I'm trying to receive the world video, like @ papr and @user-babd94 were discussing earlier here. However, if I add the file below to pupil_capture_settings/plugins/ in v1.9.7 on Ubuntu 18.04, Capture doesn't start anymore. The last thing capture.log says is: [DEBUG] plugin: Scanning: recv_world_video_frames.py. Any ideas? https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
@user-fbbb29 this script is meant to be run independently - it is not a plugin. So keep the Pupil Capture app running and then execute/run the Python script from a terminal/IDE.
Thanks!!!
Sorry for reposting, but can someone who has had a similar experience with this error help? Thank you again!
@wrp Between using IPC and recording video using Pupil Capture, I am wondering which is the more feasible option. For saved video recordings, are the recordings saved at full resolution and frame rate? For example, if I acquired the data at 120 fps or 200 fps at the respective resolutions, is that how the video files would be saved? Or is it a slower playback at lower resolution?
@user-6c7426 looks like you are running from source on Windows - the pupil detectors and calibration routines are not being built. Please try to build these first prior to running capture.
@wrp I tried to build them prior but still got the same error about cl.exe
@user-6c7426 please see earlier conversation in this channel regarding compiling libs on windows
@wrp ok thank you
@user-87fec3 please see the docs section on timestamps: https://docs.pupil-labs.com/#data-format
Timestamp files ensure accurate playback rates.
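For example, you can sanity-check the effective frame rate of a recording from its timestamps file (world_timestamps.npy in the recording folder, per the data format docs):
```python
import numpy as np

ts = np.load("world_timestamps.npy")  # path inside the recording folder
print("frames:", len(ts))
print("mean fps:", 1.0 / np.diff(ts).mean())
```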
@wrp Thank you very much. I managed to resolve my issue by using this suggestion: https://discordapp.com/channels/285728493612957698/446977689690177536/500182594235924491