Hello, I am trying to create annotations via Pupil Remote, sending the annotations from MATLAB via UDP (pupil middleman method). I have also tried the MATLAB helpers from GitHub. However, the annotations I have sent do not appear anywhere in the data. The only way that I can see annotations is if I manually add them via the Capture GUI. I am using the MATLAB helpers from GitHub on a Windows 10 machine with version 1.21.5 of Capture.
@user-66a3ee hey 👋 Which topic do you use?
annotation is the subject, right? The topic would then be notify.annotation?
in this case, you send annotations as "notifications" which are a special case of messages
Sending annotations as notifications has been removed. Annotations have their own topic now
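For reference, here is a minimal Python sketch of how an annotation can be published over the IPC backbone (the same idea applies to the MATLAB helpers). The label and the default port 50020 are just example values, and I think the Annotation plugin needs to be enabled in Capture for the annotations to be recorded:

import time
import zmq
import msgpack

ctx = zmq.Context.instance()

# Ask Pupil Remote (default port 50020) for the PUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

# Query the current Pupil time so the annotation timestamp matches Capture's clock
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB/SUB connection a moment before sending

annotation = {
    "topic": "annotation",     # annotations have their own topic now
    "label": "trial_start",    # example label - shows up in the annotation export
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_multipart([annotation["topic"].encode(), msgpack.packb(annotation, use_bin_type=True)])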
I just checked the matlab pupil helpers and there is now an annotations function. I will try this out. Thanks for your help!
Hi, I'm working on offline calibration (based on calibrate_and_map in gaze_producers of the old API). I saw the 3 steps in a previous message and was wondering where I can find the functions that call the controllers in gaze_from_offline_calibration?
Also, is there a way to get the gaze positions without calibrating?
Hi! (not sure if this should be in here or 🥽 core-xr). I'm trying to update the Unity plugin with the latest version of hmd-eyes. The data format on the pupil developer guide site (https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format) and the actual data class in hmd-eyes (https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/PupilData.cs) appear to differ in some property names. (for instance, the guide calls one variable "norm_pos", while the class calls it "NormPos"). I'm not sure if this is a known discrepancy or something I'm misunderstanding -- but I just wanted to see if anyone knew about this!
Never mind -- I dug a little deeper and found that the dictionary key used by the data class is 'norm_pos', but it is accessed via the NormPos property. Thanks!
Hi there! I am trying to load the gaze.pldata using your api. Could you please explain what each of the things in data mean? It's not clear to me where exactly there is the mapped x and y gaze coordinates.
@user-ddc8dd You might be interested in these functions: - Setup: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/gaze_from_offline_calibration.py#L37-L38 - Reference marker detection + calibration + mapping https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/controller/calculate_all_controller.py#L30
Regarding gaze without calibration I would recommend to have a look at https://pupil-labs.com/products/invisible/
Hi @user-2be752, by "each of the things in data" I assume you are using file_methods.load_pldata_file() and/or file_methods.PLData, correct? The data attribute is a deque (double-ended queue) of Serialized_Dict objects. The Serialized_Dict class handles deserialization of the underlying msgpack data. This is useful because it only deserializes on demand. You can access it like any other dictionary. The keys normally are: 'topic', 'eye_centers_3d', 'gaze_normals_3d', 'gaze_point_3d', 'confidence', 'timestamp', 'base_data', 'norm_pos'.
By "mapped x and y gaze coordinates" I assume you want to access the norm_pos key, which stores the gaze position in coordinates relative to the world image (from 0.0 to 1.0 in x and y). Assuming that data is the deque of Serialized_Dict instances you get from load_pldata_file, you can get e.g. all gaze coordinates with:

for datum in data:
    time = datum["timestamp"]
    norm_pos = datum["norm_pos"]
    print(f"{time} -> {norm_pos}")
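And just to sketch how you might get that data deque in the first place (assuming you run this next to the Pupil source so that file_methods is importable, and that recording_dir points at your recording folder):

from file_methods import load_pldata_file

recording_dir = "/path/to/recording"            # adjust to your recording folder
gaze = load_pldata_file(recording_dir, "gaze")  # reads gaze.pldata + gaze_timestamps.npy
data = gaze.data                                # deque of Serialized_Dict objects
timestamps = gaze.timestamps                    # matching Pupil timestamps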
Hi @user-c5fb8b, yes, exactly what I needed to know, thank you very much!! I assume this gaze is already computed from the calibration I did during the recording, so no need to recalibrate, right?
@user-2be752 Correct!
Hi there - I'm developing a plugin to help with handheld target calibration by simply plotting a circle in each of the 9 locations for 9-point calibration. I'm using draw_points_norm to make the circles, but am running into the problem that the circles' sizes stay constant regardless of window size. I found a related GitHub issue with cygl (https://github.com/pupil-labs/cygl/issues/9) from @papr , but it is now closed. Any thoughts? Thanks v. much!
Hi guys, I'm currently trying to analyse VOR (vestibular-induced nystagmus of the eye) using the PL system. While doing recordings I noticed that my eye model kept 'shrinking' (the pupil fit remained accurate though) from a normal eye fit to a fit of just the iris, resulting in an amplitude scaling of the supposed eye movements. So what I would like to do is an offline check of my eye model, to visually check when it is accurately tracking the eye versus when it is not (so I can discard false data). However, in Pupil Player I can only find a window for visualizing the eye cam + pupil fit, yet no eye model is depicted there... Does anyone know if this is possible? Many thanks! Jesse
Hi there - I want to check some info. I always use the eye tracking glasses in closed spaces like supermarkets, with a mobile device. Now I want to test banners in the streets, from the car. If you could help me: is this possible, and what is the calibration process? Many thanks.
@user-83773f Visualizing the 3d model outline in the eye overlay view is a small, simple feature which we can add to our next release. Currently, there is no way to display it with this plugin. @user-c5fb8b @user-764f72 Could one of you please take on this task?
@papr I can take care of this.
Hello everyone, I'm trying to run pupil from source on Windows 10 and I am following these steps: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md but when it comes to building the optimization_calibration module I get an error:
python setup.py build
running build
running build_ext
building 'calibration_routines.optimization_calibration.calibration_methods' extension
error: Unable to find vcvarsall.bat
I've added the directory containing vcvarsall.bat to PATH and still nothing...
In general, I'm looking for an easy way to run Pupil Player (with my custom plugin that has to be run from source) on any computer. I thought that maybe running the pupil software in Docker would be the easiest way, but I don't have much experience with Docker, so I got stuck with this problem:
I created Dockerfile based on https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md
but when I try to docker run myimage I get:
File "/bin/sh", line 1
SyntaxError: Non-UTF-8 code starting with '\xc0' in file /bin/sh on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
Here is my Dockerfile. I want to make container with environment ready to run pupil apps.
@user-a6cc45 this repo is deprecated (i.e. no longer maintained and not up to date regarding dependencies), but if you want a reference Dockerfile, here you go: https://github.com/pupil-labs/pupil-docker-ubuntu/blob/master/Dockerfile
Note - I really do not know if docker will enable you to launch a GUI. The docker image linked above was intended to be used as a build/bundling system and not for actually running the software.
If you have access to a linux/mac system (edit: saw that you were using win 10 above) and are developing source code, then it might actually be easier to install dependencies and run from source.
@user-a6cc45 vcvarsall is a tool from visual studio, which seems to be missing. Did you install visual studio with the required options as documented here: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md#install-visual-studio ? Side note: Are you aware that you can run custom plugins also with the bundled versions of Pupil? You only need to run from source if you actually modify Pupil source code.
Hi, I have a question about the 2D detection. I would like to know the position of the two ellipse centers of the eyes with respect to the world camera image, or in the same coordinates. I understand that this is obtained using a polynomial regression (am I right?), and I was able to read the parameters of the polynomial regression in the notify documents. If I am not wrong, a regression with 13 parameters is used, and two with 7 (for the two eyes). Do the two sets of parameters refer to the coordinates? Can I apply them to the data in pupil_positions.csv? Thanks, and sorry if I was not so clear.
@user-bc8123 the position of the pupil ellipse centers in world camera coordinates is called gaze and is exported to gaze_positions.csv; check out the norm_pos field specifically.
The polynomial that Capture uses to map pupil positions to gaze has a specific structure. I can look it up later today.
ah, ok. Thank you for your reply
Hey, I'm new and trying to use Pupil Core for some gaze mapping. I wanted to know if it's possible to connect to the device and collect data without using the Pupil software (Capture and Player). I'm using Windows 10 and don't have administrator rights on it, hence I can't install this software. My programming and open source knowledge is quite good. Thanks
@user-dfa378 Unfortunately, Pupil Capture is required. The administrator rights are only necessary for the driver installation.
So there is no way around it? E.g. using a python library - since a lot of the code is open source, I expected there would be something.
@papr The last resort would be to get pupil capture
@user-dfa378 You can of course try to extract the parts of the pipeline from the Pupil project, but not everything is as nicely separated as one would like. 😕
But even if you do all that, we use pyuvc (https://github.com/pupil-labs/pyuvc) to detect the cameras. And on Windows, we require the libusbk drivers for that.
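For reference, accessing the cameras with pyuvc looks roughly like this (a sketch based on the pyuvc README; attribute names may differ slightly between versions):

import uvc

devices = uvc.device_list()            # list of dicts with "name" and "uid"
print(devices)

cap = uvc.Capture(devices[0]["uid"])   # open the first camera
cap.frame_mode = (640, 480, 30)        # width, height, fps - must be a supported mode

for _ in range(10):
    frame = cap.get_frame_robust()     # retries on transient USB errors
    print(frame.img.shape)             # image as a numpy array

cap = None                             # release the device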
Ok, then I guess I go with the installation path. Thanks for your help @papr
Hello guys! I would like to know if it's possible to easily create a dynamic heat map with Pupil Capture/Player? I mean a heat map for dynamic content (videos, video games...) and not only a heat map for a static image. Thank you in advance.
Hi there- just checking in again about my earlier question re: draw_points_norm and circle sizes. Thanks! 🙂
@user-fd4a45 Are you asking about creating a heatmap for each frame of a video?
Example: 10 participants watch the same film. You want to aggregate gaze on frame [0, 1, 2, ... n] for each frame of the film the participants watched. Does this example capture your use case?
@user-c5fb8b or @papr please see @user-06ae45's question above (and from before).
@papr should have a look at that as he seems to have been involved in the matter already. I wouldn't know what to do here without spending some time on this.
I can respond to the question on Monday.
@wrp Yes that's exactly what I'm asking. The use case is not exactly what I had in mind though, but it's still useful to be able to do that.
As a single heat map over an entire dynamic video is not very useful, I would like to be able to create dynamic heat maps from one specific frame to another, from a single participant. The idea is to visualise zones of interest for each section of a video (from frame x to frame y).
@user-fd4a45 Unfortunately, this is not possible to generate in an automated fashion in Pupil Player. Instead, you will have to set the trim marks to the time range over which you want to aggregate the surface gaze to a heatmap and hit export. Afterwards, adjust the trim marks to the next time range and repeat.
Ok thank you @papr and to create heat maps I need to use the marker tracking right?
@user-fd4a45 correct.
I recorded some videos using Pupil Remote through python code and it seems that in all of my recordings, the eye cameras only start recording 4 seconds after the front-facing camera starts recording. Is this a known issue and are there fixes? @papr
Hey!! I need to use the pupil core data along with the data I collected from another camera. Since the starting timestamp of the pupil data is arbitrary, what is the most robust/error-free way to convert the pupil timestamps to epoch time? I'll be using previously collected data, so real-time conversion is not a necessity. Thanks
@user-c9d205 Hey 👋 on what OS are you running the script?
@user-dfa378 Check out the info.player.json file. It contains the recording start time in system time (unix epoch) and in pupil time. You can use these two timestamps to calculate the offset between unix epoch and pupil time and apply it to your other data.
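Roughly like this (a sketch; the key names below are the ones used by the newer recording format's info.player.json):

import json
import pathlib

rec_dir = pathlib.Path("/path/to/recording")
info = json.loads((rec_dir / "info.player.json").read_text())

# Offset between the wall clock (unix epoch) and Pupil time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_unix(pupil_timestamp):
    # Convert a Pupil timestamp (e.g. from gaze or pupil data) to unix epoch seconds
    return pupil_timestamp + offset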
Hi @papr, did you have time to check the polynomial that maps the pupil positions to gaze?
@user-bc8123 All relevant functions can be found here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_routines/calibrate.py
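To illustrate the idea for the monocular 2D case (only a sketch of how I read calibrate.py: two 7-parameter fits, one per gaze coordinate, over polynomial features of the pupil norm_pos; the binocular mapper uses 13 terms built from both eyes' positions - please double-check the term order against the source):

import numpy as np

def map_monocular_2d(pupil_x, pupil_y, params_x, params_y):
    # 7-term polynomial feature vector of the pupil norm_pos (eye image coordinates)
    features = np.array([
        pupil_x,
        pupil_y,
        pupil_x * pupil_x,
        pupil_y * pupil_y,
        pupil_x * pupil_y,
        (pupil_x * pupil_x) * (pupil_y * pupil_y),
        1.0,
    ])
    gaze_x = features @ np.asarray(params_x)  # 7 parameters for gaze x
    gaze_y = features @ np.asarray(params_y)  # 7 parameters for gaze y
    return gaze_x, gaze_y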
thank you ! @papr
@user-c9d205 Hey 👋 on what OS are you running the script? @papr Ubuntu 16.04
Hi, is there a way to obtain a depth (z) value from the 2d detection ?
@user-bc8123 no, the 2d detection operates solely on the image plane.
@papr do you believe there is an easy way to obtain at least the eye angles from the pupil images?
Hello, I've noticed that inside gaze.pldata there are 3 norm_pos entries: 2 inside 'base_data' and one as its own key. Which one should we use for the x y coordinates of gaze?
Hi @user-2be752, when your gaze was mapped binocularly, a datum from the left eye and one from the right eye are combined to produce a good "average" result. The two entries in base_data are the left and right pupil data that resulted in this single gaze datum. The norm_pos key of the gaze datum resembles the best matching of those two. If you have monocular gaze, there will be only one entry in base_data, which will have the same norm_pos as the gaze datum. Did that answer your question?
Hi @user-c5fb8b yes! so I have binocular data, that means that I should use the norm_pos key of the gaze datum. Thank you!
Hi again - I have a question regarding accessing the recording directory from a plugin, so that I can save additional files in the same recording directory. I am able to access objects like the sync time through 'g_pool.get_timestamp()', but 'g_pool.rec_dir' throws the error 'AttributeError: 'types.SimpleNamespace' object has no attribute 'rec_dir''. Am I missing something here about how the g_pool object works? Thank you!
Hello again 🙂 I'm trying to use your fixation detector code but I'm missing some ipc_pub parameter in g_pool. Any idea how I can get it? Thank you!
@user-2be752 I fear @user-c5fb8b wasn't entirely correct. Gaze is not equal to Pupil data. Gaze is mapped pupil data. Gaze can be mapped based on one or two pupil data points. You are looking for the top-level norm_pos key of the gaze datum.
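As a small illustration (assuming data is the gaze deque loaded from gaze.pldata as above):

for gaze_datum in data:
    gaze_xy = gaze_datum["norm_pos"]        # mapped gaze in world image coordinates - use this one
    for pupil_datum in gaze_datum["base_data"]:
        eye_id = pupil_datum["id"]          # 0 or 1; one entry (monocular) or two (binocular)
        pupil_xy = pupil_datum["norm_pos"]  # pupil position in *eye* image coordinates, not gaze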
@user-2be752 the IPC pub can be instantiated with zmq.Context.instance().socket(zmq.PUB) if I am not mistaken.
You might need to bind it to a random port.
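Something along these lines (just a sketch; note that the fixation detector may actually expect Pupil's own wrapper object with a notify method rather than a raw socket, as discussed below):

import zmq

ctx = zmq.Context.instance()
ipc_pub = ctx.socket(zmq.PUB)
port = ipc_pub.bind_to_random_port("tcp://127.0.0.1")  # bind to a free local port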
@user-06ae45 that should work. Could you share the plugin code with [email removed]? We will have a look at it and come back to you tomorrow.
@papr thank you! Is there any way I can bypass this, since I only want to use the fixation detector...
Re: if I assign zmq.Context.instance().socket(zmq.PUB) to ipc_pub, then it says that the socket has no such option NOTIFY
@papr Sure, I'll send it over now. Thank you!
I recorded some videos using Pupil Remote through python code and it seems that in all of my recordings, the eye cameras only start recording 4 seconds after the front-facing camera. @papr Anything new in this matter?
Hi @user-c9d205, until @papr responds: the recordings will never start at exactly the same time (although 4 seconds seems uncommon to me). We match timestamps of every video in order to match frames and pupil/gaze data afterwards. So you should not rely on any workflow that requires the world and eye videos to start simultaneously.
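If you need to line up frames yourself afterwards, the usual approach is a nearest-timestamp match, e.g. (a generic numpy sketch, not code from Pupil itself):

import numpy as np

def closest_matches(target_ts, source_ts):
    # For each timestamp in target_ts, return the index of the closest timestamp
    # in source_ts. source_ts must be sorted in ascending order.
    source_ts = np.asarray(source_ts)
    target_ts = np.asarray(target_ts)
    idx = np.clip(np.searchsorted(source_ts, target_ts), 1, len(source_ts) - 1)
    left = source_ts[idx - 1]
    right = source_ts[idx]
    idx -= target_ts - left < right - target_ts  # step back where the left neighbour is closer
    return idx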
Hey!! I wish to go from the world camera coordinates of Pupil Core to actual world coordinates (I'm wearing the glasses while driving, so the car's coordinate frame). So basically I want to account for the head position as well. It's probably not possible, but I just want to confirm it. Is there some procedure to follow, say during calibration, that can achieve this? @papr
@user-dfa378 Check out our head pose tracking plugin!
Thanks a lot @papr Could you please send a link, I can't seem to find it. I found Asistiva but that is not what I'm looking for
@user-dfa378 https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins
To use the fixation detector code, do you recommend the offline or the online detector? It seems that the difference is that with the online detector the fixations can overlap?
@user-c5fb8b Thanks for the response. I was expecting a one-second mismatch at maximum, but I consistently get a 4-second mismatch. Is there anything that can be done?
(Not sure if this should go here or in 🥽 core-xr ) Hi, I'm trying to put markers in Unity for the left and right eye independently, where the gaze in 2D is directed on the display (independent of where gaze is directed in 3D)... basically just putting a red circle for the left eye and a blue one for the right, with a constant depth and size. In prior code from a few years ago, this was accomplished using the "norm_pos" variable for each eye and then setting it as the localPosition for each rectTransform marker. However, I'm having a lot of difficulty using the newer hmd-eyes plugin for Unity to do the same thing. I was trying to adapt the GazeVisualizer demo to use the norm_pos from the monocular GazeData (one for each eye), but it seems very intermittent (it will switch between updating the position of the markers every few seconds rather than both simultaneously) and it seemed like the variable wasn't correctly capturing the eye position. I also tried using the eye direction as a 3D vector and then obtaining the position from that as GazeVisualizer does; that worked very consistently, but only when I rendered one eye at a time. Any ideas of which variable to use or how to approach this? Thanks!
@user-2be752 You can read about the differences in the terminology section: https://docs.pupil-labs.com/core/terminology/#fixations As you said, the online detector can produce overlapping fixations. There is also no way to specify a maximum duration (as in the offline mode). Since normally you would want mutually exclusive fixations, I would recommend the offline mode for any post-hoc analysis. The online fixation detector is required when you need real-time information while running the experiment, e.g. to provide feedback or control the flow of the experiment.
@user-c9d205 We haven't experienced a case with such a big delay before. Are you using any special headset version, or a very old version? What specs does your computer have? Also, can you send me the log file after starting and stopping a recording? You can find it in your home folder > pupil_capture_settings > capture.log. Please note that this file gets overwritten every time you restart capture.
@user-220849 Please repost the question in 🥽 core-xr
Hi, I'm a newbie, I need a CSV file to review export data. Can anyone help me with that?
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/raw_data_exporter.py like that 🙂
Hi @user-6b818e, you can export the data from a recording in Pupil Player into a csv file. Make sure the Raw Data Exporter plugin is enabled and press "e" (or click the "export" button with the arrow pointing down on the left). The export will now be processed; you will find all exported data in the folder of your recording > exports > "export_number". E.g. there is a file gaze_positions.csv which contains all gaze data. Did that answer your question?
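Once exported, the csv can be read with e.g. pandas (a quick sketch; the column names shown are the usual ones, but they may differ slightly between versions):

import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")
print(df.columns.tolist())  # e.g. gaze_timestamp, confidence, norm_pos_x, norm_pos_y, ...

high_conf = df[df["confidence"] > 0.8]  # drop low-confidence samples
print(high_conf[["gaze_timestamp", "norm_pos_x", "norm_pos_y"]].head())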
@user-c5fb8b, that makes sense. So I should input the gaze data into recent_events (from the Offline_Fixation_Detector class)?
@user-c5fb8b thank you. I realized some things yesterday that may change my question so I'll repost an edited version if I'm still having issues.
@user-c5fb8b I don't have any devices to test :D. I'll order one ASAP, but do you have any sample export data? I need one file to test my back end 😄
@user-6b818e https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing This is the example recording from our website. In the export folder you can find the gaze_positions.csv file.
Thanks @papr
@user-c5fb8b I just restarted the program and there is no major delay. Very weird.
I now face a different problem: I upgraded the version of the software, and whenever I try to start a recording from python code the front camera just freezes. The log is pretty much filled with lines in this format: world - [DEBUG] video_capture.uvc_backend: Received non-monotonic timestamps from UVC! Dropping frame. Removing this line from my script solved it: socket.send_string('T 0.0')
@user-c9d205 Thank you very much for letting us know! This is a regression of a change that we made. We will try to fix it as soon as possible. (for reference: https://github.com/pupil-labs/pupil/pull/1812)
@user-c9d205 We have merged a fix to master. Could you give it a try please?
We have also updated the bundle releases to include this fix.
If anyone was following my earlier question about the position of eye markers not being updated often and NormPos -- just wanted to update that the issue was that I set the confidence threshold too high (.8) and it was basically throwing out most of the data!
Hi there again! I'm still trying to figure out the fixation_detector code. I'm using the offline detector, and I wanted to know if the correct function to pass the gaze data into is recent_events. Do I need to pass all of the data at once or one by one? Thank you!
@user-2be752 I would recommend simply calling this function: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L157-L244