Hi, can Remote Annotations be used to annotate at a high frequency, say 100 Hz? I'm using a Pupil Core.
Hi, yes, that is possible. You might want to use this custom plugin that does not log the annotation to the main window: https://gist.github.com/papr/7b940b2c02e05135f59d599a6a90c5f6
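For reference, a condensed sketch of sending annotations from a script at a high rate (based on the pupil-helpers remote_annotations.py approach; the address, label, and 100 Hz loop are illustrative):
import time
import zmq
import msgpack

ctx = zmq.Context()

# Connect to Pupil Remote and ask for the PUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

# Publish annotations directly to the IPC backbone
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the connection time to establish

def send_annotation(label, timestamp, duration=0.0):
    payload = {"topic": "annotation", "label": label, "timestamp": timestamp, "duration": duration}
    pub_socket.send_string(payload["topic"], flags=zmq.SNDMORE)
    pub_socket.send(msgpack.dumps(payload, use_bin_type=True))

# Query the current Pupil time so the timestamps share Capture's clock
pupil_remote.send_string("t")
start = float(pupil_remote.recv_string())

# Annotate at roughly 100 Hz for one second
for i in range(100):
    send_annotation("stimulus_frame", start + i / 100.0)
    time.sleep(0.01)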
I changed notification = {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}}
to notification = {"subject": "start_plugin", "name": "No_Log_Annotation_Capture", "args": {}}
in the example script https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py to use the new plugin. Do I need to change anything else?
You will also need to install the custom plugin. It is not built in.
I think it is working
Forgot to say that I did install it
Then the only thing left to do is to ensure that the built-in annotation plugin is not running. You can turn it off in the plugin manager.
Great, thanks!
You should be able to give it a try by making a recording with the new plugin and checking if the annotations are stored as usual without generating a log message in the main window.
Hello, I have done the calibration with the Pupil Core glasses and I would like to get the gaze point in Python. I am using this code https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py to get the video, but I don't know where I can get the gaze location from.
When receiving gaze from Capture, keep in mind that its location is in normalized coordinates, not in image plane pixels (needed to render gaze into the scene video frame). Read more about the coordinates here https://docs.pupil-labs.com/core/terminology/#coordinate-system
You can convert the coordinates like this:
pixel_x = gaze["norm_pos"][0] * image_width
pixel_y = (1.0 - gaze["norm_pos"][1]) * image_height
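Putting it together, a rough sketch of subscribing to gaze and converting it to pixel coordinates (assuming the default Pupil Remote address; the frame size is an example and should match your scene camera resolution):
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

image_width, image_height = 1280, 720  # example scene camera resolution

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    pixel_x = gaze["norm_pos"][0] * image_width
    pixel_y = (1.0 - gaze["norm_pos"][1]) * image_height
    print(topic.decode(), pixel_x, pixel_y)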
Can someone help me?
Hi @user-b28eb5. You can use this script to obtain gaze coordinates: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py You'll need to uncomment line 24
Hi again, what is the best way of adding a label to a recording (using the Network API)? The label is associated with the entire recording and not to a specific timestamp
You could change the session/recording name, e.g. by sending R <recording name> to Pupil Remote. This will change the recording folder's name. Therefore, you should avoid special characters, as they might lead to issues when trying to read the recording later.
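For example, a minimal sketch (assuming Pupil Remote listens on the default localhost:50020; the recording name is just an example):
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Start a recording with a custom session/recording name
pupil_remote.send_string("R my_experiment_2022_03_08")
print(pupil_remote.recv_string())

# ... run the experiment ...

# Stop the recording
pupil_remote.send_string("r")
print(pupil_remote.recv_string())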
Hmm, that could work, but the label is not unique.
Can I get the auto generated name somehow?
You have full control over the name. You can make it unique by adding the start time to the name. The recording location is exposed via the recording.started notification. Subscribe to notify.recording.started to receive it.
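A minimal sketch of receiving that notification (assuming the default Pupil Remote address):
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to the recording.started notification
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.recording.started")

topic, payload = subscriber.recv_multipart()
notification = msgpack.loads(payload, raw=False)
print("Recording folder:", notification["rec_path"])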
Can a plugin save a file to the recording directory? How does it get the path in that case?
Awesome thank you!
What creates this pop-up window and how do I stop it?
The enabled Request additional user info setting in the Recorder menu. Just disable it again.
Ohh interesting, is that saved to the recording directory?
Yes. So that would be another possibility to store your recording label. Apologies that I did not recommend that before. This functionality has existed for a very long time and I never really got to use it myself.
The result is written to user_info.csv.
Nice, how does it work? Can I modify the defaults? I can't see a save/confirm button.
I mean, can I modify the dictionary keys?
When the recording is stopped, the current values will be written to the mentioned file. Unfortunately, there does not seem to be a possibility to set the values via the network API. You can add more fields by modifying the dict representation at the bottom.
I just realized that the user_info.csv is always generated. So we could write a custom plugin that accepts values via the network API and sets these values in the Recorder plugin.
Let me draft something up. @user-74ef51, do you already know how to send notifications via the network API?
Actually, having a pop-up in the GUI is better for my use case than what I had in mind before. There is another problem, however.
ok, then no need for the network API plugin?
The videos I collect are short (~20 s) and there is a risk that the person does not have time to fill it out, since the pop-up disappears when the recording is done (which is controlled by the network API).
My use case is this: the person responsible for the data collection starts a script on the computer that shows an animation, and at the same time starts the recording (via the network API) and annotates the recording with data about the animation. However, I also want the data collection person to label the recording with some personal data about the test subject.
Is it possible to go the other way, i.e. creating a plugin that shows the pop-up and a start button, which when clicked sends a signal over the network API to start the animation + annotation?
Mmh, this is possible but requires a bit more work. Let me think about it. What would you think about entering the value via your animation script, having the script send it to Capture, and then starting the recording remotely?
Yes, that was my initial idea, which should work fine, just not as pretty. I have already started experimenting with a plugin for storing the labels in a JSON file by sending them as a notification and listening to "notify.recording.started" to extract the "rec_path".
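In rough terms, such a plugin could look like this. This is only a sketch; the notification subject (add_recording_label) and the output file name are placeholders chosen for illustration:
import json
import os

from plugin import Plugin


class Recording_Label_Storage(Plugin):
    """Collects labels sent as notifications and writes them into the recording folder."""

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.labels = []
        self.rec_path = None

    def on_notify(self, notification):
        if notification["subject"] == "add_recording_label":
            # hypothetical custom notification, e.g. {"subject": "add_recording_label", "label": "..."}
            self.labels.append(notification["label"])
            self._write()
        elif notification["subject"] == "recording.started":
            self.rec_path = notification["rec_path"]
            self._write()
        elif notification["subject"] == "recording.stopped":
            self.rec_path = None
            self.labels = []

    def _write(self):
        if self.rec_path and self.labels:
            with open(os.path.join(self.rec_path, "recording_labels.json"), "w") as f:
                json.dump(self.labels, f, indent=2)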
Does it sound like a good idea?
Sounds great. I think this is preferable to using the Recorder's user info mechanism because it is not dependent on Recorder plugin implementation details.
What am I doing wrong here?
# Client
topic = "Add_Recording_Label"
payload = {"topic": topic, "label": "my label", "name": "Add_Recording_Label"}
pupil_remote.send_string(topic, flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(payload, use_bin_type=True))
print("Added label:", payload["label"], ",", pupil_remote.recv_string())
# Plugin
from plugin import Plugin
import logging
logger = logging.getLogger(__name__)
class Add_Recording_Label(Plugin):
    def on_notify(self, notification):
        logger.info(notification)
What do I need to change so that my notification shows up in on_notify?
Notifications have two requirements:
- a subject field (string)
- the topic field should be formatted like this: notify.<subject> (see the corrected sketch below)
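Applied to your client snippet, that would look roughly like this (the subject name is just an example; your plugin's on_notify receives all notifications regardless of the subject):
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Notifications need a `subject` field and a topic of the form notify.<subject>
subject = "add_recording_label"
payload = {"topic": f"notify.{subject}", "subject": subject, "label": "my label"}
pupil_remote.send_string(payload["topic"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(payload, use_bin_type=True))
print("Added label:", payload["label"], ",", pupil_remote.recv_string())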
You could also use this library that I started developing https://pupil-core-network-client.readthedocs.io/en/latest/api.html#pupil_labs.pupil_core_network_client.Device.send_notification
Thanks! It works now
Hi again, sorry for bombarding you with questions. Can I use the Network API to set resolution, frame rate and auto exposure of the eye recordings?
Yes, that should be possible. Let me look that up for you!
Did you find anything @papr ?
@user-74ef51 The corresponding notification should be:
{
    "topic": "notify.start_eye_plugin",
    "subject": "start_eye_plugin",
    "target": "eye0",  # and "eye1" respectively
    "name": "UVC_Source",
    "args": {
        "frame_size": (width, height),
        "frame_rate": 200,
        "name": "Pupil Cam2 ID0",  # and ID1 respectively; Cam1 for 120 Hz models
        "exposure_mode": "auto",
    },
}
Just like this?
notification = {
    "topic": "notify.start_eye_plugin",
    "subject": "start_eye_plugin",
    "target": "eye1",
    "name": "UVC_Source",
    "args": {
        "frame_size": (400, 400),
        "frame_rate": 120,
        "name": "Pupil Cam1 ID1",
        "exposure_mode": "auto",
    },
}
pupil_remote.send_string(f"notify.{notification['subject']}", flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification))
pupil_remote.recv_string()
This crashes the eye process.
If you go to the video source menu and enable the manual camera selection, check out the Activate source selector. It lists the available camera names.
Ah, the resolution does not match the camera name. Cam1 does not have square resolutions. Only Cam2 does. Which model do you use?
Is ID1 the same as eye 1? What is Cam1?
The camera name has two components:
- CamX - indicates the camera model (X=1 -> old 120 Hz model, X=2 -> new 200 Hz model)
- IDY - indicates the eye (Y=0 -> right, Y=1 -> left)
Alright, I think I have the new version (I got it last week). I usually record in [email removed] but it also works in [email removed].
Then use Pupil Cam2 ID0 for target: eye0 and Pupil Cam2 ID1 for target: eye1.
I tried but it still crashes
Can you share the traceback?
Oh right, the capture.log? Note that I first set the values to [email removed] manually to check that it changes back. I do that before starting the Network API controller script.
The issue is that the camera was in use beforehand and not released in time. The new plugin instance is not able to access it and fails.
I am not sure how to work around this issue. I will have to try this myself.
Okay, I'm not sure what that means. Closing the eye camera processes before sending the notice does not help
How about sending this message, and then stopping and restarting the eye process?
This is the script I use. I don't know what is going on, but it works a few times in a row, then it doesn't work a few times in a row, then it works again, and so on.
I don't know why, because I didn't change anything, but it works now. Let me experiment a little more
Sounds like a race condition, i.e. an issue that is triggered by specific timing. Try subclassing UVC_Source like this in a user plugin:
import time
from video_capture.uvc_backend impot UVC_Source
class Wait_Before_Init_UVC_Source(UVC_Source):
    def __init__(self, *args, **kwargs):
        time.sleep(0.5)
        super().__init__(*args, **kwargs)
Then change the name field in your notification accordingly.
The added sleep will allow the system to release the previously acquired resources before attempting to claim them again.
Should the plugin appear in the plugin menu? It doesn't and I get an error saying "Attempt to load unknown plugin". I did set "name": "Wait_Before_Init_UVC_Source",
Hi this is still a problem for me. Has there been any progress?
No, to the former. The latter sounds like the plugin was not loaded correctly. I did not test the code. Please check the debug logs. They will tell you whether the plugin was attempted to be loaded and whether it failed.
2022-03-08 09:24:09,854 - eye0 - [DEBUG] plugin: Scanning: wait_before_init_uvc_source.py
2022-03-08 09:24:09,854 - eye0 - [DEBUG] plugin: Imported: <module 'wait_before_init_uvc_source' from '/home/johan/pupil_capture_settings/plugins/wait_before_init_uvc_source.py'>
2022-03-08 09:24:09,855 - eye0 - [DEBUG] plugin: Added: <class 'video_capture.uvc_backend.UVC_Source'>
2022-03-08 09:24:09,855 - eye0 - [DEBUG] plugin: Added: <class 'wait_before_init_uvc_source.Wait_Before_Init_UVC_Source'>
...
2022-03-08 09:24:13,175 - eye0 - [ERROR] launchables.eye: Attempt to load unknown plugin: 'Wait_Before_Init_UVC_Source'
2022-03-08 09:24:13,185 - eye1 - [ERROR] launchables.eye: Attempt to load unknown plugin: 'Wait_Before_Init_UVC_Source'
https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/eye.py#L220-L227 looks like eye process plugins must inherit from PupilDetectorPlugin, too.
+ from pupil_detector_plugins.detector_base_plugin import PupilDetectorPlugin
- class Wait_Before_Init_UVC_Source(UVC_Source):
+ class Wait_Before_Init_UVC_Source(PupilDetectorPlugin, UVC_Source):
- super().__init__(*args, **kwargs)
+ super(UVC_Source).__init__(*args, **kwargs)
But this multiple-inheritance workaround might have side effects that I cannot predict right now. See also https://docs.python.org/3/library/functions.html#super You might need to override multiple functions with their respective super calls or implement them with a pass.
Maybe it is easier to check if the process is running and restart it if it is not? That is, if it is possible to check whether a process is running.
TypeError: __init__() missing 2 required positional arguments: 'frame_size' and 'frame_rate'
Mmh, I wonder if we would have an easier time by monkeypatching the init function.
import functools
import time
from video_capture.uvc_backend import UVC_Source
@functools.wraps(UVC_Source.__init__)
def custom_init(self, *args, **kwargs):
    time.sleep(0.5)
    UVC_Source.__init__(self, *args, **kwargs)

UVC_Source.__init__ = custom_init
Use this instead of the subclass.
That crashes Pupil Capture before any errors are written to the log.
OK, I will need to test the code tomorrow. I just wrote this from the top of my head, but I think this could be a viable solution.
But isn't this the same as adding a time.sleep(0.5) in my Network API controller script?
(also, it had a typo in the import statement)
Yes, I saw and fixed the typo. Alright, let me know when you have tested it.
No, because the notification causes the previous plugin to be overwritten. And you need the sleep between shutting down the old one and starting the new one.
Ohh, I see, that is what happens.
Can someone help me? I have my scene with a 3D shape, and I want to store data for each gaze point. I want to store the timestamp, the coordinates of the 3D point of the shape I'm looking at, the camera position, the camera direction... To sum up, the variables in the gazeData script. How can I get them and store them while I'm performing the experiment?
For example, in this case I want to store the point of the shape I'm looking at, the timestamp at which I looked at it, the position of the camera at that moment, the gaze direction...
How could I do it??
Hi, did you have time to look at the plugin restart problem?
not yet, sorry, I will try to get to it today
The wrapped code called itself recursively. Please try:
import functools
import time
from video_capture.uvc_backend import UVC_Source
UVC_Source__init__ = UVC_Source.__init__
@functools.wraps(UVC_Source.__init__)
def custom_init(self, *args, **kwargs):
    time.sleep(0.5)
    UVC_Source__init__(self, *args, **kwargs)

UVC_Source.__init__ = custom_init
"name": "UVC_Source"?
I should put that code in a file under plugins called whatever I want right?
correct. Also, remember to rename the start_plugin name field back to the original name
Okay, I still get errors
Can you please increase the sleep duration to 10 seconds (this is huge but if it still fails with this amount of sleep, the issue might be related to something else)
I still get the error but now it is delayed 10s
Ok, you can delete the file again. Afterward, please run Pupil Capture from the terminal with this env variable: LIBUSB_DEBUG=4 pupil_capture and share the logs here?
The capture.log?
No, the libusb debug log is only printed to the terminal as far as I know. Please share the statements printed to the terminal window.
I will have a look at this tomorrow
Hi @user-f76ddf. Relevant documentation:
- Accessing data: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#accessing-data
- Recording data: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#recording-data
In order to determine which object is being gazed at, you could use raycasting to find the intersection of gaze and objects in the scene: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#gazedirection--gazedistance-instead-of-gazepoint3d
Thank you very much, I have already read all of that and I think I got it. But now I have some specific questions:
What exactly is the pupil timestamp from GazeData.cs? The instant at which the pupils reach a specific position? The instant at which a point of the scene is fixated? Or what?
In order to store the fixated point, is it better to use the hit.point computed by the raycasting in the GazeVisualizer.cs file or the gazePoint3D from the GazeData.cs file? Is the hit.point the same as the origin + distance in the gaze direction in the GazeVisualizer.cs file?
Is the variable lastConfidence in DataVisualizer the confidence of the Gaussian distribution of where you're looking?
@papr Hi papr! We've been making progress on closing the difference between the realtime and post-hoc calibration routines for VR headsets running HMD-Eyes. We've been working off of the post-hoc-vr-gazer branch, and I noticed that in the gazer_3d/gazer_hmd.py file, which contains the binocular model for the realtime calibration sequence (ModelHMD3D_Binocular), there is no equivalent model for the monocular case. The ModelHMD3D_Monocular's _fit() function just returns NotImplemented. Are there plans to implement this monocular model? If not, what monocular model is being used for realtime HMD data in that branch?
Yes, ModelHMD3D_Monocular is not fit independently. Instead, it receives the parameters from a common ModelHMD3D_Binocular. In both monocular and binocular cases, the gaze mapping (not the fitting) remains the same as in their corresponding non-HMD counterparts, Model3D_Monocular and Model3D_Binocular (through class inheritance).
In summary, Pupil Core headset and VR add-on gazers differ in two things: 1. hardcoded eye translation and reference depth parameters 2. the fitting procedure
The gaze estimation algorithm remains the same
I was not able to find anything helpful yet. In the meantime, could you try running Pupil Capture with sudo and try the replacement again?
I tried reproducing the issue on my Mac running macOS Monterey. On that OS, one is required to run with sudo to get access to the cameras. It seems, as a side effect of that, I am not able to reproduce the issue. I know that on Windows it is possible to access the same camera from multiple processes, i.e. the UVC Source replacement won't be an issue there either. Next Monday, I might have access to a Linux machine that I can use for testing this issue properly.
Okay, I see. Unfortunately, starting Pupil Capture with sudo (like this: sudo pupil_capture) does not seem to make any difference, other than that it can't find the plugins anymore. Also, I noticed that this warning: eye1 - [WARNING] uvc: Could not set Value. 'Backlight Compensation'. keeps popping up regardless of whether the other problem occurs. Do you know what it means?
The warning is nothing to worry about. uvc just attempts to set a specific camera parameter that is not supported by the camera. It has no effect on its functionality.
Hello, I am working with Capture version 1.14.9. I am wondering how I can make sure my pupil is being detected using the pupil detector. With the new version of Pupil Capture I was able to see a video of each of my eyes. I would then move my eye around to make sure my confidence was around 1. Is this different for the older versions of Capture? So far I have tried Activate source and I chose Pupil Cam2 ID1 (for eye 1) and Pupil Cam2 ID0 (for eye 0). It's not a hardware issue since it works fine with the most up to date version of Pupil Capture. Thank you!
If I remember correctly in that version you had to choose between 2d and 3d detection. Which one are you using?
Why is the detection wrong?
Hi, the pupil detection algorithm is designed for black and white eye images, captured by an IR camera. The algorithm looks for dark edges. In this case, my eyelashes are detected as a false positive. Generally, it is recommended to avoid other IR light sources, e.g. the sun, as the reflections will make pupil detection difficult, as @user-84083b mentioned correctly. The reflection in this image is so strong that I would not even be able to manually annotate the pupil myself.
Maybe it is due to reflection?
Hello everyone, I'm trying to build a data logger with an Arduino to get data from the Pupil Core and another sensor. Do you know if anyone has done something similar? Thanks for your help.
I do not know anyone who has used an Arduino for that purpose yet. Please be aware that you will need a device that is able to run Pupil Capture. The Arduino is likely not powerful enough to run this software. But you should be able to receive data via Pupil Capture's Network API https://docs.pupil-labs.com/developer/core/network-api/
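For example, a rough sketch of receiving gaze via the Network API on the computer running Capture and forwarding it to an Arduino over serial (the serial port, baud rate, and CSV format are assumptions for illustration; requires pyserial):
import zmq
import msgpack
import serial  # pyserial

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # example port

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    x, y = gaze["norm_pos"]
    # Forward a simple CSV line; parse it on the Arduino side
    arduino.write(f"{gaze['timestamp']:.4f},{x:.4f},{y:.4f}\n".encode())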
thanks for your reply !
Hi everyone! I wonder if you know the exact recording settings used in the Pupil Core example recording 'sample_recording_v2', specifically eye0.mp4? Thanks in advance.
Could you quickly link me to the download link for the recording? I am not sure which one you are referring to.
What kind of settings are you looking for? You will be able to infer frame size from the video file and the selected frame rate from the recorded timestamps.
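A rough sketch of inferring both from a recording folder (assuming the standard eye0.mp4 / eye0_timestamps.npy layout; the path is a placeholder):
import cv2
import numpy as np

recording = "/path/to/recording"

# Frame size from the video file
video = cv2.VideoCapture(f"{recording}/eye0.mp4")
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("Frame size:", width, "x", height)

# Effective frame rate from the recorded timestamps
timestamps = np.load(f"{recording}/eye0_timestamps.npy")
print("Frame rate:", 1.0 / np.median(np.diff(timestamps)))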
Oh, and another question: do you know a way to extract heart rate using the video recordings? Anyone that has tried it out?
I do not think this would work. There are optical approaches but to my knowledge the light source and camera need to be very close to the skin for this to work.
Thank you for your response! I was able to finally detect my pupil. What is the difference between 2D and 3D detection, or rather, which one should I be using? Thanks again.
Please see this Best Practices section https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline
Hello, I am trying to analyze Pupil Invisible recordings in Pupil Player and I have 2-hour, 1-hour, and 45-minute videos to analyze. I am using the Linux version of Pupil Player. My issue is that the 45-minute videos take about 20 minutes to create the marker cache, but the hour-long video takes up to 4 hours to process the marker cache, and the two-hour video, assuming it does not crash, easily 12+ hours. I know I could use the cloud software for doing surface tracking, but I do not see any way to pull gyroscope data / head pose tracking data there. I did some initial testing and I can easily pull the gyroscope data out of the videos as long as I have the surface tracker plugin turned off, but head pose tracking runs into the same problem as the surface tracking, as it eventually slows to an absolute crawl or crashes the software. My question is: is there a way to cut the length of my videos, or is the best solution a combination of Pupil Cloud for the surface tracking and simply pulling the gyroscope data from Pupil Player? I would like to cut the videos because then all export data is nicely organized by Pupil Player, but I completely understand if this is not possible. I think the head pose tracking could be nice, but I am fairly confident that the gyroscope data will work for our purposes.
Hey, you can get the gyro data from the Raw Data Exporter enrichment in Pupil Cloud, too. See https://docs.pupil-labs.com/invisible/reference/export-formats.html#imu-csv
That is great, I must have missed it when I was first reading this documentation. Thank you @papr
Hey, do you have something like a doc that lists all the kinds of notifications we can send to the IPC Backbone? I sometimes get weird results; I think I am not understanding exactly what my notification is doing.
Unfortunately, we do not have such a document. Back in 2016, we actively decided against it because it would have quickly become out of date. With the software reaching its current stability, it is time to work on it.
Until then, what are the issues that you are seeing?
I was analyzing the mean colour intensity for each frame and I could identify regular peaks in the example videos but not in the videos that I was recording myself.
What hardware are you using for your recording?
I am using a Microsoft Windows 11 pro. Processor : 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz, 2304 Mhz, 8 Core(s), 16 Logical Processor(s)
Apologies for being unclear, I tried referring to the eye tracking hardware. Are you using a Pupil Core headset?
ah, yes, I use the Pupil Core headset
yes, sure
https://drive.google.com/file/d/1vzjZkjoi8kESw8lBnsa_k_8hXPf3fMMC/view
Is it possible that the peaks correlate with the person looking down, and the eyelid being more visible?
Mmh, from visual inspection I cannot see a reason for the regular peaks. Are they perfectly regular? If so, at which frequency?
Well, I was trying things with remote recording (you sent me the GitHub link 2 days ago). The thing with this project is that I have some latency because, in the video backend, the author is sending images one by one... I was wondering, is it possible to select a video source for eye0, eye1, and world from a URL?
I am not sure I understand. The backend uses OpenCV to access the frames, which decompresses the images by default. If you want to transfer the images in their original compressed format (MJPEG), you will need to modify the code to use pyuvc https://github.com/pupil-labs/pyuvc
How does transmission latency relate to the "weird results" reported by you?
You can monitor all notifications using this script https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py
Please note that the zmq messages containing image data consist of 3 zmq frames (topic as string, meta data in msgpack-encoded dictionary, and the raw data). Most other messages only contain two frames. When processing this kind of message yourself, you need to pay attention to this third frame. Otherwise, you might reached an unexpected state of the stream. See this implementation for reference https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/zmq_tools.py#L117-L120
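A rough sketch of a receive loop that handles the optional extra frames (assuming the default Pupil Remote address and that the Frame Publisher plugin is running so that frame topics are published):
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.")  # image topics; adjust to your needs

while True:
    topic = subscriber.recv_string()
    payload = msgpack.loads(subscriber.recv(), raw=False)
    # Image topics append the raw image buffer(s) as additional zmq frames
    extra_frames = []
    while subscriber.get(zmq.RCVMORE):
        extra_frames.append(subscriber.recv())
    if extra_frames:
        payload["__raw_data__"] = extra_frames
    print(topic, len(extra_frames), "extra zmq frame(s)")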
Sorry for this naïve question, but I'm still confused. I'm checking this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb, cell 10. To denormalize gaze coordinates, i.e. convert them from the normalized scale (0-1) to frame coordinates with size H, W, should X, Y be multiplied by W-1, H-1 instead of W, H? An image with size W, H should have the top left corner coordinate at (0,0) and the opposite corner coordinate at (W-1, H-1), right? Another way to think about this is considering the first channel (red channel) of an image with the size (W, H). Then to access the corner element, it should be img[0,0], whereas for the opposite corner, it should be img[W-1, H-1]. Hence, to map the normalized coordinate to the array, we should multiply by W-1, H-1, right?
Hi @user-d1072e. H and W in this example are integers. frame_index_image is an image array returned by Matplotlib's imread function (see cell 8); its shape is a tuple, and slicing it with frame_index_image.shape[:-1] gives us the correct height and width values.
Yes, I agree with the height and width values, but that's not the problem I'm asking about. I'm asking about mapping normalized coordinates to a correct index in the frame image. Let's say the image has a size (image resolution) of 5x5. Undoubtedly, the height and width returned from frame_index_image.shape[:-1] are 5 and 5. But when it comes to mapping a normalized gaze coordinate (0.5, 1.0) to a numpy array with size 5x5, the question is: where do we put this gaze? Without math, we know the image array has indices from 0 to W (or H) - 1, i.e. 0 to 4, and hence the index of the gaze should be (2, 4), which is the same as multiplying by W (or H) - 1. However, if we simply multiply the normalized gaze by the width and height, we actually get 2.5 and 5 (an out-of-range index).
The image frame size is given in pixels. You seem to be applying a zero index in your example, which is incorrect. If you re-calculate your example without the zero index, the results are as expected.
Yeah, I tried to apply zero-indexing since I want to map it to a location in the numpy array. But I think it's better to just calculate without zero-indexing (i.e. just multiply by W and H), then subtract 1 to get the index in the numpy array. That would be more straightforward.
The gaze point might be outside of the frame and still be valid. If you want to be robust against index errors you will always have to check the frame boundary.
That said, let's have a look at how the norm_pos value is generated in the first place [1]. In 3d gaze mapping, the gaze point is estimated in pixels first [2] and then transformed into the normalized coordinate system [3].
Transforming the norm values by multiplying with width/height [3] will result in the original pixel estimates (with some very small errors due to floating point operations). [2] estimates in sub-pixel accuracy (due to using [4]). If you want integer pixel values, I suggest rounding to the nearest one. The initially mentioned boundary check should be performed afterward.
[1] https://github.com/pupil-labs/pupil/blob/04b6fcc70f8e42ebadbfba9074354cfcad528867/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L171-L178
[2] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L651-L688
[3] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/methods.py#L477-L502
[4] https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga1019495a2c8d1743ed5cc23fa0daff8c
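A small sketch of the suggested denormalize, round, and boundary-check procedure (the frame is a dummy array; it uses the 5x5 example and the gaze (0.5, 1.0) from above):
import numpy as np

def denormalize_to_pixel_index(norm_pos, frame_shape):
    """Map a norm_pos (x, y) to integer pixel indices of an (H, W, ...) image array."""
    height, width = frame_shape[:2]
    x = norm_pos[0] * width
    y = (1.0 - norm_pos[1]) * height  # flip y: norm origin is bottom left, image origin is top left
    col, row = int(round(x)), int(round(y))
    inside = 0 <= col < width and 0 <= row < height
    return row, col, inside

frame = np.zeros((5, 5, 3), dtype=np.uint8)
row, col, inside = denormalize_to_pixel_index((0.5, 1.0), frame.shape)
print(row, col, inside)  # skip or clamp the datum when `inside` is False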