πŸ’» software-dev


user-74ef51 01 March, 2022, 11:38:15

Hi, can Remote Annotations be used to annotate at a high frequency, say 100 Hz? I'm using a Pupil Core.

papr 01 March, 2022, 12:53:43

Hi, yes, that is possible. You might want to use this custom plugin that does not log the annotation to the main window: https://gist.github.com/papr/7b940b2c02e05135f59d599a6a90c5f6

user-74ef51 01 March, 2022, 14:15:35

I changed notification = {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}} to notification = {"subject": "start_plugin", "name": "No_Log_Annotation_Capture", "args": {}} in the example script https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py to use the new plugin. Do I need to change anything else?

papr 01 March, 2022, 14:18:29

You will also need to install the custom plugin. It is not built in.

user-74ef51 01 March, 2022, 14:19:24

I think it is working

user-74ef51 01 March, 2022, 14:18:55

Forgot to say that I did install it

papr 01 March, 2022, 14:20:03

Then the only thing left to do is to ensure that the built-in annotation plugin is not running. You can turn it off in the plugin manager.

user-74ef51 01 March, 2022, 14:20:28

Great, thanks!

papr 01 March, 2022, 14:21:12

You should be able to give it a try by making a recording with the new plugin and checking if the annotations are stored as usual without generating a log message in the main window.

user-b28eb5 01 March, 2022, 15:26:14

Hello, I have done the calibration with the Core glasses and I would like to get the gaze point in Python. I am using this code https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py to get the video, but I don't know where I can get the location of the gaze from.

papr 02 March, 2022, 07:40:29

When receiving gaze from Capture, keep in mind that its location is in normalized coordinates, not in image plane pixels (needed to render gaze into the scene video frame). Read more about the coordinates here https://docs.pupil-labs.com/core/terminology/#coordinate-system

You can convert the coordinates like this:

pixel_x = gaze["norm_pos"][0] * image_width
pixel_y = (1.0 - gaze["norm_pos"][1]) * image_height
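
For example, a minimal sketch of receiving gaze and applying the conversion (assuming the default Pupil Remote port 50020 and a 1280x720 scene camera):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

image_width, image_height = 1280, 720  # adjust to your scene camera resolution

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    pixel_x = gaze["norm_pos"][0] * image_width
    pixel_y = (1.0 - gaze["norm_pos"][1]) * image_height
    print(topic.decode(), pixel_x, pixel_y)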
user-b28eb5 01 March, 2022, 18:44:47

Someone can help me?

nmt 01 March, 2022, 20:06:32

Hi @user-b28eb5. You can use this script to obtain gaze coordinates: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py You'll need to uncomment line 24

user-74ef51 02 March, 2022, 11:10:18

Hi again, what is the best way of adding a label to a recording (using the Network API)? The label is associated with the entire recording and not to a specific timestamp

papr 02 March, 2022, 11:18:25

You could change the session/recording name, e.g. by sending R <recording name> to Pupil Remote. This will change the recording folder's name. Therefore, you should avoid special characters as they might lead to issues when trying to read the recording later.
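
For example (a minimal sketch, assuming Capture runs locally with Pupil Remote on its default port 50020):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# "R <name>" starts a recording with the given session name; "r" stops it
pupil_remote.send_string("R my_session_name")
print(pupil_remote.recv_string())  # Pupil Remote acknowledges every command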

user-74ef51 02 March, 2022, 11:20:05

hmm could work but the label is not unique

user-74ef51 02 March, 2022, 11:20:31

Can I get the auto generated name somehow?

papr 02 March, 2022, 12:54:47

You have full control over the name. You can make it unique by adding the start time to the name. The recording location is exposed via the recording.started notification. Subscribe to notify.recording.started to receive it.
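
For example, a sketch of receiving it (again assuming the default Pupil Remote port):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.recording.started")

topic, payload = subscriber.recv_multipart()
notification = msgpack.loads(payload)
print(notification["rec_path"])  # path of the recording folder on disk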

user-74ef51 02 March, 2022, 12:20:49

Can a plugin save a file to the recording directory? How does it get the path in that case?

user-74ef51 02 March, 2022, 13:57:05

Awesome thank you!

user-74ef51 02 March, 2022, 14:21:20

What creates this pop-up window and how do I stop it?

Chat image

papr 02 March, 2022, 14:24:25

The enabled Request additional user info setting in the Recorder menu. πŸ™‚ Just disable it again.

user-74ef51 02 March, 2022, 14:26:41

Ohh interesting, is that saved to the recording directory?

papr 02 March, 2022, 14:27:51

Yes. So that would be another possibility to store your recording label. Apologies that I did not recommend that before. This functionality has existed for a very long time and I never really got to use it myself πŸ™‚

papr 02 March, 2022, 14:28:54

The result is written to user_info.csv.

user-74ef51 02 March, 2022, 14:28:55

Nice! How does it work? Can I modify the defaults? I can't see a save/confirm button.

user-74ef51 02 March, 2022, 14:31:39

I mean can I modify the dictionary keys

papr 02 March, 2022, 14:33:11

When the recording is stopped, the current values will be written to the mentioned file. Unfortunately, there does not seem to be a way to set the values via the network API. You can add more fields by modifying the dict representation at the bottom.

papr 02 March, 2022, 14:34:55

I just realized that the user_info.csv is always generated. So we could write a custom plugin that accepts values via the network API and sets these values in the recorder plugin.

papr 02 March, 2022, 14:38:59

Let me draft something up. @user-74ef51 do you already know how to send notifications via the network API?

user-74ef51 02 March, 2022, 14:39:22

Actually, having a pop-up in the GUI is better for my use case than what I had in mind before. There is another problem, however.

papr 02 March, 2022, 14:39:47

ok, then no need for the network API plugin?

user-74ef51 02 March, 2022, 14:41:07

The videos I collect are short (~20 s) and there is a risk that the person does not have time to fill it out, since the pop-up disappears when the recording is done (which is controlled by the network API).

user-74ef51 02 March, 2022, 14:44:34

My use case is this: The person responsible for the data collection starts a script on the computer that shows an animation and at the same time starts the recording (via the network API) and annotates the recording with data about the animation. However, I also want the data collection person to label the recording with some personal data about the test subject.

user-74ef51 02 March, 2022, 14:47:59

Is it possible to go the other way, i.e. creating a plugin that shows the pop-up and a start button, which, when clicked, sends a signal over the network API to start the animation + annotation?

papr 02 March, 2022, 14:53:29

mmh, this is possible but requires a bit more work. Let me think about it. What would you think about the possibility to enter the value via your animation script, with the script sending it to Capture and then starting the recording remotely?

user-74ef51 02 March, 2022, 15:00:42

Yes, that was my initial idea, which should work fine, just not as pretty. I have already started experimenting with a plugin for storing the labels to a JSON file by sending them as a notification and listening to "notify.recording.started" to extract the "rec_path".
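
Roughly like this, an untested sketch (the notification subject and field names here are my own):

import json
import os

from plugin import Plugin


class Add_Recording_Label(Plugin):
    """Collects a label sent via notify.add_recording_label and stores it
    next to the recording when it stops."""

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.label = None
        self.rec_path = None

    def on_notify(self, notification):
        if notification["subject"] == "add_recording_label":
            self.label = notification["label"]
        elif notification["subject"] == "recording.started":
            self.rec_path = notification["rec_path"]
        elif notification["subject"] == "recording.stopped" and self.rec_path:
            # Write the collected label into the recording folder
            with open(os.path.join(self.rec_path, "label.json"), "w") as f:
                json.dump({"label": self.label}, f)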

user-74ef51 02 March, 2022, 15:02:49

Does it sound like a good idea?

papr 02 March, 2022, 15:43:11

Sounds great. I think this is preferable to using the recorder's user info mechanism because it is not dependent on Recorder plugin implementation details.

user-74ef51 03 March, 2022, 11:58:13

What am i doing wrong here?

# Client
topic = "Add_Recording_Label"
payload = {"topic": topic, "label": "my label", "name": "Add_Recording_Label"}
pupil_remote.send_string(topic, flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(payload, use_bin_type=True))
print("Added label:", payload["label"], ",", pupil_remote.recv_string())
# Plugin
from plugin import Plugin
import logging
logger = logging.getLogger(__name__)

class Add_Recording_Label(Plugin):
    def on_notify(self, notification):
        logger.info(notification)

What do I need to change so that my notification shows up in on_notify?

papr 03 March, 2022, 12:05:38

Notifications have two requirements:
- a subject field (string)
- the topic field should be formatted like this: notify.<subject>
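
Applied to your snippet, the client side would look roughly like this:

subject = "add_recording_label"
topic = f"notify.{subject}"  # the topic must carry the notify. prefix
payload = {"topic": topic, "subject": subject, "label": "my label"}
pupil_remote.send_string(topic, flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(payload, use_bin_type=True))
print("Added label:", payload["label"], ",", pupil_remote.recv_string())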

You could also use this library that I started developing https://pupil-core-network-client.readthedocs.io/en/latest/api.html#pupil_labs.pupil_core_network_client.Device.send_notification

user-74ef51 03 March, 2022, 13:51:58

Thanks! It works now

user-74ef51 07 March, 2022, 08:52:50

Hi again, sorry for bombarding you with questions. Can I use the Network API to set resolution, frame rate and auto exposure of the eye recordings?

papr 07 March, 2022, 08:56:03

Yes, that should be possible. Let me look that up for you!

user-74ef51 07 March, 2022, 11:39:07

Did you find anything @papr ?

papr 07 March, 2022, 12:10:47

@user-74ef51 The corresponding notification should be

{
  "topic": "notify.start_eye_plugin",
  "subject": "start_eye_plugin",
  "target": "eye0",  # and "eye1" respectively
  "name": "UVC_Source",
  "args": {
    "frame_size": (width, height)
    "frame_rate": 200,
    "name": "Pupil Cam2 ID0",  # and ID1 respectively; Cam1 for 120 Hz models
    "exposure_mode": "auto",
  }
}
user-74ef51 07 March, 2022, 14:43:14

Just like this?

notification = {
    "topic": "notify.start_eye_plugin",
    "subject": "start_eye_plugin",
    "target": "eye1",
    "name": "UVC_Source",
    "args": {
        "frame_size": (400, 400),
        "frame_rate": 120,
        "name": "Pupil Cam1 ID1",
        "exposure_mode": "auto",
    },
}
pupil_remote.send_string(f"notify.{notification['subject']}", flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification))
pupil_remote.recv_string()

This crashes the eye process.

papr 07 March, 2022, 14:46:11

If you go to the video source menu and enable the manual camera selection, check out the Activate source selector. It lists the available camera names.

papr 07 March, 2022, 14:44:40

Ah, the resolution does not match the camera name. Cam1 does not have square resolutions. Only Cam2 does. Which model do you use?

user-74ef51 07 March, 2022, 14:48:41

Is ID1 the same as eye 1? What is Cam1?

papr 07 March, 2022, 14:50:32

The camera name has two components:
- CamX - indicates the camera model (X=1 -> old 120 Hz model, X=2 -> new 200 Hz model)
- IDY - indicates the eye (Y=0 -> right, Y=1 -> left)

user-74ef51 07 March, 2022, 14:52:19

Alright, I think I have the new version (I got it last week). I usually record in [email removed], but it also works in [email removed]

papr 07 March, 2022, 14:53:44

Then use Pupil Cam2 ID0 for target: eye0 and Pupil Cam2 ID1 for target: eye1

user-74ef51 07 March, 2022, 14:55:59

I tried but it still crashes

papr 07 March, 2022, 14:56:22

Can you share the traceback?

user-74ef51 07 March, 2022, 15:00:17

Oh right, the capture.log? Note that I first set the values to [email removed] manually to check that it changes back. I do that before starting the Network API controller script.

capture.log

papr 07 March, 2022, 15:06:56

The issue is that the camera was in use beforehand and not released in time. The new plugin instance is not able to access it and fails.

papr 07 March, 2022, 15:08:35

I am not sure how to work around this issue. I will have to try this myself.

user-74ef51 07 March, 2022, 15:10:32

Okay, I'm not sure what that means. Closing the eye camera processes before sending the notification does not help.

papr 07 March, 2022, 15:11:21

How about sending this message, and then stopping and starting the eye process?

user-74ef51 08 March, 2022, 07:24:16

This is the script I use. I don't know what is going on, but it works a few times in a row, then it doesn't work a few times in a row, then it works again, and so on.

capture.py

user-74ef51 07 March, 2022, 15:18:09

I don't know why, because I didn't change anything, but it works now. Let me experiment a little more

papr 08 March, 2022, 07:33:59

Sounds like a race condition, i.e. an issue that is triggered by specific timing. Try subclassing UVC_Source like this in a user plugin:

import time
from video_capture.uvc_backend impot UVC_Source

class Wait_Before_Init_UVC_Source(UVC_Source):
  def __init__(self, *args, **kwargs):
    time.sleep(0.5)
    super().__init__(*args, **kwargs)

Then change the name field in your notification accordingly.

The added sleep will allow the system to release the prev. acquired resources before attempting to claim them again.

user-74ef51 08 March, 2022, 08:20:40

Should the plugin appear in the plugin menu? It doesn't and I get an error saying "Attempt to load unknown plugin". I did set "name": "Wait_Before_Init_UVC_Source",

user-74ef51 12 April, 2022, 13:31:34

Hi this is still a problem for me. Has there been any progress?

papr 08 March, 2022, 08:21:48

No, to the former. The latter sounds like the plugin was not loaded correctly. I did not test the code. Please check the debug logs. They will tell you whether the plugin load was attempted and whether it failed.

user-74ef51 08 March, 2022, 08:28:53

2022-03-08 09:24:09,854 - eye0 - [DEBUG] plugin: Scanning: wait_before_init_uvc_source.py
2022-03-08 09:24:09,854 - eye0 - [DEBUG] plugin: Imported: <module 'wait_before_init_uvc_source' from '/home/johan/pupil_capture_settings/plugins/wait_before_init_uvc_source.py'>
2022-03-08 09:24:09,855 - eye0 - [DEBUG] plugin: Added: <class 'video_capture.uvc_backend.UVC_Source'>
2022-03-08 09:24:09,855 - eye0 - [DEBUG] plugin: Added: <class 'wait_before_init_uvc_source.Wait_Before_Init_UVC_Source'>
...
2022-03-08 09:24:13,175 - eye0 - [ERROR] launchables.eye: Attempt to load unknown plugin: 'Wait_Before_Init_UVC_Source'
2022-03-08 09:24:13,185 - eye1 - [ERROR] launchables.eye: Attempt to load unknown plugin: 'Wait_Before_Init_UVC_Source'
papr 08 March, 2022, 08:41:15

https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/eye.py#L220-L227 looks like an eye process plugin must inherit from PupilDetectorPlugin, too.

+ from pupil_detector_plugins.detector_base_plugin import PupilDetectorPlugin

- class Wait_Before_Init_UVC_Source(UVC_Source):
+ class Wait_Before_Init_UVC_Source(PupilDetectorPlugin, UVC_Source):

-   super().__init__(*args, **kwargs)
+   super(UVC_Source).__init__(*args, **kwargs)

But this multiple-inheritance workaround might have side-effects that I cannot predict right now. See also https://docs.python.org/3/library/functions.html#super You might need to overwrite multiple functions with their respective super calls or implement them with a pass.

user-74ef51 08 March, 2022, 09:08:51

Maybe it is easier to check if the process is running and restart it if it is not? That is, if it is possible to check whether a process is running.

user-74ef51 08 March, 2022, 09:02:03

TypeError: __init__() missing 2 required positional arguments: 'frame_size' and 'frame_rate'

papr 08 March, 2022, 09:09:37

mmh, I wonder if we'd have an easier time monkeypatching the init function.

import functools
import time
from video_capture.uvc_backend import UVC_Source

@functools.wraps(UVC_Source.__init__)
def custom_init(self, *args, **kwargs):
  time.sleep(0.5)
  UVC_Source.__init__(self, *args, **kwargs)

UVC_Source.__init__ = custom_init

this instead of the subclass

user-74ef51 08 March, 2022, 09:13:58

That crashes Pupil Capture before any errors are written to the log πŸ˜…

papr 08 March, 2022, 09:17:14

ok, I will need to test the code tomorrow. Just wrote this off the top of my head πŸ˜„ But I think this could be a viable solution.

user-74ef51 08 March, 2022, 09:25:02

But isn't this the same as adding a time.sleep(0.5) in my Network API controller script?

papr 08 March, 2022, 09:17:50

(also, it had a typo in the import statement)

user-74ef51 08 March, 2022, 09:21:37

Yes I saw and fixed the typo. Alright let me know when you have tested it πŸ™‚

papr 08 March, 2022, 09:32:20

No, because the notification causes the previous plugin to be overwritten. And you need the sleep between shutting down the old one and starting the new one.

user-74ef51 08 March, 2022, 09:34:59

Ohh I see, so that is what happens.

user-f76ddf 08 March, 2022, 12:15:42

Can someone help me? I have my scene with a 3D shape, and I want to store data for each gaze point. I want to store the timestamp, the coordinates of the 3D point of the shape I'm looking at, the camera position, the camera direction... To sum up, the variables in the script gazeData. How can I get them and store them as I'm performing the experiment?

user-f76ddf 08 March, 2022, 12:21:35

For example, in this case I want to store the point of the shape I'm looking at, the timestamp at which I looked at it, the position of the camera at that moment, the gaze direction...

Chat image

user-f76ddf 08 March, 2022, 12:22:24

How could I do it??

user-74ef51 10 March, 2022, 08:01:51

Hi, did you have time to look at the plugin restart problem?

papr 10 March, 2022, 08:09:37

not yet, sorry, I will try to get to it today

papr 10 March, 2022, 09:23:47

The wrapped code called itself recursively 🀦 Please try

import functools
import time
from video_capture.uvc_backend import UVC_Source

UVC_Source__init__ = UVC_Source.__init__


@functools.wraps(UVC_Source.__init__)
def custom_init(self, *args, **kwargs):
    time.sleep(0.5)
    UVC_Source__init__(self, *args, **kwargs)


UVC_Source.__init__ = custom_init
user-74ef51 10 March, 2022, 09:55:41

"name": "UVC_Source"?

user-74ef51 10 March, 2022, 09:54:51

I should put that code in a file under plugins, called whatever I want, right?

papr 10 March, 2022, 09:55:25

correct. Also, remember to rename the start_plugin name field back to the original name

user-74ef51 10 March, 2022, 09:56:55

Okay, I still get errors

capture.log

papr 10 March, 2022, 09:59:31

Can you please increase the sleep duration to 10 seconds (this is huge but if it still fails with this amount of sleep, the issue might be related to something else)

user-74ef51 10 March, 2022, 10:05:21

I still get the error but now it is delayed 10s

papr 10 March, 2022, 10:09:22

Ok, you can delete the file again. Afterward, please run Pupil Capture from the terminal with this env variable: LIBUSB_DEBUG=4 pupil_capture and share the logs here?

user-74ef51 10 March, 2022, 10:15:28

The capture.log?

capture.log

papr 10 March, 2022, 10:18:30

No, the libusb debug log is only printed to the terminal as far as I know. Please share the statements printed to the terminal window.

user-74ef51 10 March, 2022, 10:21:46

pupil_usb_log.txt

papr 10 March, 2022, 13:37:43

I will have a look at this tomorrow

nmt 10 March, 2022, 13:30:27

Hi @user-f76ddf πŸ‘‹. Relevant documentation:
- Accessing data: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#accessing-data
- Recording data: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#recording-data

In order to determine which object is being gazed at, you could use raycasting to find the intersection of gaze and objects in the scene: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#gazedirection--gazedistance-instead-of-gazepoint3d

user-f76ddf 10 March, 2022, 13:46:53

Thank you very much, I have already read all of that and I think I got it. But now I have some specific questions:

  1. What exactly is the pupil timestamp from GazeData.cs? The instant at which the pupils reach a specific position? The instant at which a point of the scene is fixated? Or what?

  2. In order to store the fixated point, is it better to use the hit.point computed by the raycasting in the GazeVisualizer.cs file or the gazePoint3D from the GazeData.cs file? Is the hit.point the same as the origin + distance in the gaze direction in the GazeVisualizer.cs file?

  3. Is the variable lastConfidence in DataVisualizer the confidence of the Gaussian distribution of where you're looking?

user-3cff0d 11 March, 2022, 21:37:41

@papr Hi papr! We've been making progress on closing the difference between the realtime and post-hoc calibration routines for VR headsets running HMD-Eyes. We've been working off of the post-hoc-vr-gazer branch, and I noticed that in the gazer_3d/gazer_hmd.py file, which contains the binocular model for the realtime calibration sequence (ModelHMD3D_Binocular), there is no equivalent fit for the monocular model. The ModelHMD3D_Monocular's _fit() function just returns NotImplemented. Are there plans to implement this monocular model? If not, what monocular model is being used for realtime HMD data in that branch?

papr 15 March, 2022, 08:23:08

Yes, ModelHMD3D_Monocular is not fit independently. Instead, it receives the parameters from a common ModelHMD3D_Binocular. In both monocular and binocular cases, the gaze mapping (not fitting) remains the same as in their corresponding non-HMD counterparts Model3D_Monocular and Model3D_Binocular (through class inheritance).

In summary, Pupil Core headset and VR add-on gazers differ in two things:
1. hardcoded eye translation and reference depth parameters
2. the fitting procedure

The gaze estimation algorithm remains the same

papr 15 March, 2022, 16:11:34

I was not able to find anything helpful yet. In the meantime, could you try running Pupil Capture with sudo and try the replacement again?

papr 16 March, 2022, 08:14:21

I tried reproducing the issue on my Mac running macOS Monterey. On that OS, one is required to run with sudo to get access to the cameras. It seems, as a side effect of that, I am not able to reproduce the issue. I know that on Windows it is possible to access the same camera from multiple processes, i.e. the UVC Source replacement won't be an issue there either. Next Monday, I might have access to a Linux machine that I can use for testing this issue properly.

user-74ef51 17 March, 2022, 09:14:13

Okay I see. Unfortunately, starting Pupil Capture with sudo (like this: sudo pupil_capture) does not seem to make any difference, other than that it can't find the plugins anymore. Also, I noticed that this warning: eye1 - [WARNING] uvc: Could not set Value. 'Backlight Compensation'. keeps popping up regardless of whether the other problem occurs. Do you know what it means?

papr 17 March, 2022, 11:32:39

The warning is nothing to worry about. uvc just attempts to set a specific camera parameter that is not supported by the camera. It has no effect on its functionality.

user-ced35b 17 March, 2022, 21:52:44

Hello, I am working with Capture version 1.14.9. I am wondering how I can make sure my pupil is being detected using the pupil detector. With the new version of Pupil Capture I was able to see a video of each of my eyes. I would then move my eye around to make sure my confidence was around 1. Is this different for the older versions of Capture? So far I have tried Activate source and I choose Pupil Cam2 ID1 (for eye 1) and Pupil Cam2 ID0 (for eye 0). It's not a hardware issue since it works fine with the most up-to-date version of Pupil Capture. Thank you!

papr 18 March, 2022, 07:19:25

If I remember correctly, in that version you had to choose between 2d and 3d detection. Which one are you using?

user-0af9a5 19 March, 2022, 10:39:40

Why is the detection wrong?

Chat image Chat image

papr 21 March, 2022, 11:31:30

Hi, the pupil detection algorithm is designed for black-and-white eye images captured by an IR camera. The algorithm looks for dark edges. In this case, my eyelashes are detected as a false positive. Generally, it is recommended to avoid other IR light sources, e.g. the sun, as the reflections will make pupil detection difficult, as @user-84083b mentioned correctly. The reflection in this image is so strong that I would not even be able to manually annotate the pupil myself. πŸ™‚

user-84083b 19 March, 2022, 21:22:08

Maybe it is due to reflection?

user-84083b 19 March, 2022, 14:36:58

Hello everyone, I'm trying to make a data logger with an Arduino to get data from the Pupil Core and another sensor. Do you know if anyone has ever done similar things? Thanks for your help πŸ™‚

papr 21 March, 2022, 11:35:34

I do not know anyone who has used an Arduino for that purpose yet. Please be aware that you will need a device that is able to run Pupil Capture. The Arduino is likely not powerful enough to run this software. But you should be able to receive data via Pupil Capture's Network API https://docs.pupil-labs.com/developer/core/network-api/

user-84083b 26 March, 2022, 10:59:14

Thanks for your reply!

user-16edf1 21 March, 2022, 15:22:02

Hi everyone! I wonder if you know the exact recording settings used in the Pupil Core example recording 'sample_recording_v2', specifically eye0.mp4? Thanks in advance.

papr 23 March, 2022, 08:29:04

Could you quickly link me to the download link for the recording? I am not sure which one you are referring to.

papr 22 March, 2022, 13:05:57

What kind of settings are you looking for? You will be able to infer frame size from the video file and the selected frame rate from the recorded timestamps.

user-16edf1 21 March, 2022, 15:24:20

Oh, and another question: do you know a way to extract heart rate using the video recordings? Anyone that has tried it out? πŸ™‚

papr 22 March, 2022, 13:08:20

I do not think this would work. There are optical approaches but to my knowledge the light source and camera need to be very close to the skin for this to work.

user-ced35b 21 March, 2022, 17:52:11

Thank you for your response! I was finally able to detect my pupil. What is the difference between 2D and 3D detection, or better, which one should I be using? Thanks again πŸ™‚

papr 22 March, 2022, 13:04:53

Please see this Best Practices section https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline

user-5882af 22 March, 2022, 16:00:39

Hello, I am trying to analyze Pupil Invisible recordings in Pupil Player and I have 2-hour, 1-hour, and 45-minute videos to analyze. I am using the Linux version of Pupil Player. My issue is that the 45-minute videos take about 20 minutes to create the marker cache, but the hour-long video takes up to 4 hours to process the marker cache, and the two-hour video, assuming it does not crash, easily 12+ hours. I know I could use the cloud software for doing surface tracking, but I do not see any way to pull gyroscope data/head pose tracking data. I did some initial testing and I can easily pull the gyroscope data out of the videos as long as I have the surface tracker plugin turned off, but head pose tracking runs into the same problem as the surface tracking, as it eventually slows to an absolute crawl or crashes the software. My question is: is there a way to cut the length of my videos, or is the best solution using a combination of Pupil Cloud to do the surface tracking and simply pulling the gyroscope data from Pupil Player? I would like to cut the videos because then all export data is nicely organized by Pupil Player, but I completely understand if this is not possible. I think the head pose tracking could be nice, but I am fairly confident that the gyroscope data will work for our purposes.

papr 22 March, 2022, 16:03:13

Hey, you can get the gyro data from the Raw Data Exporter enrichment in Pupil Cloud, too. See https://docs.pupil-labs.com/invisible/reference/export-formats.html#imu-csv

user-5882af 22 March, 2022, 17:24:41

That is great, I must have missed it when I was first reading this documentation. Thank you @papr

user-b74779 23 March, 2022, 06:45:02

Hey, do you have something like a doc referencing all kinds of notifications that we can send to the IPC Backbone? I sometimes get weird results; I think I am not understanding exactly what my notification is doing.

papr 23 March, 2022, 07:50:31

Unfortunately, we do not have such a document. Back in 2016, we actively decided against it because it would quickly have become outdated. With the software reaching its current stability, it is time to work on it.

Until then, what are the issues that you are seeing?

user-16edf1 23 March, 2022, 07:27:38

I was analyzing the mean colour intensity for each frame and I could identify regular peaks in the example videos, but not in the videos that I was recording myself.

papr 23 March, 2022, 07:47:57

What hardware are you using for your recording?

user-16edf1 23 March, 2022, 08:25:21

I am using a Microsoft Windows 11 pro. Processor : 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz, 2304 Mhz, 8 Core(s), 16 Logical Processor(s)

papr 23 March, 2022, 08:26:37

Apologies for being unclear, I was referring to the eye tracking hardware. Are you using a Pupil Core headset?

user-16edf1 23 March, 2022, 08:27:04

ah, yes, I use the Pupil Core headset

user-16edf1 23 March, 2022, 08:29:22

yes, sure

user-16edf1 23 March, 2022, 08:36:09

https://drive.google.com/file/d/1vzjZkjoi8kESw8lBnsa_k_8hXPf3fMMC/view

papr 23 March, 2022, 09:12:01

Is it possible that the peaks correlate with the person looking down, and the eyelid being more visible?

papr 23 March, 2022, 09:09:17

Mmh, from visual inspection I cannot see a reason for the regular peaks. Are they perfectly regular? If so, at which frequency?

user-b74779 23 March, 2022, 20:10:48

Well, I was trying things with remote recording (you sent me the GitHub link 2 days ago). The thing with this project is that I have some latency because, in the video backend, the author is sending images one by one... I was wondering, is it possible to select a video source for eye0, eye1, and world from a URL?

papr 24 March, 2022, 09:58:21

I am not sure I understand. The backend uses OpenCV to access the frames, which decompresses the images by default. If you want to transfer the images in their original compressed format (MJPEG), you will need to modify the code to use pyuvc https://github.com/pupil-labs/pyuvc

How does transmission latency relate to the "weird results" you reported?

You can monitor all notifications using this script https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py

Please note that the zmq messages containing image data consist of 3 zmq frames (the topic as a string, meta data as a msgpack-encoded dictionary, and the raw data). Most other messages only contain two frames. When processing this kind of message yourself, you need to pay attention to this third frame. Otherwise, you might reach an unexpected state of the stream. See this implementation for reference https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/zmq_tools.py#L117-L120
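
For reference, a sketch of a receive helper that is robust to those extra frames (mirroring the linked implementation):

import zmq
import msgpack

def recv_topic_payload(socket):
    # Receive one message: topic frame + msgpack payload frame
    topic = socket.recv_string()
    payload = msgpack.unpackb(socket.recv(), raw=False)
    # Drain any remaining frames of this message (e.g. raw image data)
    extra_frames = []
    while socket.get(zmq.RCVMORE):
        extra_frames.append(socket.recv())
    if extra_frames:
        payload["__raw_data__"] = extra_frames  # same convention as zmq_tools.py
    return topic, payload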

user-d1072e 29 March, 2022, 17:57:00

Sorry for this naΓ―ve question, but I'm still confused. I'm checking this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb, cell 10. To denormalize gaze coordinates, i.e. convert them from the normalized scale (0-1) to frame coordinates with size W, H, shouldn't X, Y be multiplied by W-1, H-1 instead of W, H? An image with size W, H should have the top-left corner coordinate be (0,0) and the opposite corner coordinate be (W-1, H-1), right? Another way to think about this is considering the first channel (red channel) of an image with the size (W, H). Then to access the corner element, it should be img[0,0], whereas for the opposite corner, it should be img[W-1, H-1]. Hence, to map the normalized coordinate to the array, we should multiply by W-1, H-1, right?

Chat image

nmt 30 March, 2022, 08:56:57

Hi @user-d1072e. H and W in this example are integers. frame_index_image is an image array returned by Matplotlib's imread function (see cell 8); its shape is a tuple, and slicing it with frame_index_image.shape[:-1] gives us the correct height and width values.

user-d1072e 30 March, 2022, 10:58:40

Yes, I agree with the height and width values, but that's not the problem I'm asking about. I'm asking about mapping normalized coordinates to a correct index in the frame image. Let's say the image has a size (image resolution) of 5x5. Undoubtedly, the height and width returned from frame_index_image.shape[:-1] are 5 and 5. But when it comes to mapping a normalized gaze coordinate (0.5, 1.0) to a numpy array with size 5x5, the question is where do we put this gaze? Without math, we know the image array has indices from 0 to W (or H) - 1, i.e. 0 to 4, and hence the index of the gaze should be (2, 4), which is the same as multiplying by W (or H) - 1. However, if we simply multiply the normalized gaze by the width and height, we actually get 2.5 and 5 (an out-of-range index).

nmt 30 March, 2022, 13:03:05

The image frame size is given in pixels. You seem to be applying a zero index in your example, which is incorrect. If you re-calculate your example without the zero index, the results are as expected.

user-d1072e 30 March, 2022, 20:18:03

Yeah, I tried to apply zero-indexing since I want to map it to a location in the numpy array. But I think it's better to just calculate without zero-indexing (i.e. just multiply by W and H), then subtract 1 to get the index into the numpy array. That would be more straightforward.

papr 30 March, 2022, 21:42:53

The gaze point might be outside of the frame and still be valid. If you want to be robust against index errors you will always have to check the frame boundary.

That said, let's have a look at how the norm_pos value is generated in the first place [1]. In 3d gaze mapping, the gaze point is estimated in pixels first [2] and then transformed into the normalized coordinate system [3].

Transforming the norm values by multiplying with width/height [3] will result in the original pixel estimates (with some very small errors due to floating point operations). [2] estimates in sub-pixel accuracy (due to using [4]). If you want integer pixel values, I suggest rounding to the nearest one. The initially mentioned boundary check should be performed afterward.

[1] https://github.com/pupil-labs/pupil/blob/04b6fcc70f8e42ebadbfba9074354cfcad528867/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L171-L178 [2] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L651-L688 [3] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/methods.py#L477-L502 [4] https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga1019495a2c8d1743ed5cc23fa0daff8c
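
Putting that together, a sketch of a safe denormalization helper:

def denormalize_to_pixel_index(norm_pos, width, height):
    # norm_pos has its origin at the bottom left; images index from the top left
    x = norm_pos[0] * width
    y = (1.0 - norm_pos[1]) * height  # flip y
    col, row = int(round(x)), int(round(y))  # round the sub-pixel estimate
    # Boundary check: gaze may fall outside the frame and still be valid data
    if 0 <= col < width and 0 <= row < height:
        return row, col  # usable as img[row, col]
    return None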

End of March archive