💻 software-dev


user-8619fb 01 November, 2023, 18:28:05

That's it! You nailed it. Thank you so much!

user-1763e2 02 November, 2023, 18:49:49

Oh, nice! I was wondering if it's possible to have this same thing automated through ZMQ messages with Pupil Player or a batch script? @nmt @user-cdcab0

user-cdcab0 08 November, 2023, 12:09:27

Could you elaborate a little bit on what you're trying to accomplish?

user-2251c4 03 November, 2023, 08:13:20

Hi! I exported some data from the Pupil Cloud Marker Mapper. I am wondering what some of the columns in surface_positions.csv indicate (tl x, tr y, br y, etc.) and how they differ?

user-d407c1 03 November, 2023, 09:14:53

Hi @user-2251c4 ! Have you seen our export format documentation? These are the x/y coordinates of the top-left, top-right, bottom-left, and bottom-right corners of your surface definition: https://docs.pupil-labs.com/export-formats/enrichment-data/marker-mapper/#aoi-positions-csv

user-2251c4 03 November, 2023, 09:25:25

Oh, thanks! I was looking for the documentation but couldn't find it; this helps.

nmt 09 November, 2023, 12:13:15

Let's move the conversation (https://discord.com/channels/285728493612957698/285728493612957698/1172134508938657836) here.

Can you please share the script you're using to send the annotations to Pupil Capture?

user-b5484c 09 November, 2023, 13:59:29

Not sure how useful that is, because I'm using C# inside Bonsai, a visual programming language that deals with asynchronous streams of data.

Here is the snippet of code, where value is the string {"topic":"annotation", "label":"myclock", "timestamp": 2.14} and is fixed for every annotation:

byte[] json_bytes = MessagePackSerializer.ConvertFromJson(value);
return json_bytes;

Anyhow, I don't care so much about the content of my annotations. I only care about the time at which they arrive at Pupil Capture, in the Pupil Capture clock. And I know that they arrive, and that they arrive with a proper Pupil Capture timestamp on them (as I show in my screenshot of the Pupil Capture terminal). What I don't know is whether you actually store this time value, or whether you just store the timestamp the user sends along with each annotation.

nmt 09 November, 2023, 17:34:25

Both are stored. If the annotation plugin is loaded in the Player application when you hit export, then the export should contain all of the annotations stored alongside the recording.

user-b5484c 09 November, 2023, 17:37:27

Do you have code to load the .pldata files in Python? If you do, maybe that solves all my problems.

nmt 09 November, 2023, 17:40:36

Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144
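Presumably that boils down to something like load_pldata_file from file_methods.py in the pupil source tree (a sketch; the paths are placeholders and assume a local checkout of https://github.com/pupil-labs/pupil):

import sys

# Assumption: a local checkout of the pupil repository
sys.path.append("/path/to/pupil/pupil_src/shared_modules")
from file_methods import load_pldata_file

# Reads annotation.pldata + annotation_timestamps.npy from the recording folder
pldata = load_pldata_file("/path/to/recording", "annotation")
for datum, ts in zip(pldata.data, pldata.timestamps):
    print(ts, datum["label"], datum["timestamp"])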

user-b5484c 09 November, 2023, 17:54:15

OK. So after loading the annotation.pldata file I get the following structure:

PLData(data=[{'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}, {'topic': 'annotation', 'label': 'myclock', 'timestamp': 2.14}], timestamps=array([2.14, 2.14, 2.14, 2.14, 2.14, 2.14, 2.14, 2.14]), topics=['annotation', 'annotation', 'annotation', 'annotation', 'annotation', 'annotation', 'annotation', 'annotation'])

which contains only the timestamps that I sent to Pupil Capture (2.14). I'm guessing that this is what gets loaded into Pupil Player, and that is the reason why in the Player the annotations all appear at the same time. Where can I find the timestamps of the annotations in the Pupil Capture clock?

nmt 09 November, 2023, 18:09:57

Okay I see what's happened. The fix would be to not send a custom timestamp value.

user-b5484c 09 November, 2023, 18:19:30

OK. I tried that, but the program crashes in both cases:

(1) If I send the string with an empty timestamp, i.e. {'topic': 'annotation', 'label': 'myclock', 'timestamp'}

  It returns:
              Traceback (most recent call last):
              File "launchables\world.py", line 755, in world
              File "annotations.py", line 230, in recent_events
              File "zmq_tools.py", line 119, in recv
              File "zmq_tools.py", line 130, in deserialize_payload
              File "msgpack\_unpacker.pyx", line 202, in msgpack._cmsgpack.unpackb
              msgpack.exceptions.ExtraData: unpack(b) received extra data.

(2) If I remove the timestamp field altogether, i.e. {'topic': 'annotation', 'label': 'myclock'}

  It returns:
              Traceback (most recent call last):
              File "launchables\world.py", line 755, in world
              File "annotations.py", line 233, in recent_events
              KeyError: 'timestamp'

nmt 09 November, 2023, 18:31:52

Ah okay. So you'll need to send a timestamp in that case (it's been some time since I looked at this). You can get the current Pupil time using this snippet: https://github.com/pupil-labs/pupil-helpers/blob/21dee56f9d1f2736f7e1f139825a3e12561ad272/python/remote_annotations.py#L129
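Condensed, the pattern in that helper looks roughly like this (a sketch, assuming Pupil Remote is running on its default port 50020 and the Annotation plugin is enabled in Capture):

import time
import zmq
import msgpack

ctx = zmq.Context()

# Connect to Pupil Remote (REQ/REP) on its default port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask for the PUB port and the current Pupil time
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Publish the annotation as a two-frame message: topic string + msgpack payload
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect

annotation = {"topic": "annotation", "label": "myclock", "timestamp": pupil_time, "duration": 0.0}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))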

user-b5484c 09 November, 2023, 18:38:33

OK. Thanks, I think it is solved now. This is the code I used as a guide for the annotations. I don't need that level of time precision. I really like the way you allow devices to sync through ZeroMQ, but you should save the Capture timestamps upon annotation arrival (independently of what the user sends). There is really no need to go back and forth to get timestamps, especially because not all the communication is done in Python.

nmt 09 November, 2023, 18:36:09

In our example, as you stated, we use a method to compensate for network latency. This is the recommended approach. For example, you might send a trigger, but there will be some latency associated with when the trigger was sent and when it was received.
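A sketch of that latency-compensation idea, using the same Pupil Remote REQ socket as in the sketch above: estimate the offset between your local clock and Pupil time once, then stamp events locally and convert.

import time

def estimate_pupil_time_offset(pupil_remote):
    # Ask Pupil Remote for its current time and use the midpoint of the
    # round trip so network latency is roughly cancelled out.
    t0 = time.perf_counter()
    pupil_remote.send_string("t")
    pupil_time = float(pupil_remote.recv_string())
    t1 = time.perf_counter()
    return pupil_time - (t0 + t1) / 2.0

# Later, stamp events on the local clock and convert:
# event_pupil_time = time.perf_counter() + offset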

user-46e202 10 November, 2023, 12:20:32

Hello everyone, I am trying to develop a Python app that uses Pupil Labs Python packages and then deploy it on Android (using tools like Buildozer, Chaquopy, etc.). However, I am having problems building for Android because of Pupil Labs' non-Python dependencies (like av). Does anyone have experience doing this?

user-d407c1 10 November, 2023, 12:55:33

Hi @user-46e202 ! Interesting, which libraries are you trying to port over to Android? pyav is a Python library that acts as bindings for FFmpeg. I don't personally have experience doing that, but it might be worth checking https://github.com/arthenica/ffmpeg-kit

user-46e202 10 November, 2023, 13:05:13

I am trying to bring the Pupil Labs real-time Python API to Android: https://github.com/pupil-labs/realtime-python-api Correct me if I'm wrong, but I think I have to port all its dependencies too.

user-d407c1 10 November, 2023, 14:22:57

That's a client to the API; the API itself is here: https://github.com/pupil-labs/realtime-network-api. You can use the Python client as a guideline for porting it over to other programming languages.

user-46e202 10 November, 2023, 16:28:28

Thank you for your answer. What I see in the links seems to be the APIs for recording only. What about the streaming APIs?

nmt 11 November, 2023, 17:40:26

Docs for streaming are here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html#streaming-api 🙂
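With the Python client, the simple streaming interface looks roughly like this (a sketch; other languages would implement the RTSP/websocket protocols described in the under-the-hood guide):

from pupil_labs.realtime_api.simple import discover_one_device

# Find a Neon (or Pupil Invisible) device on the local network
device = discover_one_device()

# One scene video frame plus the gaze sample matched to it
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
print(frame.bgr_pixels.shape, gaze.x, gaze.y)

device.close()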

user-a93d29 10 November, 2023, 22:58:43

Hello, I am currently attempting to get 3D transformations (rotation, translation) between the eye and its gaze point after a 3D calibration has been performed. Looking inside the code (pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py), we have access to poses_in_world (of both eyes) and gaze_targets_in_world, which could be used to calculate their relative poses. I don't think these transformations are explicitly output by Pupil Player (unless I am incorrect). Could someone help out with how I can access these variables so they are output from Pupil Player, or whether there's a direct way to calculate the relative pose between either eye and the gaze point? Many thanks.

nmt 11 November, 2023, 18:10:54

Hi @user-a93d29! In the gaze_positions export, we essentially give you the absolute positions of the eyes and their gaze points in 3d world camera coordinates. We provide a variable called gaze_point_3d (gaze point in world camera space) and the eyeball centres (also in world camera space). Does that information not suffice? Or perhaps I misunderstand your goals...

user-a93d29 11 November, 2023, 20:21:44

thanks for the information. I need the rotations as well for a full SE(3) transformation of the eyes and gaze pose. I think with the bundle adjustment this is estimated in calibrate_3d as 4x4 matrices (I presume), so I wanted to look at those values to make sure that’s what was offered. Is there any way I can access these values, or specifically these variables that I mentioned?

user-a93d29 13 November, 2023, 15:40:23

So sorry Neil, just wanted to follow up on this : (, was really hoping I could figure something out...

nmt 13 November, 2023, 16:01:32

I'm still not 100% sure I understand what you need. The eyeball centre is an x,y,z coordinate, and so is gaze_point_3d. From there it's trivial to compute both direction and distance. Perhaps it would be helpful if you could elaborate on your ultimate goal here.

user-a93d29 13 November, 2023, 16:06:00

Apologies for the ambiguity. A better question to ask first is whether Pupil Labs has the capability to estimate the coordinate frame of the eye (with 3D calibration) and the coordinate frame of the pupil point, so that a 3D transformation between those two coordinate frames can be calculated. Looking at the codebase of Pupil, it looks like there's bundle adjustment going on that accounts for rotation and translation for the eyeball poses and gaze targets, which led me to believe that there should be a way to do this. Sorry again for the confusion.

user-cdcab0 13 November, 2023, 21:00:25

I think the eye position and gaze point outputs are already in the same coordinate frame; the difference between those two points gives you the forward vector. From there, I think you'd have to assume the up/right vector, but calculating an orientation matrix should be simple, right?
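A sketch of that idea in numpy: the up vector is an assumption, since any roll of the eye about the gaze axis is unobservable from the two points alone.

import numpy as np

def orientation_from_gaze(eye_center, gaze_point_3d, up=(0.0, 1.0, 0.0)):
    # Forward axis = normalised vector from eyeball centre to gaze point,
    # both given in world-camera coordinates; the up vector is arbitrary.
    forward = np.asarray(gaze_point_3d, float) - np.asarray(eye_center, float)
    forward /= np.linalg.norm(forward)
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.column_stack([right, true_up, forward])  # 3x3 rotation matrix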

user-fb5b59 14 November, 2023, 12:30:47

Hey, a few questions regarding receiving values from the NEON device:

1) We previously integrated the Pupil Invisible device by using the "zyre" library in C++, so we were able to directly receive the eye streams, gaze data, world video, etc. in our C++-based system. I assume that this is no longer possible, so we have to switch to using the HTTP REST API, websocket connections, etc.?

2) Is it possible to use the websocket connection also for the video streams/gaze data, or is it only used for the status updates?

Thank you! 🙂

user-d407c1 14 November, 2023, 12:58:58

Hi @user-fb5b59 ! I guess the Pupil Invisible library you are referring to is the legacy API https://docs.pupil-labs.com/invisible/real-time-api/legacy-api/ ; it is discontinued and not compatible with Neon.

You will need to use the newer real-time API with Neon. You can see how it works under the hood, along with information about what you can obtain from the REST API, the websockets, and the RTSP protocols, here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html#streaming-api

nmt 16 November, 2023, 10:36:00

Re. https://discord.com/channels/285728493612957698/1047111711230009405/1174657743345434654, can you share the console output?

user-29f76a 16 November, 2023, 10:40:50

Chat image Chat image

nmt 16 November, 2023, 10:51:06

The filepath in the left screenshot seems to be correct. In 👓 neon you said there was an issue with this. Can you post an image that shows the traceback when you run it?

user-29f76a 16 November, 2023, 10:55:12

It's stuck here

Chat image

user-d407c1 16 November, 2023, 11:02:42

Hi @user-29f76a ! Are you pressing space to select a new area and ESC to finish? If so, but it is not working, may I ask where you are running your Jupyter notebook from? If you struggle with Jupyter notebooks, have you considered using our vanilla Python code? https://gist.github.com/mikelgg93/a250811d59885e791cbeeb99fd12ef55

user-956f89 17 November, 2023, 09:11:01

Hi, I did my plotting successfully following your coding instructions, but I would like some support on how to change the code to get "pupil_timestamp_datetime" on the X axis instead of "pupil_timestamp". What should I change in this script to plot diameter_3d against pupil_timestamp_datetime?

import matplotlib.pyplot as plt

plt.figure(figsize=(16, 5))
plt.plot(eye0_df['pupil_timestamp'], eye0_df['diameter_3d'])
plt.plot(eye1_df['pupil_timestamp'], eye1_df['diameter_3d'])
plt.legend(['eye0', 'eye1'])
plt.xlabel('Timestamps [s]')
plt.ylabel('Diameter [mm]')
plt.title('Pupil Diameter')
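One possible approach, sketched below: convert the Pupil timestamps to wall-clock datetimes using the offset between start_time_system_s and start_time_synced_s from the recording's info.player.json, then plot against the new column. The path is a placeholder and eye0_df/eye1_df are assumed to come from the existing code above.

import json
import pandas as pd
import matplotlib.pyplot as plt

# Offset between wall-clock time and Pupil time, from the recording folder
with open("path/to/recording/info.player.json") as f:
    info = json.load(f)
offset = info["start_time_system_s"] - info["start_time_synced_s"]

for df in (eye0_df, eye1_df):
    df["pupil_timestamp_datetime"] = pd.to_datetime(df["pupil_timestamp"] + offset, unit="s")

plt.figure(figsize=(16, 5))
plt.plot(eye0_df["pupil_timestamp_datetime"], eye0_df["diameter_3d"])
plt.plot(eye1_df["pupil_timestamp_datetime"], eye1_df["diameter_3d"])
plt.legend(["eye0", "eye1"])
plt.xlabel("Time")
plt.ylabel("Diameter [mm]")
plt.title("Pupil Diameter")
plt.show()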

user-d407c1 17 November, 2023, 09:23:00

Hi @user-956f89 ! This has been answered here https://discord.com/channels/285728493612957698/285728493612957698/1175000877992517642 and by email. In the future, please refrain from posting the same message in multiple channels at once.

user-13078d 17 November, 2023, 09:33:10

Hello! I was wondering if there is a way of obtaining the raw signal from the eye tracker directly connected to the USB-C port, without the need to start Pupil Capture or Pupil Service. Would this method require C/C++ or any other programming language? I cannot find any documentation regarding this. Your help is much appreciated. Thank you!

user-d407c1 17 November, 2023, 09:52:10

Hi @user-13078d ! Have you seen our pyuvc library (https://github.com/pupil-labs/pyuvc)? That gives you access to the cameras, if that's what you are looking for. Or are you looking for some other data?
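A minimal sketch of what that looks like with pyuvc, assuming the package is installed and the cameras are not already claimed by Pupil Capture (exact attribute names can vary between pyuvc releases):

import uvc

# Enumerate connected UVC cameras; the headset's cameras show up here
for dev in uvc.device_list():
    print(dev["name"], dev["uid"])

# Open the first camera and grab a single frame
cap = uvc.Capture(uvc.device_list()[0]["uid"])
frame = cap.get_frame_robust()
print(frame.gray.shape)
cap.close()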

user-2f4829 23 November, 2023, 15:29:31

Hi, is it possible to reprocess the .mp4 videos obtained from the wearable eye-tracking headset to recognize eye blinks in a script? I know one can do that with the Player application, but our data is on a remote server with no graphical interface. Moreover, we would like to automate the extraction as much as possible. We did not activate the blink detection plugin during the online experiment.

nmt 24 November, 2023, 10:53:41

Hey @user-2f4829 👋. May I ask which headset you're referring to?

user-1391e7 24 November, 2023, 10:35:43

Idk if I'm in the right channel, but I'm curious about the Pupil Cloud API.

With more and more computation happening in the cloud, is it possible to queue processing of uploaded recordings, ask whether processing is done, and download the processed information?

user-1391e7 24 November, 2023, 10:38:56

or is it important that users do these sorts of things strictly on the cloud web interface?

nmt 27 November, 2023, 05:04:54

Hey @user-1391e7 - this is the right channel 🙂.

At the moment, our Cloud API's functionality is a bit limited. While you can download things like timeseries data, it's not currently possible to create, queue, or track enrichments. We're planning to develop our API further, but we don't have a set timeline for this yet.

In the meantime, I'd suggest using Projects in Pupil Cloud's UI. There, you can add and batch-process multiple recordings, and run and download enrichments at the project level, which could be a handy workaround for now.

API docs are here for reference: https://api.cloud.pupil-labs.com/v2

user-2f4829 24 November, 2023, 17:36:14

The Pupil Core eye tracking headset. World camera: high-speed 120 Hz. Eye cameras: 200 Hz binocular.

nmt 27 November, 2023, 02:48:17

Thanks for clarifying. We don't have a way to reprocess the .mp4 videos without utilising the Player UI. Implementing the blink detector offline is technically feasible, but it would require development work. If you're interested, the code for the detector can be found here: https://github.com/pupil-labs/pupil/blob/e9bf7ef1a4c5f2bf6a48a8821a846c5ce7dccac3/pupil_src/shared_modules/blink_detection.py#L337

I would actually recommend operating Pupil Core's blink detector using the Player UI. This enables you to manually verify the quality of the blink classifications. It may be necessary to adjust the detector thresholds to ensure blinks are accurately classified. You can read more about that here: https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds
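If you do go the scripted route, note that the linked detector works on the pupil detection confidence signal, not the raw .mp4 pixels, so you would need pupil data (e.g. an exported pupil_positions.csv or the recording's pupil.pldata). A very rough stand-in for its idea, not the actual filter-based detector, might look like this sketch:

import pandas as pd

# Rough stand-in: flag stretches of low mean 2D pupil confidence as candidate
# blinks. The real detector filters the confidence signal and uses separate
# onset/offset thresholds.
df = pd.read_csv("path/to/export/pupil_positions.csv")
conf = df.groupby("pupil_timestamp")["confidence"].mean().sort_index()

threshold = 0.5  # assumption; tune against manually verified blinks
low = conf < threshold

blinks, start = [], None
for ts, is_low in low.items():
    if is_low and start is None:
        start = ts
    elif not is_low and start is not None:
        blinks.append((start, ts))
        start = None
if start is not None:
    blinks.append((start, conf.index[-1]))

print(len(blinks), "candidate blink intervals")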

user-a93d29 27 November, 2023, 17:58:08

Hello! I had a quick question about the output plcal file for 3D calibration, which has left_model's eye_camera_to_world matrix, right_model's eye_camera_to_world matrix, and binocular_models' eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1. What is the difference between left, right, and binocular models' matrix0 and matrix1? Thank you!

nmt 29 November, 2023, 06:25:31

@user-a93d29, these are matrices that encode rotation and translation between each eye camera and the scene camera. They are used to transform pupil vectors from eye camera to scene camera space and produce gaze estimates. What are you trying to do with these? Do you have an end goal or just exploring? If you're interested, the source code is here: https://github.com/pupil-labs/pupil/blob/e9bf7ef1a4c5f2bf6a48a8821a846c5ce7dccac3/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L90
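As a sketch of how such a 4x4 matrix is applied (generic homogeneous-coordinate math, not code from the gazer itself):

import numpy as np

def eye_to_world(point_eye_cam, eye_camera_to_world):
    # Apply a 4x4 [R | t] matrix to a 3D point via homogeneous coordinates
    p = np.append(np.asarray(point_eye_cam, float), 1.0)
    return (np.asarray(eye_camera_to_world, float) @ p)[:3]

# Directions (e.g. pupil normals) only use the rotation block:
# direction_world = eye_camera_to_world[:3, :3] @ direction_eye_cam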

user-a93d29 27 November, 2023, 19:21:37

And what units are these matrices in, namely the translation component?

user-a93d29 29 November, 2023, 16:19:18

Hi Neil, thanks for the response. I am currently performing 3D calibrations at various head positions and orientations and seeing how these output matrices change. I was just confused about the difference between these four: I could imagine that the left_model's eye_camera_to_world_matrix maps the left eye camera to scene camera space and the right_model's maps the right eye camera to scene camera space, but there were also two more matrices under "binocular_model" which I didn't quite understand...

nmt 30 November, 2023, 04:01:36

Correct. The 3D gazer reverts to a monocular model, e.g. when one eye is occluded. This is slightly less accurate than binocular gaze estimation, but relevant nonetheless.

user-a93d29 29 November, 2023, 16:39:55

Upon more reading, it seems like the calibration produces both left/right monocular models and a binocular model, which yields these four matrices. I think the units are in millimeters as well, but someone can correct me on that...

user-13078d 30 November, 2023, 14:19:26

Hello! I'm using pyuvc to capture frames from the eye tracker. I was wondering if there is a way to configure the frame rate or frame capture mode (e.g. MODE: 192 x 192 @ 120 (MJPG)) of the cameras. I cannot find any documentation. Thank you!

user-cdcab0 30 November, 2023, 20:21:40

I believe these are simply properties on a Capture device. E.g.,

cap = uvc.Capture(device_uid)
cap.frame_size = (192, 192)
cap.frame_rate = 200
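
Depending on the pyuvc version, you may also be able to enumerate the supported modes and pick one explicitly; a sketch, with the caveat that attribute names may differ between releases:

import uvc

cap = uvc.Capture(uvc.device_list()[0]["uid"])

# Inspect the (width, height, fps) combinations the camera advertises
print(cap.available_modes)

# Then request the one you want, e.g. 192 x 192 @ 120
cap.frame_size = (192, 192)
cap.frame_rate = 120
frame = cap.get_frame_robust()
print(frame.gray.shape)
cap.close()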
End of November archive