πŸ’» software-dev


user-4b18ca 02 October, 2024, 13:03:32

Hi all! I noticed something peculiar. We are working with MATLAB R2024a, using the pl-neon-matlab library for communication with Neon. Starting and Stopping+Saving works fine, unless we turn on the new option in the app to stream LSL data. In this case, after a successful initial recording, the next start of a recording will fail with "Python Error: DeviceError: (400, 'Cannot start recording, previous recording not completed')". However, the recording seems to be stopped and saved successfully.

Only after closing and reopening the app, it works again (once).

Comments:
- behaviour is repeatable and linked to the LSL option
- we are aware MATLAB R2023b is recommended as the last working version, but we have so far not experienced other problems with R2024a
- we observed that the phone continues to send LSL data on the network even after the recording is stopped (my understanding/expectation is that data is streamed on LSL only while recording)
- the app is a recent version
- the pl-neon-matlab version is a few weeks old

Questions:
- Is this known behaviour?
- Should I open a ticket for this?

user-f43a29 02 October, 2024, 15:01:27

Hi @user-4b18ca , at first glance, this seems to be unrelated to the MATLAB integration. Do you get the same error if you use the Real-time API from within a pure Python session with those Companion App settings?

Also, the recording on the Companion App is independent of the LSL recording in the LabRecorder App. You can still stream data via LSL and via the Real-time API, even when a recording is not running on the Companion App.
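For reference, a minimal pure-Python sketch of that check using the simple Real-time API (assuming the phone is discoverable via mDNS on the same network); if this reproduces the 400 error with LSL streaming enabled, the MATLAB layer can be ruled out:

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # or Device(address=..., port="8080") for a known IP
try:
    for i in range(3):
        device.recording_start()
        time.sleep(5)                      # record for a few seconds
        device.recording_stop_and_save()
        time.sleep(5)                      # give the phone time to finalize the recording
        print(f"recording cycle {i + 1} completed")
finally:
    device.close()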

user-912183 02 October, 2024, 13:40:34

Hi PL team. I am testing Neon IMU data and getting inconsistent values for orientation. I am reading the live stream using pupil_labs.realtime_api.simple:

while True:
    imu = self.device.receive_imu_datum()
    q = np.array([
        imu.quaternion.x,
        imu.quaternion.y,
        imu.quaternion.z,
        imu.quaternion.w
    ])
    a = 180 * quaternion_angle(q) / np.pi
    print(a)

def quaternion_angle(q: np.ndarray):
    r1 = np.array([0, 1, 0])          # world north pole axis
    r2 = R.from_quat(q).apply(r1)     # local IMU y-axis in world coordinates
    angle2 = vectors_angle(r1, r2)
    return angle2

def vectors_angle(v1: np.ndarray, v2: np.ndarray):
    angle = np.dot(v1, v2)
    angle /= np.linalg.norm(v1) * np.linalg.norm(v2)
    angle = np.arccos(angle)
    return angle

I put the Neon glasses into the box and rotate around the z axis, for example by 90 or 180 degrees and back, and when it comes back to the initial position it gives completely different numbers for the orientation (a = angle in degrees between the world y axis and the local IMU y axis). For example: in the initial position the IMU gives ~120 degrees and gradually goes down to 4 degrees (I do not touch the device). Then I rotate the device by 90 degrees and the code gives ~175 degrees; then I rotate the device back to the initial position and the code gives ~45 degrees. Could you please point out what might cause this problem? Another question is how fast the IMU data adapts to an orientation change (i.e. the fusion filter introduces some inertia; how large is it in time)? Thank you

nmt 02 October, 2024, 13:47:29

Hi @user-912183! Did you calibrate the IMU prior to your tests? If not, that might explain your data: https://docs.pupil-labs.com/neon/data-collection/calibrating-the-imu/

user-912183 02 October, 2024, 14:00:16

Thanks for the reply. I do not think this is my case, as I am looking at the relative orientation (initial angle and the angle after rotation). As far as I can see from the docs, "...will self-calibrate in under 3 seconds and require no assistance", and that's exactly what I see in the beginning, when I wait a few seconds until the initial value settles (the angle goes down from 120 to 4 degrees).

nmt 02 October, 2024, 13:53:01

In terms of latency in the orientation estimates, we don't have figures for that. The fusion engine is built into the IMU. The datasheet does have sensor noise profiles, though, which might be useful for you. See this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1105775561357393951

nmt 02 October, 2024, 14:14:00

Without going into too much detail: pitch and roll will quickly self-calibrate because gravity is easily measured, but yaw typically needs some movement of the IMU for the fusion algorithm to converge accurately. Can you retry after following the movements shown in the video linked in the docs? If you're streaming the data in real-time, you'll be able to see it align with expected values.

user-912183 03 October, 2024, 09:32:14

Thank you Neil, the calibration (about 1 min of calibration choreography) helped to get rid of the crazy numbers; I now get reproducible angle values for the same orientations. But I still have some problems with precision: for example, orienting the device towards (more or less) physical North gives an angle of ~0, orienting it to the West gives ~85 degrees, South gives the expected Β±180 degrees, but orienting it to the East gives around -60 degrees (expected -90). Is this a technical limitation of the IMU's precision?

nmt 02 October, 2024, 14:33:29

Also note, the quaternions already define a rotation between the IMU local coordinate system and the world coordinate system. Thus, if you want the angle of rotation with respect to north (aka heading) you can simply do a quat2euler conversion and look at the yaw component.
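A hedged sketch of that conversion with SciPy (the Euler axis sequence below is an assumption; match it to the IMU's documented coordinate frame):

from scipy.spatial.transform import Rotation as R

def heading_from_imu_quaternion(x, y, z, w):
    # SciPy expects quaternions in [x, y, z, w] order
    rotation = R.from_quat([x, y, z, w])
    # "ZXY" is an assumed axis sequence - pick the one matching the IMU frame;
    # the component about the gravity-aligned axis is the yaw/heading
    yaw, pitch, roll = rotation.as_euler("ZXY", degrees=True)
    return yaw

# usage with a live datum:
# imu = device.receive_imu_datum()
# print(heading_from_imu_quaternion(imu.quaternion.x, imu.quaternion.y,
#                                   imu.quaternion.z, imu.quaternion.w))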

user-376ddb 02 October, 2024, 16:22:00

Hey PL Team! I am using the Pupil Labs Realtime API and trying to call pupil_labs.realtime_api.device.Device.recording_start() and pupil_labs.realtime_api.device.Device.recording_stop_and_save() to make a recording on my Neon glasses (with the Neon Companion app). I noticed that this sometimes works and makes the recording properly, but other times it will "break" the Neon Companion app so that it can't record anymore, both when calling the recording start function (where it throws a DeviceError saying "Cannot start recording, previous recording not completed") and when just tapping the record button in the Neon Companion app.

Is there any solution to this problem? I am using Neon Companion version 2.8.25-prod and pupil labs realtime api v1.3.3

user-f43a29 02 October, 2024, 16:29:09

Hi @user-376ddb , we recently released v2.8.31-prod of the Neon Companion App. Could you upgrade to that version and try again?

If the issue persists, then please open a support ticket in πŸ›Ÿ troubleshooting .
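As a stop-gap while debugging, a hedged sketch that retries recording_start with the simple API when the phone reports that the previous recording is still being finalized (the retry count and delay are arbitrary; a broad except is used to stay version-agnostic about the exact exception class):

import time
from pupil_labs.realtime_api.simple import discover_one_device

def start_recording_with_retry(device, attempts=5, delay=2.0):
    # Retry in case the previous recording is still being saved on the phone,
    # which surfaces as e.g. DeviceError(400, 'Cannot start recording, ...')
    for i in range(attempts):
        try:
            return device.recording_start()
        except Exception as e:
            print(f"attempt {i + 1} failed: {e}")
            time.sleep(delay)
    raise RuntimeError("recording could not be started after retries")

device = discover_one_device()
rec_id = start_recording_with_retry(device)
print("recording started:", rec_id)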

user-912183 03 October, 2024, 09:50:45

One more question. Does the PyPI pupil-labs-realtime-api (v1.3.3) work with the latest Neon glasses (with the Motorola Companion device)? I am running the streaming code:

self.device = Device(address=self.address, port=self.port, start_streaming_by_default=True)
...
scene, gaze = self.device.receive_matched_scene_video_frame_and_gaze()  # <- this line never returns

The code works fine with the old OnePlus Companion device I mentioned above, but with the new Neon set it prints:

Stopping run loop for rtsp://192.168.2.32:8086/?camera=eyes
Stopping run loop for rtsp://192.168.2.32:8086/?camera=world

and never returns. I can still start/stop recordings, and I can see the live stream in the browser. Could you give me a clue what might cause the problem? Thank you. The Motorola Companion: Neon Companion v2.8.31-prod, Motorola Edge 40 Pro, Android version 13.
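While debugging a blocking receive like this, one hedged option is to pass a timeout so the call returns instead of hanging indefinitely; the timeout_seconds keyword and the None-on-timeout behaviour are assumed from the simple API's receive methods, so check them against your installed version:

matched = device.receive_matched_scene_video_frame_and_gaze(timeout_seconds=5)
if matched is None:
    # nothing arrived within 5 s - the scene/gaze streams are likely not reaching this host
    print("timed out waiting for a matched scene frame + gaze pair")
else:
    scene, gaze = matched
    print(scene.timestamp_unix_seconds, gaze.x, gaze.y)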

nmt 03 October, 2024, 10:28:12

Follow-up question, how exactly are you measuring the orientation angle? Is it possible there's an additional source of electromagnetic interference that could bias the measurement?

user-912183 03 October, 2024, 10:45:24

The glasses are placed on a plastic rotating platform on the floor, about one meter from the nearest table. There is about 0.4 m between the Companion device and the IMU module. I suspected that it might be some magnetic fields (like a PC or smartphone), so I tried to distance the IMU from such sources. But if this IMU behaviour is not expected, then I'll try to run cleaner tests.

nmt 03 October, 2024, 10:28:26

Can you share the full code snippet?

user-912183 03 October, 2024, 10:34:49

from pupil_labs.realtime_api.simple import Device, discover_one_device
import numpy as np
from datetime import datetime

from osm.utils import neon
from .neon_source import NeonSource


class NeonRealtimeSource(NeonSource):

    device: Device = None

    def __init__(self, name: str, server: str, port: int,
                 address: str) -> None:
        super().__init__()
        self.name = name
        self.server = server
        self.port = port
        self.address = address

    def open(self):
        self.device = Device(address=self.address,
                             port=self.port,
                             start_streaming_by_default=True)
        # self.device.streaming_start()
        # self.device.recording_start()

    def close(self):
        # self.device.recording_stop_and_save()
        # self.device.streaming_stop()
        self.device = None

    def info(self):
        calibration = self.device.get_calibration()
        return {'camera': calibration}

    def data(self):
        while True:
            try:
                scene, gaze = self.device.receive_matched_scene_video_frame_and_gaze()
                imu = self.device.receive_imu_datum()
                print('Passed!')
            except Exception as e:
                print(e)

nmt 03 October, 2024, 10:56:10

from pupil_labs.realtime_api.simple

user-912183 03 October, 2024, 12:36:09

I have one more question not related to the previous one. Is it possible to estimate the distance to the object the user is staring at? Can I use, for example, the optical axes for this purpose?

nmt 04 October, 2024, 08:54:45

Well, your success may vary depending on factors such as viewing distance and physiological characteristics.

At close viewing distances, e.g. less than 1m, it's conceptually possible to measure viewing distance from vergence. However, at greater distances, the visual axes become more parallel, making convergence difficult to measure (if it happens at all). This can result in noisy and mostly unusable distance measurements, in my experience.

Then there are physiological factors. Some participants' visual axes never converge, e.g. due to tropias, and so on.

Finally, the optical axis often differs from the visual axis, introducing the so-called kappa angle to further complicate matters. NeonNet measures the optical axis, so you might need to account for that, although it shouldn't introduce too much error into the result πŸ€”

In my experience, there's no one-size-fits-all solution for this. If you're interested, it might be worth experimenting a bit, perhaps with a second source of measurement to validate your depth estimates. We'd be keen to hear how you get on!
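A minimal geometric sketch of the vergence idea (assuming you already have per-eye origins and optical-axis direction vectors in a common coordinate system; the names below are illustrative, not an existing API):

import numpy as np

def vergence_distance(o_l, d_l, o_r, d_r):
    # Approximate viewing distance as the distance from the midpoint between
    # the eyes to the closest point between the two optical-axis rays.
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if denom < 1e-9:
        return np.inf              # (nearly) parallel axes: vergence is unusable
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_l = o_l + s * d_l            # closest point on the left ray
    p_r = o_r + t * d_r            # closest point on the right ray
    midpoint = (p_l + p_r) / 2
    return np.linalg.norm(midpoint - (o_l + o_r) / 2)

As noted above, expect this to become very noisy beyond roughly a meter, since small angular errors translate into large depth errors as the axes approach parallel.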

user-190b28 03 October, 2024, 15:48:01

Hi everyone,

I'm new here. We recently upgraded the CORE eye camera to a new model (see attached picture). The camera works fine when using the Capture software. However, when we try to use OpenCV with Python, we encounter the following error:

Python code:

cam = cv2.VideoCapture(0)

Error:

[email removed] global cap.cpp:323 open VIDEOIO(AVFOUNDATION): raised unknown C++ exception!

We are working on macOS/Linux. Could anyone point me to relevant documentation for using this camera with OpenCV? Additionally, does anyone know how to turn on the IR LEDs? They don't seem to activate when using video capture software like QuickTime.

Thanks in advance for your help!

Chat image

nmt 04 October, 2024, 08:21:22

Hi @user-190b28! May I ask why you're trying to access the video stream directly via opencv? Would it not be feasible for you to leverage our real-time API to access video frames in real-time, like in this example script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py ?

Note that the IR illuminators cannot be toggled on/off.

user-912183 03 October, 2024, 15:53:20

What is the output from v4l2-ctl --all?

user-190b28 03 October, 2024, 16:00:18

I only have the MacBook with me now; this is the supported mode list from avfoundation:

[avfoundation @ 0x7f92cc304480] Supported modes:
[avfoundation @ 0x7f92cc304480] [email removed] 200.300451]fps
[avfoundation @ 0x7f92cc304480] [email removed] 180.001800]fps
[avfoundation @ 0x7f92cc304480] [email removed] 120.901441]fps
[avfoundation @ 0x7f92cc304480] [email removed] 90.200606]fps
[avfoundation @ 0x7f92cc304480] [email removed] 60.400333]fps
[avfoundation @ 0x7f92cc304480] [email removed] 30.200076]fps
[avfoundation @ 0x7f92cc304480] [email removed] 200.300451]fps
[avfoundation @ 0x7f92cc304480] [email removed] 180.001800]fps
[avfoundation @ 0x7f92cc304480] [email removed] 120.901441]fps
[avfoundation @ 0x7f92cc304480] [email removed] 90.200606]fps
[avfoundation @ 0x7f92cc304480] [email removed] 60.400333]fps
[avfoundation @ 0x7f92cc304480] [email removed] 30.200076]fps
[avfoundation @ 0x7f92cc304480] [email removed] 30.000030]fps
[avfoundation @ 0x7f92cc304480] [email removed] 120.101366]fps
[avfoundation @ 0x7f92cc304480] [email removed] 90.000090]fps
[avfoundation @ 0x7f92cc304480] [email removed] 60.000240]fps
[avfoundation @ 0x7f92cc304480] [email removed] 30.000030]fps

user-912183 03 October, 2024, 16:05:12

I meant on the device, something like v4l2-ctl --device=/dev/video0 --all. Not sure about Mac; that command is for Linux.

user-190b28 03 October, 2024, 17:40:38

this is the output from RPI5, it looks normal

message.txt

user-190b28 03 October, 2024, 17:48:02

Also, I was able to save a picture using this command:

ffmpeg -f video4linux2 -s 192x192 -i /dev/video0 -ss 2 -frames 1 out.jpg

user-190b28 03 October, 2024, 18:48:40

Since v4l2 is working, should I use v4l2 instead of OpenCV for programming? And how can I control the LED on/off?

Again, thanks for helping.
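Since the camera enumerates under V4L2 and ffmpeg can grab frames, one hedged thing to try on the Linux side is forcing OpenCV's V4L2 backend together with the 192x192 mode reported above (the device index 0 is an assumption; pick the index that matches /dev/video0):

import cv2

cam = cv2.VideoCapture(0, cv2.CAP_V4L2)       # force the V4L2 backend (Linux only)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 192)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 192)

ok, frame = cam.read()
if ok:
    cv2.imwrite("eye.jpg", frame)
else:
    print("could not grab a frame - check the device index and pixel format")
cam.release()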

user-cf2c3e 04 October, 2024, 08:51:40

Hello,

I am currently working on a university project with a team. We are creating software to visualize data from the Pupil app. The issue is that we can't use it at any time, since we don't have full-time access to the device. We are trying to figure out a way to emulate the data from the Pupil app cloud, but we are not sure if this is possible or not.

user-f43a29 24 October, 2024, 09:41:37

Hi @user-cf2c3e , could you provide some extra clarification?
- Which eyetracker are you using? Pupil Core, Pupil Invisible, or Neon?
- When you say "emulate the data", do you mean create fake/simulated data? Or are you asking about analysis methods?

user-b96a7d 08 October, 2024, 11:29:11

Hi everyone!

I am currently trying to use the Pupil Capture software to read and send the IMU data from the Pupil Neon module. Does anyone have experience with this or could point me in the right direction? I could not find any documentation on how to retrieve this data yet.

Thanks!

user-d407c1 08 October, 2024, 11:46:57

Hi @user-b96a7d ! Pupil Capture support for Neon is experimental and does not have access to the IMU. Is there any specific reason why you are using Pupil Capture instead of the Companion App?

user-b96a7d 08 October, 2024, 11:53:51

Hi Miguel!

We are using them in research for robotic applications and usually rely on robust data streams (so using the app with wifi communication is not really an option).

Is there a way to access the raw data of the IMU? Or maybe some drivers? Then I'd be able to integrate this myself πŸ™‚

user-d407c1 08 October, 2024, 12:02:18

Hi @user-b96a7d ! For that use case, I'd recommend connecting Neon via ethernet through a USB type-C hub. This way you can use the Realtime API and access all streams including IMU and you will still be using NeonNet (calibration free and robust gaze estimation pipeline).
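A minimal sketch of reading IMU samples over the Real-time API once the phone is reachable over the USB-C/Ethernet link (the IP address is a placeholder, and the IMU datum field names are assumed from the realtime-api package, so double-check them against your installed version):

from pupil_labs.realtime_api.simple import Device

device = Device(address="10.0.0.2", port="8080")   # placeholder: phone's IP on the wired link
try:
    for _ in range(200):
        imu = device.receive_imu_datum()
        print(
            imu.timestamp_unix_seconds,
            imu.quaternion.w, imu.quaternion.x, imu.quaternion.y, imu.quaternion.z,
            imu.gyro_data.x, imu.gyro_data.y, imu.gyro_data.z,
            imu.accel_data.x, imu.accel_data.y, imu.accel_data.z,
        )
finally:
    device.close()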

user-e44a18 10 October, 2024, 01:49:09

Hello, not sure if this should go here. I'm trying to purchase an eye tracker for my lab and wanted to make sure I know all the features, so I was trying out Neon Player with the sample data. After I dragged the whole folder of the sample data into the window, it did create a "neon_player" folder, and the player quickly closed and reopened with the camera video, but then it crashed. I tried several times and it always crashed there; I could not proceed to see all the features of the player. The version I downloaded from the website is v4.3.3-0-g6427dc2a_macos_x64, and my OS is macOS 15.0.1 (24A348). Is it a version issue? Or my OS? Or something about the sample data? Thanks!

user-cdcab0 10 October, 2024, 13:34:45

Hi, @user-e44a18 - do you mind trying the latest version of Neon Player, which was just published today?

user-083b4f 15 October, 2024, 22:41:25

Hi, @user-cdcab0 - I'm using the real-time screen gaze package with the Neon device and followed the example from the repo. However, the results I’m getting seem off, with the gaze being consistently shifted to the left by more than 100px. When I manually adjust it, the positions seem roughly correct, but I feel it shouldn’t require this manual shift.

I wanted to check with youβ€”what marker_verts and marker size are you using? Are you using the marker with the white border (as it appears when generated), or something else?

Thanks!

user-cdcab0 15 October, 2024, 22:43:31

Hi, @user-083b4f - the marker size is the length of the edge of the black part of the marker and should not include the white margins
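For illustration only (the screen size, marker edge length, margins, and marker ids below are made-up numbers, and the corner order is an assumption from the package's example): if four markers are drawn with a 32 px white margin around a 200 px black edge on a 1920x1080 screen, the verts should be the corners of the black squares, not of the full marker images:

# Illustrative numbers: each marker image is 264x264 px
# (32 px white margin + 200 px black edge + 32 px white margin),
# one placed in each screen corner.
margin, edge = 32, 200
screen_w, screen_h = 1920, 1080
full = 2 * margin + edge

def black_square(x0, y0):
    # Corners of the black edge for a marker whose full image starts at (x0, y0),
    # in (assumed) top-left, top-right, bottom-right, bottom-left order.
    x, y = x0 + margin, y0 + margin
    return [(x, y), (x + edge, y), (x + edge, y + edge), (x, y + edge)]

marker_verts = {
    0: black_square(0, 0),                              # top-left marker
    1: black_square(screen_w - full, 0),                # top-right marker
    2: black_square(0, screen_h - full),                # bottom-left marker
    3: black_square(screen_w - full, screen_h - full),  # bottom-right marker
}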

user-d690ca 20 October, 2024, 19:37:54

Hi, I was installing Pupil Core from source code, so I cloned the repo and ran "python -m pip install -r requirements.txt"; however, I encountered an issue with building the wheel for pupil_labs_uvc-1.0.0b7.tar.gz:

ERROR:
  File "C:\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
    main()
  File "C:\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
    return hook(config_settings)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\janax\AppData\Local\Temp\pip-build-env-3y6afbjq\overlay\Lib\site-packages\setuptools\build_meta.py", line 332, in get_requires_for_build_wheel
    return self._get_build_requires(config_settings, requirements=[])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\janax\AppData\Local\Temp\pip-build-env-3y6afbjq\overlay\Lib\site-packages\setuptools\build_meta.py", line 302, in _get_build_requires
    self.run_setup()
  File "C:\Users\janax\AppData\Local\Temp\pip-build-env-3y6afbjq\overlay\Lib\site-packages\setuptools\build_meta.py", line 318, in run_setup
    exec(code, locals())
  File "<string>", line 24, in <module>
  File "C:\Python312\Lib\pathlib.py", line 1162, in __init__
    super().__init__(*args)
  File "C:\Python312\Lib\pathlib.py", line 373, in __init__
    raise TypeError(
TypeError: argument should be a str or an os.PathLike object where __fspath__ returns a str, not 'NoneType'

Could you advise me how to solve this?

user-b96a7d 21 October, 2024, 07:23:08

Hey there, hello!

If you know your way around Docker, I created a dev environment with all the dependencies (and some extras for my work) πŸ™‚ : https://github.com/paulijosey/pupil

Super basic run instructions are included if you are using VSCode πŸ™‚

user-f43a29 24 October, 2024, 08:37:33

Hi @user-d690ca , are you using Python 3.12? It might go better if you try Python 3.11

user-de399a 25 October, 2024, 13:16:06

Hi, I am currently thinking about buying one of your glasses, and I was wondering whether it is possible to get eye landmarks from the camera pointed towards the eyes with your software? And if yes, how would that be possible? Thanks a lot for your help!

user-f43a29 28 October, 2024, 10:55:39

Hi @user-de399a , may I ask for more clarification about what "eye landmarks ... pointed towards the eyes" means? Feel free to draw a picture if that helps

user-de399a 28 October, 2024, 13:45:32

Hi @user-f43a29 , thanks for your answer! If I understood it right, the glasses would also give me back the videos of the eyes (see image below) right? Is there a way to get the landmarks of the eyes in this video (white line in the image below)?

user-f43a29 28 October, 2024, 13:52:40

Hi @user-de399a , I assume you are referring to the boundary of the pupil, but the image was not uploaded, so if you can try uploading the image again, then I will wait to be sure I give the best answer to your question.

user-de399a 28 October, 2024, 14:13:32

@user-f43a29 This is the image. Sorry, it did not upload before.

Chat image

user-f43a29 29 October, 2024, 13:11:31

Hi @user-de399a , thanks, I see now what you mean. With all of our eyetrackers, you have direct access to the eye images. Then, you can run an analysis that would find the boundary of the eye lid (the white line in your image) and segment the eye, as desired.

Please let us know if you have any other questions. You can also schedule a Demo and Q&A call to ask questions over video call.

user-de399a 28 October, 2024, 14:14:03

I mean the actual eye, so not the pupil itself. In this way, I first try to segment the eye from the skin before further using the video data

user-4b18ca 29 October, 2024, 10:11:13

Hello, is there a problem with the Cloud at the moment? We haven't been able to log in for the last few minutes; same for my colleague.

Huh, now it's working again - just a hiccup.

user-de399a 29 October, 2024, 14:58:01

Hi @user-f43a29 , ah, that's great. So the analysis that finds the boundary of the eyelid is already implemented in your framework? Could you point me towards that part?

user-f43a29 29 October, 2024, 15:24:01

Hi @user-de399a , ah, let me clarify. I meant that you could then run an analysis of your choice. It sounded as if you had already developed an algorithm. We do not provide an analysis tool that returns the boundary of the eye lid.

Also, to clarify, real-time streaming of eye images is not possible with Pupil Invisible, but Pupil Invisible eye videos are available in the Native Recording Data directory, after you have made a recording.

We would recommend looking into Neon, where real-time streaming of eye images is possible and the eye cameras are better positioned for your purposes.

user-de399a 29 October, 2024, 16:03:14

Hi @user-f43a29 , ah, that is too bad. I would have needed the boundary of the eyelid. Real-time is not necessary; offline would be fine. But then I will have to keep looking for another solution. Thanks anyway!

user-f43a29 06 November, 2024, 16:10:59

Hi @user-de399a , I wanted to update you, as we discussed it. We will add the capability to visualize the eyelid boundary for recorded data, including a measure of the eye opening.

Once released, we plan to develop this further, so you can then be involved in providing feedback. Please let us know if you'd like to discuss this! Either here or you can send us an email: info@pupil-labs.com

user-6bc37f 30 October, 2024, 06:32:23

Hi, I am wondering why video recordings have not been uploading to the workspace since yesterday afternoon. The recordings in the mobile app show as uploaded, but they are always "in progress" in the workspace.

Chat image Chat image

nmt 30 October, 2024, 18:41:26

Hi @user-6bc37f! This should be resolved as of now. Please check and let us know if things are working as expected.

user-50e084 30 October, 2024, 16:04:09

Hello everyone, I am porting the Linux version of the source code on GitHub to the Raspberry Pi 5 and recently hit the following GLEW error. I do not know how to solve it; I hope someone can help me.

[23:22:29] ERROR world - launchables.world: Process Capture crashed with trace:   world.py:843
Traceback (most recent call last):
  File "/home/xhw/pupil_env/pupil/pupil_src/launchables/world.py", line 547, in world
    cygl.utils.init()
  File "src/pyglui/cygl/utils.pyx", line 63, in pyglui.cygl.utils.init
  File "src/pyglui/cygl/utils.pyx", line 80, in pyglui.cygl.utils.init
Exception: GLEW could not be initialized!

user-d407c1 30 October, 2024, 16:50:05

Hi @user-78da25 ! Please have a look at https://discord.com/channels/285728493612957698/285728493612957698/1140896658574557204

user-b96a7d 31 October, 2024, 08:18:44

hey! I am also trying to get this running on arm! I'll keep you posted in case I manage to figure something out πŸ™‚

End of October archive