Has anyone tried to run two separate instances of Pupil on the same system? I am trying to get two Pupil headsets (binocular) running on Linux, and the bandwidth of the USB port seems unable to handle streaming from 6 cameras (2 world + 4 eye) at the same time; at most I was able to grab frames from 5 cameras (2 world + 3 eye). I am using two USB 2.0 ports to connect the Pupils to my system. Do you think changing them (with adapters) to USB 3.0 or USB-C would help?
@user-7e60fc you will need a laptop that has two USB controllers. Using USB 3 or a hub will not help.
Use lsusb in terminal to see what devices are attached to which bus.
Also, my understanding is that all the frames (world + eye(s)) are grabbed from the cameras and transmitted through the IPC. Then the frames are processed (converted to grayscale for the eye, gaze superimposed for the world) and published to the screen. If I want to intercept the signal and do something with the frames (say, superimpose bounding boxes for object recognition) and have them shown on the screen, should I grab the frames with the frame publisher --> process the frames --> send them back to the IPC so they can be shown on the screen?
@mpk So you mean stream the two Pupils using two separate ports that are connected to the system?
Yes
@user-7e60fc I would make a Pupil plugin and use the recent_events as well as gl_display callbacks to look at the world image and then draw into the canvas
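For anyone looking for a starting point, here is a minimal sketch of such a plugin. It draws directly onto the frame image in recent_events rather than via gl_display, which keeps the example short; the class name and the fixed box coordinates are made up, and the callback signatures follow the Pupil plugin API as I understand it, so treat it as a sketch:
```python
# Sketch of a user plugin; drop it into pupil_capture_settings/plugins/.
# Assumes events['frame'] carries the current world frame with a BGR .img array.
import cv2
from plugin import Plugin


class Box_Overlay(Plugin):
    """Draws a placeholder bounding box onto the world image."""

    def __init__(self, g_pool):
        super().__init__(g_pool)
        self.box = (100, 100, 300, 300)  # hypothetical box; replace with your detector output

    def recent_events(self, events):
        frame = events.get('frame')
        if frame is None:
            return
        x0, y0, x1, y1 = self.box
        # drawing into frame.img makes the overlay visible in the world window and recordings
        cv2.rectangle(frame.img, (x0, y0), (x1, y1), (0, 255, 0), 2)
```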
I'll try that out. Thank you! @mpk
You are welcome!
Hi everyone. I tried using the audio plugin but it did not register any audio source, even though I confirmed that the laptop actually registered the audio device. I am using the latest version of Ubuntu. According to the logs it seems like an ALSA and JackD PCI card problem. Has anyone encountered this?
Question 1: Should I get fixation by doing the same thing as obtaining gaze?
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'gaze')
sub.setsockopt_string(zmq.SUBSCRIBE, 'notify.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'logging.')
Question 2: Can we get fixation from Pupil Service?
Do we have example code for how to obtain fixations?
@user-b91aa6 you need to subscribe to fixations
everything else should be the same
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'gaze')
sub.setsockopt_string(zmq.SUBSCRIBE, 'notify.')
sub.setsockopt_string(zmq.SUBSCRIBE, 'logging.')
Question 2: Can we get fixation from Pupil Service?
like this?
No, because you only subscribe to pupil. in this code.
You need to subscribe to fixations.
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')
correct
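For reference, a minimal end-to-end subscriber could look like the sketch below. It assumes Pupil Remote is running on its default port 50020 and follows the usual pupil-helpers pattern; the printed fields are only examples of what a fixation datum may contain:
```python
import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the IPC SUB port
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')
req.send_string('SUB_PORT')
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'fixations')

while True:
    topic, payload = sub.recv_multipart()
    fixation = msgpack.loads(payload, raw=False)
    print(topic, fixation.get('norm_pos'), fixation.get('duration'))
```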
Understand. Thanks
But will we receive fixations like gaze at 120 Hz?
And Service does not enable the fixation detector. You need to run Capture if you want to have fixations.
or do we only receive fixations when they are detected?
This page doesn't say whether they are published when they are detected or not
or are they published like gaze
"It publishes the fixation as soon as it complies with the constraints (dispersion and duration). This might result in a series of overlapping fixations."
All right. Thank you very much.
May I ask: what if I am not using the world camera? Then I don't have a world camera coordinate system.
So this fixation can't be used in HMD for 2D detection, is this correct?
@papr
Correct. The solution here is to use 3d pupil detection in combination with 2d HMD calibration.
The 3d pupil data is a super set of the 2d pupil data. This means that the fixation detector can make use of the 3d model while the calibration makes use of the 2d part of the data
How do I use 3D pupil detection in combination with 2D HMD calibration?
In the current system, when we choose the detection method, the mapping mode is also chosen
So we have to develop this by ourselves?
@papr
You are right, and usually this is the case. But the HMD calibration is an exception. The HMD Calibration explicitly uses only the 2d data, while the HMD Calibration 3d explicitly uses 3d data.
So you won't have to develop anything yourself
If I want to detect fixations in 2D mode, what should I do?
You will need to find out what the intrinsics of your HMD are.
what if I find
And then?
Actually, I need to correct myself. I do not know if that is possible in the hmd case.
OK. But still, if I want to detect fixations in 2D mode, what should I do?
in VIVE HMD
Currently, that is not possible. And as I said, I don't know if that would be possible without major changes to the fixation plugin.
I described the best work around above.
I'm not sure I understand, what should I implement to have 2d calibration + 3d fixation ? 2d calibration gives me good results but slippage is a big problem in hololens (which could be solved by 3d fixation, right ?) @papr
All right. Thanks
Use the set_detection_mapping_mode notification, then the HMD_Calibration method, either through the menu or the start_plugin notification, and the Fixation_Detector method, either through the menu or the start_plugin notification.
@user-58d5ae 👆
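As a sketch, sending these notifications from a script could look like this (the default Pupil Remote address is assumed, and the plugin names are taken from the messages above and may differ between versions):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

def notify(notification):
    # Pupil Remote forwards 'notify.*' messages onto the IPC backbone
    topic = 'notify.' + notification['subject']
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

notify({'subject': 'set_detection_mapping_mode', 'mode': '3d'})
notify({'subject': 'start_plugin', 'name': 'HMD_Calibration'})
notify({'subject': 'start_plugin', 'name': 'Fixation_Detector'})
```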
thanks !
Be aware that you do not have slippage compensation through this method!
Oh alright, then what are the advantages over 2d calibration and detection? And is there a way to compensate for the slippage?
Currently, the advantage is that 2d calibration works more reliably than the 3d hmd calibration 😬 We are working on improving 3d calibration for the hmd use-cases.
And no, we don't have a way to compensate for 2d slippage at the moment
Ok, thanks a lot, somehow I believed fixation meant slippage compensation
In the fixation detection docs, this part is not clear to me: "If we do not set a maximum duration, we will also detect smooth pursuit (which is acceptable since we compensate for VOR)"
Question 1: Why can you detect smooth pursuit if you don't set a maximum duration?
Smooth pursuit is a moving trajectory, and the dispersion-based method detects whether a list of samples is within a fixed region. For smooth pursuit, the eye is following a moving object, so the list of samples can't always be within one region.
Question 2: what does the VOR mean?
@papr
VOR refers to https://en.wikipedia.org/wiki/Vestibulo%E2%80%93ocular_reflex
Thank you very much. What do you think about question 1?
Question 1: Why can you detect smooth pursuit if you don't set a maximum duration? Smooth pursuit is a moving trajectory, and the dispersion-based method detects whether a list of samples is within a fixed region. For smooth pursuit, the eye is following a moving object, so the list of samples can't always be within one region.
Question about getting the eye image: should I do it like this? sub.setsockopt_string(zmq.SUBSCRIBE, 'eyeImage')
@papr
Hi Team,
I'm having issues with the Pupil Capture app:
1. Drivers updated and installed.
2. Most recent Windows update acquired.
3. Most recent Pupil Labs apps downloaded.
4. Was able to record this morning no problem, then the computer got hot (has since cooled) but has not allowed me to record since.
HALP. 😬
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.7.42
world - [INFO] launchables.world: System Info: User: Ginny, Platform: Windows, Machine: DESKTOP-EP8KVHA, Release: 10, Version: 10.0.17134
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424
The whole enchilada
Hi, I'm trying to run the 'HMD Calibration' from a C++ application (zmq/msgpack), following the hmd-eyes unity pupil plugin example.
I got this far:
- request socket connected
- 'T 0.0' (initialize timestamp)
- 'SUB_PORT'
- subscribe to topic 'notify.calibration'
- 'notify.start_plugin' - { name: 'HMD_Calibration' }
- 'notify.calibration.should_start' - { hmd_video_frame_size, outlier_threshold }
- 'notify.calibration.add_ref_data' - { ref_data: [ { norm_pos, timestamp, id }, {... ] }
- 'notify.calibration.should_stop' - {}
Up until here my application received the following notifications (subscription 'notify.calibration'):
- 'notify.calibration.should_start'
- 'notify.calibration.started'
- 'notify.calibration.add_ref_data' (x5)
- 'notify.calibration.should_stop'
then pupil service/capture crashes with the following output:
Starting Calibration
world - [INFO] calibration_routines.hmd_calibration: Starting Calibration
Stopping Calibration
world - [INFO] calibration_routines.hmd_calibration: Stopping Calibration
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 445, in world
  File "shared_modules\calibration_routines\hmd_calibration.py", line 75, in on_notify
  File "shared_modules\calibration_routines\hmd_calibration.py", line 104, in stop
  File "shared_modules\calibration_routines\hmd_calibration.py", line 121, in finish_calibration
  File "shared_modules\calibration_routines\hmd_calibration.py", line 121, in <listcomp>
TypeError: list indices must be integers or slices, not str
world - [INFO] launchables.world: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.
Any idea what I'm doing wrong?
@user-82e7ab looks like you are sending data in the wrong format? { norm_pos, timestamp, id } is a set or list, but we need to use a mapping/dict:
{ "norm_pos": norm_pos, "timestamp": timestamp, "id": id }
hi @mpk , thanks for taking the time. this is exactly how I send it. Maybe I have shortened the description too much to fit in the message. The data looks like this before packing via msgpack:
struct AddCalibrationReferenceElementStruct {
norm_pos norm_pos;
float timestamp;
int id; // eye id
MSGPACK_DEFINE_ARRAY(norm_pos, timestamp, id);
};
struct AddCalibrationReferenceDataStruct {
std::string subject;
AddCalibrationReferenceElementStruct ref_data[100];
MSGPACK_DEFINE_MAP(subject, ref_data);
};
oh
it has to be MSGPACK_DEFINE_MAP(norm_pos, timestamp, id);
anyway - thank you
is there any detailed documentation of the remote protocol, including required fields and data types of messages?
Hi everyone! I have a headset of eye-tracking glasses, model pupil w120 e200b. I saw on the official website that I could run the Pupil Capture software on Linux too, so I was wondering how to do that. I'm a novice in Linux and for now I just tried to perform some acquisitions by running Pupil Capture and then Pupil Player on Windows. Can anyone help me?
@user-ba85a3 Download the linux bundle from the github release page, install and run it. There is no big setup required.
@papr Thank you a lot. Can you suggest where I can find it, directly by linking? Furthermore, could the software run on a Raspberry Pi 3 B+?
@user-ba85a3 http://pupil-labs.com/software -- I own a RB Pi 3b+ but I have never run Capture on it. From my experience with other software, I highly doubt that it would work well.
@papr very much appreciated!!
Question 1: Why can you detect smooth pursuit if you don't set a maximum duration? Smooth pursuit is a moving trajectory, and the dispersion-based method detects whether a list of samples is within a fixed region. For smooth pursuit, the eye is following a moving object, so the list of samples can't always be within one region. Question 2: About getting the eye image, should I do it like this? sub.setsockopt_string(zmq.SUBSCRIBE, 'eyeImage')
@mpk
@user-b91aa6 in regard to Q2: you need to subscribe to frame.eye to subscribe to both eyes, or to frame.eye.0 for id0 images, or frame.eye.1 for id1 images
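A rough Python sketch of receiving eye frames, assuming the Frame Publisher plugin is running and set to BGR format, and that Pupil Remote is on its default port (adjust names and ports to your setup):
```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')   # Pupil Remote
req.send_string('SUB_PORT')
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'frame.eye.')  # or 'frame.eye.0' / 'frame.eye.1'

while True:
    topic, payload, *extra = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    if extra and msg.get('format') == 'bgr':
        # the raw image buffer is appended as an additional message frame
        img = np.frombuffer(extra[0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
        print(topic, img.shape)
```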
Hi guys! I also would like to use the glasses with a Raspberry in order to make them wireless. As you said, Capture probably won't work well on it. I wonder if, at least, I can read and identify the 2 eye cameras and the world camera with the RB. If this is possible I would search for the section of the code implementing acquisition and data streaming from the cameras and use it on my RB with the required libraries. Thank you
@user-c1220d That should work. We use https://github.com/pupil-labs/libuvc to access the cameras.
Additionally, you could use pyuvc+pyndsi to send the data to Capture. See this example: https://github.com/pupil-labs/pyndsi/blob/master/examples/uvc-ndsi-bridge-host.py
@user-c1220d but in this case why not use an android phone and pupil mobile app?
mpk is referring to https://play.google.com/store/apps/details?id=com.pupillabs.pupilmobile&hl=en
@mpk yes, but I need to synchronize more devices with them, so a trigger signal is needed in my application. I would write part of the software on my RB, just for my needs, but obviously I first need camera identification and the right libraries
@user-c1220d then you could use Pupil time sync. This allows you to control the clock of Pupil mobile.
May I ask why the pupil detection accuracy decreases when switching to 3D mode compared with 2D pupil detection?
thank you all guys. I'm going to try some solutions and let you know if i need something more
hi again,
I'm trying to integrate Pupil into a custom OpenGL application.
I'm able to connect, calibrate 2d (norm_pos) or 3d (mm_pos), and to receive tracking data via subscription to the topic "gaze".
But the gaze data contains the 2d tracked gaze norm_pos only - regardless of whether I'm using HMD_Calibration or HMD_Calibration_3D.
What I'm looking for is something like gaze_normal0_x, which can be found in this doc on data format.
Somewhere else, I read about something like a 3d vector gaze mapper and then found this
public static void StartBinocularVectorGazeMapper ()
{
Send (new Dictionary<string,object> { { "subject","" }, { "name", "Binocular_Vector_Gaze_Mapper" } });
}
in the hmd-eyes unity plugin here. But when trying to start this gaze mapper pupil plugin (remotely via zmq+msgpack), pupil capture crashes with the following output:
Starting Calibration
world - [INFO] calibration_routines.hmd_calibration: Starting Calibration
Stopping Calibration
world - [INFO] calibration_routines.hmd_calibration: Stopping Calibration
world - [INFO] calibration_routines.calibrate: Reflection detected
world - [INFO] calibration_routines.calibrate: Reflection detected
Ceres Solver Report: Iterations: 33, Initial cost: 1.078242e+03, Final cost: 1.485864e+01, Termination: CONVERGENCE
(multiple times the same lines from ceres solver)
world - [ERROR] accuracy_visualizer: Accuracy visualization is disabled for 3d hmd calibration
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 443, in world
File "launchables\world.py", line 292, in handle_notifications
File "shared_modules\plugin.py", line 321, in add
TypeError: __init__() missing 2 required positional arguments: 'eye_camera_to_world_matrix0' and 'eye_camera_to_world_matrix1'
world - [INFO] launchables.world: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.
any idea what I'm missing?
Hey @user-82e7ab This plugin is not supposed to be initialized manually but is started by Capture after the calibration. Please be aware that you need to enable 3d pupil detection for 3d gaze mapping.
I think that the linked code location is not correct or incomplete.
The missing arguments 'eye_camera_to_world_matrix0' and 'eye_camera_to_world_matrix1' are the result of the 3d calibration
hi @papr thanks for your time.
on startup, I ensure 3d mode via set_detection_mapping_mode: "3d"
the output after HMD_Calibration_3D without this plugin looks like this:
{"topic":"pupil","circle_3d":{"center":[-1.87313,1.64452,34.0346],"normal":[-0.650676,0.0690923,-0.756205],"radius":1.99072},"confidence":0.795505,"timestamp":1.39489,"diameter_3d":3.98144,"ellipse":{"center":[166.873,229.891],"axes":[52.0427,72.6392],"angle":-8.39662},"norm_pos":[0.417182,0.425273],"diameter":72.6392,"sphere":{"center":[5.93498,0.81541,43.1091],"radius":12},"projected_sphere":{"center":[285.358,211.727],"axes":[345.171,345.171],"angle":90},"model_confidence":0.0916953,"model_id":1,"model_birth_timestamp":120.496,"theta":1.63994,"phi":-2.28133,"method":"3d c++","id":0}
{"topic":"gaze.2d.0.","norm_pos":[0.837798,0.93075],"confidence":0.879012,"id":0,"timestamp":0.336607,"base_data":[{"topic":"pupil","circle_3d":{"center":[-0.47434,1.93127,33.0256],"normal":[-0.53411,0.0929881,-0.840285],"radius":1.66878},"confidence":0.879012,"timestamp":0.336607,"diameter_3d":3.33757,"ellipse":{"center":[191.799,236.16],"axes":[51.7945,62.769],"angle":-14.3252},"norm_pos":[0.479498,0.409601],"diameter":62.769,"sphere":{"center":[5.93498,0.81541,43.1091],"radius":12},"projected_sphere":{"center":[285.358,211.727],"axes":[345.171,345.171],"angle":90},"model_confidence":0.190211,"model_id":1,"model_birth_timestamp":120.496,"theta":1.66392,"phi":-2.137,"method":"3d c++","id":0}]}
@user-82e7ab ~~That's a pupil datum, not a gaze datum.~~ Edit: Scratch that.
Definitely a gaze.2d.0. datum
Please try the following: 1. Start Capture 2. Start this script https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py 3. Run the hmd calibration procedure
I would like to see the output of the script in 2.
sorry for the delay - lunch break :)
It seems like I had too little patience ... now I receive data for gaze.3d.0., gaze.3d.1. and gaze.3d.01.
No idea why they didn't show up before this gaze mapper plugin attempt.
Maybe I had too little patience, because finishing the calibration takes a few seconds .. maybe I only checked the very first data messages that arrived directly after calibration.
anyway - thank you @papr
subscription to topic "notify." while running HMD_Calibration_3D (for the sake of completeness)
hi again, I'm stuck with unpacking the received 3d data:
gaze: {"topic":"gaze.3d.0.","eye_centers_3d":{0:[-181.835,-7.05468,133.635]},"gaze_normals_3d":{0:[0.383765,-0.297437,-0.874217]},"gaze_point_3d":[-149.779,-31.8997,60.6118],"confidence":0.101774,"timestamp":132.468,"base_data": ...
Until now, I always used a struct with msgpack::unpack<>() matching the message structure, e.g.
struct GazeStruct {
std::string topic;
struct {...} norm_pos;
float confidence;
float id;
float timestamp;
struct {...} base_data;
MSGPACK_DEFINE_MAP(topic, norm_pos, confidence, id, timestamp, base_data);
};
But for 3d gaze data, I don't know how to create a MAP-struct with numbers as keys (like for gaze_normals_3d).
I'd like to avoid using a msgpack::visitor to go through received messages.
EDIT: In C/C++ I cannot create a struct with an integer as variable name - which would be required for the included maps eye_centers_3d and gaze_normals_3d.
How do you typically solve this? Thank you all
msgpack seems to only support strings as the type for map keys (in C/C++)
struct define_map<...> {
...
void msgpack_unpack(msgpack::object const& o) const
{
if(o.type != msgpack::type::MAP) { throw msgpack::type_error(); }
std::map<std::string, msgpack::object const*> kvmap;
for (uint32_t i = 0; i < o.via.map.size; ++i) {
if (o.via.map.ptr[i].key.type != msgpack::type::STR) { throw msgpack::type_error(); }
kvmap.insert(
std::map<std::string, msgpack::object const*>::value_type(
std::string(
o.via.map.ptr[i].key.via.str.ptr,
o.via.map.ptr[i].key.via.str.size),
&o.via.map.ptr[i].val
)
);
}
{
std::map<std::string, msgpack::object const*>::const_iterator it = kvmap.find(a0);
if (it != kvmap.end()) {
it->second->convert(a1);
}
}
{
std::map<std::string, msgpack::object const*>::const_iterator it = kvmap.find(a2);
if (it != kvmap.end()) {
it->second->convert(a3);
}
}
}
any suggestions?
This might actually be the reason why our Matlab examples fail when receiving gaze...
I wonder why the python msgpack implementation does not fail. 🤔
I could imagine that there is no problem with non-string keys in Python - in general there is also no problem with non-string-key maps in C++. It's the msgpack implementation that forces string keys
I will look up the msgpack definition to see if it is valid to use anything but strings as keys. If it is, the C++ implementation is incomplete and needs to be fixed.
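A quick check with the Python msgpack library suggests that integer map keys are valid msgpack and round-trip fine (the strict_map_key flag only exists in newer msgpack-python releases; drop it on older versions):
```python
import msgpack

packed = msgpack.packb({0: [1.0, 2.0, 3.0], 1: [4.0, 5.0, 6.0]}, use_bin_type=True)
# strict_map_key=False lets newer msgpack-python versions accept non-string keys
print(msgpack.unpackb(packed, raw=False, strict_map_key=False))
# -> {0: [1.0, 2.0, 3.0], 1: [4.0, 5.0, 6.0]}
```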
thx ... for now I'll try to extract the data using a msgpack visitor as a workaround
@user-82e7ab What speaks against using this generally?
It's rather inconvenient .. or maybe I didn't find a good enough example .. my current approach requires some kind of state machine to parse through the data and find the correct values. (I'm using a derived msgpack::null_visitor ... the only thing I came across until now)
I'm sorry ... msgpack correctly parses the following struct:
struct mm_pos {
float x;
float y;
float z;
MSGPACK_DEFINE_ARRAY(x, y, z);
};
struct Gaze3dStruct {
std::string topic;
std::map<int, mm_pos> eye_centers_3d;
std::map<int, mm_pos> gaze_normals_3d;
mm_pos gaze_point_3d;
float confidence;
float timestamp;
MSGPACK_DEFINE_MAP(topic, eye_centers_3d, gaze_normals_3d, gaze_point_3d, confidence, timestamp);
};
I was thinking too complicated and tried to create another struct (like mm_pos) for eye_centers_3d and gaze_normals_3d to be able to declare them as a msgpack map via MSGPACK_DEFINE_MAP(...).
But that doesn't seem necessary. It's working with a simple std::map<int, mm_pos>.
Thanks anyway @papr
Great! Pleased to hear that it could be resolved.
one more question - sorry :) It's about the relation between target positions during calibration and gaze directions during actual tracking.
My current calibration looks like this:
- generate n viewing directions
- for each direction from these directions:
-- show marker at headPosition + direction
-- watch the marker
-- send calibration sample (ref_data) with mm_pos being the direction (without headPosition)
- finish calibration
Now when I receive 3d gaze data from Pupil (Gaze3dStruct from the post above), my question is:
What is the coordinate system of gaze_normals_3d?
Or in other words...
If I'm looking at one of the calibration positions, would gaze_normals_3d contain (approx.) the corresponding direction?
My calibration positions are not straight in front of the viewer. Instead, the whole calibration pattern is rotated upwards.
During calibration, my head is rotated upwards as well such that the pattern fills my visual field.
The problem is that during the actual tracking, I need to (with my eyes, not my head) look up this far to get gaze_normals_3d pointing straight ahead.
(i hope it's not too confusing)
The idea is that you calibrate relative to the field of view of the user and not relative to the environment
yes - makes sense 😃
The resulting gaze_normals_3d are relative to the field of view as well. Therefore you will need a second transformation step where you translate the normals to the environment based on the headPosition + direction
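A sketch of that second step in Python/numpy, assuming you have the headset pose as a 3x3 rotation matrix that maps HMD/FOV-frame vectors to world-frame vectors (the exact convention depends on your engine):
```python
import numpy as np

def gaze_normal_to_world(gaze_normal_hmd, head_rotation):
    # directions only need the rotation part of the head pose
    n = np.asarray(gaze_normal_hmd, dtype=float)
    return head_rotation @ (n / np.linalg.norm(n))

def gaze_point_to_world(gaze_point_hmd, head_rotation, head_position):
    # points need both rotation and translation
    return head_rotation @ np.asarray(gaze_point_hmd, dtype=float) + np.asarray(head_position, dtype=float)
```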
ok, so gaze_normals_3d are not interpolated from the mm_pos samples sent in the ref_data during calibration?
Or are mm_pos vectors interpreted as relative to the FOV and I need to create a symmetrical calibration pattern in front (-Z) of the user?
Eeh, I do not know that, sorry.
I am not sure what this mm_pos is
oh yes, i missed that.
during calibration, I send one AddCalibration3dReferenceDataStruct per calibration direction:
struct AddCalibration3dReferenceElementStruct
{
float mm_pos[3];
float timestamp;
int id;
};
struct AddCalibration3dReferenceDataStruct
{
std::string subject;
AddCalibration3dReferenceElementStruct ref_data[100];
};
Here, I store the position of the current calibration position/direction in mm_pos
Ok, in this case the mm_pos is interpreted relative to the FOV
And yes, you need to create a calibration pattern in front (-Z) of the user. It does not really have to be symmetrical though
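For illustration, one way to generate such a pattern of reference directions in front of the user (-Z) could look like this sketch; the grid size and angular spread are arbitrary example values:
```python
import numpy as np

def calibration_directions(n_x=3, n_y=3, max_angle_deg=15.0):
    # unit direction vectors spread over a small grid of visual angles around -Z
    xs = np.deg2rad(np.linspace(-max_angle_deg, max_angle_deg, n_x))
    ys = np.deg2rad(np.linspace(-max_angle_deg, max_angle_deg, n_y))
    dirs = []
    for ay in ys:
        for ax in xs:
            d = np.array([np.tan(ax), np.tan(ay), -1.0])
            dirs.append(d / np.linalg.norm(d))
    return dirs
```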
ok, thank you!
What could be the problem with one of my eye tracking cameras? No matter how I adjust the focus, the image is always blurred.
I am wondering if the camera is broken
@user-b91aa6 can you remind us what eye cameras are you using? Note that 200hz eye cameras are not focusable and may appear a little blurry in the preview but pupil detection will remain robust.
@wrp Can you say why it is not necessary to have focused images when using 200Hz cameras? (they can run at lower frame rates, right?)
@user-82e7ab the images may appear unfocused, but the camera's optics are designed for a specific range (close to the eye - specific to our hardware). It is just that the image might not look like it is in focus
ah ok .. thx! maybe you should write this somewhere in the dev-docs .. I bet I'm not the only one asking for this ; )
@user-82e7ab thank you for the feedback. We have made a note in the dev docs that the 200hz eye cameras are not focusable - but can make this more clear
if you (or anyone) would like to contribute to docs, you can make PRs to https://github.com/pupil-labs/pupil-docs
yes, I know this section .. already searched for it before asking here. currently I don't have much time, but I'll think about sharing the insights I gained while getting to know pupil once I'm done with the integration to our framework (and then hopefully I'm sure enough what I'm talking about 😅 )
PR's are welcome and so is feedback 😸
@wrp I am using the 120 Hz eye tracking cameras for the Vive HMD; have you encountered this problem before?
Hi guys, we are developing a ROS application which includes the Pupil glasses. We performed some tests and have been able to acquire images from the 3 cameras, one camera at a time. Now the problem is acquiring the 3 streams at the same time; currently we can't do it. I wonder if there is any library or package used in Pupil Capture that may allow us to solve this issue, since in Capture the 3 streams are acquired and displayed at the same time.
@user-c1220d you need 3 separate processes accessing the cameras and one that gathers the images
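A sketch of that structure in Python with pyuvc (which the Pupil stack itself uses), assuming one process per camera and a queue for the gathering process; the frame mode values are examples and need to match what your cameras actually offer:
```python
import multiprocessing as mp
import uvc  # pyuvc, https://github.com/pupil-labs/pyuvc

def grab(uid, out_queue):
    cap = uvc.Capture(uid)
    cap.frame_mode = (640, 480, 30)       # (width, height, fps) - example values
    while True:
        frame = cap.get_frame_robust()
        out_queue.put((uid, frame.timestamp, frame.gray))

if __name__ == '__main__':
    queue = mp.Queue()
    procs = [mp.Process(target=grab, args=(dev['uid'], queue))
             for dev in uvc.device_list()]
    for p in procs:
        p.start()
    while True:                           # gathering process
        uid, ts, img = queue.get()
        print(uid, ts, img.shape)
```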
Question 1: Have you encountered the issue that no matter how you adjust the focus, the eye tracking image is always blurred?
Question 2: How do you clean the eye tracking camera? It's so small, it's really hard to clean the surface of the camera.
Question 3: I am wondering whether the camera is broken; can we send it back to be repaired?
Thank you very much.
I am using the 120 Hz Vive eye tracking add-on; one of the eye tracking cameras is always blurred.
@user-b91aa6 please send a short sample eye video to info@pupil-labs.com along with your order id so that we can provide you with concrete feedback
Thank you very much.
@papr ok, we tried with different instances to acquire the cameras but we can't run more than one at a time, otherwise they crash. What do you mean by separate processes?
@user-c1220d what package do you use to access the cameras? I mean starting multiple nodes of that package. Do not put them into a single launch file since this would run them as nodelets, which makes them behave differently.
we are using the USB CAM package; would you suggest anything else?
@user-c1220d No, I think that should be fine. Just make sure to run them in three different nodes.
I do not know if the topic name overlap is an issue though. Maybe you need to remap the topic
@mpk is there a reason that you don't save the 2D detected pupil size and center in 3D detection mode? The 3D detection is not deterministic (detecting one recording over and over yields slightly different results) – I think because of the fitting of the pupil on the sphere – but the 2D algorithm is, so it would be great to have the information which 2D pupil led to which phi or theta, for example.
@user-29e10a you are right about that. I think it would be nice to keep the 2d result. We are working on a complete refactor of the 3d model and will keep this in mind!