@user-c5fb8b ?
Oh, sorry. Switched accounts. See the question above by bok_bok.
(I am both)
Now you've just unmasked yourself. Bwaaaaaa!
You would think the chicken themes would be hint enough...
We should see how many discordia-folk have chickens.
@user-8779ef sorry for the delayed response. Are you using the RecordingController from HMD-Eyes? It already has settings for recording to a custom path. Take a look at the DataRecordingDemo; the Recorder GameObject has the component.
@user-c5fb8b I'm afraid that you don't understand my issue, or I'm missing something. I would like to change the directory programmatically from another script attached to a game object. This requires importing the pupil labs namespace. This doesn't seem to be possible.
@user-b14f98 ah sorry, so you are trying to access the path on the RecordingController component already?
I realize that setting this programmatically is not possible, as the path is a private variable: https://github.com/pupil-labs/hmd-eyes/blob/4d74e79903083c849701f5f34241691ccbff7aa9/plugin/Scripts/RecordingController.cs#L14
(The namespace is not the problem here; you can simply add using PupilLabs; at the top of your script to access it.)
I'm not sure why the path is a private variable here. I'd suggest you either modify the original script (RecordingController.cs) or duplicate it to create your own custom recording controller, the logic here isn't very complex anyways.
I understand how to modify a private variable to make it public. This was not the obstacle that I encountered. The issue was that the namespace was not visible.
You can confirm that you've been able to import the Pupil Labs namespace in the manner that you described?
I see no examples in the demo scenes that demonstrate an attempt to import the namespace in the manner you described.
@user-b14f98 I have made the variable public and then added the following file to the demo project (specifically outside of the Plugins folder, at Assets\custom\MyCustomComponent.cs) with the following content:
```cs
using UnityEngine;
using PupilLabs;

public class MyCustomComponent : MonoBehaviour
{
    void Start()
    {
        RecordingController rec = GetComponent<RecordingController>();
        rec.useCustomPath = true;
        rec.customPath = "Test";
    }
}
```
I attached the script component to the Recorder GameObject in the DataRecordingDemo scene.
When I hit Play, the path on the GO changes as expected.
I'm not sure why you cannot access the namespace. I'm not aware of any issues blocking namespace access in C#; this is the normal way, e.g. using UnityEngine also just accesses a namespace.
Accessing namespaces is not something Pupil-specific.
Of course not, but thank you for checking.
...but issues can arise during compilation where namespaces associated with plugins (in the Plugins folder) are not accessible to scripts outside of the Plugins folder.
I appreciate you checking this for me.
I'll go back to my code and try this again, but I left the computer last time with Unity throwing errors when I tried to import the Pupil Labs namespace.
I understand your doubt, and no, I can't explain it either, which is why I requested an attempt to replicate.
It might be that you just have compilation errors. I'm also not an absolute expert in Unity/C#, but maybe you can share the error message and I can see if something obvious jumps out at me 😄
There were no errors if I removed the attempt to import the namespace.
...but, again, thank you for offering - I may take you up on that.
I'll try again as soon as I'm able.
Maybe you have a custom class with the same name as one of the ones in the PupilLabs namespace... that might cause a name-clash
That would be interesting.
I think I could test that. There's only one culprit that springs to mind...
but I would have hoped that the error messages would have been more informative.
also, Pupil Labs does run normally.
...I am using the HMD-Eyes plugin.
No issues with screen casting, the recording controller, calibration, or gaze representation
you can also import the namespace under an alias:
```cs
using PL = PupilLabs;
// ...
GetComponent<PL.RecordingController>();
```
maybe that helps?
Well, I'll start by unplugging and plugging it in again 😛
Then, I'll try that.
ok great 🙂
Thank you for your replication and input.
Does each eye tracking code run in a separate core or a separate thread?
@user-01c362 Pupil will spawn a separate operating-system-level process for each eye. So we are not using simple multi-threading, but actual multi-processing. Whether the processes run on the same or on different CPU cores is for the operating system to decide.
@FPA Great, I just wanted to make sure separate cores weren't used. Thanks.
@user-01c362 what do you mean by weren't used? In theory we don't rely on that, but it's highly likely that Pupil won't run on a single-core machine, as the combined load of all the different processes might be too much to run everything in real-time.
I mean that they weren't necessarily used. It's not about single-core machines, more about using the cores efficiently. But as you said, it works as it should.
Is it possible to direct screen animation instructions in hmd_calibration_client.py?
Hi @user-1ccccf, the Python files are only a sample implementation in Python. If you are using the Unity Plugin, you can implement your own custom calibration choreography in C#. You should have a look at the CircleCalibrationTargets as a reference for how the default choreography is implemented: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/CircleCalibrationTargets.cs
You can implement your own subclass of CalibrationTargets and pass that to your CalibrationController instead.
@user-c5fb8b Thank you for your reply. I don't know the origin point of the world coordinate system for these points below:
```python
for pos in (
    (0.0, 0.0, 600.0),
    (0.0, 0.0, 1000.0),  # (x, y, z)
    (0.0, 0.0, 2000.0),
    (180.0, 0.0, 600.0),
    (240.0, 0.0, 1000.0),
    (420.0, 0.0, 2000.0),
    (55.62306, 195.383, 600.0),
    (74.16407, 260.5106, 1000.0),
    (129.7871, 455.8936, 2000.0),
    (-145.6231, 120.7533, 600.0),
    (-194.1641, 161.0044, 1000.0),
    (-339.7872, 281.7577, 2000.0),
    (-145.6231, -120.7533, 600.0),
    (-194.1641, -161.0044, 1000.0),
    (-339.7872, -281.7577, 2000.0),
    (55.62306, -195.383, 600.0),
    (74.16407, -260.5106, 1000.0),
    (129.7871, -455.8936, 2000.0),
):
```
Is it in the middle of the two eye cameras?
I want to test the code in my customized HMD, so I need to compute the two vectors:
```
"translation_eye0": [34.75, 0.0, 0.0],
"translation_eye1": [-34.75, 0.0, 0.0],
```
@user-1ccccf I am not sure currently. I think we get these values from the UnityXR plugin, which handles the VR headset GameObject. Do you have an HTC Vive available? You could spin up Unity and take a look at how the coordinate system works with the XR GameObjects.
@user-c5fb8b Thank you very much. The origin may be in the middle of the two eye cameras according to the two translation_eye vectors and this picture.
@user-1ccccf This indeed looks like it. I remember we took the reference values for the eye_translations in the python script directly from the Unity client.
@user-c5fb8b OK. Thank you.
@user-c5fb8b And the last question: is it possible to get 3d gaze points with z-depth when we calibrate our gaze on a 2d plane (such as in this picture) using binocular cameras? The 2d coordinates are e.g. [(.5, .5), (0., 1.), (1., 1.), (1., 0.), (0., 0.)].
```python
class Dual_Monocular_Gaze_Mapper(Monocular_Gaze_Mapper_Base, Gaze_Mapping_Plugin):
    """A gaze mapper that maps two eyes individually"""

    def __init__(self, g_pool, params0, params1):
        super().__init__(g_pool)
        self.params0 = params0
        self.params1 = params1
        self.map_fns = (calibrate.make_map_function(*self.params0),
                        calibrate.make_map_function(*self.params1))

    def _map_monocular(self, p):
        gaze_point = self.map_fns[p['id']](p['norm_pos'])
        return {'topic': 'gaze.2d.{}.'.format(p['id']),
                'norm_pos': gaze_point,
                'confidence': p['confidence'],
                'id': p['id'],
                'timestamp': p['timestamp'],
                'base_data': [p]}

    def get_init_dict(self):
        return {'params0': self.params0, 'params1': self.params1}
```
HMD_Calibration notifies the Dual_Monocular_Gaze_Mapper after calibration finishes. But I only found 2d gaze points in the Dual_Monocular_Gaze_Mapper.
@user-1ccccf I don't understand. How are you presenting the gaze points on this 2d plane? In the VR headset? The general idea behind calibrating is that you should pick calibration points which resemble the distribution of points that your subjects will look at later. If you calibrate in 3D with only calibration points at a fixed distance, the gaze mapping will have larger errors when subjects look at points of a different distance. So you probably want to sample different distances when calibrating for 3D gaze mapping.
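The advice above about sampling several distances can be sketched as a small target generator (my own hypothetical sketch, not part of hmd-eyes; the ring-radius factor is an assumption):

```python
# Hypothetical calibration-target generator: sample points at several
# depths, per the advice above to cover the distances subjects will
# actually look at when mapping gaze in 3D.
import itertools
import math

depths = [600.0, 1000.0, 2000.0]          # mm, as in hmd_calibration_client.py
angles = [i * 2 * math.pi / 5 for i in range(5)]
ring_radius_factor = 0.3                  # ring radius grows with depth (assumption)

targets = [(0.0, 0.0, d) for d in depths]  # one central target per depth
for d, a in itertools.product(depths, angles):
    r = ring_radius_factor * d
    targets.append((r * math.cos(a), r * math.sin(a), d))
# 3 central + 15 ring targets = 18 points in total
```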
Also, you seem to be using an older version of Pupil; the HMD_Calibration and Dual_Monocular_Gaze_Mapper do not exist anymore in the latest versions. Please specify which versions of Pupil and HMD-Eyes you are using, and maybe also why you are not using the latest versions.
@user-c5fb8b Setting a custom recording directory did work. I could not replicate my earlier issue with the namespace, and blame the Gremlins. 😛 Thanks for your help! Oh, one more thing: no need to make rec.customPath public. You guys provide a function, rec.SetCustomPath(string), which also sets rec.UseCustomPath = true;
@user-c5fb8b Thank you very much. I understand. I am using Pupil v1.0 because I am learning from the 2018 code. The code has changed in many places since then, so I did not pay too much attention to the latest version.
hi, what is the difference between the confidence value in GazeData.cs and in PupilData.cs?
Hi @user-f9af20, Pupil confidence describes the confidence of the pupil detection algorithm that it actually found the pupil. The gaze confidence is generally some combination of the confidence values of all pupil datums that made up a gaze datum. If the gaze datum was mapped monocularly (i.e. only from one eye), the gaze confidence will equal the pupil confidence for that frame. If the gaze is mapped binocularly, two pupil datums (left and right eye) are combined to produce a gaze mapping result; in that case we use the average of both pupil datum confidences as the gaze confidence. Keep in mind that we have both 2D and 3D pupil confidence, depending on which pupil detection algorithm was used. Our gaze mappers also work in either 2D or 3D and use the respectively compatible pupil confidence.
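The combination rule described above can be sketched in a few lines (my own illustration, not Pupil's source code):

```python
# Sketch of the rule above: monocular gaze keeps the single pupil
# confidence; binocular gaze uses the average of both eyes' pupil
# confidences (averaging a 1-element list is a pass-through).
def gaze_confidence(pupil_confidences):
    return sum(pupil_confidences) / len(pupil_confidences)

mono = gaze_confidence([0.9])        # monocular: equals the pupil confidence
bino = gaze_confidence([0.8, 0.6])   # binocular: average of both eyes
```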
Ah makes sense. Thank you for the very thorough explanation 🙂
Pupil v2.0:
```python
scales = list(np.linspace(0.7, 10, 5))  # TODO: change back to 50

for s in scales:
    scaled_ref_points_3d = ref_points_3d * (1, -1, s)

    # Find initial guess for the poses in eye coordinates
    initial_rotation0 = utils.get_initial_eye_camera_rotation(
        pupil0_normals, scaled_ref_points_3d
    )
    initial_rotation1 = utils.get_initial_eye_camera_rotation(
        pupil1_normals, scaled_ref_points_3d
    )

    eye0 = SphericalCamera(
        observations=pupil0_normals,
        rotation=initial_rotation0,
        translation=initial_translation0,
        fix_rotation=False,
        fix_translation=True,
    )
    eye1 = SphericalCamera(
        observations=pupil1_normals,
        rotation=initial_rotation1,
        translation=initial_translation1,
        fix_rotation=False,
        fix_translation=True,
    )
```
@user-c5fb8b Hi, I have two questions about hmd_calibration_3d. 1. I don't know why we need to change the z-coordinate of ref_points_3d and try 50 different scale values for it. 2. In Pupil v1.0 the gaze directions are scaled by 500: d['pupil']['circle_3d']['normal'] * 500. I don't know why pupil0_normals in Pupil v2.0 isn't multiplied by 500. Pupil v1.0:
```python
gaze0_dir = [d['pupil']['circle_3d']['normal'] for d in matched_data if '3d' in d['pupil']['method']]
gaze1_dir = [d['pupil1']['circle_3d']['normal'] for d in matched_data if '3d' in d['pupil']['method']]

if len(ref_points_3d_unscaled) < 1 or len(gaze0_dir) < 1 or len(gaze1_dir) < 1:
    logger.error(not_enough_data_error_msg)
    self.notify_all({'subject': 'calibration.failed', 'reason': not_enough_data_error_msg,
                     'timestamp': self.g_pool.get_timestamp(), 'record': True})
    return

smallest_residual = 1000
scales = list(np.linspace(0.7, 10, 50))
for s in scales:
    ref_points_3d = ref_points_3d_unscaled * (1, -1, s)

    initial_translation0 = np.array(self.eye_translations[0])
    initial_translation1 = np.array(self.eye_translations[1])
    method = 'binocular 3d model hmd'

    sphere_pos0 = matched_data[-1]['pupil']['sphere']['center']
    sphere_pos1 = matched_data[-1]['pupil1']['sphere']['center']

    initial_R0, initial_t0 = calibrate.find_rigid_transform(np.array(gaze0_dir) * 500, np.array(ref_points_3d) * 1)
    initial_rotation0 = math_helper.quaternion_from_rotation_matrix(initial_R0)
    initial_R1, initial_t1 = calibrate.find_rigid_transform(np.array(gaze1_dir) * 500, np.array(ref_points_3d) * 1)
    initial_rotation1 = math_helper.quaternion_from_rotation_matrix(initial_R1)
```
I know the 500 is 500 mm, which is the distance from our eyes to the screen. But such a fixed distance doesn't seem to exist in a VR HMD.
@user-1ccccf (1) If I remember correctly, we noticed that the fixed-scale bundle adjustment is not as consistent for HMD setups as it is for Pupil Core. This is why we run the bundle adjustment for different scales and use the best-performing one. We reduced the number of tested scales because we saw diminishing returns from a high number of tests; testing only 5 different scales is much faster than testing 50. (2) In v1.23 @user-894365 rewrote the bundle adjustment code. I guess she found that it is not necessary to scale the gaze direction and removed the multiplication. @user-894365 Please correct me if anything noted above is incorrect/inaccurate and let us know if you can add anything.
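The scale-search idea described in (1) boils down to an argmin over candidate z-scales. A toy sketch (not the actual Pupil code; fit_residual is a stand-in for running the bundle adjustment):

```python
# Toy sketch of the scale search: run the fit once per candidate
# z-scale and keep the best-performing one.
import numpy as np

def fit_residual(scale):
    # Stand-in for running the bundle adjustment at this scale and
    # returning its residual; a toy curve with its minimum near 2.0.
    return (scale - 2.0) ** 2

scales = np.linspace(0.7, 10, 5)   # v2.0 tests 5 scales instead of 50
best_scale = min(scales, key=fit_residual)
```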
@user-1ccccf @papr Right, for the function find_rigid_transform, the scale of the gaze direction does not affect the result, so multiplying by 500 is not necessary.
@papr @user-894365 Sorry for the late reply, but I am still confused about this part of the code. find_rigid_transform doesn't multiply pupil0_normals and unprojected_ref_points by 500 when it calculates the transformation between the two coordinate systems. But why, in the BundleAdjustment, are the unprojected_ref_points multiplied by 500 to form the initial_gaze_targets, while the observations aren't multiplied by 500? Shouldn't the initial_gaze_targets be consistent with the observations? I am really confused. Looking forward to your reply.
@user-1ccccf observations describes the observed normals in the world/eye0/eye1 camera coordinates, which do not need to be scaled. initial_gaze_targets = unprojected_ref_points * 500 is an initial estimate of the 3D gaze targets. The positions of the 3D gaze targets, as well as the eye poses with respect to the world camera, are further optimized by the bundle adjustment.
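A small numeric sketch of that answer (my own illustration; the direction vectors are hypothetical): the unprojected reference points are unit direction vectors, and multiplying by 500 just places the initial 3D target guesses 500 mm along those rays before the bundle adjustment refines them.

```python
# Unprojected reference points are unit directions; "* 500" places the
# initial 3D target guesses 500 mm along each ray. These are only a
# starting point for the bundle adjustment, not final positions.
import numpy as np

unprojected_ref_points = np.array([
    [0.0, 0.0, 1.0],   # hypothetical unit directions
    [0.6, 0.0, 0.8],
])
initial_gaze_targets = unprojected_ref_points * 500  # mm along each ray
```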