Hi… I'm a PhD student and we want to use the Pupil Labs AR/VR cameras. One thing I observed from the left and right camera streams is that one camera's image is darker than the other's.
I'm not sure whether it's a hardware or a software issue. Can you please help me fix this?
The eye image on one camera is very clear and bright, while the other camera shows a darker eye image in which the features of the eye are hardly distinguishable.
Hi @user-1eb241 👋. You can try adjusting the camera exposure settings, like in this video: https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing
@nmt Thanks, Neil, for your quick response!
I'm hoping to do this programmatically. I installed pyuvc as mentioned on your GitHub page. Can you please explain how to do the same using a Python script?
I don't think such functionality exists. Maybe @papr can clarify. In any case, can you not set it to autoexposure mode?
In that case, can you please tell me which software/tool you used in this video?
This is with Pupil Capture or Service. Just go to 'Video Source' in each eye window to find the exposure settings.
@user-1eb241 @nmt You can use the Pupil Core software to set manual or auto exposure for the eye cameras. Two important notes on this:
1. The eye cameras do not have true auto exposure, as in "built into their firmware". Instead, Pupil Capture adjusts the manual exposure based on an algorithm. See [1-4] for the implementation.
2. UVC settings like the manual exposure time are only persistent as long as the camera is connected. Disconnecting the camera will reset its UVC settings.
@user-1eb241 In [A] you can see how to instantiate the camera directly via pyuvc (you might have done this already). In [4] you can see how to change the manual exposure time; see also the sketch after the links below.
[1] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/utils.py#L27-L78
[2] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L367-L371
[3] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L477-L479
[4] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L631-L640
[A] https://github.com/pupil-labs/pyuvc/blob/master/example.py#L7-L9
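For reference, here is a minimal sketch putting both steps together with pyuvc, based on [A] and [2]/[4] above. The control names ("Auto Exposure Mode", "Absolute Exposure Time") and mode value 1 are what uvc_backend.py uses; the device index and the exposure value 63 are only example assumptions, so adapt them to your setup:

```python
import uvc

# List connected UVC cameras; the eye cameras show up with names
# like "Pupil Cam2 ID0" / "Pupil Cam2 ID1".
devices = uvc.device_list()
for d in devices:
    print(d["name"], d["uid"])

# Open one camera (adjust the index to your eye camera).
capture = uvc.Capture(devices[0]["uid"])

# UVC controls come as a list; index them by display name.
controls = {c.display_name: c for c in capture.controls}

# Switch to manual exposure (mode 1, as in [2]) and set the
# exposure time. The value 63 is only an example.
controls["Auto Exposure Mode"].value = 1
controls["Absolute Exposure Time"].value = 63
```

As per note 2 above, these settings reset whenever the camera disconnects, so you need to re-apply them after each reconnect.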
@nmt and @papr thanks to you guys… I'm able to see a bright and clear pupil now
Hello, I am a student currently working on a project using the HTC Vive with the Pupil Labs add-on. One of the things I would like to do is record where the user is looking in the VR environment, but I am completely unsure where to start with this task. Looking at the documentation on the Pupil Labs website, it seems some sort of calibration is also necessary, but since there is no world camera for the HTC Vive VR add-on, I am unsure about this as well. Any help regarding either of these problems would be greatly appreciated. Thanks!
Hi, I recommend starting with this document: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md
Hello, a few questions:
We are using HMD Eyes v1.4.
Where should we look to get more precise information on the validation data? If I understand correctly, the success or failure of the calibration relies on the validation step, right? Where should we look for more precise in-Unity information (control?) on validation, and how is the validation process performed in the HMD Eyes workflow?
Is it possible to check programmatically within Unity whether the pupil detection is satisfactory?
Please tag me when answering, thank you very much 🙂
I need to correct my previous statement. Pupil Capture already calculates the accuracy for hmd calibrations. The missing part is a Unity script + Capture plugin that performs the same procedure without triggering a calibration. Instead of fitting a new gazer with the collected pupil and ref data, it should use the existing gazer to map the pupil data to gaze and compare that against the ref data.
Alternatively, one could build a Unity script that just subscribes to gaze data and calculates the validation accuracy within Unity. This is the more straightforward solution but requires more Unity/C# knowledge. A rough sketch of the idea follows below.
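The message above proposes doing this in Unity/C#; purely to illustrate the accuracy calculation itself, here is a rough Python sketch that subscribes to gaze data via Pupil's network API (ZMQ + msgpack) and compares it against a known target direction. The Pupil Remote handshake and the gaze topic are the standard API; the `gaze_point_3d` field, the fixed target direction, and the 0.6 confidence threshold are assumptions for illustration:

```python
import msgpack
import numpy as np
import zmq

# Ask Pupil Remote (default port 50020) for the SUB port, then subscribe to gaze data.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

def angular_error_deg(gaze_dir, target_dir):
    """Angle in degrees between two 3D directions."""
    g = np.asarray(gaze_dir, dtype=float)
    t = np.asarray(target_dir, dtype=float)
    cos = np.dot(g, t) / (np.linalg.norm(g) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# While a known target is displayed in the HMD, collect gaze samples
# and compute the mean angular error against the target direction.
target_direction = [0.0, 0.0, 1.0]  # assumed: straight ahead in HMD coordinates
errors = []
for _ in range(120):
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload)
    if datum["confidence"] < 0.6:  # skip low-confidence samples
        continue
    errors.append(angular_error_deg(datum["gaze_point_3d"], target_direction))

print(f"mean angular error: {np.mean(errors):.2f} deg over {len(errors)} samples")
```

The same computation ports directly to a C# script inside Unity once the gaze data arrives there.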
We currently do not have a way to validate hmd calibrations, but I was planning on building a script that calculates accuracy for hmd calibrations within the next two weeks. Currently, either there is sufficient data for the calibration or there is not.
You can check the confidence values, but this only tells you something about the 2d detection. There is no good way to judge the quality of the 3d model programmatically.
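To make the confidence check concrete, here is a minimal sketch that subscribes to the per-eye pupil topics with the same handshake as in the gaze sketch above and flags low-confidence 2d detections (the 0.6 threshold is a common rule of thumb, not an official cutoff):

```python
import msgpack
import zmq

# Same Pupil Remote handshake as in the gaze sketch above.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")  # per-eye pupil detection data

for _ in range(300):
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload)
    # `confidence` (0.0-1.0) reflects the quality of the 2d detection only.
    if datum["confidence"] < 0.6:  # assumed rule-of-thumb threshold
        print(f"eye {datum['id']}: low detection confidence ({datum['confidence']:.2f})")
```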
Does this product work with the HoloLens 2?
Our HL add-on is only compatible with HL1. For HL2 and other AR systems we can offer cameras and cabling that can be used for prototyping - the connection to the headset would need to be developed by you. We can continue the conversation here or via email to sales@pupil-labs.com
Got it, thanks! So if I'm interested in getting the HoloLens 1 kit for prototyping on a HoloLens 2, would this "other AR systems" option be a different kit/camera set?
The cameras are the same; it's just that we don't offer mounts (plastic parts) for HL2 or other AR systems.
Hi, I'm a new user. Do you have a 3D-printable model of, or do you sell, mounting parts for the HoloLens 2?
Please see the message above yours. We do not offer 3d-printable models either.