Hi! I used the accuracy visualizer plugin, but it doesn't recognize the top and bottom ranges. Does anyone have an idea? This was done in ScreenCastDemoScene.
Hi, I was able to review the recording. Please see my notes below: The green outline is calculated based on the recognized reference locations. These are correct.
Orange lines visualize the error between recognized reference locations and predicted gaze locations. The accuracy visualizer excludes "outlier" samples from its calculation and visualization (samples with a larger angular error than a fixed threshold, 5.0 degrees by default). In your case, many samples are being excluded (from your email: "Used 1922 of 4642 samples"). This is why your screenshot does not show all lines.
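For anyone curious how such an outlier filter might look, here is a minimal sketch (not the actual accuracy visualizer code; the function and variable names are made up for illustration):
```python
import numpy as np

ANGULAR_THRESHOLD_DEG = 5.0  # default outlier threshold mentioned above

def angular_error_deg(gaze_dirs, ref_dirs):
    """Angle in degrees between unit gaze and reference direction vectors (N x 3)."""
    cos_sim = np.clip(np.sum(gaze_dirs * ref_dirs, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

def filter_outliers(gaze_dirs, ref_dirs, threshold=ANGULAR_THRESHOLD_DEG):
    """Drop samples whose angular error exceeds the threshold."""
    errors = angular_error_deg(gaze_dirs, ref_dirs)
    keep = errors < threshold
    print(f"Used {keep.sum()} of {len(keep)} samples")
    return gaze_dirs[keep], ref_dirs[keep], errors[keep]
```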
I cannot tell for sure what is causing the calibration to be so inaccurate that the majority of the samples are being regarded as outliers. The backprojections (yellow and green dots) look fairly good, too. Nonetheless, I feel like the issue could be related to the Unity camera intrinsics. Again, I cannot tell for sure, though.
Your eye models are fit fairly well, but they could be slightly better, the left eye specifically. I see that you already perform the eye rotation procedure. I suggest doing it with more extreme angles. You can also perform it more quickly than in the recording.
I usually get that when the model isn't really fitting correctly at those points (i.e. the confidence isn't high enough to count towards calibration). Try moving your eyes around a bit so that the model gets a better fit, freeze the model, and see how the confidence holds up when you fixate around those points. If it looks good, try running the calibration again. You can also play around with the camera settings in the Capture app to see if you can get a better image for processing; sometimes the image might be too bright or too dark, and finding that sweet spot helps produce a more stable model.
Thank you very much for your answer. I tried it and the confidence was high enough (>0.8). However, it still remained unrecognized. Is it possible that the top and bottom of the data being retrieved are swapped?
Note that the confidence displayed in Capture refers to the 2d confidence, not the 3d confidence. Feel free to share an example recording with us so that we can give concrete feedback. Flipping the eye image does not make a difference.
Yeah, you can actually flip the image for each eye in the camera settings, but I'm not sure if that matters for calibration. Try changing it, though.
ok, I'll share the data via e-mail, thank you.
Hi, I tried some things and will share what I've noticed. When I checked the exported data, I noticed that the Y axis was pointing in the opposite direction than I expected (the top area in Unity is negative in the exported data). I think this is why so many samples are being regarded as outliers. Is there any way to solve this?
If you are referring to the gaze_point_3d_y values, please be aware that they follow the 3d camera coordinate space (https://docs.pupil-labs.com/core/terminology/#coordinate-system), whose origin is in the center of the camera.
Could you please specify the exact name of the Y-axis export?
But a possible test to verify a y-axis flip would be to increase the outlier threshold to a very large number (e.g. 360 degrees) and check whether the pupil data is indeed mapped in the wrong direction.
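If it helps, here is a rough way to sanity-check for a flip offline. This is only a sketch: gaze_positions.csv and its columns come from the Pupil Player raw data export, while unity_targets.csv with its pupil_timestamp and target_y columns is a placeholder for however you logged the Unity-side reference positions (assumed to already be in Pupil time):
```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")    # Pupil Player raw data export
targets = pd.read_csv("unity_targets.csv")  # placeholder for your Unity-side log

# Interpolate the exported normalized gaze y onto the Unity timestamps.
gaze_y = np.interp(targets["pupil_timestamp"],
                   gaze["gaze_timestamp"], gaze["norm_pos_y"])

# A clearly negative correlation would suggest the two y axes point in
# opposite directions.
print("correlation:", np.corrcoef(targets["target_y"], gaze_y)[0, 1])
```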
Got it, I'll try this. Thanks!!
Hello, 1 of 2 cameras of my VR add-on stopped working and shows up as 3141, 25442, "Sonix Technology Co., Ltd.", a USB2 camera that works via the standard Windows Camera app but not via the Pupil Labs software. I tried manual driver installation, which succeeded; however, selecting that camera in Pupil Service causes a BSOD with the error MULTIPLE_IRP_COMPLETE_REQUESTS. Any fix for that, please?
Please contact info@pupil-labs.com in this regard.
Hi! I'm using the HTC VIVE add-on and trying to capture pupil size only, but I found the pupil data is inaccurate, very small (~0.5-2 only, even with light stimulation), and the number of supporting pupil observations is 0; what is wrong with my setup?
Please see https://docs.pupil-labs.com/core/best-practices/#pupillometry If you want, we can give more concrete feedback. In this case, please share a Pupil Capture recording with [email removed]
@user-7aedda It is not only about the positioning but about sampling different eye angles.
That said, it is difficult to judge the eye model's quality from a single picture like this alone. Please create and share a short example recording with Pupil Capture where you roll your eyes until you think the model is fit well, based on the description in the link above.
Thanks for your prompt reply! I finally got the correct pupil size data; I saw the pupil diameter 3D range from ~1.6-4.1 mm, but when I export it to .csv via Player, no such data can be found; the largest circle_3d_radius value is only 1.2.
Hi, your eye models are still not fit well. You can improve the fit by rolling your eyes like here https://youtu.be/7wuVCwWcGnE?t=14
Pupil Player displays the diameter in the timeline. You are looking at the circle_3d_*radius* column, so a radius of 1.2 mm corresponds to a diameter of 2.4 mm. Alternatively, have a look at the diameter_3d column.
Also, the timeline y-axis limits are not necessarily the total range of the data. See this code for details https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_producers.py#L219-L227
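If you would rather inspect the values directly than read them off the timeline, here is a minimal sketch for the exported data (assuming the standard pupil_positions.csv produced by Pupil Player's raw data exporter):
```python
import pandas as pd

df = pd.read_csv("pupil_positions.csv")  # Pupil Player raw data export

# keep reasonably confident samples that have a 3d model result
df = df[df["confidence"] > 0.8].dropna(subset=["diameter_3d"])

# diameter_3d is the modelled pupil diameter in mm (twice the circle_3d radius)
print(df.groupby("eye_id")["diameter_3d"].describe())
```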
So how can the Pupil Player app display that data? How can I export that data?
If I understand it correctly, my eye model still needs adjustment, and that's why my pupil size data is wrong (see the attached picture, only 1.2 mm in radius at the largest…)
Correct 🙂
I worked very hard to adjust my position and settings, still no luck, the pupil size is still very small (~0.7 mm). Any tips for the HTC VIVE add-on?
One more note: I know that the camera placement options in the VR add-on are very limited, but from the video it looks like the pupil would leave the field of view if the subject looked down. This will cause issues in the pupil detection.
I have tried my best to adjust it, I just don't know how to do it right; there is no tutorial online.
Any tips on it?
From the documentation: "A well-fit model is visualized by a stable circle that surrounds the modelled eyeball, and this should be of an equivalent size to the respective eyeball. A dark blue circle indicates that the model is within physiological bounds, and a light blue circle out of physiological bounds." This is not the case in your video. I suggest sampling larger angles and for a longer period of time. The eye movements can be faster than in your video.
Hello, I am interested in starting the Pupil Capture recording when I play the scene in Unity without pressing R. How exactly should I do that? I didn't understand exactly what you meant with the remote.
You can call this method https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/RecordingController.cs#L69
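The RecordingController method linked above is the hmd-eyes way. In case "the remote" from earlier was the confusing part: Pupil Remote can also start and stop a recording over the network. A minimal Python sketch, assuming Capture runs locally on the default Pupil Remote port 50020 (check the Network API menu):
```python
import zmq

ctx = zmq.Context()
requester = ctx.socket(zmq.REQ)
requester.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

requester.send_string("R")       # start a recording ("r" stops it again)
print(requester.recv_string())   # Pupil Remote acknowledges every request
```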
Hey all,
I am currently trying to query data about the eyes through the Pupil Labs Eye Tracker extension of the VIVE HMD. For this I would like to use the Python API. However, I am having a problem communicating with the Pupil Labs eye trackers.
When I try to get a response to the previously sent request with the line "ipc_sub_port = requester.recv().decode("utf-8")", my program waits forever. However, if I start the "Pupil Service" software in parallel, my program gets an answer directly. The only problem then is that two connections to the Pupil Labs are established at the same time, which again does not work.
My question would be why the Pupil Labs eye trackers do not respond when I try to communicate with the trackers via the lines
requester.send_string('SUB_PORT')
ipc_sub_port = requester.recv().decode("utf-8")
to establish a connection.
This problem occurs with all types of "send_string". I've also tried the examples from the site https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
It is important that the script connects to the correct port. Check out the Network API menu in the Pupil Capture or Pupil Service main window. Both show their respective ports. Adjust the port in the script accordingly.
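For reference, here is a minimal connection sketch with a receive timeout, so the script fails fast instead of waiting forever if nothing is listening on that port (50020 is only the default; adjust it to whatever the Network API menu shows):
```python
import zmq

ctx = zmq.Context()
requester = ctx.socket(zmq.REQ)
requester.setsockopt(zmq.RCVTIMEO, 2000)    # give up after 2 s instead of blocking forever
requester.setsockopt(zmq.LINGER, 0)
requester.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

requester.send_string("SUB_PORT")
try:
    ipc_sub_port = requester.recv_string()
    print("IPC SUB port:", ipc_sub_port)
except zmq.error.Again:
    print("No reply - is Pupil Capture or Pupil Service running, and is the port correct?")
```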
Hey, thanks for the fast reply. The ports should be correct, as my Python script starts to communicate with the Pupil Labs as soon as I start "Pupil Service". It seems like "Pupil Service" sends some kind of wake-up information to the Pupil Labs.
"to the Pupil Labs": not sure what you are referring to by this.
Oh, sorry. I meant the Pupil Labs VIVE HMD add-on.
Please be aware that either Pupil Capture or Pupil Service needs to be running in order for these scripts to work.
Oh, they do? That already helps a bit. But in that case I get another error, that the remote host closed the connection.
Yes, the hardware is accessed by the software. The software is responsible for processing the video provided by the hardware and performing the pupil detection/gaze estimation on it.
The script just accesses the result.
That sounds like the Capture or Service application shut down.
I'll look into that. You already helped me a lot. Thank you.
I'm trying to calibrate the Pupil Labs VIVE HMD add-on. I've tried to do it via code, but couldn't figure out how. I guess I can also manage to calibrate it with the "Pupil Capture" software, as it comes with a calibration feature ... obviously. But it seems to require a video stream, and the add-on for the VIVE HMD does not come with a world camera. In order to calibrate the Pupil Labs VIVE HMD add-on, I need to get a live video stream from the perspective of the HMD. Is that correct? Do I then have to feed that video stream into the "Pupil Capture" software?
In order to calibrate the VIVE add-on, you need a client to display and provide reference locations. An example client would be our hmd-eyes project, which implements the necessary functionality as a Unity plugin. I recommend reading the docs: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#getting-started There is also a section about calibration and a list of demo scenes.
Hey! My lab is working with the Pupil Labs VIVE addon in Unity, and we're trying to use the eye tracking to limit the user's vision to a small area that follows their gaze. I've been able to consume data directly from the eye tracker by linking my own code to the gazeController.OnReceive3dGaze event. The window now follows the user's gaze very quickly, but there are some problems.
1) The window is very jittery. I noticed that the gaze visualizer code you provide is less jittery than my window, which makes me think you performed some kind of smoothing on it. Edit: I looked into the gaze visualizer code and found that you just don't move the green ball if the confidence from the eye tracker is below a threshold. After implementing this in my code for the window, the jitter is much reduced. If there's anything else you can think of to help me reduce it further, please let me know.
2) The window occasionally lags for about a second. This also happens with the gaze visualizer; sometimes it just freezes for a moment before snapping across the screen to the correct place.
I'm wondering if you have any advice on mitigating these effects or any others you may know about to increase the accuracy of the window. Ideally we want to use this for research purposes and we need it to be very fast and accurate. Thanks!
Hi! Re 1): yes, we just filter by confidence. If you want to apply further smoothing, I can recommend https://cristal.univ-lille.fr/~casiez/1euro/ Re 2): these are probably longer periods of low confidence, which is why there is no update. But I cannot tell for sure. Otherwise, there might be something else blocking the thread longer than usual.
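For what it's worth, here is a minimal sketch of the One Euro filter idea from that link (written in Python for brevity, parameters chosen arbitrarily; you would run one instance per gaze coordinate and keep gating updates on confidence as you already do):
```python
import math

class OneEuroFilter:
    """Adaptive low-pass: smooths more when the signal is slow, lags less when it is fast."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.01, d_cutoff=1.0):
        self.freq = freq              # expected sample rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta
        self.d_cutoff = d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq                           # estimate speed
        dx_hat = self.dx_prev + self._alpha(self.d_cutoff) * (dx - self.dx_prev)
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)           # adapt cutoff to speed
        x_hat = self.x_prev + self._alpha(cutoff) * (x - self.x_prev)
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```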
Hello! I am collecting some data in a CSV (button clicks, head movements) in Unity, timestamped with Time.realtimeSinceStartup, and I need to sync it with the pupil data I record through Pupil Capture. I am wondering if there is an example to follow on how to call TimeSync and make sure that all of these time points will be synced.
You can use this function if time sync is set up correctly: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L46
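In case it helps to see the underlying idea: the conversion boils down to estimating a constant offset between the two clocks and applying it to your logged timestamps. A toy sketch (the two *_at_sync values are placeholders for one pair of timestamps taken at the same moment, e.g. via the TimeSync component):
```python
# one pair of timestamps sampled at (approximately) the same instant
pupil_time_at_sync = 10532.123   # placeholder: Pupil time at the sync moment
unity_time_at_sync = 42.001      # placeholder: Time.realtimeSinceStartup at that moment
offset = pupil_time_at_sync - unity_time_at_sync

def unity_to_pupil_time(t_unity):
    """Convert a Unity realtimeSinceStartup timestamp into Pupil time."""
    return t_unity + offset

print(unity_to_pupil_time(50.0))
```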
ok! thanks a lot for the quick reply!
Hey there, I'm trying to figure out what product my university bought from Pupil Labs. I was able to find an invoice, but it just says "head mounted USB Camera"; the weight was 200 grams and the price was the same as the Epson & HoloLens add-ons. I was wondering if anyone could help me figure out exactly which product they purchased, or should I just go through the support email with the PO number?
"Should I just go through the support email with the PO number?" That would be the easiest way. Please contact info@pupil-labs.com
But there should be an acronym, too. I might be able to tell you what it is based on this acronym.
Would that be in the invoice ID? Under "Item", all it says is "head mounted USB Camera", followed by the weight and commodity code.
I'll go ahead and draft up an email to be safe, just because the university doesn't even know if it's head- or desk-mounted 🤷
Yeah, I think this would be easiest.
I do believe it's from 2017, if either model came out after that.
For everyone reading this: We were able to resolve the issue via a direct message. The product in question was a Pupil Core headset.