@user-5ca684 Thanks! That was some good info on threads and locks. So, in my case it should work well in the Update() method, as that runs on the main thread.
@user-5874a4 soon you can check out my dev branch for the single threaded version
Hi Pupil Community! I am new to the Pupil software and am still getting my HTC Vive add-on fully up and running. However, after calibration the results are not ideal: the gaze marker jumps between three points from time to time and does not seem to follow the pupil in real time.
Does anyone with knowledge of the Unity plugin have a second? I have a question regarding the accessible data.
Or can anyone point me to somewhere I can ask questions?
@user-489841 this is the right place. @user-5ca684 is the right person to ask about the Unity plugin
@user-abff35 what calibration method?
Okay! Thanks @wrp! starts poking @user-5ca684 furiously
@user-489841 you have access to all pupil and gaze data - also frame data is exposed via the plugin. What specifically are you looking for?
I want to know the difference between the confidence values gotten from Pupil.values.Confidences[] and Pupil.values.BaseData[].confidence, because they differ every frame.
@mpk can you respond to this? This is not a Unity question but a Pupil question
Awesome.
Am I right that Pupil.values.Confidences[] is related to gaze?
Pupil.values.BaseData[].confidence would then be the underlying pupil positions. These can have different confidences.
So they are not referring to the same? Is that what I'm misunderstanding?
you have pupil positions, these are the positions of the pupil in the eye video.
each datum comes with its own confidence.
then we map one (monocular) or two (binocular) of these to a gaze datum (the point you are observing in the world). this has its own computed confidence.
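To make the relationship concrete, here is a small sketch in Python. Field names are modeled on Pupil's message format ("base_data", "confidence"); the Unity plugin may expose different names, and all values here are made up:

```python
# Each pupil datum (one per eye image) carries its own detector confidence.
pupil_left = {"id": 1, "norm_pos": (0.48, 0.52), "confidence": 0.91}
pupil_right = {"id": 0, "norm_pos": (0.51, 0.49), "confidence": 0.78}

# A binocular gaze datum is mapped from one or two pupil data and gets its
# own, separately computed confidence; the inputs travel along in base_data.
gaze = {
    "norm_pos": (0.50, 0.50),
    "confidence": 0.84,                      # gaze confidence
    "base_data": [pupil_left, pupil_right],  # pupil confidences live in here
}

def pupil_confidences(gaze_datum):
    """Collect the per-eye pupil confidences underlying one gaze datum."""
    return [d["confidence"] for d in gaze_datum["base_data"]]

print(pupil_confidences(gaze))  # [0.91, 0.78]
```

So Confidences[] and BaseData[].confidence describe different stages of the pipeline, which is why they differ every frame.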
all this under the condition that @user-5ca684 did not make changes to this structure in unity.
and one last thing.
The plugin hasn't been updated in 2+ months, so I don't think so. :p
gaze data has a basedata field that contains the pupil data that was used to make this gaze datum.
confusing to me here is that what I would refer to as gaze is called Pupil.values
So Pupil.values.Confidences[] refers to the confidence in the eye position, and Pupil.values.BaseData[].confidence refers to the confidence in the calculated gaze datum?
the other way around.
Pupil.values.Confidences[] -> gaze confidence
ah
Pupil.values.BaseData[].confidence -> Pupil confidence
AFAIK
Perfect. The reason I started looking into BaseData was that the pupil diameter value in Pupil.values.diameter never changes, but it does in BaseData.
gaze does not have a diameter.
I'm not sure why the struct would offer this field.
that makes sense in that context
Me neither
P
*:P
yeah. @user-5ca684 please revise or correct my assumptions.
But anyways. Thank you @mpk. That made sense to me. I'll hang around for when @user-5ca684 comes back at some point.
@user-489841 sounds good. Our plan is to continuously improve the Unity app just like we do with the rest of the Pupil project!
I'm working in a research group that is in love with the Pupil Labs eye-trackers for HDMs, so we are following you guys very closely. We are looking forward to that very much.
*HMDs
@user-489841 that's great to hear! thanks for the love
How would you guys recommend I troubleshoot connectivity issues from the Unity3D side? I am trying to use hmd-eyes but I cannot connect to the pupil service
Wireshark output
I turned off all firewalls, I can ping but I cannot connect to pupil_capture running on a remote host 192.168.2.1
I started "wireshark" to sniff packets but it seems like hmd-eyes on the unity side does not generate ANY traffic on tcp port 50020
The funny thing is that from an earlier version of hmd-eyes I can connect fine. It's just this latest version that is giving me trouble. Oh well.. just my 2 cents.
@wrp I use the 2D calibration in the HMD
@everyone does anyone know how I can turn off the gaze marker? Thanks!
@user-05da2f I had that problem at some point as well, but then it just started to magically work
@user-fe23df In CalibrationGL.cs, line 47: comment out the call to the Marker method. That's where the marker is drawn on the screen. I found this while debugging the code. I don't know if there is a better way to accomplish this.
@user-5874a4 thanks a lot!
@here if you find an issue in hmd-eyes, please make an issue on the repo: https://github.com/pupil-labs/hmd-eyes/issues so that it can be tracked/fixed
Hi. I'm trying to do 3D calibration and get gaze data from the unity hmd plugin, but it doesn't work… First, I tried 2D calibration and it worked. However, I failed to do 3D calibration. When I start calibration in 3D mode, I can only see a white circle target following my eyes and nothing else happens. What should I do? I would really appreciate an answer.
I think in 3D calibration it doesn't go to the next marker. Even though I changed the confidence condition from 0.4f to 0.0f, the white target didn't move. What's the problem?
@user-56b6a5 I also had a similar problem before. I think you should check whether the eye capture app is running properly and both eyes are detected well in the pop-up windows.
@user-fe23df Thank you for your reply! When I did 2D calibration using pupil service, it seemed to be okay. The white target moved well. And both eyes were detected well in both 2D and 3D modes, I think... Could you give me any other advice?
@user-56b6a5 sorry, I have no idea then. In fact, I can't even do 2D calibration as the script switches to 3D whenever I press play. And I just realized our problems are not the same actually. Calibration ran ok for me, it's just that after calibration the marker was right on the camera. And that magically got fixed after a few days.
@user-fe23df I envy you... Thank you very much!
@user-56b6a5 no problem at all, sorry I couldn't help :/
@user-56b6a5 You could also verify that in Pupil Capture, Calibration -> Method is set to "HMD Calibration 3D" and General -> detection and mapping mode is set to "3d". I've noticed that these sometimes don't get set automatically when unity communicates with Pupil Capture, e.g. the calibration mode change doesn't always get reflected in Pupil Capture.
@user-5874a4 Thank you for your reply! I use windows 10, version 0.8.7, and there is no HMD Calibration 3D option. Is that the reason for my problem?
@user-5874a4 it is planned to implement a custom inspector toggle to turn off the markers; however, our refactor also separates the visualization + calibration from the data-receiving part
@user-fe23df there is a dev branch that you can check out if you are interested
@user-5ca684 yeah will do, thanks!
@user-fe23df it is still heavily under development, but it should work, and I would love to hear your feedback
Can you tell me how to display the tracking marker on the screen? I want to display a static marker in the middle of the screen to check the tracking accuracy. Which part of the code do I need to modify? I couldn't find it.
@user-56b6a5 Mine is version 0.9.6. That's probably the issue for you. Maybe you can update and see if the issue resolves
hi, just to make sure: gazepoint3d and gazenormals, like eyecenters, are values in local (headset) space right?
@user-5ca684
Hi everyone, I am looking at the list of 3D calibration points, because we noticed that during calibration some of the points were out of our eyes' reach. Now we are wondering whether it is smarter to change the depth values of the calibration points rather than the x and y values. Also, we would like to know whether there is a reason every point is calibrated at 100f depth instead of varying in depth. Thanks!
As for the last question: we imagine that because we change the X and Y values to calibrate different angles of the eyes, we would want to calibrate at different depths too
Hi guys - big supporter here. Keep it open source. So, what's the word on the hope of getting the cameras behind the HMD optics, with hot plates to align the optical axis of the camera with the optics? After some use, it seems clear that the off-axis camera angles are costing in accuracy inside the HMD
I want to show the pupil data in the picture. I don't know the mapping between the panorama sphere and the image. Can someone give me some information?
@user-5ca684 Can you advise on the issue above?
@papr okay
@user-72b0ef As far as I know it should not make a difference
@user-5ca684 I am recording from unity but I keep getting this error:
FiniteStateMachineException: Req.XSend - cannot send another request
NetMQ.Core.Patterns.Req.XSend (NetMQ.Msg& msg)
NetMQ.Core.SocketBase.TrySend (NetMQ.Msg& msg, TimeSpan timeout, Boolean more)
NetMQ.NetMQSocket.TrySend (NetMQ.Msg& msg, TimeSpan timeout, Boolean more)
NetMQ.OutgoingSocketExtensions.Send (IOutgoingSocket socket, NetMQ.Msg& msg, Boolean more)
NetMQ.OutgoingSocketExtensions.SendFrame (IOutgoingSocket socket, System.String message, Boolean more)
PupilGazeTracker.GetPupilTimestamp () (at Assets/Scripts/PupilGazeTracker.cs:1773)
FFmpegOut.CameraCapture.OnRenderImage (UnityEngine.RenderTexture source, UnityEngine.RenderTexture destination) (at Assets/FFmpegOut/CameraCapture.cs:159)
this basically makes my experiment frame rate drop from 300 to 30 fps and eventually crashes unity
Do you have any idea how I could fix it? thanks!
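For what it's worth, this exception is how ZeroMQ REQ sockets behave by design: they enforce strict send/receive alternation. Here is a minimal pure-Python model of that state machine, illustrative only, not NetMQ's actual code:

```python
# Illustrative model only, not NetMQ's implementation: a ZeroMQ REQ socket is
# a strict send/receive state machine, and issuing a second send before
# reading the reply is what raises "FiniteStateMachineException: Req.XSend -
# cannot send another request".

class ReqSocketModel:
    """Minimal model of the REQ socket's enforced send/recv alternation."""

    def __init__(self):
        self.awaiting_reply = False

    def send(self, msg):
        if self.awaiting_reply:
            raise RuntimeError("Req.XSend - cannot send another request")
        self.awaiting_reply = True   # must recv() before the next send()

    def recv(self):
        if not self.awaiting_reply:
            raise RuntimeError("cannot receive without a pending request")
        self.awaiting_reply = False
        return "1234.5"              # stand-in for the timestamp reply

sock = ReqSocketModel()
sock.send("t")      # ask for the current Pupil timestamp
print(sock.recv())  # reply consumed, socket ready again
sock.send("t")      # fine: strict request/reply lockstep
sock.recv()
```

So my guess would be that a timestamp request is being sent while the reply to an earlier one is still pending, e.g. if the request is fired every rendered frame faster than replies come back.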
@user-fe23df I will take a look
@user-5ca684 thanks a lot!
oh yeah and following right after is "OnRenderImage() possibly didn't write anything to the destination texture!"
I suppose this is due to the error above
and this is probably why my videos in recordings are always broken...
@user-fe23df actually I just realized that the recording is being forced to 30
on purpose
and so is the whole of Unity
you can try to change that where it is being set
but I have to say it is risky
@user-5ca684 So changing the depth of the calibration markers also does not change how well the markers perceive depth?
the whole OnRenderImage function is for recording the unity view, right?
@user-5ca684 Sometimes it seems the markers are more accurate on objects that are far away rather than close by
If I only need the eyetracking data then I suppose I can just not use it?
further*
because I'm thinking of resorting to commenting out the part that causes the error
In OnRenderImage there's the timeStampList variable that is updated with PupilTimeStamp, I wonder what it is for exactly
@user-72b0ef honestly I think @mpk knows more about accuracy, but as far as I know the depth of the markers should not make a difference in calibration
@user-fe23df if you only need the eye tracking data then you could try that
@user-5ca684 Thanks!
@mpk Hi!! Could you tell me a bit more about the depth of the calibration markers? It seems that after calibrating at 100f depth, objects that are further away tend to get more accurate markers.. Is it possible to calibrate at varying depths?
@user-72b0ef the 2d calibration is accurate for a certain depth. In your case my guess is that moving markers further away helps since your scene is generally further away as well.
for 3d this applies as well. But here you can present markers at mixed depths.
it is always advisable to calibrate at the range you are going to look at afterwards.
@mpk I see, can I get an advantage by doubling the number of calibration points and calibrating at 100f and, for example, 25f?
yes. For 3d this is certainly true.
@mpk That is really good news
for 2d you will have to more or less decide on one depth.
@mpk what would be the correct construction of a gaze ray for each individual eye? is it from eyecenter along Vector3(-gazenormal.x, -gazenormal.y, gazenormal.z)?
as I do this I get the gaze generally going in the right direction where my eyes look, but the two rays don't really converge
@user-fe23df make sure that you are using the gaze_norm and eye center data from the gaze datum, NOT the pupil datum (but I must say I'm not sure if the naming is the same in unity.)
I'm using the data that was specified from the ReadMe, which is under pupil.values
the rays may not converge, but we also report the closest convergence point; it's the 3d gaze point.
I would make the rays have the same magnitude as the 3d gaze point. That should roughly have the desired effect.
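The two ideas above can be sketched like this (Python, with made-up eye geometry; names and numbers are illustrative, not the plugin's API): the per-eye rays may be skew, so the 3d gaze point is the point of closest approach of the two rays, and the "magnitude trick" then draws each ray at that same distance.

```python
import math

def closest_approach(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t*d1 and o2 + s*d2."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:                  # parallel rays: no unique answer
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    p1 = tuple(o + t * x for o, x in zip(o1, d1))
    p2 = tuple(o + s * x for o, x in zip(o2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

def scaled(d, length):
    """Direction d rescaled to the given length."""
    n = math.sqrt(sum(x * x for x in d))
    return tuple(length * x / n for x in d)

# Hypothetical eye centers 6 cm apart, both verging on a point 1 m ahead.
left, right = (-0.03, 0.0, 0.0), (0.03, 0.0, 0.0)
dir_l = (0.03, 0.0, 1.0)    # e.g. (-gazenormal.x, -gazenormal.y, gazenormal.z)
dir_r = (-0.03, 0.0, 1.0)

gaze_point = closest_approach(left, dir_l, right, dir_r)
depth = math.sqrt(sum(c * c for c in gaze_point))  # distance of the 3d gaze point
ray_l = scaled(dir_l, depth)   # per-eye ray with the same magnitude as gaze point
print(gaze_point)              # close to (0.0, 0.0, 1.0)
```

With both rays drawn at the gaze point's magnitude, their endpoints land near the 3d gaze point even when the rays themselves never intersect.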
Yes, but when I look at a canvas at a certain depth and mark down where the 3 rays hit (left, right and main gaze point), they all are hitting at very different places
That being said we are working on some super exciting updates for the 3d model tracker that will boost accuracy.
ah okay, I should try the magnitude trick then. Thanks!
@user-fe23df If you are running pupil capture, after 3d calibration you can turn on the gaze mapping visualizer. It should give you some more insight!
@user-5ca684 Could you advise on the issue above?
@user-5ca684 could you help me figure out the NetMQ bug I mentioned above? I would actually still want to get a video capture, and the error freezes Unity
also, it would be great if I can keep the framerate from dropping after pressing record
I tried to change the values and prevent the CameraCapture from changing the framerate, but it still gets dropped to around 30-40
I wonder why you guys want to keep it that low, since for any VR stuff the framerate should be at least 90 fps
Hey again! I would like to ask a question about the calibration and tracking part. When we use 2D we can follow each eye independently and put up gaze markers (red, green, blue). We work with patients that suffer from brain injuries, so it is really useful to track each eye separately. However, we would like to move over to 3D but still show markers for each eye independently. The problem is that with 3D only one marker shows up (because the eyes will always intersect with normal 3D tracking), but we would like to track each eye separately and still make use of the depth parameter. My question: can it be done?
Yes, this can be done. You will have to look at the eye normal and eye pos for each eye in the 3d gaze datum dicts. However, the 3d calibration assumes that the subject's binocular vergence is functional.
if you want to calibrate each eye individually you will have to modify the source code to run two monocular calibrations separately.
I should add that when looking at eye_normal in the gaze dicts you will have to assume a depth, since it can no longer be inferred from vergence.
exactly!
in summary: You will get two gaze directions not two 3d gaze points.
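A sketch of that "assume a depth" step: with dual monocular calibration you only get a direction per eye, so a marker is placed by walking along that ray to an assumed viewing distance. `gaze_at_depth` and its arguments are illustrative names, not the plugin's API:

```python
def gaze_at_depth(eye_center, gaze_normal, depth):
    """Point along one eye's gaze ray at the assumed viewing distance."""
    n = sum(c * c for c in gaze_normal) ** 0.5   # normalize the direction
    unit = tuple(c / n for c in gaze_normal)
    return tuple(o + depth * u for o, u in zip(eye_center, unit))

# Left eye looking straight ahead, marker assumed to sit 2 m away:
print(gaze_at_depth((-0.03, 0.0, 0.0), (0.0, 0.0, 1.0), 2.0))  # (-0.03, 0.0, 2.0)
```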
@mpk Sounds good! Thanks for the answer. I can imagine it is hard to assume a depth when somebody suffers from "cross-eyes"
Also calibrating with cross eyes in binocular mode would be impossible
so I think double monocular calibration is the way to go
still, what would be the benefit of choosing 3D calibration over 2D calibration if we are using the dual monocular plugin?
@user-72b0ef I use individual eye data for 3d. What I did is shoot raycasts with eyecenters as the origin and gazenormals as the direction of the ray (-x, -y, z though). When one of the rays hits a collider, you can position a new game object at the hit point; those should be your individual eye markers
I hope this helps somehow
I have a bit of problem with the accuracy of the hits, so if you implement in this direction it would be nice to hear how it performs for you
@user-fe23df Yes, this is very helpful! Thank you :). However, we already do this for 2D
The results for me are pretty accurate!
what do you mean you already do it for 2D? as in you project your 2D data into 3D space?
yep
yeah it seems like doing 2D then converting to 3D is more accurate than everything 3D for me
I'd attribute that to the binocular/dual monocular mapping
Exactly my point
@mpk when this error happens: FiniteStateMachineException: Req.XSend - cannot send another request, while the GetPupilTimeStamp function is being called, does that mean the data stream is overloaded?
@user-72b0ef how did you deal with the framerate drop in VR?
@user-fe23df I don't think that's the case. My guess is something is going on with the unity plugin
@mpk so even when you hit record the frame rate doesn't change at all?
@user-5ca684 is the one with knowledge on this.
oh sorry I thought you were @user-72b0ef
@user-fe23df I was referring to the FiniteStateMachineException: Req.XSend - cannot send another request, when the GetPupilTimeStamp function is being called, does that mean the stream of data is [email removed] how did you deal wi
@mpk yeah I suppose so, because pupil capture works fine without any problem
@user-fe23df As far as I know the framerate is actually set when you start recording. This error message, however, looks to me like at a certain point the plugin is sending more requests than it can handle
@user-fe23df how do you have your unity set up? have you tinkered with the framerates anyhow?
@user-fe23df have you tried recording out of the box?
Hi everybody. For our Unity project, we currently want to detect both the duration of a blink of one eye and the gaze direction of the other eye simultaneously. Any ideas on how to do that?
We're currently testing the blink detection plugin, but we're open to any input or advice we can get
@user-fe23df I don't have any frame drops at all
@user-fe23df Only the capture rate goes down a bit, but after calibrating it is good again
@user-5ca684 yes I believe the framerate drop and the error message are two different problems
@user-72b0ef ah okay. And this is when you record right?
also, thanks a lot for your answers!
I don't use the FFMPEG plugin to record, if that is what you mean....
Ah yeah, that's what I mean
I only know about the capture rate decreasing when calibrating
ah, forgive me then, I thought you were talking about something else
it's okay, it's great to hear though :))
I think this is because we require 120 samples and we use 120Hz cameras, so we don't want calibration to end within 1 second
so the capture rate reduces to about 30
so it takes 4 whole seconds of good eye detections to complete 1 calibration point
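The arithmetic above checks out, as a one-line sanity check shows (the variable names are just for illustration):

```python
# 120 required samples per point at an effective capture rate of ~30 Hz
# during calibration means 4 seconds of good detections per point.
samples_per_point = 120
effective_rate_hz = 30
seconds_per_point = samples_per_point / effective_rate_hz
print(seconds_per_point)  # 4.0
```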
I don't really notice any drop during calibration for our project
maybe there is a drop also, but not too drastic
its not a framedrop
its a capturerate drop
oh yeah sorry
I didn't notice that one either
the pupil capture's rate is always kept at 30 and each eye camera's is around 80-90
for each eye, not sure what the rate is during calibration
we have got values of around 240
since we have 2x 120Hz cameras, it can go up to 2x 120 capture rate
hmm interesting
I can make a screenshot of it if you would like
I should take a look again when I get back to the lab
oh that would be great of course!
somebody is playing space pirates now, so it is great for experimenting with the cameras lol
hmmm, seems like it goes up to 360 for just a few seconds after calibration
but then it settles back to 120
hmm interesting
I now wonder
maybe the 30 that I remember is only during recording with the ffmpeg plugin
I do recall seeing 120 fps at some point for capture as well
We want to study how the perception of depth differs in vr compared to the real world. Has anyone used the depth (z-coordinate) accurately? Is it really working? I don't see it anywhere near accurate. I placed two balls at varying depths and focused on them separately. Ideally, shouldn't you expect the markers to be drawn at the places the balls are? I don't see that happening.
@user-5874a4 the 3d calibration is at a proof-of-concept stage for vr.
it works much better using a normal headset in the real world.
We are working on improving the 3d calibration in vr. Contributions and feedback are always welcome!
@mpk Thanks for letting me know. Just curious: what is it that makes 3d calibration and computing depth difficult in VR? I understand that in the normal case we compute depth from the intersection of the gaze normals.
Also, how does it work better with a normal headset in the real world?
@muzammil#8393 I think we are still doing something not quite right with the position of the eyes in vr space.
@user-1eecd4 questions migrated from core channel. Questions:
1- Are Pupil Service and Pupil Capture the same? If not, which of them is more suitable to work with unity? I'm running them and unity locally on windows 10.
2- What are the differences between the Camera Image, ROI and Algorithm modes? Which one is more suitable to work with unity? Is it possible to modify the green circle that shows up around the eye's boundary?
3- I have adjusted my hmd to fit my face properly; however, during 3d calibration I can't see the white markers properly when they are at the corners.
4- How could the fixation and blinking plugins work in unity?
Responses: 1. Pupil Service and/or Pupil Capture with hmd-eyes - You can use either Pupil Service or Pupil Capture. Pupil Service was designed to be used by other clients (hence "service"). Pupil Service is designed to be controlled via messages sent over the network. Therefore you should make sure that you properly close Pupil Service (otherwise you may have zombie processes of Pupil Service running in the background). Pupil Capture also works and can be used with the unity plugin.
Thanks a lot @wrp for the clarification. I hope you don't mind if I come back with other questions
@user-1eecd4 welcome. please feel free to ask questions - that is one of the main purposes of these chats
glad to hear that @wrp
Hi! I have a problem. I used the 2D calibration mode in the HMD, and when I finished the calibration the tracking results did not seem very good; the error was large. For example, in this picture I look at the green dot in the HMD, but pupil maps my gaze to the red dot. What can I do to improve it?
Hey guys, I'm curious - what's holding you back from using an IR hotplate and placing the camera inside to improve the eye imagery? Is it the lensing? Space? The complicated installation process?
All three! We tried but it was not possible to do it as an add-on
Guys, I am not sure how to use the recording feature in unity. When I click "Record" after calibration, I can see a square-shaped transparent box blinking in the middle of my screen, which goes away after the recording is stopped. I also see an error in my console about a missing directory. When I create that directory, the error goes away; instead, I see a debug log that it's creating the video file with a certain name. The application gets stuck saying "Processing…" and I see no video file actually being created. Can anyone point out if I am doing anything wrong?
Hello, we're trying to set up the PupilGazeTracker for Unity and it successfully connects, gives a capture rate but does not update the left eye / right eye in the on-screen UI. However, when calling Pupil.values.GazePoint3D it does give us the Vector3 (position). Is this position in local or world coordinates? In any case it does not seem to reflect the correct location in the scene and the update-rate is not very good. Any suggestions to which method we should use to convert the input to a world-space position in Unity?
camera always reinits, why?
most likely the software is configured to read from only one camera.