i am looking to control the vive cursor using pupil. can anyone point me in the right direction regarding how i can start to map pupil data for cursor control?
@user-6cbd99 start with the gaze visualization demo and work from there i would say
hi @user-8845ff, pupil min and max are defined in pixels of the eye camera. please have a look at the "Fine-tuning" section of https://docs.pupil-labs.com/#pupil-detection
hi @user-32853a, which version of hmd-eyes and pupil capture are you running? Any related logs in the console - what is the last thing logged? Feel free to copy and send me the console log via pm.
Does anyone know how to record within a unity scene? I'm using hmd-eyes v0.61 for Hololens and unity 2018.3.5f1
Hi @user-3b599d - ~~please have a look at the docs here https://github.com/pupil-labs/hmd-eyes/blob/master/Developer.md#recording-data~~. If you have issues with your hololens setup in general, for building for Hololens we recommend hmd-eyes v0.51, pupil v1.7 and unity 2017.4 LTS.
I already tried this approach but nothing was recorded. Besides, the scripts in the event system are missing.
@user-3b599d you are right, unity scene recordings in hmd-eyes are only supported for VR - not hololens. I would recommend looking into other third party recording solutions - for example in the asset store. you can still record pupil and gaze data via Pupil Capture.
The missing scripts are part of the unity hololens extensions - looks like your unity + hololens environment might not be set up properly.
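For reference on recording via Pupil Capture: recordings can also be started and stopped remotely over Pupil Remote. A minimal Python sketch, assuming Pupil Remote on the default port 50020:

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote

pupil_remote.send_string("R")  # start a recording
print(pupil_remote.recv_string())

# ... run your task ...

pupil_remote.send_string("r")  # stop the recording
print(pupil_remote.recv_string())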
@fxlange thank you for your answer. After I couldn't record, I saved the 3d gaze positions in a csv file. Is it also possible for pupil data?
@papr i wanted to ask: if i have a high confidence setting at 320x240 resolution with 60 fps, why does the confidence go to 0 when i have 800x600 resolution at 60 fps? anyone else is also free to give their input
Higher resolutions use a mechanism to automatically select possible pupil regions. If the pupil is bigger than the regions, pupil detection fails.
so i should set a better minimum and maximum pupil size?
No, these are other parameters. The regions I am talking about do not have a user-settable ui element
But one idea would be to make it dependent on selected maximum pupil sizes.
But it requires changes to the detection algorithm
i feel like a higher resolution would give better pupil detection. is that true? and is there written code for it, or would i have to do it myself? what would you recommend i do if i don't know how to make changes to the algorithm?
@user-6cbd99 my recommendation is to use 320x240 for now.
Often, changes to camera exposure times and gain have more impact on the detection than the resolution.
ok thnx 😃
Hi, I just want to make sure about some things. I'm currently developing a project using hmd-eyes-alpha. It's quite a hassle to migrate to the newly published hmd-eyes-beta, and it seems that it works the same way. Is there any real difference or benefit if I use the new hmd-eyes-beta?
And about the 3d gaze: hmd-eyes 1.0 and 0.61 are using the same algorithm to calculate the 3d gaze, right?
Thanks in advance.
@user-3b599d - Pupil Capture recordings include pupil data as well, yes (https://docs.pupil-labs.com/#detailed-data-format).
And you can access pupil data in hmd-eyes by subscribing to the pupil topic: https://github.com/pupil-labs/hmd-eyes/blob/master/Developer.md#accessing-data (for hmd-eyes up to v0.62).
@user-82e954
@fxlange thank you very much! I'll try it.
👋 is there a known issue with graphing data from the gaze topic from another thread? we run the pupil service and perform calibration. if we continue to graph data from the gaze topic post-calibration we get skewed results (points running in a line when looking at the outer perimeter of the viewable scene). however, if we collect the data first and then graph it, we see an accurate measure of what has happened
is it possible that zmq is being interrupted by graphing / is there a blessed way of graphing data in real time?
hi @user-4c161a - could you please elaborate on how you do realtime and offline graphing. for example in unity via hmd-eyes? which version? how do you access the data (from another thread)? and how does your realtime differ from offline graphing?
@fxlange yes i can! wall of text incoming.
BACKGROUND:
- we aren't using unity at all, but instead using HMD-Eyes connected to a run of the pupil service.
- we've tried running pupil service both as an application (downloaded from the pupil site) and directly from the cli (forked repo). we haven't yet upgraded to the beta 1.0 version.
- we are graphing with two different methods, bokeh and matplotlib. both are run as a separate process from the cli and both connect to the gaze topic. they work a bit differently.
MATPLOTLIB DASHBOARD: can be run in two ways.
- NOTE: connects to the gaze topic using a subscriber and the recv_multipart function.
- Matplotlib dashboard 1: running sequentially. we calibrate, then continuously sample from the gaze position after calibration. once the data has been sampled we graph it. this results in an appropriate graph (a rectangle of the inner perimeter of calibration points). although it isn't ultra precise, it's a great foundation.
- Matplotlib dashboard 2: running "real time". we calibrate, then continuously sample from the gaze position after calibration. we graph each point as we receive it from the topic. we expect latency here, but are experiencing what seems to be data skew: the graph no longer displays a rectangle but rather one diagonal line.
- NOTE: my colleague has recently received the rectangle in this scenario (i believe, still need to confirm) by subscribing to the gaze topic and only graphing 1 out of 50 points.
BOKEH: uses recv() when reading from topics. has a series of callbacks, each in their own thread, that sample data from the gaze topic. the implementation relies on zmq_tools for subscriptions. data is sampled from the gaze topic and then streamed into the data source that bokeh has connected to the graph. (i suppose, similarly to my colleague, if i downsample here and load the data into the graph in batches rather than for every point, i can reduce the impact of the graphing mechanism.)
- similarly to the realtime scenario in matplotlib dashboard 2: the output graph is a diagonal line.
I suspect that we've failed to understand a few areas:
- we greatly underestimated the amount of latency that both graphing packages would have and how their blocking calls affect other IO processes
- we have failed to understand how to connect to the IPC backbone without interrupting the pupil workflow
- we don't understand the appropriate way to drink from the firehose without drowning. (i think we might be suffering from the slow-subscriber "snail" problem mentioned in the ZMQ documentation and the pupil Interprocess and Network Communication readme)
TL;DR: what is the best way to:
- read from the gaze topic in another program when that program runs sequentially and graphs the data afterwards
- read from the gaze topic in another program when that program runs in a multi-threaded fashion and graphs the data live
- are the blocking calls that we see from our graphing packages affecting the quality of pupil data? it seems like we aren't just getting missed packets, but rather the gaze from the packets we are getting is no longer viable
@user-4c161a
1. The published zmq data is multi-frame data, i.e. each message includes at least two frames. recv() only receives a single frame, i.e. you have to call recv() multiple times per message (until sub_socket.get(zmq.RCVMORE) == False).
2. sub_socket.get(zmq.EVENTS) & zmq.POLLIN is True as long as there are messages available. It is recommended to recv()/recv_multipart() all available messages between blocking io calls. This way you can fetch the data in batches.
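For example, a minimal Python sketch of this batching pattern (assumptions: Pupil Remote on the default port 50020; update_graph is a placeholder for your plotting call):

import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the SUB port of the IPC backbone
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.subscribe("gaze.")

while True:
    sub.poll()  # block until at least one message is available
    batch = []
    # drain everything that is already queued before doing blocking work
    while sub.get(zmq.EVENTS) & zmq.POLLIN:
        frames = sub.recv_multipart()  # frame 0: topic, frame 1: msgpack payload
        batch.append(msgpack.unpackb(frames[1]))
    update_graph(batch)  # placeholder: hand the whole batch to the plot at once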
Hi! How would you go about real-time graphing while using HMD-Eyes connected to a run of the pupil service and what would you recommend the graphing frequency to be? Thanks!
@user-4e99bb Drawing frequency can be bound to your monitor's refresh rate, probably 60Hz. But it does not matter that much. Just select a frequency and stick with it.
Hello, I just started playing around with the Unity plugin and had a few questions.
I wanted to ask how to increase the calibration radius to encompass the whole screen? I notice that when the calibration runs it only takes up about 50% of the display. Is there any way to make adjustments to the calibration radius, or make a custom calibration set?
@user-a3b085 are you using the recently released 1.0 beta?
I'm using the most recent download from release/beta. The unity package states its v1.0-beta
Have you run the full calibration? IIRC, the calibration shows markers at different distances, effectively showing different calibration radii. Usually the close markers fill the complete screen. If this is not the case, it is possible for you to make a custom set of markers -- but this is something @fxlange will have to elaborate on.
Using any of the demo scenes, or even the specific "Calibration Scene" the "eye markers" don't seem to go to the edge of the display. So I wanted to either adjust the "Calibration Scene" specifically or get help to modify the "Eye Marker" positions so they calibrate to the edges of the display.
I'm not using the eye tracker within an HMD currently. And just using my game window in the editor while having the eye tracker on my head. If that information helps.
I would be interested in how you are having the eye tracking add-on "on your head" 😉 Even though it looks like the markers do not go to the edge of the preview window, in my experience, they are barely visible in the actual field of view when wearing the hmd
Just wearing it like this.
Ah, ok, you purchased the normal headset, not the hmd add-on then.
yes.
We won't be using the eye tracker with a commercial HMD like the vive or oculus. My company is creating a new prototype headset, which is why for our purposes we would like to adjust the radius of the calibration so that it can match the field of view within our prototype HMD.
I notice that there are default settings/default target scriptable objects. Do these affect where the targets are shown?
ok, that makes perfect sense 👍 The new beta provides the possibility to rearrange the calibration markers but @fxlange will have to comment on how to do that.
ah okay.
I shall wait for his/her response!
Hi @user-a3b085 - the CalibrationController in the scene refers to CalibrationTargets to define the placement of the calibration markers.
the setup is quite flexible: the default implementation places them in circles. you can play around with the provided circle targets (Resources/Calibration/Default Targets), create your own circle targets (Create/Pupil/CircleCalibrationTargets) or even write your own implementation of CalibrationTargets - if you prefer a grid for example.
be aware that CalibrationTargets are defined as ScriptableObjects and therefore live in the project, not in your scene.
please have a look at the docs here: https://github.com/pupil-labs/hmd-eyes/blob/release/beta/docs/Developer.md#calibration-settings-and-targets and let us know how it goes.
anyone using pupil with unity? i am trying to show eye images in my unity app but all it says is not connected
any guidance would be appreciated
@fxlange Would you be able to give me some pointers on how to do a grid system for the targets based on camera width and height? I'm attempting to do a full display calibration, and I'm not sure how to work within the CalibrationController class.
I'm attempting to anchor target points based on camera.pixelRect to get the dimensions of the camera viewport, then translating that into world space. So ideally I would have 9 target locations at anchor points along the display edges (i.e. top, right, left, bottom).
private void UpdateMarker()
{
    marker.position = camera.transform.localToWorldMatrix.MultiplyPoint(currLocalTargetPos);
    marker.LookAt(camera.transform.position);
}
I don't know if this function is messing with my logic inside of CalibrationController. I guess I would ultimately like some pointers on how to work within your calibration system to create a grid system that will work with different resolutions. Or just a specific resolution would work as well. and then I can build it out from there.
hi @user-a3b085 - instead of messing with the CalibrationController class I highly recommend writing your own CalibrationTargets implementation. this involves 3 steps:
(1) you have to extend the CalibrationTargets class by overriding the abstract methods GetTargetCount and GetLocalTargetPosAt(int idx). the latter is where you have to put your grid logic. please have a look at CircleCalibrationTargets which is doing exactly this (but for a list of circles). maybe start with just one point in the center for testing and add the grid logic later.
(2) make sure to have a [CreateAssetMenu(...)] on top of the class so you can create them via the context menu in the project view. again you can copy and adjust this from the circle implementation.
(3) create an instance of your grid targets via the context menu and assign it to the targets property of the calibration controller in your scene via the inspector.
be aware that this approach is the most advanced of all the approaches I mentioned yesterday and involves some deeper knowledge of c# and unity (or hacking it together by copying from the circle implementation). instead you could also start by adjusting the default circles, having more circles and circles with increased radius (under Resources/Calibration/Default Targets).
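For illustration, a minimal sketch of such a grid implementation, assuming the abstract CalibrationTargets API described above (rows, columns, extent and depth are illustrative names, not part of hmd-eyes):

using UnityEngine;
using PupilLabs; // namespace assumed from the hmd-eyes beta plugin

[CreateAssetMenu(fileName = "GridCalibrationTargets", menuName = "Pupil/GridCalibrationTargets")]
public class GridCalibrationTargets : CalibrationTargets
{
    public int rows = 3;
    public int columns = 3;
    public float extent = 0.3f; // half-width/height of the grid in local camera space
    public float depth = 2f;    // distance in front of the camera

    public override int GetTargetCount()
    {
        return rows * columns;
    }

    public override Vector3 GetLocalTargetPosAt(int idx)
    {
        // map the index to grid coordinates in [-extent, extent],
        // with (0, 0, depth) being the center of the view
        int row = idx / columns;
        int col = idx % columns;
        float x = columns > 1 ? Mathf.Lerp(-extent, extent, (float)col / (columns - 1)) : 0f;
        float y = rows > 1 ? Mathf.Lerp(-extent, extent, (float)row / (rows - 1)) : 0f;
        return new Vector3(x, y, depth);
    }
}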
Hi @fxlange This is exactly what I did. I extended the CalibrationTargets class. I tried making a grid system based on anchor points relative to the main camera reference inside of CalibrationController, using camera.pixelRect to get the viewport size. But when I try to place a target at an anchor of "Left" or "Right" the positioning doesn't behave as it's supposed to.
Ok cool, how do you access the camera from your implementation? One approach would be to use Camera.main (the first enabled camera tagged "MainCamera") - this way you shouldn't need to change anything in CalibrationController.
Instead of camera.pixelRect (which I haven't used before) I would have gone for camera.ViewportToWorldPoint https://docs.unity3d.com/ScriptReference/Camera.ViewportToWorldPoint.html together with camera.worldToLocalMatrix to make sure your final coords are in local camera space. This allows you to define your grid in normalized instead of pixel coordinates.
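For example (a sketch; u and v are normalized viewport coordinates in [0, 1] and depth is the distance in front of the camera):

Vector3 world = cam.ViewportToWorldPoint(new Vector3(u, v, depth));
Vector3 local = cam.transform.worldToLocalMatrix.MultiplyPoint(world); // final coords in local camera space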
To access the camera in CalibrationController I just pass the camera reference into targets on Awake().
I tried camera.ViewportToWorldPoint, but since this needs viewport coordinates I needed to use camera.pixelRect to get the viewport coordinates, since camera.pixelRect returns me the width and height of the camera viewport.
I'll try messing around with it more, but I felt like the UpdateMarker() function inside of CalibrationController is setting the position of the marker to a different value because it's using marker.position = camera.transform.localToWorldMatrix.MultiplyPoint(currLocalTargetPos);
So even if I compute the exact position I want the marker at, it's not being set to exactly that value.
Has anyone tried to create a grid calibration system before? If you have any pointers on how to get started on one that works within the confines of your system, I would be grateful to hear them.
The viewport is a normalized space just from 0,0 to 1,1. So the resolution shouldn't matter and the transformation is handled by the functions mentioned above.
The calibration controller uses the calibration targets to set currLocalTargetPos and transforms it into world space based on the camera pose.
hmm.. Okay. What kind of position do I need to return so that the CalibrationController can transform it correctly?
(Maybe try to create a hardcoded grid first (independent of the camera). Just 9 points on a 0.1 grid?)
You need to return your coords in local camera space.
So 0,0,z is in the center of your view no matter how you move your camera.
I see...
So then would 1,0,z be the right of my view?
yes
oh! Well, that makes things much easier I suppose. Thank you then. I will try this with hard coded values and see how it turns out
I will report back on how it works.
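For the hardcoded test suggested above, the 9 points on a 0.1 grid in local camera space could simply be (a sketch; z is the distance in front of the camera, with (0, 0, z) being the view center):

Vector3[] testGrid =
{
    new Vector3(-0.1f,  0.1f, 1f), new Vector3(0f,  0.1f, 1f), new Vector3(0.1f,  0.1f, 1f),
    new Vector3(-0.1f,  0f,   1f), new Vector3(0f,  0f,   1f), new Vector3(0.1f,  0f,   1f),
    new Vector3(-0.1f, -0.1f, 1f), new Vector3(0f, -0.1f, 1f), new Vector3(0.1f, -0.1f, 1f),
};

returned one at a time from GetLocalTargetPosAt(idx).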
Hello, I'd be glad if someone could help me. I am currently working on a Unity project (2019.1.0f2) using the HTC Vive and an eye tracker from Pupil Labs. In an old version of the unity plugin (unity_pupil_plugin, hmd-eyes 0.61) there was the possibility to save the ingame video and the tracked gaze in Unity world coordinates. What I am trying now is to achieve the same results with the latest release (hmd-eyes v1.0-beta), but it seems to me that this functionality no longer exists. When I use the RecordingController I only get the video files from the tracked eyes and a few files (.NPY and .PLDATA), but no information about the gaze in Unity world coordinates. My question now is whether I have simply overlooked something or whether this functionality is not yet implemented at this point in time? Many thanks in advance!
@user-6fd4b8 I believe it's removed temporarily. https://github.com/pupil-labs/hmd-eyes/blob/develop/docs/Developer.md#recording-data I think it'd be nice to re-construct the gaze offline in Unity-world coordinates as well
Hi everyone, I'm trying to use my own scene to test the eye tracker and I've found a post by P Perron (01/02/2019). The first part is working, but when I edit the script I get an error that "calculateMovingAverage" doesn't exist. Is it due to an update? If it is, how can I manage it?
Here is the script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class YOURSCRIPTNAME : MonoBehaviour
{
    public Transform marker;

    // Use this for initialization
    void Start()
    {
        PupilData.calculateMovingAverage = true;
    }

    void OnEnable()
    {
        if (PupilTools.IsConnected)
        {
            PupilGazeTracker.Instance.StartVisualizingGaze();
        }
    }

    // Update is called once per frame
    void Update()
    {
        if (PupilTools.IsConnected && PupilTools.IsGazing)
        {
            marker.localPosition = PupilData._3D.GazePosition;
        }
    }
}
Thanks
hi @user-6fd4b8 & @user-97591f - indeed, this functionality is removed; it wasn't compatible with pupil player nor with the rest of the recording. instead we are working on streaming to pupil capture, so everything will be recorded there and can be viewed in pupil player.
@user-1ece1e which hmd-eyes version are you using? both hmd-eyes v0.62 and the new beta release v1.0 have demos for showcasing eye/gaze tracking.
@fxlange I'm using hmd-eyes v0.62. If you're talking about the shark demo, yes it works, but when I try to change the scene the calibration window is still there, and after calibration I just have the scene without the eye tracker...
Hello everyone, I'm trying out the heatmap market demo. The recording of the videos/images works, but no particles are drawn. In highlight mode I only get a white image in the video. I have already added the "Calibration Demo" script for testing and I get the eye tracking points. So the data should be available. I don't get any error messages. Any ideas? Thanks in advance. Our System: Pupil.Import.Package.VR.v0.62.unitypackage with Unity 2017.4.0f1 LTS
Hi @user-1ece1e - yes, in case of deploying for hololens it is the shark demo. Switching scenes is supported via the Pupil Manager. Inside the scene Shark Demo/2d Calibration Demo you find the Pupil Manager GameObject with the Pupil Manager component attached. The component allows you to define the scenes you want to open after a successful calibration (https://github.com/pupil-labs/hmd-eyes/blob/master/Developer.md#pupil-manager)
Thanks a lot, I found it, forgot to tell you !
It was just troubling because when you do the demo, "pupil_plugin/Calibration" is added to Scenes In Build, not "Shark Demo/2D Calibration Demo".
Hi. I've been using the hmd-eyes plugin (beta v. 1.0) to obtain eye tracking data from Pupil Capture into Unity. Just today I am no longer able to access the stream of eye tracking data, and I get the following error in Unity:
FormatterNotRegisteredException: System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not registered in this resolver. resolver:StandardResolver
Any idea on what is causing this? It was working fine when I last tested my application on Friday, and I am sure I haven't made any changes
Nevermind, turns out I had accidentally set the target platform to iOS rather than PC, Mac, & Linux standalone 😅
@papr I have an issue grabbing the gaze.3d data in my own implementation. I tried the exact setup you guys are using within the GazeListener class, where you subscribe to the event in SubscriptionsController. I tried subscribing to that same event, but I get errors when I do. What is causing this?
@user-a3b085 that depends on the error that you get
TerminatingException: CheckContextTerminated - yes, is terminated.
NetMQ.Core.SocketBase.CheckContextTerminated () (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.Core.SocketBase.GetSocketOption (NetMQ.Core.ZmqSocketOption option) (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.NetMQSocket.GetSocketOption (NetMQ.Core.ZmqSocketOption option) (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.NetMQSelector.Select (NetMQ.NetMQSelector+Item[] items, System.Int32 itemsCount, System.Int64 timeout) (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.NetMQSocket.Poll (NetMQ.PollEvents pollEvents, System.TimeSpan timeout) (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.NetMQSocket.Poll (System.TimeSpan timeout) (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
NetMQ.NetMQSocket.Poll () (at <cd775f2ee43b4afd8cc83c9f5ddee1f4>:0)
PupilLabs.SubscriptionsController+Subscription.UpdateSocket () (at assets/Plugins/Pupil/Scripts/Subscription.cs:62)
PupilLabs.SubscriptionsController.UpdateSubscriptionSockets () (at assets/Plugins/Pupil/Scripts/SubscriptionsController.cs:87)
PupilLabs.SubscriptionsController.Update () (at assets/Plugins/Pupil/Scripts/SubscriptionsController.cs:34)
This is the error I get
I get this error when I add this one line of code subsCtrl.SubscribeTo("gaze.3d", Receive3DGaze); in my own implementation
@fxlange @user-85976f: I remember (and reread) your discussion on the hmd-eyes-alpha channel from ~2 months ago in which @user-85976f was reporting many non-unique data points when logging pupil data in Update(). I'm currently facing the same issue (please see info on my setup below). For convenience, I also log pupil data together with other behavioral data in an Update function. More precisely, I have a data class which updates some of its properties by means of a function which I set as a listener to the "OnReceive3dGaze" event of the GazeListener. (So, as far as I understand, each time the event is fired, my properties should update with fresh info from gazeData.) I then log these properties once per frame from an Update function. However, I'm surprised that there are these duplications in the logged data. Shouldn't OnReceive3dGaze fire approx. 120x/s while Update() is only called 90x/s? So shouldn't it rather be the case that my logging misses some intermediate values (which would be ok for my purposes) rather than picking up the same data point twice? This all sounds pretty much like what @user-85976f described back then. Did you figure out a solution for this... or can you explain to me where I'm going astray? I would be very thankful.
my setup:
- hmd-eyes-alpha v1.0 [I built my project around this release so I don't dare to update to the beta for this one, unfortunately]
- Pupil Labs HTC Vive Add-On with PupilCam1 (120Hz)
- Pupil Capture (v1.12.17)
- Vive Pro
- Unity 2018.3.11
Is there any way to save a user's Pupil Capture calibration data so they do not have to recalibrate the next time the application is launched?
what does "Points" mean in calibration targets?
Hi, I'm planning to use a function like "Close your right eye for 3 seconds to do things." I can probably try using the confidence to do that, but has it been implemented before? If so, it'll be really helpful.
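A minimal Unity sketch of the confidence-plus-timer idea (assumptions: rightEyeConfidence is updated elsewhere from the right eye's pupil data, e.g. the pupil.1 topic; the threshold and all names are illustrative):

using UnityEngine;

public class EyeClosedTrigger : MonoBehaviour
{
    public float rightEyeConfidence;         // update this from your pupil data callback
    public float holdSeconds = 3f;           // how long the eye must stay closed
    public float confidenceThreshold = 0.1f; // confidence drops near zero while the eye is closed

    float closedTimer;

    void Update()
    {
        if (rightEyeConfidence < confidenceThreshold)
        {
            closedTimer += Time.deltaTime;
            if (closedTimer >= holdSeconds)
            {
                closedTimer = 0f;
                Debug.Log("Right eye closed for 3 seconds - do things here.");
            }
        }
        else
        {
            closedTimer = 0f; // eye opened again, reset the countdown
        }
    }
}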
Hello, our system: Pupil.Import.Package.VR.v0.62.unitypackage with Unity 2017.4.0f1 LTS. I'm trying out the heatmap market demo. The recording of the videos/images works, but no particles are drawn. In highlight mode I only get a white image in the video. I don't get any error messages. I have already added the "Calibration Demo" script for testing and I get the eye tracking points, so the data should be available. Also the PupilLabsVR Demo-v1.0-beta.exe works, so it should not have anything to do with the settings in Capture. Another user wrote this to me: "In my case two layers were missing in the scene. The gameobjects "Heatmap", "RenderingCamera" and "Mesh" need a layer called "HeatmapMesh". The objects "MaskingCamera" and "ParticleSystem" need the layer "HeatmapMask". This fixed the problem for me." But I still have the problem described above.
Thanks in advance.
Hi, I would like to have the GazePosition on a surface in real time (not with Pupil Player, because I need this data for some interaction with the scene on the Hololens). Is it possible to do this? Thanks
Hi @user-1ece1e, since you're using it for HoloLens, I suppose you're using v0.62.
I had a similar use case for the HTC Vive. I did it by making a raycast from the head position to the 3D gaze position (in my case, there's a GameObject named ObjectMarker).
I don't know if it's the same in Hololens tho.
Hi @user-82e954, yes, I'm using v0.62.
Oh I see, thanks! Just how did you get the surface coordinates? I didn't find the connection between the surface tracker and the hololens.
For your problem btw, maybe you could use the value of the gaze when it's null (only the case when the user closes his eyes or when the pupil isn't recognized, but with good calibration it should always be recognized) and a countdown timer. Just an idea, I didn't try it.
@user-1ece1e Again, I'm not sure if it's the same in Hololens. If the object has a collider, I think you can use a raycast hit. Something like: if the raycast hits, marker position = hit position. Then add some action based on that. If this is what you want, you can google some more.
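A minimal sketch of that raycast idea (assumptions: head is the head/camera transform and objectMarker sits at the 3D gaze position; all names are illustrative):

RaycastHit hit;
Vector3 gazeDir = (objectMarker.position - head.position).normalized;
if (Physics.Raycast(head.position, gazeDir, out hit))
{
    marker.position = hit.point; // the gazed point on the object's surface (requires a collider)
}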
Thanks for your suggestion btw! But I'm working on something quite different now. :)
@user-82e954 Thanks, but actually that will give me the coordinates in the world coordinate system and I would like to get them in the surface coordinate system, if you see my issue
@user-1ece1e Ah sorry, I thought you meant the coordinate of the object's surface lol
Sorry I don't know much about it then. I only use world coordinate in my VR project. I tried to google it, not really sure, but maybe this is what you mean: https://www.wrld3d.com/unity/latest/docs/types/Wrld.Space.LatLongAltitude/
Not really, but thanks anyway ^^
Well, good luck 😁
@user-a3b085 You might be trying to subscribe before a connection is established? It would be interesting to know why you decided against using the GazeListener and GazeData directly.
@user-6cbd99 points refers to the number of markers per circle. Thanks for pointing this out.
@fxlange Am I able to have two GazeListener instances in a scene?
For example, your Gaze Visualizer prefab uses a local version of GazeListener. Can I have another script run another instance as well? Like, MyScript.cs has a private GazeListener = new GazeListener()
Hi @user-a3b085, yes that's possible, but be aware that it means parsing the full gaze dictionary twice every time. Instead, better to use only one GazeListener and bind your different scripts to the Gaze3d event.
Hi @user-32853a, no it's currently not possible to avoid the calibration step by storing the data.
@user-141bcd We discussed several different issues but I don't recall the cause for logging same data twice in that case. Are you filtering by confidence by any chance?
@fxlange: nope, I'm not filtering. I tried to look further into this and observed the following (maybe you can explain to me what's going on): if I print out the timestamps each time pupil data is parsed (in Subscription.ParseData(..)), I get this:
(UnityTime = Time.time and PupilTime = gazeData["timestamp"])
Looks to me like the data (on different "topics") for multiple (pupil) time points is being received at one (Unity) time point. So whenever I access the parsed data, I will get the one which was parsed last. Is that correct?
I get this exception after performing the 3D calibration and loading my scene. I tried it with hmd-eyes v0.61 and v0.62 and I'm using the hololens. Does anyone know how to solve this problem?
ArgumentException: Destination array is not long enough to copy all the items in the collection. Check array index and length.
System.BitConverter.PutBytes (System.Byte* dst, System.Byte[] src, Int32 start_index, Int32 count) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System/BitConverter.cs:182)
System.BitConverter.ToSingle (System.Byte[] value, Int32 startIndex) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System/BitConverter.cs:276)
UDPCommunication.FloatArrayFromPacket (System.Byte[] data, Int32 offset) (at Assets/pupil_plugin/Scripts/Networking/UDPCommunication.cs:289)
UDPCommunication.InterpreteUDPData (System.Byte[] data) (at Assets/pupil_plugin/Scripts/Networking/UDPCommunication.cs:254)
UDPCommunication+<Listen>c__AnonStorey0.<>m__0 () (at Assets/pupil_plugin/Scripts/Networking/UDPCommunication.cs:145)
UDPCommunication.Update () (at Assets/pupil_plugin/Scripts/Networking/UDPCommunication.cs:188)
Hello! I'm trying to use the plugin for Unity (2019.1.8), but I cannot open PupilGazeTracker in the editor. It shows only the header "pupil labs" and every time I click on it, it produces 1 to 8 identical errors like this: ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
System.ThrowHelper.ThrowArgumentOutOfRangeException (System.ExceptionArgument argument, System.ExceptionResource resource) (at <1f0c1ef1ad524c38bbc5536809c46b48>:0)
System.ThrowHelper.ThrowArgumentOutOfRangeException () (at <1f0c1ef1ad524c38bbc5536809c46b48>:0)
System.Collections.Generic.List`1[T].get_Item (System.Int32 index) (at <1f0c1ef1ad524c38bbc5536809c46b48>:0)
CustomPupilGazeTrackerInspector.OnInspectorGUI () (at Assets/pupil_plugin/Editor/CustomPupilGazeTrackerInspector.cs:102)
UnityEditor.UIElements.InspectorElement+<CreateIMGUIInspectorFromEditor>c__AnonStorey1.<>m__0 () (at C:/buildslave/unity/build/Editor/Mono/Inspector/InspectorElement.cs:462)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)
Could you tell me please why is this happening and how do I solve it?
@papr @fxlange Is there a way to take the gaze direction and translate that into a screen position?
Ultimately I would like to be able to get a vector2 screen position from where I am looking. Is there a solution for this?
What is the field of view of the 120Hz eye camera?
hi @user-38092d - it looks like you are using hmd-eyes v0.62 or older, which only supports Unity 2017.4 LTS. I highly recommend switching to the newest hmd-eyes v1.0 beta release, which supports Unity 2018.3+.
Hi @user-a3b085 - Unity Camera supports all sorts of world-to-screen transformations. It depends on your scenario what's best to use for your 3d world position (to convert into screen):
* combine gaze direction and gaze distance
* gaze direction + some fixed distance
* project the gaze direction vector into your scene (like the GazeVisualizer)
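For example, the first option as a minimal sketch (assumptions: hmd-eyes v1.0 beta GazeData with GazeDirection in local camera space and GazeDistance in meters; adapt to your setup):

Vector2 GazeToScreenPos(Camera cam, Vector3 localGazeDirection, float gazeDistance)
{
    // local camera space -> world space -> screen pixels
    Vector3 worldGazePoint = cam.transform.TransformPoint(localGazeDirection * gazeDistance);
    return cam.WorldToScreenPoint(worldGazePoint);
}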
@user-141bcd Time.time is the time of the current frame, which explains your log results. Better use Time.realtimeSinceStartup.
And for your redundant data logging issue: for logging I would recommend using a buffer/stack for your received gaze data instead of just storing the last sample received. This might not solve your problem, but it gives you better control and helps to debug what's going on.
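A minimal sketch of such a buffer (assumptions: the Receive3DGaze callback runs on the main thread as in hmd-eyes; copy out the fields you need in case GazeData instances are reused; LogSample is a placeholder for your logging call):

// inside your logging MonoBehaviour (needs using System.Collections.Generic;)
readonly List<Vector3> gazeBuffer = new List<Vector3>();

void Receive3DGaze(GazeData gazeData)
{
    gazeBuffer.Add(gazeData.GazeDirection); // copy the fields you want to log
}

void LateUpdate()
{
    foreach (var sample in gazeBuffer)
    {
        LogSample(sample); // placeholder logging call
    }
    gazeBuffer.Clear(); // one batch per rendered frame
}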
@fxlange Thank you!! I will do that
@user-141bcd @fxlange I can check later today, but from memory, I think the issue was that I was not filtering by topic. The binocular gaze topic updates once for each update of the right and left eye (topic = "right" or "left"), so if you don't filter by topic, you'll get a signal with lots of redundant data.
That is, every time the left eye updates, the listener will return a left eye sample and a binocular sample with the same timestamp.
@user-141bcd If you want to see the unique samples in the data you have already recorded, but did not record topic, you can filter to show only unique timestamps.
...this will introduce some additional noise in the track, but not a ton. The noise is because you can't be sure if the remaining gaze vectors with unique timestamps are the binocular or monocular samples.
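In pandas, for example (a sketch; the CSV path and timestamp column name are illustrative):

import pandas as pd

df = pd.read_csv("gaze_log.csv")
unique = df.drop_duplicates(subset="timestamp", keep="first")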
@fxlange thanks for the advice re. Time.time; I thought their difference was only whether or not they're affected by Time.timeScale. The output from above is now more informative. However, it obviously didn't fix the logging issue. (pls see below)
@user-8779ef Thanks for the input. That makes a lot of sense, and I was/am not filtering by topic. However, since I'm (with every OnReceiveGaze event) just updating a property that I read out once per frame, these duplicates (within a frame) should not harm me, as they would just overwrite each other or be overwritten by the next event. My problem seems to be that these values sometimes are not overwritten for more than 1 frame. So I looked more into this by doing what @fxlange suggested. Here is some more data that I cannot make sense of 😑
What I did: with every OnReceive3dGaze event, I save the timestamp from the GazeData (= pupilTime), Time.realtimeSinceStartup (= unityTime), the current frame number (Time.frameCount), and the MappingContext (also from GazeData).
[The duplicates in the unityTime column aren't real duplicates; that's just rounding]
There is no binocular data in there as nobody was wearing the headset while I recorded this.
As I said, I cannot really explain what is happening there. In terms of pupilTime, everything seems to be fine. However, on the Unity side there are these weird "jumps" during which for 1-2 frames no event seems to be fired. This fits with what I observe in my (Update-bound) data logging, as in these "skipped" frames I just log the old data point again (as it isn't overwritten).
@fxlange @user-8779ef do you have an idea what I might be doing wrong? I'm fairly sure that I don't really drop all of these frames, as I also log the number of dropped frames (and that's 0 for most of my trials), and in the profiler it also looks ok (btw, that data was recorded with a build, not from the editor). Could it be problematic that nobody is wearing the headset while I'm recording this? However, I also observed the duplicated values in my logfiles when I recorded while someone was wearing the HMD and the eye tracking data was reasonable.
Thanks a lot for your input/thoughts. It's really appreciated.
Some more info: here I filtered (post-hoc) for 1 eye and calculated the time steps between two consecutive events
Histogram of the unityTimeSteps (ignoring everything that's > 50ms):
hi @user-141bcd - thanks for the detailed report. The pattern you are describing here doesn't mean skipping any frames or losing any data. You are receiving data timewise slightly bundled instead of perfectly distributed over time. The main reason for this is how Pupil Capture itself sends out the data on the main thread. You can switch to Pupil Service to avoid this pattern and have lower latency in general (https://github.com/pupil-labs/hmd-eyes/blob/develop/docs/Developer.md#pupil-capture-or-service).
Hi @fxlange thanks for the quick response. Alright, that sounds like a fitting explanation. I didn't know that Capture works like that. Thanks for the tip with Pupil Service. For the current project, I won't take the effort to shape it to make it work with Pupil Service. As I mentioned, logging the data in Unity is more a question of convenience in this case. I can also run my analyses on the data stored by Capture. But yea, for real online stuff, Pupil Service might be the better choice then. Anyway, thanks a lot!
Assets\Plugins\unity_pupil_plugin_hololens\Assets\HoloToolkit\Utilities\Scripts\Editor\SetIconsWindow.cs(396,50): error CS0117: 'PlayerSettings.WSAImageType' does not contain a definition for 'PhoneSplashScreen'
Is the error I'm getting, any tips on solving it?
Is there currently any way to get HMD-eyes working with the hololens on Unity 2018.4.X?
Currently it is not supported, but that doesn't mean it's not possible. Any contributions are welcome. We will look into it again after hmd-eyes v1.0 is done.
@fxlange I have a few questions, there may be multiple parts that require specific setups and I hope it's not too confusing. I have been developing concurrently with the develop branch, and the task I've designed in Unity is meant to evoke behavioural changes (eye-deficits, event-related changes in the brain - EEG). All my events are timestamped, and for our lab to analyze the relationship between eye-movements and the neural data (and the presented visual stimuli) the timestamped events from all sources must be precise.
There has been an ongoing issue that prevents us from doing that: https://github.com/pupil-labs/pupil/pull/1541 . If you refer to @user-5d12b0's implementation of gaze_mapper.py (requires pupil via source): the GazeVisualizer.cs (or information coming from gazeData.GazeDirection) from hmd-eyes is unreliable (the direction for the raycast clearly does not illustrate where I gaze). As a temporary fix, I've changed the string "gaze.3d" to "gaze.3d.1" in GazeListener.cs (subsCtrl.SubscribeTo("gaze_3d", Receive3DGaze)) and used gazeData.GazeNormal1 to define localGazeDirection, and the gaze becomes reliable enough for experiments. This forces the visualization script to use just the left eye, and the implemented gaze_mapper.py fix can prevent our recorded online pupil stream (we use the pupil LSL relay plugin) from having negative time jumps.
The above is definitely a hack, and I remember in the past there was an easy way to change the dominant eye (Binocular -> Monocular) by just using something like leftGazeRay rather than a version calculated from both eyes. I do not know how to approach the negative time jumps issue and its impact in Unity, but is there a way to switch to monocular (without re-calibrating or breaking immersion) in the event that one eye (or the combination of both eyes) is unreliable?
There might be a lot of back-and-forth going on, so I'm open to discuss this via DM or a separate channel. Thanks!