hey guys, I was wondering whether I could use the add-ons to capture a clear eye image in low light conditions (when users are wearing the HMD)? Thanks for your help!
@fxlange Has the team come to any conclusions on whether you want to make add-ons for the Oculus Quest and Valve Index? And if you do decide to make them, what would you guess is the earliest they could be ready?
Hi~ I'm using "unity_pupil_plugin_vr" project and recording the playing windows via FramePublishing class. And when I trying record there is some problem what It cannot record UI things in "Canvas" but only recording eye0 and eye1 textures. Is there any solution for record with UI?
@user-09f6c7 the solution is to change the Canvas Render Mode to World Space...
Hello, I am trying to run the 'Pupil Labs VR Demo - v1.0.exe' hmd-eyes program, but it says not connected. I have Windows 10 x64 and all the required dependencies installed per the guides. I can see both cameras in Pupil Capture. Am I missing a step after opening Pupil Capture and then starting the Unity exe example program? Thank you!
The default port was in use by something else. I am able to get the unity project to work. Thanks!
@fxlange is calibration data stored by hmd-eyes, and can it be reused for offline gaze detection?
Is anyone aware of a way to use hmd-eyes with other VR engines like babylon.js?
@user-141bcd - yes, in Pupil Player you can switch from showing recorded gaze data (Gaze From Recording) to offline gaze detection (Gaze From Offline Calibration), which uses the recorded calibration data stored as part of the recording.
hi @user-c43af3, hmd-eyes is a plugin built specifically for Unity, but the underlying open network API used by hmd-eyes is provided by Pupil Capture and Pupil Service and can be used for remote control and realtime data access: https://docs.pupil-labs.com/developer/core/network-api/#network-api. The only dependencies are msgpack and zeromq, both of which are available in JS, for example.
besides hmd-eyes, I'm only aware of Vizard https://www.worldviz.com/vizard-virtual-reality-software supporting high-level communication with Pupil Capture/Service
@fxl okay. That's pretty much what I expected... Do you have any information about calibration in a VR scene? I just received the Pupil Core and I've tried to test the calibration features with a standard worldcam/eye0/eye1 setup, but even that didn't work.
@fxlange is there a suggested way to export arbitrary text data to file in a manner that is time-synchronized with the Pupil Capture recording?
This data could encode object positions, transformation matrices, flags describing the world state, etc.
Annotations?
hi @user-8779ef - yes, annotations (= timestamped labels) might be what you are looking for: https://docs.pupil-labs.com/core/software/pupil-capture/#annotations
so far hmd-eyes has no high level support for remote annotations but it is on our list.
Hello, I have another problem. I build an application in Unity and run it on the HoloLens. To share the view while playing the app, I turn on the HoloLens dev portal live stream, and when I start the Pupil calibration, the calibration UI is much bigger than in the Unity Editor, so the calibration cannot succeed. Why is that? Is there a solution?
@fxl I found a way to send annotations by simply making custom methods in Request (based on SendRequestMessage but using the "topic" value instead of the "subject" as the first frame) and RequestController (based on Send). I am then sending a Dictionary with {"topic", "annotation"} as the first key-value pair. It now works perfectly. Thanks for the hints!
Hi @user-fc194a, great that you made it work, and thanks for pointing out the issue in the documentation. We will fix it asap.
@user-8779ef & @user-fc194a remote annotations also support custom data (not just the label). basically all primitive data types (supported by msgpack - ideally flat and not nested). pupil player won't show the custom data but export them. for example you could add the camera/head pose coords to your annotations and send it every frame. in the case of a vec3, for example, it is recommended to add x,y,z separately instead of a float[].
I have to admit that I'm not incredibly familiar with msgpack. I send this as a key/value pair, so that the numerical data retains a label?
as a dict, perhaps?
the annotation is basically already a dictionary, with the required keys. on top you can add your data as additional key-value pairs, yes.
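In case it helps, here is a minimal sketch of what such an annotation dictionary could look like with custom per-frame data. This assumes you already have a custom send method along the lines described above that msgpack-serializes the dictionary and publishes it; the helper name here is made up:

```csharp
// Minimal sketch of an annotation payload with custom per-frame data.
// Assumes a custom send method (as described above) that serializes this
// dictionary via msgpack and publishes it under the "annotation" topic.
using System.Collections.Generic;
using UnityEngine;

public static class AnnotationPayload
{
    public static Dictionary<string, object> HeadPose(string label, double pupilTimestamp, Vector3 pos)
    {
        return new Dictionary<string, object>
        {
            { "topic", "annotation" },       // required: routes the message to the annotation plugin
            { "label", label },              // shown in Pupil Capture/Player
            { "timestamp", pupilTimestamp }, // pupil time (double), not Unity time
            { "duration", 0.0 },
            // custom data as flat primitives - x/y/z separately instead of a float[]
            { "head_x", pos.x },
            { "head_y", pos.y },
            { "head_z", pos.z }
        };
    }
}
```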
Ok, thanks. For write-out, it seems it would be handy to implement a helper script that one can drop on a game object, with radio buttons that allow you to select what to write out. For example...
"Position" "Matrix" "Albedo, ... who knows what else.
"Collision information?"
Hit locations etc.
working on an example today for v1.1, on top of the PR by @user-141bcd which we just merged. but probably not as feature rich as your proposal 🙂
Hehe. Well, thanks for working on it, anyhow. Good timing.
Also, without playing more, I can't anticipate what kind of "scheduling" issues might crop up. It would be nice if everything were written out on Update()
...but, you can imagine situations where folks want more flexibility. This requires some thought.
What kind of performance hit do we get when sending annotations? (I don't expect a single or a few annotations will have much impact but how about one or many on each Update? Or different sizes of dictionary?) How would you go about to test this? I am designing smooth pursuit tests with different paths so recording the position of the target(s) is essential. I am wondering if storing it to send it at the end is a better alternative than a call on each update.
@user-fc194a Sending data via a pubsocket is very cheap, once/many* per frame is definitely not an issue. For example the Screencast component sends full screen captures as raw byte arrays up to every frame.
(*many in a reasonable way)
@user-fc194a one additional note: because of the framerate difference between capture and your VR application, in theory the pubsocket might have to drop data frames. I can't tell you how big a pubsocket buffer is and it is probably zmq implementation dependent. so just keep it reasonable 🙂
@user-8779ef back to your original question (cc @user-fc194a): besides annotations you can always store data via Unity directly and keep it time-synced with Pupil Capture recordings by converting Unity time into pupil timestamps via the TimeSync component (https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#time-conversion). Pupil Player will also export annotations as a .csv file.
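To make the idea concrete, here is a minimal logging sketch. It assumes the hmd-eyes TimeSync component exposes a conversion method along the lines of ConvertToPupilTime and that the components live in a PupilLabs namespace - please check the linked docs for the exact names in your version:

```csharp
// Minimal sketch: write one CSV row per Update() with Unity time, the converted
// pupil timestamp and this object's position. TimeSync.ConvertToPupilTime and
// the PupilLabs namespace are assumptions - verify against the docs above.
using System.IO;
using UnityEngine;
using PupilLabs;

public class SimpleDataLogger : MonoBehaviour
{
    public TimeSync timeSync; // hmd-eyes TimeSync component in the scene
    StreamWriter writer;

    void Start()
    {
        writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "log.csv"));
        writer.WriteLine("unityTime,pupilTime,posX,posY,posZ");
    }

    void Update()
    {
        float unityTime = Time.realtimeSinceStartup;
        double pupilTime = timeSync.ConvertToPupilTime(unityTime); // assumed API
        Vector3 p = transform.position;
        writer.WriteLine($"{unityTime},{pupilTime},{p.x},{p.y},{p.z}");
    }

    void OnDestroy() { writer?.Dispose(); }
}
```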
Alright! Thanks, fxl.
[email removed] When switching away from a scene with some pupil connection objects in it
...and then back, I see the connection objects double.
I assume that this is because you have used 'DontDestroyOnLoad', but don't check for doubles. So, every time I return to the scene, it loads a new pupil connection object.
See the Unity example here: https://docs.unity3d.com/ScriptReference/Object.DontDestroyOnLoad.html
Hi, I don't have an answer yet, so I'll ask again. If I run the app built with Unity on the HoloLens, open the HoloLens dev portal on the connected PC, and turn on the live stream, the calibration UI turns into a picture. Have you ever seen anything like this? What should I do?
@fxlange For what it's worth, I am having trouble understanding the developer notes for time conversion.
I'm about to start playing around with the new TimeSync utility. I'll let you know if I have any suggestions.
Hi! I'm trying to get accuracy information via Pupil Capture, but I can't figure out how to use it. What is supposed to appear in the "Pupil Capture - World" window? For me it is just a grey screen.
I'm now realising this window is for a third "world" camera that I don't have, since this is a VR add-on. However, is there any other way to get accuracy information from the two eye cameras without this "world camera"?
Hi folks! I am new here. We have an HTC Vive, and we are trying to get a Pupil Labs add-on for the headset. I wonder if it's possible to write our own analytics plugin to extract features from the raw eye data from Pupil Service and report/display the statistics through Unity on our VR headset in realtime :)? Any suggestion on the documentation or projects I should look at? Much appreciated!
Hi @user-d77d0f - you can access all kinds of data in realtime via the network api https://docs.pupil-labs.com/developer/core/network-api/#network-api.
For VR I would recommend using our Unity plugin hmd-eyes https://github.com/pupil-labs/hmd-eyes. Please check out the readme and developer docs for how to get started: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#getting-started
Hi @user-3cf418 - is your analytics plugin running in 1⃣ Pupil Capture/Service or in 2⃣ Unity? For 2⃣ you can just use hmd-eyes to prepare the data (gaze/pupil/...) for your Unity plugin and take it from there, but if I understand your question correctly that is not what you are doing 🙂 So for 1⃣ you could use the same network API mentioned above to publish your results (https://docs.pupil-labs.com/developer/core/network-api/#writing-to-the-ipc-backbone) and receive them in Unity via hmd-eyes (https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#low-level-data-acess).
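For the Unity side, a rough sketch of what receiving such a custom topic could look like via the hmd-eyes low-level access. The SubscriptionsController method names and callback signature here are assumptions based on the v1.x developer docs linked above, and "my_analytics" is a made-up topic:

```csharp
// Rough sketch: subscribe to a custom topic published by your own
// Capture/Service plugin and read the msgpack payload as a dictionary.
// SubscribeTo/UnsubscribeFrom and the callback signature are assumed from the
// hmd-eyes developer docs and may differ in your version.
using System.Collections.Generic;
using UnityEngine;
using PupilLabs;

public class CustomTopicListener : MonoBehaviour
{
    public SubscriptionsController subsCtrl; // hmd-eyes subscriptions controller
    const string Topic = "my_analytics";     // hypothetical topic name

    void OnEnable()  { subsCtrl.SubscribeTo(Topic, ReceiveData); }
    void OnDisable() { subsCtrl.UnsubscribeFrom(Topic, ReceiveData); }

    void ReceiveData(string topic, Dictionary<string, object> data, byte[] thirdFrame = null)
    {
        // the msgpack payload arrives as a dictionary; pull out your own fields here
        Debug.Log($"{topic}: {data.Count} fields received");
    }
}
```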
@fxlange Hi, thank you, and that helps a lot! I planned to develop and run it in the Pupil Service, and I will check out the API 🙂
Has anybody tried putting the Vive Add-on into the Vive Cosmos yet? https://www.vive.com/de/product/cosmos/ Looks like it could fit without modifications...!
Hey @fxlange , trying to align data exported from Unity to a CSV with the timestamps provided by pupil player. Having a bit of trouble.
Here's a frame from pupil player.
now, I just want to sanity check. I wrote out a row of data for each frame of the Unity update(). Each row of data in this CSV includes the pupil labs timestamp. Now, I should be able to search through my rows for the timestamp shown in that video, right?
...and that row of gaze data should align with the video frame.
From the screenshot, that's a plTime of 100.905, so this frame should be approximately "np.where(dataFrame['plTimestamp']>100.9)[0][0]"
Hi all, does anyone know how to analyze/visualize number of blinks?
Is it correct to treat all the data that has confidence 0 as blinks?
Hi @user-0eef61 - Pupil Capture offers a blink detection plugin which you can access via hmd-eyes: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#blink https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#low-level-data-acess
@fxlange Thanks for your reply. Is that useful now that I have already saved all the data from the eye tracker?
At the moment, I have saved the data and I want to make some analysis
My question is: what data gives information about blinks? For example, in pupil_positions.csv or gaze_positions.csv? I read that a drop in confidence can be taken as a blink, but I am not sure.
@user-0eef61 the blink detection plugin is also available in Pupil Player, so you could run it offline on your recordings.
But yes, blink detection is confidence based and works with an onset and offset threshold. Use pupil_positions.csv if you want to analyze both eyes separately, otherwise gaze_positions.csv would work as well (for your custom solution).
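If you want to roll your own confidence-based detection on the exported CSVs, a naive illustration of the onset/offset-threshold idea could look like this (this is not the actual Pupil Labs detector, and the threshold values are placeholders):

```csharp
// Illustration only - not the Pupil Labs implementation. A single pass over
// (timestamp, confidence) samples: a blink starts when confidence drops below
// the onset threshold and ends when it recovers above the offset threshold.
using System.Collections.Generic;

public static class NaiveBlinkDetector
{
    public static List<(double start, double end)> Detect(
        IList<(double t, double conf)> samples,
        double onset = 0.3, double offset = 0.5)
    {
        var blinks = new List<(double, double)>();
        bool inBlink = false;
        double start = 0;

        foreach (var (t, conf) in samples)
        {
            if (!inBlink && conf < onset) { inBlink = true; start = t; }
            else if (inBlink && conf > offset) { inBlink = false; blinks.Add((start, t)); }
        }
        return blinks;
    }
}
```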
Thanks for your feedback @user-8779ef, you are right that the scene switch demo is not set up to handle back-and-forth scene switches. The intention is to showcase how to maintain tracking while switching the scene, but you probably have to adapt it to your use case. Still, it's probably better to check that the object doesn't already exist (as in the Unity example).
Btw. Pull Requests are very welcome 🙂
Regarding synchronizing your data: yes, that sounds about right. Matching timestamps can be more complex than a simple ">" comparison (also due to latency and depending on your use case), but as long as you convert Unity Time.realtimeSinceStartup to pupil time via the TimeSync component, you should be able to find pairs this way. (Be aware that TimeSync uses double instead of float for pupil time.)
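A small, hmd-eyes-agnostic sketch of the matching step: given the logged pupil timestamps (sorted, stored as double as noted above), find the sample closest to a target timestamp taken from Pupil Player:

```csharp
// Sketch: binary-search a sorted array of logged pupil timestamps for the
// sample closest to a target timestamp. Nothing Pupil-specific here.
using System;

public static class TimestampMatcher
{
    public static int ClosestIndex(double[] sortedTimestamps, double target)
    {
        int i = Array.BinarySearch(sortedTimestamps, target);
        if (i >= 0) return i;                       // exact match
        i = ~i;                                     // insertion point
        if (i == 0) return 0;
        if (i == sortedTimestamps.Length) return i - 1;
        // return whichever neighbour is nearer to the target
        return (target - sortedTimestamps[i - 1] <= sortedTimestamps[i] - target) ? i - 1 : i;
    }
}
```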
Is the add-on used in the Vive the same as the one for the Vive Pro?
Hi, I've been using the inserts with my HTC Vive, but while demoing it to different people I encountered different issues:
The image from the camera was either blurry or focused depending on the user's face geometry, which meant pupil tracking was not always reliable. Is there any chance you will upgrade the insert with cameras similar to the 200Hz ones on the Pupil Core, as they don't need to be focused?
The camera image mostly had the cheek in view with the eye near the top, meaning the lower eyelid partly covered the pupil and interfered with tracking. Playing with the ROI and the algorithm parameters did help, but not in all cases. Is there a certain way to set the Vive on someone's face to ensure the cameras are positioned correctly? Or are you considering another placement of the cameras on the inserts?
The heat generated by the cameras can be quite hard to cope with, especially in a closed enclosure. While the cameras on the Pupil Core also heated up a lot, there was greater airflow around them. The heat generated by the 200 Hz cameras is rather low I believe, so having them on the inserts would be quite beneficial :D
Thank you for the help,
@user-bd800a Out of curiosity, are you using the 120 Hz cameras? They have updated their integration once in the past year or so, to cameras that I believe can run at 200 Hz.
Check the webpage to see if your inserts look like the most recent update, and if the camera specs match.
@user-bd800a - @user-8779ef is correct. The most recent versions of the Vive/Vive Pro add-ons (Since around March 2019) are 200Hz eye cameras. If you want to get the 200Hz version, or to discuss sales related topics please email sales@pupil-labs.com
Ah okay, I think the inserts I have are relatively ancient indeed. Thank you!
Hi, I'm using the Vive Pro inserts and Pupil Service with Unity. I noticed recently that while the x and y values from GazeDirection were very good, the z values needed a transformation of "1 - [val]" to get the value we were expecting. Is there an offset related to this? Is this a known issue? The transformation always uses the value of exactly 1.
Hi, whenever I try to run the ScreenCast demo, Pupil Capture seems to stop working and shows no response. What could be the problem?
Hi @user-7ba0fb, which version of Pupil Capture are you using? Running on Windows, I suppose? Could you please send me (via DM) your <user path>/pupil_capture_settings/capture.log after reproducing the crash.
Hi @user-2693a7, very strange. The GazeDirection is normalized and should always point in the direction you are facing. We are actually filtering it to make sure it does. Are you using the default Unity camera or a camera from some SDK (SteamVR)? Could it be that the transform used as reference for the GazeVisualization (or your custom script) is facing backwards?
@fxlange sure, thanks
@fxlange I've been playing with hmd-eyes 1.0. Great job, buddy. This thing is slick. Huge improvement.
Really looking forward to the use of annotations.
...and a longer-term request. It would be great if player allowed you to import arbitrary time-series data (like a pursuit velocity signal) and represent it there. I know that there is already some limited functionality there, but for it to be useful we really need to be able to add the data as an inset (or even pop-out window) and manipulate the axis range.
It is VERY powerful for a researcher to be able to export data, filter and compute new signals, import the data, and then compare them to the screencast.
Computed signals are almost illusory, and require constant sanity checks. This would provide those checks.
Also, @papr , since he's the pupil player guy 🙂
@user-8779ef This feature request already exists for a while 😉 https://github.com/pupil-labs/pupil/issues/1048
We still agree that it is a good idea but unfortunately we did not have the resources to implement it yet.
Haha, well, whoever suggested that was very forward-thinking ;P
Thanks, and keep up the good work. We've been very impressed by everything lately.
I added the “screencast camera” component to my own Unity project as a child object of my VR camera, but I found that the origin transform of the “screencast camera” was not the same as my VR camera's. I tried to change its transform manually to be consistent with my VR camera, but the two cameras were still not in the same position (only close to each other) in the scene. In the Capture window, I can see the view of the “screencast camera”, and when I move my VR headset, it follows. Also, I failed to run the data record demo: whenever I press R, Unity shows that recording did not start, while Capture shows “recording done”, but it actually didn't record anything.
Hey, @fxlange , does a 'pupil time' of zero coincide with the start of the recording, or the start of the server?
Hi @user-7ba0fb - regarding the recording, that's a bug which is fixed on the develop branch but not yet released. It is an easy fix though: there is a conflict between the RecorderComponent and the Recorder demo, as both are listening for "R". You can just remove this behavior from the demo script (better not from the component).
Regarding the screencast cam: when adding it as a child of your VR cam, the screencast cam transform should be 0,0,0 without any rotations.
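Purely as an illustration (the field names here are placeholders), a tiny helper that zeroes the local transform after parenting could look like this:

```csharp
// Illustrative helper: parent the screencast camera to the VR camera and zero
// out its local position/rotation so both cameras share the same origin.
using UnityEngine;

public class AlignScreencastCam : MonoBehaviour
{
    public Transform vrCam;          // your VR camera
    public Transform screencastCam;  // the screencast camera object

    void Start()
    {
        screencastCam.SetParent(vrCam, false);      // keep local transform values
        screencastCam.localPosition = Vector3.zero; // no positional offset
        screencastCam.localRotation = Quaternion.identity;
    }
}
```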
hi @user-8779ef - thanks for your feedback yesterday 🙂 Annotations are also done on the develop branch and to be released very soon.
Pupil time of zero shouldn't match either of the two. If I'm not mistaken, it is based on your OS clock.
Ah...oK. Here's what I'm trying to do - maybe you have an idea. I have a pupil recording. While recording the pupil video with screencast, I started logging my own data to CSV. I recorded the Unity time and Pupil time into each entry of my log.
@user-7ba0fb I don't know if that helps with your issue, but for me the camera Prefab in the Prefab folder didn't work. When I copied the Camera from the screencast demo it worked. (If I remember correctly, the physical camera settings on the prefab weren't set up in a way that would work - at least in my scene.)
Now, I would like to use the pupil video to find a frame of interest and then inspect the corresponding data saved in my log file. The issue is that Pupil Player does not show Unity time, and the timestamp that it shows has no relation to the pupil timestamp associated with gaze data.
So, any ideas of how can I use Pupil Player to find a corresponding frame of data in my log?
@user-141bcd thanks for your kind reply. I just tried what @fxlange has told me and I solved my problem now.
@user-8779ef I see, because the pupil player only shows a relative timestamp.
Yep.
The best I can do is to initiate my logging at the same time as the pupil recording. This way, I can calculate time relative to the first frame (time on frame f = pupilTime(f) - pupilTime(0)).
A bit restrictive.
In that case take the timestamp of your first sample of your recording + your relative timestamp of interest -> absolute timestamp.
How do I recover the timestamp of my recording's first frame?
By exporting via pupil player. https://docs.pupil-labs.com/core/software/pupil-player/#export
or, is it in pupil.timestamps.npy?
or gaze.timestamps.npy?
Better the pupil timestamps, but I'm not familiar with the *.npy files and would recommend exporting as described above.
@user-141bcd thanks for the input. I'll check the screencast prefab before the next release.
So, the export creates a csv/spreadsheet that has the timestamps in it then? It has been a while, and I forget the contents of the export 😛
Yes: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv should be what you are looking for.
Thanks!
fwiw, it looks like 'numpy.load('data/2019-10-24-15-26/pupil_timestamps.npy')[0]' will do the same thing
Of course, all of this will be moot once annotations are enabled and reliable. Looking forward to that
Yes, but keep in mind that sending a lot of data as annotations at a high frequency is not the intended use case for annotations. For example, I noticed that Pupil Capture gets pretty busy logging all the data on screen when sending 120 annotations per second. Not sure if that could become an issue.
Ah, interesting.
"logging data on the screen" - are annotations rendered over the scene image?
Pupil Capture will display them on top of the scene image - a short version of it, label and timestamp if I remember correctly. Same in Pupil Player if the annotations plugin is running.
Ok. So that is not surprising, then.
I suggest another similar but more minimalistic function via msgpack - datalogging straight to file.
They are logged directly but on top of that displayed on screen.
How about a single bool with the annotation call to turn off rendering?
Great suggestion 🙂 @papr
I imagine that would speed it up quite a bit. As for the Unity-to-analysis pipeline, it would be nice if in Unity we had a simple helper script that could be dropped on any game object, with options for "export matrix" or "export position"... and perhaps "export hit upon collision", etc. What we feel is important enough to be included will likely expand over time.
But let's move the annotation handling in pupil capture to 💻 software-dev
Ah, Ok, will do.
Thanks!
HTC Vive add-on connection problem
Hi guys, any idea how I can solve this problem?
Very likely an IP mismatch. Make sure the IP address in Pupil Capture matches the one in Unity.
In Capture, I think it's in the remote settings. In Unity, it's on the Connection game object.
hi,does anyone have ideas on generating heat maps in the VR environment(unity) to show distributions of fixations?
@user-0cedd9 What do you see in the Device Manager? All cameras (Pupil Cam *) should be listed under the libUSBK category. Please check and let us know. If cameras can be seen in Windows Camera App, then they are likely in the Cameras section of device manager instead of the libUSBK section. Please could you run Pupil Capture again as admin, to re-initiate the driver install process.
Does the algorithm work if I put two cameras under the eye or is there a specific location I have to put the camera?
@fxlange Just gave my first workshop in which I explicitly recommended folks use Pupil Labs for VR. 30 or so attendees @ Giessen.
(Previously I just said it was still a bit of a work in progress). I was also very impressed by the PL HMD hardware version 2. Did a live data capture, and it went flawlessly.
Hi~ I'm using the 'unity_pupil_plugin_vr' code for my Unity project, and when I run it in the editor there are no problems with FramePublishing and Recording.
But when I build the exe file and run it, a problem emerges: FramePublishing and Recording are not working...
I'm still debugging but can't find a good clue yet. The 'thirdFrame' argument is null-checked. Has anybody had the same problem?
@user-94ac2a can you clarify please? You're seeking to have two cameras per eye? Or you're asking about the location of 1 eye camera relative to the eye? If it is about the location of a single eye camera, then the answer is that the position can be changed, but it is important that you have a good view of all eye movements within the image frame and that you do not move the eye camera too far away from the eye.
@wrp I am referring to one eye camera relative to each eye. How do I determine what is too far? If I put it too close, the camera will be out of focus, I think?
@user-94ac2a in most VR HMDs I don't think you will ever have the problem of "too far" come to think of it.
@wrp thanks.
Does the image need to include the entire eye, or is only the pupil itself important for tracking?
Ensure that the pupil is visible within all ranges of eye movement
hi @user-09f6c7 - looks like you are using an older version of hmd-eyes. Depending on your project status please consider upgrading to v1.0. Your issue could be caused by a missing shader (the one needed for displaying the eye images). You should get a warning when running the build (assuming you run a dev build).
Please try
"Edit\Project Settings\Graphics" -> "Always Included Shaders" -> Add "Unlit/Texture"
hi @user-7ba0fb - for any sort of heatmap/scan path you need to intersect gaze data with your VR environment. based on that you can accumulate intersections and apply general heatmap logic/visualizations
the following links should help you get started: * https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space * https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#default-gaze-visualizer
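As a very rough starting point for the "accumulate intersections" part (how you obtain the local gaze direction is assumed to follow the world-space mapping doc linked above; the component and method names here are made up):

```csharp
// Rough sketch: cast the current gaze direction from the VR camera into the
// scene and collect world-space hit points; a heatmap texture can later be
// built from the accumulated points.
using System.Collections.Generic;
using UnityEngine;

public class GazeHitAccumulator : MonoBehaviour
{
    public Camera vrCamera;
    public List<Vector3> hits = new List<Vector3>();

    // Call this with the latest local-space gaze direction (e.g. from your gaze listener).
    public void OnGazeDirection(Vector3 localGazeDirection)
    {
        Vector3 origin = vrCamera.transform.position;
        Vector3 dir = vrCamera.transform.TransformDirection(localGazeDirection);
        if (Physics.Raycast(origin, dir, out RaycastHit hit, 100f))
        {
            hits.Add(hit.point); // accumulate intersections for the heatmap
        }
    }
}
```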
Can I run the exported Unity exe on a PC screen instead of the HMD?
@wrp I can confirm that the Pupil Labs Vive add-ons fit into the Vive Cosmos. They are not as tight a fit as in the Pro, but they will stay in place. Nevertheless, the Cosmos is crap. Good display and comfortable to wear, but the tracking is poor and one has to use additional software on top of SteamVR. Sucks!
I didn't expect it, but there is also a spare USB port for the cameras... 👍🏻
When I run the calibration in Unity, it seems to lock up Capture as it tries to run the calibration. I ran Pupil Capture as admin and that seems to have fixed it. Putting this here in case someone else has the issue.
@fxl ok. I'll try it that way, thanks for the response 🖖
@fxlange The gray texture problem is solved with "Unlit/Texture"... And I checked out hmd-eyes v1.0, which has an impressively improved yet simplified structure - it's amazing! Thank you!
That's great to hear @user-09f6c7, thanks for the feedback.
@user-29e10a Good to know, thanks a lot for sharing.
@user-94ac2a normally the Unity VR demo would run on the HMD and the PC screen. But what is your use-case exactly, tracking eyes in the HMD or of someone in front of the PC screen?