Hello everyone, I'm trying out the heatmap market demo. The recording of the videos/images works, but no heatmap is drawn. In highlight mode I only get a white image in the video. I have already added the "Calibration Demo" script for testing and I get the eye tracking points. So the data should be available. I don't get any error messages. Any ideas? Thanks in advance.
Hi everyone! I am facing some issues with the world video recording. Firstly, Pupil Capture does not show the Unity scene in the "World" window. Capture does record a video, saved as World.mp4 in the recording folder, but when I import it into Player (or play it after exporting from Player) it does not contain the actual video of the Unity scene. Instead of the Unity scene, it correctly displays the eye positions, but on a solid green background. How can I access the world video of the Unity scene and save it? One important thing to point out is that I start the recording from Capture and not by pressing the R key in the Unity scene (because I am trying to develop that scene and associate it with a functioning script). In Pupil Player, once I load the recording folder, it says "No world video found. Constructing an artificial replacement." Thank you in advance
hi. has anyone successfully gotten the HoloLens to connect to pupil capture? i'm running the capture software on a non-primary interface (in my case, I specify the endpoint as 192.168.144.106:50021 in windows). i'm assuming that Pupil is using multicast because no corresponding IP address seems to be necessary on the HoloLens/UWP side. however, the HoloLens app throws this exception when trying to bind to the socket:
Waiting for a connection...
(Filename: C:\buildslave\unity\build\artifacts/generated/Metro/runtime/DebugBindings.gen.cpp Line: 51)
UDP data head I
(Filename: C:\buildslave\unity\build\artifacts/generated/Metro/runtime/DebugBindings.gen.cpp Line: 51)
UDP data head T
(Filename: C:\buildslave\unity\build\artifacts/generated/Metro/runtime/DebugBindings.gen.cpp Line: 51)
Could not connect, Re-trying in 5 seconds !
(Filename: C:\buildslave\unity\build\artifacts/generated/Metro/runtime/DebugBindings.gen.cpp Line: 51)
'HoloLens Client.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Users\DefaultAccount\AppData\Local\DevelopmentFiles\HoloLensClientVS.Release_x86.Johnny_Mnemonic\System.Diagnostics.StackTrace.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled. COMException: The requested address is not valid in its context.
The requested address is not valid in its context.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at UDPCommunication.<_SendUDPMessage>d__4.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at UDPCommunication.<SendUDPMessage>d__3.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.AsyncMethodBuilderCore.<>c.<ThrowAsync>b__6_0(Object state) at UnityEngine.UnitySynchronizationContext.WorkRequest.Invoke() at UnityEngine.UnitySynchronizationContext.Exec() at UnityEngine.UnitySynchronizationContext.ExecuteTasks() at UnityEngine.UnitySynchronizationContext.$Invoke1(Int64 instance, Int64 args) at UnityEngine.Internal.$MethodUtility.InvokeMethod(Int64 instance, Int64 args, IntPtr method) (Filename: <Unknown> Line: 0)
Any ideas?
Looks like the error is on the send side because of a failure to connect. Maybe the problem is that the address on the windows side is incorrect. What is the format for a non-primary interface address? I'm using IP:PORT on Windows.
hello! I'm working on an unconventional usage of an HMD; from what I've read in the documentation and on the Discord, this should be possible. There seems to be a disconnect in my understanding or setup. Hoping to get some guiding feedback / questions on what I should do next to troubleshoot.
context: - using a custom fabricated headset that has the eye tracker fit into place (we've modeled the connection area on that of the HTC Vive; certainly not perfect but reasonably close) - we aim to use the eye tracker via Pupil Service (to reduce software overhead) to detect gaze on a custom display system that we have already fabricated. For all intents and purposes this display can be considered an array of LEDs.
issue:
- we are able to start the calibration plugin (following hmd_calibration_client.py) and stop the calibration plugin. I see notifications that the calibration is successful and that the data has carried over to zmq. I also see that the new gaze mapper, Dual_Monocular_Gaze_Mapper, has been created. However, when testing with the same normalized points used for calibration, the norm positions I receive from the gaze topic aren't reasonable.
it seems that there is an issue with the mapping? Looking around in a rectangle usually results in points that look like this (left and right displayed). Additionally, sometimes the norm points aren't the same for each eye, but it seems that should be expected based on discussions here. NOTE: in the image I'm filtering for confidence above 0.8
additionally norm points were given in a range of 0 to 1, but i imagine this is a smaller issue...
let me know if there is any other information that I can provide
@papr can you respond to this question?
@user-4c161a Could you please clarify what you mean by "additionally norm points were given in a range of 0 to 1"? Do you mean the reference positions for the hmd calibrations?
@papr ah, I left out a few words. Yes, the reference position points used for hmd calibration were given in a range from 0 to 1. In the graph above I revisited the outer calibration points (doing a 3 by 3 array, so skipping the center), but the norm pos from the gaze topic were outside the bounds of 0 to 1, with a confidence above 0.8
In the above test I believe I created my norm positions incorrectly. The view from my headset is rectangular, where the width is twice the height. I've remapped my norm points to ((0.5, 0.25), (0, 0), (0, 0.25), (0, 0.5), (0.5, 0.5), (1, 0.5), (1, 0.25), (1, 0), (0.5, 0)). Previously they were ((0.5, 0.5), (0, 0), (0, 0.5), (0, 1), (0.5, 1), (1, 1), (1, 0.5), (1, 0), (0.5, 0)).
(0, 0) is the bottom left in both cases. This change seems to have made things worse, though; now points are clustered in one area and do not move.
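For reference, a 3x3 reference-point grid like the ones above (skipping the center) can be generated programmatically. This is only a minimal sketch, under the assumption that the points stay in the full normalized [0, 1] range on both axes and any non-square display aspect is handled separately (e.g. via the frame size passed to the calibration):

```python
def hmd_reference_points(skip_center=True):
    """3x3 grid of normalized calibration reference points.

    (0, 0) is the bottom-left corner. Points stay in [0, 1] on both
    axes; a non-square display aspect would be accounted for
    elsewhere, not by rescaling these points.
    """
    coords = (0.0, 0.5, 1.0)
    points = [(x, y) for x in coords for y in coords]
    if skip_center:
        points.remove((0.5, 0.5))
    return points


print(hmd_reference_points())  # 8 points, center omitted
```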
@user-4c161a could you try not fixing the coordinate system to [0, 1] when plotting the result? I would like to know if the general movement pattern is there and if maybe the scaling is just wrong
@papr the coordinate system isn't fixed when graphing. I played around with this before the image above; I chose that screenshot to give an overall picture
here are some more tests I only did top and bottom for speed-> https://docs.google.com/presentation/d/1Gk8oU6bioMUFXPi7UQAyJCkk3ZhM6Icgg7wtaV_a-cc/edit?usp=sharing
I'm going to work on creating a jupyter notebook for this to make playing around with the dash and data post run a bit easier. Won't be able to test any more until later tonight :sad:
FWIW I did some left/right testing as well and found that the points were slightly to the left or slightly to the right, but had a much larger vertical difference than horizontal (points were tested along the line y=0.25 / 0.5, depending on the ref coords used). I appreciate you poking into this!!!
heyo, I was looking at the data recording format and I was wondering if it gives gaze direction?
Sorry I know this is probably a boneheaded question
It gives PupilPositionX, PupilPositionY, PupilPositionZ; how should these be interpreted?
Hi! I bought a Pupil Labs HoloLens add-on, but the wires to the eye cameras are too delicate. One of them broke. Can I get a replacement connector?
Hello, I'm trying the recording demo, but it's just making an empty folder with today's date
what's up with that?
this is with the alpha
oh shoot i'll go to the alpha channel
Hi guys! I tried importing the Unity package but I am getting errors
" The type 'UsedImplicitlyAttribute' exists in both 'JetBrains.Annotations, Version=10.0.0.0, Culture=neutral, PublicKeyToken=1010a0d8d6380325' and 'UnityEngine.CoreModule, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' "
NEVERMIND that may be related to something else
Ok no it's definitely the plugin, am getting it in multiple files like C:\Program Files\Unity\Hub\Editor\2019.1.1f1\Editor\Data\Resources\PackageManager\BuiltInPackages\com.unity.timeline\Editor\Shortcuts.cs(16,14):
Hi all, is there a method to allow users to not calibrate every time they remove the HMD? Maybe use previous calibration with adjustments based on eye position?
Hello! I have a problem while recording eye-tracking data through Unity: the scene loads slower and slower and Unity crashes. When I stop recording, it returns to normal speed, and when I press record again it eventually slows down again. Any ideas what the problem could be? (I checked the CPU and memory; nothing extreme happening there)
Dear all, I have asked this before but I still encountered it recently. When doing calibration with the sample calibration Unity scene, sometimes the HMD screen turns black but the calibration procedure goes on (and the calibration can still be completed!). That is, I cannot see anything during the calibration procedure, but I can hear the audio notification that the procedure is done.
When it leaves the calibration scene and goes to the application scene, the screen image recovers from totally black.
I'm working with the Vive Pro integration and Unity 2018.2.9; the USB line is plugged into a USB 3.0 port.
@user-734a2b "Hmd-eyes v0.62 only supports Unity 2017.4 LTS "
Can anyone tell me how to record eye tracking data with the hololens?
download the latest pupil labs release from github and run pupil capture from it
Hi there, can anyone help me with setting Pupil Labs up in Unity please?
@here We are pleased to announce the beta release of hmd-eyes v1.0!!
https://github.com/pupil-labs/hmd-eyes/releases/tag/v1.0-beta
@user-d6c80d Please consider checking out the new beta release. Make sure to read the beta readme [1] and documentation [2]
[1] https://github.com/pupil-labs/hmd-eyes/blob/release/beta/README.md [2] https://github.com/pupil-labs/hmd-eyes/blob/release/beta/docs/Developer.md
@papr Thank you. Do I need to run the Pupil Capture exe in the background for the demo scenes to work?
Congrats.
@user-d6c80d Yes, that is correct.
@papr I get the following error when running GazeDemoScene
"FormatterNotRegisteredException: System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not registered in this resolver. resolver:StandardResolver
MessagePack.FormatterResolverExtensions.GetFormatterWithVerify[T] (MessagePack.IFormatterResolver resolver) (at Assets/Plugins/Pupil/3rd-party/MessagePack/IFormatterResolver.cs:35)
MessagePack.MessagePackSerializer.Serialize[T] (T obj, MessagePack.IFormatterResolver resolver) (at Assets/Plugins/Pupil/3rd-party/MessagePack/MessagePackSerializer.cs:64)
MessagePack.MessagePackSerializer.Serialize[T] (T obj) (at Assets/Plugins/Pupil/3rd-party/MessagePack/MessagePackSerializer.cs:55)
PupilLabs.RequestController+Request.SendRequestMessage (System.Collections.Generic.Dictionary`2[TKey,TValue] data) (at Assets/Plugins/Pupil/Scripts/Request.cs:88)
PupilLabs.RequestController.Send (System.Collections.Generic.Dictionary`2[TKey,TValue] dictionary) (at Assets/Plugins/Pupil/Scripts/RequestController.cs:137)
PupilLabs.RequestController.StopPlugin (System.String name) (at Assets/Plugins/Pupil/Scripts/RequestController.cs:176)
PupilLabs.FrameListener.Disable () (at Assets/Plugins/Pupil/Scripts/FrameListener.cs:53)
PupilLabs.RequestController.Disconnect () (at Assets/Plugins/Pupil/Scripts/RequestController.cs:129)
PupilLabs.RequestController.OnDisable () (at Assets/Plugins/Pupil/Scripts/RequestController.cs:70)"
@user-d6c80d I can remember seeing this issue but do not remember how @fxlange was able to fix it. Do you get this error when running the v1 beta release demos?
Also from the docs: Due to an issue with MessagePack, the default project setting for ProjectSettings/Player/Configuration/API Compatibility Level is not supported and needs to be set to .NET 4.x
@papr just fixed it whilst reading https://github.com/pupil-labs/hmd-eyes/blob/release/beta/docs/Developer.md
Fix:
- added Script Debugging in Build Settings
- added VR support for OpenVR and Oculus in XR Settings
- changed ProjectSettings/Player/Configuration/API Compatibility Level = .NET 4.x
@papr How can I check that each eye is getting high confidence (0.8) ?
@user-d6c80d The quickest way is to look at the top left corner of the Capture world window. There are two graphs indicating the current confidence.
ah nice thanks
@papr is it possible to use Pupil Labs with glasses?
@user-d000cb is this also possible if I start a scene in Unity with HoloLens? How can I record then?
@user-d6c80d Technically yes, but you need to be careful regarding the glasses' frame occluding the eye cameras' field of view. Also, the distortion of the glasses might lead to a decrease in accuracy. Therefore, we recommend using contact lenses.
okay cool
@papr For the DataRecording scene, in what format does it record the data? I checked the filepath and the folder is empty.
@user-d6c80d Do you see the R in Capture become red during the recording?
No
The Unity VR plugin allows you to trigger recordings in Pupil Capture (Pupil Service does not support this feature).
i.e. the recording only happens in Capture. If the R in Capture does not turn red, it is actually not recording and something else went wrong.
@papr The R does go red. I thought you were talking about the R in my screenshot above.
Does this record a data file or does it record from the camera attached to the Pupil Labs?
By default, Pupil Capture records the eye camera video feed, as well as the pupil and gaze data.
still not recording
I've detached the front-facing camera as I don't need that
but the "2019_05_16" folder is empty, and "006" doesn't exist
@user-d6c80d What hardware are you using exactly?
Also, it looks like the recording is started and stopped immediately.
@papr i did a short one so you could see all the messages
Could you search for 2019_05_16 in your file explorer? I think, for some reason, the recording path is wrong. Windows paths usually do not have / as path separators, but \
this is the hardware i have
this is the folder, but it's empty
@papr maybe the code to set the recording path is wrong?
Please check, if there is an other folder with the same or similar name on your system.
@papr that's the only folder with that name.
@papr Also, is it possible to record pupil diameter?
@user-d6c80d Pupil diameter is recorded by default as part of the pupil data. Please check if you can set the recording path in Unity. If yes, please correct it to use \ instead of /. Unfortunately, I am not familiar with the hmd-eyes internals, so I am not sure if this is a setup issue or a code issue
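If the separator really is the culprit, the path could be normalized on the client side before it is sent to Capture. A small sketch using only the Python standard library (the recording path below is hypothetical):

```python
from pathlib import PureWindowsPath

# Hypothetical recording path assembled with forward slashes
raw = "D:/UnityProject/Recordings/2019_05_16"

# PureWindowsPath normalizes the separators without touching the filesystem
fixed = str(PureWindowsPath(raw))
print(fixed)  # D:\UnityProject\Recordings\2019_05_16
```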
@papr can you test it on your side and see if you can get it to record please?
@user-d6c80d Sorry, but I do not have the setup at hand to replicate it. Maybe somebody else in here might be willing to give it a try.
@papr Do you know anyone on the dev team who does have the setup?
Hi! Pupil capture is working for me but when I open a unity demo or try to use the unity prefabs it says "not connected". Does anyone know why that might be?
have to run pupil capture too
or pupil service
Sorry, what do you mean?
Shouldn't I be able to open a demo scene and it'll run ?
nope!
it communicates with the server in pupil capture or service
Wow thank you that was silly of me
Works now!
:D glad to help
Hi guys, quick question related to the 1.0-beta version. I'm trying to use it on a project in Unity 2018.1.9f2, but I get this error:
"Assets/Plugins/Pupil/Scripts/GazeVisualizer.cs(125,77): error CS1644: Feature `out variable declaration' cannot be used because it is not part of the C# 6.0 language specification"
@user-e3dae5 The release requires Unity 2018.3 or higher
All right, I'll have to work out how to move my project from 2018.1 to 2018.3 then, thanks!
I wasn't sure that this was the exact reason
It is only a guess to be honest. But since the error is related to the C# version, my hope is that Unity 2018.3 has the correct version.
It does, I tried on Unity 2018.3 with an empty scene and it was OK
@user-3b599d I have the same question. Is it possible to record the scene in Unity with HoloLens, so that a csv file gets created?
@papr did you ever get a chance to look at that Google doc with my situation? I started playing around with hmd_video_frame_size and noticed that this might be the source of my issue. From what I understand (from software-dev, I believe), hmd_video_frame_size should have less of an effect on mapping normalized points because we pass in normalized points anyway? I have also seen that hmd_video_frame_size is unitless, so it shouldn't matter how I measure or determine it as long as it's consistent for both x and y
fwiw the visual field I'm testing is a rectangle, which seems to have been thrown off initially by the default 1000 by 1000 frame size (since the weightings during the mapping would have been distributed evenly)
just wondering if hearing any of this highlights any other factors I should be looking into. While altering the frame size has helped, it is still far from perfect
is there any relationship / other variable I can toggle regarding the size of the calibration points I'm using (in relation to the screen size)?
@papr Do you know anyone else with the setup who can test the recording data feature?
@user-d6c80d I had a similar issue, which I was able to solve. I was trying to record with the DataRecording demo, but it was just making an empty folder named by today's date in the unity project folder. The cause was that I was running Pupil Service instead of Pupil Capture, which I thought was correct because the demo would not connect to Pupil Capture. But the Data Recording demo requires a connection to a running instance of Pupil Capture (mine could not connect because I had accidentally set the wrong Windows firewall settings the first time I ran Pupil Capture).
Oh sorry, I see that this is probably not your issue. Nevermind.
Yeah...
@user-5df199 so it records fine when you run the demo recording scene?
Yes.
I mean, it makes a recording that I can then play in pupil player.
anyone have any ideas on my calibration issue? Open to any suggestions at this point. I've been setting up some infra to start tweaking screen size against a calibration routine already used.
here are some sample tests i've run so far: https://docs.google.com/presentation/d/1Gk8oU6bioMUFXPi7UQAyJCkk3ZhM6Icgg7wtaV_a-cc/edit?usp=sharing
my post is above from 5/10
Hi everyone, does anyone know if it's difficult to put pupil tracking in a Samsung Odyssey as opposed to a Vive Pro?
@user-8f3748 Thanks! I've upgraded the plugin and unity.
However, the black HMD screen problem still exists (the screen turns black but the calibration procedure goes on).
are you using pupil capture or pupil service?
hi @user-d6c80d, as stated by @user-141bcd there is an issue with the recording path (beta release): "In my setup (Windows), I have the Unity project under the path "D:/path/to/my/UnityProject/..." and Pupil Capture under "C:/some/other/folders/pupil_capture.exe". I found the data under "C:/path/to/my/UnityProject/<date>/..."" Please check if this is the case for you as well.
@user-8f3748 I'm using pupil capture
@user-734a2b Sorry if I ask you again, but are you using hmd-eyes v0.62 and Unity LTS 2017.4? In an earlier post you said that you can't get to the application scene. There is no application scene as far as I know in v0.62, but there is one in the alpha/beta. Are you sure you're using v0.62? Or do you mean the Market Scene Demo by "application scene"?
@user-8f3748 I'm using the beta and Unity 2018.4 right now. In the earlier post, the application scene is the Market Scene Demo. Actually, I can always get to the application scene and the calibration procedure works. The only problem is that the HMD screen unexpectedly turns black for seconds during the calibration procedure.
@user-734a2b I'm not sure if 2018.4 LTS works with the beta (probably should though). You could try and update your OpenVR via PackageManager and make sure OpenVR is selected in the XR-Settings (Player Settings -> XR Settings). If OculusVR is selected there as well, then try and deselect OculusVR. Make sure you have your Dependencies setup, NET 4.x Equivalent for Scripting Runtime Version and .NET 4.x for API Compatibility Level.
@user-8f3748 Thanks for your help!! After checking all of the related settings, I finally found that SteamVR had set the "idle time" of my HMD to 5 seconds... that's why the screen turned black when I was not moving, only doing the calibration with my eyes.
Hi all, my research lab has just purchased the Pupil Labs VR/AR eye tracking for the HTC Vive Pro and we've already set it up with our current AR experiment. Most of it seems to work properly; however, we do have a couple of issues: 1) even after having completed calibration, the eye position seems to be shifted upwards ("vertically") by a little bit. 2) we will need to record and save basically every eye parameter of our participants, but we're having issues extracting data about blinks (the demo program for detecting a blink is not working either). Do any of you have any idea why this might be the case? Many thanks for your time
@user-8845ff the blink demo is just for receiving the data in real time. If you want to extract the data after the fact, make a recording with Pupil Capture and open it in Pupil Player.
Hi, does anyone here know the documentation on the calculation of 3D gaze?
@user-82e954
- 3d eye model: http://2013.petmei.org/wp-content/uploads/2013/09/petmei2013_session2_3.pdf
- 3d calibration: bundle adjustment optimization using Ceres as solver
Thank you! Where can I see the calculation implemented in the Pupil Labs scripts?
@user-82e954
- 2d + 3d detectors: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/pupil_detectors
- calibration routines: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/calibration_routines
- gaze mapping: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_routines/gaze_mappers.py
Okay, thank you for the quick reply! These help a lot.
Btw, do you also have a variable for the pupil-size? If so, where can I access it?
@user-82e954 Yes, it is part of the pupil datum. There are the diameter field (unit: image pixels) and the diameter_3d field (unit: millimeters)
I see. That's probably all I need for now. Thanks again.
@papr many thanks for your quick reply and the info. Any idea why we have this vertical shift in the measured eye position even after successful calibration?
@user-8845ff are you using the 2d mode? It is prone to slippage of the hmd. In this case, I would recommend switching to the hmd-eyes beta, which supports 3d mode
@papr thanks again. Yes, we're currently using the 2d mode because all our projects use a Unity version older than 2018.3. Is the beta only supported by 2018.3? Regarding your previous point about blinks: is it possible to access blink data directly via Unity with our hmd-eyes version (the last one before the beta)? We're currently able to get all the gaze info but not the blink info
@user-8845ff yes, it is possible to access blink data in real time. But you are responsible for extracting the data from the msgpack encoded stream. The beta requires 2018.3 or higher, correct.
hi @papr, I've looked into what you shared. I want to manipulate the result of the 3d gaze calculation. I have found the variables I need in the Python code, but since I'm not very familiar with it, I don't know how to connect it with Unity. On the other hand, I tried to trace the source that gives the position of the 3d gaze from Unity, but got stuck in the code in this picture, not really getting to the variables (I wish to see the data in numbers). How can I possibly do that?
@user-82e954 Capture does the actual 3d gaze calculation and publishes it to the network api. hmd-eyes just subscribes to the result. Could you elaborate on what you want to do? Do you need the manipulation to be temporary? Then you can simply change the incoming data in unity. If you want to make long-term changes, you can either change the gaze mapping code in Capture (if you need context of the gaze mapping plugin), or write a custom plugin that manipulates the gaze data before it is recorded/published (if no context is needed). By context, I mean further gaze-mapping-internal information that is not exposed to plugins or the network api.
So manipulating the information before the calculation of the 3d mapping can only be done in Pupil Capture. Noted. For now, I'm trying to learn about the 3d estimation itself, and want to find out which parameters affect the result (3d position) by manually varying the variables (e.g. eye angle/position, pupil size).
I'm now trying to make some changes in the gaze mapping, but since I'm not used to the architecture, I'm still not sure which part affects which. Btw, by "hmd-eyes subscribes to the result", is that done using UpdateGaze()?
@user-82e954 I do not know the hmd-eyes code base, sorry. I cannot tell you if this is the case, regarding UpdateGaze().
Regarding the pipeline: 1. [eye0/1] Receive eye image from camera 2. [eye0/1] Run 2d pupil detection 3. [eye0/1] Use 2d result to build 3d model/map 2d result to existing model -> creates pupil datum 4. [eye0/1] Publish pupil datum to network api (using "pupil" topic) 5. [world] Receive pupil datum 6. [world] Map pupil to gaze datum (eye -> world coordinate system) 7. [world] Pass gaze datum to plugins for higher level processing 8. [world] Publish gaze datum to network api 9. [world] if recording, save gaze data
[X] indicates the process X performing the step. Each process assumes control of a single camera (if available)
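As a rough illustration of consuming the published data in step 8, here is a minimal Python sketch that keeps only gaze-topic datums and drops low-confidence samples (the 0.8 threshold mentioned earlier in this channel). The datum dicts below are simplified stand-ins for the real network payloads, which carry more fields:

```python
def filter_gaze(datums, min_confidence=0.8):
    """Keep only gaze-topic datums with sufficient confidence.

    Each datum is a simplified stand-in for a network-API payload;
    real payloads carry more fields (timestamp, base_data, ...).
    """
    return [
        d for d in datums
        if d["topic"].startswith("gaze") and d["confidence"] >= min_confidence
    ]


sample = [
    {"topic": "gaze.3d.01.", "norm_pos": [0.4, 0.6], "confidence": 0.95},
    {"topic": "gaze.3d.01.", "norm_pos": [1.7, -0.2], "confidence": 0.30},
    {"topic": "pupil.0", "norm_pos": [0.5, 0.5], "confidence": 0.99},
]
kept = filter_gaze(sample)
print(kept)  # only the first datum survives
```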
@papr It seems that the code in the image that I sent is just for calculating relative 3d position.
I read a bit more of PupilDemo.cs, and it seems that it will allow me to subscribe to pupil and access the dictionary that contains the pupil datum from Unity. This might be what I'm looking for. Gonna try it out.
If there's anyone here who has used the script for the same purpose, I'd be happy to receive any help
Btw, Unity does receive gaze data, but from what I see, the dictionary only seems to have position data, which is the processed data. CMIIW.
Just for clarification: Pupil data = pupil position in eye camera coordinates, topic: "pupil" Gaze data = pupil positions that have been mapped to the world coordinate system, topic: "gaze"
@user-82e954 can you check the dictionary's "topic" field and let us know its value?
The dictionary in PupilDemo.cs is made to find topics that start with "pupil". The dictionary in hmd-eyes (seen in PupilData.cs) only contains a string saying the kind of eye object, and EyeData.
That's the only dictionaries I've found.
@papr that is a super valuable clarification holy crow
like ahhhhh
now it makes sense
@user-d000cb Agreed lol
@user-82e954 ok. Modifying pupil data only makes sense within the eye process, since it is the one generating it. Modifying it later in the pipeline will leave you with inconsistencies.
Just for my clarification: your goal is to improve the accuracy of the gaze?
@fxlange Yep, that's the fix. Thank you very much!
@user-82e954 The difficulty here is that there are multiple components of the pipeline that contribute to inaccuracies: 1. 2d detection: ellipse fitting might not be perfect 2. 3d mode: does not incorporate refraction (https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf) 3. gaze mapping: the bundle adjustment compensates for the visual-axis offset (given the 3d model during calibration). When the model changes due to slippage, that compensation is not as good as before anymore
All of these deserve their own careful research.
@papr is it possible to record pupil diameter over time using Pupil Capture?
@user-d6c80d Yes, that happens by default (during a recording)
ah great. which file is it saved in? here i have the recorded files
@user-d6c80d It is contained in the pupil.pldata. The easiest way to access it is to open the recording in Player and to export the data to csv using the Raw Data Exporter
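Once exported with the Raw Data Exporter, the diameter columns can be read with plain Python. A hedged sketch using synthetic stand-in data (real pupil_positions.csv exports contain many more columns; diameter is in image pixels, diameter_3d in millimeters, per the earlier message):

```python
import csv
import io
import statistics

# Synthetic stand-in for a Raw Data Exporter pupil_positions.csv export
export = io.StringIO(
    "pupil_timestamp,eye_id,confidence,diameter,diameter_3d\n"
    "1000.01,0,0.98,52.1,3.1\n"
    "1000.02,0,0.95,53.0,3.2\n"
    "1000.03,0,0.40,20.0,1.0\n"
)

# Drop low-confidence samples before averaging the 3d diameter
rows = [r for r in csv.DictReader(export) if float(r["confidence"]) >= 0.8]
mean_mm = statistics.mean(float(r["diameter_3d"]) for r in rows)
print(f"mean diameter_3d: {mean_mm:.2f} mm")  # 3.15 mm
```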
@papr Nice, thank you very much.
@papr Thank you for the suggestion! 3d gaze estimation is a part of my idea, but increasing the accuracy is not the goal.
Ok, I am off for today. Have a productive day!
Dear all, the Pupil docs say that we can connect the Pupil eye tracking add-on to the hidden USB port of the HTC Vive headset. Does the Pupil eye tracking add-on support a USB Type-C cable, to connect to the hidden USB-C port of the HTC Vive Pro headset? (The HTC Vive Pro doesn't have a USB port on the headset, only a USB-C port.) Or should I connect it to the host PC via a separate USB lane?
Thanks!
my buddy says that plugging it into the hidden USB port instead will affect the frame rate?
but i'm not sure
Hi @papr, I think what I'm trying to achieve is to access raw data from Unity. PupilDemo.cs seems to directly access the dictionary for the pupil data, but I still don't know how to implement the code. Since it's in the developer docs, I'm sure someone has tried to use it. Is there a more detailed guide on how to use it, or someone who can help? Thanks in advance.
@user-82e954 Unfortunately, I personally do not have any experience with deserializing msgpack data in C#.
@papr Do you know anyone who may be able to help? It is part of the pupil_plugin. I will still try by myself, btw.
@user-82e954 it is very unfortunate that you cannot use the new beta release
@papr you mean someone may be able to help if I use hmd-eyes-alpha? I actually just haven't tried it out since I was used to using this one lol
@user-82e954 The hmd-eyes alpha is structurally cleaner and easier to use than the 0.62 version. Its documentation is better as well, as far as I know. Also, we will have to drop support for 0.62 at some point. So, if you are at the start of your project, I highly recommend using the beta.
@papr Okay, I will export my current project to the alpha. I will then ask in the other channel after learning more about it. Thanks for the suggestion.
@papr thanks again for the info. Just to follow up on our discussion: we managed to finish our experiment with hmd-eyes 0.62 (therefore with 2d calibration/tracking). When being very careful to avoid HMD slippage, we get pretty precise tracking, but I was wondering whether the 3d calibration would definitely improve it? If so, why? Connected to this point: by reading your GitHub page regarding the different data types and formats, I've noticed that the pupil size data (diameter_3d) extracted from the 3d tracking is already corrected for perspective, and I was wondering whether I could correct pupil size extracted from the 2d tracking in a similar manner but offline (I know that without a 3d model of the eye this could be difficult, but maybe something can be done with axis and angular info)? In case it's not possible, is the 2d pupil diameter information completely unusable/unreliable in the case of eye movements?
@papr second question: we bought the latest 200 Hz version of the hmd-eye tracking and the fitting seems to be ok, however I was wondering whether it's normal that this little orange wire pops out like that (see attached picture)? If I recall correctly, the previous version of the eye tracking (the one showed in your introductory assembly guide) didn't have it.
the calibration in my hmd-eyes demo is way off; it doesn't follow my gaze properly. Any solutions?
@user-8845ff gaze mapping - 3d mode is usually a bit less accurate than 2d mode, with the advantage of compensating for slippage. If you have slippage-less 2d data, then you are good to go.
pupil diameter - You can rerun the pupil detection in 3d mode using the Offline Pupil Detection feature of Pupil Player, if you recorded the eye videos.
HTC Vive 200Hz add-on - I will come back to you regarding the PCB placement
@user-6cbd99 Which hmd-eyes version do you use?
1.0, I guess, but when I click it, it just opens Unity Hub; I played the demo directly
@user-6cbd99 which of the demos are you trying?
3d_gaze_demo_vr_v1.0-beta
hi, I am running some experiments on the HTC Vive using the binocular eye tracking add-on. Is there a way to determine camera parameters such as the focal length, and camera sensor parameters such as the pixel pitch and principal point values?
Hi, I'm trying to run the gaze demo scene and the gaze rays demo scene in the Unity editor and I'm having issues getting the calibration to work. I run both demos with the Pupil Capture application running in the background. I start the calibration with no issues, but at the end of the calibration, the Unity scene freezes and Pupil Capture ends the eye tracking and stops responding. The calibration seems to also freeze at the very end if I initiate it within Pupil Capture itself. Has anyone experienced this issue?
@papr many thanks for the info. I've tried running the Offline Pupil Detection feature of Pupil Player in 3d mode to compare the results with our current 2d online approach (as you suggested). The process seems pretty straightforward; however, the diameter_3d values that I extract are kind of strange, both because of the differences between the two eyes and because they seem generally too high, considering that the output should be in mm (please correct me if I'm wrong - see attached screenshot)
Linked to this point: I've decided to keep the default parameters for the offline detection, but I was wondering how Pupil min, Pupil max and Pupil intensity range relate to each other and, most importantly, what their scale is (the default 10-100 shouldn't be mm, for physiological reasons)
Sorry, final question related to this topic: we're currently saving a single Pupil Capture folder per experimental trial (so a single '000' folder per trial) and I was wondering whether there is some way to automate the process of importing, doing the offline 3d pupil detection, and exporting the results, instead of doing it manually?