Hi! I have been developing a plugin for the past few months and I would like to know if I can give it to you so you can add it to Pupil Capture (if you are interested). Let me know if you need more details.
Hi @user-94f759, what does your plugin do? We currently plan not to extend Pupil's feature scope much further in the near future, but we can add your plugin to the Pupil Community repository where we collect plugins and scripts from the community :) https://github.com/pupil-labs/pupil-community
My plugin adds recognition of the focused object with Yolo. I would be happy to make it available to the community!
@user-94f759 if you feel comfortable, you can just create a pull request to the community repo, adding a link to your plugin and a brief description (see the other entries for reference). Otherwise you can give us the link and description and we can add it in ourselves!
Ok thanks I'll contact you when I'm ready
Hi, is it possible to plot a real-time graph of pupil diameter in Pupil Capture, rather like what is shown in Pupil Player?
@user-a82c20 This option was removed in Pupil v2.0. It was very limited though.
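If you need it, you can rebuild something similar outside of Capture by subscribing to the pupil data stream via the Network API. A rough, untested sketch (assumes Capture is running locally with Pupil Remote on its default port 50020):
```python
import zmq
import msgpack
import matplotlib.pyplot as plt

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to pupil data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

plt.ion()
timestamps, diameters = [], []
line, = plt.plot([], [])

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    timestamps.append(datum["timestamp"])
    diameters.append(datum["diameter"])  # in pixels; use "diameter_3d" for mm
    line.set_data(timestamps, diameters)
    plt.gca().relim()
    plt.gca().autoscale_view()
    plt.pause(0.001)
```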
Howdy! Is there a built-in way to change Pupil Capture settings (i.e. select a calibration choreography or set the recording session name) programmatically? For example, a file with startup settings, or built-in notification types I could use on the IPC Backbone?
@user-c5fb8b my plugin is ready, here is the link to download the files
@user-6bc565 yes, it is possible via the Network API. The recording session name can be set via the `R <name>` Pupil Remote command, and the calibration choreography via a `start_plugin` notification.
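A minimal sketch of both (default Pupil Remote port assumed; the choreography class name is an example, please check the source for the exact name):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Set the session name and start a recording: "R <name>"
pupil_remote.send_string("R my_session_name")
print(pupil_remote.recv_string())

# Select a calibration choreography via a start_plugin notification
notification = {
    "subject": "start_plugin",
    "name": "ScreenMarkerChoreographyPlugin",  # example name, verify in the source
    "args": {},
}
payload = msgpack.dumps(notification, use_bin_type=True)
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(payload)
print(pupil_remote.recv_string())
```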
@user-94f759 awesome, can you give us a brief 1-sentence description of what your plugin does?
It detects objects in the main scene and shows which one you are looking at.
There is also the possibility of streaming the video, and it sends data to a topic "objects" on the IPC Backbone.
@papr thanks! Is there somewhere I can find a list of the notifications that exist? The Network API page on the Pupil Labs website says "TODO: Link on_notify docs" where I would expect that information to be.
@user-6bc565 unfortunately not. Now that we are stabilizing the software, it could make sense to start such a list.
In that case, where in the code would I look if I wanted to dig into how Pupil handles notifications?
I found a past discussion. Thanks!
Hello, thanks for your really nice eye tracking device! I'm using the Pupil Labs add-on with the HTC Vive Cosmos for my Bachelor thesis. I have one question: I want to rotate the user during an eye blink. Is that possible already with the blink detection from the Unity plugin, or should I implement it myself? I was wondering if a blink could also be detected while the eyes are still closed.
Hi, I want to write a plugin for the Player app which processes the results of pupil_positions.csv with another plugin that I have already written. To do this, I need to know when the routine responsible for writing the pupil_positions file has finished. Is there a way of either knowing when a particular plugin has finished, or making sure my new plugin always runs last?
@user-a82c20 you can access the data in memory as a plugin, check out `g_pool.pupil_positions`.
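A rough sketch of such a Player plugin (the class name and processing step are placeholders, and the exact container type behind `g_pool.pupil_positions` differs between Pupil versions):
```python
from plugin import Plugin  # Pupil's plugin base class, available when running inside Player


class PupilPostProcessor(Plugin):
    """Hypothetical Player plugin that consumes pupil data from memory."""

    def __init__(self, g_pool):
        super().__init__(g_pool)
        # Same data that the exporter writes to pupil_positions.csv,
        # available here without waiting for the CSV export:
        for datum in g_pool.pupil_positions:
            self.process(datum)

    def process(self, datum):
        # placeholder for your existing processing
        print(datum["timestamp"], datum["diameter"])
```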
Hi @user-afba5d the Blink Demo in Unity just listens to the blink data stream from Pupil's blink detector plugin (which is enabled by default since Pupil v2.0). You can also adjust the settings in Pupil. The Blink Demo is a very simple showcase of this feature; you would probably want to write your own listener to blink data, similar to what the demo does. The blink messages also have a "type" field that is either "onset" or "offset" for detecting the start/end of blinks, which might be helpful for your use case. Please note that Pupil's blink detection works very simply, by checking if the pupil detection confidence of both eyes suddenly decreases/increases. This normally corresponds to a blink and works for most cases. It also means that it won't work monocularly or when only blinking with one eye.
"I was wondering if a blink could also be detected while the eyes are still closed." I'm not sure if I understand you here. What's a blink while the eyes are still closed?
Thank you for your fast and detailed answer! In my study the participant will be rotated during an eye blink. This is a manipulation technique for redirected walking. That means I want to detect that both eyes are closed; in this moment the user will be rotated. I hope my English isn't too bad.
@user-afba5d that should be possible. Like I said, you probably want to copy and adjust the BlinkDemo.cs script. This script doesn't actually make use of the offset/onset information in the message, but infers the state on its own. However, I would recommend you rely on the "type" key to detect whether the blink started or stopped.
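For illustration, here is what relying on the "type" key looks like in Python (in Unity you would adapt BlinkDemo.cs accordingly; this assumes Capture runs locally with default ports):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("blinks")  # topic of the blink detector plugin

while True:
    topic, payload = subscriber.recv_multipart()
    blink = msgpack.loads(payload, raw=False)
    if blink["type"] == "onset":
        print("blink started -> rotate the user now")
    elif blink["type"] == "offset":
        print("blink ended -> stop rotating")
```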
Thank you very much! I will search the documentation to find out more about the needed types.
Hi again. Am I right in thinking that the contents of `args` in a `start_plugin` notification get passed to `**kwargs`? That is, that I can generally launch plugins just fine with empty `args`?
Hi @user-6bc565, the args get passed to the plugin's `__init__()` (which is probably what you meant). I think almost all plugins have reasonable default values, such that you can start them without args, so you are generally fine with empty `args`. However, I recommend having a look at the plugin's `__init__` to make sure that this really is the case.
Thanks! I had only been looking at classes with a `**kwargs` in their `__init__()`, and with defaults on their arguments, and noticed that `g_pool` didn't have a default argument. My reasoning was that if I could start them with empty `args`, then `args` mustn't be interacting with (the entirety of) the init arguments. Is there something special about what `g_pool` represents, like `self` or `cls`?
@user-6bc565 `g_pool` will always automatically be set when the plugin is started. It's basically a globally shared dictionary that we use to collect and pass data between all plugins. Everything else (arguments with default parameters, as well as `**kwargs`) can be controlled with the `args`.
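To make the mapping concrete, a sketch with a hypothetical plugin (names are made up for illustration):
```python
from plugin import Plugin  # Pupil's plugin base class


class MyPlugin(Plugin):
    """Hypothetical plugin used only to illustrate the args mapping."""

    def __init__(self, g_pool, threshold=0.6, **kwargs):
        super().__init__(g_pool)  # g_pool is injected by Pupil, never via args
        self.threshold = threshold


# Starting it with empty args uses the default threshold:
start_default = {"subject": "start_plugin", "name": "MyPlugin", "args": {}}

# Entries in args become keyword arguments to __init__:
start_custom = {"subject": "start_plugin", "name": "MyPlugin", "args": {"threshold": 0.8}}
```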
Hi, I'm not sure if this is the place to ask this question. I recently updated my Unity version from 2018 to 2019 and the HMD-Eyes Unity package I was using seems to have broken slightly. The frame visualizer that streams the eye video from the IR camera in the HTC Vive Pro add-on no longer shows up when I run the demo scene. This worked with the old version. Does anyone know how I can fix this?
Hello! I don't know why I'm getting these errors:
player - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
--- Logging error ---
Traceback (most recent call last):
File "C:\Python36\lib\logging\__init__.py", line 994, in emit
msg = self.format(record)
File "C:\Python36\lib\logging\__init__.py", line 840, in format
return fmt.format(record)
File "C:\Python36\lib\logging\__init__.py", line 577, in format
record.message = record.getMessage()
File "C:\Python36\lib\logging\__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not enough arguments for format string
Call stack:
File "C:\Python36\lib\threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:/Users/Jolka/PycharmProjects/pupil/pupil_src/main.py", line 187, in log_loop
logger.handle(record)
Message: 'Loaded backend %s version %s.'
Arguments: ['module://backend_interagg', 'unknown']
player - [INFO] launchables.player: Application Version: 2.2.6
player - [INFO] launchables.player: System Info: User: Jolka, Platform: Windows, Release: 10, Version: 10.0.16299
They show up when I run Pupil Player (2.2.6) from source on Windows 10. They don't interrupt me in normal work with Player; however, I would like to get rid of them :)
Hi @user-99ee4f HMD-Eyes was specifically developed for Unity 2018, I'm not sure if we ever tested it with 2019. What's your reason for upgrading to 2019? Are there any features that you would like to use with HMD-Eyes? I'll see if I can reproduce the problem on 2019 and check how much effort it would be to make it compatible.
@user-a6cc45 do I assume correctly that you have made modifications to the source of Pupil?
Specifically there seems to be some log message `Loaded backend %s version %s.`, which causes the error.
This code does not seem to be part of Pupil, so I assume the error is in code that you added to Pupil.
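For reference, the immediate cause of the TypeError: the record's arguments arrive as a list (see `Arguments: [...]` in your traceback), and `%`-formatting treats a list as a single value, so the second `%s` is left without a value. This possibly happens because the record was serialized when passed between processes (an assumption on my part):
```python
# The failing pattern from the traceback: logging internally does `msg % args`.
msg = "Loaded backend %s version %s."

print(msg % ("agg", "unknown"))  # works: a tuple supplies one value per %s
print(msg % ["agg", "unknown"])  # TypeError: not enough arguments for format string
```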
@user-c5fb8b It was actually an accidental update. I tried to return to 2018, but there was some new error that kept showing up: FormatterNotRegisteredException: System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not registered in this resolver. resolver:StandardResolver
I even tried redownloading the Pupil Labs HMD-Eyes library and creating a new project.
But after playing around a bit, I realized that there is just an extra delay now before the frame visualizer (IR video stream) shows up. So for now, it's actually working fine. Do you know off the top of your head if there is anything else that could be an issue when using 2019? If I do run into unresolvable errors, I'll just revert back to 2018 and try to fix the error above.
@user-99ee4f I just created a dummy project in 2019.4 and imported HMD-Eyes. I could run the GazeDemoScene without any problems. I also did not notice any delays. I'm not sure what causes your issues, but it does not seem to be the Unity version. Do you have any version control set up for your projects? If not, I'd highly recommend using e.g. Git, as it will make it much easier to track your modifications and to roll back to before you made the upgrade.
@user-99ee4f I am fairly sure that this `FormatterNotRegisteredException` relates to this dependency mentioned in the README: "Due to an issue with MessagePack, the default project setting for `ProjectSettings/Player/Configuration/API Compatibility Level` is not supported and needs to be set to .NET 4.x"
@user-99ee4f @papr good idea, I actually made sure to set the API Compatibility Level to .NET 4.x in my test project for 2019.
Hi, I have a question about how `gaze_normal0_z` and `gaze_normal1_z` within gaze_positions.csv are calculated. Are the z-coordinates based on an estimate of eye vergence?
@user-a09f5d Yes, they are.
@papr Great thank you.
Hello, what are the best values for the Fixation Detector parameters: 1. maximum dispersion threshold, 2. minimum duration threshold?
@user-a6cc45 That might depend on your use case. I suggest looking up relevant publications and their definition of a fixation. Based on that, you can choose the best fitting thresholds.
Hello, question about timing. I am trying to record monocular data using the HTC Vive add-on at 200 Hz (camera resolution 192x192), and the timestamps I am reading from the pupil data stream are very jittery (from 3 to 8 ms between samples). Is there anything I can do to get a stable 5 ms between samples? Thanks!
Hi, I'm a developer and would like to adopt the pupil detection function. Is there any documentation that I can refer to?
@user-3fe6c4 Unfortunately, the cameras do not guarantee a perfectly even sampling.
@user-a9a7d0 The pupil detection code can be found here: https://github.com/pupil-labs/pupil-detectors/
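For a quick start, running the 2D detector standalone looks roughly like this (a sketch; requires `pip install pupil-detectors`, plus OpenCV here just for loading the image):
```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# The detector expects a grayscale uint8 image of the eye
eye_image = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

result = detector.detect(eye_image)
print(result["confidence"])  # 0..1, how certain the detection is
print(result["ellipse"])     # fitted pupil ellipse: center, axes, angle
```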
"Unfortunately, the cameras do not guarantee a perfectly even sampling." @papr Understood. Is it strictly a hardware limitation (the cameras are feeding images in an irregular fashion) or is there something I can do through software to ameliorate the issue? My application can take some jitter, but this is probably excessive.
Hi there. I have a follow-up question about how `gaze_normal[0,1]_z` are calculated. I understand that the z-coordinates are based on an estimate of eye vergence. With this in mind, I made a recording with a monocular calibration (camera 0 was switched off and eye 0 was covered with an eye patch). When I exported the data from this recording, gaze_positions.csv still reports a z value for gaze direction (`gaze_normal1_z`). Where does this z value come from, given that there is no vergence information available?
@user-a09f5d There is a default depth value that is being assumed. In your case, it should always be the same z-value, correct?
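You can quickly check this from the export, e.g. (a sketch, assuming pandas is installed and the exported file is in the current directory):
```python
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")
# If a fixed default depth is used, this should print a single value:
print(gaze["gaze_normal1_z"].dropna().unique())
```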
@user-3fe6c4 It is a little bit of both on Windows. There is a difference between hardware and software timestamps. We have discussed this topic a few times in the core and software-dev channels. You should be able to find quite some information by searching these channels for these terms.
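If you want to quantify the jitter you are seeing, a quick sketch over a recording folder (the timestamp file name follows Pupil Capture's recording format):
```python
import numpy as np

timestamps = np.load("eye0_timestamps.npy")
intervals_ms = np.diff(timestamps) * 1000  # inter-sample intervals in ms

print(f"mean: {intervals_ms.mean():.2f} ms")  # ~5 ms expected at 200 Hz
print(f"std: {intervals_ms.std():.2f} ms")
print(f"min/max: {intervals_ms.min():.2f} / {intervals_ms.max():.2f} ms")
```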