Is it possible to deactivate just 3d detection at runtime? We want to reduce CPU load while still being able to see live whether the eyes can be tracked. We would like to calculate both 2d and 3d detection post hoc.
Yes, it is possible via the network api https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_detector_network_api.py or using a custom eye plugin (similar code to this https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c#file-artificial_2d_pupil_detector-py-L73-L82)
Thx
I added my changes to the ttl.py example you gave earlier; that's working now. I then added the function in line 30 in order to stop the 3d detector plugin, based on the example you linked above. Apparently line 36 never executes; at least I can't see any printout in the command window. Do you have time to assist with this code? I don't understand the structure of the Pupil Labs code, nor Python syntax, very well. Thanks.
The issue is the .order value. Plugins are loaded based on this value, and since the order is very low in your case, the pye3d plugin is not loaded yet. The trick is to delay the stop command:
+ DELAYED_STOP_SUBJECT = "TTLTriggerDelayedPye3dStop"
...
- self._stop_Pye3DPlugin_pupil_detectors()
+ self.notify_all({"subject": DELAYED_STOP_SUBJECT, "delay": 0.5})
...
def on_notify(self, notification):
...
+ elif notification["subject"] == DELAYED_STOP_SUBJECT:
+ self._stop_Pye3DPlugin_pupil_detectors()
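For reference, a rough sketch of that pattern as a standalone plugin meant to run inside Capture's plugin environment (the order value, the init_ui hook, and the stop method body are assumptions; only notify_all, on_notify, and the delayed notification are taken from the snippets above):
from plugin import Plugin

DELAYED_STOP_SUBJECT = "TTLTriggerDelayedPye3dStop"


class TTLTriggerExample(Plugin):
    # a low order means this plugin initializes before the pye3d plugin exists
    order = 0.02

    def init_ui(self):
        # do not stop pye3d directly here; schedule a delayed notification instead
        self.notify_all({"subject": DELAYED_STOP_SUBJECT, "delay": 0.5})

    def on_notify(self, notification):
        if notification["subject"] == DELAYED_STOP_SUBJECT:
            self._stop_Pye3DPlugin_pupil_detectors()

    def _stop_Pye3DPlugin_pupil_detectors(self):
        # placeholder for the stop logic from your ttl.py plugin
        ...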
A first general tip regarding plugin development: use Python's logging mechanism instead of print statements:
import logging
logger = logging.getLogger(__name__)
...
logger.info("info-level message")
Capture will write these automatically to the log file and display them in the world window.
Edit: One important point is that the logger only processes the first argument. That is, you should pass a formatted string if you want to log multiple variables, e.g.
- print(a, b, c)
+ logger.info(f"{a} {b} {c}")
I was suspecting load order, but I didn't get when this list was populated. Thanks a lot, it's working now.
We have yet another problem that we can't seem to solve. We use the attached plugin. In line 59 we toggle the TTL for each received frame of eye 0 while self._recording is true. In some recordings we receive more TTL inputs at the TTL recording device than there are frames of eye 0 in "eye0_timestamps.npy". I can explain how one more TTL than npy entries could occur: if we record an uneven number of frames, the TTL line is still high when the recording stops, and stopping the recording resets the TTL line back to 0 (a toggle) although there was no frame. That happens whenever the recording has an uneven number of frames, and we can filter it out. Our data, however, shows that there can additionally be 1 to 4 TTL triggers recorded that we cannot match to npy entries (there are simply 4 more TTL toggles than npy entries). By shifting the npy and the received TTL vectors against each other and checking their time values, it becomes clear that the frames missing in the npy are the first few frames. I was under the assumption that when "self._recording" is true, every event in line 50 reaches the npy file of the corresponding eye. Is there any possibility that this is not the case? How can I improve the filter condition in line 54 to be sure to only toggle the TTL when the frame will actually reach the npy?
If the number of frames in the npy were 1000 and there were 1004 matching triggers, we could just drop the initial 4 TTL triggers, I guess. It is still odd to me how this can happen.
Hi @user-aadbbd, moving the conversation here. Can you confirm that in your filter_gaze_on_surface.py script run from PsychoPy, the key on line 37 is "gaze_on_surfaces" and not "gaze on surfaces"?
Yeah, it's "gaze_on_surfaces". I'm not exactly sure how to go about debugging this, as my PsychoPy experiment just immediately shuts down when it encounters this code π¦
If you could provide the full traceback as well as the neighbouring source code of where the error appears, we might be able to help you. Also, are you subscribing to more topics than surface?
No, only surface I think. Here is the traceback.
Traceback (most recent call last):
File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\Uniformity Illusion PsychoPy\UniformityIllusion.py", line 391, in <module>
if is_looking_at_surface(surface_name):
File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\Uniformity Illusion PsychoPy\UniformityIllusion.py", line 165, in is_looking_at_surface
gaze_positions = filtered_surface["gaze_on_surfaces"]
KeyError: 'gaze_on_surfaces'
Here is the function where I do pupil things.
def is_looking_at_surface(surface_name):
context = zmq.Context()
# open a req port to talk to pupil
addr = "127.0.0.1" # remote ip or localhost
req_port = "50020" # same as in the pupil remote gui
req = context.socket(zmq.REQ)
req.connect("tcp://{}:{}".format(addr, req_port))
# ask for the sub port
req.send_string("SUB_PORT")
sub_port = req.recv_string()
# open a sub port to listen to pupil
sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")
topic = sub.recv_string()
msg = sub.recv() # bytes
surfaces = loads(msg, raw=False)
filtered_surface = {
k: v for k, v in surfaces.items() if surfaces["name"] == surface_name
}
# note that we may have more than one gaze position data point (this is expected behavior)
gaze_positions = filtered_surface["gaze_on_surfaces"]
for gaze_pos in gaze_positions:
norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]
# only print normalized gaze positions within the surface bounds
if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
return True
else:
return False
Please try changing the code to
try:
gaze_positions = filtered_surface["gaze_on_surfaces"]
except KeyError as err:
raise ValueError(f"surface={filtered_surface}") from err
This will print out the content of filtered_surface in case it does not contain the expected key
PsychoPy becomes unresponsive when I run this
Interestingly, the PsychoPy Runner constantly prints "[]" when I run the unedited filter_gaze_on_surface.py...
The unedited version of the file only prints something if there is a gaze point within the "unnamed" surface. So I am not sure why it would print []. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
Generally, it sounds like the setup is not configured correctly yet.
I think my issue solved itself; it was caused by one of our own bugs, sorry.
I'm not sure if anyone has had this issue before, and it's not a Pupil-related question (I figured I might ask here anyway), but for some reason when I have my Vive headset on and I run my project in the editor, it seems that time moves at about 0.5x scale.
My fixed timestep is 0.05, and my Vive Cosmos runs at 90 fps. If I set the timeScale to 200/90 (where 200 Hz corresponds to the 0.05 fixed timestep and 90 is the refresh rate of the headset), everything in my project seems to run as normal.
Hi, when I try to do the calibration, only one circle is displayed in the center of the screen, and no other circles show up in the corners of the screen afterwards, so I cannot complete the calibration. What should I do?
Are you using the screen marker or the single marker calibration choreography?
I am using the screen marker.
Then it is likely that your scene camera is not set up correctly. Please make sure the screen is fully visible during the procedure. The marker's center will turn from red to green once the marker is being detected. If the marker is detected for sufficiently long, the choreography will continue with the next marker.
Thank you! I solved the problem as you said.
In this case, I would also like to note that the correct setup of the eye cameras is crucial for the pupil detection and gaze estimation to work. See https://docs.pupil-labs.com/core/#_1-put-on-pupil-core and https://docs.pupil-labs.com/core/hardware/#headset-adjustments for details.
Thanks for the information! I am pretty new to these glasses.
Is there any way to save/load calibrations for the Vive add-on? Also, is there any way to load camera settings (ROI, 3D model, etc.)?
Loading calibrations works by starting the corresponding gazer plugin (via the start_plugin notification) with the correct parameters. These are announced as a notification once the calibration was successful. Use this as an example on how to listen to notifications: https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/filter_messages.py (I recommend disabling line 32). The topics to look for are notify.calibration.successful and notify.calibration.result.v2.
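Roughly, the listen-then-restart flow could look like this (a sketch only; the payload keys of the calibration result notification, e.g. gazer_class_name and params, are assumptions that you should check against what you actually receive):
import zmq
import msgpack

ctx = zmq.Context()

# Pupil Remote REQ socket
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# listen for the calibration result notification
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration.result")

topic = sub.recv_string()
result = msgpack.unpackb(sub.recv(), raw=False)
# assumption: the payload carries the gazer class name and its fitted parameters
gazer_name = result["gazer_class_name"]
gazer_params = result["params"]

# later (e.g. in a new session), start the gazer with the saved parameters
notification = {"subject": "start_plugin", "name": gazer_name, "args": {"params": gazer_params}}
req.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
req.send(msgpack.packb(notification, use_bin_type=True))
req.recv_string()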
Regarding camera settings: there is a variety of pupil detector settings that you can retrieve and set via the pupil detector network API https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_detector_network_api.py Unfortunately, setting the 3d model is not one of them.
sounds good, thank you!!
@user-28c35b Following up on https://discord.com/channels/285728493612957698/285728493612957698/862969137332092959
The issue is that the network API does not guarantee you an even sampling rate. You will need to buffer the samples and do some kind of matching.
Pupil Capture does that already for gaze data. So if you are looking for binocular pairs, I suggest subscribing to gaze (requires a valid calibration).
Generally, what you are trying to do is much easier when done post-hoc. Are you 100% sure you need to calculate this in real-time?
Yes, I am trying to calculate a metric (the Low/High Index of Pupillary Activity) in real time, and I need the diameter of both pupils for this.
Ok. In this case, I recommend subscribing to pupil.0.3d, pupil.1.3d, and gaze. Buffer the pupil data in a dictionary with (<datum timestamp>, <datum eye id>) as keys. When receiving a binocular gaze datum, use its base_data field to check which pupil datums were used and look them up in your pupil buffer. Don't forget to evict old data from the buffer, as it grows quite quickly.
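A rough sketch of that buffering strategy (the datum field names follow the Pupil message format; the eviction policy is just an example):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
for t in ("pupil.0.3d", "pupil.1.3d", "gaze."):
    sub.setsockopt_string(zmq.SUBSCRIBE, t)

pupil_buffer = {}  # (timestamp, eye_id) -> pupil datum
MAX_BUFFER = 2000  # evict old entries so the buffer does not grow unbounded

while True:
    frames = sub.recv_multipart()
    topic, payload = frames[0], frames[1]
    datum = msgpack.unpackb(payload, raw=False)

    if topic.startswith(b"pupil"):
        pupil_buffer[(datum["timestamp"], datum["id"])] = datum
        if len(pupil_buffer) > MAX_BUFFER:
            for key in sorted(pupil_buffer)[: len(pupil_buffer) - MAX_BUFFER]:
                del pupil_buffer[key]  # drop the oldest entries

    elif topic.startswith(b"gaze"):
        base = datum["base_data"]
        if len(base) == 2:  # binocular gaze datum
            keys = sorted(((d["timestamp"], d["id"]) for d in base), key=lambda k: k[1])
            if all(k in pupil_buffer for k in keys):
                diameters = [pupil_buffer[k]["diameter_3d"] for k in keys]
                print(datum["timestamp"], diameters)  # eye 0 and eye 1 diameters in mm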
Thank you for the help, I'll try to implement it this way.
Hi, I have a Pupil Core and I need to dynamically modify the minimum duration of the fixation detector; this parameter will depend on the person who will use the software. Is there a way to set it directly in Python, using the notification system or by publishing on certain topics?
Yes, it is possible by sending a start_plugin notification. Do you know how it is structured?
Yes, is it like this one? {"subject": "start_plugin", "name": "...", "args": {...}}
{..., "name": "Fixation_Detector", "args":{"min_duration": min_duration_value}}
Everything works fine, thank you for the help!!
I will try, and I'll let you know. Thanks so much!!
How do I create a remote annotation based on a keypress?
You start by taking this remote annotations example https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py and extending it to send annotations as soon as the user presses a key. (How to capture a keypress is highly dependent on the framework that you are using.)
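A bare-bones sketch of that idea, using Enter presses via input() as a stand-in for your framework's keypress handling (for precise timing you should still use the clock-offset approach from the linked example; here the current Pupil time is simply requested via the 't' command):
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1)  # give the PUB socket time to connect before sending

# make sure the Annotation Capture plugin is running
start_plugin = {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}}
pupil_remote.send_string("notify." + start_plugin["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.packb(start_plugin, use_bin_type=True))
pupil_remote.recv_string()

while True:
    input("Press Enter to send an annotation (Ctrl+C to quit) ")
    pupil_remote.send_string("t")  # ask Pupil Remote for the current Pupil time
    pupil_time = float(pupil_remote.recv_string())
    annotation = {"topic": "annotation", "label": "keypress", "timestamp": pupil_time, "duration": 0.0}
    pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub_socket.send(msgpack.packb(annotation, use_bin_type=True))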
Thanks papr!
@papr This model confidence metric is a really timely update.
I do not know if metric is the right word. It is very rough but better than nothing.
In the next few days, we're going to hit you with a doc detailing our efforts to turn our segmentation network into a Pupil Labs plugin.
Looking forward to that!
Sneak preview: a fixation-based assessment (9-point grid in VR) shows that it is very robust, very precise, but not very accurate.
We have spent long enough debugging that we feel we might benefit from some input from the PL team, if they can offer it.
All our efforts will be open source.
Thanks!
Haha. Yes, we know. In the end, it's a very complicated measure of "how elliptical is your mask?"
Wait, are we talking about the same thing?
I know, a bit more complicated than that, but that captures the spirit of it.
@user-3cff0d should share his diagram of it with you - he spent quite a bit of time wrapping his head around the algorithm.
Oh, oh, sorry, no - and that's my fault.
Nevermind.
Yes, looking forward to playing with the new p3d model confidence metric (I was confusing it with the pupil confidence metric for a moment)
Long day.
Ah ok. No worries. Feedback is welcome.
@papr Here is the flowchart in question. Does this line up with your understanding of the 2d detector's confidence algorithm? Please forgive me if some of the function names are inaccurate, they were based off of a recreation.
I must admit that I do not know the algorithm in this much detail. I will put the review of the flowchart on my backlog though. Afterward, would be pretty neat if you were ok with putting this on https://github.com/pupil-labs/pupil-detectors/
Sure, I would be okay with it
When I create a remote annotation, it does not load in Pupil Player (annotations.pldata is empty). Below is my relevant code, but it's largely just pupil helper code. I don't get any error messages either when running the program.
#set pupil time to psychopy time
pupil_time = core.Clock()
time_fn = pupil_time.getTime()
req.send_string("T" + str(time_fn))
print(req.recv_string())
# Ensure that relative paths start from the same directory as this script
_thisDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(_thisDir)
def notify(pupil_remote, notification):
"""Sends ``notification`` to Pupil Remote"""
topic = "notify." + notification["subject"]
payload = serializer.dumps(notification, use_bin_type=True)
pupil_remote.send_string(topic, flags=zmq.SNDMORE)
pupil_remote.send(payload)
return pupil_remote.recv_string()
def send_trigger(pub_socket, trigger):
"""Sends annotation via PUB port"""
payload = serializer.dumps(trigger, use_bin_type=True)
pub_socket.send_string(trigger["topic"], flags=zmq.SNDMORE)
pub_socket.send(payload)
def new_trigger(label, duration, timestamp):
"""Creates a new trigger/annotation to send to Pupil Capture"""
return {
"topic": "annotation",
"label": label,
"timestamp": time_fn,
"duration": duration,
}
# Start the annotations plugin
notify(
req,
{"subject": "start_plugin", "name": "Annotation_Capture", "args": {}},
)
#req.send_string('C')
#start recording
req.send_string('R')
req.recv_string()
label = "start experiment"
duration = 0.1
minimal_trigger = new_trigger(label, duration, pupil_time)
send_trigger(pub_socket, minimal_trigger)
sleep(1) # sleep for a few seconds, can be less
Also, when using our latest Capture release, you should see a log message in the world window confirming the reception of the timestamp and displaying its age
This code follows the old time sync convention using the T command. I highly recommend switching to our new example https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
Independently, I think this line needs changing:
- time_fn = pupil_time.getTime()
+ time_fn = pupil_time.getTime
- req.send_string("T" + str(time_fn))
+ req.send_string("T" + str(time_fn()))
...
- "timestamp": time_fn,
+ "timestamp": timestamp,
...
- minimal_trigger = new_trigger(label, duration, pupil_time)
+ minimal_trigger = new_trigger(label, duration, pupil_time())
time_fn needs to be a function that can be called over and over again. In your example, it is a measured timestamp instead. As a result, all your annotations get the same timestamp assigned. Instead, you want to call the time function whenever you want to measure time. The linked example above should make this more clear as well.
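In other words (illustration only):
from psychopy import core

clock = core.Clock()
time_fn = clock.getTime   # a function: call time_fn() whenever a fresh timestamp is needed
t0 = clock.getTime()      # a single measured value: reusing t0 stamps every annotation with the same time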
The problem is that when I denote it as a function, it throws an error. Perhaps because I'm trying to use core.Clock() as opposed to local_clock()? Although I'm not sure what the differences are here. I'm just trying to sync the PsychoPy time with Pupil Time so perhaps local_clock also works for that.
Traceback (most recent call last):
File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\UniformityIllusionPsychoPy\UniformityIllusion.py", line 113, in <module>
minimal_trigger = new_trigger(label, duration, pupil_time())
TypeError: 'Clock' object is not callable
My bad!
- minimal_trigger = new_trigger(label, duration, pupil_time)
+ minimal_trigger = new_trigger(label, duration, time_fn())
time_fn() instead of pupil_time(). The naming of the latter is not quite good, as it is actually a clock instance which has a getTime() function.
Okay, this runs but the annotations.pldata file is still empty.
Are you getting the confirmation that you have received an annotation?
I don't see anything on the World window
Can you confirm that the annotation plugin and recording are started?
The recording definitely starts but I'm not sure how to check if the plugin is starting properly
The plugin has a menu entry on the right. If the menu is present, the plugin is running
Yes, I tried turning it off, and it turns back on when I run my experiment code
What is your code for pub_socket?
Okay it works, I just had to move the code somewhere else in my experiment. Thanks so much for hand holding me through this debugging!
import zmq
from msgpack import loads
import msgpack as serializer
from time import sleep, time
context = zmq.Context()
# open a req port to talk to pupil
addr = "127.0.0.1" # remote ip or localhost
req_port = "50020" # same as in the pupil remote gui
req = context.socket(zmq.REQ)
req.connect("tcp://{}:{}".format(addr, req_port))
# ask for the sub port
req.send_string("SUB_PORT")
sub_port = req.recv_string()
# PUB socket
req.send_string("PUB_PORT")
pub_port = req.recv_string()
pub_socket = zmq.Socket(context, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))
# open a sub port to listen to pupil
sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")
Again, if you have the time, I highly recommend switching the time sync strategy
So use this instead? So that it accounts for latency?
# 2. Setup local clock function
local_clock = time.perf_counter
# 3. Measure clock offset accounting for network latency
stable_offset_mean = measure_clock_offset_stable(
pupil_remote, clock_function=local_clock, n_samples=10
)
pupil_time_actual = request_pupil_time(pupil_remote)
local_time_actual = local_clock()
pupil_time_calculated_locally = local_time_actual + stable_offset_mean
print(f"Pupil time actual: {pupil_time_actual}")
print(f"Local time actual: {local_time_actual}")
print(f"Stable offset: {stable_offset_mean}")
print(f"Pupil time (calculated locally): {pupil_time_calculated_locally}")
It is more than just copying some code from the example, I am afraid. Please try to understand what the code is doing such that you can apply it correctly to your experiment. See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py for more context.
local_clock can be time_fn / core.Clock().getTime
Ah yes, understanding is always good... thanks again for all your help today, papr!
Is there a way to instruct Player to perform its gaze detection, and instruct it to perform an export (like an 'e' press) through the Network API?
Player does not have a network api. You would need to write a script that calls the gaze estimation code directly. Base class api: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L366
Is it a common issue to have one screen go dark after calibrating when using a dual monitor setup? Specifically, I have Pupil running on one screen, and then calibration dots will appear on the other. Then, the screen that has the calibration dots turns black after calibration is complete.
Hi @papr, may I ask if there is a way to obtain the location of the surface marker relative to the frame captured by the world camera?
Please see this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
Hi papr, sorry for bothering you again. Is the field 'surf_to_dist_img_trans' only available when exporting the recorded file? I was trying to access it in real time in a plugin but couldn't find it.
Thanks a lot!
I've figured it out, but I have to modify the surface_tracker.py
Hi, what part needs to be modified? Edit: Probably this part https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L488-L496
yes, add
"surf_to_dist_img_trans": surface.surf_to_dist_img_trans.tolist(),
after line 491
We can include this in our next release
Hi all. I'm hoping you can help. I'm looking for a hardware-software combination that can give me realtime values of independent left and right pupil positions. The context here is that I have strabismus and would like to develop (for myself) a feedback loop that allows me to determine, during an eye exercise, if I've correctly aligned my eyes.
A provisional poke around various webcam-based open source solutions suggests they're not accurate enough and could give a lot of false positives.
Is this kind of thing viable with Pupil? A superficial scan of the docs suggests (at least from an API perspective) "yes" but I'm wondering if the system as a whole would be good at this task?
Edit: note that this requirement means I need to determine the degree of deviation of both pupils from where they "should" be. I suspect this presents a problem in that heads aren't symmetric and the real-world target I'm testing my eyes against may not be perfectly aligned. So chances are fundamentally this isn't practical outside VR.
Hi, I'm developing an application in Unity and I'm using C#. I imported the packages and tested the demo scenes following your documentation, and everything works. I also tried the connection with Pupil Capture, and following some examples I was able to create some test scenes. Now I need to read data related to fixations on surfaces, but I have a problem with the data structure.
void receiveCustomData(string topic, Dictionary<string, object> dictionary, byte[] thirdFrame = null)
Inside this function I was able to read dictionary["fixations_on_surfaces"], but then how can I access the dictionary inside? The one with "topic", "norm_pos", "confidence", etc. I can easily read the other data like "name" or "timestamp". Using Python I had no problem.
The challenge is that you need to perform a calibration in order to get comparable gaze direction estimates from both eyes. And the default calibration assumes that you are able to fixate the calibration target with both eyes at the same time. You would need to develop a calibration choreography plugin that supports collecting eye data from one eye at a time.
@user-a09f5d has been doing work/research in this regard. Maybe they can share their experiences.
I had a suspicion that might be the case. I can fixate on a target with both eyes for a short period of time which could help. I'll check out the docs to see if I can get a feel for the calibration process and whether that can work.
Assuming the calibration step is fine, is the next step (fetching eye data for each eye) then viable? (Real-world limitations notwithstanding)
Hi @user-53365f Sorry, I haven't been on discord for many months so didn't see this till now. I have responded to your direct message.
I checked the data structure from here: https://docs.pupil-labs.com/developer/core/overview/#surface-datum-format
If you use these dual-monocular gaze estimators [1], you can receive gaze direction estimates for both eyes independently via our real-time network API (some delay due to data transfer included of course)
[1] https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350
Fantastic. I've also just checked the calibration docs and that looks viable too. I also get the point of the world camera now!
Could you confirm that you only need help with understanding the inner structure of this dictionary? Or do you also need help accessing it using c# syntax?
Yes, the first one. I tried casting this System.Object to list, tuple, and dictionary, but it doesn't cast. In Python I can easily read it this way: dictionary["fixations_on_surfaces"][0]["norm_pos"]
I also have a more general question: are you performing eye tracking in a VR or an AR environment? And if yes, which of the two?
The application isn't in VR/AR. I'm using pupil core
dictionary["fixations_on_surfaces"]
is a list/array. Unfortunately, since I do not know c#, I cannot tell you why this would not cast.
Thanks anyway, I'll try to find a way and I'll post it here; it might be useful to someone else.
Hi, I was thinking, is there a way to subscribe to a topic within a topic? For example, I have a surface to track which is called "test". I am interested only in the data for fixations on this surface. To read all the data I subscribe to the topic "surfaces.test". Is there a way to subscribe directly to a "surfaces.test.fixations_on_surfaces" topic?
Unfortunately, with the current implementation, this is not possible.
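A possible workaround is to keep subscribing to surfaces.test and filter out the fixation data on the client side (a sketch, with field names as in the surface datum format):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.test")

while True:
    topic, payload = sub.recv_multipart()[:2]
    surface_datum = msgpack.unpackb(payload, raw=False)
    # only the fixations mapped onto this surface
    for fixation in surface_datum.get("fixations_on_surfaces", []):
        print(fixation["norm_pos"], fixation["confidence"])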
Hi, @papr, here's the code we used to open, edit, and save the .pldata file. In this code, we're saving the edited pldata to a new file
On first sight, I do not see any errors. I think it could be written more efficiently, though. I will write a reference implementation tomorrow.
Hi, I am using Core, and I get the data by subscribing to gaze. I need pupil size and blinks; I can get the pupil size with no problem, but I am not getting the blink data, even though I have turned on the blink setting in the Pupil Capture software. Is there something I am doing wrong, or maybe I have to subscribe to another topic?
You need to subscribe to blink.
Thank you! I didn't find it in the docs, maybe I wasn't looking in the right place.
@papr Hi, if you have any suggestions for getting this to function as it should, please let us know!
Apologies, somehow I missed your message.
Hello, is there any documentation for pye3d? I have looked everywhere but maybe I am missing something
https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-pupil-detection
Thanks! But are there more specific API docs for pye3d then? I have read this and I think it is more of an explanatory blog post than a descriptive doc for the functions included.
Unfortunately, not, no. The entry point for reading the source code would be: https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L102
This implements the detector as described in the link above. If you are looking for the actual model fitting code, look here https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/base.py#L33
Thank you! This would suffice for my usage for now I guess
Let me know if you have questions about the source code
Hello, I'm using the Pupil Core eye tracker and would like to use the blink detector Pupil Labs offers. Is it possible to export the blink data, as visible in Pupil Player, to a csv file when exporting? Thanks in advance!
Hello! Is there a C++ library for Pupil Core (mono) eye tracker?
Hi, there are no native c++ libraries to access the eye tracker directly. Please refer to our network api instead https://docs.pupil-labs.com/developer/core/network-api/
Are you aware of the existing blink detector export data? Does it not provide what you need?
Thanks for your reply. I should specify my need: I am using a surface data csv export and would like to add a column that specifies blink yes or no for each row. I have the blink detector export; however, the timestamps do not correspond to the timestamps in the surface export. I think I am doing something wrong. Does this clarify my question?
That is for Python; I actually need to integrate the eye tracker into my C++ project. I have seen some random projects, but they do not work.
There are c++ implementations for zmq and msgpack which you can use to connect to the network API. In any case, you will have to run the Pupil Core application in parallel.
Thanks. I will try then.
It does clarify the question, thank you!
"However the timestamps do not correspond with the timestamps in the surface export." Are you referring to the fact that they do not match exactly, or that they are off completely?
They do not match exactly indeed
Is that normal? I figured the timestamps should match the gaze_timestamps in the surface export
Blinks are calculated based on pupil data, not gaze. This is why it does not match exactly.
You will have to assign the blink yes/no value for each surface-mapped gaze based on whether its timestamp falls into any of the blink periods or not (blink_start_ts <= gaze_ts <= blink_end_ts).
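A rough pandas sketch of that matching, assuming the standard export file and column names (blinks.csv with start_timestamp/end_timestamp, and a gaze_positions_on_surface_<name>.csv with gaze_timestamp; adjust to your actual files):
import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")
surface_gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface 1.csv")

def is_within_blink(gaze_ts):
    # True if the gaze timestamp falls into any blink period
    in_blink = (blinks["start_timestamp"] <= gaze_ts) & (gaze_ts <= blinks["end_timestamp"])
    return bool(in_blink.any())

surface_gaze["blink"] = surface_gaze["gaze_timestamp"].apply(is_within_blink)
surface_gaze.to_csv("gaze_on_surface_with_blinks.csv", index=False)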
Okay thanks a lot for the extra information!
sounds good, thank you!
Hi papr, my monitor turns off after calibration. Is there a way to fix this?
Turns off as in "physically turns off" or "displays a black image"?
displays a black image. This only occurs when I have a dual monitor setup and they're displaying different things
The screen that the calibration appears on goes black. And then, when I close Pupil, it comes back on.
But this doesn't occur when I duplicate the displays and display the same information on both of them
https://github.com/pupil-labs/pupil/issues/2143#issue-894510320 Please use window instead of fullscreen mode to work around this issue.
Thanks! Is this in the settings somewhere? I can't seem to find it.
Yes, go to the calibration settings and disable the "use fullscreen" toggle
Awesome. Thanks again, papr!
I'm trying to end a trial in PsychoPy if a participant looks away from a specified surface (usually the monitor). However, it is often the case that the gaze will obviously not be on the monitor, as shown in the Pupil window, but this is not reflected in PsychoPy. I am wondering if I am using norm_gp_x and norm_gp_y wrong, and if they are only defined when the gaze is on the surface. Is there a better way to tell if the gaze is on the surface?
#read from remote pupil data and check if they looking at the surface
topic = sub.recv_string()
msg = sub.recv() # bytes
surfaces = loads(msg, raw=False)
filtered_surface = {
k: v for k, v in surfaces.items() if surfaces["name"] == surface_name
}
is_gaze_on_surface = True
try:
# note that we may have more than one gaze position data point (this is expected behavior)
gaze_positions = filtered_surface["gaze_on_surfaces"]
for gaze_pos in gaze_positions:
norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]
# only print normalized gaze positions within the surface bounds
if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
is_gaze_on_surface = True
else:
is_gaze_on_surface = False
print(norm_gp_x, norm_gp_y)
except:
pass
if is_gaze_on_surface:
continueRoutine = True
else:
continueRoutine = False #end the routine if they look away
I think the idea is correct, but the implementation can be improved (see the sketch below):
1. Filter by confidence. Gaze with confidence < 0.6 can be inaccurate.
2. In your implementation, only the last gaze item decides if the routine is stopped or not. You might want to break out of the loop as soon as is_gaze_on_surface is true for the first time.
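A minimal sketch of both points, replacing the loop in your snippet above (the 0.6 threshold is just an example value):
MIN_CONFIDENCE = 0.6

is_gaze_on_surface = False
gaze_positions = filtered_surface.get("gaze_on_surfaces", [])
for gaze_pos in gaze_positions:
    if gaze_pos["confidence"] < MIN_CONFIDENCE:
        continue  # skip low-confidence gaze, it can be inaccurate
    norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]
    if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
        is_gaze_on_surface = True
        break  # one confident on-surface gaze point is enough to keep the routine running

continueRoutine = is_gaze_on_surface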
Is there documentation on the syntax for gaze confidence? I am guessing gaze_pos['confidence']? Having a hard time finding this on the website
@papr, in your recent response to @user-3cff0d , you suggested that we could reduce error by creating a custom gaze class with a modified scene camera location. Two questions: 1) could you help us find the specific variables you are describing and, 2) If this produces successful recalibrations using our modified pupil detection method, would your team consider making those variables transparent in some way through pupil player?
Correction: The calibration needs to correct for the assumed eye ball location, not scene camera (even though this is pretty much equivalent in the end, the former is more or less implemented already) 1) yes 2) yes
This is why I love pupil labs. Thanks @papr !
If you have any quick pointers to get @user-3cff0d headed in the right direction, they would be much appreciated. He's quite capable.
Ok, there are some unforeseen hurdles which I should be able to solve by tomorrow.
I am nearly done with the custom plugin. Will come back soon.
You are a king.
Thanks so much!!
@user-3cff0d Could you please share an example VR recording with data@pupil-labs.com that includes reference locations already? I would give this a quick test before sharing.
Edit: Never mind, I roughly annotated a small recording by myself.
@user-3cff0d I might need to come back to this request. I am seeing some weird behavior with my recording and I am not sure if it is due to its age.
Hi @papr, for the line of code In [20] in the link you sent, are we supposed to serialize datum ourselves, or is serialized an attribute of datum? We keep receiving an error on that line.
.serialized is an attribute of Serialized_Dict: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L281
If you are using normal dicts, use writer.append(datum)
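A rough sketch of that usage (the PLData_Writer constructor arguments and generated file names are assumptions; see the file_methods.py source linked above, and note that pupil_src/shared_modules must be on your path):
import file_methods as fm  # from pupil_src/shared_modules

# write plain-dict annotation data to <recording>/annotation.pldata
writer = fm.PLData_Writer("path/to/recording", "annotation")
for ts, label in [(1000.0, "trial start"), (1010.0, "trial end")]:
    datum = {"topic": "annotation", "label": label, "timestamp": ts, "duration": 0.0}
    writer.append(datum)  # normal dicts can be appended directly
writer.close()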
Okay great, we got it working. We have successfully edited the .npy, .pldata, .json, and .mp4 files. Is editing the .size, .time, .raw, .intrinsics, .mjpeg, .time_aux, and .bin files necessary for the purposes of our project?
Oh, you are editing pupil invisible recordings?
Yes, we are
Do you still have access to the originals, before having them opened in Pupil Player for the first time?
We don't at the moment, but we can get access, would this make it easier?
Yes. In this case you would only need to slice the original files, and Player would take care of generating the correct pldata files. This is the PI recording format https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793
Okay, do you have any advice on editing the other files that we listed?
raw and time are just binary files that can be easily read and written with numpy.fromfile and numpy.ndarray.tofile. Of the json files, I would only touch the info one, to adjust start_time and duration. intrinsics is generated by Player and does not need adjustments. mjpeg requires the same handling as mp4. I would not touch time_aux.
Thanks! We appreciate all your help
Please note that PI uses nanoseconds since the Unix epoch as time.
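For example, reading and rewriting one of the .time files might look like this (assuming they are flat little-endian uint64 arrays of nanosecond timestamps, as per the format sheet; the file name is just an example):
import numpy as np

# read nanosecond timestamps (Unix epoch) from a Pupil Invisible part file
timestamps_ns = np.fromfile("PI left v1 ps1.time", dtype="<u8")

# example edit: keep only the samples of a slice and write them back out
start_ns, end_ns = timestamps_ns[100], timestamps_ns[500]
mask = (timestamps_ns >= start_ns) & (timestamps_ns <= end_ns)
timestamps_ns[mask].tofile("PI left v1 ps1.time")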
Sure thing, I'll send one over now
Sent!
Also it looks like the recorded data does not contain pupil data during the annotated period? When using post-hoc pupil detection, I am not able to fit a very good eye model because there is not sufficient eye movement at the beginning. (I used the eye intrinsics from the release)
@user-b14f98 @user-3cff0d Not sure if you are aware, but this recording was recorded with 2.1, and eye intrinsics were introduced in https://github.com/pupil-labs/pupil/releases/tag/v2.4 Have you downloaded the intrinsics from the release page? See the caveat for 200Hz Vive Add-on Recordings for more information.
@papr Thanks for reminding us. I'll have to take a look, but there's a chance we have not.
Do you want me to do the pupil detection myself and send over the results?
No worries, it will be sufficient for what I am trying to do (compare headset 3d vs hmd 3d gazer). But this has an effect on the gaze estimation nonetheless. The 2d pupil detection is robust enough. The issue is that the 3d eye model cannot be triangulated well enough because the eye orientations are not diverse enough. At least it looks to me like this is the case.
It would be using our plugin, so the 3d model would likely be better due to the more robust pupil detection in the vr headset.
Do you do post-hoc pupil detection in Player or with an external script?
Gotcha, that would be a correctable problem for future data collection
We do post-hoc pupil detection in Player, with our plugin installed
With or without your plugin, I recommend freezing the model once it is fit well, and then restarting the detection to apply the frozen model to the whole recording. This is especially important in 3.4 because it has an impact on which model (short vs long-term) is chosen for the gaze estimation. You seem to have a fairly stable setup, justifying the assumption of "no slippage".
We've tried that in the past with not-so-great results, but that might have just been because we froze the model at a bad time. We'll give it another shot. Also, we're currently still on version 3.2. I tried to update to 3.4 and got a strange versioning error with one of the dependencies, where it seemed like 2 files required different versions of the same package or something like that
Feel free to share the error. I might be able to help resolving that
@user-3cff0d @user-b14f98 https://gist.github.com/papr/e57f55ddd02245bbfd99e072b4273fdf this is the current version. It runs without error, but the resulting gaze is all over the place (for your recording as well as mine). Something must still be going wrong very badly. I would appreciate your help with debugging this.
Install the plugin into your Pupil Player plugin folder and it should appear as a gazer option in the post-hoc calibration menu (next to 2d and 3d)
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
File "...\pupil-master\pupil_src\launchables\player.py", line 843, in player_drop
from gl_utils import GLFWErrorReporting
File "...\pupil-master\pupil_src\shared_modules\gl_utils\__init__.py", line 11, in <module>
from .draw import draw_circle_filled_func_builder
File "...\pupil-master\pupil_src\shared_modules\gl_utils\draw.py", line 16, in <module>
from pyglui.cygl.utils import RGBA
ImportError: DLL load failed: %1 is not a valid Win32 application.
ok, this is an issue with the pyglui wheel. Not sure what causes the error, but you might still have all the required dependencies to build from source:
pip install --no-binary pyglui --force-reinstall pyglui
I'll give this a shot and get back to you
I am not sure if the dummy eye translation values in the plugin are correct. Do you have a hmd-eyes setup that you can use for a quick test?
I will be on vacation for two weeks, starting tomorrow. I will probably not be able to respond frequently in the meantime.
Hey, the recording we sent you already had eye intrinsic files. Am I missing something?
Ok, great! I just wanted to mention this, as it has an impact on the 3d eye model estimation
Perhaps we were using outdated default intrinsics?
Ok, thanks.
I don't have one on my person at the moment, no.
I work remotely most of the time
Ok, I have everything needed at home but not setup. I can do the test, when I am back from vacation.
To verify the values, we need to run https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py + an hmd-eyes realtime calibration. Filtering for start_plugin and GazerHMD3D, we should be able to get the dynamically calculated eye_translations.
Awesome. I hope that you have a wonderful vacation!
@user-3cff0d I found the issue. The ref data required a flip of the y axis. Using the new version, I get a validation accuracy of 1.9 compared to 2.9 with the default 3d gaze mapper. /cc @user-92dca7
@user-b14f98 @user-3cff0d I updated the gist. This is the diff https://gist.github.com/papr/e57f55ddd02245bbfd99e072b4273fdf/revisions#diff-81827a6bee15b12e512a80c75d2b0270629db9e9eb5231bc73d7fcc79926cc0c
@user-d1efa8 The whole conversation above is highly relevant for you, too! Please take the time to read it, install the plugin, and give the plugin a try.
Cool, I'll take a look now; thanks for letting me know!
That's great! I'll give it a try asap.
How do I actually activate the plugin, or do I just do Post-Hoc Gaze Calibration as normal and it will automatically be used?
post-hoc calib -> calibration menu -> select from the 2d/3d selector
Gotcha, thank you
but you get a third post-hoc option, correct?
Yes, there is 2D, 3D, and Post-hoc HMD 3D
I'm getting some really bad gaze data with that, it's flying back and forth from frame-to-frame
Is that what you were observing before you applied the fix?
correct
So I've tried it multiple times, including without any other external plugins. The output looks like that no matter what I adjust. Is it possible that this is happening because I'm still on 3.2?
However when I switched the setting back to 3D everything looked normal again
Did you try both versions of the plugin? One with and the other without the flip? And both result in the huge error? Did you recalculate both calibration + mapping after switching plugins?
The only difference between our setups that I could imagine would be the version difference from 3.2->3.4
That's not it, for sure
I did remove the flip and try again, and the results were even worse. Both have huge errors but without the flip is the worse of the two. I did recalculate both calibration + mapping for each attempt
ok, this is consistent at least.
This is on the same recording you shared with me, correct? Please share the offline_data folder with me. This way I should have the same post-hoc pupil data etc. as you.
Messaged it to you
Ok, I can confirm that something is def. still going wrong. Only the gaze for the center point is accurate.
1.9° from 94/575 samples. I didn't see the 95/575 samples before.
Ahh, okay
@user-3cff0d I am getting better results (as in: similar to the default headset calib.) when changing this line https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L179
- scales = list(np.linspace(0.7, 10, 5))
+ scales = list(np.linspace(0.7, 100, 5))
Basically, the bundle adjustment tries different z-depth scalings and chooses the best one. 10 seems to be too small if the ref data is manually annotated.
@user-3cff0d @user-b14f98 The real-time hmd plugin is making too many assumptions that are not holding true for 2d annotated reference data. I started making some changes to simplify a post-hoc hmd calib: https://github.com/pupil-labs/pupil/pull/2176
It works the same way as the headset calibration but uses different initial eye translations and ref depths. Unfortunately, it yields less accurate gaze (3.5 instead of 2.9). We will have to investigate why.
The external plugin is no longer necessary.
@papr the 3D target locations are stored in this pickle of a dataframe. The most relevant columns are targetLocalPos (with subcolumns x, y, z) and azEst (with subcolumns az, el). azEst are the corresponding spherical target coordinates.
FYI, each row is a single trial in our assessment recording for this single subject.
We care only about trials where "trialType" is "CalibrationAssessment" and "targetType" is "fixation"
I am mostly interested in understanding if the camera intrinsics saved by Unity are correct. Thanks for sharing the data!
My pleasure. Please let us know how else we can help.
I think it would be beneficial if you could start having a look at the gazer code and combine it with https://gist.github.com/papr/48e98f9b46f6a0794345bd4322bbb607 This example shows how to fit a pye3d two-sphere model to a set of hand-selected pupil data points (e.g. the top 10% confidence pupil data). This gives you more control over the model. Combining it with the gazer code, you can build a reproducible gaze estimation pipeline without running Pupil Player.
@user-3cff0d
@papr Does this use recorded pupils?
Is that the assumption?
But I mean yes, this was primarily meant as a controlled, reproducible post-hoc pipeline.
It uses Pupil Player exported 2d pupil data, but you should be able to adjust convert_pupil_datum_observation to your own pipeline output.
Ok, thanks! We are trying to process N subjects with M pupil segmentation plugins, producing gaze estimates for each. Kevin has run into a roadblock in automating that, specifically in triggering gaze estimation after pupil detection.
So, very useful.
No chance you guys have this pipeline already going in-house and are willing to share?
Or did you put the pieces in place, but haven't yet constructed the full offline pipeline?
It is just a matter of putting the pieces together. Gazers can fit and predict gaze data using pupil data + ref data input
No, we don't have that yet. That is why I was asking you to look into it. This is a piece that I built for @user-d1efa8.
Ok. Thanks again. @user-d1efa8, keep an eye on this thread and let us know if you are able to contribute. Perhaps we can help one another.
That looks great! I remember asking about triggering an export as well, but I think there's a method that does that
Yes, technically there is. But the script above is about extracting the pipeline and not running Player at all.
@papr @user-3cff0d @user-b14f98 yeah will do; I'll try to get something next week for y'all. We've just been recording a lot of data lately, so I haven't had any free time.
@user-d1efa8 What lab are you with?
Angelaki lab at NYU; what about you?
Oh, great. https://www.rit.edu/directory/gjdpci-gabriel-diaz
I am Gabriel Diaz. I've met Dora once or twice, in passing.
I'm Kevin Barkevich, I'm one of Dr. Diaz' grad students c:
Oh wow, that's awesome haha
I'm Josh Calugay, nice to meet y'all
Not a grad student rn, but I will be in October.
Excellent!
Just wondering, do you happen to know Greg DeAngelis? We've been working with him to validate our eye tracking data.
Yes, he's just down the road.
We're both members of URochester's Center for Vision Science (although I am actually faculty at RIT, down the road)
What a small world
Or a tight-knit field!
Indeed, the world of vision science is quite small, and collaborative.
I'm glad it is, because I definitely wouldn't have been able to make it far with this project without help haha