💻 software-dev


user-588603 01 July, 2021, 10:18:38

Is it possible to deactivate just 3d detection at runtime? We want to reduce CPU load while still being able to see if the eyes can be tracked live. We would like to calculate both 2d and 3d detection post-hoc.

user-588603 01 July, 2021, 10:19:53

Thx

user-588603 02 July, 2021, 08:41:13

I added my changes to the ttl.py example you gave earlier; that's working now. I now added the function in line 30 in order to stop the 3d detector plugin, based on the example you linked above. Apparently line 36 never executes; at least I can't see any printout in the command window. Do you have time to assist with this code? I don't understand the structure of the Pupil Labs code, nor Python syntax, very well. Thanks.

ttl.py

papr 02 July, 2021, 09:19:59

The issue is the .order. Plugins are loaded based on this value. And since the order is very low in your case, the pye3d plugin is not loaded yet. The trick is to delay the stop command:

+ DELAYED_STOP_SUBJECT = "TTLTriggerDelayedPye3dStop"
...
- self._stop_Pye3DPlugin_pupil_detectors()
+ self.notify_all({"subject": DELAYED_STOP_SUBJECT, "delay": 0.5})
...
 def on_notify(self, notification):
   ...
+  elif notification["subject"] == DELAYED_STOP_SUBJECT:
+    self._stop_Pye3DPlugin_pupil_detectors()
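
For context, a minimal sketch of how this delayed-notification pattern can look inside a user plugin. It assumes Pupil Capture's Plugin base class; the class name, the trigger condition, and the stop helper below are placeholders for the actual ttl.py logic:

from plugin import Plugin

DELAYED_STOP_SUBJECT = "TTLTriggerDelayedPye3dStop"


class TTLTriggerPlugin(Plugin):
    # A low order value means this plugin loads before the pye3d plugin,
    # which is why the stop command has to be delayed.
    order = 0.02

    def recent_events(self, events):
        if self._should_stop_3d_detection():
            # schedule the stop for 0.5 s later instead of calling it directly
            self.notify_all({"subject": DELAYED_STOP_SUBJECT, "delay": 0.5})

    def on_notify(self, notification):
        if notification["subject"] == DELAYED_STOP_SUBJECT:
            self._stop_Pye3DPlugin_pupil_detectors()

    def _should_stop_3d_detection(self):
        return False  # placeholder for the TTL-triggered condition

    def _stop_Pye3DPlugin_pupil_detectors(self):
        pass  # placeholder: disable pye3d in the eye processes as in the linked example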
papr 02 July, 2021, 09:01:39

A first general tip in regards to plugin development is to use Python's logging mechanism over print statements:

import logging
logger = logging.getLogger(__name__)
...
logger.info("info-level message")

Capture will write these automatically to the log file and display them in the world window.

Edit: One important point is that the logger only processes the first argument, i.e. you should pass a formatted string if you want to log multiple variables, e.g.

- print(a, b, c)
+ logger.info(f"{a} {b} {c}")
user-588603 02 July, 2021, 09:44:20

I was suspecting load order, but I didn't get when this list was populated - thanks a lot. It's working now.

user-588603 06 July, 2021, 11:45:08

We have yet another problem that we can't seem to solve. We use the attached plugin. In line 59 we toggle the TTL for each received frame of eye 0 while self._recording is true. In some recordings we receive more TTL inputs at the TTL recording device than there are frames of eye 0 in "eye0_timestamps.npy". I can explain how one more TTL than npy entry could occur: if we record an uneven number of frames, the TTL line is still high when the recording stops, and stopping the recording resets the TTL line back to 0 (a toggle) although there was no frame. That happens whenever the recording has an uneven number of frames; we can filter that out. Our data, however, shows that there can be an additional 1 to 4 TTL triggers recorded that we cannot match to npy entries (there are simply 4 TTL toggles more than npy entries). By shifting the npy and the received TTL vectors against each other and checking their time values, it becomes clear that the frames which are missing in the npy are the first few frames. I was under the assumption that when "self._recording" is true, every event in line 50 reaches the npy file of the corresponding eye. Is there any possibility that this is not the case? How can I improve the filter condition in line 54 to be sure to only toggle the TTL when the frame will actually reach the npy?

ttl.py

user-588603 06 July, 2021, 11:46:54

If the number of frames in the npy were 1000 and there were 1004 matching triggers, we could just drop the initial 4 TTL triggers, I guess. It's still odd to me how this can happen.

nmt 06 July, 2021, 13:45:50

Hi @user-aadbbd, moving the conversation here. Can you confirm that in your filter_gaze_on_surface.py script run from Psychopy, the key on line 37 is "gaze_on_surfaces" and not "gaze on surfaces"?

user-aadbbd 07 July, 2021, 07:25:09

Yeah, it's "gaze_on_surfaces". I'm not exactly sure how to go about debugging this, as my PsychoPy experiment just immediately shuts down when it encounters this code 😦

papr 07 July, 2021, 07:26:47

If you could provide the full traceback as well as the neighbouring source code of where the error appears, we might be able to help you. Also, are you subscribing to more topics than surface?

user-aadbbd 07 July, 2021, 08:26:40

No, only surface I think. Here is the traceback.

Traceback (most recent call last):
  File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\Uniformity Illusion PsychoPy\UniformityIllusion.py", line 391, in <module>
    if is_looking_at_surface(surface_name):
  File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\Uniformity Illusion PsychoPy\UniformityIllusion.py", line 165, in is_looking_at_surface
    gaze_positions = filtered_surface["gaze_on_surfaces"]
KeyError: 'gaze_on_surfaces'

Here is the function where I do pupil things.

def is_looking_at_surface(surface_name):
    context = zmq.Context()
    # open a req port to talk to pupil
    addr = "127.0.0.1"  # remote ip or localhost
    req_port = "50020"  # same as in the pupil remote gui
    req = context.socket(zmq.REQ)
    req.connect("tcp://{}:{}".format(addr, req_port))
    # ask for the sub port
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    # open a sub port to listen to pupil
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://{}:{}".format(addr, sub_port))
    sub.setsockopt_string(zmq.SUBSCRIBE, "surface")

    topic = sub.recv_string()
    msg = sub.recv()  # bytes
    surfaces = loads(msg, raw=False)
    filtered_surface = {
        k: v for k, v in surfaces.items() if surfaces["name"] == surface_name
    }
    # note that we may have more than one gaze position data point (this is expected behavior)

    gaze_positions = filtered_surface["gaze_on_surfaces"]
    for gaze_pos in gaze_positions:
        norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]

        # only print normalized gaze positions within the surface bounds
        if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
            return True
        else:
            return False
papr 07 July, 2021, 08:29:30

Please try changing the code to

try:
  gaze_positions = filtered_surface["gaze_on_surfaces"]
except KeyError as err:
  raise ValueError(f"surface={filtered_surface}") from err

This will print out the content of filtered_surface in case it does not contain the expected key

user-aadbbd 07 July, 2021, 08:41:30

PsychoPy becomes unresponsive when I run this

user-aadbbd 07 July, 2021, 09:12:05

Interestingly, the PsychoPy Runner constantly prints "[]" when I run the unedited filter_gaze_on_surface.py...

papr 07 July, 2021, 13:49:07

The unedited version of the file only prints something if there is a gaze point within the "unnamed" surface. So I am not sure why it would print []. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py

Generally, it sounds like the setup is not configured correctly yet.

user-588603 07 July, 2021, 12:54:34

I think my stuff solved itself by one of our bugs, sorry.

user-d1efa8 07 July, 2021, 13:44:58

I'm not sure if anyone has had this issue before, and it's not a Pupil-related question (I figured I might ask here anyway), but for some reason when I have my Vive headset on and I run my project in the editor, it seems that time moves at about a 0.5x scale

My fixed timestep is 0.05, and my Vive Cosmos runs at 90 fps, and if I set the timeScale to 200/90 (where 200 Hz corresponds to the 0.05 fixed time step and 90 is the refresh rate of the headset), everything in my project seems to run as normal

user-e242bc 07 July, 2021, 15:13:10

Hi, when I try to do the calibration, only one circle is displayed in the center of the screen, and no other circles show in the corners of the screen afterwards, so I cannot complete the calibration. What should I do?

papr 07 July, 2021, 15:19:23

Are you using the screen marker or the single marker calibration choreography?

user-e242bc 07 July, 2021, 15:19:49

I am using the screen marker.

papr 07 July, 2021, 15:21:22

Then it is likely that your scene camera is not correctly set up. Please make sure the screen is fully visible during the procedure. The marker's center will turn from red to green once the marker is being detected. If the marker is detected for sufficiently long, the choreography will continue with the next marker.

user-e242bc 07 July, 2021, 15:30:36

Thank you! I solved the problem as you said.

papr 07 July, 2021, 15:32:04

In this case, I would also like to note that the correct setup of the eye cameras is crucial for the pupil detection and gaze estimation to work. See https://docs.pupil-labs.com/core/#_1-put-on-pupil-core and https://docs.pupil-labs.com/core/hardware/#headset-adjustments for details.

user-e242bc 07 July, 2021, 15:36:54

Thanks for the information! 👍 I am pretty new to these glasses

user-d1efa8 08 July, 2021, 22:11:26

Is there any way to save/load calibrations for the vive add-on? Also, is there any way to load camera settings (ROI, 3D model, etc.)?

papr 09 July, 2021, 08:07:07

Loading calibrations works by starting the corresponding gazer plugin (via the start_plugin notification) with the correct parameters. These are announced as a notification once the calibration was successful. Use this as an example of how to listen to notifications: https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/filter_messages.py (I recommend disabling line 32)

The topics to look for are notify.calibration.successful and notify.calibration.result.v2.

Regarding camera settings: There is a variety of pupil detector settings you can retrieve and set via the pupil detector network API https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_detector_network_api.py Unfortunately, setting the 3d model is not one of them. 😕
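
To make the first part more concrete, here is a rough sketch of capturing the announced gazer parameters over the network API. It assumes Pupil Remote on the default port 50020; the exact payload keys vary between versions, so print one notification to inspect them before relying on specific fields:

import zmq
import msgpack

ctx = zmq.Context.instance()

# Ask Pupil Remote for the SUB port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to the calibration notifications mentioned above
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration.successful")
sub.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration.result.v2")

while True:
    topic = sub.recv_string()
    payload = msgpack.unpackb(sub.recv(), raw=False)
    print(topic, payload)  # inspect the announced gazer name and parameters
    if topic.startswith("notify.calibration.result"):
        saved_result = payload  # persist this; replay it later via a start_plugin notification
        break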

user-d1efa8 10 July, 2021, 18:28:46

sounds good, thank you!!

papr 09 July, 2021, 08:14:20

@user-28c35b Following up on https://discord.com/channels/285728493612957698/285728493612957698/862969137332092959

The issue is that the network API does not guarantee you an even sampling rate. You will need to buffer the samples and do some kind of matching.

papr 09 July, 2021, 08:15:42

Pupil Capture does that already for gaze data. So if you are looking for binocular pairs, I suggest subscribing to gaze (requires a valid calibration).

papr 09 July, 2021, 08:16:18

Generally, what you are trying to do is much easier when done post-hoc. Are you 100% sure you need to calculate this in real-time?

user-28c35b 09 July, 2021, 08:17:51

Yes, I am trying to calculate some metric (the Low/High Index of Pupillary Activity) in real time; I need the diameter from both pupils for this.

papr 09 July, 2021, 08:20:21

Ok. In this case, I recommend subscribing to pupil.0.3d, pupil.1.3d, and gaze. Buffer the pupil data in a dictionary with (<datum timestamp>, <datum eye id>) as keys. When receiving a binocular gaze datum, use its base_data field to check which pupil datums were used and look them up in your pupil buffer. Don't forget to evict old data from the buffer as it grows quite quickly.
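
A rough sketch of that buffering and matching idea (not a drop-in script). It assumes messages already arrive as decoded (topic, payload) pairs, e.g. via the subscription pattern shown elsewhere in this channel, and that the pye3d detector is active so diameter_3d is present:

from collections import OrderedDict

pupil_buffer = OrderedDict()  # (timestamp, eye id) -> pupil datum
MAX_BUFFER = 600  # evict old entries; the buffer grows quickly at ~120 Hz per eye


def process_binocular_pair(diameter_eye0, diameter_eye1):
    print(diameter_eye0, diameter_eye1)  # placeholder for the LHIPA computation


def handle_message(topic, payload):
    if topic.startswith("pupil."):
        pupil_buffer[(payload["timestamp"], payload["id"])] = payload
        while len(pupil_buffer) > MAX_BUFFER:
            pupil_buffer.popitem(last=False)  # drop the oldest entry
    elif topic.startswith("gaze.") and len(payload["base_data"]) == 2:
        # binocular gaze datum: look up the two pupil datums it was built from
        diameters = {}
        for base in payload["base_data"]:
            datum = pupil_buffer.get((base["timestamp"], base["id"]), base)
            diameters[base["id"]] = datum.get("diameter_3d")
        process_binocular_pair(diameters.get(0), diameters.get(1))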

user-28c35b 09 July, 2021, 08:21:28

Thank you for the help, I'll try to implement it this way. 👍

user-10631a 14 July, 2021, 07:54:06

Hi, I have a Pupil Core and I need to dynamically modify the minimum duration of the fixation detector; this parameter will depend on the person who will use the software. Is there a way to set it directly in Python, using the notification system or publishing on certain topics?

papr 14 July, 2021, 07:56:06

Yes, it is possible by sending a start_plugin notification. Do you know how it is structured?

user-10631a 14 July, 2021, 07:59:50

Yes, is it like this one? {"subject": "start_plugin", "name": "...", "args": {...}}

papr 14 July, 2021, 08:04:05
{..., "name": "Fixation_Detector", "args":{"min_duration": min_duration_value}}
user-10631a 14 July, 2021, 08:34:25

Everything works fine, thank you for the help!!

user-10631a 14 July, 2021, 08:06:18

I will try and I'll let you know, thanks so much!!

user-aadbbd 14 July, 2021, 11:26:42

How do I create a remote annotation based on a keypress?

papr 14 July, 2021, 11:28:51

You start by taking this remote annotations example https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py and extend it to send annotations as soon as the user presses a key. (How to capture a keypress is highly dependent on the framework that you are using)
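
As a rough illustration under PsychoPy (other frameworks will differ), assuming pub_socket, new_trigger, send_trigger, and a Pupil-synchronized time_fn are set up as in the linked example:

from psychopy import event

keys = event.getKeys(keyList=["space"])  # poll for a keypress, e.g. once per frame
if keys:
    annotation = new_trigger("keypress", 0.0, time_fn())
    send_trigger(pub_socket, annotation)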

user-aadbbd 14 July, 2021, 11:30:10

Thanks papr!

user-b14f98 14 July, 2021, 18:55:46

@papr This model confidence metric is a really timely update.

papr 14 July, 2021, 18:59:12

I do not know if metric is the right word 😄 It is very rough but better than nothing.

user-b14f98 14 July, 2021, 18:56:07

In the next few days, we're going to hit you with a doc detailing our efforts to turn our segmentation network into a Pupil Labs plugin.

papr 14 July, 2021, 18:58:43

Looking forward to that!

user-b14f98 14 July, 2021, 18:56:40

Sneak-preview: A fixation-based assessment (9-point grid in VR) shows that it is very robust, very precise, but not very accurate.

user-b14f98 14 July, 2021, 18:56:58

We have spent long enough debugging that we feel we might benefit from some input from the PL team, if they can offer it.

user-b14f98 14 July, 2021, 18:57:08

All our efforts will be open source.

user-b14f98 14 July, 2021, 18:57:41

Thanks 🙂

user-b14f98 14 July, 2021, 18:59:45

Haha. Yes, we know. In the end, it's a very complicated measure of "how elliptical is your mask?"

papr 14 July, 2021, 19:00:44

Wait, are we talking about the same thing?

user-b14f98 14 July, 2021, 18:59:55

I know, a bit more complicated than that, but that captures the spirit of it.

user-b14f98 14 July, 2021, 19:00:21

@user-3cff0d should share his diagram of it with you - he spent quite a bit of time wrapping his head around the algorithm.

user-b14f98 14 July, 2021, 19:00:55

Oh, oh, sorry, no - and that's my fault.

user-b14f98 14 July, 2021, 19:00:56

Nevermind.

user-b14f98 14 July, 2021, 19:00:59

🙂 🙂

user-b14f98 14 July, 2021, 19:01:22

Yes, looking forward to playing with the new p3d model confidence metric (I was confusing it with the pupil confidence metric for a moment)

user-b14f98 14 July, 2021, 19:01:24

Long day.

papr 14 July, 2021, 19:01:37

Ah ok. No worries. 🙂 Feedback is welcome.

user-3cff0d 14 July, 2021, 20:50:20

@papr Here is the flowchart in question. Does this line up with your understanding of the 2d detector's confidence algorithm? Please forgive me if some of the function names are inaccurate, they were based on a recreation.

Chat image

papr 14 July, 2021, 20:55:28

I must admit that I do not know the algorithm in this much detail. I will put the review of the flowchart on my backlog though. Afterward, it would be pretty neat if you were ok with putting this on https://github.com/pupil-labs/pupil-detectors/

user-3cff0d 14 July, 2021, 20:59:56

Sure, I would be okay with it

user-aadbbd 15 July, 2021, 12:46:18

When I create a remote annotation, it does not load in Pupil Player (annotations.pldata is empty). Below is my relevant code, but it's largely just pupil helper code. I don't get any error messages either when running the program.

#set pupil time to psychopy time
pupil_time = core.Clock()
time_fn = pupil_time.getTime()
req.send_string("T" + str(time_fn))
print(req.recv_string())
# Ensure that relative paths start from the same directory as this script
_thisDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(_thisDir)

def notify(pupil_remote, notification):
    """Sends ``notification`` to Pupil Remote"""
    topic = "notify." + notification["subject"]
    payload = serializer.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()


def send_trigger(pub_socket, trigger):
    """Sends annotation via PUB port"""
    payload = serializer.dumps(trigger, use_bin_type=True)
    pub_socket.send_string(trigger["topic"], flags=zmq.SNDMORE)
    pub_socket.send(payload)


def new_trigger(label, duration, timestamp):
    """Creates a new trigger/annotation to send to Pupil Capture"""
    return {
        "topic": "annotation",
        "label": label,
        "timestamp": time_fn,
        "duration": duration,
    }

# Start the annotations plugin
notify(
        req,
        {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}},
    )

#req.send_string('C')

#start recording
req.send_string('R')
req.recv_string()


label = "start experiment"
duration = 0.1
minimal_trigger = new_trigger(label, duration, pupil_time)
send_trigger(pub_socket, minimal_trigger)
sleep(1)  # sleep for a few seconds, can be less
papr 15 July, 2021, 12:56:27

Also, when using our latest Capture release, you should see a log message in the world window confirming the reception of the timestamp and displaying its age

papr 15 July, 2021, 12:54:26

This code follows the old time sync convention using the T command. I highly recommend switching to our new example https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py

Independently, I think this line needs changing:

- time_fn = pupil_time.getTime()
+ time_fn = pupil_time.getTime
- req.send_string("T" + str(time_fn))
+ req.send_string("T" + str(time_fn()))
...
-        "timestamp": time_fn,
+        "timestamp": timestamp,
...
- minimal_trigger = new_trigger(label, duration, pupil_time)
+ minimal_trigger = new_trigger(label, duration, pupil_time())

time_fn needs to be a function that can be called over and over again. In your example, it is a measured timestamp instead. As a result, all your annotations get the same timestamp assigned. Instead, you want to call the time function when you want to measure time. The linked example above should make this more clear as well.
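
A tiny illustration of that difference, assuming PsychoPy's core.Clock:

from psychopy import core

clock = core.Clock()
time_fn = clock.getTime()  # wrong here: a single float, measured once and reused
time_fn = clock.getTime    # a reference to the function; call time_fn() per annotation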

user-aadbbd 15 July, 2021, 13:00:38

The problem is that when I denote it as a function, it throws an error. Perhaps because I'm trying to use core.Clock() as opposed to local_clock()? Although I'm not sure what the differences are here. I'm just trying to sync the PsychoPy time with Pupil Time so perhaps local_clock also works for that.

Traceback (most recent call last):
  File "C:\Users\hesselmann\Desktop\PsychoPyUniformityIllusion\UniformityIllusionPsychoPy\UniformityIllusion.py", line 113, in <module>
    minimal_trigger = new_trigger(label, duration, pupil_time())
TypeError: 'Clock' object is not callable
papr 15 July, 2021, 13:04:25

My bad!

- minimal_trigger = new_trigger(label, duration, pupil_time)
+ minimal_trigger = new_trigger(label, duration, time_fn())

time_fn() instead of pupil_time(). The naming of the latter is not quite good as it is actually a local clock instance which has a getTime() function

user-aadbbd 15 July, 2021, 13:07:05

Okay this runs but the annotation.pldata file is still empty 😦

papr 15 July, 2021, 13:09:29

Are you getting the confirmation that you have received an annotation?

user-aadbbd 15 July, 2021, 13:13:07

I don't see anything on the World window

papr 15 July, 2021, 13:14:43

Can you confirm that the annotation plugin and recording are started?

user-aadbbd 15 July, 2021, 13:15:15

The recording definitely starts but I'm not sure how to check if the plugin is starting properly

papr 15 July, 2021, 13:15:33

The plugin has a menu entry on the right. If the menu is present, the plugin is running

user-aadbbd 15 July, 2021, 13:16:21

Yes, I tried turning it off, and it turns back on when I run my experiment code

papr 15 July, 2021, 13:16:57

What is your code for pub_socket?

user-aadbbd 15 July, 2021, 13:24:10

Okay it works, I just had to move the code somewhere else in my experiment. Thanks so much for hand holding me through this debugging!

user-aadbbd 15 July, 2021, 13:17:46
import zmq
from msgpack import loads
import msgpack as serializer
from time import sleep, time

context = zmq.Context()
# open a req port to talk to pupil
addr = "127.0.0.1"  # remote ip or localhost
req_port = "50020"  # same as in the pupil remote gui
req = context.socket(zmq.REQ)
req.connect("tcp://{}:{}".format(addr, req_port))

# ask for the sub port
req.send_string("SUB_PORT")
sub_port = req.recv_string()


# PUB socket
req.send_string("PUB_PORT")
pub_port = req.recv_string()
pub_socket = zmq.Socket(context, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))

# open a sub port to listen to pupil
sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")
papr 15 July, 2021, 13:25:19

Again, if you have the time, I highly recommend switching the time sync strategy

user-aadbbd 15 July, 2021, 13:27:01

So use this instead? So that it accounts for latency?

# 2. Setup local clock function
local_clock = time.perf_counter

# 3. Measure clock offset accounting for network latency
stable_offset_mean = measure_clock_offset_stable(
    pupil_remote, clock_function=local_clock, n_samples=10
)

pupil_time_actual = request_pupil_time(pupil_remote)
local_time_actual = local_clock()
pupil_time_calculated_locally = local_time_actual + stable_offset_mean
print(f"Pupil time actual: {pupil_time_actual}")
print(f"Local time actual: {local_time_actual}")
print(f"Stable offset: {stable_offset_mean}")
print(f"Pupil time (calculated locally): {pupil_time_calculated_locally}")
papr 15 July, 2021, 13:30:34

It is more than just copying some code from the example, I am afraid. Please try to understand what the code is doing such that you can apply it correctly to your experiment. See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py for more context.

local_clock can be time_fn / core.Clock().getTime
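
For reference, a condensed sketch of what the linked example does to measure the offset, assuming pupil_remote is a connected REQ socket (port 50020) and that the "t" command returns the current Pupil time:

import time

def request_pupil_time(pupil_remote):
    pupil_remote.send_string("t")
    return float(pupil_remote.recv_string())

def measure_clock_offset(pupil_remote, clock_function):
    # sample Pupil time between two local measurements; using the midpoint
    # cancels the network round-trip latency to first order
    local_before = clock_function()
    pupil_time = request_pupil_time(pupil_remote)
    local_after = clock_function()
    return pupil_time - (local_before + local_after) / 2.0

local_clock = time.perf_counter  # or core.Clock().getTime
offset = measure_clock_offset(pupil_remote, local_clock)
annotation_timestamp = local_clock() + offset  # Pupil time, computed locally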

user-aadbbd 15 July, 2021, 13:33:16

Ah yes, understanding is always good... thanks again for all your help today papr!

user-3cff0d 15 July, 2021, 16:53:47

Is there a way to instruct Player to perform its gaze detection, and instruct it to perform an export (like an 'e' press) through the Network API?

papr 15 July, 2021, 17:30:40

Player does not have a network api. You would need to write a script that calls the gaze estimation code directly. Base class api: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_base.py#L366

user-aadbbd 19 July, 2021, 08:59:50

Is it a common issue to have one screen go dark after calibrating when using a dual monitor setup? Specifically, I have Pupil running on one screen, and then calibration dots will appear on the other. Then, the screen that has the calibration dots turns black after calibration is complete.

user-5e6759 19 July, 2021, 10:04:45

Hi @papr, may I ask if there is a way to obtain the location of the surface marker relative to the frame captured by the world camera?

papr 19 July, 2021, 10:16:40

Please see this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb

user-5e6759 21 July, 2021, 03:02:51

Hi papr, sorry for bothering you again. Is the field 'surf_to_dist_img_trans' only available when exporting the recorded file? I was trying to access it in real time in a plugin but couldn't find it.

user-5e6759 19 July, 2021, 10:54:22

Thanks a lot!

user-5e6759 21 July, 2021, 06:38:32

I've figured it out, but I have to modify the surface_tracker.py

papr 21 July, 2021, 07:20:25

Hi, what part needs to be modified? Edit: Probably this part https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface_tracker.py#L488-L496

user-5e6759 21 July, 2021, 07:42:39

yes, add

"surf_to_dist_img_trans": surface.surf_to_dist_img_trans.tolist(),

after line 491

papr 21 July, 2021, 07:43:19

We can include this in our next release

user-5e6759 21 July, 2021, 08:00:53

That will be nice. Thanks!

papr 21 July, 2021, 13:26:02

See https://github.com/pupil-labs/pupil/pull/2168

user-53365f 21 July, 2021, 09:37:48

Hi all. I'm hoping you can help. I'm looking for a hardware-software combination that can give me realtime values of independent left and right pupil positions. The context here is that I have strabismus and would like to develop (for myself) a feedback loop that allows me to determine, during an eye exercise, if I've correctly aligned my eyes.

A provisional poke around various webcam-based open source solutions suggests they're not accurate enough and could give a lot of false positives.

Is this kind of thing viable with Pupil? A superficial scan of the docs suggests (at least from an API perspective) "yes" but I'm wondering if the system as a whole would be good at this task?

Edit: note that this requirement means I need to determine the degree of deviation of both pupils from where they "should" be. I suspect this presents a problem in that heads aren't symmetric and the real-world target I'm testing my eyes against may not be perfectly aligned. So chances are fundamentally this isn't practical outside VR.

user-10631a 21 July, 2021, 09:41:14

Hi, I'm developing an application in Unity and I'm using C#. I imported the packages and tested the demo scenes following your documentation and everything works. I also tried the connection with Pupil Capture and, following some examples, I was able to create some test scenes. Now I need to read data related to fixations on surfaces, but I have a problem with the data structure. void receiveCustomData(string topic, Dictionary<string, object> dictionary, byte[] thirdFrame = null) Inside this function I was able to read dictionary["fixations_on_surfaces"], but then how can I access the dictionary inside? The one with "topic", "normpos", "confidence", etc. I can easily read the other data like "name" or "timestamp". Using Python I had no problem.

papr 21 July, 2021, 09:44:55

The challenge is that you need to perform a calibration in order to get comparable gaze direction estimates from both eyes. And the default calibration assumes that you are able to fixate the calibration target with both eyes at the same time. You would need to develop a calibration choreography plugin that supports collecting eye data from one eye at a time.

@user-a09f5d has been doing work/research in this regard. Maybe they can share their experiences.

user-53365f 21 July, 2021, 09:47:54

I had a suspicion that might be the case. I can fixate on a target with both eyes for a short period of time which could help. I'll check out the docs to see if I can get a feel for the calibration process and whether that can work.

Assuming the calibration step is fine, is the next step (fetching eye data for each eye) then viable? (Real-world limitations notwithstanding)

user-a09f5d 14 December, 2021, 18:58:10

Hi @user-53365f Sorry, I haven't been on discord for many months so didn't see this till now. I have responded to your direct message.

user-10631a 21 July, 2021, 09:45:22

I checked the data structure from here: https://docs.pupil-labs.com/developer/core/overview/#surface-datum-format

papr 21 July, 2021, 09:51:26

If you use these dual-monocular gaze estimators [1], you can receive gaze direction estimates for both eyes independently via our real-time network API (some delay due to data transfer included of course)

[1] https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350

user-53365f 21 July, 2021, 09:53:28

Fantastic. I've also just checked the calibration docs and that looks viable too. I also get the point of the world camera now!

papr 21 July, 2021, 09:54:46

Could you confirm that you only need help with understanding the inner structure of this dictionary? Or do you also need help accessing it using c# syntax?

user-10631a 21 July, 2021, 10:00:00

Yes, the first one. I tried casting this System.object to list, tuple and dictionary, but it doesn't cast. In Python I can easily read it this way: dictionary["fixations_on_surfaces"][0]["norm_pos"]

papr 21 July, 2021, 09:58:47

I also have a more general question: Are you performing eye tracking in a VR or an AR environment? And if yes, which of the two?

user-10631a 21 July, 2021, 10:01:46

The application isn't in VR/AR. I'm using pupil core

papr 21 July, 2021, 10:01:37

dictionary["fixations_on_surfaces"] is a list/array. Unfortunately, since I do not know c#, I cannot tell you why this would not cast.

user-10631a 21 July, 2021, 10:04:39

Thanks anyway, I'll try to find a way and I'll post it here, it might be useful to someone else

user-10631a 22 July, 2021, 08:53:54

Hi, I was thinking, is there a way to subscribe to a topic within a topic? For example, I have a surface to track which is called "test". I am interested in the data only for the fixations on this surface. To read all the data I subscribe to the topic "surfaces.test". Is there a way to subscribe directly to the "surfaces.test.fixation_on_surfaces" topic?

papr 22 July, 2021, 08:54:37

Unfortunately, with the current implementation, this is not possible.

user-b62d99 22 July, 2021, 18:31:12

Hi, @papr, here's the code we used to open, edit, and save the .pldata file. In this code, we're saving the edited pldata to a new file

Chat image

papr 27 July, 2021, 15:14:15

On first sight, I do not see any errors. I think it could be written more efficiently, though. I will write a reference implementation tomorrow.

user-28c35b 23 July, 2021, 10:54:13

Hi, I am using Core, and I get the data by subscribing to gaze. I need pupil size and blinks; I can get the pupil size with no problem, but I am not getting the blink data, even though I have turned on the setting for blinks in the Pupil Capture software. Is there something I am doing wrong? Or maybe I have to subscribe to another topic?

papr 23 July, 2021, 10:55:00

You need to subscribe to blink 🙂
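
For example, assuming the same context, addr, sub_port, and msgpack loads setup as in the earlier snippets in this channel:

sub = context.socket(zmq.SUB)
sub.connect("tcp://{}:{}".format(addr, sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "blink")

topic = sub.recv_string()
blink_datum = loads(sub.recv(), raw=False)  # blink events, e.g. onset/offset with confidence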

user-28c35b 23 July, 2021, 10:56:08

Thank you! I didn't find it in the docs, maybe I wasn't looking in the right place 🙂

user-b62d99 27 July, 2021, 15:09:36

@papr Hi, if you have any suggestions for getting this to function as it should, please let us know!

papr 27 July, 2021, 15:11:33

Apologies, somehow I missed your message.

user-b25f65 27 July, 2021, 15:10:25

Hello, is there any documentation for pye3d? I have looked everywhere but maybe I am missing something

papr 27 July, 2021, 15:11:10

https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-pupil-detection

user-b25f65 27 July, 2021, 15:14:59

Thanks! But are there more specific API docs for pye3d then? I have read this, and I think it is more like an explanatory blog post rather than a descriptive doc for the functions included

papr 27 July, 2021, 15:15:57

Unfortunately, not, no. The entry point for reading the source code would be: https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L102

papr 27 July, 2021, 15:17:13

This implements the detector as described in the link above. If you are looking for the actual model fitting code, look here https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/eye_model/base.py#L33

user-b25f65 27 July, 2021, 15:23:15

Thank you! This would suffice for my usage for now I guess

papr 27 July, 2021, 15:23:53

Let me know if you have questions about the source code

user-994e20 28 July, 2021, 11:08:52

Hello, I'm using the Pupil Core eye tracker and would like to use the blink_detector Pupil Labs offers. Is it possible to export the blink data, as visible in Pupil Player, in a csv file when exporting? Thanks in advance!

user-63150b 28 July, 2021, 11:20:04

Hello! Is there a C++ library for Pupil Core (mono) eye tracker?

papr 28 July, 2021, 11:21:44

Hi, there are no native c++ libraries to access the eye tracker directly. Please refer to our network api instead https://docs.pupil-labs.com/developer/core/network-api/

papr 28 July, 2021, 11:20:46

Are you aware of the existing blink detector export data? Does it not provide what you need?

user-994e20 28 July, 2021, 12:44:31

Thanks for your reply. I should specify my need. I am using a surface data csv export and would like to add a column specifying blink = yes or no for each row. I have the blink detector export. However, the timestamps do not correspond with the timestamps in the surface export. I think I am doing something wrong. Does this clarify my question?

user-63150b 28 July, 2021, 11:23:24

This is with Python; I actually need to integrate the eye tracker into my C++ project. I see some random projects; however, they are not working.

papr 28 July, 2021, 11:24:23

There are c++ implementations for zmq and msgpack which you can use to connect to the network API. In any case, you will have to run the Pupil Core application in parallel.

user-63150b 28 July, 2021, 11:25:12

Thanks. I will try then.

papr 28 July, 2021, 12:45:40

It does clarify the question, thank you!

"However the timestamps do not correspond with the timestamps in the surface export." Are you referring to the fact that they do not match exactly or that they are off completely?

user-994e20 28 July, 2021, 12:48:00

They do not match exactly indeed

user-994e20 28 July, 2021, 13:01:50

Is that normal? I figured the timestamps should match the gaze_timestamps in the surface export

papr 28 July, 2021, 13:02:26

Blinks are calculated based on pupil data, not gaze. This is why it does not match exactly.

papr 28 July, 2021, 13:03:33

You will have to assign the blink yes/no value for each surface-mapped gaze based on the condition if its timestamp falls into any of the blinks periods or not (blink_start_ts <= gaze_ts <= blink_end_ts)
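
A small pandas sketch of that assignment, assuming the standard Player exports are loaded as DataFrames (the file and column names below follow the usual export layout but should be double-checked against your own files):

import pandas as pd

surface_gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")  # example surface name
blinks = pd.read_csv("blinks.csv")

def in_any_blink(gaze_ts):
    within = (blinks["start_timestamp"] <= gaze_ts) & (gaze_ts <= blinks["end_timestamp"])
    return within.any()

surface_gaze["blink"] = surface_gaze["gaze_timestamp"].apply(in_any_blink)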

user-994e20 28 July, 2021, 13:04:39

Okay thanks a lot for the extra information!

user-b62d99 28 July, 2021, 13:25:58

sounds good, thank you!

user-aadbbd 29 July, 2021, 12:02:34

Hi papr, my monitor turns off after calibration. Is there a way to fix this?

papr 29 July, 2021, 12:07:06

Turns off as in "physically turns off" or "displays a black image"?

user-aadbbd 29 July, 2021, 12:07:24

displays a black image. This only occurs when I have a dual monitor setup and they're displaying different things

user-aadbbd 29 July, 2021, 12:07:47

The screen that the calibration appears on goes black. And then, when I close pupil, it opens again..

user-aadbbd 29 July, 2021, 12:08:04

But this doesn't occur when I duplicate the displays and display the same information on both of them

papr 29 July, 2021, 12:09:55

https://github.com/pupil-labs/pupil/issues/2143#issue-894510320 Please use window instead of fullscreen mode to workaround this issue.

user-aadbbd 29 July, 2021, 12:21:09

Thanks! Is this in settings somewhere? I can't seem to find it..

papr 29 July, 2021, 12:22:21

Yes, go to the calibration settings and disable the "use fullscreen" toggle

user-aadbbd 29 July, 2021, 12:22:50

Awesome. Thanks again, papr!

user-aadbbd 29 July, 2021, 13:32:57

I'm trying to end a trial in PsychoPy if a participant looks away from a specified surface (usually the monitor). However, it is often the case that the gaze will obviously not be on the monitor, as shown in the Pupil window, but this is not reflected in PsychoPy. I am wondering if I am using norm_gp_x and norm_gp_y wrong, and if they are only defined when the gaze is on the surface... Is there a better way to tell if the gaze is on the surface?

#read from remote pupil data and check if they looking at the surface
        topic = sub.recv_string()
        msg = sub.recv()  # bytes
        surfaces = loads(msg, raw=False)
        filtered_surface = {
            k: v for k, v in surfaces.items() if surfaces["name"] == surface_name
        }

        is_gaze_on_surface = True

        try:
            # note that we may have more than one gaze position data point (this is expected behavior)
            gaze_positions = filtered_surface["gaze_on_surfaces"]
            for gaze_pos in gaze_positions:
                norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]

                # only print normalized gaze positions within the surface bounds
                if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
                    is_gaze_on_surface = True
                else:
                    is_gaze_on_surface = False
                    print(norm_gp_x, norm_gp_y)
        except:
            pass
        if is_gaze_on_surface:
            continueRoutine = True
        else:
            continueRoutine = False #end the routine if they look away
papr 29 July, 2021, 13:36:48

I think the idea is correct, but the implementation can be improved:
  1. Filter by confidence. Gaze with confidence < 0.6 can be inaccurate.
  2. In your implementation, only the last gaze item decides if the routine is stopped or not. You might want to break out of the loop as soon as is_gaze_on_surface is true for the first time.
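
Applied to the loop above, the two suggestions could look roughly like this (the confidence key and the 0.6 threshold follow the advice above; everything else is taken from the original snippet):

is_gaze_on_surface = False
gaze_positions = filtered_surface.get("gaze_on_surfaces", [])
for gaze_pos in gaze_positions:
    if gaze_pos["confidence"] < 0.6:
        continue  # ignore low-confidence gaze
    norm_gp_x, norm_gp_y = gaze_pos["norm_pos"]
    if 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1:
        is_gaze_on_surface = True
        break  # one on-surface sample is enough; stop checking the rest

continueRoutine = is_gaze_on_surface  # end the routine if no gaze was on the surface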

user-aadbbd 29 July, 2021, 13:41:03

Is there documentation on the syntax for gaze confidence? I am guessing gaze_pos['confidence']? Having a hard time finding this on the website

user-b14f98 29 July, 2021, 15:25:05

@papr, in your recent response to @user-3cff0d, you suggested that we could reduce error by creating a custom gaze class with a modified scene camera location. Two questions: 1) could you help us find the specific variables you are describing, and 2) if this produces successful recalibrations using our modified pupil detection method, would your team consider making those variables transparent in some way through Pupil Player?

papr 29 July, 2021, 15:26:38

Correction: The calibration needs to correct for the assumed eye ball location, not the scene camera (even though this is pretty much equivalent in the end, the former is more or less implemented already). 1) yes 2) yes

user-b14f98 29 July, 2021, 15:27:25

This is why I love pupil labs. Thanks @papr !

user-b14f98 29 July, 2021, 15:28:18

If you have any quick pointers to get @user-3cff0d headed in the right direction, they would be much appreciated. He's quite capable.

papr 29 July, 2021, 16:23:15

ok, there are some unforeseen hurdles which I should be able to solve by tomorrow

papr 29 July, 2021, 15:28:51

I am nearly done with the custom plugin. Will come back soon.

user-b14f98 29 July, 2021, 15:29:04

You are a king.

user-3cff0d 29 July, 2021, 15:29:09

Thanks so much!!

papr 29 July, 2021, 15:39:58

@user-3cff0d Could you please share an example VR recording with data@pupil-labs.com that includes reference locations already? I would give this a quick test before sharing.

Edit: Never mind, I roughly annotated a small recording by myself 🙂

papr 29 July, 2021, 16:54:53

@user-3cff0d I might need to come back to this request. I am seeing some weird behavior with my recording and I am not sure if it is due to its age 😅

user-3cff0d 29 July, 2021, 15:53:47

👍

user-b62d99 29 July, 2021, 16:00:29

Hi @papr, for the line of code In [20] in the link you sent, are we supposed to serialize datum ourselves, or is serialized an attribute of datum? We keep receiving an error on that line.

papr 29 July, 2021, 16:06:09

.serialized is an attribute of Serialized_Dict https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L281 If you are using normal dicts, use writer.append(datum)

user-b62d99 29 July, 2021, 16:11:25

Okay great, we got it working. We have successfully edited the .npy, .pldata, .json, and .mp4 files. Is editing the .size, .time, .raw, .intrinsics, .mjpeg, .time_aux,

user-b62d99 29 July, 2021, 16:11:58

and .bin files necessary for the purposes of our project?

papr 29 July, 2021, 16:12:33

Oh, you are editing pupil invisible recordings?

user-b62d99 29 July, 2021, 16:12:53

Yes, we are

papr 29 July, 2021, 16:13:18

Do you still have access to the originals, before having them opened in Pupil Player for the first time?

user-b62d99 29 July, 2021, 16:15:05

We don't at the moment, but we can get access, would this make it easier?

papr 29 July, 2021, 16:16:03

Yes. In this case you would only need to slice the original files, and Player would take care of generating the correct pldata files. This is the PI recording format https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793

user-b62d99 29 July, 2021, 16:18:46

Okay, do you have any advice on editing the other files that we listed?

papr 29 July, 2021, 16:21:06

raw and time are just binary files that can be easily read and written with numpy.fromfile and numpy.ndarray.tofile. For json, I would only touch the info one to adjust start_time and duration. intrinsics is generated by Player and does not need adjustments. mjpeg requires the same handling as mp4. I would not touch time_aux.

user-b62d99 29 July, 2021, 16:23:29

Thanks! We appreciate all your help

papr 29 July, 2021, 16:21:31

Please note that PI uses nanoseconds since Unix epoch as time.
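
For the .time files specifically, slicing could look like this with numpy; the little-endian uint64 dtype, the file name, and the cut points are assumptions to adapt to the actual recording:

import numpy as np

start_ns, end_ns = 1627500000000000000, 1627500060000000000  # example cut points (ns since Unix epoch)

timestamps = np.fromfile("PI world v1 ps1.time", dtype="<u8")
keep = (timestamps >= start_ns) & (timestamps <= end_ns)
timestamps[keep].tofile("PI world v1 ps1.time")  # write back the sliced copy (work on a copy of the recording)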

user-3cff0d 29 July, 2021, 16:55:21

Sure thing, I'll send one over now

user-3cff0d 29 July, 2021, 17:04:30

Sent!

papr 29 July, 2021, 17:23:26

Also it looks like the recorded data does not contain pupil data during the annotated period? When using post-hoc pupil detection, I am not able to fit a very good eye model because there is not sufficient eye movement at the beginning. (I used the eye intrinsics from the release)

papr 29 July, 2021, 17:14:17

@user-b14f98 @user-3cff0d Not sure if you are aware, but this recording was recorded with 2.1 and eye intrinsics were introduced in https://github.com/pupil-labs/pupil/releases/tag/v2.4 Have you downloaded the intrinsics from the release page? See the caveat for 200Hz Vive Add-on Recordings for more information.

user-b14f98 29 July, 2021, 17:22:55

@papr Thanks for reminding us. I'll have to take a look, but there's a chance we have not.

user-3cff0d 29 July, 2021, 17:25:14

Do you want me to do the pupil detection myself and send over the results?

papr 29 July, 2021, 17:27:58

No worries, it will be sufficient for what I am trying to do (compare headset 3d vs hmd 3d gazer). But this has an effect on the gaze estimation nonetheless. The 2d pupil detection is robust enough. The issue is that the 3d eye model cannot be triangulated well enough because the eye orientations are not diverse enough. At least it looks to me like this is the case.

user-3cff0d 29 July, 2021, 17:26:00

It would be using our plugin, so the 3d model would likely be better due to the more robust pupil detection in the vr headset.

papr 29 July, 2021, 17:30:48

Do you do post-hoc pupil detection in Player or with an external script?

user-3cff0d 29 July, 2021, 17:29:48

Gotcha, that would be a correctable problem for future data collection

user-3cff0d 29 July, 2021, 17:31:33

We do post-hoc pupil detection in Player, with our plugin installed

papr 29 July, 2021, 17:34:58

With or without your plugin, I recommend freezing the model once it is fit well, and then restarting the detection to apply the frozen model to the whole recording. This is especially important in 3.4 because it has an impact on which model (short vs longterm) is chosen for the gaze estimation. You seem to have a fairly stable setup, justifying the assumption of "no slippage".

user-3cff0d 29 July, 2021, 17:37:00

We've tried that in the past with not-so-great results, but that might have just been because we froze the model at a bad time. We'll give it another shot. Also, we're currently still on version 3.2. I tried to update to 3.4 and got a strange versioning error with one of the dependencies, where it seemed like 2 files required different versions of the same package or something like that

papr 29 July, 2021, 17:38:00

Feel free to share the error. I might be able to help resolve that

papr 29 July, 2021, 17:40:37

@user-3cff0d @user-b14f98 https://gist.github.com/papr/e57f55ddd02245bbfd99e072b4273fdf this is the current version. It runs without error, but the resulting gaze is all over the place (for your recording as well as mine). Something must still be going wrong very badly. I would appreciate your help with debugging this.

papr 29 July, 2021, 17:41:27

Install the plugin into your Pupil Player plugin folder and it should appear as a gazer option in the post-hoc calibration menu (next to 2d and 3d)

user-3cff0d 29 July, 2021, 17:43:48
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "...\pupil-master\pupil_src\launchables\player.py", line 843, in player_drop
    from gl_utils import GLFWErrorReporting
  File "...\pupil-master\pupil_src\shared_modules\gl_utils\__init__.py", line 11, in <module>
    from .draw import draw_circle_filled_func_builder
  File "...\pupil-master\pupil_src\shared_modules\gl_utils\draw.py", line 16, in <module>
    from pyglui.cygl.utils import RGBA
ImportError: DLL load failed: %1 is not a valid Win32 application.
papr 29 July, 2021, 17:49:12

ok, this is an issue with the pyglui wheel. Not sure what causes the error, but you might still have all the required dependencies to build from source: pip install --no-binary pyglui --force-reinstall pyglui

user-3cff0d 29 July, 2021, 17:45:30

I'll give this a shot and get back to you

papr 29 July, 2021, 18:09:34

I am not sure if the dummy eye translation values in the plugin are correct. Do you have a hmd-eyes setup that you can use for a quick test?

papr 29 July, 2021, 17:54:07

I will be on vacation for two weeks, starting tomorrow. I will probably not be able to respond frequently in the meantime.

user-b14f98 29 July, 2021, 17:53:21

Hey, the recording we sent you already had eye intrinsic files. Am I missing something?

papr 29 July, 2021, 17:54:31

Ok, great! I just wanted to mention this, as it has an impact on the 3d eye model estimation

user-b14f98 29 July, 2021, 17:54:29

Perhaps we were using outdated default intrinsics?

user-b14f98 29 July, 2021, 17:54:35

Ok, thanks.

user-3cff0d 29 July, 2021, 18:11:13

I don't have one on me at the moment, no

user-3cff0d 29 July, 2021, 18:11:21

I work remotely most of the time

papr 29 July, 2021, 18:13:26

Ok, I have everything needed at home but not setup. I can do the test, when I am back from vacation.

To verify the values, we need to run https://github.com/pupil-labs/hmd-eyes/blob/master/python_reference_client/notification_monitor.py + a hmd-eyes realtime calibration. Filtering for start_plugin and GazerHMD3D, we should be able to get the dynamically calculated eye_translations

user-b14f98 29 July, 2021, 18:23:26

Awesome. I hope that you have a wonderful vacation!

papr 29 July, 2021, 18:24:22

@user-3cff0d I found the issue. The ref data required a flip of the y axis. Using the new version, I get a validation accuracy of 1.9 compared to 2.9 with the default 3d gaze mapper. /cc @user-92dca7

papr 29 July, 2021, 18:26:12

@user-b14f98 @user-3cff0d I updated the gist. This is the diff https://gist.github.com/papr/e57f55ddd02245bbfd99e072b4273fdf/revisions#diff-81827a6bee15b12e512a80c75d2b0270629db9e9eb5231bc73d7fcc79926cc0c

papr 29 July, 2021, 18:28:16

@user-d1efa8 The whole conversation above is highly relevant for you, too! Please take the time to read it, install the plugin, and give the plugin a try.

user-d1efa8 29 July, 2021, 19:24:40

Cool, I'll take a look now; thanks for letting me know!

user-3cff0d 29 July, 2021, 18:28:48

That's great! I'll give it a try asap.

user-3cff0d 29 July, 2021, 19:41:47

How do I actually activate the plugin, or do I just do Post-Hoc Gaze Calibration as normal and it will automatically be used?

papr 29 July, 2021, 19:42:51

post-hoc calib -> calibration menu -> select from the 2d/3d selector

user-3cff0d 29 July, 2021, 19:43:40

Gotcha, thank you

papr 29 July, 2021, 19:44:32

but you get a third post-hoc option, correct?

user-3cff0d 29 July, 2021, 19:45:02

Yes, there is 2D, 3D, and Post-hoc HMD 3D

user-3cff0d 29 July, 2021, 19:51:30

I'm getting some really bad gaze data with that, it's flying back and forth from frame-to-frame

user-3cff0d 29 July, 2021, 19:55:20

Is that what you were observing before you applied the fix?

papr 29 July, 2021, 19:55:34

correct

user-3cff0d 29 July, 2021, 21:07:36

So I've tried it multiple times, including without any other external plugins. The output looks like that no matter what I adjust. Is it possible that this is happening because I'm still on 3.2?

papr 29 July, 2021, 21:10:02
  1. Did you do post-hoc pupil detection to get pupil data for the annotated section?
  2. Did you playback the gaze during the calibration? The few first seconds are noisy for me, too
user-3cff0d 29 July, 2021, 21:12:03
  1. Yes I've redone the pupil detection post-hoc multiple times
  2. I did, yes, but it looks terrible throughout the video. We have software that analyzes the exports of these videos and automatically plots some accuracy/precision metrics on the trials contained within the rest of the video, and all showed huge errors
user-3cff0d 29 July, 2021, 21:12:46

However when I switched the setting back to 3D everything looked normal again

papr 29 July, 2021, 21:15:11

Did you try both versions of the plugin? One with and the other without the flip? And both result in the huge error? Did you recalculate both calibration + mapping after switching plugins?

user-3cff0d 29 July, 2021, 21:14:00

The only difference between our setups that I could imagine would be the version difference from 3.2->3.4

papr 29 July, 2021, 21:16:36

That's not it, for sure

user-3cff0d 29 July, 2021, 21:16:37

I did remove the flip and try again, and the results were even worse. Both have huge errors but without the flip is the worse of the two. I did recalculate both calibration + mapping for each attempt

papr 29 July, 2021, 21:17:22

ok, this is consistent at least.

papr 29 July, 2021, 21:26:41

This is on the same recording you shared with me, correct? Please share the offline_data folder with me. This way I should have the same post-hoc pupil data etc. as you

user-3cff0d 29 July, 2021, 21:38:22

Messaged it to you

papr 29 July, 2021, 22:04:25

Ok, I can confirm that something is def. still going wrong. Only the gaze for the center point is accurate.

1.9° from 94 / 575 samples; didn't see the 95/575 samples before 🙈

user-3cff0d 29 July, 2021, 22:05:03

Ahh, okay

papr 29 July, 2021, 22:27:41

@user-3cff0d I am getting better results (as in: similar to the default headset calib.) when changing this line https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L179

- scales = list(np.linspace(0.7, 10, 5))
+ scales = list(np.linspace(0.7, 100, 5))

Basically, the bundle adjustment tries different z-depth scalings and chooses the best one. 10 seems to be too small if the ref data is manually annotated.

papr 30 July, 2021, 11:57:30

@user-3cff0d @user-b14f98 The real-time hmd plugin is making too many assumptions that are not holding true for 2d annotated reference data. I started making some changes to simplify a post-hoc hmd calib: https://github.com/pupil-labs/pupil/pull/2176

It works the same way as the headset calibration but uses different initial eye translations and ref depths. Unfortunately, it yields less accurate gaze (3.5 instead of 2.9). We will have to investigate why.

The external plugin is no longer necessary.

user-b14f98 30 July, 2021, 15:45:05

@papr the 3D target locations are stored in this pickle of a dataframe. The most relevant columns are targetLocalPos (with subcolumns x, y, z) and azEst (with subcolumns az, el). azEst are the corresponding spherical target coordinates.

singleTrialDF.pickle

user-b14f98 30 July, 2021, 15:45:43

FYI, each row is a single trial in our assessment recording for this single subject.

user-b14f98 30 July, 2021, 15:46:45

We care only about trials where "trialType" is "CalibrationAssessment" and "targetType" is "fixation"

papr 30 July, 2021, 15:48:16

I am mostly interested in understanding if the camera intrinsics saved by Unity are correct. 🙂 Thanks for sharing the data!

user-b14f98 30 July, 2021, 15:48:47

My pleasure. Please let us know how else we can help.

papr 30 July, 2021, 15:51:28

I think it would be beneficial if you could start having a look at the gazer code and combining it with https://gist.github.com/papr/48e98f9b46f6a0794345bd4322bbb607 This example shows how to fit a pye3d twosphere model to a set of hand selected pupil data points (e.g. top 10% confidence pupil data). This gives you more control over the model. Combining it with the gazer code, you can build a reproducible gaze estimation pipeline without running Pupil Player.
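
As a starting point, the "top 10% confidence" selection could look like this on a Player pupil_positions.csv export (column names follow the standard export; adapt to your own pipeline output):

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
pupil_2d = pupil[(pupil["eye_id"] == 0) & (pupil["method"].str.contains("2d"))]
threshold = pupil_2d["confidence"].quantile(0.9)
high_conf = pupil_2d[pupil_2d["confidence"] >= threshold]
# feed high_conf rows to the model fit, analogous to convert_pupil_datum_observation in the gist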

user-b14f98 30 July, 2021, 15:52:10

@user-3cff0d 🙂 🙂 🙂

user-b14f98 30 July, 2021, 15:53:26

@papr Does this use recorded pupils ?

user-b14f98 30 July, 2021, 15:53:35

Is that the assumption?

papr 30 July, 2021, 15:55:41

But I mean yes, this was primarily meant as a controlled, reproducible post-hoc pipeline.

papr 30 July, 2021, 15:54:45

It uses pupil player exported 2d pupil data, but you should be able to adjust convert_pupil_datum_observation to your own pipeline output

user-b14f98 30 July, 2021, 15:55:58

Ok, thanks! We are trying to process N subjects with M pupil segmentation plugins, producing gaze estimates for each. Kevin has run into a roadblock in automating that, specifically in triggering gaze estimation after pupil detection.

user-b14f98 30 July, 2021, 15:56:12

So, very useful.

user-b14f98 30 July, 2021, 15:56:54

No chance you guys have this pipeline already going in-house and are willing to share?

user-b14f98 30 July, 2021, 15:57:18

Or did you put the pieces in place, but haven't yet constructed the full offline pipeline?

papr 30 July, 2021, 15:59:08

It is just a matter of putting the pieces together. Gazers can fit and predict gaze data using pupil data + ref data input

papr 30 July, 2021, 15:57:44

No, we don't have that yet. That is why I was asking you to look into it 🙂 This is a piece that I built for @user-d1efa8

user-b14f98 30 July, 2021, 15:58:16

Ok. Thanks again. @user-d1efa8, keep an eye on this thread and let us know if you are able to contribute. Perhaps we can help one another 🙂

user-3cff0d 30 July, 2021, 16:15:08

That looks great! I remember asking about triggering an export as well, but I think there's a method that does that

papr 30 July, 2021, 16:40:58

Yes, technically there is. But the script above is about extracting the pipeline and not running Player at all.

user-d1efa8 30 July, 2021, 17:27:21

@papr @user-3cff0d @user-b14f98 yeah will do; I'll try to get something next week for y'all. We've just been recording a lot of data lately so I haven't had any free time

user-b14f98 30 July, 2021, 17:27:33

@user-d1efa8 What lab are you with?

user-d1efa8 30 July, 2021, 17:27:51

Angelaki lab at NYU; what about you?

user-b14f98 30 July, 2021, 17:28:00

Oh, great. https://www.rit.edu/directory/gjdpci-gabriel-diaz

user-b14f98 30 July, 2021, 17:28:09

I am Gabriel Diaz. I've met Dora once or twice, in passing.

user-3cff0d 30 July, 2021, 17:28:37

I'm Kevin Barkevich, I'm one of Dr. Diaz' grad students c:

user-d1efa8 30 July, 2021, 17:29:00

Oh wow, that's awesome haha

user-d1efa8 30 July, 2021, 17:29:14

I'm Josh Calugay, nice to meet y'all

user-d1efa8 30 July, 2021, 17:29:33

Not a grad student rn, but I will be in October 🙂

user-b14f98 30 July, 2021, 17:29:46

Excellent!

user-d1efa8 30 July, 2021, 17:30:44

Just wondering, do you happen to know Greg DeAngelis? We've been working with him to validate our eye tracking data

user-b14f98 30 July, 2021, 17:30:52

Yes, he's just down the road.

user-b14f98 30 July, 2021, 17:31:10

We're both members of URochester's Center for Vision Science (although I am actually faculty at RIT, down the road)

user-d1efa8 30 July, 2021, 17:32:29

What a small world

user-3cff0d 30 July, 2021, 17:32:44

Or a tight-knit field!

user-b14f98 30 July, 2021, 17:33:54

Indeed, the world of vision science is quite small, and collaborative.

user-d1efa8 30 July, 2021, 17:37:24

I'm glad it is, because I definitely wouldn't have been able to make it far with this project without help haha

End of July archive