💻 software-dev


user-0e7e72 06 September, 2021, 15:30:43

Hello, I am trying to stream data from Pupil Invisible using the Network API (https://docs.pupil-labs.com/developer/invisible/#network-api). I have managed to get gaze coordinates, gaze timestamps, and world video. I also need to obtain the world video timestamps; could you point me to how to do that? Thanks a lot!

papr 06 September, 2021, 15:32:33

If you can receive video frames, you are already receiving their timestamps 🙂

world_img = data.bgr
world_ts = data.timestamp
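
For more context, here is a minimal sketch of where those attributes live in a receive loop. This is not a complete program: sensor and fetch_data() follow the streaming example in the docs linked above, so adapt the names to your own code:

# inside your existing streaming loop, once the world video sensor is connected
for data in sensor.fetch_data():  # `sensor` as set up in the docs example
    if data is None:
        continue
    world_img = data.bgr        # BGR pixel buffer of the scene frame
    world_ts = data.timestamp   # capture timestamp of that frame
    print(world_ts)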
user-0e7e72 06 September, 2021, 15:33:22

Great, thanks!

user-5d4724 07 September, 2021, 12:45:15

Hello, I was trying to connect the Pupil Labs eye tracker with MATLAB. I was using these pieces of code in Python and MATLAB to establish a TCP connection. Please help me find out how to resolve this issue.

Chat image

user-5d4724 07 September, 2021, 12:45:45

And this is the error with Matlab

Chat image

papr 07 September, 2021, 13:05:49

Hi, this seems to be third-party code. Unfortunately, we cannot provide support for that. Instead, you can have a look at this solution, https://github.com/pupil-labs/pupil-helpers/tree/master/matlab, which does not require running Python. Be aware that the setup is tricky to get right, though.

user-5d4724 07 September, 2021, 13:14:14

Thanks a lot! I will ask again if the setup causes any issues 😉

user-5349b5 08 September, 2021, 09:01:45

Hi guys, I imported hmd-eyes.unitypackage into Unity, but I have a problem: PupilLabs.SubscriptionsController:Update seems to be called only once, after the calibration succeeds. To use GazeVisualizer continuously, I need PupilLabs.SubscriptionsController to be called regularly. How can I make it run continuously? Should I revise the scripts? Regards,

papr 08 September, 2021, 09:05:43

For more context: @user-5349b5 has been in contact with me via email. The demo works as expected but they are trying to integrate the gaze visualizer into their own view/project (not sure about the correct terminology). Calibration is successful but the SubscriptionsController doesn't seem to be called regularly to fetch/process the incoming gaze data.

user-0e7e72 08 September, 2021, 12:00:32

To follow up on this question: if I collect the gaze data and video data using the Network API, should I expect the same fps as documented here https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793 ? Thanks!

papr 08 September, 2021, 12:03:42

Yes, although the fps might be lower due to network bandwidth limits.

user-0e7e72 08 September, 2021, 12:05:18

Ok, thanks! I am asking because the video data seems to have a higher fps than the gaze data, so there may be a problem in my code.

papr 08 September, 2021, 12:33:26

It is possible that the gaze estimation is slower when more phone resources are being used, e.g. for streaming data. But a bug on the receiving side is possible, too. Which gaze fps are you measuring? And how are you measuring it?
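
For reference, one way to measure it is to compare the wall-clock arrival rate against the rate implied by the device timestamps (a sketch, not tested against your setup; receive_gaze() is a hypothetical placeholder for however you iterate over incoming gaze datums):

import time

count = 0
first_ts = last_ts = None
t0 = time.perf_counter()

for gaze in receive_gaze():  # hypothetical: your gaze receive loop
    count += 1
    if first_ts is None:
        first_ts = gaze.timestamp
    last_ts = gaze.timestamp
    if count == 200:
        break

# the arrival rate includes network delays; the device rate reflects the sensor
print(f"arrival rate: {count / (time.perf_counter() - t0):.1f} Hz")
print(f"device rate: {(count - 1) / (last_ts - first_ts):.1f} Hz")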

papr 08 September, 2021, 12:05:51

Do you mean eye or world video?

user-0e7e72 08 September, 2021, 12:22:53

the world video

papr 10 September, 2021, 08:32:48

re https://discord.com/channels/285728493612957698/285728493612957698/885804567697186826 @user-10631a can you please check the type of the object on which you are calling subscribe()? print(type(obj))

user-10631a 10 September, 2021, 08:34:21

This one: <class 'contextlib._GeneratorContextManager'>

papr 10 September, 2021, 08:34:55

ok, you need to call subscribe() on a socket object. That is currently not the case

user-10631a 10 September, 2021, 08:37:50

I get this object from this method: pupil_remote.connect(f'tcp://{ip}:{sub_port}'). Maybe I'm missing something.

papr 10 September, 2021, 08:39:16

Can you link the source of your code? I need more context to check what is going on.

user-10631a 10 September, 2021, 08:43:47
import zmq
import msgpack

if __name__ == "__main__":
    # Connecting to Backbone
    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)

    ip = '127.0.0.1'
    port = 50020

    pupil_remote.connect(f'tcp://{ip}:{port}')

    pupil_remote.send_string('SUB_PORT')
    sub_port = pupil_remote.recv_string()

    pupil_remote.send_string('PUB_PORT')
    pub_port = pupil_remote.recv_string()

    print(f"Pub.Port: {pub_port}, Sub.Port: {sub_port}")

    subscriber = ctx.socket(zmq.SUB)
    subscriber = pupil_remote.connect(f'tcp://{ip}:{sub_port}')

    print(type(subscriber))

    print("Closing Connection")
    pupil_remote.close()  
papr 10 September, 2021, 08:47:20

See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py for reference.

Note that sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.") there is equivalent to subscriber.subscribe("pupil.") in your code.

papr 10 September, 2021, 08:46:04
- subscriber = pupil_remote.connect(f'tcp://{ip}:{sub_port}')
+ subscriber.connect(f'tcp://{ip}:{sub_port}')
+ subscriber.subscribe("my topic")
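
Putting it all together, a minimal corrected version of your script could look like this (a sketch; it assumes Pupil Capture is running locally with Pupil Remote on port 50020 and prints a single pupil datum):

import zmq
import msgpack

if __name__ == "__main__":
    ctx = zmq.Context()

    # REQ socket: talks to Pupil Remote to look up the data ports
    pupil_remote = ctx.socket(zmq.REQ)
    ip = '127.0.0.1'
    port = 50020
    pupil_remote.connect(f'tcp://{ip}:{port}')

    pupil_remote.send_string('SUB_PORT')
    sub_port = pupil_remote.recv_string()

    # SUB socket: receives the actual data; connect() and subscribe()
    # are called on this socket, not on pupil_remote
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f'tcp://{ip}:{sub_port}')
    subscriber.subscribe('pupil.')

    # each message is multi-part: a topic frame + a msgpack-encoded payload
    topic, payload = subscriber.recv_multipart()
    print(topic.decode(), msgpack.loads(payload, raw=False))

    print("Closing Connection")
    subscriber.close()
    pupil_remote.close()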
user-10631a 10 September, 2021, 08:47:52

Great, thank you so much for your help

user-b14f98 10 September, 2021, 13:10:04

@papr I'd like to request a timeline of 3D model confidence, visible next to the calibration/gaze mapping timeline.

user-b14f98 10 September, 2021, 13:10:59

Makes sense, right? It's problematic when the calibration remains static over a region in which the 3D eye model position has changed. Those changes degrade the calibration!

user-b14f98 10 September, 2021, 13:12:53

It would be even better if there was some indication of the magnitude of change in the 3D eye model. I'm not quite sure what that would be. What's really changing is the estimated 3D position of the eyeball center, but perhaps what's most relevant to the user upon a change is the change in angular position between the old/new model within camera-centered spherical space?

user-b14f98 10 September, 2021, 13:13:19

Just a thought...

papr 10 September, 2021, 13:15:51

Visualizing this kind of information should be fairly easy. Angular change would not be helpful if the model shifts in z, but I get what you want. Let me think about it :)

user-b14f98 10 September, 2021, 13:16:24

Not a critical feature, but I've been thinking about this lately. The issue is quite apparent in our attempts to render the 3D geometry of the whole system.

user-b14f98 10 September, 2021, 13:17:13

...the translation component stored in the calib. matrices is static. The eyeball center stored within the gaze_positions.csv varies.

user-b14f98 10 September, 2021, 13:18:07

Right - perhaps that's the relevant metric? The linear distance between the 3D eyeball model center and the center stored in the calib matrix?

user-b14f98 10 September, 2021, 13:19:07

...but yes, you're reminding me that, when fitting the 3D model, there are infinitely many distance/size combinations that produce the same projection, and distance is allowed to vary.
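
To make that concrete, the kind of drift I have in mind could be computed roughly like this (just a sketch; I'm assuming the eye_center0_3d_* columns of the gaze_positions.csv export and taking the first sample as a stand-in for the center baked into the calibration):

import numpy as np
import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")
cols = ["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]
centers = df[cols].dropna().to_numpy()

# linear drift of the estimated eyeball center relative to the first sample
drift = np.linalg.norm(centers - centers[0], axis=1)
print(f"mean drift: {drift.mean():.2f}, max drift: {drift.max():.2f}")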

papr 10 September, 2021, 13:19:22

Ah wait, you are talking about gaze data.

user-b14f98 10 September, 2021, 13:20:11

I think I'm talking about the issue that the 3D eyeball position estimated during pupil fitting is allowed to vary, while the calibration assumes that it is static.

papr 10 September, 2021, 13:20:13

In any case, it would be great to have something flexible.

papr 10 September, 2021, 13:21:16

OK, understood. But that can be worked around by making sure that the fit is stable before calibrating, and potentially even by freezing the model.

user-b14f98 10 September, 2021, 13:22:24

Yes. ...and I see that your team is slowly adding features to do this. I do have a hunch that more information could make this a bit easier. For example, would it be possible to make decisions about when to freeze the model after pupil fitting has completed? Perhaps this time series of confidence that we're talking about would facilitate that decision process.

papr 10 September, 2021, 13:26:33

Unfortunately, there is no perfect metric for deciding when the model is fit well. But since you would want to do that before calibrating, it will probably make more sense to visualize pupil data instead of gaze data. There is also no good way to exactly control freezing during post-hoc detection.

papr 10 September, 2021, 13:26:57

Therefore, I do not even know if the timeline will be helpful.

user-b14f98 10 September, 2021, 13:27:19

I can imagine some issues with that. Ok, I'll continue to think about this and will come back if I have any good ideas. 🙂 Thanks for listening.

user-5d4724 16 September, 2021, 18:17:37

Hello Sir, can you please tell me whether some settings issue in Pupil Capture (or something else) could result in no data in the 'annotations.csv' file? All the other files have the correct data stored; only annotations.csv is blank. Please help.

Chat image

wrp 17 September, 2021, 06:47:16

Just checking - did you create the annotations in real time in Pupil Capture?

user-5d4724 18 September, 2021, 06:27:18

Yes, the annotations were created in real time in Pupil Capture.

user-9eeaf8 17 September, 2021, 14:31:16

Hello, I have a question about Pupil Player: can you use Pupil Player from the command line with more arguments than just the recording path? E.g., open a recording and export with the current settings, given a start and stop time?

user-9eeaf8 17 September, 2021, 14:31:34

Thanks for any advice on the topic!

user-3cff0d 18 September, 2021, 03:00:12

Hi @papr , I have an update regarding the pupil-core-pipeline script! I implemented the (in your current version, empty) core/pupil_detection.py script and verified that its output works with the core/pipeline.py calibration script you provided. I have created a pull request if you'd like to review/merge the code into the main branch.

user-5d4724 18 September, 2021, 06:27:57

But it just shows a blank file.

papr 18 September, 2021, 07:06:21

Then it is possible that the annotation timestamps were not in sync with Capture's clock. If you use the latest Pupil Capture version, it will tell you the age of received annotations. Please make sure that this age is reasonable for your use case.
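
The usual approach is to timestamp your annotations with Capture's own clock, which you can query via Pupil Remote. A minimal sketch along the lines of pupil-helpers' remote_annotations.py (assumes Pupil Remote on localhost:50020 and the Annotation plugin enabled in Capture):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

# look up the PUB port for sending annotations
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()

pub = ctx.socket(zmq.PUB)
pub.connect(f'tcp://127.0.0.1:{pub_port}')
time.sleep(1.0)  # give the PUB socket time to connect before sending

# query Capture's current clock ('t') right before sending,
# so the annotation's "age" stays small
pupil_remote.send_string('t')
capture_time = float(pupil_remote.recv_string())

annotation = {
    'topic': 'annotation',
    'label': 'my event',
    'timestamp': capture_time,
    'duration': 0.0,
}
pub.send_string(annotation['topic'], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))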

user-5d4724 18 September, 2021, 15:21:31

Sir, how can we ensure that the age of the annotations is reasonable for my work? By trial and error, or is there some method?

papr 22 September, 2021, 17:10:16

Receiving real-time annotations

user-10631a 28 September, 2021, 10:38:28

Hi @papr , I have a problem with sending notifications via a publisher using the IPC Backbone.

I have a class in which I've defined a publisher and a method to start the calibration, but when I call this method from the main thread, nothing happens. If I write the same program procedurally, without classes, it works and the calibration starts.

If it helps, I can share the source code to see if there is any problem. Thanks!

user-10631a 28 September, 2021, 10:50:57
import zmq
import msgpack

class PupilConn:

    def __init__(self, host='127.0.0.1', port=50020):
        self._host, self._port = host, port
        # Create context
        self._ctx = zmq.Context()
        # Connect to IPC
        try:
            self._pupilRemote = self._ctx.socket(zmq.REQ)
            self._pupilRemote.connect(f'tcp://{host}:{port}')
            # Get Subscriber and Publisher Port
            self._subPort = self.initPort('SUB')
            self._pubPort = self.initPort('PUB')
            # Init Subscriber and Publisher
            self.initSubscriber()
            self.initPublisher()
        except:
            print("* Unable to Reach Pupil Remote Server *")

    # Init Port
    def initPort(self, port_type):
        self._pupilRemote.send_string(port_type + '_PORT')
        return self._pupilRemote.recv_string()

    # Init Subscriber
    def initSubscriber(self):
        self._subscriber = self._ctx.socket(zmq.SUB)
        self._subscriber.connect(f'tcp://{self._host}:{self._subPort}')

    # Init Publisher
    def initPublisher(self):
        self._publisher = self._ctx.socket(zmq.PUB)
        self._publisher.connect(f'tcp://{self._host}:{self._pubPort}')

    def startCalibration(self):
        notification = {'subject': 'calibration.should_start'}
        topic = 'notify.' + notification['subject']
        # Message creation
        payload = msgpack.dumps(notification)
        self._publisher.send_string(topic, flags=zmq.SNDMORE)
        self._publisher.send(payload)

if __name__ == "__main__":
    pupil = PupilConn()
    pupil.startCalibration() # Nothing happens
user-994e20 28 September, 2021, 14:13:32

For our research we are using the surface exports that Pupil Player generates to estimate the gaze on our computer screen in pixels. On the internet I found this: "Please note, that the gaze data is actually a combination of three streams (monocular left, monocular right, and binocular) gaze stream." When I view my data I find that it is sampled at a frequency of approx. 240 Hz, while each eye is being recorded at 120 Hz. So my question is: is it correct that the gaze data in this surface export is a combination of the left and right eye streams? If so, is there also a binocular stream? Is it possible to determine which stream the data in the surface export comes from? Thanks in advance!

papr 28 September, 2021, 14:22:34

Please check out this draft of how the matching works: https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f To check whether a gaze datum was mapped monocularly or binocularly, you will have to look at the gaze_positions.csv file. Its base_data field contains the eye ids and timestamps of the pupil data that was used to generate the gaze datum.
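
For example, you could classify each exported gaze datum like this (a sketch; it assumes each base_data entry has the form '<pupil_timestamp>-<eye_id>', with entries separated by spaces):

import pandas as pd

df = pd.read_csv("exports/000/gaze_positions.csv")

def classify(base_data):
    # each entry looks like "<pupil_timestamp>-<eye_id>"
    ids = {entry.rsplit("-", 1)[1] for entry in base_data.split()}
    return "binocular" if ids == {"0", "1"} else f"monocular (eye {ids.pop()})"

df["mapping"] = df["base_data"].apply(classify)
print(df["mapping"].value_counts())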

papr 28 September, 2021, 14:19:12

You can also start the calibration via self._pupilRemote.send_string('C'); self._pupilRemote.recv_string(). Does that work?

Generally, I think you will have to keep the program alive such that it has time to send out the message to Capture.
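
If you want to stay with the notification route, this is the kind of change I would try first (a sketch against your class above; ZeroMQ PUB sockets silently drop messages sent before the connection has finished establishing, and your script currently exits right after sending):

import time

# 1) in PupilConn.initPublisher, pause after connecting: a PUB socket
#    drops messages sent before the connection is established
def initPublisher(self):
    self._publisher = self._ctx.socket(zmq.PUB)
    self._publisher.connect(f'tcp://{self._host}:{self._pubPort}')
    time.sleep(1.0)

# 2) at the end of the main block, keep the process alive briefly so the
#    notification can actually leave the process
pupil = PupilConn()
pupil.startCalibration()
time.sleep(1.0)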

user-10631a 29 September, 2021, 07:53:41

Thanks for your answer; this way it works. But I need to be able to send messages to configure some parameters, such as the fixation duration or dispersion. Do you have any suggestions?

End of September archive