Hello, I am trying to stream data from Pupil Invisible using the network API (https://docs.pupil-labs.com/developer/invisible/#network-api). I have managed to get gaze coordinates, gaze timestamps, and world video. I also need to obtain the world video timestamps; could you point me to how to do that? Thanks a lot!
If you can receive video frames, you are already receiving their timestamps 🙂
world_img = data.bgr
world_ts = data.timestamp
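To make that concrete, here is a minimal sketch. `receive_world_frames` is only a stand-in for however your existing loop yields scene-video frames (e.g. the ndsi loop from the linked docs); it is not a real API call:

def handle_frames(receive_world_frames):
    # every received frame object carries both the image and its timestamp
    for data in receive_world_frames():
        world_img = data.bgr        # the scene image you already use
        world_ts = data.timestamp   # the matching scene-frame timestamp
        print(world_ts, world_img.shape)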
great thanks!
Hello, I was trying to connect Pupil Labs with Matlab. I was using these pieces of code in Python and Matlab to establish a TCP connection. Please help me figure out how to resolve this issue.
And this is the error with Matlab
Hi, this seems to be third-party code. Unfortunately, we cannot provide support for that. Instead, you can have a look at this solution https://github.com/pupil-labs/pupil-helpers/tree/master/matlab which does not require running Python. Unfortunately, the setup is tricky to get right.
Thanks a lot! will ask again if the setup creates some issue 😉
Hi guys, I imported hmd-eyes.unitypackage into Unity, but I have a problem: PupilLabs.SubscriptionsController:Update seems to be called only once, after the calibration was successful. To use the GazeVisualizer continuously, I need PupilLabs.SubscriptionsController to be called regularly. How can I have it called continuously? Should I revise the scripts? Regards,
For more context: @user-5349b5 has been in contact with me via email. The demo works as expected but they are trying to integrate the gaze visualizer into their own view/project (not sure about the correct terminology). Calibration is successful but the SubscriptionsController doesn't seem to be called regularly to fetch/process the incoming gaze data.
To follow up on this question, if I collect the gaze data and video data using the network API, should I expect the same fps as documented here: https://docs.google.com/spreadsheets/d/1e1Xc1FoQiyf_ZHkSUnVdkVjdIanOdzP0dgJdJgt0QZg/edit#gid=254480793 ? Thanks!
yes, although fps might be slower due to network bandwidth limits
Ok thanks! I am asking because the video data seems to have a higher fps than the gaze data, so there must be some problem in my code
It is possible that the gaze estimation is slower when more phone resources are being used, e.g. for streaming data. But a bug on receiving side might be possible, too. Which gaze fps are you measuring? And how are you measuring it?
Do you mean eye or world video?
the world video
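One simple way to check the rate you are actually receiving on your side is to compute it from the timestamps of the received data. A minimal sketch (the sample values are made up):

import numpy as np

def effective_rate_hz(timestamps):
    # mean sample rate computed from timestamps given in seconds
    ts = np.asarray(timestamps, dtype=float)
    return 1.0 / np.diff(ts).mean()

print(effective_rate_hz([0.000, 0.005, 0.010, 0.015]))  # -> 200.0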
re https://discord.com/channels/285728493612957698/285728493612957698/885804567697186826
@user-10631a can you please check the type of the object on which you are calling subscribe()?
print(type(obj))
This one: <class 'contextlib._GeneratorContextManager'>
ok, you need to call subscribe() on a socket object. That is currently not the case
I get this object from this method: pupil_remote.connect(f'tcp://{ip}:{sub_port}'), maybe i'm missing something
Can you link the source for your code? I need more context in order to check what is going on
import zmq
import msgpack

if __name__ == "__main__":
    # Connecting to Backbone
    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    ip = '127.0.0.1'
    port = 50020
    pupil_remote.connect(f'tcp://{ip}:{port}')
    pupil_remote.send_string('SUB_PORT')
    sub_port = pupil_remote.recv_string()
    pupil_remote.send_string('PUB_PORT')
    pub_port = pupil_remote.recv_string()
    print(f"Pub.Port: {pub_port}, Sub.Port: {sub_port}")
    subscriber = ctx.socket(zmq.SUB)
    subscriber = pupil_remote.connect(f'tcp://{ip}:{sub_port}')
    print(type(subscriber))
    print("Closing Connection")
    pupil_remote.close()
See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py for reference.
Note that sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.") is equivalent to subscriber.subscribe("pupil.") in your code.
- subscriber = pupil_remote.connect(f'tcp://{ip}:{sub_port}')
+ subscriber.connect(f'tcp://{ip}:{sub_port}')
+ subscriber.subscribe("my topic")
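Putting it together, a minimal sketch of the corrected subscription plus receiving one message (variable names follow your snippet above; "pupil." is just an example topic):

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe("pupil.")  # equivalent to setsockopt_string(zmq.SUBSCRIBE, "pupil.")

# each message arrives as two frames: the topic string and a msgpack-encoded payload
topic = subscriber.recv_string()
payload = msgpack.unpackb(subscriber.recv(), raw=False)
print(topic, payload)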
Great, thank you so much for your help
@papr I'd like to request a timeline of 3D model confidence visible near the calibration/gaze-mapping timeline.
Makes sense, right? It's problematic when the calibration remains static over a region in which the 3D eye model position has changed. Those changes degrade the calibration!
It would be even better if there was some indication of the magnitude of change in the 3D eye model. I'm not quite sure what that would be. What's really changing is the estimated 3D position of the eyeball center, but perhaps what's most relevant to the user upon a change is the change in angular position between the old/new model within camera-centered spherical space?
Just a thought...
Visualizing this kind of information should be fairly easy. Angular change would not be helpful if the model shifts in z. But I get what you want. Let me think about it :)
Not a critical feature, but I've been thinking about this lately. The issue is quite apparent in our attempts to render the 3D geometry of the whole system.
...the translation component stored in the calib. matrices is static. The eyeball center stored within the gaze positions csv varies.
Right - perhaps that's the relevant metric? The linear distance between the 3D eyeball model center and the center stored in the calib matrix?
...but yes, you're reminding me that, when fitting the 3D model, there are an infinite number of distance/size combos that produce the same projection, and distance is allowed to vary.
Ah wait, you are talking gaze data
I think I'm talking about the issue that the 3D eyeball position estimated during pupil fitting is allowed to vary, while the calibration assumes that it is static.
In any case, would be great to have something flexible
OK, understood. But that can be worked around by making sure that the fit is stable before calibrating and potentially even freezing the model
Yes. ...and I see that your team is slowly adding features to do this. I do have a hunch that more information could make this a bit easier. For example, would it be possible to make decisions about when to freeze the model after pupil fitting has completed? Perhaps this time series of confidence that we're talking about would facilitate that decision process.
Unfortunately, there is no perfect metric for deciding when the model is fit well. But since you would want to do that before calibrating, it will probably make sense to visualize pupil instead of gaze data. Unfortunately, there is no good way to exactly control freezing during Post-hoc detection.
Therefore, I do not even know if the timeline will be helpful.
I can imagine some issues with that. Ok, I'll continue to think about this and will come back if I have any good ideas. 🙂 Thanks for listening.
Hello, can you please tell me whether there is some setting issue in Pupil Capture, or something else, that results in no data in the 'annotations.csv' file? All the other files have the correct data stored; only the annotations file is blank. Please help
Just checking - Did you create annotations real-time in Pupil Capture?
Yes, annotations are real-time in Pupil Capture
Hello, I have a question about Pupil Player: Can you use Pupil Player from the command line with more arguments than just the recording path? E.g. open a recording and export with the current settings, given a start and stop time?
Thanks for all advice on the topic!
Hi @papr , I have an update regarding the pupil-core-pipeline script! I implemented the (in your current version, empty) core/pupil_detection.py script and verified that its output works with the core/pipeline.py calibration script you provided. I have created a pull request if you'd like to review/merge the code into the main branch.
But it just shows nothing but a blank file
Then it is possible that their timestamps were not in sync. If you use the latest Pupil Capture version it will tell you the age of received annotations. Please make sure that this age is reasonable for your use case.
Sir, how can we ensure that the age of the annotations is reasonable for my work? Is it by trial and error, or is there some method?
Receiving real-time annotations
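If the annotations are sent from your own script (rather than via Capture's hotkeys), the usual way to keep their age small is to timestamp them with Capture's own clock right before sending. A minimal sketch along the lines of pupil-helpers' remote_annotations.py (the Annotation plugin needs to be running in Capture; the label and sleep values are just examples):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

# open the PUB socket used for sending annotations
pupil_remote.send_string('PUB_PORT')
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f'tcp://127.0.0.1:{pub_port}')
time.sleep(1.0)  # give the PUB connection time to establish

# ask Capture for its current Pupil time and use it as the annotation timestamp,
# so the "age" reported on the receiving side stays close to zero
pupil_remote.send_string('t')
pupil_time = float(pupil_remote.recv_string())

annotation = {
    'topic': 'annotation',
    'label': 'my_event',
    'timestamp': pupil_time,
    'duration': 0.0,
}
pub_socket.send_string(annotation['topic'], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))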
Hi @papr, I have a problem with sending notifications via a publisher using the IPC Backbone.
I have a class in which I've defined a publisher and a method to start the calibration. But when I call this method from the main thread, nothing happens. If I write the same program in a procedural way, without using classes, then it works and the calibration starts.
If it helps, I can share the source code so you can see if there is any problem. Thanks
import zmq
import msgpack

class PupilConn:
    def __init__(self, host='127.0.0.1', port=50020):
        self._host, self._port = host, port
        # Create context
        self._ctx = zmq.Context()
        # Connect to IPC
        try:
            self._pupilRemote = self._ctx.socket(zmq.REQ)
            self._pupilRemote.connect(f'tcp://{host}:{port}')
            # Get Subscriber and Publisher Port
            self._subPort = self.initPort('SUB')
            self._pubPort = self.initPort('PUB')
            # Init Subscriber and Publisher
            self.initSubscriber()
            self.initPublisher()
        except:
            print("* Unable to Reach Pupil Remote Server *")

    # Init Port
    def initPort(self, port_type):
        self._pupilRemote.send_string(port_type + '_PORT')
        return self._pupilRemote.recv_string()

    # Init Subscriber
    def initSubscriber(self):
        self._subscriber = self._ctx.socket(zmq.SUB)
        self._subscriber.connect(f'tcp://{self._host}:{self._subPort}')

    # Init Publisher
    def initPublisher(self):
        self._publisher = self._ctx.socket(zmq.PUB)
        self._publisher.connect(f'tcp://{self._host}:{self._pubPort}')

    def startCalibration(self):
        notification = {'subject': 'calibration.should_start'}
        topic = 'notify.' + notification['subject']
        # Message creation
        payload = msgpack.dumps(notification)
        self._publisher.send_string(topic, flags=zmq.SNDMORE)
        self._publisher.send(payload)

if __name__ == "__main__":
    pupil = PupilConn()
    pupil.startCalibration()  # Nothing happens
For our research we are using the surface exports that Pupil Player generates to estimate the gaze on our computer screen in pixels. On the internet I found this: "Please note, that the gaze data is actually a combination of three streams (monocular left, monocular right, and binocular) gaze stream." When I view my data I find that it is sampled at a frequency of approx. 240 Hz, while each eye is registered at 120 Hz. So my question is: is it correct that the gaze data in this surface export is a combination of the left and right eye streams? If so, is there also a binocular stream? Is it possible to determine from which stream the data in the surface export is? Thanks in advance!
Please check out this draft of how the matching works: https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f In order to check whether the gaze was mapped monocularly or binocularly, you will have to look at the gaze_positions.csv file. Its base_data field contains the eye ids and timestamps of the pupil data that was used to generate the gaze datum.
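If it helps, here is a small sketch of such a check. It assumes the base_data entries have the form "<pupil_timestamp>-<eye_id>", separated by spaces, and the CSV path is just an example; please verify against your own export before relying on it:

import pandas as pd

def matching_type(base_data):
    # collect the eye ids referenced by this gaze datum's base_data entries
    eye_ids = {entry.rsplit('-', 1)[-1] for entry in str(base_data).split()}
    if eye_ids == {'0', '1'}:
        return 'binocular'
    return f"monocular (eye {eye_ids.pop()})"

df = pd.read_csv('exports/000/gaze_positions.csv')
df['matching'] = df['base_data'].apply(matching_type)
print(df['matching'].value_counts())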
You can start the calibration via self._pupilRemote.send_string('C'); self._pupilRemote.recv_string(), too. Does that work?
Generally, I think you will have to keep the program alive so that it has time to send out the message to Capture.
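To illustrate that with your PupilConn class from above: giving the freshly connected PUB socket a moment before sending, and keeping the process alive afterwards, should be enough (the one-second values are arbitrary):

import time

if __name__ == "__main__":
    pupil = PupilConn()
    time.sleep(1.0)           # let the PUB/SUB connections finish establishing
    pupil.startCalibration()
    time.sleep(1.0)           # keep the program alive so the notification actually goes out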
Thanks for your answer, it works this way, but I need to be able to send messages in order to configure some parameters such as the fixation duration or the dispersion. Do you have any suggestions?