Hello, I am reaching out because my research team is a little stuck with the 'Surface Tracker' plugin. In our videos, not all 4 markers are always visible, because the study participants did not always keep them in view. Is there a script that allows you to define the four AprilTag IDs that define the corners of the AOI? Otherwise, other markers are mistakenly picked up as the missing corners in the video.
Hi 👋 Do I understand correctly that there is no single frame in the whole video during which all 4 markers are detected at the same time? Did you know that you can add and remove markers from a surface definition in Pupil Player? You might have defined your surface on more markers than intended.
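If you want to experiment with filtering detections by ID yourself, here is a minimal sketch using the `pupil-apriltags` package (the tag IDs and the filename are hypothetical placeholders; adjust them to your setup):

```python
import cv2
from pupil_apriltags import Detector

# Tag IDs that mark the four corners of the AOI (hypothetical values)
CORNER_IDS = {3, 7, 12, 21}

# Pupil's printable surface markers use the tag36h11 family
detector = Detector(families="tag36h11")

# Load any scene-camera frame and convert to grayscale for detection
frame = cv2.imread("scene_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Keep only detections whose IDs belong to the AOI corners
corners = [d for d in detector.detect(gray) if d.tag_id in CORNER_IDS]
for d in corners:
    print(d.tag_id, d.center)  # d.corners holds the 4 corner points of each marker
```

Detections from markers that are not in `CORNER_IDS` are simply discarded, so stray tags in the scene cannot be mistaken for AOI corners.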
Hello, good day! I am very happy to have found your product. Unfortunately, I don't have access to purchase the glasses, but I would like to get the software source code and work with it. Can anyone help me with this? What should I do? Can I use the software without the glasses?
I responded to this in 🕶 invisible (for future reference - please post only in one channel - thanks 😸 )
Thank you, I saw this after sending messages to the other channels 😅
Friends, I want to run the software with Python. Do you have step-by-step documentation?
You mentioned that you don't have any of the hardware yet. Could you clarify what you are trying to test?
Generally, the two products (Pupil Core and Pupil Invisible) have two software components: a realtime recording component and a post-hoc data processing component (which can export the recorded data in CSV format). Which of these components are you interested in?
Unfortunately, I do not have access to purchase the hardware. I want to use the software to record with my webcam and find out which part of the desktop I am looking at. Without the hardware, can I run the code, and modify it if necessary, so that I can tell which part of the computer desktop I am looking at?
Thank you for clarifying! Using a webcam for eye tracking corresponds conceptually to "remote eye tracking". Our software does "head-mounted eye tracking", which works differently. In other words, you will not be able to use your webcam with our software. 😕
Thank you very much for your good explanation. I thought maybe I could modify the code to make this work with my webcam.
Given the conceptual differences between remote and head-mounted eye tracking, you might be better off using software that is designed for remote eye tracking from the start. You won't be able to use any of the Pupil Core software components, as they rely on assumptions that do not hold for remote eye tracking.
Hello 👋 I'm doing research (iris muscle, point tracking, etc.) by applying various image processing to pupil images. Currently I work with saved images (from Pupil Capture), but I want to process the images acquired from the pupil camera in real time using opencv-python. Can you give me any comments or references on that?
See this example of how to receive frames in real time: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
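For reference, here is a condensed sketch along the lines of that helper, assuming Pupil Capture (or Pupil Service) is running locally with Pupil Remote on its default port 50020:

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Connect to Pupil Remote (default address; adjust if you changed it)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    """Send a notification to Pupil Capture/Service via Pupil Remote."""
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# Start the Frame Publisher plugin so frames are streamed in BGR format
notify({"subject": "start_plugin", "name": "Frame_Publisher",
        "args": {"format": "bgr"}})

# Ask Pupil Remote for the subscription port and subscribe to frame topics
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
# "frame.world" is the scene camera; "frame.eye.0"/"frame.eye.1" are the eye cameras
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.")

while True:
    topic = sub.recv_string()
    payload = msgpack.unpackb(sub.recv(), raw=False)
    extra = []
    while sub.get(zmq.RCVMORE):  # raw image bytes follow as extra message parts
        extra.append(sub.recv())
    if topic.startswith("frame.eye") and extra:
        img = np.frombuffer(extra[0], dtype=np.uint8).reshape(
            payload["height"], payload["width"], 3
        )
        # `img` is now a regular BGR array, ready for OpenCV processing
```

The Frame Publisher plugin must be running for frames to be published at all; the notification above starts it and selects BGR so the raw buffer maps directly onto an OpenCV-compatible image.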
Thank you! I checked the link you shared, and I have an additional question. I had to run the Pupil Capture and Pupil Service programs first to be able to use real-time pupil images in Python. Is there any way to control a program like Pupil Service from Python?
Hi, technically yes, since the applications are Python-based. But at the moment the application is not designed to be used as a module; you would need to make the corresponding changes yourself.
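That said, you can launch and remote-control the application from Python without modifying it. A minimal sketch, assuming the executable is reachable as `pupil_service` (the name and path are assumptions; adjust for your install):

```python
import subprocess
import time
import zmq

# Launch Pupil Service as a separate process
# (executable name is an assumption; adjust for your platform/install)
service = subprocess.Popen(["pupil_service"])

# Give the application a moment to start up and open Pupil Remote
time.sleep(5)

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

# Pupil Remote accepts simple string commands,
# e.g. "t" = get current Pupil time, "v" = get version
remote.send_string("t")
print("Pupil time:", remote.recv_string())

# ... subscribe to data / send notifications as in the frame example above ...

service.terminate()  # shut the application down again
service.wait()
```

This keeps the application as a black box and talks to it over the network API, which is more robust across releases than importing its internals as a module.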