Hey, I was wondering what the default resolution of the scene camera is on the Neon glasses (serial 221136) and on the Invisible glasses (serial 8GEKB)?
Hi @user-2255fa! The Neon scene camera resolution is 1600x1200 px, as outlined here, while Invisible's is 1088x1080 px.
Ok thx!
Hi. I'm trying to use "receive_matched_scene_video_frame_and_gaze" from the simple realtime_api on a Pupil Neon, but it blocks on that line forever and returns nothing. I checked it on a Pupil Invisible and it worked fine. How should I fix this?
Hi @user-46e202! Are you running this example? What does device print?
The code stays at "Connecting to Device(ip=10.181.112.159, port=8080, dns=None)...". I'm using slightly different code: I define the device with its IP rather than searching for it (since I'm connected to a hotspot that doesn't allow zeroconf DNS).
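Roughly, this is the pattern I'm using (a minimal sketch; the IP is just the one from the log line above):

from pupil_labs.realtime_api.simple import Device

# connect by IP and port directly, since zeroconf discovery is blocked on this hotspot
device = Device(address="10.181.112.159", port="8080")
print(f"Connecting to {device}...")

# this should return a matched scene frame + gaze pair, but it never comes back
frame, gaze = device.receive_matched_scene_video_frame_and_gaze()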
Thanks for following up, so the device is clearly connected to the phone. May I ask: if you access 10.181.112.159:8080 in your web browser, do you get the scene and gaze signal?
Thank you for the response. I get the gaze signal, but I don't get the world camera signal.
Hi all, I am new to eye tracking and would like to try analysing some output before investing in a tracker. Are there any other datasets besides the one shared in the documentation? Any help would be much appreciated.
Hi, I made my own calibration markers in PTB, but I just wanted to confirm whether the exact radius matters - in the PDF from your docs the radius is not fixed and each ring has a different radius. Does that matter?
@user-93ff01, the radius does not matter per se, but the markers will need to be large enough to be clearly visible in the scene camera, and thus robustly detected. But that is easy enough to figure out with some trial runs in Pupil Capture!
Hi @user-b65d27! May I ask what kind of analysis you would like to do, so that I can better understand why that dataset was not appropriate for your tests? Also, which device do you plan to use: Pupil Core or Neon? You can also check the format of the recording data for Neon and for Core.
Hi, I'm looking to do analysis of football (soccer) data using the Neon tracker. Thanks
Hi Pupil team, I was wondering if there was a way to connect to a specific pair of glasses using phone IP or maybe serial number? We have multiple devices turned on that are on the same network, but I only want to connect to a specific device at a time. Is there a method with the real-time async API that allows me to pick what device to connect to by user input?
If you'd like to present the user with options, you can search for all devices.
Once you know which device, you can connect to it using Device.from_discovered_device, passing one of the return values from the device discovery method, or by manually specifying the IP and port in the Device constructor.
Ok thx. Are you able to show me a little code snippet on how to connect using Device.from_discovered_device?
Sure - something like this:
import asyncio
from pupil_labs.realtime_api import Device, discover_devices


async def main():
    print("Available devices...")
    ordinal = 0
    available_devices = []

    # list every device that announces itself on the network within the timeout
    async for device_info in discover_devices(timeout_seconds=5):
        ordinal += 1
        print(f"  #{ordinal:>4}. {device_info.name} {device_info.addresses}")
        available_devices.append(device_info)

    # let the user pick a device by number
    print("\nSelection #", end="")
    choice = int(input())
    device_info = available_devices[choice - 1]

    # connect to the chosen device and query its status
    async with Device.from_discovered_device(device_info) as device:
        status = await device.get_status()
        print(f"Connected to {status.phone.ip}")


if __name__ == "__main__":
    asyncio.run(main())
For more snippets, be sure to check out the examples in the official documentation!
Hey hey,
My team and I are having some issues with Pupil Cloud. On some of our videos collected with the mobile eye tracker, the enrichment does not work at all. Our enrichments use the Reference Image Mapper. Some of the videos are mapped nicely, but others give no results at all; they just keep loading. We don't know what makes the difference between the videos that work and the ones that don't, because no error is shown about the reference image or anything else. Does anyone have ideas on how to get the enrichments to work?
Hi @user-06b8c5! What do you mean by loading? Is it the enrichment status spinning, or something else? Do you have a screenshot?
We have a task in PsychoPy that sends timestamps to our ECG system, but we were hoping we could send those same timestamps to our glasses as well. The code I found only detects one pair of glasses, but we currently have two people wearing glasses at the same time. Is there a way to get PsychoPy to send these timestamps to both Pupil recordings automatically?
Although PsychoPy provides a nice, universal API for different eye trackers, the system is really designed to be used with just one at a time. So to achieve what you want, you'll need to interface with our Python library directly. Which of our eye trackers are you using?
If Core, you'll want to see the documentation for the Network API. For Neon and Pupil Invisible, you need to use the Realtime API
We are using Pupil Invisible. I found some code to connect to one device, but I can't figure out how to do it for both devices at the same time. Is there an example somewhere? Sorry... we are kind of new to this.
There are two simple ways you could go about this.
Method 1: Search for all devices and connect automatically
from pupil_labs.realtime_api.simple import discover_devices
devices = discover_devices(search_duration_seconds=3)
You should not rely on the order of the devices in the list. If you need to access each specific device individually, you'll need to add additional logic (see the sketch below) or use Method 2 instead.
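If you need to tell the devices apart, one option - assuming each Companion phone has a distinct name and that the simple Device exposes it as phone_name, which I believe it does - is to key the connections by phone name (the names below are placeholders):

from pupil_labs.realtime_api.simple import discover_devices

devices = discover_devices(search_duration_seconds=3)

# map each connection to the name of the Companion phone it belongs to
devices_by_phone = {device.phone_name: device for device in devices}

# use the names shown in your own Companion apps here
alice_device = devices_by_phone["Alice's OnePlus"]
bob_device = devices_by_phone["Bob's OnePlus"]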
Method 2: Use the device IPs and connect to each device manually. You will, of course, need to look up the IPs, which you can find in the Companion app.
from pupil_labs.realtime_api.simple import Device
devices = [
    Device(address="192.168.1.169", port="8080"),
    Device(address="192.168.1.170", port="8080"),
]
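With either method, once you have both Device objects, sending the same marker to both recordings is just a loop. A minimal sketch (the event name is only a placeholder):

import time

# take a single timestamp so both recordings receive the same event time,
# independent of when each send_event call actually runs
timestamp_ns = time.time_ns()
for device in devices:
    device.send_event("stimulus_onset", event_timestamp_unix_ns=timestamp_ns)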
Dear sir, thank you for your help. Could you please answer the following? Best regards.
Q1. As shown in the attached image, data has not been uploaded to Pupil Cloud since yesterday (the downloaded ZIP files cannot be opened). Is there a known cause for this? I have tried with both Pupil Neon and Pupil Invisible.
This problem was solved. Thank you.
Hello
I have some questions about gaze RTSP streaming on Pupil Neon.
1. The last byte of the stream is always 255, even if no one is wearing the glasses. I checked the same code with Pupil Invisible, and there the last byte is 0 when the glasses aren't worn.
2. When I use Python to receive the gaze stream data (using socket.recv), I get 21 bytes every time, while it should have been 9 bytes. Can you give me some instructions on what to use instead if socket.recv is wrong?
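For context, this is roughly how I was trying to interpret the 9-byte payload (assuming x and y are little-endian float32 followed by the worn byte mentioned above); the extra bytes are what confuse me:

import struct

# hypothetical helper: unpack a 9-byte gaze payload
# assumed layout: x (float32 LE), y (float32 LE), worn (uint8)
def parse_gaze_payload(payload: bytes):
    x, y, worn = struct.unpack("<ffB", payload[:9])
    return x, y, worn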
Interfacing with the RTSP stream directly is an advanced use case, and we recommend using the Python client instead. It handles the lower-level details and provides a much friendlier API.
It's also open source, so you may find answers to specific implementation details there
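As a point of comparison, reading a single gaze datum with the Python client looks roughly like this (a minimal sketch using the simple API):

from pupil_labs.realtime_api.simple import discover_one_device

# blocks until a device is found on the local network
device = discover_one_device(max_search_duration_seconds=10)

# blocks until the next gaze datum arrives
gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y, gaze.worn, gaze.timestamp_unix_seconds)

device.close()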
Thank you for your answer. The problem is that I'm implementing an Android app, so I was checking the stream with Python before implementing it in Kotlin. I was using this documentation: https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html#streaming-api
Hi, do we know what the pupil diameter accuracy is?
Hi @user-75fede. Which system are you referring to, Core or Neon?