Responding to your message (https://discord.com/channels/285728493612957698/1113359388997058570/1181144919096705046) here. @user-b96a7d. Would you be able to further elaborate on your use of the network api - are you rendering the video frames + gaze overlay or just streaming the raw gaze coordinates?
Hi - I was trying to run the gaze_contingent_demo in PsychoPy and this is the error I receive:
Traceback (most recent call last):
  File "C:\Program Files\PsychoPy\lib\site-packages\psychopy\app\utils.py", line 598, in save
    with open(self.file, "w") as f:
PermissionError: [Errno 13] Permission denied: 'C:\Program Files\PsychoPy\psychopy-gaze-contingent-demo-main\README.rst'
Hi, @user-7413e1 - two notes.
1. It looks like you downloaded the gaze contingent demo into your PsychoPy install folder. Instead, you'll want to download it to one of your user folders, such as somewhere within "My Documents", "Downloads", or "Desktop".
2. Once you've done that, be sure to open the gaze_contingent_demo.psyexp file in PsychoPy.
Hi, looking for advice: when subscribing to the surface tracker, it seems to freeze my Python code when no surface/gaze on the surface is detected in that moment. However, subscribing to "gaze." pushes a message about the gaze information every frame with no freezing. Even using a try/except when subscribed to 'surface' doesn't prevent the program from hanging.
I assume this is because of how sub.recv_string() and recv_multipart() work, but is there a way to even just get a "surface: none detected" message so that the program doesn't hang?
I tried using threading instead to wait for the surface to be detected, but it seemed to run even slower in the thread. Additionally, once the surface was detected, there seemed to be a backlog of messages waiting to be printed, which were definitely not caught up, as it showed I was looking between surfaces 1, 2, 3, and 4 even nearly 20 seconds after looking away from any surface.
Is there by any chance an open-source GitHub repository that I can view for the Neon Companion Monitor from http://neon.local:8080/
Not at this time. Are you interested in the client-side (receiving and rendering data streams) code? This can be accomplished using the real-time API (https://docs.pupil-labs.com/neon/real-time-api/tutorials/) for which we provide Python samples
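If it helps as a starting point, here's a rough sketch of the client side using the simple variant of the real-time API. This assumes the pupil-labs-realtime-api Python package from those docs is installed and the Companion device is on the same network; please double-check names like receive_matched_scene_video_frame_and_gaze against the tutorials.

```python
# Minimal sketch: receive matched scene frames + gaze and render an overlay.
import cv2
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # blocks until a device is found on the network

try:
    while True:
        # Matched scene-camera frame and gaze sample
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        img = frame.bgr_pixels
        cv2.circle(img, (int(gaze.x), int(gaze.y)), 40, (0, 0, 255), 10)
        cv2.imshow("Scene camera + gaze", img)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    device.close()
    cv2.destroyAllWindows()
```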
Can you share your code here?
I'm using the method from here https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
But instead of: if surfaces["name"] == "surface 1"
I'm doing: if "Surface" in surfaces["name"]
To handle 10 surfaces
I'm looking into building a similar monitor app. I've looked at the provided API links and managed to create a working video feed, but I'm unsure how to approach connecting it to a UI so that I could control some of the features, such as starting/stopping recording with a button, manually adding events, etc.
Are there any GUI toolkits that you're familiar with or have a preference for in Python?
One easy method for rendering the image to the display would be to simply use OpenCV's imshow function.
I'm currently using imshow to render the image. Is it possible to build the UI off of the imshow function?
You can, but it's not the friendliest experience since it's not really built for that use. For a very simple UI it's fine, but for anything non-trivial, I'd definitely recommend using some type of UI toolkit (like tkinter, PySide, wxWidgets, etc.)
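To give you an idea, here's a very rough tkinter sketch that wires a few buttons to the simple real-time API (recording_start, recording_stop_and_save, and send_event come from the pupil-labs-realtime-api package; the button labels and event name are just placeholders):

```python
# Rough sketch, not a polished app: control recording and events from tkinter.
import tkinter as tk
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # assumes a Companion device on the network

root = tk.Tk()
root.title("Companion control sketch")

tk.Button(root, text="Start recording",
          command=device.recording_start).pack(fill="x")
tk.Button(root, text="Stop & save recording",
          command=device.recording_stop_and_save).pack(fill="x")
tk.Button(root, text="Send event",
          command=lambda: device.send_event("my event")).pack(fill="x")

try:
    root.mainloop()
finally:
    device.close()
```

Rendering the video feed inside the same window would then be a matter of converting each frame into a format your toolkit understands (e.g. a PhotoImage in tkinter) instead of calling imshow.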
Ah, yes - recv_string blocks by default. Try this instead:
try:
    topic = sub.recv_string(zmq.NOBLOCK)
    msg = sub.recv()
    ... snip ...
except zmq.ZMQError:
    # No surfaces - should probably sleep here
    pass
See: https://pyzmq.readthedocs.io/en/latest/api/zmq.html#zmq.Socket.recv
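For context, a fuller non-blocking version (adapted from the filter_gaze_on_surface.py helper you linked; the address, port request, and surface-name check are assumptions to adjust for your setup) could look something like this:

```python
# Sketch: poll surface-mapped gaze without blocking when no surface is visible.
import time
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to surface events
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.")

while True:
    try:
        topic = sub.recv_string(zmq.NOBLOCK)
        payload = msgpack.unpackb(sub.recv(), raw=False)
        if "Surface" in payload["name"]:
            for gaze in payload.get("gaze_on_surfaces", []):
                print(payload["name"], gaze["norm_pos"])
    except zmq.Again:
        # No surface message waiting right now - sleep briefly or do other work
        time.sleep(0.005)
```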
worked like a charm! thanks as always 🙂
Hello! Does anyone know if there's a way to get the old Pupil Mobile APK? We need to use a phone for running tracking in our study, but are using a version of the pupil labs for HoloLens system, not Neon, and the app is no longer on the app store.
Hi @user-9973bb. Pupil Mobile is no longer available, unfortunately. What's preventing you from tethering to a laptop/desktop and running Pupil Capture?
I'm not sure if this is the right channel, but hopefully I've hit the right one.
We are trying to implement an image mapper in Python.
Unfortunately, there is no good documentation anywhere on the Pupil Labs site on how to do this. There is this page: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics
But it only shows the implementation for determining the AOIs.
What we need for the project is an implementation from scratch, i.e. code that combines a reference image with a movie in order to map fixations from the movie onto a static image.
Because if that worked, then, if I understand correctly, we could use the link above to determine the AOIs on the reference image?
I would greatly appreciate your help.
When you say a "movie", do you mean some type of dynamic, screen-based content? If so, have you seen this article? https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/
Hi everyone, I wanted to create a simple player that streams pupil invisible or neon cameras via rtsp. I wanted a way to test the RTSP streaming. Can I use usual video players (like VLC player) for testing?
Yes! You can view the cameras as a network stream using RTSP on port 8086. For example, rtsp://192.168.0.116:8086/?camera=world. You can also change world to eyes.
On my computer/network, VLC won't open the stream using neon.local - only the IP address works. It might work on yours, but if it doesn't, try using your companion device's IP address instead.
Also note that VLC's default buffering is a bit aggressive, which leads to a delay. If that's a problem for you, you might try using ffmpeg directly instead. I've had good luck with this combination of flags: ffplay -fflags nobuffer -flags low_delay -framedrop "rtsp://192.168.0.103:8086?camera=world"
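And if you end up building your own little player in Python, OpenCV can usually open the same stream (the IP address below is just an example; latency will depend on the backend's buffering):

```python
# Minimal sketch: read the RTSP scene stream with OpenCV and display it.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.0.116:8086/?camera=world")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Scene camera (RTSP)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```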
The study requires mobility that even a PC backpack may restrict. I know it is no longer available, but is there any way you can provide the old APK for us to sideload? Even if it is no longer supported, we just need to be able to run this off of a pocketable device. I have found links online to the archived APK but the websites are extremely suspicious and I would really prefer not to have to take that route.
Unfortunately, the APK is no longer available. But you could in theory use a smaller device, like a small-form factor tablet style PC, to make Core more portable.
Now that there is a local Neon environment, does that get recorded on a phone?
Hi @user-e637bd! Could you please elaborate? Neon has always recorded locally on the phone; you can then choose whether or not to upload the data to Pupil Cloud. Have a look at the first steps here: https://docs.pupil-labs.com/neon/data-collection/first-recording/
I was just asking based on the above comment: "Unfortunately, the APK is no longer available. But you could in theory use a smaller device, like a small-form factor tablet style PC, to make Core more portable."
I just wanted to make sure that the same process for Core exists with Neon, it's just a different app?
Pupil Core and Neon are different eye trackers and they utilise different technologies.
Gotcha! My concern is that the site officially lists the requirements as an Intel i7 minimum, but portable PCs will run on something like an Atom processor. Have you ever seen Pupil Core run successfully on weaker hardware?
When running on lower-powered hardware, one way to speed things up is to disable real-time pupil detection and gaze estimation. These can then be run in a post-hoc context. Just be sure to record the entire calibration choreography – highly recommend running pilot trials with this approach prior to study data collection.
I'm currently conducting an experiment at a commercial facility using the pupil invisible. After capturing data and adding Events on pupil cloud for both the entry and exit times, is there a way to download the Timeseries Data for the duration between these events?
Hi @user-f03094! That's currently not possible. However, there's an events file (containing all of the entry/exit times you manually added) included in the export. So it's possible to match those events with the timestamps in your other exported timeseries data, thus enabling analysis of sections of data.
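For example, with pandas you could slice gaze.csv between two of your events. A rough sketch, assuming the standard Timeseries export layout and event names "entry"/"exit" (rename to match the events you created):

```python
# Sketch: keep only gaze samples recorded between two events from a
# Pupil Cloud Timeseries export. Paths and event names are placeholders.
import pandas as pd

gaze = pd.read_csv("export/gaze.csv")
events = pd.read_csv("export/events.csv")

start_ns = events.loc[events["name"] == "entry", "timestamp [ns]"].iloc[0]
end_ns = events.loc[events["name"] == "exit", "timestamp [ns]"].iloc[0]

between = gaze[(gaze["timestamp [ns]"] >= start_ns) &
               (gaze["timestamp [ns]"] <= end_ns)]
print(f"{len(between)} gaze samples between the two events")
```

The same idea works for the other timeseries files, keeping in mind that some (e.g. fixations) use start/end timestamps rather than a single timestamp per row.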
Hi, I'm using Pupil Invisible and I'm trying to define AOIs in the reference image. I've copied the code from this page https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#dependencies-of-this-guide, but when I try to run the program, this error appears (the code in the image is the one that causes the error).