Hi @user-f43a29 - We are trying to implement real-time marker mapper functionality using the real-time API. We have the Pupil Neon glasses and were able to successfully test the code for one surface. However, the screen gaze coordinates seem off for the second surface, even though we're using the same function to generate the markers (AprilTags).
Do the AprilTags placed in 4 corners of the screen need to be in a specific order? Thanks in advance for your help!
Hi, @user-5df8a1 - if I may, I authored the real-time screen gaze package. The order of the AprilTag markers isn't important, but I believe the order of the corner vertices for each individual marker is important. It should be: top left, top right, bottom right, bottom left.
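To make that concrete, here's a minimal sketch of how I'd register two screens with that vertex order. The marker IDs, pixel coordinates, and screen size are placeholder values, and the API usage is from my reading of the real-time-screen-gaze README, so please double-check it against the repo:

```python
from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

device = discover_one_device()
gaze_mapper = GazeMapper(device.get_calibration())

# Vertices for each marker are given in screen pixels, ordered:
# top left, top right, bottom right, bottom left
marker_verts_screen_1 = {
    0: [(0, 0), (100, 0), (100, 100), (0, 100)],                # top-left corner of screen 1
    1: [(1820, 0), (1920, 0), (1920, 100), (1820, 100)],        # top-right corner
    2: [(1820, 980), (1920, 980), (1920, 1080), (1820, 1080)],  # bottom-right corner
    3: [(0, 980), (100, 980), (100, 1080), (0, 1080)],          # bottom-left corner
}
screen_1 = gaze_mapper.add_surface(marker_verts_screen_1, (1920, 1080))

# A second surface is added the same way, with its own marker IDs and size, e.g.:
# screen_2 = gaze_mapper.add_surface(marker_verts_screen_2, (1920, 1080))
```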
Hi, I'm running Pupil Core Capture from source on Ubuntu 22.04.4. Capture runs and is able to connect to the Core cameras, but the mouse is not able to interact with the program. Buttons don't work or respond, and there aren't even close/minimize/fullscreen buttons at the top right. Have you had this problem come up before? Perhaps I need sudo permissions? Thank you.
Hi @user-eebf39! Do you have the same issue when you try to run from the bundled release?
Thanks @nmt, when I install the bundled release (sudo dpkg -i debfile) the program crashes before running with the error 'Unhandled SIGSEGV: A segmentation fault occurred.' From what I understand, to use Pupil Capture with Neon I need to run it from source, which is why I've been taking that route.
@nmt Figured it out. Turns out my graphics card needed drivers.
Great to hear!
Hi @user-e6afe3 !
First of all, please forgive my sketching skills here. So, basically in your Unity3D setup you probably have a 3D environment and a virtual camera with a viewport. Whatever is on that viewport is what you are showing on the monitor's screen in your experiment. Is that correct? Or did we get it all wrong?
We provide you with the gaze in scene camera coordinates and, thanks to the fiducial markers (AprilTags), also in the coordinates of a plane/screen.
So now that you have this normalised gaze point over the viewport (kind of like a window), you can cast a ray that passes through that point and extend it until it hits an object.
Unity provides a method that allows you to cast a ray (it can be invisible) from an origin in a given direction; then, if objects in the environment have colliders, you can check whether they have been hit. https://docs.unity3d.com/ScriptReference/RaycastHit.html
But there are "infinite" possible rays passing through a point, so you also need to define an origin. That origin can be Unity's virtual camera, and you can simply cast the ray from that origin through the viewport at that normalised position.
But that raises other questions: if you want to make it more realistic, you would probably set the viewport (in Unity) and the camera's origin to resemble your monitor's and the wearer's positions.
Thanks again, you are correct about the setup, and thanks for the sketch. I have another question: would I need to connect Pupil to Unity so the gaze point is the origin of the ray, since the gaze point will be constantly changing as the wearer looks at different objects in the scene? If so, what would be the best way to do that?
Hello All,
I was wondering how to get the x/y gaze coordinates from the eye video in Python? Also, do you have any tips regarding blink detection?
Hi @user-9f3e92, may I know if you're trying to get gaze coordinates from the eye video while recording, or do you want to get the coordinates from a recorded video?
For blink detection, you can do it offline. This guide shows you how, both in real time as you make your recording, and offline, after the recording has been made.
I'm using a Neon and recording from the phone.
Hi @user-07e923, thanks for the prompt reply. I am trying to recover the gaze from a recorded video exported from the phone. Thanks for the link; I will have a good look at it.
If the video was exported from the phone, you can import it into Neon Player to get gaze data. Please note that Neon Player isn't as powerful as Pupil Cloud. For example, by default, Gaze is computed at 200 Hz in the Neon Companion app. However, you can choose to down-sample this computation during recording. If you did this, then it won't get recomputed to 200 Hz in Neon Player. Only Pupil Cloud offers recomputation to 200 Hz.
But if you want to process the raw data files yourself, then you can use this source code to grab gaze data directly, or modify the code to your liking.
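If you do go the raw-files route, a rough sketch of what reading the gaze stream can look like is below. It assumes the native recording stores gaze as little-endian float32 (x, y) pairs in 'gaze ps1.raw' with one uint64 nanosecond timestamp per sample in 'gaze ps1.time' — that layout is an assumption on my part, so please verify it against the source code linked above:

```python
import numpy as np

rec_dir = "path/to/your/recording"  # folder exported from the phone

# Assumed layout: gaze ps1.raw = float32 (x, y) pairs in scene-camera pixels,
# gaze ps1.time = uint64 UTC timestamps in nanoseconds, one per gaze sample
gaze_xy = np.fromfile(f"{rec_dir}/gaze ps1.raw", dtype="<f4").reshape(-1, 2)
gaze_ts_ns = np.fromfile(f"{rec_dir}/gaze ps1.time", dtype="<u8")

print(f"{len(gaze_xy)} gaze samples")
print("first sample (x, y):", gaze_xy[0], "at", gaze_ts_ns[0], "ns")
```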
Hello there! I am searching for a repository that converts raw data or CSV plus the scene camera video into a rendered eye-tracking movie. Of course I know about https://github.com/pupil-labs/neon-player (or the export-only script: https://github.com/pupil-labs/pl-rec-export), but if possible, I want to use a CLI version or code without a UI...
Hi, @user-43ec86! We are actively working on a new Python package that you might consider for this and an example of exactly what you're looking to do.
Please keep in mind that this library is still under construction. Feedback is appreciated!
@user-cdcab0 thanks for the reply! That helps me!! But sorry, it seems a public user like me has no access to this content (it returns a 404). Could you please confirm?
Sorry about that - I've updated the repo visibility so you should be able to see it now
Thank you!! I've accessed the repo. If I have any problems, can I post an issue?
Of course!
happy to meet such a great team!!
Is there any way in the software for the Pupil Core to adjust the intensity of the IR LEDs?
Hi @user-afca69, thanks for reaching out! It's not possible to change the intensity of the IR LEDs. If you find that the eye images are over-exposed, you can change the absolute exposure time by clicking on the video source icon.
@user-5f1abb FYI I have updated the script to use gpt-4o by default (cheaper and faster) and also added an async version.
Thank you!!!
Dear support team, I am trying to use the Pupil Labs blink detector that is published on GitHub. There are some tutorials available; unfortunately, they show an example of the simple Real-Time API with the blink detector. I would like to use the async one rather than the simple one. In order to do that, a member of your team explained to me that I had to modify the helper.py file. I tried some things and got some results, but I also got new errors in the blink_detector.py file. I will paste the error below. Do you have any advice to share with me on how to use the blink detector with the async Real-Time API? Thanks for all.
File "c****\real-time-blink-detection\BlinkThread.py", line 45, in async_run
blink_event = next(blink_detection_pipeline(left_images, right_images, timestamps))
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 319, in extract_blink_events
for event1, event2 in pairwise(onsets_offset_events):
File "c:real-time-blink-detection\blink_detector\helper.py", line 169, in pairwise
next(b, None)
File "c:real-time-blink-detection\blink_detector\blink_detector.py", line 277, in compile_into_events
for event_type, samples in groupby(samples, key=lambda sample: sample[0]):
File ".venv\Lib\site-packages\more_itertools\recipes.py", line 753, in convolve
for x in chain(signal, repeat(0, n - 1)):
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 258, in <genexpr>
probas = (p for p, ts in probas_timestamps1)
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 186, in predict_class_probas
features = np.hstack([window[i][0] for i in indices])[None, :]
Hello, I have a question about Pupil Capture. I followed the developer docs to receive all gaze data from Pupil Core, and it seemed to work well last year. However, the code no longer works today when I change 'pupil.' into 'gaze.' in line 41. The full code is shown below. Could you give me some suggestions? Many thanks!
Hi @user-b02f36, may I check if you've calibrated the system first before running the script?
@user-b02f36, I've moved your message here to software-dev as it fits this channel better
Yes, Wee. I'm using Windows 11 22H2 and running in a virtual environment with Python 3.11.7.
Could you elaborate on "it cannot work when I change into 'gaze'"? What sort of errors are you seeing? It would also be helpful to learn about your goals, since you mention that 'pupil' works.
It worked well 6 months ago, but it doesn't work when I change to 'gaze.'. However, it works well when I subscribe to 'pupil.'.
I apologize for not explaining the problem clearly. Let's focus on line 41. Actually, it is the original code from the Network API guide in the Pupil Core developer documentation. When I follow the guide and subscribe to pupil messages, meaning line 42 is 'subscriber.subscribe('pupil.')', the pupil messages are printed in PyCharm, but I cannot receive any gaze messages when I change line 41 to 'subscriber.subscribe('gaze.')'. So the subscription to gaze data is not successful? But it worked 6 months ago.
Hi @user-b02f36! Could you run the calibration and validation again? Then you can try running this snippet and see if it prints the data.
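For anyone following along, the snippet is essentially the standard Network API pattern from the Pupil Core developer docs; a minimal version looks roughly like this:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```

If no calibration has been run, no 'gaze.' messages are published, which matches the behaviour you saw.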
Hi, Miguel. I have run the calibration again. It works now.
Hi @user-e6afe3! Yes, you would need to connect Pupil Capture to Unity. To do the raycast method I mentioned, perhaps the best way would be to use a modified version of hmd-eyes where you strip out the part that uses the headset but leave the streaming part. But in general, what you need is a ZMQ library to access the data from Unity.
Was my previous message well received on your side? Unfortunately, I didn't get a reply from you. Is it due to a lack of time? Should I ask someone else? Thank you in advance.
Hi @user-a23b90 , without knowing what was changed in the Blink Detector code, it is difficult to say what exactly went wrong. Could you copy-paste the full error message? The message here (https://discord.com/channels/285728493612957698/446977689690177536/1250401698938228826) is missing the final part that says what the error was.
My bad, here is the full error message:
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 319, in extract_blink_events
for event1, event2 in pairwise(onsets_offset_events):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:real-time-blink-detection\blink_detector\helper.py", line 169, in pairwise
next(b, None)
File "c:real-time-blink-detection\blink_detector\blink_detector.py", line 277, in compile_into_events
for event_type, samples in groupby(samples, key=lambda sample: sample[0]):
File ".venv\Lib\site-packages\more_itertools\recipes.py", line 753, in convolve
for x in chain(signal, repeat(0, n - 1)):
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 258, in <genexpr>
probas = (p for p, ts in probas_timestamps1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "\real-time-blink-detection\blink_detector\blink_detector.py", line 186, in predict_class_probas
features = np.hstack([window[i][0] for i in indices])[None, :]
^^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
Hi @user-a23b90 , so I looked into it more deeply, and it isn't a trivial adaptation. We might revisit it in the future, but there's nothing concrete. As @nmt said, if you need a custom (async) real-time implementation as a priority, we offer those as part of our consultancy packages.
Hi @user-a23b90, sorry for the delay. I'm briefly stepping in for @user-f43a29 here. After reviewing what you've got, it's difficult to say exactly why it's not running. If it's not working as is, the code might need to be refactored to work with async operations. To be transparent, we don't have a concrete plan to do this at the moment, although theoretically it should be possible and might be revisited at a later date. @user-f43a29 will take a quick look later and see if it's an easy fix. For context, Alpha Lab content is essentially experimental demonstrations and not officially supported product offerings, so it's expected to be sometimes incomplete or a bit rough around the edges! If you do need a custom real-time implementation as a priority, we do offer those as part of paid packages, just to bring that onto your radar!
Thank you @user-07e923, it works well. Could you please explain the relationship between world video frames and gaze points? It seems to be about 6.8 gaze points per frame, which is likely due to the different camera frame rates: ~198 fps (sensor) vs ~29 fps (scene). How do you translate this into a single gaze point per frame? I could take the mean of ~7 points for one frame, but wouldn't that produce a large latency after a few seconds of recording?
Yes, that is correct. The scene camera runs at 30 Hz while the eye cameras run at 200 Hz. You can see the specs here. This means that you'll always have more gaze data than scene camera data. Note that 1 scene frame ≈ 0.033 seconds (33 ms).
As for having only a single gaze point for a single frame, this is somewhat difficult to answer. Do you have something in the scene which you'd like to map? Or are you looking for the average gaze position?
Hi @user-07e923. Here is my problem: I used the function receive_matched_scene_video_frame_and_gaze to create software that records the video and gaze streamed from the phone to my laptop, plus a series of programs to analyse the data, and all seems well. After this I decided to record straight on the phone and would like to open the phone files (Neon Scene Camera v1 ps1.mp4 and Neon Sensor Module v1 ps1.mp4) to get the gaze etc., the same as receive_matched_scene_video_frame_and_gaze does, but I could not manage it, which prompted my earlier question where you pointed me to the Pupil Labs recording utilities. Those seem to work fine but return 6.8 gaze points per frame, whereas receive_matched_scene_video_frame_and_gaze seems to return only one, hence my confusion. Yes, I think I'd like to have the average gaze per frame, but as there isn't a whole number of gaze points per frame, I was wondering if you had any suggestions, or am I simply approaching the problem wrongly?
Hi Neon team, I recently encountered an issue with the real-time API. I use send_event() to capture timestamps of events at multiple points of my experimental program. We are running this experiment at a clinical site which has an unstable internet connection. During the experiment, at first the connection was fine and device.send_event() worked fine. But in the middle of the experiment, the connection became very weak and was lost; send_event() failed and kept reconnecting (below I attached the messages it sent to the console), which froze the entire program. Is there a way to avoid freezing the program even when the connection is lost? For example, if it can't find the device at that moment, just skip it, or give it a timeout argument? Any advice?
Here's the error messages:
It would be nice if I could catch this error and move on without waiting for reconnection, and without messing up the timing of my main program.
Hi @user-613324! Moving your messages here to software-dev!
If your device loses its connection and it breaks your programme, it's important to handle such scenarios gracefully to ensure your programme can recover or at least fail without causing issues.
Typically, you'd want to use Exception Handling, e.g. wrap your API calls in try...except blocks to catch exceptions that occur during communication with the device. If you want to reconnect the device automatically, you'd need to implement some reconnection logic as well.
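As a rough illustration with the simple API (the exact exception types to catch, and the max_search_duration_seconds argument, should be checked against the realtime-api docs for the version you're running):

```python
import logging
from pupil_labs.realtime_api.simple import discover_one_device

# Give up on discovery after 10 s instead of blocking forever
device = discover_one_device(max_search_duration_seconds=10)

def send_event_safe(device, name):
    """Try to send an event, but never let a network hiccup block the experiment."""
    if device is None:
        logging.warning("No device connected; skipping event '%s'", name)
        return
    try:
        device.send_event(name)
    except Exception as exc:  # narrow this down once you know which errors occur
        logging.warning("Could not send event '%s': %s", name, exc)

send_event_safe(device, "trial_start")
```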
I also wanted to mention that the devices do not need access to the internet to communicate via the real-time API. They just need to be connected to a local network. If you find that the network you're using is unstable, we recommend using a dedicated router instead.
Hey!!! I was wondering what would be the best way to approach video synchronization with Pupil Labs videos. I have the world video of a Pupil Core and I want to synchronize it with a phone video recorded at the same time. They have different durations, and I'm not sure how to handle the variable frame rate of the Pupil Core capture. If someone else has already solved this issue, I would appreciate a hint.
Hi @user-50ac4b! Was the person who wore the Pupil Core headset also watching the video on the phone? In other words, was the phone video captured by the world video?
No, the phone was behind, recording the scene of the person wearing the eye tracker.
Ok, I see. Then, it might be a bit trickier, since I cannot make any guarantees that the timestamps in the phone video will be accurate.
Did you have a method to temporally synchronize the two devices? Even something like a flash or some marker that is present in the phone video at the moment when the Pupil Core recording started?
I can approximately find the same moment in the two videos by comparing the person's actions, and millisecond accuracy is not important.
Ok, in that case, there are some steps you can try. Note that without knowing the details of your experiment or your phone and how it records video, I cannot make any guarantees about the accuracy of the resulting analysis. It becomes harder to make guarantees without the two videos being more accurately synchronized.
If I may, can I first ask two additional clarifying questions:
- Is your end goal to know where the person was standing when they were looking at a given object/point?
- Have you already completed the experiment, or is this an initial pilot/test experiment?
I have gathered the data I need, so most probably no more recording sessions will happen. What I want to do is have the two videos next to each other (climbing setup: phone video of the person climbing, and the person's gaze with the eye tracker), so I can visualize the gaze points in the phone video (I'm using AprilTags for mapping the points).
Ok, and your end goal is to know where the person was standing when they looked at a given object/point?
(Along with some other things, but if I can make the gaze mapping work across the two camera views, it'll be enough for a start.)
Sorry for the extra clarifying questions, but it will help me give you the best solution I can offer. When you say gaze mapping across the two views, do you mean that you want to know what 3D surface/object they gazed at in the scene at each point in time?
I've tried matching the frames with pts values, but it didn't seem to work.
Yes, just directly comparing pts won't work because the two videos have different timebases, as they've been recorded at different framerates and there's the additional detail that they started at different times.
I don't think 3D is needed. The end goal is to be able to recognize when a person is looking at a climbing hold, but the first thing I want to accomplish is to have the gaze point in the phone video (again, using AprilTags to do the coordinate mapping from the eye tracker to the phone; the phone video is supposed to be stable).
Ok. I need to step away for a bit, but I'll draft up a basic solution that should hopefully help you get along.
Thank you a lot!!
In the meantime, since you mentioned that you have AprilTags in the scene, you can also take a look at the Head Pose Tracking plugin. If the AprilTags were hanging along the climbing route, this might work for you and will provide localization/orientation of the head in the 3D environment, which could be helpful in your case.
Thank you again!! I'll give it a look!
So, you could try the following, but without some sort of external time sync between the two devices, there is some unavoidable inherent error in the method.
* Load the world_timestamps.csv file from the "exports" folder of your recording into your analysis environment and call the result world_timestamps_system_s, as the values are in seconds
* Subtract the first timestamp of world_timestamps_system_s from all world timestamps, so that they are all relative to recording start (i.e., the first timestamp becomes 0 s). Let's call the result relative_world_timestamps_s
* Do the same with the frame timestamps of the phone video, giving you relative_phone_timestamps_s
* For each value of relative_world_timestamps_s, find the closest relative_phone_timestamp_s. Provided that the timestamping of the phone recording was stable, the array index of this timestamp tells you the frame of the phone recording that is closest to the given Pupil Core world camera frame
* It is quite unlikely that relative phone timestamp 0 and relative world timestamp 0 occurred at the exact same time, but we do not have a way to know the offset, nor can we account for any temporal drift
* If doing this in Python, then the pyav package and the np.searchsorted and pd.read_csv functions will be helpful (a sketch follows below)
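Here's a minimal sketch of those steps in Python. The CSV column access and the phone video filename are placeholders, so adjust them to your data:

```python
import av
import numpy as np
import pandas as pd

# 1. Pupil Core world-camera timestamps, in seconds
#    (check the column header of your export; the last column is assumed here)
world_ts_s = pd.read_csv("exports/000/world_timestamps.csv").iloc[:, -1].to_numpy()
relative_world_timestamps_s = world_ts_s - world_ts_s[0]

# 2. Phone video frame timestamps in seconds, extracted with pyav
with av.open("phone_video.mp4") as container:
    stream = container.streams.video[0]
    phone_ts_s = np.array(
        [float(frame.pts * stream.time_base) for frame in container.decode(stream)]
    )
relative_phone_timestamps_s = phone_ts_s - phone_ts_s[0]

# 3. For each world frame, find the index of the closest phone frame
right = np.clip(
    np.searchsorted(relative_phone_timestamps_s, relative_world_timestamps_s),
    1,
    len(relative_phone_timestamps_s) - 1,
)
left = right - 1
closest_phone_frame = np.where(
    relative_world_timestamps_s - relative_phone_timestamps_s[left]
    < relative_phone_timestamps_s[right] - relative_world_timestamps_s,
    left,
    right,
)
```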
Thank you a lot Rob! I'll give it a try!!!
Hi @user-50ac4b , my colleagues have pointed out that there is already a Pupil Player plugin that does what you need. It has a user interface to make things easier. Check it out here: https://discord.com/channels/285728493612957698/446977689690177536/903634566345015377
No problem. Just know for future experiments that we have tips on temporal synchronization with other devices, which can make things a lot easier and more accurate.
I tried it out, but I can't figure out how to make the 2nd world video play. It is imported into Pupil Player, but it seems to stay only as a static image, and I see no option for making manual time adjustments (maybe only from the json file). I can't make it play. I don't know if it's a bug or if I missed some files; I followed the steps.
When I have a free moment, I will give it a try here and get back to you.
Hello everyone, I am having an issue with my code for processing gaze data using Pupil Labs. I am using gaze_mapper.process_frame(frame, gaze), but the result is empty. Here is the relevant section of my code: please take a look at the file.
Hi, @user-3224d0 - at first glance your code looks ok. I would bet that your problem is simply poor marker detection. When using markers on a screen, it can be difficult to balance the brightness/contrast of the markers against the exposure settings of the camera without the brighter parts of your display washing out the image. I can see that you're using 16 markers, so I also worry whether your markers are large enough with appropriately-sized margins.
Three notes that might help:
* Although you aren't using Pupil Core, the tips in the Preparing your Environment section of the Surface Tracker plugin apply here too
* Look at the images captured by your Neon camera. You can do this quickly/easily using the scene camera preview in the Companion app. If you have a difficult time identifying an AprilTag marker, so will the computer
* I like to use the gaze-controlled-cursor-demo to troubleshoot marker configurations. It will draw four markers on your screen and has a simple GUI for modifying the size and contrast of the markers. It provides visual feedback (a red outline) for any marker that is not detected
Hi Dom,
I am encountering an issue with the transmission of eye position data to the LSL stream and hope you can assist me.
Problem Description:
I have adjusted the size of the markers in my setup and changed the coordinates accordingly to improve recognition. The script appears to correctly send information to LSL (LabRecorder), but the eye position data is not being transmitted. I have checked that the device is recognized and the calibration is successful, yet the mapped eye data remains empty.
Setup Details:
Device recognition: Successful
Calibration: Successful
Marker size: Increased
LSL stream: Connects, but no eye data is sent
Could you please help me identify why the eye position data is not being transmitted to the LSL stream, even though the recognition and calibration are successful? Are there specific adjustments or checks I should perform?
Thank you in advance for your support.
Can you clarify what you mean by "eye position data"? Do you mean the gaze point in surface coordinates? Because looking at your code, this is all you're sending, so this is all that will be expected. I tried your code on my own computer and the surface-gaze data is indeed being streamed to LabRecorder.
If you mean eye state data, then you will need to extract the values you want from the gaze data, and then either forward them to LSL via another stream outlet or extend your existing outlet to include these data
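As a rough sketch with pylsl of what an extended outlet could look like — the channel contents and the exact attribute names on the gaze datum are assumptions to verify against your version of the realtime API:

```python
from pylsl import StreamInfo, StreamOutlet

# One outlet with three channels: surface-mapped x/y plus one extra gaze/eye-state value
info = StreamInfo(
    name="SurfaceGazePlus",
    type="Gaze",
    channel_count=3,
    nominal_srate=200,          # Neon gaze rate
    channel_format="float32",
    source_id="surface-gaze-plus",
)
outlet = StreamOutlet(info)

# Inside your receive loop, push the mapped surface gaze plus whatever extra
# field you pull from the raw gaze datum (placeholder values shown here):
surface_x, surface_y, extra_value = 0.5, 0.5, 3.1
outlet.push_sample([surface_x, surface_y, extra_value])
```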
Hi Dom,
Thank you for your prompt response.
Yes, I am referring to the gaze point in surface coordinates. In the XDF file I recorded using LabRecorder, I see that the gaze data is empty, meaning the gaze position data is not being transmitted.
This is not the case for me. I ran your code as-is with some AprilTag markers on my display and I successfully recorded the attached data.
Hi Dom;
Thank you for confirming that my code works.
I wanted to ask about other possible reasons why we are not getting gaze points in surface coordinates from the function, aside from marker detection. I have checked in Pupil Cloud and confirmed that the markers are being recognized.
However, during the experiment, I am not getting the gaze points coordinates in my XDF file.
Could you please provide insights or possible causes for this issue? What additional steps can I take to diagnose and resolve this problem?
Thank you in advance for your support.
The best way to troubleshoot problems like this is to break it down and identify possible points of failure. It seems like you've started this already:
* You print a . every time you receive data from the companion device. Do you see these dots in your terminal?
* You print GAZE COORDINATES when gaze data is mapped to a surface. Do you see this in the terminal?
* You print Gaze at ... after it's pushed to LSL. Do you see this in the terminal?
If the answer to any of these is no, then that will tell you where to troubleshoot. If the answer to all of them is "yes", then the problem lies with LSL or your communication with it. Did you enable the stream in LabRecorder?
As a side note, I'd like to point out that you have an asyncio program here, but you're not using the async version of our realtime-api. Your program technically works, but I think you'll find it easier to maintain if you use the async api with an async program or the simple api with a non-async program.
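For reference, the async flavour is structured roughly like this (only discovery and status shown; check the realtime-api async examples for the streaming calls):

```python
import asyncio
from pupil_labs.realtime_api import Device, Network

async def main():
    # Discover a Neon/Invisible device on the local network
    async with Network() as network:
        dev_info = await network.wait_for_new_device(timeout_seconds=5)
    if dev_info is None:
        print("No device found")
        return

    async with Device.from_discovered_device(dev_info) as device:
        status = await device.get_status()
        print("Connected to", status.phone.device_name)

asyncio.run(main())
```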
Hi Dom,
thank you for taking the time to provide us some detailed tips.
Our code receives data from the device ("frame": SimpleVideoFrame and "gaze": EyestateGazeData), and we are also able to send and record this data via LSL. However, the gaze_mapper output ("result") is empty: "result": MarkerMapperResult(markers=[], located_aois={'3d2f0....': None}, mapped_gaze={'3d2f0...': []}). We think this is the reason that the for-loop is never entered. We do also see that markers are being detected, as some markers are noted in "result" as a list.
Do you have any suggestions why our code is not able to output data? Thank you in advance for your support!
Thanks for the clarification. I understand where the confusion might be. What the function does is take a frame, then find the gaze datum whose timestamp is closest to the frame's timestamp. It then returns 1 gaze point for that frame.
In essence, there are a lot more gaze points available because the eye cameras have a higher sampling rate. The function just picks the closest gaze point by timestamp and ignores the rest.
So, to generate a single gaze point per frame, you could either i) find the gaze point with the timestamp closest to the frame (like the function does), or ii) calculate the average of the gaze positions that fall within that frame. A minimal sketch of both options is below.
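Here's a small numpy sketch of both options, using synthetic placeholder arrays in place of your real timestamps and gaze positions (everything assumed to be on the same clock):

```python
import numpy as np

# Placeholder data -- replace with your real arrays
frame_ts = np.arange(0, 1, 1 / 30)          # ~30 Hz scene-frame timestamps, seconds
gaze_ts = np.arange(0, 1, 1 / 200)          # ~200 Hz gaze timestamps, seconds
gaze_xy = np.random.rand(len(gaze_ts), 2)   # gaze positions

# Option i) nearest gaze sample per frame
right = np.clip(np.searchsorted(gaze_ts, frame_ts), 1, len(gaze_ts) - 1)
left = right - 1
nearest = np.where(frame_ts - gaze_ts[left] < gaze_ts[right] - frame_ts, left, right)
gaze_per_frame_nearest = gaze_xy[nearest]

# Option ii) mean of all gaze samples that fall between one frame and the next
bins = np.searchsorted(frame_ts, gaze_ts) - 1   # frame index for each gaze sample
gaze_per_frame_mean = np.full((len(frame_ts), 2), np.nan)
for f in range(len(frame_ts)):
    in_frame = gaze_xy[bins == f]
    if len(in_frame):
        gaze_per_frame_mean[f] = in_frame.mean(axis=0)
```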
I hope this helps
It makes sense now. Thank you for your reply.
@user-3224d0 - quick note - your messages were moved into this channel. Also, sorry for the delayed response
If you're seeing markers in a mapper result object but not seeing any mapped gaze data, this would indicate that the detected markers are not part of the surface definition. Are there other AprilTag markers in the area? You should inspect the IDs of the markers that have been detected.
Also, your markers may still be too small. On a 685 mm diagonal display at 1920x1080, I'd probably double the size of your markers for a 75 cm viewing distance.
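For a rough sense of scale, this is the arithmetic behind that suggestion; the 100 px marker edge is a hypothetical example, not taken from your setup:

```python
import math

diag_mm, res_x, res_y = 685, 1920, 1080
px_pitch_mm = diag_mm / math.hypot(res_x, res_y)   # ~0.31 mm per pixel

marker_px = 100                                    # hypothetical marker edge length
marker_mm = marker_px * px_pitch_mm                # ~31 mm on this display
viewing_distance_mm = 750

# Visual angle subtended by the marker edge at the viewing distance
angle_deg = 2 * math.degrees(math.atan(marker_mm / 2 / viewing_distance_mm))
print(f"{marker_mm:.0f} mm marker ~ {angle_deg:.1f} deg at 75 cm")
```

Doubling the marker edge roughly doubles that angle, which gives the detector considerably more pixels to work with.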
[email removed] Macagno !