Hi everyone, I am consuming the WebSocket interface via C# and have already implemented getting gaze data with the helpful example code from https://github.com/pupil-labs/neon-xr/blob/4343ad21cf7a00bed3bb7f86b5050291252939f5/com.pupil-labs.neon-xr.core/Runtime/Scritps/RTSPClientWs.cs#L284-L320. But now I also want to get the Eye State Data, which is described as optional. How can I get the Eye State Data? Thanks
Hi @user-65c5e1 ! Eye state data is included in the GazeDatum; you can see how it is decoded right in the link you shared, just a few lines below 😉
If you are not receiving it, it is probably because you don't have it enabled in the Companion App settings.
Thank you, it was the setting in the Companion App 🙂 Now I need to decode the timestamp, which is not included in the sample code, as far as I know. I have tried to decode the last part of the Eye State Data, but that does not seem correct. Maybe there is something wrong in the documentation. Can anyone help?
Hi @user-65c5e1 ! The RTSP packet doesn’t include Unix timestamps. Currently, NeonXR is a lightweight package that only leverages RTSP through WebSockets and doesn’t implement the additional functions available in the real-time Python client, such as timestamp handling.
There’s ongoing work on a client that can handle multiple streams and provide timestamps. A minimal example can be found here, but we can’t provide an ETA for it.
So, if you need multiple streams or Unix timestamps, you’d need to implement the RTCP SR and RTP logic as described here
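To give a rough idea of what that involves, here is a minimal sketch in Python (the helper names are hypothetical, the clock rate must be taken from the stream's SDP, and 32-bit wrap-around handling is omitted): an RTCP Sender Report pairs an NTP wall-clock time with an RTP timestamp, and that pair lets you convert any RTP packet timestamp to Unix time.

```python
import struct

NTP_TO_UNIX_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def parse_rtcp_sender_report(packet: bytes):
    """Return (ntp_seconds, rtp_timestamp) from an RTCP Sender Report (payload type 200)."""
    # Bytes 0-3: RTCP header, 4-7: sender SSRC, 8-15: NTP timestamp, 16-19: RTP timestamp
    ntp_msw, ntp_lsw, rtp_ts = struct.unpack_from("!III", packet, 8)
    ntp_seconds = ntp_msw + ntp_lsw / 2**32
    return ntp_seconds, rtp_ts

def rtp_to_unix_seconds(rtp_ts, sr_ntp_seconds, sr_rtp_ts, clock_rate):
    """Convert an RTP packet timestamp to Unix time using the most recent Sender Report."""
    elapsed = (rtp_ts - sr_rtp_ts) / clock_rate  # seconds since the Sender Report
    return (sr_ntp_seconds - NTP_TO_UNIX_OFFSET) + elapsed
```

The same arithmetic carries over directly to C#.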
Hello, I'm using a Neon product and receiving IMU quaternion data via the realtime API.
When I observe the converted RPY values, I sometimes notice some error in the yaw value, and it seems to converge slowly. I'm wondering if the output data has already been compensated for gyroscope drift.
Additionally, if post-processing compensation is necessary, could you please suggest a good approach to handle it?
Hi, @user-4ad7d3 - that's normal - the IMU requires calibration in order to track yaw properly
Thank you for your response. Just to confirm—should I understand that the data being sent is completely raw and not pre-calibrated?
Well it's not "raw" so to speak, but pre-calibration is not possible. The IMU needs to be calibrated every time you use it
Hello! Me again. I just recently started having an issue with my Pupil Core camera arm, which I'm controlling via PyUVC. I am getting an error about the camera's JPEG data being corrupted when I attempt to capture a frame using cam.get_frame_robust(). I've used this camera for probably about a year, but as of the past month I have started powering it off a Raspberry Pi (which itself is powered via battery). The Raspberry Pi notes that on this battery, peripherals will not receive full power, as the power supply is not sufficient. Thus far, I had noticed no negative consequences of this. My question then is two-fold. One, is there a simple way to fix this? And two, could chronically underpowering the camera have damaged it into providing corrupted data? Sometimes a restart fixes this issue, but it went from 0% occurrence to at least 50% or greater occurrence.
Hi @user-ffc425 , does it do this if you plug the Raspberry Pi into mains again? Or, if the eye camera is plugged into a standard Pupil Core headset and PC?
Also, what do you get if you instead use cam.get_frame(0.05), as used in Pupil Capture?
Hi @user-f43a29 , you had previously helped with the attached code for real-time data acquisition using py.zmq in MATLAB. This line takes around 20 ms: surfaces = py.msgpack.loads(msg, pyargs('raw', false)); and the loop for gc = 1:numel(gaze_positions) norm_pos = double(gaze_positions{gc}{'norm_pos'}); norm_gp_x = norm_pos(1); norm_gp_y = norm_pos(2); disp(['gaze on surface: ', num2str(surface_name), ', normalized coordinates: ', num2str(norm_gp_x), ', ', num2str(norm_gp_y)]) end takes about 250 to 300 ms. For real-time tracking of the eye, I need these delays in fetching the real-time eye data in MATLAB to be as close to 5 ms (200 Hz sampling) as possible, so that I can control my Psychtoolbox stimulus presentation. Do you think there are other better options? I have the LSL plugin running and do get the streams of eye data in the Lab Recorder, but I'm not sure how to access the 'norm_pos' of the 'gaze_on_surface' data in MATLAB through the LSL MATLAB functions.
Hi @user-3bcb3f , I see that you use disp in the loop. Printing to the console is actually quite a slow operation in computing terms and might be the principal source of slowdown. MATLAB offers nice profiling tools that can help determine any other sources of slowdown.
With respect to the py.msgpack.loads line, I do not think the speed there is related to MATLAB. The Python interface in recent versions of MATLAB is now quite performant, and you essentially see the real speed of the Python function. What do you find for the speed of the recv_string and recv lines?
The LSL plugin does not stream gaze_on_surface. It only streams gaze in scene camera coordinates. The LSL plugin is also not really meant for real-time gaze-contingent streaming of data, although I have not explicitly timed it in such a context. LSL's main purpose is rather to ensure accurate time sync with other devices for post-hoc data analysis.
Hi Rob, I should have been clearer earlier.
1. The msgpack.loads line takes 20 ms. We measured this by adding GetSecs before and after that line and checking the time difference. Isn't 20 ms itself too long given that eye data updates every 5 ms?
2. Is there another way to access the norm_pos in gaze_positions, which is a randomly sized cell dict every time loads is run, other than the for loop, that might be more time efficient?
3. The recv_string only takes 5 to 6 ms, which I think is still OK. I used tic and toc as well as GetSecs before and after each line to see the difference.
4. My purpose is to get real-time values and check whether the eye reached the target. Currently I am getting a lot of delayed information, so by the time my target updates, the actual eye has already left the target and I never get a successful trial.
Can https://github.com/pupil-labs/real-time-screen-gaze be adapted to work with Pupil Core?
Actually, that package is something of an adaptation of Pupil Core's Surface Tracker
Oh I've found it. It's from one pupil to another!
Does anybody use Pupil Core to control the mouse cursor? Is it right to subscribe to gaze and get norm_pos to control the mouse cursor? I tested it and the results feel bad. Any suggestions?
Hi @user-da1455 👋
It looks like you're using the normalized gaze coordinates from the egocentric scene camera space — in other words, gaze position relative to the scene camera, not the screen.
That’s likely why the cursor control isn’t looking as expected. To control the cursor on a screen, you’ll need to:
1. Define a surface on the monitor using a surface marker.
2. Translate gaze coordinates into the surface coordinate system.
3. Use those transformed coordinates to control the cursor.
You can achieve this by using the Surface Tracker and subscribing to the surfaces.<your surface name> topic in the Network API, which gives you gaze data already mapped to the surface (see the sketch below).
This has come up before — so you might find this past discussion helpful:
https://discord.com/channels/285728493612957698/285728493612957698/1199003211881775154
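In case a sketch helps, here is roughly what those steps look like in Python (untested; "Screen" and the monitor resolution are placeholders for your own surface name and screen size):

```python
import zmq
import msgpack
import pyautogui

SURFACE_NAME = "Screen"          # placeholder: must match your surface name in Capture
SCREEN_W, SCREEN_H = 1920, 1080  # placeholder: your monitor resolution in pixels

ctx = zmq.Context()
# ask Pupil Remote (default port 50020) for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe(f"surfaces.{SURFACE_NAME}")

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.loads(payload, raw=False)
    for gaze in surface["gaze_on_surfaces"]:
        if not gaze["on_surf"] or gaze["confidence"] < 0.6:
            continue
        x, y = gaze["norm_pos"]
        # surface y-axis points up, screen pixel y-axis points down
        pyautogui.moveTo(x * SCREEN_W, (1 - y) * SCREEN_H)
```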
Hi @user-da1455 , following up on what my colleague, @user-d407c1 , wrote, you might also find this Python example helpful.
Hi, would it be possible to know when the next release of the Pupil Companion App is planned? I am currently waiting for a feature that is apparently integrated in the next release. Thanks!
Hi @user-05ba05 👋 ! If you're looking for real-time blinks, fixations, or eyelid openness data, check out the Play Store for updates. 😉
Hello, I have a question: do the green rectangle from the calibration and the red rectangle from the surface tracker have to coincide? I found that if they do not coincide, when I run the mouse_control.py example you provide, the cursor's position is not right.
Hi @user-da1455 , to clarify, the green rectangle and the pink rectangle are distinct: the green one outlines the area covered by your calibration, while the pink one outlines the surface you defined with the Surface Tracker.
They do not necessarily need to coincide. So long as your gaze remains within both the green and pink rectangles, your gaze can be reliably mapped to the screen, even if the rectangles are shifted relative to each other.
It seems in your case that the calibration could be improved. That spiral of orange lines towards the middle (see attached zoomed image with red arrow) suggests that the person may have been looking off center when the calibration started and then they moved their head to comfortably look at the target.
It can help to already have your head in position before starting the calibration, and then to minimize head motion while it runs. In other words, try to move only your eyes until the calibration has completed. Once calibration is complete, you can comfortably move your head again. At the end of the calibration, you will see an accuracy value. 2.5 degrees or less should be sufficient for the mouse control script.
i use pupil core
okay, Sincere thanks, it helps
Hi, I'm having trouble with the real-time API for Neon. If I run the example from https://docs.pupil-labs.com/neon/real-time-api/tutorials/#starting-stopping-recordings (with an added output for the discovered device), I get the appended output. What I can see:
- The device is discovered.
- The call to recording_start() seems to work and a recording ID is returned. However, the Neon Companion app does not seem to react to this (i.e. no recording is visibly started in the app).
- A call to stop_and_save returns an error saying that there is no recording going on, which is consistent with the observable state of the app, but not with the return value of recording_start().
- The serial number of the connected glasses is -1, which seems fishy.
- The app shows the device as connected.
- I can start a recording manually in the app.
What could be the issue here?
Hi @user-d1f142 , serial number refers to the serial number for Pupil Invisible devices, our previous eye tracker. With Neon, you want to check the module_serial field. Thanks for reporting; that part of the example will be updated.
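For example (a quick sketch with the simple API; please double-check the attribute names against the version of the real-time API client you have installed):

```python
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
print(f"Phone: {device.phone_name}")
print(f"Neon module serial: {device.module_serial}")                 # use this field for Neon
print(f"Invisible glasses serial: {device.serial_number_glasses}")   # -1 / unset on Neon
device.close()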
I see you are using Python 3.13. That is the newest Python release and is not yet fully supported throughout the Python ecosystem. Can you try again with a fresh virtual environment and Python 3.11 or 3.12?
I tried before with Python 3.8 and it had exactly the same result; I updated to 3.13, thinking it might be the issue. If you want me to try, I can also try 3.11 or 3.12.
Ah, Python 3.8 is not supported by that library. Python 3.11 would probably be the best choice right now.
If that does not resolve it, then please open a Support Ticket in 🛟 troubleshooting and we will follow-up there.
Hi @user-f43a29 , what is the sampling rate of Pupil Core? I have to define it for my analysis of percentage change in pupil size.
Hi @user-e83888 , Pupil Core can sample data at up to 200Hz, depending on the power of the recording computer. We recommend Intel i7 and newer, or if you are on a Mac, then the M1 chip and newer.
To determine the average sampling rate in the case of your data, you could do the following:
duration = timestamps_seconds[-1] - timestamps_seconds[0]
sample_rate = len(timestamps_seconds) / duration
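For example, if you are working from a Pupil Player export (the file path and the pupil_timestamp / eye_id columns below assume a standard export; adjust to your data):

```python
import pandas as pd

# pupil_positions.csv from a Pupil Player export (exports/000/ by default)
df = pd.read_csv("exports/000/pupil_positions.csv")

# the file contains samples from both eyes; keep one eye so the rate is per-eye
eye0 = df[df["eye_id"] == 0]

timestamps_seconds = eye0["pupil_timestamp"].to_numpy()
duration = timestamps_seconds[-1] - timestamps_seconds[0]
sample_rate = len(timestamps_seconds) / duration
print(f"Average sampling rate (eye0): {sample_rate:.1f} Hz")
```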
Thank you so much for sharing this, Rob. Could you please also share any of your tutorial scripts so that I can make sure I am picking the right columns to analyse? We want to see the percentage change over time of both diameters together across the different trials, which I have already extracted.
Sure. Our Python tutorials for working with Pupil Core data are hosted here. You may be interested in this one on working with the pupil diameter data.
You may also want to take a look at this third party library, PyPlr.
What about this horizontal line issue in some of the trials? How can I find those trials out of the array of trials, or is there any way I can exclude these trials using the confidence signal?
We don't really have any tutorial snippets that would be able to identify which of your trials contained pupil size measurements only within a given range, which I think is what you're asking me. But with that said, it is likely, as you suggest, that the confidence signal would be much lower during those trials in which the pupil size is measured as close to zero.
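As a rough sketch of that idea (assuming you already have each trial's diameter and confidence samples as arrays; the trial_diameters / trial_confidences names are placeholders, and the 0.6 confidence threshold and 80% cut-off are just starting points to tune):

```python
import numpy as np

def is_usable_trial(diameter, confidence, conf_threshold=0.6, min_good_fraction=0.8):
    """Keep a trial only if enough samples are high-confidence and have a non-zero diameter."""
    diameter = np.asarray(diameter)
    confidence = np.asarray(confidence)
    good = (confidence >= conf_threshold) & (diameter > 0)
    return good.mean() >= min_good_fraction

# trial_diameters / trial_confidences: your per-trial arrays (placeholders)
usable = [i for i, (d, c) in enumerate(zip(trial_diameters, trial_confidences))
          if is_usable_trial(d, c)]
```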
Hello, I am new to eye tracking. While reading the docs, I found the simple API and the async API.
For my use case, I need to stream data in parallel from Pupil Labs and an ECG sensor. Can anyone please help me understand which API (simple or async) is best for this scenario?
Hi, @user-f0ed79 - historically the async API was more performant, but they should be comparable now thanks to a recent update. Under-the-hood though, the simple API is just a wrapper for the async API, so it will always have at least a little more overhead
Yes. What I am currently trying to clarify is that using the MATLAB Python client and msgpack, as in this message thread https://discord.com/channels/285728493612957698/446977689690177536/1360209110666055793 , leads to 30 ms or more delay in accessing real-time surface gaze data, so we created the Python code mentioned here https://discord.com/channels/285728493612957698/446977689690177536/1363791680616267788 to create an LSL stream that can be accessed through the MATLAB LSL library to get the real-time surface gaze datum. It would be nice if you could comment on whether we are doing this logically right, or whether we missed any aspect of real-time Pupil Core data access.
Thank you for clarifying. I wanted to make sure I fully understand your goal and implementation so I can provide accurate advice.
I’ve looked at the code you shared. Conceptually, it isn’t quite right—you’re adding an extra layer on top of our real-time API in the form of LSL, which is likely to increase latency rather than reduce it.
Instead, I’d recommend using our dedicated LSL plugin to stream gaze or pupil data: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture
My suggested approach would be as follows: 1. Send your Psychtoolbox event corresponding to stimulus onset via LSL. 2. Stream eye movement data from Pupil Core via LSL.
This way, you can compare the two data streams post hoc and directly measure the latency between stimulus onset and gaze shift.
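For step 1, a marker outlet is only a few lines; here it is in Python for illustration (the MATLAB LSL library exposes equivalent lsl_streaminfo / lsl_outlet / push_sample calls, and the "PTB_Events" name is a placeholder):

```python
from pylsl import StreamInfo, StreamOutlet, local_clock

# one irregular-rate string channel for event markers
info = StreamInfo(name="PTB_Events", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string", source_id="ptb_events")
outlet = StreamOutlet(info)

# call this at stimulus onset; Lab Recorder stores the marker with an LSL timestamp
outlet.push_sample(["stimulus_onset"], local_clock())
```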
It’s important to note that this approach should isolate the latency measurement from any gaze-contingent or processing latency. That is, to obtain a screen-mapped gaze datum, we must acquire the eye image, run 2D pupil detection, generate a gaze datum, detect the AprilTags, map gaze to the screen, and finally, serve the result over the network. As you can imagine, each of these steps can add additional—and potentially variable—latency to your estimation. In my view, it’s better to isolate your core metric from as many parts of this processing chain as possible.
From what I can see, the gaze-contingent part of your experiment is mostly used to update the stimulus to a smiley face when it's gazed at. So for this part, processing latency isn't so critical.
This solution works perfectly for post-hoc latency calculations, but how do I solve my issue of tracking in real time whether the eye landed on the target, so I can change my feedback stimulus? The Pupil LSL plugin stream does not give the surface gaze datum, if I understand correctly (https://discord.com/channels/285728493612957698/1355072420498636831/1359203812027666653). With py.msgpack the delays are at least 30 ms (https://discord.com/channels/285728493612957698/446977689690177536/1360209110666055793), which is already 6 times the rate at which real-time data is actually recorded by the tracker (5 ms at 200 Hz). And if I access the raw data from the real-time API, converting it to a surface gaze datum again involves so many steps.
Great. So the critical metric part is solved! As for the stimulus update, you'll have to live with the current constraints of the surface tracker in this case. As @user-f43a29 had stated in his message, a workaround would be to set the scene cam to 120 Hz. If you aren't able to track the markers well with this, then feel free to share a screen-capture showing why and we can try to provide some feedback on that!
Hi, I am working on the DIY project and the conversation between you [email removed] really inspired me a lot! I just want to confirm that the IR filter of the HD-6000 (eye camera) is right here in the red circle, right?
Hi @user-e962d7 , I'm not in a position to provide explicit support for DIY Pupil Core projects, but the IR filter is found in the lens housing of the camera. What you have marked with the red circle is the camera sensor on the PCB. The video here demonstrates what you should be removing. I've attached a snapshot of one of the key points.
@user-f43a29 thanks for sharing! By the way, the lens housing is too sturdy to get out of this cube; as was mentioned [email removed] I cannot use a small Phillips screwdriver to unscrew it! Is there a way to get the lens housing out without breaking the lens protective cover?
Hi @user-e962d7 , I am unable to provide explicit support to that extent on Pupil Core DIY projects. The main steps are contained in the associated Documentation. Otherwise, members of the community who have experience with this step should feel free to chime in 🙂
Hi @nmt Neil, thank you for the inputs. As suggested, we set the scene camera to 120 Hz and, with an increased surface tag size, managed to run the code snippet from https://discord.com/channels/285728493612957698/446977689690177536/1360209110666055793. The surface is a bit more unstable than at the higher resolution at 30 Hz. However, the average ts we get is still around 45 ms (distributed 40 to 54 ms), and my testing at 30 Hz also gave an average time of 43 ms per iteration (40 to 48 ms). Two questions:
1. I am not able to replicate the 30 ms you reported in https://discord.com/channels/285728493612957698/446977689690177536/1360209110666055793 . Why?
2. Changing the scene camera resolution does not lower this as suggested.
Finally, how do we access the raw data at 200 Hz and the img-to-surface transformation values, and calculate the surface values ourselves?
how do we access the raw data at 200 Hz and the img-to-surface transformation values and calculate the surface values ourselves
This is already done for you by our software, but if you really want to do the mapping yourself, you need three things:
* Gaze data
* Camera calibration
* A transformation matrix
So,
* To do this in real-time, you'd want to subscribe to gaze to receive gaze points, and surfaces.YOUR_SURFACE_NAME to receive the img_to_surf_trans transformation matrix.
* To do this post-hoc, you should make a recording with Pupil Capture, then load it in Pupil Player. Enable the Surface Tracker in Player and export data. This will give you gaze data in one CSV and the img_to_surf_trans transformation matrices in another CSV.
* If you haven't calibrated the camera yourself, the default calibration data is in the source code
To convert your gaze points from scene camera space to surface points, you need to first undistort the gaze points using the camera calibration and then apply the transformation matrix.
If you want to calculate the transformation matrix yourself from a world frame, I recommend starting with OpenCV's homography tutorial
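Putting the undistort-then-transform steps together, a rough sketch in Python (the coordinate conventions here are my assumptions, in particular the y-flip when denormalizing and that img_to_surf_trans expects undistorted pixel coordinates, so please double-check against the Surface Tracker source):

```python
import cv2
import numpy as np

def gaze_to_surface(gaze_norm_pos, img_to_surf_trans, camera_matrix, dist_coefs,
                    frame_size=(1280, 720)):
    """Map a normalized scene-camera gaze point onto the surface (sketch)."""
    w, h = frame_size
    # denormalize: gaze norm_pos has its origin at the bottom-left,
    # image pixel coordinates have theirs at the top-left, so flip y
    px = np.array([[[gaze_norm_pos[0] * w, (1 - gaze_norm_pos[1]) * h]]], dtype=np.float32)

    # undistort, keeping the result in pixel coordinates (P=camera_matrix)
    undistorted = cv2.undistortPoints(px, camera_matrix, dist_coefs, P=camera_matrix)

    # apply the homography to get normalized surface coordinates
    surf = cv2.perspectiveTransform(undistorted, np.asarray(img_to_surf_trans, dtype=np.float64))
    return surf[0, 0]  # (x, y) on the surface, nominally in [0, 1]
```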
Hi @user-3bcb3f. Can you please share the full code you have used?
Hi Neil, I just used this with the Pupil Capture scene camera set to 1920x1080 resolution at a 30 Hz refresh rate, and another time with 640x480 resolution at a 120 Hz refresh rate:
ns = 300;
ts = zeros(1, ns);
for n = 1:ns
    st = GetSecs();
    topic = sub.recv_string();
    surfaces = py.msgpack.loads(sub.recv(), pyargs('raw', false));
    surfaces = struct(surfaces);
    gaze_positions = cell(surfaces.gaze_on_surfaces);
    norm_pos = double(gaze_positions{end}{'norm_pos'});
    et = GetSecs();
    ts(n) = et - st;
end
Hi, @user-3bcb3f - if you don't mind me jumping in here, I just wanted to add a couple of notes.
Core's surface tracker buffers gaze data and processes it in chunks when each world camera frame is captured. In other words, you should receive as much data as the gaze sampling rate implies - as long as a surface is always detected - but that data will come in chunks. You'll probably see 6 or 7 gaze samples in each chunk (assuming 200 Hz gaze / 30 Hz world)
That particular piece of code is receiving a group of samples, but only examining the last one (gaze_positions{end}), so it's a bit misleading (although I'm not really sure what the original purpose was for it). Another point about that snippet is that it's also measuring network latency. This will be small when using localhost/127.0.0.1, but not zero.
There are some other factors that will affect your observed surface-gaze sampling rate.
* Insufficient resources - if you don't empty the ZMQ buffer quickly enough, data will be lost. Not a problem for that snippet, but something to keep in mind (see the sketch below).
* Missing surface detections - if no surface is detected in a world frame, none of the gaze data associated with that frame will be mapped and sent. People often use just 4 tags on the corners of their display, which is usually "ok", but I'd suggest 10-12 markers framing the display all the way around. Besides improving detection, it also improves accuracy.
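Regarding the first point, one way to always process the freshest data without letting the ZMQ queue grow is to drain the socket with non-blocking receives. A rough Python sketch (sub is assumed to be an already-connected, subscribed SUB socket; the same zmq.NOBLOCK / zmq.Again mechanism is reachable from MATLAB via py.zmq):

```python
import zmq
import msgpack

def latest_surface_datum(sub_socket):
    """Drain the subscription queue and return the newest surface message (or None)."""
    newest = None
    while True:
        try:
            topic, payload = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            break  # queue is empty
        newest = msgpack.loads(payload, raw=False)
    return newest

# Each surface message carries a chunk of gaze samples (~6-7 at 200 Hz gaze / 30 Hz world),
# so iterate over all of them rather than only the last entry:
datum = latest_surface_datum(sub)
if datum is not None:
    for gaze in datum["gaze_on_surfaces"]:
        x, y = gaze["norm_pos"]
```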
Hi @user-cdcab0 Thank you for the input. But we need to do the mapping in real time, not post hoc. The main issue is that even after setting the scene camera to 120 Hz, I am only able to get the surface gaze datum after ~45 ms delays (which is what I'd expect in the 30 Hz scene camera scenario). What am I missing here?
Could you make and share a recording?
Hi, would you like me to do a recording in pupil capture and send? currently I am just fetching the values using py.msgpack code and logging how much time the loop takes.
Yes, please
Hi @nmt , please find attached a video of the surface tag capture by the scene camera at 120 Hz using 640x480 resolution. Could you suggest how to further stabilise the surface at this resolution?
I think we will need you to share a full recording as asked by @user-cdcab0. This will be necessary to provide concrete feedback
Yes, as @nmt suggests, please use the recording functionality within Pupil Capture and share the entire recording folder with us. This will allow us to troubleshoot and rule out certain factors
Hi @nmt and @user-cdcab0 . I am sharing them in this folder: https://1drv.ms/f/c/FB1DFB83EAA92E53/EstwP0SdCJVChBNMH90V4M8BFF8yl_MiNHKucwjK0gaDKA?e=iRTA9F I have added two recordings: one at 120 Hz (640x480 resolution) of the scene camera and a second one at 30 Hz. While the recording was going on, I also ran the MATLAB code, which uses zmq msgpack through the MATLAB Python client to access real-time data. The MATLAB workspace variables norm_pos120Hz and norm_pos30Hz have the positions saved, while ts120Hz and ts30Hz have the times taken for the 300 runs of real-time surface gaze data access. I hope this will help you let me know what I can do to optimize real-time data access.
Additionally, I am also including a subfolder 'python_realtime', which has the Python code I tried to use to create a custom LSL stream that I am able to access in MATLAB (this is also in the subfolder). This method seems to get data with a 30-35 ms delay in the case of the 30 Hz scene camera refresh rate and around a 10-15 ms delay in the case of the 120 Hz refresh rate. Previously, you mentioned that this method would only increase delay, but my observations are not the same. Could you suggest whether this would be a feasible solution in my case?
This usually happens when you are using manual exposure and the exposure time is set too high. Please try switching to auto mode or lowering the exposure time and try again
Previously, you mentioned that [relaying to LSL] only will increase delay, but my observations are not the same
Well you're not comparing the same timing data. In your LSL code you send the surface-mapped gaze data to LSL with its originating Pupil Capture timestamp. Because this timestamp is attached to the data, it sorta doesn't matter when you relay it - LSL will record it with the timestamp you sent along with it. So you could send the data an hour later, with a full second between each sample, and LSL will still record the data with its Pupil Capture timestamp, since that's what you send.
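To make that concrete, here is roughly what your relay is doing (a pylsl sketch; norm_pos and capture_timestamp stand in for values taken from the surface datum, and the timestamp is assumed to already be converted to the LSL clock, as the official plugin does):

```python
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="SurfaceGaze", type="Gaze", channel_count=2,
                  nominal_srate=0, channel_format="float32", source_id="surface_gaze_relay")
outlet = StreamOutlet(info)

# norm_pos / capture_timestamp come from the surface datum received from Capture.
# LSL records the sample at capture_timestamp, no matter how late this call happens.
outlet.push_sample([norm_pos[0], norm_pos[1]], capture_timestamp)
```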
Finally, I notice some problems with your surface definition. First, for screen-based tasks, you probably want to adjust the corners of the surface definition so that they align with the corners of the screen. This will allow you to convert normalized surface coordinates to pixel coordinates. Secondly, in your 120 Hz video there are many frames (I would guess 1/3 of them) where one or more markers isn't detected, causing the surface tracking to be very poor. I've included a few screenshots that demonstrate this, but it should be apparent in Pupil Capture when you're viewing it live as well. At the lower video resolution, markers need to be even bigger, and, like I mentioned earlier, more markers will help too. Having said that, I was able to achieve reasonable mapping from your video by simply deleting and re-creating the surface definition when paused on a frame with good marker detection.
Thank you for the detailed reply. 1. I guess that could be the reason for the delays I face with MATLAB? I am not using manual exposure mode, but it was on aperture priority mode. For some reason I am not able to select auto mode. How do I choose the right exposure time? Can I set it to the bare minimum needed to see the screen and surface tags clearly?
But the MATLAB file, along with this Python script, can access the latest surface gaze data with lower delays (as low as 10 ms in the case of the 120 Hz scene camera refresh rate), while the purely MATLAB py.zmq msgpack code takes ~45 ms to read the latest sample? In this case, do you suggest I replace my code with the file "pupil_capture/pupil_capture_lsl_relay/gaze_surface.py" you have created? If so, could you guide me on how to use it to obtain the surface gaze datum?
I will add more, bigger markers around the screen and see if the surface stabilizes. Also, can we use a separate higher-resolution camera as the scene camera to overcome this issue?
Also, FWIW, I have a fork of the LSL plugin with an outlet for surface-mapped gaze data. I wrote it for another customer and asked them to test it, but never heard back from them. As someone else had mentioned though, LSL isn't perfect and you will see some additional inaccuracies introduced by it.
Thank you for the detailed reply.
Real time surface gaze data acquisition
hello, I am getting a " 'type' object is not subscriptable" error with Pupil Capture when trying to load pylsl as a plugin.
Hi, @user-9eba65 - it seems a somewhat recent update to pylsl has broken compatibility with the version of Python that Pupil Core 3.5 is built with. Please use pylsl==1.16.2 and this should work for you.
Thanks for your answer. I’ll try and comment the result here