Note that you can also find examples for PsychoPy here.
Hi, can I ask which unzip software you recommend for unzipping the downloaded recording file from Pupil Labs?
Hi @user-8825ab , have you tried 7-Zip? Please see this message: https://discord.com/channels/285728493612957698/633564003846717444/1343484891517681736 But, may I also ask what exact error message you are receiving and what Operating System you are on?
I have downloaded the recording file, but it won't open. Can I ask for some help, please?
Hello, I would like to change the parameters used to evaluate saccades and fixations in your algorithm. Is there a way to do it in the cloud? Or is there some source code in Python, maybe, that I can run locally? Thank you
Hi @user-3e88a5 , the parameters on Pupil Cloud cannot be changed, but you can modify the parameters in the pl-rec-export implementation. It is written in Python and can be run locally.
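For reference, a minimal sketch of running it locally (assuming a standard pip setup; check the repo's README for the exact options):
```
pip install pl-rec-export
pl-rec-export /path/to/your/raw/recording
```
This should produce CSV exports (including fixations and saccades) computed with whatever parameters you set in the code.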
Good afternoon, I would like to know how I can access the accelerometer data in the NEON raw data. I wanted to know the strength of gravity (hypergravity, normogravity, microgravity) at time of recording. Thank you!
Hi @user-4440c8 , if you're looking to extract it from the saved raw data, then our pl-neon-recording library can do that. It is open-source, so it shows how to work with the open binary file formats. Check out this IMU example. You can also stream it during the recording with the Real-time API.
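As a rough sketch of the idea (the sample field names here are assumptions — see the IMU example in the repo for the exact API):
```python
# Hedged sketch: estimate perceived gravity from Neon's accelerometer
# using pl-neon-recording. Exact field names may differ by version.
import numpy as np
import pupil_labs.neon_recording as nr

rec = nr.load("/path/to/recording")
for sample in rec.imu:
    # accelerometer values are in g; the vector norm approximates the
    # strength of gravity at that moment (~1 g in normogravity)
    accel = np.array([sample.accel_x, sample.accel_y, sample.accel_z])
    print(sample.ts, np.linalg.norm(accel))
```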
Hello, I was wondering if it was possible to get access to my previously closed tickets as I can no longer see the solutions provided by the mod team.
Hi @user-e13d09 , yes, this is possible. We will follow-up with you in the Support Ticket.
My colleague @user-f43a29 ran some tests and found that the latency until a result is available in MATLAB or Python was within ~14-15 ms (over Ethernet).
For screen based settings, you would additionally need to map gaze to the screen, which can take an extra ~1 to ~1.5ms with real-time-screen-mapper, although that also depends on your computer resources.
I hope this addresses your concerns about latency.
Hi! I am using RTSP to stream video from the Neon glasses and I would like to undistort the video stream to do further processing. This article (https://docs.pupil-labs.com/alpha-lab/undistort/) was helpful, although there is a dead link. From what I can tell, you have some software that undistorts the video; are the camera parameters that the software uses available somewhere?
Hi @user-2109ae 👋 ! If you are using the realtime api, you can directly access the camera parameters from it. Check out this example.
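For example, a quick sketch of fetching the parameters and undistorting a live frame (the field names and reshaping are assumptions; check the calibration example in the docs for the exact fields):
```python
# Sketch: pull scene-camera intrinsics from the Realtime API and
# undistort incoming frames with OpenCV. Field names may vary by version.
import cv2
import numpy as np
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
calib = device.get_calibration()
K = np.asarray(calib["scene_camera_matrix"]).reshape(3, 3)
D = np.asarray(calib["scene_distortion_coefficients"]).ravel()

frame = device.receive_scene_video_frame()
undistorted = cv2.undistort(frame.bgr_pixels, K, D)
cv2.imwrite("undistorted.png", undistorted)
device.close()
```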
Help please: I just recorded several studies with Pupil Neon. Recording and saving on the phone worked fine without issues or errors, but now all recordings show 0s and there is no data upon export.
Found them in a strange way: all of the recordings are combined into the very first one, which shows 0s but is 1:02:40 long, even including durations where recording was off. It looks like I pressed "record" once and then let it run for 1 hour... which definitely didn't happen.
Hi @user-688acf , can you open a Support Ticket in the 🛟 troubleshooting channel?
Hello, I'm trying to install Neon Player but keep getting an error saying "This installation package could not be opened."
Hi, @user-c881f1 - ~~do you mind opening a ticket in 🛟 troubleshooting?~~
I was able to replicate the issue on Windows. It seems to be some problem with our automated build pipeline. For now I have manually re-built the installer package locally and updated the download. Please re-download the installer and try again.
Note that the working installer is ~310 MB. If your fresh download is <100 MB, then it's the old version and you'll need to clear your browser cache and download again.
Hi Pupil Labs! We were planning to get an Android device to go with one of our Neon eye trackers. Is there a minimum RAM requirement for the Companion app? Would a Moto 40 Pro with 8GB (+256GB storage) be sufficient for complete 200 Hz recordings?
Hi @user-ccf2f6 , are you potentially referring to the 8GB RAM Moto Edge+ (2023)?
@user-cdcab0 Thank you!
No problem - thanks for reporting the problem 🙂
Hello, I am using Neon with PsychoPy. The two are connected using a router and the Anker hub suggested by Pupil Labs. All was working well until, during testing, the Companion App stopped starting recordings.
I tried many things, but at the moment, if the devices are connected:
- I can see the Pupil camera stream using the browser (so the laptop can connect)
- the function discover_one_device is not discovering anything (None output)
- the attached PsychoPy script is NOT able to make the recording start
BUT, if I detach the Anker hub from the phone and then reattach it (with glasses and Ethernet always connected to the hub), only then can I make the PsychoPy-Pupil connection work.
Any advice on why the Companion App is giving these issues? (I already tried uninstall/reinstall, clearing the cache, clearing all data, and turning it off and on.)
Hi, @user-f4b730 - just to make sure I understand your problem correctly - you're saying that PsychoPy can only interact with your Neon if you unplug it from the hub and then re-attach it?
Since the end of last week our Pupil Labs Neon is stuck on an FPGA update... Every time we start the Neon Companion app and connect the Neon eye tracker, an FPGA update starts and successfully finishes. After a restart, the app always asks to install the FPGA update again. What can we do?
Hi @user-11dbde 👋 ! Sorry to hear that. Could you open a ticket on 🛟 troubleshooting or send an email to info@pupil-labs.com so we can follow up with next steps?
Hi, I would like to inquire about the coordinate accuracy (error) of the gaze point data measured by Neon, and within what range the error can be kept. Thanks!
Hi @user-bd5142 , Neon's gaze estimation accuracy is ~1.3-1.8 degrees, as assessed & validated in our Neon Accuracy Test Report.
If you have observers that significantly deviate from the population average, such that their gaze estimates exhibit a constant offset, then you can apply a one-time Gaze Offset Correction in their Wearer Profile. It is saved for future recordings, so if they were to come back, let's say 4 months later, you simply load their Wearer Profile, the Offset Correction is automatically applied, and you continue recording.
Hi Pupil Labs! I’ve created a custom program using the API with PsychoPy where it counts down to start after the recording starts when I hit enter. It worked yesterday but not today. I’ve tried with different Companion phones, modules, and glasses, but the issue seems to persist. By any chance, was there an automatic update last night?
Hi @user-0001be 👋 ! Just a heads-up — there was a major PsychoPy update last Friday. Could it be that it was updated on your end?
Please note also that we don’t automatically update any Python libraries on your system, so if something changed, it likely came from a local update.
Hello. We want to stream gaze data from the Motorola phone to a laptop when we don't have internet. We purchased the recommended Anker USB hub and I am able to connect the laptop and the Motorola together, but the Motorola can't find the Neon glasses. USB Camera is also unable to find the glasses. The Neon glasses are on (LED is on). Any tips on how to connect the two? Thank you. Update: it seems I can either connect the laptop and the Motorola, or the Motorola and the glasses, so I instead need a USB switch. Has anyone successfully connected a laptop to the phone to the glasses with a USB switch, without internet?
Hi @user-eebf39 , have you plugged Neon into the port marked "5 Gbps"? The USB cable on the Anker hub also should be plugged into the Companion phone for Neon to be properly recognized. A USB switch is not necessary with that hub.
With respect to connecting to the laptop, you do this via Ethernet cable. Do you have a router? That is the easiest method: you use the Ethernet port of the Anker hub to connect Neon to the router. Then, you also connect the laptop via Ethernet cable to the router and it handles the automatic device discovery. The router does not need an internet connection or WiFi functionality.
Hello, where can I find a detailed guide for using the new pupil labs? I want to understand as many of the features as possible. I appreciate the help!
Hi @user-7d4a32 , you can reference Neon's Documentation. Feel free to ask any questions you have here!
Hello! I am using PsychoPy to run a perceptual experiment. The eye tracker (ET) and the companion device are running on the same network. When I run the experiment (or in pilot mode), it does not trigger the ET to start recording. I'm also not getting any timestamps for events, I assume because it isn't recording anything. What am I missing?
I have the "start recording" component at the start of the first trial, and the stop component at the end of the experiment. The PLevents components are at the start of each trial. Still, nothing is recording. If I have Chrome open with Neon Monitor while I run the pilot/experiment, it shows it is connected to the device. But still, nothing is recording unless I do it manually.
Any help is greatly appreciated!
Hi, @user-a83fa3 - could you provide some more information? The following would be really helpful:
* PsychoPy log from running your experiment
* Your .psyexp file
* The version number of PsychoPy you're running
* Your operating system
Don't use that one yet. I think they are still considering it beta, as they bundled it last Friday but haven't posted it on their website yet
Some explanation - `2025.1.0` requires a minor change to plugins. Without it, plugins don't work. Unfortunately, that same change makes it so that the plugins don't work in older versions of PsychoPy.
I have the `2025.1.0` version of the plugin ready to go, but I don't want to publish it until they make `2025.1.0` available on their website (otherwise everyone who uses the download from the website will not be able to use the plugin).
If you need to use `2025.1.0`, I can send you a `.whl` of the plugin, but the installation process is a little different.
I'll try to reinstall 2024.2.4 and see what happens.
I have also noticed that sometimes on Windows, the PsychoPy uninstaller doesn't completely remove the PsychoPy folder, and I have to manually delete it.
I would ask that you:
1. Move that to ~~the end of the routine or~~ each frame
2. Restart your Companion Device
3. Share an image of the scene camera's view of your screen while it's displaying AprilTag markers. You can either share the recording's video or just take a screenshot of the scene preview in the Companion App
4. Share your PsychoPy log
Sure, I will try it tomorrow. Shall I send the video to [email removed]?
Hi! @user-cdcab0 I have a quick question. In the HDF5 data obtained through PsychoPy, is there a way to tell when there is a blink?
Not yet. Blink detection will soon be possible on-device, and at that point I believe it will be possible to send blinks to PsychoPy, although I haven't looked into it too deeply yet. Would you consider submitting a 💡 features-requests?
I do usually have it in run mode. I'm trying it on a different computer that isn't set up on the university network. It doesn't actually trigger the eye tracker to record, though? I have to manually hit record on the Companion Device. Would it help if I connect it directly to the computer?
> it doesn't actually trigger the eye-tracker to record though
Yes, the recording triggers and your events are saved in the recording for me.
> Would it help if I connect it directly to the computer?
By "it" do you mean the Companion Device? That might solve your problem, but you would need a USB hub and there is some configuration.
Are you able to share the PsychoPy log?
So the events are saved on the Companion Device for you? Like, the Companion Device is recording what the eye tracker sees? By "it" I was referring to the actual eye tracker.
I see the events in the .csv file with a timestamp, but there is no video recording saved on the Companion Device.
I got it to work!! Thank you!!
Glad to hear it! What ended up being the problem?
by "it" i was referring to the actual eye tracker By the way, the Companion Device is a required component of the Neon eyetracking system. Connecting your Neon frame to a PC will not give you any gaze data. This is produced by the Companion Device
Guys, Neon app started to crash on OnePlus phones. Any ideas how to solve this quickly?
Hi @user-e6ae95 , could you open a Support Ticket in 🛟 troubleshooting ? We will follow-up with you there.
Hi, I would like to ask:
1) Is there an easier way to do what I’m trying to achieve? 2) What could be the reason that the gaze.csv (with columns 'gaze position transf x [px]' and 'gaze position transf y [px]', output by pl-dynamic-rim) does not match the (x, y) positions on the screen video?
The output from pl-dynamic-rim includes a video composed of three views (1st screenshot) and a gaze.csv. I only need the screen recording with the gaze overlaid. To do this, I extracted the main function from pl-dynamic-rim and modified a few lines (code.txt). Is there a simpler way to obtain just the screen video with the gaze overlay?
I’ve attached screenshots showing that the gaze data doesn’t align with the position shown in the screen recording. This issue hasn't occurred before—previous videos matched correctly with the gaze data. But in this case, they don’t.
Hi @user-12efb7 👋
Great to hear the package is working for you! It's definitely long overdue for a cleanup and refactor 😅 — hopefully I’ll find some time to update it soon.
Just a quick note: we generally don’t provide code reviews unless you're on a consultancy package. If that’s something you’re interested in, feel free to check out our support packages.
1) Is there an easier way to do what I’m trying to achieve?
Generating the three videos is a good approach for cross-checking — especially to catch cases where gaze was incorrectly placed (e.g. due to a blink or misalignment in the reference image mapper).
If you're looking to get a final video with just the end result, you can make minimal changes to the library so that it renders only the screen view with the gaze overlay.
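As a rough standalone alternative to patching the library, something along these lines draws the transformed gaze directly onto the screen video with OpenCV (the file names and the `timestamp [s]` column are assumptions; adapt them to your export):
```python
# Overlay 'gaze position transf' points onto the screen recording.
# Assumes gaze.csv and screen.mp4 from a pl-dynamic-rim run; column and
# file names are assumptions and may need adapting.
import cv2
import pandas as pd

gaze = pd.read_csv("gaze.csv")
cap = cv2.VideoCapture("screen.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
out, frame_idx = None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("screen_with_gaze.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    t = frame_idx / fps
    # nearest gaze sample in time for this frame
    row = gaze.iloc[(gaze["timestamp [s]"] - t).abs().argmin()]
    x = int(row["gaze position transf x [px]"])
    y = int(row["gaze position transf y [px]"])
    cv2.circle(frame, (x, y), 20, (0, 0, 255), 4)
    out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```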
2) What could cause gaze.csv values to not match the (x, y) positions in the screen video?
The values in gaze position transf x [px], gaze position transf y [px] are exactly what get plotted in the screen video. If there’s a mismatch, it’s likely due to a modification in your code — perhaps an unintended change to how frames, scaling, or mappings are handled — but without diving into the changes you made, it's hard to know.
Thanks so much! I didn’t know about that — I’ll definitely check it out 🙂 And thanks for answering my question as well 🙂
One more question — what’s the best practice for calibration if I'm planning to use Neon only with on-screen data? At the moment, I’ve only found this: https://docs.pupil-labs.com/neon/data-collection/offset-correction/
Neon uses NeonNet, a deep learning-based approach to infer gaze directly from eye images — so no calibration is required.
That said, performing a Gaze Offset Correction is definitely the optimal approach if you’re aiming to fine-tune accuracy. Note that this only needs to be done once per wearer, since the offset is stored in their profile.
You can also apply the offset correction post-hoc to individual recordings if needed.
Hello, thanks for the help that was provided prior. I have custom Python scripts, based somewhat on the samples, that:
1) stream the scene and eye cameras from the Neon and save them to MP4 files,
2) overlay the gaze data dynamically onto the scene camera video, which is both saved and previewed on-screen,
3) receive and write out the IMU and gaze data to flat files,
4) overlay information from other Lab Streaming Layer feeds onto the scene camera video dynamically as well, and
5) write out video heartbeat information (duration and frame info) to LSL.
My customer would like to have audio muxed into the scene video. Is it possible to get the audio information? From what I can tell, RTSP is being used underneath. I can even see that a URL parameter of "audioenable=on" is being supplied. Thank you for any help that can be provided
Audio is definitely present in the RTSP stream. If I enable the microphone, I can hear audio if I point VLC at the following: rtsp://<ip address>:8086/?camera=world&audioenable=on
Hi @user-937ec6 ! Currently the Realtime Python API client does not have support for audio streaming.
I've now run out of space on Pupil Cloud, even though I've deleted a bunch of old recordings. How do I "empty the trash", so to speak? I am doing it in the Trash section but I keep getting an internal server error. It isn't updating how much storage is actually left when there are deleted recordings, and I haven't got any enrichments for these files.
Could you open a ticket in 🛟 troubleshooting , please?
Hi @user-a83fa3 , you find the Trash by clicking on the three-button menu at the top left of the Workspace view and choosing Show trashed, as shown in the attached image.
Hi! I was wondering if there is a way to merge two separate recordings from the Monitor app?
Recordings, whether triggered by the Monitor app or the phone itself, cannot be merged. If you would find this a useful addition, please feel free to request it in the 💡 features-requests channel for evaluation.
Hi, I used the Neon eye tracker on a surgical robot to record gaze data. Unfortunately, the scene camera was facing the robot and didn’t capture the surgical field. To visualize the gaze, I used the robot’s internal video and projected the gaze coordinates onto it after synchronization. However, the projection isn’t accurate. Could this be due to a calibration issue or the strong surgical lighting affecting the eye tracker? I’d appreciate your thoughts. Thanks!
Hi @user-2b5d07 👋 ! There are quite a few unknowns here — would you mind sharing how you performed the re-projection onto the surgical robot’s video?
From what you've described, that step seems like the most likely source for an issue.
Hi, I am using PsychoPy to write a program that shows image stimuli, following this tutorial: https://docs.pupil-labs.com/neon/data-collection/psychopy/. I wonder if the AprilTags need to be added to each image stimulus on the computer display, as shown in the gaze_contigent_demo.psyexp? Thanks in advance!
Hi @user-9a1aed , if you want to know where the person looked on each image, then yes, it will be necessary to display the AprilTags for each stimulus.
We’ve been successfully using Neon with PsychoPy for data collection over the past three months without any issues. However, today it suddenly stopped working — PsychoPy is no longer communicating with Neon. The start and stop recording triggers aren’t functioning, gaze data is not being received, and the red gaze circle is no longer visible on the companion phone.
I’ve already tried restarting all devices, clearing the cache, and resetting any active Python backends on the laptop, but the issue persists. The browser live stream still works, so it seems that the connection between the phone/eye tracker and the computer is intact.
Do you have any idea what might be causing this? I’d appreciate any insight you could provide. Thank you!
Update: the neon.local:8080 browser live stream also stopped working
- the red gaze circle is no longer visible on the companion phone
- neon.local:8080 browser live stream also stopped working
Can you open a ticket in 🛟 troubleshooting? You'll want to resolve these problems before trying anything with PsychoPy
Hi Pupil Labs, I was looking to get more information about how the timestamping on world/scene and eye cameras is achieved in Neon eye-trackers. Is there documentation around it already? I'm particularly interested in accessing the scene camera frames directly as a usb video device and get as accurate timestamps as possible. This'll help me do some image processing tests directly on the scene camera output instead of having to record it with the companion app and then exporting it for processing. Thanks in advance!
Hi @user-ccf2f6 , all data from Neon are timestamped with the same high-precision clock. If you are looking to use Neon's Scene Camera that way, then you can similarly timestamp the incoming video frame with the high-precision clock that is present on the receiving device. You will just want to account for USB transmission + processing delay, which could depend on the receiving system.
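If it helps, a minimal sketch of that host-side timestamping (device index 0 is an assumption; on Linux, check /dev/video* for how the scene camera enumerates):
```python
# Host-side timestamping sketch: read the scene camera as a USB video
# device and stamp each frame on arrival with the host clock.
import time
import cv2

cap = cv2.VideoCapture(0)  # assumed device index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    host_ts_ns = time.time_ns()  # arrival time; includes USB + decode delay
    print(f"frame at {host_ts_ns} ns, shape={frame.shape}")
cap.release()
```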
Hello, I am using Neon Player v5.0.1 and the latest Companion App 2.9.0. After exporting a recording and trying to load it in Neon Player, Neon Player crashes. If I try to load it again, the screen shows "file format outdated, delete the Neon Player subfolder and start fresh"... Even after doing this, Neon Player crashes again.
Hi, @user-f4b730 - could you send me your recording to troubleshoot? I'm unable to replicate this with any of my recordings
the error in the log is the one at the bottom:
Hello, is there an easy way to downgrade the Companion app? We rely on API version 1.3, and the phone was updated to the newest app version.
Hi @user-0e6279 , while rolling back the app is not possible, it is useful to clarify some points:
Hello! I am testing my preprocessing pipeline that will use the Face Mapper enrichment. I only have 2 recordings so far that are about 1.5 minutes each, yet the enrichment has been running for over 30 minutes. How long should I expect this processing to take? Is there any way to speed it up, as I will need to process about 2000 of these in the near future?
Hi @user-13d297 , would you be able to open a Support Ticket in 🛟 troubleshooting with the associated Enrichment IDs?
Hello, I am running a program in PsychoPy with Neon connected to the phone's Neon Monitor app, but the fixation does not seem to be synced with the eye tracker. I need the participant to fixate on the cross before the trial starts. How can I achieve that? I can only view the fixation data in the app, but the program does not capture the participant's fixation. Could anyone please help?
Hi, @user-9a1aed - Are you wanting to visualize the participants gaze in your PsychoPy app? PsychoPy doesn't automatically visualize gaze data for you, but it's pretty simple to implement yourself. We put together a very simple gaze-contingent demo in PsychoPy which does this. Check it out!
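For the "fixate before the trial starts" part, the same idea extends to a small gating loop. A rough sketch (the get_gaze_pos helper is hypothetical and assumes gaze is already mapped into your PsychoPy window coordinates, e.g. with the real-time-screen-gaze package):
```python
# Sketch: block until gaze dwells near the fixation cross.
# `get_gaze_pos` is a hypothetical callable returning (x, y) in window units.
import time

def wait_for_fixation(get_gaze_pos, target=(0.0, 0.0), radius=0.1, hold_s=0.5):
    start = None
    while True:
        x, y = get_gaze_pos()
        on_target = (x - target[0]) ** 2 + (y - target[1]) ** 2 <= radius ** 2
        if on_target:
            start = start or time.monotonic()
            if time.monotonic() - start >= hold_s:
                return  # held fixation long enough; start the trial
        else:
            start = None  # gaze left the target; reset the dwell timer
```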
I cannot find "stream" on the top left of my Companion Device, so perhaps I did not configure it correctly.
That video is from an older version of the app. Streaming information is now available by tapping the icon in the top right corner that looks like a phone with a wireless signal coming out of it (see screenshot).
If PsychoPy cannot connect to the device ("cannot connect to host neon.local:8080"), then nothing else will work correctly. First, please make sure that the Companion Device and the PC running PsychoPy are on the same network. Then, I'd go into your experiment settings and change `neon.local` to your Companion Device's IP address.
For exposure, your left image is good, but the right one is pushing it. Can you try making the markers larger though?
> Is it possible that the gaze indicator may have blocked the AprilTag?
There are multiple AprilTag markers for redundancy and accuracy. You can occlude several of those markers and still achieve good tracking.
> I saw that some people had issues with the local network and thus switched to the device's hotspot for connection. May I know if there is a way that I can check if the eye tracker is properly connected to the program?
You could try installing the Realtime API and running the examples. This would remove PsychoPy from the equation and help identify whether the problem is a networking issue.
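For example, a minimal connectivity check with the simple API (assuming `pip install pupil-labs-realtime-api`):
```python
# Minimal connectivity check, independent of PsychoPy.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found - likely a network issue")
print(device)                       # the Companion device
print(device.receive_gaze_datum())  # one gaze sample confirms streaming
device.close()
```
If discovery fails here, the problem is networking rather than PsychoPy.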
Thank you. I connected my Mac to the device's hotspot and ran the code to connect the device using the IP address, following https://docs.pupil-labs.com/neon/real-time-api/tutorials/. I am not familiar with Python. After following the steps, I received the following error.
File "/Users/yyang/Library/CloudStorage/OneDrive-HKUSTConnect/ustProject/aqRec_project/3results/scripts/from pupil_labs.realtime_api.py", line 1, in <module> from pupil_labs.realtime_api.simple import Device File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/realtime_api/init.py", line 4, in <module> from .device import APIPath, Device, DeviceError, StatusUpdateNotifier File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/realtime_api/device.py", line 11, in <module> from pupil_labs.neon_recording.calib import Calibration File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/neon_recording/init.py", line 6, in <module> from .neon_recording import load File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/neon_recording/neon_recording.py", line 9, in <module> from .stream.imu import IMUStream File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/neon_recording/stream/imu/init.py", line 1, in <module> from .imu_stream import IMUStream File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/neon_recording/stream/imu/imu_stream.py", line 7, in <module> from . import imu_pb2 File "/Users/yyang/opt/anaconda3/lib/python3.9/site-packages/pupil_labs/neon_recording/stream/imu/imu_pb2.py", line 11, in <module> from google.protobuf.internal import builder as _builder ImportE
It looks like the API is for more advanced users. I used the Monitor app to view the real-time tracker on my Mac, like below. Does it mean the connection works successfully?
> It looks like the API is for more advanced users.
It is, but it can help us determine the problem better.
> I used the Monitor app to view the real-time tracker on my Mac, like below. Does it mean the connection works successfully?
Yes, probably, but the error you shared above could be the cause of your PsychoPy not receiving data. Can you share your PsychoPy log?
Okkk. Thank you! I got it working, but the accuracy seems very off. In my program, the participant has to look at the cross before viewing each image (in the demo, I fixated on the cross the whole time but did not seem to according to the gaze indicator?). When I test the program with the eye tracker, even if I am fixating on the cross, it does not initiate the stimulus presentation. The log from PsychoPy is below. Thank you!!
```
position: [-0.3  0. ]
2.5955 ERROR Failed to import the ioLabs library. If you're using your own copy of python (not the Standalone distribution of PsychoPy) then try installing it with: pip install ioLabs
2.5956 ERROR Plugin psychopy-iolabs entry point requires module psychopy_iolabs, but an error occurred while loading it.
Stopping run loop for rtsp://10.79.43.134:8086/?camera=gaze&audioenable=on
Stopping run loop for rtsp://10.79.43.134:8086/?camera=world&audioenable=on
1.7204 WARNING Monitor specification not found. Creating a temporary one...
ioHub Server Process Completed With Code: 0
########### Experiment ended with exit code 0 [pid:9436]
```
On a laptop with Windows, you definitely should double check that UI scaling in your display settings is set to 100%, but yours looks like it just needs a gaze offset
Hi. I have one question about the IMU data. Looking at the raw data I have (full data line below):
euler = [-24.877704997985777, -9.23895125875146, 109.65766517017222]
quaternion = [0.5464919805526733, 0.13019901514053345, -0.1879594624042511, 0.8056462407112122]
If I run "from scipy.spatial.transform import Rotation as R" and: R1 = R.from_quat(quaternion) print(R1.as_euler('XZY', degrees=True))
I was looking for the axis combination that could explain the transformation but can't find it. What am I missing?
This is the whole line from the "imu.csv" file: c6785c19-b102-43ad-842a-43547b1779c7, de0ee700-8a17-4fae-b93b-dda8b1d9fca8,1719939445325777116, -62.200546, -2.672195, 140.501022,0.490723, -0.298828,0.9375, -24.877704997985777, -9.23895125875146, 109.65766517017222, 0.5464919805526733, 0.13019901514053345, -0.1879594624042511, 0.8056462407112122
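For reference, a brute-force diagnostic along these lines can test the standard scipy axis conventions against those values (purely illustrative; the sorted-absolute-value comparison is a rough heuristic):
```python
# Try every axis permutation (intrinsic and extrinsic) and report which,
# if any, roughly reproduces the CSV's euler angles from its quaternion.
from itertools import permutations
import numpy as np
from scipy.spatial.transform import Rotation as R

quaternion = [0.5464919805526733, 0.13019901514053345,
              -0.1879594624042511, 0.8056462407112122]  # scipy order: x, y, z, w
target = np.array([-24.877704997985777, -9.23895125875146, 109.65766517017222])

rot = R.from_quat(quaternion)
conventions = ["".join(p) for p in permutations("XYZ")]
for axes in conventions + [c.lower() for c in conventions]:
    euler = rot.as_euler(axes, degrees=True)
    if np.allclose(np.sort(np.abs(euler)), np.sort(np.abs(target)), atol=0.5):
        print(axes, euler)
```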
I recently made this exact same mistake 🤦🏽
edit: see below
Hello, I have a problem with the CSVs exported from Pupil Cloud, specifically with the saccade data. I can see the saccades in the videos, but the saccades.csv files I downloaded from the cloud are empty. Could you help me with this, please? (The app version I use is 2.9.0-prod.)
Hi @user-639b3a , I see you opened a Support Ticket in 🛟 troubleshooting . Thanks. We will follow up with you there.
Hi team,
I’m still seeing this error when calling device.receive_gaze_datum() from the Real-time API 1.5.0 on Ubuntu 22.04/ROS 2 Iron, even after disabling both “Compute eye state” and “Compute fixations” in the Neon Companion app:
```
Raw gaze data has unexpected length: [email removed] \x00?\x19\xf9y', timestamp_unix_seconds=1745004922.9094741)
Traceback (most recent call last):
  File "/home/callahan/.local/lib/python3.10/site-packages/pupil_labs/realtime_api/streaming/gaze.py", line 150, in receive
    cls = data_class_by_raw_len[len(data.raw)]
KeyError: 89
Stopping run loop for rtsp://10.76.85.60:8086/?camera=gaze&audioenable=on
```
What I’ve tried so far:
- Confirmed pupil_labs.realtime_api is v1.5.0
- Disabled “Compute eye state” & “Compute fixations” in Companion
- Restarted both the Companion app and my ROS node
Despite that, I still got that error. Can you advise on any additional debug steps you’d recommend?
Thanks for your help!
Sorry, never mind, it got fixed. Thanks!
Hi team,
I’m now encountering this when calling gaze_mapper.process_frame(frame, gaze):
[eye_gaze-10] Traceback (most recent call last):
[eye_gaze-10] File "/home/garrison/ros2_ws/install/pupil_labs_ros2/lib/pupil_labs_ros2/eye_gaze", line 33, in <module>
[eye_gaze-10] sys.exit(load_entry_point('pupil-labs-ros2==0.0.0', 'console_scripts', 'eye_gaze')())
[eye_gaze-10] File "/home/garrison/ros2_ws/install/pupil_labs_ros2/lib/python3.10/site-packages/pupil_labs_ros2/eye_gaze.py", line 157, in main
[eye_gaze-10] result = gaze_mapper.process_frame(frame, gaze)
[eye_gaze-10] File "/home/garrison/.local/lib/python3.10/site-packages/pupil_labs/real_time_screen_gaze/gaze_mapper.py", line 64, in process_frame
[eye_gaze-10] gaze_undistorted = self._camera.undistort_points_on_image_plane([[gaze[0], gaze[1]]])
[eye_gaze-10] File "/home/garrison/.local/lib/python3.10/site-packages/pupil_labs/real_time_screen_gaze/camera_models.py", line 35, in undistort_points_on_image_plane
[eye_gaze-10] points = self.unprojectPoints(points, use_distortion=True)
[eye_gaze-10] File "/home/garrison/.local/lib/python3.10/site-packages/pupil_labs/real_time_screen_gaze/camera_models.py", line 60, in unprojectPoints
[eye_gaze-10] pts_2d_undist = cv2.undistortPoints(pts_2d, self.K, _D)
[eye_gaze-10] cv2.error: OpenCV(4.10.0) /io/opencv/modules/calib3d/src/undistort.dispatch.cpp:403: error: (-215:Assertion failed) CV_IS_MAT(_cameraMatrix) && _cameraMatrix->rows == 3 && _cameraMatrix->cols == 3 in function 'cvUndistortPointsInternal'
It looks like something’s off with the camera matrix or distortion parameters. I’ve tried resetting the calibration, but the error persists. Could you suggest how to troubleshoot this?
Thanks!
Can you make sure your `real-time-screen-gaze` package is up to date? If it is, please share your code and I can help troubleshoot.
Also, I have another issue with:
Stopping run loop for rtsp://10.76.126.146:8086/?camera=eyes&audioenable=on
Stopping run loop for rtsp://10.76.126.146:8086/?camera=world&audioenable=on
This is with the Real-time API 1.5.0 on Ubuntu 22.04/ROS 2 Iron. I've tried restarting the app and the node, but they don't seem to be working. Can you advise on any additional debug steps you'd recommend?
Thanks for your help!
Sorry thank you! I restarted the phone and it’s working. Never mind! Thanks
Great to hear! Just for the record though, the `real-time-screen-gaze` package is separate from the `realtime-python-api` package.
Maybe from the pupillary measurements?
Probably not easily. I can tell you subjectively that, during a blink, the gaze and pupillometry data that Neon produces are mostly noise. It's feasible that one might be able to reliably classify signal versus noise with one or both of those streams to identify blinks, but I don't have any examples or previous work to point you towards. It is feasible for us to send this data to PsychoPy though - please submit a post in 💡 features-requests 🙂
> in our specific study settings. We are not able to store/use anything from the raw recordings
Maybe you and I have discussed this before, but could you remind me why this is?
Hi, does anyone have an idea about how the projection of the gaze x and y coordinates onto the scene video is done? Also, is there any time offset between the recorded gaze coordinates and the scene video? Thanks!
Thank you, I will submit a request. Just to be clear, what data are you going to send to PsychoPy? The eye camera footage? The blink info?
And the reason we can't do that is due to the IRB regulation of this particular study, not due to technical reasons
Blink events. Raw eye camera data might be feasible, but I'm not sure. Do you have a use case for this?
> we can't do that is due to the IRB regulation
Ah, thanks. Hopefully in future work you can make recordings. IMO, it's much easier to process/analyze these data post-hoc than in real-time, especially with PsychoPy.