Hi there, is there any chance of making the Network API plugin available for Player? We want to do recordings and then connect to the playback via ZMQ.
@papr Sorry for being impatient, but do you have a quick answer on that?
Hi @papr, the gaze point coordinates from the first picture show a large error when drawn on the second picture. Is this because the screen captured by the plugin (surface tracking) is not a rectangle?
@user-594d92 could you please specify what you have visualized in the second picture?
I want to use the coordinates of the gaze points on the first image and then draw them on the second image. As you can see, the gaze point is on the cup, but the point drawn by the coordinates is away from the cup.
@user-b8f52e This is currently not planned and I am not able to assess how much work that would be at the moment.
thank you
@user-594d92 are you drawing all mapped gaze points? Or only a few? What do you use as your data source?
Sure, I drew all mapped gaze points. The data used is from the file. The file path is as follows: C:\Recordings\2020.11.19\004\exports\001\surfaces\gaze_positions_on_surface_Surface 2\
@user-594d92 This was the information I was looking for. Thank you. This is the correct file to use. Would you be able to share this recording with [email removed]? From the pictures alone I am not able to tell what is going wrong.
In the meantime, could you please check if this tutorial works for you? https://github.com/pupil-labs/pupil-tutorials/blob/master/02_load_exported_surfaces_and_visualize_aggregate_heatmap.ipynb You should be able to replace the cover image with your own.
https://drive.google.com/file/d/1LNDB0FpyU95y3OrALct6kYhC9bJUKX6b/view?usp=sharing I have uploaded the recording. What I want to do is to draw the mapped gaze points on the background picture. But I don't know why there is a large error in the drawing of the second picture. Thank you very much!
@papr Thanks for your advice, but I don't know how to share the recording video. I know I can share the exported data here.
Hello community, I have an issue with my eye tracker... One of the cameras is disconnected from the device. I tried changing the resolution and switching the cameras, but I still have the problem. I get a reconnection when I touch the cables, so maybe it's a mechanical issue? Thanks for your advice.
@user-74c497 Please contact info@pupil-labs.com in this regard
Hi
Quick question about the eye cameras on the core head set. What is the wavelength band used by the infra red cameras? I want to try recording eye movements through LCD shutter glasses but before buying any glasses I wanted to check that the glasses transmit the IR light from the eye cameras.
Thanks!
Is there any way to use a Raspberry Pi with Pupil Core, or to use a Raspberry Pi instead of a cellphone?
Hi @papr, I have cloned pupil-2.6 and tried to execute run_capture.bat with the Pupil Core headset, but the world process shuts down due to an ERROR as shown in the picture. Do you have any hint about this error?
@papr Hi, a version error occurred when I ran Pupil 2.6 from source. I configured the environment according to "Dependencies for Windows", but the page "Download FFMPEG v4.0 Windows shared binaries" shows Not Found, so I used the .dll files from a previous version (2.1).
(pupil36) E:\pupil\pupil2.6\pupil\pupil_src>python main.py capture
world - [INFO] numexpr.utils: Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
world - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
world - [INFO] launchables.world: Application Version: 2.6.19
world - [INFO] launchables.world: System Info: User: YB, Platform: Windows, Machine: MSI, Release: 10, Version: 10.0.18362
world - [ERROR] pupil_detector_plugins.detector_3d_plugin: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "E:\pupil\pupil2.6\pupil\pupil_src\launchables\world.py", line 164, in world
    from pupil_detector_plugins.detector_base_plugin import PupilDetectorPlugin
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\pupil_detector_plugins\__init__.py", line 16, in <module>
    from .detector_3d_plugin import Detector3DPlugin
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\pupil_detector_plugins\detector_3d_plugin.py", line 42, in <module>
    raise RuntimeError(msg)
RuntimeError: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!
world - [INFO] launchables.world: Process shutting down.
Hello @papr, after opening the "Surface Tracker" plugin, how do I get surface data in real time?
@user-a98526
RuntimeError: This version of Pupil requires pupil_detectors >= 1.1.1. You are running with pupil_detectors == 1.1.0. Please upgrade to a newer version!
You can do this with:
pip install --upgrade pupil-detectors
Please follow these instructions and let us know if you continue having issues.
I can also recommend running the bundle instead of running from source. This will avoid any of these setup issues.
@user-2ab393 Are you familiar with the network api? https://docs.pupil-labs.com/developer/core/network-api/ This example uses it to receive the surface-mapped gaze in realtime: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
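If you are not, here is a minimal sketch of the receiving side in Python, condensed from the linked helper. It assumes Capture's default Pupil Remote port (50020) and an already-defined surface:

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port (default address assumed)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to surface events; they contain the surface-mapped gaze
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:" + sub_port)
sub.subscribe("surfaces.")

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.unpackb(payload, raw=False)
    for gaze in surface["gaze_on_surfaces"]:
        # norm_pos is in surface coordinates; on_surf flags points inside the surface
        print(gaze["norm_pos"], gaze["on_surf"], gaze["confidence"])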
@user-d444bc Which Python version are you using?
Did you know that you can run from bundle and do nearly everything that you can do when running from source? Similarly to @user-a98526, you could avoid these types of setup issues. You can download it here: https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads
I am using Python 3.6.0, and I am just trying to add some functions to test my experiments.
Hi everyone! I'm doing a study which involves Pupil Core and PsychoPy. I want to match both data streams, so I can analyze eye movements according to specific stimuli. However, it has been a bit complicated for me to convert their timestamps to the same format... Also, I added a plugin in Pupil Player to visualize the recording time and I noticed that it doesn't match the time in the info.player file... What do these times refer to?
@user-f195c6 I guess the first resource to look at would be https://docs.pupil-labs.com/core/terminology/#timing
@user-f195c6 Afterward, I would recommend having a look at this: https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
Thank you very much!!!
Hello, I've got a problem. I recently changed my fisheye camera to the normal camera provided. I also recalibrated the world intrinsics, as described on the Pupil Labs website.
My problem is that I have this recording with surfaces defined as in my screenshot. You can clearly see that the gaze and fixations are inside the surface. But in the exported csv file, almost all of my data has values larger or smaller than expected. I'll send you the corresponding export file so you can check what I mean. I don't really know where it comes from, and I didn't have this problem using the fisheye camera. I'm copy-pasting the surface definitions from other files which use the same layout. I'm using Pupil Player 2.6 and I can give more information if required. It happens on other recordings using the same surface definitions too. Do you have an idea where this problem could come from?
@user-19f337 I have two questions: did you record the recording with the new intrinsics? And did you define the surface after changing the intrinsics or before?
@user-19f337 If you want, I can have a more detailed look. For that, please share the recording with [email removed]
I recorded using the new intrinsics, but the surface definition is a copy-paste from another recording which maybe didn't have the correct intrinsics when defined. I will redefine the surfaces and update you when done.
Works fine now, thanks!
Hi community! I am new to Pupil Core and am using two Core headsets in a social interaction experiment with a mother and a baby. I am having trouble recording both participants simultaneously in Pupil Capture. Could anyone help me with this?
@user-1d05f7 The recommended setup would be to use two separate computers, one for each headset. Additionally, you should activate the Time Sync and Pupil Groups plugins. This allows you to make synchronized recordings.
If it's not possible to record from two computers, can I record from the same computer?
Yes, theoretically, it is possible when running the software twice on the same computer. But for one, the computer might not have sufficient computational resources, and secondly, the camera auto-selection does not work in this case. You will have to manually set up the headset cameras for each software instance. In other words, the software is not designed to run twice on the same computer with two separate headsets.
Ok, thanks. Are there instructions for manual setup of the cameras?
@user-d444bc Please be aware that we require Python 3.6.1 or higher. The difference between 3.6.0 and 3.6.1 is that the latter comes with a newer typing version, which could be the reason for your issue.
Thank you very much for your answer. I wasn't aware of that requirement. So, after updating the Python version, I should be able to run it without the problem.
@user-d444bc hopefully, yes
Hi @papr, my apologies if you have already seen my earlier question, but I wanted to ask again just in case you missed it. Are you able to tell me the wavelength band used by the infrared eye cameras, please? I want to try recording eye movements through LCD shutter glasses, but before buying any glasses I wanted to check that the glasses transmit the IR light from the eye cameras. Thanks!
@user-a09f5d Apologies for not responding yet. I forwarded the question to our hardware team. I will forward their response to you as soon as I hear back from them.
Perfect! Thank you very much. I figured that was the case but wanted to check on the off chance. Thanks again.
Hi, I have a question regarding the "gaze history" feature (in the Vis Polyline plugin). When I try to use this, it runs the preprocessing fast/fine, but it gets to about 20-23% of the calculation and then the app stops responding. It reliably gets stuck at around the same percentage regardless of the duration I select, so any insight would be appreciated. I'm on macOS.
@user-a09f5d The wavelength is 860nm
Thanks for that @papr. I assume that 860nm is the peak wavelength? Were you able to find out the wavelength range of the eye camera (e.g. 830nm-890nm with the peak at 860nm)?
@user-3d024c could you please share the ~/pupil_player_settings/player.log file after reproducing the issue? I hope it has hints about the issue.
It's currently stuck at 20%, not responding, but hasn't crashed yet so there's nothing relevant in the file but I believe I attached it.
@user-3d024c you are right that the log does not indicate a crash. I was not able to reproduce the issue with a recording of mine. Could you please share the recording with [email removed] such that we can try to reproduce the issue?
Update: I let the app not respond for a while instead of quitting and it spontaneously completed calculation/began responding again. Sorry to bother you, thanks!
@user-a09f5d That is the wavelength that our IR LEDs emit. From my understanding, this is more or less a constant.
@user-a09f5d I will try to find out more information about the eye camera filter range though.
Sorry, my bad! I was actually asking about the distribution of wavelengths around the peak at 860nm, since most LEDs don't emit at only a single wavelength, but rather a narrow band of wavelengths around a peak. For example, light spectra for LEDs usually look something like this (see attached). In this example you would say the LED emits between 900nm-1000nm with a peak at approx. 940nm.
@user-a09f5d ok, I think I understand. I do not think that I will get this information for our LEDs but I have again forwarded your question, pointing out that you are actually interested in the wavelength range of the IR filter, not the IR LED wavelength range. Can you confirm that I understood your question correctly?
So I'm actually interested in both, as they both affect the light that is actually emitted and used by the eye cameras. Thanks for forwarding my question. Even if you can't find out the extra information, knowing that the LED emits at 860nm is extremely useful, so thank you for that.
In the future I would like to measure the emission spectrum of the eye camera LED myself. If I am able to borrow the equipment and take the light measurements, I would be more than happy to send you the results in case you find them useful or to help with any future questions from others.
@user-a09f5d An independent verification of the wavelength numbers would be very helpful!
No worries! If I am able to measure it I will be sure to let you know. Thanks again for your help. Please let me know if you are able to find out any more information.
Hello, apologies if this has been asked before. I just wanted to verify that the max dispersion slider when detecting fixations does not refer to a radius (contrary to other algorithms). Thank you!
That is correct.
Hi @papr, a new problem occurred after I ran pip install --upgrade pupil-detectors. The reason why I run from source is that I am still using the RealSense plugin. Thank you very much for your help.
(py36pupil26) E:\pupil\pupil2.6\pupil\pupil_src>python main.py capture
Traceback (most recent call last):
  File "main.py", line 33, in <module>
    from version_utils import get_version
  File "E:\pupil\pupil2.6\pupil\pupil_src\shared_modules\version_utils.py", line 8, in <module>
    import packaging.version
ModuleNotFoundError: No module named 'packaging'
I am trying to install these packages to solve the problem.
I found that the error was due to my incorrect run of pip install -r requirements.txt. But when I use this command, the following happens:
As shown in the figure, I have installed Shadow
I tried to solve these problems by manually installing different .whl files. Finally, I successfully ran pupil_capture. However, it is still unable to connect to the RealSense. The plugin error message is as follows:
world - [INFO] numexpr.utils: Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
world - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
world - [INFO] launchables.world: Application Version: 2.6.19
world - [INFO] launchables.world: System Info: User: YB, Platform: Windows, Machine: MSI, Release: 10, Version: 10.0.18362
world - [WARNING] plugin: Failed to load 'realsense2_backend'. Reason: 'cannot import name 'load_intrinsics''
@user-594d92 the easiest way to share it is to zip and upload the recording folder to a file sharing service, e.g. Google Drive and share the link to the data in the email
@user-594d92 could you also share the image that is displayed on the monitor? i.e. the undistorted surface target image
This is the picture displayed on the monitor
@user-594d92 It looks like it is not using the correct intrinsics. You said that you changed the lens and re-estimated the camera intrinsics, correct? Somehow they did not get applied to this recording.
You can check this by exporting the recording with the iMotions exporter. It should export an undistorted video but instead it is completely distorted inwards. This is why the surface mapper maps the gaze towards the middle.
In Capture, you can check the current intrinsics by enabling the "show undistorted image" button in the intrinsics estimation plugin menu
Thank you very much for your suggestion. I will try again following your advice.
Hello. One of my eye cameras stopped working. Can I still get a robust eye detection and gaze point with just one eye?
It will be less robust, especially if you look away from the working eye camera.
Also, what is the best way to troubleshoot the eye camera? A study participant touched the frame and it stopped working. It seems there was a little static discharge. If I disconnect and reconnect the camera, the Windows USB sound plays, but then Pupil Capture says it cannot connect to the device.
Please contact info@pupil-labs.com in this regard.
This has happened in the past, and usually closing Pupil Capture, unplugging the eye tracker, and disconnecting and reconnecting the camera did the trick... but not this time.
Hi @papr, I want to know if there is a way to use the RealSense plugin in Pupil 2.6. I found that there are differences between the two versions of the code; can I directly add load_intrinsics to Pupil 2.6?
@user-764f72 Please create a fork of the RealSense plugin gist that is compatible with the latest Pupil release and make a PR to the community repo, linking both gists and indicating which version to use with which Pupil release. I think you only need to update the camera_model imports. Please also link the version here for @user-a98526 to test.
Hi @papr, first, thank you for the kind replies to my questions so far. I am currently analyzing eye captures of about an hour, and it is hard to interpret timelines (e.g. blink detection) in Pupil Player. I want to ask if you have future plans to make an adjustable timeline window. Thanks!
I am getting the following error on my computer when loading pupil player:
ohhhhh
@user-908b50 You are running a newer version of the software on an older recording. This should have been updated automatically; I guess that you did not pull the tags correctly. Nonetheless, simply delete the lookup files. They will be regenerated correctly.
can you share with me what you did?
Sorry for the delay in getting back to you! Busy with a final exam and term paper. Just getting back to my data now. You'll have to go in and delete the eye.lookup & world.lookup files. That would get your files processed.
Help. I ran run_capture after installation and it gives this error:
File "main.py", line 33, in <module>
    from version_utils import get_version
ImportError: cannot import name 'get_version'
@user-8711fc I see you also wrote an email to [email removed] I will respond to you via email from there.
@user-a98526 @user-764f72 updated the community repository with an updated version of the RealSense plugin. Please use the appropriate plugin based on your installed Pupil version: https://github.com/pupil-labs/pupil-community
Thanks for your help.
Hi everyone, I need to send a trigger from my MATLAB script to Pupil Core about my trial number, stimulus onset/offset, etc. Do you know what this trigger should be, in terms of MATLAB functions and its arguments?
@user-98789c Please have a look at our matlab helper scripts https://github.com/pupil-labs/pupil-helpers/tree/master/matlab Triggers are called annotations in Pupil Core software. You can read about them in our documentation. https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
Alternatively, there are ways to call python functions from matlab. We will be able to support you much better in case of problems if you use Python instead of Matlab.
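For reference, sending such a trigger from Python might look roughly like this (a sketch assuming the default Pupil Remote port; the trial_start label is just an illustrative example):

import time

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Query the PUB port and the current Pupil time (for timestamping)
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:" + pub_port)
time.sleep(1.0)  # give the PUB connection a moment to establish

annotation = {
    "topic": "annotation.trial_start",  # illustrative label
    "label": "trial_start",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))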
Hi everyone. I want to know if the Pupil Core can record eye movements over videos rather than static images? I notice that most of the research in my domain focuses on the analysis of static images. Also, I built a DIY headset; however, the world camera does not work, even though I tried deleting the Pupil Core device and installing it again in Device Manager. When I test the world camera with a media player it works, BUT it does not work in Pupil Core.
Pupil Capture is designed to predict gaze on real-time video. Pupil Player can predict gaze for recorded video. Is this what you are looking for?
Regarding the driver issue, it looks like the cameras are not correctly installed in the libusbk driver category. Please make sure to run the latest Pupil release and to connect the headset before starting Pupil Capture, for the automatic driver installation to work.
Hi @papr, I want to know what I should do to get the RealSense depth_frame, similar to recv_world_video_frames_with_visualization.py:
The depth_frame is not being published on the IPC, therefore it is currently not possible to subscribe and receive it. You would have to modify the plugins code to publish it.
if topic == 'frame.world':
    recent_world = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
elif topic == 'frame.eye.0':
    recent_eye0 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
elif topic == 'frame.eye.1':
    recent_eye1 = np.frombuffer(msg['__raw_data__'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3)
URGENT: Pupil Capture says the camera is already in use or blocked. What have I done to it? What should I do to fix it?
@user-98789c Which operating system do you use? Have you tried restarting the computer and making sure that there is no other program accessing the camera?
Windows 10 Pro on Lenovo ThinkPad
@user-98789c Please try restarting the computer as a first step. Make sure the headset is connected and start Capture. Does the issue remain?
I restarted the laptop and it still does not work. I have been trying to interact with Pupil Capture from MATLAB; maybe I have done something wrong there?
@user-98789c Could you please go to the Video Source menu in Capture, enable "Manual camera selection", click "Activate camera", and check which cameras are listed? Do you see 3x "unknown"?
Hi there, I have an experiment running in PsychoPy to which I connected the Pupil Labs eye tracker via the Network API. This all works. But so far I have been doing the calibration before I started recording, and I haven't used any validation (I read the "best practices" way too late). So now I've added code so that calibration starts after recording starts. Now my two questions: 1) I don't see how I can remotely start the validation. When I look here (https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote), it only shows other commands but none that would start the validation process. Is it possible to do it via remote control? 2) I'm still very new to eye tracking, so it's not really clear to me which values are good and which aren't. What should the validation look like? And how can I improve the accuracy if it's not good enough?
Is there anyone who could help? I found the right accuracy values now (1.5-2.5 degrees for 3D). But I'm still not sure how to run the validation via remote control and how to improve the accuracy if it's not good (above 2.5 degrees)...
Hello, I think I have asked this question before.
Whenever I click the annotation hotkeys on the keyboard, nothing happens. Why?
@user-7daa32 Have you frozen the surface again? Remember that logs are hidden while the surface is frozen.
I need to press the annotation hotkeys while recording. I didn't freeze anything; I am recording.
The window needs to be active in order to register the key.
I want to press S and F on the keyboard while I am recording.
@user-7daa32 are those upper-case? You might need to hit shift+S
Wow! Thank you. Better to use lower case.
Hello! Hope everyone is OK! I have a question which might be stupid, but is there any problem with travelling with the Pupil Core glasses? I mean, do I need some kind of document to show at the airport which declares the type and purpose of this device... I'm a bit afraid of the inspection zone.
Hi, Pupil Core has CE and FCC certification. You should be able to import it without issues.
@user-f195c6 I do not think so. But I will forward this question to our operations team. They know all about customs.
Thank you! Let me know, then
I am being told to please contact info@pupil-labs.com in this regard.
Hello, I have a query about the licensing. Is the Pupil Core software GPL or LGPL?
@user-da7dca then this is probably an intrinsics issue. Please estimate the intrinsics https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
@user-e8e825 LGPL 3 https://github.com/pupil-labs/pupil/blob/master/COPYING.LESSER
https://photos.app.goo.gl/Q1h8eJHpnmy5C1Hy9
I only added the START annotation once, but it appears twice in Player. Why?
Unfortunately, you do not seem to have granted viewing access for people with the link. Therefore, I am currently not able to access the image.
There are overlapping fixations.
Any experience in combining the VR add-on with the Valve Index - is this possible?
Do the right and left eyes have their own individual data?
Pupil data is generated for each eye separately. Gaze data tries to combine it (not always possible). See our documentation for details about the contained data:
- https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format
- https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv
Hi, I am annotating data with the Pupil Player plugin. Pupil Player allows me to annotate at roughly the world camera frequency. However, I am annotating based on eye camera images and want to annotate at a higher frequency. Is it possible to annotate more frequently than the rate at which world camera images are recorded? I assume only eye camera images are updated when no new world camera image exists.
You have not been forgotten. I had to look up how it works. Unfortunately, it is not as easy as starting a calibration. You will have to send a {'subject': 'validation.should_start'} notification. https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py#L56-L62
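In Python, that could look something like this (a sketch following the linked helper; default Pupil Remote port assumed):

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def notify(notification):
    # Send a notification through Pupil Remote and wait for the confirmation
    topic = "notify." + notification["subject"]
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

notify({"subject": "validation.should_start"})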
Thank you! That really helps.
Do you also have any tips on how I can improve the angular accuracy if I get values above 2.5 degrees?
Player will run the playback frame-by-frame for the world video but not for the eye video. Even if there was no world video, the artificial frames would use interpolated timestamps.
I think your best option is to pretend that the eye video is the world video: switch the videos, do the annotations, and switch back. When replacing, make sure to replace all files (*.mp4, *_timestamps.npy, *_lookup.npy, and *.intrinsics).
This will give you eye-frame-accurate annotations, but it requires some manual renaming between sessions.
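If you end up doing this often, a small hypothetical helper could automate the swap (swap_videos and its file list are assumptions based on the files named above; back up the recording before trying this):

import os

def swap_videos(rec_dir, a="world", b="eye0"):
    # Swap all files belonging to two videos inside a recording folder
    for suffix in (".mp4", "_timestamps.npy", "_lookup.npy", ".intrinsics"):
        fa = os.path.join(rec_dir, a + suffix)
        fb = os.path.join(rec_dir, b + suffix)
        tmp = fa + ".tmp"
        os.rename(fa, tmp)
        os.rename(fb, fa)
        os.rename(tmp, fb)

swap_videos("/path/to/recording")  # annotate in Player, then run again to swap back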
Thanks!
Hi, when I connect the device it keeps making the connecting and disconnecting sound, and when I run pupil_capture.exe it detects the eye cameras but not the world camera. Do you have an idea what is happening? Is it a hardware issue? My setup is an Intel RealSense D415 camera mounted on top of the eye tracking device, and I'm using Windows 10. Thanks in advance!
@user-835f47 I am sorry to hear that. Are you using the Realsense plugin for Capture already?
I don't think so. Can you explain how to add this plugin?
@user-835f47 See our documentation on how to add plugins in general: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin
Afterward, download the corresponding RealSense D400 Source plugin for your Pupil Capture version: https://github.com/pupil-labs/pupil-community#plugins
Please read the readme contained with the plugin link. It contains important information.
If you have questions, @user-a98526 is using the plugin as well.
Regarding the disconnect sounds, I would start by using Intel's RealSense Viewer to check if the camera works as expected.
Thanks a lot!
@user-ae4005 This is often due to inaccurate pupil detection. Unfortunately, it is not easy to give improvement tips without a negative example. If you have one, please share it with us and we will be able to give recommendations.
Okay, I don't have any examples right now and it also only happened a few times so far. If it comes up again I'll save the values and will get back to you! Thank you!
Specifically, we would need the recording. But you are already recording all calibrations, so you are on the safe side.
Yes, making the changes right now!
@papr I'm struggling with the automation of the calibration in PsychoPy... The calibration starts, but it skips right to the next routine (without finishing the calibration). I think I just need to keep checking whether the calibration has finished so that I can tell PsychoPy to only continue to the next routine once the calibration is done. Do you know how I can do this?
@user-ae4005 you can wait for the "calibration.successful" or "calibration.failed" notification and continue afterward.
How exactly do I get this notification? Do I first need to send a notification? Or do I just check for "calibration.successful" in PsychoPy?
@user-ae4005 You subscribe to "notify.calibration" similar to this script https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
I followed the code from the link you sent me, but I'm getting stuck with subscribing to the notifications: pupil_remote.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration') fails with the following error: zmq.error.ZMQError: Invalid argument
My "sub" is called "pupil_remote" and the eye tracker connects well and I receive other notifications (using "notify") so I'm not sure why this is not working... Do you have any idea what I'm doing wrong?
@papr Thank you! I'll try it out now
@user-ae4005 How do you initialize pupil_remote? Please be aware that you need an additional SUB socket compared to before, where you only used a REQ socket.
Hmm, okay. I guess that might be the problem. This is how I'm initializing pupil_remote now:
# Setup zmq context and remote helper
ctx = zmq.Context()
pupil_remote = zmq.Socket(ctx, zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = zmq.Socket(ctx, zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:" + str(pub_port))
@papr So that means I need to set up another sub, just like in the example you sent? And not work with pupil_remote
Similar to the pub_socket, you need a sub_socket. See the linked script for details.
So I did that now but I'm still getting the same error.
# open a sub port to listen to pupil
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub_socket = zmq.Socket(ctx, zmq.PUB)
sub_socket.connect("tcp://127.0.0.1:" + str(sub_port))
sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration')
And the error:
sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'notify.calibration')
  File "C:\Program Files\PsychoPy3\lib\site-packages\zmq\sugar\socket.py", line 201, in set_string
    return self.set(option, optval.encode(encoding))
  File "zmq\backend\cython\socket.pyx", line 435, in zmq.backend.cython.socket.Socket.set
  File "zmq\backend\cython\socket.pyx", line 279, in zmq.backend.cython.socket._setsockopt
  File "zmq\backend\cython\checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Invalid argument
Any mistakes you can see here?
Okay, my mistake. I thought I could use the socket I already set up. Thanks again!
You will still need pupil_remote as the entry point.
@user-835f47 I have been using the RealSense plugin for a long time, and currently I am customizing the plugin to obtain depth information from the RealSense. If you have any questions, I think we can communicate with each other.
Thanks! If I have problems or questions, I will ask you.
- sub_socket = zmq.Socket(ctx, zmq.PUB)
+ sub_socket = zmq.Socket(ctx, zmq.SUB)
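Putting it all together, a minimal working sketch of the setup (a REQ socket as the Pupil Remote entry point plus a separate SUB socket; default port assumed):

import msgpack
import zmq

ctx = zmq.Context()

# REQ socket: the Pupil Remote entry point
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask for the SUB port and open a separate SUB socket for notifications
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub_socket = ctx.socket(zmq.SUB)  # SUB, not PUB
sub_socket.connect("tcp://127.0.0.1:" + sub_port)
sub_socket.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration")

# Block until the calibration finishes, one way or the other
while True:
    topic = sub_socket.recv_string()
    payload = msgpack.unpackb(sub_socket.recv(), raw=False)
    if topic.startswith(("notify.calibration.successful", "notify.calibration.failed")):
        break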
Oh no... Sorry. And thanks!
Is Discord the best place to ask for help, or is there a separate support email? vr-ar seems a bit quiet.
Hi, I am new to development and have questions about how the program runs. I am looking at the source files. In my understanding, in pupil-master/pupil_src/main.py, everything starts from launcher(). But after that, I don't get how the Pupil window (World window) is opened. In the launch() function, I don't think there is any GUI code, so I am wondering how the World window gets opened.
hello everyone, I have a weird issue with the calibration step... there seems to be a huge disparity between horizontal gaze tracking (which works really well after calibration) and vertical gaze tracking (which doesn't seem to sense vertical movement at all). I already played with the outlier threshold to check whether the vertical calibration dots are recognised, which seems to be the case... What would be your suggestion to solve this problem? Thanks in advance!
@user-da7dca Did you change the lens by any chance?
Not really... also, pupil detection is really stable in all eye rotation scenarios.
@user-83ea3f The launcher launches the launchables in the respective folder (which ones depend on the application).
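In case it helps, here is a simplified, self-contained analogy of that pattern (not Pupil's actual code): launcher() itself has no GUI; it spawns one process per launchable, and each launchable opens its own window.

from multiprocessing import Process

def world(name):
    # In Pupil, launchables/world.py builds the GLFW window and runs the
    # plugin event loop here; this stub only stands in for that.
    print(name, ": open window, run event loop")

if __name__ == "__main__":
    p = Process(target=world, args=("world",))
    p.start()
    p.join()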
I use a custom solution for the world cam (RealSense 305); however, I also tried with an off-the-shelf Logitech webcam, which suffers from the same error. Also, one weird thing: when using the 2D model instead of 3D, everything seems to work fine.
Will try tomorrow. However, I already read in a calibration from MATLAB, so I'm not sure if this will fix it.
@user-da7dca I do not see how that relates to each other. Could you provide a bit more context?
I did a full calibration of the RealSense via MATLAB and exported the distortion parameters to Pupil's calibration settings file. That also worked, since I could visualize the undistorted image in Pupil Capture.
Ah, thanks for the clarification. Makes sense.
Why is the start timestamp in the fixation file different from that in the annotation file?
Could you update this link? https://discord.com/channels/285728493612957698/285728493612957698/481454347687952385
Because they are based on different time sources: annotations use world timestamps, fixations use gaze timestamps.
Hello,
That also means that there will be two calculated results, one for each eye?
The difference is minimal
Hi, what type of data is produced when we use Pupil Invisible for shopper research? Do we get heat maps?
It's not clear what data will be available if we buy and use Pupil Invisible.
Hi Zak, please see my response from [email removed]
Because I don't know where to modify the code to generate a depth-image topic, I modified realsense2_backend.py to send the depth_image (not the depth_frame). I used sockets and UDP for transmission. After testing, it does not lose packets when sending on a single computer. In addition, a release/start command is added to ensure integrity at the end.
The depth image is compressed and processed in other ways to make the transmission fast enough.
The accuracy of the received depth image remains basically unchanged (100 randomly sampled points show no error).
This is my modified backend.py.
This is the receiving .py for testing.
This is the correct file.
You must resize the depth image to 1280*720 before using it. If there is a sending error, you can modify the compression part of the splitting code (img_split).
These are the results.
I tested it on Pupil 2.1.
@user-a98526 Feel free to provide the code via a repository and link it in https://github.com/pupil-labs/pupil-community with a description
It is a great honor to submit my code to the repository. I uploaded the wrong receiving code earlier; this is the final, correct receiving code.
Hi there. I'm new to the Pupil Labs software and I'm currently setting up an external eye camera, as there is a slight hardware problem preventing me from using the Pupil Labs glasses camera. I am interested in getting the 3d eye normal x,y,z data from the post-hoc algorithm in Pupil Player, and I have two questions.
1) Is there information on lens distortion correction for the Pupil Labs eye camera that should be considered when passing videos from an external camera through Pupil Player for pupil detection?
2) We record the eye in the dark, so the pupil is quite big. I'm trying to get a feel for how much the 3d model algorithm struggles when the pupil is obscured by the top eyelid. I know we try to get the user to open their eyes as wide as possible, but when the pupil is occluded by, say, 1/5, is that going to be a problem? Or do other factors make it difficult to say? Would adjusting the ROI to exclude the obscured region help?
@papr Hi, when I am re-estimating the camera intrinsics, although I have tried many times, I find it difficult to get an undistorted image when enabling the "show undistorted image" button in the intrinsics estimation plugin menu. I want to know whether there are some tricks to re-estimating the camera intrinsics; maybe my procedure is wrong. Besides, I want to know what "move your headset at different angles" means in https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation. Sometimes I try to keep my head static when re-estimating the camera intrinsics.
Hello, I've noticed the pupil data for each eye in a binocular recording doesn't always have the same timestamps. I've been using pd.merge_asof(left_eye, right_eye, left_index=True, right_index=True, suffixes=('_l','_r'), direction='nearest') to keep binocular data in a single dataframe. Is this a valid approach, or would you recommend keeping the data for each eye separate, with its original index?
@user-430fc1 depends on what your index is. If it is timestamps it is likely correct. If it is just the sequence number it is definitely not correct, as the eye cameras operate independently of each other. I will have to look up that function in detail though to confirm the above.
@papr I've been using this with pupil_timestamps as the index. It seems to work OK. The 'direction' argument has 'backward', 'forward' and 'nearest' options. I have assumed that 'nearest' makes the most sense, and that it should not matter which eye is on the left.
What happens if multiple timestamps are matched as nearest?
Hmmm, good question. The resulting index always seems to be without duplicates
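A toy example makes the matching behaviour visible (hypothetical numbers; each stream is indexed by its own pupil timestamps):

import pandas as pd

left_eye = pd.DataFrame({"diameter": [3.10, 3.15, 3.20]}, index=[0.000, 0.008, 0.016])
right_eye = pd.DataFrame({"diameter": [3.05, 3.25]}, index=[0.004, 0.018])

merged = pd.merge_asof(left_eye, right_eye, left_index=True, right_index=True,
                       suffixes=("_l", "_r"), direction="nearest")
print(merged)
# Each left row is matched to exactly one nearest right row, so the result keeps
# the left index without duplicates -- but a right row may be reused for several
# left rows (here, 3.05 is matched to both 0.000 and 0.008).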
So my pyglui has started to freeze again. I have been at it for hours, starting and re-starting, but it freezes on the surface tracker plugin and mid-way through fixation detector processing.
Kindly suggest ways to troubleshoot. I pulled my tags correctly.
@user-908b50 how do you know it is pyglui freezing? Did you inspect the logs? Could you share them with us?
This is the terminal output. Will share the log too!
@user-908b50 There is no sign of a Python exception, i.e. it is likely the legacy marker detection setup issue again. Make sure the cv2 module you are using has been compiled with TBB=ON. The easiest way to do that is to compile it from source and symlink it:
git clone https://github.com/opencv/opencv
cd opencv
# checkout a stable version, e.g. 4.5.0
git checkout 4.5.0
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON ..
make -j2
sudo make install
sudo ldconfig
Can't access my log atm from the e-drive. But okay, I can re-install! I had to uninstall the above and re-install the old way because of cv2 import errors.
The old way being installing opencv via apt-get or pip install opencv-python? Unfortunately, for Ubuntu, these install methods do not provide TBB support.
No, the instructions for Ubuntu 18 with sudo apt lib. Yeah, unfortunate! Fingers crossed!!
Hi @papr, I want to know how to get gaze information in the RealSense or other plugins. I don't know if one plugin can use the results of other plugins.
Hi. I recorded 400x400 px images with the eye cameras, and the pupil identification results look better. However, I get a "WARNING No camera intrinsics available for camera eye1 at resolution (400, 400)!" message. So my questions are: A. Do I get any degradation (rather than improvement) of gaze accuracy using the higher resolution? B. If so, why is there no default intrinsic for 400x400 px?
Does anyone know where the user-specific plugin directory might fall on Windows 10?
Hi Kevin. You can find detailed information on where to find the plugin directory here: https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin As an example, my Capture plugins are located here: "C:\Users\Neil\pupil_capture_settings\plugins".
Hi, thank you! I did end up locating the folders. Regarding plugins that extend PupilDetectionPlugin, how do I actually have the data collected in the separate "Eye" windows in, say, Pupil Player use that plugin? I've put my custom pupil detection plugin in all 3 plugin folders (service, player, capture), and while I can enable it in the main "World" window, I do not see any effect in the "Eye" windows.
@user-3cff0d were you able to run the example pupil detector plugins, e.g. https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c ? Note that you will need Pupil v2.6 or higher to run these custom pupil detectors.
I am currently running from source on the latest version pushed to master. I'll try the example plugins, thanks!
Hi all, Capture 2.6.19 will not open (here is the log file). I am starting it on a MacBook with Big Sur 11.0.1.
There are several issues preventing the Pupil desktop apps from running on Big Sur that relate to PyOpenGL. The fixes depend on Python 3.8.7, which is not released yet. As soon as Python 3.8.7 is officially released, we will begin the update process.
Hi, when I run Pupil Capture, the pupil detection and world cam do not work, whereas detect eye0 and detect eye1 work correctly.
Hi @user-e69bd5, please contact info@pupil-labs.com in this regard
Merry Christmas, everyone! @papr I have a question for you: in my app I get the video frame and message via msgpack, and there is some kind of delay in displaying the norm_pos gaze, while the video frame responds in realtime. I don't think it is a problem with converting the gaze position to pixels in the app. Is there an algorithm which can help me get and display a good realtime gaze pointer, something like in Pupil Capture?
For getting the gaze point position, should I just multiply norm_pos[0] by the frame width and norm_pos[1] by the frame height?
Thanks, I have installed Python 3.9.1 via brew. Does that have something to do with it? Should I downgrade to a previous version?
Hi Chris. Downgrading in this instance will not help. We will release an update in due course, but this is dependent on the fact that Python 3.8.7 is not yet fully supported on macOS 11 Big Sur
Hello, precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation. What does "successive samples during a fixation" mean here? Does it mean the visual angle between the estimated gaze and the actual targets? So RMS is the square root of the mean square of the accuracy?
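For what it's worth, "successive samples" in that definition means consecutive gaze estimates recorded during the fixation, not the gaze-to-target distances (those measure accuracy, a separate quantity). A sketch of the quoted formula, with illustrative numbers:

import numpy as np

def rms_precision(successive_angular_distances_deg):
    # RMS of the angular distances (deg) between consecutive gaze samples
    theta = np.asarray(successive_angular_distances_deg)
    return np.sqrt(np.mean(theta ** 2))

# four consecutive samples yield three successive distances
print(rms_precision([0.10, 0.05, 0.08]))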
Hi @papr, after re-estimating the camera intrinsics, I checked the current intrinsics by enabling the "show undistorted image" button in the intrinsics estimation plugin menu, but I get a picture like this. I want to know whether it makes a difference to getting the right positions of my gaze. Besides, how can I get the right intrinsics?
Hi everyone, I am going through the data structure in my recordings, and I have some questions, all of which might be so basic, but I'd appreciate some clarification about them. Thank you.
I would appreciate some guidance about my questions
pixel_x = int(norm_pos_x * pixel_width)
pixel_y = int((1.0 - norm_pos_y) * pixel_height)
Are norm_pos_x and norm_pos_y in the norm_pos array here?
hello, I have a problem installing 'pyglui' on Windows 10, Python 3.6 (32-bit)
ERROR: Command errored out with exit status 1:
command: 'C:\Anaconda3\envs\pupil32\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\IVCLAB\desktop\pyglui\setup.py'"'"'; __file__='"'"'C:\Users\IVCLAB\desktop\pyglui\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: C:\Users\IVCLAB\desktop\pyglui\
Is there any solution for this?
Please see reply from [email removed] in this regard
or here?
Yes, the second one is for the gaze data. That gives you the position with respect to the world view.
ok, I will try to use it
Is the frequency of sending gaze.3d.01. 30 fps?
It doesn't work correctly; is there any method/algorithm to display the correct gaze position (like in Pupil Capture) in realtime?
Hi @nmt, I want to know how to get gaze and fixation information in the RealSense or other plugins. I don't know if one plugin can use the results of other plugins. In my plugin, I use gaze = events["gaze"], but it doesn't seem to work.
Hi @user-a98526. You should be able to access gaze and fixation data (if the online Fixation Filter plugin is running) in your custom plugin with the recent_events method, e.g.:
class access_pupil_data(Plugin):
    ...
    def recent_events(self, events):
        fixation_pts = [
            pt["norm_pos"]
            for pt in events.get("fixations", [])
        ]
        gaze_pts = [
            pt["norm_pos"]
            for pt in events.get("gaze", [])
        ]
Hi there, I have installed an Intel RealSense D400 series camera. But for some reason I get "World: Calibration requires world capture video input." I've checked through the Intel RealSense Viewer, and the camera works. Would you let me know the process of using the world view camera? Thanks in advance.
@user-83ea3f You'd better use a Pupil version 1.x, which provides the RealSense support plugin.
https://github.com/pupil-labs/pupil-community, here you can find some useful information
Hey @user-98789c. Apologies for the delay over the xmas period.
What is the relation between relative and absolute time range? Look at this discord message for an overview of relative and absolute time ranges. https://discord.com/channels/285728493612957698/285728493612957698/729648460449579028
How long is each world frame? There is a small variance for the duration of each world frame. However, the average over all frames should be equal to ~1 / FPS. This will differ as the FPS can be set by the user. E.g. if the world camera FPS is 50Hz, each frame will be ~0.02 s.
How can gaze point coordinates be negative? Should they not always be positive? Is it in accordance to some surface? Which gaze points are you referring to? For example, norm_pos_x/y coordinates will not be negative as they are defined as normalised coordinates in the world camera coordinate system, where (0,0) is the bottom left corner, (1,1) is the top right corner, and (0.5, 0.5) is the image centre.
What is base-data in fixation.csv? What can be extracted from it? This is the "timestamp-id timestamp-id ..." of pupil data that the gaze position is computed from.
How does the 3d c++ method work? Are there other options to choose for the pupil size calculation methods used by Core? Where can I read about them? The 3d C++ method is a novel approach to single-camera, glint-free 3D eye model fitting including corneal refraction. You can read about it here: https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf
For pupil size estimation, the other option provided by Pupil Capture is 2d Pupil Size measured in pixels. This, obviously, is not as robust as the 3d eye model.
About question number 6: in world_timestamps.csv we have one column for timestamps in seconds and another column for pts. I'm guessing it stands for pupil timestamp? And what is their relation?
How are x and y positions in eye image frames normalized? See here for details of the coordinate systems of each camera: https://docs.pupil-labs.com/core/terminology/#coordinate-system. The x and y positions in the eye image frames are normalized using the height and width of each respective frame.
What information can be extracted from ellipse-..., sphere-..., circle-3d-..., projected-sphere-...? You can see explanations for all of these data here https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
What information is included in the .csv file for each surface? What do img-to-surf-trans, surf-to-img-trans, etc. mean? You can read about the surface tracker export files and what they contain here https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
What are tokens? You likely will not need to interact with these files. But here is a brief description: available data is announced between different plugins. Data (e.g. pupil positions or gaze) is produced by some plugins and consumed by others. Data consumers need to know if data producers provide new data or if the data producer changed (e.g. the user switched from Pupil From Recording to Offline Pupil Detection). All announced data has a unique token, so Listeners know if they receive multiple announcements for the same data and can ignore redundant announcements.
In an experiment where we want to investigate people's confidence in their decision making using their pupil size (based on some hypotheses about an existing relationship), which exact .csv files (or other types of files) should the information be extracted from? Look under the heading "diameter_3d" in pupil_positions.csv
Are there ready python scripts available, only for the purpose of demonstrating what can be done with the data? With the wealth of raw data that Pupil Core exposes to the user, there are many options for analysis with Python Scripts. It would be worth checking out the Pupil Community repository for ideas: https://github.com/pupil-labs/pupil-community
@user-98789c I will need to liaise with the team about question 7 - confidence
@nmt Is there somewhere code/an algorithm which allows displaying the correct gaze position with video from the world camera? I wrote some code in C#; I can connect with Pupil Capture through ZeroMQ, but I cannot get the same gaze position in real time as in Pupil Capture. I'm reading the norm_pos array in the gaze.3d.01. message.
Hi @user-467cb9. The method that @user-d8853d described should work. It essentially describes the denormalize function. What are you drawing/rendering the gaze point with?
I'm using EmguCV (OpenCV for C# library)
the circle() method on a Mat object
I can see video from the main camera in real time
but my gaze pointer moves slowly and doesn't have the same position as in Pupil Capture...
Does Pupil Capture send the gaze position in real time?
With a frequency of at least 30 fps?
@user-467cb9 Depending on your settings, gaze positions can be available at a higher FPS than the world camera, so you can get multiple gaze points for every world camera frame. Because your gaze point is moving slowly, it could be that you aren't drawing all of the gaze points available at each world index, but rather spreading them out over the world camera frames.
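In Python, the idea looks roughly like this (names are illustrative; the same pattern translates to C#/EmguCV):

import collections

import cv2

gaze_buffer = collections.deque(maxlen=120)  # recent gaze points

def on_gaze(datum):
    # Call this for every gaze datum received from the SUB socket,
    # not just once per world frame
    gaze_buffer.append((datum["norm_pos"], datum["confidence"]))

def draw_gaze(frame):
    height, width = frame.shape[:2]
    for (nx, ny), conf in gaze_buffer:
        if conf < 0.6:  # skip low-confidence estimates
            continue
        x, y = int(nx * width), int((1.0 - ny) * height)  # flip y: origin is bottom-left
        cv2.circle(frame, (x, y), 10, (0, 255, 0), 2)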
@user-98789c 7. How is confidence calculated? The confidence value is a weighted average of the 2d confidence and a confidence value that measures how well the model fits the current pupil observation. 2d confidence is measured from the number of pixels on the circumference of the fitted ellipse.