Hi @papr, can I use the RealSense Video Backend as a custom plugin in the latest Pupil Capture?
@user-7daa32 The idea is to set up an external coordinate system with markers in which the scene camera can be tracked. This allows you e.g. to map gaze from multiple participants into a common 3d space. Check out our tutorial on using the plugin https://www.youtube.com/watch?v=9x9h98tywFI @papr thanks
Hi, is there a possibility to change the fixation point visualization (size, color) for live-viewing in pupil capture or is this just possible in pupil player?
This is not possible unless you create a custom plugin by copying the existing code and modifying the plugin name and visualization code
Hi, I have a few questions about the pupil detection. When I tested the pupil detection with 'algorithm' mode, I found that the red circle with red dot is not shown even though the pupil area was well distinguished by the blue area. So, my questions are: 1) Why is the pupil not detected (red circle with centered red dot) even though the pupil area is well covered (blue area)? 2) Is there any tip for detecting pupils easily? - I already changed several parameters such as exposure time, pupil size, and intensity range, but the pupil detection is still not stable. 3) Is there any way to modify the pupil detector at the source code level? - I think the confidence threshold that decides whether the pupil area counts as a pupil detection is a little bit sensitive, so if I could change that, it would be another way to solve this issue. Thanks in advance 😄
Hi @user-48e99b, can you share a screenshot of your eye window when the pupil is not detected? Maybe one with the algorithm view and one without?
@user-c5fb8b
Here they are. I was wondering why the pupil (red dot) couldn't be detected in the left case : )
@user-48e99b The pupil detection algorithm assumes the pupil to be the only dark area in the image. It tries to find and connect the edges of this dark area (cyan colored lines). Afterward, the algorithm tries to fit an ellipse to these edges.
As you can see, you have a very dark shadow in your images that overlaps with the pupil. You can also see how the edges are nicely fit on the left side of the pupil but the right side has a hole. This is why the detection fails.
@papr Thank you for the detailed explanation. So, both the light source position and the pupil's shape (clear circle) will be critical for detecting pupils. I will keep that in mind when I use it! Thank you again 😄
@papr Ah, may I ask you one more question? So, if I fail to detect pupils, is paying attention to 1) the light source position and 2) the pupil's shape enough? Is there anything else I should take care of? Thank you!
@user-48e99b Your pupils seem very dilated in these pictures. The algorithm assumes a minimum and maximum pupil size (red circles bottom left of algorithm view). I would increase the maximum pupil size value in the settings to avoid false negative detections in cases where the pupil is very dilated.
@papr Okay, I will try it. If I face similar issues in the future, I will write in this chat. Thank you for your kind support. : )
Hi @papr, I have a question about the pupil detection again. When I updated the pupil-labs source code from GitHub, I found that there is a light-blue circle and dot which I didn't see before. Also, when I try to detect the pupils, the light-blue circle and dot are well generated while the red ones are not. So my questions are... Q1) What are the roles of the light-blue circle and dot? Q2) What are the differences between the red circle/dot and the light-blue circle/dot? Q3) Is it possible to utilize the light-blue circles and dots for fixation detection? (They look better than the red ones.) Q4) In the case of bright images, there are yellow areas. What do they mean? For a detailed understanding of my question, please check the images below. Thanks in advance! 😄
Hi @papr, I want to develop a plugin for detecting the center of gravity of an object. My problem is: I have no other way to directly obtain the world camera frame besides using the network plugin.
hey friends! I have a problem.
The results event is not coming...
I don't have a VR device. Is that the reason?
Can anybody help with using Pupil with Unity without a VR device?
dude, i just found out your github and this is amazing, just 1 question before i jump in, can this work with any hardware? for example a raspberry pi with its camera?
@papr Hello, I've been getting this error when grabbing frames from the world camera and I was wondering if you had any insights into what might be causing it...
Traceback (most recent call last):
  File "C:\Users\engs2242\Anaconda3\envs\pupil\lib\threading.py", line 926, in _bootstrap_inner
    self.run()
  File "C:\Users\engs2242\Anaconda3\envs\pupil\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\engs2242\Documents\cvd_pupillometry\code\python\pyplr\pl_helpers.py", line 130, in detect_light_onset
    topic, msg = recv_from_subscriber(subscriber)
  File "C:\Users\engs2242\Documents\cvd_pupillometry\code\python\pyplr\pl_helpers.py", line 84, in recv_from_subscriber
    payload = msgpack.unpackb(subscriber.recv(), encoding='utf-8')
  File "C:\Users\engs2242\Anaconda3\envs\pupil\lib\site-packages\msgpack\fallback.py", line 125, in unpackb
    raise ExtraData(ret, unpacker._get_extradata())
msgpack.exceptions.ExtraData: unpack(b) received extra data.
I'm subscribed to 'frame.world', using msgpack=0.5.6 and, have the following camera settings: resolution=(640, 480), frame_rate=120, auto_exposure_mode=manual, absolute_exposure_time=62, auto_exposure_priority=disabled.
Could you please share the original code? It looks like you are calling the msgpack unpack function on non-msgpack data
@papr It happens when I call the function detect_light_onset(...) on line 75 of plr_protocol_1_experiment_script.py. I should add that detect_light_onset(...) is calling the recv_from_sub(...) function, which appears to be where the issue occurs.
Thank you. I will have a look at it Monday morning and come back to you
Hi there again. I'm continuing my journey with the eye tracker after I managed to get the demos from github to run. Thanks for the support so far 🙂 I now need to connect the eye tracker to a visuomotor rotation experiment I set up in psychopy.. I couldn't find much info on this online so I thought I'll ask here. Is it possible to send and receive flags from my experiment in psychopy using the coder window or do I have to run psychopy from pycharm or anaconda?
Hi there. I want to read the 'world.mp4' using Matlab, but VideoReader in Matlab doesn't work for this video. After re-encoding the video to mp4, Matlab works, but the frame data does not coincide with the original video. And I found that the format of the video is JPEG (I guess it's motion JPEG) and the codec ID is mp4v-6C. However, I could not find any solutions on Mathworks. If you have tried to read this video with Matlab before, could you tell me how?
Hello @papr , Can you tell me the FOV of the 200Hz Pupil core eye camera? Thank you in advance.
@user-499cde It is 66 degrees
Thank you very much for the quick reply
@user-ae4005 Hi. You are correct that you need some extra code in the Coder window. Check out our network API documentation for details https://docs.pupil-labs.com/developer/core/network-api/
@user-20b83c You are correct that Pupil Capture will record video as mjpeg by default. I have never tried to decode mjpeg videos with Matlab so I am not sure what solves the issue. In Python, we use a wrapper around ffmpeg (and its underlying libraries) to decode the video. Maybe there is something similar for Matlab.
@user-430fc1 Running detect_light_onset
in a background thread multiple times with the same subscriber creates a race condition. Either
- only start one background thread
- let them use their own subscribers
- use a Lock to ensure that the same zmq message is not processed by different threads at the same time
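For reference, a rough sketch of the Lock option (the SUB port and topic are placeholders for your actual setup, and this is not your pl_helpers code). The ExtraData error happens because two threads can interleave their recv calls on the same socket, so one thread unpacks a payload frame that belongs to another thread's message:
```python
import threading
import zmq
import msgpack

ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:50114")  # placeholder: use the SUB port reported by Pupil Remote
subscriber.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

recv_lock = threading.Lock()

def recv_from_subscriber(sub):
    # hold the lock for the entire multi-frame message so another thread
    # cannot receive one of its frames in between
    with recv_lock:
        topic = sub.recv_string()
        payload = msgpack.unpackb(sub.recv(), raw=False)
        extra_frames = []
        while sub.get(zmq.RCVMORE):
            # frame topics carry an additional frame with the raw image buffer
            extra_frames.append(sub.recv())
    return topic, payload, extra_frames
```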
@papr thanks so much, will try this out
@papr thanks 😄 i will try this out
@papr Thanks so much! I saw this, I just wasn't sure whether I can do it via the coder window in psychopy or I need to do it via pycharm... Great to hear that I can do it all in psychopy 🙂
@user-ae4005 This does not matter to be honest. The Builder window in Psychopy just generates python code files in the background. These are just text files which can be edited in any code editor. Using Pycharm gives you the advantage of good auto-completion and some other features. But in the end, it is just a matter of personal preference.
Okay, good to know! It seems easier for me to just keep everything in psychopy so I won't have to also deal with pycharm. Thanks again 🙂
Hi! I'm having trouble interpreting the data from Pupil Player. First, I'm sorry if my question is trivial or silly. I do understand that there is a difference between the relative time range and the absolute time range. Could someone explain to me what process allows us to go from the relative time range to the absolute one? Thank you very much!
Hi @user-0fcca1, Pupil Capture uses a clock with an arbitrary starting point for recording timestamps. Your first timestamp could be any number really, but the following timestamps will be consistent. So e.g. for a 20 second recording, your timestamps might range from 185724 to 185744; this would be the absolute time range. For relative timing we always subtract the first timestamp, so the relative range would be 0 to 20. "Relative" basically means "relative to the start of the recording".
The absolute timestamps are mostly used to synchronize the different data streams, e.g. between eye and world video. For interpreting a recording the relative notation is usually more meaningful.
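For example, converting absolute timestamps to relative ones is just a subtraction (a minimal sketch, assuming the recording's world_timestamps.npy file):
```python
import numpy as np

timestamps = np.load("world_timestamps.npy")  # absolute pupil timestamps
relative = timestamps - timestamps[0]         # seconds since the start of the recording
```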
@user-c5fb8b got it ! Thank you so much !
@papr Hello! I am wondering if I could ask you for the code that the developer used to enable the annotation function in the Pupil Core? Thanks!
Hi @user-c629df, Pupil's source code is fully available on GitHub, here's the code for the annotation plugins: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/annotations.py Diving head first into this might be a bit confusing initially. If you have any questions, we're happy to help. Please post code-related questions in the 💻 software-dev channel afterwards 🙂
@user-c629df In case you were looking for the network API example script that sends annotations to Capture, you can find it here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
In line 58 you can see the notification that is required to start the annotation plugin remotely.
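For reference, this is roughly how such a notification is sent over Pupil Remote (a sketch following the helper script; default port assumed):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

def notify(notification):
    """Send a notification dict to Pupil Capture via Pupil Remote."""
    topic = "notify." + notification["subject"]
    payload = msgpack.packb(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# start the annotation plugin remotely (plugin name as used in the helper script)
notify({"subject": "start_plugin", "name": "Annotation_Capture", "args": {}})
```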
Hi, since updating to the latest version of Pupil Capture I seem to be getting a lot of NaNs in my pupil_3d data, despite everything looking OK during recording. Also, just wondering what the blue circle represents as I don't recall that being there in the older version. Thanks
@user-430fc1 Has this data been filtered by confidence?
@user-430fc1 The blue ellipse is the result of the 2D pupil detector. With Pupil v2 we are running both detectors in parallel, which allows for more custom use cases if you want to combine data from both detectors.
@papr the yellow trace on the top panel is the raw diameter_3d, with lots of NaNs. I've noticed it happen a few times now. What do you mean 'filtered by confidence'? I apply a 3rd-order butterworth with 4hz cutoff after interpolating blinks.
@user-430fc1 Pupil exports all data, even low-confidence data. The confidence value is an essential measure of how sure the detectors are about their output. Low confidence normally means bad or even no pupil detection on a frame, so you should usually filter your data for confidence in almost all use cases. I would try discarding data with confidence < 0.8 as a start. The optimal confidence filter will be different for each participant/trial as the experimental conditions will determine the pupil detection difficulty and thus ultimately the quality and confidence.
However, in most cases you should be able to have a fixed confidence filter for all of your analyses.
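As an illustration, a minimal pandas sketch of such a filter on an exported pupil_positions.csv (path depends on your export folder):
```python
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
high_conf = df[df["confidence"] >= 0.8]  # discard low-confidence samples
```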
@user-c5fb8b Thank you - I'll add the confidence filtering to my workflow. By the way, I've just noticed my exported pupil_positions.csv files look different, and that pupil_3d data is only missing when 'method'=='2d c++', which I didn't get in the previous software. Could this have something to do with it?
@user-430fc1 yes. As stated above, Pupil v2 will run both detectors. So you will have two rows per timestamp in the csv. One with 2d c++
and one with 3d c++
. The 3D detector also produces a "2D result", so it contains the same keys as the 2D detector plus additional ones. However, the 2D part of the 3D result will not be identical to the 2D result. Does that make sense?
Specifically, the 2d ellipse of the 3d result is the backprojection of the 3d circle into the image. This is used to draw the red circle in the eye window.
@user-430fc1 this also means that you probably want to filter for either only 2d c++
or 3d c++
, otherwise you end up with multiple data points for a single timestamp. If you want to combine information from both detectors, you would probably need to aggregate the 2 entries for each timestamp. Again, all of this depends on what you want to achieve.
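For example, a short sketch of separating the two detector outputs via the method column (again assuming a standard pupil_positions.csv export):
```python
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
df_3d = df[df["method"] == "3d c++"]  # one row per timestamp from the 3D detector
df_2d = df[df["method"] == "2d c++"]  # one row per timestamp from the 2D detector
```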
@user-c5fb8b Yes, makes sense. Thanks!
Hi, I returned to some data collection with the latest version of Capture (nice!). Unfortunately, I'm getting consistently low confidence from the right eye camera, in spite of having what appears to be a quite well-trained 3D model. Left eye works well with ~ same numbers for 3D model. The other thing I noticed is that the right camera image is a little blurry. These are the newer small cameras that don't have an obvious way to focus them. Any suggestions welcome!
Hello, I have a very basic question.. I am trying to sync my world video recording with the fixations and saccade pupil data collected and in doing so am looking for the start date and start time information to put into csv files. I only have the world timestamp recording csv file. Is there a way to retrieve the start date & start time information? Thanks in advance!
Hi, I'm wondering what the differences are between the blue circle and red circle pupil detection results in the latest pupil-labs software. Also, is it possible to utilize the blue one for fixation detection instead of the red one? Moreover, where can I find the confidence threshold to filter out unreliable pupil detection results? Thanks in advance!
@user-48e99b Here's some information: https://github.com/pupil-labs/pupil/releases/tag/v2.0#user-content-downloads "Parallel Pupil Detector Visualization - #1873 We have changed the color of the 2D ellipse visualization in all eye videos to blue in order to differentiate between 2D and 3D detection. You will now always see 3 ellipses on the eyes when the visualization is enabled:
Color Description
blue: 2D pupil detection result
red: 3D pupil detection result
green: 3D eye model"
@user-f0edd6 Thank you for letting me know! 😄
Hi, I wonder how the 3D pupils are detected from the 2D pupils. Could you help me understand the process in simple terms?
Hi @papr, I have a few questions to ask for your help. 1. Can I stick an AprilTag on a curved surface (e.g. a cup) and detect it? 2. Does the detection of AprilTags require a specific image size and resolution?
is it possible to use this SDK with any camera?
@papr, and I tried to put the AprilTag on the cup, but I couldn't detect it.
@user-c5fb8b @papr Awesome! Thanks for the links!
Hi @user-0f5d40, the fixation export should contain all necessary information to match the data with world frames: https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export You get start and end world index as well as a pupil timestamp. The timestamp is a gaze timestamp, so it originates from the eye video and there might not be a world frame with the corresponding timestamp. Pupil already does the matching and thus provides you with the world indices for the fixation. Does this answer your question? I'm confused by you mentioning saccades, as we don't compute saccades in Pupil. Are you maybe using a custom way of calculating fixations and saccades?
@user-48e99b as @user-f0edd6 already mentioned, red and blue ellipse are the visualization of 3D and 2D pupil detector. Please note that the fixation detector does not work on Pupil data, but on gaze data. You will only get either 2D gaze or 3D gaze, depending on which gaze mapper you have selected, this way you can control which data will influence the fixations. Please be aware of the general trade-offs between 2D and 3D gaze mapping: https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline
@user-48e99b the simple explanation of the 3D pupil detector is: it keeps a 3D model of the eye-ball, which it continually updates and optimizes based on the incoming 2D pupil data. The 3D pupil of the 3D eyeball is then back-projected into 2D, which is what you see as the red ellipse. While this sometimes works less accurately than the direct 2D detector, it allows compensating for movement of the headset (slippage) after having already calibrated, when using the 3D gaze mapper.
@user-a98526 the surface tracker in Pupil can only be used to track planar (flat) surfaces. The markers also have to be planar. The image quality seems kind of bad in the image you provided. The contrast between the black and white parts of the marker is quite low. It looks like your print of the marker is not high quality, do you have enough black ink in the printer?
@user-a98526 regarding curved surfaces again: even though Pupil only tracks planar surfaces, you can probably still track something like the cup, just as you did with the marker. You have to be aware though that the exact gaze mapping to the surface will then include an error caused by the curvature. Also the tracking is most stable if you add markers to all 4 corners of your surface, which is not possible in this scenario. You will have to experiment whether this setup will provide you with good enough data.
@user-0d7a46 Pupil is built primarily for our own eye-trackers (Pupil Core and Pupil Invisible): https://pupil-labs.com/products/ It is possible to purchase the Pupil Core headset in a configuration that allows using a custom world camera via usb-c. Our video backend supports most (!) UVC compatible cameras, I can provide more details if needed. Other than that there is also the option to build a DIY headset for people with solid technical knowledge: https://docs.pupil-labs.com/core/diy/
thanks, i wanted to see if i can implement it in an AR HMD since glasses are not an option, i will review your links. Thanks for answering. @user-c5fb8b
@user-0d7a46 we also offer a few VR/AR add-ons: https://pupil-labs.com/products/vr-ar/
i was looking into that but the problem is that the design from the ones i use it not compatible
its the opensource NorthStar
ok, I don't think we have any experience with the NorthStar
@user-0d7a46 I describe the technical requirements for cameras here https://discordapp.com/channels/285728493612957698/285728493612957698/725357994379968589
thanks for the info, ill take a look
@user-c5fb8b Thank you for helping me.
@user-a98526 also I notice that you cut out the marker right at the black border. The white area around the markers is actually an important part for recognizing the marker and should be roughly the width of 2 of the marker "pixels". If you took the apriltags from our docs, you will notice that we included cutting lines. Please make sure to keep a white border around the markers with approximately that size: https://docs.pupil-labs.com/assets/img/apriltags_tag36h11_0-23.37196546.jpg
Hello, I'm having a problem installing the device on my laptop (Windows). When I launch Pupil Capture with the device, no cameras are detected. When I try to follow the procedure indicated on the documentation (delete devices, run as administrator, etc) it never works. Do you know how to solve the problem please ?
Hi @user-c5fb8b, thanks for your response. Yes, I'm trying to develop an automated script to identify the fixations for now (I will tackle saccades later) in my world video recording by mapping the gaze data corresponding to each frame to a reference image coordinate system. To frame my question better, does the info.csv file contain a start date, start time, and world camera resolution? I was not the person who recorded the data, so I am trying to see if I am missing some information from the gaze data, timestamp, and fixation csv files I was given. Thanks in advance!
can I remotely control start and stop of recording in core?
can I put any event code along with eye movement data in core?
Yes. Check out our network API docs https://docs.pupil-labs.com/developer/core/network-api/
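A minimal sketch of remote recording control via Pupil Remote (default address assumed); event codes can be added as annotations, as covered in the linked docs:
```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

pupil_remote.send_string("R")   # start recording
print(pupil_remote.recv_string())

# ... run your experiment, send annotations as event codes if needed ...

pupil_remote.send_string("r")   # stop recording
print(pupil_remote.recv_string())
```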
@user-c5fb8b I checked your comments. Thank you for the detailed explanation!
Hi @user-39c8c4, which version of Pupil and Windows are you using? Also which headset do you use? Can you see the cameras listed as libUSBK devices in the device manager after having launched Pupil Capture once?
@user-0f5d40 so you don't have the entire recording available, but just 3 files? Can you list the exact names of all the files that you have there? I'm trying to figure out whether you have the raw recording data or an already processed export from Pupil Player.
@user-0f5d40 and specifically to your questions: yes, the info.csv contains a start time, both a pupil timestamp (arbitrary clock for generating timestamps) and a system timestamp in unix epoch format. However, the world camera resolution is not part of the info file. Please also note that the info.csv is an older file, indicating that the recording was made with an older version of Pupil. Newer recordings will contain a file info.player.json with equivalent information.
hi @user-6bd380 just chiming in that I've done this kind of analysis of the data I have collected from Pupil Labs, and what I've done is I use one of the many tensorflow object-detection algorithms (which can detect faces, eyes, mouth, etc.) on the world videos and then you can map the fixations output by pupil player and see if they fall within the coordinates output by the object-detection algorithm. I have found it works pretty nicely. I hope it helps 🙂 @user-2be752
@user-2be752 Hi Teresa. I am working alongside @user-6bd380 on a project which appears to be quite similar to yours. I have developed a program in Python that can recognize and extract the position of the mouth and the eyes of a person in a picture thanks to tensorflow object detection. Furthermore, my program can also tell whether a point (x,y) belongs to a specific polygon (in that case the convex hull of the points representing the facial feature of interest).
My problem is that I now need to extract the different coordinates that I have from the eye tracker. I have seen data with the name "norm_pos_x" - which I assume refers to a normalised system of coordinates - but I am a bit bewildered because some of the data sometimes indicate negative numbers, or numbers greater than 1. Do you know if there is a way to easily convert these coordinates so that I can compare them with what I have (like pixel coordinates)? I am not sure if I am very clear, so don't hesitate to ask about the points you did not understand. Thanks in advance!
If anyone has any idea or thought about my problem I would be very grateful
@user-883e1f you will get negative values or values > 1 when your gaze gets mapped onto points outside of the range of the world camera. This is totally possible, since the world camera does not cover your entire visual field. Since you cannot really map this data to anything on the world frame, you most probably want to filter out data not in [0, 1] for both x and y. Afterwards you can just multiply the norm_pos with the world camera resolution to get pixel coordinates.
Pupil Player will e.g. also not visualize gaze points outside of the world camera frame.
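For instance, a sketch with pandas, assuming an exported gaze_positions.csv and an example world camera resolution:
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# keep only gaze that falls inside the world camera frame
on_frame = gaze[gaze["norm_pos_x"].between(0, 1) & gaze["norm_pos_y"].between(0, 1)]

# convert to pixel coordinates; normalized origin is bottom-left, pixels are top-left
width, height = 1280, 720  # replace with your world camera resolution
on_frame = on_frame.assign(pixel_x=on_frame["norm_pos_x"] * width,
                           pixel_y=(1 - on_frame["norm_pos_y"]) * height)
```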
@user-c5fb8b Thank you very much for your prompt reply. I will then proceed by filtering the inappropriate data
I guess the origin is at the bottom left side of the picture ?
@user-883e1f yes for normalized coordinates. There are also other coordinate systems in use throughout Pupil, here's the link to the specific docs in case you wonder what the others are: https://docs.pupil-labs.com/core/terminology/#coordinate-system
@user-c5fb8b I'll have a look at it. Thank you for your assistance and have a nice day !
hi, i have some questions about pupil core: is it possible to integrate the hardware in Tobii Pro Studio? At the moment we work with Tobii Pro Studio with a Tobii eyetrackingbar but we would like to expand our setting with eyetracking-glasses, preferably with pupil core.
@user-ed78cc I am not aware of such an integration.
Hi @user-39c8c4, which version of Pupil and Windows are you using? Also which headset do you use? Can you see the cameras listed as libUSBK devices in the device manager after having launched Pupil Capture once? @user-c5fb8b Thank you for your answer, my laptop runs under Windows 8, and I don't see the libUSBK device..
@user-39c8c4 Unfortunately, we do not support Windows 8. We only support Windows 10, Ubuntu 16.04 or higher, and macOS 10.13 or higher. In this case, we would recommend upgrading to Windows 10 or using a different computer running one of the supported operating systems.
Ok i'll try to install Windows 10 then, thank you !
hi, i have some questions about pupil core: is it possible to integrate the hardware in Tobii Pro Studio? At the moment we work with Tobii Pro Studio with a Tobii eyetrackingbar but we would like to expand our setting with eyetracking-glasses, preferably with pupil core. @user-ed78cc or does anyone have any experience whether the data from pupil core can be imported into Tobii?
Thanks for the quick reply @user-c5fb8b! the files I have are: fixations.csv, gaze_positions.csv, world_timestamps.csv, pupil_positions.csv, as well as the world.mp4 recording. I believe that it was recorded with the newer system so the info.player.json file would apply here. I'm curious if those other 4 files I have contain the equivalent information that the info.player.json file has or if I need to acquire this file as well (specifically the timestamps of start date/time of particular importance here) Also which file would have the world camera resolution data?
@user-0f5d40 these are files from an export from Pupil Player, this is not the raw recording.
The world.mp4 is also most likely an exported world video. Do you see any additional visualizations in the world.mp4 video, like gaze point overlays or similar?
I'm not sure what your specific goal is, but I feel it would be the easiest if you were given access to the original recording.
Please note that when exporting in Pupil Player you can also just select a specific time-range to export, so you don't even know in this case if the files you have represent the full recording or just a slice. Normally an export from Pupil Player also contains export_info.csv
file with additional information, such as which frame range from the original recording was exported.
Anyways, you do have timing information here in the world_timestamps.csv, but these are pupil timestamps, so this is an arbitrary monotonic clock. If you need a real-world starting date and time, you will have to get access to either info.csv or info.player.json.
Maybe you can elaborate what you need the start time for?
@user-0f5d40 if your goal is really just to implement a custom fixation detector, you might actually be fine with the gaze_positions.csv export. Depending on which kind of fixation detection method you are using, the gaze data with timing should be enough?
@user-c5fb8b yes, the world video recording has the fixation and gaze point overlayed on the video. Like you said, it is most likely the exported world video so I need the export file as well. I want to create a script that will match the fixations to the world camera recording to analyze specific areas of interest. So initially I am trying to process the data so that I have a table of timestamps for each frame in the world (a frame_timestamps.tsv file) as well as all gaze data where the gaze coordinates are represented with respect to the world camera (a gazeData_world.tsv file).
@user-0f5d40 you can already match the timestamps with the information that you have. The world_timestamps.csv
has one row for every frame in the exported video. It gives you the pupil timestamp for every frame.
In gaze_positions.csv
you also get a gaze timestamp for every gaze datum. Since the eye cameras usually run at a higher frame rate, these will not match to the world timestamps exactly, but the export also does this matching, so you should see a column world_index
here where the gaze timestamp was matched to the closest world frame.
Just to be clear, all these timestamps are generated with the same clock, so you can use them for synchronization/ordering
I think you might actually be able to do what you want to with the limited files you have. Does this make sense?
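To illustrate, a rough pandas sketch of grouping the exported gaze by world frame via world_index (column names taken from a standard export):
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")

# every gaze row is already matched to the closest world frame via world_index
for frame_index, frame_gaze in gaze.groupby("world_index"):
    mean_x = frame_gaze["norm_pos_x"].mean()
    mean_y = frame_gaze["norm_pos_y"].mean()
    # ... test these coordinates against your areas of interest for this frame ...
```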
@user-c5fb8b yes, this does. thank you so much for your help! I will try now and reach out if I have any issues 🙂
Hi, I faced an issue while running 'pip install pupil-detectors'. My development environment is as follows: - OS: Ubuntu 16.04 - gcc/g++ 6.5 (they were originally 5.4, but I upgraded to try to solve the problem, though it still failed) - The main error is '/usr/local/include/ceres/internal/integer_sequence_algorithm.h:64:16: error: 'integer_sequence' is not a member of 'std' So, could anyone help me solve this issue? For more detailed information, please refer to the following error notes.
@user-48e99b hey, would you mind testing the build outside of an anaconda environment first? Does it generate the same errors?
@papr I just tested by following your comments, and unfortunately there are still same errors.
Actually, I don't understand why this error happens, because I already tested the installation manual on another PC (same OS, same environment I guess..) and there were no such issues.
@user-48e99b the problem seems to be that your ceres installation has parts that contain C++14 features, while pupil-detectors builds only with C++11 support
@user-48e99b It indeed seems as if there was an update to the ceres codebase 6 weeks ago, which now requires C++14 support
I'll take a look at how we can fix that
@user-c5fb8b Thank you for supports! Please let me know when it can be solved! Thank you again.
@user-48e99b can you try reinstalling ceres with:
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
git checkout 1.14.0
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON
make -j3
make test
sudo make install
sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/ceres.conf'
sudo ldconfig
This is the same code from our docs for installing dependencies on Ubuntu <=17, but I added the specific checkout of version 1.14, which should still be compatible with C++11
for your reference the current instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu17.md#ceres
@user-c5fb8b Okay, I will try it and let you know soon 😄
@user-c5fb8b Hi, it was solved!! Thank you very much : )
Ok, thanks for testing so quickly, I'll take care of adjusting the documentation then 🙂
can I use pupil-mobile with Pupil Capture v2.0.182?
hi, I have a problem. The MP4 eye video files output (eye0.mp4 and eye1.mp4) cannot be opened or played in anything besides VLC media player. I would like them to be displayed in unity automatically. Is this possible?
I have started seeing a blue circle around the pupil instead of the usual red circle, even though the confidence graphs read almost 1.0. Please, is this still okay?
@user-7daa32 We have changed the color of the 2D ellipse visualization in all eye videos to blue in order to differentiate between 2D and 3D detection. You will now always see 3 ellipses on the eyes when the visualization is enabled:
Color Description
blue: 2D pupil detection result
red: 3D pupil detection result
green: 3D eye model
Thanks
@papr Thanks. But I am using the 3D pipeline and am still seeing both colors
@user-7daa32 v2.0 runs both, 2d and 3d, at the same time. I forgot to mention that
@user-7daa32 Check out the release notes for information about more changes: https://github.com/pupil-labs/pupil/releases/v2.0
Thanks
Hi there, I'm back with a new question. I managed to connect the experiment I built in Psychopy with the eye tracker and record my eye movements. I'm now trying to synchronize the beginning and end of each trial in Psychopy with the eye data. I'm thinking of using either some sort of flags or timestamps that will show in the eye data (so that I can match the eye data with the behavioral data) but I'm not sure what the best way is to approach this. I found the network_time_sync in the pupil helpers docs but I don't know how to implement it... Do you have any advice?
@user-ae4005 Are you running Pupil Capture on the same computer as the experiment? As a simpler approach, you can start by setting Pupil time to your experiment time, similar to this example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py#L42-L43
Instead of "T 0.0"
(which sets the clock to zero), you can send f"T {psychopy_clock()}"
where psychopy_clock
is the clock function that you use in your experiment.
Afterward, you can send remote annotations to capture similar to this script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py Note, that you should adjust time_fn = psychopy_clock
here.
Then you can send annotations/trigger for the start and end of your experiment blocks, or any other visible event. Later, you can use the annotations plugin in Player to visualize the recorded annotations and check if the time synchronization is good enough for your use case.
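Condensed into one hedged sketch (socket setup follows the helper scripts; the label and trial structure are placeholders, and the Annotation plugin is assumed to be running, e.g. via a start_plugin notification or the Capture UI):
```python
import zmq
import msgpack
from psychopy import core

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

time_fn = core.monotonicClock.getTime

# set Capture's clock to the PsychoPy clock *before* starting the recording
pupil_remote.send_string(f"T {time_fn()}")
pupil_remote.recv_string()

# open a PUB socket for sending annotations
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")

def send_trigger(label):
    annotation = {
        "topic": "annotation",
        "label": label,
        "timestamp": time_fn(),
        "duration": 0.0,
    }
    pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
    pub.send(msgpack.packb(annotation, use_bin_type=True))

send_trigger("trial_start")  # placeholder label
```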
Great, thanks! I'll have a look at it.
And yes, for now I'm running everything on the same computer. Do you think that's a problem?
No, not a problem. This is actually better for time sync.
Awesome, happy to hear. Thanks again, will try to get it running with the example code you sent 🙂
Hello, we've got a project where we're trying to direct a robotic arm with 3D gaze information. So far, we are able to calibrate using every available method (i.e. screen markers, single marker, physical marker) through Pupil Core but are unable to achieve gaze accuracy sufficient enough to reliably control the robotic arm (although the confidence values are consistently over 0.8). We have a 6 DoF position tracking system (PolhemusLiberty) and we are able to accurately track the position and orientation of the participant's head as well as a calibration point in space. We are wondering if we can use this data instead of the world camera to calibrate the 3D gaze data.
@user-f0edd6 What amount of accuracy are you looking for? Technically, you could be using the PolhemusLiberty data for a custom calibration. Instead of mapping pupil data to the scene camera coordinate system, you could create a mapping function that maps directly into your PolhemusLiberty coordinate system. This requires you to assume that the relationship between the headset and the participant's head does not change (no slippage). This would be similar to an hmd calibration. But from its core functionality, it is equivalent to the existing calibrations in Pupil Capture and it is unlikely that this will result in higher accuracy.
@papr Hi, I'm getting an error in lines 30 and 40 of the remote_annotations.py script. The problem seems to be the "format" function. I attached a screenshot of the coding window. If I don't define the format properties (as shown in the screenshot) it works, but if I add format(pub_port) or format(time_fn) I get an error telling me I need to fix the code. I read the documentation of the format function and it all seems correct... Can you maybe help me understand what I'm doing wrong?
@user-ae4005 which version of python are you using?
I'm using version 3.6 now (I had multiple versions installed, so I de-installed the non-relevant versions). I also managed to get around this problem just using the "str" function.
Hello, I'm curious as to what the blue histogram, coloured lines, and cyan boxes represent in the algorithm view?
@user-430fc1 They visualize intermediate results from the pupil detection algorithm. You can find the details about the algorithm here https://arxiv.org/pdf/1405.0006.pdf
error messages in pupil-mobile after 1 hour running? https://photos.app.goo.gl/W6YYotJQYWtX3mXF7
@user-f1866e Would you mind sharing information about your setup? (phone and Android version)
It is Oneplus 5t. android version 9. OxygenOS version 9.0.10
@user-f0edd6 What amount of accuracy are you looking for? Technically, you could be using the PolhemusLiberty data for a custom calibration. Instead of mapping pupil data to the scene camera coordinate system, you could create a mapping function that maps directly into your PolhemusLiberty coordinate system. This requires you to assume that the relationship between the headset and the participants head do not change (no slippage). This would be similar to an hmd calibration. But from its core functionality, it is equivalent to the existing calibrations in Pupil Capture and it is unlikely that this will result in higher accuracy. @papr We'd like as much accuracy as possible, our thinking was that if we can more accurately track the depth/location of the target object during calibration, then that would yield higher accuracy. How are you measuring/estimating the depth values during 3D calibration with the physical target? If you would like to see any of our data to get a clearer idea just let us know, we are seeing a lot of jumps/noise in the 3D gaze data.
@user-f0edd6 We are actually not estimating the depth of the reference targets. The 3d gaze is estimated via vergence. I can give you more details tomorrow.
@papr All right that sounds good, thanks!
Any idea for those errors that I posted?
hi, ive been considering using the pupil labs eye tracker for one of the studies im currently in
is someone available to help me regarding this?
i want to know if its suitable for this project
@user-7a9aad If you are able to talk about it publicly, feel free to provide us with some details here. Alternatively, you can write an email to info@pupil-labs.com
will the pupil tracker be able to measure how long the pupil is fixated on an object
@user-f1866e Unfortunately, not. I will try to reproduce it. Were you able to reproduce it? Were you able to expand the notification to see more of the error message?
and what is the output of the tracker
I tried but it did not work. I did not get those errors today. I will keep trying it. And is there any way to charge the OnePlus 5T while using Pupil Mobile?
@user-7a9aad Pupil has a built-in fixation detector but does not have object detection. If you are interested in fixations on areas of interest, checkout the surface tracker plugin. Generally, you can find more information on https://docs.pupil-labs.com/core/
There is also an example recording linked on https://pupil-labs.com/products/core/tech-specs/ which you can download and try for yourself using Pupil Player
@user-f1866e I am not aware of a possibility to charge the phone while the headset is connected.
How about OTG hub or splitter?
@user-f1866e We have never tested the software with such devices/cables. You might see decreased performance when using them.
Ok. Thanks.
I am using Pupil v2.1.0 and in the terminal it shows lines of errors:
Traceback (most recent call last):
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "pyre\zactor.py", line 59, in run
  File "pyre\pyre_node.py", line 53, in __init__
  File "pyre\pyre_node.py", line 521, in run
  File "pyre\pyre_node.py", line 480, in recv_beacon
  File "logging\__init__.py", line 1320, in warning
  File "logging\__init__.py", line 1444, in _log
  File "logging\__init__.py", line 1454, in handle
  File "logging\__init__.py", line 1516, in callHandlers
  File "logging\__init__.py", line 865, in handle
  File "shared_modules\zmq_tools.py", line 52, in emit
  File "shared_modules\zmq_tools.py", line 162, in send
  File "site-packages\msgpack\__init__.py", line 47, in packb
  File "msgpack\_packer.pyx", line 284, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 290, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 287, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 234, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 232, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 281, in msgpack._packer.Packer._pack
TypeError: can't serialize UUID('d5039825-bb95-4760-a729-e3cf63695527')
@user-f1866e Thank you, the traceback is very helpful. This seems to be triggered by an edge case in one of the underlying third-party library. I can't remember seeing this case myself. We will look into ways of handling this appropriately.
@papr When you get a chance, can you go into more detail on how "we are actually not estimating the depth of the reference targets. The 3d gaze is estimated via vergence."
@user-7b943c The 3d calibration uses bundle adjustment to estimate the physical relationship / rotation between the eye cameras and the scene camera. Afterward, the 3d gaze vectors of the eye model are transformed into gaze rays within the scene camera space (starting at the respective eye ball centers) and intersected. This intersection is the gaze_point_3d result. This assumes a phenomenon called vergence. https://en.wikipedia.org/wiki/Vergence
Unfortunately, we need to make a lot of assumptions / estimates in this process, e.g. the initial eye ball positions within scene camera space. If these estimates do not fit the actual values, small errors are introduced into the system. These add up and result in inaccuracy.
The bundle adjustment is optimized such that the backprojection of the gaze_point_3d into the scene image plane is as close to the located reference positions as possible. In other words, the system tries to optimize accuracy in the 2d image plane, not in the 3d camera space.
That said, we use pre-recorded camera intrinsics which are used to transform points between the 3d camera space and the 2d image plane, and vice-versa. You can replace these pre-recorded intrinsics with custom intrinsics for your camera by running the camera intrinsics estimation plugin.
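As a rough geometric illustration only (not Pupil's actual implementation), intersecting two gaze rays via vergence amounts to taking the midpoint of the shortest segment between two 3D lines:
```python
import numpy as np

def gaze_point_from_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays of the form o + t*d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 if the rays are (nearly) parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

# made-up eye ball centers and gaze directions in scene camera space (mm)
gaze_point_3d = gaze_point_from_rays(np.array([-30.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                                     np.array([30.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))
```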
Hi guys, I'm wondering if I could use my own camera module (with special lens and filter) as scene camera on Pupil Core headset. What connection interface is these cameras currently using? SPI or I2C or something else? I want to plug my camera on USB-C clip board and make it work with eye cameras. Any hardware spec available? Thanks!
Hi All, I am using the Pupil Core glasses on Windows 10 with Pupil v2.1-0. I am having issues with Pupil Capture not detecting the two eye cameras (log shows "Could not connect to device! No images will be supplied" - see attached). It only detects the world camera (Pupil Cam1 ID2). I have tried the troubleshooting steps here (https://docs.pupil-labs.com/core/software/pupil-capture/#windows), and this did not solve the problem. When I go into Device Manager, I only see that world camera driver under libusbK USB devices, nowhere else (see attached). I do not see the other drivers for the eye cameras. I have also tried the steps for Zadig manual installation (https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md), and it still does not see the two eye cameras even after I "list all devices" (see attached). Are there any other solutions for fixing this issue?
@user-1a2c5d have you used another version of Pupil successfully previously? Generally, if the device manager does not show the camera in any of the categories it indicates that the camera is not connected physically. In other words, it is possible that the connectors of your eye cameras came loose. I would recommend sending an email to info@pupil-labs.com with these screenshots and your error description.
@papr I tried the 2.0 version after first trying 2.1, and no luck there - the same processes yielded the same results. I'll plan to follow up with the email. thanks!
Hi @papr I managed to synchronize my psychopy experiment with the eye tracker now and I'm sending triggers. I can see that the annotations are being sent and recorded, but when I do the export to the csv files, the annotation file is empty. They do appear in the csv file if I run the demo script (remote_annotations). I checked the code multiple times and I don't see where I'm going wrong. Do you have any idea?
@user-ae4005 I think there is something going wrong with the time sync as Player will only export data within the trim mark range. i.e. it looks your world timestamps have a different time base than your annotations. Could you share the exported world_timestamps.csv
file for confirmation?
@user-ae4005 How did you implement the time sync? Do you adjust Capture's clock or do you adjust your psychopy clock?
@papr these are the timestamps
I'm using the monotonicCLock of psychopy for the capture. This is what I do: time_fn = core.monotonicClock.getTime
@user-ae4005 Yeah, you can see that they are 1594890968.415954
while your annotations start at 16
. You will have to send the T {...}
command before starting a recording.
Right, I thought that could be a problem... I just wasn't sure how to fix it
So I suggest the following procedure for your experiment:
- T cmd
- R cmd
- C cmd and wait until done
- r cmd
I'm sending: pupil_remote.send_string("T " + str(time_fn))
Is this what you mean?
And the corresponding pupil_remote.recv_string()
You need to keep these lines as a pair.
Yes, right. I'm doing that.
Generally when sending stuff via the pupil_remote
socket, you need to call recv_string
afterward. Just a small detail to keep in mind which can be overlooked quickly
So the only thing I'm not doing is the calibration. But this shouldn't affect the time stamp synchronization, right?
Correct. Maybe add an additional time.sleep(2.0)
after resetting time in order to be sure that capture adjusted to the time reset
or the appropriate psychopy function to pause the experiment for 2 seconds
Did it, and also figured out that I missed the () after time_fn when sending to pupil remote pupil_remote.send_string("T " + str(time_fn)). All working now. Thanks again for the great support!
Hi guys, I'm wondering if I could use my own camera module (with special lens and filter) as scene camera on Pupil Core headset. What connection interface is these cameras currently using? SPI or I2C or something else? I want to plug my camera on USB-C clip board and make it work with eye cameras. Any hardware spec available? Thanks! Hi @papr Could you give me some help? Thank you 😁
@user-699cbb Your question has not been forgotten. I forwarded it to our hardware team. I will come back to you as soon as I have a response.
@user-699cbb The scene camera is connected via a 4-pin JST connector to the hub.
See this message about further camera requirements if you want to use the built-in UVC backend. https://discordapp.com/channels/285728493612957698/285728493612957698/725357994379968589
@papr Thank you!
@user-699cbb to be more specific, this should be it https://www.amazon.com/daier-Micro-4-Pin-Connector-Female/dp/B01DUC1M2O
hi guys, I'm looking at integrating gaze information into a VR application for some clinical studies. What VR rigs have been used successfully (from a mechanical fit perspective) with core ?
@user-a1c65e Do you know about our HTC Vive add-ons? No need to use the Core Headset in a VR rig.
no, didn't know. do you have a product link?
@user-a1c65e https://pupil-labs.com/products/vr-ar/
would that work with a steam index as well ? any experience with the index?
Hi, we are trying to implement pupil labs in our lab and synchronize it with other streams using LSL. We were able to add the lsl plug in and - on the same computer - were able to detect the stream using Lab Recorder on the same computer. However, we are trying to send the stream over the network from the computer that has Pupil Labs software and send it to another computer which has LabRecorder on it (both computers are on the same network). However, whenever we try this we aren't able to detect the stream in LabRecorder. I was wondering if anyone knows if we should use a different interface/plugin to stream over the network or if there is an inlet code available that we should use on the other computer receiving the stream.
@user-48fec1 I would recommmend asking this at the LSL slack channel. They know how to handle this type of issue better than we do.
@user-a1c65e I do not know. We do not have an add-on for the Steam Index
ok great thank you
Hi @papr, can I use norm_pos_x * width and norm_pos_y * height to get the pixel position of the user's gaze point?
@user-a98526 Not exactly. The y-axis in the pixel coordinate system is flipped. Therefore:
pixel_x = norm_pos_x * width
pixel_y = (1.0 - norm_pos_y) * height
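A minimal sketch, assuming a 1280x720 world video and data already filtered to the [0, 1] range:
```python
width, height = 1280, 720  # replace with your world camera resolution

def norm_to_pixel(norm_pos_x, norm_pos_y):
    # normalized origin is bottom-left, pixel origin is top-left -> flip y
    pixel_x = norm_pos_x * width
    pixel_y = (1.0 - norm_pos_y) * height
    return pixel_x, pixel_y
```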
Hello, who can I contact to receive a formal quotation for Pupil Core with high speed camera? I need one to make a purchase through my university. Thanks.
@user-ab6a19 You can request a quote via the checkout process on our website. 🙂
@user-ab6a19 https://pupil-labs.com/cart/?pupil_w120_e200b=1 Make sure to check the Academic Discount
box, hit next, and check the Request Quote
box.
Hello, previously I was exporting the data offline and I was able to get the fixations.csv file in the export directory. But now, although I am doing it in a similar way, it is not generating the fixations.csv file.
I have uploaded the image just to show that it reports 0 fixations. Even when I try to export a file I exported before (when it was generating the fixations file), it still doesn't export or generate the fixations file.
Any suggestions regarding this would be helpful for me to proceed.
this image to show the plugins I loaded before exporting the data.
Hi @papr, I tried to run Pupil Player from source, but there was an error when putting the log file into the Pupil Player:
I have installed the environment correctly according to the dependencies. My python version=3.6 Pupil Core version=1.21
@user-6bd380 Maybe you can reduce the minimum duration of gaze.
Hi, is the maximum frequency of both the world and eye cameras 200 Hz? And can the value be reduced as required?
Where in Pupil Core can we set the frequency of the cameras? I'm having trouble finding the drop-down menu for it.
@user-3ede08 hey, the world camera has a maximum frame rate of 120hz. By default, Capture runs the world camera at 30hz. The eye cameras at 120hz. You can change the frame rates in the video source menus of each window.
@user-6bd380 this is an older version of Pupil Player, isn't it? I would suggest updating to the most recent version of Pupil Player.
Ok, I got it, thanks. We need to plug in the device first. However, with a resolution of (192, 192) we can select a frame rate of 200 Hz, right? So we may conclude that the max sampling rate is 200 Hz, I think. 1) What is the role of exposure mode and time? 2) If I choose a different resolution and sample rate for each eye, the resulting gaze patterns (combination of both eyes) will have the smallest resolution and sample rate of the two, right?
@user-3ede08 yes, 200hz is possible at 192x192. I was talking about the default settings. 1) exposure time is capped based on the frame rate, i.e. frame rate takes priority over exposure time. 2) our algorithm tries to match as many binocular pairs as possible, everything else will be matched monocularly. This means you should get binocular samples at the lower frame rate plus additional monocular samples from the higher-frequency camera
ok thanks @papr
@papr thank you now its working fine .
Does anyone know if there have been any projects that uses Pupil products to control a robotic arm? My team and I are trying to control a robotic arm with Pupil Core using a ViperX300 from Trossen Robotics. Any examples would be of great help!
@user-7b943c check out: https://pupil-labs.com/news/object_fixation/
Also check out: http://harp.ri.cmu.edu/assets/pubs/fja_rss2018_aronson.pdf
Hi @user-a98526, following up on the error log that you sent: you mention you are running Pupil v1.21 from source. The log indicates that Pupil has not been set up correctly. Back then the dependencies were a bit harder to set up, did you follow the previous steps on setting up the dependencies? The latest instructions are not valid for this older version. Is there a specific reason for why you are using v1.21 when you did not have it set up yet? I would recommend using the latest version as it is much easier to setup, especially on Windows!
Hi @user-c5fb8b, I set it up according to the current dependencies. The reason I use v1.21 is that the scene camera used by my Pupil Core is a RealSense D435i. The latest version of Pupil no longer supports RealSense.
@user-a98526 have you tried running the RealSense cameras with the custom community Plugin with the latest version of Pupil? Here's the plugin: https://gist.github.com/pfaion/080ef0d5bc3c556dd0c3cccf93ac2d11
You should download the file realsense2_backend.py
and put it into ~~your user folder\pupil_capture_settings\plugins\~~ (EDIT: this was for running from bundle which does not work for the realsense-backend-plugin. Put the file into the location of the Pupil source code\pupil_capture_settings\plugins\ )
Then you should be able to start the RealSense backend from the PluginManager menu in Pupil Capture.
However, if this does not work for you, here are the dependency setup instructions for running Pupil v1.21 from source on Windows. Please be aware that they are very technical and require a lot of setup for the C++ modules that you still had to build manually back then. Feel free to ask any questions if you run into issues along the way: https://github.com/pupil-labs/pupil/blob/v1.21/docs/dependencies-windows.md
@user-a98526 I gave the wrong path for placing the custom plugin. It's now adjusted correctly 👆
@user-c5fb8b Where should I put the modules that the plugin depends on?
@user-a98526 you will have to run Pupil from source for this to work. But like I said, newer versions should be much, much easier to set up on Windows. Then you can install pyrealsense with
pip install git+https://github.com/pupil-labs/pyrealsense
as Python library into the environment you are using to run Pupil.
@user-c5fb8b It works, thank you for your patience.
@user-a98526 glad to hear that! Are you running the latest v2.1 version of Pupil?
@user-c5fb8b yes.
Hello, we've got a 60-minute blink detection recording. Can I ask how to separate the data of the first 3 minutes from the remaining 57 minutes into two different files, and also how to export the full 60 minutes of data in 10-minute intervals to see the trend of the blinks?
Hi @user-0b619b, you can adjust the exported range of the video by dragging the trim marks on the left/right of the timeline or by adjusting one of the range settings in the general settings. You can then export multiple times with different export ranges to create exports from different sections. That being said, I feel like analyzing the trend in the blinks would be better done in a post analysis on the data than by splitting the data into multiple exports. Have you thought about some moving window accumulation on the entire data? This would probably give you a lot more analysis flexibility than a fixed interval export, depending on what you want to achieve.
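As a rough example of what that could look like with pandas (assuming the exported blinks.csv and its start_timestamp column):
```python
import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")

# seconds since the first blink
t = blinks["start_timestamp"] - blinks["start_timestamp"].iloc[0]

# number of blinks in a 10-minute window, sliding forward in 1-minute steps
window, step = 600, 60
counts = [((t >= start) & (t < start + window)).sum()
          for start in range(0, int(t.max()) + 1, step)]
```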
@user-c5fb8b Thank you so much 😊 and good idea of moving window accumulation !
is there any way to save calibration results and reuse it in Pupil Capture?
Hi. I followed the discussion about getting the realsense (415) world camera to work. I have installed the new v2.1 on a new computer. That was very easy except the world view was not there, of course. I placed realsense2_backend.py into the plugin folder and now have to install pyrealsense. The pip command is not available. Do I have to install a version of python for that to happen?
Hi @user-70a9d1, you will have to install the same Python version as it comes with the bundle, install pyrealsense via pip, and symlink the installation into the user plugin folder. Which bundle are you using?
@user-f1866e After a calibration, you should be able to turn Capture off and on again, and the gaze mapping should still work. Please note, that it is not recommended to reuse calibrations between multiple sessions as they loose accuracy over time through slippage.
pupil_v2.1-0-g4116162_windows_x64.msi
@user-70a9d1 Then installing any Python 3.6.x version other than 3.6.0 should work.
Thanks @papr I installed python 3.6.5. If I then run "pip install git+https://github.com/pupil-labs/pyrealsense" from the windows cmd box it doesn't recognize 'pip'. Sorry, novice here.
@user-70a9d1 Please use pip install pyrealsense2 instead. Also make sure that you tick the "Add Python to PATH" checkbox during the Python installation. After reinstalling, restart the cmd so that the PATH variable is reloaded correctly.
Thanks for the path help. I get the following -- "ERROR: pyrealsense2-2.36.0.2038-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform." I have an intel i7 chip and am using windows 10.
@user-70a9d1 Do you get this during the install or when running Capture?
Hello, I have been inconsistent in learning the eye tracking because of work-related stuff. I wish to try as much as possible before my research begins properly. A quick question, guys: does the resolution affect the FPS? What is the importance of the video source resolution? I am seeing an FPS of 15 at 1024x768 resolution... I got an FPS of more than 50 at a blurry 320x240 resolution.
The above is for the world camera.
I was able to get an FPS of more than 60 for the eye camera and a CPU load of more than 170%.
Also, what do the yellow shadows in algorithm mode mean?
% Data dismissed after calibration: is it supposed to increase or decrease?
I got the message when trying to install from the command line. First I tried "pip install pyrealsense2" and got the following: "ERROR: Could not find a version that satisfies the requirement pyrealsense2 (from versions: none) ERROR: No matching distribution found for pyrealsense2". Then I thought maybe I should download the pyrealsense2 package and install that. That led to the error I gave in the previous message.
I want to control external devices with TTL pulses during a Pupil Core recording. Can I do that from the Capture GUI, or do I have to use the API?
Hi all 🙂 I would like to use the Pupil Core headset and an EEG device in real time, but I need them to be synchronised. Any ideas on how I can synchronise those two devices? And is there a simple way to send triggers or tags to the Pupil Core data recording? Thanks in advance!
@user-7daa32 Please see my notes below:
Frame rate - Yes, frame rate is limited by resolution because there is only so much USB bandwidth available. :)
Glasses and Pupil Core - Technically, it is possible to use both, but it is very difficult to adjust the eye cameras such that their fields of view are neither occluded by the frame nor distorted by the lenses. We recommend wearing contact lenses instead.
Yellow area in algorithm view - These are specular reflections whose pixel edges will be discarded during pupil detection. Details can be found here: https://arxiv.org/pdf/1405.0006.pdf
% Data dismissed after calibration - Ideally, this number would be zero, but that is not achievable in practice. Low-confidence data occurs during blinks or when an eye looks away from its camera, both of which happen during a calibration.
@user-70a9d1 pip looks here for available builds: https://pypi.org/project/pyrealsense2/#files So Python 2.7, 3.5, 3.6, and 3.7, as well as Linux, Windows 32/64bit are supported. Apparently, your setup did not match any of these.
@user-4ddeb2 This is not possible from the UI. You will have to write a custom plugin for that. https://docs.pupil-labs.com/developer/core/plugin-api/
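As a rough illustration only: a user plugin that reacts to recording notifications and writes a marker byte to a serial device (e.g. an Arduino that generates the actual TTL pulse) could look like the sketch below. The port name, baud rate, and byte values are assumptions, and pyserial has to be importable in the environment Capture runs in.
```python
# Hypothetical sketch of a Pupil Capture user plugin that sends a serial marker
# (which external hardware could turn into a TTL pulse) on recording start/stop.
# Place it in pupil_capture_settings/plugins/; port and byte values are examples.
import serial  # pyserial, assumed to be installed for Capture's environment

from plugin import Plugin


class SerialTTL(Plugin):
    uniqueness = "by_class"

    def __init__(self, g_pool, port="COM3"):
        super().__init__(g_pool)
        # Opening the port here will raise if the device is missing; handle as needed.
        self.conn = serial.Serial(port, baudrate=115200, timeout=0)

    def on_notify(self, notification):
        # Capture broadcasts notifications such as "recording.started"/"recording.stopped".
        if notification["subject"] == "recording.started":
            self.conn.write(b"\x01")
        elif notification["subject"] == "recording.stopped":
            self.conn.write(b"\x00")

    def cleanup(self):
        self.conn.close()
```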
@user-60f500 I can recommend using https://github.com/sccn/labstreaminglayer for that
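For example (just a sketch, not a complete setup): with pylsl you can publish a marker stream that your EEG recorder and an LSL recording of the eye-tracking data can both pick up on a shared clock. The stream name, type, and source_id below are arbitrary placeholders.
```python
# Sketch: publish string event markers via LSL (pylsl). Names are placeholders.
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="ExperimentMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string",
                  source_id="experiment-markers-001")
outlet = StreamOutlet(info)

# Push a marker whenever an event of interest happens in your experiment script.
outlet.push_sample(["stimulus_onset"])
```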
Does anyone know about stimulus presentation and self-adaptation, in Python or MATLAB?
@user-00fa16 I have some experience with https://www.psychopy.org/
You will have to integrate the self-adaptation yourself, though. Pupil provides a realtime Network API which you can use to receive and process data (see the sketch below). https://docs.pupil-labs.com/developer/core/network-api/
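A minimal sketch of receiving gaze data over the Network API, assuming Capture runs locally with Pupil Remote on its default port 50020, could look like this:
```python
# Sketch: subscribe to realtime gaze data from Pupil Capture (ZeroMQ + msgpack).
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    print(topic, gaze["confidence"], gaze["norm_pos"])
```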
Which one is better or more readily available for stimulus presentation and connecting with Pupil Labs: Python or MATLAB?
@user-00fa16 Python
@papr And can the self-adaptation (controlling the course of the experiment by gaze, or ignoring it) be realized with Python and the API you sent me a moment ago?
@user-00fa16 Correct
You might need to evaluate whether the total latency of the data processing/transmission is sufficiently low for your experiment, though; a rough way to check is sketched below.
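This is only a rough estimate (it ignores the request round-trip and assumes Pupil Remote's 't' command for querying the current Pupil time), but it gives an idea of the end-to-end delay:
```python
# Rough sketch: estimate gaze pipeline delay by comparing a received sample's
# timestamp with the current Pupil time ('t' request). Estimate only.
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

topic, payload = sub.recv_multipart()
gaze = msgpack.loads(payload, raw=False)

remote.send_string("t")
now = float(remote.recv_string())
print("approx. delay (s):", now - gaze["timestamp"])
```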
OK and thank you for your notice
Do you have any API for presenting optical gratings, color patches or other patterns, and simple motion? Something like a template, where we can achieve the desired effect just by changing the parameters?
@user-00fa16 Stimulus presentation can be done using psychopy. See link above.
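As a small illustration (the parameter values are arbitrary examples, adapt them to your experiment), a drifting sinusoidal grating in PsychoPy could be presented like this:
```python
# Sketch: present a drifting sinusoidal grating for 3 seconds with PsychoPy.
from psychopy import visual, core

win = visual.Window(size=(800, 600), units="pix", color="grey")
grating = visual.GratingStim(win, tex="sin", sf=0.02, size=300, ori=45)

clock = core.Clock()
while clock.getTime() < 3.0:
    grating.phase = clock.getTime()  # drift at ~1 cycle per second
    grating.draw()
    win.flip()

win.close()
core.quit()
```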
Ok, thanks for your patience
OK. I had mistakenly installed python 3.8 instead of 3.6. "pip install pyrealsense2" now worked. I put realsense2_backend.py into the plugins folder. You mentioned the need to symlink. How do I do that?
python3.6 -c "import pyrealsense2; print(pyrealsense2.__file__)"
@user-70a9d1 what is the output of that command?
I don't know why Player refused to open. I still want to work with the previous version and then switch to the latest one.
@user-7daa32 Close the program, delete the Home directory -> pupil_player_settings -> user_settings_* files, and start again.
C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\site-packages\pyrealsense2\__init__.py
@user-70a9d1 Try
mklink /D C:\Users\admin\pupil_capture_settings\plugins\pyrealsense2 C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\site-packages\pyrealsense2
I get "Cannot create a file when that file already exists."
I edited the command. I think I switched target and source
Same error with edited command
I edited it again
OK. That seemed to work except I got "You do not have sufficient privilege to perform this operation."
@user-70a9d1 Restart your command prompt as an admin 🙂
@user-7daa32 Close the program, delete the Home directory -> pupil_player_settings -> user_settings_* files, and start again @papr
I only have the application and no such file.. I used the Windows installer package.
OK. Thanks for your patience! That installed the plugin. No world view yet in capture. I will get back to you if I can't figure it out from here.
@user-7daa32 The files I am talking about are not part of the installation. They are in an extra folder in your user's home folder.
The day I installed version v2.0.161, I got no such folder. Please, I am not seeing any folder.
@user-70a9d1 Check the logs to see whether the plugin could be loaded. Afterward, check the Video Source menu. You should be able to select the camera from the selector. Try manual selection, too.
@user-7daa32 Please try searching for "pupil_player_settings"
Got it! I saw subfolders: plugin, player, user settings for eyes and player... They seem empty, and only the plugin one is a folder.
@user-7daa32 Delete all the files that start with "user_settings"
Wow! Thank you.
The log says "device_id: No device connected." and then it 'restarts' and tries again and again... I guess the connection to the camera is faulty.
@user-70a9d1 Yeap, sounds like it. But great to see that the symlinking works
Yes. Thanks for that!
That green circle is not steady. It's going on and off or fading away, sometimes getting smaller and larger, and going in and out of the ROI. Please, what's the ideal setup for this?
If I get the response "did not collect enough data for mapping accuracy", what does that mean?
Another issue today. I was looking at questions on a screen with a white background. The writing was not visible; the page just looked plain.