Hi, the Detect Eye 0 button is stuck on and the camera isn't showing; it appears for a few seconds and then disappears. The button also doesn't toggle. Any troubleshooting thoughts? Thanks
Hi, could you please try restarting with default settings? Does the issue reappear?
Hi! Is it possible to find the angular position, velocity, and acceleration of the pupils using pupil core? Is it also possible to do pupillometry using pupil core?
Hi, yes, this is possible. The software will estimate pupil diameter by default. To calculate angular velocities, you will need to export the recording and use this tutorial: https://github.com/pupil-labs/pupil-tutorials/blob/master/05_visualize_gaze_velocity.ipynb
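For the velocity/acceleration part, here is a minimal sketch along the same lines as the tutorial, assuming the recording used the 3d gaze pipeline so the gaze_point_3d_* columns of the export are filled (the 0.8 confidence cutoff and the export path are just common defaults):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze.confidence > 0.8]  # discard low-confidence samples first

# normalize the 3d gaze directions
vectors = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# angle between consecutive gaze directions, divided by the time difference
dot = np.clip(np.einsum("ij,ij->i", vectors[:-1], vectors[1:]), -1.0, 1.0)
angles_deg = np.degrees(np.arccos(dot))
dt = np.diff(gaze.gaze_timestamp.to_numpy())
velocity_deg_s = angles_deg / dt

# acceleration as the discrete derivative of the velocity
acceleration_deg_s2 = np.diff(velocity_deg_s) / dt[1:]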
Hello! I obtained the coordinates of points A and B as (x1, y1) and (x2, y2) using the Pupil Core eye tracking headset, and then obtained the coordinates of point C. Why are its coordinates not within the range spanned by points A and B? A and B were each obtained by gazing at them for a long time, collecting 2000 coordinates, and taking the average.
Hey, from your description, it is not entirely clear why C would have to lie between A and B. Generally, I would like to note that the accuracy of the points might vary (e.g. during blinks) and that it is recommended to discard low confidence values. You can read more about the coordinate systems here https://docs.pupil-labs.com/core/terminology/#coordinate-system
Because I constructed a rectangular region using points A and B, and the point C I was gazing at was inside this rectangle. But even the high-confidence values for point C are not in this range.
Could you share a recording with data@pupil-labs.com that includes the calibration choreography and the subject gazing at the different points?
In the process of recording the video, I found out what my problem was. Thanks for your answer!
This is nice to hear! Would you be ok with sharing what the problem was? It might give us insight into how we could improve our documentation.
I used 2000 gaze data points and averaged them to get the coordinates of points A and B. But when I look at the world index of these 2000 points, it is not the same as the world index at the moment I fixate that point in the video. I would like to ask whether pupil data obtained through the API can be received in real time.
You can receive data in realtime. 🙂 https://docs.pupil-labs.com/developer/core/network-api/
I recommend using timestamps not video indices to find specific sections in your data. Frame indices can be misleading when comparing data from different sources.
Okay, thanks for your answer, I'll give it a try.
We are planning to use heart rate and GSR together with the eye tracking. Can we sync all of these? We have not had the device delivered yet, but just curious
Hi @user-7daa32. It's usually possible to sync with third-party equipment using our Network API: https://docs.pupil-labs.com/developer/core/network-api/#network-api or Lab Streaming Layer Plugin: https://github.com/labstreaminglayer/App-PupilLabs. But it really depends on what HR and GSR equipment you'll be using. I'd recommend having a look at the manufacturer's documentation on sync capabilities.
Hi, I cannot get Pupil Capture to work under macOS Monterey using sudo. I am getting the error: ImportError: ('Unable to load OpenGL library', "Failed to load dynlib/dll 'OpenGL'. Most probably this dynlib/dll was not found when the application was frozen.", 'OpenGL', None)
The full command/output is:
NIWA-1019500~/Downloads$ sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
MainProcess - [INFO] os_utils: Disabled idle sleep.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "PyInstaller/loader/pyiboot01_bootstrap.py", line 151, in init
  File "ctypes/init.py", line 348, in init
OSError: dlopen(OpenGL, 0x000A): tried: 'OpenGL' (relative path not allowed in hardened program), '/usr/lib/OpenGL' (no such file)
...
  File "OpenGL/platform/baseplatform.py", line 15, in get
  File "OpenGL/platform/darwin.py", line 41, in GL
ImportError: ('Unable to load OpenGL library', "Failed to load dynlib/dll 'OpenGL'. Most probably this dynlib/dll was not found when the application was frozen.", 'OpenGL', None)
world - [INFO] launchables.world: Process shutting down.
MainProcess - [INFO] os_utils: Re-enabled idle sleep.
But this log is missing a specific output that should be present. Can you confirm that the version at /Applications/Pupil\ Capture.app/ is the newest v3.5.7, not an outdated one? Given that you are starting the command from the Downloads folder, I have the suspicion that the newer version was not yet copied to the Applications folder.
Thank you for reporting this issue. I will try to reproduce this issue later today.
The above error is on Monterey 12.3.1
I have just upgraded but cannot reproduce the issue 😕
Thanks for any help
I installed pupil_v3.5-7
Hi, is there a possibility to disable the 'c' and 'r' hotkeys used by Pupil Capture? I'm running an experiment with letters on the same computer, so every time the letter 'r' is pressed as part of the experiment, a new recording starts or ends... 🙂
Hey 🙂 Technically, there should be a way to disable these via a custom plugin. But the application only processes keyboard input while it is focused. If your experiment window is focused, no keyboard input should reach Pupil Capture.
And when is an application 'focused' exactly? Is it out of focus when I'm minimizing Pupil Capture?
The answer is yes. 🙂
Thanks for your help. Just need to make sure that Pupil Capture is not focused during the experiment.
Thanks
I want to be sure I am doing the right thing.
In order to create the active tracking area, I create a single surface that covers the entire area of the stimulus before data export. I assumed the dimensions would be normalized to x = 0-1 and y = 0-1. Then I have to use the gaze-position data on the surface. If that's correct, what's the difference between x_norm/y_norm and x_scaled/y_scaled?
"what's the difference between x_norm/y_norm and x_scaled/y_scaled" They are equivalent:
x_scaled = x_norm * surface_width
y_scaled = y_norm * surface_height
where surface_width and surface_height are the size values that you can enter in the UI.
The scaled values are only there for convenience.
Example: Your surface is a computer screen with a resolution of 1280x720 pixels. Then you could set this resolution as the surface size. The scaled values would correspond to pixel locations on that screen.
Thanks... let me get this right: the scaled values are the actual locations on the screen in pixels, and I can convert them to cm.
No, this is not the case. The screen example is just an example on how you could use the size/scaling. For your use case, I recommend using the norm values.
We still need to convert the norm values to values in centimeters. I assume we just need to manually measure the distances between the apriltags and then use that for the conversion
This is the correct approach!
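A minimal sketch of that conversion (the measured sizes below are made-up example values):

# surface size measured by hand, e.g. distance between the outer apriltag corners
surface_width_cm = 34.0   # example value, replace with your measurement
surface_height_cm = 19.0  # example value, replace with your measurement

def norm_to_cm(x_norm, y_norm):
    # the origin stays at the bottom-left corner of the surface
    return x_norm * surface_width_cm, y_norm * surface_height_cm

print(norm_to_cm(0.5, 0.5))  # center of the surface, in centimeters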
Thank you.
Then the normalized height and width will be 1 each right ? Sorry for too many questions
yes
I'm using the Pupil labs core and the capture software (3.5.1) on a Macbook Air running 12.3.1. I start capture in admin mode via terminal, and while the scene camera is working, the camera for eye cannot be found. Any suggestions as to the cause?
Hi, I am sorry to hear that. Could you please start System Information (the utility program that comes with every macOS installation), select USB on the left, and check how many and which Pupil Cam entries are listed in the USB Device Tree view?
It only sees the Pupil Cam1 ID2
Hi everyone, I have an issue with the world camera. When I try to adjust it, it disconnects for a while or freezes. In the terminal I see the following:
world - [WARNING] video_capture.uvc_backend: Camera disconnected. Reconnecting…
Estimated / selected altsetting bandwith : 904 / 1600.
world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
!!!Packets per transfer = 32 frameInterval = 327868
(the same disconnect/reconnect cycle repeats several times)
I am using windows.
does someone know the cause of this?
Hey, it looks like moving the camera disconnects it temporarily. This is not necessarily due to a hardware issue. Do you also see these types of disconnects if you don't adjust the camera?
(the scene camera)
Which is the scene camera. This indicates a hardware issue as the two eye cameras are not recognized by the mac. Please contact info@pupil-labs.com in this regard.
ok - will do - thanks
Just tested it again. It started when I tried to adjust the world camera (lens or angle); before that it worked fine. In the past the issue sometimes disappeared when I restarted the program, but at the moment it doesn't disappear at all.
"it doesnt disappear at all" By "it", are you referring to the issue?
yes sry
Ok, then this indicates a potential loose contact. Please contact info@pupil-labs.com as well.
alright, thanks for your support
Hello everyone, during recording the world window says "world: unexpectedly large amount of pupil data...". Unfortunately I have not been able to find out in the manual or in this thread what this warning means and whether it is important to deal with it. If I have just searched in the wrong place, I apologize and would ask you for a link to the part of the manual where these messages are described. Thank you
@user-9e0d53 @user-f356fa This message ends in "was dismissed due to low confidence". Please see https://docs.pupil-labs.com/core/#_3-check-pupil-detection for reference. Feel free to share a Pupil Capture recording of the calibration procedure with us or data@pupil-labs.com to get more concrete feedback.
Here. A colleague sent me an image of this status.
I happen to have the same problem!
Hello, New user of discord and pupil core. I was wondering, is there a way to stream live data into MATLAB?
We have some examples on how to stream gaze data to MATLAB: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
I am curious. May I ask what your use case is that you need the data in real time?
I am working on a research project; we are trying to use the eye tracker to improve the usability of brain-computer interfaces.
Hey thanks, will look into it.
Hello, I get gaze position data in the format [number1, number2] from the Pupil API. I don't know how to interpret these numbers. Does the first number mean the X position and the second the Y position? The first number is in the range 0-1, but the second ranges from -2 to 1. Why?
Hi! Could you specify which code you are using to receive this data?
I subscribed to the topic: gaze.2d..01.norm_pos.
See our docs on coordinate systems https://docs.pupil-labs.com/core/terminology/#coordinate-system
Everything outside of the 0-1 range is outside of the image. Make sure to remove low confidence values before looking at the data.
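As a minimal sketch of receiving and filtering that data, assuming Pupil Capture runs on the same machine with the default Pupil Remote port 50020 (the 0.6 confidence cutoff is just a common choice):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")   # ask Pupil Remote for the subscription port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")          # all gaze topics

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload)
    if gaze["confidence"] < 0.6:
        continue                       # drop low-confidence samples
    x_norm, y_norm = gaze["norm_pos"]  # (0, 0) = bottom left, (1, 1) = top right of the scene image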
Is there published validation material in regards to the core head pose tracking during processing?
I don't know of any off the top of my head. Check out publications that cite our products here: https://pupil-labs.com/publications/
@papr Our group would like to do a pupil dynamics and gaze tracking study where the subjects explore the environment outside in a city. During their exploration we want to record their pupil size and gaze direction and follow which features elicit their attention. Can the Core set be used for this sort of application? E.g., can the pupil data and video of the scene be collected locally without the headset being plugged into a computer?
Does anyone have information on the voltage/wattage/amps for the Pupil Core eye tracking cameras? Not the model with the world view camera, just the headset with the two pupil cameras. 5V?
I can not find any documentation showing the power requirements in the technical docs/Discord. Would anyone mind helping me out?
Nevermind found it, thank you
https://discord.com/channels/285728493612957698/285728493612957698/824004931353575475
Hello everyone
The image of the computer monitor looks distorted in Player. Although the surface created by the on-screen apriltags is a perfect rectangle, it doesn't cover the entire screen because the monitor image is not flat. Wouldn't this pose an issue for the exported data, given that parts of the screen are not within the normalized space?
The visualized lines just connect the four corners. They do not represent the actual edges of the surface. The software corrects for the distortion behind the scenes.
Thanks
Hi, I use Pupil Core and it used to work well. But for a few days now, the eye cameras have not been detected by Pupil Capture (while the world camera still works well). In the device manager, I have only one Pupil Cam detected (Pupil Cam ID2). I tried to re-install the drivers, but that did not solve the issue. My intuition is that it should not be a hardware problem, because both eye cameras stopped working at the same time. Any idea? Thank you for any help!
Hello!
I managed to stream pupil core data at around 100Hz and eeg data at 5000Hz to the LSL lab recorder. I also streamed EEG data at 5000Hz to the pupil capture LSL recorder but got unstable results. What I'd like to try next is streaming pupil-core data to openvibe via LSL. I see that openvibe can read lsl streams. Does anyone have experience streaming pupil-core data to openvibe?
Thanks 🙂
Hey @user-df0510 you should be able to use Pupil Capture LSL Relay to send gaze data from Capture to LSL (and from there to openvibe).
Yes looks like it. I'll let you know how I get on with it.
Hello, I have a normal output for the pupil0 diameter, but not for pupil1. Confidence for both outputs is in the range of 0.8 to 1.0. The output is in the attached images.
Hey, could you tell me what values you are plotting exactly (2d vs 3d pupil diameter) and from where you are reading the data (realtime vs exported csv file)?
Hi, how do we know whether the participant is gazing at the area of interest? In the picture I attached, it states no fixation available. Any advice on this? Thanks
Hi 🙂 To detect fixations, you need gaze data. You get gaze either by calibrating in Pupil Capture or using the post-hoc calibration feature in Pupil Player. Do you use Pupil Capture in your setup?
Yup, we are using Pupil Capture. Are we able to use the circular calibration markers on the same recording to collect the gaze data?
The circular markers are the calibration targets, yes. Calibration is a step that is independent of surface tracking. Only the latter requires the square markers.
Do we calibrate with the circular markers first and only then add the square markers?
You can have the square markers attached/being displayed during the calibration (as long as they do not occlude the circular ones). My recommendation for the procedure in Capture would be:
1. Set up the square markers, set up the surface
2. Start a recording
3. Start a calibration (displays circular markers)
4. Finish the calibration, continue displaying the square markers
5. Perform the experiment
6. Stop the recording
I don't always record before calibrating. I calibrated before recording; is there any problem with that? I think the only advantage of recording first is when post-hoc calibration is needed.
Hello, I recently started getting these error messages after updating to macOS Monterey (not sure if related). Any ideas what's going wrong?
Hi, yes, you need to use the latest Capture release and start it with sudo from the terminal.
For step 3, do we need to enter a specific calibration mode? Won't it lead to a screen calibration instead?
You can choose any calibration choreography you like from the calibration menu. If Capture is running on the same screen as your stimulus, a screen calibration would make sense.
Thanks, just tried it but got the same error...
sudo open Pupil\ Capture.app
Use sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture instead. See also the release notes for reference https://github.com/pupil-labs/pupil/releases/tag/v3.5
I plotted 2d pupil diameter, and I read data from exported CSV file.
Would you be able to share this specific csv file with me? I would like to reproduce the issue.
Hi! We're in the middle of an experiment and our Pupil Core have suddenly stopped working! We have tried on several computers. Same error message.
On macOS please run sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture from the terminal. This will ask you for your admin password and attempt to run the program with admin privileges. (This assumes that you copied Pupil Capture to the Applications folder.)
Which operating systems have you tried? Were they all macOS?
Do you have any troubleshooting tips?
No, both MacOS and Windows.
Worked yesterday...
Ok, we'll try
Now the eye cameras worked, but not the world camera. Any suggestions?
The world camera suddenly connects, but then freezes and disappears. It disconnects and reconnects.
OK, please send an email to info@pupil-labs.com with this information
I’m running into ioHub startup issues when trying to connect pupil labs with psychopy on a macOS system. Has anyone faced the same, could you please share how you fixed it?
Hi, thank you for this. We tried this method yesterday and it worked. However, when we tried using the same calibration marker as yesterday, the software could not detect it. Perhaps the ones we used are already folded. We wanted to know: does the printing quality of the calibration marker matter? Our calibration marker could not be detected.
"same calibration marker as yesterday" Are you referring to the square markers from yesterday's screenshot?
Hi @user-f51d8a. Is it possible that you used the 'Stop' marker? There are actually two markers – one to start calibration and one to stop. It might be easy to confuse these if you printed them off
PsychoPy Integration
Dear Pupil Labs Team,
no matter which calibration procedure I use, I don't get my angular accuracy better than 3 degrees.
I planned to use the pupil core to analyse gaze behaviour of radiologists when reporting on MRI images (where they have to identify certain ROIs in MRI images, using two screens).
If I just reach 3 degrees angular accuracy, with a monitor distance of approx. 1 meter from the radiologist, that would mean that the ROI would have to be at least 5 cm large/wide to be detected sufficiently, right? If I am not mistaken, the pupil core should be able to reach accuracies of up to 1° of visual angle, allowing me to detect smaller ROIs.
I followed the instructions on your documentation page and adjusted the pupil detection parameters as best as I could. I also tried all the calibration procedures, apart from the natural features one. What else could I do? Advice would be much appreciated!
Hi, calibration accuracy not only depends on the calibration choreography but also on the pupil detection and 3d eye model fit quality. Feel free to share a recording of a calibration with data@pupil-labs.com for concrete feedback.
I have a dummy question about camera intrinsics estimation: after running the camera intrinsics estimation and clicking on "show undistorted image", it works great. But the recorded video does not show the undistorted image. Can the newly estimated intrinsics file be imported directly into the recording, or should it be imported manually somewhere?
Hi 🙂 Happy to hear that you were able to estimate custom intrinsics successfully. Please note that the undistortion is only applied to the preview, not the recorded video. Capture stores the intrinsics as part of the recording in world.intrinsics. Technically, you can replace existing files with your newly estimated one, but this is rarely necessary.
What led you to estimating custom intrinsics?
Because our screens do not seem to be strictly parallel in the camera view; in this case, when we apply markers to the screen for surface tracking purposes, what is recorded is not a strict rectangle. So I can replace the world.intrinsics file manually and the recording will show up undistorted like the preview, right?
Note that the surface visualization is just an approximation. The true surface borders (in undistorted image space) are not visualized. For performance reasons, Pupil Player detects the markers in the original distorted image space and only transforms the marker corners into the undistorted image space where the actual surface tracking and mapping happens.
ok. Therefore, this is basically unnecessary and will probably just make the video look nicer. 😀
Hello! This is urgent. I'm running an experiment with participants for my Master's thesis, but suddenly one of the two cameras does not work. The last time I used the eye tracker was on Monday and it worked correctly, but now I just turned it on and this happened
@user-f76ddf https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Hello! We are looking into purchasing the Pupil Core headset, but we have a technical question - apologies if this is not the best place to ask! The app we intend to conduct research on is not publicly accessible; it is only accessible via machines authorised on our VPN. We currently use ZeroTier for this, which simulates a local network. Will this cause any issues when recording sessions (the machines we record on will have access to this VPN)? Other solutions we have tried have required the app to be publicly accessible.
Hi! Pupil Capture does not require a network connection nor direct access to the application you are studying. Gaze is estimated within the scene camera video, not within the application. Filming the application with the scene camera is therefore sufficient.
I followed the steps, but the same happens
What do you see in device manager?
There are typically three possible categories where the cameras can be listed:
- Cameras
- Imaging Devices (Dispositivos de imagen in your case)
- libusbK USB Devices
If a camera is not listed in any of these, that means the camera is physically disconnected. Please reach out to info@pupil-labs.com if this is the case.
What do you see in Imaging Devices, with show hidden devices enabled in the View options?
The option show hidden devices is already activated
Do you have access to another laptop (e.g. macos or linux) to plug Pupil Core into? Windows can be a bit picky with drivers sometimes.
I'm going to try to restart the computer
Nothing, the same
This appears in the window when running as administrator
@user-f76ddf do you have access to another computer (macos/linux) that you could quickly connect Pupil Capture to?
Right now I can't
the computer I'm working with is windows, and I use the eye tracker with unity, but all in windows
@papr can you follow up here please?
Yes, that's the case, the camera is not listed in any of those three
I tried with another headset also equipped with the eye tracker and it worked, so the problem is definitely the camera
Please contact info@pupil-labs.com with this information to initialise the repair procedure
Hi @papr @wrp , I'm currently having a very similar problem (albeit with different errors) with the Pupil Labs VR/AR Eye-tracking for HTC Vive Pro (I posted a detailed description of the issue in the "vr-ar" channel). Would you be able to take a look at it? I do apologise for cross-posting, but it's quite urgent since I'm currently in the middle of testing for an experiment.
similar to @user-f76ddf (one of the two cameras - right eye one - is grey and starting in ghost mode)
Hi! I would like to ask a question about time matching. I currently use two Pupil Labs cameras to capture eye images (already synchronized) without using a world camera, but their timestamp lengths differ substantially (9705 frames vs 8486 frames). May I ask how I can match the timing of those two videos offline? I also tried to show those two videos in Pupil Player and found the eye overlay (3825 frames) matched the two eye videos as I expected. Thanks so much in advance!
I checked the timestamps again and, according to the image contents, two very close timestamps (<0.01 s) from the two eye cameras respectively do not match at all, but the image contents of two timestamps with a 0.3 s difference match well. Any ideas why this happens?
Hi Pupil Labs team, I just installed the software on my Windows 10 computer. The world view and eye 0 cameras are connected, but eye 1 cannot. Specifically, it reads: video_capture.uvc_backend: Could not connect to device! No images will be supplied. The same issue exists on a Mac computer. Would you please provide troubleshooting advice? Thanks!
Hi. I want to know how the confidence for the individual eyes is computed. I mean the confidence that should be above the threshold if it is to be used in gaze mapping. Where can I find this in the source code?
This is part of the 2d pupil detector. You can find the source code here https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_2d/detect_2d.hpp#L600
Confidence is high if the fitted ellipse matches its support pixels well.
Hi! I don't know if this is the right channel, but I am going to give it a shot. I want to connect the Pupil Core device with PsychoPy. According to the PsychoPy documentation, I should be able to just connect the device to my computer and then choose it via the drop-down. Unfortunately, the Pupil Core device does not appear in the eye-tracker drop-down menu. Do you have any idea why? I know I should also contact the PsychoPy folks with this issue, but maybe I am missing something on the Pupil Core side.
Hi, the eye tracker drop-down in PsychoPy should just say "Pupil Labs". And it should always be selectable, independent of the hardware being connected or not.
When you want to run your experiment, it is important that Pupil Capture is running and the video preview works for all three cameras.
Hello
The distortion seen in Player also adds distortion to the gaze location. Our fixation and gaze plots have gazes and fixations that are not in the right location on the stimulus overlay. The scanpath simply is not accurate, although it looks accurate on the video.
@marc @user-53a8c4 @user-92dca7 @papr is there a lower /upper bound on the pupil size estimate in the detection algorithm of the Pupil Core?
The 2d pupil detection algorithm has a lower bound parameter "pupil min" (in pixels; adjustable in the eye window)
Hi, do I understand correctly that on both devices world and eye0 work but eye1 does not? Or do none of them work on macOS?
Please note that on macOS Monterey, you need to start the application with administrator rights in order to access the cameras. Run this command from the terminal to do that:
sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
That is correct. On both devices, world and eye0 work, but eye1 does not. The application is being run with administrator rights on both computers.
What do you use as input for your scanpath?
We used the gaze position data. We used velocity to classify events
Hmmm well the option does not show up for me. I tried to select different versions of Psychopy but that did not solve it
You need to run PsychoPy 2022.1 or newer
A reinstall solved it. Thanks!
Hello, I have a question on blink detection (not sure if this is better suited in 👁 core or in 💻 software-dev but anyway...):
When a recording is opened in Pupil Player and exported (with the Blink Detector plugin enabled), the resulting blinks.csv file contains a neatly arranged list of blink events, each with onset and offset timestamp.
If, however, I read the data recorded by Pupil Capture in blinks.pldata using the load_pldata_file function from file_methods.py, the resulting object contains a bunch of single events which are labeled either as onsets or offsets, with each actual blink resulting in several onsets and offsets, and the total number of onsets not necessarily matching the number of offsets.
Am I correct in assuming that blinks.pldata contains the "raw" threshold detector output, whereas some additional logic/processing is applied when exporting blinks.csv? If so, is there a convenient way to apply this without having to open the recording in Pupil Player?
Hi, blinks.pldata contains the realtime-detected blinks. Due to the realtime constraint, Capture generates a blink event as soon as it detects a blink onset and continues emitting blink events until an offset is detected.
blinks.csv contains the post-hoc detected blinks, where the full gaze data is available, allowing the algorithm to look "into the future" and find the offset for each blink.
You can use this script to run the post-hoc blink detection algorithm on the intermediate data format: https://gist.github.com/papr/40ba7d99f572bc0fe388a81aa2f87424
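For reference, a minimal sketch of reading the realtime events yourself, assuming file_methods.py from the pupil repository (pupil_src/shared_modules) is importable and the "type" field naming matches your Capture version:

from file_methods import load_pldata_file

recording_dir = "/path/to/recording"  # hypothetical path, replace with your recording folder
blinks = load_pldata_file(recording_dir, "blinks")

for datum, ts in zip(blinks.data, blinks.timestamps):
    # realtime events carry a "type" field that is either "onset" or "offset"
    print(ts, datum["type"], datum["confidence"])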
Alright, thanks for the clarification, and the script!
Gaze in scene camera or surface coordinates?
Gaze in surface coordinate
Have you plotted those gaze samples on the stimulus overlay? Do they fit? Where does the stimulus overlay come from?
Yea, plotted. It is a picture in the background of the plot
But how did you generate it? Is it a crop from the scene video? Or the same stimulus that you displayed to the user?
The same that was displayed to the user
And you have adjusted the surface in Player/Cloud to match the stimulus size exactly, right?
Nope
Then this is the cause for the incorrect mapping, not the distortion.
Adjust the surface by adding the height and width of the stimulus?
Like adjust the height and width in surface tracker plugin?
The Apriltags were on-screen pictures. I only need a single surface which is generated automatically when the surface plugin is active. I don't need to move anything. In this case, the whole book is a surface
Could you share the stimulus overlay with us?
So the apriltag markers are part of the stimulus overlay?
Yes
Nope.... I'm so sorry. The whole stimulus is just a square or rectangle (like the book) with many distractors and targets on it. All the distractors and targets are AOIs. But I am not interested in what happens in a single AOI, only in the overall stimulus. The whole stimulus is a surface. This way we will have the gaze samples for both fixations and saccades in one thread and file.
I understand. The video was only about clarifying what I meant by adjusting the surface. I did not want to suggest that you had to create multiple surfaces.
What I would like to know is the relative position of the markers in relation to your stimulus. You can share it privately if you do not want to share it publicly.
Also, I want the dimensions of the whole stimulus to be (0,1) x (0,1), i.e. normalized. That is the reason for having a single surface.
This does not change if you adjust the surface. 0,0 will always be the bottom left corner and 1,1 the top right.
The whole screen space is the surface including the white space (around that book for example). This is because the participant might look at the edge of the book. The whole screen space is the stimulus
Ok, let me revise my previous question. Have you adjusted the surface corners such that they match your screen corners?
The surface automatically matches the screen. Maybe I should adjust it to the bezel. Note that the object (monitor) image is distorted.
The bezel should not be included if it is not part of the stimulus overlay.
Unfortunately, without having access to the data, I am not able to help you further. The surface tracker is able to compensate for distortion during the surface mapping process. From what I can tell, your setup should work fine.
Okay, I encountered another issue. I set up the Pupil Core eye tracker and calibrated it with PsychoPy, i.e. I put the calibration process in the experiment routine. When I run the experiment with PsychoPy, I get a new folder within the recordings directory (just like when a regular Pupil Capture recording has been made). But when I drag the folder into Pupil Player, there is no data available to analyze. E.g., the pupil csv file has all headers but contains no rows of data. The world video is also missing. Do I have to specify anything else before running the experiment?
Mmh, sounds like you were doing everything correctly. Could you please restart Player with default settings (see general settings menu) and try again?
This unfortunately did not change anything. The video is also 0 seconds long, which seems weird. Do I maybe have to use post-hoc data in this case?
"Do I maybe have to use post-hoc data in this case?" That should not be necessary.
looks like this
This sounds like the recording is started and stopped immediately.
I found the solution. It seems it's necessary to enable "sync timing with screen refresh" in PsychoPy. You learn something new every day, I guess. Thanks again for the very good customer support!
Thank you. Please contact info@pupil-labs.com with this information
Will do. Thank you.
@papr Hi! I would like to ask a question about time matching. I currently use two Pupil Labs cameras to capture eye images (already synchronized) without using a world camera, but their timestamp lengths differ substantially (9705 frames vs 8486 frames). May I ask how I can match the timing of those two videos offline? I also tried to show those two videos in Pupil Player and found the eye overlay (3825 frames) matched the two eye videos as I expected. I also checked the timestamps and, according to the image contents, two very close timestamps (<0.01 s) from the two eye cameras respectively do not match at all, but the image contents of two timestamps with a 0.3 s difference match well. Any ideas why this happens? Thanks so much in advance!
Yes, my apologies. I saw your question previously but have not been able to respond. Could you please clarify how you performed your tests? Do you have source code that you could share?
Thank you so much for your reply! I connected the two eye cameras (no world camera) using the same cable on Windows 10 and captured ~2 min videos. Then I tried to extract frames with the closest timestamps from both cameras (code attached below; I used the generated world camera data as a reference), but the results showed that images with the closest timestamps are not really synchronized, as shown in this video: https://drive.google.com/file/d/1X74DgyvAa-Z_M_ypf4jTvuF3Pf2tkLrp/view?usp=sharing. You can see that the head movement in the first few seconds has a clear delay in the left eye camera. I assumed that two very close pupil time timestamps (e.g. 607485.238510 and 607485.242093) from the two cameras are synchronized and do not need additional operations, so I only tried to find the closest timestamps. Not sure if my understanding is correct.
import os.path as osp  # imports added for completeness
import numpy as np

data_root = "/path/to/recording"  # recording directory (defined elsewhere in the original script)

world = osp.join(data_root, "world_timestamps.npy")
eye0 = osp.join(data_root, "eye0_timestamps.npy")
eye1 = osp.join(data_root, "eye1_timestamps.npy")

world_ts = np.load(world)
eye0_ts = np.load(eye0)
eye1_ts = np.load(eye1)

def find_closest(target, source):
    """Find indices of the closest `target` elements for the elements in `source`.
    `target` is assumed to be sorted (required by np.searchsorted).
    Result has the same shape as `source`.
    Implementation taken from:
    https://stackoverflow.com/questions/8914491/finding-the-nearest-value-and-return-the-index-of-array-in-python/8929827#8929827
    """
    target = np.asarray(target)  # fixes https://github.com/pupil-labs/pupil/issues/1439
    idx = np.searchsorted(target, source)
    idx = np.clip(idx, 1, len(target) - 1)
    left = target[idx - 1]
    right = target[idx]
    idx -= source - left < right - source
    return idx

# for every world timestamp, find the index of the temporally closest eye frame
sync_eye0 = find_closest(eye0_ts, world_ts)
sync_eye1 = find_closest(eye1_ts, world_ts)
A question about the new Core: our lab has been using an older version and recently received the newest model. Do the older programs need to be updated as well? Is there additional software that needs to be installed? Or will the data be output in the exact same format?
Depends on which version you have been using. I would always recommend running the latest version.
Hi, I need to know if I can use the Pupil Labs open source software with Gazepoint GP3 hardware. Thank you so much
Please see my response here https://discord.com/channels/285728493612957698/633564003846717444/975652666610974770
Hi @papr , is it possible to get a standalone post-hoc fixation detection python script? similar to this https://gist.github.com/papr/40ba7d99f572bc0fe388a81aa2f87424#file-extract_blinks-py
Yes, I will work on it this week
Here you go! https://gist.github.com/papr/f7d5fbf7809578e26e1cbb77fdb9ad06
Hi, all! I have run the pupil-video-backend on a Raspberry Pi 4B; I want to send Pupil Core's videos to Pupil Capture on a computer through the Pi. But I found that my frame rate only goes up to 30 fps, although on the computer my Core's frame rate can be up to 120 fps. How can I increase the transmission frame rate of the video backend? Is it related to the CPU performance of the Raspberry Pi? @papr
The pupil-video-backend uses a default frame rate of 30 Hz. Check out the -vp option to change the selected resolution and frame rate. Note: every resolution only supports a specific set of frame rates, e.g. you cannot run the 1080p resolution at 120 Hz.
Hi, thank you for sharing the details! So to clarify: The shared time matching code works for you in the sense that you get very close timestamps, but the issue is that the resulting images are not in sync, correct?
My guess is that your eye video frame identification code (the function that returns the eye video frame for a given timestamp) is not working as expected.
@papr I really appreciate your reply! Yes, I was able to get very close timestamps, but the resulting images are not in sync. Regarding the eye video frame identification code: since the eye video (eye0.mp4) and the corresponding timestamps (eye0_timestamps.npy) have the same length, I assume they match one-to-one. Am I correct? If not, could you tell me how to extract a frame according to a timestamp? One interesting thing I found is that when I drag the directory into Pupil Player, it visualizes the two eye videos in sync very well, with an apparent decrease in frames. As I have been stuck on this issue for several days and have a very close deadline, I would much appreciate it if you could reply at your earliest convenience. Thanks again!
@user-c8166b
"Regarding the eye video frame identification code, since the eye video (eye0.mp4) and the corresponding timestamps (eye0_timestamps.npy) have the same length, I assume they match one-to-one." Correct.
So you match frames based on indices? What tool do you use for decoding?
I currently use eye0 = np.load(osp.join(data_root, "eye0_timestamps.npy")) to read the .npy file, OpenCV's cap0 = cv2.VideoCapture('eye0.mp4') to read the .mp4, and cap0.set(cv2.CAP_PROP_POS_FRAMES, image_index); ret, img0 = cap0.read() to get one specific frame. Then I assume the specific frame (e.g. image_index = 0) matches the timestamp eye0[0].
OpenCV is known to be inaccurate regarding returning frames. Please use pyav instead. See https://discord.com/channels/285728493612957698/633564003846717444/974585476352720916
For information on how to seek in pyav see https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb (uses PTS instead of frame indices; this is how Player keeps the eye videos in sync)
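A minimal sketch with pyav (pip install av), assuming frames are decoded in the same order as the entries in eye0_timestamps.npy, which holds for Pupil Core eye videos:

import av
import numpy as np

eye0_ts = np.load("eye0_timestamps.npy")
container = av.open("eye0.mp4")
stream = container.streams.video[0]

for index, frame in enumerate(container.decode(stream)):
    img = frame.to_ndarray(format="bgr24")  # frame number `index` belongs to eye0_ts[index]
    # ... match eye0_ts[index] against the other eye's timestamps here ...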
Please contact info@pupil-labs.com in this regard
Thanks!
Hi @papr, thanks so much for your previous help! I have another quick question about 2d pupil detection. The output debug image looks good to me (I can see the pupil clearly highlighted in dark blue), but the confidence is 0.0 and the returned pupil location is a point in the top left corner. Any idea why this happens? I checked that the pupil size is within the range I set (40, 80). Thanks a lot!
This looks good indeed. Could you share the eye video and its timestamps with data@pupil-labs.com such that I can reproduce the issue?
In the meantime, please try running Capture with default settings
Hello everybody. I collected some data today using the Pupil Mobile app (which I learned will no longer be developed). I used the same smartphone and now have four data sets, three of which I can import into Pupil Player (Win 10). However, one import results in the error "[ERROR] libav.mp2: Header missing". I found some ideas on GitHub and tried deleting the "pupil_player_settings" folder and using an older version of Pupil Player (3.4). But as you might guess, neither approach worked. I appreciate any ideas on this issue so that I can import and analyze the data.
Hi 👋 This error indicates a corrupted file. Could you share the full error traceback? I might be able to tell you which file is affected.
In the gaze_positions export csv, there are normalized positions for x and y. Do any of the export fields represent the gaze positions of each eye independently (assuming we were able to record binocularly)? It seemed like the normalized x and y positions represented the joint position of the two eyes together. Also, how does the headset determine the z position of the 3d gaze point in the gaze_point_3d_z field?
Yes, by default, Pupil Core software tries to match left and right eye data and use the pair for an improved estimate. You can see if the gaze was mapped monocularly or binocularly in the base_data field. See https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv for details.
You can read more about the matching here: https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
And if you do not want to perform any matching, you can use the dual-monocular gazer plugin https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350
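Coming back to the base_data field: a minimal sketch of separating binocular from monocular samples in an export, assuming base_data contains space-separated "<pupil_timestamp>-<eye_id>" entries (one per pupil datum used):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
n_pupil_data = gaze.base_data.str.split().apply(len)
binocular = gaze[n_pupil_data == 2]  # matched left + right pair
monocular = gaze[n_pupil_data == 1]  # single-eye estimate
print(len(binocular), "binocular and", len(monocular), "monocular gaze samples")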
hi! in Pupil Player, is there a way to set trim marks by inputting a specific frame range instead of using the seekbar sliders? I can't seem to get the range to be as precise as I want by dragging
Yes, check out the general menu
Hello everyone. I have a question about the sampling rate. For an experiment, I record between 3 and 5 minutes of video to extract pupil diameter in order to analyse pupil dilation. The problem is that for 30 s I already have 12,000 data lines, and I would like to reduce the frequency in order to analyse longer recordings without getting thousands and thousands of data lines. I don't know if that's possible. Thank you a lot and have a great day
Hi, the easiest solution would be to reduce the video frame rate of the eye cameras. You can do that in the eye windows' Video Source menus.
Hello everyone. I am a new user to pupil labs. I am trying to use its detector codes to process my eye videos and generate 3d models of the pupils at each frame. Here is my code: https://colab.research.google.com/drive/10A9FLAARcV4zzwFZcN7kCl3uldSSaZxv?usp=sharing
Everything looks fine, except that the radius of my pupil model (a 3d circle) converges to 0.6 mm, which I guess is unreasonably small for a human pupil. Here is my video (eye1): https://drive.google.com/file/d/15SDR_M19eSlybe4Chs_5CoMshwRcRpNH/view?usp=sharing and here is the results I get by processing the eye1 video: https://drive.google.com/file/d/1-1YS-HgyE2QGGI2o7xXaP4fAV53PDlE9/view?usp=sharing
Weirdly, when I recorded the video of my other eye (eye0), I accidentally made the video upside down, and the radius converges to 1.5 mm, which I think is correct. However, after I tried to correct this by flipping each frame with the flipud function, I encountered the same problem: the radius now converges to 0.6 mm too. Here is my eye0 video: https://drive.google.com/file/d/1Z1PV5T3dVIH8Xg_zNDu9Y0GF3Wqzxp1x/view?usp=sharing Here are its results (for the original upside-down eye0 video): https://drive.google.com/file/d/1fHk62UO3Hd-Lyqco7WY_-TTUmJvGgGjO/view?usp=sharing
I thought this could be because I used the default camera intrinsics, but changing the intrinsics to my camera's (you can see this in my code) didn't improve or change the results at all. Could someone please take a look and tell me how to fix the problem? Any help will be greatly appreciated!
Hi and welcome! Is it possible that you are working with @user-c8166b? Your data looks very similar. 🙂
To get an accurate pupil size estimate, the eye model needs to be fit well. To check if this is the case, I highly recommend visualizing the projected_sphere field.
Flipping the image should not have any effect.
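As a minimal sketch of such a visualization, assuming result is the dict returned by the pye3d detector for one frame and that its projected_sphere entry carries center/axes/angle keys:

import cv2

def draw_eye_model(frame, result):
    # draw the projected eyeball outline returned by the detector onto the BGR eye image
    sphere = result["projected_sphere"]
    center = tuple(int(v) for v in sphere["center"])
    axes = tuple(int(v / 2) for v in sphere["axes"])  # cv2.ellipse expects half axes
    cv2.ellipse(frame, center, axes, sphere["angle"], 0, 360, (0, 255, 0), 2)
    return frame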
Hi there! I'm really hoping I'm not asking something that's already been asked, but as far as Ctrl+F is concerned, it doesn't seem like it. I'm running Pupil Capture on Windows with no problem at all; it works super fine, everything is cool and easy. But for experimental reasons, we need to switch to Linux to run other scripts in parallel. So far it worked perfectly, until today... Black screen... Nothing written. The window opens when the execution is launched, black screen, and then nothing. We tried running it as administrator through the terminal and saw this error:
world - [INFO] camera_intrinsics_estimation: Click show undistortion to verify camera intrinsics calibration.
world - [INFO] camera_intrinsics_estimation: Hint: Straight lines in the real world should be straigt in the image.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables/world.py", line 675, in world
  File "plugin.py", line 409, in init
  File "plugin.py", line 441, in add
  File "camera_intrinsics_estimation.py", line 119, in init_ui
NameError: name 'idx' is not defined
So, from there we tried EVERYTHING: running sudo pupil_capture, uninstalling the app, restarting the computer, re-installing Pupil Capture, updating the whole operating system, updating the installer software, updating Python to the latest version (as far as .py files are concerned), and so on... So, I don't know if you have any idea at all or if anyone has encountered such an issue. I'm sure it's something very stupid, because it basically stopped working overnight... I must be missing a very small detail.
Thanks in advance for your answer, if you have one... Any other posts on this forum containing words from this error message are about a different issue.
Oh and by the way, Pupil Player works perfectly fine with the recordings we did before Pupil Capture died 😦
Hi, this issue is specific to the camera intrinsics estimation plugin and caused if a previously selected monitor is no longer available. The issue will remain until you delete the user settings (rm ~/pupil_capture_settings/user_settings*).
May I ask why/what for you have been using the camera intrinsics estimation?
Okay, that's weird because I don't recall running the camera intrinsics estimation, but I did actually unplug a secondary monitor, so you must be totally right! I might have run the camera intrinsics estimation a long time ago when I was getting a handle on how this whole system works!
ok, just wanted to ask as there is usually no reason to run the camera intrinsics estimation. Good luck with your project 👍
And it actually worked, thank you very much !!
Hi there! I'm working with Pupil Core data and am having trouble loading one of our larger files (8.33 GB, about 65-70 min of recording) into Pupil Player. On my Mac it just crashes; on a PC it tries to load before going non-responsive. Any help/guidance on how to get the basic outputs out would be very helpful! Cheers in advance 🙂
Hey, Pupil Player was not designed to handle this kind of recording lengths 😕 What kind of data are you looking to extract?
Hello there, we are trying to compare the performance of Pupil Core with a static eye tracker, to see if we can replace the static eye tracker with pupil core. We have three example test cases in mind.
For the first test case, we want to compare the performance based on a screen. We use apriltag markers to define a surface in the camera FOV, and for the calibration and validation we use the routine provided in Pupil Capture. We got some good results in this test. My question here is: the fixation coordinates that we get after defining the surface have their origin at the bottom left corner of the defined surface, right?
For the second test case, we make use of a plane at a distance of 1.5 m from the test person and the camera. So here, for the calibration, we use the world marker. In the docs, it is mentioned that a spiral head motion is best for calibration. We are not quite sure what is meant by this: should the markers be placed at different positions so that a spiral motion results, or should the marker be viewed at a fixed position while moving the head? When we use the fisheye camera, the FOV is distorted. Is this accounted for while calculating the coordinates of fixations? Also, what happens when the test subject looks away from the defined surface and then back at it; is this a problem? Should the experiment ideally be done in a way that the plane is always visible to the test subject?
Can we get the surface normalized gaze positions using the live data streams?
Regarding the calibration procedure in the second experiment: the gaze points shown in Pupil Capture do not correspond with the positions of the markers on the plane. Is that because the old calibration is still in use, and only after a correct calibration can one expect this to match?
1) Surface-mapped gaze and fixations are in surface-normalized coordinates, which have their origin in the bottom left, yes.
2) The marker should be fixed and the subject should rotate their head such that the marker moves in a rough spiral within the scene camera's field of view. It does not need to be a perfect spiral; this is just the easiest way to sample a wide range of viewing angles. Distortion is compensated for in gaze mapping/fixations but not in marker detection. In other words, marker detection might work worse with high distortion. The calibration assumes that the subject is fixating/looking at the marker; if this is not the case, the calibration/gaze mapping will be less accurate. If you use surface tracking, the surface may leave the field of view.
3) Yes. See this network api example https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
4) Could you elaborate on the setup for the 2nd test case? Do you use surface mapping?
We define a plane at a distance of 1.5 m from the test subject. We have some fixed coordinates on the plane. The plane is also defined as a surface using the markers. Then we carry out the calibration. For calibration, we put the markers in 5 different positions, where our test subject looks at the calibration points. (We get an accuracy of about 3 degrees; I am unsure how this is calculated, could you comment on this too?) For the validation, we then ask the subject to look at 3 more known points on the plane. The discrepancy is however very big. We are unsure what the cause of this is. Is there a better routine for performing a test like this?
To calculate accuracy, the software collects the current calibration parameters, pupil data and targets (in normalized scene camera coordinates). The pupil data is mapped to gaze locations in scene camera coordinates. Both, gaze and target data, are undistorted and unprojected into 3d gaze directions. Then we match gaze and target data temporally and calculate their angular differences. In the end, we calculate the average difference.
Could you share an example recording that includes the calibration procedure with [email removed]? This way we could give more concrete feedback.
I also have another quick question.
Could you comment on the following test setup, where the test subject looks at three different monitors, each with some degree of rotation along the Z-axis?
I will send you the recording right away through e-mail
By rotation around the z-axis, are you referring to tilting heads left/right? If I understand you correctly, there is no need for three surfaces. One should be sufficient.
yes. That is what i meant. Thank you for your quick reply.
Hey. I am trying to figure out how the calibration works for the core. I have looked at the "notify" file, and can see that there is "binocular_model", "left_model", and "right_model" with a lot of parameters. How does the calibration work with these parameters?
What I ultimately want to do is reverse the calibration, to get “uncalibrated” data, or maybe be able to transform the data to a generic calibration. Is there any detailed info somewhere about how the calculations are done for getting the calibrated gaze data?
Also more generally, how is the eye modelled? I can see that there is 3D and 2D based gaze mapping. Am I correct to assume that 3D is a model-based approach? Is there for example any way to get the optical axis?
Thank you in advance.
Uncalibrated data corresponds to the "pupil" data. See https://docs.pupil-labs.com/core/terminology/#pupil-positions It is data in eye camera coordinates.
We have two pupil detectors: 2d and 3d. The latter relies on the output of the former. Capture records both. Regarding which gaze pipeline to use, read https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline
Hi all, I'm having an issue with my Pupil Core:
Pupil Capture will only show one eye window, and only one eye will measure confidence at a time. I can change the camera to the other eye, but only one will be used for eye tracking at a time. Pupil Service will show both eyes at a time happily; I can't seem to find how to open a second eye window in Pupil Capture.
I have tried reinstalling the Pupil Software, and the drivers for the Pupil Core, to no avail. Any help would be lovely 🙂
Problem solved! I used "Restart with default settings". Thanks to the user who mentioned it in the past :)
Hello. I'm new to this and I might have a stupid question, but I'll try. Every time I want to see the markers and the heatmap on the defined surface, I have a triangle in the center, and only there does a spot of red heatmap appear; on the rest of the screen it doesn't, and I don't know what to do about it. Can you help me? I want to see the heatmap on the whole defined surface, not only one spot.
Hi @user-26057c 👋. Could it be that the wearer mostly looked at this central point of the surface during recording? If they did one would expect the red to be located at that point. If you can share a screenshot that might help clarify things.
I'm talking about that triangle, and the whole recording is like that
Thanks for sharing the screenshot. The heatmap is essentially a 2d histogram of gaze points. I can see that gaze was also located in other regions of the phone's screen. What exactly is unexpected about your results?
The problem is that even if I stare at another corner of my phone screen for a minute, the red heatmap will still be up there where the triangle is. I think it would be normal to have some red spots there too, am I wrong?
In that case, it sounds like your accuracy is insufficient to capture whether gaze was really allocated to the phone's screen. You can quickly evaluate that by looking and pointing at icons, like in this video: https://drive.google.com/file/d/110vBnw8t1fhsUFf0z8N8DZMwlXdUCt6x/view?usp=sharing. Do they match? If not, you'll need to improve your accuracy.
Ok, thank you very much!
Hey! We are trying to use our Pupil Core on an M1 MacBook Pro. We managed to get the video through, but the following error keeps coming up when attempting to calibrate in Pupil Capture: gaze_mapping.gazer_base: Not sufficient pupil data available. We are unsure what is being done wrong. Has anybody experienced the same issue? Thanks!
Yes, I ran into that problem as well during a trial yesterday; we solved it by readjusting the minimum pupil diameter of the pupil detector
Hi @user-ec2bd2 👋. First thing to check is whether the eye cameras are positioned such that the pupils are clearly visible and well-detected by the software. Further details here: https://docs.pupil-labs.com/core/#_3-check-pupil-detection
Hey all, I’m conducting a study and want to analyse the number of fixations and their durations on a defined surface afterwards. After exporting the first participants I found two interesting data sheets “fixations_on_surface_Infotainment.csv” and “gaze_positions_on_surface_Infotainment.csv”. 1. What exactly is the difference between them? 2. Which one of them is better for my purpose? I believe it’s the fixations…csv, or am I wrong? 3. Is there already a python or R script or sth. similar to extract the number and duration of fixations on a surface? Thank you for your help! Kind regards, Lena
Hi 🙂 Gaze data are single gaze estimates, based on a pair of pupil positions. Fixations are a group of gaze points that lie close to each other. You can count the fixations by looking at the number of entries in fixations.csv. That file also includes the durations. The fixations_on_surface_X.csv file is only helpful if you are interested in the fixation location in surface coordinates.
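Regarding question 3, a minimal pandas sketch could look like this, assuming the usual columns of the surface export (on_surf, fixation_id, duration in milliseconds) and that the on-surface file lists each fixation once per scene video frame it spans:

import pandas as pd

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Infotainment.csv")
# keep only fixations that lie on the surface and count each fixation once
fix = fix[fix.on_surf].drop_duplicates(subset="fixation_id")

print("fixations on surface:", len(fix))
print("mean duration [ms]:", fix.duration.mean())
print("total duration [ms]:", fix.duration.sum())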
Hi;
I would like to calculate the heatmap for a given time interval in a recording. Hence, I need to somehow find out either how to trim for that specific time interval or a technique that helps out calculating the heatmap. Can you please provide any solutions regarding this problem?
Hello Buse,
First of all, I'm happy to see someone from ALKU here.
I can suggest a trick for what you ask in your question; I call it a trick because it only involves a few simple operations. The timeline of Pupil Player has trim marks at the start and end of your recording. Besides dragging them, you can also use the "relative time range to export" textbox in the general settings. With the Heatmap option enabled in the Surface Tracker, press the key E to export the recording for the defined time interval. After that, heatmap_surface.png can be found in the {record name}/exports/{export name}/surfaces/ path.
If anything else comes up, please feel free to message me.
May it be easy!
These steps refer to Pupil Player v3.4.0; your version may differ slightly, but I think you will adapt them in one sitting.
Thank you so much for your help
Hey all, we are setting up a study where we would like to use Pupil Invisible in our Unity application. Is there a Unity SDK for the Invisible product line, or is that only for the VR product?
Thanks!
Hey, we do not have a Unity integration for the Pupil Labs Realtime API. May I ask what kind of functionality you are looking for and how the glasses are used in your study?
Hey! We are setting up a study exploring real-time adaptation of gameplay elements based on psychophysiology. As such, we would love real-time gaze data from our Invisible glasses in Unity. We have experience with mapping gaze from remote eye trackers onto the images we expose in the experience. Since all our other data comes in through Unity, we would prefer that our eye tracking data did too, so we don't have to rely on a connection to a Python server etc.
Then, there are two conceptual difficulties: 1) Pupil Invisible requires the Android Companion app to estimate gaze in realtime. Therefore, you will always need a network connection to receive data in realtime. 2) Pupil Invisible is a head-mounted eye tracker and estimates gaze in its scene camera's coordinate system. You would need to transform these to your display coordinates, e.g. using marker mapping (in realtime).
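For reference, receiving Invisible gaze over the network from Python looks roughly like this (a minimal sketch using the pupil-labs-realtime-api package; a Unity client would have to implement the same network protocol itself):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()    # finds the Companion phone on the local network
gaze = device.receive_gaze_datum()
print(gaze.x, gaze.y)             # pixel coordinates in the scene camera image
device.close()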
Yes, for faster detection, the algorithm assumes a maximum duration. It is configurable via the ui.
Hi all. We are conducting a study in which we want to sync Pupil Core glasses with other events that are called in an experimental script run in Vizard (Python based). I have looked at the Pupil documentation for syncing between Pupil and other software, and it all seems to start with the need to import zmq. We are running our experiment on Windows and I'm finding it hard to import zmq. Any advice?
Hi, Vizard has a Pupil Labs integration https://docs.worldviz.com/vizard/latest/#PupilLabs.htm%3FTocPath%3DReference%7CInput%20Devices%7COther%20devices%7C_____7 but does not expose any of the timing functionality. But that means that they are shipping zmq already. I recommend contacting their support with your question.
Hello. In case you or anybody else is looking for another method of synchronizing data: we've been using multiple Pupil Labs devices for a long while and syncing the data using Lab Streaming Layer (LSL) https://github.com/sccn/labstreaminglayer. LSL allows for multiple streams of data using a Pupil Capture plugin that you can find on the GitHub page under the Apps folder. You simply need to edit the plugin to rename your streams so you can differentiate your two recording (Core) sources, then record at a centralized location over the network. This way you record the raw data, and even in post-processing Pupil Labs recordings work very well with LSL timestamps, because if the Pupil LSL Relay is enabled, all timestamps are LSL based, so the data from your two devices will be in the same temporal domain. Of course this complicates the data recording a bit, but it's worth it. We've been using LSL to temporally align over 20 streams of data, and it is very robust.
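On the receiving side, pulling the relayed gaze with pylsl looks roughly like this (a minimal sketch; the stream name below assumes the default outlet of the Pupil Capture LSL Relay, adjust it if you renamed your streams):

from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("name", "pupil_capture", timeout=10.0)  # adjust if you renamed the stream
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)  # timestamp is on the shared LSL clock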