hi. I can't get my Pupil Core to work. Grey screens only. Working on macOS Monterey 12.2. Downloaded the software from docs.pupil-labs.com today. Please help
Hi, due to new technical limitations, Pupil Capture and Pupil Service need to be started with administrator privileges to get access to the video camera feeds. To do that, copy the applications into your /Applications folder and run the corresponding command from the terminal:
- Pupil Capture: sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
Please see the release notes for details https://github.com/pupil-labs/pupil/releases/tag/v3.5
thanks, that worked. Do I have to do that every time when starting, or can I set some startup options?
Unfortunately, yes. :-/
also: only one camera works. Error message: Eye0 can't set value
In the video source menu, switch to manual camera selection. Please check how many cameras are listed there.
Hi Pupil Labs Team! I was wondering if there is an option to export the surface video like the preview you can see in the analysis tool. When I export the data, it only seems to export csv files; however, for further analysis it would be helpful for me to have direct access to the video file containing only the surface area. Thanks for your help in advance!
Hi, thank you for the help yesterday. Also, I have a problem with the calibration of Pupil Core. I have been following the steps on the web but I am not able to get it to work. What can I do?
Hi, what is the exact issue that you are facing?
I am doing the calibration with "Screen Marker" and with "Single Marker", but when I finish the calibration the gaze point on the screen isn't where I am looking. The eye cameras are detecting my pupils the whole time. I don't know why it doesn't work.
Do you see the blue circle around the eye ball? Does it move much?
The blue circle around the eye ball looks right; when I move my eye, it stays adapted to my eye ball.
In this case, I would like you to share an example Pupil Capture recording of you performing the calibration with [email removed]. After the review, we will be able to give more specific feedback.
Okay. I will send it as soon as possible. Thank you
Hello, I was working with the Pupil Core and now the world camera is not working. I only see a gray screen when I open Pupil Capture, and the computer doesn't even detect that camera. The others are working fine. Any advice?
"and even the computer doesn't detect that camera" - Are you referring to the device not being listed in Windows Device Manager? If that is the case, please write another email to info@pupil-labs.com in this regard. I have reviewed your shared recording and will follow up via email.
Okay thank you
I only found an old answer in the Discord from 2020 where it was not a feature yet. Has this changed by now?
Hi, this did not change.
Hi! I have a problem with Pupil Labs. I was doing a recording and suddenly Pupil Capture didn't work anymore.
In particular the world camera. The eye cameras work, though.
Hi! I am having trouble with opening my Pupil Core data through Pupil Player. When I drag the output file onto Pupil Player, the terminal error is regarding allow_pickle = False in the npyio.py file. I've gone through and changed allow_pickle = True and reinstalled with the new file, but I'm still receiving the error. Does anyone know a solution to having Pupil Player read the file with allow_pickle = True, or have an idea of what other scripts may be getting pulled in to create this error?
If I had to guess, one of our dependencies (numpy, np for short) is having trouble loading a specific file. Therefore, I do not believe that a piece of code is broken, but rather (parts of) your recording files.
Could you share the full error message, using the unmodified source code? This would allow me to confirm my suspicion.
Hi, Is it possible to add an external video stream (e.g. from a webcam) to a recording when using Pupil Capture?
Hi, the recommended workflow would be:
1. Record the webcam with an external recording software
2. Convert the recorded video file into a Player-compatible format using https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a
3. Then use this custom video-overlay plugin with the option to temporally adjust the overlayed video https://gist.github.com/papr/ad8298ccd1c955dfafb21c3cbce130c8 (use the transformed video from step 2 here)
Otherwise, Pupil Capture does not have the built-in possibility to record a 4th video source.
Alright, since I'm already using the network API to control the recording I could use that to start and synchronize the 4th video source
@papr hi! I have a problem with pupil, can you help me?
Probably, yes. What is the issue that you are encountering?
@papr Hi! I have a problem with Pupil Labs. I was doing a recording and suddenly Pupil Capture didn't work anymore. In particular, the world camera appears grey.
So it stopped working mid-way? Do the eye cameras continue working?
Yes
Eye cameras continue working
Hi, I have the same problem, eye cameras still working but the world camera view in the Pupil capture is gray. Did you solve the problem?
Please contact info@pupil-labs.com too
Please see the attached screen
One of the eye cameras on my pupil core device has stopped working properly. The image is so blurry. The reasons I can think of are either that the participant that I was recording took off the headset a little harshly, or that the experimenter has disinfected the headset and has put some gel on the camera. Is there anything I can do about this?
Yes, it looks like the lens got dirty. You could try cleaning it carefully with a microfiber cloth.
In the older 120 Hz models (Pupil Cam1), it is possible that they got out of focus. If you have such a model, you could try to refocus the eye cameras https://docs.pupil-labs.com/core/hardware/#focus-120hz-eye-camera
Hello, I have a question about Pupil data. Is there any problem about 3d_diameter when participant wear contact lens?
Hi! The distortion introduced by contact lenses should be negligible, so no problem.
Thank you for the quick reply.
Hey I have a question about the process after the ellipse detection. I have an Asian friend filling in as a subject for a study and when I check the pupil windows the pupils are detected almost perfectly for every frame. But somehow, despite that, the POG that is detected is terribly incorrect. Is there any way for me to correct this post-hoc? I believe there are some physiological averages that the software uses which might be messing the results up in this case. Is there any way to correct for this?
You can recalibrate using the post-hoc calibration feature in Pupil Player. You can also add a manual offset to the result.
Hey guys, I've recorded some goal-directed saccadic eye movements and tried to measure the kinematics (i.e. the main sequence) as a check of the eye-data quality. I've included only eye traces of the highest quality, with low noise levels and no blinks etc., as a benchmark for further recordings. I notice some curiosities in the data (see figure). 1) Max peak velocities saturate quite early (peak velocities do not surpass 300-350 degrees/second). 2) The gain of peak velocity x saccade duration seems to be 1, suggesting a peak velocity that is somehow also equal to the mean velocity (?). What I suspect is going on here is that a too-low and too-long estimate of my saccade peak velocity and duration distorts the gain, potentially due to over-filtering. However, these parameters have been extracted from unfiltered data (on my end, that is, so if anything it should be noisy with an overestimation of the peak velocity, right?). So I was wondering: is the data collected by the pupillabs/lsl plugin already being filtered with a cut-off frequency below 75 Hz? I feel the 200 Hz sampling frequency should have the potential to describe the full saccade kinematics. Note, the figure is produced from one participant; I'll collect data from others to make sure he isn't just 'slow'.
So I think I may underestimate saccade duration here, as I select onset and offset as the indices where velocities pass a threshold value (20 deg/s). If I 'correct' this and allow the onset and offset to move a few samples to the closest local minimum next to it (where the velocity ramps up or down before/after passing the threshold value), I do get slightly longer saccade durations and as such a gain of 1.2 - 1.7, yet I'm sure this is not the cleanest way of doing this and it may lead to an overestimation of saccade duration. Still my problem remains: why are the peak velocities of unfiltered saccadic eye movements recorded at 200 Hz via PL-LSL so low?
Here are some figures to show the quality of the recording. See the example saccade detection. Data was recorded at 200 Hz. Saccades are detected by passing the 20 deg/s velocity threshold, with corrected onset/offset (with a max shift of 5 samples either way), and then evaluated based on some parameters (such as a window of high confidence > 0.9, minimum/maximum saccade duration and amplitude). Only the cleanest examples are kept by the algorithm (I tried to be rigorous here) and then visually inspected to make sure they are of high standard.
Hi Jesse, happy to see that your research is making progress! Could you let us know which of the LSL-recorded data fields you use to calculate the velocity?
@user-0b16e3 can you have a look at the above? Could there be something in py3d that acts as a filter?
Hi, in LSL-recorded data, does anyone know how the 3d gaze data (e.g. in the streams gaze_point_3d_x/y/z) was converted into the normalized streams norm_pos_x/y? I tried a simple normalization equation on the 3d gaze data but that did not really match the normalized streams.
Hi, we use camera intrinsics to project the 3d point from the 3d camera coordinate system into the normalized image plane. See also https://docs.pupil-labs.com/core/terminology/#coordinate-system
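For illustration, here is a minimal sketch of that projection, assuming a simple pinhole model with made-up intrinsics and ignoring lens distortion (the shipped software uses the actual scene camera intrinsics, including distortion):

import numpy as np

def project_to_norm_pos(point_3d, K, frame_size=(1280, 720)):
    # Project a 3D point (scene camera coordinates, mm) into Pupil's
    # normalized image coordinates (origin bottom-left, range 0-1).
    x, y, z = point_3d
    u = K[0, 0] * (x / z) + K[0, 2]  # pinhole projection to pixel coordinates
    v = K[1, 1] * (y / z) + K[1, 2]
    width, height = frame_size
    return u / width, 1.0 - v / height  # flip y: image origin is top-left

K = np.array([[800.0, 0.0, 640.0],   # hypothetical camera matrix
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
print(project_to_norm_pos((10.0, -5.0, 500.0), K))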
Hello! Is it possible to run Pupil Core without running the program as an admin? Or is it necessary?
Hi, currently it is only necessary to do so on macOS. On Windows, once the drivers are installed, you can run the software without administrator rights.
Hi, I am getting this error when running pupil_capture.exe. Anyone know how to fix it?
Hi, could you please share the full log message context?
Thank you! Can you confirm that there was a pop-up asking for admin privileges?
The Chinese text in the log means "permission denied".
yes and I confirmed it
In this case, it is not clear to me why the permission was denied. Is this your personal PC or is it managed by your company/institution? One possibility I could think of is that there are additional security policies installed on the device, prohibiting the driver installation.
Strange, it is my own PC.
I thought it may be because of admin privileges. I tried to open it with admin but it showed this again. The software just doesn't detect cameras.
Yes, this is expected. The software checks for the existence of the drivers and will try to install them if they were not installed previously.
"Hi, currently it is only necessary to do so on macOS. On Windows, once the drivers are installed, you can run the software without administrator rights." @papr How do I check if the drivers are installed? I installed the program via the webpage, but it still asks me every time to run as an admin (Win 10)
Do you see the camera video feed being previewed in the application windows? Or are they grey?
They are just grey.
@user-619e31 could you please try following steps 1-7 of these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
This solves the problem! Thank you!
Okay thanks.
Hi Papr, Can you share the re-calibration steps please? I tried post-hoc re-calibration, but nothing has changed so why would this work? I am not sure. Do we need to change any physiological or other parameters? Also, there is no constant offset that I can remove using the manual offset step. It varies based on the gaze direction I believe.
Sounds like you got the process right already. With the post-hoc calibration, you are able to add/remove reference targets and thereby adjust the calibration. If you want, you can share the recording with [email removed] and I can have a look and give more concrete feedback. There are only very few physiological parameters in the pipeline that are modelled explicitly. In my experience, they get corrected for in the calibration optimisation procedure, and inaccurate gaze is caused by a different issue.
"Yes, this is expected. The software checks for the existence of the drivers and will try to install them if they were not installed previously." @papr Thanks!!
Do I understand it correctly that the id0 and id1 confidence parameters tell me how "good" the eye-tracking device is adjusted to the respective eye?
Yes, one could say that. Or more precisely: If the eye cameras are not adjusted correctly, confidence will be low. There are other reasons for low confidence though, e.g. during blinks.
"Yes, one could say that. Or more precisely: If the eye cameras are not adjusted correctly, confidence will be low. There are other reasons for low confidence though, e.g. during blinks." @papr thank you so much!!!!
I have a go/no-go task with four fractal images, during which participants should associate each of the four fractals with a decision, and all along I record pupil size. I need to make sure that the changes in pupil size come only from the mental workload and the learning process, and not from the fractals' differences in luminance, etc. What are the image characteristics I need to check for the fractals to be equally presented? Things like red, green and blue histograms? And a luminance histogram? Any references/papers about this?
Re colors see also https://files.cie.co.at/x046_2019/x046-PO195.pdf
Please also see what these authors did to avoid luminance-difference effects in their experiment https://www.researchgate.net/publication/335327223_Replicating_five_pupillometry_studies_of_Eckhard_Hess That might be difficult to transfer to your experiment, though, given the complexity of the images. See also Appendix B in particular.
Hi Papr, Thank you! Currently velocity is computed based on the norm_pos_x and norm_pos_y values (i.e. pl.Data(2,:) and pl.Data(3,:)). These traces are calibrated/mapped to known real-world coordinates (to a 0.1 degree precision) using a simple neural network. We asked participants to look at targets and press a button to indicate they fixate their gaze on these targets (15 different positions throughout the field of view). 200 ms of eye traces are then sampled, and we map the median of these values to the actual known world-fixed coordinates. The network has 2 inputs (x,y) and 2 outputs (norm(az), norm(el)), which we then scale to actual azimuth and elevation. The network has 1 hidden layer with a few units (3 or so). We then iteratively map each sample to azimuth/elevation, which perfectly predicts the location the participant is looking at. (As we do not use any memory in the network, it seems unlikely that any unwanted filtering may occur on this level.) We then compute a saccade amplitude, R, using Pythagoras (i.e. hypot(az,el)). From R we use the two-point central difference differentiation algorithm to compute the velocity.
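To illustrate the velocity computation described here, a minimal numpy sketch of the two-point central difference on the amplitude trace (az and el in degrees, t in seconds; all names are hypothetical):

import numpy as np

def central_difference_velocity(az, el, t):
    az, el, t = map(np.asarray, (az, el, t))
    r = np.hypot(az, el)                         # amplitude trace, as described above
    return (r[2:] - r[:-2]) / (t[2:] - t[:-2])   # velocity for samples 1..N-2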
Thank you for the details! Are you presenting the targets in a VR environment or on an external desktop screen?
Targets are being presented in the real world, each target is an LED on a sphere at exactly 1 meter away from the eyes. The participant's head is fixed to not allow for any translational movement.
Hi, depending on the display size you might have issues with within-image luminance differences. These will be difficult to get rid of. Also, different light spectra cause different pupil dilation [1]. You might be able to work around that by converting the images into greyscale images. But I do not know if your experiment allows that. Could you tell us a bit more about the details?
Thanks a lot!
Hi Papr, we tried to place 4 AprilTag markers at each corner of the screen to define it as a surface, as a means to get the gaze point on the screen, but the detection of some markers was not very stable. Is there any way to improve the marker detection in post-hoc analysis?
Unfortunately, there is not much one can do post-hoc. Typical issues include insufficient white border around the pixel grid.
Could you please replicate the same graphs (same filtering process) but, instead of using your custom norm-pos-to-az-el conversion, use the gaze_normal0/1_x/y/z values of the same samples that you had chosen previously? To calculate the velocity, you can either calculate the angular distance between two unit vectors (which yields the saccade amplitude directly) or use
import numpy as np

r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
el = np.arccos(y / r)  # elevation angle, here measured from the y-axis
az = np.arctan2(z, x)  # azimuth angle in the x-z plane
Note that the functions above yield radians, not degrees.
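A minimal sketch of the vector-based route, assuming unit-length gaze_normal samples as rows of an (N, 3) array and timestamps in seconds:

import numpy as np

def angular_velocity_deg(gaze_normals, timestamps):
    v = np.asarray(gaze_normals, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)              # enforce unit length
    dots = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))                       # angle between consecutive samples
    return angles / np.diff(timestamps)                        # deg/s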
Hi Papr, I feel like I'm doing something wrong. Are el and az supposed to be the same value here?
Sure, do you know which streams those are, i.e. their indices?
Re extracting header names from XDF files, see this pyxdf implementation: https://gist.github.com/papr/219929d26777711ad204acfe4bc5e2e4#file-xdf_convert-py-L124-L125
They should be part of your LSL recording already. These two 3d gaze vectors correspond to the 3d eye model orientation in scene camera coordinates. I recommend analysing the two vectors (left and right eye) separately. They should correlate strongly though.
Ok thank you. I'll post them once I've finalised the figures!
See this csv file re column indices. Note that the timestamp column is usually excluded when data is loaded from the xdf file. Btw, depending on your xdf file loader, you can also extract these column names from the file, avoiding the dependency on indices.
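For reference, a sketch of reading those labels with pyxdf, assuming the channel metadata was written into the stream header as in the linked gist (the file name is hypothetical and the exact nesting can vary between recordings):

import pyxdf

streams, header = pyxdf.load_xdf("recording.xdf")
for stream in streams:
    desc = stream["info"]["desc"][0]
    labels = [ch["label"][0] for ch in desc["channels"][0]["channel"]]
    print(stream["info"]["name"][0], labels)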
No. What values are you getting?
Oh nvm, the second stream contains only NaNs. I thought the data was plotting over each other.
Here is the full error message. I've tried multiple files and they all produce the same error. I realize that the system I am using is slightly modified, so I would like to allow pickles, even if that isn't conventional for the code.
I am not sure why your timestamp file would contain pickled data but the reason why changing the source code to allow pickled data did not work is that you are executing the bundled application. It is independent of the modified source. You need to run from source to see the effect.
What do you mean by "you tried different files"? Do you refer to recordings?
Yes, different recordings!
Could you please list the files of the recording folder that you are trying to open?
Hi, it's been a while since I was last on here. I have a few more questions about using the Core for my experiment. Specifically, I wanted to double check a few things about circle_3d_normal_[x,y,z].
1) Is circle_3d_normal on a linear scale? E.g. if the value changes from 0 to 0.1, is this the same amount of eye movement as if it changed from 0.9 to 1?
2) Are the values of circle_3d_normal_[x,y,z] extracted from each eye camera comparable to each other? For instance, if both eyes move by a circle_3d_normal value of 0.2, does this mean both eyes moved the same amount in a given direction (e.g. they both moved 2 deg)?
3) I guess this one depends on the answer to 1) and 2), but can the values of circle_3d_normal_[x,y,z] be used to calculate the angle between the two eyes (assuming eye gaze is not perfectly parallel)? For context, I currently use circle_3d_normal_[x,y,z] to calculate the angle between the position of the same eye at two different time points, but we would also like to calculate the angle between the position/direction of the two separate eyes at the same time point.
I also have a few questions about extracting direction information but they can wait for now. Many thanks!
Hi, circle_3d_normal is part of the uncalibrated pupil data (in eye camera coordinates). Re 2 and 3: You want to use gaze_normal0/1_x/y/z and eye_center0/1_x/y/z from the gaze data. The former corresponds to circle_3d_normal but in scene camera coordinates. Re 1: Yes, these are in an undistorted Euclidean space.
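As an illustration of question 3, a minimal sketch computing the angle between the two eyes' gaze directions at one time point, assuming both gaze_normal0/1 vectors come from the same gaze datum (the example vectors are made up):

import numpy as np

def angle_between_eyes_deg(gaze_normal0, gaze_normal1):
    a, b = np.asarray(gaze_normal0, float), np.asarray(gaze_normal1, float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(angle_between_eyes_deg([0.02, -0.01, 1.0], [-0.03, -0.01, 1.0]))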
Hi @papr Thanks for that. So to confirm, it's okay for me to use circle_3d_normal_[x,y,z] (taken from a frozen eye model) to calculate the angle between the position of the same eye at two different time points, but I can't use circle_3d_normal_[x,y,z] to calculate the angle between the position of the two different eyes? We have spoken about the former at length before as a way to calculate the angle of strabismus (eye turn), and you recommended using the uncalibrated circle_3d_normal_[x,y,z] eye camera coordinates (so long as the person's head does not move), since the strabismus prevents us from properly calibrating the system (and so we can't get the calibrated gaze_normal0/1_x/y/z). I have been using circle_3d_normal_[x,y,z] to calculate the strabismus angle for some time now and it seems to be working well.
Hello papr, I am running the post-hoc pupil detection in Pupil Player, but once I start this procedure the gaze point on the image disappears. Do you have any idea how to make the gaze point show again? Thanks
Running post-hoc pupil detection shouldn't make the gaze point disappear.
Is it possible you have selected Post-Hoc Gaze Calibration as the Gaze Data Source without running the post-hoc calibration?
Hello @papr Is it possible to get a spectrogram of the raw signal (possibly eye movements) recorded from Pupil Core? Otherwise, do you have a recommendation for obtaining the spectrogram of the raw signal?
Thanks, but after I clicked the "Detect References, Calculate All Calibrations and Mappings" button, there is still no gaze point showing up. I am wondering if there is anything I missed?
The first thing to check is that the calibration marker is clearly visible during the recorded calibration choreography, and subsequently, that the marker has been detected. You can check the marker detection progress in Pupil Player's timeline: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Hi @user-205d47. It would be helpful for us to learn more about your aims with regards to frequency analysis of the raw data. What exactly do you intend to do with a spectrogram?
Hi @nmt Thanks for getting back. I was wondering if you ever looked at the spectrum of your eye movement data? I am interested to find out whether Pupil Labs implements any pre-filter on the raw data that one gets from Pupil Capture, and what the spectrum looks like.
@papr Hi there, I've repeated the main sequence and "skewness" relationships for the same saccades I posted earlier, albeit computed from the gaze_normal0/1_x/y/z streams. Note that the left (blue) and right (pink) are pretty much the same. The correlation between the R (dimensionless movement) of both eyes is almost 1. Overall tendencies between both graphs from the different Pupil Labs-LSL data streams are very similar, I would say. Peak velocities remain low (even a bit lower than before), while ideally these should be near 750 deg/s for longer saccades. I've also included the relationship between acceleration duration and saccade duration as a function of amplitude and duration overall (blue and pink completely overlap).
Thank you Pablo for the explanations. I tried cleaning the lens and it's a little better now, but the red circle around the pupil keeps jumping all around and I guess it needs to be focused. I got these messages when I open Pupil Capture, and it seems that I have the Pupil Cam1, whose focus is adjustable, right? If so, can you please tell me how to adjust the focus? I didn't find the adjusting screw thing shown in the videos.
One recommendation was a lens pen, similar to this model https://www.foto-gregor.de/lenspen-micropro-nmcp-1-
Is there any CPU restriction for Pupil Player 3.5.8? Is the eye detection multi-threaded - i.e. can it use any more than 2 cores? I am trying to speed up my data analysis using a virtual machine on Linux and despite having lots of CPU, it appears even slower than on my desktop computer
Hi, the algorithm is fairly sequential which is why it is difficult to parallelize. Are you trying to use VMs to start multiple Player instances? I think you can do so without them.
Hi, we are running an image viewing task in Inquisit. The task will have components from which we don't need eye tracking data, but the AOIs that temporally appear about 1/3 of the time do need to be analyzed for AOI gaze time and blink rate. The Pupil Core recording will take a while to process (60 images per participant) if I use Pupil Player to individually find/export each image's recorded AOI surface gaze data. Is there a way to batch download a participant's AOI gaze data for all AOIs (60) at once?
Hi! Are you using the surface tracking feature already?
Thanks for sharing the log. It is important to look at the IDx suffix. 0 and 1 refer to the eye cameras. ID2 refers to the scene camera.
eye1 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID1.
eye0 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID0.
You have the 200 Hz non-adjustable-focus eye cameras. I will ask my colleagues for further cleaning tips.
Ah, so the Cam1 I saw in the log refers to the world camera, good to know!
Do I have to check for any size or material specifications so that the lens pen fits in the lens and properly cleans it?
The pen tip should be comparatively small, given the size of the lens. Otherwise, it is important that it cleans dry.
@user-98789c Otherwise, before purchasing such a lens pen, please contact info@pupil-labs.com so that we can set up a video call and a colleague from the hardware team can have a look at it.
Sure, thank you!
Hello! We are looking into purchasing the pupil core with the mounted scene camera and were wondering whether the calibration works with curved surfaces? We have a large parabolic screen (height x diameter: 3.5 x 3 m). Can the matching be done by the provided software or would we have to do it ourselves?
Hi @user-13a459. Pupil Core is a head-worn eye tracker and provides gaze in its world camera's coordinate system. This means the wearer can move freely in their environment. You can present the calibration marker on screen, or even use a physical hand-held marker. In order to use Pupil Core for screen-based work, the screen will need to be robustly located within the world camera's field of view, and Pupil Core's gaze data subsequently transformed from world camera-based coordinates to screen-based coordinates. We achieve this by adding April Tag markers to the screen and using the Surface Tracker plugin: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking Note that defined surfaces are assumed to be flat. A potential workaround for curved monitors would be to add multiple surfaces along the curvature.
Hello! I am trying to use the remote command "T {time}" to reset the current Pupil time. But after this command, I am not able to send another remote command like "t". The error reported is "ZMQError: Operation cannot be accomplished in current state". Can anyone help me with this?
Hi, looks like you forgot to call recv() on the socket. The Pupil Remote socket follows a REQ-REP pattern where you need to recv() for every send(). Please be also aware that we no longer recommend using the T command. See https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py as a replacement
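A minimal sketch of that REQ-REP pattern with Pupil Remote (50020 is the default port; this assumes Pupil Capture is running locally):

import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:50020")

socket.send_string("t")      # request the current Pupil time
print(socket.recv_string())  # every send must be paired with a recv

socket.send_string("R")      # start a recording
print(socket.recv_string())  # skipping this recv causes the ZMQError above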
@papr @user-92dca7 @nmt Hi guys, I wasn't sure whether you caught my previous post with the figures you asked for. If not, see above. Anyway, I hope to hear from you with regard to max saccadic velocities and whether there may be any process that acts as a (low-pass) filter of sorts. Kind regards, Jesse
Hey, pye3d has a mechanism to filter out camera pixel noise (see short-term model https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-estimates-eyeball-position-on-different-time-scales) but from our understanding, this should not have a large enough impact on saccades to result in 2-3x-lower-than-expected measurements. We will try to reproduce this internally.
@nmt Splitting the screen into flat regions might be sufficient for our needs and would save us some trouble. Either way, we will have to try it out in practice and see whether extrapolating rays from the eyes in the world view and projecting onto the curved surface would be necessary. Thanks a lot for the suggestion, I appreciate your help!
That's no problem at all. Note that once you have defined your surfaces using the Surface Tracker Plugin, the Plugin will automatically map gaze from scene camera coordinates to surface coordinates. You can download an example recording that contains surfaces data here: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing. Just load it into Pupil Player, our free desktop software (available here: https://docs.pupil-labs.com/core/, click 'Download Pupil Core Software'). Then, just enable the Surface Tracker plugin to see the surfaces and mapping results. You can also generate heatmaps.
Hello! Yes, I am surface tracking and using April Tags to define the computer screen, and post-hoc manually using the "add surface" feature to digitally define image AOIs. The image AOIs occur about every 12-19 s, then not again for 8-10 s, so there are parts of the recorded world video for which gaze time doesn't need to be analyzed. It's taking a long time to go through, define each "add surface", then export data for each. (I have 6000+ images lol) Is there a way to annotate and batch download the data for each participant's world video recording (100 images) without averaging out all the gaze time for that recording?
My recommendation would be: You can send annotations as part of your experiment to Capture. Set up only one AOI, export its gaze data, and use the stored annotations to trim the gaze data for each image.
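A rough sketch of that trimming step with pandas, assuming the export contains an annotations.csv (timestamp/label columns) and a surface gaze export with a gaze_timestamp column, and that each image was bracketed by hypothetical "<image>_start" / "<image>_end" annotations (file and label names will differ in your setup):

import pandas as pd

gaze = pd.read_csv("gaze_positions_on_surface_Screen.csv")
annotations = pd.read_csv("annotations.csv")

starts = annotations[annotations.label.str.endswith("_start")]
ends = annotations[annotations.label.str.endswith("_end")]

for (_, start), (_, end) in zip(starts.iterrows(), ends.iterrows()):
    image = start.label.replace("_start", "")
    mask = gaze.gaze_timestamp.between(start.timestamp, end.timestamp)
    gaze[mask].to_csv(f"gaze_{image}.csv", index=False)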
Right, thanks for reminding me!
"It's okay for me to use circle_3d_normal_[x,y,z] (taken from a frozen eye model) to calculate the angle between the position of the same eye at two different time points, but I can't use circle_3d_normal_[x,y,z] to calculate the angle between the position of the two different eyes?" Correct, if you want to look at absolute orientations! You can easily compare within-eye differences between models.
Nice! Good to know that still stands. I will stick to only comparing values taken from the same eye camera and won't compare values, or amount of change, between the two eye cameras. Thanks a lot
Hi @papr, thanks for coming back on this. I've potentially found the error already. However, I'm not sure I fully understand how LSL extracts and writes data from Pupil, particularly when recording with two eyes. When I look at the pupil-labs-lsl timestamps, they seem to be consistently separated 2.5 ms apart (as opposed to the expected 5 ms at 200 Hz). I assume this has to do with two eyes recording simultaneously and alternating samples representing the 2 eyes(?). But then how do I know which sample belongs to which eye? And are both eye cameras then really recording in opposite phase at exactly 2.5 ms intervals?
Mmh, LSL stores matched data and I think this is a result of how the matching works. See https://github.com/N-M-T/pupil-docs/commit/1dafe298565720a4bb7500a245abab7a6a2cd92f
Hi @papr @nmt I am using the VR add-on from Pupil Labs for measuring pupil dilation in a VR environment. The subject fixates on a red dot in the presence of a grey background. But we are having trouble getting clean and stable recordings. Pupil detection flickers (see attached image and link for the video) and yields low confidence. We also see a few reflections in the eye; we are not sure of the source yet, but could this be the reason for bad tracking, and do you have any suggestions on how to get clean and stable recordings from the VR add-on? https://filesender.surf.nl/?s=download&token=4c883088-d356-4e1c-986e-7be2bcd4cfe3
Hi @papr, what effect do the size and width parameters in the surface definition GUI have? What is the "real world size" mentioned in the documentation? Is it cm, m, inch, miles? Is it the real-world measurement, like measured with a ruler? Thanks a lot
Hi there! I have a question related to the setup of the software, which I think is just a problem due to the IT department and the restrictions we have... I installed the software and after a while it was blocked by the IT software called Carbon Black Cloud. The IT guys told me: "Please reach out to the software vendor and have them provide an anti-virus exclusion list for us to whitelist." Is this something that you can provide? Thanks for your help!
It is meant for scaling of the heatmap and gaze data. Gaze is mapped primarily to a normalized coordinate system. You can enter the most sensible unit, e.g. pixels for a screen. See also https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system
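A tiny sketch of that scaling, assuming the surface size was entered in centimeters (surface coordinates are normalized with the origin at the bottom-left; the numbers are made up):

def surface_norm_to_cm(x_norm, y_norm, width_cm, height_cm):
    return x_norm * width_cm, y_norm * height_cm

print(surface_norm_to_cm(0.25, 0.5, 120, 80))  # e.g. a surface defined as 120 cm x 80 cm -> (30.0, 40.0)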
thanks a lot, so in our real world setting we are currently tracking shelves, so centimeters would be an option? And if we are not at all interested in the heatmaps, would we have to set any values here or could we leave it at the default 1.0/1.0?
It would be helpful to know about the format of that list. If you could point us to corresponding documentation that would be helpful. I can look into it, too, but earliest next week.
This is what IT wrote back to me: "Carbon Black allows for file path, application executable and/or SHA256 file hashes as exclusions. Preferably the first two."
Thanks! Will check with IT and come back to you.
Hi @user-205d47. Thanks for sharing the video capture. It looks like contrast between the pupil and iris is low, which is hindering the pupil detection. Please see this message for steps you can take to improve pupil detection (for VR eye tracking, steps 2-5 are relevant): https://discord.com/channels/285728493612957698/285728493612957698/943090995917103144
That's great Neil. Thanks a lot for sharing these pointers, especially the contrast and ROI trim. I will try these out and hope it works. Yes, I am aware of the Best Practices and refer to them for any doubts.
Hi Neil. I tried a new bunch of recordings based on the points that you suggested and also went through the Best Practices. However, please see the attached video of poor tracking quality in VR. There are a few issues I see here and I hope you can help to clarify. First, there are a couple of reflection points in the eye, which I believe are from the infra-red camera itself. Could this hinder tracking? Second, the major limitation here is that the orientation of the camera is fixed, as compared to the Core device where you can adjust the eye camera to get the best possible eye/pupil tracking. Is there something that you recommend to solve this issue? We also have a Pupil Core and it tracked the pupil right away. Thanks a lot in advance.
@user-205d47 If you haven't already, I'd also recommend reading our Pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hello I am currently running an experiment where participants will view visual stimuli through a mirror stereoscope (a different image presented to each eye separately through each mirror). I will simultaneously track their eyes using Pupil Core. Since they will be wearing the glasses (and therefore the mirrors won't block the infrared light) does it matter what type of mirrors we have (hot vs cold mirrors)? Thank you!
good morning. Has anyone managed to open the mp4 files from Matlab on Mac? I only get errors
As I work on Mac, I cannot open many .mp4 files (as I think they are built for Windows or so). However, you can use free software such as HandBrake to convert them to a functioning .mp4 file. Then you can use VideoReader to import the data into Matlab.
or, alternatively, is there a way of reading the pupil dilation from matlab? Or is the only way to run the pupil_player first and then export the results to a csv? I only need the dilations, ideally directly from pupil_capture
@user-ed3a2a @user-027014 By default, Capture records mjpeg data and Quicktime Player does not know how to play that back properly. They should work correctly using VLC player. See also https://docs.pupil-labs.com/developer/core/recording-format/#video-player-compatibility
yes, I have VLC, that works, but I would like to measure pupil dilation during the experiment, not afterwards.
is there a way to 'remote control' pupil_player? No offence, but the documentation is somewhat lacking and hasn't been updated since 2016
It looks as if this file only works from already generated CSV files? I assume there is no Matlab solution?
The extract script generates a CSV file from the raw recorded data.
You can receive data in realtime using Matlab, check out https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Regarding extracting the pupil diameter from the recorded data: Technically that should be possible, but I do not have a turnkey solution for that. It might be easier to run the mentioned python script and reading the generated CSV file in Matlab.
thanks! A little more help please, as there is 0 documentation. I worked out how to start and stop a recording. How do I query the pupil size?
Could you please clarify which of the two solutions you are interested in?
Whichever works! Ideally online, via send and receive notifications.
"I worked out how to start and stop a recording." Are you using the linked Matlab helpers for that already? Or are you referring to manually controlling the recording via the UI?
matlab via zmq via the given 'pupil_remote_control' example.
Check out the filter_messages.m example. With it, you should be receiving pupil data without any changes. The script prints the "norm_pos" field in line 70:
[topic, note('norm_pos')] % print pupil norm_pos
You should be able to access the pupil diameter via note('diameter_3d')
Perfect! Then you are close to what you want.
Ideally I just need to extend that to query some other info
thanks, I will do that and come back to report if it works! Will take a few hours probably.
The pupil datum is documented here btw https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format
It seems to work so far, but another question in the meantime: every day or two pupil_capture stops working. The terminal throws 'world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.' I then have to reinstall the software from scratch. Any idea?
Reinstalling the software usually does not make a change. The session settings are stored outside of the installation file location. Also, the driver installation is checked every time Capture launches, not at installation time. So I am not sure how a reinstall can fix the issue.
On Windows, Windows Updates reset the drivers which could be a reason for the device to not be recognized after an update. Otherwise, there is only the possibility of a physical disconnect.
Are you running on Windows?
no, Mac
Ah, right, you mentioned that above, my bad. The argument about the reinstallation stays the same, with the difference that there is no driver installation needed on macOS.
Ok, point taken. I'm just having the situation again. Restarting doesn't help, same error.
Do they need to cool down or something between uses? Is Matlab changing anything?
Should we quickly jump into a video call via the Discord voice channel? Then I can check it out.
yes please
how?
You should see a channel called pupil-voice on the left. Just click on it and you will join the voice/video chat.
Hello everyone, I was trying to connect my Pupil eye tracker to LSL but it's not connecting. Does anyone know how to fix it? Thanks
Hi @user-e29c16. Could you please outline the steps you have already taken to set up your Pupil Core system with LSL?
Hi Neil, thanks for replying. Basically, I followed the steps in this blog post https://ws-dl.blogspot.com/2019/07/2019-07-15-lab-streaming-layer-lsl.html?m=1, and I ended up here without having any error, but once I go to install, there's no LabRecorder.exe file generated.
Thanks for clarifying. A large section of these instructions are related to building LSL (and its apps) from source. Depending on what you want to do with LSL, this might not be necessary. You can, for example, download LabRecorder.exe directly from here: https://github.com/labstreaminglayer/App-LabRecorder/releases/tag/v1.14.2
Yes, I was about to tell you that as well. I installed it, but the Pupil eye tracker is not linking to the LabRecorder. I was wondering how to link the eye tracker device so it shows up under "Record from streams"? Thank you.
Great. The next two steps are:
1. Install pylsl and copy or symlink it to the Pupil Capture plugin directory
2. Install our LSL Plugin to the Pupil Capture plugin directory
Detailed instructions available here: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#pupil-capture-lsl-relay-plugin
I also noticed that the tutorial you were following linked to an old version of Pupil Capture. Be sure to use the most recent version: https://github.com/pupil-labs/pupil/releases/tag/v3.5
Thanks, Neil, I did those steps as well, but it's still the same issue: the Pupil eye tracker is not linking with the LabRecorder, and once I press start there's an error: "Missing: Qset()"
Could you please share the capture.log file that can be found in the pupil_capture_settings folder? The most likely reason that the streams are not appearing is that the plugin is not working as intended.
Could anyone help me with this issue, please?
Hello
Please, what sampling rate do the eye cameras have? We calculated 140 Hz. Although the laptop is a Core i5, the memory is really low. We used a resolution of 400 x 400.
I can see that a sampling frequency of 200 Hz at 192x192 is in the spec. I chose 400 x 400.
The programs are installed into C:\Program Files (x86)\Pupil-Labs, more specifically
C:\Program Files (x86)\Pupil-Labs\Pupil <version>\Pupil <app name> <version>
e.g.
C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1
The executables are:
pupil_capture.exe
pupil_player.exe
pupil_service.exe
PupilDrvInst.exe
Thanks! Sending this to IT and will let you know their response.
Hello, I have downloaded the Core program from the website and when I plugged in the glasses, it says that the device is found but there is no image. Could you please help me? I am working on a Mac...
Please be aware that you need to run Pupil Capture with administrative privileges on the new macOS Monterey. Please see the release notes for details.
can somebody help me?
Hello, I am facing a couple of issues. (1) I am doing the conversion from Pupil timestamps to system timestamps. After the conversion, I can see that the system time is actually 3 hrs less than the exact system time of my computer. I am doing the whole conversion with the help of the tutorial provided on the Pupil Labs website.
(2) I am running an experiment where I need to send tags for every image presented on the screen (image start time, image end time, and image number). Can someone help me out and let me know how I can do that using the Pupil Labs eye tracker? (3) Last but not least, can we connect the Pupil Labs eye tracker with PsychoPy?
It would be really helpful if anyone could help me figure out the solutions. Thank you so much.
Hi, the conversion to system time will yield time in UTC+0. You are likely in a different time zone and need to adjust accordingly.
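A rough sketch of that conversion, assuming the recording's info.player.json contains start_time_system_s and start_time_synced_s (as in recent recording formats); converting to the local timezone avoids having to add a fixed 3-hour offset by hand:

import json
from datetime import datetime, timezone

with open("recording/info.player.json") as f:   # hypothetical recording path
    info = json.load(f)

# offset between Pupil time and Unix epoch time at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_local(pupil_ts):
    unix_ts = pupil_ts + offset
    return datetime.fromtimestamp(unix_ts, tz=timezone.utc).astimezone()  # local timezone

print(pupil_to_local(1234.5678))  # made-up Pupil timestamp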
You can send such tags via the annotation feature. https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
But with our new Psychopy integration this might not be necessary at all. https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html#pupil-labs-core
Good morning, Is there someone that can help me with my problem? I need guidance to know what I am doing wrong!
The maximum sampling rates are 200 Hz at 192x192 and 120 Hz at 400x400.
The real sampling rate might be lower due to insufficient CPU resources.
The gaze data (combined from both eyes) might have a higher sampling rate.
Thanks. So it is best to present the sampling rate we calculated from our data. We have ordered systems that have more memory and a higher sampling rate.
The result from our pilot study showed a sampling rate of 140 Hz at a resolution of 400 x 400.
Hi, thank you for sharing the video. As you can see, the 2d pupil detection (yellow circle) is still being fitted to the iris instead of the pupil. Please reduce the Pupil Max parameter in the 2D Pupil Detector settings until this no longer happens. You might need to reduce the Pupil Min value, too, but this is difficult to tell from the video.
Thanks @papr for your quick reply. I will try your suggestion today and update you. In the meantime, if you have any thoughts on the other 2 points I mentioned in my comment, please let me know.
Thank you for your answer. Could you please tell me where those release notes are?
https://github.com/pupil-labs/pupil/releases/tag/v3.5 See the macOS Monterey support section.
I will have to come back to you in this regard. I will ask my colleagues for best practices.
Thanks, that will be great. Looking forward to it.
The reflections only become a problem if they lie on top of the pupil border. In these cases, the confidence values will be lower than they usually would be, but the gaze estimation should not be affected much. Regarding the eye camera positioning: One option is to use the VR headset's built-in IPD-adjustment functionality. Based on the shared video, the positioning is ok, though. The eye ball is fairly centred.
Hello @papr I tried it again with your suggestions by changing the pupil min/max, intensity, exposure etc. Unfortunately, nothing has helped so far. The tracking is better for darker eyes in general. The attached video is for a blue-eyed subject. Do you have suggestions for how it can be improved? @nmt also pointed out a few issues and we have also tried adjusting the ROI, exposure etc.
Hello Papr, thanks for replying, Here is the capture.log file.
Looks like the plugin is not loading due to one of my recent changes. Sorry for that! Do you have the possibility to test a possible fix in a few minutes?
Sure, tell me what to do.
@user-e29c16 could you please replace the outlet.py file with this one? Afterward, start Capture. The LSL Relay plugin should be listed in the plugin manager. Enabling it should cause the stream to appear in the LSL Recorder app.
Thank you so much papr, now it's working.
Great, thanks for testing! I will release the fix now
Sure, Thanks.
Hi papr, after running a few trials there's an issue with the cameras: if I turn on both eye0 and eye1, the main camera gets disconnected; if I turn off eye0 and eye1, the main camera gets activated again. Could you help, please? Thank you.
And just to confirm, are you using a Pupil Core headset or custom cameras?
I do not follow. Which cameras are you turning off? And how do you do that (via software or by disconnecting them physically)?
So, first I start Capture and everything works smoothly. Then I tried enabling eye0 and eye1; once I enable them, the main cam disconnects. Yes, I'm using a Pupil Core headset.
Now it keeps saying: WORLD: Camera disconnected. Reconnecting...
Please try "restart with default settings". Both eye windows should appear and all three cameras should work as expected.
Yes, once I hit restart with default settings my computer restarted too, and now it's working. Thanks again for your help.
Would it be possible for you to share a raw 1-minute-long Pupil Capture recording with this exposure setting to [email removed]? Having access to the raw eye video data allows me to give more concrete recommendations regarding the parameters.
Sure, I will send the raw recordings to the email.
Could anyone help with reading the data recorded from the LabRecorder using Matlab? Thanks.
Check out https://github.com/xdf-modules/xdf-Matlab/tree/master
Thanks for replying, I tried that one and I gave the correct path but after trying to load, it gives me an error like this "Unrecognized function or variable 'load_xdf'."
Unfortunately, I do not have access to a Matlab instance. From that error message, it looks like 1) addpath was not called or 2) it was called with an incorrect path (see the repo's README for reference). If you do not think that this was the issue, please reach out to the author.
This is the file I'm trying to load.
The issue is not related to this file.
Thank you so much again, I'll check with that.
Hi @papr I currently have a Pupil headset and am trying to use it with a ZED stereo camera. I know that Pupil Capture can time-sync all the nodes in one Pupil Group automatically. I tried to open the ZED camera with Pupil Capture and create a Pupil Capture instance, but it failed.
Please be aware that Pupil Capture only supports cameras out-of-the-box that fulfil the requirements listed in this message: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Before you start writing your own backend, could you let us know a bit more about what you want to achieve? Does your Pupil Core headset have a built-in scene camera or are you planning on replacing it with the ZED camera? In case of the former, do you only need time synchonization between the Pupil Core data and the ZED video stream?
Short note regarding Pupil Groups and time sync: Pupil Groups does not perform time sync. There is a dedicated plugin to enable time sync between Pupil Capture instances. See also this document for current best-practices on synchronizing time with external data streams https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
Hi @user-5fb1ee Have you already created a video backend for the ZED camera? ZED cam has a python lib from what I can see already. IIRC you will need to build a plugin that inherits from the video capture base plugin: https://github.com/pupil-labs/pupil/blob/cd0662570a1a495c42b0185ed4b28630d1ba70ee/pupil_src/shared_modules/video_capture/base_backend.py ( @papr please feel free to elaborate when you are online )
Hi again! I would like to add reflective markers for motion tracking to the pupil-core frame. Do you have any experience with certain shapes? 3D prints? suggestions? Thanks!
Hi @user-20283e, no personal experience here with reflectors for mocap. For head pose, have you considered: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking - for integration with other mocap systems you might want to check out @nmt's repo: https://github.com/N-M-T/Pupil-Labs-Mocap-Trigger
@papr Hi! I want to make an app that shows gaze points in real time. I wonder if this can be done with MATLAB.
Hey @user-8a20ba - have you already looked at: https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L58 ?
Hello all,
I noticed that the eye camera image for the left eye is blurry. However, the focus wheel is glued in the case of the Pupil Core.
Is it possible that the focus adjustment wheel moved somehow? No matter how I adjust the eye camera, the confidence of the left eye is always a bit worse.
Can anyone help me with this? Look at eye 1 - the lashes.
Kind regards,
Cobe
Hi, would you be able to share a one-minute recording of you looking into different directions while keeping the head still? If so, please send it to [email removed]. I would like to confirm that the blur is really the cause of the lower confidence. In our experience, it is more likely that there is a different reason. It is not possible to adjust the focus of the 200 Hz eye cameras.
While the eye model looks well fit already, I would generally recommend rotating the eye cameras slightly more outward so that the eye ball center lies further in the eye cams' fields of view. This can be helpful to avoid clipped pupil images when looking to the far right or left.
Hi, are there any requirements for downloading the Pupil Core software? Like how much RAM etc. is needed?
The required RAM is difficult to estimate as this heavily depends on the recording length and the detection features being used.
And is there a recommended software for analyzing the Data?
pupil player is pretty great...
it gives you all the csv you will need
"The required RAM is difficult to estimate as this heavily depends on the recording length and the detection features being used." @papr Thanks! So there are no specific requirements that my device should have?
For Pupil Capture, a fast CPU is most important (>2.8 GHz Intel Core i5). For Player, RAM is most critical. We recommend 8GB or more.
"For Pupil Capture, a fast CPU is most important (>2.8 GHz Intel Core i5). For Player, RAM is most critical. We recommend 8GB or more." @papr
On the website it is written that, in order to properly adjust the eye tracker, a green circle should appear in the eye camera. However, only a light or dark blue circle and a smaller red one with a red dot appear. Where did I go wrong?
Let me update that accordingly.
The newer software version uses blue instead of green colors. You should aim for a dark blue model, together with the other points listed in that docs section
Thank you so much, this is incredibly helpful! Is there any way to get the amount of eye movements made (so saccades, essentially) and the pupil diameter? Or do I have to import a plugin for that?
Pupil diameter is calculated by default. Saccade detection is not available.
Hi Papr, I just did some recordings using LabRecorder but it seems like none of them recorded any numerical data. After installing the EEG toolkit in Matlab I tried to read them, but when I checked the cells there's nothing. Could you help me with this? I'll attach the .xdf file; could you tell me if it is ok or if there is anything missing from the file? Thank you.
I have confirmed that the file contains the two LSL streams (gaze and fixations) but did not receive any data. It is very likely that you just need to run the calibration to get the setup working.
Just to confirm, did you run a calibration prior to the recording? The outlet uses gaze data which requires a calibration in Pupil Capture.
I just want to make sure the data is recording properly from the LabRecorder and saving as a .xdf file without missing anything?
Can anyone verify this for me, please? Thank you.
Has anyone encountered a "moov atom not found" error when trying to view their world.mp4? Does anyone know how to troubleshoot this other than trying to repair the file post-process?
I have tried to reboot my Android device but still face this issue.
Hi, thanks for replying. I just used the default settings and I didn't change anything. If I have to change something before recording, what should I change?
You do not need to change anything. You just need to follow the normal getting started procedure, connecting your headset, adjusting the eye cameras for good pupil detection and perform a calibration. See steps 3 and 4 of https://docs.pupil-labs.com/core/
I'm having the issue that fixations are not detected with the respective plugin. Although pupil confidence is almost 1.0 for both eyes, and I am consciously fixating a spot for a few seconds, the plugin doesn't detect any fixation. What could be the mistake?
The detection can take a moment to complete. On the right, you should see a progress indicator when the detection is performed. Can you confirm that? To reinit the detection, change the gaze source temporarily from "Gaze from recording" to "post-hoc calibration" (or vice versa)
"The detection can take a moment to complete. On the right, you should see a progress indicator when the detection is performed. Can you confirm that? To reinit the detection, change the gaze source temporarily from "Gaze from recording" to "post-hoc calibration" (or vice versa)" @papr Wow, thanks! My mistake was using the post-hoc option and not the recorded one. Thank you so much!
Just to clarify: Using the post-hoc calibration can be a valid option! Just make sure to set it up and calculate some gaze first. Without gaze data, the fixation detector won't have any input.
"Just to clarify: Using the post-hoc calibration can be a valid option! Just make sure to set it up and calculate some gaze first. Without gaze data, the fixation detector won't have any input." @papr Makes sense! I will try some options to work things out!
Does anyone have any idea why my world camera is blurry?
check the lens - focus or dirt on surface/inside lens
Please see https://docs.pupil-labs.com/core/hardware/#focus-world-camera
I'm just getting this message.
@papr Hi, I would like to know the unit of the "gaze_point_3d_xyz" in "gaze_positions", is it in "mm" or "pixel"?
Many Thanks~
It is mm. See "3D Camera Space" for reference https://docs.pupil-labs.com/core/terminology/#coordinate-system
Hi all. I tested the calibration several times in 3D mode and 2D mode, and every time we had better results with 2D.
yes, the 2d mode can be more accurate after calibration but is more prone to slippage. This makes 3d mapping the better choice in many cases.
Is 2D preferable? Or could it be related to using an older (2.3.0) version of the software?
(I have data uploaded if needed)
We made many tests and 3D (on 2.3.0) is always worse.
You might also want to consider testing a newer release. We have updated the 3d model with the Pupil Core 3.x software release.
That might very well be. Although, one might also need to consider gaze accuracy after wearing the headset for 5/10/15 minutes. There is a lot of natural slippage that the 2d pipeline can not compensate for. If you only plan to record 20 seconds after a fresh calibration (without slippage), then the 2D pipeline is the better choice. As described in the Best Practices: the choice is use case dependent.
I also wanted to know if surface tracking data is sent via the network api if the plugin is enabled
Yes, see this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py
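For readers who prefer a condensed view, here is a minimal sketch of the same idea, assuming Pupil Capture runs locally with the default Pupil Remote port (50020) and the Surface Tracker plugin is enabled. The field names are as I recall them from the Surface Tracker docs, so double-check against the linked helper script:

    import zmq
    import msgpack

    ctx = zmq.Context()

    # Ask Pupil Remote for the SUB port of the IPC backbone
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # Subscribe to surface-mapped data published by the Surface Tracker
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
    subscriber.subscribe("surfaces.")

    while True:
        topic, payload = subscriber.recv_multipart()
        surface_datum = msgpack.loads(payload)
        # gaze mapped onto the surface, in surface-normalized coordinates
        for gaze in surface_datum.get("gaze_on_surfaces", []):
            print(topic, gaze["norm_pos"], gaze["confidence"])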
are there any network api changes which can cause backward compatibility issues?
yes, see the "Network API Changes" section in https://github.com/pupil-labs/pupil/releases/v3.0 You are most likely affected by "Changed 3D pupil datum format" and "Binocular 3D gaze data with string-only keys". But these changes can be adapted very quickly.
Thank you. I hope we'll be able to adapt without many issues to latest releases.
I've been having a lot of trouble with my Pupil Core picking up the right eye's pupil. It seems to be consistently bad across participants. I'm not sure what else I can do besides adjusting the 2D settings and resetting the 3D model every now and then. Any help would be appreciated.
Have these issues appeared just recently? Could you share an example picture of the right eye? Does it look noticeably different from the left eye?
Hello, is it possible to calibrate one eye at a time? I'm using an older version of Pupil Capture. Do you simply just close one of the cameras?
Hi, new calibrations overwrite each other, i.e. with your approach only one eye at a time could be calibrated. Could you clarify the requirements a bit further? 1) Do you have one or two target coordinate systems (example: single scene camera vs dual-display hmd)? 2) Are your subjects to fixate the gaze target(s) with both eyes simultaneously?
There are some custom calibration choreographies + algorithms but they all require Pupil Capture 2.0 or newer.
Or, is it sufficient to just calibrate, separately on a single monitor?
We are using a stereoscopic display. So the observer views two monitors through two mirrors: the right mirror feeds the display of the right monitor to the right eye and the left mirror feeds the display of the left monitor to the left eye. We are using both left and right eye cameras (and one world camera). The participant is positioned in front of the mirror set up (centered between them). I'm not sure which is the best way to calibrate given this setup. What would you suggest? I was also having trouble calibrating viewing the images through the mirror. The calibration wouldn't start unless I was looking directly at the monitor (as opposed to through the mirror). Thanks again for all your help. I've attached an example of our display setup (taken from erkelens et al.)
Hi, I am currently using the Pupil Core device and I would like to prototype wireless use by sending the video of the three cameras to a computer. Is it possible to do the pupil tracking on a recorded video using Pupil Capture or other software?
Hi, yes, Pupil Player supports post-hoc pupil detection and calibration given a Pupil Player compatible format. See https://docs.pupil-labs.com/developer/core/recording-format/ and this example script that can convert an externally recorded video into a Player compatible format https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a (please be aware that this does not yet solve your temporal alignment challenge for the three videos)
These issues have been happening for about a year (maybe longer). The issue is that the algorithm just seems to be having a ton of trouble locating the pupil, especially when the eye moves towards the nose bridge. I can get an image, but I'll have to get one of my research associates to do it as I don't have access to the hardware atm.
This sounds like the pupil rotates away from the eye cameras, making it difficult to detect. Have you tried the orange eye camera arm extenders? They might position the eye cameras further towards the nose, getting a better view of the pupil.
Hey, is there any way I can get the specs on the camera module of the VR/AR add-on? I'm writing a research paper on it.
Please see https://pupil-labs.com/products/vr-ar/tech-specs/
Hi again, has anyone ever used the Pupil Core device without the USB wire? I am trying to mount it on a HoloLens 2 and I would need my user to be able to move anywhere without a cable. I thought that I could also mount a Raspberry Pi on the headset to do the pupil tracking with Pupil Capture on the Raspberry Pi and then send it to my computer, yet if someone has a better solution I would be glad to hear it. Thank you.
Please see this project https://github.com/Lifestohack/pupil-video-backend The RPi is not powerful enough to run Capture but it can stream the video to another computer running the app.
Also, is it possible to order only the connection wire (no cameras, no frame)? I already have a Pupil Core device.
I think we can make this happen. Please send an email to info@pupil-labs.com
Yes, I only want to achieve time synchronization between the Pupil and the zed video stream
Great! In this case, you can use a small external script instead of investing the time to build a custom backend. See this example for reference https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py You might want to use the Camera.get_timestamp function as your corresponding client clock https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Camera.html#af18a2528093f7d4e5515b96e6be989d0 together with TIME_REFERENCE.CURRENT https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1TIME__REFERENCE.html (a rough sketch follows below).
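A rough sketch of that approach, assuming Pupil Remote is reachable on its default port. The ZED-specific call is only indicated in a comment, since the exact pyzed accessors should be taken from the Stereolabs docs linked above:

    import time
    import zmq

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")

    def local_clock():
        # Stand-in for the ZED clock; in practice you would return
        # cam.get_timestamp(sl.TIME_REFERENCE.CURRENT) converted to seconds.
        return time.monotonic()

    def pupil_time():
        pupil_remote.send_string("t")  # ask Pupil Remote for the current Pupil time
        return float(pupil_remote.recv_string())

    # Estimate the clock offset, compensating for half the request round trip
    t0 = local_clock()
    t_pupil = pupil_time()
    t1 = local_clock()
    offset = t_pupil - (t0 + t1) / 2.0

    # Any later local timestamp can now be mapped into Pupil time:
    # pupil_ts = local_ts + offset
    print("local clock -> Pupil time offset:", offset)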
Hello, I have the Pupil Core product with a world camera and an eye0 camera. I am trying to use Pupil Core with a Raspberry Pi. The Raspberry Pi is just for streaming the video over WiFi to another PC where Pupil Capture is running. Is there a way or an example project for this? I tried Pupil Video Backend, but I could not run the program; somehow it crashes. Any help will be very beneficial for me. Thanks in advance.
The mentioned project is the way to go in this regard. I do not know a better solution than this project. Please contact the author regarding the issue.
Hello, I bought Pupil Core. Where can I check the serial number?
Hi, Pupil Core does not have a specific serial number, but the hardware gets assigned to the related order ID (e.g. pupil_w120_e200b_2022031609XXXX).
Do you think a Raspberry Pi could handle the pupil tracking?
90% sure that it is too weak, at least for higher frame rates
Okay, maybe sending the video to a computer through the Raspberry Pi might be a better idea?
Yes, that is what the linked project is for :)
Thanks a lot !
then this is for you https://gist.github.com/papr/fcbafd5cf748c9b11e64a4dd37ec8e9a
Thank you! Do these plugins work with older versions of Pupil Capture? We are currently using a much older version (1.14.9).
@user-ced35b the idea is that the two screens replace the scene/world camera with two artificial scene coordinate systems. That is what these plugins are all about. Yes, you can calibrate using the world camera but it won't be any use to you.
No, 2.0 or newer is required.
haha, we are using software that records EEG and presents our stimulus that is pretty old and doesn't allow us to update our OS. Do you have any other suggestions to calibrate through these mirrors?
what is your reason for using this old/ancient version?
Would it not work to calibrate on a separate monitor that has the same viewing distance?
The default calibration method assumes binocular vision. I do not know if that would be ok for you. Can you tell us more about what you are trying to measure? And how about running Capture on a separate computer?
I was worried about temporally synchronizing three systems (the computer running Capture, the computer presenting the stimulus, and our EEG system), since our EEG is now too old to implement LSL or any updated synchronizing tools. But you're right, I think that's our best bet. We are just measuring gaze position and pupil dilation while the participant views two distinct images through the mirror stereoscope. They are just fixating the center of the mirror setup; this leads to perceptual dominance of one of the two images.
Hello. I have access to a pair of the VR/AR goggles, but I just want to use them for eye tracking on a computer monitor, and the world cam is giving me some issues. I have gotten it to work by attaching the goggles to a pair of glasses and using an old webcam as the world cam, but this is a very awkward setup. Do you have any ideas as to how I could spoof my monitor output as the world cam, or how to circumvent the world cam altogether? Again, for my use I only need to track gaze on a screen with no head movement. Thanks in advance.
I think assuming binocular vision is ok. I'm just not sure what the best way is to calibrate through the mirrors on the two separate monitors
Is it possible for you to display the calibration marker window on each of the displays at the same time?
Hi Papr, because of your help I finally got the data, thank you so much! So in Gaze we have 22 channels (data points) for each time point, and in Fixations we have 7 channels for each time point. I was wondering if you could explain the difference between the gaze and fixations data streams and also what each of these data points represents. Thank you so much again for helping me understand.
The xdf file contains meta data with names for each of these channels. Depending on your xdf loading software, you should be able to extract these. For the meaning of the gaze channel names see https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
For the fixation channel names see https://docs.pupil-labs.com/core/software/pupil-player/#fixation-export
Generally, gaze is an estimated viewing direction based on one or two eye images. Fixations are based on a series of gaze estimates which do not deviate by more than a specific threshold during a specified time, i.e. when the subject is fixating an object.
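To make that idea concrete, here is a toy dispersion-based detector. It is not the Pupil Player implementation (which works on angular dispersion of the deserialized gaze data); the threshold values are made up for illustration:

    def detect_fixations(gaze, max_dispersion=0.02, min_duration=0.1):
        """gaze: list of (timestamp, x, y) tuples in normalized coordinates."""
        fixations = []
        start = 0
        for end in range(len(gaze)):
            xs = [g[1] for g in gaze[start:end + 1]]
            ys = [g[2] for g in gaze[start:end + 1]]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                # close the current candidate window at the previous sample
                duration = gaze[end - 1][0] - gaze[start][0]
                if end - 1 > start and duration >= min_duration:
                    fixations.append((gaze[start][0], duration))
                start = end  # begin a new candidate window
        return fixations  # note: the trailing window is ignored for brevity

    # two stable gaze clusters -> roughly one detected fixation
    samples = [(t * 0.01, 0.5, 0.5) for t in range(30)]
    samples += [(0.3 + t * 0.01, 0.8, 0.2) for t in range(30)]
    print(detect_fixations(samples))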
hi~ I have a question about the code in pye3d/base.py in the image. Why does the right gaze_2d point towards the sphere center? I found that it's stated in the paper that we should disambiguate the right gaze vector to be the one pointing away from the eye ball center.
Hi @user-371233. This should be possible, but there are certain constraints.
In order to understand where the wearer is looking, it's necessary to calibrate the eye tracker. During calibration, the wearer fixates or follows a known target (reference location) while Pupil Capture records and matches pupil data to these locations. Normally, these reference locations can be displayed on the monitor and picked up by the scene camera.
With no scene camera, you could write a custom calibration plugin for Pupil Capture that collects reference data from the system that also drives the monitor. Important to note: The monitor would need to have a fixed spatial relationship with the wearer in order for the calibration to be accurate, like a head-mounted VR system.
An alternative approach to make the system less cumbersome might be to de-case your webcam. Then you can use the default calibration, and subsequently April Tag markers to map gaze onto your screen: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Thank you so much for the answer. The April tags are definitely useful. I am trying to look at the calibration. But I cannot find a "calibration" file in the recording, or find calibration values in any of the files. Where is the calibration for the user saved?
Hello, will the HTC Vive binocular add on work with these virtual reality glasses? https://www.shinecon.com/vr-glasses/vr-glasses-for-mobile-phone/vr-shinecon-hot-seller-vr-headsets-vr-glasses.html
Hi, thanks for your question. I discussed it with @user-92dca7 and you are right: the correct 2D gaze vector, v, has to point in the opposite direction of the line from its 2D starting point, p, to the estimated 2D sphere center, c. Mathematically, that means v*(c-p) < 0. Note that there is a sign typo in the original Swirski paper. In our code, we are effectively testing for an equivalent condition, namely v*(p-c) > 0. For this, note that when dot_products < 0, we use the 2D gaze line which was NOT used to calculate dot_products in the first place, but the other one (np.where(dot_products, 1, 0)).
Thanks for the reply! Do you mean that you will correct the code to np.where(dot_products, 1, 0)?
That depends on whether the VR headset has the same geometry as the HTC Vive. The add-on clips on https://pupil-labs.com/_nuxt/img/htcvive_e200b.adba5ce.png
You can buy the cameras + cable tree without the casing if you want to engineer your own casing.
Ah ok, cheers!
Hello, I was just wondering if there is anything already created that allows the glasses to be used in real time? I've been using the Core for a senior capstone project and I have somewhat of a working program, but I thought I'd ask before diving into a rabbit hole.
Thank you so much for your response again. I used MATLAB to load my .xdf files; for that I installed the EEG libraries. After loading the .xdf file it does give me the data, but it doesn't show what each value represents. Is there a way to get this correctly without using MATLAB, or how would you suggest I get this in MATLAB? Also, what kind of tools should I use for visualization? Is there a way to get live visualization, and if yes, could you describe the process for me please? Thank you!
See the LSL documentation https://labstreaminglayer.readthedocs.io/info/viewers.html
Hi @user-99bb6b. Would you be able to elaborate on your use case? When Core is connected to Pupil Capture, our desktop software, both gaze and pupil data can be recorded and streamed in real time.
Sorry, still kind of new to all this haha. We are using the glasses to capture text when a user stares at an object for a few seconds. But to do this I need to export the video to get the data from the CSV file. At the moment I am reading about the IPC backbone and the Pupil Remote API.
I can't do it at the same time, but do you think it would be sufficient to just do a binocular calibration on just one of the monitors (the monitors are identical in size and viewing distance)? Would this give me a meaningful calibration?
Wouldn't this be weird visually for the subject? Would the second eye be able to fixate the correct target if the second screen did not display a visual target?
I guess I was just imagining that they directly look at one display instead of viewing the display through the mirrors?
Ah, I understand. Can they look behind the mirror, to the target A in your previous diagram? Is this why you need the scene camera?
Is there any advice on how to set up the glasses in a way that allows capturing the pupil at extreme angles? I have really been testing all kinds of stuff, but when looking down as far as I can, the confidence for both eyes always decreases below 0.5. I have been watching the YouTube videos but still feel like I am doing something wrong. Is it just that "hard", or what could be the issue?
It is likely that the eye lids are occluding the pupil in this configuration. You can use the orange arm extenders to place the eye cameras more frontal and further down to the eye. This should give you a better view on the pupil in the described situations.
@papr Thanks! But is it possible that there are simply angles that cannot be captured because of the eye lid and eye lashes? Or is the camera made to capture every angle?
The former. Pupil detection quality will always depend on the actual eye camera position. While Pupil Core is designed to be as flexible as possible regarding eye camera placement, it is still limited by the length of the eye camera arm, the range of the ball joint, and lastly the length of the eye camera cable.
You can find the arm extender geometry here https://github.com/pupil-labs/pupil-geometry/blob/master/Pupil%20Headset%20triangle%20mount%20extender.stl You are free to modify it and 3D print your own arm extenders that are more suitable to your use case (remember the cable length constraint though!).
@papr Thank you so much!
But in your code, you use dot_products < 0 to find the gaze_2d direction, and you said that v*(p-c) > 0 should be the right condition.
Please see this PR for reference https://github.com/pupil-labs/pye3d-detector/pull/51
I know, it is not intuitive, but bear with me for a moment:
Each observation has two possible directions, indexed with 0 and 1 in aux_3d. gaze_2d are the projections based on the vectors at index 0 (I will call these base, and the vectors at index 1 other). dot_products is therefore based on the base directions.
Now, let's define this array mask: correct_choice = dot_products > 0. It is True for every entry whose base direction points away, False otherwise. With the ~ operator, we can invert this mask: incorrect_choice = ~correct_choice. Here, we will assume incorrect_choice == (dot_products < 0).
There are two possible implementations:
1. Select base (index 0) for all entries in correct_choice that are True, and other (index 1) for all entries in correct_choice that are False: np.where(correct_choice, 0, 1)
2. Or alternatively (!): Select other (index 1) for all entries in incorrect_choice that are True: np.where(incorrect_choice, 1, 0)
Our implementation corresponds to 2) while the paper describes 1), but they are effectively the same (a quick sanity check follows below).
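As a quick sanity check of that equivalence (an illustration with made-up values, not the actual pye3d code):

    import numpy as np

    # hypothetical dot products between the base directions and (p - c)
    dot_products = np.array([0.3, -0.1, 0.7, -0.4])

    correct_choice = dot_products > 0   # base direction already points away
    incorrect_choice = ~correct_choice  # equivalent to dot_products < 0 (ignoring exact zeros)

    # implementation 1: keep index 0 (base) where correct, else index 1 (other)
    choice_1 = np.where(correct_choice, 0, 1)
    # implementation 2: pick index 1 (other) where incorrect, else index 0 (base)
    choice_2 = np.where(incorrect_choice, 1, 0)

    assert np.array_equal(choice_1, choice_2)  # both select the same direction indices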
Maybe, I should really just change the few lines of code to avoid any future confusion about this.
Ok, so that's what the dual display HMD calibration does? I'm working on setting up a separate computer so I can use these newer plugins. I'll definitely look into these plugins and will probably come back with more questions. Thank you so much for your help, I'm very new to Pupil Core!
Sure, no worries. Happy to help!
Hi, I just wanted to confirm that everything is working how it should be. I installed the Dual-Display HMD calibration plugin. The participant will then look through the mirrors (viewing the fusion of the two displays) and calibrate following the calibration client instructions (look top left corner for 1 s, top right corner 1 s, etc.)? All the subject perceives is just one display during calibration.
Hello everyone, We have tried using the plugin pupil_capture_lsl_relay. The LSL stream is detected by our program but no data is then received. We believe it could be because of the error that we have when activating the plugin in Pupil Capture, i.e., "Couldn't create multicast responder for 224.0.0.1 (set_option: An operation was attempted on a host impossible to reach)". Have you ever had this error? Do you know what is its cause? Best, Puzzled researchers
Hi, apologies for that. The README was missing a note that you need to calibrate first before you can receive data via the LSL relay. I have just added it to the readme to avoid this issue in the future. I do not believe that the displayed warning affects your setup.
Hello all! I wanted to know if it is possible to set up a config for Pupil Capture which will always apply the same settings every time Pupil Capture starts.
Every time we start the app it uses the same defaults: resolution 1280x720 instead of Full HD, manual exposure, and wrong settings for the network api and calibration mode.
The app should be able to restore these settings (with the exception of the Pupil Remote port) from a previous session. Off the top of my head, there are two possible reasons for this not to happen: 1. The session settings are from a different version, in which case Pupil Capture starts with defaults to avoid incompatibility issues. 2. The previous Pupil Capture session was not terminated gracefully, so Pupil Capture could not store the session settings.
You can set the Pupil Remote port via the --port argument when launching Pupil Capture.
so every start of the app requires the operator to set it up again in the same way.
If there is any settings or config file to make these settings permanent, it would be extremely welcome.
Is there any way to remove the incompatibility issues and re-save the latest correct config?
The former can only happen if you use a different version than before. If you always use the same version there should not be any issues. Can you share the capture.log file with us?
Hello, I would like to know if it is possible to calibrate the gaze from Pupil Mobile directly, without using Pupil Capture.
Hi @user-b28eb5. It isn't possible to calibrate with the mobile app. What you can do is record a calibration choreography, and subsequently run the calibration post-hoc in Pupil Player
So for example, if I want to use Pupil Mobile to record a video and I want to know where I am looking, then first I have to perform the choreography in the video and after that the gaze will be detected in the rest of the video, right?
The workflow would be to record a calibration choreography and then perform your eye tracking experiment. Subsequently, you would load the recording into Pupil Player and perform the calibration in a post-hoc context. Read more about post-hoc calibration here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Okay, thank you
Another question: is Pupil Mobile available for iOS?
Hi, no, it is only available for Android. Please also be aware that we stopped maintaining the app two years ago. It remains in the Play Store for legacy reasons.
Then, is there another mobile app that I can use?
If you need to be mobile, I can recommend having a look at our Pupil Invisible product.
Alternatively, you might be able to build Pupil Capture from source on a Raspberry Pi. Note that the RPi does not have the computational resources to run the pupil detection in real time, but it should be sufficient to save the video stream to disk. Alternatively, you can use the RPi to stream the video via the local network to a running Pupil Capture instance.
Okay, do you have any documentation about Raspberry Pi with Pupil Capture?
You can find the Ubuntu source install instructions here. https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md Note, that some of the pre-built Python packages are not built for ARM (the raspberry pi CPU type). You will need to build these from source, as well (pip, the python package manager, will automatically attempt that for you and will let you know if you are missing any dependencies). Our documentation does not cover this but you are welcome to discuss any issues with building them here.
Okay, thank you
Hello, I want to know whether we can define an AOI and generate a heat map for it?
Hi, yes, this is possible via https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
The existing client is just meant as a template/example implementation of how to interact with Pupil Capture and how to generate reference data. I highly recommend adapting the script to do two things: 1. Display a target stimulus on each display (ideally it looks like a single target to the subject) 2. Send the corresponding target locations for each display to Capture (see https://gist.github.com/papr/fcbafd5cf748c9b11e64a4dd37ec8e9a#file-dual_display_choreography_client-py-L137-L150)
As a result, you will get gaze estimates for each display separately.
The base implementation will be inaccurate as the target locations are not reproducible (e.g. "center" will mean something different for each subject).
Ok thank you! Does it make sense to essentially combine a custom screen marker choreography plugin (e.g. the nine-point screen marker choreography) with dual_display_choreography_client.py?
Hi @papr, quick question: how similar should the gaze traces {'gaze_normal0_x'} {'gaze_normal0_y'} {'gaze_normal0_z'} {'gaze_normal1_x'} {'gaze_normal1_y'} {'gaze_normal1_z'} for the left and right eye be? I'm analysing some data and I find fairly large differences for all participants. The overall shape between the left and right eye seems to match, yet the scale does not, and I also notice an offset between both eyes when I compute gaze with these traces.
Could you please specify what you mean by "shape"? y should be more similar than x
By shape I mean the general rough estimate of where the eyes looked. See image. Right is orange, left is blue. Targets were at 1 m distance, so convergence shouldn't really play a role here. Gaze computation was as described by you in a previous post:
r = sqrt(x.^2 + y.^2 + z.^2); el = acosd(y./r); az = atan2d(z,x);
So what I mean here is, yes, they do look similar but also they differ quite a lot in terms of their offset and scale. So which eye is correct?
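For anyone doing the same analysis in Python rather than MATLAB, the equivalent conversion could look roughly like this (gaze_normal columns from the gaze_positions.csv export loaded as numpy arrays; variable names are my own):

    import numpy as np

    def cart_to_spherical(x, y, z):
        # radius, elevation, and azimuth in degrees, mirroring the MATLAB snippet above
        r = np.sqrt(x**2 + y**2 + z**2)
        elevation = np.degrees(np.arccos(y / r))
        azimuth = np.degrees(np.arctan2(z, x))
        return r, elevation, azimuth

    # hypothetical right-eye gaze normal
    r, el, az = cart_to_spherical(np.array([0.1]), np.array([-0.05]), np.array([0.99]))
    print(el, az)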
Also, out of interest, have you removed low confidence values already?
So which eye is correct? Probably neither. gaze_normal is an estimate that depends on (1) the fitness of the eye model and (2) calibration quality. There are many places where errors may accumulate. The calibration is optimized such that the 2d projection of gaze_point_3d corresponds to the recorded 2d reference targets as well as possible. gaze_point_3d is composed of both gaze_normals and averages out some of their errors.
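If it helps to quantify the disagreement between the two eyes, one simple option (an illustration, not an official Pupil Labs metric) is the angle between the two unit gaze normals per sample:

    import numpy as np

    def angle_between_deg(n0, n1):
        # both inputs are 3D gaze normals; normalize defensively before the dot product
        n0 = n0 / np.linalg.norm(n0)
        n1 = n1 / np.linalg.norm(n1)
        cos = np.clip(np.dot(n0, n1), -1.0, 1.0)
        return np.degrees(np.arccos(cos))

    # hypothetical gaze_normal0 / gaze_normal1 values from gaze_positions.csv
    print(angle_between_deg(np.array([0.08, -0.03, 0.99]),
                            np.array([0.12, -0.02, 0.99])))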
Thanks for getting back. Ok, so I take it that these differences are to be expected and reasonable when comparing the individual eye cameras? Regarding your question: the data is not filtered in any way. Only NaN values are removed in order to compute gaze.
Some differences are expected. The amount of difference will depend on the factors mentioned above. If you want, I can try to reproduce the graph with my own data the week after next.
That would be great! So, when it comes to position data from Pupil Labs, what is the most 'raw' data that you get? I assume it is gaze_normal_x/y/z, right? And from there on you combine the information to get norm_pos_x/y and gaze_point_3d_x/y/z, or do all of them follow different pathways?
The former is correct. If you don't need to rely on the calibration (don't need scene camera coordinates) or are planning on building your own calibration, have a look at the circle_3d_normal pupil data.
I see, but this stream is not provided via LSL, I think. Anyway, many thanks. I think I've got a plan now.
I am not sure if I am just not looking in the right place or if it is so simple that I haven't figured it out, but is it possible to receive the data from the glasses while they are recording (basically real-time data capture)? Or is there a way to basically open Pupil Capture and press export for a previously recorded file?
Hi @user-99bb6b. You seem to have stated two aims here, 1) to stream real-time data from a Core system, and 2) to access recorded data without Pupil Player. If you can elaborate more on your intended use case I will try to point you in the right direction.
Hi! Does the HTC Vive eye-tracker use corneal reflections to track gaze? It seems like Pupil Labs is using "dark pupil" detection, is that correct? Then what is the use of the IR LEDs and IR cameras? In our eye image I see 5 dots around the pupil, which I assume are coming from the IR LEDs? Thanks for the help!
Hi @user-4811f9. The VR Add-on, like Core, employs dark pupil detection rather than corneal reflection tracking. The pupil detection algorithm is designed for black and white eye images captured by an IR camera, hence the purpose of the IR illuminators.
Hi, I'm Hani, a new user of Pupil Core. May I know, if I want to get Area of Interest results, do I need to set the markers before or after recording? And could you please advise me on how to set this up?
Hi @user-5da140, welcome to the community! Yes, you'll need to set the AprilTag markers prior to recording. Check out the documentation for a detailed overview of how to set up and use surface tracking: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Hello, I am wondering how I can run 2 Pupil trackers (using Pupil Capture) on the same PC? Also, to save computational cost, is it possible for me to first use Pupil Capture to perform the calibration, and then run 2 Pupil Capture instances using the Python APIs (linking to the corresponding calibration files)?
Hi, can you download gaze overlay footage? The description of it in the product overview says you can, but it doesn't seem to be an option
Yes, just enable the World Video exporter in the Player plugin menu: https://docs.pupil-labs.com/core/software/pupil-player/#world-video-exporter
It's recommended to have two PCs, if possible. That said, you can open two instances of Pupil Capture on the same PC. To reduce processing requirements, I'd recommend ensuring that everything is set up correctly (eye camera positioning etc.), then disabling real-time pupil detection. Calibration and pupil detection can be performed post-hoc. Note that you'll need to record a calibration choreography, one for each eye tracker. Read more about post-hoc pupil detection and calibration here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Many thanks Neil. May I further ask: to save processing cost, would it be possible to perform the calibration first, then turn off pupil detection (while keeping Detect eye 0 and Detect eye 1 on, for recording of the pupil)? After collection, I could drop the collected folder into Pupil Player for post-hoc detection of eye gaze?
I am using the glasses to capture a user reading text, whether that be a sign or a book. I then take that image or video and convert the words into text. I am trying to test the software that gets the words from the image or video in 3 different ways: first with images from the world.mp4, then with just the world video, and finally in real time. Once I get them working I can optimize the real-time accuracy of the other software to make it faster. The issue I would like to start with is exporting the data without opening Pupil Player and pressing export. I was maybe thinking of using CMD commands, or whether I could use Pupil Remote to just export it, similar to how you would stop and start a recording.
Thanks for clarifying. You can receive world video frames in real-time via our Network API. This example Python script shows how https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py. You should be able to run your image processing on these frames
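A compressed sketch of what that helper does, assuming the Frame Publisher is active in Capture and set to the raw BGR format; the exact payload layout should be verified against the linked script:

    import zmq
    import msgpack
    import numpy as np

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://127.0.0.1:{sub_port}")
    sub.subscribe("frame.world")  # scene camera frames

    while True:
        parts = sub.recv_multipart()     # topic, metadata, raw image bytes
        meta = msgpack.loads(parts[1])
        if meta.get("format") == "bgr":  # raw BGR frames can be reshaped directly
            img = np.frombuffer(parts[2], dtype=np.uint8)
            img = img.reshape(meta["height"], meta["width"], 3)
            # ... run your OCR / image processing on img here ...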
Hi, I am still working on the wireless connection with Pupil Core. It appears that the GitHub backend that you sent me is a bit slow. I am streaming the video from the Raspberry Pi to a port on my computer; the video has no latency when I watch it, yet when I use the backend to send it to Capture it becomes REALLY slow. I don't know if it is my CPU, but I don't think so; I think the GitHub backend is not very well optimized. I would like to add a direct link between Capture and my video stream on the port, but I don't know which file I should modify to do this in your GitHub files. Do you have any idea?
I have had no response from Pupil Labs staff! I am trying to buy the Core product and get technical data on the NIR LED wavelength (850 nm?) for a custom app. Does anyone have a phone or email contact for Pupil Labs tech/sales staff?
Hi @user-7e7501. Welcome to the community. I note that you have reached out via email. A member of the team will respond to you there.
Well, I made some changes and it is now better. I decreased the image resolution. The thing is that the problem comes from the backend solution, I guess from the notification system: sending a payload with a "high resolution" image (640x480) might be hard sometimes, I don't know why; it is working pretty well with (320x240)... I would still prefer to configure my own source over the network rather than use the notification system. I tried to use usbip to connect over IP; Capture detects the USB device, but then when I select it... Capture crashes... If there is no other solution I will keep working with the video backend plugin, I think.
Hey, yes, it is likely that the network bandwidth is the limiting factor. The backend currently only supports uncompressed image formats. To optimize for bandwidth, you would need to extend both components (client and Capture side) to support reading, transferring, and decompressing compressed video.
You can indeed perform a calibration first to check that it worked okay. Important note: Make sure that you record the calibration, as disabling real-time pupil detection will also disable real-time gaze estimation. You'll need to run pupil detection and calibration post-hoc.
Hi @nmt, following up on the question of reducing the computational cost: please let me know if the following steps are correct. I would first perform the calibration with markers and record the calibration as a normal recording. Then, I would start recording as usual. Do I need to turn on pupil detection during the calibration? Also, I suppose Detect eye 0 and Detect eye 1 should be turned on all the time, in both the calibration recordings and the normal recordings?
Hello, I would like to know whether Pupil Cloud charges a fee?
I've responded in the Invisible channel: https://discord.com/channels/285728493612957698/633564003846717444/958363045699158037
Hi all, I'm new here. Hoping someone can help me with a technical issue with my Pupil Core. I own two Pupil Core headsets, both with bilateral pupil cameras and world cameras. One headset works just fine, the other displays the pupil cameras but returns the attached error when I start Pupil Capture. My inclination is to think that the world cam is broken here, but I'm not sure how to check this. I'm seeing (2) Pupil Cam2 ID0 cameras in my device manager under "libusbK USB Devices" but can't seem to identify a world camera. Any advice here?
Hi @user-19e084. In the first instance, please follow these instructions to troubleshoot the camera drivers installation: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Hi guys, I want to calculate the actual distance of eye gaze from one point to another point on the screen (which is calibrated), but the norm_pos information obtained from the gaze datum is based on the world coordinate system rather than the coordinate system of my screen. How can I get the result I want? 1. Is norm_pos multiplied by the actual size of the scene captured by the world camera? If so, how can this actual size be obtained? 2. Is it possible to get norm_pos based on the coordinate system of my screen, so that I can directly multiply norm_pos by the actual size of the screen to get the data I want?
Hi @user-64deb6. You can use the Surface Tracker Plugin + AprilTag markers to map gaze from scene camera coordinates to screen-based coordinates. This will get you norm_pos relative to your screen. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
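Once gaze is mapped to the surface, converting to physical distance is simple arithmetic. A sketch, assuming a 53 cm x 30 cm screen (substitute your own measurements) and two surface-normalized gaze points:

    # surface-normalized gaze positions (0..1 in both axes), hypothetical values
    p1 = (0.25, 0.50)
    p2 = (0.70, 0.55)

    SCREEN_W_CM, SCREEN_H_CM = 53.0, 30.0  # physical size of the tracked surface

    dx_cm = (p2[0] - p1[0]) * SCREEN_W_CM
    dy_cm = (p2[1] - p1[1]) * SCREEN_H_CM
    distance_cm = (dx_cm**2 + dy_cm**2) ** 0.5
    print(f"gaze moved {distance_cm:.1f} cm on screen")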
Thank you very much. I also want to ask: do I still need to calibrate after opening Pupil Capture before I use the Surface Tracker Plugin + AprilTag markers?
Hello, I'm trying to use Pupil Core for an Intel Macbook (Monterey 12.3). However, while the device is detected on Pupil Capture, I only see a grey screen. What should I do to get it working?
@user-b0efbb, you'll need to run Pupil Capture with administrative privileges on MacOS Monterey. Please see the release notes for detailed instructions: https://github.com/pupil-labs/pupil/releases/tag/v3.5
Thanks for this. Much appreciated.
No problem. Yes, you'll need to calibrate. Calibration is a necessary step to obtain gaze data. Surface tracking is an additional step to map that gaze to surface-based coordinates.
Thanks a lot.
Not sure if this is the right channel to ask in, but I'm working on a research project with underwater eye tracking. We have a lens over the camera as part of the waterproof enclosure. Is there any way to get a read-out on what effect this has on the IR? Whether it's filtering wavelengths or reducing the intensity?
In the gaze_positions.csv export, the website says that the norm_pos_x and y positions are normalized coordinates. What are these normalized coordinates? Like what range of coordinates are there across the whole image?
Hi all. I can start Pupil Capture (macOS Monterey) using sudo and the world camera shows, but for some reason the eye camera is gray. The problem is apparently the following: eye1 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied. I wonder if anyone could suggest a fix. The glasses work fine on a Windows PC.
Hi @user-b49db3. Please ensure that the wired connection to the eye camera is secure, then try restarting Pupil Capture with default settings (you'll find that option in the General Settings menu of Pupil Capture)
Hi all, recently my Pupil Core stopped working. When I launch Pupil Capture it says video_capture.uvc_backend: could not connect to device! No images will be supplied.
I followed these instructions https://docs.pupil-labs.com/core/software/pupil-capture/#windows to no avail
I found the drivers, uninstalled them, rebooted, ran pupil capture with admin privileges but now the drivers don't even show up in device manager anymore
could it be my device is broken?
Hi @user-b7f1bf. We have responded in our email thread.
Thank you! I'll try the things you mentioned.
Normalized space is bound between 0 and 1, e.g. image centre: x=0.5, y=0.5. You can read more about Pupil Core's coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
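As a small illustration of how those normalized coordinates relate to pixels (assuming a 1280x720 scene frame; per the linked terminology page, the normalized origin is at the bottom left with y pointing up, while pixel space has its origin at the top left):

    WIDTH, HEIGHT = 1280, 720  # hypothetical scene camera resolution

    def norm_to_pixels(x_norm, y_norm):
        # flip y because normalized space points up and pixel space points down
        return x_norm * WIDTH, (1.0 - y_norm) * HEIGHT

    print(norm_to_pixels(0.5, 0.5))   # image centre -> (640.0, 360.0)
    print(norm_to_pixels(1.2, -0.1))  # gaze outside the frame maps outside pixel bounds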
That's what I thought it was as well, but my output csv includes values above 1 at certain x and y timestamps. Would there be a reason for this?
I made a slight edit to my previous response. The normalized scene camera frame is bound between 0 and 1. However, gaze can be estimated outside of the scene camera's field of view. So normalized gaze coordinates exceeding those bounds are not unexpected.
Okay, thank you for the quick response!
Hi all! I am using one Pupil Core with a MacBook. I have also used it in a dual eye-tracking setup with another MacBook, using Pupil Groups and Time Sync. Now I intend to buy a new computer for the second tracker. What do you recommend? Should I go for a MacBook to make sure the two connect easily, or would Windows work fine? Do Windows editions have any benefits over Mac? (I am particularly interested in recording sound, which is no longer available in the macOS versions.)
Hi @user-d1bcb6. Mac or Windows will work with your Pupil Groups set up. The key specs are CPU and RAM. Since you are already using MacBooks, I'd be inclined to go for a Mac as your new addition. Pupil Core software performs very well with M1 silicon (that's what I use). Just be aware that you'll have to run Pupil software with administrative privileges on macOS Monterey. In order to record sound (Windows, Mac or Linux), you'll need to use third-party software, such as Lab Streaming Layer's audio capture + our LSL integration.
Thanks, Neil, great, good to know which system it works stably with. For the audio, I intend to follow the recommendations that you posted here: https://discordapp.com/channels/285728493612957698/285728493612957698/887783672114208808 Do you still recommend exactly this? Do you expect it will work on macOS Monterey? (With my current machine I have not been successful: AudioCapture wants me to use a later version than I have, but when I use a later version, Pupil Labs doesn't work any more.)
Can you clarify which version of macOS your Pupil Core installation doesn't work with?
No, I am afraid I cannot, since I downgraded back. Also, I am using an old version of the Pupil software (1.12.17), because I have further processing adapted to it.
Ah. You'll need to upgrade to the latest version of Pupil software if you intend to use it with macOS Monterey. What specifically is preventing you from upgrading?
I see. I post-process the data from the two eye-trackers to overlay gaze paths on the world camera video with an external utility, which might not work with the updates. (But I get that I'd better solve that.)
Back to the question about sound: what would be your advice? Go for a Mac, install Monterey, and try the steps from the previous messages?
Audio recording is possible with the LSL integration, yes. Note, however, that the LSL plugin adjusts Capture's timebase to synchronize Capture's own clock with the LSL clock. This means that time synchronization will potentially break if the Pupil Time Sync plugin is active. Also note that while the audio is captured, there are no utilities to easily play it back in synchrony with the recorded scene video.
Oops. Thanks a lot for looking into my situation so thoroughly. What if I use the LSL plugin on the Clock Master? Would LSL then adjust the Master's time, and Pupil Time Sync adjust the time on the second machine?
Do I understand it right that with the outcome I can (1) approximately synchronise the audio with the exported video, e.g. with ffmpeg, and (2) use the exact timestamps to draw an audiogram on the frames of this video?
Last question: is there any combination of an old Pupil Capture version + macOS where the old sound synchronisation still works? (Actually, this is not only my problem; I also know other researchers who keep their old versions just to keep sound recording working.)
What if I use LSL plugin on the Clock Master? Would then LSL adjust the Master's time, and Pupil Time Synch adjust the time on the second machine? That should work. I recommend setting up the time sync config first, then start the LSL relay plugin on the Clock Master. Starting the LSL relay on the clock followers will not work.
Do I understand it right that the outcome I can (1) approximately synchronise with exported video e.g. by ffmpeg, (2) use exact timestamps to draw audiogram on the frames of this video? I am not sure about (1). Once you have time synced timestamps, (2) should be doable.
Have you tried the LSL Recorder plugin for Capture? Instead of changing the time base, it stores LSL data in Pupil time https://github.com/labstreaminglayer/App-PupilLabs/blob/lsl_recorder/pupil_capture/pupil_capture_lsl_recorder.py#L162-L163 in a CSV file. Using an external script, one should be able to transform this csv file into a Pupil Player compatible audio file. Using this CSV, it should also be possible to implement (2).
It's our LSL Plugin that updates Capture's timebase to sync with the LSL clock. So AFAIK, you would need to avoid using Pupil Time Sync. I'm unsure about the best way to merge the audio waveform with the scene video. I think @papr will be able to provide more insight on those things next week. We made the decision to remove audio capture when we released Pupil Capture v2.0. Unfortunately, this feature was too difficult to maintain across our supported operating systems.
Hello, could someone point me to the original screen marker choreography code/plugin? Or any default screen marker or calibration choreography code. I found the custom nine-point calibration plugin but was hoping I could get the base code. I am trying to implement displaying a visual target in my dual_display_choreography client (I'm not too sure how to do this since it's not a plugin that I'm subclassing but rather a client that enables a plugin). Thank you!