Hello, I am trying to run the Pupil Labs Neon on a desktop to eliminate the delay when transferring data via the API from a smartphone. When I run the software from the Pupil GitHub repository, the scene camera connects successfully, but the eye cameras do not. When I click on the 'Neon sensor module' directly, an error message appears stating it is "already used or blocked" (as shown in the attached picture). How can I solve this issue? Thank you.
Hi @user-fce73e. Neon is not compatible with the Capture software. It is possible to run it from source on Linux and Mac, although this is more experimental and you cannot use Neon's native gaze pipeline. If you need low-latency API access, you can use a USB hub and Ethernet: https://docs.pupil-labs.com/neon/hardware/using-a-usb-hub/#using-a-usb-hub
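For reference, here is a minimal sketch of receiving gaze with the real-time API (pip install pupil-labs-realtime-api); the same code applies whether the phone reaches your desktop over Wi-Fi or over Ethernet through the USB hub, the hub route just lowers latency. The loop length is arbitrary.

```python
# Minimal sketch using the Pupil Labs real-time API (simple interface).
from pupil_labs.realtime_api.simple import discover_one_device

# Discover the Neon Companion device on the local network.
device = discover_one_device()
print(f"Connected to {device.phone_name}")

# receive_gaze_datum() blocks until the next gaze sample arrives.
for _ in range(100):
    gaze = device.receive_gaze_datum()
    print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)

device.close()
```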
great, thank you for the update.
Hi, I had a question about post-hoc eye tracking. I just made a recording with the Pupil Core camera in the 120 FPS (400, 400) resolution mode. Before making this recording, I checked the lighting in Pupil Capture and noticed the pupil tracking there was perfect. After making the recording and reconstructing the video as an .avi file, I fed it into a Python script I have written, which uses the Pupil Labs detectors library to extract the pupil from this playable video. Here, the performance was much worse than what I saw live. I was wondering if anyone had thoughts on how to improve this? I'm also open to improving the conditions of my image. Here is a sample image with poor pupil detection.
Hi @user-ffc425 , if you were using the script that was previously linked (https://discord.com/channels/285728493612957698/446977689690177536/1408369985331662878), you might need to modify the parameters when you apply the 3D pupil detector. I would recommend referencing Pupil Capture's detector plugin code to see how it is done there.
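For illustration, here is a rough sketch of that 2D-to-3D pipeline on a reconstructed .avi. The property names and values, the focal length, and the filename are placeholders or assumptions; check detector_2d.get_properties() and Pupil Capture's detector plugin source for the exact set in your installed versions.

```python
# Hedged sketch: tuned 2D pupil detection fed into pye3d, frame by frame.
import cv2
from pupil_detectors import Detector2D
from pye3d.detector_3d import CameraModel, Detector3D, DetectorMode

detector_2d = Detector2D()
# Assumed property names -- verify with detector_2d.get_properties().
detector_2d.update_properties({
    "intensity_range": 23,   # widen if re-encoding changed contrast
    "pupil_size_min": 10,    # px; loosen bounds for your eye-image scale
    "pupil_size_max": 150,
})

# The focal length here is a placeholder; use your eye camera's intrinsics.
camera = CameraModel(focal_length=283.0, resolution=(400, 400))
detector_3d = Detector3D(camera=camera, long_term_mode=DetectorMode.blocking)

cap = cv2.VideoCapture("eye_video.avi")  # hypothetical filename
timestamp, fps = 0.0, 120.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    datum_2d = detector_2d.detect(gray)
    datum_2d["timestamp"] = timestamp  # pye3d needs monotonic timestamps
    datum_3d = detector_3d.update_and_detect(datum_2d, gray)
    print(datum_2d["confidence"], datum_3d["diameter_3d"])
    timestamp += 1.0 / fps
cap.release()
```

Note that re-encoded video often has different contrast and compression artifacts compared to the raw eye frames, which is one likely reason the live detection looked better.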
Hi @user-cdcab0 Can Pupil Player process additional LSL streams that we recorded with Pupil Capture? I have an extra event-marker stream coming from MATLAB that I recorded simultaneously with pupil and gaze data. I want to plot it over the gaze data and use it to parse the gaze data into trials or epochs. We would also like to process this data along with EEG we have collected. Can the pupil recordings be integrated into Brainstorm or EEGLAB?
Pupil Player does not have built-in functionality to process LSL streams. However, we do have an open-source plugin API, so in principle you could write a plugin that generates the plots you describe (a skeleton is sketched below): https://docs.pupil-labs.com/neon/neon-player/plugin-api/#plugin-api Pupil recordings are not compatible with Brainstorm or EEGLAB.
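As a rough outline, a user plugin might look like this. The class and method names follow Pupil's plugin base class as I recall it; the marker-loading and matching logic is a hypothetical placeholder you would fill in.

```python
# Hedged sketch of a Pupil Player plugin skeleton (drop into the
# pupil_player_settings/plugins folder).
import csv

from plugin import Plugin


class LSL_Marker_Overlay(Plugin):
    uniqueness = "by_class"
    order = 0.9  # run late so any overlay draws on top

    def __init__(self, g_pool, marker_csv="task_markers.csv"):
        super().__init__(g_pool)
        # Load your exported marker stream (timestamps already in Pupil
        # time, thanks to the LSL recorder plugin). Filename is assumed.
        with open(marker_csv) as f:
            self.markers = list(csv.DictReader(f))

    def recent_events(self, events):
        # events["gaze"] holds gaze datums for the current frame; compare
        # their timestamps against self.markers to tag trials/epochs here.
        pass
```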
Hi, I am not looking for real-time access to the stream. We already get that through the lsl_rec plugin. I would like to sync it offline with the other default Pupil Capture streams (gaze_positions, pupil_positions) after recording, but I am not able to match the timepoints exactly. I first export gaze position using Pupil Player and then look at the two streams. I am sharing the exported gaze-position .csv (https://1drv.ms/f/c/FB1DFB83EAA92E53/Ep3zdOxuS19MuNvu9ocNeh8BYFJkdRoRy-d-75M8IeAv8A?e=a0oPcL) and my task marker stream .csv (https://1drv.ms/x/c/FB1DFB83EAA92E53/Eb0DZ0eTdbpIp-NIEFfFyyYBRE8inqkOI-61MWg1psYnNA?e=hKd2uN)
Indeed, I was referring to post-hoc analysis - Pupil Player has no real-time capabilities. In the case of the Pupil Capture LSL recorder plugin, which you have used, the saved timestamps belonging to other sensors are already synced and converted to Pupil time. However, there are unlikely to be one-to-one matches across sensors, because the sensors do not sample in exact lockstep with each other. You would therefore simply need to find the closest match for each timestamp, as sketched below.
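For example, with pandas you can attach the nearest gaze sample to each marker. "gaze_timestamp" is the column name in the Pupil Player export; "marker_time" and the filenames are assumptions, so adjust them to your actual headers.

```python
# Hedged sketch: nearest-timestamp matching between the Pupil Player gaze
# export and a marker stream. merge_asof requires sorted keys.
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv").sort_values("gaze_timestamp")
markers = pd.read_csv("task_markers.csv").sort_values("marker_time")

# For each marker, attach the gaze sample whose timestamp is closest.
matched = pd.merge_asof(
    markers,
    gaze,
    left_on="marker_time",
    right_on="gaze_timestamp",
    direction="nearest",
)
# Inspect the residual offsets to sanity-check the alignment.
matched["time_offset"] = matched["gaze_timestamp"] - matched["marker_time"]
print(matched["time_offset"].describe())
```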
Hi @user-4c21e5 @user-cdcab0 We want to reduce the size of the pink dot with the yellow border that shows fixations and eye position on the Pupil Capture screen, so that we can see more exactly where the participant is gazing. How can we do that?
You can refer to this section of the docs for instructions on modifying the overlay visual properties: https://docs.pupil-labs.com/core/software/pupil-player/#visualization-plugins
Hi @user-4c21e5 The link you provided is for changing the visual properties in Pupil Player. We are looking to change it in Pupil Capture itself, while recording. Is that possible?
Hi everyone, I ran a test using the real-time-blink-detector with a Neon recording (https://github.com/pupil-labs/real-time-blink-detection) and noticed that, besides taking quite a long time to finish analyzing the recording, it also detected many more blinks than what Neon Player reported. My question is: is the blink detection algorithm used in Neon Player the same as the one used in the real-time-blink-detector? Thanks a lot!
No, they're different, but neither of those is what we recommend for blink detection with Neon. Instead, the Neon Companion app can now detect blinks in real time and save them within the recording.
Ah, I see! Thanks a lot for the clarification! So, is it necessary to upgrade the Neon Companion app on the smartphone? It doesn't show up as available for me… Or should I instead upgrade the Neon Player? Thanks!
If you're connected to the internet and the Google Play store doesn't show any app updates, then you're already on the latest. You'll want to make sure that "Compute eye state" is enabled within the app settings.
You'll then want to load these recordings with the latest version of Neon Player
Hello! I have another question about this. In the gaze.csv file, the eighth column is the blinkID (which corresponds to the one in the blinks.csv file), but at those indices there are also gaze_x and gaze_y values in pixels. Are those values generated through some kind of interpolation? Maybe something done by the AI algorithm? With the Pupil Core, I found it useful to have the confidence value. I had considered using the eyelid aperture measurement, but in all my recordings this information is NaN. Many thanks for your help!
The gaze estimator will always produce a gaze point when provided with an image, regardless of the state of the eye, the eyelid, or even the presence of an eye in the eye image. So yes, even during a blink, the gaze estimator will still make its best estimate of a gaze point. Depending on your research, you may want to simply filter these gaze points out, as sketched below.
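For example, assuming the Pupil Cloud export layout where gaze.csv has a "blink id" column that is empty outside of blinks (check your file's exact header):

```python
# Hedged sketch: dropping gaze samples that fall within detected blinks.
import pandas as pd

gaze = pd.read_csv("gaze.csv")
# Rows with a non-null blink id were sampled during a blink.
during_blink = gaze["blink id"].notna()
gaze_clean = gaze[~during_blink].copy()
print(f"Dropped {during_blink.sum()} of {len(gaze)} samples during blinks")
```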
Ok, got it, thanks a lot! Do you think there's any reason why the 3d_eye_states.csv file has the last six columns (eyelid angle top left [rad], eyelid angle bottom left [rad], eyelid aperture left [mm], eyelid angle top right [rad], eyelid angle bottom right [rad], eyelid aperture right [mm]) all set to NaN? I downloaded the data from Pupil Cloud
That sounds like you may want to open a ticket in the troubleshooting channel
Hi @user-3bcb3f ! For gaze visualisation, you can drag this into the plugins folder, enable it, and choose how you want the circle to be displayed.
Hi, I put this .py file into the plugins folder, but the plugin manager does not show any options where I can manipulate the circle in Pupil Capture.
If you placed it correctly, restart Pupil Capture. On the plugin side panel you should see a new entry called Display Recent Gaze.
Enable it, and at the bottom you'll get a new plugin panel with options to adjust the color, size, and alpha value of the visualised gaze.
Thank you very much. Got it working!