Hi, how do I get the latest and greatest version of Pupil Capture WITHOUT having to run from source? Your last release was in 2021, but there are much newer commits, even from a few months ago, e.g. on the develop branch.
EDIT: keep up the good work!
Hi @user-84387e , it is possible to bundle it locally. Do I understand correctly that this was already tried? And thanks!
... it was not. I first wanted to know if I was missing something with the official builds. But okay, then we will bundle it ourselves, thank you.
Last question, while we're on it: how did you keep Windows Defender from deleting your .exe? I see you are using PyInstaller, more or less. With our project, Windows Defender really doesn't like the resulting .exe, but yours seems to be fine, even without a certificate. Am I right in assuming you are doing some trickery to distract Windows Defender? Looking at deployment/reproducible_build.sh:
export PYTHONHASHSEED=42
No intentional trickery. We use pyinstaller, like you said, but then we use WiX to wrap that up in a Windows installer (despite the name, pyinstaller does not do that itself).
You can install WiX yourself and give it a shot, although you'll want to run the reproducible_build.ps1 script in PowerShell (rather than the .sh).
However, you may find it simpler to whitelist the folder so Windows doesn't scan it for threats. Search for "exclusion" on this page: https://support.microsoft.com/en-us/windows/virus-and-threat-protection-in-the-windows-security-app-1362f4cd-d71a-b52a-0b66-c2820032b65e
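As an aside on the `export PYTHONHASHSEED=42` line you quoted from reproducible_build.sh: fixing the hash seed is a common ingredient of reproducible Python builds, because hash randomization can change set iteration order between interpreter runs and therefore the byte-for-byte output of a bundling step. A minimal illustration of the general Python behavior (not Pupil's build code):

```python
# Why a fixed PYTHONHASHSEED helps reproducibility (general Python behavior,
# not specific to Pupil's build scripts): str/bytes hashes are randomized per
# interpreter run unless PYTHONHASHSEED is set, and anything that iterates a
# set of strings inherits that ordering.
import os

print("PYTHONHASHSEED =", os.environ.get("PYTHONHASHSEED"))  # "42" in the build script
print(hash("pupil"))                     # stable across runs only with a fixed seed
print(list({"eye0", "eye1", "world"}))   # set order can otherwise vary run to run
```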
Ah, good to know. Awesome, I will give that a shot. Thank you a lot!
Hello!
I'm experiencing an issue with the Pupil Labs app while annotating data. Previously, I could add multiple events without any problems, but now the app stops responding when I try to add more than seven. Once I reach seven events, I can no longer click on the sidebar where the events are listed. Is this a known issue, or is there a way to resolve it? I already tried restarting and checking for updates, and nothing seemed wrong there.
Hello, @user-414114! Would you be able to share further details about the issue - which eye tracker model are you using, and also which software application?
Hi! Could I use https://github.com/pupil-labs/gaze-controlled-cursor-demo?tab=readme-ov-file with my Pupil Core? My device is not detected.
That repo is for our Neon model. You'll want to check out this community contributed cursor repo for Pupil Core: https://github.com/emendir/PupilCore-CursorControl
Thanks
Good evening folks! I was looking through some of the archived threads about Neon and the Pupil Core / Pupil Capture platform but didn't see this specific issue. I did a big data collection for my research using Pupil Core, but we had some issues with participants not being able to see without glasses. I ideally need to have these participants perform my experiments, and we have a Neon with the corrective lenses, but I figured it would open my research up to scrutiny if some of the participants were recorded with a completely different eye tracking platform. However, I thought a potential workaround might be using the Neon headset with Pupil Capture.
The problem I'm running into, though, is that so far I cannot get it to work at all. I am able to get Pupil Capture to recognize Neon's world camera, but the eye cameras are not recognized, and this is confirmed when I attempt to calibrate and it says "Not sufficient pupil data available". When adding Neon as a local USB camera, I see a second option for Neon Module; however, if I select this, it immediately crashes Pupil Capture and my laptop to the point that I have to restart it. I understand it is experimental, but I was hoping to at least get a recording going. I'm running a MacBook Air; I can potentially arrange to test on a Windows machine as well, but I was wondering if there were any suggestions or if anyone has run into this before? Thanks!
Hi @user-5f9f8f! Using Neon with Pupil Capture is of course possible, but as you mention, it is somewhat experimental. My first question would be: did you run Pupil Capture from source using the neon-support branch?
Hello! I am working with the Pupil Labs fork of pyuvc to control the Pupil Core eye camera (on the arm), and I have observed the following behavior when exposing the camera to 0%, 10%, and 100% light intensity using an industrial light-generation device. Due to this behavior, the camera maxes out in sunny outdoor settings, and this is with the IR emitters on the camera itself blocked. I have also set the camera to the following Auto Exposure Mode. If the AGC were working properly, I would expect it, in the following plot, to push the signal toward the midpoint of the value range. Any thoughts on how I could get it to do that in this example, as well as outdoors?
Hi @user-ffc425 , what is the radiance or luminance output of your light device at those three levels? Do you also have a plot of what was measured?
I don't have a graph exactly, and I can get the luminance measure if needed, but off the top of my head I know the illuminance is roughly 0/300/3000 lux at the 0%, 10%, and 100% levels. Essentially, I set a level, kept it there for roughly 30 seconds, then went to the next level.
Hi @user-ffc425 , thanks for the clarification! And, sorry, I meant a graph of what was measured by the Pupil Core eye camera that you are testing. As far as I understand, the graph you posted above is what you theoretically expect to measure. It can also help to see what was measured during the test.
Then, I think I am in a good position to provide some ideas/tips.
Oh! Sorry, I misunderstood your question too. The graph above (Pupil Mean Pixel Intensity over Time) is what the pupil camera measured. It's the average pixel intensity per frame over the recording period. I would expect, if the AGC were working properly, that during the periods where the pixel average is extremely low that it would attempt to push those higher, and vice versa when too high
Thanks. And may I ask, is the pupil_utils package something written by you? Has it been adapted from these Pupil Capture routines (https://discord.com/channels/285728493612957698/446977689690177536/1337576215443406930)?
Overall, you might have best success using Pupil Capture's Auto Exposure routines. You can find an overview of those in this commit.
And just to be sure, AGC sounds like it is intended for auto-gain control?
Yes, I wrote pupil_utils. Basically, that variable is just converting a boolean TRUE/FALSE to its integer representation. I saw an integer used to select the mode in the following file on the Pupil Labs GitHub: https://github.com/pupil-labs/pyuvc/blob/master/examples/access_100_frames.py. I'll take a look at the commit you sent, thanks! And yes, sorry, I should have clarified: I am using AGC to stand for automatic gain control.
@user-ffc425 I see and no worries.
If it helps, auto-exposure essentially controls how long the camera "collects" light, whereas auto-gain controls the scaling applied when doing the analog-digital conversion of that "collected" light to a sensor reading. They can indeed produce similar effects on the final image, though.
You might overall fare better by referencing the Pupil Capture code for the eye cameras. The pyuvc examples show how to interact with arbitrary UVC cameras in general, but the Pupil Capture code here shows how to work optimally with the eye cameras.
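For concreteness, here is a rough sketch of the controls-dict pattern used in both the pyuvc examples and Pupil Capture's camera backend. Treat the control names and mode values as assumptions to verify against your camera, and note it only works while no other application (e.g. Pupil Capture) has the camera open:

```python
# Minimal sketch of driving eye-camera controls via the pupil-labs pyuvc fork.
# Control names and mode values (1 = manual, 8 = aperture-priority auto) follow
# the UVC examples; verify them against your device before relying on them.
import uvc

device = uvc.device_list()[0]            # pick the eye camera you want to control
cap = uvc.Capture(device["uid"])

controls = {c.display_name: c for c in cap.controls}

# Option A: let the camera's own auto-exposure run
controls["Auto Exposure Mode"].value = 8

# Option B: take manual control of exposure and gain
controls["Auto Exposure Mode"].value = 1
controls["Absolute Exposure Time"].value = 60   # units/range are device-specific
controls["Gain"].value = 0

cap.close()
```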
Thanks for sending the link! At a cursory glance, it seems that AEC (auto exposure control, which as you say also matches the behavior I want) is enabled by default, with a configuration similar to my control dict (though I am missing a few entries, like the priority, for instance). Any thoughts on why, that being the case, no real adjustment is happening in the plot? I will give it a more in-depth look and try some things out as well.
Hi @user-ffc425 , I checked. You will want to incorporate the Auto Exposure code from that commit. It is a reliable way to set the exposure level of Pupil Core's eye cameras.
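In case it helps while you dig through that commit, the general idea behind a software auto-exposure loop is to nudge the absolute exposure time until the mean image intensity sits near a mid-range target. This is a hedged sketch of that concept, not the actual Pupil Capture implementation:

```python
# Conceptual sketch of a software auto-exposure loop: push the mean pixel
# intensity toward a mid-range target by adjusting "Absolute Exposure Time"
# each frame. NOT the Pupil Capture code from the linked commit, just the idea.
import numpy as np

TARGET = 128                 # desired mean pixel intensity (8-bit midpoint)
STEP_GAIN = 0.05             # proportional step size; tune for stability
EXPOSURE_RANGE = (1, 500)    # clamp to what the camera actually accepts

def update_exposure(gray_frame, exposure_ctrl):
    """Adjust the exposure control proportionally to the intensity error."""
    error = TARGET - float(np.mean(gray_frame))
    new_value = exposure_ctrl.value + STEP_GAIN * error
    exposure_ctrl.value = int(np.clip(new_value, *EXPOSURE_RANGE))
```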
Hi @user-ffc425 , I will check and update you.
Sweet thanks!
Hi, I am trying to write C# code to connect to my glasses. The HTTP REST calls to get the status and start a recording are working fine, but when I try to connect to the WebSocket I always get the error that the 'Connection' header value 'Upgrade,Keep-Alive' is not valid. Does anyone have suggestions for me? Thanks
Hi @user-65c5e1 , you might want to reference Neon XR's C# implementation for initiating the RTSP connection & receiving data over WebSockets. Let us know if that clears things up!
Hey Pupil Labs, I have an issue running the Google Colab code for "Map gaze into an alternative egocentric video".
I am getting an error like this:

StopIteration                             Traceback (most recent call last)
<ipython-input-11-2cb29c821ee6> in <cell line: 0>()
     26 args = Args()
     27
---> 28 main.main(args)

1 frames
/usr/local/lib/python3.11/dist-packages/pupil_labs/egocentric_video_mapper/optic_flow.py in calculate_optic_flow(neon_timeseries_dir, alternative_video_path, output_dir, optic_flow_method)
    266     optic_flow_method="farneback",
    267 ):
--> 268     neon_video_path = next(Path(neon_timeseries_dir).rglob("*.mp4"))
    269     optic_flow_method = optic_flow_method.lower()
    270     Path(output_dir).mkdir(parents=True, exist_ok=True)

StopIteration:
It says it's not able to find the mp4 file inside the folder.
How do I resolve this issue ?
Hi @user-b6f43d! Obvious question, but is the .mp4 inside the folder?
Yeah, I just uploaded the whole Time Series + Scene Video folder. Didn't make any changes.
Hi @user-b6f43d , I'm briefly stepping in for @nmt here.
What did you enter for neon_timeseries_path?
https://drive.google.com/drive/folders/1hCHzXYmMA04mxNZZX8AotM4G9AYkSw0C?usp=sharing
I see.
Entering the pathname that way is not compatible with the setup of this Google Colab.
Instead, you want to enter the pathname relative to the /content/drive/My Drive folder structure of the mounted Google Drive, similar to the example shown in the Google Colab:
/content/drive/My Drive/NeonRecordingFolderName/
You can determine the relevant pathname by clicking the file browser on the left hand panel of the Google Colab window, as shown in the attached image.
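For anyone following along, a minimal sketch of the expected Colab-side setup (the folder name below is a placeholder for your own recording folder):

```python
# Minimal sketch of the Colab-side setup; the folder name is a placeholder.
from pathlib import Path
from google.colab import drive

drive.mount("/content/drive")

neon_timeseries_path = "/content/drive/My Drive/NeonRecordingFolderName/"

# The mapper raises StopIteration if this search finds no .mp4, so it's worth
# confirming the scene video is actually visible at this path:
print(list(Path(neon_timeseries_path).rglob("*.mp4")))
```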
Thanks Rob. I uploaded the files directly in Google Colab and it works.
But again I am getting an error saying there is some problem with the video file it is trying to decode.
It says: error splitting the input into NAL units.
Update: I tried with another Insta360 video.
The code gave two output files (neon optic flow.csv and egocentric_video_mapper_args.json) and error statements like this:
Calculating optic flow: 100%|██████████| 16968/16968 [10:49<00:00, 26.13it/s]
Calculating optic flow: 100%|██████████| 15213/15214 [13:55<00:00, 18.21it/s]
AttributeError                            Traceback (most recent call last)
<ipython-input-10-2cb29c821ee6> in <cell line: 0>()
     26 args = Args()
     27
---> 28 main.main(args)

3 frames
/usr/local/lib/python3.11/dist-packages/pupil_labs/egocentric_video_mapper/video_handler.py in get_frame_by_timestamp(self, timestamp)
     46             self.video_container, pts, self.lpts, self.last_frame
     47         )
---> 48         frame = vid_frame.to_ndarray(format="rgb24")
     49         self.last_frame = vid_frame
     50         self._read_frames += 1

AttributeError: 'NoneType' object has no attribute 'to_ndarray'
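Since both the "error splitting the input into NAL units" message and the None frame point at video decoding, one quick check is whether the alternative video decodes cleanly on its own. The traceback shows the mapper uses PyAV under the hood, so a hedged sketch (the path is a placeholder) could look like this:

```python
# Quick sanity check that the alternative (Insta360) video decodes cleanly.
# Uses PyAV, the same decoder the mapper relies on; the path is a placeholder.
import av

def count_decodable_frames(video_path):
    frames = 0
    with av.open(video_path) as container:
        for _ in container.decode(video=0):
            frames += 1
    return frames

print(count_decodable_frames("/content/insta360_video.mp4"))
```

If this raises decode errors or returns far fewer frames than expected, the problem is likely with the file itself rather than the mapper.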
Hi @user-b6f43d , would you be able to put your Neon and Insta360 recording on Google Drive and share it with [email removed]?
Sure Rob, Thanks
I just sent an email, please check. Thank you.
Hi @user-b6f43d , I replied via email.
Hi, I'm currently working with the Pupil Labs Neon device to record video data, and I understand that it includes an integrated IMU sensor. I'm exploring the possibility of using this device for running SLAM algorithms. I wanted to check whether your team has conducted any trials or evaluations using SLAM (e.g., ORB-SLAM3 or VINS-Mono) with this device, as I've tested both but the performance hasn't quite met expectations. Additionally, I'd really appreciate it if you could provide the IMU noise parameters for the device, such as the accelerometer measurement noise standard deviation, gyroscope measurement noise standard deviation, accelerometer bias random walk noise, and gyroscope bias random walk noise. These would help a lot in tuning and optimizing the SLAM pipeline. Thanks in advance for your help!
Hi @user-7b8ffd , cool project!
The IMU used in Neon is documented here. At that link, there is a data sheet that includes the info you are looking for.
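As a pointer for plugging those numbers in once you have them, VINS-style pipelines typically expect the four noise terms you listed. A placeholder block like the one below (the key names follow common VINS-Mono configs and are assumptions; the actual values need to come from the datasheet linked above) can help keep them straight:

```python
# Placeholder IMU noise parameters, named in the style of VINS-Mono configs.
# Values intentionally left as None; fill them in from the IMU datasheet.
imu_noise = {
    "acc_n": None,  # accelerometer measurement noise std  [m/s^2 / sqrt(Hz)]
    "gyr_n": None,  # gyroscope measurement noise std      [rad/s / sqrt(Hz)]
    "acc_w": None,  # accelerometer bias random walk       [m/s^3 / sqrt(Hz)]
    "gyr_w": None,  # gyroscope bias random walk           [rad/s^2 / sqrt(Hz)]
}
```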
Feel free to post results from your experiments in the show-and-tell channel!
Hi, when I use Neon, I notice some error in the gaze position which depends on who uses it. Some people's gaze positions tend to be to the left of the actual positions, while others may be to the right or above. So I'm wondering, is there a calibration that can be done to mitigate this issue when Neon is used across different users? Thank you!
Hi @user-851979! Do you mean in terms of screen-mapped gaze, as I understand you were working with the real-time screen-gaze package, or gaze estimates as they appear in the Companion app?
I mainly looked into the screen-mapped gaze from the real-time screen-gaze package, but I think it's both in terms of screen-mapped gaze and gaze estimates as they appear in the Companion app. I've only checked with the center of the circle in the Companion app's video stream.
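While the details get sorted out, one generic mitigation for per-user bias (a sketch of a common approach, not an official Pupil Labs calibration routine) is to estimate a constant offset from a few samples where the wearer fixates known screen targets, then subtract it from subsequent screen-mapped gaze:

```python
# Generic per-user offset correction for screen-mapped gaze; a sketch of a
# common approach, not an official Pupil Labs procedure. Coordinates are
# assumed to be normalized screen positions.
import numpy as np

def estimate_offset(measured_xy, target_xy):
    """Mean offset between measured gaze points and the known fixation targets."""
    return np.mean(np.asarray(measured_xy) - np.asarray(target_xy), axis=0)

def correct(gaze_xy, offset):
    """Apply the constant offset to new gaze samples."""
    return np.asarray(gaze_xy) - offset

# Illustrative usage: the wearer fixated three known on-screen points.
targets  = [(0.5, 0.5), (0.1, 0.1), (0.9, 0.9)]
measured = [(0.47, 0.52), (0.07, 0.12), (0.87, 0.91)]
offset = estimate_offset(measured, targets)
print(correct((0.30, 0.40), offset))
```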
Handling Pupil Core data recorded from different applications.