Hi all, just thought I should ask this
Hi everyone! I am working on setting up Pupil Core eyetrackers to receive annotations from MATLAB Psychtoolbox stimuli, demarcating trial onset + changes with timestamps, and am running into this error when I run the example "pupil_remote_control.m" script.
I have downloaded matlab-zmq. When I run "addpath('~/MATLAB/matlab-zmq')", I get this error: "Unable to resolve the name 'zmq.core.ctx_new'. Error in pupil_remote_control (line 23) ctx = zmq.core.ctx_new();"
When I run "addpath(genpath('/Users/knix/Documents/MATLAB/matlab-zmq'))", I get this error: "Error using zmq.core.ctx_new Execution of script ctx_new as a function is not supported: /Users/knix/Documents/MATLAB/matlab-zmq/lib/+zmq/+core/ctx_new.m Error in pupil_remote_control (line 23) ctx = zmq.core.ctx_new();"
Any tips for how to get MATLAB to recognize zmq?
For reference, the script has been added to my MATLAB path, but when I run the line "ctx = zmq.core.ctx_new;" the error message still says: "Execution of script ctx_new as a function is not supported".
Running the line "zmq.core.ctx_new" or "zmq.core.ctx_new()" alone does not return any errors -- just runs that script. But it seems like storing it as the variable "ctx" as the pupil_helpers MATLAB script suggests returns this error.
I am trying to load a recording folder into pupil player and am getting the error: "ValueError: Cannot load file containing pickled data when allow_pickle=False". I am also unable to load and view any of the camera videos using VLC media player (all of my other recordings load fine, and my videos play on this system). Is this recording corrupted in some way?
Pickled Loading Problem
No video on Mac - I've tried the fix in the terminal that was reported on Github, but I'm still not getting video from my Pupil Core glasses. I'm on Ventura 13.2.
I appreciate any suggestions! The fix worked when I was on Monterey, but isn't working now.
Hi! I'm new to Pupil and I would like to know if there is a way to get the calibration data taken in Pupil Capture, i.e. the precision error as it appears in the script. Is it possible? Thanks!!
Hey, the calibration data is stored as part of the recording (for post-hoc analysis) and is announced via the Network API (for realtime analysis). Which of the two are you interested in?
Thank you for the quick answer. I am most interested in angular accuracy. How could I obtain this value from previously recorded videos?
You can recalculate the accuracy+precision for a set of Pupil Core recordings using this tool https://github.com/papr/pupil-core-pipeline
I will explore it! Thank you!
No video on Mac - I've tried the fix in the terminal that was reported on Github, but I'm still not getting video from my Pupil Core glasses. I'm on Ventura 13.2. I appreciate any suggestions! The fix worked when I was on Monterey, but isn't working now.
Hi, can you share the exact terminal command that you executed and its output?
Sure, it will take me a minute to set things up to get the output, but the command I used was "sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture"
It's working now...I replicated exactly what I did last night, but it's working now. Problem solved I guess!
@user-89d824, it would also be worth gently cleaning the camera pinhole lenses with a microfibre cloth to ensure there's nothing causing a blur
Hi everyone, I have a problem with my Pupil Labs Core: the wires on the camera connector broke. Does someone know the reference of the connector? Because I am not equipped for such small soldering...
Please contact info@pupil-labs.com in this regard
Hi Neil, thanks!
Hey @papr how do eye0 and eye1 timestamps map to the pupil_timestamp in the exported pupil_positions.csv?
Hey @user-eeecc7. Those timestamps are equivalent to Pupil Time. When an eye video frame is received by Pupil Capture it's assigned a Pupil timestamp.
Another question, but this one has got more to do with using Unity with Pupil Core.
Basically, I need the participant to start recording by clicking on a button in Unity, which will start the recording in Pupil Capture. The way I've done this is: In the RecordingController script, I've changed the startRecording and stopRecording booleans to public bools.
And then I created two functions in PressRScript.cs (see attached) which maps to button presses in Unity.
In Unity, I have two scenes - The first scene is where the Recording Controller script is attached to a gameObject. I used the DontDestroyOnLoad() function to keep this gameObject from being destroyed when we transition to the second scene, during which the first recording will end, and start (and end) an additional two times. In total, that should give me 3 separate recordings.
I would say this method works about 70% of the time, but sometimes it doesn't work after we transition to the second scene and it throws the attached exception. I then have to press the R key on the keyboard manually to end/start the recording.
Any idea on what's going on please or if there's a better way to do what I need to do, i.e., have the participant start/end Pupil Capture across different scene transitions by clicking on a button in Unity?
We can't offer much advice in the way of makeup I'm afraid
Hi @user-d407c1, could you help me with this problem please?
Hello, I have a question about Pupil Core and Invisible.
Can I get papers that use Core and Invisible?
Also, can I get some videos about Core and Invisible,
something that introduces the products simply?
@nmt @user-d407c1 @papr Hi, sorry about yesterday. I'm trying to use Python with a framework to customise and visualize the streamed data from the Core glasses connected to my PC by USB! Can you suggest some libraries and example code to do that? Or any solution that helps me collect real-time data from the glasses and visualize it with Python on my PC?
Hi @user-277de7 Did you have a look at the Network API? https://docs.pupil-labs.com/developer/core/network-api/. Also have a look at https://github.com/Lifestohack/pupil-video-backend
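If it helps, here is a minimal receive-loop sketch in Python (pyzmq + msgpack, following the pattern used in the pupil-helpers examples; it assumes Capture runs locally with Pupil Remote on its default port 50020):

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the port of the data SUB socket
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    # norm_pos is the gaze point in normalized scene camera coordinates
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])

From there you can feed the samples into whatever visualization you like.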
Hello there. I just upgraded to a more recent Pupil Capture (3.5) version, and now the LSL stream is empty. The stream is found, and all channels are there, but the sampling rate is also specified as 0 Hz. Is this a known issue? I used the search function and didn't find any comparable cases. Do you have an idea how to solve this potentially? Best regards -Kilian
Hey @user-9f3df4. Can you confirm that the Core system was being worn and a calibration had occurred?
Hi @user-3aea81. We don't have a video introducing Core and Invisible, but you can check out the website for lots of information: https://pupil-labs.com/products/. We also have a publication list on the website: https://pupil-labs.com/publications/. If you have any specific questions, feel free to ask here!
thanks for the reply ^^ I got the point
Yes, it was the core system. We are only using pupilometry, so there was no further calibration than the head movement to fit the eyeball model. But the interesting parameters (3D Pupil diameter) are fine in the debug mode for the eyes.
Can you also confirm that you're using the latest LSL relay version: https://github.com/labstreaminglayer/App-PupilLabs/releases/tag/v2.1 ?
I will check this and come back to you tomorrow.
Hi Pupil Labs team and community, are there any tips or guidelines which lighting conditions (i.e., how bright the room should be) are best to get meaningful and reliable pupil size measurements? Many thanks in advance! Johannes
I have one more Q: can I check how consistently someone looks at an object? Like how long my eyes stay on the object.
We provide gaze points and fixations in scene camera coordinates out of the box, if you know where an object is you can use those coordinates to understand whether they looked at an object or not. To track the object on the scene camera, you can use any video segmentation tool of your liking, but it will require some processing from your side. There are software providers like iMotions that offer turnkey solutions for tracking AOIs and provide you with gaze statistics both for Core and Invisible.
Using Invisible/Neon you can also use our reference image mapper (https://docs.pupil-labs.com/invisible/enrichments/reference-image-mapper/) or, if you would like to track an object on a screen, you can use the marker mapper (https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/) and define areas of interest (AOIs) in a 2D image to track how long you looked at an object (check out this link: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/). Additionally, more AOI tracking features are also coming in the future.
Hello, can I ask a question? In the fixation export, what do norm_pos_x and norm_pos_y mean? I thought they were the fixation point (0 to 1), but my exported data has values of (-2 to 3).
Those are the normalised coordinates of the fixations in scene camera coordinates. As you can see here (https://docs.pupil-labs.com/core/terminology/#coordinate-system), the origin is at the bottom left and values go from 0-1. Some values can fall outside 0-1, as the eye cameras may cover a bit more range than the scene camera, but values should nonetheless stay close to the 0-1 range. Was there a successful calibration on the recording? Was the calibration performed covering an ample section of the scene camera, or was it a single point calibration (with no movement)?
Oh, maybe I found the answer in the documentation. Values outside 0-1 are outside the calibrated area, right?
Every calibration was successful. Then, are norm x, y of fixation.csv and norm x, y of gaze_positions.csv the same?
They are the same but one provides the information for gaze points while the other one provides it only for detected fixations. If you would like us to have a look at the recording please share it with data@pupil-labs.com and refer to this conversation.
Matlab ZMQ MacOS
LSL
Hi, I have some basic questions on how the preprocessing is set up in the built version. Are the exported files filtered (i.e. bad samples or low frequency etc.)? Is there some documentation to support these choices? Also, I want to implement saccade detection code, but I am not sure how to do that in the pre-built version. Do I need to build from source?
thanks for the reply. this picture is my point
The example will only work with Invisible recordings + downloaded RIM enrichment results. If you want something similar for Core, you'd have to use the Surface Tracker analysis tool and do some modifications to the code.
it's only for Invisible????? if not, can I make it from open source?
and can I make it with Pupil Player or Pupil Capture?
I have replied here https://discord.com/channels/285728493612957698/285728635267186688/1072423129214881873
My apologies, I don't know why I missed it!
Hi!
The exported files are not filtered, but we often recommend discarding samples with a confidence of 0.6 or lower before preprocessing the exported data any further. You can read about confidence here: https://docs.pupil-labs.com/core/terminology/#confidence. This value is measured from the number of pixels on the circumference of the fitted pupil ellipse.
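For example, the confidence filtering step could look like this in pandas (the export path and column names follow a typical Pupil Player export, so adjust them to your own folder):

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")

# discard low-confidence samples before any further preprocessing
pupil = pupil[pupil["confidence"] > 0.6]

# optionally keep only the pye3d rows if you work with diameter_3d
pupil_3d = pupil[pupil["method"].str.contains("pye3d")]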
Re. a saccade filter, working with data post-hoc would be easiest. It mightn't be necessary to run from source in this case. For a real-time implementation, developing a Plugin and running from source would be recommended. Check out the Plugin API docs: https://docs.pupil-labs.com/developer/core/plugin-api/#api-reference
Note that a while back there was a community-contributed saccade detector: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing. Not sure if it will still work, but worth a try.
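If that detector no longer runs, a simple velocity-threshold pass over the exported gaze data is a reasonable post-hoc starting point. A rough sketch (column names are from gaze_positions.csv; the threshold is an arbitrary placeholder that you would need to tune, ideally after converting to degrees per second):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze = gaze[gaze["confidence"] > 0.6].sort_values("gaze_timestamp")

t = gaze["gaze_timestamp"].to_numpy()
x = gaze["norm_pos_x"].to_numpy()
y = gaze["norm_pos_y"].to_numpy()

# sample-to-sample speed in normalized scene coordinates per second
dt = np.diff(t)
dist = np.hypot(np.diff(x), np.diff(y))
speed = np.zeros_like(dist)
valid = dt > 0
speed[valid] = dist[valid] / dt[valid]

THRESHOLD = 1.0  # placeholder value, tune for your setup
is_saccade_sample = speed > THRESHOLD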
Yes, that's what I did. I only exported data at a confidence of 0.80, so the exported surface fixation values (what I am interested in) won't have anything below that. Okay, thanks for sharing the links. I will be implementing the saccade detector code post-hoc, so I will take a look and see where to insert this code so I get saccades during export.
Ah... I see ^^ thank you so much
Note that with the Surface Tracker Plugin, you can drag and drop AOIs within Pupil Player. Read more about it here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker So actually all you'd need to do after running the plugin and exporting the results is compute the metrics (e.g. dwell time) and generate the plots.
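As an illustration, the dwell-time computation could look roughly like this (the file name and the on_surf/fixation_id/duration columns are based on a typical Surface Tracker export for a surface named "Projection"; adjust to your own surface name and check the duration unit in your export):

import pandas as pd

fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Projection.csv")

# keep fixations that actually landed on the surface, one row per fixation id
fix = fix[fix["on_surf"].astype(str) == "True"].drop_duplicates("fixation_id")

print("fixation count:", len(fix))
print("total dwell time:", fix["duration"].sum())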
Hi Pupil Labs team and community, are there any tips or guidelines which lighting conditions (i.e., how bright the room should be) are best to get meaningful and reliable pupil size measurements? Many thanks in advance! Johannes
Hi Johannes! It's best to use a dimly lit environment for pupil size measurements. If your room is too bright, you may not see much difference, as light also makes the pupils smaller. Please check our best practices for pupillometry https://docs.pupil-labs.com/core/best-practices/#pupillometry and I recommend that you also read https://link.springer.com/article/10.3758/s13428-021-01762-8. Furthermore, you can adjust the eye camera exposure settings in each eye window to achieve better contrast between the pupil and the surrounding regions of the eye (namely the iris).
Oh great, thank you so much - very helpful indeed!
Hello! Is there any way to send a hardware trigger, such as a TTL pulse, to the Pupil Core glasses to synchronise the gaze data with other sensors (EEG, ECG...)?
Hi @user-c76d64! There is no out-of-the-box TTL signalling. However, you can use ourΒ Network API (https://docs.pupil-labs.com/developer/core/network-api/)Β to control/send events to Pupil Capture. It might be possible to combine this with something like aΒ USB to TTL sensor (https://pcbs.readthedocs.io/en/latest/triggers.html#dlp-io8-g), or even an Arduino, to send triggers. This library could be helpful for you https://pyserial.readthedocs.io/en/latest/pyserial.html
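As a rough sketch of how such a combination could look in Python (modelled on the remote-annotation example in pupil-helpers plus pyserial; the serial device path, baud rate and label are placeholders, and the Annotation plugin needs to be enabled in Capture for the annotation to end up in the recording):

import time
import zmq
import msgpack
import serial  # pyserial

ctx = zmq.Context()

# Pupil Remote (REQ) is used to look up the PUB port and to query Pupil time
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect

pupil_remote.send_string("t")  # current Pupil time, so the annotation timestamp matches
pupil_time = float(pupil_remote.recv_string())

ttl = serial.Serial("/dev/ttyUSB0", 115200)  # placeholder device path

# fire the TTL pulse and the software annotation back to back
ttl.write(b"\x01")
annotation = {
    "topic": "annotation",
    "label": "ttl_trigger",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))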
Thanks for your answer. Yes, that could be a solution, but as we are working with EEG sensors we need really high time precision when sending these triggers. We want both triggers, the one sent to Pupil Core and the one sent to the EEG sensors, to be received at the exact same time. The idea is to avoid any time desynchronisation between the triggers of the two devices, in order to then synchronise the gaze and the EEG data based on the trigger events.
For this type of data synchronisation, Lab Streaming Layer (LSL) could help you. Have a look at https://labstreaminglayer.readthedocs.io/info/intro.html; most EEG systems on the market are supported and we maintain a relay for Capture https://github.com/labstreaminglayer/App-PupilLabs
Are PupilCore glasses (running Pupil Mobile and streaming to Pupil Capture) compatible with any National Instruments DAQ devices? We are trying to sync our wireless Pupil setup with an existing Qualisys MoCap setup that is streaming data to MATLAB in real-time
Hi @user-2e5a7e There is no official support to connect with DAQ devices, but you can try https://github.com/maltesen/liblsl-LabVIEW.
Hello ! I had some issues with one recording: for some reason, I could not end the recording properly from inside Pupil Player and I had to terminate the process dirtily (I think my laptop froze). It turns out that data from this recording is in a strange state, with two videos files (eye0 and world) being in "writing state" (files have .mp4.writing extensions). I cannot import it in Pupil Player, but I am not sure if it is because I am missing some files, because of encoding issues, etc. Here is the list of the files that I have :
blinks.pldata, eye1.mp4, fixations.pldata, notify.pldata, surfaces.pldata, eye0.mp4.writing, eye1_timestamps.npy, gaze.pldata, pupil.pldata, world.mp4.writing. By any chance, is there a way to salvage this recording?
Hi @user-def465 ! It seems like the data was corrupted due to a force close of Pupil Capture. There are no guarantees that you can recover the data but there are several things that you can try:
Recovering the mp4 files: You can use a tool like https://github.com/anthwlock/untrunc to try to restore them. Please refer to their documentation.
Restoring the pldata files (https://docs.pupil-labs.com/developer/core/recording-format/#pldata-files): You can try reading them as shown here https://discord.com/channels/285728493612957698/285728493612957698/649340561773297664 to restore the data.
To be readable by Pupil Player you would still need an info.player.json file, which you can generate with https://gist.github.com/papr/bae0910a162edfd99d8ababaf09c643a#file-generate_pupil_player_recording-py-L72-L90 (note that you will need to adjust start_time_synced).
Alternatively, you can read the pldata files directly with https://gist.github.com/papr/81163ada21e29469133bd5202de6893e
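For reference, reading a pldata file directly with msgpack looks roughly like this (a sketch based on my understanding of the recording format linked above; compare with Pupil's own file_methods module if in doubt):

import msgpack

def load_pldata(path):
    data = []
    with open(path, "rb") as f:
        unpacker = msgpack.Unpacker(f, raw=False, use_list=False)
        for topic, payload in unpacker:
            # each payload is itself a msgpack-encoded dict
            data.append((topic, msgpack.unpackb(payload, raw=False)))
    return data

for topic, datum in load_pldata("gaze.pldata")[:5]:
    print(topic, datum.get("timestamp"))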
Thank you! I will try these methods.
hey team, this might be a stupid question, but i cannot manage to add a vis circle plugin? I redownloaded capture, but vis circle is just not there? could you help me out?
Hi @user-c8e5c4 ! vis_circle is part of Pupil Player, to see gaze in Capture, you only need to perform the calibration
aah, I see, thank you!
Hi team. Marco from Italy here. One question: after some time I've restarted making some tests with Invisible. Usually, after uploading to Cloud, I download the raw data to adjust and see something more in Player. But there's nothing on the video. Has something changed since my last usage? Or is it some issue on my side? Thanks
Ciao @user-6e0242! First of all, please ensure that you are downloading the raw data in the Pupil Player format by right-clicking on your Invisible recording and then selecting the proper format. Second, make sure you have the latest version of Pupil Player on your computer (https://pupil-labs.com/products/core/). Let me know if it does the trick!
Thanks. My error was to download from the project instead of from the single video, so it wasn't in the right format. Thanks!
From the analysis perspective, does it matter if some of the recordings were preprocessed using Player 2.5.0 and others (that couldn't be processed with 2.5.0) were processed using 3.5.0? I am just trying to anticipate, from a methods perspective, whether a reviewer might find it problematic.
@user-908b50, It depends what data they contain. More specifically, what version of Pupil Capture were your recordings made with? A new 3d geometrical eye model (pye3d) was released in Core software v3.0: https://github.com/pupil-labs/pupil/releases/tag/v3.0
Hi everyone! Excited to be part of the community. I am currently trying to understand how the pupil core eye tracker can be setup together with a portable eeg. Can someone give me any tips, links to documentation or even video? This is my first time with such project
Hi @user-5ba46b! Combining the Pupil Core eye tracker with a portable EEG is a great idea, as it can provide a more comprehensive view of the user's cognitive state. You can check if your EEG is compatible with Lab Streaming Layer https://labstreaminglayer.readthedocs.io/info/supported_devices.html, and use the LSL plugin https://github.com/labstreaminglayer/App-PupilLabs
Otherwise, you can also use the network API https://docs.pupil-labs.com/developer/core/network-api/ to send annotations to synchronise both systems.
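If you go the LSL route, pulling the Capture stream into Python would look roughly like this (a sketch; the stream name "pupil_capture" is an assumption, so check what the relay actually announces on your machine):

from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("name", "pupil_capture", timeout=10)
if not streams:
    raise RuntimeError("No matching LSL stream found - is the Capture LSL relay running?")

inlet = StreamInlet(streams[0])
while True:
    sample, lsl_timestamp = inlet.pull_sample()
    print(lsl_timestamp, sample)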
Hi all, is there any information regarding the common pupil size range captured by PupilCore?
I'm running some tests and I'm finding pupil responses in my experiment ranging between 1.5-2.5 mm (even in a dark room), which seems below what is reported in the literature (Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis):
Pupil size varies between roughly 2 and 8 mm in diameter (MathΓ΄t, 2018; Pan et al., 2022), depending mainly on the amount of light that enters the eye.
I'm wondering if you could give me some advice. Thanks in advance!
Hi all, Can someone explain to me why in the data export file, all cells from the 'diameter' column are populated, while the cells from 'diameter_3d' are populated intermittently, appearing in one cell and then not in the next?
Thanks a lot!
Which version of Python do I need to use Pupil?
Hi @user-ae54c3! To run pupil from source you need Python 3.7 or newer
Hi, Pupil team, I want to experiment with the Pupil Core, but I just started using it and there is still a lot I don't understand. I need your help! At present, I have encountered three problems. The first problem is calibration: although I have calibrated, there is still some deviation. What should I do? The second is that during use my fixation point is very unstable, or shaky, which is not consistent with the fact that I am looking at a certain place. Thirdly, the images recorded during use are not clear. Although I have improved them a lot by manually adjusting the focus of the camera, I still want to ask if there are any other ways to improve them.
Hi @user-2ce8eb. The first thing to ensure is good pupil detection. This is the foundation of everything. Be sure to check out this section of the getting started guide: https://docs.pupil-labs.com/core/#_3-check-pupil-detection
Hi Pupil team @user-d407c1 @papr @user-c2d375, I want to visualize real-time gaze and surface data from Pupil Core with cv2. Which message topics should I use, and do you have any examples?
Hey @user-277de7. We'd be super grateful if in future you could avoid tagging specific/multiple members of our team in your questions, unless you're replying to a message that is! This will also encourage other members of the community to get involved.
Hi @user-277de7 check out https://docs.pupil-labs.com/developer/core/overview/#surface-datum-format for the topic and field names
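A rough sketch of such a visualization (same connection pattern as the Network API docs and pupil-helpers, subscribed to the surface topic; the canvas size is a placeholder and the field names follow the surface datum format linked above):

import cv2
import numpy as np
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.")  # one topic per defined surface, e.g. surfaces.my_surface

W, H = 800, 600  # preview canvas size, not your real surface size

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.loads(payload)
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    for gaze in surface["gaze_on_surfaces"]:
        if not gaze["on_surf"]:
            continue
        x, y = gaze["norm_pos"]
        # surface coords have (0,0) at the bottom left, image rows grow downwards
        cv2.circle(canvas, (int(x * W), int((1 - y) * H)), 10, (0, 0, 255), -1)
    cv2.imshow(surface["name"], canvas)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break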
Hello @user-d407c1, we are currently working on a school project with the Pupil Labs eye tracking system, but we have faced a problem. Could you please explain to us how to retrieve the gaze coordinates? Thank you for your answer.
Hi @user-1fa7e1 It's hard to provide feedback without further context. What is the error that you faced? Did the error occur in Pupil Player or in Pupil Capture? Could you share some logs? If you prefer, or if it may contain sensitive data, you can contact us through info@pupil-labs.com
No, we are trying to retrieve the gaze coordinates through Python and not pupil player (@user-d407c1)
Check out Core's realtime api: https://github.com/pupil-labs/pupil-helpers/tree/master/python
Pupil size estimates are provided in mm by pye3d, which is a 3d geometrical eye model. The output can depend on a lot of factors, such as pupil detection, model fitting and headset slippage. Intermittent population suggests a noisy model. I'd recommend reading our pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hey, thanks a lot for your response!
I'm attaching a screenshot of the pupil_positions.csv There are two rows for each timestamp, one for '2d c++' and one for 'pye3d 0.3.0 real-time' methods. The 'diameter_3d' column is only filled for the 'pye3d 0.3.0 real-time' method, as expected, however, the 'diameter' column is filled in both rows, resulting in different diameters for the same timestamp. Which I'm finding weird. Is this correct?
Edit: it ends up leading to two different frame samples
We have not used an artificial mechanical eye, but rather evaluated gaze directions with respect to synthetic ground-truth eye images: https://docs.pupil-labs.com/developer/core/pye3d/#academic-references
OK, thank you for the references!
Hi, Neil! Has your team released the synthetic ground-truth eye images or the synthetic model mentioned in "A fast approach to refraction-aware eye-model fitting and gaze prediction"? If not, do you have a plan to release them? This is important for us to learn your papers and methods in depth.
Hello! Not sure if I am posting on the correct channel. I have a question for the coordinates of the surface tracker. As you can see in the screenshot, we have a project with 6 tags (4 in the corners and 2 in the long sides). I used the pupil player (workflow: pupil invisible -> pupil cloud -> download raw) to calculate the surface tracker. I am now working with the data from the 'surfaces' subfolder. I have two questions I would like to ask:
to obtain the gaze coordinates on the surface, I use the file "gaze_positions_on_surface_Projection.csv". What is the difference between x_norm and x_scaled? Where exactly is the (0,0) of the rescale? We want to reverse engineer what pixel people are looking at, so we need to be precise about whether the surface includes the whole tag or its midpoint, etc.
The file fixations_on_surface_Projection.csv is empty. Is this expected behaviour? I thought we can now get fixations for Pupil Invisible too.
Thanks!
Hi @user-94f03a x/y_norm represent the x/y coordinates of the gaze point on the surface, while x/y_scaled are the same coordinates scaled according to the surface dimensions manually defined in Pupil Player (in your case, the width and height parameters within the "Projection" surface section). (0,0) is the bottom left corner of the surface and (1,1) is the top right corner. The file 'fixations_on_surface_Projection.csv' is empty because the fixation detector plugin is not available for Pupil Invisible recordings.
I would suggest to use the Marker Mapper enrichment in Pupil Cloud to obtain fixation data within the surface (https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/)
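To reverse engineer the pixel on your reference image, a small sketch (it assumes your reference image spans exactly the defined surface; the image size is a placeholder):

def surface_norm_to_pixels(x_norm, y_norm, img_w, img_h):
    # surface coords: (0,0) bottom left, (1,1) top right; image row 0 is at the top,
    # hence the vertical flip
    return x_norm * img_w, (1.0 - y_norm) * img_h

print(surface_norm_to_pixels(0.5, 0.25, 1920, 1080))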
Hi pupil people,
I have been asked if the smoothing process that I use for my pupil plots is causal or not.
I smooth my data at two stages:
first, using a low-pass filter while preprocessing, like this:
import scipy.signal

def _smooth(x, data_freq):
    # 4th-order Butterworth low-pass with a 4 Hz cutoff, applied forward and
    # backward (zero-phase) with sosfiltfilt
    LpFilt_cutoffFreq = 4
    LpFilt_order = 4
    lowpass_filter = scipy.signal.butter(
        LpFilt_order,
        LpFilt_cutoffFreq,
        fs=data_freq,
        output="sos",
    )
    result = scipy.signal.sosfiltfilt(lowpass_filter, x)
    return result
and in the end, I also smooth my plot using convolution, like this:
import numpy as np
smoothbin = 2
# centred moving average over smoothbin samples; 'same' keeps the output length
signal_avg_conv = np.convolve(signal_avg, np.ones(smoothbin) / smoothbin, 'same')
An example of my plots is attached.
Do you have any idea how to do "causal smoothing"?
I assume it means for each data point, I don't use future data points for smoothing.
also about this plot, has anyone experience with double peaks after viewing some stimuli?
Hi @user-98789c Maybe one possibility would be to use the Savitzky-Golay filter (a type of moving average filter that uses a least-squares polynomial fit to smooth the signal). It can be applied using the savgol_filter function from the scipy.signal library. The mode argument of this function specifies how the signal is padded with additional values at the edges. When mode is set to "nearest", the padding values are simply copies of the closest available data points. This effectively means that the smoothed signal is not calculated based on future values, which I understand might be what you are looking for. But note that there might be other techniques more appropriate for causal smoothing that I am not aware of.
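If strict causality (no future samples at all) is the hard requirement, two simple options are to run the same Butterworth filter forward-only with scipy.signal.sosfilt (sosfiltfilt filters forward and then backward, which is exactly what makes it zero-phase but non-causal), or to use a one-sided moving average. A sketch with placeholder cutoff/order/window values:

import numpy as np
import scipy.signal

def smooth_causal(x, data_freq, cutoff_hz=4, order=4):
    sos = scipy.signal.butter(order, cutoff_hz, fs=data_freq, output="sos")
    # forward-only filtering: each output sample depends only on past/present input,
    # at the cost of some phase lag compared to sosfiltfilt
    return scipy.signal.sosfilt(sos, x)

def moving_average_causal(x, n=5):
    # average of the current sample and the n-1 previous ones
    # (the first n-1 outputs are attenuated by the implicit zero padding)
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="full")[: len(x)]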
just for your information, @user-480f4c and anyone interested, I applied scipy.signal.savgol_filter and the result is not different from applying the scipy.signal.butter filter that I was using...
thanks a lot @user-480f4c I'm looking into it
Our lab has been using the Pupil Core headset on young infants, who have a smaller eye radius than a typical adult. We would like to compensate for that difference. Can you tell us how the radius might be used in the 3d eye model, so that we can scale our final eye movement data correctly to the infant eye size?
Hi @user-b9005d. We have discussed this case internally and have a few notes:
Eyeball size is an important parameter for determining eyeball position (our ETRA 2019 paper provides details about how it enters the calculations). It is thus conceivable that adjusting it in pye3d (our 3d eye model) will result in more accurate estimates for children. However, we have never tested/verified this.
Conceptually, it should be possible to change the parameter and re-run pupil detection + the 3d model in a post-hoc context using the raw recordings containing eye videos etc. We don't think such changes could be made to the exported csv data, however.
Refraction correction has a fixed diameter built in. So, while you could change the appropriate parameter in pye3d in order to accommodate children, you could not use the refraction correction we provide.
Hope this helps!
Hi Neil,
I sent the email last Monday. The title is, "Pupil detection problem and Pupil Capture closing itself mid-recording"
May I know if you've received it please? Thank you.
Hey! Just followed up to your email.
Hi Pupil Labs team! I want to make my own eye-tracking headset from scratch, and I hope you can give me some advice and help. Can you tell me which Python libraries you use, how to lock onto the iris, how to use two cameras at the same time, and how to use the eye-tracking results (in the form of red dots) to mark the object displayed by the other camera? Thank you
Hey @user-84cfa7. Recommend checking out Pupil DIY in the first instance: https://docs.pupil-labs.com/core/diy/
Hello PupilLabs I would like to pre-order the "Is this thing on?" Is there a way to do this? When I add to cart I only get the "just act natural".
Is there anybody out there?
Hi @user-02de1f. Note that most of the Pupil Labs team here on Discord are based in Europe, so timezones don't always overlap. Re. 'Is this thing on?' - we aim to start shipping that near the end of Q2 this year, but we don't have a concrete date.
Hello, I wanted to ask about the 30-minute video call onboarding workshop that is advertised on your website. May I know how we can arrange this workshop?
Please reach out to [email removed] in this regard
I noted you've also emailed us. A member of the sales team will respond there as well!
Is there a way to batch export a folder of files using a shell script that can interact with Pupil Player?
Hey! See this message for reference: https://discord.com/channels/285728493612957698/446977689690177536/839399327989891093
Hi, I have been looking into some of the tutorials and it seems the one where the surface fixation values are plotted onto the surface image is different. There, you transform those values into pixel space. I did not. I just flipped the y axis and plotted them onto my reference image. Then, I counted the number of fixations in each area of interest on my image. These values are then averaged across all participants for each AOI and used in models.
I am wondering what is the purpose of transformation. Will I get incorrect surface mapping otherwise?
So this is what I am getting (sans transformation).
Code
Hello @user-d407c1 ! Thanks again for your previous suggestions.
I am now working on calling Pupil Core commands in MATLAB via matlab-zmq, following the documentation at https://github.com/fagg/matlab-zmq, using MATLAB R2022b and MEX configured to use 'Microsoft Visual C++ 2019 (C)'. I tried using other compiler options (MinGW64 Compiler (C)) and it didn't work.
I believe there is a problem with my MEX configuration, but I haven't seen documentation online for this specific error, which comes up when I run the 'make' script (the config.m options that I tried are in the attached screenshots, as well as my ZeroMQ 4.0.4/lib directory for reference). Any tips/workarounds for resolving the MEX issue? Thank you!
Hi @user-cdb45b. We're unable to provide support with MATLAB-specific issues, I'm afraid. It would be worth reaching out to the authors of matlab-zmq
How do I find the sampling rate of my eye tracker?
You can look at the total number of samples compared to the duration of your recording in the exported files
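Concretely, something like this (assuming the gaze_positions.csv export and its gaze_timestamp column):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
duration_s = gaze["gaze_timestamp"].iloc[-1] - gaze["gaze_timestamp"].iloc[0]
print("effective sampling rate:", len(gaze) / duration_s, "Hz")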
I'd like to add that I am getting different fixation counts in each AOI of my image when exporting data with the 2.5.0 Player versus the newer 3.5.1. So, I would need to export all of the data with the same Player version.
It's worth double-checking that the fixation detector thresholds are set the same in both versions
It's the same
Fixations
Hi everyone. I want to know how I can see the data that Pupil Labs recordings give me regarding fixations and other measures, since I do not have a program that allows me to view files in ".npy" or ".pldata" format. Thank you
Hi @user-a11557. You can load the recordings in Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player
Hi. For the gaze_positions.csv that is stored through Pupil Capture, I want to know at how many Hz the data is stored.
For example, how many samples are stored per second.
I wonder if there is a way to reduce the rate (Hz) depending on the user environment.
Please see this message for reference. Note that if you want to have a specific amount of data for a given environment, I'd recommend downsampling after the recordings have been made.
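A rough downsampling sketch for the exported data (the target rate and columns are placeholders; it assumes gaze_positions.csv):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").sort_values("gaze_timestamp")
t = gaze["gaze_timestamp"].to_numpy()

target_hz = 60  # whatever rate your environment needs
t_new = np.arange(t[0], t[-1], 1.0 / target_hz)

# interpolate each signal of interest onto the regular time base
x_new = np.interp(t_new, t, gaze["norm_pos_x"].to_numpy())
y_new = np.interp(t_new, t, gaze["norm_pos_y"].to_numpy())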
Hi, I'm using a Pupil Core. There was no problem when using it indoors without turning off the lights, but when using it outside, the gaze is not detected properly. This seems to be because the eye camera is an IR camera, so it cannot properly detect the pupil due to sunlight. I wonder if the reason I mentioned is correct. Also, I wonder if there is any way to use the Pupil Core outside (with good or better gaze detection performance) other than using it after sunset.
The eye images can be washed out by sunlight. It would be worth reducing the eye camera image exposure time for bright environments
Hi! I've recorded data from two gaze experiments in one sitting via Pupil Core glasses. My plan was to split the gaze recordings into two separate files via Pupil Player, then apply some optimized post-processing per experiment (fixation detection etc.). However, I cannot load the exported (i.e. split) files back into Pupil Player, which seems to imply that I first have to apply all possible post-processing before exporting/splitting files. Do you have any suggestions for a workaround? Thanks in advance!
You can choose to export specific sections of your analyses for post processing. Just drag the ends of the timeline in Player (see screenshot) before export. Splitting the recordings like you suggest is not a recommended workflow.
Hi Pupil Labs I am trying to understand the coordinate system used for the exported 2d data. I have two basic questions: 1. Is the coordinate system calculated from the calibrated area? 2. What is the coordinate system for the exported 2d data? I read it is 0,0 in bottom left corner and 1,1 in top left corner. I am confused about this because the world camera is 1280x720 pixels (so not a 1:1 ratio). Can you please help with this basic question? Thanks in advance
Hey @user-660f48. 2d gaze data are relative to the scene camera image, not the calibrated area. You can read more about that here: https://docs.pupil-labs.com/core/terminology/#coordinate-system. Note that a square aspect ratio is not a prerequisite for normalized coordinates.
Hi, can we use the Pupil Core to achieve the functionality shown in the picture?
You could potentially do something similar with Core, but you will need to use AprilTags and the Surface Tracker. https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
The reference image mapper is only available for Invisible.
Does anybody have a solution for using PySerial in a plugin with the 3.5.1 binary release under Windows 10?
Hi @user-3578ef. Welcome to our community! Would you be able to share more details about what exactly you're trying to achieve with PySerial + Pupil Capture?
are there any Docker image files that work for running an image in Docker on a M1 Mac? I've been trying to make one, however it fails due to issues like installing cysignals
Hi, I was wondering how the confidence in the gaze_positions.csv file is calculated / what it stands for. I have tried to compare it with the pupil_positions.csv file but can't really see the connection. Is it the mean of the left and right eye confidence? I know the meaning of the confidence of one eye and how it's calculated, but I'm not sure concerning the gaze data.
Hi @user-ffe6c5 Gaze confidence is derived from pupil confidence. This is achieved by calculating the mean of the confidence values of the corresponding pupil data that is used to compute the gaze datum.
thank you @user-c2d375 ! Can you explain briefly, how the corresponding pupil data for a gaze datum are chosen? I noticed that most pupil_timestamps can be found as gaze_timestamps but some are missing in the gaze_positions.csv file and some gaze_timestamps are new (compared to the pupil_positions.csv file)
Please take a look at our documentation for an overview about pupil data matching https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching
@user-c2d375 right, i totally forgot about that. Thank you very much!
Hi there! I have one of these homemade eye trackers in my lab (https://hackaday.com/2013/02/12/build-an-eye-tracking-headset-for-90/) and I am trying to figure out if it still works. I plugged it into the computer and tried to run the Pupil Core software with it, but it did not work. My question is whether there is another software to run this and, if so, where can I find it?
Hi @user-5ba46b. How long has that been lying around? Always fun to see old projects. First thing to check is whether the cameras are detected by your computer's OS.
I have a Pupil Core headset and the world camera was taken apart, and now we can't figure out how to put it back together. Is there any documentation that you can provide to help put this camera back together?
We don't have documentation in this regard as it is not designed to be dismantled.
Hello. I have three Pupil Core eye trackers. One of them works well. Another displays the world camera view through Pupil Cam 1 ID2, but the eye cameras can only display Pupil Cam 1 ID2 (not Pupil Cam 2 ID0). I tried uninstalling all the devices, restarting the computer, and restoring default settings in Pupil Capture, and it still does not show Pupil Cam 2 ID0 (though it does show up as a hidden driver). Is there anything I can do about this? The third eye tracker throws a "USB device not recognized" error every time I plug it in (even though I use the same cables as for the other two eye trackers). Are there any steps I can take to quickly remedy at least one of the other eye trackers?
Hi @nmt , I'm currently working on a project using the pupil core eye tracker and just wanted to ask you about the sampling rate. I noticed a few people have asked about this before, but I am seeing some discrepancies in the sampling frequency for the data I am collecting. Are these discrepancies due to the pupil player processing or pupil capture settings I have?
Note that Core's sampling rate can be variable depending on things like cpu load and user-set camera resolution. If you can elaborate a bit on what you mean by discrepancies I'll be able to offer something more concrete
The events["surfaces] code, which used to work well, has been used in ROS. Is there an internal problem?
this code here:
surface_name="" surfaces=events["surfaces"] print(surfaces) for surface in surfaces: for gaze in surface['gaze_on_surfaces']: gaze_on_surf=gaze['on_surf'] gaze_time_and_surface_name=gaze['timestamp'],surface['name'] gaze_time_and_surface_name=list(gaze_time_and_surface_name) c.append(gaze_time_and_surface_name)
Hi @user-0b9182! I'm not sure I fully understand your question here. Could you elaborate a bit more?
Hi, team! I have two questions about calibration. 1. In the calibration process, you only filter out the pupils with low confidence, but some noise remains even at high confidence. 2. When I did a 2d 5-point calibration (no world camera, only a monitor), I could of course get the correct predicted position with the previous 5 points in the verification phase, but the 4 points used for verification did not perform well. 9 points give a better result.
Hi @user-9f7f1b. Would you be able to explain your setup in more detail, i.e. how did you calibrate without a scene camera?
Hi @nmt , I have a question about the pupil.pldata file. If I generate one myself using a different pupil detection method, do I need to flip the location of the right pupil to compensate for the right video being flipped? If not, is this flip applied when I export the data from the player?
Hi @user-0b9182 what version of Pupil Capture were you using? I mention this because at some point Capture was updated to use msgpack 1.0, and this introduced some breaking changes to old plugins. Let me follow up later with the release notes and the affecting changes.
In Capture 3.0 https://github.com/pupil-labs/pupil/releases/tag/v3.0, the network API changed to be compatible with the msgpack-python 1.0 standards and their "strict_map_key" policy. Every release after that changed to use strings as dictionary keys on all msgpack-encoded data published via the Network API. This change mainly affects binocular 3D data, which was previously referred to by using integers.
Hi @user-2ce8eb! Please avoid hijacking other people's answers. We always try to respond to everyone as soon as possible, so there's no need to mention us or reply onto others' questions.
Firstly, I want to remind you that Pupil Mobile has been deprecated; intrinsics might not get properly corrected when streaming to Capture, so you might see worse accuracy. On accuracy, keep in mind that the accuracy of your results will ultimately depend on your experimental conditions. - If you're using the 3D calibration feature, please remember to roll your eyes beforehand to capture a good 3D eye model. This will help ensure the most accurate results possible.
In both cases, please ask your subjects to move their eyes and try to keep their head still during calibration.
Bonus: You can start the recording before calibration and re-run the calibration in Pupil Player to see which calibration method works best for your specific case.
Oh well. I will not mention you or reply onto others' questions. What I want to say is that I know the Pupil Mobile app has been deprecated, and I just want to do an experiment which involves having the tester use a mobile phone and using the Pupil Core to see where the tester's gaze is on the screen and how the eyes move. The phone's screen appears small in the world video. Right now, I cannot get a good calibration!