What is the difference between Pupil Core and Pupil Neon?
Hi, @user-38b270! 👋
Both Pupil Core and NEON are wearable eye-tracking solutions that we offer (along with Pupil Invisible). Pupil Core was our original product. It's ideal for studies demanding high accuracy or pupillometry. It is fully open-source and can be extended and customized to meet research aims. A wealth of raw data is exposed to the user. Pupil Core requires calibration and a controlled environment such as a lab to achieve the best results. It connects to a laptop/desktop via USB. For more details see https://pupil-labs.com/products/core/
NEON is our newest eye-tracking product. It has a modular design that fits a variety of headsets and utilizes deep learning for calibration-free and slippage-invariant gaze estimation. This allows for long recordings in any environment, with no decrease in data quality. You can even take the glasses off and on without needing to re-calibrate. For more details, see https://pupil-labs.com/products/neon/
Pupil Core has been a reliable tool for researchers since 2014, providing accurate gaze, eye state, and pupillometric measurements. However, to achieve the best results, Core requires a controlled environment such as a lab. Neon's deep learning-powered gaze accuracy matches Core's and even surpasses it in certain conditions. This versatility makes Neon extremely powerful.
And what is the prices difference?
The prices are listed at the top of each product page. Right now, NEON is listed at €5900 and Core is €3440. Academic discounts are available if you're affiliated with an academic institution.
It should be noted that some import fees, tariffs, taxes, or other charges may be applicable depending on your region. You'll want to reach out to us via email at info@pupil-labs.com if you need more specifics about that for your region.
Does that include the software and license? what would be the total for each?
NEON comes with a companion device (included in the price) and uses an app that we make available for free. Recordings and data can be exported from the app and/or uploaded to our Pupil Cloud service (included).
Pupil Core requires an active connection to a PC running our Pupil Core software. We do not provide the PC, but we do provide the software which is free and open source.
You are right, AOIs will be a major part of my research interest, but I still would like to see, for example, their first fixation duration within one of these AOIs. 🙂
Ah, I see. I believe there are data/research that shows that 4000ms is something of an upper limit for fixation duration. I don't have any references readily available, but I think you may find it worthwhile to review the literature in this regard
Hi, I'm using Pupil Invisible for an eye-tracking project. The goal is to have a stream of the Video, as well as the x and y coordinates of the gaze in ROS as two individual messages. The best way to do this, would be to have an android application on the phone that is connected to the glasses, that already sends this data in the right format. Are there any resources from your side, that could assist me to develop such an app? Thanks in advance!
Hi @user-4334c3 👋 - since you're using Invisible, do you mind posting this to the invisible channel instead?
hello, is it possible to retrieve the real time video stream from pupil core using the network api only? Or do I need to tinker with the source file here https://github.com/pupil-labs/pupil/tree/master/pupil_src/launchables
Hi, @user-ded122 - yes, you can stream video with the network api. Here's an example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
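In case it helps to see the gist without opening the link, here's a rough sketch of what that helper does (it assumes Pupil Remote on the default port 50020 and the Frame Publisher plugin with BGR format, as in the linked script - double-check the exact message fields there):

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Pupil Remote (REQ) on its default port; ask for the SUB port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Ask Capture to publish scene frames (plugin name/format as in the helper script)
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
req.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
req.send(msgpack.dumps(notification, use_bin_type=True))
req.recv_string()

# Subscribe to world-camera frames
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()          # [topic, msgpack payload, raw image bytes]
    payload = msgpack.loads(parts[1], raw=False)
    img = np.frombuffer(parts[2], dtype=np.uint8)
    img = img.reshape(payload["height"], payload["width"], 3)  # BGR image
    print(payload["timestamp"], img.shape)
```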
Hey there, I am using Pupil Core for a quantitative research study and I am still quite inexperienced with the software, so apologies in advance if the questions are unclear. Within my study I am interested in the number of fixations on a certain AOI, the duration of these fixations, and the pupil dilation/diameter of the participants. Participants will be divided into two groups, and they need to watch a one-minute advertisement where the AOI remains constant. My struggle right now is 1) creating these AOIs using the AprilTags, and 2) exporting the data. Since I will have at least 60 participants, my plan was to compare the average values of the number of fixations, duration of fixations and the pupil diameter to examine possible differences between the two groups. However, I am unsure how to obtain these average values based on the raw extracted data that Pupil Player provides.
Hi, @user-074e3a 👋 - let's tackle your challenges one at a time. I believe you first mentioned struggling with defining AOIs using AprilTags. Have you followed the guide at https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking ? If not, please take a look. If you have followed that and are still struggling, can you tell us more about which part of the surface tracking process you're stuck at?
Hi, I am Stefano Lasaponara from Sapienza University. I purchased Pupil Core a couple of months ago. Today, for ethics purposes, I need a certificate of conformity with EU standards; does anybody know where to find it?
Another question: do you know which CPU on the market runs best with the eye trackers? Disregarding cost issues, we want to achieve 200 Hz performance if possible.
We prefer Apple silicon - e.g. M1 with 16gb unified memory - we easily max out the sampling rate of Core with these
Hi there, I have some questions about data analysis. We have collected gaze data using Pupil Core from 10 participants. We want a heat map on a reference image for each participant's data, like what the reference image mapper enrichment does. I found this example here: https://docs.pupil-labs.com/alpha-lab/multiple-rim/, but it is provided with Pupil Invisible. So can you please point me to any repositories/examples if possible?
Hi @user-1ed146 👋. Reference Image Mapper is only available for Invisible and Neon recordings, unfortunately, as it runs in Cloud. For Core recordings, you can of course use the Surface Tracker Plugin + AprilTag markers to generate heatmaps: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
Hi @nmt @papr Can you please let me know where I can find the reasoning behind why the bivariate polynomial is used? Is there a paper or study that I could refer to for this?
Hi @user-d407c1 @nmt - we want to use the glasses in shop-alongs and soon after sit down with the participant to just play back/show them where/what they were looking at. Is this easy to play back instantly? Or do we need to create a scanpath following the steps listed on the website?
Hi @user-9e9b86 👋 I just replied to you by email. Please feel free to reach out if you have any further questions.
Hi,
This is a new problem I've encountered in Unity - I'm unable to start the recording from Unity (last week, I still could) and I'm wondering if this could be the culprit. I haven't touched any of my scripts in my projects for the past two months, so it couldn't be that.
Please could you help me out with this? Thanks!
Hi all, I want to know if the AddOns (especially the HoloLens AddOns) have been spec'd for gaze tracking accuracy and 3D pupil location accuracy. I understand that we don't have a "ground truth" 3D pupil location on real imagery, but has anyone attempted to spec it out on CGI images? I also posted this on vr-ar, but since the platform is Core, I didn't know which channel to post this question to.
Hi, I have a problem using the camera in pupil_src. I can't use the camera to detect the image and calibrate the point of view. Please help me. Thank you so much
ERROR eye0 - video_capture.uvc_backend: Could not connect to device! No images will be supplied. (uvc_backend.py:133) - meanwhile my device has UVC support and is not being used by any other software
I didn't get an answer previously, so here are my questions again:
Hi All, we just got some uncased Pupil Core systems that we want to try to adapt for use in non-human primates. We need to custom build our own helmet, and some 3D models of the eye and world cameras would be helpful for prototyping. Is that possible? Secondly, the pye3d model must have some constants that are based on human eye anatomy; would it matter for the substantially smaller eye diameters of non-human primates like macaques? Could we tune these parameters somehow? The interpupil distance is also much smaller in monkeys, and I suspect this may affect some of the 3D estimations that use binocular data? And finally, as a user of EyeLink and Tobii Spectrum Pro hardware, I really appreciate the beautiful hardware and great software design coming from Pupil Labs, congratulations!!!
Hi @user-93ff01 👋! Thanks for the positive feedback! And sorry that we missed the first time you asked. Here you have the 3D models to help you prototype your own custom mount https://github.com/pupil-labs/pupil-geometry
Yes! pye3d is open source, so you can go to the pye3d repository https://github.com/pupil-labs/pye3d-detector, fork it, and modify it to your needs. As a small guide, you can find the physiological parameters such as ranges for the location of the eyeball, diameter of the eyeball, phi and theta... at https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/detector_3d.py#L575
You would probably not need to modify the pupil diameter range, but rather the sphere diameter (_EYE_RADIUS_DEFAULT) and potentially the circle_3d_center (x, y, z) values at which the pupil can be expected to sit within the eyeball.
Especially if you are interested in binocular data, as the algorithms use the distance from the eyeball's centre and the gaze vector to infer the binocular data.
Alternatively, and depending on what data you are looking for, you can use the 2D detector, although it would not be able to correct for slippage.
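To make the idea concrete, a change in your fork might look roughly like this (the constant name comes from the file linked above; the macaque value below is purely a hypothetical starting point that you'd need to verify against the anatomy literature):

```python
# In your fork of pye3d, where _EYE_RADIUS_DEFAULT is defined (see the
# detector_3d.py link above for the surrounding physiological ranges).
#
# The shipped value is a human eyeball radius (on the order of ~10 mm).
# A macaque eyeball is roughly 18-20 mm in diameter according to the anatomy
# literature, so something around 9-10 mm could be a starting point -- this
# number is hypothetical and should be verified for your species.
_EYE_RADIUS_DEFAULT = 9.5  # mm
```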
In short, yes, to all the questions. 🙂
TL;DR: 1) Do you have 3D models of the eye and world cameras we can use to build a custom headset? 2) Can the pye3d model apply to non-human subjects, given the constants derived from human eyes? 3) Does inter-pupil distance have any impact on your algorithms?
Hi there, I understand that the core tracker needs to be connected to a PC or laptop via USB but is there any possibility for it to be connected to a data logger that can allow it to be mobile? Do you have any recommendations? thank you.
Hi, @user-6cf287 - if you're looking for a truly mobile solution, you really should consider Neon over Core
We may be able to get away with the 2D detector, but monkeys can be like kids and move quite a bit, and we are making a prototype helmet, so we may well get some slippage - which is why the 3D detector looked more robust, if applicable to non-human primates.
I am still a bit unsure of which parameter aligns with the inter-pupil (or inter-center) distance?
I am using Pupil Core. Currently, the eyes cannot be detected. Are you familiar with this issue, and is there any solution for it? @user-cdcab0 @user-92dca7 @marc
Hi @user-dd1963 👋 - are you using a Core headset purchased from Pupil Labs or a DIY unit?
If it's a unit from us, it seems the drivers weren't installed properly. There are some troubleshooting steps here: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
If it's a DIY unit, can you tell us more about the cameras you're using? The first two images indicate an issue with the drivers you're using or the hardware itself. That will need to be resolved before it'll ever work with Pupil Capture
Hi there, I have a question on the exported fixation data. In the data frame, I have values larger than one and values less than zero in the "norm_pos_y" column. However, the bounds should be x:[0, 1] and y:[0, 1] in 2D normalized space (the coordinate system). Can you please help me figure out what is going on here?
Gaze/fixation coordinates can exceed the bounds of the scene camera's FoV, since the eye cameras are able to capture a bigger range of eye movements. Note that such data can also be of low quality, so it's worth checking the associated confidence values.
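If it's useful, a small sketch for screening such samples in the exported data (column names are from the Player fixations export; the confidence threshold here is just illustrative):

```python
import pandas as pd

# Hypothetical export path -- point this at your own Pupil Player export.
fixations = pd.read_csv("exports/000/fixations.csv")

# Keep samples with decent confidence and coordinates inside the scene frame.
good = fixations[
    (fixations["confidence"] >= 0.8)            # illustrative threshold
    & fixations["norm_pos_x"].between(0, 1)
    & fixations["norm_pos_y"].between(0, 1)
]
print(f"kept {len(good)} of {len(fixations)} fixations")
```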
Hi, some of our experiments will take place in a dim room: does the world camera have an IR-cut filter or could we use IR illuminator in the room to help the world camera? Any other advice?
The scene camera records visible light. What are your participants going to be looking at and how low is the ambient illumination?
Question 2: With the EyeLink 1000 we start and stop recording on every trial (with additional trial/sync markers to time-sync properly); with our Tobii Spectrum Pro we were recommended to keep recording for the whole block and only send trial sync markers. For Pupil Core, what is the preference: is it OK to start/stop recording before/after each trial giving the same session name (I didn't see a pause/unpause command), or record the whole session and use trial markers? It seems that, given we record all camera data raw, this might grow the files with lots of useless inter-trial video, etc.?
How long are your trials going to be? We recommend splitting your Pupil Core experiment into chunks to avoid slippage errors accumulating, and for managing the size of recordings, like you mention. Read more about that here: https://docs.pupil-labs.com/core/best-practices/#split-your-experiment-into-blocks
Hi QG Trần Tiến 1665 👋 were there any
@nmt Hi! If I want to define a rectangular area of interest, should the tags added to the four corners of the rectangle be the same or different?
Hi @user-6b1efe 👋 It is crucial to use unique AprilTag markers, so please ensure you add different markers to each corner of the area of interest.
For synchronisation, I'd recommend sending events over the network api. This example script shows how you can do that using Python: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py. Or use our LSL relay: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#pupil-capture-lsl-plugins
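As a rough sketch of what the remote annotation approach looks like (Pupil Remote assumed on its default port 50020; the plugin name and payload fields follow the linked example, so please verify against it):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote

# Make sure the Annotation plugin is running so the events get recorded
notification = {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}}
remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
remote.send(msgpack.dumps(notification, use_bin_type=True))
remote.recv_string()

# Open the PUB port for publishing annotations
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect before sending

# Timestamp the event in Pupil time and send it
remote.send_string("t")
pupil_time = float(remote.recv_string())
annotation = {"topic": "annotation", "label": "trial_start", "timestamp": pupil_time, "duration": 0.0}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```

Annotations recorded this way can later be exported from Pupil Player alongside your gaze data.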
I've responded to your question in the software-dev channel (it'd be great not to duplicate Qs in future 🙂)
Using IR illuminators would solve this problem for us, but we are worried about the world camera (and any possible side effect on the eye cameras?)
Hi Pupil Labs, just checking if this problem has been encountered by anyone: https://discord.com/channels/285728493612957698/1093713310261723217/1104725246440902658 Thanks in advance:)
Hello, I'm trying to use the Core device with Pupil Capture 3.5.1, but I need to add audio and sync it with my pupillometry. I found in an old request that this is possible using LSL. I'm really not familiar with coding etc. (I'm in my medical fellowship), so I'm trying to follow every step... Unfortunately, the plugin seems to be well installed in Pupil Capture, but I keep getting the same problem, which is: see the screenshot. I tried to remove the flush function with "#" (sorry, first time for me); the error is caught from the resample function and all audio frames are still dropped.
I think I missed something... if you can help me... has anyone succeeded in recording audio?
Thanks a lot
Hi, I am working with pupil data that was recorded using Pupil Labs hardware about 8 years ago. Does Pupil Player allow me to load videos from the past, or does the recording have to be recent/live for me to analyze pupil dilation? If that is the case, does anyone have any recommendations for software that allows analysis of pupils based on past video recordings?
Hey @user-fcd05e! It's a bit difficult to say without knowing the specific version the recordings were made with. In the first instance, I would make a backup of the recordings, and then just try to open them in Pupil Player. If it doesn't work, feel free to report back here 🙂
To expand on @nmt's answer: Over the last few years, there have been a lot of updates and improvements to Pupil Capture/Player. If your recording was made in an old version of Capture, it might not be supported by the latest Player versions.
To get all the latest fixes and improvements, you will need to upgrade the recording format. To do this, please follow these steps. Note that it's worth first trying this with only one recording to see if it works.
- Back-up data: To avoid any data loss, create a copy of the recording(s)
- Upgrade recording format: this operation is only possible by opening the recordings in Pupil Player 1.16 (see release notes at the link for further information)
- Get the upgraded recording folder and now you should be able to open it on recent versions of Player
Hello everyone,
I've broken a piece (see photo). Do you know where I can order another or where I can find the file for the 3D printer?
Thanks
Hi, @user-59397b - I believe you'll find those here: https://github.com/pupil-labs/pupil-geometry
Hey there!
I want to use the export functionalities of the Pupil Player in a script for batch processing. I need to include marker/surface detection and gaze on surface mapping. Does anyone have any suggestions on what could be a good approach to build this pipeline using the Pupil repository? I'd also be happy if anyone has some scripts to share.
Hey @user-934716! There is a community-contributed batch exporter, which looks like it handles existing surface data: https://github.com/tombullock/batchExportPupilLabs
Hi, I used Pupil Core to collect data during sight interpreting tasks (in which participants read text printed on paper and interpret orally simultaneously) and want to calculate saccade lengths moving out from one fixation to another to see if there are various reading processes, e.g., scanning with relatively longer saccades while typical reading with relatively shorter saccades. I searched the chat history and found similar discussion. I'm not sure if the velocity-based approach would be fit for purpose so I intend to use the equation attached. I wonder if head movements will affect the results and if there's a way to account for them? Many thanks!
Hey @user-52c504 👋. This equation calculates the distance between consecutive 2D coordinates. If you are not interested in angular metrics (e.g., angular displacement, velocity, etc.) this is fine. Now, regarding your question about the effects of head movements, this depends on whether you have surface-mapped fixations or not:
I hope this helps!
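For the surface-mapped case, a minimal sketch of that distance calculation could look like this (the surface name "Text" and file path are hypothetical, and the column names follow the standard Player surface export):

```python
import numpy as np
import pandas as pd

# Hypothetical surface export from Pupil Player (surface named "Text").
fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_Text.csv")

# One row per fixation, on-surface only, in temporal order.
fix = fix[fix["on_surf"] == True].drop_duplicates("fixation_id").sort_values("start_timestamp")

# Euclidean distance between consecutive fixation centroids
# (normalized surface coordinates, i.e. not angular units).
dx = fix["norm_pos_x"].diff()
dy = fix["norm_pos_y"].diff()
fix["saccade_length"] = np.sqrt(dx**2 + dy**2)
print(fix["saccade_length"].describe())
```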
Hey, can anyone using LSL help me? https://discord.com/channels/285728493612957698/285728493612957698/1105951795756421272 Thank you 🙂
Hey @user-f77049 👋. Apologies, your message slipped off the radar!
It looks like you're using the audio-rec branch of the Pupil Capture LSL repo. Unfortunately, this is a dev branch and may or may not work - in your case it's the latter, and I don't have an obvious fix.
One workaround is as follows:
1. Run the 'pupil_capture_lsl_relay' plugin from the master branch: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture - this will relay pupillometry over the LSL network
2. Use the LSL AudioCapture app to capture and stream audio data over the LSL network: https://github.com/labstreaminglayer/App-AudioCapture
3. Record everything in LSL LabRecorder: https://github.com/labstreaminglayer/App-LabRecorder
That method is a bit cumbersome, but will get you a unified collection of a raw audio waveform and pupil size. Let us know how you get on!
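Once LabRecorder has produced the .xdf file, one way to inspect the recorded streams in Python is with pyxdf (just one option for reading XDF files; the file name below is a placeholder):

```python
import pyxdf  # pip install pyxdf

# "recording.xdf" is whatever file LabRecorder wrote for your session.
streams, header = pyxdf.load_xdf("recording.xdf")
for stream in streams:
    info = stream["info"]
    print(info["name"][0], info["type"][0], len(stream["time_stamps"]), "samples")
```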
Hello everybody! I want to use the Core tracker for my research project; however, I'm facing a problem. When I open the Pupil Capture software and try to record, I'm not getting any recordings - not from the world camera and not from the eye cameras. The software says that it has recognized the world camera and the pupil cameras. When I record, it says that no world video has been captured (and I can't find any pupil recordings in the folder that has been created automatically). It's probably a very basic problem or even something hardware-related, but I'm completely new to the system and would love some help from you guys. Thank you in advance 🙂
Hey @user-e2fd67! What operating system are you using on your computer running Pupil Capture?
Hey folks! Tomorrow I'm starting a project that will use a Pupil Core to control some servomotors. I will need to get the gaze position in real-time and send it to an Arduino. I just want to make sure I'm starting in the right place -- should I be using the Network API (https://docs.pupil-labs.com/developer/core/network-api/)?
Hi, @user-1c31f4 👋! That sounds like a pretty neat project! The Pupil Core Network API which you linked to is exactly what you want for that 🙂
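In case it's handy as a starting point, here's a rough sketch (the serial port name, baud rate, confidence threshold, and the use of pyserial are all assumptions you'd adapt to your setup):

```python
import serial  # pyserial
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")  # all gaze datums

arduino = serial.Serial("/dev/ttyACM0", 115200)  # adjust port/baud for your setup

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    if gaze["confidence"] < 0.6:        # illustrative threshold
        continue
    x, y = gaze["norm_pos"]             # normalized scene-camera coordinates
    arduino.write(f"{x:.3f},{y:.3f}\n".encode())
```

On the Arduino side you'd parse the comma-separated pair however suits your servo control loop.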
Glad to hear you're up and running now!
Hey again folks -- is it possible to run the pupil core software on a Raspberry Pi 3?
Hi, @user-1c31f4 - yes, we have had users report success running Core on Raspberry Pi's. You can find various discussions using Discord's search feature
Or at least, receive the data stream from the Core Glasses?
Another option is to use the Raspberry Pi to stream the video to another PC where Pupil Capture is running. Check out https://github.com/Lifestohack/pupil-video-backend
Hello, I am new here. ☺️ Glad to see you! I hope someone is willing to help me with a problem I am having with Pupil Core. The point is that I'm doing some aviation research and I made a recording with eye trackers on an air traffic controller. Now I want to transfer the information to the Blickshift software for a detailed analysis. But at the time of export I have no fixations and I don't know how to fix this problem. Can anyone help me with some information or anything else?
Greetings @user-37b6ea 👋. Make sure you've enabled the fixation detector plugin in Pupil Player before you export: https://docs.pupil-labs.com/core/software/pupil-player/#fixation-detector
Can we use reference image mapper with pupil core data?
@user-cdcab0
Hi, so I am using the Pupil Mobile app and Pupil Player. I want to know what the sampling frequency would be for each camera for data collected with Pupil Mobile. I suspect it is 30 fps, but is there a way to increase it?
Hi, @user-c5ca5f - it looks like you can change the framerate on Pupil Mobile for each camera. Open the pupil cam, then hit the hamburger button (☰) at the top left. Scroll down to frame rate and tap on it to see the available options.
Sorry for a basic question, but is this something I would be able to use to build an eye-tracker-to-speech setup, or to use a computer, for my aunt with locked-in syndrome? Her insurance is denying the one she needs, so I'm seeing if I can use something open source.
Hi @user-2dcf9f 👋! Yes, you might be able to use Pupil Core for that, but you would need to implement an additional layer of software yourself to do some of these things.
Other users have written routines to control the computer's cursor with their gaze https://github.com/emendir/PupilCore-CursorControl or to communicate using simple signals.
Nonetheless, it might be preferable to go for a calibration-free system like Neon to avoid the nuisance of frequent calibration for your aunt.
If you need further information, do not hesitate to contact info@pupil-labs.com
P.S. We will soon release an article on our blog that might be of interest to you.
Hello, I would like to know the differences between Neon and Tobii Pro Glasses 3 with a 100Hz sampling rate. Specifically, what kind of data can be captured, whether there are any paid software for analyzing the data, and any other relevant information. Thank you.
Responding in the neon channel 🙂
Hey guys, I'm having a very bad time trying to run from source on a raspberry pi. I've worked through a bunch of 'collecting packages' errors, but now I'm getting errors following the 'Building wheels for collected packages: ndsi, pupil-detectors, pyglui, opencv-python, pupil-labs-uvc' stage.
Here is the pastebin of the errors I'm getting: https://pastebin.com/SZs9nf2x
Pip requires some build dependencies to install some of those packages. You probably just need to run sudo apt install libglew-dev libeigen3-dev libturbojpeg0-dev to get what you need.
Alternatively, you may be able to find pre-built wheels for those packages for Raspberry Pi
This was it. Still no luck getting it running on the pi, but that fixed those errors!!
If anybody could point me in the right direction, it would be very helpful. This is the first time I've tried to build any program from source, so these errors seem quite impossible to me.
Dear author
I keep getting this error while running the code, even though my environment is configured. Is there any way to solve this problem?
Looks like this was responded to here: https://discord.com/channels/285728493612957698/446977689690177536/1110509528082026566
@user-d407c1
Hello, is there a way to access the pupil core API without keeping the Pupil Capture GUI application running in the background?
Hi @user-ded122! That's what Pupil Service is meant for, depending on your needs, as it does not have a scene video feed: https://docs.pupil-labs.com/core/software/pupil-service/
Hey @user-ded122! Just to expand on Miguel's response, Pupil Service has no functionality for scene video, so you won't be able to calibrate and obtain gaze data (unless you're doing calibration in some VR scene). If you're just interested in pupillometry and eye state obtained from the eye cameras, Service will record those things.
I hope that's helpful to people in the future!
Keep us posted on how your efforts go!
Will do! Has anybody here had success running pupil capture from source on an ARM chip?
No, but we are probably going to try to buy some https://wiki.radxa.com/Rock5/5B (or another RK3588 board) soon, as the CPU+GPU are way faster than the RPi 4 and hopefully have MESA support, so we can test and give feedback
LattePanda sigma looks great too, but much more expensive...
Hi Pupil Labs, why can't I find the "forgot password" button on the current Pupil Cloud login page? Any solution for this? Thanks.
Looks like this was responded to in https://discord.com/channels/285728493612957698/633564003846717444/1110839688383692820
Hello. I bought a Core product. I want to directly receive and output data from the IR camera using Python on Ubuntu. However, even after consulting related references, it is difficult to find a way to connect to a serial port to receive input data, convert it, and output it. Where can I check this?
Hi, @user-b83651 - have you tried using the Network API https://docs.pupil-labs.com/developer/core/network-api/ ?
You may find some helpful samples on this repo as well https://github.com/pupil-labs/pupil-helpers/tree/master/python
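For example, a minimal sketch for reading the per-eye pupil data over the Network API rather than a serial port (Pupil Capture assumed to be running with Pupil Remote on its default port; diameter_3d is only present when the 3D detector is active):

```python
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote in Pupil Capture
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")  # per-eye pupil detection results

for _ in range(100):
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    print(topic.decode(), datum["timestamp"], datum.get("diameter_3d"))
```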
Hi everyone, I'm having an issue while running the code; it seems there is a compatibility problem with the Python version I have, as I keep getting this error: AttributeError: module 'packaging.version' has no attribute 'LegacyVersion'. The code in question is the fixation detector I downloaded from the GitHub repository associated with Pupil Labs: https://github.com/pupil-labs/pupil/tree/318e1439b9fdde96bb92a57e02ef719e89f4b7b0/pupil_src/shared_modules I'm wondering if I can fix this by downgrading my Python version - which Python version was used when writing this code?
Hi, I have a question. I was reading about the best practices for using the markers, and it was mentioned that they should always be oriented upward? Is that correct? I am using several markers to create a surface on the TV screens so that I can capture everything inside. I assigned some markers downwards and some to the left - is that an issue?
Hi, @user-eb6164 - no, I don't think the orientation of the markers makes any difference. You can arrange and orient them however you want - even mix and match, as long as you don't change that after you define the surface. If you do change the orientation or position of any of the markers, you'll want to delete and redefine the surface
Hello, team! I have a question about 3D calibration. 2D gaze calibration/mapping works via polynomial regression, and 3D calibration is calculated using bundle adjustment. As we can see, the pye3d model outputs circle_3d_normals; why not map them to 3D ref points (we can fix the ref points' depth, so we can map [x, y, z] normals to [X, Y] ref points), instead of using a complex bundle adjustment algorithm?
Hi! I was wondering how can I configure a surface and obtain its related data (fixations_on_surface, gaze_positions_on_surface, etc.) via API or without directly using Pupil Capture and Pupil Player. Thanks a lot!
Hi, @user-a7d513, I'm actually working on a project right now which has accomplished this. In short, the Surface Tracker class can be used without Pupil Capture, but the API isn't really made to be used stand-alone, so getting it done is a little bit hack-ish. I'll be wrapping up some of the harder bits into a separate Python package and we'll have a couple of demos to show how to use it.
We're still probably a couple of weeks out from making it publicly available if you can wait just a bit. Otherwise if you have more specific questions I can probably help answer them sooner.
Hi there. We have managed to break a wire that is going into the eye camera plug. Is it possible/easy to do a DIY fix ? e.g., is the white plug a standard size/type? Any help appreciated - this is a pupil core headset
Have there been any updates regarding running Pupil Core on arm64 Macs natively? The instructions suggest using an Intel Python, requiring quite a bit of extra setup (I use pyenv or conda-forge, and getting them to build an Intel binary isn't straightforward).
Hello, I'm experiencing frequent crashes in the Pupil Capture app. I am using the Pupil Core device, and during recording, the application often gives me the warnings in the screenshot. Furthermore, the eye camera windows often freeze - in this case, I can 'unfreeze' them by manually selecting the camera source; if I don't do that in a timely manner, the window crashes and the Capture app will not let me open it again. I thought the issue could be related to high CPU usage (I'm running the app on an 11th-gen i5 laptop), but it also occurs when the CPU is at low usage. Previous messages in this server hint at a low-quality cable causing similar issues, but we are using the original cable that came with the glasses. Do you have any suggestions to prevent this issue?
Hello, I am still experiencing this issue. Anyone could please help me out?
Hi everyone, for my thesis we are using the Pupil Core. I have two questions regarding something me and my thesis buddy can't seem to figure out:
Does the Core also record audio, and if so, which setting do I enable? In Pupil Capture I have it set to 'sound only' in the general settings. However, I cannot hear any audio when I play back the recordings.
We're specifically interested in pupil size measurements. After creating an export file we're looking into the pupil_positions CSV file. After separating the text into columns we get a whole bunch of data. Is pupil size among these columns?
Sorry in advance lol... we're complete eye tracking noobs and my supervisor has no clue as well 🙂
Hi @user-d99595 👋!
Pupil Core has no microphone so it can't record audio by default. Yet, if you rely on audio, you can use iMotions software or the Lab Streaming Layer framework to record audio and sync it with the data.
Yes, you should look for the diameter_3d column. Check out this link for a description of all fields in pupil_positions.csv: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
Hi Miguel,
thanks for your swift response. So just to clarify, the diameters for both eyes are displayed in the same column?
You would need to filter by the eye_id column - where 0 stands for the right eye and 1 for the left eye.
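For example, a small sketch for getting a per-eye mean diameter from that export (file path and confidence threshold are illustrative, not official recommendations):

```python
import pandas as pd

# Hypothetical Pupil Player export path.
pupil = pd.read_csv("exports/000/pupil_positions.csv")

# Keep 3d-detector samples with reasonable confidence (threshold is illustrative).
pupil = pupil[pupil["method"].str.contains("3d") & (pupil["confidence"] >= 0.6)]

# Mean pupil diameter in mm per eye (eye_id 0 = right, 1 = left).
print(pupil.groupby("eye_id")["diameter_3d"].mean())
```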
Hello, does Pupil Labs work for DIY headset units that only have one camera (eye camera with IR reflections)?
Hi @user-a246bc! It depends - how do you plan to relate (calibrate) it to the environment without a scene camera?
I understand that would assume static head position, but I would like to experiment nonetheless.
Hi, team! Does anyone have time to reply to this question? "Hello, team! I have a question about 3D calibration. 2D gaze calibration/mapping works via polynomial regression, and 3D calibration is calculated using bundle adjustment. As we can see, the pye3d model outputs circle_3d_normals; why not map them to 3D ref points (we can fix the ref points' depth, so we can map [x, y, z] normals to [X, Y] ref points), instead of using a complex bundle adjustment algorithm?"
Hi @user-9f7f1b, I just wanted to understand better what you mean: do you want to map the circle_3d_normal output of pye3d onto a sphere of fixed diameter (at a given depth) using, for example, some polynomial regression? Is that right? Note that this approach may have issues, as it will be limited to only one distance, and the polynomial mappers may not work so well with slippage.
To estimate gaze using the 3D mapper, we use a geometric method that involves intersecting visual axes. This requires knowing the eyeball position and gaze normals in SCENE camera coordinates. The bundle adjustment is used to determine the relative poses of the eye cameras with respect to the SCENE camera, since in Pupil Core they can be adjusted.
thank you!
I am having an issue with Pupil Capture today: it is not allowing me to set the fixation threshold. When I watched the video of the recorded eye tracking, what I fixate on was all wrong, so I discovered that the minimum fixation duration seems empty or too low, and I cannot change it. I do not want to reinstall the software because I already defined my surfaces - what can be done? I attached an image
Also, if I need to reinstall everything, is there a way I can back up my surfaces and use them again? Is there any clear guide for this?
You'll find your settings in C:\Users\<username>\pupil_capture_settings, including surface definitions. You can back up those files if you want to be safe
thank you
pupil-tutorials
Hi!! I am wondering if Pupil Invisible can record pupil size data? I can't find this data in those CSVs. Thank you!!
Hi @user-74c615 ! Pupil Invisible does not perform pupillometry. The eye cameras are off-axis and for some gaze angles, the pupils aren't visible. This is not a problem for the gaze estimation pipeline as higher-level features in the eye images are leveraged by the neural network, but it does make it difficult to do pupillometry robustly
@user-cdcab0
thank you for your reply!!!
I have another question: does Core have this ability?
Yes, Pupil Core and Neon (in the future) can do pupillometry.
ok ! thank you !
Hi everyone, I'm pretty new with Unity and Pupil Core so I'd be glad if someone could help me! Is it possible to use a totally custom-made calibration instead of the one that pops up when I connect my Pupil Core glasses and start the Unity project? How can I separate the calibration from all the other scripts that I need? Is there an efficient way to do it? Thanks in advance!
Hi @user-740d7c! The demo calibration scene can be modified, i.e. it's possible to change the number and positions of the reference targets: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#calibration-settings-and-targets. Would that be sufficient for your needs?
Hello, I'm trying to find out the accuracy of the calibration for a recording but I can't find it in the data. Can I access it directly? thank you !
Hey @user-744321! You'll need to recompute the accuracy and precision values. Here's a tool you can use for this purpose: https://github.com/papr/pupil-core-pipeline
Hi @papr! This is a set of experimental data. I am using 'Aggregating Fixation Metrics Inside AOIs' - how do I export the sections.csv file? Thank you!
Hi @user-5fbe70! Would you be able to clarify what Aggregating Fixation Metrics Inside AOIs is? Are you referring to this Alpha Lab content: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics ?
What do negative values for the absolute time cell mean? This happened for 4 of my participants. I am using those values to calculate duration in my code.
Pupil Time can indeed be negative. Read more about that here: https://docs.pupil-labs.com/core/terminology/#timestamps
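Since you're computing durations, the sign of the epoch shouldn't matter - only differences between timestamps do. A trivial illustration (the field names for the offset are as I recall them from the recording's info.player.json, so worth double-checking):

```python
# Pupil Time uses an arbitrary epoch, so values can be negative.
# Durations are differences between timestamps and are unaffected by the offset.
start = -132.459   # example gaze timestamps in seconds (Pupil Time)
end = -131.908
print(end - start)  # 0.551 s

# If you need wall-clock times, the offset between Pupil Time and system time
# can be derived from the recording's info file
# (start_time_synced_s vs. start_time_system_s).
```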
For these participants, the gaze timestamps, base data, and gaze point (x and y), as well as eye centre (xyz) coordinates are all negative.
@nmt Yes! I found that 'Define AOIs and Calculate Gaze Metrics' only works for the products listed: Pupil Invisible, Neon, and Cloud. I'm using Pupil Core, so I can't get the sections.csv file.
Yes, that tutorial is meant for post-Reference Image Mapper enrichments using Neon or Invisible. That said, you can use the Surface Tracker Plugin in Pupil Core to map your gaze to flat surfaces in the environment: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker. The plugin gives you the option to generate different AOIs. Metrics would then need to be computed, but it shouldn't be too difficult 🙂
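For instance, a small sketch of that kind of metrics computation (assuming a surface named "AOI_1" and the standard Player surface-export columns; adjust names to your own export):

```python
import pandas as pd

# Hypothetical export from the Surface Tracker (surface named "AOI_1").
fix = pd.read_csv("exports/000/surfaces/fixations_on_surface_AOI_1.csv")

# Each fixation can span several world frames, so keep one row per fixation
# and only fixations that actually landed on the surface.
fix = fix[fix["on_surf"] == True].drop_duplicates("fixation_id")

print("fixation count:", len(fix))
print("mean fixation duration (ms):", fix["duration"].mean())
```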
Hello! Under the support webpage, I see that an Onboarding Workshop is included with Core / Invisible. May I ask if anyone knows how to register for this workshop? Many thanks 🙂
Hey @user-7a517c! Please reach out to info@pupil-labs.com with your original order id and someone will assist you with getting that scheduled!
Thank you @nmt 🙂
Hi, I have a question regarding the eye tracker calibration. We are running studies where the screen is projected onto a wall about 2-2.5 meters away from the participant. How should we calibrate the eye tracker for this situation? Do we just use the card, or do we also have to place the calibration points on the screen digitally or manually? The second question is: do we need to calibrate the eye tracker for every participant, or is it enough to calibrate it once and save the settings? Thank you.
Hey @user-6cf287 👋. Either would work. The key point is to make sure you capture sufficient gaze angles to cover the screen as part of the calibration choreography. You'll need to calibrate each wearer. We recommend a new calibration whenever they put on the headset.
Hey folks, very quick question -- what version of Python do you recommend for using the Network API without running into dependency issues?
Are there specific dependencies you're concerned with? pyzmq requires at least Python 3.3, but it looks like msgpack recently dropped support for Python 3.6. Generally I'd recommend using the latest stable Python release you can, but it seems you need to be at least on 3.7 to use the Network API. If you need to use Python 3.6 or earlier, it seems you'll need to install an older version of msgpack as well.