Hi, I am planning to buy the Neon eye tracker. Can I get a quote?
Hi @user-38b270. Thank you for your interest in Neon! If you are considering buying Neon with the "Just Act Natural" frame, please fill out a quote request form at this cart link: https://pupil-labs.com/cart/?ne_jan=1 and the sales team will generate a quote for you.
do you ship to Jordan?
Thank you Nadia, I got a quote with an academic discount, but it does not show any software. Do I need to buy software for eye gaze analysis?
Hey @user-38b270, nice to hear you already got the quote! Neon comes with a companion device (included in the price) and a companion app. This app is available for free in the Google Play Store. You also get a free Pupil Cloud account that you can use to upload, manage, view, enrich and analyze your recordings. This means that the software you need for Neon recordings (Companion App) and their analysis (Pupil Cloud) is free!
Thanks Nadia. Do you have training material on how to use the device and analyze the data?
You can check out the documentation here, which contains various guides and explainers: https://docs.pupil-labs.com/neon/
Hey Pupil Labs team, I had a quick question regarding utilizing the Neon Monitor app. In the documentation it says you must be on the same WiFi network to use the app. Does this include hard-wired computers that are connected by LAN to the same network, or must the computer be connected by WiFi specifically to access the Monitor app?
Hi @user-ac085e! Wired network connections would be fine too. It should be the same local network. The docs are technically too strict in asking for the same WiFi!
Hello, do you have more precision about the pupillometry release for the Neon device (I know it's supposed to be Q2 / Q3)? Have a good day
Hi @user-f77049! We are currently still expecting it in Q3, but can not yet give a more precise ETA. Once we have a more concrete date, we will be sure to update the ticket here: https://feedback.pupil-labs.com/neon/p/pupillometry-eye-state-estimation
Awesome, thanks Marc.
Dear Pupil Labs team, we have now gathered our first experiences in our EEG/eye-tracking set-up using LSL. So far we noticed that both the connected mobile phone as well as the built-in sensor unit in the just-act-natural frame start to heat up quite quickly (~10 minutes). While the glasses only cause a warm sensation on the forehead, the phone itself overheated to the point that it stopped the recording and displayed an error message. We also tried recording with the devices not placed on the subject, but on a table, and the errors stopped occurring. Can we do anything to prevent overheating for both the phone and the glasses? Thanks, Julian.
Hi @user-e0a71c! What are the ambient temperatures in your recording environment? The module itself should not be overheating, but it's possible with the phone. In mild climates this is rare, but during longer recordings in hot climates it's a possibility. You are probably not doing this, but another factor that promotes overheating is charging the phone while at the same time making recordings (which is possible using a USB hub).
Another thing that increases the load on the phone is real-time streaming of the data over the network either with the real-time API or the Monitor app. Other custom software running on the phone would also affect it.
Improving the ventilation around the phone should of course help, so e.g. carrying it in pockets or arm holsters with little ventilation might not be ideal. Besides cooling down the phone, an alternative would be to use a OnePlus 10 Pro phone instead of a OnePlus 8. We are about to officially support this newer model of phone, which has a lower risk of overheating given its generally improved capabilities.
Unless at least one or two of these risk factors are present, the issue might also be a faulty phone, as overheating is generally pretty rare.
Hi all. Is anyone here using Pupil for anything else but research?
Hi @user-423655! While academia and industrial research make up a large portion of our market, there are also plenty of other applications, e.g. in clinical medicine, professional performance and training, and neuromarketing. Do you have a specific application scenario in mind? Also feel free to reach out to [email removed] to get a product demo and discuss potential applications with one of the product specialists!
Hi @marc , thank you for the prompt reply. I own a consumer research company myself so understand that side of the tech quite well. I do however have another idea that would be well suited to the technology
Cool, we'd be happy to discuss feasibility if you want!
Hi. I am looking at gaze.csv and there is a column named "worn" which I could not find in the web documentation. I just tried a video in which I put the Neon lenses on and off, and that variable does not change (always 1.0), which raises a couple of doubts. What is the "worn" variable? Also, if I am not mistaken, I see gaze estimation and fixation estimations without having the Neon lenses on. If this is correct, could it be the case that the algorithm fills in data if there is an error detecting gaze somewhere during the recording? Is there an easy variable in the raw data to know when the eye is not detected or the Neon lenses are not worn?
The "worn" data should indeed be binary data indicating when the glasses are worn and when they are not. This works decently well, but the algorithm has not been ported to Neon yet. Instead of not reporting this value at all (which would be in line with what the docs suggest), it is currently being reported as 1/TRUE always. I see how this is confusing! Its a bit of a work-in-progress state until we manage to port the algorithm over to Neon.
Hello, can someone tell me if Neon works with Pupil Capture on macOS? Have you already managed to use data like pupillometry? I only have Windows, but I'd like to make sure it really works before I borrow a MacBook. Thanks a lot
Hi, the Neon inter-camera geometry seems to assume a fixed inter-pupil (inter-ocular) distance. How would it work on something like a non-human primate with smaller eyes that are closer together? We currently have 3 Pupil Cores we are trying to adapt, where we can move each camera independently, but the Neon has several advantages. The other main problem would be the pupil detection models, which may be trained on human data?
Hi @user-93ff01! Neon is not assuming a fixed inter-pupil distance and works fine for subjects with small IPDs like young children. But using it with non-human primates might not be possible because of how the machine learning pipeline works. Neon does not do explicit pupil detection but outputs a gaze estimate directly based on the eye images. If the eyes present in the images look a lot different than actual human eyes it is a bit unclear how the performance changes and if the resulting signal is still usable.
For example how would a 3.5cm inter-pupil distance be handled by the Neon?
Hi.
I have a question regarding IMU sampling rate. We plotted the IMU sampling rate (see first graph below). We get data every 4.30 ms (median; around 230 Hz), with a deviation of 1.095, which is higher noise than with the eye tracker (median = 5.00, std = 0.024; see second graph below). Would this be a limitation of the IMU itself or noise regarding the timestamp synchronization with the eye tracker? If the latter is the case, would this improve with the new IMU update?
Thanks!
Hi @user-b55ba6! This is a limitation of the IMU itself. All currently planned updates will not affect this limitation.
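For reference, here is a minimal sketch of how such inter-sample statistics can be computed from the exported CSVs. File and column names are assumptions based on the Pupil Cloud timeseries export ("timestamp [ns]" in both imu.csv and gaze.csv); adjust to your data.
```python
import pandas as pd

# File/column names assumed from the Pupil Cloud timeseries export; adjust as needed.
for name in ["imu.csv", "gaze.csv"]:
    df = pd.read_csv(name)
    dt_ms = df["timestamp [ns]"].diff().dropna() / 1e6  # inter-sample interval in ms
    print(f"{name}: median = {dt_ms.median():.2f} ms, std = {dt_ms.std():.3f} ms, "
          f"approx. rate = {1000 / dt_ms.median():.0f} Hz")
```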
Thanks Marc. OK, the model then would probably not work, though it may be interesting to try. We have previously recorded camera data for another eye tracker used to develop their head model, so if you were interested we could do the same, but most researchers working with primates may not need the Neon feature set (head-fixed tracking in our field is dominated by SR Research) and your priority seems to be human primates
I am afraid for now we are indeed focused on human primates! In theory the technology should be applicable to non-humans, but it would be quite the effort to make that happen!
Hi, I am testing a new Neon. We expected we could get every frame of the eye cameras and its timestamp, and that they would be in Neon Sensor Module v1 ps1.mp4 & Neon Sensor Module v1 ps1.time (& Neon Sensor Module v1 ps1.time_aux? What does aux mean here?). My question is, how can I get 1. eye camera video frames, 2. timestamps from data recorded by the Neon Companion app? Could you provide us with example code as you did for the IMU (https://gist.github.com/marc-tonsen/d9237b17d1685889bff771b0bc239506)? Perhaps you could provide a script for splitting Neon Sensor Module v1 ps1.mp4 with ffmpeg and a script for extracting timestamps using protobuf.
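(As an illustrative sketch only, not an official answer: assuming the `.time` file is a flat array of little-endian uint64 nanosecond timestamps, as in the IMU gist linked above, and that the eye video decodes with OpenCV, frame/timestamp extraction could look like this. File names are the ones from the question; please verify the timestamp format against your own recording.)
```python
import cv2
import numpy as np

# Assumption: the ".time" file is a flat array of little-endian uint64
# timestamps in nanoseconds, one per video frame.
timestamps = np.fromfile("Neon Sensor Module v1 ps1.time", dtype="<u8")

cap = cv2.VideoCapture("Neon Sensor Module v1 ps1.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index < len(timestamps):
        ts_ns = int(timestamps[frame_index])
        # `frame` holds the eye-camera image for this timestamp;
        # process or save it here as needed.
    frame_index += 1
cap.release()

print(f"decoded {frame_index} frames, {len(timestamps)} timestamps")
```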
Hello, in addition to the previous question, I am experiencing an issue on Pupil Cloud. I am trying to play or process the video I shot, but the display shows "The media could not be loaded, either because the server or network failed or because the format is not supported." and I can't do anything.
Hello Pupil team (and everyone!). My team is debating to buy the Pupil Neon for real-time robotics applications, but we have some questions:
1. About the degree of backward compatibility with the Pupil Invisible. Will applications designed for the former work the same? Are there some features no longer compatible?
2. Is the Pupil Neon suitable for Real-Time applications? We are looking into integrating it onto ROS2 for control purposes. Can it be connected directly to a Linux system, either wired or wirelessly? What is the expected latency?
3. Is the Pupil Neon suitable for users with disabilities or special conditions? We were using the Tobii Glasses before, but faced some detection issues on people with eye defects. Can the Pupil handle misshapen pupils, for example?
Thanks a lot for your time
Hey @user-58da9f. Great to hear you're interested in Neon! Responses to your questions below:
1. Neon in many respects is a plug-and-play replacement for Invisible. It operates by tethering to a smartphone, and recordings can be uploaded to Cloud/processed with all the familiar enrichments. There are some differences in the data streams, like a wider scene-camera field of view and a 9DoF IMU! Full overview here: https://docs.pupil-labs.com/neon/basic-concepts/data-streams/#data-streams. One caveat is that Neon recordings are not compatible with Pupil Player and the legacy Invisible API. Have you already been using Invisible in your research? If so, perhaps you could share some details about your existing applications and we can try to offer something concrete.
2. Neon must be tethered to a smartphone companion device to get the ML-based gaze estimation. Real-time streaming to ROS could happen via our network API: https://docs.pupil-labs.com/neon/real-time-api/introduction/#introduction (a minimal streaming sketch follows after these answers). The latency in this case would be bound by your network. More on syncing and how to reduce latency here: https://docs.pupil-labs.com/neon/how-tos/data-collection-with-the-companion-app/achieve-super-precise-time-sync.html#achieve-super-precise-time-sync. Where transfer latency is an issue, you can also tether the phone (with an additional USB hub) to your computer via ethernet. We have implemented gaze-contingent paradigms with this solution in-house.
3. This would likely be dependent on the specific condition. Note that we haven't systematically evaluated Neon's performance with misshapen pupils. That said, the ML network uses eye image pairs, rather than traditional pupil detection, and leverages higher-level features of the eye. So we don't predict a big reduction in performance, but it would be a case of testing it out.
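For a rough sense of what real-time consumption could look like on the Linux side, here is a minimal sketch using the `pupil-labs-realtime-api` Python package's simple interface. Treat the method and attribute names as assumptions to verify against that package's docs; the ROS 2 publishing step is left out.
```python
# pip install pupil-labs-realtime-api
from pupil_labs.realtime_api.simple import discover_one_device

# Finds a Neon Companion device on the same local network (wired or Wi-Fi).
device = discover_one_device()
print(f"Connected to {device.phone_name} @ {device.phone_ip}")

try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next gaze sample
        # gaze.x / gaze.y are scene-camera pixel coordinates; from here you
        # could, e.g., publish them to a ROS 2 topic.
        print(gaze.timestamp_unix_seconds, gaze.x, gaze.y)
except KeyboardInterrupt:
    pass
finally:
    device.close()
```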
Hey Pupil team, I was just informed that our newly ordered Neon is estimated to ship in mid/late June. I was wondering if anyone could provide some context or reasoning about this? We were under the impression that we would receive the item earlier (giving us time to become acquainted with it before needing it for projects).
Hey @user-ac085e. Apologies that this is later than you expected! Due to a large number of pre-orders and some initial supply-chain hiccups, we currently have a longer order backlog than usual, which leads to a 4-6 week lead time. We expect this to go down to our regular <2 weeks in the near future once we have worked through the backlog. The team are working as fast as they can in this regard.
Hi, @user-ac085e! As @nmt explained, we were pretty overwhelmed with orders recently. We are working as fast as we can to fulfil the orders so you can have your device! I checked your order and will follow up via email.
Hi there! I collected some data using Neon, where I presented some dots on a monitor (monitor, from now on), and I have been trying to calculate the offset between the fixations and the dots' locations. I first need to find the coordinates of the dots on the recorded video screen (screen, from now on), based on the monitor's coordinates. I used markers, so I know where the monitor's four corners are on the projected screen. What I am struggling with and do not understand is that, when I calculate the distance from the camera to the screen, it is not the same for the horizontal and vertical directions. On the website, the screen has 1600 x 1200 pixels with a 132 x 81 degree FoV. That gives me the distance from the camera to the screen as 356.18 for the x direction (horizontal) and 702.05 for the y direction (vertical). Why is this the case? What is the best way to find the dots' coordinates?
(P.S. If there's a good way to search for the same question in the channels, please let me know. I'm not sure whether the same question has already come up in the past, but I couldn't find it. Apologies in advance if this is a redundant question!)
Hi @user-594678! I am not sure I fully understand how you are computing things. Are you using the Marker Mapper or your own custom marker setup to track the screen?
Regarding "That gives me the distance from the camera to the screen as 356.18 for x direction (horizontal) and 702.05 for y direction (vertical)": I am not sure what this means. How do you calculate this?
Is your goal to do an accuracy validation, i.e. to calculate the error of the gaze estimates using a set of ground truth targets presented on a computer screen?
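For context, a hedged guess at where those two numbers come from: plugging the nominal resolution and field of view into a pinhole model yields a per-axis focal length in pixels rather than a physical distance, and the two axes disagree, likely because the wide-angle lens is distorted so the nominal FoV values don't follow a simple pinhole relationship. A quick sketch of the arithmetic:
```python
import math

width_px, height_px = 1600, 1200   # scene video resolution
fov_h_deg, fov_v_deg = 132, 81     # nominal field of view from the website

f_x = (width_px / 2) / math.tan(math.radians(fov_h_deg / 2))
f_y = (height_px / 2) / math.tan(math.radians(fov_v_deg / 2))
print(f"f_x ~ {f_x:.1f} px, f_y ~ {f_y:.1f} px")
# Prints roughly 356 and 702 px. These are per-axis focal lengths in pixels,
# not physical distances. Using the camera intrinsics (and undistorted
# coordinates) is more reliable for mapping points than this
# back-of-the-envelope calculation.
```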
Hi! We are interested in running a research project with Neon for which we would need to track Neon in space (both x,y,z position and orientation). We are planning to check the IMU precision for this. - Is there documentation on the gyroscope, accelerometer and magnetometer technical details? - Can we have more details on how the IMU data is integrated to calculate absolute orientation in space? - Can we access magnetometer data?
@user-b55ba6 we are using the https://invensense.tdk.com/products/motion-tracking/9-axis/icm-20948/. It has a built-in sensor fusion engine to calculate the pose of the IMU relative to the world. The fusion engine takes gyro, accel and magnetometer data into account. We report the data as shown here: https://docs.pupil-labs.com/neon/reference/export-formats/#imu-csv
I would note that the IMU is great for getting a pose but not for getting a full position.
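A minimal sketch of reading the exported orientation data, assuming the Pupil Cloud timeseries export with an `imu.csv` whose columns resemble those in the export-format docs linked above (verify the exact labels against your own file):
```python
import pandas as pd

# Column names are assumptions based on the export-format docs; print the
# columns of your own imu.csv to confirm the exact labels.
imu = pd.read_csv("imu.csv")
print(imu.columns.tolist())

# Orientation from the on-chip fusion engine (gyro + accel + magnetometer):
orientation_cols = [c for c in imu.columns
                    if c.lower().startswith(("roll", "pitch", "yaw"))]
print(imu[["timestamp [ns]"] + orientation_cols].head())
```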
Thanks for the response, Moritz. What is the reason you see the IMU as not great for full position? Is it something about this specific IMU? We are planning to test accuracy in the lab by integrating the pose and accelerometer data, and considering what other methods we might include if this is not precise enough. It would be great to make the most of Neon's data. Thanks!
Hi Moritz. I was reading the IMU datasheet from the website you mention and have the following doubt: can the raw magnetometer data used by the fusion engine be made accessible, so as to get the full 9 DoF from the IMU?
I should have mentioned we do not need absolute position but relative position to Neon's starting point. Thank you both for the responses!
No problem! Even for a position relative to Neon's starting point, you'll need an additional source to fuse with the data (e.g. visual odometry) - integrated accelerometer readings will just drift over time due to the finite sampling frequency
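To illustrate the point with a toy example: double-integrating accelerometer readings that carry even a tiny constant bias makes the estimated position drift quadratically with time, so a stationary device appears to wander. (The numbers below are arbitrary, for illustration only.)
```python
import numpy as np

fs = 200.0                           # arbitrary sample rate for this illustration
t = np.arange(0, 60, 1 / fs)         # one minute, device held perfectly still

# True acceleration is zero, but the measurement carries a tiny constant
# bias (0.01 m/s^2) plus noise.
accel = 0.01 + np.random.normal(0, 0.05, t.size)

velocity = np.cumsum(accel) / fs     # first integration
position = np.cumsum(velocity) / fs  # second integration

print(f"apparent drift after 60 s: {position[-1]:.1f} m")
# The bias alone contributes ~0.5 * 0.01 * 60**2 = 18 m of spurious travel.
```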
Hi Pupil Labs! I've been working with your Pupil Invisible glasses for eye tracking in flight for my research. It's proving really successful for our research questions, however your Neon model 'Ready Set Go!' for sports would suit us much better, as we can easily fit it inside helmets and have pilots wear the helmet/visors that they are used to flying with. However, we had a big problem in our previous data collection, in that the USB cable from glasses to phone kept disconnecting. Is there any scope to use Neon without the cable needing to be attached at all times? We actually have a few corrupted files on our Pupil Cloud because the cable disconnected mid-flight. It would be great to try and rescue some of the data in these if you have a solution?
Hi @user-9d7cb5! Really glad to hear Invisible is proving successful for your research questions!
The 'ready-set-go' Neon model was designed to provide a more optimal solution for dynamic situations, and for sure it will fit underneath helmets. With regards to the USB cable, it does need to be attached at all times. The phone acts as a power source and recording unit.
However, one of the advantages of Neon is that there is no USB connector at the glasses frames, and the USB cable is far more supple. Basically, you shouldn't run into the same disconnects that you have experienced mid-flight.
Re. the existing Invisible recordings, please forward their IDs, along with a description of the issue (share a screenshot if possible), to [email removed] and I will ask the Cloud team to investigate
Many thanks!
Since you're already working with the IMU data, you should note that it's important to calibrate the IMU ~~prior to~~ just after starting a recording. Calibration means to move the headset around in space a little such that the fusion algorithm produces an accurate orientation estimate.
Thanks for the update. Should the calibration be done in every recording? Is there a way to know when the calibration worked (e.g., to know how long to do it)?
let me check with the team and get back to you!
Also, in case this stops you from getting your work done, you can still access Pupil Cloud via the old UI at old.cloud.pupil-labs.com.
Hi Neil, in the new version of the Cloud UI there is no 3rd option to download raw recording data. How can a user download recordings in raw format then? We use raw recording data to import into the iMotions app, both for recordings downloaded from the Companion Device and for recordings downloaded from Cloud, imported using the same parser for both of them.
Hi @user-912183! It is still possible to access this export format. Please go to the Workspace Settings section and then click on the toggle next to "Show Raw Sensor Data". This will enable the 3rd option in the download menu as well. Please keep in mind that you need to enable this feature for each workspace in which you'd like to download this kind of raw data.
Thank you @user-c2d375 that works!
Hi, I got strange timestamps for IMU data in one of the recordings. I made 4 recordings and imported the data from the device. When importing into the iMotions app I found that one of the respondents is missing IMU data. Looking into the raw data for that recording, I found that the gaze timestamp range and the IMU data timestamp range have no overlap, i.e. the gaze timestamp range is [1683195011xxxxxxxxx, 1683195033xxxxxxxxx], while the IMU data's first timestamp is 1683195095580091395, which is outside the gaze range. I could not find any gaze/IMU timestamp offsets either in the recording data or in the Pupil Labs docs. It looks like the data on the Companion Device gets out of sync for that recording. Is this a known issue, or am I missing some timestamp offset value in the data? The issue is reproducible, BTW.
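A quick sanity check for this kind of gaze/IMU range mismatch, assuming the Pupil Cloud timeseries export where gaze.csv and imu.csv both carry a `timestamp [ns]` column on the same nanosecond epoch timebase:
```python
import pandas as pd

# File and column names assumed from the Pupil Cloud timeseries export.
gaze_ts = pd.read_csv("gaze.csv")["timestamp [ns]"]
imu_ts = pd.read_csv("imu.csv")["timestamp [ns]"]

print("gaze:", gaze_ts.min(), "..", gaze_ts.max())
print("imu: ", imu_ts.min(), "..", imu_ts.max())

overlap_s = (min(gaze_ts.max(), imu_ts.max())
             - max(gaze_ts.min(), imu_ts.min())) / 1e9
print(f"overlap: {overlap_s:.1f} s (negative means no overlap at all)")
```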
Hi there, is there a price available for the Bare Metal yet? The website states it will be shipping Q2. Thank you
Hey @user-ffa59a. Yes, the Bare Metal is available - we start shipping next week. The price for Bare Metal is 350 EUR for just the frame and 5800 EUR for the full bundle including the Neon module and a Companion phone.
Hey Pupil Labs, thanks for all your help so far. I was wondering if anyone could provide an ETA for arrival of our Neon? We have an estimated ship date on or before 5/19 and our location is in the US, California, zip code 90501.
Hi @user-ac085e! I think a colleague is already in contact with you via email!
Hi, I hear rumors that playback of Neon recordings is not as we know it from Invisible and Core (using Pupil Player). Is that true?
Hi @user-e40297! That's correct. Neon recordings made with the Neon App are not compatible with Pupil Player. We currently don't have functionality for desktop playback or data export. However, we are developing a script to enable raw data + video with gaze overlay export from raw Neon recordings grabbed off the phone. We don't have a set release date just yet. Later in the year, we also plan to have a simple desktop app with similar functionality.
So the functionality of the QR codes, for instance, is lost?
In order to determine the place of fixation on a screen
You can do this with the Marker Mapper enrichment in Pupil Cloud. It leverages AprilTag markers to transform gaze and fixations to, e.g. screen-based coordinates: https://docs.pupil-labs.com/enrichments/marker-mapper/#marker-mapper
But I thought that the coordinates were only offered offline using Pupil Player?
The Marker Mapper enrichment linked to in my previous message is essentially Cloud's version of the Surface Tracker
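For the curious, a rough sketch of the principle behind marker-based mapping (not the Cloud implementation): detect AprilTags in a scene frame, build a homography from the marker positions to the surface's coordinate system, and push the gaze point through it. The package names, marker IDs and coordinates below are illustrative assumptions.
```python
# pip install pupil-apriltags opencv-python numpy
import cv2
import numpy as np
from pupil_apriltags import Detector

frame = cv2.imread("scene_frame.png")            # one scene-camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detector = Detector(families="tag36h11")
detections = detector.detect(gray)

# Assumption for this sketch: four markers whose centers correspond to the
# four corners of the tracked surface, with known surface coordinates.
surface_corners = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}  # tag_id -> (u, v)

img_pts, surf_pts = [], []
for det in detections:
    if det.tag_id in surface_corners:
        img_pts.append(det.center)
        surf_pts.append(surface_corners[det.tag_id])

H, _ = cv2.findHomography(np.array(img_pts, dtype=np.float32),
                          np.array(surf_pts, dtype=np.float32))

gaze_px = np.array([[800.0, 600.0]], dtype=np.float32)  # gaze in scene pixels
gaze_on_surface = cv2.perspectiveTransform(gaze_px[None], H)[0, 0]
print("gaze in surface coordinates:", gaze_on_surface)
```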
Hi @marc, I received an email response, however, they were unable to answer my question. Could you perhaps provide a general estimation of shipping time after the product has been shipped? I understand the product will ship on or before the 19th but was hoping for a timeline of when we can expect it to arrive.
I apologize for my many questions and emails; I am just trying to schedule out my department's responsibilities and find time to learn/test out the Neon before we begin any future projects/studies with the device. Thank you so much for all your help so far, we are very excited to try out the Neon!
Hi @user-ac085e! Let's move this to DM to discuss your specific case!
Hi @nmt I'm looking for some guidance on whether to buy the Neon vs the Invisible - could you give me a sense of shipping estimates between the two? Ideally, I'd love to just be able to purchase the Neon for an upcoming launch, but if it's materially faster to get my hands on the Invisible, that would be helpful to know
Hi @user-f51f0d! Expected times of delivery as follows:
Neon - Mid to late June
Invisible - This Friday if order is within the EU or next week if outside of the EU
Have you used Invisible before or is this a first order? There are several differences between the systems that might make holding off for Neon preferable. Just let us know if you need any input!
Hi. Following some previous conversations, is there a date for the inclusion of IMU data in the real-time API and the guidelines for IMU calibration? We are planning some lab courses of action and it would be great to have an idea.
Hi @user-b55ba6! We expect this to be released by the end of next week!
(it relates to this question)
I'm sorry to bother you, but could you please answer the following questions?
Can you tell me how to use NEON to get a camera image of my eyes (camera installed inside my glasses)? I have NEON and Invisible. I remember that when I downloaded a file uploaded to PupilCloud before, I could get the above camera image of my eyes. However, when I tried it today, there was no camera image of my eyes in the download folder.
I only have two options for the download button, from which do I get the eye camera footage?
(1) Timeseries data (2) Timeseries data + Scene video
Hi @user-5c56d0! You can still get the raw Android download, but with the new Cloud UI we decided to simplify menus (especially for new users). Therefore, and because not many people require these files, you have to enable the download of these files in the Workspace Settings.
@user-d407c1 Thank you very much. That really helped.
Hi @nmt , I've completed an order for 2x Neons on May 17th but haven't yet received confirmation from Pupil Labs that the transfer/order went through. Could someone please update me on its status?
Thanks! Daniel
Hi @user-c6019c! Regarding this, would you mind contacting sales@pupil-labs.com with some additional information, like your order ID, name, etc.?
Yes -- have sent them an email already, but will send another. Thanks!
Hi! Just out of curiosity, is there a place where we can see a photo of Neon's Just Act Natural on someone's face, to see how it looks?
Hi @user-8b1248 ! I got something for you:
https://twitter.com/pupil_labs/status/1660644024875720706?s=20
Hi @user-8b1248! Next week we will have an update to Neon's product page, including pictures on models.
Nice. Thanks!
Hello! For someone looking at making their first purchase of a Neon, what additional equipment / software needs to be purchased alongside it please to get started?
Hi @user-1b05b3! You don't really need anything else; everything comes included in the bundle (phone, module, frame, software, etc.). If you wish to have a demo, please contact sales@pupil-labs.com
Hi guys, really excited to have received our Neon earlier this week!! Apologies if I am missing something obvious, but how can we download a version of the recording with the gaze overlay included? I found the enrichment section and the gaze overlay says 'completed', but when I download the video from the project it does not include the gaze overlay.
Hi @user-ae76c9! Once the enrichment is completed, and if you are on the project page, you can click the "Downloads" banner at the bottom left. This will open a page and you should see the option to download the data from the enrichment (which includes the video with the overlay) along with the other downloads.
Kindly note that there is an option to remove the gaze circle on that enrichment (in case you want to download the undistorted video alone or to download the fixation scanpath), so ensure that you have the toggle on when running the enrichment.
Thank you for your work. How can I get the blink data? The blink CSV in the file downloaded from Pupil Cloud (for Neon) is empty. I used Neon. I have confirmed that the downloaded blink data from Pupil Invisible is not empty and contains blinks correctly.
Hi @user-5c56d0! Would you mind sharing the recording ID via email with info@pupil-labs.com so that we can further investigate this issue?
Hey @user-8e2030. Responding to your message (https://discord.com/channels/285728493612957698/285728493612957698/1110202997398581389) here.
We're all very excited about Neon and its performance, and we're getting some great feedback from the community already.
Neon is pioneering and sets itself apart from other systems in quite a few ways. Two key aspects in this regard are: 1. it has a modular design, which allows it to fit a variety of frames (see above image for some examples), and 2. it's calibration free, which enables super quick and easy set up, but also maintains accurate gaze data even in very dynamic conditions / in direct sunlight etc. (~2.4 deg uncalibrated; 1.6 deg with offset correction)
Neon tethers to an Android smartphone which makes it fully portable, and samples gaze in real-time at 200 Hz. It also has a bunch of other features, data streams (IMU, fixations and blinks) and access to our Cloud analysis platform at no extra cost. There are too many features to list here. We've just updated our website with all relevant information: https://pupil-labs.com/products/neon. Definitely recommend checking that out!
We'd love to show you Neon in action! Feel free to reach out to info@pupil-labs.com to arrange a personal demo and Q&A session
Do you have any hopes/expectations of improving the gaze accuracy further with future updates? Or is 1.6 deg expected to be the upper limit for the Neon hardware?
I want to know if Neon can capture the following three types of data: 1. Fixations, 2. Saccades, 3. Pupil diameter changes. Thank you.
Hey @user-8e2030. Regarding your question, see my points below:
- Fixations: Fixations for Neon are calculated in Pupil Cloud once the Neon recording is uploaded there. For more details about fixations (including our fixation detector whitepaper), please see our documentation: https://docs.pupil-labs.com/neon/basic-concepts/data-streams/#fixations
- Saccades: Whilst we have not confirmed experimentally that Neon can capture saccades, they can be derived implicitly as the inter-fixation intervals (the gaps between fixations from our fixation detector; a small sketch of this follows below).
- Pupil diameter: Pupillometry will become available for Neon in Q3 2023. You can follow the updates here: https://feedback.pupil-labs.com/neon/p/pupillometry-eye-state-estimation
I hope this helps!
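As a concrete illustration of the inter-fixation-interval point above, a minimal sketch using the fixations export (column names are assumptions based on the Cloud timeseries export; verify against your file):
```python
import pandas as pd

fix = pd.read_csv("fixations.csv")

# Saccade (gap) durations approximated as the time between the end of one
# fixation and the start of the next. Column names are assumed; adjust to
# match your export.
gaps_ms = (fix["start timestamp [ns]"].shift(-1) - fix["end timestamp [ns]"]) / 1e6
gaps_ms = gaps_ms.dropna()

print(f"{len(gaps_ms)} inter-fixation intervals, "
      f"median = {gaps_ms.median():.1f} ms")
```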
I want to know if Pupil Invisible can capture the following three types of data: 1. Fixations, 2. Saccades, 3. Pupil diameter changes. Thank you.
For Pupil Invisible the same fixation detector is available and again you can implicitly detect saccades just like with Neon. Pupil diameter is not available for Pupil Invisible and will only come for Neon.
When will it be possible to measure pupil diameter? Can you provide a specific timeframe?
At this point we can unfortunately not yet give a more precise estimate than Q3 2023.
Does Neon provide free software for data analysis and export? I have already downloaded the software, but I'm not familiar with using Neon, so I don't know if it's free or not in terms of software usage. If you have any advice regarding the software's usage and whether it's free or not, it would greatly help me. Thank you.
Neon is compatible with Pupil Cloud which is a cloud-based analytics platform, which is free to use. You can read more about it on the website here: https://pupil-labs.com/products/neon/software
How do you use the Marker Mapper? For a rectangular area, should we use 4 markers or more, and why are there so many markers in the PDF? I am missing some information here, I guess.
Hi @user-c1e127! This depends on the size of the surface you want to track. For a surface to be tracked well, at least 2-3 markers need to be visible in the scene video. So for a large surface that is not fully visible at all times you'd use a lot of markers, while on e.g. a small screen that will always be fully visible in the scene video 3-4 markers would suffice. Does that make sense?
Another reason why there are a lot of markers is that some users want to track many different surfaces in parallel. Using different markers for different surfaces allows you to easily distinguish which one is which.
I get it @marc Thank you for the information, it makes sense now.
Can we get better resolution? Or is this the best?
Hey @user-c1e127. Can you please clarify which product you're using?
I am using Pupil Neon with Pupil Cloud to generate this Marker Mapper using the markers provided in the documentation
Generate heatmap on text
What does "Run" on the enrichment page actually do? I'm struggling to figure out how to add an enrichment to a project right now. The process isnt quite fully documented on the enrichment documentation page. So far, I've created an enrichment for a project that contains a few videos of the same environment, selected a reference image and scanning recording (both which fit the recommended qualities described in the documentation) and then hit "Run" but I'm not sure where to go from here.
Hey @user-ac085e! Once your enrichment is completed, you should be able to click on it (left panel) and preview on Pupil Cloud how gaze is mapped onto the reference image (I'm attaching an example here). You can also generate heatmaps (go to Tools > Heatmap Visualization), and you can download the data in CSV files (click on Downloads in the bottom left part of the Cloud window and download the enrichment data). I hope this helps!
Thanks for reaching out @user-ac085e ! To add on top of @user-480f4c's answer, I just wanted to mention that once you hit the Run button, the enrichment process will start, and you should see a spinning circle on the side of your screen. You'll also see a banner on the bottom left of the screen that tells you the enrichment is being processed. Once it's finished, you'll see a tick next to the enrichment (just like in the screenshot above), and you'll be able to download the results on the downloads page. Let us know if you got stuck on any of these steps!
For the Pupil Neon, how much distance is there between the outside of the lens and the eye?
Hi @user-edef2b! The distance between the eye camera's lens and the eye differs from person to person due to differences in face geometry, so there is no general answer.
Hi Pupil Labs Team, we are also struggling with low resolutions (when testing on mobile). Anything to increase the quality of the screen recording (scene cam)?
Hi @user-cca81b! You may want to check out the fresh content we have here: https://docs.pupil-labs.com/alpha-lab/phone-screens/
Hi, I am interested in using Neon for an eye tracking project, but I don't really understand what the output format is. Would it be possible to get a single HDMI output with footage from both eyes?
Hi @user-6c6921! What are you struggling with? Can you share a bit more about what you want to achieve? If sensible, you can also contact us by email at info@pupil-labs.com
Hi, I have downloaded the Neon Companion app and connected the Neon glasses. As soon as I did that the app notified me that the firmware / FPGA need to be updated. I did that but the FPGA update got stuck and the progress bar does not move. What shall I do? Can I disconnect the glasses safely and re do the update?
Hi @user-a0cf2a! You can safely disconnect Neon from the phone. Then you will need to force close the application: navigate to the home screen, hold your finger over the icon, select "App info", and then force close. After that you can reconnect Neon to the phone and launch the application to retry the update.
Hi, folks. Can I suggest that you demonstrate your camera's ability to deal with sudden changes in illumination? How quickly does it adjust when you walk from a dimly lit indoor environment to a bright outdoor environment? How does it fare outside when you transition from direct sunlight into a shaded area? Does it even perform under bright direct sunlight?
Hey @user-8779ef! We already have some demo footage in direct sunlight - everything works just fine: 1. https://www.youtube.com/shorts/RgjOan89uCc 2. https://www.youtube.com/shorts/SAco-gw42e8 Although thanks for the suggestion about transitions. The sorts of transitions you mention have no effect on gaze estimation, but they do influence the scene camera view/auto-exposure. I don't think we've got any great footage to share, though. I'll have a dig around, and if not we'll definitely think about some demos that cover this topic!
I suggest this because I think the last few demos you've made have been under controlled illumination. Thanks!
Hi Pupil Labs Team! We followed your procedure to calibrate the eye tracker on the smartphone, and upon reviewing the recordings in the app, everything seems to be tracked correctly (i.e. fixations align appropriately with the presented objects). However, when watching the same recordings in Pupil Cloud, we noticed a consistent offset towards the upper left. In other words, the fixations are displayed shifted towards the upper left, while the saccades themselves appear to be accurate. For some reason this discrepancy occurs only in some recordings. Has anyone reported this issue before, or can you think of a possible explanation for this?
Hi @user-886a79. Would you be able to confirm which eye tracker you're using? Neon or Pupil Invisible?
Also, what are the power requirements of the Neon? I am looking to buy a USB hub to allow charging of the phone while running the Neon eye tracker, and want to make sure the hub can properly power the Neon glasses. Is a data-only connection to the Neon glasses sufficient, or does it need a charging + data USB-C connection?
Fixations and saccades