Hello! I'm trying to connect Neon to my MacBook and use Pupil Capture to record eye movements. However, it says that the selected camera is in use or blocked, and Local USB shows "camera disconnected". How can I solve this?
Hi @user-d714ca! To use Neon with Capture, you need to run it from source. Have you checked our guide? https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/
Yes, I followed it, but it still doesn't work.
Hey, @user-d714ca - looks like there's a bit of information that's out of date/missing on the readme. I'm working on updating it, but wanted to send this so you could get up and running sooner.
* ~~The readme says to use the develop branch, but it's missing some code which may be affecting you. master is up to date and should work, so to switch back run git switch master~~ Fixed, though you will want to update your local copy: git pull
* Due to an open issue with how Mac handles USB devices, you'll need to run Pupil Capture with admin rights. To do this, add sudo before the python command: sudo python main.py capture
Where do I buy the eye tracking add-on for the HTC Vive Pro?
[email removed]! You can either contact us at sales@pupil-labs.com or feel free to fill out the quote request form at this cart link: https://pupil-labs.com/cart/?htcvive_e200b=1
Hello, nice to meet you. I am a Japanese office worker. Is there any way I can try your company's Neon? Sorry for my poor English.
Hi @user-cc3f72! While we do not offer free loans, we do have a 30-day free return policy, so you can try Neon and return it if it doesn't meet your requirements. You pay shipping costs; we refund the price of the hardware. Also feel free to contact us at [email removed] to book a demo and online Q&A session with one of our product specialists to discuss how Neon can meet your research requirements.
To use the Python code here (https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-progress-using-events/), do I need to connect the Neon eye tracker to a computer? I can't connect the eye tracker to the computer with the cable the eye tracker originally comes with.
Hi @user-2eeddd - You do not need to connect Neon to a computer. The Neon Companion app streams data over your local Wi-Fi.
If you're looking to add events with a convenient user interface, you can use the Neon Monitor app: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/#neon-monitor
If you're looking to add events programmatically then you would use the real-time API.
The key here is that both your computer and Neon need to be on the same Wi-Fi network. For better performance, we recommend using a dedicated router if possible; the network does not need to be connected to the internet.
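If you go the programmatic route, here is a minimal sketch of adding an event from your computer with the Python real-time API client (assuming the client is installed, e.g. via pip install pupil-labs-realtime-api, and that the computer and Neon are on the same network):

```python
# Minimal sketch: send a named event from your computer to the Neon Companion app.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion device on the local network
print(f"Connected to {device}")

# Annotate your experiment timeline with a named event
device.send_event("trial.begin")

device.close()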
We have multiple Pupil Labs products (Invisible and Neon). For simplicity, I added the same account to all phones so that all data is stored in one account. I tried logging into Pupil Cloud using two different accounts in different Google Chrome windows, but I kept getting an error in both windows, so I decided to try connecting multiple Companion apps to the same account. But I get an error when trying to include the data from the Neon in a project.
Is this because I cannot combine data from Invisible and Neon in one user/cloud account?
Hi @user-648ceb, I am sorry to hear you're experiencing issues with Pupil Cloud. There's absolutely no issue with including recordings made with both Neon and Invisible in the same project. Could you please clarify whether the recording you're attempting to add to the project is still in the processing phase, or if it's displaying a ⚠️ error icon?
It should be done processing and I can view the recording just fine
Hi, Can someone give me an update on when the pupillometry features will be available on the Cloud?
Hi @user-33607d! We are currently working on developing pupillometry as a feature in the Cloud. By the end of October, you will be able to obtain pupil size estimations post-hoc.
Hello! Can I do batch processing on the Cloud? Especially for the enrichment Reference Image Mapper?
Hi @user-d714ca! Do you mind clarifying what you mean by batch processing? See some points below that might help:
Batch exporting recordings is possible programmatically with the Cloud API (https://api.cloud.pupil-labs.com/v2), but batch creating Reference Image Mapper enrichments, for example, is not available programmatically.
However, note that you can apply the same Reference Image Mapper enrichment automatically to all (or some) of the recordings that you have added to your project. Assuming you have a project with 5 recordings of 5 different participants looking at an area of interest, you can start a Reference Image Mapper enrichment that will be applied to all of these recordings, as long as they all contain the start and end events you use to define the period of the recording over which the enrichment is applied (by default, these are recording.begin and recording.end).
Hello! Within the analysis pipeline, is there any way to do a consecutive gaze heat map for an entire video recording? Additionally, are Pupil Capture and Player compatible with the Pupil Neon?
Hi @user-328c63! Would you mind elaborating on the first request? Do you want a heatmap in scene camera coordinates, i.e. a gaze distribution over the whole recording, or would you like a heatmap over a surface or reference image in the environment?
- The first one is not available out of the box, but it is trivial to compute, as you get a gaze.csv with the coordinates and one row per gaze estimate.
- The second one is available through the Marker Mapper or Reference Image Mapper on Cloud.
https://docs.pupil-labs.com/enrichments/reference-image-mapper/ https://docs.pupil-labs.com/enrichments/marker-mapper/
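For the first case, here is a minimal sketch of building such a heatmap from the gaze.csv export. The "gaze x [px]" / "gaze y [px]" column names are assumed from the Cloud download format; adjust them and the file path to match your export.

```python
# A minimal sketch of a scene-camera gaze heatmap from gaze.csv.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
x = gaze["gaze x [px]"].to_numpy()  # assumed column names; adjust to your export
y = gaze["gaze y [px]"].to_numpy()

# 2D histogram over the scene camera resolution (1600 x 1200 px for Neon)
heatmap, _, _ = np.histogram2d(x, y, bins=(80, 60), range=[[0, 1600], [0, 1200]])

plt.imshow(heatmap.T, extent=[0, 1600, 1200, 0], cmap="hot")
plt.title("Gaze distribution over the scene camera")
plt.axis("off")
plt.show()
```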
Neon recordings made with the Companion Device (phone) are not compatible with Pupil Player. You can use Neon with Pupil Capture, but there are some caveats: for example, you would not be using NeonNet (the deep neural network that estimates gaze), so you will need to calibrate it. We strongly recommend using Neon with the Companion Device and Pupil Cloud, but if you want to know more about how you can use the Core pipeline with Neon, have a look here: https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/
Hi again @user-b03a4c! I'm replying to your message (https://discord.com/channels/285728493612957698/285728493612957698/1159717473021599744) here.
The Gaze Overlay enrichment has been renamed to "Video Renderer" and moved into the Analysis view (see attached screenshot). This was part of a series of updates we announced recently for Pupil Cloud. You can find the details here: https://pupil-labs.com/releases/cloud/v6.1/
Please note that the Video Renderer is currently only available on Pupil Cloud. However, we are working on an offline solution for Neon that will allow users to export eye-tracking footage with a gaze overlay.
Thank you for the quick reply! I got it.
Hello, I wonder if I need any driver or pre-install for Neon when it is connected directly to my Windows PC? I can't find Neon in the Video Source; it only shows a bunch of unknown devices under Local USB, and in Device Manager there's a "driver not installed" warning for Neon Sensor Module v1. Thanks
PS: I'm trying to use the Pupil Capture app.
Hi @user-0262a7! Please note that using Neon with the Pupil Core pipeline only works under macOS and Linux, and you'll need to run it from source. You can find the relevant documentation here: https://docs.pupil-labs.com/neon/how-tos/data-collection/using-capture/
Got it, Thank you so much!
Hi, I am getting huge angles (some around 17°) in the gaze.csv output of my recordings, whereas the real measured angles are no bigger than 4°. Is this due to distortion? I am using Pupil Cloud; does it not automatically correct distortion? Do I need to apply an enrichment? Thanks!
Hi @user-858e7e! Would you mind providing some more information? First of all, are you using Neon or Invisible? Secondly, where do you get 17°, in azimuth or elevation? And how do you know the angles are no bigger than 4°?
Dear sir, thank you for your help. Could you please answer the following? Best regards.
Q1 Is there an eye tracker model that can measure the distance between the user's eye and the object (the object being viewed by the user)?
Could you please answer the following as well?
Q2 Is there an eye tracker model that can measure the angle of eye orientation (the angle of how many degrees the eyeball is tilted)?
Hi @user-5c56d0. At the end of this month, we aim to release our new eye state estimation pipeline for Neon.
It will output the orientation of each eye, as described in Q2.
In theory, eye state can be used to estimate viewing distance using ocular vergence, as described in Q1. However, in practice, this can be a bit tricky depending on the circumstances. For example, if the object is very far away, the eyes' visual axes may never converge.
We'd be excited to see how you fare with such investigations!
Initially, eye state (and pupil size measurements) will be available post-hoc, once recordings have been uploaded to Pupil Cloud.
Keep an eye out for the update! We will announce it here and you'll also see it in Cloud.
Hi, I'm Ken. I want to use Neon with Pupil Capture because I want to export the recorded video to Pupil Player. Although I followed the instructions on the homepage, Pupil Capture did not recognize Neon; the screenshot is here. Could you tell me how to deal with it?
I use Apple M2 Mac. OS: Ventura
Hi @user-b03a4c are you running it from source code and starting it with sudo privileges?
Hello, I am a university student in Japan. The Neon website says that pupil diameter measurement will be available in the third quarter of 2023, but when exactly will it be available?
Hi @user-fb5e57! We are currently finalizing our pupillometry pipeline, and it will be available as a feature in Pupil Cloud by the end of October.
Hello, I would like to know if there are any papers that used Neon for research.
Hi @user-d714ca! Neon is still pretty new in the world of eye-tracking tech. Even though many researchers are already putting it to work, you might not find published papers just yet. To explore studies utilizing Pupil Labs products, I recommend checking our publication list at https://pupil-labs.com/publications/
Thank you for your info
Hi, I reported an issue with the IMU timestamp vector on GitHub (https://github.com/pupil-labs/pl-rec-export/issues/2), but I am not sure this is the right place. The timestamps seem to be inconsistent, and I wonder if you could help me debug this. When plotting the raw data (extimu ps1.time), I get this timestamp vector (timestamp vs. index):
Hi @user-fd8e85! We are looking into this and we'll get back to you as soon as possible! Thank you for your understanding and patience!
When plotting the timestamps from the imu.csv file I get from Pupil Labs Cloud, I get the following graph.
Both are wrong. Is this a known bug? I believe that both the Companion app and the FPGA firmware are up to date.
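In the meantime, one quick way to visualise the inconsistency is to plot the deltas between consecutive samples; negative or very large values indicate timestamp problems. This is a sketch assuming the Cloud export's imu.csv uses a "timestamp [ns]" column; adjust names/paths to your export.

```python
# Check IMU timestamp monotonicity from the Cloud export.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

imu = pd.read_csv("imu.csv")
ts = imu["timestamp [ns]"].to_numpy()  # assumed column name
deltas_ms = np.diff(ts) / 1e6

plt.plot(deltas_ms)
plt.xlabel("sample index")
plt.ylabel("delta to previous sample [ms]")
plt.show()

print("non-monotonic samples:", int((deltas_ms < 0).sum()))
```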
Hello, I have two questions regarding the Reference Image Mapper:
1) Does the Reference Image Mapper work for mapping AOIs in a 360° indoor setting? Most AOIs will not move, but the person wearing the tracking device will move around in the room. What happens if not all AOIs can be captured in a single photo as the reference image? Is it possible to provide more than one reference image without slicing the recording into pieces for mapping? I found this guide in which a similar situation is described: https://docs.pupil-labs.com/alpha-lab/multiple-rim/ It appears, however, that for that example the recording was split manually (using events) to apply the mapping to the most appropriate reference image. Is that necessary?
2) Is the reference image mapping available if Pupil Cloud is not used?
Hi @user-3f477e! Please see my points below:
1) It is highly recommended to slice the recording into pieces for the mapping as this will significantly reduce the processing and completion time for your enrichments. For the example you mentioned (someone walking around looking at different AOIs and mapping gaze onto these AOIs), I'd highly recommend following the Alpha Lab guide on applying multiple Reference Image Mapper enrichments. Regarding your point that the slicing was done manually (using events), that's indeed the case for this guide. Note that you can add events either post-hoc in Pupil Cloud as done in this guide or in real-time using the Neon Monitor App or the Real-time API.
2) Unfortunately, the Reference Image Mapper enrichment is only available on Pupil Cloud.
I hope this helps, but please let us know if you have any further questions on that!
Is it possible to synchronize and collect the Neon signal with a motion capture system like Vicon?
Hi @user-3c037a! There are different ways of synchronising Neon with other sensors. Specifically, for Neon-Vicon synchronisation, you have the following options:
1) We maintain an LSL relay for publishing data from Neon in real-time using the LSL framework. This enables a unified and time-synchronised collection of measurements with other LSL supported devices. You can find the LSL relay on PyPi (https://pypi.org/project/pupil-labs-realtime-api/) and relevant instructions on readthedocs (https://pupil-labs-realtime-api.readthedocs.io/en/stable/). However, as far as I can see it's unclear whether the Vicon system is supported by LSL (see here: https://labstreaminglayer.readthedocs.io/info/supported_devices.html#supported-motion-capture-hardware). I'd recommend contacting them to confirm.
2) Alternatively, you can use Neon's Real-time API: https://docs.pupil-labs.com/neon/real-time-api/introduction/. Using the Real-time API you can access the raw data, control experiments remotely, send/receive trigger events and annotations, and also synchronise Neon with other devices.
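For reference, here is a minimal sketch of how a sync event could be sent via the real-time API. It assumes the Python client's send_event accepts an explicit event_timestamp_unix_ns argument (as in recent client versions); you would timestamp your Vicon data against the same computer clock.

```python
# Sketch: mark "motion capture started" in the Neon recording using the PC clock.
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# The event timestamp is taken from the computer's clock, so Vicon data
# timestamped on the same machine can be aligned to this event.
device.send_event("vicon.start", event_timestamp_unix_ns=time.time_ns())

device.close()
```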
I hope this helps!
Thanks @user-480f4c ! It was very helpful.
Thank you @user-480f4c
Hello, I have a question. I noticed a slight misalignment between the video obtained from the Gaze Overlay enrichment and the video I created by overlaying gaze data on the scene camera video. I suspect the cause might be a timing synchronization issue. When trying to synchronize using the 'timestamp [ns]' in 'world_timestamps.csv' and the 'timestamp [ns]' in 'gaze.csv', what's the best way to achieve this synchronization? Do I need to perform any resampling or other steps?
Hi @user-b8ccc0! There's nothing special here. Matching gaze samples to the nearest existing world frame should suffice. What are you using to render your video? If you could outline the steps you've taken I might be able to point something out.
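For example, here is a minimal sketch of that matching step with pandas, assuming both gaze.csv and world_timestamps.csv from the Cloud export use a "timestamp [ns]" column (adjust names/paths as needed):

```python
# Match each gaze sample to the nearest scene video frame by timestamp.
import pandas as pd

world = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]")
gaze = pd.read_csv("gaze.csv").sort_values("timestamp [ns]")

world["frame_index"] = range(len(world))

# For each gaze row, pick the world frame with the closest timestamp
matched = pd.merge_asof(
    gaze,
    world[["timestamp [ns]", "frame_index"]],
    on="timestamp [ns]",
    direction="nearest",
)
print(matched[["timestamp [ns]", "frame_index"]].head())
```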
I do have a follow-up question for my own interest - why do you need to create your own video when we already provide one?
Do you have a distributor for Neon in Australia, so we can avoid paying customs taxes?
Hi @user-5eb072. Thanks for your question. We do ship to Australia directly from Berlin. Prices on our website are exclusive of import duties/taxes. If you would like a quote or further information about the import process, reach out to sales@pupil-labs.com and someone will assist you from there!
Hi there, I want to know the amplitude and speed of reading of the subjects from the data/recordings on Pupil Cloud. I was wondering, is this possible, and how do I get these two parameters from the data? Thanks!
Could you elaborate a little on what you're trying to measure/study? Are you talking about visually reading printed text? If so, what do you mean by amplitude?
Hello, I have a question. Is there any way we can see how the calibration process works?
There isn't a calibration process for the Neon device like there is for other eye trackers. Images of the eyes are mapped to gaze locations in the scene using a neural network.
Hi @user-29f76a! Responding to https://discord.com/channels/285728493612957698/285728493612957698/1162362434485501982 here. It would first be worth double-checking that the module is securely attached to the frame, e.g. if you've previously removed the module from the frame, check that the screws are secure. If that's all in order, there might be another issue; if so, please send your original order ID to info@pupil-labs.com and someone will assist you from there!
Hi, I didn't detach anything and the frame looks to be in order. I plugged it in properly, but still the frame does not connect to the app. I will send an email soon.
Hello, I have three questions. 1) Can we preprocess the data in Pupil Cloud? 2) I would like to know if there are any papers that describe the steps to process Invisible/Neon export data. 3) Are there any other free tools that can be used to process Neon data?
Hi @user-d714ca! Regarding your first three questions (https://discord.com/channels/285728493612957698/1047111711230009405/1162660268376084570), could you please clarify what kind of preprocessing you have in mind? Ultimately, data preprocessing depends on your specific setup and research questions. It would be helpful if you could provide more details on that. Please also find below some additional points that might be helpful:
As for the follow-up question in your last message (https://discord.com/channels/285728493612957698/1047111711230009405/1163107071651225691), fixation detection is automatically done once the data is uploaded on Pupil Cloud and the parameters cannot be adjusted. May I ask why you would like to adjust the fixation threshold?
Sorry to disturb you. I have another question. I want to know where I can adjust the fixation threshold and visual angle. Can I do it in Pupil Cloud?
@user-480f4c Thank you very much for your response. Regarding preprocessing: for example, we have found a systematic inaccuracy in the fixations of one sample. We want to fix it so that when we run the Reference Image Mapper enrichment, the fixations are located correctly in the reference image. Another preprocessing example: we show a stimulus to the client, but he blocks his eyes with his hand; the fixations are nevertheless located in the reference image, and we want to delete them.
Hi, when the scene camera is overlaid with the estimated gaze position, we noticed a constant vertical error compared to the target. Is there any reason we get this, and is there any way we can adjust for it?
Hi @user-5216cd! Have you tried the offset correction feature in the Neon Companion App already? You can find this feature by clicking on the name of the currently selected wearer on the home screen and then click "Adjust". The app provides instructions on how to set the offset. Once you set the offset, it will be saved in the wearer profile and automatically applied to future recordings of that wearer (unless you reset it). Please note that you'd need to set the offset correction prior to the recording.
Hello, do you have any publications related to Marker Mapper?
Hi @user-d714ca! We do not have specific publications related to Marker Mapper; however, you can find a list of publications that have used our products in our publication list: https://pupil-labs.com/publications/ Please also have a look at this relevant post: https://discord.com/channels/285728493612957698/633564003846717444/1072072155547832350 - May I ask what you'd like to learn about the Marker Mapper?
For IRB purposes what are the potential risks for using pupil neon glasses? Currently have the following: "Possible risks include the disturbance by IR light and/or radiation if the IR sensors that the device uses are on the same wavelength as that of the illuminators of the device. It should be noted, however, that both risks are very low-level."
Hi @user-45f648. I'm not sure I fully grasp what your statement is intending to convey, since it appears to mention two devices. Is your question about potential risk to the Neon wearer from IR illumination? If so, Neon has passed IR safety testing, so there's no risk during normal usage.
Hi Pupil Labs team. I wanted to ask if there have been any updates or changes to the Neon firmware or app that would result in code changes being needed to interact with the real-time API client?
Our simulator code was successfully interacting with the Neon as of last month (starting/stopping recordings, labeling events). As of today, however, our code connects to the Neon but no longer has any control over it.
Hi @user-ac085e! Our real-time API should function as expected. I just verified that the original examples for our Python Client, which demonstrate how to connect to the device and start/stop recordings, are working perfectly. Which Companion app and real-time api versions are you running? And could you share the part of your code that's not working, along with the terminal output?
Hi, I'm Ken. I want to create a heatmap based on my recording using an enrichment (Reference Image Mapper). However, it takes a long time and did not finish making the heatmap. Could you tell me some possible reasons?
Hi @user-b03a4c! In general, it is highly recommended to slice the recording into pieces for the mapping as this will significantly reduce the processing and completion time for your enrichments. You can "slice" the recordings directly on Cloud by adding events. Please have a look at this guide that also used event annotations for the RIM enrichments: https://docs.pupil-labs.com/alpha-lab/multiple-rim/
Could you please clarify if your enrichment was successfully completed? Can you see the heatmap after running the enrichment?
I want to report a problem: the Neon Companion app keeps failing to open. How can I fix it?
Hi @user-d714ca! Please long-press on the app icon, click on information, and then select "Force Stop". Then try re-opening the app and ensuring that it's up-to-date in Google Play Store
Thank you! Let me try it now.
It seems to work! Thank you!
Hi everyone, I'm Daniel, a researcher from the UK who is looking to use Neon to track the gaze of young stroke survivors and identify obstacles to walking in real-world settings. I was just looking for advice from those who have used Neon before, or another Pupil Labs eye tracker that may be suitable for a project such as this. Additionally, as this is a one-off project, I was wondering if there may be a company/institution based in the UK that would be willing to reach out to me with advice or potentially let us rent/borrow their gaze tracking glasses. Many thanks in advance for all your help.
Hello, can someone help with the gaze metrics that have to be computed through Python? I am not able to get it done by myself even though I have a little knowledge of Python. Is it possible for someone to get on a call with me? Or maybe someone could be my point of contact so that I can send any questions/doubts directly to them?
Hi @user-37a2bd. Neon's gaze data doesn't need to be processed through Python scripts. We do, however, have several experimental post-processing tools in the Alpha Lab section of our docs. Are you referring to these? If so, Discord is the appropriate place for asking questions. Could you elaborate a bit on which scripts you've attempted to run and the problems you've encountered?
Hi Pupil team! Does Neon use dark-pupil detection?
Hi @user-594678! Neon uses a different approach for gaze estimation: a deep neural network. This network has been trained on a large cohort of people and, simply by "seeing" the eye images, is able to estimate the gaze position.
Thanks @user-d407c1 for the quick response! When do you think the white paper will be out? Also, should I think of it like this: in general, Neon captures eye images through the IR cameras and processes them with a DNN that was trained to output the gaze position?
hi Pupil Labs, We're working on an application that streams from multiple Neon devices on the same network. We currently have four and will be extending to more but we're wondering if there's a way to mock the real-time APIs to stream data from simulated devices for testing purposes?
Hi @user-ccf2f6. While it may sound simple in theory, it is actually very challenging to implement in practice due to all of the intricacies of how the data is streamed, and knowledge about the app's internals that we cannot disclose. We recommend using an actual Neon + phone combination for development work.
Hiya, I am trying to use the Reference Image Mapper. I have uploaded the reference image and selected the scanning recording, but the processing % is always zero. Any suggestions on how to make it work?
Hi @user-2cfdc6. Depending on the duration and number of recordings you're running the reference image mapper on, it can take some time to compute. Can you try refreshing your browser window and seeing if the progress has changed?
Hi team. Any suggestions for how I can make this easier on myself (identifying fixation points outdoors in bright sunlight), or shall I recommend an option to change the font colour for future recordings (or is this an option I missed)? Thanks - some of the recordings are easier, but transitions from shade to sunlight cause this issue where it's impossible (rather than just quite difficult - even on light grey pavement).
The same happened here. Please help us, @nmt.
Hi @user-c10c09. Thanks for sharing the feedback. You're right that the web player in Cloud only showed the fixation IDs in white, and there wasn't an option to change it. We've just pushed an update so that the fixation IDs are more salient.
Posting the error message here -
Thanks for sharing the output of the terminal. That helps. Let's move this conversation to the software-dev channel - it's better suited there!
Does anyone have the same problem? @user-d407c1 @nmt
Reference Image Mapper fail
Hi @nmt! Thank you for pointing me to the relevant material. I will take a look at it. Looking forward to reading the whitepaper for Neon as well!
Hi team, I've been facing some issues with the Companion app's network interfaces not working correctly. Every so often (usually when multiple devices are on the network) the RTP server cannot be connected to. When this happens, the glasses' LED indicator is constantly solid upon reconnecting them to the phone (first image). When they are unplugged from the phone, the graphic in the app does not switch back to the picture of the phone as it should (second image). This is only resolved when I restart the phone. Replacing the glasses with a new pair does not resolve this.
Is this a bug in the companion app?
Also note that GET <ip>:8080/ resolves fine, but GET <ip>:8080/api/status hangs indefinitely.
Hi Pupil Team, According to this link - https://docs.jupyter.org/en/latest/running.html there should be an "Areas of interest" tool under the tools section. I've attached an image of the same. Is this feature yet to be released?
Sorry I meant to include this link. https://pupil-labs.com/releases/cloud/v6.1/
Hi @user-37a2bd! The "Areas of interest" tool is not available yet, but it will be available in the next Pupil Cloud update. In the meantime, you can follow this tutorial to define areas of interest on a reference image (after running the Reference Image Mapper enrichment): https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/
Hello! I want to use the Neon Companion on another phone, but it said 'Your device is incompatible with Neon Companion'. What can we do?
Hi @user-d714ca, the Neon Companion app is only compatible with OnePlus 8, 8T and 10.
Thank you~
What is the cost (US) for the glasses and software? I have not seen any prices.
Hi @user-af038f! The cost of the bundle can vary slightly depending on the specific frame you'd like to include. The bundle includes Neon as well as a complimentary membership to Pupil Cloud. You can find detailed pricing information for Neon here: https://pupil-labs.com/products/neon/shop
Hi! We were trying to download some recordings made yesterday and today, but there is an error (see attached). Can anyone help us with this? Thank you!
Hi @user-e141bd. Can you please DM me the ID of an affected recording and I will ask the Cloud team to investigate.
Hi @user-e141bd. The Cloud team have reprocessed your recordings, so they should be working as expected now. Could you please upgrade the Neon Companion App in Google Play Store? This should prevent future instances of the error!
Thanks, @nmt ! DMed you with the IDs just now.
Thanks. I'll report back asap
Hi!
I am interested in working with the WORLD images from Neon. I would like to know what is the best way to get the best quality images from the scene camera. Raw files provide an mp4. Is there a recommended way to process videos? Is there a way to get the raw frames?
I use the OpenCV library to get frames. The first frames - from frame 0 to approximately 1500, which are also the first seconds the camera is open - seem to have poorer quality. As I am new to image processing, I am not sure if this has to do with the raw data, my OpenCV video-to-frame processing, or any other step in the image processing pipeline. Looking at the API scripts, I see you receive data using AV libraries.
Do you have any suggestions for image handling from Neon, especially for researchers that would consider images as data?
Thanks!!
Hi again @user-b55ba6! OpenCV is known to be unreliable, often leading to frame drops (see also this relevant message: https://discord.com/channels/285728493612957698/633564003846717444/1020311042099785748). We recommend pyav (https://pyav.org/docs/stable/) instead. You can have a look at this example that shows how to extract individual frame images from the world video using pyav. The tutorial was created for Pupil Core recordings, but a similar approach can be applied to Neon recordings: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb. See also this relevant tutorial on frame extraction using ffmpeg: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
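As an illustration, here is a short sketch of decoding scene-video frames with pyav. The file name below is just a placeholder; point it at the scene video from your raw download.

```python
# Decode scene video frames with pyav instead of OpenCV.
import av

container = av.open("scene_video.mp4")  # placeholder name; use your scene video file

for i, frame in enumerate(container.decode(video=0)):
    img = frame.to_ndarray(format="bgr24")  # H x W x 3 numpy array for further processing
    print(i, frame.pts, img.shape)
    if i >= 4:  # stop after the first few frames for this example
        break

container.close()
```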
Also, I have a timestamp question. I am looking at the timestamps in world_timestamps.csv, gaze.csv, and imu.csv. I see that the world camera is the first to provide data, then the IMU (almost 2 seconds after), and then gaze data almost 500 ms after the IMU. Does this make sense? Does this mean the eye camera is also delayed (and this explains why gaze data also comes some seconds later)?
Hey @user-b55ba6! It is known that the time each sensor starts varies. The difference between sensor start times ranges from 0.2 to 3s on average, so we would recommend starting the recording and waiting a few seconds to ensure all sensors have started properly before any important part of your experiment begins.
In case it's relevant, we have a guide on timestamp matching across sensors https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
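If it helps, here's a quick way to inspect the per-sensor start offsets in a Cloud download. This is a sketch assuming each export CSV has a "timestamp [ns]" column; adjust file names and columns to your export.

```python
# Compare the first timestamp of each sensor's export file.
import pandas as pd

files = ["world_timestamps.csv", "gaze.csv", "imu.csv"]
starts = {f: pd.read_csv(f)["timestamp [ns]"].iloc[0] for f in files}

earliest = min(starts.values())
for name, ts in starts.items():
    print(f"{name}: starts {(ts - earliest) / 1e9:.2f} s after the earliest sensor")
```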
I hope this helps!
Hi everyone. Has anyone here read publications using Neon setup especially telling more about methodology and analysis?
Hi,
```python
import nest_asyncio
nest_asyncio.apply()

from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()

from datetime import datetime
import matplotlib.pyplot as plt
import cv2

eye_sample = device.receive_scene_video_frame()
dt = datetime.fromtimestamp(eye_sample.timestamp_unix_seconds)
print(f"This scene camera image was recorded at {dt}")

# matplotlib expects images to be in RGB rather than BGR, so we use OpenCV to convert color spaces.
eye_image_rgb = cv2.cvtColor(eye_sample.bgr_pixels, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(10, 10))
plt.imshow(eye_image_rgb)
plt.axis("off")

gaze_sample = device.receive_gaze_datum()
dt = datetime.fromtimestamp(gaze_sample.timestamp_unix_seconds)
print(f"This gaze sample was recorded at {dt}")

plt.figure(figsize=(10, 10))
plt.scatter(gaze_sample.x, gaze_sample.y, s=2000, facecolors='none', edgecolors='r')
plt.xlim(0, 1600)
plt.ylim(1200, 0)
```
I ran the above code and got an 'unable to create requested socket pair' error. What is this problem about, and how can I fix it so the code runs?
Hi @user-2eeddd. There could be some network connection issue. Does Neon Monitor work for you? Read how to set it up here: https://docs.pupil-labs.com/neon/getting-started/understand-the-ecosystem/#neon-monitor
Hi Pupil Labs team. I wonder if there is a way to get the gaze positions from Neon without using Pupil Cloud? We have some files that have already been deleted from the phone and cannot be uploaded to Pupil Cloud anymore.
Hi @user-c16926. Do you mean that you exported those recordings from the phone to a computer and then deleted them from the phone?
Hey guys, I had a question with regard to the Neon glasses and the outputs. Say in a project we had 5 participants and 5 different products that we tested. When we run a Reference Image Mapper enrichment for any product, does it take the data of all 5 participants and give us one output, or do we have to do a separate Reference Image Mapper enrichment for each participant?
Hi @user-37a2bd Indeed, it is correct that if you run a separate Reference Image Mapper enrichment for each product, the data export will include all participants who looked at that specific product. So, instead of creating separate RIMs for each participant, my recommendation is to create separate enrichments for each product. To make the most out of the Reference Image Mapper enrichment, I'd suggest you consider two different methods:
1) Create five distinct projects to distinguish between the different products that participants looked at, especially if your experiment involves looking at one product per recording.
2) Add all the recordings to the same project, include custom events on the recordings to mark when a participant begins and finishes looking at the product(s), and create separate enrichments for each product, making sure to select the custom events to run the enrichment only over the portion where each specific product was present on the scene. This is particularly useful if multiple products are exposed to the wearer during the recording. You can follow a similar approach as described in this Alpha Lab tutorial (https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment).
Either way, once the enrichment process is completed, the data export will include gaze and fixation data from all the participants. To distinguish among them, you can match the recording id column in these files with the recording id column in sections.csv to identify the respective wearer or recording names. Please take a look here for more information about the RIM export: https://docs.pupil-labs.com/export-formats/enrichment-data/reference-image-mapper/#reference-image-mapper
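For example, here is a sketch of that matching step with pandas. The "recording name" and "wearer name" columns in sections.csv are assumptions; adjust the column names to whatever your export actually contains.

```python
# Attach wearer/recording names to the RIM gaze export via the recording id column.
import pandas as pd

gaze = pd.read_csv("gaze.csv")          # RIM enrichment export
sections = pd.read_csv("sections.csv")  # one row per matched recording section

labelled = gaze.merge(
    sections[["recording id", "recording name", "wearer name"]],  # assumed columns
    on="recording id",
    how="left",
)

# e.g. count mapped gaze samples per wearer
print(labelled.groupby("wearer name").size())
```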
Alright Eleonora, thanks.
I was looking at the Demo workspace in the cloud and noticed that all the videos are in the same project folder. A few more questions arise from this:
Question 1 - Does the Reference Image Mapper automatically exclude all of the scanning videos from the data points (since there are multiple scanning videos in the folder)? If it does exclude the scanning videos, what criteria must be met for the software to understand that it is a scanning video? E.g., does the wearer name have to be something specific like "scanning"/"scanner"?
Question 2 - Do we have to create a separate project for a product/section if we have a participant viewing multiple products/sections? Is that necessary to get the most accurate output?
Regarding the demo workspace, your observation is correct. All the recordings are indeed within the same project, but the Reference Image Mapper enrichments are executed on different custom events to ensure that each enrichment processes data only from the relevant section of the recording.
For instance, if you click on the first Reference Image Mapper named "Adel_Dauood_1_standing" and click the downward arrow next to "Advanced Settings", you'll notice that the custom events Adel_Dauood_1_start and Adel_Dauood_1_end have been chosen as the start and end points, respectively. This means that this specific enrichment processes data solely within the segment of the recording defined by these two events, and only for those recordings where these events are present.
In this regard, you'll observe "5/36 recording matched," indicating that out of all the recordings in the project, only 5 contain these two custom events. Consequently, the Reference Image Mapper will only be applied to these segments, and the export will exclusively include data related to them.
To address your questions:
1) Yes, the scanning recording is automatically excluded from the RIM processing, as you need to select a specific scanning recording when configuring the enrichment. If you have multiple scanning recordings in the project, to prevent them from being processed by RIM, I recommend not adding any custom events to them; instead, retain the default recording.begin and recording.end. By running your enrichments solely on custom events (and not on the two default events), they won't be matched and consequently will be excluded from the processing.
2) In this case, you do not need to create separate projects, but you do need separate Reference Image Mapper enrichments, each with distinct images of the various products/sections, to ensure that gaze data is mapped only to the specific object. The demo workspace, for instance, includes several recordings where the wearer looked at different paintings, and subsequently, multiple enrichments were executed to map gaze data to the different artworks (selecting the relevant custom events). This approach is quite similar to what is demonstrated in this Alpha Lab tutorial (https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-on-multiple-reference-images-taken-from-the-same-environment) with pictures of different areas within the same living room.
Thanks @user-c2d375, understood everything you explained! I do, however, have another question: the heatmaps that are generated for the RIM will be consolidated across all the wearers, won't they? If that is the case, is it possible to generate a heatmap of a product/section for only one wearer? I see that as a possibility only if we create separate projects for each wearer, which would be a little tedious, or do you have a separate program for that, like the ones you have for gaze metrics and still and dynamic scanpaths?
Hi team, I had a question on whether it is possible to get the following metrics from Pupil's equipment. I am attaching a photo of the output we get for individual participants from another set of eye tracking equipment. Please let us know how much of it is possible to get from the Pupil Neon, which we are currently using, or from any other equipment that you have in your roster.
Hi @user-37a2bd. It is possible to compute those metrics from Neon's data streams. However, we do not explicitly report them. You would need to process the exports and compute the desired metrics offline. The approach you take would largely depend on the experiment or use case.
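As an illustration, here is a hedged sketch of computing a couple of basic per-recording metrics offline from the fixations.csv export. The column names ("recording id", "fixation id", "duration [ms]") are assumed from the Cloud export format; adjust them and the metrics to match your files and the report in your screenshot.

```python
# Compute simple offline metrics per recording from the fixations export.
import pandas as pd

fixations = pd.read_csv("fixations.csv")

metrics = fixations.groupby("recording id").agg(
    fixation_count=("fixation id", "count"),
    mean_fixation_duration_ms=("duration [ms]", "mean"),
    total_fixation_time_ms=("duration [ms]", "sum"),
)
print(metrics)
```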
@&288503824266690561 what does a solid green LED indicator mean? The glasses won't connect to the app, but the green LED turns on when connected to the companion device.
Hi! I think this might be a hardware issue. Please reach out to info@pupil-labs.com to get further diagnostics.
Hi @nmt, thanks for having the team reprocess the recordings for us! It seems like two of the recordings are still showing the same video transcoding error. I will send you the IDs of those two recordings. Do you mind taking a look at them again? Thanks so much! Or should we try to download the videos locally from the phone for now?
Hi Team, We ran a project yesterday with the neon, we had a total of 10 RIMs out of which I have attached 5 of the heatmaps. The heatmaps seem to be way off on two of them and the rest seem to not collect the data accurately. We processed two sets of 10 RIMs in the same project. The first time we processed it the heatmaps seemed to be accurate but the second time we ran it the heatmaps were off. Unfortunately we didn't save the heatmaps the first time around. Could you guys let me know what the problem is? Also is there any time limit within which we have to work on the uploaded files? I tried working on an RIM from a video that was recorded 10 days back and it gave me an error saying the following - "Error: The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image."
Hi @user-37a2bd
The message that you saw, "Error: The enrichment could not be computed based on the selected scanning video and reference image. Please refer to the setup instructions and try again with a different video or image.", indicates that the reference image cannot be found in that video.
You can see how to perform the scanning recordings here: https://docs.pupil-labs.com/enrichments/reference-image-mapper/
Without seeing the recordings, it is hard to say whether they are appropriate, so feel free to DM me and invite me to your workspace if you want some more concrete feedback.
Regarding the heatmaps, kindly note that the scanning recording is by default also included in the generation of the heatmap, so that might be why you are seeing it off. In an upcoming update of Cloud (targeting next month), you will be able to select the recordings contributing to the heatmap in an easy way. For now, you will have to determine them using events, meaning you will have to create start/end events on the recordings that you are interested in, and then use them in the enrichment. Let me know if you need assistance with it.
Let me know if you would like any other data from the above project.
Thanks again!
Hey all, does anyone know if there are any limits on the number of "wearers" you can create within a workspace?
Hey @user-ac085e! There are no limitations on the number of wearers you can have in a workspace. Although if it gets really cluttered, it might be preferable to set up a new workspace, but it depends on your project/preferences.