Hi. Is there a way of downloading the Neon scene video with the gaze overlay visible? At the moment I can only see this on Cloud itself. Once I download the scene video, the overlay disappears.
Hi @user-c1bd23, you can create a video with the gaze overlay on Cloud by going to your project > Visualizations > Video Renderer. Here you can also customize the size and color of the gaze marker.
Can I create a scanning recording to use in Pupil Cloud with a Neon and still have the reference image mapper use data collected with an Invisible?
@user-7aed79 I replied to your message here: https://discord.com/channels/285728493612957698/633564003846717444/1235529907354861618. Let's keep the conversation there to avoid double posting 🙂
Hi all! My team is in the process of designing a study using wearable eye trackers and I had some questions about viability of using Neon. Who would be the best person to reach out to (or would posting here on Discord suffice)? Thanks so much!
Hi @user-45a63b, you can post your questions here on Discord. Otherwise, if you want to have a demo call, feel free to reach out to us via email: info@pupil-labs.com
Hi Neon team, is there a way to save the settings of Neon Player, and batch process multiple recordings using the same set of settings in Neon Player?
Hi @user-613324! Batch processing of recordings with Neon Player isn't possible. However, depending on which analysis plugins you want to run, it might be possible to process your recordings programmatically. So, two follow-up questions: 1. Which analysis plugins do you want to use? 2. Are you interested in running the analysis from the command line?
Hi all, in order to place our order from the University, I have to register this company (Pupil Labs) as a supplier with our University. Can I use one of you as a contact for the form? @user-480f4c @user-d407c1
@user-82f555 Please contact sales@pupil-labs.com in this regard and a member of our team will reply ASAP.
Hello. I am wondering about the average gaze-point output frequency on the current Companion device. I guess 200 Hz is a fairly high rate for a mobile device running a neural network. I was hoping you could share this specification. Thanks.
Hi @user-7b683e - I'm not sure I fully understand your question. Could you elaborate a bit?
Note that Neon's eye cameras run at 200 Hz. If your Companion Device is OnePlus 10 Pro or Moto Edge 40 Pro, you can get the real-time gaze at 200 Hz as well.
Hello Nadia. Yes, the eye cameras operate at 200 Hz. However, I was curious about the approximate frequency of the pupil detector on these phones during real-time usage, not from recordings. I guess the answer is 200 Hz for that too. Thank you.
Hi @user-7b683e 👋! To expand on my colleague's response, you can adjust the inference sampling rate through the Companion App. Simply navigate to the settings page, where you'll find options to set the rate to 100 Hz or 33 Hz, or even disable real-time inference altogether, in which case you will obtain the gaze data in Cloud at 200 Hz post-recording.
Thank you for your response, Miguel, and hello. I guess the mobile app offers us two alternatives, 33 and 100 Hz, is that correct?
Hi team, we got this error message while recording with the Neon glasses in the Companion app. Could you let me know what the issue is?
We were recording during the day out in the sun. Could that be the problem?
We got this error for 2 recordings, and neither of those recordings got uploaded to the cloud.
Hi @user-37a2bd, could you open a ticket in #troubleshooting? We'll continue the conversation there.
Hi Nadia. I have created a ticket
Hello, I'm Wiss from Thailand. I have a question about a clip that hasn't uploaded to Pupil Cloud.
I recorded with Neon last Friday, 3rd May 2024, but the recording disappeared without being uploaded to Pupil Cloud. What should I do? I need to see the analysis soon.
Please kindly advise.
I still have 2 recording files on the Companion device (one recording from the world camera, and another of the subject's eyes).
Hi @user-afb0c1, Your screenshots are from the phone storage. May I just check with you, do you see your recordings inside the Neon Companion app? You can check this in the app by tapping the file icon.
Hello! I have a question about the coordinate systems for the gaze and IMU data. We are trying to calculate a gaze angle that also accounts for the head position (so if the participant's head is tilted up/down, it would change the gaze angle). Our initial plan was to get pitch from imu.csv and the gaze elevation from gaze.csv and then add them together. The documentation says "The IMU's coordinate system is rotated by 102° around the x-axis in relation to the scene camera's coordinate system.", but it's not clear how to actually convert between the coordinate systems to get things properly aligned. Do you have any suggestions on how we can combine the pitch and elevation to get the gaze angle relative to the horizon? Thank you!
Hi @user-01e7b4, just to help me understand, you would like to calculate gaze in relation to the head position, is this correct?
It's true that gaze is provided relative to Neon's scene camera (e.g., azimuth and elevation), whilst the IMU output is given in its own coordinate system. However, it should be possible to examine the relationship between head posture and gaze using relative changes in these values. Is that feasible for you?
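For illustration, a rough sketch of that relative-changes idea in Python (this is our assumption, not an official pipeline; the column names follow Neon's CSV export format, and the sign conventions should be verified against the documentation):

import numpy as np
import pandas as pd

imu = pd.read_csv("imu.csv")      # from a Neon recording export
gaze = pd.read_csv("gaze.csv")

# Align IMU pitch to the gaze timestamps via linear interpolation
pitch = np.interp(gaze["timestamp [ns]"], imu["timestamp [ns]"], imu["pitch [deg]"])

# Changes relative to the first sample, taken as an arbitrary baseline
d_pitch = pitch - pitch[0]
d_elev = gaze["elevation [deg]"] - gaze["elevation [deg]"].iloc[0]

# Head pitch change plus gaze elevation change approximates the change
# in gaze angle relative to the horizon
gaze_change_vs_horizon = d_pitch + d_elev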
Hi, I was wondering whether it is possible to download the raw gaze data AFTER doing an enrichment (mapping to 2D). Otherwise, the gaze data coordinates are always relative to the video of the recording, I guess, and can't be compared between wearers?
Hi @user-861e5b! Which enrichment are you referring to: Reference Image Mapper or Marker Mapper? After performing an enrichment, you will get a separate gaze.csv file which contains the remapped gaze in reference image or surface coordinates for all the enriched recordings.
I was referring to Marker Mapper. Ah, but that already answers my question. Thank you! 🙂
Hello! I am having issues getting the Reference Image Mapper to work. I upload the scanning video and a still image (with a few AOIs set), and include the test video, and each time it throws an error after processing. I think I am following all the instructions, but I am not having any success.
Reference Image mapper error
Hello. API question: is it possible to have the real-time API and RTSP servers visible when you connect to the phone's hotspot? Right now they are only visible when the phone connects to the hotspot of another device (e.g., a laptop).
Hi @user-44c93c 👋! Are you asking about setting up a hotspot on the Companion Device? While this is technically possible, we strongly advise against it. Using a phone's hotspot consumes significant device resources that are essential for the Companion App's functions, such as inferring gaze, pupil size, and eye state. Therefore, running a hotspot on the Companion Device may negatively impact its performance. We recommend setting up a hotspot on a different device or using a dedicated router.
Hi @user-d407c1,
We are running a study with Pupil Labs and other sensors, using LSL to synchronize all data streams. For no obvious reason, your LSL relay stopped working lately.
Error message:
[05/09/24 12:48:47] ERROR Raw gaze data has unexpected length: [email removed] gaze.py:78
[email removed] \xbe\x9a;\xe8>\xa9\x10,?e\x01U', timestamp_unix_seconds=1715255328.2962165)
Traceback (most recent call last):
  File "C:\Users\XXX\PycharmProjects\eyetrackerlsl\venv\lib\site-packages\pupil_labs\realtime_api\streaming\gaze.py", line 75, in receive
    cls = data_class_by_raw_len[len(data.raw)]
KeyError: 65
Hi @user-e74b04 👋! Yes, the latest version of the app (https://discord.com/channels/285728493612957698/733230031228370956/1232987799003729981) includes real-time pupillometry and eye state.
With it, the format of the gaze data from the real-time API has expanded from 21 bytes to 77 bytes. To fix the error you see, and to be able to use pupil size and eye state in real time, please update the real-time API Python library by running pip install -U pupil-labs-realtime-api.
Did you change the variable with an update?
Thanks.
Next time it might be helpful to write here that there is an update: https://pupil-labs-lsl-relay.readthedocs.io/en/stable/history.html#id1
@user-d407c1 I did the update and apparently it works, but I get a huge number of warnings:
WARNING Dropping unknown gaze data type: EyestateGazeData(x=834.7101440429688, y=469.3448181152344, worn=True, pupil_diameter_left=4.754848957061768, eyeball_center_left_x=-27.83203125, eyeball_center_left_y=11.42578125, eyeball_center_left_z=-48.80615234375, optical_axis_left_x=0.1795196384191513, optical_axis_left_y=0.21891407668590546, optical_axis_left_z=0.9590877890586853, pupil_diameter_right=4.134486675262451, eyeball_center_right_x=34.16015625, eyeball_center_right_y=9.1845703125, eyeball_center_right_z=-52.77099609375, optical_axis_right_x=-0.2444096803665161, optical_axis_right_y=0.21104899048805237, optical_axis_right_z=0.9464260339736938, timestamp_unix_seconds=1715258636.4630961) relay.py:92
@user-e74b04 apologies for the inconvenience. I had a look at the LSL relay, and we were checking for a GazeData. We will update the LSL relay ASAP to fix this and accommodate the pupillometry and eye state data, but in the meantime I have updated it on this branch. Could you try it and let me know if that solves it?
@user-d407c1
Traceback (most recent call last):
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\XX\AppData\Local\Programs\Python\Python310\Scripts\lsl_relay.exe\__main__.py", line 4, in <module>
  File "C:\Users\XXX\AppData\Local\Programs\Python\Python310\lib\site-packages\pupil_labs\lsl_relay\cli.py", line 16, in <module>
    from pupil_labs.lsl_relay import relay
  File "C:\Users\XX\AppData\Local\Programs\Python\Python310\lib\site-packages\pupil_labs\lsl_relay\relay.py", line 7, in <module>
    from pupil_labs.realtime_api.models import Component, Event, GazeDataType, Sensor
ImportError: cannot import name 'GazeDataType' from 'pupil_labs.realtime_api.models' (C:\Users\XXX\AppData\Local\Programs\Python\Python310\lib\site-packages\pupil_labs\realtime_api\models.py)
Or wait.
Apologies, it is my fault. I imported GazeDataType from models instead of simple.models. I updated the branch to import it properly.
Now it seems to work without Warnings. Let's hope that the data is fine too.
Software testing helps 🙂
The data should be the same. Definitely! I'm sorry for the trouble. I didn't have the chance to test it right now, but I wanted to allow you to continue with your work as soon as possible. 🫡
Hi Neon team, I encountered the exact same problem as described here (https://discord.com/channels/285728493612957698/1047111711230009405/1210375932682965002) and I tried the solution suggested in the response (https://discord.com/channels/285728493612957698/1047111711230009405/1210531886087012363). Since then, when I connect Neon to my smartphone, it displays "device name: N/A", and when I take a measurement, it just displays "No scene video." Is there any solution to this symptom? Thank you.
Hello @user-8bd8dd! Could you kindly open a ticket in #troubleshooting, so that we can better assist you?
@user-d407c1. Does the new version use more resources on the phone? It overheated during the measurement and stopped all apps.
Which Companion Device do you have?
Actually it still recorded the data but displayed the warning
It would be really great if you could release an adapter that lets me connect the eye tracker directly to a PC. Then I wouldn't have to deal with this extra phone and WiFi streaming.
Hey all, should I contact the sales email address for a quote? We need an official quote to add Pupil Labs as a supplier in our university system.
I sent an email but haven't heard back.
we are so so so excited to get these new glasses!
Hi @user-82f555, thanks for reaching out 🙂 Could you send me your email address in a DM just to make sure that we have received it? The Sales team will reply to you asap.
Hi @user-a5a6c3! No worries, let's start with this example and explain some of the asyncio functions.
Here, asyncio.Queue is used to temporarily store video frames and gaze data as they are received. This ensures that incoming data doesn't get lost if processing can't keep up immediately. Essentially, it acts as a buffer, holding data until it's ready to be processed.
asyncio.create_task() in that code schedules tasks to continuously receive gaze and video data without waiting for each operation to finish. This means your code can receive new data while still processing the old, keeping everything running smoothly and efficiently, without delays or blocks.
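To make that concrete, here is a minimal sketch of the pattern (generic asyncio, not tied to a specific API; gaze_stream and video_stream stand in for the async iterators from the example):

import asyncio

async def producer(stream, q: asyncio.Queue):
    # Buffer incoming items so a slow consumer doesn't lose data
    async for item in stream:
        await q.put(item)

async def consume(gaze_stream, video_stream):
    gaze_q, video_q = asyncio.Queue(), asyncio.Queue()
    # create_task() schedules both receivers to run concurrently
    gaze_task = asyncio.create_task(producer(gaze_stream, gaze_q))
    video_task = asyncio.create_task(producer(video_stream, video_q))
    while True:
        frame = await video_q.get()  # consume at your own pace
        gaze = await gaze_q.get()
        # ... match and process frame + gaze here ...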
Depending on how you want to implement it, you might not need multithreading, but if you were to do so, note that Python's Global Interpreter Lock (GIL) can be a limitation for CPU-bound tasks, as it allows only one thread to execute Python bytecode at a time. However, for I/O-bound tasks (like receiving network data or reading/writing files), threading can be beneficial.
Also note that, when using threads, data sharing between threads must be handled cautiously to avoid data corruption. Using thread-safe queues, or locks where necessary, is crucial.
If you plan to do so, your code could look something like:
import queue
import threading

# Sensor stream URLs from the device object
gaze_url = device.direct_gaze_sensor().url
video_url = device.direct_world_sensor().url
# Thread-safe queues buffer incoming data between threads
queue_gaze = queue.Queue()
queue_video = queue.Queue()
# thread_function and video_processing_thread are user-defined
gaze_thread = threading.Thread(target=thread_function, args=(gaze_url, queue_gaze, 'gaze', device))
video_thread = threading.Thread(target=thread_function, args=(video_url, queue_video, 'video', device))
processing_thread = threading.Thread(target=video_processing_thread, args=(queue_video, queue_gaze))
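After creating the threads, they still need to be started (and optionally joined); thread_function and video_processing_thread above are user-defined receiver/consumer functions:

# Start all threads, then wait for the processing thread to finish
for t in (gaze_thread, video_thread, processing_thread):
    t.start()
processing_thread.join()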
OnePlus 10 Pro
We just bought them last year
Hi! I am trying to produce the same exports as in your tutorials, specifically the gaze_positions.csv file. But what I get is only the timestamp, gaze x, and gaze y variables. How can I get the CSV to include the index, confidence, and the rest of the missing fields, like I see in your sample_recording_v2? Thank you.
Hi @user-42cb18 👋! May I ask which eye tracker you have? You can find Neon's data format here.
Neon does not include confidence values, as it uses a different approach to estimate gaze. Perhaps you are mixing it up with Pupil Core's tutorials?
Hi Miguel! I am using the Neon eye tracker. And I was following the tutorial for extracting a frame and visualizing fixations that I found here: https://github.com/pupil-labs/pupil-tutorials.
@user-42cb18 Pupil Cloud or Neon Player would definitely be the easiest path here, as you won't need to write any code (see this message: https://discord.com/channels/285728493612957698/633564003846717444/1203972316896428053).
But I assume you would like to do other things with the frame image, and thus access both programmatically, right? In that case, a small snippet to do so was discussed here: https://discord.com/channels/285728493612957698/1047111711230009405/1187427456534196364
Thank you very much.
Can you please help me with the following: what does the "pts" value represent in the snippet you shared with me?
Hi again, Miguel! Do you have any code snippet for showing the fixations over the frame image as well? In your Pupil Core tutorials, you plot fixations alongside their IDs and connect them with a line.
Hi @user-e74b04! Apologies, I missed your message. There is indeed an increased workload with the real-time estimation of pupil size and eye state. While our latest Companion Device, the Motorola Edge 40 Pro, handles this increased workload without any issues, the OnePlus might not achieve the same performance when streaming and computing all this data in real time at 200 Hz.
If you're observing overheating warnings, you can try lowering the gaze sampling rate in real time and accessing the 200Hz data later via the Cloud. You can do this in the app settings.
We know that you might want the 200 Hz gaze data in real time, and we are working on an app update that includes a toggle to disable the pupillometry and eye state features if they're not needed, allowing you to maintain a 200 Hz gaze rate with the OnePlus device.
Being able to use Neon with a computer in the way you mention is not on the roadmap, but we hope the new app version will alleviate the issues!
Hi. So I checked the data. When the phone overheats, we get missing samples in LSL but not in the local recording. So, by manually aligning the local recording with the beginning of the LSL recording and then interpolating that signal onto the LSL timestamps, it works, but it is really not pretty. LSL also uses different timestamps (seconds since the last PC startup) than you do, so we can't even align them automatically. Question: one of our recordings is stuck in "processing" mode on Pupil Cloud. I can download the raw data, but the main reason we use your device is pupil size. Do you know how to fix the problem?
Hello, I am having the same problem opening the recording data (replied). The same message is in the neon_player.log file (attached). It's on Ubuntu, using Neon Player 4.1.3 (latest). And when I drag the recording folder onto the Neon Player window, Neon Player quits unexpectedly.
I used recording data that can be downloaded from the Neon official web page.
That log message is unrelated. For crashes that happen without any relevant log messages, the best thing to do is to "start fresh" by closing Neon Player, removing your neon_player_settings folder, and then trying again.
Hi Pupil team, I'm trying to extract the optic_flow_vector using your scripts. I was wondering whether it is supposed to work when I download one of my Neon recordings, set the directory as 'rec_folder', and feed it to the functions you developed (e.g., "rec = Recording(rec_folder)" in line 96 in helpers). Thank you in advance!
Hi @user-594678 👋. Which helper function are you referring to? Can you share a link?
Hi there, I had a quick question about analyzing AOI data for this product. I ran a few trials for a study I am working on and I set the AOI points on your website. Once I downloaded the data for a specific image that a few viewers looked at, I noticed that for one specific viewer (participant), their AOI fixation times are identical for different AOI points. For example, one person is looking at the nose and eyes of a participant, and in the Excel file it seems that the fixation times for that viewer for nose and eyes are identical. I was wondering if someone could help me understand this better. Thank you!
Hey @user-20657f, just to help me understand the issue, when you visually inspect the recording of this participant, does the participant's gaze go to other areas of the face, or does it appear stuck?
Nope. Everything seems to be working fine. Even when I create the AOI Heatmap I can see the different parts of the face have different average fixation times
But when I download the Excel file to analyze the raw AOI data, I can see that for nose, mouth, and eyes the AOI fixation times are identical,
Could you try the following: In visualizations, unselect all the other recordings except for this participant. Then, on your visualization metric, set it to fixation count. Do you see equal number of fixations in each AOI?
which would not make sense
Nope, different fixation counts
Okay, this means that this participant visited each AOI a different number of times. Now change the metric to average fixation duration, are the numbers the same?
Seems that on the Pupil Cloud website the metrics are different for each AOI, but not when I download the Excel raw data
Again, they are all different for average fixation duration
Would you mind sharing the data, so that I can take a look? You can send the data to data@pupil-labs.com
just sent ! Thank you
Will send now. Thank you
Hello! I just want to make sure I'm sampling accurately. Each eye camera is at 200 Hz, the IMU is at 110 Hz, and the scene camera is at 30 Hz?
Hi @user-0001be , these numbers are correct.
Thanks Rob! So that means I have to interpolate the IMU to match the eye cameras at 200 Hz, if I want to have data for the points in between.
Yes. You'll probably want to experiment a bit with interpolation options, but linear interpolation might be a decent first try.
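As an illustration, linearly resampling IMU channels at the 200 Hz gaze timestamps could look like this (a sketch that assumes Neon's CSV export format; verify the column names against your own export):

import numpy as np
import pandas as pd

imu = pd.read_csv("imu.csv")
gaze = pd.read_csv("gaze.csv")

# Resample each IMU channel of interest at the 200 Hz gaze timestamps
channels = ["gyro x [deg/s]", "gyro y [deg/s]", "gyro z [deg/s]"]
imu_at_200hz = {
    ch: np.interp(gaze["timestamp [ns]"], imu["timestamp [ns]"], imu[ch])
    for ch in channels
}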
@user-f43a29 over the network with the API, does the sampling rate drop significantly? What are the typical tested values? I'm getting an IMU sampling rate of 26.35 Hz and a gaze sampling rate of 21.21 Hz. And without any processing, it's giving an IMU sampling rate of 63.93 Hz and a gaze sampling rate of 169.73 Hz.
What kind of network are you connected to? Local dedicated router, hotspot, or work wifi shared with other people? May I ask what the processing is?
I'm displaying the live IMU and gaze data while checking whether it's within a certain range. Secured password WiFi for all staff. Can it connect to a desktop that is receiving WiFi and has hotspot capability? Can I share over Bluetooth?
Are you using Matplotlib to show the data in real-time or something else? Work wifis that are shared with others will lead to increased transmission latencies and potentially dropped data, so we recommend against that. It is best to use a dedicated local access point that is only used for your Neon and computer, like a dedicated router or, yes, the hotspot capability of your desktop. A Bluetooth connection is not supported.
Alright, thanks! Is there a way to access it without any network? Like, I can get a router just as a point of connection, but I don't need any "internet" per se? I'm using PsychoPy to display the live values. Should I remove that?
The hotspot on your desktop could potentially be fast enough and is worth a try.
You can also use the USB hub mentioned in this part of the documentation to connect via Ethernet cable to get even lower latency.
Correct, no connection to the internet is needed for usage of the real-time API. You just need a networked connection between Neon and the computer.
If you are using the simple API with PsychoPy, then the display of real-time values will be locked to the refresh rate of your monitor (e.g., 60Hz) and that will limit the rate at which you can receive data from Neon. My assumption is that this, in combination with work WiFi, is producing your results. If you want to receive the data at sensor rates while displaying the data in real-time, then you could consider switching to the async version of the API and using threads, but if you explain a bit more about what you want to do, then it might not be necessary to go through that effort.
Hi @user-0001be, just to add to my colleague's response, you can reduce WiFi bandwidth competition by disabling things like Cloud upload and other background apps.
@user-f43a29 @user-07e923 I'm trying the hotspot from my desktop now. So my desktop is receiving WiFi and the output is the hotspot. The Neon mobile is connected via WiFi to the desktop's hotspot. How can we make this work, as they are not on the same network?
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host 172.26.218.238:8080 ssl:default [The remote computer refused the network connection]
When the Neon is connected to the desktop hotspot, it should effectively be as if they are on the same network, regardless of which internet connection the desktop has. It is similar to sharing a cellphone hotspot.
That the connection was refused suggests that they are connected and can see each other. There is perhaps a firewall setting blocking communication; the internet also suggests that this may be a Windows-specific error message. To be sure, I just tried it here with my Linux laptop's hotspot, where it worked, but I am not able to test with Windows just now.
Hi @user-e74b04 👋! I'm briefly stepping in for @user-d407c1 here. Soon, we will release an update where you can enable/disable eye state computation, which will resolve your overheating issues. Regarding the problematic recording, can you open a ticket in #troubleshooting?
@user-f43a29 it works now, sorry, I forgot to update the IP address. Thanks for your advice! My sampling rate still seems low even with the hotspot. What I'm processing is actually whether the head velocity along the horizontal (gyro_z) is within a certain range, like more than 150 deg/s. Would this be taking up a lot of processing power?
When you say sample rate, do you mean how fast you receive data if you just run the "device.receive_imu_datum()" function repeatedly, by itself, in a loop (timing each individual call) or do you mean when that function call is paired with your PsychoPy display routine?
To be clear, any processing that is in addition to the "receive_imu_datum()" call will incur a delay until the next call of that function can be reached, when using the simple API. If just setting a velocity threshold for a single axis, then that should be a rather minimal load, but it will depend on what happens within that specific if-else block.
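To isolate the raw receive rate from any display overhead, you could time the bare calls in a loop, e.g. (a minimal sketch using the simple API):

import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Time repeated IMU reads with no other processing in the loop
n = 200
t0 = time.perf_counter()
for _ in range(n):
    device.receive_imu_datum()
elapsed = time.perf_counter() - t0
print(f"raw IMU receive rate: {n / elapsed:.1f} Hz")
device.close()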
When paired with PsychoPy. Then at the end I print out my sampling rates for IMU and gaze respectively. Alright, thanks! I think it's my code and I really have to look into it.
Hi @user-0001be , I realized something: do you need your stimulus and/or data display to be synchronized to the refresh rate of your monitor?
Hello, I'm kind of new to the device and also to exploring the collected data. I have some basic questions: is this the right place to ask for basic help that I didn't find in the docs (maybe it's in there and I just don't see it)? For example, I labelled the data and I want to try the Reference Image Mapper enrichment, but in my project subjects observe 3 different things across the videos, and I don't really know how to separate them (3 different projects?). Or is there somewhere a way to select, within a project, which videos should use which reference image and scanning video, and repeat this for each thing observed?
If I'm not in the right place to ask, don't hesitate to tell me! My mistake 😵‍💫
Thanks!
Hi @user-bdc05d! 👋 This is the perfect place to ask for help.
You could, as you mention, create multiple projects and separate them, or you can keep them under the same project and use the temporal settings and events, like in the Demo Workspace, to delimit the scope of each enrichment.
If you need detailed steps or further assistance, feel free to ask!
Hi! Currently, I am performing measurements with Neon glasses. However, I am experiencing some issues with the USB-C cable. Is there any way I can change the cable, as it is connected to the tracking part? Hope to get some answers so I can continue testing!
Hi @user-8d67d4, thanks for opening the ticket in the #troubleshooting channel. We will continue communication there.
It's just that each of my videos records an observation of one of those 3 things (not mixed in one video), so if I understand correctly, it is better to separate the projects.
Yes you can keep them separate then
Okay, thanks! I was thinking that labels were made for this purpose, but it's also great like this.
Labels are meant only for visually organising your workspace.
Okay! Thanks for the clarifications 🙂
Is it possible to correct the offset online (in the cloud)?
Hi @user-bdc05d, this is Wee stepping in for Miguel. It's not possible to do the offset correction online yet. You'll have to do it offline with Neon Player
Okay, thanks for your answer.
I found it, but how do I add the "corrected" video to the cloud then?
Unfortunately, Pupil Cloud does not support re-uploading recordings.
Hello, is there a possibility to rent the eye-tracking glasses for a short period of time? It's just for private use, and I can't afford to buy one for just 1 or 2 days of experiments.
Greetings Patrick
Hope I'm in the right channel to ask.
Hey @user-a88cf6 👋. Thanks for your question. Please reach out to sales@pupil-labs.com and someone from there will be able to respond/look after you 🙂
Hello! If we use the gaze offset correction in neon player, will this apply to other recordings of the same wearer as well or is it only applied to the single video?
Hi @user-335772, thanks for reaching out. The gaze offset correction in Neon Player applies only to that specific recording.
Awesome, thank you! Another question: when I try to download Neon Player, I get a message that there is a corrupted image file when trying to open the downloaded file. Any thoughts on how to get around this?
Hi @user-335772. Can you please create a ticket in #troubleshooting? We can assist you there in a private chat.
Is there a way to know the openness of the eyes? (in one CSV maybe?)
Hi @user-bdc05d, we currently don't provide eye openness as a metric. We're still working on this, and I don't have a timeline for when it'll be ready/released.
Hello! We are currently using the Neon Pupil Labs device. We wanted to take off the lenses to avoid extra brightness during an eye-tracking experiment. How can we take them off? Thank you!
Hi @user-05896b, which frames do you have? Can I ask for further clarification about what "avoid extra brightness" means?
The basic one for Neon, "Just Act Natural"
Ok. You can simply pop them out, but note that if the lenses or frames are damaged in the process, it will void the warranty, so apply some pressure, but not too much. And may I ask what "avoid extra brightness" means? Do you want to replace the lenses of "Just Act Natural" with sunglass lenses?
Thank you very much. No, it is only to use them without the lenses, just to try.
Ok. Just to say, if anything, removing the lenses can only increase brightness, not avoid it. If you want, I'd be happy to help you in more detail if you explain what you intend to do.
Hi @user-07e923, is it possible to use Neon locally, plugged into my notebook (with a Pupil Capture plugin)?
Hi @user-41f1bf, may I know why you'd like to use Neon with Pupil Capture and not with the Companion app?
It's possible to record Neon in Pupil Capture. But there's a huge catch: gaze estimation won't be done using the neural network, but using Pupil Core's gaze estimation pipeline. This means you'll need to perform a calibration and control your environment to avoid headset slippage.
Also, unlike Core with adjustable camera angles, you can't adjust the cameras in the Neon module.
Neon support in Pupil Capture is experimental and requires running from source code. You can learn more here. Oh, and this only works on Mac and Linux; there are driver issues with Windows 🙁.
The main point is to avoid problems with LGPD in Europe.
The second point is the plugin. What do you think would be the "way to go" for me? It would be great if I could use my screen tracker plugin with Neon.
I'm afraid I can't give you more advice over privacy and LGPD. Perhaps my colleagues could chip in for this, or maybe our privacy and security FAQs might provide something helpful for you.
Concerning the plugin, can I just clarify that you'd like to use the Surface/Marker mapper with Neon?
I have adapted the surface tracker to track the screen, no need for fiducial markers.
Ah, I see. So, a bit of context: we use Neon Player to view Neon recordings offline. So, there's really no need to upload data onto Pupil Cloud if you want to work completely offline. You still get all the gaze data via Neon Player.
Neon Player functions similarly to Pupil Player, and you can integrate your own plugins.
Not so fast, but fast enough
And requires some light constraints too
Does the Companion app work without an internet connection?
Yes! You only need a local network to remotely control Neon and the Companion app. Internet is only needed to upload data onto Cloud.
So, I need the plugin to run in "Neon Capture"
Is it possible?
Hmm, what I can think of is to use the real-time API to get the live feed of the scene camera. This will be limited by the frame rate of the camera (30 FPS). You'd still get the data for your plugin via the scene camera.
Can you clarify that a little bit? Does Neon support plugins?
Oh, I'm misinterpreting something. Sorry. 😅 No, the Companion app doesn't support plugins.
I mean the Companion App
Would you mind making a custom version of the Companion app for me? I sent a PR some time ago with the plugin.
This I gotta check with others, if it's possible. Please be patient.
Thank you
@user-41f1bf can you maybe explain what is your planned research and analysis? This can help us provide more precise feedback.
Hi @user-480f4c, I am teaching a pseudo-alphabet to people. Some data suggests that people look at some letters and not others. For example, the first two letters, or the first and the last, and so on. I am planning to record/analyse eye movement patterns and correlate them with the participants' performance.
Tracking the screen is necessary for homographic transformations. I need the data converted to screen space. (A secondary contribution is to make choices easier: participants could just look at the screen and press a button, no need for mouse and keyboard, and less noise.)
I don't want to use fiducial markers because the pseudo-alphabet is made of black-and-white shapes. Participants would tend to look at the markers too often.
Thanks for providing this info @user-41f1bf, this helps a lot. Indeed, in most cases of screen-based work, we recommend using the AprilTag markers for mapping gaze on screens;
However, we do offer a workaround that does not require markers. Have you checked this tutorial? It shows how to map gaze onto dynamic screen content using the output of the Reference Image Mapper enrichment on Cloud.
Hi Nadia, I am not allowed to upload personal information for your Cloud (It requires an authorization that I don't have). So, I need to do everything offline.
Unfortunately, we don't offer a markerless solution to detect surfaces in our offline application, Neon Player.
Thank for your service Nadia, I am afraid I will need to write my own data analysis solution.
Hi @user-41f1bf! If I may chime in here. If you already have a markerless screen tracking approach implemented, then frankly, it is conceptually possible to adapt that to work with Neon in an offline context.
Here are some points:
- You need to log into Pupil Cloud once to set Neon up, but after that, you can effectively disable all Cloud uploads. Thus, no data ever has to go into the Cloud.
- You can then communicate with Neon via our real-time API. You can send events to the phone, or you can stream data, in real-time, over the local network, as @user-07e923 mentioned.
- Since both the scene video and gaze are streamed, you can build real-time screen-mapping applications, e.g. like this one we did that uses AprilTag markers, or even experiment with other markerless approaches, although these might best be processed post-hoc.
- So, whilst we do have options in Cloud, it isn't strictly necessary, and a custom version of the app that has plugins really isn't needed: you can do most things with the real-time API.
Anyway, I hope this helps and is useful for others too!
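For example, a minimal offline-friendly session with the simple real-time API could look like this (no Cloud involved, only the local network):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
device.recording_start()                 # record locally on the phone
device.send_event("stimulus onset")      # annotate without Cloud

# Stream a time-matched scene frame and gaze sample over the network
frame, gaze_datum = device.receive_matched_scene_video_frame_and_gaze()
print(gaze_datum.x, gaze_datum.y)

device.recording_stop_and_save()
device.close()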
So, is it possible to use my plugin with Neon Player? Or to load my Neon recording into Pupil Player?
Hi @user-41f1bf ! If your plugin works with Pupil Player, it will most probably work with Neon Player. The Plugin system works the same
Nice, thank you
Sure! What exactly would you like us to test? A screen recording with Neon? I can't find the PR. Could you kindly link it?
Note that you can download Neon Player without having Neon. Then, you can check whether your plugin installs properly, and we can make a recording for you if you need it.
I am also adding the Plugin API
May I ask you to kindly test it before I complete my purchase? I did a PR with it
Generally, I would recommend creating a standalone plugin rather than modifying the source code. This approach makes it easier to share, maintain, and even transfer the plugin to Neon Player.
Additionally, before merging features like this into Pupil Player or Neon Player, we should conduct more testing and a discussion to determine its overall value and prevent bloating the software with specific features.
Please note that Neon recordings can only be loaded into Neon Player and not Pupil Player.
I think the best way to proceed would be to make it a standalone plugin that can be installed/added, and we can make a neon recording like you mentioned and share it with you.
I know what I need, but I am not sure where to go from here. What do you think? Could you merge this PR into Neon Player? Or is the best option just to load the Neon recording into Pupil Player?
I just want to test how this plugin behaves with a Neon recording. You need a white monitor screen in the field of view, inside a room with dimmed lights.
Oh.. sorry about that. The plugin is here https://github.com/cpicanco/capture_plugins/blob/master/screen_tracker_neon/ScreenTrackerOnline.py
So, as I said earlier, the plugin was made for pupil capture.
Pupil Player requires more changes for this to work, due to the background task runner.
@user-d407c1, if you take a closer look, you will notice that the surface tracker does not expose the marker detection (called by the background task runner) to the plugin system, am I correct?
Hi, newbie here, and honestly asking for a friend. I'm from the Humanities, and our neuroscientist is currently unavailable. We need an estimate of the time required to calibrate Pupil Labs' 'Is this thing on - Neon eye tracking bundle' for our experiment. This is to cost the time for our funding application (deadline looming, of course... 😅).
So in our experiments, participants will sit at a desk and look at a book, reading one page at a time. They will have four pages to read; some might be from the same book, some from other books. Each book is set on a desk, and participants will move from one desk to another, depending on the books of their choice. Reading each page will take no more than 4 minutes at the very most (each page has a small poem and a picture).
We are allowing 20 minutes per participant for the reading itself, but I need time allowance for the calibration process for each participant.
The equipment and recording etc will be operated by a very trained neuroscientist. It's just the time for the calibration that we need.
Hi @user-2c8b77, thanks for getting in touch! Neon doesn't require calibration, because it uses a neural network to estimate gaze. Due to this, Neon is also resistant to slippage. You simply put it on and start recording. Even if the frame slips, there's no need to stop: pop it back on and continue.
I'd say most of the time would probably be spent switching conditions/phases of your experiment.
Just wondering, have you had a demo video call with us yet? If not, we can provide you with more precise feedback in the call.
Good morning guys.
Quick question: is the Neon module in the "Just act natural" frame weather-sealed? We need to use it in rain and are worried about durability in such conditions.
Kind regards Claus S. Hermann DanCrash ApS
Hi @user-6c4345, Neon is water-resistant due to its silicone casing. Therefore, light rain and sweat should be fine. When cleaning, pressure + cleaning solution might get forced into the components. So care should be taken when cleaning the device. See here.
Edit: pinging you for the update @user-6c4345
Okay, it will be very interesting for the project I'm working on! Looking forward to seeing this!
Hi, I opened the Neon app and it displayed the firmware version update, but it crashed. May I ask what the reason is? Thank you.
Hi @user-4bc389, could you open a support ticket in #troubleshooting? Then we will continue communication there.
OK
Hello. I am using Neon's real-time streaming abilities. I used the under-the-hood documentation (https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html) to connect using RTSP. Everything works fine now except the Worn flag, which indicates whether the user is wearing the device. The flag is always set to the same value whether the device is worn or not. I wanted to know if this is a known issue or if I'm doing something wrong. I previously used Pupil Invisible and it worked correctly.
Hi @user-46e202 👋! The worn detector is not yet implemented for Neon. If you would like this to be prioritised, please suggest it in #features-requests.
Is it possible to cut a video in the cloud? (shorten it a little bit)
Hi @user-bdc05d 👋! See this message: https://discord.com/channels/285728493612957698/633564003846717444/1242214874558500874 Essentially, you can trim videos using events with the Video Renderer.
Okay, I'm not sure how to do this. It is just my scanning video; it's 3 s too long 🥲
The video renderer only renders a final video with the gaze overlay. Unfortunately, there is no way to crop a scanning recording video yet.
Feel free to upvote this request https://discord.com/channels/285728493612957698/1212053314527830026/1212053314527830026 to see this possibility.
Hi, my Neon app shows low storage, ~30 mins left to record. Can I ask how to solve that problem? And why does it happen?
Hi @user-8825ab! It seems the memory on your Companion Device is running low. If you have too many recordings, consider deleting them from the device if they are already backed up to the Cloud.
Alternatively, if you are not using Cloud or if you want a double backup, you can connect your phone to a PC and transfer the recordings to free up space.
thank you Miguel, very helpful
Hi, I was trying to add a scanpath with https://docs.pupil-labs.com/alpha-lab/scanpath-rim/#generate-static-and-dynamic-scanpaths-with-reference-image-mapper, but as I followed the steps, an error occurs when I select the folder (the download of the Reference Image Mapper export). Am I doing something wrong? The error is the following: raise Exception( Exception: Wrong folder! Please select a Reference Image Mapper export folder
Hi @user-bdc05d! Does your folder contain the following files?
required_files = [
"fixations.csv",
"gaze.csv",
"reference_image.jpeg",
"sections.csv",
"enrichment_info.txt",
]
yes
Feel free to remove lines 30 to 42 of that script; it hasn't been updated to include the AOI files, so the check fails.
There are 2 more: aoi_fixations.csv and aoi_metrics.csv
Hello! Do you know where I could ask for information regarding the extension of the warranty?
Hi @user-7ad436 ! You can ask [email removed]
self.img_height = self.img.shape[0]
                  ^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'shape'
Hello! I've been trying to connect my PC to the Pupil Labs Neon using the Python API. While I can connect to the devices through the app's user interface (as described here: https://docs.pupil-labs.com/neon/data-collection/monitor-app/), I'm encountering an issue with my Python code. The code doesn't throw any errors, but it seems to be continuously reading the device. I suspect the problem lies with the Device(address=ip, port="8080") line. Any insights or suggestions on how to resolve this would be greatly appreciated! Thanks in advance!
# The two lines below are only needed to execute this code in a Jupyter Notebook
import nest_asyncio
nest_asyncio.apply()

from pupil_labs.realtime_api.simple import Device

# This address is just an example. Find out the actual IP address of your device!
ip = "192.168.1.16"

device = Device(address=ip, port="8080")
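As a side note, instead of hard-coding the IP, the library's discovery helper can be used (a minimal sketch):

from pupil_labs.realtime_api.simple import discover_one_device

# Blocks until a device is found on the local network, or the timeout expires
device = discover_one_device(max_search_duration_seconds=10)
if device is None:
    raise SystemExit("No device found. Check that the phone and PC share a network.")
print(device.phone_name, device.phone_ip)
device.close()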
Hi @user-81cee3 , just to confirm:
Little question: in aoi_metrics.csv and aoi_fixations.csv, when I compare the total fixation time for a recording (given in the metrics file) with the sum of the fixations for the matching recording id and area in the fixations file, I don't get the same time as a result. Am I mistaken about something?
Still having trouble with this; can I have a little help? :)
Like, for a zone A in recording 1: in aoi_metrics.csv I have a number that is not the same as the number obtained by summing the zone A x recording 1 entries in aoi_fixations.csv (I'm talking about total fixation time here).
@user-07e923 Thanks!
So you can no longer stream information from the web browser after subsequently re-opening the browser? This is correct. I use my laptop PC as the mobile hotspot.
Okay. May I just clarify which Neon Companion app version you are using? The latest version is 2.8.15-prod. If it's outdated, please update the app.
@user-07e923 My version is 2.8.15-prod already.
Could you try using a different device to set up the hotspot instead of having the PC act as the hotspot? Please ensure that you're connected to a 2.4 GHz network. The hotspot doesn't need to be connected to the internet.
I cannot replicate the issue with my monitor app. Could you tell me exactly the steps that you're doing, so that I can try and re-create the problem?
@user-07e923 Thank you for your help! My procedure is as follows:
1. Set up the laptop's mobile hotspot and connect the phone.
2. Launch the Neon Companion app on the phone.
3. Open a browser on the laptop (Chrome or Edge).
4. Enter the URL (neon.local:8080 or the IP address) <- the first time after a reboot, this works.
5. Close the browser.
6. Open the browser again.
7. Enter the URL (neon.local:8080 or the IP address) <- does not work.
Now, I have tried another PC and another network, but I get the same results. Please let me know if you need more information.
Okay. I'll retry with these steps to see if I can replicate the issue.
Could you try using a different device to do the hotspot? Also, are you by any chance connected to Eduroam?
One last thing: when you type in the IP address, did you also include the :8080 ?
@user-07e923 Thanks in advance. Different device: Yes Eduroam: No :8080 : Yes
Does it show only a white screen when you re-open the browser, or do you only see a white screen when you enter the IP?
I see this in both browsers.
I can only replicate this issue on Microsoft Edge. However, I cannot replicate this issue on Chrome or on Firefox.
Could you try installing Firefox to see if this browser works for you? Also, could you tell me which Neon Companion device you are using? E.g., OnePlus 8, OnePlus 10, Moto Edge 40 Pro?
Okay, thanks a lot 🙂!
Hi @user-bdc05d, this issue should be fixed. Please check again and let us know if you're still experiencing issues. 🙂
Hello, indeed I tried some this morning and the numbers are much closer (still seeing differences of 1 to 3 units, but it is much better). Thanks for the reactivity! 🙂
But it is weird: now I have average fixation durations that are often above 1.1 s (sometimes up to 3 s 😵‍💫). This is way too long compared to the known 100-500 ms. Is that normal, or am I misunderstanding the meaning of average fixation duration?
I don't know what your experimental setup is, so I cannot offer you any additional feedback on fixation duration. It's possible that the velocity between subsequent eye movements aren't fast enough to be classified as separate fixations. This would explain why a fixation appears longer than usual.
You can read more about how we detect fixations here. We also provide the white paper for our fixation detection.
On average, how much time does scanpath generation take? My tests are taking so long at the moment that I'm a little surprised.
I'm afraid there's no fixed answer for this. The run time depends on factors like the recording length, the number of recordings you're running, how powerful your computer is, etc. For example, mine took more than two hours to complete, and I had four recordings lasting approximately 45 seconds each.
But I'm also on a laptop. So it really depends.
Hi Pupil Labs, in Neon Player, when I try to render the scene video with a gaze overlay using the manual offset correction, the manually offset gaze marker looks fine in Neon Player, but the offset is not reflected in the final render (i.e., the overlay is there but shows the original gaze position). Am I doing something incorrectly?
Hi @user-f252d8, thanks for getting in touch! We're aware of this issue and we're still working on a fix.
It'd be great if you could tell me the Neon Player version that you're using, the Companion app version used to make the recording, and also when the recording was made.
During the adjustment of the circle size (scanpath), is it supposed to show the image? I just see a red background with one green circle.
(the red circle covers the whole image, in fact)
Hi @user-bdc05d, yes, the red circle has a very large size. You'll have to press m (to make the circle bigger) or n (to make it smaller) to change the circle size. Once you're done selecting the size, press q to save it, and the script will draw the gaze based on the selected size.
For context, you see this when you first run the rim_scanpath.py script.
Oh okay, thanks. I had seen that, but since the green point had nearly disappeared, I didn't press n.
@user-07e923 Thank you for your help. I want to remove the transparent lenses from my Pupil Neon glasses. How can I do this?
Hi @user-5c56d0. May I ask why you want to remove the lenses? It can be done, but depending on which frame you have, it can be tricky to get right. Can you please tell us exactly which frame you have?
Hi @user-5c56d0 ! Just to add on top of my colleague's response. Lenses are tightly fitted, especially with the heat sink version of the Just Act Natural frame, and there is a risk of chipping them during removal and refitting. We recommend exercising extra caution if you choose to proceed.
With that said, here is how you can remove them:
1. Important to note: remove the module first to avoid any damage.
2. To pop the lenses out, apply light pressure with your thumbs from the inside to the outside, starting at the nose side while holding the other side of the lens.
3. If they resist, you can heat the bottom of the frame up to 120 °C to make removal easier. Please be very careful while doing this.
Thank you for your reply.
- My Neon is the one in the attached picture.
- Can the module be removed by pulling it out by hand?
- My colleague removed the lower front round frame of the Neon glasses (that Neon had normal glasses). Is this something the new Neon can do?
- Purpose: I want to experience which is easier to use, with or without lenses.
The same information was also sent to Miguel.
I am using Neon Player (4.1.4), which works fine on my laptop. When I installed it on another computer (Win 10 Server), it keeps crashing with the following error message:
[13:49:31] ERROR player - launchables.player: Process Player crashed with trace: player.py:766
Traceback (most recent call last):
File "launchables\player.py", line 585, in player
File "plugin.py", line 412, in __init__
File "plugin.py", line 434, in add
File "audio_playback.py", line 96, in __init__
File "audio_playback.py", line 157, in _setup_output_audio
File "sounddevice.py", line 1494, in __init__
File "sounddevice.py", line 817, in __init__
File "sounddevice.py", line 2660, in _get_stream_parameters
File "sounddevice.py", line 569, in query_devices
sounddevice.PortAudioError: Error querying device -1
INFO player - launchables.player: Process shutting down. player.py:768
This happens after dragging and dropping a recording folder into the window, just before the new UI is opened (i.e., the processed files in the local neon_player folder are created). The computer does not have a sound card installed; I assume that is the reason for the error. Would it be possible to fix the error?
I'm sure that can be fixed, although I may need your help testing it, as I don't have any computers which lack audio hardware
Hi @user-5c56d0! I removed one of your messages, since these channels are public and there's no need to duplicate them. 🙂
The model you have is the oldest one, without a heatsink, so it should be easier to remove the lenses. You can follow the same steps as described before: https://discord.com/channels/285728493612957698/1047111711230009405/1245753344346161253
Thank you for your reply. I was able to do it easily. That saved me.