Hi @user-e3da49! 'Raw data export' is now called 'Native Recording Data'. This is what can be loaded in Neon Player.
Re. the recording affected by a low battery, please try long-pressing on the Neon Companion app icon, select 'App info', and then 'Force stop'. Then open the app as normal. Does it prompt you to save a recording from a previous session?
Hi Neil, thank you so much! That worked! 🙂
Okay, is this in the pipeline? We're already using the other information in our remote control applications and it's very helpful, but the ability to set recording parameters with the API would allow us to not worry about accidental toggles on the Companion app during an experiment procedure.
You can log this as a feature request. Further instructions in feedback 🙂
Hi Neil, are you sure that 'Raw data export' is now called 'Native Recording Data'? It takes me hours to download the zip - not like before the update. Also, last time I checked, the files in 'Raw data export' were equal to "Time Series Data + Scene Video" - but I am not quite sure anymore. Moreover, I have the phenomenon that I can't unzip the "Time Series Data + Scene Video" zip. Can't be an issue with my converter, since it opens the Face Mapper zip... thank you so much for your help!!
Hi @user-e3da49 ! That is correct! Apologies for the confusion.
"Raw data export" = "Time Series Data + Scene Video": the CSV files with all the data, plus the videos.
"Raw sensor data" = "Native Recording Format": the data as it comes from the phone, in binary format.
Hmm... downloads should be faster since the last update. Could it be that you have more videos in the project, or a worse internet connection? Could you do this test to ensure you have a proper download speed?
Regarding "can't unzip":
What browser are you using? I ask because of this.
Hi Miguel, thanks for the clarification 🙂 I downloaded the zip files via Google Chrome, and it's just the "Time Series Data + Scene Video" zip I can't open...
Hello, is there a limit on the number of collaborators that can access a workspace on Pupil Cloud? I was only able to add 3 so far. The rest receive the invitation email, but the link never works.
Hi @user-068eb9 ! There is no limit on the number of collaborators. Could you kindly check a) that the email they use to log in to Cloud matches the one that was invited, and b) that they are using the latest invitation link (especially if you clicked on resend)?
Some email providers like Gmail group emails, and they might have collapsed the invitations together, leaving an old invitation on top.
If after checking these points you still have issues, please reach us here or by email with the workspace ID and the affected email accounts, and we will look into it.
Also, a recording of mine suddenly stopped after 28 minutes without any reason. The front camera light was on. What do you think happened, and how can I prevent this next time (I want to record for longer than 2h straight)? Also, the uploaded 28 min recording in Pupil Cloud says "gaze pipeline failed" and that I should inform you guys. It would be great if you could restore the whole recording of over 2 hours 🙁 thank you so much in advance!
Hi @user-e3da49 ! Could you follow up by email at info@pupil-labs.com with the recording ID? You can obtain that by right clicking over the recording in Cloud.
Hi there. I am trying to use the Reference Image Mapper with an image of a simulator screen setup and then a recording from our Neon eyetracking glasses on the cloud. The enrichment is still in processing after an hour, is this normal?
Hi @user-f4e4e0 ! The time it takes to compute a Reference Image Mapper enrichment depends on the number and duration of the recordings in the enrichment, as well as the workload in Cloud.
If it is taking too long, it might be that your browser put your tab to "sleep" and the state never updated. Can you do a hard refresh and confirm whether it got stuck?
Hi! I am trying to use enrichments, but my video is longer than 3 minutes. I put in events in order to use the enrichment function on segments that are less than 3 minutes, but I still cannot find a way to upload the video... is there a way to cut up the video in the cloud in order to only submit the segments for enrichment?
Hi @user-4724d0 ! Unfortunately, you cannot use subsections of a recording as a scanning recording.
The rest of the recordings can be as long as you wish, but the scanning recording has to be below 3 minutes.
If you would like to use subsections, feel free to suggest it in feedback
We are currently experiencing recording errors due to what we suspect to be a loose contact at the USB-C plug of the Neon glasses. The glasses repeatedly disconnect from the phone after about 1 min of recording, even if the cable is not moved. Could you give me advice on whom to approach with this issue? Is there an address to ship the glasses to for repair?
Hi @user-23cf38 ! Can you follow up by email to info@pupil-labs.com with your order ID and Neon glasses serial number? From there, we will be able to organise a repair if needed.
Hi, we've encountered this issue multiple times with our Neon devices where, during a recording, the sensor light turns red and the phone starts vibrating. The recording stops and the app throws an error. We cannot perform any further actions in the app until we restart it, and the recording seems to be corrupted. Can you please help us understand what could be causing this issue and how we can avoid running into it?
Hey @user-ccf2f6 👋. Do you already have affected recordings in Cloud? Please send an email to [email removed] with an ID. You can right-click on a recording in Cloud and select 'View recording information'. There you will find the ID. Someone will assist you from there!
Hi, is there a possibility to adjust the wideness of the scene camera? Right now the camera has a super wide angle, I would like to zoom in on the environment for my study with people.
Hi @user-231fb9 👋. Adjusting the wideness, or zooming, isn't possible with Neon's scene camera. What does your experiment/research setup look like? Perhaps we can suggest some alternative solutions!
Hi there, is there a way we can check an error that occurred during one of our recordings if we provide a recording ID?
Hi @user-f4e4e0 ! Yes! Please share with us at info@pupil-labs.com your recording ID and a description of the issue and we will look into this.
Hi, Pupil Labs team. I want to buy a Lens Kit for participants who need vision correction, but it is not clear from the description whether I also need to buy a specific frame for this. Currently, I only have the default Neon device. Could you give me a hint?
Hi @user-29f76a ! "I can see clearly now" is a frame that supports prescription lenses. The lenses can be changed with ease, since they are attached with magnets.
This frame by default comes with a set of lenses from -3Dpt to +3Dpt, as described there. On top of that frame, we offer this extension lens kit with additional lenses.
The "default" Just Act Natural frame does not accept this set of lenses. I hope this helps.
So the "I can see clearly now - extended range lens kit" cannot fit with the standard frame?
They won't fit, that's correct. The "I can see clearly now" frame and these lenses come with embedded magnets that connect them together.
Hi! First time attempting to use Neon, and the connection between phone and glasses does not seem to work, i.e. the phone screen still displays the "plug in and go" screen. We think this might have to do with the Neon not being connected to the same network as the phone, since it worked earlier before we brought the glasses into a new environment (new network, new workspace login). But it is an uneducated guess at best; any help would be appreciated!
Hi @user-0055a7 ! I noticed there might be some confusion regarding the connectivity of the Neon module. Neon is not wireless and requires a direct connection to the Companion Device (phone) via a USB Type-C cable.
Could you please confirm if you have it connected this way?
Additionally, did you know you're eligible for an onboarding call? It could be very helpful in addressing any questions or concerns you may have. I invite you to reach out directly to info@pupil-labs.com to schedule your session.
Thanks for the reply, sorry for the confusion. We do indeed have the Neon connected to the phone via USB-C cable.
The onboarding call sounds very helpful, will look into it if we cannot resolve the issue
Thanks for following up, @user-0055a7 ! So you have the glasses connected but they are not detected, is that right? Could you please write to info@pupil-labs.com indicating the app version and including a picture of the back of your Neon module (something where the QR code is clearly visible)?
Hi Pupil Labs team, I am having an issue with the scene camera of my device that I can't figure out. Everything is working fine when I make a recording using the Companion app. However, when I try to stream data (even just using neon.local:8080), the scene video does not stream. After attempting to stream, the scene camera stops working. So clicking record on the Companion app leads to recordings without scene videos, and pressing the video camera button on the bottom right does not display any video. If I unplug the device for a few minutes, scene video works as usual, but attempting to stream leads to the same problem again.
Hi @user-275c4d ! This sounds like a network issue. Does the same occur if you connect using the IP shown in the Stream menu? Also, have you seen these network requirements?
Has anyone encountered this issue before?
Hi, this is Younghoo from Seoul National University. We're interested in using Neon device for our research, but just want to ask if there are any software or applications that we need to purchase additionally to use the device. Thank you.
Hi @user-9857ce 👋 There are no additional costs for software. If you'd like a quote, feel free to fill out a request via our website (if you haven't already 🙂)
Thank you @wrp for your reply
Hello, I am a prospective customer who would be using the Neon model in a clinical setting, mainly working with patients that have suffered concussion and traumatic brain injury. I am very familiar with several VNG setups, and am curious if the Neon could be used in a similar way. Could you please provide me with a sample sheet that displays objective gaze mapping and possibly saccadic information?
Hi @user-ffb983 ! Thanks for sharing your use case. Neon will provide you with gaze data and eye videos at 200 Hz. The gaze output looks like this.
Additionally, the gaze (x, y) position can be obtained in real time. It will be output in scene camera coordinates, although you don't need the scene camera content.
In general, it would work for detecting eye movements in a videonystagmography setting.
If you would like a video call demo to see Neon and discuss anything, feel free to inquire about it at info@pupil-labs.com
Is it only possible to use the AOI tools/editor in conjunction with the Reference Image or Marker Mapper enrichments? The old SMI analysis software allowed you to select an AOI post-recording without requiring any form of mapper system.
Hi @user-057596 ! We are working on a manual mapping tool. For now, unfortunately, you need the Reference Image Mapper or the Marker Mapper tool.
Thanks @user-d407c1, a manual mapping tool would be very welcome. Also, is it possible with the present AOI tool/editor to select a particular AOI for each single recording in a project folder, rather than the same one being applied to all the recordings in the project?
All Areas of Interest (AOIs) defined in the enrichment process are applied to all recordings that have been enriched. 🙂
But... within the AOI heatmap tool, you have the flexibility to customize visualizations by selectively toggling AOIs and choosing specific recordings to include. Is that sufficient?
That's really helpful 🙂
Hello, I'm new here! Does code written using the API for Pupil Core work in Neon?
Hey Pupil Labs peeps, where can I see the gaze offset values for a wearer in the companion app? I would like to manually offset the gaze in Neon Player, but wondering if rather than adjusting it based on playback, could I just find the values for the wearer and use that?
I have also found that neon player crashes very often while processing the exported files from the phone. Console gives this message:
Hey @user-cad8c8 ! The realtime APIs for Core and Neon are completely different; they use different protocols as well. You can check them out here and here. They are not compatible with each other. Pupil Invisible and Neon are based on the same realtime API.
Thank you!
Hi @user-09f634 ! The values are not shown directly in the app, but they are stored in the info.json file upon being recorded. Is that sufficient? I will pass on the feedback regarding the crashes. Are you using the latest version, 4.1?
Yes, this works for me, thank you! Regarding the crashes, I am on v4.1. Restoring default settings seems to help with the crashing sometimes...
I am initiating an experiment to investigate the disparities in viewing behavior between real-life interactions and screen-mediated experiences. In this study, I will be testing individuals with distinctive features such as a prominent birthmark or other stigmatizing aspects. During the experiment, the test participant and research fellow/screen will be positioned facing each other, and the use of a wide lens on the camera may pose challenges. While not insurmountable, it may result in slightly less sharp images.
Thanks for the overview! The lens on Neon is not configurable; you will always get the wide-angle view. Have you already done some pilot testing/are there definite features that are not visible on the Neon's scene video?
Hi Pupil team. I am working on a project involving connecting and processing multiple Neon glasses. I have a list of static IPs of the devices and use a loop to go through each one and find it using Device(address, port). However, if the device with the passed IP is not currently connected to the network, I notice a delay of about a minute where the terminal hangs before proceeding to find the next device. If the device can be found, there is no delay. Is there a way I can get around this and/or set the timeout for finding a device by IP that is not connected? Thank you!
Hi @user-465a3e ! Are you using the simple or the async API? Using the async API won't block the thread and is probably more suitable for your use case.
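If you stay with the simple (blocking) API, one workaround is to pre-check each phone's reachability with a short socket timeout before constructing `Device(address, port)`. This is only a minimal sketch: it assumes the realtime API is served on port 8080, and `device_ips` below is a hypothetical placeholder for your own list of static IPs.

```python
import socket

def is_reachable(ip: str, port: int = 8080, timeout: float = 1.0) -> bool:
    """Probe ip:port with a short timeout, so an offline phone fails
    in about a second instead of hanging for a minute."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, host unreachable, or timeout.
        return False

# Hypothetical list of Companion phone IPs -- replace with your own.
device_ips = ["192.168.1.21", "192.168.1.22"]
live_ips = [ip for ip in device_ips if is_reachable(ip)]
```

You would then only call `Device(address, port)` for the addresses in `live_ips`; the async API avoids the blocking altogether.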
Hi there, is there a way to edit and chop up footage from the recordings? We are interested in using your enrichment feature to analyse areas of interest, but we won't be able to stop and start recordings while our participants are in mid-drive.
Hi @user-f4e4e0 ! Yes! That is what events are for. You can use them to select sections to enrich, or sections to export in the Video Renderer.
Thank you
Hi, @user-09f634 - I think I see the bug that causes your crash. It's Windows-specific but simple to patch. Thanks for the report, and sorry for the trouble. The fix is included in the v4.1.1 release.
thank you (: I will try the new release
Can I update the OnePlus 10 Pro (currently Android OS 12), or would that break something?
Hi @user-1391e7 ! Yes! The Neon Companion app does support Android 13. You will need to grant an additional storage permission that is new in Android 13, but no worries, you will be prompted.
(the companion device for our pupil neon glasses)
Android is annoyingly persistent in reminding me that there is one available. I'm slightly worried someone will hit that update button during an ongoing study.
Okay, I see. So does it mean that if I buy "I can see clearly now", I need to take the Bare Metal module off my "Just act natural" frame and put it on my new "I can see clearly now" frame? @user-d407c1
No, the nest (PCB) is on all frames; you just need to swap the module from one frame to the other, as shown here.
Okay, just need to double-check before buying it: - the "I can see clearly now" is the frame + the -3 to +3 lens kit (the cable to connect the Companion phone and the Bare Metal module is included) - Companion phone not included - I can swap the module from one frame to the other
am I correct?
That is totally correct. On our shop you have the bundles on top, which include:
and at the bottom, under accessories, we have the frames alone with that PCB - no module, phone, or anything else. - the frame "I can see clearly now" comes with prescription lenses from -3Dpt to +3Dpt; you can order this frame here.
Thanks
Hi - I am trying to use the Marker Mapper enrichment in an experiment where two people interact face to face, each of them wearing one pair of Neon glasses. They discuss an object appearing in front of them (on a screen or on a piece of paper), which I have tracked with the markers (four QR codes, one in each corner). The issue is that the markers seem to be reliably detected only when appearing in front of the person; as soon as they are slightly to the side, the enrichment fails. I have tried many different things (e.g. printing the QR codes, putting them on the image shown directly on the screen, changing the size and the luminosity of the image, varying the distance between the wearer and the screen, etc.), but none of these attempts resulted in the enrichment detecting all four QR codes reliably (yes, I made sure each QR code has quite a large white border around it). Is this something that is expected? Should the marker appear exactly perpendicular to the wearer's gaze to be detected? Is there anything else I could do to make sure they are detected by multiple people looking at the target from slightly different angles (e.g. in a two-person conversation)? Thanks!
Hi @user-7413e1 ! Could you post a picture of the markers not being detected? That would help us understand your environment. Tilted markers can be trickier to detect, but it depends on how much they are tilted.
yes - here is one (from person A's view) when I tried to use a printed version (not preferred), and also one (from person B's view) when it was shown directly on the screen
Thanks for sharing the image. It's very helpful! Honestly, this isn't a particularly challenging perspective, and I think you'll be able to get the markers detected! Here are two things I'd suggest you try: 1. Printed markers - They appear quite small in the scene camera image. I would suggest doubling their printed size. I expect they'd then be detected, even at the viewing angle shown in your image. 2. Digital images on screen - The markers here seem to be a decent size. So, I think what's happened is that the markers are slightly over-exposed in the scene camera image. They do look blurry. This is likely due to a combination of screen brightness and sub-optimal backlight compensation. You'll want to ensure that these are appropriate. I'd first try reducing the brightness of your screen and then adjusting the backlight compensation (also in the Companion app settings) until the markers are clear with good contrast. There's further context about backlight compensation in this message: https://discord.com/channels/285728493612957698/1047111711230009405/1200039507832098816
Let me know if that works!
How long should the "World Video Exporter" take for a 2-minute video? Not more than 30 minutes, right? It kinda looks like it's done based on the progress bar, but it still says it's running, maybe indefinitely. I have had the World Video Exporter run before on other exports from the phone and it did not take this long, so perhaps there is something wrong with my file but the application is not telling me?
The encoding time depends on your PC specs, but 30 minutes for a 2-minute video would certainly be atypical. Does this happen every time you export the world video from this recording? Does it happen with every recording?
Hi Pupil Labs Team, I just wanted to ask a simple question. I recorded a video (more than 3 minutes) and tried to run the enrichment process, but it has already taken more than 90 minutes and it's not done yet. Have you seen this before? How long does the process normally take for a 4-minute video?
Hi @user-602698 👋. May I ask which enrichment you are running?
Hi Pupil Labs. We experienced an interesting issue with the devices. We used the API to start and stop recordings on 5 devices at the same time. A few minutes after recording ended, all the phones began vibrating and displayed an error message. "We have detected an error during recording". There were no errors thrown by the API during recording itself. The vibrating did not stop after restarting the app or unplugging the glasses. One of the glasses displayed a "Sensor Failure". Vibrating only stops after force quitting the app. Here is a video of the vibrating devices. Do we know the cause of this issue and how we can prevent it from occurring in the future?
One thing you can quickly check is if the apps are all on the latest version and if the issue persists then.
Hi @user-465a3e! Please contact info@pupil-labs.com and someone will help from there with the debugging process!
Hello! I am using the Marker Mapper enrichment in Pupil Cloud. The tags aren't being detected. What could be the problem? Neon Player is able to detect them, so the size of the tags is proper.
Hi @user-ee081d! Please try again but this time using the markers available from our documentation: https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/#setup. The markers shown in your screen capture aren't compatible with the Cloud enrichment.
Hi @user-ee081d ! Can you navigate a few frames forward or backward and see if they are detected?
I tried it and it didn't help.
Hi! I have sent you a friend request, and I will follow up by DM with my email address. Can you invite us to that workspace, so we can further investigate what is happening?
Hi, I've just updated the Neon app and now it's no longer connecting to the eye cameras, and there is no gaze overlay.
Hi @user-057596 ! I'm sorry to hear that. Does Neon get recognised at all in the app?
Regardless of that, could you please contact us by email [email removed] with the serial number of the glasses, or alternatively a picture of the QR code on the back of the module?
They are working when you use the adjustment feature, but when you press record and play back, there is no gaze overlay and the stream from the eye cameras has stopped.
Hi @user-d407c1 here is the glasses QR code
Can you please try clearing the app data? But before you do so: in the Device Storage > Documents > Neon folder, there might be an app_android.log file that can help us understand what happened.
@user-d407c1 how do you locate the Neon Folder?
Using the Files app on the phone, it should be in Internal Storage and then under Documents.
@user-d407c1 Found it. There is an app_android.log and it is 21.2MB.
Great! Please share it with us at your earliest convenience
Email it?
Yes, please. I think it will be too big for Discord, and email is easier to keep track of.
I will send from our university gmail which will be Herriot Watt University Psychology
@user-d407c1 email sent.
Thanks! We will reach out ASAP. Kindly note that there is a weekend in between, and our response might be delayed until Monday. In the meantime, you can try clearing the app data.
@user-d407c1 Will do and thanks. Have a great weekend
It has worked for me with a short recording (55s); I don't think that one took very long. I tried the same recording I was having trouble with (2 minutes) on a different laptop as well - over an hour later it is still "Running". Both laptops have good specs (i7 + dedicated GPU). This recording that I am having issues with is from Sept 2023. Could that possibly be affecting things?
Probably not, but if you're able to share the recording, I could take a closer look
Apologies if this has been asked before. Is it possible to share recordings, specifically a scanning recording, between workspaces? Thanks!
Hi @user-4b3b4f 👋! It is currently not possible to share or move recordings across workspaces.
Thank you!
Hi @user-ee081d ! It seems you are using the tag16h5 marker family. This family is not supported in Cloud, as it can be unreliable. Could you please try again using the recommended one (tag36h11)?
New problem: after applying the enrichment and defining the surface, I can't download the gaze.csv file here.
I was using the tag25 family and it was detected by Neon Player, but not by Cloud. I will try with tag36h11.
Hi @user-ee081d ! On the Downloads tab there are two panels: the left one corresponds to the original data, while the right one corresponds to the enrichments' data.
You should be able to download it from there.
After pressing download, I am only getting the enrichment_info.txt file.
Are you using Safari? https://docs.pupil-labs.com/neon/pupil-cloud/troubleshooting/#my-enrichment-download-contains-only-an-info-json-file-and-nothing-else
Yes, thank you! It helped!
Hey, we need to synchronize Pupil Labs timestamps with other software. Going forward, we have implemented event markers with the API, but we still need to be able to calculate a latency on recordings we have already collected. We can do so by selecting the frame of the specific event, but the video playback in the cloud shows some grey for a second or two before the first frame occurs. We need a way to associate the timestamp of the video, or the frame number, with the recording.begin event that is included in all downloadable folders. Is there a place where the amount of time that passes between the recording.begin event and the first frame in the video "Neon Scene Camera v1.mp4" is stored?
Hi @user-28ebe6 ! When you download the Timeseries + Scene Video, you will have a world_timestamps.csv which includes a timestamp for every frame of the video that gets downloaded there.
Regarding the grey frames in Cloud, due to the higher sampling rate of the eye cameras compared to that of the scene camera, coupled with a possible slight delay in the scene camera's initiation, there is a possibility of capturing gaze data before the first frame from the scene camera is actually recorded. To ensure no valuable gaze information is lost, Cloud incorporates placeholder grey frames to fill these gaps. This method allows us to retain all gaze data for comprehensive analysis.
That said, each gaze point and eye camera frame is meticulously timestamped. This ensures that the synchronisation of gaze points to scene camera frames remains precise and unaffected, despite the placeholder frames added to bridge any starting timing discrepancies.
About synchronisation moving forward, I strongly recommend having a look at our Lab Streaming Layer relay, which will make it easier to sync with other devices, the time offset estimator from our realtime API, and how to force-sync the Companion Device with an NTP server.
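For the recordings already collected, the gap can be computed directly from the export files. A minimal sketch, assuming the export's CSV headers are `name` and `timestamp [ns]` (please double-check these column names against your own events.csv and world_timestamps.csv):

```python
import csv

def first_frame_offset_s(events_path: str, world_ts_path: str) -> float:
    """Seconds between the recording.begin event and the first scene
    camera frame in a 'Timeseries + Scene Video' download."""
    with open(events_path, newline="") as f:
        # Timestamp of the recording.begin event, in nanoseconds.
        begin_ns = next(
            int(row["timestamp [ns]"])
            for row in csv.DictReader(f)
            if row["name"] == "recording.begin"
        )
    with open(world_ts_path, newline="") as f:
        # Earliest scene camera frame timestamp, in nanoseconds.
        first_frame_ns = min(
            int(row["timestamp [ns]"]) for row in csv.DictReader(f)
        )
    return (first_frame_ns - begin_ns) / 1e9
```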
Ah, I see, will use that download option then for the recordings we have already collected.
Hello, good evening! I'm running into an issue at the moment... I'm performing a Reference Image Mapper enrichment, but although the process says 'completed', the gaze over the reference image does not appear, and when trying to make a heatmap, it does not show anything over the image...
I've noticed that on the bottom, the bar is colored grey, as opposed to other enrichments I have, where it's purple. I assume it's something regarding the reference image? Or what could be the problem here?
the ID is: 0462120d-7b3c-4c84-a862-13bf61c31427
Thanks in advance
Good morning all, we are doing an experiment to analyze how many words we read on a daily basis. The person recording this video claims that he was able to read the bar's menu in this pic, but in the video it is blurry. Can we adjust the quality of the recording? Thank you very much!
Hi @user-e825d8 ! The resolution of the camera cannot be changed. The camera is designed with a fixed resolution of 1600x1200 pixels, offering a field of view (FOV) of 132° horizontally and 81° vertically.
This gives you approximately 13 px/deg. Kindly note that this value does not hold exactly true either, as you also have to account for radial distortion, and the density won't be the same across the whole field of view.
For instance, an object with a height of 10 cm located 5 meters away, subtending about 1.15 degrees (like the one you propose), would be represented by roughly 15 pixels on the sensor. This comparison helps set a baseline for understanding object representation at a distance.
In the context of visual perception, if we were to draw an analogy with a digital sensor, the human eye can perceive details at an approximate rate of 60 pixels per degree. This is a coarse analogy, as you can't directly compare a digital sensor with the eye, but it can serve to explain why your wearer saw the letters even though they were not captured.
On top of that, there are other factors such as motion blur, i.e. movement of the head during the camera exposure can lead to loss of detail.
With all this, I only want to help you set realistic expectations for what the camera can capture.
Hi @user-831bb5 ! Yes, the grey bar indicates parts of the video where the image was not found.
If you did not receive an error message saying "It could not be computed based on the image and scanning recording", it means that at least the scanning recording and the image were paired. Perhaps you have another recording where the image was found?
Thank you for your fast reply Miguel!
Yeah, I don't know what is happening here, because I've done enrichments with worse lighting conditions and they were perfect. Do you think I need to update something, or maybe I'm missing some firmware update? (The Android OS is not updated, but I don't think that could be an issue, right?)
Today I tested another enrichment, and it's been 8 hours for 3 simple 15-second clips with a good lighting setup, and nothing - the enrichment hasn't even finished. (It did now, after almost 9 hours, and it gave an error, of course.) Last time I did a video renderer with like 20 minutes of footage, and it was done in a matter of minutes. Nothing too crazy in terms of time.
The attached picture is the one that took 8 hours and it has given an error.
Also, I've noticed that a simple enrichment is taking too long to compute. I'm doing an image mapper. It's been almost an hour. The scanning video is 23 seconds and the user video is 37 s. Is that normal? (ID e58fcdad-16a3-464b-8b86-6d23a2446d57) I know that the time is subject to the server load, but I've never experienced that much time for something on Pupil Cloud.
Now that I'm checking an old project, I'm noticing that the user video does not have the dots over the object... could that be the thing that is messing up my recordings?
Is there a way to perform the image mapper offline with Python? (I did the "Map Gaze onto a Screen" one and it worked wonderfully and fast, and the computer is not that fast, to be honest with you.)
The cloud-like overlay with the dots is not appearing on the user videos.
Thanks Neil for taking the time to respond. I do see your point regarding the difference between the image and the recording. But I have another one which is exactly the same, with the same result. Take a look at the second image that I've sent, for example: the colors are different as well, and the light conditions are worse, if you ask me. Would making adjustments to the reference image colors produce better results?
I'll DM you, thank you very much
Hi there! I'm planning to conduct experiments involving the observation of photographs with elderly individuals. Are there any specific concerns or factors I should be mindful of, such as issues related to offset correction?
Hi @user-9d91ed! Neon generally performs well with older adults. In fact, you can read more about that in our performance whitepaper. When viewing photographs, it might sometimes be useful to use an offset correction to get the most accuracy possible, but it depends on how big the viewing stimuli are. I always recommend doing some pilot testing with the equipment to get familiar with this!
Hi. I am trying to run the Reference Image Mapper and it never finishes processing. Is this something I am doing incorrectly? I was able to run the Reference Image Mapper two days ago successfully. Thanks.
Hi, I also have the same problem running the Reference Image Mapper since yesterday. I tried different scanning videos, but it never finishes.
Same situation here. I left one running and after 9 hours, nothing. I tried one last night and, I'm not 100% sure, but when I checked today it had been 16 hours with nothing (the videos did display 100% processed when selected from the drop-down menu, but the enrichment was still processing). Now it displays an error.
@user-831bb5, I've received your invite. We will take a closer look at your workspace and data on Monday and see if we can figure out how to get your Reference Image Mapper enrichments working properly.
Hey @user-4b3b4f and @user-a370d3. Is this a recent change? Did you do anything different in the Reference Image Mapper compared to how you used it previously?
Hello, I have a question. I think the bare metal tracker is no longer recognized by the app.
When I plug it into the phone, nothing happens in the app.
This might sound dumb, but could this be because the phone battery ran out so I recharged it and for some reason now when I plug it in it does not recognize it?
It's possible that a full battery drain has affected this! Firstly, please long-press on the Neon app icon, go to "Storage usage" and tap "Clear data". Then reconnect the bare metal module and see if it shows up. If not, please reach out to info@pupil-labs.com with the serial number of the module.
I've set up a reference image mapper enrichment that has been processing for 20 minutes. Is that normal? How do I get it to load?
Hi @user-8b5404! The Reference Image Mapper (RIM) Enrichment is computationally intense. Depending on the current number of other RIM requests in queue, as well as other factors, it can take a significant amount of time for the Enrichment to complete. If you check back after a period of time and it is still not showing as complete, try doing a browser tab refresh or logging out and back in. This should trigger a UI update that shows the latest processing status, which could indeed be complete at that point. For future reference, if you want to reduce the computational expense of RIM, then you can use events to process shorter sections of the recording. Check this message for details.
Hi Pupil Labs team, I have a question about Neon Monitor. It is working great, and I do not have any connection issues. I am just wondering if there is any way to stream the eye camera along with the scene camera? The lab I work in works with young children and their parents as they play with toys (and we hide behind a curtain). The kiddos are not afraid to move the eye tracker if they are annoyed with it. When we used Pupil Core models in the past, Pupil Capture let us keep tabs on the scene cam and eye cam using our computer behind the curtain. That way, we would know right away if the child (or parent!) bumped the camera, and we would step out to help them fix it. Our solution for now is to keep an eye on the gaze overlay with Neon Monitor to make sure it is tracking correctly. If there is a simple way to stream the eye camera from behind the curtain too, though, that would be great!
Hi @user-471762! One of the many benefits of Neon is that it is calibration-free and tolerant of slippage, so if the glasses are bumped a bit, then the experiment can continue without any cause for concern. The glasses could even be briefly removed and put back on and recording will continue as planned. You will still get accurate and reliable gaze data. So, you do not need to worry as much about monitoring the eye video streams or the gaze circle. Currently, the Monitor app does not provide an eye video stream, so if you want that, then a small Python script using our Real-Time API would be an option.
Hi Neon team, I'm using Neon together with the real-time API. I used the API to discover_one_device, stream eye video, and send_event. At first it worked perfectly; however, after many rounds of testing, it suddenly stopped working and output this error message: "File "...\aiohttp\streams.py", line 622, in read await self._waiter aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected". What does this mean? How should I proceed?
The error itself is a little non-specific, but if you examine the .message property of the exception, it may provide additional insight as to the cause. Generally speaking, it's best to wrap your real-time API calls in try/except blocks somewhere along your application's call stack, so that your code can detect exceptions and handle them appropriately (e.g., attempting to reconnect, alerting the user, writing to a log, etc.).
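To make that concrete, here is a minimal, generic sketch of such a wrapper. This is not part of the Pupil Labs API; the function name and parameters are illustrative only:

```python
import time

def with_retries(call, retries=3, delay=1.0, exceptions=(Exception,)):
    """Run `call`, retrying on the given exception types.

    A generic pattern for wrapping real-time API calls so that a
    transient disconnect does not crash the experiment script.
    """
    last_error = None
    for _ in range(retries):
        try:
            return call()
        except exceptions as err:
            last_error = err
            time.sleep(delay)  # back off before retrying / reconnecting
    raise last_error
```

You could then call, for example, `with_retries(lambda: device.send_event("trial_start"))`, and optionally re-run device discovery inside the except path before the next attempt.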
Hi @user-613324! If you have encountered this error, then you want to check the "Stream" section of the Neon Companion app. Do you see the message, "waiting for dns service"? If so, then you need to restart the app, and if that does not fix it, then restart the phone. When you start the Neon Companion app again, it might ask you to allow permissions for Neon. Make sure to select "yes" and "always".
Hi everyone. We are starting to play with our Neon systems now and working out what the data look like. Only one question so far: can I download enrichment videos from pupil cloud? For example the video showing detected faces after running face mapper?
Hi @user-53a8e1! It is not possible to download the enrichment with the face mapping overlay. However, you can use the face_positions.csv output to get the coordinates of the bounding box and overlay it on the raw video, but this would require some coding. See also this message for more information on how to render the raw scene video with csv data.
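As a rough, stdlib-only sketch of the first step (grouping CSV rows by scene-frame timestamp before drawing, e.g. with OpenCV): note that the column names below are assumptions, so check them against the header of your actual face_positions.csv and adjust:

```python
import csv
import io
from collections import defaultdict

def boxes_by_timestamp(csv_text):
    """Group face bounding boxes by scene-frame timestamp.

    Column names are placeholders -- verify them against the real
    face_positions.csv header before use.
    """
    boxes = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = int(row["timestamp [ns]"])
        box = (float(row["p1 x [px]"]), float(row["p1 y [px]"]),
               float(row["p2 x [px]"]), float(row["p2 y [px]"]))
        boxes[ts].append(box)
    return boxes
```

With that lookup in hand, you would iterate over the scene video frames, match each frame's timestamp to the nearest key, and draw the rectangles onto the frame.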
Hi Pupil Labs team, is it possible to record and stream data at the same time? Is there anything I should keep in mind when doing this? I am planning to use the async data streamer while recording/saving data to the Companion device. Thanks!
Hi Neon team, I just wanted to bump this question. Thanks!
Good morning Pupil Labs Team. We have discovered a tiny scratch on the lens of the Neon module.
Is it possible to change the lens or somehow repair it?
Thanks.
Hi @user-6c4345! Could you please contact sales@pupil-labs.com in this regard, sharing an image of the scratch on the module?
Hi, I have been collecting data with my Neons for the past few weeks. I've had this error come up a few times and haven't been able to pinpoint why it occurs. We are operating the glasses exactly the same each time, but the error seems to occur at random. Any help would be greatly appreciated!
Hi @user-1423fd! Could you please contact us by email [email removed] sharing the serial number of the module? You can find it by connecting your Neon system to the phone, accessing the App's main screen, and tapping the top right info icon. There you can view the module serial number.
Hi! First time attempting to use Neon and Pupil Cloud. I get an error with the gaze pipeline on all my recordings. The recordings work fine on the phone (for the most part). Thanks in advance.
this is the full error message: "Gaze pipeline failed for this recording. We have been notified of the error and will work on a fix. Please check back later or get in touch with [email removed]
Hi @user-a7636a! Thanks for reaching out. Can you please share the recording ID with us (right-click on the recording >ย โView recording informationโ)?
Absolutely, here it is: 98412e61-ada4-4ae6-8b07-73e93f15eab9
Okay, with further investigation we noticed that this particular video did not have working gaze tracking in the video on the phone. This video has working gaze tracking on the phone and does not have the same error message in the cloud application, but instead it's just buffering/processing endlessly: 52fa5b20-3137-4332-b563-2b3dd1d7a741
Thanks for clarifying @user-a7636a! Let me talk with the Cloud team and I'll provide some updates asap. In the meantime, it seems that you have an older version of the Neon Companion App (2.7.5-prod). Note that the latest version is 2.7.11-prod. Please update the app via the Google Play Store. This might trigger some firmware and FPGA updates. During installation, grant all required permissions, selecting "Always Allow." When prompted for camera access, click the checkbox before confirming.
Please try making some new recordings with the latest app version, and let us know if the issue persists
@user-a7636a, I just updated my previous message, please have a look!
Hi again, I did as you suggested and it's still not working; it keeps on buffering. We also tried switching accounts and workspaces, to no avail.
Can you please send me the recording ID for the recording made with the new app version 2.7.11-prod?
here you go: a4c5c07e-b554-4652-b774-b3e556436ac1
@user-a7636a there is currently high load in Cloud, which might cause delays in recording processing. We appreciate your patience and understanding, and we will get back to you as soon as possible.
Thank you for the help. That explanation seems reasonable, because some of the videos seem to be working now.
@user-a7636a please note that your recordings have been processed.
With the real-time API at what tick rate does the API listen for event markers sent as UDP messages?
You can't send events as UDP messages. They are sent as HTTP POST messages, which are strictly TCP.
With regards to timing, you can include a timestamp with the event. If you do, then it does not matter when the message is received; it will be recorded with the timestamp you provide.
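For illustration, the event body is a small JSON object. Here is a sketch of building one in Python; the field names ("name", "timestamp" in nanoseconds since the Unix epoch) reflect my understanding of the real-time API's event format, so verify them against the API documentation before relying on this:

```python
import json
import time

def make_event_payload(name, timestamp_ns=None):
    """Build a JSON body for an event POST to the Companion device.

    Field names are assumed from the real-time API's event format;
    the timestamp is nanoseconds since the Unix epoch.
    """
    if timestamp_ns is None:
        timestamp_ns = time.time_ns()  # capture the event time locally
    return json.dumps({"name": name, "timestamp": timestamp_ns})
```

Because the timestamp is captured when the event happens, network latency on the POST itself does not affect where the event lands in the recording.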
Hi! is there a way to map gaze data to the surface ( like using Marker Mapper) without the Cloud ?
Hi @user-ee081d! Yes, mapping gaze data onto surfaces is also possible using the offline analysis workflow for Neon, namely Neon Player and specifically its Surface Tracker plugin.
Hi, I am having difficulties understanding how the Reference Image Mapper works. Does it normally take a while to process? And does it use all the recordings I have in the project?
Hi @user-2b8918! By default, all enrichments use every recording in the project.
If you want your enrichment to only be computed on a section or some specific recording within the project, you will need to use events to define its temporal selection.
This is selected when you create the enrichment under Advanced Settings > Temporal Selection.
If you are only interested in a portion of the recordings, we strongly recommend that you do so, as smaller sections take less time to compute.
The amount of time an enrichment takes to compute depends on several factors, like the type (e.g. the Reference Image Mapper is more computationally expensive than others), the duration (i.e. how many recordings and how long they are), and the queue (how many people are using Cloud resources at the moment).
Does this clarify all your questions?
Yes, thank you. So it is normal for it to take a while to process?
https://discord.com/channels/285728493612957698/1047111711230009405/1209872360006623302 We are currently under high demand in the Cloud, and you may experience longer times, but in general it can take a bit, yes.
Ok perfect no problem, was just worried it wasn't working - thanks for your help!
Hi Neon Team, I had a question regarding the Neon glasses and Companion updates. I turned on the wifi for my phone and got a request for a FPGA firmware update. However, this update went for 30 minutes with no progress on the update bar and I believe it crashed in the app. I had to restart the app, however, now it won't recognise the glasses when I plug it in. What would you recommend doing?
Hi @user-f4e4e0! Can you please try disconnecting Neon, force stopping the app, starting the app again, and connecting Neon?
To force stop the app, please try long-pressing on the Neon Companion App icon, select 'App info', and 'Force stop'.
Please let us know if this works.
Hi, is it normal that I've had enrichments processing for over 15 hours without result? In the past they took max. 1 hour. I've deleted those enrichments and started again one by one now, but is there something that can be done to speed up the process? Thanks!
Hi @user-231fb9! It might be that your browser put your tab to "sleep", and the state never changed. Could you please try to hard-refresh your browser ("Ctrl" + "F5" (Windows) or "Command" + "Shift" + "R" (Mac))?
Hey Pupil Labs team, is the Neon IEC 62471 compliant? I have not been able to find the documentation on it.
Hi @user-ac085e! Yes, Neon has been tested for EN 62471:2008 and is classified as exempt (no photobiological hazard).
I've found documentation that states the Invisible is IEC 62471 compliant but nothing regarding the Neon.
Hello, I have had problems getting the "Just act natural" frames to connect with the cell phone. What can I do?
Hi @user-63bd0f! Would you mind describing the issue a bit more? Are the glasses not recognised by the app? In that case, could you please reach us by email at [email removed] with the serial number of your glasses? Simply send a picture of the back of the module where the datamatrix (what looks like a QR code) is visible.
Hi Rob, after I encountered the server disconnect error, I did nothing but wait overnight. Then the error was gone and everything runs fine. I checked the "Stream" section of the Neon Companion app again and this time it shows "Stream to Pupil Monitor" with a QR code below. I don't need Pupil Monitor but I do need to use realtime api to send timestamps of the events. And I believe the error of server disconnection I encountered is triggered by the device.send_event() function. So what should I do to avoid or mitigate this error?
Hello. I have a reference image mapper enrichment with two AOIs. Is there a way for me to download a fixations csv that will give data on whether the fixation is on AOI 1, AOI 2, or not on either AOI? Thank you!
Hi @user-8b5404 - You can get several fixation metrics on the AOIs you have defined in the aoi_metrics.csv file, which is included in the Reference Image Mapper download folder.
Any information on IEC 62471? "Hey Pupil Labs team, is the Neon IEC 62471 compliant? I have not been able to find the documentation on it."
Hi! I am running the gaze-controlled-cursor-demo. I can get all four tags detected after adjusting the brightness and size, but only for a few seconds. After that, the red border starts twitching on the markers and it is hard to get all 4 detected at the same time. What could be my problem?
Could you make a recording and share the scene video here? Much easier to diagnose that way
Hi
I am working in transportation engineering with the Neon glasses.
We went for a test run and the glasses stopped recording after 20 minutes, and I couldn't start recording again (it started recording but stopped within 20-30 seconds). After a 10-minute gap I started recording again, and it worked.
Is this the maximum time it can work in one go, or is there a problem with the device?
Hi @user-b6f43d! Sorry to hear you're experiencing issues with your glasses. Recording time depends on the phone's battery. Using a fully charged device you get around 4 hours of continuous recording time.
To see what might have happened, could you please contact us at [email removed] sharing relevant information (module serial number, app version, recording ID in case the recording was uploaded to Cloud)?
Hello, the bare metal tracker is no longer connecting to the Neon Companion app. Although we plug it in, it still displays "plug in and go".
We have attempted the following, none of which fixed the problem: we force quit the app, then reconnected the bare metal; we cleared the data storage, then reconnected the bare metal; we installed the application on a different device and connected the bare metal.
Hi @user-c5d00c! We have contacted you by email; please let us know whether it reached you.
Thank you for your work.
I would like to utilize Neon and Invisible on multiple smartphones. This is because there are limitations on the battery operating time of a single smartphone. Also, there is a need to use another smartphone if one breaks.
Q1 Are Neon Companion and Invisible Companion available on Google Play? (I couldn't find Neon Companion) Can I install them from any Android device?
Neon Companion is already installed on the OnePlus 8T. I was able to install Invisible Companion from Google Play on the OnePlus 8T. However, Neon Companion could not be found on Google Play for the OnePlus 6.
Q2 Are there any requirements for smartphones to use Eye Tracker?
Q3. Which indicators in the CSV can be used to identify saccades? Which of Pupil Core, Neon or Invisible can be used to obtain saccades?
Hi @user-5c56d0. The Companion apps that run with Neon and Invisible are specifically tuned to work with certain models of phones and Android OS versions. We require a lot of control over various low-level functions of the hardware, and we want to ensure optimum robustness and stability, among other things.
The apps are available on the Google Play Store, but only if you have a compatible phone and Android version for that app.
It's likely you couldn't install the Invisible App on the OnePlus 8T because of an incompatible Android version. The Neon App is also not compatible with the OnePlus 6.
The best thing to do is to visit each respective documentation page that outlines specific compatible phones and Android versions: - Neon Compatible Phones and Android OS versions - Invisible Compatible Phones and Android OS versions
Regarding saccades, technically all three systems can be used to compute saccades, although they're not explicitly reported in the exported data.
Neon/Invisible: saccades are conceptually equivalent to the intervals between classified fixations. The caveat here is that this approach assumes no smooth pursuit eye movements occurred. We do plan to release explicit saccade metrics, although we don't have a concrete release date just yet.
Core: Inter-fixation intervals are also conceptually equivalent to saccades for Core. But note that some time ago, there was a community-contributed saccade detector: https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing. Not sure if it will still work, but it's worth looking into!
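As a plain-Python sketch of the inter-fixation-interval approach (assuming fixations are taken, for example, from the start/end timestamp columns of a fixations export and sorted by start time):

```python
def inter_fixation_intervals(fixations):
    """Approximate saccades as the gaps between consecutive fixations.

    `fixations` is a list of (start_ns, end_ns) tuples sorted by start
    time. Assumes no smooth pursuit occurred between fixations.
    """
    gaps = []
    for (_, end0), (start1, _) in zip(fixations, fixations[1:]):
        if start1 > end0:
            # candidate saccade: from end of one fixation to start of the next
            gaps.append((end0, start1))
    return gaps
```

Each returned interval is then a candidate saccade whose duration is simply the difference of its two timestamps.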
Hello, would it be possible to transfer recordings from one workspace to another? Also is it possible to give workspace user exclusive access to projects?
Hi @user-ff3a0a! Unfortunately, it is currently not possible to move recordings across workspaces. We did receive this feedback in the past, and we were going to transfer it to our new features-requests channel for consideration.
If this is something you would like to see sooner than later please create a request there.
The same goes for granular access to projects. I'd recommend separating these two requests.
so with the monitor app the frame and
Hi Pupil Labs, I wanted to confirm: what's the storage limit on Pupil Cloud for Neon users?
Hi @user-ccf2f6 ! There are currently no storage limits in Pupil Cloud.
Hi @user-b6f43d - Sorry for not including this in my first message:
Hi, I used the Marker Mapper enrichment to define some areas of interest for my analysis. I found out that the coordinates of the same fixation are different between the two CSV files exported for the two enrichments. I would therefore like to know how the normalized coordinates are computed for the marker enrichments, and why they are different. Also, are the coordinates in the marker mapper file that contains the specific fixation (fixation detected = TRUE) more reliable than the other one? I put an example in this photo with the same fixation across the different fixations CSV files. Thanks!
I have a follow up question regarding that. Were the surfaces defined in the same way or did you change their size by moving the corners before running the enrichment?
Also, I had several recordings where the scene camera was grey for more than 15 s, even up to 30 s (see photo). We first thought it was a heating problem, but it seems to happen at the beginning and/or the end of a session. Do you know how this problem occurs, and what we can do to prevent it? Thanks!
Hi @user-edb34b! The grey frames are expected; they are used as placeholders on Cloud because the eye cameras start a bit earlier than the scene camera sensor, and we don't throw away this data. See also this relevant message https://discord.com/channels/285728493612957698/1047111711230009405/1207360774176243803
Thanks for clarifying @user-edb34b!
Note that the fixation coordinates in the fixations.csv exported in the Marker Mapper enrichment folder are given in normalized coordinates within the surface.
I recommend having a look at the explanation of the exported data here.
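To illustrate what "normalized within the surface" means in practice, here is a small conversion sketch. Note that whether the surface origin is bottom-left or top-left depends on the export, so verify the y-flip against your own data before trusting it:

```python
def norm_to_pixels(x_norm, y_norm, width_px, height_px, origin_bottom_left=True):
    """Convert normalized surface coordinates (0..1) to pixel coordinates.

    The origin convention is an assumption here -- check it against
    your exported data before relying on the y flip.
    """
    x_px = x_norm * width_px
    if origin_bottom_left:
        # flip y so that (0, 0) maps to the image's bottom-left corner
        y_px = (1.0 - y_norm) * height_px
    else:
        y_px = y_norm * height_px
    return x_px, y_px
```

This is mainly useful if you want to overlay surface-mapped fixations onto an image of the surface itself.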
I have a doubt about the Marker Mapper enrichment. My markers are getting identified in some frames, but in the next frame, maybe because there is too much sunlight on them, they are not recognized. Will it skip the data in frames where the markers are not recognized, or is it okay if the markers are identified in any one of the frames?
Can someone reply to this?
The frames without marker detections will be skipped, yes. The markers do need to be detected for the mapping to take place, and we don't do any interpolation of frames with missing detections.
Why is the surface not properly defined, even after the markers are identified?
Can you drag the bottom left corner of the surface down so you cover the full windshield? Or was this trapezoid shape intended? It looks like the surface visualized to the right of the video is warped due to the geometry of the surface.
It's showing like this