Hello Neon team. Are you working with any sports teams or individuals using Pupil Labs' Neon for screening? Do you have ideas on where Pupil Labs could help with screening or performance improvement in different sports?
Hi @user-51c732 - I understand you're planning to use Neon in sports applications, is that right? Could you please explain a bit more your research needs and setup? I'm not sure I fully understand your question.
Dear Nadia, you are right. For now I just have an idea, and I'm thinking of starting to use Neon for these purposes. I would be happy if you already have some experience and ideas you could share with me.
@user-51c732 Neon is a great option if you want to collect eye tracking data in sports applications. We have some videos on our website showing Neon when playing basketball/riding a bike etc.
For sports applications, we recommend opting for the "Ready Set Go" frame. This frame has an adjustable headband and fits well under helmets and other headwear. It is suitable for sports and other activities involving fast movements, ensuring the module stays firmly on the subject's head.
If you could share more details about your planned research, that will help me provide better feedback. We can also schedule a Demo call if you prefer and we can show you Neon in action 😉
Thank you very much. It would be great to have a call and go through it.
great, can you send us an email to [email removed] and we can coordinate via email 🙂
Ok
Hello, why does Neon stop recording a couple minutes into my experiment?
Hi @user-2e0181! Please open a ticket in 🛟 troubleshooting and we can coordinate there!
Hi everyone, we're interested in ordering a frame (a Ready Set Go) as an extra for our recently purchased Neon device. Before we do so, I would like to ask if there is any possibility we could request some customization of the headband on it?
Sorry, I just noticed on the website that headbands are also available. Is it possible to get some images of the head straps (beyond those on the web page; I'm specifically hoping to get a closer look at the earpiece sleeve)? And would it be possible to vary their length if needed? Lastly, is there any room for customization of the strap material (an elastic band instead of a sheathed steel cable)?
Hey @user-97c46e! Can you please contact [email removed]? Someone from there will be able to assist you with these questions 🙂
Hi! Unfortunately, the phone that came with my Neon kit was stolen. Which device do you recommend I buy to use my glasses?
Is the Moto Edge 50 Fusion an option?
Hi @user-fd453d , I'm sorry to hear your phone was stolen. You can see the compatible devices at this link: https://docs.pupil-labs.com/neon/hardware/compatible-devices/. Personally, I would recommend the Motorola Edge 40 in one of its variants, as it is the most powerful device.
Is the Moto Edge 50 an option?
We can only guarantee correct operation with the devices listed above, since those are the ones that have been tested.
Ok, thanks
Hi folks, we're using the Ready Set Go frame with our Neon device, and within 10 minutes of starting to record, a good bit of heat can be felt on the skin. While I expect electronics to generate heat, it's a lot more than I anticipated, and it's getting quite hot 15 minutes in. Is it expected to get this hot? If so, does anyone have any suggestions for how to deal with wearer discomfort if we're recording over hours?
Hi @user-97c46e! Which frame are you using? Does it have a heatsink? Our current frames for sale come with an aluminum heatsink.
The heatsink helps dissipate heat from the module to the front of the frames, ensuring the module gets warm but not hot. Typically, the module reaches around 10°C above ambient temperature for most frames, with slightly lower temperatures for ICSCN. These measurements were taken using FLIR and are based on the module’s core temperature beneath the silicon.
Hi @user-d407c1. Thank you. We have the Ready Set Go frame. It has a plastic housing on the top, which seems to have a grating kind of construction near a coil on the electronics package (I'm assuming for the purpose of heat dissipation).
I want to know the amplitude of an eye movement in degrees (i.e., essentially as an angle from the center of the glasses). The simplest way seems to be to get the coordinates of a square in the scene camera exported by the Pupil Neon (which is the info I have), and work out the ratio with the estimated angular dimensions of the same square. Is this reasonable, or is there a better way?
Hi @user-4df54e 👋. Neon already provides the elevation and azimuth of a gaze ray in degrees. You can use this to compute the amplitude of gaze shifts. More in the docs: https://docs.pupil-labs.com/neon/data-collection/data-format/#gaze-csv. Do you think that would suffice for your needs?
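For anyone looking to do this programmatically: below is a minimal sketch of computing gaze-shift amplitudes from Neon's gaze.csv, assuming the column names 'azimuth [deg]' and 'elevation [deg]' from the docs (verify against your own export). It uses the spherical law of cosines to get the angle between consecutive gaze directions.

```python
import numpy as np
import pandas as pd

# Load Neon's gaze export; the column names here are assumptions.
gaze = pd.read_csv("gaze.csv")
az = np.radians(gaze["azimuth [deg]"].to_numpy())
el = np.radians(gaze["elevation [deg]"].to_numpy())

# Angular distance between consecutive gaze directions
# (spherical law of cosines), i.e. the amplitude of each gaze shift.
cos_amp = (
    np.sin(el[:-1]) * np.sin(el[1:])
    + np.cos(el[:-1]) * np.cos(el[1:]) * np.cos(az[1:] - az[:-1])
)
amplitude_deg = np.degrees(np.arccos(np.clip(cos_amp, -1.0, 1.0)))
print(f"largest sample-to-sample shift: {amplitude_deg.max():.2f} deg")
```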
I apologize if this is a silly question, but is there a way/option to use Neon without wifi (on the phone) and upload the recordings locally instead of through Cloud? Thank you!
Hi @user-bce0dc ! You can transfer them via USB.
What OS version is required to run the Neon glasses?
Hi @user-bef103 👋 It depends on the model of the Companion device (phone) you're using with Neon. Which model do you have?
of the phone
OnePlus 10 Pro
it's running 14.0 right now
Unfortunately, the supported Android versions for the OnePlus 10 Pro are Android 12 or 13, so you'll need to downgrade to a compatible version. Please open a ticket in https://discord.com/channels/285728493612957698/1203979608563650650 and we can coordinate from there!
Okay thanks
Hey Pupil Labs, I wrote code that reads gaze data and world timestamp data from two CSV files, calculates relative time based on the first timestamp from the world timestamp file, and then converts the relative time in nanoseconds to a Minutes:Seconds:Milliseconds format.
I happened to see that the converted file says I have some gaze data beyond the end of the recording. For example, in my case the recording ends at 17 minutes:31 seconds, but the last value in the converted gaze file is 17 minutes:59 seconds:339 ms.
What might be the problem?
Found what the problem is. Thanks, Pupil Labs!
Hi @user-b6f43d , I see you found the problem, but just in case:
The first timestamp in world_timestamps.csv is not the start of the recording. Rather, subtract the start_time in the info.json file from the timestamps in each data stream of interest. This is because the different sensors (scene camera, eye cameras, gaze, etc.) run in parallel and do not all start at the exact same time. Also, they must briefly initialize for about 1.5 seconds after you start a recording.
Also, they do not all stop at the same time when you end/stop a recording. The eye/gaze sensors might still run for a bit after the scene camera has stopped.
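To make this concrete, here is a minimal sketch of computing relative time against the recording start, assuming a Pupil Cloud time-series export where info.json holds a start_time in nanoseconds and gaze.csv has a 'timestamp [ns]' column (names taken from the docs; verify against your files):

```python
import json
import pandas as pd

# start_time in info.json marks the recording start (assumed nanoseconds).
with open("info.json") as f:
    start_ns = json.load(f)["start_time"]

gaze = pd.read_csv("gaze.csv")
rel_ms = (gaze["timestamp [ns]"] - start_ns) // 1_000_000

# Format as Minutes:Seconds:Milliseconds.
gaze["relative time"] = [
    f"{ms // 60_000:02d}:{(ms // 1000) % 60:02d}:{ms % 1000:03d}" for ms in rel_ms
]
# The last gaze sample may land after the scene video ends (see above).
print(gaze["relative time"].iloc[-1])
```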
Hi everyone,
I’m using the Neon eye-tracking glasses for a study where participants look at their own bodies in a mirror. I need to define AOIs for each participant (e.g., hands, legs, stomach), and I want the software to automatically extract the eye-tracking data from the video of each participant looking at the mirror, based on these AOIs.
I understand that I need to have a reference image, but I’ve also seen mention of a scanning video. Is the scanning video absolutely necessary for this process? I’ve come across studies that don’t seem to use a scanning video, so I’m unsure of its role here.
Could someone provide clear instructions on how to do this correctly? I’ve managed to get it working in a way, but I’m not confident it’s the most systematic approach for research purposes. A detailed, step-by-step guide would be greatly appreciated!
Thanks in advance!
Hi @user-4da0be 👋 !
Since you are working with a mirror, that can potentially cause some difficulty for Reference Image Mapper, but it's not necessarily a dealbreaker; it depends on your exact setup. May I ask, how large is the mirror, and how much context, contrast/color, and variation is there in the surrounding environment?
@user-4da0be Also, if you do not strictly need the features of Reference Image Mapper, then you may find the Map Gaze Onto Body Parts Alpha Lab to be helpful. It does not require a scanning recording and automatically determines AOIs for body parts.
This can be especially helpful if the participants are moving their arms, for example, while looking in the mirror, but it does require more work to extract fixation metrics.
May I ask what data exactly you are looking to compute for the AOIs? Is it fixation metrics? If so and Reference Image Mapper does not work satisfactorily for you, then my colleague @user-480f4c has pointed out that you can also use the Manual Mapper Enrichment.
hello, is saccade detection (+ export) only available in Pupil Cloud or also in Neon Player? can't seem to find it in Player 😦
Hi, @user-688acf - you're in luck! We just published Neon Player v4.5, which includes saccade exports!
lucky me 🙂 great thx!!
(y) thx
awesome! thx
I have a question about best practices when using tag aligner (https://github.com/pupil-labs/tag-aligner). For the first step in tag-aligner, it says you have to make a RIM enrichment. For this step you need a scanning video of the scene, a still photograph of the scene, and the actual eye tracking recording that is of interest. The GitHub README for the tag aligner mentions that at least one of these two recordings needs to include the AprilTag. Currently I am including the AprilTag in the scanning video (since it cannot be included in the actual eye tracking recording because of the experiment). However, does the still reference image that is used for the RIM step also need to contain the AprilTag? Also, will the RIM enrichment used by the tag aligner be less accurate because the scene changes between the scanning and eye tracking recordings, since the AprilTag is not present in both?
Hi @user-a09f5d , a Reference Image Mapper Enrichment by default does batch processing of all the recordings in a Project. So, when the instructions say, "This can be the scanning recording, but does not need to be.", it means that it can even be a third or fourth recording, which we could call the "aligning" recording.
This "aligning" recording does not even need to be an eyetracking recording. It just needs the AprilTag present, within the scanned region.
Then, you do not have to worry about any error being introduced into the mapping by the presence/lack of AprilTags in the scanning/eyetracking recordings.
Hi Neon team, I noticed there's a new feature in the Neon Companion app after a recent update: Companion Settings --> NeonNet --> Compute eye state. I don't remember seeing the "Compute eye state" option in older versions. Is it related to the real-time eye state feature introduced this April? That is, is this option only for generating the 3D eye state in real time (like the real-time gaze)? I tried with this feature both turned on and off in the app, and found no difference in the time-series data downloaded from Pupil Cloud. Even with this feature turned off, Pupil Cloud still generates 3d_eye_states.csv. Can you give me a use case where we need to turn on this feature? Thanks!
Hi @user-613324! Yes, that's correct. Enabling the "Compute eye state" option allows you to capture 3D eye state measurements in real time. Even if this option is off, you can still download 3D eye states from Pupil Cloud, as all data streams are reprocessed there from your recording. Potential use cases for having it enabled include streaming eye states via the real-time API, or working locally without Cloud access, such as when exporting recordings directly from your phone to computer.
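For the real-time streaming use case, a minimal sketch with the pupil-labs-realtime-api simple client might look like the following; the eye-state field name on the gaze datum is an assumption to verify against the API docs:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find a Neon Companion device on the local network.
device = discover_one_device()

# With "Compute eye state" enabled in the app, gaze datums should also
# carry eye-state values; the field name below is an assumption.
gaze = device.receive_gaze_datum()
print(gaze.timestamp_unix_seconds)
print(getattr(gaze, "pupil_diameter_left", "eye state not enabled?"))

device.close()
```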
Hey Pupil Labs, one of my recordings shows an error like this; it says it will get fixed. How long will it approximately take?
Hi @user-b6f43d! Can you try refreshing your browser tab and seeing if it's been processed? If it hasn't, please open a ticket in 🛟 troubleshooting
Hi Pupil Labs! We are using the Neon glasses indoors in our lab, for a car simulator that we built. We have two problems and one question:
- At the start of the recording (usually for 3 sec, but in extreme cases 10-15 sec), the whole screen is grey and only the red circle (gaze data) is shown. I will include a screenshot of this.
- The second problem is overheating. After a few recordings, the module in the middle gets really hot, and it affects the performance of the glasses.
- Question: Is there any way to connect the glasses directly to a Windows machine and manage the functions from there (recording, downloading videos directly)?
Thank you for your help!
Hi @user-dac41e! It's normal for the sensors to start at slightly different times, but 15 seconds seems unreasonable. First, ensure you're using the latest version of the Companion App. If not, please update it via the Google Play Store. Then, try clearing the app's storage and cache: disconnect Neon, long-press the Neon app icon, select 'App info', go to 'Storage & cache' and clear both. (This won't delete your recordings, but you'll need to log in again.) Reconnect Neon and make a test recording.
Secondly, could you clarify what you mean by heat affecting the performance of the glasses? Do you receive error messages or notice something else?
In terms of a direct tether to a computer, this isn't supported if you want to use Neon's native gaze pipeline, as this runs on the phone. May I ask why you want to do so?
Hello! I am working on achieving precise time sync for my devices. The documentation was very clear and the code worked beautifully (thank you for all the work put into this!). However, I have a question about the output. The first time I ran the code, the offset estimate was about 37 ms, and we are trying to achieve an offset of <20 ms max. So I restarted my computer and the mobile phone devices, and toggled the phones to update their time. After this, the new output was -950 ms (even worse!). Any ideas why this might be occurring and what I can do to achieve an offset of less than 20 ms?
Hi @user-87d763 , may I first ask what your use case is?
Our aim is to have two Neon eyeglasses recording at the exact same time. Having precise time sync is essential for ensuring that we can align the data from both pairs of glasses.
Do you need this synchronization during the experiment (real-time) or only during post-hoc analysis?
both would be ideal
Ok, so, you want to compare data from two different Neons that were recording at the same time? Will other devices be involved and do you need to trigger those other devices based on data from Neon?
@user-87d763 Ok, I see. And just one last question: do you mean that you want both Neons to collect a gaze datum, for example, at exactly or close to exactly the same timestamp?
yes, we want both Neons to collect the gaze data at the exact same timestamp
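For reference, here is a minimal sketch of estimating each Companion phone's clock offset with the simple real-time API, which can then be applied post hoc to align the two recordings; the estimate_time_offset helper and its return fields should be checked against the API docs:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Estimate the offset between this computer's clock and the Companion
# phone's clock. Repeat per device, then apply the offsets post hoc to
# bring both recordings onto a common timeline.
device = discover_one_device()
estimate = device.estimate_time_offset()
print(f"mean clock offset: {estimate.time_offset_ms.mean:.2f} ms")
device.close()
```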
Hey everyone, We're encountering an issue with the synchronization of the timestamps between the frames and IMU data. Specifically, we've noticed a delay of approximately 1:30 minutes between the two datasets in some sessions. After performing calibration movements during the session, we tried to synchronize the data by looking at the frame numbers marking the start and end of the movements, retrieving the corresponding timestamps, and then extracting the IMU data within that time frame, but found that the frame timestamps didn't align with the corresponding IMU data. After investigating, we found that the start times of the IMU and the frames were off by about 1:30 minutes. When we shifted the IMU data by this amount the data matched again.
However, this issue has only occurred in certain sessions—while the sessions before and after these had no such issues. Additionally, both sensors were shut down with the usual delay of 1-2 seconds at the end of each session, as expected.
In the plot below, the first panel shows the IMU data with the original timestamps, with the red lines marking the frame timestamps. Furthermore, approximately 1:30 minutes after the frame timestamps, we can also see the expected pattern in the IMU data. In the bottom panel, the IMU timestamps are shifted, and you can see the expected pattern in the IMU data between the frame timestamps.
Has anyone encountered something similar or have ideas on what might be causing this? Any help or suggestions would be greatly appreciated! Thanks in advance!
Hi @user-e11dd1 , to be sure I understand:
Hi, we use Neon Player to process data after recording. We record each trial separately, so when we go to use Neon Player, we have to drag the export file into the interface and then wait for processing to finish. With many trials, manually uploading, waiting, and then downloading results is time-consuming. Is there any way to load multiple recordings for processing at once, or to process them in succession automatically?
Hi @user-5a2e2e , I'd first like to let you know about a best practice for Neon: rather than starting and stopping a recording for each trial, it is generally better to make one continuous recording per session and mark trial boundaries with events. When following this method, you will have a smoother experience using Pupil Cloud and Neon Player to analyze your data.
If you have already collected all your data, then you could use the pl-rec-export tool in a command-line loop to "batch process" each of those recordings. If you prefer working in Python, there is also the pl-neon-recording library; you can then write a Python loop to "batch process" the recordings.
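As a rough sketch of that command-line loop, driven from Python for convenience, assuming pl-rec-export is installed and each native recording sits in its own folder under recordings/ (the layout is an assumption):

```python
import subprocess
from pathlib import Path

# Run pl-rec-export on every recording folder in turn.
for rec_dir in sorted(Path("recordings").iterdir()):
    if rec_dir.is_dir():
        print(f"Exporting {rec_dir.name} ...")
        subprocess.run(["pl-rec-export", str(rec_dir)], check=True)
```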
Hi, I'm clear on the devices recommended by Pupil. Our mobile device was stolen, and we have a Samsung A55 with Android 14, but it doesn't recognize the glasses. Any chance of using this device? OnePlus isn't sold in Chile, and the Moto 40 isn't on the market here. What do you recommend?
I'm adding some context in English for other members of this channel that might find this helpful:
Martin was asking whether other devices such as the Samsung A55 or the Moto Edge 50 Fusion or Pro could be used with Neon.
The only supported devices that have been thoroughly tested are the ones listed in our documentation:
Hi @user-fd453d - I'm very sorry your phone was stolen. Unfortunately, the only compatible devices are the ones recommended on our website (OnePlus 8T, OnePlus 10, and Moto Edge 40 Pro). If you can't find a compatible phone from a local distributor, we can offer you one; please get in touch via the email [email removed] and the sales team will provide more details.
Nadia, a question: what about the Motorola Edge 50 Fusion or Edge 50 Pro?
Unfortunately, no. It would have to be one of the devices listed in our documentation: https://docs.pupil-labs.com/neon/hardware/compatible-devices/
Hi Nadia, just to confirm: this is the compatible device. It's a Motorola Edge 40, not an Edge 40 Pro.
Hi @user-fd453d ! As I mentioned in my previous message, the only compatible devices are those we recommend on our website. Therefore, the Moto Edge 40 is not compatible, whereas the Moto Edge 40 Pro is.
Hi Pupil team, regarding the recent decision to limit Cloud storage: I was wondering, if I download the data from the Cloud and then remove it, would it be possible to upload the data back to the Cloud? It would be useful if I could re-upload the data after removing it, in case I want to do some post-processing using the Cloud system that I didn't anticipate beforehand. Thanks for your response in advance!
Hi @user-594678 , it is not possible to re-upload recordings to Pupil Cloud after you have deleted them.
Team I need some urgent help. My device is not getting recognised on the phone
Hi @user-37a2bd , we have continued communication in the ticket. Hoping to get you up and running again swiftly!
I have an open ticket. Can someone please join and help me with it
The device is getting recognised as a charging device instead of the glasses.
Hi everybody, I'm trying to transfer recordings in Pupil Cloud from one workspace to another workspace belonging to a different person. I want them to be displayed just like their own recordings in their workspace. I don't just want to invite the person to my workspace. Is there a way to make that possible? Kind regards, Nicola
@user-da6260 , I've moved our messages to the 👓 neon channel. We can continue our conversation here.
Hi @user-da6260 , it is not possible to transfer recordings between Workspaces. Are you experiencing errors with the Workspace invite process?
Yes, the email of the workspace I want the recordings to be transferred to doesn't have its own Google account; it's just a forwarding email address. If I click on the link, two things happen: 1. I can't open the link in the account I actually want to transfer to, because it always switches to the one connected to my personal Google account. 2. There's an error saying the link is either expired or has already been used, even though I just created the link.
Hello. I put a USB hub with an HDMI output on the smartphone, so I can display Neon on the TV. But I cannot display the app in landscape. How can I do this?
Hi @user-ae2b7f, may I ask you why you would like to stream the Neon companion app via HDMI? If you're looking to monitor what your participant is viewing during the recording, you can use the Neon Monitor app on a device connected to the same network as the Neon Companion device.
The solid colors of the walls aren't going to be helpful for RIM at all, so anything you can do there to add more detail / (visual) texture would help... but I'd bet that what you have there is enough if the floor is constantly in view. With things like this though, the most honest answer is often "try it and see"
I agree with @user-cdcab0. What you have there should be sufficient. But we'd always highly recommend trying it out, especially prior to any experiment.
Hi, is this an appropriate place to ask a question about AOIs and heat maps?
Hi @user-d00286, yes! Feel free to ask your question here!
Thank you @user-cdcab0 and @nmt . I think I might print some visual texture for the walls to be on the safe side. Is there a way of validating the accuracy of a RIM enrichment for the 'try it and see' method? Or is it just a case of generating a heat map of an enriched eye tracking recording and checking that it looks 'right'?
You can validate the mapping with the side-by-side view without generating a heatmap, but this is pretty much the same as what you're thinking.
When RIM processes each frame of a recording, it has to localize the camera and the reference image in order to map gaze. I believe that, in your use case specifically, it's only important for the camera to be localized, and this is indicated by the white dots (see the first video at the top of that linked page).
I have never updated my Motorola Edge 40 Pro that came with the bundle, although it keeps prompting me to update. I believe the instructions were not to update, as the Companion app is designed to run optimally on the phone's OS as it came in the package. I was wondering whether the newly announced Companion app requires an OS update, or whether we should keep the OS as it is?
Hi @user-c541c9 , newer versions of the Neon Companion App do not require you to update the Android version.
Aside from that, if your Moto Edge 40 Pro still has Android 13, then it is okay to update to Android 14. We now support that on Moto Edge 40 Pro.
Hey guys. Has anyone noticed any network interference when streaming from the Neon via the Unity package or through the web neon.local:8080 page? I have an image generator program running for our simulator, and I lose connection to the Neon when it's running. I checked with Wireshark and they are using completely different ports for communication, and there's a fair bit of traffic (60 pps for the image generator, plus whatever I set the streaming to for the Neon), but I feel like that isn't the issue. Any ideas?
Hi @user-d086cf , may I first ask some questions:
Hello! I have a very simple question; Is there a way to toggle off the fixation lines? I only want a video with the numbered Fixation points, no lines connecting them. Thanks!
Hi @user-162faa 👋 ! Currently, there isn't an option to disable just the "scanpath" line in the fixation visualisation in Pupil Cloud. The fixation visualisation is either fully enabled or disabled.
As you mentioned, you can suggest this feature in the 💡 features-requests channel. But, in the meantime, you can try:
A) Using Neon Player with the fixation plugin. Please note that the data in the Cloud is recomputed to ensure 200Hz, so you may find slight variations.
B) Alternatively, you could build your own visualisation, for example by leveraging the scanpath.json endpoint from the Pupil Cloud API; see more at https://discord.com/channels/285728493612957698/446977689690177536/1281276487827460157
If you add the feature request, could you share more about your use case? It would help us understand why you prefer no scanpath. Wouldn't having multiple fixation points without a connection cause confusion?
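As a rough sketch of option B: the endpoint URL pattern, header name, and response shape below are assumptions based on the linked message, not documented guarantees, so treat this as a starting point only.

```python
import requests

# Placeholders/assumptions; see the linked Discord message for the
# actual endpoint details.
API_TOKEN = "YOUR_CLOUD_API_TOKEN"
RECORDING_ID = "YOUR_RECORDING_ID"
url = f"https://api.cloud.pupil-labs.com/v2/recordings/{RECORDING_ID}/scanpath.json"

resp = requests.get(url, headers={"api-key": API_TOKEN})
resp.raise_for_status()
scanpath = resp.json()

# From here you can draw numbered fixation points without connecting lines.
print(type(scanpath))
```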
(apologies if this should go in feature requests plz let me know)
Hello @user-f43a29 and others at Pupil Labs. We have Vuzix Z100 smart glasses we're using for an AR study at the MIT Media Lab. We'd like to integrate the Bare Metal Neon (https://pupil-labs.com/products/neon/shop#bare-metal) with it - do you think that's possible?
Hi @user-d6285a , we have not tried integrating the Neon Module into the Vuzix Z100. The first thing to check is whether there is sufficient space to physically integrate the module. If you have a rough model of the Vuzix Z100, or want to physically prototype and do not yet have Neon at your lab, you can try prototyping with the files in this repo: https://github.com/pupil-labs/neon-geometry
Thank you! I did try the Reference Image Mapper last year, but I look at a lot of paintings in one visit, and I found that taking a scan video each time interfered too much with my process. The video is good enough to illustrate for my purposes at the moment, especially since the points are transformed further in my plugin for C4D 🙂 Thank you for the recommendation though!
Hi team, if we have audio recording enabled, is it normal for the audio not to be audible during playback on the phone itself?
Hi @user-37a2bd , if the sound is from nearby the wearer and not too quiet, then it is generally audible during playback, provided the speaker volume of the phone is appropriate. For instance, if I wear Neon and talk at my normal volume, then I can clearly hear myself during playback. I can also hear sounds in phone playback from several meters away, such as tapping a pen against a table.
How far away is your sound source and what is the sound source?
Is the microphone on the glasses or does it use the mic from the recording device?
Hey Pupil Labs, one of my recordings is showing like this. How long should I wait for it to get processed?
Hi @user-b6f43d - could you open a ticket in our 🛟 troubleshooting channel, sharing the ID of this recording? We can assist you there in a private chat 🙂
Hi! When I download gaze data from Neon Player, the CSV file contains timestamps and gaze coordinates. How do I get the corresponding frame numbers?
Hi, @user-d5a41b - if you have the World Video or Eye Video Exporter plugins enabled, they will each save a timestamps.csv file that contains the timestamp for each frame in their respective videos. You can cross-reference these timestamps with your gaze data to determine the frame index.
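A minimal sketch of that cross-referencing, assuming a world_timestamps.csv from the World Video Exporter and a gaze CSV that both carry a 'timestamp [ns]' column (column names are assumptions; check your export):

```python
import numpy as np
import pandas as pd

# Timestamps written by the World Video Exporter; column name assumed.
world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()
gaze = pd.read_csv("gaze.csv")
g_ts = gaze["timestamp [ns]"].to_numpy()

# For each gaze sample, pick the world frame with the nearest timestamp.
idx = np.clip(np.searchsorted(world_ts, g_ts), 1, len(world_ts) - 1)
left_closer = (g_ts - world_ts[idx - 1]) < (world_ts[idx] - g_ts)
gaze["frame index"] = np.where(left_closer, idx - 1, idx)
```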
hello, I'm having trouble with the Pupil Neon. I'm connecting it to the Companion device and I can see a white LED turn on on the glasses, but in the Companion app the device is not recognised and it still says "plug in and go". What does that mean?
Hi @user-3e88a5! Sorry to hear you're experiencing issues with your Neon. Could you please open a ticket in our 🛟 troubleshooting channel and we will assist you there in a private chat?
Have a question re: lighting. I've done a scene recording for manual mapping (it's too complex and dynamic for RIM), but unfortunately the lighting was very poor due to the nature of the environment, and hence it's difficult to make out specific features for Manual Mapper to map accurately.
Hi @user-c1bd23 👋! Unfortunately, we currently do not have any post-processing video editing tools in Cloud that would allow you to adjust exposure or contrast. If this is something you would like to see, you can create a feature request in the 💡 features-requests channel; in this case, feel free to upvote this existing request and describe the specific functionalities you'd like to see and how they'd enhance your workflow. Meanwhile, you can set the exposure at recording time; check out how here.
Also, could you share a bit more about where you're encountering issues with manual mapping? Is it that you can't see where the fixation lands? Understanding these challenges will help us support you better and may also strengthen the feature request.
Regarding the internal error message that you are encountering, would you mind creating a ticket at 🛟 troubleshooting and sharing the enrichment and recording ID?
Is there any way to boost the brightness within the mapper video?
I'm also getting an "internal server error" + "fixation not found" when trying to perform manual mapping on a couple of videos, they have approx 12000 and 9000 fixations. Another video with about 2000 fixations isn't giving this error.
One quick question. I am attempting to connect to the IP address and stream the view… but unfortunately the streaming does not work, regardless of which browser or computer is used. Are there any other specifics to try? Or ways to reset the IP address or URL?
Hi @user-3bcaa4 , which eyetracker do you have? It sounds like you might have Pupil Invisible or Neon?
Neon
Hi @user-3bcaa4 I've moved our discussion to the 👓 neon channel. - Are your computer and Neon connected to the same WiFi network? - Is it a university or work WiFi connection?
Yes, they are connected to the same connection, and it's a work connection. Usually with a mobile hotspot, so that could also be impacting the streaming, I'm sure. It did work briefly over a hotspot before, though.
@user-3bcaa4 thanks for the clarification.
Let us know how it works out.
Good day, a quick question: is it possible to get the eye tracking videos (as in the pupil/eye videos) off https://cloud.pupil-labs.com/?
Hi @user-2dfdbc 👋! Are you referring to eye camera videos? Currently, Pupil Cloud doesn’t display eye images. If this is something you’d like, please consider upvoting this request: https://discord.com/channels/285728493612957698/1247514112565575732 and sharing any details on how it would benefit your workflow.
In the meantime, you can download the Native Recording Format by right-clicking on a recording. This format includes eye camera videos, which you can view alongside the world camera footage in Neon Player. For more on this, check out the documentation here
Hi, we have been using Neon to record for a few months now with no issues. The last couple of times we tried, though, the glasses were not recognized by the Companion device. The light on the front of the glasses turns on while plugged in, but the app doesn't show that it's connected, and we can't record. We are using the provided USB-C cable. Is there something else we can try or check?
Hi @user-5a2e2e 👋. Please open a ticket in 🛟 troubleshooting and someone will help run you through some debugging steps.
Hi all. I'm having issues logging in to Pupil Cloud
Every time I try to sign in this morning using either the Google login or the direct login, it seems to initially work but I then get redirected to the login screen
I've cleared cache, tried a different browser, and restarted my PC. Not the first time this has happened either, seems to happen every few days
I've just tried a different computer as well, same issue
Hi @user-c1bd23 - thanks for reaching out. We're currently working on fixing this issue! Thanks for your understanding! 🙏🏽
Thanks Nadia. I just managed to log in but my recordings aren't loading. All was working fine yesterday
Hi there, I'm encountering an issue where some recordings that I uploaded to Pupil Cloud are still showing as processing (purple loading circle) after a whole day; they usually process very quickly on Pupil Cloud. I was wondering if there's any way to fix this issue?
Hi @user-c1bd23 and @user-d2d759 👋. Thanks for your report. We're aware of the issue and are working to resolve it. We'll post here once everything is sorted!
Hi Miguel, my issues are sorted and I'm now able to map my recordings. Thank you to your team!
Hi, can the yaw measurement from the IMU be interpreted as cardinal directions? For example, does 0 degrees correspond to north, and so on? I was a bit confused because I initially assumed this was the case, but I’ve seen some discussions on Discord referencing quaternions.
The short answer is yes. But since you asked about quaternions, I will provide a more detailed overview.
The IMU does indeed output quaternions. These describe how the IMU is rotated relative to the world, i.e. magnetic north and gravity.
We convert quaternions to Euler angles (pitch, roll, and yaw) as they are somewhat easier to understand. A yaw angle of 0° indicates alignment with magnetic north.
Be sure to read this section of the docs, as it contains some important considerations you should know about when working with Neon's IMU, such as calibrating the IMU to obtain accurate yaw readings.
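For anyone working with the quaternions directly, here is a minimal sketch of the conversion with SciPy, assuming imu.csv columns named 'quaternion w/x/y/z'; the Euler axis order is an assumption, so compare the output against the pitch/roll/yaw columns already in the export:

```python
import pandas as pd
from scipy.spatial.transform import Rotation

imu = pd.read_csv("imu.csv")

# SciPy expects scalar-last quaternions: [x, y, z, w].
quats = imu[["quaternion x", "quaternion y", "quaternion z", "quaternion w"]]
rot = Rotation.from_quat(quats.to_numpy())

# Euler axis order is an assumption; sanity-check against the pitch/yaw/roll
# columns that the export already provides.
euler_deg = rot.as_euler("ZXY", degrees=True)
print(euler_deg[:5])
```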
Hi PL team. Same problem again as yesterday. Everything was working fine until about 20 minutes ago, then the fixations stopped appearing in Manual Mapper and I got logged out. Now trying to log back in I get redirected just like yesterday. I managed to log back in successfully once and do some more mapping, but it's logged me out again and I'm unable to sign in
Was the underlying cause of this found? It keeps happening while I'm working on mapping, and I'm currently trying to meet a project deadline.
Hi @user-c1bd23! Thanks for contacting us. Let me liaise with the Cloud team and I will report back to you