@user-15edb3 , I'm replying to your message (https://discord.com/channels/285728493612957698/285728493612957698/1191684449432244305) here. Can you please elaborate on your research question and planned analysis?
Thanks, hi. So I have these wearable glasses and earlier I was collecting data by uploading it to the cloud and downloading it. There the gaze positions were not normalised and were in absolute values, so I get gaze values like (700,400), (600,500), etc. But recently I found that with offline data collection the frame number is also recorded. However, the gaze data, i.e. norm_x and norm_y, are normalised values, while my algorithm needs absolute values. Basically this new data is just scaled and the reference is changed. But where do I find that scaling factor?
Thanks for clarifying @user-15edb3 - Neon Player exports the gaze data in normalised coordinates. However, you can easily convert to pixels by multiplying the values by the image frame resolution (see also here). We will be adding additional information to our Neon Player docs to cover all relevant details. Stay tuned for updates!
That seems to work partially, but I'm still not sure it accurately matches the same values it was giving previously unscaled, or maybe I'm doing something wrong. I multiplied norm_x by 1600, multiplied norm_y by 1200 and subtracted from 1200 (since the reference is top-left for the unscaled values and bottom-left for the scaled ones). I feel 1600 is not the exact scalar value to multiply by. The discrepancy seemed negligible, but my AOI detection algorithm doesn't seem to work now. Kindly help.
Your approach to getting the pixel coordinates is correct. Could you elaborate a bit on your AOI detection algorithm?
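For reference, here is a minimal sketch of that conversion in Python, assuming the 1600x1200 scene camera resolution and a bottom-left origin for the normalised coordinates; the column names norm_pos_x/norm_pos_y are placeholders, so check the header of your exported CSV:

import pandas as pd

FRAME_W, FRAME_H = 1600, 1200  # Neon scene camera resolution

gaze = pd.read_csv("gaze_positions.csv")  # Neon Player export, adjust the path

# scale the normalised values up to pixels
gaze["gaze_x_px"] = gaze["norm_pos_x"] * FRAME_W
# flip the y axis: normalised origin is bottom-left, pixel origin is top-left
gaze["gaze_y_px"] = (1.0 - gaze["norm_pos_y"]) * FRAME_H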
That worked. I had some other issue in my code. Thanks for your response.
Hello, I am working on pupillometry with the Neon device, and in the raw data I do not see any distinction made between the right and left eye. Similarly, in the online documentation there is no clarification. Could you tell me if this is a randomly chosen eye or an average of the diameters of both pupils? Thank you!
Hi @user-f77049. You're correct that there is no distinction made between the left and right eye. Please see this message for reference: https://discord.com/channels/285728493612957698/1047111711230009405/1180106433086365786
Thank you for the response. However, the diameters of the pupils differ between right and left by as much as 1 mm, which is huge when the variations related to cognition do not exceed 0.5 or 0.6 mm... The relative variations in pupil diameter are the same, though, it's true. It would be interesting in the future to actually have the data for each eye separately. Have a good day
Hello,
I am trying to connect our Neon to LSL. I have installed the lsl-relay but it cannot find any devices. I was able to get the stream in my browser (even though the URL did not work, I used the IP), so I am on the same network. But I am also not the biggest Python pro, so I might have made a mistake there. I installed the lsl-relay package in PyCharm and then used the pupil_invisible_lsl_relay.exe.
Alright I kind of fixed it by connecting directly using the IP.
Yeah, that's typically the solution. On some network configurations, the mDNS broadcast/discovery won't work, and specifying the IP address manually is the only way around it.
@user-cdcab0 Thank you. Now, in a test run the gaze tracking worked. Is there a way to get the pupil size data directly or do I have to first upload the video to the cloud and then download the pupil size?
That data is currently only available via cloud
Are there plans to get it in real-time or at least locally on the phone?
That's the plan, but there is no definitive timeline for that availability just yet
Hello, when I am trying to switch workspace or add a wearer, it's showing like this. What should I do?
Hi @user-b6f43d! Could you please try logging out of the app and back in?
Thank you, it works now
Hi again @user-b327ef! Regarding your question here: https://discord.com/channels/285728493612957698/285728493612957698/1194271538153783337, Neon's pupil-size measurements are provided as a single value, reported in millimetres, quantifying the diameter of the 3D opening in the centre of the iris. Currently, our approach assumes that the left and right pupil diameters are the same. According to physiological data, this is a good approximation for about 90% of wearers.
Note, however, that we do not assume that pupil images necessarily appear the same in the left and right eye images, as apparent 2D pupil size strongly depends on gaze angle. We are working on an approach that will robustly provide independent pupil sizes in the future. Unfortunately, we can not offer a concrete timeline for completion.
Hi there Pupil Labs, we received our 2 Neon Ready Set Go trackers just this week and have been testing them. However, one of the two trackers is not working properly. It will not always connect to the Companion properly; it stays stuck with the white LED on. At other times it will connect (and the app keeps asking for the permissions each time), but then recordings will just stop randomly. I'll send a photo of the state when it's not connecting to the phone.
Hi @user-23177e! I'm sorry to hear that you are experiencing issues with one of your glasses. Could you please send us an email to info@pupil-labs.com or send me a DM with your email? This way, I can provide you with some debugging steps to identify the root cause of this issue. Thanks!
We've tested both companions with both neon devices, and found that the problem is not related to the companion phones. These both work fine with the other neon tracker.
What could the issue of the neon tracker be? should we send it off for a replacement perhaps? thanks for your support!
My colleague also just informed me that someone from our institute (Max Planck Institute for Psycholinguistics) will be travelling to Berlin tomorrow, so perhaps the tracker can be dropped off at Pupil Labs for investigation/repair?
Hello, so I have been using Neon Player to export the data offline. It takes a long time for my large recordings. However, I am only interested in some specific CSV files rather than the entire export. Is there any chance to customise and speed up my process? Precisely, I need the gaze positions CSV file, which I get after drag-and-drop and then clicking the download symbol, which creates a folder like neonplayer/export/000/gaze_position.
Hi @user-15edb3 ! If you only need the CSV files, you may want to check this CLI utility as you will be able to do batch exporting of all CSV files
Hi all. Trying to Define AOIs and calculate gaze metrics using the guide on Alpha Lab
Hi @user-c1bd23! Was it because you tried to run this cell? That cell only defines the function; it is executed when called like this: plot_color_patches(reference_image, aois, aoi_ids, plt.gca())
Are you running the code as a notebook? If you are not familiar with notebooks, you can try this vanilla Python script. Simply save it as defineAOIs.py and run it like python3 defineAOIs.py [args].
When running #normalize patch values, I get the error "NameError: name 'aoi_values' is not defined"
This is the guide I'm using: https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/#define-aois-and-calculate-gaze-metrics
Hi, I noticed that the video from the world camera has a lot of motion blur (well-illuminated interior). How can I reduce such blur? Is it possible to set the exposure manually?
Hi @user-a0cf2a ! In the settings you can enable manual exposure, although we do not recommend fiddling with it, as you may experience VFR (variable frame rate)
I suspect that's where I'm going wrong (I'm brand new to Python!)
Learning as I go along
Hello there! Does someone have a PDF or any other document describing how the detection and computation of blinks and pupil size work with the Pupil Neon? I looked on the homepage and couldn't find any. Wish you guys a great weekend!
Have you looked at the Alpha pages? https://docs.pupil-labs.com/alpha-lab/blink-detection/
Hi @user-9cb1df! Both blinks and pupil size are detected through neural networks. How blinks are computed is described here, which includes a link to our whitepaper, and you can even run the blink detector yourself, following the tutorial Syed pointed you to.
For pupil diameter we use a different network, for which we have not yet disclosed much information. What we can say is that it uses both eye images and physiologically plausible limits, and that it takes into account the cornea and gaze angle to provide you with physiological values. Here you can find some more info.
@user-c1bd23 We should probably have added typing annotations.
If you are new to Python, my recommendation is that you use the vanilla script I mentioned before. Simply save the script, and if you have the dependencies installed in your environment, it should be pretty straightforward to run it. https://discord.com/channels/285728493612957698/1047111711230009405/1195008075036381276
It is also somewhat commented.
That said, in the tutorial (which is due to be updated soon), the aoi_ids are used as aoi_values to show the selected AOIs first:
aoi_ids = pd.Series(np.arange(len(aois)))
plot_color_patches(reference_image, aois, aoi_ids, plt.gca())
I've run the vanilla script and obtained a file called aoi_ids
Where would this be fed into in order to generate an overlay of AOIs on the reference image?
When running the snippet you should also have been shown the plots; was that not the case?
No. I only got a prompt to select the AOIs on the reference image
Once complete, it exited python and generated the aoi_ids file
Hi Pupil Labs team, Despite meticulous calibration efforts, we've noticed a systematic shift that requires us to individually adjust the areas of interest for each subject (The image shows the plotted gaze data using an R script). We're reaching out to see if there are any algorithms or tips available to retrospectively correct these shifts, eliminating the need for manual adjustments based on visual judgment. Any guidance you can provide on this matter would be immensely appreciated. Thank you in advance!
Hey, the Neon was never sampled at 250 Hz, right? Always 200 Hz after uploading it to the cloud?
Hi @user-c1e127! Neon also samples at 200 Hz in real time, and yes, it does not sample at 250 Hz.
Thanks much
Offset correction
Hello Pupil Labs team, I recently made three recordings that resulted in three different errors:
Recording 1: December 20th, 2023. The recording was supposed to last about 40 minutes. Afterwards it turned out that the recording had stopped after about 15 minutes. The following error message was displayed: "Calibration not synced! Tap for more instructions." However, I couldn't tap it because an FPGA update needed to be performed. The update then hung because the connection to the eye tracker was interrupted. After several attempts, I was finally able to carry out the update. The aforementioned error message has not occurred since then, but the recording is no longer usable for me.
Recording 2: January 10th, 2024 The same companion device with different neon glasses was used for this recording. This time, no error message was displayed, and the recording duration is also stated as 43 minutes on the companion device. However, when I start the recording, the video is only 8.5 minutes long. In the Pupil Cloud, on the other hand, the recording is 43 minutes long, but from minute 8.5 only a gray screen can be seen (without scene camera, without sound, without fixations).
Recording 3: Here the original neon glasses were used again with the same companion device. The recording lasted approx. 1 hour and 29 minutes. The duration is also indicated as such on the companion device. However, when I start the recording, I can see that the video only lasts 4 minutes and a few seconds. In addition, the frame rate is greatly reduced. The video can be found in the Pupil Cloud with a duration of 1 hour and 29 minutes. However, the frame rate is also greatly reduced here and only the first 4 minutes have sound. This is followed by a gray screen for about 14 seconds, after which the recording of the scene camera can be seen again, but the sound is missing here. Again, there was no error message after the recording.
I now have a total of three recordings that can no longer be used. This is very annoying because these recordings should be used for a thesis. Is there a way to avoid these mistakes in the future or even to restore the recordings?
Thank you in advance Jonas
Hi @user-5f1b97. Sorry to hear you've experienced an issue with these recordings!
For the first recording, it seems that the system required an update, which has now been implemented to prevent future issues. To give some context, when our app is updated, the system may sometimes prompt the user to install firmware (FW and FPGA) upgrades on the connected module. These upgrades often contain stability improvements.
That being said, it appears you've already uploaded all three recordings to Pupil Cloud, which will allow us to investigate using the metadata to identify the problem, with the aim of potentially recovering them.
Could you please contact us at info@pupil-labs.com and send the IDs of the three recordings? You can find the IDs by right-clicking on the recordings in Cloud and selecting 'View recording information'
Hello pupil team. I am working on ROS2 integration using the Pupil Neon, but I'm experiencing significant artefacts when connecting to it. The image looks very broken after moving and takes 1-2 seconds to go back to normal. Is this purely caused by network quality? Will a wired connection solve this issue?
Will switching to the async API improve performance?
After some experimentation I found it probably was my current desktop. Switching to my laptop with a stronger Wi-Fi card solved the issue
Hi all, has the working range of the Neon been tested? If yes, what is it? Is it equal vertically and horizontally? Thank you!
Hi @user-97997c ! Do you mean whether accuracy has been measured across the field of view? If so, have you seen our whitepaper measuring Neon's accuracy?
Hi Miguel, thanks for the tip, the white paper already helps a lot. I was asking what is the maximum range that the Neon can measure. In the white paper you show data for ±30 deg, both horizontally and vertically. Is that the maximum you feel confident to provide? Thank you!
Hi - can someone help me rescue a corrupt file?
Hi @user-88aaf8 - can you contact info@pupil-labs.com in this regard? Please share all relevant details, e.g., recording ID or the recording exported from the Companion Device (see our guide on how to transfer recordings from the phone to your computer).
Hi, we're trying to record 30-minute sessions of data across 4 devices and are finding that the companion app/device will crash about 16% of the time. Can you provide any troubleshooting steps? We're using LSL Relay. Could that be a factor? Relatedly, is there a desktop app, like Pupil Capture, that is compatible with Neon?
Hi, @user-ddf0f7 - sorry to hear about the trouble. Can you tell us more about the crashes? Are all of your companion device apps up to date? Are you recording data (on the device) while simultaneously streaming to LSL? That could be somewhat "overloading" the companion device, especially if it overheats.
Regarding a desktop app, the source version of Pupil Capture does have some support for Neon on Linux and Mac. This bypasses the companion app completely and works much like Pupil Core, but has not been extensively tested or used.
Hello, I would like to know whether the data exported using Neon Player matches the data downloaded from Cloud? Thanks
Hi @user-4bc389! There is a documentation and app update coming soon to better reflect this. As of today, the timestamps are converted, and the surface mapper output does not match the marker mapper output. In the upcoming release it will match the Cloud format completely, and the only differences you would see are that recordings extracted from the phone do not contain eye state or pupillometry yet, and that the sampling rate matches the one chosen at the time of recording.
Thanks
Hi all
I'm new to Neon. I have recorded a video and I want to view it on the desktop. I installed Neon Player v4.0.0 but I received this message: "Session setting are from a different version of this app. Will not use those."
Hi @user-b32205! Could you perhaps expand on that a little so that we can get a better idea of the situation? Are you able to load and play the recording?
can you help me?
Hi - I have a logistics question / request for advice: when I record an experiment with different participants, would you have one wearer per participant, each with either one or multiple recordings (depending on whether the same participant does more than one trial)? Is there another/better way to organise my recordings, or is that what you would do?
Hi @user-7413e1. One wearer per participant is the recommended way to structure your recordings. You can create an arbitrary number of wearers and use whatever naming scheme is appropriate for your study!
Hi - question on fixation: I have tried to compare two different trials using events. In one, I moved my eyes restlessly, very quickly all around the room without stopping on anything in particular. In the other trial, I fixated one point for a few seconds. I then plotted the fixation durations and it shows a much higher value for the former trial than for the latter. I was expecting the exact opposite. I used this tutorial: https://docs.pupil-labs.com/neon/real-time-api/track-your-experiment-progress-using-events/
Can you explain why? Also, in the code on the tutorial there seems to be some confusion between fixation numbers (which I would expect to be higher in the first trial) and fixation duration mean (which I would expect to be higher in the second trial). Thanks for the help!
Are you sure you didn't plot the number of fixations as opposed to the duration of fixations?
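If it helps, here is a quick sketch of the difference, assuming a Cloud-style fixations.csv with a "duration [ms]" column (adjust the column name to whatever your export uses):

import pandas as pd

fixations = pd.read_csv("fixations.csv")

n_fixations = len(fixations)                       # count: high when the eyes move restlessly
mean_duration = fixations["duration [ms]"].mean()  # mean duration: high when holding a steady fixation

print(f"{n_fixations} fixations, mean duration {mean_duration:.1f} ms")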
Hi, one subject's eye video is frozen after 15 seconds, but the gaze point and video are good. Do you have any idea about how to fix the eye video?
Hi @user-159183! Sorry you're experiencing issues with your recording. Is this the recording you shared with us yesterday or a different one? If it's a different one, could you please share with us the recording ID? You can either contact us at info@pupil-labs.com or DM the ID of the affected recording.
Hi @user-b6f43d! The processing time for your enrichment can vary depending on several factors. The duration of your recordings and the current workload on our servers are two major variables.
We highly recommend slicing the recording into pieces and applying the enrichments only to the periods of interest, using Events.
See also this relevant message for more details: https://discord.com/channels/285728493612957698/1047111711230009405/1186329479753256991
After the analysis I am not getting anything in the heatmap. What might be the problem?
@user-b6f43d can you please clarify if the enrichment processing has completed? Is there any error? Can you visualize the enrichment like in this tutorial?
It's not showing anything
Hi
I have a question regarding gaze_visualization when downloaded from Cloud. When I view gaze data on the Neon device, the gaze seems much noisier than when I look at the gaze_visualization video from the cloud. Actually, when I manually plot gaze data over the Pupil Labs video, the raw data seems noisier than the red circle in the gaze_visualization video.
Hope I make sense. Do you apply any filter to the gaze data in the video from Pupil Labs? If this is the case, it would be great to know which filter you are applying.
Thanks!
Hi @user-b55ba6! Do you mind sharing more details on how you render the video? The raw data that you get in your gaze.csv is the same data used for visualising the red circle overlaid on the scene video.
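In case it helps with the comparison, here is a rough sketch of how one could overlay the raw gaze.csv on the scene video with OpenCV. It assumes the Cloud Timeseries + Scene Video download, with "timestamp [ns]", "gaze x [px]" and "gaze y [px]" columns in gaze.csv, a world_timestamps.csv for the video frames and a scene video file; the file and column names are assumptions, so check your export:

import cv2
import pandas as pd

gaze = pd.read_csv("gaze.csv")
frames = pd.read_csv("world_timestamps.csv")
cap = cv2.VideoCapture("scene_video.mp4")  # adjust to your scene video file name
writer = None

for frame_ts in frames["timestamp [ns]"]:
    ok, img = cap.read()
    if not ok:
        break
    # pick the gaze sample closest in time to this video frame
    idx = (gaze["timestamp [ns]"] - frame_ts).abs().idxmin()
    x = int(gaze.loc[idx, "gaze x [px]"])
    y = int(gaze.loc[idx, "gaze y [px]"])
    cv2.circle(img, (x, y), 20, (0, 0, 255), 4)  # red circle (BGR), no filtering applied
    if writer is None:
        h, w = img.shape[:2]
        writer = cv2.VideoWriter("overlay.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
    writer.write(img)

cap.release()
if writer is not None:
    writer.release()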
Hello, I have been trying to use the newly developed Neon Player software instead of Pupil Cloud because of online data processing restrictions at my university. However, the software doesn't seem to have any enrichments/plugins for detecting faces in the visual environment (i.e., an equivalent of the Face Mapper in Pupil Cloud). Is this in fact the case? And, if so, are there any future plans to add a face mapper enrichment to Neon Player as well? Thank you!
Hey, @user-7d60a5 - Neon Player isn't designed to be a complete 1:1 offline solution to Cloud, and there aren't any specific plans to incorporate a Face Mapper plugin into Neon Player at this time.
Neon Player is extensible though, so it is feasible that one could write their own plugin for this
@user-068eb9, responding to your message (https://discord.com/channels/285728493612957698/1195352014486523925/1197911358050664570) here. Are you asking about correcting the gaze signal or correcting mapping after running an enrichment, like Marker Mapper or Reference Image Mapper?
Correcting the gaze signal. Thanks.
Hello, I have suddenly been facing some log-in issues with Pupil Cloud. Is it only me, or is there any maintenance going on? It doesn't say incorrect credentials or anything, but it doesn't let me in either.
Hi @user-15edb3! Could you please clarify whether you get an error or any message while trying to log in?
Hello - I am trying to use Marker Mapper. This is what I see on my screen: it seems it doesn't allow me to add markers to the enrichment before running it. Am I doing something wrong? [Also, it would be super useful to have more details in the tutorials... many things like these are kind of taken for granted]
Hi @user-7413e1! Could you try navigating a bit ahead in the recording and see if you can click on 'define surface'?
Hi, what are the width and height of gaze data in Pupil Neon? Are they 1600 width and 1200 height? In Pupil Invisible, it is 1088 width and 1080 height.
Hi @user-b10192 ! That is the resolution of the scene camera, yes!
Hello, what do the thick red circles and the blue ones in the results of the scene camera mean? What is the difference?
thanks for helping out in advance
Hi Pupil Labs, I'm looking for a way to import events into my recordings on Pupil Cloud using an external dataframe (CSV/docx or any other format). Is this possible right now? It'll be useful, for example, when we have multiple recordings which share the same events and the event timestamps were collected on a separate device as UNIX timestamps (on the same network clock).
In addition to adding them using the Cloud interface, it's also possible to create events via the Cloud API. You'll want to look at the events POST request under 'recordings'
Hi @user-ccf2f6! Apologies for the delay! Importing events into Pupil Cloud is possible. It requires working with our Cloud API, but note that it is not well documented at this time. You want the "CreateRecordingEvent" function. Here you will find an example of working with the API via Python. You will need to add a function for CreateRecordingEvent. Please let us know if you need assistance with that.
For future reference, if you need more flexibility, then take a look at our Real-Time API. Adding events during an experiment via the Real-Time API will automatically add them to the timeline and they will be uploaded to the Cloud for you, ready to go.
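For reference, a rough sketch of what such a request could look like from Python. The endpoint path, auth header and payload fields below are assumptions based on the description above, not the documented API, so please verify them against the Cloud API reference:

import requests

API_TOKEN = "YOUR_CLOUD_API_TOKEN"      # created in your Pupil Cloud account settings
WORKSPACE_ID = "YOUR_WORKSPACE_ID"
RECORDING_ID = "YOUR_RECORDING_ID"

# hypothetical endpoint and payload; check the API docs for the exact path and fields
url = f"https://api.cloud.pupil-labs.com/v2/workspaces/{WORKSPACE_ID}/recordings/{RECORDING_ID}/events"
payload = {"name": "stimulus_onset", "offset_s": 12.345}  # event name + offset into the recording

response = requests.post(url, json=payload, headers={"api-key": API_TOKEN})
response.raise_for_status()
print(response.json())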
Hi, also, is it possible to import completely deleted but previously exported projects back into the workspace?
I have the problem that the face detector didn't recognise every occurrence of a face, since the stimuli (in a movie) were too bright or blinded the glasses. How can I add this info afterwards?
thanks a lot in advance
Hello, I want to integrate Neon with my glasses, but I do not know where I can find the connector board PCB file?
Hi @user-a2d00f ! That would be the Bare Metal, if you already have Neon and only need the PCB, you can get it here.
Thanks, I have a Neon, but I want to design the connector board PCB myself for integration into our own glasses system. So I want to know whether the electrical interface of the PCB is open source or not. If not, is buying the Bare Metal the only way to integrate it into our system?
Hi @user-e3da49! The red circles are gaze, and the blue circles are fixations. You can read more about them in the respective links!
When you say that you exported the projects, do you mean that you downloaded the data from Cloud? To be clear, data cannot be re-imported to Cloud.
Since faces were not always mapped in your recording, do you plan to label those faces in your recordings by hand? You can at least add events to the timeline that indicate when those unlabelled faces are present in the recording. Or have you found a different face mapper tool that you would like to try? If so, note that other face mapping tools can only be run offline, not in Cloud.
Thank you, can I run a face mapper in Neon Player and export a CSV with a column "fixation on face" (true/false)?
Thanks a lot!
Furthermore, I have the urgent problem that a started recording stopped unexpectedly and the following message appeared: "We have detected an error during recording!" Now I am missing a recording of 2.5 hours and don't know how to avoid such an incident in the future. Is it possible for you guys to fish the recording out of somewhere somehow? Please help..
Hello! Is anyone else having trouble downloading the 'timeseries csv and scene video' recording data? Every time I try to download and open the zip, I get this error.
Can I get a tutorial video in which I can learn about the Face Mapper enrichment?
Hi @user-b6f43d! We do not have a video tutorial, but we do have documentation for the Face Mapper. What part of the Face Mapper Enrichment is causing you trouble?
You can also submit feature requests here: https://feedback.pupil-labs.com/
But I do not understand the meaning on your website of "we will open source ... and electrical interface. You can make your own interface PCB". If we do not have the electrical schematic, how do we develop our own interface PCB?
Hey @user-a2d00f. I have good news - we'll be able to share the schematics that document the interface. It might take a bit of time to prepare the final version for sharing, however. I'll let you know here when it's ready!
Hi Pupil Labs, I tried your updated Areas of Interest (AOIs) and it's pretty cool!! I have generated the image in the video (https://www.youtube.com/watch?v=RUmcU8zxMF0), but in v7.0 it says that it generates CSV files: "AOI Metrics - After you have drawn AOIs you will automatically get CSV files of standard metrics of fixations on AOIs: total fixation duration, average fixation duration, time to first fixation, and reach." But I didn't find the standard metrics of fixations in the download. Comparing with the old version, I only see a new variable "fixation detected in reference image".
Am I overlooking something here? Thank you!
Hi @user-b44746! Apologies for that, this has been fixed. The metrics should already be downloaded with the enrichment.
Hi Pupil Labs Team, I am currently starting to set up the environment for my experiment (website user experience). I tried placing the monitor behind, in front of, and beside the window, but I found a bit of a problem with the focus of the camera: it showed only a white screen and could not detect the website's components. I captured some examples under this message; can you give me some information about my issue? Is the problem because of brightness, colour contrast or something else? And do you have any tips about that? The last capture, on my laptop screen, did work though.
Hi @user-602698! Please find below some points that might be helpful in addressing your questions:
Please note that adjusting the brightness of your screen might also help!
Hello,
For my thesis, I am conducting an experiment/research using Pupil Labs Neon. It involves an attention study comparing the wearing of face masks. I will measure the difference in gaze behavior during real-life interaction with a person and interaction with a person on a projector. The research is framed as a study related to eating (to encourage participants to gaze more freely at others). Various factors will differ, such as the type of face mask, the gender of the research fellow, etc.
I need advice on data processing. The data crucial for me is "time to first fixation" and "fixation duration". Is there a way to automatically obtain this data with Neon without manual counting? I mean, through code or another method?
Thank you in advance! Hope to hear from you soon. If there are any questions or uncertainties, feel free to ask them.
Hi @user-231fb9! Average fixation duration and time to first fixation can be computed for AOIs in Cloud thanks to the update from this week. See more here.
To do so, you will need to map your gaze into an image using the reference image mapper or the marker mapper.
I believe this is not exactly what you are looking for; rather, you want to detect the first fixation and fixation duration for a specific period of time regardless of gaze location on a surface, is that right?
Well, in that case you will need to do it yourself (in any software or programming language of your choice). Simply add events relevant to you, then look up their timestamps in events.csv, and use those timestamps and the recording ID to filter fixations.csv. As a result, you will have only the rows with fixations corresponding to that period.
To compute the average fixation duration you can take the mean of the duration column. To compute the time to first fixation, you can take the first row as the first fixation and subtract the recording start timestamp (this info is in the info.json of that recording, or you can take the first timestamp in the gaze.csv for that recording).
If you need to do it aggregated for several wearers my recommendation would be that you use Python, Matlab, R or similar.
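A minimal pandas sketch of that workflow, assuming Cloud-style events.csv and fixations.csv exports (columns such as "recording id", "timestamp [ns]", "start timestamp [ns]" and "duration [ms]"; check your export's headers) and hypothetical event names "trial_start"/"trial_end":

import pandas as pd

events = pd.read_csv("events.csv")
fixations = pd.read_csv("fixations.csv")

rec_id = "YOUR_RECORDING_ID"
ev = events[events["recording id"] == rec_id]

# timestamps of the events delimiting the period of interest
t_start = ev.loc[ev["name"] == "trial_start", "timestamp [ns]"].iloc[0]
t_end = ev.loc[ev["name"] == "trial_end", "timestamp [ns]"].iloc[0]

# keep only the fixations from this recording that start within the period
fix = fixations[
    (fixations["recording id"] == rec_id)
    & (fixations["start timestamp [ns]"] >= t_start)
    & (fixations["start timestamp [ns]"] <= t_end)
]

avg_duration_ms = fix["duration [ms]"].mean()
# time to first fixation relative to the event marking the start of the period
# (alternatively, subtract the recording start timestamp from info.json, as described above)
ttff_s = (fix["start timestamp [ns]"].min() - t_start) / 1e9

print(f"Average fixation duration: {avg_duration_ms:.1f} ms")
print(f"Time to first fixation: {ttff_s:.3f} s")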
For the marker mapper enrichment in Pupil Cloud, is there a reason that recording id is not included as a column in the surface_positions CSV file? I see that it is included in the gaze CSV file. Given that an enrichment automatically runs on all videos in that project, it makes it hard to parse out which segments of data in the surface_positions CSV file come from which video.
Also, I am noticing when using AprilTags that when the user moves their head (e.g., tilts to the left or right), for about 5-15 frames, all or some of the AprilTags show as not visible when they are still in view. When the user is constantly moving their head, these gaps in data become very frequent and the data quickly becomes noisy. Is this expected behavior or is there something specific about our setup that might be causing this? When the user's head is stationary, there is almost no data loss on detected marker IDs.
Hi @user-44c93c! This has been fixed; the surface positions export should now include the recording ID. Thanks for reporting it.
Regarding the detected markers, it is hard to tell without seeing the video, but could it be due to motion blur?
Hi Neon team, is it possible to set the parameters (e.g., wearer_id, wearer_ied, etc.) of the wearer profile using the Real-Time API instead of using the Companion app before the recording?
Hi @user-613324. That's not currently possible. You can see an overview of the current functionality at this page
Hi Neon team,
Having issues streaming the data to a Windows 10 laptop. Any tips?
Are you using the Monitor app or real-time API?
Hello team, I'm Rohan. I was recently assigned a task to configure and set up the Pupil Labs plugin with the new Neon configuration. I was closely following this Pupil Labs Neon installation page (https://www.psychopy.org/api//iohub/device/eyetracker_interface/PupilLabs_Neon_Implementation_Notes.html#pupil-labs-neon) and tried to install the Pupil Labs plugin through the plugin manager. But it fails repeatedly and shows the same error (refer to the image). I tried re-installing PsychoPy entirely and it still gives the same error. I'm using a MacBook Pro laptop with the latest M1 Pro silicon chip. Any suggestions would be helpful. Thank you
Hi @user-e2db0a. We've replicated the error you're getting. We will investigate and let you know when we have a fix!
Hey, @user-e2db0a, what version of PsychoPy are you running?
Can I map two surfaces simultaneously in a single frame?
Hey @user-b6f43d! Those two surfaces represent different planar surfaces, so you'd want to create two enrichments, one for each!
Hi! I am trying to use the iMotions Exporter within Neon Player 4.0.0, however I consistently get this error: "Currently, the iMotions Exporter only supports 3d gaze data"
Hi @user-1423fd! The iMotions Exporter is not compatible with Neon Player. iMotions software consumes Neon's native format directly; there is no need to export it using this legacy plugin from Pupil Player.
I also recommend you update Neon Player to 4.1, as it includes many fixes, an export format that matches the Cloud export format more closely, and more.
gotcha, did not realize it was an outdated plugin. Will also update to 4.1 now! Thanks Miguel!
No worries! Kindly note that we have recently renamed "Raw sensor data" to "Native Recording Format", and I am not sure if iMotions already reflects this change.
Regarding the improved thermal performance of Neon mentioned in the 'announce' channel; is that change due to hardware changes? firmware changes? Just the changes to the frames? Mostly curious to know if it was a firmware change that would improve the thermal performance of the neon modules we already have. Thanks!
Hi @user-44c93c! This improvement is achieved through new frames that distribute heat better thanks to a heat sink. Basically, we heard feedback from all of you and worked towards a cooler solution. To benefit from this you will only need to swap the frame.
Hi guys, there has been an update that I missed, and now I am wondering where I can find the "fixations_on_face" information, since apparently only "gaze_on_face" is left. Thanks for your help!
after I do the face mapping enrichment, of course
And one last question for now: what's the advantage of Neon Player in comparison to Pupil Cloud? Also, what is a "recording directory" that I have to drag into the player?
Hi @user-e3da49! The fixations-on-face CSV should still be there and contain a boolean indicating whether each fixation landed on a face or not. Could you confirm? That said, this information can also be extracted easily from the fixations.csv file and the face_positions.csv info. Please let us know if you need any help obtaining those.
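If you do want to derive it yourself, here is a rough sketch. It assumes face_positions.csv provides per-frame face bounding boxes via "timestamp [ns]" and corner columns "p1 x [px]", "p1 y [px]", "p2 x [px]", "p2 y [px]"; these column names are assumptions, so check the header of your export:

import pandas as pd

fixations = pd.read_csv("fixations.csv")
faces = pd.read_csv("face_positions.csv")

def fixation_on_face(row):
    # faces detected during this fixation's time window
    during = faces[
        (faces["timestamp [ns]"] >= row["start timestamp [ns]"])
        & (faces["timestamp [ns]"] <= row["end timestamp [ns]"])
    ]
    # check whether the fixation point falls inside any detected face bounding box
    inside = (
        (during["p1 x [px]"] <= row["fixation x [px]"])
        & (row["fixation x [px]"] <= during["p2 x [px]"])
        & (during["p1 y [px]"] <= row["fixation y [px]"])
        & (row["fixation y [px]"] <= during["p2 y [px]"])
    )
    return bool(inside.any())

fixations["fixation on face"] = fixations.apply(fixation_on_face, axis=1)
fixations.to_csv("fixations_on_face.csv", index=False)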
Regarding Neon Player: it is an offline solution for cases where you cannot use Cloud. There are not many advantages over using Cloud; for example, the Face Mapper, the Reference Image Mapper and pupillometry are only available in Cloud.
On the other hand, Neon Player offers you more visualisation possibilities, including seeing the eye videos. To try it out, simply export a recording from the phone or download the Native Recording Format data from Cloud.
Once downloaded, unzip it and then drag and drop the folder onto the Neon Player window.
Hi, has anyone here purchased the Lens Kit? Is this for Neon?
Hi @user-29f76a! This is an extended lens kit for the 'I can see clearly now' Neon frame. Note that the standard frame already ships with -3.0 to +3.0 dioptre lenses in 0.5 steps. You can read more about that here: https://pupil-labs.com/products/neon/shop#i-can-see-clearly-now
Hey Dom, sorry for the delayed response. It's 2023.2.3.
Thanks! What was your install method for PsychoPy (installer from the psychopy website, pip, homebrew, from source, etc)?
Hi Pupil Labs, is there a way to check/set the parameters of audio-recording, gaze sample rate, etc. when remotely starting a recording with pupil-labs-realtime-api? I know we can set it with the companion device but it'll be much more dependable if we can ensure it before every recording without having to physically check the app settings.
Hi @user-ccf2f6! Whilst you can see quite a bit of useful information about the Companion Device using the real-time API, such as battery level, free storage, etc., it's not possible to set/check the sampling rate or audio capture. Check out our code examples for available functions.
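For reference, here is a quick sketch of the kind of device info you can read with the real-time API's simple interface; the attribute names are as I recall them from the docs, so please verify them against the code examples linked above:

from pupil_labs.realtime_api.simple import discover_one_device

# find a Neon Companion device on the local network
device = discover_one_device()

print(f"Phone: {device.phone_name} ({device.phone_ip})")
print(f"Battery level: {device.battery_level_percent}%")
print(f"Free storage: {device.memory_num_free_bytes / 1e9:.1f} GB")

# recordings can be started/stopped remotely, but sampling rate and audio
# capture remain whatever is configured in the Companion app settings
recording_id = device.recording_start()
device.recording_stop_and_save()

device.close()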
Also, unfortunately, a recording of over 2 hours disappeared after a push notification appeared on the screen regarding the low battery status of the phone. This time I could find the scene videos and the eye videos in the phone's storage, but I can't trace them back to their location. Please help me find it somewhere.. the wearer didn't click on anything (neither the save nor discard options were shown)
It's with the installer from the website
Ok, that's good. I haven't actually been able to replicate the same error that you shared, but we did find a different error that only occurs with the standalone installer and only on Mac, which seems to be related to the way PsychoPy is bundled. We're working with the PsychoPy team to resolve it, and I'm expecting it will actually require an update to PsychoPy itself. Unfortunately, I don't know how long that will take to resolve.
Having said that, the issue we do see does not occur when using the pip-installed version of PsychoPy on Mac, so it's possible that it may fix your issue as well if you're willing to try it out.
One final note - the documentation link you shared earlier is malformed (there's an extra slash), and it breaks the formatting on the page. Somehow google indexed the bad URL. The correct URL serves a page which is much nicer to read.