Hi all, I am looking for advice on how to write an extension for OpenSesame so that it can be used with the Neon device. I would appreciate any help.
OpenSesame supports Python, so the best way to integrate them is via the Realtime API.
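For instance, here is a minimal sketch of what a Neon connection could look like from an OpenSesame inline script, assuming the pupil-labs-realtime-api package is installed in OpenSesame's Python environment (the event name is just a placeholder):

# Minimal sketch, assuming the pupil-labs-realtime-api package is
# installed in OpenSesame's Python environment.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # returns None if no device is found
if device is None:
    raise RuntimeError("No Neon device found on the network")

device.recording_start()
device.send_event("stimulus_onset")  # placeholder event name
# ... run the experiment ...
device.recording_stop_and_save()
device.close()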
Hi everyone, I am trying to develop an Android app that receives the Invisible Companion RTSP stream and does some AR tag detection, etc. I am having problems receiving the RTSP data: decoding the RTSP stream using Android's MediaCodec is not working, as it seems to produce no output. Do you have any suggestions on how to solve this? Are there any third-party open-source projects (written in Java or Kotlin) that work with Pupil Invisible's RTSP stream?
Hi, everyone. I read about the functionality at this link: https://pupil-labs.com/products/neon/specs. It now says (under "Post-hoc Data" at the bottom): Pupillometry data and eye state (Available in Pupil Cloud). I have checked the software and Pupil Cloud, but I can't figure out how to get the pupillometry data. Does anyone know how, or has anyone worked with this? Would you mind showing me how to get that data or how it works? Thanks
Hi @user-29f76a! There is nothing you need to do: pupil diameter and 3D eye state are automatically computed when a new recording is processed in Cloud. Once it has been processed, when you download the timeseries CSV files you will find a new file named 3d_eye_states.csv
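If it helps, here is a quick sketch of loading that file with pandas; the exact column names can vary between Cloud releases, so it prints them rather than assuming them:

# Sketch: inspect the eye-state file from the Cloud timeseries download.
# Column names are not assumed; print df.columns to check yours.
import pandas as pd

df = pd.read_csv("3d_eye_states.csv")
print(df.columns.tolist())  # verify the actual column names
print(df.head())            # first few samples, incl. pupil diameter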
I see. I recorded new data and it's available. But when I run the enrichment, it takes forever; it's been 3 hours and it's not done yet. Is there any issue with Pupil Cloud at the moment?
Hi @user-29f76a! The enrichment duration depends on the number of recordings in the enrichment, the type of enrichment (e.g., Reference Image Mapper takes longer), and the queue of enrichments being processed. If you think it is stuck, try doing a hard refresh of the page or contact info@pupil-labs.com with the enrichment ID.
It's definitely taking forever. We did the first step, a hard refresh of the page. Now maybe we'll send an email.
Hi, I am trying to create a local setup of the Alpha Lab "Map Gaze Onto Body Parts" guide via these instructions: https://densepose-module.readthedocs.io/. I have followed the macOS instructions exactly as listed, but I am still getting an error saying the torch module cannot be located.
One workaround I did have to use was downloading Johnny Nunez's most recent version of detectron2-0.7-cp311-cp311-macosx_10_9_universal2.whl, because the link (https://github.com/johnnynunez/detectron2/actions/runs/5953527699) to the action used when the guide was written leads nowhere... I am assuming this may be the source of my error? Any and all help would be greatly appreciated! Thanks!
Hi @user-1423fd! I see, that link died. Which wheel did you download/install? https://github.com/johnnynunez/detectron2/actions/runs/7285006064
I downloaded detectron2-3.11-pytorch2.0.1-macos-latest-wheel.whl from the link you provided
Dear all, I am trying to connect to a Neon device through Python application code. I am running a simple API example:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

And no device is found. The laptop is connected to the Companion phone via hotspot, and the Neon Companion app is running.
Hi @user-ee081d! You can pass the IP address to the Device class. Please try accessing the device via its IP, and then check this: https://docs.pupil-labs.com/neon/data-collection/monitor-app/#connection-problems
@user-d407c1 1) Thank you, it connected with the phone's IP. 2) Now I have a problem with saving the recording; I get an error: pupil_labs.realtime_api.device.DeviceError: (400, 'Cannot stop recording, not recording'). 3) I can't use the monitor app; the http://neon.local:8080/ page can't be reached.
2) Does your template contain the required fields? 3) It is most probably a DNS issue; how do you create the hotspot?
2) What does "contain the required fields" mean? 3) I connected to the phone through Wi-Fi
Hey hey
I have a question. I am very interested in Reference Image Mapper. We want to use it for our research and an article. Therefore, I have 2 questions: 1) Is there documentation available somewhere on how Reference Image Mapper works? At least some rough description of how 3D data is mathematically projected onto 2D data, or anything that isn't generalities? 2) Is there a Python implementation of Reference Image Mapper, or at least an API? Possibly in any other programming language?
I will be grateful for your answer.
Hi @user-06b8c5 👋. Thanks for your message; glad to hear you're interested in using Reference Image Mapper for your research. We find it to be a powerful tool when used correctly. I can explain the steps at a relatively high level:
Firstly, we build a 3D model of the environment based on the scanning video. You can visualise this as a 3D point cloud in Pupil Cloud once the enrichment is completed.
We subsequently use this to track the position of the scene camera, and project gaze onto the model. This is how we achieve robust mapping for wearers moving around in their environment and looking at features from various perspectives. From there, gaze is mapped onto the 2D reference image. It's this final part that facilitates the generation of heatmaps and AOI analysis.
I'm unable to share more about the inner workings of the algorithms at this moment. However, if you wish to evaluate the performance of the mapping, this can largely be done in the side-by-side visualiser in Pupil Cloud. Here, you can manually determine whether the mapping was successful or not on a frame-by-frame basis.
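Purely as a generic illustration of the pose-then-project idea described above (this is not Pupil Labs' implementation, and every array below is a made-up placeholder), a sketch with OpenCV might look like:

# Generic illustration of camera-pose estimation plus projection.
# NOT the Reference Image Mapper algorithm; all inputs are placeholders.
import numpy as np
import cv2

# Hypothetical 3D points from an environment model and their observed
# 2D positions in the current scene-camera frame.
object_points = np.random.rand(6, 3).astype(np.float32)
image_points = np.random.rand(6, 2).astype(np.float32)
K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], np.float32)
dist = np.zeros(5, np.float32)  # assume no lens distortion

# 1) Estimate the camera pose relative to the model.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# 2) Project a 3D point (e.g., a gaze intersection with the model) into
#    a camera view; for the reference image you would use the pose of
#    the camera that captured the reference image.
gaze_point_3d = np.array([[0.1, 0.2, 1.0]], np.float32)
pixel, _ = cv2.projectPoints(gaze_point_3d, rvec, tvec, K, dist)
print(pixel.ravel())  # 2D pixel coordinates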
Hi everyone, I am trying to recreate Pupil Labs' live stream, where it's able to show real-time video from the glasses on another device using the Neon Companion app. I have been able to create a window and broadcast the live stream with gaze using the async realtime API, but I am not sure how to create the GUI with all the working buttons that's usually on the right side. I was wondering what Python library you used, or what language was used to make the GUI. Also, if I could get access to the source, that would help me a lot!
Hi, @user-2255fa! Welcome to the community 👋
The web-based monitor app is not written in Python. To create a GUI in Python, there are plenty of options, but popular choices are:
* Qt (via PySide or PyQt)
* Tkinter
* Kivy
* wxPython
* Loads of others
There are pros and cons to each, of course, and different people will have different favorites. Personally, my favorite is PySide.
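For instance, here is a minimal sketch of a PySide6 window showing scene frames via the simple realtime API. This is not the monitor app's code, just one possible starting point; it assumes PySide6 and pupil-labs-realtime-api are installed and a device is reachable:

# Minimal sketch: display Neon scene frames in a PySide6 window.
# Not the monitor app's code, just one way to start your own GUI.
import sys
from PySide6.QtWidgets import QApplication, QLabel
from PySide6.QtGui import QImage, QPixmap
from PySide6.QtCore import QTimer
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # assumes a device is on the network
app = QApplication(sys.argv)
label = QLabel("Waiting for frames...")
label.show()

def update_frame():
    frame = device.receive_scene_video_frame()
    bgr = frame.bgr_pixels  # numpy array in BGR order
    h, w, _ = bgr.shape
    img = QImage(bgr.data, w, h, 3 * w, QImage.Format_BGR888)
    label.setPixmap(QPixmap.fromImage(img))  # fromImage copies the data

timer = QTimer()
timer.timeout.connect(update_frame)
timer.start(33)  # roughly 30 fps
app.exec()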
This is what I'm talking about ^^
What language was used to make the web-based livestream with the GUI?
I believe it's written in TypeScript, but at this time there is no equivalent realtime API library like the one we provide for Python
Hi everyone! Does anyone know how I can download the Pupil Player app? Does anybody use it for analyzing data?
Hi @user-1e429b 👋🏽! You can download it here - scroll down to "Assets"
Thank you so much!
Also, we're recording eye tracking data in ski sports. I've got several eye tracker (Pupil Invisible) recordings with corresponding scene camera video from a real ski track, recorded while the athletes perform their routines. Does anyone have experience analyzing video recordings with gaze references? Maybe someone can share some articles?
Hi, I have a question about how Pupil saves event files in a .csv file with recording id, timestamp, name, etc. I was wondering how I can capture that data from a recording with my own code and save it to a CSV file using the realtime API?
Hi @user-2255fa! Do you want to get the events from Cloud programmatically, or send events in realtime?
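Either way, here is a hedged sketch covering the realtime case: sending an event to the device and mirroring it into a local CSV. The file name and column layout are my own choices, not an official Pupil Labs format:

# Sketch: send an event to the device and mirror it into a local CSV.
# The CSV layout here is my own choice, not an official format.
import csv
import time
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # assumes a device is on the network
event = device.send_event("trial_start")  # timestamped on the device

with open("my_events.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["trial_start", time.time_ns(), str(event)])

device.close()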
hello - does anyone know how to download video (with scanpath/fixation annotations) from Pupil Cloud?
Hi @user-e3c1d6! You can use the Video Renderer enrichment. It is currently a bit hidden, though: you have to go to the Analysis tab in your project and select "+ New Visualisation".
Hi! I was wondering if there is a way to download the video with an enrichment applied? I'm using the face mapper enrichment.
Hey guys, I have been working with some secondary pupil dilation data and have some queries I hope someone can answer. 1) Would the blink detection algorithm work offline on data not collected with Neon (instead collected with EyeLink)? 2) Do you have any recommendations for interpolating missing values at the first and last timestamps, where measures like cubic splines don't accurately interpolate the data?
Hi @user-51bca2 ! Our blink detection algorithm is specifically designed for Neon and would not work with data from other eye tracking systems. You are welcome to read the blink documentation and the associated whitepaper. Regarding your second question, I would recommend checking the literature for best practice. For example, since you are asking about blinks and you mentioned in another thread that you are measuring pupil diameter, then you might consider the approaches used in this paper from Mathot or the methods in the PupilPre package for R.
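As a generic illustration only (not a validated pupillometry pipeline; follow the literature above), edge gaps can be handled by interpolating interior gaps and then extending the edges with the nearest valid value:

# Generic sketch: interpolate interior gaps, then fill leading/trailing
# gaps (where interpolation has no anchor) with the nearest valid value.
# This is an illustration, not a validated pupillometry pipeline.
import pandas as pd
import numpy as np

diameter = pd.Series([np.nan, np.nan, 3.1, 3.2, np.nan, 3.4, np.nan])
interior = diameter.interpolate(method="linear", limit_area="inside")
filled = interior.ffill().bfill()  # extend edges with nearest values
print(filled.tolist())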
Hi, could you please help me add two eye trackers to a group? I have installed the plugin, but they cannot find each other in Pupil Groups!
Hi @user-813003! Can you tell us something about the network you're connecting the two instances of pupil capture through? Is it an institutional network, for example?
@user-9f3e92, responding to your message (https://discord.com/channels/285728493612957698/1047111711230009405/1201653205713555497) here. Can you please share some more information about the network you're connecting on? Is it an institutional network?
Hi, Pupil Labs team. I am doing research that will include a questionnaire. I will do it in Qualtrics because the "template" feature doesn't really seem to fit my questionnaire. My question: is it possible to integrate the Pupil Labs software with Qualtrics? Or could you suggest other ways? Thanks
Hi @user-29f76a 👋🏽! Have you encountered any limitations when using the template feature? Alternatively, if you prefer storing your surveys within Qualtrics, you can correlate them later with Cloud data based on the subject's ID. Could you please provide additional information about your planned analysis? This might help me offer more specific and helpful feedback.
@nmt no, it's a home network. Both devices are connected to the same network, as the hotspot doesn't seem to work (though a hotspot would be ideal for my application). Anyway, using the same network I can see the live data in my browser at http://192.168.1.68:8080/. But

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

print(f"Phone IP address: {device.phone_ip}")
print(f"Phone name: {device.phone_name}")
print(f"Battery level: {device.battery_level_percent}%")
print(f"Free storage: {device.memory_num_free_bytes / 1024**3:.1f} GB")
print(f"Serial number of connected glasses: {device.serial_number_glasses}")

returns a None type by the look of things:

AttributeError: 'NoneType' object has no attribute 'serial_number_glasses'

I'm using a Python 3.11.5 venv in VS Code on Windows 10 Pro.
Automatic discovery is failing, which happens on some networks, esp. hotspots. However, if you know the IP address of the companion device, you don't need to use automatic discovery. Replace
from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()
with:
from pupil_labs.realtime_api.simple import Device
device = Device("192.168.1.68", 8080)
Hi @user-cdcab0, thanks, I got it going now. The glasses stream fine to my laptop over a common home network. Do you have any settings recommendations for using a hotspot? I cannot even get the stream in the browser in that configuration, even though the phone and laptop are connected.
You should definitely check whether your hotspot has any firewall settings. It may be configured to block the traffic that is needed for the system to work.
Hi, my plan is to ask participants to rate each stimulus (say 3 stimuli) with 13 questions (so 13 x 3). In each section I will display the picture; that's why the Neon software (template) is not suitable. Could you explain in more detail how to correlate the subject IDs, given that we can only manually enter the participant ID in Qualtrics?
Thanks for clarifying @user-29f76a. By correlating the subject IDs between Cloud and Qualtrics I meant using the same ID for each subject in both platforms, so that you can easily tell that eye tracking data set X corresponds to Qualtrics survey X.
However, I understand that you want to get eye tracking data and questionnaire data for every stimulus, is that right? E.g., by defining one section as the period during which stimulus a, b, or c is presented.
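For example, here is a hedged sketch of joining the two exports on a shared subject ID; the file and column names are placeholders to adapt to your actual Qualtrics and Cloud exports:

# Sketch: join Qualtrics responses with Cloud eye tracking data on a
# shared subject ID. File and column names are placeholders.
import pandas as pd

qualtrics = pd.read_csv("qualtrics_export.csv")    # has "subject_id"
eyetracking = pd.read_csv("cloud_timeseries.csv")  # has "subject_id"
merged = eyetracking.merge(qualtrics, on="subject_id", how="left")
merged.to_csv("combined_dataset.csv", index=False)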