πŸ’» software-dev


user-ee081d 06 January, 2024, 19:09:21

Hi all, I am looking for advice on how to write an extension for OpenSesame to be able to use it with the Pupil Labs Neon device. Would appreciate any help.

user-cdcab0 08 January, 2024, 04:31:44

OpenSesame supports Python, so the best way to integrate them would be with the Realtime API.
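For instance, a minimal sketch of what an OpenSesame inline_script item could run (assuming the pupil-labs-realtime-api package is installed and the Companion phone is on the same network; the event name is just an example):

from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network
device = discover_one_device()

# Start a recording on the phone and annotate it in real time
device.recording_start()
device.send_event("trial_01_onset")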

user-46e202 08 January, 2024, 11:56:19

Hi everyone, I am trying to develop an Android app that receives the Invisible Companion RTSP data and does some AR tag detection, etc. I have some problems receiving the RTSP data: decoding the RTSP stream using Android's MediaCodec is not working, as it seems to produce no output. Do you have any suggestions on how to find a solution to my problem? Are there any third-party open-source projects (written in Java or Kotlin) that work with Pupil Invisible's RTSP stream?

user-29f76a 08 January, 2024, 15:42:34

Hi, everyone. I read about the functionality at this link: https://pupil-labs.com/products/neon/specs. It now says (under "Post-hoc Data" at the bottom): pupillometry data and eye state (available in Pupil Cloud). I have checked the software and Pupil Cloud, but I am clueless about how to get the pupillometry data. Does anyone know how, or has anyone worked with this? Would you mind showing me how to get that data or how it works? Thanks

user-d407c1 09 January, 2024, 07:53:40

Hi @user-29f76a ! There is nothing you need to do: pupil diameter and 3D eye state are automatically computed when a new recording is processed in Cloud. Once it has been processed, when you download the timeseries CSV files you will find a new file named 3d_eye_states.csv
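Once downloaded, you can inspect it with pandas. Printing the columns first is a good idea, since the exact field names may differ between export versions:

import pandas as pd

eye_states = pd.read_csv("3d_eye_states.csv")
print(eye_states.columns.tolist())  # see which fields your export contains
print(eye_states.head())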

user-29f76a 09 January, 2024, 13:24:14

I see, I recorded new data and it's available. But when I run the enrichment, it takes forever. It's been 3 hours and it's not done yet. Is there any issue with Pupil Cloud at the moment?

user-d407c1 09 January, 2024, 13:41:01

Hi @user-29f76a ! The enrichment duration depends on the number of recordings in the enrichment, the type of enrichment (e.g., Reference Image Mapper takes longer), and the queue of enrichments being processed. If you think it is stuck, try doing a hard refresh of the page or contact info@pupil-labs.com with the enrichment ID.

user-29f76a 09 January, 2024, 13:43:55

It's definitely taking forever. We did the first step: a hard refresh of the page. Now maybe we can send an email

user-1423fd 11 January, 2024, 15:01:56

Hi, I am trying to create a local setup of the Alpha Lab "Map Gaze Onto Body Parts" guide via these instructions: https://densepose-module.readthedocs.io/ . I have followed the macOS instructions exactly as they are listed, but I am still receiving an error claiming the torch module cannot be located.

One workaround I did have to use was downloading Johnny Nunez's most recent version of detectron2-0.7-cp311-cp311-macosx_10_9_universal2.whl, because the link (https://github.com/johnnynunez/detectron2/actions/runs/5953527699) to the action used when the instructional guide was created leads nowhere... I am assuming this may be the source of my error? Any and all help would be greatly appreciated! Thanks!

user-d407c1 11 January, 2024, 15:37:16

Hi @user-1423fd ! I see, that link is dead. Which wheel did you download/install? https://github.com/johnnynunez/detectron2/actions/runs/7285006064

user-1423fd 11 January, 2024, 15:39:34

I downloaded detectron2-3.11-pytorch2.0.1-macos-latest-wheel.whl from the link you provided

user-ee081d 12 January, 2024, 11:09:45

Dear all, I am trying to connect to the Neon device through Python application code. I am running a simple API example:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

and no device is found. The laptop is connected through a hotspot to the Companion phone, while the Neon Companion app is running.

user-d407c1 12 January, 2024, 11:14:39

Hi @user-ee081d ! You can pass the IP address to the Device class. Please try connecting with the IP, and then check this: https://docs.pupil-labs.com/neon/data-collection/monitor-app/#connection-problems

user-ee081d 12 January, 2024, 11:28:26

@user-d407c1 1) Thank you, it connected with the phone IP. 2) Now I have a problem with saving the recording; I get an error: pupil_labs.realtime_api.device.DeviceError: (400, 'Cannot stop recording, not recording'). 3) Can't use the monitor app; the http://neon.local:8080/ page can't be reached

user-d407c1 12 January, 2024, 12:22:47

2) Does your template contain the required fields? 3) It is most probably a DNS issue; how did you create the hotspot?

user-ee081d 12 January, 2024, 12:26:34

2) What does "contain the required fields" mean? 3) I connected to the phone through Wi-Fi

user-06b8c5 14 January, 2024, 20:41:21

Hey hey

I have a question. I am very interested in Reference Image Mapper. We wanted to use it for our research and article. Therefore, I have 2 questions: 1) Is there documentation available somewhere on how Reference Image Mapper works? At least some rough content on how 3D data is mathematically projected onto 2D data, or anything that isn't generalities? 2) Is there a Python implementation of Reference Image Mapper or at least an API? Possibly in any other programming language?

I will be grateful for your answer.

nmt 15 January, 2024, 14:24:35

Hi @user-06b8c5 πŸ‘‹. Thanks for your message – glad to hear you're interested in using Reference Image Mapper for your research. We find it to be a powerful tool when used correctly. I can explain the steps at a relatively high level:

Firstly, we build a 3D model of the environment based on the scanning video. You can visualise this as a 3D point cloud in Pupil Cloud once the enrichment is completed.

We subsequently use this to track the position of the scene camera, and project gaze onto the model. This is how we achieve robust mapping for wearers moving around in their environment and looking at features from various perspectives. From there, gaze is mapped onto the 2D reference image. It's this final part that facilitates the generation of heatmaps and AOI analysis.

I'm unable to share more about the inner workings of the algorithms at this moment. However, if you wish to evaluate the performance of the mapping, this can largely be done in the side-by-side visualiser in Pupil Cloud. Here, you can manually determine whether the mapping was successful or not on a frame-by-frame basis.

user-2255fa 17 January, 2024, 20:30:14

Hi everyone, I am trying to recreate Pupil Labs' live stream, where it's able to show real-time video from the glasses on another device using the Neon Companion app. I have been able to create a window and broadcast the live stream with the gaze using the async real-time API, but I am not sure how to create the GUI with all the working buttons that's usually on the right side. I was wondering what Python library you used, or what language was used to make the GUI. Also, if I can get access to the source, that would help me a lot!

user-cdcab0 17 January, 2024, 21:06:47

Hi, @user-2255fa! Welcome to the community πŸ™‚

The web-based monitor app is not written in Python. To create a GUI in Python, there are plenty of options, but popular choices are:

* Qt (via PySide or PyQt)
* Tkinter
* Kivy
* wxPython
* Loads of others

There are pros and cons to each of course and different people will have different favorites. Personally, my favorite is PySide
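For example, a rough PySide6 sketch that shows the scene-camera stream in a window (untested, buttons omitted; double-check receive_scene_video_frame() and its bgr_pixels attribute against the realtime API docs):

import sys

import cv2
from PySide6.QtCore import QTimer
from PySide6.QtGui import QImage, QPixmap
from PySide6.QtWidgets import QApplication, QLabel
from pupil_labs.realtime_api.simple import discover_one_device

app = QApplication(sys.argv)
label = QLabel("Connecting...")
label.show()

device = discover_one_device()

def update_frame():
    # Grab the latest scene frame (BGR) and convert it for Qt
    sample = device.receive_scene_video_frame()
    rgb = cv2.cvtColor(sample.bgr_pixels, cv2.COLOR_BGR2RGB)
    h, w, _ = rgb.shape
    image = QImage(rgb.data, w, h, 3 * w, QImage.Format_RGB888)
    label.setPixmap(QPixmap.fromImage(image))

timer = QTimer()
timer.timeout.connect(update_frame)
timer.start(33)  # ~30 fps

sys.exit(app.exec())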

user-2255fa 17 January, 2024, 20:30:20

Chat image

user-2255fa 17 January, 2024, 20:30:55

This is what I'm talking about ^^

user-2255fa 17 January, 2024, 21:17:33

What language was used to make the web-based livestream with the GUI?

user-cdcab0 17 January, 2024, 21:24:29

I believe it's written in TypeScript, but at this time there is no equivalent realtime API library like the one we provide for Python

user-1e429b 22 January, 2024, 14:05:52

Hi everyone! Does anyone know how I can download the Pupil Player app? Does anybody use it for analyzing the data?

user-480f4c 22 January, 2024, 14:08:36

Hi @user-1e429b πŸ‘‹πŸ½ ! You can download it here - scroll down to "Assets"

user-1e429b 22 January, 2024, 14:11:40

Thank you so much!

user-1e429b 22 January, 2024, 14:27:23

Also, we're recording eye tracking data in ski sport. I've got several eye tracker (Pupil Invisible) recordings with corresponding scene camera video from a real ski track while athletes perform their routines. Does anyone have experience analyzing video recordings with gaze references? Maybe someone can share some articles?

user-2255fa 22 January, 2024, 16:21:28

Hi, I have a question about how Pupil saves event files in a CSV file with recording ID, timestamp, name, etc. I was wondering how I can take in that data from a recording with my own code and save it in a CSV file using the real-time API?

user-d407c1 22 January, 2024, 16:46:08

Hi @user-2255fa ! Do you want to get the events from Cloud programmatically, or send events in realtime?
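For the realtime direction, a minimal sketch with the simple API could look like this (the event name and CSV layout are just examples):

import csv
import time

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

# Send an event to the device; it ends up in the recording's events file
device.send_event("stimulus_onset")

# Also log it locally with a wall-clock timestamp
with open("events_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "local timestamp [ns]"])
    writer.writerow(["stimulus_onset", time.time_ns()])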

user-e3c1d6 22 January, 2024, 16:52:24

hello - does anyone know how to download video (with scanpath/fixations annotations) from Pupil Cloud?

user-d407c1 22 January, 2024, 16:58:52

Hi @user-e3c1d6 ! You can use the Video Renderer enrichment. It is currently a bit hidden though: you have to go to the Analysis tab in your project and select "+ New Visualisation".

user-05d680 14 February, 2024, 21:16:34

Hi! I was wondering if there is a way to download the video with an enrichment applied? I'm using the Face Mapper enrichment.

user-51bca2 24 January, 2024, 19:58:20

Hey guys, I have been working with some secondary pupil dilation data, and I have some queries I hope someone can answer. 1) Would the blink detection algorithm work offline for data not collected through Neon (instead collected through EyeLink)? 2) Do you have any recommendations for interpolating missing values at the first and last timestamps, where measures like cubic splines don't accurately interpolate the data?

user-f43a29 25 January, 2024, 11:44:28

Hi @user-51bca2 ! Our blink detection algorithm is specifically designed for Neon and would not work with data from other eye tracking systems. You are welcome to read the blink documentation and the associated whitepaper. Regarding your second question, I would recommend checking the literature for best practice. For example, since you are asking about blinks and you mentioned in another thread that you are measuring pupil diameter, then you might consider the approaches used in this paper from Mathot or the methods in the PupilPre package for R.
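For the edge gaps specifically, one simple approach (a sketch, not an official recommendation) is to spline-interpolate only the interior gaps and hold the nearest valid sample at the edges:

import numpy as np
import pandas as pd

diameter = pd.Series([3.1, 3.2, np.nan, np.nan, 3.4, 3.5, np.nan])

# Cubic interpolation for gaps between valid samples only
interior = diameter.interpolate(method="cubic", limit_area="inside")

# Hold the nearest valid value at the start/end, where splines are unreliable
filled = interior.ffill().bfill()
print(filled)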

user-813003 26 January, 2024, 01:21:04

Hi, would you please help me add two eye trackers to a group? I have installed the plugin, but they cannot find each other in Pupil Groups!

nmt 29 January, 2024, 14:36:46

Hi @user-813003! Can you tell us something about the network you're connecting the two instances of Pupil Capture through? Is it an institutional network, for example?

nmt 30 January, 2024, 02:33:35

@user-9f3e92 responding to your message (https://discord.com/channels/285728493612957698/1047111711230009405/1201653205713555497) here. Can you please share some more information about the network you're connecting on? Is it an institutional network?

user-29f76a 30 January, 2024, 13:41:48

Hi, Pupil Labs team. I am doing research that will include a questionnaire. I will do it in Qualtrics because the "template" feature doesn't really seem to fit my questionnaire. My question: is it possible to integrate the Pupil Labs software with Qualtrics? Or could you suggest other ways? Thanks

user-480f4c 30 January, 2024, 14:15:52

Hi @user-29f76a πŸ‘‹πŸ½ ! Have you encountered any limitations when using the template feature? Alternatively, if you prefer storing your surveys within Qualtrics, you can correlate them later with Cloud data based on the subject's ID. Could you please provide additional information into your planned analysis? This might help me to offer more specific and helpful feedback.

user-9f3e92 30 January, 2024, 16:35:41

@nmt no, it's a home network. Both devices are connected to the same network, as the hotspot doesn't seem to work (however, a hotspot would be ideal for my application). Anyway, using the same network, I can see the live data in my browser using http://192.168.1.68:8080/ . But this:

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()

print(f"Phone IP address: {device.phone_ip}")
print(f"Phone name: {device.phone_name}")
print(f"Battery level: {device.battery_level_percent}%")
print(f"Free storage: {device.memory_num_free_bytes / 1024**3:.1f} GB")
print(f"Serial number of connected glasses: {device.serial_number_glasses}")

returns a None by the look of things:

Error: print(f"Serial number of connected glasses: {device.serial_number_glasses}")
AttributeError: 'NoneType' object has no attribute 'serial_number_glasses'

I'm using a Python 3.11.5 venv in VS Code on Windows 10 Pro.

user-cdcab0 30 January, 2024, 23:27:04

Automatic discovery is failing, which happens on some networks, esp. hotspots. However, if you know the IP address of the companion device, you don't need to use automatic discovery. Replace

from pupil_labs.realtime_api.simple import discover_one_device
device = discover_one_device()

with:

from pupil_labs.realtime_api.simple import Device
device = Device("192.168.1.68", 8080)

user-9f3e92 31 January, 2024, 13:53:51

Hi @user-cdcab0, thanks, I got it going now. The glasses are streaming fine using a common home network with my laptop. Do you have any settings recommendations for using a hotspot? I cannot even get the stream in the browser in that type of configuration, even though the phone and laptop are connected.

user-cdcab0 01 February, 2024, 05:29:09

You should definitely check to see if your hotspot has any firewall settings. It may be configured to block the traffic which is needed for the system to work

user-29f76a 31 January, 2024, 15:25:46

Hi, my plan would be asking participants to rate each stimulus (say, 3 stimuli) with 13 questions (so 13 x 3). In each section I will display the picture; that's why the Neon software (template) is not compatible. Could you explain in more detail how to correlate the subject IDs, given that we can only manually write the participant ID in Qualtrics?

user-480f4c 31 January, 2024, 15:48:13

Thanks for clarifying @user-29f76a. By correlating the subject IDs between Cloud and Qualtrics, I meant having the same ID for each subject in both platforms, so that you can easily track that eye tracking data set X corresponds to Qualtrics survey X.

However, I understand that you want to get eye tracking data and questionnaire data for every stimulus, is that right? E.g., by defining the period during which stimulus a, b, or c is presented as one section.
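If the IDs match, joining the two exports later is straightforward, e.g. (a hypothetical sketch; the file names and subject_id column are assumptions):

import pandas as pd

qualtrics = pd.read_csv("qualtrics_export.csv")    # includes a subject_id column
recordings = pd.read_csv("cloud_recordings.csv")   # includes a subject_id column
merged = qualtrics.merge(recordings, on="subject_id")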

End of January archive