πŸ•Ά invisible


user-b9d000 01 March, 2024, 20:11:01

Hi Pupil Labs, Is it possible to run the reference image mapper or something similar in Python? It seems this tutorial exists (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/) but it requires us to use Pupil Cloud’s Reference Image Mapper beforehand, which we cannot do because we have been correcting the gaze in Pupil Player. Is there any way we can use the reference image mapper outside of Cloud?

nmt 02 March, 2024, 10:13:48

Hi @user-b9d000 πŸ‘‹. Indeed, that guide is intended to be run using the reference image mapper export. Unfortunately, we don't have a way to run the reference image mapper itself outside of Cloud. It sounds like you could do with post-hoc gaze correction in Cloud. If so, please add this as a feature request in the new πŸ’‘ features-requests channel

user-453f5f 05 March, 2024, 12:35:15

Hi Pupil! Is it possible to upload your own reference video for the Reference Image Mapper enrichment? Some of my reference videos turned out to be a bit longer than 3 minutes and I wanted to cut out beginning and ending parts to make it fit and then use those. Or is there any built-in tool for video editing?

Second question - if I created a recording in a different workspace, is it possible to transfer it to another workspace?

Thanks, Iga

user-480f4c 05 March, 2024, 12:48:39

Hi @user-453f5f! Regarding your questions:

  1. The video used as a scanning recording needs to be at most 3 minutes long. If your existing scanning recording is longer, you could try using one of your eye tracking recordings as a scanning video, assuming it is at most 3 minutes long and scans the environment/scene from different angles/distances. Alternatively, if you still have access to the same scene, you can create a new scanning recording using your Pupil Invisible glasses. Please make sure to check our docs for more tips on the Reference Image Mapper. Feel free to upvote this relevant feature request: https://discord.com/channels/285728493612957698/1212053314527830026
  2. Currently, it is not possible to transfer recordings across workspaces. There is already a ticket for this on our feature-request channel: https://discord.com/channels/285728493612957698/1212410344400486481 - Feel free to upvote it!
user-453f5f 05 March, 2024, 12:54:58

Unfortunately, eye-tracking recordings are 15 minutes long and we are now away from the scene... Guess we'll have to go back then. Thank you! πŸ™‚

user-b9d000 05 March, 2024, 20:58:12

Hi pupil labs. A quick question about densepose (https://docs.pupil-labs.com/alpha-lab/dense-pose/): If our video has multiple people to be identified, does this dense pose script have a way of differentiating multiple people? Is there a way we can label each person individually?

user-d407c1 07 March, 2024, 13:52:25

Hi @user-b9d000 ! The code itself has no way to differentiate between people. You could adapt it to track the bounding boxes; if the subjects remain in the field of view, that should be fairly simple.

If a subject leaves the field of view, it gets more complicated, as you will probably need to manually annotate when they reappear.
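As a rough illustration of the bounding-box tracking idea mentioned above — this is a hypothetical sketch, not part of the DensePose script; the `iou` and `match_boxes` helpers are my own illustration of greedy IoU matching between frames:

```python
# Sketch: assign stable IDs to people by greedily matching bounding boxes
# between consecutive frames using Intersection-over-Union (IoU).
# All names here are illustrative, not from the DensePose script.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_boxes(prev, curr, threshold=0.3):
    """Map each current box index to the previous track ID with highest IoU."""
    assignments = {}
    used = set()
    for ci, cbox in enumerate(curr):
        best_id, best_iou = None, threshold
        for pid, pbox in prev.items():
            score = iou(pbox, cbox)
            if score > best_iou and pid not in used:
                best_id, best_iou = pid, score
        if best_id is not None:
            assignments[ci] = best_id
            used.add(best_id)
    return assignments
```

This only works while subjects stay in view; once someone leaves the frame, their track is lost and, as noted, you would need to manually re-link them when they reappear.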

user-d1b99d 06 March, 2024, 23:42:38

Hi Pupil Labs,

I'm new here and have lots of questions; I need help! Is it possible to connect the Invisible with PsychoPy? If so, which version of PsychoPy supports it? Additionally, I've read the documentation about connecting Core with PsychoPy (https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html). Can Invisible achieve the same functionality? Also, the Pupil Labs plugin won't download successfully via the plugin manager. I tried the manual method and checked the pip list (https://github.com/psychopy/psychopy-eyetracker-pupil-labs), which shows it's there, but I still can't find the eye-tracking device in PsychoPy. I apologize for the barrage of questions, but any answers would greatly assist me. Thanks!

user-cdcab0 07 March, 2024, 00:15:41

Hi, @user-d1b99d - if you're on Mac, there is a known issue with the current standalone build of PsychoPy which prevents the latest Pupil Labs plugin from being installed, but we are working with the PsychoPy team to fix it, and it should be working in the next standalone release. This issue is not present on other operating systems or in the pip-installed version of PsychoPy.

Having said that, the plugin is designed to support Neon, and has not been developed or tested with Invisible

user-91d7b2 07 March, 2024, 13:29:08

I'm not sure what is going on but the enrichment runs, but no heatmap is prepared. What are some potential issues?

user-d407c1 07 March, 2024, 13:55:08

Hi @user-91d7b2 πŸ‘‹ ! What do you mean by it runs but there is no heatmap? Does the enrichment finish successfully? If so, when you go to the visualisations tab, do you see the option to create a heatmap based on the enrichment?

user-41ad85 08 March, 2024, 20:35:48

We are looking back at our project from last year and we are noticing that the videos in Pupil Cloud are not playing correctly, or at all, but the videos we downloaded from the devices are playing fine. Is there a way to reupload these files to the Cloud, especially since the error seems to have happened during upload? We also have recordings from a handful of children that were deleted from Pupil Cloud by an RA, but we have their raw data/videos backed up in our drive. Can we reupload those too?

user-480f4c 11 March, 2024, 14:15:05

Hi @user-41ad85 - Re-uploading is not possible - however, the recordings might be recoverable. Could you please go to our πŸ›Ÿ troubleshooting ⁠channel and create a ticket? This will create a private chat where you can share more details about your recordings (eg recording ID).

user-d407c1 11 March, 2024, 15:37:42

Hi @user-91d7b2 ! I see the issue: you need two recordings in a project, a scanning recording and a normal recording, but you used the same video for both. Scanning recordings are automatically removed from the data.

If you can access the court again, simply make another recording (it can be with the glasses in hand) and use that one as the scanning recording. If you would like to use a subsection of a recording as a scanning recording, please upvote this feature request.

I have removed myself already from your workspace, but let me know if there is anything else you need assistance with.

user-91d7b2 11 March, 2024, 17:11:31

That's odd as in the past (about a year ago) it worked by just using the normal recording. I don't have access to the court again.

user-0a5287 12 March, 2024, 15:14:55

Dear Colleagues! The phone does not upload data to the cloud. There are no errors. The recordings are played back on the phone. Android 11, OnePlus 8T.

Chat image

user-d407c1 12 March, 2024, 15:20:16

Hi @user-0a5287 πŸ‘‹ ! Can you please check the phone is connected to a network that has internet access and then try logging out and back into the Companion app and let us know if this triggers the upload?

If not, could you please visit https://speedtest.cloud.pupil-labs.com/ on the phone's browser to ensure the phone can access the Pupil Cloud servers.

user-de220f 13 March, 2024, 03:33:10

Excuse me, I want to know what happened to my Pupil Labs application, Invisible Companion. It shows an "errno 4 error" and I can't open it today. Could you tell me what is happening with my application and how to deal with it?

nmt 13 March, 2024, 04:14:40

Hi πŸ‘‹. This looks like it could be a USB cable issue. I see you've already sent us an email to which I've responded. Let's continue the conversation there!

user-de220f 13 March, 2024, 03:33:39

Chat image

user-612622 13 March, 2024, 14:41:46

Hi. I get this error on a video from Cloud: "Video transcoding failed for this recording. We have been notified of the error and will work on a fix. Please check back later or get in touch with [email removed]"

but it works in the phone's mobile application. What should I do to fix it?

user-480f4c 13 March, 2024, 14:44:08

Hi @user-612622 - Can you please create a ticket in our πŸ›Ÿ troubleshooting channel? This will create a private chat where you can share more details about your recordings (eg recording ID).

user-272517 15 March, 2024, 14:00:09

I have a grey icon in the outer circle; the scene camera icon... Am I overlooking something?

user-480f4c 15 March, 2024, 14:07:56

Hi @user-272517 - can you please create a ticket in our πŸ›Ÿ troubleshooting channel? This will create a private chat where you can share more debugging steps to solve your issue.

user-272517 15 March, 2024, 14:04:55

the saved recordings are uploaded in the cloud, but there is no video, so on the left there is just a grey box with this

Chat image

user-612622 15 March, 2024, 16:41:21

@user-d407c1 @user-53a8c4 I'm trying to merge pupil diameter information from pupil_positions.csv into the fixations.csv data. There is a small difference between world_timestamp and pupil_timestamp; they vary. How can I combine these data? Should I average these timestamps? Thanks for any suggestions

nmt 18 March, 2024, 03:48:39

Hi @user-612622! Pupil Core's eye cameras and scene camera can operate at different sampling rates. So, differences in those timestamps can be expected. But essentially, it sounds like you need to find all pupil data within the period of a fixation. Cell 5 of this Pupil tutorial shows how you can do that. Although you might not be working with surface-mapped fixations as in the tutorial, the same principles apply.
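A minimal sketch of that approach: for each fixation, select all pupil samples whose timestamp falls inside the fixation window and average their diameter. Toy DataFrames stand in for the CSV exports so the example runs standalone; the column names (`start_timestamp`, `duration` in milliseconds, `pupil_timestamp`, `diameter`) follow typical Pupil Core exports, but verify them against your own files.

```python
import pandas as pd

# Toy stand-ins for fixations.csv and pupil_positions.csv.
# In practice: fixations = pd.read_csv("fixations.csv"), etc.
fixations = pd.DataFrame(
    {"id": [0, 1], "start_timestamp": [10.0, 11.0], "duration": [300.0, 250.0]}
)  # duration in milliseconds
pupil = pd.DataFrame(
    {
        "pupil_timestamp": [10.05, 10.15, 10.9, 11.1, 11.2],
        "diameter": [3.0, 3.2, 3.4, 3.6, 3.8],
    }
)

rows = []
for _, fix in fixations.iterrows():
    start = fix["start_timestamp"]
    end = start + fix["duration"] / 1000.0  # convert ms to seconds
    # All pupil samples inside this fixation's [start, end) window:
    in_window = pupil[
        (pupil["pupil_timestamp"] >= start) & (pupil["pupil_timestamp"] < end)
    ]
    rows.append(
        {"fixation_id": fix["id"], "mean_diameter": in_window["diameter"].mean()}
    )

fixation_diameters = pd.DataFrame(rows)
```

No timestamp averaging is needed: the fixation window defines the match, so the different sampling rates of the eye and scene cameras don't matter.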

user-612622 16 March, 2024, 12:19:19

Why can't I upload a Pupil Invisible recording exported in "Pupil format" to Pupil Player? It says "InvalidRecordingException: There is no info file in the target directory."

user-612622 16 March, 2024, 12:32:34

I've tried downloading from Cloud and from the Companion app and I get the same result

nmt 18 March, 2024, 03:51:04

Are you dragging and dropping the whole recording directory into the Player window, and not just a video file, for example?

user-0b4995 20 March, 2024, 08:19:35

Hello, where exactly is the microphone on the Pupil Invisible glasses? We recently had some issues with missing audio and wondered if we maybe covered the microphone.

user-d407c1 20 March, 2024, 08:53:28

Hi @user-0b4995 ! the microphone is in the scene camera module. Check all the sensors here

Chat image

user-0b4995 20 March, 2024, 10:27:38

ok thanks!

user-2251c4 20 March, 2024, 15:22:24

Hi! I have a question about the data provided by the Marker Mapper from Pupil Cloud. I had one group wearing the Invisible eye-tracking glasses at the same time. With the help of AprilTags and the Pupil Cloud Marker Mapper, I defined and marked two different surfaces in the recordings. I then downloaded the data from Pupil Cloud and am now running some analyses. However, I wonder why the number of fixations per participant found in these files is different, although they were all recorded during the same session? For example, Marker Mapper AOI_1 tells me that a participant fixated 20 times, but Marker Mapper AOI_2 tells me that the same participant fixated 31 times. I wonder if the detected fixations are somehow related to the visibility of the AprilTag markers, but I couldn't find any explicit information about this in the documentation. If so, how many AprilTags must be visible for fixation detection? If I remember right, I have seen somewhat conflicting information that either 1 or 2 markers must be visible.

user-f43a29 21 March, 2024, 13:36:03

Hi @user-2251c4 πŸ‘‹ ! Is there an aspect of your experimental design that forces each observer to have the same number of fixations per AOI? Typically, the number of fixations will not only vary for each AOI, but also for each observer, so getting a different number of fixations for each AOI is expected, as many factors can affect what is fixated and for how long/often. When using the Marker Mapper, at least 2 markers must be visible for a surface to be detected, but it is better when 3 or more are detected, as the data are then more reliable. For best results, define a surface with 4 or more AprilTags.

user-d119ac 21 March, 2024, 13:52:04

Hi! I have a question about how to extract (calibration parameters) information from calibration.bin file

user-2251c4 22 March, 2024, 17:42:36

Hi! I tried to check and debug my code but the issue persists. I computed the counts like this: fixations_count_per_recording = fixations_df.groupby('recording id')['fixation id'].nunique().reset_index(name='fixations_count'). Please let me know if this is the wrong way to do it!

user-f43a29 27 March, 2024, 22:31:36

Hi @user-2251c4 , I tried to reproduce this issue, but I do not get this error. The code that you sent counts the unique number of fixations, but the table in your original post was made with code that instead computes total fixation duration. I would recommend double-checking what happens there.
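To illustrate the distinction between counting unique fixations and summing their durations, here is a standalone sketch with toy data. The column names (`recording id`, `fixation id`, `duration [ms]`) are assumptions based on typical Marker Mapper exports; check your own CSV headers.

```python
import pandas as pd

# Toy stand-in for a Marker Mapper fixations.csv; column names are assumed.
fixations_df = pd.DataFrame(
    {
        "recording id": ["rec1", "rec1", "rec1", "rec2", "rec2"],
        "fixation id": [1, 2, 2, 1, 3],
        "duration [ms]": [120, 200, 80, 150, 300],
    }
)

# Number of distinct fixations per recording:
counts = (
    fixations_df.groupby("recording id")["fixation id"]
    .nunique()
    .reset_index(name="fixations_count")
)

# Total fixation duration per recording -- a different quantity entirely:
durations = (
    fixations_df.groupby("recording id")["duration [ms]"]
    .sum()
    .reset_index(name="total_duration_ms")
)
```

Comparing the two tables makes it easy to spot which quantity a given analysis script is actually producing.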

user-a5a6c3 27 March, 2024, 08:32:11

Hi Pupil labs, i'm using the real time API on python and i have a question. Can i use multithreading with the real-time API?

user-cdcab0 27 March, 2024, 08:45:20

You'll want to do all of your real-time API interactions within the same thread, but that can be a background thread or the main thread. There's also an asynchronous interface.
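One common way to keep all API calls on a single background thread is a producer-consumer pattern, sketched below with the standard library. `poll_gaze` is a hypothetical placeholder for whatever real-time API call you actually make (e.g. receiving one gaze datum from the device); replace it with your own code.

```python
import queue
import threading

def poll_gaze():
    """Hypothetical placeholder for a real-time API call, e.g. receiving
    one gaze datum from the device. Replace with your actual API usage."""
    return {"x": 0.5, "y": 0.5}

def worker(out_queue, stop_event):
    # All real-time API interactions stay on this one thread.
    while not stop_event.is_set():
        out_queue.put(poll_gaze())

gaze_queue = queue.Queue()
stop = threading.Event()
thread = threading.Thread(target=worker, args=(gaze_queue, stop), daemon=True)
thread.start()

# The main thread consumes data without touching the API itself.
sample = gaze_queue.get(timeout=5)
stop.set()
thread.join()
```

The queue decouples the API thread from the rest of the program, so the main thread (or any other consumer) never calls the API directly.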

user-df855f 27 March, 2024, 10:03:27

Hi Pupil Labs, if I download the Timeseries CSV, or the Timeseries CSV and scene video, only the info.json file is downloaded. As I am particularly interested in the CSV files, I am wondering how to access them.

user-f43a29 27 March, 2024, 10:06:47

Hi @user-df855f πŸ‘‹ ! It sounds like you could be using the Safari browser? If so, check out this part of our Troubleshooting docs

user-e33a15 29 March, 2024, 00:39:24

Hi Pupil Labs,

During a recording session, I encountered this issue: " Recording error: We have detected an error during recording!" but I am unsure of the cause. After replugging the USB-C cable, the issue was resolved. To ensure smooth operation for upcoming experiments, could you provide guidance on how to prevent this from happening again?

Thank you in advance

nmt 29 March, 2024, 02:49:18

Hi @user-e33a15! Please open a support ticket in the πŸ›Ÿ troubleshooting channel and we can assist you from there.

End of March archive