Hi Pupil Labs, is it possible to run the Reference Image Mapper or something similar in Python? It seems this tutorial exists (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/), but it requires us to use Pupil Cloud's Reference Image Mapper beforehand, which we cannot do because we have been correcting the gaze in Pupil Player. Is there any way we can use the Reference Image Mapper outside of Cloud?
Hi @user-b9d000. Indeed, that guide is intended to be run using the Reference Image Mapper export. Unfortunately, we don't have a way to run the Reference Image Mapper itself outside of Cloud. It sounds like you could do with post-hoc gaze correction in Cloud. If so, please add this as a feature request in the new features-requests channel.
Hi Pupil! Is it possible to upload your own reference video for the Reference Image Mapper enrichment? Some of my reference videos turned out to be a bit longer than 3 minutes, and I wanted to cut out the beginning and end parts to make them fit and then use those. Or is there any built-in tool for video editing?
Second question - if I created a recording in a different workspace, is it possible to transfer it to another workspace?
Thanks, Iga
Hi @user-453f5f! Regarding your questions:
Unfortunately, our eye-tracking recordings are 15 minutes long and we are now away from the scene... Guess we'll have to go back then. Thank you!
Hi Pupil Labs. A quick question about DensePose (https://docs.pupil-labs.com/alpha-lab/dense-pose/): if our video has multiple people to be identified, does the DensePose script have a way of differentiating them? Is there a way we can label each person individually?
Hi @user-b9d000! The code per se has no way to differentiate people. You could adapt it to track the bounding boxes; if the subjects remain in the field of view, that should be fairly simple. If a subject leaves the field of view, it is a bit more complicated, as you will probably need to manually annotate when they reappear. See the sketch below for the basic idea.
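To illustrate, here is a minimal sketch of tracking bounding boxes across frames to keep per-person labels. It assumes you already have per-frame person boxes from the detector used in the script; the names here (e.g. `detections_per_frame`) are hypothetical placeholders, not part of the DensePose code:

```python
# Minimal IoU-based tracker sketch (hypothetical helper, not part of the
# DensePose script). Assumes per-frame person bounding boxes as
# (x1, y1, x2, y2) tuples from the detector.

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track(detections_per_frame, iou_threshold=0.5):
    # Greedily match each detection to the best unmatched track from the
    # previous frame; unmatched detections start a new person ID.
    tracks, next_id, labeled_frames = {}, 0, []
    for boxes in detections_per_frame:
        frame_labels, used = [], set()
        for box in boxes:
            best_id, best_iou = None, iou_threshold
            for pid, prev_box in tracks.items():
                score = iou(box, prev_box)
                if pid not in used and score > best_iou:
                    best_id, best_iou = pid, score
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
            tracks[best_id] = box
            used.add(best_id)
            frame_labels.append((best_id, box))
        labeled_frames.append(frame_labels)
    return labeled_frames
```

A real tracker would also handle occlusions and re-identification; this greedy version is only meant to show where per-person IDs could be attached.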
Hi Pupil Labs,
I'm new here and have lots of questions; I need help! Is it possible to connect the Invisible with PsychoPy? If so, which version of PsychoPy supports it? Additionally, I've read the documentation about connecting Core with PsychoPy (https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html). Can Invisible achieve the same functionality? Also, the Pupil Labs plugin can't be downloaded successfully via the plugin manager. I tried the manual method and checked the pip list (https://github.com/psychopy/psychopy-eyetracker-pupil-labs), which shows it's there, but I still can't find the eye-tracking device in PsychoPy. I apologize for the barrage of questions, but any answers would greatly assist me. Thanks!
Hi, @user-d1b99d - if you're on Mac, there is a known issue with the current standalone build of PsychoPy which prevents the latest Pupil Labs plugin from being installed. We are working with the PsychoPy team to fix it, and it should be working in the next standalone release. This issue is not present on other operating systems or in the pip-installed version of PsychoPy.
Having said that, the plugin is designed to support Neon and has not been developed or tested with Invisible.
I'm not sure what is going on: the enrichment runs, but no heatmap is produced. What are some potential issues?
Hi @user-91d7b2! What do you mean by "it runs but there is no heatmap"? Does the enrichment finish successfully? If so, when you go to the visualisations tab, do you see the option to create a heatmap based on the enrichment?
We are looking back at our project from last year and noticing that the videos in Pupil Cloud are not playing correctly, or at all, while the videos we downloaded from the devices play fine. Is there a way to re-upload these files to the cloud, especially since the error seems to have happened during upload? We also have a handful of children whose recordings were deleted from Pupil Cloud by an RA, but we have their raw data/videos backed up in our drive. Can we re-upload those too?
Hi @user-41ad85 - Re-uploading is not possible; however, the recordings might be recoverable. Could you please go to our troubleshooting channel and create a ticket? This will create a private chat where you can share more details about your recordings (e.g., recording ID).
Hi @user-91d7b2! I see the issue: you need two recordings in a project, a scanning recording and a normal recording, but you used the same video for both. Scanning recordings are automatically removed from the data.
If you can access the court again, simply make another recording - it can be done with the glasses in your hand - and then use that one as the scanning recording. If you would like to use a subsection of a recording as a scanning recording, please upvote this feature request.
I have removed myself already from your workspace, but let me know if there is anything else you need assistance with.
That's odd, as in the past (about a year ago) it worked with just the normal recording. I don't have access to the court again.
Dear Colleagues! The phone does not upload data to the cloud. There are no errors. The recordings are played back on the phone. Android 11, OnePlus 8T.
Hi @user-0a5287! Can you please check that the phone is connected to a network with internet access, then try logging out of and back into the Companion app, and let us know if this triggers the upload?
If not, could you please visit https://speedtest.cloud.pupil-labs.com/ in the phone's browser to ensure the phone can access the Pupil Cloud servers?
Excuse me, I want to know what happened to my Pupil Labs application, Invisible Companion. It shows an "errno 4 error" and I can't open it today. Could you tell me what is happening with my application and how to deal with it?
Hi! This looks like it could be a USB cable issue. I see you've already sent us an email, to which I've responded. Let's continue the conversation there!
Hi. I get this error on a video in Cloud: "Video transcoding failed for this recording. We have been notified of the error and will work on a fix. Please check back later or get in touch with [email removed]" - but the recording plays fine in the mobile Companion app. What should I do to fix it?
Hi @user-612622 - Can you please create a ticket in our troubleshooting channel? This will create a private chat where you can share more details about your recordings (e.g., recording ID).
I have a grey icon in the outer circle - the scene camera icon... Am I overlooking something?
Hi @user-272517 - can you please create a ticket in our troubleshooting channel? This will create a private chat where we can share more debugging steps to solve your issue.
The saved recordings are uploaded to the cloud, but there is no video, so on the left there is just a grey box with this.
@user-d407c1 @user-53a8c4 I'm trying to merge pupil diameter information from pupil_positions.csv into the fixations.csv data. There is a small difference between world_timestamp and pupil_timestamp, and they vary. How can I combine these data? Should I average the timestamps somehow? Thanks for any suggestions.
Hi @user-612622! Pupil Core's eye cameras and scene camera can operate at different sampling rates, so differences between those timestamps are expected. But essentially, it sounds like you need to find all pupil data that falls within the period of each fixation. Cell 5 of this Pupil tutorial shows how you can do that. Although you might not be working with surface-mapped fixations as in the tutorial, the same principles apply - something like the sketch below.
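For illustration, a minimal sketch of that approach, assuming column names from a typical Pupil Player export (fixations.csv with `start_timestamp` and `duration` in ms; pupil_positions.csv with `pupil_timestamp` and `diameter`) - please double-check these against your own files:

```python
import pandas as pd

# Attach the mean pupil diameter to each fixation by selecting all pupil
# samples whose timestamps fall within the fixation's time window.
fixations = pd.read_csv("fixations.csv")
pupil = pd.read_csv("pupil_positions.csv")

def mean_diameter(row):
    start = row["start_timestamp"]
    end = start + row["duration"] / 1000.0  # duration is in ms
    mask = (pupil["pupil_timestamp"] >= start) & (pupil["pupil_timestamp"] <= end)
    return pupil.loc[mask, "diameter"].mean()

fixations["mean_pupil_diameter"] = fixations.apply(mean_diameter, axis=1)
```

This avoids matching individual timestamps across files (which will never line up exactly) and instead aggregates all pupil samples within each fixation window.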
Why can't I load a Pupil Invisible recording exported in "Pupil format" into Pupil Player? It says "InvalidRecordingException: There is no info file in the target directory."
I've tried downloading from Cloud and from the Companion app, and I get the same result.
Are you dragging and dropping the whole recording directory into the Player window, and not just a video file, for example?
Hello, regarding the Pupil Invisible glasses: where exactly is the microphone? We recently had some issues with missing audio and wondered if we maybe covered the microphone.
Hi @user-0b4995! The microphone is in the scene camera module. Check all the sensors here.
ok thanks!
Hi! I have a question about the data provided by Marker Mapper from Pupil Cloud. I have one group who were wearing the Invisible eye-tracking glasses at the same time. With the help of AprilTags and the Pupil Cloud Marker Mapper, I defined and marked two different surfaces in the recordings. I then downloaded the data from Pupil Cloud and am now running some analyses. However, I wonder why the number of fixations per participant in these files differs, although they were all recorded during the same session? For example, Marker Mapper AOI_1 tells me that a participant fixated 20 times, but Marker Mapper AOI_2 tells me that the same participant fixated 31 times. I wonder if the detected fixations are somehow related to the visibility of the AprilTag markers, but I couldn't find any explicit information about this in the documentation. If so, how many AprilTags must be visible for fixation detection? If I remember right, I have seen somewhat conflicting information that either 1 or 2 markers must be visible.
Hi @user-2251c4! Is there an aspect of your experimental design that forces each observer to have the same number of fixations per AOI? Typically, the number of fixations will vary not only per AOI but also per observer, so getting a different number of fixations for each AOI is expected, as many factors can affect what is fixated and for how long/often. When using the Marker Mapper, at least 2 markers must be visible for a surface to be detected, but it is better when 3 or more are detected, as the data are then more reliable. For best results, define a surface with 4 or more AprilTags.
Hi! I have a question about how to extract the calibration parameters from the calibration.bin file.
Hi! I tried to check and debug my code, but the issue persists. I computed the counts like this: fixations_count_per_recording = fixations_df.groupby('recording id')['fixation id'].nunique().reset_index(name='fixations_count'). Please let me know if this is the wrong way to do it!
Hi @user-2251c4, I tried to reproduce this issue, but I do not get this error. The code that you sent counts the unique number of fixations, but the table that you showed in the original post was made with code that instead computes total fixation duration. I would recommend double-checking what happens there - see the sketch below for the difference.
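For reference, a short sketch contrasting the two aggregations, assuming Cloud-style column headers (`recording id`, `fixation id`, `duration [ms]`) - check these against your own export:

```python
import pandas as pd

fixations_df = pd.read_csv("fixations.csv")

# Number of unique fixations per recording (what your snippet computes):
counts = (fixations_df.groupby("recording id")["fixation id"]
          .nunique().reset_index(name="fixations_count"))

# Total fixation duration per recording - a different quantity, which is
# what the table in the original post appears to show:
durations = (fixations_df.groupby("recording id")["duration [ms]"]
             .sum().reset_index(name="total_duration_ms"))
```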
Hi Pupil Labs, I'm using the Real-Time API in Python and I have a question: can I use multithreading with the Real-Time API?
You will want to do all of your Real-Time API interactions within the same thread, but that can be a background thread or the main thread. There's also an asynchronous interface. A sketch of the background-thread pattern is below.
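As an illustration only, here is a minimal sketch of that pattern using the simple interface of the `pupil-labs-realtime-api` package; treat it as a demo of keeping all API calls on one thread rather than a complete application:

```python
import threading

from pupil_labs.realtime_api.simple import discover_one_device

def gaze_worker(stop_event):
    # Create and use the device entirely within this one thread.
    device = discover_one_device()
    try:
        while not stop_event.is_set():
            gaze = device.receive_gaze_datum()  # blocks until a sample arrives
            print(gaze.x, gaze.y, gaze.timestamp_unix_seconds)
    finally:
        device.close()

stop = threading.Event()
thread = threading.Thread(target=gaze_worker, args=(stop,), daemon=True)
thread.start()
# ... the main thread is free to do other work here ...
# To shut down cleanly: stop.set(); thread.join()
```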
Hi Pupil Labs, if I download the Timeseries CSV, or the Timeseries CSV and scene video, only the info.json file is downloaded. As I am particularly interested in the CSV files, I am wondering how to access them.
Hi @user-df855f! It sounds like you could be using the Safari browser? If so, check out this part of our Troubleshooting docs.
Hi Pupil Labs,
During a recording session, I encountered this issue: "Recording error: We have detected an error during recording!" but I am unsure of the cause. After replugging the USB-C cable, the issue was resolved. To ensure smooth operation in upcoming experiments, could you provide guidance on how to prevent this from happening again?
Thank you in advance
Hi @user-e33a15! Please open a support ticket in the troubleshooting channel and we can assist you from there.