πŸ•Ά invisible


nmt 01 June, 2023, 05:38:08

Events

user-518de2 01 June, 2023, 06:17:52

Hi guys,

I am having some issues with the 'reference image mapper' function. Every time I try to create an enrichment, it doesn't 'run' properly. In the past, prior to the change in interface, I was able to get this function to work, however now it seems to have been altered? Just for further information: I have a reference image which is clear + a scanning video of me holding the glasses while going over the scene for 40 seconds from multiple angles. As I said, this was enough in the past, but now it keeps failing. I've tried multiple different pictures + scanning videos, but still it either doesn't 'run' or 'runs' and then has an error.

nmt 01 June, 2023, 09:08:21

Reference Image Mapper testing

user-20a5eb 01 June, 2023, 12:40:37

Hi everyone, I didn't think we could record blinking with Pupil Invisible, but when looking at the data downloaded from Pupil Cloud I found a blinks.csv. Is blinking now recorded by Pupil Invisible?

nmt 01 June, 2023, 12:43:41

Hey @user-20a5eb! Blinks are computed when recordings are uploaded to Pupil Cloud. Read more about that here: https://docs.pupil-labs.com/invisible/basic-concepts/data-streams/#blinks

user-828fbe 02 June, 2023, 08:01:13

Hey, I have a recording on my phone with a duration of 0 sec. When I click on it, the recording actually plays for 6.22 min. Is there a way for me to properly extract this data from the phone and analyze it in Pupil Player?

user-828fbe 02 June, 2023, 08:41:41

Hey, second question. I want to use Pupil Player, but I'm missing some important files. I exported the data from the phone, but the timestamps and videos are missing. How can I fix this?

Chat image

nmt 02 June, 2023, 10:56:52

Recording duration inconsistent

user-746503 06 June, 2023, 16:59:58

Hi, I was wondering, is there any way to download enrichments using the cloud API v2? I am able to download recordings, list projects, etc., but no luck for enrichments. And it would be super cool to create enrichments with the API. Is there any hidden feature not documented here (https://api.cloud.pupil-labs.com/v2/), or any plans to allow that in future versions? Or am I missing something completely?

user-d407c1 07 June, 2023, 06:18:47

Hi @user-d4e38a ! I'm replying to your message https://discord.com/channels/285728493612957698/285728493612957698/1115829174603628616 here, as it seems like you are using Pupil Invisible.

This sounds like an issue with the cable, but to be sure, would you mind contacting us at info@pupil-labs.com so we can follow up with further debugging steps?

user-cd03b7 07 June, 2023, 18:39:05

Hey guys, what kind of test environment / scan environment disparities have you been able to get away with for reference image mapping? Let's say I want to scan the inside of a subway car cabin; the banner ads across different subway cabins will differ, but the core geometry of the inside of the cabin will remain the same. The specific features I'm trying to measure are consistent across cabins (directions, security information, etc.), but I'm concerned that the disparities in the environments will not be accepted by Pupil Cloud when I try to upload them.

marc 08 June, 2023, 08:28:57

Hi @user-cd03b7! It's difficult to give a definitive answer to this. Generally, subway cabins are rather feature-rich, with e.g. patterns on the seats, salient emergency information on the walls, lots of geometry, etc. For the Reference Image Mapper it is generally okay if some of the feature-rich regions change in appearance (like the ad banners), as long as other feature-rich regions remain, which would be the case here. On the subway, the number of other passengers in the cabin would clearly also play a role.

I would expect the Reference Image Mapper to perform decently in this setting, but I'd definitely recommend a pilot test to double-check. When selecting a reference image, you should make sure that enough static contextual information is available. If e.g. the ad banners were one of the AOIs, the reference image should be taken from far enough away that additional feature-rich regions are visible in the image, which the algorithm can "hold on to".

user-c16926 09 June, 2023, 07:19:30

Hello, I am a new user of Pupil Invisible. I wonder if there is a way to directly extract gaze positions from raw data, without using Pupil Cloud or Pupil Player?

user-cdcab0 09 June, 2023, 07:27:01

Hi, @user-c16926 πŸ‘‹πŸ½ - indeed you can (see: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html), although it should be noted that gaze data that's been processed on Pupil Cloud will have a higher sample rate than data exported directly from the device

user-c16926 09 June, 2023, 07:32:50

Thanks, Dom! I have exported the data from the smartphone to my PC. I get a lot of files like gaze_ps1.raw or gaze_right_ps1.time. Is there a way to convert this type of data into the same human-readable format as gaze_positions.csv?

user-cdcab0 09 June, 2023, 08:03:55

Are you comfortable with a little Python?

import numpy as np


def read_timestamps(path: str) -> np.ndarray:
    """Read gaze timestamps from a binary file (e.g. 'gaze psX.time')."""
    # Timestamps are stored as little-endian unsigned 64-bit integers
    return np.fromfile(str(path), dtype="<u8")


def read_raw(path: str) -> np.ndarray:
    """Read raw gaze data from a binary file (e.g. 'gaze psX.raw')."""
    # Gaze samples are stored as little-endian 32-bit floats, two per sample
    raw_data = np.fromfile(str(path), dtype="<f4")
    return raw_data.reshape(-1, 2)  # one (x, y) pair per row
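To put those together, a hypothetical usage example (the file names follow the recording-folder contents described above) that writes a human-readable CSV similar to gaze_positions.csv:

import pandas as pd

timestamps = read_timestamps("gaze ps1.time")  # one uint64 timestamp per gaze sample
gaze = read_raw("gaze ps1.raw")                # shape (N, 2): gaze x, y

n = min(len(timestamps), len(gaze))  # guard against a partially written file
pd.DataFrame({
    "timestamp": timestamps[:n],
    "gaze_x": gaze[:n, 0],
    "gaze_y": gaze[:n, 1],
}).to_csv("gaze_positions.csv", index=False)
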
user-c16926 09 June, 2023, 08:55:18

Thanks a lot guys, that was really helpful!

user-e40297 09 June, 2023, 12:13:39

Hi, I'm not sure if the question has been asked before (it probably has, but I couldn't find it). I've made a recording with Pupil Invisible (and hopefully soon Neon πŸ™‚) for which I would like to read out gaze data and gyroscope data using Python, without pulling it through Pupil Player, since exporting is so time-consuming (at least the writing of the movie). Is there any way to do so?

marc 09 June, 2023, 12:18:33

Hi @user-e40297

The most convenient way would be to use the Pupil Cloud Downloads: https://docs.pupil-labs.com/export-formats/recording-data/invisible/#imu-csv

Alternatively, you can access the data also via the real-time API: https://docs.pupil-labs.com/invisible/real-time-api/introduction/

It's also possible to read the raw sensor files saved on the phone. Reading gaze data is described here: https://discord.com/channels/285728493612957698/633564003846717444/1116638798005162006

user-e40297 09 June, 2023, 12:24:02

Thanks a lot. But this is for Neon, isn't it? Does it work the same for Invisible? In my recording I can't seem to find the JSON and CSV files (only after Pupil Player)

Ideally I would not like to use the cloud for privacy reasons (patients) until my organisation has made a decision

marc 09 June, 2023, 12:27:34

Sorry, I did indeed post the links to the Neon documentation. I updated them! Data formats are mostly identical for Neon and Invisible though.

user-e40297 09 June, 2023, 12:25:03

I mainly see npy and pldata

marc 09 June, 2023, 12:28:11

When you open a recording in Pupil Player, it edits some of the files in the recording folder. The npy and pldata files are all generated by Pupil Player.

marc 09 June, 2023, 12:29:16

If you download a recording off of the phone, you should see files like e.g. gaze ps1.raw, which you could read using the Python code.

marc 09 June, 2023, 12:29:40

The json and CSV files are only available in the download from Pupil Cloud.

user-e40297 10 June, 2023, 09:53:03

I'm sorry, but I still don't follow. Raw files are image files, aren't they? How do I open this file and read the content?

user-787054 09 June, 2023, 14:03:17

During recording through the Pupil Invisible glasses we encountered an error, and the recording was saved later. I have uploaded the recording file to Google Cloud, but I am unable to open/play the recording. How can I solve this problem? Please help me

user-d407c1 09 June, 2023, 14:30:04

Hi @user-787054 ! Could you elaborate on what error you saw? Was there any pop-up in the app?

When you say Google Cloud, do you mean Pupil Cloud?

Would it be possible for you to get the recording ID and share it with us, either here or at [email removed]

You can find this ID by right-clicking the recording in Cloud, selecting view information, and clicking on the copy icon next to the ID

user-787054 09 June, 2023, 16:00:30

Sorry for the mistake, it is actually Pupil Cloud, not Google Cloud. OK, the recording IDs are 7e172289-33c1-4510-9576-69b690a956c1 and aa4bbea1-2b18-42c9-b330-1207d4bff64d.

nmt 10 June, 2023, 10:51:17

Hi @user-e40297! The .raw files are our intermediate binary format. They're not really meant to be consumed directly, but you should be able to parse them with the code snippet my colleague, @user-cdcab0, shared earlier

user-787054 10 June, 2023, 11:04:22

Chat image

user-e40297 10 June, 2023, 11:05:00

Hi Neil, Thanks a lot. Is there a reason I shouldn't use it?

marc 12 June, 2023, 06:52:57

Reasons that speak against using the raw binary data include:
- lack of convenience in working with this format
- lack of documentation and example code, because all existing content assumes the other formats
- because it is an internal format, breaking changes to it may happen more frequently

If you don't mind those issues, using the raw binary format is fine. I can see how waiting for the video export can be annoying, especially for longer recordings. We are currently working on an official offline script for exporting the raw binary data to CSV (and also handling the video). Handling the video with it will still take a moment, but this script should make it a lot easier to batch-export a bunch of recordings, so you only have to wait a bit once, and it should also allow you to skip video export in case you don't need it. It shouldn't take too much longer until that is available, but until then you can use the script snippets above!

user-e40297 10 June, 2023, 11:13:02

I would be perfectly happy using the CSV files. Only exporting and creating the movie requires a lot of time

user-7c714e 12 June, 2023, 09:27:42

Hi @marc, do you intend to make more than 1 AOI visible at the same time in Pupil Cloud (as is the case in Pupil Player)? I think this is the most important thing and a great disadvantage of Pupil Cloud, because when I set one AOI (in the middle) and I want to define two more to the left and the right, it is impossible to know where the boundaries of the first AOI are. In this matter Pupil Player is much better, but no fixations are calculated, and one can only analyze each subject's gaze separately (I can't put all participants at once).

user-df1f44 12 June, 2023, 10:13:39

Hey folks, I can't seem to generate any heatmaps using the reference image mapper - I am quite sure I have followed the recommended steps to a tee. Help!

user-cca81b 12 June, 2023, 12:17:45

Processing videos in the cloud takes an unusually long time, several hours even for very short recordings. What can I do?

Chat image

user-480f4c 12 June, 2023, 12:21:19

Hey @user-df1f44 πŸ‘‹ ! Could you please clarify if the Reference Image Mapper enrichment has been completed successfully? Can you see the gaze mapping from the recording onto your reference image?

user-df1f44 12 June, 2023, 12:25:19

Hi @user-480f4c - Well, the enrichment starts successfully, but I don't get any feedback on successful completion - and no, I do not see any (live, if that is what you mean) gaze mapping onto the reference image while the video + enrichment is running.

user-480f4c 12 June, 2023, 12:50:50

Thanks for the clarification @user-df1f44! Usually, it takes a while until the enrichment is completed. It's also possible that the enrichment has already been completed, but the browser hasn't reflected that. In this case, try performing a hard refresh (command + shift + R).

However, if the enrichment cannot be completed, this might indicate that the mapping failed. I'd recommend checking our scanning best practices https://docs.pupil-labs.com/enrichments/reference-image-mapper/#scanning-best-practices

user-df1f44 12 June, 2023, 13:00:34

Thanks, will have another go and let you know how we get on.

marc 12 June, 2023, 13:18:32

Hi @user-7c714e! Thanks for the feedback! We have gotten similar requests from other users too and are in fact currently looking into this! I don't have a precise ETA yet, but we'll have an update to address this soon.

user-7c714e 12 June, 2023, 17:16:37

Hi @marc , I hope Pupil Labs can develop it ASAP, because I can only use Pupil Player when it comes to AOIs on roads, and it is extremely difficult and time-consuming with the number of subjects we have. Thanks!

user-cca81b 13 June, 2023, 08:10:24

Video Data is still processing, can you help?

Chat image

marc 13 June, 2023, 08:57:36

Hi @user-cca81b! >16 hours is longer than expected for sure. Could you please share the recording ID of one of the affected recordings? You can find it by right-clicking on the recording and selecting View recording information.

user-162faa 13 June, 2023, 11:05:18

I am also having this issue. How long should processing normally take?

marc 13 June, 2023, 11:14:41

Hi @user-162faa! Could you please also share a recording ID of one of the affected recordings with me? Processing time depends on the length of the video and on the general load on the servers, so it does vary. Usually, processing times should not be much longer than a small multiple of the recording duration. Even in extreme cases they should not exceed a couple of hours for recordings that are just a couple of minutes long.

Sorry for the inconvenience here! Given that there are multiple reports, there definitely seems to be an issue on our end somewhere! The team is already looking into it!

user-057596 13 June, 2023, 14:07:38

With the new Pupil Cloud formats, do you have to re-run the enrichments you have already applied to uploaded videos, or are they instantly transferred into the new format?

marc 13 June, 2023, 14:56:42

@user-162faa @user-cca81b We have found and resolved the cause of the issue with recording processing. Processing should now actually start, and the accumulated queue of recordings should be finished within the next 1-2 hours. So check back soon!

user-cca81b 14 June, 2023, 08:35:41

@marc Unfortunately, this did not work, and most of the recordings are still being processed or have stopped, giving me this feedback: "Video transcoding failed for this capture. Gaze pipeline failed for this capture. We have been notified of this issue and are working on a fix. Please check back later or contact [email removed]" However, some short videos were processed correctly.

user-6127f0 14 June, 2023, 09:39:26

Hi, my research team and I were doing a test with Pupil Invisible, and once we did the recordings, it gave an error and did not allow us to create a project. Do you know what the problem could be?

Chat image

marc 14 June, 2023, 10:15:10

Hi @user-6127f0 and @user-cca81b! It indeed seems like the issue from yesterday is not entirely resolved. Apologies again for the inconvenience. We are actively working on it again!

user-6127f0 14 June, 2023, 10:15:37

Thank you!

marc 14 June, 2023, 11:13:34

@user-6127f0 @user-cca81b We have localised the issue that led to the error during recording processing and fixed it. We have triggered all affected recordings to be processed again. It's quite a few recordings, so it will take a couple of hours until all is done, but then they should finally all be available, and future recordings should go through with no more issues!

user-c46729 15 June, 2023, 03:08:26

hi there, I am new here and have a quick question. When we watch our recording on the cloud, the gaze is clearly visible, yet when we download the recording, the gaze disappears. Could you please help me out with how to download while keeping the gaze in? It is crucial for us to be able to do this. thanks

user-cdcab0 15 June, 2023, 04:42:51

Hi, @user-c46729 πŸ‘‹πŸ½ ! You can use the Gaze Overlay enrichment to create a downloadable video that has what you want. Information about different enrichment options, including Gaze Overlay, is available here: https://docs.pupil-labs.com/enrichments/

user-3437df 15 June, 2023, 05:04:12

Hi there, are there any suggestions for post-hoc gaze calibration for pupil invisible? In our study, we've calibrated the glasses prior to each recording. The participants are about 1.5m apart, and as we start the recording, we ask participants to describe where they are looking at. However, there still seems to be an offset and we are wondering what are some ways to account for this. Thanks!

nmt 15 June, 2023, 12:44:37

Hey @user-3437df! There's currently no way to do a post-hoc offset correction in the Companion App or Pupil Cloud, although this is an open feature request. Feel free to upvote here: https://feedback.pupil-labs.com/pupil-cloud/p/post-hoc-offset-correction As a workaround, check out this message: https://discord.com/channels/285728493612957698/633564003846717444/820977054090002442
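For anyone scripting such an offset correction offline, here's a minimal sketch that applies a constant offset to an exported gaze CSV (the column names follow the Pupil Cloud export; the offset values are hypothetical and would need to be estimated per recording):

import pandas as pd

gaze = pd.read_csv("gaze.csv")
dx, dy = 12.0, -8.0  # hypothetical offset in scene-camera pixels
gaze["gaze x [px]"] += dx
gaze["gaze y [px]"] += dy
gaze.to_csv("gaze_corrected.csv", index=False)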

user-4771db 15 June, 2023, 12:58:00

Hi Pupil Labs Team, I have 2 questions regarding the Pupil Cloud data download. Maybe you can help me out here:
1. After the Marker Mapper enrichment, I downloaded the data and saw a file called "surface_positions.csv". Am I correct in assuming that this is the data you described as "aoi_positions.csv" in your documentation (https://docs.pupil-labs.com/export-formats/enrichment-data/marker-mapper/#aoi-positions-csv) and that only the name is different?
2. I wanted to download the raw data export, but after clicking "download" the API closes (see screenshot). My project is quite large, with 567 recordings of about 4 min each. Could this be the problem? Do you know a workaround for that?

Thanks in advance for your help! πŸ™‚

Chat image

marc 15 June, 2023, 14:41:23

Hi @user-4771db!

1) Yes, your assumption is correct! That's a bug in the docs πŸͺ²

2) Yeah, 567 recordings means the ZIP that's built in the background would contain 1000s of files. This might very well be a challenge for the current infrastructure. A workaround would be to download the recordings from the Recordings view rather than the Project view. You could filter the recordings by project association and then download them in smaller chunks. The data you'd get this way is the same.

user-e31301 16 June, 2023, 10:50:30

Hi! I am starting to use the new Pupil Cloud version and I am wondering how I can compute my analysis. I have created two Reference Image Mapper enrichments and run them. Now, in one of them I see a comment saying that the enrichment is ready to be computed, but how can I do that? And the second enrichment resulted in an error. Why could that have happened? Thanks for any help you could provide πŸ˜‰

user-d407c1 16 June, 2023, 13:31:13

Hi @user-e31301 ! Could you please share the enrichment IDs with us? You can obtain those by right-clicking on the enrichment in the left panel.

For the one that says ready to run, can you click on the purple Run button and check if something happens? And for the other one, is there any additional info about the error? Thank you

user-e40297 16 June, 2023, 13:21:16

I was hoping to get the roll array out of the extimu ps1.raw file. I found the x, y, z gyroscope and acceleration input data. Is there "an easy way" to calculate this roll variable without using Pupil Player? Or is there a way to tell Pupil Player to spit out the CSV files without making a new video in the output?

user-d407c1 16 June, 2023, 13:58:12

Hi @user-e40297 ! I assume that you don't want to use the Cloud? That's the easiest way to directly obtain roll, but if you want to programmatically obtain it please have a look at https://github.com/pupil-labs/pupil/blob/c309547099099bdff9fac829dc97b546e270d1b6/pupil_src/shared_modules/imu_timeline.py#L91
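For orientation, a generic accelerometer-based estimate looks roughly like this (a sketch only; the axis conventions in the linked imu_timeline.py may differ, so cross-check against that source):

import numpy as np

def roll_pitch_from_accel(accel: np.ndarray):
    """Estimate roll and pitch in degrees from (N, 3) accelerometer samples.

    Assumes gravity dominates the signal; the (x, y, z) axis order is an
    assumption and may not match the Pupil Player implementation.
    """
    ax, ay, az = accel[:, 0], accel[:, 1], accel[:, 2]
    roll = np.degrees(np.arctan2(ay, az))
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    return roll, pitch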

user-5b371f 17 June, 2023, 08:49:08

Hi Pupil Labs, I am trying to undistort the world video using the Cloud, which works fine, but I don't seem to have access to the new gaze coordinates. Is there a way of getting those from the cloud?

user-5c56d0 17 June, 2023, 13:21:01

Thank you for your support. Could you please answer the following questions? They concern both Pupil Invisible and Neon.

Q1. In the CSV named blinks, there are start_time and end_time. What are the definitions of these? For example, which of the following is start_time? (1) the frame at the moment the eyelid closes (i.e., the blink is recognized from the eyelid border), or (2) the frame at the moment the pupil begins to be hidden by the eyelid.

Q2. Which of the following is the blink recognition method? (1) Recognizing the blink from the eyelid border. (2) Recognizing the blink from whether the pupil is hidden or not.

Q3. If I want a timestamp or index of the turning point of a blink (the point at which the eye is completely closed), by what means can I obtain it? The blink turnaround point is the point where the eye is completely closed after the start of the blink, as shown in the right figure in the attached image.

user-5c56d0 17 June, 2023, 13:21:15

Chat image

user-6cf4d7 18 June, 2023, 17:58:39

Hi there! I managed to create a heatmap using the Reference Image Mapper. Now I want to distinguish between two different groups in the same project and create a heatmap for each. For example, I want to create a heatmap of the recordings called 'used before' and a separate heatmap of the recordings called 'never used'. I have already tried to do this by adding a filter, but then it still seems to create a heatmap of all the recordings in the project. How can I create two different heatmaps in the same project? Can you help me out? Thanks!

user-d407c1 19 June, 2023, 07:13:24

Hi @user-5b371f ! There is currently no way to obtain the corrected gaze positions in Cloud. I have gone ahead and created a feedback entry on our Canny board to request this feature: https://feedback.pupil-labs.com/pupil-cloud/p/provide-undistorted-gaze-positions

Feel free to upvote it. In the meantime, here is a tutorial on how to undistort the video and get the new gaze data: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/intrinsics/
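In essence, the tutorial boils down to OpenCV's undistortion functions, roughly like this (the intrinsics below are placeholders; use the real scene-camera values as shown in the tutorial):

import cv2
import numpy as np

# Placeholder intrinsics; take the real camera matrix and distortion
# coefficients for your scene camera as described in the tutorial
K = np.array([[766.0, 0.0, 544.0],
              [0.0, 766.0, 544.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.13, 0.11, 0.0, 0.0, 0.0])

frame = cv2.imread("scene_frame.png")  # one scene-video frame
undistorted_frame = cv2.undistort(frame, K, D)

# Map a gaze point (pixels, distorted image) into the undistorted image
gaze = np.array([[[700.0, 540.0]]], dtype=np.float32)
gaze_undistorted = cv2.undistortPoints(gaze, K, D, P=K).reshape(2)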

user-5b371f 19 June, 2023, 09:18:50

Thanks

user-480f4c 19 June, 2023, 07:15:16

Hi JellaπŸ‘‹ ! The heatmap generated in one project includes the data from all recordings present in this project. I’d recommend you split your recordings into two projects, e.g., β€œused before” and β€œnever used”, assign the relevant recordings in each one, and apply a Reference Image Mapper enrichment in each project to generate a "used before" and "never used" heatmap. I hope this helps!

user-6cf4d7 19 June, 2023, 08:41:06

Hi Nadia! Thank you for your help! I will try this instead πŸ™‚

user-d407c1 19 June, 2023, 07:18:25

Hi @user-5c56d0 ! Here is the description of all fields in the blinks.csv file that you can obtain from Neon and Pupil Invisible: https://docs.pupil-labs.com/export-formats/recording-data/neon/#blinks-csv

Unlike Core, Neon and Invisible employ a different blink detector that operates directly on the eye video to detect the movement patterns of blinks. You can read more about this here: https://docs.google.com/document/d/1JLBhC7fmBr6BR59IT3cWgYyqiaM8HLpFxv5KImrN-qE/export?format=pdf

user-5c56d0 19 June, 2023, 09:04:40

Thank you for your [email removed]

user-94f03a 19 June, 2023, 14:27:26

Hi! Our Android device for the Invisible broke down. We want to still do some (informal) testing (i.e. not proper data collection) while we are figuring out how to replace it. I saw that the Companion app only works with compatible OnePlus devices, which we don't have. Is there an option to use it with a non-recommended device for now?

user-c2d375 19 June, 2023, 14:49:28

Hi @user-94f03a πŸ‘‹ I am sorry to hear that your Companion device is not working. Unfortunately, the Invisible Companion App is compatible only with the OnePlus 6, 8 and 8T. Feel free to send us an email at info@pupil-labs.com and we can help you get a running device.

user-518de2 20 June, 2023, 01:34:00

Hi guys! Hope you're well. I am just wondering if it's possible on Pupil Cloud... I recorded some data on a different workspace; is it possible to move those recordings to another workspace? I am struggling to see if I can upload files rather than transfer them onto the workspace via the cloud. Thanks.

user-480f4c 20 June, 2023, 06:34:22

Hey @user-518de2 πŸ‘‹ ! It is currently not possible to move recordings between workspaces. Could you clarify what you mean by "see if I can upload files rather than transfer them onto the workspace via the cloud"?

user-cc819b 20 June, 2023, 12:19:00

In the Pupil Invisible Companion App, when setting a manual exposure, what units are used for this value? I am assuming milliseconds, but can't find a definitive answer in the docs

user-d407c1 21 June, 2023, 08:51:53

Hi @user-cc819b ! Exposure Time (Absolute) Control is used to specify the length of exposure. This value is expressed in 100 Β΅s units, where 1 is 1/10,000th of a second, and 1,000 would be 0.1 second.

Note that high exposure values can reduce the frame rate of the scene camera. In auto exposure the frame rate is locked at 30 FPS, but this is not the case for manual exposure
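In other words (a hypothetical helper, just to make the unit conversion explicit):

def exposure_setting(ms: float) -> int:
    """Convert a desired exposure time in milliseconds to the app's
    100 microsecond units (10 ms -> 100, 100 ms -> 1000)."""
    return round(ms * 10)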

user-cc819b 22 June, 2023, 10:26:28

thanks!

user-eebae9 21 June, 2023, 14:43:46

I'm having some problems with the Reference Image Mapper on Pupil Cloud. It's the first time I'm trying to add the RIM enrichment on the new cloud, but it doesn't seem to work. When I click "Run", I get an "internal error" message, but when I run it again it says "Enrichment Running". I don't see a progress bar of the enrichment processing anywhere (as was the case on the old Pupil Cloud). How do I know if the processing has actually started and is running? I started a few RIM enrichments yesterday, but I don't see the gaze mapping on the reference image, so I'm not sure if it's working or not...

user-480f4c 21 June, 2023, 15:59:46

Hey @user-eebae9 πŸ‘‹ ! Thanks for bringing this to our attention. We addressed the issue and it should work fine now. Could you try creating/running the enrichment again? It usually takes a few minutes until the enrichment is completed but if you don't see any updates after a while, it's also possible that the enrichment is done, but the browser hasn't reflected that. In this case, try performing a hard refresh (command + shift + R). Hope this helps! Please let us know if you keep having issues with that

user-eebae9 22 June, 2023, 09:42:26

Thanks Nadia, it worked!

user-3437df 22 June, 2023, 03:49:59

In one of our recordings, participants are asked to vocalize what they're looking at. What's confusing is the first time they look at something, the gaze tracking is quite far off. After looking at various other objects, they eventually come back to the same object but the gaze tracking is very good. The participant appears to be in the same position, and the position of the object hasn't really changed relative to the participant. Do know what could be happening there?

user-480f4c 22 June, 2023, 12:09:13

Hey @user-e66c8c - I'm replying to your message (https://discord.com/channels/285728493612957698/285728493612957698/1121410590754226246) here. To get the data in Pupil Player format, please go to the Workspace Settings section and then click on the toggle next to "Show Raw Sensor Data". There you can enable this option in the download menu. Please keep in mind that you need to enable this feature for each workspace from which you'd like to download the raw data in the Pupil Player format.

user-e66c8c 22 June, 2023, 12:12:25

@user-480f4c can you tell me where Workspace Settings is? I can't find it

user-e66c8c 22 June, 2023, 12:12:39

ok, I've found it

user-e66c8c 22 June, 2023, 12:14:02

@user-480f4c thanks for your help, I've managed to resolve the problem

user-2ecd13 23 June, 2023, 19:47:53

Hey, quick question. Is there any way to recover a recording that's been deleted from the Pupil Invisible app before it could be uploaded to the cloud?

Additionally, are there any plans for adding a confirmation button before deleting a single recording?

user-726828 24 June, 2023, 12:56:18

Hey, I have a question. I accidentally deleted a recording on Pupil Cloud. I checked "show trashed" to restore the deleted recordings. However, I got "Table filter url is invalid", "Go back to default filter" and could not find the erased recording. Can I access the recycle bin and restore deleted recordings?

user-480f4c 26 June, 2023, 12:05:33

Hey @user-726828 πŸ‘‹ ! You are right, there was an issue, but it should be fixed now. To restore the recording, right-click on the trashed recording and the option of restoring it should appear. Could you please check again and let us know if you can view/restore the trashed recordings?

user-3c26e4 26 June, 2023, 10:03:20

Hi @marc , today I can't log in to Pupil Cloud. Is there something wrong with the server?

marc 26 June, 2023, 10:31:27

Hi @user-3c26e4! Yes, there was an issue, which has been resolved in the meantime. Login should work normally again now!

user-eebae9 26 June, 2023, 12:51:36

Hey Anne ExoInsights πŸ‘‹ Thanks for

user-726828 26 June, 2023, 13:54:19

Hey @user-480f4c I just checked and was able to view the deleted recording. I was also able to restore the recording. Thank you very much for your support!

user-3437df 27 June, 2023, 03:51:38

Hi there, in some of our recordings, participants are asked to vocalize what they're looking at. What's confusing is the first time they look at something, the gaze tracking is quite far off. After looking at various other objects, they eventually come back to the same object but the gaze tracking is very good. The participant appears to be in the same position, and the position of the object hasn't really changed relative to the participant. Do you know what could be happening there? Thanks

mpk 27 June, 2023, 06:07:26

@user-3437df this sounds a bit unusual. Would you be able to share a recording of that?

user-3c26e4 27 June, 2023, 12:10:40

Hi, could you please tell me ASAP how long the gaze with world index 433 would be? Is the difference between gaze timestamps in ms?

user-3c26e4 27 June, 2023, 12:10:42

Chat image

user-7c714e 27 June, 2023, 12:57:38

Or is it in [s]?

user-8b8c72 27 June, 2023, 15:05:30

Hi Marc, I have a similar question. I currently have 2 OnePlus 6 Invisibles, but only one cable works. Maria has suggested a new cable for me, but my next question is whether there is an upgrade path for handsets. It looks like I have to remain on Android 8.1 with my current handsets... (in the chat, Android 9 is not OK)... or is it possible to directly install Android 11? Alternatively, what are admissible new handsets to buy? (I can no longer use my work wifi because they only allow Android 10 or 11.) Thanks

user-480f4c 27 June, 2023, 15:24:56

Hi @user-8b8c72 πŸ‘‹ ! We recommend that you don't allow Android system updates on your device. Some Android versions have issues with accessing USB devices, rendering them incompatible with Pupil Invisible. Please refer to our documentation regarding the compatible companion devices and the currently supported Android versions: https://docs.pupil-labs.com/invisible/glasses-and-companion/companion-device/#companion-device I hope this helps!

user-8b8c72 27 June, 2023, 15:44:24

Hi Nadia, thanks for that. I always tell my students, never upgrade anything (ever) πŸ™‚ So my best option would be an 8T, I will see if the budget can stretch. Thank you.

user-a98526 28 June, 2023, 07:01:21

Hi @marc, I would like to know if Invisible's real-time API can provide some advanced information such as blink index and fixation index.

user-d5d71f 29 June, 2023, 05:27:42

Hi, I had the same question, especially in regards to blink index. I want to use the real-time python API to stream blink information; is this possible in any way?

user-e40297 28 June, 2023, 12:49:43

Hi, I'm experiencing difficulties interpreting the .time files. I'm using the code you provided earlier:

def read_timestamps(path: str) -> np.ndarray:
    """Read gaze timestamps from a binary file (e.g. 'gaze psX.time')."""
    return np.fromfile(str(path), dtype="<u8")

I am interested in the world-cam time values (PI world v1 ps1.time?), gaze time data (gaze_right ps1.time?) and gyroscope times (extimu ps1.time?). But the time values seem to differ from the values in the CSV files...

user-d407c1 28 June, 2023, 14:04:21

Hi @user-e40297 πŸ‘‹ ! Are you comparing the .csv files produced by Cloud with the gaze raw files?

Those are not exactly comparable. The Companion device would complain that too much power is drained from the phone if gaze estimates were made at 200Hz; that's the reason why from the Companion device/phone you get gaze estimates at 120Hz. It is when the videos are processed in the Cloud that you get gaze estimates at 200Hz.

That might be the reason why you see such differences, and one of the reasons we recommend not working with the raw files directly.
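As a sanity check when comparing the on-device .time files with the Cloud CSVs, something like this can help (assuming the values are UTC nanoseconds since the Unix epoch, matching the "timestamp [ns]" columns in the Cloud export):

import numpy as np
import pandas as pd

ts = np.fromfile("PI world v1 ps1.time", dtype="<u8")
# If the assumption holds, these datetimes should line up with the CSVs
print(pd.to_datetime(ts[:5], unit="ns", utc=True))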

user-e40297 28 June, 2023, 13:52:03

Am I using the correct files with the correct snippet?

user-e40297 28 June, 2023, 14:29:01

No I"m looking at the files stored on the phone

user-d407c1 29 June, 2023, 07:41:10

Hi @user-a98526 & @user-d5d71f ! Blinks and fixations are not immediately available via the real-time API. However, we plan to open-source the code for our blink and fixation detectors in the near future, including potential real-time versions that could be used.

Please note that our blink detector requires eye images, which are only available in real-time for Neon. Therefore, using our algorithm, real-time blink detection with Pupil Invisible is not possible.

On the other hand, our fixation detector uses the gaze signal and scene video, both of which are available in real time via the API for Pupil Invisible. This means that real-time fixation detection with Pupil Invisible will be possible once released.

user-e93961 29 June, 2023, 11:47:09

Hi, I tried to install the real-time API by writing "RUN pip install pupil-labs-realtime-api" in my Dockerfile. But I get the error "could not find a version that satisfies the requirement pupil-labs-realtime-api (from versions: none)", "no matching distribution found for pupil-labs-realtime-api". Can someone help me out? Thanks!

user-20a5eb 29 June, 2023, 12:51:20

Hi, I want to try something with Pupil Invisible. The idea is that the wearer would be seated in front of a scene, and someone on the scene would move. How can I get categories of what the wearer is looking at, considering that the person on stage would be at different coordinates over time?

user-480f4c 29 June, 2023, 13:05:56

Hey @user-20a5eb πŸ‘‹ ! Are you interested in capturing gaze onto the moving person on the scene? If so, check out our Alpha Lab tutorial on how to map gaze onto body parts: https://docs.pupil-labs.com/alpha-lab/dense-pose/

user-20a5eb 29 June, 2023, 13:08:15

That's amazing @user-480f4c , thanks! Is it possible to also capture other objects of interest that would be moving? Like a lamp a dancer would take with them?

user-480f4c 29 June, 2023, 13:12:20

This tutorial relies on DensePose, a method for human pose estimation and human body part segmentation, so it would not work for tracking single objects and mapping gaze onto them.

user-9894cd 29 June, 2023, 15:56:54

Hi, I have a question regarding gaze/fixation overlay in exported video from pupil cloud. I intend to annotate world camera video in the Elan annotation tool. We've previously done this with recordings from older Pupil Labs glasses by simply exporting video with gaze overlay using the Pupil Player. However, I can't get recordings made with Invisible to work with the Player app, and I find no option to export video with overlay directly from Pupil Cloud. How do you suggest I approach this?

user-c2d375 29 June, 2023, 16:08:36

Hi @user-9894cd πŸ‘‹ I wanted to ask whether you've been exporting recordings from the Companion device or from Pupil Cloud for uploading to Pupil Player. In the second case, in the latest UI version of Pupil Cloud you'll need to enable the download of the Pupil Player compatible format in your workspace settings by toggling the "Show Raw Sensor Data" option. By the way, you can also generate and download gaze overlay videos using the Gaze Overlay enrichment. For more information, please refer to our documentation at this link: https://docs.pupil-labs.com/enrichments/gaze-overlay/#gaze-overlay

user-9894cd 29 June, 2023, 16:24:57

Sorry, found it!

user-9894cd 29 June, 2023, 16:23:06

The overlay also seems like a very good option, but I didn't realize I could download the gaze-overlaid video. How do I do that? I mean, I know I can create the Gaze Overlay enrichment; it runs and says "completed". But how do I download the generated video?

user-9894cd 29 June, 2023, 16:21:12

Thanks a lot for the suggestions, I enabled the raw format in the settings and now it plays just fine.

user-9894cd 29 June, 2023, 16:25:28

The download button at the bottom of the screen was not the most visible, but it works πŸ˜‰

user-c2d375 29 June, 2023, 16:26:09

Thanks for the feedback! I am glad to hear you're able to download the gaze overlay video πŸ˜„

user-9894cd 29 June, 2023, 16:26:03

Many thanx for the help!

user-9894cd 29 June, 2023, 16:26:44

I'll take the opportunity to also ask something else.

user-9894cd 29 June, 2023, 16:27:34

I've used the real-time API in Python and it works very well, but for testing it would be convenient to also be able to play back recorded data. Can I do that through the API?

user-9894cd 29 June, 2023, 16:28:56

Specifically, I want to grab each frame and gaze data from a recording and use it in a similar way as I get frames and gaze through the API.

user-9894cd 29 June, 2023, 16:30:28

I realize it's not that difficult to just open the video using OpenCV and parse the gaze CSV files manually, but a more uniform interface, allowing me to write code that could interface with either the live stream or a recording, would be very convenient.
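A rough sketch of such a wrapper could look like this (the file names and the "timestamp [ns]" column follow the Pupil Cloud download; a starting point under those assumptions, not an official API):

import cv2
import numpy as np
import pandas as pd

def matched_frames_and_gaze(video_path, world_ts_csv, gaze_csv):
    """Yield (frame, gaze_sample) pairs from a downloaded recording,
    roughly mirroring the real-time API's matched scene frame + gaze."""
    world_ts = pd.read_csv(world_ts_csv)["timestamp [ns]"].to_numpy(np.int64)
    gaze = pd.read_csv(gaze_csv)
    gaze_ts = gaze["timestamp [ns]"].to_numpy(np.int64)
    cap = cv2.VideoCapture(video_path)
    for frame_t in world_ts:
        ok, frame = cap.read()
        if not ok:
            break
        # pick the gaze sample closest in time to this scene frame
        i = int(np.abs(gaze_ts - frame_t).argmin())
        yield frame, gaze.iloc[i]
    cap.release()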

user-e93961 30 June, 2023, 02:04:39

(sorry for repeated message) Hi, I tried to install the real-time API by writing "RUN pip install pupil-labs-realtime-api" in my Dockerfile. But I get the error "could not find a version that satisfies the requirement pupil-labs-realtime-api (from versions: none)", "no matching distribution found for pupil-labs-realtime-api". Can someone help me out? Thanks!

user-cdcab0 30 June, 2023, 04:34:38

Hi, Mingtao - what version of Python are you building against?

user-e93961 30 June, 2023, 05:54:07

Hi Dom @user-cdcab0 ! I think I'm using Python 3.6.9 in the image. I checked the documentation and it seems the latest release requires 3.7. Can I pin some older version that still supports 3.6.9?

user-cdcab0 30 June, 2023, 05:57:35

No, I'm afraid that package has always required at least 3.7

user-e93961 30 June, 2023, 06:05:20

Ok, thank you!

user-4b9ddd 30 June, 2023, 09:27:54

Hi, we bought the Pupil Invisible in the old version, but we didn't order the corresponding mobile phone. We have been informed that we can order the mobile phone separately.

user-480f4c 30 June, 2023, 09:28:36

Hey @user-4b9ddd! I just replied to your message here: https://discord.com/channels/285728493612957698/446977689690177536/1124270142487011389

user-4b9ddd 30 June, 2023, 09:28:02

This one

user-4b9ddd 30 June, 2023, 09:28:02

Chat image

user-4b9ddd 30 June, 2023, 09:29:09

Yes, I saw it! Appreciate!!!

user-225cbc 30 June, 2023, 11:43:20

Hi, is there maybe maintenance going on in pupil cloud? It does not allow me to login and I have tried multiple browsers already. I saw someone else had the same problem a few days ago.

marc 30 June, 2023, 12:00:08

Hi @user-225cbc! We're having trouble finding an issue, as login seems to work fine from what we can see so far. Can you double-check whether login still doesn't work for you now?

user-7ad436 30 June, 2023, 15:38:16

Hi! Could I ask what is included with the hardware in terms of software and warranty? The same question for the Core model; otherwise I'll ask it directly in the corresponding chat

user-480f4c 30 June, 2023, 15:45:16

Hi @user-7ad436 πŸ‘‹ ! The software you need for recording and analysing data with Pupil Invisible is free (i.e., the Companion App on the phone is freely available on the Google Play Store, and Pupil Cloud access is included). Similarly, the software for Pupil Core is also free (Pupil Capture & Pupil Player). Our hardware is covered under a 12-month warranty. πŸ™‚

user-7ad436 30 June, 2023, 16:01:06

thank you so much!

End of June archive