πŸ•Ά invisible


user-6e1fb1 01 May, 2023, 09:50:09

Hi, I have repeatedly had the Pupil Companion app crash, usually in between recordings, after which it refuses to open because of zmq_msg_recv: errno 4

What does that mean exactly? Is there a way to avoid it, and how do I get the app up and running again as quickly as possible?

It usually takes me about 5 minutes of force-stopping the app, restarting the phone, etc. to get it running again.

user-6b1aa9 02 May, 2023, 10:13:46

Hi, I am trying to test our pair of glasses and it keeps throwing up an error message saying we need to enable 'PI world V1' when we try to use the Invisible Companion app - any idea what we should do here?

user-6b1aa9 02 May, 2023, 10:17:09

Chat image

user-cdcab0 02 May, 2023, 10:54:25

Hi, @user-6b1aa9 πŸ‘‹ - that appears to be a permission request, not an error message. It's a security feature of Android devices that apps must request permission from the user before accessing cameras. You'll want to click "Ok" on that message and the ones that follow. Once you've granted permissions for the Companion App to access the cameras it needs, you should never see those permission request screens again.

marc 02 May, 2023, 11:35:33

Also, in this type of permission prompt make sure to click the checkbox before hitting ok!

nmt 02 May, 2023, 12:50:23

zmq_msg_recv: errno 4
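
For context: errno 4 is the POSIX EINTR code ("interrupted system call"), i.e. a blocking zmq receive was interrupted by a signal before any message arrived. A minimal pyzmq sketch of how that error surfaces and can be retried (the endpoint here is hypothetical):

import errno

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:5555")  # hypothetical endpoint
sock.setsockopt_string(zmq.SUBSCRIBE, "")

while True:
    try:
        msg = sock.recv()  # may raise ZMQError(errno=4) if interrupted
        break
    except zmq.ZMQError as e:
        if e.errno == errno.EINTR:
            continue  # interrupted by a signal; safe to retry
        raise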

nmt 02 May, 2023, 13:13:12

Hi @user-4334c3 πŸ‘‹. Responding to your message (https://discord.com/channels/285728493612957698/285728493612957698/1102924666395435008) here. Our real-time api already offers the possibility to stream gaze and scene video over the network. It should be pretty trivial to receive this in ROS. Have you already looked into this? If not, you can find the documentation here: https://pupil-labs-realtime-api.readthedocs.io/en/stable/examples/index.html
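
For a quick start, here's a minimal sketch of receiving gaze with the simple blocking API from those docs (assumes pip install pupil-labs-realtime-api and the Companion device on the same network):

from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()  # finds the Companion app via mDNS
try:
    while True:
        gaze = device.receive_gaze_datum()  # blocks until the next gaze sample
        print(f"x={gaze.x:.1f}px y={gaze.y:.1f}px t={gaze.timestamp_unix_seconds}")
finally:
    device.close()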

user-4334c3 02 May, 2023, 14:03:47

I already managed to receive the data in a Python script over the network. I could also figure out how to send it from there (Python on a Raspberry Pi) to ROS. But my goal is to see the messages in ROS directly, without any further code on a Windows or Linux machine, i.e. without any script except for what's running on the smartphone.

user-4334c3 02 May, 2023, 13:59:09

(I was asked to post this here instead of πŸ‘ core)

Hi, I'm using Pupil Invisible for an eye-tracking project. The goal is to have a stream of the video, as well as the x and y coordinates of the gaze, in ROS as two individual messages. The best way to do this would be to have an Android application on the phone that is connected to the glasses that already sends this data in the right format. Are there any resources from your side that could assist me in developing such an app? Thanks in advance!

user-cd03b7 02 May, 2023, 18:15:17

Also - do you guys know if there's a way I can export the pupil cloud video preview? I need the video export to contain the fixations and saccades, but all the raw data export files are devoid of anything along those lines.

nmt 03 May, 2023, 19:41:40

So this is planned and on the roadmap. 🀞 we won't be waiting too long for it πŸ™‚ https://feedback.pupil-labs.com/pupil-cloud/p/export-of-fixation-scanpath

user-13f46a 03 May, 2023, 22:57:51

Question 1: Is there an algorithm that does time-level alignment for IMU & Blink data with Raw egocentric video? -- No need for gaze-to-video time-level alignment in my case.

marc 04 May, 2023, 08:07:19

Hi @user-13f46a! Is there a reason you want to use the gaze ps1.raw files rather than the gaze.csv files? The CSV files are much easier to work with and all our documentation is based on those, so I'd recommend you try using those.

1) For guidance on temporally aligning data streams, please check out this how-to guide (a small alignment sketch also follows below): https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/

2) The top left corner of the image is considered (0,0) and the order of the components is (x,y). You can find a visualization of this here: https://docs.pupil-labs.com/invisible/basic-concepts/data-streams/#gaze

3) When you say "data for the right eyes", what do you mean exactly? Note that the gaze right ps1.raw file contains debugging information only and does not constitute an independent gaze estimate from the right eye. The gaze estimates are generated using images of both eyes simultaneously.

4) Those are correct estimates. If subjects try hard enough they can look outside the FOV of the scene camera. The gaze estimation pipeline is capable of making gaze estimates outside of the FOV to a small extent (see the normalization sketch below).
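
Re 1), a hedged sketch of nearest-timestamp alignment with pandas (the file and column names are assumed from the Cloud export; adjust them to your data):

import pandas as pd

# merge_asof requires both tables to be sorted by the key column.
video = pd.read_csv("world_timestamps.csv").sort_values("timestamp [ns]")
imu = pd.read_csv("imu.csv").sort_values("timestamp [ns]")

# For each video frame, take the IMU sample closest in time.
aligned = pd.merge_asof(video, imu, on="timestamp [ns]", direction="nearest")

And re 2) and 4), a sketch of mapping pixel coordinates into Core's "2D Normalized Space" (origin bottom left, y up); the 1088x1080 frame size is an assumption, so use your recording's actual resolution:

def normalize_gaze(x_px, y_px, width=1088, height=1080):
    # Image coordinates: origin top left, y pointing down.
    # Normalized coordinates: origin bottom left, y pointing up.
    return x_px / width, 1.0 - y_px / height

print(normalize_gaze(100, 800))  # Question 2 sample -> approx. (0.092, 0.259)

Gaze outside the scene camera's FOV simply produces normalized values slightly outside [0, 1].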

user-13f46a 03 May, 2023, 23:22:52

Question 2: Also, how to understand the coordinate system given in "gaze ps1.raw"? For example, given its squarish world view map of 1080x1088, if I got [[ 100, 800 ],...], should the first value be interpreted as...

1) (x=100, y=800) where the origin is in the upper left? => looking at roughly 7 o'clock, in Quadrant 3?
2) (x=100, y=800) where the origin is in the lower left? => looking at roughly 10 o'clock, in Quadrant 2?
3) (y=100, x=800), origin upper left? => roughly 2 o'clock, in Quadrant 1?
4) (y=100, x=800), origin lower left? => roughly 5 o'clock, in Quadrant 4?

Question 3: Given that the camera is situated on the opposite side for the right eye, does the same rule above hold true for reading the data for the right eye, or does a different rule apply (e.g., option 1 but with the origin in the upper right)?

Question 4: I see (very few) values exceeding 1088 in the gaze raw file -- what do they signify? I just want to make sure I correctly normalize the values into the "2D Normalized Space" format from https://docs.pupil-labs.com/core/terminology/#coordinate-system -- or if it's already given, where can I find it?

user-b37306 04 May, 2023, 06:46:47

I would like to ask your opinion about the apparatus and which enrichment method is best for my recording. The experiment is as follows:

We are in an apparatus gymnastics training stadium, specifically in the vault area. We want to record the eye movements of athletes with Invisible glasses during the period from the athlete's approach initiation to the moment the hands contact the table before the aerial phase of the jump. So we have 4 periods: starting standing phase, run-up, touch of the springboard, and aerial phase until the moment of hand contact for the push on the table. I am sending you a related video.

First, we placed a series of AprilTags in the different areas of the athlete's path, as shown in the photo. We would like your opinion on the number of AprilTags, their position, and their size, as well as which enrichment method is best for grading/calibrating the area where the athlete moves. We are definitely interested in the run-up area, the springboard area under the table, and the area of the table where the hands will rest.

We have done a pilot recording as well. The glasses seem stable at first and the raw data looks OK. We chose the Reference Image Mapper as the enrichment method. We don't know the best way to record the view of the scene for the enrichment, because the starting point is far away from the vault that the athlete approaches while running.

Chat image

user-480f4c 04 May, 2023, 07:17:56

Hey @user-b37306πŸ‘‹ . Thanks for sharing the info about your setup! Regarding the enrichments, given your setup, you could try running multiple Reference Image Mapper enrichments depending on your area of interest (e.g., springboard, table etc). This would give you a clearer idea of where the athlete is looking depending on your area of interest at the different periods of your "trial" (e.g., starting standing phase, run-up, etc). For this enrichment, you do not need the AprilTag markers. You can follow this tutorial on our Alpha Lab page - you might find it useful: https://docs.pupil-labs.com/alpha-lab/multiple-rim/. Hope this helps, but please let me know if you have any further questions!

user-b37306 04 May, 2023, 07:41:43

Thanks Nadia for your suggestions, it is very helpful! You suggest using different scanning videos of the area. Should these videos scan the area in a free trajectory, e.g. one scanning the whole area from the starting position, and another following the athlete's movement trajectory? Is that OK? And after that, do I run a number of Reference Image Mapper enrichments based on event annotations that I previously created?

user-480f4c 04 May, 2023, 08:00:41

Happy to hear that you found this tutorial helpful @user-b37306! As an example, you could first have a RIM with a whole-area scanning for the "starting position" phase. For the run-up phase, you could have a more close-up reference image of the run-up area (along with its scanning recording). Later on, for the "touch of springboard" you could have a scanning recording of the springboard so that you can capture the eye movements specifically onto this area. Similarly, for the last phase ("hand contact for the push on the table"), you could have a RIM with the table as a reference image (and a scanning recording of it). For the scanning recording, I'd recommend following the best practices for optimal scanning: https://docs.pupil-labs.com/invisible/enrichments/reference-image-mapper/#scanning-best-practices

user-b37306 04 May, 2023, 08:05:37

Perfect! Thanks! I will follow your suggestions!

user-a98526 04 May, 2023, 09:18:18

Hi @marc , I'm doing some research with Pupil Invisible. In my experience, Pupil Invisible does not have a file describing calibration accuracy the way Pupil Core does. However, the reviewer asked me this: "It is recommended to add a description of the eye tracker's accuracy criteria and the confidence level of each subject." Is there a relevant document or paper reporting the accuracy of Invisible for each user?

marc 04 May, 2023, 10:32:13

Hi @user-a98526! There is no confidence concept for Pupil Invisible (or Neon). One could argue the algorithm always claims 100% confidence. Since there is no calibration, there is also no classic calibration accuracy. You could in theory calculate the average gaze estimation error per subject, though, by comparing estimates with a range of ground-truth targets. Given that your paper is already submitted, you may no longer have access to the subjects, so I guess this is no longer an option?

You could refer to our white paper, which discusses average performance on a large population: https://arxiv.org/pdf/2009.00508.pdf
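
If it helps, a hedged sketch of that per-subject error computation in pixels (the arrays are hypothetical placeholders for your measured data):

import numpy as np

gaze = np.array([[540.0, 520.0], [300.0, 310.0]])     # estimated gaze (x, y) in px
targets = np.array([[544.0, 540.0], [290.0, 305.0]])  # ground-truth targets (x, y) in px

err_px = np.linalg.norm(gaze - targets, axis=1)       # per-target Euclidean error
print(f"mean error: {err_px.mean():.1f} px")
# Converting pixels to degrees of visual angle needs the scene camera
# intrinsics; the white paper describes the angular-accuracy methodology.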

user-a98526 04 May, 2023, 11:29:30

Yes, my paper has been submitted and the reviewer suggested that I add this content. In fact, I found that the accuracy of the Invisible varies across different areas during the experiment; for example, accuracy is high in the center of the visual field and decreases at the edges.

marc 04 May, 2023, 11:31:02

That observation is correct! Accuracy is not perfectly uniform but a bit worse towards the edges. The paper goes into more detail on this.

user-a98526 04 May, 2023, 11:36:21

Thank you very much, it is very helpful.

user-1f9e4f 05 May, 2023, 13:30:53

Hello, I am encountering this error during data collection with Pupil Invisible Glasses - please see the attached picture below. I have been experiencing this for the past three days, and recording stops abruptly in between without us stopping it. What could be the cause/solution for this? Please advise. Thank you

Chat image

user-c2d375 05 May, 2023, 13:47:01

Hi @user-1f9e4f πŸ‘‹ I'm sorry to hear you're experiencing issues with your Pupil Invisible. May I ask you to send an email referring to this message to [email removed]? A member of the team will help you to swiftly diagnose the issue and get you running with working hardware ASAP.

user-ace7a4 08 May, 2023, 09:07:43

This might be a trivial question, but is it possible to solely download a specific recording from a project on Pupil Cloud? I have four recordings stored in a project and already downloaded one. Now I want to get the remaining three recordings. When navigating to enrichments I can only select all 4 recordings at once for the raw data export (or any other enrichment as well). What am I missing here? Thanks a lot! :))

marc 08 May, 2023, 09:49:22

Hi @user-ace7a4! Going through the Raw Data Exporter enrichment there is no way to filter down the included recordings. If you download recordings from drive view (via right click on the selection -> Download -> Timeseries and Video) you get the same data but with more control over what is included.

We are aware that this is a bit confusing and inconvenient. We have a big UI update coming for Pupil Cloud in a couple days that will make things like this much easier!

user-f6894e 08 May, 2023, 11:19:57

For my minor in Neuromarketing I use the Invisible Companion to make use of the blurring feature. Through my school I have been put in a workspace where this is possible. Sadly, I have now failed in every attempt to use the blurring. Can anyone help?

marc 08 May, 2023, 11:33:59

Hi @user-f6894e! Do you mean the automatic face blurring feature? This is a beta feature that has only been enabled for a small number of users for testing purposes and is not generally available.

user-f6894e 08 May, 2023, 11:36:09

I am aware. My school has this possibility and @user-e684ed has created the possibility for me to use this feature for school.

marc 08 May, 2023, 11:43:19

I see! The blurring needs to be turned on during workspace creation. So if they created a new workspace for you this should already be enabled in theory. If the configuration was not done correctly during creation, a new workspace will need to be created.

user-f6894e 08 May, 2023, 12:16:07

It works! I thought I had to enrich it myself. But the faces are blurred in the last video I have made. Thank you for your responses.

user-4bc389 09 May, 2023, 07:28:51

Hi @marc I encountered a problem when using Invisible: when data collection stopped, the mobile phone got stuck and couldn't stop normally. Additionally, the collection duration was inconsistent with the actual recording duration - e.g. the actual recording duration was 10 minutes but it only plays back 6 minutes. Is there any solution to this problem? The phone is a OnePlus 8T running Android 11.

marc 09 May, 2023, 13:32:22

Hi @user-4bc389! This may indicate an issue with your Pupil Invisible hardware. Please reach out to [email removed] for support in diagnosing and a potential repair!

user-c9d495 09 May, 2023, 07:51:08

Hi @user-d407c1 I was recently in touch with some questions re: detectron (mapping gaze onto body parts). I have been able to run this over some files, but noticed one thing: the input video (world.mp4) and the output (densepose.mp4) have a different duration. Also, if I run them next to each other, the frames are off - just a bit. I suppose this has to do with variable frame rates/compression issues, which I understand in principle, but I am no technical expert. I will try to e.g. split the videos into frames, assuming that there should then be the same number of frames so I can match and reassemble them. But the broader question is whether this is a known issue (or not seen as an issue at all)?

user-4334c3 09 May, 2023, 08:28:14

Hi, I'm trying to watch the video stream of my Invisible glasses with OpenCV. How can I do that with an RTSP URL?

user-d407c1 09 May, 2023, 08:33:30

Hi @user-4334c3 πŸ‘‹ ! Please have a look at https://pupil-labs-realtime-api.readthedocs.io/en/stable/examples/simple.html#scene-camera-video-with-overlayed-gaze
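
In case it's useful, a condensed sketch of that docs example (simple API plus opencv-python; the device must be on the same network):

import cv2
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()
try:
    while True:
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        img = frame.bgr_pixels
        cv2.circle(img, (int(gaze.x), int(gaze.y)), 30, (0, 0, 255), 5)
        cv2.imshow("Scene camera with gaze", img)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
finally:
    device.close()
    cv2.destroyAllWindows()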

user-dfefdf 10 May, 2023, 12:15:08

Greetings! We recently purchased the Pupil Labs Invisible for our lab. We are wondering whether the recording software also exists for Windows, or only for the smartphone it was delivered with? Regards, Eric

marc 10 May, 2023, 12:20:23

Hi @user-dfefdf! Recording data only works via the Companion smartphone and the Companion app. Connecting the glasses directly to a computer does not work.

user-dfefdf 10 May, 2023, 12:25:59

Thanks for the information πŸ™‚

user-dfefdf 10 May, 2023, 13:02:50

Another question: is there a way to calibrate the camera? We are experiencing a slight offset towards the left

user-c2d375 10 May, 2023, 13:20:48

Hi @user-dfefdf πŸ‘‹ The Invisible Companion app offers the possibility to manually set an offset correction. To do this, click on the name of the currently selected wearer on the home screen and then click "Adjust". The app provides instructions on how to set the offset. Once you set the offset, it will be saved in the wearer profile and automatically applied to future recordings of that wearer.

user-8664dc 11 May, 2023, 00:25:30

For the gaze.csv downloaded from Pupil Cloud, why are the timestamp intervals not even?

nmt 11 May, 2023, 08:56:33

Hey @user-8664dc πŸ‘‹. What is the variability you're seeing in your data?
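
If you want to quantify it, here's a quick hedged sketch (assumes the Cloud export's gaze.csv with a "timestamp [ns]" column; adjust the name if yours differs):

import pandas as pd

df = pd.read_csv("gaze.csv")
dt_ms = df["timestamp [ns]"].diff().dropna() / 1e6  # inter-sample gaps in ms
print(dt_ms.describe())  # some jitter around the nominal interval is expected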

user-6c1e7f 11 May, 2023, 10:19:44

Hello! Do you have best-practice advice for this case? We are running a study in a car with Invisible and are interested in its dashboard, but the dashboard shows up very dark in the scene video because of strong sunlight coming through the windows. Do you have advice on how the camera can better cope with this environment? I know that Neon has settings for this; do you have experience with how much the configuration can be tweaked to work in a case like this?

user-c2d375 11 May, 2023, 10:30:58

Hi @user-6c1e7f πŸ‘‹ Based on my experience in driving contexts, I recommend setting the exposure to manual in the Companion App settings. This will result in a brighter dashboard view, but the windshield will appear very bright as well, making it difficult to see what's happening outside the car. However, if your objective is just to examine gaze behavior within the car interior, this approach may work. It is a matter of trial and error, so I suggest experimenting with various exposure settings to achieve the desired results.

user-ace7a4 11 May, 2023, 11:51:38

Hi! Does the policy of Pupil Cloud follow the DSGVO (GDPR) guidelines?

user-480f4c 11 May, 2023, 12:04:14

Hey @user-ace7a4 πŸ‘‹ . Pupil Cloud is GDPR compliant. You can find all the details in our privacy policy https://pupil-labs.com/legal/privacy and in this document as well: https://docs.google.com/document/d/18yaGOFfIbCeIj-3_GSin3GoXhYwwgORu9_7Z-grZ-2U/export?format=pdf

user-ace7a4 11 May, 2023, 12:52:29

Thanks a lot @user-480f4c :)))

user-6a29d4 11 May, 2023, 15:40:38

Hi all! A practical question: is it possible to use the Invisible eye tracker glasses on test persons who already wear glasses?

nmt 11 May, 2023, 17:56:40

Hey @user-6a29d4! This sometimes works, but it really depends on the glasses size and shape, and the wearer. The goal would be to ensure that the eye cameras are unobstructed. You may notice a reduction in gaze accuracy, or it might just work. It's a case of trial and error in my experience.

user-8e415a 11 May, 2023, 16:05:32

Hello! I have the pupil glasses' video streaming information locally hosted at http://pi.local:8080/, and now want to stream the video into a Unity3D application. After looking through the docs at https://pupil-labs-realtime-api.readthedocs.io/en/stable/guides/under-the-hood.html, it seems that a RTP solution is the way to go, but I am unsure where to begin. Any suggestions or examples for this topic? Thanks!

marc 12 May, 2023, 07:43:13

Hi @user-8e415a! There is the Python client, which you could use as a template for a C# implementation. This would allow you to receive the gaze data in your Unity application. The next step would be to figure out the transformation from scene-camera gaze coordinates into the VR world, i.e. the screens of your VR headset.

For Neon we will soon start developing a Unity client ourselves to make Neon integrations into various VR/AR headsets possible. This could also serve as a template, although I cannot yet give you an ETA on when we'd be able to first release something.

user-c414da 11 May, 2023, 22:15:29

Hi there! I am using the Reference Image Mapper enrichment with the invisible software. Even though I am using the same scanning video and reference image, I've only been able to successfully get the enrichment to work once. How long should it take for the enrichment to process? I keep on getting the message "not matched" or "not computed" next to the scanning video and/or the video I am trying to analyze. Thanks!

user-cdcab0 11 May, 2023, 22:58:29

Hi, @user-c414da - in my experience, creating a workable scan video is a bit of an art. If you're able to share your video, we might be able to provide better guidance on what you might do differently to improve your results

marc 12 May, 2023, 07:47:57

Also, regarding those specific states, note their meaning:
- Not computed: the enrichment definition applies to the recording, but the computation process has not been started yet. So it might just be a matter of hitting the "Run" button to start computing.
- Not matched: if you use start and stop events as part of the enrichment definition, the enrichment will only be computed on recordings that contain the corresponding event pair. Recordings that do not have those events will show up as Not matched, so for those you'd need to add the events in order to move them to Not computed.

That being said, there is a chance some bugs are left in the new UI. So if things still don't seem to make sense please let us know! Ideally, you'd share a screenshot of the problematic enrichment view (if it's sensitive you could share it via DM).

user-aaa726 12 May, 2023, 05:17:42

Hi, could you help me to calculate the task time (in min:sec.millisec) from the file export_info.csv using Python?

   key                      value
0  Player Software Version  3.5.1
1  Data Format Version      2.0
2  Export Date              25.04.2023
3  Export Time              09:56:29
4  Frame Index Range:       856 - 5894
5  Relative Time Range      00:28.822 - 03:16.418
6  Absolute Time Range      162453.032179 - 162620.627951

user-cdcab0 12 May, 2023, 05:49:51

Looking at those values, the absolute time range is probably the easiest to work with: 162620.627951 - 162453.032179 = 167.595772, and 167.595772 seconds = 2:47.596.
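
A small Python sketch of that computation, using the values from the export above:

start, end = 162453.032179, 162620.627951  # Absolute Time Range, in seconds
elapsed = end - start                      # 167.595772 s

minutes, seconds = divmod(elapsed, 60)
print(f"{int(minutes)}:{seconds:06.3f}")   # -> 2:47.596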

user-787054 12 May, 2023, 11:38:01

I am not able to download the Pupil Invisible raw data in Pupil Player format from Pupil Cloud, but a few days ago I was able to download data in this format. Can anyone from Pupil Labs confirm why this option is currently not available to me in Pupil Cloud?

user-480f4c 12 May, 2023, 11:39:53

Hey Pavan! πŸ‘‹ To get the data in Pupil Player format, please go to the Workspace Settings section and then click on the toggle next to "Show Raw Sensor Data". There you can enable this option in the download menu. Please keep in mind that you need to enable this feature for each workspace you'd like to download the raw data in the pupil player format

user-4bc389 15 May, 2023, 07:30:06

Hi, when using Invisible to continuously record eye movements multiple times, the following graphical issues sometimes occur, resulting in an interruption of the eye-movement recording (the app continues to run), and the recorded duration being inconsistent with the actual situation. What could the problem be? Thanks

Chat image

user-c2d375 15 May, 2023, 07:33:20

Hi @user-4bc389 πŸ‘‹ I’m sorry to hear you have issues with your Pupil Invisible. Please reach out to info@pupil-labs.com and a member of the team will help you to quickly diagnose the issue and get you running with working hardware ASAP.

user-f1fcca 15 May, 2023, 11:08:23

Hey Pupil! I see that in live streaming and when I'm recording, the red circle has a "stable" offset from where I'm actually looking. Is there a way to download the video with the gaze overlay enrichment but move the red circle a bit? Make it go some inches down, for example? Could it be because I'm wearing contact lenses? Thank you!

user-f1fcca 15 May, 2023, 11:46:52

It's okay, I took the video, overlaid a circle, and recreated the gaze with the small offset; everything is fine!

nmt 15 May, 2023, 12:29:24

Hey @user-f1fcca πŸ‘‹. There is an offset correction feature in-app, but you'd need to set it prior to recordings. Check out this message for reference: https://discord.com/channels/285728493612957698/633564003846717444/1105846907982581900

user-787054 15 May, 2023, 13:14:38

Can anyone from Pupil Labs help? I have recorded gaze data using Pupil Invisible glasses, uploaded it to Pupil Cloud, and downloaded it to my laptop in both Timeseries + Scene Video format and Pupil Player format. After that I loaded the Pupil Player format recording into the Pupil Player app and exported the data from there. The export contains the world video of the scene camera and the world timestamps of the corresponding frames, but the world_timestamps.csv file has two columns, #timestamps and pts, e.g. 0.914737, 0; 0.918952, 276; etc. So my question is: how do I convert this time into IST or UTC format for each frame?

marc 16 May, 2023, 07:45:29

Hi @user-787054 If you have both formats available, I'd recommend simply using Pupil Cloud's "Timeseries Data + Scene Video" format. The CSV files there contain UTC timestamps for every sample.

The export of Pupil Player is a little more complicated; initially, only relative timestamps since recording start are given.
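
For illustration, a hedged sketch of converting those Cloud timestamps (UTC epoch nanoseconds in a "timestamp [ns]" column; adjust the file/column names to your export) into UTC and IST:

from zoneinfo import ZoneInfo

import pandas as pd

df = pd.read_csv("world_timestamps.csv")
utc = pd.to_datetime(df["timestamp [ns]"], unit="ns", utc=True)
ist = utc.dt.tz_convert(ZoneInfo("Asia/Kolkata"))  # IST
print(utc.head())
print(ist.head())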

user-2ecd13 15 May, 2023, 18:56:33

I reached out to @nmt months ago and never received an update on this matter.

user-2ecd13 15 May, 2023, 19:10:04

@nmt I actually apologize; it seems Marc also replied to the message but it wasn't in the thread. He recommended deleting the app's data as well.

We've been ticking the box every time it pops up, but what's tricky about the dialog box is that it doesn't always appear right away. We've had instances where I've accidentally clicked out of the prompt because of how long it took to pop up, restarted the device and permissions, and it seemed to work for 5 minutes before it popped up again during our testing session with a family. We ended up losing that data.

user-2ecd13 15 May, 2023, 19:12:17

Do you have any idea what causes it to pop up again besides a newer cable? Any other recommended workarounds?

nmt 15 May, 2023, 19:20:46

πŸ€” did you click out of the prompt without ticking the box? If a disconnect happened it might ask for permission again in that case

user-60d698 16 May, 2023, 13:49:29

Hello! I just wanted to ask how safe it is to update the OnePlus 8. Since you put a rollback tutorial on the Pupil Labs website, I am not sure if it was more of a preventive measure or a necessary one in case you accidentally updated. Did anyone have a problem with that at some point?

marc 16 May, 2023, 13:55:52

Generally speaking, yes, there have been problems with updating Android versions and the official recommendation is to not update unless the higher version is supported. Between different Android versions there are often changes regarding e.g. how USB devices are accessed. In some cases new bugs are introduced which make using an Android version entirely impossible (e.g. Android 10 did contain such a bug). Thoroughly testing an OS version for compatibility is a big effort and thus we typically only officially support a few versions.

user-8e415a 16 May, 2023, 14:08:09

Hello! I was looking into grabbing snapshots of the video feed using HTTP web requests; this Reddit thread I found suggests that many cameras post the latest frame to a separate endpoint: https://www.reddit.com/r/Unity3D/comments/ohdp0t/comment/h4oucmp/ Is this true for the Pupil Labs implementation, or should I continue trying to use RTSP in C# using something like https://github.com/ngraziano/SharpRTSP/tree/master ?

user-cdcab0 16 May, 2023, 20:56:05

Hi, @user-8e415a - no, I don't believe there's an endpoint for that, so you'll want to use the RTSP stream. If you're trying to process video in realtime you probably wouldn't want to encode/decode individual frames in jpg anyway.

Since you're looking at Unity resources and posting in the Invisible channel, does that mean you're looking to integrate Pupil Invisible with a Unity desktop or mobile application?

user-ff83d1 16 May, 2023, 21:04:18

Hi there, I sent you guys an email with more specific information this morning (let me know if you received it!). But my lab is fairly certain that the frame of one of our glasses is damaged. From the troubleshooting we've done, the issue does not seem to be related to the Android device/software version, the invisible companion app permissions, the scene camera, or the cable that connects the android device to the glasses. Do you have any suggestions on how to proceed? Can we send the glasses back for repair?

nmt 17 May, 2023, 06:55:53

Hey @user-ff83d1! We've got your email. A member of the team will respond there to see about debugging/repair of your damaged frames!

user-84f758 17 May, 2023, 03:07:24

Hi, while downloading my recording from Pupil Cloud, the connection is unstable and the download keeps failing. Is something happening with the server?

nmt 17 May, 2023, 06:57:50

Hey @user-84f758! Can you please check your connection speed to our servers using this tool: https://speedtest.cloud.pupil-labs.com/ ?

user-e65190 17 May, 2023, 19:42:20

Hello Pupil, I understand that Pupil Cloud automatically generates and visualizes gaze circle, fixation line, and scanpath overlays on the uploaded recordings. I would like to export these as playable mp4s. I understand that gaze circles can be exported using the 'gaze overlay' enrichment. I wonder if I can do the same for the fixation lines & scanpaths? So far, the only method I can think of is to screen-record a recording while it plays - which takes a lot of time.

user-cdcab0 17 May, 2023, 21:44:24

Hey, @user-e65190 - when you're setting up the Gaze Overlay enrichment, there's a checkbox for "Show Fixations", which overlays the fixations and the scanpath along with the gaze circles in the rendered video. This is new with the recently revamped Cloud user interface!

user-df1f44 18 May, 2023, 15:07:04

Can I just say, as an early adopter and shameless fan: this new Cloud UI? [email removed] epic!... Sorry, but that best expresses how I feel about the overhaul... Well fxcxn done, folks! πŸ’ - That is all. So far so good - need to dig into the docs obviously, but for now: πŸ‘

marc 18 May, 2023, 15:30:38

Thank you so much for the kind words!! ❀️ It was forwarded to the team πŸ™‚ The new UI was a lot of work and I am super happy to hear it has fans!

user-4bc389 19 May, 2023, 03:52:10

Hi, there was an issue with the video during playback. May I know the possible reason? Thanks

marc 19 May, 2023, 07:27:18

Hi @user-4bc389! Do you have the same playback issue in Pupil Cloud and/or Pupil Player? If you do, this might be a hardware issue and you should reach out to [email removed] to facilitate a repair.

user-df1f44 19 May, 2023, 23:45:41

Hey PL. Trying out the Ref_Img_Map enrichment - not sure what I am missing here - see the images below in sequence: image load is great >> settings used >> "Run" output - broken preview. Has anybody seen this issue?... It's probably me being slow πŸ€¦β€β™‚οΈ but πŸ€·β€β™‚οΈ. SOS

Chat image Chat image Chat image

user-1afabc 20 May, 2023, 16:49:28

Hey, I also have a problem with this system. I managed to get everything set up and running, but after a few hours of waiting nothing happened. I can't generate results and an error pops up. What could be the cause?

marc 21 May, 2023, 11:20:34

Hi @user-df1f44! The image not loading looks like a bug! Could you share the enrichment ID please so we can investigate?

Note that in the meantime you can always fall back to the old ui at old.cloud.pupil-labs.com

user-208db6 20 May, 2023, 05:18:18

Hi, is it possible to create an enrichment for multiple recordings at the same time? I want to create a gaze overlay of about 15 seconds in 200 recordings using post-hoc events (a1 and a2). I have defined the a1 and a2 events manually in each recording. Thanks. This would save a lot of time.

user-208db6 20 May, 2023, 05:19:25

@marc @user-480f4c

marc 21 May, 2023, 11:23:36

When creating the gaze overlay enrichment you can select the events that mark the start and end of your section of interest in the advanced settings of the enrichment. The enrichment will then be calculated automatically on all recordings in the project that contain those two events. So assuming you have all your recordings in a project and the events already defined, it's just a matter of defining the enrichment using the events and computing it!

user-1afabc 21 May, 2023, 11:25:46

5535753d-e745-45a2-828f-2422f46952b1

user-1afabc 21 May, 2023, 11:25:59

This is it

user-df1f44 21 May, 2023, 19:37:56

6da164be-da26-40c1-9fd2-93b49520d431 @marc .

marc 22 May, 2023, 07:45:47

Thanks @user-df1f44 and @user-1afabc! We are looking into it!

marc 22 May, 2023, 08:32:04

@user-df1f44 @user-1afabc The issue should be fixed, please try again!

user-df1f44 22 May, 2023, 08:41:08

Looks like it is now running just fine. Thanks again for the lightning fast fix.

user-1afabc 22 May, 2023, 18:13:31

Thanks for the help. But unfortunately an error appeared and I could do nothing about it. What could be the problem? I have 9 other recordings and the same problem occurs with each one; I can't generate the results.

user-1f9e4f 22 May, 2023, 13:22:48

Hello. For a Pupil Invisible recording uploaded to the Pupil Cloud site, what could prevent the "Pupil Player" download format from appearing?

Chat image

user-d407c1 22 May, 2023, 13:33:57

Hi @user-1f9e4f πŸ‘‹ ! Please go to the Workspace settings and enable the raw download. See my previous message https://discord.com/channels/285728493612957698/1047111711230009405/1108647823173484615

user-1afabc 23 May, 2023, 09:56:10

Of course, I agree to give you access.

user-1afabc 23 May, 2023, 12:52:22

How can I give you the access?

marc 23 May, 2023, 12:55:27

Thanks @user-1afabc! Actually, the easiest thing would be if you could simply add me as a member to your workspace. If that is an option for you I'll DM you my email address!

marc 23 May, 2023, 14:32:24

@user-1afabc Thanks for the access! The issue with the enrichments is with the scanning recordings that were selected. Those recordings are regular "subject recordings" that do not fulfill the requirements for "scanning" very well. Ideally you make custom recordings where you hold the glasses in your hand and carefully scan the environment from different angles.

Please refer to the instructions in the docs here for more details and also check out the examples of scanning videos: https://docs.pupil-labs.com/enrichments/reference-image-mapper/#scanning-best-practices

user-1afabc 23 May, 2023, 18:59:57

I would like to ask one more question. Is there any possibility of making something out of these recordings that I already have?

user-1afabc 23 May, 2023, 14:53:36

Thank you very much for your help. I'll check this information.

user-d407c1 24 May, 2023, 08:00:20

https://discord.com/channels/285728493612957698/285728493612957698/1110833501265207376 Hi! for now you can use https://old.cloud.pupil-labs.com/auth/change-password @user-6e2996

user-6e2996 24 May, 2023, 15:15:33

Thanks, it works!

user-cd03b7 24 May, 2023, 20:38:44

Hey, what kind of success have you guys had making reference image environment scans in "open" areas? Like, a large park, or a beach, for example? We've experienced success in hallways and similarly confined spaces, but haven't tried something with a more open space

nmt 25 May, 2023, 05:40:29

Hey @user-cd03b7! You would likely be able to capture a smaller area of a beach or park that was feature rich. The key would be to find features that are 'close enough' to the camera to generate sufficient motion parallax.

In this example I managed to capture a whole building facade by walking back and forth to record the width of the building in the scanning video (also rotating the camera up-and-down): https://docs.pupil-labs.com/enrichments/reference-image-mapper/#_4-an-entire-building.

You'll run into issues when trying to capture a panoramic, or where everything in the scene is super-far away. This is because no matter how much you walk around during the scanning recording, you'll generate very little motion parallax and thus insufficient perspectives for 'under-the-hood' processes to build a 3D model (shown by the point cloud if it's successful).

user-8e415a 30 May, 2023, 14:19:46

Hello! I am trying to stream the data from the Pupil glasses using RTSP, and when the glasses are up and running, I can see a copy of the stream at http://pi.local:8080/. However, when I run VLC or alternatives and try to watch the RTSP stream using this link, I get this error (or a similar one):

http error: cannot resolve pi.local: No such host is known. 
access error: HTTP connection failure
main error: cannot resolve pi.local port 8080 : No such host is known. 
http error: cannot connect to pi.local:8080

Is this the right link to read the RTSP stream from, and how should I view the stream in an RTSP viewer instead of just the browser link? Thanks!

It may also be worth noting that I haven't been able to get VLC to run any sample streams from the internet either, with errors like

user-cdcab0 30 May, 2023, 19:10:01

The first error you see is a name resolution error - VLC can't figure out where pi.local is supposed to point to for whatever reason. Easiest way to solve that is to use your companion device's IP instead of pi.local. You can find it easily by clicking on the hamburger menu in the app and clicking on Streaming

Second, you'll want to try this instead (where <DEVICE_IP> is your device's IP address): rtsp://<DEVICE_IP>:8086/?camera=world
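
If you'd rather script it than use VLC, a minimal hedged sketch with opencv-python (the IP address is a placeholder; substitute your device's):

import cv2

cap = cv2.VideoCapture("rtsp://192.168.1.42:8086/?camera=world")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Pupil Invisible scene camera", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()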

user-8e415a 30 May, 2023, 20:43:26

Awesome, Thanks! The second one worked for me.

user-e84c0a 31 May, 2023, 10:23:24

Hi Pupil Labs, after the release of the latest update to Pupil Cloud's UI, we have discovered that some of the events we set as markers in our recordings have been lost. About half of them are now missing; most of those had been set at about the same time as another marker. Has this issue already occurred frequently, and is there any chance of restoring the data?

user-532fee 31 May, 2023, 17:45:15

Hi! Do you know if loading data from Pupil Cloud into Pupil Player is possible? And if it is, do you know if there is step-by-step documentation for doing that? Thank you in advance for your help.

user-cdcab0 01 June, 2023, 00:22:19

Hi, @user-532fee - yes, it's quite simple! Just right-click on a recording in Pupil Cloud, mouse over Download in the popup menu and select Pupil Player Format. That will give a zip file you can extract and then open in Pupil Player

End of May archive