πŸ•Ά invisible


user-f6ea66 02 July, 2024, 13:26:59

Hi, I have a question that's important to me before I initiate a research project where loss of data would be a very serious problem.

I tested the Pupil Invisible and Pupil Cloud functions with a two-minute video, and when I did the Marker Mapper enrichment it took about 5 minutes for Pupil Cloud to do the calculations and add the features to the 2-minute video.

The thing is, for our research project we want to run two procedures of up to 75 minutes for each participant, so I'm wondering if that will be a problem and what your experience and recommendation is. Can the cloud reliably process a 75-minute video (or does the app have a time limit)? When adding the Marker Mapper enrichment, do I risk losing data/the video if the enrichment fails? Do internet speed or other factors affect the ability of the cloud to add the enrichment feature? If a 75-minute video isn't possible, what do you recommend? (Our study kind of depends on the participants doing a procedure that long.)

user-480f4c 02 July, 2024, 13:42:51

Hi @user-f6ea66! πŸ™‚

Regarding your questions, I've summarised some points below:

  • Enrichment processing time: The processing time for your enrichment can vary depending on several factors. The duration of your recordings and the current workload on our servers are two major variables.

  • How to speed up enrichment processing: We highly recommend slicing the recording into pieces and applying the enrichments only to the periods of interest, using Events. This can significantly reduce the processing and completion time for your enrichments. Here's an example tutorial of how you can use events and apply multiple enrichments to the same recording (Reference Image Mapping in this case). See the sketch after this list for the same idea applied post-hoc to exported data.

  • Enrichment fails: The Marker Mapper enrichment can fail if the markers are not reliably detected. However, you do not lose the recording's data or video.
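
The following is an illustrative Python sketch, not part of the original reply: it assumes the standard Pupil Cloud timeseries export (gaze.csv and events.csv, both with a "timestamp [ns]" column, events.csv with a "name" column) and the hypothetical event names task_start/task_end.

```python
# Minimal sketch (assumption, not from the thread): slice exported gaze data
# between two custom events. Assumes the standard Pupil Cloud timeseries export
# (gaze.csv and events.csv, both with a "timestamp [ns]" column) and the
# hypothetical event names "task_start" and "task_end".
import pandas as pd

gaze = pd.read_csv("gaze.csv")
events = pd.read_csv("events.csv")

start_ns = events.loc[events["name"] == "task_start", "timestamp [ns]"].iloc[0]
end_ns = events.loc[events["name"] == "task_end", "timestamp [ns]"].iloc[0]

section = gaze[gaze["timestamp [ns]"].between(start_ns, end_ns)]
print(f"{len(section)} gaze samples fall in the period of interest")
```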

Could you explain your setup a bit further? This will help me provide more precise feedback.

user-f6ea66 02 July, 2024, 13:46:56

We want to have two sets of students (of 12 each). They all have to do two procedures in a simulator program on a computer (working with the mouse and a haptic device in their right hand, while viewing the surgical simulation on a screen).

user-f6ea66 02 July, 2024, 13:49:31

All students must do two procedures of 75 minutes each. During those procedures we want to estimate their eye movements to observe gaze entropy and where they're looking, ideally during the entire procedure. I guess it could be possible to pause the procedures and stop the recordings at several points during those 75 minutes if necessary (and then upload), and start them again. But that would not be ideal, since it would give the students time to "rest", which isn't desirable for the study.

user-480f4c 02 July, 2024, 14:20:57

Thanks for providing more details @user-f6ea66!

If you need to map gaze onto the screen/simulator program during the entire session, then you can apply the enrichment for the entire recording. Analysis in this case would take longer as you will be applying the mapping to 12 subjects x 2 groups for 75-min long recordings.

Alternatively, if you do not need the full recording data and you prefer a faster analysis, you could create Events to slice up the recording and apply the Marker Mapper only to a predefined section of, e.g., 30 minutes.

user-f6ea66 02 July, 2024, 16:02:17

Ok, thanks. So you don't think there is any risk of a 75 minute continuous recording not uploading correctly/safely to the pupil cloud?

And once it's uploaded, it shouldn't be an issue for the Pupil Cloud 'software' to add the Marker Mapper enrichment to the entire 75 minutes in one go (provided that the AprilTags are recognized, of course, which they were in my two-minute test)? That it takes time shouldn't be a problem, since it would be done after the procedure is safely uploaded to the cloud. But do you have a rough estimate of how long it would probably take to add the Marker Mapper enrichment to a 75-minute video?

If that is the case, then we will make 75-minute recordings to be able to see the entire process in one analysis, including how gaze entropy changes during the procedure.

Thanks!

user-f6ea66 02 July, 2024, 18:12:07

(sorry if I'm repeating somehow - just want to make sure I've understood it right)

user-d407c1 03 July, 2024, 06:07:28

Hi @user-f6ea66 !

There should be no issues with uploading recordings to the Cloud, regardless of their duration. It will take longer to upload, but the data on the device remains unchanged.

Here are the details regarding your concerns:

  • Marker Mapper duration: Yes, Marker Mapper can handle recordings longer than 75 minutes; there is no timer or duration limit.
  • Note that when a recording is uploaded, it will first be transcoded in the Cloud and then the neural network will be run again to ensure you get the data at 200 Hz, so you may want to factor in this buffer time too.
  • As my colleague @user-480f4c mentioned previously, the enrichment processing time depends on the duration of the recording and the current workload on our servers, which is hard to estimate. To be safe, allow a couple of hours for the entire process.

If you have further questions or need additional clarification, feel free to ask.

user-f6ea66 03 July, 2024, 08:43:27

Fantastic - thanks! ^^

user-58c1ce 03 July, 2024, 18:25:53

Hi,

Is there any way to trim the video I took? It's a reference video and is a little bit over 3 minutes. How can I cut that extra part? Thank you.

user-07e923 04 July, 2024, 05:49:25

Hey @user-58c1ce, it's currently not possible to do that on Pupil Cloud. If you'd like this feature, please upvote https://discord.com/channels/285728493612957698/1212053314527830026.

user-f6ea66 04 July, 2024, 13:25:16

Hi again, Do the Pupil Invisible glasses record and output pupil diameter metrics/data? I see the parameter in one of the output files (3d_eye_states), but it is empty. Thanks!

user-480f4c 04 July, 2024, 13:28:09

Hi @user-f6ea66! Pupil Invisible does not provide pupillometry measurements. These are available in the download folder of Neon recordings only. Please find here the available data streams with Invisible: https://docs.pupil-labs.com/invisible/data-collection/data-streams/

user-f6ea66 04 July, 2024, 13:28:33

Ok, thanks for the swift reply : )

user-58c1ce 05 July, 2024, 10:30:06

Hi, the two phones that I have were updated automatically. I have a OnePlus 8 and a OnePlus 8T. Currently, after the rollback, the 8T is on Android 12 and the 8 is on Android 10. I am having problems downloading the app and capturing the data properly. Can you provide the correct steps to take? Note that the files you put on your website for the rollback are not working.

nmt 05 July, 2024, 10:39:35

Hi @user-58c1ce πŸ‘‹. Please open a support ticket in πŸ›Ÿ troubleshooting and we can try to help you get onto the right Android version!

user-236f7a 10 July, 2024, 06:48:16

Hello, I am running a study using Pupil Invisible to measure gaze behaviour while walking. Many of the participants will be using a gait aid (e.g., walker, cane), so they won't be able to hold the device while walking. I want to avoid using pockets to keep the device if I can. Does anyone have any ideas or things they have done to keep/hold the device while a participant is walking? Thank you!

user-07e923 10 July, 2024, 07:49:33

Hi @user-f8a8b8, thanks for reaching out πŸ™‚ Have you considered something like a sports armband, a small bag, or a waist pouch? See https://discord.com/channels/285728493612957698/1047111711230009405/1257997423670722710

user-f8a8b8 10 July, 2024, 10:44:58

Thanks @user-07e923 , I think one of these options will work for us!

user-58c1ce 11 July, 2024, 14:32:10

Hi, I am trying to create the reference image mapper, but I keep encountering issues. It either results in an error or displays the message: "Reference image mapper definition ID does not match the specified ID." My scanning video is of a historical building, which I took to test this enrichment. The video is about 1-2 minutes long, as recommended, and I also took a photo of the building. Could you please assist me with this? Additionally, I would like to know if the heat map will still work if our perspective changes and we are moving a lot while the object of interest remains static (like in golf). Thank you

user-d407c1 11 July, 2024, 14:48:07

Hi @user-58c1ce ! Do I understand correctly that you have only one recording in your project? Kindly note that the Reference Image Mapper requires one scanning recording (that's the one that should not be longer than 3 min), one reference image, and at least one normal recording.

We can certainly have a look at your enrichment if you want and let you know why it might have failed. To do that, feel free to invite us to your workspace.

user-58c1ce 11 July, 2024, 14:54:24

Thank you, Miguel, for your answer. I would like to know whether this enrichment will work for our future research in golf, where the player will move a lot.

user-bda2e6 11 July, 2024, 19:53:55

Hi! A quick question! Does Invisible have offline desktop software similar to Pupil Cloud?

user-bda2e6 11 July, 2024, 20:11:01

And is there an offline desktop data recording software?

user-07e923 12 July, 2024, 05:23:30

Hi @user-bda2e6, thanks for getting in touch πŸ™‚ You can export recordings from the Companion app and play them using Pupil Player.

Pupil Invisible must be connected to the Companion device for data collection. This is all offline, as no internet is required to use the device or the app.

There's no desktop software to make recordings, if this is what you meant. May I ask why you'd like to record with a computer?

user-58c1ce 12 July, 2024, 12:18:34

Hi, I have a quick question. If you have an environment like a golf course with many different objects at various distances, for example a tree that is 2 meters away from the wearer or a flag that is 5 meters away, will the Reference Image Mapper enrichment work? Thank you.

user-d407c1 12 July, 2024, 12:28:01

Hi @user-58c1ce ! Sorry, I missed your answer! It's hard to say whether it would work or not without seeing the environment, as there are many factors that play a role here:

  • Can enough features be tracked and matched?
  • Is the illumination changing?
  • Is the field of view occluded by any object?

My recommendation would be to read our Best Practices, if you haven't done so already, and to pilot the data collection in a similar environment.

user-58c1ce 12 July, 2024, 12:32:07

Thank you, Miguel, for your answer.

  1. Yes, we have flags, bunkers, water, etc.
  2. Not really; it will be outside in a normal environment.
  3. No.

Thank you again.

user-d407c1 12 July, 2024, 12:33:49

Minimal occlusions like the one you see here from the person, and far distances like the flag in the second one, can work. But if it becomes cloudy, you might need a second scanning recording with those light conditions. Also, if the flag moves, you won't be able to track it there.

Chat image Chat image

user-58c1ce 12 July, 2024, 12:35:21

Thank you

user-bda2e6 15 July, 2024, 16:38:59

Hello! Is there a better way to calibrate the gaze location than simply dragging it with a finger in invisible companion app?

user-d407c1 16 July, 2024, 07:35:51

Hi @user-bda2e6 πŸ‘‹ ! I assume by calibration you mean gaze offset correction, right? In that case, note that with the latest Cloud update, you can also perform the gaze offset correction post-hoc using your mouse. Check out this link

user-5ab4f5 16 July, 2024, 06:49:53

Hey guys, if I wear the glasses and then tilt my head but look relatively centered, is it the same as if my head were not tilted and I looked centered? Do I get the same coordinates out for gaze? Because both are technically focused or directed towards the center part of the world camera frame.

user-d407c1 16 July, 2024, 07:41:30

Hi @user-5ab4f5 πŸ‘‹ ! Gaze is provided in scene camera coordinates, regardless of your head rotation. If you were to rotate your head while keeping your eyes completely steady (although that would be quite complicated), the gaze coordinates would not change.

Our fixation detector, on the other hand, takes this head movement into account and corrects for it, because even though your gaze coordinates remain the same, you are no longer fixating on the same object.
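
To illustrate the point about scene-camera coordinates (an editor's sketch, not from the thread): the same gaze pixel coordinate always corresponds to the same angle from the scene camera's optical axis, no matter how the head is oriented in the world. The intrinsics below are hypothetical placeholders, not the actual Invisible camera calibration.

```python
# Illustration only: gaze is reported in scene-camera pixel coordinates, so the
# angle of the gaze ray relative to the camera's optical axis depends only on
# those pixel values, not on head pose.
import numpy as np

FX, FY = 770.0, 770.0   # hypothetical focal lengths [px]
CX, CY = 544.0, 544.0   # hypothetical principal point [px]

def gaze_angle_deg(gx_px: float, gy_px: float) -> float:
    """Angle between the gaze ray and the optical axis (lens distortion ignored)."""
    x = (gx_px - CX) / FX
    y = (gy_px - CY) / FY
    return float(np.degrees(np.arctan(np.hypot(x, y))))

print(gaze_angle_deg(544.0, 544.0))  # 0.0 deg: gaze along the optical axis
print(gaze_angle_deg(900.0, 544.0))  # ~24.8 deg, whatever the head pose is
```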

user-dd5440 18 July, 2024, 04:05:21

I need to download the Invisible Companion App but it's unavailable - is there another way to get it?

user-d407c1 18 July, 2024, 05:11:08

Hi @user-dd5440 ! The Invisible Companion App can be downloaded exclusively from the Play Store. If it does not appear to be available, it might be because you are trying to download it on an incompatible device or your Companion Device is on a version of Android that is not officially supported.

If it is the latter, try to roll back your device to a compatible version and create a ticket on πŸ›Ÿ troubleshooting if you face issues along the way.

user-5ab4f5 18 July, 2024, 07:25:04

@user-d407c1 Meaning, if I just take the coords from the fixation.csv, do these coords account for the head movement? So if I look straight in front of me, and later, after 3 fixations, I tilt my head to the left but my eyes remain focused on the middle, will those be different coordinates?

user-58c1ce 18 July, 2024, 11:24:14

Hi, for our research in golf, we would like to identify areas of interest. I tried to create a heat map, but it did not work well because our participants move so much within their environment. I also tried using Pupil Player, but I noticed an offset when using Pupil Invisible with the iPhone 8 and 8T. Do you have another method to determine the areas of interest? Thank you.

user-07e923 18 July, 2024, 12:12:08

Hi @user-58c1ce, thanks for reaching out. It's difficult to know what you'd like to do without seeing the task. Would you mind inviting me to your workspace so that I can take a look at some of your recordings? This would help me give you more precise feedback.

It would be great if you could create a "test" project with some example recordings so that I can find them.

Please invite wee@pupil-labs.com to the workspace as a collaborator.

Btw, the Invisible Companion app isn't supported on the iPhone. The app is only supported on specific OnePlus models. This means that if you've encountered bugs or issues on unsupported devices, our ability to help you rectify the issue is greatly diminished.

user-d407c1 18 July, 2024, 13:07:30

Being in the USA should not present any problem. What happens when you click on this link from the device https://play.google.com/store/apps/details?id=com.pupillabs.invisiblecomp&hl=en ?

user-dd5440 18 July, 2024, 13:40:46

When I use the link you provided, I receive this message from the Play Store:

Chat image

user-d407c1 18 July, 2024, 14:02:21

@user-dd5440 would you mind creating a ticket on πŸ›Ÿ troubleshooting and sharing the specific phone model and android version?

nmt 20 July, 2024, 03:31:11

For the benefit of others, this issue was due to admin limitations on what was accessible in the Play Store. If Android 11 is installed on a OnePlus 8 or 8T, the Invisible companion app should be available for download. If it isn't, ensure that if there is institutional management of the device, there are no restrictions on the Play Store or that the restrictions don't include the Invisible companion app. πŸ™‚

user-a4e1be 25 July, 2024, 12:20:08

Hey guys, does anyone know the new link for this? https://docs.pupil-labs.com/invisible/data-collection/compatible-devices/#oneplus-6-companion-device-setup

user-07e923 25 July, 2024, 12:21:22

Hey @user-bef103, I've moved your message to this channel since it's about Invisible. Btw, thanks for catching this πŸ™‚ The new link is here. May I ask if you're having trouble setting up the Invisible Companion device?

user-bef103 25 July, 2024, 12:22:31

thanks

user-bef103 25 July, 2024, 12:36:28

Much appreciated! @user-07e923

user-bda2e6 25 July, 2024, 13:29:44

Hello! Is there a way to download data in Pupil Player format directly from the phone, for Invisible or Neon?

user-480f4c 25 July, 2024, 13:33:39

Hi @user-bda2e6. You can simply transfer the Pupil Invisible recording directly from the phone to your laptop and load it on Pupil Player. You can find the instructions here.

Please note that Neon recordings are not compatible with Pupil Player; instead, Neon has its own desktop application, Neon Player which you can download here. The idea is the same: Simply transfer your Neon recordings from the phone to the laptop following these instructions and drag-and-drop the recording folder on the Neon Player window.

user-5fb5a4 25 July, 2024, 13:37:17

Hello. I have different recordings with areas of interest delimited by ArUco markers. My participants were completing a task looking at the screen, and the screen is an area of interest (delimited by the markers). In each recording I created some times of interest that indicate what subtask the participant was doing. Is it possible to generate heatmaps for a specific time of interest (subtask), and not the whole recording, in Pupil Cloud? I can't figure out how. Also, my recordings are in a single project.

user-480f4c 25 July, 2024, 13:43:39

Hi @user-5fb5a4! Yes, this is possible. Let me explain:

Heatmaps on Pupil Cloud are generated based on enrichments. This means that when you create a heatmap visualization, you have to select the enrichment you'd like to visualize. Consequently, the heatmap will "inherit" the data of the enrichment. If you have applied a Marker Mapper enrichment only on a specific chunk of the recording (using Events), then the heatmap visualization will include only the data of this specific time of interest.

For example, see the images attached. I have created a Marker Mapper enrichment, called man_1, that was only applied between events man_1_start and man_1_end. Then, I created a heatmap based on this enrichment. The visualization includes data points only between events man_1_start and man_1_end. You can, therefore, do the same for your subtasks.

Chat image Chat image Chat image

user-480f4c 25 July, 2024, 13:57:30

@user-5fb5a4 on another note, I'm not familiar with the aruco markers you're using. The Marker Mapper enrichment on Cloud supports only AprilTag markers that are listed here: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/marker-mapper/#setup

user-5fb5a4 25 July, 2024, 14:02:07

yes, I am using these supported markers

user-5fb5a4 25 July, 2024, 14:00:55

Thank you, very clear! I have one more question: what if the subtask I want to create the heatmap for is repeated during the recording? For example, multiple pictures of men are shown, and in between there are pictures of women. Should the label be the same (man_1_start, man_1_end) for each of the men's pictures? Or can I label them e.g. man_2_start, man_2_end and so on, create a single enrichment, and then create the heatmap from that?

user-480f4c 25 July, 2024, 14:07:29

yes, the way you will organize your enrichment is completely up to you. If you have the same condition/subtask repeated over the experiment, you can create the same events to mark the temporal periods that belong to this subtask (see example attached). In that case, the enrichment will include the data from all defined periods, therefore, the heatmap will also include the data over all the repetitions of the subtask.

If you want to have different heatmaps for different repetitions of the same subtask, then you'd have to create different enrichments for each repetition. In that case, you'd have to create events with different names.

Chat image

user-5fb5a4 25 July, 2024, 14:15:49

Understood, so I can't select more than one start and end label in the temporal selection, for example, in the temporal selection: start -> man_1_start, man_2_start, etc. In my case I named them like this for many recordings, that's why I am asking; I will have to re-label them. Thank you so much again!

user-480f4c 25 July, 2024, 14:53:36

Thanks for explaining! Yes, you cannot select more than one start/end label in the temporal selection. In that case, you'd have to rename the events such that they all have the same name if you want to include all these periods of interest within the same enrichment.

user-bda2e6 25 July, 2024, 22:08:22

Is there something I can do only with Pupil Cloud but not Neon Player?

user-5fb5a4 26 July, 2024, 08:38:27

Hello, sorry to come back to this. I don't get how the image on the right, where the heatmap will be superimposed, gets selected. In my case, that frame is not present in the temporal selection I chose, nor is it the frame in which I defined the AOI surface.

Chat image

user-480f4c 26 July, 2024, 08:58:00

Hi @user-5fb5a4! Are there different scenes on your screen between the events that you've included in your temporal selection (e.g., in the period between unload_1 and rev_u_1)? Or is it only the scene on the left panel that appears throughout this period?

user-5fb5a4 26 July, 2024, 09:02:27

what do you mean by scenes?

user-5fb5a4 26 July, 2024, 09:03:28

the scene changes slightly because the machine moves a bit

user-5fb5a4 26 July, 2024, 09:04:04

in this case it is pretty much what you see on the left

user-5fb5a4 26 July, 2024, 09:05:14

I created another enrichment on another participant and I get the same image on the right

user-5fb5a4 26 July, 2024, 09:06:08

for the same temporal selection unload_1 and rev_u_1

user-480f4c 26 July, 2024, 09:09:31

@user-5fb5a4 could you maybe invite me to your workspace so that I can have a better look and give you more precise feedback? If yes, let me know and I can send you my email.

user-5fb5a4 26 July, 2024, 09:09:50

of course

user-480f4c 26 July, 2024, 11:20:07

@user-5fb5a4 I'd like to share more information regarding the issue you reported for the image selection when running your Marker Mapper enrichment:

Currently, the image is cropped from the frame of the scene video that contains the largest number of markers. The temporal sectioning of the enrichment using events is not taken into account at this step, but the frame is selected out of all frames in the entire recording. This is why the image on the right panel doesn't match the scene that appears on-screen during your defined period of interest.

We agree that this is not optimal and can lead to image selections that do not work well for the heatmap. We are working on an update that will allow you to manually upload any image showing the surface, which will be used instead of the auto-generated crop. This could be a screenshot of the preferred frame in the scene video, or a high-quality image taken with e.g. the phone's camera.

We do not have an exact release date yet, but expect this to be released in August. Thanks for your understanding and apologies for any inconvenience this might have caused with your intended heatmap generation πŸ™πŸ½

user-5fb5a4 26 July, 2024, 11:33:02

thank you for the explanation, looking forward to the update!

user-d407c1 26 July, 2024, 13:56:04

Hi @user-5fb5a4 ! As a temporary workaround until the update, you may want to try the following:

  • For the heatmaps: The data still respects the section you selected, so you can either recreate the heatmap yourself from that data or simply download the heatmap visualisation, which should contain one image with and one without the background (i.e. with a transparent background). Then, you can simply re-apply it over your own image, ensuring the scale is the same (a minimal overlay sketch follows after this list).

  • For the AOIs: As for the areas of interest, the main issue you might be facing right now is that you cannot draw them accurately over a different picture, correct? What you can do is use any picture you want and follow this tutorial, which will upload the mask to Cloud regardless of the original image. Again, the only thing you need is a picture that respects the scale of the original and is cropped by the markers, as in the one selected automatically.
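
As a purely illustrative sketch of the first workaround (an editor's addition, assuming the downloaded heatmap PNG has a transparent background and depicts the surface at the same scale/crop as your own screenshot):

```python
# Sketch of re-applying the transparent heatmap PNG over your own screenshot.
# File names are hypothetical; both images must show the marker-cropped surface
# at the same scale for the overlay to line up.
from PIL import Image

background = Image.open("my_screenshot.png").convert("RGBA")
heatmap = Image.open("heatmap_transparent.png").convert("RGBA")

# Resize the screenshot to the heatmap's dimensions so the two align 1:1.
background = background.resize(heatmap.size)

Image.alpha_composite(background, heatmap).save("heatmap_over_screenshot.png")
```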

user-5fb5a4 29 July, 2024, 13:00:06

Hello, yes, the problem now is to get a screenshot of the scene that fits the heatmap PNG (without background), so that I can apply the heatmap on top. I was thinking of using the Video Renderer tool to get the video undistorted, then take a screenshot for the background and then match the sizes somehow. It should be OK. Thank you!

user-386c4e 29 July, 2024, 05:39:37

Thanks for your reply. I am not sure what I am using; I have just recorded videos on Pupil Cloud until now and I have the glasses, which are apparently the latest version according to my college professor. I just need some basic visuals that give very basic insights into fixations and other things.

user-480f4c 29 July, 2024, 05:41:44

Hi @user-3e3f5a, since you're working with Pupil Invisible, I moved your messages to the πŸ•Ά invisible channel. May I ask what kind of visualizations you need? If you could also briefly explain your setup, that would help me provide better feedback.

user-e115cf 29 July, 2024, 05:39:51

Just checked it out, I think Pupil Invisible.

user-3e3f5a 29 July, 2024, 20:28:47

I am unaware of any analysis features offered with the glasses and how to access them.

user-3e3f5a 29 July, 2024, 20:29:37

If you could just let me know how to access any analysis or visualisation features and use them, it would be great

user-5fb5a4 31 July, 2024, 09:17:56

Hello! I am working on the same project as the earlier discussion (I put an image for reference). I would like to create a scanpath of the screen (delimited by the markers, basically) for one or more participants, again for an event of the recording. Is it necessary (and possible) to create a Reference Image Mapper enrichment and then use a Python script? Also, what if I am recording an event that is happening on a computer screen and it is moving? In that case, I can't really make the 1-minute video of the environment. Am I missing something?

Chat image

user-480f4c 31 July, 2024, 09:56:23

Hi @user-5fb5a4 - Let me try to break down the questions:

  • Scanpath over the video: If you want a scanpath over the video, you can get that using the Video Renderer visualization and selecting "Show fixations". Would this work, or do you need a scanpath over the screen recording?

  • Scanpath over the image: Currently it's not possible to get the scanpath overlayed on the image of your surface as defined by the Marker Mapper. If you want a static scanpath over a reference image, you can use the Reference Image Mapper enrichment to map gaze onto a screenshot of your video content (again using events). Then you can use this Google Colab tutorial to create a static scanpath over the image.

  • Regarding your last question:

Also, what if I am recording an event that is happening on a computer screen and it is moving? In that case, I can't really have the 1 minute video of the environment.

Are you referring to the scanning recording that is required for the Reference Image Mapper enrichment?

user-5fb5a4 31 July, 2024, 10:12:16

I'd need the scanpath (from a whole scene of the recording) applied to an image that represents that scene. Something like this image. Even without the mapping.

Chat image

user-480f4c 31 July, 2024, 10:15:06

I'd need the scanpath (from a whole scene of the recording) applied to a image that represents that scene.

Just to make sure I understand it right: by this, do you mean just a screenshot (static image) from your recording? In that case, you need a Reference Image Mapper enrichment. Then, you can either use the Alpha Lab tutorial using Python, or simply the Google Colab tutorial that I shared in my previous message, which does the same.
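
For readers looking for the gist of such a script, here is a rough, hedged sketch of a static scanpath over a reference image (an editor's addition; the column names are assumptions about the Reference Image Mapper export and may differ from your files):

```python
# Rough sketch: plot mapped fixations as a numbered scanpath over the reference
# image. Assumes a fixations.csv with "fixation x [px]", "fixation y [px]" and a
# boolean "fixation detected in reference image" column.
import matplotlib.pyplot as plt
import pandas as pd

fixations = pd.read_csv("fixations.csv")
mapped = fixations[fixations["fixation detected in reference image"] == True]

img = plt.imread("reference_image.jpg")
fig, ax = plt.subplots()
ax.imshow(img)
ax.plot(mapped["fixation x [px]"], mapped["fixation y [px]"], "-o", alpha=0.7)
for i, (x, y) in enumerate(
    zip(mapped["fixation x [px]"], mapped["fixation y [px]"]), start=1
):
    ax.annotate(str(i), (x, y))  # number fixations in temporal order
ax.axis("off")
fig.savefig("scanpath_over_reference_image.png", dpi=200)
```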

user-5fb5a4 31 July, 2024, 10:13:10

As for the last question: yes, I was referring to the scanning recording for the Reference Image Mapper.

user-480f4c 31 July, 2024, 10:19:26

Regarding the scanning recording duration: This needs to have a duration of max. 3 minutes. Note that to run a Reference Image Mapper enrichment, you need the following 3 things:

  1. Your eye tracking recording(s), e.g., your user wearing the glasses and looking at the screen. There's no duration limit for this.
  2. A reference image of your area of interest, e.g., a screenshot of your monitor. This image will be used later on for the scanpath overlay.
  3. A scanning recording of your area of interest. For that, simply take the glasses in your hand and slowly scan the area of interest from all possible angles and distances. This recording needs to have a duration of max. 3 minutes!

Details on the Reference Image Mapper enrichment can be also found in our docs: https://docs.pupil-labs.com/invisible/pupil-cloud/enrichments/reference-image-mapper/

user-fd46cc 31 July, 2024, 11:40:08

Hi, I am using the Pupil Invisible for the first time, and trying to get the option to stream to a laptop for a demonstration. However, I think I must have an older version of the Companion app, as I do not have the 'Streaming' option available when I open the Companion app. Is it still possible for me to use the Monitor app and its functions? Many thanks, Clare

user-07e923 31 July, 2024, 11:46:02

Hey @user-fd46cc, thanks for reaching out πŸ™‚ To use the Monitor app, you'll need to connect both your device and the Invisible Companion device to the same local WiFi. No internet connection is needed.

user-07e923 31 July, 2024, 14:36:05

Hotspot with Invisible

user-228c95 31 July, 2024, 17:50:17

Hello, the data I uploaded is not being transferred to Pupil Cloud. What could be the reason for this? How can we resolve it?

user-228c95 31 July, 2024, 17:50:39

Chat image

user-d407c1 31 July, 2024, 19:03:47

Hi @user-228c95 πŸ‘‹! Can you check if the phone is connected to a network with internet access, log out and back into the Companion app, and see if this triggers the upload? If the issue persists, could you please visit https://speedtest.cloud.pupil-labs.com/ on the phone's browser to ensure the phone can access the Pupil Cloud servers.

user-a7b1b5 31 July, 2024, 18:08:01

Is there any chance to cut recorded video?

user-d407c1 31 July, 2024, 19:08:47

Hi @user-a7b1b5 πŸ‘‹! As in just the visualisation? Or the data? For the first one, you can generate a video renderer visualisation in Cloud and you can select specific sections by using events and adjusting the settings in Enrichment creation under Advanced Settings > Temporal Selection.

In Pupil Player, you can trim the data using trim marks.

user-228c95 31 July, 2024, 18:23:54

I have another question. As a beginner in research, I'm unsure what data I will need for analysis. I will be studying performance in basketball players, and I'm having trouble understanding which data I need to analyze, especially the data I have obtained in CSV format. If you could share any beginner-friendly resources for analysis, I would greatly appreciate it.

user-d407c1 31 July, 2024, 20:02:17

@user-228c95 It's hard to give concrete advice on a specific field, but I recommend starting with the scientific literature in your field. For instance, you might find this study on the role of Quiet Eye in basketball interesting.

From there, you can formulate different hypotheses and test our analysis tools. For example, you can compare the fixation scanpath of a point guard with that of a power forward before moving/passing the ball, use the Reference Image Mapper to see where they look before a free throw, or try the DensePose tutorial to explore how a hands-up defence blocks a player's view.

Is there anything specific you are looking for?
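
For what it's worth, a first look at the exported CSVs often starts with simple descriptive statistics. Here is a minimal, hedged sketch (an editor's addition; the "duration [ms]" column name is an assumption about the standard fixations.csv export and may need adjusting):

```python
# First-pass descriptive statistics on exported fixation data.
import pandas as pd

fixations = pd.read_csv("fixations.csv")

print("Number of fixations:    ", len(fixations))
print("Mean fixation duration: ", fixations["duration [ms]"].mean(), "ms")
print("Longest fixation:       ", fixations["duration [ms]"].max(), "ms")
```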

End of July archive