πŸ•Ά invisible


user-e93961 01 July, 2023, 05:06:16

Hi, I wonder if I can access a Pupil Invisible through the realtime API from within a Docker project on Ubuntu. I successfully installed the package inside the Docker project, but still cannot detect the device (device = discover_one_device(max_search_duration_seconds=10)). Btw, I used a Mac on the same network and could detect the device. I checked the API documentation; I suppose I need to configure the project so that mDNS and UDP are allowed??

user-cdcab0 01 July, 2023, 20:11:52

It might be a good idea to try manually connecting the device using its IP address first (e.g., device = Device('192.168.0.123', 8080)). If that doesn't work, nothing else will.
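On the Docker side of the question: mDNS discovery relies on UDP multicast, which Docker's default bridge network does not forward into the container. A commonly used workaround on Linux hosts is host networking; the image name below is a placeholder:

```shell
# mDNS sends UDP multicast to 224.0.0.251:5353, which the default
# bridge network does not pass through. Sharing the host's network
# stack makes discovery behave as it does outside the container:
docker run --network host my-realtime-api-image   # image name is a placeholder

# Alternatively, skip discovery entirely and connect by IP as
# suggested above -- plain unicast traffic works on the bridge network too.
```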

user-3437df 01 July, 2023, 23:06:50

Hi there, I see that in the description and evaluation of the Pupil Invisible paper, the Pupil Invisible glasses "deliver unbiased gaze estimates with less than about 4° of random spread over most of the field of view".

Is it possible to derive a radius/buffer that accounts for this 4 degrees of spread given the gaze data, and if so how should we go about doing this? Also, would it be accurate to say that it's possible that the true gaze may lie somewhere within this buffer?

nmt 03 July, 2023, 19:08:04

Greetings, @user-3437df! One can generally think of the true gaze point being somewhere within the gaze circle rendered by Invisible. Note that the accuracy figure reported is averaged over the whole field of vision and without any offset correction. In more central vision, you can expect better accuracy. It will vary from person-to-person. This third-party tool is pretty useful if you want to measure that: https://link.springer.com/article/10.3758/s13428-023-02105-5
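To turn an accuracy figure like that into a pixel radius around the gaze point, a pinhole-camera approximation can be used. The resolution and field-of-view defaults below are illustrative assumptions, not official specs; substitute your own scene camera's intrinsics:

```python
import math

def deg_to_px(deg, image_width_px=1088, fov_deg=82.0):
    """Approximate radius in scene-camera pixels subtended by a visual
    angle `deg` near the image centre, using a pinhole model:
    focal length f = (w / 2) / tan(fov / 2), radius = f * tan(deg).

    The default width and FOV are placeholder values for illustration."""
    f = (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)
    return f * math.tan(math.radians(deg))
```

Note this ignores lens distortion, so it is only a rough buffer and least accurate toward the image periphery.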

user-ab605c 03 July, 2023, 15:12:58

Hello, I am having trouble running the Reference Image Mapper (and Face Mapper as well) on Pupil Cloud. I believe other users also faced similar issues in the recent weeks?

I tried to start running them both from the new (cloud.pupil-labs.com) and old (old.cloud.pupil-labs.com) pages without any luck. The progress bar visible on the old page is always stuck at "0% Done". I tried to wait 30–60 minutes and hard refresh the page several times, without any changes. I also tried to run brief samples in a new project (like 2 x 2-minute recordings), again without any luck...

user-94f03a 05 July, 2023, 05:52:24

Hello, I would like to ask 2 questions:

  1. Is there any roadmap for offset correction in the Cloud? Sometimes we can only catch offsets post hoc, when we are back in the lab. If/as more people use the Cloud for preprocessing, including AOI detection, it would make sense to enable such a feature. For example, in our project we have to take our analysis offline using Pupil Player + plugins for this reason, although the Cloud analytics are otherwise sufficient for our purpose.

  2. For the offset correction in Pupil Player, is the offset constant across the scene camera space, or is it spherical/angular, i.e. smaller in the centre and wider in the periphery? I wonder if I can take the processed data and apply the offset correction when I do the analysis downstream (e.g. in R).

Thanks!

marc 06 July, 2023, 08:10:35

Hi @user-94f03a!

1) Post hoc offset correction in Cloud is currently not concretely on the roadmap. I do understand where you are coming from though and see the use case. There is already a ticket for this on our feedback board, and upvoting it there would certainly help give this more priority! https://feedback.pupil-labs.com/pupil-cloud/p/post-hoc-offset-correction

2) The offset corrections in both Pupil Player and the Companion app are constant across the scene camera space and do not happen in angular space. So technically the correction is slightly off in the outer regions of the image where the distortion is strong. But this also means that you can indeed trivially add a similar correction yourself by simply adding a constant to the gaze/fixation data samples.
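Since the correction described above is a constant shift in scene-camera pixels, applying it downstream is a one-liner per sample. A minimal sketch; the column names mirror the Cloud CSV exports and should be checked against your own header:

```python
def apply_offset(rows, dx_px, dy_px):
    """Add a constant pixel offset to exported gaze samples.

    `rows` is an iterable of dicts as produced by csv.DictReader.
    The 'gaze x [px]' / 'gaze y [px]' column names are an assumption
    based on the Cloud export format -- verify against your file."""
    corrected = []
    for row in rows:
        row = dict(row)  # don't mutate the caller's data
        row["gaze x [px]"] = float(row["gaze x [px]"]) + dx_px
        row["gaze y [px]"] = float(row["gaze y [px]"]) + dy_px
        corrected.append(row)
    return corrected
```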

user-1ab098 07 July, 2023, 07:04:11

Hi there, I have a question regarding the Pupil Player software. I recently used it to analyse my Pupil Invisible recordings. I was interested in two things: the scene video and the pupils of my participant. When I downloaded the respective file from Pupil Cloud and tried to open it in Pupil Player yesterday, all I got was a grey screen. I already noticed that you modified the layout of Pupil Cloud and that the downloaded folder contained fewer files. Can I still use Pupil Player, and if not, what is my alternative? Thanks!

marc 07 July, 2023, 07:22:51

Hi @user-1ab098! The UI in Pupil Cloud did indeed change, but your downloads should not contain fewer files. Combined with a gray scene video, this sounds like an error during recording where some of the sensor streams were not recorded and are therefore missing from the downloads. Did you get any error messages from the Companion app while recording? Does Pupil Cloud indicate any errors for the recording? Could you share a screenshot that shows what files are included in your download?

marc 07 July, 2023, 09:18:56

Thanks for the additional info. That's interesting! The export format has not changed in theory, so I wonder what is going on. When you say "did not work" you mean the scene video is gray, or do you also see other symptoms? Which files are missing? And could we maybe inspect one of the recordings? It would suffice if you let us know the recording ID (available e.g. in the info.invisible.json file) and state your explicit permission for us to access the data.

user-1ab098 07 July, 2023, 09:32:35

With the "old file", I see the scene video plus the eye videos. With the "new file", I only see the scene video in grey, no eye videos and no sound. If I open the scene video with VLC Media Player etc., I do have visuals and sound though. Regarding the inspection, I need to double check because the recordings contain sensitive information, give me a minute please.

user-1ab098 07 July, 2023, 10:04:59

Okay, so I have an older ID that does not work (205e3483-11d7-4eca-ada5-6d4ce2b5d3e5) and a new ID that does not work either (c853518d-e71a-4e88-8bb0-2b462e51b0a2)

user-a06969 07 July, 2023, 10:12:17

Dear Pupil Labs: when I try to upload the video to the cloud, it is not working, with error code zmq_msg_recov errno4 on the phone. I tried linking with different wifi networks; it is still not working. How should I solve the problem of uploading the video? Thanks

user-3c26e4 08 July, 2023, 09:18:03

Hi @marc. I have two questions which are really important for my recent study. 1. When I plot "fixation x [normalized]" and "fixation y [normalized]", I don't understand why the fixations are distributed the way you can see. What is the main gaze axis and what are the boundaries (e.g. -1/+1 in x and y)? As far as I know, the origin of the coordinate system is in the upper left corner, with x to the right and y downward. But in all my cases it looks totally different, with many negative values. Please help me understand how I can analyze the graphs. 2. When I move between frames in Pupil Cloud with the arrow keys, I always jump in 5 s steps, but I need a much finer transition. Why isn't it possible to change the frame steps? Please advise how I should do this. Thanks a lot.

Chat image

marc 10 July, 2023, 09:43:28

Hi @user-3c26e4!

1) I am assuming the fixation x/y [normalized] values are coming from an export of the Marker Mapper. You are correct that (0, 0) corresponds to the top-left corner of the surface. (1, 1) would be the bottom-right corner. Thus all samples with an x and y in [0, 1] correspond to fixation samples inside the surface. All other samples with values >1 or <0 correspond to samples outside the surface.

So if you see a lot of negative values that just means that a lot of the time the fixation is not inside of the surface.

2) In the help menu of Pupil Cloud, which you can find in the top-right of the UI (the question mark icon), there is an entry called Keyboard Shortcuts. There you can see that you can jump in steps of 0.03 seconds using Shift + Left/Right Arrow Key. This allows you to jump roughly frame by frame through the video, as the scene camera's framerate is 30 Hz.
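Acting on point 1 in an analysis script amounts to keeping only samples whose normalized coordinates fall inside the unit square. A minimal sketch:

```python
def on_surface(samples):
    """Filter (x, y) pairs in Marker Mapper normalized surface
    coordinates: (0, 0) is the top-left corner of the surface and
    (1, 1) the bottom-right, so anything outside [0, 1] on either
    axis is off the surface."""
    return [
        (x, y) for x, y in samples
        if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    ]
```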

user-a95892 10 July, 2023, 18:59:43

Hello,

I’m using a Pupil Invisible in a driving study and seeking help integrating GPS data into the output stream. I know the companion device has GPS capability, but after some initial looking, I don’t see an option in the app to engage this metric. I could collect the GPS data externally, but that brings sync issues with the Invisible’s frame rate. Has anyone tried this and could offer any suggestions for integration? Wondering what the best approach would be going forward.

I’d appreciate all answers!

marc 11 July, 2023, 07:17:02

Hi @user-a95892! The Pupil Invisible Companion app does not currently record GPS data. As you say, you could use a 3rd-party app to collect the GPS data. In theory, all data get timestamped on the phone, so syncing should be straightforward. You can consult this guide on how to match data based on timestamps: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
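The matching described in that guide boils down to a nearest-timestamp lookup. A stdlib-only sketch, assuming both streams are sorted and expressed in the same clock (e.g. ns since epoch):

```python
from bisect import bisect_left

def match_nearest(target_ts, source_ts):
    """For each timestamp in target_ts, return the index of the
    closest timestamp in source_ts. Both lists must be sorted
    ascending and use the same units."""
    matches = []
    for t in target_ts:
        i = bisect_left(source_ts, t)
        if i == 0:
            matches.append(0)
        elif i == len(source_ts):
            matches.append(len(source_ts) - 1)
        else:
            before, after = source_ts[i - 1], source_ts[i]
            # pick whichever neighbour is closer in time
            matches.append(i if after - t < t - before else i - 1)
    return matches
```

With real data you would additionally reject matches whose time difference exceeds a tolerance, e.g. half the GPS sample interval.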

user-2ecd13 11 July, 2023, 18:30:47

Hey Pupil, we received a few error messages while recording, but our RAs were only able to take the attached picture. The other error messages mentioned something about a "sensor malfunction".

Could you provide some details on what might cause these issues and how to quickly resolve them in the future?

Chat image

user-480f4c 12 July, 2023, 06:57:04

Hi @user-2ecd13 πŸ‘‹ ! Sorry to hear that you were experiencing issues with your recordings. Would you be able to provide more details of the setup and when exactly the errors appear?

user-355442 12 July, 2023, 14:55:22

Hi @marc,

user-355442 12 July, 2023, 14:56:03

[email removed] (Pupil Labs), what do negative gaze positions mean please?

marc 12 July, 2023, 14:58:16

Hi @user-355442 For the raw gaze signal, that would mean that the gaze point lies outside the visible field of view of the camera. Given how large the field of view is, that does not occur often, but it's possible.

In the context of the marker mapper, it could also mean that the gaze data is outside of the defined surface.

user-20a5eb 13 July, 2023, 16:40:51

Hi everyone, I'm currently trying out the project from the alpha lab using densepose (https://docs.pupil-labs.com/alpha-lab/dense-pose/). When using it, we get as output a video and different CSVs. One of them, densepose.csv, is the same as gaze.csv but with a supplementary column indicating gazed body parts. However, in this column there are sometimes multiple different body parts + 'background' indicated for a single nanosecond. I can't find anywhere whether it's the body parts ordered by certainty for this specific nanosecond or something else. Could someone enlighten me on that point?

user-d407c1 13 July, 2023, 18:09:26

Hi! Happy to hear you are trying out the densepose tutorial. Basically, we set a circle radius here https://github.com/pupil-labs/densepose-module/blob/e92aef1ed880571a30c90deb5af5ef8b09007373/src/pupil_labs/dense_pose/pose.py#L173 and we check all the pixels inside the circle to see whether they touch the prediction bounding box of any body part detected in the scene; if so, that body part is added.

PS. I could make the gaze circle (detection circle) an argument if you prefer, so that you can easily modify it, or you can directly modify it at the line I posted above.
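For reference, checking every pixel inside a circle against a bounding box is equivalent to a closed-form circle/rectangle intersection test. A sketch of that idea (not the module's actual code):

```python
def circle_touches_box(cx, cy, r, box):
    """True if a gaze circle with centre (cx, cy) and radius r
    overlaps the axis-aligned bounding box
    (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    # clamp the centre to the box to find the box's nearest point
    nx = min(max(cx, x_min), x_max)
    ny = min(max(cy, y_min), y_max)
    # overlap iff that nearest point lies within the circle
    return (cx - nx) ** 2 + (cy - ny) ** 2 <= r ** 2
```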

user-355442 13 July, 2023, 23:23:42

[email removed] (Pupil Labs), is there a document that explains how the pupil diameters were derived and how they can be converted to more realistic dimensions?

user-a15383 14 July, 2023, 08:19:48

[email removed] (Pupil Labs), how can I overlay the heatmap with a section of the video recording from Pupil Cloud? Do I have to distort the heatmap so that the corners of the PNG match the corners of the trapezoid, or do I have to put the heatmap on the whole image section?

Chat image

marc 14 July, 2023, 08:40:04

Hi @user-a15383! The heatmap you can download from Pupil Cloud is always rectangular and corresponds to the rectangle you get when undistorting the trapezoid of the surface into a rectangle.

The image background in the heatmap is such an undistorted crop from the scene video. This is essentially a homography transformation. Let me know if you need further input on how to get this done!
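For anyone wanting to do this undistortion themselves: mapping the four trapezoid corners to the four rectangle corners defines a homography, which in practice `cv2.getPerspectiveTransform` computes in one call. A numpy-only sketch of the same direct linear transform, for illustration:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping four (x, y) points in
    `src` to four points in `dst`: H @ (x, y, 1) is proportional
    to (u, v, 1)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the solution is the right singular vector belonging to the
    # smallest singular value of A
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    """Apply a homography to a single point."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```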

user-08c118 14 July, 2023, 10:30:52

Dear Pupil Labs Team, we are currently running a study in Berlin in which we want to measure the perception of participants while walking through the city. We use the Pupil Invisible to record eye tracking data. Unfortunately, some of the recordings, after being uploaded to the cloud, are just grey. This is very confusing, as the camera seems to be recording and we didn’t receive error messages. I attached a screenshot so you can see what I’m referring to. Do you know what might be the reasons for this? Moreover, sometimes the recording just stopped without notifying us. Instead of a recording that was supposed to last almost 2 hours, we now have only 20 minutes… We are also using the LSL relay, as we want to synchronize eye tracking with other physiological data. Can this be a problem as well, making the recording less stable? Can you give me any information about what the reliability while walking through the city should be like?

Best regards Anton Voss TU Berlin

marc 14 July, 2023, 10:35:34

Hi @user-08c118! Sorry to hear about your issues! I have a couple of questions to clarify things:

- It is normal for the first couple of seconds of scene video to be gray. This happens because the scene camera takes a couple of seconds to initialize while the other sensors are already recording. Do I understand correctly that in your problematic recordings the scene video is gray throughout the entire recording?
- Do those recordings play fine in the Companion app on the phone?
- Could you share the ID of one of the affected recordings and give us explicit permission to access the recording data, so we can take a closer look at what is going on?

user-08c118 14 July, 2023, 10:31:34

Chat image

user-08c118 14 July, 2023, 10:38:38

Yes, sure. It's not just the first seconds of the recording which are grey. I also cannot play the recordings on the Companion. The ID of one of the affected videos is: cb918757-a2bf-41ea-87f4-fcb6c9f670b8

user-08c118 14 July, 2023, 10:39:30

Same for recording: 0e3b68ee-3872-4c6c-9474-6602e1ef1563

user-08c118 14 July, 2023, 10:42:23

And for this recording after 46mins: 5b556c54-d111-4c9c-8695-47c7f9ae9a67

user-20a5eb 14 July, 2023, 12:00:39

I have a different question: I have 12 phones, and for each of them, when I want to change the wearer from the app, it endlessly shows a 'syncing wearers' screen. I found a post about this from 2020, but the solutions didn't work. My phones are connected to the internet, and restarting them didn't work. Do you have any idea?

marc 14 July, 2023, 12:34:21

Hi @user-20a5eb! Could you try clearing the Companion app's internal storage via the following steps? Note, this will not delete any recording data on the phone, but only internal data of the app. If you want existing recordings to show up in the Companion app again after doing this, you will have to import them again from the app's settings view.

Steps to clear the app's storage:

- Press and hold the app's icon on the homescreen
- Select App info -> Storage usage
- Hit Clear data and press Ok
- Restart the app by clicking Force stop on the App info screen and then Open

user-20a5eb 14 July, 2023, 13:00:54

It worked thank you @marc !

user-f01a4e 17 July, 2023, 05:32:48

Hi, I have been experiencing some trouble lately. On the phone, I am unable to get any feed when I put the glasses on, so I am unable to know if the recording is happening or not. When the device is not worn, the video feed is seen. Can I get some help?

user-480f4c 17 July, 2023, 06:28:56

Hi @user-f01a4e πŸ‘‹ ! Sorry to hear that you've been having issues with your glasses. Could you please elaborate a bit on the issue? Do you mean that you are connecting the glasses to the phone and cannot preview the feed of the scene camera with the gaze estimation (i.e. red circle)?

user-08c118 17 July, 2023, 14:03:07

Where exactly can I lower the sampling rate? I did not find this option in the app ...

marc 17 July, 2023, 14:07:48

Turns out this feature is currently only available in the Neon Companion app. My bad! What you can do in the Pupil Invisible Companion app, though, is disable eye video compression in the settings!

user-08c118 17 July, 2023, 14:10:06

Okk no worries. This is the same as "eye video transcoding" I assume?

marc 17 July, 2023, 14:33:54

Correct

user-f01a4e 18 July, 2023, 05:24:39

Hi @user-480f4c, thank you for the prompt reply. So my problem starts when I connect the device to the phone: the app starts to flicker a bit and then normalizes. After that, when I preview the feed (without wearing the glasses), I have a clear view of what the device is seeing, but when I wear the device and then preview the feed, I get a black screen. There is no feed, and I am unable to tell whether I should start the recording or not. I have been using the feed to calibrate each different user, but without the feed I am afraid to use the device. Hope this is elaborate enough. Please let me know.

user-480f4c 18 July, 2023, 06:02:41

Hi @user-f01a4e πŸ‘‹ ! Thanks for clarifying. Sorry to hear that you're having issues with your recordings. Please send an email to info@pupil-labs.com and I will follow up with some debugging steps. πŸ™‚

user-f01a4e 18 July, 2023, 06:19:40

@user-480f4c Sure will do. thank you so much πŸ™Œ

user-1391e7 18 July, 2023, 10:47:05

hi Marc, hello everyone πŸ™‚

I took a look at the recording exports, under https://docs.pupil-labs.com/export-formats/recording-data/invisible/ When you're talking about recording exports, this isn't just hitting export on a recording in the companion app, or is it?

user-1391e7 18 July, 2023, 10:59:40

it's something you do via the cloud?

user-d407c1 18 July, 2023, 11:01:12

Hi @user-1391e7 πŸ‘‹ ! The page that you link refers to the recording exports from the Cloud.

marc 18 July, 2023, 11:05:32

Hi @user-1391e7! To add onto the response, exporting your data through Pupil Cloud is the recommended path. After uploading a recording to Pupil Cloud additional data streams are added to the recording such as blinks and fixations. These are not available when accessing the data directly from the phone.

It is, however, possible to read the raw sensor data from the phone. This requires opening the binary files using e.g. Python, though, and is not super straightforward.
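As a sketch of what reading those phone files can look like: the layout assumed below (little-endian float32 x/y pairs in the .raw file, little-endian uint64 nanosecond timestamps in the matching .time file) is an assumption to verify against the Pupil Labs documentation for your recording format version:

```python
import struct

def read_gaze_raw(raw_path, time_path):
    """Parse 'gaze ps1.raw' / 'gaze ps1.time'-style files.

    Assumed layout (verify against the docs!): the .raw file holds
    little-endian float32 (x, y) pairs in scene-camera pixels; the
    .time file holds little-endian uint64 timestamps in ns."""
    with open(raw_path, "rb") as f:
        raw = f.read()
    with open(time_path, "rb") as f:
        times = f.read()
    points = list(struct.iter_unpack("<2f", raw))
    ts = [t[0] for t in struct.iter_unpack("<Q", times)]
    assert len(points) == len(ts), "sample/timestamp count mismatch"
    return points, ts
```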

user-1391e7 18 July, 2023, 11:07:50

thanks! Yes, I've played around with that a little. currently using the realtime-api to run an online version of an I-DT fixation algorithm. now just looking to compare a little, see how far off the mark I am πŸ˜‰
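For anyone else building a comparison like this, a minimal offline I-DT looks roughly like the following. The dispersion measure (x range + y range) and the thresholds are illustrative choices, not the Cloud algorithm:

```python
def idt_fixations(points, timestamps, max_dispersion, min_duration):
    """Minimal dispersion-threshold (I-DT) fixation detector.

    points:     list of (x, y) gaze samples
    timestamps: matching sample times (any unit, e.g. ms)
    Grows a window while its dispersion (x range + y range) stays
    within max_dispersion; emits (start_ts, end_ts) for windows that
    lasted at least min_duration."""
    fixations = []
    start = 0
    while start < len(points):
        end = start
        while end + 1 < len(points):
            window = points[start:end + 2]
            xs = [p[0] for p in window]
            ys = [p[1] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
        if timestamps[end] - timestamps[start] >= min_duration:
            fixations.append((timestamps[start], timestamps[end]))
            start = end + 1
        else:
            start += 1
    return fixations
```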

user-1391e7 18 July, 2023, 11:10:56

is there information on what thresholds you set for the calculations on the cloud?

user-1391e7 18 July, 2023, 11:14:54

like say.. 1Β° of visual angle, minimum duration of 100ms? both higher?

user-d407c1 18 July, 2023, 11:18:35

Hi @user-1391e7 ! You might not be able to fully compare them: the fixation algorithm employed in the Cloud uses the scene camera optic flow to compensate for VOR and head movements. You can read more about it here: https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf

This fixation algorithm is currently being peer reviewed, and once the process is over we plan to make it publicly available, so that you can directly implement it with the realtime API. So please stay tuned! πŸ˜‰

user-1391e7 18 July, 2023, 11:23:15

gotcha! thanks for the link!

user-1391e7 18 July, 2023, 11:26:28

how are blinks handled in this case? right now, I noticed that gaze data comes in anyway, even if my eyes are closed. would that be a separate detection that runs concurrently with fixation detection and gets filtered, or would the input for the fixation algorithm change depending on blink detection?

marc 18 July, 2023, 11:32:16

Blinks are calculated independently of fixations and gaze in Pupil Cloud. The white paper of the algorithm can be found here: https://docs.google.com/document/d/1JLBhC7fmBr6BR59IT3cWgYyqiaM8HLpFxv5KImrN-qE/export?format=pdf

As you saw yourself, the gaze signal does not respect blinks and always gives a result. The fixation detector does not consider blinks either. In practice blinks lead to a short "jump" in the gaze signal, which in turn would cause the fixation detection to consider an ongoing fixation to be finished.

Considering blinks during fixation detection would make sense; however, the robustness of the blink detector is still limited. In some settings it can have a high rate of false positives, which would have a negative effect on the fixation detection, which in turn makes considering blinks undesirable in the first place.

user-1391e7 18 July, 2023, 11:48:22

I noticed the jump as well. I had some cracked ideas on how to detect that jump, but it's not identical for every user. And like you said, the jump itself finishes a fixation in any case. But when we're looking at the saccades more closely, rather than the fixations, then I eventually have to (with some degree of accuracy or certainty) discern between what was caused by a blink and what was a real saccade.

marc 18 July, 2023, 11:51:22

Yes, that is true! If there is a blink that happens during a proper fixation, there will be a false (implicit) saccade detection and two shorter fixations instead.

user-1391e7 18 July, 2023, 13:13:46

one more minor thing, just for clarification: with respect to the logged gaze values, as well as the ones sent via realtime api:

both have the offset I've set via the companion app applied, right? Or would the log be the absolute raw data, so to speak, and I'd have to reapply the offset recorded in info.json? I could see either way making sense, but I'm guessing they're logged with the offset applied.

marc 18 July, 2023, 13:14:36

Yes, the offset is already applied to the gaze values from all sources and also to the fixations!

user-1391e7 18 July, 2023, 13:14:30

also thank you for the many answers, you've been most helpful!

marc 18 July, 2023, 13:14:44

You're very welcome!

user-a98526 19 July, 2023, 08:11:17

Hi @marc, Pupil Cloud has a wonderful update! In previous versions, I could download raw data in Pupil Cloud for processing in Pupil Player. I was wondering if I can still get this type of data from Pupil Cloud now, or if I can only get it via USB.

marc 19 July, 2023, 08:20:09

Hi @user-a98526! I am glad to hear you like it! Yes, this download is still available, it was just hidden away a little bit. You need to enable this additional download option per workspace in the workspace settings. Look for "Show Raw Sensor Data" in the settings. Once enabled you can find the additional download option in the right-click menu as before.

user-a98526 19 July, 2023, 08:25:56

That's so cool and very helpful! Thanks!

user-1391e7 19 July, 2023, 10:29:52

while in the cloud, with my workspace open, whenever a new recording is currently uploading ~~or is being processed~~ (just the upload), I seem to be unable to use playback of other recordings that are uploaded and have finished processing. Is this intended behaviour?

user-1391e7 19 July, 2023, 10:34:03

I do see the recording get loaded for a second or so in the player, with the thumbnails in place and the first frame loaded and then it disappears, the play button becomes greyed out

user-1391e7 19 July, 2023, 10:35:46

might it be a browser or operating system issue? Windows 10 with Firefox & Chrome

user-d407c1 19 July, 2023, 10:39:07

Hi @user-1391e7 ! Would you mind doing a hard refresh on the page? Did that solve it? Also, could you send us a screenshot or screen recording of this behaviour at info@pupil-labs.com?

Once recordings are processed, you should be able to play and work with them, independently of whether other recordings are being uploaded.

user-1391e7 19 July, 2023, 10:55:16

refreshing doesn't change the behavior; it always happens if I'm trying to look at a video that is old, finished, and processed while I also see another video being uploaded

user-1391e7 19 July, 2023, 10:56:02

it's like the little upload progress visualisation kills the loaded video (different recording than the one being uploaded)

user-1391e7 19 July, 2023, 11:02:03

I have no recordings left at the moment to upload, but I'll just make another real quick. just need it to be long enough so the upload progress stays for a little while

user-1391e7 19 July, 2023, 11:15:48

done, hope that helps

user-1391e7 19 July, 2023, 11:17:48

if it's not just my system somehow, I'm guessing it's caused by this little upload progress circle, where the update of that visualisation somehow unloads the video or doesn't allow me to play the already processed video while another one is being uploaded and visible

user-d407c1 19 July, 2023, 11:18:22

thanks @user-1391e7, we will look into it

user-1391e7 19 July, 2023, 11:19:09

found a temporary workaround as well, so it doesn't hinder me in the end. I just need to add the videos I want to watch to a project, switch to project view (then the video being uploaded isn't in the list) and it works

user-e40297 20 July, 2023, 19:50:11

Hi, I'm struggling with pupil player (using Invisible). There doesn't seem to be a world cam movie. There seemed to be a new update. Any idea what might be happening?

user-d407c1 21 July, 2023, 06:20:39

Hi @user-e40297 πŸ‘‹ ! Did you get your data from Cloud or directly from the phone? If you got it from Cloud, did you download the Pupil Player format? You may have to enable it in your workspace; see my previous message on how to enable it. https://discord.com/channels/285728493612957698/1047111711230009405/1108647823173484615

user-1e429b 21 July, 2023, 08:17:53

Hello everyone! :) Our team is experiencing some trouble with data analysis via Pupil Cloud. I’ve got a few questions. Firstly, we put several recordings in one project, let’s say 12 videos. But after making the enrichment and downloading the files, there’s a lack of recording IDs. It seems like the fixation file contains data for only 6 recordings? We don’t understand why this is happening. And the second question is about timestamps. It’s a UTC timestamp, but in what units are the time and the fixation duration measured, seconds or milliseconds? I hope my explanation is clear :) Can I get some help, please?

user-1391e7 21 July, 2023, 09:15:58

inside the files of the download (e.g. fixations.csv), you should be able to find the recording ID as one of the columns. In the same files, you can see the measurement units in the header line. So for fixation duration, we have "duration [ms]", meaning milliseconds. For the timestamps, we have, for example, "end timestamp [ns]", meaning epoch timestamp in nanoseconds.
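Putting those units together, converting one export row into wall-clock times might look like this. The "duration [ms]" and "end timestamp [ns]" column names are as quoted above; "start timestamp [ns]" is an assumption mirroring the end-timestamp column, so check your own header:

```python
from datetime import datetime, timezone

def fixation_times(row):
    """Convert one fixations.csv row (a dict from csv.DictReader):
    epoch timestamps in nanoseconds become UTC datetimes; the
    duration column is already in milliseconds."""
    start = datetime.fromtimestamp(int(row["start timestamp [ns]"]) / 1e9, tz=timezone.utc)
    end = datetime.fromtimestamp(int(row["end timestamp [ns]"]) / 1e9, tz=timezone.utc)
    return start, end, float(row["duration [ms]"])
```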

user-1e429b 21 July, 2023, 11:03:55

Oh, thank you so much for your reply πŸ™πŸ» the link is very helpful. But we still have a problem with the number of recordings: there are fewer recording IDs in the exported enrichment file than in the project.

user-6a29d4 24 July, 2023, 09:52:59

Hi Pupil Labs team! I have a quick question. We are going to perform a measurement campaign next week where we will use the Pupil Labs Invisible device. We will record quite some data each day, and we don't have the possibility to upload all of it over the internet. However, we only get one shot at the measurements, and therefore I'd like to back up the data over USB. When connecting the phone to a laptop, I cannot see the folder where the recordings are stored, however. Where can I find this? NB: I know it is possible to make exports of each recording and transfer these over USB, but can I do this without loss of data? We would like to compute gaze and fixation data afterwards, which I know can only be done (for now) by uploading to the cloud.

nmt 24 July, 2023, 11:16:21

Hi @user-6a29d4! Detailed instructions on how to export and transfer recordings via USB can be found here: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html#export-from-invisible-companion-app You can copy these recordings over without losing access to them in the Companion App. Recordings can be uploaded from the Companion App later on when you have internet.

user-f01a4e 24 July, 2023, 11:30:27

@user-dc2c1d hi, Anuj here. We have a scheduled pickup of the "invisible" by your team through FedEx. There has been an update on that. Kindly check your mail and let me know.

nmt 24 July, 2023, 17:46:45

Local transfers are mainly used for those who want access to the raw data via Pupil Player, our desktop app. But you don't get access to the enrichments in Cloud that way.

user-e52094 24 July, 2023, 18:13:25

When I download videos from the Pupil Labs Cloud workspace, the download folders do not contain the following files: gaze ps1.raw, gaze ps1.time, PI world v1 ps1.time, PI world v1 ps1.mp4, extimu ps1.raw, extimu ps1.time, and info.json. I am using Pupil Invisible glasses.

However, these files were in my downloads from pupil labs cloud in April. Can any help me figure out how to download these files? I need them to run analysis in iMotions.

user-e52094 24 July, 2023, 18:19:13

I found the solution from help posted in a previous post. Thanks.

user-99b85c 25 July, 2023, 05:28:35

Hey Pupil Labs team! I'm reaching out for assistance with some issues I'm encountering in the Pupil Cloud workspace. I keep receiving an error message that says "Precondition failed: If-Unmodified-Header Too Old." This error appears when I attempt to delete files or enrichments, move certain video files into a project, or create a reference image mapper enrichment. I'm using Microsoft Edge as my browser. I've tried using alternative browsers like Chrome, which allows me to perform some tasks, but I still encounter "internal server errors" when attempting to move certain videos into a project or create an enrichment. All our videos were recorded using the Pupil Invisible glasses and uploaded using the companion device and they play back with no issues. They are all max 10 mins long and have been recorded over the last couple of months. Any help or guidance you can provide would be greatly appreciated.

user-480f4c 25 July, 2023, 08:39:51

Hi @user-99b85c πŸ‘‹ ! Sorry to hear you've been experiencing these issues. Could you please share with us the account and workspace ID to [email removed] You can find the workspace ID in the URL: https://cloud.pupil-labs.com/workspaces/<workspace ID>/

user-d23b52 25 July, 2023, 08:05:04

I trust you are well. We encountered an issue during a recent experiment and I'm hoping to tap into your collective knowledge for possible solutions.

During the experiment, we had 12 invisible devices and their respective companion devices activated simultaneously using the asynchronous API. We've been sending periodic messages to these devices. These companion devices, enclosed in a silicon case (designed for air flow) and worn around the neck, have ample storage space and were fully charged prior to the performance.

However, approximately 30 minutes into recording, one of the invisible glasses began to flash red and the companion devices vibrated. Of note is that both devices were notably hot to the touch at this point.

Any thoughts or suggestions on this would be highly appreciated.

Best Regards,

user-c2d375 25 July, 2023, 11:53:33

Hi @user-d23b52 👋 I am sorry to hear you're experiencing issues with your Pupil Invisible. May I ask you to reach out to info@pupil-labs.com in this regard? A member of the team will help you quickly diagnose the issue and get you up and running with working hardware ASAP.

user-f01a4e 26 July, 2023, 10:36:44

@user-dc2c1d Hi, Anuj here. We had a query regarding the dispatch of the Invisible; we have mailed the details. Kindly have a look and let us know.

user-c16926 27 July, 2023, 09:14:28

Hello, I am trying to associate gaze with each frame of the video. I am using Pupil Invisible and got the data directly from the phone, without using Pupil Cloud. I extract frames from world.mp4 using ffmpeg, and get the gaze data by dragging the recording folder into Pupil Player and clicking the export button. However, I found that for some videos there is a slight difference in the frame count (about 20 frames). For example, the 20th frame from ffmpeg corresponds to the 0th frame from Pupil Player. Any idea how and why this happens?
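A robust way to sidestep off-by-N frame indices entirely is to match gaze samples to frames by timestamp rather than by frame number. A minimal sketch with synthetic timestamps; in a real export these would come from the recording's world timestamps and gaze export (exact file and column names vary by software version):

```python
import numpy as np

# Synthetic example timestamps (seconds) standing in for real export data.
frame_ts = np.array([0.000, 0.033, 0.066, 0.100])          # world frame times
gaze_ts = np.array([0.005, 0.030, 0.051, 0.090, 0.099])    # gaze sample times

# For each gaze sample, find the frame with the closest timestamp.
idx = np.searchsorted(frame_ts, gaze_ts)
idx = np.clip(idx, 1, len(frame_ts) - 1)
closer_left = (gaze_ts - frame_ts[idx - 1]) < (frame_ts[idx] - gaze_ts)
frame_for_gaze = np.where(closer_left, idx - 1, idx)
print(frame_for_gaze)  # -> [0 1 2 3 3]
```

This nearest-neighbour matching stays correct even when ffmpeg and Pupil Player disagree about frame numbering, because it never relies on both tools counting frames the same way.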

nmt 27 July, 2023, 13:57:00

Hi @user-c16926! Is there a particular reason you're using ffmpeg to extract the video frames rather than just making an export from Pupil Player? ffmpeg is known to extract more frames than expected if the original video has variable durations between frames. Depending on the lighting conditions and/or auto-exposure settings, this might be the case with your recording. One thing to try is setting the `-vsync 0` option in ffmpeg to account for the variable frame rate.
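The `-vsync 0` suggestion could look roughly like the sketch below. The paths are placeholders, and the command is only assembled here, not executed:

```python
import shlex

video = "world.mp4"                    # scene video from the recording folder (placeholder)
out_pattern = "frames/frame_%05d.png"  # hypothetical output filename pattern

# -vsync 0 tells ffmpeg to pass frames through without duplicating or dropping
# them to reach a constant frame rate, so the number of extracted images matches
# the frames actually stored in the (possibly variable-frame-rate) video.
cmd = ["ffmpeg", "-i", video, "-vsync", "0", out_pattern]
print(shlex.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Note that newer ffmpeg releases spell this option `-fps_mode passthrough`; `-vsync 0` is the older, equivalent form.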

user-c16926 27 July, 2023, 09:47:55

For example, the left image is the 1221st frame according to Pupil Player (you can see it at the bottom) and the right image is the 1221st frame extracted by ffmpeg. Due to the slight time difference, if I map the gaze of the 1221st frame onto the ffmpeg-extracted frame, it is not at the correct position.

Chat image Chat image

user-c16926 27 July, 2023, 14:02:13

Thanks @nmt! I only want to use some clips from the whole video, so I generated those clips by first extracting frames and then assembling each clip from the corresponding frames. I will first try `-vsync 0` to see whether that resolves my issue!

nmt 27 July, 2023, 14:05:33

Did you know you can select certain portions of the video to export from Pupil Player? Just drag and drop the trim marks in the scroll bar 🙂

nmt 27 July, 2023, 14:05:57

Then you can run the 'World Video Exporter' plugin. Very easy to do.

nmt 27 July, 2023, 14:06:34

Further instructions here: https://docs.pupil-labs.com/core/software/pupil-player/#world-video-exporter

user-c16926 27 July, 2023, 14:24:47

Thanks, but I have too many clips to extract, so it is very hard to do by hand.
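If the clip boundaries are known as timestamps, the extraction can be scripted rather than done by hand. A rough sketch with a hypothetical clip list; it only builds the ffmpeg commands, and actually running them is left commented out:

```python
# Hypothetical clip boundaries in seconds; in practice these might come from
# event annotations or a CSV of start/end times.
clips = [(10.0, 25.5), (60.0, 92.0), (130.0, 141.5)]
video = "world.mp4"  # placeholder path

commands = []
for i, (start, end) in enumerate(clips):
    out = f"clip_{i:03d}.mp4"
    # -ss/-to trim to the clip window; -vsync 0 passes frames through
    # without duplicating or dropping them.
    commands.append(["ffmpeg", "-ss", str(start), "-to", str(end),
                     "-i", video, "-vsync", "0", out])

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run
```

Each clip becomes one independent ffmpeg invocation, so the loop scales to any number of clips without manual trimming.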

user-5e7326 28 July, 2023, 09:29:55

Hello, my question is: Is it necessary to use the cloud or is it possible to bypass it?

user-c2d375 28 July, 2023, 09:41:01

Hi @user-5e7326 👋 While Pupil Cloud is the recommended workflow for storing, playing back, and enriching your recordings, we understand that it might not be feasible for everyone. If you are unable to use Pupil Cloud, you can still export recordings from the Pupil Invisible Companion app, transfer them to your laptop/desktop via USB, and then use Pupil Player for playback, visualization, and data export for individual recordings. To get a better understanding of this workflow, feel free to check out our documentation (https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/transfer-recordings-via-usb.html#transfer-recordings-via-usb) (https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player).

nmt 28 July, 2023, 14:00:35

Hi @user-5e7326, is there a reason you don't want to use Pupil Cloud? It has a lot of features and analysis tools that aren't available in Pupil Player, like blinks, fixations, and advanced scene-recognition algorithms. Plus, we're always working behind the scenes to add functionality that makes life easier for researchers 😅

user-5e7326 28 July, 2023, 10:02:45

Thank you. I'll try it.

user-5e7326 28 July, 2023, 10:41:21

And is the Player online only? Is there no desktop version?

user-c2d375 28 July, 2023, 13:34:02

Pupil Cloud is only available online, whereas Pupil Player is desktop software that you can freely download from our website. You can access the download link here: https://github.com/pupil-labs/pupil/releases/tag/v3.5

user-d407c1 28 July, 2023, 11:27:00

FFMPEG plotting gaze

End of July archive