🕶 invisible


user-f01a4e 02 January, 2023, 09:37:26

Hi, I am facing some issues with the enrichment. I downloaded the raw data and dropped it into Pupil Player, but I am not getting any video or audio, and no gaze data either. How do I rectify this?

user-d407c1 02 January, 2023, 09:48:46

Hi @user-f01a4e! Are you using macOS and Safari?

user-5543ca 02 January, 2023, 15:22:35

Hi! I have a problem with my Invisible device. It is not recording the scene video. I tried the same a few days ago; it didn't work, and then I saw the following notification (see the attached picture).

Chat image

papr 02 January, 2023, 15:24:57

Hi, this sounds as if there is a connection issue. Do you have a different USB-C cable that you could try?

user-5543ca 02 January, 2023, 15:23:27

Attached: sensor is not available: UVCMicSensor

user-5543ca 02 January, 2023, 15:24:04

However, when I tried it today, the problem remained, but I didn't manage to reproduce the error notification.

user-5543ca 02 January, 2023, 15:24:21

Could you please help me out?

user-5543ca 02 January, 2023, 15:25:22

Yes, I do. One sec..

user-5543ca 02 January, 2023, 15:27:34

It didn't work.

papr 02 January, 2023, 15:28:26

Could you please install and use this app to check whether it recognizes the three cameras (scene and two eye cams)? https://discord.com/channels/285728493612957698/1058009279581409350/1058011293581320232

user-5543ca 02 January, 2023, 15:28:36

I cannot see the preview when the glasses are connected. I can still press "RECORD", but I cannot view the video afterwards; it shows the message "no scene video".

user-5543ca 02 January, 2023, 15:29:45

Sure.

user-5543ca 02 January, 2023, 15:33:08

Chat image

papr 02 January, 2023, 15:34:54

Ok, please contact info@pupil-labs.com regarding a possible hardware issue + repair.

user-5543ca 02 January, 2023, 15:33:25

this is what I see.

user-5543ca 02 January, 2023, 15:34:37

When I first opened the app, I also got this notification

Chat image

user-5543ca 02 January, 2023, 15:35:28

OK, thanks!

user-f01a4e 03 January, 2023, 03:19:46

@user-d407c1 I am using Windows 11 & Chrome

papr 03 January, 2023, 06:24:03

Hi, please note that Pupil Player is not designed to play back a Pupil Cloud enrichment, only the corresponding Player format that you can download in the recording / drive view.

user-f01a4e 03 January, 2023, 10:36:34

@user-d407c1 Oh, OK, thanks. Is there a way I can download individual users' heatmaps from the enrichment together?

user-d407c1 03 January, 2023, 12:57:16

Hi Anuj! To filter generated heatmaps in the Marker Mapper or Reference Image Mapper enrichments, you can right-click on the enrichment in the Enrichments page and select Heatmap, or left-click to select the enrichment and click on the heatmap icon (the one next to the pencil). A pop-up will open with a recordings panel and a settings panel. You can use the search bar to filter the recordings; once you have selected the recordings you are interested in, click the blue Download button. This will download the aggregate heatmap of the recordings appearing in the left sidebar; if only one recording is listed there, you will download its individual heatmap.

user-f01a4e 04 January, 2023, 05:29:07

@user-d407c1 Hello again, I have done the above-mentioned steps; no doubt, it's amazing. But I wish to know if there is a way to download all individual heatmaps of all users in one go, rather than searching for them one by one and downloading them. Oh, and BTW, I know it's late, but still, happy new year to you and your team 😀

marc 04 January, 2023, 08:38:24

Happy new year to you too! There is currently no way to generate heatmaps per subject in one click. The approach @user-d407c1 described is the only option using the UI. An alternative could be, e.g., a Python script that generates them programmatically. We could provide code for generating the heatmap from data, but this would still require some effort on your end.

If you think the ability to easily generate heatmaps per subject is important, we'd be happy to get a feature request here to take this into consideration for future updates! https://feedback.pupil-labs.com/pupil-cloud
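
To illustrate the scripted route marc mentions, here is a minimal sketch rather than an official Pupil Labs implementation. It assumes one gaze CSV per recording with 'gaze x [px]' / 'gaze y [px]' columns (as in the Pupil Cloud raw data export), a hypothetical exports/<recording>/gaze.csv folder layout, and treats a Gaussian-blurred 2D histogram as the heatmap:

```python
"""Sketch: one heatmap per recording from exported gaze CSVs."""
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.ndimage import gaussian_filter

WIDTH, HEIGHT = 1088, 1080  # Pupil Invisible scene camera resolution

# Assumed layout: exports/<recording-id>/gaze.csv (adjust to your export)
for csv_path in Path("exports").glob("*/gaze.csv"):
    gaze = pd.read_csv(csv_path)
    # 2D histogram of gaze positions (rows = y, cols = x) ...
    hist, _, _ = np.histogram2d(
        gaze["gaze y [px]"], gaze["gaze x [px]"],
        bins=(HEIGHT // 8, WIDTH // 8),
        range=[[0, HEIGHT], [0, WIDTH]],
    )
    # ... blurred into a smooth heatmap and saved next to the CSV
    heatmap = gaussian_filter(hist, sigma=3)
    plt.imsave(csv_path.parent / "heatmap.png", heatmap, cmap="turbo")
```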

user-f01a4e 04 January, 2023, 06:21:45

@user-d407c1 Also, I have recently found that some users' enrichment data has not been generated. How do I rectify this?

marc 04 January, 2023, 08:42:33

What exactly is the symptom? Are recording folders missing from the enrichment download? Reasons for this could be: 1) The enrichment may be defined with a pair of events that does not exist in every recording. 2) The enrichment has been computed while not all recordings have been added to the project yet.

In both cases, the solution would be to add additional events as necessary and hit the compute button again to "fill in the gaps". If there is another issue, it might be helpful if you could briefly add someone from our team to your workspace, so that we can see what the issue is first-hand.

user-f01a4e 04 January, 2023, 09:05:51

@marc Noted the points you mentioned, and they haven't been the case for these issues. The upload was done on the day of the event, and the enrichment was run the day after. So, as per your solution, I would really appreciate it if a member of your team could help me out in the workspace. I would be able to clear up some smaller doubts as well.

marc 04 January, 2023, 09:11:05

Alright, I will DM you about providing access to the workspace!

user-ccf2f6 04 January, 2023, 17:38:20

Hi, I'm trying to use the pupil-labs-realtime-api on a Jetson but am facing dependency errors (cchardet at the moment). Is there a thread for installing Pupil API dependencies on ARM setups?

papr 04 January, 2023, 18:03:05

Hey, which Python version are you using? Somehow I associate this issue with Python 3.11 rather than with ARM.
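
For context, cchardet commonly fails to build from source on Python 3.11 (no pre-built wheels are available for it there), so a quick interpreter check along these lines, a sketch rather than an official fix, can rule the Python version in or out before chasing ARM-specific causes:

```python
import sys

# cchardet has no wheels for Python 3.11+, so pip falls back to a
# source build that often fails; check the interpreter first.
if sys.version_info >= (3, 11):
    raise SystemExit(
        f"Python {sys.version_info.major}.{sys.version_info.minor} detected; "
        "try a Python 3.10 (or older) virtual environment before "
        "installing pupil-labs-realtime-api."
    )
print("Python version OK:", sys.version)
```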

user-9bfe0d 05 January, 2023, 03:39:41

Hi, when I wear the Pupil Invisible glasses, they are not detected as worn; it always prints false. Why does this happen? Is it a hardware problem?

user-e29e0d 05 January, 2023, 16:57:11

Hi, happy new year!

I have a quick question: what is the range of the gaze coordinates in Pupil Labs data?

marc 05 January, 2023, 16:58:45

Hi @user-e29e0d! Happy new year to you too! Do you mean range as in what field of view is covered?

marc 05 January, 2023, 17:00:24

That would be as follows for the different products: Neon 132°x81°, Invisible 82°x82°, Core 99°x53°.

user-e29e0d 05 January, 2023, 17:00:35

No, I meant the gaze data that is sent by the Pupil Invisible API

user-e29e0d 05 January, 2023, 17:01:12

what are ranges of x and y?

marc 05 January, 2023, 17:02:19

Ah, okay! Those are gaze coordinates in the scene video in pixels. The resolution of the scene image is 1088 x 1080 pixels.

marc 05 January, 2023, 17:03:04

To a small extent, Pupil Invisible can estimate gaze points outside of the image boundaries, so in practice the values can go slightly out of those bounds.
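
To make those ranges concrete, here is a minimal sketch using the pupil-labs-realtime-api simple interface. The discovery call and gaze field names follow that library's documented simple API, but verify them against your installed version:

```python
from pupil_labs.realtime_api.simple import discover_one_device

WIDTH, HEIGHT = 1088, 1080  # Pupil Invisible scene camera resolution

# Find a Companion device on the local network (blocks until found)
device = discover_one_device()
gaze = device.receive_gaze_datum()

# gaze.x / gaze.y are scene-camera pixel coordinates; as noted above,
# they can fall slightly outside [0, WIDTH] x [0, HEIGHT]
print(f"gaze: ({gaze.x:.1f}, {gaze.y:.1f}) px")
print(f"normalized: ({gaze.x / WIDTH:.3f}, {gaze.y / HEIGHT:.3f})")

device.close()
```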

user-e29e0d 05 January, 2023, 17:03:37

Thanks a lot, that was very helpful.

user-df1f44 06 January, 2023, 09:53:38

Hello folks, quick one: has anyone had the audio of their Invisible recordings just cut out? I'm currently experiencing no sound in some of my workspaces. Is this recoverable? What am I missing, please? Cheers.

papr 06 January, 2023, 09:55:39

Hi! Just to confirm, if you create a new recording in workspace A it has sound, but if you record it as part of workspace B it does not have sound?

Or do you not get sound for any new recording, independent of the selected workspace?

user-df1f44 06 January, 2023, 12:17:41

P.S. The recent recordings were done by a colleague; I am reviewing remotely at the moment. Older videos, done by me, seem to have the same issue as well... 🤔

user-cd03b7 16 January, 2023, 02:30:10

Hey guys, I'm trying to run the Reference Image Mapper enrichment in the Cloud project editor. This progress bar has been stuck like this since I uploaded half an hour ago; are you guys getting similar processing times?

user-d407c1 16 January, 2023, 10:15:11

Please note that the processing time for your request may vary depending on the current workload and job queue. If our servers are at capacity, it may take longer to complete your task. It's also possible that your job has already been completed, but the browser hasn't reflected that. If you're experiencing this issue, try performing a hard refresh (command + shift + R) to check for updates. If you're using a Windows device, you can perform a hard refresh by pressing the "Ctrl" and "F5" keys simultaneously.

user-cd03b7 16 January, 2023, 02:30:13

Chat image

user-cd03b7 16 January, 2023, 02:51:30

Update: with a 2:14 reference "scan" and 14-, 16-, and 17-second test runs, it took the Cloud 46 minutes to give me a heatmap. Interested to hear if you guys are getting similar results.

nmt 16 January, 2023, 10:37:38

Hi @user-cd03b7! Just wanted to add a few points, in addition to @user-d407c1's notes about checking progress and refreshing browsers. The reference image mapper is quite computationally intensive, which is why we employ Cloud computing to run it! There are actually two steps to the process, 1. model building and 2. mapping. The model building part takes the longest, but only needs to be performed once per scanning recording + reference image combination. When you're making scanning recordings of larger features, e.g. in the built environment (https://docs.pupil-labs.com/invisible/enrichments/reference-image-mapper/#_4-an-entire-building), it can take some time to complete, and what you describe sounds reasonable in my experience.

user-6a1b78 16 January, 2023, 13:03:10

Hi, I was wondering where to provide feedback on the UX of the Reference Image Mapper (beta) enrichment. I'll just put it here for now.

The Reference Image Mapper requires the scanning recording to be in the project so that you can point the software at which recording to use as the scanning recording. The problem is that this recording (although no eye tracking data is available, since the glasses are not worn while making the scanning recording) seems to be taken into the analysis when generating the heatmap. When the recording is deleted, the heatmap is rebuilt into a more correct one. IMHO, the scanning recording should be handled (made visible) more in the background, just like the reference image is: only visible in the settings of the enrichment, and not necessarily as an item in the project.

Another thing is that it is not clear how to analyse a recording added later. I can add it to the project, but I suppose it doesn't get analysed automatically within the enrichment. And when entering the enrichment tab, there is no way to push the 'Play' button again to analyse another recording. Might that be a consequence of deleting the scanning recording? I have not tried leaving the scanning recording in the project and then adding another recording to be analysed. How does/should this work?

Also, it is not possible to hit the 'Play' button in the enrichment when using begin and end events that define a relatively short scene (e.g. 3 seconds); the button to analyse is then greyed out. Could it be that the selected scene is also applied to the scanning recording, which is then too short to build a model?

Update: forget the last comment. I was using event definitions that weren't set in the recording; I used events from recordings with a different type of activity. Shouldn't scenes somehow be restricted to use within a project, or something like that?

Update 2: analysis is only possible if the newly added recording also has the same events as used in the enrichment (logically).

user-d407c1 16 January, 2023, 14:35:06

Hi @user-6a1b78!

Thanks for your detailed feedback! We use canny.io for feedback (you can find the link in the docs, in the Cloud, or directly here at feedback). I've logged your request for you here (https://pupil-labs.canny.io/pupil-cloud/p/rim-improve-ux-and-scanning-recording-handling), so you don't have to. But you can upvote it!

We are working towards improving the usability of the Cloud, but in the meantime:

  • Correct heatmap: you should not remove the scanning recording; rather, when clicking on the enrichment's heatmap, use the filter box (see the screenshot below). Only the recordings listed there are used for the aggregated heatmap.

  • To add new recordings, simply add them to the project; the enrichment progress bar will show a new empty section and allow you to click the play button to run the enrichment on the new recordings. This will not work if the events delimiting the sections on which to run the enrichment do not have the same names.

Chat image

user-6a1b78 16 January, 2023, 14:46:29

Thanks for your reply.

I guess using events other than recording.begin & recording.end made it a bit confusing how adding recordings works. I understand now, by trial and error, but some instructions would have helped (at least me :)).

Concerning the heatmap analysis: it is possible to just delete the scanning recording from the project and still add another recording and have it analysed afterwards. But I've tried your filter option, and that works great. It depends on naming the recordings in a way that lets you filter the right ones into the correct collection of files for the heatmap; if you're aware of that, no problem. A checkbox beside the name of each recording (before or after the name) could be useful if one has not named files properly... I might enter that feature in the feedback channel myself.

Thanks so far! Looking forward to seeing more!

user-cd03b7 19 January, 2023, 00:39:08

Hey guys, couple quick reference image mapper questions -

  1. For my reference scan, I have a couple of people moving in the distance in the recording - will this impact heatmap/scan accuracy? I eventually move closer to scan the areas where the people were moving (obviously scanning once they're gone), but is this something I should be worried about?

  2. How does occlusion mitigation work if there are other people in the tester recordings? Testers seem to occasionally look at other people's backs as they navigate through the environment that was scanned without people.

  3. My current heatmapping effort includes seven tester recordings, one very thorough area scan, and six reference images - how should I go about event creation? The explainer on the pupil site doesn't cover this very well, and the Alpha Lab RIM explainer doesn't cover this either - do I need to create events for each of the six reference images in the scan recording, in each of the tester recordings, or both?

  4. Can heatmapping for one reference image overlap with other reference images? My testers are walking along a path and don't always focus on one reference image at a time, so I'm worried that if I isolate reference-image events to a specific start and stop time, it'll inhibit heatmapping for other reference images. I.e., a tester is walking past a sign, so I event-stamp that time period as "sign", but during that period, when they SHOULD be looking at the sign, they're looking at an escalator, which is in a totally different event stamp.

nmt 19 January, 2023, 18:30:46

Hey @user-cd03b7! Responses below: 1. This is often unavoidable in the wild for sure. So long as you captured enough data when the people had gone, it shouldn't matter in my experience!

  2. Good question! This is definitely a factor that might need manual intervention, if you think it's going to affect your results significantly. Unfortunately, there isn't a super easy solution for this just yet, although we are working on finding one. Check out these manual workarounds: https://docs.pupil-labs.com/invisible/enrichments/reference-image-mapper/#occlusions

  3. You'll need to create a separate RIM enrichment for each reference image. Then there are actually two approaches: 3.1. You could create start + end events that correspond to when a participant was within the respective area; you'd just create, e.g., start-sign and end-sign. You'd use these same events for all your participants, as presumably they all walked the same route. This focuses the enrichment on a smaller part of each recording (i.e. when each participant walked past the sign), thus saving some computation time. 3.2. You could also just forget the events and run each enrichment on the whole recording. It will just take longer.

In Alpha Lab (https://docs.pupil-labs.com/alpha-lab/multiple-rim/#map-and-visualize-gaze-onto-multiple-reference-images-taken-from-the-same-environment), we took the first approach.

  4. Each enrichment will only look for its respective reference image. Overlapping in this sense should not be a problem.

user-cd03b7 20 January, 2023, 19:43:47

Great stuff, thank you for the detailed response!

Re: 3.1 - do I need to create any events for the reference scan recording? I.e., sign-scan-start and sign-scan-end?

user-cd03b7 20 January, 2023, 19:44:16

Or do scans just get imported into the project and selected in the enrichment section as a raw recording?

nmt 20 January, 2023, 20:07:51

Simply add your scanning recordings to the project and you're good to go. You can select the appropriate scanning recording when you create an enrichment.

user-cd03b7 20 January, 2023, 20:15:02

Sounds great, thank you!

user-cd03b7 20 January, 2023, 20:16:22

Last thing: if I have multiple scans of roughly the same area (with ample overlap; the three-minute scan-length limit stopped me from scanning the whole environment in one go), is there a way to select multiple scans for one reference image in the enrichment? Right now I only see a way to select one scan video.

user-cd03b7 20 January, 2023, 22:21:46

Quick question for you guys, I'm curious if anyone's running into a similar issue -

I made two RIM enrichments, each using the same two testers, the same environment scan, and the same reference image for heatmapping. The only difference between the enrichments is that one uses start+stop events for when the tester was in the area of interest, while the other uses the raw tester recording (recording.begin and recording.end). On paper, I should've gotten the same heatmap, but in reality I got drastically different heatmaps (below).

user-cd03b7 20 January, 2023, 22:22:02

Chat image

user-cd03b7 20 January, 2023, 22:23:27

The image above is the heatmap I got using events for when the tester was in the area of interest / in proximity to the reference image zone (signage.begin and signage.end, roughly 15 seconds apart); the heatmap below uses the "raw" tester recordings (pulling from recording.begin and recording.end, roughly 2 minutes apart).

user-cd03b7 20 January, 2023, 22:23:43

Chat image

user-cd03b7 20 January, 2023, 22:24:30

Is this normal? Is using recording.begin and recording.end for RIM enrichments supposed to give relatively skewed results? I'm not sure if I made a mistake somewhere or if someone else is running into similar issues.

user-cd03b7 20 January, 2023, 22:25:50

It may be worth noting I reduced the "scale" factor in the heatmapping settings bar from 5 to 3 for the above images.

user-cd03b7 20 January, 2023, 22:47:35

Chat image

user-cd03b7 20 January, 2023, 22:48:52

I went back and looked at the scene/reference image view, and the mapping seems to be a bit off... It's like this for most of the recording; the red fixation visualizer on the right doesn't align with where the tester is looking on the left.

nmt 21 January, 2023, 09:36:10

Hey @user-cd03b7! Firstly, thank you for sharing a detailed overview of how you are using the Reference Image Mapper, including screenshots. It is fascinating and pretty awesome to see it being leveraged in this real-world scenario, and your feedback is very helpful. Now, let's address some of your questions! 1. It's only possible to use one scanning recording per reference image at the moment. 2. Judging by the screenshot you've shared, we think that something may have gone awry 🤔. Would you be able to ping me the enrichment ID via DM and give the Cloud team permission to debug it?

user-cd03b7 21 January, 2023, 23:18:47

@nmt done & done! Just sent you a DM

user-fb64e4 21 January, 2023, 17:58:27

Hi there, is there a limit to the amount of data we can upload to Pupil Cloud? We're trying to upload a long recording (3 GB) to Pupil Cloud, and it fails even though the internet speed is good and stable.

nmt 24 January, 2023, 10:13:49

Hey @user-fb64e4! There's no limit. Has the upload finished yet? You can test out the upload speed to Pupil Cloud here: https://speedtest.cloud.pupil-labs.com/

user-2e5a7e 24 January, 2023, 14:12:06

Does the Pupil Invisible LSL Relay work for any Pupil Core device, or only Pupil Invisible?

papr 24 January, 2023, 14:13:27

Only Pupil Invisible (and, in the future, Neon). For Pupil Core, check out https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture

user-2e5a7e 24 January, 2023, 14:15:51

Okay, thank you

user-2e5a7e 24 January, 2023, 14:16:54

Second question: does the Pupil Core pupil_capture plugin work for streaming real-time data?

papr 24 January, 2023, 14:17:35

Pupil Core streams data in real time, yes, as does the Pupil Invisible Companion app.
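
For reference, Pupil Core exposes its real-time stream through Pupil Capture's ZeroMQ/msgpack Network API (separate from the LSL plugin linked above). A minimal gaze subscriber, assuming Capture is running locally with Pupil Remote on its default port 50020:

```python
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the port of the data stream
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload)
    # norm_pos is in normalized scene coordinates, origin bottom-left
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```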

user-cd03b7 24 January, 2023, 19:42:29

Hey, how have you guys been moving recordings from one workspace to another? I have a coworker who wants to check out some of my recordings but they're on a separate account, and I don't see anything that allows me to upload a recording

marc 25 January, 2023, 03:43:12

@user-cd03b7 it is currently not possible to move recordings between workspaces. You could however invite your colleague to the workspace as a member (maybe with just viewer permissions) to give him access to the recordings.

user-fb64e4 27 January, 2023, 11:00:44

Thanks for your reply! No, it doesn't even start uploading. Could there be a problem with the account? Another device using another workspace doesn't have a problem uploading. We have uploaded recordings to this workspace before, but it's not working anymore.

Chat image Chat image

papr 27 January, 2023, 11:02:09

Please try logging out and back in within the app.

user-b1de2d 27 January, 2023, 15:04:36

Hello, is this the appropriate place to ask questions about the Pupil Labs Cloud?

papr 27 January, 2023, 15:10:07

It is 🙂

user-b1de2d 27 January, 2023, 15:27:48

OK, thank you. I currently have two recordings on my Pupil Cloud that are listed as 99% uploaded but have remained at this stage for a number of days now. Is there a way I can resolve this?

papr 27 January, 2023, 15:31:55

Please send an email to info@pupil-labs.com with the recording ids and their corresponding workspace ids.

user-b1de2d 27 January, 2023, 15:33:11

thank you

user-c77f71 30 January, 2023, 06:28:58

Hi, not sure if this is the right channel to post in; however... I am trying out AOIs in the reference image using the instructions you provide here (https://docs.pupil-labs.com/alpha-lab/gaze-metrics-in-aois/). It worked perfectly well using your workspace data/fixations. However, when trying to use our data, the fixations.csv that gets exported is missing the column that says "fixation detected in reference image". When I export your trial data directly from the workspace cloud, it is also missing this column, yet the file you uploaded has it. Just wondering if there is a special option/setting that needs to be selected so that the CSV file exported directly from the Cloud includes this column of data. I have attached FixationsExample1 (with the column; what you use as an example CSV for the trial of paintings) and FixationsExample2 (without the column; what is created when you download the data directly from the Cloud).

FixationsExample1.csv FixationsExample2.csv

marc 30 January, 2023, 08:49:24

Hi @user-c77f71! That column should for sure be there in theory! I have downloaded a couple of the enrichments from the Demo Workspace and found the column in the respective fixations.csv. Could you let me know the IDs of the enrichments you had trouble with, in the demo workspace as well as in your personal workspace? You can find the ID by right-clicking on the enrichment in the list and selecting "View Details".

user-c77f71 30 January, 2023, 11:27:13

I will check the recordings on our end tomorrow at work - but for now the ID of the workspace video is b561aeee-7cc8-4a9f-9840-930db6677a66 and it is labelled Jack_Standing_Upstairs

marc 30 January, 2023, 11:46:48

Ah, I think I see where the confusion is coming from now!

The FixationsExample2.csv file you shared is the download of recording data rather than enrichment data.

Both recording and (Reference Image Mapper) enrichment data contain a fixations.csv file. For the recording it is fixations within the scene video, for the enrichment it is fixations on the reference image. Accordingly, only the enrichment data contains the fixation detected in reference image column. So the download you are looking for is not the recording download, but the enrichment download.

For the demo workspace, you could download the various enrichments from this view: https://cloud.pupil-labs.com/workspace/78cddeee-772e-4e54-9963-1cc2f62825f9/project/cdfde655-3c8a-45c5-b6e2-5d5754d7a4f0/enrich
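
A quick programmatic way to tell the two downloads apart: a small sketch assuming pandas and the column name exactly as it appears in the enrichment export (treating its values as booleans is an assumption to verify against your file):

```python
import pandas as pd

fixations = pd.read_csv("fixations.csv")

# Only the Reference Image Mapper enrichment export carries this
# column; the plain recording export does not.
col = "fixation detected in reference image"
if col in fixations.columns:
    on_image = fixations[fixations[col]]  # assumes boolean values
    print(f"{len(on_image)} of {len(fixations)} fixations on the image")
else:
    print("This looks like a recording export; download the enrichment data instead.")
```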

user-c77f71 30 January, 2023, 11:27:59

We are basically just trying to load the fixations.csv into the Python code for the AOIs, but without that column being exported within the CSV, it won't work.

user-c77f71 30 January, 2023, 22:17:09

Oh, fantastic! I knew it was something I was doing incorrectly... Thank you so much, that was really helpful 😄 P.S. This feature is AMAZING! Our lab is very impressed 🙂

user-cd03b7 30 January, 2023, 23:33:25

Hey guys, quick RIM question - when I run two identical enrichments in the same project enrichment editor space (same two testers, same start and stop events, and same reference image and scan recording, just different enrichment names), I get two slightly different heatmaps -

marc 31 January, 2023, 09:29:11

Hi @user-cd03b7! The algorithms of the Reference Image Mapper are not fully deterministic, so there is always a chance of slight variations due to different seed values. The differences here are larger than one might expect, though, so digging into this a bit might be in order. I'd be happy to investigate further and confirm the reason if you want; we could sync regarding access to the data via DM.

Regarding the other issue with the stitched reference image, the problem might be the stitching itself. While the image looks good for human consumption, it may technically no longer comply accurately with the geometry of camera projection, which would make it impossible for the algorithm to build its 3D model around it. If you need stitched images in the end, the better approach would be to stitch the images with the heatmap already on them, so that you input "natural" images into the algorithm.

user-cd03b7 30 January, 2023, 23:33:56

Chat image Chat image

user-cd03b7 30 January, 2023, 23:34:27

Again, fairly negligible differences, but on paper I should be getting identical heatmaps here, right? Did I do something wrong?

user-cd03b7 31 January, 2023, 05:58:28

Follow-up question/issue: I ran the following two RIM enrichments with the same three tester recordings and the same environment scan, but each with a different reference image. These reference images are two halves of the same sign, with one RIM enrichment for each half, because I wasn't able to get an angle on the narrow platform to capture the whole sign. (Note: scale is "5" and colormap is "turbo".)

user-cd03b7 31 January, 2023, 05:58:53

Chat image

user-cd03b7 31 January, 2023, 05:59:12

Chat image

user-cd03b7 31 January, 2023, 06:01:18

However, when I (finally got smart enough to) stitch the two reference images together to create a reference image of both halves and run the same RIM enrichment (with the same three tester recordings and environment scan, just with this new "combined" reference image), I get the following output, also with a scale of "5" and the Turbo colormap.

user-cd03b7 31 January, 2023, 06:01:48

Chat image

user-cd03b7 31 January, 2023, 06:06:09

Considering I'm working with the same test data and environment scan, why is there such a sharp disparity between the separated scans and the combined scan? There's virtually zero activity around the "3rd Ave & Pine" area of the combined RIM, but in the separated RIM there's tons of plotted activity over that section.

And in the uppermost image from the separated RIM, there's heavy activity on the right side of the image over "Westlake", which happens to contain overlapping imagery in the second half of the separated RIM. But in the second half of the RIM, the overlapping imagery doesn't contain the same plotted activity over the text "Westlake". Am I doing something wrong here? Has anyone encountered a similar issue?

user-cd03b7 31 January, 2023, 17:03:42

Thanks Marc, sounds great! Just sent you a message regarding the first bit, and your explanation on image stitching makes complete sense as well. Thanks for the pointers on this one, I'm gonna poke around and see what I can do!

End of January archive