invisible


user-d407c1 01 December, 2022, 07:58:19

AOIs in a transit station

user-478e7e 01 December, 2022, 15:28:21

I am looking for an easy and non-technical way to stream the gaze data from the Pupil Labs Invisible to a MacBook Pro. This is mostly for demonstration purposes. I have tried the following link, https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/monitor-your-data-collection-in-real-time.html, as suggested on the website, but the page does not seem to open. I would be very grateful for an easy and non-technical solution... many thanks

user-d407c1 01 December, 2022, 15:56:48

Hi @user-478e7e,
There are several reasons why http://pi.local:8080/ may not have worked for you. May I ask, are you on an institutional network (i.e. eduroam or any other university network)? University or company firewalls can be quite restrictive; you can try using a hotspot from a 3rd-party device instead.

Nonetheless, please check out these steps:

  1. Ensure both your computer and Companion Device are connected to the same network
  2. Open the Companion app; on the bottom left you should see a hamburger icon (three lines), tap on it
  3. In the pop-up, tap on Streaming
  4. Next to pi.local you should see an IP address (usually starting with 192.168.X.XX)
  5. Enter that IP address in the URL bar of your MacBook's browser, including the :8080

Let us know whether accessing it through the IP address lets you stream.

user-50cf87 02 December, 2022, 09:54:52

Hello! I record videos with Pupil Invisible for my thesis and upload them to Pupil Cloud, but I noticed that the image of some videos cuts out in the middle of the recording. I would like to select only the parts of the video that work, but I cannot do that in Pupil Cloud; it takes the entire recording into account. What can I do?

marc 02 December, 2022, 10:04:32

Hi @user-50cf87! What exactly do you need to get out in the end?
- For a gaze overlay video, you can cut down the recording using events in the gaze overlay enrichment definition.
- If it is just about the video, you can cut it using any video editing software.
- When exporting all data of a recording, there is no automatic way to crop it down. But the CSV data is ordered by its timestamp, so you could filter it this way (see the sketch below).
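
For reference, here is a minimal sketch of that timestamp filtering with pandas. It assumes the Pupil Cloud raw data export layout (a gaze.csv file with a "timestamp [ns]" column); adjust the file and column names if your export differs, and the example timestamps are placeholders.

import pandas as pd

# Load gaze data from the Pupil Cloud raw data export
gaze = pd.read_csv("gaze.csv")

# Keep only the rows between two UTC timestamps (in nanoseconds)
start_ns = 1_670_000_000_000_000_000        # example start time
end_ns = start_ns + 60 * 1_000_000_000      # example end time, 60 s later

cropped = gaze[
    (gaze["timestamp [ns]"] >= start_ns) & (gaze["timestamp [ns]"] <= end_ns)
]
cropped.to_csv("gaze_cropped.csv", index=False)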

user-50cf87 02 December, 2022, 10:11:02

Thank you very much!!

nmt 02 December, 2022, 15:23:30

Head Pose Tracker

user-746d07 02 December, 2022, 18:22:16

Hello, I have a question. I used to use Pupil Core and, in particular, the 3D coordinates of the gaze point. Can I get those with Pupil Invisible as well?

user-c2d375 05 December, 2022, 08:16:05

Hi @user-746d07 👋 Pupil Invisible provides gaze data only in 2D coordinates.

user-df1f44 04 December, 2022, 15:35:53

Hello folks at Pupil Labs - Last time I checked transferring recordings between workspaces was not possible - Is there any chance this is something in the pipeline for the future? 🥺 . It seems to be a bit of a no-brainer to my mind - to cater for the eventuality of inadvertently capturing recordings while set to the wrong workspace - Unless there is a benevolent reason for not allowing this that I am unaware of.... Plus, there is the small issue of separating concerns for clients post-capture - something I am currently facing...😩

marc 05 December, 2022, 09:16:03

Hi @user-df1f44! This is currently still not possible via the UI. If you have a concrete issue with recordings in the wrong workspace we could try to help you manually! I agree, from a UX standpoint this is a rather obvious feature. It's not quite as trivial to implement as one might expect though, which is why we have deprioritized it so far. Given the interest, we should re-evaluate this!

I took the liberty of creating a feature request for this: https://feedback.pupil-labs.com/pupil-cloud/p/recording-transfer-between-workspaces

user-df1f44 05 December, 2022, 09:25:33

Thanks for the prompt response. I take your point that it is not a trivial task for sure. However, I am hopeful since we now have a feature request. 👍 . In the meantime, we'll tweak our capturing protocols for workspace... while we keep an eye out for the future.

user-a98526 06 December, 2022, 14:00:33

Hi @marc, in my use of Invisible, it often fails to connect, like this

marc 06 December, 2022, 14:26:28

Hi @user-a98526! Connection issues like this can point to a hardware issue. This could be a problem with the phone, cable or Pupil Invisible device. Please contact [email removed] regarding this for help with the diagnosis and most likely some sort of hardware replacement!

user-a98526 06 December, 2022, 14:00:59

Chat image

user-a98526 06 December, 2022, 14:01:09

It usually takes 100 to 200 attempts to connect successfully, and I'm not sure if this is a problem with the invisible or the phone.

user-a98526 06 December, 2022, 14:28:10

Can any USB-C to USB-C cable be used? I plan to use other USB-C cables.

marc 06 December, 2022, 14:31:09

The bandwidth transferred through the cable is quite high, and only high-quality cables will work robustly. When using lower-quality 3rd-party cables there might be intermittent disconnects. We recommend using the included cable, but in principle other high-quality cables work too.

marc 06 December, 2022, 14:31:41

I'd recommend testing the cable for a couple hours before relying on it!

user-a98526 06 December, 2022, 14:36:17

Specifically, can I try Apple's USB-C cable, which I plan to use?

user-a98526 06 December, 2022, 14:36:54

Or OnePlus' USB-C cable.

marc 06 December, 2022, 14:38:36

We have never tried using those cables, so I am not sure. Sorry!

user-a98526 06 December, 2022, 14:39:35

Or can you tell me the type of USB-C cable you are using?

marc 06 December, 2022, 14:45:41

I am not sure what exactly you mean with "type". The cable we include is a custom made cable, so while you could get more of those from us, there are no other varieties of the same cable or something like that. May I ask what issue you have with the included cable?

user-a98526 06 December, 2022, 14:48:22

I think this connection failure problem may come from the cable, and I plan to buy a new cable to test if the problem mentioned above still occurs.

marc 06 December, 2022, 14:49:25

Got it! If the cable is indeed the issue, we'd also be happy to send you a new one!

user-a98526 06 December, 2022, 14:49:50

Because I had this same connection problem with both of my Invisible devices (unfortunately, I lost a cable, so they used the same one). So I think it's more likely that the cable is faulty.

user-a98526 06 December, 2022, 14:54:20

@marc Thanks for your help. I am currently working on head pose estimation (displacement and pose) and hope to help Invisible in the future.

user-5b4cde 07 December, 2022, 17:00:52

Hello, I'm a newbie with Pupil Invisible, and I've run into a problem (see picture), which always happens when the video is longer than 25 minutes. Is it overheating? Does anyone know a solution?

I will appreciate it!

user-d407c1 07 December, 2022, 17:11:02

Hi @user-5b4cde Could you contact info@pupil-labs.com regarding this issue?

user-5b4cde 07 December, 2022, 17:01:01

Chat image

user-5b4cde 07 December, 2022, 17:16:57

@user-d407c1 You mean just email this address about the issue?

user-d407c1 07 December, 2022, 17:17:17

Yes

user-5b4cde 07 December, 2022, 17:17:41

Sure, thanks.

user-bda200 08 December, 2022, 15:47:46

Hello, at our institute we are using Pupil Invisible. I remember when I used it last year for a research project there was a "fixation" plugin available in the plugin manager of Pupil Player. I started working with the glasses again recently and noticed that there is no fixation plugin available anymore in the version that I have installed now (the newest). In the online user guide I also read now that it is not available for Pupil Invisible. Was it removed between versions, or do I just not remember correctly that this option existed? Thank you.

marc 08 December, 2022, 15:52:00

Hi @user-bda200! You are remembering correctly! In older versions of Pupil Player, the Fixation Detector originally designed for Pupil Core was available for Pupil Invisible recordings. However, this fixation detector performs very poorly with Pupil Invisible's gaze data, and it is not recommended to use it. Thus, it was disabled in newer versions. There is a new fixation detector for Pupil Invisible available in Pupil Cloud, which we recommend using. This one is quite accurate and robust!

user-46e93e 08 December, 2022, 17:38:41

Hello! We are using pupil invisible for a TSST dyadic task (two people) and wanted to know if there was an easy way to set up event markers with psychopy to sync the two recordings

nmt 09 December, 2022, 15:57:45

Hey @user-46e93e. Are you using PsychoPy to present stimuli? Check out this tutorial for tracking events using Python: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/track-your-experiment-progress-using-events/#track-your-experiment-progress-using-events

user-46e93e 09 December, 2022, 18:59:22

Yes! Thank you. I'll take a look

user-746d07 12 December, 2022, 14:47:39

Hello, I have a question. I understand that Pupil Invisible only provides 2D line-of-sight coordinates, but is it possible to obtain 3D line-of-sight vectors from this information?

marc 13 December, 2022, 08:53:08

Hi @user-746d07! Yes, a 2D gaze point and a 3D line-of-sight are basically equivalent and you can transform each into the other. To do this you need some camera geometry. Using the inverse of the projection matrix of the camera you can "unproject" the 2D gaze point back into the 3D gaze ray.

This might sound complicated if you have not dealt with this before, but there are lots of resources on the internet about this! You can check this how-to guide on how to obtain the camera intrinsics (which contain the camera matrix and distortion coefficients). https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/intrinsics/

The guide contains the same math/OpenCV method calls you would need to use. The following line is what it comes down to:

rays_3d = cv2.undistortPoints(
    points_2d.reshape(-1, 2), camera_matrix, dist_coeffs, P=None
)
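
For completeness, here is a small self-contained sketch of how this could look end to end. The intrinsics values below are placeholders rather than real Pupil Invisible calibration data; obtain the real ones as described in the guide above.

import cv2
import numpy as np

# Placeholder intrinsics -- replace with the values from your own device
camera_matrix = np.array(
    [[770.0, 0.0, 544.0],
     [0.0, 770.0, 544.0],
     [0.0, 0.0, 1.0]]
)
dist_coeffs = np.zeros(5)  # replace with the real distortion coefficients

# 2D gaze points in scene camera pixels, e.g. taken from gaze.csv
points_2d = np.array([[544.0, 544.0], [900.0, 300.0]])

# Unproject to normalized image coordinates (x, y) with implicit z = 1
undistorted = cv2.undistortPoints(
    points_2d.reshape(-1, 1, 2), camera_matrix, dist_coeffs, P=None
).reshape(-1, 2)

# Append z = 1 and normalize to unit-length 3D gaze directions
rays_3d = np.hstack([undistorted, np.ones((len(undistorted), 1))])
rays_3d /= np.linalg.norm(rays_3d, axis=1, keepdims=True)
print(rays_3d)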

Let me know if you have questions!

user-746d07 13 December, 2022, 12:25:23

Thank you, @marc. I am currently conducting an experiment to measure eye contact during daily activities at home. I am experimenting with Pupil Core, but I am considering Pupil Invisible because of the problem of not detecting pupils well (I am in the process of asking about this problem with Core as well) and the fact that it has to be wired to a PC. So my question is: in Pupil Invisible, does the scene camera cover the entire field of view? For example, if the gaze is directed very far downward, will the scene camera be unable to capture the line of sight?

marc 13 December, 2022, 12:36:23

@user-746d07 Yes, the scene camera of Pupil Invisible covers the entire active human FOV. The fields of views of our various scene cameras are as follows (horizontal x vertical):

- Core: 99°x53° (by default. When using 1080p it goes up to 139°x83° but has a lot of distortion)
- Invisible: 82°x82°
- Neon: 132°x81°

So as you can see, the vertical FOV of Pupil Invisible is much larger than for Core. You cannot change the direction of the scene camera like you can for Pupil Core, but given the size of the FOV it's also not necessary. Pupil Invisible covers the natural FOV very well, and even more extreme scenarios, e.g. interaction with a smartphone, which usually happens at steep downward angles, are covered well.

Neon has an even larger horizontal FOV, but the motivation for this is to cover peripheral vision better rather than to capture more of the active FOV, which is already covered well. If your timeline allows it, I'd recommend purchasing Neon over Pupil Invisible.

user-746d07 13 December, 2022, 13:17:29

OK, thank you. I understand that Pupil Invisible has a large FOV and covers, for example, interaction with a smartphone. In Pupil Invisible, unlike Pupil Core, I was told that the gaze point supports only 2D output. Considering what you said earlier about camera intrinsics, am I right in understanding that if the orientation and position of the face are known, it is possible to obtain the gaze direction in an absolute coordinate system?

marc 13 December, 2022, 14:12:20

Yeah kind of! In order to obtain a 3D point from the 3D gaze direction you would 1) need to find the pose of the scene camera in the world (which is more or less the same as finding the pose of the face) and 2) find the intersection point of the 3D gaze direction (originating in the scene camera) with the world. So you would need some kind of model of the world as well!
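
Just to make step 2 a bit more concrete, here is a toy sketch of intersecting the gaze ray with a single known plane used as a stand-in "model of the world". The pose and plane values are made up for illustration.

import numpy as np

# Made-up scene camera pose in world coordinates (the result of step 1)
R = np.eye(3)                   # camera orientation (rotation matrix)
t = np.array([0.0, 0.0, 1.5])   # camera position in meters

# Gaze direction in camera coordinates (e.g. from cv2.undistortPoints, z = 1)
gaze_cam = np.array([0.1, -0.05, 1.0])
gaze_world = R @ gaze_cam       # rotate the ray into world coordinates

# Toy world model: a plane with normal n passing through point p0 (a wall 3 m away)
n = np.array([0.0, 0.0, -1.0])
p0 = np.array([0.0, 0.0, 3.0])

# Ray-plane intersection: find s such that t + s * gaze_world lies on the plane
s = np.dot(n, p0 - t) / np.dot(n, gaze_world)
gaze_point_3d = t + s * gaze_world
print(gaze_point_3d)  # the 3D gaze point in world coordinates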

user-b9c94f 13 December, 2022, 16:07:41

Hi, I just found your company on Google. I would like to share our research idea with you. We are working on visual attention; for this we work with gaze heatmap databases (like MIT1003). We would like to build our own database corresponding to our cases of study (static scenes). We would also like to run some studies about eye fixations on a video. Do you have a solution to propose to us?

nmt 14 December, 2022, 10:00:26

Hi @user-b9c94f 👋. It is possible to do what you propose with our eye trackers. Note, however, that we specialise in wearable systems that can be used in both off- and on-screen contexts. If you're only doing on-screen work, perhaps a remote system would be preferable. Since our systems are wearable, there are a few steps necessary to get gaze on-screen:
- You can use April Tag markers: https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/#marker-mapper
- We also have a way to map gaze onto dynamic video content: https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/#map-and-visualise-gaze-onto-a-display-content-using-the-reference-image-mapper
I hope this helps!

user-50cf87 14 December, 2022, 06:42:52

Hi, I'm using Pupil Cloud for my PhD and I have a problem downloading a gaze overlay enrichment. Nothing happens when I try to download this enrichment (only for this video in particular, and I don't know why). Do you know how I can download it?

Chat image

user-d407c1 14 December, 2022, 07:39:58

Hi @user-50cf87! Would you mind contacting info@pupil-labs.com with the enrichment ID? (Right-click on the enrichment, select Details, and you can find it there.)

user-f01a4e 15 December, 2022, 04:54:36

Hi, I am using Pupil for a small eye tracking event. I wish to know how I can calibrate it to the center, as it always tends to point to the left of the actual view.

user-d407c1 15 December, 2022, 08:00:37

Hi @user-f01a4e What you might be seeing is a parallax error. Because the scene camera is positioned off to the side of the frame, a parallax error that depends on viewing distance is introduced for all wearers.

Go into preview mode (bottom right of the app screen), hold your finger on the screen and move the red circle to the correct point of regard. Then click apply. That offset is then used for all future recordings for that wearer.

user-5b6152 15 December, 2022, 15:06:15

Hi, I have a problem with uploading my videos to the cloud with the Pupil Invisible app. I'm connected to the internet, but the upload of the videos stays at 0%. I tried deleting all the files in my Pupil Cloud to make space on my account, but it doesn't change anything. Another solution could be to export the video files locally (from and to the phone), but the option is not available in the app. Does anyone have a solution?

nmt 15 December, 2022, 15:54:06

Hi @user-5b6152. In the first instance, please try logging out and back into the app, and see if that triggers the upload

nmt 15 December, 2022, 15:58:09

@user-5b6152, some more debugging tips here: https://docs.pupil-labs.com/invisible/troubleshooting/#recordings-are-not-uploading-to-pupil-cloud-successfully

user-5b0955 15 December, 2022, 16:34:04

We are using the real-time Python API (https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/index.html). We were just wondering: is there any way to change the recording filename in the cloud when starting the recording using the Python API method (Device.recording_start())?

nmt 15 December, 2022, 17:04:52

I've responded to your Q in software-dev

user-46e93e 15 December, 2022, 20:28:46

Hi Neil, would it be possible to set up a time to call with someone who might be able to walk us through tracking events for our specific task? I have it set up but am still unsure about how to test it. We are also looking to send events to two separate Invisible devices at the same time.

nmt 16 December, 2022, 09:47:13

Hi @user-46e93e, I'd kindly ask you to check out our support/consultancy page for dedicated video/call support. Further details can be found here: https://pupil-labs.com/products/support/

user-f01a4e 16 December, 2022, 05:49:32

Hello again, I tried the above-mentioned method for parallax correction; tbh, it's one of the best calibration approaches I have yet seen. My only difficulty is that when the calibration is done at a distance or in close proximity, the same calibration tends to change when the distance changes. How can this be fixed, or is there an SOP for calibration that should be followed? And is it better to calibrate for every new user?

nmt 16 December, 2022, 09:51:07

Thanks for the feedback! To be honest, this is not a 'calibration' in the conventional sense, but rather an offset correction. To better understand why it's sometimes needed, there are two main sources of error to consider:

Parallax error: Because the scene camera is positioned off to the side of the frame, a parallax error is introduced for all wearers that is dependent on viewing distance. For distances >1 meter, this error is not really noticeable, but for distances <1 meter it becomes increasingly large

Physiological parameters: In some users, there is a constant person-specific offset in the gaze estimation. This offset is not really affected by things like gaze angle or viewing distance. You can spot these errors by asking the wearer to gaze at something 2–3 meters away. If there is an offset, it’s likely to be caused by physiological parameters

It is important to consider which source of error you're dealing with when using the offset correction feature:
- For physiological parameters, the offset correction should be applied when the wearer is gazing at a target >1 meter away
- For parallax error, it's important to set the offset correction at the viewing distance you will be recording. It's not possible to find an offset that corrects parallax errors perfectly at both close distances and further away distances
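
To get a rough feeling for why the viewing distance matters so much for the parallax error described above, here is a back-of-the-envelope sketch. The ~3 cm offset between the eye and the scene camera is an assumed example value, not an official specification.

import numpy as np

offset_m = 0.03  # assumed lateral offset between the eye and the scene camera

for distance_m in (0.3, 0.5, 1.0, 2.0, 5.0):
    # Angular disparity between the eye's and the scene camera's view of a target
    error_deg = np.degrees(np.arctan(offset_m / distance_m))
    print(f"{distance_m:>4} m -> ~{error_deg:.1f} deg")

This reproduces the pattern described above: the disparity drops off quickly beyond a meter, but reaches several degrees at arm's length.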

nmt 16 December, 2022, 09:54:44

So, whether you use the offset correction really depends on things like the experimental task, the participant, whether the non-offset corrected gaze is accurate enough for your needs etc.

I typically do a qualitative assessment with each participant before deciding!

user-5543ca 19 December, 2022, 20:37:48

UGREEN USB C Hub 4K60Hz, 5-in-1 Gigabit...

user-746d07 21 December, 2022, 10:17:39

Hello. I am currently learning how to get the 3D gaze direction from the gaze x, y [px].

Last time you told me about the following code. Am I correct that the 3D line-of-sight direction is the vector obtained by this code, with its z component equal to 1?

ray = cv2.undistortPoints(
    point_2d.reshape(-1, 2), camera_matrix, distCoeffs=dist_coefs, P=None
)

marc 21 December, 2022, 10:19:39

Yes, that should be correct!

user-746d07 21 December, 2022, 13:52:36

As an additional question, is it safe to use [m] as the unit for this vector? I understand that the vector's magnitude is arbitrary because it is a direction vector, but since it was originally in pixels, I am curious.

user-746d07 21 December, 2022, 10:24:20

Thanks!

user-3fba95 21 December, 2022, 10:26:46

Hello,

I'm looking into acquiring a wearable eye tracker. I'm hoping to get some recommendations based on our requirements. I'm currently using iMotions for on-site user testing of interactive computer video games. I'm using a screen-based eye tracker and Facial Expression Analysis while recording game footage.

I'd like to start testing games with touch controls. But touching the screen causes the user to lean in closer to the screen, so the screen-based tracker loses track. I think a wearable tracker would be better if it can handle this scenario.

Any advice would be appreciated, regarding things to look out for or trade-offs to consider. Thank you

marc 21 December, 2022, 11:08:22

Hi @user-3fba95!

Doing screen-based gaze estimation with wearable eye trackers is possible, but it comes with some additional challenges. The output space of a screen-based eye tracker is pixel coordinates on the screen. For a wearable eye tracker the output space is the pixel coordinates of the scene camera. So there is an additional step of converting the gaze estimates from scene camera coordinates to screen coordinates.

In order to do this one has to track the location of the screen in the scene video. Knowing the pose of the screen in the scene video allows you to transform the data. This step will be the major bottleneck for robustness in your setup. The various eye tracking vendors all have solutions for this, but they are all limited.

One option is to do this mapping between coordinate systems manually, i.e. go through the data frame by frame and manually indicate where on the screen the subject is looking at using mouse clicks. This is very labor intensive, but it works. The Tobii software has an interface for this, I am not entirely sure about iMotions. We do not currently have an interface for this, but will add something to Pupil Cloud in ~Q2 2023.

marc 21 December, 2022, 11:08:27

The other option is automatic mapping tools. Most software options offer some implementation of this, but they differ a lot in robustness. The options we have are: the Marker Mapper and the Reference Image Mapper.

The Marker Mapper works robustly but requires markers placed around the screen. If that was acceptable for you, this is probably the most robust solution available for screen tracking. You can find the documentation here: https://docs.pupil-labs.com/invisible/enrichments/marker-mapper/

The Reference Image Mapper requires only a reference image of the screen, without markers. Using it to map to a screen is presented in detail here: https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/ I believe that the Reference Image Mapper is the most robust marker-free option currently available anywhere, but its robustness is also limited.

Competing with screen-based eye trackers is not super easy for any wearable eye tracker. I can't promise that our setup would work better in the end than your screen-based system. But I believe there is a chance, and it's probably the most reliable wearable setup you could try. If you are interested, I'd recommend testing the system within our 30-day return period!

user-3fba95 21 December, 2022, 11:14:20

Thanks Marc! That is very helpful. It's good to know about the 30 day return period for testing.

user-d407c1 21 December, 2022, 12:11:45

Just to add to @marc's comment, iMotions has a manual mapping option, with some kind of interpolation between frames. https://imotions.com/blog/learning/product-news/product-release-new-areas-of-interest-aoi-editor-in-imotions-9-0/

user-5b6152 21 December, 2022, 12:42:54

Hi Invisible community, I'm working on blinking and using Pupil Invisible to catch the moment when we blink. I tried to use the raw data in the blink.csv file and convert the start timestamp into a frame number to compare with the video in Pupil Player. To do this:
1. I subtracted the recording-begin time (found in the event.csv) from the start timestamp to get the moment of the blink relative to the beginning of the video in nanoseconds (and not UTC).
2. I divided the result by 1/30, because the framerate of the frontal video recorded by the smart glasses is 30 Hz, and rounded the result to obtain a frame number.
BUT when I compare this frame number with the video in Pupil Player with the eye overlay, the eyes are not closed. I checked three videos, and each time the real blink in the eye-overlay video in Pupil Player and the supposed blink from the raw data do not correspond. For the first video, there are approximately -40 frames between the frame number from the raw data and the real blink in the eye overlay. For the second video, there are approximately +10 frames. And for the last one, there is absolutely no correspondence between the frame number from the raw data and the real blink in the eye overlay. Do you have any idea where the problem is?

marc 21 December, 2022, 12:53:22

Hi @user-5b6152! Your approach makes sense, but there are a few caveats! Your technique in 1) is correct; this converts the timestamps from absolute timestamps to relative timestamps since the beginning of the recording. Note, however, that the time measured in Pupil Player is relative time since the beginning of the scene video, which is not the same as the beginning of the recording. When the recording starts, all the sensors get initialized, which takes up to a couple of seconds (depending on the sensor), and only then do the sensors start recording. The scene video usually takes about 1-2 seconds to initialize (which explains the 10-40 frame difference).

So to convert timestamps into "Pupil Player Time", you'd have to subtract the first timestamp of the scene video, rather than the recording.begin event timestamp.

The idea of dividing the result by 30 to get a mapping to world frame indices would only yield a roughly correct mapping. This is because the actual framerate is not perfectly solid 30 FPS. In dark environments the exposure time might be longer than 1/30 of a second, which reduces the framerate. Even in perfect conditions the framerate is not perfectly stable but will fluctuate just slightly. Thus you would accumulate a drift error over time with this strategy.

The recommended strategy to find the corresponding world frames would be to match the timestamps of the blink events with the scene video timestamps directly to find the closest matches. This can be done with relative ease using the pandas pd.merge_asof method. How this works in detail is described in this how-to guide: https://docs.pupil-labs.com/invisible/how-tos/advanced-analysis/syncing-sensors/
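
As a rough sketch of that matching step: the file and column names below follow the Pupil Cloud raw data export (blinks.csv and world_timestamps.csv, timestamps in nanoseconds), so please double-check them against your own export before running it.

import pandas as pd

blinks = pd.read_csv("blinks.csv")
world = pd.read_csv("world_timestamps.csv")

# Give every scene video frame an explicit index
world = world.sort_values("timestamp [ns]").reset_index(drop=True)
world["frame_index"] = world.index

# For each blink start, find the scene frame with the nearest timestamp
matched = pd.merge_asof(
    blinks.sort_values("start timestamp [ns]"),
    world[["timestamp [ns]", "frame_index"]],
    left_on="start timestamp [ns]",
    right_on="timestamp [ns]",
    direction="nearest",
)
print(matched[["start timestamp [ns]", "frame_index"]])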

user-5b6152 21 December, 2022, 13:24:02

Thank you for your quick answer! Where can I find the first timestamp of the scene video (world.mp4) in UTC in the raw data of Pupil Invisible?

marc 21 December, 2022, 13:42:20

That would be the first timestamp in the world_timestamps.csv file!

user-5b6152 21 December, 2022, 14:00:32

Unfortunately, it doesn't change anything, because the first timestamp of world_timestamps.csv is always the same as the recording.begin timestamp in the event.csv.

marc 21 December, 2022, 14:31:17

I see! I thought the recording.begin event would be generated using the timestamp from the info.json file, my mistake! The other points still hold though, so please check the linked guide!

user-5b6152 21 December, 2022, 14:42:19

I'm trying, but I'm not used to coding, so I was looking for another way to do this.

marc 21 December, 2022, 14:33:57

This depends on the context. The direction is initially just a direction without any units. If you can localize the head/glasses in a 3D coordinate system measured in meters, then you can have a "gaze ray" of a certain length in this direction, and that could be measured in meters.

user-746d07 21 December, 2022, 14:51:47

This time I used Pupil Invisible to get 2D gaze. From this data, I calculated the 3D gaze direction using the function I mentioned (for example, when looking completely to the right, one of the 3D gaze directions could be [1.3, 0, 1]). In this case, can we consider it as a single point on the direction vector, [1.3 m, 0 m, 1 m], in the camera coordinate system shown in the figure?

I am trying to represent a point on the direction vector in the absolute coordinate system by obtaining the position and posture of the camera from human posture data.

Chat image

marc 21 December, 2022, 14:51:29

Okay! We have no other immediate solution. Tomorrow I could provide you with a script that does the required calculations. You'd still need to run it though, but that is fairly easy.

user-5b6152 21 December, 2022, 14:53:22

Super! Many thanks!

user-5b6152 05 January, 2023, 04:15:26

Hi @marc, did you have time to make the script? Happy 2023

marc 21 December, 2022, 15:03:50

I am not sure I understand the question. The coordinate system in the image seems correct. The 3D gaze point is a point somewhere along the gaze direction originating from the 3D camera position.

user-746d07 21 December, 2022, 15:48:58

The method I was using earlier applied undistortPoints to the gaze x, y [px] to find the direction vector G(a, b, 1). This vector would pass through the point G(a, b, 1) along the gaze direction when the starting point is the origin of the camera coordinate system. What is the unit of G at this point?

Chat image

marc 22 December, 2022, 08:54:03

G is a vector with no intrinsic physical unit at this point. It's just a direction encoded by the ratio of those numbers. The way undistortPoints works will simply scale this directional vector such that the z coordinate equals 1.
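
A tiny numeric illustration of that point (the numbers are arbitrary):

import numpy as np

G = np.array([1.3, 0.0, 1.0])       # direction with the z coordinate scaled to 1
G_unit = G / np.linalg.norm(G)      # same direction, unit length, still unitless

# Any positive multiple of G points the same way; only the ratios matter
print(G_unit)                               # ~[0.79, 0.0, 0.61]
print(2.5 * G / np.linalg.norm(2.5 * G))    # identical unit vector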

user-746d07 22 December, 2022, 09:15:12

I understand that (a, b, 1) is a vector. Is this the direction vector expressed in the Pupil Invisible camera coordinate system that I sent you before? If so, by setting the starting point O as the origin and the end point as G(a, b, 1) [m] in the camera coordinate system, is the vector OG a 3-dimensional direction vector of the line of sight?

marc 22 December, 2022, 09:21:52

Yes, both are correct!

user-746d07 22 December, 2022, 09:24:05

I understand! Thank you very much for your kind words.

user-f1fcca 22 December, 2022, 13:39:24

Hello! I was wondering how I can open a JSON file? When I used the time alignment method, it created a time_alignment_parameters.json file. What information can I get from this file? Thank you in advance!

user-d407c1 22 December, 2022, 13:50:53

Hi @user-f1fcca! You can open a JSON file with almost any text editor, like Notepad or VSCode. You can find the transformations of timestamps to LSL time, and how this works under the hood, here: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/time_alignment.html#output-of-the-post-hoc-time-alignment

user-f1fcca 22 December, 2022, 13:40:29

I know it's for syncing between streams in Lab Recorder, but what does this file tell me? The time difference between the Lab Recorder and the recording from the Companion app, for example?

user-f1fcca 22 December, 2022, 13:52:28

Yeah, so basically the cloud_to_lsl intercept is the time that the Companion app was delayed relative to the LSL recording?

user-d407c1 22 December, 2022, 14:11:39

Kind of. You can consider it as the offset between the clock of the device running LSL and the Companion device clock; this is not necessarily the transfer delay/latency. See here how the time-stamping works with LSL: https://pupil-invisible-lsl-relay.readthedocs.io/en/stable/guides/overview.html#timestamps

These parameters can be used to extrapolate/remap the timestamps (using linear regression) to match gaze data from Cloud, which runs at 200 Hz, with data from the LSL stream, which records at ~66 Hz.

Kindly note that data recorded directly on the Companion device is also not at 200 Hz but at 120 Hz (using a OnePlus 8).
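
For illustration, a very rough sketch of how that remapping could look in code. It assumes the cloud_to_lsl entry of time_alignment_parameters.json holds the intercept mentioned above plus a slope, and that the mapping is applied to Cloud timestamps converted to seconds; please verify both the field names and the units against the linked documentation before using it.

import json

import pandas as pd

# Parameters written by the post-hoc time alignment (field names assumed, see above)
with open("time_alignment_parameters.json") as f:
    params = json.load(f)
slope = params["cloud_to_lsl"]["slope"]
intercept = params["cloud_to_lsl"]["intercept"]

# Remap the 200 Hz Cloud gaze timestamps onto the LSL clock: lsl = slope * cloud + intercept
gaze = pd.read_csv("gaze.csv")
cloud_time_s = gaze["timestamp [ns]"] * 1e-9
gaze["lsl_time [s]"] = slope * cloud_time_s + intercept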

user-f1fcca 22 December, 2022, 15:12:08

Thank you!

user-746d07 23 December, 2022, 09:51:47

Hi, I have done a series of recordings using pupil invisible. I saw the recordings in pupil cloud, and the uploaded recordings have automatic fixation positions and saccade lines drawn. Is there any documentation of how these are estimated? Also, can I change the fixation threshold myself?

marc 23 December, 2022, 10:27:10

They are estimated using a novel velocity-based algorithm. The approach is similar to other algorithms, but features additional head movement compensation making it more robust in dynamic settings.

We are overdue on releasing the accompanying paper. I could send you the pre-print though if you are interested.

The parameters of the algorithm are currently not exposed.

user-f01a4e 26 December, 2022, 05:16:39

Hello again, recently I recorded some videos for the Reference Image Mapper, but the upload is stuck at 97% and doesn't seem to go further. I have tried pausing and resuming it, but after pausing, the resume is not working. To be safe, I have exported the said video to my system through USB. Is there a possibility that I can upload it to the cloud from my system? As the export has many files, I am confused.

nmt 26 December, 2022, 08:19:06

Hi @user-f01a4e. Firstly, please double-check that Cloud Uploads are still enabled in the Companion app settings. Then visit https://api.cloud.pupil-labs.com via the phone’s browser. This is to check the device can access our server and there is not a firewall on your network blocking access. If the page loads okay, you will see Pupil Cloud 1.0 in the top left

user-f01a4e 26 December, 2022, 08:49:37

@nmt Hi, thanks for your reply. I have checked the mentioned solution. Uploads are enabled, and I am also able to access the given site.

nmt 26 December, 2022, 09:04:06

In that case, please try logging out and back into the Companion App

user-f01a4e 26 December, 2022, 09:25:24

@nmt Thank you so much. That worked! 😁

nmt 26 December, 2022, 09:36:29

Awesome!

user-f01a4e 26 December, 2022, 09:45:36

@nmt Hi, I have another question: does the Companion device need to be on while accessing Cloud? Because as of now I am unable to play any recordings in the Cloud.

nmt 26 December, 2022, 10:30:24

Once the recordings are fully uploaded, there will be some processing time in Cloud. Could be a few minutes before you can play them. Try refreshing your browser window as well. The companion device doesn't need to be on after upload.

user-a98526 29 December, 2022, 13:11:09

Hi @nmt, I think there is something wrong with my Pupil Invisible; it can't connect to the Companion device (OnePlus 6) successfully. I think I need some after-sales service. Who do I contact?

End of December archive