invisible


user-886683 02 May, 2022, 03:09:38

Is there a lib for converting Pupil Player/Capture raw data to CSV??

marc 02 May, 2022, 05:50:25

If you export a recording from Pupil Player, it will be in CSV format! https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter

user-886683 02 May, 2022, 08:26:32

@marc Thanks! We are aware that it converts raw data to CSV, but our dataset is divided into blocks and participants. There are too many recordings to drag & drop individually. I was looking for a method that can automatically convert them to CSV programmatically (Python).

papr 02 May, 2022, 08:31:14

Hi, what kind of data are you looking to extract?

user-e7cf71 02 May, 2022, 09:39:46

Hi, are there any general delays known regarding the timestamps? Maybe a time offset in the world and single eye timestamps?

user-9429ba 02 May, 2022, 14:36:20

Hi @user-e7cf71 👋 The Pupil Invisible Companion App runs its own monotonic clock for timestamping all data streams. Read more in the documentation: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/legacy-api.html#time-synchronization

user-30a16e 02 May, 2022, 11:30:47

We accidentally updated the OnePlus 6 operating system (so the app no longer works). Is there a simple method to restore the old system? Should we copy any files before trying?

user-886683 02 May, 2022, 13:15:20

@papr Event triggers, pupil size, gaze angle, saccades, etc.

user-886683 02 May, 2022, 13:15:56

Chat image

papr 02 May, 2022, 13:19:09

Ok, looks like these are Pupil Core recordings. See this example of how to extract pupil diameter values from multiple recordings without opening Pupil Player: https://gist.github.com/papr/743784a4510a95d6f462970bd1c23972 You can adapt it to extract events (annotations.pldata) and gaze data (gaze.pldata). The recording format is documented here: https://docs.pupil-labs.com/developer/core/recording-format/

For saccades, you will need to apply custom code to detect them based on the gaze data.

user-886683 02 May, 2022, 13:16:05

Chat image

user-886683 02 May, 2022, 13:16:20

Currently it is divided into trial blocks

user-886683 02 May, 2022, 13:16:57

Each participant has 12 folders. So... I'm looking for a method that could automate this process

user-e7cf71 03 May, 2022, 08:02:57

Hello and thanks for the answer. Are the frame rates of the exported world video and the single eye videos the same as when streaming them with the real-time API? There appears to be a small time delay (~1 second) between the timestamp we receive from the mobile phone and the reception timestamp of the images in our system - but only for the single eye video stream.

papr 03 May, 2022, 08:06:42

The system will attempt to stream the data at full frame rate, but if the bandwidth is insufficient, it will drop frames. Streaming is independent, and the app will always record video at the full frame rate.

Where are you receiving the eye video data? In a custom script or in Pupil Capture? Can you elaborate on how the delay becomes visible?

papr 03 May, 2022, 08:03:25

Hi, this sounds like you are using ndsi, is that correct?

user-e7cf71 03 May, 2022, 08:04:11

Yes

user-e7cf71 03 May, 2022, 08:13:27

I'm receiving the video streams in a custom script, and the delay becomes visible when comparing the timestamp from the mobile phone with the system clock time at which we receive the images.

papr 03 May, 2022, 08:31:47

So what you are measuring is the transport delay, correct? When processing data via ndsi, it is important to know how the frame buffering works under the hood: Incoming data is buffered up to a specific point in time. Once the internal queue is full, ndsi (or better: zmq) drops new incoming frames. It does not drop the oldest frames from the queue.

In other words: You might not be receiving the most recent frame and therefore perceive the stream with a delay. The solution is something like this:

most_recent = None
# Drain everything currently in the queue, keeping only the newest datum
for datum in sensor.fetch_data():
    most_recent = datum
# Process the newest datum, if any arrived since the last iteration
if most_recent is not None:
    process(most_recent)

The code above will only process the most recent frame.

papr 03 May, 2022, 08:35:19

Hey, please check out https://gist.github.com/papr/40d332498bfacb5980a754c5692068ec

user-08ada6 08 May, 2022, 15:23:07

Sorry for the delay! Thank you very much, very useful for me. I am currently using Matlab to call the Python script. I await other updates in the future for the Invisible.

user-3b5a61 04 May, 2022, 04:41:12

Hi folks, I have a few questions regarding the smartphone and other ways to use the glasses. My current project involves people wearing the glasses all day. Is there a way to charge the phone while using the glasses? Also, I understand that the amount of data from this collection will be massive; therefore, is it possible to configure the glasses to turn on for (say) 10 minutes every 30 minutes or something? Thanks, Victor

marc 04 May, 2022, 07:32:49

Hi @user-3b5a61! You can connect a "powered USB-C hub" to the phone and connect the Pupil Invisible Glasses and a power source in parallel to record while charging.

The alternative would be to swap phones mid recording.

The app itself does not allow you to set a recording schedule like this. However, you can remote-control the phone using the real-time API, which allows you to start and stop recordings. This would make it relatively easy to write a custom script that realizes such a schedule, as sketched below.
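For illustration, a minimal sketch of such a schedule, assuming the pupil-labs-realtime-api Python package and a 10-minutes-on / 20-minutes-off cycle (both assumptions, adjust as needed):

import time

from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion phone on the local network
device = discover_one_device()

try:
    while True:
        device.recording_start()
        time.sleep(10 * 60)  # record for 10 minutes
        device.recording_stop_and_save()
        time.sleep(20 * 60)  # wait out the remainder of the 30-minute cycle
finally:
    device.close()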

user-3b5a61 04 May, 2022, 07:34:53

Thank you so much, Marc!

Where can I find this API that you mention?

marc 04 May, 2022, 07:40:21

You can find an introduction here: https://docs.pupil-labs.com/invisible/how-tos/integrate-with-the-real-time-api/introduction/

user-3b5a61 04 May, 2022, 07:51:36

Thx again :)

user-f36bd4 04 May, 2022, 12:33:53

Hello Pupil team, I have a OnePlus 6 with Android 11. The Invisible Companion app does not work; it shows that the Android version is not compatible with Invisible Companion. It was Android 8.1.0 before and it was working, but somehow the phone has updated itself and the app is not working anymore. I have tried deleting the app data & cache, downgrading the app, and restoring the phone to factory settings, but it did not help. Any help please 🙂 Thanks

papr 04 May, 2022, 12:48:32

Hi, you will need to roll back the operating system. Follow the instructions of this link for that https://discord.com/channels/285728493612957698/633564003846717444/654683156972175360

user-f36bd4 04 May, 2022, 12:36:37

I also have a OnePlus 8 with Android 11 and the app is working on it. Is there anything I can do?

user-f36bd4 04 May, 2022, 12:53:15

The link is not working anymore, but I am looking for it. Thank you so much.

user-f36bd4 04 May, 2022, 12:56:43

I meant the download link, but it works now :)

user-f36bd4 04 May, 2022, 12:56:48

thanks

user-b811bd 04 May, 2022, 15:29:11

@papr Hello! For some reason I have the audio now for my videos, but could you please let me know what this type of error is? I want to download the video on my PC.

user-b811bd 04 May, 2022, 15:29:19

Chat image

papr 04 May, 2022, 15:43:32

Hey, Pupil Invisible does not perform any pupil detection. As a result, its recording does not contain pupil data and this warning is expected for Invisible recordings.

user-b811bd 04 May, 2022, 15:30:36

@papr I have the scene and the eye cameras, and I have the red tracking circle when I play the merged video on my PC, but I can't download them

user-b811bd 04 May, 2022, 15:44:55

So why can't I download the video that I have in Pupil Player? That's my issue

papr 04 May, 2022, 15:46:12

I am not sure I understand which video you are referring to? And are you referring to a Pupil Cloud download or a Pupil Player export?

user-b811bd 04 May, 2022, 15:55:31

I have exported the folders for each of my recordings onto my laptop. I have installed Pupil Player. I drag a folder into the player and I can see the merged scene/eye camera recordings with the red dot/circle as an indicator of the point of view. Then, when I want to download (have this complete video) on my own PC, it fails. Sometimes I get the error I sent; sometimes the downloaded video, which is an mp4, shows as an interrupted file…

papr 04 May, 2022, 16:19:52

If the video is shown as interrupted/incomplete/corrupted, then the export process is still in progress. The World Video Exporter menu has a progress indicator that lets you know when the export is finished.

papr 04 May, 2022, 15:56:34

> exported the folders for each of my recordings on my laptop

Exported as in exported via the Companion app and then copied via USB?

user-b811bd 04 May, 2022, 15:57:44

True

papr 04 May, 2022, 16:09:37

~~Can you confirm that the "world video exporter" is enabled in the plugin manager?~~ From the shared screenshot, I can see that this is the case.

user-f408eb 06 May, 2022, 01:15:56

Hello, I'm getting back to you about a technical query. As I have already indicated, I am currently working on a study that measures the time required to look from one marker-tagged area to another. This will then be compared between the experimental and control groups. Unfortunately, so far I have only been able to map one area at a time with the Marker Mapper enrichment when I add an enrichment to the recording. So I am not sure how to get the times between the last gaze point on one marked area and the first gaze point on the other marked area. I would be very pleased if you could offer help with this.

marc 06 May, 2022, 07:40:58

Hi @user-f408eb! To track multiple surfaces you need to create multiple Marker Mapper enrichments. The corresponding exports will share the same timestamps, such that you can merge and compare the data after exporting. Let me know in case this does not yet clear things up for you!

user-64c4d3 07 May, 2022, 13:47:36

Hi, I was wondering how the fixations in the raw data exported from Pupil Cloud are calculated. What are the maximum degree and minimum duration? Can these two values be adjusted?

user-b0a007 11 May, 2022, 10:11:33

Hi @user-64c4d3! I am a Research Engineer at Pupil Labs. Regarding your question:

The novel fixation detection algorithm is essentially a classic velocity-based algorithm, however with an additional stage that subtracts components from the gaze velocity which are in line with the optical flow of the image. This is because optic flow approximates the expected slip of the gaze point when the user is moving their head while fixating a target at the same time. If the user is not moving their head, this head-motion compensation stage is not doing anything.

After that calculation, a velocity threshold of 900 px/s, or around 68°/s, is applied. Then we filter out small saccades ("micro-saccades"), which are likely to be misdetections, and merge the neighboring fixations. Saccades are removed if they have an amplitude smaller than 1.5° (amplitude = distance from start to end point) and if they are shorter than 60 ms. Finally, we also remove fixations which are likely to be misdetections, i.e., fixations shorter than 60 ms.

How did we obtain these values? The exact values represent an optimal parameter set for our fixation detector when tuned on an annotated in-house dataset covering various use cases, from observing visual stimuli on a screen to highly dynamic scenarios, such as wearing the Pupil Invisible headset while performing a search task in a real-world environment. So they are explicitly tuned toward the best fixation detection performance in various research settings.
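For intuition, here is a minimal sketch of the velocity-threshold stage alone; this is not Pupil Labs' implementation, and it omits the optic-flow compensation and saccade-filtering stages described above:

import numpy as np

def detect_fixations(ts, x, y, vel_thresh=900.0, min_dur=0.06):
    # ts: timestamps in seconds; x, y: gaze positions in pixels
    vel = np.hypot(np.diff(x), np.diff(y)) / np.diff(ts)  # gaze speed in px/s
    fixations, start = [], None
    for i, slow in enumerate(vel < vel_thresh):
        if slow and start is None:
            start = i  # a slow segment begins
        elif not slow and start is not None:
            if ts[i] - ts[start] >= min_dur:  # drop fixations shorter than 60 ms
                fixations.append((ts[start], ts[i]))
            start = None
    if start is not None and ts[-1] - ts[start] >= min_dur:
        fixations.append((ts[start], ts[-1]))
    return fixations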

user-64c4d3 07 May, 2022, 13:50:54

In older versions of Pupil Player, the fixation values seemed to be adjustable

marc 07 May, 2022, 13:53:48

Hi @user-64c4d3! Fixations are calculated using a novel velocity-based algorithm that uses optical flow to compensate for head movements. It's not the same algorithm used in Pupil Player. We are currently working on a peer-reviewed paper on the algorithm. For the exact values of the velocity thresholds etc., I need to ask my colleagues, who will be available again on Monday. The values are not adjustable.

user-64c4d3 07 May, 2022, 14:05:22

Ok, please let me know after you confirm the exact values, thank you

user-e7cf71 09 May, 2022, 12:17:58

Hi! I'm wondering what the FPS from the IMU should be. While using ndsi I'm receiving a frame rate of around 3.5 - is that correct?

papr 09 May, 2022, 12:19:24

Hi 🙂 ndsi transmits multiple IMU samples in one frame. So yes, the received frame rate is low, but the contained sample rate is higher.

user-e7cf71 09 May, 2022, 12:29:46

Thanks for the fast reply :). What exactly do you mean by "contained sample rate"? Is this from the mobile device, or are we receiving bigger packages (such as receiving a package three times per second with multiple data points)?

papr 09 May, 2022, 12:32:32

If you are using pyndsi, pyndsi should be taking care of the unpacking for you.

papr 09 May, 2022, 12:30:20

Each package should contain multiple data points

user-e7cf71 09 May, 2022, 12:41:48

Thank you papr I'll take a look at it!

user-3b418f 09 May, 2022, 15:42:16

Hi folks, quick question about the 'heat mapping' enrichment feature. In order for this to be applied to a video, am I right in thinking that the only way to apply this enrichment is if those marker icons were printed out and stuck to the area of focus prior to recording with the glasses? Is there no way to add the markers retrospectively?

nmt 10 May, 2022, 06:56:28

Hi @user-3b418f and @user-ce3bd9 👋. There are two enrichments you can use in Pupil Cloud to generate heatmaps:

1. Marker Mapper - requires AprilTag markers to be placed in the environment. If these weren't present during a recording, then you won't be able to use the Marker Mapper to generate heatmaps.

2. Reference Image Mapper - doesn't require markers, but rather depends on what we call a 'scanning video' of the feature/region of the environment that you're interested in, in addition to a photo. Depending on where you made your recordings, it might be possible to go back to the location and generate these. The main thing to consider is that the feature/region of the environment should be the same now as when you collected your original data. You can read more about how to set up this enrichment here: https://docs.pupil-labs.com/invisible/explainers/enrichments/#reference-image-mapper

user-ce3bd9 10 May, 2022, 01:47:41

I am interested in the same question...

user-ce3bd9 10 May, 2022, 01:48:52

I would like to create a "heat map", but I didn't print the marker icons... Is there nothing I can do now?

user-a98526 10 May, 2022, 06:49:23

Hi @papr, I get excited every time I communicate with you. The documentation claims: "the exact framerate of this signal depends on the model of phone you use as Companion device. On a OnePlus 6 device, the framerate is around 50 Hz, while on a OnePlus 8 it is around 65 Hz." I want to know if this frequency can be increased if I use a OnePlus 9, and whether this frequency can be fixed.

papr 10 May, 2022, 07:45:34

Hi, thank you for the compliment. Much appreciated 🙏 Please be aware that we currently only support two Pupil Invisible Companion devices: the OnePlus 6 and OnePlus 8. Other devices are currently not supported. The effective frame rates vary due to hardware constraints. If it is a higher realtime gaze sampling rate that you want, I recommend staying tuned and keeping an eye on the announcements channel. We will be able to share some good news in this regard very soon.

user-a98526 10 May, 2022, 08:06:26

Thank you very much, I will be watching this announcement closely.

user-98789c 10 May, 2022, 12:29:51

This fixations.csv file comes from a recording with Pupil Core. I need to know if and how I can get the same file, most importantly including start_frame_index and end_frame_index, for a recording made with Pupil Invisible?

Chat image

user-42203b 10 May, 2022, 12:30:00

Hello everyone 🙆‍♂️ I’ll be working with the pupil invisible glasses in the upcoming days and wanted to ask, if it’s alright to post any questions regarding error messages/connection issues here on discord? And: Is English the preferred language? 🙂

Have a nice day!

marc 10 May, 2022, 12:31:33

Hi @user-42203b! Yes, you are very welcome to post any questions here! And yes, English is the preferred language. You too have a nice day! 🙂

marc 10 May, 2022, 12:35:26

Fixation data for Pupil Invisible can be obtained via Pupil Cloud. It's included in the recording download and the raw data export. The format is a bit different, though. The full documentation is here: https://docs.pupil-labs.com/invisible/reference/export-formats.html#fixations-csv

The fixations there have a start and end timestamp corresponding to the contained gaze samples. A frame index is not directly given, but using the fixation timestamps in combination with the scene video timestamps the index can be inferred.
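A minimal sketch of that inference, assuming the Cloud export's fixations.csv and world_timestamps.csv with their "timestamp [ns]" columns (verify the column names against your own export):

import numpy as np
import pandas as pd

fixations = pd.read_csv("fixations.csv")
world_ts = pd.read_csv("world_timestamps.csv")["timestamp [ns]"].to_numpy()

# Index of the last scene frame at or before each fixation boundary
fixations["start_frame_index"] = (
    np.searchsorted(world_ts, fixations["start timestamp [ns]"], side="right") - 1
)
fixations["end_frame_index"] = (
    np.searchsorted(world_ts, fixations["end timestamp [ns]"], side="right") - 1
)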

user-98789c 10 May, 2022, 12:37:24

thank you @marc 👍

user-42fccc 11 May, 2022, 15:54:06

Hi there, I am a researcher using the Pupil Invisible for gaze tracking in my research experiments, which last about 1 to 2 hours. During these, the glasses get really warm, causing discomfort for the wearer. Are there any solutions for this? If not, I was looking to 3D print an insulating cover to slip on that could reduce the heat transferred to the wearer's skin. To do so, I was hoping I could get some CAD files of just the exterior frame arms to use as a reference for the 3D printed cover.

marc 12 May, 2022, 08:08:53

Hi @user-42fccc! Let's schedule a video call to discuss options! I'll DM you a scheduling link.

user-b811bd 12 May, 2022, 18:43:53

@marc Hi, could you please let me know what the issue is with such files that makes Pupil Player crash? Thanks

papr 12 May, 2022, 21:48:56

That would be my responsibility. Could you please share the full traceback/error message with us?

user-b811bd 12 May, 2022, 18:44:04

Chat image

user-b811bd 12 May, 2022, 18:44:38

I have 4 recordings that make the player crash

user-b811bd 13 May, 2022, 00:22:21

This is just the photo I sent. As soon as I drag the folder in, it runs for two seconds and then Player crashes by itself

papr 13 May, 2022, 05:35:13

Does this happen for all of your recordings or only this one in particular? Could you please share the player.log file? You can find it in the pupil_player_settings folder.

user-a98526 13 May, 2022, 02:12:43

Hi @papr, I found that the scene image obtained by Pupil Invisible seems to be somewhat distorted; does this require calibration?

papr 13 May, 2022, 05:38:12

Pupil Cloud and Pupil Player take care to correct for the distortion behind the scenes, e.g. for gaze estimation or marker mapping. It is normal that the video preview looks like this. 🙂

user-a98526 13 May, 2022, 02:13:18

Here is a picture example.

Chat image

user-a98526 13 May, 2022, 05:58:26

I want to perform object detection on the scene image (e.g., with YOLO); do I need to correct the scene image? If Pupil Cloud corrects the image in the gaze mapper, does that mean I also need to calibrate for object detection?

marc 13 May, 2022, 07:23:04

@user-a98526 What you see in the video is lens distortion due to the large field of view of the camera. This distortion can be compensated using the camera intrinsic values, which are measured for every Pupil Invisible device during manufacturing. YOLO would probably work decently on the distorted video, but undistorting the video often improves the results of such algorithms. We have not tested YOLO explicitly and I'd recommend just trying it out on the distorted video first.

If you want to undistort the video, you have two options:

Option 1) You could download the intrinsic values for your device using the following link, where you need to insert the serial number of your scene camera, which you can find in the info.json file or on the back of the scene camera module. You need to be logged in to Pupil Cloud in the browser for the link to work! https://api.cloud-staging.pupil-labs.com/v2/hardware/<serial number>/calibration.v1?json

And then you can use e.g. OpenCV's cv2.undistort method to undistort the images (see the sketch after Option 2 below). https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga69f2545a8b62a6b0fc2ee060dc30559d

Option 2) You can use the gaze overlay enrichment! This enrichment usually allows you to render a custom gaze overlay visualization. One of the options it provides is however to undistort the video in the process. You can set the gaze overlay to be transparent to effectively yield the raw but undistorted video. The enrichment is basically implementing Option 1) for you.
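As a rough sketch of Option 1, assuming the downloaded JSON contains camera_matrix and dist_coefs fields (check the actual response for the exact field names):

import json

import cv2
import numpy as np

# Intrinsics downloaded via the link above while logged in to Pupil Cloud
with open("calibration.v1.json") as f:
    calib = json.load(f)

K = np.array(calib["camera_matrix"], dtype=np.float64)
D = np.array(calib["dist_coefs"], dtype=np.float64)

frame = cv2.imread("scene_frame.png")  # a single scene video frame
undistorted = cv2.undistort(frame, K, D)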

user-a98526 13 May, 2022, 05:59:30

@papr This situation happens for all of my recordings

user-a98526 13 May, 2022, 06:03:54

I found these two files:

user-a98526 13 May, 2022, 06:04:23

info.invisible.json info.player.json

marc 13 May, 2022, 07:29:25

One thing I forgot to mention: the gaze data is in distorted image coordinates. Since you probably want to relate the gaze data to the YOLO detections, you would need to undistort the gaze points as well if you undistort the video, because the YOLO detections would then be in undistorted coordinates.

The gaze overlay hack unfortunately does not handle this for you, but you can use the intrinsics as above and the cv2.undistortPoints method. https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e

user-a98526 13 May, 2022, 07:43:44

Thanks for your explanation @marc. I have some doubts: do you mean the gaze data from the Raw Data Export is distorted?

marc 13 May, 2022, 07:49:57

What I mean is that the gaze data is in the original scene camera image coordinates. If you change the scene camera images by undistorting them, you need to equally change the gaze data as well.

This might sound more complicated than it is, though. In Python pseudo code it would look something like this:

K, D = download_camera_intrinsics()  # hypothetical helper returning camera matrix and distortion coefficients

undistorted_image = cv2.undistort(image, K, D)
# Pass P=K so the undistorted points stay in pixel coordinates;
# gaze must be a float array of shape (N, 1, 2)
undistorted_gaze = cv2.undistortPoints(gaze, K, D, P=K)

Plotting gaze onto image would be correct, but the image looks distorted. Plotting undistorted_gaze onto undistorted_image would also be correct and the image does not look distorted.

Plotting gaze onto undistorted_image would be wrong, and the gaze data would be slightly off.

user-a98526 13 May, 2022, 07:51:23

This is very helpful and it really reduces errors in my work!

user-a98526 13 May, 2022, 07:50:40

Besides that, I have another question. I want to utilize the eye images obtained by Pupil Invisible, but I found an interesting thing: the number of gaze points and the number of eye images do not match. In other words, eye_image_number (Pupil Player) < gaze_points (Pupil Cloud Raw Data Export) < eye_image_number (.mjpeg file).

marc 13 May, 2022, 07:58:09

The left and right eye cameras are running independently, i.e. they do not record images at the exact same time. To calculate gaze, the algorithm needs to build pairs of eye images first. This is done by iterating through the left eye camera images and picking the closest available image from the right eye camera. Thus, the number of gaze samples should be equal to the number of frames in the left eye video.
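A minimal sketch of this pairing step on the raw .time files; the right eye file name is assumed by analogy with the left one:

import numpy as np

left_ts = np.fromfile("PI left v1 ps1.time", dtype="int64")
right_ts = np.fromfile("PI right v1 ps1.time", dtype="int64")

# For each left eye timestamp, pick the nearest right eye timestamp
idx = np.searchsorted(right_ts, left_ts)
idx = np.clip(idx, 1, len(right_ts) - 1)
before_is_closer = (left_ts - right_ts[idx - 1]) < (right_ts[idx] - left_ts)
paired_right_idx = np.where(before_is_closer, idx - 1, idx)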

I am not exactly sure how Pupil Player calculates the eye image number and how it should relate to the number of gaze samples. Maybe @papr can explain that?

@user-a98526 how do you calculate the number of frames in the MJPEG file? Using PyAV or ffmpeg should yield the correct number, but OpenCV can run into issues when reading videos frame-wise.

user-a98526 13 May, 2022, 08:01:34

1. For Pupil Player, I used the exported eye0.mp4.

marc 13 May, 2022, 08:13:41

eye0 is the right eye. You would need to count the frames in eye1. Those should be equal to the number of gaze samples, or if you use OpenCV to count potentially lower, but not higher.

user-a98526 13 May, 2022, 08:01:37

Chat image

papr 13 May, 2022, 08:02:51

Is this from the intermediate recording format or exported from Player using the eye video exporter?

user-a98526 13 May, 2022, 08:04:00

It is from Player using the eye video exporter.

user-a98526 13 May, 2022, 08:05:29
2. For mjpeg files, I use OpenCV. I counted the number of frames using the following code:

import cv2

def get_frames(path):
    '''
    :param path: path to the video file
    '''
    frame_count = 0
    capture = cv2.VideoCapture(path)
    while capture.isOpened():
        ret, frame = capture.read()
        if not ret:
            print("Can't receive frame (stream end?). Exiting ...")
            break
        frame_count += 1
    return frame_count

papr 13 May, 2022, 08:09:01

Yeah, as @marc mentioned, Opencv is not reliable in this regard. Let me look up an example with pyav 🔍

marc 13 May, 2022, 08:11:21

The issue with OpenCV is that it sometimes skips frames and would thus yield a frame count that is too low. This could not explain why you find more eye frames than gaze samples.

papr 13 May, 2022, 08:14:44

@user-a98526

# Install PyAV first: pip install av
import av

container = av.open("eye1.mp4")
count = 0
for frame in container.decode(video=0):
    count += 1
print(count)

user-a98526 13 May, 2022, 08:21:22

In fact, the results are as follows: OpenCV count: 3761, av count: 3761, gaze.csv: 3566

user-a98526 13 May, 2022, 08:26:17

Can I upload my data for a detailed analysis?

marc 13 May, 2022, 08:27:03

Yes, that would be helpful! Please share with [email removed]

user-a98526 13 May, 2022, 08:38:51

I have sent an email. In fact, I found that my Pupil Invisible companion produces a 1-2s gap per recording, which is ignored when using Pupil Player but not in Pupil Cloud.

papr 13 May, 2022, 08:40:53

Yes, there is a short delay between the start-recording button-press in the Companion app and the scene camera actually starting to record. Pupil Cloud starts its playback at the button press and Pupil Player at the first scene video frame.

user-a98526 13 May, 2022, 08:43:03

This gap is a solid gray image for 1-2 seconds, and I wonder whether the gaze data during this time is ignored by the Raw Data Exporter.

papr 13 May, 2022, 08:43:39

Pupil Cloud gaze data includes gaze that was generated during this period.

user-a98526 13 May, 2022, 08:45:01

papr 13 May, 2022, 08:54:10

All sensors have some start up time. I recommend starting the recording before starting the condition/task in your experiment.

user-a98526 13 May, 2022, 08:56:45

Yes, I actually did; my remaining doubts are mostly about the inconsistencies between gaze counts and eye images. I've shared my data with [email removed] I hope this helps.

papr 13 May, 2022, 09:01:51

I will have a look this afternoon

user-a98526 13 May, 2022, 09:06:24

Thank you very much.

papr 13 May, 2022, 11:44:30

@user-a98526 Please see my notes below:

- After pressing the start button, the eye cameras take a short time to start (~0.43 seconds in your recording).
- After pressing the stop button, the eye cameras take a short time to stop (0.5-1.0 seconds).
- Pupil Cloud matches a right eye frame to every left eye frame as long as it is not further away than 5 ms. Left eye frames without a matching right eye frame are not processed for gaze.
- gaze.csv only contains gaze data between the presses of the start and stop buttons.

The discrepancy between number of gaze samples in gaze.csv and the number of frames in eye1.mp4 is due to the last point. You can verify this by plotting the timestamps (see attached plot).

Chat image
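A minimal sketch of such a plot, assuming the "timestamp [ns]" column in gaze.csv and the left eye .time file from the raw recording:

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

eye1_ts = np.fromfile("PI left v1 ps1.time", dtype="int64")
gaze_ts = pd.read_csv("gaze.csv")["timestamp [ns]"].to_numpy()

# The gaze series should end earlier than the eye video series
plt.plot(eye1_ts, label="eye1 frame timestamps")
plt.plot(gaze_ts, label="gaze.csv timestamps")
plt.xlabel("sample index")
plt.ylabel("timestamp [ns]")
plt.legend()
plt.show()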

marc 13 May, 2022, 12:14:01

What this also means is that there is no issue with the data, all of this is actually expected! 🙂

user-a98526 13 May, 2022, 12:19:35

Is there a way to get the eye image corresponding to the gaze point?

marc 13 May, 2022, 12:25:13

Yes, the eye frames and the gaze samples have the same timestamps, which you can use to match them.

user-a98526 13 May, 2022, 12:22:14

From your explanation @papr, I understand why the number of eye images is greater than the number of gaze points, which resolves my doubts.

marc 13 May, 2022, 12:26:58

More precisely, the eye1.time file contains the corresponding timestamps of the left eye frames, which correspond to the gaze ps1.time timestamps. eye0.time contains timestamps of the right eye, which correspond to the timestamps in gaze_right ps1.time.

marc 13 May, 2022, 12:28:18

The timestamps in gaze.csv only contain the timestamps of the left eye (a subset of gaze ps1.time as described above) and would only allow you to match the left eye.

marc 13 May, 2022, 12:29:09

You can open the time files in Python using np.fromfile("gaze ps1.time", dtype="int64").

user-a98526 13 May, 2022, 12:34:38

Okay, I'll try it, thank you very much for your help today. This will be very helpful for my research.

user-0f58b0 13 May, 2022, 13:07:53

Hi Pupil friends 😉

Just a quick question about the Invisible and Markers vs. Image Mapper: for "art-like pictures" on a white background, which method would you recommend for analysis?

Markers or Image Mapper?

Thanks for your opinion, Philipp

marc 13 May, 2022, 13:11:37

Hey @user-0f58b0!

We have recently recorded an example dataset (which will be released very soon!) in a gallery, where we used the Reference Image Mapper with great success! So I would for sure recommend it!

It depends a little bit on what the pictures look like, though. If it was very uniform artwork, say paintings of just solid color, the algorithm will not work well. But for most types of art it should work well!

I can send you a teaser of the dataset if you are interested in comparing it with the setup you are going to have.

user-0f58b0 13 May, 2022, 13:34:07

Thanks Marc, you guys/girls are the best. We'll give it a try! Also, the Stream thingy is awesome!

user-b811bd 13 May, 2022, 14:14:32

@papr

Chat image

papr 13 May, 2022, 14:14:58

the player file is what I need 🙂

user-b811bd 13 May, 2022, 14:15:54

Chat image

papr 13 May, 2022, 14:17:38

could you please try to open one of the problematic recordings and then upload the file here?

user-b811bd 13 May, 2022, 14:21:34

Chat image

user-b811bd 13 May, 2022, 14:21:48

Chat image

user-b811bd 13 May, 2022, 14:22:02

Is this what you want?

marc 13 May, 2022, 14:23:38

@user-b811bd That's not quite what we need! Please first open one of the problematic recordings in Pupil Player, which will log the error in the player.log file. Then, upload the player.log file here. Please upload the actual file rather than taking an image of it!

user-b811bd 13 May, 2022, 14:28:25

Ok, got it

user-7c714e 13 May, 2022, 15:04:23

Hello, can I disable the sound in Pupil Cloud when creating a Gaze Overlay as .mp4?

user-b811bd 13 May, 2022, 15:05:01

player.log

papr 13 May, 2022, 15:08:01

ok, thank you, this is very helpful. This was an issue on older Pupil Invisible Companion app versions when it attempted to recover an unfinished recording. Please share the info.json files of the affected recordings with [email removed] We can fix them for you!

marc 13 May, 2022, 15:05:42

No, that is not an option currently. You would have to use an external tool to remove the audio from the video.

user-7c714e 13 May, 2022, 15:20:50

Thanks Marc

user-b811bd 13 May, 2022, 15:09:51

sounds great, thank you so much

papr 13 May, 2022, 15:29:08

Those recordings that have info.player.json and info.invisible.json files have a separate issue. Please attempt to open these recordings, too, and share the player.log file again. Note that the log file is overwritten each time Player is restarted. Therefore, it is important to make a backup of the file between opening different recordings.

user-328d3b 13 May, 2022, 17:39:51

Hi, I need to know if I can use Pupil Labs open source software with GazePoint GP3 hardware. Thank you so much

papr 16 May, 2022, 06:55:22

Hi @user-328d3b GazePoint only sells remote eye trackers. Pupil Labs software is designed for head-mounted eye trackers and is therefore not conceptually compatible with their hardware.

user-a98526 14 May, 2022, 06:37:33

Sorry to bother you again @papr @marc. I tried your method, but I found that the number of data points in gaze ps1.time (767, from the raw data download) is much smaller than the number of eye1 images (3761). Also, the timestamps in gaze.csv (from the Pupil Cloud Raw Data Export) and the timestamps in eye1.time are not equal. So I'm still not sure how to match them.

papr 16 May, 2022, 07:09:22

It is expected that gaze ps1.* contains fewer samples, as this is the realtime estimated gaze at ~55-65 Hz. And as explained, gaze.csv only contains data up to the stop-recording button press, while the eye video can go on a bit longer.

Every timestamp [ns] in gaze.csv should be present in PI left v1 ps*.time. Its index corresponds to the frame index.

This is how the matching could work roughly

import av
import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze.csv")
left_eye_time = np.fromfile("PI left v1 ps1.time", dtype="int64")
left_eye_frames = list(av.open("PI left v1 ps1.mp4").decode(video=0))

for _, row in gaze.iterrows():
    # Every gaze.csv timestamp should be present in the left eye time file
    idx_in_left_eye = np.searchsorted(left_eye_time, row["timestamp [ns]"])
    left_eye_img = left_eye_frames[idx_in_left_eye]

user-4f3037 16 May, 2022, 06:39:50

Hello everyone! I have a technical problem with Pupil Invisible. I am using a OnePlus 6. The system was updated by mistake and (as expected) the app stopped working. We managed to reinstall the older version (Android 8.1.0); however, we are still having some issues. Eyes are not detected, and the gaze pointer during offset correction is not visible or is fixed on the scene video. The recording seems to start correctly, but it then gives an error message. Any idea how to fix this problem? Please let me know if you need any more info. Thanks a lot!

papr 16 May, 2022, 07:11:26

Hi @user-4f3037 Sorry to hear about the issues. Could you please contact info@pupil-labs.com in this regard and attach the error message? Ideally, please attach the recording's android.log.zip file as well.

user-4f3037 16 May, 2022, 07:21:32

Thanks, I'll do that. I have the error message given by the app. Could you please tell me where I can find the android.log.zip file?

papr 16 May, 2022, 07:21:51

Was the recording uploaded to Pupil Cloud?

user-4f3037 16 May, 2022, 07:24:21

When the error message is given, the recording is not saved (there is no file in the recording folder). For some reason, sometimes the recording is saved, but without gaze. I can try to get one of those and upload it to Pupil Cloud

papr 16 May, 2022, 07:25:42

Does that mean that you usually copy the recordings from the phone via USB?

user-4f3037 16 May, 2022, 07:28:07

I usually uploaded them to the Cloud when the app used to work. But now it just does not record anything and just gives me an error message

user-a98526 16 May, 2022, 07:27:54

Thanks @papr . By the way, I'm trying to use https://api.cloud-staging.pupil-labs.com/v2/ to get the camera parameters, but I don't seem to have access.

user-a98526 16 May, 2022, 07:27:56

Chat image

marc 16 May, 2022, 07:37:39

For the API to work you need to be logged in to Pupil Cloud. So please first visit cloud.pupil-labs.com, log in, and then visit the API URL!

papr 16 May, 2022, 07:29:59

Just to clarify, you get the error message when attempting to start the recording, not later after the recording has started already, correct? What is the error message?

user-4f3037 16 May, 2022, 07:34:42

yes, correct! This is the error I get. I get it just after the recording started

Chat image Chat image

papr 16 May, 2022, 07:35:24

ok, please contact info@pupil-labs.com with this information

user-4f3037 16 May, 2022, 07:37:06

Ok, thanks a lot!

user-a98526 16 May, 2022, 07:42:08

I tried, but it still doesn't seem to work. My serial number is 7PQ45.

papr 16 May, 2022, 07:59:58

This does not seem to be a scene camera serial number but rather that of the glasses frame. You can find the former on the inside of the scene camera (you need to detach the camera to see it).

papr 16 May, 2022, 07:54:27

@user-a98526 ~~Please try https://api.cloud.pupil-labs.com/v2/ instead (Without the -staging part)~~ Looks like UI does not show the hardware endpoint.

user-a98526 16 May, 2022, 08:01:15

Is it YWSCB?

marc 16 May, 2022, 08:03:41

Yes, that is a valid serial. In the URL you need to spell it without caps though, i.e. please visit: https://api.cloud.pupil-labs.com/v2/hardware/ywscb/calibration.v1?json

user-a98526 16 May, 2022, 08:06:10

This is very useful, thanks!

user-a98526 16 May, 2022, 09:24:17

I have a small suggestion that I hope will help improve the Pupil Invisible experience: the Gaze Overlay enrichment in Pupil Cloud could add functionality similar to Pupil Player's eye overlay.

papr 16 May, 2022, 09:35:05

The gaze overlay enrichment exports the scene video with the closest gaze point. The issue here is that the scene video is recorded at a much lower frequency than the eye videos/gaze. Overlaying the eye videos would mean rendering only the closest eye image as well. As a result, the gaze overlay video would not contain all gaze/eye video images and would surely not be the right tool for your annotation task.

papr 16 May, 2022, 09:25:14

Thank you for the feedback! Could you elaborate on what you are using the overlay for? Or in other words: What is your use case for visualizing the eye videos?

user-42203b 16 May, 2022, 09:29:21

Hello everyone 🙆‍♂️ I’m using the pupil invisible glasses for an interview and while checking the recording in the invisible companion app, I’ve noticed that the ET circle doesn’t move at all. Any ideas what I can do to prevent this from happening again? Thank you very much in advance 🙂 EDIT: I just checked the cloud and thankfully the recording seems to be fine there!!!

papr 16 May, 2022, 09:31:10

Could you specify when (during vs after) and where (in-app vs Cloud) you checked the recording?

user-a98526 16 May, 2022, 09:30:22

My application is eye activity classification (fixation, saccade, smooth pursuit, PSO, etc.), which requires experienced users to label the data. During the labeling process, eye images and gaze points can provide the user with a more accurate judgement. This is why I am so keen on matching gaze points to eye images.

papr 16 May, 2022, 09:31:31

Ok, thank you for elaborating 👍

user-42203b 16 May, 2022, 09:35:37

In-app and during - but it only appears that way when checking the recording in-app; I just viewed the video in the cloud and it seems to be fine there 🙂

papr 16 May, 2022, 09:36:46

To monitor the recording in realtime, make sure to check out this newly released tool: https://docs.pupil-labs.com/invisible/how-tos/data-collection-with-the-companion-app/monitor-your-data-collection-in-real-time.html

user-6c1e7f 16 May, 2022, 09:40:23

From what age onwards would you recommend using Pupil Invisible?

marc 16 May, 2022, 10:14:18

Hi Kerstin! The official answer would have to be that we have never properly evaluated usage with children and can't make solid recommendations. But we can share some anecdotal advice.

The biggest issue when using Pupil Invisible with children is that the glasses are not sitting well on their small heads. This can be partially circumvented using the head strap, which gives some stability.

In all cases we have tested, the algorithms have been working well even on children as young as 2-3 years. But in that age group the glasses sit very loosely. More creative ways of fixating the glasses may help. At maybe 8-10 years I'd expect the issues to stop, as the glasses should start to sit well when using the head strap.

user-6c1e7f 16 May, 2022, 12:56:31

Thank you! 🙂

user-a98526 16 May, 2022, 13:52:40

Hi @marc, I found an interesting thing. Sometimes an eye0 image at time t will be closest to two adjacent eye1 images, at t and t+1, while eye0 at t+1 is not closest to any eye1 image. Does this mean that some eye0 images (i.e., eye0(t)) will be used twice in the gaze estimation?

papr 16 May, 2022, 13:53:54

Yes, that can happen.

user-a98526 16 May, 2022, 13:55:22

Thanks for the answer, this is a very interesting situation!

user-bd34eb 17 May, 2022, 09:02:27

Hi there! I have a problem as well with the Pupil Invisible glasses and the recordings in the Invisible Companion app. The ET circle seems to stop moving after somewhere between 10 seconds and 2.5 minutes, even though the recording overall continues. Is there a way to prevent this from happening so I can check the recordings in the app (not live nor in the cloud) with a working ET circle? Thank you in advance!

marc 17 May, 2022, 09:05:00

Hi @user-bd34eb! Does the gaze circle only stop moving when you play the recording back in the app, or also during playback in Pupil Cloud/Pupil Player/ Live Preview? Would you be able to share the recording with [email removed] so we can take a look?

user-bd34eb 17 May, 2022, 11:26:17

Hi! Thank you for the fast response! I have sent you a recording via that email address. I'm unsure whether the ET circle also stops when looking at the live feed during recording, since we only use the recording in the app...

The recording of the circle most often stops at 1:13 min, but sometimes after 10 seconds or 2:15 min.

marc 17 May, 2022, 14:26:32

Thanks @user-bd34eb! After a first look it seems like there is nothing wrong with the recording itself. The gaze data is fully available in the recording. I have forwarded the recording to the Android team to investigate why it might not play correctly in the app. We'll keep you up to date!

user-2d66f7 18 May, 2022, 12:54:12

Hi Marc, we might have the same issue. The ET circle stops moving while the recording continues. We use these recordings in the rehabilitation to give feedback on scanning behaviour, so having recordings with a moving ET circle is crucial for us. If you have found the problem I would like to hear it as well!

user-bd34eb 17 May, 2022, 14:54:28

Thank you very much!

marc 18 May, 2022, 13:05:30

Thanks for the report @user-2d66f7! We are still investigating the issue! If you wouldn't mind, could you also share one of the affected recordings with [email removed] so we have another instance to work with?

user-2d66f7 18 May, 2022, 14:16:24

Yes, I will do that as soon as possible.

marc 19 May, 2022, 15:23:24

@user-2d66f7 @user-bd34eb We were able to locate the problem and fixed it with a new release, so please update your apps. The release should become available tomorrow! 🙂 @user-a9589e If you could still send us your example recording anyway, that would be great, so we can double-check!

user-710092 20 May, 2022, 09:54:48

Hello. We are a small company considering buying Pupil Labs Invisible. We cannot find anything regarding the warranty of the products, though. Can you inform me of the warranty? 🙂 Cheers, Frederik

mpk 23 May, 2022, 08:19:25

@user-710092 We give 12 months of warranty. In case of a defect, you ship the unit back to us; we repair it and ship it back to you for free. Past warranty, we have a repair service; our repair pricing is fair: it covers parts and labor costs only.

user-d1dd9a 25 May, 2022, 08:20:53

Is it possible to get the companion app outside the play store ?

marc 25 May, 2022, 08:27:52

No, generally not. Can't you use the Play Store?

user-d1dd9a 25 May, 2022, 08:36:56

You need a Google account for that. It would be great if the APK were also made available outside of the Play Store.

marc 25 May, 2022, 08:37:55

Okay, I see! Thanks for the feedback, I'll forward it!

user-d1dd9a 25 May, 2022, 09:07:01

For an official educational institution, such an account binding is difficult. We use the OnePlus in a tightly defined way, only as a (local) recorder of our eye tracking data. So it would be really great to have another option to update the Companion app.

marc 25 May, 2022, 09:10:26

What stops you from using a "throw-away account" whose only purpose is to download the app?

user-d1dd9a 25 May, 2022, 10:12:13

I'm not so happy with that, but I did it just now.

user-e0a93f 26 May, 2022, 15:02:31

Hi 🙂 I noticed something weird in my data. I always get a 1.0 confidence score for all of the frames. However, sometimes my participant's eyes are almost closed (not enough to be a blink), and in these moments the gaze estimate is off laterally. I highly doubt that the gaze orientation is right at these instants. Do you have an idea of how to detect these instants, other than just having the feeling that something is not right? Is the confidence score usually reliable?

Chat image

nmt 26 May, 2022, 15:11:37

Hi @user-e0a93f. Please see this message for what confidence means in the context of an Invisible recording: https://discord.com/channels/285728493612957698/633564003846717444/963724325586870303

user-e0a93f 26 May, 2022, 15:54:00

Thanks for your answer; it explains why it is always 1.0. Do you have an idea of how to detect half-closed eyes?

user-e0a93f 10 June, 2022, 13:39:41

@papr or anyone: is it possible to get a confidence score for the AI prediction of the gaze orientation? Otherwise, I guess I will have to train my own model to detect when the eyes are half closed. Did you do something like that for the blink detection that could be reused in my case?

user-100d6e 31 May, 2022, 14:02:20

Hi! Is the video frame rate of the pupil invisible world camera variable?

papr 31 May, 2022, 14:12:59

Hi, yes, it is 🙂

user-100d6e 31 May, 2022, 14:46:50

In our videos we observe a slight increase and then a decrease back to "normal" playback speed. This happens for a few seconds throughout the raw world video.

Could that be possible?

marc 01 June, 2022, 05:49:49

The framerate should be Gaussian-distributed around 30 FPS, and a local average should always yield ~30 FPS. The changes in speed should not be perceivable when playing back the recordings. Are you playing back the recordings using the Companion App, Pupil Cloud, or Pupil Player? If you could share an affected recording with [email removed] I'd be happy to take a look to see if something appears wrong!
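To check this on a recording, one could compute the instantaneous frame rate from the scene video's timestamp file; the file name below follows the raw recording format and is an assumption here:

import numpy as np

world_ts = np.fromfile("PI world v1 ps1.time", dtype="int64") / 1e9  # seconds
fps = 1.0 / np.diff(world_ts)  # instantaneous frame rate between frames
print(f"mean: {fps.mean():.1f} FPS, std: {fps.std():.1f} FPS")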

user-6a91a7 31 May, 2022, 19:30:23

Does anyone know how to resolve issues with uploading to the cloud -- files have been stuck at 0% for several days?

Files/videos of similar length have uploaded to the cloud perfectly fine in the past, and the specific files not uploading can be viewed with overlaid gaze by clicking on the file. I have stopped the app completely and rebooted the phone provided by Pupil, but have not had luck with the files uploading.

Chat image

nmt 31 May, 2022, 21:20:12

Hi @user-6a91a7. Please try logging out of the app and logging back in again.

user-6a91a7 01 June, 2022, 17:04:14

Thank you @nmt ! That worked!

End of May archive