πŸ’» software-dev


user-83ea3f 02 March, 2021, 05:00:36

hey, can I get some examples for using the MessagePack API? I want to use the surface tracker, the world view camera module, calibration, recording, and the export function!

papr 02 March, 2021, 09:29:47

See this for starting and stopping calibration and recordings: https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote

See these for receiving surface tracker and world view data: https://github.com/pupil-labs/pupil-helpers/tree/master/python

Exports cannot be triggered remotely since Pupil Player does not expose a Network API.
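For reference, a minimal sketch of the Pupil Remote request/reply pattern described in the link above, assuming Pupil Capture runs locally with the default Pupil Remote port (50020):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# "R"/"r" start/stop a recording; "C"/"c" start/stop a calibration.
pupil_remote.send_string("R")      # start recording
print(pupil_remote.recv_string())  # Pupil Remote acknowledges every request
pupil_remote.send_string("r")      # stop recording
print(pupil_remote.recv_string())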

papr 02 March, 2021, 18:56:00

auto exposure window, if I had to guess

user-10fa94 08 March, 2021, 05:01:30

Thank you! A second question, since this was developed for the eye cameras, are the thresholds and AE windows that are hardcoded specific to the eye cameras? If yes, would it be possible to provide some guidance on how these values were selected?

user-ac0304 03 March, 2021, 20:44:27

hi, I'm working on projecting the pupil gaze data onto video captured by a second camera. I'm working with the raw gaze data in the .pldata files and wanted to know how I might use gaze_point_3d and other data from the gaze and fixation files to achieve the same projection as seen when you load the data in Pupil Player.

papr 03 March, 2021, 20:58:23

Do you know how to read the raw data already?

papr 03 March, 2021, 20:45:52

Do I understand correctly that your gaze data was not calibrated to this second camera, but to the primary Core headset scene camera?

user-ac0304 03 March, 2021, 20:52:23

yes it's been calibrated to the scene camera and we're translating the data to the second camera

papr 03 March, 2021, 20:53:38

ok, thanks. I need one more clarification: do you know how to do the necessary translation/transformation already? Or is this what you'd like to know?

user-ac0304 03 March, 2021, 20:54:58

we know the transformation, I guess what I'd really like to know is what data from the files I need/how they are being processed to make the projection

papr 03 March, 2021, 20:57:29

The code implementing our camera (un)projections and the default camera intrinsics can be found here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py

papr 03 March, 2021, 20:56:15

ok, great, that I can help you with πŸ‘ Check out our coordinate system reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system

user-ac0304 03 March, 2021, 21:01:14

yes, we've already read the data using the file_methods.py functions, this should be enough to get started with the projection. thank you!
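For later readers, a rough sketch of the projection step under the assumptions discussed above: a known rigid transform from the scene camera to the second camera, and known intrinsics for that camera. All values below are placeholders; gaze_point_3d is given in scene camera coordinates (mm):

import cv2
import numpy as np

gaze_points_3d = np.array([[10.0, -5.0, 500.0]], dtype=np.float64)  # placeholder gaze_point_3d

R = np.eye(3)                      # rotation, scene cam -> second cam (assumed known)
t = np.zeros(3)                    # translation, scene cam -> second cam (assumed known)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])    # second camera intrinsic matrix (made up)
dist_coefs = np.zeros(5)           # second camera distortion coefficients (made up)

rvec, _ = cv2.Rodrigues(R)
points_2d, _ = cv2.projectPoints(gaze_points_3d, rvec, t, K, dist_coefs)
print(points_2d.reshape(-1, 2))    # pixel position of the gaze in the second camera image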

user-a98526 08 March, 2021, 09:35:02

Hi @papr, the result is as shown in the screenshot.

Chat image

papr 08 March, 2021, 09:55:24

Please check the ...\site-packages\uvc\ folder for a file called turbojpeg.dll and copy it to the pupil_external folder

papr 08 March, 2021, 09:52:36

Thank you, now I know what is actually missing. You have pyuvc installed already, correct?

user-a98526 08 March, 2021, 09:43:20

I still don't know how to fix this problem.

user-a98526 08 March, 2021, 09:59:47

I think I don't have UVC installed, I just configured the environment according to Pupil Invisible Monitor's tutorial.

papr 08 March, 2021, 10:02:40

@user-a98526 ok, no problem, this is the missing file. It is also part of the Pupil Invisible bundle.

turbojpeg.dll.zip

user-a98526 08 March, 2021, 10:12:24

Yes it works, thanks for your help.

user-a98526 08 March, 2021, 12:13:09

Hi @papr, another question: how do I get IMU data in real time? I modified SENSOR_TYPES, but the following problem occurred:

Traceback (most recent call last):
  File "E:/pupil/pupil_invisible/pupil-invisible-monitor/src/invisible_data_get.py", line 102, in <module>
    main()  # Execute example
  File "E:/pupil/pupil_invisible/pupil-invisible-monitor/src/invisible_data_get.py", line 50, in main
    for data in sensor.fetch_data():
  File "E:\Anaconda\envs\pupil_invisible\lib\site-packages\ndsi\sensor.py", line 308, in fetch_data
    value = self.formatter.decode_msg(data_msg=data_msg)
  File "E:\Anaconda\envs\pupil_invisible\lib\site-packages\ndsi\formatter.py", line 308, in decode_msg
    return IMUValue(content)
TypeError: __new__() takes 8 positional arguments but 18 were given

papr 08 March, 2021, 12:15:39

Could you please specify the code that you are running, including your changes? You can share it easily using https://gist.github.com

user-a98526 08 March, 2021, 12:19:07

This is the Gist: https://gist.github.com/yb525533341/a55d79d5f98253f7ce9d20757c479a81

papr 08 March, 2021, 12:22:44

Thanks. The changes should work. I will give it a try and get back to you once I've been able to reproduce the issue (probably tomorrow). Could you let me know which Python version and ndsi version you are running?

user-a98526 08 March, 2021, 12:25:30

Python version = 3.6.12, ndsi version = 1.3. Thank you for your patient help.

papr 08 March, 2021, 12:43:51

Looks like the application's requirements had not been updated for a while. I just updated them. In your copy of the pupil-invisible-monitor repository you can do:

git pull
python -m pip install .
papr 08 March, 2021, 12:39:45

Actually, you should upgrade to pyndsi 1.4. It looks like your issue was fixed already in this version https://github.com/pupil-labs/pyndsi/pull/57

nmt 08 March, 2021, 12:43:28

@papr and @user-a98526 python=3.6.12 with ndsi=v1.4 works for me (just tested)

nmt 08 March, 2021, 12:43:34

https://github.com/pupil-labs/pyndsi/releases/tag/v1.4

papr 08 March, 2021, 12:44:03

Thanks for checking!

user-a98526 08 March, 2021, 12:51:07

ndsi v1.4 works for me, thanks to @papr and @nmt for the help.

user-a98526 09 March, 2021, 09:06:25

Hi @papr and @nmt, sorry to disturb you again. I am going to use the real-time images and gaze points obtained by Pupil Invisible for intention object recognition. I run YOLOv3 for object detection after obtaining the world_image, and it takes about 0.8 s per frame. But there is a problem: after adding the object detection, the program stays stuck in the part of the function that receives images, and no gaze information is obtained. This is my gist; to simplify, I replaced the object detection process with a 0.8 second delay. https://gist.github.com/yb525533341/4f86123641a9b1aa18cc0c88c892abd6

papr 09 March, 2021, 09:16:30

So to clarify, if you remove the time.sleep(0.8) part, it works as expected. If you add it, the program gets stuck? Can you determine in which line?

user-a98526 09 March, 2021, 09:10:39

I guess the Pupil Invisible data comes in as a queue, and the 0.8 s delay slows down the rate at which the data is consumed.

user-a98526 09 March, 2021, 09:19:10

Yes, if I delete the 0.8s delay (or YOLO detection program), it will work normally. The code is on line 55.

papr 09 March, 2021, 09:20:03

Apologies, I meant where/in which line does the program get stuck if you add line 55 (time.sleep(0.8))

papr 09 March, 2021, 09:21:59

Also, I suggest moving that to line 64, right before the visualization. I would recommend avoiding any heavy processing in the fetch_data() loop.

user-a98526 09 March, 2021, 09:22:52

Sorry, in fact, the code is not stuck, but falls into a loop at lines 52-55.

papr 09 March, 2021, 09:24:23

Ah, ok, that makes sense. The fetch_data() loop returns data as long as the sensor has some available. If a loop iteration is slower than the rate at which new data arrives, the loop will keep going. Moving the YOLO/sleep out of there will solve your issue.
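A sketch of that restructuring, assuming a sensor object with a fetch_data() generator as in the gist; the sensor setup and the run_detection step are placeholders:

import time

def get_latest(sensor):
    # Drain fetch_data() completely and keep only the newest item,
    # so slow downstream processing cannot make the loop fall behind.
    latest = None
    for data in sensor.fetch_data():
        latest = data
    return latest

def main_loop(sensor, run_detection):
    # sensor: the ndsi world sensor, set up as in the gist (placeholder)
    # run_detection: the heavy step, e.g. YOLO or the 0.8 s sleep stand-in
    while True:
        frame = get_latest(sensor)
        if frame is None:
            time.sleep(0.01)  # nothing new yet
            continue
        run_detection(frame)  # heavy work happens outside fetch_data()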

user-a98526 09 March, 2021, 09:30:59

This is fascinating, and it does work. By the way, is there any other way of judging whether there is a wearer?

papr 09 March, 2021, 09:31:58

Not at the moment.

user-82e5bd 09 March, 2021, 20:25:44

Well, I have a question. I wanted to calibrate the device with this https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection link. However, when I follow the steps I do not see the option 'Vis Eye Video Overlay', but apparently it should be there.

papr 09 March, 2021, 20:29:00

It is called Eye Overlay, you can turn it on in the Plugin Manager

user-82e5bd 09 March, 2021, 20:27:19

This is in Pupil Player v3.1.16.

user-82e5bd 09 March, 2021, 20:33:01

Ok, but then I don't see the button to move the overlay etc., and if I go to the next step and do an offline pupil detection, it is post hoc. Is this because I also calibrated on the phone in the app?

user-82e5bd 09 March, 2021, 20:33:24

and more of the functions in the video are not the same as in my Pupil Player

papr 09 March, 2021, 20:34:03

The overlay is independent of the post hoc detection and calibration. You should be able to move the overlay by dragging it

user-82e5bd 09 March, 2021, 20:37:33

Ok, I understand that, but is the post-hoc calibration the same as the offline calibration? Because I don't have that function.

papr 09 March, 2021, 20:38:10

post-hoc == offline; real-time == online

papr 09 March, 2021, 20:37:57

Yes, it is the same

user-82e5bd 09 March, 2021, 20:43:14

Ok, then I need to try before asking! Thanks, and I will return with questions.

user-98789c 17 March, 2021, 10:27:23

I'm trying to visualize my pupil diameter recording in each of my experiment's trials, with the timestamps on a primary x-axis and the annotation labels on a secondary x-axis. I have written this script:

if 'TRI' in trial_label:

    fig, ax1 = plt.subplots()
    ax2 = ax1.twiny()

    fig.set_figheight(5)
    fig.set_figwidth(25)
    ax1.yaxis.grid()

    plt.xticks(ticks, labels)

    plt.plot(pupil_data_in_trial_eye0['pupil_timestamp'].loc[pupil_data_in_trial_eye0['trial'] == trial_label], pupil_data_in_trial_eye0['diameter_3d'].loc[pupil_data_in_trial_eye0['trial'] == trial_label])
    plt.plot(pupil_data_in_trial_eye1['pupil_timestamp'].loc[pupil_data_in_trial_eye1['trial'] == trial_label], pupil_data_in_trial_eye1['diameter_3d'].loc[pupil_data_in_trial_eye1['trial'] == trial_label])

    plt.legend(['eye0', 'eye1'])
    ax1.set_xlabel('Timestamps [s]')
    ax1.set_ylabel('Diameter [mm]')
    plt.title('Pupil Diameter in ' + str(trial_label))
    plt.grid(b=True)

and I get my trial visualization as shown, in which the primary x-axis timestamps are not correct.

Chat image

user-98789c 17 March, 2021, 10:30:25

I have also tried plotting my main data on ax1 and the ticks and labels on ax2, and also tried separating ticks and labels like

ax2.set_xticks(ticks)
ax2.set_xticklabels(labels)

but it still doesn't work.

papr 17 March, 2021, 10:38:18

You are mixing the pyplot and Axes APIs. I suggest sticking to the Axes API; it allows more explicit control over which object you are manipulating.

- plt.xticks(ticks, labels)
+ ax2.set_xticks(ticks)
+ ax2.set_xticklabels(labels)

- plt.plot(...)
+ ax1.plot(...) 

The issue here is that ax1 and ax2 might not share the same scale. I would use https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.axes.Axes.axvline.html?highlight=axes%20vline to visualize the labels directly in ax1 instead of using ax2.
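A condensed sketch of that suggestion, reusing the variable and column names from the snippet above (the annotation placement is one possible choice):

import matplotlib.pyplot as plt

fig, ax1 = plt.subplots(figsize=(25, 5))

sel0 = pupil_data_in_trial_eye0['trial'] == trial_label
sel1 = pupil_data_in_trial_eye1['trial'] == trial_label
ax1.plot(pupil_data_in_trial_eye0.loc[sel0, 'pupil_timestamp'],
         pupil_data_in_trial_eye0.loc[sel0, 'diameter_3d'], label='eye0')
ax1.plot(pupil_data_in_trial_eye1.loc[sel1, 'pupil_timestamp'],
         pupil_data_in_trial_eye1.loc[sel1, 'diameter_3d'], label='eye1')

# draw annotation events as vertical lines directly on ax1
for tick, lab in zip(ticks, labels):
    ax1.axvline(tick, color='gray', linestyle='--')
    ax1.annotate(lab, (tick, ax1.get_ylim()[1]), ha='center')

ax1.set_xlabel('Timestamps [s]')
ax1.set_ylabel('Diameter [mm]')
ax1.legend()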

user-98789c 17 March, 2021, 10:30:42

So.. suggestions are most welcome!

user-98789c 17 March, 2021, 11:00:05

This would, as I had also tried before, correct the ax1 timestamps, but it brings all the trial annotations into every single trial:

Chat image

papr 17 March, 2021, 11:02:44

This looks like there is an issue with the data selection then. Please make sure that you are only drawing lines for the events that you actually want to draw.

user-430fc1 18 March, 2021, 10:02:31

Hello, the Eye Overlay plugin keeps crashing Player when I try to enable it. I'm using the latest version of the software. Any ideas what's going on?

papr 18 March, 2021, 10:03:29

This is a Windows-only issue. The release was updated with a fix on Monday. Please use v3.2-16 instead of v3.2-15

user-d1072e 21 March, 2021, 00:28:47

Hello, this might be a naive question. I'm trying to find the gaze point in the world view video. After doing the experiment, I downloaded the raw video data and got a bunch of files. I think gaze_positions.csv is the one I should look at, but there are a few columns that I don't quite understand. I am thinking of using norm_pos_x, norm_pos_y or gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z to find the gaze point. Which one should I use? Or am I supposed to use data from different files? Thank you!

Chat image

papr 21 March, 2021, 16:04:20

These refer to the same position in different coordinate systems. Read more about them here: https://docs.pupil-labs.com/core/terminology/#coordinate-system See this tutorial on how to render gaze into scene video frames: https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
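A small sketch of the coordinate conversion, assuming a scene frame of known resolution; norm_pos has its origin in the bottom left, image pixels in the top left (the frame size below is a placeholder):

def norm_pos_to_pixels(norm_x, norm_y, frame_width=1280, frame_height=720):
    # frame size is a placeholder; use your scene video's resolution
    x_px = norm_x * frame_width
    y_px = (1.0 - norm_y) * frame_height  # flip the y axis
    return x_px, y_px

print(norm_pos_to_pixels(0.5, 0.5))  # center of the image -> (640.0, 360.0)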

user-d1072e 21 March, 2021, 16:33:12

Thank you so much for your support! There's one thing about the data I'm still confused about: on the website, it says the normalized data should be in the range 0 to 1, but I see a lot of negative values. Why is that?

Chat image

papr 21 March, 2021, 16:35:06

The 0-1 range refers to within-image coordinates. Negative values or values larger than 1 refer to coordinates outside of the image. This usually happens due to low quality data being mapped inaccurately. You can see that the confidence is quite low for these. I suggest filtering your data with a confidence threshold of at least 0.6 (default Player minimum data confidence)
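For example, a one-line filter along those lines, assuming the export has been loaded with pandas (the path is a placeholder):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # hypothetical export path
gaze = gaze[gaze["confidence"] >= 0.6]  # drop low-confidence samples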

user-d1072e 21 March, 2021, 16:41:30

I also have several data points with confidence exactly 1 but negative coordinates. Is this because of poor calibration, or something else? What should I do with these numbers? I think it doesn't matter anyway, since these values will be 0 in frame image coordinates.

Chat image

papr 21 March, 2021, 16:43:05

Since there is no visual context for them, I would drop them, as you suggested. A poor calibration is an explanation for such values.

user-d1072e 21 March, 2021, 16:55:38

OK, thanks a lot for your help. There's one more thing I want to ask about calibration. In the best practices, it says I should do a validation: https://docs.pupil-labs.com/core/best-practices/#validate-your-calibration How much accuracy error would be considered high, e.g. 5%, 10%, 15%? And if the error is high, I think I should do the validation again instead of redoing the whole calibration process as in the recommendation "you need to perform another calibration", right? Or maybe I misunderstand this sentence. Finally, does "use different points for validation than you used for calibration" mean that if I initially use the screen marker to calibrate, I have to use "Single Marker" or "Natural Features" to validate?

papr 21 March, 2021, 17:00:02

Feel free to share an example recording of you doing the calibration to get specific feedback. Please share the data with [email removed] Accuracy is usually calculated in degrees: https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy If the calibration error is high, the validation error will be, too. The validation does not change your calibration, though. If you have insufficient accuracy, please recalibrate. The minimum recommended accuracy depends on your experiment.

papr 21 March, 2021, 17:00:34

Screen marker calibration already uses different locations for the validation.

user-d1072e 21 March, 2021, 17:05:24

OK, I understand. Thank you so much!

user-98789c 22 March, 2021, 10:26:04

In https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py, the function send_trigger is meant only for sending annotations, right?

papr 22 March, 2021, 10:28:10

Technically, it can be used for every kind of Pupil Capture compatible message. Please note that custom topics will require custom receivers in order for the data to be stored during a recording.
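A condensed sketch of what that helper script does, assuming Capture runs locally on the default Pupil Remote port; the label is a placeholder:

import time
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

# ask Pupil Remote for the PUB port and connect a publisher to it
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))
time.sleep(1.0)  # give the connection time to establish

annotation = {
    "topic": "annotation",
    "label": "trial_start",    # placeholder label
    "timestamp": time.time(),  # ideally converted to Pupil time first
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))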

user-98789c 23 March, 2021, 11:09:32

Is it possible to use blink_detection.py or saccade_detection.py in my own script? Modules like pyglui or plugin are not recognized.

papr 23 March, 2021, 13:01:07

Saccade detection is not implemented. You can either extract the functional part of the plugin into an external script or install the source dependencies https://github.com/pupil-labs/pupil/#installing-dependencies

user-98789c 23 March, 2021, 13:11:29

Thank you. As I already have working Pupil software on my system, should I go through all the steps on this page? And it would work with Python 3.8, right?

papr 23 March, 2021, 13:12:38

You only need to install those dependencies that you need for running the code. You can run the bundled software independently of your source dependency.

user-98789c 23 March, 2021, 14:02:23

I installed pyglui manually and the corresponding error was resolved. Can you tell me how to find out what dependencies I need for 'plugin'?

papr 23 March, 2021, 14:02:59

The simplest way is to try importing it and check for ImportErrors, so trial and error basically. The code base is unfortunately a bit convoluted since it has grown a lot over time.

user-98789c 23 March, 2021, 14:10:28

Is there such a command available also for plugin, or a link explaining it?

python -m pip install git+https://github.com/pupil-labs/pyglui

papr 23 March, 2021, 14:12:11

No, as plugins were not designed to run standalone. But you can actually try installing the requirements:

python -m pip install -r requirements.txt

You can find this file in the top-level directory of the pupil source code.

papr 23 March, 2021, 14:13:14

Additionally, you might want to run this code before importing the plugin:

import sys
# TODO: adjust part
sys.path.append("<path to pupil repo clone>/pupil_src/shared_modules/")
papr 23 March, 2021, 14:13:30

This will make other plugin code available for import

user-cb599e 23 March, 2021, 18:00:39

Is there a way to estimate camera intrinsics for custom eye cameras? I tried starting the plugin as an eye plugin, but of course it didn't work. Is it something that can be guessed from the camera specs? Thanks!

papr 23 March, 2021, 18:01:55

You can select the custom eye cam as a scene camera and use the default plugin. If your camera has an IR filter, it is possible that the calibration pattern is not visible though.

user-cb599e 23 March, 2021, 18:07:09

Of course I had not thought about that super simple solution πŸ€¦β€β™‚οΈ . Thanks!

user-cb599e 23 March, 2021, 18:50:22

Does the calibration pattern need to be placed at the same distance from the camera as the eye? At the focal length? Would it be better to print the calibration pattern at an adapted scale and place it under the camera?

papr 23 March, 2021, 18:51:33

You can show the pattern in window mode and resize the window. This will scale the pattern down. Alternatively, printing a scaled-down version should work, too.

user-98789c 24 March, 2021, 10:33:07

What does the base_data column in blinks.csv represent?

papr 24 March, 2021, 10:33:53

Blinks are based on a series of pupil data points. The base_data field refers to which of these are affected by the blink.

user-98789c 24 March, 2021, 10:34:30

Do you suggest rejecting the blinks using their timestamps or their frames?

papr 24 March, 2021, 10:39:22

I would (always) recommend time-based over frame-index-based processing. The cameras are synced in time, not by their frame rate. Frame indices are therefore only valid within a specific camera stream and one has to take extra care that indices from different cameras are not being mixed up

user-98789c 24 March, 2021, 10:40:01

Great, thanksss

user-98789c 24 March, 2021, 11:01:50

The confidence column in blinks.csv is the confidence with which the blink is detected, right? And it's not the confidence with which the pupil diameter is detected? I guess I got confused because of the blink confidence measures in my recording, which do happen to be more than the blink confidence threshold.

papr 24 March, 2021, 11:04:02

blink confidence is the mean of the absolute filter response during the blink event, clamped at 1. https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/blink_detection.py#L418-L422

user-98789c 24 March, 2021, 17:29:40

I'm trying to record an experiment using PsychoPy while remotely controlling Pupil Core. The PsychoPy coder complains about the line pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port)). The syntax checker doesn't accept this line and gives the error: /* Syntax Error: Fix Python code */ When I remove the .format(pub_port) part, the error resolves, but obviously the remote connection isn't established. Any idea what's wrong?

papr 24 March, 2021, 17:43:37

I have never seen this issue before. Which python version are you using?

user-98789c 25 March, 2021, 15:25:02

Python 3.8 (64 bit)

I changed the line to pub_socket.connect("tcp://127.0.0.1:" + str(pub_port)) and it works now.

papr 25 March, 2021, 15:31:35

But I am happy to see that you were able to work around the issue

papr 25 March, 2021, 15:28:08

That line works for me when using Python 3.8. I feel like there might be a hidden character somewhere that causes the syntax error

user-98789c 25 March, 2021, 15:33:36

I'll see if I can find where the error comes from.

user-98789c 26 March, 2021, 08:58:35

I am rejecting blinks from my pupil size recordings like this:

data_high_conf_df['blink'] = None
blink_id = blinks_pd_frame['id']

for id in blink_id:
    this_blink_start = blinks_pd_frame['start_timestamp'].loc[blinks_pd_frame['id'] == id]
    this_blink_start = this_blink_start.iat[-1]
    this_blink_end = blinks_pd_frame['end_timestamp'].loc[blinks_pd_frame['id'] == id]
    this_blink_end = this_blink_end.iat[-1]
    blink_boolean_mask = data_high_conf_df['pupil_timestamp'].between(this_blink_start, this_blink_end)
    data_high_conf_df['blink'].loc[blink_boolean_mask] = 'yes'
    blink_mask_not_available = data_high_conf_df['blink'].isna()
    data_high_conf_df = data_high_conf_df.loc[blink_mask_not_available]

and it works fine. I just wanted to make sure if it seems fine, also theoretically?

papr 26 March, 2021, 09:12:56

Hey,

Yes, generating a boolean mask for the pupil data in order to remove blinks makes sense.

I would like to add some suggestions that should help simplify the code:

- use https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.itertuples.html#pandas.DataFrame.itertuples instead of iterating over the ids and making .loc[df.id == id] subselections every time. This also avoids the use of .iat[]
- use bool type for the data_high_conf_df['blink'] column. You can do that by assigning False initially, and True instead of 'yes'. Then you can use the following line to filter (see the sketch below):

data_high_conf_df = data_high_conf_df.loc[~data_high_conf_df.blink]

(~ is the boolean invert operator; https://docs.python.org/3/library/operator.html#mapping-operators-to-functions)

It makes sense that moving the last line out of the loop does not change anything. With each loop iteration you are growing your blink_mask_not_available. In the code above, you first remove section a, then a and b, then a, b, and c. If you move the line out of the loop, you remove a, b, and c all at once. (a, b, c are blink ranges in this example.)
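Put together, a sketch of the simplified version, using the same frame and column names as above:

data_high_conf_df['blink'] = False

for blink in blinks_pd_frame.itertuples():
    # one row per blink; attribute access replaces the .loc/.iat lookups
    mask = data_high_conf_df['pupil_timestamp'].between(
        blink.start_timestamp, blink.end_timestamp
    )
    data_high_conf_df.loc[mask, 'blink'] = True

# keep only rows that are not part of any blink
data_high_conf_df = data_high_conf_df.loc[~data_high_conf_df.blink]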

user-98789c 26 March, 2021, 09:01:58

or do I need to append all the non-blink data that I get, to a new data frame? 🧐

user-98789c 26 March, 2021, 09:08:21

I just moved the last two lines out of the loop, and it seems to give the same results!

papr 26 March, 2021, 09:18:08

(I usually would not recommend iterating over rows manually, but this case is an exception as .between() is not flexible enough.)

user-98789c 26 March, 2021, 09:24:08

Hey! perfect, thanks! I learn a lot from you 😊

papr 26 March, 2021, 09:26:27

In the long run, I would be happy if we could extract some pupil tutorials (https://github.com/pupil-labs/pupil-tutorials) from your work. πŸ™‚ I think some of your implemented steps, e.g. pupil data rejection based on blinks, could be highly reusable for others.

user-98789c 26 March, 2021, 09:28:05

Sure! I would be happy to prepare anything that would be helpful πŸ‘

papr 26 March, 2021, 09:29:37

I think we should go over this once your project is finalized. I don't want to side track you. πŸ™‚

user-98789c 26 March, 2021, 09:32:46

We can definitely do that! I will let you know when it's almost finalized. Actually, as the first step of my experiments, I am almost done designing a pipeline for the use of Pupil Labs devices at our lab, as a base for all our future experiments. I'd be glad to go over it with you too.

papr 26 March, 2021, 09:34:41

Sounds great! Let's move this into a private conversation for more detailed organization.

user-5e8fad 26 March, 2021, 10:49:41

Hello everyone. We recently bought an Invisible device. I have two questions regarding the LSL plugin. 1. The GitHub repo says that the coordinates are normalized to [0-1]; however, the recordings contain much larger values (e.g. 900 or so). Why does this happen? 2. Is there any way to also push the video through LSL? Thanks in advance πŸ™‚

user-b14f98 26 March, 2021, 14:19:15

Hello! I'm having trouble installing dependencies on my Win10 machine.

user-b14f98 26 March, 2021, 14:19:40

(RitNet) D:\PupilSource\pupil>pip install -r requirements.txt
Collecting pyre@ https://github.com/zeromq/pyre/archive/master.zip
  Using cached https://github.com/zeromq/pyre/archive/master.zip
Ignoring cysignals: markers 'platform_system != "Windows"' don't match your environment
Ignoring av: markers 'platform_system != "Windows"' don't match your environment
ERROR: av-0.4.6-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform.

user-b14f98 26 March, 2021, 14:20:59

Is this a common issue, and is there a known workaround?

user-b14f98 26 March, 2021, 14:25:25

I had success with 'pip install av'

user-b14f98 26 March, 2021, 14:26:04

but the same issue then with uvc

user-b14f98 26 March, 2021, 14:50:14

@papr ever see this before?

user-b14f98 26 March, 2021, 14:50:20

Python version is 3.7

user-b14f98 26 March, 2021, 14:52:01

Same error for pyav, pyuvc, pyglui

user-b14f98 26 March, 2021, 14:52:19

Going to try with Python 3.6

user-b14f98 26 March, 2021, 14:55:04

Yep, that was it.

user-b14f98 26 March, 2021, 14:57:33

and only now did I see the text "NOTE: Currently our build process for WIndows does not yet support Python 3.7 as pyaudio does not yet have prebuild wheels for Windows available for Python 3.7. You are thus highly encouraged to use the latest stable version of Python 3.6."

user-b14f98 26 March, 2021, 15:14:12

However, your link to the shared FFMPEG binaries is no longer valid.

user-b14f98 26 March, 2021, 15:15:01

...and the latest build does not have all the required DLLs (it's missing postproc-55.dll). I do see that there is a GitHub package for the 4.3 shared binaries, but I'm new to GitHub packages.

user-17da3a 28 March, 2021, 18:17:41

Hi guys! I am running a visual experiment generated with MATLAB and Psychtoolbox. I need to synchronize MATLAB and Pupil Companion, i.e. send the exact timestamps of events from MATLAB to the Pupil Invisible recording over the local network. Only this way can we be precise enough with the timestamps, since we record in a dark condition and our experiment suffers from low temporal precision because of it. If anybody has done this before and would be willing to give me some hints or info on the procedure, I would be very grateful. Thanks a lot in advance!

wrp 29 March, 2021, 03:25:20

@user-17da3a have you already looked at: https://docs.pupil-labs.com/developer/invisible/#events ?

user-1efa49 29 March, 2021, 03:44:32

@wrp How would this be executed through Matlab?

wrp 29 March, 2021, 03:45:56

One would need to write a matlab script to communicate with Pupil Invisible Companion similar to the helper script we wrote for Pupil Core + Matlab here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab

user-1efa49 29 March, 2021, 03:48:07

@wrp the zmq solution is not trivial - fyi.

papr 29 March, 2021, 12:27:42

One would also need to use a Matlab-ZRE* implementation to communicate with Pupil Invisible Companion.

* https://rfc.zeromq.org/spec:36/ZRE/
The linked Python example uses https://github.com/zeromq/pyre

wrp 29 March, 2021, 03:51:53

Getting zmq set up for matlab, unfortunately, is non-trivial that's true.

user-141bcd 29 March, 2021, 12:23:57

hello! @papr, quick question & sorry for cross-posting. I originally asked this in the "core" channel, but it fits better here: in my exports from Pupil Player, the data in pupil_positions.csv has a few fewer observations than the corresponding eye videos have frames. More precisely: eye0.mp4 has 99598 frames, but pupil_positions.csv only has 99576 observations for eye0. Of course, 22 frames out of ~100k is not a lot. However, I wonder where they "get lost". I'm trying to get a mapping between the observations and the video frames. Any ideas? Thanks already! [Bg info: data was collected with the 200Hz VR/AR add-on and exported with Pupil Player v1.22.]

papr 29 March, 2021, 12:28:41

Apologies for the delayed response. Are you talking about the original eye0.mp4 or the exported one?

user-141bcd 29 March, 2021, 12:31:22

no problem. thanks. Should be the original. Tbh, I wasn't aware that a video could also be exported.

papr 29 March, 2021, 12:36:34

In this case a difference is to be expected. Player only exports data inside the trim marks (first and last world video frame by default). It is likely that the original eye video is slightly longer.

user-141bcd 29 March, 2021, 12:34:30

ah, I see it now in v3.2.16 - I think v1.22 (that I used to work with), didn't have it - or maybe I overlooked it. However, I'm talking about the original video.

user-141bcd 29 March, 2021, 12:43:16

ah ok. Is it a "fair" assumption that the 22 missing frames are cut off approximately equally from the beginning and the end of the recording (i.e. 11 from the start and 11 from the end)?

papr 29 March, 2021, 12:46:56

No, I would not make any assumptions about this. Instead, I would match the timestamps in eye0_timestamps.npy to the timestamps exported in pupil_positions.csv. Similar to this

for ts, frame in eye_video:
  if ts not in exported_pupil_data:
    continue
  process(frame, exported_pupil_data[ts])
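A sketch of that matching with the actual files, assuming the recording and export paths below are placeholders; note that exact float comparison only works if the timestamps are read at full precision, so matching with a tolerance may be needed if the CSV values were rounded:

import numpy as np
import pandas as pd

video_ts = np.load("recording/eye0_timestamps.npy")               # hypothetical path
pupil = pd.read_csv("recording/exports/000/pupil_positions.csv")  # hypothetical path
exported_ts = set(pupil.loc[pupil["eye_id"] == 0, "pupil_timestamp"])

for frame_index, ts in enumerate(video_ts):
    if ts not in exported_ts:
        continue  # frame lies outside the export trim marks
    print(frame_index, ts)  # this video frame has exported pupil data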
user-141bcd 29 March, 2021, 12:51:04

ah yes, good point. I was just working with the csv exports as I'm doing my analyses in R. Didn't think of looking into the .npy files. Thx! So eye0_timestamps.npy should have the same number of observations as eye0.mp4 has frames?

papr 29 March, 2021, 12:52:15

If you export the eye videos, you also get a csv file for the exported eye video frame timestamps.

> So eye0_timestamps.npy should have the same number of observations as eye0.mp4 has frames?

Correct.

papr 29 March, 2021, 12:53:12

Also, you might need to be careful about how you extract the frames. Some software, e.g. OpenCV's video reader, may return an incorrect number of frames.

papr 29 March, 2021, 12:54:23

This tutorial contains an ffmpeg command to extract the correct number of frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb

user-141bcd 29 March, 2021, 12:54:14

kk, thx. I checked with ffmpeg

user-141bcd 29 March, 2021, 12:53:32

yea, I see it now with the "new" exports. As I said, for the original ones, I did not export the eye videos (I think it was not implemented yet).

papr 29 March, 2021, 12:53:54

Possible, yes

user-141bcd 29 March, 2021, 12:55:40

last question: exporting was quite bothersome (it's ~250 recordings). Is there an option (or scripting example) for bulk exporting?

papr 29 March, 2021, 12:57:19

Unfortunately, there are none built-in. There are some community projects that read the intermediate data files in bulk and export the data to CSV https://github.com/pupil-labs/pupil-community

user-141bcd 29 March, 2021, 12:59:13

great, found https://github.com/tombullock/batchExportPupilLabs and will try that. Thanks for the awesome service again!

user-d4549c 29 March, 2021, 19:54:36

Hello there, a few weeks back I was able to connect Pupil Capture with the Pupil Invisible app, so I was getting all the data from Pupil Invisible while on the same network. Now (I moved) I can't connect to Pupil Capture; the Video Source doesn't give me the option "One Plus 6". The only difference between the previous successful connection and now is that back then I was connected to my PC via ethernet and now I am on the network via WiFi. Do you think this could be the problem? Thanks for the answer. Boris

papr 30 March, 2021, 07:39:29

Could you share the Home directory -> pupil_capture_settings -> capture.log file with us? It is likely, that Capture chooses an incorrect network interface for communication.

user-d4549c 01 April, 2021, 09:43:55

Thanks for the answer, but I found out that my firewall was blocking the connection; I am using third-party antivirus software.

End of March archive