πŸ‘ core


user-1a6a43 01 February, 2021, 06:29:47

is there a dedicated youtube series/documentation for how to actually plug in eye tracking information to projects

user-1a6a43 01 February, 2021, 06:29:49

in code

user-1a6a43 01 February, 2021, 06:30:09

I'm interested in using it in MATLAB, but anything would be helpful

user-0ee84d 01 February, 2021, 07:40:57

Maybe this? https://github.com/pupil-labs/pupil-helpers/tree/master/matlab

user-b3ad25 01 February, 2021, 08:57:51

Hi, is it possible to use the pye3d detector without using Pupil Capture? Our project needs its own UI to visualize both the Pupil eye videos and other sensor data. Thank you

papr 01 February, 2021, 09:09:21

In a stand-alone context, we allow the usage of pye3d only for (1) academic use or (2) commercial use in combination with the Pupil Core headset.

user-b3ad25 01 February, 2021, 09:31:46

Our project is for academic research. Are there any examples/documentation of the stand-alone usage for pye3d-detector?

user-0ee84d 01 February, 2021, 13:10:49

For now I'm using the Pupil Labs marker to calibrate the device... I suppose the calibration takes into account the 2d image points and the 3d points of the marker... I would like to know if it's possible to replace the marker with my own calibration technique (which also gives me the image points and the 3d points)?

papr 01 February, 2021, 16:25:33

So this is the base choreography class https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/base_plugin.py#L99

It is responsible for aggregating pupil and reference data during calibration/validation. After the calibration/validation is done, it publishes the gathered data via its on_choreography_successfull() function.

~~Pupil data is gathered automatically already.~~ Pupil data is best collected during recent_events() https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/hmd_plugin.py#L68-L72

The most important part is storing the reference data in .ref_list. See this example from the HMD calibration where Unity sends VR reference data to Capture: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/hmd_plugin.py#L84

papr 01 February, 2021, 13:26:59

Ah, understood. Yes, it is possible since version 2.0. Let me look up the necessary class to subclass in a plugin.

user-0ee84d 01 February, 2021, 15:32:07

That would be great 😉 thank you!

user-d8853d 01 February, 2021, 13:46:34

Hi Pupil Labs team, I was reading your paper and I came across the system latency figure. What I understood from that figure is that gaze mapping only happens on every 4th frame, and the other three frames are ignored? I am confused because it shows a latency of 0.119 s between gaze estimations, while the world pipeline takes about 0.124 s. That would mean gaze mapping results are available only 0.124 s apart (a gaze mapping result for every 4th frame), and the world frames in between are ignored and only available for offline mapping. Did I understand it correctly?

Chat image

papr 01 February, 2021, 14:25:34

Each eye video frame generates a pupil datum. The pupil datum is sent to the world process. The world process queues all received pupil data, passes it to the gaze mapper, and publishes the resulting gaze [1]. The gaze mapper tries to match binocular pupil pairs with high confidence. [2]

[1] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_data_relay.py#L32-L41 [2] https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/matching.py#L58-L102

papr 01 February, 2021, 14:17:36

Hey, could you be more specific about which paper you are referring to? It is either not ours or out of date. 🙂

user-d8853d 01 February, 2021, 14:20:10

From the paper: Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction

papr 01 February, 2021, 14:21:08

Yeah, that paper is partially out of date. The 2d detection algorithm description is still accurate.

user-d8853d 01 February, 2021, 14:20:40

https://dl.acm.org/doi/pdf/10.1145/2638728.2641695

user-d8853d 01 February, 2021, 14:28:42

Thanks @papr for the explanation. But what does the 0.119 s here refer to? I thought pupil data would be mapped to the nearest world frame as soon as it becomes available, by comparing the timestamps. So why the wait of 0.119 s? With an exposure time of 0.033 s, it waited (0.119 - 0.033) s.

papr 01 February, 2021, 14:45:13

To be honest, I am not sure where that number comes from. I think it is important to understand the different components contributing to the pipeline delay:

1. Frame time (time it takes to expose the image + compress it)
2. Frame transmission (time it takes to transfer the image data via USB to PC)
3. [eye] Pupil detections (2d + 3d)
4. [eye] publishing results onto IPC
5. [world] queuing all available pupil data
6. [world] run pupil data matching
7. [world] gaze estimation
8. [world] publishing gaze results onto IPC

Delays 3-8 do not play a role for recorded data, as data is processed based on timestamps recorded during 1 (linux + mac) or after 2 (windows).

The world process always visualizes the most recent gaze and the most recent world video frame. It does not buffer frames.

user-d8853d 01 February, 2021, 14:49:10

This makes a lot of sense. I got confused because of those numbers. Thank you @papr. As always, a top-notch explanation 🙂

user-6b3ffb 01 February, 2021, 16:01:40

Is there any example of how to read JPEG frames from the Frame Publisher? I changed the frame format in the Python example but it doesn't work.

papr 01 February, 2021, 16:17:52

I think you can use pillow to read jpeg data https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.fromarray
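For reference, a rough sketch of what that could look like (this assumes the Frame Publisher plugin is enabled with its format set to JPEG, and Pupil Remote listening on localhost:50020; variable names are illustrative):

import io
import zmq
import msgpack
from PIL import Image

ctx = zmq.Context()

# ask Pupil Remote for the SUB port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# subscribe to world camera frames
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    # the raw image bytes arrive as extra message frames after the msgpack payload
    topic, payload, *extra_frames = sub.recv_multipart()
    meta = msgpack.unpackb(payload)
    if meta.get("format") == "jpeg":
        image = Image.open(io.BytesIO(extra_frames[0]))  # decode the JPEG bytes
        print(topic.decode(), image.size)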

papr 01 February, 2021, 16:30:22

Other functionality of our concrete choreography implementations: Circle marker detection + visualization

user-0ee84d 01 February, 2021, 16:46:02

Exactly what I was looking for!!!! Thanks a ton 🙌

user-98789c 02 February, 2021, 11:46:29

I have Pupil Capture v 3.0.7. How can I activate the Marker Tracking plugin? The link here doesn't work: https://github.com/pupil-labs/pupil/blob/marker_tracking/pupil_src/shared_modules/square_marker_detect.py

papr 02 February, 2021, 11:48:10

The plugin is named "Surface Tracker". The linked file implements our legacy marker detection. I strongly advise using AprilTags instead of our legacy markers, as stated in our documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-98789c 02 February, 2021, 11:49:24

Ah ok, I thought it's something new. I have been using Surface Tracker a lot already.

nmt 02 February, 2021, 12:29:44

Look under the 'Fixation Filter Thresholds' heading of best practices

user-7daa32 02 February, 2021, 13:49:59

Thanks. The maximum dispersion and minimum duration can be set posthoc on pupil player to decide preferences. I am currently going through these links. Thanks

user-7daa32 02 February, 2021, 13:54:13

It looks like we have to set up our timer to guide us on which filter settings to use. If the participant gazes for 400 ms, the setting shouldn't be less than that.

user-b3ad25 02 February, 2021, 14:24:29

Hi, I've run into a problem while importing Detector2D from pupil_detectors. Any suggestions? Thank you

Chat image

papr 02 February, 2021, 15:29:16

What is the output of otool -L <path to detector_2d.cpython-38-darwin.so>?

papr 02 February, 2021, 15:27:59

It looks like the linked Opencv lib cannot be found in your anaconda environment.

user-b3ad25 02 February, 2021, 15:32:04

here is the result

Chat image

papr 02 February, 2021, 15:36:35

This is my output. As you can see, opencv is linked to absolute paths. Not sure why this is different for you, but I expect this to be due to Anaconda.

detector_2d.cpython-38-darwin.so:
    /usr/local/opt/opencv/lib/libopencv_core.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/local/opt/opencv/lib/libopencv_highgui.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/local/opt/opencv/lib/libopencv_videoio.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/local/opt/opencv/lib/libopencv_imgcodecs.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/local/opt/opencv/lib/libopencv_imgproc.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/local/opt/opencv/lib/libopencv_video.4.2.dylib (compatibility version 4.2.0, current version 4.2.0)
    /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 400.9.4)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
user-b3ad25 02 February, 2021, 15:39:49

Thank you, I'll check my Anaconda env, maybe this is because I install opencv from pip.

papr 02 February, 2021, 15:42:23

The detectors need opencv headers and libraries to link against. These do not come with the pip installed opencv (at least not the headers).

papr 02 February, 2021, 15:43:18

In other words, the detectors would not have built if you did not have the requirements installed. But they did. So the necessary files are somewhere on your system but anaconda is not finding them.

user-b3ad25 03 February, 2021, 11:23:51

Thank you 👍. I did find them under Anaconda, at env-name/lib/python3.8/site-packages/../../. As I'm using a Mac, I applied install_name_tool -change [email removed] to an absolute path manually. One can also use install_name_tool -add_rpath to replace [email removed]

user-1a6a43 03 February, 2021, 06:52:34

hey guys, how many samples of data with a confidence above 0.8 should I gather before averaging and using them?

user-1a6a43 03 February, 2021, 06:53:23

I want to measure pupil dilation and eye vergence, but want to make sure the data is accurate. So what steps should I take in analyzing what's output by the system?

papr 03 February, 2021, 11:25:34

Do I understand it correctly, that you were able to fix the problem by replacing [email removed] with absolute paths and it is now working as expected?

user-b3ad25 03 February, 2021, 11:35:56

Yes, it works now

user-0ee84d 03 February, 2021, 12:46:55

@papr why is it that subscribing to "gaze" slows down the application extremely? (6-12 seconds of delay)

papr 03 February, 2021, 13:18:30

Which application slows down? Pupil Capture or your own?

user-0ee84d 03 February, 2021, 14:12:37

My own

papr 03 February, 2021, 13:21:40

If you are rendering every gaze point one-by-one, it is expected to become slow as the rendering takes much more time (60hz for many monitors) than the incoming signal (up to 200hz).

user-0ee84d 03 February, 2021, 14:13:06

How do I limit it?

user-98789c 03 February, 2021, 13:23:46

Is there a way to get the pupil core time using a function in Matlab?

papr 03 February, 2021, 13:26:17

Yes, using the pupil remote t command. Please be aware that there is transmission delay which can be estimated by measuring how much time the command takes. See this hmd-eyes implementation: https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/TimeSync.cs#L66-L71

Once you know the offset between your own and Pupil clock time, you can apply it to new timestamps from your own clock to get it in Pupil time.
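Conceptually, the same idea in a rough Python sketch (the Matlab version would follow the same steps; local_clock below is just a stand-in for whatever clock your experiment uses, and Pupil Remote is assumed on localhost:50020):

import time
import zmq

local_clock = time.perf_counter  # placeholder for your experiment clock

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# measure the round trip of the 't' request to estimate the transmission delay
t_before = local_clock()
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())
t_after = local_clock()

# assume the reply was generated halfway through the round trip
offset = pupil_time - (t_before + t_after) / 2.0

# later: convert any local timestamp into Pupil time
my_event_in_pupil_time = local_clock() + offset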

papr 03 February, 2021, 14:19:27

Using two loops: a render loop and an inner processing loop.

while program_should_run:
  while socket.has_data:
    data = process_data_from(socket)
  render(data)

Consider this PI network example: https://docs.pupil-labs.com/developer/invisible/#network-api

while network.running: is the outer render loop with the cv2 calls at the end. for sensor in SENSORS.values(): ... for data in sensor.fetch_data(): is the inner loop processing data, storing it in the world_img and gaze variables.
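Transferred to the Core network API, the same two-loop pattern could look roughly like this (a sketch, not a drop-in script; it assumes the default Pupil Remote port and simply prints instead of rendering):

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{req.recv_string()}")
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

def drain_latest(sub_socket):
    # inner loop: empty the queue, keep only the newest datum
    latest = None
    while sub_socket.get(zmq.EVENTS) & zmq.POLLIN:
        topic, payload, *_ = sub_socket.recv_multipart()
        latest = msgpack.unpackb(payload)
    return latest

while True:  # outer "render" loop
    gaze = drain_latest(sub)
    if gaze is not None:
        print(gaze["norm_pos"])  # stand-in for the slow rendering/processing step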

user-0ee84d 03 February, 2021, 15:08:19

It doesn't work... it throws an error at the dependency level... ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

papr 03 February, 2021, 15:08:39

Please try to reinstall numpy

user-0ee84d 03 February, 2021, 15:16:03

Now it says Devices with outdated ndsi version found. Please update these device

papr 03 February, 2021, 15:18:07

Apologies for not being clear. By "consider this example" I meant looking at it, not running it. The example is meant for streaming Pupil Invisible scene video and gaze data. It uses a different API than Pupil Core. Nonetheless, it showcases how to use the two loops to display the most recent data.

user-0ee84d 03 February, 2021, 15:21:21

The documentation is extremely confusing and is scattered all over the place in bits and pieces.

papr 03 February, 2021, 15:28:46

I agree that the documentation could be structured in a more user-friendly way. This specific issue is more of a software architecture problem, though, and is not necessarily related to the network API itself. If you want to read more about how the network API works under the hood, check out this part of the ZMQ guide about the PUB-SUB pattern: https://zguide.zeromq.org/docs/chapter5/

user-0ee84d 04 February, 2021, 11:51:38

9 gaze samples were being dispatched per frame. Persisting a single gaze sample per frame and ignoring the rest solves the issue. 🙂 thank you

user-1a6a43 03 February, 2021, 17:31:02

Could you please provide a skeleton loop for what the reading-from-Matlab portion of pupil-helpers would look like? What I've done is create a while loop that breaks when the escape key is pressed and that checks each datum against its confidence threshold. When 10 pieces of information with a confidence above 0.8 have been gathered, the array with the data is averaged and saved. The problem is, this results in the program falling behind at times. What should I do?

papr 03 February, 2021, 17:33:47

I have not sufficient experience in Matlab to do that, unfortunately. I can only provide conceptual help. Maybe @user-98789c can help you out here. They are also using the network api with Matlab.

user-1a6a43 03 February, 2021, 17:37:21

any conceptual help would also be greatly appreciated. For example, I am taking an array of size 10, and averaging out the pieces of data that fill it as data is being gathered (that has a confidence threshold above 0.8). after 10 pieces of data are gathered, I average out the values in the array, and use the information. Is that conceptually fine? note, I'm gathering information using the 2d pupil capture software.

papr 04 February, 2021, 12:09:19

That would mean that you average data over time. You could think about this approach as if it was a simple lowpass filter with varying window size. One would only apply such a methodology if one wants to remove high-frequency gaze. And even then, one would probably use a lowpass filter that does not introduce lag, e.g. the OneEuro filter.

I might be able to give concrete processing recommendations if you share more details about your goal or issues that you are trying to solve.
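To make the trade-off concrete, here is a tiny illustrative sketch; this is not the OneEuro filter, just a generic confidence-gated exponential low-pass, which does introduce some lag:

def make_lowpass(alpha=0.2, min_confidence=0.8):
    # exponential smoothing; smaller alpha = smoother output but more lag
    state = {"value": None}

    def update(sample, confidence):
        if confidence < min_confidence:
            return state["value"]  # ignore low-confidence samples
        if state["value"] is None:
            state["value"] = sample
        else:
            state["value"] = alpha * sample + (1 - alpha) * state["value"]
        return state["value"]

    return update

smooth = make_lowpass()
# feed it (value, confidence) pairs as they arrive from the network API
print(smooth(3.1, 0.95), smooth(3.3, 0.97), smooth(2.0, 0.4))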

user-1a6a43 03 February, 2021, 17:38:45

furthermore, in my loop I wait until another 10 pieces of viable data is gathered before I average out the array, effectively meaning that each piece of data is used only once. should I change this? more importantly, am I missing the picture entirely?

user-7daa32 04 February, 2021, 17:30:24

Is this now what the pupil detector looks like in v3.7?

Chat image

papr 04 February, 2021, 17:38:33

There is no version 3.7. I guess you are referring to 3.0, correct? What you see is the new 3d detector "algorithm" view, correct. If you are wondering about the different circle colors and their meaning you should read https://docs.pupil-labs.com/developer/core/pye3d/

user-7daa32 04 February, 2021, 17:51:59

I have read it before. I will go through it again. Thanks. Another question, please: I still haven't solved the issue of the annotation notification showing up multiple times. I just can't avoid doing a prolonged press on the keyboard key.

user-7daa32 04 February, 2021, 17:52:35

Can it be done in such a way that a prolonged press on a keyboard key doesn't increase the number of notifications?

user-1a6a43 05 February, 2021, 07:27:06

I'm sorry for the late reply. Here's a quick summary of the problem and what we are trying to accomplish. We have an issue where the data we are collecting through the TCP connection falls further and further behind real time. The way this code works is that 10 samples' worth of data with 0.8 confidence are gathered and written to a file. A function reads the saved file in real time and averages the results internally. The while loop runs as fast as Matlab physically allows, and yet the data output is falling behind. There is some backlog of data being stored in the buffer that keeps growing, and the issue is that I need to know that the information I'm receiving is current, not from 10 seconds ago.

papr 05 February, 2021, 08:53:47

Writing files is very slow. I guess that that part is slowing down your loop substantially.

If you are falling behind regardless, then Matlab is maybe the wrong tool for your use case. :-/

user-1a6a43 05 February, 2021, 07:27:37

So I have a few questions, but would also appreciate thoughts and recommendations based on the above description.
#1) Can we query from the top rather than from the bottom?
#2) Can we purge the buffer to keep it from filling up with old data?
#3) How can we query faster in order to get all of the data? Can we grab all the data at once, or does it have to be sample by sample?

papr 05 February, 2021, 08:50:37

1) No, the sub socket implements a first-in, first-out queue.
2) Yes, by calling recv() on it and discarding the data immediately.
3) Only possible sample by sample.

user-1a6a43 05 February, 2021, 07:28:19

I think the issue is that the buffer fills up and we fall behind in time regardless of saving to disk or packing our array.

user-e8e825 05 February, 2021, 14:47:16

Is there a way to extract data from pupil capture without using zmq?

papr 05 February, 2021, 15:08:30

In the end, you are free to write your own plugin that uses whatever method you like to publish the data, similar to the hololens relay.

papr 05 February, 2021, 14:51:30

You can use LSL to receive data from Capture using this plugin https://github.com/labstreaminglayer/App-PupilLabs/tree/v2.1/pupil_capture

Alternatively, you can use the built-in HoloLens relay to receive the data via a UDP socket

papr 05 February, 2021, 14:52:04

I have not used the latter for a while though. Might be that it is not working as expected.

user-e8e825 05 February, 2021, 14:54:46

Any other option as this is under LGPL?

papr 05 February, 2021, 14:59:11

Is licensing holding you back from using zmq, too?

user-e8e825 05 February, 2021, 14:59:38

Yeah

papr 05 February, 2021, 15:01:09

Apologies, but this confuses me. zmq is ~~BSD~~ MPL licensed. You do not get much more freedom than that, AFAIK. Am I overlooking something?

https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2)

user-e8e825 05 February, 2021, 15:40:56

Is it MPL or BSD? When i looked at pyzmq i think it says BSD?

papr 05 February, 2021, 15:45:43

pyzmq is BSD https://github.com/zeromq/pyzmq/blob/master/COPYING.BSD czmq is MPL https://github.com/zeromq/czmq/blob/master/LICENSE

I thought pyzmq were bindings for czmq, but they are for libzmq.

libzmq, it looks like, is licensed under LGPL (with a static linking exception) http://wiki.zeromq.org/area:licensing

Looks like they are trying to transition to MPL as well https://github.com/zeromq/libzmq/issues/2376

user-e8e825 05 February, 2021, 15:51:53

Does using pyzmq come under LGPL or is it solely BSD? We don't want to use LGPL

papr 05 February, 2021, 15:54:30

I guess this leaves you with the custom plugin choice.

papr 05 February, 2021, 15:52:19

Looks like they are transitioning as well https://github.com/zeromq/pyzmq/issues/1039

user-e8e825 05 February, 2021, 16:02:35

Can using czmq be an alternative?

papr 05 February, 2021, 16:03:18

Technically, that is a possibility, too.

user-e8e825 05 February, 2021, 16:18:39

My apologies, but have you seen any instances where data is extracted from Pupil Core with other message brokers?

papr 05 February, 2021, 16:19:14

Only those that I have mentioned already.

user-e8e825 05 February, 2021, 16:19:28

Thanks

user-1a6a43 05 February, 2021, 17:03:48

Thank you for the reply! I have a quick followup: based on filter_messages.m, there is no way to clear the buffer or discard the information. What command should I use for that?

papr 05 February, 2021, 17:05:13

You need to call recv_message() repeatedly to clear the buffered messages https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/filter_messages.m#L68

user-1a6a43 05 February, 2021, 17:07:35

Oh I see, so the recv_message function clears it when being called. Alright, so every iteration of my while loop does [topic, note] = recv_message(sub_socket, 2048); which I'm guessing should clear the buffer. If that's not the problem, then the issue is probably in the way the while loop also writes to a file every 10 iterations

papr 05 February, 2021, 17:07:55

It removes a single message from the queue! (in case that was not clear)

user-1a6a43 05 February, 2021, 17:08:14

ooh. is there a way to clear it entirely, and take the most recent one?

papr 05 February, 2021, 17:10:29

Not entirely sure how to do this with the matlab bindings, but you can check if messages are available with this command: https://github.com/pupil-labs/pupil/blob/bb7d15ca78f3e4f25e2291a02b5670a312260f04/pupil_src/shared_modules/zmq_tools.py#L128

You should loop while this returns true, and process the most recent result

user-1a6a43 05 February, 2021, 17:09:16

and as a way of replacing the write to file process (to connect the reading of data to a separate MATLAB thread), should I use ports?

papr 05 February, 2021, 17:11:37

I do not have enough experience with Matlab to know what the respective best practices are. I suggest searching for the keywords "matlab multiprocessing communication"

user-1a6a43 05 February, 2021, 17:10:07

or what's the most efficient way of accomplishing communication via MATLAB, both from pupil-labs to the reading gaze file, and from the read gaze file to other files/processes in my project that need the information

user-1a6a43 05 February, 2021, 17:12:59

got it. thank you sir!

user-1a6a43 05 February, 2021, 17:13:08

I really appreciate the time you've taken in answering my questions

user-8bd1e2 06 February, 2021, 14:56:55

Hey there core community! I'm having a hardware issue and I'm hoping someone on here can point me in the right direction: I'm trying to get Pupil Core to work on my Linux laptop. The machine detects the cameras on the USB port, but the Pupil software can't detect the cameras. I've put the headset on my PC and it works fine. I've tried rebooting, unplugging, different USB ports (I have 2 on the laptop), and reinstalling the software.

papr 06 February, 2021, 19:56:41

Hey, your user needs to be added to the plugdev group. Afterward, restart the computer to apply the change.

user-8bd1e2 07 February, 2021, 00:06:31

thanks! I think that's definitely the source of the problem... I'm new to linux, but there seems to be an issue on my machine. I've added user to plugdev, it tells me user "already exists" and then when I look at the groups the user is in, plugdev isn't one of them. There also seems to be a permissions issue where I can't look into the /etc/group directory (permission denied) to make sure plugdev is in there. Anyway, this looks like it's more a software/linux issue than a pupil labs one. Thanks for pointing me in the right direction!

papr 07 February, 2021, 18:28:56

The command to add your current user to the group is sudo usermod -a -G plugdev $USER (you will need the admin rights to perform that command)

user-8bd1e2 07 February, 2021, 20:15:08

That worked! I already have a successful recording logged. Thanks so much!

user-a16c82 09 February, 2021, 03:01:57

Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs:

"Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following:
- UVC compatible (Chapters below refer to this document)
- Support Video Interface Class Code 0x0E CC_VIDEO (see A.1)
- Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2)
- Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6)
- Support UVC_FRAME_FORMAT_COMPRESSED frame format"

None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.

user-a13d4c 09 February, 2021, 21:32:23

Hi, first time poster here: We have an HTC Vive style Eye-tracking system, that we are testing with our own Android app running on a phone (Pixel 2XL, Android version 10). I'm trying to detect when the cameras attach to the USB bus and am enumerating the PID:VIDs. Unfortunately the cameras come up with the wrong IDs (0x507:0x581), whereas on Linux and on a different smartphone (Galaxy S21, Android version 11) the IDs come up as (0x0c45:0x64ab). On all systems the Microchip hub comes up correct (0x04d8:0x00dd). Any guesses as to what could cause this discrepancy?

user-4f103b 10 February, 2021, 01:40:28

Hi, I have a ball on my screen which is moving from left to right... How can I detect that my gaze is focused on the ball? Thank you.

papr 10 February, 2021, 10:21:20

You should have a look at surface tracking for this 🙂 https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking

user-f195c6 10 February, 2021, 10:23:06

Good morning! I have a problem with time synchronization... I'm using the Core glasses to track eye movements in specific tasks drawn in PsychoPy. However, for a specific spatial/eye location, the two don't have the same timestamp... There's always a (different) delay... Does anyone know what the problem is here? Thank you!

papr 10 February, 2021, 10:24:23

Hey 👋 What do you mean by "same timestamp"? What are you comparing it to?

user-f195c6 10 February, 2021, 10:27:35

I'm using a python code to extract both in 2 excel files and then plot in MatLab, but the stimulus (psychopy) and the data collected from the glasses don't match...

user-f195c6 10 February, 2021, 10:26:14

I mean, when a dot (from psychopy) is in a place of the screen, the timestamp associated does not correspond to the timestamp of the glasses when the eye is looking to that position...

papr 10 February, 2021, 10:27:19

Which clock do you use in Psychopy to measure the timestamp? And how large is the offset between the psychopy timestamp and the pupil timestamp?

user-f195c6 10 February, 2021, 10:29:13

How can I check which clock is being used in PsychoPy? The delay between them has been between 8 and 16 seconds, more or less. This number is always different...

papr 10 February, 2021, 10:38:09

Look at the experiment code that was responsible for generating your dot data. It is probably https://www.psychopy.org/api/clock.html

user-f195c6 10 February, 2021, 10:32:28

Another thing I noticed (not directly connected to this problem) is that the clock plugin in Pupil Player (vis_datetime) is giving me one hour less. I travelled between countries with 1 h of difference... is this the reason? I mean, the clock on my computer is right...

papr 10 February, 2021, 10:37:07

The plugin shows datetime in UTC if I remember correctly. That should be independent of your time sync issue.

papr 10 February, 2021, 10:38:34

Did you implement explicit time sync with Capture in your experiment?

user-f195c6 10 February, 2021, 10:41:13

I don't think so... I'm using a python code to generate the timestamp from the core data, then it goes for an excel file and then I plot in Matlab...

papr 10 February, 2021, 10:42:08

> I'm using a python code to generate the timestamp from the core data

Ah, ok. Let's move this to 💻 software-dev then. Can you share the mentioned code there?

user-f195c6 10 February, 2021, 10:42:39

Yes!

user-4f103b 10 February, 2021, 17:17:23

Hi, thank you for the response. Surface tracking will tell me whether my gaze is on the surface or not. But how can I detect that my gaze is on a specific object inside the surface? Thanks

papr 10 February, 2021, 18:12:08

I assumed that you have knowledge over the object's current position, such that you could compare it to the gaze coordinates. Pupil Capture does not provide built-in object detection

user-98789c 11 February, 2021, 10:33:42

Hi, does anyone know of a plugin or a simple python script for pupil data visualisation, e.g. pupil size over time?

papr 11 February, 2021, 10:46:28

Check out https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb

user-98789c 11 February, 2021, 11:22:51

I have trials throughout my data and I want to plot the pupil diameter for each trial, all in one figure (with different colors, maybe). For that, should I iterate through pupil_timestamp? That makes it a dotted plot rather than a continuous plot. How else should I iterate through trials?

user-98789c 11 February, 2021, 13:42:00

Can you think of a reason, why my recording is saved like this, missing some values?

Chat image

papr 11 February, 2021, 13:45:56

The rows with the "missing" values are 2d detections (see the method column). This data does not have 3d information. Please be aware that this file includes 2d + 3d data, as well as data from the left and right eyes. Therefore, you will have to group by eye_id and method first, before you can even start visualizing data relative to your trial starts.

In order to do the latter, you need to slice the data into the block sections and subtract the first timestamp of each block from the remaining timestamps. This will give you relative timestamps since block start.
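A rough pandas sketch of those two steps (column names as in the pupil_positions.csv export; the block start/end timestamps are placeholders you would derive from your own annotations):

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")

# step 1: group by eye and detection method, e.g. eye 0 and 3d detections only
eye0_3d = pupil[(pupil["eye_id"] == 0) & (pupil["method"].str.contains("3d"))]

# step 2: slice one block and express its timestamps relative to the block start
block_start, block_end = 1000.0, 1010.0  # placeholder timestamps
block = eye0_3d[eye0_3d["pupil_timestamp"].between(block_start, block_end)].copy()
block["time_since_block_start"] = block["pupil_timestamp"] - block["pupil_timestamp"].iloc[0]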

user-98789c 11 February, 2021, 17:15:08

To be able to slice the data into trial blocks, I am about to merge pupil_positions.csv and the "label" column in annotations.csv , which is easy. But my concern is that the timestamps in the two files are not the same; meaning that I don't have pupil data at the timestamps that I have annotations. Is there a way to save annotations in the same csv file as the pupil_positions, or a way to save the exact same timestamps for both? or should I think of a whole other way to sync the two files?

user-98789c 11 February, 2021, 13:59:06

Is there a way to only record with the 3d method?

papr 11 February, 2021, 14:00:18

No. It depends on the 2d method. Usually, there are utilities for removing unwanted rows. You might also want to remove data below a specific confidence threshold.

papr 11 February, 2021, 14:02:58

removing rows from a pandas dataframe (as we use it in the tutorial) is easy. See cell 7 in the "Visualize Pupillometry Data" section of the tutorial.

user-98789c 11 February, 2021, 14:04:43

and do you have a suggestion about iterating through trials? https://discord.com/channels/285728493612957698/285728493612957698/809383951772549150

papr 11 February, 2021, 14:05:46

Yes, my second paragraph of this message was meant as a response to your question: https://discord.com/channels/285728493612957698/285728493612957698/809419958261907529

user-98789c 11 February, 2021, 14:08:02

Thanksss 👍

papr 11 February, 2021, 14:12:42

Should my response not satisfy your question, then it is possible that I misunderstood your question. Feel free to tell me, if that was the case, or should it happen anytime in the future.

user-98789c 11 February, 2021, 14:17:45

Sure, I'm looking into implementing your comments, I'll ask again for further guidance, thanks a lot.

papr 11 February, 2021, 17:20:04

It is in the nature of events that they do not happen at the very same moment as the eye camera would record a video frame. Therefore, it is very unlikely that they would have the same timestamps. Instead, I would ~~filter between timestamps~~ slice data between two event timestamps.

Again, based on the tutorial, you can do:

trial_start = ...
trial_end = ...

mask_included = eye0_df['pupil_timestamp'].between(trial_start, trial_end)
trial_data_eye0 = eye0_df.loc[mask_included]
user-98789c 12 February, 2021, 13:30:32

I'm trying to see if I understood it correctly: I have to merge my two files, annotations and pupil_positions. From the annotations I only need the timestamp and label columns. My annotations' labels are my trial numbers, so I am going to slice pupil_positions where the annotation labels change. For this to happen correctly, I have to merge the pupil_timestamps (from pupil_positions) and the timestamps (from annotations) into one column (and, of course, in chronological order). Is there a way to do this? To combine two columns from two data frames into one column, in ascending chronological order?

user-da621d 12 February, 2021, 04:39:43

hi, why has the eye tracking hardware suddenly stopped working?

Chat image

papr 12 February, 2021, 14:55:04

Please contact info@pupil-labs.com in this regard

user-bfa25e 12 February, 2021, 12:19:42

Hello, I'm interested in the IR safety of the Pupil Core, as I have to get Ethics Committee approval. Is there any documentation where the IR safety is specified?

papr 12 February, 2021, 13:31:10

Please contact info@pupil-labs.com in this regard.

papr 12 February, 2021, 13:32:53

What program are you using to process the csv files?

user-98789c 12 February, 2021, 13:33:34

Python and all the related packages (csv, pandas, etc.)

papr 12 February, 2021, 13:41:53

@user-98789c I would not merge the data frames. Since they have different columns, that is difficult to do. Instead, you can do this:

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
# creates new column and fills with None
pupil["trial"] = None

# (1) calculate start and end timestamps based on
# annotions.csv
trial_1_timestamp_start = ...
trial_1_timestamp_end = ...
trial_1_label = ...
# (2) boolean mask that is true for rows that are
# between start and end timestamp
trial_boolean_mask = pupil["pupil_timestamp"].between(
  trial_1_timestamp_start,
  trial_1_timestamp_end,
)
# (3) set `trial` values to `trial_1_label` for rows
# belonging to trial 1
pupil["trial"].loc[trial_boolean_mask] = trial_1_label

# (4) repeat steps 1-3 for all trials

# (5) remove data that does not belong to any trial
mask_trial_not_available = pupil["trial"].isna()
mask_trial_available = ~mask_trial_not_available
pupil_clean = pupil.loc[mask_trial_available]
user-98789c 12 February, 2021, 14:16:28

Thanks, this is a good way to do it. The problem is, I have 100 trials per run, 3 runs per participant and at least 20 participants! So, to calculate the trial start and end timestamps by hand would take a lot of time.. that's why I wanted to merge the two data frames, so that I have the timestamps in one place and slice through them based on labels..

user-821b71 12 February, 2021, 14:27:22

Hey everyone, I am trying to integrate a custom pupil detector into the Pupil Player. I followed the instructions given in the documentation and found the example file artificial_2d_pupil_detector.py on github (https://gist.github.com/papr/ed35ab38b80658594da2ab8660f1697c#file-artificial_2d_pupil_detector-py). Now to get started I wanted to see what this artificial detector does and put the file under ~/pupil_player_settings/plugins However the plugin does not show up in the Plugin Manager. I tested on Pupil Player v3.1 and v2.6 without success. In the log files it says that the python plugin file is imported and scanned but the plugin does not show up in the Plugin Manager. What am I missing?

papr 12 February, 2021, 14:35:49

~~Let's move that discussion to 💻 software-dev~~ Actually, the plugin is not supposed to appear in the plugin manager as this is an eye process plugin. Please check your eye window. It should have a new menu entry for the artificial detector.

user-821b71 12 February, 2021, 14:28:15

Here is the log file from Pupil Player v2.6 after launch

Chat image

papr 12 February, 2021, 14:35:10

Of course, I would recommend calculating the start and stop timestamps automatically. But that depends on your annotation data structure and it is therefore up to you to implement the start/stop timestamp calculation :).

user-98789c 12 February, 2021, 17:13:00

So far I wrote this, and I'm trying to make sure it's working correctly. Can you take a look at it, please?

import os
import pandas as pd
from IPython.display import display

recording_location = 'C:/Users/CVBE/recordings/2021_02_12'

exported_pupil_csv = os.path.join(recording_location, '003', 'exports', '000', 'pupil_positions.csv')
pupil_pd_frame = pd.read_csv(exported_pupil_csv)

exported_annotations_csv = os.path.join(recording_location, '003', 'exports', '000', 'annotations.csv')
annotations_pd_frame = pd.read_csv(exported_annotations_csv)

detector_3d_data = pupil_pd_frame[pupil_pd_frame.method == 'pye3d 0.0.4 real-time']
detector_3d_data["trial"] = None

all_trials = max(annotations_pd_frame['label'])

plt.figure(figsize=(16, 5))

for i in 1,all_trials:
    trial_label = i
    trial_data = annotations_pd_frame[annotations_pd_frame.label == i]
    trial_start_timestamp = min(trial_data['timestamp'])
    trial_end_timestamp = max(trial_data['timestamp'])
    trial_boolean_mask = detector_3d_data['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    detector_3d_data["trial"].loc[trial_boolean_mask] = trial_label
    mask_trial_not_available = detector_3d_data["trial"].isna()
    mask_trial_available = ~mask_trial_not_available
    pupil_data_in_trial = detector_3d_data.loc[mask_trial_available]
    trial_data_eye0 = pupil_data_in_trial[pupil_data_in_trial.eye_id == 0]
    trial_data_eye1 = pupil_data_in_trial[pupil_data_in_trial.eye_id == 1]
    plt.plot(trial_data_eye0['pupil_timestamp'], trial_data_eye0['diameter_3d'])
    plt.plot(trial_data_eye1['pupil_timestamp'], trial_data_eye1['diameter_3d'])

user-821b71 12 February, 2021, 14:43:06

@papr Ah sorry, you are correct. This plugin was meant to work with Pupil Capture I guess. Thanks. What I am trying to achieve is to run a custom pupil detection after data collection in the Pupil Player. Is this possible?

papr 12 February, 2021, 14:43:38

That is possible. For that, you will need to put the plugin in ~/pupil_player_settings/plugins and run the post-hoc pupil detection plugin

user-821b71 12 February, 2021, 14:47:11

Okay thanks a lot! I was just using the wrong plugin template then 🙂

user-6cdb90 12 February, 2021, 14:49:18

Hello, I am currently using Pupil Core for my research. During the last experiment, one of the cameras suddenly stopped working and it showed "Init failed, Capture is started in ghost mode". However, when I hold the wires in a certain position, it starts to work again. I think the issue is related to a disconnection of one of the wires shown in the attached image, so I would appreciate it if you could help me fix it.

Chat image

papr 12 February, 2021, 14:54:52

Please contact info@pupil-labs.com in this regard

user-6cdb90 12 February, 2021, 14:58:52

Thank you for your response. I already emailed them and I am waiting for a response. However, I was wondering what the service and repair procedure is for hardware issues. As I am under pressure to conduct the experiments as soon as possible, I just wanted to know how long it takes if any repair is needed.

user-a16c82 13 February, 2021, 20:06:58

Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs:

"Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following:
- UVC compatible (Chapters below refer to this document)
- Support Video Interface Class Code 0x0E CC_VIDEO (see A.1)
- Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2)
- Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6)
- Support UVC_FRAME_FORMAT_COMPRESSED frame format"

None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.

user-39962f 13 February, 2021, 20:29:38

Good question! @user-a16c82 I am facing the exact same issue. Any help would be great, I'm located in Canada, so I would prefer a hardware option that is accessible from here.

user-331121 14 February, 2021, 20:35:20

Hi, what is the best way to get the camera pose and orientation relative to the eyeball sphere?

papr 15 February, 2021, 08:36:05

Pupil data, including the eyeball sphere, is in eye camera coordinates. Therefore, this also gives you the camera pose relative to the eyeball

papr 15 February, 2021, 08:33:46

Conceptually, this looks correct.

Syntactically, this line looks incorrect:

for i in 1,all_trials:

You probably meant: for i in range(1, all_trials):

user-98789c 24 February, 2021, 12:09:31

I have changed my code to this:

for index in all_indices:
    trial_label = annotations_pd_frame['label'].loc[annotations_pd_frame['index'] == index]
    trial_data = annotations_pd_frame[annotations_pd_frame == str(trial_label)]
    trial_start_timestamp = min(trial_data['timestamp'])
    trial_end_timestamp = max(trial_data['timestamp'])
    trial_boolean_mask_eye0 = eye0_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    trial_boolean_mask_eye1 = eye1_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    eye0_high_conf_df["trial"].loc[trial_boolean_mask_eye0] = trial_label
    eye1_high_conf_df["trial"].loc[trial_boolean_mask_eye1] = trial_label
    mask_trial_not_available_eye0 = eye0_high_conf_df["trial"].isna()
    mask_trial_not_available_eye1 = eye1_high_conf_df["trial"].isna()
    mask_trial_available_eye0 = ~mask_trial_not_available_eye0
    mask_trial_available_eye1 = ~mask_trial_not_available_eye1
    pupil_data_in_trial_eye0 = eye0_high_conf_df.loc[mask_trial_available_eye0]
    pupil_data_in_trial_eye1 = eye1_high_conf_df.loc[mask_trial_available_eye1]

In one run I have 4 conditions. Each condition has a number of trials (not the same number for all). Each trial repeats for 10 times. To slice my data, I can use indices and timestamps. My annotations contain the name of the trial as their labels, as shown in the attached screenshots (10 of each sequence, and the name could be a number or a word).

Now, for a reason that I can not figure out, nothing goes into the trial_data on the 3rd line.

Any idea what I'm doing wrong?

Chat image

user-6e3d0f 15 February, 2021, 10:33:41

How can I translate the timestamps that are given in my export file to real world time or timestamps in my video length?

papr 15 February, 2021, 10:37:13

Time can be translated to Unix time using code like this https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb

The export also includes a world_timestamps.csv file that is a mapping from frame index <-> frame pts <-> pupil timestamp. frame pts are the video file's internal timestamps. Multiplied with the video stream's time base, you can calculate the video frame's timestamp relative to the video's start.
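As a small sketch of the Unix-time conversion (this assumes the newer recording format that stores an info.player.json file; older recordings keep the same two values in info.csv):

import json
from datetime import datetime

with open("info.player.json") as f:
    info = json.load(f)

# offset between the Pupil clock and the system (Unix) clock at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_unix(pupil_timestamp):
    return pupil_timestamp + offset

print(datetime.fromtimestamp(pupil_to_unix(1234.567)))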

user-6e3d0f 15 February, 2021, 10:39:22

Looking at my export file "Fixations on surface..." there is no frame pts in it? Only worldtimestamp, world index, fixation id, start_timestamp, duration...?

papr 15 February, 2021, 10:40:20

The file should be called world_timestamps.csv and is exported together with the world.mp4 by the World Video Exporter plugin

user-a9e72d 15 February, 2021, 12:30:25

I want to know why this app cannot work successfully, as shown here.

Chat image

papr 15 February, 2021, 14:58:41

Could you please translate the error message to English for me? Does it say exactly which file is missing?

user-98789c 15 February, 2021, 13:25:56

In order to make sure Pupil Core actually tracks the pupil diameter: is there some kind of plugin or platform to visualize the changes in pupil diameter while a simple stimulus is presented, e.g. a flickering box whose flickering frequency, duration, etc. change, so we can see the influence of these changes on the pupil diameter?

papr 15 February, 2021, 15:34:03

No, there is no such plugin built-in. It might be possible to build a stimulus like that in psychopy.
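For what it is worth, a rough PsychoPy sketch of such a flickering-box stimulus might look like this (it assumes a 60 Hz monitor and does not yet send any annotations/events to Pupil Capture; that would have to be added separately, e.g. via the network API):

from psychopy import visual, core

win = visual.Window(size=(800, 600), units="pix", color="grey")
box = visual.Rect(win, width=200, height=200, fillColor="white", lineColor=None)

flicker_hz = 2                 # flicker frequency of the box
duration_s = 10                # presentation duration
frame_rate = 60                # assumed monitor refresh rate
frames_per_half_cycle = int(frame_rate / (2 * flicker_hz))

clock = core.Clock()
frame = 0
while clock.getTime() < duration_s:
    # draw the box only during the "on" half of each flicker cycle
    if (frame // frames_per_half_cycle) % 2 == 0:
        box.draw()
    win.flip()
    frame += 1

win.close()
core.quit()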

user-98789c 15 February, 2021, 14:50:41

I have 20 trials here and I'm expecting all of them to be plotted in the same figure, but somehow only the last one is plotted. Python is supposed to hold the plots automatically, right?

papr 15 February, 2021, 15:36:03

It should indeed have plotted 40 graphs. Make sure:
- that your for loop is indeed called 20 times
- that trial_data_eye0 and trial_data_eye1 are not empty (you can check by printing their shape with print())
- that trial_data_eye1['diameter_3d'] contains numeric data (instead of NaN values)

user-331121 15 February, 2021, 15:22:23

The eyeball center (x,y,z) looks good, but I could not find information on the eyeball center orientation with respect to the camera. I tried to find it using solvePnP between projectedsphere(x,y) and sphere(x,y,z) but those values look off. If this is already provided in the export files, let me know. Thanks

papr 15 February, 2021, 15:23:35

The circle_3d_normal_x/y/z column contains the current eye ball orientations for each datum

user-331121 15 February, 2021, 15:27:36

Isn't it the normal vector of the 3D pupil? Maybe there is a misunderstanding. I wanted to know where the eyeball is relative to the camera (or, say, the camera orientation), and I do not care about the gaze direction at this moment.

papr 15 February, 2021, 15:29:24

There might be a misunderstanding, indeed. Could you elaborate on what you mean by

> eyeball center orientation

The eye model has an orientation: its gaze direction. What orientation are you referring to?

user-331121 15 February, 2021, 16:10:28

Chat image

user-331121 15 February, 2021, 16:11:33

Not the gaze direction, but the orientation of the eyeball with respect to the camera. Theta and phi in this case.

papr 15 February, 2021, 16:12:23

Theta and phi are just the gaze direction in spherical coordinates 🙂

user-331121 15 February, 2021, 16:19:16

Chat image

user-331121 15 February, 2021, 16:20:30

The gaze direction is different, I suppose. Phi_c and theta_c should be the same throughout a video, provided that there is no headgear slippage.

papr 16 February, 2021, 11:19:30

You can convert the cartesian eye center to spherical coordinates with this function: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/visualizer_pye3d/utilities.py#L14-L19

> same for a video provided that there is no head gear slippage

The same applies to the eye ball center.
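If you do not want to run Pupil from source, a small numpy equivalent could look like this (the axis convention here is an assumption on my part; the linked utilities.py is the authoritative version):

import numpy as np

def cart_to_spherical(x, y, z):
    # radius, polar angle (theta) and azimuth (phi) of the eye ball center
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(y / r)
    phi = np.arctan2(z, x)
    return r, theta, phi

# e.g. applied to the sphere_center_x/y/z values of a pupil_positions.csv row
print(cart_to_spherical(2.5, 1.0, 35.0))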

papr 15 February, 2021, 16:22:38

That is a helpful drawing, thank you! I will have a closer look on how to calculate this tomorrow.

user-331121 15 February, 2021, 16:22:58

That would be a great help. Thanks

user-331121 15 February, 2021, 16:23:50

I tried to find using solvePnP between projectedsphere(x,y) and sphere(x,y,z) but those values looks off.

user-82b4f1 15 February, 2021, 17:33:48

Hello I'm new to pupil (and to Discord as well, I am more used to Slack).

user-82b4f1 15 February, 2021, 17:37:38

I am interested in Pupil as an easy way to record eye movements, and I very much like the idea of 3d tracking. What is the most straightforward way to get it running and obtain data from it with the simplest setup? (So far I just have an ordinary webcam.) Are there some docs I can read to get myself started?

nmt 16 February, 2021, 09:41:30

Hey @user-82b4f1. I think it would be helpful if you could elaborate more on your setup and how you are planning to perform eye tracking with your webcam. Are you implementing something like Pupil DIY (https://docs.pupil-labs.com/core/diy/#diy)?

user-82b4f1 15 February, 2021, 18:21:14

Hello @papr (I see you are the most active), is this the right place to ask for someone just starting? Are there examples ready to run? E.g. one that uses video snippets instead of a real-time camera stream, just to be sure that everything is set up correctly?

user-10fa94 15 February, 2021, 19:08:34

Has anyone had an issue with fisheye-model-based undistortion with OpenCV for videos at 1920x1080 resolution? Because the Pupil Player export creates a rectangular video padded with black pixels, my undistortion function behaves badly horizontally, but very well vertically. It performs well in general for the 1280x720 videos, but I need the FOV that 1920x1080 provides. Would sincerely appreciate any tips or tricks people have found.

papr 16 February, 2021, 09:50:46

These are the utility classes that we use in Core to handle camera distortion:

https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py

Checkout the Fisheye_Dist_Camera class and its undistort() method.
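For reference, a bare-bones OpenCV sketch of the same idea (K and D below are placeholders; the real values for your scene camera are the ones stored by the camera intrinsics estimation). Note that if the exported video was padded to a different aspect ratio, you would want to crop back to the original resolution first, since the intrinsics refer to the unpadded image:

import cv2
import numpy as np

# placeholder intrinsics; replace with the values from your recording
K = np.array([[700.0, 0.0, 960.0],
              [0.0, 700.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([[0.1], [-0.05], [0.01], [0.0]])  # the fisheye model has 4 coefficients

img = cv2.imread("world_frame.png")  # a 1920x1080 scene frame
h, w = img.shape[:2]

# build the undistortion maps once, then remap every frame
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (w, h), np.eye(3))
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)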

user-da621d 16 February, 2021, 02:15:40

hi, there is a blue circle and a red circle on the pupil, what is the difference? If I only want the blue one, how can I do that?

Chat image

papr 16 February, 2021, 10:03:03

Blue refers to 2d detection results. Red to 3d detection. If red is less stable this is due to the 3d eye model not being fit well. Roll your eyes to improve its fit. The green outline should be slightly smaller than your eye ball when fit well.

user-da621d 16 February, 2021, 02:16:56

Because I think the blue one is more stable than the red one.

user-da621d 16 February, 2021, 06:29:52

hello, anybody here? Is it about pupil.0.2d and pupil.0.3d?

user-83ea3f 16 February, 2021, 07:46:15

hey, I realized I am not able to install Pupil Invisible Companion on my phone, a Pixel 3a XL. Would you let me know why I am not able to install the app? Thanks!

nmt 16 February, 2021, 09:49:53

Hi @user-83ea3f. The Invisible Companion App is only supported on OnePlus 6 and OnePlus 8 devices. The glasses do not work with other models

user-83ea3f 16 February, 2021, 08:18:13

Also, am I able to connect my Core with Invisible app? let me know!

nmt 16 February, 2021, 09:52:12

Also, Core is not compatible with the Invisible Companion App. They can be considered as separate entities. If you want to use Core in mobile studies, you might consider using a small form-factor laptop/tablet style PC

papr 16 February, 2021, 09:42:58

@user-82b4f1 To follow up on this note: Please understand that Pupil Capture only works for head-mounted eye trackers. It does not work for remote eye trackers (e.g. webcam placed in front of the subject).

user-82b4f1 17 February, 2021, 17:01:19

Hi @papr, thank you. Our goal is to use the software with a piece of hardware made by us. So first of all we need to know whether the LGPL licence allows us to use it as it is, before building our client application. Yes of course having the camera well firm with respect to the head will make everything much easier. Even if we are not interested in gaze data, but only in eye movements (so no world camera) of course we are tempted to start from buying the headset from your store on shapeways . But we need to know: Of the following options a) using the software as it is; b) using the headset too, as it is; c) make modification to the software, or d) to the headset; which would be allowed by the current licensing scheme? Of course after some r&d the intention is, if it works, to make a product out of it.

user-331121 16 February, 2021, 17:25:51

Thanks, @papr, I will have a closer look at this. Additionally, I need the specification of the camera sensor (size, FOV) used for Pupil Labs core eye tracker (one with resolution 640 x 480). Is it possible to share the detail?

papr 16 February, 2021, 17:28:22

If you need the information regarding the sensor size anyway, please contact info@pupil-labs.com in this regard.

papr 16 February, 2021, 17:27:49

You can read the focal length from the camera intrinsics in https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L192-L211

If you just want to convert between pixels and 3d coordinate, use the intrinsics. There is usually no need for knowing the sensor size.
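As a sketch of what "using the intrinsics" means in practice (K below is an illustrative pinhole matrix, not the actual values of the 640 x 480 eye camera, and distortion is ignored):

import numpy as np

# illustrative camera matrix: focal lengths fx, fy and principal point cx, cy
K = np.array([[620.0, 0.0, 320.0],
              [0.0, 620.0, 240.0],
              [0.0, 0.0, 1.0]])

def pixel_to_ray(u, v):
    # viewing direction (in camera coordinates) of pixel (u, v)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def point_to_pixel(x, y, z):
    # project a 3d point (camera coordinates, z > 0) back to pixel coordinates
    u, v, w = K @ np.array([x, y, z])
    return u / w, v / w

print(pixel_to_ray(320, 240))           # optical axis
print(point_to_pixel(0.0, 0.0, 35.0))   # projects onto the principal point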

user-39962f 16 February, 2021, 20:33:00

Hi, does anyone happen to know of a specific functional alternative for the Microsoft LifeCam HD-6000? I've been looking into it on behalf of a student group project and we were told that the camera should have these specs:

"Webcam - UVC compliant cameras are supported by Pupil Capture. If you want to use the built-in UVC backend, your camera needs to fulfil the following:
- UVC compatible (Chapters below refer to this document)
- Support Video Interface Class Code 0x0E CC_VIDEO (see A.1)
- Support Video Subclass Code 0x02 SC_VIDEOSTREAMING (see A.2)
- Support for the UVC_VS_FRAME_MJPEG (0x07) video streaming interface descriptor subtype (A.6)
- Support UVC_FRAME_FORMAT_COMPRESSED frame format"

None of us have enough experience to be certain whether certain models meet these requirements and we haven't found any documentation that references these codes.

papr 17 February, 2021, 11:24:35

Unfortunately, I do not know any specific model that fulfils these requirements. I extracted the requirements by looking at the pyuvc source code. It is possible to use other cameras but it would require you to use a custom version of pyuvc.

Alternatively, you can also use this community-provided video backend https://github.com/Lifestohack/pupil-video-backend that uses OpenCV to access the camera and streams the frames over the network to Pupil Capture. OpenCV supports a lot of different cameras.

user-39962f 17 February, 2021, 01:17:48

@papr

user-430fc1 17 February, 2021, 11:02:09

I have msgpack=0.5.6 in a conda environment where I also have msgpack-python and msgpack-numpy. Is there a way to make sure my code imports the right version of msgpack? I run into an error that I think is the result of my module importing the wrong version. As an aside, is it still necessary to use msgpack=0.5.6?

papr 17 February, 2021, 11:26:38

msgpack-python has been deprecated in favor of msgpack. I would highly recommend uninstalling msgpack-python. Also, starting with Pupil 3.0, we also support newer versions of msgpack

user-430fc1 17 February, 2021, 11:27:22

OK, great. Thanks!

user-da621d 17 February, 2021, 14:12:14

hi, blinks have an influence on my detection, so how can I filter them out?

papr 17 February, 2021, 14:15:32

You can use the post-hoc blink detection in Player. Please be aware that it requires good 2d pupil detection to work well. The plugin will export time ranges in which blinks were detected. Afterward, you will have to exclude these time ranges from the pupil/gaze data in your post-processing script.
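A possible pandas sketch for that post-processing step (assuming the exported blinks.csv with its start_timestamp/end_timestamp columns and the pupil_positions.csv export):

import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")
blinks = pd.read_csv("blinks.csv")

# mark every pupil datum that falls into any detected blink interval
in_blink = pd.Series(False, index=pupil.index)
for _, blink in blinks.iterrows():
    in_blink |= pupil["pupil_timestamp"].between(
        blink["start_timestamp"], blink["end_timestamp"]
    )

pupil_without_blinks = pupil.loc[~in_blink]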

user-da621d 17 February, 2021, 14:13:49

@papr

user-da621d 17 February, 2021, 14:16:00

and how can I know the distance between my eye and the eye-tracking hardware?

papr 17 February, 2021, 14:18:42

You could calculate the norm of the sphere_center_x/y/z vector to calculate the distance between eye camera origin and estimated eye ball center in mm if that is helpful to you. What do you need this information for? It is the first time someone asked this question.
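For example (column names as in the pupil_positions.csv export; only rows produced by the 3d detector carry sphere_center values):

import numpy as np
import pandas as pd

pupil = pd.read_csv("pupil_positions.csv")

# distance (in mm) between the eye camera origin and the estimated eye ball center
centers = pupil[["sphere_center_x", "sphere_center_y", "sphere_center_z"]].to_numpy()
pupil["eye_to_camera_mm"] = np.linalg.norm(centers, axis=1)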

user-da621d 17 February, 2021, 14:22:34

the distance is one of the parameters in my project, I should test the distance in real time.

papr 17 February, 2021, 14:26:36

Also, the recommended use of the headset is to place it on the subject, adjust the cameras once, and avoid any slippage of the headset as much as possible. Your description would violate this recommendation. Are you sure that the parameter is "headset distance to eyes" and not "stimulus distance to eye"?

papr 17 February, 2021, 14:23:52

The eye sphere requires a series of pupil detections. In other words, it requires time to adjust! If you are moving the camera, sphere_center might not be valid!

user-da621d 17 February, 2021, 14:23:10

Thank you papr, I will think about your suggestions.

user-200ca9 17 February, 2021, 14:26:25

@papr we are building a test for our subjects to study their eye behavior. For that we have a video to show them. This video is about, let's say, 5 minutes long, and in order to understand where the eye is looking in relation to a reference point, we use the post-hoc method and create the reference points ourselves. As you might imagine, that's a lot of points(!). Just to give an example: for a short video (1 min) it's about 800 points. Do you know of a better way to create the reference points?

papr 17 February, 2021, 14:28:24

If you use the circle marker as reference point, it can be detected automatically. The results are cached. You could try to find custom reference points and cache them in the same format. Afterward, Pupil Player should be able to use them.

user-da621d 17 February, 2021, 14:29:10

sorry papr, maybe my phrasing was not clear. To restate: I need to know the distance after changing the relative position between the eyes and the hardware.

papr 17 February, 2021, 14:29:56

ok, understood. In this case, I suggest adding an explicit model fitting phase* after moving the headset.

  • ask your subjects to roll their eyes
user-200ca9 17 February, 2021, 14:29:53

the thing is we have a different "marker" that the software doesn't understand. Can we program a new shape to be our marker, so that the software itself will create the reference points for us?

papr 17 February, 2021, 14:30:38

That is possible too, but you will have to run the software from source. It is not possible to implement this easily as a plugin

user-da621d 17 February, 2021, 14:31:47

thank you papr

user-200ca9 17 February, 2021, 14:32:12

unfortunately we don't have the time to write a new plugin, we're a bit short on time 😦

user-200ca9 17 February, 2021, 14:33:10

thanks anyway! if any idea comes to mind that will help us a lot! πŸ™‚

user-200ca9 17 February, 2021, 14:35:27

@papr this is our "marker" in case it helps to understand:

Chat image

papr 17 February, 2021, 16:31:57

Have you set up surface tracking for the monitor displaying that stimulus?

user-42269b 17 February, 2021, 16:30:29

hello! I have a problem with capture - the EYE 1 camera cannot be found :/

papr 17 February, 2021, 16:30:59

Please contact [email removed] in this regard

user-42269b 17 February, 2021, 16:30:45

this is the error message I am getting:

eye1 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
eye1 - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution (192, 192)!
eye1 - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
eye1 - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
eye1 - [WARNING] launchables.eye: Process started.

user-42269b 17 February, 2021, 16:31:27

ok thank you!

user-200ca9 17 February, 2021, 16:34:23

yes we have, why do you ask?

papr 17 February, 2021, 16:36:22

If you know the position of the stimulus in surface coordinates, you can calculate it back to scene camera coordinates, which you can then in turn use as reference points. But this would require programming, too.

user-200ca9 17 February, 2021, 16:39:12

the thing is we don't really know the position in surface coordinates because we don't use PsychoPy (it creates a really slow, clunky video), so we generate the video using Python, then record it and use PsychoPy to manage the timing.

papr 17 February, 2021, 17:07:41

a) LGPL allows (commercial) use as is, see https://tldrlegal.com/license/gnu-lesser-general-public-license-v3-(lgpl-3)
b) of course
c) allowed under the conditions mentioned in a)
d) Modifications to the headset are allowed (but void the warranty). You can also buy specific parts only. Contact info@pupil-labs.com in this case.

Please keep in mind that pye3d, our new 3d detector, is not LGPL licensed. Its use is allowed for (1) all academic uses, (2) when used as part of our bundled software releases, or (3) stand-alone in combination with official Pupil Labs hardware.

user-82b4f1 17 February, 2021, 17:17:24

That's interesting and encouraging. About pye3d, do you confirm that conditions (1), (2) and (3) are all in OR logical relation?

papr 17 February, 2021, 17:18:57

Correct. Any of the cases allows the use of pye3d

user-82b4f1 18 February, 2021, 10:24:24

Ok. So, following (2), while we set up our own hardware, is there a way I can use recorded videos of the eye cameras, supply them to pupil_service, and read the data (via the network API, I suppose) as if they were obtained in real time? Is there also maybe a repository with test videos?

user-2d66f7 18 February, 2021, 10:10:03

Hi! I have a question about the data output of the surface tracker plugin. For a research project, we want to use an apriltag marker as a reference point in the environment. For this, I need the position of the marker in the camera coordinate frame. The surface position output only gives the transformation matrix of the surface from image to surf or surf to image. Is it possible to calculate the position of the marker in the camera coordinate frame solely with these matrices? It has been a while since I worked with matrices, so my mathematics knowledge needs a recap/update.

papr 18 February, 2021, 10:12:38

You can use the matrices to transform surface coordinates (e.g. corners) to scene image pixel coordinates. See this as a reference https://github.com/pupil-labs/pupil-tutorials/blob/master/04_visualize_surface_positions_in_world_camera_pixel_space.ipynb
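
As a rough illustration of what the matrix does (not taken from the tutorial verbatim; the matrix values below are made up, in practice you would parse them from the surf_to_img_trans column as shown in the tutorial):

import numpy as np
import cv2

# hypothetical 3x3 homography from normalized surface coordinates to scene image coordinates
surf_to_img_trans = np.array([
    [600.0,   0.0, 400.0],
    [  0.0, 600.0, 300.0],
    [  0.0,   0.0,   1.0],
])

# Surface corners in normalized surface coordinates
corners = np.array([[[0.0, 0.0]], [[1.0, 0.0]], [[1.0, 1.0]], [[0.0, 1.0]]])
corners_in_img = cv2.perspectiveTransform(corners, surf_to_img_trans)
print(corners_in_img.reshape(-1, 2))  # corner positions in scene image coordinates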

user-2d66f7 18 February, 2021, 10:16:00

Thank you!

papr 18 February, 2021, 10:28:08

There is a hidden feature where you simply drop a (Capture) recorded video onto the window and it will start using it as input. There is no support for synchronised playback of multiple videos though. You can find example recordings here:
- core example https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
- vr example https://drive.google.com/file/d/11J3ZvpH7ujZONzjknzUXLCjUKIzDwrSp/view?usp=sharing

user-82b4f1 22 February, 2021, 18:30:50

So, I tried to use the recordings found in the surface tracker. My current setup is:

user-61bfe5 18 February, 2021, 11:26:36

Hi. Is there a way to better track surfaces that are distorted in the wide FOV camera? As you can see a lot of the screen is cropped out of the surface due to the distortion.

Chat image

papr 18 February, 2021, 15:04:45

The surface tracker corrects for distortion internally. The preview just connects the corners with straight lines to give a rough idea of the surface area.

user-f22600 18 February, 2021, 15:01:46

Hi, very silly question. Does the player only replay the recording, or does it run the 2d and 3d detectors again?

papr 18 February, 2021, 15:03:37

You can run pupil detection and calibration post-hoc as well.

user-f22600 18 February, 2021, 15:06:42

So, if I make some changes in the 3d detector code and run player, will it run the modified code? I want to re-use the recorded data and test with the modified code.

papr 18 February, 2021, 15:08:40

It will run the modified code. You can simply verify this by changing the detectors "method" output. That should reflect accordingly in the exported data

user-f22600 18 February, 2021, 15:09:02

Thanks!

user-f22600 18 February, 2021, 21:40:42

Hi, I'm trying to run player from source on Big Sur. I've installed all the dependencies, and when I run player I get the following message:

File "/Users/yoncha01/Project/git/pupil/pupil_src/launchables/player.py", line 826, in player_drop
    from pupil_recording.update import update_recording
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/pupil_recording/update/__init__.py", line 15, in <module>
    from video_capture.file_backend import File_Source
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/video_capture/__init__.py", line 40, in <module>
    from .uvc_backend import UVC_Manager, UVC_Source
File "/Users/yoncha01/Project/git/pupil/pupil_src/shared_modules/video_capture/uvc_backend.py", line 25, in <module>
    import uvc
File "/Users/yoncha01/Project/git/pupil/pupil_venv/lib/python3.9/site-packages/uvc/__init__.py", line 4, in <module>
    import urlparse
ModuleNotFoundError: No module named 'urlparse'

So I tried to install urlparse, and then I get:

ERROR: Could not find a version that satisfies the requirement urlparse
ERROR: No matching distribution found for urlparse

Can you help me figure out what I'm doing wrong?

user-430fc1 19 February, 2021, 09:42:39

Is it still possible to programmatically freeze the 3d model in the latest version of Pupil Capture? My notifications no longer have any effect.

{
    "subject": "pupil_detector.set_property.3d",
    "name": "model_is_frozen",
    "value": True,  # set to False to unfreeze
    # remove this line if you want
    # to freeze the model both eyes
    "target": "eye0",
}

papr 19 February, 2021, 10:13:21

Please note that the pupil detector network api changed in Pupil Core v3 https://github.com/pupil-labs/pupil/releases/tag/v3.0

papr 19 February, 2021, 10:15:08

Is it possible that you did not install the Pupil Labs pyuvc? Please use pip install -r requirements.txt to install the requirements, or check its content if you want to install specific dependencies only.

user-f22600 22 February, 2021, 17:05:54

Now I'm getting the following message, seems like PyAv issue.

looking for avformat_open_input... missing
looking for pyav_function_should_not_exist... missing
looking for av_calloc... missing
looking for av_frame_get_best_effort_timestamp... missing
looking for avformat_alloc_output_context2... missing
looking for avformat_close_input... missing
looking for avcodec_send_packet... missing
looking for AV_OPT_TYPE_INT... found
looking for PYAV_ENUM_SHOULD_NOT_EXIST... missing
looking for AV_OPT_TYPE_BOOL... found
looking for AVStream.index... found
looking for PyAV.struct_should_not_exist... missing
looking for AVFrame.mb_type... missing

We didn't find avformat_open_input in the libraries. We look for it only as a sanity check to make sure the build process is working as expected. It is not, so we must abort.

user-430fc1 19 February, 2021, 10:28:02

{
    'topic': 'notify.pupil_detector.set_properties',
    'subject': 'pupil_detector.set_properties',
    'values': {'freeze model': True},
    'eye_id': 0,
    'detector_plugin_class_name': 'Detector3DPlugin',
}

Is this the correct name for the new plugin?

papr 19 February, 2021, 10:28:57

The plugin name is Pye3DPlugin

papr 19 February, 2021, 10:31:22

I recommend subscribing to notify.pupil_detector.properties and sending

{
    'topic': 'notify.pupil_detector.broadcast_properties',
    'subject': 'pupil_detector.broadcast_properties',
    'eye_id': 0
}

to get an overview over all possible properties and detector classes.

user-430fc1 19 February, 2021, 10:38:15

Sorry, I don't think I understand. Subscribe to the topic you mentioned, and then send that notification?

papr 19 February, 2021, 10:39:42

correct. In an extra script perhaps. Also add a short time.sleep() between the subscription and sending the notification to make sure the subscription goes through.
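
Putting those pieces together, a minimal sketch of such a script (default Pupil Remote address 127.0.0.1:50020 assumed):

import time
import msgpack
import zmq

ctx = zmq.Context()

# Connect to Pupil Remote and query the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to the property broadcast responses
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.pupil_detector.properties")
time.sleep(0.5)  # give the subscription time to register

# Ask eye0 to broadcast its detector properties
notification = {
    "topic": "notify.pupil_detector.broadcast_properties",
    "subject": "pupil_detector.broadcast_properties",
    "eye_id": 0,
}
pupil_remote.send_string(notification["topic"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()  # confirmation from Pupil Remote

# Print the first response
topic, payload = subscriber.recv_multipart()
print(topic.decode(), msgpack.loads(payload))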

user-430fc1 19 February, 2021, 10:47:50

Got it, thank you!

user-98789c 19 February, 2021, 12:21:29

Are there any studies/Python scripts/plugins on the frequency analysis of pupil diameter? For example, pupil diameter recorded while a participant stares at a flickering stimulus with a predefined frequency, and then, by means of FFT, ICA, etc. on the pupil diameter time series, the frequency is extracted.

user-430fc1 19 February, 2021, 13:17:10

Not sure if this is the sort of thing you mean but it may be of interest: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148805

user-c828f5 19 February, 2021, 15:03:17

Hey @papr , could you point me to documentation which lists how the confidence/goodness measure is calculated. I'm trying to find the relevant code with no luck so far.

papr 22 February, 2021, 09:21:38

Hey, the detection algorithms in Pupil Core do not perform any iris detection. Therefore, it is not possible with the built-in tools.

user-10fa94 20 February, 2021, 16:35:20

Is there a way to get 2d diameter of the Iris in addition to the pupil?

user-19bba3 20 February, 2021, 23:08:40

Which parts of the Pupil Capture software source code would need to change to use a different world camera via the USB-C mount? I'd like to use a zed mini (https://www.stereolabs.com/zed-mini/)

papr 22 February, 2021, 09:23:45

You would need to write your own video backend, similar to the realsense plugin (https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29) or https://github.com/Lifestohack/pupil-video-backend/

user-ae6ade 21 February, 2021, 13:52:53

hi, I'm a complete noob πŸ™ˆ the logs show me this

Chat image

user-ae6ade 21 February, 2021, 13:53:45

anyone got an idea what I can change? I'm not able to see where I'm looking, but eye0 and eye1 are calibrated and recognized

papr 22 February, 2021, 09:28:45

This issue is unrelated to you not being able to see the gaze prediction. It is related to the underlying user interface library. It is not able to open your "clipboard" (Zwischenablage) which is used to copy/paste text. Not sure why the access would be denied.

If you like, share an example recording of you calibrating with [email removed] and we can have a look and give specific feedback.

user-58edf1 22 February, 2021, 07:00:37

Hello everyone, why does surface6 enter and then surface2 enter? Shouldn't surface6 exit first and then surface2 enter?

Chat image

papr 22 February, 2021, 09:30:15

Surfaces are allowed to overlap. Your sequence of events is possible if Surface 2 is e.g. a subset of Surface 6.

user-ae6ade 22 February, 2021, 09:29:38

thanks, papr!

user-98789c 22 February, 2021, 15:55:17

Any idea why I get these instances of zero pupil diameter?

Chat image

user-58edf1 22 February, 2021, 16:04:57

Thank you! What kind of situation causes surfaces to overlap like that?

papr 22 February, 2021, 16:42:30

This can either be intentional, by setting up the surfaces appropriately (e.g. keyboard on desk; Surface A for keyboard, Surface B for desk), or unintentional, if separate surfaces end up overlapping because a surface is incorrectly estimated. This can happen if there are only a few markers available.

Looking at your screenshot again, it looks like all events happened in the same scene video frame. I suggest looking at the gaze_on_surface data for that frame to check out the exact gaze behaviour.

papr 22 February, 2021, 16:07:12

The diameter is zero during blinks or at other moments of low confidence, for example. I suggest filtering your data by confidence before processing it further. See the Plot Pupil Diameter section on how to remove low-confidence data: https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
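
For example, a minimal sketch of that confidence filter on an exported pupil_positions.csv (the path and the 0.8 threshold are just illustrations; pick a threshold that suits your data):

import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")  # path is an assumption
high_conf = pupil[pupil["confidence"] >= 0.8]  # drop low-confidence samples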

user-58edf1 22 February, 2021, 16:35:54

I see, Thank you.

user-98789c 22 February, 2021, 16:10:44

I have zero pupil size also in high-confidence instances. They must be blinks, right?

Chat image

papr 22 February, 2021, 16:44:46

These are probably not due to blinks but other measuring errors. I suggest following preprocessing guides, e.g. https://link.springer.com/article/10.3758%2Fs13428-018-1075-y One of the steps is to only include data points from a physiologically feasible range (e.g. 2.5-9mm)
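
A sketch of that range filter, continuing from a confidence-filtered DataFrame like the one sketched above (variable name assumed; the 2.5-9 mm bounds are the example range just mentioned):

# keep only physiologically plausible 3d diameters (in mm)
plausible = high_conf[high_conf["diameter_3d"].between(2.5, 9.0)]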

papr 22 February, 2021, 17:31:46

Looks like pyav is not able to find your ffmpeg correctly. Have you installed ffmpeg via brew?

user-f22600 22 February, 2021, 17:32:47

yes, I did. I'm getting the following message: Warning: ffmpeg 4.3.2 is already installed and up-to-date. To reinstall 4.3.2, run: brew reinstall ffmpeg

papr 22 February, 2021, 17:32:59

What is your output for brew --prefix ffmpeg?

user-f22600 22 February, 2021, 17:33:32

/usr/local/opt/ffmpeg

papr 22 February, 2021, 17:44:53

Looks like the ffmpeg path is correctly used. It still fails to find the ffmpeg symbols.

What is your output for python -c "import platform; print(platform.platform())"?

user-f22600 22 February, 2021, 17:34:09

Is it static vs dynamic library issue?

papr 22 February, 2021, 17:34:27

dynamic. One sec, I think I know a solution.

papr 22 February, 2021, 17:36:11

Please try running `FFMPEG_DIR=$(brew --prefix ffmpeg) pip install "av @ [email removed] /cc @user-764f72 please try this one as well

user-764f72 22 February, 2021, 17:45:38

same as before

custom_ffmpeg_av_pip_install_fail.log

user-f22600 22 February, 2021, 17:37:25

ERROR: Failed building wheel for av Failed to build av ERROR: Could not build wheels for av which use PEP 517 and cannot be installed directly

papr 22 February, 2021, 17:39:15

Is there more information that you can share with us? If the error message is very long, please use https://gist.github.com/ or https://pastebin.com/ to share the complete output.

user-f22600 22 February, 2021, 17:41:18

I've copied the complete output to https://gist.github.com/yjaycha/fe7e46a37e20846ceda49b9099566b5c

user-f22600 22 February, 2021, 17:46:09

macOS-11.2.1-x86_64-i386-64bit

papr 22 February, 2021, 17:47:34

Please download this wheel and install it via:

pip install <path to wheel>

https://drive.google.com/file/d/135iNhqSHlU7BPwKfUfMFaLC2vp2PGMkS/view?usp=sharing

user-f22600 22 February, 2021, 17:49:26

Done. Do I need to run requirements.txt again?

papr 22 February, 2021, 17:50:18

No, that might overwrite your av installation. Please try running Capture to see if anything is missing.

user-f22600 22 February, 2021, 17:51:19

ahh... it is not finding uvc this time...

Traceback (most recent call last):
  File "/Users/yoncha01/Project/git/pupil/pupil_src/launchables/world.py", line 140, in world
    from uvc import get_time_monotonic
ModuleNotFoundError: No module named 'uvc'

user-f22600 22 February, 2021, 17:52:52

This was the output re. uvc when running requirements.txt, seemed alright

Building wheel for uvc (PEP 517) ... done
Created wheel for uvc: filename=uvc-0.14-cp39-cp39-macosx_11_0_x86_64.whl size=200040 sha256=c5368201f2ef0d6158dfc381476bf44e0a4c4a591a704eec059797038fce0d81
Stored in directory: /private/var/folders/l5/dxqs5b1x67309qvyym_8b9640000gp/T/pip-ephem-wheel-cache-k6zr8dx_/wheels/cb/a5/c2/4db7d009c02fa34f5b138db80bfe561442e29aebcff314c5e9

papr 22 February, 2021, 17:54:06

The installation might not have gone through. Please try ```pip install "uvc @ [email removed]

user-f22600 22 February, 2021, 18:04:02

Thank you very much!! It works. I had to install pupil_detector and ndsi again after pyuvc. Thank you so much!! You really saved me today.

papr 22 February, 2021, 18:04:36

Great to hear that this is working now πŸ™‚

user-82b4f1 22 February, 2021, 18:33:42
  • I start pupil_capture, and I have two windows with pupil detection running, each using one of the supplied files eye0.mp4, eye1.mp4
user-82b4f1 22 February, 2021, 18:42:21
  • I have a jupyter session open where I test connecting to the service via the Network API (I think), where I test commands. I just succeeded in getting some basic output, but no pupil coordinates of any sort yet. More precisely, I could follow the Network API guide but I am stuck here (https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone) where I get no output (actually I modified the loop so that it is not infinite, but no luck).
user-82b4f1 22 February, 2021, 18:44:38
...continued from above

# Assumes sub_port to be set to the current subscription port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')

topic = 'pupil.0.2d.'
subscriber.subscribe(topic)  # receive all gaze messages

# we need a serializer
import msgpack

i = 0
while i < 10:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    print(f"{topic}: {message}")
    i += 1

papr 23 February, 2021, 09:57:58

I think the trailing dot in pupil.0.2d. is too much. Subscription works by prefix-matching. i.e. only messages whose topics start with this string will be received. The pupil data topics do not have a trailing . as far as I remember.
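
In other words, a small illustration of the prefix matching (topic strings assumed from the current data format, continuing with the subscriber socket from your snippet):

subscriber.subscribe("pupil.0.2d")  # matches the 2d pupil data of eye 0
subscriber.subscribe("pupil.")      # matches all pupil data (both eyes, 2d and 3d)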

user-82b4f1 22 February, 2021, 18:48:03

(sorry, I am still not good at formatting code in messages in Discord...)

papr 23 February, 2021, 09:53:51

There is a great markdown overview by Discord: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline-

user-82b4f1 23 February, 2021, 09:48:05

Hello EyeTracker, how did you generate this graph? I am trying to do something similar (from jupyter), but on pupil position, not diameter. But I'm still not able to read eye tracking data after the subscriber.connect(f'tcp://{ip}:{sub_port}') and subscriber.subscribe(topic). Maybe I'm doing it the wrong way. Any working example would greatly ease my start. Can you point me to some?

user-98789c 04 March, 2021, 17:42:36

I noticed this message only now. sorry. I'd be happy to help in any way I can.

papr 23 February, 2021, 09:55:27

Do you need the data in real time then? @user-98789c is following the pupil tutorials that work with Pupil Player exported csv files https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb

user-82b4f1 23 February, 2021, 10:02:14

Yes, real time would be a plus, but obviously I'd like to re-analyze recorded data as well. Oh, that's great, it works now! It was only that extra dot!

user-82b4f1 23 February, 2021, 09:57:36

Yes, I found it myself yesterday. Kind of unexpected that the good things can only be done within code blocks, e.g. one cannot make bullet lists or type CTRL+ENTER to make a newline within a message? I must study it better.

papr 23 February, 2021, 09:59:43

You can use SHIFT+ENTER to make

new lines. πŸ™‚

user-82b4f1 23 February, 2021, 10:03:11

πŸ‘ That's a combination I didn't try.

So a few additional questions:
0. I am interested in eye positions only (unrelated to a world view). Is starting pupil_service and querying the server via the network API the right approach? Are there leaner options?
1. Is there any tutorial available for real-time reading/graphing/analysis of pupil tracking data?
2. I switch the two "Detect eye 0 / 1" buttons in the main window on and off to start capturing from the two mp4 files. This way I can only switch one at a time (you mentioned there is no way to sync the two playbacks at the moment). Maybe adding a "start/stop eyes together [ ]" control to that window would ease the sync. Can you point me to the right place in the source where that window is managed?

papr 23 February, 2021, 17:49:37
  1. Sounds correct. Please be aware that you won't be able to create recordings with Pupil Service, only with Capture.
  2. This script shows a simple way to receive pupil data. You can use it with your favorite realtime plotting library. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
  3. Even if the command is sent to both windows at a time, it does not guarantee that the videos are played back in sync. This is the code for the ui elements https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/world.py#L617-L632 You can call
start_stop_eye(0, True)
start_stop_eye(1, True)

to start both eyes.

papr 23 February, 2021, 10:07:10

Let's hear them πŸ™‚

user-aa7c07 24 February, 2021, 01:15:16

Is it possible at all with the Pupil Core device to test vergence of the eyes - how well they work together? I am interested in testing for subtle skew deviations between the two eyes. I understand the calibration takes a constant average between the gaze points for higher accuracy, but is there any way around that? What would you suggest? Or is this the wrong device for that? Many thanks in advance πŸ™‚

nmt 24 February, 2021, 09:11:30

With Pupil Core, you can access the normal vectors for each eye individually. These describe the visual axis that goes through the eye ball center and the object that is looked at. You can see data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter. Also, this study that examined vergence using Pupil Core might be of interest: https://doi.org/10.1038/s41598-020-79089-1
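
If it helps, here is a rough sketch of how one might quantify vergence from the per-eye gaze normals in an exported gaze_positions.csv (an illustration only, not a validated procedure; the file path is an assumption):

import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # path is an assumption
n0 = gaze[["gaze_normal0_x", "gaze_normal0_y", "gaze_normal0_z"]].to_numpy()
n1 = gaze[["gaze_normal1_x", "gaze_normal1_y", "gaze_normal1_z"]].to_numpy()

# angle between the two visual axes, per binocular sample
cos_a = np.sum(n0 * n1, axis=1) / (np.linalg.norm(n0, axis=1) * np.linalg.norm(n1, axis=1))
vergence_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))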

user-98789c 24 February, 2021, 12:09:33

Chat image

papr 24 February, 2021, 13:37:01

You are comparing the whole data frame to your trial label, instead of comparing it to the trial column

- annotations_pd_frame[annotations_pd_frame == str(trial_label)]
+ annotations_pd_frame[annotations_pd_frame.label == str(trial_label)]

Although I think a nicer solution would be to group by label:

annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])

and loop over that result instead of the indices.

Check out this tutorial for more information on pandas grouping functionality: https://realpython.com/pandas-groupby/
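
A sketch of what that loop could look like (variable and column names assumed from your snippet):

trial_ranges = annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])

for trial_label, row in trial_ranges.iterrows():
    mask = eye0_high_conf_df["pupil_timestamp"].between(row["min"], row["max"])
    eye0_high_conf_df.loc[mask, "trial"] = trial_label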

user-98789c 25 February, 2021, 09:50:45

message.txt

user-98789c 24 February, 2021, 15:42:38

Thanks, I'm working on it. Is there a relationship between the indices in annotations.csv and the world indices in pupil_positions.csv? And can they be set to be synchronous?

papr 24 February, 2021, 19:07:11

no, there is not. If you want to associate data of different frequency with each other, you need something similar to https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L359-L373
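
For illustration, a rough re-implementation of that matching idea with numpy (not the exact Player code):

import numpy as np

def find_closest(target_ts, source_ts):
    """For every source timestamp, return the index of the closest target timestamp."""
    target = np.asarray(target_ts)  # must be sorted
    source = np.asarray(source_ts)
    idx = np.searchsorted(target, source)
    idx = np.clip(idx, 1, len(target) - 1)
    left, right = target[idx - 1], target[idx]
    idx -= source - left < right - source  # snap to whichever neighbour is closer
    return idx

# e.g. match annotation timestamps to pupil timestamps:
# pupil_idx = find_closest(pupil_df["pupil_timestamp"].to_numpy(),
#                          annotations_df["timestamp"].to_numpy())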

user-aa7c07 24 February, 2021, 18:22:22

@nmt thank you very much!

user-83ea3f 25 February, 2021, 03:13:59

Hi there, is it possible to turn on surface tracker and add surface with Pupil Remote?

papr 25 February, 2021, 08:50:10

Hey. Yes, it is possible to turn it on via a notification. But it is not possible to add a surface programmatically. The intended workflow assumes a one-time setup of the surfaces and reusing the definitions across multiple sessions.
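
For reference, a minimal sketch of such a start_plugin notification sent over Pupil Remote (the plugin class name "Surface_Tracker_Online" is an assumption based on the Capture source):

import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# assumption: "Surface_Tracker_Online" is the surface tracker plugin class name in Capture
notification = {"subject": "start_plugin", "name": "Surface_Tracker_Online", "args": {}}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
print(pupil_remote.recv_string())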

user-82b4f1 25 February, 2021, 09:50:26

hello @papr , does the pye3d model publish also corneal shape data? Under which topic?

papr 25 February, 2021, 09:52:35

No, it does not. It assumes an average corneal shape (@user-92dca7 please correct me if this is incorrect/inaccurate). The pupil.0.3d and pupil.1.3d data contains already everything that pye3d publishes.

user-98789c 25 February, 2021, 09:53:47

changed it according to your suggestion:

import csv

all_labels = annotations_pd_frame['label']
label_names = list(all_labels)

eye0_high_conf_df["trial"] = None
eye1_high_conf_df["trial"] = None

all_indices = annotations_pd_frame['index']

groups = annotations_pd_frame.groupby("label").timestamp.agg(["min", "max"])

for label in label_names:
    trial_label = label
    trial_start_timestamp = groups['min'].loc[trial_label]
    trial_end_timestamp = groups['max'].loc[trial_label]
    trial_boolean_mask_eye0 = eye0_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    trial_boolean_mask_eye1 = eye1_high_conf_df['pupil_timestamp'].between(trial_start_timestamp, trial_end_timestamp)
    eye0_high_conf_df["trial"].loc[trial_boolean_mask_eye0] = trial_label
    eye1_high_conf_df["trial"].loc[trial_boolean_mask_eye1] = trial_label
    mask_trial_not_available_eye0 = eye0_high_conf_df["trial"].isna()
    mask_trial_not_available_eye1 = eye1_high_conf_df["trial"].isna()
    mask_trial_available_eye0 = ~mask_trial_not_available_eye0
    mask_trial_available_eye1 = ~mask_trial_not_available_eye1
    pupil_data_in_trial_eye0 = eye0_high_conf_df.loc[mask_trial_available_eye0]
    pupil_data_in_trial_eye1 = eye1_high_conf_df.loc[mask_trial_available_eye1]

label_names = list(dict.fromkeys(label_names))
for label in list(label_names):
    plt.figure(figsize=(16, 5))
    plt.plot(pupil_data_in_trial_eye0['pupil_timestamp'], pupil_data_in_trial_eye0['diameter_3d'])
    plt.plot(pupil_data_in_trial_eye1['pupil_timestamp'], pupil_data_in_trial_eye1['diameter_3d'])
    plt.legend(['eye0', 'eye1'])
    plt.xlabel('Timestamps [s]')
    plt.ylabel('Diameter [mm]')
    plt.title('Pupil Diameter in ' + str(label) + ' sequences')

now the grouping is done correctly, but the same data goes into pupil_data_in_trial_eye0 and it's not sliced according to 'trial'.

papr 25 February, 2021, 14:16:13

I (or my colleagues) would be happy to dive into code review/help you with custom implementation questions. In order for me to devote the time and attention required, I ask you to consider purchasing a support contract.

user-82b4f1 25 February, 2021, 09:59:27

So it does not really fit the corneal shape (a first model could be, e.g., an off-centred spherical surface whose main parameters are central protrusion and central curvature); it uses a fixed one from external data.

user-92dca7 25 February, 2021, 14:16:17

You are correct @user-82b4f1, we are using population averages of relevant physiological parameters, i.e. assume an average cornea in our correction.

user-bf78bf 25 February, 2021, 10:35:36

hi, I have recently started working with the Pupil Core eye tracker for a psychological experiment. For that, I have to record the participants' gaze positions on the monitor, with respect to the monitor itself, as we have to plot them later.

I am able to extract data from the Python API, but I'm not sure what the data means or which data would be optimal for my experiment. Can someone guide me as to what norm_pos_x and gaze_3d_x mean, or point me to better data that will help with this?

nmt 25 February, 2021, 11:38:37

Hi @user-bf78bf, it sounds like you are trying to map your gaze points to the monitor. Have a look at our surface tracking plugin. You can add fiducial markers (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking) to your monitor and use the Surface Tracker plugin in Pupil Capture to automatically detect the markers, define your monitor as a surface, and obtain gaze positions relative to that. Read about surface tracking and data made available here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

user-58edf1 25 February, 2021, 12:03:14

Can Pupil Core output a log of when gaze enters and exits each surface? Thank you.

Chat image

nmt 25 February, 2021, 13:24:11

After you have used the Surface Tracker plugin in Pupil Capture to automatically detect the markers, and defined your surfaces, you can export the gaze and surface data. The exported file: 'surface_events.csv' contains a log of when gaze enters and exits each surface. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker

user-430fc1 25 February, 2021, 13:03:53

Hello, when I start Capture I am seeing this error and one of the eye cameras is failing to provide an image. It just started happening randomly. The eye camera window says the camera is disconnected, but everything seems to be well-connected.

Chat image

user-10cdee 22 March, 2021, 13:19:31

Hi! I'm experiencing the same issue, did you manage to get this solved?

user-0dec4c 25 February, 2021, 14:12:25

Hey, is there a possibility to write the detected blinks onsets and offsets into the LSL stream as markers?

user-f22600 25 February, 2021, 16:02:35

Hi, can I run the player from the command line, giving the path to the recording as an argument, so the player starts immediately?

papr 25 February, 2021, 16:25:39

Yes, you can.
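
For example (exact paths and argument support depend on your platform and Pupil version, so treat these as assumptions):

pupil_player.exe "C:\recordings\001"        # Windows bundle
python main.py player /path/to/recording    # from source, run inside the pupil_src folder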

user-82b4f1 26 February, 2021, 11:33:10

(sorry to come into this conversation with @user-98789c ) Besides the translation, it rather seems that there is a ~160Β° rotation (not a single-axis flip) of the two coordinate systems with respect to each other ...

user-98789c 25 February, 2021, 20:09:20

Does this difference in the y axis between the two eyes have any meaning? Can it mean that the participant was tilting their head?

Chat image

papr 25 February, 2021, 20:11:53

The eye cameras have independent coordinate systems. There is no special meaning in the coordinate differences in this case.

user-83ea3f 26 February, 2021, 01:59:33

do you know how much white space I need to have for detection?

papr 26 February, 2021, 08:11:45

You need at least a border of 1 "marker pixel". So the required size of the border scales with the displayed/printed size of the marker.

user-430fc1 26 February, 2021, 11:29:31

Hi all, does anyone have any data on how the new pye3d detector plugin compares to the old 3d model in terms of the accuracy of absolute units of pupil diameter? My initial impression is that it could be underestimating pupil size (in comparison), but maybe my pupils are really that small.

user-82b4f1 26 February, 2021, 12:06:02

Hello @papr, I have gotten myself two videos of eyes and am trying to get them "captured" with pupil_capture, but I get the message "Could not playback file! Please check if file exists and if corresponding timestamps file is present." How can I generate such data files? Do I need more data files other than that? (In fact I assumed that was the purpose of pupil_capture: to generate all these files after the actual video is grabbed in real time, or, just for test purposes, from a video recording.)

papr 26 February, 2021, 14:14:09

The file backend is primarily used in Pupil Player and is meant to playback Capture recorded videos. These are meant to be recorded from a live video source. You can use the eye videos from this example recording https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing It includes the timestamps. You can also generate timestamps for the video. See the docs for the required format: https://docs.pupil-labs.com/developer/core/recording-format/#timestamp-files
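
For the timestamps, a minimal sketch of generating an evenly spaced timestamp file for an existing eye video (frame count and frame rate are assumptions you need to match to your video):

import numpy as np

n_frames = 12000  # assumption: number of frames in eye0.mp4
fps = 120.0       # assumption: frame rate the video was recorded at

timestamps = np.arange(n_frames, dtype=np.float64) / fps
np.save("eye0_timestamps.npy", timestamps)  # must sit next to eye0.mp4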

user-82b4f1 27 February, 2021, 11:50:02

I tried to supply pupil_capture or pupil_service with my own videos eye0.mp4 and eye1.mp4 (but not world.mp4 so far) and made identical timestamps files for each of them. But that doesn't seem to be enough. I get this error message: No camera intrinsics available for camera eye0 at resolution (1280, 720)! (which looks strange, as my video is 400x200; in fact a crop of a larger image via ffmpeg). Do I need any other file in the directory where I have the eye files? Do I need a (dummy) world video maybe?

user-98789c 26 February, 2021, 14:52:07

In https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb the only difference between the first and the second plots is that the second one only contains high-confidence values. How come the y-axis label in the second one is pixels, while in the first one it is mm?

papr 26 February, 2021, 14:52:53

That is a mistake. Both should be in mm. Thank you for pointing it out!

user-98789c 26 February, 2021, 14:53:36

Thanks πŸ‘

user-b772cc 28 February, 2021, 10:28:43

Hi @papr , I have questions about surface definition for multiple static face stimuli (e.g. 20) across participants (e.g. 20). I understand that if the AOI is a certain part of the face, e.g. the mouth area, I can only do post-hoc surface definition for each face and each participant, which adds up to 20 surfaces per participant - I can't reuse the surface defined for the same face across participants? Is there a more efficient way to define the AOIs? Thank you in advance.

End of February archive