👁 core


user-7b683e 01 September, 2021, 09:04:08

This request can be done in 3 ways: 1. using Pupil Player, as papr has mentioned to you; 2. inspecting the recording files; 3. collecting data from the Network API as a video stream and serialized objects - i.e. json.

papr 01 September, 2021, 09:04:52

small technical correction: The Network API uses msgpack to serialize data, not json. 🙂

user-7b683e 01 September, 2021, 09:07:38

Yes, I agree. I guess, however, msgpack just converts json-like phrases into smaller expressions with its own compression method.

papr 01 September, 2021, 09:08:49

msgpack is conceptually very similar to json, yes, with the main difference that json uses text and msgpack uses binary data (and is therefore more efficient in various ways)

user-6a9ca1 01 September, 2021, 12:44:32

dear pupil-labs, I would actually like to hook on to your mention of msgpack. I'm having difficulty reading pldata in Python. I'm getting the error "unpickling stack underflow" (on top of the error "ExtraData: unpack(b) received extra data"), where the internet states the pickle data is corrupt, but loading the data from Player seems to work fine. I'm using Python 3.8 and I've downgraded msgpack to 0.6.2 following a suggestion from an earlier error.

papr 01 September, 2021, 12:55:34

Could you specify what code you use to read the file?

user-6a9ca1 01 September, 2021, 13:00:00

Thanks for your prompt reply!

```python
from lib.pupil.pupil_src.shared_modules import file_methods as pl_file_methods

pl_file_methods.load_object('/Users/marsman/organized_data/s001_b6/raw/pl/pupil.pldata')
```

papr 01 September, 2021, 13:05:18

load_object is not meant for pldata files. Please use load_pldata_file instead.
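
A minimal sketch of the corrected call, keeping the import path from the message above; load_pldata_file takes the recording directory plus the topic name rather than the full file path:

```python
from lib.pupil.pupil_src.shared_modules import file_methods as pl_file_methods

# Pass the recording directory and the topic ("pupil" -> pupil.pldata)
recording_dir = '/Users/marsman/organized_data/s001_b6/raw/pl'
pldata = pl_file_methods.load_pldata_file(recording_dir, 'pupil')

# pldata is a namedtuple with .data, .timestamps, and .topics
for datum, ts in zip(pldata.data, pldata.timestamps):
    print(ts, datum['confidence'])
```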

user-6a9ca1 01 September, 2021, 13:15:48

Thanks a bunch! that solved my issue! (after upgrading msgpack again)

user-678fa4 01 September, 2021, 15:34:30

Hello, I'm having an issue where Pupil Capture is crashing on startup. I've been using the software reliably for about 5 months now, but after switching to a new computer, the software isn't responding. The eye windows connect, but then instantly go blank and stop streaming video. Then the whole software crashes. I've tried reinstalling the software, rebooting the computer, and reconnecting the headset. Any ideas why this might be happening?

papr 01 September, 2021, 15:36:41

Hi! I am sorry to hear that. Could you share the Home directory -> pupil_capture_settings -> capture.log file with us?

user-678fa4 01 September, 2021, 15:44:14

Here is the file

capture.log

user-678fa4 01 September, 2021, 15:46:26

"No recorded intrinsics found for camera pupil..." - perhaps this error might point us in the right direction?

papr 01 September, 2021, 16:34:14

Mmh, I do not see any traces of a crash or similar in the log file. Please delete the user_settings_* files in the same folder as the capture.log file and try again. If this does not resolve your issue please contact info@pupil-labs.com in this regard.

user-678fa4 01 September, 2021, 16:35:05

Will try that. Thanks for looking into it! 😊

user-a79827 02 September, 2021, 09:51:31

when I try to start Pupil Player I get this error. The window closes very fast. Does anyone know how to fix it? I work on Windows 10 / 64-bit.

Chat image

papr 02 September, 2021, 14:52:12

Hi, could you please share some information about your system (System Settings > System > About)?

user-4d0392 02 September, 2021, 19:00:43

Hi, I am facing an issue while working with Pupil Core. While measuring pupil position data, I can see that the amounts of right-eye and left-eye data are not equal. I have tested multiple times, but every time there is less right-eye data than left-eye data. This is why the values get de-synced over time. Has anyone faced this issue? If you have, how did you overcome it?

mpk 06 September, 2021, 08:52:24

the cameras are free-running, so you will always have different sample counts. This is OK since you can use the timestamps to find matching pairs of left and right data. Check out https://docs.pupil-labs.com/developer/core/overview/#timing-data-conventions for more.
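
A sketch of such timestamp matching with numpy (a minimal example with illustrative timestamps, not Pupil's own matching code):

```python
import numpy as np

# Illustrative free-running timestamps (seconds, Pupil time)
left_ts = np.array([0.000, 0.005, 0.010, 0.015, 0.021])   # eye 1
right_ts = np.array([0.002, 0.007, 0.013, 0.018])          # eye 0

# For every right-eye sample, index of the nearest-in-time left-eye sample
idx = np.searchsorted(left_ts, right_ts)
idx = np.clip(idx, 1, len(left_ts) - 1)
take_prev = np.abs(right_ts - left_ts[idx - 1]) < np.abs(right_ts - left_ts[idx])
idx[take_prev] -= 1  # nearest neighbour, not just the right-side insert point

print(list(zip(right_ts, left_ts[idx])))  # matched (right, left) timestamp pairs
```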

user-7b683e 05 September, 2021, 08:50:11

Hey, in my experience this case is very normal, because each sensor (instrument) has a truncation (or precision) rate. For example, the 3D model of Pupil Core has an accuracy between about 1.5 and 3 degrees. So you can get a result nearly 2 cm away from where you actually looked when you are 50 cm from a target - e.g. a laptop display. In your case, you have 2 eye cameras, and these cameras can give you data with different precision rates. That can be the first thing that affects the confidence of the system. On the other hand, another thing that affects the accuracy can be your eye structure: in some subjects, one eye can appear different from the other.

mpk 03 September, 2021, 13:07:27

Hi, the headset is made of up to three USB cameras and a USB hub with voltage regulators. You can in theory connect any USB device, and you can also connect the cameras directly with a USB cable.

user-7b683e 03 September, 2021, 13:37:47

Thanks for your reply. Well, are there other components besides the cameras and hub in the current architecture, such as a gyroscope?

mpk 03 September, 2021, 13:56:27

In Pupil Invisible, yes; in Core, no.

user-561f5a 04 September, 2021, 01:19:02

Hi everyone. Just starting to explore the exciting world of Pupil Labs. 🥳 I've researched gaze interaction before, using remote gaze trackers on desktops. (Can share more about that, if anyone's interested.)

As a start, I want to test the software, so I'm looking to use my phone as the "eye camera." I have been reading up on the earlier discussions, which have been very useful. (Thanks for that, everyone.) I might try using pupil-video-backend to feed the eye camera (in Pupil Capture) from my phone. Has anyone else done something similar? Thanks in advance!

user-7b683e 04 September, 2021, 13:32:54

I don't think so, because of the general attributes (like infrared usage and frequency) and resolutions of the cameras that are bound to the headset.

user-561f5a 04 September, 2021, 04:35:54

How did it go @user-670bd6 ? Thanks.

user-670bd6 13 September, 2021, 07:27:43

unfortunately I haven't gotten any help with regards to this issue; I think the recommended lens is just not compatible

user-7b683e 04 September, 2021, 13:49:36

On the other hand, I guess Pupil Labs' business model is hardware sales; in this view, the costs of the software made by Pupil Labs are actually recovered this way. For this reason, I don't think you can use Pupil Core with your own cameras other than those used in the current headset. However, I wonder what answer the developers will give.

user-7b683e 05 September, 2021, 08:54:56

My suggestion is to apply a statistical model to your data. For example, if the confidence of each datum coming in from the eye cameras is high enough - e.g. > .8 - you can use interpolation to estimate the missing values.

user-21055f 05 September, 2021, 16:25:09

Hi, I can't start recording in Pupil Core and get this error. I would like to know the solution. Thank you very much for your help.

Chat image

user-1d3558 05 September, 2021, 17:22:40

Hey, I'm trying to use Pupil Player to process eye videos using post-hoc pupil detection. The 3d eye position at the very beginning of the videos is very inaccurate, and it gets far more accurate after about 90 seconds. I know that the eye cameras do not move significantly during the video, so how can I get the eye location processed into a single location for the entire video?

wrp 06 September, 2021, 06:36:20

@user-1d3558 you could re-run pupil detection post-hoc in Pupil Player and freeze the eye model (if there is no movement of the headset).

papr 06 September, 2021, 08:49:27

@user-1d3558 To extend this response: Once the eye model is frozen during the post-hoc pupil detection, you can restart the detection process from the menu and the frozen model will be applied to the complete recording.

user-835f47 06 September, 2021, 06:40:37

Hi, how are you? I'm trying to work with Pupil Core and an Intel RealSense D415 camera mounted on top of it. My issue is that Pupil Capture can't recognize the D415 camera although my laptop recognizes it. I have tried to implement what is suggested in troubleshooting but it did not help. I have an HP Pavilion with Windows 10. Thanks a lot and have a nice day

mpk 06 September, 2021, 08:47:14

Hi, welcome to Pupil Labs! You can actually connect any USB camera to Pupil Capture, but you will need the cameras mounted a certain way (check out what the HW looks like: https://pupil-labs.com/products/core/) and the eye cameras need to operate in IR. Without this, the SW will not produce anything meaningful. As @user-7b683e said, we finance all our work through selling the HW, but we are keeping Pupil Core open source and we don't lock you in or force you to use our HW. (We do think that we offer fair prices for what our HW and SW can do though 🙂.)

user-561f5a 08 September, 2021, 09:31:56

Thanks for the welcome and the reply, @mpk. I was also interested in the Moverio add-on, but it looks like the supported model of Moverio glasses has been discontinued by Epson. Any thoughts/expectations on that front?

user-e242bc 06 September, 2021, 09:42:22

Hi. I want to ask if it is possible to use the Screen Marker Calibration for three screens at the same time? Our experiment will use three connected screens and we have to calibrate for all three, but I only found a calibration option for each screen separately

papr 06 September, 2021, 09:52:12

Hi, with Pupil Core, gaze is not calibrated relative to a specific screen but to the scene camera's field of view. The screen selection in the calibration menu is more about where the markers are being displayed. If you are interested in gaze that is relative to your screens, have a look at our surface tracking plugin: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
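
A sketch of what consuming screen-relative gaze can look like once a surface is defined (this assumes Capture running locally and a surface named "Screen1", which is hypothetical):

```python
import msgpack
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect('tcp://127.0.0.1:50020')   # Pupil Remote default port
req.send_string('SUB_PORT')
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f'tcp://127.0.0.1:{sub_port}')
sub.subscribe('surfaces.Screen1')      # "Screen1" is a hypothetical surface name

while True:
    topic, payload = sub.recv_multipart()
    event = msgpack.unpackb(payload, raw=False)
    for gaze in event.get('gaze_on_surfaces', []):
        print(gaze['norm_pos'], gaze['on_surf'])  # surface coords in [0, 1]
```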

user-e242bc 06 September, 2021, 12:42:55

Thank you for your answer! So is it possible to show the markers on the three screens at the same time? Because I think the area where the markers were shown will be the most accurate part of the FOV, if I understand correctly

user-1d3558 06 September, 2021, 15:55:45

Hey, I am having trouble with my exported "pupil_positions.csv" file. I built a system that works well when the "method" column is purely "3d c++", and my exports used to use this method exclusively. Now I get a mix of "pye3d 0.1.1 post-hoc" and "2d c++" as the method, and this is causing problems with how we read the data. Is there a way to guarantee that the method used is 3d c++?

papr 06 September, 2021, 15:57:37

Correct. Unfortunately, the screen marker calibration does not support displaying markers on multiple displays at once. Instead, I suggest using our single marker choreography where you display a fixed marker and ask the subject to fixate it while rotating the head. This way you have full control over the calibration area.

user-e242bc 06 September, 2021, 19:35:46

Thank you! it helps me a lot!

papr 06 September, 2021, 16:01:49

Hi, starting with Pupil Core 2.0, we run both (2d and 3d) pupil detectors in parallel. As a result, you get two rows per frame. Starting with Pupil Core 3.0, we replaced the legacy 3d detector (3d c++) with pye3d. The latest version compatible with your system is therefore https://github.com/pupil-labs/pupil/releases/v1.23 Alternatively, I recommend adjusting the system to accept the newer exports and dropping the rows that you are not interested in.

user-1d3558 06 September, 2021, 16:10:28

Ok thanks, it shouldn't be too difficult to ignore or remove those rows

user-1d3558 06 September, 2021, 18:56:56

```
player - [INFO] camera_models: Loading previously recorded intrinsics...
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 624, in player
  File "plugin.py", line 398, in __init__
  File "plugin.py", line 421, in add
  File "gaze_producer\gaze_from_offline_calibration.py", line 57, in __init__
  File "gaze_producer\gaze_from_offline_calibration.py", line 114, in _setup_controllers
  File "gaze_producer\controller\gaze_mapper_controller.py", line 42, in __init__
  File "gaze_producer\controller\gaze_mapper_controller.py", line 129, in publish_all_enabled_mappers
  File "gaze_producer\controller\gaze_mapper_controller.py", line 151, in _create_gaze_bisector_from_all_enabled_mappers
  File "player_methods.py", line 46, in __init__
ValueError: Each element in data requires a corresponding timestamp in data_ts

player - [INFO] launchables.player: Process shutting down.
```

user-1d3558 06 September, 2021, 18:57:34

When I try to use Pupil Player I get this error. I've tried verifying the files and reinstalling, but nothing seems to fix it

papr 07 September, 2021, 13:06:48

You can reset the recording by deleting the offline_data folder. Afterward, it should open as expected in the latest Player release.

user-1d3558 06 September, 2021, 19:15:39

It appears that glitch only occurs in data sets I have already opened. When I open a fresh data set, the Pupil Capture eye 0/1 windows used in post-hoc pupil detection crash and I get this error message

Chat image

user-10631a 07 September, 2021, 07:52:49

Hi, I need help with a problem that occurred recently. I have never had problems using Pupil Core, but now, every time I open Pupil Capture, everything opens correctly, but as soon as I move the world camera the program crashes. What could be the reason?

papr 07 September, 2021, 07:54:28

Please contact info@pupil-labs.com in this regard. 🙂

user-10631a 07 September, 2021, 07:58:58

Ok thanks

user-85ba36 07 September, 2021, 12:19:36

Hi, I use Pupil Mobile with a Moto Z3 when I need a small recording device, but sometimes my recordings are truncated. Why is this happening?

papr 07 September, 2021, 13:09:16

This happens if the app is terminated unexpectedly. This can have a variety of reasons. Unfortunately, the data can only be recovered partially. 😕

user-1d3558 07 September, 2021, 13:07:00

I did find a solution to this problem. I had to uninstall my Cisco VPN and that fixed it

papr 07 September, 2021, 13:07:59

Our next release will also no longer crash if this edge case happens.

user-1d3558 07 September, 2021, 13:09:02

I also got this one fixed by reinstalling the software from my Program Files (x86) to the root of the drive

papr 07 September, 2021, 13:11:19

This should not have made a difference. What might have happened is that you installed a different version, which resets the session settings. When you open the recordings afterward, it will load the gaze data from the recording by default. The issue is related to the post-hoc calibration, which needs to be enabled manually.

user-85ba36 07 September, 2021, 13:15:37

Te application is running all the time and displays the recording time

papr 07 September, 2021, 13:17:27

It is possible that the recording (background) process is what is being terminated (not necessarily a user action). Unfortunately, I cannot offer an investigation of the possible causes as we no longer maintain Pupil Mobile. 😕

user-85ba36 07 September, 2021, 13:34:57

Can I use another mobile phone, or do you recommend the Moto Z3?

papr 07 September, 2021, 13:36:52

We have also had good experiences with the OnePlus 6 (1+6), but that phone is no longer sold by OnePlus directly. Newer phones might have newer Android versions installed with which Pupil Mobile was not tested.

user-85ba36 07 September, 2021, 13:40:28

Okay, thank you very much

user-069b04 08 September, 2021, 08:47:46

Hello, I am using Pupil Core for the first time and I cannot see my eyes in Pupil Capture. I already tried to run the exe as administrator - nothing changed. This is the error message:

Chat image

papr 08 September, 2021, 09:01:45

Could you please make sure the headset is connected and try "Restart with defaults" from the general settings?

user-069b04 08 September, 2021, 09:03:07

The headset is connected via USB and the restart did not help

user-069b04 08 September, 2021, 09:04:16

I can see "world view" so my computer at least recognizes one camera

papr 08 September, 2021, 09:06:58

If you go to the "Video source" menu, enable "Manual camera selection", and open the "Select source" selector, what values are being listed?

user-069b04 08 September, 2021, 09:10:17

Where do I find the "Select Source" selector?

Chat image

papr 08 September, 2021, 09:13:28

My bad, I was referring to "Select camera". Your screenshot shows the necessary information, thank you! As you can see, there are two "unknown" cameras. This means that the automatic driver installation for the eye cameras did not work. Please perform steps 1-7 from here [1] for both eye cameras (Pupil Cam2 ID0/1, or similar). Afterward, Capture should list all three cameras by their correct names.

[1] https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md

user-069b04 08 September, 2021, 09:16:45

Okay thank you, I will follow these steps

user-7bd058 08 September, 2021, 09:35:41

Hello! I annotated some videos manually with the annotation player. I wrote down the mistakes I made (as there is no possibility to correct them), like "Fixation 1345 --> label y instead of label x". Now I wanted to manually correct these mistakes in Excel, but the annotation csv file does not include the fixation corresponding to the label. Is there any possibility to correct them? Thanks a lot

papr 08 September, 2021, 09:48:41

In reverse: look up the start_frame_index-end_frame_index range for a given id in fixations.csv, and find the annotation csv row whose index lies within that range.

papr 08 September, 2021, 09:46:21

Fixations often span multiple frames, but manual annotations are associated with a single timestamp/scene video frame. You can identify the fixations based on the annotation's index value and the fixation's start_frame_index-end_frame_index range. (Index always refers to the scene video frame index in this context.)
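
A sketch of this lookup with pandas; the file paths and column names follow the messages above, so verify them against your actual export:

```python
import pandas as pd

fixations = pd.read_csv('exports/000/fixations.csv')
annotations = pd.read_csv('exports/000/annotations.csv')

def fixation_id_for(frame_index):
    # The fixation whose scene-frame range contains this annotation's frame
    hit = fixations[(fixations['start_frame_index'] <= frame_index)
                    & (fixations['end_frame_index'] >= frame_index)]
    return hit['id'].iloc[0] if len(hit) else None

annotations['fixation_id'] = annotations['index'].map(fixation_id_for)
```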

user-7bd058 08 September, 2021, 09:56:37

thank you very much, it works! Maybe it would make sense to add the fixation id into the annotation csv in the future?

papr 08 September, 2021, 09:59:26

For these special cases, we recommend building custom plugins or post-processing scripts that implement the needed functionality. Should you need help with that, feel free to contact info@pupil-labs.com for more information.

papr 08 September, 2021, 09:57:36

It would for your very specific use case, but it does not generalise to everyone using annotations. Annotations are agnostic to what they are annotating in order to be as flexible as possible. 🙂

user-7bd058 08 September, 2021, 09:57:23

it's just a recommendation, but maybe our case is very specific 😁

user-7bd058 08 September, 2021, 10:00:05

alright thank you

user-069b04 08 September, 2021, 10:57:12

Hey again, I followed the steps and Pupil Cam1 ID2 is presented in the device manager, but the eyes are still not shown in Pupil Capture. Is it possible that the camera is broken?

papr 08 September, 2021, 11:05:47

You might want to run steps 1-5 from here https://docs.pupil-labs.com/core/software/pupil-capture/#windows before trying to manually install drivers again

papr 08 September, 2021, 11:04:51

If they are listed in the device manager, the cameras are fine. They just need to be in the correct category (libUSBk)

user-069b04 08 September, 2021, 11:07:26

yes, one camera is listed (my model has only one pupil tracking camera, on the right side)

Chat image

papr 08 September, 2021, 11:11:31

Also, it looks like you might have installed the libusbk drivers for your fingerprint reader by accident. It is possible that this prevents the fingerprint reader from working correctly. You might want to uninstall the driver for this particular entry.

papr 08 September, 2021, 11:08:08

Just to make sure: is there a Pupil Cam2 ID0 listed somewhere in the device manager? The camera visible in this category is the scene camera, which is already listed correctly in Capture.

user-069b04 08 September, 2021, 11:07:35

okay, I'll follow these steps again

user-069b04 08 September, 2021, 11:09:20

ah okay, no there is no Cam2 ID0

papr 08 September, 2021, 11:09:55

Neither under Imaging Devices nor Cameras? In this case, please contact info@pupil-labs.com

user-069b04 08 September, 2021, 11:11:17

no, nowhere. I will contact them. Thank you very much

user-069b04 08 September, 2021, 11:15:15

It doesn't change anything

papr 08 September, 2021, 11:15:37

In what regard?

user-069b04 08 September, 2021, 11:16:23

I uninstalled the fingerprint reader driver and there is still no Cam2 ID0

papr 08 September, 2021, 11:17:09

Ah, yeah, that was expected. I suggested this to ensure that your fingerprint reader was working as expected

user-069b04 08 September, 2021, 11:17:24

ah okay 😄 sorry

user-836031 08 September, 2021, 13:27:18

I have never been able to get the eye camera to connect on the Pupil w120 e200. Any solutions?

papr 08 September, 2021, 13:34:44

Please see my conversation with @user-069b04 above 🙂

user-836031 08 September, 2021, 13:35:18

I did, and I am looking for a way to run the .exe on Mac. Could you please send the link for Mac?

user-4eef49 09 September, 2021, 07:23:42

Hi. I have the Pupil Core glasses. I am trying to get gaze information from them via the API. I successfully connect to the API and get the subscription port via zmq. I subscribe to the topics 'pupil' and 'gaze', but I only get pupil messages. In the recording logs I can see gaze messages. What am I doing wrong?

papr 09 September, 2021, 07:35:41

You need to calibrate before gaze data is being generated. Could you clarify what you mean by "in loggings of recordings"?

user-4eef49 09 September, 2021, 07:44:04

I did the calibration process using the Pupil Capture software. I can see gaze messages in the recordings: gaze.pldata.

papr 09 September, 2021, 07:46:41

ok, thanks for the clarification. This indicates an issue with your script.

user-4eef49 09 September, 2021, 07:53:22

Thank you. Can you please check my script?

test_com.py

papr 09 September, 2021, 07:56:05

Ah, you shouldn't sleep in that loop. zmq receives and buffers the data in the background. Once the background queue is full, it will drop new messages instead of dropping the old ones.
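
A sketch of the loop without the sleep, assuming `subscriber` is the SUB socket already configured in your script; recv_multipart blocks until the next message arrives, so no pause is needed:

```python
import msgpack

while True:
    # Draining as fast as data arrives keeps zmq's internal queue from filling
    topic, payload = subscriber.recv_multipart()
    message = msgpack.unpackb(payload, raw=False)
    print(topic.decode())
```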

user-4eef49 09 September, 2021, 08:20:19

ok, I deleted the time.sleep line, but there are still no gaze messages. So I tried to run the calibration again using Pupil Capture, but I get this error: not sufficient pupil data available.

papr 09 September, 2021, 08:22:52

I suggest only printing the topic for a start. This makes it a bit easier to check if you got gaze data or not. You will need a successful calibration, though. Regarding the error message, it sounds like your eye processes are not running or the pupil detection is very bad, causing all samples to be discarded.

by the way, you can easily edit your messages via the message context menu. There is no need to delete them 🙂

user-4eef49 09 September, 2021, 08:29:12

You are a big help. Thank you. You are right, the processes are not running. I tried to start them, but I get this error:

```
eye0 - [WARNING] launchables.eye: Process started.
Estimated / selected altsetting bandwith : 151 / 256
!!!!Packets per transfer = 32 frameInterval = 82712
Process eye0:
Traceback (most recent call last):
  File "launchables\eye.py", line 743, in eye
  File "zmq_tools.py", line 174, in send
  File "msgpack\__init__.py", line 35, in packb
  File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.
```

papr 09 September, 2021, 08:31:31

This sounds like you are out of memory. 🙂

user-4eef49 09 September, 2021, 10:40:00

Where can I get the position from gaze data? I am interested in dx and dy values for consecutive measurements. Is it norm_pos? What are its max and min values?

user-4eef49 09 September, 2021, 08:55:10

Thank you very much. That saved my day.

papr 09 September, 2021, 10:40:21

See https://docs.pupil-labs.com/core/terminology/#coordinate-system
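
In short: norm_pos holds coordinates normalised to the scene camera image, with x and y nominally in [0, 1] and the origin at the bottom left. A sketch of computing dx/dy between consecutive samples (the gaze dicts below are illustrative stand-ins for received messages):

```python
gaze_samples = [
    {'norm_pos': (0.48, 0.52)},  # hypothetical received gaze messages
    {'norm_pos': (0.50, 0.51)},
    {'norm_pos': (0.53, 0.49)},
]

prev = None
for gaze in gaze_samples:
    x, y = gaze['norm_pos']
    if prev is not None:
        dx, dy = x - prev[0], y - prev[1]  # displacement in normalised units
        print(dx, dy)
    prev = (x, y)
```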

user-4eef49 09 September, 2021, 11:23:49

thank you a lot

user-10631a 10 September, 2021, 08:31:06

Hi, I have a problem subscribing to a topic. When I use the subscribe method I get this error: "object has no attribute 'subscribe'". This only happens when I try to subscribe; I can successfully connect to pupil_remote and get the SUB and PUB ports. What could be the problem?

papr 10 September, 2021, 08:31:53

Let's move this to 💻 software-dev

user-10631a 10 September, 2021, 08:32:09

Ok thanks

user-835f47 10 September, 2021, 10:31:59

Hi, I'm trying to work with Pupil Core and an Intel RealSense D415 camera mounted on top of it. My issue is that Pupil Capture can't recognize the D415 camera although my laptop recognizes it. I have tried to implement what is suggested in troubleshooting but it did not help. I have an HP Pavilion with Windows 10. Thanks a lot and have a nice day

papr 13 September, 2021, 07:41:54

Hey, which version of Pupil Capture are you using? And have you installed the corresponding Realsense plugin? https://github.com/pupil-labs/pupil-community#plugins

user-3f477e 10 September, 2021, 11:15:12

Hi all, some of the participants in my gaze tracking study will need to use their prescription lenses to be able to perform the intended study task. I have read on this server that you recommend using contact lenses instead, or mounting Pupil Core underneath the prescription lenses. I'm afraid I have only limited control over which kind of corrective lenses the participants will use.

That is why I would like to test a setup allowing the use of prescription glasses (if necessary). Unfortunately, I failed to combine Pupil Core with glasses in my test. 🙁 When I put Pupil Core on first and then tried to add the glasses in a second step, the glasses' temples collided with the eye camera arm sliders. I also tried mounting the temples below or above the sliders, but this also did not work well (the glasses are then askew and cannot be worn properly): - below: the nose pads of the glasses can then barely rest on the nose - above: the temple tips of the glasses then end far above the ears. In addition, the distance between eye camera and eye is very limited in both options, not allowing me to record the whole eye or generally adjust the eye camera position and orientation without scratching the glass.

If I have understood previous comments on this topic correctly, the Pupil dev team succeeds in using Pupil Core with prescription lenses. So I may just have misunderstood how this works exactly? Could you give me an explanation (or an explanatory image) of how Pupil Core and prescription lenses may work together? Are there alternative mounting options available for users of prescription lenses?

Thanks for your help!

user-6e98ae 10 September, 2021, 19:39:19

Hi!

Where can I find a list of notification message topics/categories/subjects and their respective commands (e.g. recording.should_stop, where recording is the topic and should_stop is the command)? Also, what is the purpose of notification messages? The webpage says that notification messages are used to coordinate activities. How? What happens when you don't use notification messages?

Thank you!
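
For reference, notifications are dictionaries whose 'subject' field combines topic and command; plugins (e.g. the recorder) listen for the subjects they know and act on them. A minimal sketch of sending one via Pupil Remote, following the pattern in the pupil-helpers repository (not an official list of subjects):

```python
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')

def notify(notification):
    """Send a notification dict; its 'subject' carries topic.command."""
    topic = 'notify.' + notification['subject']
    payload = msgpack.dumps(notification, use_bin_type=True)
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    return pupil_remote.recv_string()

# e.g. the recorder plugin reacts to these subjects
notify({'subject': 'recording.should_start', 'session_name': 'demo'})
notify({'subject': 'recording.should_stop'})
```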

user-86d8ec 13 September, 2021, 00:05:31

Hi. Is it true that Pupil Mobile is no longer supported? We purchased the Moto Z3 right before the pandemic, and now that we are open for research I am reading that it has been deprecated and is not compatible with newer data formats. Are there any best practices for researchers who would still like to capture data with their cell phones (i.e. recommended versions of Pupil Mobile and Pupil Capture)? Thanks

papr 13 September, 2021, 07:45:38

@user-3f477e Please see https://discord.com/channels/285728493612957698/285728493612957698/847395213638762496 for reference

user-3f477e 16 September, 2021, 11:31:17

Thanks for your response! I had already read nmt's suggestion. To be honest, it was too vague for me to succeed (or maybe I just misunderstood). Should "below" mean vertically below the lower eyelid, or just beneath the eyeglasses?

user-835f47 13 September, 2021, 07:45:42

Thanks for replying. I already solved the problem by installing an older version of Pupil Capture.

user-a1611a 13 September, 2021, 13:12:50

Hi, I am trying to run Pupil from source. I keep getting this error. Going through the Pupil Labs GitHub page on dependencies, I found out I need Visual Studio; however, I already have it installed.

Chat image

papr 13 September, 2021, 14:15:56

Hi, how did you install pyuvc? Have you also seen the note regarding the Microsoft Visual C++ 2010 Redistributable in the docs? https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md

user-a1611a 20 September, 2021, 11:25:53

Hi @papr, I followed all the instructions but kept getting the error; I even had to use another computer. So I decided to try a separate computer with Ubuntu 18.04, where I was able to build and run Pupil Capture, but the cameras aren't coming up. Attached is a screenshot from Pupil Capture.

user-7daa32 13 September, 2021, 17:57:36

Please, is it possible to calculate the total scan path length for a trial of an experiment?

papr 13 September, 2021, 18:30:21

Do you mean spatial or temporal length? With all due respect, are these meaningful metrics? If so, do you have references as to how they are defined?

user-7daa32 13 September, 2021, 18:51:07

Spatial length

papr 13 September, 2021, 18:51:54

And do you define the scanpath as the path between fixations?

user-7daa32 13 September, 2021, 18:53:02

Yes... a mix of fixations and saccades

papr 13 September, 2021, 18:54:34

Then you would have to simply define a temporal start and end point and sum the distance between consecutive fixations, correct?

user-7daa32 13 September, 2021, 19:17:45

How do you calculate the distance between fixations?

papr 13 September, 2021, 19:18:31

That depends on the coordinate system. If you have a 2d coordinate system, e.g. fixations on an AOI, Euclidean distance might be most reasonable.

user-7daa32 13 September, 2021, 19:19:47

Yes, I calculated the ideal Euclidean distance between AOIs

user-7daa32 13 September, 2021, 19:22:20

I used PowerPoint

papr 13 September, 2021, 19:20:56

You need to be very careful to not mix up coordinate systems by accident.

user-7daa32 13 September, 2021, 19:21:09

Now I want to know how to do that for the whole scan path. The distances between fixations can be very small and difficult to calculate

papr 13 September, 2021, 19:22:22

But the scanpath is just a series of fixations, correct? Summing them up sounds most intuitive to me. The only question: Based on which reference does a scanpath start and stop?

user-7daa32 13 September, 2021, 19:23:18

There is a start AOI and a target AOI, and the scan path is a string of fixations and saccades between these AOIs

papr 13 September, 2021, 19:25:46

Are the AOIs fixed in relation to each other, or do they move?

user-7daa32 14 September, 2021, 22:37:08

They are fixed

papr 15 September, 2021, 02:34:02

The issue with the two-AOI approach is that the AOIs have their own coordinate systems. In other words, a fixation location in the start AOI is not comparable to a fixation location in the target AOI. At least, you cannot calculate differences between locations.

Instead, if your AOIs all lie in the same plane, you can define one big surface to which the fixations are mapped. Then they are all part of the same coordinate system and you can calculate differences between them.

user-7daa32 15 September, 2021, 03:01:44

Thanks

The AOIs are in the same plane. Are you saying I should create a single surface for the plane?

Is the data needed for calculating the differences in the recorded data?

The tracking video is made up of different trials but the same procedure.

What I want: To calculate scan path length from start to end of one trial.

papr 15 September, 2021, 03:09:01

As you probably know, the software will not calculate the differences for you. But yes, the fixation on surface data contains the needed information.

My concrete suggestion: use your existing workflow to identify the ids of the start and end fixations based on the start and target AOIs. Afterward, define one big "global" surface, export the fixation-on-surface data for this global surface, and use the Euclidean distance to calculate differences between fixation locations (you can identify the fixations belonging to the scan path based on their ids)

user-7daa32 15 September, 2021, 03:10:52

I'm measuring the distance between the last fixation on the start AOI and the first fixation on the end AOI... Note that the scan path is going to have many fixations and saccades

papr 15 September, 2021, 03:12:02

Are you asking if you should do that, or are you stating that this is your goal?

user-7daa32 15 September, 2021, 03:13:02

Just asking if I can measure the distance between these two fixations

papr 15 September, 2021, 03:15:27

As you noted, a scan path is more than just a straight line. I suggest:
1. identify the start and end fixations by id (this defines your scan path)
2. calculate the Euclidean distance between each pair of consecutive fixations
3. sum these distances to get the total scan path length
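
A sketch of those three steps with pandas, assuming a fixations-on-surface export; the file name, surface name, ids, and column names are illustrative and should be checked against your actual export:

```python
import numpy as np
import pandas as pd

df = pd.read_csv('exports/000/surfaces/fixations_on_surface_Global.csv')
start_id, end_id = 17, 42  # hypothetical ids found via the start/target AOIs

# One row per fixation, ordered along the scan path
path = df[df['fixation_id'].between(start_id, end_id)]
path = path.drop_duplicates('fixation_id').sort_values('fixation_id')

xy = path[['norm_pos_x', 'norm_pos_y']].to_numpy()
scan_path_length = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
print(scan_path_length)  # in normalised surface units
```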

user-7daa32 15 September, 2021, 03:27:50

Thanks... I think I got this

user-7daa32 15 September, 2021, 03:24:56

Thanks. Initially, I plotted the scan path using the x and y data, drew a line along each saccade, read and noted down the xy coordinates of each fixation point in Excel, then calculated the Euclidean distances between consecutive fixations and summed all the saccade lengths. From what you said, I don't need to plot the graph

user-7daa32 15 September, 2021, 07:50:30

Does the Pupil Labs eye tracker use corneal reflection?

papr 15 September, 2021, 14:35:51

You are referring to glints, correct? No, it does not.

user-343787 15 September, 2021, 14:33:56

I just purchased and received a pupil core, and am having issues adjusting the eye cameras

user-755e9e 15 September, 2021, 14:39:31

Hi @user-343787, are you referring to the arm extenders or adjusting the camera direction?

user-343787 15 September, 2021, 14:34:39

the ease shown in the video on your website is nowhere close to my current experience

papr 15 September, 2021, 14:36:06

Which part feels difficult to move?

user-343787 15 September, 2021, 14:36:37

the whole extender arm

user-343787 15 September, 2021, 14:36:46

definitely feels like I'm gonna break it

user-343787 15 September, 2021, 14:37:14

I take it I need to remove the small screw? That is not apparent if so...

user-343787 15 September, 2021, 14:39:56

any and all of adjusting the eye cameras

user-343787 15 September, 2021, 14:44:02

they're completely rigid, and none of the ease of the hardware videos matches my current experience right now

user-755e9e 15 September, 2021, 14:44:25

To adjust the camera direction, slightly unscrew the screw on the camera, which will allow you to move the part more easily. While moving it, please hold the plastic extension behind the sensor. The fit is tight so that the camera stays in place during head movements.

user-343787 15 September, 2021, 14:45:30

I guess I should start with extending the arms. I can't even get my eyes in good central focus

papr 15 September, 2021, 14:46:33

Make sure to use the ball joint, too. This can make a big difference.

user-755e9e 15 September, 2021, 14:47:23

https://youtu.be/rJcNm5_L6QU this is what @papr is referring to

user-343787 15 September, 2021, 14:50:37

I'm in the voice channel now. Again, I find these videos not a good representation even after loosening the screws

user-755e9e 15 September, 2021, 14:56:02

Please send an email to info@pupil-labs.com and we'll be happy to take over the conversation there.

user-343787 15 September, 2021, 14:57:29

so you want me to go to a less responsive mode of communication for the hands-on help I'm seeking?

user-343787 15 September, 2021, 15:03:00

moving the ball joint on one of the eye cameras has now disconnected one of them

user-7daa32 15 September, 2021, 15:23:17

Please, how does it detect the pupil position and movement? Reflection?

papr 15 September, 2021, 17:57:12

We just try to find the area of the pupil and fit an ellipse to it.

user-f683f0 15 September, 2021, 15:50:48

Hi, I know the Pupil Core library is developed in Python. I want to know if there is any possibility of using the Pupil Core device with the Microsoft DirectX 12 API. Is there a C/C++ library available for that?

user-8b1528 15 September, 2021, 16:05:25

Can anyone tell me the USB cable length of the Pupil Core product?

user-755e9e 15 September, 2021, 16:15:47

The USB-C to USB-A 3.0 cable is 2 meters long.

user-8b1528 15 September, 2021, 16:31:24

thanks !

papr 15 September, 2021, 17:54:47

We actually have more people looking at the email inbox than answering questions here. I am sorry if the procedure feels frustrating to you. We will try to guide you through the setup process. But I would appreciate it if you could refrain from making snarky / passive-aggressive comments like this in the future.

user-343787 15 September, 2021, 23:18:38

Apologies, it was frustrating

papr 15 September, 2021, 18:03:05

I have just heard that we were already able to help you. Happy to hear that.

user-343787 15 September, 2021, 23:19:23

Was much appreciated! Thanks again

user-21a54e 15 September, 2021, 19:18:13

Hi. My lab has the Core headset set up. We are trying to record audio and video using the Pupil Capture software. Is this something that can be done?

papr 15 September, 2021, 19:26:24

Hi, unfortunately, we were no longer able to maintain support for synchronized audio recording from different sources across our supported operating systems. The feature was removed in version 2.0.

user-21a54e 15 September, 2021, 19:20:35

We have the camera connected to a laptop. We can either use the built-in microphone or a USB mic to provide the audio feed. However, when we try to record, no audio gets recorded. Is audio/video recording possible with your Pupil Capture software platform?

user-21a54e 15 September, 2021, 19:24:43

If anyone on your team who sees this message can provide some guidance, please e-mail me at [email removed]. If not, please direct me to someone on your team who can. I am happy to jump back on Discord if needed. Thanks!

user-21a54e 15 September, 2021, 19:28:02

I see. I do have older versions of the software on my computer. Do you know if the audio recording worked back then?

papr 15 September, 2021, 19:28:50

It might work, but we were never able to get it working reliably. 😕

user-21a54e 15 September, 2021, 19:30:26

Ah ok. Is there a configuration that you know of where there is a possibility that it might work? I.e., using a USB mic vs. using a laptop's built-in mic.

papr 15 September, 2021, 19:32:15

I can't name any specific setup if that is what you mean, no.

user-21a54e 15 September, 2021, 19:30:52

Just wondering before we try to figure out a more convoluted solution.

user-21a54e 15 September, 2021, 19:32:38

Understood. Thank you either way for clarifying!

papr 15 September, 2021, 19:35:21

It is possible to collect audio using the Lab Streaming Layer (LSL) framework. This would provide very accurate synchronisation between audio and gaze data, but takes more steps to set up. You would need to: - Use the AudioCaptureWin App to record audio and publish it via LSL https://github.com/labstreaminglayer/App-AudioCapture#overview - Publish gaze data during a Pupil Capture recording with our LSL Plugin https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md - Record the LSL stream with the Lab Recorder App. https://github.com/labstreaminglayer/App-LabRecorder#overview - Extract timestamps and audio from the .xdf and convert to a listenable format - Do post-processing, e.g. make annotations at given sound stimuli etc
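
For the .xdf extraction step, a sketch using pyxdf and scipy, assuming the audio stream was recorded with stream type "Audio" and a single channel (both assumptions - check your stream metadata):

```python
import numpy as np
import pyxdf
from scipy.io import wavfile

streams, _ = pyxdf.load_xdf('recording.xdf')
audio = next(s for s in streams if s['info']['type'][0] == 'Audio')

rate = int(float(audio['info']['nominal_srate'][0]))
samples = np.asarray(audio['time_series']).squeeze().astype(np.float32)
wavfile.write('recording.wav', rate, samples)

# audio['time_stamps'] holds one LSL timestamp per sample, on the same
# clock as the recorded gaze stream - this is what enables the sync
```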

user-21a54e 15 September, 2021, 19:55:09

Thank you for the suggestion!

user-343787 15 September, 2021, 23:31:07

Good to know that about the email though. I had literally just unboxed my Core and felt like I was getting the runaround my first time asking for help, so again, I was quite frustrated

papr 16 September, 2021, 10:37:03

Of course, no hard feelings. We should have clarified the intent when we asked you to contact the email address.

papr 16 September, 2021, 11:46:28

I can send you an example picture early next week :)

user-3f477e 24 September, 2021, 07:31:41

Hi papr, did you already have the chance to prepare a picture? Thanks for your help!

user-3f477e 16 September, 2021, 12:07:30

That would be great! 🙂 Should I give you my mail address or will you post it here?

papr 16 September, 2021, 12:24:46

I will post it here for future reference

user-3f477e 16 September, 2021, 12:26:25

Alright, thank you!

user-7bd058 16 September, 2021, 15:38:09

hello! the fixations in some of my videos are shifted (e.g. a calibration check with crosses on paper shows that the fixations are always shifted down). What is the best way to correct this? Thanks!

user-7b683e 17 September, 2021, 20:36:53

Hello,

As with every sensor that collects data from nature, Pupil Core has a precision (or truncation) rate in detecting gaze or fixation points. To smooth out sudden, spurious jumps in the data, I can suggest processing your recording with a statistical method such as the Savitzky-Golay approach.

On the other hand, with the current algorithms of Pupil Core, the pupil may not be detected correctly depending on the degree of eyelid opening. So you can try adjusting the headset before recording so that the eyelid occludes the pupil less.
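
A sketch of the Savitzky-Golay smoothing mentioned above, using scipy; the window length and polynomial order are illustrative and should be tuned to your sampling rate:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical gaze trace (normalised x positions at ~200 Hz)
gaze_x = np.cumsum(np.random.normal(0, 0.002, 1000)) + 0.5

# window_length must be odd; 31 samples is roughly 150 ms at 200 Hz
smoothed_x = savgol_filter(gaze_x, window_length=31, polyorder=3)
```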

papr 16 September, 2021, 16:07:26

If you use the Post-hoc calibration you can reapply the recorded calibration with a fixed offset

user-86d8ec 16 September, 2021, 18:30:18

Hi, we are running the latest version of Pupil Core Capture and Player on Windows 10. We start the recording, then do the natural features calibration. Then we continue with the experiment for several minutes. When we view the recording with Pupil Player, we can see the world camera recording until the calibration is finished. Then the screen blanks out for a few seconds before the recording continues. Is this a known bug? Is there something we can do to eliminate this?

user-7bd058 17 September, 2021, 06:35:49

thank you for your answer. I think you mean Manual Correction? Nevertheless, when I activate post-hoc gaze calibration, the eye tracking data suddenly becomes worse. Because our recordings are quite long, I tried it with your sample recording. When I choose "Post-hoc gaze calibration", the eye tracking recording is suddenly really bad without me even doing anything else. Here is a comparison before and after:

Chat image

papr 17 September, 2021, 08:23:03

In the offline gaze mapper make sure to use the correct recorded calibration. It might contain an outdated one which was chosen by default.

user-7bd058 17 September, 2021, 06:42:57

it's not the offset in the picture that bothers me, the fixations aren't as good as before

wrp 17 September, 2021, 06:44:52

Just to confirm/clarify. Are you using Pupil Core or Pupil Invisible?

user-52c504 19 September, 2021, 22:05:12

Hi, I'm conducting reading and translation experiments, and therefore my AOIs could well be each word in view. I wonder if it's possible to define AOIs more efficiently, instead of manually drawing the lines and trying to separate each word?

Many thanks, Ted

user-7daa32 19 September, 2021, 22:38:19

Please, the Euclidean distances between fixations are in which unit?

And the norm_x and norm_y data are in which unit?

papr 20 September, 2021, 06:00:49

See https://docs.pupil-labs.com/core/terminology/#coordinate-system for reference
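
In short: norm_x/norm_y are unitless coordinates normalised to [0, 1] within their coordinate system, so distances computed from them are unitless too. To express distances between surface-mapped fixations in physical units, scale by the real surface size first - a sketch with hypothetical numbers:

```python
import numpy as np

surface_w_cm, surface_h_cm = 50.0, 30.0  # hypothetical physical surface size

# Two fixations given in normalised surface coordinates
p1 = np.array([0.20 * surface_w_cm, 0.40 * surface_h_cm])
p2 = np.array([0.55 * surface_w_cm, 0.10 * surface_h_cm])

distance_cm = np.linalg.norm(p1 - p2)  # now in centimetres
print(distance_cm)
```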

user-7daa32 19 September, 2021, 22:40:12

I have found no reason to employ the post-hoc calibration. Is there any advantage to using it?

wrp 19 September, 2021, 23:16:24

Is using OCR (optical character recognition) an option? Is the text large enough to be seen in the scene camera? Or is the text known a priori?

user-52c504 02 October, 2021, 10:05:56

Sorry, I didn't receive notifications so I missed the reply! I'm using several different formats, including Word documents, slides, PDFs, and txt files, to be embedded in an experiment programme and projected onto the screen. Each word is large enough to be seen in the scene camera. The way I know how to define AOIs is using the markers to draw areas, but that would be very time-consuming if I have to do that for each word, as each task contains around 200 words. Many thanks, Ted

user-7bd058 20 September, 2021, 06:41:23

I can only choose the default calibration or calculate a new one, but this does not seem to change anything

papr 20 September, 2021, 06:46:34

That would mean that there was no recorded calibration. 🤔

user-a1611a 20 September, 2021, 11:26:27

Chat image

user-a1611a 20 September, 2021, 11:36:01

Oh got it working now

user-a1611a 20 September, 2021, 11:38:41

solved it using this, found on the Pupil Labs issues page

Chat image

user-abb8ae 21 September, 2021, 03:20:50

code

user-4eef49 21 September, 2021, 13:07:42

Hi. I have a problem with my Pupil Labs setup: Pupil Capture shows this message over and over again:

```
Estimated / selected altsetting bandwith : 151 / 256
!!!!Packets per transfer = 32 frameInterval = 82712
eye1 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
eye1 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID1.
eye1 - [INFO] camera_models: Loading default intrinsics!
Estimated / selected altsetting bandwith : 151 / 256
!!!!Packets per transfer = 32 frameInterval = 82712
```

Software version: 3.4

user-aaa726 21 September, 2021, 13:19:20

Hi everybody, I am new to Pupil and I want to ask: are there any guidelines on how to process the data we record with Pupil Core?

papr 21 September, 2021, 13:26:40

Hi, it looks like you might have a loose connection. Please contact info@pupil-labs.com in this regard.

user-4eef49 22 September, 2021, 06:01:33

I tried using another USB port on my computer. I noticed that the problem occurs when I try to run Pupil Capture and Pupil Service at the same time. I need Pupil Capture to calibrate the glasses, and I need Service to receive gaze info from the glasses in my program. Why can't I run both at the same time?

user-dba161 21 September, 2021, 16:17:13

Hello everyone, I am new to Pupil and already have one question: I changed the world camera and got all my calibration parameters. I am a bit lost regarding where exactly I should define my camera model, and how to proceed in order for my defined intrinsics to be properly loaded. Thanks for the help!

papr 21 September, 2021, 16:19:11

Welcome! You can find some general best practices here https://docs.pupil-labs.com/core/best-practices/ The recommended workflow is to use Pupil Player to visualize and export the recorded data. See https://docs.pupil-labs.com/core/software/pupil-player/ We also have a series of tutorials that show case how the exported data can be used https://github.com/pupil-labs/pupil-tutorials

user-aaa726 23 September, 2021, 06:25:17

Thank you a lot 🙂

papr 21 September, 2021, 16:19:58

Hi, did you only change the lens or have you actually changed the whole camera?

user-dba161 22 September, 2021, 08:43:57

Hi! I have changed the whole camera. I have found pupil/pupil_src/shared_modules/camera_models.py, but I'm not completely sure where to define the camera with the correct intrinsics in order for Pupil Capture to load the custom camera

user-d44f4f 22 September, 2021, 02:21:14

@papr @wrp Hello, I have some questions about the eye gaze of the HoloLens: 1. The HoloLens 2 only provides the direction of eye gaze (the cyclopean gaze) and calculates the hit position of the user's eye gaze ray with the target. However, we need to obtain the gaze depth directly, i.e., we want to calculate the gaze position from the left and right gaze directions. Does anyone know how to obtain the gaze directions of the left eye and the right eye separately from the HoloLens 2? If you want to obtain the cyclopean gaze, you must calculate the gaze directions of the left and right eyes first, so we think the HoloLens 2 has them but doesn't provide them. 2. Pupil Labs provides an eye tracking add-on for the HoloLens 1. I wonder whether this add-on provides the cyclopean gaze direction for the HoloLens 1, or further offers the gaze position computed from the left- and right-eye gaze directions.

Chat image

wrp 22 September, 2021, 07:54:21

Please use either Pupil Capture or Service.

If you want to use the Pupil Capture GUI for calibration, you can also publish gaze info to your program with Capture. Pupil Service can also do calibration, but you will need to display the calibration marker yourself and send its positions to Service over the network.

user-4eef49 22 September, 2021, 08:02:51

I would prefer to use Pupil Capture. How can I access gaze info? Is this the right way?

```python
self.ctx = zmq.Context()
self.pupil_remote = zmq.Socket(self.ctx, zmq.REQ)
self.pupil_remote.setsockopt(zmq.RCVTIMEO, 500)
self.pupil_remote.connect('tcp://127.0.0.1:50020')
self.pupil_remote.send_string('SUB_PORT')
sub_port = self.pupil_remote.recv_string()
print("sub port: ", sub_port)
self.subscriber = self.ctx.socket(zmq.SUB)
self.subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
# subscriber.subscribe('pupil')
self.subscriber.subscribe('pupil.1.2d')
self.subscriber.subscribe('gaze.')
```

papr 22 September, 2021, 08:03:53

Looks correct on first sight 👍 See https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py for a full example

user-4eef49 22 September, 2021, 09:21:13

Thank you a lot. I subscribed to 'gaze', not 'gaze.'

user-027014 22 September, 2021, 12:08:50

Hi all, I've got a question regarding the Pupil Core set for pupil tracking. For scientific purposes we used to record eye data using an older (monocular, 120 Hz) version of Pupil Labs hardware in complete darkness. However, this device recently stopped working altogether, so I've switched to one of the Pupil Cores (200 Hz) we had lying around. Yet I am having trouble getting clean eye data from it. Typically the data is now very noisy as the pupil is not well tracked. Any suggestions on how to optimize pupil tracking in complete darkness (i.e. any specific settings that can be changed to improve the detection)? Or is there perhaps a manual/set of instructions on setting it up for recording in darkness? Many thanks, Jesse

nmt 23 September, 2021, 08:56:57

Hi @user-027014. Have you tried adjusting the eye camera exposure settings in each eye window, like in this video? https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing Increasing the exposure time can achieve better contrast between the pupil and the surrounding regions of the eye

user-699471 22 September, 2021, 19:28:26

Hi there, I am on an M1 MacBook Pro, on macOS Monterey Beta 7, and Pupil Capture is unfortunately not working. I can confirm it is working on an M1 MacBook Air with Big Sur, and I can also confirm it working on a Windows laptop, but it is not working on Monterey. Here are some logs:

world - [INFO] launchables.world: System Info: User: erenatas, Platform: Darwin, Machine: Erens-MBP, Release: 21.1.0, Version: Darwin Kernel Version 21.1.0: Sat Sep 11 12:27:45 PDT 2021; root:xnu-8019.40.67.171.4~1/RELEASE_ARM64_T8101
objc[2137]: Class CaptureDelegate is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d370) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_videoio.4.5.dylib (0x13d5b3948). One of the two will be used. Which one is undefined.
objc[2137]: Class CVWindow is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d3c0) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f7468). One of the two will be used. Which one is undefined.
objc[2137]: Class CVView is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d3e8) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f7490). One of the two will be used. Which one is undefined.
objc[2137]: Class CVSlider is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d410) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f74b8). One of the two will be used. Which one is undefined.
attempt to release unclaimed interface 0
world - [INFO] video_capture.uvc_backend: 0:3 matches Pupil Cam1 ID2 but is already in use or blocked.
world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
world - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]!
world - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
world - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
world - [WARNING] launchables.world: Process started.
world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
...
eye0 - [INFO] pupil_detector_plugins: Using refraction corrected 3D pupil detector.
eye1 - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
attempt to release unclaimed interface 0
eye0 - [INFO] video_capture.uvc_backend: 0:4 matches Pupil Cam2 ID0 but is already in use or blocked.
eye0 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
eye0 - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [192, 192]!
eye0 - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
eye0 - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
objc[2139]: Class CaptureDelegate is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12e7e8370) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_videoio.4.5.dylib (0x139f49948). One of the two will be used. Which one is undefined.
objc[2139]: Class CVWindow is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12e7e83c0) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x139d8d468). One of the two will be used. Which one is undefined.

Do you have any ideas for workarounds? I have tried different cables and reformatted the laptop. Any ideas are welcome.

papr 23 September, 2021, 05:35:03

macOS Monterey

papr 24 September, 2021, 08:03:22

How to wear glasses and the Pupil Core headset at the same time

user-8ed000 24 September, 2021, 12:33:56

Hi, is there a possibility to move surface positions (or rather configurations) created in the Surface Tracker plugin from one PC to another without copying the recording where they were created? I tried copying the surfaces folder that is created in the export folder, but that did not work

nmt 24 September, 2021, 12:57:56

The recording in which you defined your surfaces should contain a file named surface_definitions. You can copy and paste this to other recordings
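
For example (hypothetical recording paths; depending on the software version, the file may be named surface_definitions or surface_definitions_v01):

```python
import shutil

# Copy the surface definitions from the recording where they were drawn
# into another recording, so Player loads the same surfaces there
shutil.copy('recordings/2021_09_24/000/surface_definitions',
            'recordings/2021_09_24/001/surface_definitions')
```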

user-98789c 24 September, 2021, 13:54:13

Is there a minimum to the size of the surfaces we can define for Pupil Core and track fixation in them with a good confidence?

nmt 24 September, 2021, 14:25:29

The minimum size for a useful surface definition will ultimately depend on calibration accuracy. The smaller that you define your surfaces, the more calibration accuracy you will need to ensure that surface mapped fixations are valid. Have a look at this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246

user-027014 24 September, 2021, 15:10:26

Hi all, how is the confidence of Pupil Core computed?

user-09865d 26 September, 2021, 17:32:25

Hi everyone, I am looking for information about the USB-C clip that connects the 3 cameras of the Core headset: https://www.youtube.com/watch?v=zzYLRlVBhTY&ab_channel=PupilLabs Is the schematic/hardware available somewhere? We would like to change the form factor to accommodate our application. Is it a regular USB-C hub or are there custom features? Do you have any requirements for bandwidth? Let me know if you have any info/suggestions. Alexis

papr 26 September, 2021, 18:42:02

Hi, please contact info@pupil-labs.com in this regard. Please include some details about your application, too.

user-98789c 27 September, 2021, 13:03:10

After preprocessing and binning my data, I always get these vertical flat lines at the end of each trial. Do you have an idea of what might cause this?

Chat image

user-f2c890 27 September, 2021, 14:09:40

Hi! I did a recording tryout today after a long period of not using the equipment during covid, and encountered two problems: 1) I cannot record sound (it looks like it is being recorded, but nothing is audible); 2) the offline surface tracker does not recognize surface markers. Surprisingly, this is now impossible even on old recordings where markers were previously recognized. Any suggestions? macOS, Pupil Player version 1.12.17

papr 27 September, 2021, 14:11:23

Hey, could you share an example recording with data@pupil-labs.com s.t. we can have a detailed look?

user-f2c890 27 September, 2021, 14:13:46

Sure, thanks

papr 27 September, 2021, 14:40:28

Pupil v1.12 recordings

user-74c497 28 September, 2021, 10:05:35

Hi, we are recording in a dark condition and have an issue with the tracking. We lose the tracking when we switch off the light, and when we then switch it back on at a low level, the tracking is very unstable. Any advice to improve tracking by the world camera at low light levels? Thanks

nmt 29 September, 2021, 15:32:01

Hi @user-74c497. Have you tried addressing what @papr detailed in the previous message, i.e. adjusting eye camera exposure and increasing max pupil size parameter?

user-027014 28 September, 2021, 10:51:34

Hi all, I'm trying to create a Bayesian estimate of the combined eye gaze position. My targets are presented at a far enough distance for the eyes not to converge, so essentially I have two independent measures of the same gaze position. What I would like is some information on the variance/std/precision of the pupil traces over time. The problem is that I am not sure how the confidence is computed and whether it is related in any way to the variance. Alternatively, I could compute it myself with a moving window of sorts, but there the problem is that during blinks both the x/y positions are set to zero. Does anyone have a thought on how to solve this? Any of the following would help me: 1) some description of how pupil confidence is computed, 2) a way to turn off setting the blink xy estimate to zero, or 3) any other tips 😉
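
On 2): one way to sidestep the zeroed blink samples without changing the software is to mask samples by confidence before windowing, so they never enter the moving window - a sketch with synthetic data and the commonly used 0.6 threshold:

```python
import numpy as np
import pandas as pd

# Hypothetical gaze trace with a blink: x/y forced to 0 and confidence low
rng = np.random.default_rng(0)
n = 500
blink = (np.arange(n) > 200) & (np.arange(n) < 230)
df = pd.DataFrame({
    'x': np.where(blink, 0.0, 0.5 + rng.normal(0, 0.01, n)),
    'y': np.where(blink, 0.0, 0.5 + rng.normal(0, 0.01, n)),
    'confidence': np.where(blink, 0.1, 0.95),
})
df.loc[df['confidence'] < 0.6, ['x', 'y']] = np.nan  # drop blink/bad samples

# Rolling variance skips NaNs as long as min_periods valid samples remain
rolling_var = df[['x', 'y']].rolling(window=60, min_periods=20).var()
precision = 1.0 / rolling_var  # per-eye precision for the Bayesian combination
```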

user-7daa32 29 September, 2021, 00:31:20

Sorry for the question; it seems the answer should be easy to find on the Pupil Labs website. For a study that uses a chinrest and the 3D pipeline, how often can we revalidate? All trials must be in one video, so we can't recalibrate. A trial can take a few seconds to 1 min, and then another trial up to 8 or 10.

nmt 29 September, 2021, 15:38:43

It is generally better to split your trials into separate recordings, particularly if they last 8 or 10 mins. For every new recording, re-calibrating and validating will reset accumulated slippage errors.

user-7daa32 29 September, 2021, 01:05:22

I asked this last year; sorry, I want to ask again to be sure. Should we determine the visual-angle accuracy to adopt ourselves, or rely on the stated accuracy of the eye tracker (0.6°)? I remember that we can decide the size of the target based on its distance from the participant and the adopted visual accuracy, and that the inaccuracy must not go above the adopted one after validation. Is that correct? I also know that the fovea spans 1 to 2 degrees of visual field

nmt 29 September, 2021, 15:49:05

If you calculated stimulus size based on viewing distance and accuracy, then you should ensure that the accuracy reported following a validation does not exceed the accuracy used in your calculation.

user-430fc1 29 September, 2021, 15:07:07

Hello, what are the units for the Absolute Exposure Time parameter?

papr 29 September, 2021, 15:08:23

The time parameter should be provided in units of 0.0001 seconds https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi#L44-L60
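
So, with an illustrative slider value:

```python
# Hypothetical Absolute Exposure Time setting
value = 32
exposure_s = value * 0.0001    # 0.0032 s
exposure_ms = exposure_s * 1e3  # 3.2 ms
```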

user-7daa32 29 September, 2021, 16:16:38

Thanks so much

user-7daa32 30 September, 2021, 00:34:31

I see that the eye cameras have a sampling rate of 200 Hz; what about the rates shown in Capture (30, 60 and 120)? There is another one shown in the eye window. How can we harmonize these?

The gaze accuracy is 0.60° and the precision 0.02°... I guess this is just the manufacturer's average, because it varies across study populations

nmt 30 September, 2021, 08:55:30

For 200 Hz, set the eye camera resolution to 192x192 px. The maximum sampling rate for the world cam is 120 Hz.

Accuracy 0.6 and precision 0.02° is what can be achieved in ideal conditions (i.e. excellent pupil detection; minimal head movement; 2d calibration pipeline, subject follows instructions during calibration).

user-7daa32 30 September, 2021, 13:05:30

I guess this accuracy and precision are affected by individual differences, distance from the target, and the nature of the environment. I feel the need to use the 3D calibration pipeline even though we will have little or no head movement. So we will have to adopt an accuracy based on our environmental conditions and setup.

End of September archive