πŸ‘ core


user-8cd2d1 01 April, 2020, 02:03:03

Hello, I am working on a project using the Pupil Labs Core in which a picture is displayed on either the right side of the screen or the left. I'm trying to use the Pupil Labs headset to determine if the user is looking at the picture. I tried getting the gaze positions for a few seconds and determining if the average x-position was less than 0.5 (assuming the world view x-coordinates are [0, 1] starting from the left). However, this is not giving me consistent results. Sometimes when I am looking left, it thinks I'm looking right and vice versa, but other times it works fine. Could anyone provide some help, resources, examples, etc. of how to determine the coordinates of where the user is looking?

wrp 01 April, 2020, 02:57:06

@user-d9a2e5 Thanks for following up. Some notes: 1) Good raw data + pupil detection: As you note, it is very important to make sure that you are getting good raw data. This means adjusting the eye cameras to get a good view of the eye and establishing robust pupil detection. In most cases the default pupil detection parameters will work well, but if you have a very dark environment, or a challenging subject, you may need to adjust the pupil detection parameters (e.g. pupil min, max settings) 2) Calibration: What calibration method are you using? The screen-based calibration method in Pupil Core, or another method? 3) Experiment: Based on what I understand, you are trying to understand where someone looks on an image. The image is the stimulus in your experiment. If you want to do quantitative analysis on where exactly someone looked at the image you might want to check out surface tracking (if you haven't already): https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking 4) Concrete information: To provide you with better feedback, you can share example data with data@pupil-labs.com or share images here to help clarify so that the community can respond 😸

wrp 01 April, 2020, 03:01:57

Hi @user-8cd2d1 πŸ‘‹ thanks for the clear explanation of your experiment. Based on your description, it seems like this could be accuracy related. What accuracy measurement do you see post calibration in Pupil Capture? Please also see points 1 and 2 in my last response.

user-d9a2e5 01 April, 2020, 11:25:34

@wrp I'm using the basic calibration - the one that shows a white screen and asks you to look at circles on the screen. I will check your suggestion, thanks! πŸ˜„ I'm adding a video of what I tried to do - putting the gaze data onto the image that I looked at. But first I looked at the farthest points to normalize it in my code. Again, I put in 1 second of data every time (and not a point-based plot). The next step is to put gaze on the video.

user-d9a2e5 01 April, 2020, 18:08:03

@wrp hey, just to be sure - if I use surface tracking, is my data normalized based on the surface tracking? Thanks for all your help! πŸ™‚

papr 01 April, 2020, 18:10:33

@user-d9a2e5 norm_pos in gaze data is normalised to the scene camera, norm_pos in gaze on surface is normalised to the surface.

user-d9a2e5 01 April, 2020, 18:12:07

So basically these are normalized for surface tracking, right?

Chat image

papr 01 April, 2020, 18:12:12

@user-d9a2e5 Checkout this example which uses gaze on surface to move the mouse cursor: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py#L85

papr 01 April, 2020, 18:12:35

@user-d9a2e5 correct, these files are normalised to their respective surface

papr 01 April, 2020, 18:12:58

I was not sure if you were using the realtime api or the exported files

user-d9a2e5 01 April, 2020, 18:15:26

Exported files, and I kind of understood how to use the API as well. You are helping me so much with my project, thank you very much!

user-8a4f8b 02 April, 2020, 08:46:53

Hi, I'm new to Pupil Core. I just connected it to my MacBook Pro and ran Pupil Capture, but it doesn't detect the cameras properly. It just keeps showing this

Chat image

papr 02 April, 2020, 08:50:34

@user-8a4f8b Please make sure that all cables are connected correctly. Also, please "restart with default settings" from the general settings menu. If the issue persists please contact info@pupil-labs.com regarding a potential hardware issue.

user-8a4f8b 02 April, 2020, 08:58:50

@papr thank you, I just tried "restart with default settings" and it stopped showing the message, but now it only shows a gray screen. Any idea?

papr 02 April, 2020, 09:00:24

@user-8a4f8b This sounds like a hardware issue. Please contact info@pupil-labs.com in this regard.

user-8a4f8b 02 April, 2020, 09:00:42

ok thank you

papr 02 April, 2020, 14:34:50

As a follow up: We were able to verify that the hardware was working correctly on a separate machine and are now looking into possible setup-related causes for the issue.

user-e70d87 02 April, 2020, 16:12:57

Does anyone have experience getting Core to work for people with glasses? Maybe 3D printed models to attach lenses to the frames (in a way that doesn't interfere with the eye cameras)?

user-ba5f64 03 April, 2020, 06:25:19

I've found it works fine with contacts. Glasses are a real challenge as they distort the image, so the camera would have to be placed behind the glass, I guess

user-8a4f8b 03 April, 2020, 09:43:25

Related to this question, what would be the optimal angle/distance for the camera to be placed from the center of the eye?

user-e22b51 03 April, 2020, 18:12:20

Hi. How does the Pupil eye tracker/algorithm estimate the angle between the optical axis and the visual one? Is there any detailed documentation about this aspect? What assumptions are made? Is the 5-degree estimate for normal vision used a priori? How is the screen marker calibration used if there is no knowledge of the distance to the screen? Do you currently use the glint-free algorithm proposed in Kay's paper? Thanks

user-bec213 03 April, 2020, 18:27:15

Hi, would it be possible to get v1.4 of Pupil Player? I've been trying to find one online to no avail, and any advice would be really appreciated!

user-fc194a 03 April, 2020, 18:36:47

Hi, I get a message when trying to open Pupil Player that Apple can't verify the program. How can this be resolved? When opening the program anyway, it runs for a few seconds (background process) before closing.

papr 03 April, 2020, 18:37:49

@user-bec213 All releases are online on github https://github.com/pupil-labs/pupil/releases/tag/v1.4

papr 03 April, 2020, 18:39:22

@user-fc194a Yes, right-click the app, then click Open. The same dialogue will open, but with the option to start the app.

user-bec213 03 April, 2020, 18:44:53

@papr Thanks for the help!

user-fc194a 03 April, 2020, 18:49:46

Ok, this time I get a message saying Apple cannot verify whether the program contains malware. It says to update it or contact the developer, with a link to where it was downloaded (latest release). If I go through Settings/Security, Pupil Player is there with "Open anyway"... When clicked, nothing happens. Looking at the Activity Monitor, Pupil Player opens with 2 more instances for a few seconds before closing.

papr 03 April, 2020, 18:51:08

@user-fc194a could you share the player.log file in the pupil_player_settings folder?

user-fc194a 03 April, 2020, 18:54:42

2020-04-03 14:49:04,111 - player - [INFO] numexpr.utils: NumExpr defaulting to 4 threads.
2020-04-03 14:49:05,573 - player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "launchables/player.py", line 730, in player_drop
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/pupil_recording/update/__init__.py", line 15, in <module>
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/video_capture/__init__.py", line 42, in <module>
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/video_capture/uvc_backend.py", line 24, in <module>
ImportError: dlopen(/private/var/folders/br/c6w3ng296bg35c0ksq08c2380000gn/T/AppTranslocation/E7129C0E-24D7-48F1-B397-49BFA6573DA1/d/Pupil Player.app/Contents/MacOS/uvc.cpython-37m-darwin.so, 2): Library not [email removed]
  Referenced from: /private/var/folders/br/c6w3ng296bg35c0ksq08c2380000gn/T/AppTranslocation/E7129C0E-24D7-48F1-B397-49BFA6573DA1/d/Pupil Player.app/Contents/MacOS/uvc.cpython-37m-darwin.so
  Reason: image not found

2020-04-03 14:49:06,987 - MainProcess - [INFO] os_utils: Re-enabled idle sleep.

papr 03 April, 2020, 18:55:16

thank you

papr 03 April, 2020, 18:56:41

@user-fc194a Which version are you running? The most recent (v1.23)?

user-fc194a 03 April, 2020, 18:57:48

Yes

papr 03 April, 2020, 18:59:00

Ok, I will look into the issue and try to upload a new bundle as soon as possible (latest by Monday). Please use Pupil Player v1.22 until then.

papr 04 April, 2020, 14:33:23

@user-fc194a Is it possible that you saved the bundle in an iCloud folder? Would you mind moving the application into your /Applications folder and trying again?

user-fc194a 04 April, 2020, 16:41:08

On PC, is there a command-line way to automate the export of a recording without the whole process of opening Player, dragging and dropping the recording, and exporting?

user-fc194a 04 April, 2020, 16:41:32

For MacOs, I'll try that as soon as I return. Thanks

papr 04 April, 2020, 17:01:06

@user-fc194a Check out our community repo. There are third-party scripts that extract and export data from the intermediate recording format: https://github.com/pupil-labs/pupil-community

user-d9a2e5 05 April, 2020, 13:51:54

Hello, I need help with this: is it okay that it's moving so much? Is there a way to make it more stable? Thanks for the help! πŸ™‚

papr 05 April, 2020, 13:54:15

@user-d9a2e5 I thought surfaces required more than one marker to be detected πŸ€” Usually, the more markers you have, the more stable the surface will be.

user-d9a2e5 05 April, 2020, 13:55:35

So I need to add more. Okay, I will try it, thank you! πŸ™‚

user-6779be 05 April, 2020, 20:28:50

Greetings, I am trying, unsuccessfully, to get the Pupil Core to run on my Windows PC. The world camera does not show up; all I see is the prompt (attached). I have gone through the troubleshooting steps mentioned on the website with no resolution. Could anyone point me in the right direction? Thanks in advance

Chat image

papr 06 April, 2020, 09:58:04

@user-fc194a Please checkout our recent v1.23-5 macOS release update. It should fix the issue that you have encountered.

papr 06 April, 2020, 09:58:39

@user-6779be Is the issue that the world window does not appear, or have you minimized it in your screenshot?

user-d9a2e5 06 April, 2020, 23:48:54

Hello again πŸ™‚ Just to be sure - is the gaze data normalized between 0 and 1? Or -1 and 1?

user-6779be 07 April, 2020, 03:13:04

@papr The world window didn't appear. I have switched to v 1.20 and it seems to be working fine now though.

user-c5fb8b 07 April, 2020, 05:33:19

@user-d9a2e5 the gaze data is normalized between 0 and 1

user-c5fb8b 07 April, 2020, 05:36:00

@user-6779be please delete the folder pupil_capture_settings in your home folder and give v1.23 another try. If it still does not open, please share the log file with us, which you can find in your home folder > pupil_capture_settings > capture.log

user-d9a2e5 07 April, 2020, 22:00:15

@user-c5fb8b thanks! i asked a stupid question hahahha

user-c5fb8b 08 April, 2020, 07:12:09

@user-d9a2e5 definitely a valid question to ask! There's also some info on our docs, but it might be a bit hidden: https://docs.pupil-labs.com/core/terminology/#coordinate-system Please always come here and ask if you are not sure about something or cannot find the information elsewhere! πŸ™‚

user-9f8e50 08 April, 2020, 11:00:08

Hi, I have just started using Pupil Core. I have a question about the world camera [email removed]. As shown in the picture, it was set to a 120Hz frame rate and recorded. However, the video extracted using Pupil Player was 50Hz. Why? Please let me know if you know.

Chat image

wrp 08 April, 2020, 11:35:46

@user-9f8e50 when you say that the video "extracted" do you mean the video that was exported with Pupil Player? Where do you see 50Hz measurement?

user-9f8e50 08 April, 2020, 15:45:12

@wrp I'm sorry that my explanation was not good. I would like to know more about video exported with Pupil Player. Can I change the frame rate and export the video? When the world camera is set to 30Hz, I can export a 30Hz video. However, when the world camera is set to 120Hz, I can't export a 120Hz video.

Could you please give me your thoughts on these questions?

papr 08 April, 2020, 15:47:58

@user-9f8e50 Pupil Player should be exporting the video at the same frame rate as it was recorded in Pupil Capture. In our experiences, different video players can report different frame rates for the same video, depending on their measurement approach. Could you let us know which tool you are using to read out the exported frame rate?

user-9f8e50 09 April, 2020, 01:05:07

@papr Thank you very much for your prompt reply and great advice. The frame rate can be checked as shown in the picture. This video was recorded with the world camera set to 120Hz.

Chat image

wrp 09 April, 2020, 02:33:38

Hi @user-9f8e50 can you please try opening in VLC player and then getting info about the file

Chat image

user-9f8e50 09 April, 2020, 04:44:02

@wrp Thank you for advice. I checked the info of the video file using VLC player.

Chat image

wrp 09 April, 2020, 04:53:34

Ok, thanks for the feedback @user-6eb76d. Could you please make a sample recording at 120Hz, upload it to online storage, and send it to data@pupil-labs.com so our team can investigate and respond via email or directly here.

user-9f8e50 09 April, 2020, 05:03:52

@wrp OK! I'll send you an email.

wrp 09 April, 2020, 06:02:18

@user-6eb76d email received, I or one of my colleagues will get back to you after reviewing the data. Thank you.

papr 09 April, 2020, 08:59:09

Hi @user-9f8e50, I looked at your recording and calculated the recorded frame rate from the externally saved timestamps (world_timestamps.npy). The average frame rate is indeed 50.3. This means that this issue is not related to the exported video but to the originally recorded video.

Pupil Capture will always run at best effort, i.e. it will process the video as fast as possible. If there are more frames than it can process in real time it will start dropping frames. I think this was the case here.

If you want to save resources, and you are not dependent on real time data, you can disable the pupil detection in the general menu. This way you will have to run the offline pupil detection and calibration in Player, but it will free a lot of resources during the recording.

In Capture, you can check the current frame rate by looking at the FPS graphs in the top left corners of the Pupil Capture windows (world, eye0, eye1).
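
For reference, here is a minimal sketch of how you can check the recorded frame rate yourself from the saved timestamps (the recording path is a placeholder):

    import numpy as np

    # world_timestamps.npy holds one timestamp (seconds, Pupil time) per recorded scene frame
    timestamps = np.load("path/to/recording/world_timestamps.npy")

    frame_durations = np.diff(timestamps)
    print("average frame rate: %.1f Hz" % (1.0 / frame_durations.mean()))

    # Gaps much larger than the median frame duration indicate dropped frames
    n_dropped = np.count_nonzero(frame_durations > 2 * np.median(frame_durations))
    print("suspected dropped frames: %d" % n_dropped)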

user-8a4f8b 09 April, 2020, 13:56:34

Hello - has anyone done an eye tracking study on reading text with the Pupil Core?

user-9f8e50 10 April, 2020, 01:18:20

@papr Thanks! I did what you told me, but the FPS graphs didn't change. Please advise me if you have another method. The pic shows the FPS when "detection & mapping mode" was set to "disabled" in the general menu.

Chat image

wrp 10 April, 2020, 02:55:25

@user-8a4f8b you might want to check publication list: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit#gid=0 to see if there are studies similar to what you are trying to achieve.

wrp 10 April, 2020, 03:02:21

@user-9f8e50 what are the specs of your machine (CPU and ram). Based on your screenshot it looks like the CPU is running at 50%, which would lead me to believe the machine is sufficiently powerful. In your previous screenshot I saw 105 FPS

user-9f8e50 10 April, 2020, 04:06:53

@wrp My computer specs are Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, RAM 8.00GB. The previous photo is definitely 105FPS, but after calibration it is 50FPS.

wrp 10 April, 2020, 04:22:25

@user-9f8e50 machine specs are sufficient for sure. Did you modify any of the world camera settings other than the temporal resolution? Could you try the following: 1. Restart Pupil Capture with default settings 2. Change world camera to 120Hz setting 3. Calibrate 4. Minimize eye windows 5. Report a screenshot of the world window to show fps display and make a recording

user-9f8e50 10 April, 2020, 06:10:31

@wrp Thank you! I tried following the procedure. See the attached pictures. I tried recording three times with different durations (14s, 1min18s, 2min). The frame rate of the video was 76-98 FPS (14s: 79 FPS, 1min18s: 98 FPS, 2min: 76 FPS). Recording duration doesn't seem to be related to FPS. Do I need to use a higher-spec PC?

Chat image

wrp 10 April, 2020, 06:12:15

@user-9f8e50 Thanks for following up and for the detailed info. I would certainly expect closer to 120 fps - would you be able to quickly test on a different machine to compare results? Unfortunately, I am not able to replicate your results.

user-9f8e50 10 April, 2020, 06:14:58

@wrp Sorry! I made a mistake with the upload.

Chat image

user-9f8e50 10 April, 2020, 06:30:46

I forgot to upload this picture. @wrp OK! I'll try it. I have a question: have you ever recorded 120 FPS video? If so, could you please tell me the specs of your machine?

Chat image

wrp 10 April, 2020, 06:32:34

@user-9f8e50 yes, I have recorded at 120 fps on macOS, Linux, and Windows with an Intel i7 processor and 16 GB RAM.

user-9f8e50 10 April, 2020, 06:37:10

@wrp Thanks. Your information was very helpful. Please teach me again!

papr 10 April, 2020, 12:57:05

@user-9f8e50 Disabling the accuracy visualization might also save you a bit of processing resources. You can do so in the "Accuracy Visualizer" menu.

user-6448ad 10 April, 2020, 13:04:39

Hi there, I am using Pupil Core with the Pupil software. In the export, timestamps are given as "pupil time". Is there an easy way to convert to UNIX time (epoch: Jan 1st, 1970)? Background: I want to synchronize with a device using these UNIX timestamps.

user-7165e7 10 April, 2020, 13:28:04

Hey! I'm an assistant researcher in Human Factors. I'm looking for a team that's worked with saccades before, in order to validate my calculations, which are in pixels per second. We can't convert to degrees because the placement of the subject in relation to the visual area of interest was random, and it seems to me that the conversion to degrees takes this distance into account (we did not control it).

user-430fc1 10 April, 2020, 14:29:40

Hey guys, does anyone know why the blink detector sometimes gives unexpected output such as this (on the right)? It's happened a few times where the file structure gets jumbled and columns are spammed with irrelevant data.

Chat image

papr 10 April, 2020, 14:30:19

@user-430fc1 could you share the right file with us?

user-430fc1 10 April, 2020, 14:33:18

sure, here it is

blinks.csv

papr 10 April, 2020, 14:36:21

@user-430fc1 It looks like for some reason the file is missing a bunch of commas. Would you mind sharing the recording with [email removed] so we can try to reproduce it?

user-430fc1 10 April, 2020, 14:41:56

@papr Thanks, just sent a zip of the recording minus the mp4s.

papr 10 April, 2020, 14:42:30

@user-430fc1 I will come back to you early next week

user-5ef6c0 13 April, 2020, 13:28:23

Do different versions of Pupil Player save their settings in the same file/folder? I installed and ran 1.23, and now my settings in 1.20 are back to default.

user-7d4a32 13 April, 2020, 15:24:30

Hello, I'm curious if it's possible to get the output x,y data as non-normalized data? As in, I'd like to see the coordinates the user is looking at on the screen.

Just to clarify, the data I'm getting is normalized between [0, 1], and I'd like actual coordinates [0, max_pixel). If it is possible, do you know how accurate this data can be?

user-26fef5 13 April, 2020, 18:23:41

@user-7d4a32 If your gaze data is calibrated you can simply multiply the normalized coordinates by the resolution of the camera you are using.

user-c5fb8b 14 April, 2020, 06:55:09

Hi @user-5ef6c0, yes Pupil Player only has a central location for settings. When switching versions, the settings will be set back to default for the newly opened version.

user-c5fb8b 14 April, 2020, 07:04:05

Hi @user-7d4a32, do I understand correctly, that your subjects are looking at a computer screen and you want to get the screen coordinates of their gaze positions? To achieve this, you would have to use the surface tracker in order to track the screen in your world video, otherwise there is no way to automatically infer the relationship between the head-mounted eye-tracker and the screen. When you define the screen as a surface, you will be able to export normalized surface coordinates for gaze data, which range from 0 to 1 on the surface. When you multiply this with your screen resolution, you will get screen pixel coordinates. This process introduces some additional noise through the surface tracking system, but I'm not sure how much.
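
As a rough sketch of that conversion (assuming a raw surface export from Pupil Player, a surface named "Screen", a 1920x1080 display, and the x_norm/y_norm/on_surf column names of the current export format - these are assumptions and may differ in older versions):

    import pandas as pd

    SCREEN_W, SCREEN_H = 1920, 1080  # your screen resolution (assumption)

    # Surface Tracker export: x_norm / y_norm are normalized surface coordinates in [0, 1]
    gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
    gaze = gaze[gaze["on_surf"] == True]  # keep only samples that fall on the surface (boolean column)

    # Surface coordinates have their origin at the bottom left; screen pixels usually
    # have theirs at the top left, hence the y-flip.
    gaze["x_px"] = gaze["x_norm"] * SCREEN_W
    gaze["y_px"] = (1.0 - gaze["y_norm"]) * SCREEN_H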

user-76c3e6 14 April, 2020, 08:34:23

Hi, I am trying to record videos with Pupil Capture from two different cameras (via Pupil Mobile). I opened Pupil Capture twice and connected one of the instances to the first phone with Pupil Mobile on it and the other to the second phone. In Pupil Capture I do see the scenes for both cameras. However, when I hit the record button, only the camera connected to the first phone starts recording. The other one says that it can't record because it has no connection (while I do see what the camera sees in Capture). The phones (and laptop) are connected (WiFi) via a local network without an internet connection. Do you have an idea what the problem could be? Thanks

papr 14 April, 2020, 08:41:09

@user-76c3e6 Hey, the recommended workflow would be to record the data on the phones themselves instead of Capture as the video streaming is not guaranteed to transmit all the data. If the network is saturated, the video stream might start dropping frames.

You can use the Remote Recorder plugin to start the recording on both phones via a single Pupil Capture instance.

Do not forget to enable the Time Sync plugin in Pupil Capture such that the Pupil Mobile phones can adjust to a synchronized clock.

user-76c3e6 14 April, 2020, 09:08:15

@papr thanks! It seems that most of my troubles are indeed a result of bad connections. So yes, maybe it's better to record on the phones themselves. If I decide to record data on the phones themselves, is it then also possible to synchronize time between the two phones? Or is that impossible?

papr 14 April, 2020, 09:09:12

@user-76c3e6 Yes, if you start the Time Sync plugin in Capture, the Mobile phones should synchronize to the Capture instance, and therefore follow the same clock.

papr 14 April, 2020, 09:10:08

@user-76c3e6 You can test this by making a test recording with both phones, open both in Player, and use the Video Overlay plugin to overlay one world video ontop of the other. They should be synchronized.

user-76c3e6 14 April, 2020, 09:11:58

@papr Ok, I'll check that. But just checking again (sorry): even if I record on the phones themselves and have Pupil Capture open in the background (but don't record there), does Pupil Mobile know that it should synchronize with the other phone?

papr 14 April, 2020, 09:13:42

@user-76c3e6 Correct. Both Pupil Capture and Mobile phones will detect each other in the local network, and the phones will adjust their clocks in order to be synchronized with Capture. You can verify that the phones have been detected in the menu of the Time Sync plugin. It lists all time sync group members.

user-76c3e6 14 April, 2020, 09:16:55

@papr great, thanks! Another question, what should the Status be? They now differ, one has "clock master" and the other is synced with the IP of the desktop

papr 14 April, 2020, 09:18:27

@user-76c3e6 For this setup, there should only be one Pupil Capture instance, instead of two (it can work with two, but for simplification I would recommend to only run one).

Pupil Capture will be the clock master, while the phones should be time followers, being synched to the computer running Capture.

user-76c3e6 14 April, 2020, 09:23:59

@papr thanks again! Is it also important to have the Pupil Groups plugin on in this case? I put it on, but it doesn't seem to recognize the other group members. Not sure if that's a problem?

papr 14 April, 2020, 09:25:18

@user-76c3e6 No, this plugin is not necessary in this case. Both plugins, Time Sync and Pupil Groups, use the same technology in the background, but the Groups plugin is meant for communication between multiple Capture instances which is not necessary for the new setup.

user-76c3e6 14 April, 2020, 10:07:58

@papr Ok... sorry, me again... I feel a bit stupid, but I can't find the recordings made with Pupil Mobile (using the remote recorder). The phones start recording, but after recording it seems that the recordings aren't stored in the local Movies folders on the phones. Should they be there? (I didn't change any of the save locations.)

papr 14 April, 2020, 10:10:31

@user-76c3e6 By default the recordings are stored to Internal storage -> Movies -> Pupil Mobile -> local_recording

papr 14 April, 2020, 10:11:14

You can check your Pupil Mobile settings. It will show the recording folder.

user-76c3e6 14 April, 2020, 10:13:22

@papr it says they are stored in the default folder (/storage/emulated/0/movies/Pupil Mobile). Earlier this morning, the files were indeed there. But now when I connect the phone to the desktop, the files aren't there anymore. On neither phone.

papr 14 April, 2020, 10:14:43

@user-76c3e6 Ok, that is the correct location. In my experience, it is sometimes necessary to reboot the phone in order to access new recordings from a computer.

user-76c3e6 14 April, 2020, 10:17:12

@papr thanks, rebooting is the solution.

user-76c3e6 14 April, 2020, 10:44:33

If I can't overlay the two videos (I enabled the plugin, then dragged one video over the other in the main Player window), does that mean that the synchronization failed?

papr 14 April, 2020, 10:45:38

@user-76c3e6 Did you open the recording which you tried dragging in Player before? Player needs to convert Pupil Mobile recordings to Pupil Player recordings first before the overlay can work correctly.

user-76c3e6 14 April, 2020, 10:50:49

@papr Yes, I opened both videos in Player

papr 14 April, 2020, 10:52:01

Is there an error message when you try to overlay the scene video of one recording on the other? If not, would you mind sharing both recordings with [email removed] s.t. we can have a look at the recorded timestamps?

user-76c3e6 14 April, 2020, 10:55:58

There is no error; when I drag one video over the other, it simply replaces that video instead of playing them both. I'll try it once with new recordings. If that doesn't work, I am happy to share the recordings with you via email. Thanks

papr 14 April, 2020, 10:58:07

@user-76c3e6 Ah, are you dragging the complete recording folder? This actually opens the other recording. Please try to drag&drop the world.mp4/mjpeg file instead.

user-76c3e6 14 April, 2020, 11:01:17

@papr ok thanks, that solved the problem indeed πŸ™‚

user-5ef6c0 14 April, 2020, 14:15:18

Hi @user-5ef6c0, yes Pupil Player only has a central location for settings. When switching versions, the settings will be set back to default for the newly opened version. @user-c5fb8b thank you for the clarification.

user-5ef6c0 14 April, 2020, 14:23:07

A question regarding the "gaze history" feature in the vis polyline plugin. Is the robustness of the tracking improved when using fiducial markers and the head pose tracker plugin?

papr 14 April, 2020, 14:36:50

@user-5ef6c0 Even though the functionality does not use marker tracking explicitly, the markers will help the optical flow algorithm since they are salient features that are easy to track.

user-5ef6c0 14 April, 2020, 14:42:47

I see. Would it make sense to try to make them work together? Does the optical flow algorithm take into account 3d gaze data? Or is it just based on object detection/pixel tracking? If the former, I imagine that using head pose data could increase the accuracy of the gaze history visualization. Or maybe I'm talking nonsense.

papr 14 April, 2020, 14:47:42

@user-5ef6c0 Currently, the gaze history is simply based on pixel tracking and 2d gaze. Yes, using the head pose tracker, it would be possible to build a similar, and more accurate, functionality. But it would have the disadvantage of being less general. People would have to set up their environment to work with the head pose tracker.

For this to work with the head pose tracker, you simply have to transform 3d gaze data into the "room" coordinate system based on the "head pose" of each frame. You can start building this based on the output of the raw data and head pose tracker exports.
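
A rough sketch of that transform (assuming the head pose export stores the scene camera's pose in the room coordinate system as a Rodrigues rotation vector plus translation, and that matching gaze and pose samples by timestamp is done separately):

    import cv2
    import numpy as np

    def gaze_point_in_room(gaze_point_3d, rotation_vec, translation_vec):
        """Transform a 3d gaze point from scene camera coordinates into the
        room coordinate system defined by the head pose tracker.

        gaze_point_3d   -- gaze_point_3d_x/y/z from the raw data export
        rotation_vec    -- rotation_x/y/z from the head pose export (Rodrigues vector, assumption)
        translation_vec -- translation_x/y/z from the head pose export (assumption)
        """
        R, _ = cv2.Rodrigues(np.asarray(rotation_vec, dtype=float))
        return R @ np.asarray(gaze_point_3d, dtype=float) + np.asarray(translation_vec, dtype=float)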

user-7d4a32 14 April, 2020, 14:50:40

@user-c5fb8b Yes, you are right. Thank you for the information; I thought it would be possible to get real x,y data from the start without having to integrate each and every screen (since I might be working with many screens). Thank you both, @user-26fef5, for the information.

user-5ef6c0 14 April, 2020, 14:50:52

@papr thank you

user-c5fb8b 14 April, 2020, 15:11:10

@user-7d4a32 you can also define multiple surfaces for multiple screens in the Surface Tracker.

user-141bcd 15 April, 2020, 07:38:29

Dear pupil community, just for a quick checkback: I plotted the time between my pupil samples (per eye) for one of my experimental blocks (approx 8 minutes) and this is what I get:

user-141bcd 15 April, 2020, 07:38:34

Chat image

user-141bcd 15 April, 2020, 07:41:51

The mean is very close to the 5ms that I was expecting (these data were recorded with the 200Hz VR add-on). I just wanted to ask whether that is expected behavior or whether I should be suspicious of something in my hardware (or software) setup. Thanks for feedback already!

user-64e880 15 April, 2020, 23:53:39

Hi guys, I am fairly new to Pupil Labs and am using a Pupil Core mounted on a HoloLens. In the documentation it is mentioned that the 200Hz eye cameras (my current version) do not need to be adjusted, but I see that the detection accuracy varies when adjusting the focus of the eye cameras. I am very eager to know why there is some improvement when adjusting the eye cameras! Moreover, what is the best calibration method for detection at different depths in room-like and bigger environments? (I mean from 1 meter to about 10 meters.)

user-c6717a 16 April, 2020, 06:45:13

Hi @papr, I just realized I don't think I responded to this message quite some time ago. "@user-c6717a please be aware that the monotonic clock might use an additional offset while being time synced... Do you need to be time synced in real time or do you just need to synchronize your clocks after the effect? There are two clear solutions for both of these use cases. Let me know which one you need." We are still trying to solve the problem but have been distracted with other issues. To answer your question, we are going to be doing the synchronization after the fact, so no need for real-time synchronization. Can you remind me of the solution you had in mind? Thanks!

papr 16 April, 2020, 07:17:28

@user-c6717a In that case, you only need to calculate the difference between the pupil time and utc-0 date time during the recording, and apply the difference to all recorded timestamps. This difference can be calculated based on the start_time_system (utc-0 timestamp) and start_time_synced (pupil timestamp) values recorded in the info.csv/info.player.json file.
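
A minimal sketch of that conversion (assuming a recording in the newer format with an info.player.json file; the exact key names are an assumption and may differ between recording format versions):

    import json

    with open("path/to/recording/info.player.json") as f:
        info = json.load(f)

    # Fixed offset between the system wall clock (UTC, Unix epoch) and Pupil time
    offset = info["start_time_system_s"] - info["start_time_synced_s"]

    def pupil_to_unix(pupil_timestamp):
        """Convert a Pupil timestamp (seconds) to a Unix timestamp (seconds)."""
        return pupil_timestamp + offset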

papr 16 April, 2020, 09:49:05

@user-141bcd The pupil timestamps are taken from their respective eye video frames. On Windows, the eye video frame timestamps are calculated based on the time they are received in the application and a fixed offset that compensates the transmission delay from the camera to the application.

Unfortunately, this real transmission delay is not fixed and can only be approximated. What you are seeing is the variance in this transmission delay.

papr 16 April, 2020, 09:54:19

@user-64e880 You should adjust the eye cameras such that the pupil is visible in as many eye positions as possible. What the documentation is referring to is that you should not be adjusting the focus of the eye cameras by trying to rotate the tiny lens.

I would recommend using the single marker calibration in manual marker mode (printed markers). You can read more about it in our documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-methods

user-141bcd 16 April, 2020, 11:04:30

@papr thanks. I'm asking as I want to apply a saccade detection algorithm that expects a fixed sampling rate. If the variance in the recorded timestamps is actually driven by transmission times (not by recording time), would it be a legit assumption that the samples are actually equidistantly spread out over time originally?

papr 16 April, 2020, 11:07:37

@user-141bcd Yes. On operating systems where we can measure the time of exposure (macOS/Linux), we see a much smaller variance in the timestamp differences.

user-141bcd 16 April, 2020, 11:11:28

@papr interesting. So for velocity calculations would you recommend using the pupil timestamps (in my case unfortunately recorded on Windows) or assuming a fixed delay of 5ms between two samples of the same eye?

papr 16 April, 2020, 11:13:11

I would assume a fixed 5ms duration between samples, but rely on the pupil timestamps to detect dropped frames, e.g. if the time difference between pupil timestamps >10ms
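
As a small sketch (assuming the pupil timestamps of one eye are already loaded into a numpy array called pupil_timestamps):

    import numpy as np

    EXPECTED_DT = 1.0 / 200.0  # 200 Hz eye camera -> 5 ms between samples

    dt = np.diff(pupil_timestamps)   # time between consecutive samples
    dropped = dt > 2 * EXPECTED_DT   # gaps > ~10 ms suggest one or more dropped frames
    print("suspected dropped frames: %d (%.1f%% of samples)" % (dropped.sum(), 100 * dropped.mean()))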

user-141bcd 16 April, 2020, 12:04:17

@papr thanks! So the 2nd/right peak at ~8ms in the histogram above (~25% of the data) does not consist of frames actually delayed due to dropped frames, but of problems with the time estimation? Actually dropped frames would then be the single points beyond 10ms (in my case <1%)? That's very good to know.

user-76ca8f 16 April, 2020, 15:05:58

Hi All - Can anyone advise on appropriate cleaners for the Core during COVID-19? Which are safe to use with the Pupil Core (which I think is just PLA material)? See FDA/EPA's recommended cleaners here: https://www.epa.gov/pesticide-registration/list-n-disinfectants-use-against-sars-cov-2. Thanks!

papr 16 April, 2020, 15:09:57

@user-141bcd yeah, especially since most of the data seems to be quicker than 200Hz given the <5ms time difference

user-141bcd 16 April, 2020, 15:21:25

@papr but the 200Hz is an upper bound defined by the hardware, right? So those rated beyond 200Hz (or below 5ms distance) are also artifacts of the software based timing estimate?

user-755e9e 16 April, 2020, 15:22:19

@user-76ca8f we use https://www.amazon.de/-/en/gp/product/B0849VG5GR/ref=ppx_yo_dt_b_asin_image_o00_s00?ie=UTF8&psc=1 for cleaning the Pupil headsets during the assembly process.

papr 16 April, 2020, 15:28:42

@user-141bcd both correct.

user-76ca8f 16 April, 2020, 17:04:34

@user-755e9e Thank you!

user-ab28f5 16 April, 2020, 23:38:50

Hi, I want to ask a question! Does anyone know what the function of the triangle on the surface is? Thank you!

papr 17 April, 2020, 06:28:09

@user-ab28f5 hey, yes, it shows the orientation of the surface. Specifically, it points "up".

user-ab28f5 17 April, 2020, 08:13:15

So the top of the triangle means the top of the surface, right?

user-ab28f5 17 April, 2020, 08:19:37

I have another question... How can I tell where the origin of the coordinate system is?

papr 17 April, 2020, 08:26:58

@user-ab28f5 The normalized coordinate system has its origin in the bottom left. https://docs.pupil-labs.com/core/terminology/#coordinate-system

Chat image

user-ab28f5 17 April, 2020, 08:40:31

@papr

Thank you!

And I want to ask another question...

How can I tell whether the calibration is good or bad? Is the dismissing value a good reference point?

papr 17 April, 2020, 08:41:43

@user-ab28f5 There is some information on that in our best practices section: https://docs.pupil-labs.com/core/best-practices/

user-eaf50e 17 April, 2020, 09:09:10

Hello, the pose_t attribute from apriltags, does it give the position of the center of the tag? Cheers!

papr 17 April, 2020, 09:10:01

@user-894365 Could you respond to @user-eaf50e ?

user-894365 17 April, 2020, 10:13:43

@user-eaf50e yes, pose_t is the 3d position of the center of the tag with respect to the camera.

user-430fc1 17 April, 2020, 12:34:53

@user-430fc1 I will come back to you early next week @papr hey, were you able to figure out what might have been causing this?

papr 17 April, 2020, 12:41:04

@user-430fc1 You should have received a response via email. It turns out that there is no actual issue. The base_data field includes a series of space-separated values. The length of this sequence depends on the duration of the blink. In your case, the blink is a false-positive detection with a duration of 51 (?) seconds. Therefore, it includes a lot of base data, which is not correctly displayed by Excel.
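
If you want to parse that column yourself, a rough sketch (assuming the base_data values are whitespace-separated pupil timestamps, which is what your export appeared to contain):

    import pandas as pd

    blinks = pd.read_csv("exports/000/blinks.csv")

    # base_data lists the timestamps of the pupil data that contributed to each blink
    blinks["base_timestamps"] = blinks["base_data"].apply(
        lambda s: [float(v) for v in str(s).split()]
    )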

user-430fc1 17 April, 2020, 12:54:09

@papr I didn't receive an email. But thanks, this makes sense.

papr 17 April, 2020, 12:57:13

@user-430fc1 For reference, the response was sent Apr 14, 2020, 9:09 PM UTC+2 (CEST) from [email removed]

user-430fc1 17 April, 2020, 13:23:22

@papr Thanks, must have got lost somewhere as I can't find it. Not to worry, case solved.

user-b4144f 19 April, 2020, 12:22:48

Hello everyone! πŸ™‚ I'm working on a custom glasses mount for a Pupil Labs headset. I'm finishing another guy's project, and I have a small problem: the cables of each side camera were taken off the connectors. Can someone send me a picture of the order in which I need to put them back? It would be nice πŸ™‚

user-755e9e 20 April, 2020, 08:07:32

Hello @user-b4144f , the connection of the JST connector is as in the photo attached. We would still recommend you to get it repaired by our experts, in order to avoid additional hardware failures. If you want to proceed, please get in touch with [email removed]

Chat image

user-b4144f 20 April, 2020, 10:16:29

@user-755e9e Hello! Thank you for your reply! Actually, the JST connectors are not broken, and neither are the cables. They were just "disassembled", so I only need to put them back together in the right place.

user-b4144f 20 April, 2020, 10:17:37

@user-755e9e On the other cable, the colors are not the same for the data ones. They are blue and yellow. Are you able to tell me their order too ? Thank you !

user-755e9e 20 April, 2020, 10:24:31

@user-b4144f The JST facing as in the photo above, from left to right: BLACK - BLUE - YELLOW - RED.

user-b4144f 20 April, 2020, 10:25:12

@user-755e9e Perfect ! Thanks a lot ! I appreciate your help πŸ˜‰

user-755e9e 20 April, 2020, 10:27:58

no problem, good luck with your project! @user-b4144f πŸ™‚

user-008d62 20 April, 2020, 20:36:06

hello - was wondering if anyone could help me extend the second camera for the eye tracker device

user-008d62 20 April, 2020, 20:36:37

apparently the two camera windows in Pupil Capture are only tracking one eye

user-008d62 20 April, 2020, 20:36:45

does anyone know how to fix this problem?

user-2e2394 21 April, 2020, 14:27:58

Hi

I am using the Pupil Core and need a live video feed and gaze position in Unity.

I am using aruco markers that I've placed around a room, and I then use an OpenCV library to get the aruco marker IDs and the position of each corner of the aruco marker. I use this to calculate the distance between an aruco marker and the gaze position, to figure out if I am looking at a marker. However, I am having trouble reliably recognizing when I am looking at one of these aruco markers.

It seems that I am not always getting the latest gaze data using the network API, which is a problem for my project. When I check the eye position in Pupil Capture, it is shown directly on top of the aruco marker, which is why it is confusing to me that I am unable to recognize that that is where I am looking.

I am creating individual threads for subscribing to frame.world and gaze.2d.0.. I have changed the eye camera to only use 30 frames per second, such that there should be the same number of messages for the video feed as for the gaze position.

Since I am only interested in live data, is there any way to force Pupil Capture to drop everything that is not live, such that the threads I am creating in Unity cannot fall behind on getting the data from the glasses?

Also, I am pretty sure I have a version with the 120Hz eye camera, at least that is the highest frame rate I can choose. There is no lens adjuster in the package and I am not having any luck using my fingers to twist the lens. Should I just try with more force? I don't want to end up breaking the camera.

user-74be48 22 April, 2020, 10:35:04

Hi, I have just started using Pupil Mobile. I'd like to ask a question - is it possible to turn off the world camera from the Pupil Mobile app? What I need is only recordings from the eye cameras, and I'd like to save storage if possible. I'd appreciate any comments and advice.

user-6b3ffb 22 April, 2020, 14:14:48

Hi, I would like to ask how the confidence value is calculated. I'm trying to locate it in the source code, but I'm getting frustrated.

papr 22 April, 2020, 15:22:51

@user-2e2394 Hi. If you use apriltag markers, you can use the built-in surface tracking feature to check whether you are looking at an area of interest or not.

Pupil Capture will also only send data over the network that has been subscribed to. Therefore, if you are not receiving as much data as expected, Capture is dropping data already because you are not processing it fast enough.

Please be aware that Capture maps gaze at the beginning of each event loop iteration. Lowering the world frame rate will therefore map + publish more pupil positions at once.

Can you elaborate on why you want to twist the lens? Generally, we do not recommend adjusting the lens manually.
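
Coming back to the surface tracking suggestion: if you stay on the network API, a minimal sketch of subscribing to surface-mapped gaze could look like this (the surface filtering and printout are only for illustration):

    import msgpack
    import zmq

    ctx = zmq.Context()

    # Ask Pupil Remote for the SUB port
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # Subscribe to surface-mapped gaze only
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:%s" % sub_port)
    sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")

    while True:
        topic, payload = sub.recv_multipart()
        msg = msgpack.loads(payload, raw=False)
        for gaze in msg.get("gaze_on_surfaces", []):
            if gaze["on_surf"]:
                print(topic.decode(), gaze["norm_pos"])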

papr 22 April, 2020, 15:27:29

@user-74be48 This is not possible as far as I know.

papr 22 April, 2020, 15:29:24

@user-6b3ffb The 2d confidence value is based on how well the fitted ellipse matches the candidate pupil edge pixels. Reference to the implementation: https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_2d/detect_2d.hpp#L600

user-2e2394 22 April, 2020, 15:32:44

@papr, from the pupil labs hardware documentation: If you have a 120Hz eye camera, make sure the eye camera is in focus. Twist the lens focus ring of the eye camera with your fingers or lens adjuster tool to bring the eye camera into focus.

This is why I wanted to twist the lens - to make sure it had the right focus.

papr 22 April, 2020, 15:34:51

@user-2e2394 In this case, there should have been a lens-adjustment tool in the original package. It looks like a gear and should be colored orange (depending on how old your device is)

user-2e2394 22 April, 2020, 15:37:34

That is my problem: there is no lens adjuster in the package. The eye camera only goes up to 120 frames per second in the Pupil Capture software, which is what makes me think it is the 120Hz eye camera and might need to be adjusted. But I have been hesitant to just use more force, as I do not want to break it.

papr 22 April, 2020, 15:38:09

@user-2e2394 Which resolution do you have selected?

user-2e2394 22 April, 2020, 15:38:26

1920x1080, using the zoomed in world camera lens.

user-2e2394 22 April, 2020, 15:38:34

sorry, my bad

user-2e2394 22 April, 2020, 15:38:53

400x400

papr 22 April, 2020, 15:39:17

That is the 200Hz camera. For enabling the 200Hz mode, please set the resolution to 192x192

papr 22 April, 2020, 15:39:48

This camera does not come with a lens adjustment tool as the lens is not adjustable.

user-2e2394 22 April, 2020, 15:40:21

Yeah, got that from the hardware doc as well, I just thought I had the 120Hz one. Glad I didn't just try to force it πŸ™‚ Thanks.

papr 22 April, 2020, 15:40:54

@user-2e2394 Regarding the data reception issue, do you have any questions?

user-2e2394 22 April, 2020, 15:51:00

Not really regarding the data reception issue; if Pupil Capture is already dropping data when I am not requesting it quickly enough, then my problem is obviously with another part of my program. I am subscribed to the IPC topics for the world camera and gaze position, and receive data continuously in separate threads for each topic.

You mention lowering the frame rate of the world camera. However, I only have the option of using 30 frames per second, no matter what resolution I choose.

papr 22 April, 2020, 15:53:38

@user-2e2394 How do you know that you are missing data? Also, please check the fps graphs in the world and eye windows to check if they provide the theoretical maximum.

papr 22 April, 2020, 15:54:10

Subscribing each topic in separate threads sounds like the correct approach to ensure as much data processing as possible.

user-2e2394 22 April, 2020, 16:10:14

The fps graphs show the same frame rate as chosen, +-1 frame. So ~30fps for both the world and eye camera.

It was not that I am missing data, just that the gaze position I got in Unity seemed to lag behind what is shown in Pupil Capture. So I need to keep looking at a specific object for longer, until the gaze position I received in Unity catches up to where it is supposed to be.

Pretty sure from your first message that this is not how Pupil Capture works, and I can't actually be lagging behind. Maybe using the built-in surface tracker, rather than a third-party apriltag reader, is going to remove this problem.

And thanks for your help πŸ™‚

user-ab28f5 23 April, 2020, 02:51:43

I want to ask a question... What does the dismissing value mean? Is a higher value good, or a lower one?

Thank you for your reply!

papr 23 April, 2020, 09:31:49

@user-2e2394 In this case, I would either increase the world frame rate (smaller gaze mapping batches that are processed with less lag) or use Pupil Service. Pupil Service only provides a limited set of features, e.g. it does not support making recordings, but it is built to reduce the gaze mapping delay.

papr 23 April, 2020, 09:33:28

@user-ab28f5 By default, Pupil Player hides data with confidence less than 0.6. In some cases, it is worth increasing the threshold to 0.8.
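
If you work with exported data, the same filtering can be done manually, e.g. (a small sketch assuming a raw data export containing gaze_positions.csv):

    import pandas as pd

    MIN_CONFIDENCE = 0.6  # raise to 0.8 for stricter filtering

    gaze = pd.read_csv("exports/000/gaze_positions.csv")
    gaze = gaze[gaze["confidence"] >= MIN_CONFIDENCE]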

user-b4144f 23 April, 2020, 10:41:46

Due to some previous cable failures on other projects (cables broke near the JST connector), I've chosen another way to assemble my project: removing the male JST, removing the female JST, and soldering the cables directly onto the board, with some epoxy resin added to strengthen everything! Works like a charm πŸ™‚

Chat image

user-74be48 23 April, 2020, 13:47:34

@papr Thanks for answering my question. It was very helpful. Then, if I detach the world camera from the Core, will the Pupil Mobile app work properly with only the eye cameras? I'd like to know just in case I can't put it back. Thank you.

papr 23 April, 2020, 14:05:38

@user-74be48 The world camera is not pluggable. In other words, you would have to cut the cable. I do not think this is advisable. There might be a software workaround by resetting the camera permissions, and only granting access to the eye cameras. This will keep the world camera effectively offline. You will have to be careful to not accidentally grant permissions to the camera, though.

user-b4144f 23 April, 2020, 14:07:48

@papr Yes it is. At least on mine. You just need to unscrew the lens, unscrew the two screws in the back, release the cover and the camera board, then unplug the JST connector πŸ™‚

user-b4144f 23 April, 2020, 14:08:48

I have the Pupil Labs add-on for HoloLens

user-b4144f 23 April, 2020, 14:16:13

@user-74be48

Chat image

user-b4144f 23 April, 2020, 14:16:21

Chat image

user-b4144f 23 April, 2020, 14:16:36

Chat image

user-b4144f 23 April, 2020, 14:16:37

Chat image

user-b4144f 23 April, 2020, 14:16:39

Chat image

user-b4144f 23 April, 2020, 14:16:39

Chat image

papr 23 April, 2020, 14:23:01

@user-b4144f @user-74be48 I need to correct myself. You indeed do not need to cut the cables.

@user-74be48 Nonetheless, I would recommend trying the suggested software workaround before attempting to unplug the scene camera. As you can see from the picture, the cables are very thin and can easily break.

user-b4144f 23 April, 2020, 14:24:49

Okay, I misunderstood πŸ™‚

Yep @user-74be48: the cables are thin, and break easily if manipulated too much. Happened to me several times (but only on eye cameras so far).

papr 23 April, 2020, 14:26:40

@user-b4144f Thanks for sharing the pictures btw. Your custom mount looks very interesting. Aren't you able to use the built-in camera as the scene camera?

user-b4144f 23 April, 2020, 14:34:58

@papr Haha, thank you, and you're welcome for the pictures πŸ™‚
Yes, this custom build is pretty interesting, because it allows the Pupil Labs headset to work for people wearing glasses. The glasses that I use are designed for active 3D cinema. Right now, they are hollow (no electronics, no shutter), but I've already made a version with all the elements in it + the Pupil Labs hardware! It took me a lot of work, because I had to carefully remove all the protection on the Pupil Labs cables to get them "naked", in order to fit them inside the glasses ^^

About the scene camera, I have some 3D-printed supports with neodymium magnets to hold the camera still. But as I'm not done working on it yet, the pictures above don't show them.

papr 23 April, 2020, 14:37:32

@user-b4144f Do the glasses have to look "normal"? Or is the goal simply to provide eye tracking for people with impaired vision? Have you thought about a 3d-printed clip that holds the glasses and can be attached to the existing Pupil Core headset?

user-b4144f 23 April, 2020, 14:42:37

@papr Those 3D glasses were designed to work for people already wearing glasses, and as I work for the company that designed and sells them, we already have a lot of "raw materials" available. So we decided to see if, with some proper modifications, they would fit with the Pupil Labs hardware πŸ™‚

user-b4144f 23 April, 2020, 14:42:57

Chat image

user-b4144f 23 April, 2020, 14:43:00

Chat image

user-b4144f 23 April, 2020, 14:43:01

Chat image

user-b4144f 23 April, 2020, 14:43:03

Chat image

papr 23 April, 2020, 14:44:42

@user-b4144f Ah, so you would be wearing the prototype on top of the existing glasses? Aren't you worried about the actual glasses covering the eye cameras' views? This is usually the issue when people try to put the Core headset on top of their existing glasses. Btw, I like the magnet idea πŸ™‚

user-b4144f 23 April, 2020, 14:48:11

Haha πŸ˜„ The problem you pointed out about the Core headset is exactly why we used our solution. I never had the opportunity to try the Core headset, but others tried and told me about this problem. With my design, the eye cameras are low enough to counter this issue πŸ™‚ We may lose a bit of precision and efficiency because of this, though. But it doesn't seem to be enough to cause any trouble for their purposes.

papr 23 April, 2020, 14:49:09

As long as the pupils are visible to the eye cameras, you should be fine. πŸ‘

user-b4144f 23 April, 2020, 14:51:32

Exactly! Don't worry, I've already produced 3 pairs of them, and they work fine ^^

Of course, due to the position of the eye cameras, the pupil is way more elliptical, and maybe at some extreme angles (especially looking up) we may lose a bit of precision. But nothing important, so no worries πŸ™‚

papr 23 April, 2020, 14:52:00

This is great to hear πŸ™‚

user-b4144f 23 April, 2020, 14:52:16

(sorry for mistakes, i type way too fast in my non-native language xD )

papr 23 April, 2020, 14:52:30

No worries πŸ™‚

user-b4144f 23 April, 2020, 14:55:24

Anyway, what Pupil Labs has designed is insanely good. This solution works like a charm, even better than other solutions, at a fraction of their prices! I tried a lot of them, and Pupil Labs is definitely the best. Plus, they provide really great software, customizable and open source. ❀️

papr 23 April, 2020, 14:57:12

Thank you very much for your kind words. 😊

user-b4144f 23 April, 2020, 14:57:43

Haha, you are on the dev team? πŸ˜„

papr 23 April, 2020, 14:58:50

Yes, software development. My knowledge about the hardware has its limits though as you might have noticed. πŸ™ˆ

user-b4144f 23 April, 2020, 14:59:29

Okay ^^ Good job then πŸ˜‰

No worries, i can understand that !

user-fe4a3e 23 April, 2020, 16:33:02

Hello, I'm a grad research assistant looking to incorporate the Pupil Core 3D design into a larger system. Is there any way/place that I can get/purchase the CAD file for the 3D-printed headset design?

user-b4144f 23 April, 2020, 16:34:35

A precise 3D scanner πŸ˜„ (joking apart, I don't know)

papr 23 April, 2020, 16:36:13

@user-fe4a3e Some of the geometry is released under LGPL-3.0 at https://github.com/pupil-labs/pupil-geometry

papr 23 April, 2020, 16:37:38

If you need more please contact [email removed] πŸ™‚

user-fe4a3e 23 April, 2020, 16:47:27

Lol @user-b4144f I've tried that using the Structure Core scanner/Skanect scanning software but it didn't work too well. @papr I'll take a look here and let you know if this works for what we need, thanks!

user-8fd8f6 23 April, 2020, 18:11:23

@papr Hi, I have a question. In the pupil diameter data, does the software consider off-axis compensation?

papr 23 April, 2020, 18:13:40

@user-8fd8f6 could you elaborate on what you mean by "off axis compensation"? Are you referring to the gaze-angle dependency in the 2d pupil diameter (major pupil ellipse axis in pixels)?

user-8fd8f6 23 April, 2020, 18:15:17

yes

papr 23 April, 2020, 18:16:06

@user-8fd8f6 the 2d pupil diameter is not compensated for that gaze-angle dependency but the 3d pupil diameter is.

user-8fd8f6 23 April, 2020, 18:18:41

Thank you, Do you have any publication I can cite for that?

papr 23 April, 2020, 18:22:38

Or maybe even better: https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf

user-e70d87 23 April, 2020, 20:15:53

Do you have any idea why the right eye is not tracking in this recording?

https://www.dropbox.com/sh/es3t5xbxehngwe7/AACFDImoPOIZ-6lbsNrOslF_a?dl=0

papr 23 April, 2020, 21:11:00

@user-e70d87 from a first review, it looks like the recorded pupils are close to the default "pupil max" value. Increase the value in the eye processes' pupil detector menus.

papr 23 April, 2020, 21:11:11

I can have a closer look tomorrow.

user-74be48 24 April, 2020, 11:04:28

@papr Thank you again for your advice! First I will look into camera permission control. @user-b4144f Thanks for sharing the beautiful pictures! They'll be really helpful if I have to move to plan B (detach, if not destroy).

user-eaf50e 24 April, 2020, 11:28:24

Hello, how can I map the 3d tag pose to the 2d image plane? I have the camera matrix & distortion coefficients and the translation and rotation of the tag. I tried to use cv2.projectPoints() but this gives me weird results. I know I can obtain center and corner coordinates for the tag on the image plane, but I'd need to do the transformation myself to test something.

papr 24 April, 2020, 11:31:13

@user-894365 could you help @user-eaf50e with that?

user-eaf50e 24 April, 2020, 12:38:25

Great! Another question: how can I write to a text file from a plugin? It seems that the text is written to the Pupil Labs console, but not saved to my text file.

papr 24 April, 2020, 12:39:16

@user-eaf50e how are you saving the data right now?

user-eaf50e 24 April, 2020, 12:51:32

    text_file = open("file.txt", "w")
    text_file.write("My text")
    text_file.close()

user-e70d87 24 April, 2020, 15:11:06

@papr That didn't seem to help, I'm afraid.

To be clear - this is using the 120Hz eye cameras (they really do produce beautiful eye videos, don't they?)

Is it possible that some of the changes you have made since the introduction of the 200Hz cameras are causing issues here?

https://www.dropbox.com/s/orgjs22upfnf470/world.mp4?dl=0

papr 24 April, 2020, 15:27:10

@user-eaf50e This looks generally correct. Do you see the text file but it is empty? Or can you not find the text file at all? It is possible that it is in a different place than you expect.
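
One way to rule out a working-directory issue is to write to an absolute path, e.g. inside the user settings folder (a rough sketch; g_pool.user_dir is what plugins usually use for this, but treat the exact attribute as an assumption for your plugin type):

    import os

    # Inside a plugin method, e.g. recent_events(); g_pool.user_dir points to the
    # pupil_capture_settings / pupil_player_settings folder.
    out_path = os.path.join(self.g_pool.user_dir, "my_plugin_output.txt")
    with open(out_path, "a") as f:
        f.write("My text\n")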

papr 24 April, 2020, 15:33:00

@user-e70d87 It looks like the right eye model is not fit 100% correctly. If I had to guess, the eye model is a bit closer to the eye camera than the eye is in reality. You can see this from the green circle (eye ball outline projected into the image) being bigger than on the left eye. This also explains why the pupil detection (red circle) is bigger than the actual pupil in the image. Use the options to pause and restart the pupil detection as well as reset and freeze the eye model to fit the model manually. Specifically, if you freeze the model in the middle of the recording, and hit redetect, the redetection will use the frozen model for the new detection.

user-894365 24 April, 2020, 16:19:37

@user-eaf50e Perhaps this script could help

    import cv2
    import numpy as np

    # detection: an apriltag detection providing pose_R (3x3 rotation) and pose_t (translation).
    # tag_size, cameraMatrix and distCoeffs come from your own setup/calibration.
    pose_R = detection.pose_R
    tvec = detection.pose_t.ravel()

    # Tag center and corners in the tag's own coordinate system (z = 0 plane).
    marker_center_3d = np.zeros(3)
    marker_corners_3d = (
        np.array([[-0.5, 0.5, 0], [0.5, 0.5, 0], [0.5, -0.5, 0], [-0.5, -0.5, 0]])
        * tag_size
    )

    # Transform into the camera coordinate system using the detected pose.
    marker_center_in_camera_coordinate = np.dot(pose_R, marker_center_3d.T).T + tvec
    marker_corners_in_camera_coordinate = np.dot(pose_R, marker_corners_3d.T).T + tvec

    # Project onto the image plane; rvec/tvec are zero because the points are
    # already expressed in camera coordinates.
    projected_center = cv2.projectPoints(
        objectPoints=marker_center_in_camera_coordinate,
        rvec=np.zeros(3),
        tvec=np.zeros(3),
        cameraMatrix=cameraMatrix,
        distCoeffs=distCoeffs,
    )[0].reshape(-1, 2)
    projected_corners = cv2.projectPoints(
        objectPoints=marker_corners_in_camera_coordinate,
        rvec=np.zeros(3),
        tvec=np.zeros(3),
        cameraMatrix=cameraMatrix,
        distCoeffs=distCoeffs,
    )[0].reshape(-1, 2)

user-894365 24 April, 2020, 16:27:47

@user-eaf50e Please note that the tag pose output from the detector is a rough estimation since the detector does not take the distortion coefficients into account.
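
If the rough pose from the detector is not accurate enough, one option is to re-estimate the pose yourself from the detected 2D corners with cv2.solvePnP, which does take the distortion coefficients into account. A minimal sketch with placeholder values; the intrinsics, corner pixels and tag size below are assumptions for illustration, and the 2D corner ordering must match the 3D model points:

    import cv2
    import numpy as np

    # Placeholder values; use your own calibration, detected corners and tag size.
    tag_size = 0.05  # tag edge length in meters (assumption)
    cameraMatrix = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
    distCoeffs = np.zeros(5)

    # 3D corner coordinates in the tag's own frame (same convention as the snippet above).
    marker_corners_3d = (
        np.array([[-0.5, 0.5, 0], [0.5, 0.5, 0], [0.5, -0.5, 0], [-0.5, -0.5, 0]])
        * tag_size
    )

    # Detected 2D corners in pixels (normally detection.corners from the detector);
    # their ordering must correspond to marker_corners_3d.
    image_corners = np.array(
        [[600.0, 400.0], [680.0, 400.0], [680.0, 320.0], [600.0, 320.0]]
    )

    ok, rvec, tvec = cv2.solvePnP(
        objectPoints=marker_corners_3d,
        imagePoints=image_corners,
        cameraMatrix=cameraMatrix,
        distCoeffs=distCoeffs,
    )
    # rvec/tvec now describe a distortion-aware tag pose that can be passed to
    # cv2.projectPoints as in the snippet above.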

user-c629df 24 April, 2020, 21:40:10

The timestamps in the annotation.csv do not match the timestamps in the other eye tracking exports, such as gaze_position.csv. How can I solve this problem? @papr

user-6bd380 25 April, 2020, 07:08:01

Hello, I am Srishti. I am trying to analyze my data for the first time, but Pupil Player is not working. I get a message saying to install pyrealsense. Is this common? Kindly help me, what shall I do? I am attaching the image.

Chat image

papr 25 April, 2020, 10:06:50

@user-6bd380 please close the window, delete the user_settings files in the pupil_player_settings folder and try again

user-6bd380 25 April, 2020, 11:03:54

@papr Thank you so much for your prompt reply πŸ™‚ I deleted the file and it's working now.

user-7d4a32 25 April, 2020, 15:13:38

Hello, I'm wondering about the confidence parameter that appears in all sorts of places. I presume it indicates how "sure" the software is of its answer. Specifically for the blink feature, does confidence mean how sure it is that the subject has blinked?

Also, does anyone have an idea of whether the "confidence" is reasonably accurate?

user-905228 26 April, 2020, 08:45:32

@papr Hi, I would like to ask whether there is any method to read and export the location file recorded by Pupil Mobile? Thanks!

user-308350 26 April, 2020, 12:49:44

Hi, I just got a Pupil Core and am trying to install Pupil Capture on Win10, following the instructions on the Pupil Labs web page. However, the installation does not progress any further (it has been stuck for several hours) at the PupilDrvInst.exe command window, showing "successfully deleted private key" as the last message. Does anyone have an idea of what is going on or what I should do?


Update

I left the machine overnight and closed all the relevant windows this morning. After restarting the app, it worked!

Sorry for bothering you. I don't know what the problem really was, but it is OK anyway.

user-e70d87 26 April, 2020, 16:46:23

@papr I understand that the eye model is a bad fit; I am trying to understand why the algorithm is unable to find the pupil in what should be a pretty ideal eye video. It also performs poorly when I only detect in 2D mode.

I don't seem to have these issues when I use an older version of Pupil Capture/Player (1.7.42)

user-7d4a32 26 April, 2020, 20:26:41

@user-905228 If you mean to change the export location, it is under "Recorder" in the menu on the right in Pupil Capture, under "Path to Recordings".

user-905228 27 April, 2020, 01:46:50

@user-7d4a32 Thanks! πŸ™‚ But my question is how to open the .loc file. I've tried the software "EasyGPS" but it didn't work, and I couldn't find any other software to open it.

user-905228 27 April, 2020, 01:48:26

The location of the fixation point seems to be too high.

Chat image

user-905228 27 April, 2020, 01:49:28

I'd expect the fixation point to fall on the right rear mirror.

Chat image

user-905228 27 April, 2020, 02:07:04

@papr And the other question is: can I modify the fixation location manually with a Pupil Player plugin after it has been calibrated and mapped? Some of my data have the same problem (the fixation points are located too high) no matter how many times I reprocess them in Pupil Player. (The data were collected with Pupil Mobile and calibrated with the manual calibration marker.) Thank you so much!

papr 27 April, 2020, 09:06:38

@user-7d4a32 Hi. Yes, the confidence value is meant as an indicator for data quality. The confidence value is calculated differently for different types of data. In the case of the blink detector, it is based on how sharply the pupil confidence drops and increases. The sharper the drop/increase, the higher the blink confidence will be.
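
As a rough illustration of how this value can be used downstream, here is a minimal sketch that simply thresholds the confidence of an exported blinks file; the file name, column name and 0.5 threshold are assumptions for this example:

    import csv

    MIN_CONFIDENCE = 0.5  # placeholder threshold; tune for your own data

    # Assumes a CSV export with one row per detected blink and a "confidence" column.
    with open("blinks.csv", newline="") as fh:
        blinks = list(csv.DictReader(fh))

    confident_blinks = [b for b in blinks if float(b["confidence"]) >= MIN_CONFIDENCE]
    print(f"{len(confident_blinks)} of {len(blinks)} blinks pass the confidence threshold")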

papr 27 April, 2020, 09:12:05

@user-905228 The location file is a custom binary file and does not follow any standards on gps data. Specifically, it is a series of N "location" data frames, where each data frame consists of 6 consecutive little endian float64 values (longitude, latitude, altitude, accuracy, bearing, speed). The corresponding N timestamps can be found in the location_*.time file.
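
A minimal sketch of reading such a file with numpy, following the layout described above (the file name is a placeholder, and parsing the separate location_*.time file is omitted here):

    import numpy as np

    FIELDS = ("longitude", "latitude", "altitude", "accuracy", "bearing", "speed")

    # N location frames of 6 consecutive little-endian float64 values each.
    data = np.fromfile("location.loc", dtype="<f8").reshape(-1, len(FIELDS))
    locations = [dict(zip(FIELDS, frame)) for frame in data]

    # The N matching timestamps live in the corresponding location_*.time file.
    print(f"Read {len(locations)} location frames")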

papr 27 April, 2020, 09:14:16

@user-905228 Regarding the gaze estimation offset: I suspect this is due to slippage of the headset. You can set a manual offset correction for each "gaze mapper" in the offline calibration plugin menu. You can set up multiple gaze mappers with different offsets for different sections of the recording in order to compensate for slippage over time. I would recommend making sure that these sections do not overlap.

user-c5fb8b 27 April, 2020, 09:18:33

@user-6752ca regarding your question in πŸ₯½ core-xr, academic discounts are available. You can see more information when selecting the desired product on our website: https://pupil-labs.com/products/

user-905228 27 April, 2020, 09:41:05

@papr Thanks for the help! I'll try it. πŸ˜ƒ

user-7d4a32 27 April, 2020, 10:43:06

@user-7d4a32 Hi. Yes, the confidence value is meant as an indicator for data quality. The confidence value is calculated differently for different types of data. In the case of the blink detector, it is based on how sharply the pupil confidence drops and increases. The sharper the drop/increase, the higher the blink confidence will be. @papr Thanks!

user-eaf50e 27 April, 2020, 11:57:22

@user-eaf50e This looks generally correct. Do you see the text file and it is empty? Or can't you find the text file? It is possible that it is in a different place than you expect it. @papr I immediately check with: print(open('{}.txt'.format(filename)).read()) but no file gets written

user-eaf50e 27 April, 2020, 11:57:37

@user-eaf50e Perhaps this script could help (code snippet above) @user-894365

user-eaf50e 27 April, 2020, 11:57:53

thank you, I'll try that.

user-eaf50e 27 April, 2020, 12:00:56

So the 3D pose estimation is not accurate? I tried to match the 3D gaze and the 3D tag pose, but it doesn't give me satisfying results, i.e. when the gaze is on the tag, the coordinates of the tag and the gaze don't match (even with an error margin).

papr 27 April, 2020, 12:05:02

@user-eaf50e Could you give this code a try? (requires Python 3.6 or higher)

print("START")
from pathlib import Path

test = "TEST"
file = "file.txt"
file_path = Path(file).resolve()
print(f"Writing to {file_path}")
with open(file, "w") as fh:
  fh.write(test)
print(f"Reading from {file_path}")
with open(file, "r") as fh:
  assert test == fh.read()
print("END")

papr 27 April, 2020, 12:05:31

You should see 4 lines being printed and no errors

papr 27 April, 2020, 12:06:44

Please be aware that with open(path) as file_handle is preferred over the manual version (file_handle = open(path); file_handle.close())
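
Applied to the snippet from earlier, the preferred form would look like this:

    # The context manager closes the file even if an exception is raised.
    with open("file.txt", "w") as text_file:
        text_file.write("My text")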

user-eaf50e 27 April, 2020, 12:11:07

This works, thanks! I was indeed looking in the wrong directory πŸ™ˆ I thought my print check was enough, but I guess I was wrong. Sorry for the trouble!

papr 27 April, 2020, 12:13:26

@user-eaf50e btw, I would highly recommend using the pathlib module to handle/manipulate paths. https://docs.python.org/3/library/pathlib.html#module-pathlib
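
For instance, a plugin could build its output path explicitly instead of relying on the current working directory. A small sketch; the folder and file names here are only placeholders:

    from pathlib import Path

    # Write into a dedicated folder in the user's home directory instead of
    # whatever the current working directory happens to be.
    out_dir = Path.home() / "pupil_plugin_output"
    out_dir.mkdir(parents=True, exist_ok=True)

    out_file = out_dir / "file.txt"
    out_file.write_text("My text")
    print(f"Wrote {out_file.resolve()}")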

user-2e2394 27 April, 2020, 12:21:03

@papr Hi, so I am trying to set up and use the surface tracker. The way the MessagePackSerializer I use in C# works is by deserializing into a dictionary of type (string, object); I then need to cast each object to the correct type, such as (float)(double)unpacked["timestamp"].

I am having trouble figuring out what types are used for "gaze_on_surfaces" and "fixations_on_surfaces". Is there somewhere I can look up the correct types, or see examples of using the surface tracker in C#?

papr 27 April, 2020, 12:22:00

@user-e70d87 I am getting very good pupil detections with the 2d detection. Even better with the following settings: 10 intensity range, 60 pupil min, 120 pupil max.

I think the reason why the confidence is lower than for the left eye on average is that the LED reflections are often on the pupil boundary, interrupting the black, elliptic pupil edge. 2d confidence is the ratio of how many candidate pixels (light blue in the algorithm view) match the fitted ellipse (red).

I reduced the intensity range to remove some candidate pixels that were inside the pupil instead of at its boundary.

papr 27 April, 2020, 12:23:18

@user-2e2394 I understand. These two keys are actually lists of more dictionaries. I can get you some example data.

user-2e2394 27 April, 2020, 12:28:55

That would be lovely.

In the end, I just need to translate the gaze / surface tracker data into whether or not the user is currently looking at Surface1, Surface2 or Nothing.

papr 27 April, 2020, 12:32:31

@user-2e2394 https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py In this case, simply check this example python script.

filtered_surface is the root dictionary of the surface payload
gaze_positions is the list of surface-mapped gaze positions
gaze_pos is a dictionary
gaze_pos['norm_pos'] is a list of 64-bit floats, and with 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1 you can check whether the gaze was on the surface
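
To make this concrete, here is a minimal Python sketch that subscribes to surface messages over the network API and reports whether any mapped gaze is currently on the surface; the address and topic prefix follow the pupil-helpers pattern, and the loop and printing are only illustrative:

    import msgpack
    import zmq

    # Connect to Pupil Remote (default Capture address assumed) and ask for the SUB port.
    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect("tcp://127.0.0.1:50020")
    pupil_remote.send_string("SUB_PORT")
    sub_port = pupil_remote.recv_string()

    # Subscribe to all surface messages ("surfaces.<surface_name>").
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
    subscriber.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")

    while True:
        topic, payload = subscriber.recv_multipart()
        surface = msgpack.loads(payload, raw=False)
        # Each surface-mapped gaze datum carries an "on_surf" flag.
        if any(g["on_surf"] for g in surface["gaze_on_surfaces"]):
            print(f"Looking at {surface['name']}")
        else:
            print(f"Not looking at {surface['name']}")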

papr 27 April, 2020, 12:33:31

Do you need to specify the types of all keys, or is it sufficient to know the types of the fields mentioned above?

user-2e2394 27 April, 2020, 12:43:07

I will need to cast every value from the dictionary to the right type, but the above should be enough to go on with.

Do you have it written down somewhere similar to the pupil and gaze datum format found at: https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format ?

papr 27 April, 2020, 12:44:00

@user-2e2394 Actually, we don't have that yet. I will update the documentation with some example data.

papr 28 April, 2020, 08:44:06

@user-2e2394 For reference, this is an example payload of a surface message:

{
    "topic": "surfaces.surface_name",
    "name": "surface_name",
    "surf_to_img_trans": (
        (-394.2704714040225, 62.996680859974035, 833.0782341017057),
        (24.939461954010476, 264.1698344383364, 171.09768247735033),
        (-0.0031580300961504023, 0.07378146751738948, 1.0),
    ),
    "img_to_surf_trans": (
        (-0.002552357406770253, 1.5534025217146223e-05, 2.1236555655143734),
        (0.00025853538051076233, 0.003973842600569134, -0.8952954577358644),
        (-2.71355412859636e-05, -0.00029314688183396006, 1.0727627809231568),
    ),
    "gaze_on_surfaces": (
        {
            "topic": "gaze.3d.1._on_surface",
            "norm_pos": (-0.6709809899330139, 0.41052111983299255),
            "confidence": 0.5594810076623645,
            "on_surf": False,
            "base_data": ("gaze.3d.1.", 714040.132285),
            "timestamp": 714040.132285,
        },
        ...,
    ),
    "fixations_on_surfaces": (
        {
            "topic": "fixations_on_surface",
            "norm_pos": (-0.9006409049034119, 0.7738968133926392),
            "confidence": 0.8663407531808505,
            "on_surf": False,
            "base_data": ("fixations", 714039.771958),
            "timestamp": 714039.771958,
            "id": 27,
            "duration": 306.62299995310605,
            "dispersion": 1.4730711610581475,
        },
        ...,
    ),
    "timestamp": 714040.103912,
}

papr 28 April, 2020, 09:19:58

I also started a pull request to add this information to the documentation https://github.com/pupil-labs/pupil-docs/pull/367

user-a98526 28 April, 2020, 11:15:11

@papr Hi, I used Pupil to get the eye data. I want to know if there is any way to get head movement data, similar to Pupil's head-mounted setup.

papr 28 April, 2020, 11:16:30

@user-a98526 What device are you currently using to get the eye data?

user-a98526 28 April, 2020, 11:40:27

I am using it for gaze estimation, and I want to use head movement data to obtain a more accurate gaze estimate.

papr 28 April, 2020, 11:41:59

@user-a98526 Have you checked out the head pose tracking feature? https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins It requires you to set up your environment with apriltag markers. Is this what you are referring to?

user-a98526 28 April, 2020, 11:54:00

@papr I think this is helpful, but can this plugin provide online head pose estimation, I want to use this information to control the robot in real time.

papr 28 April, 2020, 11:54:27

@user-a98526 Yes, there is a realtime version for Pupil Capture, too.

user-a98526 28 April, 2020, 11:55:04

@papr Thank you very much, it was very helpful!

user-2e2394 28 April, 2020, 13:52:03

@papr Appreciate the example.

It seems to me that the boolean from the python example "0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1" is already evaluated and saved in the dictionary entry "on_surf".

papr 28 April, 2020, 13:52:17

@user-2e2394 correct

user-5ef6c0 29 April, 2020, 13:39:19

Is there a way to trim a Pupil Core recording, i.e. export a segment of my recording that I can then import back into Pupil Player?

user-c5fb8b 29 April, 2020, 13:41:28

@user-5ef6c0 there are two handles left and right of the timeline that you can drag around to specify a subsection of the recording. This trimmed section will be used when you export your data. There's no way to import that back into Player though. Player will always keep your whole recording as it is.

user-5ef6c0 29 April, 2020, 14:37:28

@user-c5fb8b Thank you. Yes, I was aware of that. I have 8-minute-long recordings with multiple sections that I'm analyzing separately. I thought it would be useful to be able to export them once and then open them separately in Player.

user-c5fb8b 30 April, 2020, 08:12:47

@user-5ef6c0 Normally we recommend making separate recordings if your experiment allows splitting into reasonable blocks; maybe this could be an option for you? Otherwise you will have to use the trim marks to specify one section at a time, do all your analysis, and export. If any additional post-hoc analysis is needed, you'll have to do it manually on your exported data for now.

user-894e55 30 April, 2020, 19:34:36

Hi, can I adapt Pupil Labs Core to the Vuzix M300xl?

End of April archive