πŸ‘ core


user-8ed000 01 October, 2021, 09:24:44

Hello. I have a problem when exporting gaze positions on surfaces from Pupil Player. I set the export to export all frames, from frame 0 to the last frame, but in the exported .csv data there are frames missing. Sometimes only frame 0, sometimes frames are skipped in the middle (see screenshot). Any idea how I can fix this?

Chat image

papr 01 October, 2021, 09:58:54

This happens if there is either no gaze or no surface detection for a given frame. :)

user-8ed000 01 October, 2021, 10:02:16

Thank you for your fast answer. That's understandable. Does the frame number in the export stay the same as in the video though, or do they all shift when this happens?

papr 01 October, 2021, 10:07:41

They should stay the same as in the video. The only question is if they shift if you perform a trimmed export, but I do not think so. I would have to look that up

user-8ed000 01 October, 2021, 11:53:05

Ok, thank you. We don't perform a trimmed export, we trim the data in our program later, so we don't need further information

papr 02 October, 2021, 10:13:15

Hi, which experiment program are you using? We are working on a native psychopy integration which you could give a try.

user-52c504 02 October, 2021, 10:15:33

I wonder, if I'm simply using printed-out documents, would there be an easy solution to create word-based AOIs?

user-52c504 02 October, 2021, 10:14:18

Oh that'd be great. I am currently using a self-created programme written in Matlab

user-1e4d85 03 October, 2021, 16:55:16

Hello, can the Pupil mobile eye tracking headset, I mean the product in this link https://www.shapeways.com/product/LQJJK2CHQ/pupil-mobile-eye-tracking-headset?optionId=43013976&li=shops, give me data like eye saccades, eye fixations, velocity, and all the data that can be captured? Or which data can this product give me if I buy it?

papr 04 October, 2021, 14:59:55

See https://docs.pupil-labs.com/core/software/pupil-capture/#plugins for what can be detected in real-time or https://docs.pupil-labs.com/core/software/pupil-player/#plugins for what can be calculated post-hoc. We, currently, do not provide a saccade detector.

Please be aware that the DIY kit cameras have a lower sampling rate, i.e. the temporal resolution of e.g. fixations might be reduced.

user-864bf1 04 October, 2021, 03:32:16

Hi @papr . Can you please direct me to guidelines on the best placement/settings for the eye cameras? I am having some trouble achieving consistent tracking amongst different users, and wanted to learn how to best personalize the setup for each individual. Thanks!

papr 04 October, 2021, 15:01:17

You can find some info here https://docs.pupil-labs.com/core/#_3-check-pupil-detection and here https://docs.pupil-labs.com/core/best-practices/#pupillometry

user-864bf1 04 October, 2021, 16:43:21

Thank you!

nmt 04 October, 2021, 15:08:52

@user-1e4d85 Note that Pupil DIY is mainly intended for those who are prototyping and have the time to experiment and tinker with both hardware (and often software). It will get you basic eye tracking. However, it uses β€˜off-the-shelf’ webcams (purchased separately) which have certain limitations as described by @papr. You can read more about Pupil DIY here: https://docs.pupil-labs.com/core/diy/ and a list of things you would need to obtain here: https://docs.google.com/a/pupil-labs.com/spreadsheets/d/1NRv2WixyXNINiq1WQQVs5upn20jakKyEl1R8NObrTgU/pub?single=true&gid=0&output=html If you need eye tracking that's ready to go out of the box, you might want to consider Pupil Core. For a more detailed comparison of DIY vs Core, check out this post: https://discord.com/channels/285728493612957698/285728493612957698/689004362642620529

user-1e4d85 04 October, 2021, 16:26:37

How much is Pupil Core? Thank you

user-1e4d85 04 October, 2021, 16:24:34

You mean that if I want to buy the pupil mobile eye tracking I have to buy a special camera to get all the data I need?

user-864bf1 04 October, 2021, 17:00:20

Follow up question: If unable to achieve high confidence with both eyes, is it better to adjust the cameras to have high confidence in 1 eye at all times (even if that means the other eye is not in view at all in certain instances, such as corner of FOV). To broaden the question a bit - Is the tracking approach using data from both eyes simultaneously, or using the one with the higher confidence at any time point? Thank you.

papr 04 October, 2021, 19:09:00

I am not sure why you would differentiate these cases. The eye cameras can be adjusted independently. You should be able to adjust both of them such that they get a good angle on the eye. To your second question, the software attempts to pair high confidence detections. If this is not possible, the software will map the data monocularly.

user-d41611 04 October, 2021, 18:59:55

@user-311e43 @papr Hello I am integrating the pupil core into an underwater application and have needed to sub some of the cameras suggested in the diy documentation. I have selected this camera from bluerobotics and have consulted the uvc compatibility specifications. I have requested from the vendor to confirm if the camera is uvc compatible and they have said it should be providing the following specifications attached in the file. I was hoping someone at pupil labs can confirm this camera should work. I'm not exactly a software engineer or camera expert so any help is much appreciated!

bluerobotics_camera.docx

papr 04 October, 2021, 19:13:51

This looks pretty good! Of course, I cannot tell for 100% sure since I have not tested the camera myself. But I am fairly confident that this camera should work.

user-d41611 04 October, 2021, 19:17:33

@papr Thanks for the help!

papr 04 October, 2021, 19:19:14

I would appreciate it if you could let us know how it goes if you decide to purchase the camera.

user-d41611 04 October, 2021, 19:44:01

Yeah, of course! I will post an update. We are trying to integrate the two eye cameras and the world view into a usb hub similarly to how the pupil core base equipment is integrated into one board with a usb-c out. We have been trying to find an off-the-shelf board with 3 jstph connections and a usb-out but have been trying to figure out if there are any special features that allow the cameras to interface with the usb (forgive my lack of electrical engineering jargon). Water proofing is a b**** so the way it has been done inline on the base headset is ideal, less opportunity for intrusion.

user-0952ce 05 October, 2021, 10:46:30

Hi, is there any documentation about extending Pupil Player with Python or C#, such as adding the gaze coordinates to the recorded movie (the default is pupil diameter and its confidence?) and developer-defined parameters?

papr 05 October, 2021, 14:17:09

Pupil Player has a set of plugins that can visualize gaze. The world video exporter plugin can render these into the exported video. Generally, it is possible to extend Capture and Player via our plugin api. https://docs.pupil-labs.com/developer/core/plugin-api Unfortunately, the documentation for the plugin api is not in its best place right now. I would be happy to give you pointers or even a helping hand if you want to do something specific.

user-6af96b 05 October, 2021, 14:15:00

Sorry, is it possible to get the video via the network api?

papr 05 October, 2021, 14:15:37

yes, from the frame publisher. See this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
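
For later readers, here is a minimal sketch of that pattern (assuming Pupil Capture runs locally with Pupil Remote on its default port 50020, the Frame Publisher plugin is enabled in the Plugin Manager, and its frame format is set to BGR; the message layout follows the linked helper script):

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# ask Pupil Remote (default port) for the IPC subscription port
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# subscribe to world frames published by the Frame Publisher plugin
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")

while True:
    parts = sub.recv_multipart()          # [topic, msgpack payload, raw image bytes]
    msg = msgpack.loads(parts[1], raw=False)
    raw = parts[-1]
    # with the BGR frame format, the raw bytes are height x width x 3 uint8
    img = np.frombuffer(raw, dtype=np.uint8).reshape(msg["height"], msg["width"], 3)
    print(parts[0].decode(), msg["timestamp"], img.shape)
```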

user-6af96b 05 October, 2021, 14:19:39

wow thanks for the quick response ❀️ πŸ™‚

user-0a22da 06 October, 2021, 19:25:20

Is it possible to identify an object, such as a golf ball and track gaze fixations on a moving target using core?

user-1672dc 07 October, 2021, 07:55:50

Hello, we are experiencing a problem with the manual post hoc gaze calibration. The vis circle disappears even though there is a calibration within the video. Does somebody have an idea on what could be the issue? Thank you in advance for your help!

nmt 07 October, 2021, 10:30:00

There are several things that could cause the gaze circle to disappear when doing post-hoc calibration. I highly recommend replicating the steps taken in these videos: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-7bd058 07 October, 2021, 10:01:12

Hello, in one of our recordings one eye got detected badly but the other one has no issues. Is it possible to exclude one eye?

nmt 07 October, 2021, 15:55:02

To improve your gaze estimate by excluding one eye video, you would need to do post-hoc calibration. Post-hoc calibration requires that you recorded a calibration choreography as shown here: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration

marc 07 October, 2021, 10:10:57

@user-0a22da The definition of what a fixation exactly is, is not super straightforward. According to the most common definition, a fixation is when the eye is not moving. If someone is "fixating" on a moving object, like a rolling golf ball, this is usually called a "smooth pursuit" movement, which is different from a regular fixation from a cognitive perspective.

The algorithm in Pupil Player detects fixations that fit the basic definition of a non-moving eye. It cannot detect smooth pursuit eye movements. To my knowledge there is currently no robust algorithm available for smooth pursuit detection.

In your special case of golf, it may be an option to implement a "golf ball" detector and to build a custom smooth pursuit detector on top of that, where you simply check if the gaze stays on top of the ball.

user-695fcf 07 October, 2021, 13:10:12

Hi, I am trying to decide what the best duration and dispersion thresholds would be for my recordings, which consist of the wearer looking at physical photos (ca. 2 m distance) and the wearer looking at a video on a computer screen (ca. 1.5 m distance). I am assuming that the thresholds will vary for these two tasks, but the information I have found in research so far claims very different estimates of fixation durations (anything from roughly 100 to 400 ms).

My question is: how do I decide which threshold to go with? In your guide (Best practices) you describe the potential errors when having the variables set too high or too low, but how do I know for sure whether the variables are off?

user-f9f006 07 October, 2021, 13:36:18

does anyone know what the model and the transmission protocol of the eye camera are? Thank youπŸ˜ƒ

Chat image

nmt 07 October, 2021, 16:10:22

Hi @user-695fcf πŸ‘‹. One could adopt a quantitative approach in deciding optimal thresholds for your tasks, like the chess paper linked in our Best Practices. But this is obviously time consuming and could be a study on its own. As you have identified, the problem with using previous literature is that estimates of fixation durations can vary wildly. Other researchers choose to test different thresholds against a manual coder(s) in order to ensure they are accurate when classifying fixations. Whichever approach you take, I think the important thing is to understand the potential for errors and thus be able to identify them in your results and adjust accordingly.

user-695fcf 11 October, 2021, 12:15:30

Thanks for the answer πŸ™‚ I will look into the options.

user-7bd058 08 October, 2021, 09:20:18

thank you for your answer. We know that we need to do post-hoc calibration but we do not know how to exclude one eye video?

papr 08 October, 2021, 09:28:23

You can rename the video file before opening the recording in Player and rerunning the Post-hoc pupil detection. Afterward, there will only be data for the originally named video file

nmt 08 October, 2021, 09:34:07

Hi @user-f9f006. Comprehensive eye camera specifications are available here: https://www.ovt.com/sensors/OV6211

user-f9f006 11 October, 2021, 06:22:27

Thank you very much

user-93c34e 09 October, 2021, 17:29:55

Hi - I've recently started experimenting with a DIY pupil labs set up (after working with Tobii setups previously), and I'm hoping someone might be able to offer some advice.

I currently have 2x Microsoft HD6000 eye cams, and I'm trying to use them inside a HUD where I can't set up a world frame camera. I'd like to track where on the HUD screen the participant is looking.

The way I was thinking of attempting this was to:

(1) set up the eye cams inside the HUD
(2) tell the participants to look at 9x target locations on the HUD in a similar choreography to the screen-based calibration in Pupil Capture (i.e. center, top left, middle left, bottom left, etc..)
(3) manually create the correspondence between the coordinates from the eye cams with the locations on the HUD screen that the participants are instructed to follow.

If I can read an (x,y) coordinate from the eye cam when the participant is staring at each of the 9x target locations on the HUD, I should have a pretty good mapping from eyecam output to screen location.

At least this is what I'm hoping.

Assuming this is correct, I'd imagine that the best way to read from the eye cam is to use the IPC backbone (https://docs.pupil-labs.com/developer/core/network-api/#ipc-backbone), subscribe to the 'gaze' topic and somehow find the pupil coordinates (norm_pos_x/y) or (gaze_point_x/y)? Though I doubt the gaze point is going to be possible without a world cam.

Appreciate your advice! Thank you

nmt 11 October, 2021, 10:54:30

Hi @user-93c34e πŸ‘‹. There isn’t usually a fixed spatial relationship between a head-up display and the person viewing it. Any relative movement between the head and the display would break any mapping. If one assumes a fixed spatial relationship, e.g. using a head-mounted display, then the concepts you describe are reasonable. We use bundle adjustment to estimate the physical relationship between eye and world camera. You can subscribe to pupil data via the network API, yes. But for the kind of custom calibration you describe, I’d probably run Pupil from source.
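
To make the kind of custom mapping discussed above a bit more concrete, here is a hypothetical sketch (not Pupil's bundle adjustment, just a simple polynomial least-squares fit) that assumes you have already collected the mean pupil norm_pos for each of the 9 targets together with the known target positions on the display:

```python
import numpy as np

def design_matrix(xy):
    """2nd-order polynomial terms of 2d pupil positions, shape (N, 6)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_mapping(pupil_xy, screen_xy):
    """pupil_xy: (9, 2) mean pupil positions per target; screen_xy: (9, 2) target pixels."""
    A = design_matrix(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # (6, 2)
    return coeffs

def map_to_screen(coeffs, pupil_xy):
    """Map one or more pupil positions to screen pixels with the fitted coefficients."""
    return design_matrix(np.atleast_2d(pupil_xy)) @ coeffs

# usage sketch:
# coeffs = fit_mapping(recorded_pupil_xy, target_screen_xy)
# screen_pos = map_to_screen(coeffs, latest_pupil_xy)
```

This only holds while the spatial relationship between eye camera and display stays fixed, as noted above.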

user-eeecc7 10 October, 2021, 22:13:32

Hi All, I am looking for a way to record data of a person reading, along with the video. I understand that the audio capture plugin was disabled following v1.23. 1) Is there a way, scripted or otherwise, which can help me record and play the audio with Pupil Core? I would like to use an external microphone to record the audio and generate the audio.mp4 and the audio_timestamps.npy. 2) Also, if this is possible, will the latest version of Pupil Player play the audio?

Any help in this direction would be great. Thanks in advance.

nmt 11 October, 2021, 10:59:04

I'd recommend using the Lab Streaming Layer framework. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/887783672114208808

user-eeecc7 11 October, 2021, 18:02:41

Hi Neil, thank you so much. I will check this out and let you know

user-93c34e 12 October, 2021, 03:25:25

Thank you Neil. Would that topic (norm_x/y) be the correct topic for pupil coordinates? And can I assume gaze_point is meaningless without using the Pupil Core standard calibration methods? Thanks again!

papr 12 October, 2021, 06:37:45

Hi, please be aware that we differentiate pupil and gaze data (see https://docs.pupil-labs.com/core/terminology/#pupil-positions) and that both have normalized coordinates (https://docs.pupil-labs.com/core/terminology/#coordinate-system). And correct, gaze data requires a calibration to be meaningful. πŸ™‚
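
As a side note, converting from the normalized coordinates linked above to pixel coordinates is a one-liner; a small sketch (the frame size is whatever resolution your world or eye video uses):

```python
def denormalize(norm_pos, frame_size):
    """Convert Pupil's normalized coordinates (origin bottom-left, range 0..1)
    to pixel coordinates (origin top-left, y pointing down)."""
    width, height = frame_size
    x, y = norm_pos
    return x * width, (1.0 - y) * height

# e.g. a gaze "norm_pos" of (0.5, 0.5) on a 1280x720 world frame -> (640.0, 360.0)
print(denormalize((0.5, 0.5), (1280, 720)))
```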

user-0952ce 12 October, 2021, 08:33:53

Hi, does Pupil Core support smartphone-driven gaze collection? Are there any docs for it?

papr 12 October, 2021, 08:36:17

Hi, this is only supported for Pupil πŸ•Ά invisible πŸ™‚

user-0952ce 12 October, 2021, 08:37:29

Thank you

user-074809 12 October, 2021, 16:28:33

Hi there, I am looking to convert fixation location from the world cam to pixel location on the presentation screen. what is the best way to do this? been playing with using the surface tracker but worry about how large the apriltags need to be if they are shown digitally. should I be attaching them to the screen instead? or should I use some calibration procedure to transform world cam points to pixel locations? thank you for any advice

user-5ef6c0 12 October, 2021, 17:41:07

hello, I wonder if it's possible to change the hotkeys assigned to default actions in Pupil Player. For instance, I'd like to reassign "e", currently assigned to the annotation plugin. Same with "f" (currently assigned to next fixation)

papr 13 October, 2021, 06:49:45

Hi, unfortunately, the hotkeys are not configurable. You would have to run from a modified source code version for that.

user-7daa32 12 October, 2021, 18:15:39

It seems Pupil Capture now runs both 2D and 3D modes for pupil detection. We select one based on what we set for calibration, right?

user-7daa32 12 October, 2021, 18:16:56

It is confusing... They run concurrently for pupil detection, but I don't understand the pipeline selection in calibration plugin

papr 13 October, 2021, 06:50:37

Yes, 2d and 3d pupil detection happens in parallel but only one of them will be used for gaze mapping depending on your selection in the calibration choreography plugin

user-7daa32 13 October, 2021, 14:33:33

Thanks

user-0952ce 13 October, 2021, 07:33:38

About the directory for plugins, the source says "Each user directory has a plugins subdirectory into which the plugin files need to be placed. The Pupil Core software will attempt to load the files during the next launch." Can we change the directory location, e.g. to a directory in Dropbox?

papr 13 October, 2021, 07:40:23

Yes, by replacing the plugins folder with a symlink to the target directory.
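
For anyone who wants to script that, a small sketch (the paths are examples; move the existing plugins folder out of the way first, and on Windows creating symlinks may require elevated rights or developer mode):

```python
import os

# hypothetical paths: point Capture's plugin folder at a synced Dropbox directory
target = os.path.expanduser("~/Dropbox/pupil_plugins")           # where the plugin files live
link = os.path.expanduser("~/pupil_capture_settings/plugins")    # folder Capture scans at launch

os.symlink(target, link, target_is_directory=True)
```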

user-0952ce 13 October, 2021, 07:52:20

I've done it, thank you

user-cd8c20 13 October, 2021, 08:40:52

Hey guys, I'm trying to use an older version of the Pupil Core. I can't get the world camera to work. The world camera appears as an INTEL Real Sense Camera in my device manager but does not show up in the Pupil Capture software. I tried https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting multiple times now. Any ideas on how to get it to work?

papr 13 October, 2021, 08:41:48

Hi πŸ™‚ Do you need the older version just for the realsense support?

user-cd8c20 13 October, 2021, 08:42:18

no thats just all i have access to πŸ™‚

papr 13 October, 2021, 08:47:21

Do I understand it correctly that you could not use a different version even if you wanted to? Could you let me know the exact version?

user-cd8c20 13 October, 2021, 08:48:48

I mean I got the Pupil Labs Core from my old lab. We currently do not have a newer or different version of your eye tracking systems. The box says pupil w120 e200b

papr 13 October, 2021, 14:34:58

This module should come with a scene camera that works out of the box. If you want to use an intel realsense camera for the scene video you need to use a custom video backend https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29

papr 13 October, 2021, 08:49:19

Ah, apologies, I was referring to the Pupil Core software not hardware. πŸ™‚

user-cd8c20 13 October, 2021, 08:49:45

ah there i am running 3.4.0

user-7daa32 13 October, 2021, 14:42:23

@papr sorry, if I took information from the Pupil Labs website, how can I properly cite it? I don't really like having website references

papr 13 October, 2021, 14:50:45

That depends on what information you want to cite. If you copied specific information that is not present in the references below you should cite the website.

To cite Core see https://docs.pupil-labs.com/core/academic-citation/ To cite Invisible see https://docs.pupil-labs.com/invisible/academic-citation/

user-cd8c20 13 October, 2021, 14:52:20

I am using the camera that came with the headset. but it shows up as an intel realsense.

Chat image

papr 13 October, 2021, 14:53:14

Yes, indeed, this is a R200 Intel realsense camera. In this case, you cannot use the user plugin mentioned above. Let me look for a software version that should work with that hardware.

user-6af96b 13 October, 2021, 16:18:19

Live fisheye de-distortion: Hello there, when using the fish-eye lens with the iMotions-Exporter plugin in Pupil Player it is possible to undistort the image and adapt the gaze data accordingly. The plugin works on recorded / offline data. I want to work on a Python script that does this with the live data from the network API (live video and live gaze data). I thought about re-using the script from the iMotions-Exporter (because it is also in Python).

My question: I was wondering why this doesn't already exist for the live data (in Pupil Capture) (or maybe it exists and I don't know about it^^). So I was wondering if my idea is good πŸ˜„ Or will I run into any problems I don't see^^ (Maybe a drop of the video framerate could be a problem for some use cases I guess, but this would be fine for mine.)

papr 13 October, 2021, 16:21:17

Hi, the image undistortion is actually quite costly and will likely result in lower frame rates. It mostly does not exist because there has been no demand for doing this in realtime. What is your application?

user-5ef6c0 13 October, 2021, 16:50:14

thank you. I have another question: is there a way to delete an annotation in the annotation player? that would be so helpful but I haven't found a way to do so

papr 13 October, 2021, 16:55:57

Unfortunately, there isn't one. But it is on our todo list.

user-6af96b 13 October, 2021, 17:40:36

My use case is about marking digital school/study tasks (which appear on a monitor) with QR codes / ArUco markers to define the tasks' position/area in the video, and checking if the gaze is in the area of a task, to know which task a person is looking at at which time, to get a better understanding of the solving process of a learning person. The video needs to be undistorted to detect the markers and to create a correct area of a task, and maybe in the future to detect objects in the room to blur them for privacy protection.

Chat image

papr 13 October, 2021, 21:07:55

But all of this could be done post-hoc, correct? Nonetheless, I would be happy to help you with building a real-time "undistortion viewer". I think you are already on the right track regarding the general software components. You only need to select the more specific functionalities and combine them into one script. πŸ™‚

user-6af96b 13 October, 2021, 17:42:20

and btw thanks for the answer πŸ™‚

user-ba974b 13 October, 2021, 17:46:48

Hello, I'd like to speak with someone from Sales or Product Development over the phone to discuss questions our team has before we purchase 3 pupil invisible units

nmt 13 October, 2021, 19:46:52

Hi @user-ba974b πŸ‘‹ . Please contact info@pupil-labs.com in this regard πŸ™‚

user-ba974b 13 October, 2021, 18:45:02

Hello?

user-5ef6c0 13 October, 2021, 20:10:15

Thank you. As someone who has spent hours doing these annotations, I'd love to suggest a few ideas such as being able to see all annotations on a table, being able to edit the tags, being able to export and import keys/tags to reuse in other projects, etc.

papr 13 October, 2021, 21:05:08

The latter will be possible in our upcoming release. πŸ™‚

user-93c34e 13 October, 2021, 21:38:24

Hi - Does anyone have recommendations for pupil cameras (not the HD6000) that could be used in a DIY kit? The HD6000 focal length/resolution seem sub-optimal and was hoping that someone might have advice: re other cameras that have been used successfully. Thanks!

user-cd8c20 14 October, 2021, 10:33:36

Hi again, did you by any chance have the opportunity to check for older versions of the software?

papr 14 October, 2021, 10:43:56

btw, you can post-process recordings from older versions of Capture in the most recent version of Player.

papr 14 October, 2021, 10:42:41

The built-in realsense backends were removed in v1.22. You can find the latest version with built-in support here: https://github.com/pupil-labs/pupil/releases/v1.21

user-cd8c20 14 October, 2021, 10:43:09

ty, I'll try it directly. I just tried 1.23 ^^

user-cd8c20 14 October, 2021, 11:00:26

Sorry for asking again - I have to select the R200 on the backend manager right? Cause that instantly causes a software crash

papr 14 October, 2021, 11:02:35

That is correct. Can you share the capture.log file? It is possible that this is related to the driver. Please be aware that Intel has stopped the driver support for the R200 a while ago.

user-cd8c20 14 October, 2021, 11:04:43

where do i find the log data?

papr 14 October, 2021, 11:05:06

Home directory -> pupil_capture_settings -> capture.log

user-cd8c20 14 October, 2021, 11:05:46

Chat image

papr 14 October, 2021, 11:10:15

This is not the correct folder. By home directory, I was referring to your user home directory, not the folder that was extracted from the Capture download

user-cd8c20 14 October, 2021, 11:11:37

ahhh sorry of course - trying to listen to a training while doing this at the same time πŸ˜‰

capture.log

user-cd8c20 14 October, 2021, 11:10:28

this is what the console puts out after switching to the R200

Chat image

papr 14 October, 2021, 11:14:36

Since Intel dropped the support, I do not know if the drivers are still available somewhere.

papr 14 October, 2021, 11:11:50

ok, thanks, that information is sufficient. This is a driver issue. πŸ˜•

user-cd8c20 14 October, 2021, 11:14:55

ok ty anyways - I'll try my best to find some

user-cd8c20 14 October, 2021, 11:25:16

just wanted to let you know that discord has no issues using the R200 as a camera

papr 14 October, 2021, 11:29:20

But Discord is only accessing the RGB part of the camera, correct? The Realsense backend is meant to take full advantage of the camera, i.e. process the depth stream as well. Unfortunately, the RGB camera is not compatible with our (default) UVC backend (see https://discord.com/channels/285728493612957698/285728493612957698/725357994379968589 for constraints)

user-6af96b 14 October, 2021, 12:43:00

Yes, theoretically it could be done in post-processing, but the requirement in the future will be that the information of a student should be stored in realtime and then read by an intelligent system which helps this student in realtime when he or she is struggling with a task at the moment. I wouldn't say no to help from you πŸ˜„ πŸ™ Me and another person will have a look at that in the following days - checking out how the iMotions-Exporter works and seeing how far we come - and will probably come back to you with specific questions 😁

papr 14 October, 2021, 12:43:34

Sounds good to me πŸ‘

user-8ed000 14 October, 2021, 14:07:17

Hello. How can I apply the manual correction in the post-hoc gaze calibration? Does it apply automatically when I change the value or do I have to recalculate the gaze mapper?

papr 14 October, 2021, 14:09:30

Hi, you will need to recalculate.

user-8ed000 14 October, 2021, 14:10:05

Thats what I guessed but I wanted to be sure. Thanks for the quick reply

papr 14 October, 2021, 14:10:40

If you do not need post-hoc calibration, you can also apply manual offset correction via this plugin https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433 it previews the amount of offset. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin on how to install it.

user-8ed000 14 October, 2021, 14:12:11

Thank you very much, I will have a look at it

user-10a43b 14 October, 2021, 16:19:07

Hello all, is there a way to get the world_index (i.e. frame number) in LabStreamingLayer? I am using the time frame of the world video stream to determine naturally occurring visual onset of events for EEG analysis. Alternatively, is there a way to match the time_series data of the .xdf with the world_index exported from Pupil Player?

papr 14 October, 2021, 17:09:48

I have been working on a new Capture plugin that records LSL streams as CSV files within a Core recording. It syncs the LSL streams to Capture time. It is just the first draft but maybe you could give it a try and let me know what you think? https://github.com/labstreaminglayer/App-PupilLabs/blob/1078b6d26429f3b2d08246e0b03a1e5795352183/pupil_capture/pupil_capture_lsl_recorder.py

user-10a43b 14 October, 2021, 17:34:21

Thanks for the super prompt response Papr!! And please forgive my lack of tech "savvyism". I have added the pupil_capture_lsl_recorder.py file to my App-PupilLabs/pupil_capture. Should I see any modification in the LSL plugin in Pupil Capture, or must I make an LSL recording to inspect the exports? Cheers

papr 14 October, 2021, 17:36:08

You need to copy the file next to the other, already installed, relay plugin. Then, start Capture, go to the Plugin Manager menu and start the Pupil Capture LSL Recorder plugin. It should show up next to the other plugin.

user-10a43b 14 October, 2021, 17:42:57

All working for now! you da man! I will make a pilot recording this afternoon (chile time) and let you know how it goes! Cheers. V

papr 14 October, 2021, 17:43:40

It definitely needs stress testing

user-93c34e 14 October, 2021, 18:46:07

Hiya - A question on syncing pupil.pldata info with video recordings in Pupil Capture. I'd like to reference pupil locations for each frame of the video. The pupil topic has timestamps for each data packet, and I'm wondering if it's possible to get the analogous timestamps for each frame of the video recording?

papr 14 October, 2021, 18:47:23

Do you want to do that in real-time? Or just post-hoc?

user-93c34e 14 October, 2021, 18:48:19

Post-hoc first, to check the validity, and then real-time if that works out.

papr 14 October, 2021, 19:58:49

Basically, you need to buffer the data and match the pupil data to the frame with the smallest timestamp difference.

user-93c34e 14 October, 2021, 18:49:08

In real-time I think I can use opencv frame grabbing from the video feed with the zmq API for the pupil data. But not sure how to align in post-hoc processing.

papr 14 October, 2021, 20:00:40

Post-hoc, given two lists of timestamps, you can use this function to find the closest matches: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L386-L400 A slower version is https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L403-L443 but it is more applicable to the realtime use-case
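
A compact numpy version of the same idea (a re-sketch of the linked find_closest, not a copy; it assumes both timestamp arrays are sorted ascending):

```python
import numpy as np

def find_closest(target_ts, source_ts):
    """For each timestamp in source_ts, return the index of the closest value in target_ts."""
    target_ts = np.asarray(target_ts)
    source_ts = np.asarray(source_ts)
    idx = np.searchsorted(target_ts, source_ts)      # insertion points
    idx = np.clip(idx, 1, len(target_ts) - 1)
    left, right = target_ts[idx - 1], target_ts[idx]
    idx -= source_ts - left < right - source_ts      # pick the nearer neighbour
    return idx

# e.g. match each pupil datum to its closest world frame:
# frame_idx = find_closest(world_timestamps, pupil_timestamps)
```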

user-93c34e 14 October, 2021, 20:01:52

Thanks, and each frame that's recorded will have a timestamp in the meta-data?

papr 14 October, 2021, 20:02:25

Correct! Every data point in Capture has a timestamp.

user-93c34e 14 October, 2021, 20:26:41

Great, but I think I'm still missing something: the timestamps for each frame of the video are embedded within the mp4 file? I know that timestamps for notifications / pupil / gaze / etc. are in both pldata and .npy files, and get that those helper functions can help align those lists of timestamps...but before we even get to that step, where to find the list of timestamps for the video i.e. how to attach each image frame of the video to a timestamp?

user-c6ead9 14 October, 2021, 20:08:22

Sheree Josephson, Weber State University, a highly cited communications researcher using eye-tracking, has asked me to evaluate the Pupil Lab's Invisible glasses. Sheree plans to update her current system in November. We have found receiving a sample raw data file and its corresponding video very helpful in the past to start our evaluation. Can someone here help us with this or provide contact with an appropriate person at Pupil Labs. Thx.

papr 14 October, 2021, 20:13:00

Hi, you can find example raw data linked in the Tech Specs sections of our products https://pupil-labs.com/products/ The format is explained in our documentation at https://docs.pupil-labs.com/ and you can playback and export the recordings using Pupil Player https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads

For all remaining questions, please contact info@pupil-labs.com πŸ™‚

user-c6ead9 14 October, 2021, 20:13:50

Perfect! Thx.

papr 14 October, 2021, 21:00:26

The videos have their own *_timestamps.npy file. See https://docs.pupil-labs.com/developer/core/recording-format/ Re real-time, in the case of this example, you can access the timestamp via msg["timestamp"] https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py#L82
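
So post-hoc the per-frame timestamps are just a numpy array sitting next to the video, e.g.:

```python
import numpy as np

# inside a Pupil Core recording folder: world.mp4 has a matching world_timestamps.npy
world_ts = np.load("world_timestamps.npy")   # one Capture timestamp (seconds) per frame
print(len(world_ts), "frames, frame 0 at t =", world_ts[0])
# world_ts[i] is the timestamp of frame i; match these against pupil timestamps
# with the closest-timestamp approach mentioned earlier
```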

user-93c34e 15 October, 2021, 03:18:28

Ah phenomenal, thank you!

user-1a6a43 15 October, 2021, 19:34:45

Hey! I’d greatly appreciate any tips on the following situation

user-1a6a43 15 October, 2021, 19:36:17

We’re trying to use pupilcore with eeg lab (from matlab), and want to use LSL to align the information properly. Can you point me to the proper documentation for doing so, or even an existing implementation in some repository

user-b9005d 15 October, 2021, 20:06:40

Question: I just downloaded the newest version of the Pupil Labs desktop applications. We're using the Pupil Labs Core headset and trying to gather gaze position data. However, we aren't seeing the ellipses around the eye, only orange annotations on the pupil. In previous versions, we were able to see a green circle and red target on the pupil. Is there a manual explaining this change so we can see the ellipses again?

user-364e5c 02 November, 2021, 20:07:56

I've had exactly this problem, have you found a solution?

user-ac21fb 15 October, 2021, 20:24:14

Hi - I created a recording yesterday but it is no longer in my recordings folder. Has this happened to anyone? I cannot find it anywhere.

user-93c34e 16 October, 2021, 01:35:55

Hi - Has anyone had experience using a DIY kit with Microsoft LifeCam HD6000s on a Windows 10 system? The newest Pupil Core software I downloaded and installed (v3.4.0) doesn't seem to recognize the USB webcams. Linked an issue in github here: https://github.com/pupil-labs/pupil/issues/2198 but if anyone has successfully used the HD6000s on a Windows 10 system, would super appreciate any advice. Btw, these cameras work flawlessly on a mac with the latest pupil core software. Go figure. Thanks!

user-93c34e 18 October, 2021, 03:36:11

Found a solution! Posted it on github: https://github.com/pupil-labs/pupil/issues/2198

user-7daa32 16 October, 2021, 06:27:12

Pupil Labs says the eye tracker has a gaze accuracy of 0.6 deg. In another place, they said a 3D pipeline will achieve 1.5 to 2.5 deg accuracy. Is this 0.6 for the 2D pipeline? Because <1 deg is specified for the 2D pipeline. Typically, we experienced 2 deg average accuracy after calibration. Is that even okay?

I am confused about the 0.6 deg in the specifications, and then the 1 deg for 2D and that for the 3D.

user-ed7a94 16 October, 2021, 16:52:56

hello everyone, I want to get access to real-time data via the network API. I followed the instructions at this link https://docs.pupil-labs.com/developer/core/network-api/#pupil-groups, but the Python demo did not show any data; it just led to infinite waiting. Then I set "topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)" and it reports:
Traceback (most recent call last):
File "getbackbonemessages.py", line 37, in <module>
topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)
File "D:\Users\splut\Anaconda3\envs\Furhat\lib\site-packages\zmq\sugar\socket.py", line 491, in recv_multipart
parts = [self.recv(flags, copy=copy, track=track)]
File "zmq\backend\cython\socket.pyx", line 791, in zmq.backend.cython.socket.Socket.recv
File "zmq\backend\cython\socket.pyx", line 827, in zmq.backend.cython.socket.Socket.recv
File "zmq\backend\cython\socket.pyx", line 191, in zmq.backend.cython.socket._recv_copy
File "zmq\backend\cython\socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
File "zmq\backend\cython\checkrc.pxd", line 20, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
Can anyone help with this??

papr 16 October, 2021, 18:37:47

This just means that there is no data to receive. Which is also why it shows the "infinite loop" behavior. It is just waiting but not getting anything. Feel free to share more code context in πŸ’» software-dev if you want us to have a look for more concrete feedback

user-1a6a43 17 October, 2021, 02:48:32

Hey! I’d greatly appreciate any tips on the following situation

We’re trying to use pupilcore with eeg lab (from matlab), and want to use LSL to align the information properly. Can you point me to the proper documentation for doing so, or even an existing implementation in some repository

papr 18 October, 2021, 10:25:05

You basically have two options: https://github.com/labstreaminglayer/App-PupilLabs/tree/lsl_recorder/pupil_capture

  • relay: publishes a subset of Pupil Core's data via LSL outlets. Might not include what you need. Published format follows https://github.com/sccn/xdf/wiki/Gaze-Meta-Data
  • recorder (beta): records available LSL streams as CSV files. Does no longer require the Lab Recorder. Import of recorded eeg data into EEG Lab might need to be done manually.
papr 17 October, 2021, 07:21:21

Do I understand it correctly that you want to use LSL to record Core and EEG data and process the latter via eeg lab?

user-1a6a43 17 October, 2021, 07:25:08

yes

papr 17 October, 2021, 08:05:01

And what Pupil Core data are you interested in specifically?

user-fabb18 18 October, 2021, 18:04:35

Hello- I'm wondering, I just got a pupil invisible v5.5. If Pupil comes out with another version or any updates, how would I get those?

papr 18 October, 2021, 18:08:53

You are referring to Pupil Cloud, correct? Updates to Pupil Cloud are free and happen automatically.

user-fabb18 18 October, 2021, 18:10:45

I believe I am referring to the cloud, there isn't any software in the glasses themselves that will have any updates?

papr 18 October, 2021, 18:11:14

There is, the Android phone companion software. You can get its update via the Google Playstore.

user-fabb18 18 October, 2021, 18:12:15

Ok, perfect. So it can't push to the glasses automatically, I will have to go get the update in Google Playstore.

papr 18 October, 2021, 18:16:31

What do you mean by push?

user-fabb18 18 October, 2021, 18:18:38

To get any updates from the companion software I'll have to go myself to the google playstore and install any updates. That doesn't happen automatically, correct?

papr 18 October, 2021, 18:19:20

You can set the PlayStore to auto-update

papr 18 October, 2021, 18:19:45

See https://support.google.com/googleplay/answer/113412?hl=en

user-fabb18 18 October, 2021, 18:21:05

ahhh, perfect. Thank you!

papr 18 October, 2021, 18:21:27

Please see the updated link. I replaced the former one with the official documentation.

user-fabb18 18 October, 2021, 18:21:38

πŸ‘

user-c9f8b8 18 October, 2021, 22:27:20

Hi everyone, I am super new to pupil-labs. I would like to synchronize pupil timestamps with our driving simulator. I found some example code in pupil-helpers/python/. I am wondering how to use these from the app? Would I have to write a script for a plug-in? Are there any successful examples anyone would like to share?

Also, if I launch the pupil software on one computer but have the participant work on a different computer, do I have to calibrate on the working computer or just the pupil computer?

user-abb8ae 19 October, 2021, 02:06:09

Hi, is "Reset 3D model" able to trigger via the remote notification?

user-1a6a43 19 October, 2021, 04:11:18

thanks. I'm reading through the documentation of LSL trying to get it set up, and it's a pain and a half. I think it'd be super cool if a video is posted on the pupil-labs channel on how to set up LSL for pupil-core, and setting up LSL itself on EEGLAB, is done.

papr 19 October, 2021, 04:16:41

Yesterday, I found a new way for easily installing pylsl for Pupil Core:
1. Download the wheel for your platform https://pypi.org/project/pylsl/1.15.0/#files
2. Save it to pupil_capture_settings/plugins
3. Rename the .whl extension to .zip
4. Unzip the file. If the content was extracted into a subfolder, move the pylsl* folders into pupil_capture_settings/plugins

This way, there is no need for a separate python installation and symlinking the pip-installed files.

Please check out the EEGLAB documentation on how to use it with LSL.

user-1a6a43 19 October, 2021, 04:21:49

thanks! though I will need to install it on MATLAB, which is why everything is so painful

user-1a6a43 19 October, 2021, 04:22:18

unless the LSL works for both pupil core and our bio-semi EEG device, which I'm guessing is possible

user-1a6a43 19 October, 2021, 04:23:57

it may be possible to gather the data with python and run the experimental setup in matlab

user-1a6a43 19 October, 2021, 04:25:54

though I still think that a video showing the installation of LSL on MATLAB, and then the connection of it with pupil-core would be very beneficial

papr 19 October, 2021, 07:16:27

The standard LSL workflow is to use the https://github.com/labstreaminglayer/App-LabRecorder to record the data. It can receive streams from multiple sources, e.g. from Pupil Capture via the LSL Relay plugin or other LSL applications. It stores the data to xdf files. From what I have read, EEG Lab is able to read these files post-hoc.

Even though I feel your pain regarding installing Matlab dependencies, setting up third-party software falls wide outside of our area of expertise.

user-b9005d 19 October, 2021, 14:19:31

Hey, just following up on this question to see if anyone has any ideas

papr 19 October, 2021, 14:56:48

Could you please share a Screenshot of that phenomenon?

user-b9005d 19 October, 2021, 16:16:30

Sorry for the poor images. The image with the green circles was from a previous version and the image with the yellow pupils is from the newest version of the desktop application

Chat image Chat image

papr 19 October, 2021, 16:18:30

You are running from source, correct?

user-b9005d 19 October, 2021, 16:16:40

@papr

user-b9005d 19 October, 2021, 16:26:12

I believe so. Is that the default?

papr 19 October, 2021, 16:30:04

No, usually, people would run from bundle. It looks like the pye3d plugin is not running. Could you try restarting from defaults. You can do so in the world window general settings.

user-b9005d 19 October, 2021, 17:00:25

Done, but it doesn’t seem to have changed anything

papr 19 October, 2021, 17:00:49

Could you please share the ~/pupil_capture_settings/capture.log file?

user-b9005d 19 October, 2021, 17:05:01

capture.log

papr 19 October, 2021, 19:21:13

There seems to be something wrong with the libopencv dependency that is shipping with the bundle. I will have to have a closer look. As a result, the 3d detector is not loaded.

user-5ef6c0 19 October, 2021, 19:16:55

quick question about Pupil Player: I made annotations for each fixation (I would click on the "next fixation" button to navigate the footage). However, the frame indices for the fixations and annotations are offset by 2 frames

papr 19 October, 2021, 19:22:15

Because jump to next fixation jumps to the fixation mid_frame_index, not the start index

user-5ef6c0 19 October, 2021, 19:17:15

is this to be expected?

user-5ef6c0 19 October, 2021, 19:22:29

perfect, ty

user-b9005d 19 October, 2021, 19:25:32

All right, I will wait to hear from you. Thank you!

papr 20 October, 2021, 08:18:02

Hi, the issue is that this release is no longer compatible with macOS High Sierra. The latest release to support this operating system is Pupil 3.0 https://github.com/pupil-labs/pupil/releases/v3.0

user-83fc01 20 October, 2021, 10:25:18

Hi everyone, I'm developing with an industrial camera as the world camera; it's not a UVC one, so it cannot be detected by the Pupil Labs software. Can I feed the numpy data from the industrial camera to the main program to perform the gaze mapping? Or is there any API for the world camera?

user-7daa32 20 October, 2021, 13:01:41

Hello everyone

Is it possible to have a significantly different "calculated sampling rate" compared to the manufacturer specification?

papr 20 October, 2021, 13:03:59

Technically, yes. If your computer does not have the computational resources to run at the full sampling rate, the software will dismiss/drop samples in order to keep up with the incoming real-time data. This can lead to lower sampling rates.

papr 20 October, 2021, 13:02:17

You could give this a try https://github.com/Lifestohack/pupil-video-backend

user-83fc01 29 October, 2021, 11:46:49

Hi papr, I have used the project you introduced to publish the camera's frames to Pupil Capture successfully, but then I got into trouble with the calibration. You can find the calibration logs at lines 424-441; it shows no pupil data is collected (I used monocular). I'm pretty sure the pupil is captured correctly; you can find "Dismissing 4.89% pupil data due to confidence < 0.80". I'm not sure if there's anything wrong with the video_backend program or anything else.

user-83fc01 20 October, 2021, 13:08:40

Thank you very much! It may help!

user-dba6e9 20 October, 2021, 13:27:07

Hi Pupil Labs, we were looking at using April Tags with our Pupil Core and we're having trouble generating the tags we need. Do you have an easier way of getting these or have any pngs you'd be willing to send? We've been trying to use the GitHub link with little success

user-7daa32 20 October, 2021, 13:30:38

Thank you. There are two sampling rates, one for the eye cameras and one for the scene camera. Both could drop when the computer's resources are low, right?

papr 20 October, 2021, 13:31:27

Correct.

user-7daa32 20 October, 2021, 13:37:28

In Pupil Capture a maximum of 120 Hz for the scene camera and 60 Hz for the eye camera. And then there is another 200 Hz pasted on the box. The one we calculated was less than 160 and 200

user-7daa32 20 October, 2021, 13:37:59

Just trying to understand the differences and how both can be obtained from our study

papr 20 October, 2021, 13:42:27

The scene camera can only reach 120hz for specific resolutions. The default resolution (720p) has a maximum sampling rate of 60hz. Similarly, the eye cameras only support 200hz in the 192x192 resolution.

To calculate the effectively recorded sampling rate, you calculate the multiplicative inverse of the timestamp differences.

1/(t1-t0)

This gives you the frame rate for each sample. This way you can see if the sampling rate changed throughout the recording.

Alternatively, you can calculate the average framerate of your recording.
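
A small sketch of that calculation (assuming you load the timestamps from one of the *_timestamps.npy files in the recording folder):

```python
import numpy as np

def sampling_rates(timestamps):
    """Per-sample and average sampling rate from a sorted array of timestamps (seconds)."""
    dt = np.diff(timestamps)                        # t1 - t0 for consecutive samples
    instantaneous = 1.0 / dt                        # frame rate per sample
    average = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
    return instantaneous, average

# inst, avg = sampling_rates(np.load("eye0_timestamps.npy"))
# print(f"average: {avg:.1f} Hz, min: {inst.min():.1f} Hz")
```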

user-7daa32 20 October, 2021, 15:15:19

I have been using 400 x 400 resolution. Seems to be the best for pupil detection. If at all they are related

user-7daa32 20 October, 2021, 15:16:57

It just seemed easy working with 400 X 400. Don't know if I am making sense.

user-7daa32 20 October, 2021, 15:19:36

The sampling rate we calculated is for the scene camera?

Sorry for too many questions

papr 20 October, 2021, 15:26:56

I cannot answer that since I do not know what you calculated in detail.

user-7daa32 20 October, 2021, 15:24:41

?? Please

papr 20 October, 2021, 15:19:53

In this case, the maximum sampling rate is 120 Hz

user-7daa32 20 October, 2021, 15:20:14

Yes

papr 20 October, 2021, 15:26:14

Summarized:

Eye camera:
- max 120 Hz @ 400x400
- max 200 Hz @ 192x192

Scene camera:
- max 30 Hz @ 1080p
- max 60 Hz @ 720p
- max 120 Hz @ 480p

user-7daa32 21 October, 2021, 13:57:05

Thanks

user-7daa32 21 October, 2021, 13:57:42

I'm here again

The image in Pupil Capture is distorted. How can I get an undistorted image, please?

papr 21 October, 2021, 13:58:40

You can use the iMotions exporter to get undistorted scene video and gaze signal.

user-7daa32 21 October, 2021, 14:07:03

Thanks. I found no eye movement in the iMotions video

papr 21 October, 2021, 14:07:38

~~Could you please clarify?~~

user-7daa32 21 October, 2021, 14:07:51

No gaze signals

papr 21 October, 2021, 14:08:12

The gaze data is undistorted separately, It is not rendered into the video.

user-7daa32 21 October, 2021, 14:09:53

What's now the purpose of the imotion video?

papr 21 October, 2021, 14:10:18

The export can be uploaded to https://imotions.com/ for further analysis

user-7daa32 21 October, 2021, 14:16:04

I will have to find a way of transforming the video shown in Player. The images are curved and don't have sharp edges. They look zoomed out from the centre

user-b9005d 21 October, 2021, 17:46:18

Does the release work with Mojave? Or do we have to go newer than that too?

papr 21 October, 2021, 17:50:00

Mojave should work :) but knowing Apple, they will deprecate Mojave fairly soon. When that happens, homebrew stops supporting it too, and then it becomes very difficult for us to maintain the bundle.

If I were you, I would upgrade at least to Catalina.

user-b9005d 21 October, 2021, 17:51:35

All right, currently upgrading to Catalina. So then should the problem just go away after the upgrade or are there settings I need to adjust first?

papr 21 October, 2021, 17:51:53

It should just work.

user-75e3bb 22 October, 2021, 15:20:56

Hello, is there any way I can jump to a specific fixation in Pupil Player? Such as, maybe I can type in the fixation number and the player will show that fixation in the video

papr 25 October, 2021, 08:10:46

Unfortunately, no. But you can jump to the next and previous fixations using the f and shift+f keys

user-d41611 22 October, 2021, 18:52:54

Have any of you worked with the world view camera mounted on the side of the head instead of the center? And if so were there any additional required changes to the underlying programming required to compensate for the offset?

user-eeecc7 23 October, 2021, 20:58:01

Hi All, I am trying to set up an experiment where the subject is reading from physical texts. When I try the screen marker calibration and then place a paper in front of the same screen, I am more or less able to get word level gaze locations, however when the book is being read naturally, this doesn't work. I understand that this is because of the region of calibration and a change in depth. My question is whether there is a better way to set this up? I tried the single marker calibration but that seems to be going horribly wrong. Is there something fundamental that I am missing?

user-0643f6 25 October, 2021, 07:16:57

Hello is anyone here from the Pupil Labs Company??

user-0643f6 25 October, 2021, 07:18:02

I am interested in talking with the sales department regarding delivery times for Core

papr 25 October, 2021, 07:20:21

Hi, have you reached out to info@pupil-labs.com already?

user-0643f6 25 October, 2021, 07:18:09

It is somehow urgent

user-0643f6 25 October, 2021, 07:20:52

Yes I sent an email at 7:04 Greek time

user-0643f6 25 October, 2021, 07:21:04

should be 6:04 German time

papr 25 October, 2021, 07:24:57

I see your email. It has been assigned already. We will come back to you soon.

user-0643f6 25 October, 2021, 12:02:23

I received the answer, everything explained perfectly, great service, thank you!

user-0643f6 25 October, 2021, 07:25:24

Ok that's fine

user-0643f6 25 October, 2021, 07:25:49

Thank you

user-0e09ab 26 October, 2021, 14:07:52

Hi there! Kind of weird to come here, but I thought hey... those people know a lot about eye tracking I guess...

I am a simracer. I have a Super Ultra Wide Samsung 49'' monitor but this is not enough for me. I've tried VR: it gives me headaches, and because of the strap I get cross-eyed after 2 hrs of Formula 1, and I was sweating too much. I prefer my super ultra wide monitor for real.

Ok but now I heard about Eye tracking !! It's lighter than a VR I guess, and it would be PERFECT for what I need: I only want to be able to look into my sides mirrors ! I have heard about Tobii tracker, that is 200USD right now, my question is:

Is there any cheaper tracker ? Just to be able to get that ok I move a bit my head to right so you know..

I also heard about Pupil ! Am I at the right place ? On the right path ? ahah... I see it's more of a software than hardware...

papr 26 October, 2021, 14:12:17

Hi πŸ™‚ Yeah, Pupil Core products are designed for researchers and not necessarily the everyday end user. But I would be happy to give you my thoughts on your issue if you are still interested.

user-0e09ab 26 October, 2021, 14:11:14

https://pupil-labs.com/products/core/accessories/ holly molly ok... kind of the right path but not the right path... 650euro... I am not spitting on it it's just not what I am looking for, this must do SO FAR much more than for looking the sides mirror imao like controlling the cursor and high tecs captors... I don't need this 😦 I only want to look the mirrors ahah

user-0e09ab 26 October, 2021, 14:13:45

From an expert like you it would be amazing to have your thoughts

papr 26 October, 2021, 14:15:28

My first question would be what problem you are trying to solve using an eye tracker. The eye tracker will not increase the field of view of your monitor, unless the software performs specific actions, e.g. blending in the side mirrors if you are looking at a specific place.

user-0e09ab 26 October, 2021, 14:18:23

Correct, the FOV will still be the same in degrees, but, you know in ACC (Assetto corsa Competizione), if we press the right cross button, the FOV goes right a bit, know what I mean

user-0e09ab 26 October, 2021, 14:18:37

The centering of it change

user-bba76e 26 October, 2021, 14:27:43

Hi, we bought pupil core with 200hz eye camera and realsense d415 world camera.

user-bba76e 26 October, 2021, 14:31:49

we want to substitute the world camera because of many problems we had/have. Can we have a list of cameras that we can use?

papr 26 October, 2021, 14:33:40

Please contact info@pupil-labs.com in this regard.

user-38dc0e 26 October, 2021, 17:02:14

@papr yes, I will email you @ info@pupil-labs.com πŸ˜„

user-b9005d 26 October, 2021, 18:37:11

@papr In pupil player, I’m trying to set up annotations. I see how to flag individual frames, but is there a way to make the annotations last over a stretch of time? I assumed there would be since there’s a β€˜duration’ output in the annotations csv

papr 26 October, 2021, 19:06:04

Hi, I understand the confusion. Pupil Player cannot create annotations with durations through the UI. The durations field can be set for real-time annotations sent via the Network API.
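
A minimal sketch of sending such an annotation from a script (this follows the pupil-helpers annotation example as I remember it, so treat the details as assumptions; the Annotation plugin needs to be enabled in Capture so the annotations end up in the recording):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# connect to Pupil Remote (default port, assuming Capture runs locally)
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# Pupil's clock may differ from time.time(); query it once and compute an offset
remote.send_string("t")
offset = float(remote.recv_string()) - time.time()

# open a PUB socket on Capture's IPC backbone for sending the annotations
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscription a moment to register

def send_annotation(label, duration):
    payload = {
        "topic": "annotation",
        "label": label,
        "timestamp": time.time() + offset,  # annotation time in Pupil clock
        "duration": duration,               # seconds
    }
    pub.send_string(payload["topic"], flags=zmq.SNDMORE)
    pub.send(msgpack.dumps(payload, use_bin_type=True))

send_annotation("trial_01", duration=5.0)
```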

user-b9005d 26 October, 2021, 19:13:47

So how would I do these real-time annotations? Sorry, I’m still getting familiar with the pupil labs software

user-d1ed67 27 October, 2021, 03:32:31

Hi. Is there any way to export the 3D eye model and import it into the Pupil Capture again so that I would have the same eye model all the time?

papr 27 October, 2021, 06:48:01

Hi, setting the eye model to a specific location is not supported. Usually it is also very unlikely that the eyes will be at the very same place every time you wear the hardware.

user-027014 27 October, 2021, 13:49:01

Hi! I notice that when I run the new Pupil Core with Pupil Capture v3.4, it has trouble keeping up with the desired sampling rate (set at 200 Hz), specifically when I run both eyes at the same time. CPU goes to 250% or so, and the actual Fs is roughly between 120-160 fps per eye. What system requirements are needed in order to smoothly run at 200 fps with both eyes simultaneously? Many thanks, Jesse

papr 27 October, 2021, 14:01:02

Hi, yes, unfortunately, the pye3d detector introduced in Core 3.0 requires more resources than our previous 3d detector. If you are looking for new hardware, I can recommend using the M1 macs with macOS Big Sur. They run the software extremely smoothly. Otherwise look for high CPU speed (>3 GHz)

user-dba6e9 27 October, 2021, 14:18:40

Hello, in the process of switching out the regular and fish-eye cameras, we accidentally overtightened the new camera and cracked the plate under the removable camera part. Our eye tracker (Core) is still under warranty, so what do we need to do to get this replaced?

papr 27 October, 2021, 14:19:09

Please contact info@pupil-labs.com in this regard

user-dba6e9 27 October, 2021, 14:20:06

Will do, thanks

user-dba6e9 27 October, 2021, 14:19:59

Also, the April Tags we're using aren't being picked up with the fish-eye camera, but do get picked up with the regular camera. We need to use the fish-eye camera for our experiment, so how do we get the April Tags to work under fish-eye view?

papr 27 October, 2021, 14:20:46

You will likely need to increase their size. πŸ˜•

user-dba6e9 27 October, 2021, 14:21:53

Ok. Is there a good way to do that without losing resolution?

papr 27 October, 2021, 14:24:37

If you use nearest-neighbor scaling, you do not lose any quality.
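
For example, a small sketch with Pillow (the file names and scale factor are placeholders); nearest-neighbor resampling keeps the tag's black and white squares sharp:

```python
from PIL import Image

tag = Image.open("tag36_11_00000.png")   # small apriltag image, e.g. from the apriltag repo
big = tag.resize((tag.width * 20, tag.height * 20), resample=Image.NEAREST)
big.save("tag36_11_00000_large.png")     # remember to keep a white margin around the printed tag
```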

user-dba6e9 27 October, 2021, 14:25:43

Ah, ok. Thanks!

user-8365f4 28 October, 2021, 17:51:30

I am researching this product for use in a seizure detection project where user's pupils will be tracked through the day -- my main concern is mobility, does this device connect to Pupil Mobile app like Pupil Invisible can? Or is there a way to run Pupil Capture software off a smaller device than a computer?

papr 29 October, 2021, 08:21:32

Hi, Pupil Mobile has been deprecated and its use for new projects is discouraged. Also, Pupil Mobile does not perform on-device pupil detection, i.e. you would need a Pupil Capture or Player instance anyway. Also, Pupil Capture is fairly resource intensive. I suggest using the pupil detector code directly https://github.com/pupil-labs/pupil-detectors/ within a piece of code that runs on a smaller device that fits your constraints.
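
A minimal sketch of using that package directly (based on the pupil-detectors README; the image path is a placeholder):

```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# grayscale eye image, e.g. grabbed by your own camera code on the small device
eye_gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)

result = detector.detect(eye_gray)
# result is a dict with, among others, "confidence" and an "ellipse" (center, axes, angle)
print(result["confidence"], result["ellipse"])
```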

user-4f89e9 28 October, 2021, 18:41:52

Quick question if anyone knows the solution, when i try to launch pupil capture the "world view" opens then stops responding on a white screen

user-4f89e9 28 October, 2021, 18:43:09

I know it was set up probably a couple years ago by somebody else, so I'm guessing the software version is different?

user-4f89e9 28 October, 2021, 19:37:18

i went back to an old version and got it to work

user-d1ed67 29 October, 2021, 01:50:27

Hi. We modified the Pupil Core a little bit such that the subject will always wear the Pupil Core at the same location, so we want to fix the 3D eye model to ensure we can get the same result every time.

Since exporting and importing the 3D eye model is not supported, could you please point out which part of the source code I should check to implement this feature? If possible, can I implement this feature using just a plugin? If not possible, could you please illustrate how to build the software from a different version of source code?

Also, is it possible to modify the eye model parameters (such as eye radius and cornea radius)?

papr 29 October, 2021, 08:14:45

Hi, yes, this can be done via custom plugin and just calling existing pye3d functionality! πŸ™‚ You can read about general plugin development here https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin (it contains a section about pupil detection plugins at the bottom, too).

The plugin class for you to overwrite can be found here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py#L36

It interacts with this pye3d class https://github.com/pupil-labs/pye3d-detector/blob/91769309e16655976176f880fa6f24d328af2e41/pye3d/detector_3d.py#L102

You can freeze, replace and set the locations of the models that are being used in background.

I will be able to help you out more concretely once our next 3.5 release has been published.

user-5fd3dd 29 October, 2021, 05:41:25

helper

user-83fc01 29 October, 2021, 11:46:59

capture.log

papr 29 October, 2021, 13:21:30

Please share a Capture recording of you performing the calibration with data@pupil-labs.com This way we can give the most concrete feedback.

user-83fc01 29 October, 2021, 16:14:44

Thank you! I've sent you the recording.

user-4f89e9 29 October, 2021, 20:12:14

Is it possible to upload pupil core footage to pupil cloud for easier analysis?

papr 30 October, 2021, 06:41:55

Hi, unfortunately this is not possible as the data formats are not compatible.

user-4f89e9 29 October, 2021, 20:51:26

also if i want to set up markers for 2 surfaces do they need to be from different families?

papr 30 October, 2021, 06:42:30

Not different families, no. But each marker has an ID that needs to be unique within the recorded video s.t. the software can recognize the surface.

user-4bad8e 31 October, 2021, 08:18:35

Hello. Is there a recommended value for the onset/offset confidence threshold and filter length for blink detection? I want to measure subjects' blink counts while they are relaxing in rooms.

nmt 31 October, 2021, 09:30:56

You can read about the blink detector in our online docs: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector; and about onset/offset thresholds in the Best practices section: https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds

user-4bad8e 31 October, 2021, 12:24:42

Thank you very much for the information!

user-e33900 31 October, 2021, 13:56:39

Hello, I have a similar issue right now, and I followed these steps but it was not fixed.
I wanted to ask for advice as I am trying to find out whether the issue I am describing is a technical issue or not.

I am using the Pupil add-on in the HTC Vive, and I am connecting it in Unity and using Pupil Capture. Suddenly I have issues with Capture, as the world cam from Unity is not displayed on screen, as you can see in the picture below. I followed the troubleshooting suggested here to reinstall the drivers and I see two of them, called Pupil Cam1 ID0 (I also checked the hidden cameras). I don't see one related to the world cam for the Unity scene. I can see both eyes in Pupil Capture, but just a grey screen as the world screen. The connection in Unity seems to work properly, as displayed in the second picture attached.

I have tried it before and there was no problem, so I am wondering if there is something else to try or if this is a technical issue.

Chat image

papr 31 October, 2021, 17:01:16

Hi, please see the hmd-eyes docs regarding screencasting. Unity needs to stream its virtual main camera to Capture. hmd-eyes has built-in support for that. This is not a driver issue since there is no physical camera that needs to be connected. The eye camera drivers are installed correctly already πŸ‘

user-e33900 31 October, 2021, 13:56:41

Chat image

End of October archive