core


user-55ae67 01 February, 2022, 13:57:20

Hello, we want to fix these cables back. Could you please specify the order of the cables? Thank you.

Chat image

papr 01 February, 2022, 13:59:50

Please contact info@pupil-labs.com in this regard

user-c8e5c4 01 February, 2022, 14:57:41

hey, I'm having an open interview with participants wearing the eye tracker, and while answering the questions they will likely move their heads around a lot. I would prefer not to tell them to keep still or fixate their heads, so it's more of a real conversation. Is that a big problem for accuracy, or could head movements be corrected somehow? Thanks for your help 🙂

nmt 01 February, 2022, 15:08:14

Hi @user-c8e5c4. How much is 'a lot' 😸? Head movements during a typical conversation shouldn't be a problem when using the 3D gaze mapping pipeline, as it will update to accommodate headset slippage due to head movement and/or facial expressions: https://docs.pupil-labs.com/core/best-practices/#choose-the-right-gaze-mapping-pipeline Obviously this becomes more challenging during, e.g., very dynamic sporting movements.

user-c8e5c4 01 February, 2022, 15:15:06

thank you! 3D gaze mapping seems like a good option! Just one more question: I will later extract all the frames from the world video, run a face recognition algorithm on them, and measure in every frame whether the participant is looking at the eyes or not. So far, I used the norm_pos_y / norm_pos_x data to determine gaze position in the frame. I'm guessing I would then need to use the gaze_point_3d columns?

nmt 01 February, 2022, 15:31:40

The face recognition algorithm will presumably output facial features in 2D coordinates (pixel space). One can correlate Pupil Core's 2d gaze with these directly. Note that you can convert norm_pos_x/y to pixel space using the video frame size. See Cell 10 of our frame extraction tutorial for details: https://nbviewer.jupyter.org/github/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
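For illustration, a minimal sketch of that conversion (not from the thread; the 1280x720 frame size is an assumption, read it from your own world video instead). Note that norm_pos uses a bottom-left origin while pixel space uses a top-left origin, so the y-axis must be flipped:

# Sketch: convert Pupil Core normalized coordinates to pixel coordinates.
frame_width, frame_height = 1280, 720  # assumed scene video resolution

def norm_to_pixels(norm_x, norm_y):
    # norm_pos origin is bottom-left; image origin is top-left -> flip y
    x_px = norm_x * frame_width
    y_px = (1.0 - norm_y) * frame_height
    return x_px, y_px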

user-c8e5c4 01 February, 2022, 15:34:15

correct, I've got facial features in 2D pixel space. I already got it working using norm_pos_x/y. My question would be whether I need to account for 3D gaze mapping somehow when I do this analysis, or can I still use the 2D gaze info? Hope it's clear 🙂

nmt 01 February, 2022, 15:35:29

Conceptually, I believe it's valid to work directly with the 2D gaze data

user-c8e5c4 01 February, 2022, 15:35:39

perfect, thank you!

user-0d33c1 02 February, 2022, 10:24:15

Hi all, I'm using Presentation software from Neurobehavioral Systems to control my experiment measuring EEG and eye tracking (we have a Pupil Core). I would like to send messages from the PC with Presentation to the PC with Pupil Capture, in order to remotely start/stop recordings and/or to send annotations, all synchronized with my EEG triggers. From what I understand, sending these messages requires the ZMQ library, which does not seem to be compatible with Presentation. Do you have any solution for sending triggers from Presentation software? If not, are you planning to have a network protocol compatible with it? (I really need to use Presentation for the temporal accuracy it provides, as I present auditory stimuli to the participants.)

Thanks in advance! Félix

nmt 02 February, 2022, 13:16:44

Hi @user-0d33c1 👋. It looks like 'Presentation' software has a Python interface: https://www.neurobs.com/menu_presentation/menu_features/python One of the use cases: "simultaneously using other hardware or software during the experiment that interfaces easily with Python, but does not interface easily with Presentation directly" It should therefore be possible to communicate with/control Pupil Capture from the interface. For further reference, this script shows how you can send events/annotations to Pupil Capture over the network: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
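For reference, a condensed sketch of the pattern that helper script uses (not a drop-in replacement for it; tcp://127.0.0.1:50020 is assumed, the default Pupil Remote address):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # assumed Pupil Remote address

pupil_remote.send_string("PUB_PORT")  # query the publisher port
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")  # query the current Pupil time
pupil_time = float(pupil_remote.recv_string())

pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to connect before publishing

annotation = {
    "topic": "annotation.trial_start",  # topic must start with "annotation."
    "label": "trial_start",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.dumps(annotation, use_bin_type=True))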

nmt 02 February, 2022, 13:19:46

@user-0d33c1 another option for synchronisation would be to leverage our Lab Streaming Layer (LSL) relay: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture#pupil-capture-lsl-relay-plugin It would seem 'Presentation' also supports LSL: https://www.neurobs.com/pres_docs/html/03_presentation/06_hardware_interfacing/02_lab_streaming_layer.htm

papr 02 February, 2022, 13:32:15

If you use the Presentation LSL integration, you can also record the published events within a native Pupil Capture recording using this plugin https://github.com/labstreaminglayer/App-PupilLabs/blob/lsl_recorder/pupil_capture/pupil_capture_lsl_recorder.py

user-027014 02 February, 2022, 13:51:41

Why can't Pupil Capture/Pupil Core track blue eyes? In our research labs we have known about this issue for quite some time, and chosen to ignore it by measuring only dark-eyed participants. However, given the current lack of dark-eyed people within our group, and in the Netherlands in general, it is starting to become a bit annoying. So far I fail to understand why the algorithm is not capable of detecting the pupil here in this example. Calibration of blue/bright-eyed participants results in a loss of 80-90% of the data due to low confidence, and any eye-trace data we do record afterwards is mostly noise. I notice that dark eyes appear a bit brighter than bright eyes in infrared cameras, and the contrast between the iris and pupil is therefore slightly better for dark eyes. Yet any pupil detection algorithm should be capable of finding the pupil here.

I'd appreciate any help to circumvent or solve this issue from my end (i.e. things that I can do here to improve tracking of bright-eyed people), but I also feel there should be some work done on the Pupil Labs pupil detection algorithm.

Note that the photo added below was taken in bright conditions, yet the problem exists for bright-eyed pupil tracking both in the light and in the dark.

Chat image

nmt 02 February, 2022, 15:27:36

Hi @user-027014 👋. The pupil detector should perform consistently with blue or darker coloured irises. We have certainly never encountered a systematic reduction in performance with blue irises. Note that there are several 2d detector settings that can be tweaked: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings. If these were optimised specifically for darker irises, it could negatively influence detection with lighter irises, and vice versa. In the first instance, I would try reverting to the default settings which work well in most cases: Click 'Restart with default settings' from within the settings menu of Pupil Capture. Also, feel free to share a recording with data@pupil-labs.com for feedback 👍

user-0d33c1 02 February, 2022, 14:33:51

Thanks @nmt! I'll look at these 2 options. I was also thinking of a backup plan, which would be to run a "server.py" on the eye-tracker PC that receives the messages from Presentation (using the standard TCP protocol I'm able to receive messages from Presentation in a Python server) and then forwards them to Pupil Capture, this time using ZMQ. This would be an intermediate interface. The main risk I see is reliability, notably in terms of lag.

papr 02 February, 2022, 14:39:00

I would recommend using LSL over the tcp socket option. 1) You do not have to implement anything yourself. 2) LSL takes care of the time sync and network delays for you.

user-0d33c1 02 February, 2022, 14:35:48

In general, do you know if the delay with which remote messages (e.g., "R" for record) reach Pupil Capture (via ethernet) should be consistent? Or is it random? I'm trying to find a way to quantify that as well.

papr 02 February, 2022, 14:37:58

It should be fairly consistent regarding Pupil Capture processing the command. Network delays may vary though.

papr 02 February, 2022, 14:41:04

Generally, I would not recommend estimating these kinds of delays. I would rather recommend subscribing to the events published by Capture's Network API. The recording.started notification contains the start time of the recording.
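A minimal sketch of that subscription pattern (the address is assumed to be Pupil Remote's default; print the payload to see the exact fields Capture publishes):

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # assumed address
pupil_remote.send_string("SUB_PORT")  # query the subscription port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.recording.started")

# Notifications arrive as two frames: topic string + msgpack payload
topic = subscriber.recv_string()
payload = msgpack.unpackb(subscriber.recv(), raw=False)
print(topic, payload)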

user-0d33c1 02 February, 2022, 14:43:16

Okay, then I suppose I should make sure that the clocks of the different PCs are synchronized, and if so, I could realign using the "recording.started" timestamp you mentioned

papr 02 February, 2022, 14:41:59

@user-0d33c1 For general information on time syncs see https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
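The core idea of that helper, sketched under the assumption of the default Pupil Remote address: measure the round-trip time of a request for Pupil time and use half of it to estimate the offset between your local clock and the Pupil clock.

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # assumed address

t_before = time.monotonic()
pupil_remote.send_string("t")  # request current Pupil time
pupil_time = float(pupil_remote.recv_string())
t_after = time.monotonic()

# Assume the request and the reply each took half the round trip
offset = pupil_time - (t_before + (t_after - t_before) / 2.0)

def local_to_pupil_time(local_t):
    return local_t + offset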

user-0d33c1 02 February, 2022, 14:44:32

Thanks a lot for all your replies, it's much appreciated! I'll look at all these options.

user-0d33c1 02 February, 2022, 14:45:10

One last question: if I run a 1-hour experiment with 40 trials, do you think it's reasonably doable to record the Pupil Core continuously for 1 hour while sending annotations of the events, or is it necessary to "rec/stop" for each trial? I imagine that because the main video data may be heavy, recording for 1 hour risks crashing the PC if it's not particularly powerful?

papr 02 February, 2022, 14:46:43

I would recommend 3-4 smaller recordings instead of one 1-hour long one.

user-0d33c1 02 February, 2022, 14:47:29

Okay thanks very much! 🙂

user-f6b744 02 February, 2022, 16:09:38

Hi - I've sent an email to the info pupil labs account, but we’re currently in-field conducting some eye-tracking today and we’ve had an issue with one of the interviews. I can see in our Cloud area that the file is 9:56 long (it is still currently uploading), but in-field we're only able to see the first 40 seconds-ish of the recording on the phone. Are you able to help us with this and identify what has happened? – We can’t see anything from the device settings etc on our side and we have another session very soon where we don't want the same issue to occur.

File name - 2022-02-02_10:00:22

ID - 9f03e566-d635-4932-a92d-e28d2f5ac377

papr 02 February, 2022, 16:31:03

Hey, thanks for getting in touch and providing the details. We have also received your email. We will have a look and follow up via email.

user-5b9b5d 02 February, 2022, 16:28:51

hello everyone! I desperately need to understand what's wrong. My setup includes two PCs: the first PC has only Pupil Labs software installed (on Windows); the second PC contains the TASK in MATLAB/PSYCHTOOLBOX with the ZMQ library (with zmq-matlab and zmq-matlabmsgpack). The two PCs communicate via local network: via MATLAB (second PC, on UBUNTU) the start/end recording and time synchronisation scripts work perfectly. Is there a way to send notifications (/triggers) for each TASK stimulus in Psychtoolbox via MATLAB to the first PC (Windows with Pupil Labs)? Am I missing something? Do I need to install something else? Thank you very much in advance for your help!

papr 02 February, 2022, 16:32:27

Just to clarify: Do the start/end recording and time synchronisation scripts work on the Windows PC?

papr 02 February, 2022, 16:34:18

If yes, you only need to call the corresponding functions from the example scripts within your Psychtoolbox experiment.

user-5b9b5d 02 February, 2022, 16:36:53

thank you very much for your reply! Unfortunately, the MATLAB notification scripts do not work, and I need to create timestamps for each stimulus presented during the task

user-5b9b5d 02 February, 2022, 16:35:34

yes, if you are asking whether Pupil Capture starts: yes! If I start the scripts from MATLAB (Ubuntu, second PC), Pupil Capture (first PC, with Windows) starts recording etc.

papr 02 February, 2022, 16:38:15

Ah, I misunderstood the issue. start/stop recording + time sync works. Sending events does not. Did I understand correctly?

user-5b9b5d 02 February, 2022, 16:38:48

yes!

papr 02 February, 2022, 16:39:25

ok, how do you determine that sending events does not work? Does the script crash? Does the event not show up as expected in Capture?

user-5b9b5d 02 February, 2022, 16:42:10

the script crashes when I try to send an annotation

papr 02 February, 2022, 16:42:51

ok, does it let you know about the error causing the crash? If yes, could you share it with us?

user-5b9b5d 02 February, 2022, 16:45:58

Chat image

papr 02 February, 2022, 16:47:35

You need to specify the socket and the annotation that you want to send https://github.com/pupil-labs/pupil-helpers/blob/master/matlab/send_annotation.m#L2-L8

user-5b9b5d 02 February, 2022, 16:49:56

ok, since there are several different annotations, do I need to create them a priori in Pupil Capture, or can I create them by specifying them in the MATLAB script? And thanks a lot for your support!

papr 02 February, 2022, 16:51:17

You do not specify them a priori in Capture. Just follow the format described in the link above.

user-5b9b5d 02 February, 2022, 16:51:31

thanks a lot!

user-027014 03 February, 2022, 14:11:11

Hi @nmt, thanks for your reply. I believe the 2d detector settings are already at default, as we use the 3d detector for gaze mapping. We record pupil data with the auto aperture setting, so that Pupil Capture finds its own optimal light setting; then we fix the 3D eye model, and then continue with the calibration and recording. After the weekend I'll do a test recording with my own (blue) eyes and share it (as we typically do not store the recordings due to the high noise, I therefore have nothing lying around atm).

nmt 03 February, 2022, 15:48:48

Note that the 2d detector is always running regardless of which gaze mapping pipeline you select. In addition, the 3d pupil detection is based on results of the 2d detector. As such, changing the 2d settings will influence the detection result. Please restart with default settings prior to making a recording for feedback πŸ™‚

user-d50398 03 February, 2022, 14:55:22

Hello, as far as I know, we are able to remotely start/stop a Pupil recording through zmq (e.g. socket.send_string("R")) and set the Pupil time (e.g. socket.send_string("T 0.0")).

I am wondering, are there any commands to set the recording path and the recording session name?

papr 03 February, 2022, 14:57:06

Hey, that can be done via socket.send_string("R <path to recording or session name>")
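A minimal sketch of that command sequence (the address is assumed to be Pupil Remote's default; the session name is a placeholder):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # assumed address

pupil_remote.send_string("R my_session_name")  # start recording under this session name
print(pupil_remote.recv_string())

pupil_remote.send_string("r")  # stop recording
print(pupil_remote.recv_string())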

papr 03 February, 2022, 14:58:53

Please note that we no longer recommend using the T command. See this documentation and example on how we recommend to do time sync https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py

user-d50398 03 February, 2022, 15:00:50

Thank you very much!

user-4f103b 03 February, 2022, 17:14:22

Hi, does pupil core support Monocular vision?

papr 03 February, 2022, 17:16:53

Hey, could you elaborate on what you mean by that?

user-4f103b 03 February, 2022, 17:17:19

Can I use only one eye at a time?

papr 03 February, 2022, 17:18:06

Yes, if you use this dual-monocular plugin https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350 It will prevent Capture from combining data from both eyes into one binocular estimate.

user-4f103b 03 February, 2022, 17:18:41

Thank you so much. I will check it

user-6a8e0b 03 February, 2022, 19:12:56

Hi guys. I am curious how I can read the gaze coordinates over time. The only files I am seeing are gaze.pldata and gaze_timestamps.npy. The latter only shows the time vector. Are the coordinates stored in gaze.pldata? If yes, how can I open them in Windows?

nmt 04 February, 2022, 08:55:50

Hi @user-6a8e0b 👋. The easiest way is to load your recordings into Pupil Player, our desktop software, and export the data as *.csv files. Details of all exportable data here: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
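For those who still want to peek at the intermediate files, a hedged sketch: .pldata files are a stream of msgpack-encoded (topic, payload) pairs, where each payload is itself msgpack-encoded. The CSV export above is the supported route; this is only for quick inspection.

import msgpack

with open("gaze.pldata", "rb") as f:
    for topic, payload in msgpack.Unpacker(f, raw=False):
        datum = msgpack.unpackb(payload, raw=False)
        print(topic, datum["timestamp"], datum["norm_pos"])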

user-d1ed67 04 February, 2022, 06:02:42

Hi @papr. We bought an Intel RealSense depth camera, and we would like to integrate its depth information into the post-hoc calibration routine (i.e. instead of letting the calibration routine move both the calibration target and the eye orientation, we want to fix the calibration target and let the calibration routine change only the eye orientation). Could you please point out what I should do in addition to installing the depth camera plugin in Pupil Capture? Is there any existing code that implements this feature?

user-5b9b5d 04 February, 2022, 09:12:11

hi everyone! I've tried modifying the script "send_annotation.m" in every way, but all I receive are annotations with an empty timestamp and label! Can someone help me understand how to configure the script correctly?

To briefly summarize my setup: I have two PCs connected via LAN, the first with only Pupil Labs software (Windows) and the second with Ubuntu, on which the task is created with MATLAB/PSYCHTOOLBOX. During the task there will be about 300 trials, and I need to send triggers for each stimulus within the individual trials. The two PCs communicate perfectly for synchronization and start/end recording via zmq/matlab. I am stuck with the triggers/annotations, because without them it is impossible to analyze 300 trials. Also, I bypassed the use of markers; they are not useful for sending annotations for stimuli whose timing I already have with Psychtoolbox on the second PC, right? Thank you very much in advance!

papr 04 February, 2022, 09:15:53

You do not need to modify the send_annotation script. You just need to pass arguments to the function it defines. Can you give an example of how you call it in your experiment?

user-5b9b5d 04 February, 2022, 09:22:52

these are some of the many ways

Chat image

papr 04 February, 2022, 09:58:25

You were very close to getting it right. The only issue was that the value variable did not contain the expected type of information

papr 04 February, 2022, 09:56:49

Read more about the recommended way to time sync here: https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py

papr 04 February, 2022, 09:53:57

Yeah, unfortunately the https://de.mathworks.com/help/matlab/ref/containers.map.html docs are not perfectly clear on how to create such an object. Please try

send_notification(socket, containers.Map( ...
    {'topic', 'label', 'timestamp', 'duration'}, ...
    {'annotation.my_custom_annotation', ...  % topic must start with `annotation.`
     'my custom label', ...
     42.0, ...  % use a time-synced value here
     0.0}));    % duration
user-5b9b5d 04 February, 2022, 10:01:49

thank you so much, really, I'll try right now! thank you so much for your time!

papr 04 February, 2022, 10:02:09

No worries, happy to help!

user-47a821 05 February, 2022, 20:28:13

Does the "Binocular Add-on" for VIVE have SDK support for linux. I see the pupil-labs github repos and from what I can tell the company generally supports linux. Specifically I am asking if the Binocular Add-on is supported by pupil-labs SDK's on linux (before I buy one).

nmt 07 February, 2022, 10:27:47

Hi @user-47a821 👋. I have responded in the vr-ar channel: https://discord.com/channels/285728493612957698/285728635267186688/940191912453877770

user-34e908 05 February, 2022, 20:54:28

Hi, can Pupil Core be used with a webcam, or does it only work with a Pupil Labs device?

nmt 07 February, 2022, 10:32:06

Hi @user-34e908 👋. Pupil Core software is compatible with third-party cameras that are UVC compliant. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498. You might also be interested in Pupil DIY: https://docs.pupil-labs.com/core/diy/#diy

user-729de6 07 February, 2022, 03:59:39

Hi, I was wondering if it was possible to project the coordinates from each of the eye cameras into the coordinate system or the image of the world camera?

user-9429ba 07 February, 2022, 11:09:03

Hi @user-729de6 Once you perform a calibration, gaze will be translated into the world camera coordinate system. You can read more about calibration in our Docs, available here: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-process

nmt 07 February, 2022, 11:02:59

Hi @user-729de6 👋. We use calibration to map data from eye camera to scene camera coordinates. During calibration, the wearer fixates or follows a known target (reference location) while Pupil Capture records and matches pupil data to these locations. Based on this data, a gaze estimation function maps newly detected pupil positions to gaze positions in scene camera coordinates. Read more about calibration here: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-process

user-729de6 07 February, 2022, 15:15:20

So would this gaze estimation function be applicable to recordings and purely using the pupil data? Also is the source code for the gaze estimation function available?

nmt 07 February, 2022, 15:41:35

Recordings made after calibration will contain pupil positions mapped to scene camera coordinates. More specifically, we use bundle adjustment to find the physical relation of the cameras to each other, and use this relationship to linearly map pupil vectors that are yielded by the 3D eye model (pye3d: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-uses-a-model-based-approach).

Source code available here: https://github.com/pupil-labs/pupil/blob/cd0662570a1a495c42b0185ed4b28630d1ba70ee/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py

user-d1ed67 07 February, 2022, 15:28:59

Hi @papr (Pupil Labs). We bought an Intel RealSense depth camera, and we would like to integrate its depth information into the post-hoc calibration routine (i.e. instead of letting the calibration routine move both the calibration target and the eye orientation, we want to fix the calibration target and let the calibration routine change only the eye orientation). Could you please point out what I should do in addition to installing the depth camera plugin in Pupil Capture? Is there any existing code that implements this feature?

nmt 07 February, 2022, 16:22:56

Hi @user-d1ed67. papr is currently on vacation. I'm not sure that I understand what you are trying to do – would you be able to elaborate a bit more on "fix the calibration target and let the calibration routine only change the eye orientation"?

user-729de6 07 February, 2022, 15:52:51

Thank you so much!

user-f9f006 08 February, 2022, 07:04:53

Hi~ I have a question about the code in pye3d/observation.py in the image. Why is the direction of the self.gaze_3d_pair "circle_3d_pair[i].center + circle_3d_pair[i].normal" (the red line), not just "circle_3d_pair[i].normal" ? Because I think the normal vector of circle_3d_pair is the same as the gaze vector/direction of 3d_gaze_pair. (the yellow lines). Thank you!

Chat image

papr 22 February, 2022, 14:24:38

Hi, we had a more detailed look at this section. You are right; the annotated code was incorrect. We fixed it in https://github.com/pupil-labs/pye3d-detector/pull/47 Interestingly, the effect of the change on the 2d projection of that line is barely measurable, which is why the issue remained undetected for so long.

user-92dca7 08 February, 2022, 11:48:21

Hey @user-f9f006, thanks for your question. You are right, the directions of the two line segments in gaze_3d_pair are given by the two unprojected normals. Note, however, the line segments have a specific location in actual 3D eye camera coordinates, as they have a start and end point. This is important for calculating their projections into 2D image space (see definition of self.gaze_2d in your code snippet), as well as for defining what in the code is referred to as Dierkes lines (see a few lines later in observations.py). The latter being linear constraints for eye ball position in 3D eye camera coordinates. More details about the involved algorithm for eyeball center estimation can be also found here: https://www.researchgate.net/publication/333490770_A_fast_approach_to_refraction-aware_eye-model_fitting_and_gaze_prediction

user-f9f006 08 February, 2022, 14:53:58

But I think in the class "Line" defining the gaze_3d_pair, the meaning of "direction" should be a direction vector that is independent of the start or end point. Also, in the function "project_line_into_image_plane", the start and end points of a 3D line segment are used to find the 2D projected line, and there is already a part of the code using the parameter "line" to get the start and end points (red line in the image), which suggests that the second parameter (direction) of the class "Line" should be a vector independent of the start or end point.

Chat image

user-5b79c2 08 February, 2022, 14:41:20

Hi! Last week, I noticed that the right eye camera stopped working, even though the world view and the left eye camera both work fine. There were four error messages in the application, running on Linux (Ubuntu):

EYE1: Could not connect to device! No images will be supplied
EYE1: No camera intrinsics available for camera (disconnected) at resolution [192,192]
EYE1: Loading dummy intrinsics, which might decrease accuracy
EYE1: Consider selecting a different resolution, or running the Camera Intrinsics Estimation

nmt 08 February, 2022, 14:52:20

Hi @user-5b79c2 👋. Please check the cable is securely plugged in and follow these steps in the first instance: https://docs.pupil-labs.com/core/software/pupil-capture/#linux If that doesn't solve it, reach out to info@pupil-labs.com 🙂

user-92dca7 08 February, 2022, 15:08:11

@user-f9f006 Thanks for following up and bringing up this issue. I do get your point now. We will look into this further.

user-f9f006 09 February, 2022, 04:09:53

Okay, thank you very much!

user-0ee84d 08 February, 2022, 22:00:07

@marc is it possible to enhance the image quality via Pupil Player?

marc 09 February, 2022, 09:08:31

@user-0ee84d No, the image quality of Pupil Invisible can unfortunately not be enhanced post hoc. Using the newest version of the Companion App, you could however manually change the exposure of the scene camera in the settings view to fit your recording environment better, which can lead to improvements especially in more difficult lighting scenarios.

user-0ee84d 08 February, 2022, 22:28:45

The image quality of Invisible is worse than Core for the same illumination.

user-cc353f 09 February, 2022, 12:01:21

Hi, is it possible to use the Pupil software with cheaper cameras, with a lower frame rate than the ones suggested? E.g. not 200 Hz but 30 Hz. What are the implications? Thank you

nmt 09 February, 2022, 13:04:38

Hi @user-cc353f 👋. Please see this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/940193197517332480

user-027014 10 February, 2022, 12:54:10

Hi @nmt, I've recorded some example data for both blue and dark eyes. Our problems with bright irises remain even after fiddling with the settings (the 2d settings were set to default). We took the Pupil Core outside our labs, and that seemed to improve conditions when recording in the light (which may be because the lighting in the office is continuous LED, while the lighting in our labs is fluorescent tube lighting and thus flickering). However, in the dark, pupil tracking for blue eyes is still poor compared to dark irises. I've sent you that data via email as well. Kind regards, Jesse

nmt 11 February, 2022, 13:06:12

Hi @user-027014. We have responded to your email with feedback on improving pupil detection in dark environments 🙂

user-55ae67 10 February, 2022, 16:16:16

Hello, we are having problems opening our data in Pupil Player on Windows 10. The data were collected on a Mac, so could this be a compatibility issue? The folder can be opened and processed on the same Mac it was created on. This is what happens each time we try to open it on another PC. Thanks in advance!

Chat image

user-b91cdf 11 February, 2022, 09:12:46

Hello all, yesterday during an experiment, the software had a hard time detecting my subject's pupil. During the day it was not a problem. (Same person)

It was not due to the orientation of the eye cameras. When I restarted the software, an exception/Error was thrown. "Eye0/Eye1 could not set Backlight compensation" and something regarding ZBeacon.

The solution then was to restart the software with factory settings and set the eye cameras to 192x192p and manual exposure, instead of 400x400p and auto mode.

I don't have the logs unfortunately. I still tried to reproduce the error. At the current startup I get the following warning in the logs.

2022-02-11 09:45:56,454 - eye0 - [WARNING] uvc: Could not set Value. 'Backlight Compensation'.

What can I do to make the "Backlight Compensation" work properly?

System: MacOs Big Sur 11.6.2 (Apple silicon) with the newest Desktop Software

Kind regards,

Cobe.

nmt 11 February, 2022, 14:37:08

Hi Cobe. The camera exposes this UVC control but does not actually allow setting it. This warning does not affect the eye tracking and can be ignored. For tracking at night or in the dark, I would suggest setting the exposure manually at the time of recording to optimise the contrast between the pupil and iris.

user-b91cdf 11 February, 2022, 15:03:07

Ok, thanks 🙂

nmt 11 February, 2022, 14:40:46

Hi @user-55ae67 👋. Would you mind sharing the names of a few files in the recording folder, e.g. "world.mp4"? I want to check if any of them have been renamed.

user-55ae67 11 February, 2022, 15:07:50

Hi @nmt! Thank you for your reply. Here is what the content of my folder looks like:

Chat image

nmt 11 February, 2022, 15:19:07

Thanks for sending that! Please follow these steps:
1. IMPORTANT: Make a backup of the recording
2. Rename any files that have "._" at the beginning of their name. For example: ._world.mp4 should be world.mp4
3. Try loading the recording in Pupil Player on your PC

user-55ae67 11 February, 2022, 16:33:30

I have just noticed that they are duplicates of the original videos and each is 4.00 KB. I cannot change their names from, say, ._world.mp4 to world.mp4 because the original ones are still there, and when I delete the dotted files I still get the same error.

nmt 11 February, 2022, 16:37:17

Unfortunately, I'm unfamiliar with macOS. @papr will be able to help out when he returns from vacation next week 🙂

user-55ae67 11 February, 2022, 16:38:12

I see. Thanks for the suggestions anyway. I'll try to solve it myself until that time :)

user-da7dca 12 February, 2022, 23:30:38

Hey, yeah, in the end it came down to the Python and Ubuntu version I used. In my case the only working combination was system Python (3.6) on Ubuntu 18.04. Weirdly, Python 3.6 on Ubuntu 20.04 won't work for me. Btw, to test it more efficiently I used a Docker container.

user-878608 16 February, 2022, 01:51:35

ah I see, I still get the same error on Python 3.6.9 on Ubuntu 18.04; it only crashes if my world cam is connected

user-6e1219 14 February, 2022, 07:19:34

Hey all, I am trying to create an LSL stream using the Pupil Labs LSL relay Python file, but got an unexpected error: "No module named 'plugin'". Can anyone tell me a way around this problem, or guide me to properly install the Pupil Labs LSL relay?

papr 14 February, 2022, 10:36:32

You seem to try to run the Python code directly. Instead, you need to load it as a plugin from the software's ui. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin

user-d1ed67 14 February, 2022, 07:26:55

Hi @nmt. I checked the default calibration routine used by the Pupil Player here (https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/calibrate_3d.py#L48). This calibration routine is "guessing" where the calibration targets are to minimize the reprojection error (i.e. "optimizing" the depth of the calibration targets). I noticed that the default calibration routine allows me to set the calibration targets as fixed by setting the field fix_gaze_targets (https://github.com/pupil-labs/pupil/blob/74865d3ed5b6d2b462533abf09a9d485df4f551e/pupil_src/shared_modules/gaze_mapping/gazer_3d/bundle_adjustment.py#L23). Also, I found that there is a plugin for the Intel RealSense camera (https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29), so I bought an Intel RealSense camera to obtain ground-truth depth information of the world scene. However, that plugin was designed for Pupil Capture, and I need to do post-hoc gaze calibration because of our specific setup. Besides, I need to remotely stream the video to Pupil Capture, so I built a customized video backend for the Intel RealSense camera. I would like to change the post-hoc gaze calibration routine of Pupil Player such that it can extract the depth information of the calibration points from the recorded world frame and perform gaze calibration with fix_gaze_targets set to True. Could you please point out where in the code I should look to achieve this?

papr 14 February, 2022, 10:34:12

You can create a Player user plugin that overwrites the existing behavior.

1. Create a Python file in the Player (not Capture) plugin directory (pupil_player_settings/plugins)
2. Subclass Gazer3D:

from gaze_mapping.gazer_3d.gazer_headset import Gazer3D

class APSAPS_Gazer(Gazer3D):
    label = "APSAPS Calibration"

    ...

3. Overwrite any necessary functionality
papr 14 February, 2022, 10:27:31

Hi, I think the world.mov file is causing the issue. Please rename it to something that does not start with world, delete the world_lookup.npy file, and try reopening the recording.

The ._-prefixed files can be ignored. These are generated by macOS if files are stored on external storage devices and contain meta information. Pupil Capture explicitly ignores those.

user-55ae67 15 February, 2022, 05:07:11

It worked! Thank you very much :)

user-6e1219 14 February, 2022, 10:37:09

Thank you so much.

user-573a28 14 February, 2022, 16:10:17

Hi, I've been trying to make an IR detecting camera by following: https://docs.pupil-labs.com/core/diy/#replace-ir-blocking-filter-on-the-eye-camera

I'm not using the recommended webcam, I've tried this with NexiGo N60 and AUSDOM AF640. Both have led to blurry images. I've tried using gloves while handling the film but that didn't really help. I'd really appreciate any help/advice you could give me on this - been stuck on it for a while now

user-55ae67 15 February, 2022, 06:59:08

Hi, I'm forwarding a question from one of our lab members:

"After annotating the data in Pupil Capture, there is a difference between the fixation numbers in the files I export. The annotations.csv file contains 121 more fixations than the fixations.csv file. What could be the reason for this difference?

Also, fixation durations show as 0 in the annotations.csv file. Is it possible to do post-hoc fixation detection in Pupil Player to see the fixation durations after the annotation?"

Thank you!

papr 15 February, 2022, 10:57:34

After rereading the question more carefully, I think there might be another misunderstanding. If you annotated fixations in realtime using Pupil Capture, then you are likely referring to fixation target stimuli that are part of your experiment, correct? The fixation detector does not process the scene video and is purely based on the gaze data, classifying fixations as special eye movements (moving the eye less than a given threshold within a minimum duration). It is possible that there are a series of false negative detections, causing the discrepancy. Check out our best practices on how to choose the threshold values: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds

papr 15 February, 2022, 10:47:53

Maybe to further understand the underlying issue: What are you using the annotations for? The fixation detector works without them, too.

papr 15 February, 2022, 10:00:53

Hi,

Annotations created in Pupil Player are timestamped with the time of the current frame at which the annotation was created. Their duration is always set to zero. To model duration, you can annotate start and end events instead. Also, you can create multiple annotations per frame. There is no need for these to be equal to fixations.

user-55ae67 16 February, 2022, 05:52:06

Thank you for your reply! I think the problem is solved now :)

nmt 15 February, 2022, 10:26:55

Here are some steps you can take to improve pupil detection in dark environments:
1. Position the eye camera to minimise 'bright pupil' and/or glare
2. Manually change the exposure settings to optimise the contrast between the pupil and the iris: https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing
3. Set the ROI to only include the eye region, excluding the dark corners of the image. Note that it is important not to set it too small (watch to the end): https://drive.google.com/file/d/1NRozA9i0SDMe_uQdjC2jIr000iPjqqVH/view?usp=sharing
4. Modify the 2D detector settings: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings
5. Adjust gain, brightness, and contrast: In the eye window, click Video Source > Image Post Processing

user-b91cdf 15 February, 2022, 12:53:50

Thank you !

user-b91cdf 15 February, 2022, 13:01:15

Does the 2D detection affect the 3D pipeline, since I am using the 3D model?

nmt 15 February, 2022, 13:24:20

Some notes about pupil detection:
- The 2d detector is always running, regardless of which gaze mapping pipeline is used.
- The 3d pupil detection is based on the results of the 2d detector (more details here: https://docs.pupil-labs.com/developer/core/pye3d/#pye3d-uses-a-model-based-approach)
- Tweaking the 2d detector settings will influence subsequent pupil and gaze measurements

user-573a28 15 February, 2022, 15:48:38

Hi @nmt , I've been trying to make an IR detecting camera by following: https://docs.pupil-labs.com/core/diy/#replace-ir-blocking-filter-on-the-eye-camera

I'm not using the recommended webcam, I've tried this with NexiGo N60 and AUSDOM AF640. Both have led to blurry images. I've tried using gloves while handling the film but that didn't really help. I'd really appreciate any help/advice you could give me on this - been stuck on it for a while now

nmt 16 February, 2022, 12:18:14

Hi @user-573a28 👋. Unfortunately I have never modified such webcams and cannot offer much insight as to why your image is blurry 😕

user-d41611 15 February, 2022, 18:58:28

@&288503824266690561 is there information on the Pupil Core eye camera manufacturer, or a spec sheet for the camera and drivers that come with the base Pupil model?

nmt 16 February, 2022, 12:18:58

Please contact info@pupil-labs.com in this regard 🙂

user-6e1219 16 February, 2022, 06:34:37

Hello, I was able to establish the LSL stream using the Pupil Labs LSL plugin. Now the problem is that I want to send triggers to mark different parts of the gaze data.

Suppose I have System 1, which is running the eye tracker, and System 2, which is connected via LSL. Now, from System 2, I want to send triggers that will mark very specific parts of the recorded timestamps. Can anyone tell me how I can establish the connection?

nmt 16 February, 2022, 12:35:41

May I ask, what are you using System 2 for? If System 2 is running LSL-supported hardware or software with an LSL outlet, then any events/data from these should by definition be available on the LSL network.
- If you are using pupil_capture_lsl_relay and LSL's 'Lab Recorder App', the events will be stored in a location defined by the Lab Recorder App alongside Pupil Core data
- If you are using pupil_capture_lsl_recorder, then the events will be stored natively by Pupil Capture
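For orientation, a minimal sketch of publishing markers from System 2 as an LSL outlet using pylsl (the stream name and marker text are placeholders); any LSL recorder on the network, including the options above, can then pick them up:

from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="ExperimentMarkers", type="Markers",
                  channel_count=1, nominal_srate=0,  # 0 = irregular rate
                  channel_format="string", source_id="marker_stream_001")
outlet = StreamOutlet(info)

outlet.push_sample(["trial_1_start"])  # timestamped by LSL on push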

papr 16 February, 2022, 07:46:17

Could you please share the full error trace back?

user-878608 17 February, 2022, 01:55:10

Running on Ubuntu 18.04, Python 3.6.9, on the Jetson Nano

crash_k9c9ibup.log

user-b395ae 16 February, 2022, 11:23:19

Hi! People from my lab tell me that if they record on a tablet using Pupil Capture and turn off the screen during recording (to save battery), when they turn the screen on again, everything looks fine at first glance, but the recording has actually stopped. (The video just shows a frozen screen.) Is this intended behavior? Can we do something about it?

nmt 16 February, 2022, 13:03:17

Hi @user-b395ae 👋. It sounds like the tablet is going into standby mode, which would ultimately pause any processes. There may be a way to prevent this within the power settings

user-b395ae 16 February, 2022, 15:04:21

Hi @nmt! Good point! I'll have a look at the power settings (or ask the colleagues to do so 😉)

user-ad4608 16 February, 2022, 16:39:20

Hello everyone, I recently installed the Pupil Core app and connected the camera via a USB 3 port, but the world camera does not work. Could you help me get it started? I get the following error message:

user-ad4608 16 February, 2022, 16:39:25

Restoring original ctypes.util.find_library...
Original ctypes.util.find_library restored.
attempt to release unclaimed interface 0
world - [INFO] video_capture.uvc_backend: 20:6 matches Pupil Cam1 ID2 but is already in use or blocked.
world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
world - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]!
world - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
world - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
attempt to release unclaimed interface 0

nmt 17 February, 2022, 11:13:13

Hi @user-ad4608 👋. In the first instance, please follow these steps: https://docs.pupil-labs.com/core/software/pupil-capture/#windows

user-ad4608 16 February, 2022, 16:39:35

Thank you in advance

user-b395ae 16 February, 2022, 17:11:18

Hi @nmt! It turned out that the colleagues tried to turn off the screen of the tablet using the power button, but actually sent the tablet into sleep mode.... (What is quite unfortunate is the fact that the videos all seem to have the correct length; only from the moment the tablet goes into sleep mode, they just show a still image. Therefore, this was only noticed after finishing the study.... Maybe it would be possible to include a warning in future versions? Something like "Warning: Video recording has been paused for some time and might be incomplete". But I'm not sure how difficult it is to detect that the cameras have been "sleeping".)

user-d41611 16 February, 2022, 18:43:17

@papr do you mind taking a look at the question linked here? I am trying to figure out system specifications and am not seeing the information I need in the documentation. If you can provide the answer or point me in the right direction, help is much appreciated. https://discord.com/channels/285728493612957698/285728493612957698/943219733539467265

papr 16 February, 2022, 19:32:11

Our cameras support the UVC standard. We recommend using them with our Pupil Core software.

user-b395ae 17 February, 2022, 10:30:44

Hi @nmt! My current guess at what happened is this: it seems that older versions of Win10 had the option that the power button just turns off the screen, but newer versions have lost this setting, and the power button sends the tablet into "partial" sleep mode. Now if Pupil Capture manages to keep running, but Windows shuts down communication with the cameras to save battery, the software only "sees" the last frame in the buffer and happily keeps on capturing still images.

nmt 17 February, 2022, 11:10:41

Hi @user-b395ae. Thanks for the update.

You can try disabling any unnecessary Plugins during recording, such as the real-time surface tracker if you are using that, and disabling real-time pupil detection. This would reduce computational load and potentially help with battery life. Pupil detection and calibration can then be performed post-hoc.

If you try this, be sure to set up the eye cameras properly; your recording should contain a) sufficient gaze samples to fit the eye model, and b) a calibration choreography.

Read more about camera position and fitting the model here: https://docs.pupil-labs.com/core/#_3-check-pupil-detection and post-hoc pupil detection and calibration here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection

user-c8e5c4 17 February, 2022, 14:41:25

hey guys, I'm running an online experiment on Pavlovia and would like to use eye tracking with it. Is there any way to send triggers to Pupil Capture to separate trials and blocks?

papr 17 February, 2022, 14:49:54

Hi, unfortunately, browsers do not have the necessary access to our eye tracking hardware. To use our eye tracker, you will need to run the experiment through the PsychoPy Desktop application. Generally, yes, it is possible to send events to Pupil Capture during the experiment to annotate things like trials/blocks.

user-c8e5c4 17 February, 2022, 14:51:02

thanks for replying! could you point me to some tutorial to learn how?

papr 17 February, 2022, 14:53:07

You can find more about the application here https://www.psychopy.org/ To use our eye tracker specifically, see https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html#pupil-labs-core
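As a rough sketch of what the linked integration looks like in code (the configuration keys follow the linked iohub notes, but treat the exact names and values here as assumptions rather than a verified recipe):

from psychopy.iohub import launchHubServer

iohub_config = {
    "eyetracker.hw.pupil_labs.pupil_core.EyeTracker": {
        "name": "tracker",
        "runtime_settings": {
            # Assumed: a local Pupil Capture instance with default Pupil Remote settings
            "pupil_remote": {"ip_address": "127.0.0.1", "port": 50020},
        },
    }
}
io = launchHubServer(**iohub_config)
tracker = io.getDevice("tracker")
tracker.setRecordingState(True)  # start eye tracking via iohub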

user-c8e5c4 17 February, 2022, 14:53:24

thank you!

user-b395ae 17 February, 2022, 14:53:27

Hi @nmt, thanks for the suggestion. We might try this. I think it might also work to lock the screen (as long as the tablet is configured not to go into sleep mode).

papr 17 February, 2022, 14:55:05

May I ask, are you using Pupil Core already? And if not, how did you find this channel?

user-c8e5c4 17 February, 2022, 14:56:02

yes, I'm using it already; I found it via your homepage

user-c8e5c4 17 February, 2022, 14:57:10

I have several tasks I use it for; one runs on PsychoPy/Pavlovia, the other is very different. I've already been asking a lot of questions on this channel, hope that's fine 🙂

papr 17 February, 2022, 14:58:12

This is great to hear! I just wondered since our new PsychoPy integration was just released last week!

user-c8e5c4 17 February, 2022, 14:58:43

no, but that's just great timing for me then 🙂

papr 17 February, 2022, 14:59:13

Let us know if you encounter any issues with the docs or the software!

user-878608 18 February, 2022, 04:11:48

Hi, which version of Python is recommended to run Pupil on? (for Ubuntu 18.04)

papr 18 February, 2022, 07:24:08

You can use anything between and including 3.6 and 3.9

user-878608 18 February, 2022, 21:01:48

I have done a symlink in a virtual environment running Python 3.9 on Ubuntu 18.04; I'm unsure how to resolve this error. My cv2 version is 3.2.0.

Chat image

papr 21 February, 2022, 09:58:22

In your case, it is a different third-party dependency that is having issues: scikit-learn. While building aarch64 is possible through cross-compilation via tools like https://github.com/pypa/cibuildwheel, testing them is not possible on most CI systems as they run on x86_64 hardware.

user-4290b7 21 February, 2022, 15:47:10

Hey, nice to meet you! I was looking for a way to get the Tobii Eye Tracker 5 to work on an Arch-based system when I encountered pupil-eye-tracking. However, it was unclear to me whether it would work with Tobii?

papr 21 February, 2022, 15:53:30

Hi, welcome to the Pupil Labs community 👋 While our Pupil Core software supports non-Pupil Labs hardware via custom video source backends, I am not aware of one that supports Tobii hardware.

user-4290b7 21 February, 2022, 15:56:48

Thank you :) I find eye tracking and VR quite intriguing and am currently using Tobii on a Windows-based system. However, I would like to migrate to Arch-based Linux, since I use that in basically every other scenario, which caused me to find your interesting project in my search for compatible drivers

papr 21 February, 2022, 16:07:46

We do not provide pre-compiled versions of our software for Arch-based systems. We support Ubuntu 16 or newer, macOS 10.15 or newer, and Windows 10. But our software is fully open-source and can be compiled on various target systems.

user-4290b7 21 February, 2022, 16:09:22

I know that your version can be found in the AUR (Arch User Repository), which is where I encountered it. My only concern is the compatibility with Tobii at this point :)

papr 21 February, 2022, 16:11:22

> I know that your version can be found in the AUR
I was actually not aware of that. Could you share a link to it?

> My only concern is the compatibility with Tobii at this point
By default, Pupil Core software is not compatible with Tobii hardware. And I am also not sure if their license agreements would even allow the usage of third-party software.

user-4290b7 21 February, 2022, 16:13:30

> I was actually not aware of that. Could you share a link to it?
Here you go: https://aur.archlinux.org/packages/pupil-eye-tracking-bin

> By default, Pupil Core software is not compatible with Tobii hardware. And I am also not sure if their license agreements would even allow the usage of third-party software.
I didn't consider that aspect of it at all; it would make sense if that's the case. I guess I'll stick to Windows for now

papr 21 February, 2022, 16:14:57

Thank you for the link! That points to an outdated version (2.3) and just references our Ubuntu build. I am not sure if that would work on Arch.

user-4290b7 21 February, 2022, 16:16:47

now that I look at it, you're right. However, I have yet to encounter a package in the AUR that didn't work

papr 21 February, 2022, 16:17:41

If you get to test it, please let us know how it went! It would be great if one could reference the package to other Arch users.

user-4290b7 21 February, 2022, 16:18:24

I only have the Tobii hardware as of now, so I wouldn't know how to test it. However, I can try to compile it

papr 21 February, 2022, 16:19:00

Have you tried contacting Tobii about this issue?

papr 21 February, 2022, 16:20:17

Their Python API should be available on Arch if I am not mistaken https://pypi.org/project/tobii-research/

user-4290b7 21 February, 2022, 16:24:01

the latest post which I saw (November 2021) from Tobii development suggests that they don't plan to provide Linux support as of yet; however, they are on the lookout to see if Linux gains more popularity

user-4290b7 21 February, 2022, 16:37:29

This is what I get when I run it on Arch

Chat image

papr 21 February, 2022, 16:38:52

This is as much as you can test without compatible hardware, thanks!

user-4290b7 21 February, 2022, 16:39:33

In that case the link I provided should be sufficient for Arch users :)

user-878608 22 February, 2022, 04:29:58

@papr /home/jetbot/.local/lib/python3.6/site-packages/cysignals/signals.cpython-36m-aarch64-linux-gnu.so is the error I'm getting as soon as I plug my world camera in (before crashing). Do you know what the issue is?

papr 22 February, 2022, 06:18:22

Cysignals is only the module catching the segmentation fault. The error message reveals the actual cause.

user-878608 22 February, 2022, 09:22:01

oh thanks, do you have any idea what this error message reveals? 🙏

crash_0f084tiu.log

papr 22 February, 2022, 09:57:25

Hi, this one looks like a race condition when calling uvc_stream_stop(#3) https://github.com/pupil-labs/libuvc/blob/master/src/stream.c#L1423 Not sure what is causing it

user-878608 23 February, 2022, 03:44:31

oh, that's odd; occasionally it works fine but crashes again if I change the frame rate / resolution

user-6e1219 23 February, 2022, 08:56:33

Hello, I am trying to install the pupil-apriltags module on my PC to use AprilTags. Though I have Visual Studio 2022 installed, I am still getting this error. Could anyone let me know the proper way to install it?

Chat image

papr 23 February, 2022, 13:21:59

Hi, you will need to make a change to the code. This line https://github.com/pupil-labs/apriltags/blob/master/setup.py#L21 needs to be changed like this:

-     cmake_args.append("-GVisual Studio 15 2017 Win64")
+     cmake_args.append("-GVisual Studio 17 2022 Win64")
user-430fc1 23 February, 2022, 13:17:59

Hi folks, has anyone tried running Capture on an 8GB Raspberry Pi 4 with 64-bit Raspberry Pi OS and the Pi NoIR camera? If not, any thoughts / guesses on feasibility? Thanks!

papr 23 February, 2022, 13:19:17

Hi, you will need to compile everything from source, as the bundled app is compiled for x86_64, not the RPi's arm64 CPU architecture

user-b28eb5 23 February, 2022, 14:59:02

Hi, I am trying to record with the Core glasses using a mobile phone with the Pupil Mobile app, but I can't get the video. Can someone help me? Thank you

papr 23 February, 2022, 15:09:38

Please make sure to enable OTG Storage in your phone's system settings.

user-b28eb5 23 February, 2022, 15:12:02

That is okay. The thing is that I can see other files, but not the mp4s of the eyes and the world camera

papr 23 February, 2022, 15:12:33

Do you see the cameras being listed in the main view?

user-b28eb5 23 February, 2022, 15:13:30

Yes, and it says that they are active

papr 23 February, 2022, 15:14:06

And the video preview is displayed correctly, too? (tap the corresponding entry to open the preview)

user-b28eb5 23 February, 2022, 15:14:56

Yes, the preview is okay

user-b28eb5 23 February, 2022, 15:17:48

I see only 1 photo from each camera when I record

papr 23 February, 2022, 15:20:41

Pupil Mobile records the video in the mjpeg format. Internally, this is just a list of jpegs (images). Some media viewers only display the first one. If you open these files in VLC or the whole recording in Pupil Player, you will see that the video was recorded correctly.

user-b28eb5 23 February, 2022, 15:22:23

Okay, now it is working, thank you!

papr 23 February, 2022, 15:23:02

I saw that you also contacted us via email. Can I mark the email conversation as closed?

user-b28eb5 23 February, 2022, 15:24:37

Yes

user-6e1219 24 February, 2022, 10:22:04

Can anyone let me know what the format of the timestamps in the files exported by Pupil Player is?

papr 24 February, 2022, 10:23:26

Hi, Player exports seconds in Pupil Time. See https://docs.pupil-labs.com/core/terminology/#timing for more info on Pupil Time.
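A hedged sketch of converting an exported Pupil-time timestamp to Unix time, using the two start times stored in a recording's info.player.json (key names as found in current recording formats; verify them against your own file):

import json

with open("info.player.json") as f:
    info = json.load(f)

# Offset between the system (Unix) clock and the Pupil clock at recording start
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_unix_time(pupil_ts):
    return pupil_ts + offset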

user-f62f53 24 February, 2022, 21:08:30

Within the data set I can see a file named "pupil_timestamps.npy" (and one for blinks), but these files are indecipherable

Chat image

user-c92479 24 February, 2022, 14:56:17

Hello - we have suddenly encountered issues with the main camera getting blurrier - but the resolution/frame rate hasn't changed, and pupil cams still look okay. Resetting to default settings doesn't improve things. We have mostly been using a laptop version of pupil capture, but the change happened after testing with a mobile version. I can't really think why that would make a difference though... do you have any suggestions for anything else we could try? Many thanks!

papr 24 February, 2022, 14:58:15

Hi, the scene camera lens can be rotated to adjust the manual focus. It is likely that you rotated the lens by accident causing it to be out of focus. See here https://docs.pupil-labs.com/core/hardware/#focus-world-camera for details

user-c92479 24 February, 2022, 15:01:37

ah, cool - I did try to twist it but not very forcefully because I wasn't sure if it was supposed to move! Will try again. Thanks.

papr 24 February, 2022, 15:03:31

It should not take a lot of force. Also, it is unlikely that you need to turn a lot. Small movements should be sufficient.

user-c92479 24 February, 2022, 16:57:47

it was a little bit stuck but it did work in the end and we got it sorted! Thanks so much again for your help!

user-71f551 25 February, 2022, 02:39:31

Hi @papr ! I'm having a bit of trouble with pyplr. The message "error:multiple repeat" keeps appearing with every single pupil player recording I try to load using this guide https://pyplr.github.io/cvd_pupillometry/04d_analysis.html#Export-with-Pupil-Player. Maybe I just haven't read enough (I discovered pyplr yesterday). In any case, thank you very much for your time!

papr 25 February, 2022, 09:00:27

Since I am not the author, my ability to help you is a bit limited. Is it just a warning, or does the program crash at that point? IIRC @user-430fc1 is the author of the package. Maybe they can jump in and help.

nmt 25 February, 2022, 08:39:59

Hi @user-f62f53 👋. This is a NumPy binary file. Trying to open it in a text editor will just get you gibberish. Instead, I'd recommend loading your recording in Pupil Player, our desktop software: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-player. From there you can export your data into human readable files. Check out the documentation for instructions: https://docs.pupil-labs.com/core/software/pupil-player/#export

user-f62f53 25 February, 2022, 13:43:15

terrific, thank you! We'll try that today

user-c19af4 25 February, 2022, 11:05:41

Hi there, sorry in advance if this is the wrong channel to post this question, but it seems to be the correct one from my understanding.

I am using Pupil Labs in Unity with an HTC Vive eye-tracking headset, where I am currently trying to access the ‘confidence’ variables. Specifically, I want a blink of one eye to be registered, instead of having to blink with both eyes before it registers. Any suggestions? πŸ™‚

papr 25 February, 2022, 11:06:51

Hi πŸ™‚ Your approach sounds correct. What is the exact issue that you are encountering?

user-c19af4 25 February, 2022, 11:16:47

I can only access the overall confidence value, but I'd like to get it for each eye separately πŸ˜…

papr 25 February, 2022, 11:18:33

Am I assuming correctly, that you are processing gaze data, not pupil data?

user-c19af4 25 February, 2022, 11:20:30

Yes. We are accessing the confidence float from GazeData.cs with the FloatFromDictionary helper function. Is there a key for both confidence values, or is there another solution?

papr 25 February, 2022, 11:23:10

Eye-specific confidence data can be found in the pupil data. Each gaze point sent over the network contains the pupil datum that it is based on. Unfortunately, hmd-eyes does not expose/process this base data. Instead, you will have to subscribe to the pupil data directly and use https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/PupilData.cs#L18
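
For illustration, here is a minimal Python sketch of that subscription, following the pupil-helpers pattern and assuming Pupil Remote at its default localhost:50020; in Unity you would consume the same pupil.* messages from C#:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to raw pupil data; topics pupil.0.* and pupil.1.*
# correspond to the two eye cameras.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # Each pupil datum carries its own confidence, so the eyes can be
    # checked independently, e.g. to detect a one-eyed blink.
    print(topic.decode(), datum["confidence"])
```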

papr 25 February, 2022, 11:24:06

Please use the software-dev channel if you encounter any further software-development related questions πŸ™‚

user-c19af4 25 February, 2022, 11:24:36

Thank you, papr!

user-48fec1 26 February, 2022, 19:08:38

Hi, is there a way to adjust the LSL plugin code so that it is able to send surface tracking gaze data as well? I noticed the gaze and pupil data are obtained from a β€œgaze” structure. Can we access the gaze positions on the surface through this β€œgaze” structure as well? If yes, what's the name of the child structure storing this information? Thank you!

papr 28 February, 2022, 09:10:25

Hi, yes, adjusting the plugin is possible. One would mostly need to process a different data structure that is exposed by the surface tracker. Instead of processing gaze here https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L48 one would need to process the surfaces data. One might need to adjust the LSL format, too.

papr 28 February, 2022, 09:16:12

Alternatively, you can use this new (experimental) plugin: LSL Recorder https://github.com/labstreaminglayer/App-PupilLabs/blob/lsl_recorder/pupil_capture/pupil_capture_lsl_recorder.py It replaces the LSL Lab Recorder and records LSL streams and saves them as csv. I.e. you could record your other LSL data synchronized to the native Capture recording and get access to the surface gaze this way.

papr 28 February, 2022, 16:17:44

Short update from my side: I started implementing this myself. There have been many people asking for this feature. I think it is time to get this done.
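
For illustration of the adjustment described above, here is a minimal Capture user-plugin sketch that reads surface-mapped gaze from the event stream; the key names are those produced by Capture's surface tracker, and the LSL outlet itself is omitted:

```python
from plugin import Plugin

class SurfaceGazePrinter(Plugin):
    """Sketch: consume the surface tracker's events instead of raw gaze."""

    def recent_events(self, events):
        for surface in events.get("surfaces", []):
            for gaze in surface["gaze_on_surfaces"]:
                if not gaze["on_surf"]:
                    continue  # gaze fell outside the surface bounds
                x, y = gaze["norm_pos"]  # surface coordinates, (0, 0) = bottom left
                # A real relay would push these into an LSL outlet with an
                # adjusted channel format instead of printing.
                print(surface["name"], gaze["timestamp"], x, y, gaze["confidence"])
```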

user-71f551 28 February, 2022, 00:40:54

Thank you very much! I hope he does! The program crashes at that point

papr 28 February, 2022, 09:23:17

In that case, there should be a traceback, showing which lines of code caused the error. Could you share that with us?

user-430fc1 14 June, 2022, 16:35:39

Hi Manuel, I only just noticed this... did you figure it out? I need to update those tools at some point, as they are overly specific to the Pupil Capture export format that I was using at the time. Let me know if I can help.

user-7b683e 28 February, 2022, 05:55:30

Hey @papr ,

I would like to know the model of the IR eye camera, if possible. In the hardware docs I found only brief details about the world cam. It would be nice to know the technical specifications of the eye cam for a manuscript I am writing.

Thanks a lot for your help so far.

papr 28 February, 2022, 09:20:48

Please contact info@pupil-labs.com in this regard

user-4bc389 28 February, 2022, 07:48:23

Hi, is there an electronic version of the product certificate for Pupil Core? Thank you

user-4f103b 28 February, 2022, 09:22:16

Hi, can we use Pupil Core with Unity? I can see a Unity project for HMDs. I am wondering whether it will work with the Pupil Core eye tracker.

user-9429ba 28 February, 2022, 10:21:27

Hi @user-4f103b We maintain a project called HMD eyes that contains the building blocks to integrate Pupil VR with Unity3d. You can use this in combination with Pupil Core open-source software. Check out the repository here: https://github.com/pupil-labs/hmd-eyes#hmd-eyes

papr 28 February, 2022, 10:22:56

@user-4f103b and to follow up: Yes, it works with Pupil Core. But you will need to use one of the built-in calibrations of Pupil Capture instead of the HMD calibration provided by hmd-eyes.

user-4f103b 28 February, 2022, 14:05:15

This is wonderful, thank you. It will open many opportunities to develop some great applications.

papr 28 February, 2022, 14:10:53

Please keep in mind that you will need to translate the gaze coordinates between the physical scene camera and Unity coordinates. You can use Capture's built-in surface tracking for that: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking In that case, you will need to extend hmd-eyes to receive and process surface-mapped gaze (not built in); see the sketch below.
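
As a starting point for that extension, here is a minimal Python sketch of receiving surface-mapped gaze over the network, assuming Pupil Remote at its default localhost:50020 and a surface named "screen" defined in Capture (the name is illustrative); porting this subscription to C# inside hmd-eyes is the part that needs to be built:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the subscription port.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Subscribe to the surface's topic; "screen" is an illustrative name.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.screen")

while True:
    topic, payload = sub.recv_multipart()
    surface = msgpack.loads(payload, raw=False)
    for gaze in surface["gaze_on_surfaces"]:
        if gaze["on_surf"]:
            print(gaze["norm_pos"])  # (0, 0) = bottom left of the surface
```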

user-4f103b 28 February, 2022, 14:19:22

Ok, thank you. Great help.

user-ab5f54 28 February, 2022, 15:31:35

Hello, we're having an issue with Pupil Core where pupil detection doesn't work too well for people with long eyelashes. A quick search in this channel reveals that particular care needs to be taken when adjusting camera positions in order to avoid long eyelashes covering the pupil.

Interestingly, we're also observing that these cases coincide with autoexposure yielding severely overexposed eye videos. Is it possible that this is linked, i.e. is it recommended to set exposure manually when dealing with long eyelashes?

nmt 28 February, 2022, 15:46:59

Hi @user-ab5f54 πŸ‘‹. For people with long/straight eyelashes, I have found that positioning the cameras more underneath the eyes, so they effectively look up into the eyes, can help. With the appropriate adjustments it's usually possible to achieve good pupil detection. We also recommend avoiding the use of heavy makeup/mascara. Camera exposure and eyelashes aren't connected. If you think the images are overexposed, try toggling the autoexposure off and on again, or set it manually.

user-ab5f54 28 February, 2022, 16:00:07

Thanks for the tips on camera positioning! Good to know the apparent connection is likely to be a random coincidence.

user-ced35b 28 February, 2022, 18:24:41

Hello, what is the required macOS version for the Pupil Core software? Right now I am working with Mac OS X 10.11.6. If this is too old of an OS, is there still an option to use the Pupil Core apps with older systems?

papr 28 February, 2022, 20:09:54

Hi :) For the current release, this macOS version is too old. But if you go back in our release history, you might find a version that works on it.

user-ced35b 28 February, 2022, 20:34:34

Thank you!

End of February archive