Hello! I found a small error in gaze calibration. Accuracy seems better at the center than at the edges. Is there a standard way to offset this error? Thanks!
Hi @user-45f4b0! This is already looking quite good. A linear offset wouldn't help in this case. Rather, it would be worth checking pupil detection confidence when the wearer is gazing in the top left. Are you still using a 31 point calibration?
Hello, I'm Priyanshu from the Decision Lab, IIT Kanpur, India. We have a Pupil Core device and have been facing power line noise issues from it affecting a nearby device (an EEG). We use a LiveAmp EEG simultaneously with Pupil Core to record, and the electrodes that lie close to the Pupil Core headset show tremendous noise. I therefore wanted to know the following:
a. What parts of the Pupil Core device generate, or are an active source of, this power line noise?
b. How can we shield this power line noise?
c. Is there a way to cover any active regions so that this noise does not show up in the EEG channels?
d. Are there other labs who have been using EEG with a Pupil device? If yes, can you please connect us to them?
Hi @user-9f4dff! Can you share a photo of your setup?
Hi @user-f43a29, a quick question: I am looking to study pupil dilation (size) changes over time.
@nmt Please could you help me with my question?
Hi @user-b31f13! For studying pupil size, we strongly recommend using diameter_3d. This is a measure of pupil size in millimetres, provided by our 3D eye model, as opposed to the apparent pupil size in pixels, as observed in the 2d eye image. The latter is affected by pupil foreshortening, meaning the camera perspective can significantly impact the measured pupil size. You can read more about this in the following section of the documentation: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hi @nmt Thanks for your response. If I am not sure that the model was fitted well throughout the study, should I then filter out data based on model confidence?
Do you have the recordings? You can play them back in Pupil Player and check the fit. You can also do post-hoc pupil detection and model freezing.
@nmt I would like to keep the process efficient and standardised as I am working with many participants. Is excluding frames based on model confidence a bad way to go?
Model confidence is a worthwhile measure. But best practice for pupillometry would be to ensure the model was frozen at the point when the model was well fitted for each participant. If this didn't happen at the time of recording, it can be done post-hoc in Pupil Player, assuming there was no headset slippage. In sum, the goal would be to ensure a good fit that's specific for each participant. This might ultimately require some manual inspection of the recordings.
@nmt When you say "if this didn't happen at the time of recording"... this happens automatically during recording, right?
The model will tend to fit over time, yes. But freezing the model is a manual step.
Ok. Thanks
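As a reference point, here is a minimal sketch of confidence-based filtering on an exported pupil_positions.csv. It assumes the standard Pupil Player export columns, and the 0.6 threshold is only an illustrative starting point, not an official recommendation:
```python
import pandas as pd

# Load a Pupil Player export (column names follow the standard export format)
df = pd.read_csv("exports/000/pupil_positions.csv")

# Keep only 3D-model rows; diameter_3d is reported by the pye3d detector
df_3d = df[df["method"].str.contains("pye3d", na=False)]

# Drop low-confidence samples; the model_confidence column can be
# screened in the same way
df_clean = df_3d[df_3d["confidence"] >= 0.6]

print(df_clean["diameter_3d"].describe())
```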
Hi, I am working with Pupil Core, sharing the setup with other coworkers. Since we work on different projects, there is the issue of recalibration after each person. Is there a possibility to save individual calibrations?
Hi @user-e0a306! Pupil Core does need to be calibrated each time it is put onto a new wearer. Depending on the requirements of your study, you might also want to think about re-calibrating during each block of testing. You can read more about that in our best practices: https://docs.pupil-labs.com/core/best-practices/#best-practices. As an aside, our newest eye tracker, Neon, is calibration free and avoids this need entirely.
Hello, I am new and I am using Pupil Capture. I see the plugins in the Network API, but when I look in the program files I do not see the plugin folders. Can you tell me where I can download the plugins, please? I need Network API and Remote Recorder. Thanks a lot!
Hello! I really need support. I already created a ticket.
Hi @user-999a6c ! Please check out the replies on the ticket.
Hi! I am having a problem with Pupil Core on Mac. I run the Capture program with the corresponding command: sudo /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture, but at some point the Capture application closes and I receive the following message in Terminal: Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off(). Python will now terminate.
Hi @user-6a6d64, could you provide a copy of the logs in /Users/<username>/pupil_capture_settings after the crash happens? Thanks!
Hi, how do I create a ticket to ask for help, please?
Hi @user-c39646 👋! You can do so on the troubleshooting channel, but kindly note that this is mostly meant for hardware issues.
You can simply ask any question you may have here.
Oh great. I am new and I am using Pupil Capture. I see the plugins "Network API" but they do not appear to work. Indeed, when I look into the program folder I do not see the plugins. Can you tell me where I can download the plugins, please? I need "Network API" and "Remote Recorder".
@user-c39646 These plugins come pre-installed with the software, so there's no need to install anything extra.
Just open the Plugins Manager from the sidebar and enable them (see screenshot).
You can find more details in the documentation.
Ok, so why do I not see them in the program folder?
They would not show in the program folder, because they are bundled together with the code. If you would like to check the code, it is open source and available in the repository.
May I ask how you are trying to connect? Is it from one computer to another computer?
Because they do not work... I am not able to connect to the IP address I see in the Network API.
local IP
It's via Python on the same computer. I want to send commands telling it when to start and stop calibration, along with other commands. Here is a screenshot of the error and the plugins installed.
I think there might be a small misconception: you don't need the Remote Recorder plugin for this. That plugin was mainly used with Pupil Mobile, which is now deprecated.
What you're actually looking for is this:
https://docs.pupil-labs.com/core/developer/network-api/#pupil-remote
Just make sure to update the IP address to match the one shown in the Network API plugin in Pupil Capture.
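For reference, a minimal Python sketch of the Pupil Remote approach, assuming Pupil Capture is running on the same machine with the default port shown in the Network API plugin:
```python
import zmq

# Connect to Pupil Remote (default port 50020, shown in the Network API plugin)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# 'C' starts the calibration choreography, 'c' stops it.
# Every request on a REQ socket must be followed by a recv().
pupil_remote.send_string("C")
print(pupil_remote.recv_string())

# ... calibration choreography runs ...

pupil_remote.send_string("c")
print(pupil_remote.recv_string())
```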
Ok, but I do need Remote Recorder if I want to connect and send commands from a second computer (which I will need to do afterwards), right? I do not see the error in my code compared to what you told me... I am trying to run calibration from my code but I do not manage to do it, and I am using the local IP address.
Hi, I am using Pupil Core to collect data and am preparing to draw a dynamic area of interest. Do you have any guidance on this? Thank you!
Hi @user-bd5142 👋!
Pupil Core doesn't support dynamic areas of interest out of the box.
However, if you're open to using fiducial markers (also known as AprilTags), you can take advantage of the Surface Tracker plugin in Pupil Player.
Alternatively, you might find this community project helpful; it explores a similar use case and could be a good starting point.
Sure, here is the screenshot; it cannot find Pupil Capture, but it is actually running.
As in, the routine is executed? May I ask, is Pupil Core actually connected? I'm mainly asking because the scene camera is not visible. Or is it a Core VR add-on? 🥽 core-xr
The cameras are installed in a VR headset, exactly. I restarted the computer and it is now working; however, it tells me that calibration needs a world capture video input. I do not need that. Can't I do calibration using just the eye0 and eye1 cameras?
Okay, since this is a 🥽 core-xr question, let's move the conversation to that channel.
ok
Hi again, how do I do a five-point calibration? Do you have any script in your library?
Hi @user-c39646 👋!
The 5-point calibration is the default method used for screen-based choreographies:
https://docs.pupil-labs.com/core/software/pupil-capture/#screen-marker-calibration-choreography
That said, based on our previous conversation, I'm a bit confused about your current approach, particularly around your renewed interest in performing a calibration and how you plan to perform it without a scene camera.
I cannot find it, only the one-point calibration.
Hi Miguel, you are right. I am actually testing several approaches, trying to understand how to work with and without calibration. I have to test the one you just suggested, but in the previous one, how does the calibration stop? And why do I have to click five times? Thank you very much for your help.
Hi @user-c39646 , I'm briefly stepping in for my colleague, @user-d407c1 .
I would recommend first working through the Getting Started guide.
To briefly answer your questions:
As @user-d407c1 mentioned, calibration is not necessary to measure pupil size and the standard Pupil Capture calibration routine will not be usable in your VR setup, without some modifications. For pupil size, you want a well-fit 3D eye model.
Hi, thank you very much. Is there any Python script in your library that I can modify to adjust it to my task? Also, is there a script in your library that simply starts and stops recording? Thank you very much in advance.
Hi @user-c39646 , there is not really a single Python script that can be modified. The calibration process has several interlocking mechanisms and multiple parts of the Pupil Capture code play a role. If you are looking for a method that saves the most time, then using hmd-eyes in Unity would be easiest, especially if these concepts are new. In other words, with hmd-eyes, the hard work of adjusting calibration for your task has already been done.
A script for starting/stopping a Pupil Core recording can be found here:
More details can be found in this part of the Documentation.
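For quick reference, the core of that helper boils down to two Pupil Remote commands; a minimal sketch, assuming Capture on localhost with the default port:
```python
import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# 'R my_session' starts a recording with a session name; 'r' stops it.
pupil_remote.send_string("R my_session")
print(pupil_remote.recv_string())

time.sleep(5)  # record for five seconds

pupil_remote.send_string("r")
print(pupil_remote.recv_string())
```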
Hey there. I was looking into the DIY headset setup. Is there any way to get access to the 3d printing files for the components so that I can modify them for my setup?
Hi @user-3f4a1b , we have open-sourced the CAD files for the camera mounts. They can be found here.
Gotcha, so the mounts are open source but the eyeglass frame isn't, correct?
@user-3f4a1b That is correct. The frame is not open-source. However, you can order just the frame here.
Will do! I am starting to design an experiment and wanted to have a few different hardware setup options to find the best fit. I found the Pupil system while doing the research.
Ok! If you'd also like to see how our eye trackers work in real time, then feel free to schedule a Demo and Q&A call.
Thank you
pupil-helpers/python/remote_annotations....
Hello, I was searching for a way to open Pupil Player without the interface (with Python, for example), as there is some lag when I load the file. Has someone done that before? Also, I wanted to know if it is possible to have triggers from another software appear in the annotations file? I am currently using LSL but struggling with that part. If I am in the wrong channel for this question, please tell me and I can ask it in the right one.
Hi @user-69c9af , you are in the correct channel.
So, using Pupil Player's routines to export data without the GUI is not supported. Are you potentially loading long recordings? And, what Operating System are you using?
And, yes, you can send Annotations from other software to Pupil Capture during a recording. If it has a Python interface, then it is easiest to follow our documentation on Pupil Core's Network API, but if not, then any ZMQ interface will do.
While our LSL plugin does not record Annotation data, you can send standard Annotations to Pupil Capture and then post-hoc synchronize them to the LSL timeline.
Or, are you rather referring to transforming the triggers in the data file of a different device to Pupil Core's Annotation format?
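As a rough sketch of that Annotation route, modelled on the remote_annotations helper (the label here is illustrative, and the Annotation plugin must be enabled in Capture for the annotation to be recorded):
```python
import time
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the PUB port and Capture's current clock
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to join (ZMQ slow-joiner)

pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Annotations are notifications with topic 'annotation'
annotation = {
    "topic": "annotation",
    "label": "my_event",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_multipart(
    [annotation["topic"].encode(), msgpack.dumps(annotation, use_bin_type=True)]
)
```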
Sorry, you are right; I used it a second time and everything worked fine. If this happens again, I will send you the log file. Thanks!
No problem. If you encounter the issue again, just send the logs here!
@user-f43a29 I had the problem again on my Mac; the main Pupil Player window closed and the message I got was:
world - [WARNING] uvc: Turbojpeg jpeg2yuv: b'Invalid JPEG file structure: two SOI markers'
eye0 - [WARNING] uvc: Turbojpeg jpeg2yuv: b'Invalid JPEG file structure: two SOI markers'
eye1 - [WARNING] video_capture.uvc_backend: Camera disconnected. Reconnecting...
Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off(). Python will now terminate.
Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off(). Python will now terminate.
world - [WARNING] uvc: Turbojpeg jpeg2yuv: b'Corrupt JPEG data: 658 extraneous bytes before marker 0xd7'
Hi, if anyone has already worked with AprilTag detection with Core, are there any code examples available?
Hi @user-e0a306 👋! Do you mean besides the Surface Tracker?
Hi, yeah, I have not seen that one exactly, but I would like to get acquainted with some diverse usage examples.
Hello! Thank you for the great work on Pupil Core. I'm currently conducting an experiment using Pupil Core that involves real-time gaze position tracking. Since the participant's head is fixed in place during the experiment, the world camera data introduces unnecessary noise. Is there a way to perform calibration and measurement without using the world camera?
Hi @user-c6248a , could you clarify the noise that you are seeing? Then, we can look into ways to resolve that. The world camera of Pupil Core is indeed necessary for Pupil Capture's calibration pipeline and also for Surface Tracking.
Hi, I am trying to use Pupil Core on MS Surface Pro 11, but it cannot see the ET or world-facing cameras. Do you have any suggestions to get this working?
Hi @user-22630f , we also received your email and responded there.
To clarify, Pupil Capture is not supported on ARM architectures, such as that used in the Microsoft Surface Pro 11.
If you intend to explicitly use Pupil Core with that device, then you could also try this third-party package to stream Pupil Core's camera feeds to a remote computer that is running the Pupil Capture software.
hi
I want to integrate this device with jspsych. Any idea how to do it? Fully confused.
Hi @user-cc5ee8 , what is the ultimate goal of your research?
I ask because communication with Pupil Core happens over ZMQ. With jspsych, being that it runs in a browser, you would typically use the jszmq package, but it requires using ZMQ over WebSockets, which is not explicitly supported by Pupil Core's Network API. It might work, but I cannot guarantee it. Otherwise, at the moment, I have not found an alternative solution for ZMQ in the browser/JavaScript. In any event, it would also require some coding effort to write a compatible jspsych plugin.
If you are not doing a real-time, gaze-contingent task, then you could probably just run a standard Pupil Capture recording with Surface Tracker. If you need to know when trials start and end, then the easiest solution might be to have jspsych send a notification over a WebSocket to a listening Python script, which then forwards an Annotation to Pupil Capture via our standard Network API commands.
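A rough sketch of that bridge idea is below. The WebSocket side uses the third-party websockets package and a hypothetical port 8765; only the Pupil Remote/Network API parts follow our documented commands:
```python
import asyncio
import time
import msgpack
import websockets  # third-party package; the port below is hypothetical
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("PUB_PORT")
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pupil_remote.recv_string()}")
time.sleep(1.0)  # allow the PUB socket to join (ZMQ slow-joiner)

def annotate(label: str) -> None:
    # Query Capture's clock so the annotation lands on Pupil time
    pupil_remote.send_string("t")
    note = {
        "topic": "annotation",
        "label": label,
        "timestamp": float(pupil_remote.recv_string()),
        "duration": 0.0,
    }
    pub.send_multipart([note["topic"].encode(), msgpack.dumps(note, use_bin_type=True)])

async def handler(websocket):
    # jspsych would send e.g. "trial_start" / "trial_end" over this socket
    async for message in websocket:
        annotate(message)

async def main():
    async with websockets.serve(handler, "127.0.0.1", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```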
Hi there, We're suddenly unable to calibrate our pupil core. The red dots flicker and jump around on the screen wildly. Any ideas what could have caused that?
Hi @user-51e172 👋! When you refer to the red dot, do you mean the 3D pupil detection overlay on the eye cameras, the gaze point, or something else?
And by unable to calibrate, do you mean the calibration routine fails?
To better assist you, can you share some video showcasing this behavior, or even better a short recording made in Pupil Capture? You can share it with data@pupil-labs.com
Hello all,
I am Oindrila from Georgia Tech. I am currently trying to use the Pupil Labs Core. I have an Arduino-based system (used for a motor learning task in humans). I want the Pupil Labs data to have timestamps of events (e.g., the person picked up the disc; I have access to this timestamp in my MATLAB, as I use it to control the Arduino system). What would be the best way to get these annotations into the Pupil Labs data?
Hi, I'm trying to open a recording directory in Pupil Player on macOS, but when I drag and drop the folder onto the Player, it either says Pupil Player cannot open files in the folder format or it just closes the app. I'm very new to this and I'm not sure what's wrong; is there a way to fix this?
Hi @user-0cb368 ! We have replied to you on the ticket.
Hi @user-e6fb99 👋!
Pupil Core's Network API uses the ZeroMQ protocol.
If you are able to use zmq and msgpack as described here, you can follow this example.
Alternatively, on newer versions of MATLAB you can use Python directly; see here. This would allow you to use pyzmq and follow this example.
In this, both of the example links take me to the MathWorks page. Do you have a GitHub example?
I will try this out. Would it work if the MATLAB/Python is on the same computer as the Pupil Labs software?
sure
Thanks a lot. I will let you know if it works and if I have further questions.
Using this, I get an error stating that zmq.core.ctx_new() is a script but is being called as a function. Should I make any changes?
I am running this in MATLAB 2017b.
I understand that we need to install zmq for this process to work. Can you help with how to do that? Thanks again.
Hi @user-e6fb99 ! Do you already have the matlab-zmq and matlab-msgpack packages installed and working?
I have downloaded the packages from GitHub and have them in my MATLAB path. When I open the configure.m from one of the folders, there are comment lines asking to install zmq. How can I install it? Please help.
Hi @user-6a6d64 ! Could you kindly create a ticket on the troubleshooting channel?
Sorry, I am not able to create the ticket. I get a message that I don't have the right permissions.
You would need to install zmq via one of the ways listed here: https://zeromq.org/download/
If that approach is not working, I suggest going through the Python interface in MATLAB.
I tried downloading from there. The issue is, I couldn't find the appropriate package for Windows.
Can you guide me to the appropriate Windows package?
You can find the binaries here; use the ZIP for Windows.
That said, as noted in the README, the code using matlab-zmq has only been tested on Ubuntu, and some users have reported issues on Windows. If you're on Windows, you might find it easier to use MATLAB's Python interface with pyzmq instead, as suggested earlier.
Okay sounds good. I will try the python based solution.
Sure, it seems I copy-pasted the same URL twice; here you have it: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
Thank you. I will let you know once I install the latest Matlab and try it out.
Hi there. Is world_index an available data element in the gaze data when using the Network API in Pupil Capture? I'm trying to associate outputs of the gaze tracking with the world video frames. Thanks!
Hi @user-a7bee8 , to clarify, you are trying to do the association in real-time? If so, the incoming video frame and gaze data are timestamped, so you can simply use all gaze data that are newer than the current world frame. You can use this tutorial as a reference.
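A minimal sketch of that timestamp-based matching, assuming the Frame Publisher plugin is enabled in Capture so that frame.world messages are published:
```python
import msgpack
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{pupil_remote.recv_string()}")

# 'frame.world' requires the Frame Publisher plugin to be running in Capture
sub.subscribe("gaze")
sub.subscribe("frame.world")

last_frame_ts = None
while True:
    # frame.world messages carry extra frames with raw image bytes;
    # the first two parts are always topic and msgpack payload
    topic, payload = sub.recv_multipart()[:2]
    msg = msgpack.loads(payload, raw=False)
    if topic.startswith(b"frame.world"):
        last_frame_ts = msg["timestamp"]
    elif topic.startswith(b"gaze") and last_frame_ts is not None:
        # Gaze newer than the current world frame belongs to that frame
        if msg["timestamp"] >= last_frame_ts:
            print(last_frame_ts, msg["norm_pos"])
```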