Hi, is it possible to subscribe to the normalized fixation points on a surface directly using a ZMQ filter such as "surfaces.surface_one..."? If not, are there any reference implementations of this filtering in C++? Thanks
Hi, you will need to subscribe to surfaces.<surface name>. The messages will include the surface-mapped fixations if the real-time fixation detector is running. The field is named fixations_on_surfaces instead of gaze_on_surfaces.
So, if I want to subscribe to fixations on a surface named a, it would be surfaces.a.fixations_on_surfaces?
No, the fixation data is not published in a message of its own. You need to subscribe to surfaces.a and then extract the fixations from the message (see the fixations_on_surfaces field).
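The same filtering works from any language with ZMQ and msgpack bindings, so it ports to C++ with libzmq plus a msgpack library. A minimal Python sketch of the flow described above (the surface name a, localhost, and the default Pupil Remote port 50020 are assumptions; adjust to your setup):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")  # default Pupil Remote port

# Ask Pupil Remote for the SUB port of the IPC backbone
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://localhost:{sub_port}")
subscriber.subscribe("surfaces.a")  # filter by surface name

while True:
    topic, payload = subscriber.recv_multipart()
    message = msgpack.loads(payload)
    # fixations_on_surfaces is a list within each surface message
    for fixation in message.get("fixations_on_surfaces", []):
        print(fixation["norm_pos"], fixation["confidence"])
```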
Hi, is the Pupil Core MRI-compatible?
Hi @user-8697b3 👋. Would you be able to elaborate a bit on your proposed use of eye tracking and MRI compatibility?
Hi! I’m a beginner to this and haven’t messed with any code. I’m just trying to use the downloadable Pupil Capture software for macOS and I’m having trouble: when I switch to local USB, it tells me that the cameras are in use or blocked. How do I fix this?
Hello, how can I connect and use my eye tracker with a smartphone?
Pupil Core requires a tablet or computer running Pupil Capture. 🙂
Hi, I made a script to extract gaze and pupil data from Unity. However, for some reason it only works with the old Pupil Capture software (1.0); with Pupil Capture 3.5.1 it keeps giving me the error message "notification without timestamp will not be saved". Does anyone have any idea how I can fix this?
Hi 🙂 I am not sure if the two things are related. Could you explain what your code does in more detail?
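In case the error refers to notifications that your script sends over the Network API: the message suggests Capture expects a timestamp field in notifications that should be saved. Here is a minimal sketch of attaching one, using Pupil Remote's t command to query the current Pupil time (the notification subject below is a made-up example):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")

# Query Pupil Capture's current clock ('t' returns the Pupil time as a string)
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Hypothetical notification; the 'timestamp' key is the important part
notification = {"subject": "my_custom_event", "timestamp": pupil_time}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
print(pupil_remote.recv_string())  # Pupil Remote confirms receipt
```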
We have a loose wire (the red one) on the eye-tracking camera of our Pupil Core. How do we go about getting it fixed?
Hi! Please contact info@pupil-labs.com in this regard
Can anyone tell me what is meant by Outlier Threshold in the validation section of pupil player and why its default is 5 degrees?
Hi 👋 For post-hoc calibration, Pupil Player removes samples with an angular error greater than the outlier threshold from its calculation. The default threshold is 5.0 degrees, but this can be adjusted. If too many samples are disregarded from the result, this could indicate a problem with the camera setup. You can check out this explainer video for more information: https://www.youtube.com/watch?v=_Jnxi1OMMTc&feature=emb_rel_end
Hi team, I'm working with Pupil. I tried a few lines of code to extract the position of the eyes and the gaze direction. I received a 3D position (x, y, z). What I couldn’t find is the position of the origin. Can anyone help me with that? Thanks in advance 🙂
Hi! What file are you using as input?
Hi, My question is probably answered somewhere but I can't find it. Does pupil labs use a mix of bright and dark pupil methods?
No, Pupil Core only looks for the dark pupil. Reflections are not used.
Hi all, could anyone help me with launching the Python source code of the Pupil eye tracking platform? I followed the two instructions: 1) "python -m pip install --upgrade pip wheel", 2) "pip install -r requirements.txt". During the installation I got an error related to setuptools>61.0, and when I launched the application with "python ./main" I got an error calling git. Thanks in advance
Hi! Please note that you need to clone the source code. It is not sufficient to download the zipped archive.
Which operating system and Python version are you using?
Hi, I am learning to use Pupil Core for the first time and have a question about pupil detection and calibration. I followed the Pupil Core Quick Start guide, but during pupil detection the circles around my eyes are blue, not green. Any suggestions on how to resolve this? Also, after calibration, I had a thin green rectangle with orange figures in each corner and a red circle in the middle (refer to the image below). Does anyone know what this is?
Hi @user-969151 👋. No worries about the blue circle (we updated the colour scheme in our software) – check out the legend in the pye3d detector settings by clicking on the 3d icon in one of the eye windows. For a description of what the green and orange visualisations represent, open the 'Calibration' menu by clicking on the little target icon on the right of the Capture window. The red circle is your gaze point 🙂
Hi ✌️ I am trying to calculate head position angles via the Head Pose Tracker plugin while looking at a 2D screen. I put 3 AprilTags on my screen, but the calculation of the 3D model uses just one of them. I think this is because the other two tags are not visible all the time, but I don't see why this would be a problem. Is there a way to manually select the markers, or is this not a problem since I am just looking at a 2D window anyway? Thanks in advance 😃
Hi @user-2ab654! It looks like your markers are being detected, as shown by the green semi-opaque overlay. What you'll need to do is build up the 3D model by recording the markers from more angles and perspectives. Do that enough and the markers will appear red. Note that you can then use that model for other recordings (as long as the same markers are present of course). Relevant docs for reference: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Hi all, I want to add Pupil Core to an existing Unity application. So I added a button in the Unity application to launch Pupil Core (DID IT). Now I want Pupil Core to run in the background with no graphical interface. In other words, is there a clean way to remove the UI from Pupil Core? The goal of putting Pupil Core in the background is to allow me to collect data. Thanks in advance 🙂
There is a --hide-ui flag that you can pass when you call the exe. This will create a hidden window.
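For example, launched from Python (the executable path below is an assumption; point it at your actual install location):
```python
import subprocess

# Hypothetical install path -- adjust to your system
capture_exe = r"C:\Program Files\Pupil Capture\pupil_capture.exe"

# --hide-ui starts Pupil Capture with a hidden window; it keeps running
# in the background and still serves the Network API
subprocess.Popen([capture_exe, "--hide-ui"])
```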
Hi, how are you? I came to Germany from Mongolia and wrote an e-mail to get acquainted with your company and products, but I can't get in touch. What should I do?
Hi 👋 - I apologize that you haven't received a response yet. Please send us another email to info@pupil-labs.com and reference your discord user name. We are also happy to continue discussion here if relevant to the channel 😸
Hi all, I encountered difficulties running Pupil Core from the source code. 1) When I export the data, I do not find any data in gaze_positions, but the rest of the data is OK (pupil_positions, world_timestamps, ...). 2) I do not receive the stream data from subscribe('gaze'). These two problems do not exist when I run the app from the installed program. Would anyone have any idea? Thank you in advance 🙂
Hi @user-80123a! The behaviour you describe is usually indicative of no calibration. Did you calibrate the eye tracker when running from source?
I forgot: I cannot calibrate because the eye tracker I am using right now does not have a front camera.
To get gaze data, it is necessary to calibrate. Check out the docs for future reference: https://docs.pupil-labs.com/core/#_4-calibration Indeed, a scene camera is also necessary 🙂
But I will use a new eye tracker soon, and I hope my code will be ready when the new eye tracker arrives
Hello, best productive days to everybody 🙂
I'm going through the documentation for a recent query in our lab regarding the fixation export file, fixations.csv.
Does the frame index of the video begin with 0 or 1? That is, if the first fixation is, say, at the 4th frame, is it then the 5th or the 4th frame of the video?
We're developing a frame-time-critical approach in our present work, so a single frame is very important for us. Hence the basic question, but it is not written explicitly in the documentation.
It starts at zero 🙂 You can verify it in Pupil Player: Player displays the current world frame index in the timeline, next to the relative world video time, as well as information about the current fixation (incl. the fixation id) in the fixation detector menu.
May I suggest an alternative approach that may resolve your experienced variance?
Instead of comparing frames, use the fixation start timestamp. Fixations are based on gaze data, which has a higher temporal resolution than the scene camera. Assuming an event at scene video frame index J and time T, do not calculate the number of frames until the next fixation (fixation->start_frame_index - J) but the duration in seconds until that fixation (fixation->start_timestamp - T).
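A sketch of that arithmetic against the Pupil Player export (column names follow the fixations.csv export; the event time T is a placeholder):
```python
import csv

T = 1234.567  # hypothetical event time, in Pupil time (seconds)

with open("fixations.csv", newline="") as f:
    fixations = list(csv.DictReader(f))

# First fixation starting at or after the event
# (raises ValueError if no fixation follows the event)
next_fix = min(
    (row for row in fixations if float(row["start_timestamp"]) >= T),
    key=lambda row: float(row["start_timestamp"]),
)
delta = float(next_fix["start_timestamp"]) - T
print(f"Fixation {next_fix['id']} starts {delta:.3f} s after the event")
```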
Great, thank you very much for your swift support as always 🙂
The variance in the frame start and end points in the fixations output file has somehow become inconsistent in our current data set of about 40 participants. I'll most likely follow up soon with more detailed questions, which I hope will benefit everybody.
Best wishes.
What do you refer to by "frame beginning"?
Hello
Can you please provide me with step-by-step instructions for exporting pupil size data from my recording in an Excel-compatible format?
Hi! To export pupil size data: load your recording in Pupil Player, make sure you have the Raw Data Exporter plugin enabled in the settings menu, and click the Export button (down arrow) on the far left-hand side of the Player window. This will export .csv files that you can open in Excel. Full breakdown of the pupil_positions.csv export here: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
I used the core device to do the recording
Hello! Where can I find the audio setting in Pupil Capture? I rendered the videos, and they have no audio input or audio.timestamp files.
Hi, we removed audio-recording capabilities from Capture about 3-4 years ago. The feature did not work stably and reliably enough. I would recommend recording the audio with an external audio recording app.
Our study aims to observe children’s gaze in response to a partner’s verbal labels during interaction, so we need the full transcript to identify the key timings (they are not fixed as in other experimental paradigms). Do you know of any audio program that can receive external triggers?
I think the easiest option (if your experimental design allows it) would be to record a clearly visible and audible "clap", e.g. with something similar like a clapperboard. Then it is just a matter of finding the clap time within the audio recording and subtracting it from the transcript timestamps. This shifts the relative time starting point from file-beginning to clap event.
Next, find the video frame during which the clap happens and extract its absolute timestamp. Add that absolute time to the relative transcript times, shifting the transcript from clap-relative time to Pupil recording time.
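The arithmetic, spelled out (all values below are placeholders):
```python
# Relative time of a transcript label within the audio file (seconds)
transcript_time = 45.0
# Clap position within the audio file (seconds)
clap_time_in_audio = 12.34
# Absolute Pupil timestamp of the scene video frame showing the clap
clap_time_in_recording = 5678.90

# Shift the transcript so the clap is the common zero point,
# then map it into Pupil recording time
pupil_time = (transcript_time - clap_time_in_audio) + clap_time_in_recording
```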
In the pupil positions export, do the theta and phi outputs use the physics or the mathematics convention? We were hoping to use the sphere radius output in combination with these two to calculate degrees of eye movement.
The sphere radius is a constant value. Phi and theta do not follow either convention; they are rotated by a specific amount. Let me look up the link.
https://docs.pupil-labs.com/core/terminology/#coordinate-system see the eye model section
Hi all, is there a metric to identify whether the calibration went well or not? Similar to the confidence metric in the CSV data after exporting with Pupil Player, but one that can be used before starting to record. Thanks in advance.
Hi! The Accuracy Visualizer calculates the accuracy in angular error. That should be suitable.
Hi again, another question: can I subscribe to a topic other than "gaze"? For example, if I want different information, like pupil coordinates or angular error precision... Is there a list of topics to subscribe to? Thanks in advance
Yes, subscribing to other topics is possible. If you subscribe to the empty string, you will receive everything. Use it to find the data that you are interested in and then use a more specific subscription string later
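A quick sketch of that exploration step (localhost and the default Pupil Remote port 50020 are assumed):
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://localhost:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://localhost:{sub_port}")
subscriber.subscribe("")  # empty string matches every topic

# Print a handful of topics and their payload fields to see what is available
for _ in range(50):
    parts = subscriber.recv_multipart()
    topic = parts[0].decode()
    payload = msgpack.loads(parts[1])
    print(topic, sorted(payload.keys()))
```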
Sorry, I am new to this project. Does this software require an infrared camera, or would a regular desktop camera be sufficient to use the eye tracking technology?
Hi! Welcome to the community. Yes, an IR camera is required. It is also required that it is head-mounted. A camera on the display/table in front of the subject (remote eye tracking) will not work with our software.
also hi everyone!
Ok thank you!
Hello, and congratulations on the very nice work! I am interested in the Pupil Core system. I was wondering if it can be used with ARM-based/embedded systems (e.g. a ZedBoard) instead of a normal desktop/laptop?
Hi, the pre-built software is only compiled for x86_64. You would need to install the dependencies and run from source. Note that the board is likely not powerful enough to run the pupil detection in real time.
I see, thanks for the answer. Is all the source code available? Speaking of power, does the system require a GPU?
https://github.com/pupil-labs/pupil - no GPU is needed for the calculations. I am referring to CPU power in this case.
Note that the software needs to be able to open a window. So running it on an embedded system without a screen might not work.
Raspberry Pi users have used this tool https://github.com/Lifestohack/pupil-video-backend/ in the past to stream the video from the RPi to a laptop running the software
Ok, thanks a lot for the answers and the links. I'll check them to get a better idea. It's a very interesting system!
Would you mind sharing a bit more about what you are trying to accomplish / your use case?
Broadly speaking, I would like to understand how easy it would be to use it in an embedded/ARM-based system with a camera and a projection screen.
What type of camera are we talking about here? Something like a webcam?
more like a wearable device - I am afraid I cannot go into more details for the moment
ok, just wanted to make sure that you were not attempting to perform remote eye tracking, which is not possible with our software. Good luck with your project!
Hi there, from Pupil Service I subscribe to the frame topic and try to get the image so I can save it myself:
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
ip = 'localhost'
port2 = 50020
pupil_remote.connect(f'tcp://{ip}:{port2}')

# Request 'SUB_PORT' for reading data
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()
# Assumes sub_port to be set to the current subscription port

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f'tcp://{ip}:{sub_port}')
subscriber.subscribe('frame')

while True:
    array = subscriber.recv_multipart()
    item = msgpack.loads(array[2])
    print(item)
    break
```
and I get an array which contains these 3 elements: [b'frame.eye.0', [email removed] b'\xff\xd8\xff\xc0\x00\x11\x.........................']
How can I translate this into an image or numpy array that I can save as an mp4 video? I am receiving this error when I try to load the 3rd element with msgpack:
```
item = msgpack.loads(array[2])
  File "msgpack/_unpacker.pyx", line 202, in msgpack._cmsgpack.unpackb
msgpack.exceptions.ExtraData: unpack(b) received extra data.
```
Hi, check out this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
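As the linked helper shows, for frame topics the multipart message is [topic, msgpack-encoded metadata, raw image bytes], so only the second part is msgpack data. Since your third element starts with the JPEG magic bytes (\xff\xd8), a sketch of decoding it with OpenCV (assuming the frame publisher is set to jpeg format; subscriber is the socket from your snippet above):
```python
import msgpack
import numpy as np
import cv2

# Message parts for 'frame.*' topics: [topic, msgpack metadata, raw image bytes]
topic, payload, image_bytes = subscriber.recv_multipart()
meta = msgpack.loads(payload)  # width, height, format, timestamp, ...

# Decode the JPEG buffer into a BGR numpy array
img = cv2.imdecode(np.frombuffer(image_bytes, dtype=np.uint8), cv2.IMREAD_COLOR)
cv2.imwrite("frame.png", img)  # or feed frames to a cv2.VideoWriter for mp4
```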
Hi all, is it possible to use the Network API from a different programming language, C# for example?
Yes, that is possible. https://github.com/pupil-labs/hmd-eyes/ has a C# implementation of the API.
Hi all, thanks to @papr I can now get images from the Network API. But I have another problem: I no longer receive gaze information. I was previously receiving gaze data from the network using subscriber.subscribe('gaze'), but now I only receive the following topics: pupil.0.2d, pupil.0.3d, frame.world, frame.eye.0. How can I get the gaze topic back? Do I need to change anything in Pupil Capture?
Hi, you need to calibrate to receive gaze data.
Hello, what kind of material do you use to print the camera extender?
Hi Pupil Labs community! I'm thinking to use a chin rest along with Pupil Core. Are there any commercially available chin rests that you folks recommend?
Hi, I’m getting a lot of messages like this from Pupil Capture:
2022-09-19 16:47:33,981 - eye0 - [DEBUG] video_capture.uvc_backend: Received non-monotonic timestamps from UVC! Dropping frame. Last: 19869.178274, current: 19865.838207
How can this be and is there anything that can be done about it? It seems to disrupt pupil recognition quite severely. (I’m running from source on Mac OS Monterey in native (arm64) mode.)
Hey, this usually happens after a reconnect. You might have a loose connection
Thanks for your quick reply. But there is no external connection, everything is running locally on the same machine. How could a local connection become loose?
I am talking about the physical connection of the cameras. Sometimes the connectors become loose. I recommend checking the headset
Ah, I see. Thanks, I'll check.
Hello! I am following up on my earlier questions regarding the headset extender. I found the geometry file, but I am not sure of the appropriate material for printing. PLA, ABS, nylon? Do you have any advice?
I recently printed it in PLA. Works nicely!
great to know! thanks for sharing! 😀
Hi! Wondering what type of machine/laptop everyone is using for the Pupil Core software, and what works best to avoid the overloading/crashing I’ve been noticing on my Mac laptop.
The M1 Macs work best in terms of performance. There is a known issue on Unix systems that may cause a crash if the hardware disconnects unexpectedly. If you are encountering the crash often, you might have a loose connection in your hardware.
I am working on debugging the mentioned software issue as we speak.
Hi guys ✌️
I have recordings that are about 25 minutes long and try to run marker detection to extract the head pose with the plugin. The problem is that Pupil Player keeps crashing. Does anyone know if there is a way to resolve it? The problem shouldn't be CPU, GPU, or RAM, as usage stays between 5 and 10% on all of these.
Have a great day!
Hey! Could you share the player.log file after having attempted the marker detection and reproducing the crash? You can find it in the pupil_player_settings folder.
Hello, a wire to one of our Pupil cams broke, and we would like to fix it. I figured out that I have to get it crimped. Do you have the serial numbers of the connector parts, or do you even sell replacements? Thank you.
Please contact info@pupil-labs.com in this regard
thanks
Copying this to the correct thread: https://discord.com/channels/285728493612957698/633564003846717444/1022212302986031218
Hi, this sounds like a hardware issue with the headset. Can you make sure the small connector at the right eye camera is correctly plugged in?
Is the stated discrepancy in accuracy/precision between the VR/AR add-ons and Core true in real-world scenarios? If so, is there a technical reason for this?
I noticed that the pupil detection parameters in the eye windows (such as "Pupil intensity range") are not persistent across sessions, unlike the plugin parameters for the video source, for example. Is there a way to change that?
I am able to reproduce the issue. Let me check as to why that is.
Hi, good catch! The issue will be fixed in the next Pupil Core release. Meanwhile, you can use this plugin to fix the issue https://gist.github.com/papr/9b3ba71b227bc6f092088077b334cf9c
Hello everyone, when I use Pupil Player, I can see a video of my gaze on the screen (in 2D). But when I export and open the gaze_positions.csv file, I see a 3D gaze position (X, Y, Z). My question is: is the gaze position shown in the Pupil Player video simply the (Z and X) or (Z and Y) coordinates of the gaze position? Or is there a geometric transformation to translate the 3D coordinates into 2D coordinates? My goal is only to get the 2D coordinates of the gaze position. Thanks in advance 🙂
There is a geometric transformation involved that corrects for the lens distortion https://docs.pupil-labs.com/core/terminology/#coordinate-system
The norm_pos_* fields are what you are looking for.
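A sketch of using them (the 1280x720 scene resolution is an assumption; check your world video):
```python
# norm_pos_* are normalized to [0, 1] with the origin at the bottom left;
# image coordinates have their origin at the top left, so flip y.
width, height = 1280, 720  # assumed scene video resolution

def norm_to_pixels(norm_pos_x, norm_pos_y):
    return norm_pos_x * width, (1.0 - norm_pos_y) * height

print(norm_to_pixels(0.5, 0.5))  # -> (640.0, 360.0), the image center
```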
Hello, new user here. I wanted to know if there is an existing script to stream live data into MATLAB. Thanks.
Hi, welcome! Check out https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
It happens irregularly, but on average every 10 minutes. Sometimes it recovers after a while, but often I have to restart.
One dropped frame every 10 minutes, do I understand this correctly?
No, many dropped frames. And when the dropping of frames stops, the cameras are (and stay) out of sync.
Out of sync, as in: if you blink with both eyes at the same time, the blink appears in one eye window first and then in the other?
That I couldn’t say (hard to see what’s happening during a blink...). But I keep getting "Resetting history" messages from the blink plugin, and the red circle in the world window splits into two independent ones, presumably one for each eye.
Just realised that I missed an important detail in your initial message: the timestamp would be zero on reconnect, but yours is just inconsistent. I am fairly sure that something is going wrong with the handling of the camera's hardware timestamps, but I can't tell what it is right now. This will require more time to investigate.
As a workaround, you can switch to software timestamps. These are a bit less precise but more consistent over time.
```diff
diff --git a/pupil_src/shared_modules/video_capture/uvc_backend.py b/pupil_src/shared_modules/video_capture/uvc_backend.py
index f376eadf5..945b168cf 100644
--- a/pupil_src/shared_modules/video_capture/uvc_backend.py
+++ b/pupil_src/shared_modules/video_capture/uvc_backend.py
@@ -272,7 +272,7 @@ class UVC_Source(Base_Source):
     def configure_capture(self, frame_size, frame_rate, uvc_controls):
         # Set camera defaults. Override with previous settings afterwards
         if "Pupil Cam" in self.uvc_capture.name:
-            if platform.system() == "Windows":
+            if platform.system() in ("Windows", "Darwin"):
                 # NOTE: Hardware timestamps seem to be broken on windows. Needs further
                 # investigation! Disabling for now.
                 # TODO: Find accurate offsets for different resolutions!
```
Thanks a lot, I’ll give it a try!
A first test indicates that the problem may be fixed completely by this workaround!
So it seems all new MacBook Pros have the M2 chip. My question is whether that may cause any trouble. The M1 is still in the MacBook Air, but that has an 8-core CPU and is comparable to an Intel i5 (maybe i7), which Pupil Labs lists as the minimum requirement. Does anyone have any experience with M2 chips? And if not, what concrete non-Apple laptops are people using without hiccups?
Pupil Core performs much better on M1 than any Intel CPU I know 🙂 M2 shouldn't be any issue either.
25+ minute recordings might be likely for the experiments we plan to do. Do you have a specific recommendation of a laptop for this?
Usually, we recommend splitting your recordings into blocks of no longer than 20-30 minutes, each with their own calibration. This has mostly two reasons: over time, there is usually headset slippage, reducing the gaze estimation accuracy; and depending on your planned analysis, Pupil Player might need more RAM than what your system has to offer. The longer the recordings, the more RAM your system should have.
Note that I cannot give any specific recommendations as to the amount of RAM or the system model. I can speak from personal experience with the M1 MBA; it works fine for my personal use cases (development of the Pupil Core software).
found device
Hello! I am trying to tag different windows of a software application for a study. My test persons will sit between 80 and 100 cm away from the screen. Unfortunately, the tags (tag36h11) seem to need to be quite large, 260 x 260 pixels, for the surface to be robustly detected. Are there ways to reduce the size of the tags and still have good tracking of the areas of interest?
Note that the markers require sufficient white border to be recognized. Is that included in the 260x260 pixel area?
Also, would printing the markers and attaching them to the outside of the screen be an alternative for you?
Hi, when I click freeze model in Pupil Capture the red ellipse suddenly changes shape, becomes more round, and appears not to fit the pupil as well as when the model was not frozen. Is this expected behavior?
Hello everyone, is it possible to make Pupil Player work without drag and drop, using parameters passed from Python for example? My goal is to develop an application that uses Pupil Core and Pupil Player. Inside the application, I can:
(1) launch the pupil core,
(2) launch some experiments,
(3) start recording (using network API),
(4) stop recording (using network API),
(5) export the data.
I am stuck at step (5), because I have to quit the application and launch Pupil Player to export the data.
What kind of data are you looking to export? Are you performing any kind of post-processing? e.g. blinks or fixations?
Hello again, is it normal to have coordinate values outside the range [0, 0] to [1, 1] for x_norm and y_norm in the file gaze_positions_on_surface_<surface_name>.csv?
Yes, that happens when the gaze was outside of the surface. Note that it is recommended to remove low-confidence gaze values, as they are likely inaccurate.
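A sketch of that filtering with pandas (the file name is an example; the 0.6 confidence cut-off is a common choice, not a hard rule):
```python
import pandas as pd

df = pd.read_csv("gaze_positions_on_surface_a.csv")  # example surface name
df = df[df["confidence"] >= 0.6]  # drop low-confidence gaze samples
df = df[df["on_surf"]]            # keep only samples that landed on the surface
```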
Hello! I'm using Pupil Core and the pupil detector keeps jumping across the eye camera FOV instead of staying on the user's pupil. I tried changing the angle/position of the camera, the framerate, and the brightness of the room; I even rebooted the PC and turned off one of the eye cameras, but none of this improved the situation. Is this a common issue with Pupil Core?
Could you create a Pupil Capture recording of you rolling the eyes (capturing different eye positions) and share it with data@pupil-labs.com for concrete feedback?
How do I solve this problem? (Win10)
You could try this process:
Videos do not appear in Pupil Capture: this could mean that drivers were not automatically installed when you ran Pupil Capture as administrator. First, try to run Pupil Capture as administrator (right click pupil_capture.exe > Run as administrator). If that does not work, follow the troubleshooting steps below:
1. In Device Manager (System > Device Manager), select View > Show Hidden Devices.
2. Expand the libUSBK Usb Devices, Cameras, and Imaging Devices categories.
3. For each Pupil Cam device (even hidden devices), click Uninstall, check the box agreeing to Delete the driver software for this device, and press OK.
4. Unplug the Pupil headset (if plugged in) and plug it back in.
5. Right click on pupil_capture.exe > Run as administrator. This should install the drivers automatically.
found here: https://docs.pupil-labs.com/core/software/pupil-capture/
I was having issues getting my cameras detected, and this fixed it for me.
I have a USB cam.
@papr
Hello everyone, the Pupil Capture program does not display video from the glasses. Laptop model: MacBook Pro (13-inch, 2019, four Thunderbolt 3 ports), 2.4 GHz quad-core Intel Core i5 processor. Glasses model: Pupil Core. Please tell me what the problem might be; thank you in advance.
Hi @user-01c0ae ! Which version of MacOS are you using? If you are using Monterey (12) or above, please check out this note https://github.com/pupil-labs/pupil/issues/2240
Hi, I'm working with an older Pupil Pro Binocular rev 037 and Pupil Capture v3.5.1 but am having some issues with the setup: I can't detect both pupil cameras simultaneously, and there are occasional program crashes when selecting camera sources, as well as blue-screen crashes. Is there some older software more suited to this hardware that I can use on a Win10 machine?
Hi Pupil Labs, Just following up on this request. Is there any support for older Pupil Pro Binocular Rev 037 models? Thanks.
Hi all! When setting up the camera, an error occurs and the program crashes if the camera gets too close to the field of dots.
Hi, this is a known issue https://github.com/pupil-labs/pupil/pull/2248 Use the linked user plugin to avoid it.
Note: The camera calibration is usually not necessary and is not to be confused with the gaze calibration, which is necessary.
Colleagues, tell me: what is the difference in calibration with the "show undistorted image" flag on and off? When calibrating, you can see that the calibration area changes.
How does enabling the flag affect the calibration?
Pupil Core estimates gaze in two coordinate systems: Distorted 2d image and undistorted 3d camera coordinate systems. https://docs.pupil-labs.com/core/terminology/#coordinate-system Instead of undistorting the image, which requires a lot of CPU resources, Pupil Capture uses a simplified mathematical representation of the distortion (camera intrinsics) to transform selected points between the two coordinate systems.
The "show undistorted image" option uses the current intrinsics to create preview of how well the intrinsics represent the actual distortion. If the intrinsics fit well, any straight edge in real life will also appear straight in the undistorted image. Inaccurate intrinsics will have a negative affect on the gaze estimation accuracy.
Note, that this is just a preview. The undistorted image is not being used otherwise.
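For illustration, the kind of point transformation meant here, sketched with OpenCV (the camera matrix and distortion coefficients below are placeholders, not real Pupil intrinsics):
```python
import cv2
import numpy as np

# Placeholder intrinsics: camera matrix K and distortion coefficients D
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

# Map selected distorted pixel coordinates into the undistorted image,
# without paying the cost of undistorting the full frame
points = np.array([[[100.0, 200.0]]], dtype=np.float64)
undistorted = cv2.undistortPoints(points, K, D, P=K)
print(undistorted)
```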
How do I start GPU acceleration for pupil capture?
Pupil Capture does not support GPU acceleration. It purely relies on the CPU.
I still can't use this cam.
On Win10.
I can't get an image from the cam.
please help me @papr
I already did what you asked, but it didn't work
If I open the app that came with Windows 10, I can get videos.
That means that the drivers are not correctly installed. Try installing them manually using steps 1-7 from these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
So, I think I need your help.
There is some trouble with Pupil Capture v3.5.1.
Thanks for your help. I will now follow the same steps.
Could you please show a picture of a successful driver replacement?
I don't use Windows at the moment. It might be easiest to jump into the #pupil-voice channel and do a quick screen share to get this issue fixed.
@papr
Can you give me a doc or a video about this operation?
No, the linked documentation is the only one we have.
I don't know how to use this code.
Step 8 does not need to be performed
ok
Hi, I would like to know if any of you can help me with information on integrating PsychoPy and Pupil experiments with LSL. UnU
Hi @user-869b8d 👋. We have an official PsychoPy integration: https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html Have you seen that already?
not yet! Thank you ❤️
If I have any doubts, I'll let you know.
What is the consensus on using the Pupil Core software with third-party recording devices, like a web camera, instead of the eye tracker?
Hi @user-632640 👋. Do you mean in the context of remote eye tracking, i.e. having the cam mounted on or near a monitor, or head-mounted, i.e. close to the eye?