Hi there! I am currently working on a project with Pupil Core, using LSL to collect the data. However, I don't understand the output file. It does not have a label or anything. It is an XDF file that I open with MATLAB. Any insights on how to properly read XDF files generated through LSL?
LSL doesn't technically have an official file format, but LabRecorder saves XDF files. There are some official XDF tools available here: https://github.com/xdf-modules, which are all part of the LSL repo (https://github.com/sccn/labstreaminglayer)
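In case it helps, here's a rough sketch of what the loaded structure looks like using the Python reader from that repo (pyxdf); the MATLAB importer (load_xdf.m) returns the analogous cell array of structs with .info, .time_series and .time_stamps fields. The file path is a placeholder:
```python
import pyxdf  # pip install pyxdf (from the xdf-modules repo)

# load_xdf returns a list of streams plus a file header
streams, header = pyxdf.load_xdf("recording.xdf")  # placeholder path

for stream in streams:
    info = stream["info"]
    # every stream carries its own metadata: name, type, channel count, nominal rate
    print(info["name"][0], info["type"][0], info["channel_count"][0])
    samples = stream["time_series"]     # ndarray for numeric streams, list for marker/string streams
    timestamps = stream["time_stamps"]  # seconds on the LSL clock
    # channel labels, if the sender provided them, live under info["desc"]
```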
Hi! Can Pupil Core users store their recordings in Pupil Cloud (free or paid)?
Hi @user-5adff6 👋 Unfortunately, Pupil Cloud is exclusively compatible with Pupil Invisible/Neon recordings. Could you kindly provide more details about your interest in using Pupil Cloud for Pupil Core recordings? Are you looking for just a cloud-based storage, or are you interested in one of the offered analysis tools?
Does the Pupil Labs add on for the Vive work with the newer Vive Focus 3?
Hi folks - longtime user. We have a unique issue - a recording has negative timestamps! All of them are negative.
Hey, @user-8779ef - pupil time has an arbitrary beginning, so I think negative timestamps are not unexpected and that this isn't a bug. The network api provides a very simple method for changing the pupil time to whatever you want during runtime: https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
For data you've already collected, I don't think you want to multiply by -1, as that would make timestamps decreasing instead of increasing. Instead, you'll want to add a fixed value to every timestamp.
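For example, a minimal sketch (the file name is just a placeholder for wherever your timestamps live):
```python
import numpy as np

timestamps = np.load("world_timestamps.npy")  # placeholder; any timestamp array works
shifted = timestamps - timestamps[0]          # deltas are preserved; values now start at 0 and increase
```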
This is screwing with a pipeline we have. Of course we could just modify the timestamps - the delta-times seem to make sense, so it's reasonable to suggest multiplying by -1. However, that breaks down our ability to match timestamps from the gaze_directions.csv to cached data, like the data in pupil.pldata.
Is there a simple fix for this issue / bug that would prevent us from having to scrap this file, or from having to write a good deal of one-time-use code to correct for it?
Hey, what does the fixation circle size represent in Pupil Player? As far as I can tell, the size cannot be changed, so does it represent some kind of uncertainty about the fixation location?
Hi, @user-5ef6c0 - the diameter doesn't carry any information, and it can actually be modified by adding a `Vis Fixation` element in the plugin manager. You'll then see a new button with the same icon as the fixation detector, and within those settings you can adjust the diameter. If you use that, you will then probably want to disable the `Show fixations` setting in the Fixation Detector plugin.
Hello! I would like to directly access real-time pupil-related data, such as the pixel coordinates of the pupils, while using Pupil Capture. Currently, I can only export data as CSV files using Pupil Player after recording. How can I achieve this?
Hi, @user-a2c3c8 - have you looked into the network api? https://docs.pupil-labs.com/developer/core/network-api
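In case it's useful, a minimal sketch of that route (assuming Pupil Capture is running locally with Pupil Remote on its default port 50020):
```python
import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to real-time pupil data from both eye processes
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    # norm_pos is the pupil position in normalized eye-image coordinates;
    # the datum also carries confidence, diameter, ellipse parameters, etc.
    print(topic.decode(), datum["norm_pos"], datum["confidence"])
```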
Hello Pupil community, does anyone know how to open the calibration files (.plcal) after using Pupil Core glasses? If not, does anyone know where the calibration values are stored in the export files of the recordings?
Greetings, @user-47b063 - the calibration file is basically just a msgpack-serialized version of a Calibration instance dict (see: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_producer/model/calibration.py).
The process for loading a calibration is pretty straightforward, and you can see how we do it by starting here: https://github.com/pupil-labs/pupil/blob/c309547099099bdff9fac829dc97b546e270d1b6/pupil_src/shared_modules/gaze_producer/model/calibration_storage.py#L172
I'm curious about what you want to do with it, though - mind sharing?
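In case it's useful, here's a minimal sketch of reading such a file, assuming the .plcal is a plain msgpack payload (the exact keys depend on your Pupil version; if this raises an error, Pupil's own file_methods.load_object in the source tree handles the remaining details):
```python
import msgpack

# placeholder path to a calibration file inside a recording's "calibrations" folder
path = "recording/calibrations/example.plcal"

with open(path, "rb") as fh:
    calib = msgpack.unpack(fh, raw=False)

# the resulting dict should mirror the Calibration model's fields
print(calib.keys())
```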
Hello community, I want to try detecting the pupil, but I only have a C922 webcam, so I replaced the IR-blocking filter and used it as the eye camera. After I run Pupil Capture, the camera keeps disconnecting and reconnecting. Does anyone know how to fix it, or is there maybe an additional step after replacing the filter? In addition, the eye window does not show any image, like this.
Hey, @user-6072e0 👋🏽 - can you confirm that the camera works on that machine with any other software?
Yup, the camera is working fine if I try with other software.
After I looked into the code and searched for the issue on the pyuvc GitHub (https://github.com/pupil-labs/pyuvc/issues/4), they suggest changing the `bandwidth_factor` variable. So I just changed it until the camera was able to retrieve the image (in my case, I changed it to 0). May I ask what `bandwidth_factor` is for?
It's used to override the payload packet size. It seems it may be necessary when receiving multiple video streams over a single hub (see: https://github.com/pupil-labs/libuvc/blob/b7181e9625fb19eae80b54818537d2fd0a9090d3/src/stream.c#L1147)
Setting it to `0` as you have done appears to disable the overriding calculation and use the camera's default reported value.
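In case you end up experimenting with pyuvc directly, the factor is just an attribute on the capture object; a rough sketch (device index and mode selection are arbitrary here):
```python
import uvc  # pupil-labs/pyuvc

devices = uvc.device_list()
cap = uvc.Capture(devices[0]["uid"])      # first detected UVC camera

cap.bandwidth_factor = 0                  # 0 = don't override the camera's reported payload size
cap.frame_mode = cap.available_modes[0]   # pick any supported (width, height, fps) mode

frame = cap.get_frame_robust()
print(frame.img.shape)                    # BGR image as a numpy array
cap.close()
```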
Hello! We purchased Pupil Core in summer 2021. Some colleagues then attached each part of the headset onto a band for us, to suit our testing needs. We used it for the first time this month, and one of the eye cameras immediately stopped working. We tested all the connections, and it does seem to be the camera itself that is the problem, as the other eye cam works when plugged into the first eye cam's connector. Is this something anyone has any experience of, and would it be covered by warranty?
Hi @user-26a65d 👋. Sorry to hear the eye cam on your custom headband doesn't appear to be working. First thing to try would actually be a driver debug. Please follow the instructions here and let me know if it solves the issue: https://docs.pupil-labs.com/core/software/pupil-capture/#windows
Hi -- Sorry if this has been answered previously. If a pupil core recording has 3 stored calibrations, is there a clear, objective way to identify the best one? During recording, I know there are output metrics shared to the terminal (accuracy and precision), but I don't know how to access them later. We used a single point and would often repeat it to try to improve accuracy and precision. And, if a recording has all 3 calibration files stored (as *.plcal files), when loaded into Player does it use all 3, or only the first, or only the last, or the best (when selecting Data Source: Gaze Data From Recording). Thanks very much!
Hey @user-025d4c! Each calibration is used until a new calibration takes its place. In a recording where 3 calibrations happen at the beginning of a recording then, indeed, only the last calibration would be utilised for the majority of the recording. But if calibration occurs at 3 different points, e.g. start, middle and end, of a recording, then by default, each calibration is used for the preceding section of the recording. You can also re-run calibrations post-hoc in Pupil Player. It's then possible to choose the one that gets the best accuracy. I recommend replicating the videos here to get a better understanding of the process: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Hi, I want to ask about the Calibration Process. Does monocular calibration have to be done with the right eye or can we do it with just the left eye?
Then, after the calibration process, we get the gaze visualization, right? What is needed to get better gaze results? Maybe adding the camera intrinsics, or anything else?
One more thing: when I try to start recording, Pupil Player crashes and the following error message appears: `ValueError: 'ForwardRef' object is not callable`. Do you know anything about it? (I use Ubuntu 20.04 and pulled the master branch.)
Thank you.
If you have a monocular setup you can use either eye. The most important component of good gaze estimation starts with good pupil detection, so you'll need a clear, well-framed image of the eye across its entire range of motion. There's some good information for starting out here: https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detection and also here: https://www.youtube.com/watch?v=7wuVCwWcGnE.
Couple of remarks regarding that crash:
* "Pupil Capture" is used for creating recordings, "Pupil Player" is used for viewing them afterwards. Are you starting a recording with Pupil Capture or viewing a recording with Pupil Player?
* Most people don't need to run from source and have a better experience using the pre-built release packages instead.
Hi there, I'm noticing a phenomenon where, when the Pupil Core Vive Pro inserts are configured via Pupil Capture to operate at 400x400 pixels, the resulting images have noticeably lower contrast than if they're set to 192x192 pixels or 96x96 pixels.
The difference in contrast (if one exists) between 96x96px and 192x192px is not visibly noticeable, but the difference in contrast between 400x400px and 192x192px is significant. We are seeing eye tracking (gaze estimation) performance lower at 400x400px than at 192x192px, and we think this might be the cause.
Is this a known thing, and do you know what the cause might be?
Hi @user-3cff0d ! The edge detector kernel size is optimised for the lower resolutions, so it's normal that you see a better performance with 192x192 px resolution. Regarding the contrast, it's hard to tell as it could be hardware specific.
Good afternoon, I am running an experiment whereby participants view objects in front of them in various orientations. Currently the experiment runs through MATLAB and Pupil labs runs synchronously on the same PC machine for the duration of the study. Is there any available MATLAB code which can either trigger the pupil labs to stop/start between trials? Alternatively, any MATLAB code which can send a time stamp to the pupil capture/player data exports so I have precise timings for when a trial begins and ends? Many thanks.
Hi @user-3b4828 👋! Have you already seen https://github.com/pupil-labs/pupil-helpers/tree/master/matlab ?
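Those helpers talk to Pupil Capture's Pupil Remote interface. As a reference for what needs to be sent, here is the same flow sketched in Python: 'R'/'r' start and stop a recording, and an annotation message gives you a per-trial timestamp in the export (this assumes the Annotation plugin is enabled in Capture; the session name and labels are just examples):
```python
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")   # Pupil Remote default port

def send_cmd(cmd):
    pupil_remote.send_string(cmd)
    return pupil_remote.recv_string()

# connect a PUB socket to Capture's IPC backbone for sending annotations
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(0.5)  # give the PUB socket a moment to connect

def send_trigger(label):
    pupil_remote.send_string("t")                 # current Pupil time
    timestamp = float(pupil_remote.recv_string())
    annotation = {"topic": "annotation", "label": label, "timestamp": timestamp, "duration": 0.0}
    pub.send_multipart([b"annotation", msgpack.dumps(annotation, use_bin_type=True)])

send_cmd("R trial_01")      # start recording into a session named trial_01
send_trigger("trial_start")
time.sleep(5.0)             # ... run the trial ...
send_trigger("trial_end")
send_cmd("r")               # stop recording
```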
This is the specific hardware that was used. We kept lighting consistent between trials, and it's present even when the vive itself is powered off
Thank you for the response!
Hi, I'm new to this community, but may I know what I need to integrate this eye-tracking service into my own wearable glasses? I'm very interested in learning how to build such an ML model for eye tracking purposes...
I guess that would depend on exactly how much of the Pupil Labs Core pipeline you want to utilize. From a very high level, images are acquired (using `pyuvc`), those go to a pupil detection algorithm, and that data goes to a gaze mapping algorithm. Depending on what you plan to do, you'll probably be replacing one or more of those steps. The entire Core pipeline is open source (https://github.com/pupil-labs/pupil/), so feel free to dig in! Also, just in case you haven't seen it yet, check out this page: https://docs.pupil-labs.com/core/diy/
Hi, during a recording there were problems with the world camera, and as a result I have a few files with world video. I tried to export it into iMotions, and the software uses only one of them - the shortest. If I delete this file, would it play the next one?
Hi @user-35fbd7 👋 ! You are using Pupil Invisible, right? I will reply to you in the 🕶 invisible channel for the sake of clarity.
And one more question - is it possible to download video recording - world+eye-tracking?
Hello everyone, I have one quick question: where can I find the serial number of a Pupil Core headset? Thanks!
Hi @user-820a68! Pupil Core systems don't have a serial number. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/955426808399011861
Hello,
I would like to automate the process of loading a Capture recording into Pupil Player and exporting it through a script. Is there already a solution that does this or could you please point me towards the right direction on how to do this?
Hey! What data would you like to batch export from your recordings? There are some community contributed tools for this. Have a look at https://github.com/tombullock/batchExportPupilLabs
Hello! I am using Pupil Player to export the world video. In the Player, it appears correctly (left), but when exported as a video, there are some issues. The later portions of the video were entirely gray (right), though the fixations remain. Even when I tried it on a different computer, the problem persisted. Is there any suggestion on how to fix this issue? Thank you very much.
🤔 looks like quite a long recording! What is its duration if you don't mind me asking?
Hi, we are using a single marker calibration with the physical marker attached to an object at 1.5 m distance. After the calibration (spiral pattern), we ask the participants to do another round of this movement to validate the calibration. It has occurred multiple times that the validation shows lower accuracy than the calibration. We then start all over again. How important is the validation process? To shorten the duration of the experiment and to lower the load on the participant, it would be practical to eliminate the validation step if it's not that important, as the participants report that the calibration/validation process can be exhausting.
Hi @user-ffe6c5 👋. Pupil Capture calculates gaze accuracy after calibration (C) and validation (T). The difference is that the validation uses (or rather should use) different locations in the subject's field of view than the calibration to test how accurately the calibration generalises. In general, it's fairly common then to see lower accuracy after a validation. Whether or not you need to re-calibrate depends on your experimental task. What sort of accuracy values are you seeing? And what is your experimental task?
Hi there, we are having a really hard time getting the pylsl plugin to work; it gives an error saying no lib found. Can you help us?
world - [WARNING] plugin: Failed to load 'pupil_capture_lsl_recorder'. Reason: 'LSL binary library file was not found. Please make sure that the binary file can be found in the package lib folder (C:\Users\Mentalab\pupil_capture_settings\plugins\pylsl\lib) or specify the PYLSL_LIB environment variable.
[email removed] 👋🏽 - looks like you're missing the `liblsl.dll` file. You can grab it from a release file here: https://github.com/sccn/liblsl/releases (if you're not sure which one you need, it's probably `liblsl-1.16.2-Win_amd64.zip`). Inside that zip you should see a `bin` folder, and inside that you'll find `liblsl.dll`. Copy that file to `C:\Users\Mentalab\pupil_capture_settings\plugins\pylsl\lib\liblsl.dll`, and you should be good to go 🙂
Hello, I've been using `gaze.norm_pos` in my calculations, but as it turns out, the `z` dimension is also something of interest to me. I had a few questions I wanted clarification on:
- Is `gaze.gaze_point_3d` the right place to look? If so, in what coordinate space is this point (I'm assuming it's with respect to the camera on top of the head)?
- Also, if `gaze.gaze_point_3d` is the 3D point, how does the conversion from `gaze.norm_pos` to `gaze.gaze_point_3d` take place?
Hey! `gaze_point_3d` is indeed the correct place to look. However, it is known to be pretty inaccurate, especially at viewing distances over 1 m.
I have previously written about this, with some explanations of how the variable is calculated. I recommend checking out these messages:
- How it's calculated: https://discord.com/channels/285728493612957698/285728493612957698/1098535377029038090
- Alternative way to investigate viewing depth: https://discord.com/channels/285728493612957698/285728493612957698/1100125103754321951
Is there documentation on best practices for doing monocular eye tracking?
Hi @user-fa2527! Unfortunately, there are no established best practices for monocular eye tracking. Is there anything specific you would like to address? Do you have a single-camera set-up, or are you trying to record monocular data from both eyes? In case of the latter, you would need https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350
Hi there, is there any way to capture and track the position of a moving calibration target in Unity? The user guide says the Calibration marker is automatically detected by Pupil software in the world video and I want to capture that data.
Hi @user-b62a38 👋! Just to be sure, are you using Pupil Core, or are you using the VR Add-on with the HMD eyes package? In HMD eyes the position of the calibration target is known, so there is no need for Pupil Capture to detect it in the world camera; instead, it is passed along. https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/Calibration.cs https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/CalibrationController.cs
Note that the standard calibration takes 40 samples per target, presents each target for 1 s, and ignores the first 0.1 s. https://github.com/pupil-labs/hmd-eyes/blob/master/plugin/Scripts/CalibrationSettings.cs
If you would like to present a dynamic stimulus for calibration, you would need to modify the aforementioned scripts.
Hi, may I ask how to install it on an ARM device, such as a Jetson?
Hi @user-84cfa7! Unfortunately, we do not have native support for ARM processors. You could try to run it from source and compile wheels, but to my knowledge there are some additional dependencies which do not support ARM processors either, so you would need to compile them too. Please have a look at https://discord.com/channels/285728493612957698/1039477832440631366
May I ask why you would like to run it on an ARM processor? If all of this sounds too complicated and you simply want some single-board computer (SBC) to run it, some users have reported that the LattePanda 3 Delta works great, and without headaches, as its architecture is x86. https://discord.com/channels/285728493612957698/285728493612957698/1123484310649966602
I want to use an ARM-architecture computer for eye-tracking control of an Arduino robotic arm.
Perhaps the simplest way, without taking too much effort to make Capture run on the Jetson, would be to stream the cameras from there to another PC where Pupil Capture is running, if you have that possibility https://discord.com/channels/285728493612957698/285728493612957698/1108325281464332338
Has anyone implemented this method?
Capture also requires an amd64 architecture computer and cannot be installed on Jetson
Sorry, I thought we were talking about Pupil Capture all the time. Were you referring to a different software solution? Yes, Pupil Capture does not run on ARM devices by default. That's why I mentioned that you would need to either compile it from the source, use a different arch or use the Jetson as a proxy to stream the video to a supported device using that video backend.
Yes! People have used this in the past to stream video from devices like the Raspberry Pi to another computer. You can read about it here: https://github.com/Lifestohack/pupil-video-backend. There are some minor things that one would need to take into consideration, such as that it does not stream the camera intrinsics (so if you are not using Pupil Core cameras, you may need to modify them), and that by default it runs at 30fps, although there is a flag/option `-vp` to change that.
Hello I would like to know if there is an option to make it more portable, I've tried with the old mobile app and with a raspberry but none of them work and I can't afford a new device
This comes up with some regularity - in fact, just yesterday https://discord.com/channels/285728493612957698/285728493612957698/1140896658574557204
You can find a lot more discussion using the search feature in this Discord server - including some comments from people who have tried running on a Raspberry Pi and other methods for getting gaze data to such a device
May I ask how to add a plugin to the Core?
I want to add the plugin to the Core.
Hi @user-84cfa7 ! To add a plugin to Pupil Capture or Pupil Player, you can simply drag and drop it in the corresponding folder https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin
and windows
How can I improve it
Hi @user-84cfa7! On Linux it should be a folder with the same name under the home directory. Could you post the content of the plugin you are trying to load? Backwards compatibility with custom plugins is unfortunately not guaranteed.
How can I control a robot hand with the Core?
An Arduino Uno board is the control core.
@user-d407c1 Hi, excuse me. When I am using the blink.csv file, I want to know what the specific times of start_timestamp and end_timestamp are. How should I do the conversion?
Hi @user-873c0a! To what would you like to convert? Timestamps in Core are defined as described here: https://docs.pupil-labs.com/core/terminology/#timestamps. They are in Pupil time, which starts from an arbitrary point; to express them as Unix epoch / wall-clock times, you can apply the clock offset stored in the recording's info.player.json.
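If you want wall-clock times, here is a rough sketch of that conversion, assuming a Core recording whose info.player.json stores both the synced (Pupil time) and system (Unix) start times:
```python
import json
import datetime

with open("recording/info.player.json") as f:   # placeholder path to the recording folder
    info = json.load(f)

# offset between Pupil time and Unix time for this recording
offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_datetime(ts):
    """Convert a Pupil timestamp (e.g. blink start_timestamp) to wall-clock time."""
    return datetime.datetime.fromtimestamp(ts + offset)
```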
thank you so much
Of course! It sounds like a fun and interesting project! If you feel so inclined, feel free to share your progress with this community. I'd personally love to see how it goes!
ok
TOPIC: Natural Features Calibration Issue
Hello all,
We are currently conducting psychophysics/EEG experiments using Pupil Core. We have a MATLAB computer (a Windows 7 one) with an external monitor inside the dark subject cabin. We have a separate Windows 10 computer for Pupil Labs, and the two computers communicate via LAN. There is no problem with data acquisition right now. However, since our stimulus computer is not the same as the Pupil labs computer and the subject cabin needs to be dark during the experiment, we can only use the Natural Features choreography for the calibration step. We perform this by presenting two separate arrays of calibration and test dots on the stimulus screen. But our average angular accuracy after calibration is around 3 visual degrees, which is not enough for us to detect fixations to a fixation spot accurately.
Do you have any specific advice for us to reach better angular accuracies?
Topic: gaze tracking on a screen with Pupil Core
Good evening! I am conducting an experiment with the Pupil Core and want to check if participants are looking away from the screen for extended periods of time. For that I used the calibration function of PupilCapture and performed the "Screen Marker" (default) choreography. I get the rectangle around my screen in the world view, so the configuration seems to work fine. I can't seem to make sense of the recorded data though. In the documentation it says:
Pupil Capture and Player automatically map gaze and fixation data to surface coordinates if a valid surface transformation is available. The surface-normalized coordinates of mapped gaze/fixations have the following properties:
- origin: bottom left (pay attention to red triangle indicating surface up/top side)
- unit: surface width/height
- bounds (if gaze/fixation is on AOI): x: [0, 1], y: [0, 1]
How can I use this to check whether someone is NOT looking at the AOI? I tried looking away from the screen, but all measurements are still in the [0,1] range, so there's no indicator of whether I looked at the screen or not. I suppose I am using the wrong data here - could you tell me what I am doing wrong, or rather what I should be checking to see if participants are looking at the screen?
Thanks in advance and have a nice week!
Hi, @user-19fd19 - are you using the Surface Tracking plugin (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking)? You'll need to place AprilTag markers on or around the screen and then configure the surface corners using the plugin. You may also want to configure the surface dimensions - many users doing gaze on screens set the dimensions to match the screen resolution so that the reported surface gaze positions are in pixels. You could also use the physical dimensions of the display, or leave it as 1x1 for normalized coordinates.
With that configured, you'll now have surface gaze data along with the regular gaze data. You'll know when the gaze is off screen if the surface gaze positions are outside the range of your configured surface dimensions.
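For example, a rough sketch with the exported surface-mapped gaze (the surface name "Screen" and the export path are placeholders; the export also contains an on_surf flag that encodes exactly this check):
```python
import pandas as pd

gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")

on_screen = gaze["on_surf"]                 # True when the mapped gaze falls inside the surface
print(f"{1 - on_screen.mean():.1%} of gaze samples were off the screen")

# equivalent check using the surface-normalized coordinates
inside = gaze["x_norm"].between(0, 1) & gaze["y_norm"].between(0, 1)
```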
Hello, I would like to send a signal over a USB port when the red gaze dot reaches a certain location during tracking. May I know how to achieve this?
Hi, @user-84cfa7 - have you seen this sample script? https://github.com/pupil-labs/pupil-helpers/blob/master/python/serial_bridge.py
Hi guys, I am learning to build a plugin for Pupil Capture which aims to detect the colour at the gaze position in the world process: get the gaze position (coordinates), then return the colour (by printing it out in the world process). I am thinking about overriding the `def recent_events(self, events)` function of the plugin, but could not find much information about the data associated with the "events" argument when it is called.
Is there a better way to do this? Could anyone shed some light on this please?
PS. I am not running from source
Thanks in advance!
An alternative approach to writing a custom plugin would be to write a separate app and use the Network API to receive the gaze data and scene camera frames
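A rough sketch of that route, assuming the Frame Publisher plugin is enabled in Capture with its format set to BGR (this follows the same pattern as the recv_world_video_frames.py helper in pupil-helpers):
```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.world")   # raw scene frames, without the gaze overlay
sub.subscribe("gaze.")

last_gaze = None
while True:
    parts = sub.recv_multipart()
    topic = parts[0].decode()
    payload = msgpack.unpackb(parts[1], raw=False)

    if topic.startswith("gaze."):
        last_gaze = payload["norm_pos"]          # (x, y), origin bottom-left, range [0, 1]
    elif topic.startswith("frame.world") and last_gaze is not None:
        h, w = payload["height"], payload["width"]
        img = np.frombuffer(parts[2], dtype=np.uint8).reshape(h, w, 3)   # raw BGR frame
        x = int(last_gaze[0] * w)
        y = int((1.0 - last_gaze[1]) * h)        # flip y: image origin is top-left
        if 0 <= x < w and 0 <= y < h:
            b, g, r = img[y, x]
            print("colour at gaze:", r, g, b)
```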
@user-d407c1 Would you be able to point me in the right direction on this please? Thanks for your help!
Hi, does the eye camera have an IR filter? If so, what is the range of this filter?
Hi @user-5f1945! Eye cameras do have a filter blocking visible light and allowing NIR from 850nm onwards. Kindly note that most filters do not effectively block 100% of the light.
Hi, I'm just wondering, is the pupil core the most accurate camera set you have? Compared to the AR/VR which shows ~1.0 degree accuracy, the core shows ~0.6 degrees. Is that correct?
Hello,
We have been doing some experiments with Pupil Core since last month, but yesterday we started having some hardware issues. First, one of the eye cameras stopped responding. Then in some instances the world camera was not responding as well. We first thought it was a cable issue but it kept on going after we made sure that all cables are firmly attached. Then finally we got blue screen errors from Windows with the error code "ATTEMPTED_SWITCH_FROM_DPC". This computer has 16GB RAM and intel i3 CPU. Since the minimum requirements for the CPU is i5 according to your website, we installed the software and the drivers to another computer which has i7 and 32 GB RAM. However the result was similar: The world camera and eye 1 were appearing grey, with no recognized cameras. Only eye 0 was working. And whenever the world camera starts working, it freezes again once we try to move it a little bit. And a little while after we disconnected the USB in this new computer, we got another blue screen error from Windows, this time with the code 'MEMORY_MANAGEMENT'
We are making sure that we have all cameras enabled, we also tried decreasing the resolution to the lowest setting for each camera. None of these helped. Are we having a hardware issue? (I am sending this same message as e-mail to info@pupil-labs.com just in case).
Hi @user-5adff6 👋 ! I have followed up through email.
Thank you very much! I will run the troubleshooting steps and get back to you asap!
Hi Pupil, I am using the Pupil Labs Unity API. I want to ask: is it possible to get the confidence for an individual eye?
Hi @user-13fa38 ! Yes! you should be able to use the https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#subscriptionscontroller to subscribe to any topic from Pupil Capture.
You can check all topics using https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py. The one you would be looking for is `pupil.X.Y`, where X is the eye (0 or 1) and Y is the detection method (2d or 3d); those messages contain the confidence.
See more at https://docs.pupil-labs.com/developer/core/network-api/#message-topics
Hi, I am trying to re-calculate fixations and wrote this function to get dispersion:
```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist

def vector_dispersion(x_norm_values, y_norm_values):
    """Calculates the angular distance between x, y gaze coordinates in 2D space
    and returns the dispersion in radians."""
    # Drop rows with any NaN values
    vectors_without_nan = pd.DataFrame(
        {"x_norm": x_norm_values, "y_norm": y_norm_values}
    ).dropna()
    distances = pdist(vectors_without_nan, metric="cosine")
    dispersion_cosine = 1.0 - distances.max()
    if np.isnan(dispersion_cosine):  # check for NaN resulting from division by zero
        return np.nan
    angular_dispersion = np.arccos(dispersion_cosine)
    # note: np.arccos already returns radians, so this extra conversion shrinks the value
    dispersion_radians = np.deg2rad(angular_dispersion)
    return dispersion_radians
```
Does this ultimately give you the dispersion in radians? We are getting really small numbers on our end. Is it better to use gaze_point_3d values? Could that account for the difference?
Hi @user-908b50 👋! It looks good, but perhaps you would like to work in degrees of visual angle rather than in normalised coordinates. If you want to compute fixations as is done in Pupil Core's software, have a look at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L130 As you can see, for the 2D detector, gaze is denormalised and undistorted to a 3D plane (lines 142 to 148).
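In case it's useful, here is a sketch of that idea outside of Pupil's code base, assuming you have the scene camera matrix K and distortion coefficients D for a standard (non-fisheye) model, e.g. read from the recording's world.intrinsics, plus the scene frame size:
```python
import cv2
import numpy as np
from scipy.spatial.distance import pdist

def dispersion_degrees(norm_x, norm_y, K, D, width=1280, height=720):
    """Angular dispersion of 2D gaze, in degrees of visual angle."""
    # denormalise: Pupil's norm coords have origin bottom-left, images have origin top-left
    px = np.asarray(norm_x) * width
    py = (1.0 - np.asarray(norm_y)) * height
    pts = np.column_stack([px, py]).astype(np.float64).reshape(-1, 1, 2)

    # undistort and unproject to viewing rays in camera space
    undist = cv2.undistortPoints(pts, K, D)                 # normalized image coordinates
    rays = cv2.convertPointsToHomogeneous(undist).reshape(-1, 3)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)

    # largest pairwise angle between gaze rays
    max_angle = np.arccos(1.0 - pdist(rays, metric="cosine").max())
    return np.rad2deg(max_angle)
```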
Hello all, I'm trying to synchronize Pupil Core with the Presentation software and EEG. For this purpose, I would like to send some triggers from Presentation to Pupil Core. As I understand it so far, you can add the LSL Recorder plugin and send triggers to Pupil Core via LSL. So far the stream from Presentation is recognized (green button), but after starting the recording the Capture software freezes and a message like this is displayed.
Hi @user-dfd547 👋. The error is from pylsl timing out when receiving data from your presentation software - very difficult to guess what the issue could be. Are you able to record your presentation's stream using LSL's native Lab Recorder App: https://github.com/labstreaminglayer/App-LabRecorder ?
Does this mean that the data stream from the Presentation software is not accepted, or do I have to set something in Presentation? Thank you in advance!
Hi 👋 I am experimenting with the Pupil Core. The experiment setup has four screens: three on the table, and the 4th is a big screen behind the table. A person wearing the Pupil Core sits on a chair in front of the screens to monitor some parameters. I am sticking AprilTag markers in the environment to use the head pose tracking plugin of the Pupil Core.
In Pupil Player, the plugin detects all the markers in the environment but takes only a few for the 3D visualization and for exporting the head pose data. The accuracy of the exported translation and rotation data (in head_pose_tracker_poses.csv) is not good.
What are your suggestions for getting an accurate head pose? What are the factors that affect accurate head pose tracking? Thank You
Hi @user-0aefc0! Could you please let us know which AprilTag markers you use, what their size is, and where the markers are located? Also, do you run it in Capture or post hoc in Player, and have you tried resetting the model once the markers are visualized?
Hi 👋
I am having an issue with Pupil Capture today when trying to launch it. Reinstalling the app did not resolve the issue. I have attached the log below.
Thank you!
Hi @user-74a158! Can you please try deleting the 'pupil_capture_settings' folder from your home drive and then re-launching Capture? Note that if you have any custom plugins, you should back them up first.
Hi @nmt! Thank you for your quick reply. I have also tried it with LSL's native Lab Recorder app. After recording, I received an xdf file which I tried to read with the MATLAB importer. Unfortunately, for the Pupil Capture data I got unstructured floating point numbers for the time series and timestamps, and for the Presentation data I got strings and floating point numbers in one cell for the time series.
Hi @user-dfd547, the lab recorder app does indeed generate an xdf file which contains the unified data streams. I'm not sure I fully understand your problem, however. Is the issue with reading the contents of the xdf file?
Hello, how are you?
I need to install software which I can't find on your website. Could you help me?
Hi @user-4a0164 👋 Could you kindly provide further details regarding the specific software you intend to install? If you're interested in Pupil Core software, you can find additional information at this link: https://github.com/pupil-labs/pupil/releases/tag/v3.5
@user-64de47 ???
I am trying to merge gaze_point_3d values from the gaze_positions.csv file with the surface_gaze_positions.csv file (after merging the surface file with blinks). The problem that I am seeing now is that, for a given gaze_timestamp with a similar world index, the x_norm and y_norm coordinates are not like the norm_pos_x and norm_pos_y coordinates. I know that these are transformed. For a specific world_timestamp, you have multiple gaze_timestamps (presumably because the eye is moving, of course), which is not seen in the gaze_positions.csv, where you will only have a subset of the gaze_timestamps. I am not sure how to reconcile the difference. I would appreciate suggestions on tackling the different databases.
My next goal is to also merge diameter (from the pupil positions file) with the same surface file (with the fixation and blinks data merged). Running into a similar problem with reconciling databases as above.
Hi @user-908b50! Have you seen our Pupil tutorials? They might require some updates depending on your pandas version, but they will show you the basics of data handling, including how to merge databases: https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb
or getting the pupil diameter by fixation on surface: https://github.com/pupil-labs/pupil-tutorials/blob/master/06_fixation_pupil_diameter.ipynb
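For the timestamp-reconciliation part specifically, matching on the nearest gaze_timestamp rather than on exact equality is usually what's needed; a rough pandas sketch (file and surface names are placeholders following the standard Player export layout):
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv").sort_values("gaze_timestamp")
surf = pd.read_csv(
    "exports/000/surfaces/gaze_positions_on_surface_Screen.csv"
).sort_values("gaze_timestamp")

# match each surface-mapped gaze sample to the nearest raw gaze sample in time
merged = pd.merge_asof(
    surf,
    gaze[["gaze_timestamp", "gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]],
    on="gaze_timestamp",
    direction="nearest",
    tolerance=1 / 120,   # max allowed time difference in seconds; adjust to your sampling rate
)
```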
Hello, I would like to use Pupil Core to track the pupil diameter of research subjects! However, I am having difficulty getting it started... I downloaded the software onto my Mac and plugged the Pupil Core device into my computer (through a USB-B to USB-C converter), but the video feed of the eyes and world is not showing up. Plus I am getting the following error message. Is anyone familiar with this? This is my first time using the device.
Hi, @user-63b5b0 👋🏽 - on newer versions of MacOS, you'll need to be sure that you run Pupil Capture with admin privileges. More info here: https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer
Hi Neil, I am trying to detect the pixel RGB values at the gaze position (a single pixel value for now), but it always returns (255, 0, 102) because of the red gaze position marker. How can I get the original world image without the red gaze position marker in it? Thanks for your help!