Hi there, I'm having trouble with the surface recording. The surface markers are set and saved, and the arrow is the right way, but there is nothing in the exported gaze_positions_on_Surface_[screen name] file. Any suggestions?
Hi @user-b62a38, thanks for reaching out! Could you try clearing the pupil_player_settings folder and then running Pupil Player again? This folder is in Drive > Users > Username (on Windows).
Thank you!
Hi, I'm currently looking at someone else's data. This data has the strange effect that when surface data is exported, the x and y coordinates on the surface are perfectly correlated (they're not the same, but almost). This can also be seen in the surface heatmap that can be displayed in Pupil Player - it is a straight line from the middle of the surface to the top right corner of the surface, with most of it located in the top right corner. I have never seen such an effect before with my own data. The surface seems OK, and the gaze coordinates as well. But when compared, the output surface coordinates do not match the expected result at all. Is there something obvious that I have overlooked that can result in such surface data?
Hi @user-d1f142 , could you restart Pupil Player with default settings and then give it another try? You can find this under the General Settings tab in the sidebar of Pupil Player.
Hello! Would like some guidance here - this may be a "noob" question as we are new to eye tracking. We have a Pupil Labs Core and want to run an experiment with users looking at a screen (iPad or Mac) and having the ability to use the eye tracker to move the mouse cursor on the screen (mouse cursor follows the gaze) and click on any buttons (either by fixating their gaze on the button for a specified duration or by blinking some number of times). Is doing something like this straightforward, and are there any guides for setting this up? I saw this online (https://docs.pupil-labs.com/alpha-lab/gaze-contingency-assistive/) but it seems like it's for Neon, and we have a Core.
Hey @user-2c4f22! You might want to check out this community contributed project - https://github.com/emendir/PupilCore-CursorControl
Hey guys, I am trying to load previously used surface definitions for other recordings. I have copied the surface definitions v1 file into the other recording folder. However, when I load the recording into Pupil Player, it takes a long time to load with a white-ish screen. I have left it running for one hour with no success. I note here (https://docs.pupil-labs.com/core/software/pupil-player/) that it says it may take a few seconds because the surface tracker plugin is assessing the whole recording. My recordings are about 5 minutes long and I have 25 predefined surfaces. Is it normal for it to take this long to load?
Hi @user-3c6ee7. This can depend on the computer and available RAM, since the marker cache needs to be rebuilt for the detections in the new recording(s). Twenty-five surfaces is quite a lot, so the time you report is not outside the realm of possibility. Did it finish in the end?
Hi there. We are running into a continued issue with our pupil labs. We are running Pupil Core eye trackers through pupil capture v3.5.1. The glasses are wired directly to a laptop. However, upon starting, the right eye camera won't connect. Error message says "no camera intrinsic available" (among other things).
This error is consistent across both of our pupil core trackers, and occurs for both our Windows and Mac laptops. Any help would be appreciated!
Hi @user-885c29! Can you please open a support ticket and we can correspond there - we might need to exchange order IDs etc.
Hi @user-f43a29 , I tried this, it doesn't change anything. Disclosure: I'm the lead developer of Blickshift Analytics and I received the data from a customer. Our software allows importing the Pupil Labs surface data into our Dynamic AOI Editor, and the Dynamic AOI Editor also allows transforming the gaze coordinates into the local coordinate system of the surfaces/AOIs. Surprisingly, if I do this, the result is correct, i.e. if I use the surf_positions together with the gaze_positions exported from Pupil Player and do the transformation calculation on my side, I get the expected result, but it does not match the content of the gaze_positions_on_surface file.
From my side, I have solved the problem by telling our customer to use the Dynamic AOI Editor and import the surfaces. His deadline is today, so it won't make a difference if I can find the source of this problem, but I would still like to understand what is going on.
Hi @user-d1f142! This is Neil stepping in for @user-f43a29. Are you able to share the recording in question? I.e. the full recording directory?
Hi @nmt , I have received permission from the customer to share the data. I will send a link via pm.
Hi @user-d1f142! We reviewed the data and found why the issue arises. We can provide some concrete tips on how to improve the setup for future recordings, but it's not going to be possible to get surface tracking data from the recordings already made, unfortunately.
Explanation: For most of the tasks, none of the surface tracking markers are visible. Sometimes, one or two markers are in view, causing the surface to reduce to either a line or another erroneous polygon that doesn't encompass the area of interest. Additionally, the eye camera positioning is incorrect for some of the shared recordings; there are several instances where the pupil is not in view of the eye camera.
Steps for future recordings:
1. Ensure that at least three (ideally four or more) surface tracking markers are always in view of the scene camera to accurately define the area of interest.
2. Double-check that the pupil is roughly in the centre of the eye camera's field of view and that it remains visible at all gaze angles that will be recorded in the testing.
For the existing recordings, the users might want to revert to manual annotation of the data, e.g. with the Annotation Plugin
We hope this helps!
How do I do this? (sorry)
Hey @user-885c29, you can create a support ticket in the troubleshooting channel.
Hi @nmt , weirdly enough, when I tried again today, the surface definitions loaded in a few seconds! However, for reasons unknown, there are still a few recordings that are not able to load the surface definitions in a timely manner (left on a white screen, loading for a long time), even though these recordings are no longer than the others.
What does it mean when Pupil Player says "surface gaze mapping not finished"? I get this message occasionally when trying to export fixation data
I think it's also possible that the marker caches were nearly finished building when you reopened the recordings, but I can't say for sure. That message is literal. It means the surface mapping has not finished. It's recommended to wait for it to finish prior to exporting the data.
How do I know when surface mapping has finished? I am looking at the marker cache green progress bar at the bottom, which is fully green, but it still produces that message. Or does it throw that error if there are tiny blips in the marker cache (when the person briefly looked away and so there were no markers in their visual field in that brief moment)?
@user-d407c1 Hi Miguel, I see that the ticket we had open has closed. Were you able to get that mp4 file working, or did it not work out? Best, Kieran
Hello! I use Lab Streaming Layer to record pupillometry from my Pupil Core and align it to triggers from an external device. However, I want to record from Pupil Capture and analyze in Pupil Player as well. Do you know if I can record the Pupil Core video with Lab Streaming Layer so I can align my data with triggers?
Probably what you'll want to do here is to use remote annotations to send your events to Pupil Capture. They will be saved with the recording and visible in Player. Some sample code is available
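For reference, here is a minimal sketch of sending such a remote annotation over Pupil Capture's network API with Python (not the official sample code; it assumes Capture is running locally with the Annotation plugin enabled, and the label below is just a placeholder):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# Pupil Remote (REQ/REP) on Capture's default port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask for the PUB port so we can publish annotations onto the IPC backbone
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket a moment to connect

# Query the current Pupil time so the annotation lands on Capture's clock
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Build and send the annotation (the label is a placeholder for your trigger)
label = "trigger.stimulus_on"
annotation = {
    "topic": f"annotation.{label}",
    "label": label,
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```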
Hi @user-fc393e ! Your ticket was closed due to inactivity, as we did not receive a response for more than 10 days. We tried several times to communicate there; my colleague @user-cdcab0 found the issue and replied to you there on 2024-04-10. I am recovering his response for you here:
Hi, @user-fc393e - the crash you're seeing appears to be the result of a single corrupt record in your gaze data. I have written a custom plugin for you which, when enabled, will apply a patch to the fixation detector. The patch simply tells the fixation detector to ignore corrupted gaze records.
- Close Pupil Player
- Save the attached file to pupil_player_settings/plugins/fixation_patcher_kieran.py
- Start Pupil Player and enable the "Fixation Detector Patcher" plugin
- Enable the regular fixation detector plugin
@user-d407c1 already mentioned our list of best practices, but I'd like to also call attention to the Record Everything section. If this recording had included the calibration, as recommended, you'd be able to regenerate the gaze data post-hoc, which would have fixed the corrupt data.
Hi @user-d407c1, thank you for digging that up for me & sorry for the slow response. And thank you @user-cdcab0 for providing that patch.
The plugin works for detecting fixations, but I am still getting an error (either the same error, or just a frozen pupil player window) when trying to export those fixations. Is there any way to resolve the export (or another way to retrieve those fixations)?
And thank you for the info. I'll be sure to dig through the best practices a little more & implement those suggestions in our checklists.
Quick question: Is this fisheye effect in the Pupil Core world view image normal for a 1920x1080 resolution?
Hi @user-94183b, yes, the fish-eye distortion occurs when you select 1920x1080 resolution for the world camera. You can learn more about the camera intrinsics estimation here. You can also learn how to undistort images of the fish-eye lens by following this tutorial.
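As a rough illustration of what the undistortion step looks like (a sketch only, not the exact tutorial code; the intrinsics below are hypothetical placeholders - use the camera matrix and fisheye distortion coefficients estimated for your own 1920x1080 world camera):

```python
import cv2
import numpy as np

# Hypothetical intrinsics for a 1920x1080 fisheye world camera; substitute the
# camera matrix K and distortion coefficients D from your own intrinsics estimation.
K = np.array([[766.0, 0.0, 960.0],
              [0.0, 766.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.10], [-0.09], [0.02]])  # fisheye model coefficients k1..k4

img = cv2.imread("world_frame.png")
h, w = img.shape[:2]

# Build the undistortion maps once, then remap each frame
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("world_frame_undistorted.png", undistorted)
```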
The links you provided are very informative. Thank you for your prompt reply.
Hi @nmt , I understand what you are saying (this is always an issue with this customer, I have talked to him repeatedly about this during prior projects - to no avail it seems ;-). But this is not the issue I am referring to. I am talking strictly about frames where the markers are recognized, the surface is tracked correctly, and gaze data exists. Otherwise our software wouldn't be able to compute the correct surface coordinates; I only used data imported from Pupil Core for this. I have attached a screenshot. On the left is the video frame. You can see that the gaze point embedded in the video matches the gaze point overlaid by Blickshift Analytics exactly (blue circle). In the middle, the gaze data is mapped onto a screenshot of the surface (ignore the embedded gaze data, that's an artifact of the screenshot). The blue circle is the surface-local gaze point computed by BSA, using gaze data + surface data. As you can see, it matches the location on the video. The red circle is the surface-local gaze point exported by Pupil Core, for the same timestamp.
Hi @user-d1f142 , I'm stepping in briefly for @nmt on this message.
It seems the surface definitions somehow became corrupted across all of these recordings. Perhaps something went wrong when using Pupil Player to make the initial exports, but I am not 100% sure what exactly happened.
I tried the following and got results that now look sensible: I removed exports, offline_data, square_marker_cache, surface_definitions_v01, surfaces.pldata, and surfaces_timestamps.npy from the recording folder, then re-opened the recording in Pupil Player. I have attached a quick plot of what I now get out.
Could they try this on their end and see if that fixes it for them?
Hello, I wanted to know where I can find the video I just recorded with my Pupil so I can drag it into Pupil Player.
Hi @user-dfcad3, thanks for reaching out. If you're on Windows, you can find the recordings in C:\Users\Username\recordings.
As you can see, it matches neither the point we compute, nor the video data. This is what I cannot explain. The red line in the line graph is the y-coordinate of the surface coordinate output by Pupil Labs. As you can see, it is almost constant, very close to 1. The light green one is the same coordinate computed by BSA. If it were a problem with the base data, how would I be able to compute this coordinate?
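For context, the "transformation calculation" described here is essentially applying the 3x3 homography from the surface export to a gaze point. A minimal numpy sketch of that operation is below; it assumes the gaze point has already been expressed in the coordinate frame that img_to_surf_trans expects (the exact conventions, including distortion handling, should be checked against the Pupil Core documentation):

```python
import numpy as np

def to_surface(gaze_xy, img_to_surf_trans):
    """Map a 2D gaze point into surface-local coordinates via a 3x3 homography."""
    H = np.asarray(img_to_surf_trans, dtype=float).reshape(3, 3)
    x, y = gaze_xy
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # perspective divide

# Example with a hypothetical identity transform: the point maps to itself
print(to_surface((0.42, 0.77), np.eye(3)))
```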
Yes, it worked, thanks a lot!
Hi @user-f43a29 , I have just tested it, and this indeed works. Thanks for the help. I would still have liked to understand what is going on (especially since the customer has about 40 data sets where the surfaces result in these incorrect exports), but I'll know what to do, if this happens again.
Hi, @user-fc393e - you're quite welcome! I still have a copy of your recording and the export works fine for me. Are you on the latest version of Pupil Player? Are there any meaningful messages in the log file?
Hi @user-311764! In the pupil_positions.csv export you can find a column named method, which tells you what detector was used on each row.
As for gaze, you can select which pipeline is used for gaze estimation (see "choose the right gaze mapping pipeline"). Note that if you followed our best practices and recorded the calibration, you would be able to run the gaze mapping post-hoc and choose a different pipeline.
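For example, a quick way to see which detector produced each row of that export (a small sketch assuming the standard pupil_positions.csv export with eye_id and method columns):

```python
import pandas as pd

# Count how many pupil datums each detector produced, per eye
pupil = pd.read_csv("exports/000/pupil_positions.csv")
print(pupil.groupby(["eye_id", "method"]).size())
```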
Hello, I need to do a presentation for students and I wanted to know if there are any games that are playable with an eye tracker.
Hi @user-dfcad3, we don't have solutions for games that are playable with eye trackers. However, you might want to check out this community github page that utilizes Pupil Core's pipeline for human-computer interaction. Perhaps some of the projects might be interesting for you to demo to your students.
The files on that page are completely open source. Feel free to tailor them to your use case.
OK, thank you.
Is it possible to control the mouse with the Pupil Core or Pupil Invisible? If so, how do I do it, please?
You can take a look at this project that uses Pupil Core and a head tracker to control the cursor. As for Pupil Invisible, you'll need to adapt this Alpha Lab experimental tool with the real-time API to achieve some form of gaze-contingent mouse cursor control. Please note that while the Alpha Lab tool is intended for Neon, it should work with Invisible.
Hi, I am using the Pupil Invisible and the Pupil Core for my study. Unfortunately, there were problems with the recordings made with the Core glasses, which I only found out when I analysed the recordings. The situation is as follows: the settings and calibration worked and the fixations can be seen at the beginning, but as soon as the test person starts riding his bike, the pupils are barely visible due to the sun and there are no more fixations on the images. How can I avoid this overexposure? What exactly do I have to set? I made the settings and the recordings on my laptop. Is there also a mobile phone app for the Core that I can use? With the Pupil Invisible and the matching app, the recordings work perfectly under the same conditions. I need to use two pairs of glasses as I have to take two recordings at the same time. Is there any way I can edit the recordings that have already been taken so that I get the fixations? I need the recordings for my dissertation.
Hi @user-75e0ea! It sounds like the eye images recorded by Core are over-exposed. This can happen in direct sunlight. To mitigate this, you would need to set the eye cameras to auto-exposure mode in Pupil Capture, but it's not guaranteed to work in all cases. To determine whether they are recoverable, we'd need to see an example. Can you share one of your recordings with [email removed]? Also note that Pupil Core needs to be tethered to a laptop/desktop computer for operation.
Invisible doesn't have this issue, as auto-exposure is always on; moreover, it uses a deep-learning approach to gaze estimation that doesn't rely on traditional pupil detection. That's why it will have worked well even in direct sunlight.
This is the error I'm getting
Do you mind opening a support ticket in the troubleshooting channel?
This is what I get if I don't run surfaces
I went down a similar path looking for games that are playable by mouse and then controlling the mouse via gaze. Here's a little "shooter" horde game I made and played while we were at a conference a couple of weeks ago. It's played using a Neon and the real-time screen gaze package
This looks like so much fun! I would love to try it!
Also, back in the day I built something similar with EyeGestures (an open-source eye tracker of mine) running on the web: https://eyegestures.com/game
It is far less robust than what you are showing, but it is funny, as this was the first type of game I wanted to implement.
Also with Neon and the real-time-screen-gaze package, here's gaze-controlled Fruit Ninja
Oh thanks, do I need to install some Python modules to use it, or do I just need to run the code?
For Core, you will need to configure a surface on your screen with Pupil Capture. Then, using Python, you can read the gaze coordinates from Capture and use that to control the mouse
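As a rough sketch of that loop (not an official tool; it assumes Pupil Capture is running locally and that you have defined a surface named "Screen" covering your monitor):

```python
import zmq
import msgpack
import pyautogui

# Connect to Pupil Remote (default port 50020) and ask for the SUB port
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze that has been mapped onto the surface named "Screen"
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.Screen")

pyautogui.FAILSAFE = False  # gaze can legitimately reach the screen corners
screen_w, screen_h = pyautogui.size()

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload)
    for gaze in msg.get("gaze_on_surfaces", []):
        if gaze["on_surf"] and gaze["confidence"] > 0.6:
            x, y = gaze["norm_pos"]  # normalized surface coords, origin bottom-left
            pyautogui.moveTo(x * screen_w, (1 - y) * screen_h)
```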
So I don't need to use the real-time screen gaze package that you used for Neon when using the Core?
Correct. The real-time-screen-gaze package essentially provides for Neon the same functionality that the Surface Tracker plugin provides for Core
OK thanks, I will test it and come back if I have some issues.
Hi! I'm trying out post-hoc calibration with manually identified referents to try to correct for some slippage we saw over our experiment. Is there a way to check that this is having a positive impact on fixation estimates? I can see fixation detection number before and after the post-hoc calibration, but I'm not sure if there's a better way to determine if the referent calibration is actually helping? Or to quantify how much it may be helping (e.g., more so for some participants than others)?
Hi @user-5a4bba! I understand that you are comparing fixations that were detected during the recording (online fixation detection in Pupil Capture) vs. fixations detected after the post-hoc calibration using the offline fixation detector in Pupil Player, is that right?
In that case, the two sets of fixations are not directly comparable, and the post-hoc calibration is not the only factor that might explain differences between the two. Keep in mind that although both online and offline detectors implement a dispersion-based filter, these have slightly different implementations, so online results won't always correspond with offline results. This can explain different fixation counts. You can read more about that here
To quantify your fixations' data quality you could look at the confidence value that is provided for each fixation. See also this relevant message: https://discord.com/channels/285728493612957698/285728493612957698/1117708961697759282
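For example, a small sketch for comparing the two fixation exports, assuming both contain the standard fixations.csv with confidence and duration columns (the export paths are placeholders):

```python
import pandas as pd

# Compare fixation counts and mean confidence between two exports,
# e.g. before (online) and after the post-hoc calibration
before = pd.read_csv("exports/000/fixations.csv")
after = pd.read_csv("exports/001/fixations.csv")

for label, df in [("before post-hoc", before), ("after post-hoc", after)]:
    print(f"{label}: n={len(df)}, "
          f"mean confidence={df['confidence'].mean():.3f}, "
          f"mean duration={df['duration'].mean():.1f}")
```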
Hi! I would like to know if there is a simple solution to automate the export of fixation data with fixed parameters (i.e. dispersion, min/max duration). Thanks in advance for your help!
Hi @user-24c29d , although Pupil Player does not perform batch exports of data, you can check out the community contributed plugins, where there is a third-party batch exporter that you could try.
Pretty sure I've seen your posts on reddit!
Very likely!
I have been posting a lot about my project recently.
Hi, does Pupil Core work properly with Mac? I am getting calibration errors.
Please get back to me.
Hi @user-698509! Yes, Pupil Core works properly with Mac. Can you please elaborate on what you mean by calibration errors? It would actually be helpful if you could share all the steps you've taken to set up the Core system prior to initiating a calibration. Screenshots/recordings are extra helpful!
Hello all, I am trying to do a study using the surface tracker plugin. In the study we give the subjects 15 images with four markers at the corners of each image. Each image has to be analyzed on its own, but to make things easier we thought of using the same markers for each image, since between two consecutive images there's a quiz with no markers, so we use the surface_events file to know when an image is displayed or not. However, we've seen that the surface recognition is not always accurate, as you can see in the screenshots from Pupil Player. Could it be because the sizes of the images are different, or is it something else? I can see the camera is not properly perpendicular to the monitor; could that be it? We have also turned down the aperture time since the monitor was too bright and the camera had difficulties finding the markers for the calibration.
Hi, @user-4c48eb - some notes:
A similar (but less error-prone) method which also doesn't require you to manually annotate and doesn't require separate surface definitions for each image would be to define a surface for the quizzes (using a different set of apriltag markers from the stimuli, but each quiz can use the same markers/surface as each other). That way, instead of looking for the disappearance of the stimulus followed by its next reappearance, you'd then be looking for the appearance of the quiz surface followed by the appearance of the stimulus surface.
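As a rough sketch of how the resulting surface_events export could then be segmented into per-surface visibility intervals (assuming the standard Surface Tracker export with surface_name, world_timestamp, and event_type columns; the path is a placeholder):

```python
import pandas as pd

# Pair up enter/exit events per surface to get visibility intervals
events = pd.read_csv("exports/000/surfaces/surface_events.csv")

intervals = []
open_enters = {}  # surface_name -> timestamp of the most recent "enter"
for _, row in events.sort_values("world_timestamp").iterrows():
    if row["event_type"] == "enter":
        open_enters[row["surface_name"]] = row["world_timestamp"]
    elif row["event_type"] == "exit" and row["surface_name"] in open_enters:
        start = open_enters.pop(row["surface_name"])
        intervals.append((row["surface_name"], start, row["world_timestamp"]))

for name, start, end in intervals:
    print(f"{name}: visible from {start:.3f} to {end:.3f} ({end - start:.2f} s)")
```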
Thank you @user-cdcab0 for the clarification and the quick response. I understand your last paragraph, but I need to track the gaze on the image. How can I use the surface on the quiz to see where the participant is looking on the image?
I'd also like your input on whether I should anchor the images to the corners of the screen or not. I'm looking to do an analysis like the one attached, but I'm not sure I could do it with the markers at the corners of the screen as the dimensions of the image depend on the dimensions of the screen itself. So, I think the best option would be to create different surface definitions for each image. That way, I wouldn't rely on the surface visibility event, but I'd have a separate file for each surface. What do you think?
And one last question: luckily the recording was pretty good, so all the surfaces are tracked correctly. To get accurate data from this recording, do you think I could manually export the data from the frames where only one image is present, deleting the previous definition of the surface and re-adding each time?
"How can I use the surface on the quiz to see where the participant is looking on the image?" You wouldn't. I only suggest the quiz-surface to provide more reliable surface event sequences in order to know which stimulus they are on. You will still need to define a surface on the stimuli themselves.
"The dimensions of the image depend on the dimensions of the screen itself" - I'm not sure I understand this, but I'd like to re-iterate one of my suggestions. Determine the height of the tallest image you have and the width of the widest image you have. Then add a black frame to all of your images so that all of them are as tall as the tallest and as wide as the widest. Now all of your images are the exact same size and you can display your AprilTag markers just as you are now (anchored to the corners of the images) without having to worry about surface warp.
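A small sketch of that padding step with Pillow (the stimuli/ and stimuli_padded/ folder names are placeholders):

```python
from pathlib import Path
from PIL import Image

src = Path("stimuli")          # placeholder input folder
dst = Path("stimuli_padded")   # placeholder output folder
dst.mkdir(exist_ok=True)

paths = sorted(src.glob("*.png"))
sizes = [Image.open(p).size for p in paths]
max_w = max(w for w, h in sizes)
max_h = max(h for w, h in sizes)

# Paste each stimulus centred on a black canvas of the common size
for path in paths:
    im = Image.open(path)
    canvas = Image.new("RGB", (max_w, max_h), "black")
    canvas.paste(im, ((max_w - im.width) // 2, (max_h - im.height) // 2))
    canvas.save(dst / path.name)
```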
"I think the best option would be to create different surface definitions for each image" - This certainly is a solution that would work, but I assumed you had a reason for avoiding it, like if there are many stimuli, defining a separate surface for each one could be very tedious.
"Do you think I could manually export the data from the frames where only one image is present, deleting the previous definition of the surface and re-adding each time?" - This would work too, but again, sounds very tedious to me.
Thank you very much, this was very helpful.
Hi there, I am researcher using the pupil core glasses. I am building a unique set up for a research project, and need to find a way to connect the cameras to USB. I am hoping to buy a connector/cord where one side of the wire should be USB and the other should be the type of connector suitable for these cameras. I have tried buying a few different USB to JST (I think that's the connector to the camera?) cables, but the camera connector is the wrong size each time. I'm attaching photos of what I am talking about. Is it possible to get the specs for that camera side connector, or do you possibly sell a USB to camera connector solution? Thanks so much!
Hi @user-8e1492, Rob here. I hope all is good!
Please see the next message below for the details
Hi @user-f43a29, fancy seeing you round these parts! Thanks so much! This is super helpful. Hope you are doing well!
@user-8e1492 my apologies. The connection needs to be JST type SH (1.0mm), 4 pin variety.
You can of course order your own, but you can also order these direct from us. We cannot make guarantees about 3rd party cables.
Just send an email to info@pupil-labs.com saying that you'd like to purchase JST to USB cables for Pupil Core.
Hey all, I'm with TeamOpenSmartGlasses & we're looking into eye+pupil tracking as a means of detecting the wearer's focus.
Does anyone have an assembled (or partially assembled) Pupil Core DIY for sale? Would like to save time building one, if possible.
P.S. Sorry if this is the wrong channel to ask!
Excuse me.
Is there a way to use Pupil Core on Windows 11? Last year, the application did not work on Windows 11.
Hi @user-5c56d0, thanks for getting in touch. Pupil Core's software works with Windows 11; my own system is also Windows 11.
Please try installing the software again, and open a ticket in the troubleshooting channel if the application doesn't work for you. We'll try to guide you through some debugging steps if that happens.
Thank you for your reply.
Hello, when trying to export a failed recording with Invisible using the Pupil Player, I get the following error message: "NumPy boolean array indexing assignment cannot assign 155640 input values to the 264126 output values where the mask is true" Is there anything I can fix to make it work?
Hi @user-0b4995 , the main error that caused the crash is above that one. It says that eye0_lookup.npy is missing. By "failed", do you mean the software crashed during the recording?
Yes, it crashed and we copied the data off the phone.
Also, since this is with respect to Invisible, I hope you don't mind that I will move these messages to the invisible channel.
Sure, no problem! Is there any way I can solve the issue?
I will check with the others. What data do you need exactly? You want to recover all data that was recorded until the point of the crash or you want to just recover a subset of data streams, like gaze+world video?
Also, have you uploaded this recording to Pupil Cloud? What is its status there?
Well, I just copied the eye1_lookup.npy and eye0_lookup.npy from another recording and it seems to work now! Skipping through the files, the gaze looks good.
Please be aware that this is not recommended practice. Mixing data from recordings can lead to unexpected results and potentially incorrect conclusions.
Correct! I will check the export, but since we coded this one manually and the gaze seems to make sense, I suppose this should work in this case. Thanks for your help!
I will still check and come back with a potential solution/explanation.
thanks rob!
Some quick DIY Core questions:
1. For the "exposed and developed color film negative or Edmund's 5-mm-diameter filter" part, is this the correct part? https://www.edmundoptics.com/p/5mm-diameter-optical-cast-plastic-ir-longpass-filter/41541/
2. I read somewhere that the accuracy of pupil diameter tracking heavily relies on your eye cam's resolution. Is the 720p Microsoft HD-6000 good enough for this, or do I need to modify the housing to accept another Logitech C615 (which is 1080p)?
Thanks!
Hi @user-14a4fc! 1. That filter should work! 2. If you're using the Pupil Core pipeline for pupillometry, resolution isn't typically an issue. As long as you have a clear image of the pupil with good contrast between the pupil and the iris, it should work! You can read more about the pipeline in this section of the docs. There are also examples that show you what an eye image typically looks like. If you want to grab some actual eye cam videos, head to our website and download an example recording. Good luck with your build!
Hi there, I was downloading your installation files for Core on Linux Ubuntu 22.04 LTS. I installed it with the Gdebi package installer and tried to run it via the command line. However, I ran into an error.
/opt/pupil_capture/libpython3.6m.so.1.0(+0x160a84)[0x7efd67160a84] /opt/pupil_capture/libpython3.6m.so.1.0(PyEval_EvalCodeEx+0x3e)[0x7efd6716106e] /opt/pupil_capture/libpython3.6m.so.1.0(PyEval_EvalCode+0x1b)[0x7efd6716109b] /opt/pupil_capture/pupil_capture(+0x372c)[0x55d7d5b8c72c] /opt/pupil_capture/pupil_capture(+0x3b1f)[0x55d7d5b8cb1f] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7efd67600d90] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7efd67600e40] /opt/pupil_capture/pupil_capture(+0x24fa)[0x55d7d5b8b4fa]
Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off(). Python will now terminate.
Hi @user-a79410 , instead of running it from the command line, can you try the following:
Let us know if that improves things. In addition, I have the following question:
Are you on a laptop with an Nvidia graphics card?
Could you please advise if you have other install instructions, or how to overcome that error?
Many thanks in advance.