Hi Pupil Team. I hope this message finds you well!
I am a beginner in eye-tracking technology, using Pupil Core for some research on human viewing behavior. I have some questions about the device's parameters and hope to get your help.
I appreciate your help.
Hi, @user-d7d74e - you'll almost always want to choose the highest resolution and the highest framerate. This will yield the most accurate and precise data for you in all circumstances. Lower resolutions or framerates would only be necessary in very specific and rare scenarios/configurations.
The heatmap visualization you reference could certainly use fixation data instead, but it would have to be adapted for that. As written, it assumes each data point has the same weight, which makes sense for gaze data because the sample frequency is more or less fixed (and so each gaze data point has the same duration). If you're only plotting data from fixations, you'd want to either:
1. Filter out gaze points that aren't part of a fixation, or
2. Build the histogram from fixation data directly and use the duration of each fixation as a weight
Option 1 is probably a little simpler
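If you go with option 2, a minimal sketch could look like this (assuming a Pupil Player fixations export with norm_pos_x, norm_pos_y, and duration columns; the file path and bin counts are placeholders to adjust for your setup):

```python
import numpy as np
import pandas as pd

# Load the fixation export (path is a placeholder)
fixations = pd.read_csv("exports/000/fixations.csv")

# 2D histogram over normalized coordinates,
# weighting each fixation by its duration
heatmap, x_edges, y_edges = np.histogram2d(
    fixations["norm_pos_x"],
    fixations["norm_pos_y"],
    bins=[25, 25],
    range=[[0, 1], [0, 1]],
    weights=fixations["duration"],
)
```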
Hi Pupil team! I would like to know the specific parameters of the eye cameras in your Pupil Core product, such as their focal length (mm) and sensor size. Can you tell me a way to get this information?
Hi @user-fafdae! Please contact [email removed] in this regard and we will follow up with the corresponding information.
Hi, I have Pupil wearable glasses and have been collecting data through Cloud. There, the gaze positions were not normalised and were in absolute values. But in the offline data I collected, norm_x and norm_y give normalised values. However, for my algorithm I need absolute values. It is just scaled and the reference is changed, but where do I find that scaling factor?
Hi @user-15edb3. Could you please first clarify which eye tracking system are you using? Pupil Core, Pupil Invisible, or Neon ?
Hi all, may I ask what is the reason for this offset? Got two users to do this both produce this offset.
Hi, @user-13b09a - are the simple colored circles the fixation targets and your expectation is that your heatmap circles would align with those? Can you tell us more about your configuration?
I use this
Ok, good. Looking at the markers on your display, a couple of things catch my attention.
I sent the relevant recording files and processing scripts to the email address [email removed] but have had no response yet.
When did you send it and from what email address? I'm unable to find your message.
It does not seem to have come through. For what it's worth though, you can load your recording in Pupil Player with the surface tracker plugin enabled. If the corners of the surface do not stay "glued" to the corners of the display for the duration of the video, that would indicate some issues with marker tracking and the tips in my previous response should be reviewed
Thanks for the reminder. I have also re-sent a copy from a different email address; please check for it. Thank you very much for your patience and help!
Hi, newbie to Pupil Labs Core here. I'm trying to set up the Python software from https://github.com/pupil-labs/pupil. The line python -m pip install -r requirements.txt fails to complete. How do I troubleshoot the cause? I have attached the error output as a file. Thanks
Hey @user-57290c. Welcome to the community! May I ask why you're running from source? The recommended workflow for most users is to run from our pre-compiled bundles: https://github.com/pupil-labs/pupil/releases/tag/v3.5 (scroll down to Assets).
Yes, you can ask. I initially thought that I had to build it from source, and I have subsequently installed version 3.5 as a Windows package so I can get my daughter up and running. However, I would like to be able to build it from source so that we have control over the sequence of starting a recording and then triggering visual stimuli, which is necessary for an experiment my daughter needs to do. I will investigate whether we can do what we require using API calls. I wondered if I needed to connect the recording device before building from source, or if the build environment using the D: drive was the issue. Thanks for your response.
You can do all you state, and more, with our network API. Check out these helper scripts to get started: https://github.com/pupil-labs/pupil-helpers/tree/master/python
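For example, starting and stopping a recording around your stimulus presentation could look roughly like this (a sketch with pyzmq, assuming Pupil Capture is running locally with Pupil Remote on its default port 50020):

```python
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default port

pupil_remote.send_string("R")   # 'R' starts a recording
print(pupil_remote.recv_string())

# ... present your visual stimuli here ...

pupil_remote.send_string("r")   # 'r' stops the recording
print(pupil_remote.recv_string())
```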
Hi all. What is the best calibration choreography when working with multiple screens? Does working with screens of different sizes (i.e., monitor and laptop) affect the calibration significantly?
We have two different setups we are using. One has two 27 inch monitors next to each other, where the monitors are slightly angled in the middle. The other setup has the same two monitors, plus a small laptop off to the side.
The sizes of the screens don't really matter, but their distance from the viewer might have a small effect. As long as they are all approximately the same distance away though you should be alright. I suggest using the screen marker calibration choreography, ideally with the display that is the largest and most centered.
Note that if you are planning to use the surface tracker plugin you will want to define a separate surface for each screen.
Perfect! That is helpful. Thank you.
Hi, by any chance has someone experienced this: I'm not able to format the numbers in the duration column of the fixations.csv spreadsheet; the numbers do not appear as decimals as needed (they should have at most one point per number). I racked my brain trying to fix this and couldn't. I don't understand why the data is exported like this, since I used the application (Pupil Core) again with the same spreadsheet application (Google Sheets), and the numbers were in the correct format. Please, if anyone in this community can help me, I will be forever grateful.
This is usually caused by your spreadsheet (Excel/Sheets) locale or region settings mismatching our CSV format. Specifically, it's a problem with how the '.' and ',' characters are meant to be interpreted, but the fix is simple.
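If it helps, you can also side-step the spreadsheet locale entirely and load the export programmatically; a small sketch with pandas (the file path is a placeholder):

```python
import pandas as pd

# Pupil exports use '.' as the decimal separator and ',' as the field separator,
# so pandas parses them correctly regardless of your system locale.
fixations = pd.read_csv("exports/000/fixations.csv")
print(fixations["duration"].head())
```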
Hi all, I'm having problems with the calibration and validation process. Why are accuracy and precision good right after calibration, but then the validation accuracy immediately afterwards is poor?
Calibration settings question: which setting should I use if I printed AprilTags (4) and am attaching them to the rectangular surface participants will be looking at? Also, is the sample duration here the same as the sampling rate? What is the sampling rate, and is there a way to adjust it?
If the surface your participants will be viewing is a display, then I'd suggest screen marker calibration using that display. Otherwise I'd use the single marker choreography with the marker placed on the surface they will be viewing.
The sample duration there indicates how much data is collected from each of the sampling points in the screen marker choreography. The sampling rate is the same as the video source framerate and can be controlled in the video source settings
Hello, my lab has a Pupil Core which was purchased a while back. We thought it was the newer version that can reach 200 Hz sampling for eye tracking data, but when we test it out we're only getting around 125-130 Hz. Is there any way to check whether the model we have should be able to reach 200 Hz?
I'm attaching a photo of the eyetracking camera in case that helps.
Hi @user-8e1492! Have a look at our previous message here: https://discord.com/channels/285728493612957698/285728493612957698/1121795050947481670. You should be able to adjust the frame rate of the eye cameras by going to the Eye Window > Sensor Settings > Frame Rate.
Kindly note that for 200 Hz you will need to use 192x192 resolution, and that you will need to set it for both eye cameras.
Hello, I would like to ask how I can get a proof-of-CE-marking document for the Core glasses?
Hi @user-c075ce! Please reach out to info@pupil-labs.com in this regard!
Hi all, I have a question about post-hoc calibration. There are two reference sections here, the first is the calibration at recording and the second is the verification at recording. When I "Set from Trim Marks" in the post-hoc calibration, should I include both sections or just use the first one?
Since you have recorded both, I would be inclined to use both. So long as you're sure the participant/wearer was gazing appropriately at the targets!
Is there a Python script out there that we could use to collect data from the eye tracker through PsychoPy?
Hi, @user-01bb59 - PsychoPy has built-in support for eyetrackers including Pupil Core. See: https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html
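As a rough sketch of the ioHub route (device path and defaults as per the linked docs; please double-check the runtime settings there, and note it assumes Pupil Capture is already running locally):

```python
from psychopy.iohub import launchHubServer

# Minimal config: register the Pupil Core eye tracker device with ioHub.
# Additional runtime settings (surface name, Pupil Remote address, etc.)
# are described in the linked documentation.
iohub_config = {
    "eyetracker.hw.pupil_labs.pupil_core.EyeTracker": {"name": "tracker"}
}

io = launchHubServer(**iohub_config)
tracker = io.getDevice("tracker")

tracker.setRecordingState(True)       # start streaming gaze into ioHub
gaze = tracker.getLastGazePosition()  # poll this inside your trial loop
tracker.setRecordingState(False)
io.quit()
```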
Hi! I have a question about the fixation detector in Pupil Player. Though I have successfully exported data previously, I am running into an issue where the fixation detector does not gather fixations from the entire video. I am mostly struggling because I have made sure that the parameters for detecting a fixation were the same as in the previous export. The only adjustment I have made to the video file was adding another surface. Is there any way to troubleshoot this issue?
Hi @user-904376. Would you be able to elaborate on what you mean by "the fixation detector does not gather fixations from the entire video"? A screen capture demonstrating the issue would be super helpful!
Hi, I'm receiving the following warning when running pupil capture: [WARNING] plugin: Failed to load 'pupil_labs_calib'. Reason: '[Errno 2] No such file or directory: 'C:\Users\Owner\Documents\MATLAB\PTBtransfer\NBackTaskBatteryPTB\LSL_Recording_Status.txt''. I have 'pupil_labs_calib' but this directory does not exist. How can I set the application to look in the proper directory?
Greetings, @user-c50c65! I don't recognise those file names. Are you running a custom plugin?
Do you known of anyone that ventured into an EyeLink->Pupil data conversion tool?
Hey @user-41f1bf! May I ask what your goal is with such a conversion?
Hey, why does enrichment creation in Pupil Cloud take so long to process when running? Also, what is the next step for heatmap creation?
Hey @user-968a24! It depends a lot on the number of recordings added to a project, and their duration. Also our algorithms can be quite computationally expensive. That said, sometimes you might need to refresh your browser window to see that the enrichment has completed! I have some follow-up questions, which eye tracking system are you using, and which enrichments have you run to generate a heatmap?
For example, specifying the minimum files a data folder must contain to be properly loaded into Pupil Player.
Seems like quite the undertaking! Recording format is documented here: https://docs.pupil-labs.com/core/developer/recording-format/#recording-format And I assume you have access to raw recordings if you need to examine specific files and their contents
like this
Also, in the past I remember people saying that 2D calibration produced better results in the long run. For me, 3D calibration with the eye model frozen after the participant rolls their eyes is good enough, so I am sticking with it.
Correct, 2D calibration is more accurate, and freezing the model improves this slightly. The main caveat is that this setup is very sensitive to headset slippage.
What do you mean by raw recordings?
Raw/original Pupil Core recordings
They might be useful if you need to look closely at data formats
Ownn, yes, I have a bunch of them
Hi! I already have a pair of Neon glasses, but I'm also interested in the Core product for future development. I can't find detailed info about the difference in eye tracking accuracy between these two products, nor a document showing the differences in features. Can you please help me?
Hi @user-8ddcc1! Pupil Core and Neon indeed rely on different approaches.
You can find the general information about the specs of these products on our website (Core specs and Neon specs).
Could you elaborate on your planned research? This will help us provide better feedback. If you are interested in discussing the differences between the two in more detail, feel free to send us an email at info@pupil-labs.com and we'd be happy to schedule an online demo and Q&A session to address all your questions.
Hey all! I'm trying to use https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking for saccade analysis, but I don't have any prior knowledge regarding this. Does anyone know a better and easier way to analyze saccade data from existing eye data? Also, if anyone is working on saccade analysis or any eye-hand movement analysis, please DM me. I greatly appreciate your help, thank you!
I'm looking for a python example code that captures video frames using the eye camera. I have the Core and VR add-on.
Hi @user-5251cb! Pupil Core and VR add-on cameras are UVC compliant; capturing and controlling them is relatively straightforward. You should be able to use OpenCV (cv2.VideoCapture). Would you mind sharing what you plan to achieve?
If you are planning to stream these cameras, I would recommend having a look at this project.
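If the cameras are exposed to OpenCV on your system, a minimal capture loop could look like this (the camera index is just a guess; you may need to try a few):

```python
import cv2

cap = cv2.VideoCapture(0)  # try 1, 2, ... if index 0 is not the camera you want
if not cap.isOpened():
    raise RuntimeError("Could not open camera - check the index and drivers")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```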
When I run cv2.VideoCapture(0), I encounter the following error. [email removed] global obsensor_uvc_stream_channel.cpp:159 cv::obsensor::getStreamChannelGroup Camera index out of range
Could you please provide more details? What OS are you using (Windows/Mac/Linux)? What versions of Python and OpenCV are you using? Do you have the Capture and Player bundles installed? If not, could you install them so the camera drivers get installed?
Also, have you tried other camera indices, and checked whether your OS has access to the cameras?
Hello all, where can I get some info on using Pupil Core with the new PsychoPy plugin? I need to know if it's supposed to automate my calibrations and recordings, or I still need to use my own code for all that?
Hi @user-98789c ! Have you seen https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html ?
Hello! I am trying to use pupil labs core software, using the set of pupil labs cameras that is used for tracking with the HoloLens, but in a custom configuration for a specific study (ie not mounted on the HoloLens) and only performing monocular tracking of my left eye.
When I configure the cameras such that the world view camera is mounted directly above my left eye, the world view is upside down in pupil capture. Unfortunately the camera needs to be mounted upside down in order to be aligned above my left eye.
I cannot seem to find a way to flip the world view camera in the software.
Any advice on how to do this rotation of the camera feed?
Alternatively, do I need to have the world camera positioned directly above/vertically parallel to my eye, or can it be offset?
Thanks!
Hi @user-9973bb π. Unfortunately, we don't have a feature to flip the world camera view in our software. Is there a specific reason you need it in that configuration? Technically, it's possible to change the orientation, but it would require a bit of hacking. The good news is that the world camera can be in any orientation or offset, as long as the calibration target is visible to it. The calibration will still be valid.
My development environment is Windows with Python 3.9.13 and OpenCV-Python 4.9.0.80. In Pupil Capture, the camera video is displayed correctly. I have tested camera indices up to 10000.
While surfing the web, I came across a suggestion to uninstall the device from the Device Manager and then run cv2.VideoCapture. I tried this approach. When testing with Pupil Core, no errors occurred. However, only the front camera is displayed correctly, and the eye camera appears as a black screen.
Do you mean the eye cameras are not properly displayed in Pupil Capture?
Good morning, I bought the Mac that you recommend for the Pupil but it doesn't have a USB port, will it work well with an adapter? Thank you!
Hi @user-4514c3 ! You could connect Pupil Core with a Mac Air using a USB C cable on both ends, using a type C hub with the provided cable (if so, I'd recommend using an active one) or with a USB A to C adaptor
Hi @user-9973bb! The camera can be offset, and it doesn't matter if it is upside down, although it might be easier for you to work with if it is properly placed. You could write a plugin that flips the scene camera image, or modify the source code.
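A very rough sketch of such a user plugin, assuming the standard Plugin API with recent_events (note it only flips the displayed image and does not touch camera intrinsics or gaze mapping, so simply calibrating with the camera in its rotated orientation, as mentioned above, is usually the easier route):

```python
import cv2
from plugin import Plugin


class Flip_World_View(Plugin):
    """Hypothetical user plugin: rotate the scene image by 180 degrees.

    Drop this file into pupil_capture_settings/plugins/ and enable it
    from the Plugin Manager in Pupil Capture.
    """

    uniqueness = "by_class"

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is None:
            return
        # Flip around both axes in place; downstream visualisations use frame.img
        frame.img[:] = cv2.flip(frame.img, -1)
```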
Hi team, happy new year! I am wondering if there are any existing resources I can look into for analysing pupillometry data and gaze data with Python, for data collected from multiple subjects. I have data for about 15 people across a few scenarios, and I would like to compare the differences in their eye responses between these scenarios and summarize whether all 15 subjects responded in a similar way. It is quite a task, but I hope there are examples I can use to pre-process and combine them. Thanks
Hi everybody, and happy new year! I am processing some data acquired via Pupil Core, using Pupil Player. I am looking for specific events in quite long recordings. Given that I have the timestamps of these events, I was wondering if there is any way (e.g. a plugin) to jump directly to these timestamps in Pupil Player, as this would save me having to search for them manually? Thank you @nmt!
Happy New Year, @user-825dca! I can see how that functionality might indeed be useful. That said, I don't think such a plugin exists. We do have a plugin API if you feel like contributing to our community repo
Hi @user-4514c3 ! In MacOS the bundles need to be started with administrator rights, have a look here.
Thank you!
Hey, Pupil Labs! I'm trying to use https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking for saccade analysis and I don't have any prior knowledge regarding this. Is this the one that should be used, or is there an easier or better method?
Hi @user-870276. As far as I know, this community contributed filter is reasonably popular. Although I've never used it myself. If you did want to implement your own filter, Pupil Core exposes a lot of data that you could use. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/936235912696852490
What I want to do is try cv2.VideoCapture in Python. However, I'm encountering the 'cv::obsensor::getStreamChannelGroup Camera index out of range' error. According to https://github.com/pupil-labs/pupil-docs/issues/219, it seems that I need to uninstall the Pupil Labs driver. Is this the recommended solution?
Hi @user-5251cb! You can certainly give that a go. Our Drivers can be re-installed easily enough by running Capture as admin. That said, if you don't mind running pupil capture in the first place, you can access the raw eye video frames using our Network API. This example script shows you how: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
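Condensed, the pattern in that helper looks roughly like this (please treat it as a sketch and double-check against the linked script; it assumes Capture is running locally and starts the Frame Publisher plugin in BGR format):

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Capture to start the Frame Publisher plugin
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
pupil_remote.send_string("notify.start_plugin", flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()

# Look up the SUB port and subscribe to eye (or world) frames
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("frame.eye.0")  # or "frame.world"

while True:
    topic, payload, raw_img = sub.recv_multipart()
    meta = msgpack.loads(payload)
    img = np.frombuffer(raw_img, dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # ... process img (BGR) ...
```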
Hi team, I started to plot the pupil diameter and I am wondering whether my dataset has a lot of noise in it, because compared to the example tutorial it still seems to have a lot of peaks, even though I have already filtered the data to only keep samples above 0.95 confidence. My chart is shown here.
Hi again @user-6cf287 - the sudden drops you can see in your signal are expected and most likely correspond to blinks. Since your confidence is high for this subset, you can move on with preprocessing this data by following the guidelines provided by previous literature (e.g., see here).
Hello Devs! I inquired a while back and forgot to follow up regarding a data lag issue I have been experiencing. We are running a driving simulator which has an incredibly complex architecture. Without going into the details, our driving simulator subscribes to a surface defined in the environment and uses the OnSurf boolean value to determine if the driver is looking at the specified surface. We implemented the ZMQ PUB-SUB protocol in C++, as the project is built in Unreal Engine. The eye data is fetched in the driving simulator on a parallel Unreal Engine Runnable thread. We experience a lag between the driver's actual gaze location (i.e., whether they are looking at the defined surface) and the OnSurf value we retrieve. This lag increases as our simulation trial progresses, so we are assuming that this is a computational issue. It is even more likely since the driving simulation occupies 70-80% CPU, ~50% RAM, and ~90% GPU. But we have very capable hardware: 64 GB RAM, an Intel i9 12-core processor, and an RTX 3080 with 10 GB VRAM.
It feels like we have reached a dead end with our project because we don't know how we can optimize our software. Since there is an increase in lag, we thought that reducing the buffer queue and dropping all older messages once new messages are received (like a queue) could be implemented. In fact, ZMQ already has functionality to do this, ZMQ_CONFLATE. However, this is not supported for multipart messages, which is how the Pupil API sends them. Note that in our software, we are only interested in the latest data whenever recv() is called. This should significantly reduce the computational issue we have (yes?)
Do you have any suggestions?
While briefly skimming through the Discord message history, I see that no one has experienced such a big lag (of a few seconds), only milliseconds. Could this not be a computational issue? I know I haven't even given you the details, but we just need some potential suggestions on what the next steps could be.
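For what it's worth, a minimal sketch of the "latest message only" idea on the consumer side (shown in Python with pyzmq for brevity; since ZMQ_CONFLATE doesn't work with multipart messages, the usual workaround is to drain the queue with non-blocking receives and keep only the newest message):

```python
import zmq
import msgpack

def recv_latest(sub_socket):
    """Return the newest (topic, datum) waiting on the SUB socket, or None if empty."""
    latest = None
    while True:
        try:
            topic, payload = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:
            return latest  # queue drained
        latest = (topic.decode(), msgpack.loads(payload))

# inside your polling loop (surface_sub is a placeholder SUB socket
# subscribed to your surface topic):
# msg = recv_latest(surface_sub)
# if msg is not None:
#     topic, datum = msg
#     gaze_on_surf = datum.get("gaze_on_surfaces", [])
```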
Hi @user-572e3e ! I assume that you run Pupil Capture on the same computer as the simulator and that you need to know whenever the driver gazes on the surface and the coordinates in realtime, is that correct?
Decrease Capture workload:
- You can run Capture on a second computer and then simply subscribe to OnSurf and the surface coordinates in the simulator; this way, the computational resources used by Capture won't compete with the sim.
- If you do not need fixations or other plugins in real time, you can disable the plugins you don't need.
Consuming the ZMQ messages: I am not sure how you implemented this, but here you have the Unity example and the Python reference as examples.
Optimising the simulator: the resources you mention seem great, but it seems the simulator is devouring them; is it well optimised? I don't have much experience with UE, but in Unity baking the lighting makes a huge difference. By the way, are you on UE4 or 5?
Thank you so much for the quick response @user-d407c1, I really appreciate it!
We are actually planning to also record and compute the cognitive workload of the user using the available physiological measures implemented by the eye-tracker such as fixations, pupil diameter, etc. So, it is not possible to disable them, unfortunately.
We are actually using modified version CARLA (https://carla.org//), which is a driving simulator used for autonomous vehicle research and is terrible in terms of optimization due to its goals of supporting diverse use cases. It is built in UE4.
Unfortunately, the Unity and Python implementations would differ vastly in UE4 because it does not natively support ZMQ or deserializing msgpack data, so we had to use third-party plug-ins (for ZMQ and to deserialize it), which have one SIGNIFICANT limitation: they only support string serialization/deserialization. So, we modified zmq_tools.py to send the dictionary with string key-value pairs (very similar to JSON) for the specific surface we have defined. I know this is a downgrade in performance, but it shouldn't be significant, no?
I have a few questions:
1. Setting up a different environment for independently running the Pupil eye tracker is viable for us, but how much will the latency increase compared to using one system?
2. Do you have any implementation in C++?
3. Why does the Pupil eye tracker send the topic and message separately (as a multipart message)? Is it because of the PUB-SUB model? Can the code be modified to send it as a single message, or does that break the code?
Hi @user-572e3e !
We are actually planning to also record and compute the cognitive workload of the user using the available physiological measures implemented by the eye-tracker such as fixations, pupil diameter, etc. So, it is not possible to disable them, unfortunately.
You can obtain fixations as a post-hoc parameter in Pupil Player, that's why I asked if you need them in realtime.
Ohh! I see, I did not know that.
Hmm I see, perhaps if you use a 3rd party computer for capture, you can get the ZMQ messages and transform them to something you can stream to UE (just thinking out loud).
Regarding 1, latency in this case would depend on how you connect those two computers, i.e. what network you use. If you use Ethernet, it should not be much.
You can obtain fixations as a post-hoc parameter in Pupil Player, that's why I asked if you need them in realtime.
Oh I see.
These are some great starting points, and I will have to think about this more when I am back in the lab tomorrow (it's actually 1 am now). I will get back to you tomorrow. Thank you so much!
Hey @user-d407c1, I just wanted to update you on the current developments in our research.
We were finally able to eliminate the lag of the surface data being received by the driving simulator. We tried so many solutions, and eventually one (a modification of your suggestion) worked!
We observed something really weird. The OnSurf values received by a Python [helper] script were real-time (when run in conjunction with the simulator), but not in our Unreal Engine game. Thus we figured that it may not really be a computational issue, but rather an issue with how UE handles ZMQ communications (despite the fact that it was running on a parallel thread). So, we created an intermediate Python script that receives data from Pupil, cleans and extracts the OnSurf value, and sends the boolean value to the driving simulator through a UDP socket. This way, we get rid of ZMQ's internal framework and its architecture of keeping a buffer of unread messages (which cannot be modified using ZMQ_CONFLATE as it's a multi-part message). This allowed us to get real-time data.
Note that we still couldn't pinpoint the EXACT issue, but our purpose is achieved. One strange thing to also note: if the intermediate [Python] script sent the OnSurf value to UE via a ZMQ PUB-SUB connection (instead of UDP), it wasn't real-time! (haha, what??)
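For anyone else reading along, the relay described above could look roughly like this (a sketch, not the actual script; the surface name "hud" and the UDP address/port are placeholders):

```python
import socket
import zmq
import msgpack

# Pupil side: subscribe to the surface topic
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.hud")  # placeholder surface name

# Simulator side: plain UDP, one small datagram per update
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sim_addr = ("127.0.0.1", 9999)  # placeholder address/port of the UE game

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload)
    gaze = datum.get("gaze_on_surfaces", [])
    on_surf = any(g.get("on_surf", False) for g in gaze)
    udp.sendto(b"1" if on_surf else b"0", sim_addr)
```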
Hey, I hope this isn't too simple a question, but I could really use your expertise. I'm working on a research project aiming to detect BPPV using pupil and head movement recorded from these glasses. Currently, I am looking to create a system where doctors can test users with the glasses and annotate the data/recordings, which will be stored in a server database. I'm wondering if there's a way to capture and record data from the front-end/client side, like in a React app. I'm a bit confused about it, and your feedback would be much appreciated.
Hi @user-5bac8b ! Are you using Pupil Core ? Or Neon/Invisible?
@user-d407c1 pupil core
Thanks for confirming. I am a bit confused. Do you want the doctors to use this frontend app rather than Pupil Capture? or both?
If they are going to run Pupil Capture anyway for the calibration, are you aware of the plugin system? Have you considered writing a plugin with a text box that includes this info and stores it in user_info.csv?
That way, the doctors would only need to use Pupil Capture and then you can upload the whole recording folder.
As a side note, may I ask how you record head movement?
Thanks for the reply @user-d407c1
I was thinking of using the Web Serial API to communicate with the glasses and record the video, if that's possible.
@user-5bac8b You can fully control Pupil Capture using our Network API. What you want to achieve seems a bit convoluted, so perhaps you'd want to work directly with the source code.
I know that you have Pupil Core, but have you considered using Neon? It comes with a 9-DoF IMU sensor, it is calibration-free, and to me it sounds like a kind of turnkey solution. Furthermore, you can use its real-time API to stream not only gaze and video but also the IMU data.
Thanks for your suggestions. I'll definitely look into it.
Hi team, I am not sure if this is within your scope of support; maybe others in this community could respond. I am able to pre-process the pupil data using the suggested steps, but my question now is what kind of statistical models can be used to compare changes in pupil dilation between participants and the different scenarios they were exposed to. So the question I am trying to answer is whether pupil dilation was greater in one scenario compared to the others, and whether this pattern exists across all participants. Thanks
Hi @user-6cf287! When it comes to comparing changes in pupil dilation among participants and in different scenarios, there are various approaches. For instance, some focus on extracting specific metrics within a predetermined time window, while others prefer cluster-based permutation analyses. The choice largely depends on your specific research questions and hypotheses. A good starting point is to explore previously published work that used similar paradigms or experimental setups. Following their analysis pipeline and understanding their reasoning behind it can provide valuable guidance.
A great starting point for understanding the different approaches and analysis techniques are the following papers:
- Mathôt & Vilotijević, Methods in cognitive pupillometry: Design, preprocessing, and statistical analysis
- Fink et al., From pre-processing to advanced dynamic modeling of pupil data
Hi! Can someone explain to me how to download Pupil Player? Thanks
Hi @user-71f1de! You can download it from here.
Thank you !
Hello,
I would like to obtain the coordinates of gaze on a PC monitor via the Network API. Is it necessary to use markers for this purpose? Additionally, if markers are used, is standard calibration also required?
Thank you!
Hi @user-2d96f8 ! That depends on how you plan to detect the monitor and translate the coordinates from the scene camera to the screen.
Regardless, Pupil Core needs to be calibrated to obtain accurate gaze data.
If you plan to rely on the surface tracker to track the monitor position, then yes, AprilTag markers are required.
Then you will need to subscribe to the surface coordinates topic.
If this is to control the cursor, have a look at previous community projects like https://github.com/emendir/PupilCore-CursorControl
If this is for an experiment, I recommend that you take a look at our PsychoPy integration, which simplifies this.
Thank you! I would like to select targets based on the gaze position on the screen, so it's similar to controlling a cursor. If I can display tags on the screen at all times, is it okay to use just the four tags on the screen without any physical tags?
Sure! In that case, you may want to check this package my colleague wrote to show April tags on the screen.
Hi, could you please explain how to subscribe to the surface coordinates topic from Python (or point me to the documentation)? I looked through the documentation at https://docs.pupil-labs.com/core/developer/network-api/, but couldn't find it.
Hey @user-2d96f8. This script shows how to receive surface-mapped gaze coordinates!
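Condensed, the pattern is roughly the following (a sketch assuming a surface named "screen" defined in the Surface Tracker, converting normalised surface coordinates to pixels for a 1920x1080 display):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.screen")  # placeholder surface name

SCREEN_W, SCREEN_H = 1920, 1080  # adjust to your display

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload)
    for g in datum.get("gaze_on_surfaces", []):
        if g["on_surf"] and g["confidence"] > 0.8:
            x_norm, y_norm = g["norm_pos"]
            # surface y axis points up; screen pixel y axis points down
            px = x_norm * SCREEN_W
            py = (1.0 - y_norm) * SCREEN_H
            print(px, py)
```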
I had one more question:
Basically, the purpose of retrieving the real-time OnSurf value was to design a timer that computes how long the participant/driver has been looking at the head-up display (HUD). In our pilot study, we observed that the timer was inaccurate: the timer would reset as soon as the gaze fell outside the HUD (as expected), but this reset frequently occurred unintentionally: as a result of the user blinking, the gaze would jitter (I don't know the correct term - I mean short-term irregular movement). I would assume this is an obvious issue with gaze mapping, since it cannot be determined when the user blinks. However, is there a way to "smoothen" the gaze, like the commercial Tobii eye tracker does? Is there an internal implementation/feature for it? For now, we just set an OnSurf-false threshold: the OnSurf value has to be false for more than 150 ms (roughly the average blink duration) to count as a gaze shift and reset the timer. Do you have any alternative recommendations?
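The debounce described above can be expressed as a small state machine; a sketch for illustration (the 150 ms grace period is the value from the message, not a recommendation):

```python
import time


class DwellTimer:
    """Accumulate time-on-surface, ignoring short gaps (e.g. blinks)
    below a grace period before resetting."""

    def __init__(self, grace_period_s=0.150):
        self.grace_period_s = grace_period_s
        self.dwell_start = None   # when the current dwell began
        self.last_on_surf = None  # last time on_surf was True

    def update(self, on_surf, now=None):
        """Feed the latest on_surf sample; returns current dwell time in seconds."""
        now = time.monotonic() if now is None else now
        if on_surf:
            if self.dwell_start is None:
                self.dwell_start = now
            self.last_on_surf = now
        elif self.dwell_start is not None and now - self.last_on_surf > self.grace_period_s:
            self.dwell_start = None  # gap too long: treat as a real gaze shift
        return 0.0 if self.dwell_start is None else now - self.dwell_start
```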
Hi, I would like to do a calibration, but the button is gray. Do you know how to fix it?
Can you please try logging out and back into the app and re-connecting the device? Is the adjustment now available?
Hi. I have a question about 3D eye model usage. As you may know, one can freeze the eye model. So,
1) Is it right to say that a frozen model will not compensate for frame slippage?
2) Is it right to say that you should freeze a model during calibration?
Thanks
Hi @user-41f1bf! 1. Correct 2. Incorrect. Freezing the model during calibration is not recommended. Freezing the model is really only necessary if you want to do pupillometry (see our best practices), or if you want slightly more precise gaze estimates post-calibration, but you must be sure to eliminate headset slippage
Hi, I am thinking of buying the Pupil Core eye tracker but I have a few questions:
Hi @user-f99202! Pupil Core needs Pupil Capture/Player to be installed on your PC. We support macOS, Windows and Linux, so it is not exactly "plug & play", but it is easy to configure.
With the purchase of Pupil Core you get a complimentary onboarding session, so that you start off on the right foot.
Should you encounter any hardware or software issues, we will assist you here or via email.
And regarding the estimated delivery time, 5-10 business days for order fulfilment and 3-4 days for shipping to the US.
Let us know if there is anything else you need to know
Hi, would you please help me add two eye trackers to a group? I have installed the plugin, but they cannot find each other in Pupil Groups!
Hi everyone! How to create a Plugin that inherits from the Surface_Tracker?
I am trying to update my screen tracker plugin from earlier versions.
1) Is the plugin API supposed to support multiple inheritance?
class ScreenTracker(Plugin, Surface_Tracker)
2) Is it supposed to support single inheritance from existing plugins?
class ScreenTracker(Surface_Tracker)
Best regards,
So, I successfully updated my screen tracker plugin. All I needed to do was figure out how to import all the dependencies.
For my use case, I chose not to use AprilTags because they could attract too much attention, as I am doing experiments with abstract black/white stimuli.
Hi there. We have a Pupil Core device that we are trying to record through Pupil Mobile, but it seems the mobile app is not detecting the glasses. I can't seem to find any troubleshooting advice online. Can anyone help?
We know it's not a hardware issue because the glasses work perfectly when wired straight into a laptop
@nmt it's Johnny Parr btw, hope all is good!
Hey @user-885c29. All good thanks! What's the name of the app you're trying to use with the Core system?
Now I will try to update my participant driven calibration with the new choreography api
Hello, I'm currently using a Pupil Core device on an Ubuntu system and encountering some issues during calibration with Pupil Capture. Despite following the instructions, the calibration marker remains stationary at the center, and it's unclear which key to press to confirm or exit the calibration process. Could anyone offer some guidance or tips on this matter?
Did you ensure good enough marker detection? For example, by controlling environmental light and video source settings.
Hi @user-ea64b5. Please share the capture.log file right after you've tried calibrating. Search on your machine for 'pupil_capture_settings'; inside that directory is the capture.log.
Hello everyone, I'm currently using a Pupil Core device on a Mac and attempting to remotely control it via the Network API. I've successfully used the API for recording and remote annotations by following the guidelines from the pupil-labs/pupil-helpers GitHub repository. However, I'm encountering difficulties with remote calibration. While I can initiate the calibration using "socket.send_string("C")", it concludes too quickly. I'd like to monitor the calibration process in real-time and conclude it with "socket.send_string("c")" once all markers are successfully calibrated. Could anyone guide me on how to track the calibration status?
For this point I would try to write a plugin with a custom calibration choreography
The best practices docs suggest that one should calibrate and validate to track good/bad calibrations. However, it is time consuming. I am considering using a participant-driven calibration, allowing participants to calibrate on demand as soon as I or they notice degradation in a real-time calibration feedback animation.
Hi @user-25a842. The default screen-based calibration choreography concludes once sufficient gaze has been sampled at each target location that's presented. Please try removing the 'calibration should stop' part of your code; that would only be needed if using the single marker calibration. Calibration does indeed learn the physical relation of the eye and scene cameras: more in this message: https://discord.com/channels/285728493612957698/285728493612957698/940271081934159910
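If you want to monitor the calibration status in real time, you can also subscribe to calibration notifications on the IPC backbone; a rough sketch (the notification subjects, e.g. calibration.successful / calibration.failed, are worth double-checking against the notification docs):

```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("notify.calibration")  # all calibration-related notifications

pupil_remote.send_string("C")  # start calibration
print(pupil_remote.recv_string())

while True:
    msg = sub.recv_multipart()
    notification = msgpack.loads(msg[1])
    print(notification["subject"])
    if notification["subject"] in ("calibration.successful", "calibration.failed"):
        break
```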
By the way, in regards to calibration with the Pupil Core device, does the process only calibrate the geometric relationship between the world camera and the eye cameras, or does it also take into account the positional information of the wearer's eyes relative to the eye cameras?
The 2D and 3D mappers account for the eye and world images; they don't take the 3D information of the participant's position in space into account.
This way it will be possible to calibrate only when necessary. Of course, it requires a really comfortable setup; too much movement in an uncomfortable setup would require a lot of calibrations.
Pupil Mobile
Hey, I am attaching a picture of the Pupil Core that our lab has. As seen in the photo, there is a triangular mount and an eye camera only for the right eye. I want to collect data for both eyes. Can someone suggest what to do in this case?
Hi @user-8db87d! You can contact info@pupil-labs.com requesting to modify it to include two eye cameras. That said, besides the scene camera and potentially the hub, not much can be reused. You can request a quote via that email, but perhaps it would be easier to get a new one with two eye cameras.
@user-cdcab0 , is your tracking strategy available somewhere?
No, it was just a tinker
FWIW, the Core scene camera can detect some IR light. In the lab where I did my doctoral studies we made AprilTag markers that were backlit with IR LEDs. It worked well enough for our study.
I am using affordable UVC-compatible cameras from ELP, a Chinese company. I removed the IR filter from the eye camera. For the scene camera, it is necessary to control the sensor exposure time and ensure enough light in the room for 30 fps HD.
On my i5 notebook with a discrete NVIDIA GPU, the eye camera is running at 90 fps at 640x480.
Do you know if Pupil Labs has any plans to fix the issue I opened about camera intrinsics?
Also, I think there is an error in your requirements.txt: I needed to downgrade the packages module to version 22.
Also, for some reason, I am getting no error messages. The world process crashes with my camera only when running from source.
Hey @user-41f1bf! At this point in time, we're unable to devote resources to updating the intrinsics estimation plugin or catching errors from third-party cameras. That said, we're always pleased to hear that users are still tinkering with the software and experimenting with different hardware. However, our main focus will largely be on ensuring stability with the stock Core hardware.
Hello, I have pupil video recorded via Pupil Capture/Service, but my world video was recorded via a different service (the calibration marker is in that world video). How can I import and sync the world video into Pupil Player so I can perform calibration and gaze mapping? I've tried placing the world video in the same folder as the pupil video as "world.mp4", but nothing shows up in the Pupil Player world window.
Also, my apologies for the naivete, but is it not possible to get gaze measures like fixations and saccades solely from pupillary data? Do you need calibration to map pupil data to world positions to get those measures?
Hi, bumping this thread in case it was missed. Happy to provide any further information
Hi there,
We are trying to compare the Tobii 4C to Pupil Labs Core in Unity to run some calibrations and short trials in 3D space, we are using hmd-eyes for this.
At the moment, we are a bit puzzled by the output data from hmd-eyes. I understand that (0,0,0) is at the centre of the camera; I assume the world camera? Or is it each eye's camera? Our other question is: how do we find out what units are used in the data files (e.g., a gaze point of -0.13, -0.18, 0.93 for one eye)? I can't find anything explicitly stating this. I assume the units are arbitrary, so we need to generate some points of known location and then transform the points into camera space (or vice versa) for x and y, then use the two eyes to work out the z position by intersection? If you could point us in the right direction that would be a great help!
Hi Jasmine!
HMD eyes is a wrapper of the Network API written in C# and ZeroMQ libraries. It is bundled into a Unity project together with a set of utilities to facilitate the integration within VR.
It is not exactly intended to be used with Unity3D outside a VR headset, as it assumes certain constraints, e.g. that the scene camera of the headset moves together with the head and eye cameras. However, you could reuse some components and access the data within Unity.
You may want to have a look at the HMD eyes docs, but also at the Pupil Core docs and the Network API.
Pupil Capture will provide you with the gaze data in the scene camera coordinates, but if you use the surface tracker and subscribe to the gaze on the surface (see this message https://discord.com/channels/285728493612957698/285728493612957698/1199577486330175539) you could obtain the gaze point in the "screen coordinates" as well, which then you could translate into Unity3D.
In the end, the two systems you want to compare are also different in nature (fixed eye tracker vs. wearable eye tracker). I'm not sure how well they will compare, but it is an interesting study.
The first half of our data collection has already been completed with the Tobii 4C and awaiting analysis, so this pilot will help us find what differences exist. I will let you know what we find!
I want to detect blinks in the Pupil Core eye videos. I'm aware of the built-in blink detection in the Pupil Player software, but I want to explore alternative solutions. Does anyone have advice on a robust detection method?
Hi
I am currently trying to analyse some recordings (from a Pupil Core headset) using Pupil Player. However, I am experiencing a serious problem with Pupil Player crashing. I have repeatedly tried to run post-hoc calibration and gaze mapping, but Player crashes at the end or as soon as I try to do anything else, and I have to force-close it from Task Manager. When I reopen the recording after restarting Player, it says the mapping was completed; however, no gaze position appears in the recording, and when I click export, the exported gaze position CSV file is empty. I have also found that Player crashes when I try to close the program, even if the only thing I've done is open a recording, wait a few minutes, then try to close it. The only thing of note that I can think of as a possible cause is that the recordings are quite long (approx. 20 min).
Do you have any suggestions as to how I can solve this issue and prevent pupil player from continuously crashing? Or is it the case that player just does not work with long recordings (which would be a real pain)?
Edit: I have just tested out player with a much shorter 5 min recording and player still crashes when I try to close the program. The max memory use is only 50% which occurs after the crash and CPU max is around 25%. I am using Pupil player v3.5.1 on Windows.
Can you share the player.log file? It will be in your pupil_player_settings folder.
Dear PupilLabs team,
I have been doing experiments with the Pupil Labs Core. Now I am trying to open the recordings in Pupil Player, but it immediately crashes. The experiments were done by a student of mine, and I am afraid that she started a recording in the software and then ran my script, which also starts the recording via the API and sets the pupil timestamp as described in this issue: https://github.com/pupil-labs/pupil/issues/1580
However, I need to find a way to access the data and get the gaze data, since the experiments have already been done.
Find attached a gaze_timestamps.npy file
I hope you can help me,
Kind regards,
Cobe
To add: there is no .log output. The experiments were done on a Windows machine (I have to look up the exact specs). Now I am working on a MacBook Pro M1 Pro.
[email removed] Our fixation filter does use
Hi @user-b91cdf ! After loading your recording into Player, there should be a log file being generated. Could you share it?
Hi, Pupil Player immediately crashes and the log file is not updated, hence it's not much help...
Thanks for following up. Do you mean the player.log file under pupil_player_settings does not get updated? Could you also share whether you are running Player from the bundle or from source, and what version you are running?
I am running Pupil Player from the bundle. Yes, I mean player.log under pupil_player_settings. I just downloaded the latest version v3.5 again and reinstalled.
Here is also the info.player.json from the Capture app:
Thanks for confirming. Could you please share one of these recordings that does not load with data[at]pupil-labs.com, such that we can further investigate this?
Simply upload them to any provider of your choice (Google Drive, OneDrive, etc) and invite that email.
sure - thanks!
Hello all, I'm trying to run Pupil Capture from source in the Ubuntu Touch emulator (a Linux-based system designed for phones; I'm using the emulator, not a real device). I installed all the requirements, but I get this error when I run "python3 main.py capture": b"The GLFW is not initialized". However, pip shows that GLFW is installed, and I even compiled it from source from the GLFW repository. Do you know by any chance what else could be done to solve the issue?