Could you share a recording with us that demonstrates the issue? Ideally, it would include the calibration choreography. Please share it with data@pupil-labs.com
Sure, I will do so tomorrow, but first I will try downloading the latest version of the Pupil software; this might be the issue.
How did you do that? Can you recommend some links or other resources?
You can try the attached example exp in PsychoPy. We have an integration FYI: https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html
I am getting this type of data after putting the recording in the pupil player.
Check out the exports folder. You can find more information on exports here https://docs.pupil-labs.com/core/software/pupil-player/#export
And in the export folder I am getting these files... so I am not sure how I can detect blinks from the pupil_positions.csv file.
Read more about the blink detector here https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector
okay..
These are all the steps in one message: 1. Run the blink detector in Player. 2. Export the data in Player to CSV. 3. Apply the tutorial linked here: https://discord.com/channels/285728493612957698/285728493612957698/1003549079248326758
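If it helps, here is a minimal Python sketch of what step 3 boils down to: removing pupil samples that fall inside exported blink intervals. Toy DataFrames stand in for the real CSVs, and the column names follow the Player export but should be verified against your own files.

```python
import pandas as pd

# Toy stand-in for the Player export (a real analysis would use
# pd.read_csv("exports/000/blinks.csv") -- column names are assumptions).
blinks = pd.DataFrame({
    "id": [1, 2],
    "start_timestamp": [10.0, 12.5],
    "end_timestamp": [10.3, 12.8],
})

pupil = pd.DataFrame({
    "pupil_timestamp": [9.9, 10.1, 11.0, 12.6, 13.0],
    "diameter_3d": [3.1, 3.0, 3.2, 2.9, 3.1],
})

# Mark pupil samples that fall inside any detected blink interval
in_blink = pd.Series(False, index=pupil.index)
for _, b in blinks.iterrows():
    in_blink |= pupil["pupil_timestamp"].between(
        b["start_timestamp"], b["end_timestamp"]
    )
clean = pupil[~in_blink]
print(len(clean))  # 3 samples remain after removing blink intervals
```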
@user-e91538 could you please also attach the jupyter notebook that we used to generate the plot above?
Found it on my disk 🙂
Thank you so much 🙂
One quick, possibly silly, question: to run "temporal-correctness.ipynb", do I need to install PsychoPy?
Check out https://jupyter.org/
Hi, I've got a problem with displaying the data from pupil invisible in pupil player. I can see this notification but don't know how to fix this. Can someone help? These are the notifications that I got.
Hi. We are running a research lab using Pupil Labs equipment and we ran into some problems that we were hoping you could help with. Problem 1: our Pupil Player is not working. Only a black screen shows up, not the gray screen to drag files into to play the eye tracking data. Secondly, we recorded a session and the recording had a glitch, so when we went to save the data, there were missing files in the recording folder. However, there are mp4 files for the eye and world videos and we would really like to salvage those videos. How could we do that? Thanks so much!
Hi @user-e91538 👋. Loading recordings from a shared drive account can be problematic. Could you move it to a local drive and try again?
I did download the entire file, and last time it also worked from a drive. Unfortunately, I can't get it onto a local drive anytime soon, so I am not sure if this can be solved another way. I have both a raw file and a normal file of it. The raw file does load, but doesn't show a video/recording. I tried loading the other file afterwards, but it gives a notification that the timestamps do not match up.
Hi @user-e8411e 👋.
1. Please try deleting the user_settings_* files in your pupil_player_settings folder.
2. If possible, please share the recording with [email removed] and we can take a look!
Sorry for the spam, but also, I could not locate the user_settings_* files in the pupil_player_settings folder. Would this be the pupil player windows folder here?
Hi Neil. I have shared the data with the address you sent me. Please let me know if you need additional information. Thank you!
Hi, the file is too big to share via email. Could we share it through google drive?
Hello, we think our Pupil Core headset has a hardware issue. The right eye camera repeatedly disconnects. It has been going on for a few months and only impacts the right eye. We tried downloading the newest version of the software but still having issues. Do you have a way to send it in for repairs?
Please reach out to info@pupil-labs.com with a description of the issue 🙂
Hey there, I was trying to get the gaze position data. Could you please help me?
Hi @user-2e1368 👋. In order to get gaze data, you'll need to calibrate prior to making a recording. Further details here: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration
Search on your machine, "pupil_player_settings". Inside that folder are the respective files.
It worked! Thank you. About the data salvaging problem, would you be able to give us a solution for that as well? Thanks again.
That's good to hear! We'll have a look tomorrow! A bit late in the evening here now 🙂
Perfect. Thank you again. Enjoy your evening.
Hello Pupil Labs, I am looking to buy Pupil Core glasses but I would like to chat with an expert before that. I already sent an e-mail to the team, but I am posting on the discord chat because I only have until Thursday to decide on the Eye Tracker that I want. Could someone contact me? Thanks!
Hi @user-488d6d 👋. We've just responded via email!
Hi, in comparing the fixations-on-surface csv files between subjects run using an earlier version of Pupil Player and the newer version, there is a difference. The older csv files list one fixation per row, with the "id" column running from 1 to however many fixations the participant had. The newer version lists fixations by some combination of fixation number and world frame, and "fixation_id" can appear in multiple rows for the same fixation. Can you please explain this difference? In playing around with copies of newer subject files, I was able to make the csv file look like the older format by removing the Surface Tracker plugin before running the fixation plugin, if that helps describe what I'm talking about. Thank you for any guidance you can provide!
Hi @user-990e57 👋 The fixations_on_surface.csv will contain one row per world frame for each fixation. So a long fixation, for example, will span multiple world frames but retain its individual fixation id. The position of the fixation relative to the surface could, however, change on consecutive frames; see norm_pos_x/y. Could you be referring to the fixations.csv export? This file will only contain one row per fixation id, but fixations are not mapped to the surface in this case.
Hi there Pupil, I've got a question regarding video-encoding. I noticed the world videos are either mjpeg encoded or h263. Is the latter done via quicksync or do you use a different way of encoding? What kind of CPU is recommended if we decide to record in h263?
And hi, welcome to the community! 👋
We use ffmpeg python bindings for the video encoding. For performance reasons, I would recommend recording the mjpeg video. I don't have any explicit CPU recommendations in this regard.
thanks, I appreciate your reply.
we will be doing recordings of 30-60 minutes per session, resulting in rather large files. Is the impact on performance big enough to justify this?
Recording mjpeg instead of h263 saves processing overhead in real time; h263 encoding may reduce the recorded frame rate. I doubt that the ffmpeg shipped with the bundle has Quick Sync support. You would need to install a custom pyav (the Python ffmpeg binding) as well as run Pupil Core from source.
Recordings of that length have a post-hoc performance impact in Pupil Player. I would recommend splitting the recording into chunks of 15-20 minutes.
Would it be possible for Pupil to utilize Intel Quick Sync via the ffmpeg encoding? Of course, this would require a supported CPU on the client side.
ok, great. Thanks for the help and the tips, much appreciated.
Thank you for your response, here are two screenshots to show you the difference I'm talking about. Sorry for not including these in the original question!
The files have different names, correct?
Hi, I am having some issues with the Pupil Core glasses: both eye cameras and the scene camera show an error saying the image data is corrupted, which is preventing me from recording. Do you know what may be causing this issue?
Could you please share a Screenshot of the error?
Please note that there is a conceptual difference between fixations.csv and fixations on surface xyz csv. The former represent fixations in scene camera coordinates. The latter are mapped to the corresponding surface.
Ah, okay, that makes sense. Yes, they do have different file names as they are from two different participant trials. Both were exports of fixations_on_surface csv (not fixations.csv).
Tomorrow, I can check how the norm pos for mapped fixations was calculated before we exported them frame by frame.
Thank you so much! That will be really helpful as the lead researcher is preparing for the final write up.
I found the old implementation. https://github.com/pupil-labs/pupil/blob/v1.0/pupil_src/shared_modules/offline_surface_tracker.py#L492
First, it collects all mapped fixations and then deduplicates based on fixation id (assuming that all mapped fixations are the same, which they are not if the surface moves -> norm_pos changes). The code above keeps the norm_pos of the fixation's last frame.
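For reference, that old keep-the-last-frame behavior can be reproduced from the current per-frame export with a one-liner (a sketch on toy data; the column names follow the fixations_on_surface export):

```python
import pandas as pd

# Toy per-frame fixations-on-surface data: one row per world frame,
# same fixation_id spanning multiple frames (column names as in the export).
df = pd.DataFrame({
    "fixation_id": [1, 1, 1, 2, 2],
    "world_index": [10, 11, 12, 13, 14],
    "norm_pos_x":  [0.40, 0.41, 0.43, 0.70, 0.71],
    "norm_pos_y":  [0.50, 0.50, 0.51, 0.20, 0.21],
})

# Keep only the last world frame of each fixation, matching the
# old single-row-per-fixation behavior described above.
dedup = df.drop_duplicates(subset="fixation_id", keep="last")
print(dedup[["fixation_id", "norm_pos_x"]].to_dict("list"))
# {'fixation_id': [1, 2], 'norm_pos_x': [0.43, 0.71]}
```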
Thank you so much, all of this has been very helpful!!
Quick question, can I run a 3m usb-c cable with a Pupil Core without issues? What's the transfer rate of the included cable?
Longer USB cables might cause transmission errors. I recommend using "active" USB extension cables that have their own power source. The included cable is able to transmit the full frame rate. If you are seeing fewer fps than expected, the cause might be insufficient CPU resources.
ok, thanks!
Hi again, I have some questions about tracking fixations, but to fully explain what my project intends to do, I'd have to share a video privately (it's a YouTube link). May I know who I can share the video with (and ask loads of further questions)?
Hi 👋 Please send the files to [email removed] for review.
Thanks Richard!
Hi there, I was wondering if anyone can help me. I used MATLAB to run an experiment with the Core. Now I have the raw data on graphs; the data is plotted x = time, y = pupil dilation (mm). The numbers are far too large for them to be in mm, and I was wondering if there was a scaling-factor issue I was missing. My supervisor ran the code to develop the experiment in MATLAB and he is not sure; I am hoping it's not a programming error! Any help would be greatly appreciated 🙂
Hi @user-25da3f 👋 Are you sure you are using diameter_3d which would be in millimetres provided by 3d eye model? diameter is in pixels, observed in the eye videos. You can read more about these exports in eye camera coordinates in our documentation: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv
Hi, I have my gaze_positions.csv data and would like to calculate the number of macrosaccades and microsaccades. I have been using the formula angle = arccos[(xa·xb + ya·yb) / (√(xa² + ya²) · √(xb² + yb²))] to calculate the angular difference between gaze positions (e.g., x = gaze point x, y = gaze point y, etc.). I got the degrees, but I wonder if this is the correct way to identify microsaccades (less than 1 degree) and macrosaccades (more than 1 degree)?
Hi @user-266adf See these previous messages in Discord for reference. Saccades: https://discord.com/channels/285728493612957698/285728493612957698/936235912696852490 https://discord.com/channels/285728493612957698/633564003846717444/966317102564778034 Microsaccades: https://discord.com/channels/285728493612957698/446977689690177536/766267166915559424
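For reference, the arccos formula from the question generalizes to 3D gaze vectors like this (a minimal sketch; the 1-degree micro/macro threshold is the one proposed in the question, not an official recommendation):

```python
import numpy as np

# Angle between two gaze direction vectors, following the arccos formula
# in the question above (extended to 3D vectors).
def angle_deg(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip guards against rounding pushing cos slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Consecutive samples less than 1 degree apart would be microsaccade-sized
v1 = [0.0, 0.0, 1.0]
v2 = [0.01, 0.0, 1.0]          # ~0.57 deg away from v1
print(round(angle_deg(v1, v2), 2))
```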
Anyone with experience investigating saccades before?
Hello, I am interested in using Pupil Core for my Ph.D. project, and I plan to buy a compatible motherboard (plus, possibly, an extension cable) for my setup. Could you let me know what the recommended spec of the USB connection is? For example, what is the USB version of the USB cables included with the glasses?
Hi! Pupil Core ships with a 2 meter USB-C to USB-A cable. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1004748082346463364
Hello, is there any tutorial or guide on how to plot gaze and fixation positions in matlab? I tried to do it, but the data obtained is not correct. I mean I recorded myself looking in a rectangular way through a screen but when I plotted the norm_x y and z and the scanning pattern is not correct. All gazes are shown around one point in the coordinates
Hi @user-eb6164 Which data are you using exactly? You can see a full breakdown of the gaze_positions.csv export here: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv
@user-e3f20f forwarding a message from @user-e91538 from the 🕶 invisible channel:
Hello everyone. For the analysis of pupil data, I tried running the code from 'https://pyplr.github.io/cvd_pupillometry/04d_analysis.html' but I am not able to run it. I am getting an error in the PLR processing. I don't know what to do. I want to preprocess the pupil diameter data but I am not able to. Does anyone have a solution for this?
Thank you for the reply! And what version is that? Because in this forum, one says Pupil Core uses 2.0 (https://discordapp.com/channels/285728493612957698/285728493612957698/690221394037964885), but another says the cable is 3.0 (https://discordapp.com/channels/285728493612957698/285728493612957698/887733449417437265), and I want to know how fast the transfer speed should be in order to avoid it to become a bottleneck.
Hi @user-53a74a, the standard Pupil Core configuration uses USB 2.0 technology, but we use a USB 3.0 cable because we also offer a Pupil Core configuration with a USB-C USB 3.0 connector instead of the world camera.
It should be USB 3.0. But I'm not 100% sure what the transfer speed is. @user-755e9e Can you confirm this?
Hello! Is there anywhere we could manually change the video settings when exporting in Pupil Player? I set 640x480 at 120 fps under the sensor settings in Pupil Capture, but the exported file has a sampling rate of only around 49 fps. I am using both programs at the latest version, 3.5.1. Thank you! (P.S. Could the eye video be exported alone?)
Hi @user-e91538 👋 Have you tried this tutorial on working with pupil_positions data? https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb Note the difference between diameter in the .csv export and diameter_3d. The former is in eye camera pixel coordinates, while the latter is in millimeters from the eye model. It may also be useful for you to know that we have a new plugin (not yet in the official release) for editing blinks to remove them from the export: https://gist.github.com/papr/b7e5f86bdad8eb9ee98723f8d5053f5f
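As a minimal sketch of the distinction (toy data standing in for pd.read_csv("exports/000/pupil_positions.csv"); verify the column names against your own export):

```python
import pandas as pd

# Toy pupil_positions-style data; "diameter" is in eye-camera pixels,
# "diameter_3d" is in millimetres from the pye3d eye model.
df = pd.DataFrame({
    "pupil_timestamp": [0.00, 0.01, 0.02, 0.03],
    "eye_id": [0, 1, 0, 1],
    "confidence": [0.98, 0.40, 0.95, 0.99],
    "diameter": [42.0, 41.5, 43.0, 40.9],
    "diameter_3d": [3.1, 3.0, 3.2, 2.9],
})

# Common preprocessing: drop low-confidence samples, keep one eye,
# and work with the millimetre column.
good = df[(df["confidence"] >= 0.8) & (df["eye_id"] == 0)]
print(good["diameter_3d"].tolist())  # [3.1, 3.2]
```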
Hi 👋 FPS can fluctuate depending on the specs of your computer and other processes you have running - it may not be able to reach the full 120Hz. What machine are you using?
You can export the eye videos alone. Just disable the other plugins in Pupil Player, but enable the Eye Overlay.
We are using Dell OptiPlex 7060 machines, one with an i7 and another with an i5. Both experienced fps fluctuations sometimes. Would this fluctuation occur in the middle of a recording, or would the rate remain the same once the recording starts? Our experiment is about 10 minutes but may extend to 20 minutes or longer.
Another question: we are using a Python script to send annotations to sync two Pupil recordings. The exported annotation file includes the index and timestamp (we kept using Pupil time). Could you explain how to interpret a Pupil timestamp? What is the unit? The timestamps in the annotation file and the world video file don't perfectly match; how do we find the frame for an annotation? Many thanks!
Hello! My pupil core headset isn’t seeming to give me the option to record audio. When I navigate the menus, the only option is ‘sound only’ and with that on, we don’t get a sound output file
Hi @user-b9005d Pupil Core does not record audio. We made the decision to remove audio capture when we released Pupil Capture v2.0. Unfortunately, this feature was too difficult to maintain across our supported operating systems. It is possible to collect audio using the Lab Streaming Layer (LSL) framework. This would provide accurate synchronisation between audio and gaze data, but takes more steps to set up.
I would like to track my gaze on my computer screen in order to make clicking/navigating around easier e.g. when editing code in my IDE. Is this a use case pupilcore would do well in? Essentially, I’d like to be able to code on a 13-15 inch laptop screen as normal with the addition of eye tracking to help me navigate through code and the like.
Hi @user-3e8682 👋 There are some existing implementations similar to this which I'm sure you will find useful. Cursor Control:https://github.com/emendir/PupilCore-CursorControl#readme Mouse Control: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py and https://github.com/trishume/PolyMouse (a bit old but still helpful) There is also an accompanying blog article for this https://thume.ca/2017/11/10/eye-tracking-mouse-control-ideas/ Surface Tracker: If you add April Tag markers to the corners of a monitor you can map gaze to screen coordinates using our Surface Tracker plugin. More details here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking PsychoPy Integration: We have a software integration for PsychoPy which uses Surface Tracking. https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html There is an Eye Tracking Feature Demo in PsychoPy's Builder view menu which you can use with Pupil Core to control the cursor or simulate with a mouse.
Hello, I have a question: do I need to use markers for AOIs to extract data (such as gaze points, fixations, etc.), or can I record without them?
Hi Reem! If you make recordings without using fiducial markers to define Surfaces, gaze data will then be in scene camera coordinates only. Markers are needed to map gaze to Surfaces/AOIs - it is important the markers are present in the recording as they cannot be added post-hoc.
So if we want to plot our gaze points, what will be our x and y coordinates? And would it be OK to record data without AOIs? Since participants will be driving a car, we do not want to add any distraction.
Hi @user-eb6164 Surface data is relative to the AOI in its own coordinate system. The origin is 0,0 at the bottom left of the surface to 1,1 at the top right. So x: [0.5, 0.5], y: [0.5, 0.5] is the surface center. If you don't want to use markers you can add annotations to your recordings to mark events with a timestamp when certain objects were looked at. Use the Annotation Plugin for this. Annotations can be added real-time in Pupil Capture, and post-hoc in Pupil Player. It may be useful for you to check out our publications database where others have had success with Pupil Core in driving scenarios. You can search the data base with keywords: https://pupil-labs.com/publications/
Also, we still ran into some issues related to image input: we tried another two Windows devices, and both received the error message "video_capture.uvc_backend: could not connect to device. No image will be supplied". Any troubleshooting ideas? Thanks! P.S. We tried unplugging/replugging, checked the USB driver, etc.
Hi! If you search the error message video_capture.uvc_backend: could not connect to device directly in Discord there's plenty of previous messages on this 🙂 e.g. see this message for reference: https://discord.com/channels/285728493612957698/285728635267186688/974253545287204874 It could be that the cameras have become physically disconnected.
Hi @user-219de4 fps can fluctuate during recordings depending on CPU resources available. Details on Pupil Time in our documentation: https://docs.pupil-labs.com/core/terminology/#_2-pupil-time The cameras are free running and received frames are assigned a pupil-timestamp. There's a tutorial in frame identification here: https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb
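The nearest-frame lookup from that tutorial can be sketched like this (toy timestamps; both streams are assumed to be in Pupil time, i.e. seconds on the same clock):

```python
import numpy as np

# Find the world-video frame whose timestamp is closest to each
# annotation timestamp.
world_ts = np.array([100.00, 100.03, 100.07, 100.10])  # world frame timestamps
annot_ts = np.array([100.02, 100.09])                   # annotation timestamps

idx = np.searchsorted(world_ts, annot_ts)
idx = np.clip(idx, 1, len(world_ts) - 1)
# Pick whichever neighbour is temporally closer
left_closer = (annot_ts - world_ts[idx - 1]) < (world_ts[idx] - annot_ts)
frames = np.where(left_closer, idx - 1, idx)
print(frames.tolist())  # [1, 3]
```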
I’m currently messing around with your camera intrinsic estimation plugin. Is there a way to have the undistorted image be the default recorded image? When I hit record, it records with the typical fisheye filter
Hi, this is not possible. Undistorting and reencoding the video is too taxing on the CPU. You can use the iMotions exporter plugin to undistort the video and gaze data post-hoc.
Hello, colleagues.
I have two questions: 1) Can you tell me if a recording folder can be given a different name, instead of "000"? How can this be changed? Is it possible to achieve a synchronization point by naming a file according to the system time of the computer? How asynchronous would such a time be to the real situation? 2) Are there any other labels (like QR codes) other than the square ones? They are not very convenient to use for a laptop screen, since the lower labels don't fit. And an additional question: what is the best way to align the monitor content recorded by the glasses with the screen capture recorded on the computer? Are the square marks positioned in the center?
Hi, there is no option to avoid the numbered folders. They ensure that each recording can be stored to a unique folder.
Creating a file on disk is a comparably slow action. You might want to read out the "system start time" field in the recording's info.json file. It contains the recording start time in system time (unix epoch).
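As a sketch of how that field can be used to convert Pupil timestamps to system time (the field names here follow the newer info.player.json format and are assumptions; verify them against your own recording):

```python
import json

# Toy info.player.json payload; "start_time_system_s" is the recording
# start in Unix epoch seconds, "start_time_synced_s" the same moment in
# Pupil time (field names should be checked against your recording).
payload = '{"start_time_system_s": 1660000000.123, "start_time_synced_s": 2075.5}'
info = json.loads(payload)

# Offset to convert any Pupil timestamp to system (wall clock) time
offset = info["start_time_system_s"] - info["start_time_synced_s"]
pupil_ts = 2080.0
system_ts = pupil_ts + offset
print(round(system_ts, 3))  # 1660000004.623
```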
What do you mean by the lower labels don't fit? Did you know that you can place the markers in any co-planar location with the screen that you want and later adjust the surface definition to fit your screen? https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Hello folks, I have a question about calibration. In my current setup, I place AprilTags at each corner of my monitor frame for surface tracking, but I would like to know whether it is possible to place them inside my monitor (i.e., my monitor will display them all the time). While this sounded like an easy implementation to me, I soon realized that during the calibration phase, Pupil Capture places markers at each corner via a temporary full-screen window. This makes my inside-monitor AprilTags disappear during the calibration, but does this affect the accuracy of calibration/surface tracking? Or, as long as calibration and surface tracking are successfully conducted independently, will this not affect the performance?
Hello, is there any unit that you use to represent pupil diameter?
Hi, see https://docs.pupil-labs.com/core/best-practices/#pupillometry
@user-53a74a hi! Gaze estimation (eye to scene) and surface tracking (scene to surface) are two independent mappings. It is OK for the surface markers to disappear during the calibration; it will not affect performance. When displaying the AprilTags, make sure to include a sufficiently large white margin around them (required for detection).
In the 3d gaze export, how is the program calculating a position in space if there is only one eye present? Ultimately, we are interested in calculating the visual angle on certain eye movements and are wondering which export variable would help us make that calculation
It assumes a specific distance (which is hard coded). You might be better off by looking at differences between consecutive circle_3d_normals in the Pupil positions export
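A sketch of that approach on toy data (in a real analysis, the circle_3d_normal_x/y/z columns from pupil_positions.csv would be the input):

```python
import numpy as np

# Visual angle between consecutive circle_3d_normal samples
# (normals are direction vectors in eye camera coordinates).
normals = np.array([
    [0.00, 0.00, 1.00],
    [0.02, 0.00, 1.00],
    [0.02, 0.05, 1.00],
])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Dot products of each sample with the next one -> cosines of the angles
cos = np.sum(normals[:-1] * normals[1:], axis=1)
angles_deg = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
print(angles_deg.round(2).tolist())
# each entry is the angular change between consecutive samples, in degrees
```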
Hey @user-e3f20f , just following up on this one
Hi @user-e3f20f , I was wondering what purpose does leaving a gap between the glasses and the user's forehead serve? We have been advised to do this in response to us wanting to improve our calibration method.
If the headset touches the forehead, slippage of the headset is more likely e.g. when the subject raises their eyebrows
Is anyone else having a strange thing where pupil player just closes when you drag a folder on it to open it? It's happening on both 3.3.0 and 3.5.1. Not sure if the folder is too big (it's a 30-minute recording)
Hi 👋 Could you share the player.log file after having attempted to open the recording in 3.5.1? You can find it in the pupil_player_settings folder.
Sure! Shall I send it here?
Yes, please
Thank you. There seems to be an irregularity with your recording. Could you share the list of files that are contained in it?
I just noticed that the recording is located on One Drive. We have gotten multiple issue reports that were related to that in the past. Please either move the recording out of the One Drive folder and/or disable the One Drive sync.
yes! I just tried the original recording from a folder outside it and it just worked. strange but I'll do this for now
thanks again!
No problem!
Hi, we run a test experiment with pupil invisible glasses. Now I am in another country, and I was trying to review the videos directly from pupil cloud, but when I started a preview on the site, it opened the window with a black screen, and the video didn't start. Any help?
Hi! This looks like a connection issue. Are you still experiencing it?
seems to work on another pc. Could be a sort of firewall?
Yes, from different networks also. I am downloading the raw data and the download speed doesn't reach 200 kB/s.
Hi @user-e3f20f, I saw in the chat that the unit of "gaze_point_3d_xyz" is mm. Is that also true for the norm x and norm y values of the gaze position data?
No, norm_pos is in distorted 2d coordinates. The unit is the field of view of the scene camera. 0,0 bottom left, 1,1 top right
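If pixel coordinates are needed, the conversion is a scale plus a y-flip (a sketch; 1280x720 is just an example scene camera resolution, and this ignores lens distortion):

```python
# Convert a norm_pos sample to scene-camera pixel coordinates.
# norm_pos has its origin at the bottom left, while image pixel
# coordinates conventionally start at the top left, hence the y-flip.
def norm_to_pixels(norm_x, norm_y, width=1280, height=720):
    px = norm_x * width
    py = (1.0 - norm_y) * height   # flip the y axis
    return px, py

print(norm_to_pixels(0.5, 0.5))  # (640.0, 360.0) -> image center
```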
Hello dear community, I noticed the last time I used the Pupil Invisible glasses that they got pretty warm by the earpiece. The usage time was about 14 min. Is it normal that they get so warm? Thanks for the help
It is normal that the glasses get warm on the right side, as this is where the electronics lie.
Hello @user-e3f20f, I have some questions. I was wondering, will this eye tracker still work with a computer's built-in camera, for example for someone who doesn't have the resources and money for the tracking hardware? Also, if so, will it be applicable to a 5x12, 5x12 resolution, say, if a doctor has to mark the coordinates of gazes in an MRI? Can you please help me answer these questions? Thank you😊
Hi, remote eye trackers, like the laptop cameras, are not supported.
So for this system, wherever it is used, the user must have the hardware, not just the laptop camera?
The important thing is that the scene camera is mounted to the subject's head, i.e. rotates and moves with it. A remote eye tracker works conceptually differently. Maybe the DIY option could work for you: https://docs.pupil-labs.com/core/diy/#diy
Hi, if we are mainly interested in measuring pupil size (alongside gaze position), do you think it's best if we freeze the pye3d model? Thank you!
If you can minimise headset slippage, then this is advisable. If you haven't already, check out the pupillometry best practices: https://docs.pupil-labs.com/core/best-practices/#pupillometry
Also, is there a way to open files with pupil player from google drive?
You'll have to download them to a local drive first
Hi. Are there any documents about Pupil Core's measurement uncertainty for fixation durations and fixation counts? I guess it depends on confidence, but I'm wondering how to describe it in research articles.
Hello, I noticed this error occurred, so I followed this (https://github.com/pupil-labs/pupil/issues/2088) and found out there was no uvc.dll. How can I get an uvc.dll? I already installed uvc 0.14. Any help is appreciated.
hey, which python version are you using?
I'm using python 3.6.8
And how did you install pyuvc?
I installed pyuvc using pip install -r requirements.txt
let me try to reproduce the issue
When I first used DependenciesGui.exe, there was no turbojpeg.dll, but I installed it later.
Please download and install this wheel using the --force-reinstall flag.
Thanks to you, the problem has been solved. Thank you so much!
Thanks for testing and the feedback!
Hi, we are presently testing with one of our computers and we are getting a notice on the world view that there is corrupt JPEG data. Fortunately we have a spare device, but how do we rectify this problem? Thanks, Gary
Hi @user-057596 👋. Can you double check that all of the wired connections (PC to headset USB) are securely fastened, and try again? If the error is still showing, reach out to [email removed]
Fixations are only computed using high-confidence data by default. When deciding on 'uncertainty' about fixation duration and number, it would be more important to consider the duration and dispersion thresholds selected by the user. They must be appropriate for the task you're recording. You can read more about that here: https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds. You can also read more about the fixation algorithm here (with link to publication): https://docs.pupil-labs.com/core/terminology/#fixations
Thank you Neil. These documents helped me a lot!
Will do Neil we are currently testing at the moment but will reconnect the laptop at the end of testing and try again. Thanks.
Hi, How can I get the gaze position data? I did calibrate and export and I got all the data except the gaze position data. Thank you!
I'm looking at the pupil timestamp and eye_id columns (exported pupil_positions.xls file). I'm interested in knowing the pupil size of both eyes at the exact same timestamp (i.e., what is the right eye doing while the left eye is fixating on something else, at the same time). How do I figure this out? I understand Pupil time is arbitrary, but it seems like each pair of identical timestamps is for one eye only.
Hi, can anyone help me? I'm trying to run pyuvc's example.py program, but I'm having the following problem and I don't know how to solve it (I'm using Linux Mint): module 'uvc' has no attribute 'device_list'
Hey, it is very likely that you did not install Pupil Labs's custom pyuvc version but a pre-built version from a different fork/source. Please make sure to follow the instructions in the repository.
Hi @user-ced35b 👋. Pupil Core's eye cameras are free running, which means two pupil samples (one from each eye) may not share identical timestamps. That said, both eye cameras use the same clock (Pupil time), and so you just need to find the closest temporal match for a given pair.
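With pandas, that closest-match pairing can be sketched with merge_asof (toy data; in practice you would split pupil_positions.csv by eye_id first):

```python
import pandas as pd

# Pair each left-eye pupil sample with the temporally closest
# right-eye sample (column names follow the pupil_positions export).
eye0 = pd.DataFrame({"pupil_timestamp": [1.000, 1.008, 1.016],
                     "diameter_3d": [3.1, 3.2, 3.1]})
eye1 = pd.DataFrame({"pupil_timestamp": [1.003, 1.012, 1.019],
                     "diameter_3d": [2.9, 3.0, 2.9]})

paired = pd.merge_asof(
    eye0.sort_values("pupil_timestamp"),
    eye1.sort_values("pupil_timestamp"),
    on="pupil_timestamp",
    direction="nearest",          # closest match in either direction
    suffixes=("_eye0", "_eye1"),
)
print(paired["diameter_3d_eye1"].tolist())  # [2.9, 3.0, 2.9]
```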
Got it! Thanks so much!
If you calibrated successfully, gaze data should be in gaze_positions.csv. Are you sure that file is empty?
Hey, thank you for your response. Please see the screenshot; there is no gaze data.
@user-2e1368 In addition to the above, make sure that you do not have the Post-hoc calibration enabled in Pupil Player. You need to have "Gaze from Recording" (see Gaze data menu) enabled to export the gaze data that was estimated during the recording.
Hi, colleagues! I need advice about a recording crash. How can I revive the files?
Unfortunately, it is not possible to recover this recording. It is missing crucial external timestamp information to align the video streams.
Hello, what is the difference between diameter and diameter_3d? And which one is better for analysis?
My motive is to check pupil changes, and I have found some code for diameter_3d; that's why I want to know if diameter_3d is better than diameter for finding out about pupil dilation.
Please see https://docs.pupil-labs.com/core/best-practices/#pupillometry
Hello, I am researching the gaze processing of pediatric patients using Pupil Core. In the case of Pupil Core, is there a way to display the direction of eye movement (the green point), as in Pupil Capture, on a computer monitor?
For that you need surface tracking https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking The easiest way to get the visualization working is to use our PsychoPy integration https://psychopy.org/api/iohub/device/eyetracker_interface/PupilLabs_Core_Implementation_Notes.html#pupil-labs-core
Hello. I am studying the data in gaze_positions.csv and pupil_positions.csv.
I understand that the 3d gaze and norm_pos in gaze_positions are generated from the data in pupil_positions, and I wonder what mathematical formula norm_pos is obtained by.
The pipeline looks like this on a high level
1. Use 3d pupil data as input
2. Transform the circle_3d_normal and sphere_center into scene coordinates (using the transformation calculated during calibration)
3. Find the approximate intersection of the gaze rays (eye_center0/1 + gaze_normal0/1) -> resulting in gaze_point_3d
4. gaze_point_3d is then projected into the image plane and normalised. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L171-L179
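To make steps 3 and 4 a bit more concrete, here is a simplified sketch of the "nearest intersection" of two gaze rays in plain Python. The ray values are made up, and the projection at the end is a naive pinhole version; the actual implementation in gazer_headset.py also handles edge cases and uses the scene camera intrinsics.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def scale(a, s):
    return [x * s for x in a]

def nearest_intersection(p0, d0, p1, d1):
    """Midpoint of the shortest segment between rays p0 + t*d0 and p1 + s*d1."""
    w = [x - y for x, y in zip(p0, p1)]
    a, b, c = dot(d0, d0), dot(d0, d1), dot(d1, d1)
    d, e = dot(d0, w), dot(d1, w)
    denom = a * c - b * b  # zero if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q0 = add(p0, scale(d0, t))  # closest point on ray 0
    q1 = add(p1, scale(d1, s))  # closest point on ray 1
    return scale(add(q0, q1), 0.5)

# Two hypothetical gaze rays (eye_center + gaze_normal) in scene coordinates
gaze_point_3d = nearest_intersection(
    [-30.0, 0.0, 0.0], [0.1, 0.0, 1.0],   # left eye ray
    [30.0, 0.0, 0.0], [-0.1, 0.0, 1.0],   # right eye ray
)

# Naive pinhole projection; the real pipeline projects with the
# scene camera intrinsics and then normalizes to image coordinates
x, y, z = gaze_point_3d
projected = (x / z, y / z)
```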
Hi, I know this is a very basic question, but I'm new to using Pupil Core and just started setting it up to experiment and learn how it works. I'm using it on a MacBook Air, and I can't seem to get any video to appear in the Pupil Capture windows. The World and Eye windows are both grey. Could you please help me find out what I am doing wrong? Thank you so much!
Hi @user-edaf68 👋. You'll need to start Capture with admin rights. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/995974229071777873
It's working, thank you so much!!🤩 I'm so excited to start!
Hi, everyone. I am thinking about using Pupil Labs together with a Microsoft Kinect in the same environment. Will the IR sensors of these two devices interfere with one another? Does anyone have experience doing this? Thank you in advance
Hey! As long as there isn't an IR reflection directly on the pupil edge, you should be fine.
Hi, could you provide the Pupil Core user manual as a PDF file? I cannot find it, although the user manual is visible on the website.
Hi, we don't have a PDF version of the documentation, sorry. But you can get a copy of the docs and browse them offline here: https://github.com/pupil-labs/pupil-docs/
Thanks
Hello, looking at my pupil_positions.csv file, what is the difference between the diameter column (which gives me very high values ranging from 24-40) and the diameter_3d column (which gives me more realistic values for pupil diameter)? I thought both columns were in mm when looking at the pye3d method. Do I use the diameter_3d column if I am interested in pupil diameter? Thanks in advance!
https://docs.pupil-labs.com/core/best-practices/#pupillometry please see the docs
Hi, I was using Pupil Player just fine a while ago but it suddenly stopped working -- crashed every time I tried to load a recording (the same recording).
I've tried restarting the laptop a few times and even reinstalled the pupil labs software but the problem persists. Any idea what's going on?
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
File "launchables\player.py", line 624, in player
File "plugin.py", line 409, in __init__
File "plugin.py", line 441, in add
File "observable.py", line 368, in __call__
File "gaze_producer\gaze_from_offline_calibration.py", line 201, in init_ui
File "gaze_producer\ui\select_and_refresh_menu.py", line 65, in render
File "gaze_producer\ui\select_and_refresh_menu.py", line 83, in _render_item_selector_and_current_item
File "gaze_producer\ui\select_and_refresh_menu.py", line 90, in _on_change_current_item
File "gaze_producer\ui\storage_edit_menu.py", line 81, in render_item
File "gaze_producer\ui\gaze_mapper_menu.py", line 66, in _render_custom_ui
File "gaze_producer\ui\gaze_mapper_menu.py", line 114, in _create_mapping_range_selector
File "gaze_producer\gaze_from_offline_calibration.py", line 224, in _index_range_as_str
File "gaze_producer\gaze_from_offline_calibration.py", line 229, in _index_time_as_str
IndexError: index 5941 is out of bounds for axis 0 with size 5813
player - [INFO] launchables.player: Process shutting down.
Thank you for your response. I enabled the Post-hoc gaze calibration in Pupil Player and got this message: "Fixation detection: No data available to find fixation"
You need to disable post hoc calibration
I will look into it tomorrow
Never mind. It miraculously started working again today. 😒
finally, I got the gaze data. please, what about the head_pose_tracker_model data? there is no data. Thank you!
Have you followed the instructions for creating a model?
Yes
Could you share the recording folder with [email removed]
Yes, I will. Thank you so much!
Hello Team, I have already used your "annotations.py" file to do remote annotations on my Eye data but is there any way to do so with Matlab? Any annotations file for Matlab?
See https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Hello! My study design requires using Pupil Core at 120 fps. We are currently seeing fps fluctuations during recording and only reach about 100 fps on average. Do you have any system recommendations for optimal recording? Thank you.
Hello everyone, is there a sales team in China here? I am a hardware engineer at MiHoYo(the company that developed Genshin Impact), and we hope to get a live demonstration and support from the sales team in Shanghai. is it possible?
Please contact sales@pupil-labs.com in this regard
The M1 Macs perform best. Otherwise, try to get a CPU with high clock speeds (> 2.8 GHz)
Thanks for your advice; we got a quote for a desktop with the following specs. Do you think it's feasible for improving recording quality and maintaining 120 fps as our designed setting? It has a 10-core i5 at 3.7 GHz, 32 GB of RAM, and an NVIDIA GTX 1660 Super display adapter.
Hello, I am using Pupil Labs cameras for eye tracking and would like to know if there's a way to synchronize the timestamps of the two eye videos (left eye and right eye). My other issue is that I would like to open the post-hoc mp4 video in Matlab to look at it frame by frame, but I did not find a way. Any solution? Thanks
Hi! Each video has external timestamps that can be used to synchronize the frames between videos.
If matlab does not have an accurate way to extract frames, you can convert the video to a sequence of images using ffmpeg. See this tutorial https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
Thanks for your answer. Is there any timestamps synchronisation option in the pupil capture software?
Do you mean with another non-Pupil-Capture clock? Yes. You can either adjust the realtime clock or perform the sync to Unix epoch post-hoc
does this ffmpeg also works in windows?
yes
I mean is there any way that the Pupil Capture could already synchronize the time of each frame between the 2 videos?
Ah, you mean whether it is possible for the n-th frame in each video to have the same timestamp as in the other? This is not possible because the cameras run independently. There is no hardware synchronization between them.
Not necessarily. What I wanted to do is find the closest frames to each other in the two videos. If there's no way to do this through the Pupil Capture software, then I will do it using the external timestamps of each video. Thanks for your help!
after calibrating, Pupil Capture tries to find matching pupil data. You can find that information as part of the gaze data's base_data field. But that matching is not as accurate as matching timestamps explicitly post-hoc. I recommend you go with the second approach.
Hi! I have a quick question regarding confidence. In pupil_positions.csv, there are two columns: “confidence” and “model_confidence.” Am I correct to assume that the “confidence” column applies to 2D pupil detection and “model_confidence” applies to 3D pupil detection? I am just trying to filter out low confidence values but I’m wondering if values under the “confidence” column are even relevant/accurate for 3D pupil data (such as the “diameter_3d” column). Thanks in advance!
Hey, your assumptions are correct, and the confidence values are relevant. Low confidence will likely lead to low model_confidence, but it does not have to; likewise, model_confidence can be low while confidence is high. I recommend removing any samples with low confidence and/or model_confidence.
Ok sounds good! Thank you for the quick reply
Hello, I have a pupil core. Can I use this code for pupil core? If so what do I need to change in the code? https://github.com/pupil-labs/realtime-matlab-experiment
Hi @user-ef3ca7. You'll need to use the Core Network API. Check out our Matlab helpers for Core here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
Hello guys, I have a question about the exported surface fixation detection CSV file. After doing my analysis, I found that my data looked inconsistent. Then I realized that the CSV file contains some fixations below 100 ms. When I checked the settings, I saw that the minimum fixation duration is set to 300 ms (which I should change). So why do I have results for fixations below 300 ms if the minimum duration is set to 300?
Could you share this recording such that we can reproduce this issue?
Sure where should I send it?
I don't understand the items below. I wonder how the intersection is found and what mathematical formula is used to get gaze_point_3d.
Thank you for kindly pointing out the gaps in my knowledge. I'm also looking at the gazer_headset.py code that you pointed me to.
self.rotation_matrix = self.eye_camera_to_world_matrix[:3, :3]
self.rotation_vector = cv2.Rodrigues(self.rotation_matrix)[0]
self.translation_vector = self.eye_camera_to_world_matrix[:3, 3]
I want to know the value of eye_camera_to_world_matrix. How is this obtained?
This matrix is calculated during calibration and is published as part of the calibration.setup notification (subscribe to notify.calibration.setup to receive it)
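To illustrate how that matrix is used (the matrix below is made up; the real one arrives via the calibration.setup notification), here is a small plain-Python sketch of the decomposition shown in gazer_headset.py:

```python
# Hypothetical 4x4 eye_camera_to_world_matrix: rotation R in the top-left
# 3x3 block, translation t in the top-right 3x1 column
M = [
    [0.0, -1.0, 0.0, 20.0],
    [1.0,  0.0, 0.0, -5.0],
    [0.0,  0.0, 1.0, 10.0],
    [0.0,  0.0, 0.0,  1.0],
]

R = [row[:3] for row in M[:3]]  # rotation_matrix
t = [row[3] for row in M[:3]]   # translation_vector

def transform_point(M, p):
    """Apply the homogeneous transform to a 3d point (rotate + translate)."""
    x, y, z = p
    return [M[i][0] * x + M[i][1] * y + M[i][2] * z + M[i][3] for i in range(3)]

def transform_direction(M, d):
    """Directions (e.g. circle_3d_normal) are rotated but not translated."""
    x, y, z = d
    return [M[i][0] * x + M[i][1] * y + M[i][2] * z for i in range(3)]

sphere_center_world = transform_point(M, [1.0, 2.0, 3.0])
gaze_normal_world = transform_direction(M, [0.0, 0.0, 1.0])
```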
I think the content is too hard to understand at the undergraduate level.
Can't we just take the data shown in gaze_positions.csv and pupil_positions.csv, and then convert gaze_point_3d to norm_pos?
gaze_positions.csv already includes the norm_pos equivalent for gaze_point_3d. 🙂
What is your final goal? I might be able to give pointers on how to achieve it.
Thank you so much for your care. You must be busy, but I'm sorry that my low-level questions seem to get you in trouble.
My final goal is to perform gaze estimation without the Pupil Core software and map the gaze point (norm_pos) onto the screen using the 3d gaze point obtained through it.
To do that, I wondered by what formula the 3d gaze_point data is converted to norm_pos inside Pupil Core itself.
What do you mean by screen?
I know that the pink circle shown on the world camera is the normal_pos value.
It means gaze_point on the world camera.
@user-e3f20f You notified me that my right eye camera is not working. Is there a quick way to fix this? Since I have multiple measurements today?
Unfortunately not. We believe this to be a hardware issue. Our hardware support team will be following up as soon as possible. I am very sorry.
All right, no worries. Thanks for the reply
Hi all, I have a question regarding eye-tracking data that I have collected with Pupil Core (glasses) and eye-tracking metrics (using Pupil Player). We downloaded the fixations.csv file and recorded the fixation when an AOI appeared and the fixation when the AOI disappeared. We see a big difference (43.8%) when we calculate the duration of the AOI being shown based on 1) timestamp of first and last fixation vs. 2) the sum of the duration of the fixations.
I am trying to better understand what this part (43.8%) is that is not a fixation. Therefore, my question is what this 43.8% could consist of. Is this, for example, missing data, gazes that are too short to count as a fixation (the shortest seems to be 80.69 ms; btw, is this used as the default threshold?), saccades, or anything else?
Hi all,
I am completing my PhD at WSU. I am trying to integrate the Pupil Labs Core with an AR headset (HoloLens 2)
I can't seem to install the zmq dependency.... I have pip installed it and it shows up in the pip list, but when I run the program (pre-existing program) - I get an error - Module not found.
What are some possible causes?
I was thinking it could be that the system runs in a virtual environment, so installing to base Python will not work?
Sorry about the vague question... I am still learning how to program 🙂
May I ask how you integrated pupil core with hololens2? I am currently facing this problem
Yes, you need to activate the virtual environment first and install the dependencies into it.
Hi all, I have two types of Pupil Core. One has a pair of smaller eye cameras and it looks the same as the illustration in the documentation. The other one looks the same as the advertising pic. Do they work differently? Is there any difference in spec between them?
The only difference is their available resolutions and frame rates
Could you specify which data you use as input for your calculations? Does this ratio stay roughly the same for all your recordings?
Sure! Here is an example. Id = fixation id. I calculated the time of last fixation (ID 1047) minus the start time of the first fixation (ID 1034) (*1000 for milliseconds to more easily compare the results). Additionally, I calculated the result for fixation duration by summing up the milliseconds. See the result in bold. This is a pattern in all my videos (I have about 160 different videos with a similar pattern).
Hello, I am designing an experiment in PsychoPy into which I have integrated the Pupil Labs eye tracker. I am going to present a couple of pictures (stimuli) through PsychoPy and want to monitor the eyes' reaction when looking at each stimulus.
I have attached a picture showing how I get the start and end times of a stimulus from the PsychoPy-generated CSV file, but how do I find the corresponding timestamps in the pupil_positions.csv file? That is, I want to know which part of pupil_positions.csv falls within the first stimulus' start and end times (annotations for both start and end time).
It will be really helpful if you can let me know
Hi! Did you use the iohub integration or your own?
I have used the ioHub integration
This integration performs time sync in realtime and applies it to the data that it receives. Pupil Capture is not aware of PsychoPy. I recommend enabling the "Save HDF5" option in the "Data" tab of your experiment settings. This will store the gaze and pupillometry data as part of your psychopy data, in psychopy time and space.
How can I access the HDF5 data?
Dependencies:
pip install pandas tables
import pandas as pd
recording_hdf5 = pd.HDFStore(path_to_hdf5_file, "r")
binocular_data = recording_hdf5[
"/data_collection/events/eyetracker/BinocularEyeSampleEvent"
]
monocular_data = recording_hdf5[
"/data_collection/events/eyetracker/MonocularEyeSampleEvent"
]
binocular_data.to_csv("binocular_data.csv")
monocular_data.to_csv("monocular_data.csv")
Thank you so much
hello
I need help with exporting the files I recorded
on the website it says that the files are supposed to be saved in a folder named Exports
where is that folder exactly?
Also when I open the player and type "e" I get the following message: " player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those."
can someone help please?
@wrp
@user-d72f39 within the recording folder will be a folder named exports (see screenshot for example)
is that from the player or the recorder please?
This screenshot shows a "recording" folder that was created with Pupil Capture and processed with Pupil Player.
Yes, this indeed looks like there are too few fixations. It is difficult to tell what the cause is without having access to the recording. Could you share it with [email removed]
Do you know what a list of potential causes could be? I can see whether I can share an example recording.
this is the same issue as mine. I sent you again the link and also attached data of surfaces only in case the link is not working
I’m a bit confused by these variables. Since there are no combined outputs for both eyes, which eye is being used to generate these coordinates? Also, how do these coordinates translate to 3d space? Is the (0, 0, 0) point the center of the eyeball?
Pupil data is defined in the corresponding eye camera coordinate system. You can identify left/right eye by the "eye id". 0,0,0 is the origin of the eye camera. The circle_3d_normal originates in the sphere_center. You can read more about 3d coordinate systems in Pupil Core here https://docs.pupil-labs.com/core/terminology/#coordinate-system
Hi @user-e3f20f, I am collecting gaze position data for a lot of trials and creating surface tracking boundaries to let me know when/how many times the participant is not looking at the designated area. Is there a way to copy/standardize the location of the surface tracker boundary to all the other trials so that I do not have to create a surface for every trial?
Yes, set up the surface definitions on one recording and then copy the surface_definitions_v01 file over to the other recordings.
I have another question regarding blink detection. Is there a way for data that is classified as a blink through blink detection to be excluded from the exported data files (specifically, gaze_positions_on_surface_...)?
There is no way to exclude that data before the export. But this tutorial shows how you can merge blink data with other exports https://github.com/pupil-labs/pupil-tutorials/blob/master/10_merge_fixation_and_blink_ids_into_gaze_dataframe.ipynb Once merged, you can easily discard the blink data
Hi. Is the Pupil Mobile App now unavailable on Google Play? Anywhere can I download the apk?
Pupil Mobile has been discontinued for a while now. We recommend running Pupil Capture on a tablet instead.
Okay, thanks for your information.
Hi, I'm in Korea and I want to buy an eye camera. Will I have any customs clearance issues?
Please contact info@pupil-labs.com in this regard
Hi, anyone have tried gaming portable PCs like Ayaneo with Core for portability? https://www.ayaneo.com/
Hey, please see the screenshot of the blinks file. I have 22 blinks, and there is a start time, duration, and end time. My question is: how can I adjust the start time so that it starts from the first second instead of 2214.4019? Thank you!
Check out the info.player.json file. It contains the "start_time_synced_s" field. Subtract that value from your "start_timestamp" and "end_timestamp" columns to get the corresponding times since recording start.
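A quick sketch of that subtraction in Python (the blink timestamps below are made up; read the real start_time_synced_s from your recording's info.player.json):

```python
import json

# Stand-in for reading info.player.json from the recording folder
info = json.loads('{"start_time_synced_s": 2214.4019}')
t0 = info["start_time_synced_s"]

# Hypothetical start_timestamp values from blinks.csv
blink_start_timestamps = [2214.4019, 2220.95, 2231.10]

# Times relative to recording start, in seconds
relative = [round(ts - t0, 4) for ts in blink_start_timestamps]
```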
Please see the screenshot of the info.player.json file. I changed the start time to 0 and then exported, but it still gives me the same start time, 2214.403. Thank you
There has been a misunderstanding. You must not set this value to 0. Instead, you need to copy the original value and subtract it from the timestamp values in your Excel application.
I am trying to make the start time 0 seconds. Is it possible to have the start time begin at 0 sec?
Hello everyone!
I wanted to ask a couple of questions and get some advice from the community:
1. Calibration: The participants in an on-road study I'm running will use a Tesla in both automatic and manual mode. Participant workload will be assessed using eye metrics (blinks, fixations, total eyes-off-road time, etc.). I've been calibrating the eye tracker with a single physical marker, but it appears that isn't the most effective approach in this situation. The participants will wear the eye tracker while driving. What method would, in your opinion, be most effective for calibrating the eye tracker to cover a greater area?
Thank you in advance for your patience and help.
You can simply subtract the smallest start_timestamp value. But why do you need your time to be relative to the first blink instead of the actual recording start?
I want to plot a blinks diagram and start time from 0 sec.
Then it is simple as subtracting the smallest value from all other values 🙂
thank you so much!!!!
Where might I find information on how gaze mapping is calculated? I can see lots of information about the pye3d model and how it detects the eyeball and pupil, but I can't find any information about how this data is used to map gaze into the world camera footage. Can anyone advise? Thanks!
That information is indeed not as well documented as pye3d 😕 What level of information are you looking for?
Nothing too in-depth. It's for an MSc dissertation, so more of a checklist like "Pupil Labs utilise the pye3d model for pupil detection, which works by..." (which I've got plenty of info on), and likewise I need to note something like "For the gaze mapping, such and such is done".
Are you using 2d or 3d calibration?
3d
During 3d calibration, Pupil Core software uses "bundle adjustment" to estimate the physical relationship between the eye cameras and scene camera. To be more specific, it assumes a specific location of the eye model centers in scene camera coordinates and then rotates the eye camera coordinate systems such that the circle_3d_normal values point at the collected calibration target locations. The result is a pair of transformation matrices that can be used to map new circle_3d_normal values from eye camera to scene camera coordinate systems (gaze rays).
During gaze estimation, the software tries to find the point that is closest to both gaze rays (gaze_point_3d) and projects it into 2d normalized coordinates (gaze norm_pos).
The implementation can be found here https://github.com/pupil-labs/pupil/tree/v3.5/pupil_src/shared_modules/gaze_mapping/gazer_3d
Thanks very much! Super helpful!
Is there a way to take the exported mp4 world videos and put them back into pupil player? We are trying to undistort the world video and then use that version to calibrate gaze positions
Yes, but you might need to delete the _lookup.npy file and replace the _timestamps.npy file with the exported one, too. Also note that you will probably need to adjust the world.intrinsics file as well. May I ask what you are trying to achieve with this setup?
I think 10 cores are not necessary but 3.7 GHz sounds sufficient
Thanks for your confirmation!
Our main concern is with the fisheye distortion on the world camera. We've compensated for that outside of pupil player with a python script, and are now wanting to use our undistorted video to do post hoc calibration and gaze estimation. Do you have any ideas on how we could go about this? Or if you may know of a better way we could be approaching this issue?
What problem/issue are you trying to resolve by importing the undistorted video in Player? Do you just want undistorted gaze and scene video?
Yes, that is the main thing we're trying to compensate for
Are you using 3d gaze data?
I believe so
In that case, you can use the iMotions exporter, which undistorts the scene video and gaze data for you
Hey! As of today I have a problem using Pupil Capture. After approx. 1 minute the main world view window becomes unresponsive and closes itself. There is no error message in the Pupil Capture console. According to the Windows error logs there is a problem with the libusbk driver, and on some occasions I get a blue screen with a "MULTIPLE IRP COMPLETE REQUESTS" error message. The problem is independent of the host PC and the Pupil Capture version. I hope there is someone who can give me some advice. Thanks!!
Has anything changed? Have you installed any new hardware, updated any drivers, or performed Windows updates? Could you share the logs that point to libusbk? I have seen MULTIPLE IRP COMPLETE REQUESTS reported as an issue once before, but I have not been able to find it yet.
Hi, I am trying to exclude blink data from the gaze_on_surface_ and I am using the blinks.csv file to determine the start and stop time of a blink. It is listed as start_timestamp. Do you know if this lines up with the world_timestamp or the gaze_timestamp?
Blinks are pupil-datum based, i.e. the start_timestamp is not guaranteed to appear among the gaze_timestamps. But all these files use the same clock. You can discard all gaze samples that fulfill this condition:
start_timestamp <= gaze_on_surface.timestamp <= start_timestamp + duration
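As a minimal sketch (with made-up timestamps), that filter could look like this in Python:

```python
# Hypothetical blink intervals (start_timestamp, duration) from blinks.csv
blinks = [(10.0, 0.3), (12.5, 0.25)]

# Hypothetical gaze_on_surface timestamps
gaze_timestamps = [9.9, 10.1, 10.31, 11.0, 12.6, 13.0]

def in_blink(ts, blinks):
    """True if the timestamp falls inside any blink interval."""
    return any(start <= ts <= start + duration for start, duration in blinks)

gaze_without_blinks = [ts for ts in gaze_timestamps if not in_blink(ts, blinks)]
```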
There hasn't been any change. But I have done further testing, and I think it could be an issue with the cables/connectors inside the USB-C connector clip. When I move the cable extending from the clip towards the glasses, the world cam gets disconnected or sometimes reconnected. And I just want to ask: if it's a hardware issue, do you offer repair services?
We do. Please contact info@pupil-labs.com in this regard
Hi team, we are doing post-hoc calibration using recordings from the Core. Is there a way to adjust any of the eye camera parameters to get better pupil detection? Currently, Pupil Capture is unable to detect the pupil for a majority of the recording, despite a pretty clear eye image.
You can adjust pupil detection parameters by running the Post-hoc pupil detection in Player. Feel free to share an example recording with data@pupil-labs.com for concrete recommendations