The accuracy of gaze is bad in dark environments. Are there any tips on improving it? Note: I have calibrated properly though.
This sounds like you are getting bad pupil detection in the dark? Have you looked into that?
The scene camera does not record at perfectly even intervals. Also, the head pose can only be estimated if the 3d model can be located with a scene frame, i.e. the markers need to be visible. During quick head movements, it is possible that marker detection fails due to motion blur. For these frames, no head pose can be estimated and the frame will be skipped.
@papr Thank you for the very clear and easy-to-understand explanation! It helps me a lot.
Yeah sometimes it flickers. Any fixes?
I would need to see an example recording for concrete feedback. One typical issue in dark environments is either (1) insufficient exposure (image too dark) or (2) the pupil max parameter being set too low for the actual pupil dilation.
Generally, yes, but in case of the Pupil Core headset, incrementing phi for one eye camera does not result in a decrement for the other, because the right eye camera is upside down.
I'm a bit confused, sorry for bugging you, but I don't understand. Is there a parameter that both pupil positions have in common and that is not based on a separate coordinate system? (Since I understood that phi and theta also depend on the position/orientation of the eye camera.) I must quantify the horizontal and vertical movements of the eyes, and the parameters used for the measurements should be calculated so that the ratios of eye movements are 1:1 (i.e. if both eyes move in the same direction by the same amount, I will be able to see from the data that they made the same movement and that the distance/difference in their position is the same).
@papr Hi, I am integrating the pupil into a dive mask and was looking to swap the recommended world view camera to an underwater fishing camera (either the spydro or the gofish cam) I wasn't sure who I could talk to to see if these would be compatible. But you look like the man to reach out to. Any help?
How are you planning to record the video? To which device do you want to connect the eye cameras?
@papr the eye cameras will wire into a laptop above water and the world camera will feed to the same laptop.
For the camera to be recognized by the software, it needs to meet specific requirements. See this message https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
thanks i will check this out!
@papr Hi, I tried to install pyglui as described, on Ubuntu 18.04, but I encountered a problem. The error information is in the attached txt file. Can you help me?
Hi, there is a series of known build issues in pyglui. I have been working to fix them this week, but unfortunately I am not done yet. I will let you know, as soon as it should work again. Meanwhile, have you tried running the bundled application already?
@papr Yeah, I installed Pupil Capture and tried to run the application directly. But I get another error.
Which version of capture is that?
I installed the Pupil Capture 1.0 before, but I have uninstalled this application. I searched the solutions, which seems to be related to the early packages. The version I tried to run is 3.3.
Just to be clear, is the screenshot from running 1.0 or 3.3?
Sorry, the screenshot is from running 3.3.
I somehow get the feeling the uninstallation might not have worked as intended. Could you please run the Uninstaller again, delete the /opt/pupil folders manually in case they still exist afterward, and then use dpkg to install the deb files? sudo dpkg -i *.deb
@papr I have tried to uninstall all the pupil* application, and use dpkg to install the deb files. But I get the same error.
OK, thank you. I am very sorry that the installation is still not working. Unfortunately, I cannot tell what the exact issue is, yet. Thank you for providing the traceback though, this is very helpful.
I installed this application in the Windows 10, and it works. But I found the 3D eye model seems to be abnormal. The 'No. of supporting pupil observations' always equals to zero. The mesh model doesn't show the 3D pupil circle.
Good to hear that you are able to fall back to a different OS for now. I fear the debug window has not been cleaned up since the early development of the pye3d integration. This value is likely no longer updated due to changes in the software architecture. I created an internal bug report to remove this inconsistency.
Thanks for your reply. I first found that the debug window has these problems on Windows 10, so I tried to run the application on Linux. You know, I found the software always performs differently on different operating systems, which I consulted you about several months ago. But I get these errors on Linux as mentioned before.
This debug window bug will be present on Linux, too.
Does this debug window bug also include the abnormal 3D eye model?
What are you referring to by abnormal 3d eye model?
Also, short question, which hardware are you using?
I mean the corneal mesh model doesn't overlap with the real region.
Ah, I understand. Please be aware that you can rotate and zoom in this view to align the model. You can hit the r key in the window to reset the view.
The negative pupil diameter is indeed unexpected given how well fit the model looks. Are you referring to that?
Reset view
Please be aware that the displayed pupil is not corrected for refraction visually which is why it looks different than in the recorded eye image.
I use the Pupil Labs device, but I don't know the hardware version.
Could you share a picture of the hardware? Depending on how old the hardware is, it is possible that the software does not provide default camera intrinsics for the eye cameras. this could explain the incorrect pupil diameter estimation.
Yes, thank you. It indeed looks like this is an older model without a pre-configured focal length.
Can you get it?
Can I calculate the IR camera intrinsics and set it into the software?
Yes, you need to select the eye camera as the scene camera in the world window (Video source menu -> enable manual camera selection -> Select camera) and run the camera intrinsics estimation plugin https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
Thank you. You know, it is difficult for the IR camera to capture the image on a computer screen, so I need to print the chessboard. How can I update the camera intrinsic parameters in the application? I don't get the idea from the webpage.
Please be aware that the plugin only works with the circle grid, not with the chessboard which is also a common pattern.
When the intrinsics were successfully estimated, they are stored as files in the user directory. The pre-recorded intrinsics are loaded from code. Both are explained in the Camera Intrinsics Persistency section below the linked section from above.
Please let me know if this does not answer your question sufficiently.
I have just tried to reproduce the issue with a freshly set up Ubuntu 16.04 VM. I am not able to reproduce the issue. Looking at the responses here https://answers.opencv.org/question/18461/importerror-lib64libavcodecso54-symbol-avpriv_update_lls-version-libavutil_52-not-defined-in-file-libavutilso52-with-link-time-reference/ it looks like this issue appears due to conflicts with other ffmpeg installations on your system.
Thank you very much. I have also found these responses before. I will try it.
Hi, I have a question: How can triggers be sent into Pupil Capture? Would it be possible to write down triggers (as annotations) coming from e.g. Neurobs Presentation or also Matlab Psychtoolbox?
Yes, this is possible. See https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
Just built an open source tracker per website. Installed latest Pupil Labs desktop software on Windows 10. Getting these errors when trying to run Pupil Capture. For some reason the eye and world cameras are not being detected. Windows Camera can see/play them. Any thoughts/hints to resolve this would be most appreciated. Thanks. Mike Brown, Boston MA
One would think that by release 3.x the Windows 10 install would be nailed down. No wonder Tobii is #1 in market share.
If you are referring to the DIY eye tracker, please follow steps 1-7 from this link https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Thanks for the pointer! Will give it a try. Mike
Hi everyone, sorry for the triviality of the question but this is my first time using pupil core. Eye 0 (right eye) connects without problems, Eye 1 (left eye) camera is disconnected. Do you know how I can solve this? If I choose manual camera selection I find only the camera of the Eye 0.
In this case, you need gaze data because the pupil data's phi/theta values are relative to their corresponding eye cameras. And since the eye cameras have different orientation in space, their horizontal/vertical axis are not comparable. Gaze uses the scene camera coordinate system.
So in this case, the references describe the Z axis as the optical axis of the camera. If that is so, what does the eye center position describe? The location of the center of the pupil in our space? And if that is true, does the position of the Z axis change in case of eye movements? Thanks!
If you do not see an "unknown" entry in the list, it means that a second eye camera is not connected. Please be aware that this is an expected behavior should you be using a monocular headset (one camera arm only on the right; vs one on each side).
I have binocular glasses, I have two eyes. I see "unknown" in the list but if I select it, it still won't connect.
eye centers are the location of the fitted eye models relative to the origin of the scene camera in mm
so in order to calculate movement i must use the gaze data, am i correct?
ok, then it means that the drivers were not correctly installed for this specific camera. Could you let me know the name of the successfully connected eye camera?
Pupil Cam1 ID0 @user-b772cccal USB is connected
In order to calculate between-both-eyes-comparable horizontal/vertical eye movement, yes.
When I receive the gaze normals, what are the values normalized compared to? Also, do the X and Y values represent the horizontal and vertical movements compared to the optical axis?
and if i use the surface tracker, then can the system still differentiate between the eyes?
The surface tracker will only map the combined gaze point to surface coordinates. But you are able to backtrack which scene-camera-gaze datum was used, which in turn contains the gaze_normals to calculate eye rotation
Pupil Capture runs the driver installation on launch. Have you tried restarting the application yet to give the installation another try?
I have already tried closing and reopening the software several times, but at start-up it always tells me "Could not connect to device".
ok, please try following these steps https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting
Done! Thank you very much!!!!
They are normalized to have length 1. See this for the description of the coordinate system https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
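For anyone who wants angles rather than unit vectors, here is a hedged sketch of converting a unit gaze_normal (scene camera coordinates, see the OpenCV link above) into horizontal/vertical angles; the axis convention in the comments is an assumption based on that coordinate-system description:

```python
# Hedged sketch: unit gaze_normal (scene camera coordinates) -> horizontal/vertical
# angles in degrees. Axis convention assumed: +x right, +y down, +z optical axis.
import numpy as np

def gaze_normal_to_angles(normal):
    x, y, z = np.asarray(normal) / np.linalg.norm(normal)  # ensure unit length
    horizontal = np.degrees(np.arctan2(x, z))  # rotation about the vertical axis
    vertical = np.degrees(np.arctan2(y, z))    # rotation about the horizontal axis
    return horizontal, vertical

print(gaze_normal_to_angles([0.1, -0.05, 0.99]))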
So are the gaze_normal values for each eye the components of a vector that goes through the eye? Thanks
hi, can i know the recording rate of the eye tracker?
You can find the Pupil Core tech specs here https://pupil-labs.com/products/core/tech-specs/
thank you!
followup question is this correct for all eye-pupil models?
What do you mean by eye-pupil models? Are you referring to previous hardware iterations of the Pupil Core headset?
I'm just not sure if these specs match my model. How do I know which model I'm looking at?
The easiest way would be to check the order confirmation email.
there are multiple pupil core models correct?
There have been other hardware iterations with lower eye camera frame rates, yes.
i understand thank you
Should the order confirmation email not tell you enough information, let me know. There are other ways to find out. But they require access to the hardware.
Hi, does anyone have a CE certificate for the pupil core? Would you mind sending me a copy?
Please contact info@pupil-labs.com in this regard.
I see. Thanks!
Hi This happens when I use surface tracking. What is the reason?
Hi, this issue will be resolved in our next release. :)
OK thanks
I have an error "KeyError: 'gaze on surfaces'" when I try to run the filter_gaze_on_surface.py code in a PsychoPy experiment. Does this mean that the surfaces are not being properly sent to PsychoPy, and thus not recognized?
They are coordinates of where your gaze from that eye is, normalized within your surface bounds. I'm assuming this is for surface tracking?
Oh the 3d gaze is relative to the world camera
No, I'm trying to visualize the gaze from each eye. I know that their values are based on the world camera system, but I want to make sure that the vectors they create pass through the eye centers (basically that the gaze vector originates from the eye, and whether this plot is correct).
The gaze normals do indeed describe the axis that goes through the eyeball centre and object that's looked at
Ohhhh that's definitely more of a papr question lol
thanks!
It works when I run the pupil script alone, the problem is when I integrate it into my PsychoPy experiment then..
You should use gaze_on_surfaces, not gaze on surfaces. Also, have you tried debugging the object throwing the KeyError and inspecting it for its actual content?
Edit: Also see nmt's follow up here https://discord.com/channels/285728493612957698/446977689690177536/861966175432736829
If I was to have the Pupil Core connected to a desktop computer, what would be the longest length of a USB extension where I would still have a reliable connection? I'm conducting a study measuring visual fixations during gait (walking), and need as much flexibility/slack from a USB extension as possible. For instance, would a 25' extension still yield consistent data collection?
We highly recommend using an extension with an active power supply and to test it thoroughly before conducting the actual experiment. Unfortunately, I cannot recommend a specific one.
Hi, my lab just got a Pupil eye tracker so I'm very new. I was wondering if there are guidelines to set the different thresholds, for example blink onset/offset?
Hi @user-5651f6. Have a look at this message for reference of how the blink detector works: https://discord.com/channels/285728493612957698/285728493612957698/842046323430916166 You can then examine a given recording and set the thresholds such that they accurately classify your blinks. Use the Eye overlay plugin in Pupil Player to view the eye videos and confirm the thresholds are correct
Hey, im looking to upgrade the Pupil Core DIY to a wide angle camera that is compatible with the software. https://www.amazon.com/Arducam-Computer-Fisheye-Microphone-Windows/dp/B07ZS75KZR Would this be okay? (it is UVC compatible)
Hi, can I somehow set it up so that I see the gaze on a recording as only one red dot? Since yesterday it's just nowhere.
now I have only two green circles..
Hi @user-8e5c72. Did you change any of the visualization settings (https://docs.pupil-labs.com/core/software/pupil-player/#vis-circle) since you last looked at your recording in Pupil Player?
where do I see if the world camera is in focus
ah, I'm a bit lost
is it possible to send you some pictures?
If you want to share them on here feel free, or else you can send them via DM
@nmt so I thought I send you some pics so you can better understand what I mean
You might also want to disable Vis Polyline plugin which draws a red line between the gaze points.
instead of a green circle or two green circle.. I need
such a red circle
If you look closely, there are two gaze points in your picture, too.
Generally, it is expected for Pupil Core recordings to display multiple gaze points per frame, because the eye cameras run at a higher frequency than the scene camera.
You can easily change the gaze circle visual properties, such as size and colour. Please follow these instructions: https://docs.pupil-labs.com/core/software/pupil-player/#vis-circle
Hmmm I gave it a try
let's say now I have two red dots without that line
These pictures look like the calibration did not work correctly though. Specifically, it looks like output from the "dummy gaze mapper" that was included in Pupil Capture prior to v2.0. Can you check the Pupil Player general settings and look up the recording software version?
but how can I combine them to only one dot so I would get
only this one circle
what else do I have to set up
hmm so.. I just had one participant now and measured a bit in our study.. Now suddenly it looks ok
Hi, we are trying to use HoloLens 2 with the pupil labs hmd add on kit. Has anyone here designed 3d prints for mounting it on HoloLens 2?
Hi, I encountered some problems when viewing the surface tracking data; I have some questions about the data in the red area in the figure. What time does world_timestamp refer to? Why do fixation_id and duration have multiple identical numbers? How do I choose from them? Thank you.
Hi @user-4bc389. The world timestamps are the world camera time. The duration column shows fixation durations in milliseconds. There are identical ID and duration values because, by definition, fixations spread over multiple world camera samples. E.g. fixation ID: 18, has a duration of 80.104 milliseconds, and is spread over 3 consecutive world camera samples.
Does online or offline pupil detection tend to be more accurate?
Hi @user-7c46e8. The accuracy of pupil detection per se tends not to be influenced by running in an online or offline context. However, certain settings, such as eye camera exposure, sampling rate and resolution, can only be adjusted in real-time. Excessively low exposure, for example, can cause poor pupil detection, and you cannot change this post-hoc. So always check these are set correctly before you record data. There are other pupil detector settings that you can adjust (e.g. https://docs.pupil-labs.com/core/software/pupil-capture/#pupil-detector-2d-settings), and you can do this in real-time or post-hoc. From a research perspective, while it can be less stressful to adjust these in a post-hoc context (more time to fine-tune the settings), it is recommended to ensure that pupil detection is robust before you make recordings. Also ensure that pupil detection is good at all angles of eye rotation you will record.
@nmt So how do I choose among these identical values: are all of them valid, or should I just choose one? For example, for the surface I defined there are multiple identical fixation durations; how should I use these data? Thank you
You need to group the data by fixation id. Per fixation id, start time and duration will be the same. Only the surface-mapped location of the fixation might be different across multiple world video frames. See this message for reference https://discord.com/channels/285728493612957698/285728493612957698/859379631282454548
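A hedged sketch of that grouping with pandas; the file name and column names (fixation_id, duration, norm_pos_x/y) are assumed from a typical surface export and should be checked against your own files:

```python
# Hedged sketch: one row per fixation from the surface fixation export.
# File/column names assumed from a typical export; verify against your own files.
import pandas as pd

df = pd.read_csv("exports/000/surfaces/fixations_on_surface_Book.csv")

fixations = df.groupby("fixation_id").agg(
    duration_ms=("duration", "first"),   # identical within a fixation
    norm_x=("norm_pos_x", "mean"),       # surface-mapped location may vary slightly
    norm_y=("norm_pos_y", "mean"),
)
print(fixations.head())
```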
Good morning! Is there any possibility to turn off 2D detection and only use 3D detection during recording? Glad for any help!
Unfortunately not, because the 3d detector uses the 2d result as input. If you want to save resources, I would suggest the other way around: Turn off 3d detection and leave 2d detection on. The quality of the 2d detection is most important to get a good 3d model later. (You can fit the model post-hoc in Pupil Player)
So would you still recommend "rolling with the eyes" even if in this case only 2D detection would be possible? I'd like to prevent some failures in post-hoc processing since I want to use 3D data.
Yes, basically perform all the steps as if 3d detection was enabled. You also could leave pye3d enabled at the start, make sure that the subject is able to perform the procedure yielding a good model and turning pye3d off afterward
How can I 'fit the [3D?] model' in Pupil Player? And yes, that sounds good. But how can I turn off pye3d while running Pupil Capture, having checked that the model is fine?
In Player, you just use the post-hoc pupil detection. This will rerun both detection algorithms on the recorded eye videos.
Regarding "how can I turn off pye3d while running Pupil Capture": you either need a custom plugin or you send the command via the Network API.
Regarding "and having checked that the model is fine?": to clarify, you would check first and then disable it, assuming the post-hoc detection will reproduce a similar eye model.
Thank you very much, @papr! Have a nice weekend!
Hi, I want to get pupil data for both eyes, at the moment I am using:
subscriber.subscribe('pupil.0.3d')
subscriber.subscribe('pupil.1.3d')
and a while True block to capture the data, but in some iterations I am not getting both, is there a better way?
Let's move this to the software-dev channel
sure
@papr I'm still a little confused. For example, I defined a Book surface, and now I need the fixation duration data for this surface. How can I extract these data with the same ID: choose all or only one? Thank you
Hello, if you have multiple different video recordings with the same apriltag defined surfaces, how can the surface definitions be shared between the recordings? So far I have tried to copy the surface_definitions-v01 file between recordings, but it doesn't work that way. Thanks in advance!
Hi! I'm working with pupil capture video to feed it into a YOLOv5 model to detect objects within the frame as a subject is walking through space (example frame output in file named 'yolo_pupil_ouput.png'). To feed the post processed images in, I'll be using ffmpeg & mp4fpsmod to generate a new .mp4 file from extracted jpeg's with preserved relative timestamps [since the pupil capture video has a variable frame rate]. I was able to convert the raw pupil capture video to jpegs and back again into a video with the same relative timestamps, but when I try to launch Pupil Player with the remade mp4, I receive a 'Found no maching pts! Something is wrong!' error (frameiterator_error.png). I've also attached the ffprobe output for each file (the original & remade files), can anyone comment on why I might be receiving this error?
The issue is that Player expects that: 1. Every video stream packet includes exactly one video frame. If the packet is empty, it assumes that it has reached the end of the video stream. 2. That the packet and frame pts (presentation timestamps) are the same.
update - I realized I didn't copy the metadata from the original file, attached I have the ffprobe output on the remade file after adding in the metadata from the original file. I still have the same error when trying to use this new file
Sharing the surface_definitions* files should work. Please copy them while the recording is not opened in Player and only open the recordings afterward.
Ah, yes, it works like that. Thank you!
Does pupil core record pupil size?
It does :)
Hello, I had a problem with the right camera of the Pupil Labs Core, so I moved the left camera to the right side, since most people have the right eye as the dominant one. When testing the equipment, the model is quite lost, and the large red circle in the photo appears. What should I set to make it work correctly?
The large red circle is part of the algorithm view. If you change the mode in the general settings it will go away
Thanks! When working with monocular, the model loses precision? How should I do the calibration to ensure that it does not, since I am calibrating "Single Marker" at a distance of 5 meters which is where I expect the fixings.
The eye model is independent of the calibration. Try looking into different directions to fit the model, like on the left. (Note: This screenshot is taken with our latest version. The colors might differ from your version.)
But first, you need to make sure that the 2d pupil detection is stable. From your screenshot, it looks like your eye lashes might be detected as false positives. Please use the ROI mode to adjust the region of interest to exclude your eye lashes.
Thank you very much, I'll do that!
Hi - I'm having trouble downloading the pupil core software on windows 10. I'm trying to run from the command prompt but get an error saying
Hi, could you please let us know which software you used to uncompress the downloaded rar file? ~~@nmt Could you please try to reproduce the issue?~~ Resolved
'The installation package could not be opened'. Contact the application vendor to verify that this is a valid windows installer package'. Sorry I sent too early but any help would be great. Thanks!
When I download it, it has an .msi extension. Does this still need to be decompressed?
Which browser are you using?
Chrome
Okay I now seem to be able to run the installer. Is it correct that the publisher should be unknown? Application:
pupil_v3.4-0-g7019446_windows_x64.msi
Publisher:
Unknown publisher
All sorted and installed! Thanks for your help.
Nice! What did you change?
I didn't think the .msi needed to be uncompressed, so it was just a case of unraring it.
Hello, I'm trying to figure out the time interval between fixations (by using the fixation plugin). I downloaded the data from the fixation plugin; could someone walk me through what "start_timestamp" and "start_frame_index" mean and how they are the same as or different from the timeframe in Pupil Player?
Hi. The frame indices are comparable between both. They refer to the scene video frame index. The timestamps are equivalent but not the same. The Player ui displays time relative to the first scene video frame; the export uses the original pupil time. https://docs.pupil-labs.com/core/terminology/#timing
Hi after installing the latest release, I am getting a "refraction corrected 3d pupil detector not available" error. What could be the cause of this?
Are you running from source or from bundle? If it is the latter, please let us know which operating system you are using.
From bundle, macOS mojave
If the error does not go away, please share the capture.log file. You can find it in Home directory -> pupil_capture_settings.
I am on a similar setup (Catalina instead of Mojave) and I cannot reproduce the issue. Please try restarting with default settings from the general settings.
Hi, can I display a heatmap for several respondents? How can I do this?
Hi, Pupil Core software does not have built-in support for multi-participant analysis. You will have to export the surface-mapped gaze data from every subject's recording and aggregate it by yourself.
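A possible starting point, assuming each recording was exported with the Surface Tracker and follows the usual gaze_positions_on_surface_<name>.csv naming and columns (verify against your own exports):

```python
# Hedged sketch: aggregate surface-mapped gaze across subjects into one heatmap.
# File/column names (x_norm, y_norm, on_surf, confidence) assumed from typical exports.
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

frames = []
for path in glob.glob("recordings/*/exports/000/surfaces/gaze_positions_on_surface_Screen.csv"):
    df = pd.read_csv(path)
    df = df[(df["on_surf"]) & (df["confidence"] > 0.6)]  # keep on-surface, confident gaze
    frames.append(df[["x_norm", "y_norm"]])

all_gaze = pd.concat(frames)
heatmap, _, _ = np.histogram2d(all_gaze["x_norm"], all_gaze["y_norm"],
                               bins=50, range=[[0, 1], [0, 1]])
plt.imshow(heatmap.T, origin="lower", cmap="hot")  # transpose so x is horizontal
plt.show()
```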
Hello, We are using Pupil Labs on a computer that multiple researchers will log into. I installed Pupil lab under my user ID (I am an Admin) but the Pupil lab software does not show up under other user ID's, I have had to reinstall under each person user ID. Is there a way to have it install once and be used under each user ID?
The Pupil Labs software is installed to C:\Program Files (x86)\Pupil-Labs by default. This directory should be accessible by all users. Nonetheless, you are right that the start menu entry and desktop shortcut are only installed for the current user. The next release will include a small change that will install start menu entries and desktop shortcuts for all users.
See https://github.com/pupil-labs/pupil/pull/2166 for reference
Hi! Can I get additional guidance on the gaze_timestamp values from exports? I grabbed the pts values for each frame using ffmpeg & mp4fpsmod independently (both methods as a sanity check) and I'm not getting equivalent values when looking at the gaze_timestamp for the beginning frame. With 0-based indexing, the pts from world.mp4 using ffmpeg/mp4fpsmod at frame #3 is 916.966666666667; with 1-based indexing it is 957.977777777778; whereas for the gaze_timestamp at the first instance where world_index = 3, pts = 1.747019805.
Additionally, is there any advice you could give on why the world.mp4 reconstructed from ffmpeg-extracted jpegs does not align with the expectations Pupil Player has about packet contents/pts for that same video? I extracted those frames using ffmpeg with vsync=0, set the pts with mp4fpsmod using the original pts values, and set the video metadata from the original file.
In all, I need to check whether gaze (with diameter=0) collided with any bounded box (grabbed from extracted frames being run through yolo) AND get the timestamps for the gaze data. Can anyone advise?
Using -vsync 0 is important to get the correct number of frames, yes.
The video file comes with an additional *_timestamps.npy (intermediate recording format) or *_timestamps.csv (video exporter format) file. Each timestamp entry corresponds to a frame from the video file. See https://docs.pupil-labs.com/developer/core/recording-format/#timestamp-files These timestamps can be used to correlate to gaze_positions.csv. Alternatively, gaze_positions.csv also includes a world_index column that can be used to find the world frame index to which the gaze datum is closest.
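A hedged sketch of the first approach (matching each gaze sample to its closest world frame); the file and column names are taken from the linked recording-format docs and a typical export:

```python
# Hedged sketch: match each gaze sample to the closest world frame using the
# intermediate-format world_timestamps.npy; names taken from the linked docs.
import numpy as np
import pandas as pd

world_ts = np.load("recording/world_timestamps.npy")
gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
gaze_ts = gaze["gaze_timestamp"].values

idx = np.searchsorted(world_ts, gaze_ts)
idx = np.clip(idx, 1, len(world_ts) - 1)
left_is_closer = (gaze_ts - world_ts[idx - 1]) < (world_ts[idx] - gaze_ts)
gaze["frame_index"] = np.where(left_is_closer, idx - 1, idx)
```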
Hi, unfortunately, I do not know if this is possible. I will have to look into that next week. Can you confirm that you are using Windows?
I am not sure I ever got an answer to this question: We are using Pupil Labs on a computer that multiple researchers will log into. I installed Pupil lab under my user ID (I am an Admin) but the Pupil lab software does not show up under other user ID's, I have had to reinstall under each person user ID. Is there a way to have it install once and be used under each user ID? We are Windows 10 Enterprise 1909.
We are using Windows 10 Enterprise Version 1909
Hello support team. I have a Pupil Core and one of the eye cameras' connection cables is broken. How can I fix it?
If the connection cable for the eye camera is broken, do I need to mail it back to the company to repair it?
Please contact info@pupil-labs.com in this regard
hi, any chance of using a RealSense camera as the environment cam? Or is the Pupil hardware required for the software to run?
thxn
I see that you also wrote an email to [email removed] I responded to your question there.
i can't use it with capture 3.4, anyone knows the proper release version?
Hi, I have a question about the Pupil Mobile app. I want to use the mobile app for research. In Pupil Capture I have the "pupil detection color scheme", so I can see if the pupil is correctly detected, but in the mobile app there is no "pupil detection color scheme". If I want real results, what should I do to get my pupils correctly detected?
Thanks for your help
Hi. Pupil Mobile does not perform real-time pupil detection. Instead, you will have to transfer the recording from the phone to the computer, open it in Pupil Player, and perform the post-hoc pupil detection. You can also stream the eye video to Pupil Capture to get a real-time preview of the pupil detection but the results will not be stored as part of the Pupil Mobile recording.
Okay, so I understand that the "pupil intensity range" is only in Pupil Capture, and in the Pupil Mobile app this parameter isn't taken into account? There is only post-hoc pupil detection, yes?
This parameter is part of the pupil detection algorithm which cannot be run on the mobile phone. It can only be run on a desktop computer running Pupil Capture (realtime on video stream) or Pupil Player (post-hoc on recording).
Okey, thank you very much
Hello everyone, I hope you are doing great. I have a simple Pupil Labs question: I want to extract the pupil size from my eye recordings. Does anyone know how, or have a link to a walkthrough? Thank you in advance.
The raw data exporter in Pupil Player creates a pupil_positions.csv file on export that includes this information.
Please be aware that the quality of this data is highly dependent on the 2d and 3d pupil detection quality.
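A small, hedged example of reading that export with pandas; the column names (method, confidence, eye_id, diameter_3d) are assumed from a typical export and should be checked against your own file:

```python
# Hedged sketch: extract 3d pupil diameter (mm) per eye, dropping low confidence.
# Column names assumed from a typical pupil_positions.csv export.
import pandas as pd

df = pd.read_csv("exports/000/pupil_positions.csv")
df = df[df["method"].str.contains("3d")]   # keep pye3d data only
df = df[df["confidence"] > 0.6]            # drop low-confidence detections

for eye_id, eye_df in df.groupby("eye_id"):  # 0: right, 1: left
    print(eye_id, eye_df["diameter_3d"].describe())
```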
Thanks a lot
Hi, I had to reinstall the Pupil Labs software and the annotation list is gone. I have already annotated a few videos. Can I find the annotation list in the files?
Also, have you installed a different version than what you have been using before?
Have you reenabled the annotation plugin after the reinstall?
Yes, I have reenabled the annotation plugin. The annotations are visible when you click through the video (they pop up) but the annotation list is gone.
In this case, I am not sure what you are referring to by "annotation list". Are you referring to the keybindings that one can setup to create new annotations?
https://media.discordapp.net/attachments/849077710671708182/865758903827693608/20210716_205611.jpg
I tried out the new release. Please how is it ?
There were times I saw very, very faint and disappearing orange lines around the blue circles... I don't know if it's my eyes fooling me.
https://media.discordapp.net/attachments/849077710671708182/865764333287899136/20210716_211759.jpg
yes the corresponding hot keys
Unfortunately, these are part of the session settings which are reset when a new version is detected. I fear you will have to manually set them up again. I will note this issue down though. It makes sense that the hotkeys are saved as part of the recording and not of the session settings.
@user-7daa32 Please see the release notes ("Dynamic pye3d model confidence") in this regard https://github.com/pupil-labs/pupil/releases/tag/v3.4
If a chinrest is being used, do you think the 2d pipeline is the best to use? I have issues creating surfaces when the precision is low. I encountered a situation where the participant was looking at A but the gaze was in B. Creating surface A would have to extend to B.
If you only need the 2d gaze point, you should be fine with using the 2d pipeline.
We are manually taking data by watching the video on play because we need the transition time between AOIs. However, we can't get the transition time between the last gaze of the last fixation in one AOI and the first gaze of the first fixation in the other AOI. We are currently thinking of using the AOI fixation counts.
Since we are manually taking data from the video, what do you think of the data quality?
"System Time uses the Unix epoch: January 1, 1970, 00:00:00 (UTC). Its timestamps count forward from this point."
I guess we are using the system time ?
"Freezing the eye model is only recommended in controlled environments as this prevents the model from adapting to slippage"
What's the purpose of freezing? I thought freezing was for the purpose of creating a surface. I am actually concerned about the gaze being precise enough to hit the AOI. We might need to train the participants before carrying out the experiment.
Freezing the eye model is independent of freezing the scene video stream for the purpose of setting up a surface definition. It is something completely different.
Okay.. I will check it out.
You don't have to worry about it if you use the 2d pipeline. It only applies to the 3d eye model data (red and blue circles in your screenshots)
I have been using the 3d eye model and have yet to try the 2d eye model. But since the participants don't need to move their heads, the 2d eye model should give the most precise data.
hi, would anyone know what exactly is in a .pldata file? I tried looking over the documentation but it's still unclear to me
I also tried opening a .pldata file using the code linked at the bottom of the page (https://github.com/pupil-labs/pupil/blob/315188dcfba9bef02a5b1d9a3770929d7510ae2f/pupil_src/shared_modules/file_methods.py#L57-L87) but I kept receiving this error
Use this code instead https://gist.github.com/papr/81163ada21e29469133bd5202de6893e
This code is not meant to load pldata files
thanks!
How would one go about rewrapping edited .pldata files?
You can write pldata files with this class https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L164-L201 Here is an example usage of the class: https://github.com/pupil-labs/pupil/blob/701944661e44e729234037a03c32ab65bd0fdf7b/pupil_src/shared_modules/annotations.py#L261-L263
Hi there. Do I understand correctly that the DIY unit only has one camera? (I.e. pupil detection for one eye)
Hi, I purchased the fish-eye lens upgrade (DSL235D-650-F3.2) for the Pupil Core DIY, but after replacing the lens, the camera appears to be completely out of focus. I have tried adjusting the absolute focus, but it does not seem to help.
Correct, the DIY frame is monocular.
Thanks for that. At 2740eur this might become quite an expensive hobby!
Hi, I am running post-hoc pupil detection for the first time and was wondering if I have to re-run the 'Raw Data Exporter' to get the updated raw data files after post-hoc detection.
Once the pupil detection is done, the internal pupil data (e.g. used to display the 3d diameter in the timeline) is updated. Existing exports are not changed. You will have to start a new export.
The tokens are used internally to determine if data needs to be updated. E.g. the blink detector is based on pupil data. If it updates, it needs to recalculate blinks. To avoid unnecessary recalculations, we use tokens identifying the data versions. They do not actually contain data.
For reference, you can find the implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/data_changed.py
Also, what are those token files in the "offline_data" folder? What is their purpose, and how do I open them? Thanks
@papr Thanks
Hi guys, do you know if it is possible to record pupil data without recording the world camera? I'm just interested in pupil size variations and the world videos are huge and not useful for me. Thanks!
Technically, this is possible. I just do not know if there is a user interface option for it. I made note for myself to check that. If it is not yet possible, I will add an option to our next release. Until then, you can delete the world* files afterward. Player will adapt automatically.
OK, thanks for the answer. Sure, it would be nice to have an interface option to enable/disable world camera recording! :)
Hi, thanks for the help so far. We are not sure where we're going wrong with rewrapping the .pldata file, but when we try to reopen the folder containing the edited .pldata file, it doesn't run in Pupil Player. We're not sure if we're using the suggested class correctly, but we tried following the example provided, and while it didn't produce any errors, it doesn't run.
Could you elaborate on what you are trying to achieve?
We're undergrad interns and have been asked to write a program that will allow users to crop the video shown on pupil player. Our approach was to crop each type of file individually and edit out timestamp data that was unimportant. So, we were attempting to open, edit, and resave the .pldata file without the timestamp data that was outside of the cropped video range. I think we're going wrong when it comes to rewriting the .pldata file after editing
Each pldata file also has a timestamp file, that needs to be adjusted. The PLData_Writer class linked above takes care of that.
Ah, so temporal crop, instead of spatial crop? Are you aware that you can temporally crop the export?
We were told you can manually crop videos, but we were asked to automate it, that's all we really know. And yes, we were able to edit the timestamps using that class, but when we reopened the .pldata file using the load function you mentioned earlier it threw an error
Could you clarify if you need to crop/trim the actual recording or if cropping (we call it trimming) the export in an automated fashion would be sufficient for your usecase?
The former is much more difficult and error prone than the latter. The latter is actually very easy.
So sorry, we're not sure what you mean by the export
If you hit e in Player, an export will be started. By default, this includes the pupil and gaze data as CSV files as well as the scene video with gaze overlay. One can specify the (trim) range for which the export should be performed. https://docs.pupil-labs.com/core/software/pupil-player/#export
Our supervisor asked us to not use the exports; what is the best approach to trim the actual recording?
ok, in this case you are on the right path. Even though we do not give any guarantees regarding the intermediate format (in other words, it might change in the future; currently there are no such plans though).
Reading the pldata files and excluding samples outside of the trim range should work, too. You might want to adjust the info.player.json file as well in regards to the start_time and duration fields.
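A rough sketch of such a temporal crop, assuming Pupil's file_methods module is importable and that PLData_Writer.append() accepts the datum mappings returned by load_pldata_file() (as in the annotations.py example linked earlier); check file_methods.py for the exact signatures:

```python
# Rough sketch of a temporal crop of one topic's pldata file.
# Assumes file_methods.py is importable and PLData_Writer.append() accepts the
# datum mappings returned by load_pldata_file(); verify against file_methods.py.
from file_methods import load_pldata_file, PLData_Writer

def trim_topic(rec_dir, out_dir, topic, t_start, t_stop):
    loaded = load_pldata_file(rec_dir, topic)  # namedtuple: data, timestamps, topics
    writer = PLData_Writer(out_dir, topic)
    for datum, ts in zip(loaded.data, loaded.timestamps):
        if t_start <= ts <= t_stop:
            writer.append(dict(datum))         # assumes the loaded datum is dict-like
    writer.close()                             # also writes <topic>_timestamps.npy

for topic in ("gaze", "pupil", "notify"):
    trim_topic("rec", "rec_trimmed", topic, 711700.0, 711760.0)  # example range
```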
I suggest using https://github.com/pupil-labs/pyav for trimming the videos. Here you need to rewrite the timestamp files, too. Delete any _lookup.npy files that might exist already.
Okay thanks! We'll look into adjusting the info.player.json file as we didn't before and that may be the issue
Feel free to share the error message in the software-dev channel. I might be able to give further hints.
Btw, if this works well, this tool might be useful to others as well! Should you consider open-sourcing your solution, please add a link to the project over at https://github.com/pupil-labs/pupil-community
Thanks for all your help! You'll probably be hearing from us in the software-dev channel soon, and we'll definitely consider open-sourcing our code if it runs well.
Hi! I am new to using pupil labs and was wondering how to relate the timestamps given in the csv file to clock time. This is what my time stamp data looks like, and I am not sure how to interpret it. Would really appreciate some help!
Hi! The clock starting point is arbitrary. On Windows, the clock start can be negative sometimes. (Unfortunately, I have not been able to find out the reason for this). These timestamp unit is seconds.
Thank you! So the timestamp is just giving us the time in between each data point (for example the time between point 4 and 5 is 0.0079 seconds). But is there no way to correspond it to a standard clock time? I am trying to align this data with behavioral data and need to match the time points. I am not sure how to convert these time stamps to actual time
There is. Check out https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
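For reference, a condensed, hedged version of what that tutorial does; the field names are taken from info.player.json in the current recording format:

```python
# Hedged sketch: offset Pupil time into Unix time using the two start times
# stored in info.player.json (recording format 2.0+).
import json
import pandas as pd

with open("recording/info.player.json") as f:
    info = json.load(f)

offset = info["start_time_system_s"] - info["start_time_synced_s"]

gaze = pd.read_csv("recording/exports/000/gaze_positions.csv")
gaze["unix_time"] = gaze["gaze_timestamp"] + offset
gaze["wall_clock"] = pd.to_datetime(gaze["unix_time"], unit="s", utc=True)
print(gaze[["gaze_timestamp", "wall_clock"]].head())
```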
Thank you!!
Hi,
I have a question regarding the pupil_postions file generated after post-hoc processing.
1) I can see that pupil time stamps have changed. For example in the file generated from the recording the first pupil time stamp was '711705.765601999' but after post-hoc the first pupil time stamp changed to '711705.831237'. Shouldn't both be the same?
2)The 3d diameter generated by the recording was something like - '1.9...' or '2.4...' and it looked normal and in the expected range. However, after post-hoc I have certain rows, even with confidence level above 0.6, with 3d eye diameter as large as '174.75' or '55.7879' which look absurd. I am not sure if I did something wrong while doing the post-hoc or forgot to do something, or the data got corrupt.
Thanks for your help.
1) The post-processing generates two pupil datums per eye video frame (1x 2d, 1x 3d). In Capture, the eye video recording might start slightly delayed to the world process which is responsible for saving the pupil data to disk. This is why the recorded pupil data contains earlier timestamps. 2) The confidence column does not say anything about the quality of the 3d eye model which is responsible for inferring the 3d diameter. For fitting the eye model, it is best to look into different directions such that it can be triangulated well. Ideally, you have such a sequence recorded in your eye videos. The latest v3.4 release gives rough visual feedback regarding the fit and also adjusts the model_confidence value accordingly. See https://github.com/pupil-labs/pupil/releases/tag/v3.4 for details.
Pro tip: You can also restart the detection from the UI and it will keep your current model. this way you can reapply a well fit model to the beginning of the video.
@papr Hi, I have a question about the solution to device slippage. In Pupil Core 3.4, the short-term eyeball model is built to adapt to slippage. However, as far as I know, in order to compute the right 3D gaze in the scene image, the rotation and translation matrices obtained in the calibration should also be updated in real time according to the newest eyeball model. But I cannot find the corresponding code segment in the newest version of Pupil Core. Can you help me?
Hi. Regarding "the rotation and translation matrix obtained in the calibration should also be updated in real time": this is actually not the case. These transformation matrices describe the relation between eye camera <-> scene camera, not eye model <-> scene camera. As long as you do not change the relative eye or scene camera positions, you do not need to update these matrices.
@papr Thanks for your reply. Which parameters are calibrated during the calibration? I studied the code of Pupil 1.0 before. I think the matrices describe the relation between sphere center <-> scene camera.
No, they never have. The sphere center is estimated in eye camera coordinates, which is why there is no need to adjust the eye camera <-> scene camera transform.
These are the only two parameters estimated for the binocular gaze mapper: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L226-L227
@papr I think the person-specific kappa angle has been computed in the calibration. Do the two parameters include the information about the kappa angle?
Regarding "the person-specific kappa angle has been computed in the calibration": that is correct. Regarding "Do the two parameters include the information about the kappa angle?": they do not explicitly, which is why the slippage compensation is not perfect and requires recalibration after some amount of time.
Hi, @nmt. Several days ago, I discussed the binocular 3D gaze calibration process with papr. Recently, I studied the 3D gaze calibration and have a question. Firstly, we think eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1 implicitly include the kappa angle as well as the relation between the eye camera and the scene camera: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L226-L227. However, we found the sphere centers are transformed from eye camera coordinates to world coordinates using both eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L257-L258. We think this is unreasonable, because the transformation of the sphere center doesn't need the kappa angle information. Can you help me with an answer?
Kappa is encoded implicitly within the transformation matrices.
Did you mean that the transformation matrices are eye_camera_to_world_matrix0 and eye_camera_to_world_matrix1?
Yes, I was referring to these two matrices for the binocular gaze mapper. Capture also fits two additional monocular gaze mappers which both own a separate transformation matrix.
So eye_camera_to_world_matrix0 not only describes the relation between eye camera <-> scene camera, but also includes the kappa angle. Why does get_eye_cam_pos_in_world state that the eye_cam_pose_in_world is the eye camera pose in world coordinates?
@papr Hi, what I mainly don't understand is how the slippage problem can be solved only by updating the eyeball model in real time.
Thank you for the clarification. I will try to think of a way to answer your questions more clearly than I have above.
Thank you very much.
Hi! I just converted the time stamps to date/time but was wondering what the timezone is
It converts to Unix epoch. So UTC+0
I'm really finding it difficult to learn PsychoPy. It seems most of what's done there involves programming and coding.
What's the best way to approach it?
We are currently working on extending PsychoPy's newest "Builder" components for eye tracking. We are planning on adding support for Pupil Core. Unfortunately, this work will take some time. Until then, you will have to integrate the Network API manually into the python code generated by PsychoPy.
Hi, I'm using Pupil Core for a project, and I need access to the angular velocity of the headset as well as the angular velocity of the participant's eye. I think that I can access the eye rotation data. However, I'm not sure how I can collect the angular velocity of the headset via the Network API. What can I do?
Hi, please see my notes below:
- Eye rotation speed: subscribe to pupil, discard 2d data (the method field indicates whether a datum is 2d or 3d), and use the timestamp and circle_3d->normal fields to calculate the rotation speed (see the sketch below). You might want to discard low-confidence data points and make sure that your 3d model is fit well.
- Headset rotation: in Core, this is only possible using the head pose tracker, which requires further setup: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking Subscribe to head_pose to get real-time head pose estimations.
In addition, I have a copy of Pupil Capture v1.7.
Hello! I am using pupil core and I am getting readings on Diameter_3D for pupil size that are negative values. Is this an issue that we can correct on our end or is this an issue with the tracker?
Negative 3d diameter values are a strong indicator that the 3d model is not fit well. First, make sure that the 2d detection works well (high confidence values in the top-left graph) and then rotate your eyes to different locations such that the eye can be triangulated. (The pupil visualization colors might differ if you are using an older version of Capture.)
Another thing I have not understood is the data structure coming from the Network API. In it, there is a field named "base_data". Sometimes I receive a base_data field inside another base_data field, and each one has a norm_pos pair that differs from the other identical keys. What is the difference among the norm_pos data in gaze_on_srf and the base_data fields?
Pupil Capture processes data by applying a series of transformations. These can include changes of coordinate system (while keeping the unit).
Each transformation generates a new datum, which references to its "ancestor" in base_data
This is the rough pipeline: 1. Capture eye image (eye camera coord system) 2. Pupil detection (eye camera coord system) 3. Gaze estimation (scene camera coord system) 4. Surface mapping (surface coord system)
Read more about the coordinate systems here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
So in your case, norm_pos in gaze_on_srf is in surface coordinates, while the norm_pos of its base_data is in scene camera coordinates.
Thanks for your reply. However, I receive more than two norm_pos values in nested base_data fields. To show this structure simply: {"gaze_on_srf":{"norm_pos":"bla", "base_data":{"norm_pos":"bla", "base_data":{"norm_pos":"bla"}}}}
@user-7b683e
{
"gaze_on_srf": {
"norm_pos":"bla", # gaze in surface coordinates
"base_data":{
"norm_pos":"bla", # gaze in scene coordinates
"base_data":{
"norm_pos":"bla" # pupil data in eye camera coordinates
}
}
}
}
Hi guys, I'm having trouble with the sampling rate of the pupil camera when I record data using Pupil Capture. I fixed the sampling rate of the pupil camera (directly in Pupil Capture) to 120Hz, and after data collection I noticed that the mean sampling rate for my recording was around 247Hz. If I fix the sampling rate to 200Hz it does not change anything. Moreover, the differences (in seconds) between the time points (t_{i+1} - t_i) are absolutely not constant; they oscillate between ~0.006 and ~0.014. Is this normal?
Hey, you are looking at the gaze data, correct? Please note, that the gaze data is actually a combination of three streams (monocular left, monocular right, and binocular) gaze stream. Pupil Capture tries to match two pupil datums for the binocular gaze estimation. Since this algorithm needs to run in realtime and there is only limited knowledge about future data, the algorithm upsamples the pupil data. See https://nbviewer.jupyter.org/github/pupil-labs/pupil-matching-evaluation/blob/master/gaze_mapper_evaluation.ipynb for details
Hi papr, no, I am looking at the diameter data in the pupil_positions.csv file.
In this case, make sure to differentiate between left and right, and 2d and 3d eye data. See the method and eye_id columns (0: right, 1: left).
Yes, I already separate left and right eyes and 2d/3d data, but it does not solve the problem. I still have varying differences between time points, while these differences should be constant if the sampling rate were correct (right?)
No, the eye cameras are of varying sampling rate. 120/200 Hz is the target frame rate, but in practice it varies. In your case 1/120 = ~0.008. The 0.012 entry indicates that a frame was dropped.
0.007961 0.008041 0.007913 0.007999 0.008021 0.007962 0.008381 0.012261 0.003437 0.007916 for example, these are the 10 first differences between my time points
So ti+1 - ti
If you need evenly spaced samples, I suggest linear interpolation. In fact, I recommend looking at Kret, M. E., & Sjak-Shie, E. E. (2019). Preprocessing pupil size data: Guidelines and code. Behavior Research Methods, 51(3), 1336-1342. https://doi.org/10.3758/s13428-018-1075-y
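A minimal sketch of such a resampling step with NumPy; the target rate here is just an example:

```python
# Minimal sketch: resample onto an evenly spaced time base via linear interpolation.
import numpy as np

def resample_evenly(timestamps, values, rate_hz=120):
    t = np.asarray(timestamps)
    t_even = np.arange(t[0], t[-1], 1.0 / rate_hz)
    return t_even, np.interp(t_even, t, np.asarray(values))
```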
Ok! Thanks for the answer and the reference, I will have a look! I have another issue, this time using LSL. I am recording synchronised EEG and pupillometer data using a platform that uses LSL to synchronise the devices. When I look at the data, I have even bigger differences between time points (up to 12 seconds sometimes!). And it is not rare that I have negative differences, which suggests that the [ti+1] point is before the [ti] point. Do you know what this means?
Which version of the LSL relay did you use?
In this case the sampling rate is closer to 250Hz than to 200Hz. Even if the sampling rate varies, is it normal to have such a big difference?
1.1
It is the 2.1 sorry
2.1 forwards gaze, i.e. this is subject to the mixed streams mentioned above.
Unfortunately, in this version, it is not possible to reproduce the original pupil datum timestamp. Therefore, I suggest sorting the data by time and then applying the preprocessing linked above. And yes, this data is subject to higher timestamp-difference variance (due to using the gaze stream instead of the pupil stream).
And I'm using Pupil Capture v3.1.16
Larger time gaps indicate a transmission issue.
Ok, I understand. Thanks for your help papr
Hi, I have a question about the raw data: can I read the time to first fixation and the time spent looking at a surface?
You will have to calculate these by yourself, as Pupil Player does not make any assumptions regarding the starting point.
Hello, is the model number of the eye camera used in Pupil Core available for disclosure? I wonder where you source cameras with such a small form factor while having a really high refresh rate.
I am not sure if this information can be disclosed publicly (if at all). Try sending an email to [email removed]
Hi, I am trying to use the surface tracker. I have 3 screens that are in the same plane, but I have an additional screen that the participant will hold. It says in the docs that all surfaces have to be in the same plane. Is there any way to have surfaces defined in more than one 2d plane? If this guideline is not followed, will the results be worthless?
I think the documentation says/should say that the markers are in the same plane as the surface that you want to map to. In your case, the additional screen just needs its own markers and surface definition. Then you should be fine.
oh yeah sorry my bad, I misread that and panicked for a second. Thanks for the prompt reply.
I have downloaded Capture v3.6 and received data via the Network API. However, I couldn't see the circle_3d->normal field you mentioned. In gaze_on_surfaces, there is only one base_data field. For example, "base_data":["gaze.3d.0", "12647.33"] was written in the field.
So, how can I receive circle_3d data in Capture v3.6 as in v1.7?
I was already wondering why your data had so many levels. We simplified the data structure at some point because it blew up the recording files. In newer versions, surface-mapped gaze does not include the full pupil data.
Do you mean 2.6 or 3.4? 3.6 does not exist
Oops, yes, 3.4.0. I'm sorry, this day was so long.
What can I do to receive both the eye angle and the head angle? Is there any version in which these two data streams can be received?
Two questions: 1. Are you using the surface mapped gaze to estimate head rotation? 2. Do you need these two data streams in real-time or is it ok for you, if you matched them post-hoc?
You told me that I can use the head_pose subscription key to receive headset rotation info. For this reason, I would like to get head_pose data.
I don't need the degree data in real time, but they must be in sync.
Okay, I learned that I can use the recording file for this case. However, I want to ask my question more clearly again.
I want to receive two kinds of data: eye rotation and head rotation in degrees along the x-axis. Using these data, I will calculate velocity.
My questions are: 1) Can I get degree data (of eye and head) via the Network API? 2) If I can, with which version of Pupil Capture?
1) Yes, you can. 2) You can use the latest version for this.
But gaze_on_surfaces is not what you are looking for. Instead of using the surface tracker plugin, you need to use the head pose tracker plugin.
You need to subscribe to pupil and head_pose and process these two streams.
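A hedged sketch of such a subscription loop, assuming Pupil Remote on its default port and the Head Pose Tracker plugin running in Capture:

```python
# Hedged sketch: subscribe to pupil and head_pose over the Network API.
# Assumes Pupil Remote on its default port and the Head Pose Tracker plugin running.
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")      # pupil datums (circle_3d normal, timestamp, ...)
sub.subscribe("head_pose")   # head pose estimations from the head pose tracker

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    # match/process the two streams by their timestamps here
```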
I would like to thank you. The data structure changed drastically between versions. I have got the data I want now.
Hello everyone! As I am analyzing the pupil core data for my dissertation, I have a question related to fixation. Do we have a term defining the time elapsed between the first and the next fixation? I am attempting to call it "Fixation-changing time". Please let me know if anyone knows. I will continue to look it up on the documents from pupil labs and published papers. If I found anything, I will follow up here. Thanks!
Kia ora from New Zealand. I'm currently tasked with extracting saccades data from pupil labs tracking data. I've attempted to follow through the preprocessing work by @user-2be752 (https://github.com/teresa-canasbajo/bdd-driveratt). I'm a bit out of my depth and was hoping I could get a few pointers. I managed to get the 'make' file to complete, however when trying to run the python script I'm getting lots of errors. I was hoping someone might have some fairly detailed instructions that I could follow. Any help would be hugely appreciated. - Mat
Hello. Thank you for your kind support. I am trying to use the data in fixation.csv to estimate subjects' visual angles. However, the variance of the gaze_point_3d_x,y,z values is sometimes extremely high. What could be the cause of this error?
Hi @user-4bad8e. Variance like this usually coincides with low pupil detection confidence, e.g. when the pupils become obscured by eyelids/lashes. You will probably need to filter these low-confidence events from your data (e.g. <0.6). Confidence values are under the column header 'confidence'
hello, what is the best cell phone to use with the pupil core?
I'm using one but with a lot of problems when we download the file to generate the results
Hi @user-ee72ae, please be aware that Pupil Mobile is no longer maintained. With that said, it should still work on, e.g. OnePlus devices. Which device are you currently using?
Hey papr,
After what you told me about getting info on pupil and head pose, I have looked at the documentation for these fields. However, I haven't found some details I would like to see in the docs.
Firstly, in the pupil datum structure, I want the eye rotation amount in degrees. But its unit is declared as mm, which I assume is millimeters.
Secondly, I haven't seen a guide on the head_pose subscription key. As with the eye rotation, I want to get the rotation amount of the headset in degrees.
I was hoping you could address the points below: 1) Can I access degree values to calculate angular velocities? If yes, which field should I use? Apparently, circle_3d->normal isn't a suitable value due to its unit, and I need degrees. 2) If that is not possible, how can I calculate degree values from the existing fields in the current datum structure, i.e. Capture 3.4.0? 3) Or is there any other method you can suggest to calculate the velocity of the eye and the headset in deg/sec?
Sorry, I don't think the documentation on this subject is enough for my purposes. For this reason, I have sent you many questions.
Thanks.