Hello, I am working on a project using the Pupil Labs Core in which a picture is displayed on either the right side of the screen or the left. I'm trying to use the Pupil Core to determine if the user is looking at the picture. I tried getting the gaze positions for a few seconds and determining if the average x-position was less than 0.5 (assuming the world view x-coordinates are [0, 1] starting from the left). However, this is not giving me consistent results. Sometimes when I am looking left, it thinks I'm looking right and vice versa, but other times it works fine. Could anyone provide some help, resources, examples, etc. on how to determine the coordinates of where the user is looking?
@user-d9a2e5 Thanks for following up. Some notes: 1) Good raw data + pupil detection: As you note, it is very important to make sure that you are getting good raw data. This means adjusting the eye cameras to get a good view of the eye and establishing robust pupil detection. In most cases the default pupil detection parameters will work well, but if you have a very dark environment, or a challenging subject, you may need to adjust pupil detection parameters (e.g. pupil min, max settings). 2) Calibration: What calibration method are you using? The screen-based calibration method in Pupil Core, or another method? 3) Experiment: Based on what I understand, you are trying to understand where someone looks on an image. The image is the stimulus in your experiment. If you want to do quantitative analysis on where exactly someone looked at the image, you might want to check out surface tracking (if you haven't already): https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking 4) Concrete information: To provide you with better feedback, you can share example data with data@pupil-labs.com or share images here to help clarify so that the community can respond
Hi @user-8cd2d1, thanks for the clear explanation of your experiment. Based on your description, it seems like this could be accuracy related. What accuracy measurement do you see post calibration in Pupil Capture? Please also see points 1 and 2 in my last response.
@wrp I'm using the basic calibration - the one that shows a white screen and asks you to look at circles on the screen. I will check your suggestion, thanks! I'm adding a video of what I tried to do - putting the gaze data onto the image that I looked at, but first I looked at the farthest points to normalize it in my code. Again, I plotted 1 sec of data at a time (and not a point-based plot). The next step is that I want to put gaze on video.
@wrp Hey, just to be sure - if I use surface tracking, is my data normalized based on the surface? Thanks for all your help!
@user-d9a2e5 norm_pos in gaze data is normalised to the scene camera, norm_pos in gaze on surface is normalised to the surface.
so basically these are normalized for surface tracking, right?
@user-d9a2e5 Check out this example which uses gaze on surface to move the mouse cursor: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py#L85
@user-d9a2e5 correct, these files are normalised to their respective surface
I was not sure if you were using the realtime api or the exported files
Exported files, and I kind of understood how to use the API as well. You are helping me so much with my project, thank you very much!
Hi, I'm new to Pupil Core. I just connected it to my MacBook Pro and ran Pupil Capture, but it doesn't detect the camera properly. It just keeps showing this
@user-8a4f8b Please make sure that all cables are connected correctly. Also, please "restart with default settings" from the general settings menu. If the issue persists please contact info@pupil-labs.com regarding a potential hardware issue.
@papr thank you, I just tried "restart with default settings" and it stopped showing the message, but now it only shows a gray screen, any idea?
@user-8a4f8b This sounds like a hardware issue. Please contact info@pupil-labs.com in this regard.
ok thank you
As a follow up: We were able to verify that the hardware was working correctly on a separate machine and are now looking into possible setup-related causes for the issue.
Does anyone have experience getting Core to work for people with glasses? Maybe 3D printed models to attach lenses to the frames (in a way that doesn't interfere with the eye cameras)?
I've found it works fine with contacts. Glasses are a real challenge as they distort the image, so the camera would have to be placed behind the glass, I guess
Related to this question, what would be optimal angle/distance for the camera to be placed from the center of an eye?
Hi. How does the Pupil eye tracker/algorithm estimate the angle between the optical axis and the visual one? Is there any detailed documentation about this aspect? What assumptions are made? Is the 5 degree estimate for normal vision used a priori? How is the screen marker calibration used if there is no knowledge of the distance to the screen? Do you currently use the glint-free algorithm proposed in Kay's paper? Thanks
Hi, would it be possible to get v1.4 of Pupil Player? I've been trying to find one online to no avail and any advice would be really appreciated!
Hi, I get a message when trying to open Pupil Player that Apple can't verify the program. How can this be resolved? When opening the program anyway, it runs for a few seconds (background process) before closing.
@user-bec213 All releases are online on github https://github.com/pupil-labs/pupil/releases/tag/v1.4
@user-fc194a yes, right-click the app and click Open. The same dialogue will open but with the option to start the app.
@papr Thanks for the help!
Ok, this time I get a message saying Apple cannot verify that the program doesn't contain malware. It says to update it or contact the developer, with a link to where it was downloaded (latest release). If I go through Settings/Security, Pupil Player is there with "Open anyway"... When clicked, nothing happens. Looking at the Activity Monitor, Pupil Player opens with 2 more instances for a few seconds before closing.
@user-fc194a could you share the player.log file in the pupil_player_settings folder?
2020-04-03 14:49:04,111 - player - [INFO] numexpr.utils: NumExpr defaulting to 4 threads.
2020-04-03 14:49:05,573 - player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "launchables/player.py", line 730, in player_drop
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/pupil_recording/update/__init__.py", line 15, in <module>
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/video_capture/__init__.py", line 42, in <module>
  File "/usr/local/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
  File "shared_modules/video_capture/uvc_backend.py", line 24, in <module>
ImportError: dlopen(/private/var/folders/br/c6w3ng296bg35c0ksq08c2380000gn/T/AppTranslocation/E7129C0E-24D7-48F1-B397-49BFA6573DA1/d/Pupil Player.app/Contents/MacOS/uvc.cpython-37m-darwin.so, 2): Library not [email removed]
  Referenced from: /private/var/folders/br/c6w3ng296bg35c0ksq08c2380000gn/T/AppTranslocation/E7129C0E-24D7-48F1-B397-49BFA6573DA1/d/Pupil Player.app/Contents/MacOS/uvc.cpython-37m-darwin.so
  Reason: image not found
2020-04-03 14:49:06,987 - MainProcess - [INFO] os_utils: Re-enabled idle sleep.
thank you
@user-fc194a Which version are you running? The most recent (v1.23)?
Yes
Ok, I will look into the issue and try to upload a new bundle as soon as possible (latest by Monday). Please use Pupil Player v1.22 until then.
@user-fc194a Is it possible that you saved the bundle in an iCloud folder? Would you mind moving the application into your /Applications folder and trying again?
On PC, is there a command line way to automate the export of a recording without the whole process of opening Player, dragging and dropping the recording, and exporting?
For MacOs, I'll try that as soon as I return. Thanks
@user-fc194a Check out our community repo. There are third-party scripts that extract and export data from the intermediate recording format: https://github.com/pupil-labs/pupil-community
Hello, I need help with this: is it okay that it's moving so much? Is there a way to make it more stable? Thanks for the help!
@user-d9a2e5 I thought surfaces required more than one marker to be detected. Usually, the more markers you have, the more stable the surface will be
So I need to add more, okay, I will try it, ty!
Greetings, I am trying to get the Pupil Core to run on my Windows PC, unsuccessfully. The world camera does not show up; all I see is the prompt (attached). I have gone through the troubleshooting steps mentioned on the website with no resolution. Could anyone point me in the right direction? Thanks in advance
@user-fc194a Please checkout our recent v1.23-5 macOS release update. It should fix the issue that you have encountered.
@user-6779be Is the issue that the world window does not appear, or have you minimized it in your screenshot?
Hello again, just to be sure - is the gaze data normalized between 0 and 1, or -1 and 1?
@papr The world window didn't appear. I have switched to v 1.20 and it seems to be working fine now though.
@user-d9a2e5 the gaze data is normalized between 0 and 1
@user-6779be please delete the folder pupil_capture_settings in your home folder and give v1.23 another try. If it still does not open, please share the log file with us, which you can find in your home folder > pupil_capture_settings > capture.log
@user-c5fb8b thanks! i asked a stupid question hahahha
@user-d9a2e5 definitely a valid question to ask! There's also some info on our docs, but it might be a bit hidden: https://docs.pupil-labs.com/core/terminology/#coordinate-system Please always come here and ask if you are not sure about something or cannot find the information elsewhere! π
Hi, I have just started using Pupil Core. I have a question about the "World camera [email removed]". As shown in the picture, it was set to a 120 Hz frame rate and recorded. However, the video extracted using Pupil Player was 50Hz. Why? Please let me know if you know.
@user-9f8e50 when you say that the video "extracted" do you mean the video that was exported with Pupil Player? Where do you see 50Hz measurement?
@wrp I'm sorry that my explanation was not good. I would like to know more about the video exported with Pupil Player. Can I change the frame rate and export the video? When the world camera is set to 30Hz, I can export a 30Hz video. However, when the world camera is set to 120Hz, I can't export a 120Hz video.
Could you please give me your thoughts on the questions ?
@user-9f8e50 Pupil Player should be exporting the video at the same frame rate as it was recorded in Pupil Capture. In our experience, different video players can report different frame rates for the same video, depending on their measurement approach. Could you let us know which tool you are using to read out the exported frame rate?
@papr Thank you very much for your prompt reply and great advice.The frame rate can be checked as shown in the picture. This video was recorded with the world camera set to 120Hz.
Hi @user-9f8e50 can you please try opening in VLC player and then getting info about the file
@wrp Thank you for advice. I checked the info of the video file using VLC player.
ok, thanks for the feedback @user-6eb76d could you please make a sample recording at 120Hz, upload to online storage, and send to data@pupil-labs.com and our team can investigate and respond via email or directly here.
@wrp OK! I'll send you an email.
@user-6eb76d email received, I or one of my colleagues will get back to you after reviewing the data. Thank you.
Hi @user-9f8e50, I looked at your recording and calculated the recorded frame rate from the externally saved timestamps (world_timestamps.npy). The average frame rate is indeed 50.3. This means that this issue is not related to the exported video but to the originally recorded video.
Pupil Capture will always run at best effort, i.e. it will process the video as fast as possible. If there are more frames than it can process in real time it will start dropping frames. I think this was the case here.
If you want to save resources, and you are not dependent on real time data, you can disable the pupil detection in the general menu. This way you will have to run the offline pupil detection and calibration in Player, but it will free a lot of resources during the recording.
In Capture, you can check the current frame rate by looking at the FPS graphs in the top left corners of the Pupil Capture windows (world, eye0, eye1).
Hello- has anyone done any eye tracking study while reading texts with pupil-lab core?
@papr Thanks! I did what you told me. But the FPS graphs didn't change. Please advise me if you have another method. The pic shows the FPS when "detection & mapping mode" was "disabled" in the general menu.
@user-8a4f8b you might want to check publication list: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit#gid=0 to see if there are studies similar to what you are trying to achieve.
@user-9f8e50 what are the specs of your machine (CPU and ram). Based on your screenshot it looks like the CPU is running at 50%, which would lead me to believe the machine is sufficiently powerful. In your previous screenshot I saw 105 FPS
@wrp My computer specs are Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, RAM 8.00GB. The previous photo is definitely 105FPS, but after calibration it is 50FPS.
@user-9f8e50 machine specs are sufficient for sure. Did you modify any of the world camera settings other than the temporal resolution? Could you try the following: 1. Restart Pupil Capture with default settings 2. Change world camera to 120Hz setting 3. Calibrate 4. Minimize eye windows 5. Report a screenshot of the world window to show fps display and make a recording
@wrp Thank you! I tried following the procedure. Please look at the pictures. I tried recording three times with different durations (14s, 1min18s, 2min). The frame rate of the video was 76-98 FPS (14s: 79 FPS, 1min18s: 98 FPS, 2min: 76 FPS). Recording time doesn't seem to be related to FPS. Do I need to use a higher-spec PC?
@user-9f8e50 Thanks for following up and for the detailed info. I would certainly expect closer to 120 fps - would you be able to quickly test on a different machine to compare results? Unfortunately I am not able to replicate your results.
@wrp Sorry! I made a mistake with the upload.
I forgot to upload this picture. @wrp OK! I'll try it. I have a question. Have you ever recorded 120FPS video? If so, could you please tell me the specs of your machine?
@user-9f8e50 yes, I have recorded at 120 fps on macOS, linux, and windows with intel i7 processor and 16Gb ram.
@wrp Thanks. Your information was very helpful. Please teach me again!
@user-9f8e50 Disabling the accuracy visualization might also save you a bit of processing resources. You can do so in the "Accuracy Visualizer" menu.
Hi there, I am using pupil core with the pupil software. in the export, timestamps are given as "pupil time". is there an easy way to convert to UNIX time? (jan, 1st 1970) background: I want to synchronize with a device using these UNIX timestamps.
Hey! I'm an assistant researcher in Human Factors. I'm looking for a team that has worked with saccades before, in order to validate my calculations, which are in pixels per second. We can't convert to degrees because the subject is placed at an uncontrolled distance from the visual area of interest, and it seems to me that the degree calculation takes this distance into account (we did not control it).
Hey guys, does anyone know why the blink detector sometimes gives unexpected output such as this (on the right)? It's happened a few times where the file structure gets jumbled and columns are spammed with irrelevant data
@user-430fc1 could you share the right file with us?
sure, here it is
@user-430fc1 it looks like for some reason the file is missing a bunch of commas. Would you mind sharing the recording with [email removed] so we can try to reproduce it?
@papr Thanks, just sent a zip of the recording minus the mp4s.
@user-430fc1 I will come back to you early next week
Do different versions of pupil player save their settings on the same file/folder? I installed and ran 1.23, and now my settings in 1.20 are back to default
Hello, I'm curious if it's possible to get the x,y data output as non-normalized data? As in, I'd like to see the coordinates the user is looking at on the screen.
just to clarify, the data I'm getting is normalized between [0, 1], and I'd like actual coordinates [0, max_pixel) If it is possible, do you know how accurate this data can be?
@user-7d4a32 If your gaze data is calibrated you can simply multiply the normalized coordinates with the resolution of the camera that you are using.
Hi @user-5ef6c0, yes Pupil Player only has a central location for settings. When switching versions, the settings will be set back to default for the newly opened version.
Hi @user-7d4a32, do I understand correctly, that your subjects are looking at a computer screen and you want to get the screen coordinates of their gaze positions? To achieve this, you would have to use the surface tracker in order to track the screen in your world video, otherwise there is no way to automatically infer the relationship between the head-mounted eye-tracker and the screen. When you define the screen as a surface, you will be able to export normalized surface coordinates for gaze data, which range from 0 to 1 on the surface. When you multiply this with your screen resolution, you will get screen pixel coordinates. This process introduces some additional noise through the surface tracking system, but I'm not sure how much.
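For reference, a minimal Python sketch of the conversion described above. The CSV file name and column names are assumptions based on a typical surface export; adjust them to match your own data.
```
# Convert normalized surface gaze to screen pixel coordinates.
import csv

SCREEN_W, SCREEN_H = 1920, 1080  # your screen resolution

with open("gaze_positions_on_surface_Screen.csv") as f:  # assumed file name
    for row in csv.DictReader(f):
        x_px = float(row["x_norm"]) * SCREEN_W
        # surface coordinates have their origin at the bottom left,
        # while screen pixel coordinates usually start at the top left
        y_px = (1.0 - float(row["y_norm"])) * SCREEN_H
        print(x_px, y_px)
```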
Hi, I am trying to record videos with Pupil Capture from two different cameras (via Pupil Mobile). I opened Pupil Capture twice and connected one of the instances to the first phone running Pupil Mobile and the other to the second phone. In Pupil Capture I do see the scenes for both cameras. However, when I hit the record button, only the camera connected to the first phone starts recording. The other one says that it can't record because it has no connection (while I do see what the camera sees in Capture). The phones (and laptop) are connected via WiFi on a local network without internet connection. Do you have an idea what the problem could be? Thanks
@user-76c3e6 Hey, the recommended workflow would be to record the data on the phones themselves instead of Capture as the video streaming is not guaranteed to transmit all the data. If the network is saturated, the video stream might start dropping frames.
You can use the Remote Recorder plugin to start the recording on both phones via a single Pupil Capture instance.
Do not forget to enable the Time Sync plugin in Pupil Capture such that the Pupil Mobile phones can adjust to a synchronized clock.
@papr thanks! It seems that most of my troubles are indeed a result of bad connections. So yes, maybe it's better to record on the phones themselves. If I decide to record data on the phones themselves, is it then also possible to synchronize time between the two phones? Or is that impossible?
@user-76c3e6 Yes, if you start the Time Sync plugin in Capture, the Mobile phones should synchronize to the Capture instance, and therefore follow the same clock.
@user-76c3e6 You can test this by making a test recording with both phones, open both in Player, and use the Video Overlay plugin to overlay one world video ontop of the other. They should be synchronized.
@papr Ok, I'll check that. But just checking again (sorry), even if I record on the phones themselves, and I have Pupil capture open in the background (but don't record there), Pupil mobile knows that it should synchronize with the other phone?
@user-76c3e6 Correct. Both Pupil Capture and Mobile phones will detect each other in the local network, and the phones will adjust their clocks in order to be synchronized with Capture. You can verify that the phones have been detected in the menu of the Time Sync plugin. It lists all time sync group members.
@papr great, thanks! Another question, what should the Status be? They now differ, one has "clock master" and the other is synced with the IP of the desktop
@user-76c3e6 For this setup, there should only be one Pupil Capture instance, instead of two (it can work with two, but for simplification I would recommend to only run one).
Pupil Capture will be the clock master, while the phones should be time followers, being synched to the computer running Capture.
@papr thanks again! Is it also important to have the Pupil Groups plugin on in this case? I put it on, but it doesn't seem to recognize the other group members. Not sure if that's a problem?
@user-76c3e6 No, this plugin is not necessary in this case. Both plugins, Time Sync and Pupil Groups, use the same technology in the background, but the Groups plugin is meant for communication between multiple Capture instances which is not necessary for the new setup.
@papr Ok... sorry, me again... I feel a bit stupid, but I can't find the recordings made with Pupil Mobile (using the remote recorder). The phones start recording, but afterwards it seems that the recordings aren't stored in the local movies folders on the phones. Should they be there? (I didn't change any of the save locations.)
@user-76c3e6 By default the recordings are stored to Internal storage -> Movies -> Pupil Mobile -> local_recording
You can check your Pupil Mobile settings. It will show the recording folder.
@papr it says they are stored in the default folder (/storage/emulated/0/movies/Pupil Mobile). Earlier this morning, the files were indeed there. But now when I connect the phone to the desktop, the files aren't there anymore. On neither phone.
@user-76c3e6 Ok, that is the correct location. In my experience, it is sometimes necessary to reboot the phone in order to access new recordings from a computer.
@papr thanks, rebooting is the solution.
If I can't overlay the two videos (I enabled the plugin, then dragged one video over the other video in the main Player window), does that mean that the synchronization failed?
@user-76c3e6 Did you open the recording which you tried dragging in Player before? Player needs to convert Pupil Mobile recordings to Pupil Player recordings first before the overlay can work correctly.
@papr Yes, I opened both videos in Player
Is there an error message when you try to overlay the scene video of one recording on the other? If not, would you mind sharing both recordings with [email removed] so that we can have a look at the recorded timestamps?
There is no error, when I drag the one video over the other, then it simply replaces that video instead of playing them both. I'll try it once with new recordings. If that doesn't work, I am happy to share the recordings with you via email. Thanks
@user-76c3e6 Ah, are you dragging the complete recording folder? This actually opens the other recording. Please try to drag&drop the world.mp4/mjpeg file instead.
@papr ok thanks, that solved the problem indeed π
Hi @user-5ef6c0, yes Pupil Player only has a central location for settings. When switching versions, the settings will be set back to default for the newly opened version. @user-c5fb8b thank you for the clarification.
A question regarding the "gaze history" feature in the vis polyline plugin. Is the robustness of the tracking improved when using fiducial markers and the head pose tracker plugin?
@user-5ef6c0 Even though the functionality does not use marker tracking explicitly, the markers will help the optical flow algorithm since they are salient features that are easy to track.
I see. Would it make sense to try to make them work together? Does the optical flow algorithm take into account 3d gaze data? Or is it just based on object detection/pixel tracking? If the former, I imagine that using head pose data could increase the accuracy of the gaze history visualization. Or maybe I'm talking nonsense
@user-5ef6c0 Currently, the gaze history is simply based on pixel tracking and 2d gaze. Yes, using the head pose tracker, it would be possible to build a similar but more accurate functionality. But it would have the disadvantage of being less general. People would have to set up their environment to work with the head pose tracker.
For this to work with the head pose tracker, you simply have to transform 3d gaze data into the "room" coordinate system based on the "head pose" of each frame. You can start building this based on the output of the raw data and head pose tracker exports.
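As a rough Python sketch of that transform: given a camera-to-room rotation R and translation t for a frame, a 3d gaze point in scene camera coordinates can be mapped into room coordinates as below. How you extract R and t from the head pose tracker export is up to you; the values here are placeholders.
```
import numpy as np

def gaze_in_room(gaze_point_3d, R, t):
    """Map a 3d gaze point from scene camera coordinates into room coordinates."""
    return R @ np.asarray(gaze_point_3d) + np.asarray(t)

# placeholder pose; in practice R/t come from the head pose tracker export
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
print(gaze_in_room([10.0, -5.0, 400.0], R, t))
```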
@user-c5fb8b Yes, you are right. Thank you for the information, I thought it would be possible to get real x,y data from the start without having to integrate each and every screen (since I might be working with many screens). Thank you both, @user-26fef5, for the information.
@papr thank you
@user-7d4a32 you can also define multiple surfaces for multiple screens in the Surface Tracker.
Dear pupil community, just for a quick checkback: I plotted the time between my pupil samples (per eye) for one of my experimental blocks (approx 8 minutes) and this is what I get:
The mean is very close to the 5ms that I was expecting (these data were recorded with the 200Hz VR add-on). I just wanted to ask whether that is expected behavior or whether I should be suspicious of something in my hardware (or software) setup. Thanks for feedback already!
Hi guys, I am new to Pupil Labs and using a Pupil Core mounted on a HoloLens. In the documentation it is mentioned that the 200Hz eye cameras (my current version) do not need to be adjusted, yet I see that the detection accuracy varies when adjusting the focus of the eye cameras. I am very eager to know why there is some improvement when adjusting the eye cameras! Moreover, what is the best calibration method for detection at different depths in room-like and bigger environments? (I mean from 1 meter to about 10 meters)
Hi @papr, I just realized I don't think I responded to this message quite some time ago. "@user-c6717a please be aware that the monotonic clock might use an additional offset while being time synced... Do you need to be time synced in real time or do you just need to synchronize your clocks after the effect? There are two clear solutions for both of these use cases. Let me know which one you need." We are still trying to solve the problem but have been distracted with other issues. To answer your question, we are going to be doing the synchronization after the fact, so no need for real-time synchronization. Can you remind me of the solution you had in mind? Thanks!
@user-c6717a In that case, you only need to calculate the difference between the pupil time and utc-0 date time during the recording, and apply the difference to all recorded timestamps. This difference can be calculated based on the start_time_system (utc-0 timestamp) and start_time_synced (pupil timestamp) values recorded in the info.csv/info.player.json file.
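A minimal Python sketch of that conversion; the JSON key names below are assumptions, so check them against your own info.player.json.
```
import json

with open("info.player.json") as f:
    info = json.load(f)

# offset between the wall clock (UTC-0) and the pupil clock at recording start
offset = float(info["start_time_system_s"]) - float(info["start_time_synced_s"])

def pupil_to_unix(pupil_timestamp):
    """Convert a pupil timestamp to a UNIX (seconds since Jan 1st 1970) timestamp."""
    return pupil_timestamp + offset
```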
@user-141bcd The pupil timestamps are taken from their respective eye video frames. On Windows, the eye video frame timestamps are calculated based on the time they are received in the application and a fixed offset that compensates the transmission delay from the camera to the application.
Unfortunately, this real transmission delay is not fixed and can only be approximated. What you are seeing is the variance in this transmission delay.
@user-64e880 You should adjust the eye cameras such that the pupil is visible in as many eye positions as possible. What the documentation is referring to is that you should not be adjusting the focus of the eye cameras by trying to rotate the tiny lens.
I would recommend using the single marker calibration in manual marker mode (printed markers). You can read more about it in our documentation: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-methods
@papr thanks. I'm asking as I want to apply a saccade detection algorithm that expects a fixed sampling rate. If the variance in the recorded timestamps is actually driven by transmission times (not by recording time), would it be a legit assumption that the samples are actually equidistantly spread out over time originally?
@user-141bcd Yes. On operating systems where we can measure the time of exposure (macOS/Linux), we see a much smaller variance in the timestamp differences.
@papr interesting. So for velocity calculations would you recommend using the pupil timestamps (in my case unfortunately recorded on Windows) or assuming a fixed delay of 5ms between two samples of the same eye?
I would assume a fixed 5ms duration between samples, but rely on the pupil timestamps to detect dropped frames, e.g. if the time difference between pupil timestamps >10ms
@papr thanks! so the 2nd/right peak at ~8ms in the histogram above (~25% of the data) does not consist of actually delayed frames due to dropped frames but of problems with the time estimation? Actually dropped frames are then the single points beyond 10ms (in my case <1%)? That's very good to know.
Hi All - Can anyone advise on appropriate cleaners for the Core during COVID-19? Which are safe to use with the Pupil Core (which I think is just PLA material)? See FDA/EPA's recommended cleaners here: https://www.epa.gov/pesticide-registration/list-n-disinfectants-use-against-sars-cov-2. Thanks!
@user-141bcd yeah, especially since most of the data seems to be quicker than 200Hz given the <5ms time difference
@papr but the 200Hz is an upper bound defined by the hardware, right? So those rates beyond 200Hz (or below 5ms spacing) are also artifacts of the software-based timing estimate?
@user-76ca8f we use https://www.amazon.de/-/en/gp/product/B0849VG5GR/ref=ppx_yo_dt_b_asin_image_o00_s00?ie=UTF8&psc=1 for cleaning the Pupil headsets during the assembly process.
@user-141bcd both correct.
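For reference, a small Python sketch of the approach discussed above: assume a nominal 5 ms interval, but use the recorded pupil timestamps to flag likely dropped frames. The file and column names follow a typical pupil_positions.csv export and are assumptions.
```
import numpy as np
import pandas as pd

df = pd.read_csv("pupil_positions.csv")
ts = df.loc[df["eye_id"] == 0, "pupil_timestamp"].to_numpy()

dt = np.diff(ts)
dropped = dt > 0.010  # gaps larger than 10 ms indicate at least one missing frame
print(f"{dropped.sum()} gaps > 10 ms out of {len(dt)} intervals")
```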
@user-755e9e Thank you!
Hi, I want to ask a question! Does anyone know what the triangle's function on the surface is? Thank you!
@user-ab28f5 hey, yes, it shows the orientation of the surface. Specifically, it points "up".
So the top of the triangle means the top of the surface, right?
I have another question... How can I know where the origin of the coordinate system is?
@user-ab28f5 The normalized coordinate system has its origin in the bottom left. https://docs.pupil-labs.com/core/terminology/#coordinate-system
@papr Thank you!
And I want to ask another question...
How can I know whether the calibration is good or bad? Is the value used for dismissing data a good reference point?
@user-ab28f5 There is some information on that in our best practices section: https://docs.pupil-labs.com/core/best-practices/
Hello, the pose_t attribute from apriltags, does it give the position of the center of the tag? Cheers!
@user-894365 Could you respond to @user-eaf50e ?
@user-eaf50e yes, pose_t is the 3d position of the center of the tag with respect to the camera.
@user-430fc1 I will come back to you early next week @papr hey, were you able to figure out what might have been causing this?
@user-430fc1 You should have received a response via email. Turns out that there is no actual issue. The base_data field includes a series of space-separated values. The length of this sequence depends on the duration of the blink. In your case, the blink is a false positive detection with a duration of 51 (?) seconds. Therefore, it includes a lot of base data which is not correctly displayed by Excel.
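A small Python sketch of how the base_data column could be split back into its individual values; the file and column names follow a typical blinks.csv export and are assumptions.
```
import csv

with open("blinks.csv") as f:
    for blink in csv.DictReader(f):
        # base_data holds space-separated pupil timestamps, one per base datum
        base_timestamps = [float(ts) for ts in blink["base_data"].split()]
        print(blink["id"], len(base_timestamps), "base datums")
```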
@papr I didn't receive an email. But thanks, this makes sense.
@user-430fc1 For reference, the response was sent Apr 14, 2020, 9:09 PM UTC+2 (CEST) from [email removed]
@papr Thanks, must have got lost somewhere as I can't find it. Not to worry, case solved.
Hello everyone! I'm working on a custom glasses mount for a Pupil Labs headset. I'm finishing another person's project, and I have a small problem: the cables of each side camera were taken off from the connectors. Can someone send me a picture of the order in which I need to put them back? It would be nice
Hello @user-b4144f , the connection of the JST connector is as in the photo attached. We would still recommend you to get it repaired by our experts, in order to avoid additional hardware failures. If you want to proceed, please get in touch with [email removed]
@user-755e9e Hello! Thank you for your reply! Actually, the JST connectors are not broken, and neither are the cables. They were just "disassembled", so I only need to put them back together in the right place.
@user-755e9e On the other cable, the colors are not the same for the data ones. They are blue and yellow. Are you able to tell me their order too ? Thank you !
@user-b4144f With the JST facing as in the photo above, from left to right: BLACK - BLUE - YELLOW - RED.
@user-755e9e Perfect ! Thanks a lot ! I appreciate your help π
no problem, good luck with your project! @user-b4144f π
hello - was wondering if anyone could help me extend the second camera for the eye tracker device
apparently the two camera windows in Pupil Capture are only tracking one eye
does anyone know how to fix this problem?
Hi
I am using the pupil core, and need live video feed and gaze position in Unity.
I am using ArUco markers that I've placed around a room, and I then use an OpenCV library to get the ArUco marker IDs and the position of each corner of the ArUco marker. I use this to calculate the distance between an ArUco marker and the gaze position, to figure out if I am looking at a marker. However, I am having trouble with reliably recognizing when I am looking at one of these ArUco markers.
It seems that I am not always getting the latest gaze data using the network API, which is a problem for my project. When I check the eye position in Pupil Capture, it is shown directly on top of the ArUco marker, which is why it is confusing to me why I am unable to recognize that that is where I am looking.
I am creating individual threads for subscribing to frame.world and gaze.2d.0.. I have changed the eye camera to only use 30 frames per second, so that there should be the same number of messages for the video feed as for the gaze position.
Since I am only interested in live data, is there any way to force the pupil capture to drop everything that is not live? such that the threads I am creating in unity can not fall behind on getting the data from the glasses.
Also, I am pretty sure I have a version with the 120Hz eye camera, at least that is the highest frame rate I can choose. There is no lens adjuster in the package and I am not having any luck using my fingers to twist the lens. Should I just try with more force? I don't want to end up breaking the camera.
Hi, I have just started using Pupil Mobile. I'd like to ask a question - Is it possible to turn off the world camera from Pupil Mobile App? What I need is only recordings from eye cameras, and I'd like to save storage if possible. I'd appreciate for any comments and advices.
Hi, I would like to ask how the confidence value is calculated? I'm trying to find it in the source code but I'm getting frustrated.
@user-2e2394 Hi. If you use apriltag markers, you can use the built-in surface tracking feature to check whether you are looking at an area of interest or not.
Pupil Capture will also only send data over the network that has been subscribed to. Therefore, if you are not receiving as much data as expected, Capture is dropping data already because you are not processing it fast enough.
Please be aware, that Capture maps gaze at the beginning of each event loop iteration. Lowering the world frame rate will therefore map + publish more pupil positions at once.
Can you elaborate on why you want to twist the lens? Generally, we do not recommend adjusting the lens manually.
@user-74be48 This is not possible as far as I know.
@user-6b3ffb The 2d confidence value is based on how well the fitted ellipse matches the candidate pupil edge pixels. Reference to the implementation: https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_2d/detect_2d.hpp#L600
@papr, from the pupil labs hardware documentation: If you have a 120Hz eye camera, make sure the eye camera is in focus. Twist the lens focus ring of the eye camera with your fingers or lens adjuster tool to bring the eye camera into focus.
This is why I wanted to twist the lens, so make sure it had the right focus.
@user-2e2394 In this case, there should have been a lens-adjustment tool in the original package. It looks like a gear and should be colored orange (depending on how old your device is)
That is my problem, there is no lens adjuster in the package. The eye camera only goes up to 120 frames per second in the Pupil Capture software, which is what makes me think it is the 120Hz eye camera and might need to be adjusted. But I have been hesitant to just use more force, as I do not want to break it.
@user-2e2394 Which resolution do you have selected?
1920x1080, using the zoomed in world camera lens.
sorry, my bad
400x400
That is the 200Hz camera. For enabling the 200Hz mode, please set the resolution to 192x192
This camera does not come with a lens adjustment tool as the lens is not adjustable.
Yeah, I got that from the hardware docs as well, I just thought I had the 120Hz one. Glad I didn't just try to force it. Thanks.
@user-2e2394 Regarding the data reception issue, do you have any questions?
Not really regarding the data reception issue; if Pupil Capture is already dropping data when I am not requesting it quickly enough, then my problem is obviously with another part of my program. I am subscribed to the IPC topics for the world camera and gaze position, and receive data continuously in separate threads for each topic.
You mention lowering the frame rate of the world camera. However, I only have the option of using 30 frames per second, no matter what resolution I choose.
@user-2e2394 How do you know that you are missing data? Also, please check the fps graphs in the world and eye windows to check if they provide the theoretical maximum.
Subscribing each topic in separate threads sounds like the correct approach to ensure as much data processing as possible.
The fps graphs show the same frame rate as chosen, +-1 frame. So ~30fps for both the world and eye camera.
It was not that I am missing data, just that the gaze position I got in Unity seemed to lag behind what is shown in Pupil Capture. So I needed to spend more time looking at a specific object, until the gaze position I received in Unity caught up to where it was supposed to be.
Pretty sure from your first message, that this is not how pupil capture works, and I can't possibly be lagging behind. Maybe using the built in surface tracker, rather than a third party apriltag reader, is going to remove this problem.
And thanks for your help π
I want to ask a question... What does the value for dismissing data mean? Is a higher value good, or a lower one?
Thank you for your reply!
@user-2e2394 In this case, I would either increase the world frame rate (smaller gaze mapping batches that are processed with less lag) or use Pupil Service. Pupil Service only provides a limited set of features, e.g. it does not support making recordings, but it is built to reduce the gaze mapping delay.
@user-ab28f5 By default, Pupil Player hides data with confidence less than 0.6. In some cases, it is worth increasing the threshold to 0.8.
Due to some previous cable failures on other projects (cables broke near the JST connector), I've chosen another way to assemble my project: removal of the male JST, removal of the female JST, and soldering the cables directly onto the board, with epoxy resin added to strengthen everything! Works like a charm
@papr Thanks for answering my question. It was very helpful. Then, if I detach the world camera from the Core, will the Pupil Mobile app work properly with only the eye cameras? I'd like to know just in case I can't put it back. Thank you.
@user-74be48 The world camera is not pluggable. In other words, you would have to cut the cable. I do not think this is advisable. There might be a software workaround by resetting the camera permissions, and only granting access to the eye cameras. This will keep the world camera effectively offline. You will have to be careful to not accidentally grant permissions to the camera, though.
@papr Yes it is. At least on mine. You just need to unscrew the lens, unscrew the two screws in the back, release the cover and the camera board, then unplug the JST connector π
I got the pupil Labs for Hololens
@user-74be48
@user-b4144f @user-74be48 I need to correct myself. You indeed do not need to cut the cables.
@user-74be48 Nonetheless, I would recommend trying the suggested software workaround before attempting to unplug the scene camera. As you can see from the picture, the cables are very thin and can easily break.
Okay, i misunderstood π
Yep @user-74be48 : cables are thin, and easily break if too much manipulated. Happened to me several times (but only on eye cameras so far)
@user-b4144f Thanks for sharing the pictures btw. Your custom mount looks very interesting. Aren't you able to use the builtin camera as scene camera?
@papr Ahah, thank you, and you're welcome for the pictures
Yes, this custom build is pretty interesting, because it allows the Pupil Labs headset to work with people wearing glasses.
The glasses that I use are designed for active 3D cinema. Right now, they are hollow (no electronics, no shutter), but I've already made a version with all the elements in it + the Pupil Labs hardware! It took me a lot of work, because I had to carefully remove all the protections on the Pupil Labs cables to get them "naked", in order to fit them inside the glasses ^^
About the scene camera, I have a 3D-printed support with a neodymium magnet to hold the camera still. But as I'm not done working on it yet, the pictures above don't show it.
@user-b4144f Do the glasses have to look "normal"? Or is the goal simply to provide eye tracking for people with impaired vision? Have you thought about a 3D-printed clip that holds the glasses and can be attached to the existing Pupil Core headset?
@papr Those 3D glasses were designed to work with people already wearing glasses, and as I work for the company that designed and sells them, we already have a lot of "raw materials" available. So we decided to see whether, with some proper modifications, they would fit with the Pupil Labs hardware
@user-b4144f Ah, so you would be wearing the prototype on top of the existing glasses? Aren't you worried about the actual glasses covering the eye cameras' views? This is usually the issue when people try to put the Core headset on top of their existing glasses. Btw, I like the magnet idea
Ahah, the problem you pointed out about the Core headset is exactly why we used our solution. I never had the opportunity to try the Core headset, but others tried and told me about this problem. With my design, the eye cameras are low enough to counter this issue. We may lose a bit of precision and efficiency though, because of this. But it seems not to be enough to cause any trouble for their purposes.
As long as the pupils are visible to the eye cameras, you should be fine. π
Exactly! Don't worry, I've already produced 3 pairs of them, and they work fine ^^
Of course, due to the position of the eye cameras, the pupil is way more elliptical, and maybe at some extreme angles (especially when looking up) we may lose a bit of precision. But nothing important, so no worries
This is great to hear π
(sorry for mistakes, i type way too fast in my non-native language xD )
No worries π
Anyway, it's insanely good what Pupil Labs has designed. This solution works like a charm, even better than other solutions, at a fraction of their prices! I tried a lot of them, and Pupil Labs is definitely the best. Plus they provide really great software, customisable and open source.
Thank you very much for your kind words. π
Ahah, are you on the dev team?
Yes, software development. My knowledge about the hardware has its limits though as you might have noticed. π
Okay ^^ Good job then π
No worries, i can understand that !
Hello, I'm a grad research assistant looking to incorporate the Pupil Core 3D design into a larger system. Is there any way/place that I can get/purchase the CAD file for the 3D-printed headset design?
A precise 3D scanner (joking aside, I don't know)
@user-fe4a3e Some of the geometry is released under LGPL-3.0 at https://github.com/pupil-labs/pupil-geometry
If you need more please contact [email removed] π
Lol @user-b4144f I've tried that using the Structure Core scanner/Skanect scanning software but it didn't work too well. @papr I'll take a look here and let you know if this works for what we need, thanks!
@papr Hi, I have a question. In the pupil diameter data, Does the software consider off-axis compensation?
@user-8fd8f6 could you elaborate on what you mean by "off axis compensation"? Are you referring to the gaze-angle dependency in the 2d pupil diameter (major pupil ellipse axis in pixels)?
yes
@user-8fd8f6 the 2d pupil diameter is not compensated for that gaze-angle dependency but the 3d pupil diameter is.
Thank you, Do you have any publication I can cite for that?
@user-8fd8f6 This is the paper on which the 3d eye model is based on https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
Or maybe even better: https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf
Do you have any idea why the right eye is not tracking in this recording?
https://www.dropbox.com/sh/es3t5xbxehngwe7/AACFDImoPOIZ-6lbsNrOslF_a?dl=0
@user-e70d87 from a first review, it looks like the recorded pupils are close to the default "pupil max" value. Increase the value in the eye processes' pupil detector menus.
I can have a closer look tomorrow.
@papr Thank you again for your advise! First I will look up camera permission control. @user-b4144f Thanks for sharing beautiful pictures! It'd be really helpful when I should move to plan B (detach, if not destroy).
Hello, how can I map the 3d tag pose to the 2d image plane? I have the camera matrix & distortion coefficients and the translation and rotation of the tag. I tried to use cv2.projectPoints() but this gives me weird results. I know I can obtain center and corner coordinates for the tag on the image plane, but I'd need to do the transformation myself to test something.
@user-894365 could you help @user-eaf50e with that?
Great! Another question: How can I write to a textfile from a plugin? It seems that the text is written to the pupil labs console, but not saved to my textfile.
@user-eaf50e how are you saving the data right now?
text_file = open("file.txt", "w")
text_file.write("My text")
text_file.close()
@papr That didn't seem to help, I'm afraid.
To be clear - this is using the 120Hz eye cameras (they really do produce beautiful eye videos, don't they?)
Is it possible that some of the changes you have made since the introduction of the 200Hz cameras are causing issues here?
@user-eaf50e This looks generally correct. Do you see the text file and it is empty? Or can't you find the text file? It is possible that it is in a different place than you expect it.
@user-e70d87 It looks like the right eye model is not fit 100% correctly. If I had to guess, the eye model is a bit closer to the eye camera than the eye is in reality. You can see this from the green circle (eye ball outline projected into the image) being bigger than on the left eye. This also explains why the pupil detection (red circle) is bigger than the actual pupil in the image. Use the options to pause and restart the pupil detection as well as reset and freeze the eye model to fit the model manually. Specifically, if you freeze the model in the middle of the recording, and hit redetect, the redetection will use the frozen model for the new detection.
@user-eaf50e Perhaps this script could help
# Assumes `detection` is an apriltag detection (providing pose_R and pose_t),
# and that `tag_size`, `cameraMatrix`, and `distCoeffs` are already defined.
import cv2
import numpy as np

pose_R = detection.pose_R
tvec = detection.pose_t.ravel()
marker_center_3d = np.zeros(3)
marker_corners_3d = (
    np.array([[-0.5, 0.5, 0], [0.5, 0.5, 0], [0.5, -0.5, 0], [-0.5, -0.5, 0]])
    * tag_size
)
# transform marker points from tag coordinates into camera coordinates
marker_center_in_camera_coordinate = np.dot(pose_R, marker_center_3d.T).T + tvec
marker_corners_in_camera_coordinate = np.dot(pose_R, marker_corners_3d.T).T + tvec
# project the 3d points onto the image plane
projected_center = cv2.projectPoints(
    objectPoints=marker_center_in_camera_coordinate,
    rvec=np.zeros(3),
    tvec=np.zeros(3),
    cameraMatrix=cameraMatrix,
    distCoeffs=distCoeffs,
)[0].reshape(-1, 2)
projected_corners = cv2.projectPoints(
    objectPoints=marker_corners_in_camera_coordinate,
    rvec=np.zeros(3),
    tvec=np.zeros(3),
    cameraMatrix=cameraMatrix,
    distCoeffs=distCoeffs,
)[0].reshape(-1, 2)
@user-eaf50e Please note that the tag pose output from the detector is a rough estimation since the detector does not take the distortion coefficients into account.
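If a more careful pose is needed, one option is to solve for it yourself so that the distortion coefficients are used. This is only a sketch; it assumes the detection object exposes its 2d corners as detection.corners in the same order as the object points below, so verify the corner ordering before relying on it.
```
import cv2
import numpy as np

half = tag_size / 2.0
object_points = np.array(
    [[-half, half, 0], [half, half, 0], [half, -half, 0], [-half, -half, 0]],
    dtype=np.float64,
)
image_points = np.asarray(detection.corners, dtype=np.float64)

# solvePnP takes the distortion coefficients into account
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, cameraMatrix, distCoeffs)
```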
The timestamps in the annotation.csv do not match the timestamps in the other eye tracking files, such as gaze_position.csv. How can I solve this problem? @papr
Hello, I am Srishti. For the first time I am trying to analyze the data, but my Pupil Player is not working. I am getting a message saying to install pyrealsense - is this common? Kindly help me, what shall I do? I am attaching the image.
@user-6bd380 please close the window, delete the user_settings files in the pupil_player_settings folder and try again
@papr Thank you so much for your prompt reply. I deleted the file and it's working now.
Hello, I'm wondering about the confidence parameter found in all sorts of locations. I'm presuming it means how "sure" it is of its answer. Specifically for the blink feature, does confidence mean how sure it is that the subject has blinked?
Also, does anyone have any idea whether the "confidence" is relatively accurate?
@papr Hi, I would like to ask whether there is any method to read and export the location file recorded by Pupil Mobile? Thanks!
Hi, I just got a pupil core and am trying to install pupil capture on Win10 following the instruction on the Pupil Labs' web page. However, the installation does not go any further (for several hours) in the middle of the PupilDrvInst.exe command screen, showing "successfully deleted private key" as the last message. Does anyone have an idea on what is going on or what I should do?
Update
I left the machine overnight and closed all the relevant windows this morning. After restarting the app, then it worked!
Sorry for bothering you. I don't know what the problem really was, but it is OK anyway.
@papr I understand that the eye model is a bad fit, I am trying to understand why the algorithm is unable to find the pupil in what should be a pretty ideal eye video. It also performs poorly when I only detect in 2D mode.
I don't seem to have these issues when I use an older version of Pupil Capture/Player (1.7.42)
@user-905228 If you mean to change the export location, it is under "Recorder" in the menu on the right when you open Pupil Capture, under "Path to Recordings".
@user-7d4a32 Thanks! But my question is how to open the .loc file? I've tried the software "EasyGPS" but it didn't work, and I couldn't find any other software to open it.
The location of the fixation point seems to be too high.
I expect the fixation point to fall on the right rear mirror.
@papr And the other question is: can I modify the fixation location manually via a Pupil Player plugin after it is calibrated and mapped? Some of my data have the same problem (the location of the fixation points is too high) no matter how many times I re-process them in Pupil Player. (The data is collected with Pupil Mobile and calibrated with a manual calibration marker.) Thank you so much!
@user-7d4a32 Hi. Yes, the confidence value is meant as an indicator for data quality. The confidence value is calculated differently for different types of data. In the case of the blink detector, it is based on how sharply the pupil confidence drops and increases. The sharper the drop/increase, the higher the blink confidence will be.
@user-905228 The location file is a custom binary file and does not follow any standards on GPS data. Specifically, it is a series of N "location" data frames, where each data frame consists of 6 consecutive little-endian float64 values (longitude, latitude, altitude, accuracy, bearing, speed). The corresponding N timestamps can be found in the location_*.time file.
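A minimal Python sketch for parsing that layout; the file name is a placeholder, and the binary layout of the matching .time file is not described above, so it is left out here.
```
import numpy as np

frames = np.fromfile("location_1.loc", dtype="<f8").reshape(-1, 6)
longitude, latitude, altitude, accuracy, bearing, speed = frames.T
print(f"{len(frames)} location samples")
```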
@user-905228 Regarding the gaze estimation offset: I suspect this being due to slippage of the headset. You can set a manual offset correction for each "gaze mapper" in the offline calibration plugin menu. You can setup multiple gaze mappers with different offsets for different sections of the recording in order to compensate slippage over time. I would recommend to make sure that these sections do not overlap.
@user-6752ca regarding your question in π₯½ core-xr, academic discounts are available. You can see more information when selecting the desired product on our website: https://pupil-labs.com/products/
@papr Thanks for the help! I'll try it. π
@user-7d4a32 Hi. Yes, the confidence value is meant as an indicator for data quality. The confidence value is calculated differently for different types of data. In the case of the blink detector, it is based on how sharply the pupil confidence drops and increases. The sharper the drop/increase, the higher the blink confidence will be. @papr Thanks!
@user-eaf50e This looks generally correct. Do you see the text file and it is empty? Or can't you find the text file? It is possible that it is in a different place than you expect it. @papr I immediately check with: print(open('{}.txt'.format(filename)).read()) but no file gets written
@user-eaf50e Perhaps this script could help (see the script above) @user-894365
thank you, I'll try that.
so the 3d pose estimation is not accurate? I tried to match 3d gaze and 3d tag pose but it doesn't give me satisfying results, i.e. when the gaze is on the tag, the coordinates of the tag and the gaze don't match (even with an error margin)
@user-eaf50e Could you give this code a try? (requires Python 3.6 or higher)
print("START")
from pathlib import Path
test = "TEST"
file = "file.txt"
file_path = Path(file).resolve()
print(f"Writing to {file_path}")
with open(file, "w") as fh:
fh.write(test)
print(f"Reading from {file_path}")
with open(file, "r") as fh:
assert test == fh.read()
print("END")
You should see 4 lines being printed and no errors
Please be aware that with open(path) as file_handle is preferred over the manual version (file_handle = open(path); file_handle.close())
this works thanks! I was looking indeed in the wrong directory π I thought my print check was enough, but I guess I was wrong. Sorry, for the trouble!
@user-eaf50e btw, I would highly recommend using the pathlib module to handle/manipulate paths. https://docs.python.org/3/library/pathlib.html#module-pathlib
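For example, a pathlib version of the snippet above could look like this; the target directory is an assumption, so pick whatever location suits your plugin.
```
from pathlib import Path

out_dir = Path.home() / "pupil_capture_settings" / "plugins"
out_dir.mkdir(parents=True, exist_ok=True)
out_file = out_dir / "file.txt"
out_file.write_text("My text")
print(out_file.resolve())
```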
@papr Hi, so I am trying to set up and use the surface tracker. The way the MessagePackSerializer I use in C# works is by deserializing into a dictionary of type (string, object). I then need to cast the object to the correct type, such as (float)(double)unpacked["timestamp"].
I am having trouble figuring out what types are used for "gaze_on_surfaces" and "fixations_on_surfaces". Is there somewhere I can look up the correct types, or see examples of using the surface tracker in C#?
@user-e70d87 I am getting very good pupil detections with the 2d detection. Even better with the following settings: 10 intensity range, 60 pupil min, 120 pupil max.
I think the reason why the confidence is lower than the left eye on average is that the led reflections are often on the pupil boundary, interrupting the black, elliptic pupil edge. 2d confidence is the ratio of how many candidate pixels (light blue in algorithm view) match the fitted ellipse (red).
I reduced the intensity range to reduce some candidate pixels that were inside the pupil instead of at its boundary.
@user-2e2394 I understand. These two keys are actually lists of more dictionaries. I can get you some example data.
That would be lovely.
In the end, I just need to translate the gaze / surface tracker data into whether or not the user is currently looking at Surface1, Surface2 or Nothing.
@user-2e2394 https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py In this case, simply check this example python script.
filtered_surface is the root dictionary of the surface payload, gaze_positions is the list of surface-mapped gaze positions, gaze_pos is a dictionary, and gaze_pos['norm_pos'] is a list of 64-bit floats. With 0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1 you can check if the gaze was on the surface.
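For the overall flow, here is a minimal Python sketch of receiving surface messages over the network API and checking the on_surf flag; the surface name "Surface1", the ports, and the confidence threshold are assumptions.
```
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote default port
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.Surface1")

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload, raw=False)
    for gaze in msg["gaze_on_surfaces"]:
        if gaze["confidence"] > 0.6 and gaze["on_surf"]:
            print("looking at Surface1 at", gaze["norm_pos"])
```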
do you need to specify the types of all keys? or is it sufficient to know the types of fields mentioned above?
I will need to cast every value from the dictionary to the right type, but the above should be enough to go on with.
Do you have it written down somewhere similar to the pupil and gaze datum format found at: https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format ?
@user-2e2394 Actually, we don't have that yet. I will update the documentation with some example data.
@user-2e2394 For reference, this is an example payload of a surface message:
{
    "topic": "surfaces.surface_name",
    "name": "surface_name",
    "surf_to_img_trans": (
        (-394.2704714040225, 62.996680859974035, 833.0782341017057),
        (24.939461954010476, 264.1698344383364, 171.09768247735033),
        (-0.0031580300961504023, 0.07378146751738948, 1.0),
    ),
    "img_to_surf_trans": (
        (-0.002552357406770253, 1.5534025217146223e-05, 2.1236555655143734),
        (0.00025853538051076233, 0.003973842600569134, -0.8952954577358644),
        (-2.71355412859636e-05, -0.00029314688183396006, 1.0727627809231568),
    ),
    "gaze_on_surfaces": (
        {
            "topic": "gaze.3d.1._on_surface",
            "norm_pos": (-0.6709809899330139, 0.41052111983299255),
            "confidence": 0.5594810076623645,
            "on_surf": False,
            "base_data": ("gaze.3d.1.", 714040.132285),
            "timestamp": 714040.132285,
        },
        ...,
    ),
    "fixations_on_surfaces": (
        {
            "topic": "fixations_on_surface",
            "norm_pos": (-0.9006409049034119, 0.7738968133926392),
            "confidence": 0.8663407531808505,
            "on_surf": False,
            "base_data": ("fixations", 714039.771958),
            "timestamp": 714039.771958,
            "id": 27,
            "duration": 306.62299995310605,
            "dispersion": 1.4730711610581475,
        },
        ...,
    ),
    "timestamp": 714040.103912,
}
I also started a pull request to add this information to the documentation https://github.com/pupil-labs/pupil-docs/pull/367
@papr Hi! I used Pupil to get the eye data. I want to know if there is any way to get head movement data, similar to Pupil's head-mounted tracking.
@user-a98526 What device are you currently using to get the eye data?
It is used for gaze estimation, and I want to use head movement data to obtain a more accurate gaze estimate.
@user-a98526 Have you checked out the head pose tracking feature? https://docs.pupil-labs.com/core/software/pupil-player/#analysis-plugins It requires you to setup your environment with apriltag markers. Is this what you are referring to?
@papr I think this is helpful, but can this plugin provide online head pose estimation? I want to use this information to control a robot in real time.
@user-a98526 Yes, there is a realtime version for Pupil Capture, too.
@papr Thank you very much, it was very helpful!
@papr Appreciate the example.
It seems to me that the boolean from the python example "0 <= norm_gp_x <= 1 and 0 <= norm_gp_y <= 1" is already evaluated and saved in the dictionary entry "on_surf".
@user-2e2394 correct
Is there a way to trim a pupil core recording? i.e. export a segment of my recording, that I can then import back to pupil player.
@user-5ef6c0 there are two handles left and right of the timeline that you can drag around to specify a subsection of the recording. This trimmed section will be used when you export your data. There's no way to import that back into Player though. Player will always keep your whole recording as it is.
@user-c5fb8b thank you. Yes, I was aware of that. I have 8 minute long recordings with multiple sections I'm analyzing separately. I thought it would be useful to be able to export them once and then open separately in player.
@user-5ef6c0 Normally we recommend making separate recordings if your experiment allows splitting into reasonable blocks; maybe this could be an option for you? Otherwise you will have to use the trim marks to specify one section at a time, do all your analysis, and export. If there's any additional post-hoc analysis needed, you'll have to do this manually on your exported data for now.
hi, Can I adapt pupil labs core to Vuzix M300xl?