Hello! I have two hopefully quick questions. #1-I'm noticing that my fixation durations only go up to right below 1000 ms and never over. Is that part of the program that fixations are only recorded up to 1000 ms and then a new fixation begins? #2-I am using the fixation detector and the Vis Circle plugins. Sometimes the circle and fixation ring don't align. What would be the reason for that?
Thank you so much for your help!
@user-2798d6 what is the max_duration of the fixation detector set to in Pupil Player?
@user-2798d6 the fixation ring displays the average gaze location of the gaze that belongs to it. In longer fixations with higher dispersion you won't see all the gaze at once since it is spread over multiple frames. Therefore, it might look like it is not aligned
@wrp - yep, that would be my issue! 🙂 Thanks for pointing me in the right direction. And thank you @papr!
Hello everybody:
I am wondering: do the gaze positions account for the fisheye distortion of the world view?
Meaning: let's say a change of 100 px is a bigger "gaze movement" in the center of the fisheye window than it is at the borders (due to fisheye distortion). ==> Is this accounted for, or is the gaze position at the border of the fisheye-distorted world window inaccurate?
Thank you!
@user-a04957 3d gaze is corrected for distortion, yes
Hello! A followup question about adjusting the maximum for fixation duration: I previously had a fixation range for 200-1000ms and then changed it to 200-4000ms. But then I lost some fixations that were previously there. I know some fixations were combined into longer fixations, but there were some single fixations on one item that were 500-600 ms that now aren't showing up. Is there a reason for that?
Thank you!
@user-2798d6 We do some basic outlier detection. It might be that these were somehow filtered out. The outlier detection needs improvement.
I would recommend running the fixation detector with different settings and looking for overlapping detections, as you might have done already
@papr Thank you! Does 3d gaze mean, that I can use the norm_pos_x and norm_pos_y coordinates and they are corrected too? Thank you!
@user-a04957 norm_pos_x is not corrected. only gaze_3d is corrected. If you want to correct norm_pos i recommend using cv2 fns and the intrinsics supplied in the recording.
@wrp yes, that is right
Hi, I am using a Tobii Vive device in Unity3D. Is it possible to use Pupil Labs software in that case?
I see there is a tool, hmd-eyes
but I cannot be sure, since it is used with the HTC Vive
can we also use the Unity plugin with the Tobii Vive?
Unless the Tobii device registers as normal usb cameras on your computer (very doubtful), no, sorry.
I think Tobii uses proprietary hardware which requires their own software for access.
OK, we'll try. Yes, Tobii has its own software, but it's quite expensive
thanks for quick response
we'll let you know if it works
Good luck! Let us know your results 🙂
🙂
Hi papr, I was curious if you could point me to an existing recording I could play back in pupil player?
Sure! See this recording for offline pupil detection and calibration: https://drive.google.com/open?id=1OugotQQHsrO42S0CXwvGAa0HDvZ_WChG
Check out this recording for the offline surface tracker functionality: https://drive.google.com/uc?export=download&id=0Byap58sXjMVfZUhWbVRPWldEZm8
Thanks papr! I was able to run the surface tracker and export x,y coordinates to CSV. Is it possible to get a Z coordinate in the CSV using binocular cameras?
@user-41643f if you enable 3d pupil detection and mapping you should have access to the gaze_point_3d values
thanks!
@papr How can I map the gaze_point_3d to normalized [0,1] coordinates? Thank you
@user-a04957 you can project it to the cameras image plane using the cameras intrinsics (including distortion). If I am not mistaken, this would be equal to norm_pos
@user-a04957 if you want it without distortion just use the same fn but with distortion coeffs set to identity.
Hi @mpk , I am interested in getting 2d gaze positions without distortion, do I need to do the same thing? If so, can you please explain what it means to "use the same fn but with distortion coeffs set to identity" - how do I do this? thanks!
hey, hello everyone! is there any possibility to swap the world camera for one with a higher resolution? or do you have any project to improve the resolution of the world camera? thanks a lot
@user-1bcd3e our current camera does 1080p. If you need 4K you could get a version of our headset with a USB-C connector at the world camera mount site and a generic mount. This allows you to mount a Logitech 4K camera.
Hi @papr, in some of my recordings no blink detection report can be exported. Blinks are not shown with the visualizer either. I tried both the 'pupil from recording' and 'offline pupil detection' options but neither worked. The offline blink detection plugin was active during recording. This is weird because the confidence of both eyes was high and the drops and gains of confidence were recorded, as shown by the values in the upper left corner of the screen when I played the recording. I started and stopped recording several times without changing the settings, so I had many sections of recording, for some of which blink detection reports were exported fine. Would you suggest anything else to try to export the blink detection reports? Thanks
@papr : is it possible for me to turn the outlier detection off?
Hi guys... can we use this code on a Raspberry Pi? Thanks in advance
@user-2798d6 not as a setting. You will have to change the source code for that
hello everybody 🙂 i am conducting a psychological experiment using the Pupil Labs eye tracking glasses. Concerning this I've got some questions: How can I define
Hi everyone. I'm really new to using eye trackers and at my university we chose Pupil. I will conduct an experiment with 90 people and I have understood so far that Pupil Player does not allow joint analysis by groups (someone correct me if I'm wrong). I would do this with the raw data. Basically, in addition to the individual heat maps I need the position data of the fixations and the duration of the fixations. The duration I understood, however regarding the position in the file it is not clear to me what the numbers norm_pos_x 47385810382993000,00 and norm_pos_y 4405092041169440,00 (as an example) represent. I have already understood that the lower left corner is 0,0 and the upper right is 1,1. Can someone help me?
Hi, we have tried for tobii vive, as you said, we couldn't make it
in the pupil capture - eye window, 'activate source' options appeared as unknown
i just want to let you know
Hey everyone, I am also very new to this. I built my own 'homemade' headset, with two cameras, one facing the eye and the other one the surroundings. Is it possible to use the Pupil Capture software with my headset or do I need the Pupil Labs hardware? thank you
Thanks for the update @user-1809ee
@user-07d4db Welcome to the Pupil community chat 🙂 Responses to your questions:
1. Units of measurement - please see https://docs.pupil-labs.com/#data-format
2. Fixation within AOI - Please use the surface tracker and fixation detector - when you export data with Pupil Player with both plugins active you will get fixations within each surface as .csv
data. Please see see: https://docs.pupil-labs.com/#capture-fixation-detector and https://docs.pupil-labs.com/#surface-tracking
@user-624fa4 Welcome to the chat 🙂 Responses to your questions:
1. Multi-participant analysis - out of the box, Pupil software only supports single-participant visualization/analysis. You can export raw data for each participant as csv
data and then ingest this gaze/pupil/surface/fixation/etc data to perform aggregate analysis.
2. Data format - please see: https://docs.pupil-labs.com/#data-format -- norm_pos_x
and norm_pos_y
for gaze positions are the normalized gaze position where 0,0 is the bottom left corner of the world camera frame and 1,1 the top right corner of the world camera frame. The norm_pos
data is floating point data - so I'm not sure what you are referencing with the number 47385810382993000,00
for example (was this maybe imported to a csv reader program with the incorrect settings? Can you send us the csv
file so we can give you concrete feedback?)
@user-71969b please see https://docs.pupil-labs.com/#diy - if your cameras are UVC compliant, then they should be able to work with Pupil software.
@user-9bca2f An RPi would likely not have enough CPU power to run Pupil software's pupil detection algorithms at a high frame rate. Some members of the community are using single board computers (SBCs), but I'm not sure what frame rates they are achieving with their setups. Perhaps someone in the community would like to step in here.
Hello, I guess the question I have is a common one. I am currently using the Pupil hardware for a study where people are looking at things where it is impossible to use the markers. Previously I used the SMI semantic gaze mapping function, which allows you to manually code what part of a reference image people are looking at in order to calculate dwell time etc. How do people solve this issue with Pupil Labs eye tracking data? Can anybody assist with that?
@wrp so can you tell me how I can make a mobile device?
@user-9bca2f I would encourage you to consider Pupil Mobile https://docs.pupil-labs.com/core-mobile
Interesting how channel linking takes preference over the actual link 🤔
Hey papr, does the system have an audio component or supported plugin? I want to incorporate audio collection as well. Do you have a recommended channel to follow, or a system someone is familiar with (e.g. a good midfield mic)?
@user-41643f audio capture works with built in mics and supported USB mics in Pupil Capture. Audio can also be recorded via built in mic on Android with Pupil Mobile.
Hi guys, I'm trying to load in Pupil Player the recording I took with the Pupil Mobile, but it just won't load it. The log says: 2018-11-05 17:52:02,042 - MainProcess - [INFO] os_utils: Disabled idle sleep. 2018-11-05 17:52:03,869 - player - [ERROR] player_methods: No valid dir supplied (/Applications/Pupil Player.app/Contents/MacOS/pupil_player) 2018-11-05 17:52:17,714 - player - [INFO] launchables.player: Starting new session with '/Users/teresa/recordings/20181104203042389' 2018-11-05 17:52:17,716 - player - [INFO] player_methods: Updating meta info 2018-11-05 17:52:17,717 - player - [INFO] player_methods: Checking for world-less recording 2018-11-05 17:52:17,718 - player - [ERROR] launchables.player: Could not generate world timestamps from eye timestamps. This is an invalid recording. 2018-11-05 18:00:29,561 - MainProcess - [INFO] os_utils: Re-enabled idle sleep.
@user-2be752 - saw your issue on github here: https://github.com/pupil-labs/pupil-mobile-app/issues/29
and responded there with questions
perhaps we can continue the discussion via the issue if that is ok with you?
@wrp of course! I've answered there too. Thanks!
@papr @user-af87c8 Hi, in terms of precision: What is the difference in angular resolution of the 3D eye model if I detect the pupil in VGA or QVGA? Since QVGA uses half the pixels in each direction, I assume the precision is half as good. I think the "edges" of the eye ball projection are most sensitive to resolution (since the density of degree-values is the highest here)... so what are (technically) the highest precisions possible at 0, 10, 20 degrees of visual line of sight? I'm not talking about gaze, because this would add some other error sources. Did anybody calculate them? I noticed your gaze precision did not change from VGA to 200x200px on your website 🙂
@user-29e10a
Precision is calculated as the Root Mean Square (RMS) of the angular distance (in degrees of visual angle) between successive samples during a fixation.
1. Degrees of visual angle are independent of the selected resolution. 2. Precision is measured in world camera space, not eye camera space.
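As a simplified illustration of that definition (using 1D sample angles instead of full angular distances between 3D gaze directions, and hypothetical sample values):

```python
import numpy as np


def precision_rms(angles_deg):
    """RMS of the angular differences (in degrees) between successive
    gaze samples during a fixation -- simplified to 1D for illustration."""
    diffs = np.diff(np.asarray(angles_deg, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))


# Hypothetical successive sample angles (degrees of visual angle):
samples = [10.0, 10.1, 9.95, 10.05, 10.0]
print(precision_rms(samples))
```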
@papr Thanks, but I'm not talking about world space. When the 3D detector calculates the model, the center of the ellipse clearly depends on the resolution, doesn't it? So if the ellipse center is shifted one pixel to the right, the difference in degrees is twice as high on QVGA as on VGA, or am I wrong?
@user-29e10a Ah, ok, thank you for the clarification. 2d pupil ellipses are fitted with sub-pixel accuracy. Therefore, the effect on precision should not be as high as you would expect.
Hi, is there a way to open the "world.intrinsics" file to see the camera matrix and distortion parameters after calibration? thanks!
Hey, did you receive my email?
And yes, this is possible if you have the source code installed.
@user-8944cb Save this file in the shared_modules
folder: https://gist.github.com/papr/e14382fb4d7af5f4da9997f9f6b79f53
Execute it with python3 display_intrinsics.py <path to intrinsics file>
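For those curious what such a script does under the hood: the intrinsics file is (in recent Pupil versions) a msgpack-encoded dictionary. A self-contained sketch with made-up values standing in for a real file; the layout shown is an assumption based on a dict keyed by resolution:

```python
import msgpack

# Hypothetical file content, roughly mirroring the layout Pupil uses:
# a version field plus one entry per camera resolution.
example = {
    "version": 1,
    "(1280, 720)": {
        "camera_matrix": [[830.0, 0.0, 640.0],
                          [0.0, 830.0, 360.0],
                          [0.0, 0.0, 1.0]],
        "dist_coefs": [[-0.4, 0.2, 0.0, 0.0, 0.0]],
    },
}
path = "example.intrinsics"
with open(path, "wb") as f:
    msgpack.pack(example, f)

# Reading it back -- the same call works on a real world.intrinsics file:
with open(path, "rb") as f:
    data = msgpack.unpack(f, raw=False)

for key, value in data.items():
    if isinstance(value, dict):
        print(key, "->", value["camera_matrix"])
```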
Hi @papr , yes, I have received your email - Thank you very much for your help and the informative reply! If I understand correctly, my options to get the undistorted 2d gaze are either export to i-motions, or post process the distorted gaze points from Pupil based on the camera intrinsics parameters. I don't program in Python, but will try to do the latter first, as we already have a code written for the format of the distorted gaze points (norm_pos_x and norm_pos_y) from Pupil. Again, thanks!
Hi, do you recommend streaming the recording from Pupil Mobile to Pupil Capture while running an experiment, or not? If I want to use the time sync plugin, it needs to be streaming. However, I find that there's a bit of a frame drop when it's streaming.
hello all, which is the ideal data to use to measure the gaze of a person? Or can you explain the difference between the following: gaze_point_3d, eye_center_3d, gaze_normal_0, eye_center_1, gaze_normal_1? I'm using the eye tracker that attaches to the HoloLens. Also, how can I measure fixations using this eye tracker? Please excuse my lack of knowledge
Hello everyone, I don't know if I am in the right place to ask this, but I am currently working on research using the Gear VR as an HMD. The question I would like to ask is: Is there a way to use Pupil with the Gear VR? We're really looking forward to using your technology, but this specific research requires a Gear VR to work.
@user-2be752 time sync works without streaming as well
@user-9f40a2 Check https://docs.pupil-labs.com/#data-format
@user-e0a0e6 check out the hmd-eyes project on github for more information regarding vr integrations
@papr thank you for your response, but I checked the document and I would like further clarification please
@papr thanks!! so do you then recommend not streaming the recording from Pupil Mobile to Pupil Capture while running the experiment?
@user-2be752 streaming is really just meant for monitoring. If you don't need streaming, turn it off
But what do you need time sync for, if you do not intend to stream the data anyway?
@papr oh okay, I might have misunderstood, but I wanted to use time sync so the recording from Pupil Mobile will be time synced to the timestamps of the main computer (which is recording subjects' responses). Does that make sense?
Yes, that makes sense
@user-9f40a2 What exactly do you want to know? Pay attention to the coordinate systems that are mentioned in the docs. These make the difference between the different fields
@papr so with that purpose, is it still okay for me not to stream?
You just will need a way to combine the recordings
How do you record subject response timestamps? Which response exactly?
we want to sync keyboard press on matlab (windows)
i want to measure the gaze position @papr
@user-9f40a2 Use the gaze norm_pos. It is the easiest way to find the gaze position within the recorded video
is that the norm_pos or gaze_normal that you are referring to @papr
norm_pos
thank you so what is gaze_normal @papr
The 3d eye model has a normal vector (it is normal to the 3d model's pupil) and it is mapped to the world camera. That's the gaze normal.
Check out this paper for details of the 3d eye model: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
thank you very much @papr
Hi @papr , I have another small question: the gaze file that is exported via the iMotions exporter is a .tlv file. What is the best way to open the file? Will it give the position, the same as norm_pos_x and norm_pos_y? thanks!
@user-8944cb it is a text file similar to csv. The difference are the column delimiters
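So a .tlv export can be parsed like a CSV with the delimiter set to a tab. A small self-contained sketch (the column names below are illustrative, not the exact iMotions layout):

```python
import csv
import io

# Illustrative .tlv content: tab-separated columns, one header row.
# With a real export you would open the file instead of this string.
sample = "Timestamp\tGazeX\tGazeY\n0.01\t0.52\t0.48\n0.02\t0.53\t0.47\n"

# csv.DictReader handles the format once the delimiter is set to a tab:
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
for row in rows:
    print(row["Timestamp"], row["GazeX"], row["GazeY"])
```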
@here We are pleased to announce the latest release of Pupil software v1.9! We highly recommend downloading the latest application bundles: https://github.com/pupil-labs/pupil/releases/tag/v1.9
@mpk Hi, this looks great. Just a question - do the changes to the annotation plugin affect how we send payloads from other systems?
@user-dfeeb9 it's a bit different now: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
However I think only a small change would be required to make it as easy as it was before. I have made a suggestion to add this to Pupil: https://github.com/pupil-labs/pupil/issues/1378
I see, thanks for the information. Currently @user-e7102b and I have been sending simple bytestring packets with the old annotation system. I haven't had a chance to look in detail at the new way of communicating with my stimuli etc. yet, so I'll need to see how best to work under your new system, but may not update until I can
Hello! I am having some trouble with audio syncing with video. There was a post by @user-68d457 on Oct. 19 about the same issue, but I didn't see a solution. My audio is slightly ahead of the video when I open Player. Is there a way to adjust or fix this? Part of my research deals with fixation placement related to the audio, so they really need to be aligned. Thank you so much for your help!
I see that @user-68d457 responded about a fix, but it involves adjusting the audio file. Is there any other solution besides adjusting all of my audio files? Thank you!
@user-2798d6 was it recorded with Pupil Mobile?
@wrp Hello, I am sending one of the exported files here. I opened them in excel so maybe that's why the data I mentioned is wrong. I intend to do analysis using the raw data and videos. My experiment is with people shopping on a website that we create. We will use the very basic metrics such as fixations, saccades, heatmaps, first fixation time and fixation sequences in each of the Areas of Interest.
@user-624fa4 good evening! What plugins are you using to get the basic metrics, fixation time, and fixation sequence in each of the areas of interest?
@papr - It was not, it was recorded on my MacBook Air through Capture.
@user-624fa4 I think the issue you are seeing is due to your import settings in excel. I have downloaded your csv
file and imported to google sheets: https://docs.google.com/spreadsheets/d/1Qz9m3otu4ZjybkGzgdw1wBdm8hDpvbjqkjJ4T1-jSrc/edit?usp=sharing so you can get an idea of what this file is supposed to look like. Hope this helps
Howdy, I was a little confused by this note: "Pupil headsets with 3d world camera are not compatible with Pupil Mobile" - the highspeed world camera should be fine? Thanks!
Another question I have: "Pupil Mobile on Android (Supported Devices: Moto Z2 Play, Nexus 5x, Nexus 6p, OnePlus 3)" Since Samsung is not listed, I am curious if Samsung which runs on Android is supported as I have a Galaxys9 phone
You are correct, 3d is not supported but high speed cams are
thanks!
I'm trying to sync the time of the Pupil Labs eye tracker with another device, but I cannot figure out how to relate the (synced) start time of the eye tracker to the local time. Please help explain a little bit. Thanks.
@user-41643f Regarding the phone: Most important is that your phone has a usb-c connector. As to the listed phones: These are the devices known to be working. We cannot guarantee anything for other devices.
@user-37c9fb Synced time is a monotonic clock that has a random time epoch. System time is the Unix timestamp. Subtract one from the other to get the clock difference, which you can use to sync time with the other device's data
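A minimal sketch of that subtraction, using Python's own clocks as stand-ins for Pupil time and system time:

```python
import time

# Pupil time behaves like a monotonic clock with an arbitrary epoch;
# system time is the Unix timestamp.
pupil_time = time.monotonic()   # stand-in for a Pupil timestamp
system_time = time.time()

# The offset converts between the two clocks:
offset = system_time - pupil_time


def pupil_to_unix(ts):
    """Convert a timestamp on the Pupil clock to Unix time."""
    return ts + offset


print(pupil_to_unix(pupil_time))  # approximately the current Unix time
```

The same offset can then be applied to every timestamp in the recording, and to the other device's data after computing its own offset.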
@papr The other thing is that the precision of system time is only seconds. Does this mean that I could not sync the time to a millisecond level?
@user-37c9fb The system time should have a higher precision. Let me look into that.
But yes, Unix time is not as precise as the monotonic clock with a random clock start.
@user-37c9fb but usually system time is still sub-second. I'll make an issue in Pupil Mobile.
@user-37c9fb the other way of doing time sync is to use our time_sync protocol. A sample clock master can be found here: https://github.com/pupil-labs/pupil-helpers/blob/master/network_time_sync/pupil_time_sync_master.py
Hello 🙂
Received the pupil devkit. But one camera is flipped compared to the other. Which means I've got 2 eyes moving independently and opposite of each other :/
Any ideas for this, other than going into the source and flipping the calculations too? There's a 'flip display' option. But it doesn't affect the calculations, just the camera feed
@user-88dff1 Yes, this is expected, since one of the eye cameras is actually physically flipped.
The camera being flipped does not have any effect on gaze estimation.
I see. So it's expected in the Pupil Capture to see two unrelated points moving around
Also, didn't get the 'world camera' attachment. Does this mean I cannot calibrate at all?
That is correct. Could you PM me your order id?
(and last question 🙂 any plans on supporting FPGA offloading for this in the future?)
Do you mean FPGA offloading for the pupil detection? No, this is not planned. But feel free to implement it, the algorithm is open source. 🙂
Sent my order ID via PM
Hi @papr - I just wanted to check back in with you on the audio alignment issue. I recorded on my MacBook Air.
@user-2798d6 This is still being worked on. Unfortunately, I cannot give you an estimate for when this will be fixed. Don't hesitate to ask at regular intervals. 🙂
Will do - thanks @papr
Hello @wrp. Thank you for your help again. Some other questions (I read on GitHub but do not know if I understood correctly): - my experiment will be with a fictitious e-commerce site. Is there any way I can demarcate the surfaces for this site? - if I use surfaces, can I export raw data and know only the position of fixations within that surface?
@papr - If it helps with the audio fix, I'm noticing that the offset gets more pronounced as the recording continues. So at the beginning of a video it seems almost aligned, but by the end of a 5 minute video, it's more noticeably pronounced.
@user-2798d6 Yes, that confirms that this is the bug that we know of. The problem is that there are audio frames missing in the audio files that need to be filled with silence.
Cool! Thanks for working on it @papr . I appreciate you all!
Hello everyone, what analysis software do you use or recommend for raw data? 🙂
@user-624fa4 what do you mean by "demarcate"? Use Pupil Player to open, visualize and export recordings
The export format is csv. See the docs for details.
@papr Sorry. I mean define
@user-624fa4 Just print surface markers and tape them to your screen 🙂
@papr But how will it identify the scrollbar, for example? Navigating from page to page... sorry if the questions are basic, but I'm pretty lost with so much information.
The only thing the tracker does is map gaze onto the surface. The tracker does not know what is within the surface
You could make a screen recording and map the gaze onto that, for example
Hello! Is there a way to get a comprehensive scan path to show over a shot of the world view? My scene recording doesn't move around a whole lot, and I'd love to be able to see a representation of the scan path from the entire recording.
Thank you!
@papr so, the monotonic clock has a random clock start (the Pupil epoch), right? When, or under what conditions, will this Pupil epoch reset? If I connect it to an Android device and use Pupil Mobile to record several trials, is it possible the Pupil epoch resets between the trials?
@user-37c9fb capture syncs time with Android. There might be a time reset at the beginning but afterwards it should stay monotonic
@papr Can I understand it as: the start point won't be reset as long as it stays connected to the Android device?
Yes. There might be very small clock adjustments between recordings, but just to keep clocks in sync.
No random resets should occur.
Hello everybody! 🙂 I have got another question concerning my research with Pupil Labs: 1. The first and very basic problem is pupil detection: the red point in Pupil Capture doesn't stick constantly to the pupil of the recorded participant. Furthermore, there is no explanation of how to adjust the following parameters specifically in order to stabilize the pupil detection: pupil min-max range, pupil intensity, pupil mode.
Do you have any tips on how to solve this problem, and instructions where these aspects are explained specifically?
And do you have any further hints on where I can find explanations of how to use the eye tracking glasses from your company, apart from the website? An answer would again help me a lot with my research using the Pupil Labs eye tracking glasses! Thank you!!!
Hi, I was wondering what the driver of the high speed world camera is. Can we use the Pupil glasses as a normal webcam if we install the right driver? Our project would like to process the live video. Thank you.
Hi @user-07d4db I have responded to your points via email. Just to summarize for those reading your comment here: It seems like you are not yet achieving robust pupil detection. This could be due to camera position. In most cases you will not need to adjust the pupil detection parameters if you have optimally positioned/adjusted the eye cameras. Calibration is not returning accurate results due to pupil detection/eye view not being robust. All documentation is online at https://docs.pupil-labs.com
@user-76218e the cameras work with the UVC protocol. On macOS and Linux the cameras will work like a webcam, for example. On Windows you will need to install libusbK drivers - you can install the drivers by running Pupil Capture with admin privileges.
You note "Our project would like to process the live video" - have you looked at our frame publisher plugin and this script that demonstrates how to subscribe to/receive the frames: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
hello guys, i have one fundamental question about 'surface tracking'
i want to know the reason for using 'surface tracking'.
why do we have to define a surface?
@user-738c1f surfaces are used to locate 2d surfaces within your field of view. This will enable you to automatically estimate gaze relative to a specific surface. Concrete example: You want to know where a participant is looking (gaze position on surface) on a page of a magazine. You could affix markers to this page of the magazine and then have your participant look at the magazine, and the gaze positions would be mapped relative to this surface so that you could generate a heatmap (for example) or later compare gaze data for this surface with gaze data of other participants looking at this surface/page of a magazine.
@papr I understand the Player is capable of reviewing and replaying the eye tracking / pupil data, however, what packages are out there for reporting on this data? Ideally in the aggregate. Any Python or Jupyter Notebooks floating around? Any direction would be greatly appreciated! Thank you
Hi, thanks for your reply. I have two basic questions regarding our Pupil glasses. Our team invested in the Pupil glasses with the 2d world camera and 200 Hz binocular eye cameras. One question: we subscribe to the gaze topic with a script from pupil-helpers called message_filter, and the printout includes a 3d gaze location. How do you get this 3d gaze data without using a 3d world camera? Another question: does every individual need to calibrate the glasses for themselves, since our project might involve several participants? Or should we actually calibrate every time the glasses are put on? Thank you.
@wrp thanks for your kind reply.
also i have one other question. i got the raw data from the surface tracker. however, i cannot exactly understand each term's meaning. could you explain more about this?
Hey guys, my Pupil DIY headset just got stolen... Do you know if there is anyone that manufactures it in the US? Or is the only way to get it from Shapeways in Europe?
@user-c4492b Currently, we do not provide any advanced examples for data reporting/aggregations. Unfortunately, I do not know any open-source tools for that either. We provide the possibility to export your data in an http://imotions.com/ compatible format though.
@user-76218e You should calibrate every time the subject or the eye/world camera positions/relations change. Additionally, it is recommended to integrate the calibration into your experiment and record the procedure as well. This allows you to do offline calibration after the fact.
@user-516564 Do you mean a manufacturer for the frame alone?
@user-738c1f
"world_timestamp": Associated world frame timestamp
"world_frame_idx": Associated world frame index
"gaze_timestamp": Timestamp of the mapped gaze point
"x_norm": Normalized gaze x coordinate
"y_norm": Normalized gaze y coordinate
"x_scaled": Scaled gaze x coordinate
"y_scaled": Scaled gaze y coordinate
"on_srf": Mapped gaze is within the surface definition
"confidence": Inherited confidence value from the original gaze datum
Normalized coordinates: - Bottom left corner -> (0, 0) - Top right corner -> (1, 1)
Scaled coordinates: - Bottom left corner -> (0, 0) - Top right corner -> (surface width, surface height)
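Converting between the two coordinate systems is a simple scaling. A sketch with a hypothetical surface size:

```python
# Hypothetical surface size (in whatever units the surface was defined with):
SURFACE_WIDTH, SURFACE_HEIGHT = 400, 300


def norm_to_scaled(x_norm, y_norm):
    # bottom-left (0, 0) -> (0, 0); top-right (1, 1) -> (width, height)
    return x_norm * SURFACE_WIDTH, y_norm * SURFACE_HEIGHT


def scaled_to_norm(x_scaled, y_scaled):
    # the inverse mapping, back into the [0, 1] range
    return x_scaled / SURFACE_WIDTH, y_scaled / SURFACE_HEIGHT


print(norm_to_scaled(0.5, 0.5))  # centre of the surface
```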
And ScanPath is not fixed in the new version?
@user-d9bb5a Unfortunately not.
Oh, we will wait)
@papr thank you for the reply!
@papr Hello, I was researching about Ogama here and saw that you sent a PM with the details for @user-d79ff5. Could you give me some guidelines?
I have the HTC Vive Binocular Add-on and was able to successfully mount these in a Vive pro. Is it possible to automatically detect IPD without first calibrating the Vive pro?
We manually measure IPD but it often seems to be off by a few mm
@user-624fa4 I checked; I PMed him about another issue. I do not know how Ogama works, unfortunately
@papr Ok. Thanks. As the message appeared right after his question I thought it was about it.
Hi everyone. I was just wondering if there is a 32-bit version of Pupil Capture, Player and Services. Thanks in advance
@user-ce3667 sorry, only 64bit available
@wrp okay cheers mate
Thanks a lot @wrp. However, at x_norm and y_norm there are some negative values. Does that mean that tracking is out of the surface?
@user-738c1f that is correct. It is also out of surface if one of the norm pos values is bigger than 1
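In other words, the on-surface check boils down to both normalized coordinates lying within [0, 1]:

```python
# A gaze point mapped to surface coordinates lies on the surface only if
# both normalized coordinates are within [0, 1].
def on_surface(x_norm, y_norm):
    return 0.0 <= x_norm <= 1.0 and 0.0 <= y_norm <= 1.0


print(on_surface(0.3, 0.7))    # True: inside the surface
print(on_surface(-0.1, 0.5))   # False: negative, i.e. left of the surface
print(on_surface(0.4, 1.2))    # False: greater than 1, i.e. above the surface
```

This mirrors the `on_srf` column in the surface export.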
can someone explain to me, in the case of offline calibration and using Pupil while walking around, how do we get the gaze position from the eye position? It seems to me that since the objects are not on the calibration plane it would be hard to tell?
@user-9dbee3 gaze is always calibrated to the field of view of the world camera. Gaze is not calibrated to real world objects
oh, I see . thanks!
I am using the HTC vive with pupil labs installed, in which case I assume the main camera is just the VR game input then
In the VR case you calibrate to the field of view of each eye display. If that's what you meant by VR output, then yes 🙂
Hi, I will be happy for your help with two questions: 1) When I record with Pupil Capture I get occasional warnings that say: "WORLD:Turbojpeg.jpeg2yuv:b'Corrupt JPEG data: 25 extraneous bytes before marker 0xd3". What does this mean, and is there anything I can do to prevent it from happening? 2) When I export a recording in Pupil Player I sometimes get red errors that say, for example, that X or Y equals something (looks like coordinates of some sort). How can I prevent those errors from happening, and how do they affect the exported data? Thank you!
@papr Hi! We have been using Pupil Player to extract fixation data; however, I noticed that the fixation count export was constantly 0. A month ago I extracted the same eye recording and the fixation count was about 20, and the settings then were the same as we are using now (max dispersion 1.01 degrees, min duration 100ms, max duration 4000). Is there any other setting that we might have messed up to cause the difference? Thank you!
@papr Hi, another thing we noticed about our data is that the heatmap output is constantly 0 KB or 1 KB, even though gaze_positions.csv showed some valid gaze data (about 500 gazes). Again, I was wondering what we might have done wrong in the heatmap generation process. I followed the "using the Offline Surface Detector plugin to generate heatmaps" instructions in the Pupil Docs. Related to that, in gaze_positions_on_surface_<surface_name>_<surface_id>.csv, x_norm and y_norm should be coordinates between 0 and 1, but we saw lots of y_norm data that is greater than 1. Could you shed some light on this? Thanks!
Hi, I was wondering, can you create heatmaps without surface tracking?
Hi all. I recorded two videos with the same subject and same eye/world camera positions. Video 1 contains calibration markers. Video 2 doesn't contain calibration markers. Can I use the calibration from video 1 to calibrate video 2? Thank you.
@user-7db3ca This feature is under development and will probably be released in v1.11
@user-2be752 Currently, you can only create heatmaps using surfaces.
@user-e2056a Did you change Pupil Player versions?
Thank you, papr. Estimate release date?
There is no estimate yet.
Hi all, I'm trying to set up an experiment where I need to measure pupil diameter as accurately as possible. I only managed to find a single diameter value in my exported raw data, either for the 2d or the 3d model (which I assume is the average of both eyes?) but not for each eye separately. Is there a way to get the pupil diameter from each eye separately?
@user-64b0d2 the exported pupil_positions.csv contains the pupil diameter for each eye separately
Be aware that the 2d pupil diameter is in pixels and can vary based on eye-camera-to-eye distance. The 3d pupil diameter is in mm but not corrected for refraction effects
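For reference, a minimal pandas sketch of splitting that export by eye. The column names `id`, `diameter`, and `diameter_3d` follow Pupil Player's pupil_positions.csv export; the helper name is my own:

```python
import pandas as pd

def mean_diameter_per_eye(df, column="diameter"):
    """Mean pupil diameter per eye id (eye 0 and eye 1) from a
    pupil_positions.csv export.

    column="diameter"    -> 2d value, in pixels
    column="diameter_3d" -> 3d model's value, in mm (not refraction-corrected)
    """
    return df.groupby("id")[column].mean().to_dict()

# usage sketch:
# df = pd.read_csv("pupil_positions.csv")
# print(mean_diameter_per_eye(df, column="diameter_3d"))
```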
Hi, can anyone point me to a reference where, in just a few lines of code, I can extract the gaze positions from the eye tracker?
@user-309b26 offline after the effect or online/realtime?
Hi! I'm having trouble exporting data from my recordings. I'm using the Pupil Labs add-on for HMD on Windows 10, and used Pupil Capture to record data. When I'm using Pupil Player to export the data, I end up with empty files while the raw data does not seem to be empty (I don't recall any issue with the recording). Most files are 1 KB and only contain the header. Any idea how to fix this issue? (I already downloaded the latest version of Pupil Player, restarted Pupil Player with default settings and deleted the player settings folder)
@user-66516a Please restore to defaults in the General Settings and try again
@papr Thanks for the response. I've just tried again, but it didn't work. I still get empty files. :/
Just to be sure, you did not enable Offline Pupil Detection?
If not, please share the recording with data@pupil-labs.com and we will have a look
I'm not sure, is it in the "Plugin Manager" tab ?
If yes, nothing "offline" seem to be enabled
No, it can be enabled in the Pupil From Recording menu
No, it is set to "Pupil From Recording". I'm sending you a recording. Thx!
@papr thank you so much
Hi @user-e78e77 could you provide us with some context? It looks like you either do not have cameras connected, or that drivers are not installed (are you using Windows?)
@papr @wrp Excuse me, I have a question. I got data but I don't know the meaning of frame and timestamp. What do they mean? Also, why does it start at 6378-something? And what is the unit of each one, for example x_norm's unit?
Hi @user-738c1f please see https://docs.pupil-labs.com/#data-format
@papr I don't think I changed the Pupil Player version; the one I'm using now is v1.8.
@papr will a change of version fix the y-norm data > 1 issue?
@user-e2056a norm data is also valid if it is smaller than 0 or bigger than 1. In the case of surfaces it means that the gaze was not on the surface. See the on_surf column.
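If you are working with the exported surface gaze CSV, filtering by that column is one line of pandas. A sketch (the helper name is mine; depending on how the file is read, on_surf may come in as booleans or as "True"/"False" strings, which the str conversion handles either way):

```python
import pandas as pd

def gaze_on_surface(df):
    """Keep only rows whose gaze actually landed on the surface,
    using the exported on_surf column."""
    # astype(str) handles both bool and string representations
    return df[df["on_surf"].astype(str) == "True"]
```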
@user-e2056a could you also share a recording that exhibits your other issue with not detecting fixations? Please share it with data@pupil-labs.com
@wrp thank you for help
I have a question. I want to see a gaze plot and a heatmap for my data. However, Pupil Player only showed the heatmap and CSV raw data. How can I also get a gaze plot along with the heatmap and data?
Hi everyone. I have a question: I recorded an experiment a while ago, and wonder whether and how I can run a detection of pupil size over that recording afterwards? Thanks for the help!
@user-bab6ad Yes, this is possible if you recorded the eye videos
@papr yes, found it now, was just blind! I did record the eye videos back then, and did the calibration
thx again!
@papr @user-2798d6 Just in case we could be talking about a slightly different audio alignment bug - I've just looked at a 20 minute test recording, and there's no drift as the recording goes on, just a constant offset between audio and video. The same fix worked as before (edit audio_timestamps.npy to match the first audio timestamp to the first world timestamp - I am doing this with a simple Matlab script currently).
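For anyone who wants the same constant-offset fix without Matlab, here is a Python/NumPy sketch. It assumes the recording folder contains audio_timestamps.npy and world_timestamps.npy as in a standard Pupil recording; the function names are my own:

```python
import numpy as np

def shift_to(audio_ts, world_start):
    """Return audio timestamps shifted by a constant offset so that
    the first audio timestamp equals the first world timestamp."""
    return audio_ts - (audio_ts[0] - world_start)

def fix_audio_offset(rec_dir):
    # Rewrites audio_timestamps.npy in place -- keep a backup first!
    audio_ts = np.load(f"{rec_dir}/audio_timestamps.npy")
    world_ts = np.load(f"{rec_dir}/world_timestamps.npy")
    np.save(f"{rec_dir}/audio_timestamps.npy", shift_to(audio_ts, world_ts[0]))
```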
Hi there! My eyeball size keeps on jumping between different sizes during a recording. Are there any tweaks to the settings that would prevent this?
Hi, as of v1.9 there seems to be some disharmony between the GUI and my system (Windows). If I click into the eye video windows, the buttons on the menu become invisible. They're still there and clickable, but not visible.
Is it just me, or has anybody else seen this "problem"? :)
Hi all! I would like to use the Pupil eye tracker to access the gaze of a human subject. In particular, I need to translate the gaze positions obtained from the tracker into an absolute frame (the frame of a motion capture system). This would help me, for instance, to know whether or not the subject is looking at an object (without using the surface tracker). To do so, I need to know where the relative reference frame of the tracker is located. I looked for this information in the User Docs but didn't manage to find it. Does anyone know, or have a better idea how to do this? Thank you in advance!
@user-e435ae The closest solution is to use the surface tracking feature. It does not do head pose estimation though...
Hey, I want to start developing with a PupilLabs eyetracker. I have heard of OkazoLabs eventIDE, but it seems kinda sketchy, anyone have experience with them?
@user-21d960 I have never heard of it
What is commonly used to create programs for eyetrackers then?
Our software is written in Python. You can use any text editor to modify the code. This is often not necessary, though. You can use our network interface to access the data as a start.
See these examples https://github.com/pupil-labs/pupil-helpers
Right, but eventIDE is basically a program that allows us to make, for example, a slideshow and test someone's eyesight with different gaze tests, etc. It takes the Pupil Labs eye tracker's data and sees how well the patient is performing these tests. From what I understand, the software you linked just acquires the data
That is correct regarding the examples. I don't know about eventIDE.
here is an example of what eventIDE can do
You are right, it looks indeed kinda sketchy. Why does the screen capture rotate?
@papr Any insights into the question of eye balls which change in diameter? Is it possible to peg the eye ball to a specific size?
@user-c5a9dc I think there were previous github issues on that topic. Please, could you also share a recording with data@pupil-labs.com such that we can investigate?
@papr im not sure i just found this online
so when people want to create programs with pupil, do they just make one from scratch?
@papr: Yes, I just opened an issue yesterday but haven't found anything else on it. I can share a recording which has this problem, will do so.
@user-c5a9dc Regarding the eye ball size change: the relevant slider is called Model sensitivity; it can be found in the 3d eye tracker settings.
Yes, can you describe what it means? I.e., what does the parameter do under the hood?
I haven't found a description on your docs.
I've found this piece of code: https://github.com/pupil-labs/pupil/blob/e0781a373439f2cf33b9612c066cb31c3e4c1136/pupil_src/shared_modules/pupil_detectors/singleeyefitter/EyeModel.h (lines 79-81), but it appears that it's not used.
Hello. We are trying to calibrate and are having an issue where, when looking far to either side, the confidence of one eye will drop down to 0.20. If looking far up or down, the confidence of both eyes similarly drops. It looks like it loses the tracking of the entire eye (green circle) rather than just the pupil. Any advice?
Hi- I have been using pupil glasses successfully on a Windows 10 machine, but I'm having driver problems on a different Windows 7 laptop. The eye cameras give no eye image and only open in 'ghost mode'. On digging around, I think the most likely reason is out-of-date drivers. But the only drivers I now have access to were from a previous release, i.e. before version 1.9-7. Now, however, when you go to the driver download from the pupil docs page, the link is obsolete (below). Are there newer/latest drivers for version 1.9 that someone can point me to? Or other suggestions? Thanks a lot! https://drive.google.com/uc?export=download&id=0Byap58sXjMVfR0p4eW5KcXpfQjg
@user-9429ba hi, unfortunately, we do not support windows 7. We recommend upgrading your laptops.
I captured four sessions with Pupil Capture. For all of them I set the surfaces with the Offline Surface Tracker in the Pupil Player. But for the last two it is not saving the .png file for heatmaps. Any help?
no worries - it's just nice to know that that's the likely problem. Thanks!
how do you read the pupil_data in python
Regarding Pupil Player not exporting the png file of the heatmaps: I already tried "Restart with default settings". I've also tried re-exporting from files that previously generated the png files of the heatmaps. Nothing works. Do I need to download the software again?
@user-624fa4 No, reinstalling will not help if "Restart with defaults" did not help.
@user-624fa4 Does the gaze export work? i.e. is gaze_positions.csv filled with data?
Hi all, is pupil labs supported on Ubuntu 18.04? I can get the cameras to work on 16.04 but there is a driver initialization issue on 18.04.
Ah, I just saw this question was answered previously.
@user-6b1b1f thanks for following up on this :) Just for consistency: Pupil software does run on Ubuntu 18.04, and should not require any customizations/setup other than downloading the app from https://github.com/pupil-labs/pupil/releases/latest
@wrp thanks. thats what I did but it fails. I can share a screenshot if you are available
Sure, I'd be happy to take a look at a screenshot. BTW, what Pupil hardware are you using?
We are using the addon for the HTC Vive
The only thing I can see relevant in dmesg is
[ 1249.384129] uvcvideo: Found UVC 1.00 device Pupil Cam1 ID0 (05a3:9230)
[ 1249.421572] uvcvideo 1-6.1.3:1.0: Entity type for entity Extension 3 was not initialized!
[ 1249.421576] uvcvideo 1-6.1.3:1.0: Entity type for entity Processing 2 was not initialized!
[ 1249.421578] uvcvideo 1-6.1.3:1.0: Entity type for entity Camera 1 was not initialized!
[ 1249.421706] input: Pupil Cam1 ID0: Pupil Cam1 ID0 as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6.1/1-6.1.3/1-6.1.3:1.0/input/input231
Right now Im guessing its something with uvcvideo on 18.04 vs 16.04 but Im not sure
note that we have a windows setup as well and it works no problem
and we just tested it on a 16.04 machine and that works as well
@user-6b1b1f a few notes:
1. I would recommend upgrading to the latest version of Pupil software (v1.9)
2. Try changing udev rules for running libuvc as normal user (this usually does not need to be done with the app bundle), but give it a try:
echo 'SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", GROUP="plugdev", MODE="0664"' | sudo tee /etc/udev/rules.d/10-libuvc.rules > /dev/null
sudo udevadm trigger
@user-6b1b1f thanks for the comprehensive notes - good to know that the hardware is working, now it seems like it is a permission issue on your specific installation of 18.04
OK. that is good - can you suggest a fix for that? Thanks.
whoops too soon. I will try that
I am using 1.9.
:)
hmm that didnt seem to work
the rules file was updated though
I had to open the permissions completely
0777
works now
thanks!
I had to open the permissions completely
Open permissions completely for what exactly?
plugdev group
echo 'SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", GROUP="plugdev", MODE="0777"'
Thanks for the notes @user-6b1b1f - is your user sudo user?
my user has sudo access
ok, thanks for the notes. This should not require 0777 access, but we will look into it further
should the plugdev group be one of my users groups?
current user is not under many groups actually
[email removed] groups test
test : test sudo video
user permissions couldve gotten messed up somewhere but I dont recall doing anything
hmm yeah that seems to be on my end. my other users' permissions seem to have plugdev
I just added my user to plugdev and that didnt seem to fix it, so for now 0777 seems necessary for me until I can figure out whats up
I have also encountered that one needs to reboot after changing such permissions and user groups.
OK. that is good to know I did not do that. I will double check later. Thank you for your help
Now it is working. Under Calibration there is HMD Calibration and 3D HMD Calibration, but I don't see any documentation on using these calibration routines. Does a manual exist on how to use them?
@user-6b1b1f They are meant to be used with the unity plugin or a similar hmd client
See the hmd-eyes project on Github for details
OK.
@papr Yes, the CSV files contain data. I opened in Pupil Player again the session 002 files from which I had managed to export the heatmaps last Friday and Saturday. I did the same procedure yesterday (November 25), and Pupil Player no longer exports the heatmaps. :(
@user-624fa4 Could you share one of the recording that exhibit this behavior with data@pupil-labs.com ?
Hello all, my name is Vasi. I am here because I am interested in learning and following along. I am hoping to be able to contribute to the project in the near future; however, I will only follow at this point. I am excited to be here and learn about such an innovative and ingenious program.
Oops, apologies. Name is updated :)
Hi @user-63b99d Welcome to the channel!
Thank you @papr
@user-63b99d May I ask what your background is and how you got to know about Pupil?
@papr Certainly! Previously I was a student and dabbled a bit in gaming and game development. Most recently I became curious about app development for the medical community. I heard about Pupil on GitHub. I am always excited and intrigued to learn new things, so I decided to follow the thread and here I am. :)
Hello @papr I sent it to you.
@user-624fa4 Thank you, I have received them.
@papr I made a new test recording now and the heatmaps were exported. Was it any particular problem with these others? Luckily it was a pre-test of the experiment but I did not want to take that risk with the final data.
@user-624fa4 I will let you know when I have had a look at your data.
Hi there! I'm wondering if anybody has used GazeCode for manual coding with Pupil. If yes, what was the experience? I'm having some problems running GazeCode in Matlab with the Pupil 200Hz. Any input is welcomed. Cheers!
Hey, I'm having trouble building Boost.Python. Can anyone help me?
hello @user-d81c81 , please tell us more about the details, I will try to help you.
@user-f27d88 I have installed Boost; now I'm trying to build Boost.Python and failing. Can you please help me figure out the reason?
Sorry, I don't have a windows laptop nearby :< But you can provide the environment info like system version and error logs so I can try to fix it.
how do i provide you the error log?
@user-d81c81 Is there a specific reason why you are building from source on windows? Often, it is much easier to just download the application releases from our github page.
I got some errors when installing libuvc on macOS 10.14; I provided a solution at https://github.com/pupil-labs/pyuvc/issues/30, but I'm not sure there is a better choice. I also found that the docs ask us to run make && sudo make install in https://github.com/pupil-labs/pyuvc but make && make install in https://docs.pupil-labs.com/#install-libuvc. I guess one of the docs is wrong.
@user-f27d88 Hi, I have been working on that issue yesterday as well. I made changes to the CMakeLists file, but they need to be tested on Windows first. I will make a PR later.
@papr Great, Thank you.
@user-f27d88 This is the promised libuvc PR: https://github.com/pupil-labs/libuvc/pull/32
@papr how do I download the app releases from GitHub?
Hello. I'm having problems on macOS 10.14 when installing pyglui with pip3. Any suggestions?
@user-e0772f what is the exact error message?
@user-d81c81 https://github.com/pupil-labs/pupil/releases/tag/v1.9
Choose the download according to your operating system
pyglui/ui.cpp:664:10: fatal error: 'gl.h' file not found
#include "gl.h"
         ^~~~~~
1 error generated.
error: command 'clang' failed with exit status 1
@papr thanks man
@user-e0772f please make sure that you installed OpenGL
according to my pip install list I have installed OpenGL: Package Version
av 0.4.2.dev0
cysignals 1.7.2
Cython 0.29.1
ipaddress 1.0.22
msgpack 0.5.6
ndsi 0.4
nose 1.3.7
numexpr 2.6.8
numpy 1.15.4
pip 18.1
psutil 5.4.8
PyAudio 0.2.11
PyOpenGL 3.1.0
pyre 0.3.2
pyzmq 17.1.2
scipy 1.1.0
setuptools 40.5.0
uvc 0.13
wheel 0.32.2
@user-e0772f are you on Mac OS or Linux?
I'm on Mac OS as I said before, but I think I'm going to try Windows 10 on another laptop
because I'm looking for an isolated environment for a Pupil project, and in this case I need to install all dependencies using brew
@user-e0772f ah yes, I did not see that. Is there a reason for running from source? 99% of the custom stuff can be done via plugins or the network api. No need to run from source for that
Hi can someone help me with how to resolve this please ?
@user-b0c902 could you please close Capture and upload the capture.log file in the pupil_capture_settings folder?
@papr I've downloaded the files. Please tell me how to run them.
@user-d81c81 after unpacking the 7z file, there should be a folder with a bunch of files. One of them is Capture.exe. Right click it and start it with administrator rights
Also please see the getting started section of our documentation. It is linked on our website.
i got 3 sub folders
is it pupil_capture?
@papr please send a link to a video explaining it, if you have one
@user-d81c81 you will have 3 subfolders: 1. pupil_capture_, 2. pupil_player_, 3. pupil_service_
@wrp I got it. Now, how do I capture?
Open the pupil_capture subfolder, find pupil_capture.exe, then right click it and run as administrator.
i want the eye coordinates in a given time interval
@wrp
should i do anything after opening pupil_capture.exe or is it enough?
@user-d81c81 please read the getting started section of our documentation.
I have read the documentation; I was stuck at building Boost.Python.
why are you building Pupil from source @user-d81c81 - based on above messages it seems you are running the application bundle. Can you clarify?
so i downloaded the files from the link given by @papr
@wrp now i am trying to run the downloaded application
yes, and is it running?
Only Capture is running. What's the next step?
@user-d81c81 - https://docs.pupil-labs.com/#capture-workflow
Please read through the workflow
ok thank you
Is it possible to connect two eyetrackers to the same computer and record from both of them at the same time from Pupil Capture?
@user-2be752 You will need to run two instances of Pupil Capture. Be aware that this might be very resource-intensive. We recommend either using two separate Pupil Mobile instances or two separate computers running Pupil Capture. You can synchronize them using our Pupil Groups and Time Sync plugins.
Good news everyone. You can now be notified about our releases via GitHub without having to read the day-to-day development notifications. :)
Hello! I'm looking for the raw data export plugin and I could not find it in the Pupil Capture software
how can i use it?
@user-46c590 Capture stores the data in an intermediate format. Use Pupil Player to use the Raw Data Exporter.
I want to extract real-time pupil data into a CSV file. What is the easiest way to do that?
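There is no built-in realtime CSV export, but the network API makes this a short script. A sketch, assuming Pupil Remote on its default port 50020 and pyzmq/msgpack installed; `pupil_row` and `record_to_csv` are names of my choosing, and the field list is just an example subset of the pupil datum keys:

```python
import csv

# Example field list (keys taken from the pupil datum format; adjust as needed)
FIELDS = ["timestamp", "id", "confidence", "diameter"]

def pupil_row(datum):
    """Flatten one pupil datum dict into a CSV row (missing keys -> None)."""
    return [datum.get(f) for f in FIELDS]

def record_to_csv(path, n_samples=1000, url="tcp://127.0.0.1:50020"):
    """Subscribe to pupil data via Pupil Remote and write rows to a CSV."""
    # pyzmq and msgpack are imported lazily so pupil_row works without them
    import zmq
    import msgpack

    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(url)
    req.send_string("SUB_PORT")  # ask Pupil Remote for the SUB port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://127.0.0.1:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.")  # all pupil data topics

    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        for _ in range(n_samples):
            topic, payload = sub.recv_multipart()
            writer.writerow(pupil_row(msgpack.loads(payload, raw=False)))

if __name__ == "__main__":
    record_to_csv("pupil_realtime.csv")
```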
anyone use experiment building tools? im looking for a good one
Hello, would anyone know how to make ffmpeg recognized in Python on macOS, for the opencv or youtube-dl modules?
[email removed] Has anyone used the Pupil Labs set with PsychoPy? Is there an easy way to send digital time-stamps via parallel port to Pupil Capture?
To elaborate my question a bit further.. We're tracking pupil diameter as a measure of fear response during a simple aversive learning task. The goal is to extract trial-by-trial fear responses. Worst case scenario - we can just time-lock it to the onset of the task based on video recordings from frontal camera (we don't need perfect time-precision), but I would rather prefer having proper time-stamps at the beginning of each trial (like you normally do when, for example, registering galvanic skin response).
@user-95b6b7 what do you need the parallel port for?
Would you have any advice on appropriately setting pupil-min when adjusting pupil tracking parameters? I understand the main source of accuracy will be placement, but does that mean we should try not to change these settings at all? More specifically, how are you utilising this pupil-min setting, and will it affect, for example, diameter measurement data?
@user-95b6b7 - we're using Pupil+Psychopy, but the other way around (gaze data informs our psychopy program).
FWIW though, the pupil remote plugin has a pretty straightforward ZMQ interface. I don't know if it makes sense for your workflow, but you can request the current Pupil timestamp and save that in your psychopy log? Or maybe you want to use custom annotations?
https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
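The core of that helper is building an annotation dict and publishing it over the IPC backbone. A compressed sketch under the same assumptions (Pupil Remote on its default port 50020, pyzmq/msgpack installed, Annotation plugin enabled in Capture); `new_annotation` mirrors the field names used in the remote_annotations.py helper, and `send_annotation` is my own wrapper:

```python
import time

def new_annotation(label, timestamp, duration=0.0, **extra):
    """Minimal annotation payload, following the field names used in
    pupil-helpers' remote_annotations.py."""
    payload = {"topic": "annotation", "label": label,
               "timestamp": timestamp, "duration": duration}
    payload.update(extra)  # e.g. trial number or condition
    return payload

def send_annotation(annotation, url="tcp://127.0.0.1:50020"):
    # pyzmq and msgpack are imported lazily so new_annotation stays dependency-free
    import zmq
    import msgpack

    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(url)
    req.send_string("PUB_PORT")  # ask Pupil Remote for the PUB port
    pub = ctx.socket(zmq.PUB)
    pub.connect(f"tcp://127.0.0.1:{req.recv_string()}")
    time.sleep(0.1)  # give the PUB socket a moment to connect
    pub.send_multipart([b"annotation", msgpack.dumps(annotation)])
```

For trial time-stamping you would call send_annotation(new_annotation("trial_start", ts, trial=3)) at each trial onset, with ts taken from the Pupil clock.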
Hi- I recently did a successful recording, but when exporting the data in Player I get a message about user calibration not being available, despite calibrating:
Export World Video - [INFO] camera_models: Previously recorded calibration found and loaded!
Export World Video - [INFO] camera_models: No user calibration found for camera eye0 at resolution (192, 192)
Export World Video - [INFO] camera_models: No pre-recorded calibration available
Export World Video - [WARNING] camera_models: Loading dummy calibration
Export World Video - [INFO] camera_models: No user calibration found for camera eye1 at resolution (192, 192)
Export World Video - [INFO] camera_models: No pre-recorded calibration available
Export World Video - [WARNING] camera_models: Loading dummy calibration
It did cross my mind that this could be because I mistakenly pressed the T key to validate when meaning to log one of my custom annotations, and then just continued. Could this be the case? And should I be concerned about this message?
Thanks a lot, /Richard
@user-9429ba This message is about the camera intrinsics calibration, not the user gaze calibration. Which world cam do you use?
Hmm, not sure of the name of the world camera (I looked in Player and Capture settings). It is a new 200Hz binocular; the box says w120 e200b. Does that help?
@user-9429ba yes. Can you check if there are any *.intrinsics files in your recording folder?
Yes, here is the intrinsics file
@user-9429ba thank you. I will look into the issue
thanks a lot!
@papr I'm running pupil capture on a separate computer and would like to send time-stamps on each trial. (I did it via parallel port before when measuring galvanic skin response).