Hey guys, are you making progress with the issue that calculated fixations are not saved in Pupil Player? Is there already a release date?
@user-c351d6 It is being worked on but there is no release date yet
@user-81fd53 Activate the Frame Publisher plugin in Capture and use this to receive the frames: https://github.com/pupil-labs/pupil-helpers/blob/0df77b47cebd49a6c35b6769da483c115a626836/pupil_remote/recv_world_video_frames.py
@user-c828f5 offline_calibration_gaze
Hey there everyone! New to the server here. I'm a Media Technology M.Sc. student from Denmark. This semester our group (3 students) will be working with Pupil Labs, especially the HMD one, probably doing a case study on disabled people and interactions in VR. Hope this is the right channel to introduce yourself in. I'll probably ask for help on here, and if anyone has tips and tricks on working with disabled persons, be sure to throw them our way. Happy eye tracking everyone! Oh also, please @ tag me if you want to get in contact. I've turned off regular notifications
Thanks @papr , how do I read that file? I tried finding something on the website but no luck.
Welcome to the group @user-828f57
@user-c828f5 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L60
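If you don't want to import our module, the file is just msgpack data. A minimal sketch of what the linked load function does (assuming the msgpack Python package; the contents depend on your version):
```python
import msgpack

# Minimal stand-in for file_methods.load_object: the file holds a
# single msgpack-encoded object (typically a dict).
with open("offline_calibration_gaze", "rb") as f:
    data = msgpack.unpack(f, raw=False)  # raw=False decodes keys to str

print(type(data))
```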
Hi @user-828f57 welcome to the Pupil community server 👋🏽 please direct questions regarding vr/ar to the 🥽 core-xr channel
@wrp Ah yes I see, thanks!
hi
Could I use a mobile cam app with the Pupil eye library?
Do you mean e.g. the front camera of your phone?
yes
No, this is not possible. This is a different type of eye tracking, called remote eye tracking. Pupil only does head-mounted eye tracking.
I want to use the mobile front cam to get the pupil's diameter. Could I use a CNN to do this?
There might be work on that, but nothing specific that I could name. As I said, Pupil does not do remote eye tracking. Remote eye tracking comes with its own set of problems, e.g. head pose estimation, that Pupil does not have to solve.
Okay. Thanks for the information.
Hey @papr , a few months back I enquired whether there were any plans to incorporate a Batch Raw Data Export function into Pupil Player. I'm just checking in to see if there are any updates?
Thanks
Hi there,
I'm trying to better understand the confidence values of the pupil detections. Are there any documents you could recommend?
Same for the calculation of the gaze data and its confidence.
I assumed there would be a correlation between the two confidence factors, but so far I could not find one.
Thanks for any hints.
@user-e7102b hey, unfortunately we did not have the time to start the Batch Exporter rework yet. Are you only looking to export prerecorded pupil and gaze data? Or do you need to export offline detected data as well?
@user-14d189 The confidence is the ratio of how many potential pupil edge pixels lie on the fitted pupil ellipse, i.e. how noisy the detected pupil is.
Above goes for the 2d detector. The 3d detector also takes current model fitness into account.
Gaze confidence is derived from pupil confidence. 2d gaze calibration/mapping works via polynomial regression and 3d calibration is calculated using bundle adjustment. 3d mapping comes down to projecting vectors from the eye camera coordinate system into the scene coordinate system
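To illustrate the 2d idea only — a toy numpy sketch of polynomial regression, not our actual calibration code (the real feature set and fitting differ):
```python
import numpy as np

def poly_features(px, py):
    # Toy polynomial feature expansion of normalized pupil positions
    return np.stack([np.ones_like(px), px, py, px * py, px**2, py**2], axis=1)

def fit_mapping(pupil_xy, target_xy):
    # Least-squares fit from pupil positions to reference targets,
    # one coefficient set per output dimension (x and y)
    A = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs

def map_gaze(coeffs, pupil_xy):
    # Apply the fitted polynomial to new pupil positions
    return poly_features(pupil_xy[:, 0], pupil_xy[:, 1]) @ coeffs
```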
Hi there, here is a list of things I would like to know: 1) I found the STL files of some eye tracker parts on git. Is it possible to have the STEP files? STL is less practical than STEP, especially when you need to make adjustments in 3D CAD. 2) Do the eye tracking glasses work fine even if the subject is wearing glasses? I performed some tests, but the pupil detection seems to be tricky. I saw the IR light reflected on the glasses' lenses. Any suggestions? 3) I bought a pair of eye tracking glasses (w120 e200b, ready to use) a few months ago. Is it possible to purchase the same device but disassembled? I mean the same device, but with the wiring pre-assembled and the three cameras with their electronics already mounted. Thanks in advance for your reply.
@user-ba85a3 Please write an email to info@pupil-labs.com concerning these questions.
ok thanks @papr
@papr Ok thanks for the update. I'm just looking for the most straightforward way to extract prerecorded pupil, gaze, surface and annotation data. I'm not currently doing any offline processing with the data. I export the data in pupil player and then I have created a processing pipeline in MATLAB to automatically import the necessary .csv files and extract the data that I need. The slow part at the moment is downloading each pupil recording from our remote server, loading it into pupil player, exporting, then re-uploading the exported data to the server for further processing. Multiply this by several hundred recordings of between 5-30 mins (huge multi-session project) and this gives you an idea of what I'm dealing with.
@user-e7102b I understand. This should be automatable with some Python code. Use these functions to load the intermediate data formats:
.pldata format: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L114
pupil_data file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L92
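If you'd rather not depend on the Pupil source tree: as of the linked version, a .pldata file is a stream of msgpack-packed (topic, payload) pairs, where each payload is itself a msgpack-encoded dict. A minimal reader sketch (verify against file_methods.py for your version):
```python
import msgpack

def load_pldata(path):
    """Yield (topic, datum) pairs from a .pldata file."""
    with open(path, "rb") as f:
        for topic, payload in msgpack.Unpacker(f, raw=False, use_list=False):
            # Each payload is an independently msgpack-encoded dict
            yield topic, msgpack.unpackb(payload, raw=False)

for topic, datum in load_pldata("gaze.pldata"):
    print(topic, datum["timestamp"])
```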
@papr Thanks, I'll give this a go.
@papr thanks heaps! I'll get my head around it.
@user-e7102b Could I ask you, what do you use for importing csv into Matlab? The fastest one I found was 'dlmread', but I had to manually delete the columns with string data ('3Dc++' in pupil_positions.csv and base_data / column F in gaze_positions.csv).
hey @user-14d189 I recall running into a similar problem when I tried to use dlmread (the string data caused an error). I got around it by using textscan
I can share my scripts if that helps?
Thank you @user-e7102b, it would be nice to see how you solved it. All other methods I tried took ages to load into Matlab.
@user-e7102b Thanks heaps!
I ended up with code like this:
clear all;
fileID = fopen('gaze_positions.csv', 'r');
if fileID == -1
    disp('Error, check file name')
else
    headers = fgetl(fileID);
    C = textscan(fileID, '%f %d %f %f %f %s %f %f %f %f %f %f %f %f %f %f %f %f %f %f %f', 'Delimiter', ',');
    fclose(fileID);
end
p(1:size(C{1}), 1) = C{1,1};
% etc.
and it's very fast.
@user-14d189 Here you go: https://github.com/tombullock/pupil_labs_analysis_matlab
This might save you some time...
thank you! That saves me a lot of time! @user-e7102b
@user-14d189 no probs!
I keep reading that you can invert the camera image in the general settings of the Capture app, but I'm unable to find the option for either of our USB cameras. Here is a screen shot of the view with the general settings open. Any suggestions for finding the invert image option, or how to invert the video at the OS level? Thank you!
@user-9572a3 you can flip the eye camera images from the general menu in the eye windows. But there is no option to flip the world video. Could you let us know what hardware you are using?
Hi! My research team found your open source eye-tracking software and we're interested in applying it to our research. We have a Tobii eye-tracking device; do your APIs provide friendly functions that take raw data from the eye tracker and handle saccades, fixations, regressions, etc.?
What is the iMotions Exporter for?
@user-3f0708 You use that for this service: https://imotions.com/
@user-4d8c8c You can access all of our data in real time via our network interface. We have some analysis functions, e.g. blink and fixation detection, surface tracking, etc.
Hi @wrp, we are using a 3d printed headset with a Logitech and a Microsoft brand camera. I sort of inherited the project, but my understanding is they were recommended by Pupil for making a DIY headset. As for flipping the world camera, I ended up just doing it on the headset, as that seemed simplest. We are still working on getting the focuser back on the eye cam to allow for better pupil detection. We plan to have people try this out at an upcoming open house. Any tips on having a setup that allows people to easily try the cameras on and get a sense of its capabilities? We are thinking of having it plugged into a laptop that is outputting the Pupil Capture app to a nearby display. If you know of a cleaner way to do this, please let us know! Thank you!
Hi @user-9572a3 thanks for the response. Yes, flipping the world cam to be non-inverted is the correct way to go for the DIY Headset.
Your demo setup seems good. I recommend printing out some manual markers for calibration and maybe try practicing single marker calibration with the manual marker.
Hi there, has anyone seen this error on a Mac: LSOpenURLsWithRole() failed with error -10810 for the file /Applications/Pupil Service.app
@papr Able to reproduce the issue!?!?! Wooooh! Thanks for the update!
Hi, I'm currently using Pupil Lab's HoloLens add-ons. Was wondering whether the mount provided could be produced in different colours? (Odd question I know)
@user-a6b05f Hi. Please write an email to info@pupil-labs.com in regard to your question.
Hi, I'm working together with researchers from a hospital and they have questions about the values given after the calibration. What does the root-mean-square-residuals mean exactly? What does the percentage of the used data points mean, does this say something about the reliability?
Hey @user-f8df56 We do some basic outlier filtering before calibrating. The percentage shows how many data points have been removed as outliers.
The residuals are measured in degrees between the reference location and the associated gaze positions.
Please also see https://docs.pupil-labs.com/#notes-on-calibration-accuracy
@user-f8df56 (not Pupil Labs here) Using the 2D algorithm we found a typical validation error of 0.8 to 1.5 degrees (but 1.5 degrees was our recalibration threshold, so there is some selection bias in our results). That is for a full-screen calibration with 13 points, ~70 visual degrees horizontal and 30 vertical.
Hi, I am interested in purchasing the HTC Vive add-on and I wanted to ask you a question. Can I record gaze data in Unity with the C# programming language? I am interested in recording data to understand where the user is looking in the scene and in recording the pupil size (pupil dilation). Is this possible with Unity and C#? If so, can you please send me any related developer documentation?
Hi guys, I know it's not the right place for this kind of info, but I need it urgently. I would like to know the phone number of Pupil Labs' headquarters, for a hardware repair of my glasses. My shipper is asking me for it. I didn't find it on the website.
@user-0eef61 Technically, this is possible, yes. Unfortunately, I do not know where you would implement this, though.
@mpk please DM @user-ba85a3 re contact
@wrp I did already
Thanks
@papr Thank you for your reply. So I am guessing that I can use C# in Unity to record gaze data such as eye coordinates and pupil diameter? I must emphasize that besides getting the pupil diameter I want to get data that shows where the user is looking on the screen. Or can I program the application to save a number to a csv file whenever the user is looking at a specific object on the screen? Is this possible?
@user-0eef61 Your unity application can subscribe to and receive every type of data that Capture generates, e.g. gaze direction, pupil diameter, etc. You are free to visualize and/or save the data in the format of your choice.
@papr That's great. Glad to hear that. I was making sure before I purchased the htc vive add-on. Thanks for your reply.
Sure thing! Complete data access is one of the biggest advantages of being open source.
May I throw in a quick question? I unplugged the Pupil eye tracker before I stopped recording. So I have a recording folder with .mp4 files, but some .npy files are missing. Is there a way to repair the folder? Thanks
@user-62af71 are you able to open the videos in VLC?
@papr papr I had the same situation as @user-62af71 two weeks ago and sadly i couldn't repair the videos even with untrunc. I figured out that i can rerecord with pupil capture if one uses from file input in pupil capture. Then i have to close and reopen pupil capture because the videos don't run synchronised before . After loading the last settings it seems they are in sync. Is there any pupil player restore funtion (planned), since offline calibration basically just needs the videos (+video timestamps)?
@user-ce3b2e This is a work-around that looks synced but really is not. The eye processes start with different delays -> no guarantee for a synced start. If the timestamp file was not found the file backend will create evenly spaced timestamps based on the average fps of the video. These cannot be correlated to timestamps generated by Capture.
@user-62af71 @user-ce3b2e Currently there is no supported/official way to repair broken recordings. Please understand that good gaze mapping requires precise timing. Interpolating timestamps works, yes, but it results in inaccurate gaze mapping.
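To be explicit about what "evenly spaced timestamps" means — it is no more than this (numpy sketch; the start value is arbitrary and cannot be related to Capture's clock):
```python
import numpy as np

n_frames = 9000   # frame count read from the video container
avg_fps = 30.0    # average fps reported by the container

# Evenly spaced timestamps, one per frame
timestamps = np.arange(n_frames) / avg_fps
np.save("world_timestamps.npy", timestamps)
```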
heyho! Is there an official way to describe which Pupil Labs eye/world cameras one has? Like a model number or the like?
pupil_w120_e200b is a high speed world camera with 200Hz binocular eye cameras. We include PIDs in an online catalog. I'll link it here.
thanks and cool! We have an older Pupil Labs eye tracker; is there a list of all models, and how can we find out which one ours is?
@user-af87c8 older models are not included here
Send an image or email us at [email removed]
ok, I'll send some pictures, thanks!
Hi guys, I got a question from one of my students who is working with your eye tracker. Is it actually possible to show tracked surfaces in an exported video?
@user-c351d6 currently not possible, because these are drawn in OpenGL.
hello, can i send markers to the recording?
so as to know the precise timing of the stimulus presentation?
@user-40e41e Yes, you can do this via annotations. See this example: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
thank you papr! Can I ask how it is sent?
because for true timing it should be in parallel
Yes, that is correct. You need to add a timestamp to the annotation. We correlate it after the fact in Player based on this timestamp.
See our time sync protocol for how to synchronize time between Capture and other clocks: https://github.com/pupil-labs/pupil-helpers/tree/master/network_time_sync
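In short, the linked example boils down to this (a sketch, assuming Pupil Remote on its default port 50020 and the Annotation plugin enabled in Capture):
```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the PUB port and Capture's current clock
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()
remote.send_string("t")
pupil_time = float(remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(0.5)  # give the subscription time to register

# The timestamp is in Pupil time, so Player can correlate it later
annotation = {
    "topic": "annotation",
    "label": "stimulus_onset",
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string("annotation", flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```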
thank you! I'll have a look... I bet I will come back with more questions
I am looking forward to them!
Hello, I just plugged in the headset
but when I open capture nothing appears. I get the error that calibration requires world video capture
capture_not_appearing
@user-2c3334 can you ensure that the cables are fully connected to the computer and the headset? The USB port on the headset sometimes requires a bit of force to fully connect the cable. Also please try restarting Pupil Capture with default settings.
Are there any published studies that characterize the pupil system's latency, accuracy, and precision? I'm looking for a peer reviewed citation.
And, yes, I understand completely that this changes with each version, calibration quality, etc. I'm motivating a project proposal, and need to convince the reader that the tracker is sufficiently low-latency, etc.
@user-8779ef your question has responses in the 💬 research-publications channel
how do I open pupil capture on windows? i can't find the exe file nor the icon that launches the software that the video tutorial mentions
Do I need to download all the windows dependencies before I can do that?
Does the Pupil eye model algorithm depend upon, or work better with eye motion rather than when the eye is more or less stationary, e.g. during a vision test when a subject is asked to fixate on a point?
Hi there:)
I have a question
What is the solution in this situation?
@user-a6a5f2 the eye model does not require eye movement, once a model has been built.
@user-4580c3 Was the eye camera showing up in the eye0 window before, or is this the first start of Pupil Capture?
@wrp
@wrp This is the equipment being used. It used to work normally, but now it is behaving abnormally.
@user-4580c3 If the eye camera feed is not showing up in the eye0 window, please check the following:
1. Is the headset connected via USB? Sometimes the collar clip on the USBC female connector requires a bit of force to ensure that the cable is fully connected.
2. Windows specific - are drivers properly installed? Please run Pupil Capture with admin privileges. Do you see cameras connected in the libusbK category in the device manager?
3. Restart Pupil Capture with default settings (World window > General > Restart with default settings)
🤔
As in the picture, the window opens but it does not run
@wrp
@user-4580c3 what version of Windows is this? Looks like Windows 7/8?
Your graphics drivers need to be updated in order for Pupil Capture to run on your system.
@wrp win7
Also - we only support Windows 10
So, please try on a Windows 10 machine if possible. We can only provide support for Win 10. While Pupil software might work on other versions of Windows, we do not officially support them.
Only win 10?
Thanks @wrp
@user-4580c3 We only support Windows 10
correct.
@wrp thanks
hey guys, a general question: We have both the Pupil Labs headset as well as the Tobii Pro Glasses 2. As the Pupil Labs headset has shown some deficits in low-light environments, calibration, etc., we kinda switched to the Pro Glasses 2
one major drawback of that, however, is that there is only "natural features" AOI tracking instead of 2d-marker-based AOI tracking like Pupil Labs has
that is a lot less stable and requires huge amounts of manual post-processing.
is there a possibility to import raw pupil labs gaze data and frames to pupil labs and annotate AoIs (delimited by 2d-markers) and have pupillabs do the analysis? has anyone heard of a project trying that?
@user-02665a maybe we can help with the low light problems? This should be possible with Pupil Labs glasses.
I'm not sure if I understand the last paragraph of your post...
@user-dbadee You do not need to download the dependencies.
1. Download the bundle here: https://github.com/pupil-labs/pupil/releases
2. Unzip using 7z
3. Run Capture.exe using administrator rights
Why can't I use my computer's camera when I open the Pupil software?
@mpk sorry, I was in a bit of a hurry. I meant: is there a project that allows importing Tobii gaze data into the Pupil software for offline area-of-interest analysis?
@papr I haven't been able to locate the Capture.exe file, where is it?
What are the dependencies necessary for then?
@user-dbadee what did you download exactly?
@user-02665a I am not aware of such a project, sorry
@user-02665a If you can export video with timestamps and gaze data in an open format, you could convert it so it can be opened in Player.
that's what I was thinking too, and I was wondering if someone had maybe already done it. Otherwise it's going to be the perfect job for my student assistant
Hi everybody, does anyone know what IR sensor is built into the eye camera? What is the wavelength (nm) of the light and how dangerous is it for the eyes?
@user-121273 Are you referring to the IR emitting LEDs?
I think yes, that should be the only IR source.
@user-121273 the IR LED emitters have been tested and meet human safety requirements
ok, that's reassuring for now. Is there something like a sensor description, or a specification for the emitted light intensity?
Hi @user-b0b274 please email info@pupil-labs.com and ask for Photobiological test report summary
and we can share information with you
ok thank you=)
anyone know what it could be? (it doesn't open)
@user-20de15 This means that the optimization_calibration module was not built correctly. This is most likely due to an incomplete dependency setup.
May I ask why you are running from source? This setup is very difficult on Windows and it is often not required. We recommend running the bundled application.
where can I get this? I can't find it
i'm running pupil_capture.exe, I didn't download source
That's what I meant by bundle
@user-20de15 What CPU do you have?
@user-20de15 Did you download Capture from this page: https://github.com/pupil-labs/pupil/releases ?
Hello, is there a set method to correct fisheye distortion in Pupil Player? Or is it recommended to correct it manually? I see there is the file world.intrinsics that presumably has the matrices needed for the correction - does anyone know the file format or how to load it into Python?
@user-9a4baa It is loaded automatically in Player and it is used to map e.g. gaze correctly.
You can use this function to load the file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L60-L75
You can also use https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L62 to load the intrinsics file into a class that is able to project/unproject points based on the loaded intrinsics
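For a quick look at the file's contents without the camera model class (a sketch; assumes pupil_src/shared_modules is on your Python path — key names are from the linked source and may differ between versions):
```python
from file_methods import load_object

intrinsics = load_object("world.intrinsics")
# Entries are keyed by resolution string, e.g. "(1280, 720)", each with
# "camera_matrix", "dist_coefs" and "cam_type"; plus a "version" key.
for key, model in intrinsics.items():
    print(key, model)
```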
Hi @papr thank you for your reply!
Hello, I used Pupil Capture, but the lower half of the image of the world camera becomes gray. I attach an image. This does not always happen; I think it occurs mainly when looking at a computer display. Can you tell me how to solve it?
@user-babd94 Just for clarification: The top part of the images changes according to the camera movement but the lower part of the image stays gray?
@papr Yes. Also, the height of the gray part fluctuates irregularly.
Sorry, I forgot to tell you that we are using Logicool's C525 webcam as the world camera.
@user-babd94 It looks like there is not enough usb-bandwidth to transmit complete frames
@user-babd94 this camera must be sending bigger MJPEG frames than expected. This could be fixed with a change in the source code. Can you run Pupil from source?
Initially it should help to lower the resolution as well, I think.
@mpk Is it the procedure at the link below? That is possible.
Yes, this is the correct link
I was able to run from source. Where in the source code can I make the change to allow bigger MJPEG frames?
@user-babd94 we had very similar problems. Do you by any chance have a very long USB cable or a USB extension? Do you have the USB 3 hub (clip at the eye tracker) or USB 2?
@user-af87c8 I disassembled a commercial webcam and connected it to the cable of the Pupil Labs eye tracker.
The eye tracker and computer are connected with one USB Type-C cable.
Have you tried a different USB bus (different USB card)? We had trouble with our Dell T1500 (I think); its USB bus did not work without this error.
ah sorry, I missed that you have a custom webcam. Sorry! Then what mpk said is definitely more appropriate
@user-af87c8 Thank you for answer. I will try your method, too.
hi there
what is the solution in this scene?
@user-76a852 please start Capture with admin privileges
Right click and run as admin
I can't find "run as admin"
Perhaps you do not have admin privileges on your windows machine
Hi, I am having trouble getting the player to work. I put the file in but it says this:
player - [INFO] fixation_detector: Gaze postions changed. Recalculating.
player - [WARNING] offline_surface_tracker: No gaze on any surface for this section!
It repeats it a couple of times and then doesn't output anything.
@user-d45407 Did you make the recording with Pupil Mobile? In that case you need to use Offline Calibration to generate gaze data.
@papr I didn't use pupil mobile.
I am trying to make surfaces appear using the surface detector
Can you send a screenshot of your surface?
my backend manager was pupil mobile! I changed that and am going to see if that works!
alright, that didn't work. here is the screen shot.
Your markers are very small! Try reducing the "min_marker_perimeter" value in the surface tracker plugin
I moved it down to 30 and I am still getting the error. It also comes up if I toggle the plugin off.
The warning is independent of the detection! Let's solve the detection first.
Alright.
I tried loading an older recording that had worked, and it didn't come up either. I think that I might have to reinstall player.
here is a screen shot of the player. this is all I see for a while:
Did you use different markers maybe? Try adding a new surface
Is this the only window that comes up?
yes
Please close Player, go to the pupil_player_settings folder, and delete the user settings files
done
alright, it is up and running!
Next step: gaze. You either need gaze data in the recording or you need to use offline calibration.
I have been using offline calibration. It is easier to adjust the mappings that way.
Hi, I'd like to try running pupil mobile with our pupil headset and a Google Pixel cellphone. I have the headset connected to the Pixel via the standard cable and a USB A > C converter. I've installed the pupil mobile app successfully, but when I run it the pupil headset doesn't appear to be recognized. Does anyone have insight into why this might not be working? Perhaps either the cellphone or adapter are not compatible? Thank you.
Check if your phone needs USB OTG mode explicitly enabled. This is the case for a few phones.
Thanks, however I've searched through the phone settings and can't find an option to explicitly enable/disable OTG on the Google Pixel. A google search didn't reveal anything either.
@user-e7102b the issue here might be the cable that you are using. Not all USBC-USBC cables are compliant with USBC spec. This cable - https://www.amazon.com/CHOETECH-Hi-Speed-Compatible-Devices-Including/dp/B017W2RWB8/ref=sr_1_7?ie=UTF8&qid=1539827514&sr=8-7&keywords=choetech+usbc+to+usbc should work well.
Yeah, i suspect the cable or cable adapter may be causing problems. I've ordered the cable - thanks for the suggestion!
You're welcome @user-e7102b, looking forward to feedback. I know that there are some other users in the community that are using Pixel devices with Pupil Mobile successfully, but we do not have first-hand experience with the device so cannot give any guarantees.
could it be that the 3d eye model has a problem with stereotypical left-right eye movements? My experiment relies on the 3d normal vectors from pupil combined with my own calibration outside of pupil software (with a head tracking rig). I feel like after a while of doing the stereotypical left-right saccades that are required in the experiment the fixation slowly drifts away from the fixation point and after a while I get fixation breaks. When I move my eye in all directions for a few seconds I feel like the problem is alleviated for a little bit. This makes me suspect that the ever-updating eye model can not perfectly deal with movement that only goes in one dimension and it gets worse during the experiment. I would like to just fix the model parameters after calibration but there is no option for that..
@user-82f104 this could be an issue; if the movements are very constrained the 3d model fitter could get misled. We are currently working on improving the pipeline. Could you share a recording with eye videos for debugging?
hmm I think for some reason pupil capture never showed me the recording options at all. I've never dealt with that problem because I didn't need that functionality anyway.. If I can make that work I could send some eye videos
The recording option can be found in the world window. There is an 'R' icon on the left.
So I am trying to determine how often a person looks at a specific surface and how long they look at it. I am able to export the data, but the data isn't in a usable form. Is there an easy way to convert it? I am thinking a regular expression might work, but if there is already something available I would prefer to use that.
I am confused as to how I can run the capture software
What is the file I need to run on Linux? Or Windows?
@user-dbadee there should be a binary called pupil_capture in your PATH on Linux after installing it. On Windows, simply run pupil_capture.exe in the extracted folder.
@user-d45407 What do you mean by unusable form? Are you referring to the multi-line matrices? These are quoted and are therefore valid CSV. What tool/framework did you use to parse them?
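For example, Python's csv module handles the quoted multi-line cells out of the box (a sketch; column names depend on your export version):
```python
import csv

with open("pupil_positions.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Cells that span multiple lines (e.g. sphere matrices) arrive
        # as single quoted strings; parse them further only if needed.
        print(row["timestamp"], row["diameter"])
```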
@papr figured it out, thank you! Also, how accurate should the calibration get us? Is it better to have the calibration box large (i.e. go close to the screen when calibrating)?
What is the yellow line?
Hi @user-4580c3 after a calibration the Accuracy Visualizer plugin shows, with a green polygon, the area in which you calibrated. Clusters of points show the position of the detected marker/calibration reference point and the position of the estimated gaze position. Yellow/orange lines connect the detected marker/reference point and the estimated gaze position. Shorter yellow lines mean less disparity between the two (good mapping); longer lines mean more disparity (less accurate mapping).
@papr Hi, running Capture/Player v1.8.26 on Win 10 64-bit - when capturing audio alongside the video recordings, I am getting a sync issue when I subsequently load them in Player - the audio plays back in time (i.e. it matches what's happening in the waveform display), but the world video lags by ~1.5 seconds. The lag is still there after exporting. Grateful for any advice. Thanks, Ian.
PS. Player is also crashing if the recording is allowed to play through to the end:
Traceback (most recent call last):
  File "launchables\player.py", line 464, in player
  File "shared_modules\audio_playback.py", line 337, in recent_events
IndexError: index 433 is out of bounds for axis 0 with size 433
So, how long until you guys integrate this new Occipital Structure Core module into your build??
They claim "lighthouse accurate" inside-out tracking that works outdoors.
Patent-pending.
Is it funny that I can't tell if you're being sarcastic?
@wrp The cable you suggested arrived. I used it to connect the pupil headset to the pixel cellphone, but the headset is still not recognized. Do you have any other suggestions? Thanks
Hey guys, I wanted to know your experience in dealing with these situations. The 3D model is perfect. The parameters in Pupil Player are spot on. The fit seems to break when the pupil moves towards the periphery. What are the possible causes for this?
Another example
hey guys, is the Samsung S9 supported for remote recording? And can I use another laptop to record remotely (how)?
@user-9fc5a0 See the list of officially supported hardware: https://github.com/pupil-labs/pupil-mobile-app/#supported-hardware
Hi there :) Do you have a Pupil Labs function guide manual?
Hi guys. I have a question. Do you know of papers which used the Pupil Labs eye tracker? They are very hard to find.
@user-452421EWANKWON you can find the function guide in the docs on the Pupil Labs homepage
@user-738c1f thanks:)
@user-738c1f these docs also include a link to our public spreadsheet that lists all publications citing Pupil
@papr yes, i missed it. Thank you so much
Hi, I am interested in the eye camera (200Hz binocular) for neuroscience research. However, we need millisecond precision and locking between pupil dilation and other physiological measurements (i.e., EEG and HR), either via timestamps or via triggering. We work in a Matlab environment at the moment and would like to keep it like that. Does anybody have experience with the eye camera and this kind of research, and do you think this camera would work for that?
@user-bcdff7 precision should be within your requested range. Latency is between 10 and 20 ms if you need the data in Matlab. However, the Pupil Labs-Matlab interface does introduce jitter. Do you need the data in realtime?
Thanks for the answer! We do not need it in real time. I guess the jitter is smaller than the 10-20 ms latency, right? Also, will this change dramatically if I change to Python? I mean, where do this jitter and delay come from? Is it Matlab specific or is it coming from the USB hub? If it is the latter, it should not matter whether it is Python or Matlab, right?
All data can be exported to csv with Pupil Player. There are multiple users that import this data into their matlab scripts
Hi, I'm having trouble importing some data into Pupil Player
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
player - [ERROR] player_methods: No valid dir supplied (pupil_player.exe)
player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
player - [INFO] launchables.player: Starting new session with 'D:\glassesTest\data\pupil-labs\2018_10_18\009'
player - [INFO] player_methods: Updating meta info
player - [INFO] player_methods: Updating recording from v1.3 to v1.4
player - [INFO] player_methods: Updating meta info
player - [INFO] player_methods: Checking for world-less recording
player - [INFO] player_methods: Updating recording from v1.4 to v1.8
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 646, in player_drop
  File "shared_modules\player_methods.py", line 233, in update_recording_to_recent
  File "shared_modules\player_methods.py", line 592, in update_recording_v14_v18
  File "shared_modules\file_methods.py", line 112, in _next_values
  File "msgpack_unpacker.pyx", line 501, in msgpack._unpacker.Unpacker.unpack
  File "msgpack_unpacker.pyx", line 461, in msgpack._unpacker.Unpacker._unpack
TypeError: unhashable type: 'dict'
Any suggestions?
Windows 8.1, recorded with pupil capture 1.3.13, tried multiple versions of player, none worked
Hi, I'm trying to download capture. I've downloaded the bundle from https://github.com/pupil-labs/pupil/releases, unzipped using 7z... but I'm only finding pupil_capture.exe.manifest. Where is the pupil_capture.exe file?
@user-2f4be1 once you extract the files with 7zip, you run pupil_capture.exe
@wrp I'm not sure if I've downloaded the wrong package. All I can find in the extracted folder is pupil_capture.exe.manifest
Hi, on a Windows source build ("python setup.py build") I have a problem. I've followed the instructions carefully, and when I run setup.py I get this error: fatal error C1007: unrecognized flag '-Ot' in 'p2'. How do I solve it?
@user-03389c please go to https://github.com/pupil-labs/pupil/releases/latest and use 7zip on Windows 10 to extract the archive; then you will see the exe
@user-324a3b is building from source necessary for you? Could you use plugins or subscribe to the API instead?
I ask because building from src on Windows is quite tricky
I have to build Pupil from source on Windows, because I am trying to merge in other source code.
@user-324a3b you might want to check discussion in 💻 software-dev regarding windows dependency setup
@wrp okay, i will go there.
Hi, so I'm working on an experiment to track eye motion within a work station (i.e. a kitchen counter and cabinet setup)
So my question is. If i know the dimensions of the work space, Do I need to stand back far enough so I can calibrate that entire work station? Or is a calibration in a small area (i.e. 5'x5') enough for me to track eye movement in the grander (i.e. 10'x10') work station?
@user-a49f5b the latter should apply.
@mpk Thanks. Now would the manual marker be better, or would utilizing the surface markers be better?
@user-a49f5b you would use the circle markers for calibration and the surface tracker to track AOIs and generate heatmaps.
AOIs = Areas of Interest? And can I not calibrate with the surface trackers?
@user-a49f5b AOI = Area of Interest, correct. The surface markers do not have a clear point of fixation. That is why we use the circle markers for calibration.
@papr Thank you
Is the field of vision as seen by the world camera during calibration the only area that the gaze tracking will be accurate during a recording? In other words, if I were to look around during a recording and the world camera sees other things from when we were calibrating, will the accuracy go down?
hi, any time i create a recording on a phone, i'm unable to see fixations in the player (although i can see them clearly in capture). i have tried using the offline fixation detector - is there something i could be missing? the data doesn't seem to be showing up in the player
Hello! Could someone tell me which repository to watch on GitHub so I know when a new version is out? I'd like an email notification about new software, but not about all of the other issues and pull requests. Is that possible? Thank you so much!
@user-910385 are you saving the recordings with Pupil Capture or Pupil Mobile?
@user-2798d6 you should watch https://github.com/pupil-labs/pupil - however you will get notifications for more than releases as you note. There are two options that might work for you, both are RSS subscriptions: 1. Subscribe to Pupil github releases feed here: https://github.com/pupil-labs/pupil/releases.atom 2. Subscribe to Pupil Labs blog feed here: https://pupil-labs.com/feed.xml (includes other posts in addition to release announcements)
There are also third party tools to monitor only releases like https://newreleases.io/ (but this would require an auth with a third party service - disclosure I have not tried/used any of the third party services, but found them mentioned in this rather long issue about being able to subscribe/be notified about releases only).
Hope this helps
@user-dbadee You do not need to keep your head stabilized to maintain accuracy. You can move your head, walk around, etc and still achieve high accuracy gaze mapping results with Pupil.
@wrp i'm saving the recordings on pupil mobile
@user-910385 did you already perform offline pupil detection and offline gaze mapping in Pupil Player?
@wrp do you have Visual Studio 2017 Preview version 15.3 ?
Hello, how accurate is gaze tracking across the population? Have you performed any studies on this? The color and shape of the iris and the shape of the eyeball are pretty different between people; no eyeball is a perfect sphere.
Hi @papr. An update on the AV sync issue (running on Win10) that I messaged about recently: Looking at audio_timestamps.npy and world_timestamps.npy, the first audio timestamp was ~1.5 s earlier than the first world timestamp, and the duration (last timestamp minus first timestamp) was slightly longer for the audio than for the world video. I tried adding a fixed offset to all audio timestamps so that the first audio timestamp became equal to the first world timestamp. When I loaded the recordings back into Player (now with the modified audio_timestamps.npy), the AV sync was perfect + Player no longer crashed with the index out of bounds error when I allowed the recording to play through to the end. Hope this helps in identifying the issue. Cheers, Ian.
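In case it's useful to anyone else, the shift was essentially just this (numpy; back up the original .npy first):
```python
import numpy as np

audio_ts = np.load("audio_timestamps.npy")
world_ts = np.load("world_timestamps.npy")

# Shift the audio clock so its first sample aligns with the first world frame
audio_ts += world_ts[0] - audio_ts[0]
np.save("audio_timestamps.npy", audio_ts)
```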
Hi, just a short question. In the documentation about the offline surface tracker you explain the following: fixations_on_surface_<surface_name>_<surface_id>.csv - A list of fixations that have occurred on the surface. I'm a bit confused; isn't it just a list of fixations which occurred while the surface was being tracked, where the column "on_srf" gives the information whether the fixation was on the surface?
@user-aa11b1 (not a Pupil Labs employee) You could check out this comparison of 12 eye trackers (Pupil Labs not included) from Holmqvist: https://www.researchgate.net/publication/321678981_Common_predictors_of_accuracy_precision_and_data_loss_in_12_eye-trackers; our research group has a current comparison in a homogeneous subject pool and gets an accuracy of 1.17 degrees (25th/75th percentile: 0.97/1.38), but using the 2D algorithm
2d algorithm?
Hi @papr I am interested only in collecting data about pupil diameter and blinks. Can I use the eye-tracker without calibration or that would somehow affect pupil diameter and blink measurements? Thank you.
@user-b23813 You do not need calibration for that.
@user-aa11b1 He is referring to the 2d pupil detection + gaze mapping algorithm that we use in Pupil
Thanks @papr
That tells the angle of the eye, right?
Not exactly. The 2d detection result only gives the pupil ellipse in eye image coordinates. If you want 3d vectors you need the 3d mapping approach.
See our technical report for details on the 2d algorithm: https://pupil-labs.com/blog/2014-05/pupil-technical-report-on-arxiv-org/
What's the use case of the former? Will it suffice for the HMD add-ons to know where on the screen the user is looking?
Please see the hmd-eyes project for details on how to calibrate HMDs
I can't export my pupil data to CSV. It gives only the headers and no data. Running Windows 7 with only a single eye camera. Capture works .. sees pupil on camera .. red circle .. 3D model shows .. pupil diameter shows .. Any ideas why I can't export? Does anyone have a small raw eye-data-only file, so I can see if I can export it? I need to find out if the problem is in Capture or in Player/export.
Is the field of vision as seen by the world camera during calibration the only area that the gaze tracking will be accurate during a recording? In other words, if I were to look around during a recording and the world camera sees other things from when we were calibrating, will the accuracy go down?
@user-dbadee calibration is relative to the field of view of the world camera, not to the actual world you are looking at
@user-2dca50 1) we only support Windows 10 2) please share the recording with data@pupil-labs.com
@wrp i performed offline pupil detection and still nothing shows as far as fixations. i can't find where offline gaze mapping is in the Pupil Player - any help there?
@user-910385 check out the Gaze From Recording plugin and select offline calibration
Can someone point me at a conversation about object recognition?
@user-41643f This is rarely a topic in this channel. If you need gaze mapping on regions of interest: Check out our surface tracker
Hi, I found filter_messages.py in pupil-helpers. With it we can read data, for example pupil.0, if we run the script. I was wondering if we could read only e.g. the diameter of pupil.0, since the printed message is too much for me and I only need some of its fields. Thank you.
Hi there. I have a question about your fixation detection. We work with velocity, and therefore I would need the timeframe you consider for your dispersion threshold. What do you use there?
And about your online explanation of the minimum duration:
"Minimum Duration (temporal, milliseconds): The minimum duration in which the dispersion threshold must not be exceeded."
Is the dispersion measured on consecutive points or on the initial one?
Hi
AOI Sequence >include function?
@user-76218e No, there is no way to subscribe to diameter only. I suggest you subscribe to pupil, access the diameter field and discard the rest.
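A sketch of that, along the lines of filter_messages.py (assumes Pupil Remote on its default port 50020):
```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "pupil.0")

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload, raw=False)
    print(datum["diameter"])  # keep only the field you need, drop the rest
```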
@user-14d189 The minimum duration can be set by the user. Unfortunately, I don't know the default off the top of my head right now. Dispersion is measured as the maximum distance between all samples collected in the minimum duration window. We remove some outliers though, so "all samples" is not 100% correct.
@user-4580c3 could you elaborate on your question?
thanks @papr! As I understand it, fixations can be determined by 3 characteristics: minimum length in time, maximum length in time, and staying below a maximum velocity / below a maximal dispersion over time. I believe that all of those characteristics are addressed in your offline fixation detector and that they are independent.
Correct
or in other words, the gaze needs to hover around a narrow spot for a minimum amount of time to be a fixation, and cannot be longer than the maximum.
The maximum time constraint is kind of artificial though and helps us classifying fixations much faster
In other words: the relevant constraints are minimum time and maximum dispersion.
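For illustration only, the core of a dispersion-based classifier could look like this (a pseudo-Python sketch, not our exact implementation — we also remove outliers and low-confidence data first):
```python
import itertools
import math

def dispersion(window):
    # Maximum pairwise distance between samples (in degrees)
    return max(
        (math.hypot(ax - bx, ay - by)
         for (_, ax, ay), (_, bx, by) in itertools.combinations(window, 2)),
        default=0.0,
    )

def classify_fixations(gaze, min_duration, max_dispersion):
    """gaze: time-sorted list of (timestamp, x_deg, y_deg) samples."""
    fixations, start = [], 0
    for end in range(len(gaze)):
        window = gaze[start:end + 1]
        if dispersion(window) > max_dispersion:
            # Close the candidate window just before the violating sample
            if window[-2][0] - window[0][0] >= min_duration:
                fixations.append((window[0][0], window[-2][0]))
            start = end
    return fixations
```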
And the dispersion indicates the small area in degrees. And to have a comparison to velocity, I need the time. If it is calculated between data points, then the eye recording frequency is the base for the calculation, isn't it?
Quote: In other words: the relevant constraints are minimum time and maximum dispersion.
Not exactly. Each data point has a timestamp. The difference between them is used as base for duration.
I have to verify that. Reducing the dispersion should have the same effect as reducing the min time.
The reason is that we do not assume a fixed rate since we remove low confidence data points first
I don't think they are exactly equivalent since estimation noise affects dispersion while duration is very exact
ah good to know.
So you would say the dispersion angle is like a cone that includes all eye movements around an area, and is not exactly comparable to velocity based algorithms.
Correct.
Thanks for filling me in. Makes more sense now.
Do you by chance know if there is a velocity-based fixation algorithm available? Or who might have already done one?
I know that there were some users implementing their own detection algorithms, but I don't know if they published their work already
@papr cheers! I'll keep looking, and in the meantime I have a little Matlab script.
Is there any documentation on minimum spec requirements for computer? I am running binocular 200hz pupil labs while capturing 30hz 1920x1080 - Using Mac OS 10.14 Mojave on a late 2013 Intel i7-4558U 2.8ghz - The only app I have running in foreground is pupil capture and my frame rate is varying from about 12-29 fps - I want to guarantee solid performance if we need to buy a new pc - any links for documentation? I googled and searched the pupil labs website and could not find one mention of processor requirements
@user-14d189 We implemented one (or rather used an implementation based on Tobias Knappen): github.com/behinger/etcomp. The pipeline is not polished yet (first the paper, then polishing :-)). You basically want to run this: https://github.com/behinger/etcomp/blob/master/code/functions/et_preprocess.py using et = 'pl'. You would likely need pupil-src compiled (we have a makefile, try it out); you need Pupil compiled for some recommended steps (recalibration, surface mapping). You can likely skip some steps of the pipeline, but we should probably discuss this offline. Once you have Pupil Labs calibrated data as a pandas dataframe (gx, gy and an is_blink column) you could use https://github.com/behinger/etcomp/blob/master/code/functions/detect_saccades.py directly.
Just FYI, this is not a plugin; everything is standalone, completely separate from any GUI Pupil Labs uses.
Second FYI: we use interpolation to get a constant sampling rate. I think this is not strictly necessary, but it was easier for us.
@user-af87c8 What type of interpolation do you use?
@papr Thanks for looking at the data I sent to you. I was able to confirm the pupil data are there in the pupil.pldata file and was able to figure out how to read it .. manually opened it in a hex editor .. found the timestamp field .. 156 bytes in .. 8-byte double float, copy and paste into Matlab hex2num gives 7.1088331e+4; the diameter_3d field is 178 bytes in, reads 2.1408. Next data point, offset 561 bytes: timestamp 7.1088325e+4, diameter_3d 2.1572. There must be an easier way to get these data out! I didn't want to have to write code to retrieve data from files, and am wondering why Pupil Player is unable to export this raw data.
@papr I realize Win7 is not supported .. but it looks like Capture works. See above .. I was able to manually retrieve a few data points from the pupil.pldata file showing timestamps with an 8.5 ms offset = 120 Hz, and pupil diameter 3d at 2.14 and 2.15 mm. Thanks for helping me figure out how to export from Pupil Player so I don't have to write my own file read and write code. I am not very good at that! And I realize you have some C code to read raw files .. even harder for me to implement. I may be able to make or run a Matlab or Igor Pro script. Does anyone have such a thing for reading pupil data?
Hi, is there a way to export the gaze positions from Pupil Player without the correction for the fisheye distortion of the recording, as if the world view were a flat 2d plane? Or alternatively, to somehow post-process the data to reverse the distortion correction? thanks!
What's up everyone. Can someone point me to documentation about running more than one pair of pupil glasses at the same time? Appreciate any direction, thank you
Has anyone had any experience with dealing with glare from printed fiducial marks on paper?
@user-8944cb This is possible if you rename world.intrinsics before opening Player. This way Player will load intrinsics that assume no distortion and therefore won't apply any correction.
@user-c4492b You will have to run multiple instances of Pupil Capture to do so. Please be aware that running the pupil detection at 4x 200Hz (assuming binocular headsets) requires a very powerful CPU. Also, if you want to record from both devices, your storage media needs to be very fast. We recommend using a single computer for each device. Use the Pupil Groups and Time Sync plugins to synchronize the instances between the two host computers.
Hi all, I've been struggling with an eye camera issue all day. I'm working with the Pupil Labs Vive add-on, and it's been working fine until last Friday. Now, I get this error: "eye0 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit." My computer seems to detect both "Pupil Cam 1 ID0" and "Pupil Cam 1 ID1" (I see them in "Bluetooth and other devices" and in Pupil Capture), but "Pupil Cam 1 ID1" doesn't appear in the device manager. I uninstalled the drivers completely and reinstalled them using libusbk 3.0.7.0 and Zadig; tried plugging the Pupil Labs add-on into different USB 3.0 ports, tried different USB 3.0 to USB C cables, and tried plugging it into 3 different computers (all on Windows 10), and the issue remains the same. Since the hardware was left untouched since Friday afternoon, I'm a bit surprised. Has someone dealt with the same problem? (I already read the following posts: https://github.com/pupil-labs/pupil/issues/1120 & https://github.com/pupil-labs/pyuvc/issues/32 ) Thanks!
@user-66516a Do I understand correctly that on all 3 computers the ID1 cam is not listed in any of the Device Manager categories?
Yes, exactly
@user-66516a This might be an hardware issue. Please write an email to info@pupil-labs.com with the description of which steps you have taken already in order to solve the issue.
Ok, thanks!
@user-66516a, a similar thing happened to me - one of the camera feeds stopped in the middle of the day. I needed to return the hardware to Pupil and they repaired it (or gave me a new one, I can't tell them apart)
@user-9d45c9 Thanks for the information. That's a relief!
I wonder what's causing it not to work anymore so suddenly though
It might be that one of the cables is disconnected from the eye cam
@user-66516a the same happened with two different hardware sets. We sent them to pupil to investigate.
@papr Thanks for your reply. I tried renaming it before opening Player, and also tried deleting the file; however, I am getting the same exported gaze points as when the file is in the recording folder. Is there anything else I can try / should do? thanks!
hi - what is the best way to calibrate for mobile devices? is there a best practice?
@user-8944cb please send an email to [email removed]
@user-910385 to clarify, you are going to be showing stimulus on the mobile device screen, correct?
Hello. I am having problems using Pupil Mobile. I have a Motorola Moto Z3. I am trying to stream to my desktop PC. The capture input is set to Pupil Mobile in Capture. Now Capture does not even start:
ctypes.ArgumentError: argument 4: <class 'TypeError'>: expected LP_IP_ADAPTER_ADDRESSES instance instead of LP_IP_ADAPTER_ADDRESSES
@user-11dbde Hi, which OS do you use?
Hi. I am using windows 10
Is this the issue you described here? https://github.com/pupil-labs/pupil-mobile-app/issues/28
yes
ok, I will have a look at it
Thank you.
@wrp Thanks for your reply, I sent an email to [email removed]
Thanks @user-8944cb