More or less. It is still in the works. Basically, data can be received from Pupil Service and a very rough raycast can be done. Now I am struggling to make a game similar to hmd-eyes in order to calibrate the device
Would you mind sending me what you have?
Hi everyone, I just received my pupil in the mail and I'm trying to run Pupil Capture but nothing seems to work. My device is correctly plugged in, but it tells me that "eye0 init failed. Capture is started in Ghost Mode." and all I see is a grey screen. Has anyone seen anything similar to this happen?
Also, I'm wondering about the surface tracking - how big do the surface markers have to be for Pupil to detect them? Can I go as small as 1 inch by 1 inch, for example? And what kind of coordinates does the surface tracking functionality return? Is it a pixel offset from the top left of the surface? Or does it know absolute distance?
Thanks for the help! This device is so cool and I hope to get started with it successfully soon
Here's the image of what I see when I open pupil capture
@user-4fbb59 what operating system are you using?
I'm using a macOS High Sierra
Pupil Capture version is v1.6.x?
Yup, 1.6.14 to be exact
Please try restarting with default settings General > Restart with default settings
Yeah, I gave that a try but it still does the same thing
Are there any other Pupil applications open like Pupil Service?
No
though I could restart my computer to see if that helps?
@user-4fbb59 do you see any world video?
if by that you mean the pupil's video feed, then no
Are the cables connected fully?
I just see a grey screen
I believe so
You may need to push the USB-C connector in further to seat it in the cable clip
it may require a firm push
ahh! yes that did it.
I was afraid to break the device haha
The USB-C connector is a bit stiffer/tighter than other USB connectors
Great, works amazing now. Thank you so much for the fast response and help!
Do you have any insight on my second question regarding the surface tracking?
ok regarding your other questions
Also, I'm wondering for the surface tracking- how big do the surface markers have to be so the pupil can detect it?
A: This depends on how far you are away from the marker. If you are very close, you can use a very small marker. You can adjust the min marker perimeter setting in Pupil Capture/Player to reduce the search area.
Surface tracking will provide you with a transformation matrix which will allow you to map coordinates into the space of the surface
Hmmm, I see.
So the Pupil will give me a transformation matrix and coordinates in terms of what it sees
and then I can use those two to map to coordinates within the paper?
does that sound right?
@user-4fbb59 Pupil software will take care of the mapping of gaze coordinates --> surface coordinates - you don't have to do anything further. For example: If you defined a surface as the cover of a magazine, then Pupil will calculate the position of the gaze relative to this surface. Gaze positions within a surface are normalized coordinates (0,0 for bottom left of the surface and 1,1 for top right of the surface).
here is an example of surface tracking used within a driving study - https://vimeo.com/266868006
in the beginning of the video they show a heat map for the surface; afterwards they disable the heat map visualization and just show the surface bounding box. You can see how markers are used to define a surface in this example.
I see, that makes a lot of sense
let me make sure I understand this correctly
the pupil will return coordinates between (0,0) and (1,1) depending on where you're looking on the surface
where (0,0) is bottom left?
as defined by the markers
Hi all, I am looking to extract from pupil capture via UDP the x-axis angle of the eye gaze, with calibration center at 0deg and +/- from this point depending on gaze direction. Any advice would be appreciated, thanks.
@user-4fbb59 gaze positions are normalized between 0 and 1 relative to the world frame. When you're using surface tracking plugins, you can also get gaze positions relative to surfaces in normalized coordinates. If the gaze position is outside the surface it will be outside of the range (0,0 - 1,1); e.g. you can have a point at (-0.2, -0.2) which would be outside the surface, but with coordinates still reported relative to the surface.
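Since surface-mapped gaze can fall outside the (0,0)-(1,1) range, an explicit on-surface check is handy. This is just a sketch of the convention described above; the function name and the optional `margin` parameter are illustrative, not part of Pupil's API.

```python
def on_surface(norm_pos, margin=0.0):
    """True if a surface-mapped gaze point lies on the surface.

    norm_pos is (x, y) in surface-normalized coordinates: (0, 0) is the
    bottom-left corner of the surface, (1, 1) the top-right. Points outside
    [0, 1] are still reported relative to the surface, just off of it.
    An optional margin tolerates points slightly outside the border.
    """
    x, y = norm_pos
    return (-margin <= x <= 1.0 + margin) and (-margin <= y <= 1.0 + margin)
```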
okay excellent. That definitely helps, thanks so much! Last thing: are you aware of the type of screws that sit on the Pupil? Little buggers are so tiny; I already lost one while moving the eye camera.
@papr @user-b116a6 - I've found a new proposition for a saccade detection algorithm; what do you think about it? From http://dx.doi.org/10.1016/j.visres.2017.03.001 Bargary et al., Individual differences in human eye movements: An oculomotor signature? (2017): "All saccades in each of the tasks were detected with the same purpose-built saccade algorithm. This algorithm used both eye acceleration and eye velocity criteria to detect and profile a saccade. The presence of a saccade was detected if the eye acceleration exceeded a relative threshold value (six times the median value of the standard deviation of the acceleration signal during the first 80 ms of all trials for a particular person), or if the eye velocity exceeded an absolute threshold of 50°/s (the latter criterion was used very rarely). After detection, the saccade was profiled using the eye velocity record: borders of the saccade were defined as the regions where the eye velocity dropped below three times the median value of the standard deviation of the eye velocity record during the first 80 ms of all trials for a particular person."
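For anyone who wants to experiment with those criteria, here is a rough single-trace sketch in Python/NumPy. The paper computes its thresholds from the first 80 ms of all trials per person; this simplified version estimates the noise from the first 80 ms of a single trace only, and all names here are mine, not the paper's.

```python
import numpy as np

def detect_saccade_mask(position_deg, fs_hz, accel_k=6.0, vel_thresh=50.0,
                        baseline_s=0.08):
    """Flag samples belonging to saccades, loosely following Bargary et al.

    position_deg : 1-D gaze position trace in degrees
    fs_hz        : sampling rate in Hz
    accel_k      : relative acceleration threshold factor (paper: 6x baseline SD)
    vel_thresh   : absolute velocity threshold in deg/s (paper: 50 deg/s)
    baseline_s   : window used to estimate noise (paper: first 80 ms)
    """
    dt = 1.0 / fs_hz
    velocity = np.gradient(position_deg, dt)   # deg/s
    accel = np.gradient(velocity, dt)          # deg/s^2

    n_base = max(2, int(round(baseline_s * fs_hz)))
    # The paper takes the median over trials of the baseline SD; with a
    # single trace we just use the SD of the first 80 ms as the estimate.
    accel_noise = np.std(accel[:n_base])
    vel_noise = np.std(velocity[:n_base])

    # Detection: acceleration OR absolute-velocity criterion.
    onset = (np.abs(accel) > accel_k * accel_noise) | (np.abs(velocity) > vel_thresh)
    # Profiling: saccade borders where velocity exceeds 3x the baseline SD.
    in_saccade = np.abs(velocity) > 3.0 * vel_noise
    return onset & in_saccade
```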
Hello, I need assistance in finishing the Pupil Labs plugin for the Unreal game engine that Tudor started, ASAP! Sorry to sound demanding, but we have a May 11 deadline presentation to NIST!
Hi! Another quick question regarding Pupil- how does it get Z positions? Is the world camera able to detect depths? If not, how does it get gaze_point_3d_z?
Anyone know if this is being used for medical testing? I've got a use case that could really help patients who might be having subtle, hard to detect strokes.
@user-4fbb59 the z positions are estimated via binocular vergence using gaze angles from our 3d model.
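This is not Pupil's actual implementation, but the geometric idea behind vergence-based depth can be sketched as finding the closest approach of the two gaze rays; all names, and the midpoint convention, are assumptions for illustration.

```python
import numpy as np

def vergence_point(o0, d0, o1, d1):
    """Estimate a 3d gaze point as the midpoint of the closest approach
    of two gaze rays (one per eye) - a minimal sketch of binocular vergence.

    o0, o1: ray origins (eye centers); d0, d1: gaze directions, all expressed
    in one common coordinate system. Returns None for (near-)parallel rays.
    """
    o0, d0 = np.asarray(o0, float), np.asarray(d0, float)
    o1, d1 = np.asarray(o1, float), np.asarray(d1, float)
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)

    # Minimize |(o0 + s*d0) - (o1 + t*d1)|^2 for the closest-approach points.
    b = d0 @ d1
    w = o0 - o1
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:                      # rays nearly parallel
        return None
    s = (b * (d1 @ w) - (d0 @ w)) / denom
    t = ((d1 @ w) - b * (d0 @ w)) / denom
    return 0.5 * ((o0 + s * d0) + (o1 + t * d1))
```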
I want to use libuvc to get the video stream from the world camera (Ubuntu 16). The ID 05a3:9231 is Cam1 ID1 and the ID 05a3:9230 is Cam1 ID0, but Cam1 ID2 has the same ID as ID0: 05a3:9230. If I set the ID param to "05a3:9230", the video is from the right eye cam. So how do I get Cam1 ID2? And please tell me whether Cam1 ID2 is the world camera.
Correct, Pupil Cam1 ID2 refers to the world camera.
You should use the cameras' names to select the correct camera to initialize instead of the IDs
@papr Thanks. I also found that when using IDs I can set the index value; the default 0 refers to the eye cam, 1 refers to the world cam.
You mean the index value within the device list? I would advise against using the device list index as a way to select the correct camera.
The code is as follows:
<!-- Parameters used to find the camera -->
<param name="vendor" value="0x05a3"/>
<param name="product" value="0x9230"/>
<param name="serial" value=""/>
<!-- If the above parameters aren't unique, choose the first match: -->
<param name="index" value="0"/>
I think the index disambiguates devices that share the same IDs.
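Selecting by name rather than VID/PID can be sketched with pyuvc (pupil-labs' libuvc bindings), whose `uvc.device_list()` returns dicts with `name` and `uid` keys. The helper below is pure Python; the `open_world_camera` part is an untested sketch that assumes pyuvc is installed.

```python
def find_device_by_name(devices, name_substring):
    """Return the first UVC device dict whose 'name' contains name_substring.

    `devices` is expected to look like the list returned by pyuvc's
    uvc.device_list(): dicts with at least 'name' and 'uid' keys.
    """
    for dev in devices:
        if name_substring in dev.get("name", ""):
            return dev
    return None

def open_world_camera():
    """Open "Pupil Cam1 ID2" (the world camera) by name - untested sketch."""
    import uvc  # pupil-labs pyuvc; only needed for actual capture
    dev = find_device_by_name(uvc.device_list(), "Pupil Cam1 ID2")
    if dev is None:
        raise RuntimeError("world camera not found")
    return uvc.Capture(dev["uid"])  # select by unique uid, not by VID/PID
```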
Hi guys, I have a question regarding the DIY pupil headset. Since I could not find exposed (black) film negative, what other material do you recommend using as a filter for the eye camera?
@user-d1b281 you could also source a bandpass filter, but exposed film is the best choice for DIY setup.
@wrp Thank you very much
another question is, can I use an undeveloped film?
@user-d1b281 no, this will not work.
I'm sorry for asking so many questions, but can I use a floppy disk?
@user-d1b281 questions are welcome. This will also not work (someone asked/tried this earlier - search back and you might be able to find the discussion)
@wrp So, can I use this one?
Hi all, I just got Pupil, and I'm trying to use it on my computer now (running Windows 7). I installed Pupil Capture, but there seems to be a problem with the drivers: 2 of the cams are installed under "Imaging devices" instead of "libusbK Usb Devices". I uninstalled, restarted, and additionally installed the drivers automatically. Still the same issue. I tried on a different PC and there was absolutely no problem. Any ideas on how to solve this issue?
*manually, I meant
As an (expected) result, I can only access one of the cameras with Pupil Capture
We only support Windows 10. Please upgrade your system to use Pupil Capture.
I am aware of that, but for our experimental setups we were using Windows 7 - and I was trying this out before changing everything. Thank you anyway
great, thanks!
Can someone help me understand the exported data for surface tracker? What's the difference between world_timestamp and gaze_timestamp? And what's the difference between x_norm and x_scaled?
@user-4fbb59 Hey there. *_norm means that the values are normalised to the surface: (0,0) refers to the bottom left corner and (1,1) to the top right corner. *_scaled values are the *_norm values multiplied by the surface size (which you have to set manually in Capture/Player).
world_timestamp refers to the scene frame's timestamp that the datum is correlated to. You can use it to identify the correct scene frame. The gaze_timestamp is the gaze datum's timestamp that is mapped to the surface.
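The *_norm to *_scaled relationship is simple enough to sketch directly; the function name and the A4-sheet example are illustrative, but the multiplication matches what the export columns contain.

```python
def norm_to_scaled(norm_pos, surface_size):
    """Convert surface-normalized gaze to surface units.

    surface_size is the (width, height) you set manually in Capture/Player,
    e.g. (21.0, 29.7) for an A4 sheet measured in centimetres. The *_scaled
    columns in the export are exactly the *_norm values times this size.
    """
    return (norm_pos[0] * surface_size[0], norm_pos[1] * surface_size[1])
```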
@papr so each frame has multiple gaze recordings on it?
correct, since the gaze timestamps are based on the eye camera frame timestamps, and the eye cameras record at a higher frame rate than the world camera
excellent. Thank you so much!
Hello, I have a question about the capture rate. When I record with a 200 Hz capture rate for the eye cameras (binocular) and 120 Hz for the world camera, I get ~400 timestamps for each second (not sure it is seconds, but it aligns with the number of timestamps relative to the length of the recorded video in seconds), and about 100 different (a little fewer) 'index numbers' in the 'Gaze positions' Excel spreadsheet. When I record with 120 Hz for both eye cameras and world camera, I get ~240 timestamps. The data in all variables (as far as I can see) is different under each timestamp. The number of timestamps seems to be double the capture rate of the eye cameras - does this mean that the eye cameras are not synced? Is there something I need to change in the settings in this case? Thanks for the help!
Hi all, I have been working on a project using pupil groups and wanted to see if anyone here has had experience operating the groups feature on a standalone laptop. I have consistently run into issues getting my separate instances of Capture to recognize my two separate headsets (I have two 300hz trackers, connected to a '16 MacBook Pro via USB 3.1 cables). I don't have any issues getting the first camera to recognize, however I have found myself unable to reliably connect each of the sets. Sometimes I will succeed at getting both cameras recognized, however when I begin recording one or both cameras freeze up fairly fast.
Just wondering if anyone has experienced these quirks; my current theory is that my laptop CPU simply cannot handle the load of four (counting each eye cam) video streams. If anyone has had success in a similar setup, I'd be excited to hear how!
I have found a publication from Tobii explaining the sampling frequency. Maybe it could be useful (the references are also useful for deeply understanding rates in eye tracking). This is the link: https://www.tobiipro.com/learn-and-support/learn/eye-tracking-essentials/eye-tracker-sampling-frequency/
Hello folks. I would like to know whether I'm doing the right thing before taking on a major analysis chore :). I want to compare the fixations onto a surface vs the total amount of fixations. What I have done is to enable both the offline fixation detector and the offline surface tracker. Having exported this file, I can see the values in the exported document named "Surface_gaze_distribution" in Excel. This gives me the following info: http://piclair.com/pksnf Note that the name of my surface is "unnamed". Based on this, I would assume that I am able to compare the fixations on the surface against the total fixations, right? Lastly, what are these numbers a measure of? Fixations measured in time? Sorry for the long message and thanks in advance :).
Hi all! Do you know if, with the calibration files (eye 0, eye 1 and world camera), it's possible to transform the position data (in normalised units) to centimetres? Or at least to know the position in normalised data at which the calibration points appear? (We controlled distance from screen and head movements, and we know at what distance from the centre of the screen the different stimuli appear.) Thank you very much!
@papr I had calibrated the world camera and got the cam.yaml, but I cannot find where to use it. Do you know how to change the world camera params via UVC?
@here please check out our latest release. Improved pupil detection included! https://github.com/pupil-labs/pupil/releases/tag/v1.7
@mpk I tried to use the new Pupil for Mac 1.7, but after downloading, when I start the program it says "file damaged"
@user-1bcd3e I just saw that too. I'm uploading a new bundle. Can you post the full error message? We have not seen this before and it does not happen deterministically.
ok. I'm code-signing the mac bundles again now and will upload. It should work then!
@mpk thank you ... I'm excited to see the new release!! I'm sure it will be a great job
@user-1bcd3e alright I uploaded the bundle please try the new one.
Hey folks, just got the Pupil eye tracker, and wondering what's the easiest way to integrate with Python. I'd like to be able to stream estimated eye position in screen coordinates and draw that estimated eye position via, e.g. PsychoPy, PyGame, PyGaze, etc. Is there a higher level API than the bare metal ZeroMQ API? Thanks!
For streaming of screen coordinates you will need to use Pupil surfaces. The easiest way is to write a plugin that talks to the online surface tracker.
@user-bcf3aa I am guessing you want a real time stream.
You also have an alternative option for offline screen coordinates.
@mpk it doesn't seem to be solved - same "file damaged" error!
@mpk I downloaded by Chrome
@user-41f1bf yes a realtime stream would be ideal
Hello, my Pupil headset has very good accuracy with a fixed head and distance (the distance between the subject's head and the screen/working area). However, if the subject's head moves, the eye tracking accuracy becomes bad. Has anyone encountered this? Thanks.
Hi! I am trying to send a custom message from a Raspberry Pi to a Pupil Labs PC and then to record this message in a text file. I am able to send the standard messages via zmq, for instance socket.send("R") to start the recording, but I need help with sending and recording a custom message that allows us to synchronize different machines. Best, Pablo
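One way to get a custom message into the recording is to send it as an annotation notification over Pupil Remote, as in the pupil-helpers remote_annotations example. The payload shape below follows that example; the host, port, label, and extra field names are placeholders, and the sending function is an untested sketch that assumes the Pupil Remote plugin is enabled in Capture.

```python
def make_annotation(label, timestamp, duration=0.0, **extra):
    """Build an annotation notification payload for Pupil Capture.

    `record=True` asks Capture to store the notification with the recording.
    Extra keyword fields (e.g. a trial number) are stored alongside it.
    """
    payload = {"subject": "annotation", "label": label,
               "timestamp": timestamp, "duration": duration, "record": True}
    payload.update(extra)
    return payload

def send_annotation(note, host="127.0.0.1", port=50020):
    """Send the annotation via Pupil Remote - untested sketch, needs pyzmq/msgpack."""
    import zmq      # pyzmq
    import msgpack  # msgpack-python
    remote = zmq.Context.instance().socket(zmq.REQ)
    remote.connect("tcp://{}:{}".format(host, port))
    # Notifications are sent as a two-frame message: topic, then payload.
    remote.send_string("notify." + note["subject"], flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(note, use_bin_type=True))
    return remote.recv_string()  # Capture replies with a confirmation string
```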
@user-1bcd3e I confirm that the most recent macOS bundle is not currently able to install/run. We will work on this today to resolve any signing issues in the deployment process.
@wrp @mpuccioni#0374 sorry about that. Somehow Pupil Capture was still not signed. I just uploaded again. Please try it!
I have tried a few times. It remains the same. At the moment, I can't use it. Does it work on your side?
Ah. Just saw the previous message. Hope it will be solved soon.
@user-1bcd3e As a quick and dirty workaround you can disable the macOS signing check by running "sudo spctl --master-disable" in the terminal. It is sufficient to run the app once; then re-enable the check with "sudo spctl --master-enable" and the app will continue to work.
we all trust that Pupil Labs is not a Trojan horse
@user-b571eb see my answer for a quick solution
@user-29e10a yes, that works, but I hope that the version from this morning is properly signed
@mpk I just downloaded this: "https://github.com/pupil-labs/pupil/releases/download/v1.7/pupil_v1.7-42-g7ce62c8_macos_x64_signed_confirmed.zip" and the error persists, I'm sorry ...
...
I have no words.
What's the output of codesign -dv --verbose=1 /Applications/Pupil\ Capture.app
and for the others as well..
*processing
Executable=/Applications/Pupil Capture.app/Contents/MacOS/pupil_capture
Identifier=Pupil Capture
Format=app bundle with Mach-O thin (x86_64)
CodeDirectory v=20200 size=27237 flags=0x0(none) hashes=1355+3 location=embedded
Signature size=8568
Timestamp=04.05.2018, 08:21:33
Info.plist entries=10
TeamIdentifier=R55K9ESN6B
Sealed Resources version=2 rules=12 files=292
Internal requirements count=1 size=176
for capture
Executable=/Applications/Pupil Player.app/Contents/MacOS/pupil_player
Identifier=Pupil Player
Format=app bundle with Mach-O thin (x86_64)
CodeDirectory v=20200 size=26716 flags=0x0(none) hashes=1329+3 location=embedded
Signature size=8567
Timestamp=04.05.2018, 08:25:34
Info.plist entries=11
TeamIdentifier=R55K9ESN6B
Sealed Resources version=2 rules=12 files=286
Internal requirements count=1 size=172
for player
i didn't install the service
ok, thanks. I'll investigate some more.
as we're speaking - I didn't notice this before, but is coarse pupil detection disabled in Pupil Player, or has that been the case since 1.7?
@user-29e10a coarse pupil detection has been disabled for low resolutions since at least v1.6
@mpk @user-29e10a thank you I'll wait for updates.... me too I didn't install pupil service, as I think its not the problem
@papr why is that? it makes it easier to tune the intensity range imho
@user-1bcd3e @user-29e10a @Eli#9432 the macOS release is finally fixed.
@user-29e10a It was mainly used for a speed increase. But the speed gain at lower resolutions is not high enough to be worth it.
Your argument is something to consider though
@papr understandable. I'm working on a project which includes the pupil detection completely inside a C++ and C# environment... it works fine so far, but I had to implement the coarse pupil detection in C++; if this is interesting for you, I'm ready to share
I wrote a wrapper to get the results of the pupil detection from unmanaged C++ into managed C# code, all without Python
This is definitely something that would be beneficial for the Pupil project
ok, I can prepare a pull request, or I'll send you the relevant code snippet (just a method)
Is there any kind of minimum hardware recommendation? Would a Raspberry Pi Zero be sufficiently powerful for capturing?
any suggestions for first-time eye camera settings?
Hello @papr, I'm trying to analyze how the gaze positions shift between two surfaces (left and right in this case). I couldn't find any documentation about how to understand these exported files. Before making my own assumptions, I would like to ask for your help. Thanks in advance!
Hello guys, is there info somewhere about how calibration is done? What algorithms are used to detect the eye ball, pupil, pupil movement etc. for both 2d and 3d detection?
@user-e38712 This paper describes the pupil detection algorithm: https://dl.acm.org/citation.cfm?id=2641695
@papr thanks! When I'll reach my PC I'll check if I have access to this online library as a student
This is the pdf for the paper. The paper is publicly available as well but I do not have the link at hand.
Ok thanks! I'll look at this
Hello, I need help using Pupil Mobile with Capture on my PC. I have a Nokia 6 and a Samsung Galaxy S8, and Windows 10. I have installed Pupil Mobile on my phones and started Pupil Capture on my PC, but nothing happens. The PC and the phones are connected to my ADSL modem. I have not connected the Pupil headset yet; I want to test the platform first and then connect the headset.
@user-9d7bc8 I think RPI might not be powerful enough
@user-9d7bc8 you should get in touch with @user-c494ef to discuss - as they have Pupil running on NVIDIA Jetson TX2
@user-ba98c6 please post your questions - I am about to go AFK, but will respond when back
Hello, I'm trying to setup eye tracking for vive and whenever I launch the 3d calibration unity project, my computer blue screens with the stop code "MULTIPLE_IRP_COMPLETE_REQUESTS"
@papr Hello, I'm struggling to figure out the basics of Pupil Remote, forgive me as I don't have much networking experience :p I'm just testing the Remote feature -- I have Pupil Capture running and I've enabled the Remote plugin, then I'm running python3 pupil-helpers-master/python/pupil_remote_control.py but I get this error. What else do I need to do?
Also, a previous link you made to https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/remote_annotations.py no longer exists -- is there somewhere I could still see the code? Thank you!
Hey @user-6930d5 The traceback indicates an interruption by the user. This happens if you hit ctrl-c within the terminal.
The output should look like this:
4452.78368767
Round trip command delay: 0.00037980079650878906
Timesync successful.
OK
OK
The linked script moved to https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
Hello Folks, 1. Is there any way to get a clean screenshot from a scanpath through PupilPlayer? 2. is there any way to make small clips for presentations of findings through the software? Thanks in advance
Hi everyone, I am trying to get surface tracking working for multiple monitors and I'm not quite sure how the markers are supposed to work. If I attach the markers to each monitor, should the eye tracker just be able to detect them or will I need to specify each surface through the pupil capture application?
@user-88ecdc The recommended way is to set the trim marks such that they encapsulate the section that you want to present, and to export the recording afterwards. The exported video includes all visualizations.
@user-fa3706 The markers are detected automatically, but the markers alone are not a surface yet. You will need to specify which markers belong to which surface. You do so by adding a surface while showing the markers that belong to the surface to the camera.
@papr so I will need to use the capture application to present the marked surface to the world camera and then I can specify the surface while it is viewing the marked surface?
correct. To summarize:
1. enable the surface tracker
2. point the world camera at the monitor with the markers
3. hit A to add a surface - the surface will automatically use the recognized markers as its surface definition
4. edit the surface as required
Perfect. Thank you!
@papr I have been testing what you said, and Pupil Player doesn't seem to be recognizing the surface in the recording even though Pupil Capture seems to recognize it.
Hello!
I am using Motion on a raspberry pi to create a stream at its ip address at two ports. Is there a way I can capture those feeds using pupil on another machine?
@user-c494ef @user-9d7bc8 @wrp I am using a rpi 3B and a DIY headset with the two webcams recommended by the DIY documentation.
Thank you for your brilliant answer yesterday @papr. On another note, does anyone get this error: "Player_methods: No valid dir supplied" when opening Pupil Player? And does anyone know a fix? Ty in advance
@user-88ecdc do you happen to have any non-ascii chars in your info.csv file within the dataset?
@papr @user-8889de question seems relevant for pyndsi
@user-88ecdc Are you using Windows?
Yes I am. What are non-ASCII chars?
@user-88ecdc Does the error appear before you drop the recording onto the window or afterwards?
@user-88ecdc please share the info.csv file - I have a feeling that this might be related to the username of the machine listed in the info.csv file
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
player - [ERROR] player_methods: No valid dir supplied.
This is what appears when I open Pupil Player. The window where I am supposed to be able to drop in clips does not appear.
Ok, is info.csv found within the Pupil Player folder?
@user-88ecdc could you share your recording with data@pupil-labs.com? The easiest way to do so is to upload it to Google Drive and to share the folder with the email address above.
Thanks @papr , I think I had an issue with a module but Remote works now!
I want to use Pupil Labs with jsPsych, a JavaScript framework for running behavioral experiments in the browser. I'm trying to find a way to sync up the timestamps of the experiments and the Pupil Labs recordings. Any suggestions for a good/easy approach? Some ideas: using Pupil Remote to reset the timestamp to 0 when the experiment starts / make an annotation, or have Pupil Capture/Player recognize a special marker in the experiment and make the timestamp 0 from that point on.
I've tried setting timestamps to 0 using Pupil Remote during a recording and it messed up the videos, probably because of anachronistic timestamps. How would you recommend handling this? Should timesync always be done before a recording starts?
You are exactly right! Time sync needs to be done before starting a recording, else the assumption of time being monotonic will break.
I suggest simply setting the time to whatever clock your experiment uses.
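Concretely, Pupil Remote accepts a "T &lt;seconds&gt;" command that sets Pupil time; sent once before the recording starts, it aligns Pupil's clock with the experiment's. The helper names are mine, and port 50020 is Pupil Remote's default; the sending function is an untested sketch.

```python
def timesync_command(experiment_time_s):
    """Format the Pupil Remote command that sets Capture's clock.

    Sending "T <seconds>" over the Pupil Remote REQ socket sets Pupil time
    to that value. Do this once, BEFORE the recording starts, so that
    recorded timestamps stay monotonic.
    """
    return "T {:.6f}".format(experiment_time_s)

def set_pupil_time(experiment_time_s, host="127.0.0.1", port=50020):
    """Send the timesync command via Pupil Remote - untested sketch, needs pyzmq."""
    import zmq  # pyzmq
    remote = zmq.Context.instance().socket(zmq.REQ)
    remote.connect("tcp://{}:{}".format(host, port))
    remote.send_string(timesync_command(experiment_time_s))
    return remote.recv_string()  # Capture replies with a confirmation string
```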
@papr Roger, will do. I have 24 recordings, each about 6 GB, but I assume it is OK if I just upload one?
Do all of them trigger the issue?
Never mind, you said that issue is that the drop-recording window does not appear, correct?
correct
This issue is independent of the recording then. Which version of Pupil Player do you use?
System Info,"User: HumLab, Platform: Windows, Machine: AAU109897, Release: 10, Version: 10.0.10586"
Capture Software Version,1.4.1 Data Format Version,1.4.1
Could you please update your software to the newest version https://github.com/pupil-labs/pupil/releases/tag/v1.7 and try again?
It worked, thanks :). When exporting a trimmed sequence of my clip, is there any way to make the x50 speed carry over, or can an exported trimmed section only be visualized in real time?
Are you saying that playback in Pupil Player is slower while video exporting a section?
I want to show a sequence at half speed, but when I set it to half speed and export, the exported version plays in real time. And is there a way to improve the video quality when exporting? Thanks for such quick replies!
Ah, now I understand. Setting the playback speed in Pupil Player only affects playback in Pupil Player. You can use a media player like VLC to play the exported video back more slowly. Alternatively, you can slow down the video permanently using a tool like ffmpeg.
And there are no export quality settings.
it is 1280 x 720 as a source file, 30 FPS when looking at the File Source; I assume that is just how it is. Thanks for the VLC advice
Wow, the new version of Pupil gets a different export result than the previous one, with all settings still on default. Can anyone tell me what Total Gaze Points can be interpreted as a measure of? TY for your help
What differences do you see? And what do you mean by Total Gaze Points can be interpreted as a sign of?
Total_gaze_point_count is a result I get from looking into "Gaze Distributions" after exporting offline surface tracking + the offline fixation detector. What I mean is that I assume gaze points are related to how much time subjects look at something, but also that there is not necessarily a causal relationship between the two.
This statistic is definitely related to time. total_gaze_point_count is the total number of gaze points in the exported section. Below it follow all registered surfaces and how many of the gaze samples were detected on the corresponding surface. Be aware that summing all surface gaze counts does not necessarily add up to the total count. It can be less (subject does not look at any surface) or more (subject looks at overlapping surfaces) than the total count.
I see, thank you for that explanation! For our study we only had 1 surface, and we compared the number of gaze points that this one surface had against the total amount of gaze points recorded. But what exactly is the definition of a single gaze point?
A single gaze point is a mapping of a single pupil datum or a pair of pupil data. Each eye video frame generates exactly one pupil datum. A meaningful metric would be to calculate the average number of gaze points per second over the whole recording.
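That metric can be computed directly from the exported gaze_timestamp column; a minimal sketch (the function name is mine):

```python
def gaze_points_per_second(gaze_timestamps):
    """Average number of gaze points per second over a recording.

    gaze_timestamps: iterable of gaze_timestamp values in seconds, e.g. the
    gaze_timestamp column of the exported gaze positions csv. Returns 0.0
    for degenerate input (fewer than two samples, or zero duration).
    """
    ts = sorted(gaze_timestamps)
    if len(ts) < 2 or ts[-1] == ts[0]:
        return 0.0
    # Number of inter-sample intervals divided by total duration.
    return (len(ts) - 1) / (ts[-1] - ts[0])
```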
Hi all, one of our research topics is to monitor the pupil diameter with regard to an audio input. I would be glad to have any comments from the experts on the following points:
1. We have the feeling that the pupil diameter measurement is more accurate with the 2d model than with the 3d one. Is that correct?
2. Could you confirm that in a csv export the timestamp unit is seconds, and that the rate of capture is given by the cameras, so the number of diameter recordings might depend on CPU load or other external issues?
3. The blink process induces a physiological variation in the eye diameter; we would like to delete these non-valuable data. Does it make sense to set the confidence value to 0.9, or would you advise another value (1?)?
4. Is it possible to add some triggers or marks into the timeline in Pupil Player that could be retrieved in a csv export? I have not yet tested version 1.7 with the new audio wave visualization. With this new feature we could perhaps use the trimmers in Pupil Player and export part by part, but for a long file that is a bit tedious.
5. Is it possible to add audio export to the video export plugin in Pupil Player?
Thanks (sorry for the long list!)
5 - this has already been implemented for several versions. Audio playback might not work with your default media player though. We recommend VLC.
1 - the 2d pupil diameter is a direct result of the pupil detection, while the 3d diameter is based on the 3d model. If the 3d model is not well fit or unstable, the 3d diameter will vary. If you are looking for relative pupil size changes, it might be better to use the 2d diameter for the moment. We are actively working on improving the 3d model and its stability.
2 - yes, timestamp units are always in seconds. And yes, the frame rate can drop on high load and result in fewer pupil diameter samples
3 - I would suggest using the blink detection plugin to find blinks and using them to filter your data accordingly
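One hedged way to combine both suggestions - a confidence cutoff plus the blink detector's windows - on exported data. The row format, the function name, and the 0.8 default are my assumptions for illustration, not Pupil's API; tune the threshold for your own data.

```python
def drop_low_confidence(rows, min_confidence=0.8, blink_windows=()):
    """Filter pupil-diameter samples for analysis.

    rows: iterable of (timestamp, diameter, confidence) tuples, e.g. parsed
    from the Raw Data Exporter csv. Samples below min_confidence, or falling
    inside any (start, end) blink window taken from the blink detector
    export, are dropped.
    """
    kept = []
    for t, d, c in rows:
        if c < min_confidence:
            continue  # low-confidence detection, likely unreliable diameter
        if any(start <= t <= end for start, end in blink_windows):
            continue  # inside a detected blink
        kept.append((t, d, c))
    return kept
```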
4 - You can use the annotation plugin to add such triggers either manually during the recording, automatically during the recording via notifications, or manually in Player
@papr thanks for the answers;
5 - yes, the export has audio in VLC (I tried the QuickTime player unsuccessfully)
Thank you so much for the answers @papr. Mucho appreciato. Have a great day.
@user-88ecdc you too!
1 - we noticed that stability of the 3d model was the major issue, especially after a blink. Did version 1.7 address this point, or should we try again with the next one?
The stability improvements are still work-in-progress.
ok thanks papr; last question (for today!): are you aware of any visualization plugin from the community to display a parameter (e.g. diameter) over time?
Pupil Player shows the 2d pupil diameter in a timeline by default
yes, but it is not scalable, and it is difficult to assess minor changes (absolute or relative) to compare between recordings
I understand. This will be fixed as soon as we introduce timeline zooming. Until then, I would recommend exporting the pupil data via the Raw Data Exporter and using something like Google Spreadsheets to visualize the diameter column of the csv file.
ok, that is what I am doing for the moment, waiting for the introduction of this new feature!
Ohh, now it happened again (even on the new version). I cannot open Pupil Player now; it says: player - [ERROR] player_methods: No valid dir supplied (C:\Users\HumLab\Desktop\pupil_v1.7-42-7ce62c8_windows_x64\pupil_player_windows_x64_v1.7-42-7ce62c8\pupil_player.exe)
Try the following:
1. Make sure Player is not running
2. Look for the pupil_player_settings folder in your user/home directory
3. Delete the player_user_settings file
4. Start Player
Worked perfectly, thanks once again!
Ok, thank you for the report. I will investigate the cause of this issue.
Deleting the user settings is just a bad work-around for now.
Hey all, so when I use the pupil to do some simple gaze tracking it seems like the predicted gaze is jumping around a lot, very inaccurate and noisy. I'm trying to keep my head relatively still and the accuracy seems to work fine when I'm calibrating. Any reason why this could be? Thanks!
Also- when the gaze jumps around, is that a consequence of not being able to see my pupil correctly, or is it misidentifying the pupil?
@user-4fbb59 This can have multiple reasons. Could you share an example recording with data@pupil-labs.com such that we can have a look?
@papr Hello, I have attempted the surface tracking setup that you explained to me yesterday. It seems that Pupil Capture is registering the surface, but the Pupil Player application doesn't seem to be recognizing the surface when I open the test recording. Is there a different setting that needs to be used? I can send the recordings if that helps.
Did you open the offline surface tracker?
Hi guys, can you please tell me what the resolution is for your 200Hz cameras, and what the latency is from camera to PC and from PC to dumped values?
also the angular resolution/precision
@papr It is enabled in the plug-in settings on both Capture and Player.
Mmh. Normally Capture should save the surface definition. You can simply add a new surface in Player though
Ok. Is that done in the same process? Just tell Player when the world camera is viewing a marked surface?
Yeah, you seek to a frame that shows your surface, pause playback, add the surface, edit if necessary
Ok. I just tried that and it seems to be working now. Thank you.
@papr semi-resolved; it seems it just has to do with the accuracy of pupil capture. Recalibrating and being more deliberate about eye movements seems to help
I do have another issue though- i'm trying to use pupil to find out where exactly I'm looking in this maze (attached below) but the surface tracker seems to pick up the maze as skewed- i'm afraid this will affect my accuracy, because I need to match these gaze tracking points up with pen data. Any thoughts on how to make the surface tracking more accurate?
@user-4fbb59 the image you attached is a screenshot from pupil capture? Can you turn on surface tracking so that we can see the quality of the surface or send a sample/example dataset to data@pupil-labs.com so that we can provide you with some concrete feedback?
@wrp the image is a screenshot of the surface debug tool
ok, thanks for the clarification
I essentially need to get the location of my gaze on the page in (x,y), but since the image is skewed I'm not sure how to translate the gazepoint into actual gaze coordinates
(as in, where I'm looking on the page)
by skewed, you mean distorted due to the lens distortion from the world camera, correct?
yup
(sorry for the imprecise wording)
Have you tried re-calibrating your world camera?
You can do so by using the camera intrinsics estimation
plugin
Hm, okay I'll look into that
thanks so much!
@user-4fbb59 final question - where are the corners of the surface that you defined?
Well, I have 4 tracker images on the paper
they should be in the corners
Did you edit the surface at all or no?
I haven't tried editing
would that provide consistent results even when I move my head around?
You might want to try editing the surface so that the corners of the surface correspond more closely to the content
Okay. I'll try that as well
Thanks for the pointers!
You're welcome. Looking forward to your feedback.
Hello, I want to use the world camera with Python to record video. How can I do it? Thank you
@user-3742dc https://github.com/pupil-labs/pyuvc this is what you need
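A rough sketch of what using pyuvc for this could look like. The device-name matching (Pupil world cameras reportedly carry "ID2" in their name), the frame attributes, and the exact API are assumptions based on the pyuvc project and may differ between versions, so treat this as a starting point only.

```python
def pick_device(devices, key="ID2"):
    """Pick a camera entry from uvc.device_list() output.
    Pupil world cameras are typically named '... ID2' (assumption)."""
    for d in devices:
        if key in d["name"]:
            return d
    raise LookupError("no matching camera found")

def record_frames(n_frames=90):
    # import kept inside the function so this sketch parses without hardware
    import uvc  # https://github.com/pupil-labs/pyuvc
    cap = uvc.Capture(pick_device(uvc.device_list())["uid"])
    cap.frame_size = (1280, 720)
    cap.frame_rate = 30
    for i in range(n_frames):  # ~3 s at 30 fps
        frame = cap.get_frame_robust()
        # frame.bgr is an image array; hand it to e.g. cv2.VideoWriter
        print(i, frame.bgr.shape)
    cap.close()

# with a headset attached:
# record_frames()
```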
Good morning. I would like to know about your Android application: what its technical characteristics are, what functions will be supported, etc. Thank you in advance.
Good morning everyone. Does anyone know if it is possible to access the raw image data from a Pupil headset outside of Python?
We could do it before using the default DirectShow drivers, but it seems that in the latest version of pupil capture these drivers are overwritten with some custom ones and we can't find the cameras anymore
btw, this is in Windows 10
and using the latest headset (i.e. 200Hz cameras)
we've tried it on a number of different computers. as long as we don't run pupil capture, everything runs fine
but as soon as we run it for the first time there are some funky PowerShell scripts that try to overwrite driver certificates, etc.
and then the headset becomes "bricked" to using pupil capture only
@user-48da9b to revert to normal drivers, just remove the libusbK drivers from the cameras in the device manager and allow the system to install the default UVC drivers. We use these custom drivers to run 3 cameras on one USB bus.
@mpk ok, thanks for the heads-up
is there a way to access the custom drivers via some C/C++ API?
@user-48da9b you should be able to do it with this: https://github.com/pupil-labs/libuvc/
our fork has changes that are required.
@mpk thanks, this sounds perfect
which version of libusb are you targeting?
@user-48da9b libusb1.0
Hi all, can anyone explain to me how audio visualization can be turned on with the new version?
and how can I retrieve an annotation in a csv file ? TIA
@user-42b39f this feature is enabled by default and will show in the timeline when audio was recorded.
To get annotations in a csv file you need to 1) start Player, 2) load the recording, 3) enable the Annotation plugin, and 4) hit export (hotkey e)
@mpk it is not showing up, I should try perhaps to delete player_settings ?
You might need to scroll or resize the timeline view
@papr it is what I have done but I can't find it in pupil_positions.csv?
Is there a speaker icon visible in the right icon bar?
@user-42b39f Annotations are not listed in pupil_positions.csv but in annotations.csv, if I remember correctly
Oops, obvious, but I didn't see the scroll control of the timeline view, sorry! However, the timeline view is quite big now. Is there any way to select what should be present in the timeline view? And is there a control to zoom in on the envelope? Because if you unfortunately get a click in the recording, the automatic scaling is not really appropriate.
Timeline selection and zooming are not available yet.
ok perhaps in a future version ? It would be useful
have a good day
Yes, it is on our (long) todo list!
yes, but the new audio features are really of great value
Great to hear!
Indeed
@papr We would love to use the Annotation plugin to mark audio events, but for some reason we cannot hear the audio in Pupil Player (v1.5-12). We can see gaze on world with audio in VLC, but again, we would like to annotate with respect to audio as well as visual events. Any suggestions? Thanks in advance!
@user-90270c audio playback is only available in v1.6 and up. Please upgrade to the latest release (v1.7)
@wrp Many thanks!
@user-90270c you're welcome!
hello, my world camera is not working: I just get a grey interface, but the two eye cameras work correctly. How do I fix it? Thanks
@user-3742dc try to pull out the usb cable and plug it in again... does it help?
It doesn't work, I tried it many times
@user-3742dc did you try to restart with defaults? This can be done by clicking the corresponding button in the general settings.
I tried restarting with default settings too. I get this in the cmd line of Pupil:
world - [WARNING] launchables.world: Resetting all settings and restarting Capture.
world - [INFO] launchables.world: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
Clearing C:\Users\incia\pupil_capture_settings\user_settings_eye0...
Clearing C:\Users\incia\pupil_capture_settings\user_settings_eye1...
Clearing C:\Users\incia\pupil_capture_settings\user_settings_world...
world - [INFO] launchables.world: Application Version: 1.7.42
world - [INFO] launchables.world: System Info: User: incia, Platform: Windows, Machine: phyb1, Release: 10, Version: 10.0.16299
world - [INFO] launchables.world: Session setting are from a different version of this app. I will not use those.
......
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution (1280, 720)
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
I have another question about synchronization: can I synchronize the world camera with my webcam or another camera? Thanks for your help
Hello everyone, how do I calculate saccade protocols?
Please, can someone explain how to work with the fixation and saccade data?
I am using Pupil to track singers' gaze while watching a video conductor stimulus. I have found that calibration is rarely consistent throughout the 4 minute duration. What is the expectation for how long a single calibration should last? Should I plan to recalibrate every minute or so?
hey, just curious if when using the surface tracker -- is it possible to bound your gaze tracking to be only within the tracked surface? I have an experiment where the participant is looking at a monitor and I want to be able to determine only when they are looking at the screen. I have created the surface using the QR codes, but it doesn't look like it is bounding their gaze to just the surface
Hi guys, is there paper or blogpost describing 3D pupil detection? I wonder what algorithms are used there
Hi Guys
Can someone do a TeamViewer session with me to help me adjust the eye settings in Pupil Capture for optimal pupil detection?
Hi! I would also need some advice on how best to set up Pupil Capture for pupil detection as Pupil detection has been quite unstable under current settings. Is it possible to speak with someone using screenshare so that settings can be adjusted real time?
I've never had a problem with that, guys. Did you adjust the camera focus?
there is a step-by-step guide on how to adjust that in the documentation
Thanks @user-e38712 ! Yes, I did adjust the camera focus and made sure the video capture is as per the documentation (i.e., camera close enough to pupil, etc)
detection & mapping mode are set to 3d?
Yes
hmmm.. your pupil is detected or there is a problem just with mapping?
Hi @user-ef1c12 perhaps you could share a demo/sample dataset (recording) with data@pupil-labs.com and we can provide you with some concrete feedback. Please include the calibration sequence in the recording.
@wrp hello, good to see you here. Are you able to provide me with information what algorithms are used to 3D pupil detection in Pupil Labs?
@wrp ok I will do this now. Thank you!
Good afternoon. Please tell me, does anyone here work with the data obtained from the eye tracker? Can anybody help with interpreting it (fixations, etc.)? Help me please...
Hi, all! I received my Pupil Labs eye tracker yesterday (binocular + 3d), and I'm having a hard time with drivers on Windows 10
Both pupil cams seem to work, but none of the other cameras even seem to be detected by the PC
Does anybody have experience troubleshooting this? I've uninstalled, reinstalled, and rebooted more times than I can count on 3 different Windows 10 PC's
Same thing on Debian (buster). At least on Linux I can see the RealSense hardware in lsusb, although it doesn't appear in the pupil_capture UI. I installed pyrealsense (via git), but still see this in the pupil_capture output: "world - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend"
Is it necessary to calibrate the RealSense world camera using Intel's Camera Calibrator software? I can't get past the DS_INVALID_ARG error.
Hi. I'm having some trouble getting audio recording working. I'm using Pupil Capture 1.7.42 on a MacBook Pro (Sierra). My internal microphone is definitely switched on and functioning. I've activated the audio plugin and set the source to "Built-in Microphone" (and I don't get any errors when I do this). When I record, an audio.mp4 file gets produced, along with an audio_timestamps.npy file. However, if I try to open the audio.mp4 in QuickTime, the file plays but I don't hear any sound. When I play the recording in Pupil Player (also latest version), I also don't hear any sound. Do you have any suggestions how to get this working?
Thanks
@user-81072d we pre-calibrate the devices during QC; it should not be necessary to calibrate again.
@user-e7102b thanks for the report. Would you mind making an issue on github?
@user-e7102b could you please also send the recording to [email removed]
Hi I have a silly question about the pupil eye tracking in VR headstes. Are they documented to be safe for the eyes?
Hello everyone I am looking for the source code for "pupil capture" application. someone please share the link if there is anything out there. thanks
Thanks Papr, is this GUI ?
Please follow these instructions if you want to run from source: https://docs.pupil-labs.com/#developer-setup
Many thanks papr
@user-e7102b Have you tried VLC? I believe it does a better job.
Help. I'm trying to install pip3 install git+https://github.com/pupil-labs/PyAV on MacOS 10.13.4 and it fails with src/av/_core.c:637:10: fatal error: 'libavfilter/avfiltergraph.h' file not found #include "libavfilter/avfiltergraph.h"
I installed ffmpeg etc. like it is in the docs... the weird thing is, it did work a few weeks ago ...:( anybody?
@user-20c486 Hi, the VR add-on gets warm, and this is something some users find a bit uncomfortable. We have tested for IR safety and confirmed that it is safe for use.
@mpk Thanks a lot!
@mpk - that's good to know, thank you. Unfortunately, however, we're still unable to use the device. When I pick the R200 as the backend manager, the screen appears gray. Standard output shows this:
world - [INFO] camera_models: No user calibration found for camera Intel RealSense R200 at resolution (640, 480)
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
This is what the UI looks like.
@user-81072d Please select the Realsense Backend
And then the R200 from the menu
Sorry, if you have done this already
I have, yes
That's the screen which displays immediately after doing so
Any other suggestions, @papr?
Yes, I would suggest to try the pyrealsense examples and see if they work
Any of these: https://github.com/pupil-labs/pyrealsense/tree/master/examples
Ok, will give it a go. Thanks.
@papr - does the latest release of the RealSense SDK 2.0 (Build 2.11.0) support the R200 that came with my Pupil headset? It's not listed on https://github.com/IntelRealSense/librealsense/releases
None of the RealSense software (Viewer and Depth Quality Tool) seem to be able to detect my hardware, despite it showing up fine in the device manager
Hi all, i'm using pupil capture to track my gaze on a table. I can't seem to get the calibration right. what's the best way to calibrate looking down on the surface. I tried doing this with the manual calibration feature
@user-81072d no it does not!... Unfortunately
So would we need an older version of the SDK then?
The Pupil repositories point to the old versions already
hey, just curious if when using the surface tracker -- is it possible to bound your gaze tracking to be only within the tracked surface? I have an experiment where the participant is looking at a monitor and I want to be able to determine only when they are looking at the screen. I have created the surface using the QR codes, but it doesn't look like it is bounding their gaze to just the surface
@user-ecbbea no
But the surface tracker will mark which gaze is on the surface and which one is not
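For example, the surface export carries an on-surface flag per gaze datum (the filename `gaze_positions_on_surface_<name>.csv` and the `on_srf` column name are assumptions based on the export format; check your export folder), so keeping only on-screen gaze could look like:

```python
import csv

def gaze_on_surface(rows):
    """Keep only gaze rows whose on_srf flag is true (rows as dicts)."""
    return [r for r in rows if r.get("on_srf", "").strip().lower() == "true"]

# usage (filename assumed; adapt to your surface name):
# with open("gaze_positions_on_surface_Screen.csv", newline="") as f:
#     on_screen = gaze_on_surface(list(csv.DictReader(f)))
```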
Are gaze/fixation coordinates normalized to total screen resolution? E.g., calibration occurs on just one monitor, but if I have a triple monitor configuration, the center of the center monitor will be [0.5, 0.5] - right?
@user-81072d gaze is normalized to the world camera. Not to a specific screen. If you want to do that you need to setup surfaces.
@user-81072d the reason RealSense is not showing up is a Windows driver issue. Please remove the RealSense drivers, install system updates, and install the drivers again. No need to install any librealsense framework.
Thanks @papr and @mpk. I'm up and running now on one machine. The RealSense drivers have been a real headache in Windows
Hi everyone, how do we convert between the timestamps given in the data to UNIX timestamps?
@user-4fbb59 you will need to set Capture's clock to the Unix clock via Pupil Remote or the Time Sync protocol.
I see- is there a detailed tutorial on how to do this?
Also- are the units still in seconds?
All timestamps in Capture are floats in seconds.
Okay, I'll take a look at that
See the time sync protocol definition in the Pupil repository (linked above) or the Pupil helper remote control file.
When sending Pupil Service the current timestamp: do I have to send it on the same request socket, or does it not matter?
No it does not matter.
The Remote plugin does not differentiate incoming requests at all. The only important thing: each request gets exactly one response, and the client needs to receive the response; otherwise the socket will block.
i'm somewhat confused- do i have to write my own code to sync pupil's clock with UNIX?
Yes
Why do you want to use the Unix clock?
i'm trying to sync it with another utility (a pen) that uses unix time stamps
The reason why Capture does not use Unix timestamps: time.time() is not monotonic, and Unix time is not precise because it uses a lot of the available digits for the time passed since 1970.
Therefore we do not want to encourage users to use the Unix timestamp. You need to know what you are doing if you want to sync Capture to a different data source.
I see
What would you recommend if I was in need of time aligning the pupil with another data source (such as the pen?)
Line 42: pass the string representation of the current Unix timestamp instead of 0.0
okay- perfect! Thanks @papr I'll take a look into that
and this is over zmq i assume?
This should be precise enough for now
Yes, that is correct. Alternatively you could write a plugin that resets the clock directly in Capture. But I do not have example code for that
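Putting the pieces of this thread together, a minimal sketch of setting Capture's clock to Unix time over Pupil Remote could look like this. The "T <time>" command and the request/response pattern come from the discussion above; the default address tcp://127.0.0.1:50020 is an assumption, so check your Pupil Remote settings.

```python
import time

def time_sync_command(t=None):
    """Build the Pupil Remote command that sets Capture's clock to t."""
    if t is None:
        t = time.time()  # current Unix time
    return "T {}".format(t)

def sync_capture_clock(addr="tcp://127.0.0.1:50020"):
    # import kept local so the sketch parses without pyzmq installed
    import zmq
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(addr)  # Pupil Remote's assumed default port
    req.send_string(time_sync_command())
    return req.recv_string()  # each request gets exactly one response

# with Capture running and Pupil Remote enabled:
# print(sync_capture_clock())
```

After this, timestamps produced by Capture should be directly comparable to Unix timestamps from the pen device, subject to the precision caveats mentioned above.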
Ok, one more challenge for @papr, @mpk, and any other experts.
On one of my machines, 2 of the RealSense devices (RGB and DEPTH) show up as Cameras, and they appear to have a default Windows driver installed. The third device (Left-Right) appears as an imaging device with the Intel driver.
I believe the drivers are part of the RealSense Depth Camera Manager software, which I've installed and uninstalled numerous times. I've also tried some driver cleanup tools, but no matter what I do, I can't seem to make the RGB and DEPTH devices use the actual driver from Intel.
The devices show up and work fine on a different machine, so I'm really scratching my head
okay, thank you!
Question about 3D eye position detection: 1. What does the green circle mean?
Hey @user-bd2540 The green circle indicates the outline of the fitted 3d eye model
This looks like a very good fit in this case
Question 2: Thank you very much. For the 3D eye model, you have a sphere for the whole eyeball and one sphere for the cornea. Is this correct? How many infrared lights are needed, at minimum, for 3D eye position detection?
The current model does not model the cornea. The second/red circle shows the pupil. The 3d model is fitted using a glint-free method that is based on a series of pupil ellipses
Question 3: What does the blue wireframe mean, and what does the red wireframe outside mean?
We fit multiple models in parallel if new observations do not fit the old model anymore.
Blue is the current model, red are potential new candidates
This is the paper to the model fitting approach: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf
Question 5: How do you measure the maturity of the model as shown in the following picture? What do the purple points in the image mean?
Question 6: Why don't you use the glint-based 3D eye position detection method?
May I ask when the glint-based 3D eye detection method was replaced by the glint-free method? (https://pupil-labs.com/blog/2016-10/open-source-3d-pupil-detection-and-mapping-for-windows/) On this webpage you publish 3D eye position detection based on glints, so I thought that the current 3D eye position is detected using glints
To Q6: Our pipeline needs to work as well as possible outdoors. Glints are not very reliable outdoors. Therefore we chose the pupil-ellipse-based method.
This is a third-party implementation and is not part of our software
To Q5: This is kind of difficult to say. We have a model confidence based on how well new data fits the old model.
The purple points are older pupil positions
Thank you very much for your patient answer. Thanks
@papr
You are welcome. Let me know if you have any other questions.
How does it perform for users wearing glasses?
It works quite well as long as the glasses' frame does not occlude the eye cameras.
Although glasses often introduce reflections that affect the pupil detection. We recommend using contact lenses.
Thanks a lot.
ok got it: my issue with PyAV above is due to ffmpeg version 4. They deleted some things that PyAV 0.4 relies on (PyAV 0.4.1 will have resolved this issue, but isn't released yet). If you just brew install ffmpeg, the new version will be installed.
Thank you for the update
Greetings room. Looking to buy a Pupil headset. On the checkout page, I can't get the "country" drop down to appear in Firefox. Switching over to Chrome, that works, but when I submit the form (I intend to use a PO), the form just hangs and says it is "submitting" ad infinitum
@user-49ac90 sorry to hear that and thanks for your feedback! We will debug and fix this ASAP. In the meantime, please feel free to write your quote request for PO via email to sales[at]pupil-labs.com we will get back to you with a Quote for PO processing.
@mpk Thanks, I'll send that email
Hey! I'm trying to repair my Pupil installation for a few days now. The problem started when I updated OpenCV and/or ffmpeg via brew (I'm on macOS). But even after going back to the (assumed) previous versions of both, starting main.py won't work. Now, with the most recent versions (OpenCV 3.4.1 and ffmpeg 4.0), I get the following error: ld: library not found for -lboost_python3. When downgrading ffmpeg to 3.4.2, I get this error: ImportError: dlopen(/Users/patrick/Code/offis/pupil_labs_leap_plugin/leap_plugin_env/lib/python3.6/site-packages/cv2.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/ffmpeg/lib/libavcodec.58.dylib. Downgrading only OpenCV results in the following error: ImportError: dlopen(/Users/patrick/Code/offis/pupil_labs_leap_plugin/leap_plugin_env/lib/python3.6/site-packages/cv2.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/ilmbase/lib/libImath-2_2.12.dylib. Downgrading both results in the same error. Compiling OpenCV 3.2 (the version that you're supposed to install for Windows) didn't work either. I also tried simply symlinking some libs from the second error (libavcodec.58.dylib to libavcodec.57.dylib, for example), but this resulted in only more errors, and a seemingly endless chain of not-found libs.
Wait, something is wrong. Now that I upgraded both to the most current version I get the last error instead of the first... I'm sorry, but all this doesn't make any sense anymore. Fact is, no matter how often I reinstall everything (including the other things from the installation docs), it doesn't work. Does anybody have a clue what I could be doing wrong?
error: ld: library not found for -lboost_python3
indicates that you need to run brew install boost-python3
Library not loaded: /usr/local/opt/ffmpeg/lib/libavcodec.58.dylib
indicates that you need to install an older version of ffmpeg. I use version 3.4.2
Library not loaded: /usr/local/opt/ilmbase/lib/libImath-2_2.12.dylib.
I have not seen this error before. Which version of OpenCV have you installed?
Interestingly my cv2.cpython-36m-x86_64-linux-gnu.so does not point to a libImath-2_2.12.dylib or anything similar
This issue seems to be related to opencv 2.4.x. Can you use brew to look up which version of opencv was installed?
I have 3.3.1 installed
I have 3.4.2 installed
Reinstalling pyav doesn't seem to work
python3 -c "import cv2; print(cv2.__version__)"
Could you run this for me please?
I want to make sure that the python version points to the same version as you expect
3.4.1 is installed. The code doesn't work, I get this error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(/Users/patrick/Code/offis/pupil_labs_leap_plugin/leap_plugin_env/lib/python3.6/site-packages/cv2.cpython-36m-darwin.so, 2): Library not loaded: /usr/local/opt/ffmpeg/lib/libavcodec.58.dylib
Referenced from: /usr/local/Cellar/opencv/3.4.1_5/lib/libopencv_videoio.3.4.dylib
Reason: image not found
Ah, interesting. So it is not necessarily pyav's fault
Hello, we have been practising our calibrations and are seeing some variability in the accuracy of the eye tracking after the calibration. We are getting a good calibration window (in the corners of the frame) and pupil seems to be detected fine in the debug window when the participant looks up, down, left and right (and center). So we aren't sure why the eye tracking sometimes isn't accurate. One thing we want to know is: Is there a particular distance of the eye cam from the pupil that is optimal? Another question is: is there a particular angle of the camera to the eye that is optimal? And finally, can and should we be manually adjusting the green circle that encircles the eye? Thanks! Sara
can and should we be manually adjusting the green circle that encircles the eye?
This is not possible
Is there a particular distance of the eye cam from the pupil that is optimal?
Not that I know of.
Another question is: is there a particular angle of the camera to the eye that is optimal?
Position the eye cameras such that the 2d pupil detection is best. Did you try 2d calibration?
@user-06a050 the exact problem i had the last few days. You need to install the expected version of ilmbase (and you will see you need another version of openexr too). And remember to use brew switch
Or you wait until pyav 0.4.1 is available as a wheel
Oh nice! Which version of ilmbase would that be? Or how do I find that out?
I think 2_2.12 as it is in the error
Oh okay, so simple! Thanks, I will try that out!
Took me hours...
@papr @mpk maybe it would be a good idea to open a source-related channel; the mixture of threads in this channel is overwhelming
You are right. Which categories would you suggest?
Pupil Source // Installation // Hardware // Usage
Source in the sense of software functionality/how the software works under the hood?
For everyone working with the source code.
How it works under the hood is of main interest for all and fits in the main channel
Mmh, I will talk to @wrp about this. Thank you for your input on this. I think that this will be useful to everyone. We just need to figure out details on how to split the channels.
Of course
We are using the 3D calibration. Should we be using 2D calibration?
We can restructure/add channels. I will do this tomorrow.
What does using binocular vergence mean?
@user-bd2540 You are able (in principle) to tell if a subject focuses on something in 3D space if you look at the crossing gaze vectors of both eyes. It is different from pure eye accommodation because Pupil cannot track the cornea. It is also a little different from a fixation, which describes how the consciousness of a subject is directed. Hope this helps
If something very near to you appears in your line of sight, your eyes have to accommodate to focus on it
Two things are necessary: changing the refraction of your eye lens and moving the eyes to adjust the gaze
@user-29e10a Thanks. The thing that I don't understand is why eye accommodation is mentioned. Why should I take this into consideration? It seems that it doesn't influence the accuracy.
Depends on your kind of research, and it is only valid for binocular data. It is a "high level" eye movement and does not affect accuracy (unless you misinterpret your data)
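To make the vergence idea concrete, here is a small geometry sketch (plain Python, not Pupil API code): estimate the 3D point the eyes converge on as the midpoint of closest approach of the two gaze rays. The eye positions and gaze directions are placeholders you would take from binocular gaze data.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def vergence_point(p1, d1, p2, d2):
    """Midpoint of closest approach of two gaze rays p_i + t_i * d_i."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w0 = [x - y for x, y in zip(p1, p2)]
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no vergence point")
    # closest-approach parameters from minimizing |p1 + t1*d1 - p2 - t2*d2|^2
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]
    q2 = [p + t2 * v for p, v in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# toy example: eyes 2 units apart, both aimed at a target 5 units ahead
print(vergence_point([-1, 0, 0], [1, 0, 5], [1, 0, 0], [-1, 0, 5]))
```

Real gaze rays rarely intersect exactly, which is why the midpoint of the shortest segment between them is used as the estimate.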
Hi
I'm having problems trying to run Pupil from source
when I run python main.py I get: /usr/bin/ld: can't find -lboost_python-py36
it's the same issue as https://github.com/pupil-labs/pupil/issues/874 - it's closed, but there's no solution
does anyone know how to fix it?
I've already reported it with a note on the same issue, including the whole output
Hello, I will be happy for some help with sampling rate. When I record with the eye tracker connected to a laptop, with 120 Hz for the world camera and 200 Hz for the eye cameras, the frame rate I get in the combined video after exporting the recording from Pupil Player is about 70 (it differs every recording) - if I understand correctly, this corresponds to the "Index" column in the "Gaze_Positions" spreadsheet. However, when I record with the same values with Pupil Mobile, the frame rate I get is 20-30. My questions are: 1. Why is the overall frame rate lower than specified in general? 2. Why is the frame rate even lower when recorded with Pupil Mobile with the same settings? 3. If I want to filter the gaze position data to find angular eye movement, should I treat the capture frequency as the eye camera sampling rate x 2 - if I understood correctly, this is the total number of gaze points I receive? Thanks in advance for your help!
@user-8944cb recorded frame rate can be lower than expected if the system was experiencing high load during the recording. The index column just indicates which gaze datum belongs to which world frame. To the third question: I would recommend to use the recorded timestamps to calculate time instead of assuming a fixed frame rate.
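For instance, the effective sampling rate implied by the recorded timestamps (rather than the nominal camera rate) can be computed like this:

```python
def effective_rate(timestamps):
    """Mean sampling rate (Hz) implied by a sorted list of timestamps in seconds."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    duration = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / duration

# toy example: 4 samples spread over 0.03 s
print(effective_rate([0.0, 0.01, 0.021, 0.03]))  # ~100 Hz
```

Using the timestamps also tolerates dropped frames, since gaps simply lower the computed rate instead of silently shifting your time axis.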
Hi, What's the reason for this error?
File "C:\work\PupilLab\pupil\pupil_src\launchables\world.py", line 96, in world
    import glfw
File "C:\work\PupilLab\pupil\pupil_src\shared_modules\glfw.py", line 78, in <module>
    raise RuntimeError('GLFW library not found')
RuntimeError: GLFW library not found
In the tutorial, we are only told to Copy glew32.dll to pupil_external
Question 2: How do I run Pupil from source? Just run main.py?
Be aware that there are two libs: GLEW and GLFW. These are two separate steps!
You run python3 main.py from within the pupil_src folder to start Capture, or python3 main.py player to start Player
Yes, I have both libs
Still doesn't work
Sorry, I do not have any experience with the dev setup on Windows. The only thing I know is that we highly recommend to use macOS/Ubuntu for your dev setup.
In the tutorial on the website, we are only told to copy the GLFW file. I think something is missing; of course Python can't import GLFW then. Can you update the information on the website?
Dynamic libs are required to be stored in pupil\pupil_external so that you do not have to add further modifications to your system PATH.
Nonetheless glfw.py calls find_library('glfw3'). Not sure if this is intended. Maybe the pupil_external path needs to be added to the system PATH?
But as I said, I have no idea about the windows dev setup.
Question 1: While running main.py I get the following error: Can't import pupil detectors
Aaaah, my bad. There is a hint that you need to execute run_capture.bat instead of python main.py
This will probably fix the other issues as well
The same error happens
See the box with the question mark and build the pupil detectors explicitly https://docs.pupil-labs.com/#modify-pupil-detectors-setup-py
I think that's the problem. It seems that the pupil detector is not built correctly. But what's the reason for this?
I get output from the compiler when I run this command. Not sure why you do not get any
This is the content in build, is this correct? It seems that build doesn't run setup.py
this is build.py
this is different from what would run by calling python setup.py build
Maybe switch https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detectors/setup.py#L120 from quiet=True to quiet=False
It doesn't work. This is really weird.
When it says "running build", what is the code doing?
This is cython/dist tools territory and not related to our software project
When we run python setup.py build, what is the code doing?
This is the content of setup.py
@papr I just added a pull request with the coarse pupil detection in C++ ... hope it helps
Thanks. What does the pull request with the coarse pupil detection mean?
@user-b91aa6 never mind, has nothing to do with your issue
@user-29e10a software-dev channel has been created for software development related discussions
What's the possible reason that I can't do python setup.py build in pupil detectors?
I see one image from other people's messages. It seems that my extension is not built. Why might this happen?
@user-b91aa6 let's migrate this to software-dev
Good afternoon. Tell me, please, how to calculate saccades in pupil?
(Heatmap) How do I define the surface? I see the A button on the left and "add surface" under the plus in the Tracker. Did I miss anything? How do I define the surface? Or do I need to do it in Pupil Capture already? Thanks
@user-d9bb5a currently Pupil software does not have a saccade classifier built in. @papr can you provide @user-d9bb5a with further information on this topic.
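For reference, saccade classification is often done offline with a simple velocity threshold (an I-VT style classifier) applied to the recorded gaze. This is a hedged sketch, not Pupil functionality; the 300 deg/s default and the assumption that gaze is already expressed in degrees of visual angle are illustrative choices:

```python
import numpy as np

def saccade_samples(t, x_deg, y_deg, vel_thresh=300.0):
    """Boolean mask marking samples whose angular velocity (deg/s)
    exceeds vel_thresh -- a basic I-VT style classifier.
    t: timestamps in seconds; x_deg, y_deg: gaze angles in degrees."""
    t = np.asarray(t, dtype=float)
    dx = np.diff(np.asarray(x_deg, dtype=float))
    dy = np.diff(np.asarray(y_deg, dtype=float))
    vel = np.hypot(dx, dy) / np.diff(t)  # deg/s between samples
    mask = np.zeros_like(t, dtype=bool)
    mask[1:] = vel > vel_thresh  # attribute each velocity to the later sample
    return mask
```

Consecutive True samples would then be merged into saccade events; real pipelines also smooth the velocity trace first to suppress noise.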
@user-b571eb You can define surfaces in Pupil Capture or Pupil Player. First turn on the Surface Tracker plugin in Pupil Capture, or the Offline Surface Tracker in Pupil Player. Next, ensure that the markers that you want to use to define your surface are visible and detected - you will see detected markers highlighted in the World window in Pupil Capture or the main Pupil Player window. Press a on your keyboard or the A button in the GUI to add a surface. Once you have added a surface, you can then name that surface in the Surface Tracker plugin GUI and set its x and y dimensions.
If markers are not detected, please ensure the following:
1. That markers have enough of a white border around them
2. min marker perimeter in the Surface Tracker plugin is set such that your marker is detected in the scene. If you have small markers you will need to reduce the default min marker perimeter setting
Again, you can define surfaces in either Pupil Capture or Pupil Player.
Now on to heatmaps in Pupil Player. Once you have defined a surface in Pupil Player (or using surfaces defined already in Pupil Capture), ensure that your surface has an X size and Y size defined. Size can be real-world size - the sizes will determine the size of the png image output when you export in Pupil Player and the number of bins for the histogram.
I hope these notes are helpful
Please refer to https://docs.pupil-labs.com for more information.
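On the earlier question about what coordinates surface tracking returns: surface-mapped gaze is reported in coordinates normalized to the surface (0-1 along each axis), not absolute distance. A small conversion sketch; the bottom-left origin convention is my assumption from memory, so double-check it against your export before relying on it:

```python
def surface_norm_to_px(norm_x, norm_y, width_px, height_px):
    """Map surface-normalized gaze (0..1 within the surface) to image
    pixel coordinates. Assumes the surface origin is bottom-left while
    image coordinates put (0, 0) at the top-left, hence the y flip."""
    return norm_x * width_px, height_px - norm_y * height_px
```

With real-world surface dimensions set in the plugin, the same normalized values can instead be multiplied by the physical width/height to get positions in, e.g., centimeters on the surface.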
2018-05-18 13:27:05,857 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 478, in player
  File "site-packages\OpenGL\error.py", line 232, in glCheckError
OpenGL.error.GLError: GLError(
  err = 1281,
  description = b'ung\xfcltiger Wert',
  baseOperation = glViewport,
  cArguments = (0, 0, 1280, 720)
)
2018-05-18 13:27:05,857 - player - [INFO] launchables.player: Process shutting down.
what does this mean? WIN10 Issue?
@papr Thanks for clarifying! One last thing about the sampling frequency and the differences between the exported "pupil_positions" and "gaze_positions" spreadsheets. I noticed that the number of gaze positions differs from the number of pupil positions for a given second. Additionally, the timestamps within the same second are also different. My question is what I need to do, or how I can match the timestamps, to calculate a certain measure from the pupil position data (for example pupil diameter) and a time-synced measure from the gaze positions information? Thank you!
Howdy,
I found this closed thread: #1097
I tried minimizing the Pupil Player window; upon restore the error appears. Might not be as unsystematic as I first thought. Okay. So, a potential workaround is not to minimize the window. But it doesn't seem to be fixed - at least not in the latest release 1.7.42.
is it fixed in the source code?
Regarding the notify procedure to send pupil notify.eye_process.should_start.0
m.Append ("notify." + data ["subject"]);
m.Append (MessagePackSerializer.Serialize<Dictionary<string,object>> (data));
is the first part of the message always not serialized ?
I'm looking at buying another headset. Can anyone tell me which sensor is being used for the high-speed world cam?
Does it matter on which subport I send the calibration handshaking data? If I send all the calibration data on one subport and then read the data on another, would that influence anything?
@user-f1eba3 yes. The first part is just the topic, it's raw; the second part is the payload, msgpack serialised.
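The same two-frame layout sketched in Python, assuming the third-party `msgpack` package; the payload keys in the usage line are illustrative:

```python
import msgpack

def serialize_notification(payload):
    """Build the two ZMQ frames for a Pupil notification:
    frame 1 is the raw topic string, frame 2 the msgpack payload."""
    topic = "notify." + payload["subject"]
    return topic.encode("utf-8"), msgpack.packb(payload, use_bin_type=True)

# Usage with a connected zmq socket (not shown):
# socket.send_multipart(serialize_notification(
#     {"subject": "eye_process.should_start.0", "eye_id": 0}))
```

Keeping the topic as a raw string lets subscribers filter with plain ZMQ prefix subscriptions without deserializing the payload.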
Does it matter on which subport I send the calibration handshaking data?
@user-8944cb sometimes two pupil positions are used to generate a binocular gaze point. Use the timestamps in the base_data column to identify which pupil positions were used for each gaze point
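If you are working from the CSV export, base_data is a whitespace-separated list that looks like `<pupil_timestamp>-<eye_id> ...` (that format is from memory - verify against your own export). A parsing sketch:

```python
def parse_base_data(base_data):
    """Parse a gaze export's base_data field, e.g.
    '2075.52-0 2075.53-1' -> [(2075.52, 0), (2075.53, 1)],
    i.e. (pupil timestamp, eye id) for each contributing pupil datum."""
    pairs = []
    for token in base_data.split():
        ts, eye = token.rsplit("-", 1)  # rsplit: timestamps may contain '-'? be safe
        pairs.append((float(ts), int(eye)))
    return pairs
```

The returned timestamps can then be matched against the timestamp column of pupil_positions to join, e.g., pupil diameter onto each gaze point.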
Hey guys, has anyone spent the time to open up the Vive and put a camera behind the optics? My lab has been considering giving this a go this summer and it would be a shame not to learn from previous experiences
@papr Was your decision not to put the cameras behind the optics due to manpower? Or a desire to keep it as a simple snap-in, non-invasive installation? Or is there some other reason that I haven't considered yet and should be aware of?
@user-8779ef we tried putting it inside. It's not easy to film through the fresnel lens and the modifications of the hmd are non reversible.
Hello, I am trying to implement surface tracking on a driving simulator, and after trying a few times resizing the markers used to mark surfaces, I haven't been able to get it to register the markers. I've been doing it in OS X Preview since I do not have much experience in Python coding. Our set-up is a vehicle cab with monitors surrounding the cab to provide a 360-degree field of view. The monitor directly in front of the cab is about 4.5-5 feet from where the drivers will be sitting, depending on height. I was hoping you guys might have some recommendations.
@papr Thanks for answering. I have an additional quick question about the reduced frame rate I am getting compared with the selected capture rate - What factors might contribute to the system experiencing high load during the recording, resulting in the reduced frame rate, and is there something I can do to improve this?
@user-fa3706 Try to reduce the min marker perimeter value in the settings to recognize smaller markers.
@user-8944cb other programs running, your computer being simply too slow, etc... What are your hardware specs?
@mpk Ok, thanks. Well then, full steam ahead, it seems. Our hope is to design a modified housing for the lenses, cameras, and hotplate that can be put in the helmet. Yes, this will be irreversible
Hi, does anyone know if it's possible to add a hardware trigger to synchronize frame acquisition between cameras? The docs suggest it's not possible, but... maybe it can be done?
Any updates on this? We are trying to assess whether we can use pupil core to measure spike-triggered pupil dynamics and would need the ability to either hardware trigger the cameras, or alternatively get a shutter exposure strobe signal output to send into our acquisition board
Hello guys, I seem to have a problem, when I try to edit a surface the window freezes and the point does not drag to the position I want it to due to this freeze. I tried with different versions of the Pupil Capture application (1.6 and 1.7) but still no luck. I tried to open the application through the source code as well (1.6 and 1.7) but again the window freezes. My current setup is the Macbook Pro Retina running MacOS High Sierra 10.13.4 with 16GB Ram and 2.2GHz i7 processor. I tried running the applications on other PCs as well (Macbook Pro Retina 10.13.2 and a HP Windows All-in-one PC) with the same versions of applications and the surface editing works perfectly which makes me think that it's my Macbook that has some problem. Can it be a storage problem or something else? Thanks in advance for any help.
@papr @wrp Hi All. We are furiously processing data for a conference next week. We've successfully processed multiple participants, but the last 3 are causing problems. We first load up multiple plugins. Everything processes fine. Then we try to set the minimum fixation duration from 300 to 100. Pupil Player starts to reprocess fixations, but halfway through, it halts and then the software crashes. Can you help?
Hey
is the position.z = PupilTools.CalibrationType.vectorDepthRadius[0]; in the 2D calibration mode just for specifying something on the immediate plane?
@user-78dc8f can you share the logfile from Player after the crash?
@mpk Hi, I have a codec problem with Pupil Player on a 10-year-old Intel Core 2 Duo Windows 10 64-bit machine. This is how it looks if I run it from CMD. Might it be different with a different OS? I was recording the folder on my Android mobile, which sometimes runs out of storage. Could that end up in buggy MPEG files that are read by ffmpeg on OSX but not on Windows 10?
player - [INFO] launchables.player: Process shutting down.
C:\Program Files\pupil_player_windows_x64_v1.7-42-7ce62c8>pupil_player.exe
. . .
player - [ERROR] player_methods: No valid dir supplied (pupil_player.exe)
player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
player - [INFO] launchables.player: Starting new session with 'C:\Users\Admin\Desktop\20180408143738803'
. . .
player - [INFO] launchables.player: Session setting are a different version of this app. I will not use those.
player - [ERROR] libav.mov,mp4,m4a,3gp,3g2,mj2: moov atom not found
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables\player.py", line 393, in player
. . .
  File "shared_modules\audio_playback.py", line 70, in init
. . .
  File "av\container\core.pyx", line 182, in av.container.core.ContainerProxy.err_check
  File "av\utils.pyx", line 103, in av.utils.err_check
av.AVError: [Errno 1094995529] Invalid data found when processing input: 'C:\Users\Admin\Desktop\20180408143738803\audio.mp4' (libav.mov,mp4,m4a,3gp,3g2,mj2: moov atom not found)
@papr Looking to buy a PupilLabs headset. I came here probably a week (?) ago to report I was having difficulties with the web checkout form. It was suggested I email [email removed] I did that. I have not received a response. It shouldn't be this hard to give you my money
@mpk how do I do that?
In the Pupil Player settings dir there is the logfile. Right after the crash it will contain the crash report
@mpk player.log?
I think so
@mpk can I attach that here? or send by email?
@mpk there you go...
I'll look at this tomorrow. I'm not in the office now.
@mpk ok. thx. Zip me an email with any observations: [email removed]
Timestamp question here. I am looking to see the computer clock timestamp of the first frame of my videos to as many decimal points of a second as possible. Looking in the info.csv file there are 3 different start times: "Start Time", "Start Time (System)" and "Start Time (Synced)". What are the units of System and Synced, and is there any way to calculate/see when the first frame was recorded using this information? Thanks.
@user-bfecc7 units are always seconds. Start Time (System) is Unix epoch. Start Time (Synced) is the Pupil internal clock and the one that we use for timestamping.
if you take the diff of both and add this to the timestamps you have them in Unix epoch. (Take note that you may lose decimals when doing this without changing type.)
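That diff-and-add step as a tiny helper, assuming all three values are in seconds as stated above:

```python
def to_unix_epoch(pupil_ts, start_time_system, start_time_synced):
    """Shift a Pupil-clock timestamp into Unix epoch seconds using the
    two start times from info.csv."""
    offset = float(start_time_system) - float(start_time_synced)
    return float(pupil_ts) + offset
```

As noted above, do this arithmetic in 64-bit floats (as here) to avoid losing sub-second decimals; the Unix epoch values are large enough that 32-bit floats would truncate them.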
Hi, does it make sense to do offline pupil detection but use gaze from the recording? My guess would be that the gaze points in the video output won't be affected by the offline pupil detection, correct? Vice versa: if I use pupil from recording and start an offline gaze calibration, gaze points will be computed based on the recorded pupil data. So that workflow would make sense, correct?
@mpk Thank you very much. Follow-up question: I use Pupil Mobile and stream it over wifi to my computer where I start the recording and save them directly to my computer. Would the Start Time (Synced) be the computer's internal clock at time of recording or the phone's internal clock?
@user-bfecc7 if you turn on pupil time sync (just start the plugin) Pupil mobile will sync with Pupil Capture and you can consider Pupil Capture clock to be used everywhere.
SOLVED my crashing Pupil Player:
File "av\container\core.pyx", line 182, in av.container.core.ContainerProxy.err_check
File "av\utils.pyx", line 103, in av.utils.err_check
av.AVError: [Errno 1094995529] Invalid data found when processing input: 'C:\Users\Admin\Desktop\20180408143738803\audio.mp4' (libav.mov,mp4,m4a,3gp,3g2,mj2: moov atom not found)
SOLVED it by: running it from CMD as Admin, giving write permissions to the pupil_player application folder, and granting write permissions to the project folder
After that, I got another ERROR:
Traceback (most recent call last):
  File "launchables\player.py", line 393, in player
  File "shared_modules\plugin.py", line 294, in init
  File "shared_modules\plugin.py", line 321, in add
  File "shared_modules\audio_playback.py", line 131, in init
  File "site-packages\pyaudio.py", line 750, in open
  File "site-packages\pyaudio.py", line 441, in init
OSError: [Errno -9996] Invalid output device (no default output device)
player - [INFO] launchables.player: Process shutting down.
SOLVED by connecting a HEADPHONE to the DELL Windows 10 Core 2 Duo
@mpk And the Pupil Capture gets its clock from the computer the application is running on. Great, thank you very much.
@here As per a couple of postings here, can anyone official with PupilLabs help me purchase a headset??? The web form is broken and the sales team hasn't responded to my email!
Hi @user-c1458d apologies for the delay in response. Out of curiosity what browser were you using to submit the web form?
We try to be quick with responses - please send us another email or let me know your email address via DM so I can sort this out for you
I can't see the "Country" drop down box in Firefox
I can complete that field using Chrome, but it hangs on form submission
Ok, thanks for the feedback. We will run some more tests and try to recreate this behavior.
Hello, I am looking for a way to synchronize Pupil Capture and a motion capture system (Qualisys Miqus), and to get information on where people are looking. If I understand correctly, there is a difference between triggering them to start recording at the same time and synchronizing them. I found the middleman script by @user-e7102b and @user-dfeeb9, and was wondering if this would be a good solution to trigger the recording of Pupil Capture? Would triggering them at the same time enable us to get information on where people are looking in the motion capture coordinate system in some way? Will greatly appreciate any tips on how I should approach this... Thank you!
hi @user-8944cb. I would like to ask about the temporal sensitivity of your motion capture system and data
using the middleman you will likely end up with some small milliseconds of delay
the best solution would be to use some kind of resynchronisation or time correction like Pupil has natively. we haven't implemented it in our middleman though.
@user-dfeeb9 thanks for the fast reply! The motion capture can record at a capture rate of up to 640 Hz. Is there any way to account for the delay, meaning will it be constant, or will it increase over time? Where can I find information regarding the time correction? Thank you!
It's been a while since I looked; there are docs in the pupil GitHub. I'll try to find them in a minute. A 640 Hz rate is certainly not something you want to be using our raw middleman for; you may end up needing to build a tool fit for purpose. The delays are not consistent and vary with context, but they don't get worse over time too much. I haven't tested the middleman as extensively as I would have liked to, but in 20-minute experiments they consistently range from 0-40ms on average.
@user-8944cb please try taking a look at these, https://github.com/pupil-labs/pupil-helpers/tree/master/network_time_sync
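For a rough software-only alternative to the native time sync linked above, the usual trick is to estimate the clock offset while compensating for half the request round-trip. A hedged sketch; `remote_time_fn` would wrap whatever request returns the remote clock (e.g. a Pupil Remote time query), which is an assumption about your setup:

```python
import time

def clock_offset(remote_time_fn, local_time_fn=time.monotonic):
    """Estimate (remote clock - local clock) in seconds, attributing the
    remote reading to the midpoint of the request round-trip."""
    t0 = local_time_fn()
    remote = remote_time_fn()
    t1 = local_time_fn()
    midpoint = (t0 + t1) / 2.0
    return remote - midpoint
```

Repeating the measurement and keeping the sample with the shortest round-trip (t1 - t0) usually tightens the estimate considerably.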
@user-dfeeb9 Thank you! I will read it.
Hey all. I'm trying to get pupil to run on a Raspberry Pi 3 B+, but I don't have much experience at all with c++. How would I go about building the project from the git repo for the Pi?
Hi @user-9d7bc8 you will need to look through instructions on building from source here https://docs.pupil-labs.com/#linux-dependencies - You do not need to have experience with C++, but do need to have experience in setting up dependencies and being familiar with developing in linux. Unfortunately I will not be able to give you feedback/tips for RPI - but hopefully the notes in the linux dependencies dev notes will help. Additionally I think this conversation might be best migrated to software-dev
Awesome, thanks. If I have any more questions, I'll ask there
Thanks @user-9d7bc8
Hi, if I use the h264 encoder to record eye videos, which ffmpeg quality preset is used? I'm asking so I can convert the non-h264 videos to the same size and quality as if they had been recorded in h264 in the first place.
@mpk Any reason there has been no attempt to integrate the 200 Hz cameras into the Vive?
We are working on it :-)
Will be an addon too.
It is a bit tricky with the lens
Ah, OK. Great. Is there any interest / mutual benefit in working with Jeff Pelz and me on our summer project to move the camera behind the lens?
Let's discuss this here. I'm not in the office right now.
Sounds good! ...is the Unity integration totally ready for the 200 Hz swap-out, or will additional work need to be done on the software?
No software changes necessary
Beautiful.
experienced this now multiple times: what to do if one eye has a "constant shadow" (either everything is very light or everything is very dark)? Recorded with Pupil Mobile. Changing camera settings in the preview didn't help, nor did resetting the app or changing the USB cable. It doesn't seem to be lasting, but occurs from time to time. Opened an issue on GitHub: https://github.com/pupil-labs/pupil/issues/1200
Hi guys, so I'm trying to get Pupil Capture to recognize a camera and it doesn't work with any: not my integrated webcam, not a webcam by Logitech, not 3 different UVC OEM cameras (Raspberry, etc.) ranging from 30 fps to 120 fps, while all of these cameras work fine otherwise. I used Zadig on all of them to change their drivers to "libusbK (v3.0.7.0)" as the docs mention, but nothing. What might be the issue?
@user-c7a20e this is a driver setup issue. Can you use Linux or Mac? Fewer issues on those platforms...
Afraid not...
What do you mean though? If you mean using Linux only for updating camera drivers and then using the cameras in a Windows environment, that's possible.
No, he meant if you can use Linux/Mac to run the Pupil applications. You cannot use Linux to install windows drivers.
Well, I can't. Does this also apply to the python SDK?
You mean running from source when you say SDK? The bundled application uses the same drivers as the software that runs from source. As mpk said, this is a driver setup issue and not related to our application. Did you verify in the Device Manager that the cameras are listed as expected?
How should they be listed? It's listed as "usb camera"
They should be listed as libusbK devices
yup, they are, named "Usb Camera"s
This means that the manual driver install procedure was not successful yet
Beware that you will not be able to use your camera in a different program, e.g. Skype, as long as you do not roll back the drivers to default
Sure. This is the error: "EYE0: The selected camera is already in use or blocked"
Yes, as long as the cameras are not listed in the libusbK category, Capture will not recognize them
they are
Ah my bad, I misunderstood.
Then please open Capture, select the UVC Manager icon on the right of the eye process window, and select the cameras. There should be cameras listed, correct?
yes, that's the error I get when I try to select my camera from there, in "Activate Source"
Mmh, weird. This is unexpected. I will ask my colleague about it tomorrow. Maybe he has an idea what the issue might be.
But generally we recommend not to use custom cameras on Windows. The whole driver situation is very unstable and we only support automated driver installation for our supported cameras.
Mac and Linux do not have such problems.
okay, I'll wait
Hey! Due to other reasons, we use an old Pupil Labs version (0.9) (not possible to upgrade, sorry) on a Dell Latitude laptop. Unfortunately, during recording, every ~1.5 minutes (+- a minute) we have a world-cam-only frame drop of ~500-1000 ms (eye cameras seem to work fine). Any ideas?
@user-af87c8 Hey there! What plugins do you use? I guess that you are using a custom plugin as well?
yes, we send a parallel port trigger every 2s
@papr also hey
A world-cam-only frame drop indicates a blocking call in the world event loop. This is very likely related to your plugin then
Ah wait, minutes, not seconds
and sometimes for 500, sometimes for 1000. Will check the plugin. Do I get it right: plugins are not started in independent processes?
No, plugins are run within the world loop. This means that if your plugin blocks, the UI and everything within the world window will freeze as well.
If you have any blocking calls you will have to move your code to a background thread
Possible other reasons are system-dependent, e.g. if your OS pages memory
ok. but I would need to manage the background thread?
I have an idea of what could block the loop, and also block it in 500ms chunks
Manage the thread in which sense?
start it, stop it, call it from time to time
I'm more wondering if I have to do it myself, or if there is an ecosystem already there
I would recommend the following: have a thread started with a function that has its own loop and runs your serial communication. Then add a mechanism (e.g. a list for messages) that can be used by recent_events to read/write from/to the thread
Can the thread asynchronously read data from pupil? Or does it need to wait until world is in the right state?
I did find one thing that could give rise to the issue in the trigger plugin somebody wrote in our group - so I will try fixing that first. The external thread makes a lot of sense as well
If I remember correctly, you were using a thread-based approach before. I recommended switching to the recent_events version assuming that you would not have any blocking calls. If you can remove the blocking call, you are fine with the current implementation. Else you will have to go for the thread approach. In this case, I would recommend a message-based approach. Have two queues (simple list objects as plugin attributes), one for reading and one for writing from/to the background thread. Put any message objects into them that you need to communicate to the background thread.
You can access your plugin's attributes (incl. g_pool!) from the background thread. Threads are not really concurrent in Python. Therefore it should be rather safe to access these attributes directly.
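A minimal sketch of that pattern: a background thread drains a message list so slow I/O never stalls the world loop. The `SerialWorker` name and the `handle_message` callback are illustrative, not Pupil API; a plugin's recent_events would just append to `outbox`:

```python
import collections
import threading
import time

class SerialWorker:
    """Background worker: the main loop appends messages to `outbox`;
    the thread drains them and performs the (possibly blocking) I/O."""

    def __init__(self, handle_message):
        self.outbox = collections.deque()  # deque append/popleft are thread-safe
        self._handle = handle_message      # e.g. a slow serial write
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # keep draining until asked to stop AND the outbox is empty
        while not self._stop.is_set() or self.outbox:
            try:
                msg = self.outbox.popleft()
            except IndexError:
                time.sleep(0.001)  # idle briefly; the world loop is unaffected
                continue
            self._handle(msg)

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Because stop() joins only after the outbox is drained, pending trigger messages are not silently dropped on shutdown.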
ah cool, so getting self.g_pool.capture._recent_frame.timestamp (if it exists) should not be a problem
If you run into any concurrency problems I recommend using https://docs.python.org/3/library/threading.html
Correct
mh. Ok. Not sure I want to rewrite (or have the time ;-)) but we'll see if what I did already helps
So I get it right: if in a plugin I run time.sleep(0.01), I will have no data for 10ms. Or is there a buffer for the world cam?
@user-af87c8 you are correct. There is no such buffer since the world loop does not move but is being blocked
@user-af87c8 if I remember correctly, you had to add sleeps in order for the serial message to go through
yes exactly. Ok, I will need to check this. With 60Hz, this would mean I would drop 1 or 2 frames - not perfect but also not terrible. But I fear the trigger plugin might not be the problem. 80% of the way through the function we write out how long the function took, and it's <1ms in all cases (and never 500/1000ms)
after sending a notify_all, there is no need to sleep - correct?
@user-af87c8 correct
ok. Looking at the world timestamps, I'm finding my trigger plugin introduces a constant 40ms delay (which it should be; looking at np.diff(timestamps)). But the plugin is clearly not the cause of the delays
Ok, I checked another recording (new Pupil Labs 1.6.x version binary, USB-C clip, different Pupil Labs eye tracker). Same phenomenon. We employ a completely different plugin there (zmq) and the commonalities (besides being plugins) are a call to self.g_pool.capture._recent_frame.timestamp and notify_all
Many eye-frames on the same world-frame
besides, plugin is uncorrelated to delay onset
and confirmed in another experiment (but same setup as the pupillabs 1.6 binary USB-C one). We recording with 120Hz binocular and 60Hz worldcam
We use USB3.0 throughout up until the clip
@user-8fe915 it is expected to get multiple Pupil positions for a single world frame if you had frame drops
exactly, just wanted to add a visual
hello everyone! I have just updated the Pupil software to the latest version (1.7-42), but now my hmd calibration routine (based on the one in hmd-eyes) is broken. Pupil Capture is bailing out with
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 445, in world
File "shared_modules\accuracy_visualizer.py", line 133, in on_notify
File "shared_modules\accuracy_visualizer.py", line 147, in recalculate
File "shared_modules\accuracy_visualizer.py", line 187, in calc_acc_prec_errlines
IndexError: boolean index did not match indexed array along dimension 0; dimension is 1340 but corresponding boolean dimension is 1675
I checked recent updates to hmd-eyes, but couldn't find any relevant changes. Any ideas what could be the problem?
here's the full log from pupil capture: https://gist.github.com/skalarproduktraum/441382e90c26ba80874564901e79c2dd
Hello, I am trying to implement eye-tracking in my HoloLens app, but when I do seemingly the same operations as in the example given, it does the eye-tracking but does not compensate for any head movement. Any idea what I could have forgotten?
Hello, I have a question about accuracy in 2D mode - with respect to what point is the accuracy calculated? In other words, how is the error in degrees of visual field calculated if there is only X,Y coordinate information? Thank you!
Dumb question: for usability testing/research of websites on desktop computers - is one camera sufficient? Like the 2D approach? Or do I need two cameras for pupil tracking? Thanks for the help.
I didn't try with only one camera, but the double one is really very precise and it might be overkill for that kind of ergonomics research
@user-8944cb we use the camera intrinsics to deproject the 2d gaze and reference positions into 3d vectors within the world camera space. Afterwards we can calculate the pair-wise cosine distance, resulting in the angular error. See https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/accuracy_visualizer.py#L157-L211 for details
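The cosine-distance step boils down to something like this generic sketch (not the linked implementation itself): normalize both direction vectors, take the dot product, and convert the enclosed angle to degrees.

```python
import numpy as np

def angular_error_deg(v1, v2):
    """Pairwise angular error in degrees between two sets of 3D
    direction vectors (shape (N, 3)), via the cosine of the angle."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    v1 = v1 / np.linalg.norm(v1, axis=-1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=-1, keepdims=True)
    cos = np.clip(np.sum(v1 * v2, axis=-1), -1.0, 1.0)  # guard arccos domain
    return np.degrees(np.arccos(cos))
```

The clip guards against floating-point values fractionally outside [-1, 1], which would otherwise make arccos return NaN.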
@user-f68ceb The problem with the monocular headset is that it does not cover all eye positions, i.e. if the eye camera is mounted on the right side and the subject looks to the left, it will be difficult to detect the pupil from this extreme angle. Looking straight ahead should be covered well enough though. 2D is more precise than 3D but is more prone to slippage. You should be fine with 2D detection/mapping if your subject only looks at a screen.
any updates on my libusbK issue? ("EYE0: The selected camera is already in use or blocked")
@user-c7a20e You run the bundle and not from source, correct?
yeah
by source you mean git repo or running a python script in the folder?
Correct
Under which names are the cameras listed in Capture?
"USB Camera"
And there is only one? Which camera is active in the world window?
there's also the notebook's webcam, labelled as "unknown"
there is no camera in the world window. That said, I've also tried using that camera for the world camera but got the same error
I consulted my colleague. His guess is an implicit incompatibility with drivers/pyuvc/uvc. Please understand that the driver situation on Windows is very tricky and that we cannot support hardware that we do not have access to. I am confident that you would not have this issue on mac/linux. Therefore my recommendation is to try these operating systems. You can easily install Ubuntu on a USB stick, boot from it, and test the Linux Capture bundle on the live USB stick.
That's not a practical solution. I can ask the manufacturer to provide a different firmware/driver with the UVC camera. Will that work, and what driver should I ask him to use? libusbK?
By the way, this has been the case with every single camera I've tried. Unless nobody got this working on Windows, I think the issue may be pinned down
@user-c7a20e our video uvc backend is pretty fine-tuned and will not work with all uvc-compatible hardware. It's all open source; feel free to build it from source and mod it to make it work with your HW. Please understand that it's not possible for us to debug and support without having access to this HW.
I can get them to send you one as well. The main thing that got me interested in this OEM camera is that it can do 330 fps at QVGA. I wanted to study how much of an improvement that could provide to saccade tracking
(but again I cannot get any camera to work at all)
@user-c7a20e as a hint, your camera needs to support mjpeg compression or you need to modify the uvc backend to support other video modes.
(yeah its mjpeg and usb2)
then it should work and you should try linux.
Dunno what to say. The ones they sent me are configured to run at 120 fps VGA, mjpeg stream... (but can also run at 330 fps QVGA)
For everyone that followed @user-af87c8 's discussion in regard to frame drops during recordings: https://github.com/pupil-labs/pupil/issues/1204
Hi, this is Lakshman. I'm trying to build a gaze-contingent experiment where a trial proceeds to the next one when the subject fixates for some duration on a target among distractors. I want to work with PsychoPy for this but couldn't find the right sources. Can anyone help with this? I used PyGaze, which has an AOI function, to achieve my objective, but the Pupil Labs tracker isn't compatible with PyGaze. I'll be grateful for any help.
Did it work? I just join the group and did a search for "gaze-contingent" and your message from 2018 popped up
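For anyone picking this up: the core of such a gaze-contingent check is tracker-agnostic - accumulate time while gaze stays inside the AOI and fire once a dwell threshold is reached. A hedged sketch (the rectangular AOI, normalized coordinates, and thresholds are illustrative); the (x, y, t) samples would come from whatever gaze source your experiment loop uses:

```python
class DwellDetector:
    """Track how long gaze stays continuously inside a rectangular AOI;
    update() returns True once the dwell threshold is reached."""

    def __init__(self, aoi, dwell_s):
        self.aoi = aoi          # (x_min, y_min, x_max, y_max), e.g. norm. coords
        self.dwell_s = dwell_s  # required dwell duration in seconds
        self._enter_t = None    # time gaze last entered the AOI

    def update(self, x, y, t):
        x0, y0, x1, y1 = self.aoi
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if not inside:
            self._enter_t = None  # leaving resets the dwell timer
            return False
        if self._enter_t is None:
            self._enter_t = t
        return (t - self._enter_t) >= self.dwell_s
```

Calling update() once per gaze sample inside the trial loop, and advancing the trial when it returns True, reproduces the dwell-to-advance behavior described above.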
@wrp @user-006924 I saw that both of you mentioned the Pixel 2 and its capability to run Pupil Mobile. All of my work with Pupil Labs over the past several months has used a Pixel 2 and Pupil Mobile, and I've had no issues. If it's not running on the phone, my guess is bad cables. Hope this helps.
@user-bfecc7 Thank you very much! This is indeed helpful.
@user-bfecc7 thank you for your comment. I'm going to get the cables @wrp suggested and hopefully the problem will go away.
Hi @user-006924 The cable will fix the problem. I have a Nexus 5X with Android 7.0.
Hi, my Pupil Player 1.7.42 (on Lubuntu 18.04) is exiting with the known error message: "Session setting are a different version of this app." My Android info.csv tells me data format v1.4... But what can I do to load my recording?
I started processing the same recording on my Mac with Pupil Player 1.7.42. There I didn't get that error. It seems to me that mobile recordings are bound to the Mac and Windows platforms. Is that right?
Hi @papr, regarding your response from last week, I am sorry that I missed it. I was asking about the reduced frame rates after importing a recording into Pupil Player, compared to the frame rate that was selected (120 Hz for the world camera). The raw data videos, before importing into Pupil Player, have the right frequency. Trying this on different computers, I get a final frame rate between 70-90 fps on laptops/stationary computers, and with Pupil Mobile (running on a Note 8 phone) a frame rate between 20-30 fps. The best frame rate I got (90 fps when 120 was specified; the frame rate is inconsistent throughout the recording) was when recording on a computer with the following hardware specs: Processor: Intel(R) Core(TM) i7-6700HQ [email removed] 2.59 GHz; Installed memory (RAM): 16.0 GB; System type: 64-bit operating system, x64-based processor. I tried running it with no other programs open. 1. Do you have suggestions as to why this might be happening?
2. Is there a preferable computer for recording? 3. How does the inconsistent frame rate affect the exported gaze and pupil positions? Thank you so much for your help!
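(Side note for anyone debugging this: the frame rate a recording actually achieved can be measured from the saved per-frame timestamps rather than trusting the selected setting. A minimal sketch, assuming a Pupil-style timestamps array with one timestamp per frame, e.g. loaded from world_timestamps.npy; the synthetic data below just illustrates how dropped frames pull the mean rate down.)

```python
# Estimate the effective frame rate from per-frame timestamps.
import numpy as np

def effective_fps(timestamps):
    """Return (mean fps, worst-case fps) implied by frame timestamps."""
    dts = np.diff(np.asarray(timestamps, dtype=float))
    return 1.0 / dts.mean(), 1.0 / dts.max()

# Synthetic example: nominal 120 Hz, but every 4th frame is dropped.
t = np.arange(120) / 120.0
t = np.delete(t, np.arange(3, t.size, 4))
mean_fps, worst_fps = effective_fps(t)
```

A large gap between the mean and the worst-case value indicates bursty frame drops rather than a uniformly slower camera mode.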
Where is the calibration_routines folder?
Hi, we recently exported a video in which there are two fixations throughout the recording: one appears consistent with where the individual would be looking, and the other is definitely not correct. Everything else about the recording appears relatively normal, with the eyes being tracked, etc. Is there any reason why this would happen, and any way to prevent it in the future? Thank you!
uvc: Turbojpeg jpeg2yuv: b'Invalid JPEG file structure: two SOI markers'
Does anyone know what that means ?
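(That error comes from turbojpeg while decoding an MJPEG frame: a well-formed frame contains exactly one JPEG Start-Of-Image (SOI) marker, the bytes `FF D8`, at its beginning. Two SOI markers mean the frame buffer holds a corrupt or concatenated frame, often the result of USB transfer glitches. A hypothetical check of the raw frame bytes, purely for illustration:)

```python
# Hypothetical sanity check for a raw MJPEG frame buffer: count JPEG
# Start-Of-Image markers (FF D8). A valid frame has exactly one; two
# usually means two frames got glued together or the buffer is corrupt.

SOI = b"\xff\xd8"  # JPEG Start-Of-Image marker
EOI = b"\xff\xd9"  # JPEG End-Of-Image marker

def count_soi(frame: bytes) -> int:
    """Count Start-Of-Image markers in a raw frame buffer."""
    count, pos = 0, 0
    while True:
        pos = frame.find(SOI, pos)
        if pos == -1:
            return count
        count += 1
        pos += len(SOI)

good = SOI + b"...payload..." + EOI  # one complete (fake) frame
bad = good + good                    # two frames glued together
```

(In entropy-coded JPEG data, `FF` bytes are stuffed as `FF 00`, so a bare `FF D8` inside a healthy frame is not expected; that is what makes this simple count a reasonable smoke test.)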