@vish IR light may burn your retina. However, for people with normal vision, chronic exposure to a single IR LED can't do any harm under normal light conditions (let's say, > 200 lumens/m2).
A single conventional IR LED will emit 10-25 mW/sr at 100 mA. The safe zone is below 10 W/sr. (I am writing from the phone, I would recommend confirmation of those values).
Hey guys, is it me, or is pupil player searching for pupil calibration markers even when I'm in natural feature mode?
@user-8779ef Welcome to the channel! Thank you for your valuable feedback on Github!
Notice the blue circle indicating the incorrect identification of a circular calibration marker
papr - happy to contribute! Thanks for the hard work and responsiveness.
So, check that image. Note the one quirk - it's stuck at 99% mapping.
You are right that the circular calibration marker detection is run even if the section is set to natural features. We run it once and cache all found markers.
Oh, no wonder I have no laptop battery left 😛
You should wait until it is finished and close player. After the next launch the markers will be loaded from the cache file instead of re-running the detection.
Could you try recalibrating? Does it keep being stuck at 99% every time?
I'll try now. I believe it is stuck every time, yes.
...but, lets see.
Would it be possible that you share this recording (or a similar one that shows the same issues) privately with me? This would help me a lot with reproducing your issues. 🙂
Yes, I am stuck at 99%.
Papr - I have to check. It's a many-gig file and, more importantly, may be under NDA.
Non disclosure agreement.
Yes, this is often the case for experiment recordings. Try to make a small test recording and see if it shows the same issues.
OK.
I'll also talk to Jeff - he's got more experience with your player on a similar machine (if that's the issue).
This might be an edge case that only appears on large recordings.
FYI, our preferred pipeline is this: during capture, use 2D methods and save out eye images. We are sure to include a calibration sequence with natural features (e.g. a custom grid). Later, in player, we switch to 3D modes with natural feature detection.
This seems to provide the most flexibility and quality.
Yes, I agree that this is also the most future-proof way to record.
For large recordings, the search for calibration markers eats up a ton of time and CPU. I think I'll open a request on github to remove this from default behavior.
What's the term you guys use to refer to your circular markers?
Or better: A way to cancel it
yes.
calibration markers
is fine
OK, thanks.
Btw, do you run from source or from bundle?
Source.
Sorry, bundle
I think this is going to be the best approach for me - I'm testing this out for student use, too.
I need to know the issues they will run into ahead of time.
am I correct to infer that a result of being stuck at 99% is that it won't save to file? I assume this is automatic at completion.
Mapped gaze is actually not cached since it is quite fast to map pupil data. The actual problem is that other plugins that depend on gaze data (e.g. fixation detector) do not get the data.
However, my natural feature locations were not saved.
Upon shutdown / reload, the entire calibration region is missing.
Ah, I have to switch away from gaze from file, to offline calibration. Shouldn't that be remembered?
Mmh. This might happen if the application was not shut down correctly / crashed.
It was a graceful shutdown, so that's not it.
...at least, I have no indication that anything went wrong.
This should be remembered. If not even the user settings were stored, that is an indication of a crash during shutdown. I would suggest uploading the log file, but it is overwritten after restarting the application. @mpk We should add an option to keep the last three log files automatically.
Ok, thanks again.
Be aware that a crash does not necessarily show as such during shutdown. The crash log should be written to the log file.
Good news - now that I'm able to poke around a bit more, the track seems to be really nice. 😃
It seems something changed between shutdown / reload. Strange.
So, you're probably right.
Yes, if the saved user settings are broken they are automatically reset. My guess is that this was your case.
K. I'm currently playing with a custom fixation detection algorithm. It's slower than the one you have implemented, but if it works I'll see if I can share.
Output is in the same excel format as you guys use, so my hope is to explore using your plugins.
wish me luck, and thanks again.
Cool! Do not hesitate to ask if there are any further questions. Especially if it comes to code details/behavior.
Ok, thanks. I won't.
So, it seems like one cannot import fixations, only export. Is that right?
...that is, into pupil player, for visualization of my own algorithm. I have formatted the algorithm output to match the output of the csv file found in the exports folder.
@papr In addition to the calibration hanging at 99%, I've since also found an inability to export the video from player. However, resetting to default (in gen settings), recalculating pupil positions, and recalibrating fixed both issues.
@performlabrit#1552 That is correct. There is no such thing as importing external data. But you could easily write a plugin that reads such a file via drag and drop and visualizes the data.
Hi everyone! Is it normal that the headset get very hot? I've been running some temperature tests and these are the results:
I have another question: Does the device have a recommended usage time? Thanksss! 😃
@user-ec208e the cameras do get warm. This is because we do compression and high speed capture with them. The headset can run continuously without problems.
@user-ec208e the new 200hz cameras run much cooler btw.
@papr "easy" is a very relative term 😃 . In fact, I've been looking at making a simple plugin, and find the lack of documentation a major impediment. Is there a location to make requests related to documentation?
Nevermind, I've found it.
Just checking myself - on the excel output from Player, the timestamps listed are in seconds, correct? And if I want to get the timestamp of a certain fixation, I can subtract the very first timestamp in the gaze positions excel file from my chosen fixation in the fixation file?
@user-2798d6 correct.
Hello everyone.
@user-39ac51 Hey, welcome to the channel 🙂
Guys, so I have like 6-7 short questions after reading the docs, should I just ask them here one after the other or would that be spam-y?
I would suggest opening an issue over at https://github.com/pupil-labs/pupil-docs/issues that includes all the questions. Please number them such that I can refer to them when answering. Afterwards we can create new issues for eventual docs changes
But it is also ok to simply ask your questions here. I just have the feeling that it might be easier on Github to reference single questions when answering.
it's not really a bug or issue with the code, but rather hardware related questions
ah, I see. These should be asked here.
OK
OK so first thing: I have a UVC cam that can capture at 30, 60 and 120 fps depending on resolution. How do I specify what resolution/fps to use? Is it something the camera software controls or the uvc camera itself?
This is settable via software. See pyuvc for details.
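For reference, a minimal sketch of what that looks like with pyuvc (device index and mode values are placeholders; check what your camera actually reports):

```python
import uvc

# List connected UVC devices and open the first one (index is a placeholder).
devices = uvc.device_list()
cap = uvc.Capture(devices[0]['uid'])

# The camera reports which (width, height, fps) combinations it supports.
# Note: the attribute name is spelled this way in pyuvc.
print(cap.avaible_modes)

# Pick one of the supported combinations, e.g. 120 fps at 640x480.
cap.frame_mode = (640, 480, 120)

frame = cap.get_frame_robust()
print(frame.img.shape)
cap = None  # release the device
```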
OK, thank you.
Second question, if you use two uvc cameras, where do you take care of syncing them? (OK so this one is software related question but not the Pupil library necessarily)
We do not sync the cameras themselves. But each frame has a timestamp which we use to correlate the frames.
doesn't that introduce lag in realtime mode?
The pipeline is the following: Each frame is processed in its own process. The result is a pupil position that has the same timestamp assigned as the frame. This pupil position is sent via the IPC backend to the world process, which correlates pupil position pairs and maps them to gaze positions. Therefore, of course yes, there is a small lag due to the processing, but the data includes the timestamp of frame creation.
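Not our actual implementation, but a minimal sketch of the pairing idea, assuming two time-sorted lists of pupil datums:

```python
def pair_by_timestamp(eye0, eye1, max_offset=0.004):
    """Greedily pair pupil datums from two eyes by nearest timestamp.

    eye0, eye1: lists of dicts with a 'timestamp' key, sorted by time.
    max_offset: maximum allowed time difference (seconds) for a valid pair.
    """
    pairs = []
    i = 0
    for d0 in eye0:
        # advance while the next eye1 datum is at least as close in time to d0
        while (i + 1 < len(eye1) and
               abs(eye1[i + 1]['timestamp'] - d0['timestamp']) <=
               abs(eye1[i]['timestamp'] - d0['timestamp'])):
            i += 1
        if eye1 and abs(eye1[i]['timestamp'] - d0['timestamp']) <= max_offset:
            pairs.append((d0, eye1[i]))
    return pairs
```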
Understood. My only concern is each eye's readings might always be a few ms apart, which may give slightly inaccurate data.
unless the uvc cameras (identical ones) could be initialized at the exact same time
Even provided the function calls for starting each are right after each other and the cabling length is the same I don't know if they will start equally. But you probably have some info from your own tests. Maybe it is a non issue
We just do not expect them to have the same timing. Also, the mentioned frame rate is not fixed. There might be slight timing differences between frames. But we are talking about a time difference of 4 ms (worst case) between binocular frames (at 120Hz). If you need to be even more accurate I would suggest to buy the 200hz eye cameras.
I might. Although 4ms sounds better than I was expecting. Anyway, thanks for answering this question too.
FYI, the calculation: 120 Hz equals one frame every 0.0083 seconds. Therefore the maximum offset between two 120 Hz cameras is about 0.004 seconds. This means that you can reduce the maximum offset to 2.5 ms using the 200 Hz cameras.
OK, perfect. ty
The next question is regarding the IR LEDs. Do they just illuminate the eyes and areas around the eyes, or are their reflections on the iris needed for calculations? Reason I'm asking is 1) if it is the latter then their positioning is important, and also using more will probably affect the calculations by having more randomly positioned white dots on the iris and confusing the algorithm, and 2) one probably shouldn't use a more diffuse, more uniformly illuminating but brighter IR LED because there won't be a point reflected on the iris.
Our pupil detection algorithm works glint free, therefore we do not require the reflections. The eye cameras should be positioned such that the pupil is clearly visible. More important though is that the pupil is in focus.
Oh, great. Makes life easier for me. Thank you.
OK, so the next question is regarding what format pupil expects from the uvc camera, because my camera has different fps speeds for MJPEG and YUY2. Maybe I could convert one to the other if needed in realtime, but not sure if it could be processed so fast. Sorry if this is mentioned in the docs and I missed it.
Our current pyuvc implementation expects mjpeg data.
Oh, perfect.
Can the world camera be used to determine your head position relative to a starting center position?
This will require something like monocular SLAM I think
@marc has been working on something like that using markers. He published the current state of his work at https://github.com/pupil-labs/pupil/pull/872 Feel free to test it and to contribute. We would appreciate your feedback on this.
Thanks, I will. If you or anyone in the team has contact with him you can pass this to him if you want: ARToolkit has good marker based position tracking which also supports stereo cameras. He might want to use that library in his plugin.
He is part of our team 🙂 I will tell him about it.
great
ARToolkit is under LGPL license
(shouldnt matter for plugins though)
For markerless tracking ORB-SLAM seems the only decent open source left: http://webdiis.unizar.es/~raulmur/orbslam/
Final question: does Pupil use/need a GPU for faster processing? I want to test it on an ASUS Tinker Board, and while it's at least twice as fast as a Pi 3, I'd like to have a general idea what to expect.
No, currently everything is processed on the CPU. Be aware that the software will drop frames if the processor is not fast enough.
Sure.
Actually, I lied, there's another question left. With ordinary webcam software I notice lag/latency of about 200 ms. Is it because of the camera or the software displaying the camera content (not your viewer)? Can I expect less latency by using pyuvc?
I cannot tell you where this lag/latency comes from. Pupil Capture might show the frames with a slight delay as well since the processing happens before displaying the frame. Lowest latency will be achieved by using pyuvc directly. Pupil Capture uses pyuvc under the hood as well.
You should be able to test this though. Simply download and start Pupil Capture 🙂
@user-39ac51 using our hardware and pyuvc we have a latency of 4-6 ms depending on the camera used. Other webcams may add considerable latency.
you also count the processing for calculating the pupil positions right?
latency is from start of exposure until the frame is available to the user.
then you have to add 3-5 ms of processing to get the pupil pos (this depends on your cpu).
I've read somewhere that it takes a while for our brain to process the visual information from a new gaze position the eyes have moved to and it is not instant. Is anyone familiar with what I'm talking about?
might make up for the latency
mpk, what's your camera fps, res and how many cams? (and what pc specs, if it's okay to ask)
@user-39ac51 I'm using 200hz eye cameras with a 120hz world cam on an i7 macbook air with 8gb ram.
What I read a while ago not only referred to the time it takes for vergence and to refocus the eyes when the target changes, but also just pure time to make out what the new view shows. I don't know what it's caused by. Maybe exposure, other things the brain processes such as familiar shapes, or linking what is in the new field of vision to the previous one(s), but I think the conclusion was it takes a moment to actually "see". Is this familiar to anyone?
@mpk Thanks!!!! 😋
Hi folks, looking for some guidance. I have solid programming experience for scientific analysis, but limited experience developing applications. I want to have a try at developing a plugin for player, but can only get so far as installing all the dependencies and running from source. I would now like to import the project into Visual Studio (on a mac) and be able to debug the module I'm trying to create... I may be asking too much (some of this is background knowledge to a developer). Any input would be appreciated, even if it's a good webpage / resource.
@user-8779ef For a simple player plugin written in pure python, you could try sublime text or geany plus the bundled pupil player. Yes, no sources at all, no dependency headaches.
Everything bundled with pupil can be imported
Even if your plugin has dependencies not bundled with pupil, you don't need the sources.
You can install those dependencies and import them from your plugin, normally.
What will your plugin do?
So uh, is there a proof of concept of foveated rendering with the pupil labs addon for vive or rift? There are some articles about saccadic movements and they give some crazy possible rotation sums like 800 degrees per second of saccadic eye rotations. I don't think a 90 Hz VR screen is enough to make foveated rendering possible, even perfect eye tracking aside, but maybe I misunderstand these articles.
@user-3aea1d FYI, the old eye camera model is able to provide up to 120 Hz, and the new ones up to 200 Hz.
not talking about eye tracking speed but the refresh rate of VR headsets
Ah, ok, makes sense. But it might not be necessary to render during saccades. As far as I know people do not perceive anything visual during saccades. Therefore one would only have to render as soon as the saccade ends.
I understand, but from what I read, and I hope I read it wrong, there can be more than 90 saccades each second.
with the sum of rotation angles 800 degrees max
My bad, " The smallest “microsaccades” move the eye through only a few minutes of arc (one minute of arc equals one-sixtieth of one degree). They last about 20 milliseconds and have maximum velocities of about 10 degrees per second. The largest saccades (excluding the contributions of head movements) can be up to 100 degrees, with a duration of up to 300 milliseconds and a maximum velocity of about 500–700 degrees per second."
but anyway, is there a proof of concept of foveated rendering with the pupil addons?
@user-41f1bf Yes, I've started along that path, but it's not a very useful environment for debugging. I would like to be able to halt the script and inspect the local variables ... for example, g_pool, because there isn't any documentation on its contents.
@user-41f1bf Eventually, an improved fixation detector. Possibly with a visualization of a velocity / acceleration time series.
@user-3aea1d Not that I know of. But we should ask @user-e04f56 , he maintains most of the VR related projects.
@user-3aea1d . The equipment doesn't have the precision to measure microsaccades. You should also maybe read a bit about post-saccadic inhibition. My intuition is that we have a few tens of milliseconds after a saccade ends during which we suppress change. This would suggest that there's some tolerance for eye tracker latency when implementing mid-saccadic manipulations.
@user-3aea1d As far as foveated rendering goes, make sure to have a look at work by David Luebke's group at Nvidia (including work by Jaewhoo Kim)
@user-3aea1d I haven't seen anything done with foveated rendering using Pupil.
but do you think the hardware should handle it?
@user-3aea1d Yes. A student over here measured the average latency of the pupil lab mobile eye tracker running at 60 fps at 0.012 seconds.
12 milliseconds. So, you could expect the information to be available to the unity pipeline within about 1 frame's time (at 90 Hz, that's about 11 ms)
How soon it influences the screen would depend upon the delay introduced by your shader (I assume you'll use a shader to implement your foveated rendering compression)
So, off the cuff, assuming your shader introduces a minimum of <20 ms of latency, I'll guess you're introducing a minimum of 30 ms of latency from eye movement to screen update. I would go to the literature on change detection and post-saccadic suppression to see if that's within the bounds of blindness due to post-saccadic inhibition. Sorry - I don't have names I can provide on that literature base (it's a tentative suggestion - you may have better luck elsewhere).
@mpk , are you still using sublime text? Do you use any tool for inspecting stuff?
Beyond the logger and python self awareness functions
Is IR LED light dangerous to your eyes? This is a very loose/vague question, I know. I guess both legally and according to the scientific evidence we have so far. I know for UV things are tricky and not 100% known at this point.
Are there linux commands to be able to run the player in a GUI-less way?
I am having errors with the GLFW window failing to create now, ubuntu 16. Went through the steps; it just seems to be hung up on this.
Hi all, I'm having a bit of trouble running Pupil from source. The install seems to have gone fine, but when I try to run main.py, I get the following error: https://pastebin.com/B8iJxZ9K I've tried searching for similar cases online - in fact, I found a much older Pastebin that contained much the same error message - but I haven't had any luck. From what I can tell, ZMQ is experiencing some kind of error when looking for the appropriate socket. Any advice is greatly appreciated!
Research inconclusive but cataracts sounds scary. Please link to other contradicting research if available. Thanks.
Seriously though: the amount of IR from mobile eye trackers is typically a fraction of what we get during exposure to outdoor light.
@performlabrit#1552 for development with a python debugger check out pycharm. the community edition is free.
Anybody successfully subscribing to pupil from qt c++ zeromq?
anyone tried running on a Pi3?
@user-3aea1d I actually did get it running on a Pi last year!
This was a Pi2, so it should work on a Pi3
@user-3aea1d We take IR safety very seriously at Pupil Labs. Pupil Labs eye tracking hardware is tested and evaluated by professionals to ensure that they comply with photobiological safety standards.
nice. is opencv optimized?
@user-3aea1d regarding "foveated rendering", I came across these papers last year, one of them co-authored by the [email removed] mentioned
I experimented a bit with the concept by adapting the toon shader from the 3D market demo scene through reducing the texture LOD based on the distance from the gaze position. But I could not see any performance improvements
You usually render the scene twice, one at full fov and about 20% target resolution and one at much lower fov but pixel perfect resolution, then merge these renders into one frame by a fragment shader
I also asked around in the Unity forums for suggestions on how performance could be improved in any means by utilizing gaze information, but I did not get an answer
There's a transition added between the two renders. The low res full fov frame also has some fragment shaders, such as blur
not sure what you mean by texture LOD
@user-3aea1d texture2Dlod is a shader function to specify the "level of detail" in which you access the texture used for the current 3d model. so it corresponds to a reduction in resolution, as you describe it
so based on the distance from the gaze point, I tried to reduce the resolution on the texture being used
I see, I don't use Unity myself. But that will make only a tiny difference in performance
I'll read the pdfs you linked to, thanks. Do any of them by any chance mention using pupil for tracking?
@wrp Where can I find those standards for using custom LEDs for myself? I'm my own test subject for my experiments so I should be fine, but just to be on the safe side.
@user-e04f56 I don't know your level of experience with 3d programming but maybe you are confusing render resolution with the texture resolutions of 3d models.
Unity should have a profiler; you can check that once the texture of a 3d model is loaded it takes up far fewer resources than the render resolution per frame.
@user-3aea1d I was not confusing them, just experimenting with different ways on how to utilize the gaze. But you are right, maybe I should have a look at composing scene renders of different resolution
Hi, Has anybody worked out a way to use the eye-tracking on a mobile device? To test e.g. user interaction?
Hello, Is there a way to get the number of the fixation (that shows up with the yellow fixation circle) to be present in the exported video? I'm getting the yellow circle, but no number with it.
Am I correct that there is no CLI for any of the eye recorders?
Hello everyone, quick question. Which packages do you use to create tasks that will be linked to my pupil eye tracking device? I was thinking about using PyGaze, but I read somewhere that it was not compatible. Thanks!
@user-c3650b you are correct.
@papr thanks!
@hellomarcoliver#8847 You can use our Android app Pupil Mobile to connect the Pupil hardware to your phone and make recordings on it or stream the video live to your computer.
@papr what about pupil remote? I could attach via python...? does that require building my own version of the app from source?
No, Pupil Remote is a zmq based network interface. You can use the bundle as it is to connect to it.
So I could use it to start and stop recording just like a CLI?
You will have to run the app but you can surely control it remotely.
perfect
Thanks again!
@user-2798d6 Mmh, I am pretty sure that the numbers are rendered in the same way as the circles. I will have a look at them tomorrow.
@user-4e7774 Incompatible is the wrong word. There is no built in integration into pygaze. But this is surely doable. The most important thing is that you synchronize the clocks between pygaze and Pupil Capture such that you can correlate task related events with the gaze data.
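A minimal sketch of the clock sync part, using Pupil Remote (assumes Capture runs locally with Remote on its default port 50020, and pyzmq installed):

```python
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')

# Ask Capture for its current clock ('t') and estimate the offset to the
# local clock, compensating for half the request round-trip time.
t0 = time.time()
remote.send_string('t')
pupil_time = float(remote.recv_string())
t1 = time.time()
offset = pupil_time - (t0 + t1) / 2.0

# Any locally timestamped task event can now be expressed in Pupil time:
# pupil_timestamp = local_timestamp + offset
```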
@papr - I see them now! They were just going by really fast and the background made them hard to see. Thanks!
Hello, is it possible to purchase the product including the 120hz binocular eye camera? If that is not possible, I'd like to buy the 120hz eye camera as an additional product.
Hi @user-537e9a we do still have some 120hz eye cameras available. Please send an email to sales@pupil-labs.com and we can go from there.
@wrp Thank you!
@user-537e9a you're welcome!
@mpk Thanks - I've switched to Pycharm, which I actually have experience with. So, the issue now is that offline nat. features calibration halts at 99%, and player does not shut down gracefully.
@user-8779ef any console/log output you can share?
I posted an issue to github with the console output
Let me find the link.
Sadly, it's not much to go by. The program does not shut down gracefully either. Is there a log file I can access to see messages stored during shutdown?
The output you show is not an actual error. We only save the marker data when the search was complete. In your case the 99% stuck issue is what I think leads to the search not being saved.
let's try to get to the bottom of that first.
Yep.
how long is your recording?
@papr can you try to recreate this?
Let me see.
BTW, I'm now running from source
...persists in bundle, too.
ok. that's good to know.
One video is 5:25. The other is 11:34.
Same issue with both. I had this issue previously and found that resetting to default settings resolved the issue ...but, that only worked once.
Just loaded the file and, during load, I see this error:
Ceres Solver Report: Iterations: 13, Initial cost: 1.943841e-01, Final cost: 7.563179e-03, Termination: CONVERGENCE
Traceback (most recent call last):
  File "/Users/gjdiaz/PycharmProjects/Pupil1/pupil_src/shared_modules/background_helper.py", line 46, in _wrapper
    raise EarlyCancellationError('Task was cancelled')
background_helper.EarlyCancellationError: Task was cancelled
...and, good news, this time it loaded the settings. So, it did shut down gracefully that last time. Calib. still stuck at 99%.
FYI, the pipeline I use is: collect data using 2D pupil detection and calibration, but later I use offline 3D pupil detection and 3D calibration with natural features (a calibration grid).
We found the issue: Because of caching reasons, the binocular gaze mapper does not return a gaze point at the end. This only happens in few cases and there is no way to flush the cache. I will fix this this afternoon.
Actually, the mapping is successful but the displayed mapping state is simply incorrect.
ok, great. So the calibration issue is not linked to the ungraceful shutdowns.
Now, how can I help debug those? Is the log saved to file?
Yes, there is a log file as <pupil repo>/player_settings/player.log
Great. I'll create a new issue on github, and the next time I have a crash on shutdown, I'll post the log.
It would be great if you could use this branch for testing: https://github.com/papr/pupil/tree/offline_calib_improvements
It includes the fix for the mapping display issue
Ok, I can switch. My goal here is to develop the knowhow to build plugins.
...so far, I've found player's instability between runs to be the greatest obstacle to progress. It seems I have to recapture the pupil far too often due to crashes, and that costs time.
It crashes and reverts to pupil from file.
Plugins have a cleanup() function that is called on shutdown. If the program crashes beforehand the cleanup method is not called. This might cause an ungraceful shutdown for a lot of plugins.
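For reference, a minimal sketch of that hook (Plugin base class from pupil_src/shared_modules/plugin.py; the persistence call is a placeholder for whatever your plugin needs to save):

```python
from plugin import Plugin

class My_Plugin(Plugin):
    def cleanup(self):
        # Called once on a graceful shutdown, before the plugin is destroyed.
        # Persist anything here that should survive a restart.
        self.save_state_to_disk()  # hypothetical helper defined by your plugin
```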
And even if the detection was finished beforehand it does not read the data from the cache?
Say that another way for me? A bit unclear.
Oh, I getcha. Yeah, it often reverts to pupil from file.
So, another suggestion I was going to have was the option to save pupil detection to file explicitly. It is, after all, a computationally heavy process.
The Pupil Offline Detection runs the detection for both eyes. After finishing the detection the eye windows should close and the resulting data is written to a cache file in <recording>/offline_data/. The plugin will try to load this file on start up.
So even if Player crashes and reverts the session settings, and Pupil from Recording is loaded: does selecting Offline Pupil Detection run a new detection, or does it load the data from cache?
New detection.
It does not load from cache.
I just saw that the data is only cached on cleanup(). I will add another save after finishing detection explicitly. I do not think that a ui element to do so is a good idea. It should work reliably and automatically.
Yes, that seems like a reasonable fix.
Pushing that change to the repo you've just shared?
yes
Great!
That should save me a lot of time.
@user-8779ef I implemented the "caching on finish" for both offline pupil detection and calibration. Both are pushed to the branch above. You should see an INFO-level log message saying Cached ... data to <path>
@papr Thanks very much. I'll continue to try and break things, and will report back!
Hi guys, I just ran Pupil Player 1.2-7 on my mac and I noticed that the main menu is not responding when I click on the icons on the right side. It basically does nothing and does not show/hide the selected window.
nevermind. I deleted the settings folder and now it is working
I guess the new version is not compatible with the old setting files
@user-0d187e This is a known issue that appears in some edge cases. I fixed it this morning in https://github.com/pupil-labs/pupil/pull/1006 Yes, deleting the settings folder is a workaround
Thanks
Hello - I have downloaded the new version of the software, but am still having issues with offline calibration with natural features on my MacBook with Retina display. I click a point and the marker shows up almost 4 inches away on the screen up and to the left.
Is there something I need to adjust on my computer or Pupil settings?
@user-2798d6 . This issue has been addressed in the newest version. Update 😃
I did
What version does it say in "about" ?
Unless the update came out this morning
1.1-2
K, lemme check something on this end
ok, thanks!
Ok, I'm running the same version on a macbook pro w/retina, and no issue. I had this issue previously, but it was resolved in the latest update. So, I'm not sure. It's a question for the team.
Thanks for your help 😃
I will check in with someone about it. Maybe it's my computer
Tried my best 😃 . I don't think it's your computer.
It seems to be a mac issue - perhaps a retina display issue.
I appreciate it!
Hold on...
Please add your issue there. See if you can reopen the issue (I think you can, but maybe I have to).
I added - thank you!
No problem. You may also want to report your computer type (mac retina, etc etc). They're usually quite responsive, but hold tight.
I've re-opened the issue.
Hold on - apparently I had two versions of the software on my computer and may have been running an older one?
That makes sense. Don't worry - I did the same thing 😃
Should I close out the issue?
It's freezing at the moment so I can't see if it works - give me one second and I'll let you know! 😃
IT WORKED! Thank you so much for working through that with me @user-8779ef !
No problem LKH!
Hi all, I know this is probably somewhere and I am just being lazy and not finding it. But I am trying to save my pupil diameter to the .csv file. I know I read that it automatically saves data like this, but frankly I am lost on how to access or enable this. Also, does the diameter have a time stamp option? Background: I am doing research into pupil dilation, so I need to be able to sync the pupil diameter with tasks people are doing. I'm hoping to do this by just using the time stamps and then comparing them to the outside time. Sorry if I'm being a dummy here, but any help would be greatly appreciated.
Hi @user-93f13b please open your recording in Pupil Player and open the raw data exporter plugin from the plugin manager. Press e or the down arrow ⬇ in the Player GUI to export raw data. Diameter data is listed in each row of the csv, each row/datum is time-stamped.
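If it helps for the later analysis, a hedged sketch of reading that export back in (the path and column names reflect recent exports; double-check against the header of your own CSV):

```python
import csv

with open('exports/000/pupil_positions.csv') as f:
    for row in csv.DictReader(f):
        # 'timestamp' is in seconds on the Pupil clock; 'diameter' is in image
        # pixels ('diameter_3d', in mm, is only present with the 3d detector).
        print(row['timestamp'], row['diameter'], row['confidence'])
```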
You are an absolute life saver! Thank you for the quick response. One more question. So I bought the diy headset, I was wondering if there is a way to adjust the eye mount? I read somewhere that it can be shaped if you heat it? Currently it is only pointing at the very bottom of my eye and I was wondering how I could move it up somehow so the eye is in the center of the picture. Thanks again for helping with possibly dumb questions.
You can bend/twist the DIY eye camera mount arm a bit. You may also want to try repositioning the headset frame on your nose bridge.
You can also tie the nose bridge, so the frame will sit higher. It also decreases slipping down the nose.
Just a quick question guys. Is the current Pupil Capture executable program able to log the keypress events while recording?
Thanks, I will try that. Another question: when installing the new exposed film for the IR sensors, after installing it I am not able to get the camera to focus. I have taken off the auto focus, and have tried redoing it a couple times to see if I put it on incorrectly, but still the same problem. When I take it off the camera works fine and focuses. Any fix for this? I have been using normal exposed film from a camera, is there a different material I should use?
@user-93f13b it should just work. Make sure that you insert the lens far enough to focus. I find that running the camera and inserting the lens gives you decent feedback.
Could I be doing something wrong when putting the film in? It gets close to being in focus but then I can't move the lens anymore.
as in it bottoms out.
@user-93f13b no, I'm not sure what's wrong then.
Hi everyone, I was wondering if it is possible to calculate the brightness of the delivered world camera image. Therefore I have the question: how am I able to save every world camera image as a separate image during the recording? I thought about writing a plugin, but I don't know how I can get access to the pixel image data?
@user-6419ec you should be able to access image pixels through the "recent_events" function in a plugin. Each world frame is accessible in there
You could export them using cv2.imwrite() or just do your stuff right there. Frames are numpy arrays.
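A minimal sketch of such a plugin, assuming the Plugin base class and the frame object layout from pupil_src/shared_modules (class name and output directory are placeholders):

```python
import os
import cv2
from plugin import Plugin

class Frame_Exporter(Plugin):
    """Hypothetical example: log brightness and save every world frame to disk."""

    def __init__(self, g_pool, out_dir='/tmp/world_frames'):
        super().__init__(g_pool)
        self.out_dir = out_dir
        os.makedirs(self.out_dir, exist_ok=True)

    def recent_events(self, events):
        frame = events.get('frame')
        if frame is None:
            return
        # frame.img is a BGR numpy array; frame.index / frame.timestamp identify it.
        brightness = frame.img.mean()
        print('frame {} brightness {:.1f}'.format(frame.index, brightness))
        cv2.imwrite(os.path.join(self.out_dir, '{:06d}.png'.format(frame.index)), frame.img)
```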
@user-0d187e take a look at pupil annotations in the docs
@user-41f1bf thanks for the fast reply 😃
I'm curious if this could be a good option for an eye cam? https://www.spinelelectronics.com/USB_2.0_Camera/2mp_usb_camera_modules/UC20MPD
thinking about taking off most of the IR LEDs because they are overkill. But not sure if pupil capture can handle the camera?
Hey, is self.g_pool.gaze_positions_by_frame[fIdx][0]['index'] an index into g_pool['gaze_positions']?
I am trying to get the sample of gaze data used to calculate the gaze position currently shown...and then want to search for the gaze data 0.5 secs ahead of time. (I need to grab the range from now through now+0.5 secs)
So, although gaze_positions_by_frame is most convenient data structure when finding the first sample used for the current frame, gaze_positions is more convenient for searching for the last sample less than N seconds from now.
I think the germans are asleep.
Ignore the previous question, I used this solution: curGP = next(gp for gp in self.g_pool.gaze_positions if gp['timestamp'] > events.get('frame').timestamp)
...and interestingly enough, the index value is not the index into gazePositions. Oh well...
@user-8779ef events.get("frame").timestamp is the current world frame timestamp.
@papr does this kind of thing a lot. I think the bisect tool is useful.
basically you make a list of all gaze timestamps.
then you can find the value closest to target+.5 and you know the timestamp of the gp you want.
then you can go through gaze_positions and find the gp you want.
you can even make a dict that has all gp by timestamp value for faster access.
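Inside a Player plugin, that might look roughly like this (a sketch, assuming gaze_positions is sorted by timestamp as it normally is):

```python
from bisect import bisect_left

gaze = self.g_pool.gaze_positions              # list of gaze dicts, time-sorted
gaze_ts = [gp['timestamp'] for gp in gaze]     # build the timestamp list once

def gaze_in_range(t_start, t_stop):
    """All gaze positions with t_start <= timestamp < t_stop."""
    lo = bisect_left(gaze_ts, t_start)
    hi = bisect_left(gaze_ts, t_stop)
    return gaze[lo:hi]

# e.g. the half-second window starting at the current world frame:
# window = gaze_in_range(frame.timestamp, frame.timestamp + 0.5)
```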
Thanks @papr – So I understand correctly: the software knows which point I am looking at on the mobile screen?
@mpk, Yes, I did something similar: startFr = np.where(self.g_pool.timestamps > aTimeStamp)[0][0]
Unfortunately, I eventually found myself in nested for-loop territory. I know, I know, I'm not proud.
gazeNormE0_fr = []
for frame_gp in gpList_frame_gp:
    for gp in frame_gp:
        gazeNormE0_fr.append(gp['gaze_normals_3d'][0] if 0 in gp['gaze_normals_3d'].keys() else [np.nan] * 3)
I would like to consider more elegant ways to search for data present on current frames, but I was struggling with your indexing scheme.
...and not familiar with an elegant way to search through the list of dicts for dict['timestamp'] within a range.
@mpk . Thanks for pointing out bisect(). That one's new to me.
(solved) ... what am I doing wrong? turbojpeg is corrupting. I downloaded the newest apps to OSX 10.12.6. Tried to solve it by installing the developer dependencies, but no change. Do I have to update to OSX 10.13? It's eating CPU a lot, I guess that's ok, cpu 80 degrees. Disconnected everything, also the USB drive. It's running, but the yellow messages on the screen make me feel that I am doing something wrong. . . . Finally I could wipe out all turbojpeg problems by migrating into a clean user account. I guess some background processes were slowing down the performance of my default user account.
@user-9b14a1 this is a potentially faulty usb connection. Can you try a different USB port?
@mpk it's a macbook pro, it has 2. OK, I'll go to wifi and disconnect my hard drive. One moment
Guys, I have a question related to pyglui, and the Line_graph class. How do I define the length of the data, and replace the data? I only see an add() function right now. The class has a field for data, but I see that in the init() it is defined as a cython view array: self.data = view.array(shape=(data_points,), itemsize=sizeof(double), format="d")
I'm not quite sure if I can modify a view array inside a *.pyx from a separate python file. Any thoughts on this? Is there an intended approach?
@mpk (solved) ... the right side is worse. It's skipping even half images. . . . Finally I could wipe out all turbojpeg problems by migrating into a clean user account. I guess some background processes were slowing down the performance of my default user account.
ok. This looks like a faulty usb cable or hub. We can do a repair replacement. Please contact us via email for further diagnosis and coordination!
@mpk ok that would be good.
who would I talk to for guidance on making subtle modifications to pyglui?
@user-8779ef talk to @papr, he has worked on it most recently.
Great. Thanks. I'll keep an eye out for him.
@mpk Finally I could wipe out all turbojpeg problems by migrating into a clean user account. I guess some background processes were slowing down the performance of my default user account. (osx malware).
@user-9b14a1 great to hear!
@papr Lots of questions related to line_graphs in pyglui. Currently, I have a bunch of data I've added to my line_graph, but the thing crashes on the call to graph.draw(). File "pyglui/graph.pyx", line 292, in pyglui.graph.Line_Graph.draw TypeError: not all arguments converted during string formatting
Not the most transparent error. Perhaps I'm not adding the right type of data?
I'm using system_graphs as a template, but building an analysis_plugin to plot gaze velocity data.
Is there any age limit
For when you can use the eye tracker technology
?
@malkawi#8305 Upward, no. For children we have custom frames. Babies need custom solutions.
@performlabrit#1552 I would not use the graphs in pyglui but the timelines feature. Check out this: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/offline_surface_tracker.py#L117
here you can draw a polyline of what you want to show.
I meant regarding the IR and its effect
Hello - I am trying to work with the audio file I recorded while running Capture. I have exported 4 different "clips" from a full recording, and the video and audio will play on VLC player, but the audio doesn't play on anything else and there is no audio file that got exported. Is there anything I can do about this? I'm trying to see how fixations line up with audio.
@mpk an example of output with timeline
@mpk Not quite there yet. Need to find a creative solution... it may make more sense to use something similar to the system graphs for this one, so that I can have a variable range of n seconds, and show a dynamic figure that is always focused on t-n/2 through t+n/2 seconds.
I have been in touch with papr about it.
Yes, we would need a solution, that scales all timelines at the same time, though. I will keep you up-to-date on that matter.
That makes sense. Thanks, papr.
@user-2798d6 Unfortunately, I cannot tell you why that is. Did you try transcoding it?
I'm curious if this could be a good option for an eye cam? https://www.spinelelectronics.com/USB_2.0_Camera/2mp_usb_camera_modules/UC20MPD thinking about taking off most of the IR LEDs because they are overkill. But not sure if pupil capture can handle the camera?
@user-93f13b 30 FPS sounds a bit low for an eye camera. But this depends on your use case.
It is actually 60 fps, tested it.
Ah, and `320x240 (QVGA) [email removed] I did not look at the specifications before.
I'm more wondering about the software side of it, if it is compatible with the program. The camera switches between IR and normal. Is pupil capture able to switch that itself or would I need to do it another way?
I tried it, and the program crashes after about 5 minutes of use.
Mmh. So the eye process just uses the grey image of the video frames. As far as I know, our eye cameras have an IR light filter, there is only one mode. I cannot tell you anything about the crash though. Is there a crash log?
@papr. Wouldn't you want to remove the IR light filter, in this case? Your cameras operate in IR.
Ehr, your eye images are in the IR spectrum.
In this case it is a filter that lets through mostly IR light but nothing else.
I am sorry if I did not use the correct terminology
No prob - just making sure 😃 . That's an issue when it comes to hardware - really narrows camera selection for eye tracking, because most RGB cameras have built in filters that filter OUT the IR spectrum.
...at a scale that is really too small to remove manually after production.
@user-93f13b Make sure you have a look at what I just said regarding IR filters.
But what would be causing the system to crash. As well since that camera switches between the filters is there a way to have it stay in the ir mode?
How would I show the crash log?
There should be a capture.log file in your capture_settings folder. Upload it after the crash without restarting Capture. There might be a low-level libuvc issue specific to your camera, but this is out of reach for us to fix.
alright. Thanks all for the quick responses. Also a quick question that I know has been addressed before: should I worry about the amount of IR this thing puts out?
I think yes. Measure it, and compare to the flux outside on a sunny day.
Is there a reason the pupil cam isn't sufficient for your needs?
...just cost saving?
I am using the diy kit. And yes I am using the headset for my senior thesis. The department gives out money for that type of research, and it was not enough for the full headset.
Ok, makes sense.
Do you know that you can buy the eye cameras separately?
Yes, I looked into that, the camera itself is still too expensive. It looked like around 400, correct?
I am waiting for the HD 6000 to come in. The first one I frankly did not put together well so it stopped working. Thinking it was something to do with soldering. But this was a cheap alternative I wanted to try because it looked like it was better than the HD 6000.
They've just upgraded to a new camera type. Maybe you can talk em down on a 120 hz for student academic use 😛 . (no, I don't work for them)
...and don't forget the educational discount.
( I probably should have said that in a private message :P)
always worth a shot, though.
No worries I dont mind haha
E0115 11:13:55.623034 13320 trust_region_minimizer.cc:72] Terminating: Residual and Jacobian evaluation failed.
I am getting that when I run the program.
Sorry, pretty new to this, and have been having problems getting things working / finding resources to help
This is not a problem related to the camera. This is output of the 3d model software. It is an indication of a low-level issue if there are no uvc related errors in the logs.
It looks as if I keep getting flashes of the IR going on and off almost. I know this sounds weird
this is the line it shows when it crashes.
E0115 11:25:05.889878 7912 trust_region_minimizer.cc:72] Terminating: Residual and Jacobian evaluation failed.
OpenCV Error: Assertion failed (0 <= _rowRange.start && _rowRange.start <= _rowRange.end && _rowRange.end <= m.rows) in cv::Mat::Mat, file C:\build\master_winpack-build-win64-vc14\opencv\modules\core\src\matrix.cpp, line 483
Damn, if we got that kind of support at Tobii I would never need to use Python, I'm pretty sure lol
@user-93f13b This error message is new to me. It might be that this is a usb transmission issue and that opencv crashes when it receives a broken frame. But this is only a vague guess.
@user-93f13b a general approach to problems like this: make sure all your packages are up to date.
'specially openCV, which seems to be the thing crashing here.
Recording capture question: are the calibration started and ended events present in recording data? And perhaps is even more specific data present, like calibration position?
@user-d74bad Yes, all this data is present in the pupil_data file as part of the notifications, if you calibrated during the recording. You can also calibrate offline if the recorded video includes the calibration markers or similar things that the subject had to fixate during the recording.
excellent, thanks
I have installed pyserial on windows, but after moving my plugin source code (containing "import serial") into the plugins folder and running the capture app, there is a warning: world - [WARNING] plugin: Failed to load 'real_time_picture'. Reason: 'No module named 'serial''. How do I solve this problem?
@user-6e1816 The bundle behaves differently than running from source. Please try copying the pyserial module into the plugin folder as well.
How do I subscribe to 3d pupil detector?
zmq::context_t context(1);
zmq::socket_t subscriber(context, ZMQ_SUB);
zmq::message_t message;
subscriber.connect("tcp://127.0.0.1:50020");

while (1) {
    subscriber.recv(&message);
    cout << std::string(static_cast<char *>(message.data()), message.size()) << "\n";
}
Hey just wondering if anybody has an e-prime (or similar, we're not picky) script for inserting triggers/measurement points in to the tracker data that they wouldn't mind letting me have a look at. We are building a paradigm involving image displays and need to be able to insert timestamps for picture onset, etc
@user-e938ee Please be aware that 50020 is the designated Pupil Remote port. You will need a ZMQ_REQ socket to talk to it. You can connect and request the port for the actual subscription socket. See this Python script as reference https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py
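The Python version of that pattern looks roughly like this (the C++ code is analogous; addresses and the 'pupil.' topic prefix are the defaults):

```python
import zmq
import msgpack

ctx = zmq.Context()

# 1. Ask Pupil Remote (REQ/REP on port 50020) where the SUB socket lives.
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')
remote.send_string('SUB_PORT')
sub_port = remote.recv_string()

# 2. Subscribe to pupil data on the returned port.
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:{}'.format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil.')  # matches 'pupil.0' and 'pupil.1'

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)   # dict with 'timestamp', 'confidence', ...
    print(topic.decode(), datum['confidence'])
```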
@user-921ec1 you can use Pupil Remote as well in your case. Just use it to send custom notifications that contain information about your triggers and their timestamps, and add the record key in order for the notification to be stored during a recording.
You should use it to sync clocks between eprime and Capture, too.
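A minimal sketch of such a trigger, sent over the same Pupil Remote REQ socket (the subject name is just an example; the 'record' key is what makes the notification end up in the recording's pupil_data):

```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect('tcp://127.0.0.1:50020')

def send_trigger(subject, timestamp):
    # Notifications are a two-part message: 'notify.<subject>' + msgpack payload.
    notification = {'subject': subject, 'timestamp': timestamp, 'record': True}
    remote.send_string('notify.' + subject, flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(notification, use_bin_type=True))
    return remote.recv_string()  # Pupil Remote acknowledges every request

# Use Capture's own clock ('t') so the trigger lives in the same time base as the gaze data.
remote.send_string('t')
now = float(remote.recv_string())
send_trigger('picture_onset', now)
```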
Hi, looking back at offline calibration and it seems that some of the plugins are missing? As well as visualisation and analysis, there doesn't seem to be the third section that includes the offline calibration plugin. Does anyone know why this is the case, and how we can find the plugin?
@user-84047a I would recommend to upgrade to the newest version. 😃
Hurrrm.... "Exception: pyndsi version is to old. Please upgrade"
I think I have screwed up my local git repo, so if this isn't a common error, forget it.
Even though the application is for hmd-eyes, I'm pretty sure this question is for pupil_capture in general. Is it possible to launch the eye windows already minimized? When stopping play on a Unity scene the eye windows close. When starting play the eye windows open up but take over focus. This causes the Unity application to pause and go to a SteamVR loading screen. Things only start working again when the Unity app is given focus by clicking on it... and for performance reasons we want to minimize the eye windows first before we give Unity focus. It would be easiest if there was an option to open the eye windows minimized.
@user-8779ef pyndsi is one of the dependencies.
@user-5d12b0 I will look into it.
@papr , today I finally got around to testing the labstreaminglayer plugin. It works, but there are a couple issues. 1) The metadata isn't in the same format as other streams I was collecting. That might be my fault. I'll look into the metadata standard a little more and make changes to the pupil plugin or my streams, as appropriate. 2) The signals were highly unstable. This might not have anything to do with the plugin and might just be a consequence of me finally looking at the continuous data on a graph for the first time, but it seems like 20% of the samples were low confidence and had incorrect estimates of pupil position. I'll see if I can generate a png really quickly...
Recorded with Vive add-on and labstreaminglayer plugin. Values are z-scored.
@papr Yeah, I see that. Trying to upgrade... is it not a pupil module?
....because pip isn't seeing it.
If pip does not upgrade, use the -U flag. It will force an update.
sudo pip install pyndsi --upgrade -U ?
No, we did not register the package yet. You will have to use the repository URL as it is described in the docs
Ahh, Ok, thanks.
--upgrade is the same as -U
I would have thought downloading the master would have fixed this.
...forking the master branch would include the dependencies.
However, this happens with a fresh fork.
No, pyndsi is not part of the Pupil repository. It needs to be installed independently.
Same goes for pyrealsense, pyav, pyglui, etc
Ok, thanks for helping me out. Sorry for being all amateur-hour over here - still somewhat new to app development.
Yeah, understood.
This should be sufficient info for a fix.
You are welcome. We know that the whole dependencies part of the docs is very painful (especially on Windows), but maintaining an automatic installation is not something we can realize right now. We have the bundles if you do not want to go through the manual installation.
I'm working on it in pycharm so
I'll deal. Thanks!
gotta run. Take it easy.
@user-54a6a8 looking at the graph I think the data looks explainable. You can ignore all data with low confidence. In the beginning eye0 is not detected and then we are seeing a few blinks and a few frames with no detection. (blinks can be identified by low confidence on both eyes simultaneously.)
@mpk the x axis is seconds. The ‘beginning’ is actually 35 seconds into the recording. Segments like that were found throughout the data.
@user-5d12b0 Why is the timestamp graph constant?
@user-5d12b0 Could you make such a recording again, but make a Pupil Recording in parallel to see if the data is transmitted correctly or if the issue is already in Pupil Capture
The time stamps aren’t constant but rising slowly. I z-scored everything so I could see them on the same axes. I’ll do pupil capture in a few hours.
Ah, right, you mentioned that. Ok, lets wait for these results then.
@papr , is there a way for me to record eye data without doing world capture? With the HMD addon there is no world camera.
Yes, you need to activate the Test Image/Fake Capture
I was able to capture it. I loaded it up in pupil_player and it didn't look very good. The result of the hmd calibration was pretty bad too. I'll finish writing my Python script to plot my LSL data and the pupil_data then maybe re-record if necessary.
@papr Any sample code on how to read pupil_data in Python? I see from the docs that it's python pickled pupil data, but pupil.load(filename) doesn't work. I'll investigate how pupil_player does it on GitHub.
Actually, it is msgpack encoded. You can use this function to read the data: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L51-L66
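Or, as a standalone hedged sketch without importing file_methods (assumes msgpack-python is installed; key names as in recordings from this Capture version):

```python
import msgpack

with open('recording/pupil_data', 'rb') as f:
    pupil_data = msgpack.unpack(f, raw=False)

print(pupil_data.keys())                 # e.g. 'pupil_positions', 'gaze_positions', 'notifications'
print(pupil_data['pupil_positions'][0])  # one datum: 'timestamp', 'confidence', 'norm_pos', ...
```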
It works, thanks.
@papr : Good news! The data are effectively identical between LSL and pupil_capture.
Those segments of data where confidence was high are segments where I was running the HMD calibration. The data look pretty good except for the one-eyed blinks in eye_1. In between calibrations I didn't take off the HMD. I didn't even stop playing on the unity project. I did, however, use SteamVR to open the desktop view so I could see my desktop through the HMD.
Did you record the eye videos as well by any chance? Could you share them with us?
I did, but I'm not sure they are correct. eye0.mp4 is 580 MB and it won't play in any player I tried. Same with eye1.mp4.
I couldn't get them to play in pupil_player either.
That is ok. VLC should work though.
I tried VLC. Do I need a particular codec pack?
Mmh. Could you upload one of the videos to Google Drive, or something similar?
I can. First I'll redo the recording with only ~ 20 seconds of data.
Which version of Capture do you use?
1.2
This time the videos loaded in VLC. I don't know what happened with the last one. I was also able to view them in pupil_player and do offline detection. Still problematic though. I'm hoping you'll see something obvious in the videos about how wrong my setup is.
Hello! Is there any data output that shows saccade length (distance and/or duration)?
@user-5d12b0 As someone patiently awaiting news that HMD calib works, I'm thankful for the work you're putting into this.
@papr I just shared the videos with you via google drive, at your pupil_labs address.
@user-5d12b0 You're in a neuro lab?
@user-5d12b0 great, thank you. I will have a look at it later.
@user-8779ef Yes. In Ottawa Canada. We do invasive brain computer interface stuff with applications in Parkinson's and assistive communication for tetraplegics.
Are you at RIT? I did my PhD at SUNY Albany with Jon Wolpaw. The group is now called the National Center for Adaptive Neurotechnologies.
@user-5d12b0 Awesome. Yeah, I worked in Brett Fajen's lab @ RPI, and have interacted with those guys once or twice.
@user-5d12b0 I now run my own lab at RIT where we study the principles of visually guided action and oculomotor control.
Hey, one tip for offline pupil detection. With the mobile tracker, the results are MUCH better if you lower the threshold on new eye models.
(for 3d pupil detection). I lower it to like .992 or so.
By default, the system switches modes far too quickly, disrupting the track often.
Thanks. I'm more interested in online detection, but that applies to online detection as well, I guess.
yeah, that makes sense. ...and, yes, it would.
In your field you probably know Doug Crawford? I'm trying to setup an experiment for a collaboration with him, and for a student in the lab. For the student's experiment we need online detection to progress through the task and give feedback. For the experiment with Doug, we can do offline detection. I'm a bit worried about the file sizes though.
Do you plan to record eye videos for all your experiments?
Yes, I know Doug. I have met him at several conferences, and one of my graduate students recently presented at a workshop he was involved in.
Data is cheap.
Hard drives are cheap.
Yes, this stuff is space hungry, but I'm sure doug can find the scratch for a few drives. 😃
Yes, I plan to record eye videos for all of them.
It's probably a good idea for us too. I guess it's not the storage I'm worried about as much as all the simultaneous things we are doing.
Yes, I understand.
Do you have everything running on one PC?
The current plan is to do the VR experiment and stream out task-related data on one PC, then do the neural acquisition and file storage on a second PC.
Sounds like a good plan.
Our VR PC has already overheated a couple times just doing the experiment and the eye tracker.
I can only speak vaguely, but I believe we had issues recording eye images and doing 3D pupil detection at the same time.
Not sure where the bottleneck was. Our approach is to use 2D pupil det. during collection, and then 3D post-hoc.
offline 3D detection.
I might have to plug the eye tracker into the second PC, and shuttle the data from the second PC back to the VR PC for task progression.
I have yet to try that. I somehow doubt hmd-eyes supports that right now.
That's a pupil service task common to the mobile and VR tracker environments, so it might actually work.
as I understand it, the IPC backbone is well developed, and they have been really careful with timestamps
...but, ask papr!
ehr, maybe this isn't IPC, but their networking (which is something else).
in either case, I believe that aspect of the system is fairly far along.
I know it SHOULD work, but using the .unitypackage from hmd-eyes has proven to be difficult. e.g., Whenever we were running pupil_service in a different directory than where it was specified in the Unity inspector, we could still connect to the already running pupil_service but other things wouldn't work. This was fixed by matching the inspector and the actual location of the running service. Sorry I can't remember the details of what wasn't working.
interesting.
I'm letting the student toil away on those problems.
eeesh. Well sir, good luck. I'll keep an eye on your messages so that I stay abreast of your issues. You're farther along than I am, though.
Last I checked, the poor 3d calibration dissuaded me from continuing.
We aren't doing 3d calibration at all.
Only 2D, and then reconstructing the gaze vector from the head pose + the 2D gaze vector.
ah, OK, nice.
that's a good way to go. I started down that path, but couldn't find enough info to reconstruct the gaze vector (in a 3rd party program)
issues with unity's weird frustum. Not an insurmountable obstacle, but it put me off for now.
All of our selectable objects have collision spheres around them so we can trigger progression on object selection.
Great if all your objects are spherical in shape, or if precision isn't a big deal
Hey, sorry, but I have to run
Thanks for the chat.
Yes, lets keep in touch.
And the helpful info.
I try, and I want you to succeed, so I'll do what I can.
FYI trying to work with krystel huxlin @ uofR on a home system for visual rehab
needs a tracker
want to use pupil tracker, but it's not there yet. Using SMI for now. Not a good long term solution (they were bought by apple)
k, chat later, thanks
There's Tobii. A bit expensive, and a bit too black box for me, but I might run out of time. Later.
yeah, forget tobii
they don't care about researchers
black box is right. their demos at scientific conferences are designed to hide the system's accuracy
I always give them guff, they never address the issue.
I'm not spending 10k+ just to find out if the thing is useable.
...at least they finally started providing eye images
still, they lost my faith ages ago.
Right. I haven't looked into their API to see what was available. If eye images aren't available then that's a problem. Thanks for your perspective.
@papr Getting some strange behavior for offline 3d detection in the latest source
Notice that the eye video is blank, despite the presence of eye videos in the scene insets.
...and notice that the FPS is 0 in the eye video, and the progress bar is stuck at 0%. Strangely enough, the eye videos still close as if the process did a good job. However, no pupil data is cached.
Shall I post an issue?
@papr Thanks for the advice regarding using Pupil Service to send info through to E-Prime. Is there any documentation or are there guides at all on how to do that?
@user-8779ef this was a regression caused by commit https://github.com/pupil-labs/pupil/commit/44e719825ac6b8f615ae791fc6b496783919142c I just fixed it. Please pull master.
@user-8779ef Thanks for catching that 🙂
@user-921ec1 I would start by reading and understanding these pupil-helpers examples https://github.com/pupil-labs/pupil-helpers
@user-5d12b0 Your data looks different to me
2d offline detection
This is the Jupyter notebook that I used to visualize the data.
@papr Does doing a 2d offline detection overwrite the pupil_data file?
because if so then the zip I sent you has the offline-detected data. Not the online. I'm only interested in the online data.
online and offline 2d are the same. Was yours recording in 3d?
I don't even know what that means. How do you record in 3d?
check detection and mapping mode in the world window general settings.
2D
The graph you showed is different than the previous graph I posted because I uploaded a new shorter recording than what I used to generate the graph I posted.
What you plotted is similar to what I have for the new data. Aren't those extended sections of low-confidence data in eye1 problematic?
Hi, I want to know if it is possible to subscribe to multiple topics on a single socket. I see that in the source code for Connection.InitializeSubscriptionSocket(), there is a subport that is used to initialize sockets, which are then paired with a topic. I believe this is causing Unity to crash when I make multiple calls to this function, as it tries to establish new sockets for topics but using the same port. What I want to achieve is very simple: I want to be able to subscribe to multiple topics.
@papr 100 * np.sum(np.asarray(pupil_data[1]['confidence']) < 0.25) / len(pupil_data[1]['confidence'])
20.510238335011749
So 20% of my eye1 data has low confidence. What am I doing wrong?
@user-bb3137 you can subscribe to multiple topics on one socket in general. For Unity3d-specific questions I recommend asking in the hmd-eyes channel.
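For reference, here is a minimal Python/pyzmq sketch of the same idea (several topic filters on one SUB socket). The localhost address and port are placeholders; normally you would request the SUB port from Pupil Remote first, as in the pupil-helpers examples linked later in this chat.
```python
import zmq
import msgpack

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:50021")  # placeholder; query the real SUB port from Pupil Remote

# One socket, several subscriptions: each call just adds another prefix filter.
for topic in ("pupil.", "gaze.", "notify."):
    sub.setsockopt_string(zmq.SUBSCRIBE, topic)

while True:
    msg = sub.recv_multipart()
    topic, payload = msg[0].decode(), msgpack.loads(msg[1], raw=False)
    print(topic, payload.get("timestamp"))
```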
@user-5d12b0 if @papr has your eye data he will have a look at in on monday and let you know!
He has the eye videos. OK thanks. Do you have any ballpark numbers for what % of the data can be expected to have low confidence? It's not even the %... I'd be fine with 20% if it was exactly every 5th frame. The problem is that I have blocks of low confidence ~ 1-second long. My next idea is that it is related to the strain I'm putting on the system. I will try putting the eye tracker on a different computer.
but no time today. Maybe Monday. Thanks again.
@user-5d12b0 detection requires a decent view of the pupil. Let me check the videos and give you a more specific answer on monday.
@papr No prob! Another issue - debug mode no longer works for me. It hangs after a folder is dropped on Player. Could this be a config issue?
Hello, for some reason, the side panel doesn't seem to be working in the pupil player (1.2.7) GUI.
The buttons on the side panel don't bring out the necessary pane for each option. Any thoughts on that? Seems to work fine on Pupil capture.
Tried re-installing pupil player. Experienced same problem.
@user-c828f5 Could your pane simply be adjusted to minimum width? Try changing the width by dragging from the small lines halfway down its height, and just outside the black sidepane.
@user-8779ef I tried all the possible lines to move it around but no luck! Also, it seems I have lost control of the time slider line (the vertical cyan line) using my mouse.
@user-c828f5 Mac or PC?
Search for the pupil_player_settings folder and find the player.log. Open it up and dump it here, if it's not super long.
@Rudra8#0474 just delete the settings file in ~/pupil_player_settings
Hello! Is there any data output that shows saccade length (distance and/or duration)?
I'm getting an error when I try and import a module (pandas) into a plugin that is in the pupil_player_settings/plugins folder.
Is this something to expect - that I can't add modules that pupil isn't compiled with?
If you run from bundle yes
HUrmn. Bummer.
I'll see if I can convert my code to run on numpy only.
wrote it in a notebook
Or run from source. That should work as well.
Yeah, but I want to give this to a non-cs undergrad or two to use for analysis . Best to hide that from them.
I see.
thanks for the input, though
is scipy a part of the package?
yes, yes it is.
@papr Still having issues running in debug mode. It hangs on loading the folder. Have any ideas?
What debug mode do you mean?
this is using pycharm on a mac.
I realize that this may be a pycharm question. Perhaps I should erase a folder or settings file or something
it hangs on ...
player - [WARNING] camera_models: Loading dummy calibration player - [INFO] gaze_producers: Calibrating "1" in 3d mode...
Strange, no?
Did you check if you have a break point somewhere which you are not aware of?
yeah, no bp
Hello - The side pane with plugins will not expand. I've re-downloaded the software and I've deleted the settings, but no luck. Is there anything else I can try?
May I ask if there is a research channel? If not, can we create one?
@user-2798d6 deleting the settings should help. The next release will have a fix for that issue
@wrp I think @user-33d9bc has a great point. What do you think?
ok, thanks! Also, is there any part of the raw data output that gives info about saccade length or duration?
Not yet. I have been working on a saccade detector but it is not finished yet
@user-2798d6 I may be able to provide one ... depending on issues related to an NDA. I'll text you if it works out, but don't hold your breath.
Ok, thanks everyone! 😃
@papr for inspiration on the timeline class, you might look at Bokeh.
@papr ...where you associate a columndatasource and style with each glyph / timeline per figure.
@papr the link to your example client is broken, is there a working one I could check out? https://github.com/pupil-labs/hmd-eyes/tree/master/hmd_calibration
@user-921ec1 My link should point to a repository containing multiple examples. These are minimal examples, not full clients. These are independent of hmd_eyes
but use the same network interface.
@papr but the link is 404'd?
Your link is not the link I posted. These are the examples: https://github.com/pupil-labs/pupil-helpers
Oh I see what you mean, cheers!
@papr Do you happen to have a Python program with the Pupil Labs tracker code in it? It would be incredibly helpful to see/backwards engineer a functioning paradigm (I'm not too terribly programming-savvy)
Do you mean like a user program that fully integrates Pupil Remote?
Yeah 😃
Hi everyone! Does anybody know how to integrate Pupil Labs with LSL?
@user-8deb3b @user-54a6a8 has been working on this lately.
@user-8779ef @user-8deb3b - https://github.com/sccn/labstreaminglayer/tree/master/Apps/PupilLabs
@user-33d9bc @papr I created 🔬 research-publications as a place for the community to share their research and publications
Thanks, wrp
So, is anyone here developing on a Mac? I'm having serious issues with debugging, regardless of the IDE
Sadly, this problem seems quite esoteric. Three IDEs and not one will allow me to properly debug Pupil Player.
Is this issue specific to Player or does it appear in Capture as well?
I can try it with capture later today, papr.
Notice that Pycharm and VS Code fail in different ways, but both during calibration
and <sigh>, only in debug mode.
Wing actually doesn't crash, but it also ignores the breakpoints I'm placing in my submodule. Someone get me some scotch.
So, I'm stuck in the mud. Here's what I haven't explored yet - I've tried this only with one machine and only one video file, although I have tried deleting the offline data.
I'm going to try a different machine and video file later today.
...it will also be a mac running high sierra, though.
@user-8779ef After a short test, I can only tell that PyCharm uses 100% of my cpu to run Pupil Player in debug mode. I think that this is mostly a speed issue.
I know, it is far from nice, but maybe you can get further by simply using Sublime Text + manual breakpoints using https://docs.python.org/3/library/pdb.html
@papr I have successfully debugged in PyCharm without issue for weeks, until the latest commit. Also, after 3 IDEs failing me, this tells me that the issue may lie elsewhere. Of course, I have no idea how you guys would go about addressing the possibility that something in Pupil is the cause.
I'm vocal because I'm wondering if it's just me, or if others have the same issue. This issue has eaten up a lot of my time, lately.
If nothing else provides more details, I'll try Sublime Text.
My colleague @marc uses PyCharm regularly and did not encounter any issues yet.
@papr Thanks for inquiring. Mac, or PC?
I used it on my Macbook Pro (Late 2015) with High Sierra
My colleague uses it on a Linux machine.
Hi All. We have some data from a session where we did the calibration, stopped the recording (but did not move the trackers), and then started the data collection session. Is it possible to combine these recordings or somehow tell pupil player to apply the calibration from the first segment to the second video segment?
@user-78dc8f There is an open issue for this feature request: https://github.com/pupil-labs/pupil/issues/1003 Unfortunately, this is not a trivial problem to solve.
In other words, this is not possible yet.
@papr thanks for the link. Any idea on the timescale for cracking this? And can we link into this thread to follow updates?
@papr Sadly, we have about 90 participants worth of data that has this issue since our team in India didn't realize this was an issue until just recently...
@user-78dc8f I would classify this as a mid- to long-term issue. There are a lot of issues that have higher priority to us due to the number of people affected by them.
@user-78dc8f I'm sorry to hear that!
I'm not sure how we can best help, since this is not an easy problem to solve.
@papr thanks for the info. If we can tip the balance toward 'mid' that would be brilliant. Here's the basic problem...we hook up a mom with the eye-tracker, then we have to set up the child. This can take a while, and we were having storage and over-heating issues. So we started stopping the videos in-between as a solution. And now we realize this was a problematic solution...
We have lots of data from our UK sample to process, so we can wait a bit. So if 'mid' means a few months, that would be brilliant.
@papr I think we could save the gaze mapper config in offline_data and could then move this to a different recording...
Alternatively, you could manually merge the recordings, most importantly the videos and their timestamp files. But I cannot tell you for sure that this would work.
both ideas sound promising...any way we can help test these possibilities?
We are not able to provide the manual merging solution. You would need to test this on your own. You probably can concatenate the videos using https://trac.ffmpeg.org/wiki/Concatenate The timestamp files are just numpy arrays that can be concatenated as well.
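A rough sketch of the timestamp half of that idea, assuming the usual Pupil recording layout (world_timestamps.npy next to world.mp4); the folder and file names are illustrative and this is untested, so verify against your own recordings:
```python
# Join the videos first, e.g. with the concat demuxer from the wiki page above:
#   ffmpeg -f concat -safe 0 -i files.txt -c copy merged_recording/world.mp4
# where files.txt lists the two world videos in order.
import numpy as np

ts_a = np.load("recording_a/world_timestamps.npy")
ts_b = np.load("recording_b/world_timestamps.npy")

# Timestamps are plain numpy arrays, so they can simply be concatenated
# in the same order as the videos were joined.
np.save("merged_recording/world_timestamps.npy", np.concatenate((ts_a, ts_b)))
```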
@papr alrighty. I'll give this a try in the next few weeks. Basic idea is to just force them together and then see what pupil player does, yes?
Exactly.
@papr @mpk ok. I'll give the merge idea a try. If that doesn't work, perhaps you can keep mulling over the idea of saving the gaze mapper config and moving this to a different recording...
@user-78dc8f I've put my two cents in as well, that this would be a valuable feature. Especially in the developmental context. Getting multiple calib. sequences out of a child is practically impossible. Considering the number of develop. psychologists that would love to update their eye trackers...
@user-8779ef Excellent point. I hadn't even gone there yet. Yes, multiple calibration runs with a child would be impossible...I can see the value of applying a previous calibration. Even if it isn't perfect, it's better than what we have currently (which is often nothing...)
Which tracker are you using now? POs sci?
@user-8779ef eyelink II in the lab. pupil labs in the home...
Are you able to get kids to wear the eyelink 2? This is impressive. On the images it looks very heavy...
@papr No. We use a remote system in the lab....
Ah, I see. Thanks for the clarification.
@papr @user-8779ef @mpk Thanks for the input. We'll try the concatenate approach and report back soon...
@user-78dc8f Please do report back. Thanks.
@user-78dc8f Please do so! Preferably in the issue linked above, such that we have a persistent reference for other users.
@papr yep. Got it...
Hey, wondering if anyone has had success using the off screen calibration markers. I've had a lot of trouble using printed markers, and have found that the system continually tells me I haven't calibrated enough markers (I've done upwards of 20) - the calibrations I have managed to get with it were extremely subpar. Wondering if anyone has figured out the best method for presenting markers or whether this functionality is just unideal.
@user-02ae76 You only need to print one marker. The calibration should look similar to what can be seen in the beginning of the Offline Calibration tutorial: https://www.youtube.com/watch?v=lPtwAkjNT2Q
@user-02ae76 Alternatively, you could share a small recording with us and we could have a look at it and maybe provide ideas for improved detection.
I have tried it this way, my issue is when I try to calibrate online this way, it runs into issues as I am moving the marker. I may share a recording soon to get advice
Does off-screen just work better when offline?
@papr That video is really helpful. Thanks!
@papr You'll get much better results if you keep the marker stationary, and have the person engage in VOR.
you'll have less gaze/object error from biology, independent of the pupil tracker, because the movement of the target in the head frame is voluntary and predictable. Jeff Pelz does this and calls it the "head tic" method.
"Head tick" because he also had them move in small discrete increments.
@user-8779ef can you clarify the stationary method? Would you simply tack the marker somewhere and have the person keep their gaze on the marker while moving their head to different angles?
@user-02ae76 Yep. That's the only difference.
@user-ba183b ....and be sure to have them move their head somewhat slowly, and try not to blink 😃
@user-8779ef I appreciate the advice! I had seen someone mention that method on the Google a few years back and wasn't sure if updates had made it less ideal. Definitely will try doing it with a stationary marker!
Is there any more documentation that you know of on the "head tic" method?
@arispawgld#8014 No, but I'll try and remember to ask Jeff tomorrow, or the next time I see him (we're both at RIT)
@user-02ae76 One limitation of both approaches is that they assume you are always looking at the marker, and never blink. In truth, I would imagine folks might accidentally look away, or blink. Not sure how pupil handles these exceptions. Drops in pupil tracking confidence might deal with blinks. No way to tell when the person is not looking at the marker, but there are algorithms to detect outliers, like RANSAC.
We filter low confidence pupil positions before calibrating.
@papr , did you get the chance to look at the eye videos I shared with you? Is there anything obvious I'm doing wrong? If it's just a settings thing, can you recommend some tips on "things to look for" while I tweak the settings? e.g., the pupil debug window which I have no idea how to interpret.
Mmh, generally it looked good. My guess is that these sections of low confidence are due to bad contrast. I would suggest either increasing the eye cameras' exposure times or increasing the gamma/gain/contrast values.
Thank you, I will try. I think the exposure times are already at their upper limit for 90 fps. But I can reduce the fps if it will lead to overall better tracking.
Then I would recommend the second option. The tracking is already good most of the time. This is just about optimizing the last few percent.
@user-5d12b0 one more option is to play with gain, gamma and contrast.
@papr @mpk Sorry for the late reply, I'm working on something else at the moment... What is a good way to monitor pupil tracking performance, without doing a full recalibration? What should I look for in pupil_capture as a good indicator that things are working well?
There is a confidence graph for each eye in the Pupil Capture world window.
also available in the pupil data stream on the IPC
Hey guys, running into a bug with the debugging window - it stays black even though previously it worked. Already tried reinstalling, should I raise an issue in Github?
@user-02ae76 This is the 2d detector debug window. This is supposed to look like that. I guess you expected the debug window for the 3d mode?
@user-02ae76 You can change the detection mode in the general settings of the World window.
@papr I feel silly, thank you!
Might be a silly question, but in what settings would one use the 2D detection? I assume it would operate more like screen based ET, so I wonder what advantages there are within headset models.
The 2d calibration is more precise than 3d but way more prone to slippage. 2d/3d is just the pupil detection and mapping method. It does not matter if you are in a VR or a flat screen based setting.
Okay, I was misinterpreting what the dimension referred to. I'll experiment with both and see what works best for us, thanks again for the help!
You are welcome 🙂
@user-5d12b0 If exposure times are at their upper limit, then you are going to get motion blur.
@user-5d12b0 I mean, that's generally the case with cameras. I can't really say how bad it is with the pupil system because I haven't played with it. Increase the ambient light levels in the room to get better image quality with lower exposure times.
@user-5d12b0 Actually, I'm second guessing my suggestion to increase ambient light levels. We're dealing with the IR camera here, so the ambient light levels (which are mostly in the visible range) shouldn't play much of a role.
Hi guys, I just found out about this platform and I'm thinking of integrating it in my Bachelor Thesis. So my quick question is: can you integrate it with Unreal Engine? Has anyone done a project with it + Unreal?
Hi guys, I opened a new recording in Pupil Player this morning and the clickable sidebar buttons would not open their respective menus. I attempted to reinstall Player and Capture, and when attempting to open the recordings they are not valid.
Recordings were made using Pupil Mobile, and Capture and Player are installed on my Mac
macOS Sierra version 10.12.6
Any ideas would be greatly appreciated, thanks.
@user-8a8051 running latest version of Pupil software?
yes
@user-8a8051 Could you restart with default settings?
From the general menu
@user-f1eba3 we maintain a Unity3d plugin. There is no official support for unreal. However you can subscribe to messages over the network and develop your own plugin for Unreal - this would be a solid contribution 😄
I'm going to do a little bit of research on integrating third-party software in Unreal. My professor told me we should be able to work with Pupil from March, sooo if we decide on this I will definitely try to integrate it.
@wrp initially, even the general menu would not open. after the reinstall, the initial grey screen ('drop a recording' screen) remains and says oops that was not a valid recording
Do you think it is feasible to develop such a plugin in a month @wrp ?
@user-8a8051 please delete the pupil_player_settings directory in your home directory
@user-8a8051 you can direct message me with a link to a small sample recording that is not working for you and I can take a look
@user-f1eba3 it really depends on what you want to do and your familiarity with the dev environment/coding for unreal
I don't have any insight into Unreal
We want to integrate it in this project : http://robcog.org/. I'm still somewhat of a noob even though I have developed one or 2 things in Unreal. I'm betting some colleagues could help me on this if I can figure out exactly what is to be done. 😄
Hello! Is there a plugin or some way to add a time bar in Player rather than or in addition to the frame bar?
[email removed] - I wanted to let you know that we created a new repo pupil-community
that will serve as a place to share community contributed projects, forks, plugins, and scripts for Pupil. https://github.com/pupil-labs/pupil-community
If you would like to add your work (or edit) please fork this repo and make a PR to the README file 😄
I know there are a lot more plugins, custom scripts, and projects out there and it would be great to have these all in one place so that the community can build on top of your work and contribute back.
Hello, I'm attempting to read pupil info into MATLAB using the scripts posted by @user-ed537d in https://github.com/matiarj/pupil-helpers/commit/8d25d75645cf53082a423943ed79418d1563472b . I'm able to get the Python server (pythonPupil.py) to connect to Pupil Capture (I see a stream of "Average Sent..." messages in the terminal), and I'm able to run PupilNetwork2.m in MATLAB and receive a steady stream of tracker data (e.g. Eye tracking data: [0.46, 0.10], 1.00) at the MATLAB command line. However, the gaze coordinates/confidence messages I'm seeing at the MATLAB command line are not changing in value at all when I wear the eye-tracker and start the acquisition, suggesting something may be up with the connection. I'm not seeing any error messages, so I'm not sure what the solution is. Does anyone have any suggestions? I'm running Python 3, MacBook Pro (Sierra), MATLAB 2017b. Any advice would be much appreciated! Thanks!
Can anybody show me how accurate this device is :O?
sorry my bad. Software, hardware... Pupil at all i mean
Hi @user-3c2df0 gaze accuracy with 2d mode (in ideal circumstances) is 0.6 degrees, with 3d mode you should be able to achieve 1.4 deg of accuracy
Is there a video I can see?
Or do you mind sharing your screen for a minute so I can see, while you say: I'm looking at left, right, left, right
Or, if you have the information, rather in pixels than degrees?
@user-3c2df0 you might like to check out some community contributed demos to get a qualitative idea of accuracy: https://www.youtube.com/watch?v=X_BalnBOcpk&list=PLi20Yl1k_57pr6zl9D6JHSrOWyLXxsTQN
👆 is a playlist of community contributed demo videos
We report gaze accuracy in degrees because this is the industry standard, but also because the resolution of the world/scene camera is variable (e.g. you can change the resolution of the world/scene camera)
I hope this is helpful
Woooah, is that person in the video interacting with multiple screens? Now I'm asking myself BUT how does it work..? Receivers on monitors and senders on the face?
This is absolutely awesome x"D
Or am I imagining things and.. the device is simply an eye tracker, not even hooked up to the computer x"D?
(not interacting*)
Sorry not understanding the concept i dont live in the future yet 😮
Ok so this is basically open source... So I can basically hook up as many devices as I want ._.
WTF ❤
@user-3c2df0 You need to hook up Pupil hardware to a computing device (e.g. laptop, desktop, or Android device running Pupil Mobile) in order to power the Pupil Headset and get data (video frames) from the Pupil headset. Pupil Capture is the software we have developed for real-time pupil detection, gaze estimation, surface detection, and more. You can subscribe to Pupil's data stream over the network. So, yes, you can hook it up to other computing devices that are connected on the same network.
For more information on the API/network communication please see: https://docs.pupil-labs.com/#interprocess-and-network-communication
@ÐarkMøðns#4964 It's good to be excited, but realize that the device only gives you a crosshair on an image after calibration. As the video demonstrates, we make many eye movements a second. If you want more information, like WHAT is under the crosshair, you will have to manually label it, or develop an algorithm to analyze the image data, or that communicates between the eye tracker and your rendering software (e.g. unity). In either case, to get accurate data takes time, expertise, and patience.
@user-3c2df0 you can also use our fiducial marker tracker to autolocate the screen in the video and map gaze onto that.
Is there any way to use an online calibration as a basis for offline? i.e. start with the online calibration and adjust from there.
So I'm thinking of making a plugin for Unreal Engine. Now I'm researching the approach.
Should I build:
a server publishing the eye tracking data over the network? Is there something already working in this area that I just need to subscribe to, or do I have to implement the publisher as well in the SDK?
Or
a client (Unreal Engine plugin) subscribing to the server and accessing the data?
@user-f1eba3 Pupil Capture publishes all data via a zmq.PUB socket. You just need to subscribe to this socket to receive the data that you need.
I would suggest to have a look at the https://github.com/pupil-labs/hmd-eyes/ project.
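For a plain-Python reference (outside Unity), here is a minimal sketch along the lines of the pupil-helpers examples: ask Pupil Remote for the SUB port, then subscribe to the PUB socket. The localhost address and default port 50020 are assumptions for a local, default setup.
```python
import zmq
import msgpack

ctx = zmq.Context()

# Pupil Remote (REQ/REP) tells us which port the PUB socket is bound to.
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

# Subscribe to gaze data on the PUB socket.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    print(topic.decode(), gaze["norm_pos"], gaze["confidence"])
```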
Hello! Is there a plugin or some way to add a time bar in Player rather than or in addition to the frame bar?
@user-2798d6 what data should the timebar show?
@papr - minutes and seconds into the recording, and down to milliseconds would be awesome!
@user-2798d6 understood. There is an open issue for that. I cannot tell you if it will be solved with the next release, but within the next two for sure.
awesome! Thank you @papr!
@papr Wondered if you could help with an issue, we have been trying to work on using manual marker calibration and started running into an issue where the calibration would start and then immediately end. We have tried restarting, etc. Running on a MacBook PRO
Hey guys, so I recorded data post calibration and the 'gaze from recording' is accurate. However, when I calibrate the recording offline, the calibration is completely off.
The offline pupil detection worked very well.
Has anyone experienced this?
There also happens to be another bug wherein the calibration status remains stuck at 99%.
The last message from pupil is:
player - [INFO] gaze_producers: Calibrating "1" in 3d mode... Ceres Solver Report: Iterations: 23, Initial cost: 4.722104e-01, Final cost: 1.205650e-02, Termination: CONVERGENCE
Does this mean it has applied the calibration to all the gaze samples?
@user-c828f5 They have fixed this one and committed the fix to the main branch. It will be incorporated into the next release. ...and yes, it is just an issue with the text update. It may also prevent writing to disk.
@user-02ae76 Can you make a short sample recording that demonstrates this behavior where calibration is included in the recording (start recording first then start calibrating)? I see your issues now on github as well.
@user-c828f5 Mapping is complete at 99% as @user-8779ef notes
Hello - Is there a way to remove one of the eye videos after the recording? I recorded with both eye cameras and discovered after the fact that the right eye did not get a good read, but the left eye did. I've tried messing with a few things in the offline pupil detection process, but it's not working. Is there a way to only use the left eye or does that compromise the integrity of the recording?
@user-2798d6 have you tried just removing the eye video from the recording?
@user-2798d6 The issue for reference: https://github.com/pupil-labs/pupil/issues/949
@user-c828f5 @wrp The relevant (closed) issue. https://github.com/pupil-labs/pupil/issues/1008
@user-2798d6 Ready for you in the upcoming release: Time-based seek controls.
@papr THANK YOU! That's perfect!
@mpk - will removing the eye video from the screen actually remove it from being considered for fixations and such? I'm not getting a read on fixations for a portion of my video because (I assume) one eye has a 0.99 confidence but the other is around 0.20 or lower.
@user-2798d6 He meant to delete the actual eye video file. Renaming it should do the trick as well.
Oooooh ok - that makes sense. I'll give it a try!
Is there any way to update pupil without entirely reinstalling?
@user-02ae76 If you run from source, you just need to pull the changes from the master GitHub branch. The bundled application needs to be downloaded again. Somewhere in the future, the application will notify you if an update is available.
You usually do not need to reinstall, as I mentioned in your github issue. Deleting the correct settings will do the trick.
@papr thanks I was actually wondering if the same worked!
@papr The addition of time is great. Any idea when the next release is coming?
Hopefully next week 🤞
There have been lots of good updates since the last one. 😃
BTW, positive results with a new tracker on 120 hz mode
We are gearing up to record about 40 subjects worth of data, 30 mins each.
That's a lot of data...
YEP.
I have my ISI detector working well, although it's not streamlined as a plugin due to my issues with the debugging mode.
I'm half convinced these issues are related to multiple Python environments, but that's a hard thing to debug.
Which algorithm did you implement for the ISI detector?
My own.
applies binary filters on velocity, rising accel, and falling accel.
Only good for post-hoc use, really. Not for real time.
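For readers curious what such a filter chain might look like: a very rough numpy sketch of the idea described above (not the actual NDA'd code). Gaze velocity is thresholded, and rising/falling acceleration mark onset and offset; all threshold values and the combination rule are made up for illustration.
```python
import numpy as np

def saccade_mask(t, x, y, vel_thresh=40.0, acc_thresh=2000.0):
    """t in seconds, x/y in degrees of visual angle; thresholds are illustrative."""
    vel = np.hypot(np.gradient(x, t), np.gradient(y, t))  # deg/s
    acc = np.gradient(vel, t)                              # deg/s^2

    fast = vel > vel_thresh      # binary filter on velocity
    onset = acc > acc_thresh     # rising acceleration
    offset = acc < -acc_thresh   # falling acceleration

    # One crude way to combine them: a sample counts as saccadic if it is fast
    # and lies between a rising-acceleration sample and the next falling one.
    inside = np.cumsum(onset) > np.cumsum(offset)
    return fast & inside
```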
The offline blink detector uses such filters as well. You might be able to use it as a template for your plugin
I wrote the plugin, for the most part.
The skeleton is all there.
The one function I need to fill in is the one that applies my algorithm after a button click. Right now it just imports data that is processed in a notebook and then saved to disk (a temporary fix)
How fast is it? How many pupil positions per second are you able to process on your macbook?
I haven't really timed it.
Next time I meet with my sponsors, I'll ask if I can share it.
They may make me wait until the project is complete 😦
Q: The eye sphere model constantly updates itself during the use of the tracker. I wonder how useful the 3D data could be when the origin of the sphere changes constantly. I mean when the model changes, all the 3d data (phi and theta angles, circle_3d_norms and ...) will be changed accordingly.
Old data points do not lose their validity, if that is what you mean.
how? the phi angle, for example. It's measured in a polar coordinate system that is defined based on the eye model
when the eye model changes the old phi angles won't be valid any more
They are valid within the old model. New models are created due to e.g. slippage. New pupil data is based on the new model, old data based on the old one. That is why each pupil datum includes information about its model.
Do not forget that this is just pupil data. Pupil data lies within the eye camera coordinate system. This data is then mapped to "gaze data" within the world camera coordinate system. 3d Gaze data includes a gaze vector that is independent of the current eye models
Anyone in here familiar with the blinking demo?
Could you be more specific?
Everything Connects through unity fine, however the script is not detecting blinks and not calling the 'CustomReceiveData' function at all.
I feel like im missing something obvious.
Ah, this blinking demo. Please refer to the 🥽 core-xr channel for the unity questions. 🙂
Thanks
😃
Just to be sure, did you turn on the blink detector in Pupil Capture/Service?
Yeah, still nothing...
Should we expect big jumps in the 3d pupil data every time the model gets updated?
@user-0d187e would you mind explaining your setup and what you want to do?
For a project I need to use the pupil 3d data and not the gaze, and I need those to be measured relative to a fixed model so that the reference remains the same throughout the experiment across a few trials. But I wasn't sure if those 3D data are reliable as the model changes. I wish there was a way to prevent the software from updating the model after the calibration.
This is definitively possible. But you will have to make manual adjustments to the source code.
Of course. Thanks
But Please consider this for the future versions. For some studies the model shouldn't change after calibration.
@user-0d187e We are actively working on improving our pipeline. Model stability is one of the aspects. Keep an eye on the release notes. 🙂
@user-0d187e I agree with your intuition, and I have found better performance when I lower the threshold to change models. @papr (who probably played a significant role in designing the system) is also correct to suggest that it may be more robust to slippage if you raise the threshold. I think we're all still learning the art of how to adjust the threshold based on the particulars of the session - no one threshold will be ideal for all situations.
I generally lower my threshold to about .991-.992
@user-0d187e And, just to be clear, the threshold is set during 3D calibration, in the individual eye windows. It's the bottom-most slider. Make sure to restart pupil detection following a change.
@papr Hi, I noticed you responded to @user-921ec1 's post last week regarding sending event markers via eprime and suggested using pupil_remote. Is there any chance you would be able to provide an example of the command used to send event markers via pupil_remote? Also, can you tell me where I would be able to view these event markers? I did a test recording and attempted to send several triggers using the following command: socket.send_string('TRIGGER') ...but when I exported the data I was unable to see these commands anywhere in the pupil_positions.csv file. I'm clearly doing something wrong. @user-921ec1 ...did you manage to figure this out? Thanks!
@user-8779ef The pupil detection and mapping pipeline was there before I started working at Pupil Labs. I do not have as much insight into it compared to the other parts of the software.
@user-e7102b I will create a Pupil Helpers example that will illustrate how to do that.
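In the meantime, a hedged sketch of the general mechanism: Pupil Remote accepts notifications sent as a two-frame message ("notify." + subject, then a msgpack payload), which is the pattern used in the existing pupil-helpers remote-control example. The 'annotation' subject and its exact keys are an assumption here; check the forthcoming helper (and enable the Annotation plugin in Capture) before relying on it.
```python
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

def notify(notification):
    """Send a notification dict to Pupil Capture and wait for the confirmation."""
    topic = "notify." + notification["subject"]
    remote.send_string(topic, flags=zmq.SNDMORE)
    remote.send(msgpack.dumps(notification, use_bin_type=True))
    return remote.recv_string()

# Ask Capture for its current clock so the trigger lines up with recorded data.
remote.send_string("t")
now = float(remote.recv_string())

# Assumed annotation format; verify the keys against the official example.
notify({"subject": "annotation", "label": "TRIGGER",
        "timestamp": now, "duration": 0.0, "record": True})
```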
@papr Great - thank you!
Can anyone tell me how to get to the exports folder from inside a plugin?
surely the relevant folder info is passed in, I'm just not sure how to access it, and I can't debug right now (not sure why, it's an ongoing saga)
Found it:
export_range = g_pool.seek_control.trim_left, g_pool.seek_control.trim_right
export_dir = os.path.join(g_pool.rec_dir, 'exports', '{}-{}'.format(*export_range))
@user-8779ef There is a notification that indicates an export intend by the user. See this example: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L166-L167
The plugin should listen to this notification instead of having their own export button.
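Roughly, that pattern looks like the following in a Player plugin (mirroring the linked blink_detection.py; the notification keys 'range' and 'export_dir' are taken from that file and may change between versions):
```python
import os
from plugin import Plugin  # available when the plugin lives in pupil_player_settings/plugins

class CSVExporterSketch(Plugin):
    def on_notify(self, notification):
        # Player broadcasts this when the user presses the export button.
        if notification["subject"] == "should_export":
            export_range = notification["range"]      # frame index range to export
            export_dir = notification["export_dir"]   # e.g. <recording>/exports/<start>-<end>/
            self.export(export_range, export_dir)

    def export(self, export_range, export_dir):
        # Write your own CSV into the export directory here.
        with open(os.path.join(export_dir, "my_data.csv"), "w") as f:
            f.write("timestamp,value\n")
```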
@papr It's actually an import button. Due to my issues with debugging, I'm unable to complete my code. The workaround is to calibrate, export gaze positions, process them in an external program, write out saccades/isi from this program as a csv, and import the csv back into pupil for visualization and validation against the video.
Hi @user-8779ef – I followed your conversation and just wanted to ask you if the initial setup of the pupil labs glasses/software was easy on your MacBook? I am about to order a set and just wanted to make sure tracking and recording work fine. I am not a real tech expert. Thanks for the help.
Trying to run Pupil Capture on a Asus Rog notebook and no UVC source is found, not the builtin webcam nor other USB UVC cameras. What might be the issue? Works fine for me in OpenCV and Skype. https://image.ibb.co/c5da4G/43235345.jpg
@user-6db96e On Windows, you need to make sure that the correct drivers are installed for your cameras: https://docs.pupil-labs.com/#troubleshooting and https://github.com/pupil-labs/pupil/issues/1011#issuecomment-360345338
hello. The drivers are installed and work in other programs.
@user-6db96e In the UVC Backend plugin (click the ≡ icon), what cameras are listed in the Active Source selector?
Only one "unknown" when clicked says "WORLD: The selected camera is already in use or blocked". No other program using the camera is running.
This means that the drivers are not correctly installed. Be aware that Pupil Capture requires special drivers in order to recognize the cameras correctly. The cameras should be listed as Pupil Cam1 IDx and not as unknown. Please see the links above on how to make sure that the correct drivers are installed.
Is that the case with any UVC camera?
Yes, because the default Windows driver for imaging devices does not give us enough control over the device, e.g. it does not allow running the eye cameras at 120Hz. Be aware that you will need to manually install the libusbK drivers if you want to use cameras that were not provided by Pupil Labs. See the instructions for such a manual installation: https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
thx ill get back after im done
Is the situation different on Linux?
Yes
Im going to switch to Ubuntu at one point. Is there docs for Linux as well or is it much simpler?
@user-6db96e Do you run from source or do you simply use the bundled application? In the second case you just need to install the deb package and you should be good.
In the first case you need to copy/paste a list of dependency install instructions that are listed in the docs.
Nice, that looks good. The next problem is that Pupil Capture was not able to set our default camera settings for your camera. You will need to manually adjust them in the Sensor Settings and Image Post Processing menus on the right. Unfortunately, I cannot tell you which exact settings you need, since I do not know your camera.
But since your image looks over-exposed, I would suggest starting by reducing the absolute exposure time setting.
Error: could not set value
Right, else Pupil Capture would have been able to set it automatically as well... Are you able to change any of the Image Post Processing values?
no
Mmh. You are still running on Windows, correct?
yes
Then my recommendation would be to switch to Linux and hope that it is able to set these values. If this does not work as well my guess would be that your camera is not as UVC compliant as expected by Pupil Capture. But this is just a guess. I am not deeply familiar with these low-level details. With a bit of luck @mpk has an idea what the reason for this behavior is.
can a long 5 meter cable for the camera introduce lag? If yes, how much?
@user-6db96e Not sure, probably only a very small one. But there might be USB power issues. Make sure the cable is USB-compliant.
by small are we assuming sub frame levels?
Definitely. But you can test it yourself with pyuvc. Just compare the frame timestamp with the timestamp of first availability in pyuvc using the two different cables. Measure this for 1000 frames each and see if the two distributions overlap
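A rough sketch of that measurement with pyuvc (https://github.com/pupil-labs/pyuvc); run it once per cable and compare the resulting delay distributions. It assumes the camera of interest is the first device listed.
```python
import uvc
import numpy as np

dev = uvc.device_list()[0]          # assumption: the camera under test is listed first
cap = uvc.Capture(dev["uid"])

delays = []
for _ in range(1000):
    frame = cap.get_frame_robust()
    # Difference between "now" and the frame's hardware timestamp.
    delays.append(uvc.get_time_monotonic() - frame.timestamp)

delays = np.array(delays)
print("mean {:.4f}s, std {:.4f}s".format(delays.mean(), delays.std()))
```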
gotcha, thanks
is it possible for a UVC camera to be accessed by two programs at once? Sadly the other one is not in Python so I can't modify the code for them to access the same frame data.
I am 80% sure that this is not possible. But you can use Pupil Capture's Frame Publisher plugin to stream the images using zmq
sorry to clarify Im not talking about pupil capture but using the pupil library in code
Given that there can only be one program instance that receives the camera video, you either need a special program that duplicates the video into two virtual camera devices, or you need one of the two programs to relay the data to the other. The Frame Publisher plugin is able to relay the data. The only thing left is that the other program is able to receive it.
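A hedged sketch of the receiving side in Python: subscribe to the Frame Publisher's frame.world topic and decode the JPEG payload. The message layout (msgpack metadata followed by the raw image as an extra frame, with a 'format' key) reflects my reading of the plugin at the time and may differ between versions; the port is a placeholder for the SUB port queried from Pupil Remote.
```python
import zmq
import msgpack
import numpy as np
import cv2

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:50021")  # placeholder; query the SUB port from Pupil Remote
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    parts = sub.recv_multipart()
    meta = msgpack.loads(parts[1], raw=False)
    if meta.get("format") == "jpeg":
        img = cv2.imdecode(np.frombuffer(parts[2], dtype=np.uint8), cv2.IMREAD_COLOR)
        # hand `img` off to the C++ program here, e.g. via shared memory or another socket
```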
the other program now that i think about it just asks for an image each nth of a second, not necessarily a uvc camera. So as long as frame data can be read by Pupil in Python I can somehow feed it to the other program which is in C++
correct
thank you
So the webcam I made the changes to so it would work with Pupil is now not recognized by other programs it used to be recognized by, such as Skype. Is this normal?
yes, since the other programs do not look for libusbk devices
but I can't get it back...
if you uninstall the drivers as described in the instructions above, disconnect the device and reconnect it, then Windows should install the correct Imaging Devices drivers
its a webcam of a notebook (embedded)
then rebooting after uninstalling the drivers should have the same effect
ok
didnt work. https://image.ibb.co/kdfL6w/Untitled.png
It should either be listed as libusbk device or as imaging device. Please enable the hidden devices in the View menu as well
Well, I uninstalled its driver, rebooted Windows 10, and it's there (in the pic); under Imaging devices there's only my printer
it says the driver is already installed, but it isn't
@user-f68ceb Installation of the capture/player software was simple. However, getting reliable data from any eye-tracker, takes experience and patience. So, if this is your first foray into eye-tracking, don't expect to plugin and get instant results.
@vrsauce#9955 I just realised that you wanted to use an integrated camera with Pupil Capture. This is typical for remote eye tracking. Pupil Capture does not support remote eye tracking.
Hello - I'm having an issue with Pupil Player: it crashes when I try to load a recording that was made on a different laptop (both computers are Windows 10). Everything works on the laptop that was used to record.
@user-c14158 Do I understand correctly, that opening the recording on one computer works, and on the other it does not? Please make sure both laptops use the most recent version of Pupil Capture.
No, I dont want to use remote eye tracking, the camer is very close to the eye
Yes, opening the recording on the computer that was used to record works, and on another it does not. Both computers are using the 1.2.7 version
@user-c14158 Could you try to open the recording on the laptop on which the app crashes, wait for it to happen, and then upload the player.log file that lies within the pupil_player_settings folder?
looking at the log I realize that I renamed a folder with letters with acute accents (shame on me)
It works now 😃 thank you
Just got a new error:
Traceback (most recent call last):
  File "/Users/gabe/Documents/Pycharm/pupil/pupil_src/launchables/player.py", line 409, in player
    handle_notifications(n)
  File "/Users/gabe/Documents/Pycharm/pupil/pupil_src/launchables/player.py", line 387, in handle_notifications
    g_pool.plugin_by_name[n['name']], args=n.get('args', {}))
  File "/Users/gabe/Documents/Pycharm/pupil/pupil_src/shared_modules/plugin.py", line 321, in add
    plugin_instance = new_plugin(self.g_pool, **args)
  File "/Users/gabe/Documents/Pycharm/pupil/pupil_src/shared_modules/pupil_producers.py", line 169, in init
    except Exception():
TypeError: catching classes that do not inherit from BaseException is not allowed
player - [INFO] launchables.player: Process shutting down.
MainProcess - [INFO] os_utils: Re-enabled idle sleep.
This is off of a fresh clone of the pupil git. The error happened when I tried to use offline pupil detection.
yeah, probably my bad. I am fixing this right now.
@user-8779ef Please git pull origin master in your repository. The error should be fixed now.
thanks! I'll do it now
@papr Yep, it works.. Thanks for the quick response!
Also, loving those timelines. You're working on variable X range?
Thanks. Still in the planning phase.
Ok. Happy to weigh in, if it matters.
For example, it might be nice to be able to drop csv data (timestamp + value) into a folder (or otherwise import it) and have pupil provide the option to represent it as a time series.
helpful for prototyping algorithms 😃
you could imagine a plugin that includes an "add timeseries" button, and a dropdown selection box
that looks in a folder or something like that
Just spitballing' here.
I understand. I like the idea. Please make an issue for that.
Ok, will do.
@papr Bad news - just tried to debug on another Mac with a fresh fork, and fresh Python install. Same issue with the debugger in Pycharm:. Stuck on: player - [INFO] gaze_producers: Calibrating "1" in 3d mode. Bummer. I've posted the same to the issues. I'm guessing this is a Mac / OS X high Sierra issue. This means that no Mac users can really contribute.
@user-8779ef I would disagree. I develop on my Mac all the time and I do it without pycharm. Just plain old sublime + terminal.
@papr That's encouraging. Sublime + terminal wasn't halting. I'm a bit drained today, but I may ask you for some specifics tomorrow.
Ehr, my tomorrow is your today.
(still 7pm here)
@user-8779ef Sublime Text 3 + terminal are my go to for development as well. Everyone has their own preferences of course 😄
@wrp This is what I'm dealing with here: https://github.com/pupil-labs/pupil/issues/1029
Sadly, I don't get to choose 😦
@wrp @papr Your approach, I assume, is to create a breakpoint with "python3 -m pdb main.py player"
...and then to run, " python3 -m pdb main.py player" ?
Using this method, my debugger skips right over my breakpoint.
No, I do not use break points at all. If at all, I use asserts.
ah, OK. I'll google that method. Thanks.
asserts are just simple tests that raise an exception if the condition is not true
but, no breakpoint is an issue for someone who isn't quite as familiar with the data structures as you are. Not so much documentation on g_pool etc.
I guess the halt on exception treats it as a breakpoint, eh?
@user-8779ef I agree that using a debugger is what you want. We have devs that use debuggers with pupil on a regular basis. Let me check and get back to you if I find anything helpful.
@mpk Thanks. If you look at that thread, you'll see that I've tried 4 IDEs now, without luck. The issues vary with each IDE, and persist across multiple data folders and 2 Macs.
In every case, I can run without issue when a debugger is not attached. I have not yet reinstalled Python, but that's a thought.
Part of me worries that my shared use of anaconda, brew, and pip mean that I have a volatile python environment.
...however, as I mentioned, this does persist across two machines.
I should also add that I used to use a debugger. It stopped working shortly before I created that issue.
Hi there! I was wondering if there are any good 3rd party papers evaluating the usefulness of the tracking glasses for research purposes?
Also: do you have any data on using the glasses in people with nystagmus? I saw two forum entries but I'm more interested in: can the glasses be calibrated with the standard software? Do you know of anyone who has made a custom calibration script for nystagmus individuals?
Finally: how well do they work with spectacles?
If some of the answers are at an obvious place on the website or if this would be better posted in the forum, I apologize and please let me know.
@user-62cec9 Re: spectacles: I played with this a bit yesterday. They tracked the pupils just fine - typically, trackers don't work through spectacles because the corneal reflections are smeared. The pupil tracker does not track corneal reflections, so this isn't as big an issue.
However, be wary of bifocals! They change gaze behavior quite a bit.
Just be sure to take this at face value: an anecdote.
@user-62cec9 Pupil is too new to have much out there on its usefulness.
Ahh, I was expecting to try and angle the cameras below the spectacles, but I wondered if that angle would even work. I was hoping someone somewhere had tried many different spectacles. I suppose a success rate of 50% would be acceptable. May I ask the refractive error of the glasses you tried?
@user-62cec9 Sorry - don't have that info available.
@user-62cec9 This was with the newer 240 Hz cameras. You can't adjust their positioning so much, only the orientation.
there's sufficient play there to get a good track across a variety of conditions.
200 or 240Hz?
Sorry, 200 Hz
@user-62cec9 "can the glasses be calibrated with the standard software?" THeir software is fantastic once you learn how to use it.
Pupil player. You can download it. I don't know if they provide a sample bit of data ...you can probably get one if you ask around here (sorry, mine is under NDA).
the thing with nystagmus is this: https://entokey.com/wp-content/uploads/2016/07/DA1-DB2-DC1-C11-FF5.gif
The software has a learning curve. I would say these are tough glasses for someone new to eye-tracking, but if you know how to use em, they work well.
their "intended fixation" is where the horizontal lines are drawn in that pic, so it can be quite tricky... the eyes aren't even stable enough to use the eyelink default calibration
Software is constantly in development, and they are VERY responsive to issues (if posted to the appropriate place on github)
would you suggest cross-posting to the google groups? and would it make sense to inquire about nystagmus calibration at github?
No, this is the best place to ask.
I have heard the legends about the responsiveness (:
Just give them a few hours to see the post, and remember that they are on German time.
You might write it once more after our conversation ends, so it' s near the bottom 😃
ahh, they are in Germany, I forgot!
@user-62cec9 So, there are two things that might contribute to poor tracking during nystagmus. The first is motion blur. This will be amplified in low lighting conditions, when the camera needs to increase exposure time to get a good image.
Oh, wait, wait - Not true.
not true because the eye camera is in the IR range, and the tracker provides its own lighting via IR illuminators.
So, that light level should be fairly constant.
That is true of the scene camera, which works in the visible range.
I keep on making that mistake.
hehe
anyhow, motion blur would be an issue, but I don't think that will be an issue with the 200 Hz eye cameras
They have a high sampling rate and low exposure time.
so my current understanding is that the endpoints of nystagmus saccades are where people try to look and think they are looking, and also where they sample the most information
The other issue would be if the pupil tracking algorithm has any "memory."
in the way that a Kalman filter does
...or if it only operates on the instantaneous information present in a single eye image. I think that's the case, so I think you're good on both counts.
@user-62cec9 You're on your own there. Pupil, or any tracker, will only report a gaze location in pixel coordinates based on the angular orientation of the eyes. In this case, that location will have a good deal of jitter due to nystagmus.
The question is, should you treat that jitter as noise? If so, is it normally distributed around the location of interest, and suitable to be averaged out?
These are questions for the researcher more than the eye tracking software.
absolutely, and I'll likely just do an offline calibration.
one question I still have is: usually calibration procedures are based on fixation detection and nystagmus patients often have only very short fixations (which throw off the eyelink calibration, as it just reports unstable pupil). I assume that I could just tweak the software to allow short fixations, but suppose all attempts to use the pupil calibration fail and no calibration takes place, can I still get data from the glasses and do my own calibration on that data later?
@user-62cec9 The pupil system allows for manual post-hoc calibration using "natural features"
Basically, you can scrub to the frame where the subject reports fixation (we give them a button-activated LED that is placed in the scene camera's view), and click on the calibration point they are looking at.
Do this for all calibration points, and then click "recalibrate." Done!
No reliance on automated fixation detection.
In fact, the fixation detection in pupil could use some work 😃
...I don't rely on it, and wrote my own detector which I can't share yet due to NDA.
that won't work because patients aren't typically aware of the movements, and they are very fast, so it would have to be done by the experimenter after seeing the traces... if I see those I can easily say, okay, here and here and here are the points they wanted to fixate. Those things we have solved, my question is more: do you get any useful data if you skip the calibration?
@user-62cec9 Responses to (some) of your points:
- Academic (third party) papers - We maintain a collection of papers that cite Pupil/use Pupil via this spreadsheet: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing - while this may not be exactly what you are looking for, hopefully there are some papers in this list that can at least demonstrate the quality of data and use cases in research contexts
- Nystagmus research with Pupil - I know that there are researchers in the community that are/were using Pupil to develop diagnostic tools for nystagmus, however there are no papers published that I am aware of at this time
- Pupil Player and sample dataset - you can download the latest software bundle from https://github.com/pupil-labs/pupil/releases/latest - we have a sample dataset available via https://pupil-labs.com/pupil - if you are looking for a short demo of a specific type of movement, DM me and I would be happy to try to make one for you or set up a time to demo via screen-sharing
- Pupil with eye glasses - we will be shipping all new 200hz Pupil headsets with an extension arm for the eye camera; this will enable researchers to have greater control over the position of the eye cameras. Typically we recommend trying to capture the eye from below the frame of the eye glasses. @user-8779ef I am pleased to hear that you are able to capture the eye through the lens.
- Discussion - Yes, this is the best place for discussion. We are considering transitioning completely away from Google Groups in favor of just GitHub issues and chat, but of course we will ask the community first before making any decisions on this.
- 200hz eye cameras and motion blur - global shutter and high frame rate reduce motion blur artifacts
@user-8779ef it would be great if you would be able to share work on the fixation detector if it ever makes its way to a more permissive environment
@wrp Yes. We have a meeting coming up. I'll share it the second I'm able to get it in writing.
@wrp wonderful, thank you! Perhaps you could comment on the question of skipping the calibration procedure: if calibration isn't possible, will I still get data which I could run my own offline-calibration on later?
@user-62cec9 Yes. During data collection, you really just record the eye and scene imagery. Pupil detection etc can be performed later.
Aha!
@user-62cec9 If you buy a tracker, come back here for details on the best methodological pipeline to use.
@user-62cec9 Just keep in mind, the software is still in alpha, and I don't know how reliable everything really is. I consider myself well versed in the software, but until you try to use it for real data, you never know. I'm about to collect 30-40 subjects' worth of data, though. Wish me luck.
about = this month.
ahh, gl!
@user-62cec9 as @user-8779ef noted, you can always record eye and scene video data - without calibration - and can run pupil detection algorithms post-hoc in Pupil Player and calibrate post-hoc as well in Pupil Player
great!
Pupil software will always be developed continuously so that we can continue to add new features, improve existing features, fix bugs, etc. There are currently Pupil users/community members both in industry and in academia that record with high volumes of participants/subjects per day and conduct long duration recordings.
@user-8779ef I wish you the best in your data collection/experiments and look forward to feedback
Ok - going AFK now (UTC +7 time zone)
thank you both (:
Hello, everyone! I am a member of a computational cognitive science lab, and I am very excited to utilize Pupil Labs hardware and software for some research. I am curious about how to utilize the glasses for eye tracking on mobile devices. More specifically, I would like to be able to track gaze position on a mobile device's screen (nexus phone or tablet). If I am not mistaken, I would need to define the device's surface using markers or some other method. Does anyone know if this has been done already or is in the works?
still can't fix my webcam...
Hi @user-6952ca welcome to the Pupil channel/community. Yes! You can use Pupil for gaze tracking relative to a tablet/phone screen. Using fiducial markers is exactly the method I would recommend. This works and is the method used by other researchers in our community. If you are only interested in content on the screen of the tablet/phone, I would recommend that you consider using the high speed world camera with the narrow angle lens that ships with this system, so you can dedicate more pixel real estate to the on-screen content.
Side note to all: when you swap out the world camera lens for the narrow FOV lens, you'll need to recalibrate the world camera
@user-6db96e please remind me, you're trying to restore system drivers in windows 10 for your integrated webcam, correct?
You were not successful in installing libusbK drivers (based on what I see above in the chat)
If you're looking to restore system drivers, you can either roll back (if the drivers are still on your system) or use a tool like this: https://www.drivethelife.com/free-drivers-download-utility.html
To clarify for the HoloLens: Pupil 2D gaze tracking assumes a standard distance of the cameras from the eyes, right? That is, in order to do the raycast from 2D -> 3D. So this makes the 2D gaze tracking most accurate with objects closer to the camera, right?
gazePosition.z = PupilTools.CalibrationType.vectorDepthRadiusScale[0].x;
I'm assuming this is the code that supplies the standard distance
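As far as I understand it, that's the idea: the 2D pipeline has no per-sample depth estimate, so the gaze ray is simply evaluated at one fixed, assumed depth. A minimal sketch of that idea (not the actual hmd-eyes code; the intrinsics and depth below are made-up numbers):
```python
import numpy as np

# Hypothetical pinhole intrinsics, for illustration only.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def gaze_pixel_to_3d(gaze_px, assumed_depth_m):
    """Turn a 2D gaze point (pixels) into a 3D point by walking along the
    corresponding camera ray to a fixed, assumed depth."""
    u, v = gaze_px
    x = (u - K[0, 2]) / K[0, 0]   # normalized image coordinates
    y = (v - K[1, 2]) / K[1, 1]
    ray = np.array([x, y, 1.0])
    return ray * assumed_depth_m  # 3D point on the ray at the assumed depth

# If the object you actually look at is much nearer or farther than the
# assumed depth, the mapped 3D gaze point carries that parallax error.
print(gaze_pixel_to_3d((700.0, 400.0), assumed_depth_m=2.0))
```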
@user-6952ca I suggest that you do some back-of-the-napkin math before you get going. You should expect all eye trackers to have maybe about 1.5 degrees of error in the center, and maybe as much as 3 degrees in the moderate periphery. Please consider how close together your ROIs on the cellphone are during comfortable use (in units of degrees). Is that level of accuracy sufficient to distinguish between gaze locations for your particular use case?
FYI, Pupil can (and usually does) outperform those numbers, but not by much. That's still very good for a mobile tracker.
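For a concrete sanity check, the visual angle math is just this (the phone distance and target spacing below are assumed example values, not measurements):
```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Assumed example: a phone held ~30 cm away, with on-screen targets ~1 cm apart.
spacing_deg = visual_angle_deg(0.01, 0.30)
print(f"target spacing: {spacing_deg:.1f} deg")  # ~1.9 deg
# With ~1.5 deg of error in the center, targets ~1.9 deg apart are right at
# the limit of what the tracker can reliably tell apart.
```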
Guys, I'm curious - do you have any idea why a good pupil image would produce fluctuating confidence measures like this?
See eye 1, which was taken while the subject was in fixation. The eye images are beautiful throughout, but the confidence measures are meh.
@user-8779ef The confidence signal looks very periodic. That's unexpected.
@papr Hi, I'd just like to follow up on my previous message regarding sending event codes via Pupil Remote. Can you please tell me if there is a simple way to do this? I can control all the other necessary functions, e.g. start/stop recording, calibration, etc., using the functions in pupil_remote_control.py... but I can't see an obvious way to send event codes/timestamps. This seems like something that should be fairly straightforward to do. Thanks!
@user-e7102b Sorry, I have been very busy the last few days. I will do so first thing when I get to the office tomorrow. Have a look at the annotation plugin and see which notifications it generates and receives. You will need to send notifications in the same format over Pupil Remote. See the Pupil Remote example for how to send notifications.
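Not the complete example yet, but roughly the shape of it, based on the public Pupil Remote examples. The notification subject and keys that the annotation plugin expects (here "annotation", "label", "duration", "record") may differ between Pupil versions, so verify them against the plugin source:
```python
import zmq
import msgpack

# Connect to Pupil Remote (default REQ port 50020; adjust host/port as needed).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Capture for its current time so the event timestamp matches.
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Notifications are sent as a "notify.<subject>" topic frame followed by a
# msgpack-encoded payload frame.
notification = {
    "subject": "annotation",   # assumed subject; check the annotation plugin
    "label": "trial_start",    # your event code
    "timestamp": pupil_time,
    "duration": 0.0,
    "record": True,            # ask Capture to store it in the recording
}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
print(pupil_remote.recv_string())  # acknowledgement from Pupil Remote
```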
I will create a complete example tomorrow
@papr No problem - thank you! I'll take a look at that annotation plugin now.