@papr You can read this file, section 2.7: https://pdfs.semanticscholar.org/4167/7844556582adc68a5a14dbb1cea0b28d9016.pdf
What should I do to reduce the noise?
@user-128c78 Did you try applying the "second order Butterworth high-pass filter" as suggested in this section?
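(For anyone else reading: a minimal sketch of a second-order Butterworth high-pass filter with SciPy. The sampling rate and cutoff below are placeholders, not values from the paper.)
import numpy as np
from scipy.signal import butter, filtfilt

fs = 120.0    # sampling rate in Hz (placeholder)
cutoff = 0.1  # high-pass cutoff in Hz (placeholder)

# Second-order Butterworth high-pass; Wn is normalized to the Nyquist frequency
b, a = butter(N=2, Wn=cutoff / (fs / 2), btype="highpass")

signal = np.random.randn(1000)     # stand-in for the noisy pupil signal
filtered = filtfilt(b, a, signal)  # zero-phase filtering (no time lag)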
Hey guys - I'm going to add a feature request. I believe you are implementing the manual offset at the wrong time in the pipeline, which unnecessarily requires that we remap gaze vectors between adjustments. It would be much better if it were implemented at the time of drawing the gaze vector with a callback to redraw following an adjustment. The offset would also have to be evaluated at the time of data export.
The issue is that, for longer videos, an exploratory adjustment must be followed by 5+ minutes of reprocessing before you can see the results. Very inefficient.
Hey @user-8779ef Thank you for the input. I agree that this is a problem. Unfortunately, it is not that simple, since "higher order" plugins (e.g. fixation detection, but as mentioned all drawing plugins and export plugins) would have to evaluate the callback all the time, even if the gaze data did not change. Not very efficient either.
The actual problem is that manual gaze correction (and Vis Scan Path) require manipulation of previously calculated data. This breaks our processing paradigm that does not support data manipulation. -- That's also the reason why these plugins are currently disabled.
Yes, I thought that might be a problem.
We still need to think of a good, general solution to this problem. A mapping callback (chain) is definitely a possible solution. The question is how to avoid redundant mapping calls.
I have considered suggesting that you guys abandon the approach that all plugins auto-recalculate via callback upon a change to the data. Instead, you might have each plugin have a very clear indication that its data is out of date.
For example, a red glow on the icon in the tray.
...also, provide a single button to update all out-of-date plugins.
(rather than force the auto update upon each change to the data)
Again, this would help avoid the issue of unwanted and time consuming callbacks, e.g. fixation detection.
...also, perhaps provide a warning on data export that some plugins are out of date.
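(For illustration, the suggestion boils down to something like this - a toy sketch, not Pupil's actual plugin API:)
class Plugin:
    def __init__(self):
        self.stale = False  # would drive e.g. a red glow on the tray icon

    def on_data_changed(self):
        self.stale = True   # only mark as out of date; no automatic recompute

    def recompute(self):
        # expensive work, e.g. fixation detection, goes here
        self.stale = False

def update_all(plugins):
    # the single "update all out-of-date plugins" button
    for p in plugins:
        if p.stale:
            p.recompute()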
I will talk to my colleague who is responsible for UX and ask him what he thinks about this approach.
It would be a pretty big change
Yeah, that's why I want there to be a clear UX concept first before considering further steps
Most definitely. I haven't yet floated it to the folks over here, but I'll see what they think. Maybe they'll shoot it down for some reason I can't think of.
My biggest concern is that this approach would not be intuitive. The software knows that data is out of date but does not update automatically? Not something I would want. I think a clear processing chain as in Capture would be nice. We will see...
Has anyone used the .csv tracking data for graphing to validate data from multiple users?
We just got our first Pupil Labs HTC add-on and I have some questions. When I start Pupil Capture I can clearly see both camera inputs and can activate them both. Both of my eyes are displayed, but one of them faces the wrong direction: it is displayed upside down. Maybe that's the reason why I can't calibrate in the HMD Eyes demo. I also noticed that the eye tracking add-on is getting pretty hot... Can anyone help me?
Okay, I managed to flip the image.
After starting Pupil Service I detect eye 0 and eye 1. Now I start Test Build 3D VR and it searches for a running Pupil Service. After that it restarts the detection of eye 0 and eye 1 in Pupil Service, but in the end there is no eye detected.
Then I need to restart with default settings.
@user-ea779f the flipped image results from the physically flipped camera. This does not affect pupil detection.
Does anyone know why Pupil Capture would create a new 'recordings' directory out of nowhere? I was just debugging an application I built, and all it was supposed to do was connect to the backbone, listen for messages, and disconnect. I ran it (using a video source in the eye windows) and it worked just fine. I didn't change anything, ran it again, and my recordings folder was empty. The log says: world - [INFO] recorder: Created standard Rec dir at "/Users/katie/recordings"
Thankfully all of my videos are backed up, but this is rather frustrating.
@user-988d86 this is the default recording directory. What directory were you expecting? How did you set it? Manually or via notification?
That is the recording directory I've been using, where my files were saved. It seems to have just recreated it, therefore wiping all of my previously recorded files from it.
Now I understand the issue. I have never encountered this issue. Are you able to reproduce this phenomenon?
No because I don't have any recordings on my local machine now, and don't have any working Pupil cameras to record more video.
Did you rename the original recording folder by accident? Just making sure, because the code would need to actually delete old files and I cannot remember this type of code in the recorder... I will have a second look at it tomorrow though, to make sure that I am not wrong.
Nope. I didn't make any changes to anything, including my code. I simply deleted my csv output file (which I've done many, many times) and re-ran my program.
My colleague was experiencing similar problems on his machine, but I don't know the details of his problems, aside from the fact that he seemingly wasn't making any changes to Pupil-related code, or to the Pupil application, or to the Pupil recordings folder.
Hi @user-988d86 we will look into this issue tomorrow to see if we can replicate.
@wrp thanks much. For context, I'm running v1.7. Based on the release notes for v1.8, it's not feasible for me to upgrade.
@user-988d86 based on the path above, am I correct to assume that you use macOS?
correct
Just for curiosity, what do you mean by not feasible to upgrade?
The changes in the developer notes about API changes and fixation format changes suggest that we would have to put some extra work into our applications in response. We just don't have the time to devote to that right now.
OK, thank you for your feedback.
As wrp said, we will look into your issue tomorrow.
Thanks!
Hi, I just got my Pupil headset. When I connected the headset to a MacBook Pro (High Sierra 10.13.6), Pupil Capture showed "Capture initialization failed". I am wondering if there is anything else that needs to be done besides plugging it into the laptop (like pressing a button?), and how to check if the device is connected to the laptop.
@user-5ccd98 any time we have that problem, we just unplug/replug into the USB port and it seems to fix it. I'm sure you've already tried that, though.
Is there a way to check if the headset is connected?
Click on the uvc manager icon on the right and check if the drop down menu lists any Pupil Cam cameras.
Make sure that the cable is correctly connected to the hub of the headset.
I guess this means not connected?
@user-5ccd98 please could you try to firmly press the USBC cable into the clip of the Pupil headset - it may require a bit more force to ensure it is connected
Oh!!!!!!!
Got it!! Thank you so much!
Sorry for not being clear. Clip is much clearer than using the word hub.
You're welcome @user-5ccd98, pleased that this could be resolved.
One more quick question. I also want to record the data using the Pupil Mobile app. But I only have an Android phone with micro-USB (not Type-C). I am wondering if the Pupil Mobile Android app is going to work with this Android phone using a Type-C to micro-USB cable.
This will unfortunately not work.
So, I need to find an Android phone with Type-C, right?
See the official Pupil Mobile repository for a list of phones that are known to be working.
Any other requirement?
Cool.
Thanks.
That's super helpful. Thanks papr.
Hi, sorry to bother you again. I came across another issue. When I dragged my recording folder onto Pupil Player, the software quit, reopened, and quit again. But if I dragged the sample data, it worked quite well. And I could not open the .mp4 file in my recording folder. I am wondering if I missed some setting?
@user-5ccd98 do you use Windows? There is a bug related to importing Pupil Mobile recordings on Windows whose fix is currently in review.
@user-988d86 ok, I just checked and tested again: our code does not overwrite the recording folder. Did you by any chance run once from source and once from bundle? The default recording location is different for each of them!
/Users/user/recordings
is the default bundle recording location.
<source code location>/recordings
is the default source recording location.
Hey @papr, you spoke to one of my students yesterday (Rudra8) about coordinate transformations in Pupil.
Let me tell you why he's asking all these questions: he still has a bit of confusion.
Yes, I wrote him today. I reimplemented the angle calculation in Python: https://gist.github.com/papr/b0a59dc39d4b6d0ab773dc46eeff9773
The results are as expected. I think the MATLAB angle calculation implementation is wrong.
You're the best.
I'll check in with him. Let me give you some quick context: he's running a simple test - having someone look at a target at eye-height, on the opposite side of the room. In this case, one would predict that the eye normals, after transformation into world space, would be approximately parallel (with the greatest magnitude along the Z axis).
However, he's not finding that, and I think the poor guy is losing sleep over it.
World space = head space.
Yeah, I totally understood the issue and this assumption is correct. In practice it is very difficult though to reach parallel gaze normals. But the angles are <5 degrees for these cases.
Unfortunately, his vectors are apparently much further apart than that. He knows the math - he's very sharp.
I'll have him try the same calculation in Python, or using different MATLAB functions.
Yeah, I suggested using pdist with the cosine metric instead of the custom angle calculation.
Thanks for that. Ok, lets see what he does.
or, how that approach works out.
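(For reference, the pdist approach looks roughly like this; the gaze normals below are made-up example values, not real data.)
import numpy as np
from scipy.spatial.distance import pdist

# Two gaze normals in world/head space (example values)
normals = np.array([
    [0.05, 0.02, 0.99],
    [-0.04, 0.01, 0.99],
])
cos_dist = pdist(normals, metric="cosine")[0]      # equals 1 - cos(angle)
angle_deg = np.degrees(np.arccos(1.0 - cos_dist))
print(angle_deg)  # small angle expected for near-parallel normals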
A while back you told me you were going to try debugging in PyCharm on OSX again (or something like that). Any luck?
I need to venture back into the world of plugin development (refining my inter-saccadic interval detector)
I am using Visual Studio Code. I talked to one of the VSCode Python extension core developers at EuroPython last week. Multi-processing debugging is disabled right now but will come after they release the stable version of their new debugger.
Multi-processing debugging is disabled in VSCode, or in PyCharm?
in VSCode
I don't use PyCharm but my colleagues do, without problems.
I don't get that. My breakpoints within a plugin do not halt the program. It blows right through them.
shrug I must be missing something.
You do run the app from within PyCharm, correct?
Yes. I also have "attach debugger to process" enabled.
My colleague tells me that there are two modes to run a script in PyCharm: normal and debug. You need to run the script in debug mode in order to halt at the breakpoints.
Yes. It used to work just fine. Somewhere along the line that functionality stopped.
...I should grab another package with subprocesses to see if it's Pupil-specific.
Yeah, good idea
Hi there, I'm having trouble running the HTC Vive test build. Each time the connection between the test build and Pupil Capture (or Pupil Service) is established, the eye cameras crash. After that I am not able to restart the cameras in the general settings. Nevertheless, the demo scene performs the calibration and fails in the end. The following error is shown in the Pupil Capture console:
eye0 - [ERROR] launchables.eye: Process Eye0 crashed with trace:
Traceback (most recent call last):
  File "launchables\eye.py", line 555, in eye
  File "shared_modules\zmq_tools.py", line 144, in send
  File "msgpack\__init__.py", line 47, in packb
  File "msgpack\_packer.pyx", line 284, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 290, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 287, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 234, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 263, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 281, in msgpack._packer.Packer._pack
TypeError: can't serialize <MemoryView of 'array' at 0x225faf41be0>
Same for the second eye.
I am running the latest builds I could find, pupil_capture_windows_x64_v1.8-22-gdcb17d1 and Test.Build.3D.VR v0.5.1, on Windows 10. The same problems occurred with earlier builds I tested (v1.8-16 and v0.5). I have also reinstalled the drivers without success.
Any help would be appreciated.
This is the same error as above. I fixed it already. The fix will be released this afternoon.
By above, I mean as reported in the software-dev channel
Ok great, thank you
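(For anyone hitting this on an older build: that TypeError means a memoryview reached msgpack's packer, which cannot serialize it by default. A generic reproduction and workaround, converting to bytes before packing, might look like this; note this is an illustration, not the actual fix that shipped.)
import msgpack
import numpy as np

frame = np.zeros((10, 10), dtype=np.uint8)
payload = {"image": memoryview(frame)}
# msgpack.packb(payload)  # would raise: TypeError: can't serialize <MemoryView ...>

payload["image"] = bytes(payload["image"])  # copy the view into plain bytes
packed = msgpack.packb(payload, use_bin_type=True)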
Hi papr, I am not using Windows. I am using a MacBook Pro (High Sierra 10.13.6). And I just used Capture to record, not Pupil Mobile.
Hello. We are trying to run our Pupil Labs eyetracker through a Python script in OpenSesame. We were going to use the remote annotation example code that was on GitHub as a starting point, but the links we found in the Pupil Docs and online all try to take us to a page that doesn't seem to exist anymore on GitHub. Is there a new location for the annotation code? Thank you!
@user-ef565b the script is still in the same repository. We moved it into the python folder. Also, we will have to update the links in the docs.
Perfect! Thanks!
@papr Nope, I haven't run from source in many months. I was simply running Pupil Capture 1.7 as I normally do.
@user-988d86 You mentioned that you have a script that interacts with Capture over the IPC? Do you start and stop recordings with it?
@papr It has that ability, but I was not utilizing it at the time. All I was doing was subscribing to notifications and processing the payloads as they come in.
@papr I also know that my colleague, who experienced a similar problem, was not running from source and was not recording eye video.
Hello again. We're running into a new problem. We found the annotations.py script on GitHub and are attempting to use it, but the pyglui package it uses isn't working correctly for some reason. We forked and cloned pyglui from GitHub, installed glew, installed the pyglui package itself using pip, and ran the setup.py script to build it, but the annotations script is still throwing errors. It looks like the ui part of the package is missing and the other modules are also not building correctly. Have we missed something? (We are running everything on a Windows 10 machine with Python version 3.7). Thank you!
@user-ef565b if you run pupil capture from bundle you will only need msgpack and pyzmq to run the remote annotations script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
or am I misunderstanding something?
This is actually a different script than the one we were trying to use. We must have missed it on GitHub. We've already installed pyzmq and can get msgpack. Just as a clarification, what do you mean by running Capture from bundle? (Sorry, we were using another eyetracker that worked with PyGaze through OpenSesame and are new to this method).
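(A minimal sketch of what that remote annotation approach boils down to, assuming Pupil Remote's default address tcp://127.0.0.1:50020 and a hypothetical "trial_start" label; the linked pupil-helpers script is the authoritative version.)
import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote (default port assumed)

remote.send_string("PUB_PORT")           # ask where to publish notifications
pub_port = remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:{}".format(pub_port))
time.sleep(1.0)                          # give the connection time to settle

remote.send_string("t")                  # current pupil time for the timestamp
pupil_time = float(remote.recv_string())

annotation = {
    "topic": "annotation",
    "label": "trial_start",              # hypothetical label
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))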
@user-d90de9 the release for linux and mac has been uploaded now.
@papr update: I just found all of the contents of my recordings folder within the directory that my script was in, in a directory called 'pupil.' Again, I was running Pupil Capture and my script completely as normal. I did not have any functions within my script to move any files, nothing. Overall a mystery! ha
@user-988d86 can you make a gist with your script and share the link to @papr
@user-988d86 @wrp yes, I would like to see that :)
Hey everyone!! Just got the headset in the mail. Would really appreciate if one of you could walk me through the Pupil application setup process
@user-cc04d9 https://docs.pupil-labs.com/#getting-started
Thank you!!
Quick question. We've gotten the notification code working, but we're unclear on exactly how Pupil is storing/incorporating the notifications. Which file do they go into? Thank you again!
*annotation code. Our apologies.
@user-ef565b They are stored in the notifications.pldata file (on version 1.8). They should show up in Player if you open the recording and load the Player Annotation plugin
Hello, is there a way of generating a heat map?
How do I open a video in Imotions?
@user-810714 Heat maps - You can generate heat maps if you use markers within your scene. Please see this section of the docs: https://docs.pupil-labs.com/#surface-tracking
@user-810714 You can use the iMotions Exporter plugin in Pupil Player to export data to a format that can be imported by iMotions software.
Hi there,
just a question about some 2D recordings. Can you get 3D gaze data in later processing? I set pupil producers to offline pupil detection, and if I set gaze producers to offline calibration then the error below comes up.
circle_detector - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend
circle_detector - [ERROR] libav.mjpeg: unable to decode APP fields: Invalid data found when processing input
circle_detector - [INFO] camera_models: Previously recorded calibration found and loaded!
circle_detector - [INFO] launchables.marker_detectors: Starting calibration marker detection...
player - [INFO] gaze_producers: Calibrating section 1 (Unnamed section) in 2d mode...
Not enough ref point or pupil data available for calibration.
Calibration failed: Not enough ref point or pupil data available for calibration.
player - [INFO] gaze_producers: Cached offline calibration data to C:\Users\p.wagner\Documents\phd\pupillab recordings\015\offline_data\offline_calibration_gaze
Thanks for looking into it.
@wrp Thank you very much. Already set with the imotions exporter. Great plugin.
Any ideas why the annotations plugin would work perfectly when running in a script through Jupyter Notebook, but not through the exact same code run through Python in OpenSesame? The inline Python script will start and stop the recording with no problems, but none of the annotation commands work through it.
Hi there, I'm new to the marvelous Pupil software! A question: I need to measure the pupil diameter and that's it! No world camera, no gaze, no other stuff; nor do I need to store video/frames, just the pupil diameter values. Could you please suggest the best way to get rid of all unnecessary features in order to have a simpler and more performant environment? Thanks really much!!!!
Just to say a few words about my project: I'm interfacing a DIY camera mounted in a Samsung GearVR; the aim is not to use the gaze to interact with the VR but just to measure the pupil diameter during a neuroscience/psychophysics test. I plan to use an S7 phone with GearVR to show the stimuli while the eye camera is connected via USB to a MacBook Pro; then I have to find a way to "collect" data from the VR and from Pupil Capture/Service and import it all into Matlab for analysis... It would be ideal to be able to refactor the Eye module to be executed directly on the S7 in VR (I'm currently planning to work with Unity3D, which communicates with Pupil Capture over the ZMQ protocol)... Any piece of advice or suggestion would be really precious!!!!!
@papr Papr, can you point me towards the method used to visualize fixations in the exported video (via the Video Export Launcher)?
Has anyone added machine learning components (AI) to their algorithms based on Pupil Labs' data?
In other words, towards a specific goal for their purposes?
@performlabrit https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L408
@papr Ah, so the exporter just calls each module's recent_events()? Thanks.
No, that doesn't seem to be it. My visualization works in pupil player, but not on export. So, I'm still missing a piece of the puzzle
I have updated my offline saccade and inter-saccadic-interval detector to be much more efficient, and to run as a background function.
...but, trying to compare the ISI detector to your fixation detector by exporting the same vid with fixation visualization / isi visualization.
hi!
Does this software show where I'm looking on screen? I have both Python and C++ experience.
If not, is there any way this can be achieved?
It doesn't need to be realtime; I can just overlay the video on top of the eye tracking data.
@user-8779ef mmh, you are right that this should work. I will have a look at it tomorrow
@user-bbb364 yes, this works if you use our surface tracking and define the screen as a surface
Realtime as well as offline
Thanks, @papr.
Hi guys, should the surface definitions which I define in pupil player work in pupil capture?
No, but the other way around should work
Ah ok. Damn.
To clarify: If you
1. define a surface in Capture
2. make a recording
3. open that recording in Player
4. load the Offline Surface Tracker
then it should recognize the surfaces defined in step 1.
Is there a way to get it working the other way around?
@user-c351d6 No. The reason why the way above works is that Capture stores the surface definitions in a file within the recording which Player can read.
Where would Player place such a file for Capture?
There is a surface_definitions file in the settings_folder of Pupil Capture. I thought that's possibly the file to store the definitions of surfaces.
Yeah, but then Player would be writing into the capture settings folder. Also, different recordings might use the same markers for different surfaces.
I think the general solution here is to have a clear surface definition format that can be copied explicitly into the Capture folder.
Also, keep in mind, that the usual work flow is to use Capture and afterwards Player, not the other way around.
I thought Pupil Capture might use the surface_definitions file from the settings folder as standard definitions and copy it to the recording folder. I also hoped that Pupil Capture stores the "surface detection information" in the recording folder as well.
That is actually what happens, yes. Maybe you are able to copy the surface definitions from Player back into the Capture folder. But I don't know if this works, and it is definitely not officially supported.
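(If anyone wants to try that unsupported route, the copy itself would just be something like the following; the paths are hypothetical and depend on your setup.)
import shutil
from pathlib import Path

# Hypothetical paths - adjust to your recording and Capture settings folder
recording = Path("~/recordings/2018_08_08/000").expanduser()
capture_settings = Path("~/pupil_capture_settings").expanduser()

shutil.copy(recording / "surface_definitions",
            capture_settings / "surface_definitions")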
Yes, for some reason it does not work.
Sorry for the amount of bug reports today. But we've encountered a lot this morning.
Don't worry, that is great! This is one of the big reasons to have a community project: reporting bugs is a good way to contribute!
I actually plan to contribute more in the next months by building a smart glasses plugin for the Vuzix
@user-c351d6 That is great to hear
Feel free to add your repository to https://github.com/pupil-labs/pupil-community
Btw the password for this issue's recording does not work: https://github.com/pupil-labs/pupil/issues/1250
Thanks, that worked
@papr It seems like it's really hard to run this experiment...
Yeah, your setup is quite impressive.
However, the issue that dropped frames break a whole recording might be a showstopper for simpler field experiments using Pupil Mobile as the recording device as well.
Dropping a few frames within a 20-minute recording should be fine. What is not OK is that Player crashes because of it.
I also highly encourage you to include the calibration procedure in the recording in order to be able to do offline calibration.
@user-c351d6 did you see any exceptions being logged by Capture during the recording?
Yes, there were some. I was not smart enough to save the log. It always saves just the last one, correct?
That's correct. Maybe this behaviour needs to change in the future.
Having one for each recording would be better. However, then you should make sure that these files do not grow too large. I'll try to reproduce this when I have time for it.
Technically, if there is a broken frame, the exception is logged and the frame is dropped before saving it. I am unsure why there are broken frames in the recording nonetheless.
@user-c351d6 This is actually a great idea. Any warning or error messages during a recording should be logged
Hi papr, me again. I still cannot load the recording folder to Pupil Player
And I cannot open the world.mp4 and eye0.mp4 with quicktime
Any suggestions to solve this?
@user-5ccd98 can you quickly remind me: Did you record using Pupil Mobile? Does the video open in VLC?
And did you send the recording to [email removed]
No, I just used the Pupil Capture
I tried the VLC
the video can be played
But Pupil Player still crashed after I dragged the folder into it.
I can send the recording folder to the email address. Would you please take a look?
Yes, please do so
Sent you.
My pupil player worked perfectly with the sample data downloaded from the website...
I do not know what is wrong here. I have tried using different resolutions and fps.
@user-5ccd98 The recording opens fine for me when I run from source. I will quickly install the newest bundle and check again.
Is it because of my operating system? I am running it on High Sierra 10.13.6.
I am using macOS 10.13.5 right now
@user-5ccd98 I have just installed Pupil Player v1.8-26 and it works as expected
I used v1.8-22
I will try 26 immediately.
It worked!!! Thank you so much...
@papr
Hi, I have been using the new version of Pupil Capture v1.8-26. I tried playing some videos on the new version of Pupil Player; however, only the eye video loads and not the world view. I get the following error:
error message
And when I try playing it on the older version (v.1.7) of pupil player, it crashes.
@user-2da779 can you please try deleting the pupil_player_settings folder and then relaunching Pupil Player
ok, will try that now
now it just crashes
@user-2da779 what OS are you using?
Never mind - I see that this is Windows.
@user-2da779 can you send the log again for the most recent attempt after restoring default settings?
It looks like there was a problem reading the audio file based on your first log
ok I will send that
@user-2da779 this happens with just this one recording or with multiple? Did you record this with Pupil Capture or Pupil Mobile? It looks like the audio file could be corrupted - based on the log message. Transcoding the audio file could resolve this issue. Before further speculation, I will wait to see your log and response.
This has happened with several recordings, but only for Pupil Mobile. The recordings from the laptop play perfectly fine. I have recorded other sessions without the audio capture plugin and they seem to be running perfectly fine.
These are the latest logs from pupil capture (laptop) and pupil player (for mobile recording). they seem to be running fine
@user-2da779 Thanks for the update, however you have sent the pupil_capture log and not the pupil_player log
What version of Pupil Mobile are you using?
app version 0.17.1
@user-2da779 please update Pupil Mobile
the latest version is v0.31.0
and allow for auto-updates of Pupil Mobile on your android device
Now, in order to salvage your existing audio files, you will probably need to transcode these using a tool like ffmpeg
ah! and would that bring the world camera back, as well?
@user-2da779 I see no error in your log file
This is from the session that I just recorded. It seems to run fine now that I removed the audio capture plugin.
Ok, thanks
@wrp I have run a few sessions after removing the audio capture plugin and it's been running fine on Pupil Player v1.8-26. I haven't updated Pupil Mobile yet. Before I update it, just to confirm, would that solve the audio capture issue?
@user-2da779 please could you try making a test recording using the latest Pupil Mobile - I can not reproduce this issue locally.
OK. I will do that and send it. Thanks!
@wrp with v1.7 almost every video file got a "moov atom not found" error, as with these files from @user-2da779 ... with v1.8 we haven't seen this again (yet) ... there are ways to repair these files via ffmpeg
it has something to do with missing or messed up headers
@user-29e10a Thanks for following up. Yes, this is due to the file container not being closed properly (missing headers like you noted)
@user-29e10a you also were using Pupil Mobile to make the recordings, correct?
@papr I was just thinking about our experiment and the problems. Surface tracking with compressed files seems to be quite difficult, because the surfaces can only be tracked when the head is not moving too fast; otherwise, due to the compression, the markers can't be tracked. However, it is actually only necessary to track a surface during a fixation. A fixation usually only happens when the head does not move. Also, the analysis (heatmaps) should only consider data from fixations, since the human brain can only process visual information during fixations. How is this currently implemented?
To get, for example, the surface distribution only during fixations, do we have to work with the output files from surface tracking and fixation detection?
The offline surface tracker exports fixations on surfaces but the heatmaps use all available gaze, not only fixations
Only when the offline fixation plugin is activated?
Correct
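(One way to combine the two exports yourself; the file and column names below are assumptions based on typical Pupil Player exports, so check them against your own export folder.)
import pandas as pd

fix = pd.read_csv("exports/000/fixations.csv")
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")

# Keep only gaze samples whose timestamp falls inside any detected fixation.
# Assumes fixation duration is exported in milliseconds.
ends = fix["start_timestamp"] + fix["duration"] / 1000.0

def during_fixation(t):
    return ((fix["start_timestamp"] <= t) & (t <= ends)).any()

gaze_fix = gaze[gaze["world_timestamp"].apply(during_fixation)]
print(len(gaze_fix), "of", len(gaze), "gaze samples fall within fixations")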
@user-c351d6 I challenge your statement that a fixation usually only happens when the head doesn't move. During natural behavior, fixations often arise during VOR.
or, if the person can hold what they are interacting with, a combined eye and head movement can track the movement of the held object
Don't just assume that won't happen. If I were to review a paper that clung to that assumption without proper citations, or without admitting that it is a limitation of their system, I would challenge the assertion during the review process.
Better you hear it from a snotty person on Discord than a reviewer.
@wrp no, I was not. Just Pupil Capture (although controlled via network; I started and stopped the recording via NetMQ).
Thanks @user-29e10a we will try to reproduce this
Hey guys, my student reports that one of our cameras has been giving us less reliable tracks. It seems the LED is not perfectly seated in its housing. I am tempted to try and disassemble the camera housing and push it back in. Do you think this is a smart move, or am I likely to break something?
(i just unscrewed the screw a bit - that's not an issue)
@user-8779ef Thanks for the information. That was just an assumption based on guessing. I'm actually not that deep into the literature so far but I also wouldn't write this in a paper without any reference. My part in this project is so far just a technical one.
Do you guys have any clue what happened here?
The left surface is still tracked with two markers but it is starting to "fly around" in the FOV.
@user-8779ef regarding the camera led - please send (or ask your student to send) an email to info@pupil-labs.com - and we can coordinate a remote repair or return/repair if needed.
@user-c351d6 a short recording would be useful (if possible) send to data@pupil-labs.com so we can provide feedback
I can't provide a short one, but I can provide the full one and the times when it happens. Are you actually saving the marker cache once it has been calculated completely? I just had it filled completely, and then Pupil Player crashed and it was gone...
@user-c351d6 I'm AFK now, @papr can you provide a response to this when you are available?
@wrp Thanks. I'll coordinate soon.
Guys, I'm having a pyglui error in 1.7.42. This worked previously.
2018-08-08 10:28:51,592 - player - [DEBUG] plugin: Loading plugin: Vis_Circle with settings {'radius': 16, 'color': [0.0, 0.7, 0.25, 0.2], 'thickness': 2, 'fill': 0}
2018-08-08 10:28:51,593 - player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
  File "launchables/player.py", line 420, in player
  File "pyglui/ui.pyx", line 256, in pyglui.ui.UI.configuration.set (pyglui/ui.cpp:12585)
  File "pyglui/ui.pyx", line 248, in pyglui.ui.UI.set_submenu_config (pyglui/ui.cpp:12362)
  File "pyglui/menus.pxi", line 704, in pyglui.ui.Scrolling_Menu.configuration.set (pyglui/ui.cpp:63029)
  File "pyglui/menus.pxi", line 87, in pyglui.ui.Base_Menu.set_submenu_config (pyglui/ui.cpp:50999)
  File "pyglui/menus.pxi", line 482, in pyglui.ui.Growing_Menu.configuration.set (pyglui/ui.cpp:59207)
  File "pyglui/menus.pxi", line 87, in pyglui.ui.Base_Menu.set_submenu_config (pyglui/ui.cpp:50996)
IndexError: pop from empty list
Delete user settings, maybe?
Did you double-click on the close button in the vis circle menu?
Or does this happen on load?
Yeah, that worked.
If the latter is true, deleting the user settings should help.
@papr Hello, I am having trouble accessing the base_data key in the gaze datum
I can use this command (base = msg['base_data'][0]) to print out the whole datum
but I cannot seem to just get norm_pos from it
Nvm I got it
Hello, what is the default value of Maximum Dispersion in the fixation detector? And why?
@user-c351d6 I will check and answer to your github issue tomorrow.
Has anyone here implemented Pupil in psychological attention tasks to analyse PD?
I am studying impact of startling experiences on pilot performance in a simulated flight task.
Sounds interesting. How long is your task? I'm studying PD fluctuations during sustained attention and vigilance tasks
Not far off what I'm looking at, perhaps even more similar actually. In essence, depending on competence level and confidence, tasks typically run between 3 and 12 minutes. Test participants have 15 runs at the same (3-stage) task, and task scores are collated for key performance indicators, which we believe are aligned with the concept of visual acuity. Scores are a reflection of how accurately the participant reads out the instruments. Currently, I'm focusing on any correlations with fixations in the first instance.
@user-965faf and @user-239f8a if you haven't already - I would suggest taking a look at papers that cite Pupil (some of them may be of use to your research area) - https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing
@wrp sure. Thanks for the link.. I am familiar with this database. It has definitely been very helpful. Cheers.
BTW - if anyone here has papers that are not in this list, please contact me and we can add it
@user-239f8a That's really interesting, and it does sound similar. I need a profile of baseline PD (tonic) and task-responsive PD (phasic) fluctuations across a 25 min Sustained Attention to Response Task (SART) and a 30 min Attention Network Task (ANT), so particularly interested in how these designs and particularly data export - previously conducted in E-Prime and Tobii - are comparable to the Pupil Capture and output. Not familiar with how similar the timestamps, PD validity codes, and PD diameter output will be in attention tasks to that of the Tobii/E-Prime output. What experimental design software did you use? @wrp Thanks wrp, will check that database out now, big help!
You're welcome @user-965faf
@user-965faf I haven't really looked at my case study from the perspective of PD in this case... might look at that post-doc. Indeed, such comparative insight as you describe could be invaluable. Having said that, I haven't used much software for my experimental design per se. I have primarily approached my study from first principles of factors, levels, and responses. A reasonable background in statistical analysis has also been handy with hypotheses etc. On the software front, I am using a fuzzy cognitive map approach to analyse causality. I found a tool aptly called FCM Expert, developed by some other researchers at Hasselt University, for this purpose. The rest of my analysis is based on fixation exports and writing custom Python scripts for analysis, added to good ol' experimenting and documenting results and responses in transcripts. Along the way, however, I did find an interesting read called "Automatic Stress Classification with Pupil Diameter Analysis" by Marco Perrotti et al. Perhaps this might also be a good rabbit hole to go down. Currently on the 9-5, so apologies for slow responses. Best regards.
Hey folks - yesterday I began the slow process of checking track quality and exporting fixations for about 30 recorded sessions, 20 mins each. My Player has crashed about 10 times in the process due to this bug alone: https://github.com/pupil-labs/pupil/issues/1256
I've also had to delete player settings about 5 times to account for this issue: https://github.com/pupil-labs/pupil/issues/1258
(which prevents Player from starting up). This is 1.7.42 on macOS High Sierra.
Overall, of course, very happy with the results despite these setbacks.
@user-8779ef i.r.t. #1258: Could you provide a list of plugins that you use/have loaded? Also, please upload one of the "broken" user_settings_player files.
i.r.t. #1256: Do you need audio export? If not, you could rename the audio file to audio_.mp4 as a temporary work-around.
@papr Thanks. I'll upload the log and user_settings_player files upon next crash.
Well, that didn't take long. Uploaded them to the pyglui issue.
Thank you
Good morning. I'm having trouble executing the mouse_control.py code available on GitHub, because the mouse is moving slowly when I perform gaze tracking with Pupil. Can someone tell me why this is happening?
@user-3f0708 Please post your questions in one channel only. Could you clarify what you mean by slow movement? Do you mean that the cursor lags behind or that the movement of the cursor is actually slow?
Hello, I'm trying to print gaze distance in real time. I think gaze_point_3d_z shows that. When I finish calibration and look around, the printed data is lower than the true distance. Do I need to multiply by an initial value obtained after calibration? @papr
@user-42840c Depth estimation can be very noisy and is not accurate for distances further than 1.5 meters. It works best in the range of 0.5-1.0 meters.
The estimation is based on vergence.
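(A minimal sketch of printing that value in real time over the IPC, assuming Pupil Remote's default port:)
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")  # Pupil Remote (default port assumed)
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "gaze")

while True:
    topic, payload = sub.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    point = gaze.get("gaze_point_3d")  # [x, y, z] in mm, world camera coordinates
    if point:
        print("gaze distance along z (mm):", point[2])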
Aha, but in that range of 0.5-1.5 meters the printed data doesn't seem to work well either. When the distance from me to the monitor is 1000 mm, the printed gaze distance is much lower than 1000 mm, and the data also fluctuates after every calibration.
@papr I mean the cursor movement is actually reading.
@user-3f0708 could you clarify? Not sure I understand.
Quick question: what is the time delay from camera to computer? I remember 40ms, but it could have been anywhere from 38-42ms.
@papr Hello, I have been querying gaze_normals_3d from Python until now, and the data has been coming back with the eye id and the two gaze coordinates for each eye
but I am now getting an error message that gaze_normals_3d does not exist, and when I query gaze_normal_3d (without the plural "normals") I am getting three coordinates
Has there been a recent change in the source code?
@user-cd9cff check the topic. There is a difference between monocular and binocular data.
Yea, it works now thank you
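(For later readers, the difference in practice: binocular gaze datums carry gaze_normals_3d, a dict keyed by eye id, while monocular datums carry a single gaze_normal_3d. A small sketch of handling both, based on the field names discussed above:)
def extract_gaze_normals(gaze):
    # Binocular datum (e.g. topic "gaze.3d.01."): dict of eye id -> [x, y, z]
    if "gaze_normals_3d" in gaze:
        return gaze["gaze_normals_3d"]
    # Monocular datum (e.g. topic "gaze.3d.0."): single normal for one eye
    if "gaze_normal_3d" in gaze:
        eye_id = gaze["base_data"][0]["id"]
        return {eye_id: gaze["gaze_normal_3d"]}
    return {}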
Hi all! Is it basically possible to subscribe to two different topics (gaze & blink) from one Unity scene instance? It hasn't worked for me using "PupilTools.SubscribeTo()". Thanks!
@user-3638ed yes, this is possible.
@papr ok, I looked at the source code of PupilTools.cs - more precisely at the function "SubscribeTo( topic )" - and couldn't find a way to bind it to a script instance or something.
Atm I have implemented gaze tracking, for which I subscribe to "gaze" at the startup of one of my scripts (I use the data for selecting items through pursuits, which works fine).
Additionally, I'd like to subscribe to "blink" using another script, but when both of them are executed, only the last subscription works.
Do you have any hint on what I have to do to make it work?
Thanks a lot!
@papr Hello! How do I decode a timestamp in the gaze_positions file?
@user-2c0e1f what exactly would you like to do? Convert the timestamp to wall time?
@wrp yes
Timestamps to wall clock time - Please check out the info.csv file that is contained within each recording. You will find two keys, Start Time (System) and Start Time (Synced). You can read more about the difference between Start Time System and Start Time Synced at https://docs.pupil-labs.com/#data-format (Start Time System is Unix Epoch at start time of the recording and Start Time Synced is Pupil Epoch at start time of the recording).
You can convert timestamps to wall time with some quick conversions. Here is an example of how to do this with Python, using the start times in the info.csv file and an example first timestamp in the exported gaze_positions file:
import datetime
start_time_system = 1533197768.2805 # unix epoch timestamp
start_time_synced = 674439.5502 # pupil epoch timestamp
offset = start_time_system - start_time_synced
example_timestamp = 674439.4695
wall_time = datetime.datetime.fromtimestamp(example_timestamp + offset).strftime("%Y-%m-%d %H:%M:%S.%f")
print(wall_time)
# example output: '2018-08-02 15:16:08.199800'
@user-2c0e1f I hope this helps clarify.
@wrp thanks a lot!
@user-2c0e1f welcome
How do I fit a circle around the eye
when detecting the pupil?
@user-11fa54 The pupil should be automatically detected by Pupil Capture when the eye image is within the frame (and under normal conditions). Could you elaborate/provide a concrete example?
I am implementing a paper, "Hybrid eye center localization using cascaded regression and hand-crafted model fitting".
Please help me with the implementation in OpenCV.
@user-11fa54 it doesn't seem like your question is related directly to Pupil hardware, software, usage, etc. Perhaps you would like to migrate this question to the research-publications channel to ask for pointers/references. Please correct me if I misunderstood.
thanks
Hi, I'm getting this kind of error in my logs lately... I'm using the bundled 1.8.26 on Windows... I did not change anything in my own control code, just passing strings back and forth.
[ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 454, in world
  File "shared_modules\recorder.py", line 306, in recent_events
  File "shared_modules\file_methods.py", line 156, in extend
  File "shared_modules\file_methods.py", line 144, in append
  File "msgpack\__init__.py", line 47, in packb
  File "msgpack\_packer.pyx", line 284, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 290, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 287, in msgpack._packer.Packer.pack
  File "msgpack\_packer.pyx", line 234, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 263, in msgpack._packer.Packer._pack
  File "msgpack\_packer.pyx", line 281, in msgpack._packer.Packer._pack
TypeError: can't serialize <MemoryView of 'array' at 0x1fe9e5ef398>
capture crashes painfully and pulls everything with it into hell
Hey @user-29e10a which plugins are you running?
frame publisher, remote, annotation capture, and one visualizer ... no custom one
the accuracy visualizer?
yep
So my guess is that the issue lies with the frame publisher, but I thought I had fixed the issue.
it happens when stopping the (eye) recording
and, I start the frame_publisher plugin programmatically when stopping the recording (through unity)
I was able to reproduce the issue on macos as well
OK, thanks in advance. There was a second exception leading to a crash, too. I did not copy the log unfortunately, but it said that Capture did not find the path of the recording (maybe I deleted it while recording or very shortly after stopping)...
I was able to further track down the issue. Not sure why this issue did not come up before. It will be fixed with the next release.
perfect ♥
@papr, @mpk I could not find the delay from cameras to computer on the webpage. I remember Pablo mentioning ~40ms, but how much is it exactly? In the abstract I found 45ms; is that the accurate number to use?
@mpk any plans to include h265 support for compressing the videos before saving?
Hi, we have been using the updated version of Pupil Mobile and it stops capturing at 5GB. Is there a way to increase that?
we are also getting this error message
@user-b0c902 This issue is very likely related to @user-29e10a's issue from yesterday. Could you please add the crash log to https://github.com/pupil-labs/pupil/issues/1263
sure
I have a pupil device connected to a Windows laptop USB port. I also downloaded the Python projects and viewed a couple demo videos.
How do I get the software working on the laptop to capture my eyes?
@user-24fdfb did you download the application bundle - https://github.com/pupil-labs/pupil/releases/latest ? Extract the archive with 7zip. And then run pupil_capture.exe
You may need to right click and run as administrator the first time to give permission for driver installation
Thanks, @wrp . I ran the pupil_capture.exe
I ran pupil_capture.exe not as administrator and then as administrator and both times, I see lots of failures.
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.8.26
world - [INFO] launchables.world: System Info: User: JGreig, Platform: Windows, Machine: Josh-PC, Release: 8.1, Version: 6.3.9600
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
world - [ERROR] calibration_routines.screen_marker_calibration: Calibration requiers world capture video input.
world - [ERROR] calibration_routines.screen_marker_calibration: Accuracy Test requiers world capture video input.
world - [INFO] recorder: Started Recording.
world - [INFO] camera_models: Calibration for camera world at resolution [1280, 720] saved to C:\Users\JGreig\recordings\2018_08_12\000/world.intrinsics
world - [INFO] recorder: No surface_definitions data found. You may want this if you do marker tracking.
world - [INFO] recorder: Saved Recording.
world - [INFO] recorder: Started Recording.
world - [ERROR] calibration_routines.screen_marker_calibration: Calibration requiers world capture video input.
@user-24fdfb just to check, are you using Win 10?
I also ran PupilDrvInst.exe and restarted the computer.
Windows 8.1
is 10 required?
Yes
We only support Pupil software on Win 10
ok
and there's no older version of the pupil software for 8.1?
While software may work on versions lower than 10 - we can not offer support
We only target Win 10. There are no older versions of Pupil for older versions of Windows
ok
would the steps I took so far work if I had Windows 10?
Anyway, thanks. I'll probably continue this effort in a week or 2 when I have a newer laptop.
I have Ubuntu 14 on VMPlayer but this may need too many resources.
@user-24fdfb you should just be able to run pupil capture on win 10
ok
Vm is not recommended
ok
good day
Likewise :)
Hi, I am trying to track the eye when the user is looking at a workbench. I want to get what and where the user is looking. I checked the tutorial and found out I might need to use surface tracking. I am wondering what the relationship is between surface tracking and calibration. If I put markers on the surface, do I still need to calibrate the device? I guess we still need manual calibration (print the marker and move it around)? After calibration, what would happen if the user moves his head or body? Do I need to re-calibrate the device?
@user-5ccd98 Be aware that calibration/gaze mapping and surface tracking are two separate processing steps. Each transforms data from one coordinate system to another: calibration maps pupil data from eye camera coordinates to gaze in world camera coordinates, and surface tracking maps gaze from world camera coordinates to surface coordinates.
Therefore you will need to calibrate independently of surface tracking. The second transformation is done automatically as soon as you add surfaces.
Moving head/body is not a problem since the headset moves with it and gaze is still correct (ignoring the issue of slippage).
Hi papr, thanks for your reply. Do you have any recommendation of which calibration method I should use? I found many options in the tutorial.
I think manual marker calibration fits your purpose well.
Cool, thanks papr.
Hi papr, I am trying to run Capture in my windows laptop. But the program crashed when I opened it.
Any way to fix this?
@user-5ccd98 which version of windows are you using?
windows 10
It crashes immediately upon opening?
Yes.
Which version of capture do you use?
v1.8-26
If I did not connect the device, it worked
So opening capture without the headset, and then connecting it does not work either?
Let me try it.
It is in ghost mode.
Can I re-connect the device by clicking somewhere in the software?
Yes, you should be able to select the cameras in the uvc manager menu.
The Activate Source menu shows four unknown devices...
The selected camera is blocked or in use
This means that the driver installation was not successful. Please try running Capture with administrator rights
Oh, I see
Thanks, papr
Did it work?
Yes, papr. But I came across another issue. When I used the manual calibration, the process always stopped immediately and showed that "not enough ref data or pupil data for calibration".
I ensured that the pupil was within the range...
And even dismissing 0% of pupil data due to confidence
Please make an example recording of the procedure and send it to data@pupil-labs.com
Sure.
If the calibration stops, do I need to restart it over and over again?
Sent you a dropbox link
Thank you! I will have a look tomorrow π
Hi, I'm trying to use the 120Hz monocular Pupil Labs eye tracker with a separate motion tracker, for high-quality tracking with unconstrained head motion in a dark room. I'm trying to follow the calibration procedure from Cesqui et al. 2013 (https://www.ncbi.nlm.nih.gov/pubmed/23902754), but for this I need the sensor size and the focal length to convert the normalized camera units into mm. Do you have this spec? Thanks!
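(While waiting for the actual specs, the conversion itself is straightforward; the sensor size and resolution below are placeholders, not the real values for the 120Hz eye camera.)
# Placeholder values - substitute the real sensor size once known
sensor_w_mm, sensor_h_mm = 3.6, 2.7  # hypothetical sensor dimensions in mm

def norm_to_mm(nx, ny):
    """Normalized image coordinates (0..1) -> position on the sensor in mm."""
    return nx * sensor_w_mm, ny * sensor_h_mm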
jkrum: Does your mocap operate in the visible range, or does it operate in the IR range?
@user-82f104
in the IR range, it's an Optotrak system
@papr I am wondering if you got a chance to watch the video?
@user-82f104 Have you considered that both the Optotrak markers and the eye tracker will be emitting IR light? Have you checked for potential interference?
Hopefully the temporal signature of the Optotrak markers will differentiate them from the eye tracker and prevent false tracking of the eye tracker's LEDs, but you might check that early on.
Similarly, hopefully the pulsing from the IR-emitting Optotrak markers will not interfere with the eye tracker.
@user-5ccd98 I found the issue. You are using the stop marker instead of the calibration marker. This is why the calibration immediately stops without having found any reference data. The calibration marker has a black dot in the center of the concentric circles that the subject is supposed to fixate during the procedure.
Oh!! Got it. Thanks so much.
Sorry for the stupid mistake...
hello
I am trying to use Pupil Labs software on my Mac using Python.
I get the following error and cannot see the video stream: world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
can you help me?
@user-f81efb Please make sure that the cable is fully plugged into the clip.
Yes, it is plugged in completely.
I will unplug and reconnect and check.
@user-f81efb sometimes a bit more force is required to ensure that the USBC cable is firmly attached to the cable clip of the Pupil headset (as @papr notes)
When I run using the Pupil Capture application, it works completely fine.
When I run from Python, it does not work.
Are the eye cameras working in both cases?
yes
they are working in both cases
Only the world camera does not work when running from source?
Do you have a 3d world camera?
Yes; it says process started
but doesn't display.
Yes, I have a 3D world camera.
OK, then you have to select the realsense backend in the world window
okay i'll try that
The selector can be found in the Uvc Manager menu
world - [WARNING] launchables.world: Process started. world - [INFO] camera_models: No user calibration found for camera Intel RealSense R200 at resolution (1280, 720) world - [INFO] camera_models: No pre-recorded calibration available world - [WARNING] camera_models: Loading dummy calibration
I still don't see any video
and the eye trackers didn't open yet.
Should I use the world window to open the eye tracker window?
Or wait.
The eye trackers window has opened
but still no video.
A possible reason I could think of is that you installed the official librealsense instead of the required custom Pupil Labs version
Else I will try to reproduce the issue this afternoon.
https://github.com/pupil-labs/librealsense/blob/master/doc/installation_osx.md - I used this to install.
Thanks, I will check it once again.
What Mac do you have?
I have a MacBook Pro 2012
with OS X El Capitan.
Which bundle version do you use? And do you use the current master?
I am using the latest release, 1.8.
I cloned from git (https://github.com/pupil-labs/pupil.git) three days back.
OK, great, thank you
Thank you
Hey @user-f81efb , I just tried to reproduce the issue. My R200 is correctly identified, independently of running from bundle or source.
What do you think could be the problem?
It gets stuck after this:
world - [INFO] camera_models: No user calibration found for camera Intel RealSense R200 at resolution (1920, 1080)
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
Stuck in the sense that the software freezes?
yes
If I click on the option to enable the eye tracker, the eye trackers load after this.
I can also see the gaze points moving on the screen
but no video.
What do you mean by "eye tracker"? Do you mean the eye processes?
I mean the eye processes
for each eye.
ok, then the application does not freeze.
ok
Can you post a screenshot of the Realsense Source menu?
yes
in a moment
after choosing the option
Please let me clarify. I meant the menu of Realsense Source, not Realsense Manager. That's the icon below the currently selected one.
The small camera icon
sorry
@user-f81efb From the installation page that you linked above, please try cmake ../ -DBUILD_EXAMPLES=true and then make && make install
okay you mean for librealsense
Yes, librealsense. The first step is to identify if the c++ examples work. Also let us migrate this discussion to a private conversation since this is a personal setup issue and not relevant for the community.
I will summarize any relevant outcome later.
okay
@user-f81efb Have you uninstalled the RealSense driver from Device Manager and reconnected the tracker so the driver reinstalls? Try this 2-3 times. Worked for us.
@papr is there a known issue with the Auto Exposure Mode "auto mode" for the high-speed camera? I always get: world - [WARNING] uvc: Could not set Value. 'Auto Exposure Mode'.
@user-103621 Do you use Windows? Because there is not such a device manager on Mac. Which bundle do you use?
Yes i use Windows. Bundle is 1.8-16
Please update to 1.8-26. This should fix the issue.
Sorry, I didn't read that he was using Mac. As for the exposure, I have this error when I try to use Auto Exposure with the world camera, and the new bundle doesn't change my problem.
@user-103621 I was just told by my colleague that the "auto mode" does not work as expected and that actual auto exposure can be achieved by using the "aperture priority mode"
@papr Thank you !
Hi
Can someone help me with the timestamp format?
The Pupil headset I am using stores the timestamps as a double, and I cannot extract the datetime from it.
The timestamps are created from a monotonic clock that has an arbitrary epoch.
Be aware that this epoch is not the unix epoch unless you synced the pupil time epoch to it.
How do I sync it with the unix epoch?
You need to do that either before the recording, using the time sync plugin, or by subtracting the time offset between the two epochs. The difference can be calculated using the Start Time (System) (unix timestamp) and Start Time (Synced) (pupil time epoch) keys in the info.csv file of a recording.
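(Concretely, assuming info.csv's usual key,value layout:)
import csv

with open("recording/info.csv") as f:
    info = {row[0]: row[1] for row in csv.reader(f)}

offset = float(info["Start Time (System)"]) - float(info["Start Time (Synced)"])
# unix_timestamp = any_pupil_timestamp + offset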
Thank you!
I'm trying to help someone get the pupil tracker working on Linux. It works fine on a Windows machine, but on Linux only the front camera seems to be available (I'm testing with cheese). Both pupil cameras show up in the camera list, but if I select them in cheese I just get a few blocky pixels and no movement. This is the same on two machines: one recent-ish Linux Mint and one Ubuntu 16.04.
In the pupil software no cameras show up. the 'select available' list shows 4 'unknowns', and trying to select any says they are not available/in use. Suggestions welcome. I am perusing the docs.
Is there an ubuntu package I can test on this ubuntu box?
What headset are you using? Are you running from source or from the app bundle? Are you using Linux in vm?
Hmm, good questions. The Linux Mint machine is running the app bundle. I've not yet installed the software on my Ubuntu box; I was just seeing if I could get to the cameras at all. No VMs involved. Syslog says: idVendor=05a3, idProduct=9232. Is there a better way to find out what version of the hardware it is?
Let me rephrase re headset. Are you using a DIY headset or a Pupil Labs headset?
No extensive testing has been done on Mint. I would suggest running from source on Mint, as the bundle was compiled targeting Debian
Pupil labs headset.
OK. Am downloading the bundle to try on Ubuntu. Will do a build from source on Mint. So the massive .zip is the only packaging - no real packages?
Ah, the zip contains debs. Nice.
The zip archive at https://github.com/pupil-labs/pupil/releases/latest contains 3 applications - Pupil Capture, Pupil Player, and Pupil Service
(you beat me to it)
OK. Installing Pupil Capture on my Ubuntu box does indeed get me some pupil camera images, so that's good. I'll do a source build on the Mint box and see if that helps. May be back later if I have troubles
Thanks
Ok thanks for following up @user-d16987
I'm a Debian Developer so if you are interested in getting the software into Debian/Ubuntu sometime I can sponsor that.
Cool :+1: we will certainly get in touch
@user-d16987 Your offer is very appreciated!
Hey guys, could someone shed some light on how Pupil computes the confidence metric for each eye? Is it mentioned in any literature that could be cited?
I looked up previous chats and someone mentioned that the confidence metric is the ratio of support pixels to the number of pixels covered by the ellipse fit? I'm not sure if I understood it correctly.
Hi. I am using a 200Hz pupil camera headset. What is the focal length of the pupil camera, and is there a way to measure it from Pupil Capture? I cannot find this information in the pupil docs (it only says they are fixed focus)
@user-40bf4b The focal length is part of the camera intrinsics. Unfortunately, we only have pre-recorded camera intrinsics for the world camera. What you can do is select the eye camera within the world process, run the camera intrinsics estimation plugin, and extract the focal length from the saved Pupil Cam2 IDX.intrinsics file. The biggest issue is that you need to make the circle pattern visible to the IR-filtered camera. Neither the version printed on paper nor the one displayed on screen is visible in the eye camera.
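If it helps, here is a hedged sketch of reading that file afterwards. I am assuming it is a msgpack-encoded dict with one entry per resolution holding a camera_matrix, as written by shared_modules/camera_models.py; verify against your version:
```python
# Hedged sketch: extract fx/fy from a saved .intrinsics file. Assumes the
# file is a msgpack dict with one entry per resolution, each containing a
# 3x3 camera_matrix (the format written by camera_models.py, as far as I know).
import msgpack

with open("Pupil Cam2 ID0.intrinsics", "rb") as f:  # file name is an example
    data = msgpack.unpack(f, raw=False)

for resolution, intrinsics in data.items():
    if resolution == "version":  # top-level version key, not a resolution entry
        continue
    camera_matrix = intrinsics["camera_matrix"]
    print(resolution, "fx:", camera_matrix[0][0], "fy:", camera_matrix[1][1])
```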
Hey guys, is there a way to not always detect the fixations from scratch when opening a video in Player? This takes ages for a 20-minute video! Why are the fixations from Pupil Capture not saved once they are detected?
Hey Erik, I am working on that right now actually.
Hello @papr I have 2 questions: 1. Is it possible to read the exposure time while using the aperture auto mode? 2. Why isn't the brightness of the picture proportional to its absolute exposure time? When I increase the exposure time it "jumps" back to a darker picture despite having a higher exposure. It happens every 32 values.
Ah, that sounds nice!
@user-103621 1. You would have to change the source code for that. This information is not accessible to third-party plugins due to the process boundary.
@papr Will this be available within the next two weeks? We are getting close to our test...
@user-c351d6 If you run from source, yes. From bundle, maybe.
@user-103621 2. Within the allowed exposure time range the brightness should be linear as expected. It is possible that values bigger than the max values result in darker images than expected.
And what changes if you go over the exposure time range? Does it increase the gain?
@user-c828f5 This is the code that calculates the confidence: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detectors/detect_2d.hpp#L570-L573
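Roughly, those lines compute the fraction of the fitted ellipse outline that is backed by detected edge pixels; a paraphrase in Python (not a verbatim port of the C++):
```python
# Paraphrase of the linked C++ (not a verbatim port): confidence is the
# fraction of the fitted ellipse's outline that is supported by detected
# edge pixels, clamped to 1.0.
def pupil_confidence(num_support_pixels, ellipse_circumference):
    if ellipse_circumference <= 0:
        return 0.0
    return min(num_support_pixels / ellipse_circumference, 1.0)
```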
@user-103621 Exposure time has a defined range within the firmware that is based on the selected frame rate. The behavior outside of that range is not defined.
@papr How big is the effort to run it from source?
Depends on the OS and your experience with the terminal. Linux/Mac: easy. Windows: difficult/painful. You will also need administrator rights.
Hmm, I got a Mac and some experience but little time. I would prefer the deployed version
So the plan is to release a bug fix release, including the caching feature, soon. I just cannot promise you that it will be released within the next two weeks.
In your case you will have to weigh time spent on source installation vs time spent on recalculating fixations.
Well, as I plan to write a Vuzix plugin soon, I will be forced to install from source, I guess.
@user-c351d6 Actually, you do not have to run from source to run/develop third-party plugins. Simply put your custom plugins in pupil_player_settings/plugins and Player will load them automatically.
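For reference, a minimal plugin sketch (the class is a hypothetical example, and the event key may differ between Pupil versions):
```python
# Minimal user-plugin sketch, saved as e.g.
# pupil_player_settings/plugins/gaze_counter.py (file and class names are
# examples). The event key 'gaze_positions' may differ between versions.
from plugin import Plugin


class Gaze_Counter(Plugin):
    """Print how many gaze datums belong to the current frame."""

    def __init__(self, g_pool):
        super().__init__(g_pool)

    def recent_events(self, events):
        gaze = events.get("gaze_positions", [])
        if gaze:
            print("{} gaze datums in this frame".format(len(gaze)))
```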
Ah, I haven't dived into this. I will check the documentation when the time comes.
For reading the exposure time while using aperture mode: do I see it correctly that this would mean adding the feature of reading the exposure time to pyuvc using libuvc?
And thanks for the info!
@user-103621 I am not very sure about that. Are we talking about the world camera, the 120Hz eye cameras, or the 200Hz eye cameras?
papr#8338, wrp#1848, mpk#4851: Hi folks. We are running v0.32 of Pupil Mobile. When we connect to the mobile using Pupil Capture on our laptop, it will allow us to connect to the head camera, but it won't show the image (we just get a white screen). We have another phone running v0.23 of Pupil Mobile, and this connects just fine. Is there any way to fix the new version quickly or revert to the old version? Note that these issues persist in multiple versions of Pupil Capture, so that doesn't seem to be the cause.
@user-78dc8f For clarification: these are two different phones? Are all video streams (world + eye) white when streaming with the phone running v0.32? What does the preview on the phone show?
@papr: yes...different phones.
@papr....one second on the other queries.
Important: does the sensor overview show the cameras as idle or as streaming?
I'm talking about the world camera (high-speed model)
@papr preview on the phone looked fine. Regarding video streams on laptop, the world cam was white (eye cam was grey...as if nothing had been selected). We'll double-check now on 'idle' vs. streaming...
@user-78dc8f I am able to reproduce the issue on v0.32. Looks like the streaming does not start as it should.
@papr Can confirm, all statuses on the phone are 'idle'. Screen is white on world cam, and remains grey on the eye camera.
@papr Is there any possibility of being able to revert the version on the phone in question? As we require it for sessions tomorrow.
Yes, I will try that.
@papr I need to log off for a bit. Is there a way I can get an update on Pupil Mobile later this evening? (I'm not too Discord savvy...)
Sure, please write an email to ppr@pupil-labs.com such that I can keep you up-to-date on the issue.
@papr Awesome. Thanks.
Hello,
I'm using Pupil for marketing research and I need to get the following data: number of fixations, total fixation time, and total time until the first fixation.
Can you tell me how to get that data please?
Hey @user-e24be9 Use the offline fixation detector to detect and export fixations to csv. You can calculate all the required data based on the exported data.
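For example, a sketch with pandas (the column names follow the fixations.csv export as far as I recall, with duration in milliseconds; verify against your own export):
```python
# Sketch: derive the requested metrics from an exported fixations.csv.
# Column names ('start_timestamp', 'duration') are assumptions based on the
# export format; the path and recording_start value are examples.
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")

number_of_fixations = len(fixations)
total_fixation_time_ms = fixations["duration"].sum()

# Time until the first fixation, relative to the recording start. The start
# in pupil time is the "Start Time (Synced)" value from info.csv.
recording_start = 2238.915  # example value
time_to_first_fixation_s = fixations["start_timestamp"].min() - recording_start

print(number_of_fixations, total_fixation_time_ms, time_to_first_fixation_s)
```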
Hey everyone, did anyone manage to evaluate stress levels using pupil size data ?
Hey @user-58d5ae I do not know off the top of my head, but you can have a look at our list of papers that cite Pupil: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0
Hey, I have a question about the gaze data that is being published by Pupil Capture. One computer only receives the topics 'gaze.3d.0.' and 'gaze.3d.1.', while another computer has 'gaze.3d.01.' as well as 'gaze.3d.0.' and 'gaze.3d.1.'. Just wondering why this could happen and how to get the other computer to have the same output. They both have the latest versions, and we are both running Windows. Thanks, Jack
Hello everybody,
I have a question concerning the Manual Marker Calibration method.
Is it good practice to vary the distance of the markers? Or shall one calibrate by always staying in the same 2D plane?
I would like to cover a circular area with the user in the center of the circle... so a fixed distance is not the best option for me.
Thank you very much!
@papr Hi, again regarding the video recording issue: sometimes the mp4 files are not readable (even with VLC). I understand that is due to corrupted headers (although I do not understand why it is happening so often). What I missed until today is that the eye*.timestamps file for corrupted videos is missing! This is very bad, because (a) in our pipeline we rely on the possibility to look at the eye videos at the correct timestamp, and (b) we redetect the pupils with different pupil detectors; if the timestamp file is missing, the mapping of frame to time is lost. Strangely, it did happen that one eye video was readable (with timestamp file) and the other not, with missing timestamps. First, I hope you can bugfix this for the next release, and secondly: is there a possibility to repair old recordings? We use Windows, 1.8.26. The issue is apparent with both codecs (big and small file); the world video seems to be always fine.
@user-29e10a Please remind me, are these videos recorded using Pupil Capture or Pupil Mobile?
@papr Capture
Is there an open github issue for this already? Sorry that I do not know that. There were a lot of bug reports lately :/
Also: timestamps are missing and video files are broken if the recording process crashes. The question is whether a Python exception is at fault or if it is a crash caused by one of the underlying C libs...
@papr not yet, I will raise an issue there right now
@papr if I recall correctly there is nothing in the logs that points in a direction
Thank you! Try to provide the eye log files. If they do not contain Python tracebacks, then the underlying C libs are at fault.
ok, I will provide them with the issue
@papr is there a possibility to recalculate the timestamps? I can repair the video files, and I have the pupil timestamps (so I can count the frames). Is there a way to tell when the recording started exactly?
@user-ad8e2d The last part of the topic (.0., .1., or .01.) indicates which pupil data the gaze point is based on, and therefore whether the gaze datum is monocular or binocular. Low-confidence pupil data will be mapped monocularly. If you do not see any binocular gaze on one PC, check the gaze confidence. Also: the HMD calibration only maps monocularly.
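A sketch of checking this over the network API (the same pattern as the filter_messages.py helper linked elsewhere in this chat):
```python
# Sketch: subscribe to gaze over the IPC backbone and report whether each
# datum is binocular, based on the topic suffix described above.
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:{}".format(sub_port))
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    kind = "binocular" if topic.decode().endswith(".01.") else "monocular"
    print(topic.decode(), kind, "confidence:", datum["confidence"])
```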
@user-29e10a I do not see a way to restore the timestamps. One could interpolate based on the other timestamp files but this might result in desynced eye videos...
@papr Thank you very much for explaining that.
@papr Do you know how I could subscribe to only the world frame from the Frame Publisher? Currently when I subscribe to 'frame' I get 'frame' (the world frame), 'frame.eye.0', and 'frame.eye.1'. Is there a way I can get a 'frame.world', similar to the Frame Publisher example Pupil Labs provided in the docs? When I use the Frame Publisher, 'frame' is the 'frame.world', as there is no 'frame.world' published. I am using Pupil Capture on Windows. Thanks, Jack
@user-ad8e2d Is this your issue: https://github.com/pupil-labs/pupil-helpers/issues/28 ?
This is actually a bug in Capture and I have just finished a PR that fixes the issue: https://github.com/pupil-labs/pupil/pull/1276
@papr yea it is
I have noticed a couple of other things that don't match the current docs
They are minor things, like spelling in the JSON package received.
What do you mean exactly?
You should be receiving msgpack and not JSON, btw
Yeah, that's what I meant, sorry; I unpack it to JSON.
I'll find an example now
You can use this fix until we release the fixed bundle: https://gist.github.com/papr/59f9b2eba22fa8cc4306d67730f089a3
Put the file in pupil_capture_settings\plugins\ and activate Frame_Publisher_Fixed from the Plugin Manager. Afterwards you should be able to receive world frames only, by subscribing to frame.world
When I get gaze data I used this doc: https://github.com/pupil-labs/pupil-docs/blob/master/user-docs/data-format.md#pupil-player but I receive "gaze_normals_3d" and "eye_centers_3d", whereas the doc says "gaze_normal_3d" and "eye_center_3d"
@papr
Thanks for the fix π
@user-ad8e2d these are not spelling mistakes! These are binocular gaze sample fields.
Check the topic, it should end in .01. See the description above
oh ok thanks, my bad! That makes more sense.
@papr @wrp @mpk Hey guys, just an update. We've finally finished the design of our polycarbonate hot mirror insert for the Vive, and ordered 5 pairs from a local optics manufacturer. We are working on the camera mount (have preliminary designs in SolidWorks). There will still be work to do on the software compensation for distortions from the Fresnel lens.
I'm sure we're still at least a few months out, but we progress!
Cool! Thanks for the update @user-8779ef
@user-a04957 Which type of detection/mapping are you using? 2d or 3d?
Good afternoon. Is it possible to get the calibration process data as a .csv file?
@user-3f0708 What do you refer to by calibration process data? Do you mean the reference marker and pupil data that is collected during calibration?
Yes @papr
Well, there is no plugin that exports this type of data yet... But the data is definitely stored in the notifications.pldata file. You can write a Python script to export that data.
@papr Where is this notifications.pldata file saved on Pupil's platform?
This file is part of every recording. (starting with v1.8, previously the data was part of the pupil_data file)
Thank you for the information @papr
how do you configure the pupil software?
I can record video with it but the mp4 files show mostly white and very light gray
hard to see any detail of the eyes
Hi @user-24fdfb was the eye visible in Pupil Capture when you were recording?
@user-24fdfb did you try loading the recording in Pupil Player and using the Visualize Eye Video Overlay plugin?
I will give that a shot, thanks
this Discord group is very helpful. It really saves lots of time to talk with real people here
Thanks for the feedback @user-24fdfb
woohoo I can see my eye
I needed to adjust some settings on the camera
It was settings in the pupil capture software
is there a way to calibrate without a world camera and without it being in VR? Like, look at a screen marker at a known distance, or anything at all?
essentially I have people wearing augmented reality glasses. I want to know what icons they are looking at... but I don't have access to a world camera due to the system.
Does the Pupil Labs open source platform support any Logitech camera model?
This model is officially supported: https://www.logitech.com/en-us/product/c930e-webcam
@user-29e10a I just realised that you can use the start_eye_capture notification to set uvc settings for the eye cameras, even if they are running already: https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/eye.py#L517-L518
@papr Hey pablo, my student @user-c828f5 has been struggling with something for a while (I believe you've been helping him). Would you be surprised to see a notebook in which we show that the angular distance between the left and right eye approximated using 2D POR data (and a pixel-to-degree conversion) is very different compared to an angular distance calculation from the same track, but using 3D gaze normals?
@papr thanks a lot, I will try this!
@user-8779ef what do you mean by POR data?
@user-8779ef also, I would like to see that notebook
Hi all, I came across two questions when using Pupil: (1) The video size is too large. I plan to record about one hour of data. Is there any other way to decrease the video size besides lowering the video resolution (e.g., a customized file format)? (2) I checked the exported csv files. I am still confused about the timestamps in both files even after I read the descriptions. What I got is something from 2200 to 2300 [the whole recording is about 2 min]. What do these numbers mean? I am wondering if there is any way to get the global time, I mean something like 2018-08-23, 22:45... Thanks for your help!!
I want to get the gaze norm x and norm y values dynamically, so that I can control a gimbal camera as I move my eyeball. What Python code am I looking for?
Hi guys, I just have a question about the surface tracking. We are tracking a couple of surfaces in a 20-minute video. After filling the marker cache (which takes around 12 minutes), Pupil Player freezes for a long time (and sometimes endlessly). Is this a known behaviour?
And it doesn't save the marker cache in these cases.
I just measured it, it freezes for about 13 minutes
@papr POR: gaze locations within normalized screen space. I can't remember the name of the variable.
probably norm_pos_x / norm_pos_y.
Thanks for the reply. So, he's going to put the notebook together. Either (1) he'll find that he's made some kind of mistake in his calculations, or (2) he'll demonstrate what he's been telling me for a while: that the angular distance between the left and right eye is very different if you use norm_pos_x/norm_pos_y vs gaze_normal0/gaze_normal1
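For what it's worth, the 3D side of that comparison is compact; a sketch (assuming the gaze_normal0/1 components from the export):
```python
# Sketch: angular distance in degrees between the two eyes' gaze directions,
# from the exported 3D gaze normals (gaze_normal0_* and gaze_normal1_*).
import numpy as np

def angular_distance_deg(normal0, normal1):
    n0 = np.asarray(normal0, dtype=float)
    n1 = np.asarray(normal1, dtype=float)
    n0 /= np.linalg.norm(n0)
    n1 /= np.linalg.norm(n1)
    cos_angle = np.clip(np.dot(n0, n1), -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos_angle))

print(angular_distance_deg([0.1, 0.0, 1.0], [-0.1, 0.0, 1.0]))
```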
I want to relay blink and eye movement events to a website. I have experience programming with Python. Is modifying the pupil capture code the best way to do this or is there an API I should study?
@user-c351d6 this is the gaze being mapped onto each surface. We need to refactor to make this work without blocking.
I have made an issue for this: https://github.com/pupil-labs/pupil/issues/1280
Hi all. We are a medical research group interested in gaze patterns of surgeons using the da vinci surgical robot. We are uncertain of how to lay a gaze pattern over an outside camera being used as a scene cam, in this case the da vinci robot's surgical camera. Is this possible with the current software available?
Hey guys, I was wondering if it's possible to buy and use one pupil cam (200Hz eye camera) just to check if it meets my needs and test its performance before buying the actual headset? I can save up 650 EUR but might not afford the whole headset just yet.
I can build a cheap and dirty holder for it until I save up for the headset+second camera upgrade
@user-24fdfb I would recommend to use our zmq network API. See this example python script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
@user-f5ff51 Hey, yes, you can buy a single eye camera from the store but it does not include the necessary cabling to connect it to a computer.
what cabling?
@user-a08c80 Am I correct in assuming that the surgeon's head movement is independent of the robot's scene camera? Our software assumes a fixed physical relationship between eye cameras and scene camera.
@user-e711e0 Check out https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
@papr what cabling?
@user-f5ff51 the headset includes internal cabling that connects the eye cameras to the USB clip. The eye camera listed on the website does not include the cabling nor the USB clip
@papr If it's just standard 4 USB pins I can get/make my own and use that for now. Is that the only issue?
@user-f5ff51 The eye cameras have a female JST connector. We can sell you a JST-to-USB cable as well. Please write an email to sales@pupil-labs.com if you are interested in that.
@papr That's okay, I'll solder one up myself. Just let me know if that single camera will connect with the Pupil software and SDK by itself, so I can evaluate it before I buy the whole system
The software should work perfectly fine with a single eye camera.
@user-f5ff51 What exactly do you need to know/test, btw?
@papr That's correct. We have developed an approach where the user's head is essentially stationary once comfortable and calibrated in the console viewfinder (we feel it is somewhat analogous to the Oculus Rift). Thank you!
@papr accuracy, precision, latency, that sort of stuff
@user-a08c80 You are right! I would handle it like a VR scenario, with the difference that you can show the calibration markers to the scene camera. The only technical problem is to get the scene camera working in Capture.
@user-a08c80 Another question is whether the subject is comfortable in the console viewfinder even when wearing the headset.
@user-f5ff51 Be aware that you cannot test gaze mapping (+accuracy, precision) without a world/scene camera. You should be able to use any uvc camera for this purpose though
okay, thanks!
@papr We have taken steps to make sure the setup is comfortable (enough... still improving) Ok excellent. Thank you.
Exporting a recording in Pupil Player yields a .csv file with a column each for an x and y position for my gaze. I've got the 120 Hz pupil cameras, so there are tens of sample positions for every world camera frame. When I play back the exported data in a program I've written to visualize the gaze path, it's similar to the green dot visualization I see when I play back in Pupil Player, but the output in my program is sampling each point individually, and this results in a very jerky and erratic playback. It seems like Pupil Player is using some smoothing/averaging of the data to create the green dot. Can you tell me what kind of smoothing is happening, or point to where I could find it in the source code?
@user-2686f2 This is the visualization for the dot https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/vis_circle.py#L39-L54
As you can see there is no smoothing. We filter for a minimal confidence though!
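So replicating the dot's behaviour mainly means applying the same filter; a sketch (the 0.6 threshold is an assumption based on Player's default minimum data confidence; check your settings):
```python
# Sketch: drop low-confidence gaze samples before drawing, mirroring the
# filter applied to the green dot. The 0.6 threshold is an assumption
# based on Player's default minimum data confidence.
MIN_CONFIDENCE = 0.6

def filter_gaze(gaze_points):
    return [gp for gp in gaze_points if gp["confidence"] >= MIN_CONFIDENCE]
```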
Hello everyone. Is it possible to tell the pupil cam to stream in Uncompressed mode rather than MJPEG? And is it possible to stream grayscale frames instead of RGB? (If that will require to use USB3 connection, that is not an issue for me). Thank you.
I am not 100% sure about that but @mpk should be able to answer this.
@mpk Is it possible to tell the pupil cam to stream in Uncompressed mode rather than MJPEG? And is it possible to stream grayscale frames instead of RGB? (If that will require to use USB3 connection, that is not an issue for me).
@papr Thanks!
I also had a question about calibrating via the manual marker (the target shape). The video provided shows someone holding the target several feet away from the person wearing the Pupil glasses. I noticed a slider for "marker size" in the Pupil Capture software. Can you make smaller target printouts and calibrate with them closer to your face? If so, does this affect the accuracy of the calibration in any way?
@user-2686f2 yes, you can print them smaller. Make sure that the markers are detected during the calibration. You should hear a tick sound when a marker is recognized.
In regards to the calibration: Always calibrate at the distance that your subject will mainly look at. Also make sure that you cover most of the subject's field of view.
@papr I've been using the new experimental calibration where you keep your eyes on the target and move your head around. I've been getting better accuracy using that method. I've always been calibrating at the distance my subject mainly looks at, I was just wondering if a smaller target could improve the accuracy (calibration usually occurs at about 2 feet).
Thank you for your feedback! Cool to see that the single marker calib. is being used. I do not think that a smaller marker would increase accuracy.
Can someone please remind me of the email address to contact if I have a hardware issue and need a repair/replacement?
found it : [email removed] Thanks!
Hello. I want to get the angle of eye rotation. I'm a beginner at Python.
I'm working on Ubuntu 18.04 and followed the instructions for Linux. I try to run main.py but I can't see any window. How can I check that my setup is correct?
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.8.33
world - [INFO] launchables.world: System Info: User: xxx, Platform: Linux, Machine: xxx, Release: 4.15.0-33-generic, Version: #36-Ubuntu SMP Wed Aug 15 16:00:05 UTC 2018
world - [INFO] pupil_detectors.build: Building extension modules...
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
The same warning is displayed continuously.
@user-b70259 Don't worry, this warning is normal. Just wait a bit while the pupil detectors are being built. Once they are built, the window will appear.
Also: You do not need to run from source in order to access eye rotation. You can access it via the network interface as well. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
Hey, my Pupil software time does not match the actual clock time. So when it records the time of an event, we have an offset, which we want to determine or correct for. Can anyone help?
By actual clock, do you mean your system time?
Do you want to do it after the fact or before a recording even starts?
yes I meant the system time, thanks. I need to record the pupil data with respect to system time; it's easier that way when doing the analysis with some external software, for instance Matlab.
Is there a way to do this? If not, what would be the next best thing?
Yes, this is possible. Enable the Time Sync plugin and run this script: https://github.com/pupil-labs/pupil-helpers/blob/master/network_time_sync/pupil_time_sync_master.py
Where should the code be run from? The same directory as capture.exe? Does it need to be run before each recording?
It should run on the same computer that runs the external software that you want to sync to.
This needs to run in parallel to Capture, yes.
I tried running the code; it says it cannot find the 'Clock_Sync_Master' module. I could not find such a module in the project, am I missing something?
Ah, yes you will need this file from the same repository: https://github.com/pupil-labs/pupil-helpers/blob/master/network_time_sync/network_time_sync.py
Put it next to the first file.
You might need to install other requirements
This is the error I get. The files are on the same level
I checked, uvc is installed. I am running Python 3.6, as required
Installing the requirements on Windows is very tricky. Maybe a small plugin that sets the Pupil clock on load would be better
Can you maybe help me with this? I am not an expert on programming...
Sure, just to confirm: You need seconds in Unix timestamps?
aka what ever time.time() returns
yes, we want a setup where the pupil recordings are in terms of UNIX timestamps of system time (if that makes sense)
Yes, I understand.
thanks a lot, papr
@user-40bf4b https://gist.github.com/papr/87c4ab1f3b533510c4585fee6c8dd430
The plugin does not have any UI. You can make sure that it is running by checking the plugin manager.
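As an alternative to the plugin, Pupil Remote also accepts a "T <timestamp>" command that sets the Pupil clock; a sketch (run before starting a recording; note the two clocks will still drift apart slowly over long sessions):
```python
# Sketch: set Pupil's clock to the local unix time via the Pupil Remote
# "T <timestamp>" command, so subsequent recordings use unix timestamps.
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

remote.send_string("T {}".format(time.time()))
print(remote.recv_string())  # Capture replies with a confirmation string
```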
Hey, it seems that the Frame Publisher plugin in Pupil Capture is broken in the latest release. I think this is fixed by https://github.com/pupil-labs/pupil/commit/b20f4c19dfceabe13798d9b92631fec49ce76dda, but a build with these changes hasn't been released yet. Is there any way to know when the next release will be? We are running on Windows 7 so it's kind of a pain to build from source.
Unfortunately, there is no fixed timeline for the release yet.
You can place this fixed plugin in your pupil_capture_settings/plugins folder: https://gist.github.com/papr/59f9b2eba22fa8cc4306d67730f089a3
This way you do not need to run from source @user-23e10d
Will do. Thank you!
Hi, I'm trying to use the offline fixation detector. I believe I have saved 3d pupil information (when I export the data, pupil_positions.csv has 3D info in it), but the fixation detector persists in using the "gaze" method. Any idea how I can troubleshoot what's going on?
update: it looks like fixation_detector looks for a field in the pupil data called 'gaze_normal_3d' (line 161); my data has a field called 'gaze_normals_3d'
in addition -- is there a way to batch offline fixation detection export?
Hi everyone! Is it possible to make a Pupil Labs eye tracker communicate with the HoloLens through ZMQ? NetMQ doesn't seem to work on the HoloLens
@user-a6b05f we did not get it to work, but maybe it is possible now. We have a special plugin in Pupil to talk to the HoloLens.
Hi mpk
@user-3a93aa hey, this is a very subtle bug. Please create a github issue for it. Currently, the batch export is disabled.
@papr After the build finished, the window appeared. But something is wrong, so I got the error "video_capture.uvc_backend: Init failed."
Then I tried filter_messages.py with Pupil Service and it worked, so maybe I can manage. Thank you for answering such an easy question.
Guys, I've paid real money here, can I please get an answer to my question?
@user-a3b3bd I just saw your question. You can stream uncompressed but not grayscale. Have a look at the available uvc streams.
Hey @user-a3b3bd This is a community-based channel. As you can see there are a lot of questions, and it happens that some of them are overlooked by mistake. A simple reminder would have been enough. I would appreciate it if you adjusted your attitude accordingly.
@user-b70259 What hardware are you using? The headset or one of the addons?
@papr I use the headset. It was bought maybe 3 years ago, so the camera module is an old one. I don't know if that could be related.
@user-b70259 do you have a binocular or a monocular headset?
@mpk Thank you. Is uncompressed better for reducing latency?
@user-3625f2 I don't think that there is enough USB bandwidth to run uncompressed with high frame rates.
If it is 24 bits per pixel @ 200x200 pixels and 200Hz, then each second of pure pixel data needs only 24 MByte of bandwidth. Should be enough? That said, if latency is unaffected I don't care.
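(A worked check of that arithmetic:)
```python
# Worked check of the estimate above: 24-bit RGB at 200x200 px and 200 Hz.
width, height, fps = 200, 200, 200
bytes_per_pixel = 24 // 8
bandwidth = width * height * bytes_per_pixel * fps
print(bandwidth / 1e6, "MB/s per eye camera")  # -> 24.0 MB/s
```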
But I feel like any compression will take time and use buffers
I have never tried to measure the latency with uncompressed images myself. Keep in mind that you need the same amount of bandwidth for the second eye camera as well as additional bandwidth for the world camera. I am assuming that you are using a headset.
Do I really have to connect them to the same USB port?
By the way how do you measure latency yourself?
And is there any specific reason not to have grayscale capture to cut the bandwidth? I guess MJPEG compression would make it pointless, but for uncompressed streams it would cut it by 3x.
libuvc provides the start-of-exposure-timestamp for each frame. We define latency as difference between this timestamp and the time that we receive the frame in our consuming application.
Oh, so you can get a good latency measurement purely in code? That's nice. I have this high-speed camera and LED setup; I guess I didn't really need it here...
What do you mean by "purely in code"?
Also, as mpk said, the cameras simply do not provide the grayscale streaming option.
And technically yes, you could disconnect the cameras from the clip and connect them one-by-one to different usb ports.
I mean by comparing timestamps in software.
I understand what mpk says, I'm just asking a general question: why tell the cameras to provide frames in RGB rather than grayscale in your firmware?
Oh, well. It was worth asking.
By the way, can I run Pupil Capture without starting any GUI, just from the command line, if I just want to dump eye position data?
This is not possible yet. You can write your own CLI though. Simply use libuvc+pyuvc+pupil_detectors. This would not include any gaze mapping though.
thank you. "This would not include any gaze mapping though." Why not? Code not open sourced for that yet?
Let me clear up the terms:
- pupil data: data relative to the eye camera
- gaze data: data relative to the scene camera
- gaze mapping: the process of mapping pupil data to the scene camera
Everything is open source, but the calibration procedure needs further logic to detect reference data. You can use libuvc+pyuvc to receive frames and use the pupil_detectors to receive pupil data. If you want gaze data, you will have to run a calibration first. There is no isolated code to run in a script, though.
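For the pupil-data part, a rough sketch of such a CLI (the camera-name filter and the Detector2D API from the standalone pupil-detectors package are assumptions; no calibration or gaze mapping happens here):
```python
# Rough sketch of a GUI-less pupil-data dumper: grab eye frames with pyuvc
# and run the 2D detector on them. The "ID0" name filter and the Detector2D
# API from the standalone pupil-detectors package are assumptions.
import uvc
from pupil_detectors import Detector2D

detector = Detector2D()

# pick the first camera whose name contains "ID0" (eye 0 on Pupil headsets)
device = next(d for d in uvc.device_list() if "ID0" in d["name"])
capture = uvc.Capture(device["uid"])
capture.frame_mode = (192, 192, 120)  # width, height, fps; adjust to your camera

while True:
    frame = capture.get_frame_robust()
    result = detector.detect(frame.gray)  # expects an 8-bit grayscale image
    print(frame.timestamp, result["confidence"], result["ellipse"]["center"])
```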
understood, thank you for your time
Created issue https://github.com/pupil-labs/pupil/issues/1286.