Hi
Where is your office?
Hi @user-6151db - could you please send an email to jobs@user-64de47-labs.com if you are interested along with your CV and/or link to your github for review
the BKK office is a satellite office (smaller team here in BKK than in BER) - we are located on Soi Phrom Chit - just moved offices 🙂
I am interested in your job, but my skills right now definitely do not fit your company. I have been self-studying ML and look forward to applying it to image processing
@user-6151db - send me a PM and we can discuss further
is there any way to set the file directory via pupil remote when sending the record command?
No, only the name of the recording. But you could send a start_recording notification which includes the recording directory
I am not on my computer to look up the details for the notification, unfortunately. You can find them in recorder.py
i see how you send a notification but where would i put the directory
I just had a look at the code. Setting the recording directory via notification is not supported right now. Please open an issue for this missing feature.
Yeah, I keep getting... Sent command {notify.recording.should_start} and received Unknown command
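For later readers: the "Unknown command" reply happens because Pupil Remote treats a plain string as one of its short commands; a notification needs to be sent as a multipart message, a `notify.<subject>` topic frame followed by a msgpack-encoded payload. A minimal sketch, assuming pyzmq and msgpack are installed, Pupil Remote is on its default port 50020, and that `session_name` is a key your Pupil version honors:

```python
import msgpack  # third-party: pip install msgpack


def pack_notification(subject, **kwargs):
    """Build the two multipart frames for a Pupil notification.

    Topic is 'notify.' + subject; payload is the msgpack-encoded
    notification dict, which must itself contain the 'subject' key.
    """
    notification = dict(subject=subject, **kwargs)
    return 'notify.' + subject, msgpack.dumps(notification, use_bin_type=True)


if __name__ == '__main__':
    # Network part -- assumes a running Pupil Capture with Pupil Remote.
    import zmq

    ctx = zmq.Context()
    pupil_remote = ctx.socket(zmq.REQ)
    pupil_remote.connect('tcp://127.0.0.1:50020')

    topic, payload = pack_notification('recording.should_start',
                                       session_name='my_session')
    pupil_remote.send_string(topic, flags=zmq.SNDMORE)
    pupil_remote.send(payload)
    print(pupil_remote.recv_string())  # Pupil Remote acknowledges each message
```

The REQ socket must receive the acknowledgement before sending again.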
I have a matlab script that can send commands to python
i will upload to my fork later tonight or this week
opened an issue
@user-ed537d @papr you should be able to include a /
in the session name. This will create a subdir.
This is supported via the notifications and via pupil remote.
@mpk thank you however I'd like to change the root directory to a network drive...
would it still work if i just hardcoded it to root
on the pupil machine
@user-ed537d you can first set the path_to_recordings
to root and then go from there to wherever you want via the session name
actually I just checked: if you start the session_name with /
you will ignore the path_to_recordings and start from root.
so no need to open a new issue. We'll just add a bit of docs regarding this nifty trick.
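A sketch of the nifty trick above, assuming the `R <session name>` Pupil Remote string command as used in the pupil-helpers examples; the path below is hypothetical:

```python
def start_recording_command(session_name):
    """Build the Pupil Remote string command that starts a recording.

    A session name beginning with '/' is treated as an absolute path,
    bypassing path_to_recordings (the trick described above).
    """
    return 'R ' + session_name


if __name__ == '__main__':
    # Network part -- needs a running Pupil Capture instance.
    import zmq

    ctx = zmq.Context()
    remote = ctx.socket(zmq.REQ)
    remote.connect('tcp://127.0.0.1:50020')
    remote.send_string(start_recording_command('/recordings/20170501/task1'))
    print(remote.recv_string())  # acknowledgement from Pupil Remote
```

Capture will still append its numbered subdirectory (000, 001, ...) under that path.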
awesome i gotta try it out
@mpk i guess a problem with this method is you lose the ability to name the file
@user-ed537d I'm not sure I understand. If I set the session_name to /users/mkassner/test
I made recordings like this:
/users/mkassner/test/000
/users/mkassner/test/001
...
yes I think it would be nice to be able to have
/users/mkassner/test/RunName1
/users/mkassner/test/RunName2
rather this is how I would use it
/users/mkassner/20170501/taskName
you mean without the 000 001 002 subdir?
/users/mkassner/20170501/taskName2
yes
we do this so we don't have to deal with user input that would trigger an overwrite.
I mean if you are careful it will always be:
/users/mkassner/20170501/task1/000
/users/mkassner/20170501/task2/000
and you can in your automation simply deal with the 000 dir
true... my current issue is I'm trying to write to a "smb://network/share" and when I send this address it crashes pupil.. lol
hm. This is indeed very tricky. You are asking our app to write to a non-local dir. Even if this worked I would never recommend it; writing to network dirs is potentially slow.
Are you running on linux?
yeah ubuntu
yeah I figured, but I'd like to give it a shot 😬
any chance to make a new mount point so that you map into the smb dir?
the app will think you are writing locally.
i have no idea how to do that lol
thats what i was actually looking into before i messaged here
not sure but the first answer from here? https://askubuntu.com/questions/46183/how-to-map-a-network-drive
Hi all, I have a question about Pupil Mobile's procedure. It records the session on a mobile app and streams the session via Wi-Fi to a computer in real time?
@user-006924 yes. This is correct. You can do either one or both at the same time.
@mpk do you guys ship just the world camera? I have the pupil headset but not the world camera, and I want one
@praneeth we can do that 🙂. Please make an order for the eye camera upgrade with a note and we can customize the order according to your note. You can use this permalink to make a request: https://pupil-labs.com/cart/?0_product=e120upgrade&0_qty=1
Hi all. I'm new to discord. I'm having trouble installing the pupil labs software on windows 10. Any experts out there who might help me debug? Caveat: I'm in India with terrible internet, so apologies if I suddenly disappear 🙂
Hi @user-78dc8f - Will from Pupil Labs here
Could you please re-post the issue here so that others can see the error
You bet. I'm trying to install the pupil suite on a PC running Windows 10. I ran the driver installer. It said the drivers were installed successfully, but I don't see libUSBK in the list of devices.
And then when I try to run pupil_capture.exe, I get an error that
"Api-ms-win-crt-math-l1-1-0.dll is missing from your computer. Try reinstalling the program to fix this problem."
@user-78dc8f - do you have admin rights on your computer?
yes
and you are on a 64 bit machine?
Great question. How do I check that?
with 64 bit Windows 10
It's an India machine so i didn't do the install
right click on the windows icon on bottom left and go to System
all information about the machine is included there
64 bit, x64-based processor
windows 8.1 pro
ooh... windows 8.1 huh
guess so
Please be advised that Pupil is compiled for Windows 10
it might run on Windows 8 machines if you install the right dependencies like VC++ redistributables
well that could be the problem then...damn...
you could try installing another Visual C++ redistributable - but I don't know which one is required off the top of my head
I tried installing the redistributables (the link you sent), but both of them failed to successfully install
ok. so I'm looking for Visual C++ redistributable for Windows 8.1?
Yes, because you are on Windows 8 and not Windows 10
will search. thx.
or VC++ redistrib for VS 2010
please give that a try and let us know if you have any success
but a Windows 10 machine would be a better bet if possible - and we can help debug in that case because that is our development target
🙂
Would you suggest I try to upgrade? Might be more stable in the long run, yes?
Or if I get it to run, i should be ok?
If you have a choice, I would certainly recommend Win 10 for Pupil
Ok. Let me see what is possible with the IT folks here.
but if you do not have a choice, then you may be able to get by with Win8.1 - but we can not provide any concrete support for anything less than Windows v10
got it. thanks. I'll see if I can upgrade. If not, will try the link you sent. thanks...
Great 🙂
Heyo! Can I ask a question regarding the Unity integration example? 🙂
@user-489841 You can always ask! I am not an expert concerning the unity integration but I might still be able to help. 🙂
Awesome! Thanks, @papr ! In calculating pupil orientation, does it take into consideration when the user blinks or otherwise has their eyes closed? If not, how would I go about detecting open eyes?
The pupil detection algorithm is a best-effort detection. This means it will not differentiate between open or closed eyes. However, the confidence of the pupil position decreases significantly when the subject blinks or closes his/her eyes.
So at that point it will be more like a guesstimation?
Yes. Although I really would not rely on that. It is better to ignore low confidence values when you are analysing your data.
Because I come from using a few different eye trackers, and they all handle it differently. Some keep the last known position as the actual position and some even output zero
Okay, perfect
That seems like a very good solution. What would you qualify as too low confidence?
Pupil Capture also comes with a Blink Detector plugin. You can subscribe to the blink events over the IPC.
Similar to how you subscribe to the pupil/gaze positions
I think the default threshold is 0.7
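A sketch of the subscription pattern discussed above, assuming pyzmq and msgpack are installed and Pupil Capture's Pupil Remote is on its default port 50020; the topic names and the 0.7 default threshold are as mentioned in this thread, but verify them against your Pupil version:

```python
import msgpack  # third-party: pip install msgpack

CONFIDENCE_THRESHOLD = 0.7  # default mentioned above; tune for your setup


def is_reliable(datum, threshold=CONFIDENCE_THRESHOLD):
    """Return True if a pupil/gaze datum's confidence exceeds the threshold."""
    return datum.get('confidence', 0.0) > threshold


if __name__ == '__main__':
    # Network part -- needs a running Pupil Capture with the Blink Detector on.
    import zmq

    ctx = zmq.Context()
    # Ask Pupil Remote for the subscription port first.
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:50020')
    req.send_string('SUB_PORT')
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:' + sub_port)
    sub.setsockopt_string(zmq.SUBSCRIBE, 'pupil')  # pupil positions
    sub.setsockopt_string(zmq.SUBSCRIBE, 'blink')  # Blink Detector events

    while True:
        topic, payload = sub.recv_multipart()
        datum = msgpack.loads(payload, raw=False)
        if topic.decode().startswith('blink'):
            print('blink event:', datum)
        elif is_reliable(datum):
            print('pupil datum:', datum['timestamp'], datum['confidence'])
```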
Okay, perfect! Thanks a bunch @papr. Huge help. 🙂
No problem @user-489841! Do not hesitate to come back here and ask more questions.
[email removed] - we (Pupil Labs) have been working on v2.0 of the unity plugin. You can test it out in the dev branch of https://github.com/pupil-labs/hmd-eyes/tree/dev/unity_integration_calibration
.... Awesome! Would love to!
if you find any issues - please post them 🙂
Actually I do have another issue, with the Pupil Capture software. (Posted this in the community group earlier today) Any chance you could help me with that? 🙂
@user-489841 - Try the dev branch with the latest version of Pupil Service or Pupil Capture. I believe this issue will be resolved there.
Also @user-489841 - in the future could you post hmd-eyes related questions to the 🥽 core-xr channel 🙂
... Already on it! That would be great.
@user-489841 The pupil detection runs in separate processes -- hence the multiple "Pupil Capture instances." This would be fine if both processes did not try to access the same camera.
I just saw that your cameras have the same name. This might indeed be the issue. Could you try and reinstall the camera drivers?
I do not think that this is an issue related to hmd-eyes
I will try that!
For reference (and for others) Link to Windows camera drivers is here - https://docs.pupil-labs.com/#windows-driver-setup
I would suggest trying out the dev branch with latest Pupil Capture/Service before re-installing drivers though
The project persists even out of unity, so I think the reinstall might be the issue
might solve the issue*
After reinstalling drivers they are still named the same, @papr
@user-489841 - can you open device manager and show hidden devices and share a screenshot of device manager
Ofc
Device manager
@user-489841 - have you tried clearing pupil_capture_settings
and starting up v0.9.6?
@user-489841 - it may be time to do some cleanup - can you manually delete all drivers in libusbK section, restart, and reinstall drivers?
Roger wilco
BTW - are you using Windows 10?
just want to check
Yep!
And I'm using 0.9.6
🙂
@user-489841 - can you also let me know in a PM the order ID associated with this hardware?
@wrp Uninstalled from device manager and Add/Remove programs, rebooted and reinstalled the drivers I downloaded Monday. Still same result
@user-489841 - and drivers in device manager look the same as before in device manager?
(Same result as in they are named the same)
Sorry, yes
Just thought I would try again for the sake of repetition. (Definition of insanity and all that) Still the same. 🙂
Can we check to make sure the left camera is working? Could you please physically un-plug the JST connector of the right eye camera and re-start Pupil Capture.
I want to be able to make sure that we know that this is a driver issue and not a hardware issue
Ofc
Is that even possible on the HTC Vive addon?
I can get a picture of the camera in capture
ok, that's good
hardware is working, but it is certainly a driver issue (or naming issue in firmware)
@user-489841 - can you send me an email or PM with the associated order id of this hardware
or email to [email removed]
I want to track down the order in the case that we need to do a hardware repair/replacement
Sure, of course. I realised something though, it worked at some point. When I first installed it. Hmm..
Where they were named differently
If it was working when you first installed, then my next question would be - how many times did you 're-install' drivers?
That is a very good question.
When I plug them in without drivers, it installs them correctly named, but as imaging devices
And in windows "bluetooth & other devices" they are correctly named.
@user-489841 - can you start Pupil capture without eye processes and just show the list of cameras in Activate Source
You should see both eye cameras correctly labeled.
Yep
I do
regardless of what is shown in the device manager
OK - then please assign these to eye0 and eye1 processes
activate test image in world
launch eye0 and select id0
launch eye1 and select id1
Eeeh...
It's working perfectly.
Before I did anything
The worst kind of bug.
:p
Well.. "bug".
Beforehand there was an "Unknown" on my list of sources, which is gone now. That could be it
Sorry for the long journey here. Older versions of our cameras will appear as the same name in libusbK
The Unknown comes from my Vive apparently
I guess the frontfacing camera?
The older versions of Pupil Cameras have the same pid but different device name. In Windows this means that the first camera will appear in the device manager by the name of the camera that was first connected to your machine.
Ah, right
This might mean I won't have a lazy eye in my Unity application anymore. Woot
But Pupil Capture and Pupil Service will always be able to distinguish our cameras
So yes the "unknown" in the list could be the Vive front facing camera
Apologies for the odyssey here - but I guess we can mark this resolved now, yes?
Let me just confirm, then by all means let's do that. 🙂
🙂
Okay now Unity is causing the exact same issue as before. Let me just update to the dev branch version and try again.
@user-489841 are you using Pupil Service or Pupil Capture with the Unity Plugin?
Tried both
I believe that the key here is to start Pupil Capture, select cameras for the eye processes and close pupil capture from world process
so that your settings are saved
I saw that edit
🙂
Joking aside I tried that, to the same result
😶
The new editor gui looks great
Gimme a sec
@wrp - Seems like the Pupil Capture software works fine with the new unity version. 🙂 Have some troubles connecting to the server though, but I haven't really looked around
Can you specify these troubles?
Can't connect to the server through localhost.
(127.0.0.1:50020)
I'm AFK now but @papr will take over questions 🙂
Thanks so much, wrp. You were an amazing help
Enjoy your freedom!
Last remark. Did you check the port in Pupil remote in Pupil Capture?
The current one is identical to the one in the master branch, so I just assumed it would be fine.
50020
The reason why wrp asked is that Pupil Remote will open a different port if the system blocks 50020 🙂
What is the error message that you get?
Does Unity report if it is connected?
Or only if it fails?
Because I think I might just have been a huge idiot and closed the Pupil Remote addon
If that makes a difference.
Yes, Pupil Remote is required for the unity integration to work
Right. 🙂
Ignore me!
It is the main entrance point for all communication
It seems to be working. Now I just need to figure out how this scene actually is set up and then convert it to VR. But I won't bother you with that. 🙂
Good luck with that!
Thanks a bunch, @papr . Really appreciate it
You are welcome 🙂
Have a great day!
Thanks, you too!
Resolved ✅ 🙂
Greetings all. Was troubleshooting with Will earlier and my windows version needed to be updated. Fixed that. I'm running Windows 10 64 bit. I installed the drivers and the install program said success; however, I don't see libUSBK in the device manager even though I'm showing hidden devices. Throwing caution to the wind, I fired up Pupil Capture. It is now running, but I'm not seeing the images from the head or eye cams. Anyone want to help me debug?
@user-78dc8f you may have to change the source of the video stream coming into capture
I selected USB device
activate source
?
try changing that
unfortunately i don't have a pupil with me rn so i can't help beyond that
Actually, I just restarted, and now it is working. Brilliant.
It is amazing how many things a simple restart can fix 🙂
Indeed...
Can anyone tell me where I can find the marker tracking plugin in pupil capture? I only see surface tracker (which i think comes after marker tracking?)
the surface tracker includes the marker tracker
or do you refer to the actual implementation in the code repository?
ok. I was watching the demo video and it said to open 'marker tracking'
Hi, what is the status of the offline calibration?
@papr can you make a PR of your WIP?
We are very behind on updating our documentation, apologies for that @user-78dc8f
No worries @papr
@mpk I do not know if it is stable right now. Referencing the branch might be more useful
@papr regarding offline calibration. I don't plan to merge it before it is ready. Just to have it in place.
@papr did you implement pyre.events()
Thank you!
Hello pupil community. I was using my pupil mobile setup and I just switched back to USB to test something. Capture is running in ghost mode and I can't seem to get it to detect the world and eye cameras connected to USB. I've set the capture selection to local usb. Not sure what's up. And I've restarted already...
I did add the 'capture source' plugin earlier. Could that be causing conflicts? How do I remove a plugin?
There is always a Capture Source active. You can only replace it. One that always works is Fake Source. It displays a static image. @user-78dc8f
If the drivers are installed and the cameras are connected correctly, then they should show under Local USB
Hello all! I am a beginner in using the eye tracking camera and in informatics in general... 🙂 I am running it from the source code on macOS and it works very well. Now I would like to have access to the coordinates of the gaze (in order to feed them to 2 servomotors). I looked at docs.pupil-labs.com, the "plugin guide" and "Interprocess and Network Communication", but I don't really understand what I have to do to get the gaze position and save it in real time to a folder... Many thanks in advance!! (ps: pardon my French English...)
Dear @user-030e35 , welcome to the community!
Have a look at the pupil helpers, esp. https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_messages.py
if you uncomment line 24, you will be able to receive gaze positions
Dear @papr , thank you very much ! sorry to bother you with a stupid question : where do I have to put it ? in the main.py ? and where ? many thanks in advance !
ah, no, it is a stand-alone script that uses default features in Capture. You simply run it from the terminal: python filter_messages.py
From there you can send them to your motors or store them to a file.
(this is not part of the script)
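To sketch the "store them to a file" part mentioned above (not part of filter_messages.py itself): a minimal gaze-to-CSV logger, assuming pyzmq and msgpack are installed and that gaze datums carry 'timestamp', 'norm_pos', and 'confidence' keys as in the pupil-helpers examples:

```python
import csv
import msgpack  # third-party: pip install msgpack


def gaze_to_row(datum):
    """Flatten a gaze datum into a CSV row: timestamp, norm x, norm y, confidence."""
    x, y = datum['norm_pos']
    return [datum['timestamp'], x, y, datum['confidence']]


if __name__ == '__main__':
    # Network part -- needs a running, calibrated Pupil Capture instance.
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:50020')
    req.send_string('SUB_PORT')
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:' + req.recv_string())
    sub.setsockopt_string(zmq.SUBSCRIBE, 'gaze')

    with open('gaze_log.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['timestamp', 'norm_x', 'norm_y', 'confidence'])
        while True:
            topic, payload = sub.recv_multipart()
            writer.writerow(gaze_to_row(msgpack.loads(payload, raw=False)))
```

The same loop could forward the coordinates to the servomotors instead of (or in addition to) writing the file.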
Hi, I just started with the pupil-labs source, so I have a getting-started question: how can I run the algorithms without Pupil Labs hardware?
Which algorithms?
You can run pupil software in many ways, using video files, or using your own DIY hardware for non-commercial applications, for example
I would like to run the algorithms for marker detection.
Have you checked the code?
You could copy and paste the detector function and explore it with opencv, for example
Of course, take a look at the install instructions in the wiki at first
So, after installing all dependencies for the algorithm you want, just copy and paste and adapt it for your needs. Remember that the DIY hardware is for non-commercial applications only. However, the software is LGPL.
Yes, I had looked at surface_tracker.py, square_marker_detect.py, reference_surface.py. So, do I need to first install the drivers for Pupil Labs? Or only the dependencies? Yes of course, it is for research purposes.
Which drivers?
I think all you need is just follow the install setup in the wiki
Sorry, I am on Linux. So I could not help you with windows install steps
The windows driver should work with any UVC compliant camera
Thank you ! i will try it now.
You're welcome!
Hi everybody, I am trying the eye tracker on a child's head. Does anybody know if a smaller version of the eye tracker exists?
Yes, it exists!
You should contact them using the sales email.
Ok, thank you!
@user-99e72e I think the e-mail is sales@pupil-labs.com
You could talk privately with someone from the team also, I guess
Yes, I sent them an email. I was also wondering if this script https://github.com/pupil-labs/pupil-helpers/blob/master/pupil_remote/filter_gaze_on_surface.py can be optimized for real time applications. I need to catch in real time the user's gaze position.
@mpk @wrp
@user-99e72e email received. And yes, we can supply a child sized frame. Thanks @user-41f1bf for responding 🙂
Thank you so much @wrp. To supply this smaller version do you need the cranium measurements?
The version we have is for a child around 5 years old
But should fit 4-8 year olds
mmm ok. We have already bought a base version of the pupil eye tracker thanks to the university where I am developing this project, so we would like to know eventually, the price of this version.
Cameras from the adult version are compatible with the child sized version
So one option is to get a child sized frame with cabling and another option is to get another headset for children. @user-99e72e let's continue the discussion via email if that's ok with you. I'm AFK at the moment but will reply to email in around 12 hours from now.
Sure, no problem. I will reply to the just received email. Thank you!
You're welcome @user-99e72e
Regarding the other question I asked, can you tell me something?
Hi can you be more specific about your use case?
Real time surface tracking is already implemented. External plugins or applications can use streams of gaze positions and other information to do stuff
The linked helper is an example of an outside app (in python) using real time streams, if I remember well enough
I am using the eye tracker to catch user's gaze position and create gui interfaces. When the user moves the gaze in another place, the gui must be destroyed. But I think the zmq is faster than the gui creation/destruction
Yes, zmq is really fast
Yep, and I would like to know how can I improve this process
And more important, if it's possible
In pascal one can use critical sections to wait for the main process, or use blocking (and less complicated) zmq functions to wait inside a thread
Yes , I am already using blocking functions in a separate thread
I must say that I am not a professional programmer, I can only speak about stuff I have tested by myself
No problem 🙂 any opinion is welcome
Are you using python or c++ or what?
Entirely python, for now
And pyqt to create gui
Creating and destroying stuff is usually slower; if your gui is very dynamic I would recommend caching all fluid controls and working with hide/show procedures
Yes, that could be a possible improvement
Regarding zmq, I would recommend talking directly with the pupil backbone, first register to receive the port and then register to receive needed messages
That said, I think you should share some code and then increase your chances of better feedback
It's hard, at least for me, to know for sure why you are (I am guessing) receiving segmentation faults
Nono, no segfaults
While creating and destroying stuff.
I mean invalid memory errors
My suggestion may resolve your issues; if stuff is cached, zmq can be as fast as it can be
I will try this suggestion 🙂
Dear Community, We have started to work with Pupil Labs in order to perform real time image processing based on eye & gaze coordinates. This mechanism is part of an AR application we are working on. Our main challenge as we see it is to perform the image processing in real time, while basing it on the Pupil Labs platform. Currently we are in the POC stage, so our initial thought, as a workaround, was to just develop a plugin for the "Capture" application. The issue is that it does not expose the world view (frame) for us to perform the image processing. Other possibilities we are now exploring are to use the eye & gaze data and to expose it to Unity/Matlab/any other 3rd party. Does anyone have any other tips / directions how to go about this? Can anyone point out relevant Matlab materials?
Thanks in advance!
Hi @user-f181b1 - Thanks for the overview 🙂 You can access raw images of eye and world frames via IPC, or directly within the update fn of a plugin in Pupil Capture
@user-f181b1 - if you haven't already - you can take a look at existing plugins like surface tracker to see how to access world frames: https://github.com/pupil-labs/pupil/blob/1c7b10b2a0ba07eb31778c664e23c2cac82d196d/pupil_src/shared_modules/surface_tracker.py#L195
Many thanks for your prompt response here! Actually we have investigated this Plugin but somehow missed the frame parameter within the fn. Just saved us a major headache
Welcome!
You should also be able to get frames via messages over network if you do not want to write a plugin
Topic should be frame.world
You can see pupil-helpers repo for example of receiving messages.
And see the docs on message documentation: https://docs.pupil-labs.com/#message-documentation
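A sketch of the network route just described, assuming the Frame Publisher plugin is active and publishes multipart messages of (topic, msgpack metadata, raw image buffer) with 'format'/'width'/'height' metadata keys, and that pyzmq, msgpack, and numpy are installed; check the message layout against your Pupil version:

```python
import msgpack   # third-party: pip install msgpack
import numpy as np  # third-party: pip install numpy


def raw_bgr_to_image(meta, raw_bytes):
    """Reshape a raw BGR buffer (Frame Publisher 'bgr' format) into a
    height x width x 3 uint8 numpy image."""
    return np.frombuffer(raw_bytes, dtype=np.uint8).reshape(
        meta['height'], meta['width'], 3)


if __name__ == '__main__':
    # Network part -- needs Pupil Capture with the Frame Publisher enabled.
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:50020')
    req.send_string('SUB_PORT')
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://127.0.0.1:' + req.recv_string())
    sub.setsockopt_string(zmq.SUBSCRIBE, 'frame.world')

    while True:
        # frame messages carry an extra multipart frame with the raw image data
        topic, payload, raw = sub.recv_multipart()
        meta = msgpack.loads(payload, raw=False)
        if meta.get('format') == 'bgr':
            img = raw_bgr_to_image(meta, raw)
            print(topic.decode(), img.shape)
```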
Thanks again! We will focus on the Plugin method for now, it should be sufficient for our POC at this time
Dear @wrp
just making sure we got you right: when we change the frame in the "player" everything works. But when we do so in "capture" it does not affect the world view. A simple test we have made is -
def update(self, frame, events):
    img = frame.img
    img[:, :, 1] = np.ones(img.shape[:-1], dtype=img.dtype) * 255 - img[:, :, 1]
something we are missing here?
Hi all, I'm trying to get the unity_calibration sample project in the hmd_eyes repo to work with the Epson Moverio, but I'm running into some trouble. Anyone help me?
Hi @user-efb835, what about trying the 🥽 core-xr room?
ah thank you! Sorry I'm new.
Nothing to worry about! Welcome to the pupil community ;)
@here - here is a nice concise demo developed with Pupil and PyGame - https://github.com/jeffmacinnes/pl_surfaceGazeViewer - and a live demo of this code in action via youtube video: https://youtu.be/Sii9wRNSWHg?t=1h15m11s
anyone have a quick explanation of how adjust calibration works? as in do i just print out a marker and move the marker around?
Is there documentation on pupil service? I was wondering if I could just stream a video feed with gaze position via the network
you can do this using Pupil Capture and using the video streamer plugin.
@mpk awesome thank you!
Hi everyone, I am trying to use the surface tracking plugin, but for some reason whenever a marker is detected the player app closes itself. I am using the latest version on Windows 10. I had placed some markers in the experiment room but I hadn't defined surfaces at the time the recording was happening, and I am trying to define surfaces while using pupil player. What I've been doing so far: opening the file in pupil player, defining the surface in the plugin, and when a marker is detected I pause the video to make sure it's the correct surface and I click on the marker. But what happens is that the information near the marker loads to 100 percent and the app closes itself. Is there something I'm doing incorrectly while using this plugin?
It sometimes closes even if I don't click on the marker and just pause the video for a second and unpause it again.
Hi @user-006924, thanks for the detailed report. I will try to re-create this behavior on a Windows 10 machine with the latest Pupil Capture bundle today.
Hi All. My lab group is trying to use pupil sync (the time sync plug in) because we have two eye-trackers that we want to coordinate. We are recording data using pupil mobile (locally on two android phones). Is there a way to use time sync with pupil mobile? More generally, we aren't really sure how to verify if time sync is running...
Hi John, we are currently implementing time sync for Pupil Mobile.
ok. So not quite ready for that option? For now, could we test this out by plugging both eye-trackers in via usb?
@user-78dc8f You are running on windows from bundle correct?
@mpk Did we make a release for windows with the new time sync already?
in one lab, we are running on mac. In the other lab (in India) we are running windows (just to complicate things!)
Ok, this should not matter as long as they use the same Pupil Capture versions
They are both running the latest version.
You will need https://github.com/pupil-labs/pupil/releases/tag/v0.9.10 for time sync to work EDIT: link to release notes
It is not released for windows yet, unfortunately
So the new release for mac will enable time sync with pupil mobile?
no, not yet. The new mac version has a working time sync between Pupil Capture instances but not Pupil Mobile
Pupil Mobile has no Time Sync support yet
Ok. So to test this out via usb, would we launch two instances of pupil capture? And then when we hit record on one, the other will start as well?
Time Sync only syncs the Pupil Capture instances' clocks
If you want multiple Captures to start as simultaneously as possible, you will need to activate "Pupil Groups"
Sorry to be dense here, but how do you verify that the clocks are synced? Just look at the time stamp data for each recording? (I don't really need them to start at the same time... sync is the key)
so the data in 'world-timestamps.npy'
I understand. The Time Sync has a status field. One will say "Clock master" and the other "Synced with <ip address>"
and the node instances should show in the plugin ui as well
Thanks for the help! We'll give this a go.
Good luck!
Hi @papr and @mpk : We just tested the sync. We are actually streaming data from the android phones and recording on pupil capture (also recording on the phones as a backup). Time sync seems to work with that setup. The real-time data on pupil capture is quite staggered because we are running through wifi, but the recorded data from pupil capture seems smooth. We're just testing that the two trackers are synced over a longer recording, but seems like success!
Thanks for the help.
Hi John. Make sure to turn on h264 for the world cameras in capture; it will help with bandwidth!
Will check that. thanks.
Hi @mpk. We just tested the sync option. We had both cameras viewing a timer on an iPhone. The frame capturing is really jumping around on the phones -- and definitely not synced. Is this because we are streaming the data from the phones via a router?
Also, I tried to set h264, but I can't while streaming...
Unset streaming and change
ok. will give that a try. thx.
When we click on the h264 option, the system allows us to turn that 'on'. But then streaming comes back on as well. I assume that is ok?
as long as h264 is 'active'
That's ok
When using surface tracking would printing each marker in a larger size help with its detection or is it irrelevant?
It helps. You can also try reducing the minimal perimeter setting in the surface tracker ui
Has anyone felt like the pupil cameras hurt their eyes?
Also - what does this black knob adjust?
@user-efb835 that has never come up. I personally find that wearing the eye tracker can be a bit straining because of the occlusion. We have measured and calculated IR exposures; they are well within the safe range.
If you mean by knob the wheel around the lens, it adjusts the camera's focus
or better: depth of field
Cool!
@mpk I wear contacts - do you know if that affects it?
We have devs wearing contacts and there have been no issues. The IR safety spec takes this into account.
Wearing contacts is generally exhausting for the eyes since they get way less oxygen. I had to stop wearing contacts because of that.
If you feel that the headset is really causing you pain please don't continue wearing it! Shoot us an email so we can do a video debugging session and make sure the headset is fine and worn correctly.
That sounds great! Is there a recommended time of wearing them before stopping? We are looking at using these as a back-of-house production tool.
@user-efb835 - Pupil has been used in research for entire day long recording sessions. We do not have a suggested duration of wearing time. But as @mpk noted, if something doesn't feel good to you or others, then try taking a break 🙂
Okay thank you!
I'm having issues where the IR light gets right into the center of the pupil and I'm having difficulty adjusting the camera in such a way that the two IR light diodes don't fall into the pupil. The reason I bring this up is it dramatically drops my confidence and greatly affects 3d tracking more so than 2d... any suggestions?
@user-efb835 I think I noticed what you're talking about and realized it was due to me trying to debug some eye tracking stuff and not blinking like I normally would... if that makes any sense
I've used it with multiple people and none of them have reported anything
even when i asked them
@user-ed537d I have seen this kind of reflection before. Is your environment very dark? Usually this can happen with very big pupils. One suggestion is to extend the eye camera arm further out.
I've extended it out all the way...the environment is lit...any other ideas @mpk ?
Can you send a picture of said eye in that condition?
Sure. I'm away from the computer but will send when I am. Thanks!
alright, until then.
Hello π After some great help from @user-5ca684 I got my calibration working in the unity_calibration project. However, some of the tracking is very problematic, since the video from one of the eyes seems to freeze every few seconds. The CPU drops to about 0, the video isn't moving, and these messages constantly appear over and over again in Capture's console window (this happens even when I launch Capture without the Unity project at all):
world - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit.
world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID0. Estimated / selected altsetting bandwith : 592 / 800.
I am running windows 10, 64bit. Any suggestions?
Hi,
make sure that you are reading the right camera in each window and that there are no extra instances of capture or service running.
I have seen this error when a camera is used by more than one process at a time.
Just restarted to make sure I have no other processes which may have been left trailing. Still happens on first launch. By the way, what am I supposed to see on the main pupil_capture window? not the specific eye one
It seems like the actual main window for the capture is competing with the eye window, only one of them works at the same time while the other is frozen
Updated to 9.10, problem gone
I think you were reading the same camera in more than one window. Updating the version will reset settings.
Hey Guys,
I am trying to integrate the Pupil eye tracker with the Microsoft HoloLens. Has anyone tried doing this?
@mpk i've circled the IR lights that disrupt the pupil
this example doesn't really show how it affects algorithm mode and isn't great at proving the point. It gets significantly worse depending on where the subject is looking; in some regions of the world view the reflection gets placed smack dab in the middle of the pupil
@user-4a3b48 - please could you migrate your question to the π₯½ core-xr channel
@user-ed537d can you share a sample eye video via link in email or PM? It would be great to be able to test this to gain further insight
Sure @wrp who should I send it to?
Could you send to info@pupil-labs.com
Will do uploading now
Are there are any general tips for better calibration/detection? I seem to be getting very stuttering data, even when fixating on one point the data returned is very erratic
hmm, this is strange. @user-64e12b can you explain your setup? It would be best to set the recorder to also record the eyes and send one of those eye videos. It sounds like the camera is not focused or the angle is not ideal.
One of the eyes is actually blurry
The video from one of the eyes I mean π
My setup is with the Oculus Rift DK2
What can I do about the blurriness? @mpk
@user-64e12b - you can focus the eye cameras.
@user-64e12b - Adjusting focus for the DK2 kit is a little bit tricky - but will resolve your blurry eye video issue. First, remove the cup from the DK2. Second, carefully remove the hot mirror, adjust focus, reattach the hot mirror then insert cup back into DK2.
@user-64e12b - it could also be that the hot-mirror itself is dirty
Okay so I've meddled with the focus, and I think it worked nicely. The eye on the left is the one which I fixed the focus for, and now no longer appears blurry. However, the erratic behavior remains.
ok. this looks decent. Now please turn on algorithm view. Also, at what resolution are you running the cameras?
ok I see its VGA, this is good.
one eye is in 3d mode. The other as well?
(was about to ask the same question as @mpk - eye0 looks to be in 2d mode and eye1 in 3d mode)
switching to algo mode and making a screenshot will help with debugging.
@user-64e12b - can you also reset pupil_capture_settings or pupil_service_settings by deleting the directory
Okay so I've deleted the folder and now reloading
Can't seem to find where do I switch to algo mode
Oh okay. Found it.
I was too slow - but maybe this is helpful for others π
Here's algo view. Not sure how do I check whether the eye is on 3d or 2d mode?
this is 2d mode, and it's looking good, at least in the picture.
Oh. I want 3d mode
Well, I don't really know what I want
My calibration mode was currently 3d so I guess it has to match, but I don't actually understand the difference between the modes
yes, you need 3d mode for 3d calibration.
you can try 3d mode. However the dk2 lenses introduce distortion that could throw the model off.
2d is more stable?
in the case of dk2 it may be.
you should try.
also, setting an ROI may help as well.
Wow! This works sooo much better!
Okay so now its very responsive, very little stuttering, pretty happy. only problem now is that the gaze is sometimes very different between both eyes, spread out with the average gaze between them. any tips to get some more accuracy? What's ROI?
ROI = region of interest
you can set a ROI in eye windows - go to ROI mode (next to algorithm mode in drop down)
this will constrain the area in which the algorithm looks for the pupil
Hi. I am trying to calibrate eye trackers using the Pupil Mobile app with the manual marker setting. Sometimes the program simply cannot find the marker, even if it is streaming fine between the phone and laptop, the person is staring at it, and everything looks good with the eye cam settings. Is there anything I should look into to improve this? Thanks!
can you send an image of the capture world window?
@user-9945c8
?
Hi! Didn't expect such a quick reply. We are having issues when we work with participants rather than when we try to calibrate each other, so not sure whether I can get you a video. Is there something specific I can look for?
sometimes there are false positives (non-markers being detected as markers)
The problem is that it isn't detecting a marker, period.
I set up a debugging session for today, but no one joined the google hangout.
Can anyone help debug my headset now?
Hi Kathryn, sorry. I'm online now.
@wrp @mpk sent eye video to info@pupil-labs.com
@user-9945c8 pull up the Pupil Capture world view and see if it puts a green circle around your manual markers. If it does, then it should work. If not, I've noticed that sometimes, due to lighting issues, I have to set the world view exposure to manual and adjust the exposure time for the world view to find the marker.
@user-ed537d - received - thanks!
Awesome!
@user-ed537d I looked at the video you sent. Using Pupil with standard settings I see perfect detection. I'm not sure what's going on that you see poor detection...
detection result.
@user-ed537d - are you using default pupil detection settings?
Thanks Matiar! (Sara)
Hi, I am getting this error: "video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied." Can you please guide me as to what might be the problem?
@user-042a9b Please could you provide more info about your setup? 1. Operating system and OS version 2. Pupil Capture version number
@user-042a9b If you are using Windows 10, this message could result from not installing drivers for cameras.
If this is the case, please see: https://docs.pupil-labs.com/master/#windows-driver-setup
Hi all, My hardware setup is monocular and I accidentally put it on 3D detection mode while recording. Is my data still valid ? I mean is it possible for me to use 3D detection mode with a monocular and use the raw data from it?
Yes it is
3d detection uses 3d eye modeling, either monocularly or binocularly
You will notice differences in data quality, though
you mean the level of confidence is less with 3D?
Depends on your setup; maybe you can get more confidence
I have been using 2d detection and post processing images for the sake of simplicity.
Thanks
Is there any docs available for Pupil for me to read up on the different detection methods or is it just the "docs" section in Pupil Labs website?
Yes, there is; the Pupil docs reference the original academic work on the 3d modeling strategy implemented
sorry to ask again, I just want to make sure I'm understanding correctly, so with a monocular setup with 3d detection mode I can get 3d gaze directions and they are valid?
No, you just can't get a 3d vector (x, y, z) with a monocular setup
@user-006924 we can give you a direction (gaze normal) but not the depth of the gaze point.
When I want to know the pupil position in pixels and I'm using ROI mode for pupil detection, should I multiply my pupil position data by the original pixel size of my eye camera feed or by the pixel size of the ROI window?
@wrp I was using 2d looks like you're using 3D
@user-ed537d detection was really good when I tried your video. Maybe strange settings?
@user-006924 - the pupil position in pixels will be relative to the eye camera frame and not the ROI.
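To make the conversion concrete, here is a minimal sketch. The 640x480 frame size and the bottom-left origin of norm_pos are assumptions based on common Pupil eye camera settings; check your own capture resolution.

```python
# Hypothetical helper: convert a normalized pupil position to eye-camera
# pixel coordinates. norm_pos is relative to the full camera frame, not
# the ROI; the frame size and bottom-left origin are assumptions here.
def norm_to_pixels(norm_pos, frame_size=(640, 480)):
    x, y = norm_pos
    width, height = frame_size
    # flip y: norm_pos origin is bottom-left, pixel origin is top-left
    return (x * width, (1.0 - y) * height)
```

For example, a centered pupil at (0.5, 0.5) maps to the middle of the frame regardless of the ROI you set.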
Hello everyone! I surprisingly bumped into an issue with the HTC Vive add-on. Hope this is the right channel to ask. I run the Pupil Capture app on Windows 10, with the HTC Vive add-on connected to a separate USB port. Last week both cameras worked fine. But today Capture behaves very strangely: both camera windows are running, but only one of them is updating; the second one is frozen. Moreover, if I stop the running camera, the second (frozen) one comes alive and frames are shown. I tried restarting Pupil Capture, reconnecting the add-on, and rebooting the system - no luck (
@user-f7028f I'm quite sure you are reading the same camera from both windows. Make sure the correct capture is selected in the sidebar.
I'm ashamed! You're right ) It was supposed to be the first thing to check!
Thanks
no problem! glad I could help!
One more Q. Is this the right channel to ask about HTC Vive add-on issues, or is π₯½ core-xr more suitable?
@user-f7028f π₯½ core-xr is the preferred for VR/AR questions π
ok, got it )
I found that the IR LEDs for the left eye of the HTC Vive add-on aren't working, so I get dark frames from that side. Is that a software issue, or do I have a broken set? ( What can I do to test and fix it?
Hi @user-f7028f, can you take a screenshot of the left side eye camera and share it with us so we can diagnose and if necessary replace/repair?
Yes. one moment!
or screenshot of frame? ^)
Hi @user-f7028f thanks for the image. However, what would be even more helpful is a screenshot of the eye camera image from Pupil Capture or Pupil Service
misunderstood, sorry )
No problem π
Hi @user-f7028f Thanks
It does look like this will need a repair/replacement
Can you send us an email to info@pupil-labs.com with the original order number to get the repair/return process started?
sad news ((((
Yes, sure. will send in few hours
Our hardware is covered under 12 month warranty. We will repair/replace it for you, and should be able to do so quickly so that you can get back to work π
Hope so! I've already started to love sorting out things that are new to me, like eye tracking, VR, and Unity )
@user-f7028f once you send us an email I will start the repair process
Thanks for great assistance!
Welcome @user-f7028f
Hi all, For surface tracking I was looking at the raw data for surface positions and gaze positions and I wanted to know what the "m_to_screen" and "m_from_screen" meant in the raw data file
trying to use mouse_control from pupil_helpers and running into this error
x_dim: 3200, y_dim: 1080
Traceback (most recent call last):
  File "/home/nps/Downloads/pupil-helpers-master/pupil_remote/mouse_control.py", line 92, in <module>
    set_mouse(x, y)
  File "/home/nps/Downloads/pupil-helpers-master/pupil_remote/mouse_control.py", line 28, in set_mouse
    m.move(x, y)
  File "/usr/local/lib/python3.5/dist-packages/pymouse/x11.py", line 128, in move
    fake_input(d, X.MotionNotify, x=x, y=y)
  File "/usr/local/lib/python3.5/dist-packages/Xlib/ext/xtest.py", line 103, in fake_input
    y = y)
  File "/usr/local/lib/python3.5/dist-packages/Xlib/protocol/rq.py", line 1338, in __init__
    self._binary = self._request.to_binary(*args, **keys)
  File "/usr/local/lib/python3.5/dist-packages/Xlib/protocol/rq.py", line 1060, in to_binary
    static_part = struct.pack(self.static_codes, *pack_items)
struct.error: required argument is not an integer
nvm got it working
had to convert x and y to ints
and PyMouse is now replaced by PyUserInput
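For reference, a minimal sketch of the fix described above. The x_dim/y_dim defaults just echo the values printed in the traceback, and the y-flip is an assumption about how the helper maps gaze coordinates to screen coordinates; the function name is hypothetical.

```python
# Hedged sketch: Xlib's fake_input rejects non-integer coordinates, so
# cast to int before handing them to a mouse-move call. The screen
# dimensions below mirror the values from the traceback above.
def to_screen_pixels(norm_x, norm_y, x_dim=3200, y_dim=1080):
    # gaze norm_pos origin is bottom-left; screen origin is top-left
    return int(norm_x * x_dim), int((1.0 - norm_y) * y_dim)
```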
@user-006924 m_to_screen is the transformation matrix from marker coordinates (normalized space) to screen coordinates (this is a perspective transformation). m_from_screen is the inverse of m_to_screen
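As a sketch, applying m_to_screen to a point amounts to a homogeneous multiply followed by a perspective divide. The nested-list matrix layout and function name here are assumptions for illustration, not Pupil's own API:

```python
# Hedged sketch: apply a 3x3 perspective transform (like m_to_screen)
# to a 2d point. m is a 3x3 matrix given as nested lists.
def apply_homography(m, point):
    x, y = point
    u = m[0][0] * x + m[0][1] * y + m[0][2]
    v = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    # divide by the homogeneous coordinate to get image coordinates
    return (u / w, v / w)
```

Applying m_from_screen with the same helper maps an image point back into normalized surface space.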
@user-ed537d - you are using linux correct?
If you made changes to pupil-helpers/updates to the mouse control script, please consider making a PR to the repo
@wrp Thanks for your answer, just one more question, by screen coordinates you mean image pixels?
Yes, the world frame pixels
@wrp yes I will when im at that computer again...
@user-41f1bf @wrp @mpk I remember that not too long ago there was a discussion about instructions for calibrating camera intrinsics. A follow-up question I have: we currently use two Pupil headsets with two separate subjects. I'm assuming that the capture settings loaded are from the last camera intrinsics calibration. My question(s): does Pupil load the last calibrated camera intrinsics? If so, how can I make it load the respective camera intrinsics file?
Hi, I am trying to get calibration to work on my Oculus DK2. I am attempting to use the pupil-labs/hmd-eyes git repository, but I am pretty confused about what I need to run to make things work. So far, I have tried running Calibration.unity in Unity with Pupil Capture open. I am able to get a blue screen to show up on my computer and on the Oculus, but then nothing else happens. I have also tried playing around with running the python code in hmd_calibration, but I don't really see what's going on there. Would anyone have any more specific instructions on how to get a basic screen target calibration up and running on the Oculus? Thanks!
@user-f13d61 try the hmd-eyes channel
In general make sure your IPs match to what pupil remote shows
Hi, does anyone encounter this error when executing the latest Pupil Capture (v0.9.12)?
@user-6f6224 what error are you referring to?
[email removed] pupil_player
Traceback (most recent call last):
  File "main.py", line 59, in <module>
  File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
  File "video_capture/__init__.py", line 27, in <module>
  File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 389, in load_module
  File "video_capture/fake_backend.py", line 14, in <module>
  File "/home/pupil-labs/.pyenv/versions/3.6.0/envs/general/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 573, in load_module
ImportError: /opt/pupil_player/cv2.so: undefined symbol: _ZN2cv2ml6RTrees4loadERKNS_6StringES4_
are you running a bundle or from source?
I installed the Debian package on Ubuntu 16.04
ok.
thanks for your reply
π
I also tried v0.9.10, and I had the same error
same with pupil_player
any idea about it?
I previously installed v0.9.4 successfully on the same laptop
I just tried running the bundle v0.9.12 on a fresh 16.04 install and it started just fine. My guess is that your system setup is causing some sort of issue π¦
hmm
is it because I have a previous version installed?
You should be able to uninstall the previous version and install the recent one
@mpk said that he was able to start normally the recent version, so if the problem persists for you, we will need more details to provide a proper solution
usually an older version does not interfere. The new version should replace all that is in opt/pupil_player
@user-6f6224 do you have an up-to-date system or a fresh install?
It is a bit difficult for me to reinstall the system; the error seems related to the cv2.so library
It might be because of the newest OpenCV library that I just compiled
can any of you recognise _ZN2cv2ml6RTrees4loadERKNS_6StringES4_ as some function that is used by Pupil?
No, I am sorry. I was not asking to reinstall. I was asking if your system is up-to-date or instead you have a fresh install.
which version of opencv does v0.9.12 depend on?
Yes, my system is up-to-date.
Pupil requires python 3 and opencv 3
3.0 or 3.1 or 3.2?
opencv 3.0, 3.1 or 3.2?
@user-6f6224 generally your own version of opencv should not interfere with PUPIL bundles...
_ZN2cv2ml6RTrees4loadERKNS_6StringES4_ is name-mangled and it does not sound familiar to me...
but I installed my opencv binary into the system folders, and when I do ldd cv2.so, I find it depends on the latest opencv libraries that I installed. I can try removing my own opencv library and see if it works or not.
The bundle does not communicate with your cv installation, unless you have deleted the bundled library (just saying, I am not saying you did such a thing π ). Please, if you want to use your custom opencv version, you should try running from source, of course, changing the source accordingly
but /opt/pupil_player/cv2.so does depend on the 3.2 that I just built, which is installed in /usr/local/lib/.
Well, the bootloader should make sure that it actually loads the lib we ship. But I can't vouch that that actually happens. We also don't have newer versions of OpenCV than the one we bundle installed on our test/dev machines.
The ldd output here is very short and has some "not found" libs. It is starting OK for him though. Will install 9.12 and see
@mpk the windows release note should be removed (24h note)
@user-41f1bf - thanks for that - fixed the release notes.
Hi, I don't know why, but the upper bar is missing. Was it removed??
Also, the drop-down is not working
thanks guys I'll try it later and hopefully let you know if I managed to get it running.
@user-41f1bf I can't tell from your screencap; what OS is this?
I am not on Ubuntu though it worked in the past
Bunsen Labs (Debian 8 based distro)
It uses Openbox
It is not a priority, take it just for your knowledge, guys
I am not familiar with this distro.
I know how to use the command line
π
That's great π
What desktop environment does this distro use? Is it GNOME based?
Open Box
ah, ok
GUI is GTK in its guts
Also tk
@user-41f1bf, we're not going to be able to debug for BunsenLabs
I understand, take it for your information only
but it is always interesting to learn about how Pupil performs on other Linux distros
BTW, Pupil runs very well on Arch Linux
Antergos
But officially Pupil Labs targets Ubuntu 16.04 LTS for Linux. All other distros could be considered in the domain of "more advanced"; for Linux users who know how to configure systems (and use the terminal π )
Myself included, I will take some time to investigate this further next month
Hopefully it will work
See you
great - I look forward to seeing your progress/feedback on this. Have a nice weekend!
Hi all!
I have a question about message types in Pupil Capture. A third-party app (unity_integration_calibration from hmd_eyes) subscribes to 2 types of messages: notify. and gaze. I see that the gaze message only contains x and y coordinates (normalized) for gaze, but all variables associated with the eyeball are zeros. I found in the Pupil Capture source that there is one more message type - pupil. What is that message? Does it contain eyeball info such as x, y, z coordinates? My primary goal is to get the x, y, z coordinates of the detected eye. Is that implemented?
Oh! I got data when I subscribed to pupil.0. So, the next question is about the values of this data. For example, I look at the Sphere object. It has a variable center - an array of doubles. In what measurement system is that data?
@user-f7028f if you care about pupil 3d data in the world camera space, you will need to get this from gaze. It's in there. I'm not sure if @user-5ca684 has exposed it.
use these here to inspect:
It's strange, but the 'gaze' messages I received don't contain 3d data. Right now I get this data from pupil.0 messages!
yes in that case it will be in the coordinate space of eye0
in gaze this data is mapped to the world space.
you will need to calibrate for gaze data to have all fields available.
As I understand, it's quite impossible to calibrate the HTC Vive add-on cameras in the Pupil Capture app while I wear the HMD ) I use unity_integration_calibration to calibrate eye data on the HMD's screen. If I do that and don't run calibration in Pupil Capture, is it right that 'gaze' messages will be sent from Pupil Capture without 3d eye data?
you will have to calibrate using the unity3d plugin.
maybe then its best to look at pupil0 and pupil1 data.
I've set the calibration method in Pupil Capture to HMD Calibration, ran calibration in Unity, and as a result got eye data in the 'gaze' message!
glad to hear it π
Now I have to understand which of these values I must use to get the x, y, z coordinates of the detected eye ) Sphere? Circle 3d?
you want the eyeball center?
that would be sphere.
sphere" : { "center" : [ -0.796004551473776, -1.35599382903352, 43.0460060737645 ]
actually eye_centers_3d
is the right one.
that's the one that's mapped into the world space.
hmmm, but there's no eye_centers_3d data in message (
{ "confidence" : 1,
  "timestamp" : 6932.254,
  "base_data" : [ {
    "model_birth_timestamp" : 6925.462,
    "theta" : 1.82212504480308,
    "confidence" : 1,
    "method" : "3d c++",
    "sphere" : { "center" : [ -0.796004551473776, -1.35599382903352, 43.0460060737645 ], "radius" : 12 },
    "diameter_3d" : 3.98505337922062,
    "phi" : -1.48534703324559,
    "norm_pos" : [ 0.505724969277259, 0.43438272057267 ],
    "ellipse" : { "center" : [ 323.663980337445, 271.496294125119 ], "axes" : [ 74.7444001575137, 78.5360943279047 ], "angle" : 73.1936276169107 },
    "timestamp" : 6932.254,
    "topic" : "pupil",
    "model_id" : 8,
    "diameter" : 78.5360943279047,
    "id" : 0,
    "projected_sphere" : { "center" : [ 308.534991583934, 220.46935707438 ], "axes" : [ 345.676669154888, 345.676669154888 ], "angle" : 90 },
    "circle_3d" : { "center" : [ 0.195963892346401, 1.62829999333466, 31.4654193447494 ], "radius" : 1.99252668961031, "normal" : [ 0.0826640369850147, 0.248691151864014, -0.96504889408459 ] },
    "model_confidence" : 1
  } ],
  "topic" : "gaze",
  "norm_pos" : [ 0.661832355705696, 0.344426222439735 ] }
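For illustration, a small helper that digs the eyeball sphere out of a datum shaped like the sample above. The field names follow that sample and are assumptions for other Pupil versions; note that "sphere" is in the eye camera's coordinate space, not the world-space eye_centers_3d field mentioned earlier.

```python
# Hedged sketch: extract the 3d eyeball center from a gaze datum shaped
# like the pasted sample. "sphere" lives in the eye camera's coordinate
# space (mm); it is not the world-mapped eye_centers_3d field.
def eyeball_center(gaze_datum):
    for pupil_datum in gaze_datum.get("base_data", []):
        sphere = pupil_datum.get("sphere")
        if sphere is not None:
            return sphere["center"]
    return None
```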
this is not a datum from the 3d mappers. I think this is in 2d mode...
I found that Pupil Capture resets the calibration method from HMD Calibration 3D to HMD Calibration when I run calibration in Unity...
oh! I see that this calibration app sends a message to start the HMD Calibration plugin
Probably it's developed to use 2D calibration.
Is it question for hmd-eyes team? )
@user-f7028f Please switch to 3D mode in the Unity plugin Settings, and calibrate like that
Are you talking about a Pupil Capture plugin? Because in that app I don't see a plugin like that in the list of plugins (
I believe I said Unity plugin
in theory you won't have to set anything in Pupil Capture
everything is being set from Unity
Unity is switching to 3D calibration mode or to 2D mode
through the IPC messages sent to Pupil Capture
perhaps, you could try to reset your pupil capture to defaults (or have another local copy)
and switch to the desired calibration mode in Unity, in the example scene there should be a PupilGaze GameObject
there you can find all the settings needed
but I have to admit, not having this clear is my fault, because I am late with the documentation; I'm having serious trouble with my PC..... I'm going to try to finish up the complete documentation today/tomorrow, so you will be able to see a clear step-by-step instruction set on how to set up your Unity plugin
@user-5ca684 Thank you for the clarification! I looked at the PupilGaze object (its behaviour is implemented in the PupilGazeTracker script) and there are no settings for calibration type. I only found a hardcoded call to the Pupil Capture app's 2D calibration in the StartCalibration method:
_sendRequestMessage ( new Dictionary<string,object> {{"subject","start_plugin"},{"name","HMD_Calibration"}});
Will changing this call to HMD_Calibration_3D be enough to get 3D eye data, or is it necessary to change some data types, like Vector2 to Vector3, for example?
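As a sketch of what that change amounts to on the wire: the C# call above just builds a start_plugin notification dictionary. A minimal Python equivalent (the dictionary shape mirrors the C# call; whether swapping the plugin name alone is sufficient is exactly the open question here):

```python
# Hedged sketch: build the start_plugin notification that the Unity
# script sends to Pupil Capture. The payload shape mirrors the
# _sendRequestMessage call quoted above.
def start_plugin_notification(plugin_name):
    return {"subject": "start_plugin", "name": plugin_name}

# Requesting the 2d vs. 3d HMD calibration plugin:
notification_2d = start_plugin_notification("HMD_Calibration")
notification_3d = start_plugin_notification("HMD_Calibration_3D")
```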
@habolog#2770 I'm starting to think that you are still not using the correct version
@user-f7028f
this is how you should be able to switch between calibration modes
if you do not have this menu system, you are still not using the correct version
@user-5ca684 I use the master branch. Should I use the dev branch instead?