Hi, one more question... I am using the HMD-eyes package in Unity, but I am confused about sending requests, especially this piece of code. I am using an HTC Vive: why is the frame size 1000x1000? What is this threshold? And why should I use these translations?
Send(new Dictionary<string, object> {
    { "subject", "calibration.should_start" },
    { "hmd_video_frame_size", new float[] { 1000, 1000 } },
    { "outlier_threshold", 35 },
    { "translation_eye0", Calibration.rightEyeTranslation },
    { "translation_eye1", Calibration.leftEyeTranslation }
});
This is a cool project
What fps can be expected?
Hello again, I will be using surfaces to define my laptop's screen as my AOI. Is there any way to split that AOI into 100 segments based on the screen's dimensions? Also, what data does the surfaces topic return when subscribing through the IPC backbone? Thank you.
Regarding my previous post, I used the script to create the markers, printed them, and placed them on the 4 corners of my screen, then tried to add a new surface. I was using the Show Markers and Surfaces mode from the Surface Tracker in Pupil Capture. When I click Add surface, it starts the process and recognizes the 4 markers, but not fully: the green colour shown on the markers is not static and goes on and off. Will this be a problem later in the recording process? I tried to record some short videos and it always says no gaze on any surfaces.
Hi there, I have a question about designing visual search tasks. Does anyone happen to know some software I can try?
@user-ed537d Thanks for responding to my questions re your MATLAB script! Yes, I was trying to port the information within the same computer. I haven't looked at this for several weeks (we ended up not needing live gaze interaction for our current studies) but I'll take another look at it soon. However, I do recall playing around with the downsampling in the python script and not having much luck.
@user-b116a6 I use a similar setup (printed markers attached to screen) and noticed a similar issue with the markers sometimes flickering on/off. I had to make the markers larger to make them work more consistently at the participant viewing distance (1m20cm). Detection is much more consistent now, although some markers do still flicker...but from the data I've looked at so far this doesn't seem to affect the surface.
@user-3070d9 PsychoPy (python based) and Psychtoolbox (toolbox for MATLAB) are both good options for visual search tasks.
@user-e7102b thanks a lot
@user-e7102b Ohh, I'll try using larger markers then and hopefully it will be okay, thanks for the input. Any idea about the data sent through the IPC backbone, or are you exporting the data analysis reports from the Player at a later stage? I want to use the data as they are received, and I saw in the documentation that the surfaces topic exists. Thanks again.
@user-b116a6 No problem. If you happen to use psychtoolbox, I wrote a script here for positioning markers at screen corners, resizing etc, (https://github.com/mtaung/pupil_middleman/blob/master/matlab_functions/Display_Surface_Markers.m)
@user-b116a6 No idea about the IPC backbone stuff, sorry. I'm currently just using pupil for passive gaze recording. Perhaps if you make the surface more consistent you will see data in the "surfaces" topic?
@user-e7102b Thanks for the help, when I fix the issue with the flickering hopefully I will be receiving data too.
@user-e7102b I may have time to help you today if you'd still like
ahhhhh i think i found the mistake in the code i uploaded
as a side note, i've also found a way to use the matlab-python api to load zmq and pull values, but in my experience this also leads to lag
will post the correction in a second
are you using pupilRead?
and do you set a timeout on the udp socket?
@user-ed537d I have a bit of time today too, so any help would be great! Yes, I'm using pupilRead, and setting a timeout on the udp socket. When you post the correction I'll have another go at getting it to work. Thanks again.
i'll dm you
Hello All, I am getting the following message on PupilCapture when trying to record audio :"World: could not identify audio stream clock". Has anyone solved this issue? Thanks
@user-88dd92 this is news to us. Could you post an issue outlining the problem here: https://github.com/pupil-labs/pupil/issues/new ?
Hi guys, first of all thanks @papr for your point of view regarding the module-writing question I asked a few days ago.
Do you know why, when I record using Pupil Mobile and stream and calibrate in Pupil Capture, there is no gaze data when I try to open the recording in Player? In the streamed view I could see my gaze position.
@mpk The issue has been posted as: Audio stream clock Issue #1139
thanks for all the help !
Hi guys, we are starting a new research project where we will record long interactions between people (around 40 minutes per session). Is there any possibility of cutting a long recording into shorter parts so they load better in Pupil Player? It will be very hard for our Mac to open files that big...
Thanks!
Hi everybody, I know that the orange line indicates the "angular distance between mapped pupil positions (red) and their corresponding reference points (blue)". BUT if I do a screen calibration: shouldn't there be only one (blue) reference point per marker and not multiple? Obviously I did not fully understand the "Visualize mapping error". I am happy if someone could help me with that ;)
What light conditions are perfect to minimize this error?
Thanks!
@user-a04957 there will be more than one blue marker per site if you move your head. We sample each site about 30 times per second, and the marker is usually present for 1-4 seconds.
@user-1bcd3e we are working on memory optimizations right now. I think 40 min recordings will be much more manageable in the near future.
When I record (single eye) with the app, after 15-30 min it will stop and say the max file size has been exceeded. The memory card isn't full, so what is limiting the file size?
@user-072005 do you know what the file size is in this case?
Hi, I also have a question about calibration and accuracy. As I understand it, there are 2 metrics: angular accuracy and angular precision. If I press C and calibrate and then press T, I look at some markers and then the Accuracy Visualizer opens and tells me these values. If I recalibrate, the values change. What can be considered good and bad accuracy? I don't understand these values.
Also, sometimes when I calibrate, it fails after I'm done, printing something like 10 error messages saying "markers detected. Please remove all other markers". This can happen when I try to test accuracy as well.
Sometimes either one or both accuracy metrics shows nan (not a number).
Hey, I am wondering if the Oculus Rift DK2 Binocular Add-on also works with the Oculus Rift version sold now commercially (not the DK2). Any experiences or thoughts on how to eye-trackify the Oculus Rift?
Hi @user-aa5ce6 this is something we (Pupil Labs) have been working on for a while. We have a working prototype, but are still waiting on a revision of our new cameras with optics that will enable us to capture the eye region of a wide range of users. Our CV1 release is well behind schedule due to the limited space within the CV1, which requires very specialized cameras/optics in order for it to be an end-user (after-market) add-on. That being said, it is in the works and we really hope to have something to share with you all soon.
@user-d72566 Gaze accuracy, under ideal conditions, has been measured to be within 0.6 degrees (with the 2d mode). However, due to varying eye appearances, facial geometry, and other factors, you may see something like 1 to 2 degrees of angular accuracy in the wild with the 3d mode. You should be aiming for something between 1-2 deg with the 3d mode.
@user-d72566 regarding the message "markers detected. Please remove all other markers" - do you have other markers (e.g. manual markers) visible in the world camera (or multiple markers visible on screen)?
Hi, how is it possible that the norm_x_pos (column 4) and the norm_y_pos (column 4) have values outside the range [0, 1]? What does that mean (does it mean anything at all)? This happens even with a confidence of 1.0 (see picture, column 3). I see the same with very strange pupil diameter values.
Thank you!
@mpk Thank you, that makes sense now
@wrp Huh, I hadn't noticed that there were 2 different detection & mapping modes, 2D and 3D. I can't find anything about them in the docs; what are the pros and cons of the modes? Can I find more info somewhere? The only thing I found was this: https://docs.pupil-labs.com/#notes-on-calibration-accuracy but it doesn't explain the 2D or 3D modes.
But am I doing the procedure correctly? I mean, first I do a calibration and then I can just open the Accuracy Visualizer plugin and check the accuracy. But then I know that I should aim for 2 or less. Is there some literature or reference I can use for this? (I'm using Pupil for my Bachelor's thesis.)
Regarding the markers, no, I don't have any other markers visible. Not sure if it is a bug. If I restart the application and calibrate, I don't get the errors, but if I do multiple calibrations in the same instance of the program, that's when it starts printing the errors. I'm running Pupil version 1.5.12
Dear pupil users, please help me! I cannot set up the system on my computer. Can somebody show me the way? Thanks
The installation is causing me some difficulties
I plan to use PyGaze. I read the previous chat and it seems possible to just synchronize the clocks. Does anyone have experience using PyGaze and how to add support for it?
@user-d72566 yes, we should provide more information about the difference of 2d vs 3d mode in the docs. A coarse summary is here: https://docs.pupil-labs.com/#pupil-detection
Please update to v1.6 - https://pupil-labs.com/software
Re accuracy test: Yes, you would do a calibration, then an accuracy test right after (e.g. t on the keyboard). @user-92dca7 can you reference a paper on physiological limitations of human vision for @user-d72566 ?
@user-ef998b please could you provide some information about your system: 1. Computer specs 2. Operating system and version 3. Pupil hardware you are using 4. Pupil software version number 5. What difficulties/issues are you observing?
I have the same problem as @user-a04957 . The "norm_pos_x" and "norm_pos_y" values are supposed to be normalized, but there are many points outside the [0, 1] range (surprisingly). Points with low confidence values (<0.6) are not displayed.
*(surprisingly with few points that have negative values)
Hi,
whenever I work with multiple markers (only using the "offline surface tracker" in the Player) and try to set the surfaces in the Player, my computer crashes. I am using Windows 10: would it be more stable to use Linux? Do you guys have any idea/workaround? The setup can be seen in the picture. Thanks!
@user-a04957 can you send the error/exception that is raised in terminal?
@user-112ecc are you looking at the gaze topic, pupil or gaze on surface?
@mpk yes. here it is. Thanks! PS: Is there a way to save the surfaces defined in Pupil Capture (where are they stored)? (e.g. save a config file on a USB stick) => If I define specific surfaces for 64 markers, it would be pretty nice to export this to another machine.
hi there
@wrp Alright, thanks.
I'm using a modified backend so I'm running from source. I have tried 1.6 but it gave me lots of errors so I will stick with my version for now. :D
Regarding the accuracy, a test does not seem to be needed? When I calibrate and check the plugin, it might say 3.23354665. Then if I calibrate again, I might get 1.7545343545. This is all without running the test, just regular calibration.
Another question about the Pupil Player. When I load a recorded video, it always says "no pre-recorded calibration available. Loading dummy calibration". Does this mean that the Player ignores the calibration I did in Pupil Capture? I've tried including the user_calibration_data file in the recordings folder, but Pupil Player ignores it. I thought calibration data was exported when recording?
(Ignore the image)
Hi guys
Do you know where in the code of https://github.com/pupil-labs/pupil the sending methods are written?
I want to observe their behavior so that I can implement my own communication in C++
Hi everyone! I'm trying to have Matlab/PTB3 talk to Pupil Capture to start recording, send triggers, etc. I'm using the pupil middleman scripts (from @user-e7102b & @user-dfeeb9, https://github.com/attlab/pupil_middleman). I can get the sample_mm.py script running, and it opens the pupil remote plugin in the Pupil Capture GUI successfully. In addition, in the command output I can see the various triggers being sent from Matlab and received by the middleman script (e.g., mm_modules.pyudp:Received buffer: b'START_CAL'. b'START_CAL' 1521745885021). However, the triggers don't seem to show up in Pupil Capture or do anything there (e.g., calibration/recording doesn't start when I send the commands from Matlab). I'm running on Windows 7 (running both Matlab and Pupil Capture on a single machine). I'm hoping I might get some suggestions from the community for troubleshooting - I have some experience with Matlab, but almost none with Python, so it's possible I'm missing something very obvious. Thanks!
@user-dae976 Welcome fellow MATLAB user! It sounds like the matlab>python side of things is working OK, but not the python>pupil capture. Did you check to make sure the addresses in sample_mm.py and pupil remote are the same? @user-dfeeb9 wrote the python script, so he might have some other suggestions? Re not seeing the triggers appear in Pupil Capture, that's not surprising (this should be possible, but I think we need to add a few lines of code to the python script to make it work). However, when you play back the recording in pupil player, you should see the annotations then.
Also, bear in mind that I haven't tested this on Windows (only Mac). But it shouldn't really matter...
Hi @user-dae976, thanks for pinging us on this and it's good to see my code being put through the test. As @user-e7102b said, we'd like to check if you have everything connected to the correct addresses (this is a stupid check but it's good for sanity), though my guess is that they are. As a matter of fact, given that the py code is able to start the annotations plugin (if I understand you correctly re: opening the remote plugin), then the py script is appropriately connected to pupil-remote. The problem therefore is with annotations and triggers. I have some ideas regarding this, but can I ask what your setup is like and what you are recording? Also, have you observed the pupil-capture log? In theory, any annotations/triggers you send should register on that log so if they aren't coming up on pupil-capture logs, they aren't being sent/received properly. if they show up on the logs but do not show up on pupil-player then we know it's more probable to be an issue with your player setup
Hi @user-dfeeb9 and @user-e7102b, thanks for the quick replies, much appreciated! I think I described it poorly before, but yes, python is starting the annotations plugin, so I think those addresses are correct. What can I tell you about the setup that would be useful? I'm running Matlab 2015a (32-bit - I have to use this because I am also using a reach-tracker that works better on 32-bit Matlab). The Pupil Capture GUI works fine when I use the commands there (e.g., I can calibrate, start a recording, see the file it creates, etc). I really haven't changed the setup much from how it is when you download it from the pupil-labs site. I'm recording in 2D. However, the Matlab script doesn't start a recording remotely, so there aren't files created in the recordings directory when I run the script. I can look at the output in the command window that opens with Pupil Capture (I think this is similar to the log file, though tell me if there is somewhere else I should look), and I don't see anything pop up when I send various commands through Matlab. Are there any additional scripts (besides those posted on the pupil_middleman github) that I'm supposed to have downloaded, possibly?
I don't think there are any other scripts that you need to download. One question - which version of python are you using?
python 3.6.4
Hello everyone, I have several beginner questions, and will greatly appreciate some help.
1. Is the coordinate system created by the area seen by the world camera at any time, meaning that the coordinate system changes every time the head is moved? Or does it somehow create a coordinate system that can be compared between different head positions? 2. Does defining a surface mean creating the surface borders as a coordinate system that can remain constant even when the head is moved?
3. Using the Surface plugin, is it possible to define several surfaces at different depths with respect to the person wearing the eye tracker, so as to then get the heatmaps? 4. Can the eye tracker detect the pupil when it is completely dark? (I know that there would be no gaze positions, but will the pupil detection still be accurate?) Thanks so much!
@user-dae976 Can you try just running this simple python script to start a remote recording:
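(Something like this minimal sketch - it assumes Pupil Remote is enabled on its default port 50020:)

import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.REQ)
socket.connect('tcp://127.0.0.1:50020')  # Pupil Remote's default address

socket.send_string('R')      # 'R' asks Pupil Capture to start a recording
print(socket.recv_string())  # Pupil Remote always replies; read the confirmation

# to stop the recording later, send 'r' the same way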
@wrp thanks a lot! hope to hear announcements soon.
@user-e7102b Yes, that script worked! Started a recording in pupil capture
Ok great. That is essentially the same code we use to send commands from python>pupil capture, so it looks like python>pupil is working OK, as well as matlab>python. My guess is that the issue is related to the format of the triggers/annotations (as @user-dfeeb9 suggested earlier). Perhaps something specific to Windows? @user-dfeeb9 have you tried running our scripts on Windows?
Yep, I use them on windows. My current implementation is nearly identical to what's on the pupil-middleman repo with the addition of some timer threads. They run as needed in windows 10
@papr In an earlier discussion you suggested recording the eye videos and calibration procedure along with the main recording, so that we could take advantage of the offline calibration tools. I have been recording the calibration procedure, but in a separate recording file from the tracking session (i.e. start rec > calibrate > stop rec, start rec > run task > stop rec). It occurred to me that you may have been suggesting to record both the calibration and tracking session in the same, continuous recording file? Will my approach cause problems if/when I come to do offline calibration, or is this OK? Thanks
Hi everybody, is there a way to save the surfaces defined in Pupil Capture (where are they stored)? (e.g. save a config file on a USB stick) => If I define specific surfaces for 64 markers, it would be nice to export this to another machine. Thanks
@user-e7102b yes @papr's suggestion (and my suggestion) would be to record calibration and tracking session in the same recording. If you want to conduct offline calibration, then you will need to have the calibration procedure present/visible in the tracking session recording.
@wrp Thanks - that's good to know.
@user-e7102b you're welcome
Hi, I have another question about the Pupil Player. When I load a recorded video, it always says "no pre-recorded calibration available. Loading dummy calibration". Does this mean that the Player ignores the calibration I did in Pupil Capture? I've tried including the user_calibration_data file in the recordings folder, but Pupil Player ignores it. I thought calibration data was exported when recording?
Hi @user-d72566 is the message you are seeing related to the camera_models? Like this:
player - [INFO] camera_models: No user calibration found for camera world at resolution (1280, 720)
player - [INFO] camera_models: No pre-recorded calibration available
player - [WARNING] camera_models: Loading dummy calibration
If so, this is the camera calibration (not gaze calibration)
@user-d72566 IIRC you are using a custom camera/backend, correct? Please clarify
@wrp Yes that is the message I get and yes I am using a custom backend. I feel a little lost, what is the difference between camera and gaze calibration?
@user-d72566 are you using a custom world camera as well?
@wrp Yep
ok, this makes sense then
@user-d72566 camera calibration refers to estimating/calibrating for camera intrinsic parameters - please see: https://docs.pupil-labs.com/#camera-intrinsics-estimation
Gaze calibration creates a mapping so that you can map pupil coordinates into the world coordinate space
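For intuition, applying estimated intrinsics to rectify a frame looks roughly like this sketch (K and D below are made-up placeholders; the real camera matrix and distortion coefficients come from the intrinsics estimation):

import cv2
import numpy as np

# placeholder intrinsics for a 1280x720 camera - not real values
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.2, 0.0, 0.0, 0.0])  # distortion coefficients

img = cv2.imread('world_frame.png')
undistorted = cv2.undistort(img, K, D)  # removes lens distortion from the frame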
@user-d72566 out of curiosity would you mind/be able to share information about the cameras you are using in your setup/backend you are implementing?
So camera intrinsics estimation is for modifying the scene/world view to capture more? That could actually be useful to me, as my scene camera has a low FOV. I tried the plugin and followed the steps described in the docs, but I can't get it to work properly. I click show pattern, resize the window and then press c, but nothing happens.
The cameras I use are just regular webcams, not UVC compliant; hence the custom backend I have allows non-UVC devices to be used. They are pretty low-res though: the eye camera is 640x480 @ 30 fps and the scene camera is 1280x720 @ 30 fps
But so the section called Screen marker Calibration is just for gaze then?
@user-d72566 you need to capture at least 10 images of the pattern for camera intrinsic estimation - this will enable you to rectify camera distortion from your camera lens and estimate depth accurately
screen marker calibration is for gaze calibration, correct
Alright. But if I do camera calibration, will this be exported during recording and then used in Player, is that correct? My modified backend is here btw: https://github.com/Baxtex/pupil/blob/master/pupil_src/shared_modules/video_capture/none_uvc_backend.py Not perfect, but it works. (shoutout to papr, he helped me a lot)
Any Pupil Capture/Service developer around?
Hi, is there a way to use the eyetracker remotely with an iphone?
@user-d1ad4f no unfortunately not. Only Android.
Hello guys, why is there no gaze data available when I calibrate a Pupil Mobile stream in Pupil Capture and record in Pupil Mobile?
when I choose the Gaze From Record option
do you recommend a specific Android device and version?
I see they work with the Motorola Z2 Play; is there any other device that could work better and allow more recording time?
Hi, so obviously you are using msgpack to serialize and deserialize the messages. Is there a definition written somewhere?
@user-d1ad4f The Z2 Play is best so far in terms of maximizing recording duration, as it has an expansion interface where you can hot-swap batteries on the back, plus an external SD card slot
Other devices also work, like the Google Nexus 5X and 6P as well as the OnePlus 3/3T/5/5T - but these devices do not have the extra battery capability of the Z2 or an external SD card
I imagine a few people on here use (or have used) chin rests for stability - does anybody have recommendations for models/brands? Thanks!
@user-dae976 We use SR Research Head Supports. They're super heavy duty and will last forever. However, they cost over $1500 when you factor in shipping, tax, import duties etc., so not cheap. I've not found a good alternative, but would love to hear if anyone else has suggestions.
@user-dae976 We have ordered one from Taobao https://item.taobao.com/item.htm?spm=a230r.1.14.90.452662c6UsXujr&id=564882492485&ns=1&abbucket=8#detail
It looks a bit funny, and it may not be as good as the professional ones, but at least it costs only around 4-5 USD
@user-2fbdee Interesting. I wonder if this will extend far enough for taller adults?
@user-e7102b I will keep you guys posted when I have my hands on one of those, as I will be running some experiments that would require participants to go through passages for almost 10 mins
Hello, I have a Pupil headset including an R200 world cam and binocular eye cams. The problem I have is similar to https://github.com/pupil-labs/pupil/issues/767. One difference is that the world camera is accessed via RealSense, not UVC. So I want to know if the RealSense 2.0 SDK is supported as an alternative.
Hi, so I'm currently trying to get Pupil with an R200 camera working on Ubuntu 16.04. I have several problems, but the core one is that I can't get uvcvideo patched correctly. I've been checking the issues and searching the internet, but when trying to install librealsense v1.12.1 (legacy), the patching fails. The result is Pupil not detecting the camera. Has anyone here had the same problem and found a solution?
Unloading existing uvcvideo driver... modprobe: ERROR: could not insert 'uvcvideo': Exec format error
Hi! I have a question regarding intrinsic camera estimation & actually applying it. I'm confused about whether the Pupil Player automatically applies the intrinsic camera estimation or not (I selected fisheye during recording, and the "show undistorted image" looks good to me). In the Player I get the distorted view (there is no plugin to "show undistorted image"), but the surface marker shows me an UNdistorted rectangle. My questions: 1) Are calibrations, fixations, surfaces, saccades etc. calculated on the distorted or undistorted image? 2) Where do I change this in Pupil Player? - Thanks for the response!
Hi guys, I have a quick question concerning Pupil for HoloLens. I have been using it for a week and unfortunately the wires in the plug broke. I'm adding a photo of the damaged part. Is it possible to repair it easily?
Hi, I have a question: why is the Start Time (Synced) in info.csv different from the first timestamp in gaze_positions.csv? Are they using the same time source?
Does this mean the first entry in gaze_positions.csv also refers to a UNIX-epoch time smaller than the one given in Start Time (System)?
Thank you very much!
hi @user-489bd5 this is Douglas from the hardware team. Sorry for the inconvenience; we'll send you replacement hardware as soon as possible. I've sent you a direct message with further details
@user-e7102b in reference to the pupil recording etc., I have some code that adds zmq into Matlab via the Matlab-Python API, and I have it working. I can post the code here if anyone would like it
@user-ed537d I would be interested as well; we have a working solution now, but it's interesting to see how other people did it
@behinger#4801 @user-ed537d This will be the official Matlab example in the future https://github.com/pupil-labs/pupil-helpers/pull/26
@papr if I understand correctly, you open a socket from matlab to python, then python opens a zmq socket to pupillabs? We used a direct matlab zmq interface + custom plugin to decode text-commands in pupil
I'll take a look in a second
Ok @papr just wanted to add all I found during the time I was implementing our solution.
@user-af87c8 No, the example uses the Matlab bindings for the czmq library and communicates directly with Pupil Remote. No middle man software required.
ah sorry, I'm a bit confused - ah, I clicked through the wrong branch. Now it's clear. Let me check what we used
This was the closest we found to a Matlab zmq implementation. https://github.com/fagg/matlab-zmq
there are multiple ones. I think we found three (or four?). We used the only one that did not crash matlab & compiled fine
The Readme in the pr also lists the msgpack library that the example uses
we used: https://github.com/UCL-CATL/cosy-zeromq-matlab/tree/master/zmq There was some problem installing matlab-zmq
BTW, the example is only tested on Matlab 2017a, Ubuntu 17.10
(Ubuntu 16, matlab2016)
ok, cool, thanks! Can you by chance also help me out with the intrinsic camera settings I asked about above?
Unfortunately not. I do not know that part by heart and I won't have the means to look it up for another week. I am currently on mobile only.
ok. Should I write a ticket? Will someone else check discord? I also found a USB mainboard bus (or possibly the unix driver for it, can't tell) that does not allow the same bandwidth as other USB buses (on Dell Precision 1700s - we checked 3 of them - the maximal bandwidth is quite limited). Is that something interesting for you?
Regarding the intrinsics issue, yes, please create an issue and assign me. I will check it as soon as I am back in the office.
@papr i found with fagg's zmq github there were serious issues with lag
Regarding the USB bus stuff, I would say that this belongs in the category of personal setup details that are so specific that it is difficult for us to document them. Nonetheless, I would appreciate it if you could name the details here. This way we can search for them in case we need them.
ok. I will do this next week. We will check out an external PCI USB bus and see whether that fixes the problem. Otherwise we will need the split-hub schematics and upgrade our setup. Thanks a lot for your effort & the amazing piece of software!
enjoy your weekend, bye!
You too!
Here is a very raw version of something I pulled out of the module I built in Matlab to send zmq commands. It uses the Matlab-Python API to load the zmq packages and thus doesn't require the C-compiled code that others, like Andrew Fagg, have on their githubs. I found that this was the most reliable.
p.remoteIP = '127.0.0.1';
p.remotePort = '50020';
isRemoteConnected = false;

% import python packages
zmq = py.importlib.import_module('zmq');
zmqM = py.importlib.import_module('zmq.utils.monitor');
% this.msgpack = py.importlib.import_module('msgpack');

% this must be run before running Requester
context = zmq.Context();
remAddress = sprintf('tcp://%s:%s', p.remoteIP, p.remotePort);

% Requester Initialize
socket = zmq.Socket(context, zmq.REQ);

% connect and block node
block_until_connected = 1;
if block_until_connected == 1
    monitor = socket.get_monitor_socket();
    socket.connect(remAddress)
    for attempt = 1:5
        status = zmqM.recv_monitor_message(monitor);
        if double(status{'event'}) == zmq.EVENT_CONNECTED
            fprintf('Pupil Remote: Event Connected\n')
            break
        elseif double(status{'event'}) == zmq.EVENT_CONNECT_DELAYED
            fprintf('Trying to connect to Pupil Remote again: Attempt %d\n', attempt)
        else
            fprintf('ZMQ Connection Failed: Attempt %d\n', attempt)
        end
    end
    socket.disable_monitor();
    isRemoteConnected = true;
else
    socket.connect(remAddress);
    fprintf('Pupil Remote connection NOT tested...check ip and port\nIP: %s\nPort: %s\n', p.remoteIP, p.remotePort)
end

remoteMountLocation = '/path/to/dir/tosave/recordings';
filename = 'nameOfFile';

% Set Time Base
cmd = 'T 0.0';
socket.send_string(cmd);
zMsgRecv = char(socket.recv_string());
fprintf('Sent command %s and received %s\n', cmd, zMsgRecv);

% set mount/save location and start recording
mountLocation = remoteMountLocation;
filePathName = [mountLocation filename];
cmd = ['R', ' ', filePathName];
socket.send_string(cmd)
zMsgRecv = char(socket.recv_string());
fprintf('Sent command %s and received %s\n', cmd, zMsgRecv);

% set isRecording property to true
isRecording = true;

%%
% stop recording
cmd = 'r';
socket.send_string(cmd)
zMsgRecv = char(socket.recv_string());
fprintf('Sent command %s and received %s\n', cmd, zMsgRecv);

% set isRecording property to false
isRecording = false;
Hi everyone! I'm new here. I have finally managed to install the software, but there is a problem using the device: when I click on detect eye, the little popup window shows a picture of the things in front of me and not my eye/pupil. It says something like: capture failed to provide frames. Can someone please help me? Thank you very much.
Hey guys, I am wondering how I can integrate different psychophysiological measures... like eye tracking and galvanic skin response... any advice?
@user-2fbdee Sure. Of course, it depends on the different devices you're trying to integrate, but basically you'll need to send event codes simultaneously to the different recording devices, thus enabling you to synchronize the data in offline processing. For example, in our lab we use Matlab/Psychtoolbox for stimulus control. With each stimulus presentation we send a numerical event code to pupil-capture via a UDP connection (https://github.com/mtaung/pupil_middleman#pupil-middleman), and to a Brain Products ActiCHamp EEG system via a custom USB/LabJack setup.
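(For the UDP part, the sending side can be as small as this sketch - the host/port are placeholders for whatever address the middleman script listens on:)

import socket

UDP_ADDR = ('127.0.0.1', 7777)  # placeholder; match the listener's address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_event(code):
    # send a numerical event code as a single UDP datagram
    sock.sendto(str(code).encode(), UDP_ADDR)

send_event(42)  # e.g. at stimulus onset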
I have a question. Our lab is currently running a study where we record eye data from participants across multiple sessions on different days. I'd like to compare pupil diameter across these sessions, but there will likely be variability between sessions in terms of camera position and distance from the eye. My understanding is that pupil diameter is measured in pixels. Will I be able to correct the pupil diameter values to compensate for variations in camera distance? I'm using a binocular Pupil headset in 2D mode (algorithm), and I'm recording both eye videos. Any suggestions would be appreciated. Thanks!
@user-e7102b You could try running offline pupil detection in 3d mode and comparing the resulting 3d pupil diameters, which are measured in mm. This assumes a well-fit 3d model though. Please be aware that the 3d model does not take refraction of the cornea into account.
@papr Thanks, I'll give that a try.
Hi all - I'm wondering if there is already a way to import eye and world camera data of a custom headset from a network source (such as through the lab streaming layer, http, or some network protocol). For example, a raspberry pi is broadcasting camera data, which I would like to receive and analyze on a local desktop running Pupil Capture. Has this been done? Can this be easily done with a plugin, or might I have to dig deeper into re-writing some of the source code?
@user-fc793b you should look at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/ndsi_backend.py and https://github.com/pupil-labs/pyndsi
@user-fc793b starting from pyndsi's example might be the best intro to ndsi
Greetings. We are trying to set up a protocol for offline calibration. The first step is to detect the eye and set parameters for the eye model in algorithm mode. It looks like we have 4 parameters we can play with: intensity range, pupil min, pupil max, and model sensitivity. Any advice on how to optimize these parameters to achieve a good calibration?
min and max seem straightforward, as does pupil intensity. Any advice on setting model sensitivity?
Hello, how can my plugin react when the Pupil app starts recording? I tried the following code but it does not work:

def on_notify(self, notification):
    if notification['subject'] is "recording.should_start":
        print('my plugin: recording started')
@user-e02f58 try == instead of is. Also, look for recording.started instead of should_start
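(i.e., something along these lines - a minimal sketch of the corrected handler:)

def on_notify(self, notification):
    # 'recording.started' is broadcast once the recording actually runs;
    # 'recording.should_start' is only the request to start one
    if notification['subject'] == 'recording.started':
        print('my plugin: recording started')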
@papr == work for me, thanks!
I also want to get the x and y of the pupil positions. Can you tell me which file contains the gaze finder, so that I can use it as a reference?
I wonder if I can get the x of the pupil position with something like this:

def recent_events(self, events):
    if 'pupil_positions' in events:
        pp = events['pupil_positions']
        print(pp.norm_pos_x)
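(For reference, a sketch that matches the pupil datum format - events['pupil_positions'] is a list of dicts, each carrying a 'norm_pos' pair rather than norm_pos_x/norm_pos_y attributes:)

def recent_events(self, events):
    # iterate over the recent pupil datums, if any
    for pp in events.get('pupil_positions', []):
        norm_x, norm_y = pp['norm_pos']
        print(norm_x, norm_y)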
Is there anybody with more insight into the Pupil Capture implementation who could help me with some questions?
Hi again, I have a question about vision correction lenses and the pupil trackers - at the moment, is there a native solution to accommodating corrective lenses and the pupil trackers? When testing the trackers on people wearing corrective lenses, as expected I don't get a very good capture
@John Spencer#6980 I would be interested in that as well
I'm encountering a lot of issues using Pupil Player (latest version) to open and export recordings on my MacBook Pro (2017, 16GB RAM, 2.5 GHz Core i7). The files I'm trying to open are either 10-minute or 20-minute recordings (Pupil headset, 120 Hz binocular, 30 Hz world). Player often crashes when attempting to open the files. Furthermore, when I get the files to open and hit the export button, this frequently fails or only exports a portion of the data. Do I not have a powerful enough system to perform these actions? I've found that I can reduce the likelihood of crashing by closing all other power-hungry applications on the machine, but even with a freshly rebooted machine and nothing else open, I'm still having trouble.
This might be entirely coincidental, but I seem to have more luck with the exports if I don't attempt to play back the recording first i.e. just load the folder into pupil player, then hit "e" immediately.
Does the data recorded with Capture contain the detailed system time?
Hi @user-ef998b apologies for the delayed reply. You noted:
When i click on detect eye, the little popup window shows the picture of the things infront of me and not my eye/pupil. It says something like this: capture failed to provide frames
Based on your description it would seem that the eye window is displaying the video feed from the world camera. Please restart Pupil Capture with default settings: General > Restart with default settings.
Please also let us know what OS you're using, OS version - if the above does not resolve the issue/behavior you are experiencing.
@user-78dc8f In most cases you can use default settings for the pupil detector. I would only suggest setting model sensitivity as an advanced setting. The protocol I would recommend is as follows: 0. Adjust eye cameras and ensure that the eye of the participant is within the frame for all eye movements. Ask participant to move eye around to sample extreme sites or look at a point and roll head to build up 3d model. 1. Check if pupil is detected in eye windows. If not, then try adjusting min/max pupil detection params if needed.
Thanks @user-e7102b
By the way, regarding the chin rest, here are some photos (it seems to work well)
Chin Rest (Low cost)
Hope it helps!
Hello, does anyone know how to get a timestamp in a self-created plugin? I tried self.g_pool.timestamps but the program crashes
the plugin base class's g_pool has no 'timestamps' attribute, but base classes like Gaze_Producer_Base do have timestamps
@user-e02f58 g_pool.get_timestamp IIRC. g_pool is just a reference to a bunch of useful objects and functions. Its base class is meaningless.
@papr oh I see, I can use self.g_pool.get_timestamp() to get the number. Is there a list describing all useful objects and functions in g_pool?
The dir() function is usually used to inspect python objects. https://docs.python.org/3/library/functions.html#dir
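(For example, a quick, unofficial way to explore it from inside a plugin:)

# list everything g_pool exposes, skipping private attributes
for name in dir(self.g_pool):
    if not name.startswith('_'):
        print(name)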
hello, I'm new here and have some problems with driver installation. Can somebody help me? I can't find libUSBK USB devices; in Activate Source the cameras are "unknown", and the Pupil windows say "camera already in use or blocked"
in gaze_producers.py, it gets timestamps using self.g_pool.timestamps[self.g_pool.capture.get_frame_index()]
but I cannot use this function in my plugin
That's because gaze producers includes Pupil Player plugins. These are not compatible with Pupil Capture. Have a look at the imports in world.py for a list of Capture-compatible plugins
@user-bf07d4 this means that the driver installation was not successful. Try running Capture with administrator rights
I've already tried but it still doesn't work
Please see the driver troubleshooting section in the docs then
Already done
My computer runs Windows 7; my cams are a Logitech C615 and an HD 6000
Well, we only support Windows 10.
thank you very much!!! Now I will try on another PC
@user-bf07d4 and another thing that just came to my mind: You will have to install drivers manually due to not using the standard cameras. https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
thanks
@papr help please, I'm doing the installation on a Win 10 OS following the instructions on the page "How to prepare your system for uvc on Windows (8 and later)", but I can't find the wheel file mentioned in "download the wheel file from the releases page and do pip install uvc-0.7.2-cp35-cp35m-win_amd64.whl"
point 8
@papr
Use the newest wheel that you can find on the github release page
ok all clear now (maybe)! thank you again
this is when I try to launch the program @papr
How can you simulate dummy data from Pupil Capture? Can you record some pupil data, or is there any dummy data available? I don't need anything specific yet, just want to simulate the behavior.
@user-f1eba3 if you set the eye images to fake capture you should get dummy pupil and gaze data.
How can one do that?
Hi all, I'm fairly new to Pupil Labs and trying to make sense of the data export. Regarding the gaze positions, Pupil gives normalized coordinates based on the world frame. I'm trying to understand what the world frame looks like in order to make a transformation onto a surface. Any help would be greatly appreciated
Question: is there a way to batch export raw data from multiple participants in pupil player? I'm aware that there is a batch export option, but this only seems to export the .mp4 and numpy files (I need the csv files, surface data etc.). If this doesn't exist, it would be really useful. It's very tedious to manually export each file if you have a ton of data, like we do.
Hi All,
Whenever I open Eye0 & Eye1 in debug mode, pupil_capture crashes, with a prompt displaying "out of memory". Can anyone help me?
Dear @wrp, thanks for your help, but it is still not working. The following text appears: EYE1: done updating drivers! EYE1: Init failed. Capture is started in ghost mode. No images will be supplied. EYE1: no user calibration found for camera Ghost capture at resolution 320,240. EYE1: no pre-recorded calibration available. EYE1: loading dummy calibration.
After this, nothing happens
Do you have any suggestions?
Thank you very much.
Question: how can Pupil in VR compensate for slippage? How is this process done?
Hi, I am completely new to this app, and I don't know who I should write to. I created my own camera and I am trying to find out how to obtain the software to be able to start collecting data. Thank you in advance, Ana
@user-04d904 gaze positions are relative to the world camera frame. So (0,0) would be the bottom left of the world camera video frame, and (1,1) top right corner. Re surfaces: you can use the Offline Surface tracker and export with this plugin loaded in order to get gaze positions relative to each surface
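(As an illustration, converting a normalized gaze position to pixel coordinates could look like this sketch - note the y-flip, since the normalized origin is bottom-left while image origins are top-left; the 1280x720 resolution is just an assumed example:)

def norm_to_pixels(norm_pos, width=1280, height=720):
    # width/height are the world frame resolution of your setup
    x, y = norm_pos
    return int(x * width), int((1.0 - y) * height)

print(norm_to_pixels((0.5, 0.5)))  # center of the frame -> (640, 360)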
@user-e7102b currently there is no way to batch export raw data from multiple participants in Player. This is a much-needed feature for sure. Please create an issue for this in the github repo if there isn't one already.
@user-b19122 could you supply the specs of your system (I recall this is running on Windows), e.g. CPU and RAM specs?
@user-ef998b can you check the device manager to see if drivers are installed in the libusbK category
@user-1d8719 thanks for getting in touch. What camera are you using? Software is available at https://github.com/pupil-labs/pupil/releases/latest
@user-b91aa6 I have responded to your question in the core-xr channel.
I am using the Logitech C525, and I have changed the light as it is done in the video. Is it a demo or is it the complete software? Thank you
@user-1d8719 Pupil software is used by many researchers around the world - so more than just a demo. We are always working to add more features and improve - so feedback is always welcome.
Cool!! Thank you so much, I will!
You're welcome!
dear @wrp, I am really not finding this libusbK category. Where is it exactly? Thank you
Device Manager > libusbK. If drivers are correctly installed, then you should see the libusbK category within the Device Manager on Windows.
based on your message @user-ef998b I assume that you are using Windows 10 - please confirm that this is the case.
yes, i am using windows 10
Do you see drivers installed for Pupil Cam in other categories of the device manager?
Pupil Cam1 ID2 is installed
in what category?
libusbK
@user-ef998b how many cameras are listed in the libusbK category? You can also go to View > Show hidden devices in the device manager
Please also start Pupil Capture and go to General > restart with default settings
only one is there: cam1id2
Are other cameras installed in other categories? Do you have a binocular or monocular system?
I have a binocular one, and yes, under 'Cameras' there are 2: ID0 and ID1
To debug driver installation on Windows 10, please try the following:
1. Unplug Pupil Labs hardware
2. Open Device Manager
2.1. Click View > Show Hidden Devices
2.2. Expand the libUSBK devices category and expand the Imaging Devices category within Device Manager
2.3. Uninstall/delete drivers for all Pupil Cam 1 ID0, Pupil Cam 1 ID1, and Pupil Cam 1 ID2 devices within both libUSBK and Imaging Devices Category
3. Restart Computer
4. Start Pupil Capture (with admin privileges on Windows 10)
4.1. General Menu > Restart with default settings
5. Plug in Pupil Headset - please wait, drivers should install automatically (please ensure you have admin user privileges)
i will give it a try
i really appreciate your help
will get back to you with the results soon
@user-ef998b I hope that we can resolve driver issues soon.
Thank you!
@wrp system specs are [email removed] 8 GB RAM, 1 TB HDD, 64-bit Win 10.
I don't think it is an issue with the specs.
Please have a look at the prompt message
dear @wrp, I have tried out your suggestion, but nothing changed. Unfortunately it is now worse because there isn't a main camera picture. Advice? Thanks.
Thanks @wrp for your help
Hi - I am new to Pupil and when I open Pupil Capture on macOS 10.12, I get the following error message: "EYE0: Init failed. Capture is started in ghost mode. No images will be supplied"
How can I fix this?
FYI I have installed all the MacOS dependencies
Is there a repository for Pupil plugins? I was just wondering if there are any plugins for detecting and exporting saccades and smooth pursuits.
Hi all, has anyone had any experience measuring pupil dilation?
@user-ef1c12 have you tried running using a bundle? Does it work then?
@user-d72566 maybe this is relevant: https://github.com/pupil-labs/pupil-community ?
@user-ef998b please send me a DM and we can try to coordinate a way to fix this for you.
@mpk Thanks, will take a look.
I have a feature request: it would be really nice if it were possible to zoom in on the graphs in Player, like holding CTRL and using the scroll wheel or similar. Just a thought that hit me.
Also, I'm wondering if it is possible to export annotations together with, for example, fixations? For example, if I were to export fixations and blinks, it would be nice if they could include columns for annotations as well.
@mpk I am sorry, what do you mean by "using a bundle"?
@user-d72566 regarding timeline zoom: This is something we plan to implement soon!
Regarding joining annotations with other data: I have not thought about this before. If there is a nice way to do this generically, I'd say we can do it.
@user-ef1c12 I mean just download the app from here and run it: http://github.com/pupil-labs/pupil/releases/latest
I get the error message on the app
FYI, the headset doesn't show up on the list of USB devices while it's plugged in
@user-ef1c12 it should look like this:
if not there is a USB cable issue.
@mpk What can I do about it?
try a different usb port? Different cable? Different Computer?
if nothing helps, please reach out to info@pupil-labs.com for support on the hardware.
@mpk I have tried all USB ports (they are all functioning), I can't use a different cable since it's a Pupil-specific cable
Are you sure it's not a software issue? I have seen other users report the same issue before
If the hardware does not show up in the device manager and you are on Mac, this is for sure not a SW issue. Restart the machine just to make sure.
make sure that the USB-C connector is fully engaged with the clip of our headset.
That was the issue!
It's working, thank you!
@user-ef1c12 glad to hear I could help!
@mpk is it normal if the pupil/iris image is out of focus? I had seen a way crisper pupil/iris image with the previous Pupil headset
@wrp - I tried uninstalling & followed your instructions, but pupil_capture still crashes when we open debug mode for both eyes (0 & 1). The other issue is that when we start the device & both eye cameras, it gives -- eye1 - [ERROR] uvc: Could not init 'Analog video standard'! Error: Error: Pipe error.
what is this??
@user-b19122 we have not seen this issue before. Can you try a different machine?
@user-ef1c12 if you are using a 200 Hz system, yes, the image looks less nice, but tracking should work fine regardless.
@mpk Got it. Thank you!
@mpk Yeah, a tool for merging the annotations with blinks, fixations etc. into CSV files would be great. Right now I'm doing it manually. I don't have that much video to work with for the moment (just 30 minutes), but it still takes an hour or two. I can imagine that having more video would take a substantial amount of time.
As an example: I exported the blinks into CSV and inserted a new column with the label "Task". Then I had to move the trim handle inside Player to the first task in my experiment. Let's say that task 1 started about 30 seconds into the video. Then I move the trim handle to 30 seconds and look at the minimum frame index to determine where the task started. I then continue to watch until task 2 starts and do the same thing again to determine the end frame. Then I look into the CSV file and mark all cells that occurred in this frame span as task 1. This is pretty cumbersome if you ask me, and I have to do it for both blinks and fixations. Maybe there is an easier way?
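(A rough sketch of how this could be scripted once the task frame windows are known - the column name 'start_frame_index' is a guess, adjust it to the actual headers of the export:)

import pandas as pd

# hypothetical task windows in frame indices, determined once via the trim handles
tasks = [('task1', 900, 2400), ('task2', 2401, 4100)]

blinks = pd.read_csv('blinks.csv')

def label_frame(frame):
    # return the task whose frame window contains this frame, if any
    for name, start, end in tasks:
        if start <= frame <= end:
            return name
    return ''

blinks['Task'] = blinks['start_frame_index'].apply(label_frame)
blinks.to_csv('blinks_labeled.csv', index=False)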
@wrp check the image (algo) & let me know whether it is correct
I am using pupil mobile and I can record for 15-30 min and then I get a message saying that the max file size has been exceeded and it cuts off. I have more room on the SD card, so I was curious what is limiting the file size?
Hello, we just got a second eye tracker, with the newer 200 Hz fixed-focus cameras. We're having a lot of trouble with calibration relative to the old 120 Hz cameras, and the camera view seems out of focus at any reasonable distance from the eye, with or without the extenders. Also, the framerate in the software seems to be capped at 120 Hz. Any suggestions for what to do?
I see others have had the same issue, but our calibration is objectively worse than with the other device (testing them sequentially)
@user-82fd94 We've had a 120 Hz headset for a few months now, and just purchased a 200 Hz headset. I've not had a chance to play around with it too much yet, but I noticed that the eye looks less focused, although apparently this is normal and shouldn't affect calibration. I've played around with 2D screen marker calibration and it looks OK. Are you trying to do 2D or 3D? Re the framerate, if you drop the resolution of the eye cameras you should see the higher frame rate options appear.
We're doing the 2D, and using the screen markers. I'll try changing the resolution as well, thank you.
Lowering the resolution seemed to fix it, we're getting similar angular accuracy for both now. Thanks @user-e7102b
Good to know! Seems counter-intuitive that lower resolution = better calibration...but hey, whatever works
I want to know the timestamp format in world_timestamps.npy - even better, tell me the source code location.
@wrp Camera Eye0 is giving quite a good view, but Eye1 is a bit dark. How can we adjust the settings for the Eye1 cam? I didn't find any options.
Hi everyone. I am a researcher in Austria. Actually a colleague is the main eye tracking guy, but for a demo we wanted to stream from the headset and the Android phone to the desktop. It always says "Make sure 'time_sync' is loaded" and then 'time_sync' we are the leader, 'time_sync' Become clock master with rank 6.5-something, 'time_sync' 4.54 MG-CU removed, 'pyre.pyre_node' Group default-time_sync-v1 not found, 'time_sync' 6.54 MG-CU added...
No video is appearing on the desktop. If I connect to the desktop directly it works; switching on the individual cameras on the Pupil phone also works
@user-bab6ad does the phone show in the pupil mobile menu of pupil capture? Did you select the correct camera in the menu below?
@mpk hm, I did not see any menu on the phone. I just see the three dots for the settings. Or do you mean the main sensor listing in the middle?
Maybe relevant: the only phone we have with USB-C is a Pixel with, I think, Android 8.1. Can that be the problem?
I was referring to the sidebar menu in Pupil Capture. I don't know if 8.1 is an issue.
@mpk there I went to the NDSI manager and selected Pupil Mobile. When I select it, the time_sync messages from above appear and the screen is grey. But maybe I need to select a camera in another screen then?
does the bottom menu show the camera you want?
if you select the camera there and you still get a gray window, it's a phone OS issue (unfortunately).
Hm, the laptop has Windows, and I quickly checked the version of Pupil Capture; it looks different to me
I start Pupil v1.6.11 on Windows
that's fine.
ok, so I can select 'Pupil Mobile' in the Manager dropdown, but it does not show me a remote host
we are on the same WiFi, I checked that
then the Wifi is port blocking. Can you try opening a hotspot with a third phone and logging both laptop and pupil mobile phone onto that?
ah, that is a possibility
I will try that
it now says exception in thread-8, in the end it goes to zhelper.py 489, GetAdaptersAddress: argument 4 expected LP_IP_ADAPTER_ADDRESSES instance instead of LP_IP_ADAPTER_ADRESSES
o_O this error message looks like it makes no sense
@user-bab6ad sorry to hear that. I'm not sure what the issue is; sounds like a funny WiFi config. The error is in one of the external libs we use, so nothing I can directly help with
@mpk ok, one last question: it now also does not start when I switch back to the default network. Is there a command to reset that? Because then I can play with the WiFi
yes. Just delete the folder called pupil_capture_settings in your user directory.
ok, perfect, thx!
@user-6e1816 it's a one dimensional float64 numpy array. Use numpy.load() to load it by yourself
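(For example:)

import numpy as np

ts = np.load('world_timestamps.npy')  # one timestamp per world frame, in seconds
print(ts.dtype, ts.shape, ts[:5])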
Hi - I did some offline mobile binocular recordings last weekend. With pupil-player v1.2.7 I could successfully run offline detection and video export. After that I got the idea to try out v1.6 and updated to it on OSX High Sierra. Now pupil-player v1.6 just shuts down after dragging a local recording folder onto it. To continue I put back my old working v1.2.7 binaries, and suddenly the writing of the export doesn't start anymore; the progress just stops at the in-point. Are there dependencies to fulfil or permissions to set manually? Again: never touch a running system.
I will be back on Monday afternoon. Don't worry, but maybe someone has an idea.
... v1.6 is working now. Maybe it was an invisible window in the display arrangement (with the primary screen on the right), or a missing update after the upgrade.
Hi all. I am looking into some of pyglui's functionality and thus downloaded the example from github. I cannot understand why, in pyglui/example/example.py, I can't import "draw_concentric_circles". I checked that pyglui is in the path and there is no error with the imports from pyglui.cygl.utils.
@user-42b39f the example is unfortunately out of date. Please create an issue and I will update it the coming week.
@papr I know this, but what I really want to know is the format of this time, I used to think it came from time.time(), but apparently not.
Oops, I did not notice that this issue had been reported already. Is draw_concentric_circles deprecated or not supported anymore? I tried to get a list of all the methods in pyglui, but as I am new to Python I didn't find how to list them. I tried various combinations of dir(), help(), getmembers...
@user-6e1816 by default Pupil time is a monotonic clock whose epoch has no meaning. But you can use Pupil Remote or Pupil Time Sync to change the epoch to e.g. the Unix epoch.
@user-42b39f it is not supported anymore since we changed the calibration marker drawing method.
@user-6e1816 Small addition: The time unit is always seconds
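For reference, a minimal sketch of changing the epoch via Pupil Remote, assuming Capture runs locally with Pupil Remote on its default port 50020:

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# "T <timestamp>" asks Capture to set Pupil time to the given value,
# here the current Unix time
remote.send_string("T {}".format(time.time()))
print(remote.recv_string())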
Hi all, I am interested in installing an IR camera inside a VR headset, recording pupil movement, and uploading the video into Pupil Capture + Pupil Player to get a readout of eye movement. I am not concerned with gaze (i.e. I only need one camera). Is there a smart/efficient way to do this?
Hello,
I am having a problem using Pupil Remote to connect 2 desktops
One desktop is connected to the Pupil Labs eye tracker and the other is for remote visualization
Can anyone help me with how to do that?
Hello, I purchased the "Epson Moverio BT-300 Binocular Mount Add-on" through a Japanese agency, but I broke the wires in the plug. Could you tell me how to repair it? I attach a photo of the damaged part.
@user-babd94 Do you feel comfortable changing the cables yourself?
@papr I think that wiring can be done if it is not too difficult
Please write an email to info@pupil-labs.com including your order details, the image and a reference to this discord conversation.
Certainly. I will send it after contacting the agency. Thank you.
@papr, any advice on pulling only the eye tracking data, without using a world camera?
Hi everyone, I just got my Pupil Labs eye tracker (previously I was using a DIY version). It's the high speed world camera, 200 Hz binocular version. I was wondering how I can zoom out in the eye cameras, since even after adjusting them on their rail I can't get a decent view of each eye, and one of the eye cameras is upside down. I already flipped the image, but I was wondering if there's another solution for this problem.
Hi @user-006924 nice to hear from you again. The 200hz eye cameras do not zoom or focus. The right eye image is upside down by design - the orientation of the images does not affect gaze estimation. Did you try the extension arms (the orange arms that come with the headset)?
You can learn more about the additional parts in the docs here: https://docs.pupil-labs.com/#additional-parts
If you are still having difficulty capturing an eye image, please share or send images/videos of the eye so we can provide feedback.
hey, I'd like to open the camera feed from the two IR eye cameras with pyuvc and opencv. When I run:

import uvc
import logging
import cv2

logging.basicConfig(level=logging.INFO)

dev_list = uvc.device_list()
cap = uvc.Capture(dev_list[0]['uid'])
cap.frame_mode = (400, 400, 60)
cap.bandwidth_factor = 1.3

while True:
    frame = cap.get_frame_robust()
    cv2.imshow("img", frame.img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap = None
I only get a blank, black screen
Is there something else I have to initialize? Values, etc?
@user-d74bad did you take a look at https://github.com/pupil-labs/pyuvc/blob/master/example.py already?
I've tried bgr and gray frame attributes as well
that's for the world camera
and I can get that working easily
with the example I posted
I just change the device list index to 1
@wrp yes
I do get a little bit of feedback: Estimated / selected altsetting bandwith : 210 / 256. !!!!Packets per transfer = 32 frameInterval = 166666
not sure if that's normal
and why not use bgr
?
that doesn't work π¦
just was trying some other formats
@user-d74bad to clarify - you are trying to display frames from Pupil Labs 200hz eye camera, correct?
@wrp yes, exactly
@user-d74bad I will try to replicate this issue and get back to you
beautiful, thanks
@user-d74bad what OS are you using?
Mac OSX
I've also opened a working camera feed with ffmpeg of the ir cameras
I had to specify a certain codec though
@user-d74bad I am able to successfully get frames from the eye cameras as well - but perhaps opencv is not able to display the format we are supplying - all the data is there - but I also see black screen using opencv's imshow function. We will need to update the example in pyuvc. I will talk with my team about this today
Interesting, you're able to display the frames with ffmpeg or opencv?
Oh gotcha
I see now
Great, thanks for working on that
alternatively - the exposure and other settings may be set such that you can not see the image, hence black screen from opencv's imshow
@user-d74bad it is in fact the exposure mode and gamma
Example - if you do the following you will see the image in opencv's imshow window
controls_dict = dict([(c.display_name, c) for c in cap.controls])
controls_dict['Auto Exposure Mode'].value = 1
controls_dict['Gamma'].value = 200
Heck yes! I'll try when I get back to the lab, and I pass the controls dict to the capture object somehow?
Oh, or are those passed by reference, affecting the cap through their object reference?
you don't need to pass the controls dict - they are passed via their reference, in this simple example at least
Gotcha, great thanks
@user-d74bad welcome!
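Putting the pieces of this exchange together, a minimal sketch for viewing an eye camera (device index, frame mode and control names are taken from the messages above and may differ per setup):

import uvc
import cv2

dev_list = uvc.device_list()
cap = uvc.Capture(dev_list[1]['uid'])  # assumed: eye camera at index 1
cap.frame_mode = (400, 400, 60)
cap.bandwidth_factor = 1.3

# the controls are applied via their object references; the dict does
# not need to be passed back to the capture object
controls = {c.display_name: c for c in cap.controls}
controls['Auto Exposure Mode'].value = 1
controls['Gamma'].value = 200

while True:
    frame = cap.get_frame_robust()
    cv2.imshow("eye", frame.img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap = None
cv2.destroyAllWindows()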
Good day to you people. I have a question regarding analysis of data using surfaces. For my study, I have used a single surface and I want to compare the number of fixations that this particular surface received versus all other fixations in the dataset. Do you guys know how to do that? I hope it makes sense, thanks in advance. Best regards, Alexander.
Hey @user-88ecdc You can export the fixations on the surface and all fixations in two separate csv files. Fixations are identified by their timestamp. Then it is just a matter of comparing the two files.
Thanks for answering so quickly @papr ! I assume that exporting all fixations is done by using the Raw Data Exporter, but how do i export a csv file that only contains the fixations on one surface?
Hey, I have a problem with binocular recordings using two 120Hz cameras. The absolute timestamps of the two pupil signals are offset by 100-500ms. E.g. in this picture, the orange line is earlier than the other one. I'm using very basic code for plotting (see next message).
import os
import matplotlib.pyplot as plt

pldata = pl_file_methods.load_object(os.path.join(filename, 'pupil_data'))

def plot_trace(pupil):
    t = [p['timestamp'] for p in pupil]
    x = [p['norm_pos'][0] for p in pupil]
    plt.plot(t, x, 'o')

plot_trace([p for p in pldata['pupil_positions'] if p['id'] == 0])
plot_trace([p for p in pldata['pupil_positions'] if p['id'] == 1])
Did this ever occur to someone else? I will try to do more recordings and tests soon, but I thought it's worthwhile to ask
@user-8fe915 Could you shortly summarize which hardware and software/os you used for the recording?
sure, 120Hz trackers, 60Hz worldcam, Ubuntu 16.04, newest compiled pupil (would need to check, downloaded it ~7 days ago)
@user-88ecdc You need to start the offline fixation detector and the offline surface detector, and after both finished their computation, you just need to hit export. The offline fixation detector exports the "all fixations" file and the offline surface detector exports the fixations on each surface
(need to go, will be back in ~2h)
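A minimal sketch of the comparison step, assuming pandas and hypothetical file/column names (check the headers of your actual export):

import pandas as pd

all_fix = pd.read_csv("fixations.csv")
srf_fix = pd.read_csv("fixations_on_surface.csv")

# fixations are identified by their timestamp
on_srf = all_fix["start_timestamp"].isin(srf_fix["start_timestamp"])
print("{} of {} fixations were on the surface".format(on_srf.sum(), len(all_fix)))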
@user-af87c8 that means that you probably recorded the eyes as well. Could you please enable the eye video overlay plugin and check if the eye videos are in sync?
@papr the videos are delayed as well
Is this the case for all of your recordings?
I would need to check - but how can this occur for even a single recording?
also, the delay is increasing over time. Does Pupil rely on a constant frame rate for the videos? Or does it use the timestamps of the eye video frames?
@user-af87c8 We have not seen this behavior before. And we also do not know what the cause could be. We specifically do not rely on a fixed frame rate but on the frame timestamps to correlate the data.
The timestamps are generated by the cameras.
btw, @user-af87c8, you can achieve the same result by loading the world/eye0/eye1_timestamps.npy files with numpy. They should yield the same timestamps.
I don't follow, do the cameras have their own clocks? Or do you mean the eye-process (and then the timestamps are in system time)
I checked, at least the number of timestamps (numpy.shape) and the number of frames in the video (checked with ffprobe) are the same. I occasionally get a frame drop during recording (libjpeg-turbo finds a corrupt file, will soon fix this), but framedrops should not influence the mapping of timestamps to video
The camera timestamps are based on the system's monotonic clock that is used for the usb connection timings AFAIK
@user-af87c8 just to confirm you are using a Pupil Headset with usb-c clip?
framedrops do not affect the clock or timestamps.
are you setting uvc controls manually?
no, we have the old USB 2.0 connectors, I don't set uvc controls manually
@user-af87c8 so every camera is exposed separately?
or a single connection?
single connection to the usb hub (the clip), then three cables (not sure what you mean)
three cables from the headset to your PC?
Concerning the numpy timestamps: Pupil data is based on the frames' timestamps. Therefore [p['timestamp'] for p in pldata['pupil_positions'] if p['id'] == 1]
should yield the same timestamps as np.load('eye1_timestamps.npy')
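A quick consistency check along those lines, reusing the pldata object from the plotting snippet above and assuming one pupil datum per eye frame:

import numpy as np

pupil_ts = np.array([p['timestamp'] for p in pldata['pupil_positions'] if p['id'] == 1])
file_ts = np.load('eye1_timestamps.npy')
assert np.allclose(pupil_ts, file_ts)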
@papr ok this I understand. @mpk no. Single connection (as in a single USB cable) from the PC to the hub/clip . Then from the clip three cables to the three cameras
@user-af87c8 ok thanks. Then its on the same USB bus and I dont know what else could be the issue.
yes. I requested the alternative building instructions to make use of multiple USB hubs last week, but did not get a response so far
we are now checking another recording to see how easy we can reproduce it. But this is a huge problem for us
@user-af87c8 of course this is a huge problem. We have spent a lot of time to get this sort of thing right. Its the basis for all other work.
As I said, there are sometimes framedrops (libjpeg-turbo complaining about incomplete frames) - I guess it is due to the high framerate being at the limit of USB 2.0. These occur occasionally, but might add up over time
@user-af87c8 I'm not sure if we received the email requesting alternative hardware. Could you send it again?
framedrops do not affect the timestamps, I think the reason lies elsewhere.
sure, info@pupil-labs.com ?
yes.
ok, sent (thanks already!!)
@user-af87c8 I just tried to reproduce your issue on a binocular 120Hz headset (with usb-c clip though, since we do not have one with the older connectors). You mentioned that the delay got bigger over time. What time span are we talking about until the difference is noticeable?
after 10 minutes you can easily see it if you look at blinks
or saccade onsets with half speed
Yeah, blinks is usually what we use to determine sync
I made a three minute recording and was not able to reproduce the issue. I will do another 10-minute one
@mpk Camera Eye1 is not working anymore and I get the following error message. Could you please let me know what I need to do?
Camera Eye1 appears as "Unknown" when I attempt to select it in the Local UVC sources
hi @user-ef1c12 it's Douglas from the hardware team. Let's schedule a quick debug meeting - i'll direct message you a link
ok great, thank you @user-67e255
Hey, just curious if there is any news on the release of the HTC Vive addon 200 hz version. We're getting ready to collect some data, and already have a 200hz headset, but noticed only the 120hz version is available for the Vive
@wrp that did indeed work for me, I submitted a pull request on the pyuvc github to help others who want to do the same
Thanks for following up @user-d74bad we will review the PR this week
@user-ecbbea we have finalized the camera design for the 200hz Vive and Vive PRO and are gearing up for production. We will let you and the community know when it is available. These new cameras will also enable us to (finally) make an add-on for Oculus CV1 as well. I hope to be able to share something concrete with you all soon on a release date.
@user-d74bad @wrp I reviewed and merged the PR
The yuv output from the frame publisher plugin appears to be yuv422. Is there any way to change this to yuv420?
How is the norm position X and Y for the frames aggregated in the fixations csv file?
@user-8be7cd you would have to convert that yourself. We simply transmit the yuv format we are getting from the cameras.
But doesn't the camera give MJPEG anyway, which uvc/pyuvc converts to yuv?
or rather libjpeg-turbo
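For the conversion itself, a possible sketch using OpenCV, assuming the frame arrives as packed YUYV/yuv422 (verify the exact layout the frame publisher sends):

import numpy as np
import cv2

h, w = 400, 400
frame_yuv422 = np.zeros((h, w, 2), dtype=np.uint8)  # placeholder packed YUYV frame

# round-trip through BGR: packed yuv422 (YUYV) -> BGR -> planar yuv420 (I420)
bgr = cv2.cvtColor(frame_yuv422, cv2.COLOR_YUV2BGR_YUY2)
yuv420 = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV_I420)  # shape (h * 3 // 2, w)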
@user-b0c902 IIRC it is the mean gaze position of the gaze data that the fixation is based on.
I was calculating the average based on the gaze positions but it doesn't match. I then tried to exclude the gaze positions that were below the confidence threshold, and that gave me a closer result, yet not the same. However, I just want to be sure of how it is aggregating the gaze position.
@user-2da779#9122 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L58
Good afternoon. After reinstalling Windows, the computer does not see the world camera. What should I do? Could you tell us which driver we could not install?
@user-d9bb5a please try running the latest version of Pupil Capture as admin to install drivers
and how do I do that? As an admin?
We installed the latest version, but the World Camera is not displayed
Right click the app, "Run as administrator"
@user-d9bb5a you are using the 3d world cam, correct?
Yes thank you. Everything worked out.
and thanks for the improvement of your product, the latest updates are very good. But could you please explain the point about iMotions Exporter -- #1118
@user-d9bb5a You can export your recording to the iMotions format. This is only of interest if you use https://imotions.com/
understood, thanks, I'll read up on it there
@wrp thanks for the info! is there a timeline in terms of when production will be ready? even vaguely, like a week/month etc? cheers
also, is it possible for us to beta test the new hardware? we can pay, as well as provide feedback. we're just in a time crunch to collect data.
Hey guys, I have successfully set up the physical surface markers on my PC screen and written a script using the IPC backbone to receive the metrics regarding the surfaces topic. What interests me is to receive the fixations, and in particular the norm_pos and timestamp on the surface, but there isn't such a topic in the message received. Should I calculate them myself based on some id, or am I doing something wrong? Or should I tap into the fixation topic as well and compare the timestamps received from the fixations topic and the surfaces topic?
@wrp I have logged in. all OK
Hi @user-2334b9 I have sent you a Direct Message - thanks!
Hey guys, can someone help me / give me an insight into my question above, as it is quite time sensitive. I have already written the script to receive the metrics related to the fixation and surfaces topics, but I don't understand which information is "useful" to calculate the fixations in the surfaces topic in a real-time process. Any idea/insight will be very much appreciated. Thank you in advance.
@user-b116a6 Actually, the online surface tracker does not map fixations to surfaces. But that is a useful feature that could be added to the next release.
Alternatively you can use the surface's m_from_screen matrix to transform the fixations' norm_pos into normalized surface coordinates
@papr I thought it had something to do with the gaze_on_srf list that is received. How can I use the m_from_screen matrix to transform the fixations' norm_pos? Currently, yes, I am saving the norm_pos of the fixations as they are received, but I have trouble finding the correlation with the message received from the surfaces topic. Is there an alternative way to do this mapping? Say, given the timestamp of the surfaces topic, if the on_srf field is True, map this timestamp to a fixation timestamp if it exists and assume that the fixation's norm_pos was inside the area created by the physical markers?
I would simply cache the most recent surface events, and use their m_from_screen matrix on receiving a fixation. Maybe add a maximum time difference between surface event and fixation
But this can easily be calculated in the plugin. And should be published directly. I will add this tomorrow
How could I use this in a real-time process though and use the m_from_screen to transform a fixation's norm_pos?
You subscribe to both, fixations and surfaces, and try to apply the most recent fixation to the most recent surface.
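A minimal subscription sketch for that approach, assuming Pupil Remote on its default local port and msgpack for decoding (see the transform sketch further below for what to do with each message):

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:{}".format(sub_port))
sub.setsockopt_string(zmq.SUBSCRIBE, "fixation")
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.unpackb(payload, raw=False)
    # cache the most recent surface event here and apply its
    # m_from_screen to incoming fixations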
@papr Yes, I thought of that after I asked the question, but how would I use the m_from_screen matrix to normalize the fixation's norm_pos, and what is the expected result after that normalization? I seem to be a bit confused...
The matrix is meant as a linear transformation. Matrix multiply it with the norm pos vector
@papr Okay then I will try that. After that, do I assume that the fixation happened within the bounds of the surface if the normalized output coordinates are positive? Shouldn't another value exist in the norm_pos vector since the m_from_screen matrix is 3x3, or do I set it to 0?
Ah, you need to convert it to a homogenous point
The fixation is on the surface if x and y are between 0 and 1
The location of the fixation should correlate with the gaze that was mapped to the surface
@papr Okay thank you very much for the clarification. By converting it to a homogeneous point you mean that the 3rd value of the norm_pos vector should be 1, right?
Anyone know why, when calibrating the stuff on the HTC Vive using hmd-eyes, no calibration stuff gets saved? (Also, in the dev console, it keeps trying to access files that don't exist....)
@user-b116a6 correct. And after multiplying you need to normalize the third dimension back to 1
@papr Okay got it, thank you very much, I'll try that and hopefully it'll work.
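Putting the homogeneous-point advice into code, a sketch (m_from_screen taken from a cached surface event, norm_pos from a fixation message):

import numpy as np

def map_fixation_to_surface(norm_pos, m_from_screen):
    point = np.array([norm_pos[0], norm_pos[1], 1.0])  # homogeneous point
    x, y, w = np.asarray(m_from_screen).reshape(3, 3).dot(point)
    x, y = x / w, y / w  # normalize the third dimension back to 1
    on_surface = 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
    return (x, y), on_surface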
Dear Pablo
I need a guide for setting up the WiFi
I'm not an expert on computers and information technology
A clear and complete instruction can help me a lot
clear and complete instructions would be nice
I have 2 quick (I think) questions as I write a grant proposal. I want to have multiple children wear the pupil-labs headsets in classroom settings, and then will have them and others watch the resulting video records. So I'm interested in getting the most watchable videos and I'll also need to synchronize multiple records. I assumed the monocular high resolution system is what I want, but the specs on the high speed lens seem comparable (except for the lack of autofocus) - any advice on what system will produce the best video in a free-ranging environment? As for synching, I know in theory how to do this, but I'm interested whether others are doing multi-headset work.
Hi @user-e91538 we received your emails and responded via email. Thanks for following up here via chat.
@user-b116a6 This one is for you: https://github.com/pupil-labs/pupil/pull/1157/commits/4bb1aebae85699a1ad78a20b796140c6a10ce727 Fixations will be included as fixations_on_srf in the surface events. Would you mind testing it and giving feedback?
@papr Thank you very much for your prompt response and commit. I just tested it from source and it works really well with the script I implemented; I mapped the norm_pos values on the surface and the mapping was correct. Great work, thanks for implementing it on such short notice.
Nice! Great to hear that it works!
@papr Hello Pablo,
I'm Shubham Chandel, a masters student at Queen Mary University of London, planning to do my thesis with the pupil headset. I'm facing an issue with the eye camera: it doesn't seem to be working (init failed error). I've tried the troubleshooting steps and even tried installing the drivers separately, but still no progress. I have installed the software on Windows 8.1. I would really appreciate it if you could help me resolve the issue. Thanks.
Hello, I have a Moto Z2 Play with Pupil Capture and I want to transfer the recording to a MacBook Pro. How can I do this and view the videos in Pupil Player?
hi, I'm using the HTC Vive addon for my research in foveated rendering. On your website you mention that the eye cameras transfer their images with USB2.0, however the product comes with USB C - USB 3.0. I would prefer using a cable of about 5m to provide sufficient mobility to the users. Can I use a convertor to microUSB without experiencing data loss? Do you have other ideas?
@user-aa0833 yes, you can use a converter. We use a usb3.0 cable but usb2.0 will be fine as well.
@user-8fba8e I recommend using the Android File Transfer application
When I use Pupil Mobile, how do I calibrate the glasses?
When exporting fixation data to a CSV file, it has a column named dispersion. What is the method for calculating this dispersion?
Hello! I am running an experiment with the Pupil Labs eye tracker. We are asking multiple persons to drive on a test track through specific scenarios. Unfortunately, the weather is quite changeable and it is extremely difficult to configure the eye cameras correctly to get good pupil detection (especially when the sun is strong). Is there any documentation, or anyone that could advise me on a procedure to follow to optimize the pupil detection? Thank you
@user-d72566 maximum cosine distance between gaze/pupil vectors that belong to the fixation
I went through the procedure presented on https://www.youtube.com/watch?v=6sWmOcGMDTk but I feel that it is missing some tuning to adapt the algorithm to a sunny environment.
@papr thanks!
Is there a chance to get a phi/theta rotation out of the fitted eye ball in 3d detection mode?
How can I calibrate the eye tracker for doing a study while riding as a copilot?
@papr I have been looking at the code now, more specifically at fixation_detector.py. There are a lot of references to dispersion but I can't find the actual algorithm that calculates it. I thought I could take a look at it and maybe mimic the algorithm for blinks and saccades. Could you perhaps point me in the right direction?
@user-d72566 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L84
Aha, seems it has changed a little from my version too:

def vector_dispersion(vectors):
    distances = pdist(vectors, metric='cosine')
    return np.arccos(1. - distances.max())
Yes. But this is a very recent change in order to disregard outliers
aha
Disregard is incorrect though. Take into account is the more fitting term.
For some reason the player shuts down whenever I try to load a specific recording. It only does it for that one and not any of the others
@user-e1dd4e Could you share this recording with info@pupil-labs.com via e.g. Google Drive? We will have a look at possible causes.
Hey guys, we are using the 200Hz eye camera. I'm just wondering whether the focus of the picture we get is high enough to do accurate eye tracking. Is there a way to adjust the focus? Other eye pictures seem to be sharper.
The 200hz cameras do not have adjustable focus. Do not worry about it. The pupil detection seems to work fine.
@user-c351d6 please make sure to use the 192x192 resolution.
And I have another question. Is there somewhere a list of supported Android devices, apart from the list with the three devices in the documentation, or a list with the requirements for an Android device? For example, do they need a native USB-C connection or is it possible to use other devices with an adapter?
@user-c351d6 it is very unlikely that it will work with an adapter. Even having a usb-c connector is not a guarantee, unfortunately.
@user-c351d6 Wolf#0823 I recommend using a Nexus 5x, P6, OnePlus 5, or Pixel 2
Hi! I want to use eye tracking for my thesis and I am new to all this. Is it possible to see how long a fixation lasts? And how can I record this?
Thanks
Is it possible to record more than 15-30 min with the app?
@user-1f74d7 You can detect fixations using the fixation detector plugin (online through Capture and offline through Player). There is a section in the user docs on fixation detection; it says the duration of a fixation is recorded in Capture. In Player there is a raw data exporter to get the data.
@mpk Did you try the OnePlus 5T as well?
Yes, I am 99% sure that it works.
Hi, how do I play videos recorded in pupil mobile in pupil player?
Hi @user-6f2339 You visualize datasets by dragging a dataset into Pupil Player - https://docs.pupil-labs.com/#pupil-player
@user-6f2339 You will need to first transfer the data recorded with Pupil Mobile on the Android device to your desktop/laptop and then drag dataset into Pupil Player. Please note that you should have included a calibration routine within your recording made with Pupil Mobile so that you can calibrate post-hoc in Pupil Player
hi, we have successfully deployed and run Pupil on an NVIDIA Jetson TX2. While it does work, it consumes a lot of CPU, on average 80%+ on all 4 cores. Any recommendations?
@user-c494ef are you displaying video/graphics or running headless without a UI?
with graphics on, I would like to run headless if that is possible
It is not possible to run completely headless. You can save some CPU by minimizing all windows.
Lower eye video resolution helps as well.
well, we initially used 320x180; however we found that 640x480 with a lower frame rate was a little bit better. In any case, one of the major problems is that the pupil detection consumes about 40% CPU across all cores. Are you planning any optimizations in that area soon?
or have you experimented with compile time optimizations so far?
Hi, since today my player always crashes with this error. Does anyone have an idea?
... I cannot play any recordings
@user-c494ef Do you need pupil detection and mapping for a real time application? If not you can do that offline after a recording.
@papr yes its for a real time application
so disabling is not an option
hello, can anyone give me an hint on how to stream pupil data to hololens wireless(ly)
@user-11dbde check out the hmd-eyes project. It includes a Hololens application to do so, if I remember correctly
thank you
I will have a look there
@user-c494ef an easy solution would be to use two separate computers. One running Capture and the second one running your application
the application is running on hololens
i want to send eye data to the app in a wireless setup
The app connects wirelessly to Pupil Capture.
There is no pupil detection on the Hololens itself.
Could you describe your setup a bit more in detail? I have the feeling that I am misunderstanding your question.
I think i see what i need now
thanks!
@papr yeah, the problem is that we want to have a wearable autonomous system, so more PCs means more of a power problem. One solution we investigated so far is using Pupil Mobile; are you by any chance planning to do pupil detection on the phone?
@user-c494ef we do not plan on having pupil detection on the Pupil Mobile Android app.
okay, any chance that a CNN based approach gets implemented, as suggested by https://arxiv.org/abs/1711.00112 ?
@user-c494ef Pupil Mobile will not be conducting pupil detection - if you are interested in implementing the approach in the paper, it would be great to see an attempt to add something like this as an alternative approach to the pupil detection pipeline in a fork of Pupil as a proof of concept - I'm sure the community would be interested to see this
hehe
Hi everyone! Do you know if it is possible to know where the calibration points appear on the screen, in coordinates, so they can be used to transform the normalised data to cm? (We controlled distance from screen and head movements, and we know at what distance from the centre of the screen the different stimuli appear.) I found 3 calibration files that are automatically generated when I calibrate (eye 0 camera, eye 1 camera and world camera), but I don't know if they can be useful. Thank you
Hi,
I am using eye trackers for the HTC Vive and currently I wanted to know: are my focus and distance optimal?
Hi guys
Regarding the messaging of the Pupil Service to its subscribers: does the structure of the message become volatile at any time?
What do you mean by volatile? The gaze messages might differ depending on being 2d/3d and mono/binocular?
but once set it will not take any other form ?
monocular and binocular mapping is decided dynamically based on the pupils' confidence
Therefore it can change over time
@wrp how do I include this calibration routine?
You start the recording before calibrating instead of doing it the other way around.
I have a problem, when I transfer the data from the phone to the computer and open it with pupil player, the recordings do not show any gaze or fixations. Is this a setup problem?
@user-6f2339 This is expected behavior. You will have to run offline pupil detection and offline calibration in order to see gaze data. See the docs for more information.
@papr I did the offline pupil detection and offline calibration but still cannot see the gaze or fixations.
@user-6f2339 You need to activate the offline fixation detector to get fixations. They are based on gaze, therefore you will only start seeing fixations if you have gaze data. If there is no gaze, the offline calibration failed. There can be multiple reasons for that:
1. Not enough pupil data
2. Not enough reference markers (either circle markers or manually annotated ones)
3. You enabled 3d pupil detection but set the calibration section to 2d, or vice versa
4. The calibration does not converge.
Hi,
I wanted to ask, is there a possibility to get pupil gaze coordinates in degrees? For example horizontal, where the middle is 0,0? I am researching the coordination between head and gaze movement. Head movement is expressed in degrees from the HTC Vive tracking system.
@user-c9c7ba Yes, this is possible if you know the intrinsics of the hmd.
We use this method in the fixation detector to convert 2d gaze norm_pos values to unprojected 3d vectors: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L113-L121
Afterwards you can use these vectors to calculate the cosine distance and convert it to degrees.
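The final step, converting the cosine distance between two such vectors to degrees, might look like this in plain numpy:

import numpy as np

def angle_between_deg(v0, v1):
    cos_sim = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))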
@papr thanks for response, what I need to know for my HMD?, screen resolutions?...
and this line of code: width, height = capture.frame_size - frame_size is 1280x720 in my Pupil Capture, but my HMD resolution is 2160x1200 (1080x1200 per eye). What should I use?
One more question: vectors = capture.intrinsics.unprojectPoints(locations)
this method is very complicated, do I need this method?
Yes, this is the most important function when transforming 2d points to vectors. Also, the intrinsics should be as exact as possible.
OK, I found that method, but can you give me some information about properties?
def __init__(self, K, D, resolution, name):
    self.K = np.array(K)
    self.D = np.array(D)
    self.resolution = resolution
    self.name = name
resolution is the resolution of the HMD, and name is not important... but what are K and D?
Ok, in the hmd case it is a bit different to unproject the 2d point. (K is the camera matrix and D the distortion coefficients.) Have a look at the hmd-eyes examples. There is an example that projects a 3d ray into the scene. Based on this ray you can calculate your gaze in degrees.
OK, I'll try it :)
See the core-xr channel in case of hmd-eyes specific questions
Yes, but I have a feeling that this channel is currently not very active
@papr Thank you very much. I managed to solve the issue.
Hi,
I have a big problem... When I open Pupil Capture and start a recording, it takes 14 sec or so to start... I am using Pupil Capture v1.5.12. I have worked with this version for 2 months and all has worked fine; I don't know what happened. I have tried resetting to default settings, I have reinstalled this version, I have tried version 1.4, and it was the same... any idea what is happening?
@user-c9c7ba something on your system must have changed. Which OS do you use?
Windows 10
but I am using HMD-eyes for data export... and I don't need this recording feature. I have removed the socket send that starts the recording and I can still export data
@user-c9c7ba could you confirm that the issue persists on v1.6?
@user-c9c7ba I do not understand? What did you change? The hmd-eyes/Unity part?
And just to clarify: Capture is slow when starting, but hmd-eyes is as fast as always?
OK, I will try v1.6, but is v1.6 tested with the HMD-eyes plugin for Unity?
Capture takes 25 seconds to start recording and meanwhile is frozen... with Unity it's the same when I send the start recording command...
Aah. I misunderstood. I will try to replicate the issue.
@user-c9c7ba do you also record audio?
yes
In unity? Or in Capture?
This is my pupil capture
and so is called recording from unity
Hi, drivers for Windows 10 are correctly installed and in working condition, but Pupil Capture doesn't see any camera of the Pupil Headset. This headset works fine with an iMac (and the Windows 10 computer works fine with all other applications) . Here is the Pupil Capture.exe screenshot. Any help is very welcome! :-)
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
world - [WARNING] video_capture.uvc_backend: Done updating drivers!
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
world - [ERROR] calibration_routines.screen_marker_calibration: Calibration requires world capture video input.
@user-7bef48 do you see Pupil Cam listed in the libusbK category in the Device Manager on Windows 10?
Hi wrp, Yes, Pupil Cam1 ID0, Pupil Cam1 ID1 and Pupil Cam1 ID2 (each 3 times, because I've tried each of my 3 USB ports)
Ok - any other pupil processes running in the background if you inspect task manager?
Also what version of Pupil Capture?
Pupil Capture is running twice
but there were 2 windows opened.
Can you restart Pupil Capture with default settings. General menu > restart with default settings
Capture Version 1.6.11
Based on your description all drivers seem properly installed, running latest version of Pupil software, and on Windows 10.
(I'll be AFK for a bit will respond async)
Some text is written at the start of the app, but it is too quick for me to read
Here is the text :
@user-7bef48 can you send a screenshot of drivers in device manager?
Also expand the Cameras section in device manager if this category exists
I have a suspicion that this is driver related
The headset is currently connected to one of the 3 USB ports
The Device Manager is in "hidden drivers shown" mode
This screenshot shows that the drivers are not correctly installed.
ok
I can desinstall all the drivers
"uninstall" sorry
All drivers were uninstalled. Pupil Capture was restarted with original settings. Same problem.
@user-7bef48 we are about to release a new version of Pupil Capture with driver installer that addresses recent changes to Windows 10. You can either use a tool like Zadig to overwrite system assigned drivers for Pupil Cam with libusbK drivers or wait for v1.7 Pupil software (coming very soon)
@wrp Thanks for this information. I'm not time stressed, because the experiment will only occur in some months. So, I prefer wait for the v1.7. Thanks for all your help. Best,
ok, thanks
Hi. I have a question about pupil dilation. In Capture, a graph is shown displaying the dilation. Is it possible to export the dilation to a csv file? If not, do you know any paper on how to calculate the dilation?
@user-d72566 The dilation is exported as part of the pupil data in the raw data exporter. Be aware that the 2d pupil diameter is measured in image pixels and the 3d diameter depends on the quality of the 3d model
Hmm I had missed that one. What is the difference between the raw data and gaze_positions for example?
pupil data is relative to the respective eye camera space. Gaze is mapped pupil data into the scene camera space
Aha.
I am looking at pupil_positions now, it has a column labelled diameter, this is the one you are talking about? π
this is the 2d pixel diameter, correct
I see, thanks!
But where is the 3D diameter?
It is only available for 3d detected pupil positions
Should be called diameter_3d
Hmm alright, then what are their formats? Here is an example row:
2D: 55.5888373471996 3D: 1.81006565884836
2D looks like mm but 3D looks like it's cm?
55mm would be very huge for a pupil. 2d is in pixels (and therefore depends on the eye camera's distance to the eye) and is only sensible as a relative value compared to other diameter values.
Indeed it would be very big! It has been a long day hehe. Alright, then I can assume that the 3D diameter is in mm. My readings span from about 1.8-5, which seems reasonable.
3d is in mm, but as I said this depends on the fitness of the 3d model.
yeah, sounds good
I hear you! Still, might be worth investigating. Thanks
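A small sketch of reading the exported diameters, assuming the default export location and the column names discussed above:

import csv

with open("exports/000/pupil_positions.csv", newline="") as f:
    for row in csv.DictReader(f):
        px = float(row["diameter"])          # 2d diameter in image pixels
        if row.get("diameter_3d"):           # present only for 3d-detected data
            mm = float(row["diameter_3d"])   # 3d diameter in mm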
Hey guys, it's been a while - thank you for the help previously with the DIY filter and other assistance.
After my experiments are over, I plan on building a second DIY build and setting up a start-to-finish video tutorial with updated reference materials and a timeline for acquiring those materials, for future builders.
The data collection and other biometric equipment I'm running for 60 trials at 45 mins apiece is going very smoothly. Thank you again
Also, I dunno if it's still in the github for manual eye calibration, but I might be able to add that for the DIY lens if that's needed
Hi guys, I tried to connect the eye tracker to a OnePlus 5T via USB-C and started the app. The app is not displaying any cameras at all, just the three built-in sensors: audio, imu, and key (however, I have no idea what key could be, because it's not responding to anything). Is there anything further I have to do before I can use the headset together with the app?
@user-c351d6 you should get a permissions prompt for the usb devices. Maybe restart the phone? Try a different usb-c cable?
@mpk Unfortunately, no prompt, and the usb diagnostics app says there is nothing connected (this time all cables are properly connected, though). I'll try to find a second usb-c to usb-c cable to check if the cable or the phone is the problem.
@user-c351d6 we use the same phone for development and it works for us. FYI...
@mpk 5 or 5T? We actually also bought a high quality usb cable.
We have 5 and 5t - both working
Please enable USB OTG
@wrp Thank you, it's working now. You may want to update your documentation.
Thanks for the feedback @user-c351d6 - the OnePlus is the only device that I know of that requires one to explicitly enable USB OTG
We will add a note to docs just to clarify this detail though so that others who try do not get stuck in the same place.
Hi people, I am new to pupil. I want to use the ETD to track the movement of my right eye; thus, I want to receive the vertical and horizontal angle of the eye. I am able to subscribe to the gaze topic in my own program. Which key-value pairs are the eye position's vertical and horizontal angles? msg['base_data'][0]['ellipse']['axes'][0/1]? Many thanks in advance and cheers, Sunacni
Hi there - can anyone share the exact settings/set-up (soft-/hardware) to get Pupil Mobile running on an LG Android Nexus 5x? That would be great - I cannot seem to get a video signal onto my phone. Many thanks
Hello, I have been using Pupil Player for the past month, and recently it stopped opening. When I try to open it I get the following. I tried deleting and installing it again, but am getting the same error. It is happening on one more computer, while on a third computer it is still working. I will be happy for any advice. Thank you!
@user-f68ceb Are you using the USBC-USBC cable that shipped with the Nexus 5x? If so, you will need to change to a different USBC-USBC cable - like this one made by CHOETECH - https://www.amazon.de/Certified-CHOETECH-marker-Delivery-MacBook/dp/B06XDB343M/ref=sr_1_17?ie=UTF8&qid=1524714193&sr=8-17&keywords=choetech+usbc+to+usbc
The issue here is likely cable related.
@user-8944cb is this issue being observed with the same dataset in Pupil Player? Can you provide us with some information about the dataset - size of the dataset, and the files contained within it.
Hi @wrp - I bought the cable that was recommended in your link. It still does not seem to work. I was wondering if you could send screenshots of ALL the settings in Pupil Capture?
Hi @user-f68ceb just to confirm - you are not seeing any video feeds of eye or world cameras on the Nexus 5x when running Pupil Mobile? There should be no further settings to enable on the Nexus 5x as it natively supports USB OTG - are there any prompts when you connect the device and/or options in the top settings/shade of Android when you connect the Pupil headset?
Hi @wrp - I got it working with the 1.6.14 mac update, Nexus 5x, suggested cables and default settings. :-)))
@user-f68ceb pleased to hear, and thanks for the update!
Hi @wrp, the issue is observed even before trying to load (drag) a recording of a dataset. When I try opening the program I receive this error, and Pupil Player won't open. Thanks!
This is not expected behavior. How do you start Player?
I extract the files after downloading, and then just double-click on the Pupil Player icon within the 'pupil_player_windows_x64_v1.6.11' folder.
@user-f1eba3 did you ever get anywhere with the Unreal implementation?