Hello! Is there a way to create interest areas so that my output will note when the pupil is fixated on a specific point? I'm guessing there's a way to draw those windows in the capture screen?
@user-4bf830 check out the surface tracker plugin. It does what you need. Check the documentation for information on the setup.
Thank you!
Hi, on at least 2 different headsets I've experienced one eye camera freezing or no longer being recognised in the middle of recordings, and could only solve this with a different USB-C cable. Has this ever occurred before? On another note, with 1.8 and 1.10 on my machine (Windows 10) I've experienced the disappearance of the sidebar icons. They still work and the label also shows up, but not the icon. Resizing restores them, but maybe you need to know about this. I haven't been able to replicate this on any other machine though.
Also, do you plan on maybe releasing a headset suitable for children and/or for different populations, with a flatter nose bridge for example? In the latter case we experienced the glasses falling very low on the person's face.
@user-bd800a thanks for the feedback. Notes: 1. Cameras not being recognized - Have you been able to reproduce this issue on another Win 10 machine? Are you using the USB-C-to-A cable that we supplied? 2. Resize issue on Windows - yes, we are aware of this and have an open issue on GitHub 3. Child-sized headset - Yes, we do have a child-sized headset. While not listed on the store, we can make this custom if you email sales@pupil-labs.com 4. Low nose bridge - Have you tried the clear silicone nose pads?
Hi, I would like to insert the time into the world.mp4 produced by Pupil Player. Is that possible? Thanks
Hi, we've been using Pupil Capture v0.9 with the 120Hz HTC Vive eyetracker successfully, until now, when we've tried to upgrade to a 200Hz eyetracker and Pupil Capture v1.12. Unfortunately, when we try to calibrate, no markers are shown. Now, when trying to use the 120Hz eyetracker, we get an error "unexpected keyword argument 'exposure mode'" or "unexpected keyword argument 'check stripes'". Could you please advise us on what we could look into?
Hello everybody. Does anybody know what to do if my recording session in Pupil Capture was interrupted by the start of sleep mode? After my laptop woke up, the recording continued. But the folder with recordings does not play in Pupil Player. Actually it does play, but only a grey screen. Has anyone faced such a problem?
I'm having trouble viewing exported heatmaps. When I open them up (Windows Photo Viewer) they appear briefly and then disappear. Has anyone else had this problem? I'm also unsure if this is a Pupil issue or a Windows issue, although I can view other .png images just fine.
@user-067553 You want to be able to write the timestamp directly into the pixels of the world video, correct? While this is not currently possible with Pupil Player, you could do this with ffmpeg after exporting world.mp4 like so:
ffmpeg -i "world.mp4" -vf \
"[in]drawtext=fontfile=UbuntuMono-R.ttf:text='Frame\: %{n}':fontsize=48:fontcolor='white':x=(w-tw)/2:y=(h-50)-(2*lh):box=1:boxcolor=0x00000099,drawtext=fontfile=UbuntuMono-R.ttf:text='Time\: %{pts\:hms}':fontsize=48:fontcolor='white':x=(w-tw)/2:y=(h)-(2*lh):box=1:boxcolor=0x00000099[out]" \
"world_with_time.mp4"
(Not the "prettiest" one-liner, I know, but it should get the job done relatively quickly. Please ensure that you have the font installed, or specify a different font that is installed on your system. In this example I am using UbuntuMono-R.ttf.)
@user-0cf021 please raise the issue regarding VR in the vr-ar channel.
@user-bc5d02 You should try installing an application that prevents your laptop from sleeping even if the lid is closed. What I think you saw with the "gray screen" is the result of Pupil Player trying to recover from a "split recording" behavior, where you had video recorded for part of the recording, but not all of it. In cases like these, Pupil Player will be able to play back data, but will fill in gaps with blank world video frames (gray screen).
@user-ffdd08 did you define a size for the surfaces before exporting? Have you tried opening in another photo viewer?
@wrp Should I move my question there even though we are not using the hmd-eyes? Our experiment is running on Unity 2017 so we cannot use it. Thank you
Hi again, is it possible to get access to the pre-compiled Pupil Capture v0.9.12? We desperately need it for running our experiment, and you have already removed it from your GitHub
@user-d16d74 you can always download earlier bundles if desired -- https://github.com/pupil-labs/pupil/releases?after=v1.4 - however, it is recommended to always use the latest release.
@wrp Thank you very much!
Hi, I have a question about timestamps. How to convert "Start Time (Synced)" to datetime? I have two recordings (from Capture version 1.11 and 1.12) and I tried to convert this timestamp on https://www.epochconverter.com/ and the results are on the picture
I need to convert it in order to make plot like below:
@user-a6cc45 do you really need the exact date? You can use pandas.to_datetime(timestamps, unit="s")
to convert timestamps to arbitrary datetime objects. What matters is the time difference between the events, isn't it?
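To illustrate the point about differences (a sketch with made-up values, since "Start Time (Synced)" runs on Pupil's internal clock rather than the Unix epoch, which is why an epoch converter gives strange dates): for a plot you usually only need elapsed time since the start of the recording.

```python
# Made-up pupil-time timestamps; real ones come from the recording's
# timestamp files and are NOT Unix epoch seconds.
timestamps = [3641.0, 3641.5, 3642.0, 3643.25]

# Elapsed seconds since recording start -- enough for plotting,
# since only the time differences between events matter.
elapsed = [t - timestamps[0] for t in timestamps]
print(elapsed)  # [0.0, 0.5, 1.0, 2.25]

# pandas.to_datetime(timestamps, unit="s") would instead give arbitrary
# datetime objects, as suggested above.
```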
Hello guys, we bought a Pupil Labs eye tracking device and tried to install it on our Win 10 machine. However, even when following the instructions about the drivers, none of the exe files will open and start installation. Please advise us on what to do to install the software on our laptop. https://docs.pupil-labs.com/#troubleshooting
Hi @user-e91538 If you haven't already, please try running pupil_capture.exe
as admin. When you say "none of the exe files wants to open and start installation" what do you mean? What do you see when you run pupil_capture.exe
with admin permissions?
Hello, I am attaching some pictures for you to check. I downloaded the latest version from GitHub for 64-bit Win, and in all the folders I am checking the PyInstaller folder as I don't see any exe files anywhere else. In the capture folder for Win 64-bit, I see 4 files: run, run_d, runw and runw_d (d stands for drivers I assume, and w for Windows?). I clicked all of them but no success (for the "run" file I even get a message that it is compatible with Win 8 only)
Here are the images attached - unfortunately I couldn't send them in rar as the file is too big, @wrp
hm, even now, although my images are 5MB, it does not allow me to send them (with a limit of 8MB?)
In general the error messages are: "Cannot open the ....exe";
[11524] Pyinstaller Bootloader 3x;
[11524] LOADER executable is:...exe
[11524] LOADER MEIPASS is NULL
@user-e91538 just for verification, you downloaded the 7zip file from the release page, correct?
yes, pupil_v1.13-29-g277ac8c_windows_x64.7z 476 MB
from here : https://github.com/pupil-labs/pupil/releases/tag/v1.13
ok, and the output above is shown in the console window which opens when double-clicking the Capture.exe file?
yes
Can you paste the complete log?
sec, as it is on another comp
just to show in which folder I am
I will try to provide pictures, sorry
error1
error2
error3
error4
error5
is that helping?
@papr ? Is that ok? Shall I send any other info?
logs checked - I don't even see a proper error log there
@user-e91538 That is the wrong exe! In the unpacked 7zip folder, there is a Pupil Capture.exe, which needs to be executed. It has a green icon
Do not execute anything from within the Pyinstaller folder
perfect. let me check as I don't see it
wonderful! I so much hoped I was in the wrong folder. That is why I wanted to send those screenshots
Thank you so much!!!!! @papr
Hello, I get the following error when building: uvc.c:15514:20: error: too many arguments to function 'uvc_stream_start'
I'm trying to install on Ubuntu 18.04
Hi, I've just set up Pupil Labs eye tracking. My first recording worked fine but now when I try and open the Pupil Player the Pupil Player window doesn't open properly. The command window reads: 'player - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend'. The other window shows in the task bar but doesn't display on the screen...
@user-1dd00a did you record with Capture or Pupil Mobile?
I recorded with Capture, viewed it once with Pupil Player successfully but since then Pupil Player hasn't displayed properly.
Ok nvm I got it working
@user-1dd00a the error that you see is not an actual error. What is the last output that you see? Is it the same message?
I just don't see the other window at all (the one that says 'Drop directory here...'). It is shown in my task bar but it doesn't appear...
This is the only window that is actually visible - the Pupil Player window is displayed when I hover over the task bar
Noob question. I've got Ubuntu on a laptop. I've downloaded pupil-1.13.zip from the GitHub site, but there do not seem to be any instructions on how to install it. What do I do next?
@user-1dd00a interesting. And you can't click onto the thumbnail?
@user-f247c1 unzipping it yields three deb files. Just double-click them to install
hmm cant see any deb files
@papr I can but nothing happens... I just tried re-installing the Player folder but same issue persists (restarting doesn't fix it either).
@user-1dd00a please try deleting the user settings files in the pupil_player_settings folder
@user-f247c1 what files do you see when unzipping the zip?
directories called deployment, pupil_external, pupil-src,
files called COPYING, COPYING.LESSER, README.md and update_license_header.py
@user-f247c1 ah, you downloaded the wrong zip. You probably downloaded the source zip. Try to download the other zip that has Linux in the name.
Where do I find that zip? all the links seem to bring back to the source zip?
@papr Brilliant - that worked. Thanks!!! (no idea why it happened in the first place but now I know how to fix it....)
@user-f247c1 https://github.com/pupil-labs/pupil/releases/download/v1.13/pupil_v1.13-29-g277ac8c_linux_x64.zip
@user-1dd00a I do not know why this happens either... Windows
ahh thank you, I was clicking on the 1.13 tag under latest release, which took me to the source files.
hi I setup the pupil capture but my eye looks inverted is this normal?
@user-c4e9fb yes, the camera is physically flipped. This has no impact on pupil detection.
Cool thanks
@papr You're right, instead of exact time on the X axis I've decided to put the amount of seconds/minutes since the beginning of the recording. However, I have another question: when I play my recording (recorded in Capture v1.11) in Player v1.13 I get the message: "player - [WARNING] surface_tracker.surface: You have loaded an old and deprecated surface definition! Please re-define this surface for increased mapping accuracy!" How can I re-define the surface? Will editing the old surface in Player help?
@user-a6cc45 You can add new surfaces in Player as you do it in Capture. Just editing the old ones should work as well.
Hi again, I'm still experiencing issues after switching from the 120Hz to the 200Hz eyetracker. Because our Unity project doesn't currently work with the 200Hz eyetracker, we want to go back to using the 120Hz eyetracker, but we get the error "unexpected keyword argument 'exposure mode'" or "unexpected keyword argument 'check stripes'". @fxlange from the hmd-eyes channel recommended to "restart with default settings after switching hardware". Could you please advise me how to restart Pupil Capture with default settings, when right now I cannot launch it at all?
This is what I get when I try to launch the exe. I'm using Pupil Capture v0.9.12 and due to our project, I cannot use a newer version
@user-0cf021 Please delete the user_setting_*
files in the pupil_capture_settings
folder in your users' home folder and start the application again.
Hello! I am trying to track users' gaze while they walk in a park/in a street so that I can later assess what they are looking at and when. Would this be possible using a Pupil Labs headset, and, if so, what calibration method would you recommend?
@papr Thank you so much for replying on Sunday! I've tried that and the problem persists, and it did not create new user setting files
Alternatively, I get this error message. I haven't figured out why I sometimes get "exposure mode" and sometimes "check stripes" errors. I launch the exe file the same way, and one time I get one error and the second time I get the other error.
@user-0cf021 I do not think that the correct files were deleted.
@user-0cf021 could you please also check the capture.log file next to the user_setting_files (or where they have been before), and look if it includes further details?
@papr You were right about the incorrect files! I was able to successfully start the old Pupil Capture now. It doesn't connect to the hardware though. I'm enclosing the capture log
@user-0cf021 Nice! The log shows that the cameras can be found but the process is not able to start the camera. Please disconnect the hardware, restart your computer and try again.
@papr Perfect! I will do that, I just need to wait until some process that's currently running finishes. Thank you so much for helping, I really appreciate it!
Hey @papr , I've created a DIY hardware setup comprised of two Logitech C615 webcams and one C930e webcam, all sampling at 30 Hz. When I load the data into Pupil Player and play back the recording with both eye overlay videos enabled, it's obvious that the cameras are all out of sync. Comparing the timestamps from the different cameras also confirms that they are several frames out of sync, so this doesn't seem to just be a visualization issue. I was wondering if you have any suggestions for rectifying this issue?
Side note - I'm running pupil capture 1.5, which I realize is a bit outdated now (I'm still using this because the more recent versions of pupil capture won't work with our "pupil middleman" server due to the changes in the annotation format). Also, version 1 of my DIY setup was comprised of 3 identical C615 webcams, and I don't recall seeing this issue (in the current version we upgraded one camera to a C930e for better video quality).
@user-e7102b if I remember correctly, we use software timestamps instead of hardware timestamps for the C930. This is due to the camera not producing sane hardware timestamps. The same might apply to the C615. You might need to change the source code to add the same exception for the C615
@papr thanks for the quick reply! That makes a lot of sense. I've tracked down mentions of the C930e webcam in lines 143 and 381 of the uvc_backend.py source code. Can you advise on the most straightforward way to make this exception for the C615?
@papr I've restarted the PC and the problem prevails. I've also tried reinstalling the driver following these steps but nothing changed: https://www.bullseye.uni-oldenburg.de/unity-and-pupil-capture-installation/
@user-e7102b mmh, actually, it looks like the exception is made the other way around: always use software timestamps unless it is a Pupil cam (l.180-188)
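A rough sketch of the kind of check being described (the function name and matching rule here are illustrative assumptions, not the actual uvc_backend.py code referenced above):

```python
# Illustrative only: Capture reportedly trusts hardware timestamps
# exclusively for Pupil's own cameras; every other UVC camera
# (C930e, C615, ...) falls back to software timestamps.
def use_hardware_timestamps(camera_name: str) -> bool:
    # Hypothetical matching rule on the UVC device name.
    return camera_name.startswith("Pupil Cam")

print(use_hardware_timestamps("Pupil Cam1 ID0"))         # True
print(use_hardware_timestamps("Logitech Webcam C930e"))  # False
```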
Could you share the timestamp files of the recording with me? I will have a look at them in the coming week.
@user-0cf021 Sorry, I am out of ideas here. Even though you cannot use the current version of Capture for your project, did you give it a try? Maybe its automatic driver installation does something different which makes it work?
@papr ok thanks. Here are the timestamp files:
@papr I've resolved the issue! I followed the steps mentioned here by willpatera https://github.com/pupil-labs/pupil/issues/1011 (especially showing the hidden devices) and uninstalled all of them. Then I restarted the PC and meticulously followed these steps: https://www.bullseye.uni-oldenburg.de/unity-and-pupil-capture-installation/ I realised I had missed the step where I was supposed to unselect ignoring the composite parents. Such a silly mistake! There, I found that some different drivers had been installed for these devices. After reinstalling those for libusbk 3.0.7.0, everything worked fine. Thank you so much for all your help! I absolutely couldn't have done it without you, and you've just saved a lot of people's research!
@papr Do you happen to know the size of the screws used on the ball joints for the eye cameras on the Pupil mobile headsets?
Hi everyone, we have your 200Hz camera in my lab and I am wondering if someone can send me the reference of the infra-red LED you use. Thanks!
@user-4bf830 @user-755e9e might know that
@user-909a69 @user-755e9e might be able to help here as well
Hi. What are some tips to increase the accuracy of the eye tracker? I have a single 200hz eye camera. The gaze data always seems to be a few inches off where I was actually looking. But I see much better results in online videos so I feel like I am missing something.
^ and the error is not consistent enough across the FOV to just adjust for drift.
Also, is there a way to change the default X and Y size for new surfaces? I will be analyzing a lot of videos that all have the same surface size and this would save a lot of time.
allo
Hi @user-909a69 please find attached the datasheet
of the infrared LEDs.
Hello @user-4bf830 , the screws used for the ball joint on the 200Hz eye cameras are M1.5/8mm.
hi @wrp thanks for the answer. I've tried the code you sent me for ffmpeg, but it gives me an error. Here is the screenshot; I've just changed the font to arial.ttf
Might be missing flags for output format. I will try this again later and get back to you with updated ffmpeg script
thanks @wrp, I've resolved it, now it works. Is it possible to take the time from the recording? Like if the video was taken at 1PM, is it possible to show the time at which I was recording with Pupil Capture?
@user-067553 I think you would have to write a custom player plugin to accomplish this.
Hi @papr Can we create it together? I think it could be a good functionality for all the users of pupil.
Might be a nice project, indeed.
You might be able to do this just with ffmpeg, but it would likely be a huge one liner
Can I ask for some experience with Pupil Mobile please? Can I use saved calibrations that have been obtained with a direct USB connection? Or would you recommend post-processing of calibrations?
@user-14d189 what is your network setup like? Do you have a dedicated router? I ask because high network latency can lead to dropped frames, which could in turn lead to fewer scene camera frames for calibration marker detection and might yield worse calibration results due to less data
I would recommend starting recording prior to calibration, that way you can decide if you want to do calibration post-hoc or use the real time calibration provided by pupil capture
Btw @user-14d189 I saw your other question in software-dev ; I will take a look at that later today
We use a dedicated router, a Netgear CG3100D, with nothing else on it. It works well at close range. We run a custom Pico Flexx app on top of Pupil Mobile; at close range it uses 40Mbps. It does not seem to have much lag, but point inclusion during calibration is about half.
thanks for looking into the auto exposure.
In this case I would recommend starting recording prior to calibration
the 40Mbps --- the point cloud data is a bit chunky.
I will have a closer look at the data and try again. Thanks!!
Hi, is there a way to detect fixations using your implementation (the Offline_Fixation_Detector in fixation_detector.py) by passing a csv file instead of the capture? I have a csv file containing info about some points (index of the frame in which the point was registered, norm_pos_x, norm_pos_y) and I'd like to detect fixations, if any, among those points. Is that possible, or do I have to modify the script in some way? Thanks in advance
Hi everyone, I'm interested to know the expected release date for Pupil Invisible.
@user-405421 You can preorder starting today. We plan to start shipping during Q4 of this year.
@papr thank you. We will be needing it for a research project about to launch in the Fall, so we will need a more specific timing to decide whether this is a good plan for us.
Hello, is there a way to get the location coordinates (in the video) of the circle targets used for offline calibration in the csv file that is exported by Pupil Player?
@user-405421 We will follow up with more information in 1-2 weeks with a more concrete shipping date estimate for Pupil Invisible.
@user-a7dea8 Currently, this information is not exposed as a csv export, but this can be changed for the next version.
It would be great if that feature could be added, currently I am independently finding the pixel coordinates of the targets after the fact.
@wrp great, thanks
@user-a7dea8 you could extract this information using Python already
Hey, @papr, I've been playing with the surface tracker plugin (with success) for a few days now and just yesterday and today it's been freezing my capture window the second I try to enable it. I'm trying to attach some screenshots of the operations in the event that those help. Any suggestions? I've tried some of the obvious, like restarting the window with the default settings, closing and reopening the player, etc.
So three reboots later it's working again, but I'm not sure what the issue was.
@user-764f72 please have a look at the traceback that was posted by @user-4bf830 Do you see a way to reproduce it?
Unfortunately (?) no, it appears to just be working now.
@user-4bf830 sorry, that question was meant for @user-764f72. I forgot to add the "."
@papr Hi! I'm sorry for bothering you once again. So, I've got msgpack for Java ready, and when I receive and decode the second frame (a msgpack-encoded dictionary), either it does not print or it prints an unclear string.
print("TOPIC : ");
byte[] bytes = subscriber.recv();
MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
ff(unpacker);
println();
print("PAYLOAD : ");
byte[] bytes2 = subscriber.recv();
MessageUnpacker unpacker2 = MessagePack.newDefaultUnpacker(bytes2);
ff(unpacker2);
println();
ff() is the function for decoding according to the dictionary that pupil labs provided
Is there any other dictionary available for decoding?
@user-72cb32 it should not be necessary to deserialize the first frame btw. It is a simple string.
Could you also please provide some example output?
It looks like this!
[email removed] [email removed] [email removed] &?u???`+ D?radius???d??YC??confidence??? [email removed][email removed]i?}?sphere??center??@B?ya??????T[email removed][email removed][email removed] X??model_id??model_birth_timestamp?A"a?Y??p?theta???n??? ?phi??^6??*??method?3d c++?id?norm_pos????G???????6Yf?x
@user-72cb32 this looks like the raw msgpack data instead of the decoded result. Could you share your FF function? E.g. Via gist.github.com
Oh, ok!
this is it
It looks like you are missing a case for dictionaries
Or mappings
Could you elaborate that a little more specifically?!
Your function is deserializing content based on its type, see the big switch statement. It includes cases for all types but one: dictionary/mapping. The second frame sent by Capture is always a dictionary, i.e. it is always ignored by your code.
Ok. Could it be either dictionary or mapping? Because as far as I know there is a type for dictionary and also a type for mapping
Probably something like this?
Mmh, I am not sure if there is a difference. Usually both terms are used interchangeably. In case one of both does not require the keys to be strings, take that one.
@user-72cb32 also, you might need to call the FF function recursively to unpack arrays and maps correctly
Yes! That was the missing piece in the FF function! I will try, and if there's another issue, I will find you again. Thank you so much!
Hi all, does one of the Pupil Labs devices give the 3D geometric parameters of the eye?
@user-4d0769 Capture implements the Swirski 3d model, which gives you the orientation and position of the eye model in relation to the eye camera. Is this what you meant?
Hey. How can we make the Pupil Labs software run on ARM64 Linux?
Hi, last week we did recordings for our study; today I'm trying to open them with Pupil Player to assemble the eye recordings with the world recording. 2 problems: 1- when we open the recording (1 hour) with Pupil Player it is grey (I think it's a Pupil Player error, because the world.mp4 file plays fine in VLC player). 2- the eye videos are desynchronized.
@user-067553 could you please share the *_timestamp.npy
files of the recording?
As well as the info.csv
file, please.
Hi @papr, here it is
@user-067553 Thank you very much!
Here it is a screenshot from pupil player @papr
@user-067553 Thank you. This is an issue I have been working on today. It will be fixed with the next release of Pupil Player. I will be able to send you updated timestamp files that remove the issue later today
Thank you @papr
hi, I am doing offline calibration and would like access to the x y values of the reference points. Is there a way that I can extract the data from the plcal file?
hi, I have a problem with the heatmap size. I am using the most recent version of Pupil Player, and in the offline surface tracker I have set the size of the heatmap (3840x2160). However, the exported heatmap has a size of 31x17. This is very strange, since the older version of Pupil Player could export the size that I set.
@user-dae891 the logic for the heatmap size changed in v1.13. Try changing the heatmap smoothness.
Sorry, I don't understand the heatmap smoothness setting. I have tried to adjust the value, and it gives me a different size of heatmap every time. Would you please explain what heatmap smoothness is, and how I can get a 4K resolution heatmap?
@papr
@papr Do you know how can get certain resolution heatmap?
@user-dae891 Apologies, I didn't have time to look that up today. I will come back to you tomorrow.
@papr Thank you very much! Additionally, I was wondering do you guys have any source about how to choose 'heatmap smoothness'? I am working towards publication, and I may need the supporting reference for the parameter selection.
@user-dae891 I talked to @marc (the author of the surface tracker changes). He will look into it.
@user-88b704 The *.plcal
files do not include any reference information. But the "reference_locations.msgpack" file in the "offline_data" does!
This is an example on how to read and extract the data: https://gist.github.com/papr/655cc5f005ca032b0eb602317e89f9ba
@user-dae891 The best smoothness value depends on your application. The generated heatmap is a 2D histogram over the surface. The smoothness value influences how many bins are used for the histogram, thus influencing the resolution of the generated image, which is exactly this histogram. In the current implementation the maximum number of bins you can get is 1000. Getting a 4K heatmap is currently not possible (although you can of course get a histogram with 1K resolution over a 4K display). With the previous version of the surface tracker people were often confused with setting the resolution of the heatmap, so we tried to simplify it this way. Since this is a recent change we are looking for feedback on this!
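A minimal sketch of the idea described above (simplified and assumed; the real surface tracker implementation differs): the heatmap is essentially a 2D histogram over normalized surface gaze coordinates, and the smoothness setting effectively controls the bin count.

```python
import numpy as np

# Fake normalized gaze positions on a surface (x, y in [0, 1)).
rng = np.random.default_rng(0)
gaze = rng.random((500, 2))

# The bin count plays the role of the "smoothness" setting: fewer bins
# give a smoother, lower-resolution heatmap (capped around 1000 bins
# in the current implementation, per the explanation above).
bins = 32
hist, _, _ = np.histogram2d(gaze[:, 0], gaze[:, 1],
                            bins=bins, range=[[0, 1], [0, 1]])
print(hist.shape)  # (32, 32)
```

The rendered heatmap image is then just this histogram mapped through a color scale.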
Hi! I have a question about the pldata and .csv files. For example, after using Pupil Capture, we get a file called fixations.pldata, and after using Pupil Player, we get a file called fixations.csv. They should contain the same information, since Player just synchronizes the data with the same video. But after I load the pldata file, I found the data is totally different from the .csv data. Is that because of synchronization? And if I want to get the gaze position on the surface, does that mean I don't have to care about the file from Pupil Capture?
@marc Thanks for answering! However, I am still confused about how to extract the 1K resolution heatmap image from the histogram bins. The histogram itself is a three-channel image. If I want to extract a heatmap with a specific width and height, what should I do with the histogram bins?
@marc Ok, now I understand
@marc Thank you very much! It all makes sense now! So the width and height values in the surface tracker control the width-to-height ratio of the heatmap, and I can upscale the heatmap later to any resolution. Is this correct?
@user-dae891 That is correct!
(any resolution up to 1K width)
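One assumed workflow for "upscaling later to any resolution" (illustrative only, not a Player feature): nearest-neighbour upscaling of the exported histogram image, e.g. with numpy.

```python
import numpy as np

# Tiny stand-in heatmap; a real one would be loaded from the export.
heat = np.arange(6, dtype=float).reshape(2, 3)

# Nearest-neighbour upscaling: repeat each bin as a scale x scale block.
scale = 4
big = np.kron(heat, np.ones((scale, scale)))
print(big.shape)  # (8, 12)
```

An image library's resize (e.g. with nearest or bilinear interpolation) would do the same job on the exported PNG.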
@marc I am slightly confused about the width and height. I have set the width to 1.777 times the height. However, in the resulting heatmap the height is 1.777 times the width
@marc width should be larger than height
@user-dae891 this might be a simple bug
@papr So right now, I can just switch the w and h?
I think so, yes
Could you please @ me if it's fixed?
Of course
Thanks
@papr Additionally, I feel like the surface tracker in the new version of Player is more unstable compared with the older version
@papr Here are some screenshots of the same scene
This is due to how distortion is being handled but @marc might be able to explain that correctly.
@papr @marc Would you please tell me how to fix it, or make it more stable?
Hi, we just downloaded the new version of pupil player. When we tried to open an already offline calibrated recording, the new version does not automatically pull out the offline calibration timelines. Does that happen to anyone else?
@user-dae891 if I remember correctly, the surface is displayed in the undistorted camera space, instead of the distorted image space. So the visualization might be a bit misleading. @marc please correct me if I am wrong.
No, that is not quite correct. The visualization should actually not be different compared to the previous version.
The handling of camera distortion only changed how gaze is mapped onto the surfaces. The visualization we see should be computed the same way.
OK, then let's investigate this tomorrow. @user-dae891 could you please share the recording with data@pupil-labs.com such that we can make sure that any solution we come up with also works for your recordings?
@papr Ok, thank you very much! I will send you right now
https://github.com/pupil-labs/pupil/issues/1542 for reference
@papr Thanks, and I am sure I have set the same min marker size. I have sent you the file. Thanks for your help!
@papr @marc Hi, for the width and height bug I mentioned above, I just realized it cannot be solved by switching the w and h. When I swap them, it actually gives a different size of heatmap. Would you please take a look? Right now, I am unable to start my work because of this problem.
@user-dae891 @papr @marc I am also having this issue with the surface tracking; I will try to update with more info later today.
We have recordings and a tracked screen with a 1920x1080 width/height ratio; in Player the heatmap shows correctly overlaid on the world camera image, but the exported heatmap file shows the inverted height-to-width ratio, similar to Doris above.
Is there a public demo recording directory available that can be used to test the software capabilities, or is anyone willing to share a recording with quite some pupil dynamics (larger, smaller)? I want to make sure this phenomenal platform works for my specific application before buying the hardware. Thanks in advance!
@user-dae891 @user-8a8051 Thanks for bringing this up, we'll look into it today!
@user-63c8c0 there are demo recordings available on the website: https://pupil-labs.com/products/core/tech-specs
Scroll down to Sample Recording!
@user-88b704 Pupil Player resets to default settings when opening a new version for the first time. The default is to load pre-recorded gaze from the recording. Just switch back to Offline Calibration and your previously calculated offline gaze will come back as usual.
@marc thanks for your help. However, I am interested in loading not only a processed world video, but an entire project file/directory incl. raw pupil recordings, settings, and world video. Something I could load into the Pupil Player software and try and see if the recording of the pupil size matches my expectations.
Hey there, who should I contact here for possibly hardware problems? The front camera stopped working while both eye cameras are still working.
The problem remains despite switching between Windows/Mac/Ubuntu, and on a fresh machine too. Windows' Device Manager does not detect the front camera (Cam ID2) at all; it only lists the 2 eye cameras (Cam ID0 and Cam ID1)
@user-3341e5 info@pupil-labs.com
Thank you @papr
@user-63c8c0 I am uploading the recordings for our Offline Calibration tutorial: https://www.youtube.com/watch?v=_Jnxi1OMMTc&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh
I will let you know when they are uploaded.
@papr That is so helpful. Thank you very much.
Hi! Can anyone tell me where can I find the (Re)calculate gaze distributions button after specifying surface sizes in pupil player?
@user-63c8c0 https://drive.google.com/drive/folders/1wVNoJTskCquvFNeltmvbvoJOb5mhODmM?usp=sharing
@user-9c3078 The re-calculation now happens automatically and does not have to be triggered, thus the button is gone. Whenever you change a parameter influencing the gaze distribution, it will recalculate. You should see this in effect when you have the heatmap visualization enabled: if you make a change to a parameter, the heatmap turns gray while it is recomputed with the new parameters.
Thank you so much!!!
@user-dae891 @user-8a8051 Two bugs are fixed with the following PR: https://github.com/pupil-labs/pupil/pull/1555
1) A bug that led to worse performance in the marker detection in the offline surface tracker. @user-dae891 this is why the tracking accuracy was worse for your screen surface in the new version. Either way, I would recommend adding a fourth marker to your surface to further increase the robustness of the tracking.
2) Horizontal and vertical resolution of the heatmap had been switched at one point, which led to the wrong aspect ratio in the exported heatmap.
Both issues will be fixed in the next release. If you run from source, they will be fixed as soon as the above PR is merged. Thanks for bringing these up! Let me know if you have further questions!
Hi! I am trying to get fixation information on a surface from fixations.csv. Besides calculating gaze duration from the gaze point information on the surface to decide whether it is a fixation or not, I want to use the img_to_surf_trans matrix, but I cannot find the 3d information of the surface even though I am using the 3D model. Can anyone tell me where I can find it, or how I can get the fixations on the surface?
@user-9c3078 Previously, Pupil Player generated a CSV file for every surface with fixations mapped onto the surface. This functionality was unfortunately lost when we introduced the NSLR fixation detector. We are restoring this functionality in the next release.
In the meantime you might be able to use the following work around: The fixation datums in fixations.csv
all have a start_timestamp
. You could look up this timestamp in each of your gaze_positions_on_surface_<SurfaceName>.csv
files and see if the corresponding gaze datum is inside of the surface. If it is, the fixation is too.
Oh, I thought the start_timestamp was a world timestamp. I was totally wrong. Thank you so much!!!
@marc Thank you very much!
@marc Sorry, I am new to github. I don't know how to download the software that you have modified.
@user-dae891 In this case we recommend to use v1.12 until the fix has been released. It is scheduled for end of next week.
@papr Thanks for letting me know. However, the heatmap from v1.12 is very different from v1.13, right? The smoothness is different, which affects the size of the salience area. So I think I cannot mix results from the new version and the old version, and the new one provides better results.
@marc thanks very much, looking forward to the new release
greetings, I'm having a hard time with the eye tracking in an environment with only artificial light, how could I improve it?
@user-eaab8a Could you send a screenshot of the eye window, with the Algorithm view enabled?
Sure, here it is
These look pretty good to me
My bad, i've sent the natural light screenshot, I'll be right back with the non-natural light
Here it is, this one is captured with non-natural light
I would change two things: Adjust eye0 such that the eye ball is positioned in the center of the frame. Additionally, I would decrease the pupil min value in eye0
Else the pupil detection looks very good.
Thanks papr, are the adjustments required in both situations, or just in the non-natural one?
Both situations.
Thanks, I'll try that
The light might be a problem anyway?
No, I do not see a problem in either of the two screenshots.
What are your difficulties specifically?
I mean in general, not those 2 cases specifically
My difficulty is in tracking with non-natural light, in which the tracking seems to be lost and way too offset from the actual gaze point
Can you share an example recording with [email removed]
To be more clear, I'm trying to track the look on a tablet, looking for button press and decision over the interface
Ideally, you include the calibration and test procedure in the recording
Sure, I'll be recording some today, I'll be glad to share an example
Hi, I just started working with the Pupil glasses and I'm running a few tests before I get started on my project. I was wondering where I can get the data for my pupil dilation. I already have my recording finished and exported, but I'm not sure where to find the pupil dilation data.
@user-4ef728 There should be a file called "pupil_positions.csv". Check out the diameter
and diameter_3d
columns
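In case it helps, here is a minimal sketch of pulling those two columns out with Python's csv module. The column names `diameter` and `diameter_3d` are from the export mentioned above; verify them against your own file's header, since export formats have changed between versions.

```python
import csv
import io

def read_pupil_diameters(csv_text):
    """Parse the diameter (pixels) and diameter_3d (mm) columns from the
    text of pupil_positions.csv. Returns a list of (diameter, diameter_3d)
    tuples; diameter_3d is None when the 3d eye model had no estimate."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        d2 = float(row["diameter"])
        # diameter_3d can be empty when the 3d eye model was not fit
        d3 = float(row["diameter_3d"]) if row.get("diameter_3d") else None
        out.append((d2, d3))
    return out
```

You would typically call it with `read_pupil_diameters(open("pupil_positions.csv").read())` from the export folder.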
Found it! Thank you very much!
Hi,
could you explain to me what is the difference between 2D and 3D detection & mapping mode
?
I didn't find much information about it in docs :/
@user-a6cc45 Please see these links for reference:
Pupil Detection - 2D Pupil Detection - Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction [1] - 3D Pupil Detection - A fully-automatic, temporal approach to single camera, glint-free 3D eyemodel fitting [2]
Gaze Mapping - 2D gaze mapping - This method fits a two-dimensional polynomial function during calibration to map the pupil center to a normalised pixel coordinate system - 3D gaze mapping - This method uses bundle adjustment [3] to estimate the physical relationship between eye and world camera during calibration.
[1] https://arxiv.org/pdf/1405.0006.pdf [2] https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf [3] http://ceres-solver.org/nnls_tutorial.html?highlight=bundle#bundle-adjustment
@papr Hi! It seems like both Pupil Capture and Player crash whenever I try to use the Surface Tracker (both online and offline). The error log is the following:
world - [INFO] recorder: Started Recording.
eye1 - [INFO] launchables.eye: Will save eye video to: C:\Users\SY\recordings\2019_07_23\001/
eye0 - [INFO] launchables.eye: Will save eye video to: C:\Users\SY\recordings\2019_07_23\001/
world - [INFO] camera_models: Calibration for camera world at resolution (1280, 720) saved to C:\Users\SY\recordings\2019_07_23\001/world.intrinsics
world - [INFO] recorder: No surface_definitions data found. You may want this if you do marker tracking.
world - [INFO] recorder: Saved Recording.
eye0 - [INFO] launchables.eye: Done recording.
eye1 - [INFO] launchables.eye: Done recording.
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables\world.py", line 655, in world
  File "OpenGL\error.py", line 232, in glCheckError
OpenGL.error.GLError: GLError(
  err = 1281,
  description = b'invalid value',
  baseOperation = glViewport,
  cArguments = (0, 0, 1330, 720)
)
world - [INFO] launchables.world: Process shutting down.
eye0 - [INFO] launchables.eye: Process shutting down.
eye1 - [INFO] launchables.eye: Process shutting down.
I've checked several posts from github, yet I could not resolve the matter. Please help!
And one more question! If I record the world view as undistorted in Pupil Capture, when I play the same file in Pupil Player the video is still distorted. How can I keep the video undistorted and analyze it in Pupil Player?
@user-72cb32 what is the resolution of your computer screen?
@user-72cb32 Also, Capture does not save the undistorted video. But you can export the video in Player using the iMotions Exporter in order to get the undistorted video.
Hi all! Is the source code of the Mobile App available?
@user-2a994d No, the Pupil Mobile source code is not open source.
But the app has a network interface that can be accessed by other apps: https://github.com/pupil-labs/pyndsi/blob/master/ndsi-commspec.md
@papr Thank you! Is the source code planned to be released in the future? It looks like there is some space for improvements there...
Currently, it is not planned to make the Android code open source.
@papr Oh that's a pity... my understanding was that all the code is licensed under the LGPL v3.0 (according to the "License" section of the Pupil Mobile App)
@user-2a994d It looks like the app ships the license file for Pupil (https://github.com/pupil-labs/pupil) . This is a bug.
@papr I see, thank you anyway! Please consider licensing it as open source someday...
@user-2a994d May I ask what changes you have in mind?
@papr I've just installed it... however it looks like the performance is not so good on my device (it freezes from time to time). Looking at the issues on github I see there are other users with open issues, so I was wondering if there was some way to help improving it.
@user-64de47 I'm considering using Pupil Core for a research project, I would be more comfortable knowing that possible issues could be fixed by directly modifying the code. I think that its open source approach is one of the reasons making Pupil an awesome project!
Greetings, I'm having a hard time getting the fixation_on_surfaces.csv - do I need to enable any plugin to get it?
@user-eaab8a The functionality has been disabled in v1.13 by accident. Please use v1.12 until we release the fix for this problem in our next release.
Thanks a lot, I'll downgrade until that
I get this error though,even with the 1.12 version
In this specific case it looks like the surface was not detected well. Please try reducing the min marker perimeter
I've tried with no luck, it seems to see only the upper markers
@user-eaab8a Could you share the recording with data@pupil-labs.com so I can have a look at it?
Hi all - I have a new setup and can't get the video working. Can anyone point me in the right direction?
@user-bb9207 That depends on which operating system you are using and what the exact problem is that you are facing
Windows 10
Which application are you trying to run? What is different from what you expect?
Trying to run Pupil Capture - have calibrated pupils but can't get the video working
"world - [ERROR] calibration_routines.screen_marker_calibration: Calibration requiers world capture video input."
So you can see the eye video working but not the world video?
yes
aka. the world window shows a gray background while the eye windows don't? What hardware do you use? Do you use the headset or one of the VR add-ons?
Using Pupil Core - USB mount with Binocular
Yes, the world window shows a gray background - the eye windows seem to work fine
@user-bb9207 When you go to the uvc manager menu on the right, in the drop down menu, how many cameras are being listed? Is one being listed as unknown?
I have "Local USB" - there are two
If you refer to USB mount, do you mean the headset that exposes a USB-C connector for the world cam?
I think you have fixed it already
Not selected the right option
thank you so much
@papr Hi! It is 2195x1235! But when I used the iMotions Exporter, it did not export the gaze and surface information... it only exported the undistorted images alone, without any pupil data shown on the exported video!
@user-72cb32 That is correct. You currently only have one of two options: 1. Originally recorded world video (includes distortion) + visualizations 2. Exported iMotions video (without distortion) without visualizations
Usually, you only need undistorted video if you want to do object detection on the video. But in these cases the visualizations would interfere with the detection algorithms. i.e. we did not come across a use case yet where undistorted video + visualizations was necessary. If you have a specific use case in mind please let us know.
I'm trying (on Windows 10) to get PupilDrvInst.exe working, i.e., to install the drivers. When running it (as admin or otherwise) all that happens is a DOS window pops up and shuts down a few milliseconds later. It's so fast I can't even tell if text is in the window, much less try to read it.
@papr Thank you!
I don't know why it's happening on Pupil Capture. Can someone help me?
@user-09f6c7 We do not know what is causing this issue either. The next release will contain a more detailed error message. Until then, we found that deleting the user_settings_*
files in the pupil_player_settings
folder helps.
@papr Ok I'll try
I have a RealSense D415 and I can't get it to work with Pupil Mobile - do you know which cameras the mobile app supports? I am running the app on a OnePlus 6
@user-bb9207 Pupil Mobile does not support any of the 3D cameras since they require custom drivers. You will have to use Pupil Capture.
Does it support any video at all?
Yes, it supports the 200Hz eye cameras and the 120Hz World camera, as well as the Logitech C930e
Other USB cameras might work as well, but are not officially supported
the Logitech seems massive
Do you sell the world camera?
Yes we do, please contact info@pupil-labs.com for details.
hello, i have questions about billing & product!
Hi @user-762939 Please contact info@pupil-labs.com in this case.
actually i bought 2 pupil trackers by mistake
Okay
thnx!
Hello, I am having an issue opening Pupil Player v1.13 on my macOS El Capitan (10.11.6). The application installs fine, but once installed it will not open. I have read the threads on GitHub about deleting the pupil_player_settings folder; however, this folder is not installed with the Player. I have also tried uninstalling and reinstalling the application a few times, in addition to restarting the computer after a new installation. Is there another way to solve the issue?
Hey @papr , our lab are purchasing a new laptop exclusively for use with our pupil headsets. I have a couple of questions. 1) Will pupil capture/player work OK on Ubuntu 18.04? 2) Is it worth purchasing a system with a high spec graphics card (e.g. GTX 1060 or above), or will something like the GTX 1650 be good enough? My priorities for the system are lots of RAM (32 GB) and hard drive space (4 TB). I don't think the graphics card is too important, but I figured it was worth checking before we buy the system. Thanks!
@user-e7102b Ubuntu 18.04 should work very well. A high spec graphics card is only necessary if you want to do fingertip detection. I think your priorities regarding ram and disk space are very correct.
A good CPU is worth it, too.
@user-bda130 what CPU do you have exactly?
@papr great. Thanks!
@papr my processor is 2.66 GHz Intel Core i5
@user-bda130 ok, thanks. Could you share the capture.log
in the pupil_capture_settings
folder after attempting to start Capture?
@papr Capture is having the same issue
Hello. I ordered two Pupil Core Binocular headsets and one camera is missing from each package. My university's purchasing center is supposed to get in touch with you tomorrow, but I would like to know what happened with the order for such an error to occur?
@user-f0d261 Please contact info@pupil-labs.com in this case.
@papr ok. Thank you.
Hey everyone - any word on when Pupil Invisible is supposed to launch? It is a platform myself and colleagues are considering using for some upcoming projects. Earlier searches yield a possible Q4 release but I'm wondering if anything more specific is known at this time. Thanks!
@papr We're using Pupil Core with a single eye camera for research purposes. We tried all adjustments, but the eyes are not captured correctly. While recording, we need to adjust the glasses by hand and then hold them in place to capture. It's really hard to record this way. The extenders are also not fulfilling our needs. We need to adjust the eye camera top to bottom instead of left to right or front to back. Could anyone please help us?
@papr Where is the 'pupil_player_settings' folder on Windows 10? I cannot find it.
Hello everyone, I was wondering if there is a way to change the default settings of Pupil Capture or, better, to save and load different setting configurations ?
@user-124ee6 currently, this is not possible.
Hi - my world camera is not previewing in Pupil Mobile. Does anyone have a troubleshooting guide?
@user-bb9207 Are you running Android 9?
yes
Solved - applied for the beta and it's working
Recording audio is not working on Mac. I installed libav and it worked, then stopped again.
Hi! I've already tried to get fixations from gaze points on the surface, but I found that many fixations cannot be found in the gaze points file using start_timestamp. Can you tell me why this happens? If this cannot be fixed, does that mean I lose most of the fixation information? @marc Also, I have a question about the heatmap. There are two heatmap modes; I tried both and found that they produce exactly the same heatmap. Is this a bug, or am I doing something wrong? The final question is about Camera Intrinsics Estimation. If I have this plugin, does that mean the data I get, like the gaze point positions, has already been processed and mapped to the flat surface? Sorry for so many questions!
@user-42b09b Are the glasses moving down the subject's nose, or why do you have to readjust the headset constantly? Also, have you tried the silicone nose pads? These might help put the headset in the right position.
@papr Thanks for your reply. The headset is in the right position; it sits perfectly on top of the nose. The problem is with the eye camera: the eye is not captured by the camera correctly. If we adjust the glasses manually, e.g. by moving them towards the top, then it captures the eyes. This problem makes it really hard to capture the eyes. Is there a way to adjust the eye camera alone towards the top or bottom?
@user-42b09b you can rotate them by a few degrees at the ball joint
Can you share a picture of the eye window please? Just to get a feeling how much of the eye is not being recorded
@papr Sure, I will send it to you tomorrow, because it's in the lab.
Hi! Can anyone tell me what the world_index in gaze_positions_on_surface_<>.csv is? I thought it might be the index of the world frame, but my results start at around 300.
Hello! I am new to Pupil Labs and am trying to understand some basics. I have multiple recordings I want to analyze; for that I am using version 1.10.20, because the defined surfaces get lost in the newer versions. When I export the data to look at the fixations_on_surface files, they appear to be different every time I do (for the same data with the same settings). Why is that, and how can I get consistent results?
@user-9c3078 Apologies for the delayed response. Let's see if I can answer your questions.
@user-9c3078 A) fixations on surfaces - Regarding this one, I am not sure if I understand. You extract the start_timestamp of each fixation and try to find the corresponding gaze timestamp in gaze_on_surface_X.csv
? Please be aware that gaze_on_surface_X.csv
only includes gaze during periods in which the surface was detected. If the surface was not detected during the fixation's start_timestamp, you won't be able to find it in the mentioned csv file. In this case, I would recommend to look for gaze data with timestamps between the fixations start_timestamp and end_timestamp.
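To make that interval-based lookup concrete, here is a small sketch. It assumes a `gaze_timestamp` column in the surface CSV (check your export's header) and keeps the timestamps sorted so the lookup is a fast binary search.

```python
import csv
import io
from bisect import bisect_left, bisect_right

def surface_gaze_timestamps(csv_text):
    """Collect the (sorted) gaze timestamps that were mapped onto a surface,
    from the text of gaze_positions_on_surface_<SurfaceName>.csv."""
    return sorted(float(r["gaze_timestamp"]) for r in csv.DictReader(io.StringIO(csv_text)))

def fixation_on_surface(start_ts, end_ts, sorted_surface_ts):
    """A fixation counts as 'on surface' if any surface-mapped gaze sample
    falls within [start_ts, end_ts] (the fixation's start/end timestamps)."""
    lo = bisect_left(sorted_surface_ts, start_ts)
    hi = bisect_right(sorted_surface_ts, end_ts)
    return hi > lo
```

If the surface was never detected during the fixation, no surface gaze sample falls in the interval and the function returns False, which matches the caveat above.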
@user-9c3078 B) Heatmap modes - As you said, there are two modes: Gaze distribution per surface (1), and gaze distribution across multiple surfaces (2). While (1) calculates its distribution (and therefore heatmap colors) independently for each surface, (2) does so by aggregating gaze over all surfaces. If you only have one surface defined, both modes are equivalent.
@user-9c3078 C) Camera intrinsics estimation - Capture uses a set of prerecorded intrinsics by default. Just opening the camera intrinsics estimation plugin does not change that. You will have to run the procedure to update to use custom intrinsics. Capture uses the intrinsics to calculate undistorted 3d gaze. For performance reasons, the intrinsics are not used to undistort the video itself. Pre v1.13, gaze was not correctly undistorted for surfaces. This changed with v1.13.
@user-9c3078 D) And in regards to your question in software-dev please have a look at the NSLR paper and how it defines pso (post-saccadic oscillations) https://www.nature.com/articles/s41598-017-17983-x
If you have further questions or comments, please use the assigned letters for reference, else it might be difficult to keep the topics separate.
@user-9c3078 E) Regarding the world_index
column in gaze_positions_on_surface_X
: Yes, it should be the world frame index. As I said in A), these files only include data of when the surface was actually detected. If it starts at frame 300, then it is likely, that the surface was detected for the first time in frame 300.
Thank you so much for the reply. @papr About the fixation, I found my error in the timestamp. So I checked my info.csv and export_info.csv. My relative time is 0-00:58.408, the absolute time is 25353.502126-25411.910753, the synced time is 25353.55836332, and the duration is 00:00:59. I think there is a discrepancy between those two values? I am trying to map the fixations onto the video myself, so the timestamps really matter.
@user-9c3078 B) Yes, about heatmaps, I defined four different surfaces and in both modes I got the same result.
@user-9c3078 B) did you assign sizes for your surfaces?
@papr B) Only surface 1 has a size, of 96x128; the others are 1x1
@user-9c3078 Regarding F) timestamps - Please use world_timestamps.csv to get the timestamps for each world frame. The start time (synced) in info.csv is only important if you want to calculate the offset to start time (system), which is not needed in your case.
@user-9c3078 B) The difference would only be visible in assigned colors, not in structure of the heatmap. The differences might be very subtle depending of the gaze distribution across surfaces.
@papr F) Sorry, I didn't understand well. So actually, all the timestamps in the export files from Pupil Player are unchanged from the capture; they are still the same as in their xx_timestamps.csv files. If I want to map everything onto the video myself, I need to use the timestamps in each file to match, and calculate the relative time using world_timestamp minus the start absolute time. In what case would I need to care about the synced time?
@papr B) But the results I got are totally the same in color. Maybe because of the resolution? I found blocks in my heatmaps. I'll try again to confirm this.
@user-9c3078 F) I would highly recommend to get rid of the idea of relative timestamps, since the sensors do not start recording at the same time. You will have to use the absolute timestamps of each sensor (world, fixations, etc) in order to map everything correctly
@user-9c3078 You do not have to care about synced time, unless you have other sensors that use the unix epoch for recording absolute timestamps
@papr But if I need to map them onto the video myself, I have to use the relative time. Is there any other way to do that? I can use the world_timestamps to calculate the relative time in the video, and to get the fixations or gaze points I just need another mapping between the gaze_timestamp and the world_timestamp. Also, I noticed that there are indices in the files. Do you think using the index would be more accurate, if I can find the start frame I want?
@user-9c3078 First you do an n-to-1 mapping between gaze data and world timestamps [1]: indices = find_closest(world_timestamps, gaze_timestamps)
. This gives you a mapping between gaze index and world frame index
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L136-L150
Then you can step through the world video frame by frame and find the appropriate data
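For illustration, here is a minimal stand-in for that n-to-1 mapping. This is a simplified sketch of the idea behind find_closest, not the linked Pupil implementation itself; it assumes both timestamp lists are sorted ascending.

```python
from bisect import bisect_left

def find_closest(targets, sources):
    """For each value in sources, return the index of the closest value in
    targets. Both lists must be sorted ascending. Used here to map each gaze
    timestamp to its nearest world frame index."""
    indices = []
    for s in sources:
        i = bisect_left(targets, s)
        if i == 0:
            indices.append(0)                 # before the first world frame
        elif i == len(targets):
            indices.append(len(targets) - 1)  # after the last world frame
        else:
            # pick whichever neighbor is nearer in time
            indices.append(i if targets[i] - s < s - targets[i - 1] else i - 1)
    return indices
```

With `world_timestamps` loaded from world_timestamps.csv and `gaze_timestamps` from the gaze export, `find_closest(world_timestamps, gaze_timestamps)` gives you, for every gaze datum, the world frame it belongs to.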
@papr Thank you soooooo much for answering so many questions!!! That's so nice of you! I will try the way you mentioned!
@user-9c3078 Sure thing.
If that is not possible, as I think it is, how could I get these "fixations_on_surface" files with the new version of Pupil Player? They do not seem to appear in the folder as usual. Thanks in advance
@user-0fde83 Hey, I think you also wrote an email to info@pupil-labs.com in this regard, correct? We are still investigating the issue of the non-reproducibility. Regarding fixations_on_surface
: v1.13 does not include them due to a mistake. The upcoming version v1.14 will reintroduce this feature.
@papr Yes , I sent an e-mail for the same question today. Thank you for the quick answer. Is there already a date planned for the release of v1.14?
@user-0fde83 I will try to put out the release this week.
@papr That sounds great, i'm looking forward to it. But if i understood correctly, the reproducibility-problem remains also in the new version. Is there still a possibility to use the data collected or do you have have any suggestions how we could approach a quantification of visual attention in defined areas of interest with other parts of the output?
Hi all, just found Pupil today and it is awesome! We're currently doing research with stand-alone Tobii devices to track interaction with ads on mobile devices, and the quality is pretty awful. Has anyone seen real-life examples of Pupil Core working with mobile devices? Some video of the final result would be awesome! Thanks, and hope everyone has a great day )
Hi @user-1b0db9 if you haven't already you might want to see this post from community member: https://pupil-labs.com/news/pupil-for-usability-research -- additionally you might want to see work that Eye Square did for Facebook using Pupil Core on multiple screens (both tv screen/monitor and mobile device): https://www.facebook.com/business/news/insights/measuring-multi-screening-around-the-world (this link is unfortunately broken but may be back online later). Hope this is helpful
@user-1ece1e please migrate your question to the vr-ar channel.
@wrp thanks for the links! Hadn't seen them yet, and all the videos from https://www.youtube.com/channel/UCccG1cRW5dUhDUi_yhogTkg look promising )
Hello, can anyone help me with Mac audio recording? When I installed libav and rebooted the laptop, it worked fine the first time, but it asks me to install libav again if I reopen Pupil Capture.
Hello everyone! I'd like to find out if I can obtain the data measured by Pupil Capture during its calibration sequence. We need to do some calibration for our own purposes, so getting this data would allow us to kill two birds with one stone.
Has anyone used the Pupil Labs set with PsychoPy? Is there a proper way to send digital time-stamps to Pupil Capture via parallel port, or something better? I want to know when each image is shown on the screen for: - a better correlation between pupil diameter and images - help with trimming the section I need. Currently we're using two different computers; should we change our setup?
thank you
@user-31bce0 the data is stored in a notification with the subject calibration.data
and is stored in the notify.pldata
file. You can use this function to read the file: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L139-L155
@user-96755f You can use annotations to send information about your experiment to Capture. https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py They can be sent remotely. But make sure the script uses the current Pupil time for its timestamps.
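For reference, here is a hedged sketch of the payload that helper script sends. The field names (topic "annotation", label, timestamp, duration) follow the linked remote_annotations.py helper; double-check them against the helper version that matches your Capture release.

```python
def make_annotation(label, timestamp, duration=0.0, **custom_fields):
    """Build an annotation payload in the shape the remote_annotations helper
    sends over Pupil Remote. `timestamp` must be the current Pupil time
    (queried from Pupil Remote), not the local wall clock. Extra keyword
    arguments become custom fields on the annotation (e.g. trial number)."""
    payload = {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,
        "duration": duration,
    }
    payload.update(custom_fields)
    return payload
```

In a PsychoPy script you would build one of these at each image onset and send it via the msgpack/zmq channel shown in the helper, so image onsets line up with the pupil diameter stream on Pupil's clock.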
So, Pupil Remote should work on the computer with Psychopy right?
@user-96755f Pupil Remote is a plugin within Capture. But yes, you should be able to connect to it via python code running in psychopy
I'll give it a try! thank you
Great, thanks!
Another question: after you're done with the calibration, the green contour that appears on screen - do its vertices represent the 4 targets or the 4 corners of the screen (which are a little further away than the targets)?
Hi @papr. After I updated to 1.13, the same problem still appears for the duration and timestamp. Is there an error in the settings? The model I am using is Pupil w120 e200b. I have 3 eye trackers of the same model, and they give me the same results. Please help! I want to make gaze plots and AOIs. Thanks.
@user-31bce0 the green contour is the boundary of the area used for calibration. Example: if you used the screen based calibration method and if all reference markers and associated pupil data was robustly detected, then you would have a green contour around the extreme most reference markers (which should closely correlate to the screen boundaries provided you are in the same physical position when the contour is shown)
@user-b13152 could you please make a new export and open the csv file in a text editor? Maybe there is something wrong how the csv is being interpreted.
I have done it but there is no change. I have also taken new data with pupils 1.13 but the results are the same
@user-b13152 Not sure if I asked you before, but could you share the exported file with me?
Thanks @papr, the problem is solved. When I open the CSV file in Notepad, the numbers change, but how do I convert the file from Notepad to CSV?
Thanks @papr. The problem was in my Excel.
@user-b13152 Yeah, that happens when software thinks it is supposed to be clever.
Hi, sometimes when I try to drop a folder into Pupil Player it doesn't work. The cmd says this:
May I ask what it means?
@user-c1220d This is a bug that has been fixed in v1.13.
ok thanks, great
Hello! I have a question about timestamps. I have a UNIX epoch time from another piece of software, and I want to sync this time with the video timestamps. Is it right to use the difference between the start time (system) and the start time (synced)? Just add/subtract this difference?
@user-c87bad That is correct!
@user-c87bad I also created a small Player plugin that renders the recording time in unix epoch into the exported video: https://gist.github.com/papr/7d84267e9e1284b5763ac3afb1732494
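The conversion itself is just a constant offset between the two clocks, which you can read off info.csv. A minimal sketch (the names `start_time_system` and `start_time_synced` mirror the info.csv fields "Start Time (System)" and "Start Time (Synced)"):

```python
def pupil_to_unix(pupil_ts, start_time_system, start_time_synced):
    """Convert a Pupil timestamp to Unix epoch time. The offset between the
    system clock (Unix epoch) and the Pupil clock is constant over one
    recording, so adding it maps every Pupil timestamp onto the Unix axis."""
    offset = start_time_system - start_time_synced
    return pupil_ts + offset

def unix_to_pupil(unix_ts, start_time_system, start_time_synced):
    """Inverse direction: map a Unix epoch time onto the Pupil clock."""
    return unix_ts - (start_time_system - start_time_synced)
```

So to line up an external event logged at Unix time t with your gaze data, convert t with `unix_to_pupil` and search the gaze timestamps for the nearest value.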
Thank you so much!!!! But I am not sure how I can use this plugin - is it required to run the software from source, or?
You can run it using both. It is easier to just use the bundle if you do not have the source dependencies installed. Just put it in the pupil_player_settings/plugins
folder and start Player. It should appear in the Plugin Manager overview.
Okay, I'll try that. Thanks again.
Is there a way to access visual angle or vergence information in a .csv file? How does Pupil Labs calculate the z distance in gaze position? Does it know the working distance of the object in fixation?
In Capture or Service, why do the two eye cameras always conflict with each other? When choosing one camera, the other one seems to not work until I have tried several times.
Hey guys, I'm new here - don't know if it has been asked before, but do any of the Pupil HMDs have positional tracking embedded?
I want to record a session where I capture the user's head rotation and translation over a certain amount of time, and record the eye rotations as well, so that afterwards I can test this recording against any 3D scene I want
using raycasts and such
@user-a7dea8 Pupil transforms the 3d orientation and position of the 3d eye models into the scene camera space, and uses the closest point between these two "lines of sight" as gaze_point_3d
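The "closest point between two lines of sight" idea can be sketched with the standard nearest-point-between-two-3D-lines formula. This is an illustration of the geometry, not Pupil's actual implementation; the eye positions and directions below are made up.

```python
import numpy as np

def nearest_point_between_lines(p0, d0, p1, d1):
    """Midpoint of the shortest segment connecting two 3D lines,
    each given by an origin p and a direction d."""
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    w = p0 - p1
    a, b, c = d0 @ d0, d0 @ d1, d1 @ d1
    d, e = d0 @ w, d1 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:  # lines are (nearly) parallel
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return (p0 + s * d0 + p1 + t * d1) / 2

# Two hypothetical "eyes" 60 mm apart, both looking at a point on the z-axis
gaze_point = nearest_point_between_lines(
    np.array([-30.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
    np.array([30.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))
print(gaze_point)  # approximately [0, 0, 300]
```

When the two lines of sight do not intersect exactly (the usual case with noisy eye models), the midpoint of the shortest connecting segment is a natural estimate of the 3D gaze point.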
@user-94ac2a That should not happen with any of the Pupil Core eye cameras. Please select "Start with default devices" in the UVC manager to automatically start the correct cameras. If they are already in use, use the "Restart with defaults" button in the general settings. Please understand that it is not possible for two separate processes to access the same camera at the same time (which you might interpret as a conflict).
@user-81a601 The hmd add-ons do not have their own head tracking. Use the hmd's head tracking instead. e.g. @user-8779ef has experience on how to do that. If you want to do head tracking with one of the Pupil Core headsets, then you can use our head pose estimation plugin. See this youtube tutorial on how to use it: https://www.youtube.com/watch?v=9x9h98tywFI
@papr If I use custom cameras, what could be the reason behind this? It seems that it sometimes works and sometimes does not, even after pressing "Restart with defaults".
@user-94ac2a ok, then this does not surprise me. Pupil Capture tries to start cameras with known names (e.g. the Pupil Core cameras). If it does not find it, it tries to select the next available one. Do your custom cameras have the same name?
but how can I calibrate the eyes position and rotation against head position and rotation, there is a built-in plugin for Unity for example that do this job ?
@papr Yes, they have the same name. Maybe changing the name would help? Is there any way to change the device name?
@user-81a601 In this case, please move the discussion to vr-ar and ask for examples there
okay, thanks for the guidance
@user-94ac2a I think the camera names are usually burnt into their firmware. You might need to modify the UVC backend, in order to automatically select your custom cameras based on their serial number.
@papr thanks. Which document should I look at in order to do that?
Ok
Please be aware that this code is called in three different processes. Therefore, you will need a way to identify which process you are in.
Capture solves this by passing a different preferred_names list to each process.
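The per-process selection idea, switched from names to serial numbers as suggested earlier, could look roughly like this. The dict keys ("name", "serial"), the serial values, and the process-to-serial mapping are all illustrative assumptions, not the actual UVC backend API.

```python
# Hypothetical sketch: assign one camera serial number to each of the
# three Capture processes (world, eye0, eye1) and select by serial.
# Device dicts mimic what a UVC enumeration might return; keys and
# serials are made-up placeholders.

PROCESS_SERIALS = {
    "world": "SN-0001",
    "eye0": "SN-0002",
    "eye1": "SN-0003",
}

def select_device(devices, process_name):
    """Return the device whose serial is assigned to this process, else None."""
    wanted = PROCESS_SERIALS.get(process_name)
    for dev in devices:
        if dev.get("serial") == wanted:
            return dev
    return None

devices = [
    {"name": "Custom Cam", "serial": "SN-0002"},
    {"name": "Custom Cam", "serial": "SN-0001"},
]
print(select_device(devices, "world")["serial"])  # SN-0001
```

Because both cameras share the same name, the serial is the only field that disambiguates them, which is why each process needs its own entry in the mapping.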
Cool. Let me take a look
Hey! Do you know which files produced by the raw data exporter I have to use when I want to measure dwell time and the gaze samples on defined AOIs? For the gaze samples I would use one of these: gaze_on_surface_xY, surface_events, or surface_gaze_distribution.
But where can I see the dwell time on the respective surface?
@user-07d4db You will have to calculate that time yourself based on the timestamps. You should have enter and exit events for each surface. Just accumulate exit_ts - enter_ts
over all events for each surface
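The accumulation papr describes can be sketched like this. The row layout (surface name, event type, timestamp) and the sample values are illustrative; check the actual surface_events export for the exact column names.

```python
# Sum dwell time per surface from enter/exit event rows: for every
# matched enter/exit pair, accumulate exit_ts - enter_ts.
from collections import defaultdict

events = [  # (surface_name, event_type, timestamp), hypothetical data
    ("AOI_1", "enter", 10.0),
    ("AOI_1", "exit", 12.5),
    ("AOI_2", "enter", 13.0),
    ("AOI_2", "exit", 14.0),
    ("AOI_1", "enter", 15.0),
    ("AOI_1", "exit", 18.0),
]

dwell = defaultdict(float)
pending_enter = {}  # surface -> timestamp of the unmatched enter event
for surface, event, ts in events:
    if event == "enter":
        pending_enter[surface] = ts
    elif event == "exit" and surface in pending_enter:
        dwell[surface] += ts - pending_enter.pop(surface)

print(dict(dwell))  # {'AOI_1': 5.5, 'AOI_2': 1.0}
```

Guarding the exit branch with `surface in pending_enter` skips a recording that starts mid-fixation, where the first event for a surface could be an exit with no matching enter.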
Thank you for your response, papr. And which file should I pay attention to in order to count the gaze samples on the respective surface?