@user-26fef5 that's a good response! I'd also like to add: make sure you have a sufficient white border around each of your markers.
What can I do to get the program running on Windows 10?
How can I show the hot zone and the eye-tracking?
@user-d6e717 Can you describe a bit more in detail what's not working on your machine? Is there any error message that you receive? Can you start Pupil Capture, or does it not even start? Please also send us a log file from after a failed attempt to run the software. You can find the logfile in: your home folder > pupil_capture_settings > capture.log
anyone else getting shocked from their eye tracking cameras? Is this a faulty system, or is it just the dry winter air creating static? How do I stop it?
@user-b23084 This should not be happening. Please contact info@pupil-labs.com for support.
Hey @papr I’m using my pupil mobile headsets in my research lab and, after running successfully multiple times, my eye 0, then my world camera as well, are showing as “ghost mode.” What’s going on?
Testing with another headset reveals that the issue is not replicating. It appears to be hardware
Yes, this is likely a hardware issue, e.g. a loose cable connection. Please contact info@pupil-labs.com in this regard, too.
K, thanks
@user-c5fb8b every time I try to run the program in general or even as an administrator, the program does not run at all. Where should I send you the log file?
@user-d6e717 You can upload it here if you like or share it with data@pupil-labs.com
@papr I was somehow able to get it working!
Thank you all for your help
Anyone else having trouble with the 60 degree FOV world camera? I keep trying to focus it, yet it is blurry and never reaches the same sharpness as the 100 degree FOV lens. Also, does recalibration involve refocusing the lens manually or going through the calibration process on screen? Thank you for your help!
Hi, I am setting up my pupil core equipment. When I run my program it keeps saying that I am in ghost mode and will not show an image for eye0 or eye1. I tried disconnecting the devices and uninstalling them then plugging the device back in and running Pupil Capture as an admin but to no avail. Ideas?
Hi @papr , I've noticed that my annotations are behaving strangely for some of my recording files. When I export one of these recordings into pupil player, my annotations appear to be loaded in, but then when I play the recording the annotation messages don't pop up in sync with the events on screen, instead they all appear at once right at the end of the recording. The annotation system timestamps that appear on screen seem to be correct, but something about the timing is off. I did some digging and it looks like the "system" and "synced" start times are incorrect (see info file). Do you know if there's a way to correctly sync the annotations and salvage these data? Thanks!
All the annotations appear at the very end of the recording
@user-7b943c The world camera does not have auto focus. You will have to adjust the lens manually. After changing and focusing lenses, please run the Camera Intrinsics Estimation procedure. Previous calibrations (for gaze mapping) lose their validity when changing lenses, too.
@user-61027a Does that mean that the world camera previews the video feed correctly? In the eye windows, could you check the "Activate source" drop down in the "UVC Source" menu? Which cameras are listed there? Does the drop down contain entries with "undefined"?
@user-e7102b I see, you sent annotations in "system time", while Pupil Capture was not synced to system time. Luckily, this is fixable roughly like this:
system = Start Time (System)
synced = Start Time (Synced)
offset = system - synced
annotation_corrected = annotation_wrong - offset
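In Python, this could look roughly like the sketch below. I'm assuming the older info.csv with "Start Time (System)" and "Start Time (Synced)" entries here; key names may differ in newer recording formats, and annotation_wrong is just a placeholder for one of your wrongly-timed annotation timestamps:

import csv

# read the recording's info.csv into a dict of key -> value
with open("info.csv") as f:
    info = {row[0]: row[1] for row in csv.reader(f) if len(row) >= 2}

offset = float(info["Start Time (System)"]) - float(info["Start Time (Synced)"])
annotation_wrong = 91864.44  # placeholder: one of the wrongly-timed annotation timestamps
annotation_corrected = annotation_wrong - offset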
Hi, I'm trying to control the Pupil Core via MATLAB 2017a on macOS High Sierra. I've gone through trying to get the MATLAB helper scripts up and running, but am running into some issues getting zmq working. I think I need a compiler like Xcode, but I need an older version compatible with High Sierra -- is there any support on getting MATLAB communicating with the Pupil Capture system? Or does anyone have any experience doing this?
@papr ok thanks for the info. I have no idea what causes this occasional sync failure, as I always send annotations in exactly the same way (from matlab, via the pupil_middleman server).
@user-c72e0c It sounds like you have a similar setup to me. A couple of years ago another user and I created a "middleman" server that allows commands to be sent from matlab > pupil capture. It's very basic and will only work in pupil capture ver <1.9, but it gets the job done, provided you only want to communicate in one direction. I run it on a mac with either Sierra or High Sierra. I don't think you'd need Xcode. Anyway, if you're unable to get the newer zmq based protocol working, then you might want to give this a go. https://github.com/mtaung/pupil_middleman
@user-e7102b I think the issue is that Capture's clock is not set correctly to Unix time. How do you do that usually?
Can you guys tell me which exact connector type is used for the eye- and world camera? Are those 4 pin JST SH?
@user-755e9e this one is for you 👆
@user-54376c correct, JST-SH1 4 poles connectors.
Hi guys, I'm currently trying to analyse VOR (vestibular induced nystagmus of the eye) using the PL system. While doing recordings I noticed that my eye model kept 'shrinking' (pupil fit remained accurate though) from a normal eye fit to a fit of just the iris, resulting in an amplitude scaling of the supposed eye movements being made. So what I would like to do is an offline check of my eye model to visually check when it is accurately tracking the eye versus when it is not (so i can discard false data). However in Pupil Player, I can only find a window for visualizing the eyecam + pupil fit, yet no eye model is being depicted here... Does anyone know if this is possible? Many Thanks! Jesse
hi all, I have a quick question regarding the fixation detector plugin in pupil player. What is a good value for maximum dispersion? In your docs you do not provide one, but in the screenshot you have it is set to 1.00. Also, it shows min and max duration to be 300 and 1000, respectively. Are these good values as well?
Also, are there criteria for selecting [email removed] vs [email removed]? I understand the implications of having fewer fps (coarser eye position to video frame pairing)... but is this a big deal in most experiments? If not, I'd probably prefer 1080p. I have zero experience in analyzing eye tracking data
Hi all, I'm a newbie with Pupil Labs. The device that I use is a Pupil Core. But I have a problem: my software can't show the pupil. Btw, how do I solve that?
Serious issue. I am currently using pupil capture 1.20.51, on ubuntu 18.04. The eye camera stops working randomly mid-recording without showing anything specific. The eye camera window automatically closes. I can't find anything in terms of resources that could make it crash (RAM, too much CPU usage, etc.). Sometimes it happens a few minutes in the recording, sometimes not at all, sometimes >15 minutes in.
Based on the syslog copy/pasted below, I think something odd is happening with timestamps.
Feb 6 11:54:16 unicorn pupil_capture.desktop[8267]: eye0 - [ERROR] launchables.eye: Process Eye0 crashed with trace:
Feb 6 11:54:16 unicorn pupil_capture.desktop[8267]: Traceback (most recent call last):
Feb 6 11:54:16 unicorn pupil_capture.desktop[8267]: File "launchables/eye.py", line 617, in eye
Feb 6 11:54:16 unicorn pupil_capture.desktop[8267]: File "shared_modules/av_writer.py", line 168, in write_video_frame
Feb 6 11:54:16 unicorn pupil_capture.desktop[8267]: ValueError: Non-monotonic timestamps! Last timestamp: 91864.441208. Given timestamp: 91864.440973
Feb 6 11:54:17 unicorn pupil_capture.desktop[8267]: cannot import name 'MPEG_Writer'
I'm new to programming and I have no idea what I am doing. I tried to update my pupil software but I messed up along the way. I uninstalled it and tried to download it again, but my Mac won't let me open the application. Someone downloaded it for me the first time so I know I am missing something. I get the following image. Any suggestions?
@user-7b943c You've got several solutions. Option 1: open Spotlight search (command + spacebar), type "Security & Privacy", hit enter. Go to the General tab, and click the button saying something like "Open 'Pupil Capture' anyway...". Option 2: try right-clicking on the Pupil Capture app and click 'Open'. You'll get the same pop-up, only now it has the additional option 'Open'; click that and it will run.
Thank you! @user-83773f
What is an acceptable amount of pupil data dismissed after calibration? I keep getting about 50% of the pupil data dismissed during calibration and that seems like a lot. The eye cameras look like they are picking up the pupil very well (centered, clear black pupil and red trace of the pupil). Any thoughts? Might it be that it is too bright? I'm next to a sunny window. Thanks!
We are also interested in using the Pupil Core outside. Any suggestions on how to make sure the IR cameras don't 'white' out? We have tried using a baseball cap but it doesn't really help. Any other suggestions or knowledge of how others have solved this problem? Maybe some sort of filter that can cover the IR cameras to block some of the ambient IR? It works fine in the shade, but we get bad 'white' out in the sun. Thanks!
@user-5529d6 if you haven't already, please email sales@pupil-labs.com so that we can arrange a time to diagnose/repair HW if needed.
@user-a555da You downloaded Pupil Core software - https://github.com/pupil-labs/pupil/releases/latest -- correct? What OS are you using? Windows?
Hello... please share your experience with calibration and collecting data outside. Is it possible to capture eye movements from a car toward banners in the street? I have experience in shops, but not outside. Looking forward to any input.
@user-d98d40 It is possible to use Pupil Core outside but you need to pay additional attention to not overexpose the video feeds. Additionally, sunlight can create very strong IR reflections which can have a negative impact on pupil detection. I would recommend to have a look at Pupil Invisible instead which was designed to perform well in varying outside conditions: https://pupil-labs.com/products/invisible/
@user-c6717a
"I keep getting about 50% of the pupil data dismissed during calibration and that seems like a lot." - I agree that this sounds like a lot. I do not have numbers on what constitutes "an acceptable amount" right now, though. Generally, it is recommended to perform an accuracy validation after the calibration and decide based on it if the calibration was accurate enough.
"Any suggestions on how to make sure the IR cameras don't 'white' out?" - Use the "automatic" exposure mode in the eye windows to avoid overexposure ("white out").
@user-d98d40 It is possible to use Pupil Core outside but you need to pay additional attention to not overexpose the video feeds. Additionally, sunlight can create very strong IR reflections which can have a negative impact on pupil detection. I would recommend to have a look at Pupil Invisible instead which was designed to perform well in varying outside conditions: https://pupil-labs.com/products/invisible/ @papr thanks, but at the moment I only have the Core, so I need more info about calibration and collecting data - can you help me with this?
@user-d98d40 I understand, just wanted to make clear that there is a more suitable product to solve the task. :-)
In your case, I would highly recommend to check out our recently added Best Practices section in our documentation: https://docs.pupil-labs.com/core/best-practices/ It applies to outside and inside use. And as described above, you will need to make sure that the eye videos are not overexposed. @user-c6717a seems to be on the right track already. Maybe they can share more of their experiences.
Hi! I just used my headset to record some data (around 18 mins), but then I found that there is no gaze recorded after 1min50s, should that be a hardware issue or software issue?
And I tried it again, after 15min5s it happened again
Hi, is it possible to determine which frame is which in pupil player? I am trying to correspond fixation durations (from fixations.csv exported file) with the appropriate frame listed but not sure how to get the right correspondence between frame # in fixations.csv and actual frames in the video. Maybe I should just use any video player (e.g. VLC) on world.mp4 video? Thank you for your pointers!
Is there any way to increase the capturing frame rate of the eye camera up to 1 kHz?
Hi all,
I apologize in advance if my question has been answered previously in the chat (I did a quick search and didn’t see anything…)
I am designing a study in which I will be using the mobile eye tracker with geriatric participants. Within my study, the participants will also need to be wearing their prescription glasses to complete the tasks. However, recording with the eye tracker through the lens of their prescription glasses doesn’t give nice, clear recordings. As such, I think that my best bet is finding a way to thread the eye tracker cameras under their prescription glasses. I have been able to do this, but the cameras are a bit too close to the participants’ eyes. To get around this issue, I am trying to devise a way to slightly lift the participants’ regular glasses so that the eye tracker can be a bit further from their eyes when threaded under the participants’ glasses.
I am wondering if anyone else has used the mobile eye tracker with participants who need to wear their prescription glasses for the purposes of their experiment? Further, does anyone have any possible solutions I haven’t thought of? Or anything they’ve come across to slightly lift the participants’ regular glasses away from their face (I’m thinking a silicone nose piece or something along those lines)?
Thanks in advance!
@user-371eba We've got the same problem, so if you find a workaround, please do post it. Thanks!
@papr We're trying to get the distortion coefficients for the world cam. We can get the camera matrix, but Open CV requires the distortion coefficients as well in order to undistort an image. Can you (or anyone else) suggest where/how to do this? Thanks!
hi @user-c87bad - please could you send capture.log file to data@pupil-labs.com so that our team can provide you with concrete feedback? Additionally if you could include a summary of the setup and steps that led to this behavior that would be helpful.
Hi @user-97d695 It looks like you are trying to correlate world video frames with fixations. In fixations.csv exported by Pupil Player you will find two columns, start_frame_index and end_frame_index -- these show you how world camera frames are correlated with fixations. Note that fixations can span many world camera frames, as fixations are composed of a "cluster" of gaze positions. The "cluster" that is classified as a fixation depends on the parameters of the fixation detector plugin (min/max duration & max dispersion). Hope this helps!
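If a code sketch helps, a rough pandas version of that correlation could look like the following (the export path is an assumption; column names are the ones mentioned above):

import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")  # hypothetical export path

frame_idx = 30  # world frame of interest
# fixations whose frame range covers this world frame
overlapping = fixations[
    (fixations["start_frame_index"] <= frame_idx)
    & (fixations["end_frame_index"] >= frame_idx)
]
print(overlapping)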
@user-5ea855 thanks for following up here -- we have also received your question via email. We will follow up here as well for consistency:
While it would be amazing if we could support higher frame rate eye cameras, we are not able to do so currently due to both hardware and software limitations (limitations of single USB controller and the current speed of pupil detection algorithms would likely not be able to handle 1 KHz). I wish we could give you a "yes" on this, but unfortunately are not able to help you out with this specific request.
@papr @wrp Re: best practices. Thanks, a nice addition to the documentation. On slippage: we've been using inexpensive eyeglass retainers of the sort that slip on the ends of the temple pieces (e.g. https://www.amazon.com/Leyaron-Universal-Retainer-Sunglass-Eyeglasses/dp/B074H3NCGB). They're inexpensive enough to use one per subject and can be adjusted to be secure without being uncomfortable.
@wrp (Directed this to papr but thought I might try you as well.) We're trying to get the distortion coefficients for the world cam. We can get the camera matrix, but Open CV requires the distortion coefficients as well in order to undistort an image. Can you (or anyone else) suggest where it might be found in the code or files, and how to get it? Thanks!
@user-abc667 Hi Randy. You can find both the camera matrix and distortion coefficients in the msgpack-encoded camera calibration file in the pupil directory (given that you have run the Camera Intrinsics Estimation in Pupil Capture) - copy the content and paste it into an online converter (e.g. https://toolslick.com/conversion/data/messagepack-to-json)
@user-838cfe To add to @wrp's statement: Player shows the current frame index next to the "seek head" at bottom playback timeline.
@user-abc667 To add to @user-26fef5's statement:
- We have prerecorded intrinsics here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L26-L129
- The Python version of @user-26fef5's approach would look like this:
import msgpack  # pip install msgpack==0.5.6

path_to_intrinsics = ...  # path to the camera intrinsics file in the pupil directory
with open(path_to_intrinsics, "rb") as file_handle:
    # unpack the msgpack-encoded intrinsics into a Python dict
    result = msgpack.unpack(file_handle, raw=False)
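And as a rough follow-up for the undistortion question - the key names below (the resolution key, "camera_matrix", "dist_coefs") are assumptions based on how the intrinsics are typically stored and may differ between versions:

import cv2
import numpy as np

# `result` is the dict unpacked above; pick the entry for your resolution
intrinsics = result["(1280, 720)"]  # hypothetical resolution key
K = np.array(intrinsics["camera_matrix"])
D = np.array(intrinsics["dist_coefs"])

img = cv2.imread("world_frame.png")  # hypothetical frame to undistort
undistorted = cv2.undistort(img, K, D)
# note: the wide-angle world lens uses a fisheye model, for which
# cv2.fisheye.undistortImage would be the appropriate call instead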
hi @user-c87bad - please could you send capture.log file to data@pupil-labs.com so that our team can provide you with concrete feedback? Additionally if you could include a summary of the setup and steps that led to this behavior that would be helpful. @wrp okay, I'll do that, thank you👍
@user-26fef5 @papr Thanks for the info. Turns out we had unpacked the intrinsics file successfully but somehow still managed simply to overlook the distortion coeffs.
@user-abc667 Just out of interest, what tool/programming language do you use to unpack the msgpack file?
@papr My student did it in python, apparently just as suggested, and can't quite account for overlooking the info we were after. One of the mysteries of life, but at least it's ok now. tnx
@user-abc667 Thanks for the info. Overlooking stuff like this happens all the time. This is what this chat is for: Pointing people to stuff they have overlooked or just not seen yet. 🙂
@papr The responsiveness and understanding of you guys is one of the best things about your system. Rare to get such good support. Bravo.👍
Quick question. I need to install Pupil Capture on a computer that is not connected to the internet (apparently they still exist!). Would I need to do any more than simply copy over the pupil_capture_windows_etc. directory?
@user-b23084 Yeah, that should work
@papr @wrp We are interested in the frame by frame transforms that get computed when using fiducials. We are using Apriltags and get good results on surface tracking, but would also like to get access to the transforms themselves. Is there someplace in the code we can siphon off a copy of the transform computed for each frame that contains a known surface, so we can establish the rotation and translation of the form in that frame? Many thanks.
Hi there, I'm a clinician and PhD student in the Dept of Ophthalmology new to eye tracking and have just acquired a Pupil Core. Can anyone help explain the difference between norm_pos_x, gaze_point_3D_x, and gaze_normal0_x ?
Are there any resources that explain what each graph and column means when you data export? (I do own Holmqvist's Eye Tracking textbook but this problem is specific to Pupil Core)
[University of Auckland]
@user-abc667 The exports contain the matrices for both types of transformation: image to surface and surface to image.
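In case a sketch helps: once you have parsed the 3x3 image-to-surface matrix from the export (the export column names are not shown here; this is generic homography math, not the exact export layout), mapping a pixel into surface coordinates with OpenCV could look like this:

import cv2
import numpy as np

img_to_surf = np.eye(3, dtype=np.float32)  # replace with the 3x3 matrix parsed from the surface export
point_img = np.array([[[653.1, 392.2]]], dtype=np.float32)  # pixel coordinates, shape (1, 1, 2)
point_surf = cv2.perspectiveTransform(point_img, img_to_surf)
print(point_surf)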
Hi @user-aa7c07 please see: https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format and https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format
@wrp that's fantastic - I'll read this in detail to get started with my data. Is it at all possible to discern radial smooth pursuit? I.e. to know the X,Y coordinates of the circle on the screen to compare with the X,Y coordinates of the person's gaze.
It would be great if you could measure tangential error of smooth pursuit around a circle on the screen, for example, but would this require a monitor-based pupil tracker to know the x,y coordinates of the circle on the screen for the experiment?
@user-aa7c07 while this is not ophthalmology, here is a project that uses smooth pursuits (small smartphone screen + Pupil Core): https://perceptual.mpi-inf.mpg.de/files/2015/09/Esteves_UIST15.pdf
You can also get gaze relative to screen by using markers and surface tracking: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
That's great, thank you so much. Plenty to get me started
Hello, I am a graduate student. I have a question about adjusting the coordinates (x,y) in a recorded video. Before I wrote this question, I found a way to adjust the gaze data, but it is only available for "Gaze From Offline Calibration", not "Gaze From Recording". If anyone knows about this, would you mind letting me know?
@wrp Hi, do you have any idea how much bandwidth one eye tracker requires? I mean, is the video compressed before it is sent to the PC through USB?
@user-b8b425 old doc - but useful notes: https://github.com/pupil-labs/pupil-docs/blob/70e6ec71e48d3750b7068afd99386d5ec3c6e8a6/developer-docs/usb-bandwidth-sync.md
thx
Hello, I have some questions on the data recorded by Pupil Player. 1) About the file fixations_on_surface_1: I have a problem with the norm_pos_x and norm_pos_y data. I have some values that are not between 0 and 1, whereas those values should be normalised. Do you know why? 2) I would like to calculate saccade duration, saccade velocity and saccade amplitude from fixations.csv, gaze_positions.csv and pupil_positions.csv. Could you help me do that? I am looking for formulas but can't find any.
Thank you very much!
Hi. We are unable to connect to the remote host via WiFi. Both the laptop and the Android phone were on the same network. Is there any troubleshooting guide for this?
Hi. We are unable to connect to the remote host via WiFi. Both the laptop and the Android phone were on the same network. Is there any troubleshooting guide for this? I am referring to Pupil Core.
@papr "The exports contain the matrices for both types of transformation: image to surface and surface to image." Egad of course. Score one more for overlooking what was in front of me. Tnx.
@user-86d8ec please share the capture.log file with us after starting the Pupil Mobile manager.
@user-4a6d1f normalized locations outside of the 0-1 range are outside of the field of view of the respective camera. You can drop these data points if you are only interested in gaze points that can be located in the world video. Please be aware that you should filter your data by confidence first.
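A small pandas sketch of that filtering order (0.8 is just an example confidence threshold; the export path is an assumption):

import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # hypothetical export path
gaze = gaze[gaze["confidence"] >= 0.8]  # filter by confidence first
gaze = gaze[gaze["norm_pos_x"].between(0, 1) & gaze["norm_pos_y"].between(0, 1)]  # then drop off-image points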
@user-4a6d1f Unfortunately, the Pupil Core software does not currently implement saccade detection.
@papr thank you very much! I understood it thanks to the True/False column! Ok, I am going to try to do without it. Thank you for your answer!
Just a random question, but when I use my Pupil Labs Core system the device seems to heat up after an hour. I am worried about this causing damage to the system, so I usually let it rest for 15 minutes after an hour has passed. Am I being paranoid? Can the system safely be worn for 5-6 hours continuously as I play around with it each day?
@user-7b943c it is totally normal that the cameras warm up, especially if you run them on high frame rates. Nothing to worry about :)
Thank you!! @papr
Hello, I have two questions.
1) The first question occurred after updating to the latest version of the player. I set the fixation Maximum Dispersion to 1.50, Minimum Duration to 200 ms, Maximum Duration to 200 ms, the same as before. But the exported fixations values are all different. I'm so embarrassed..... I want to know why this happened. (All setting values are the same as before.)
2) I want to calculate the entropy rate using the transition probability matrix based on the fixation data. If anyone knows about this, please let me know!!
@user-a48e47 hey, we have recently updated the fixation detector to be more consistent across different settings and included a lot of fixes that prevent false positive detections. Therefore, it is expected that the output changed in the new version.
Please do not use the same value for minimum and maximum duration. It is extremely unlikely that fixations have an exact duration of 200ms. I would highly recommend to extend the allowed duration.
Unfortunately, I am not able to answer your second question.
@papr thank you for the reply. By the way, if you don't mind, may I ask one more question? Lots of papers fix the fixation duration at 100 ms (Crundall 1998, Chapman 2002) or 200 ms (Löcken, A., Yan, F., 2019; Salvucci, D. D., and Goldberg, J. H., 2000) and do not give a range for fixation duration. So I wonder how I should set the maximum duration.
@user-a48e47 you can set the maximum duration, min duration, and max dispersion in the GUI of the Fixation Detector plugin within either Pupil Capture or Pupil Player. The value for min/max duration is dependent on your research/task at hand - for some tasks researchers are interested in very long duration of fixations (e.g. in sports performance research for example). So, the range must be determined by the researcher or by using established values from literature within your research domain.
@wrp Thank you so much. It depends on my research domain lol. I will keep that in mind, thanks.
Hi, I have an old Pupil Core and I'm trying Pupil Mobile. The devices are detected but I get the following message: "ndsi.network: Devices with outdated NDSI version found. Please update these devices". What should I do? Pupil Capture 1.21, latest app (downloaded yesterday)
@user-bd800a If they are detected correctly, you can ignore this warning. We will check what causes this message to appear even though you are using the most recent software.
well, I guessed they were detected, but I do not get anything. I also enabled the Time Sync plugin but I get "Make sure time-sync is enabled"
as the group default-time_sync-v1 is not found
@user-bd800a Which version of pupil mobile are you using?
1.2.3
On a Samsung A50
Hi, I'm doing some recordings in 1920*1080 (fisheye), the recordings are great and the visualization is absolutely fantastic. However I'm using the raw fixation data and when I plot the norm x and y manually on each frame I see a decent distortion with the original X and Y presented on the video. How can I correct this? Thanks in advance.
@user-bd800a Could you please share the capture.log file with data@pupil-labs.com after activating the Pupil Mobile backend in Capture? This way we can check if there is an issue with the network setup.
@user-2fd67a Could you post an example screenshot? Fixations group gaze positions, and use their mean location as the "fixation location". Therefore, there should not be a big difference between gaze and fixation.
sure
Sent via pm
Here is an example, the problem is only when I'm not looking at the center of the screen.... I have a couple of points that correlates with the raw data...
@user-2fd67a What is meant by raw data in this case? Where do you take it from?
I took from the fixations
Could you please describe your workflow more specific? E.g. how do you extract the frames? How do you match fixation entries to the frames? etc
I have the video, Frame (say) 30, Fixation X and Y at Frame 30
simple scatter plot
This is the red dot that you are talking about?
Yes
Where does the yellow circle come from? From Player world video export?
yes
exactly
Can you see the same mismatch in Player? Player should visualize the gaze as a green dot by default instead of your smaller red version.
no, the green and Yellow are perfect, the problem is to reconstruct them...
using the raw data
when I'm away from the center, the thing is decently distorted...
using fixations.csv of course
"I have the video, Frame (say) 30, Fixation X and Y at Frame 30" - Matching by frame index is less trivial than one might think.
can you explain more?
I think that it is likely that this is just a mismatch in frames.
I'm not sure that is the problem...
meaning, if you use e.g. OpenCV to access frames one by one, OpenCV's 30th frame might not be the same frame that Player treats as its 30th
Do you use Python to access the video frames?
yes yes
I would recommend to use https://github.com/pupil-labs/pyav to access the frames.
import av  # https://github.com/pupil-labs/pyav

container = av.open(path_to_exported_world_video)  # path to the exported world video
for frame in container.decode(video=0):
    # convert each decoded frame to a BGR numpy array (OpenCV's channel order)
    bgr = frame.to_nd_array(format="bgr24")
If this still does not work as expected, there might be an issue with the fixation export. In this case we can investigate more. In this case, please also share your code such that we can compare our implementations.
okay 🙂 Thanks!
look... Pupil Player is saying that in this video I have 1930 frames. I've counted using cv2: 1930
therefore, frame by frame is working quite well...
@user-2fd67a OpenCV has an issue with random access for frames in VideoCapture, where it won't perfectly give you frame number X. It should however work if you start from the beginning of the video and read one frame after another until you have read X frames. Is that what you are doing?
Hi all, looking for a fix to a pupil capture issue. Pupil capture is no longer starting up fully (was working Feb 7th, no longer working Feb 10). There doesn't seem to be a clear error message in the command window, but the video feed windows simply do not start up. These windows are present in the windows taskbar, but do not actually have windows pop up in the desktop environment (as shown in the attached screenshot).
Here's the full startup log when this happens.
After encountering this issue, I uninstalled and reinstalled the Pupil labs drivers, following the guide on the pupil labs site, but the issue persists.
@user-70d6b7 can you please try to delete the pupil_capture_settings folder from your home dir and re-run Pupil Capture? This will clear the settings. It looks like maybe the windows are outside of the visible screen boundary. Resetting back to default settings should resolve this issue.
Hi there
We're about to use the Pupil Core glasses in the context of a clinical trial and we're asked to provide proof of CE mark for the glasses
Where can I get that?
Thanks!
@user-8e220c Please contact info@pupil-labs.com in this regard.
Thanks @papr
Hi everyone,
We are trying to read gaze position on a surface (tracked by the surface tracker plugin) from the IPC Backbone. When subscribing to the "gaze." topic we don't receive the gaze position on the tracked surface. Is it possible to receive this data via IPC Backbone? If yes, should it be in the "gaze" topic or a different one?
Hi @user-e672e9 , quoting the docs at https://docs.pupil-labs.com/core/software/pupil-capture/#further-functionality:
Streaming Surfaces with Pupil Capture - Detected surfaces as well as gaze positions relative to the surface are broadcast under the surface topic.
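If a code sketch helps, subscribing via the Network API could look roughly like this (modeled on the pupil-helpers examples; I'm assuming Capture's default Pupil Remote port 50020 and that surface datum topics start with "surfaces." - double-check the exact topic string and payload keys for your version):

import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Remote for the SUB port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to surface-mapped data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:{}".format(sub_port))
subscriber.subscribe("surfaces.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # the surface datum is expected to carry the gaze mapped onto the surface
    print(topic, datum.get("gaze_on_surfaces"))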
Oh, I missed that. Thank you very much!
No problem, glad I could help!
@wrp Thanks so much, that worked! For reference, this happened after switching from a 2-monitor setup to a 1-monitor setup. Even when switching back to the 2-monitor setup, the pupil capture windows did not pop up on either monitor. Maybe pupil capture should reset the screen settings to default on startup to avoid this issue? Also, in case anyone else encounters this -- after deleting the pupil capture settings folder, I also needed to disconnect and reconnect the pupil core headset. Just deleting the folder and not reconnecting the headset resulted in pupil capture crashing upon startup.
Hello and thank you @papr and @wrp for your answers regarding finding frames for fixations from the world video. I have a follow-up question regarding fixations on surfaces. What are the differences between norm_pos_x/norm_pos_y and x_scaled/y_scaled for fixations on surfaces? I believe for surfaces top left corresponds to (0,0) and bottom right corresponds to (1, 1); I would like to superimpose fixation locations onto an image of the surface after recording so just want to make sure I choose the right coordinates to match up with the image (which is the whole surface). Also, could you please tell me more about dispersion and how it determines fixation (I think this will help me better understand how a fixation can span multiple world frames). Thank you!
I am also curious if there are any research publications that have used core for recording eye movements of experts (for example radiologists) as they are viewing medical images? I looked at the Pupil Citation List here: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0 but couldn't find anything like that yet but maybe I am missing something/some other place to look - thank you! I am hoping to learn about protocols/methods from such previous work!
Hi, When I browsed the User Guide and Plugins, I found that many functions require a high speed world camera, such as:
But I bought the USB-C mount and realsense D400 rather than high speed camera, can the tools mentioned before be used normally?
Hi, I'm trying to compare pupil timestamps with an ISO date, but it is nowhere near correct. How can I convert 2020-01-24 16:36:27.391000 to your decimal time, please?
from datetime import datetime

start_time = datetime.strptime("2020-01-24 16:36:27.391000", '%Y-%m-%d %H:%M:%S.%f')
ts = start_time.timestamp()
print("start: " + str(ts))
I think the reference times are different, 1900 vs. 1970?
I found a work around. don't worry too much. Thanks
@user-2fd67a OpenCV has an issue with random access for frames in VideoCapture, where it won't perfectly give you frame number X. It should however work if you start from the beginning of the video and read one frame after another until you have read X frames. Is that what you are doing? @user-c5fb8b I'm doing exactly this... I really don't understand why it is distorting this much.
@user-2fd67a I had the same problem and got a good working script from Pupil. It depends on how you access the frame. # Do not try to call video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)!
That is the troublemaker. My understanding is that the frame index is a good but not exact estimate.
i'm reading frame by frame... I think it is causing some sort of distortion due to the fisheye lens
the norm_pos_x is working fine
if you want i can sent you the script I got from pupil
so you trying to reconstruct the ellipse onto it?
no
i'm just trying to reconstruct the fixations by using a simple scatterplot in each frame
ah now I got it. Myself I look into eye videos, but you print the gaze onto world video and you see the fixations there, is that right?
yes yes
Look, same video, same frame...different data points for norm_pos_x and norm_pos_y..
I know they should be a little bit different, but not a std of 20%
knowing that my maximum Y is about 0.66 and minimum is 0.44
it is almost random which positions are the correct ones to take... I took the X from fixations and the Y from pupil_positions and it worked perfectly for this particular frame.
A couple of things. The way gaze is calculated (pairing eye0 with eye1, 1-0, 0-1, 1-0 consecutively according to pupil data), gaze has ~240 data points per second, while the world video has 30 up to 60 frames per second. With 30 Hz you will have ~8 gaze data points for each frame. Taking data from fixations is a good approach because the eye keeps a local position for roughly 0.15 to 0.4-0.6(?) s.
pupil_positions relates to the actual position of the pupil center in the eye videos. In gaze_positions.csv, norm_pos_x and norm_pos_y relate to the world video.
Hello. I have a question. As I understand it, the Pupil Labs Core samples at 200 Hz and the fixation classifier is based on spatial dispersion (the default value is 1.5). However, dispersion-based algorithms are not suited to analysing data collected at higher sampling frequencies (> 200 Hz) (per the book "Eye-tracking: A comprehensive guide to methods, paradigms and measures"). So I am wondering about this, because my fixation range depends on it.
Okay, I REALLY need help here... the fixation matrix is giving something really strange.
@papr can you please help me?
@user-a48e47 the dispersion will still be the same, no?
@user-2fd67a I don't quite understand - could you please describe it in more detail?
@user-2fd67a - was this recorded on Pupil Mobile (e.g. on the android device) or via streaming from Pupil Mobile --> Pupil Capture and recorded on the computer running Capture?
@papr please follow up on this today when you are online.
I've recorded on Pupil Mobile (android)
is there any accelerometer on the device? It seems to me that it is related to the Y movement of my head, as when I'm looking perfectly straight it works quite well...
@user-2fd67a I have just seen the video. The norm positions are in the image coordinate system: 0,0 is top left. It looks like your y value is in a Cartesian coordinate system: 0,0 bottom left. Do you reckon that could be it?
Hi @user-14d189, so then why is the X axis perfectly fine?
particularly when you look up/down you can see that it is counteracting
x is fine because both start at the left.
hmmm... let me try here, thank!
thanks!
i'm considering 0,0 top left...
BUUUT
the fixation matrix is not
you are a genius
Nah, I just ran into the same problems 😩 🤣
Thanks Peter!
yeap, it worked
NEED QUOTATION FOR EYE TRACKING GLASSES
Hey @user-879265 have you contacted info@pupil-labs.com in this regard already?
NO
I WILL CONTACT
THANKS
@user-879265 OK, great. For the future, would you mind turning off your caps-lock? I know that this was surely not intended but reading the text in uppercase gives me the impression that you were yelling at us. 😕
@user-879265 Alternatively to contacting info@pupil-labs.com via email, you can select your preferred items from the shop and request a quote during the checkout process. Our sales team will follow up either way. 👍
@user-2fd67a @user-14d189 Great to see that the community was able to help out! 🙂 Please see this link for references to the different coordinate systems that we use in Pupil: https://docs.pupil-labs.com/core/terminology/#coordinate-system
@user-838cfe Normalised coordinates are always set up such that the origin is in the bottom left. Beware that the top orientation of the surface is visualized in Capture/Player by the red triangle. See the screenshot for reference.
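For anyone reconstructing these positions manually (as in the discussion above), a tiny sketch of the conversion from normalized to pixel coordinates - the y axis is flipped because pixel coordinates have their origin at the top left:

def norm_to_pixel(norm_x, norm_y, width, height):
    # normalized coordinates: origin bottom left; pixel coordinates: origin top left
    x_px = norm_x * width
    y_px = (1.0 - norm_y) * height
    return x_px, y_px

# example for a 1920x1080 world frame
print(norm_to_pixel(0.5, 0.66, 1920, 1080))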
Hello, I am trying to run Core on a Windows 7 Professional PC. I got it working before (on Windows 8.1) but I cannot do it now... I have correctly installed the libusbK drivers... I do not know what more I could try... Could it be a PowerShell problem?
I have installed PowerShell 2.0
The message is: video_capture.uvc_backend: Init failed
any help?
I have the same behaviour with the three cameras ...
@user-5e6e36 officially, we only support Windows 10. I am surprised that you were able to get it running on Windows 8. :-)
Hi, I am a bit of a newbie with Pupil and eye trackers. I just read that Pupil generates an arbitrary timestamp for each eye-tracker record. I want to know if it is possible to get timestamps as current Unix timestamps; I want to synchronize the eye tracker with an oTree application that gives me the current Unix timestamp.
@user-c67683 this is my message to @user-dfa378 from yesterday. They had the same question as you.
Checkout the info.player.json file. It contains the recording start time in system time (unix epoch) and pupil time. You can use these two timestamps to calculate the offset between unix epoch and pupil time and apply it to your other data. Maybe they can share their experiences with this approach with you.
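As a hedged sketch of that offset calculation - I'm assuming the info.player.json key names start_time_system_s and start_time_synced_s here (older recordings expose the same values in info.csv as "Start Time (System)" / "Start Time (Synced)"):

import json
from datetime import datetime

with open("info.player.json") as f:
    info = json.load(f)

offset = info["start_time_system_s"] - info["start_time_synced_s"]

pupil_ts = 91864.441208       # example pupil timestamp
unix_ts = pupil_ts + offset   # shift into unix epoch
print(datetime.fromtimestamp(unix_ts))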
So, is there nothing that I could try? I am searching the internet and there are a lot of people with the same problem on W10
Ghost mode
In w8.1 I am using v1.11 so I am using the same version in W7Pro, it seems that the drivers are correctly installed. (It is going to be hard to install W10 in this unit)
Hi @wrp, could you please tell me how to save undistorted video instead of raw video?
@user-b8b425 Currently, this is only possible post-hoc in Pupil Player using the iMotions exporter. The exporter plugin requires 3d gaze data though.
Hello, in what kind of coordinate system are the pose_T and pose_R for the apriltags?
For the corners I get e.g. [[653.05981445 392.19338989]
[655.08605957 520.94110107]
[779.39581299 522.76715088]
[778.57659912 393.91848755]]
but for the translation pose_T and rotation matrix pose_R I get coordinates in [0,1], e.g. pose_T = [[0.02474529]
[0.02952776]
[0.42436221]]
Hi, I am having some issues opening pupil player v1.21-5. Yesterday was working fine but now it won't open anymore. What might be the reason? I have tried restarting and shutting down my pc but still doesn't open. The command prompt opens but it doesn't run any commands.
v1.11-4 opens
@user-894365 I think you might be able to give more insight on @user-eaf50e 's issue, correct?
@user-0eef61 Please delete the user_settings* files in Home directory -> pupil_player_settings and try again. If it still does not open, please share the player.log file in the same directory.
Hi @papr now it's working. Many thanks 🙂
@user-eaf50e pose_T and pose_R are the translation vector and rotation matrix of the tag pose with respect to the camera.
The coordinate system of the camera:
X-axis: pointing to the right
Y-axis: pointing down
Z-axis: pointing forward
The magnitude of pose_T depends on tag_size. E.g. assume the physical size of the AprilTags is 5 cm and you set tag_size=5, then pose_T is in cm. If you set tag_size=50, then pose_T is in mm.
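For reference, a minimal sketch of how pose estimation is typically requested from the pupil-labs apriltags Python bindings (the camera parameters below are placeholders, and the exact attribute names, e.g. pose_T vs. pose_t, may depend on the bindings version):

import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
gray = cv2.imread("world_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame

detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(829.0, 829.0, 640.0, 360.0),  # placeholder fx, fy, cx, cy
    tag_size=0.05,  # physical tag size; the pose translation is returned in the same unit (here meters)
)
for det in detections:
    print(det.tag_id, det.pose_R, det.pose_T)  # attribute names follow this chat; may be pose_t in some versions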
Hi! I am struggling with the installation of libuvc in Ubuntu 16. I cloned the version found on the pupil-labs repo and when calling "make" I get the following error:
[100%] Linking C shared library libuvc.so
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libusb-1.0.a(libusb_1_0_la-core.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libusb-1.0.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
CMakeFiles/uvc.dir/build.make:188: recipe for target 'libuvc.so.0.0.9' failed
make[2]: *** [libuvc.so.0.0.9] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/uvc.dir/all' failed
make[1]: *** [CMakeFiles/uvc.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
Any help would be appreciated 🙂
Never mind 🙂 I removed the file libusb-1.0.a and it worked (hopefully I didn't mess anything up)
Hi all I am a researcher trying to run a usability study on a website. I am curious the best route for converting the pupil coordinates into screen coordinates, does anyone have experience with this sort of thing?
Thanks!
@user-526844 have you used a chin rest?
@user-526844 use surface tracking [1], define your screen as a surface. Note that this surface/AOI will only take into account the position of the screen, and will tell you where on the screen the participant is looking, but does not take into account dynamic content - e.g. scroll/video/animations. You will need to denormalize the gaze on surface positions. You can see an example of how this works in [2] (note this is for static content).
If you have control of the website that is being viewed, then you could track scroll position via a custom script and correlate these with timestamps post-hoc to overlay gaze on dynamic content - but that would take a bit of development effort.
Hi Do you ship the AR headset to Turkey?
VR I mean
"@user-eaf50e pose_T and pose_R are the translation vector and rotation matrix of the tag pose with respect to the camera. The coordinate system of the camera: X-axis: pointing to the right, Y-axis: pointing down, Z-axis: pointing forward. The magnitude of pose_T depends on tag_size. E.g. assume the physical size of Apriltags is 5 cm, and you set tag_size=5, then pose_T is in cm. If you set tag_size=50, then pose_T is in mm." @user-894365 So the translation vector should give the translation of the tag in the world coordinate system? Because I get negative values as well, but the world coordinate system is said to be bound to x: [0, <image width>], y: [0, <image height>]
@user-eaf50e We have just noticed an error in our coordinate system documentation. The bounds of the 3D camera space are not x: [0, <image width>], y: [0, <image height>]. Actually, the 3d camera space does not have boundaries at all. The origin of the 3d camera coordinate system is in the center of the camera, not any of the corners. Therefore, it is expected to get negative values.
Just for further clarification: This 3d camera space is not equivalent to the image plane. pose_T and pose_R are defined within the 3d camera space. The camera space itself is defined by the camera_params that are passed to the apriltag detector.
Okay thanks! So the gaze_point_3d_{x,y,z} is returned in the world camera coordinate system as well, right?
Correct! Please be aware that the units of these spaces might differ, since the unit of the apriltag camera space is dependent on the tag_size, as @user-894365 noted.
@user-894365 please correct me if this last message was incorrect.
@user-eaf50e "the unit of the apriltag camera space is dependent on the tag_size" -> correct. gaze_point_3d is in mm.
Hi! I just managed to get pupil programs running in Ubuntu 16.04 (after a loooot of effort :P).
I would suggest to make the following modifications to the documentation found here https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu17.md:
(Without CC=/usr/bin/gcc-7 it didn't work, despite the fact that I changed the gcc version with update-alternatives.) - I needed to install pybind11 before installing nslr: pip install pybind11
@user-03a2fe oh oh, this document seems to be out of date. nslr is not required anymore in order to run the master branch. 😕
I want to buy Pupil Labs products. Any help....
Email or something
@user-3ac2c5 You can buy them via our website: https://pupil-labs.com/products/ Choose the product that you want to buy, add it to the cart, and afterwards you can start the checkout process. I can also send you a link to a prefilled cart if you let me know what you want to buy. 🙂
I want to buy two Pupil Labs VR headsets with two eye trackers (HTC Vive)
Thank you
Please include the shipment price
@user-3ac2c5 https://pupil-labs.com/cart/?htcvive_e200b=2 You can request a quote during the checkout process.
@user-3ac2c5 Please be aware that Pupil Labs does not provide the VR headset itself but only the eye tracking add-on.
I figured it out. Thanks
Last question. How many days can I have the eye tracker?
@user-3ac2c5 2-3 days after we have received the payment
Tnx
The pupil cam for the left eye wasn't working, so I followed the Windows 10 trouble shooting advice and uninstalled the pupil cams from the device manager. However, when I plugged the glasses back in and restarted pupil capture, the eye cam for the left eye still isn't working. I don't think it reinstalled. How do I fix that?
@user-1cc326 Please send an email to info@pupil-labs.com in this regard.
Howdy! I have a question regarding pupil lab headset for writing my manuscript. How could I find the frequency of data collection? Is it 200 HZ or should I check something in the software? I appreciate any help
@user-edd919 you can check the Eye Video Exporter output. It contains csv files with the original eye video timestamps. Calculate the differences between them and take the reciprocal of the average time difference. This gives you the average frame rate.
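As a small sketch of that calculation (the timestamps file name below is an assumption - use whichever timestamps file or exported csv you have):

import numpy as np

ts = np.load("eye0_timestamps.npy")  # eye video timestamps in seconds
avg_frame_rate = 1.0 / np.mean(np.diff(ts))
print("average frame rate: {:.1f} Hz".format(avg_frame_rate))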
Hi! We are trying to record pupil data synched with OptiTrack data using this script: https://github.com/mdfeist/OptiTrack-and-Pupil-Labs-Python-Recorder We have pupil data recording successfully, but no OptiTrack data. (Though we can confirm that motion coordinates are broadcasting successfully.) Does someone have experience using this script or any idea where the problem might be?
Hi @papr, I want to learn about Pupil Groups, but the link on the website is broken. Can you tell me some details?
Hi @user-a98526, here's the link to the full pupil-helpers repository: https://github.com/pupil-labs/pupil-helpers
Thanks @user-c5fb8b, but I didn't find anything about Pupil Groups there.
@user-a98526 Actually, the link pointed at "Pupil Sync". Pupil Sync was the predecessor to our network api and has been out of date for more than 3 years now.
Let me gather some important links regarding Pupil Group
@user-a98526
- Pupil Groups is based on the ZRE protocol https://rfc.zeromq.org/spec:36/ZRE/
- Pupil uses the ZRE Python implementation "pyre" https://github.com/zeromq/pyre (any ZRE implementation can be used)
- The Pupil Groups plugin joins a user-defined group, by default: "pupil-groups"
- Messages whose topics start with remote_notify are relayed to other group members. Alternatively, any notification with the key remote_notify is also relayed.
- Notifications that are relayed to group members by default: recording.should_start, recording.should_stop
@user-c5fb8b Could you update the docs with the information above please?
Github issue as reference: https://github.com/pupil-labs/pupil-docs/issues/354
Thanks @papr. Can I use Pupil Groups to control my other sensors (e.g. a depth sensor)?
Is there an example where my other sensors join the pupil group, or what should I do?
@user-a98526 What type of control are you looking for? The purpose of Pupil Groups is to allow some quality of life for people who run multiple Pupil Capture instances at once. It does not give you fine-grained control over a single instance.
I have a robot system, and I want to use Pupil Groups to obtain various sensor data (force, angle, etc.) from the robot and connect them with gaze data.
@user-a98526 I fear that Pupil Groups does not fit that purpose.
@papr thanks,I‘ll try another way to achieve it.
@user-a98526 I mean you can use Pupil Groups for network discovery and to avoid manual ip setups. But it will not automatically receive sensor data.
@here 📣 Announcement 📣 - We just released Pupil Core software v1.22 . Download the apps here: https://github.com/pupil-labs/pupil/releases/tag/v1.22
This release reintroduces "Gaze History" visualization and simplifies the selection of video sources. Additionally, we removed the built-in RealSense video backend. More details in the release notes.
We look forward to your feedback!
Hi, I'm trying to connect multiple eye-trackers together for an experiment. I'm not sure how to do this . Do i use pupil groups? Not sure how to get started. Any help will be much appreciated.
Hi @user-2ad874 Pupil Groups allows you to start and stop a recording simultaneously on multiple instances of Pupil Capture running on the same network. If this is what you want to do, then pupil groups will be the right choice for you. You need to enable the Pupil Groups plugin on all instances of Pupil Capture and make sure they are all in the same group. You should see the names of the other instances listed in the plugin then. When starting the recording, all instances will record simultaneously. Does that answer your questions?
Hi, I need an eye tracking library for webcams. - Is Pupil capable to meet my needs?
Hi @user-0cc5ed Pupil is designed to work with head-mounted eye trackers only. There is the possibility to build your own head-mounted eye tracker, e.g. with webcams (and a lot of other materials), but I assume you want to place a webcam in a fixed location, e.g. on a monitor? This procedure is called remote eye-tracking (as opposed to head-mounted) and we do not currently offer any software for this use case.
Thank your for your answer! Yes, I need remote eye-tracking, which provides the gaze point on the display. - Are you aware of any other library, that can do this?
@user-0cc5ed Pupil offers tools for tracking surfaces, e.g. a display with fiducial markers and correlating the gaze to the surface plane, so this is also a possible use case with our tools. Generally I'm not aware of any library that allows you to use your custom webcam. Most solutions come with custom high-resolution cameras.
Disclaimer: there totally might be libraries out there that offer this use case.
@user-c5fb8b Thank you for your information! Yes, I found WebGazer (https://webgazer.cs.brown.edu/), but I can't use it in the content script of a webextension. Therefore I am looking for a native app solution, to which I can connect via native messaging - but all projects, which I tried are very hard to setup and don't offer support...
@user-0cc5ed Is the display you are talking about a mobile phone or a computer monitor?
@user-c5fb8b It's a computer monitor, but the webextension could also be used on a mobile phone, if mobile browsers do allow to install the extension there too. Is there code vice a big difference between these two situations?
@user-0cc5ed Pupil can be used to track basically any surface, if you attach markers to it. I think it would be best for you to contact sales@pupil-labs.com with a description of what exactly you want to do. Someone there will be able to give you a better overview of which setup from us would work best for your use cases!
@user-c5fb8b Ok, I will try this. Thanks.
Hi, I have the Pupil Labs gadget and I want to pair it with an E-Prime task. Is this possible?
I have a Core with binocular 200Hz cameras, but when I am using Capture I cannot reach this frame rate. What could be the problem? (The computer workload is about 30%.)
Hi, I am new to Pupil Labs and am trying to get a Core. I would like to get the world camera together with the eye cameras, but I might need to use the depth camera in the future. I am wondering: if I order the one with the world camera and get a RealSense camera later, will it work with the Pupil Core?
In other words, can I plug the RealSense camera into my computer with a separate cable and use it as the world camera for Pupil, or do I need to get the USB-C version? Thanks
@user-5e6e36 please change the sensor settings in the eye windows of Pupil Capture to 200Hz if not already
I am going to look for this... (how could I have missed that?)
Ok, I have found the settings. However, I am having serious bandwidth problems. I have an i7-4790 @ 3.6 GHz with 16 GB RAM - is this not enough for running two eye cameras and the world camera (at 200 Hz and 30 Hz respectively)? Do you have any minimum requirements advice for running with these settings?
Hi! I am new to Pupil Capture. I am trying to manually calibrate by showing markers on the screen. However, the calibration polygon does not match the area I covered. For example, I am calibrating a monitor. Ideally there should be a rectangular calibration polygon which encompasses the whole screen, but in my case it is a triangular shape and doesn't cover the whole screen.
@user-5e6e36 This setup sounds sufficiently fast. What frames are you seeing? Also, on which operating system are you running Pupil Capture?
@user-76416d This means that for some of the shown marker positions there is not sufficient pupil data that reaches the confidence threshold (0.8 by default).
Hi! I would like to ask if the Pupil code could be adapted to work with remote eye tracking instead of head-mounted eye tracking. I would so much like to purchase a "Pupils Remote". 🙂 All the other vendors I have tried have worse accuracy than the Tobii 4C. I would like to resell Tobii devices in medical areas, but they're monopolistic and don't allow that. So I'm looking for an alternative, and "Pupil" seems to be the most promising project. Thanks for it, btw.
Hi, I'm looking into the budget for a Pupil Core setup for a startup package. Beyond the pupil core itself, a mobile device, a support contract with pupil labs, and a computer that can handle the analyses, are there other things I will need or should consider?
Also, regarding the mobile device, is there a device that people would recommend? I see the hardware list in the docs (https://docs.pupil-labs.com/core/software/pupil-mobile/#supported-hardware), but that looks out of date. Would a new Pixel or One Plus 6 or 7 work?
Hi, I am running into an interesting problem...my csv output of fixations_on_surface indicates that none of the fixations are on the surface ('on_surf' is 'FALSE' for all fixations), yet I can see that there are fixations on the surface from the world camera and by visualizing the heatmap on the surface during recording. Is there something that could explain this bizarre observation? If needed, I can provide video snippets or fixations_on_surface csv file; please advise what would be most helpful. Thank you!
@user-a4b794 You can use Pupil Core for screen based research if you want to use surfaces ( https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking ). That being said, this would still be a wearable eye tracking system and not a remote system. Pupil Core software source code is designed to be used for near-eye IR cameras (e.g. a wearable system), so there would not really be a way to easily adapt/modify Pupil Core software to support remote eye tracking applications.
@user-12d809 If you'd like to discuss sales related questions, please get in touch via [email removed] Re Pupil Mobile - OnePlus devices should work if they are running Android 9 -- we are still waiting on a fix from the Google Android team in Android 10.
Hi @user-838cfe the heatmap in Pupil Player/Pupil Capture visualizes gaze on surface. You may have gaze on surface, but no fixations on the surface, as fixations are made up of "clusters" of gaze positions.
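@user-838cfe In case it helps, a quick offline sanity check on a Pupil Player surface export could look something like the sketch below. The file and column names are assumptions based on a typical surface export (and a made-up surface name), so adjust them to your own files:
```python
# Sketch: compare how much gaze vs. how many fixations land on a surface.
import pandas as pd

surface = "my_surface"  # hypothetical surface name from your export
gaze = pd.read_csv(f"exports/000/surfaces/gaze_positions_on_surface_{surface}.csv")
fix = pd.read_csv(f"exports/000/surfaces/fixations_on_surface_{surface}.csv")

def on_surf_count(df):
    # 'on_surf' may be parsed as bool or as the strings "True"/"False"
    return (df["on_surf"].astype(str).str.lower() == "true").sum()

print("gaze samples on surface:", on_surf_count(gaze), "of", len(gaze))
print("fixations on surface:   ", on_surf_count(fix), "of", len(fix))
```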
@user-5e6e36 This setup sounds sufficiently fast. What frame rates are you seeing? Also, on which operating system are you running Pupil Capture? @papr When I am running 120 Hz (2 eye cameras) and 30 Hz (world camera) everything is OK, but when I try to run 180 Hz the CPU load increases to over 100% (140%) and the world camera almost freezes (3 Hz). I cannot even try with 200 Hz. I am running Windows 8.1
@user-5e6e36 Your hardware should be sufficiently fast, but as I mentioned previously, we do not support Windows versions prior to Windows 10. This also means that the software might run significantly slower due to differences between the operating systems. I highly recommend either upgrading to Windows 10 or using a dual-boot setup with Ubuntu 18.04.
OK, I am waiting for a new computer with W10 so I hope that everything goes faster then, thank you
@papr @wrp Is there documentation on the format of the world camera video before it is run through Pupil Player? We want to analyze the world camera images, and the closer they are to the original, the better. Thanks.
@user-abc667 Pupil Capture actually saves the raw mjpeg data from the cameras. This is the closest you can get to the original as the camera does not support uncompressed video.
@papr OK, good to know. (I assume you meant "mpeg".) I understand it's variable frame rate; is there a time code with each frame? (Might be a naive question, I don't do lot of video hacking.) Tnx.
@user-abc667 No, I mean mjpeg
(https://de.wikipedia.org/wiki/Motion_JPEG). This "video format" is just a stack of independent images without a sense of time. This is why we save the timestamps for each frame in the <video name>_timestamps.npy file. You can load the timestamp files with numpy.load(). Alternatively, you can find a description of the npy format here: https://numpy.org/devdocs/reference/generated/numpy.lib.format.html
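For example, a minimal sketch of pairing decoded world frames with their timestamps could look like this (assuming a recording folder containing world.mp4 and world_timestamps.npy; PyAV is just one way to decode the video and is an assumption on my side, not something Pupil Capture requires):
```python
# Sketch: pair world-video frames with their timestamps.
# Assumes a recording folder with "world.mp4" and "world_timestamps.npy".
import av          # PyAV, used here only as an example decoder
import numpy as np

recording = "path/to/recording"
timestamps = np.load(f"{recording}/world_timestamps.npy")  # one Pupil time per frame

container = av.open(f"{recording}/world.mp4")
for ts, frame in zip(timestamps, container.decode(video=0)):
    img = frame.to_ndarray(format="bgr24")  # H x W x 3 array, OpenCV-style
    # ... run your analysis on img here ...
    print(f"frame at Pupil time {ts:.4f} s, shape {img.shape}")
    break  # remove this to iterate over the whole recording
```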
@papr Great, will follow those pointers. Many thanks.
@papr Hi all, I am recording to a phone (running Android 9 and set to save to the SD card) and I seem to be having an issue: if the recording is anything close to an hour long, I can't play back the audio capture. The mp4 file is present and appears to be the correct size (approx. 24 MB for an hour) but will not open in any audio playback software nor in Pupil Player (I am using the Pupil Core product and the phone's microphone to record audio). I don't have this issue if I make short recordings.
Hi, terribly sorry for a stupid question: which camera could be used for a DIY eye camera? According to pupil-labs.com it should be taken from a Microsoft HD-6000, but its PCB will not fit the Pupil Labs eye camera size (10x45x7mm).
I'm trying to measure pupil size in a way that is not affected by light. That's why I used a black screen with a white cross to focus on. Can you please tell me if I should use a black-and-white cross or different colours?
@user-d9a2e5 Hi, I would recommend having a look at other pupillometry studies and checking out the best practices there. This paper would be a good place to start: https://www.researchgate.net/publication/335327223_Replicating_five_pupillometry_studies_of_Eckhard_Hess
As you can see, they try to avoid strong contrasts and large areas of different colors.
For IRB purposes, did the Core headset's infrared emissions ever get certified under IEC 62471 or something similar? I found a post saying it was being assessed, but I didn't see anything to the effect that it has been certified. Alternatively, is there anything I can point to that shows it's safe?
@user-5054b6 Please contact us via info@pupil-labs.com and my colleagues will follow up accordingly.
done - thanks!
@papr thanks!! , i will check it out 🙂
Hello, I was wondering if PL can share some information about the sensors used in Pupil Core (1 and 2). I'm looking for the following info: a) Sensor size b) Focal length of the lens (fixed focal length in 2.0 and the range of focal lengths for 1.0) c) If some kind soul has calibrated the eye cameras, the intrinsic matrix for the same
@user-c828f5 please contact info@pupil-labs.com in this regard. We will follow up with the corresponding information.
Hey there, I came here a week or two ago about a non-monotonic timestamp issue. Apparently it was fixed, or worked around, in 1.22; however, when I give it a try, the eye camera still crashes... In syslog I get a sequence of "correcting clock frequency" messages followed by
Feb 26 08:55:53 XX pupil_capture.desktop[11967]: *** Correcting clock frequency
Feb 26 08:55:53 XX pupil_capture.desktop[11967]: eye0 - [ERROR] launchables.eye: Process Eye0 crashed with trace:
Feb 26 08:55:53 XX pupil_capture.desktop[11967]: Traceback (most recent call last):
Feb 26 08:55:53 XX pupil_capture.desktop[11967]:   File "launchables/eye.py", line 542, in eye
Feb 26 08:55:53 XX pupil_capture.desktop[11967]:   File "shared_modules/video_capture/uvc_backend.py", line 429, in recent_events
Feb 26 08:55:53 XX pupil_capture.desktop[11967]: AttributeError: 'UVC_Source' object has no attribute '_latest_ts'
@user-5529d6 Hey, yes, we will release a fix for this tomorrow.
I think you contacted info@pupil-labs.com in this regard, too, is that correct?
yes, I apologize for the redundancy I just wanted to bring it up here in case another user is encountering similar issues.
Don't worry. Your report was very helpful.
@papr Thanks!
Hi @user-2ad874 Pupil Groups allows you to start and stop a recording simultaneously on multiple instances of Pupil Capture running on the same network. If this is what you want to do, then Pupil Groups will be the right choice for you. You need to enable the Pupil Groups plugin on all instances of Pupil Capture and make sure they are all in the same group. You should then see the names of the other instances listed in the plugin. When starting the recording, all instances will record simultaneously. Does that answer your questions? @user-c5fb8b Thank you for getting back to me, I appreciate it. Could you explain what you mean by running Pupil Capture on the same network? How do I get that started? I really appreciate your help.
Does anyone know the range of values for the following data points? Also, how are the axes set up? I'm having a hard time visualizing it. Is there a way to display these values in Pupil Capture?
If I'm doing a psychology experiment with Pupil Labs, how can I correlate the trial numbers in my experiment with the timestamp in Pupil Lab's recordings?
@papr
@user-7b943c Check that link out. Your sample should be x == left 258..., y up 5, z 463 into the space in front of the camera.
In Capture, the binocular vector gaze mapper visualizes the gaze; it is available after calibration. Hope that helps.
Hi @papr, I want to know if there is any related paper or introduction about the gaze estimation algorithm of Pupil Core.
@user-a98526 The pupil detection algorithm is described here: https://arxiv.org/pdf/1405.0006 (this paper is pretty old and most info is out of date, but the pupil detection algorithm has not fundamentally changed). The 3D eye model we are using is based on this work: https://www.researchgate.net/profile/Lech_Swirski/publication/264658852_A_fully-automatic_temporal_approach_to_single_camera_glint-free_3D_eye_model_fitting/links/53ea3dbf0cf28f342f418dfe/A-fully-automatic-temporal-approach-to-single-camera-glint-free-3D-eye-model-fitting.pdf The way we utilize this 3D eye model for gaze estimation is currently not documented in a dedicated paper.
@user-c629df Yes. The recommended workflow is to send annotations (https://docs.pupil-labs.com/core/software/pupil-capture/#annotations) remotely from the experiment software to Pupil Capture. https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations
In your case the annotations would contain information about the current trial.
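If a rough sketch helps, sending a per-trial annotation from a Python experiment script could look roughly like this (assuming Pupil Remote is listening on its default port 50020 on the same machine and that the Annotation plugin is enabled in Capture; adjust host and port for your setup):
```python
# Hedged sketch: send one annotation per trial to Pupil Capture over the network API.
import time
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote, default port

# Ask Pupil Remote for the PUB port and the current Pupil time,
# so the annotation timestamp is expressed in Capture's own clock.
remote.send_string("PUB_PORT")
pub_port = remote.recv_string()
remote.send_string("t")
pupil_time = float(remote.recv_string())

pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the freshly connected PUB socket a moment before publishing

annotation = {
    "topic": "annotation",
    "label": "trial_start_01",  # hypothetical label encoding your trial number
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```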
I have implemented a ROS node that interfaces with Pupil Capture over the ZMQ network interface. It just receives all surface gaze messages and republishes them to the ROS environment. If I measure the number of gaze data points per second in ROS, I get around 240 messages per second. The eye cameras are running at 120 Hz. The messages are binocular gaze data points, i.e. from both eyes (gaze.3d.01._on_surface). I would expect 120 messages per second; why do I get 240? Do I get the gaze mapping for each eye separately? I would have expected one gaze mapping using both eyes.
@papr thank you so much!:)
@user-14d189 Thank you so much! Do you know if it is possible to display these coordinates live on the Pupil Capture software?
I'm using MATLAB to remote control [record, calibrate and send triggers] the Pupil Capture software using the pupil middleman Python code and accompanying MATLAB code. https://github.com/mtaung/pupil_middleman
Using this code to send annotations to the Pupil Core does not work for current Pupil Capture versions -- @papr I believe there was a change to annotations in Pupil Capture v1.9; can you give any guidance on updating the Python code to allow it to send annotations?
Also, my right eye camera displays a blurrier image than the left eye camera. Any suggestions on how to get it in focus or should I send my system in to get that done since my configuration is the 200 Hz no focus eye camera?
@user-7b943c have you focused your camera properly?
@user-2fd67a I am able to focus the world camera. I am scared to mess with the no focus 200 Hz right eye camera though because of the following documentation:
have you tried to clean the lenses with a soft fabric?
maybe someone touched the camera and left a fingerprint as a gift for you
@user-2fd67a I just tried that and used an air duster in the area of the camera but nothing changed sadly.
@user-7b943c I don't think you can display them. But imagine a coordinate system with 0,0 at the horizontal and vertical center of the image; z would be the perpendicular through that center.
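If it helps, here is a tiny worked example with the sample point mentioned above, treating the values as millimetres in the world camera's frame (which is an assumption on my side):
```python
# Worked example: interpret one gaze_point_3d sample relative to the world camera.
import math

x, y, z = 258.0, 5.0, 463.0                   # sample gaze_point_3d
distance = math.sqrt(x**2 + y**2 + z**2)      # ~530 mm from the camera
horizontal = math.degrees(math.atan2(x, z))   # ~29 deg off the optical axis
vertical = math.degrees(math.atan2(y, z))     # ~0.6 deg off the optical axis

print(f"distance {distance:.0f} mm, horizontal {horizontal:.1f} deg, vertical {vertical:.1f} deg")
```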
Regarding the different sharpness of the right and left images: try changing the distance of the blurrier camera to the eye and see if that changes anything; otherwise get advice from Pupil Labs.
Also check whether the resolution settings are the same.
@user-7b943c @user-2fd67a You should not attempt to focus the 200Hz eye cameras. This will result in breaking the lens mount, as documented. @user-7b943c I think I just responded to you via email - from the images you sent it seems that you might be capturing the eye through eyeglass lenses, which could explain the "blurriness" of one of the images.
@wrp is it possible to export the results from the head pose tracker?
@user-894365 when you are online can you please respond to @user-14d189's question ☝️ ?
@user-14d189 yes, that should work just fine if you hit the export button in Player while the head pose tracker is active and has finished its processing.
@papr @wrp That was easy! Thank you. What units are the translation x,y,z?
@user-14d189 The unit is one Tag size.
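So to get metric values you can multiply by the physical size of your printed markers. A small sketch (the export file name and column names here are assumptions, adjust them to match your own export):
```python
# Sketch: convert head pose translations from "tag size" units to millimetres.
import pandas as pd

TAG_SIZE_MM = 80.0  # physical edge length of your printed apriltag markers

poses = pd.read_csv("exports/000/head_pose_tracker_poses.csv")  # assumed file name
for axis in ("translation_x", "translation_y", "translation_z"):
    poses[axis + "_mm"] = poses[axis] * TAG_SIZE_MM

print(poses.filter(like="_mm").head())
```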
Hi guys, I'm trying to connect Pupil Mobile with the latest Pupil Capture 1.22, but there is no NDSI Manager - Backend Manager - Manager - Pupil Mobile anymore. Do you know where they moved it? Thanks
@user-d20f83 it is always running. Your devices should appear in the Video Source menu.
@papr Great! we got it
I'm trying to connect Pupil Remote to MATLAB so I can export the gaze-on-surface data. I'm using Matlab_Python from the pupil-helpers-matlab git. I'm using the same machine for both Pupil and MATLAB, so I use the 127.0.0.1 address. I get this error:
python pythonPupil.py
Timesync successful.
0
Traceback (most recent call last):
  File "pythonPupil.py", line 197, in <module>
    my_socket.send(uMsg)
ConnectionRefusedError: [Errno 61] Connection refused
Any idea how to connect Pupil to MATLAB so I can either export gaze on surface to MATLAB, or send events from MATLAB to Pupil? Edit: I'm able to use pupil_remote_control.py to start and stop recordings, and remote_annotations.py to send custom annotations, so that part works.
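In case it helps: one common cause of that kind of "Connection refused" is sending to a hard-coded port instead of the one Pupil Remote reports. Here is a minimal Python sketch that asks Pupil Remote for its SUB port and then subscribes to surface gaze (assuming Capture runs on the same machine with the Surface Tracker plugin enabled); the MATLAB bridge could mirror the same steps:
```python
# Hedged sketch: subscribe to surface gaze via the port reported by Pupil Remote.
import zmq
import msgpack

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote, default port
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("surfaces.")  # all Surface Tracker messages

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.unpackb(payload, raw=False)
    for gaze in msg.get("gaze_on_surfaces", []):
        print(topic.decode(), gaze["norm_pos"], gaze["on_surf"])
```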