Hi, the world camera crashed partway through data collection today and I can't recover it. The eye cameras are fine, but the world camera shows up in Device Manager on Windows as some sort of unrecognized device with a port reset error. Upon trying to reconnect the cable, I get a warning from Windows about a USB malfunction. Same problem in Ubuntu, eye cameras initialize fine but the world camera stays gray.
Is there any accuracy/precision data for the DIY headset?
Hello! I'm having some trouble with one eye camera, it has problems detecting the pupil, I think because it detects something dark in the corner of the eye (but there is nothing weird there). It's eye 0 in the image attached, see the blue corner?
@user-2be752 try the automatic exposure mode in the eye's uvc source settings
@papr this happened now
@user-2be752 Have you tried adjusting the detector's region of interest? You can do so by changing to the ROI mode in the eye's general settings and dragging the corners. Also, adjusting the intensity range in the detector settings might help.
Ah adjusting the ROI helped! The automode kind of made it worse
It's interesting it only happens with one camera
One more question: what is the marker cache appearing in Pupil Player?
@user-2be752 Player detects all surface marker positions across the video and stores this in the marker cache. This results in better playback/scrubbing performance since we don't have to run the detection again afterwards for every frame we look at. The cache will be recalculated when you change any detection parameters.
@user-2be752 to round off @user-c5fb8b 's statement: The marker cache includes the surface markers, e.g. Apriltags, which are used by the surface tracking plugin. https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-36b0a3 The accuracy/precision is not primarily dependent on the hardware but the software and how well the subject is able to fixate the marker's center. The relevant hardware details are the eye camera position [1] and the frame rate [2].
[1] You should have a similar view of the eye as in the screenshot posted by @user-2be752 above.
[2] A frame rate of 60 Hz or higher is recommended for the use of the 3d eye model.
@user-31df78 Please contact info@pupil-labs.com in this regard. Please attach a screenshot of the error to the mail.
@user-e2056a If Player does not show the world video then it is due to it being missing or potentially being corrupt. Are you able to open the world video in VLC player?
You do not need Capture to define surfaces in Player. Here is an example recording which you can use as reference: https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
@user-e7102b Yes, there is definitely a way. We will have to extract/refactor that functionality at some point.
@user-aaa87b You should find a capture.log file in the pupil_capture_settings folder in your home directory. Please provide a copy of it after reproducing the crash on calibration.
Since it is a DIY headset, do not forget to perform the camera intrinsics estimation procedure. https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
@user-0767a7 Have you tried using Pupil Player v1.19? It includes a series of fixes for the fixation detection, including properly ignoring low confidence data.
@papr I am talking about the original Pupil Dev headset, built in 2014, using the Microsoft HD-6000 and Logitech C615 cameras mounted on the Shapeways frame.
I have had to go into the Wayback Machine to find the documentation I consulted at the time...
Hi! I have a problem with heatmap exporting. I'm using the most recent version of Pupil Player. Up until the previous version, the heatmap data was exported in png form, but not now. How can I export heatmap data to a png?
Hi @user-ed70a0 Are you getting any image export for the heatmap in a different format? Or are you getting no image at all? The export format did not change, it should still be png.
@user-ed70a0 Please be aware that Player resets to its default settings when being upgraded. This includes the list of loaded plugins. Surface Tracker is not loaded by default. Is it possible that the plugin was not loaded?
If it was, and the heatmaps are visible in the Player window then you should also get the heatmap as png export. I have just successfully exported the heatmaps as png using the demo recording I have linked above and Pupil Player v1.19 on macOS.
@user-c5fb8b I didn't get any image file...
@papr I already set up the plugin of surface tracker. But I don't have any image file...
@user-ed70a0 1. Do you have surfaces defined that you can see in Player? 2. Can you see the heatmap overlayed in Pupil Player? 3. Do you get the rest of the surface data export when exporting? I.e. is only the heatmap image missing or is everything missing?
@user-c5fb8b I can see the surface in Player and see the heatmap overlayed in Pupil Player. And only the heatmap image is missing. All files except the heatmap images have been saved.
@user-ed70a0 Could you let us know what operating system and which version of v1.19 you are using?
Also, do you think it would be possible to share the recording with data@pupil-labs.com ?
Since we are not able to reproduce the issue with our own recording it would help us to use yours.
@papr I'm using Windows, and the version of v1.19 is 'pupil_v1.19-2-g4e38268_windows_x64'. And if possible, I'd like to share it.
@user-ed70a0 Before you do that, can you share the player.log file in the pupil_player_settings folder after performing an export?
@papr Thank you for your help!
@user-ed70a0 Could you shut down Player, move the recording to a folder that does not contain any non-ascii characters (in your case, the non-ascii folder names in the recording path), and try again?
My suspicion is that cv2.imwrite is not handling the path correctly.
Also, can you check if there are any unexpected folders in C:\Users\Beautiful\recordings\2019_11_21? They might include your heatmaps.
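For reference, if cv2.imwrite is indeed choking on non-ascii paths, a common workaround is to encode the image in memory and write the bytes with plain Python file I/O. A minimal sketch (the image and the path are hypothetical placeholders):
```python
import cv2
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)  # placeholder for the heatmap image
path = "C:/Users/example/Desktop/non-ascii-folder/heatmap.png"  # hypothetical path

# cv2.imencode works purely in memory, so the path never reaches OpenCV
ok, buf = cv2.imencode(".png", img)
if ok:
    with open(path, "wb") as f:
        f.write(buf.tobytes())
```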
@papr I tried moving the recording to a folder that does not contain any non-ascii characters. I also checked other folders where they might be stored, but no images were saved.
@user-ed70a0 Could you please share the newly generated log file?
Thank you so much
@user-ed70a0 The recording path still includes non-ascii characters: C:\Users\Beautiful\Desktop\<non-ascii folder>\exports\000. Please rename that folder.
@papr Oh, I solved it. Renaming the folders to ASCII characters resolved the conflict. Thank you very much!
Has anyone succeeded in running the Player on a Debian Stretch system? It works quite nicely on Ubuntu, but I get errors on Debian. More specifically, the problem seems to be here: 'GLFW window failed to create: GLX: GLX version 1.3 is required'. I can't find this library in the Debian repository.
@user-aaa87b GLX relates to the installed version of OpenGL. Maybe this is easier to find.
Could it be because I'm using X2Go to connect to the Debian system?
Yes, this is possible.
Damn. Thank you papr.
@user-aaa87b I would give this a 60% chance that this is the issue. I would definitely try to upgrade the installed OpenGL version before giving up
@papr Ok, solved. X2go is the problem. I'll have to work directly on the system console.
Quick question: does the core use an IR pass filter on the eye camera?
Or does it just record the visible spectrum?
@papr yes, I am using the latest version of the software. the data from the right eye is sketchy; it jumps up and down between 0% and 100% confidence as my participant turns his head obliquely and looks out of the corner of his eye. data from the left eye is consistently good though; a fixation will be stable when the confidence from the right eye is low, but then jump about a little bit when the confidence from the right eye jumps back up. So the right eye IS contributing, but not consistently. Can I tell pupil player to calculate fixations using ONLY the consistent signal?
@user-0767a7 you can rename the right eye video and timestamp file to _eye0* (so Player no longer recognizes them), run offline pupil detection on the remaining eye, recalibrate using the saved calibration, and rerun the fixation detector.
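A minimal sketch of that renaming step, assuming the default Capture file names and that eye0 is the right eye (the recording folder path is a placeholder):
```python
from pathlib import Path

rec = Path("~/recordings/2019_12_02/000").expanduser()  # hypothetical recording folder

# prefix the right-eye (eye0) files with "_" so Player no longer picks them up
for name in ("eye0.mp4", "eye0_timestamps.npy"):
    src = rec / name
    if src.exists():
        src.rename(rec / f"_{name}")
```
Afterwards, open the recording in Player and run the offline pupil detection and calibration as usual.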
@papr I was able to get the batch exporter working for surface data. The updated scripts are here: https://github.com/tombullock/batchExportPupilLabs in case anyone needs them.
@papr thank you! I'll give that a shot.
Instead of using the actual world camera in Pupil Capture, can I use my own screen output as the world camera?
@user-94ac2a that is, in theory, possible - but you would likely need to write your own video capture backend similar to: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/hmd_streaming.py
@wrp Thanks. I will take a look
How do I decrease Pupil Capture's CPU usage?
@user-94ac2a The part that uses the cpu most significantly is the pupil detection. If you do not require the detection to run in real time, you can disable it and analyse it in Pupil Player using the Offline Pupil Detection. You can disable realtime pupil detection in the general settings of the World window.
@papr what if I am using pupil service?
@user-94ac2a Pupil Service does not support recording. Therefore, Offline Pupil Detection is not an option.
Pupil Service is only needed in low-latency real-time applications, though.
In summary: It depends on your use case if you can turn off real-time pupil detection or not.
@papr is there a way to make pupil service consume less cpu?
@user-94ac2a You can try minimizing the windows to save the cpu that is required to draw the eye images. But as I said, the component that uses most of the CPU is the pupil detection.
@papr thanks. But isn't the pupil detection the core of Pupil Service?
If I turn off detection, it will not track eyes?
@user-94ac2a correct. You could reduce the frame rate of the eye cameras to perform the pupil detection less often and therefore save CPU.
@papr What is the minimum frame rate for service to work properly?
@user-94ac2a The 3d model is recommended to run with at least 60fps. The 2d pupil detection runs on a per-frame basis, i.e. the frame rate can have any value. Please be aware that other processing algorithms, e.g. fixation and blink detection work technically with less data, but it is more likely to miss events if the frame rate is lower.
Where could I get the code for eye tracking in C language?
@user-0ec597 A part of the pupil detection pipeline in Pupil is written in C++. The C++ code is wrapped with cython to be usable within the rest of our Python application. You can find all code for the pupil detection in the pupil_detectors module of the Pupil source code: https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_modules/pupil_detectors This module is a mix of C++, cython and Python. There is also some external C++ code that we compile against in https://github.com/pupil-labs/pupil/tree/master/pupil_src/shared_cpp/include
@papr sorry for the delayed response. I was away for the thanksgiving holiday. Both say they are on Android version 9.
@papr I was able to do a bit more troubleshooting and it seems like the connection between our older headset and the phone was the issue. I was able to use a different cord to work around the issue. Thanks!
@papr The right eye camera in one of our headsets stopped responding -- just get the grey square. I suspect it may be a loose connection, as I hear the computer's connect/disconnect sounds when I jiggle the wires. I tried making sure the plug was firmly in the socket at the rear of the camera, and it is. So the problem may be in the wires. How do we go about getting this repaired/replaced? Thanks.
@user-abc667 please contact info@pupil-labs.com in this regard.
@papr Will do. tnx.
@user-755e9e Thanks for the suggestion. I had ordered from Shapeways, using PA12. Just got them today and trying them out. Looks like it might work well.
@papr I noticed an earlier comment about reducing cpu load by turning off pupil tracking during recording, then doing it offline. I suspect I know the answer, but just to be sure -- it's the same tracking code, yes? So the result would be no different doing the tracking online vs offline, yes? The one problem I can imagine arises because we're doing manual marker calibration, moving the standard target around on the table top. With pupil tracking on, we get feedback right away about the quality of the calibration (and have repeated it when necessary). But if tracking is done offline, we won't know until too late that the calibration is bad, yes? Or can we have it on during calibration, check the result, and if ok, turn off for the rest of the experiment? (Lots of questions embedded here; thanks for your patience and advice.)
Hello! I was calibrating through pupil player, and when I finished and tried to export the video and data, it output a message of 'exporting dummy calibration'. What does this mean?
Hi! I'm having an issue when dropping a recording directory into pupil player on my Mac. When I drag the same recordings (have tried multiple with and without world camera used during recording) to pupil player on my Ubuntu machine where I record, they work and are recognized immediately. On the Mac, I get the error 'oops, this is not a valid recording'. I found this github issue (https://github.com/pupil-labs/pupil/issues/1143) and tried moving the directory directly to my Desktop, as well as checked the info.csv file for special characters, but no luck. Any advice? Thanks!
Hi @user-06ae45 are you using the same versions of Pupil Player on both machines? Can you check for the following files in the recording folder and send them over to me? - info.csv - info.player.json Also after failing to open the recording on the Mac, can you collect the player.log file and send it over? You will find it in your home folder > pupil_player_settings > player.log.
@user-06ae45 Also, if you use a recent version of Pupil Player, it will show more information than just "Oops this is not a valid recording".
@user-2be752 Could you provide the exact warning? The term "dummy calibration" is usually used in the context of camera_models/camera intrinsics estimation. In this case it means that no intrinsics were found for the specified camera/video file. This is normal for the eye videos/cameras.
@user-abc667 Please be aware of the difference between Pupil Detection and Gaze Mapping. Both can be done offline, but Gaze Mapping requires either a valid calibration configuration to be recorded, or the calibration procedure to be included in the recording itself (the more flexible option).
Yes, you are right that turning detection and mapping off will remove the possibility to monitor the current performance. This is why I would recommend getting a better CPU rather than turning these features off.
Hi @user-c5fb8b & @papr Thanks for the quick responses! I just installed the latest version of pupil player on my mac, and now it works! Thank you for pointing me to it!
FWIW the player log from the previous version install (1.11) had the warnings: '2019-12-05 07:54:31,076 - player - [ERROR] player_methods: Could not read info.csv file: Not a valid Pupil recording. 2019-12-05 07:54:31,076 - player - [ERROR] launchables.player: '/Users/vashadutell/Desktop/000' is not a valid pupil recording'.
@user-06ae45 Mmh, weird. But it works as expected now, did I understand correctly?
@papr yes, it opened up immediately.
@user-06ae45 That is great to hear!
@papr & @user-c5fb8b Thanks a million!
Hi everybody
I need to ask a couple of pre-sales question
Am I in the right place?
@user-e91538 If they are technical questions, feel free to ask them here. Questions regarding quotes and orders should be directed to sales@pupil-labs.com
Hi @papr, I'm using the blink detector plugin. I want to detect when the person has the eyes closed for more than 1s. Is it possible? I use binocular data.
@papr Yes, gaze mapping != pupil detection; I typed too quickly. I know we have to record a calibration procedure at the start of the trial for each subject. Point taken about cpu speed. Thanks.
Hello! I have a problem with the device. I run pupil_capture.exe while my Pupil glasses are on the table and do nothing. Over time, the eye1 and eye0 windows stop responding and close. Why?
Hi @user-045b36 can you please send us the log-file after this happened? You can find it in your home folder > pupil_capture_settings > capture.log
sorry
@user-045b36 I can't see anything in the log, not even that the eye windows close. The log always only stores the last run of capture and is overwritten when you restart capture. Can it be that you restarted capture after you experienced the issue? If so, please make sure to reproduce the error and then send me the log file without restarting capture in between. Thank you!
@user-c5fb8b I will try to reproduce the error later and send you log-file
Ok thanks!
Hello! I have a question about the tech specs of Pupil Core. According to https://pupil-labs.com/products/core/tech-specs, gaze precision is 0.02. I want to know what 0.02 for gaze precision stands for, since unit of measurement is not specified.
@user-479baa it is also measured in degrees.
Is there an option to cut the video in pupil player?
@papr I appreciate it!
Hey @papr unfortunately I'm having some trouble with my batch exporter script only successfully exporting surface data for ~50% of my pupil recordings. It seems like the error has something to do with functions in pandas treating the data as a string rather than numerical values. Perhaps this is a version specific issue? Can you tell me what version of pandas you used when you wrote the surface extract function? (https://nbviewer.jupyter.org/gist/papr/87157c5da93d838012444f4f6ece6bcc) Thank you
Error report
Hi @user-c9d205 You can set an export range / trim marks in Player. You can either use the input field in the General Settings menu, or just drag around the start/end of the timeline. This won't cut the recording, as we never modify the original recording. But when exporting you will only get the selected slice.
Would it be possible to stream the video from the local machine to another computer to do the computation and send it back in real time?
Hello guys, I got a pupil core last week, and I have a question about it
if I want to record the session with Pupil Mobile, how can I calibrate the session for the user?
Do I need to calibrate on the computer beforehand, and then connect the Pupil to the mobile to record the data?
Hi @user-81a601 If you record the calibration procedure as well, you can perform the full analysis process offline in Pupil Player. See the following playlist on youtube with tutorials on how to do the offline analysis: https://www.youtube.com/playlist?list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh
so I need to record the calibration with Core connected to the computer
and then connect it to the mobile and record the session
then I can use these two recordings to do the analysis
@user-94ac2a Pupil Capture does not currently have the option to stream the video and receive the result from somewhere else. Usually, you would connect the eye tracker directly to the other device and just receive the detection result on the target device.
Technically, you can write a plugin that subscribes to the video data published by the Frame Publisher plugin and displays it using the HMD Streaming video backend. But this will add considerable delay to the detection procedure.
@user-81a601 No you would do a manual marker calibration. You record the calibration procedure and then your experiment with Pupil Mobile in one take. Then you transfer the recording to the PC and open it with Pupil Player. There you can run offline pupil detection and offline gaze mapping. Please have a look at the youtube tutorials I linked, maybe that shows a bit better what's happening.
okay, I'll try it
thanks in advance
No problem!
hello there, another questions
can I use Pupil Core with people who use glasses or contact lenses?
Another question: does the surface tracker only work with AprilTags, or is it possible to track custom images?
@user-c5fb8b it worked, but something is wrong; my gaze data looks inverted in relation to my iris movement
anyone know anything about it?
@user-81a601 The important question is if the gaze matches the actual locations that you were looking at. The pupil movement in the eye overlay might be just flipped.
the gaze data is following my eye movement, but it seems mirrored
is the gaze data generated by the offline pupil detection feed?
Pupil data (red circle on the eye overlay) is generated by the offline pupil detection, gaze data (green dot) is generated by the offline calibration
right
my pupil data looks right
my gaze data also looks right, but when my pupil moves to the left, the gaze moves the same amount but to the right
@user-81a601 This is just a visualization issue. Do not forget that the eye cameras point in the opposite direction of the world camera. This causes the image to look mirrored. You can flip the eye overlay in the eye overlay menu on the right.
@user-81a601 Regarding surface tracking: Currently we only support marker based tracking and not area-of-interest (AOI) based tracking (e.g. tracking a specific image). We are working on AOI tracking support, but I cannot give you an estimate of when we will be able to ship this. For now you will have to place markers around the images that you want to track.
Regarding glasses on Pupil Core: We don't have prescription lens add-ons for Pupil Core (like we do for Pupil Invisible), but many members in our community and developers on our team wear prescription lenses. You can put on the Pupil Core headset first, then eye glasses on top. You can adjust the eye cameras such that they capture the eye region from below the glasses frames. This is not an ideal condition, but does work for many people.
@user-81a601 While glasses usually work fine (with the procedure described above), I'd recommend using contact lenses if you have the choice. These do not impact our software at all.
I have a remote troubleshooting session starting in 4 min. Do I need to connect somewhere or will they know where to find my machine?
@papr I have a remote troubleshooting session starting now. Do I need to connect somewhere or will they know where to find my machine?
Never mind, found the email with the contact info.
@user-abc667 Sorry for the delayed response. I hope everything has been cleared up now.
Hi, I have a question about the offline calibration and gaze mapping. I recorded files with Pupil Capture, not Pupil Mobile, and I know the offline gaze mapping is used for files recorded with Pupil Mobile. But I am wondering: if I do the offline calibration for my recorded files, does it improve the accuracy of my calibration even for recordings made in Pupil Capture? And is it worthwhile to do this offline process for all my recorded files in order to increase the accuracy of the previous files?
@papr I'm just following up on my batch exporter question from the weekend. The example surface export Jupyter notebook that you kindly provided seems to have issues with loading certain datasets. I think it has something to do with the pandas functions being unable to read in string data, but I'm also wondering if it's possibly a pandas version issue, as running the code with different versions of pandas gives me different error messages. If you have any suggestions and/or can tell me what version of pandas you are using, I'd really appreciate it. This is the final hurdle I need to clear to have a fully functioning batch exporter. Thanks.
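One version-robust workaround for columns that pandas reads in as strings is to coerce them to numeric explicitly. A minimal sketch (the export file name and the column list are assumptions; adjust them to your surface export):
```python
import pandas as pd

# hypothetical surface export produced by Pupil Player / the batch exporter
df = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface1.csv")

# coerce columns that should be numeric; malformed entries become NaN instead of raising
for col in ("gaze_timestamp", "x_norm", "y_norm"):
    df[col] = pd.to_numeric(df[col], errors="coerce")
df = df.dropna(subset=["x_norm", "y_norm"])
```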
@user-6cdb90 You can apply a manual offset correction during offline calibration. So yes, it definitely can increase accuracy. But it won't increase by just running the offline pupil detection/calibration with default settings, as that is equivalent to your pre-recorded data.
@papr Thank you very much for your response. So how can I apply this manual offset correction during offline calibration?
@user-e7102b Sorry, I did not have time to look into this further yet. Could you write an email to data@pupil-labs.com so that it does not get lost? I will come back to you as soon as possible.
@user-6cdb90 Check out the "Gaze mapper" submenu. There is an option to set values for x and y offsets.
Hi. I was playing with the generic video overlay plugin in pupil player. I have dyadic data (synced in pupil capture) that I would like to manipulate and export simultaneously. I see that I can pull one worldview on top of another full recording directory, but this isn't exactly what I would like. I am wondering if there is any functionality for having both full recording directories running simultaneously in pupil player? We've gotten around this issue in the past by exporting the videos separately from pupil player and then syncing in final cut pro, but if I could skip that step and do everything in pupil player that would be really great. Thanks!
@user-c37dfd How about exporting one recording and adding the exported video as overlay? Or is that what you tried already?
@papr - thank you - I just sent an email.
@papr Thank you for your previous help! I want to know: can I get gaze position data in real time from Core?
And what is the function of the USB-C mount? Can I buy the high-speed camera and the USB-C mount together?
@user-a98526 yes, you can stream data in real time. You would subscribe to the gaze topic. See this script for an example of how to subscribe and filter messages: https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
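The core of that script looks roughly like this: ask Pupil Remote for the SUB port, then subscribe to gaze messages over ZMQ. A minimal sketch, assuming Capture runs on the same machine with Pupil Remote on its default port 50020:
```python
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote's default address

pupil_remote.send_string("SUB_PORT")  # ask for the IPC backbone's SUB port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")  # all gaze topics

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    print(topic, gaze["norm_pos"], gaze["confidence"])
```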
@user-a98526 Pupil Core USB-C scene camera mount does not ship with any scene camera. This configuration is intended to be used for those who want to add their own scene sensor. Some researchers who require depth data for the scene, use the Pupil Core USB-C scene camera configuration with a RealSense R400 series depth sensor. However, the configuration is designed to accommodate other USB-C sensors that you might want to use/develop/prototype with.
Yes, I need these two, but I can't pick them at the same time.
@user-a98526 what does "these two" refer to? Please could you clarify?
can you please help me?
Hi @user-119591 Sorry that you are experiencing this issue. It will be fixed with the next release of Pupil. Until then you can make it work by downloading and running vc_redist.x64.exe from the official Microsoft Support page: https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads After this Pupil should start as expected!
thanks!
@wrp
I mean I want these two at the same time
@user-a98526 I think there is a misunderstanding. These are actually two mutually exclusive options.
1) The high speed camera option is a Pupil Core frame with eye cameras and a fully integrated camera, i.e. the world camera cannot be disconnected. 2) The USB-C mount option is a Pupil Core frame with eye cameras and a USB-C connector that can be connected to a USB-C camera of your choice. I think the visualisation on the website shows this nicely.
Addition to 2) The USB-C mount option does not include a world camera.
thanks! I think I may need depth information, so can the Intel RealSense D400 series sensors get depth information and scene information at the same time?
@user-a98526 Correct. Please be aware that Intel does not provide pyrealsense on macOS. pyrealsense is the Python module that Pupil Capture uses to access the RealSense cameras. In other words: Pupil Capture does not work with the Intel RealSense D400 series on macOS.
Also, please be aware that Pupil Capture does not save the raw depth data during a recording. Instead it saves the RGB color stream and a colored representation of the depth stream in two separate video files.
@papr can the "Reset 3D model" function in Capture be triggered via remote notification?
@user-141bcd Not at the moment, no.
kk, thx
Hi, I'm using the pupil_positions.csv generated by Pupil Player to understand when the user has their eyes closed (I recognize it as confidence drops in both eyes). Is this the correct way, or are there better ways to do that? Thanks in advance
@user-067553 That would be the correct way. The blink detector does that for you already, btw. Just activate the plugin, wait for it to run, export, and check out the csv file exported by the plugin.
Hello everyone, is it possible to do heatmap analysis without QR codes?
I just found the iMotions platform that can do it, but it costs almost $9000
Thanks @papr, I'm not sure about the meaning of the onset confidence threshold and offset confidence threshold shown in the plugin interface. If I only want the blinks.csv file, do I still have to export everything?
Hi @user-b8789e Currently we only support marker based tracking and not area-of-interest (AOI) based tracking (e.g. tracking a specific image). We are working on AOI tracking support, but I cannot give you an estimate of when we will be able to ship this. For now you will have to place markers around the images that you want to track.
@user-c5fb8b the Invisible does that?
@user-c5fb8b is it possible to test the alpha version when available?
Hello, is there no software that could help Pupil Core export heatmaps?
Hi @user-b8789e ! In Pupil Capture and Pupil Player you can use the Surface Tracker plugin for AOI tracking. See the documentation here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@marc Thank you. But for this kind of project, we could not use any April Tags or Markers around the AOI.
That's why I'm asking if there's any other software that works with Pupil Core.
I think Gaze Intelligence distributes software for marker-free AOI tracking. Their tools are compatible with Pupil Core recordings. https://gazeintelligence.com/
@papr Yes, finally found the right email in my email avalanche and did the troubleshooting session.
@papr Just finished defining 16 surfaces (paper forms) using AprilTags 36h11, and when I start up the program again it complains: "surface_tracker.surface_serializer: You have loaded an old and deprecated surface defn." I get one warning msg for each surface defined. Is this a bug, or is there something wrong with using the 36h11 version of the tags? Thanks.
@user-abc667 did you do that in Capture or Player?
@papr in Capture, is that wrong? Did them with live images of the forms.
@user-abc667 no :) there should be two surface definitions files in the pupil_capture_settings folder. One with a _v1 suffix, one without. Can you confirm this?
@papr No, have just the _v01 suffix file. I had been using the legacy square markers and thought that the right way to get rid of them was simply to move the existing _v01 file out of that directory and define the new surfaces with the AprilTags markers. Did I screw things up?
@user-abc667 no, you should not get a warning if you only have a _v1 file. Just to be sure: Did you get the "deprecated surface definition" in Capture or Player?
@papr in Capture
You should be on the safe side if you have just redefined the surfaces. I will look into possible reasons for the warning when I am back in the office.
@papr ok, thanks. Meanwhile I will press forward.
As long as there is a surface_definitions_v1 file, you are good to go.
@papr OK. Also I just downloaded and ran 1.19.2 and got the same warning msgs.
@user-abc667 yes, this is probably a mistake on our side.
Hi, I would like to ask a question about replacement of the cables. My camera cable broke recently and I found some wires that seem like replacements. Are they really replacements, and should I open the black box of the original one? I'm careful since it's very fragile. Thanks very much!
Left is the broken one, right is the replacement(?)
Hi @user-771cfd please email sales@pupil-labs.com and our hardware team can provide you with assistance.
Does the Pupil Player application give me pupil size in an Excel file?
never mind, thanks!
Hi, I just bought a Pupil Labs Core and tried to set it up today. However, it's extremely difficult for me to get the right view in the eye camera. This image is the best that I can adjust to. Besides the slider and ball joint, is there any other place that I can make adjustments? And what is the orange arm extender for? How can it be used?
Hi @user-b7d6e5 ! Please see this section of the documentation on how the eye cameras can be adjusted: https://docs.pupil-labs.com/core/hardware/#headset-adjustments The orange extender arms are necessary for some face geometries to get a good view of the eye. Please see this section on how to use them: https://docs.pupil-labs.com/core/hardware/#eye-camera-arm-extender
@here Pupil Software Release v1.20! This release contains a few stability fixes and deprecates the fingertip calibration method. Additionally we have externalized the pupil detectors into a standalone library, significantly simplifying the setup procedure for running Pupil from source.
Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.20
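For anyone who wants to use the externalized detectors directly, the standalone library can be driven from a few lines of Python. A minimal sketch, assuming the package is installed via pip install pupil-detectors and exposes the Detector2D API as in its README (the eye image file is a hypothetical placeholder):
```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# load a grayscale eye image; detect() expects an 8-bit gray frame
gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)

result = detector.detect(gray)
print(result["confidence"])  # detection confidence in [0, 1]
print(result["ellipse"])     # fitted pupil ellipse: center, axes, angle
```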
@papr When I run pupil_capture it says that it doesn't have a calibration for the eye cameras and it loads the dummy one (with focal length=1000.0). If it's not calibrated, how can it find the actual eye center in mm? Is there a way to calibrate the eye cameras manually?
@user-4d0769 We assume a fixed size for the eyeball. This allows us to infer the pupil size in mm.
We do not calculate the pupil size from a single image but based on the 3d model, which requires a series of images.
thanks!
Hello - are there any known issues of version 1.20 not opening on Macs with Catalina? I'm getting an error message saying the software needs to be updated because Apple can't check for malicious software.
@user-2798d6 this is a general issue on Catalina. Right click the application, click open, and it will give you the option to open the app normally.
In order to prevent the dialog, one needs to notarize the app with Apple. We are working on integrating this step into our deployment.
I want to detect prolonged eye closures of more than 1s. I'm trying to do it by modifying the blink detector plugin; do you have any suggestion/plugin/instrument to do that?
@user-067553 The blink detection plugin detects the start/onset and end/offset events of blinks. It should immediately allow you to also detect prolonged eye closures, which should simply have onset and offset events that are further apart in time.
Thank you @marc, do you know if the filter length measures the length of the eye closure, or the analyzed segment in which confidences are measured (as a window)?
@user-067553 The latter is correct. The confidence signal is convolved with a kernel/filter, which yields a signal that measures blink-like movements. The filter length specifies the temporal width of the kernel.
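Putting the earlier advice together, prolonged closures can be found by comparing onset and offset times in the exported blinks file. A minimal sketch, assuming the blink detector export contains start_timestamp and end_timestamp columns in seconds (the export path is a placeholder):
```python
import pandas as pd

blinks = pd.read_csv("exports/000/blinks.csv")  # hypothetical export location

# onset and offset far apart in time => prolonged eye closure
blinks["closure_duration"] = blinks["end_timestamp"] - blinks["start_timestamp"]
prolonged = blinks[blinks["closure_duration"] > 1.0]  # longer than 1 second
print(prolonged[["start_timestamp", "end_timestamp", "closure_duration"]])
```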
Hi, is it possible to stream the audio using pupil mobile?
Hi, I tried to do the camera intrinsics estimation, but it doesn't seem to work. And I think that some parts are missing on your help/support page, as the text refers to step 6, but only step 1 is visible on the support page: https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
There's only step 1, but this is not enough, as it is not clear how to use the pattern. Could you explain the process in more detail?
Hi @user-9ee9c8 thanks for spotting this error. I have just made the fix and it should be online within the next minute or two.
I purchased Pupil labs hardware for the HTC Vive. What options do I have in terms of different software that can be used to analyze eye tracking data? Thanks in advance.
Hi @user-d18d24 you can use Pupil Player for visualization and lightweight analysis.
Hi! I started using Pupil with VR, so I have considered using Pupil Remote so I can record from another computer and the performance is not affected on the computer running VR. Is this the correct approach? I also have no clue about how to do this, if this is the case. I would appreciate any help, since I am lacking most of the technical skills. Thank you so much!
@user-9a94be Correct, the recommended setup would be to run Pupil Capture on a separate computer. You would insert the VR add-on into your VR headset, and connect the add-on to the computer running Pupil Capture. You can use our Unity plugin to communicate with Pupil Capture through Pupil Remote. Please see our documentation https://docs.pupil-labs.com/developer/vr-ar/ or the core-xr channel for more information.
@papr Thanks so much for the quick reply. I will start exploring in that direction
Hi! I am using the Pupil Core to measure gaze in a driving simulator. Because of performance problems running both the Pupil software and the simulator at the same time, I tried running the Pupil software on a separate laptop, which works fine. The only problem is the calibration. I want to do screen marker calibration on the screen where the simulator software is running, to have the area calibrated that I need for this screen. I wrote my own script to show the markers used in manual marker calibration on the screen and to start and stop the calibration using Pupil Remote. However, the result of the calibration seems not to be as good as with screen marker calibration. Is this the right way to go, or is there another method to do such a "remote calibration"?
@user-274c94 This implementation/workflow sounds exactly right!
@user-274c94 Maybe you could try performing a single-marker calibration instead? It would reduce the complexity of the remote calibration code and might improve accuracy if performed correctly.
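For the start/stop part of such a remote calibration, Pupil Remote accepts simple string commands over a ZMQ REQ socket. A minimal sketch, assuming Capture runs on another machine (the IP address and the 30 s marker-presentation window are placeholders):
```python
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://192.168.1.10:50020")  # hypothetical address of the Capture laptop

remote.send_string("C")  # start the currently selected calibration
print(remote.recv_string())

time.sleep(30)  # present your markers on the simulator screen in the meantime

remote.send_string("c")  # stop the calibration
print(remote.recv_string())
```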
Hi, I'm trying to get used to the kit for research purposes, but I can't load Pupil Capture properly. Every time I try, it gives an error message saying a file is missing and closes the application. I believe it was "win drv". Could someone please tell me what I'm missing? I'm using Windows 10.
Hi @user-ca31eb You might have to run Pupil Capture as administrator initially for setting up the necessary drivers. Can you give that a try?
@user-c5fb8b I'll try that. I think the drivers are already installed but I'm not sure. Thank you!
Hi there, I just upgraded to v1.20. In Pupil Player, turning on "legacy square markers" in the offline surface tracker plugin causes the software to crash in Ubuntu 18.04
@papr Hi, we are looking to buy a USB-C to USB-C cable for Pupil Mobile. We bought one eye tracker in 2017 and one in 2019 and need a USB-C cable that works for both. Should we get USB-C 2.0 or 3.0?
Hi. I'm setting up the Pupil Core device for the first time. To set up the pupil detection, I read that I should set the min/max values for pupil size. What are the conventional values for this?
Also, is semantic gaze mapping possible with the Pupil Core system?
@user-bda130 the company choetech makes reliable USB-C to USB-C cables. You can find them readily available on Amazon or other online shops.
@user-670869 You might not need to set any of the min/max values for pupil size. The first recommendation is to physically adjust Pupil Core headset so that you are getting a good image of the eye and check confidence values in the world window while moving your head so you can sample different eye positions.
"Semantic gaze mapping" is, I believe, SMI's term for markerless AOI tracking. Pupil Core software does not have markerless AOI tracking. Instead, Pupil Core software has implemented marker based AOI tracking that we call "surface tracking". You can read more about this feature in the docs: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
@user-5529d6 I will try to reproduce the behavior you note re legacy square markers with Pupil Player v1.20.
Hi, I'm trying to record videos in order to use the gaze 3d distance for my research. The problem is that the gaze distance is mostly not accurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as I read in the doc. What am I doing wrong? Thanks, Yogev
Hello everyone, I am having problems using the Pupil Core ETG in very dark environments, such as inside flight simulators. The ETG are not able to locate the markers correctly, especially those to identify the AOI. Did any of you find any solution to make them more accurate in dark environments? Thanks in advance!
@user-5529d6 We might have found a possible issue that could cause such a crash. While we are investigating it, please try restarting Pupil Player with default settings from the General Settings menu.
could you help me? I can't open the film!!!!
Hey @user-e10103 it looks like the main window is positioned off screen. Please close Player, delete the user_settings_* files in Home directory -> pupil_player_settings, and start Player again.
@papr thanks! Another question is: how can I see the eye tracking, and which application can help me to see that?
@user-e10103 Do you mean the eye tracking for a given recording? How was the recording created? Did you use Pupil Capture or Pupil Mobile? If you are looking for a software to see the eye tracking in real-time and make recordings, use Pupil Capture. See our documentation for details: https://docs.pupil-labs.com/core/#getting-started
but there's no red dot on my screen
and how can I get more ref points?
@papr Just letting you know that resetting Pupil Player's settings to default did not fix the legacy marker bug. Same thing for downgrading to v1.19
@user-5529d6 Could you share the player.log file in Home directory -> pupil_player_settings after reproducing the crash?
yes give me a minute
@user-e10103 The ref points are the concentric circles that are shown on the screen during a calibration. They need to be visible in the field of view of the scene camera. The center dot shows green if they are detected, and red when they are not.
@papr Ok I am confused now.
1. I uninstalled 1.20
2. Installed 1.19
3. Tested, it crashed the same way
4. Uninstalled 1.19
5. Installed 1.20
6. Reloaded the same recording session, and the marker cache will not load anything
7. Loaded another session, and this time it did not crash when selecting legacy markers
I have a backup of the other session, I will try with the first one again to see if this is an issue with that specific session. I didn't do anything special when collecting the data though.
@papr update:
1. The software just hangs, and based on the timestamps of the player.log file, there is no entry when the crash occurs, which is odd (the most recent entry is many minutes old).
2. I just found out that if the surface tracker plugin is activated AND the marker type is already set to legacy markers when I start Pupil Player, it will not crash and caches the markers correctly.
3. The bug occurs when selecting the legacy markers in the drop-down list.
Hi @user-5529d6 would it be possible to share the recording with data@pupil-labs.com so we can take a look?
I will double check to make sure it does not contain any sensitive stuff before, but there shouldn't be a problem.
@user-c5fb8b I just sent a link to download it from google drive.
@user-5529d6 Thanks, we will take a look at this!
@wrp Thank you!
Can we get gaze maps from the Pupil Player? I just want to see some visualizations.
@papr Thanks for the suggestion! I think that would get me closer to what I am trying to do, but I think the videos would still be overlapping in a way that would obstruct some of what I will be trying to code on the backend. I do appreciate the idea though
A while back I indicated I was getting some errors in the Pupil Mobile app regarding an unknown sensor identity and internal Android issues. I did some troubleshooting and determined it was our older headset (USB to micro-USB) causing the issue (the errors never occur with our newer headset, USB to USB-C). I had thought changing the connection between the older headset and the phone had resolved the issue, but it appears that was a temporary fix, as I am now getting the same errors as before. I have a few questions: 1) Is there any way we can update the older headset that would resolve these errors? 2) Our newer headset is still not the newest model. We are wondering if any specs differ between the newest model and our newer headset (purchased in ~2016/2017). 3) Is there potential to order an older version of a headset if the newest version is significantly different from our current setup?
@user-670869 Do you mean "heat maps" when you say "gaze maps"? If so, you can generate heatmaps with the surface tracker. You might want to try loading this demo recording in Pupil Player and enabling the Surface Tracker plugin. https://drive.google.com/file/d/1nLbsrD0p5pEqQqa3V5J_lCmrGC1z4dsx/view?usp=sharing
@user-c37dfd re hardware - could you send an email to sales@pupil-labs.com with your order id(s) so that we can best help you with hardware questions.
Hi, I just installed the new version (1.20) and I can't find the "Scan Path" plugin. It's mentioned in the user guide and I would expect to find it among the "Vis" plugins.
@user-e3b669 unfortunately, it has been disabled due to technical reasons for a while now. We have an idea for a solution but are still evaluating its technical feasibility.
Thanks, I will try to be patient
@papr @user-c5fb8b Some update on the legacy marker bug. I got it to work with the following procedure:
1. Open Pupil Player 1.20
2. Turn on the surface tracker
3. Select legacy markers - it crashes
Repeat 1-3 once, it crashes. Repeat 1-3 again, it works. I have to do this individually for each session for it to work.
Hi @user-5529d6 we have been able to reproduce this on one of our machines, but not on others. We are still not sure about the cause of this issue, but are actively investigating. Could you try running Player from command line (do you know how to do that?) and send us the output from there when reproducing the issue? We noticed there might be additional debug info that is not written to the player.log file.
@user-c5fb8b Yes I can do that. I am fetching some more data that I need to process and send you the output
Thanks!
Also, unrelated to the legacy marker issues: is there any plan to do something smarter about RAM management in Pupil Player? When it reaches the RAM limit it just automatically shuts down. I would normally expect things to slow down a lot to shuffle memory and clear a few things, but not a total crash of the software.
I have it happen with anywhere between 8 and 64 GB of RAM.
Hey, does anyone have experience with the groups plugin and with recording from two Pupil Core glasses? Does anyone know if you need two PCs to record them both, and if and how things like gaze convergence and gaze latency of the two recorded gazes can be shown in Pupil Player, or what can be shown in Player? Thanks already for your help!
@user-e2c411 No prior experience with recording from two Pupil Cores simultaneously, but based on my experience with one: if you are using high-resolution and high-frame-rate settings for both the world and eye cameras, you will need a serious computer to record everything without dropping frames.
What specs would you consider as serious?
I am sure papr can comment on the required specs and how it should scale as the number of pupil core increases.
We are currently using this setup for collection, which I would not qualify as a serious computer but it has been sufficient so far:
- Ubuntu 18.04
- AMD Ryzen 7 3800X 8-core processor
- 64 GB of RAM
- fairly standard SSD
- GeForce GTX 1080
recording the eye cameras at 400x400 / 120 Hz and the world camera at 1280x720 / 60 Hz (these are from memory)
With this configuration we are not dropping frames, but turning on online surface recognition results in an immediate drop in frame rate.
We also tried the following configurations:
- AMD Ryzen 7 1700X, 8 cores, 3.6 GHz, 32 GiB RAM, SSD, running Windows 10: ~25% frame drop (online surface tracking turned on)
- AMD Ryzen 7 1700X, 8 cores, 3.6 GHz, overclocked to 4.1 GHz, 32 GiB RAM, SSD, running Windows 10: less than 5% frame drop (online surface tracking turned off)
- i5-4690K, 4 cores, 3.5 GHz, 32 GiB RAM, HDD, running Ubuntu 18.04: ~13% frame drop (online surface tracking turned off)
Not sure how much any of this helps you though.
Okay thanks a lot!
Hello, I'm asking again because I didn't get any response.
I'm trying to record videos in order to use the gaze 3d distance for my research. The problem is that the gaze distance is mostly not accurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as I read in the doc. What am I doing wrong?
@user-5529d6 @user-e2c411 You should be able to use any Intel i5 or i7 series CPU with 8-16 GB of RAM. You do not need a serious graphics card (the integrated graphics in a laptop is fine, for example).
@wrp @papr I'm running the latest release (though this has happened with prior releases) and have been trying to calibrate manually, using the v0.4 marker on the end of a 26" stick, which we use to move the marker around on the tabletop. Previously this worked without much trouble, moving the target around slowly for about 2 min. (This is like your single marker calibration method, but moving the target instead of the head.) But lately I have routinely been getting lots of pings with error msgs that say the target is moving too quickly, where moving slowly enough would make the calibration take 10 or 15 min. (a) Is there some setting I'm missing? (b) The docs say, when using manual marker, "select Marker display mode > manual", but I can't find that anywhere in any menu. Am I overlooking it somewhere?
@papr @wrp I'm using the Pupil headset with its wide FOV camera. The camera intrinsics estimation -> show undistorted image does a very nice job of straightening lines. Is this something that can be applied offline/during playback as well, or does the original recording need to be done with this on? I ask mainly because it appears to be a sizable load on the CPU, and we have to run with a laptop that is adequate but not amazingly powerful. Thanks. (And yes, I know it's late in your day on Xmas eve; thought I'd get this in for attention when you can. Thanks.)
Just a quick question: I'm using Pupil in an experiment with a sequence of 40 different images. In the corners of each image I'm going to insert 4 apriltags. Do I have to use 4 different tags for each image (sounds like 160 different tags)? In other words, should each image have a unique combination of tags, or could I use the same 4 tags in a different order in 4 different images (that would make 40 different tags)? Thanks a lot.
Hi, I still didn't get any response. @papr @wrp I would really appreciate your help. 1. I'm trying to record videos in order to use the gaze 3d distance for my research. The gaze distance that I get is mostly not accurate. I used the default screen marker calibration at a distance of ~500 mm from the screen, as I read in the docs. How can I calibrate the device in order to get accurate gaze depth data? 2. @papr I tried to use the Depth Frame Accessor plugin as you suggested here, but I didn't understand where I can find the depth data after I activate the plugin. In the log file there is only a repeating message that says "- world - [INFO] depth_frame_accessor: Depth values: shape=(480, 640), dtype=uint16"
Hi everybody, I am very new to this field. Nice to meet you.
I just want to know the exact data structures or data shapes that we can get from the recording data with the Pupil Core, so that we can make a decision for our project to start with this nice application! Does it consist of coordinates of locations for all frames (i.e., all the timestamps)?
@user-aaa87b Hi, do you mean you have 40 images that are presented to subjects and you want to track gaze on these images? How are these images presented? Are they printed out or presented on a screen?
@user-c5fb8b Hi and thank you for the answer. Yes, each subject is presented with 40 images on a screen using Psychopy.
@user-39f4f5 Hi! You can get gaze coordinates and timestamps for every frame, yes. Besides that, there is a lot more data that we export. Additional plugins that you enable will export even more data. You can see a description of the common data fields in the Raw Data Exporter section of the docs: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter Please note that the general workflow to get this data is not just from a recording. Our recordings store the same data, but in a very efficient format that is not human readable. In order to get the data, you would open the recording in Pupil Player first. Here you can also fine-tune pupil detection afterwards and e.g. select only specific timespans that are interesting to you. Then you can use the Raw Data Exporter plugin to get all the data that is referenced in the docs as CSV files. Does that answer your question?
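Once exported, the CSV files can be loaded with any data tool. A minimal sketch for a first look at the gaze data, assuming the default export location inside the recording (the path is a placeholder):
```python
import pandas as pd

# Pupil Player writes exports into the recording's "exports" subfolder
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# one row per gaze datum: timestamp plus normalized scene-camera coordinates
print(gaze[["gaze_timestamp", "norm_pos_x", "norm_pos_y", "confidence"]].head())
```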
@user-aaa87b By "insert 4 apriltags" do you mean just "paint" them onto the image?
The normal workflow would be to just stick 4 markers (printed on paper) to the edges of the screen. This means that you will have to split the data up into the distinct images manually later in your case. You can of course also "paint" them onto the images.
If you add different apriltags for each image, you can distinguish them by the tags. But this means that you will have to define 40 surfaces. I know that we had cases of performance issues in the past with a lot of surfaces, especially on Windows machines. If you choose to go that route, please make sure to test this to make sure this approach is viable for your setup. If you are experiencing performance issues with 40 surfaces, you will have to fall back to the first method.
@user-abc667 Regarding the calibration: What you want to select is Calibration Method -> Single Marker Calibration in the Calibration Plugin (the one with the circle marker icon). Then further below you will find Marker display mode which you should set to manual. I assume you were using the Manual Marker Calibration method, which is something different!
@user-abc667 @user-b37f66 Regarding your questions to @papr and @wrp: They are both currently on vacation, but we will come back to you as soon as possible!
@user-c5fb8b Thanks a lot for the answer, however I'm not quite sure I've got it right. Now, our experiment goes like this: the subject is shown a first single image, then is shown a second image composed of two separate pictures to choose from. This sequence is repeated 20 times, with different images. I'm interested in tracking the subjects' gaze during the exploration of the first image and during the decision process as well (second image).
@user-aaa87b Well, in theory this should be no problem. Again, I would recommend you try making a recording and defining 40 surfaces in Player to test for performance issues. If this does not work, the best solution essentially depends on your experiment. Essentially: the fewer surfaces you define, the more work you have to do manually. If the "presentation areas" of the first and second image are of the same size, you could just use 4 markers to define a single surface to track. Then you will have to figure out which image the subjects look at yourself. Or you define two surfaces, one for the first image and one for the second image (or even 3, with 2 separate surfaces for the second image). Then you only have to figure out which stage you are in. All of these are viable approaches, but as I said: check the surface performance.
@user-c5fb8b OK, I'll check that. Thank you very much for your help, and best wishes for the new year!
@user-c5fb8b Thanks so much for the advice on the calibration. As I suspected, I did indeed have the wrong thing selected. Single marker works fine, and of course I found the Marker Display Mode. Happy to wait for @papr and @wrp to return to get the second question (about intrinsics) answered. Hope they and you have a great new year.
hey guys, what are the system requirements for the Pupil Labs eye tracker?
Hi @user-854a19 the key specs are CPU and RAM. We suggest at least an Intel i5 CPU (i7 preferred) with a minimum of 8 GB of RAM (16 GB is better if possible). We support macOS (minimum v10.12.0), Linux (minimum Ubuntu 16.04 LTS), and Windows 10.
@user-c5fb8b Thank you very much. A happy new year.