@user-f497a5 I'm pretty sure your recordings should still be supported. Can you share an info.csv file?
@mpk He did so in a personal message to me. There are some required keys missing. One can add them manually, but it was not necessary in his case.
ok great!
Hi, is the Android version of Pupil open source?
@user-ac350a Hey, Pupil Mobile is not open source. But it has an open source network API which you can use to access the sensor data in realtime and to control the app remotely.
See https://github.com/pupil-labs/pyndsi/ for details.
Hi, I have followed the instructions for installing pyndsi. The installation (on Windows) seems to have completed fine, but I get errors when importing ndsi. The errors are:
import ndsi
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\MyWorkspace\pyndsi\ndsi\__init__.py", line 24, in <module>
    from ndsi.formatter import DataFormat
  File "E:\MyWorkspace\pyndsi\ndsi\formatter.py", line 10, in <module>
    from ndsi.frame import JPEGFrame, H264Frame, FrameFactory
ImportError: DLL load failed: The specified module could not be found.
Any advice on this?
@user-b0f1b6 Please make sure that the turbojpeg and ffmpeg dlls are in your windows system path. See the pupil docs for links to the dependency downloads: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md#setup-pupil_external-dependencies
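If editing the system path is not an option, the DLL directories can also be prepended to the process path before importing. A minimal sketch — the directories below are hypothetical install locations, adjust them to wherever you extracted the dependency downloads:

```python
import os

# Hypothetical locations of the extracted turbojpeg and ffmpeg binaries
dll_dirs = [r"C:\work\libjpeg-turbo64\bin", r"C:\work\ffmpeg\bin"]

# Prepend them to PATH so that `import ndsi` can locate the DLLs
os.environ["PATH"] = os.pathsep.join(dll_dirs + [os.environ.get("PATH", "")])
```

After this, `import ndsi` should find the DLLs in the same Python session.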
Hi @papr, is there any specific version of libjpeg-turbo to be used?
@user-b0f1b6 pyuvc uses 1.5.1, so I would recommend that one.
Thanks @papr, I have resolved my issue. Indeed it was because of the system paths for libjpeg and ffmpeg.
@papr Hi! I've also encountered the frame drops in v1.16-71 and found that the eye0 FPS is normal, but there are some discrete gaps in the world FPS that happen at the same time as the frame drops. Is it a normal condition that the eye frames are okay but the world frames are not?
@user-9c3078 No this is not normal. Does this happen for new recordings as well? If yes, please contact info@pupil-labs.com regarding a possible hardware issue.
@papr Yes, it happened in several videos and I've sent the data.
@user-9c3078 no need to send the data. Just refer to this conversation and say that your world camera might have a loose connection
@papr Okay, that makes sense. But how could I fix it?
Or is it just because I've used it for a long time? Otherwise I need a new one?
@user-9c3078 specific next steps will be determined by our support team. 👍
Hi, I have been trying to run the following ndsi example, but encountered an error during its execution:

C:\MyWorkspace\pyndsi\examples>python uvc-ndsi-bridge-host.py
Traceback (most recent call last):
  File "uvc-ndsi-bridge-host.py", line 210, in <module>
    Bridge(uuid).loop()
  File "uvc-ndsi-bridge-host.py", line 39, in __init__
    self.cap = uvc.Capture(uvc_id)
  File "uvc.pyx", line 455, in uvc.Capture.__init__
  File "uvc.pyx", line 507, in uvc.Capture._init_device
uvc.OpenError
Exception ignored in: <bound method Bridge.__del__ of <__main__.Bridge object at 0x000001D542FAD0B8>>
Traceback (most recent call last):
  File "uvc-ndsi-bridge-host.py", line 115, in __del__
    self.note.close()
AttributeError: 'Bridge' object has no attribute 'note'
Realized it was a mistake on my part, I did not connect a UVC source, which caused the program to stop executing
@user-b0f1b6 nonetheless, this looks like a typo in the example. Could you please create an issue in the pyndsi repository?
@user-b0f1b6 thank you :)
Hi @papr, I encountered similar issues with the other example:
C:\MyWorkspace\pyndsi\examples>python ndsi-gui-client-example.py
14:24:54 [ INFO | OpenGL.acceleratesupport] OpenGL_accelerate module loaded
14:24:54 [ INFO | OpenGL.arrays.arraydatatype] Using accelerated ArrayDatatype
Basic GL Setup
NDSI Setup
14:24:55 [ WARNING | ndsi.network ] Devices with outdated NDSI version found. Please update these devices.
Traceback (most recent call last):
  File "ndsi-gui-client-example.py", line 400, in <module>
    runNDSIClient()
  File "ndsi-gui-client-example.py", line 369, in runNDSIClient
    clear_gl_screen()
  File "ndsi-gui-client-example.py", line 78, in clear_gl_screen
    glClearColor(.9,.9,0.0,1.)
  File "errorchecker.pyx", line 53, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError (src\errorchecker.c:1218)
OpenGL.error.GLError: GLError(
  err = 1281,
  description = b'invalid value',
  baseOperation = glClearColor,
  cArguments = (0.9, 0.9, 0.9, 1.0)
)
The OpenGL error most likely comes from the drawing functions of cpu_g and fps_g. glClearColor does work, but I am just getting a blank grey screen.
I think we will have to revisit and revise the examples :)
Hi, I use the eye tracker for my thesis. I have one question: how is it possible to show the scan path? I read that the plugin is not available... but it is very important for me.
Hi @user-08d134 Yes, the scan path plugin has been disabled due to technical reasons. We have an idea for a solution which we are working on. Unfortunately, I cannot give you more information on its release yet. 😕
Thanks @papr
I have another question... how can I connect the eye tracker with Pupil Mobile? I have a Nexus 5X and it doesn't recognize the eye tracker.
@user-08d134 This can have different reasons. The first thing to check is whether you need to enable OTG Storage. Search your settings, and if you can find it, enable it.
I have enabled OTG storage
@user-08d134 Great! Are the devices recognized now?
no...
the phone recognizes the eye tracker, but Pupil Capture doesn't recognize Pupil Mobile
@user-08d134 Ah, this is a different issue then
This can have multiple reasons, too 😅
@user-08d134 I think the issue here is that your wifi network may be blocking udp transport between Pupil Mobile and Pupil Capture. Please see: https://docs.pupil-labs.com/core/software/pupil-mobile/#streaming-to-subscribers
Have you tried using a dedicated router? The router does not need to be connected to the internet, just needs to create a local wifi network.
Hi! I've been following this tutorial https://github.com/pupil-labs/hmd-eyes for getting started with the Pupil VR add-on. Unfortunately I'm stuck at the second step: I've downloaded the Pupil software but can't figure out how to "extract the pupil capture app to desktop".
@user-d77d0f You will need the 7z software to do the extraction https://7-zip.org/
When I extract it, should it appear as an app?
Because it just appears as a folder
@user-d77d0f No, you should see a folder, with a bunch of files in it
Oh ok perfect
One of them is the pupil_capture.exe
which you will have to execute
Aah indeed
Thank you!!
@here 📣 Pupil Software Release Update v1.16-80
📣
This release addresses deprecated recordings and an issue with creating offline calibrations.
Deprecated Recordings - https://github.com/pupil-labs/pupil/pull/1668 Recordings made with versions prior to Pupil Capture v1.3 or Pupil Mobile r0.21.0 have been deprecated. Pupil Player v1.16 is able to open these recordings with some limiting assumptions. Please see the dedicated section in the release notes for more information: https://github.com/pupil-labs/pupil/releases/tag/v1.16
Fixed issue with creating offline calibrations - https://github.com/pupil-labs/pupil/pull/1667 /cc @user-bda130
We highly recommend downloading the latest v1.16-80 bundle: https://github.com/pupil-labs/pupil/releases/tag/v1.16
Hi, I use version 1.2.1 and I enabled the OTG setting on my phone. It worked, but today I have a problem: the eye tracker is not detected (none of the three cameras) on the phone. The eye tracker is detected on the computer. I use a proper USB-C to USB-C cable and did not change the cable between yesterday and today. I tried another cable in case the cable was the issue, but that did not work either. Could you please help me find a solution?
For reference https://github.com/pupil-labs/pupil-mobile-app/issues/32
Hi, does Pupil Labs have a sales representative near the New York City area? We would like to reach out to get some information about the device and possibly set up an evaluation demo if available. Thank you!
Hi, excuse me. I have a problem with "python setup.py build" in the pupil_detectors folder under Win10. It shows:

C:\Users\admin\pupil\pupil_src\shared_modules\pupil_detectors>python setup.py build
running build
running build_ext
building 'pupil_detectors.detector_2d' extension
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -I. -IC:\Users\admin\Anaconda3\lib\site-packages\numpy\core\include -IC:\Users\admin\opencv\build\include -IC:\Users\admin\ceres-windows\Eigen -IC:\Users\admin\ceres-windows\ceres-solver\include -IC:\Users\admin\ceres-windows\glog\src\windows -IC:\Users\admin\ceres-windows -I../../shared_cpp/include -Isingleeyefitter/ -IC:\Users\admin\Anaconda3\include -IC:\Users\admin\Anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.14.26428\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\cppwinrt" /EHsc /Tpdetector_2d.cpp /Fobuild\temp.win-amd64-3.6\Release\detector_2d.obj -D_USE_MATH_DEFINES -std=c++11 -w -O2
detector_2d.cpp
C:\Users\admin\ceres-windows\ceres-solver\include\ceres/internal/port.h(50): fatal error C1189: #error: One of CERES_USE_OPENMP, CERES_USE_CXX11_THREADS or CERES_NO_THREADS must be defined.

I have no idea about it. Could I get help from anybody? Thanks very much!
Hi, I tried Pupil Mobile on a Motorola G6 Plus and it works!! The problem is that on my Nexus 5X, Pupil Mobile doesn't recognize the eye tracker... What can I do?
@papr I just upgraded my Pupil version (I was a couple of versions behind) and now the 3D mapping doesn't seem to work anymore for me. I reset all settings in the menu, but I only get "gaze.2d." messages, e.g.:

gaze.2d.0. : {'topic': 'gaze.2d.0.', 'norm_pos': [0.5165608752073741, 0.4801285240183113], 'confidence': 0.9991866616640561, 'timestamp': 90103.466044, 'base_data': [{'topic': 'pupil.0', 'circle_3d': {'center': [0.49586033790509365, 0.4667618615398883, 59.52160397615749], 'normal': [-0.4517110829734998, 0.3169004307812576, -0.8339851404488939], 'radius': 1.4229377740165856}, 'confidence': 0.9991866616640561, 'timestamp': 90103.466044, 'diameter_3d': 2.845875548033171, 'ellipse': {'center': [165.29948006635973, 124.7691542356053], 'axes': [24.7630653250132, 29.64819193133116], 'angle': -36.03045521359836}, 'norm_pos': [0.5165608752073741, 0.4801285240183113], 'diameter': 29.64819193133116, 'sphere': {'center': [5.9163933335870915, -3.336043307835203, 69.52942566154422], 'radius': 12.0}, 'projected_sphere': {'center': [212.7569993844032, 90.25220859832586], 'axes': [214.01010951007933, 214.01010951007933], 'angle': 90.0}, 'model_confidence': 1.0, 'model_id': 1, 'model_birth_timestamp': 90097.299134, 'theta': 1.8932560159820968, 'phi': -2.0671904631690197, 'method': '3d c++', 'id': 0}]}
Did I forget any setting?
@user-54376c This is after doing a 3D calibration, correct? I ask because, without a previous calibration, the dummy gaze mapper just outputs the 2d pupil position one-to-one.
Without calibration
like before updating
alright after a calibration it's 3d, thank you
(working on a new 3d calibration, hence no calibration before)
Hi there! Has anybody tried to do a calibration for an experiment with three monitors?
Hi @user-2be752 The calibration is usually independent of the number of monitors used in your experiments.
Thanks @papr, but if I calibrate only for one monitor, won't the fixations that fall on the other two monitors (to the left and right of the calibrated one) have very low confidence?
@user-2be752 The calibration is relative to the subject's field of view/the scene camera's field of view, not to a physical object. Unless the subject's head is fixed via a head rest, it is likely the subjects will turn their heads to look at the different monitors, thereby moving their field of view with them. So the calibration should apply in the same way as it did for the first monitor.
Of course, if your subjects move their gaze outside of the calibration area, e.g. if their head is fixed and they need to look out of the corner of their eyes, then the gaze estimation will likely be worse.
In the second case, I would recommend doing a manual marker calibration that is spread over the subject's complete field of view.
Hi all, our lab has the Pupil Vive add-on tracker. One of the eye cameras has stopped working: eye0 starts in ghost mode, while eye1 starts properly. Are there any suggestions on how to fix this, or what the problem could be?
@user-65df3f please contact info@pupil-labs.com in this regard.
Thank you for quick response
I built a DIY headset with the Logitech HD Webcam C615 and Microsoft LifeCam HD-6000, and am trying to operate it with the pupil capture gui. However, neither camera is recognized by Pupil's software (the world view camera feed remains in Ghost mode). Any assistance would be greatly appreciated!
@user-65b830 Hey, which operating system are you using?
@papr Thanks for the info! My subjects can move their heads freely. I guess then i'll test just calibrating in one screen and then see if the confidence drops when they look at their other screens
what's the best way to verify the accuracy/measure the error of a calibration model? wasn't there a plugin for that? 🧐
@user-54376c In Pupil Capture, the Accuracy Visualizer; in Pupil Player, the Offline Calibration.
The accuracy visualizer uses the data of the calibration process, right? Isn't there a way to independently "verify" it with new gaze points afterwards? Or do I get this wrong?
Yes, just hit the T on the left. It uses different target points, but the calculations are the same.
thank you!
@papr I am using Windows
@user-65b830 in this case follow steps 1-7 from this guide https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
@papr thank you!
@papr pupil capture works with my webcam now!! Thank you for such clear instructions
Hi, thanks for making your Pupil work available. We are trying to use Pupil Player to parse the pupil size and location from preexisting video. We have constructed a directory including info.player.json, eye0.mp4, and eye0_timestamps.npy. Offline pupil detection works! But we have a problem: the pupil detection crashes after 39% of the video.
It is always 39%, regardless of how many frames the video has (5000-15000 is what we have tried). We have tried adjusting the recording start and end in info.player.json, and writing different timestamps to eye0_timestamps.npy. None of that changes the error.
The exception is in file_backend.py, line 400: av_frame is not defined. Looks to me like that means self.frame_iterator is None.
Before I add more details- does anything here ring any bells? Thanks!!
@user-33ed6d did you make sure that the timestamp file has exactly the same number of timestamps as there are video frames in the video file?
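If you synthesize the timestamp file yourself, the key constraint is that the array length equals the frame count exactly. A minimal sketch, assuming a known frame count and capture rate (both values here are placeholders):

```python
import numpy as np

num_frames = 5000   # must equal the exact number of frames in eye0.mp4
fps = 120.0         # assumed capture rate of the eye video

# One monotonically increasing timestamp (in seconds) per video frame
timestamps = np.arange(num_frames, dtype=np.float64) / fps
np.save("eye0_timestamps.npy", timestamps)
```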
@papr What is odd is that the GUI always stops at 39%, even if we make the timestamps invalid, e.g. set all the timestamps in the eye video timestamp file to zero. Besides changing that timestamp file, is there anything else you would try?
Are there any requirements that the Pupil software has for USB webcams? My USB webcam cannot be used with Pupil.
Chiming in on Mark H's comment - yes, we generate the npy file by creating an array of 0:num_frames, then dividing by the framerate. The resulting file has the exact number of timestamps, and they correspond to the frame times. Also, we are getting this warning (which has a 'This should never happen' comment in the code) on what looks like every frame:
@user-94edb6 the video's packet pts are different from the frame pts. You will have to reencode the video with new pts.
gotcha, thank you for the help!
Eye 0 suddenly failed with my Pupil glasses. It says video_capture.uvc_backend: Init failed
Capture starts in ghost mode. I uninstalled the drivers, reinstalled them, unplugged and replugged the connectors, and tried a different machine.
@user-06ca4e this is likely a hardware issue. Please contact info@pupil-labs.com in this regard.
Hi, I would like to do some post processing on the exported eye videos, but there might be an issue with exporting the videos. (win10, PL 1.13)
filename = 'eye1.mp4'
vid = imageio.get_reader(filename, 'ffmpeg')
nums = range(90, 120)
pylab.imshow(vid.get_data(91))
vid.get_length()
vid.get_meta_data()
Output:
{'plugin': 'ffmpeg', 'nframes': inf, 'ffmpeg_version': '4.1 built with gcc 8.2.1 (GCC) 20181017', 'codec': 'mpeg4', 'pix_fmt': 'yuv420p', 'fps': 65535.0, 'source_size': (192, 192), 'size': (192, 192), 'duration': 34.42}
The "fps" cannot be read correctly, and subsequently the readout of single images does not work correctly. Can you suggest a library to edit videos?
Hi, I am using Pupil Remote to communicate with Pupil Capture. I am activating the gaze calibration through the command 'C'. The calibration can be initiated in Pupil Capture, but I would like to know if there is any way for Pupil Capture to notify me that the calibration was successful.
@user-94edb6 Please also make sure to delete any *_lookup.npy files, since they include a cache of the packet pts. Pupil Player will simply regenerate the files on opening the recording.
@user-6d4ea5 Please see the first section of this document: https://gist.github.com/papr/b258e0e944604375752eae502b4ad3d5
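The gist linked above covers the details; roughly, Capture broadcasts notifications over its Network API, and (assuming the standard notification topics) a finished calibration arrives as `notify.calibration.successful` or `notify.calibration.failed`. A sketch of a subscriber, assuming Pupil Remote is reachable on its default port 50020:

```python
def is_calibration_result(topic):
    # Assumed notification topics for the calibration outcome
    return topic in ("notify.calibration.successful", "notify.calibration.failed")

def wait_for_calibration_result(host="127.0.0.1", remote_port=50020):
    # Deferred imports: pyzmq and msgpack are only needed when actually connecting
    import zmq
    import msgpack

    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{host}:{remote_port}")
    req.send_string("SUB_PORT")      # ask Pupil Remote for its subscription port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{host}:{sub_port}")
    sub.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration")

    while True:
        topic, payload = sub.recv_multipart()
        topic = topic.decode()
        if is_calibration_result(topic):
            return topic, msgpack.unpackb(payload)
```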
@user-14d189 The easiest way to read images from a video is via opencv: https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-read
Hi everyone, I have a question regarding synchronizing the pupil capture with another simultaneously running recording system. I am using the HTC Vive add-on and acquiring EEG data at the same time. I run Pupil Capture and EEG recordings on two different computers. Every minute I send a 500ms TTL pulse to the EEG system and to an LED that is in the field of view of the pupil cameras to 'synchronize' the two systems but was wondering if there is a neater way of doing it on the Pupil side. Is there a way that I can read the TTL in the Pupil Capture software to mark the correct timing of the pulse just using software? Thanks in advance!
@user-b0f1b6 Have you, by any chance, sent an email to info@pupil-labs.com in this regard already?
@user-b2ed5c you can send triggers, so called annotations in Pupil: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
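The linked helper script sends annotation datums over the Network API. As I understand the format, the essential fields are a topic of "annotation", a label, and a timestamp in Pupil time (not the sender's wall clock). A sketch of building one such datum; the extra `channel` field in the usage below is a hypothetical example of attaching trigger metadata:

```python
def new_annotation(label, timestamp, duration=0.0, **extra):
    """Build an annotation datum in the shape Pupil expects.

    `timestamp` must be in Pupil time (see the Pupil Remote 't' command),
    not the sender's wall clock. Extra key/value pairs end up in the export.
    """
    datum = {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,
        "duration": duration,
    }
    datum.update(extra)
    return datum

# Hypothetical usage: annotate an EEG trigger on channel 3
annotation = new_annotation("eeg_trigger", 1234.5, channel=3)
```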
Thanks, @papr. I've been trying to clear the autogenerated files between runs but I'll be more diligent about it from now on. We are now extracting the pts from mp4, converting to seconds, then writing to npy. This has fixed the crashing bug, thanks! Interestingly, we are still getting the iterator warning, and only getting offline detected data for ~2/5 of the frames in a pattern: 0 1 2, then 7 8 9, then 15 16 17, etc. It's not a big deal as we can sync the timestamps but wondering if you've seen this before. If it's relevant, the pts from the lookup file are non-monotonic while the extracted video pts are.
Some questions about Pupil Invisible. Can eye videos be streamed ? If so, is it possible to have some eye image samples? Are there some research paper about the algorithms? Is Pupil Cloud a free service ? Thank you in advance.
@user-34de89 Hi,
Streaming the eye videos is not officially supported. What we do support is streaming the world/scene video and the realtime gaze data.
@marc should be able to respond to the remaining questions.
@user-34de89 To add to what @papr said: We do not support streaming the eye videos in real-time, but the eye videos can be accessed post hoc. If the ability to stream eye videos is a wanted feature, we might consider adding it. Unlike with Pupil Core, it is not necessary to ever look at the eye videos for most use-cases, which is why we have not added this yet. Let me know if your use-case would require real-time streaming.
Regarding eye image samples: If you could let me know what your use-case is via PM, I can send you an appropriate demo recording.
Research paper on the algorithms: We are currently working on a white paper describing the algorithms and giving a comprehensive summary of the performance in various conditions. I cannot announce a release date for this yet, though.
Pupil Cloud: Yes, this is a free service.
Hello, I'm using the Pupil Core glasses. They got unplugged during data collection, but I'm hoping to use what we had until then. Unfortunately, the files weren't generated properly (timestamps and world intrinsics files are missing). Is there any way to recover this data?
@user-072005 The missing timestamp files indicate an incomplete recording. Unplugging the device does not abort a recording in such a way in newer versions of Pupil Capture.
The incomplete recording must result from another issue, or you have been using a very old version of Capture.
Unfortunately, in most cases the actual issue is not the missing timestamp files but the video files being broken beyond repair.
To check this, please attempt to open the recorded world and eye videos using the VLC Player. If they open without an error message, there is hope for recovery.
Also, could you let us know which version of Capture you are using?
Ok, I need to download VLC on this computer. I'm using
1.15
That is a fairly recent version. Did Capture crash during this recording?
But the video previews of the world and eyes were gone, and I closed the rest of the windows, which may be what did it
@user-072005 Yes, the preview stops when the camera disconnects. But you can reconnect the device and after some seconds the preview should work again. I would recommend you to give it a try in a test recording. 👍
Ok, so if it happens again, how do I properly end the recording?
Either reconnect the device and continue, or hit the "R" button as if you wanted to stop the recording normally.
Nonetheless, at least the world video and timestamps should be present if you closed the windows during the recording.
Can you share a list of the files that were generated for this recording?
Even if I close the one that looks like the command prompt?
Yes
They won't open in VLC. Luckily, I think I can get this person to participate again
Ok, in this case, the recording cannot be restored. Sorry for that.
We will improve situations where the recording was not stopped via the "R" button.
It is indeed possible that closing the terminal-like window caused the recording to be interrupted abnormally.
In the future, always stop the recording normally first, and shut down the application by closing the world window.
And if the world window is closed already, I can still click R and it will end properly?
No, the R button is only available in the world window. Once the world window is closed, the recording is being stopped or aborted.
hm...so it happened by itself
Also, as recommended above, make a test recording where you disconnect and reconnect the device, and check how it looks in Player. You should see gray frames during the period in which the device was disconnected.
I'll play with it. Maybe I just thought that was the eye video that doesn't exist
@user-072005 If any of the windows disappears by itself, the program crashed. In this case it is recommended to
1. Stop the recording if possible
2. Close the world window if possible
3. Make a copy of the capture.log file and share it with us for support purposes.
Only restart Capture afterwards, since restarting overwrites the log file and any information about the crash would be lost.
Oh darn, I just reopened capture. Ok, I will keep that in mind for the future
Do you know where to find the log file?
Next time you encounter a crash, feel free to open an issue on Github with the log file. This way we can track the issue and make sure the report is not lost in the amount of chat messages here. 🙂
You've actually told me how to before, but I forgot
wait, I found it
Oh, yeah it definitely crashed somehow. Hopefully that's the only time, but I'll keep the capture.log if it does happen again
(now that I've played with disconnecting in capture)
@user-94edb6 Sorry if I was not clear before:
the video's packet pts are different from the frame pts. You will have to reencode the video with new pts.
The mp4 file is foremost just a container for different types of streams, e.g. video in this case.
A stream is a series of packets. Each packet has a pts. You can access packets via "demuxing", which is different from "decoding". If you "decode" a packet, you get a "frame" with the actual image/audio/etc. data. A packet can yield multiple frames when decoded. Each frame has its own pts.
When we record videos in Capture and Pupil Mobile, we write the videos such that each packet yields one frame and that the frame has the same pts as the packet.
This does not seem to be the case for your video.
We use the packet pts to build the lookup table since this is quick. When processing the video we use the pts of the decoded video frames and compare it to the lookup table to check our current location within the video.
If the assumption of packet.pts == frame.pts is broken, the lookup fails and you get the above warning.
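The lookup described above can be sketched abstractly: build a table from the pts gathered while demuxing, then locate each decoded frame by its own pts. When the packet pts and frame pts diverge, the lookup misses — which is the situation producing the warning. This is an illustration of the mechanism, not Pupil's actual implementation:

```python
import bisect

def build_lookup(packet_pts):
    # Table of presentation timestamps gathered by demuxing (cheap, no decoding)
    return sorted(packet_pts)

def locate_frame(lookup, frame_pts):
    """Return the index of a decoded frame in the lookup table.

    Raises KeyError when the frame's pts is not in the table, i.e. when the
    packet.pts == frame.pts assumption is broken.
    """
    i = bisect.bisect_left(lookup, frame_pts)
    if i == len(lookup) or lookup[i] != frame_pts:
        raise KeyError(f"frame pts {frame_pts} not found in packet pts table")
    return i
```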
@user-072005 were you able to reproduce the crash?
No, it keeps the world video up but pauses when I disconnect, which is not what happened before
I had it on a USB extender though, and it seems my USB extender broke
Mmh, a broken extender should not result in a crash, but I cannot guarantee it.
It seems like it would be the same as any other disconnect. Well, if it happens again I'll post it in the problems page on github. Thanks
thanks @papr, how do I integrate/run the annotation code with Pupil Capture? Do I run it on both the EEG and Pupil Capture computers? I am very new to Python, but it looks like the annotations are sent via a network? Due to lab regulations, the EEG acquisition computer may not be allowed on a network, but I will check this.
Alternatively, is there a way to read an analog input channel from the serial port in the Pupil Capture code? I think that might be the easiest way for me.
Is there any way to totally remove the vis polyline when exporting a video from Pupil Player?
@user-a7dea8 Yes, you will have to turn off the plugin. You can do it from the plugin's menu.
@user-b2ed5c Yes, the script uses the Network API of Pupil Capture. Nonetheless, the API also works without an explicit network, i.e. on the same computer, or on a computer connected directly via Ethernet.
Anyone knows where I can find a detailed explanation of the export data on the new homepage/github?
@user-e4aafc Which export data specifically? pupil_positions.csv and gaze_positions.csv? Or other files?
@papr ideally both. we are trying to load the data into a visualization platform and we try to understand the relation between eye_center and gaze_point
@user-e4aafc I have just added it to the docs. It should come online soon.
It will be available below this section: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
@papr awesome thanks!
@user-b2ed5c One way to synchronize EEG and Pupil Capture is to write a script that manages the recording.
It would be responsible for starting the Capture recording, as well as for sending triggers to EEG and annotations to Pupil.
The annotations would include the value of the EEG trigger, as well as the current timestamp.
Please be aware of the different clock used by Capture: https://docs.pupil-labs.com/core/terminology/#timing
For clarification: The example script is run additionally to Capture.
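One way to handle the clock difference mentioned above: measure the offset between the managing script's clock and Capture's clock once (Pupil Remote answers the "t" command with its current time), then apply it to every local timestamp. A sketch under that assumption; `request_pupil_time` is a hypothetical stand-in for the actual Pupil Remote round-trip:

```python
import time

def measure_offset(request_pupil_time):
    """Estimate (local_clock - pupil_clock) using the round-trip midpoint.

    `request_pupil_time` is any callable returning Capture's current time,
    e.g. a Pupil Remote 't' request.
    """
    t0 = time.monotonic()
    pupil_now = request_pupil_time()
    t1 = time.monotonic()
    return (t0 + t1) / 2 - pupil_now

def local_to_pupil(local_ts, offset):
    # Convert a local (time.monotonic-based) timestamp to Pupil time
    return local_ts - offset
```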
@papr Another question just arose... we are using the Pupil with a RealSense depth camera, but the resulting vector length between eye_center and gaze_point can be manually adjusted (in Player -> Vector Gaze Mapper -> gaze distance), which makes the depth camera obsolete. Is there an additional output such as the actual distance between the eye and the gazed-at object?
@user-e4aafc No, the 3d gaze mapper does not use that option. One would have to either:
1) write a custom gaze mapper that makes use of this information (difficult, since gaze mappers do not have access to the current frame), or
2) write a custom plugin that modifies the existing gaze data (would only impact gaze data stored during a recording, not the gaze data published in the network API), or
3) calculate custom gaze data post hoc by combining the gaze output of Pupil Player with the recorded depth data.
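The post-hoc option could look roughly like this: take the eye center and gaze normal from the exported gaze datum and scale the unit gaze direction out to the depth measured along that ray. This is only an illustration and assumes the depth is measured in the same coordinate units along the gaze ray:

```python
import numpy as np

def gaze_point_at_depth(eye_center, gaze_normal, depth):
    """Place the 3d gaze point `depth` units from the eye along the gaze ray."""
    direction = np.asarray(gaze_normal, dtype=float)
    direction = direction / np.linalg.norm(direction)
    return np.asarray(eye_center, dtype=float) + depth * direction
```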
@papr thanks for all your advice on pts and offline pupil detect. We have a good sense of what to try.
On another note, we have gotten Pupil running from source on macOS using Conda (so we don't have to put everything into the system-level Homebrew install in /usr/local). The patch to do this is somewhat rough - are you interested in it? If so, we can try to clean it up a bit; if not, we won't bother.
@user-33ed6d Sorry, we are working on a different approach. We want to reduce the required system-wide libraries instead. 👍 e.g. by shipping them as part of the python module (wheel)
Sounds fine thanks.
Hi, I am using Pupil Invisible (beta version) and I want to read the files with extensions .raw, .bin, .time, and .time_aux. Any advice on how I can read them?
@user-df9629 Please check out our documentation in this regard https://docs.pupil-labs.com/developer/invisible/#recording-format
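As I read the linked docs, the .time files are flat binary arrays of 64-bit integer timestamps in nanoseconds, which numpy can read directly. The dtype is an assumption taken from the docs — verify it against your recording:

```python
import numpy as np

def read_time_file(path):
    # Assumption: one little-endian uint64 nanosecond timestamp per sample
    return np.fromfile(path, dtype="<u8")
```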
Hi, guys! I keep noticing an 'error dc' message, and I don't know what it means. Can anyone explain it for me?
Also, I tried AprilTags and processed them by removing the white border. Obviously, they could not be detected. Don't make the same stupid mistake I did.
Another question: what is 'pip install pupil-apriltags' for? Is it used to generate AprilTags?
@user-9c3078 could you give us more context in which the error appears? Is it during the usage of surface tracking?
pip install pupil-apriltags
This is only required if you run Pupil from source instead of using the released bundle.
This package wraps the original apriltag detection code: https://github.com/AprilRobotics/apriltag
@papr Yes, during surface tracking in Capture; maybe it also happens with other functions. But in Player, the 'error dc' comes from libav.mjpeg, and it also reports an error with some xy coordinates.
@user-9c3078 thank you, could you let us know what hardware you are using? Are you using a Pupil Core headset? If yes, does it have the 120Hz or 200Hz eye cameras?
@papr I am now using Pupil Core headset with 200 hz camera
@user-9c3078 The error message is a result of a failed attempt to decode the video frames. This is often due to the frames not being fully transmitted. Are you using a USB cable extender by any chance? And are you using the cable that came with the Pupil Core headset?
@user-9c3078 I am using the cable with the pupil core headset
@user-9c3078 are you getting the error message very often or only once or twice?
@papr yeah, I got it very often during recordings, and it happened several times before in September
@user-9c3078 Can you try a different USB port? Do you have a dedicated USB 3 port that you could use?
@papr I tried a different port and there is no 'error dc' now, but there is a worse frame drop of around two seconds, and it happened at the same point in time for two recordings. Is that normal? Also, I found 'video_capture.uvc_backend: Received frame with invalid timestamp. This can happen after a disconnect. Frame will be dropped!' in the terminal. Can you explain what that is? So... I guess there is something loose between my cable and port which results in the frame drops?
@papr BTW, I didn't find how to define the surface rectangle as in the previous version with square markers. How do apriltags define the surface? Is it still meaningful to define the size (width and height)?
@user-9c3078
I guess there is something loose between my cable and port which results in the frame drops? Yes, this seems to be an issue with your headset. Please contact [email removed] in this regard and let them know which steps we have taken towards debugging the issue (using the original cable, different ports, resulting disconnects).
@user-9c3078 The surface definition works in the same way as with the legacy markers. A surface is independent of the markers it is using
@papr So how can I resize the surface with apriltags? I cannot edit the marker as before.
Hi, can I ask what is meant by the start frame and end frame in fixations.csv? @papr
@user-9c3078 I do not understand. Given that you have defined a surface, do you see the two red dots at the top right of the surface, one saying "edit surface"?
@user-deafd0 These are the indices of the world frame during which the fixation started and the world frame during which the fixation ended
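To illustrate how those indices tie a fixation to world frames, here is a small sketch against a hypothetical miniature fixations.csv. The column names follow Pupil Player's export format, but verify them against your own file's header:

```python
import csv
import io

# Hypothetical miniature fixations.csv; real exports contain more columns.
FIXATIONS_CSV = """id,start_timestamp,duration,start_frame_index,end_frame_index
1,1000.10,150.0,12,21
2,1000.40,120.0,25,32
"""

def fixation_frame_range(csv_text, fixation_id):
    """Return the (start, end) world-frame indices spanned by one fixation."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if int(row["id"]) == fixation_id:
            return int(row["start_frame_index"]), int(row["end_frame_index"])
    raise KeyError(fixation_id)

start, end = fixation_frame_range(FIXATIONS_CSV, 1)
print(start, end)  # 12 21: fixation 1 spans world frames 12 through 21
```

The returned indices can then be used to slice the corresponding frames out of the exported world video.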
@papr No, I didn't see anything except the marker
@user-9c3078 Is the marker being detected? It should have a green overlay
You are using apriltag markers correct?
@papr Yeah, I can see the green overlay
Wait a minute, I can show you the screenshot
@user-9c3078 Maybe you just need to add another surface definition? Just hit the "A" on the left side. A screenshot would be helpful, yes.
Let me try adding a new surface.
I see that there is a surface definition, but it might be an older one which is based on different markers.
I should say that a surface is either defined on the legacy markers or apriltag markers, not both at the same time.
@papr okay, I got it. It worked when defining a new surface. Thank you !!!
Hi! Is there any method I can use to define my surface precisely? For now, I know that even though we see the distorted image on screen, the data we get is undistorted. But when we define the surface, it actually gets defined on the distorted image, right?
Thank you @papr !!
Hi! I just downloaded the newest version of Pupil Capture and found that my apriltags, which could be detected before, cannot be detected anymore. Can anyone help me?
@user-9c3078 Which version were you using before?
Also, which operating system are you using? You deleted your screenshot from earlier, didn't you?
@papr Yeah, I deleted that. I used v1.16-71 before; now I am using v1.16-80
Also, I found a weird thing: my world FPS and eye FPS look fine, but there are still some frame drops, which I think are because of CPU load or something? BTW, I use Linux
@user-9c3078 Are you still seeing
video_capture.uvc_backend: Received frame with invalid timestamp. This can happen after a disconnect. Frame will be dropped!'
This means that there were frame drops due to a connection issue. Are you using a different headset by now?
But yes, frame drops may happen due to missing cpu resources
@user-9c3078 Re apriltag detection: You started the surface tracker again (it will be disabled after the update) and the green overlay does not show anymore?
@papr Yeah, I still get that, but it happened during my recording, which is bad. And I think the same problem happened again: in a 20-minute recording, the last 5 minutes don't have any gaze information, even though it's not ghost mode.
@user-9c3078 Did you contact info@pupil-labs.com already?
@papr No I haven't, do you need that?
And this is what it looks like
But the first time I opened it in Player, there was a line in the world FPS graph with the same length as the pupil one
@user-9c3078 There is a hardware issue with your headset that probably needs to be fixed. Otherwise you will keep getting disconnects. Please contact info@pupil-labs.com in this regard.
@papr okay, thank you so much!
BTW, is the order ID the same as the one on the box? Because I guess I need that for the remote debugging
Yes, that should help. You can find it on the shipping label. Alternatively, you can let us know the email address/name which was used to purchase the device. Please put the information into your mail instead of posting it here.
Also, my terminal keeps displaying 'DEBUG:av.buffered_decoder:frames to buffer = 1'. What does that mean?
It is a debug message for a Player specific implementation detail. It basically tells you how many new frames are being buffered during playback.
Cool! Thank you soooo much for helping!
Hi, I was just thinking that everything in monochromatic light looks gray, doesn't it? And recording the eyes as a gray movie might bring the resource requirements down, mightn't it? On the other side, compressed mp4 ...
@user-14d189 The cameras provide the data as an mjpeg stream. JPEG uses the YUV color space, in which the Y (luma) channel is basically the gray image, making it inexpensive to extract.
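To illustrate the point (this is not Pupil's actual decoding code): once a frame is in a planar YUV layout such as I420, the gray image is just the first plane, with no per-pixel color math required. The buffer below is a dummy stand-in for a decoded frame:

```python
import numpy as np

# Dummy stand-in for a decoded I420 (planar YUV 4:2:0) frame buffer:
# w*h luma (Y) bytes followed by two quarter-size chroma (U, V) planes.
w, h = 64, 48
frame = np.zeros(w * h * 3 // 2, dtype=np.uint8)

# The Y plane IS the grayscale image; extracting it is a cheap slice.
gray = frame[: w * h].reshape(h, w)

print(gray.shape)  # (48, 64)
```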
Hi, I am using the Pupil Invisible (beta version) and I noticed a mismatch between the number of timestamps for the IMU [say A timestamps] and the number of IMU raw values [say B raw values] after segregating them into their components [i.e., B/6 != A]. I unpacked the raw file as float32 little-endian and the timestamps file as unsigned int64 little-endian. I have observed this on multiple recordings. Is there something I am missing?
Hey @user-df9629, can you check if there is a specific ratio between the number of B and the number of A? And if it is consistent between the different recordings?
I will check it right away
@user-df9629 My guess is that it is around 6 * 80 * len(A) = len(B)
@papr , The ratio of len(B) / len(A) is between 5.5 and 6. It is inconsistent between different recordings.
@user-df9629 oh ok, thank you for reporting the issue. I will forward it to the responsible team member 👍
@papr , you're welcome! 👍
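For reference, the unpacking and ratio check discussed above can be sketched with NumPy. The 6-values-per-sample layout is an assumption taken from the conversation, not a confirmed file spec, and the byte buffers below simulate the two files:

```python
import numpy as np

# Simulate the two files: 10 IMU samples, 6 float32 values each (assumed
# layout, e.g. gyro xyz + accel xyz), plus one uint64 timestamp per sample.
n_samples = 10
raw_bytes = np.arange(n_samples * 6, dtype="<f4").tobytes()  # raw-file stand-in
ts_bytes = np.arange(n_samples, dtype="<u8").tobytes()       # timestamps stand-in

raw = np.frombuffer(raw_bytes, dtype="<f4")        # B raw values, little-endian float32
timestamps = np.frombuffer(ts_bytes, dtype="<u8")  # A timestamps, little-endian uint64

samples = raw.reshape(-1, 6)        # one row per IMU sample
ratio = len(raw) / len(timestamps)  # should be exactly 6.0 when counts line up
print(ratio)
```

A ratio that drifts between 5.5 and 6, as reported above, means some samples lack timestamps (or vice versa), which this check makes easy to spot per recording.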
Hi, I'm trying to use pupil mobile for recording audio. I enabled the audio capture plugin but it did not detect any audio source. Is it even possible to use pupil mobile for recording audio?
@user-fd5a69 hey, did you mean Pupil Capture by any chance? Pupil Mobile records the phone's built-in microphone by default.
Hello all,
I posted this on the research channel, but got no response, so I'll ask here too. I am currently involved in research that needs eye-tracking data from videos where the person wearing the tracker is moving (walking, running, driving, playing some sport, etc). Do you know of some recording repository that might help me?
Thanks!
They could be videos not necessarily recorded with pupil, but any other eye-tracking tech
Hey @user-516564 I am not aware of such a repository but you might get lucky by checking the papers that cite Pupil: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?ts=576a3b27#gid=0
Thanks @papr ,I will check that
@user-fd5a69 In case you are trying to stream from Pupil Mobile into Capture: streaming audio is not supported. The only way to record audio with Pupil Mobile only is to make a recording on mobile and then import this into Pupil Capture afterwards. Or you only stream video from Pupil Mobile and record audio via some other audio input device connected to your machine.
@user-c5fb8b I see. Thanks for the info!
@user-fd5a69 I am curious. Were you indeed trying to stream the audio?
No, I want to record audio from pupil mobile into pupil capture.
@user-fd5a69 Maybe to slightly correct @user-c5fb8b 's comment:
"The only way to record audio with Pupil Mobile only is to make a recording on mobile and then import this into Pupil Capture afterwards." He meant: then import this into Pupil Player afterwards
The usual workflow is as follows:
Alternatively:
1. Make a recording on the computer using Pupil Capture
2. Open the recording in Player
The possibility to stream video from Pupil Mobile to Pupil Capture should only be used for monitoring, not for making a recording. Yes, it is technically possible, but not recommended because there might be frame drops if the connection quality is bad.
Protip: You can use the Remote Recorder plugin to remotely start and stop recordings on the phones, using Pupil Capture.
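Separately from the Remote Recorder plugin, recordings on the Capture machine itself can be started and stopped over the network via the Pupil Remote interface (default port 50020; "R" starts and "r" stops a recording, per Pupil Remote's documented protocol). The helper names below are my own, and pyzmq is assumed for the actual send:

```python
def build_record_command(start=True, session_name=None):
    """Build a Pupil Remote command string: 'R' starts, 'r' stops a recording."""
    if not start:
        return "r"
    # An optional session name can be appended to the start command.
    return f"R {session_name}" if session_name else "R"

def send_command(host, cmd, port=50020, timeout_ms=2000):
    """Send one command to Pupil Remote and return Capture's reply string."""
    import zmq  # pyzmq; imported lazily so the helper above works without it

    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.RCVTIMEO, timeout_ms)
    sock.connect(f"tcp://{host}:{port}")
    sock.send_string(cmd)
    return sock.recv_string()

# Example (requires a running Pupil Capture instance on that host):
# send_command("127.0.0.1", build_record_command(start=True, session_name="trial_01"))
print(build_record_command(start=True, session_name="trial_01"))  # R trial_01
```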
I'm currently using the second way for recording world and eye videos. The eye trackers are connected to phones running Pupil Mobile, and I use Pupil Capture on a local computer for starting the recording (Pupil Remote) and store the recording in a recording folder on the computer. But the recorded video does not have any sound, and I couldn't find any audio file in the recording folder. So I'm wondering whether I can record audio using Pupil Capture.
@user-fd5a69 Maybe to clarify:
- Make recording on Computer using Pupil Capture
- Open recording in Player
This workflow expects the Pupil Core headset to be connected directly to the computer running Pupil Capture, not to the phones running Pupil Mobile
Could you let me know if you are using the "Local USB" or the "Pupil Mobile" manager in Pupil Capture? This would help me to better understand your setup 🙂
Sorry I wasn't clear. I am using pupil mobile.
@user-fd5a69 Ok, thank you for the clarification :)
"The possibility to stream video from Pupil Mobile to Pupil Capture should only be used for monitoring, not for making a recording."
Then, if I understood correctly, you are doing this 👆 I just want to emphasise again, that this is not recommended due to the possibility of data loss during the recording.
My recommendation would be to make the recording on the phone directly. It records the phone's audio by default. You can start the recording remotely, by using the "Remote Recorder" plugin. https://docs.pupil-labs.com/core/software/pupil-capture/#network-plugins
Ok, I see. I'll try making a recording on the phone and see if that solves my problem. Thank you for your help! @papr
@user-fd5a69 Sure thing! Let us know how it went 🙂
Hi, pupil_player is crashing because it cannot find eye0_lookup.npy. Looking in the recording from pupil_capture, eye0_lookup.npy is indeed missing. How do I get pupil_capture to create eye0_lookup.npy?
@user-a82c20 is there a eye0_timestamps.npy file?
Hi, I exported raw data from Pupil Player using Pupil Core, but I couldn't fully understand all the items in those .csv files. I have looked through the user guides. Where can I get specific explanations of those items?
Hello
@user-7ba0fb This is the relevant section in the docs: https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter
Additionally, I recommend having a look at the terminology section: https://docs.pupil-labs.com/core/terminology/#terminology
Hey @user-0e30fc 👋 welcome to the channel.
Hi, is Pupil Mobile compatible with the OnePlus 5? Or the 3? Does anyone have experience with those devices? Do you have any recommendations?
Yes @user-14d189, Pupil Mobile works on the OnePlus 5. Ensure OTG is on and the app is locked. Please note that enabling OTG and locking the app only apply to OnePlus (not other devices, AFAIK)
@user-14d189 I can confirm that it works on a OnePlus 3 as well.
@wrp @papr Thanks, I'll pass that on.
Hello, I have a question regarding the timestamps -- I'm trying to sync the data from Pupil with Empatica E4. When I looked at the timestamps in Pupil data it looks like this ""
However, the timestamps on this other device (Empatica E4) look like "1570621762", which is the start_time as a Unix timestamp, i.e. seconds since 1970-01-01 UTC.
I noticed in the pupil_info file, that you also show the start time (System) in a comparable format (unix time stamp)
Now, my question is how I could convert the time in "pupil_positions" to a Unix timestamp, i.e. 1570624738 and so on, for the whole "pupil_positions" export file.
Thanks a lot in advance! 🙂
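A hedged sketch of the usual conversion (see Pupil's documentation on clock conversion): the info file stores the recording start both as Unix time ("Start Time (System)") and on Pupil's own clock ("Start Time (Synced)"); their difference is a constant offset valid for the whole recording. The synced value below is made up for illustration:

```python
# "Start Time (System)" / "Start Time (Synced)" are the info-file fields;
# check your recording for the exact spelling. Values here are illustrative.
start_time_system = 1570621762.0   # Unix seconds since 1970-01-01 UTC
start_time_synced = 1234.5678      # the same moment on Pupil's clock

offset = start_time_system - start_time_synced

def pupil_to_unix(pupil_ts):
    """Convert a pupil_positions timestamp to a Unix timestamp (seconds)."""
    return pupil_ts + offset

print(pupil_to_unix(start_time_synced))  # recovers the recording's Unix start time
```

Applying `pupil_to_unix` to every timestamp in the pupil_positions export puts the Pupil data on the same time axis as the Empatica E4.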
Hi, I am using Pupil Invisible and I am trying to collect data using Pupil Invisible Monitor. The gaze prediction on the monitor isn't the same as the preview on the phone. The center of the two red circles are different. I am adding the image for reference. Also, hitting record on the Monitor doesn't record it.
@user-df9629 Hey, Pupil Invisible Monitor is not meant for data collection. It only displays the streamed data and applies an optional manual offset. You can apply the offset by focusing an object in the real world and clicking on it in the preview. The app will calculate the offset between the predicted gaze and the clicked location. This offset is applied to all further gaze until a new offset is set, the offset is reset, or the app is closed. The "R" in the top left resets the manual offset.
Please be aware: Currently, the offset is only applied in Pupil Invisible Monitor, not in the app itself. If you want to make a recording with Pupil Invisible you have to start the recording in the Companion app.
@papr , thank you for clarifying the functionality. I thought the "R" was for record. My bad. One more thing, I noticed the reset button on the companion app but couldn't figure out how to add the offset. Any advice on that?
Pupil Player's grey window does not seem to be working on Windows. When the Pupil Player icon is double-clicked to start, a black window appears, and a grey window with the drag-recording-here prompt is shown in the task bar when the Pupil icon is hovered over, but it is not at all accessible. Dragging a recording onto the taskbar icon or the desktop icon does not resolve this issue.
Worked extensively with what's now called Core last winter and spring, and am now coming back to continue the research. Need to introduce a new batch of students to the system, data formats, etc. Followed the current documentation to this link: https://docs.pupil-labs.com/user-guide/data-format and when I tried to follow it, got a 404. Huh??! Where is the description of data formats? Thanks!
Hi @user-abc667 We recently updated the docs and all the redirects are not yet in place. Apologies for the inconvenience.
The links I believe you are looking for are:
- Recording format: a list of the expected files in each recording: https://docs.pupil-labs.com/core/software/recording-format/
- Pupil data format: https://docs.pupil-labs.com/developer/core/overview/#pupil-datum-format
- Gaze data format: https://docs.pupil-labs.com/developer/core/overview/#gaze-datum-format
@user-a08c80 Hey, could you please try to
1. shut down Player,
2. delete the user_settings_* files in the pupil_player_settings folder (you can find it in your home directory), and
3. restart Player?
@user-df9629 we will respond to your questions in invisible channel.
We've got a problem with Pupil Core, so can you check this screenshot?
Hey @user-09f6c7
hello papr~ we got these messages..
Please try installing vc_redist.x64.exe from https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads
Afterwards, try launching the application again
ok, I'll try immediately, thanks!
I look forward to it @wrp . Thank you!
@user-09f6c7 were you successful?
Hi! I am running pupil_capture on a new machine and there is an error: 'File "shared_modules/pupil_detectors/init.py", line 21, in <module> ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /opt/pupil_capture/pupil_detectors/detector_3d.cpython-36m-x86_64-linux-gnu.so)'. However, I've checked that the version is the newest: 'libstdc++6 is already the newest version (5.4.0-6ubuntu1~16.04.11).' What should I do about that?
@user-9c3078 I do not think libstdc++6 is the issue here, but GLIBCXX_3.4.22... Do I see it correctly that the new system is Ubuntu 16.04?
Which version of Capture have you installed?
@papr Yeah I am using Ubuntu 16.04.6 LTS and capture version is v1.16-80
Is that a software or a hardware issue?
@user-9c3078 Definitively a software issue. I am looking into it.
@user-9c3078
"I've checked the version which is the newest: 'libstdc++6 is already the newest version (5.4.0-6ubuntu1~16.04.11).'" How did you check that?
@papr By going into the files under /usr/lib/x86_64-linux-gnu/, and I also tried the upgrade command
"command upgrade" Could you please clarify what you meant by that?
Have you run this already?
sudo apt-get install libstdc++6
@papr yeah, that's the command I use
And my gcc version is 5.4
Yes I have run it and I get: libstdc++6 is already the newest version (5.4.0-6ubuntu1~16.04.11)
Hi, we are looking into pupil diameter changes in response to certain light stimuli. Do you have parameters that allow us to look at horizontal and vertical diameters? Is it possible to calibrate the pupillometer to mm as opposed to pixels? Do you have a file with the technical specs of the hardware?
hey @user-90eebd 👋
The pupil detection algorithm assumes the pupil to be a circle. Therefore, we only report one diameter. If you use 3d detection, you can get the diameter in mm (the diameter_3d column in the pupil_positions.csv file exported by Pupil Player). I would recommend freezing the 3d eye model once it is fitted well, to avoid a noisy diameter signal due to model changes.
@user-90eebd These are the tech specs: https://pupil-labs.com/products/core/tech-specs
@user-9c3078 can you try installing gcc 6? https://askubuntu.com/a/746480
hello PAPR,
hello papr, we found the diameter_3d column (N). The numbers are large (69-88) and alternating in direction (- vs +). It is hard to imagine pupils are 69 to 88 mm; is there a conversion formula? Is the negative vs positive indicating right vs left pupil? Do you know the resolution of pupil diameter change that is detectable?
@user-90eebd Please see the id column. It indicates which eye the diameter belongs to. 70-90 mm is definitely not correct, and most likely results from a not-well-fitted eye model.
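To make the id/diameter_3d advice concrete, here is a sketch that filters an export by eye and by confidence, which also drops implausible samples from a poorly fitted model. The miniature CSV is hypothetical, and the column names vary between Pupil versions (the chat refers to an id column; newer exports call it eye_id), so check your own file's header:

```python
import csv
import io

# Hypothetical miniature pupil_positions.csv; real exports have many more columns.
PUPIL_CSV = """timestamp,id,confidence,diameter_3d
1000.00,0,0.99,3.1
1000.01,1,0.98,3.0
1000.02,0,0.40,7.9
"""

def diameters_mm(csv_text, eye_id, min_confidence=0.6):
    """diameter_3d values (mm) for one eye, dropping low-confidence samples."""
    return [
        float(row["diameter_3d"])
        for row in csv.DictReader(io.StringIO(csv_text))
        if int(row["id"]) == eye_id and float(row["confidence"]) >= min_confidence
    ]

print(diameters_mm(PUPIL_CSV, 0))  # the low-confidence 7.9 mm sample is dropped
```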
@papr There is a new error: AttributeError: /opt/pupil_capture/libglfw.so: undefined symbol: glfwGetError
@user-9c3078 ok, thank you, that is valuable feedback! Can you try deleting the user_settings_* files in the pupil_capture_settings folder, if there are any?
@papr And run the capture?
Afterwards, yes
@papr Same problem happens
Ok, can you try an older version, e.g. v1.13, as a temporary workaround? We will try to fix this issue in the next release.
The older version won't have the same problem? Why?
@user-9c3078 two reasons: 1. We had to increase the gcc version for the apriltag detectors. We will have to check why this change is not compatible with Ubuntu 16.04. 2. The glfw error function call has been added to our wrapper recently. I guess we did not properly update the bundled glfw.
I am trying to use the LSL Relay plugin, however it is not showing up in Pupil Capture. Therefore, I went into the capture.log in pupil_capture_settings to see what the problem was. This was the error message after "scanning pupil_lsl_relay.py" and "scanning pylsl": Failed to load 'pylsl'. Reason: 'dlsym(0x7fcebdd3b880, lsl_library_info): symbol not found'. Does anyone know how to fix this?
@papr We installed the VC++ Redistributable you recommended, so the log messages have changed, but there are still error messages. We are using the Pupil Core on a LattePanda Alpha (Windows 10 Pro) board.
@user-09f6c7 This is the same error as @user-9c3078 has encountered. We will try to fix it in the next release. Until then, please try an older version of Pupil: https://github.com/pupil-labs/pupil/releases/tag/v1.13
I own a pupil core and wanted to know if I can bypass the use of the android app and connect directly the eye tracker to a smartphone with windows 10
Hi @user-7f1687 can you clarify what specific setup or use-case you mean?
"connect directly the eye tracker to a smartphone with windows 10" Do you mean running Pupil Capture on a non-Android smartphone running Windows 10 Mobile edition? This will most certainly not be possible.
Thanks, I meant running pupil capture on a windows 10 mobile edition. I've just realised that pupil doesn't work on 32-bit machines. Sorry 😉
Do you have experience of running pupil capture on more advanced tablets such as Microsoft Surface Go or other tablets? Do you have any suggestion?
@user-7f1687 We've been able to run Pupil Capture with no problems on a Surface Pro. I can't say anything about the Surface Go though; I am not sure if it will provide enough resources.
@user-c5fb8b thanks a lot for your help. In the meanwhile, if any of you have experience with tablets, let me know. Thanks!
Hello!
I have downloaded Pupil Capture, Player and Service from GitHub, and I am trying to run them. Can I test with my laptop's camera, or only with the Pupil Labs device?
@user-cccded You can use pupil with any usb camera that supports the uvc interface. Most likely your built-in laptop camera will also support uvc.
Keep in mind that you can use out-of-the-box cameras only as world camera and not for pupil detection (and thus the actual eye-tracking). We have a DIY workflow where you build a pupil detection camera from an ordinary webcam though, if you are interested in this: https://docs.pupil-labs.com/core/diy/#diy
Otherwise I'd recommend using any of our products: https://pupil-labs.com/products/
@user-cccded to add on to @user-c5fb8b's response: Pupil Core software is designed for wearable/head-mounted eye tracking. I just wanted to make sure that we differentiate clearly between remote eye tracking systems vs wearable/head-mounted systems.
When you wrote "can I test with laptop cameras", the technical answer is "yes" because if your laptop camera is UVC compliant then you might be able to select it as a source in Pupil Capture. However, Pupil Core software will not work as a remote eye tracking system.
Pupil Core software requires a wearable/head-mounted eye tracking hardware with eye cameras that capture close up videos of the eye in IR and scene/world camera that captures the FOV of the wearer.
Hi, I was wondering if there's a way to overlay the RGB and RGBD videos (from a single recording) in Pupil Player? I'm trying to sync them up but apparently they were saved at different FPS.
Hi, @papr. My lab is using a Pupil Core with children's frames to do research. We have already tried to increase the confidence value to 0.95 and position the eyes in the center of the eye cameras. Are there any ways to improve the results of the calibration?
@papr ok. we'll check the next release. and always thanks for your effort.
@user-a65263 Please be aware of the difference between confidence (the quality of pupil detection) and gaze accuracy (the result of the calibration). Feel free to send one of your recordings to data@pupil-labs.com s.t. we can have a look at it and maybe give further ideas for improvement. Ideally, include the calibration procedure in the recording s.t. we can reproduce it using offline calibration.
@user-222750 this is currently not supported in Pupil Player. You would have to align/playback the videos based on their recorded timestamps. It should be possible to write a custom plugin that implements the overlay.
Alright, thanks, I seem to have synced them up (to a degree) by modifying their FPS and scaling the depth video. I'll see what I can do with the timestamps.
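Since Player cannot overlay the two videos, a rough offline approach is nearest-timestamp matching against each video's recorded timestamps (the `*_timestamps.npy` files in the recording folder). A sketch with made-up timestamps:

```python
import numpy as np

# Made-up timestamps: a ~30 fps world stream and a slower depth stream.
world_ts = np.array([0.00, 0.033, 0.066, 0.100])
depth_ts = np.array([0.01, 0.045, 0.095])

def nearest_indices(source_ts, target_ts):
    """For each source timestamp, index of the nearest target timestamp.

    Assumes both arrays are sorted, as recorded timestamps are.
    """
    idx = np.searchsorted(target_ts, source_ts)
    idx = np.clip(idx, 1, len(target_ts) - 1)
    left = target_ts[idx - 1]
    right = target_ts[idx]
    # Step back one index wherever the left neighbour is actually closer.
    idx -= source_ts - left < right - source_ts
    return idx

print(nearest_indices(world_ts, depth_ts))
```

Each world frame i can then be composited with depth frame `nearest_indices(world_ts, depth_ts)[i]`, which avoids guessing a global FPS rescale.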
@user-c5fb8b @wrp thank you for the answers! I understood the requirements of the software, I am going to see the DIY to build the pupil detection camera. Thanks again!
Hey everyone, I am having a hard time understanding the angle field (key) and its value. Pupil generates it for every gaze datum. What is the essence of it? What does it really show us?
Hi @user-f086ad do you mean the ellipse-angle?
@user-c5fb8b exactly, actually there are two angle keys
@user-f086ad The angle refers to the angle of the ellipse. Here is a section from the OpenCV docs for drawing ellipses; at the bottom of the ellipse function's documentation you can see a graphic of what all the parameters represent: https://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html#ellipse
@user-f086ad The projected_sphere_angle is only present when using the 3D pupil detection and refers to the ellipse that you get when you project the estimated 3D eyeball back onto the image. You can see it drawn as green ellipses in the eye windows.
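For intuition (my own illustration, not library code): an ellipse given as (center, semi-axes, angle) is an axis-aligned ellipse rotated by that angle, so a point on it can be computed directly:

```python
import numpy as np

def ellipse_point(center, axes, angle_deg, t):
    """Point on an ellipse (center, semi-axes (a, b), rotation angle) at parameter t."""
    a, b = axes
    th = np.radians(angle_deg)
    x, y = a * np.cos(t), b * np.sin(t)  # point on the un-rotated ellipse
    # Rotate by the ellipse angle, then translate to the center.
    return (
        center[0] + x * np.cos(th) - y * np.sin(th),
        center[1] + x * np.sin(th) + y * np.cos(th),
    )

# At t=0 we sit on the major axis; with angle=90 that axis points along +y.
print(ellipse_point((0.0, 0.0), (2.0, 1.0), 90.0, 0.0))
```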
You are my hero @user-c5fb8b
@papr You are right. We want both high-quality pupil detection and gaze accuracy. What we want to ask is whether you could give us some tips on operational steps or strategies? We have read through the official documents on your website, but they don't have much information about many of the functions listed in the Pupil Capture program.
Hello Guys, I am using the pupil core for the first time and I am interested in measuring the pupil size. I think I am doing something wrong because although I am recording the pupil size in 3d, when I export the data from the Pupil Player the measurement of the 'diameter_3d' is not in mm (it is probably in pixels, since it's a huge number). Am I doing something wrong? Thanks in advance. 🙂
Hello, I've just installed Pupil on my Ubuntu 16.04 laptop and am trying to do offline calibration in Player. When I do pupil detection, I get the following error: "install pyrealsense to the intel realsense backend" - can someone point me to some instructions on how to resolve this?
We have a Pupil Labs Core device with the World Camera, Moto Phone, and two cameras. Is it possible to record all of the eye tracking data to the phone only without having to be on the same wifi network as a recording computer? I see that I can record the video, audio, IMU, etc. info to the phone, but I'm not getting any fixations when I load the data into Pupil Player. I've tried doing the calibration on the computer and then starting the recording on the phone, but it doesn't seem like that calibration information is transferred to the phone. Any guidance would be much appreciated. Thanks!
Hi! May I ask what this "confidence" means? How is it calculated?
@user-f2c41a This is just a warning log message. Pupil detection should be functional regardless.
@user-c6717a Pupil Mobile only records raw data streams. You can detect pupils in your Pupil Mobile recordings and calibrate post-hoc with Pupil Player. Please ensure that you record the calibration sequence.
Hi, I'm new to Pupil products, and have just started using it as part of my GIS courses for college. I was wondering if anyone is familiar with exporting data into a software like ArcPro to create a heat map?
Hello. I'm studying about "take-over" in autonomous driving,
I'm trying to use fixation data from Pupil Labs, but I'm writing because the coordinates are incorrect.
In my analysis last year, I found a place to do this by setting the coordinates on the right side of Pupil Player (e.g. (3,3), (8,8) → (0,0), (5,5)). However, I can't find it now. I would appreciate it if you could give me the answer.
Hi, I'm also facing the same issue as @user-09f6c7 and tried to run v1.13, and then I got stuck on the following error:
Hi @user-6b3ffb I see you are running from source. The dependencies changed a bit from v1.13, specifically back then we had a dependency on the c++ boost library, that we got rid of. Is it an option for you to run pupil from bundle? Then you can just download the v1.13 release bundle from GitHub here: https://github.com/pupil-labs/pupil/releases/tag/v1.13 Otherwise you will have to dig through the history of our documentation for how to set it up. I looked up the windows setup instructions from the date of the v1.13 release: https://github.com/pupil-labs/pupil-docs/blob/1c4fb94743ff8430d4cef6bc4a1f157a3fbe88b6/developer-docs/windows.md
Hi there. One of the eye cameras of my binocular pupil core apparently doesn't work. I also tried to invert the left and the right cameras but the problem seems to be related to the right part of the system.
Can you help me? Should I contact the support?
@user-7f1687 Please contact info@pupil-labs.com
thanks
Hi there! Could you tell me if one of your eye trackers is compatible with a regular virtual reality head mounted device?
@user-86a23a Hey, check these out: https://pupil-labs.com/products/vr-ar/
Great! Thanks
Hi @user-a48e47 I am not sure I understand exactly what you're talking about. Perhaps you're referring to manual offset? If this is what you want to do, then you can do so by performing a post-hoc calibration aka offline calibration: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
Hello, in previous versions of Player I was able to set trim marks for the location of the calibration marker detection, but I can't find it (in 1.15). It's finding "calibration markers" in the world that aren't really there. Is the ability to set trim marks still there? Or a way to remove the incorrectly detected markers?
@user-072005 Both are possible: 1. Set a calibration range via trim marks
@user-072005 2. Remove false positives manually by clicking on them
oops, I see I needed to click new calibration to see it. Thanks
Hello @papr , I was wondering what the best way is to use the Pupil Core in real time? I have been trying to use the pupil_lsl_relay through Python, however, it is not showing up in the plugin menu in Pupil Capture. This is the error I get from looking at the capture.log in pupil_capture_settings: "Failed to load 'pupil_lsl_relay'. Reason: 'dlsym(0x7f9fcd5c0f10, lsl_library_info): symbol not found' ". Do you know how to fix this error or is there a different way that would be better for doing real time?
Hi, does anyone know how to transfer surface settings from one version of Pupil Capture to another? Our research team was using 1.10.20, but we've run into some technical issues, which seem to get resolved when we use the most current software version. However, our surface settings did not transfer over to the newer version as they have when we previously updated the software. I know that Capture generates a surface configuration file, but I'm not sure how to make the current software version see it as well.
There is a pupil_capture_settings directory in the user's folder on our PC, and that's where I found the surface_definitions file, so I assume it can be transferred over to the new software somehow.
Hello. I'm trying to launch Capture, but I'm encountering an error I don't know how to resolve, seemingly involving its attempt to update drivers. The attached file is a copy of its output. I've tried completely removing and redownloading the software, but that had no effect at all. How would I fix this error?
The previous file I uploaded was incorrect, I apologize. This one is correct.
It doesn't show an error if I don't have the Core connected to my machine.
@user-00cf0f Hi, do you specifically want to use Pupil Core with the LSL framework? In general, Pupil Core already has an interface for real-time streaming. The pupil_lsl_relay plugin only makes sense if you want to hook up Pupil Core's custom real-time streaming with other infrastructure that uses the LSL streaming framework. Anyway, it seems like your pylsl installation might be corrupted, which is why pupil_lsl_relay fails to load. Please try to reinstall pylsl. Also try running the examples that ship with pylsl first, to check that your installation of pylsl is working.
Hi @user-fa3706 the surface tracker has changed quite a bit from v1.10. The old surface definitions are unfortunately not compatible with the new version. An upgrade is also not possible since a couple of things changed that are not automatically inferable. You will have to redefine your surfaces in the new version of player. The surface definitions are stored separately for every recording in the recording folder.
@user-fa3706 To elaborate on what @user-c5fb8b said: In versions previous to v1.13, surfaces were defined in the distorted pixel space. Since v1.13, Pupil defines surfaces in the undistorted camera space. Theoretically, Player is able to upgrade your old surface definitions, but only approximately. Therefore, it is recommended to create new ones.
Hi @user-9d7bc8 It seems there is some trouble with the driver installation. Please have a look at the following section: https://docs.pupil-labs.com/core/software/pupil-capture/#windows Unfortunately the reference from point 8 is currently broken (we are working on it) but you can find the relevant information here: https://github.com/pupil-labs/pupil-docs/blob/1b2a9015b82dd45619fd3a50ab209905753059e8/developer-docs/win-driver-setup.md#install-drivers-for-your-pupil-headset
Please try following this trouble-shooting guide and report back if you are still experiencing this issue.
Hello, I noticed that the framerate I am getting from the frame publisher drops to 1 fps when I disconnect the monitor from the computer running Pupil Capture. Can this be fixed somehow?
@user-a6e48e Are you disconnecting all monitors from this computer? Or only one of multiple?
Actually I am closing the lid of a laptop
@papr, the same thing happens when I am switching to another ubuntu workspace
@user-a6e48e I think this is due to GLFW-specific behaviour (GLFW is the cross-platform library for creating/handling windows). 😕
@user-a6e48e What happens if you minimize the app before switching the workspace on ubuntu?
@papr, let me check
@papr, it didn't help
@papr, or actually it works
I checked again after restarting Pupil Capture
oh ok...
@papr, I am checking with the lid closed now
@papr, wow, it seems to be OK now 😄
@papr, thanks a lot!
Let's hope it stays that way 🙏
@papr, I'll experiment a bit, but now I know there's a way
thanks again, have a nice day
You too!
@papr @user-c5fb8b Thanks guys!
Hi @papr , I'm using pupil capture's remote recorder plugin to start two phones' recordings and I want the two recorded videos to be synced. Is there any way to start both phones' recordings simultaneously?
@user-fd5a69 Unfortunately not, and even if you sent the start command to both with the same button click, it would not be guaranteed that both recordings start at exactly the same time, due to network delays.
@user-fd5a69 I recommend enabling the Time Sync plugin in Capture so that both Pupil Mobile devices sync to Capture. This way the data generated by both recordings is comparable in time.
@papr I see. Then I'll use ffmpeg to sync the videos manually afterwards. Thank you for the info!
Hi all, our lab is looking into buying another Pupil Core binocular eye-tracker, to use with a RealSense camera and a mini-PC. We hoped to get an idea of whether the systems we'd like to buy will be capable of working well with the eye-tracker, since we have had issues in the past with the phone-based setup we started with. Would a system like this work? https://www.tuxedocomputers.com/en/Linux-Hardware/Linux-Computers-/-PCs/Intel-Systems/TUXEDO-Nano-v8-Mini-PC-max-Intel-Core-i7-Quad-Core-max-32GB-RAM-max-2-HDD/SSD/M-2-NVMe-VESA-Mounting.tuxedo#!#configurator
Hi, is it advisable to use the sliding connections between the eye camera arms and the Core headset as an adjustment area? Is it designed to be partially slid outwards when attempting to capture a better picture for that eye camera?
hi @papr, if surfaces were not defined during recording (but the markers were visible), can I later define them in Pupil Player and get the same results on the surfaces?
@user-31df78 yes, the eye cameras are designed to be moved along this rail as well as orbited about the ball joint. Please see: https://docs.pupil-labs.com/core/hardware/#headset-adjustments Note: the eye cameras in the animations are the old 120Hz eye cameras, but the same principles of adjustment still apply.
@user-e2056a Yes, if markers are present in the recording then you can define surfaces post-hoc in Pupil Player and still get data relative to surface(s).
thank you @wrp
also, check out this new pupil detection algo we've been working on 😛 We do plan on integrating it into the PL framework soon. the first authors are about to present it at the Facebook workshop at ICVP - they won first prize!
To be clear, this demo is just hijacking the eye cam imagery. Not yet integrated with the Pupil Labs software pipeline.
@user-8779ef Congratulations! We are also currently refactoring our pupil_detector code base so that it should be easy to integrate! Head over to software-dev for questions.
Oh, niiiice. I'll ask you for advice before we integrate, then. It may be more efficient for us to wait until the refactor.
...and then you guys get some help debugging!
hi, I just downloaded pupil_v1.16-80-g27f9153_windows_x64.7z. When I try to start Pupil Capture I get this error message:
2019-10-24 17:04:38,241 - MainProcess - [DEBUG] root: Unknown command-line arguments: []
2019-10-24 17:04:38,242 - MainProcess - [DEBUG] os_utils: Disabling idle sleep not supported on this OS version.
2019-10-24 17:04:39,963 - world - [INFO] launchables.world: Application Version: 1.16.80
2019-10-24 17:04:39,963 - world - [INFO] launchables.world: System Info: User: Robin, Platform: Windows, Machine: Primebook, Release: 10, Version: 10.0.18362
2019-10-24 17:04:39,999 - world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
File "launchables\world.py", line 136, in world
File "c:\python36\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 627, in exec_module
File "shared_modules\pupil_detectors\__init__.py", line 21, in <module>
ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden. (The specified module could not be found.)
2019-10-24 17:04:40,008 - world - [INFO] launchables.world: Process shutting down.
Any ideas on how to solve this?
Hi @user-663462
please download and run the vc_redist.x64.exe file from the official Microsoft support page. Afterwards, Pupil should start up as expected.
https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads
If this does not fix the issue, please report back to us!
@here 📣 Pupil Software Release Update v1.17-6 📣 This release addresses Surface Tracker instabilities and issues. We have also enabled Pupil Invisible video streaming via the Pupil Mobile backend and removed the eye movement classifier.
Check out the release page for more details and downloads: https://github.com/pupil-labs/pupil/releases/tag/v1.17
Hi @wrp, what is the procedure for adding surfaces in Pupil Player? Is it the same as in Pupil Capture? In Pupil Player, I could not see whether the markers were all visible when adding surfaces. Thank you!
Hi , does any OnePlus 6 phone work with the core eye tracker and pupil mobile, or does one have to purchase the Core Mobile Bundle?
@user-e2056a yes, the procedure of adding surfaces should be the same in Pupil Capture and Player.
@user-88b704 yes, you can use Pupil Mobile on a OnePlus 6; we do not modify the device at all. However, you must have a high-quality USB-C to USB-C cable to connect your Android device to the Pupil Core headset, and you must ensure USB OTG is enabled and the app is locked.
@user-e2056a I strongly recommend that you upgrade to the latest version of Pupil for marker tracking if you are using the new AprilTag markers: https://github.com/pupil-labs/pupil/releases/tag/v1.17 -- if you are using the legacy square markers, let us know so that we can provide you with some feedback.
@wrp, thank you, we are using the square markers. We noticed that after switching to the latest version of Pupil, the previously defined surfaces could not be transferred. Our colleague has reported this issue to Pupil Labs, so we decided to keep using the earlier version (v1.13) to be safe.
Hi there
anyone here from pupil labs?
@user-e2056a have you tried changing the detector mode to the legacy square markers in the newer versions? Your surface definitions should still be there, but by default we detect Apriltags. If you change the detection back, your surfaces should work as expected.
@user-030f61 yes
Please contact info@pupil-labs.com regarding hardware issues 🙂
@papr this is occurring on a new Windows 10 computer. This is the newest software update, any thoughts?
@user-4bf830 I'm going to guess that you need the visual C++ redist from here https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads
@user-4bf830 please download and run the vc_redist.x64.exe file from the official Microsoft support page (thanks @user-31df78 for the link 😸 ). Afterwards, Pupil should start up as expected.
Hey, could anybody tell me how I can get the data for the full eye movement in a recording? For example, if I looked at an image with 4 points and I looked at them in sequence, I want to get the eyetracking data that shows my eyes tracing the shape of those 4 points.
@user-123d16 In which format would you like to have the data?
I want it visualized as an image
Have you opened the recording in Pupil Player already?
Yes, I have
And is the export of the visualization as a video an option for you?
I have the World Video Exporter and Raw Data Exporter
and Eye Video and iMotions
Essentially what I want to be able to see is the heatmap, but connected with lines
the lines that trail my eye movements
@user-123d16 Checkout this tutorial on visualising the scan path on a defined surface: https://nbviewer.jupyter.org/github/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
Here you can find more about surface tracking in general: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
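[Editor's note] As a complement to the linked tutorial, here is a minimal sketch of the scan-path idea: connecting consecutive on-surface gaze points into line segments. The column names (`x_norm`, `y_norm`, `on_surf`, `confidence`) are assumed to follow the surface tracker's `gaze_positions_on_surface_*.csv` export; verify them against your own file.

```python
# Sketch: build scan-path line segments from consecutive confident
# on-surface gaze points. Column names are assumptions based on the
# surface tracker CSV export -- check them against your data.

def scan_path_segments(gaze_rows, min_confidence=0.6):
    """Return a list of ((x1, y1), (x2, y2)) segments between
    consecutive confident on-surface gaze points."""
    points = [
        (row["x_norm"], row["y_norm"])
        for row in gaze_rows
        if row["on_surf"] and row["confidence"] >= min_confidence
    ]
    # pair each point with its successor to get drawable segments
    return list(zip(points, points[1:]))

rows = [
    {"x_norm": 0.1, "y_norm": 0.2, "on_surf": True, "confidence": 0.9},
    {"x_norm": 0.4, "y_norm": 0.5, "on_surf": True, "confidence": 0.8},
    {"x_norm": 0.0, "y_norm": 0.0, "on_surf": False, "confidence": 0.9},
    {"x_norm": 0.6, "y_norm": 0.3, "on_surf": True, "confidence": 0.7},
]
print(scan_path_segments(rows))
# -> [((0.1, 0.2), (0.4, 0.5)), ((0.4, 0.5), (0.6, 0.3))]
```

The resulting segments can then be drawn on top of a heatmap with any plotting library.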
Okay, thank you
I have gotten the heatmaps working, so this solution using Python is what I will need to learn to get what I'm looking for
If you have the heatmaps working this means that you have a working surface already 👍
Yes, unfortunately, the scan path on surfaces is currently not directly exported by Pupil Player.
I don't have any experience with programming, but I will give it a shot!
That's no problem, thanks for your help!
Hi All, We use version v1-11.4 of Pupil Capture. The program calibrates as one off. Sometimes, the calibration is perfect. Other times, there is a lag of about 2 cm to the left, bottom, or top. To use it with our paradigm, we started by covering the plus and minus signs and now cover the bottom of the screen entirely, which contains black oval-shaped text boxes (white font). Any ideas why this is so? Would you recommend downloading the new version? I don't want to install updates part-way through data collection. Thanks!
@papr Thank you for the 1.17 release; now we can get more information about the GLFW failure.
And the LattePanda Alpha uses the Intel HD Graphics 615 chipset, so the solution is here: https://downloadcenter.intel.com/product/96554/Intel-HD-Graphics-615
Thanks again!
@user-09f6c7 Nice! Great to see this feature finally working! 💪
@user-908b50 Could you provide us with a screenshot of your setup so that it becomes clearer what is meant by "calibrating as one off" and "plus and minus signs"?
@user-908b50 Also, starting in version v1.11, you can recalibrate your recording in Pupil Player and apply a manual offset. See our youtube tutorial for details: https://www.youtube.com/watch?v=_Jnxi1OMMTc&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh
Hi All, I am using the Pupil eye tracker as a device for my research. Recently, in one of my experiments, I needed participants to walk in different environments while wearing the eye tracker, so I searched for wireless eye trackers... As they are so expensive, I tried the Pupil Mobile app. Unfortunately, it did not work: after transferring the recorded files to the computer, there are some .mjpeg files that I could not play, even after converting them to .mp4. Therefore, I am looking to buy an Android device that works well with the Pupil Mobile app. I know there is a Core Mobile Bundle, but due to my university's policies it is not possible to purchase the phone, and I have to buy the smallest Android tablet instead. I would greatly appreciate it if anyone could help me find the best Android tablet for using the Pupil eye tracker and the Pupil Mobile app.
@user-6cdb90 have you tried to open the Pupil Mobile recording in Pupil Player?
Just drag the folder containing the videos onto Player and you should be able to playback and export your recording.
Sorry, kind of expanding from that, but what might be the reason that the .mp4 files in the recording directory cannot be read by normal video players?
@papr I did it, but there is no gaze mapping or pupil detection data. The related .csv files are empty. I think the problem is in setting up the Pupil Mobile app for recording. Is there any specific setting on the Android device that I need in order to use the Pupil Mobile app in the best way?
@user-6cdb90 no, this is expected. Check out this YouTube tutorial on how to get the Pupil and gaze data https://www.youtube.com/watch?v=_Jnxi1OMMTc&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh
@user-31df78 you should be able to open them in vlc player.
Thanks papr, was probably too confident MPCHC could also open everything, I'll try VLC
Also is there a built-in way to automatically generate static images with world video + gaze position (like in exported video) for each fixation identified?
@user-31df78 no, this is not built-in
Alright, just wanted to make sure I wouldn't be reinventing the wheel 👍
@papr Thank you for your response! Actually, I followed the video instructions, but there are errors indicating that there are no eye videos. I think the problem goes back to the Pupil Mobile app and the recording part. On the mobile there is no gaze mapping or pupil detection. How can I fix this?
@user-6cdb90 could you share the recording with data@pupil-labs.com then I can have a look at it tomorrow.
@papr I have shared the recording file. Thank you again for your help!
Hello. I'm trying to use the IPC Backbone to subscribe to the binocular gaze data topic, but Capture doesn't seem to be sending any data, despite having both cameras working and calibrated. It does send data to the monocular gaze data topic. Do you know how I might fix this?
@user-9d7bc8 Please be aware that pupil positions with confidence less than 0.6 will always be mapped monocularly. This was introduced to avoid degradation of the gaze mapping result.
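[Editor's note] The rule described above can be sketched in a few lines. This is an illustrative sketch, not the actual Capture implementation; the function name is made up, and only the 0.6 threshold comes from the message above.

```python
# Sketch of the binocular-mapping rule: a gaze sample is only mapped
# binocularly if the pupil datums from BOTH eyes pass the confidence
# threshold; otherwise it falls back to monocular mapping.

MIN_BINOCULAR_CONFIDENCE = 0.6  # threshold mentioned above

def mapping_mode(pupil_eye0, pupil_eye1):
    both_confident = (
        pupil_eye0["confidence"] >= MIN_BINOCULAR_CONFIDENCE
        and pupil_eye1["confidence"] >= MIN_BINOCULAR_CONFIDENCE
    )
    return "binocular" if both_confident else "monocular"

print(mapping_mode({"confidence": 0.9}, {"confidence": 0.8}))  # binocular
print(mapping_mode({"confidence": 0.9}, {"confidence": 0.3}))  # monocular
```

So if one eye camera is poorly adjusted, you will see mostly monocular gaze topics even though both cameras are running.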
Hi! Is it possible to have information on how the confidence is calculated?
@user-d77d0f Hey 👋
Quick question about Pupil Player - What does the yellow line at the bottom of the screen correlate too? I am aware that the purple and green line correlate to gaze mapper and calibration.
@user-bda130 it represents the validation range
Great, thanks! I am having an issue with multiple cross hairs in a single frame. I thought adjusting the trim markers for validation range, gaze mappers, and calibration would get rid of this issue, so that the different calibrations/gaze mappers/validation range I've created did not overlap. However, this has not helped. Is there another thing I should try?
@user-bda130 This is due to multiple gaze positions being displayed on a single world frame.
Since gaze is estimated at a higher sampling rate (120-200Hz) than the scene is recorded (30-60Hz), we group gaze points by time and display them together for each scene frame. Each gaze position is visualized by its own cross.
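[Editor's note] The grouping described above can be illustrated as follows. This is a rough sketch of the idea (assign each gaze timestamp to the closest world-frame timestamp); the actual Player implementation may differ.

```python
# Sketch: group high-rate gaze timestamps by the nearest low-rate
# world-frame timestamp. Returns {frame_index: [gaze timestamps]}.
from bisect import bisect
from collections import defaultdict

def group_gaze_by_frame(frame_ts, gaze_ts):
    groups = defaultdict(list)
    for t in gaze_ts:
        i = bisect(frame_ts, t)
        # pick whichever neighbouring frame timestamp is closer
        if i == 0:
            idx = 0
        elif i == len(frame_ts):
            idx = len(frame_ts) - 1
        else:
            idx = i if frame_ts[i] - t < t - frame_ts[i - 1] else i - 1
        groups[idx].append(t)
    return dict(groups)

frames = [0.0, 1 / 30, 2 / 30]        # ~30 Hz scene camera
gaze = [0.001, 0.02, 0.034, 0.06]     # higher-rate gaze samples
print(group_gaze_by_frame(frames, gaze))
```

Every frame whose group contains more than one gaze point ends up with multiple crosses, which is exactly the effect discussed here.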
Is there a way that I can get it to average the crosses, or just simplify it to one, so that it is just one cross per frame?
You could write your own visualization plugin.
1. Make a copy of the original https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/vis_cross.py
2. Place the copy in the pupil_player_settings/plugins folder
3. Change the class name in line 20
4. Filter the pts list to your liking before calling the drawing for-loop in line 50
Thank you very much
Our team had a slight issue with text not showing up properly until a restart of Capture. We were running on Mac and I haven't yet reproduced the problem on Windows. Any suggestions?
The text issues are in the id1 conf graph and the apriltag label.
@papr I will send you a few pictures of the setup. Thanks for sending the youtube video. I'll take a look and follow the steps. Will update you on that as well.
@papr After trying to work through your steps, I am not sure I totally understand step 4 about filtering the pts list
@user-bda130 the pts variable contains a list of all gaze points. You can either calculate the mean or remove all but one point from the list, it is up to you. The important part is that only one point remains such that only one cross is drawn.
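[Editor's note] One possible way to do the "filter the pts list" step is to average it down to a single point. This is an illustrative sketch, not the exact Player plugin code; it assumes each entry in pts is an (x, y) tuple.

```python
# Sketch: collapse the pts list to one averaged point before the
# drawing loop, so that exactly one cross is drawn per frame.

def mean_point(pts):
    if not pts:
        return []
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return [(sum(xs) / len(xs), sum(ys) / len(ys))]

print(mean_point([(0.0, 0.0), (1.0, 1.0), (0.5, 0.5)]))  # -> [(0.5, 0.5)]
```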
@papr Are you referring to this section?
@user-bda130 no, this is already the drawing code
You should remove entries from the pts variable before the for loop in which the cross is drawn
@papr This section? I apologize. I am not very familiar with python coding, so I am not sure what I am looking for or what will need to be rewritten in order to get the mean of the crosses collected.
@user-bda130 one sec, I am testing an example
Thank you so much!!
Example of the mean cross. In situations like this, it is not accurate. I think the best way would be to visualize the gaze that is closest to the frame in time.
Okay that is a good point. So how would I visualize the gaze that is closest to the frame in time?
@user-bda130 https://gist.github.com/papr/659f88acc5addbd0c9a1252ebc8d4db7
To install as a plugin, click the Raw button at the top right of the document and save it as single_cross.py in the Pupil Player plugins folder.
See these lines for the selection of the correct gaze point: https://gist.github.com/papr/659f88acc5addbd0c9a1252ebc8d4db7#file-single_cross-py-L44-L56
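[Editor's note] The selection step in the linked gist boils down to picking the gaze datum whose timestamp is nearest to the frame timestamp. This is a simplified sketch with illustrative names; see the gist for the real plugin code.

```python
# Sketch: keep only the gaze datum closest in time to the world frame,
# so the drawing loop draws exactly one cross per frame.

def closest_gaze_point(pts, frame_timestamp):
    if not pts:
        return []
    best = min(pts, key=lambda g: abs(g["timestamp"] - frame_timestamp))
    return [best]

pts = [
    {"timestamp": 0.010, "norm_pos": (0.4, 0.5)},
    {"timestamp": 0.016, "norm_pos": (0.5, 0.5)},
    {"timestamp": 0.024, "norm_pos": (0.6, 0.4)},
]
print(closest_gaze_point(pts, frame_timestamp=0.017))
```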
You are an absolute rockstar, @papr !! This is greatly appreciated. Thank you, thank you.
hi
need a help to design a meeting room with vr,ar spatial etc
Hi @user-eb1bb0 Are you referring to this? https://spatial.is/
And do you need help with designing the room itself, or do you need help with integrating Pupil eye tracking into it?
Hi @papr I am currently undertaking research with young children who (obviously) have much smaller heads. To optimise the camera position I have to utilise the arm extenders so I can get the pupil more square on. The issue is that doing so causes more shadows and ultimately poorer pupil capture. Do you have any recommendations on how to optimise pupil capture when using the extenders? Thanks.
@user-f3a0e4 Do you use the normal-sized or child-sized Pupil Core headsets? You can attempt to decrease the pupil detection area by changing the ROI in the eye window: Eye window -> General Settings -> Mode: ROI -> drag the corners so that the pupil is included and the shadows are excluded.
Additionally, you can change the intensity range if the shadows are not as dark as the pupil.
Hi @papr my requirement for meeting room is shown below.
• The room is supposed to leverage cutting-edge technology including, but not limited to, potential use of: Virtual Reality (VR), Human-Centric Artificial Intelligence, Augmented Reality (AR), spatial data, etc.
• User-friendly data manipulation with real-time output updates
• The room should be able to loop in any relevant stakeholders to the decision-making process via video/phone/screen sharing/data sharing
• Touch, voice, text, and video interfacing are to be considered to enable decision-making
• AI/machine-learning-enabled algorithms to facilitate decision-making by specific users given specific meeting dynamics
• The room should be easily upgradable (hardware and software) to account for technological advancements
• Components of the room should be easily movable; thus the use of projectors instead of screens is recommended
• The room should be integrated with the existing data systems
@user-eb1bb0 I am afraid that this is not the right place for this kind of questions/problems. This server is specifically used for the community and questions around the Pupil platform https://github.com/pupil-labs/pupil and its related products https://pupil-labs.com/products/. In a broader sense we also discuss mobile eye tracking. Your problem does not seem to be related to any of the above.
@user-31df78 Unfortunately, I do not know exactly what is triggering this, but it is not the first time I have seen it either.
Generally, this happens if some OpenGL drawing code is not 100% correct
Hello, is Pupil Core good for both screen tracking (ads, visuals, etc.) and also offline use, such as in-store tracking?
@user-c0daa8 Hey 👋 Yes, Pupil is able to provide real-time gaze data as well as do all calculations offline. Additionally, you can use our surface tracker functionality to get gaze in relation to defined areas of interest. https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking