Hello. I have a problem when exporting Gaze positions on surfaces from the pupil player. I set the export to exporting all frames from Frame 0 to the last Frame, but in the exported .csv-data, there are frames missing. Sometimes only Frame 0, sometimes frames are skipped in the middle (see screenshot). Any idea how I can fix this?
This happens if there is either no gaze or no surface detection for a given frame. :)
Thank you for your fast answer. That's understandable. Does the frame number in the export stay the same as in the video, though, or do they all shift when this happens?
They should stay the same as in the video. The only question is if they shift if you perform a trimmed export, but I do not think so. I would have to look that up
Ok, thank you. We don't perform a trimmed export, we trim the data in our program later, so we don't need further information
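For reference, a minimal sketch of checking which scene frames are absent from a surface export, assuming pandas and the world_index column of the standard gaze_positions_on_surface CSV (the file name below is hypothetical; adjust it to your surface name):

```python
import pandas as pd

# File name depends on your surface name; this one is a placeholder.
df = pd.read_csv("gaze_positions_on_surface_Surface1.csv")

# world_index is the scene-video frame number. Frames without gaze data or
# without a surface detection are simply absent from the export.
frames_present = set(df["world_index"].astype(int))
all_frames = set(range(int(df["world_index"].max()) + 1))
missing = sorted(all_frames - frames_present)

print(f"{len(missing)} frames have no gaze-on-surface data, e.g. {missing[:10]}")
```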
Hi, which experiment program are you using? We are working on a native PsychoPy integration which you could give a try.
I wonder if I'm simply using printed out documents, would there be an easy solution to create word-based AOIs?
Oh that'd be great. I am currently using a self-created programme written in Matlab
Hello, can the Pupil mobile eye tracking headset, i.e. the product in this link https://www.shapeways.com/product/LQJJK2CHQ/pupil-mobile-eye-tracking-headset?optionId=43013976&li=shops, give me data like saccades, fixations, velocity and all the data that can be captured? Or which data can this product give me if I buy it?
See https://docs.pupil-labs.com/core/software/pupil-capture/#plugins for what can be detected in real-time or https://docs.pupil-labs.com/core/software/pupil-player/#plugins for what can be calculated post-hoc. We currently do not provide a saccade detector.
Please be aware that the DIY kit cameras have a lower sampling rate, i.e. the temporal resolution of e.g. fixations might be reduced.
Hi @papr . Can you please direct me to guidelines on the best placement/settings for the eye cameras? I am having some trouble achieving consistent tracking amongst different users, and wanted to learn how to best personalize the setup for each individual. Thanks!
You can find some info here https://docs.pupil-labs.com/core/#_3-check-pupil-detection and here https://docs.pupil-labs.com/core/best-practices/#pupillometry
Thank you!
@user-1e4d85 Note that Pupil DIY is mainly intended for those who are prototyping and have the time to experiment and tinker with both hardware (and often software). It will get you basic eye tracking. However, it uses "off-the-shelf" webcams (purchased separately) which have certain limitations as described by @papr. You can read more about Pupil DIY here: https://docs.pupil-labs.com/core/diy/ and a list of things you would need to obtain here: https://docs.google.com/a/pupil-labs.com/spreadsheets/d/1NRv2WixyXNINiq1WQQVs5upn20jakKyEl1R8NObrTgU/pub?single=true&gid=0&output=html If you need eye tracking that's ready to go out of the box, you might want to consider Pupil Core. For a more detailed comparison of DIY vs Core, check out this post: https://discord.com/channels/285728493612957698/285728493612957698/689004362642620529
How much is Pupil Core? Thank you
You mean that if I want to buy the pupil mobile eye tracking I have to buy a special camera to get all the data I need?
Follow up question: If unable to achieve high confidence with both eyes, is it better to adjust the cameras to have high confidence in 1 eye at all times (even if that means the other eye is not in view at all in certain instances, such as corner of FOV). To broaden the question a bit - Is the tracking approach using data from both eyes simultaneously, or using the one with the higher confidence at any time point? Thank you.
I am not sure why you would differentiate these cases. The eye cameras can be adjusted independently. You should be able to adjust both of them such that they get a good angle on the eye. To your second question, the software attempts to pair high confidence detections. If this is not possible the software will map the data monocularly.
@user-311e43 @papr Hello, I am integrating the Pupil Core into an underwater application and have needed to substitute some of the cameras suggested in the DIY documentation. I have selected this camera from BlueRobotics and have consulted the UVC compatibility specifications. I asked the vendor to confirm whether the camera is UVC compatible, and they said it should be, providing the specifications attached in the file. I was hoping someone at Pupil Labs could confirm this camera should work. I'm not exactly a software engineer or camera expert, so any help is much appreciated!
This looks pretty good! Of course, I cannot tell for 100% sure since I have not tested the camera myself. But I am fairly confident that this camera should work.
@papr Thanks for the help!
I would appreciate it if you could let us know how it goes if you decide to purchase the camera.
Yeah, of course! I will post an update. We are trying to integrate the two eye cameras and the world view into a USB hub, similarly to how the Pupil Core base equipment is integrated into one board with a USB-C out. We have been trying to find an off-the-shelf board with 3 JST-PH connectors and a USB out, but have been trying to figure out if there are any special features that allow the cameras to interface with the USB (forgive my lack of electrical engineering jargon). Waterproofing is a b**** so the way it has been done inline on the base headset is ideal, less opportunity for intrusion.
Hi, is there any documentation about developing Pupil Player with Python or C#, such as adding the gaze coordinates to the recorded video (the default is pupil diameter and its confidence?) and developer-defined parameters?
Pupil Player has a set of plugins that can visualize gaze. The world video exporter plugin can render these into the exported video. Generally, it is possible to extend Capture and Player via our plugin api. https://docs.pupil-labs.com/developer/core/plugin-api Unfortunately, the documentation for the plugin api is not in its best place right now. I would be happy to give you pointers or even a helping hand if you want to do something specific.
Sorry, is it possible to get the video via the network api?
yes, from the frame publisher. See this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
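A minimal sketch of that workflow, condensed from the linked helper script; it assumes Capture's default Pupil Remote port (50020) and that the Frame Publisher plugin is started with the BGR format:

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

# Ask Capture to start the Frame Publisher plugin in BGR format.
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
req.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
req.send(msgpack.dumps(notification, use_bin_type=True))
req.recv_string()

# Subscribe to world frames.
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    topic, payload, *frames = sub.recv_multipart()
    meta = msgpack.loads(payload)
    # The raw BGR pixels arrive as an extra message part.
    img = np.frombuffer(frames[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    print(topic.decode(), img.shape, meta["timestamp"])
```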
wow thanks for the quick response ❤️
Is it possible to identify an object, such as a golf ball and track gaze fixations on a moving target using core?
Hello, we are experiencing a problem with the manual post hoc gaze calibration. The vis circle disappears even though there is a calibration within the video. Does somebody have an idea on what could be the issue? Thank you in advance for your help!
There are several things that could cause the gaze circle to disappear when doing post-hoc calibration. I highly recommend replicating the steps taken in these videos: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection
Hello, in one of our recordings one eye got detected badly but the other has no issues. Is it possible to exclude one eye?
To improve your gaze estimate by excluding one eye video, you would need to do post-hoc calibration. Post-hoc calibration requires that you recorded a calibration choreography as shown here: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
@user-0a22da The definition of what a fixation exactly is, is not super straightforward. According to the most common definition, a fixation is when the eye is not moving. If someone is "fixating" on a moving object, like a rolling golf ball, this is usually called a "smooth pursuit" movement, which is different from a regular fixation from a cognitive perspective.
The algorithm in Pupil Player detects fixations that fit the basic definition of a non-moving eye. It cannot detect smooth pursuit eye movements. To my knowledge there is currently no robust algorithm available for smooth pursuit detection.
In your special case of golf, it may be an option to implement a "golf ball" detector and to build a custom smooth pursuit detector on top of that, where you simply check if the gaze stays on top of the ball.
Hi, I am trying to decide what the best duration and dispersion thresholds would be for my recordings, which consist of the wearer looking at physical photos (ca. 2 m distance) and the wearer looking at a video on a computer screen (ca. 1.5 m distance). I am assuming that the thresholds will vary for these two tasks, but the information I have found in research so far claims very different estimates of fixation durations (anything from 100 to 400-ish ms).
My question is: how do I decide which threshold to go with? In your guide (Best practices) you describe the potential errors when having the variables set too high or too low, but how do I know for sure whether the variables are off?
does anyone know what the model and the transmission protocol of the eye camera are? Thank you!
Hi @user-695fcf. One could adopt a quantitative approach to deciding optimal thresholds for your tasks, like the chess paper linked in our Best Practices. But this is obviously time consuming and could be a study on its own. As you have identified, the problem with using previous literature is that estimates of fixation durations can vary wildly. Other researchers choose to test different thresholds against a manual coder(s) in order to ensure they are accurate when classifying fixations. Whichever approach you take, I think the important thing is to understand the potential for errors and thus be able to identify them in your results and adjust accordingly.
Thanks for the answer. I will look into the options.
thank you for your answer. We know that we need to do post-hoc calibration but we do not know how to exclude one eye video?
You can rename the video file before opening the recording in Player and rerunning the Post-hoc pupil detection. Afterward, there will only be data for the originally named video file
Hi @user-f9f006. Comprehensive eye camera specifications are available here: https://www.ovt.com/sensors/OV6211
Thank you very much
Hi - I've recently started experimenting with a DIY pupil labs set up (after working with Tobii setups previously), and I'm hoping someone might be able to offer some advice.
I currently have 2x Microsoft HD6000 eye cams, and I'm trying to use them inside a HUD where I can't set up a world frame camera. I'd like to track where on the HUD screen the participant is looking.
The way I was thinking of attempting this was to:
(1) set up the eye cams inside the HUD (2) tell the participants to look at 9x target locations on the HUD in a similar choreography to the screen-based calibration in Pupil Capture (i.e. center, top left, middle left, bottom left, etc..) (3) manually create the correspondence between the coordinates from the eye cams with the locations on the HUD screen that the participants are instructed to follow.
If I can read an (x,y) coordinate from the eye cam when the participant is staring at each of the 9x target locations on the HUD, I should have a pretty good mapping from eyecam output to screen location.
At least this is what I'm hoping.
Assuming this is correct, I'd imagine that the best way to read from the eye cam is to use the IPC backbone (https://docs.pupil-labs.com/developer/core/network-api/#ipc-backbone), subscribe to the 'gaze' topic and somehow find the pupil coordinates (norm_pos_x/y) or (gaze_point_x/y)? Though I doubt the gaze point is going to be possible without a world cam.
Appreciate your advice! Thank you
Hi @user-93c34e. There isn't usually a fixed spatial relationship between a head-up display and the person viewing it. Any relative movement between the head and the display would break any mapping. If one assumes a fixed spatial relationship, e.g. using a head-mounted display, then the concepts you describe are reasonable. We use bundle adjustment to estimate the physical relationship between eye and world camera. You can subscribe to pupil data via the network API, yes. But for the kind of custom calibration you describe, I'd probably run Pupil from source.
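One way to prototype such a custom mapping yourself, outside of Pupil's own calibration pipeline, is a simple least-squares fit from pupil norm_pos values to known HUD target positions. The sketch below assumes you collect the corresponding point pairs yourself; a polynomial fit may be needed if the mapping turns out noticeably non-linear:

```python
import numpy as np

def fit_affine(pupil_xy, screen_xy):
    """Least-squares affine map from pupil norm_pos to screen coordinates.
    pupil_xy, screen_xy: (N, 2) arrays of corresponding points, N >= 3."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])  # columns [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # (3, 2) matrix
    return coeffs

def apply_affine(coeffs, pupil_xy):
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    return A @ coeffs

# Toy usage with made-up correspondences (9 targets, pure scaling as the "true" mapping).
pupil = np.random.rand(9, 2)
screen = pupil * [1920, 1080]
M = fit_affine(pupil, screen)
print(apply_affine(M, np.array([[0.5, 0.5]])))  # ~ [[960, 540]]
```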
Hi All, I am looking for a way to record data of a person reading along with the video. I understand that the audio capture plugin was disabled following v1.23. 1) Is there a way, scripted or otherwise which can help me record and play the audio with pupil core? I would like to use an external microphone to record the audio and generate the audio.mp4 and the audio_timestamps.npy. 2) Also, if this is possible, will the latest version of Pupil player play the audio?
Any help in this direction would be great. Thanks in advance.
I'd recommend using the Lab Streaming Layer framework. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/887783672114208808
Hi Neil, Thankyou so much. I will check this out and let you know
Thank you Neil. Would that topic (norm_x/y) be the correct topic for pupil coordinates? And can I assume gaze_point is meaningless without using the Pupil Core standard calibration methods? Thanks again!
Hi, please be aware that we differentiate pupil and gaze data (see https://docs.pupil-labs.com/core/terminology/#pupil-positions) and that both have normalized coordinates (https://docs.pupil-labs.com/core/terminology/#coordinate-system). And correct, gaze data requires a calibration to be meaningful.
Hi, does Pupil Core support smartphone-driven gaze collection? Are there any docs for it?
Hi there, I am looking to convert fixation locations from the world cam to pixel locations on the presentation screen. What is the best way to do this? I have been playing with the surface tracker, but I worry about how large the AprilTags need to be if they are shown digitally. Should I be attaching them to the screen instead? Or should I use some calibration procedure to transform world cam points to pixel locations? Thank you for any advice.
hello, I wonder if it's possible to change the hotkeys assigned to default actions in Pupil Player. For instance, I'd like to reassign "e", currently assigned to the annotation plugin. Same with "f" (currently assigned to next fixation).
Hi, unfortunately, the hotkeys are not configurable. You would have to run from a modified source code version for that.
It seems Pupil Capture now runs both 2D and 3D modes for pupil detection. We select one based on what we set for calibration, right?
It is confusing... They run concurrently for pupil detection, but I don't understand the pipeline selection in calibration plugin
Yes, 2d and 3d pupil detection happens in parallel but only one of them will be used for gaze mapping depending on your selection in the calibration choreography plugin
Thanks
About the plugins directory: the source says "Each user directory has a plugins subdirectory into which the plugin files need to be placed. The Pupil Core software will attempt to load the files during the next launch." Can we change the directory location, e.g., to a directory in Dropbox?
Yes, by replacing the plugins folder with a symlink to the target directory.
I've done it, thank you
Hey guys, I'm trying to use an older version of the Pupil Core. I can't get the world camera to work. The world camera appears as an Intel RealSense camera in my device manager but does not show up in the Pupil Capture software. I tried https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting multiple times now. Any ideas on how to get it to work?
Hi! Do you need the older version just for the RealSense support?
no, that's just all I have access to
Do I understand it correctly that you could not use a different version even if you wanted to? Could you let me know the exact version?
I mean, I got the Pupil Labs Core from my old lab. We currently do not have a newer or different version of your eye tracking systems. The box says Pupil W120 E200B.
This module should come with a scene camera that works out of the box. If you want to use an intel realsense camera for the scene video you need to use a custom video backend https://gist.github.com/romanroibu/c10634d150996b3c96be4cf90dd6fe29
Ah, apologies, I was referring to the Pupil Core software, not the hardware.
ah, there I am running 3.4.0
@papr sorry, if I took information from the Pupil Labs website, how can I properly cite it? I don't really like having website references
That depends on what information you want to cite. If you copied specific information that is not present in the references below you should cite the website.
To cite Core see https://docs.pupil-labs.com/core/academic-citation/ To cite Invisible see https://docs.pupil-labs.com/invisible/academic-citation/
I am using the camera that came with the headset, but it shows up as an Intel RealSense.
Yes, indeed, this is an R200 Intel RealSense camera. In this case, you cannot use the user plugin mentioned above. Let me look for a software version that should work with that hardware.
Live fisheye undistortion: Hello there, when using the fisheye lens with the iMotions Exporter plugin in Pupil Player, it is possible to undistort the image and adapt the gaze data accordingly. The plugin works on recorded / offline data. I want to work on a Python script that does this with the live data from the network API (live video and live gaze data). I thought about re-using the script from the iMotions Exporter (because it is also in Python).
My question: I was wondering why this doesn't already exist for the live data (in Pupil Capture) (or maybe it exists and I don't know about it^^). So I was wondering if my idea is good, or will I run into any problems I don't see^^ (Maybe a drop of the video framerate could be a problem for some use cases I guess, but this would be fine for mine.)
Hi, the image undistortion is actually quite costly and will likely result in lower frame rates. It mostly does not exist because there has been no demand for doing this in real time. What is your application?
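For reference, a rough sketch of undistorting a live frame and a gaze point with OpenCV. The camera matrix and distortion coefficients below are placeholders; in practice they would come from Capture's camera intrinsics estimation, and the wide-angle lens may require the cv2.fisheye variants instead of the plain radial-tangential model:

```python
import cv2
import numpy as np

# Placeholder intrinsics; replace with the values estimated for your scene camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # radial/tangential coefficients

def undistort_frame(frame_bgr):
    # Costly per frame; this is why real-time undistortion lowers the frame rate.
    return cv2.undistort(frame_bgr, K, dist)

def undistort_gaze(norm_x, norm_y, width, height):
    # Pupil's norm_pos origin is bottom-left; image pixels are top-left, hence the flip.
    pt = np.array([[[norm_x * width, (1 - norm_y) * height]]], dtype=np.float32)
    und = cv2.undistortPoints(pt, K, dist, P=K)[0, 0]
    return und[0] / width, 1 - und[1] / height
```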
thank you. I have another question: is there a way to delete an annotation in the annotation player? that would be so helpful but I haven't found a way to do so
Unfortunately, there isn't one. But it is on our todo list.
My use case is about marking digital school/study tasks (which appear on a monitor) with QR codes / ArUco markers to define the tasks' position/area in the video, and checking whether the gaze is in the area of a task, to know which task a person is looking at at which time and get a better understanding of the learner's solving process. The video needs to be undistorted to detect the markers and to create a correct area for a task, and maybe in the future to detect objects in the room and blur them for privacy protection.
But all of this could be done post-hoc, correct? Nonetheless, I would be happy to help you with building a real-time "undistortion viewer". I think you are already on the right track regarding the general software components. You only need to select the more specific functionalities and combine them into one script.
and btw thanks for the answer
Hello, I'd like to speak with someone from Sales or Product Development over the phone to discuss questions our team has before we purchase 3 pupil invisible units
Hi @user-ba974b. Please contact info@pupil-labs.com in this regard.
Hello?
Thank you. As someone who has spent hours doing these annotations, I'd love to suggest a few ideas such as being able to see all annotations on a table, being able to edit the tags, being able to export and import keys/tags to reuse in other projects, etc.
The latter will be possible in our upcoming release.
Hi - Does anyone have recommendations for pupil cameras (not the HD6000) that could be used in a DIY kit? The HD6000 focal length/resolution seem sub-optimal and was hoping that someone might have advice: re other cameras that have been used successfully. Thanks!
Hi again, did you by any chance have the opportunity to check for older versions of the software?
btw, you can post-process recordings from older versions of Capture in the most recent version of Player.
The built-in realsense backends were removed in v1.22. You can find the latest version with built-in support here: https://github.com/pupil-labs/pupil/releases/v1.21
ty, I'll try it directly. I just tried 1.23 ^^
Sorry for asking again - I have to select the R200 on the backend manager right? Cause that instantly causes a software crash
That is correct. Can you share the capture.log file? It is possible that this is related to the driver. Please be aware that Intel has stopped the driver support for the R200 a while ago.
where do I find the log data?
Home directory -> pupil_capture_settings -> capture.log
This is not the correct folder. By home directory, I was referring to your user home directory, not the folder that was extracted from the Capture download
ahhh sorry, of course - trying to listen to a training while doing this at the same time
this is what the console puts out after switching to the R200
Since Intel dropped the support, I do not know if the drivers are still available somewhere.
ok, thanks, that information is sufficient. This is a driver issue.
ok, ty anyways - I'll try my best to find some
just wanted to let you know that discord has no issues using the R200 as a camera
But Discord is only accessing the RGB part of the camera, correct? The Realsense backend is meant to take full advantage of the camera, i.e. process the depth stream as well. Unfortunately, the RGB camera is not compatible with our (default) UVC backend (see https://discord.com/channels/285728493612957698/285728493612957698/725357994379968589 for constraints)
Yes, theoretically it could be done in post-processing, but the requirement in the future will be that the information of a student should be stored in real time and then read by an intelligent system which helps this student in real time when he or she is struggling with a task at that moment. I wouldn't say no to help from you! Another person and I will have a look at it over the following days - checking out how the iMotions Exporter works and seeing how far we get - and will probably come back to you with specific questions.
Sounds good to me!
Hello. How can I apply the manual correction in the post-hoc gaze calibration? Does it apply automatically when I change the value or do I have to recalculate the gaze mapper?
Hi, you will need to recalculate.
That's what I guessed, but I wanted to be sure. Thanks for the quick reply
If you do not need post-hoc calibration, you can also apply manual offset correction via this plugin https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433 it previews the amount of offset. See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin on how to install it.
Thank you very much, I will have a look at it
Hello all, is there a way to get the world_index (i.e. frame number) in LabStreamingLayer? I am using the time frame of the world video stream to determine naturally occurring visual onset of events for EEG analysis. Alternatively, is there a way to match the time_series data of the .xdf with the world_index exported from Pupil Player?
I have been working on a new Capture plugin that records LSL streams as CSV files within a Core recording. It syncs the LSL streams to Capture time. It is just the first draft but maybe you could give it a try and let me know what you think? https://github.com/labstreaminglayer/App-PupilLabs/blob/1078b6d26429f3b2d08246e0b03a1e5795352183/pupil_capture/pupil_capture_lsl_recorder.py
Thanks for the super prompt response, papr!! And please forgive my lack of tech "savvyism". I have added the pupil_capture_lsl_recorder.py file to my App-PupilLabs/pupil_capture folder. Should I see any modification in the LSL plugin in Pupil Capture, or must I make an LSL recording to inspect the exports? Cheers
You need to copy the file next to the other, already installed, relay plugin. Then, start Capture, go to the Plugin Manager menu and start the Pupil Capture LSL Recorder plugin. It should show up next to the other plugin.
All working for now! You da man! I will make a pilot recording this afternoon (Chile time) and let you know how it goes! Cheers. V
It definitely needs stress testing
Hiya - A question on syncing pupil.pldata info with video recordings in Pupil Capture. I'd like to reference pupil locations for each frame of the video. The pupil topic has timestamps for each data packet, and I'm wondering if it's possible to get the analogous timestamps for each frame of the video recording?
Do you want to do that in real-time? Or just post-hoc?
Post-hoc first, to check the validity, and then real-time if that works out.
Basically, you need to buffer the data and match the pupil data to the frame with the smallest timestamp difference.
In real-time I think I can use opencv frame grabbing from the video feed with the zmq API for the pupil data. But not sure how to align in post-hoc processing.
Post-hoc, given two lists of timestamps, you can use this function to find the closest matches: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L386-L400 A slower version is https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/player_methods.py#L403-L443 but it is more applicable to the realtime use-case
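The linked functions are the canonical ones; as a self-contained equivalent, a nearest-timestamp match can be written with numpy roughly like this (assuming both timestamp arrays are sorted, which they are in a normal recording):

```python
import numpy as np

def closest_matches(world_ts, pupil_ts):
    """For each world timestamp, return the index of the pupil sample with the
    smallest absolute time difference. Both inputs must be sorted 1D arrays."""
    idx = np.searchsorted(pupil_ts, world_ts)
    idx = np.clip(idx, 1, len(pupil_ts) - 1)
    left = pupil_ts[idx - 1]
    right = pupil_ts[idx]
    # Step back by one where the left neighbor is closer than the right one.
    idx = idx - (world_ts - left < right - world_ts)
    return idx

# Usage sketch:
#   world_ts = np.load("world_timestamps.npy")
#   matched = [pupil_data[i] for i in closest_matches(world_ts, pupil_ts)]
```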
Thanks, and each frame that's recorded will have a timestamp in the meta-data?
Correct! Every data point in Capture has a timestamp.
Great, but I think I'm still missing something: the timestamps for each frame of the video are embedded within the mp4 file? I know that timestamps for notifications / pupil / gaze / etc. are in both pldata and .npy files, and get that those helper functions can help align those lists of timestamps...but before we even get to that step, where to find the list of timestamps for the video i.e. how to attach each image frame of the video to a timestamp?
Sheree Josephson, Weber State University, a highly cited communications researcher using eye-tracking, has asked me to evaluate the Pupil Lab's Invisible glasses. Sheree plans to update her current system in November. We have found receiving a sample raw data file and its corresponding video very helpful in the past to start our evaluation. Can someone here help us with this or provide contact with an appropriate person at Pupil Labs. Thx.
Hi, you can find example raw data linked in the Tech Specs sections of our products https://pupil-labs.com/products/ The format is explained in our documentation at https://docs.pupil-labs.com/ and you can playback and export the recordings using Pupil Player https://github.com/pupil-labs/pupil/releases/latest#user-content-downloads
For all remaining questions, please contact info@pupil-labs.com
Perfect! Thx.
The videos have their own *_timestamps.npy file. See https://docs.pupil-labs.com/developer/core/recording-format/ Re real-time: in the case of this example, you can access the timestamp via msg["timestamp"]
https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py#L82
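So post-hoc it boils down to something like this (file name as per the recording-format docs linked above):

```python
import numpy as np

# One timestamp (in Pupil time, seconds) per frame of world.mp4, in order.
world_ts = np.load("world_timestamps.npy")
print(len(world_ts), "frames; first:", world_ts[0], "last:", world_ts[-1])
```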
Ah phenomenal, thank you!
Hey! I'd greatly appreciate any tips on the following situation
We're trying to use Pupil Core with EEGLAB (from Matlab), and want to use LSL to align the information properly. Can you point me to the proper documentation for doing so, or even an existing implementation in some repository?
Question: I just downloaded the newest version of the Pupil Labs desktop applications. We're using the Pupil Labs Core headset and trying to gather gaze position data. However, we aren't seeing the ellipses around the eye, only orange annotations on the pupil. In previous versions, we were able to see a green circle and red target on the pupil. Is there a manual explaining this change so we can see the ellipses again?
I've had exactly this problem, have you found a solution?
Hi - I created a recording yesterday but it is no longer in my recordings folder. Has this happened to anyone? I cannot find it anywhere.
Hi - Has anyone had experience using a DIY kit with Microsoft LifeCam HD6000s on a Windows 10 system? The newest Pupil Core software I downloaded and installed (v3.4.0) doesn't seem to recognize the USB webcams. Linked an issue in github here: https://github.com/pupil-labs/pupil/issues/2198 but if anyone has successfully used the HD6000s on a Windows 10 system, would super appreciate any advice. Btw, these cameras work flawlessly on a mac with the latest pupil core software. Go figure. Thanks!
Found a solution! Posted it on github: https://github.com/pupil-labs/pupil/issues/2198
Pupil Labs say the eye tracker has a gaze accuracy of 0.6 deg. In another place, they said the 3D pipeline will achieve 1.5 to 2.5 deg accuracy. Is the 0.6 for the 2D pipeline? Because <1 deg is specified for the 2D pipeline. Typically, we experienced 2 deg average accuracy after calibration. Is that even okay?
I am confused about the 0.6 deg in the specifications, then the 1 deg for the 2D and that for the 3D.
hello everyone, I want to get access to real-time data via the network API. I followed the instructions at this link https://docs.pupil-labs.com/developer/core/network-api/#pupil-groups, but the Python demo did not show any data; it just led to infinite waiting. Then I set "topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)" and it reports:
Traceback (most recent call last):
  File "getbackbonemessages.py", line 37, in <module>
    topic, payload = subscriber.recv_multipart(flags=zmq.NOBLOCK)
  File "D:\Users\splut\Anaconda3\envs\Furhat\lib\site-packages\zmq\sugar\socket.py", line 491, in recv_multipart
    parts = [self.recv(flags, copy=copy, track=track)]
  File "zmq\backend\cython\socket.pyx", line 791, in zmq.backend.cython.socket.Socket.recv
  File "zmq\backend\cython\socket.pyx", line 827, in zmq.backend.cython.socket.Socket.recv
  File "zmq\backend\cython\socket.pyx", line 191, in zmq.backend.cython.socket._recv_copy
  File "zmq\backend\cython\socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
  File "zmq\backend\cython\checkrc.pxd", line 20, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
Can anyone help with this?
This just means that there is no data to receive, which is also why it shows the "infinite loop" behavior: it is just waiting but not getting anything. Feel free to share more code context in 💻 software-dev if you want us to have a look and give more concrete feedback.
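For completeness, a minimal blocking subscriber that usually avoids the empty-queue symptom: request the SUB port from Pupil Remote (default port 50020 assumed) and subscribe to a topic that is actually being published ("pupil." needs the eye processes running; "gaze." additionally needs a calibration):

```python
import zmq
import msgpack

ctx = zmq.Context()

# 1) Ask Pupil Remote for the SUB port.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# 2) Subscribe to pupil data and block until something arrives.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload, *extra = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    print(topic.decode(), datum["confidence"])
```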
Hey! I'd greatly appreciate any tips on the following situation
We're trying to use Pupil Core with EEGLAB (from Matlab), and want to use LSL to align the information properly. Can you point me to the proper documentation for doing so, or even an existing implementation in some repository?
You basically have two options: https://github.com/labstreaminglayer/App-PupilLabs/tree/lsl_recorder/pupil_capture
Do I understand it correctly that you want to use LSL to record Core and EEG data and process the latter via eeg lab?
yes
And what Pupil Core data are you interested in specifically?
Hello- I'm wondering, I just got a pupil invisible v5.5. If Pupil comes out with another version or any updates, how would I get those?
You are referring to Pupil Cloud, correct? Updates to Pupil Cloud are free and happen automatically.
I believe I am referring to the cloud, there isn't any software in the glasses themselves that will have any updates?
There is, the Android phone companion software. You can get its update via the Google Playstore.
Ok, perfect. So it can't push to the glasses automatically, I will have to go get the update in Google Playstore.
What do you mean by push?
To get any updates from the companion software I'll have to go myself to the google playstore and install any updates. That doesn't happen automatically, correct?
You can set the PlayStore to auto-update
ahhh, perfect. Thank you!
Please see the updated link. I replaced the former one with the official documentation.
👍
Hi everyone, I am super new to Pupil Labs. I would like to synchronize pupil timestamps with our driving simulator. I found some example code in pupil-helpers/python/. I am wondering how to use these from the app? Would I have to write a script for a plugin? Are there any successful examples anyone would like to share?
Also, if I launch the pupil software on one computer but have the participant work on a different computer, do I have to calibrate on the working computer or just the pupil computer?
Hi, is "Reset 3D model" able to trigger via the remote notification?
thanks. I'm reading through the documentation of LSL trying to get it set up, and it's a pain and a half. I think it'd be super cool if a video were posted on the Pupil Labs channel showing how to set up LSL for Pupil Core, and how to set up LSL itself in EEGLAB.
Yesterday, I found a new way of easily installing pylsl for Pupil Core:
1. Download the wheel for your platform https://pypi.org/project/pylsl/1.15.0/#files
2. Save it to pupil_capture_settings/plugins
3. Rename the .whl extension to .zip
4. Unzip the file. If the content was extracted into a subfolder, move the pylsl* folders into pupil_capture_settings/plugins
This way, there is no need for a separate python installation and symlinking the pip-installed files.
Please checkout the eeglab documentation in how to use it with lsl.
thanks! though I will need to install it on MATLAB, which is why everything is so painful
unless the LSL works for both pupil core and our bio-semi EEG device, which I'm guessing is possible
it may be possible to gather the data with python and run the experimental setup in matlab
though I still think that a video showing the installation of LSL on MATLAB, and then the connection of it with pupil-core would be very beneficial
The standard LSL workflow is to use the https://github.com/labstreaminglayer/App-LabRecorder to record the data. It can receive streams from multiple sources, e.g. from Pupil Capture via the LSL Relay plugin or other LSL applications. It stores the data to xdf files. From what I have read, EEG Lab is able to read these files post-hoc.
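If you end up inspecting the recorded .xdf in Python first, a small sketch with the pyxdf package (pip install pyxdf) lists the streams it contains; the file name is a placeholder:

```python
import pyxdf

streams, header = pyxdf.load_xdf("recording.xdf")
for stream in streams:
    info = stream["info"]
    # e.g. the Pupil Capture gaze stream and your EEG stream
    print(info["name"][0], info["type"][0], len(stream["time_stamps"]), "samples")
```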
Even though I feel your pain regarding installing Matlab dependencies, setting up third-party software falls well outside our area of expertise.
Hey, just following up on this question to see if anyone has any ideas
Could you please share a screenshot of that phenomenon?
Sorry for the poor images. The image with the green circles is from a previous version and the image with the yellow pupils is from the newest version of the desktop application.
You are running from source, correct?
@papr
I believe so. Is that the default?
No, usually people would run from the bundle. It looks like the pye3d plugin is not running. Could you try restarting with default settings? You can do so in the world window's general settings.
Done, but it doesn't seem to have changed anything
Could you please share the ~/pupil_capture_settings/capture.log file?
There seems to be something wrong with the libopencv dependency that is shipping with the bundle. I will have to have a closer look. As a result, the 3d detector is not loaded.
quick question about Pupil Player: I made annotations for each fixation (I would click on the "next fixation" button to navigate the footage). However, the frame indices for the fixations and annotations are offset by 2 frames
Because "jump to next fixation" jumps to the fixation's mid_frame_index, not the start index.
is this to be expected?
perfect, ty
All right, I will wait to hear from you. Thank you!
Hi, the issue is that this release is no longer compatible with macOS High Sierra. The latest release to support this operating system is Pupil 3.0 https://github.com/pupil-labs/pupil/releases/v3.0
Hi everyone, I'm developing with an industrial camera as the world camera. It's not a UVC one, so it cannot be detected by the Pupil Labs software. Can I feed the numpy data from the industrial camera to the main program to perform the gaze mapping? Or is there any API for the world camera?
Hello everyone
Is it possible to have a significantly different "calculated sampling rate" compared to the manufacturer specification?
Technically, yes. If your computer does not have the computational resources to run at the full sampling rate, the software will dismiss/drop samples in order to keep up with the incoming real-time data. This can lead to lower sampling rates.
You could give this a try https://github.com/Lifestohack/pupil-video-backend
Hi papr, I have used the project you introduced to publish the camera's frames to Pupil Capture successfully, but then I ran into trouble with the calibration. You can find the calibration logs at lines 424-441; they show that no pupil data was collected (I used monocular calibration). I'm pretty sure the pupil is captured correctly - you can find "Dismissing 4.89% pupil data due to confidence < 0.80" in the log. I'm not sure if there's anything wrong with the video_backend program or anything else.
Thank you very much! It may help!
Hi Pupil Labs, we were looking at using April Tags with our Pupil Core and we're having trouble generating the tags we need. Do you have an easier way of getting these or have any pngs you'd be willing to send? We've been trying to use the GitHub link with little success
Thank you. There are two sampling rates: one for the eye cameras and one for the scene camera. Both could change when the computer cannot keep up, right?
Correct.
In Pupil Capture there is a maximum of 120 Hz for the scene camera and 60 Hz for the eye camera. And then there is another 200 Hz pasted on the box. The ones we calculated were less than 160 and 200.
Just trying to understand the differences and how both can be obtained from our study
The scene camera can only reach 120hz for specific resolutions. The default resolution (720p) has a maximum sampling rate of 60hz. Similarly, the eye cameras only support 200hz in the 192x192 resolution.
To calculate the effectively recorded sampling rate, you need to calculate the multiplicative inverse of the timestamp differences.
1/(t1-t0)
This gives you the frame rate for each sample. This way you can see if the sampling rate changed throughout the recording.
Alternatively, you can calculate the average framerate of your recording.
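As a concrete sketch, assuming you load the recorded timestamps with numpy (the file name is an example from the standard recording format):

```python
import numpy as np

ts = np.load("eye0_timestamps.npy")            # or world_timestamps.npy

inst_rate = 1.0 / np.diff(ts)                  # frame rate between consecutive samples
mean_rate = (len(ts) - 1) / (ts[-1] - ts[0])   # average frame rate over the recording

print(f"mean {mean_rate:.1f} Hz, min {inst_rate.min():.1f} Hz, max {inst_rate.max():.1f} Hz")
```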
I have been using 400 x 400 resolution. Seems to be the best for pupil detection. If at all they are related
It just seemed easy working with 400 X 400. Don't know if I am making sense.
The sampling rate we calculated is for the scene camera?
Sorry for too many questions
I cannot answer that since I do not know what you calculated in detail.
?? Please
In this case, the maximum sampling rate is 120 Hz
Yes
Summarized:
Eye camera: max 120 Hz @ 400x400, max 200 Hz @ 192x192
Scene camera: max 30 Hz @ 1080p, max 60 Hz @ 720p, max 120 Hz @ 480p
Thanks
I'm here again
The image in Pupil Capture is distorted. Please, how can I get an undistorted image?
You can use the iMotions exporter to get undistorted scene video and gaze signal.
Thanks. I found no eye movement in the iMotions video
~~Could you please clarify?~~
No gaze signals
The gaze data is undistorted separately. It is not rendered into the video.
What, then, is the purpose of the iMotions video?
The export can be uploaded to https://imotions.com/ for further analysis
I will have to find a way of transforming the video shown in Player. The images are curved and don't have sharp edges. They are zoomed out from the centre.
Does the release work with Mojave? Or do we have to go newer than that too?
Mojave should work :) but knowing Apple, they will deprecate Mojave fairly soon. When that happens, Homebrew stops supporting it too, and then it becomes very difficult for us to maintain the bundle.
If I were you, I would upgrade at least to Catalina.
All right, currently upgrading to Catalina. So then should the problem just go away after the upgrade or are there settings I need to adjust first?
It should just work.
Hello, is there any way I can jump to a specific fixation in Pupil Player? For example, maybe I could type in the fixation number and the player would show that fixation in the video.
Unfortunately, no. But you can jump to the next and previous fixations using the f and shift+f keys.
Have any of you worked with the world view camera mounted on the side of the head instead of the center? And if so were there any additional required changes to the underlying programming required to compensate for the offset?
Hi All, I am trying to set up an experiment where the subject is reading from physical texts. When I try the screen marker calibration and then place a paper in front of the same screen, I am more or less able to get word level gaze locations, however when the book is being read naturally, this doesn't work. I understand that this is because of the region of calibration and a change in depth. My question is whether there is a better way to set this up? I tried the single marker calibration but that seems to be going horribly wrong. Is there something fundamental that I am missing?
Hello is anyone here from the Pupil Labs Company??
I am interested in talking with the sales department regarding delivery times of Core
Hi, have you reached out to info@pupil-labs.com already?
It is somehow urgent
Yes I sent an email at 7:04 Greek time
should be 6:04 German time
I see your email. It has been assigned already. We will come back to you soon.
I received the answer, everything explained perfectly, great service, thank you!
Ok that's fine
Thank you
Hi there! Kind of weird to come here, but I thought, hey... these people know a lot about eye tracking I guess...
I am a simracer. I have a Super Ultra Wide Samsung 49'' monitor but this is not enough for me. I've tried VR, but it gives me headaches, and because of the strap I get cross-eyed after 2 hrs of Formula 1, and I was sweating too much. I prefer my super ultra wide monitor for real.
Ok, but now I heard about eye tracking!! It's lighter than VR I guess, and it would be PERFECT for what I need: I only want to be able to look into my side mirrors! I have heard about the Tobii tracker, which is 200 USD right now. My question is:
Is there any cheaper tracker? Just to be able to get that "ok, I move my head a bit to the right", so you know..
I also heard about Pupil! Am I in the right place? On the right path? ahah... I see it's more of a software than hardware thing...
Hi π Yeah, Pupil Core products are designed for researchers and not necessarily the everyday end user. But I would be happy to give you my thoughts on your issue if you are still interested.
https://pupil-labs.com/products/core/accessories/ holy moly ok... kind of the right path but not the right path... 650 euro... I am not spitting on it, it's just not what I am looking for. This must do SO much more than looking at the side mirrors, lmao, like controlling the cursor and high-tech sensors... I don't need this. I only want to look at the mirrors ahah
From an expert like you, it would be amazing to have your thoughts
My first question would be what problem you are trying to solve using an eye tracker. The eye tracker will not increase the field of view of your monitor, unless the software performs specific actions, e.g. blending in the side mirrors if you are looking at a specific place.
Correct, the FOV will still be the same in degrees, but, you know, in ACC (Assetto Corsa Competizione), if we press the right cross button, the FOV shifts right a bit, know what I mean
The centering of it changes
Hi, we bought a Pupil Core with the 200 Hz eye cameras and a RealSense D415 world camera.
We want to substitute the world camera because of many problems we had/have. Can we have a list of cameras that we can use?
Please contact info@pupil-labs.com in this regard.
@papr yes, I will email you @ info@pupil-labs.com π
@papr In Pupil Player, I'm trying to set up annotations. I see how to flag individual frames, but is there a way to make the annotations last over a stretch of time? I assumed there would be, since there's a "duration" output in the annotations csv
Hi, I understand the confusion. Pupil Player cannot create annotations with durations through the UI. The durations field can be set for real-time annotations sent via the Network API.
So how would I do these real-time annotations? Sorry, I'm still getting familiar with the Pupil Labs software
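A rough sketch of sending a timed annotation over the Network API, modeled on the pupil-helpers remote annotation example (the default Pupil Remote port and the plugin name "Annotation_Capture" are assumptions to verify against that example):

```python
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Get the PUB port so we can publish annotations onto the IPC backbone.
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to connect (slow-joiner issue)

# Ask Capture for its current clock so the annotation timestamp matches Pupil time.
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Make sure the annotation plugin is running in Capture.
start_plugin = {"subject": "start_plugin", "name": "Annotation_Capture", "args": {}}
pupil_remote.send_string("notify." + start_plugin["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(start_plugin, use_bin_type=True))
pupil_remote.recv_string()

# An annotation with a duration (in seconds); it ends up in the export's duration column.
annotation = {"topic": "annotation", "label": "trial_1", "timestamp": pupil_time, "duration": 2.0}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))
```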
Hi. Is there any way to export the 3D eye model and import it into the Pupil Capture again so that I would have the same eye model all the time?
Hi, setting the eye model to a specific location is not supported. Usually it is also very unlikely that the eyes will be at the very same place every time you wear the hardware.
Hi! I notice that when I run the new Pupil Core with Pupil Capture v3.4, it has trouble keeping up with the desired sampling rate (set at 200 Hz), specifically when I run both eyes at the same time. CPU goes to 250% or so, and the actual Fs is roughly between 120-160 fps per eye. What system requirements are needed in order to smoothly run at 200 fps with both eyes simultaneously? Many thanks, Jesse
Hi, yes, unfortunately, the pye3d detector introduced in Core 3.0 requires more resources than our previous 3d detector. If you are looking for new hardware, I can recommend using the M1 macs with macOS Big Sur. They run the software extremely smoothly. Otherwise look for high CPU speed (>3 GHz)
Hello, in the process of switching out the regular and fish-eye cameras, we accidentally overtightened the new camera and cracked the plate under the removable camera part. Our eye tracker (Core) is still under warranty, so what do we need to do to get this replaced?
Also, the April Tags we're using aren't being picked up with the fish-eye camera, but do get picked up with the regular camera. We need to use the fish-eye camera for our experiment, so how do we get the April Tags to work under fish-eye view?
You will likely need to increase their size.
Ok. Is there a good way to do that without losing resolution?
If you use nearest-neighbor scaling, you do not lose any quality.
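For example, with Pillow (the tag file name is just an example of what the apriltag-imgs repository provides):

```python
from PIL import Image

# Scale an AprilTag up without smoothing, so the black/white squares stay crisp.
tag = Image.open("tag36_11_00000.png")
big = tag.resize((tag.width * 20, tag.height * 20), resample=Image.NEAREST)
big.save("tag36_11_00000_big.png")
```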
Ah, ok. Thanks!
I am researching this product for use in a seizure detection project where user's pupils will be tracked through the day -- my main concern is mobility, does this device connect to Pupil Mobile app like Pupil Invisible can? Or is there a way to run Pupil Capture software off a smaller device than a computer?
Hi, Pupil Mobile has been deprecated and its use for new projects is discouraged. Also, Pupil Mobile does not perform on-device pupil detection, i.e. you would need a Pupil Capture or Player instance anyway. Also, Pupil Capture is fairly resource intensive. I suggest using the pupil detector code directly https://github.com/pupil-labs/pupil-detectors/ within a piece of code that runs on a smaller device that fits your constraints.
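The pupil-detectors README shows the basic usage; roughly, per grayscale eye frame (the image file here is a placeholder for whatever your device delivers):

```python
import cv2
from pupil_detectors import Detector2D

detector = Detector2D()

# The detector expects a single-channel 8-bit image of the eye.
gray = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
result = detector.detect(gray)

print(result["confidence"], result["ellipse"]["center"])
```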
Quick question if anyone knows the solution: when I try to launch Pupil Capture, the "world view" opens then stops responding on a white screen
I know it was set up probably a couple of years ago by somebody else, so I'm guessing the software version is different?
I went back to an old version and got it to work
Hi. We modified the Pupil Core a little bit such that the subject will always wear the Pupil Core at the same location, so we want to fix the 3D eye model to ensure we can get the same result every time.
Since exporting and importing the 3D eye model is not supported, could you please point out which part of the source code I should check to implement this feature? If possible, can I implement this feature using just a plugin? If not possible, could you please illustrate how to build the software from a different version of source code?
Also, is it possible to modify the eye model parameters (such as eye radius and cornea radius)?
Hi, yes, this can be done via a custom plugin and just calling existing pye3d functionality! You can read about general plugin development here https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin (it contains a section about pupil detection plugins at the bottom, too).
The plugin class for you to overwrite can be found here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py#L36
It interacts with this pye3d class https://github.com/pupil-labs/pye3d-detector/blob/91769309e16655976176f880fa6f24d328af2e41/pye3d/detector_3d.py#L102
You can freeze, replace and set the locations of the models that are being used in background.
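As a very rough, untested skeleton (the import path and detect signature follow the linked pupil source at the time of writing and should be verified against your version; the actual freeze/restore calls depend on the installed pye3d version and are left as placeholder comments):

```python
# Place this file in pupil_capture_settings/plugins/.
from pupil_detector_plugins.pye3d_plugin import Pye3DPlugin


class FixedModelPye3DPlugin(Pye3DPlugin):
    # Name shown in the pupil detector menu (assumed attribute; check the base class).
    label = "Pye3D (fixed eye model)"

    def detect(self, frame, **kwargs):
        # self.detector is the pye3d Detector3D instance created by the base class.
        # Here you would restore your stored eye model parameters and freeze the
        # model (e.g. on the first frame only) before running detection.
        return super().detect(frame, **kwargs)
```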
I will be able to help you out more concretely once our next 3.5 release has been published.
Please share a Capture recording of you performing the calibration with data@pupil-labs.com. This way we can give the most concrete feedback.
Thank you! I've sent you the recording.
Is it possible to upload pupil core footage to pupil cloud for easier analysis?
Hi, unfortunately this is not possible as the data formats are not compatible.
Also, if I want to set up markers for 2 surfaces, do they need to be from different families?
Not different families, no. But each marker has an ID that needs to be unique within the recorded video so that the software can recognize the surface.
Hello. Are there recommended values for the onset/offset confidence thresholds and filter length for blink detection? I want to measure subjects' blink counts while they are relaxing in rooms.
You can read about the blink detector in our online docs: https://docs.pupil-labs.com/core/software/pupil-player/#blink-detector; and about onset/offset thresholds in the Best practices section: https://docs.pupil-labs.com/core/best-practices/#blink-detector-thresholds
Thank you very much for the information!
Hello, I have a similar issue right now. I followed these steps but it was not fixed.
I wanted to ask for advice as I am trying to find out if the issue I address is a technical issue or not.
I am using the Pupil add-on in the HTC Vive, and I am connecting it in Unity and using Pupil Capture. Suddenly I have issues with Capture, as the world cam from Unity is not displayed on screen, as you can see in the picture below. I followed the troubleshooting suggested here to reinstall the drivers and I see two of them, called Pupil Cam1 ID0 (I also checked the hidden cameras). I don't see one related to the world cam for the Unity scene. I can see both eyes in Pupil Capture, but just a grey screen as the world screen. The connection in Unity seems to work properly, as displayed in the second picture attached.
I have tried it before and there was no problem, so I am wondering if there is something else to try or if this is a technical issue.
Hi, please see the hmd-eyes docs regarding screencasting. Unity needs to stream its virtual main camera to Capture; hmd-eyes has built-in support for that. This is not a driver issue since there is no physical camera that needs to be connected. The eye camera drivers are installed correctly already.