This can be done in 3 ways: 1) using Pupil Player, as papr has already mentioned; 2) inspecting the recording files; 3) collecting data from the Network API as a video stream and serialized objects - i.e. json.
small technical correction: The Network API uses msgpack to serialize data, not json. 🙂
Yes, I agree. I guess, however, msgpack only converts json-like structures into smaller expressions with its own compression method.
msgpack is conceptually very similar to json, yes, with the main difference that json uses text and msgpack uses binary data (which makes it more efficient in various ways)
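To illustrate the difference, a minimal sketch (the exact byte counts depend on the payload; the datum shown is a made-up example):

import json
import msgpack

datum = {'topic': 'gaze', 'confidence': 0.97}
as_json = json.dumps(datum).encode()  # human-readable text
as_msgpack = msgpack.packb(datum)     # compact binary, fewer bytes
print(len(as_json), len(as_msgpack))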
dear pupil-labs, I would actually like to hook on to your mention of msgpack. I'm having difficulty reading pldata in python. I'm getting the error "unpickling stack underflow" (on top of the error "ExtraData: unpack(b) received extra data"). The internet says this means the pickle data is corrupt, but loading the data in Player seems to work fine. I'm using Python 3.8 and I've downgraded msgpack to 0.6.2 following a suggestion for an earlier error.
Could you specify what code you use to read the file?
Thanks for your prompt reply! from lib.pupil.pupil_src.shared_modules import file_methods as pl_file_methods
pl_file_methods.load_object('/Users/marsman/organized_data/s001_b6/raw/pl/pupil.pldata')
load_object is not meant for pldata files. Please use load_pldata_file instead
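For reference, a minimal sketch of how that could look (assuming, as in your snippet, that the pupil source tree is importable; note that load_pldata_file takes the recording directory and the topic, not the full file path):

from lib.pupil.pupil_src.shared_modules import file_methods as pl_file_methods

# directory of the recording plus the topic (file name without .pldata)
pldata = pl_file_methods.load_pldata_file(
    '/Users/marsman/organized_data/s001_b6/raw/pl', 'pupil'
)
# returns a named tuple of (data, timestamps, topics)
for datum, ts in zip(pldata.data, pldata.timestamps):
    print(ts, datum['topic'])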
Thanks a bunch! that solved my issue! (after upgrading msgpack again)
Hello, I'm having an issue where Pupil Capture is crashing on startup. I've been using the software reliably for about 5 months now, but after switching to a new computer, the software isn't responding. The eye windows connect, but then instantly go blank and stop streaming video. Then the whole application crashes. I've tried reinstalling the software, rebooting the computer and reconnecting the headset. Any ideas why this might be happening?
Hi! I am sorry to hear that. Could you share the Home directory -> pupil_capture_settings -> capture.log file with us?
Here is the file
No recorded intrinsics found for camera pupil...
perhaps this error might point us in the right direction?
Mmh, I do not see any traces of a crash or similar in the log file. Please delete the user_settings_* files in the same folder as the capture.log file and try again. If this does not resolve your issue, please contact info@pupil-labs.com in this regard.
Will try that. Thanks for looking into it! 😊
when I try to start Pupil Player I get this error. The window closes very fast. Does anyone know how to fix it? I work on Windows 10 / 64 bit.
Hi, could you please share some information about your system (System Settings > System > About)?
Hi, I am facing an issue while working with Pupil Core. While measuring pupil positions data, I can see that the amounts of right-eye and left-eye data are not equal. I have tested multiple times, but every time there is less right-eye data than left-eye data. This is why the values get de-synced over time. Has anyone faced this issue? If you have, how have you overcome it?
the cameras are free running, so you will always have different sample counts. This is OK since you can use the timestamps to find matching pairs of left and right data. Check out https://docs.pupil-labs.com/developer/core/overview/#timing-data-conventions for more.
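For example, a rough sketch of nearest-timestamp pairing (assuming a pupil_positions.csv export with the usual eye_id and pupil_timestamp columns):

import pandas as pd

df = pd.read_csv('pupil_positions.csv')
eye0 = df[df['eye_id'] == 0].sort_values('pupil_timestamp')
eye1 = df[df['eye_id'] == 1].sort_values('pupil_timestamp')

# pair each eye0 sample with the eye1 sample closest in time
pairs = pd.merge_asof(eye0, eye1, on='pupil_timestamp',
                      direction='nearest', suffixes=('_eye0', '_eye1'))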
Hey, in my experience this is very normal. Every sensor (instrument) has a limited precision. For example, the 3D model of Pupil Core has an accuracy between about 1.5 and 3 degrees. So you can get a result roughly 2 cm away from what you actually looked at when you are 50 cm from a target - e.g. a laptop display. In your case, you have 2 eye cameras, and these cameras can give you data with different precision. That is the first thing that affects the confidence of the system. On the other hand, another thing that affects the accuracy can be your eye structure: in some subjects, one eye can look different from the other.
Hi, the headset is made of up to three usb cameras and a usb hub with voltage regulators. You can in theory connect any usb device and also connect the cameras directly with a usb cable.
Thanks for your reply. Well, are there other components besides the cameras and the hub in the current architecture, such as a gyroscope?
In Pupil Invisible, yes; in Core, no.
Hi everyone. Just starting to explore the exciting world of pupil labs. 🥳 I've researched gaze interaction earlier, using remote gaze trackers on desktops. (Can share more about that, if anyone's interested.)
As a start, I want to test the software, so I'm looking to use my phone as the "eye camera." I've been reading up on the earlier discussions, which have been very useful. (Thanks for that, everyone.) I might try using the pupil-video-backend to feed the eye camera (in Pupil Capture) from my phone. Has anyone else done something similar? Thanks in advance!
I don't think so, because of the general attributes (like infrared usage and frequency) and resolutions of the cameras that are bound to the headset.
How did it go @user-670bd6 ? Thanks.
unfortunately I haven't gotten any help with regards to this issue; I think the recommended lens is just not compatible
On the other hand, I guess Pupil Labs' business model is hardware selling. From this viewpoint, the costs of the software made by Pupil Labs are actually covered this way. For this reason, I don't think you can use Pupil Core with your own cameras other than the ones used in the current headset. However, I wonder what answer the developers will give.
My suggestion is to use a statistical model for your data. For example, if the confidence of each datum coming from the eye cameras is high enough - e.g. > .8 - you can use interpolation to fill in the missing values.
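To sketch that idea (a hypothetical example assuming a pupil_positions.csv export with norm_pos_x/norm_pos_y and confidence columns):

import numpy as np
import pandas as pd

df = pd.read_csv('pupil_positions.csv')
low = df['confidence'] < 0.8                        # suggested threshold from above
df.loc[low, ['norm_pos_x', 'norm_pos_y']] = np.nan  # mask low-confidence samples
df[['norm_pos_x', 'norm_pos_y']] = df[['norm_pos_x', 'norm_pos_y']].interpolate()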
Hi, I can't start recording in Pupil Core and I get this error. I would like to know the solution. Thank you very much for your help.
Hey, I'm trying to use Pupil Player to process eye videos using post-hoc pupil detection. The 3d eye position at the very beginning of the videos is very inaccurate, and it gets far more accurate after about 90 seconds. I know that the eye cameras do not move significantly during the video, so how can I get the eye location processed into a single location for the entire video?
@user-1d3558 you could re-run pupil detection post-hoc in Pupil Player and freeze the eye model (if there is no movement of the headset).
@user-1d3558 To extend this response: Once the eye model is frozen during the post-hoc pupil detection, you can restart the detection process from the menu and the frozen model will be applied to the complete recording.
Hi, how are you? I'm trying to work with Pupil Core and an Intel Realsense d415 camera mounted on top of it. My issue is that Pupil Capture can't recognize the d415 camera although my laptop recognizes it. I have tried to implement what is suggested in troubleshooting but it did not help. I have an HP Pavilion with Windows 10. Thanks a lot and have a nice day
Hi, welcome to pupil labs! You can actually connect any usb camera to Pupil Capture, but you will need the cameras mounted a certain way (check out what the HW looks like: https://pupil-labs.com/products/core/) and the eye cameras need to operate in IR. Without this, the SW will not produce anything meaningful. As @user-7b683e said, we finance all our work through selling the HW, but we keep Pupil Core open source and we don't lock you in or force you to use our HW. (We do think that we offer fair prices for what our HW and SW can do though 🙂 .)
Thanks for the welcome and the reply, @mpk. I was also interested in the Moverio add-on, but looks like the model of Moverio glasses being supported has been discontinued by Epson. Any thoughts/expectations on that front?
Hi. I wanna ask if it is possible to use Screen Marker Calibration on three screens at the same time? Our experiment will use three connected screens and we have to do calibration for all three, but I only found a calibration option for each screen separately.
Hi, with Pupil Core, gaze is not calibrated relative to a specific screen but to the scene camera's field of view. The screen selection in the calibration menu is more about where the markers are being displayed. If you are interested in gaze relative to your screens, have a look at our surface tracking plugin: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
Thank you for your answer! So is it possible to show the markers on the three screens at the same time? I think the area where the markers are shown will be the most accurate part of the FOV, if I understand correctly.
Hey, I am having trouble with my exported "pupil_positions.csv" file. I built a system that works well when the "method" column is purely "3d c++", and my exports used to use this method exclusively. Now I get a mix of "pye3d 0.1.1 post-hoc" and "2d c++" as the method, and this is causing problems with how we are reading the data. Is there a way to guarantee that the method used is 3d c++?
Correct. Unfortunately, the screen marker calibration does not support displaying markers on multiple displays at once. Instead, I suggest using our single marker choreography where you display a fixed marker and ask the subject to fixate it while rotating the head. This way you have full control over the calibration area.
Thank you! it helps me a lot!
Hi, starting with Pupil Core 2.0, we run both (2d and 3d) pupil detectors in parallel. As a result, you get two rows per frame. Starting with Pupil Core 3.0, we replaced the legacy 3d detector (3d c++) with pye3d. The latest Pupil Core version compatible with your system is therefore https://github.com/pupil-labs/pupil/releases/v1.23 Alternatively, I recommend adjusting the system to accept the newer exports and dropping the rows that you are not interested in.
Ok thanks, it shouldn't be too difficult to ignore or remove those rows
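In case it helps, a minimal sketch of that filtering step (assuming the standard pupil_positions.csv export):

import pandas as pd

df = pd.read_csv('pupil_positions.csv')
df_3d = df[df['method'].str.contains('pye3d')]  # drop the '2d c++' rows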
player - [INFO] camera_models: Loading previously recorded intrinsics...
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
File "launchables\player.py", line 624, in player
File "plugin.py", line 398, in __init__
File "plugin.py", line 421, in add
File "gaze_producer\gaze_from_offline_calibration.py", line 57, in __init__
File "gaze_producer\gaze_from_offline_calibration.py", line 114, in _setup_controllers
File "gaze_producer\controller\gaze_mapper_controller.py", line 42, in __init__
File "gaze_producer\controller\gaze_mapper_controller.py", line 129, in publish_all_enabled_mappers
File "gaze_producer\controller\gaze_mapper_controller.py", line 151, in _create_gaze_bisector_from_all_enabled_mappers
File "player_methods.py", line 46, in __init__
ValueError: Each element in data requires a corresponding timestamp in data_ts
player - [INFO] launchables.player: Process shutting down.
When I try to use pupil player I get this error, I've tried verifying the files and reinstalling but nothing seems to fix it
You can reset the recording by deleting the offline_data folder. Afterward, it should open as expected in the latest Player release.
It appears that this glitch only occurs in data sets I have already opened. When I open a fresh data set, the Pupil Capture eye 0/1 windows used in post-hoc pupil detection crash and I get this error message
Hi, I need help with a problem that occurred recently. I have never had problems using Pupil Core, but now every time I open Pupil Capture everything opens correctly, yet as soon as I move the world camera the program crashes. What could be the reason?
Please contact info@pupil-labs.com in this regard. 🙂
Ok thanks
Hi, I use Pupil Mobile with a Moto Z3 when I need a small recording device, but sometimes my recordings are truncated. Why is this happening?
This happens if the app is terminated unexpectedly. This can have a variety of reasons. Unfortunately, the data can only be recovered partially. 😕
I did find a solution to this problem. I had to uninstall my Cisco VPN and that fixed it
Our next release will also no longer crash if this edge case happens.
I also got this one fixed by reinstalling the software from my Program Files (x86) to the root of the drive
This should not have made a difference. What might have happened is that you installed a different version which resets the session settings. When you open the recordings afterward, it will load the gaze data from recording by default. The issue is related to the post-hoc calibration which needs to be enabled manually.
The application is running all the time and displays the recording time
It is possible that the recording (background) process is what is being terminated (not necessarily a user action). Unfortunately, I cannot offer an investigation of the possible causes as we no longer maintain Pupil Mobile. 😕
Can I use another mobile phone or do you recommend using the Moto Z3?
We have also had good experiences with the 1+6, but that phone is no longer sold by OnePlus directly. Newer phones might have newer Android versions installed with which Pupil Mobile was not tested.
Okay, thank you very much
Hello, I am using Pupil Core for the first time and I can not see my eyes in the Pupil Capture. I already tried to run the exe as administrator - nothing changed. This is the error message:
Hello! I annotated some videos manually with the annotation player. I wrote down the mistakes I made (as there is no possibility to correct them), like "Fixation 1345 --> label y instead of label x". Now I wanted to manually correct these mistakes in Excel, but the annotation csv file does not include the fixation corresponding to the label. Is there any possibility to correct them? Thanks a lot
Could you please make sure the headset is connected and try "Restart with defaults" from the general settings?
The headset is connected via USB and the restart did not help
I can see "world view" so my computer at least recognizes one camera
If you go to the "Video source" menu, enable "Manual camera selection", and open the "Select source" selector, what values are being listed?
Where do I find the "Select Source" selector?
My bad, I was referring to Select camera. Your screenshot shows the necessary information, thank you! As you can see, there are two "unknown" cameras. This means that the automatic driver installation for the eye cameras did not work. Please perform steps 1-7 from here [1] for both eye cameras (Pupil Cam2 ID0/1, or similar). Afterward, Capture should list all three cameras by their correct name.
[1] https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
Okay thank you, I will follow these steps
In reverse, look up the start_frame_index-end_frame_index range for a given id in fixations.csv, and find the annotations.csv row whose index lies within that range.
Fixations often span multiple frames, but manual annotations are associated with a single timestamp/scene video frame. You can identify the fixations based on the annotation's index value and the fixation's start_frame_index-end_frame_index range. (Index always refers to the scene video frame index in this context.)
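A minimal sketch of that lookup (assuming the standard fixations.csv and annotations.csv exports):

import pandas as pd

fixations = pd.read_csv('fixations.csv')
annotations = pd.read_csv('annotations.csv')

def fixation_id_for(frame_index):
    # a fixation matches if the annotation's frame lies within its frame range
    hit = fixations[(fixations['start_frame_index'] <= frame_index) &
                    (fixations['end_frame_index'] >= frame_index)]
    return hit['id'].iloc[0] if len(hit) else None

annotations['fixation_id'] = annotations['index'].map(fixation_id_for)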
thank you very much, it works! Maybe it would make sense to add the fixation id to the annotation csv in the future?
For these special cases, we recommend building custom plugins or post-processing scripts that implement the needed functionality. Should you need help with that, feel free to contact info@pupil-labs.com for more information.
It would for your very specific use case, but it does not generalise to everyone using annotations. Annotations are agnostic to what they are annotating in order to be as flexible as possible. 🙂
it's just a recommendation, but maybe our case is very specific 😁
alright thank you
Hey again, I followed the steps and the Pupil Cam1 ID2 is present in the device manager, but the eyes are still not shown in Pupil Capture. Is it possible that the camera is broken?
You might want to run steps 1-5 from here https://docs.pupil-labs.com/core/software/pupil-capture/#windows before trying to manually install drivers again
If they are listed in the device manager, the cameras are fine. They just need to be in the correct category (libUSBk)
yes one camera is listed (my model has only one pupil tracking camera on the right side)
Also, it looks like you might have installed the libusbk drivers for your fingerprint reader by accident. It is possible that this prevents the fingerprint reader from working correctly. You might want to uninstall the driver for this particular entry.
Just to make sure: Is there a Pupil Cam2 ID0 listed somewhere in the device manager? The camera visible in this category is the scene camera, which is already listed correctly in Capture.
okay I follow these steps again
ah okay, no there is no Cam2 ID0
Neither under Imaging Devices nor Cameras? In this case, please contact info@pupil-labs.com
no nowhere, I will contact them. Thank you very much
It doesn't change anything
In what regard?
I uninstalled the fingerprint reader and there is still no Cam2 ID0
Ah, yeah, that was expected. I suggested this to ensure that your fingerprint reader was working as expected
ah okay 😄 sorry
I have never been able to get the eye camera to connect on the Pupil w120 e200. Any solutions?
Please see my conversation with @user-069b04 above 🙂
I did, and I am looking for a way to run the .exe on Mac. Could you send the link for Mac, please?
Hi. I have Pupil Core glasses. I am trying to get gaze information from them via the API. I successfully connect to the API and get the subscription port via zmq. I subscribe to the topics 'pupil' and 'gaze', but I get only pupil messages. In the logs of recordings I can see gaze messages. What am I doing wrong?
You need to calibrate before gaze data is being generated. Could you clarify what you mean by "in loggings of recordings"?
I did the calibration process using the Pupil Capture software. I can see gaze messages in the recordings: gaze.pldata.
ok, thanks for the clarification. This indicates an issue with your script.
Thank you. Can you please check my script?
Ah, you shouldn't sleep in that loop. zmq receives and buffers the data in the background. Once the background queue is full, it will drop new messages instead of dropping the old ones.
ok, I deleted the time.sleep line, but there are still no gaze messages. So I tried to run the calibration again using Pupil Capture, but I get this error: not sufficient pupil data available.
I suggest only printing the topic for a start. This makes it a bit easier to check if you got gaze data or not. You will need a successful calibration, though. Regarding the error message, it sounds like your eye processes are not running or the pupil detection is very bad, causing all samples to be discarded.
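A minimal sketch of such a topic-printing subscriber (the standard request/subscribe pattern; adjust the IP/port to your setup):

import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect('tcp://127.0.0.1:50020')
pupil_remote.send_string('SUB_PORT')
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f'tcp://127.0.0.1:{sub_port}')
sub.subscribe('gaze')  # topic prefix match

while True:
    topic, payload = sub.recv_multipart()
    print(topic.decode())  # no sleep: drain the queue as fast as it fills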
by the way, you can easily edit your message via the message context menu. There is no need for deleting them 🙂
You are a big help. Thank you. You are right, the processes are not running. I tried to start them, but I get this error: eye0 - [WARNING] launchables.eye: Process started. Estimated / selected altsetting bandwith : 151 / 256. !!!!Packets per transfer = 32 frameInterval = 82712 Process eye0: Traceback (most recent call last): File "launchables\eye.py", line 743, in eye File "zmq_tools.py", line 174, in send File "msgpack\__init__.py", line 35, in packb File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__ MemoryError: Unable to allocate internal buffer.
This sounds like you are out of memory. 🙂
Where can I get position from gaze data? I am interested in dx and dy values for consecutive measurements. Is it norm_position? What are its max and min values?
thank you very much. Saved my day.
See https://docs.pupil-labs.com/core/terminology/#coordinate-system
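In short: norm_pos is normalized to the scene image, with (0, 0) at the bottom left and (1, 1) at the top right, so per-axis deltas lie in [-1, 1]. A small sketch, assuming gaze_data is a hypothetical list of gaze dicts you received from the network API:

xs = [g['norm_pos'][0] for g in gaze_data]
ys = [g['norm_pos'][1] for g in gaze_data]
dx = [b - a for a, b in zip(xs, xs[1:])]  # deltas between consecutive samples
dy = [b - a for a, b in zip(ys, ys[1:])]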
thank you a lot
Hi, I have a problem subscribing to a topic. When I use the subscribe method I get this error: "object has no attribute 'subscribe'". This happens only when I try to subscribe; I can successfully connect to pupil_remote and get the sub and pub ports. What could be the problem?
Let's move this to 💻 software-dev
Ok thanks
Hi, I'm trying to work with Pupil Core and an Intel Realsense d415 camera mounted on top of it. My issue is that Pupil Capture can't recognize the d415 camera although my laptop recognizes it. I have tried to implement what is suggested in troubleshooting but it did not help. I have an HP Pavilion with Windows 10. Thanks a lot and have a nice day
Hey, which version of Pupil Capture are you using? And have you installed the corresponding Realsense plugin? https://github.com/pupil-labs/pupil-community#plugins
Hi all, some of the participants in my gaze tracking study will need to use their prescription lenses to be able to perform the intended study task. I have read on this server that you recommend using contact lenses instead or mounting pupil core underneath the prescription lenses. I'm afraid I have only limited control over which kind of corrective lenses the participants will use.
That is why I would like to test a setup allowing the use of prescription glasses (if necessary). Unfortunately, I failed to combine Pupil Core with glasses in my test. 🙁 When I put Pupil Core on first and then tried to add the glasses in a second step, the glasses' temples collided with the eye camera arm sliders. I also tried mounting the temples below or above the sliders, but this also seems not to work well (the glasses are then askew and cannot be worn properly): - below: the nose pads of the glasses can then barely rest on the nose - above: the temple tips of the glasses then end far above the ears. In addition, the distance between eye camera and eye is very limited in both options, not allowing me to record the whole eye or generally adjust the eye camera position and orientation without scratching the glass.
If I have understood previous comments on this topic correctly, the pupil dev team succeeds in using pupil core with prescription lenses. So I may just have misunderstood how this works exactly? Could you give me an explanation (or an explanatory image) how pupil core and prescription lenses may work together? Are there any alternative mounting options available for users of prescription lenses?
Thanks for your help!
Hi!
Where can I find a list of notification message topics/categories/subjects and their respective commands (e.g. recording.should_stop where recording is the topic and should_stop is the command)? Also, what is the purpose of notification messages? On the webpage, it says that notification messages are used to coordinate activities. How? What happens when you don't use notification messages?
Thank you!
Hi. Is it true that Pupil Mobile is no longer supported? We purchased the Moto Z3 right before the pandemic, and now that we are open for research I am reading that it has been deprecated and is not compatible with newer data formats. Are there any best practices for researchers who would still like to capture data with their cell phones (i.e. recommended versions of Pupil Mobile, Pupil Capture)? Thanks
@user-3f477e Please see https://discord.com/channels/285728493612957698/285728493612957698/847395213638762496 for reference
Thanks for your response! I had already read nmt's suggestion. To be honest, it was too vague for me to succeed (or maybe I just misunderstood). Should "below" mean vertically below the lower eyelid, or just beneath the eyeglasses?
Thanks for replying. I already solved the problem by installing an old version of pupil capture.
Hi, I am trying to run Pupil from source. I keep getting this error. Going through the Pupil Labs GitHub page on dependencies, I found out I need Visual Studio; however, I already have it installed.
Hi, how did you install pyuvc? Have you also seen the note regarding the Microsoft Visual C++ 2010 Redistributable in the docs? https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md
Hi @papr, I followed all the instructions, but I kept getting the error; I even had to use another computer. So I decided to try a separate computer with Ubuntu 18.04. I was able to build and run Pupil Capture, but the cameras aren't coming up. Attached is the screenshot from Pupil Capture.
Please, is it possible to calculate the total scan path length for one trial of an experiment?
Do you mean spatial or temporal length? With all due respect, are these meaningful metrics? If so, do you have references as to how they are defined?
Spatial length
And do you define the scanpath as the path between fixations?
Yes ... Mixed of fixations and saccades
Then you would have to simply define a temporal start and end point and sum the distance between consecutive fixations, correct?
How do you calculate the distance between fixations?
That depends on the coordinate system. If you have a 2d coordinate system, e.g. fixations on an AOI, Euclidean distance might be most reasonable.
Yes, I calculated the ideal Euclidean distance between AOIs
I used PowerPoint
You need to be very careful to not mix up coordinate systems by accident.
Now I want to know how to do that for the whole scan path. The distance between fixations can be very small and difficult to calculate
But the scanpath is just a series of fixations, correct? Summing them up sounds most intuitive to me. The only question: Based on which reference does a scanpath start and stop?
There is a start AOI and Target AOI. And the scanpath is a string of fixations and saccades between these AOIs
Are the AOIs fixed in relation to each other, or do they move?
They are fixed
The issue with the two-AOI approach is that the AOIs have their own coordinate systems. In other words, a fixation location in the start AOI is not comparable to a fixation location in the target AOI. At least, you cannot calculate differences between locations.
Instead, if your AOIs all lie in the same plane, you can define one big surface to which the fixations are being mapped. Then they are all part of the same coordinate system and you can calculate differences between them.
Thanks
The AOIs are in the same plane. Please, are you saying I should create a single surface for the plane?
Is the data for calculating the differences in the recorded data?
The tracking video is made up of different trials but the same procedure.
What I want: To calculate scan path length from start to end of one trial.
As you probably know, the software will not calculate the differences for you. But yes, the fixation on surface data contains the needed information.
My concrete suggestion: Use your existing workflow to identify the ids of the start and end fixations based on the start and target AOIs. Afterward, define one big "global" surface, export the fixations-on-surface data for this global surface, and use the Euclidean distance to calculate differences between fixation locations (you can identify the fixations belonging to the scan path based on their ids).
I'm measuring the distance between the last fixation on start and first fixation on end... Note scan path is going to have many fixations and saccades
Are you asking if you should do that, or are you stating that this is your goal?
Just asking if I can measure the distance between these two fixations
As you noted, a scan path is more than just a straight line. I suggest: 1. identify the start and end fixations by id (this defines your scan path), 2. calculate the Euclidean distance between each pair of consecutive fixations, 3. sum these distances to get the total scan path length. See the sketch below.
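A minimal sketch of those three steps (the file name, start_id and end_id are placeholders for your global-surface export and the ids you identified):

import numpy as np
import pandas as pd

fix = pd.read_csv('fixations_on_surface_Global.csv').sort_values('start_timestamp')
path = fix[fix['fixation_id'].between(start_id, end_id)]
path = path.drop_duplicates('fixation_id')  # export has one row per video frame

xy = path[['norm_pos_x', 'norm_pos_y']].to_numpy()
scan_path_length = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))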
Thanks... I think I got this
Thanks.. Initially, I plotted the scan path using the x and y data, drew a line along each saccade, read and noted down the xy coordinates for each fixation point in Excel, then calculated the Euclidean distances between fixations and summed all saccade lengths. From what you said, I don't need to plot the graph
Does the Pupil Labs eye tracker use corneal reflection?
You are referring to glints, correct? No, it does not.
I just purchased and received a pupil core, and am having issues adjusting the eye cameras
Hi @user-343787, are you referring to the arm extenders or adjusting the camera direction?
the ease shown in the video on your website is not anywhere close to my current experience
Which part feels difficult to move?
the whole extender arm
definitely feels like I'm gonna break it
I take it I need to remove the small screw? That is not apparent if so...
any and all of adjusting the eye cameras
they're completely rigid and none of the ease of the hardware videos is my current experience right now
To adjust the camera direction, you slightly unscrew the screw on the camera, which will allow you to move the part more easily. To allow the movement, please hold the plastic extension behind the sensor. The fit is tight in order for the camera to stay in place during the head movements.
I guess I should start with extending the arms. I can't even get my eyes in good central focus
Make sure to use the ball joint, too. This can make a big difference.
https://youtu.be/rJcNm5_L6QU this is what @papr is referring to
I'm in the voice channel now. Again, I find these videos not a good representation even after loosening the screws
Please send an email to info@pupil-labs.com and we'll be happy to take over the conversation there.
so you want me to go to a less responsive mode of communication for the hands-on help I'm seeking?
moving the ball joint on one of the eye cameras has now disconnected one of them
Please, how does it detect the pupil position and movement? Reflection?
We just try to find the area of the pupil and fit an ellipse to it.
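To illustrate the general idea only (this is not the actual detector, just a toy sketch of "find the dark pupil region and fit an ellipse" on a hypothetical IR eye image):

import cv2

eye = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)
_, dark = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)  # pupil is darkest
contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
pupil = max(contours, key=cv2.contourArea)
ellipse = cv2.fitEllipse(pupil)  # ((cx, cy), (minor, major), angle)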
Hi, I know the Pupil Core library is developed in Python. I want to know: is there any possibility of using the Pupil Core device with the Microsoft DirectX 12 API? Is there a C/C++ library available for that?
Can anyone tell me the USB cable length of the Pupil Core product ?
The USB-C to USB-A 3.0 cable is 2 meters long.
thanks !
We have actually more people looking at the email inbox than people answering questions here. I am sorry if the procedure feels frustrating to you. We will try to guide you through the setup process. But I would appreciate it if you could refrain from making snarky / passive aggressive comments like this in the future.
Apologies, it was frustrating
I have just heard that we were already able to help you. Happy to hear that.
Was much appreciated! Thanks again
Hi. My lab has the core head cam set up. We are trying to record audio and video using the Pupil Capture software. Is this something that can be done?
Hi, unfortunately, we were no longer able to maintain synchronized audio recording from different sources across our supported operating systems. The feature was removed in version 2.0.
We have the camera connected to a laptop. We can either use the built in microphone or a USB mic to provide the audio feed. However, when we try to record no audio gets recorded. Is audio / video recording possible with your Pupil Capture software platform?
If anyone on your team who sees this message can provide some guidance please e-mail me at [email removed] If not, please direct me to someone on your team that can. I am happy to jump back on Discord if needed. Thanks!
I see. I do have older versions of the software on my computer. Do you know if the audio recording worked back then?
It might work, but we were never able to get it working reliably. 😕
Ah ok. Is there a configuration that you know of where there is a possibility that it might work? I.e. using a USB mic vs. a laptop's built-in mic.
I can't name any specific setup if that is what you mean, no.
Just wondering before we try to figure out a more convoluted solution.
Understood. Thank you either way for clarifying!
It is possible to collect audio using the Lab Streaming Layer (LSL) framework. This would provide very accurate synchronisation between audio and gaze data, but takes more steps to set up. You would need to: - Use the AudioCaptureWin App to record audio and publish it via LSL https://github.com/labstreaminglayer/App-AudioCapture#overview - Publish gaze data during a Pupil Capture recording with our LSL Plugin https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md - Record the LSL stream with the Lab Recorder App. https://github.com/labstreaminglayer/App-LabRecorder#overview - Extract timestamps and audio from the .xdf and convert to a listenable format - Do post-processing, e.g. make annotations at given sound stimuli etc
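For the extraction step, a rough sketch (assuming pyxdf and scipy are installed, and that the AudioCapture stream is typed 'Audio'; the stream naming may differ in your setup):

import numpy as np
import pyxdf
from scipy.io import wavfile

streams, _ = pyxdf.load_xdf('recording.xdf')
audio = next(s for s in streams if s['info']['type'][0] == 'Audio')
rate = int(float(audio['info']['nominal_srate'][0]))
samples = np.asarray(audio['time_series'], dtype=np.float32)
wavfile.write('audio.wav', rate, samples)
# audio['time_stamps'] holds the LSL timestamps for syncing with gaze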
Thank you for the suggestion!
Good to know that about the email though. I had literally just unboxed my core and felt like I was getting the run around on my first time asking for help, so again was quite frustrated
Of course, no hard feelings. We should have clarified the intent when we asked you to contact the email address.
I can send you an example picture early next week :)
Hi papr, did you already have the chance to prepare a picture? Thanks for your help!
That would be great! 🙂 Should I give you my mail address or will you post it here?
I will post it here for future reference
Alright, thank you!
hello! The fixations in some of my videos are shifted (e.g. a calibration check with crosses on paper shows that the fixations are always shifted down). What is the best way to correct this? Thanks!
Hello,
As with every sensor that collects data from nature, Pupil Core has limited precision in detecting gaze or fixation points. To smooth out sudden, implausible jumps in the data, I suggest processing your recording with a statistical method such as the Savitzky-Golay filter.
On the other hand, with the current algorithms of Pupil Core, the pupil may not be detected correctly because of the amount of eyelid opening. So you can try to start recording with the headset lifted up slightly.
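For the smoothing idea, a minimal sketch (assuming a gaze_positions.csv export; window length and polynomial order are arbitrary picks to tune):

import pandas as pd
from scipy.signal import savgol_filter

gaze = pd.read_csv('gaze_positions.csv')
gaze['x_smooth'] = savgol_filter(gaze['norm_pos_x'], window_length=15, polyorder=3)
gaze['y_smooth'] = savgol_filter(gaze['norm_pos_y'], window_length=15, polyorder=3)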
If you use the Post-hoc calibration you can reapply the recorded calibration with a fixed offset
Hi, we are running the latest version of Pupil Core Capture and Player on Windows 10. We start the recording, then do the natural features calibration. Then we continue with the experiment for several minutes. When we view the recording with Pupil Player, we can see the world camera recording until the calibration is finished. Then the screen blanks out for a few seconds and the recording continues. Is this a known bug? Is there something we can do to eliminate this from occurring?
thank you for your answer. I think you mean Manual Correction? Nevertheless, when I activate post-hoc gaze calibration, the eye tracking data suddenly becomes worse. Because our recordings are quite long, I tried it with your sample recording. When I choose "Post-hoc gaze calibration", the eye tracking is suddenly really bad without me even doing anything else. Here is a comparison before and after:
In the offline gaze mapper make sure to use the correct recorded calibration. It might contain an outdated one which was chosen by default.
it's not the offset in the picture that bothers me, the fixations aren't as good as before
Just to confirm/clarify. Are you using Pupil Core or Pupil Invisible?
Hi, I'm conducting reading and translation experiments, and therefore my AOIs could well be each word in view. I wonder if it's possible to more efficiently define AOIs instead of manually drawing the lines, trying to separate each word?
Many thanks, Ted
Please, the Euclidean distances between fixations are in which unit?
The norm_x and norm_y data are in which unit?
See https://docs.pupil-labs.com/core/terminology/#coordinate-system for reference
I have found no reason to employ the post-hoc calibration. Any advantage to using it?
Is using OCR (optical character recognition) an option? Is the text large enough to be seen in the scene camera? Or is the text known a priori?
Sorry, I didn't receive notifications so I missed the reply! I'm using several different formats, including Word documents, slides, PDFs, and txt files, to be embedded in an experiment programme and projected onto the screen. Each word is large enough to be seen in the scene camera. The way I know how to define AOIs is using the markers to draw areas, but that would be very time-consuming if I have to do it for each word, as each task contains around 200 words. Many thanks, Ted
I can only choose the default calibration or calculate a new one, but this does not seem to change anything
That would mean that there was no recorded calibration. 🤔
Oh got it working now
solved using this code found on the Pupil Labs issues page
Hi. I have a problem with my Pupil Labs setup (Pupil Capture): it shows this message over and over again: Estimated / selected altsetting bandwith : 151 / 256. !!!!Packets per transfer = 32 frameInterval = 82712 eye1 - [WARNING] video_capture.uvc_backend: Capture failed to provide frames. Attempting to reinit. eye1 - [INFO] video_capture.uvc_backend: Found device. Pupil Cam2 ID1. eye1 - [INFO] camera_models: Loading default intrinsics! Estimated / selected altsetting bandwith : 151 / 256. !!!!Packets per transfer = 32 frameInterval = 82712
Version of software: 3.4
Hi everybody, I am new to Pupil and I want to ask: are there any guidelines on how to process the data we record with Pupil Core?
Hi, it looks like you might have a loose connection. Please contact info@pupil-labs.com in this regard.
I tried using another USB port on my computer. I noticed that the problem appears when I try to run Pupil Capture and Pupil Service at the same time. I need Pupil Capture to calibrate the glasses, and I need Service to receive gaze info from the glasses in my program. Why can't I run both at the same time?
Hello everyone, I am new to Pupil and already have one question: I changed the world camera and got all my calibration parameters. I am a bit lost regarding where exactly I should define my camera model and how to proceed in order for my defined intrinsics to be properly loaded. Thanks for the help!
Welcome! You can find some general best practices here https://docs.pupil-labs.com/core/best-practices/ The recommended workflow is to use Pupil Player to visualize and export the recorded data. See https://docs.pupil-labs.com/core/software/pupil-player/ We also have a series of tutorials that show case how the exported data can be used https://github.com/pupil-labs/pupil-tutorials
Thank you a lot 🙂
Hi, did you only change the lens or have you actually changed the whole camera?
Hi! I have changed the whole camera. I have found pupil/pupil_src/shared_modules/camera_models.py, but I am not completely sure where to define the camera with the correct intrinsics in order for Pupil Capture to load the custom camera
@papr @wrp Hello, I have some questions about the eye gaze of HoloLens: 1. The HoloLens 2 only provides the direction of eye gaze (the cyclopean gaze), and calculates the hit position of the user's eye gaze ray with the target. However, we need to obtain the gaze depth directly, i.e., we want to calculate the gaze position through the left gaze direction and right gaze direction. Does anyone know how to obtain the gaze direction of left eye and right eye respectively from the HoloLens 2? You know, if you want to obtain the cyclopean gaze, you must calculate the gaze direction of left eye and right eye respectively. So, we think HoloLens 2 has the gaze direction of left eye and right eye but doesn't provide them. 2. Pupil Labs provide the add-on of eye tracking for the HoloLens 1. I wonder whether the add-on of eye tracking provides the cyclopean gaze direction for HoloLens 1, or further offers the gaze position from the gaze direction of left eye and right eye for HoloLens 1.
Please use either Pupil Capture or Service.
If you want to use the Pupil Capture GUI for calibration, you can also publish gaze info to your program with Capture. Pupil Service can also do calibration, but then you will need to display the calibration marker yourself and send its position to Service over the network.
I would prefer to use Pupil Capture. How can I access gaze info? Is this the right way?
self.ctx = zmq.Context()
self.pupil_remote = zmq.Socket(self.ctx, zmq.REQ)
self.pupil_remote.setsockopt(zmq.RCVTIMEO, 500)
self.pupil_remote.connect('tcp://127.0.0.1:50020')
self.pupil_remote.send_string('SUB_PORT')
sub_port = self.pupil_remote.recv_string()
print("sub port: ", sub_port)
self.subscriber = self.ctx.socket(zmq.SUB)
self.subscriber.connect(f'tcp://127.0.0.1:{sub_port}')
# self.subscriber.subscribe('pupil')
self.subscriber.subscribe('pupil.1.2d')
self.subscriber.subscribe('gaze.')
Looks correct on first sight 👍 See https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py for a full example
Thank you a lot. Subscribed to 'gaze' not 'gaze.'
Hi all, I've got a question regarding the Pupil Core set for pupil tracking. For scientific purposes we used to record eye data using an older (monocular, 120Hz) version of Pupil Labs hardware in complete darkness. However, this device recently stopped working altogether, so I've switched to one of the Pupil Cores (200Hz) we had laying around. Yet I am having trouble getting clean eye data from it. The data is now typically very noisy as the pupil is not well tracked. Any suggestions on how to optimize pupil tracking in complete darkness (i.e. any specific settings that can be changed to improve the detection)? Or is there perhaps a manual/set of instructions on setting it up for recording in darkness? Many thanks, Jesse
Hi @user-027014. Have you tried adjusting the eye camera exposure settings in each eye window, like in this video? https://drive.google.com/file/d/1SPwxL8iGRPJe8BFDBfzWWtvzA8UdqM6E/view?usp=sharing Increasing the exposure time can achieve better contrast between the pupil and the surrounding regions of the eye
Hi there, I am on an M1 macbook pro, on MacOS Monterey Beta 7, and Pupil Capture is unfortunately not working. I can confirm it is working on an M1 Macbook Air with Big Sur, also can confirm it working on Windows laptop, but it is not working on Monterey. Here are some logs:
world - [INFO] launchables.world: System Info: User: erenatas, Platform: Darwin, Machine: Erens-MBP, Release: 21.1.0, Version: Darwin Kernel Version 21.1.0: Sat Sep 11 12:27:45 PDT 2021; root:xnu-8019.40.67.171.4~1/RELEASE_ARM64_T8101
objc[2137]: Class CaptureDelegate is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d370) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_videoio.4.5.dylib (0x13d5b3948). One of the two will be used. Which one is undefined.
objc[2137]: Class CVWindow is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d3c0) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f7468). One of the two will be used. Which one is undefined.
objc[2137]: Class CVView is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d3e8) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f7490). One of the two will be used. Which one is undefined.
objc[2137]: Class CVSlider is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12bf7d410) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x13d3f74b8). One of the two will be used. Which one is undefined.
attempt to release unclaimed interface 0
world - [INFO] video_capture.uvc_backend: 0:3 matches Pupil Cam1 ID2 but is already in use or blocked.
world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
world - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]!
world - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
world - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
world - [WARNING] launchables.world: Process started.
world - [INFO] video_capture.uvc_backend: Found device. Pupil Cam1 ID2.
...
eye0 - [INFO] pupil_detector_plugins: Using refraction corrected 3D pupil detector.
eye1 - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
attempt to release unclaimed interface 0
eye0 - [INFO] video_capture.uvc_backend: 0:4 matches Pupil Cam2 ID0 but is already in use or blocked.
eye0 - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
eye0 - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [192, 192]!
eye0 - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
eye0 - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
objc[2139]: Class CaptureDelegate is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12e7e8370) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_videoio.4.5.dylib (0x139f49948). One of the two will be used. Which one is undefined.
objc[2139]: Class CVWindow is implemented in both /Applications/Pupil Capture.app/Contents/MacOS/cv2/cv2.cpython-39-darwin.so (0x12e7e83c0) and /Applications/Pupil Capture.app/Contents/MacOS/libopencv_highgui.4.5.dylib (0x139d8d468). One of the two will be used. Which one is undefined.
Do you have any ideas for any workarounds? I have tried different cables, Reformatted the laptop. Any ideas are welcome.
How to wear glasses and the Pupil Core headset at the same time
Hi, is there a possibility to move surface positions (or rather configurations) created in the Surface Tracker plugin from one PC to another without copying the recording where they were created? I tried copying the surfaces folder that is created in the export folder, but that did not work
The recording in which you defined your surfaces should contain a file named surface_definitions. You can copy and paste this file to other recordings.
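For example (a trivial sketch with placeholder paths):

import shutil

# copy the definition from the recording where the surfaces were defined
shutil.copy('source_recording/surface_definitions',
            'target_recording/surface_definitions')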
Is there a minimum to the size of the surfaces we can define for Pupil Core and track fixation in them with a good confidence?
The minimum size for a useful surface definition will ultimately depend on calibration accuracy. The smaller that you define your surfaces, the more calibration accuracy you will need to ensure that surface mapped fixations are valid. Have a look at this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246
Hi all, How is the confidence of the pupil core computed?
Hi everyone, I am looking for information about the USB-C clip that connects the 3 cameras of the Core headset: https://www.youtube.com/watch?v=zzYLRlVBhTY&ab_channel=PupilLabs Is the schematic/hardware available somewhere? We would like to change the form factor to accommodate our application. Is it a regular USB-C hub or are there some custom features? Do you have any requirements for bandwidth? Let me know if you have any info/suggestions. Alexis
Hi, please contact info@pupil-labs.com in this regard. Please include some details about your application, too.
After preprocessing and binning my data, I always get these vertical flat lines at the end of each trial. Do you have an idea about what might cause this?
Hi! I did a recording tryout today after a long period of not using the equipment during covid and encountered two problems: 1) I cannot record sound (it looks like it is being recorded but nothing is audible); 2) the offline surface tracker does not recognize surface markers. Surprisingly, even on old recordings where markers were previously recognized, it is now not possible. Any suggestions? macOS, Pupil Player version 1.12.17
Hey, could you share an example recording with data@pupil-labs.com s.t. we can have a detailed look?
Sure, thanks
Pupil v1.12 recordings
Hi, we are recording in dark conditions and we have an issue with the tracking. We lose the tracking when we switch off the light, and when we switch the light back on at a low level the tracking is very unstable. Any advice to improve tracking by the world camera at low light levels? Thanks
Hi @user-74c497. Have you tried addressing what @papr detailed in the previous message, i.e. adjusting eye camera exposure and increasing max pupil size parameter?
Hi all, I'm trying to create a Bayesian estimate of the combined eye gaze position. My targets are presented at a far enough distance for the eyes to not converge, so essentially I have two independent measures of the same gaze position. What I would like is some information on the variance/std/precision of the pupil traces over time. The problem is that I am not sure how the confidence is computed and whether it is related in any way to the variance. Alternatively, I could compute it myself with a moving window of sorts, but there the problem is that during blinks both the x/y positions are set to zero. Does anyone have a thought on how to solve this? Any of the following would help me: 1) some description of how pupil confidence is computed, 2) a way to turn off setting the blink xy estimate to zero, or 3) any other tips 😉
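Regarding the moving-window idea, a rough sketch that masks blink samples instead of treating them as zeros (assuming a gaze_positions.csv export; the 0.6 cutoff and window size are arbitrary picks to tune):

import pandas as pd

gaze = pd.read_csv('gaze_positions.csv')
valid = gaze['confidence'] > 0.6     # drop blinks / low-quality samples
x = gaze['norm_pos_x'].where(valid)  # bad samples become NaN, not 0
precision_x = x.rolling(window=120, min_periods=30).std()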
Sorry for the question; it seems the answer should be easy to find on the Pupil Labs website. For a study that uses a chinrest and the 3D pipeline, how long can we go before revalidating? All trials must be in one video, so we can't recalibrate. A trial can take from a few seconds to 1 min, and another trial up to 8 or 10.
It is generally better to split your trials into separate recordings, particularly if they last 8 or 10 mins. For every new recording, re-calibrating and validating will reset accumulated slippage errors.
I asked this last year; sorry, I want to ask again to be sure. Are we to determine the visual angle of accuracy to adopt ourselves, or rely on the accuracy of the eye tracker (0.6)? I remember that we can decide the size of the target based on its distance from the participant and the adopted visual accuracy, and the inaccuracy must not go above the adopted one after validation. Is that correct? I also know that the fovea spans 1 to 2 degrees of visual field
If you calculated stimulus size based on viewing distance and accuracy, then you should ensure that the accuracy reported following a validation does not exceed the accuracy used in your calculation.
Hello, what are the units for the Absolute Exposure Time parameter?
The time parameter should be provided in units of 0.0001 seconds, i.e. a value of 100 corresponds to 10 ms: https://github.com/pupil-labs/pyuvc/blob/master/controls.pxi#L44-L60
Thanks so much
I see that the eye cameras have a sampling rate of 200 Hz. What about the rates shown in Capture (30, 60 and 120)? There are others shown in the eye window. How can we harmonize these?
The gaze accuracy is 0.60 and the precision 0.02... I guess this is just the manufacturer's average, because it varies with study populations
For 200 Hz, set the eye camera resolution to 192x192 px. The maximum sampling rate for the world cam is 120 Hz.
Accuracy of 0.6° and precision of 0.02° is what can be achieved in ideal conditions (i.e. excellent pupil detection, minimal head movement, 2d calibration pipeline, subject follows instructions during calibration).
I guess this accuracy and precision are affected by individual differences, distance from the target, and the nature of the environment. I feel the need to use the 3D calibration pipeline even though we will have little or no head movement. So we will have to adopt an accuracy based on our environmental conditions and setup.