Hi @user-e3dae5 looks like the port is already in use. Please check the task manager to see if there are any other instances of Pupil Capture or Pupil Service running in the background. If so, stop the task and then restart Pupil Capture.
Can anyone guide me on how to get started with the software on Fedora?
Thanks @wrp, but that wasn't the main error. I think that if I run pupil-service, it runs. But as soon as I run pupil-capture, it doesn't work. Seems to launch in "ghost mode"
Here is the output
@user-e3dae5 Does it work on a different computer running Capture?
I didn't try on another computer yet.
Please do. This way we can narrow down the issue.
I'm interested in developing gaze interaction with Bokeh (python) information visualisations. Such as hover and tap, but with gaze. Has anyone tried something similar?
Guys, how does the 3D pupil ID algorithm compensate when the contour of the pupil is interrupted by a corneal reflection? Does it try and fit a model to each visible portion of the pupil contour? Does it do some sort of infilling, or interpolation of the pupil's circumference?
A few sentences would be helpful. With our current VR modification, the circle of LEDs in the HMD makes a ring of reflections on the cornea that is approximately the size of the pupil. If the ring also aligns with the edge of the pupil, then the pupil's contour can be broken in several places at once by the 5 reflections.
Hi community! I'm new to using Pupil Labs for my project on autonomous vehicles, and I have a problem with calibration. Can you all give me tips and tricks for it? It misses almost all of the targeted things.
Upon clicking Player to run, the black box is empty and the grey region for accepting files does not open. I downloaded the software again and the attempt to run Player failed again.
Player worked one week ago with no problems.
Pupil Capture works well. Running Windows 10
@user-9dbb42 Try deleting the user_settings_* files in the pupil_player_settings folder before starting Player
Thank you very much, it worked!!
Hello, I used to be able to play the world_viz.mp4 files on other computers when I wanted to demonstrate some things to people who do not have Pupil Player, but I cannot do that with the world.mp4 files since the new release. Do I need to do something different? Thank you very much
@user-ca46a5 Do you mean the exported world.mp4 in the exports/00X/ folder? Or the world.mp4 in the top level folder of the recording?
yeah, the exported one
That is unexpected since other than the file name nothing should have changed.
Which external player do you refer to?
VLC media player. It says "This file isn't playable. That might be because the file type is unsupported, the file extension is incorrect, or the file is corrupt."
even though it's the file in the export folder
If I remember correctly, if the export is cancelled, the resulting file might not be playable. Can you try exporting again and make sure that the export finishes as expected?
okay, let me try
still not playable
hmmm
Does this happen for all of your recordings or only this one? Also which OS and Player version do you use?
Yes, it's the same for all my recordings.
I have Windows, and for Pupil Player I downloaded the pupil_v1.11-4-gb8870a2_windows_x64 one
I have also tried to play it in Windows Media Player
It also cannot play it
Could you share one of the recordings with data@pupil-labs.com ? I will try to replicate the issue in the coming week.
sure, thank you very much:)
Hello all, I am trying to detect pupils offline, and whenever I try to detect eye videos in a loop I get the following errors in my log and the video crashes. Need help. Thanks in advance.
@user-b7c4e4 Please share the recording with data@pupil-labs.com and I will have a look in the coming week
@here We are pleased to announce the latest release of Pupil software v1.12! We highly recommend downloading the latest application bundles: https://github.com/pupil-labs/pupil/releases/tag/v1.12
@papr Hi, I was able to use the Player software for Windows, but today I have a problem. This error comes up and I can't run Player: "Install pyrealsense to use the Intel RealSense backend". What should I do?
This is not a relevant error. If Player does not work anymore, it is due to a different reason
@papr thank you for your answer. I can't drag the files into Player (it is minimized)
@user-8fd8f6 Try deleting the user_settings_* files in the pupil_player_settings folder before starting Player
@papr There is no "pupil_player_settings" folder
Hello - I'm sorry to possibly repeat a previously asked question, but I am having issues with audio not lining up with the video. Pretty soon into the video, the audio is ahead of the visual. Is there a way to fix this? Thank you in advance!
@user-2798d6 Is this regarding Pupil Mobile recordings?
No, running Capture on a MacBook Pro
Hello! Has anybody used the Pupil eye tracker for smartphone app research? Maybe somebody has tips and tricks on how best to calibrate Pupil for this kind of research? I calibrated it using features around the smartphone and icons on the smartphone, but it is not calibrated very well.
@user-bc5d02 The problem here is that a phone is actually a very small target. You could try to use a very narrow lens, decreasing the effective field of view of the camera, but increasing the spatial resolution for gaze estimation (to some degree).
@papr, I've been running Capture on a MacBook Pro. Is there a computer setting I should change or is this something happening with the glasses and audio working together?
@user-2798d6 There are multiple audio-related issues on macOS at the moment. I did not have the time to look into this in detail, unfortunately.
@papr, so it is an issue with Mac rather than pupil?
@user-2798d6 it's an issue with Capture on Mac. Other OSes do not seem to have the same issue.
@papr thanks a lot! Will try the lens.
Thanks, @papr.
I'm thinking about using Pupil Mobile for an active experiment where the participant will be moving around quite a bit. Can Pupil Mobile calibrate and record without streaming to a subscribing laptop running Pupil Capture?
Hello everyone! Short question: does anyone have recommendations for the threshold values for blink detection? The default values of 0.5 for onset and offset don't produce the best results for me. Thank you for your help!
@user-99bf85 Hi, I guess it depends what you get. I had trouble with too-long blink detections and lowered the offset confidence threshold to 0.3. I get a bit shorter and more blinks.
Hi all, I'm using Pupil Labs in communication with MATLAB using ZeroMQ to run an experiment measuring pupil diameter. However, recently I've been continually getting the following error while measuring round trip delay:
How can I go about solving this issue? Is it to do with the ZeroMQ server or my own settings? Thanks
@user-20faa1 there is no on-device calibration for Pupil Mobile. But you can do offline calibration after the fact.
@user-82488e please check in the Pupil Remote menu if the port is still 50020. It changes to a different one if 50020 is not available
Thanks for the quick response @papr. How do I check that? Sorry, I'm new to Pupil Labs
@user-82488e In Capture, on the right side, there are icons for the different menus. One of them should say Pupil Remote. Click it to open the menu. The menu should include a text field indicating the current port number.
Thank you so much!
It has indeed changed
@Lachlan#6367 just set it back to 50020
Will do, thanks
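For reference, a minimal sketch of verifying the Pupil Remote connection over ZeroMQ (assuming the default address tcp://127.0.0.1:50020; substitute whatever port the Pupil Remote menu shows):
import zmq

# Connect a REQ socket to Pupil Remote (port 50020 by default).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# 't' requests the current Pupil time; a prompt reply confirms
# that the port is correct and the connection works.
pupil_remote.send_string("t")
print("Pupil time:", pupil_remote.recv_string())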
Hello guys, I'm using Pupil Remote and I subscribe to surface information. I get the 'gaze_on_srf' and 'norm_pos', and sometimes the coords are negative. But in the recording, I was always looking inside the surface. What do the negative coords mean? Thanks for the help!
@user-14d189 Thank you for your response. Do you use 0.5 as onset still?
@user-1e8f1b Did you check the confidence? Is the confidence very low in these cases?
@user-99bf85 yes onset still 0.5 and filter length 0.2.
Thanks a lot.
@papr Confidence does not go lower than 60% in the whole recording
@user-1e8f1b Mmh. Please share the recording with data@pupil-labs.com so that I can look for an explanation.
Sure I will, thanks for the help!
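For reference: surface-normalized coordinates lie in [0, 1] inside the surface, so negative values (or values above 1) generally mean the gaze point was mapped outside the surface boundary. A minimal filtering sketch, assuming the surface message format described above, with a gaze_on_srf list whose entries carry norm_pos and an on_srf flag:
# Keep only the gaze points that actually lie on the surface.
# Assumes `surface_msg` is one received surface datum as above.
def gaze_inside_surface(surface_msg):
    return [
        g for g in surface_msg["gaze_on_srf"]
        if g.get("on_srf")  # equivalent to 0 <= x <= 1 and 0 <= y <= 1
    ]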
Hi everyone, I could connect to Pupil when I used the holographic emulation app on the HoloLens. But when I build my app in Unity and open the project via Visual Studio, I cannot connect to Pupil anymore. Do I need to change something in the settings in Visual Studio?
Hey, I have a query. Why is it that the world timestamp does not inherit the pupil timestamp? I presume they are calculated after each pupil sample is collected. Or am I wrong?
Let me know. Thanks!
@user-41c874 pupil timestamps come from the eye cameras. World timestamps from the world camera. The cameras are not synchronized in hardware. Therefore we need to synchronize the generated data by time afterwards.
Thanks! So, world timestamp doesn't provide any information about when the pupil sample was acquired. Am I understanding it correctly? (But, in principle if I interpolate all world timestamps to pupil0 or pupil1 timestamps, I should get approximate x, y coordinates at the time when pupil samples were acquired. Right?)
I'm just trying to understand this because we are using surface samples (which have the same timestamps as world gaze) and syncing their timing to another clock. And there was some non-uniformity in the world timestamps, while the pupil timestamps were relatively more uniform. Anyway, everything else works quite well! And I have more or less only one more hurdle to solve (as of now). Thanks a lot for your help!!!
@user-41c874 typically, multiple pupil datums are mapped to a single world frame. For each world frame we try to detect the current location of the surface. Then we map all gaze data belonging to this world frame to the surface. Therefore gaze on surface has two timestamps: its original timestamp and the world timestamp, which indicates which world frame it was correlated to
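The correlation described above can be sketched with numpy, e.g. (a sketch assuming sorted 1-D arrays of world and pupil timestamps, as in the exported data):
import numpy as np

def closest_world_frame(world_ts, pupil_ts):
    # For each pupil timestamp, find the index of the world frame
    # whose timestamp is closest.
    pupil_ts = np.asarray(pupil_ts)
    idx = np.searchsorted(world_ts, pupil_ts)
    idx = np.clip(idx, 1, len(world_ts) - 1)
    left, right = world_ts[idx - 1], world_ts[idx]
    # Step back one frame where the previous timestamp is closer.
    idx = idx - ((pupil_ts - left) < (right - pupil_ts)).astype(int)
    return idx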
Hi @papr, we found a broken wire on one of the eye cameras. How will that affect the data?
@user-e2056a This will likely result in the eye cam not being recognized correctly. Please write an email to info@pupil-labs.com with that picture.
ok
@papr, thanks, is gaze and fixation data still available with one eye?
@user-e2056a Yes, it is, but gaze will be less accurate if the subject looks to the right
@papr thanks
Hello, I have a binocular 200Hz Pupil headset and I experience the same issue as @user-1603a2. It seems that the camera for the left eye is not as well focused as that for the right eye, and the image is way darker. I attached screenshots of the right and the left eye in 400x400px resolution. The right eye is focused and bright enough that the pupil detection works nicely. The left eye in default settings is too dark and the pupil is not detected reliably. If I adjust image postprocessing for the left camera to, for example, brightness: 8, gain: 3, then the pupil detection works somewhat better, but it's still less reliable than the right eye.
The pupil detection works better when I record in 192x192px instead of 400x400px, but still, the confidence for the left eye varies more compared to the right eye (which is usually constant at 1, except when blinking).
Is there any chance to get better results with the left eye or does the camera need to be changed?
In the following link you can download short recordings of both scenarios: https://www.dropbox.com/s/ynjxp6av5jvv47m/2019_05_08.zip?dl=0
Image of the left eye with default settings.
Image of the left eye. Image post processing set to: brightness=8, gain=3
Image of the right eye.
Hi, I have a question about camera intrinsics. After I apply 'show undistorted image', the surface marker and surface cannot be detected anymore. I need to define a surface to extract a heatmap without fisheye distortion. What can I do about this?
@user-6f86f3 The show undistorted image option is just for verification of the estimated intrinsics and is layered on top. The actual surface detection still runs, but its visualization is hidden. Please understand that undistorting the whole image is expensive in terms of CPU and is not necessary for the functionality of the surface tracker.
Hello again. I've been trying to launch Capture using regular USB cameras. Capture's screen is grey. I know that you should have a Pupil headset, but I just have no time left... Can anybody guide me through?
I've also replaced drivers with libusbk
@user-e08fba Check the UVC Manager menu. Does your camera appear in the list of devices?
No, it does not
Any ideas?
Wait
Still no
Hey everyone, did anyone else have a problem with zigzag lines when plotting the binocular gaze data? When I just export the gaze on surface data recorded with capture (version 1.7.42) using pupil player and plot them in R, I get these weird zigzag lines, as if the gaze points would jump between left and right eye maybe? (the purple line in the graphs)
But when I recalibrate the data in player after changing it to dual_monocular calibration, the plot looks smooth and there are no more zigzag lines (the red line in the plots is the average of both eyes).
Also, when I recalibrate in binocular calibration mode, the zigzag lines reappear (the blue line).
Did anyone else have this problem with binocular calibration or can point me in a direction of possible errors?
Thanks very much in advance!
Hey, is there a best practice guide for positioning the eye cameras?
@user-019256 Could you please check which of the binocularly calibrated gaze points in your example are mapped binocularly and which are mapped monocularly?
@user-c494ef the pupil should be well visible at all times, i.e. it should not leave the video frame if the subject looks e.g. up
@papr I was wondering more about inside (close to the nose) vs outside positioning, and the best working distance
@user-c494ef The position is not relevant as long as the camera's field of view is able to capture the pupil well
Hello! I have a problem with Natural Features calibration. After finishing calibration there is an error: "world: Not enough ref point or pupil data available for calibration". I found that the reason could be in Pupil Service ( https://github.com/pupil-labs/pupil/issues/1140 ) but I did not find a solution to the problem.
Can confidence drop because of running short of CPU?
@user-1e8f1b no. If you run out of CPU, frames will be dropped
@papr I have tried to record video with Camera Intrinsics Estimation enabled, and it turns out I cannot extract the heatmap for a specific surface. So Camera Intrinsics Estimation is not designed for heatmaps?
Additionally, I want to extract the heatmap only for the monitor screen, but the marker-defined surface will include some part of the monitor edge (like the area in the blue line below). Do you have any idea how to extract the perfect heatmap?
@papr in this plot I colored the data green if it was binocularly mapped and colored it red if it was monocularly mapped. For this I used the information given in base_data (1 or 2 eyes). As you can see, most of the data is mapped binocularly.
Announcement for Pupil Mobile users
As many of you may already know, there is a bug in Android OS v9 (aka Android Pie) that breaks USB device enumeration for many USB cameras. This bug also affects Pupil Labs hardware. Therefore, if you are using Pupil Mobile on a device running Android v8, we recommend that you do not upgrade to Android v9 yet. If you did update to Android v9 and the update broke compatibility with our hardware, fear not, we have a solution (actually two solutions!)
Long term solution - We have identified the bug in Android and submitted a fix to the USB subsystem for Android OS. Our change has been accepted in the official Android repository! It's exciting to be able to fix a problem at the source, especially for such a widely used codebase! However, it will take some time for our fix to trickle through via Android OEM updates.
Short term workaround - We also made changes to the Pupil Mobile source code that enable you to continue using Pupil Labs hardware on Android v9. The workaround is available in the most recent version of the Pupil Mobile beta release. You can access the beta releases by opting into the beta program here: https://play.google.com/apps/testing/com.pupillabs.pupilmobile
What CPU can handle a 200Hz eye camera?
Because I have an Intel i7 at 2.40GHz
and the program can't handle the rate and always drops frames
@user-9dbb42 do you need the pupil detection in real time? Alternatively, you can disable real-time detection and run it after the fact in Player.
Hi! I bought a Pupil Labs HoloLens add-on but the wires to the eye cameras are too delicate. One of them broke. Can I have a new connection plugin?
@user-af9864 please contact info@pupil-labs.com in this case
@papr thank you for the answer!! I will use what you have suggested!!
@user-019256 in our formal Pupil Labs vs EyeLink comparison we see similar things: https://www.biorxiv.org/content/10.1101/536243v1 - just wanted to mention this, e.g. Figure 5 and Figure 9. Potentially this is related to the problem discussed in Figure 4 (btw, the paper is accepted and will be published soon in PeerJ)
Hi. I'm trying to run the Pupil Labs software on a .mp4 video. On Ubuntu - the software immediately closes when I finish selecting the path to the video. On Windows, I can't seem to open the application - when I run the executables, it just opens a blank command prompt, and nothing happens... can anyone advise me? Thanks!
@user-23fe58 The software is not supposed to work with only an mp4 file. Please refer to the documentation on what the recording format looks like
@papr Thanks for the tip. According to the documentation, .mp4 files are a valid extension, but in addition, the .mp4 file should have a specific name and must have a corresponding .npy file. If I have my own video file, how can I create a .npy file for it?
@user-23fe58 I have a Jupyter notebook that shows how to generate a minimal recording from a single video. I can share it when I am back at the office on Thursday. It uses the easiest approach: generate evenly spaced timestamps. The more correct way would be to look at the video's PTS and calculate a timestamp from that.
@papr That would be great. Thank you!
Hi guys. I know the Pupil tracker doesn't do glint tracking, but does the 3D model that estimates the center of the eyeball in 3D compensate for device shifts relative to the head? I'm asking because the gaze data we get in the VR setup is very sensitive to device shifts
Has anyone created a pupillometry package for python compatible with pupil labs that can do things like adjust the pupil size reading for size differences from the pupil location, remove blinks, and convert the pixel readings for the pupil diameter to mm? I saw a link to a pupillometry package in the community github, but it was broken.
@papr An update to my attempt to use the Pupil labs software, that you were helping me with. I was able to create the info.csv and timestamps.npy file needed for a particular .mp4 file I wanted to test. However, the software doesn't show where the pupil is located in the video. My ultimate goal here is to input a video file of a recording of an eye looking around, and get the software to tell me the location of the pupil in the video. Is that possible?
@user-23fe58 In this case, you have to name the video eye0.mp4 and rename the timestamps file accordingly. Afterwards, you can run the Offline Pupil Detection in Player on the eye video.
@user-072005 I am not aware of such a package. Feel free to share it here when you have found it.
@user-0d187e That depends on which detection and mapping mode you use. The 2d mode is much more sensitive to slippage than the 3d mode.
@papr @wrp : I am working on pulling out video frames from the head cameras that are sync'd across a parent and child (close to the same time stamp). The goal is to pull out the same number of sync'd video frames and then use tensor flow to recognize the objects in these frames. I've started by using matlab and ffmpeg; making progress, but this seems like something others might be working on. Any pointers toward useful plug-ins or otherwise that might be available from the pupil community?
@papr Thank you for the help! I got it working. There is one glitch left though - even though Pupil Player and Pupil Service both work for me, Pupil Capture closes as soon as it opens. I have attached a screenshot of the command window's output. Anyway, even if it is not possible to fix, I can just use the other two applications. Thanks again!
@user-23fe58 Are you not using Pupil Player? Capture is for processing real time video, e.g. from a camera.
@user-af87c8 thank you very much, this is helpful!
@papr ah, that would explain it. I can use Pupil Player - it works for my saved video. Thanks again for all the help!
@papr Is the 3D mode meant to compensate for device shift, since it detects the eye ball center relative to the camera? Do you consider that an alternative to using glint?
@user-0d187e Slippage compensation is achieved by re-estimating the eye ball center relative to the eye camera, yes. And yes, we consider this an alternative method.
thx
hi
I'm looking for some guidance on how to build the apps for mac from source
none of the built ones run
they all give : "Illegal instruction: 4"
@user-67c6ed I have not had any issues with Pupil app bundles on macOS. What version of macOS are you using and what are the machine specs (specifically the CPU)?
@user-23fe58 Just for completion: This is the notebook that implements one of the many ways to generate a Player-compatible recording from a single video file: https://gist.github.com/papr/d3e9d3863b934d1d4893e91b3f935ed1
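The "evenly spaced timestamps" approach from the notebook boils down to something like this (a minimal sketch, assuming a constant frame rate; the eye0.mp4 / eye0_timestamps.npy names follow the recording format mentioned above):
import cv2  # used only to probe frame count and frame rate
import numpy as np

video = cv2.VideoCapture("eye0.mp4")
n_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
fps = video.get(cv2.CAP_PROP_FPS)

# Evenly spaced timestamps, one per frame. Reading the video's PTS
# instead would be more accurate, as noted above.
timestamps = np.arange(n_frames) / fps
np.save("eye0_timestamps.npy", timestamps)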
@user-358ea2 I have an update for you regarding audio recordings on macOS:
[1] https://www.jabra.com/business/speakerphones/jabra-speak-series/jabra-speak-810
Hi @wrp, we finally found the compilation guide on the website and the problem was solved by compiling the apps ourselves. By the way, it's a Core 2 Quad CPU; I don't remember the exact reference. It's not an "official" Mac, but it's unlikely to be related to the CPU - I've never seen such a thing happen with any app in almost 15 years of using OS X with all kinds of exotic hardware, except at the OS kernel level. The apps are not giving any log; is there one somewhere, or a way to enable a logging/verbose mode?
or maybe the apps just don't support the Core 2 family; it's getting a bit old
tested Sierra and High Sierra
@user-67c6ed If you say "compiling the apps ourselves", do you mean you run them from source, i.e. running python3 main.py? If so, there should be a capture.log file in the capture_settings folder within the cloned repository.
yes @papr
Please be aware that it is being overwritten every time you start Capture.
it's actually working with a version we cloned from git repo
and this one does work, so the log wouldn't give us much info
is it possible to do the same with the bundled app?
The bundled apps store the log file in the pupil_capture_settings folder in the user's home folder.
ok, I'll check that
Please be aware that the bundle is not compatible with all Intel CPUs. To my knowledge, the bundle does not work e.g. on Intel Xeon processors.
ok, thank you for the tips
hi @papr, I just checked, the only entry in the log is : MainProcess - [INFO] os_utils: Disabled idle sleep.
@user-67c6ed Sorry, this makes it very difficult to debug. I can only recommend running from source then.
ok, no problem, I was just being curious
Are there any Pupil management/executives in here or would it be best to email with an inquiry?
Hello to all! I ran into a problem when I first opened the latest version of Player. The message was: player - [INFO] video_capture: Install pyrealsense to use the Intel RealSense backend. Maybe someone has faced something similar and can tell me what the problem is. Thank you!
@user-817474 Please contact info@pupil-labs.com with your inquiry.
Thanks!
@user-9a064b please delete the user_settings_* files in the pupil_player_settings folder and try starting Player again.
Here is an example of an Excel file data output that I received for the calculated amount and duration of fixations for an AOI called "excercise"
Hey! I have another question concerning the Excel files I received with the data output for my calculations. The dependent variable I want to measure is the mean number and duration of fixations within each AOI. Therefore I am using the file "fixations on surface XY". I attached an Excel file to this thread. When having a look at the data output, I wasn't completely sure about the data produced. It would be great if you could give me a short answer to that.
• For the mean duration of fixations within the AOI I had a look at the column "duration". Surprisingly, it was showing nearly the exact same number for every fixation. Do I have to change something in the parameter settings in Pupil Player? It would be great if I could see the durations in ms per fixation here. Another option could be to take the difference between start and end frame, right? But here another inconsistency occurred: when having a look at the start and end frame, I realised that there is a gap between the end frame of the previous fixation and the start frame of the following fixation. What happens during this time gap?
• For the number of fixations I would simply count the number of times where it says "true" for the relevant AOI. Do you agree with that?
• When I looked at the column "id" (showing the number/index of each fixation), I noticed that the numbering is not consistent in some places. That means it jumps to a higher number in some places. However, this is not constant between the Excel outputs of different AOIs, which means that a different total number of fixations is shown in the different "fixations on surface XY" files. Illustrating this, in the attached file for example it jumps directly from 33 to 36. Could you explain this to me?
• Furthermore, I wasn't sure about the columns "x-norm/y-norm" and "x-scaled/y-scaled", as they show the same data.
An answer regarding these questions would be very helpful! Thank you!
Do I have to change something in the dispersion algorithm settings (or other software settings) in order to see the correct values for the duration of a fixation on a defined surface?
Can anyone help me please? Why can my eye tracker suddenly not connect to the PC?
@user-0a2ebc The camera drivers are not installed anymore. This is likely due to a Windows Update. Please restart Capture with administrator rights.
Unfortunately it is still not working
How can I install the camera drivers again, please?
@user-0a2ebc I've sent you a personal message with the full instructions
Hello, I am trying to get the fixation norm position in real time. Can anybody give me a tip? I am currently using cmd! Thank you
@user-ae7c30 Have you checked out our example on how to access data in real time? https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py#L23 You will have to change line 23 to sub.setsockopt_string(zmq.SUBSCRIBE, 'fixation') in order to receive fixation data. Also, do not forget to turn on the fixation detector.
Yes! Thank you, I am using the same file with 'fixations'
But how can I parse the norm_pos out of a fixation? The messages are formatted as below
fixations: {'topic': 'fixations', 'norm_pos': [0.24963392804467022, 0.2768624991848711], 'dispersion': 0.0, 'method': 'pupil', 'base_data': [['gaze.2d.0.', 61075.5142], ['gaze.2d.0.', 61075.538407], ['gaze.2d.0.', 61075.659443]], 'timestamp': 61075.5142, 'duration': 145.24299999902723, 'confidence': 0.7997475862503074, 'gaze_point_3d': nan, 'id': 28}
Ah, nice! Use
# continues from filter_messages.py: `sub` is the connected SUB socket
# and `loads` is msgpack.loads
while True:
    try:
        topic = sub.recv_string()
        msg = sub.recv()
        msg = loads(msg, encoding='utf-8')
        x, y = msg["norm_pos"]
        print("\nnorm_pos x/y: {} {}".format(x, y))
    except KeyboardInterrupt:
        break
@user-ae7c30 The important line is x, y = msg["norm_pos"]
oh wow.. I am so grateful
thank you!!
Just checking, the gaze positions are normalized in the exported data to go 0-1 from left to right and bottom to top? So the left bottom corner would be 0,0 and top right 1,1?
@user-072005 correct
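A quick sketch for mapping those normalized positions to image pixel coordinates (note the y-flip, since image rows grow downward; frame_width/frame_height are placeholders for your video's resolution):
def norm_to_pixels(norm_pos, frame_width, frame_height):
    # norm_pos origin is at the bottom left, image origin at the
    # top left, hence the y-flip.
    x, y = norm_pos
    return x * frame_width, (1.0 - y) * frame_height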
Thanks. Also the timestamps... I'm looking at the CSV in Excel. There doesn't seem to be a format in which they make sense. Are they not in the same time zone as the phone I recorded on?
@user-072005 The time unit is seconds. The epoch/start of the clock is arbitrary, unless you used the Pupil Capture Time Sync plugin to sync the Pupil Mobile clock to Capture's clock.
Ok that explains it. Great, thanks
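If wall-clock times are needed after the fact, one hedged approach is to record the offset between the system clock and Pupil time while the clock that produced the timestamps is still running (here via Pupil Remote's 't' request, assuming the default port):
import time
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")

t_before = time.time()
req.send_string("t")  # ask Pupil Remote for the current Pupil time
pupil_time = float(req.recv_string())
t_after = time.time()

# The midpoint roughly compensates for the request's round trip.
offset = (t_before + t_after) / 2 - pupil_time
# Later: wall_clock = exported_timestamp + offset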
Hey! My data output always shows me the same duration for the fixations in the "fixations on surface_name" output. What can I do in order to receive the appropriate parameters? Thank you!
@user-07d4db This is probably a bug.
Yes! Do you know what I can do in order to solve this?
@user-07d4db No, since I was not able to find the cause yet
Okay thank you! I tried to find out whether there is perhaps something wrong in my software settings, but I couldn't find the reason so far...
@user-07d4db No, they should not be the reason
Hi, I am new to Pupil Capture. On a Mac computer, I am trying to record audio. When I select the audio source to be the built-in microphone, it gives me error 5, saying input/output error 'none0'. I already gave Pupil Capture access to the system microphone. Any idea why?
@user-88b704 The audio recording situation on macOS is currently a bit difficult. Please leave the built-in mic selected and restart Capture. Do you get the same error? Is the mic still selected? If not, do you get the error if you select it?
Hey! Here you see a produced data output "fixations on surface_name" for four different AOIs that I recorded simultaneously. Can you explain to me why they show a different amount of detected fixations? Are fixations only registered if the frame that defines the AOI with the surface tracker plugin is visible in the world camera?
Has anyone ever used the Pupil headset with EEG devices? I'm trying to eye track while recording EEG, but because EEG is sensitive to electricity, the EEG data is distorted by the Pupil headset. If anyone has solved this problem, or has tried blocking the electrical signal from the headset, please help me.
@user-88f7b8 just a general thought. Did you by any chance look at the frequency spectrum of the EEG signals a) with the headset and b) without? That should in theory let you design a proper filter to get rid of the disturbances. (We have not done EEG and eye tracking simultaneously yet, but were thinking about it)
@user-26fef5 thank you for replying! I haven't done a spectrum analysis yet because I can confirm the actual effect of the headset with my own eyes; the with- and without-headset conditions make a significant difference. As you suggest, I should check the frequencies and find a proper band filter. Thank you.
@user-07d4db That is correct. The surface tracker exports fixations roughly like this:
for each defined surface S:
    create a fixations_on_surface.csv file
    for each exported world frame index F:
        check if S was detected in F:
            map fixations belonging to F to S
    remove fixations that were mapped multiple times
    write fixations to the csv file
hello hello. it seems i've lost an eye
unable to get Pupil Service or Capture to display the video from Pupil Cam ID1 (I'm using the HMD add-on hardware)
it was working yesterday
when i use "activate source" to switch to id1, it just closes the window and doesn't open another one
the window for eye1 just opens then immediately closes
This is also true for the main pupil capture window, if I set it to source id1, it crashes to desktop
restarted my PC and they both now work
Thank you papr! Now I understand how the Excel files are created
Hello, quick question: what are the main differences between Pupil Capture and Pupil Service? Pupil Service is just Pupil Capture for VR/AR?
@user-a6cc45 Main difference: Capture's event loop is bound to the world cameras frame rate. ~1 world frame -> ~1 event loop iteration. This results in Capture buffering multiple pupil data points before mapping them to gaze data and publishing it. In comparison, Service does not support a world camera, and is able to map pupil to gaze data as soon as it arrives, resulting in a lower gaze processing latency.
Service has further limitations, i.e. there are multiple plugins that are only supported in Capture.
Can anyone help me please? Why is the picture not displayed in full (I already adjusted the pixels)...
And it seems that the heatmap is not generated properly on a surface (image)
I followed the heatmap tutorial script
Thanks in advance for the help
@user-0a2ebc it might be possible that the Jupyter notebook scales down the image visually to fit it into the cell.
@user-0a2ebc also, you might need to adjust the figsize
I did adjust but it still shows the same result
@user-0a2ebc do you have OpenCV installed? You could save the image directly to disk with cv2.imwrite() and open the image in a proper image viewer
@user-0a2ebc since the image is vertical: Keep in mind that image matrices are described by Height x Width instead of the usual Width x Height
@papr I haven't installed OpenCV yet... the image is already on my disk. Do I still need the module?
@papr ah I see... H x W... let me try
Any advice about the best pixel size to display for a heatmap or gaze plot?
@user-0a2ebc depends on the size of the image on which you want to overlay the heat map.
You only need a library like OpenCV or Pillow if you want to save modified images.
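A small sketch of the save-and-inspect step suggested above (assumes OpenCV is installed; heatmap stands in for your computed surface heatmap):
import cv2
import numpy as np

# Placeholder heatmap normalized to [0, 1]; note the Height x Width
# order of image matrices mentioned above.
heatmap = np.random.rand(1080, 1920)

heatmap_u8 = (heatmap * 255).astype(np.uint8)
colored = cv2.applyColorMap(heatmap_u8, cv2.COLORMAP_JET)
cv2.imwrite("heatmap.png", colored)  # cv2.imwrite, not imsave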
I have a pretty straightforward question: I calibrate the eye tracker, then record some data. Looking up the raw gaze data of the trial (not the calibration), I get the normalized eye vector to work with. Where is the calibration origin? (I had hoped it would be at 0,0,1 after calibration?!)
@user-e91538 Just for clarification of the terms used in the Pupil project: Pupil data = pupil position in eye camera coordinates Gaze data = pupil positions that have been mapped to the world coordinate system (requires calibration)
https://docs.pupil-labs.com/#data-format See the Coordinate Systems subsection in the docs for details on the different coordinate systems that each camera can have.
Excuse me. I am trying to do eye tracking research data analysis in autonomous driving, and the link address of "Python pupillometry analysis scripts to analyize pupil_data." on GitHub is not available (404). Does anyone know where I can find those, or can recommend some other analysis code? Thank you so much.
@user-d3153d it appears that the owner of that code removed it from github, see https://github.com/pupil-labs/pupil-community/issues/8
@user-5df199 thank you for your help!
I am trying to use CHAP https://in.bgu.ac.il/en/Labs/CNL/chap/default.aspx to analyze the data from Pupil Labs. But the Pupil Labs input file for CHAP should be .plsd, which is not an output file from Pupil Capture or Pupil Player. Does anyone know where to find the .plsd file? Thank you!
Hi! Can I somehow use the fixation plugin on .csv or .pldata files? I got a little lost in the variables in fixation_detector.py. What are the main functions I should pay attention to? It looks like detect_fixations does a lot of the work; is the gaze_data some sort of gaze.pldata? Or maybe there is a cmd tool or some easier method?
@user-780603 pldata files are intermediate recording files. Just open the recording in Player, run the plugins and hit export to get the data exported to a csv file
Yeah, I understand this, but I have about 60 sessions; is there another way around it?
In this case you were on the correct track. You will have to call the source code directly from a script
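For batch processing many sessions, a hedged sketch of reading a .pldata file directly (assuming the format used by these versions: a msgpack stream of (topic, packed-payload) tuples; Pupil's own file_methods.load_pldata_file does the same more robustly):
import msgpack

def load_pldata(path):
    # Yields (topic, datum) pairs from e.g. gaze.pldata.
    with open(path, "rb") as f:
        unpacker = msgpack.Unpacker(f, raw=False, use_list=False)
        for topic, payload in unpacker:
            yield topic, msgpack.unpackb(payload, raw=False)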
Hi, I'm interested in using a USB webcam as an input into pupil capture, so that I can record accurately timestamped/synced video of participants performing an action (throwing a ball) from a side view. I have a couple of questions. 1) If I buy a Logitech C615 webcam (as suggested in the DIY section of the pupil docs) and plug it into a USB port on my machine, will it be detected as a source by pupil capture? 2) If this is the case, could I then substitute one of the eye camera source inputs for the webcam, so that when I'm wearing the pupil labs headset I'm simultaneously recording video from the forward facing world camera, the eye camera facing my pupil, and the webcam? Thanks!
@user-e7102b Alternatively, I can recommend the Logitech Webcam C930e.
My recommended setup would be two separate Capture instances, one connected to the headset, one connected to the webcam. You can use the Time Sync plugin to sync time between the instances and the Pupil Groups plugin to start/stop the recording on both instances at the same time. Afterwards, you can open the headset recording in Player, open the new Video Overlay plugin, and overlay the webcam video.
Hello, we just received the Pupil device with the D415 RealSense and it works great. However, all the recordings are in .bag files, and we can't figure out how to play the recordings again, nor how to extract the information into data we can work with (we want to extract the distance information and do some statistical analysis).
Can you please help?
@user-923f12 bag files are usually recorded with ROS. Are you using ROS?
I'm not familiar with ROS...
@user-923f12 so you did a recording with Pupil Capture and it includes bag files? I am wondering if I am missing something here
It saved the recordings from Pupil Capture as bag files. Isn't it supposed to be like that?
I've just checked. The RealSense backend does not save anything as a bag file. And recordings by Pupil Capture are usually folders with multiple files, incl. info.csv, world.mp4, etc.
Maybe there is a package that needs to be installed, or something else I missed?
@user-923f12 could you post a link to the software that you installed/used?
@user-923f12 and just to confirm: You get only a single bag file per recording?
I'm not with the computer we're working on right now... I'd be glad to send it to you when I get to it. About the recording file - yes, we got one bag file per recording
@papr Thanks for the advice! I didn't realize running two instances of Capture simultaneously was an option. Would you recommend trying this on Windows or Mac OS?
@user-e7102b One of the instances can be replaced by a Pupil Mobile instance, btw. Generally, I would recommend using two different devices, one per instance.
Sure, two different devices makes sense. I'll try it out with a couple of mac minis.
@user-e7102b The time sync is the important step! Otherwise the overlay will not work correctly
Which supported or unsupported Android USB-C phones do you use for Pupil Mobile? Have you had any issues?