Hi, I am a distributor for Pupil Labs Core. One of our users has encountered a problem where the installed software cannot be opened. A pop-up (photo attached) mentions an error. It would be great if you could help us with this problem. Thank you
Hi @user-f51d8a! Could you open a ticket in #troubleshooting? We'll continue the conversation there.
Dear Support Team of Pupil Labs,
I would like to inform you that I have successfully installed the Pupil Capture software on my Ubuntu 22.04 LTS system. The installation process went smoothly, and there were no error messages displayed during the installation.
However, I am encountering difficulties launching Pupil Capture after the installation. When I attempt to open the program, I receive the following error message: 0x79d43a1628bf <_PyEval_EvalFrameDefault+4719>, 0x79d43a162834 <_PyEval_EvalFrameDefault+4580>, 0x79d43a1627a7 <_PyEval_EvalFrameDefault+4439>, ... Additionally, a traceback appears stating: ModuleNotFoundError: No module named 'Cython'. After this error, the program ends in a segmentation fault and closes.
I have made several attempts to troubleshoot the issue, but without success.
It would be extremely helpful if you could assist me in diagnosing and resolving this problem so that I can use the Pupil Capture software properly. Please advise me on the steps I can take to address this issue.
Thank you in advance for your support and understanding.
Hi @user-3224d0! Since your message refers to Pupil Capture, I have moved it here. Could you please clarify whether you have attempted to install Pupil Capture from source or from its bundle? If the former, you can simply download the bundle here
Thanks for letting us know!
Hello @user-d407c1, I have several questions about Core. First, confidence calculations for Core: is there any description of this topic? I can't find it in the docs. Second, there were notes in the 1.8 release about GPU acceleration, but since then there has been no mention of GPU usage by Core. Is it possible to improve Core performance using Nvidia or other GPU acceleration technologies?
Hi @user-311764! Pupil Capture does not make explicit use of the GPU, so CPU power should be the only determining factor. Do you see any drop in performance?
The GPU/CUDA via torch was used for an old implementation of a fingertip calibration method, which has been deprecated.
Regarding the confidence values, what questions do you have? And are they related to the 2D detector confidence? https://docs.pupil-labs.com/core/terminology/#pupil-positions
So it's CPU-only now, I see. Is it possible to run the recording mode without showing the three video streams (eyes + world) and just record to file, for performance reasons?
As for confidence - it's for 2D
Hi, everyone. Can someone help me with something? I'm new here.
I want to understand how to get the coordinates of my view
Hey @user-477991, what's the question?
Hi @user-477991, fire away with your questions!
@user-07e923 hello! Could you please comment on my question above?
Hi @user-311764, the world camera cannot be turned off. We also don't recommend closing the eye camera windows, because you need them to record the eyes.
To increase performance, you can try the following: Open Pupil Capture normally and check that pupil detection is working. Then, click on settings, and turn off pupil detection.
Start recording as it is. It is important to ensure that you record the entire calibration choreography, because you'll do the calibration post-hoc in Pupil Player.
Hi I have encountered a couple of strange issues with the gaze_position data exported from one of my recordings.
Firstly, I have noticed that the confidence is always zero, or close to zero, when data is missing for eye 1 (yellow rows in the screenshot); however, the confidence often remains high when data is missing for eye 0 (blue row in the screenshot). Why is this? Shouldn't the blue row also have a confidence of zero?
Secondly, while calculating the time difference between timestamps, I have noticed that I sometimes get negative values (orange cell in the screenshot), caused by the fact that the timestamp values sometimes decrease instead of increasing as they should. E.g., the timestamp on row 41 jumps back in time, which is obviously a mistake in the data. I only noticed this because it has caused problems with my analysis, as I'm trying to calculate gaze velocity, hence the calculation of time differences. I should add that although in the screenshot the negative time difference is on the same row as the missing data for eye 0, this is not always the case (the timestamp issue also occurs elsewhere when there is no missing data, and sometimes there is missing data but no issue with the timestamp). Unfortunately, the full gaze_positions file is too large to attach to Discord, but I have attached a trimmed-down file.
Thanks in advance for any help.
Hi @user-a09f5d, Confidence is a function of the data matching algorithm. Since the eye cameras are free running, a gaze datum is either monocular or binocular, depending on factors like base data pupil confidence and time between pupil datums.
The confidence value reported in gaze_positions.csv corresponds to the mean of both pupil datums used to generate the gaze datum, or to the base pupil confidence value for a monocular datum.
As it happens, all of the datums get mapped, even monocular ones with low confidence. Thus, it is expected to sometimes have high confidence, even if one eye was closed, and sometimes not. It's recommended to first filter out the low confidence (<0.6) values before proceeding with analysis.
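In case it helps, here is a minimal sketch of that filtering step with pandas, assuming a standard Pupil Player export where gaze_positions.csv has a confidence column:
```python
# Minimal sketch: drop low-confidence gaze samples before analysis.
# Assumes a Pupil Player export with gaze_positions.csv in the working directory.
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# Keep only gaze datums with confidence >= 0.6, as recommended above.
high_conf = gaze[gaze["confidence"] >= 0.6]

print(f"Kept {len(high_conf)} of {len(gaze)} samples after confidence filtering.")
```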
Edit: And for clarification, are you using anything that could affect pupil time, such as the LSL plugin, real-time API, or Pupil Groups/Time Sync Plugins? Also, could you provide the raw gaze_positions.csv and send it to data@pupil-labs.com so I can take a look?
Hi, I have read the instructions https://discord.com/channels/285728493612957698/285728493612957698/1219221290972479498 about recording audio using Pupil Capture and LSL, and I successfully installed AudioCapture, the LSL plugins, and LabRecorder. However, I don't know how to use them during my study. Could you please tell me how to record audio and gaze data at the same time during studies? Thanks!
Hi @user-cbde4a , I would recommend consulting the instructions for the AudioCapture app and LabRecorder on how to get audio working for your particular setup. For questions about LSL and audio, the LSL team is better positioned to get you up and running faster.
Hey there, I made a mistake while using Zadig when choosing the interface. Is there a way for me to undo my mistake? Right now I cannot get anything to work.
Hi @user-ab48b8, it seems you were following the instructions in this thread (https://discord.com/channels/285728493612957698/1026870697915600928/1026871642921644055)?
May I ask what happened that you needed to follow those steps?
In the meantime, could you try uninstalling any drivers for the "Pupil Cams" in Windows Device Manager (check under the "Cameras" category) and then try following the steps here?
I am trying to run pupil capture from the source
Ok, I see. May I ask what you intend to do when running Pupil Capture from source? If you try installing a Pupil Capture release, such as v3.5, that will also run the driver installation step on Windows for you automatically.
If you need to stay with running Pupil Capture from source, then can you try uninstalling the drivers for the "Pupil Cams", if you see them in the Windows Device Manager, and then try these steps again?
Ok. Could you open a ticket in #troubleshooting? Then, we can continue the discussion there.
Hello! I would like to ask for help in clarifying something. I am trying to get the gaze points as shown in the world video. I was using gaze_positions.csv, but when I try to draw the points per frame on the world video myself, the points seem quite wrong compared to the video prepared by Pupil Labs. Am I doing something wrong in the point drawing, or am I using the wrong points? Thank you!!
Hi @user-50ac4b! Have you been following the steps in this tutorial?
Okay, I am trying it now.
Dear users, can Face Mapper be used with Pupil Core, or does it only work with Neon?
Hi @user-24010f, Face Mapper is only available for Neon and Pupil Invisible. This is because those devices allow uploading data onto Pupil Cloud, and Face Mapper is an enrichment that lives only on Cloud.
Hello, I have a question about syncing my video and audio timestamps. Pupil Time can sometimes be negative, whereas the audio timestamps I captured from LabRecorder are always positive. When both the pupil timestamp and the audio timestamp are positive, I have no problem calculating the difference between them and then adjusting the onset of the video to match the audio. However, when Pupil Time is negative, I can no longer compute their difference. Is there a way to convert the audio timestamp (always positive) to match Pupil Time when it is negative?
Hi @user-bebffd ! Have you seen this section of the docs on how to convert Pupil Time to System Time? https://docs.pupil-labs.com/core/developer/#convert-pupil-time-to-system-time
Additionally, I would recommend you have a look at the LSL relay, which already synchronises Capture's own clock, Pupil Time, with pylsl.local_clock().
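For reference, a minimal sketch of the Pupil Time to System Time conversion described in that docs section, assuming a recent recording format whose info.player.json contains the start_time_system_s and start_time_synced_s fields (check your own recording's file for the exact keys):
```python
# Minimal sketch: map Pupil Time (possibly negative) onto the system clock.
# Assumes info.player.json from the recording folder with start_time_system_s
# (Unix epoch seconds) and start_time_synced_s (Pupil Time at recording start).
import json
import datetime

with open("info.player.json") as f:
    info = json.load(f)

offset = info["start_time_system_s"] - info["start_time_synced_s"]

def pupil_to_system(pupil_ts):
    """Convert a Pupil Time timestamp to Unix epoch seconds."""
    return pupil_ts + offset

# Works even when Pupil Time is negative.
print(datetime.datetime.fromtimestamp(pupil_to_system(-12.345)))
```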
Thanks for your quick reply. I am using the scripts provided in the link to convert Pupil Time to the correct date and time. I can also convert the audio timestamp from the XDF file into the correct date and time in the same manner. However, when Pupil Time is negative, the conversion only works for Pupil Time; the audio timestamps are recorded as positive values and cannot be correctly converted using the original offset (System Time minus Pupil Time).
Hi @user-bebffd. If you're using the LSL framework to collect your data from multiple sources, it should already take care of the time synchronisation between streams. That's the whole point of it. So it shouldn't be necessary for you to manually change anything post-hoc.
Hi team, can we use surfaces in different positions, like horizontal and vertical, or do they all need to be in the same plane?
Hi @user-4514c3, could you clarify what you mean by surface in different positions?
Hi, I was trying to install the bundle; however, every time I get an error without doing anything.
could there be an explanation for this?
Hi @user-afe8dd! Could you share some additional details, like what operating system you are trying to install it on, what error you observe, and what version of the bundle you are trying to install?
Windows, and the error says 'an unfixable error during installation'.
pupil_v3.5-1-g1cdbe38_windows_x64.msi
@user-d407c1 Hi - if possible, could you or anyone help with video recovery from a world camera recording?
The issue is the following: the PC shut down during recording; we have a video in the "recording.mp4.writing" state; and there is no way to play that file.
Are there any tools or recommendations to recover a video that was not correctly written?
There is no metadata in the file.
Hi @user-311764. I'm afraid that indicates an incomplete .mp4 file, since your computer shut down, and the rest of the recording will probably also be incomplete. There are no guarantees it can be recovered successfully, but you'll find some pointers in this message: https://discord.com/channels/285728493612957698/285728493612957698/1073541118404349962
Thank you for the untrunc link. I hope it helps.
it doesn't
Then I'm afraid it won't be possible to recover.
Hi @user-afe8dd , I see that there are icons for Pupil Player and Pupil Service on your desktop. Was the bundle installation successful at some previous point in time? Can you confirm if they start when you double-click them?
No, the installation was never completed. It just gives this error. If I download the bundle again, I sadly get the same errors.
It makes the first frames visible, but then it's just empty. Or, with different settings, it stays unreadable.
Untrunc was the best hope I'm afraid.
Just to confirm, does double-clicking any of the shortcuts on the desktop, such as "Pupil Player" do anything? Since there are shortcuts on the desktop, do you also see Pupil Player or Pupil Service listed in "Add/Remove Programs"?
What do you mean by Add/Remove Programs? How do I get there?
We have this bookrest for presenting the pieces (target), so it is inclined. I tried positioning the surfaces as shown in the photo, and all were recognized. However, I thought it might be a good idea to position the top surfaces vertically. I'm not sure if that would be a problem, or if positioning the surfaces as shown in the photo is problematic. Thank you!
Hi @user-4514c3, if the inclined angle is not too large, like the one in your photo, then it should be fine. Btw, if you didn't know, you can draw multiple surfaces that map onto different parts of your bookrest using only four tags. More tags are also good :).
Hi, all! I'm analyzing some gaze data I collected and upon plotting the points against the background I used during the experiment, I notice the points are slightly to the sides of where I expect them to be (see first picture), especially since in Pupil Player the gaze is right on the faces (second picture). This is data from the gaze_positions_on_surface file (the surface being the entire rectangle between the april tags). Could this be due to intrinsics? I expected this "normalized 2d data" on the surfaces to have already applied these corrections but maybe not? Can I do this myself? If so, how? Thanks a lot!
Hi @user-75df7c, can I just clarify with you whether the surfaces in Pupil Player have the red triangles pointing upwards?
The lens distortion is already compensated for when mapping data from the scene camera onto the surface coordinate system. See here.
Hi, @user-07e923! Yes, the triangles are all pointing up. I thought it'd have applied that distortion already. What do you think could be happening here?
Are the edges of a surface automatically assigned to the outer edges of the tags?
No, not necessarily. This would depend on where you draw the "edges" of the surface. You can draw surfaces of different sizes, including those that are smaller than where the physical markers are.
As for your other question on camera intrinsics, it's possible to re-compute them yourself. See here. But re-calculating them wouldn't affect the current recording, since the data was already recorded.
Hmm, could you follow this tutorial to plot the heatmaps on the surfaces, then let me know if the heatmap is somehow still shifted?
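For reference, a rough sketch of what that tutorial does, assuming a surface export file named gaze_positions_on_surface_Surface1.csv (adjust to your surface name) with x_norm, y_norm, and on_surf columns:
```python
# Minimal sketch: 2D histogram heatmap of gaze in surface coordinates.
# In surface coordinates, (0, 0) is the bottom-left and (1, 1) the top-right corner.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

gaze = pd.read_csv("gaze_positions_on_surface_Surface1.csv")  # hypothetical file name
gaze = gaze[gaze["on_surf"].astype(str) == "True"]  # keep samples that fell on the surface

heatmap, _, _ = np.histogram2d(
    gaze["y_norm"], gaze["x_norm"], bins=64, range=[[0, 1], [0, 1]]
)

plt.imshow(heatmap, origin="lower", extent=[0, 1, 0, 1], cmap="inferno")
plt.title("Gaze heatmap in surface coordinates")
plt.show()
```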
I'll try but I'm working in R not Pandas, so hopefully I manage.
For this particular surface I just clicked "add new surface" when all four tags were visible, I'm wondering if this is the default behavior when adding new surfaces.
Yes, this is the default behavior.
Thank you for getting back to me. I saw that the 'Network API' plugin broadcasts video frames from the world camera. My goal is to receive the world camera stream and display it in the window of another application. My problem is that I get this error (in the code I am attaching): "Error: too many values to unpack (expected 2)". Thank you very much
I can't get to line 25; I get an exception.
Hi @user-477991, have you tried the tutorial (https://discord.com/channels/285728493612957698/446977689690177536/1240584917725876234) that I linked in the software-dev channel? Let's continue the discussion in that channel to make communication easier.
In Pupil Capture I defined it as follows: I can't understand how to receive the video stream and display it (for example using OpenCV) in a separate window (a window of my application). A thousand thanks!
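For anyone finding this later: the "too many values to unpack (expected 2)" error usually means the frame messages were unpacked into two variables, while each frame arrives as three ZMQ parts (topic, msgpack payload, raw image bytes). Below is a minimal sketch of receiving and displaying world frames, assuming Pupil Capture runs locally with Pupil Remote on port 50020 and the Frame Publisher plugin enabled with the BGR format:
```python
# Minimal sketch: subscribe to world camera frames and show them with OpenCV.
# Assumes: Pupil Remote on tcp://127.0.0.1:50020, Frame Publisher plugin enabled
# (format: BGR), and msgpack >= 1.0 (string keys by default).
import zmq
import msgpack
import numpy as np
import cv2

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.world")

while True:
    # Each frame message has three parts, not two.
    topic, payload, frame_data = subscriber.recv_multipart()
    meta = msgpack.unpackb(payload)
    image = np.frombuffer(frame_data, dtype=np.uint8).reshape(
        meta["height"], meta["width"], 3
    )
    cv2.imshow("world", image)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
```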
By default it goes all the way to the outer edges of all the tags, right? And do these edges include this 1-block-thick black lining?
Hi @user-75df7c , briefly stepping in for @user-07e923 here. It seems your surface ends within the bounds of the apriltags (the pink outline is the boundary), which is fine, but this indicates an issue with the scaling of the reference image that you have there in R. Also the apriltag extents/borders are different here and in your reference image. Note that AprilTags should ideally have a thicker white border around them when doing surface tracking.
Hi! I am new and have no experience processing the data coming from the Pupil Core. I have a couple of questions that I would be very grateful if you could help me with. I wanted to know if there is any way to know the exact frame, or the exact time, when a given fixation occurs. I also wanted to know how I can identify a particular fixation in my Pupil Player export data (the CSV) when using Pupil Core, for example how to differentiate the fixations for each of the numbers that appear next to each fixation. Thank you very much in advance and have a nice day.
Hi @user-80b7db, yes, you can find out at which timestamp/frame a given fixation occurs by referencing the data's timestamp to the first timestamp (i.e., row 1 -- the first frame of the scene camera).
Context: each sensor connected to Pupil Core runs independently, which allows us to capture from a variety of sensors at different temporal resolutions. All data is then timestamped and timestamps are saved in the data stream. You can read more about it here.
Fixations are exported in the fixations.csv file. In this file, each ID corresponds to the fixation number you see in Pupil Player.
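As a rough sketch of the timestamp-to-frame lookup described above, assuming the export folder contains fixations.csv and world_timestamps.csv (exact column names can vary between Pupil Player versions, so treat them as assumptions):
```python
# Minimal sketch: find the world video frame index at which each fixation starts.
# Assumes fixations.csv has "id" and "start_timestamp" columns and that the first
# column of world_timestamps.csv holds the world frame timestamps.
import pandas as pd
import numpy as np

fixations = pd.read_csv("fixations.csv")
world_ts = pd.read_csv("world_timestamps.csv").iloc[:, 0].to_numpy()

# Index of the last world frame whose timestamp is <= the fixation start.
start_frames = np.searchsorted(
    world_ts, fixations["start_timestamp"].to_numpy(), side="right"
) - 1
fixations["start_world_frame"] = start_frames.clip(min=0)

print(fixations[["id", "start_timestamp", "start_world_frame"]].head())
```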
Btw, what do you mean by "differentiate the fixations for each of the numbers"? Do you want to tell which fixation ID it was, or what was gazed at for a particular fixation ID? If the latter, this plugin might be useful.
Hello everyone, I am new here. Nice to e-meet you all. My university lab just got a Core eye tracker, and I am about to use it for my experiments. I have some potentially basic questions, and I would appreciate it if someone could assist me with some answers, or even better, would be open to conversing via the voice channel or a video call.
My questions are related to the following:
For context, my experiment will be run via PsychoPy and has multiple trials (80+) per participant, so with every participant run I get 80+ folders. How can I batch export all the pupil and gaze data for all the trials per participant? Instead of manually opening all trials one by one in Pupil Player and exporting them as CSV, can I batch export the 80+ CSVs (or, even better, batch export the data from all trials into one CSV)?
Looking at the CSV export of one of the trials, I noticed that there is a varying number of entries tied to each frame (e.g., some frames have just 2 entries and others can have around 12), and the frames per second is also not constant (for one trial I got 42.9 FPS and on another trial I got 40.3 FPS). Why is this, and can I standardise the data entries per frame and the frames per second? I also noticed that by using the Surface Tracker plugin and specifying a surface using AprilTags, I am getting a much lower FPS than without it (from about 60 FPS to about 33 FPS). This is even more pronounced when I use PsychoPy, as the FPS drops to around 14. How can I remedy this? I am using a pretty strong CPU (i7 11th Gen) but a mid-to-low-tier GPU (Intel Iris Xe), so could it be that a more powerful GPU would allow it to run at its maximum of 60 FPS with the Surface Tracker plugin running?
In the gaze_positions CSV export, are norm_pos_x and norm_pos_y the 2D binocular gaze coordinates? If not, what are they?
Hi @user-b31f13 , have you had your free 30-minute onboarding call yet?
To answer your questions:
Running a separate recording for each trial will complicate things. Run one recording for the whole experiment. Please check our Best Practices for more info.
The cameras on Pupil Core are free-running and their framerate is affected by the eye camera resolution setting in Pupil Capture (192x192 is optimal). The availability of system resources and whether you have multiple programs running in the background also play a role, and it depends on what exactly your PsychoPy experiment is doing. You can interpolate the data to get a consistent sampling rate.
Do you need gaze mapped to your surface in real-time? If not, you do not need to use the Surface Tracker plugin in real-time. You can run it post-hoc in Pupil Player. This can save system resources and increase the framerate. But just to confirm, you are now referring to the refresh rate of the monitor, rather than the sampling rate of data?
The formats of the data are specified in the documentation. norm_pos_x/y are normalized gaze coordinates in the coordinate system of the world image. When you say "binocular gaze coordinates", do you mean that you want a separate measure for each eye?
Without knowing how your PsychoPy experiment is obtaining the surface mapped coordinates and saving them to HDF5, I am not sure. Is your code based on this example or are you using the Experiment Builder?
Apologies for the long text. Looking forward to some assistance. Warm regards.
That is the interface in Windows for installing and removing programs:
You will need to choose the appropriate instructions for your version of Windows.
No, only this one. Also, I tried to download the bundle, and the shortcut starts the installation of the program.
Also, note that when you start Pupil Capture for the first time, it will install drivers for the cameras and will ask for Admin privileges to do this.
Can you remove that one in "Geïnstalleerde apps" (Installed apps) and maybe even restart the computer afterwards? Then, download the installer fresh with a different browser and try the installation again, just to have a completely fresh attempt at it.
Yes, thank you very much. To clarify, I need to delimit all surfaces so I can know which ones the participants are viewing, right?
Quick question: what do you mean by "delimit all surfaces"?
@user-afe8dd , could you also check in your user directory (C:/Users/<your_user_name>) to see if there are directories called "pupil_capture_settings" and "pupil_player_settings"? If those directories exist, could you please also remove them.
It worked; it was my antivirus program.
Another question: loading all the files into Pupil Player takes a long time. Is there a way to exclude the video files? I tried making a copy of all the files, deleting the videos, and opening those with Pupil Player, but that does not work.
Hi @user-afe8dd - You need to upload the entire recording folder to Pupil Player, including the videos.
Hi, I last ran the program last Tuesday, which was fine. But now I tried to start it up and had to re-install, and it gives the following error. Do you have any idea what is going on?
Hi @user-afe8dd , what happened when you started it up that required you to re-install it? Did you initially install Pupil Capture with a separate Administrator account?
I don't know; it gave an error and started installing. What do you mean by a separate administrator account? I have not logged in and I only have one user on my laptop.
Ok, then you most likely do not have an Administrator account, but is this a work/school computer or your personal computer?
my personal computer
When did you purchase Pupil Core? You may be eligible for a free 30-minute Onboarding video call and we can walk through this together.
Hello, I was wondering if there are any use cases in which you would recommend the narrow-angle lens over the default wide-angle lens?
Hi @user-c68c98! The narrow lens decreases the effective field of view of the camera, but increases the spatial resolution for gaze estimation (to some degree) and is distortion-free. Thus, you could use it, for example, when performing an experiment on a phone's screen, as the screen would look larger.
Ok, thanks. So if my stimuli are in the FOV of the narrow-angle lens, should I use it rather than the wide-angle one? I had a second question: is the world camera a global shutter like the eye cameras?
That would depend on your needs. If you find the stimuli too small in the FoV, yes; otherwise, you can try the wide lens. Unlike the eye cameras, which do indeed use global shutters, the scene camera uses a rolling shutter.
Not sure if this is the right place to ask, but when I try to download the files from my videos to open in Pupil Player, the download does not include all the necessary files. Do you know if there is a workaround for this at all?
I should also add that when I try to unzip the files, a message appears saying that the download may be incomplete.
Hi @user-66797d , do you have Pupil Invisible or Pupil Core? And what operating system and browser are you using?
Thanks very much for the response! I wanted to see if there is any possibility of knowing the exact frame of a fixation on a certain object in the world, for example knowing when you are looking at a certain object on a surface, and then being able to classify whether the fixation corresponds to a certain object or AOI on a surface. I also wanted to know if you could give me recommendations or some insight into a method to know the specific timestamps when looking at one object or another. Regards.
Whether you can know the timestamps of what a person is looking at depends on your setup. For instance, are you presenting stimuli on a screen? If so, you can sync the timestamps of the stimuli appearance to Pupil Core's recording. See here.
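One common way to do this is to send remote annotations from the experiment script over the Network API, so events are stamped on Capture's own clock. A minimal sketch, assuming Pupil Capture runs locally with Pupil Remote on port 50020 and the Annotation plugin enabled; the label "stimulus_onset" is only a placeholder:
```python
# Minimal sketch: send one annotation ("event trigger") to Pupil Capture.
# Assumes Pupil Remote on tcp://127.0.0.1:50020 and the Annotation plugin enabled.
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Capture for its current Pupil Time so the event uses the recording's clock.
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Open a PUB socket for sending the annotation.
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the connection a moment before publishing

annotation = {
    "topic": "annotation",        # topic must start with "annotation"
    "label": "stimulus_onset",    # placeholder label for your event
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```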
If you're presenting real world objects, then this is tricky. You'll need a combination of tools to help you identify what is being looked at. For example, this community plugin is useful to identify objects in the real world. You could probably get the timing information about the object appearance/detection, and sync the timestamp to the Pupil Core recording via triggers. You'd have to contact the owner of that plugin for more info.
Hi @user-f43a29 Thank you for your response. As the headset was ordered by a professor in my lab (who is currently on sabbatical), I would not want to use up that 30-minute onboarding call, as he may wish to use it when he gets back. 1. Using PsychoPy, the files come out fragmented into multiple folders rather than one. I understand it might not be ideal, but is there a mass export option or script I can use? Otherwise, could I get the script that the viewer uses to create those exports (preferably in Python), so I can tweak it to perform the same action on the multiple folders? 2. I was referring to the sampling rate of the gaze and pupil data. It varies. E.g., in a one-second interval I could have 10 frames, and frame 0 has around 30 lines (data samples) in the spreadsheet, but frame 5 has around 7 lines. Why is this, and how can I standardise it so that each second I get the same number of frames, and each frame has the same number of lines?
2.1 Also, I am not able to deactivate the Surface Tracker in the Pupil Capture app, as it is necessary for PsychoPy to detect the surface and use its boundaries as the boundaries of the screen in order to create the HDF5 file.
2.2 Following up on your suggestion of having the eye camera at 192x192, that's what I had it on. Does the resolution of the eye camera affect the gaze and pupil data accuracy? 3. Thanks for the answers. There is a monocular, which should be one line per eye, and binocular, which should be one line for both eyes. So I wanted to know if norm_pos_x, for example, is the average x coordinate of both eyes? 4. I am using the Experiment Builder. Could I perhaps share my screen to demonstrate this discrepancy in a call, or shall I make a video and send it to you?
Thanks for your patience and assistance thus far. I look forward to hearing from you. Warm regards, Victor.
Sorry, I mean adding a "surface" for each area of interest, which would be each of the figures. Thank you!
I'm not sure I understood you. What do you mean by a figure? Do you mean you want to assign each colored block as a surface? Or do you have many bookrests and want to create a surface for each bookrest?
Regarding annotations, you can enable the annotation player plugin, and add annotations post hoc. They are then saved to the recording timeline, and you'll get values of when something happens (i.e., the annotation) at which timestamp.
I have another question. I would like to make post hoc annotations to know when an essay begins and when it ends. Is there any possibility of assigning a name, for example Y and X, to each part? Thank you!
Hello everyone! I'm using the Pupil Core in a study involving tracking eye position while reading a text on a tablet. When calibrating the Pupil Core, is it advised to have our subject calibrate on the laptop it's connected to (screen marker calibration), or to perform the calibration using natural features calibration and selecting the range from our subject's view? Thanks in advance!
Hi @user-46e23b! For browsing a tablet device, I'd actually suggest you try using the physical single marker. Position the tablet at the viewing distance used in your experiment, and then manually move the marker around the tablet screen, while your participant focuses on the marker. They should keep their head still during this process. With this method, you should be able to achieve the most accuracy within the area of the visual field that the tablet covers.
How would I set this calibration up on the laptop? Do I just proceed as with a normal screen calibration, but instead have the subject look at the physical single marker?
You would select the single physical marker in calibration settings.
Hello! I am using my pupil core glasses for pupillometry mostly. I have read the pupil diameter documentation and know to "lock" the model. Each time I create a new model, the baseline pupil diameter is different pre vs. post. I want to get the most physiologically accurate baseline and change in pupil diameter possible. Do you have suggestions to ensure and validate pupil diameter being read is true?
Pupil Invisible and I am using Mac OS with a Firefox browser
Hi @user-66797d, is it alright if we continue this conversation over in the invisible channel? I made a post there already (https://discord.com/channels/285728493612957698/633564003846717444/1243879530611871775). This channel is meant for communication regarding Pupil Core.
Hi @user-63b5b0 , do you mean each time you create a new model for the same participant? Do "pre" and "post" mean before/after freezing ("locking") the pye3d model?
Sorry, pre and post were confusing. I should have said: when I make consecutive models and freeze them, the pupil baselines vary. While I know the pupil baseline can change... I am curious whether the model fits the eyes differently every time and the resulting pupil measurements differ. Has this been noticed before? Do you have any recommendations to ensure I am getting the most physiologically accurate pupil measurements?
Hi @user-f43a29, did you understand my questions?
Hi, is there a guide on how to analyze pupil dilation and to look at the number of fixations on defined surfaces?
Hi @user-b31f13 !
From what I can see, the PsychoPy Experiment Builder does not enforce that a recording is started/stopped for every trial. For future experiments, it is recommended to start a recording once at the beginning of the experiment and stop it at the end. For Pupil Core recordings, we do not have a mass export script, but you can try this community contributed batch exporter. Otherwise, the code of Pupil Player is open-source. Note that there is no single script that it uses for the export.
The cameras of Pupil Core are free-running, and fluctuations in sampling rate can happen, even between two world frames. It depends on available system resources, other background processes, and what is happening in the PsychoPy experiment. You can standardize by interpolating the streams after the data has been collected.
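A minimal sketch of that interpolation step, assuming an exported gaze_positions.csv and a target rate of 120 Hz (both the column names and the rate are assumptions to adjust to your data):
```python
# Minimal sketch: resample the free-running gaze stream onto a fixed 120 Hz grid.
import pandas as pd
import numpy as np

gaze = pd.read_csv("gaze_positions.csv")
gaze = gaze[gaze["confidence"] >= 0.6]  # drop low-confidence samples first

t = gaze["gaze_timestamp"].to_numpy()
fixed_t = np.arange(t[0], t[-1], 1 / 120.0)  # uniform 120 Hz timeline

resampled = pd.DataFrame({"gaze_timestamp": fixed_t})
for col in ["norm_pos_x", "norm_pos_y"]:
    resampled[col] = np.interp(fixed_t, t, gaze[col].to_numpy())

print(resampled.head())
```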
Yes, I see now that the PsychoPy Experiment Builder forces the Surface Tracker constraint. Are you doing a gaze contingent experiment?
In principle, yes. For example, an eye camera resolution of 192x192 returns more accurate results if the pupil is dilated (e.g. in a dark environment).
Hello! I have the same file open on two different computers, at the same frame and with the same settings. However, one of them (the Mac) does not recognize all the surfaces, while the other one does. How is that possible? Am I doing something wrong? Thank you!
Hi @user-4514c3 , does this only happen for that one frame in this recording? Or does it happen for multiple frames or also in other recordings?
Hi team, I am trying to find a way to capture the timestamp from the tracking video played in Pupil Player. We recorded an initial tracking for baseline measurements, but I cannot use the full recording and need the measurements from a certain duration within the recording. How can I capture this information from the played recording and look it up in the pupil positions file? Thank you.
Hi @user-6cf287, thanks for reaching out! May I know if you're trying to cut out a section of your Pupil Core recording that corresponds to the tracking video played while recording eye movements?
Thanks for the response. I don't need to cut out the recording. While I am playing it, I just need to know the timestamp window, which I can then look up in the pupil_positions file to extract the baseline pupil diameter.
Okay. Since the data is already recorded, you can try using the Annotation plugin to add annotations in Pupil Player, e.g. "start of event X". Once you export the recording from Pupil Player, you'll get annotations.csv, which includes timestamps. You can then look up the timestamps to find the corresponding data.
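Once you have the export, here is a minimal sketch of that lookup with pandas, assuming annotations labelled "baseline_start" and "baseline_end" (placeholders for whatever labels you use) and a standard pupil_positions.csv:
```python
# Minimal sketch: average 3D pupil diameter between two annotation timestamps.
# Assumes a Pupil Player export with annotations.csv and pupil_positions.csv.
import pandas as pd

annotations = pd.read_csv("annotations.csv")
pupil = pd.read_csv("pupil_positions.csv")

start = annotations.loc[annotations["label"] == "baseline_start", "timestamp"].iloc[0]
end = annotations.loc[annotations["label"] == "baseline_end", "timestamp"].iloc[0]

# Keep 3D-model data inside the baseline window, with reasonable confidence.
window = pupil[
    (pupil["pupil_timestamp"] >= start)
    & (pupil["pupil_timestamp"] <= end)
    & (pupil["method"].str.contains("3d"))
    & (pupil["confidence"] >= 0.6)
]

print("Baseline pupil diameter (mm):", window["diameter_3d"].mean())
```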
Thanks, I will try that now!
Great, it works, thank you! I also learned that I have to keep pressing the hotkey if I want to measure over a certain time window.
I'm glad that it worked for you! If you don't want to keep pressing the hotkey, you can also programmatically create annotations. This requires you to know the frames/timestamps of the video. The link I've provided has some examples that show you how to do it.