Hi, is there a possibility to use Unity without using the HTC VIVE glasses? I want to make a simple arcade-like project and am looking for suitable algorithms; if you have any suggestions and tips I will be more than glad to learn them
Can you tell me the command line argument to do that?
It is as simple as in your first message: <path to pupil_player.exe> <path to recording>
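For example, on Windows (both paths here are placeholders for your own install location and recording folder):
"C:\Program Files (x86)\Pupil-Labs\Pupil v3.0.7\Pupil Player v3.0.7\pupil_player.exe" "D:\recordings\2021_03_16\001"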
Dear @papr, I explain again: I understood that in order to get my own recordings read by pupil_capture or pupil_service, I would need the corresponding _timestamps files. I made them, but they don't seem to be enough. I get warning messages in the console and in the main "world" window about "No camera intrinsics ... Loading dummy intrinsics ...; Consider running Intrinsic Estimation". In the smaller eye windows, it says it couldn't read either the file or the timestamps. How do I check that the video I supplied is the correct format, and that the timestamps are not off by one (for a video of N frames, I made an N-long np.array in the timestamps file)? Do I need anything else (intrinsics, lookup, ???), and what should I put in those (if indeed possible)? Or am I wrong and shouldn't be trying to do it? Please just let me know if you are going to help. [At the moment I am reading the code about intrinsics and it seems not so complex, except I can't find the method file_methods.save_object
that is used to create the file]. I can post or pastebin one eye video and the corresponding timestamps if you want.
"for a video of N frames, I made an N-long np.array in the timestamps file" That is correct. "Do I need anything else" The lookup is generated automatically. The intrinsics are not required for playback. In the eye process they are used for accurate eye model fitting.
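A minimal sketch of the timestamps side (assuming the eye video is eye0.mp4 and that an eye0_timestamps.npy file saved with numpy is expected next to it; adjust the names to your recording):

import cv2
import numpy as np

video_path = "eye0.mp4"  # hypothetical path
cap = cv2.VideoCapture(video_path)
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS) or 120.0  # fall back to an assumed frame rate

# one monotonically increasing timestamp (in seconds) per frame
timestamps = np.arange(n_frames, dtype=np.float64) / fps
np.save("eye0_timestamps.npy", timestamps)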
I am feeling like your approach is a work around for something that I did not fully understand. Maybe, you can elaborate on your goal and I can give a recommendation on the best approach?
Can we say that the duration of a "single" trip from MATLAB to Pupil Capture is almost half the "round" trip measured here?
roundtrip = tic; % Measure round trip delay
zmq.core.send(socket, uint8('t'));
result = zmq.core.recv(socket);
fprintf('%s\n', char(result));
fprintf('Round trip command delay: %s\n', toc(roundtrip))
Yes, but I recommend putting the toc as close to the recv as possible.
roundtrip = tic; % Measure round trip delay
zmq.core.send(socket, uint8('t'));
result = zmq.core.recv(socket);
+ roundtrip = toc(roundtrip)
fprintf('%s\n', char(result));
- fprintf('Round trip command delay: %s\n', toc(roundtrip))
+ fprintf('Round trip command delay: %s\n', roundtrip)
I have changed my annotation timestamp into an array:
keySet = {'topic','label','timestamp','duration'};
topic = 'annotation.duration';
label = strcat('duration',num2str(t2));
timestamp = {(toc(seqtime)) (toc(trialtime) - singletrip)}
duration = 1;
Recording is done correctly, annotations are sent and received correctly (I see them being generated correctly both on MATLAB and on Capture while recording), but Player won't open the recording folder; it goes into the "no response" mode. I tried it several times. Is there anything I can do to fix it?
So @papr here you go: my goal is to use Pupil as much as possible as it is (instead of reinventing the wheel), to get x,y timestamped eye tracking data and elaborate on them, without a world video (the person is rather following an external stimulus). So far I am fairly confident that I can apply a script like this on recorded data like the ones you gave me in this message about AprilTags, and I might play with those a bit. Sooner or later I will have files obtained via pupil_capture, or will be able to work in real time, but so far I only have a few recorded videos, and I am trying to see whether I can pupil_capture or pupil_service them in the meantime. And there I am stuck with the warning/error messages that I mentioned above. Here are data about my video files.
If you want, share an example eye video + timestamps to [email removed] For now, you can ignore the intrinsics warnings. Please also specify the error/warning regarding the timestamps.
"and that timestamps are not off by one?" Not sure what you meant by that.
thank you, email sent.
I think I am doing something wrong, but the Microsoft HD-6000 camera is not being recognized by Pupil Service: "local USB: camera disconnected!" Ran as administrator. I'm trying without doing the soldering just to see if it can connect. Not sure if this is the problem. I notice that when Capture is opened the light does not turn on on the camera, so it seems like it is not being activated at all. Thanks for any help!
Manual selection also does not work
Found a pyuvc setup requirement - going to try that
Worked!
Hi, when using Pupil Player and the annotations plugin, is there a way to display the annotation events in the timeline (a bit like how it looks with the surface markers)? Thanks!
Currently, this is not implemented.
tried it also like this, by adding another key to the containers.Map, instead of the array:
keySet = {'topic','label','sequencetimestamp','trialtimestamp','duration'};
topic = 'annotation.duration';
label = strcat('duration',num2str(t2));
sequencetimestamp = toc(seqtime) - singletrip;
trialtimestamp = toc(trialtime) - singletrip;
duration = 1;
this way Pupil Capture also collapses and freezes.
Is it because my system is not fast/strong enough, or will Pupil Capture/Player just not work this way? Both ways, annotations are correctly generated by MATLAB. In the first way, Capture works fine and records but Player collapses; in the second way, Capture also collapses.
I think it always requires a timestamp field. And if I interpret your code correctly, you replaced it with two different ones. I think this is why Player freezes. The extra fields should be supported by both Capture and Player.
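For reference, a minimal Python sketch of an annotation that keeps the required timestamp field and adds extra custom fields alongside it (the pattern follows pupil-helpers' remote_annotations.py; the extra field names here are made up for illustration):

import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# ask Pupil Remote for the PUB port and open a publisher socket
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")

annotation = {
    "topic": "annotation.duration",
    "label": "duration42",
    "timestamp": time.time(),  # required; should be expressed in Capture's clock, not local time
    "duration": 1.0,
    # extra optional fields are fine, as long as "timestamp" stays present
    "sequence_ts": 12.34,
    "trial_ts": 3.21,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.dumps(annotation, use_bin_type=True))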
ok gave up and will send two sets of annotations instead of all in one!
Any idea where this error suddenly comes from?
C:\Program Files (x86)\Pupil-Labs\Pupil v3.0.7\Pupil Capture v3.0.7\sklearn\base.py:334: UserWarning: Trying to unpickle estimator Pipeline from version 0.22.2.post1 when using version 0.23.1. This might lead to breaking code or invalid results. Use at your own risk.
It is not an error but a warning. The bundled sklearn version (0.23.1) is different from the version (0.22.2.post1) used to write some other bundled files. This is nothing to worry about in this case and has been fixed in our 3.1 release.
We should have a channel here for people to ask questions about coding.
We do have such a channel. Check out the software-dev channel.
Thanks. Is there any other way to correct a misplaced event? (pressing the wrong button)
At the moment, this is not implemented either.
Hi,
We are using the single marker choreography. The calibration marker included with the Pupil Core is sized for calibration distance at 1-2.5 meters. I was wondering if there is any information regarding changing the size for a smaller distance, say 0.5 meters.
Hi @user-1bfcbc, the calibration marker will work at a distance of half a metre. As a general rule, try to calibrate at the viewing distance you will be assessing: https://docs.pupil-labs.com/core/best-practices/#calibrating
Hi, I have a question regarding the fixation time and maximum dispersion. When the maximum dispersion is 1°, how can I find the area as well as the circle that includes all gaze locations in that specific fixation? Does the yellow fixation ring represent this area, or do I need to do some calculations to find the radius of this circle based on the maximum dispersion? If so, I do not know how I could calculate the radius from the maximum dispersion while I do not know the observer's distance to objects. Any suggestions would be appreciated.
Is there a reason that every time I open Pupil Player, I have to wait for the Fixations plugin to fully load?
"every time I open Pupil Player" You mean every time you open a recording that had its fixations detected already, correct? This is not about unprocessed recordings, correct?
Recent Pupil Player versions should cache the fixation results once they were calculated successfully. Fixations are recalculated automatically if gaze changes, e.g. due to recalculating a post-hoc gaze mapping.
Which version of Pupil Player do you use?
it's v3.0.7
Hi, I have a question regarding the fixation time and maximum dispersion. When the maximum dispersion is 1°, how can I find the area as well as the circle that includes all gaze locations in that specific fixation? Does the yellow fixation ring represent this area, or do I need to do some calculations to find the radius of this circle based on the maximum dispersion? If so, I do not know how I could calculate the radius from the maximum dispersion while I do not know the observer's distance to objects. Any suggestions would be appreciated.
Hey, apologies, I thought I had replied to your question already but it looks like I did not send it in the end. There is no need to repost the question. It is sufficient to reply to the existing message if you have the feeling that it was overlooked.
The yellow ring has a fixed size. The center is calculated as the average of the gaze locations that belong to the fixation. The maximum dispersion is not necessarily the radius (or rather, the diameter) of the circle. The dispersion is calculated as the maximum pairwise distance between the gaze samples belonging to the fixation.
The circle is just an approximation for the gaze distribution within the fixation. It is not guaranteed to be even.
So, I wonder how can I estimate the area including all fixations. Is there any way?
Thank you very much for your response! Sure, I will reply to my message.
No worries. This was just meant as a recommendation should this happen again in the future.
You mean the gaze area of a single fixation, correct? Or do you want to calculate the area of all fixations?
Yes, I mean the gaze area for each fixation.
Ok, I think there are two possible solutions with varying precisions. Both use the idea of fitting a geometric figure around the points (a proxy for the actual data) and calculating its area
1) Fit a simple geometric figure with a known area calculation method, e.g. a rectangle, ellipse, or circle, to the data (note: the difference to your previous idea is that the fixation center is not necessarily the center of your fitted geometric figure)
2) Calculate the convex hull around the data points https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html and access the area (the volume attribute in the case of scipy.spatial.ConvexHull)
I think I would recommend 2) as it fits tightly around your data points and there is an existing fitting function + area calculation
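A minimal sketch of option 2), assuming gaze_points is an (N, 2) array of the gaze positions belonging to one fixation (e.g. the norm_pos columns from the Player export); for 2D input, ConvexHull.volume is the enclosed area and ConvexHull.area is the perimeter:

import numpy as np
from scipy.spatial import ConvexHull

# hypothetical gaze samples of a single fixation, in normalized coordinates
gaze_points = np.array([[0.48, 0.51], [0.50, 0.49], [0.52, 0.52], [0.49, 0.53]])
hull = ConvexHull(gaze_points)
print("fixation gaze area:", hull.volume)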
Thanks for the solutions. I will go with the second option. I also have a very basic question that has me confused. I could not understand the exact meaning of the dispersion angle as a metric for distance. I do not understand the meaning of 1° as a distance.
The fixation detector takes the 2d gaze points, unprojects them into 3d cyclopean viewing directions in the 3d camera coordinate system, and calculates the dispersion as the angle between pairs of these vectors.
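A rough illustration of that idea (not the actual detector code; it assumes you already have unit viewing-direction vectors for the fixation's gaze samples):

import numpy as np

def max_pairwise_angle_deg(directions):
    # directions: (N, 3) array of unit vectors (3d viewing directions)
    cosines = np.clip(directions @ directions.T, -1.0, 1.0)
    return np.degrees(np.arccos(cosines).max())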
I got it. Thank you very much!
no it actually happens for the recordings that I'm dragging into Player for the first time, right after recording, without prior processing
In this case, this is to be expected. The fixation detector processes the gaze to fixations post hoc. It does not read the real-time recorded fixations.
Hi! Currently, I am doing research including an experiment in which I need to know the fixation time for objects in the scene, so I have two main questions:
1- To know the fixation time for each object, I look at the fixation data and the video from Pupil Player at the same time to identify which fixation is related to which object. For example, fixation number 189 is on the screen, and then I accumulate all fixation times on the screen to get the total fixation time for the screen. Since it is a time-consuming process, I would appreciate any advice for easier and more accurate fixation time calculation.
2- I need your help with fixation time calculations regarding the gaze mapping error. I attached two screenshots to this email. The first image is the beginning of the experiment, where I asked the participant to focus on the center of the marker, and it can be seen that there is an error in terms of gaze mapping. The second image shows a sample fixation that is on the screen. Since I know there is an error, I am not sure that the participant is exactly looking at the screen; due to the error, he is probably looking at the painting and this is a fixation for the painting. So, I am wondering how I could match the error with the fixation detection.
Hello, I would appreciate it if you could help me find out the fixation time accurately. Thanks!
What's wrong with my eye tracking hardware? How can I fix it?
It is possible that the drivers are no longer installed. Windows updates tend to purge them. Check out the Video Source menu, enable manual camera selection, and open the activate camera selector. Do you see any entries? Do they say unknown three times?
pupil capture 0 window is black
No image
I have seen that you posted the same questions via email to [email removed] We will come back to you via email.
Hi, sorry, I did not notice your message. I am waiting for your email. Thank you very much!
activate device and no devices found, what's wrong
Please enable manual camera selection below the selection menu and try again
hello, do you know how to replace this cable?
I have a new cable, but I don't know how to change it
Please contact [email removed] in this regard.
Is this an email address?
Yes, please contact this address via email.
thank you
Hello, I would appreciate if you could help me with this error
When I try to add surfaces, Pupil Player crashes. The videos are quite long, about half an hour. Marked surfaces are visible about 90% of the time. I suspect this could be part of the problem. Note: "No markers found in the image" is just because I tried to add surfaces when the image had no markers in it, so this error is fine. But "process player crashed with trace" is the issue. Thank you for your effort. Edit: The main problem was that I added the surfaces while the data was still on an external hard drive. Now on the internal hard drive it works at least without crashing, but it is still very slow. Every step takes a few seconds; for example, when I try to adjust the surface I have to wait about 5 seconds until the step is saved and until I can go on with other steps. My laptop has 16GB RAM, an AMD Ryzen 7 4700U octa-core and AMD Radeon RX Vega 7 shared graphics, so it's not the worst technical base.
Hi, the Pupil Capture world video window frequently fails to appear. How do I fix this issue on Mac? Thank you
Hi, I'm using Pupil Core hardware for a real-world navigation experiment and I was wondering whether there is any direct SLAM implementation out there for tracking the camera position within the environment (one that takes into consideration the fisheye distortion of the Pupil Core). It might be worth mentioning that the environment is the same for all recordings, and there is also the possibility of having a simplified mesh of the environment, in case there are some particle filter implementations out there?
Hi @papr, when I use Pupil Capture, there is a problem with the eye process and there is no eye image. This is the error message:
eye0 - [INFO] numexpr.utils: Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
eye0 - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
eye0 - [INFO] root: Refraction corrected 3D pupil detector not available
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
eye0 - [INFO] camera_models: No user calibration found for camera Pupil Cam2 ID0 at resolution (400, 400)
eye0 - [INFO] camera_models: No pre-recorded calibration available
eye0 - [WARNING] camera_models: Loading dummy calibration
eye0 - [WARNING] uvc: Could not set Value. 'Backlight Compensation'.
eye0 - [WARNING] launchables.eye: Process started.
Estimated / selected altsetting bandwith : 617 / 800.
!!!!Packets per transfer = 32 frameInterval = 83263
Please see this conversation for reference https://discord.com/channels/285728493612957698/285728493612957698/817333467124727810 What are responses to these questions in your case?
My "old" Pupil Pro had a small modification today. Because I was concerned about damage to the free-hanging wires from the camera to the frame (which are constantly in contact with the human head), I have made a new pair of adjustable sliders which now includes a clip holding the wire in its place. Excess wire is now kept outside the frame, away from the wearer.
Could you try to reproduce the issue and share a copy of the ~/pupil_capture_settings/capture.log file, please? The window not appearing indicates an issue on startup. The log file should contain more information.
@papr thank you for your reply. I tried re-downloading the software and it seems to work so far. Will share the log if the same issue resurfaces. Many thanks.
How can this warning be resolved: "could not set value: Backlight Compensation"
Some cameras expose this UVC control but do not actually allow setting it. This warning does not affect the eye tracking and can be ignored.
and, do you know of a plugin to reject blink/artifact data and interpolate the remaining data?
The blink detector calculates the blink sections but does not discard them from the raw data export. You need to remove rejected data during your post-processing pipeline. The same applies to data interpolation.
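As an illustration of such a post-processing step, a hedged pandas sketch that masks pupil samples falling inside exported blink intervals and interpolates over the gaps (it assumes the standard Player export files pupil_positions.csv and blinks.csv and their usual column names):

import numpy as np
import pandas as pd

pupil = pd.read_csv("exports/000/pupil_positions.csv")
blinks = pd.read_csv("exports/000/blinks.csv")

# mark pupil samples that fall within any detected blink interval
is_blink = np.zeros(len(pupil), dtype=bool)
for _, blink in blinks.iterrows():
    is_blink |= pupil["pupil_timestamp"].between(
        blink["start_timestamp"], blink["end_timestamp"]
    ).to_numpy()

# reject blink samples and linearly interpolate the diameter signal
pupil.loc[is_blink, "diameter_3d"] = np.nan
pupil["diameter_3d"] = pupil["diameter_3d"].interpolate(limit_direction="both")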
does the duration of annotations have any effects on the recorded data? how can the duration be used in data processing, anyway?
no, it is an optional field. You can add other optional fields when sending annotations remotely.
You could use it for example by sending a single annotation at the end of your trial, including the duration of your trial. Another example for optional data is dynamically generated content, e.g. a user key stroke.
Thanks!
Hello, I want to synchronize pupil dilation recording with an audio file. The glasses are expected to start recording together with the beginning of the audio file. Currently I have an uneven delay between the start of the glasses and the audio file (each time a different delay).
Has anyone done this before? Any recommendations on how to perform?
You could assume the delay, and just sync everything on start with a "calibration" blink
I tried to insert a delay and heard a delay even in the code, but each time it opens and closes the recording of the glasses for me at different times regardless of the delay I set in the code
@user-90a55d the cameras are controlled by different processes and it is not guaranteed that all processes start recording at the exact same time. This is likely why you have a random delay. I assume that you are initiating your audio with a script that concurrently starts a pupil capture? If so, you could modify your script to begin a capture first, wait for several seconds, and then initiate the audio. You could also use the annotation plugin to make events in the recording that correspond to audio events. This script demonstrates how you can send remote annotations over the network: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
Thanks I will try
Hi. I'm new here. Is anybody using the Core product with a smartphone for data capture instead of a computer, for portability?
Hey, using Pupil Capture 3.1.16 and the latest LSL plugin, the streams of the glasses are detected in the LabRecorder but are empty. Any idea how to solve this issue? Best, Julius
The LSL relay only streams gaze data. Gaze requires a valid calibration since Pupil v2.0. Please calibrate and try again.
Solved the Issue. Thanks!
Hello, when I run pupil_capture (or _service, or _player) it creates a pupil_capture_settings folder in my home directory. Can I make these folders be created in a directory relative to the installation path (or to the cwd when launching the program)? It's just to avoid cluttering the home directory with three more folders.
If you run from source, it will use directories relative to the repository clone
Hi, this is currently not supported.
so there is no PUPIL_HOME environment variable or any such mechanism? (like a config file?)
No, there is not. This is the code for creating and setting up the user directory https://github.com/pupil-labs/pupil/blob/master/pupil_src/main.py#L54-L83
and how do I launch e.g. "pupil_service"? I just tried with $ python launchables/service.py but it just exited in 2 seconds
Everything is started via main.py. See python pupil_src/main.py --help
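For example (assuming a source checkout, run from the repository root; the subcommand names follow the --help output):

python pupil_src/main.py --help
python pupil_src/main.py capture
python pupil_src/main.py service
python pupil_src/main.py player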
Hello @papr, can you provide a 3D model of Pupil Invisible? I am trying to design a connecting part to add another camera to Pupil Invisible.
I am not sure if we will be able to do that. Please contact [email removed] in this regard.
OK, thank you.
Can you think of a reason why fixations are only present around timestamp zero? I used https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb on my data
I think this is just a visualisation issue. Look at the x-axis. It goes from 0 to 40k seconds (11 hours)
Hello, how can I programmatically set the dispersion and duration parameters of the fixation detector? Does it have to be done when starting the plugin?
Yes, you need to pass these arguments as plugin init arguments
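A hedged sketch of doing that over the network API by (re)starting the plugin with init arguments via a start_plugin notification; the plugin name and argument names below are assumptions based on Capture's fixation detector and may need adjusting:

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

notification = {
    "subject": "start_plugin",
    "name": "Fixation_Detector",  # assumed plugin class name
    "args": {
        "max_dispersion": 1.5,  # degrees (assumed parameter names)
        "min_duration": 80,     # ms
        "max_duration": 600,    # ms
    },
}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
print(pupil_remote.recv_string())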
I remotely triggered recording in Pupil Core with the Pupil Mobile app. It worked before, but today Pupil Core complained about time sync in Pupil Mobile and aborted the recording. It keeps saying to enable time sync on Pupil Mobile, but I can't find any such option in Pupil Mobile. It worked before and suddenly it started causing trouble. Please, can anybody help me?
You need to enable time sync in Capture. Pupil Mobile attempts to sync automatically.
Hi papr, could you tell me where I can find the option to enable it?
Start Pupil Capture -> Select the plugin manager menu on the right -> enable Time Sync
Yes I did and I can see my ONEPLUS phone in the 'sync group members'
ok, great.
actually, it is grayed out.
but I still can't trigger recording; it fails for the same reason.
ok, please stop Pupil Capture, open Pupil Mobile -> Settings (three dots bottom right) -> Stop background service and terminate app -> Restart both applications
I rebooted the phone and restarted Pupil Capture, but still the same problem.
I tried again and same issue
Did you change anything between the last time it was successful and now?
Do I have to upgrade my phone to the Invisible app instead of Pupil Mobile?
This will most definitely not work. Pupil Core does not work with the Pupil Invisible app.
Hi, does setting "auto exposure priority" impact the exposure time if the "auto exposure mode" is set to manual? thank you for your help
Sorry, I must have overlooked that one. To answer your question: No it does not.
Apologies for the followup message, I am struggling to find reference to this information in the documentation. Thank you very much for your help
Hi, my pupil player app suddenly stopped working on my PC. When I open it, the command window opens, but then nothing happens indefinitely. Anybody have suggestions for troubleshooting the issue? It was working fine up until now and nothing on the computer has changed as far as I know.
Please delete the user_settings_* files in the pupil_player_settings folder. Also, please make sure to use a recent version of Pupil Player to avoid this issue in the future.
I'll try that, thanks!
just kidding, there is no pupil_player_settings folder that I can see
It should be in your home folder. You can try searching for it, too
hmm, I also don't see a home folder nor does it come up in a search
Do I have a weird version? It says v1.23
nevermind, found pupil_player_settings
That is an older version, but the folder should be there anyway (if it started correctly before)
great, that worked! thanks so much
Auto exposure can be enabled with setting "Auto exposure mode" to "aperture priority"
Thank you, if auto exposure mode is set to aperture priority, and "auto exposure priority" is disabled, does this mean that the frame rate will remain constant? Or does "aperture priority" encompass dynamic frame rate + exposure time?
for context, I am building a plugin that programmatically updates the controls shown in this file: https://github.com/pupil-labs/pyuvc/blob/eb2c0a04caafcb5c2f349aa76d188b098571d982/controls.pxi
I am not 100% sure on that, to be honest. Please refer to the UVC standard for how it should behave (the actual camera behavior might differ though). http://www.cajunbot.com/wiki/images/8/85/USB_Video_Class_1.1.pdf
Thank you, I'll take a look through the documentation, sincerely appreciate the help!
I still have a problem of 'world - [ERROR] recorder: Pupil Mobile stream is not in sync. Aborting recording. Enable the Time Sync plugin and try again.'
Is this message related to this issue "pyre.pyre_node: Group default-time_sync-v1 not found." ?
Thank you @papr, this document was very helpful. In the document it mentions that "Aperture Priority Mode - auto Exposure Time, manual Iris". Do you happen to know if the wide-angle world camera that comes with the Pupil Core headset maintains a fixed aperture, with only the exposure time varying, under aperture priority mode?
Sorry for the delayed response. I wanted to confirm the correctness of the answer internally before posting it. The scene camera uses a fixed aperture.
These warnings show up in Player after exporting my recording: "no 2D data for pupil visualization found" "no 3D data for pupil visualization found" how can I resolve them?
Can you remind me of your setup? Pupil Core; exporting recorded data? Or do you use post-hoc detected data?
Yes, recording in Pupil Core and then exporting from Pupil Player into .csv files. The exported data seems to be fine and includes all that it should. I just thought these warnings may be indicating something serious?
The error message comes from the eye video exporter, correct?
yes
Btw, that is a prime example of how a confidence signal should look!
That message means that there is at least one frame for which there was no pupil data recorded. Please check the exported video. Does it include the expected pupil data visualization?
yes there are some instances in the video where there's no data:
oh good! maybe it's because I recorded in a dark room?
I think it is mostly the strong contrast between pupil and iris.
Hello! I am aware that Pupil Mobile is no longer maintained, however, I decided to use it in an experiment. Do you know if there is a way to send offline triggers to 2 different mobile + Pupil Core setups, based on which data can be later synchronized? Eg. signaling the start of the experiment. The point is to be independent of the desktop softwares as much as possible during the recording phase. Is zmq good for this?
Pupil Mobile does not have the possibility to receive triggers. The only way to sync two Pupil Mobile instances is through a common Capture instance on the same WiFi. Enable Time Sync in Capture, and the Pupil Mobile instances will start following its clock. Since you are running Capture at that point anyway, you can send the remote triggers to Capture and record them in a third recording.
Alternatively, you should be able to sync time post hoc to Unix time using https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb. Then you can store your triggers independently of Pupil Mobile/Capture (e.g. in your experiment software) and correlate them post-hoc to the recordings as well. Make sure that the devices were recently time synced via NTP.
Ah, ok, looks like there is a gap in the eye video recording which is automatically filled with artificial frames by Player. It is expected that it can't find any pupil data for these frames. These gaps can happen e.g. due to camera disconnects.
I noticed some recordings have so many gaps in them that the video player says the file is corrupt and cannot open it. Is there some way to avoid the gaps?
@papr Thank you for your help! I will give it a try.
I see, so it works better for lighter-colored irises..
At least lighter-colored irises in IR light. This does not necessarily translate to visible light.
The darkness helps to evenly light the eye region as well. There are no other dark areas which could be mistaken for the pupil. Also, your 3d eye models are well fit which you can see from the highly correlated 3d diameter graph.
perfect! good to know that
Which video player? Pupil Player? Also, how do you know that there are many gaps if the video does not open? Was it just a guess?
yeah it was just a guess, my laptop's video (.mp4) player won't open them.
A lot of video players cannot handle the intermediate video format. The exported video should work well though.
The interpolation of gaps in the exported video should never lead to a corrupted video.
then maybe I have not exported eye videos. I'll look into it.
Also, if the export is interrupted, the resulting video will be corrupted.
aha, good to know
Is this message related to the time sync (mobile) issue "pyre.pyre_node: Group default-time_sync-v1 not found." ?
That message means that there was no other node in that group yet. Groups are created and destroyed automatically. This message is expected if you start Capture and there is no other Time sync device already present
So it is not a message about failing time sync in Mobile?
Not necessarily. If the Pupil Mobile client appears in the list, it should work. Are you streaming all three cameras?
I am still having trouble with time sync with Mobile.
Yes.
Yes I can see my phone in the Sync Group Members
but whenever I trigger recording remotely, Capture complains that the Mobile stream is not time synced.
I understand the symptom of the issue. This is the code that triggers it https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/recorder.py#L279-L298 I do not have any ideas what might be the cause if time sync is already enabled.
However phone shows right time and date
The displayed time is independent of the internal clock used by Pupil Mobile/Capture
Could you share the capture.log file after attempting to init a recording? You can find it in the pupil_capture_settings folder.
OK, I found the problem. I sent a notification message and then I sent 'R' to start recording. This caused a time sync issue in the Mobile stream. I changed it to send 'R' first and then the notification. Now it starts recording remotely without a time-sync error message.
ok. And you can reproduce this behavior reliably by shutting Pupil Mobile down via the settings in between? I do not see a reason why the order of these notifications should play a role.
Do you start Capture manually and wait for the Mobile client to appear in the time sync menu before starting the recording? Or is Capture started from the external script as well?
What is the other notification?
Any explanation?
label = "TimeStamp_Optitrack"
tempTrigger = new_trigger(label, globs.rBodyTS, 0.0)
send_trigger(tempTrigger)
globs.rBodyTS is the timestamp from the OptiTrack 3D tracking system
I started Capture manually, after starting Pupil Mobile.
Maybe Pupil Mobile just needed time to adjust its clock? I cannot tell for sure. But I am happy that you got it working.
Yes I just tested it and sending notification before 'R' caused the problem.
Once that happened, I had to restart Capture; sending 'R' first did not cause an error and started the recording.
Is it possible that remote annotations coming in at the same time from different threads could be causing Pupil Capture to crash? If so, could I get around the problem by storing them all and sending them before ending the recording?
I have a question.
I want to use this and put it in a folder called pupil_player_settings and then plugins,
but I cannot find it in my Pupil Player workspace
You will need to launch Player once to create these folders. Afterward, searching for pupil_player_settings should work. The folder is in your home folder.
Please note that this is an outdated version of the plugin. You need to make this change to get the ui working properly:
- offset_x=0,
- offset_y=0,
+ offset_x=0.0,
+ offset_y=0.0,
@papr So I'm relatively new to the Pupil Core as well as the Pupil Capture program. Can anyone tell me what an optimal angular precision (RMS of angular offset/distance) value should be for calibration using a single marker?
I am very curious about this too, as I'm unable to find any information on what a 'good' calibration is
I am trying to upload a file into Pupil Player (v3.1.16) to do post hoc gaze calibration and pupil detection, but instead of opening the normal window only the eye videos are displayed and no post hoc analysis can be done. I have tried using an older version (v2.6.19) but the world video will not display and when I attempt to do a post hoc gaze calibration the app shuts itself down. Is there any way to fix this problem?
Could you please try reproducing this issue again in v3.1 and share the pupil_player_settings folder -> player.log file with us?
Still got the question (I am really new with the program): where can I find the new custom offset (because I have no Python, only R) and how can I launch the Player again?
You should be able to select it from the "Gaze Data" menu in Player, if it is installed correctly.
Hi there, I'm trying to assemble the DIY kit and I'm on the step where you replace the IR filter, but I can't access the screws to unmount the lens holder because the two circuit boards are sandwiched together. What should I do?
Hi @user-7e9436 it looks like you also sent [email removed] an email asking about DIY setup. I think it is best that we continue the discussion here if that is OK with you.
https://vimeo.com/53005603 shows the disassembly of the HD-6000 camera circa 2013. The design of the camera might be somewhat different in 2021, but I would conjecture that the two boards should still be separable, as shown here in a 2019 document: https://www.thingiverse.com/thing:3337827
Hi wrp, thank you for the reply, sounds good, we can continue the discussion on here. Okay, judging from the photos in the link, it may just be a component with pins that connect the two boards together, in which case I will try to "pry" the two boards apart as gently as possible using a screwdriver or something. Also, with regards to the soldering of the IR emitters, could you confirm if the "black dot/product marking/angled cut" side is the + side? I called Digi-Key today and the fellow said it is the anode/positive side, but after googling online I read that anode actually means the negative side, so I'm not sure if what he said is correct. Edit: Okay, so I've managed to pull them apart, replace the IR filter, and solder them back together; now how do you sandwich the two circuit boards back together so that they can be mounted onto the camera holder?
@user-755e9e can you provide any tips regarding orientation of IR emitters?
So is it possible then to only move the gaze to the left or to the right, or do I need to change everything with the post hoc calibration? Because it doesn't work yet
No, the plugin is independent of the post hoc calibration. It should be listed in the same drop-down menu though. Were you able to find the plugin folder?
is there a way to map norm_pos_x and norm_pos_y coordinates onto a 1920x1080 monitor?
Yes, read more about it here https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
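For reference, a hedged sketch of the surface-tracking route: once a surface (say named "Screen") is defined with markers, the Player export contains per-sample surface-normalized gaze that can be scaled to monitor pixels. The file and column names below follow the usual surface export layout but should be double-checked against your export:

import pandas as pd

W, H = 1920, 1080  # monitor resolution
gaze = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")

# x_norm / y_norm are in surface coordinates with the origin at the bottom-left
gaze["px"] = gaze["x_norm"] * W
gaze["py"] = (1.0 - gaze["y_norm"]) * H  # flip if you want a top-left origin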
note that I haven't used surface tracking/AprilTags for my data collection
In this case, there is no automated way of mapping the scene camera coordinates to screen coordinates.
I did observe after calibration a surface approximately equal to the experiment monitor; if that is my default surface after calibration, would I be able to map the coordinates?
You would have to assume that there was no relative movement between scene camera and AOI/surface/display. Any scene camera movement invalidates the mapping. That is why surface tracking requires the markers. They can be detected automatically and the relative position between scene camera and surface can be calculated for each frame, without having to assume this relationship to be fixed.
Thanks a lot for the help @papr
When I specify an exact timestamp for my annotation to be saved on the recorded data, as in:
keySet = {'topic','label','timestamp','duration'};
topic = 'annotation.sizeSEQ';
label = strcat(name,num2str(j));
timestamp = toc(runtime)-t2/1000;
duration = t2/1000;
will the singletrip from MATLAB to Capture still affect this timestamp and change it?
Capture will save the exact timestamp that comes within the annotation. Capture does not make any assumptions about its correctness and therefore does not try to modify it in any way.
perfect, thanks
Unfortunately not. I need this for a deep calibration, or is there another way to do this? I use the Invisible for my thesis project, but sometimes the gaze shifts a little bit to the right or the left, so I want to fix that. The custom plugin seems like a good idea, but how can I make it work on my computer?
The first step is to find the pupil_player_settings folder. You can either search for it or get to it via the Windows Explorer: This PC -> Drive C -> Users -> your user name -> pupil_player_settings
I found that and put in the custom file
In the plugins folder, correct?
yes
yes, in the plugins folder
Now, after starting Player, go to Gaze Data and you should see three options in the drop-down menu, instead of two.
Could you please share the player.log file that is in the pupil_player_settings folder?
There is
The player file. Windows is hiding the .log extension
So can you help me based on the player log
Looks like your version of the plugin contains more incompatibilities to the current Player version. Please use this one instead: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
Hi! Not sure this is the right place to ask (if not, I am sorry), but I will give it a try: I am setting up a study in which I will capture pupil size with the Pupil Core system, using pye3d pupil detection. People will always look at a cross on the screen (at least that is what I tell them to do) and listen to sentences, which they repeat after a short interval. Now my questions are: 1) For an accurate measurement of pupil size with the 3D model (or even 2D), is it absolutely necessary to calibrate the eye gaze as well? Meaning, does it matter for the precision of the pupil size estimation resulting from the eye-camera data whether I have calibrated to establish a reference to the world camera or not? I know from the literature that eye-gaze position does matter, but if we assume that my participants all behave perfectly and only fixate the cross, would I be fine without calibration? 2) If I do not calibrate for eye gaze, will I have issues with slippage, i.e. will slippage affect pupil size estimation? Or is that a totally independent issue?...
1) no need to calibrate gaze if you only need pupil size. Only make sure that 2d detection works well and the 3d model is fit accurately. The idea of pye3d is that it is independent of the gaze angle (on the population level). Also, if your gaze angle does not change anyway, there is no need to worry about it. 2) Slippage is an issue independent of calibration. It affects both gaze and pupil size estimation. Even though pye3d attempts to readjust its model location continuously, it requires data from different viewing angles to do so well. If you are looking mostly forward, I recommend freezing your eye model after fitting. This avoids discontinuities during the measurement. This will make the measurement susceptible to slippage. It is therefore important to monitor the model fitness and refit the model after slippage. You can also un/freeze the model programmatically if you want to integrate eye model fitting phases into your experiment.
Great! That helps a lot! Thank you!
Can I fix all those problems in some way? So that I use the right one with no errors. But thanks in advance
Delete the old one, download the new one, and put it in the plugins folder.
Do you have the correct link for the new version? Because I think I have the same problem with the re-install
If it still does not appear, please share the log file again. It will be updated
This is the link of the correct version https://discord.com/channels/285728493612957698/285728493612957698/821414795327963176
Hey, I have a quick question. I'm using the Pupil Labs Core with Python, and was wondering whether I should use the API or run from source. I plan to use information that is processed and output from the device live in other processes that are running in parallel
so as someone with decent knowledge of Python but not much experience in multiprocessing/running multiple processes at once with information being passed directly between them, should I use the API or should I run from source?
You can read more about the network API here https://docs.pupil-labs.com/developer/core/network-api/
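As a rough sketch of that route (following the patterns in the pupil-helpers examples), you could subscribe to gaze data from Capture and forward it to your other processes, e.g. via a queue:

import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# ask Pupil Remote for the SUB port and subscribe to gaze data
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("gaze.")  # or "pupil." for pupil-only data

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.loads(payload, raw=False)
    # hand datum["norm_pos"], datum["confidence"], etc. to your other processes,
    # e.g. through a multiprocessing.Queue
    print(topic.decode(), datum["norm_pos"], datum["confidence"])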
and how do you recommend I connect these parallel processes in regards to the flow of information
Thank you!
Also, when I try installing dependencies in my virtual environment set to Python 3.6, which I understand is the version used in the source code, I get this:
ERROR: Could not find a version that satisfies the requirement requirements.txt ERROR: No matching distribution found for requirements.txt
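That particular error typically means pip was asked to install a package literally named requirements.txt, i.e. the -r flag was omitted; a hedged guess at the intended command:

pip install -r requirements.txt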
Scanning dependencies of target uvc
[ 11%] Building C object CMakeFiles/uvc.dir/src/ctrl.c.o
[ 22%] Building C object CMakeFiles/uvc.dir/src/ctrl-gen.c.o
[ 33%] Building C object CMakeFiles/uvc.dir/src/device.c.o
[ 44%] Building C object CMakeFiles/uvc.dir/src/diag.c.o
[ 55%] Building C object CMakeFiles/uvc.dir/src/frame.c.o
[ 66%] Building C object CMakeFiles/uvc.dir/src/init.c.o
[ 77%] Building C object CMakeFiles/uvc.dir/src/stream.c.o
/Users/Karim/Desktop/CochlearityPython/libuvc/src/stream.c:479:1: warning: non-void function does not return a value [-Wreturn-type]
}
^
/Users/Karim/Desktop/CochlearityPython/libuvc/src/stream.c:570:56: warning: format specifies type 'unsigned long' but the argument has type 'uint64_t'
(aka 'unsigned long long') [-Wformat]
printf("*** Correcting clock frequency to %lu\n", strmh->corrected_clock_freq );
~~~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
%llu
2 warnings generated.
[ 88%] Building C object CMakeFiles/uvc.dir/src/misc.c.o
[100%] Linking C shared library libuvc.dylib
[100%] Built target uvc
[100%] Built target uvc
Install the project...
-- Install configuration: "Release"
CMake Error at cmake_install.cmake:49 (file):
file cannot create directory: /usr/local/lib. Maybe need administrative
privileges.
The issue here seems to be missing permissions though
is pupil labs even supported for M1 chip Mac/Big Sur?
Hi, I recommend using the bundled application and processing the data in your other processes via the network API. The bundle is supported on Big Sur/M1 Macs since v3.1
Thanks! I have another quick question, is it possible to run from source using Big Sur and the M1 chip
Hi, as you mentioned, the side of the LED with the black component should be the negative(- anode) side.
okay, got it, thank you so much. And regarding how to "re-sandwich" the two boards together so that it is actually functional (and can get electricity through, obviously), how would that be done?
I tried installing all the dependencies like the tutorial asked, but some failed
Yeah, running from source is very tricky at the moment because the dependencies are currently very convoluted regarding m1-emulation and native support. Technically, it is possible, but our support in this regard is limited.
Is there a way to change the calibration background to a black screen instead of the white default?
Not builtin. It is possible by creating a custom plugin that subclasses https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/calibration_choreography/screen_marker_plugin.py
I fixed that issue by using sudo twice, once before and after the commands between &&
but there are several files that just can't be installed from what I can tell
for example: ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly
Got it. Do you know what property within the class changes the background color?
I think one has to call glClearColor. But I am not sure. I will have to check next week when I am back at work.
I will have to check. I think it will be a bit more complicated than overwriting a single attribute though.
Thank you for checking!
okay thanks!
Hey, I checked and unfortunately it is not possible to change the background via subclassing in the current implementation. As a workaround, I suggest printing the calibration marker, putting it on a dark surface, and using Single Marker Calibration in Physical mode. It might be necessary to leave an additional white border around the outer black ring for detection purposes. You can read more about single marker calibration here https://docs.pupil-labs.com/core/software/pupil-capture/#single-marker-calibration-choreography
and as a follow-up to this, can the applications run properly without having first installed all the dependencies? There should be some support for people who use M1 Macs
The dependencies are quite specific and some are complex to build (on all platforms). This is why we provide the bundle which works out-of-the-box. We support Big Sur users fully by providing the bundled application.
Of course, we try to support people running from source as much as possible. But in this case, numpy, one of our main dependencies, is not providing prebuilt wheels for your specific platform. If you wanted to continue, you would have to install numpy from source. Please understand that this is something I cannot help you with, as it exceeds my expertise.
I am not sure how you installed Python, but I recommend using the Python 3.9.1 macOS 64-bit Intel installer https://www.python.org/downloads/release/python-391/ I think this is the version that will most likely lead to success.
Also, I would like to note that running from source is very often not necessary. Your use case should work with the bundled application. If you think that my assessment is wrong, I would be happy to discuss details.
sure, thank you for your advice! let me give some more background to what I want to use pupil core for
I want to be able to track the eye movements of someone live as they are put in specific listening situations. Their pupil dilation and the azimuth of their eyes should be sent live, with as little latency as possible, to other processes that are running in parallel. The eye movements and angle will be used to determine which sounds are amplified
What is your latency constraint?
because we are dealing with sound and having that sound directly output live to the user via a headset, the latency, again, needs to be minimal
optimally it'd be 20 ms, as that's how long the mind usually takes to notice discrepancies between what it is seeing and what it is hearing
but we're looking to minimize it as much as possible
that's why I wanted to run from source
Since version 2.6 we have the possibility to run custom pupil detectors as plugins in the bundle. https://github.com/pupil-labs/pupil/releases/tag/v2.6
These run in the eye process in the same way as if you ran from modified source code.
You can write a custom "pupil detector" that does the necessary processing.
that's awesome
as for the rest
what do you recommend I do
for minimizing the latency
Do you have a proof-of-concept that shows that the current implementation requires lower latency?
If not, I suggest building a prototype in the simplest way possible (network API) and testing how much latency you get. Afterward, you can switch to implementing it as a plugin. If this is still not quick enough, one would have to evaluate what the largest contributor to the latency is.
Specifically, image transport from the camera to the computer + pupil detection already use some of your latency budget. I do not know the exact numbers for M1 macs though.
thank you! I will test this tomorrow after I finish with my finals
one more question
the device produced by the project should be one that can work entirely independently, which means that it has to work independently of the computer
or a wired full PC. We're thinking of using a mini computer in the device itself for the processing, which probably can't run the applications
does that mean that, at that stage of the project, we would need to start running it from source?
In this stage, I would recommend not using Capture anymore but creating a custom Python script that uses the necessary dependencies: - pyuvc for camera access https://github.com/pupil-labs/pyuvc - 2d pupil detector https://github.com/pupil-labs/pupil-detectors/ - pye3d https://github.com/pupil-labs/pye3d-detector/
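A hedged sketch of that minimal pipeline (grabbing eye frames with pyuvc and running the 2d detector on them); the exact device selection, frame modes, and result fields should be checked against the pyuvc and pupil-detectors documentation:

import uvc
from pupil_detectors import Detector2D

# pick the first connected camera; in practice, select the eye camera by name
devices = uvc.device_list()
cap = uvc.Capture(devices[0]["uid"])

detector = Detector2D()

while True:
    frame = cap.get_frame_robust()
    result = detector.detect(frame.gray)  # expects a grayscale uint8 image
    if result["confidence"] > 0.6:
        # ellipse center/axes are in eye-image pixel coordinates
        print(result["ellipse"]["center"], result["confidence"])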
thank you
Hello all,
I am currently learning the system and have a question regarding calibration. If I choose one of the calibration methods (e.g., screen marker, natural feature) and the participant needs to look at the calibration points on an external monitor, can they move their head towards the points while doing the calibration, or should the head stay directed at the center of the screen with the calibration points fixated only with the eyes (without head movement)?
Thank you in advance!
And, a follow-up question: should the calibration distance be the same as the stimulus distance? For example, when the stimulus (e.g., an object) is about 100 cm away, should the calibration (e.g., on an external monitor) then be done at the same distance?
Thank you!
Hi @user-fb397c. With the 5-point screen-based calibration, the participant should keep their head still whilst following the marker with their gaze. For the single screen-based marker, the participants should gaze at the marker whilst slowly moving their head (e.g. in a spiral pattern). The 5-point calibration can be useful when the experiment involves just looking at the screen. The single marker (either physical or screen-based) is a good option if you want to investigate eye movements that will require larger rotations, as it allows you to calibrate at those rotation angles and cover a large range of your field of view. In terms of distance, yes, it is best practice to calibrate at the viewing distance you will be assessing. Read the calibration docs here: https://docs.pupil-labs.com/core/software/pupil-capture/#calibration
Hi, is there a way to enhance detection of the printed surface markers? I noted that not all the markers pasted on the computer monitor were detected throughout the experiment. Thank you.
Thanks for sharing a picture of the setup via DM. Please increase the width of the outer white border to improve detection. See the picture below for the reference of the white border width (white area between gray cut lines and black border).
Could you share a picture of the setup with us?
@papr Noted with thanks !!
Is there a way to test a script to remotely control Core (that contains recording, annotations, etc.), without having the device plugged in?
The short answer is: not really. You can use videos recorded with Capture as video input by dragging them onto the world/eye windows. But they will generate data using the recorded timestamps, which will not be in sync with Capture's clock, i.e. the pupil data will not be in sync with your annotations
it's alright, hopefully in the near future
Please note, this is not a feature that we will work on in the near future. Capture's architecture is not designed for coordinated playback of recordings. For that use, we recommend using Pupil Player.
Ah ok, noted
Hello! How could I transfer marked AOIs or Surfaces from one recording to another? I have several similar trials and I worked with markers. I would like to transfer them from one video to another
You can define the surfaces in one recording, close Player, and copy the surface_definitions_v01 file to your other recordings.
thank you very much it worked perfectly
Hi, is there any way to salvage data if the recording wasn't stopped before Pupil Capture was closed (i.e. the program was quit without hitting R first)? If I cannibalize info files from another recording, Pupil Player will launch the directory but not actually display data.
The videos will likely not be recoverable. The remaining data can be salvaged. What data are you depending on in your experiment?
I got an error message about sensor permission. I already enabled all permissions for Pupil Mobile.
Please, anybody?
blinks, pupil diameter, and ideally fixations
Blinks can be calculated post hoc based on the recorded pupil confidence. The pupil data is recoverable. The fixations can be calculated post hoc based on the gaze data, which is also recoverable. Unfortunately, without scene video, there will be no visual context.
Regarding the fixations, do you need them to be placed in a visual context (scene video)?
You can share the pldata files from all affected recordings with [email removed] I can generate the missing timestamp files from them on Monday.
Awesome, thank you!
@user-7e9436 can you reverse the steps you took to separate the boards? The connector between the two boards should just be able to be plugged back in again to work.
That doesn't seem to be possible, as the connector that I pried apart needs to be stuck back onto the circuit board somehow and was not a "socket/plug" type connector. Here is a photo showing the components; the blue circled area is where it used to be connected: https://pasteboard.co/JTmD6NV.jpg
how does the calibration sample duration affect the calibration and the recorded data? does it have something to do with the speed of calibration?
This parameter is indeed related to the duration of the calibration. It is the number of frames in which the calibration marker needs to be detected before proceeding to the next marker location. There is a sweet spot for that value. If you set the value too small, the subjects will have trouble tracking the marker. The longer the duration takes, the more likely it is that the subject will not fixate the marker accurately.
any guidance on this question?
Looks like it is a pluggable connector. But you pulled the "male" connector off from the surface of the board. This will require re-soldering the male connector or starting fresh with a new camera.
I see, uh oh, I knew something like that had happened, but they were stuck together quite a bit. Would you happen to have a set of this camera already made that you can sell to me instead? That would make things so much easier, because trying to order another one and going through that whole process again might just end up with the same result.
If we had any hd-6000 cameras we would be happy to offer them, but we do not. The only thing we can offer is Pupil Core eye cameras - https://pupil-labs.com/products/core/accessories - but the connection geometry for this eye cam uses a triangular rail cross section vs DIY headset's circular rail cross section. Also cost for the 200hz Core eye cam might be outside of a diy project budget.
Okay, I will have to give it some consideration then; perhaps taking the camera to a local electronics shop to have it put together might be the best option. Thank you for your time and patience in answering all of my questions!
Can I specify the timestamps I want to export the fixation data from the recording via Player? How do I do that? Thank you
Yes! Use the trim marks in Player to set the exported range.
@wrp many thanks! So I can trim many parts and export those parts at one time?
In Pupil Player you set the trim marks and all plugins that export data will export for the range set. By default the trim marks are set to start and end of the recording
There are only 2 trim marks.
So if you want multiple segments you will need to set the trim marks and export for each segment
@wrp noted with thanks!!
Hello everyone, I have a question about latency. I have a unit that I bought 3 years ago. The refresh rate on the eye camera is 120 Hz. What I'm wondering about is the latency of the 3d model, in particular diameter_3d: the time lag between the moment you get the message and the time in the past when the pupil actually had that diameter.
If you are talking about the realtime delay, the response depends on your setup and may vary.
I had my monitor cycle from black to white in a sine wave at 1 Hz (i.e., a full period of 1 second). I simultaneously measured diameter_3d. By calculating the offset between the luminance and pupil curves, I measure the latency of the pupil signal. I found 530 ms in one subject, and 580 ms in another.
I suggest talking to @user-430fc1 regarding PLR measurements.
If you subtract the latency of my monitor, which I haven't measured but which is usually around 20-50 ms, the latency works out to around 500 ms.
Just to finish: physiologically, the latency should be around 300 ms, so that gives the tracker + model a delay of about 200 ms.
The way to calculate the eye tracking + transmission delay is to sync the clock of your receiver script with Pupil Capture and to subtract the data timestamp from the reception time. See the docs regarding the clocks used by Capture https://docs.pupil-labs.com/core/terminology/#timing
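For reference, here is a minimal Python sketch of that measurement (not an official recipe): it assumes Pupil Capture is running locally with Pupil Remote on its default port 50020, uses the documented "t" and "SUB_PORT" requests, and ignores the half-round-trip correction for brevity.
import time
import zmq
import msgpack

ctx = zmq.Context()

# Pupil Remote request socket
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")

# Estimate the clock offset: local clock minus Pupil clock
remote.send_string("t")
pupil_now = float(remote.recv_string())
offset = time.monotonic() - pupil_now

# Ask for the SUB port and subscribe to pupil data
remote.send_string("SUB_PORT")
sub_port = remote.recv_string()
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")

for _ in range(100):
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)            # requires msgpack >= 1.0 for str keys
    reception_time = time.monotonic() - offset  # reception time expressed in Pupil time
    delay = reception_time - datum["timestamp"]
    print(f"{topic.decode()}: delay {delay * 1000:.1f} ms")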
Do you find that a delay of 200 ms is plausible?
I cannot judge this value as it depends heavily on your setup and how you measure the delay. @user-430fc1 has been working on measuring the system's delay post-hoc for his PLR research. They will be able to give you an unbiased evaluation.
OK, thank you very much. I'll take a look at the sync procedure, and get in touch [email removed]
Hi, I'm trying to run Pupil Core from source on an ODROID N2. When I run Player, I get the following error message:
player - [INFO] numexpr.utils: NumExpr defaulting to 6 threads.
player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "/home/odroid/src/pupil/pupil_src/launchables/player.py", line 887, in player_drop
    window = glfw.create_window(w, h, "Pupil Player", None, None)
  File "/home/odroid/src/pupil_venv/lib/python3.8/site-packages/glfw/__init__.py", line 1180, in create_window
    return _glfw.glfwCreateWindow(width, height, _to_char_p(title),
  File "/home/odroid/src/pupil_venv/lib/python3.8/site-packages/glfw/__init__.py", line 632, in errcheck
    _reraise(exc[1], exc[2])
  File "/home/odroid/src/pupil_venv/lib/python3.8/site-packages/glfw/__init__.py", line 54, in _reraise
    raise exception.with_traceback(traceback)
  File "/home/odroid/src/pupil_venv/lib/python3.8/site-packages/glfw/__init__.py", line 611, in callback_wrapper
    return func(*args, **kwargs)
  File "/home/odroid/src/pupil_venv/lib/python3.8/site-packages/glfw/__init__.py", line 832, in _handle_glfw_errors
    raise GLFWError(message, error_code=error_code)
glfw.GLFWError: (65542) b'GLX: No GLXFBConfigs returned'
However, when I launch python and run create_window, it runs with no issue. So I don't think it is an OpenGL ES issue.
Has anyone tried running Pupil Core on a Raspberry Pi or ODROID?
What I eventually want to do is run a recording with Player on the ODROID and compare the performance. So I don't need a window to be opened.
Maybe one more word regarding your goal. Usually, people try to run Pupil Capture on the RPI as a mobile recording platform. But this is not what you are trying to do, correct? You are trying to run Pupil Player (without a window), correct? Please note that Player's primary purpose is the visualization and export of the recorded data. Without the former, it might be easier to implement the latter in a specialised script, without the complexity that comes from running a GUI application.
Running Pupil on RPI requires changes to the source code as the glfw.SCALE_TO_MONITOR window hint is not supported on RPI https://github.com/pupil-labs/pupil/blob/master/pupil_src/launchables/player.py#L885-L886
There are previous discussions on this topic here on Discord and in our Github issues https://github.com/pupil-labs/pupil/issues
Hi, we are having issues with the eye tracking glasses. When we connect the eye trackers to the computer, the left eye camera does not show anything. We have tried on three different computers and also tried 3 different eye trackers, but the issue remains. Does someone know how to fix this issue?
Please contact [email removed] in this regard.
Thank you, will do that.
Hi, just to clarify:
However, when I launch python and run create_window, it runs with no issue. Does this mean there is no warning being displayed?
Please note that Pupil raises all glfw errors (with one exception) as they usually cause unexpected issues down the line. You can ignore specific glfw errors by adding their error code (65542 in your case) here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gl_utils/utils.py#L375-L379
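Outside of Pupil's own code, the general idea of skipping one specific glfw error looks roughly like this. This is only a sketch; the error_code attribute is an assumption based on the pyGLFW version shown in your traceback, and the real mechanism in Pupil is the ignore-list linked above.
import glfw

GLX_NO_FBCONFIG = 65542  # the error code from your traceback

glfw.init()
try:
    window = glfw.create_window(1280, 720, "Pupil Player", None, None)
except glfw.GLFWError as err:
    # ignore only this specific error code; re-raise everything else
    if getattr(err, "error_code", None) == GLX_NO_FBCONFIG:
        window = None
    else:
        raise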
Yes, no warning was displayed when running Python in the terminal: glfw.init() then glfw.create_window(1280, 720, "Pupil Player", None, None)
rpi
Exactly. I'm trying to use Player to 1. profile the software on the RPI and 2. run the same dataset to see the impact of changes I made in the code, let's say in pye3d. So I don't need the visualisation; I need the CPU profile data and a log of eye gaze locations.
Out of personal curiosity, what is your motivation to run the software on a RPI/ODROID instead of a desktop computer with one of the supported operating systems? You do not need to respond if you are not ready to reveal this type of information about your project.
Hi, I have recently finished an experiment and need to export the raw data from the Pupil Core eye tracker for analysis. However, when I open the recordings in Pupil Player, the data visualiser at the bottom does not show anything, and the blink detector plugin does not seem to be working either (see screenshot). Do you know why this is? For context, I am only interested in the raw pupil position values (i.e. from the 3D model) and not gaze position (as it was not possible to run a calibration). I also need to know when the subject blinks. For the experiment I used the Pupil Remote plugin to start and stop the recording from a custom Python script and to send remote annotations.
Hey, is it possible that you synced Pupil time to Unix epoch during your experiment?
Sorry, not sure what you mean?
Did you use any type of time sync during the recording?
This is the code I used to set up the connection with the eye tracker. I followed the instruction I found on here https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
Ok, thanks, this confirms my initial hunch. time.time() timestamps are only accurate when using 64-bit floats. For technical reasons, the timeline visualisation only uses 32-bit floats, which have decreased precision. As a result, multiple data points are visualized at the same location in the timeline. This issue only affects the visualization. Your recorded timestamps remain accurate to their original precision.
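A quick numpy illustration of the precision loss (not Pupil code; just to show the magnitude at Unix-epoch timestamps):
import time
import numpy as np

t = time.time()                                # Unix time, currently ~1.6e9 seconds
print(np.float64(t) - np.float32(t))           # rounding error on the order of tens of seconds
print(np.float32(t) == np.float32(t + 0.004))  # samples 4 ms apart collapse to the same value -> True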
I am looking at the source code for the relevant sections right now.
Okay, thanks for that. So this means the timeline visualisation does not work for recordings made using pupil remote?
It does not work well for recordings synced to Unix time (time.time()), correct. This is independent of the usage of Pupil Remote.
Okay! So long as it is just superficial I can live with it. Thanks
What about the issue I am having with the blink detector? For some, but not all, of my recordings the blink detector plugin does not seem to be detecting blinks.
It says that no blinks were detected (when there are clearly blinks).
It looks like this recording includes timestamps from before the clock adjustment, i.e. there is a huge timestamp gap at the beginning of your recording. This causes the blink detector to use an incorrect filter width. As a result, no blinks can be detected.
How many recordings do you have made that show this issue?
The blink detector is very reliant on the pupil detection quality. The better the quality, the better it works. It is possible that the detection is bad for these specific recordings. If you want, you can share them with [email removed] and we can have a look. In the provided screenshot the detection looks good, though!
Absolutely, thanks. I will try to email you the recording now.
Does anyone know the power requirements for the Core?
Are you referring to electrical power?
yes
I will forward your question to our hardware team.
is there any update on this?
thanks
@papr I have just sent you an email to data@pupil-labs.com with the recording that has the problem with the blink detector. Thanks a lot for your help.
Great, thank you. I will have a look tomorrow and come back to you via email.
Great! Thanks! Please let me know if you can't download the file via the attached link.
Thank you very much! This is very helpful. I was also wondering if there is a way to make the world camera sit more solidly in the headset. According to the docs (https://docs.pupil-labs.com/core/hardware/#focus-world-camera) the focus of the camera included with the Core headset can be set, but in previous posts these cameras are described as fixed focal length cameras (which I see, as there is usually only one setting at which the camera is focused for any depth away from the camera). However, this setting requires me to rotate the camera out of its holder quite a bit, and I have noticed that the camera sits quite loosely in the fixture. Is this looseness expected, or is it possible that there is something going on with the headset?
Please contact [email removed] in this regard.
Thanks for investigating. How odd! I have identified two recordings (out of about 90, so not the end of the world, but still a loss of data/confounded data) that have this issue. Interestingly, both recordings were at the start of the experiment (i.e. the first recording made after opening Pupil Capture/starting my Python script). Do you think that could have something to do with it?
Yeah, that makes sense. The first time you start Capture it uses its original clock. Afterward, your script adjusts the clock and Capture keeps it adjusted until shutdown.
Ah, makes sense. Interesting that this did not happen to every recording at the start of the experiment though.
It is a race condition. If you add a time.sleep() between the clock adjustment and recording start, you can get rid of it.
Alternatively, you can skip the clock adjustment before the experiment and perform it post-hoc instead: https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
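For reference, a minimal sketch of the ordering being discussed, using the documented Pupil Remote commands; the port and recording name are just examples:
import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote address

# 1. adjust Pupil's clock to this script's clock
remote.send_string(f"T {time.time()}")
print(remote.recv_string())

# 2. give Capture's processes a moment to pick up the new clock
time.sleep(1.0)

# 3. only then start the recording
remote.send_string("R my_recording")
print(remote.recv_string())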
Is there any way to resolve it post hoc, or can I just not use the blink detector for those recordings? Moving forward, it might be worth adding a small recording at the start of the experiment (that I later delete) to reset the clock.
I can check tomorrow. Not sure how successful it will be.
Oh right... I sent that before reading your time.sleep() suggestion
I use time.sleep() in other parts of my code for pauses during the experiment. Does this mean it will affect the clock time? Wondering if I should take out time.sleep() and use core.wait() instead.
That is fine. Sending the T command is all about syncing the clocks, such that the applications (Capture and experiment) can record data independently of each other.
Cool. Thanks. I have added time.sleep(1) after setting up the clock as you suggested.
So, there are 24-30 data points with timestamps before the recording start. Sleeping for 1 second should be plenty.
Can you share the second recording as well? I can have a look at how much data was recorded before the clock adjustment kicked in. This can inform the required sleep duration.
Hi @papr Have you had a chance to look at the second recording? No worries if you haven't had a chance yet.
Sure! Do you mean the recording made after the one I sent you, or do you mean the other recording where the blink detection did not work?
The latter
I'll do that now.
Max 5V and 900mA, as is standard for USB 3.0 power. This is what I got as a preliminary response. Is that sufficient for your use case?
I have only one USB port on my laptop and I have 2 Pupil Cores. I am using a USB hub to connect both Cores. 5 cameras (2 world + 3 IR cams) are working, but the 4th IR cam is not. When I use a different PC that has multiple USB ports, the Cores work well. We think we don't have enough power over the USB hub.
Hello! Is it possible to know the visual field in degrees?
Do you mean the scene camera's field of view? See the Scene camera FOV section at https://pupil-labs.com/products/core/tech-specs/
Hi ~ my Pupil Core normally shows fixations while recording in the shade (left), but there aren't any fixations in the output videos recorded when walking out from the shade into the sun (right). Is there any way to solve this problem?
Hi, Player only shows high-confidence data and it is likely that the sunshine is generating a lot of IR reflections in your subject's eyes, degrading the pupil detection performance (an issue that many mobile eye trackers face). I suggest enabling the eye video overlay to check the pupil detection. If you feel that the detection is good enough for your purpose, you can decrease the Minimum data confidence threshold in Player's general settings to show lower confidence data.
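If you later work with the exported CSVs instead, the same kind of confidence filter can be applied there. A small sketch, assuming the standard export layout (exports/000/gaze_positions.csv with a confidence column) and an example threshold of 0.8:
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
high_conf = gaze[gaze["confidence"] >= 0.8]  # same idea as the Minimum data confidence slider
print(f"kept {len(high_conf)} of {len(gaze)} samples")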
thank you very much, I'll try this out !!
Hi! Thanks! I was talking about during an experiment. Is it possible to know the angle corresponding to the movement of the eye?
You can calculate the angle between gaze_point_3d values using the cosine distance. See https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L131-L134 for an implementation.
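A small sketch of the same idea (not the linked implementation itself), with made-up gaze_point_3d values for illustration:
import numpy as np

def angle_deg(v1, v2):
    # angle between two gaze_point_3d vectors, in degrees, via the cosine similarity
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angle_deg([10.0, 5.0, 400.0], [20.0, 8.0, 410.0]))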
It indeed looks like there is not enough power and/or bus capacity (data transfer bandwidth). We highly recommend using a dedicated USB port per headset, without intermediate devices such as USB hubs or passive USB extensions.
thank you
No not yet. I thought it might make sense to move that "work package" to your time zone in case I had any spontaneous questions/issues.
Thanks @papr. Much appreciated.
I sent you the filtered files via email. This is the code I used to correct the files https://nbviewer.jupyter.org/gist/papr/96f35bebcfea4e9b9552590868ca7f54
You can skip to the last line for instructions.
Fantastic! Thanks very much @papr.
Any suggestions on how to diagnose the occasional lost frame to lost seconds on the world camera only (appears as a grey screen in Capture and Player)?
Hi there - could anyone tell me the FOV of the 200Hz eye cameras?
Actually, I do not know if we have up-to-date numbers for that. Let me come back to you on Monday.
@papr Great, thanks so much!
Hello, running pupil_service from source and playing with window resizing, I ended up with the right-hand menu band too narrow to use the controls (namely the "loop" control). How can I adjust or restore the correct width?
Do you see the 3 vertical bars in the top left? You can use them to change the menu's width.
Oh! I was clicking on them and nothing happened, now I see I have to drag them. Thank you!
Hi all, I'm here from the pupil labs website. Is anyone using this tech for individual self-study of attention/focus/concentration or vigilance/cognitive fatigue?
Hello, I am a novice and I have a question about surfaces. In my experiment participants read a printed text (on a sheet of paper). There are four AprilTags placed in the corners of the sheet. But in Pupil Player there are a lot of frames where no markers are detected at all. For the rest of the frames only one marker (out of four) is detected. Are there any ways to create the surface for all frames?
It is likely that you are missing sufficient white border around the tags. But to be sure, could you please share a picture of the paper?
Hi, what causes some fixation ids to be missing from the data exported from Pupil Player? Is it due to the maximum dispersion and duration settings under Fixation Detector? I tried setting them at max, which seems to work, but the fixation ids change. Or is it that I must scroll right to the end before exporting? But shouldn't the trim marks suffice? Your advice is much appreciated. Thank you
Any issue with changing the setting to this? Thank you.
Pupil Core Eye Cam1 (120 Hz)
FOVinDeg(resolution=(320, 240), horizontal=51, vertical=39, diagonal_main=61, diagonal_off=61)
FOVinDeg(resolution=(640, 480), horizontal=51, vertical=39, diagonal_main=62, diagonal_off=62)
Pupil Core Eye Cam2 (200 Hz)
FOVinDeg(resolution=(192, 192), horizontal=37, vertical=37, diagonal_main=51, diagonal_off=51)
FOVinDeg(resolution=(400, 400), horizontal=39, vertical=39, diagonal_main=53, diagonal_off=53)
Pupil Core Eye Cam3 (200 Hz, HTC Vive add-on)
FOVinDeg(resolution=(192, 192), horizontal=69, vertical=69, diagonal_main=88, diagonal_off=88)
FOVinDeg(resolution=(400, 400), horizontal=71, vertical=71, diagonal_main=91, diagonal_off=91)
Thanks so much!
What do you mean the fixation id is missing? Does the exported fixations.csv have data but no id entries?
Changing the parameters changes the number of detected fixations, and therefore the ids as well. Read more about them here https://docs.pupil-labs.com/core/best-practices/#fixation-filter-thresholds
Thanks @papr. What I meant was that some fixations were missing/skipped. Is it something about the trim marks? It seems like if I don't scroll right to the end of the trim mark before exporting, only limited fixations get exported.
Yes, you are right. But I also want to analyse the recordings with such a technical error. Is it possible somehow? Also, please add information about the sufficient white border to the manual.
Unfortunately, there is not much one can do regarding marker detection if they are missing the white border.
I will add the requested note to the docs.
Fixations take a while to be detected. There is a progress indicator around the fixation detector's menu icon. You should wait until the detection has completed before exporting.
Thanks. Will take note.
Hello. I'm also fairly new to the Pupil Core glasses. I noticed two different colours for pupil detection in Pupil Player, one red and one blue. The blue one seems to be more accurate. Is there a way to disregard the faulty detection and use the blue one instead? The player used the gaze data from the recording. See screenshot (is this just a visual artefact, or does it affect the data (x/y coordinates)?). Thank you very much.
Please see this message for reference https://discord.com/channels/285728493612957698/285728493612957698/766385521291034636
Hi everyone! I'm new to pupil labs. Is there a way to analyze saccades (number, length, velocity) with pupil labs?
Hi, the software does not offer built-in saccade analysis, but there are community projects that have that implemented, e.g. by @user-2be752 https://github.com/teresa-canasbajo/bdd-driveratt/tree/master/eye_tracking/preprocessing
Hi thanks, I'll check it out
Thank you. I will try to use the 2D identification then.
The 2D and 3D detectors run at the same time. You can improve the 3D model by rolling the eyes, like in this video: https://docs.pupil-labs.com/core/#_3-check-pupil-detection
Hello, I'm wondering what factors can lead to variability in diameter_3d. I'm guessing camera distance... Are there any others?
Actually, camera distance should not be one of them (in theory). The 3d eye model estimates the position of the eyes in relation to the eye camera. Model refits can cause sudden jumps in diameter values. Slippage without model refits can cause incorrect values, too. There is a gaze-angle dependency due to the distortion of the cornea. pye3d is able to correct that at the population level.
The correct model estimation also depends on knowing the correct focal length of the eye camera
Just to show what I mean. Far left is a NeurOptics pupillometer
Were these measured at the same time?
20 trials each, alternating, with spectrally matched 1 s pulses of white light
Usually, when analysing pupil responses one would subtract trial-baseline/pre-stimulus values from the trial values
Mathôt, S., Fabius, J., Van Heusden, E., & Van der Stigchel, S. (2018). Safe and sensible preprocessing and baseline correction of pupil-size data. Behavior Research Methods, 50(1), 94–106. https://doi.org/10.3758/s13428-017-1007-2
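A minimal sketch of such a subtractive baseline correction, assuming you already have trial-wise diameter_3d samples and a defined pre-stimulus window (all values below are made up):
import numpy as np

def baseline_correct(trial, baseline_slice):
    # subtract the mean pre-stimulus diameter from every sample in the trial
    return trial - np.nanmean(trial[baseline_slice])

trial = np.array([3.1, 3.2, 3.1, 3.6, 3.9, 3.8])  # diameter_3d in mm
corrected = baseline_correct(trial, slice(0, 3))   # first 3 samples = pre-stimulus baseline
print(corrected)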
I guess there must have been some slippage of the camera, as these data were collected in a dark room, so some of the lower estimates don't seem too consistent.
Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time
See also the Spontaneous fluctuations in pupil size section in Mathôt, S. (2018). Pupillometry: Psychology, Physiology, and Function. Journal of Cognition, 1(1), 1–23. https://doi.org/10.5334/joc.18
This review says that a typical pupil size range is 2-8mm. Values smaller than that indicate an inaccurate 3d eye model, yes. Especially if the data is as smooth as in your graph.
The smoothness indicates that the model is at least stable. i.e. the change in pupil size might be scaled incorrectly, but the shape of the curve should be correct nonetheless.
Yes of course, it looks very stable and repeatable. I was thinking in terms of comparability with other systems, but I suppose that will always be wishful thinking given the variability between hardware, measurement principles, etc.
Yeah, the only way to truly compare these is by recording them in parallel. And it should be done, even though there is no real ground truth.
Thanks!
Hi, I have a long Pupil Mobile recording. I could stop the Pupil Mobile recording properly and the folder structure seems fine. However, Pupil Player stops during the initial loading. The player.log contains the following messages:
2021-03-30 12:59:59,839 - MainProcess - [INFO] os_utils: Disabled idle sleep.
2021-03-30 13:00:00,187 - player - [INFO] numexpr.utils: Note: NumExpr detected 16 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2021-03-30 13:00:00,187 - player - [INFO] numexpr.utils: NumExpr defaulting to 8 threads.
2021-03-30 13:00:10,420 - player - [INFO] launchables.player: Starting new session with '/Volumes/PupilMobileData/20210324140539137'
2021-03-30 13:00:10,438 - player - [INFO] pupil_recording.update.new_style: Checking for world-less recording...
2021-03-30 13:01:48,044 - player - [ERROR] libav.mjpeg: error dc
2021-03-30 13:01:48,044 - player - [ERROR] libav.mjpeg: error y=52 x=2
2021-03-30 13:01:51,671 - player - [ERROR] libav.mjpeg: error dc
2021-03-30 13:01:51,672 - player - [ERROR] libav.mjpeg: error y=46 x=1
2021-03-30 13:02:06,378 - player - [ERROR] libav.mjpeg: error dc
2021-03-30 13:02:06,379 - player - [ERROR] libav.mjpeg: error y=75 x=1
2021-03-30 13:06:35,654 - player - [ERROR] libav.mp2: Header missing
2021-03-30 13:06:36,118 - MainProcess - [INFO] os_utils: Re-enabled idle sleep.
Any help with this issue? Thanks!
@papr Hi! Re: "Freezing" the model during posthoc pupil detection and then pressing "redetect", are the outcomes of the 3d model/gaze essentially the same between v2.6 and v3.1 because the model is 'frozen' and is not continuously updated based on past history? If we want very precise fixation information, should we always freeze or rely on v2.6? Thanks!
The outcome is not the same, as v3 ships with pye3d, which is able to correct for corneal distortion. The 3d detector in v2 and earlier does not have that feature. The question of freezing is not related to the software version, but whether your recording includes slippage or not. I suggest freezing if there is no slippage. If there is slippage, I recommend not freezing the model.
Hello, can I get any help to solve an issue related to an Intel RealSense D435 camera? The main issue is a 'free(): invalid pointer' error when I try to activate the RealSense plugin (realsense2_backend.py) in the Pupil Labs application. Thanks in advance.
This looks like an issue that is related to the librealsense software dependency that the backend is using. I do not think that we will be able to help with that.
@papr Okay, thanks for the reply.
Returning to what you said about model re-fits... I think I have observed these jumps in diameter_3d in my data, and occasionally the 'jumps' survive a <.95 confidence filter. For a trial-based experiment, would freezing the model before a period of interest make sense, and then unfreezing during rest periods?
would freezing the model before a period of interest make sense, and then unfreezing during rest periods? 100%. I would even ask the users to roll their eyes during the resting period to get good eye model fitting data.
Do these look like the kind of 'jumps' you were referring to?
No, this looks more like false-positive high-confidence 2d data. If you follow https://doi.org/10.3758/s13428-018-1075-y you should be able to get rid of that noise.
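Not their reference code, but a rough numpy sketch of the dilation-speed outlier filter described in that paper; the MAD multiplier n is a tuning constant you would need to choose for your own data:
import numpy as np

def dilation_speed_outliers(t, d, n=16):
    # boolean mask of samples whose dilation speed exceeds median + n * MAD
    t, d = np.asarray(t, float), np.asarray(d, float)
    back = np.abs(np.diff(d)) / np.diff(t)   # speed with respect to the previous sample
    speed = np.full(d.shape, np.nan)
    speed[1:] = back
    speed[:-1] = np.fmax(speed[:-1], back)   # also consider the speed towards the next sample
    mad = np.nanmedian(np.abs(speed - np.nanmedian(speed)))
    return speed > np.nanmedian(speed) + n * mad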
Hello! When I want to jump from one fixation to another: Is there a possible way to use the keyboard instead of clicking "next fixation"?
The hotkeys are f for the next fixation and F (shift + f) for the previous fixation.
Thank you! I couldn't find the previous-fixation hotkey.
When I work with the annotation plugin in Player instead of the surface tracker in order to mark areas of interest: is there any possibility to see the marked fixations in the display bar?
Currently, this is not implemented.
I press the hotkey and it seems to work fine, but I can't see the fixation I've marked.
Meanwhile, using the surface tracker, the display bar shows when this area was focused.
And there is no possibility to correct it?
Unfortunately, this is not implemented either, at the moment.
Hello, I am using pupil_service, running it from source, and reading a recorded file. I had a problem with the loop option today. At the beginning the loop didn't work, although the button was colored, so I dragged the recorded file into the eye0 window multiple times, until I realized that this makes a new camera icon and menu appear, one for every newly dragged file (even if it is the same file). Why are there many if only the last dragged file is actually played/playable? And if that is not the case, how can I tell which one is being played, and how do I direct pupil_service to play the desired one? By the way, the loop option seems to be working now.
The multiple video source instances are indeed an issue with Pupil Service. It works as expected in Pupil Capture. We will look into it. The videos play back correctly for me, though. I used an existing eye video recording made with Pupil Capture.
Hello, in Pupil Capture is there a way to save single-marker calibrations for post-hoc analysis? Also, if calibrating multiple times, are the previous calibrations overridden? And if so, is there a way to access previous calibrations that are optimal?
Hi @user-d90133. Indeed you can use single markers for post-hoc calibration. If you record multiple calibration choreographies, you can also validate and choose the most optimal. You can read more about that in the docs: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection. The videos on that page show you how to perform a basic post-hoc calibration, transfer calibrations between recordings, and validate calibrations. I suggest that you try to replicate the videos to fully understand the processes.