Hey, can I get any examples for using the MessagePack API? I want to use the surface tracker, the world view camera module, calibration and recording, and the export function.
See this for starting and stopping calibration and recordings: https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
See these for receiving surface tracker and world view data: https://github.com/pupil-labs/pupil-helpers/tree/master/python
Exports cannot be triggered remotely since Pupil Player does not expose a Network API.
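If it helps, here is a minimal, hedged sketch of driving calibration and recordings over Pupil Remote with pyzmq, assuming Pupil Capture runs on the same machine with Pupil Remote on its default port 50020:
import zmq

# Minimal sketch: Pupil Remote is a REQ/REP interface; each command is acknowledged.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # default Pupil Remote port

def send_command(cmd):
    pupil_remote.send_string(cmd)
    return pupil_remote.recv_string()  # wait for the acknowledgement

print(send_command("C"))               # start calibration
print(send_command("c"))               # stop calibration
print(send_command("R my_recording"))  # start a recording with a session name
print(send_command("r"))               # stop the recording
Surface tracker and world view data are received via the SUB port; the pupil-helpers scripts linked above show how to subscribe and decode the msgpack payloads.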
auto exposure window, if I had to guess
Thank you! A second question, since this was developed for the eye cameras, are the thresholds and AE windows that are hardcoded specific to the eye cameras? If yes, would it be possible to provide some guidance on how these values were selected?
Hi, I'm working on projecting the pupil gaze data onto video captured by a second camera. I'm working with the raw gaze data in the .pldata files and wanted to know how I might use gaze_point_3d and other data from the gaze and fixation files to achieve the same projection as seen when you load the data in Pupil Player.
Do you know how to read the raw data already?
Do I understand correctly that your gaze data was not calibrated to this second camera, but to the primary Core headset scene camera?
yes it's been calibrated to the scene camera and we're translating the data to the second camera
ok, thanks. I need one more clarification. Do you know how to do the necessary translation/transformation already? Or is this what you would like to know?
We know the transformation. I guess what I'd really like to know is which data from the files I need and how they are processed to make the projection.
The code implementing our camera (un)projections and the default camera intrinsics can be found here https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py
ok, great, that I can help you with 🙂 Check out our coordinate system reference: https://docs.pupil-labs.com/core/terminology/#coordinate-system
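In case a concrete starting point helps, here is a rough, hedged sketch (not Player's exact implementation) of projecting a gaze_point_3d from the scene camera coordinate system into a second camera with OpenCV. The rotation/translation, intrinsics, and distortion values below are placeholders for the ones you already know:
import numpy as np
import cv2

gaze_point_3d = np.array([[12.0, -5.0, 350.0]])  # example gaze point in scene camera coords (mm)
R_scene_to_second = np.eye(3)                    # your known rotation between the cameras
t_scene_to_second = np.zeros(3)                  # your known translation (mm)
K_second = np.array([[800.0, 0.0, 640.0],
                     [0.0, 800.0, 360.0],
                     [0.0, 0.0, 1.0]])           # second camera intrinsics (placeholder)
dist_second = np.zeros(5)                        # second camera distortion coefficients

rvec, _ = cv2.Rodrigues(R_scene_to_second)
pixel, _ = cv2.projectPoints(gaze_point_3d, rvec, t_scene_to_second, K_second, dist_second)
print(pixel.reshape(-1, 2))                      # gaze location in second-camera pixel coordinates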
yes, we've already read the data using the file_methods.py functions, this should be enough to get started with the projection. thank you!
Hi @papr, the result is as shown in the screenshot.
Please check the ...\site-packages\uvc\ folder for a file called turbojpeg.dll and copy it to the pupil_external folder.
Thank you, now I know what is actually missing. You have pyuvc installed already, correct?
I still don't know how to fix this problem.
I think I don't have UVC installed, I just configured the environment according to Pupil Invisible Monitor's tutorial.
@user-a98526 ok, no problem, this is the missing file. It is also part of the Pupil Invisible bundle.
Yes it works, thanks for your help.
Hi @papr, another question: how do I get IMU data in real time? I modified SENSOR_TYPES, but the following problem occurred:
Traceback (most recent call last):
  File "E:/pupil/pupil_invisible/pupil-invisible-monitor/src/invisible_data_get.py", line 102, in <module>
    main()  # Execute example
  File "E:/pupil/pupil_invisible/pupil-invisible-monitor/src/invisible_data_get.py", line 50, in main
    for data in sensor.fetch_data():
  File "E:\Anaconda\envs\pupil_invisible\lib\site-packages\ndsi\sensor.py", line 308, in fetch_data
    value = self.formatter.decode_msg(data_msg=data_msg)
  File "E:\Anaconda\envs\pupil_invisible\lib\site-packages\ndsi\formatter.py", line 308, in decode_msg
    return IMUValue(content)
TypeError: __new__() takes 8 positional arguments but 18 were given
Could you please specify the code that you are running, including your changes? You can share it easily using https://gist.github.com
This is the Gist: https://gist.github.com/yb525533341/a55d79d5f98253f7ce9d20757c479a81
Thanks. The changes should work. I will give it a try and get back to you once I have been able to reproduce the issue (probably tomorrow). Could you let me know which Python and ndsi versions you are running?
python version=3.6.12 ndsi version=1.3 Thank you for your patient help.
Looks like the application's requirements had not been updated for a while. I just updated them. In your repository copy (the pupil-invisible-monitor folder) you can do:
git pull
python -m pip install .
Actually, you should upgrade to pyndsi 1.4. It looks like your issue was fixed already in this version https://github.com/pupil-labs/pyndsi/pull/57
@papr and @user-a98526 python=3.6.12 with ndsi=v1.4 works for me (just tested)
ndsi=v1.4 works for me, thanks for the help of @papr and @nmt.
Hi @papr and @nmt, sorry to disturb you again. I am going to use the real-time images and gaze points obtained by Pupil Invisible for intention object recognition. I used YOLOv3 for object detection after obtaining world_image, and it took about 0.8 s per frame. But this causes a problem: after adding the object detection, the program gets stuck in the part of the function that receives images, and no gaze information is obtained. This is my gist; to simplify, I replaced the object detection step with a 0.8 second delay. https://gist.github.com/yb525533341/4f86123641a9b1aa18cc0c88c892abd6
So to clarify, if you remove the time.sleep(0.8) part, it works as expected. If you add it, the program gets stuck? Can you determine in which line?
I guess that pupil_invisible data should be in the form of a queue, and the data release process is slowed down with a delay of 0.8s.
Yes, if I delete the 0.8s delay (or YOLO detection program), it will work normally. The code is on line 55.
Apologies, I meant: where/in which line does the program get stuck if you add line 55 (time.sleep(0.8))?
Also, I suggest moving that to line 64, right before the visualization. I would recommend avoiding any heavy processing in the fetch_data() loop.
Sorry, in fact, the code is not stuck, but falls into a loop at lines 52-55.
Ah, ok, that makes sense. The fetch_data() loop returns data as long as the sensor has some available. If the loop iteration is slower than new data arrives, it will continue. Moving the YOLO/sleep out of there will solve your issue.
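For illustration, a minimal, self-contained sketch of that pattern; fetch_data() and heavy_processing() below are stand-ins for the pyndsi sensor call and the YOLO step from the gist, not the real API:
import time

def fetch_data():
    # Stand-in for sensor.fetch_data(); pretend each call yields one new datum.
    return [time.time()]

def heavy_processing(datum):
    time.sleep(0.8)  # placeholder for the YOLOv3 detection
    return datum

latest = None
for _ in range(5):               # main application loop
    for datum in fetch_data():   # drain the sensor quickly, only remember the newest datum
        latest = datum
    if latest is not None:
        heavy_processing(latest) # slow work happens outside of the fetch loop
        latest = None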
This is fascinating, and it does work. By the way, is there any other way of judging whether there is a wearer?
Not at the moment.
Well, I have a question. I wanted to calibrate the device following this link: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-data-and-post-hoc-detection. However, when I follow the steps I do not see the option "Vis Eye Video Overlay", but apparently it should be there.
It is called Eye Overlay, you can turn it on in the Plugin Manager
It is in the version pupil player v3.1.16
Ok, but then I don't see the button for "move overlay" etc., and if I go to the next step and do an offline pupil detection, it is post hoc. Is this because I also calibrated on the phone in the app?
Also, more of the functions shown in the video are not the same as in my Pupil Player.
The overlay is independent of the post hoc detection and calibration. You should be able to move the overlay by dragging it
OK, I understand that, but is the post hoc calibration then the same as the offline calibration? Because I don't have that function.
post-hoc == offline; real-time == online
Yes, it is the same
OK, then maybe I need to try before asking! Thanks, and I will return with questions.
I'm trying to visualize my pupil diameter recording in each of my experiment's trials, to have the timestamps on a primary x axis and the annotation labels on a secondary x axis. I have written such a script:
`if 'TRI' in trial_label:
    fig, ax1 = plt.subplots()
    ax2 = ax1.twiny()
    fig.set_figheight(5)
    fig.set_figwidth(25)
    ax1.yaxis.grid()
    plt.xticks(ticks, labels)
    plt.plot(pupil_data_in_trial_eye0['pupil_timestamp'].loc[pupil_data_in_trial_eye0['trial'] == trial_label], pupil_data_in_trial_eye0['diameter_3d'].loc[pupil_data_in_trial_eye0['trial'] == trial_label])
    plt.plot(pupil_data_in_trial_eye1['pupil_timestamp'].loc[pupil_data_in_trial_eye1['trial'] == trial_label], pupil_data_in_trial_eye1['diameter_3d'].loc[pupil_data_in_trial_eye1['trial'] == trial_label])
    plt.legend(['eye0', 'eye1'])
    ax1.set_xlabel('Timestamps [s]')
    ax1.set_ylabel('Diameter [mm]')
    plt.title('Pupil Diameter in ' + str(label))
    plt.grid(b=True)`
and I get my trial visualization as shown, in which the primary x axis timestamps are not correct.
I have also tried plotting my main data on ax1 and the ticks and labels on ax2, and also tried separating ticks and labels like
ax2.set_xticks(ticks)
ax2.set_xticklabels(labels)
but it still doesn't work.
You are mixing pyplot and Axes apis. I suggest sticking to the Axes api. This allows for more explicit control over which object you are manipulating.
- plt.xticks(ticks, labels)
+ ax2.set_xticks(ticks)
+ ax2.set_xticklabels(labels)
- plt.plot(...)
+ ax1.plot(...)
The issue here is that ax1 and ax2 might not share the same scale. I would use https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.axes.Axes.axvline.html?highlight=axes%20vline to visualize the labels directly in ax1 instead of using ax2. See the sketch below.
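For example, something along these lines (a minimal sketch; ticks and labels stand for the event timestamps and annotation labels from your script):
import matplotlib.pyplot as plt

ticks = [10.2, 14.7, 19.1]                    # example event timestamps
labels = ["stim_on", "response", "stim_off"]  # example annotation labels

fig, ax1 = plt.subplots(figsize=(25, 5))
# ax1.plot(timestamps_eye0, diameter_eye0)    # your pupil diameter traces go here
for t, label in zip(ticks, labels):
    ax1.axvline(t, color="gray", linestyle="--", linewidth=1)   # one vertical line per event
    ax1.annotate(label, xy=(t, 1.0), xycoords=("data", "axes fraction"),
                 rotation=90, va="top", ha="right", fontsize=8)
ax1.set_xlabel("Timestamps [s]")
ax1.set_ylabel("Diameter [mm]")
plt.show()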
So.. suggestions are most welcome!
This would, as I had also tried before, correct the ax1 timestamps, but would draw all the trial annotations in every single trial:
This looks like there is an issue with the data selection then. Please make sure that you are only drawing lines for the events that you actually want to draw.
Hello, the Eye Overlay plugin keep crashing player when I try to enable it. I'm using the latest version of the software. Any ideas what's going on?
This is a Windows-only issue. The release was updated with a fix on Monday. Please use v3.2-16 instead of v3.2-15
Hello, this might be a naive question. I'm trying to find the gaze point in the world view video. After doing the experiment, I downloaded the raw video data and got a bunch of files. I think gaze_positions.csv is the one I should look at, but there are a few columns that I don't quite understand. I am thinking of using norm_pos_x, norm_pos_y or gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z to find the gaze point. Which one should I use? Or am I supposed to use data from different files? Thank you!
These refer to the same position in different coordinate systems. Read more about them here https://docs.pupil-labs.com/core/terminology/#coordinate-system See this tutorial on how to render gaze into scene video frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
Thank you so much for your support! There's one thing about the data I am quite confused about. On the website, it says the normalized data should be in the range 0 to 1, but I see a lot of negative values. Why is that?
The 0-1 range refers to within-image coordinates. Negative values or values larger than 1 refer to coordinates outside of the image. This usually happens due to low quality data being mapped inaccurately. You can see that the confidence is quite low for these. I suggest filtering your data with a confidence threshold of at least 0.6 (default Player minimum data confidence)
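A minimal sketch of such filtering with pandas (the path is an example; adjust it to your export folder):
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")       # adjust to your export path
gaze_high_conf = gaze[gaze["confidence"] >= 0.6]           # keep reasonably confident samples only
print(f"kept {len(gaze_high_conf)} of {len(gaze)} samples")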
I also have several data points with confidence exactly 1 that have negative coordinates. Is this because of poor calibration, or something else? What should I do with these values? I think it doesn't matter anyway, since they will end up at 0 in frame image coordinates.
Since there is no visual context for them, I would drop them, as you suggested. A poor calibration is an explanation for such values.
OK, thanks a lot for your help. There's one more thing I want to ask about calibration. In the best practices, it says I should do a validation: https://docs.pupil-labs.com/core/best-practices/#validate-your-calibration. How much accuracy error would be considered high, e.g. 5%, 10%, 15%, ...? And if the error is high, I think I should do the validation again instead of redoing the whole calibration process as in the recommendation "you need to perform another calibration", right? Or maybe I misunderstand this sentence. Finally, does "use different points for validation than you used for calibration" mean that if I initially use the screen marker to calibrate, I have to use "Single Marker" or "Natural Features" for the validation?
Feel free to share an example recording of you doing the calibration to get specific feedback. Please share the data with [email removed] Accuracy is usually calculated in degrees https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy If the calibration error is high, the validation error will be, too. The validation does not change your calibration, though. If you have insufficient accuracy, please recalibrate. The minimum recommended accuracy depends on your experiment.
Screen marker calibration already uses different locations for the validation.
OK, I understand. Thank you so much!
In https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py, the function send_trigger is meant only for the purpose of sending annotations, right?
Technically, it can be used for any kind of Pupil Capture compatible message. Please note that custom topics will require custom receivers in order for the data to be stored during a recording.
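For reference, a hedged sketch of sending a single annotation, following the pattern of the linked helper script (ports and the label are examples; the Annotation plugin needs to be running in Capture for these to be saved with a recording):
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")   # Pupil Remote default port

pupil_remote.send_string("PUB_PORT")            # ask for the IPC backbone PUB port
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)                                 # give the PUB socket time to connect

pupil_remote.send_string("t")                   # query current Pupil time for the timestamp
pupil_time = float(pupil_remote.recv_string())

annotation = {"topic": "annotation", "label": "trial_start",
              "timestamp": pupil_time, "duration": 0.0}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))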
Is it possible to use blink_detection.py or saccade_detection.py in my own script? Modules like pyglui or plugin are not recognized.
Saccade detection is not implemented. You can either extract the functional part of the plugin into an external script or install the source dependencies https://github.com/pupil-labs/pupil/#installing-dependencies
Thank you. As I already have working Pupil software on my system, should I go through all the steps on this page? And it would work with Python 3.8, right?
You only need to install those dependencies that you need for running the code. You can run the bundled software independently of your source dependency.
I installed pyglui manually and the concerning error was resolved. Can you tell me how to find out what dependencies I need for 'plugin'?
The simplest way is to try importing it and check for ImportErrors. So trial and error, basically. The code base is unfortunately a bit convoluted since it has grown a lot over time.
Is there such a command available also for plugin, or a link explaining about it?
python -m pip install git+https://github.com/pupil-labs/pyglui
No, as plugins were not designed to run standalone. But you can try running python -m pip install -r requirements.txt
You can find this file in the top-level directory of the pupil source code.
Additionally, you might want to run this code before importing the plugin:
import sys
# TODO: adjust the path below
sys.path.append("<path to pupil repo clone>/pupil_src/shared_modules/")
This will make other plugin code available for import
Is there a way to estimate camera intrinsics for custom eye cameras? I tried starting the plugin as an eye plugin, but of course it didn't work. Is it something that can be guessed from the camera specs? Thanks!
You can select the custom eye cam as a scene camera and use the default plugin. If your camera has an IR filter, it is possible that the calibration pattern is not visible though.
Of course I had not thought about that super simple solution 🤦 Thanks!
Does the calibration pattern need to be placed at the same distance from the camera as the eye? At the focal length? Would it be better to print the calibration pattern at an adapted scale and place it under the camera?
You can show the pattern in window mode and resize the window. This will scale the pattern down. Alternatively, printing a scaled-down version should work, too.
What does the base_data column in blinks.csv represent?
Blinks are based on a series of pupil data points. The base_data field refers to which of these are affected by the blink.
Do you suggest rejecting the blinks using their timestamps or their frames?
I would (always) recommend time-based over frame-index-based processing. The cameras are synced in time, not by their frame rate. Frame indices are therefore only valid within a specific camera stream and one has to take extra care that indices from different cameras are not being mixed up
Great, thanksss
The confidence column in blinks.csv is the confidence with which the blink is detected, right? And not the confidence with which the pupil diameter is detected? I guess I got confused because of the blink confidence values in my recording, which do happen to be higher than the blink confidence threshold.
blink confidence is the mean of the absolute filter response during the blink event, clamped at 1. https://github.com/pupil-labs/pupil/blob/eb8c2324f3fd558858ce33f3816972d93e02fcc6/pupil_src/shared_modules/blink_detection.py#L418-L422
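In other words, roughly the following (paraphrased, not the verbatim source code):
import numpy as np

filter_response = np.array([0.4, -0.9, 1.3, 0.7])                 # example filter activity during a blink
blink_confidence = min(float(np.abs(filter_response).mean()), 1.0)
print(blink_confidence)                                           # mean absolute response, clamped at 1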
I'm trying to record an experiment using PsychoPy while remotely controlling Pupil Core.
The PsychoPy coder complains about the line:
pub_socket.connect("tcp://127.0.0.1:{}".format(pub_port))
The syntax checker doesn't accept this line and gives out the error:/* Syntax Error: Fix Python code */
When I remove this part: .format(pub_port), the error resolves, but obviously the remote connection isn't established.
Any idea what's wrong?
I have never seen this issue before. Which python version are you using?
Python 3.8 (64 bit)
I changed the line to pub_socket.connect("tcp://127.0.0.1:" + str(pub_port)) and it works now.
That line works for me when using Python 3.8. I feel like there might be a hidden character somewhere that causes the syntax error.
But I am happy to see that you were able to work around the issue.
I'll see if I can find where the error comes from.
I am rejecting blinks from my pupil size recordings like this:
`data_high_conf_df['blink'] = None
blink_id = blinks_pd_frame['id']

for id in blink_id:
    this_blink_start = blinks_pd_frame['start_timestamp'].loc[blinks_pd_frame['id'] == id]
    this_blink_start = this_blink_start.iat[-1]
    this_blink_end = blinks_pd_frame['end_timestamp'].loc[blinks_pd_frame['id'] == id]
    this_blink_end = this_blink_end.iat[-1]
    blink_boolean_mask = data_high_conf_df['pupil_timestamp'].between(this_blink_start, this_blink_end)
    data_high_conf_df['blink'].loc[blink_boolean_mask] = 'yes'
    blink_mask_not_available = data_high_conf_df['blink'].isna()
    data_high_conf_df = data_high_conf_df.loc[blink_mask_not_available]`
and it works fine. I just wanted to make sure that it also makes sense theoretically?
Hey,
Yes, generating a boolean mask for the pupil data in order to remove blinks makes sense.
I would like to add some suggestions that should help simplify the code:
- use https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.itertuples.html#pandas.DataFrame.itertuples instead of iterating over the ids and making .loc[df.id == id] subselections every time. This also avoids the use of .iat[].
- use the bool type for the data_high_conf_df['blink'] column. You can do that by assigning False initially, and True instead of 'yes'. Then you can use this line to filter:
data_high_conf_df = data_high_conf_df.loc[~data_high_conf_df.blink]
(~ is the boolean invert operator; https://docs.python.org/3/library/operator.html#mapping-operators-to-functions) See the sketch below for both suggestions combined.
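Putting those suggestions together, a hedged sketch with tiny example data (column names follow your snippet):
import pandas as pd

data_high_conf_df = pd.DataFrame({"pupil_timestamp": [1.0, 1.5, 2.0, 2.5],
                                  "diameter_3d": [3.1, 3.2, 0.5, 3.0]})
blinks_pd_frame = pd.DataFrame({"id": [1], "start_timestamp": [1.9], "end_timestamp": [2.2]})

data_high_conf_df["blink"] = False
for blink in blinks_pd_frame.itertuples():
    in_blink = data_high_conf_df["pupil_timestamp"].between(blink.start_timestamp,
                                                            blink.end_timestamp)
    data_high_conf_df.loc[in_blink, "blink"] = True            # mark samples inside any blink

data_high_conf_df = data_high_conf_df.loc[~data_high_conf_df.blink]   # drop blink samples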
It makes sense that moving the last line out of the loop does not change anything. With each loop iteration you are increasing your blink_mask_not_available. In the code above, you first remove section a, then a and b, then a, b and c. If you move the line out of the loop, you remove a, b and c all at once. (a, b, c are blink ranges in this example)
or do I need to append all the non-blink data that I get to a new data frame?
I just moved the last two lines out of the loop, and it seems to give the same results!
(I usually would not recommend iterating over rows manually, but this case is an exception as .between() is not flexible enough.)
Hey! Perfect, thanks! I learn a lot from you 🙂
In the long run, I would be happy if we could extract some pupil tutorials (https://github.com/pupil-labs/pupil-tutorials) from your work. 🙂 I think some of your implemented steps, e.g. pupil data rejection based on blinks, could be highly reusable for others.
Sure! I would be happy to prepare anything that would be helpful 🙂
I think we should go over this once your project is finalized. I don't want to sidetrack you. 🙂
We can definitely do that! I will let you know when it's almost finalized. Actually, as the first step of my experiments, I am almost done designing a pipeline for the use of Pupil Labs devices at our lab, as a base for all our future experiments. I'd be glad to go over it with you too.
Sounds great! Let's move this into a private conversation for more detailed organization.
Hello everyone. We recently bought an Invisible device. I have two questions regarding the LSL plugin. 1. The GitHub repo says that the coordinates are normalized to [0-1], however the recordings contain much larger values (e.g. 900 or so). Why does this happen? 2. Is there any way to also push the video through LSL? Thanks in advance 🙂
Hello! I'm having trouble installing dependencies on my Win10 machine.
(RitNet) D:\PupilSource\pupil>pip install -r requirements.txt
Collecting pyre@ https://github.com/zeromq/pyre/archive/master.zip
Using cached https://github.com/zeromq/pyre/archive/master.zip
Ignoring cysignals: markers 'platform_system != "Windows"' don't match your environment
Ignoring av: markers 'platform_system != "Windows"' don't match your environment
ERROR: av-0.4.6-cp36-cp36m-win_amd64.whl is not a supported wheel on this platform.
Is this a common issue, and is there a known workaround?
I had success with 'pip install av'
but the same issue then with uvc
@papr ever see this before?
Python version is 3.7
Same error for pyav, pyuvc, pyglui
Going to try with Python 3.6
Yep, that was it.
and only now did I see the text "NOTE: Currently our build process for Windows does not yet support Python 3.7 as pyaudio does not yet have prebuilt wheels for Windows available for Python 3.7. You are thus highly encouraged to use the latest stable version of Python 3.6."
However, your link to the shared FFMPEG binaries is no longer valid.
...and the latest build does not have all the required DLLs (it's missing postproc-55.dll). I do see that there is a GitHub package for the 4.3 shared binaries, but I'm new to GitHub packages.
Hi guys! I am running a visual experiment generated with MATLAB and Psychtoolbox. I need to synchronize MATLAB and Pupil Companion so that I can send the exact timestamps of events from MATLAB to the Pupil Invisible recording over the local network. Only this way can we be precise enough with the timestamps, since we record in a dark condition and our experiment suffers from low temporal precision because of it. If anybody has done this before and would be willing to share some hints or info on the procedure, I would be very grateful. Thanks a lot in advance!
@user-17da3a have you already looked at: https://docs.pupil-labs.com/developer/invisible/#events ?
@wrp How would this be executed through Matlab?
One would need to write a matlab script to communicate with Pupil Invisible Companion similar to the helper script we wrote for Pupil Core + Matlab here: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab
@wrp the zmq solution is not trivial - fyi.
One would also need to use a Matlab-ZRE* implementation to communicate with Pupil Invisible Companion.
* https://rfc.zeromq.org/spec:36/ZRE/ The linked Python example uses https://github.com/zeromq/pyre
Getting zmq set up for matlab, unfortunately, is non-trivial that's true.
hello! @papr quick question & sorry for cross-posting. I originally asked this already in the "core" channel - but it fits better here: in my exports from Pupil Player, the data in pupil_positions.csv has a few fewer observations than the corresponding eye videos have frames. More precisely: eye0.mp4 has 99598 frames, but pupil_positions.csv only has 99576 observations for eye0. Of course, 22 frames out of ~100k frames is not a lot. However, I wonder where they "get lost". I'm trying to get a mapping between the observations and the video frames. Any ideas? Thanks already! [Bg info: data was collected with the 200Hz VR/AR add-on and exported with Pupil Player v1.22.]
Apologies for the delayed response. Are you talking about the original eye0.mp4 or the exported one?
no problem. thanks. Should be the original. Tbh, I wasn't aware that a video could also be exported.
In this case a difference is to be expected. Player only exports data inside the trim marks (first and last world video frame by default). It is likely that the original eye video is slightly longer.
ah, I see it now in v3.2.16 - I think v1.22 (that I used to work with), didn't have it - or maybe I overlooked it. However, I'm talking about the original video.
ah ok. Is it a "fair" assumption that the 22 missing frames are cut off approx. equally from the beginning and end of the recording (aka. 11 from the start and 11 from the end)?
No, I would not make any assumptions about this. Instead, I would match the timestamps in eye0_timestamps.npy to the timestamps exported in pupil_positions.csv. Similar to this:
for ts, frame in eye_video:
    if ts not in exported_pupil_data:
        continue
    process(frame, exported_pupil_data[ts])
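Spelled out a bit more, a hedged sketch of that matching with numpy and pandas (paths are examples; note that the CSV may round timestamps, in which case exact equality should be replaced by a tolerance-based match):
import numpy as np
import pandas as pd

eye_ts = np.load("recording/eye0_timestamps.npy")                 # one timestamp per eye video frame
pupil = pd.read_csv("recording/exports/000/pupil_positions.csv")
exported_ts = set(pupil.loc[pupil["eye_id"] == 0, "pupil_timestamp"])

for frame_index, ts in enumerate(eye_ts):
    if ts not in exported_ts:
        continue                                                  # e.g. frame outside the trim marks
    # process(frame_index, ts) would go here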
Ah ja, good point. I was just working with the csv exports as I'm doing analyses in R. Didn't think of looking into the .npy files. Thx!
So eye0_timestamps.npy should have the same number of observations as eye0.mp4 has frames?
If you export the eye videos, you also get a csv file for the exported eye video frame timestamps.
So eye0_timestamps.npy should have the same number of observations as eye0.mp4 has frames? correct
Also, you might need to be careful on how you extract the frames. Some software, e.g. OpenCV's video reader, may return an incorrect number of frames
This tutorial contains an ffmpeg command to extract the correct number of frames https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
kk, thx. I checked with ffmpeg
Yeah, I see it now with the "new" exports. As I said, for the original ones, I did not export the eye videos (I think it was not implemented yet).
Possible, yes
last question: exporting was quite bothersome (it's ~250 recordings). Is there an option (or scripting example) for bulk exporting?
Unfortunately, there are none built-in. There are some community projects that read the intermediate data files in bulk and export the data to CSV https://github.com/pupil-labs/pupil-community
great, found https://github.com/tombullock/batchExportPupilLabs and will try that. Thanks for the awesome service again!
Hello there, a few weeks back I was able to connect Pupil Capture with the Pupil Invisible app, so I was getting all the data from Pupil Invisible while being on the same network. Now (I moved) I can't connect to Pupil Capture; the Video Source doesn't give me the option "OnePlus 6". The only difference between the previous successful connection and now is that I was connected to my PC via Ethernet before and now I am on the network via WiFi. Do you think this could be the problem? Thanks for the answer. Boris
Could you share the Home directory -> pupil_capture_settings -> capture.log file with us? It is likely that Capture chooses an incorrect network interface for communication.
Thanks for the answer but I found out that my Firewall was blocking the connection, I am using third party antivirus software.