Hi, is there a software architecture overview of the Pupil software available somewhere?
You mean like a UML diagram or similar? Unfortunately, not.
ok, thanks for the fast answer
There is some info here though: https://docs.pupil-labs.com/developer/core/plugin-api/#development
Hello everyone !
I would like to know if there is a specific version of Capture that works on Windows 7, thanks !
Unfortunately, no. We only support Windows 10, Ubuntu 16.04 and higher, and macOS 10.13-10.15 at the moment.
Thank you for your reply! Okay, that's what I thought after searching a bit. I tried several versions of Capture but they kept crashing. I'll just switch computers!
Hello, bit of an update on this now that I've gotten around to trying an implementation. This code is causing the error
.../pupil_detector_plugins/visualizer_2d.py, line 58, in draw_eyeball_outline
if pupil_detection_result_3d["model_confidence"] <= 0.0
KeyError: 'model_confidence'
The only parts of the class I edited were the pupil_detection_identifier, the label, the method it searches for (from "2d kevin" to the actual one I chose), and the return value of parse_pretty_class_name.
This issue is caused by the fallback pupil datum that is created if there is no datum matching this https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/pupil_detector_plugins/pye3d_plugin.py#L109-L110
i.e. as you said, there is either (A) no previous detection, or (B) no datum with a matching method field. Can you check which of these two cases causes your issue? In case of A, check whether another detector is actually running and whether its order is lower than that of the pye3d plugin. If that is the case, check that its results are not overwritten somewhere. In case of B, CustomDetectorPlugin.pupil_detection_method is used during create_pupil_datum().
I will have to look at that with a bit more time
Upon looking into it further, it seems like there is no datum with the custom method string in the previous_detection_results.
Is there a specific attribute I need to set to have it called that? I tried pupil_detection_plugin, pupil_detection_method, and pupil_detection_identifier in the custom 2d detector plugin and none of them caused the custom method to appear in the datum list. However, there is a datum with a blank method string. I suspect that is the one, but for some reason it doesn't have the right method string. I haven't verified that, however.
@papr I'm running some 2d detections on the world frames that are being received and it's extremely slow. I tried to run it on the world.mp4 and it's quite fast. Running it in real time makes it slower, probably due to the involvement of the networking api and gaze estimation/pupil detection as well... is there any way to speed up the real-time processing?
Do I understand it correctly that you are running the pupil detection on the received video frames instead of subscribing to the existing pupil data?
At the moment I have subscribed to "gaze.3d" and I'm receiving the gaze data directly from Pupil Capture... more than the detection of the pupil, I'm interested in obtaining the gaze (norm_pos_2d)
Then I do not fully understand what is slower than expected. It is possible that we are thinking about different things when you say 2d detections and gaze. Please refer to https://docs.pupil-labs.com/core/terminology/#pupil-positions as a reference.
By subscribing to the gaze data you should get access to the norm_pos data already. Are you having trouble accessing it?
By subscribing to the gaze data, I'm able to obtain the norm_pos already... on top of that, when I try to run my own object detection algorithm over the frames that are being retrieved from the network, the frames are laggy... However, when I try to do the same on world.mp4, I don't experience any lag at all... so to summarise my problem: 1. It's not laggy when I obtain the gaze + world frame from Pupil Capture. 2. It is not laggy when I use the object detection algorithm over the offline video (world.mp4). 3. It is too laggy when I run the same algorithm over the feed that I receive from the network api... are there any settings to reduce the computational overhead across the network api?
The computational overhead of the network api is very low. Usually, you spend more time waiting for a new sample than receiving it from the socket. From your description, it sounds like you are calling your object detection more often in the real-time case than when running it on a recorded video. Can you check?
I suggest learning about profiling your program. This will allow you to detect functions that take a long time as well as functions that are being called more often than necessary.
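For example, a minimal sketch with Python's built-in cProfile module (run_detection is just a placeholder for your own per-frame processing):

import cProfile
import pstats

def run_detection():
    # placeholder for your own per-frame processing / object detection
    sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_detection()
profiler.disable()

# print the 20 most expensive functions, sorted by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)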
Sweet!!!! On top of that, Numba has sped up the computations a tiny bit.
Hi @papr, can a plugin control the main Pupil Capture - World window size? e.g. hiding, resizing, moving, pyglui... etc.?
Technically, it can. But it is not officially supported. It is very difficult to get right and potentially unstable.
@here We have just released pye3d v0.0.5
https://pypi.org/project/pye3d/
Read about the changes here: https://github.com/pupil-labs/pye3d-detector/blob/master/CHANGELOG.md
Hello! So, the code that I'm using to get the timestamps from the Pupil Core data and save them in an Excel file is this one.
No worries. Basically, instead of using the start_timestamp_diff, you calculate the clock offset manually:
- start_timestamp_unix = meta_info["start_time_system_s"]
- start_timestamp_pupil = meta_info["start_time_synced_s"]
- start_timestamp_diff = start_timestamp_unix - start_timestamp_pupil
+ recording_delay = psychopy_event_timestamp - annotation_timestamp
- gaze_positions_df["gaze_timestamp_unix"] = gaze_positions_df["gaze_timestamp"] + start_timestamp_diff
+ gaze_positions_df["gaze_timestamp_unix"] = gaze_positions_df["gaze_timestamp"] + recording_delay
So you are converting pupil time to unix time*. This offset is fixed for all timestamps in this recording. Do you see the 8-16 seconds variance within a single recording or across multiple ones?
Unix timestamps are always UTC (+0 timezone). You can change the timezone of pd.to_datetime() with this function: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.tz_convert.html That is more explicit than adding + 3600 to the existing pupil timestamps.
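A short sketch of what that could look like, assuming gaze_positions_df already has the gaze_timestamp_unix column from the snippet above (the target timezone is just an example):

import pandas as pd

# interpret the unix timestamps (seconds) as UTC datetimes ...
gaze_positions_df["gaze_datetime_utc"] = pd.to_datetime(
    gaze_positions_df["gaze_timestamp_unix"], unit="s", utc=True
)
# ... and convert them to a local timezone instead of adding a fixed +3600
gaze_positions_df["gaze_datetime_local"] = gaze_positions_df["gaze_datetime_utc"].dt.tz_convert("Europe/Lisbon")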
The 8-16 seconds variance is across multiple recordings... I mean, in one recording I can have 12.34543 seconds of delay and in another one like 8.23432 seconds...
One of the python files that I'm running from psychopy is this one
Which means that I'm using core.getAbsTime()... shouldn't this be the same time collected by pupil core?
getAbsTime() is unix time. Pupil Core does not use that for recording due to its limited precision*. But you are converting pupil to unix time correctly. The precision is still much higher than 1ms. Therefore, this is not the reason for your desync.
Did Capture and PsychoPy run on the same computer?
This only returns time in full seconds. I would highly recommend not using that for future experiments as it is extremely imprecise. If you need to save time in unix epoch, use time.time() instead.
This still does not explain timestamp differences > 1 second.
How do you know that the timestamps actually belong to each other? Is there an event that you can identify in both data streams that allows for manual alignment?
I did an experiment in psychopy and when I extract the data, I can see in the excel file at which timestamp a certain event happened and, as I see through Pupil Player, they're not in sync... And also through the plot of both data...
Yes, same computer...
Mmh, in this case, the clocks should be synchronized.
What is the current state of your experiment? Are you still doing pre-tests or has the data collection been completed already? Depending on which is the case, I would suggest different approaches to the solution.
I've already started testing the control group and I'm going to test on patients next Monday, so I was trying to solve this problem as soon as possible, because I can't ask these people to repeat it again...
ok, thank you for this information. Assuming that the delay is fixed within a single recording, I suggest applying an additional offset to correct the delay.
I think the easiest way would be to do that using annotations in Player.
1. Start Player and the plugin
2. Register an annotation
3. Seek to a frame which you know represents a recorded psychopy event
4. Trigger the annotation by clicking the thumb button on the left or the associated hotkey
5. Export annotations
6. Load annotations.csv in your python script
7. Calculate the offset between the annotation and your psychopy event
8. Apply the offset to the remaining pupil data
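A rough sketch of steps 6-8 in Python, assuming the standard annotations.csv and gaze_positions.csv exports (the psychopy event time below is only a placeholder value; adjust column names if your exports differ):

import pandas as pd

# step 6: load the exported annotation (its timestamp is in pupil time)
annotations = pd.read_csv("annotations.csv")
annotation_time = annotations["timestamp"].iloc[0]  # the annotation set in step 4

# step 7: offset between your psychopy event and the annotation
psychopy_event_time = 22.3865283  # placeholder: the recorded psychopy event time
offset = psychopy_event_time - annotation_time

# step 8: apply the offset to the remaining pupil data
gaze = pd.read_csv("gaze_positions.csv")
gaze["gaze_timestamp_psychopy"] = gaze["gaze_timestamp"] + offset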
The excel file "annotations" is empty, I mean, without values... Is that expected?
Then, step 4 is missing.
Right, sorry
The timestamp is 10982.207351... how am I supposed to calculate the offset with psychopy if the event is 1612785231.38653?
Sorry for bothering you, but I'm probably not understanding these relations that well...
Ok, I did a plot with one trial and it seems good! But, as I am calculating the offset manually (by choosing the frame from the video), can I say that the precision may not be exact? I'm asking this because I need to calculate the reaction time between the stimulus and the person looking at it... So, should I say the reaction time may be influenced by the frame I chose?
~~Generally, I would choose the frame in which the stimulus becomes visible for the first time.~~ Actually, this depends on when your psychopy event was generated. Was it timed before the stimulus was displayed or after?
So, should I say the reaction time may be influenced by the frame I chose? Correct.
But: Your psychopy events were only recorded with full-second precision, i.e. you can only measure your reaction time with a precision of one full second anyway.
Yes, it was exactly what I tried to do!
Please see my correction
First, I initialize psychopy (as it takes some time) and before writing the name of the participant, I click the record button on pupil capture
Sorry, I think I was not clear enough. When do you call getAbsTime()? Before you tell psychopy to display the stimulus or after it has been rendered?
But yes, psychopy time is since the application starts
getAbsTime() gives me the time when the app starts. Then, I have to do a transformation in the excel file to add the seconds at which the ball (stimulus) appears after the app started...
Aah, ok. How do you know when the ball appeared relative to the app start?
They give me the seconds at which the ball appears after the app started... so I just add these numbers
Ok, but how is that measured?
time app start + time the ball starts after the app start; for example: 1612785231.38653 (app start) + 22.3865283 (seconds taken for the ball to appear after the app started)
These timestamps have better than 1-second precision. getAbsTime() would return 1612785231 instead of 1612785231.38653.
Independently, you can skip this transformation and calculate the time offset between your relative clock (22.3865283, the psychopy event) and pupil time (the annotation) directly. You do not need to make the effort of converting both to unix time first.
"getAbsTime() would return 1612785231 instead of 1612785231.38653." You're right, this number comes from an experience that I was testing π
Hmmm, I didn't think about that...!
Generally, I have the feeling that the clock generating the 22.3865283 offset is causing the varying delays. The unix timestamps (app start in psychopy, recording start time system in Capture) should be reliable if they were recorded on the same computer.
I think the 22.3865283 comes from the delay the app needs to initialize plus the "welcoming text" (the app doesn't start with the stimulus immediately)... Yes, I chose to use unix timestamps precisely because of their reliability... And that's why I was not understanding this strange delay between psychopy and pupil core...
But, as you told me before, are you saying it's better if I use time.time() instead of getAbsTime()?
It is the best way to get the current unix timestamp in seconds. It is limited in precision due to it being a float, but this is still in the range of microseconds. time.time_ns() returns it in nanoseconds, should you ever need it.
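For illustration:

import time

unix_s = time.time()      # float seconds since the epoch, microsecond-range precision
unix_ns = time.time_ns()  # integer nanoseconds since the epoch
print(unix_s, unix_ns)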
I have still not understood how you measured this
epoch_time = epoch_time_start + polyBall.started (BM+BW)
ok, so this is what I found out:
- polyBall.started = polyBall.tStartRefresh
- polyBall.tStartRefresh is set by win.timeOnFlip(polyBall, 'tStartRefresh') (This is the response to my original question. The psychopy event is therefore timed at the exact time the stimulus is shown for the first time. Please annotate accordingly in Player.)
- timeOnFlip() assigns logging.defaultClock.getTime() (can't find what this is relative to exactly) (I assume this times program start, not experiment start)
Additionally, I noticed that you call getAbsTime() after DlgFromDict. This is where your delay comes into play. You take between 8-16 seconds to click OK on that dialogue.
I recommend the following change to fix all your future sync problems:
- expInfo['epoch_time_start'] = core.getAbsTime()
+ expInfo['epoch_time_start'] = time.time() - logging.defaultClock.getTime()
This corrects your unix time to the same clock that polyBall.started uses. After implementing this change, you no longer need to use annotations to manually fix your offset.
Until then, use delay = polyBall.started - annotation to correctly sync pupil to psychopy time (your data will no longer be in Unix time, but this does not really matter for reaction time calculations).
(did I mention that reading auto-generated psychopy code is a small catastrophe? :D)
Sorry to bother you again
This + expInfo['epoch_time_start'] = time.time() - logging.defaultClock.getTime() is not working... psychopy doesn't run if I write it this way...
I'm going to try these modifications! Thank you, thank you a lot for all your help!!!!!!
Do you get an error message?
When I open the py file to start the stimulus, it doesn't run...
Psychopy should give you a log with an error message. Did you import the time module?
import time
Ok, yes, the import was badly written... I tested with these modifications and the plot seems nice! Thank you again!!!
I am happy to hear that this was indeed the issue for the desync. Good luck with your recordings on Monday!
Thank you! I hope I can show you all my project soon and prove how helpful the eye tracking glasses can be nowadays!
@papr Sorry to bother you again. An hour ago I asked in the core channel why my plugin does not show up in Pupil Player. You pointed me to the fact that I was using a plugin template for Pupil Capture and not Pupil Player. However, I cannot figure out how I need to adjust the pupil detection template such that it shows up in Pupil Player. Simply putting it into ~/pupil_player_settings/plugins does not do the job. Could you provide me with a base template for a custom pupil detection plugin that will show up in Pupil Player?
Thanks in advance
How have you checked, that it is not already running as expected?
I have put the artificial_2d_pupil_detector.py in ~/pupil_player_settings/plugins and expected a pupil detection equal to a point moving on a predefined circle, according to the documentation of the plugin. However, the pupil is still detected by the actual Pupil Player pupil detector.
And you changed to Post-hoc pupil detection and redetected the eye videos after putting the plugin into this folder?
Okay, so embarrassing... I did not know that this setting existed. Works as expected. Sorry for wasting your time.
Don't worry. I guess these steps are still missing in the documentation. We will have to add them.
Yes, that might be good. Thanks for your help and have a nice weekend
Hello! We are trying to assess whether pupil core could be used for measuring spike-triggered pupil dynamics, but it seems like we would be missing either an external hardware trigger for the cameras, or an output exposure strobe signal to send to the ephys DAQ. Is there any approach the pupil community has converged on to deal with these situations? Thanks!
Hey everyone,
I am still trying to implement a custom pupil detector plugin for the Pupil Player. The detection algorithm I want to use stems from a GitHub repo called DeepVOG (https://github.com/pydsgz/DeepVOG). This repository has multiple dependencies that are not part of the standard python distribution (e.g. numpy, tensorflow-gpu, keras, ...). I managed to get DeepVOG running inside a virtual anaconda environment. Now I want to make these dependencies visible to the Pupil Player.
Inside the Plugin API docs I read this: "The bundled applications use their own isolated Python environment, i.e. the plugin will not recognize your local pip installation! Any additional dependencies need to be installed into the plugins folder, next to the plugin."
However, I am not quite sure how to port the necessary dependencies into the plugins folder such that they are recognized by Pupil Player. I have tried pip install --target=~/pupil_player_settings/plugins tensorflow-gpu, which kind of works but clutters the plugins folder and causes name conflicts inside it, as both keras and tensorflow-gpu come e.g. with a bin folder.
I would greatly appreciate it if you could point me to the appropriate way of installing the necessary dependencies such that Pupil Player can recognize them.
Thanks in advance!
In this case, I would recommend running Pupil Core software from source. Please be aware that the instructions might not work for anaconda environments, though.
Hm okay. I am on a Windows 10 machine with restricted user rights. I can imagine that running from source will be quite difficult on this machine, but I can give it a try. Is there any other possibility you could think of? I had the idea of using symlinks to map the dependencies, but I am not sure if this could work.
Symlinks will probably not work if pip install --target did not work.
You can create a virtual environment with your installed python: python -m venv <path to venv location>. Installing python dependencies into it does not require any administrator rights.
You will need admin rights for the camera driver installation, though. But you are probably aware of this already.
pip install --target worked, i.e. Pupil Player found the dependency. But it cluttered the plugins folder and caused name conflicts, as stated above. So symlinks could work, right?
Do you mean to create a virtual environment inside the plugins folder for my plugin dependencies or to create a virtual environment to run from source?
Good afternoon! I would need to convert PI gaze coordinates to degrees of visual field. Searching in the channel, I found that it was suggested to use the function unprojectPoints in the camera models module. I have tried that in this way:
import numpy as np
from modules import camera_models as cm

camera_intrinsics = cm.default_intrinsics['PI world v1']['(1088, 1080)']
camera_model = cm.Radial_Dist_Camera('PI world v1', (1088, 1080), camera_intrinsics['camera_matrix'], camera_intrinsics['dist_coefs'])

coords = [np.array([0, 0]), np.array([1088, 1080])]
for coord in coords:
    print(camera_model.unprojectPoints(coord))
I get
[[-1.1757394 -1.2322234 1. ]] [[1.099524 1.0320034 1. ]]
To my understanding, pixels [0,0] and [1088,1080] should be the top left and bottom right of the visual field, which in the world video should be 82°x82°. Thus, I can't understand the result I get. Thank you very much!
Maybe this was already the missing information: unprojectPoints() does not return field of view values but the corresponding viewing angle for a given pixel, originating in the camera's origin.
The 82x82 are measured horizontally and vertically. If you measure the angle between the mentioned 3d vectors, you will get the diagonal field of view.
so, if I understood correctly, camera_model.unprojectPoints([0,0]) returns [[-1.1757394 -1.2322234 1. ]], which should be the angle in radians between the center of the image and the top-left corner; converted to degrees that would be [-67.3649 -70.6012], which does not make sense to me if the FoV is 82°x82°. So I am obviously missing something.
It is not an angle but a 3d direction. If you want to calculate an angle you need a second 3d direction as a reference.
Using coords = [np.array([0,0]), np.array([1088, 1080])] makes sense if you want to get the diagonal field of view, i.e. the angle between the viewing directions of the top-left and the bottom-right corners. For the horizontal field of view, use [height//2, 0] and [height//2, width]; for the vertical one, use [0, width//2] and [height, width//2].
Hi @papr, the thing that @user-0e7e72 and I find confusing is what exactly you mean by 3D direction. If you see the example image, when calculating the eccentricity of a point on the visual field (let's say A), you'd need to know the viewing distance (D) and take a reference point (B). In our case, the reference point is always the center of the visual field, which should correspond to [0, 0] degrees of visual angle (since the reference point is itself). When computing the results using .unprojectPoints we got the result in radians (we guess?) that converted into degrees corresponds to [-67, -70] for the upper left corner and [62, 59] for the bottom right corner. Assuming that these numbers are calculated from the center of the visual field and represent a diagonal, the horizontal and vertical visual field would be approx. 130 x 130, which is clearly different from your 82 x 82. We are still struggling to understand what's wrong here; does this help in clarifying our issue? Thanks a lot for your time and effort.
I would like to note that the method above only calculates the field of view of the scene camera, assuming that it adheres to the pinhole camera model.
Unfortunately, I won't be able to respond to your question in detail today. I will have a closer look tomorrow.
Thank you @papr, anyway just to be clear, the only thing we want to achieve is converting the gaze data from pixels to degrees with respect to the center of the image.
In this case, you just need to calculate:
rad2deg(
    arccos(
        dot(
            normalize(gaze_point_3d),
            [0, 0, 1]
        )
    )
)
Please see the edit. The idea is to use the arccos of the cosine similarity to calculate the visual angle
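A minimal numpy version of the formula above, assuming gaze_point_3d comes from a gaze datum in scene camera coordinates (the example values are made up):

import numpy as np

gaze_point_3d = np.array([50.0, -30.0, 400.0])  # example values, scene camera coordinates

# angle between the gaze direction and the camera's optical axis [0, 0, 1]
direction = gaze_point_3d / np.linalg.norm(gaze_point_3d)
cos_sim = np.clip(np.dot(direction, [0.0, 0.0, 1.0]), -1.0, 1.0)
visual_angle_deg = np.degrees(np.arccos(cos_sim))
print(visual_angle_deg)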
Thanks, perfect! Now I understand what we were talking about
@user-0e7e72 @user-a350f5 Please understand that this only calculates the visual angle of the cyclopean gaze ray, originating in the scene camera's origin, not in the subject's eyes. For the latter, you need to apply this calculation to gaze_normal0/1_x/y/z respectively.
Thanks a lot, that was probably the source of the misunderstanding!
Thank you papr, is it possible to obtain this data with pupil invisible?
No, Pupil Invisible only returns cyclopean gaze at the moment.
We're doing an integration with the DIY glasses for a school project at university, and I was wondering about the simplest way to just get an eye vector back. We were thinking of using Pupil Service for it with receive message; is that enough/would that work?
See this example. https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py You can either use Capture or Service for it.
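Roughly, that example boils down to something like the sketch below (assuming Capture/Service runs locally on the default port 50020; the circle_3d normal is one candidate for an "eye vector" if the 3d detector is running):

import zmq
import msgpack

ctx = zmq.Context()

# ask Pupil Capture/Service for its subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to pupil data
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    # circle_3d["normal"] is the direction the eye is looking, in eye camera coordinates
    print(topic.decode(), datum.get("circle_3d", {}).get("normal"))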
Thanks!
Hey, once both eyes have been fitted with a 3D model, is there a transformation matrix available (or something equivalent) that would let us map features in the 2D eye images (like eye corners) into the 3D camera space?
If you are referring to the eye camera 3d space, you do not need to fit the 3d eye model. Using the camera intrinsics is sufficient. If you need them to be in world camera 3d coordinates, you need a valid 3d calibration additionally.
Makes sense to me. I hope it makes sense to you, too ...that's a good litmus test :). Thanks, @papr
Ok. Our goal is to map the features onto the 3D eye model. The intrinsics allow us to do a raycast of the features from the camera into camera-centered world space. Given that the 3D calib gives us the eye center and diameter, we can map points by colliding this ray with the sphere.
I was talking about the eye camera intrinsics, not the scene camera intrinsics. They allow you to transform eye image pixels into viewing vectors/rays originating in the eye camera's 3d coordinate origin. These can be used to collide with the sphere. The eye model has a fixed radius of 10.392304845413264 mm:
https://github.com/pupil-labs/pye3d-detector/blob/master/pye3d/constants.py#L3
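A sketch of that collision, assuming the ray comes from the eye camera's unprojection (a direction from the camera origin) and the sphere center from the pye3d result; the values at the bottom are placeholders:

import numpy as np

EYEBALL_RADIUS_MM = 10.392304845413264  # fixed pye3d eye model radius

def intersect_ray_sphere(ray_dir, sphere_center, radius=EYEBALL_RADIUS_MM):
    # nearest intersection of a ray from the origin with a sphere, or None if it misses
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    c = np.asarray(sphere_center, dtype=float)
    b = np.dot(d, c)                              # projection of the center onto the ray
    disc = b * b - (np.dot(c, c) - radius * radius)
    if disc < 0:
        return None                               # e.g. an eye corner that misses the eyeball
    t = b - np.sqrt(disc)                         # nearest of the two solutions
    return t * d if t > 0 else None

ray = np.array([0.1, -0.05, 1.0])                 # placeholder: unprojected eye image pixel
sphere_center = np.array([2.0, 3.0, 35.0])        # placeholder: eyeball center in mm
print(intersect_ray_sphere(ray, sphere_center))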
Yes, no confusion regarding eye/scene intrinsics. Good point about eye corners.
But the eye corners might not collide with the eye ball, depending on the viewing angle.
I'm having a problem with the latest build of Pupil Core, where the 3d detector doesn't appear at all in the list of detectors in the eye window. As a result, gaze detection with 3D gaze mapping no longer completes the Calibrations step, and the Status window under that section gives the error "Calibration failed: Not sufficient pupil data available."
Hey, have you tried restarting with default settings already? Since you were developing custom detector plugins, you might have disabled the plugin. This state might have been restored from the session settings.
I'm also using the latest version of pye3d, 0.0.5, installed through pip
Oh dang that's embarrassing, yeah that worked haha
I keep forgetting about the magic of that reset settings button
@papr I'm trying to decode the raw data that I have received using the code np.frombuffer(msg['raw_data'][0], dtype=np.uint8).reshape(msg['height'], msg['width'], 3) but it throws an error message that says ValueError: cannot reshape array of size 145904 into shape (720,1280,3). Previously, before updating my pupil capture, everything was working fine. What might be the problem? Do I have to change any setting in Pupil Capture?
It might be possible that you are receiving data in a non-bgr format. Can you check the network api menu and see what format the frame publisher is using?
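For reference, a hedged sketch of how both cases could be handled, reusing the message fields from your snippet (the exact value of the format field is an assumption; check the frame publisher menu):

import numpy as np
import cv2  # only needed for compressed (jpeg) frames

def decode_world_frame(msg):
    # decode a frame publisher message into an image array
    raw = msg["raw_data"][0]
    if msg.get("format") == "bgr":
        return np.frombuffer(raw, dtype=np.uint8).reshape(msg["height"], msg["width"], 3)
    # compressed frames (e.g. jpeg) need to be decoded instead of reshaped
    return cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)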
thank you!
Hey all, I'm following up on this from a while ago. I'm using the tutorial here to convert a video to its frames that will match with the CSV data from pupil labs. The issue is that I'm working with a large video file, so extracting PNG frames makes things blow up.
I'm trying to extract JPG instead by using the following code:
ffmpeg -i world_clip.mp4 -vsync 0 compressed/frame%06d.jpg
and I'm getting the following error after a couple hundred frames are produced:
Invalid pts (770) <= last (770)
Video encoding failed
I feel like it's something weird having to do with -vsync. I realize this isn't a pupil labs issue, but I'm hoping maybe someone has come across this issue before, because honestly I haven't read enough about ffmpeg to understand what's going on with muxers and demuxers... if that's even important here. Thanks!
Is world_clip.mp4 the originally recorded scene video or only a slice of it? Independently, I can understand your wish for a more efficient way to process the video. I suggest using pyav (pip install av) instead. It is a python wrapper around ffmpeg's underlying library and allows decoding the video frame by frame without having to write the frame to disk.
import av
import numpy as np

timestamps = np.load("world_timestamps.npy")
container = av.open("world.mp4")
frames = container.decode(video=0)  # generator; only decodes on demand
for ts, frame in zip(timestamps, frames):
    image = frame.to_ndarray(format="bgr24")  # convert the decoded frame to a BGR numpy array
Thanks for the lightning-fast reply! world_clip.mp4 is a one-minute slice of the scene video. But one minute of 30 fps PNGs blows up to multiple gigs, and I want this to scale to the 30-60 min original videos, which I realize I'll probably need to process in batches anyway.
Decoding frames without saving to disk sounds huge for what I'm trying to do, so I'll definitely check this out. Thanks!
The full workflow is something like: take a frame -> run it through a TensorFlow object detection model -> take the detected objects and save them to the correct rows of the pupil lab exported CSVs to match with eye gaze data
In this case, it looks like your slice might not have been encoded correctly. The error message refers to the internal timestamps used by the mp4 container. It requires time to increase monotonically. This does not seem to be the case for your clip, which is why it fails at that point.
See the pyav docs for API details: https://pyav.org/docs/stable/
Re workflow: I suggest saving the object detection results together with the frame index and timestamps separately and merging the data post-hoc. This gives you the largest amount of flexibility later on. Editing csv files efficiently in-place is actually quite hard, if not impossible.
Great! Using a generator approach seems way more doable than trying to save batches of frames to disk. Thanks so much for the pyav tip, I'm sure that will save me tons of time!
We are using it in Pupil Player to play back video. Therefore, you can be sure that the n-th frame in Player corresponds to the n-th frame in your script. This sounds trivial but is actually not always the case. Other wrappers around ffmpeg, e.g. OpenCV, do not reliably return the expected number of frames from the same file.
That's super reassuring, because for the analysis I'm doing it's critical that the eye gaze data is aligned with the results from the model.
The exported pupil and gaze data have a world_index column that refers to the scene video frame index to which they are closest in time. This is a many-to-one mapping as the eye data is usually recorded at a higher frequency than the scene camera. See the "Visualizing Image and Gaze" section from the tutorial as a reference.
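A sketch of that merge with pandas, assuming the detections were saved per scene frame as suggested above (object_detections.csv and its frame_index column are placeholders; world_index comes from the standard export):

import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")           # exported by Pupil Player
detections = pd.read_csv("object_detections.csv")  # placeholder: your per-frame results

# many gaze rows map to one scene frame, so merge detections onto the gaze rows
merged = gaze.merge(detections, left_on="world_index", right_on="frame_index", how="left")
merged.to_csv("gaze_with_detections.csv", index=False)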
Yes, I did notice that. So I'll have to join the results of the model to the many rows that represent one frame. Thanks!
I'm guessing world_timestamps.mp4 is a special file produced by pupil labs? I don't have access to that raw data at the moment to check, but it struck me as odd that you could load an .mp4 directly into a numpy array.
That was a typo. I meant to use the .npy extension in the example. I corrected it appropriately. See https://docs.pupil-labs.com/developer/core/recording-format/#timestamp-files for reference.
No worries, that makes much more sense. Thanks!
Hello, I am a student and am attempting to replicate the gaze point that appears when a project folder is dragged into Pupil Player. As I understand it, the gaze_point_3d represents the intersection of where the user's two eyes are looking, in camera coordinates. I have a projection transformation to NDC for these points, but when I overlay them synced up with the exported video, it does not match up very well and the point stays very close to the center. How should I go about replicating what Pupil Player does? Thank you!
This is the code that generates the gaze_point_3d: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L252-L314
Hi, in this code, what does "AE_win" refer to? Thank you so much for your help
Hi,
I am trying to run Pupil Capture from source, but I am encountering an error with a DLL file not being loaded.
Any advice would be awesome!
I have it working now! Something to do with my virtual environment, not sure what though
*to clarify I ran it from my local Python install
is there a way to save the calibration in the software? or is there a calibration configuration file in the source folder?
The latest calibration and any new calibrations are stored automatically during recordings. The gaze mapper will also store its last, successful calibration in the session settings. This means you can calibrate, close Capture, start Capture, and the calibration should be restored by the gaze mapping plugin.
Hello, is there a way to detect the camera disconnecting through the network API? When I open the Pupil software with the camera connected and then disconnect it afterwards, the network API just responds with OK to any type of request (recording, calibrating, ...). This is kind of odd, since when I send a calibration request, nothing happens, but I still get the same OK message. For a project it is required that I am able to detect when the camera disconnects, but I can't find much information in the documentation regarding camera disconnects.
Hey, the OK is meant as an acknowledgement of reception, not as a confirmation that the action was performed. You can subscribe to the video frames via the network api. See this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
Instead of visualizing the data, you can just make sure the data is being received within the expected time frame.
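A minimal sketch of such a watchdog, assuming the Frame Publisher plugin is running (e.g. started as in the linked example) and Capture runs locally on the default port:

import zmq

ctx = zmq.Context()

# ask Pupil Capture for its subscription port
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to scene camera frames
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.world")

poller = zmq.Poller()
poller.register(subscriber, zmq.POLLIN)

TIMEOUT_MS = 2000  # no frame within this window -> assume the camera disconnected
while True:
    events = dict(poller.poll(TIMEOUT_MS))
    if subscriber in events:
        subscriber.recv_multipart()  # frame received, camera is alive
    else:
        print("No frames received - the camera might be disconnected")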
Perfect, this is exactly what I need. Thank you!