Hi @papr - My advisor (Lei) and I are having issues with our videos being greyed out or not having audio when imported into pupil player. Lei has been in contact with your team about it recently and we were given a pre-release of v1.14 to help fix the issues. While it fixed one of our greyed out videos, by restoring the world view, it did not fix all of our videos, especially the audio playback problems we were having. I was wondering if you could help walk me through how to get audio to play through pupil player using v1.14?
@user-bda130 v1.14 does not include any fixes related to audio. I was not aware that you were having trouble with that. I might have missed that during the conversations with Lei. Please share one of the problematic recordings (complete recording including videos, audio, timestamps, etc) with data@pupil-labs.com
@papr Not a problem. I think she was mostly concerned about the greyed out videos. So it might not have been mentioned. I will send a video with audio playback issues to that email. Thank you for all your help!
Hi all: I am trying to link the eye tracker to my Matlab program. To do so, I am currently working from this repo: https://github.com/fagg/matlab-zmq. But I am stuck between step (1) "Ensure you have mex configured" and step (2) "run make.m". I have zeromq and Xcode installed on my Mac. The specific error is: Error using mex ld: library not found for -l/libzmq
@user-f0eff7 looks like you need to configure the path to libzmq correctly
Not sure if it is built automatically or manually in a separate step.
Unfortunately, we do not have a Matlab environment to reproduce the problem.
thanks for the help @papr.
Hi everyone, does anyone have a script to read the annotation.pldata file in R or python? For some reason the pupil player won't export my annotations even though the pupil player sees the annotations when I drag in a recording. I have tried to open the file myself using msgpack, but I haven't succeeded so far 😦
@user-64b0d2 https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/file_methods.py#L139-L155
@papr Thanks! I'm a bit confused by the input 'topic', what should I enter there?
@user-64b0d2 If you have X.pldata and X_timestamps.npy, then enter X. In your case it should be: annotation. Not sure if it requires a plural "s" suffix.
@papr I'm getting an error: NameError: name 'Serialized_Dict' is not defined, do you know how to fix that?
@user-64b0d2 Serialized_Dict is a custom class; you can find it further down in the same source file that @papr linked. You might want to copy it over entirely, or just have a look at how it deserializes the msgpack data.
@user-64b0d2 this is the relevant part: msgpack.unpackb(payload, raw=False, use_list=False)
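Putting that together, a minimal standalone sketch for reading annotation.pldata (this mirrors the logic in the linked file_methods.py; msgpack-python being installed and the file path are assumptions you may need to adjust):

import msgpack

def load_pldata(path):
    # Each record in a .pldata file is a (topic, payload) pair,
    # where payload is itself msgpack-encoded.
    data = []
    with open(path, "rb") as fh:
        unpacker = msgpack.Unpacker(fh, raw=False, use_list=False)
        for topic, payload in unpacker:
            datum = msgpack.unpackb(payload, raw=False, use_list=False)
            data.append((topic, datum))
    return data

annotations = load_pldata("annotation.pldata")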
Great, thanks a lot! It is working now
Hi all, I was wondering what plugin would be most useful for experiments that require gaze fixation.
Hi - I'm running Capture from source and recording sessions. The recording goes well and the folder looks normal. But when I drag it into pupil player, player closes and does not reopen. Any idea offhand why? Do I need to run player from source as well?
@user-40ad0b Which version of Player do you use?
@papr Version 1.31.31
sorry 1.13.31*
@user-40ad0b could you share the player.log file after the app has crashed?
@papr
@user-40ad0b please run git fetch --tags in the cloned repository.
Afterwards, all new recordings should have a proper version number and will not have the problem above. For existing recordings, you need to change the info.csv file.
ok - all of my current recordings were scratch tests so no big deal if I lose them. I'll try out a new one and make sure it's working
They are not lost, they were just assigned a wrong version number
Hmm, just got the same error again: Traceback (most recent call last): File "launchables/player.py", line 792, in player_drop File "shared_modules/update_methods.py", line 84, in update_recording_to_recent File "shared_modules/update_methods.py", line 314, in update_recording_v0913_to_v13 File "shared_modules/file_methods.py", line 78, in load_object FileNotFoundError: [Errno 2] No such file or directory: '/Users/pa192/Desktop/Programming/Fovea/pupil/recordings/2019_08_07/000/pupil_data'
@user-40ad0b What is the output for git tag --list?
It looks like the tags were not fetched properly yet
v0.1.0 v0.1.1 v0.2.0 v0.2.0a1 v0.2.0a2 v0.3.0b1 v0.3.0b2 v0.3.0b3 v0.3.3 v0.3.4 v0.3.4bilateral v0.3.5 v0.3.5.1 v0.3.5.2 v0.3.6 v0.3.7 v0.3.7.1 v0.3.7.2 v0.3.7.3 v0.3.7.4 v0.3.7.5 v0.3.7.6 v0.3.7.7
v0.3.7.8 v0.3.7.9 v0.3.8 v0.3.8.1 v0.3.8.2 v0.3.8.3 v0.3.9 v0.3.9.1 v0.3.9.2 v0.3.9.3 v0.3.9.4 v0.3.9.5 v0.3.9.6 v0.3.9.7 v0.4.0 v0.4.0.1 v0.4.1 v0.4.2 v0.4.3 v0.4.3.1 v0.4.4 v0.4.5 v0.4.5.2 v0.4.5.24 v0.4.6 v0.5 v0.5.1 v0.5.2 v0.5.3 v0.5.4 v0.5.5 v0.5.5.16 v0.5.5.17 v0.5.6 v0.5.7 v0.5.7.45 v0.6 v0.6.10 v0.6.17 v0.6.18 v0.6.18-windows v0.6.19 v0.6.20 v0.6.21 v0.6.22 v0.6.9 v0.7 v0.7.1 v0.7.2 v0.7.3 v0.7.4 v0.7.5 v0.7.6 v0.8 v0.8.1 v0.8.2 v0.8.3 v0.8.4 v0.8.5 v0.8.6 v0.8.7 v0.8.7-w v0.8.8 v0.9.0 v0.9.1 v0.9.10 v0.9.11 v0.9.12 v0.9.2 v0.9.3 v0.9.4 v0.9.5 v0.9.6 (END)
@user-40ad0b These are not all tags. What is your output for git remote -v?
it's pulling from my personal GitHub, fetch and push
I think I'm on an old pupil version, trying to reset it and update my pupil folder
Ok, got the whole thing working - set the remote back to pupil, updated the software, and re-did the git fetch --tags
Now everything is set
thanks!!
Hello all. I am looking at the exported pupil_positions.csv file and trying to map the pupil_timestamp column to the eye video file. Does anyone know how these timestamps work or how to relate them to the video files?
@user-c1ac42 Pupil timestamps are inferred from eye video timestamps. Therefore, the timestamps in pupil_positions.csv are a union of eye0_timestamps.npy and eye1_timestamps.npy.
Thank you for the response @papr. How would we relate the timestamps to the eye video footage? Are the timestamps per frame, or is there some conversion so that they reflect the actual video time?
@user-c1ac42 exactly, there should be exactly one timestamp in eyeX_timestamps.npy for each frame in eyeX.mp4
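For example, a minimal sketch of that relation (assuming numpy is installed and you are in a recording folder containing eye0.mp4 and eye0_timestamps.npy; the variable names are mine):

import numpy as np

ts = np.load("eye0_timestamps.npy")   # one timestamp per frame of eye0.mp4
print(len(ts))                        # should match the frame count of eye0.mp4

# Hypothetical lookup: map a pupil timestamp taken from pupil_positions.csv
# back to the index of the corresponding eye0 video frame.
some_pupil_timestamp = ts[100]        # stand-in value; use any timestamp from the CSV
frame_index = int(np.searchsorted(ts, some_pupil_timestamp))
print(frame_index)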
interesting, for some reason sometimes I am seeing the total count of timestamps as not being the same as the frame count for the video. I have been using pupil_positions.csv but maybe I will try it with eyeX_timestamps.npy instead.
should there be duplicate timestamps in the eyeX_timestamps.npy file?
Hi! Would it be more accurate to use two eye cameras or one?
@papr over the last few days I have been experiencing immense lag when attempting to record mobile eye-tracking data. This lag appears to severely interfere with calibration quality and is present in the recorded video even when viewing the data in pupil player. Do you know what might be causing this lag and how I might overcome it? The laptop we are using is very high spec, so I doubt it is a laptop issue?
Can somebody explain what the difference is between 2d and 3d calibration? Also, in the calibration settings script there is a PositionKey function that returns "norm_pos" for 2d and "mm_pos" for 3d. What are they supposed to mean?
I am brand new to Pupil Labs and have just received a pair of Pupil labs glasses with the OnePlus6 device to use whilst mobile. All I want to do is to download the software I need to use the thing on my desktop to try it out but am having trouble doing that. I've downloaded GitHub on the desktop but it's not downloading for some reason. Can someone offer me some advice?
@user-c250a8 please go to: https://github.com/pupil-labs/pupil/releases/tag/v1.14 and scroll down to the assets section at the bottom.
From there, select the link for your operating system and unzip it once it has downloaded on your machine.
Thanks wrp for your help. I did as you recommended. It's taking a while to download. It may be my firewall slowing it down.
Ok, usually github downloads are speedy. I am downloading now just to check - no speed issues for me at least, and no visible problems via github's status page, so it is likely a firewall/network issue on your side?
@user-d230c0 Re 2d vs 3d mode: 2d mode uses polynomial regression for mapping based on the pupil's normalized 2d position. This method can be very accurate but is prone to slippage and does not extrapolate well outside of the calibration area. 3d mode uses bundle adjustment to find the physical relation of the cameras to each other, and uses this relationship to linearly map pupil vectors that are yielded by the 3d eye model. The 3d model is constructed from a sequence of 2d ellipses. 3d mode is on average a bit less accurate than 2d mapping but is less prone to slippage due to the refitting of the eye model.
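Regarding "norm_pos": if I recall Pupil's convention correctly, it is the position in normalized image coordinates, with (0, 0) at the bottom left and (1, 1) at the top right of the frame. A minimal sketch of the conversion to pixel coordinates (the function name and frame size are my own, not part of the Pupil API):

def norm_to_pixels(norm_pos, frame_width, frame_height):
    # Pupil's normalized space has its origin at the bottom left,
    # while image pixel coordinates grow downwards, so flip the y axis.
    x_norm, y_norm = norm_pos
    return x_norm * frame_width, (1.0 - y_norm) * frame_height

print(norm_to_pixels((0.5, 0.5), 640, 480))  # center of a 640x480 eye frame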
@user-f3a0e4 is the "lag" behavior reproducible? Do you have any plugins running? Have you tried restarting with default settings and trying to reproduce this behavior? Last question: what OS are you using?
Sorry, posting in the correct chat: We are trying to use a pre-trained TF model in a background task proxy, and want to update the input args in the recent events. Any ideas on where to start?
We noticed a few solutions here but are wondering if there is a recommended way to pass in references to proxies that can be updated with recent events: https://stackoverflow.com/questions/11055303/multiprocessing-global-variable-updates-not-returned-to-parent
@wrp in the subscription class there is a function called ParseData. Would you please tell me how it is sending data to the relevant classes?
@user-0b54e3 As far as I understand, you won't be able to send any references to python objects to your proxy task (if that's what you intend to do). The task will run in a separate process and cannot access objects from another process. Only data can be sent over and will be serialized (pickled) in the process. That means you cannot update the arguments sent over to the task.
If you really want to make this work, you will have to do some more coding, I assume. If you look at class Task_Proxy, you will find a multiprocessing.Pipe that is used for sending data back. The False disables duplex functionality and makes this pipe uni-directional. For a start you could experiment with a duplex pipe, Pipe(True), and try to send data over the pipe to the task process. This will require rewriting more of the process logic though.
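To illustrate the duplex idea in isolation, here is a generic Python sketch (not Pupil's actual Task_Proxy; all names below are placeholders):

import multiprocessing as mp

def worker(conn):
    # Wait for argument updates from the parent, run the "work" with the
    # latest arguments, and send results back over the same duplex pipe.
    while True:
        msg = conn.recv()
        if msg == "stop":
            break
        conn.send({"used_args": msg})  # stand-in for the real TF inference

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe(True)  # True -> duplex, both directions
    proc = mp.Process(target=worker, args=(child_conn,))
    proc.start()
    parent_conn.send({"threshold": 0.5})     # initial arguments
    print(parent_conn.recv())
    parent_conn.send({"threshold": 0.8})     # updated arguments, e.g. from recent events
    print(parent_conn.recv())
    parent_conn.send("stop")
    proc.join()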
Not sure where to post comments about pupil-mobile, as there's no github archive. For pupil-mobile: if the PC running pupil_capture has more than one network interface card, it confuses pupil_capture into thinking that the mobile remote host doesn't exist, giving a "no host found" error.
@user-7890df This doesn't seem like an issue related to Pupil Mobile, but rather Pupil Capture. Just so I understand, are you saying that your PC has two active WiFi networks running at the same time? Or just has two separate network cards? What happens if one of the network cards is disabled? What OS and what version of Pupil software are you using?
BTW - if there was an issue you/others wanted to report re Pupil Mobile it can be done here: https://github.com/pupil-labs/pupil-mobile-app/
@wrp Windows 10, Pupil_Capture version 1.14. Yes, I think you're correct that it's Pupil Capture that is searching the networks for a Pupil Mobile device/source and getting confused. The PC has two active ethernet devices running on different networks (10.10.x.x and 192.168.x.x subnets). Pupil Mobile device is connected to 10.10.x.x network through wifi, but is not recognized by PupilCapture when both ethernet devices are active. If the non-relevant ethernet card (on the 192.168.x.x network) is deactivated, then PupilCapture can suddenly recognize the Mobile device on the network (10.10.x.x).
@user-7890df Pupil Capture uses the system's primary network interface for all its network-related features. You do not need to disable the 192.168.x.x network, you just have to ensure it is not the primary network interface.
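If you are unsure which interface your OS treats as primary, a rough generic check in plain Python (not part of Pupil; 8.8.8.8 is only a routing probe, and no packets are actually sent for a UDP connect):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])  # local IP of the interface used for the default route
s.close()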
@papr Thanks, this worked!