Hello, is there a way to compute eye blinks offline based on the pupil diameter and confidence data (HMD)?
@user-66516a Pupil Player supports offline blink detection using the confidence signal. Are you looking for that or do you want to implement it yourself?
Thanks for the quick answer @papr. Unfortunately, I only have *.csv files on hand, so I'd like to implement it myself
(The team I work with extracted pupil labs data into *.csv files using python and msgpack library, but they only did so for the timestamp, pupil diameter, confidence and position. I'd like to find a way to compute the number of blinks based on the available data I have. )
@user-66516a this is definitely possible
@user-66516a This is the relevant code from the blink detector: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L344-L367
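In case it helps, here is a minimal sketch of one way to find blinks from such a *.csv, assuming columns named timestamp and confidence. It is not the exact algorithm of the linked blink detector (which works on a filter response of the confidence signal); the file name, column names, and thresholds are assumptions you would need to adapt:
import numpy as np
import pandas as pd

df = pd.read_csv("pupil_positions.csv")            # assumed file and column names
ts = df["timestamp"].to_numpy()
conf = df["confidence"].to_numpy()

low = conf < 0.5                                    # assumed confidence threshold
# pad with False so every low-confidence run has both an onset and an offset
edges = np.diff(np.concatenate(([False], low, [False])).astype(int))
onsets = np.where(edges == 1)[0]                    # first low-confidence sample of each run
offsets = np.where(edges == -1)[0] - 1              # last low-confidence sample of each run

blinks = []
for on, off in zip(onsets, offsets):
    duration = ts[off] - ts[on]
    if 0.05 <= duration <= 0.5:                     # assumed plausible blink duration in seconds
        blinks.append((ts[on], ts[off]))
print(len(blinks), "candidate blinks")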
Thanks for the indication @papr ! I'll implement it shortly
Dear all, is there any object detection plugin that can be added to Pupil?
We need to start using deep learning for object recognition and we wonder if anyone uses something like that.
Thanks
@user-1bcd3e Check out this third-party fork of Pupil: https://github.com/jesseweisberg/pupil
ok thanks
Please be aware that this fork does not seem to be maintained anymore (last commit Dec 18, 2017).
ok
maybe it will run on an old version of pupil
It is basically an old version of Pupil since they forked the repository. If this will be used for a new project, I highly recommend extracting their changes into a plugin that can be used with a current version of Pupil.
Plugin API docs: https://docs.pupil-labs.com/developer/core/plugin-api/
Hi everyone, I wanted to modify an existing plugin to add a little bit more functionality, is there a recommended IDE to use to import the source code into and run from?
@papr specifically I'd like to perform some calculations on the raw data and export an additional .csv file at the same time as the gaze/pupil positions. After the raw data exporter plugin is loaded, where is the data held before being written to a .csv?
@user-a7d017 In this case I would recommend a custom plugin: https://docs.pupil-labs.com/developer/core/plugin-api/
Please be aware that the pupil api docs are work in progress and will be extended in the coming days.
The data is stored in self.g_pool.pupil_positions and self.g_pool.gaze_positions, where self is an instance of your plugin.
You should also implement:
def on_notify(self, notification):
    if notification["subject"] == "should_export":
        self.do_export()
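Putting it together, a minimal sketch of such a plugin could look like this (the class name, output file name and exported fields are placeholders; it assumes the file lives in the user plugin directory so that from plugin import Plugin resolves, and that self.g_pool.gaze_positions is iterable as described above):
import csv
import os
from plugin import Plugin

class My_Custom_Exporter(Plugin):
    def on_notify(self, notification):
        if notification["subject"] == "should_export":
            self.do_export(notification["export_dir"])

    def do_export(self, export_dir):
        # write one row per gaze datum into the chosen export directory
        with open(os.path.join(export_dir, "my_data.csv"), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "norm_pos_x", "norm_pos_y"])
            for g in self.g_pool.gaze_positions:
                writer.writerow([g["timestamp"], g["norm_pos"][0], g["norm_pos"][1]])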
@papr Thank you for the response, very helpful. I have set up the dependencies and dev environment to run from source on Windows, but seeing as Pupil runs both Python and C code, is there a recommended IDE to use?
@user-a7d017 I personally use VS Code. Also, you rarely need to interact with the C code, so you can use any text editor you like, since the relevant code is Python.
@user-b08428 did you ever find out how to send annotations via Unity?
/cc @fxlange 👆 Python reference: https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py
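For reference, the core of that helper looks roughly like this (a sketch based on my reading of the script; the ports, the "t" time request and the exact payload keys should be double-checked against your Pupil version, and as far as I know the Annotation plugin needs to be enabled in Capture for annotations to be recorded):
import time
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")    # Pupil Remote default port

pupil_remote.send_string("PUB_PORT")             # ask for the IPC pub port
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)                                  # give the PUB socket time to connect

pupil_remote.send_string("t")                    # request current Pupil time
pupil_time = float(pupil_remote.recv_string())

annotation = {
    "topic": "annotation",
    "label": "my annotation label",
    "timestamp": pupil_time,
    "duration": 1.0,
    "custom_key": "custom value",
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)   # frame 1: topic
pub.send(msgpack.packb(annotation, use_bin_type=True))    # frame 2: payload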
@papr I already looked at this code. I also looked at the API call for remote annotation here: https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations. What I am not sure about are the key-values I need to send to the function
public void Send(Dictionary<string, object> dictionary) (RequestController, and in turn Request.SendRequestMessage, is using this Dictionary to send via MessagePack)
I originally thought it would be easy enough with {{"subject", "annotation"}, {"label", "my annotation label"},{"timestamp", <pupil time>}, {"duration", 1.0},{"custom_key", "custom value"}}
However it does not seem to get recorded. Any idea where I am going wrong? (Let me know if I should ask this in hmd-eyes forum instead...)
Hi @user-fc194a - indeed let's move this topic to 🥽 core-xr. in short: support for remote annotations directly in hmd-eyes is on our list and thanks to @user-141bcd we already have a Pull Request implementing it.
@papr I have my One+ connected to my pupil invisible headset. It's on the same network as my macbook running the pupil invisible monitor. The monitor doesn't seem to detect any device. Am I missing a step?
@user-4d0769 hey, what operating system are you using?
@paper macos Mojave (10.14.6)
@user-4d0769 what kind of wifi is it? Are you at a university?
It's my corporate wifi
They might be blocking udp traffic
It is recommended to run a dedicated wifi that you can configure yourself.
gotcha. Would turning on tethering on the phone and connecting the laptop to it work?
No, that does not work, since the phone isolates the connected peers from each other
Ok I got it working on a separate wifi as you suggested 🙂 However it only streams the eye tracking. The picture is not streaming. Any advice?
@user-4d0769 try restarting the phone.
I have a plugin for Capture that accepts a signal from an external source and timestamps whenever this signal changes. The plugin for Player then exports the signal and its timestamps to signal.csv. I want to be able to easily link the timestamped signals to the pupil_info.csv data. Am I correct in thinking that including world_index in the signal.csv file would be helpful? If so, how do I add the world_index to the signal and timestamp?
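(One possible approach, sketched under the assumption that you have access to the recording's world_timestamps.npy and that your signal timestamps are on the same Pupil clock: look up, for each signal timestamp, the index of the last world frame at or before it.)
import numpy as np

world_ts = np.load("world_timestamps.npy")      # assumed to sit in the recording folder
signal_ts = np.array([12.01, 12.53, 13.20])      # example signal timestamps

# index of the last world frame whose timestamp is <= each signal timestamp
world_index = np.searchsorted(world_ts, signal_ts, side="right") - 1
world_index = np.clip(world_index, 0, len(world_ts) - 1)
print(world_index)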
Hi, I am trying to get some particular frames out of the eye videos. I was somewhat successful with:
import cv2

video = cv2.VideoCapture("eye0.mp4")
total_frames = video.get(cv2.CAP_PROP_FRAME_COUNT)   # property 7
frame_idx = 100

while frame_idx < total_frames:
    video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)    # property 1: seek to the frame
    ret, frame = video.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    frame_idx = frame_idx + 1
    key = cv2.waitKey(10)
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
Now matching timestamps is a bit more difficult. Is it possible to access the timestamps inside the original eye video?
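(In case it helps: a Pupil recording normally stores the eye timestamps next to the video as eye0_timestamps.npy, where index i is the timestamp of decoded frame i — a sketch, assuming your cv2 frame indices line up with that array and no frames were dropped during decoding:)
import numpy as np

eye_ts = np.load("eye0_timestamps.npy")   # sits next to eye0.mp4 in the recording folder
frame_idx = 100
print("frame", frame_idx, "timestamp:", eye_ts[frame_idx])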
Hi! Maybe this information has already been reported but I am new in the Pupil Labs chat... I would like to ask you if you could give me information about how to transform norm_x and norm_y into pixel coordinates of the screen. Thanks!
@papr Hello again, is it possible to run and debug Pupil Player using an IDE? Currently I run Player using Powershell as described in the Pupil docs and use logging but it is time consuming. I apologize for the newbie question. As part of the custom plugin I'm working on I'd also like to be able to import a file into Player, either by adding a button or by drag-and-drop into window as we do with the recording directory. Would you mind pointing me in the right direction to accomplish these?
@user-ff9c49 If you want norm coords --> world camera pixels, then multiply by the resolution of your world camera and flip over Y axis: https://github.com/pupil-labs/pupil/blob/1d8ce3200d3c799b67eaf62a73c4053edc2fe5db/pupil_src/shared_modules/methods.py#L535-L546 -- see also docs here: https://docs.pupil-labs.com/core/terminology/#coordinate-system
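A standalone sketch of what that linked helper does (scale by the resolution and flip the y axis, since norm coordinates have their origin at the bottom left and pixels at the top left):
def denormalize(norm_pos, width, height):
    x = norm_pos[0] * width
    y = (1.0 - norm_pos[1]) * height
    return x, y

print(denormalize((0.5, 0.5), 1280, 720))   # -> (640.0, 360.0)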
Hi @user-a7d017 I can quickly jump in for @papr here: I assume you are running the *.bat files from PowerShell? You can also run Pupil Player just via Python with
python pupil_src/main.py player
This way you can easily set it up to run via an IDE. The specific setup depends on which IDE you are using. Visual Studio Code e.g. needs launch configurations for your debug targets (https://code.visualstudio.com/docs/editor/debugging#_launch-configurations ), here you can e.g. use:
{
"name": "Run Player",
"type": "python",
"request": "launch",
"cwd": "${workspaceFolder}/pupil_src",
"program": "${workspaceFolder}/pupil_src/main.py",
"args": [
"player"
],
"subProcess": true,
}
But any other IDE should be straightforward to attach to this script.
Regarding your extension ideas:
Drag and drop might be a bit difficult to implement for anything other than recordings. You can find the relevant code piece here: https://github.com/pupil-labs/pupil/blob/cbc4aa8f788371ac5a3822d4819ed69b9310d86e/pupil_src/launchables/player.py#L212
I'd recommend creating a custom plugin if you want to have some UI. You can read about custom plugin development here: https://docs.pupil-labs.com/developer/core/plugin-api/
@wrp @user-ff9c49 also be aware that the origin for the norm coordinates is the bottom left of the image.
revised my answer above, thanks @papr
@wrp @papr I am sorry, but it is not entirely clear to me... I will explain my situation: I am showing a visual stimulus (using Psychtoolbox, Matlab) on a screen with a resolution of 1920, 1080 px. At the same time through UDP protocol (python-matlab) I am receiving in real time the coordinates x and y (norm_pos_x, norm_pos_y). Because these coordinates go from 0 to 1 (related to the world camera), I do not know where they are on the screen (location in pixels). So I would like to know when these coordinates are in a specific position on the screen (for example: 480, 540 px). So, to do this I have to convert the values of x,y coordinates into pixel values on the screen where the target appears... Should I first define the screen surface using Apriltags? Thank you very much.
@user-ff9c49 yes - define a surface with AprilTags as markers. Then you will be able to understand gaze relative to the surface (e.g. your screen) in norm coords, then can map back to pixel space. Relevant example that uses surfaces and maps from norm surface coords to screen coords: https://github.com/pupil-labs/pupil-helpers/blob/master/python/mouse_control.py#L85-L95
@wrp thanks a lot! I will work on it!
@wrp Hello! I defined the surface using Apriltags and I added it in the surface tracker section (called surface 1). After this...Are norm_pos_x and norm_pos_y (which come from pupil capture in real time) related to that surface?
@user-ff9c49 hey :) the norm_pos meaning depends on the type of data you are looking at. The pupil norm_pos relates to the eye image, the gaze norm_pos relates to the world image, and gaze_on_srf norm_pos relates to the surface. What topic/data type are you subscribed to? To receive data in relation to your surface, you will have to subscribe to "surface".
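A minimal sketch of that subscription (ports are the Pupil Remote defaults; the surface name "surface 1" is the one you mentioned, and the payload keys are assumptions based on the surface datum format, so adapt as needed):
import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")        # Pupil Remote default port
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surface")

while True:
    topic, payload = sub.recv_multipart()             # topic frame + msgpack payload frame
    surface_datum = msgpack.loads(payload, raw=False)
    if surface_datum["name"] == "surface 1":          # the name you gave the surface
        for gaze in surface_datum["gaze_on_surfaces"]:
            print(gaze["norm_pos"], gaze["on_surf"])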
@papr Hi! I was telling pupil labs to send me gaze data... so then I will have to switch to surface data... Once I do this and I have defined the surface using Apriltags (I will save that surface)... I will get x and y coordinates... Will these coordinates represent the previously defined surface? After this I will have to translate those coordinates into pixels... Thanks! Pablo
If you subscribe to the surface data, it will include gaze data that has been mapped on to the surface, yes. You can convert the gaze norm pos to Pixel locations (given your surface is e.g. an image with a known resolution) by multiplying the norm_pos with the resolution.
But yes, your description of the procedure is correct.
@papr I have already subscribed through sub.setsockopt_string(zmq.SUBSCRIBE, 'surface'), before I had 'gaze'... so now the data I get come from surface.
The problem is that these data always appear when the Apriltags (with which I have previously defined the screen) are recorded by the camera, if there are no Apriltags in the visual field of the camera I do not receive any data... should the Apriltags always be present in the visual field in order to receive the data (msg.items)?
@papr In other words... as soon as the camera does not detect the markers, it does not send data... This would imply that during a visual experiment, the markers must also be present on the screen where the visual stimulus will appear?
@user-ff9c49 yes, that is correct. Without the markers, the surface cannot be tracked, and therefore it is not possible to map gaze onto it.
@papr Thanks!
My plugin uses local_g_pool.get_timestamp() to get timestamps of its events. Now I need to link these timestamps with the eye event timestamps. Are they directly relatable? I.e. can I just merge my plugin data from the exported csv files and the eye0 data from pupil_positions.csv, and then sort by timestamp to get a sequence of events?
@papr Hi again! After running the following part (to receive data from the surface):
surface_name = "screen"
topic, msg = sub.recv_multipart()
gaze_position = loads(msg, encoding="utf-8")
if gaze_position["name"] == surface_name:
    gaze_on_screen = gaze_position["gaze_on_surfaces"]
    print(gaze_on_screen)
I get the following: [{'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.333638], 'timestamp': 30107.333638}, {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.338397], 'timestamp': 30107.338397}, {'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.341707], 'timestamp': 30107.341707}, {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.346466], 'timestamp': 30107.346466}, {'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], 'confidence': 0.0, 'on_surf': False, 'base_data': ['gaze.3d.0.', 30107.349776], 'timestamp': 30107.349776}, {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [1.6850792169570923, 0.20157144963741302], 'confidence': 0.017619357408812316, 'on_surf': False, 'base_data': ['gaze.3d.1.', 30107.354535], 'timestamp': 30107.354535}, .......etc.
Which data correspond to the gaze coordinates (x, y) of the right and left eye? Would they be the following?
Right eye: 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182]
Left eye: 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663]
Yes, the number after gaze.3d. identifies which eye was used to generate the datum. If you see .01., it is a binocular datum, which is composed from both eyes.
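A small sketch of reading that identifier out of the topic string (format as in your output above):
topic = "gaze.3d.01._on_surface"       # example topic string
eye_id = topic.split(".")[2]           # "0", "1", or "01"
is_binocular = eye_id == "01"
print(eye_id, is_binocular)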
@user-ff9c49 also be aware of the on_surf field, which tells you if the gaze point was actually on the surface or not.
@user-c5fb8b @papr Hello again, I am using the Bisector class method by_ts to retrieve one datum from the gaze positions Bisector object. I used the first timestamp of my recording as argument for by_ts and the returned object is a Serialized_Dict:
Serialized_Dict(mappingproxy({'topic': 'gaze.3d.01.', 'eye_centers_3d': mappingproxy({0: (20.774985455709622, 16.915129077756458, -18.54670009963165), 1: (-40.13386724526508, 15.28138313906937, -19.847435981683557)}), 'gaze_normals_3d': mappingproxy({0: (-0.33739927208901044, 0.07517266051180616, 0.9383553710111023), 1: (-0.1724254950770747, 0.10358711446607727, 0.9795606966206931)}), 'gaze_point_3d': (-98.5101095431107, 47.00852398806294, 312.6064929173059), 'confidence': 1.0, 'timestamp': 1584.016145, 'base_data': (mappingproxy({'topic': 'pupil', 'circle_3d': mappingproxy({'center': (-3.0519413693505326, 4.727869870084321, 94.76084000172979), 'normal': (-0.6312904829970989, 0.5401029664294477, -0.5565618669396967), 'radius': 2.3487197325443194}), 'confidence': 1.0, 'timestamp': 1584.017338, 'diameter_3d': 4.697439465088639, 'ellipse': mappingproxy({'center': (76.15720887466523, 126.83202490778612), 'axes': (15.657013090213452, 30.744993262109645), 'angle': -41.10445613827838}), 'norm_pos': (0.39665212955554807, 0.33941653693861396), 'diameter': 30.744993262109645, 'sphere': mappingproxy({'center': (4.523544426614654, -1.753365727069051, 101.43958240500615), 'radius': 12.0}), ....
I am only interested in accessing the values of key 'gaze_point_3d' for my calculations. I tried a few different ways to access the values, such as .get() and trying indexes, but none is working. What should I use to access the values?
@user-a7d017 What happens when you access by key or use .get()? Do you get an error or an exception? Serialized_Dict implements Python's __getitem__(), which is used when indexing an object with square brackets [], and it also implements .get(). So I'd like to know what fails for you. Without having looked at it in more detail, I feel like this should work for example:
mydict = ... # whatever you did to get the dict
print(mydict["topic"]) # should print "gaze.3d.01"
print(mydict.get("topic")) # should print the same
@user-c5fb8b with this code:
gaze_bisector = self.g_pool.gaze_positions
gaze_datum = gaze_bisector.by_ts(1584.016145)
print(gaze_datum.get("gaze_point_3d"))
I receive this error:
Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\player.py", line 583, in player
    p.on_notify(n)
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 160, in on_notify
    self.export_data(notification["ts_window"], notification["export_dir"])
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 196, in export_data
    print(gaze_datum.get(gaze_point_3d))
NameError: name 'gaze_point_3d' is not defined
However I have just tried
print(gaze_datum['gaze_point_3d'])
which did work. Thanks
@user-c5fb8b In the future how can I format the discord posts to have different background/border for code snippets like in your posts?
@user-a7d017 for formatting: https://support.discordapp.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline-
@user-a7d017 From looking at your error message I figured out what went wrong:
Traceback (most recent call last):
  File "C:\work\pupil\pupil_src\launchables\player.py", line 583, in player
    p.on_notify(n)
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 160, in on_notify
    self.export_data(notification["ts_window"], notification["export_dir"])
  File "C:\work\pupil\pupil_src\shared_modules\raw_data_exporter.py", line 196, in export_data
    print(gaze_datum.get(gaze_point_3d))
NameError: name 'gaze_point_3d' is not defined
See the last line: print(gaze_datum.get(gaze_point_3d))
You simply forgot enclosing quotation marks around the string key. This should have worked as well: print(gaze_datum.get("gaze_point_3d"))
@papr So for example --> 'on_surf': False does this mean that the gaze is not within the predefined surface?
@user-ff9c49 correct
This is determined by the norm_pos of the mapped gaze datum
If any of the two coordinates, x or y, is not in the [0, 1] interval, on_surf is set to False
@papr So [0,1] interval refers only to the defined surface?
These are the surface boundaries in surface coordinates
That is why we check the norm_pos of the surface mapped gaze, not of the original gaze
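In other words, roughly this check (a sketch):
def on_surface(surface_norm_pos):
    # gaze counts as on the surface if both surface coordinates fall inside [0, 1]
    x, y = surface_norm_pos
    return 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0

print(on_surface((1.685, 0.2016)))   # -> False, as in your earlier output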
@papr and to directly convert these norm_pos coordinates (x, y from "gaze_on_surfaces") to pixels, it would simply be, knowing the resolution of the screen... for example: screen_size = 1920x1080 px
norm_pos coord. (x, y) --> screen coord. (x, y)
(0, 0) --> (0, 0)
(0.5, 0.5) --> (960, 540)
(1, 1) --> (1920, 1080)
Correct
Please be aware that Pixel coordinates often have their origin in the top left though
Which means 0,0 maps to 0,1080
And 1,1 to 1920,0
@papr True! Thanks for the advice!
@papr Hi! I guess the use of markers is because you do not assume that the subject is static in front of the area defined by the markers (for example a specific screen). In the case we know that the subject position is static (ensuring stabilization with a bite bar) in front of a specific screen, could this screen be defined only through the calibration process (without the use of Apriltags)? Once the calibration area is defined, would the x and y coordinates (from 0 to 1) belong only to that calibration window? or do they still belong to the whole world camera area?
@user-ff9c49 The norm_pos coordinates always are relative to the whole world camera image. Did you record the calibration procedure as well? Then it should be possible to access the coordinates of your calibration area in world camera coordinates by using the offline calibration plugin, which will detect the markers automatically in the recording. But it will be a bit tricky to access the data and then you will have to do a manual conversion of gaze data into (your custom) surface coordinates. If you did not record the calibration procedure, you can only do this fully manually.
The much easier solution would be to use apriltag markers for your surface as this is exactly what the surface tracker plugins are designed for. The calibration procedure is not intended to be used as an "area tracker".
@papr @user-c5fb8b My situation is as follows (maybe you can advise me regarding the problems I am going to describe):
I am using Pupil Labs to perform a visual experiment in which I intend to measure saccadic eye movements. The projection of the visual stimulus is done through Psychtoolbox (Matlab). So I first made a UDP connection to send the data from Python to Matlab and have the information in real time. The idea is that this information will be used in Matlab to perform a gaze-contingent experiment (through Psychtoolbox - Matlab). Mainly I have two issues:
1) Delay. There is a significant delay in sending and receiving data (from when I send it with Python to when I receive and process it in Matlab). This delay would therefore be critical for the gaze-contingent purpose.
2) Screen definition and norm_pos (x,y) conversion in pixels for gaze contingent. First I thought to place Apriltags in the frame of the screen, the problem is that the experiment must be performed in low light conditions and therefore the detection of such Apriltags would not be efficient. I have also thought about projecting them via Matlab (in the "corners of the screen" at the same time as the experiment is running), but in this case I think they can influence the results because they can distract the subject. That is why I asked if there is any way to define the screen without the use of these markers (for example defining the window through calibration, and after that calibration the variables x and y are only referred to that "calibrated area" and not to the world camera).
@papr @user-c5fb8b I also wanted to ask the following:
Connecting from (zmq.SUBSCRIBE, 'gaze'), the data from each eye is received separately in different "frames" (line items) and in random order (for example, I receive 2 lines of data from the right eye, then 1 line from the left eye, etc.). On the other hand, if I connect from (zmq.SUBSCRIBE, 'surface'), I get the data from both eyes in the same "frame" (line items)
I know that the reason for this randomness (in the case of subscription 'gaze') is because the processes of both eyes are executed independently of each other.
But why is it that in the 'gaze' subscription the information is received from each eye in different "frames" and in the case of the 'surface' subscription the information is received in the same frame?
For example:
--> Using (zmq.SUBSCRIBE, 'gaze') I get:
dict_items([('topic', 'gaze.3d.1.'), ('eye_center_3d', [-8.459387 ....... etc ....... ('norm_pos', [0.534134, 0.9829978])])
dict_items([('topic', 'gaze.3d.1.'), ('eye_center_3d', [-3.312847 ....... etc ....... ('norm_pos', [0.521251251, 0.9712988])])
dict_items([('topic', 'gaze.3d.0.'), ('eye_center_3d', [-5.324567 ....... etc ....... ('norm_pos', [0.534121312, 0.9123177])])
--> Using (zmq.SUBSCRIBE, 'surface') I get:
dict_items([{'topic': 'gaze.3d.0._on_surface', 'norm_pos': [1.6850976943969727, 0.20157243311405182], .... etc}, {'topic': 'gaze.3d.1._on_surface', 'norm_pos': [6.512446880340576, 0.9611295461654663], .... etc])
So using the 'gaze' subscription I get information from right and left eye in different dict_items lines ("frames"), but in 'surface' subscription I get the information from both eyes in the same dict_item line. Is there a reason? Thanks!
1) The lowest delay that you can get is by subscribing directly instead of having the additional step of sending the data via UDP.
2) We currently do not support markerless area-of-interest tracking.
3) The difference must be in your code and how you parse the message. All messages sent by Capture contain at least two frames:
a) topic, a string indicating the content of b)
b) payload, a msgpack-encoded dictionary
It looks like your code for receiving gaze returns information about both frames, topic and payload, while your surface code only returns information about the payload. Please be aware that the payload includes a field named "topic", which should correspond to the topic in frame a).
@papr regarding your point number 2... sorry, just to clarify: after the calibration step, to which area do the x and y coordinates (0 to 1) correspond? To the area delimited during the calibration or to the area of the entire world camera?
to the area of the entire world camera
Pupil Core is not a remote eye tracker, where you calibrate in relation to a real world area. Instead you calibrate to a coordinate system that is relative to the subject's head (the world camera).
@papr Ok! Thanks a lot!
@user-ff9c49 In your case, I would recommend giving it a try to display the markers in the corners. You can make a trial run and see if subjects actually pay attention to them
Actually, that would be a great study on its own... 🤔
@papr yes... I am currently working on this because it looks like one of the most promising solutions to that issue...
@papr Hi again! Regarding the screen detection (using Apriltags):
1) Can the distance at which the world camera is placed and the size of the screen be factors that hinder the detection of Apriltags? I am showing 4 markers, one in each corner of the screen. At the beginning the camera was placed 57 cm (approx.) from the screen and the screen dimensions are 38x30 cm (approx.). After observing serious problems in the detection, I have changed the size of the markers several times, as well as their position (closer to the center of the screen) and the distance of the camera, and inverted the background and marker colors, but I have not achieved constant and accurate detection.
2) Can the curvature distortion produced by the world camera be an issue in the detection?
3) Any advice?
@user-ff9c49 Please give our new v1.17 release a try. It includes improved marker detection by default.
But yes, distortion and size of the markers matter.
@papr Hi! Just to clarify...again a question about the conversion of norm_pos coordinates to screen pixel coordinates...
The screen defined with the markers has a resolution of 1920x1080 px and the world camera 1280x720 px. The screen origin is 0, 1080 (which corresponds to [0, 0] in norm_pos) and 1920, 0 corresponds to [1, 1] in norm_pos.
For that conversion, should I take into account the resolution of the screen defined by the markers and the resolution of the world camera, or just the screen resolution? In this case it would be: x_norm_pos_pixels = norm_pos[0] * x_screen, y_norm_pos_pixels = norm_pos[1] * y_screen * -1
@user-ff9c49 you do not have to care about the world camera resolution
Hi, is there a particular set of apriltags that is detected better than others? also, i'm printing my markers (can't show them through matlab or sth like that) so I was wondering if you guys had recommendations to facilitate their detection. Thanks!
@user-2be752 if you're not doing so already, please make sure you're using the latest release of Pupil Core software (v1.17). Regarding detection robustness of apriltags, please refer to the AprilTags publications: https://april.eecs.umich.edu/papers/details.php?name=krogius2019iros - note that performance will differ with a Pupil Core headset due to the different camera used
Quick tips: You might want to ensure that tags are large enough and have enough white border.
Any idea why I would ever get negative timestamp values?
@user-c9d205 If you sync your time with a different clock, that is definitely possible
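For reference, a minimal sketch of setting Pupil time via Pupil Remote (the 'T <time>' command; port 50020 is the default):
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:50020")   # Pupil Remote default port
req.send_string("T 0.0")               # reset Pupil time to 0.0
print(req.recv_string())               # Pupil Remote replies with a confirmation string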
I am running req.send_string('T 0.0')
before fetching the gaze and world data. Not only do they not match (not even close), the gaze timestamps begin negative
Am I doing something wrong?
@user-c5fb8b could you check if you can reproduce this ☝
I can send my code in whole, its not very complicated
@user-c9d205 that would be helpful!
Any news?
@user-c9d205 We will come back to you as soon as we have performed the test
@user-c9d205 are all of the timestamps negative? And in what range are they?
@user-c9d205 It might be that the first incoming gaze timestamps are not yet affected by the time sync. After a couple of items, the gaze and world timestamps should be in the same range. Can you check this?
The thing is I am syncing the time before even subscribing to the topics, I'll check anyway
@user-c9d205 yes, this is what I find weird, too
Timesync successful.
World timestamp: 12.775317878999886, Gaze timestamp: 12.982123878999573
World timestamp: -0.11712121599975944, Gaze timestamp: 12.986158378999335
World timestamp: -0.08361121600046317, Gaze timestamp: 12.99019287899955
World timestamp: -0.050102216000595945, Gaze timestamp: 12.994227378999767
World timestamp: -0.01659221600039018, Gaze timestamp: 12.998261878999529
World timestamp: 0.016917783999815583, Gaze timestamp: 13.00229637899929
World timestamp: 0.05042778400002135, Gaze timestamp: 13.006330878999506
World timestamp: 0.08393678399988858, Gaze timestamp: 13.010365378999722
World timestamp: 0.11744678400009434, Gaze timestamp: 0.14493528399998468
World timestamp: 0.1509567840003001, Gaze timestamp: 0.1489697840002009
World timestamp: 0.18446678399959637, Gaze timestamp: 0.15300428399996235
World timestamp: 0.21797678399980214, Gaze timestamp: 0.15703878400017857
These are the first few timestamps
It does look like after a couple of stamps they sync up, like @user-c5fb8b said
Can you reproduce this at your end?
@user-c9d205 I see the same behavior with the gaze time sync-up delay over here. We will have a look at the cause of this. Can you give us some info on what setup you are using? Particularly:
- operating system
- Pupil version
- type of headset
Ubuntu 16.04, version 1.17.6. How do I know the type of headset?
Is it a recent version of the pupil core headset? There are also some very old versions still around. And there's the VirtualReality headset extension for the HTC Vive.
very recent
Ok thanks! We will take a deeper look at the underlying issue.
Happy to help 🙂
Thank you too
Can someone please give me a simple link that will give me the Pupil Core software v1.17 for my Mac? I can't find it anywhere. Thank you!
@user-ff6753 Here you go: https://github.com/pupil-labs/pupil/releases/download/v1.17/pupil_v1.17-6-g5a872c8c_macos_x64.zip
In the future you can find the bundles on GitHub in the release section: https://github.com/pupil-labs/pupil/releases There you'll just have to scroll to the end of the current release, there's a section Assets with the bundles for all operating systems.
Cheers!!!
Okay. I've got all 3 apps loaded up on my MacBook Pro but none of them will open. I've rebooted but nothing. The glasses are plugged in. I've opened my music recording software and it doesn't recognize that anything is plugged in. Hmmmm. Any suggestions? I'm following the Pupil Labs Getting Started page. I started with Capture but nothing.
@user-ff6753 Is there any window opening if you double-click the application?
@user-ff6753 I am looking for any kind of error message you might be getting.
Absolutely nothing. They are in my apps folder. I double click and nothing. Tried to open all 3 several times.
@user-ff6753 Could you run the following command in a terminal:
/Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
and let us know what the output is?
Also, please be aware that the bundle is not supported on macOS versions older than macOS Sierra 10.12 or on Macs with an Intel Xeon processor.
I'm running 10.11.6 El Capitan. Can I get an older version?
@user-ff6753 v1.15 should be supported on your system: https://github.com/pupil-labs/pupil/releases/download/v1.15/pupil_v1.15-71-g30eb56e4_macos_x64.zip
This version doesn't work either. I made sure to unload the newer versions.
@user-ff6753 Could you run the following command in a terminal:
/Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture
And let us know what the output is?
[632] Error loading Python lib '/Applications/Pupil Capture.app/Contents/MacOS/Python': dlopen: dlopen(/Applications/Pupil Capture.app/Contents/MacOS/Python, 10): Symbol not found: _clock_getres
Referenced from: /Applications/Pupil Capture.app/Contents/MacOS/Python (which was built for Mac OS X 10.12)
Expected in: /usr/lib/libSystem.B.dylib
in /Applications/Pupil Capture.app/Contents/MacOS/Python
@user-ff6753 Oh, I just saw that I made a mistake when I looked up the macOS 10.11 compatible version. Please try v1.14 https://github.com/pupil-labs/pupil/releases/download/v1.14/pupil_v1.14-9-g20ce19d3_macos_x64_.zip
Same result unfortunately. Any other options? Thanks again for helping me!
@user-ff6753 I am sorry about that. Could you try this (very) old version, just to check. If this one does not work there is something else going on... https://github.com/pupil-labs/pupil/releases/download/v1.11/pupil_v1.11-4-gb8870a2_macos_x64_scipy_v1.1.zip
Same thing. I have a sneaking suspicion that I'm doing something wrong now. I've never had such a difficult time getting any of my multiple MIDI controllers to work.
@user-ff6753 Two questions:
1) Could you please repeat running /Applications/Pupil\ Capture.app/Contents/MacOS/pupil_capture in a terminal?
2) MIDI controllers? I do not fully understand the connection.
I'm an electronic musician/educator working with a chiropractor that specializes in clients with severe physical disabilities. We want to introduce music production to some of them using Ableton Live.
@user-ff6753 That sounds really cool! And you want to use eye tracking as a method of interaction with ableton live?
Exactly!
Illegal instruction: 4
That's all it said this time.
Nice! I would like to hear about their experience with your setup (once we get Pupil Capture running successfully 😅 )
We're not getting any younger!
In my head I see this being really cool for everyone involved.
Could you make a screenshot of the "About This Mac" window? You can open it through the "Apple" menu in the top left of your Mac. Attached you can find my machine as an example.
@user-ff6753 Mmh, v1.11 should have worked on that machine. Unfortunately, there is no way for me to debug this. There is an alternative though. Instead of running the bundled application, you could run from source. These are the instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-macos.md
Some of these steps might take a while to complete since Homebrew does not provide pre-compiled packages for macOS 10.11 anymore, as macOS 10.11 has reached its end of life, to my knowledge.
It's getting pretty late here in France and my brain is a little too fried for this. Will tackle tomorrow morning. Will you be around same bat time tomorrow for possible Q&A?
I am based in Berlin, so yes 👍
Cheers!