@papr Hi, what is the accuracy level of pupil diameter detection?
Dear pupil team, is there any possibility to try out the eye-tracking glasses before buying?
@user-07d4db You basically need to create two sets of time ranges for each surface and calculate their intersections:
1. Surface detection range, calculated from surface_events.csv: the duration between enter and exit events for each surface.
2. Gaze on surface, calculated from gaze_positions_on_surface_X.csv: the duration of entries with consecutive positive on_surf values.
You can start by just calculating 2., but note that the file only includes gaze data while the surface was visible, i.e. the results might be incorrect if the surface disappeared while the subject was looking at it.
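A minimal sketch of calculation 2., assuming the exported column names `world_timestamp` and `on_surf` (adjust them to your actual export):

```python
# Sketch: total dwell time on a surface from gaze_positions_on_surface_X.csv,
# summing the durations of consecutive runs of positive on_surf values.
import pandas as pd

def dwell_time(csv_path):
    df = pd.read_csv(csv_path)
    on = df["on_surf"].astype(bool).to_numpy()
    ts = df["world_timestamp"].to_numpy()
    total = 0.0
    run_start = None
    for i in range(len(on)):
        if on[i] and run_start is None:
            run_start = ts[i]          # run of on-surface samples begins
        elif not on[i] and run_start is not None:
            total += ts[i] - run_start  # run ends
            run_start = None
    if run_start is not None:
        total += ts[-1] - run_start     # run continues until the last sample
    return total  # seconds of gaze on the surface
```

Intersecting these runs with the enter/exit ranges from surface_events.csv then guards against the surface disappearing mid-gaze.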
@user-e2056a Check out section 4.3, "Pupil size estimates", of https://perceptual.mpi-inf.mpg.de/files/2018/04/dierkes18_etra.pdf It uses synthetic images to evaluate pupil size estimation, since it is difficult to gather ground truth in this area.
@user-4ffdc3 You should have received a private message from @user-97ae1e in regards to your request.
Hi - I am using Pupil Mobile. What are the ways in which I can display the location data, i.e. where the subjects have walked? And where is this data stored?
@user-bb9207 Hi, currently the location data is neither visualised nor exported. You would have to write a Pupil Player plugin or custom script to read the location files and export that data.
@papr To sync the different software, I used the difference between the start times, but the timestamps don't seem right. There appears to be a delay. Software A recorded a time, and I subtracted the start-time delta from it. Sorry, I haven't tried the plugin yet.
@user-c87bad how big is the delay? What magnitude are we talking about?
@papr I don't have an accurate figure, but it is less than 1 second.
What's the common range of the delay?
@user-c87bad Then this might be due to the device's time sync inaccuracy
Under a second sounds correct for normal network time sync
@papr All right! Thank you so much!
Dear papr, I've got another question: I am looking for the coordinates of the AOIs I defined with the surface tracker plugin. Can you tell me where I can find these? I only found an unreadable file called "surface_definitions" in the raw data output. A hint from you would be very helpful! Thanks a lot!
@papr Sorry again. I've checked the timestamps in the file. The start time (system) is 1564678499.62692, the start time (synced) is 30555.503445168, and the time my video appears is 1564678521.14308, but when I check the gaze-on-surface timestamps, they start at 30562.56187. So there is a difference of around 61139.581475168. Is that in ms? Also, I used the plugin, but I found there is no difference between the fixation and gaze timestamps. Is that right?
@user-07d4db the surfaces are defined as homographies. These basically give you matrices which you can use to transform between points in the world coordinate system and points in the surface coordinate system, and vice versa. I do not know their names by heart, though.
@user-c87bad timestamps are in seconds
Which timestamps did you subtract to get that difference?
I use the start time system - start time synced.
Also, gaze on surface does not start at the first gaze sample, but at the time the surface was first detected. This is not equivalent to the start time in info.csv
And then use my UNIX timestamp - this difference
yes, the time video appear should be the time when surface is detected
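For reference, the clock conversion discussed above can be written down as a tiny helper (using the "start time (system)" and "start time (synced)" values from info.csv; the example numbers are the ones quoted in this thread):

```python
# Sketch: converting exported Pupil timestamps to Unix time using the
# two start times from info.csv.
def pupil_to_unix(pupil_ts, start_time_system, start_time_synced):
    # Both clocks tick in seconds; only their origins differ.
    offset = start_time_system - start_time_synced
    return pupil_ts + offset

# With the values quoted above:
# pupil_to_unix(30562.56187, 1564678499.62692, 30555.503445168)
# ≈ 1564678506.685 (Unix seconds)
```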
Thank you papr! But I am still confused: where can I find the coordinates of my surfaces? I want to know them so that I am able to check whether the fixations contained in the fixations.csv file were on the surface or not
@papr So the final difference is about 14 sec. That's too big.
@user-07d4db Which operating system are you on?
@user-c87bad Mmh, the question is which device is out of sync. Hard to tell.
What do you mean by operating system? I am conducting my master's thesis using Pupil Labs. Now I am trying to figure out how I can match the fixations with the coordinates of the surfaces I defined. Unfortunately I cannot use the fixations-on-surface files due to problems with the reproducibility of the data
The new version will export fixations_on_surface again, which includes the above information already. If you use Windows or Linux, I can link you the prerelease version.
I see, ok.
How player calculates this is by multiplying the fixations norm_pos with the world_to_surface transform matrix, yielding a location within the surface coordinate system. Then we simply check if the resulting point is between 0 and 1 for each dimension.
The matrix is exported in surf_positions<name>.csv as the img_to_surf_trans field
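As a sketch, applying that exported homography to a point could look like this (plain numpy as a stand-in for cv2.perspectiveTransform; the 3x3 matrix is the img_to_surf_trans value parsed from the csv):

```python
# Sketch: map a normalised scene point onto a surface with its homography,
# then check whether the result lies within the unit square, i.e. on the surface.
import numpy as np

def map_to_surface(norm_pos, img_to_surf_trans):
    x, y = norm_pos
    p = np.array([x, y, 1.0])               # homogeneous coordinates
    q = np.asarray(img_to_surf_trans) @ p
    q /= q[2]                               # perspective divide
    on_surf = 0.0 <= q[0] <= 1.0 and 0.0 <= q[1] <= 1.0
    return (q[0], q[1]), on_surf
```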
thank you very much. So is it possible to match the normalised position coordinates of a fixation in the fixations.csv file with the information in surf_positions<name>.csv?
Yes, that is what fixations_on_surface is usually for
Okay! Mhh, well, we cannot use this file unfortunately. Could you explain further what m_to_screen and m_from_screen mean?
So are these the two relevant columns of the file?
The first step is to understand what a homography is: https://en.m.wikipedia.org/wiki/Homography_(computer_vision)
They are basically matrices to convert points between coordinate systems.
The coordinate systems in question are the scene image and the surface.
@papr But I just use time.time() and the keyboard. The difference should not be so large. I wonder if I am doing it the wrong way.
@user-c87bad Ah wait, you are using the same computer to record the data of both "sensors"?
Please just have a look at clock.py. Without seeing your code it is difficult for me to judge whether something is going wrong.
Yes, I am using the same computer. But is that a big problem?
No, then it is clear that something is wrong, since capture uses time.time() to calculate start time (system)
but wait a second
you said unix time of keyboard press - start time difference, correct?
and this result, what do you compare it to?
You basically need an event that is recorded in capture and externally to be able to calculate the time sync offset
I compare that with the first world timestamp of the gaze on the surface, because when I press the key, the video starts, which means the surface will be detected.
If it is about the first surface detection, then check out surface_events.csv
Also, can you share the recording with data@pupil-labs.com , such that I can try to better understand the setup?
@papr Concerning @user-07d4db 's question: Did I understand you correctly that there is a different matrix for every surface to transform the fixations with? Is the surface coordinate system two-dimensional or three-dimensional? And where exactly can those matrices be found?
@user-0fde83 There are two matrices (img_to_surf_trans and surf_to_img_trans) for each surface, for each frame. They are exported in surf_positions<name>.csv
I need to clarify: this is the pre-v1.13 behavior. The >= v1.13 behavior is more complicated, since it takes the distortion of the world camera into account
But to understand the general problem surface mapping tries to solve, it is important to understand the pre v1.13 behavior first
@papr I am trying to! So the first step would be to take the fixation data and transform it to the surface coordinate system using img_to_surf_trans, is that right?
Hi everyone, I am currently facing an issue with Pupil capture and am desperately trying to find a solution:
https://github.com/pupil-labs/pupil/issues/1570
Problem: The recorded world camera video is often shorter than the eye camera videos, which is a consequence of missing frames. These frames appear as long grey periods when playing the files in Pupil Player.
Example file durations (m:s): eye 35:28 / world 18:53; eye 30:36 / world 27:07
I have not had issues in the past 1-2 months, but I started doing longer recordings (~40 minutes) and it started appearing. I am also tracking 7 surfaces in total, but rarely more than 3 are visible at once.
Configuration: Pupil Capture 1.12.17; OS: Ubuntu 18.04, up to date; i5-7200U CPU @ 2.50GHz × 4, 15.6 GiB RAM, Intel HD Graphics 620; Pupil Core with fast world camera and 200Hz monocular eye camera; world camera: 1280x720 at 60fps; eye camera: 400x400 at 120fps
This is the only thing holding back my study so if anyone has a solution, I would be extremely thankful.
@papr Thank you, I saw in the document that the pupil radius error is 0.01mm from a true value of 2mm. However, in 2d mode the pupil size unit is not mm but pixels. What is the accuracy of the pupil diameter measurement in 2d mode?
@papr Hi! I've sent the data and the enter time in surface_events is equal to my first gaze time on surface.
Thank you so much!
@user-0fde83 No, the actual first step (which we have not talked about explicitly yet) is to correlate fixations to world frames, so that you know which surface transform matrix to use.
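That correlation step could be sketched as follows (hypothetical helper, assuming the world timestamps are sorted ascending):

```python
# Sketch: find the index of the world frame active at each given timestamp,
# so the matching surface transform matrix for that frame can be looked up.
import numpy as np

def frame_index_for(timestamps, world_timestamps):
    world_timestamps = np.asarray(world_timestamps)
    # Index of the last world frame that started at or before each timestamp.
    idx = np.searchsorted(world_timestamps, timestamps, side="right") - 1
    return np.clip(idx, 0, len(world_timestamps) - 1)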
@user-34688d I saw that you posted the issue on Github before. Please just link the Github issue next time instead of copy-pasting the issue text, thank you.
@papr added link
@user-34688d I will write an answer on Github for persistence. Please be patient, since you are asking multiple questions that all need time to consider
@papr +1 thanks
@user-0fde83 Regarding correlation of fixations: Fixations often span multiple world frames, which makes it difficult to decide which transform matrix to use.
I think that the surface tracker tries to deduplicate these. Depending on the implementation, this might be the reason for fixations_on_surface not being reproducible reliably.
@papr What do you think is duplicated there?
It is important to understand that fixations are calculated from the point of view of the scene camera, i.e. if gaze does not move within the scene image, it is considered a fixation. A fixation can span multiple scene frames. Is this clear so far? (Surface tracking is one of the more complex processes in Pupil, so explaining it is not that easy for me)
Thanks a lot for trying; I am doing the best I can to understand. I got that so far, yes.
Ok, great. So let's assume we have three consecutive world frames during which we detected a fixation.
And during these we detect a moving surface, yielding three different transformation matrices.
A fixation with three frames, got it
Therefore, mapping the fixation with these matrices yields three different positions within the surface
i.e. a fixation in scene camera space != fixation in surface space
And looking at the implementation, until now we have cheated our way around this fact by throwing two of the three mappings away.
One transformation matrix for each frame, right? How did you choose the frame to keep?
That is exactly the problem: we do not choose an explicit frame, but use a Python dictionary with the fixation ids as keys.
It is totally unpredictable which of the three frames is being removed. The only thing that we are sure of is that one surface mapping remains.
And as long as the surface does not move in relation to the scene camera, that is totally fine, since the mapping should result in roughly the same spot.
But if the surface moves during these three frames, we get something like a smooth pursuit movement from the point of view of the surface.
This is a fundamental problem when detecting fixations in scene camera space instead of a real world coordinate system.
The best solution for this would be to do head pose estimation, map gaze locations into the head coordinate system, and calculate fixations within this coordinate system.
What happens if some surfaces are only detected in a few of these frames? Does a fixation that would usually appear on the document disappear then?
To finish the thought: but at that point you are chaining so many estimation processes together that small estimation errors quickly accumulate into big errors that make the final fixation detection impossible.
I am not sure if I understand the question. If a fixation and a surface detection overlap temporally, the surface mapper will try to map the fixation to the surface.
A wrong thought, don't bother. Wouldn't it be possible, if I have three frames in the fixation, to cut the fixation into three parts, convert each part with its matrix, and then merge them into a new fixation after the transformation?
A fixation does not have parts. A fixation is just the mean of all gaze data during that period.
What I would do instead is to map all gaze independently onto the surface, and then run a fixation algorithm on the result.
The problem is that it is not possible to calculate angular differences between gaze points in this coordinate system.
Ok, i get the problem. What is your solution in 1.13 and after?
The surface is a normalised coordinate system. So a fixation algorithm would have to get the real surface size as an input to judge whether something is a fixation.
v1.13 and above do not handle fixations differently. v1.13 just introduces correct handling of the world camera's distortion.
But what you could also do with the existing tools:
This gives you a fixation in surface coordinates, which should be an accurate mapping of the scene-coordinate fixation, given that the surface did not move much during the fixation.
Steps 1.-3. I understand without a problem, but which results do you mean by "surface mapping results"?
The x_norm and y_norm fields in gaze_positions_on_surface<name>.csv. The appropriate gaze data can be identified by its timestamp
So use the timestamps from 3. to find the gaze mapping results in 1.
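Putting the last two steps together, a sketch of the suggested workaround (assuming the gaze_timestamp, x_norm and y_norm columns of the surface export, and the fixation window given in seconds):

```python
# Sketch: average the surface-mapped gaze samples that fall inside a
# fixation's time window, matched by timestamp, to get a fixation
# position in surface coordinates.
import pandas as pd

def fixation_on_surface(gaze_on_surf: pd.DataFrame, start_ts: float, duration_s: float):
    end_ts = start_ts + duration_s
    window = gaze_on_surf[
        (gaze_on_surf["gaze_timestamp"] >= start_ts)
        & (gaze_on_surf["gaze_timestamp"] <= end_ts)
    ]
    if window.empty:
        return None  # fixation and surface detection did not overlap
    return window["x_norm"].mean(), window["y_norm"].mean()
```

As noted above, this is only a good approximation if the surface did not move much relative to the scene camera during the fixation.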
As we searched today for an alternative way to get to the data we need, we figured that gaze_positions_on_surface has the same reproducibility problem as fixations_on_surface
Looking at the code, I do not see a reason why this should be the case.
That is what we saw; I don't know why
Ok, let's do an experiment. I give you a recording that has pre-detected gaze and surfaces. We both use Player v1.12.17 to export it, and compare the gaze_positions_on_surface results.
Sounds good to me
Which Player version should I use?
My result
Ok, just give me a minute
Sure
Here is my data
Doesn't look like the same output to me, also concerning gaze_positions_on_surface, or did I miss something?
I am investigating the differences right now
I am starting to wonder: they look the same as far as I can tell, but seem to have different sizes in terms of KB
So how did you make your comparisons today?
Also, with fixations_on_surface the file size was a pretty good measure to tell whether a file was the same or not. Why do the sizes differ?
This might be due to csv dialect differences
File size is definitely not the way to go when it comes to comparing the contents of a csv file
I am currently trying to read the csv with pandas and to compare values
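Such a comparison might look roughly like this (a sketch, not the exact script used here):

```python
# Sketch: compare two csv exports column by column, ignoring csv-dialect
# differences (whitespace, float formatting) that change the file size
# but not the values.
import pandas as pd

def compare_exports(path_a, path_b, atol=1e-9):
    a, b = pd.read_csv(path_a), pd.read_csv(path_b)
    if list(a.columns) != list(b.columns) or len(a) != len(b):
        return False
    for col in a.columns:
        if a[col].dtype.kind in "fc":  # float columns: compare with tolerance
            if (a[col] - b[col]).abs().max() > atol:
                return False
        elif not a[col].equals(b[col]):
            return False
    return True
```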
Ok, thank you. I will look into our files to see whether there is a difference in the content
@user-0fde83 all gaze_positions_on_surface_* files are equal
Agreed. I downloaded the data multiple times here too and it always turned out to be the same, but so is the fixations_on_surface data. We thought the mistake was the same, because sometimes the data is consistent and sometimes it is not. This file seems to be consistent. Do you get the same fixations? If you do, do you have a data set where you don't, and does gaze_positions_on_surface change there?
I forgot to detect fixations.
Now the gaze files are not equal anymore oO
Where does it differ?
```
column           equal
---
id               True
start_timestamp  True
duration         True
start_frame      True
end_frame        True
norm_pos_x       False
norm_pos_y       False
x_scaled         False
y_scaled         False
on_srf           True
```
I guess we proved the gaze files are affected as well.
This is due to m_to_screen being different for the two recordings
Ah, ok, so no proof
But this shouldn't be the case though
Could this change also be responsible for missing values in fixations_on_surface or is it another mistake?
different homographies indicate either changed surface definitions or different marker detections, where the latter can lead to different amount of mapped data
ok, it looks like the transformations are different due to white space
Oh ok, and the columns in gaze_* that differed only differ by 4.919842311323919e-12
I just checked our data once more. The number of gaze samples differs. I checked how data went missing in fixations_on_surface and noticed that every "bigger" file included all smaller ones, plus some extra fixations, often many consecutive ones. Might it altogether be a problem with the markers? If they are sometimes not detected and the surfaces therefore aren't defined, the fixations and gazes would not show up in the files and we would not be able to replicate them. We have no problems at all when all markers are visible at all times
These files are from the same person, just downloaded twice. We have more, but unfortunately the files get too big
We have markers that are a lot harder to detect, and I think that is why the error is getting bigger
Ok, but for our experiment the conclusion is: The files differ but only to a very very small value: https://gist.github.com/papr/fbf1c86e21d6594927c443b4b13949a3
I added the maximum difference for each column for each file that was different
Have you taken a look at the files i just sent?
No, I did not. It is 23:29 over here. I am going to bed.
Over here too.
Good night, perhaps we can try to figure this out tomorrow
Theoretically, the marker detection should be reproducible. But I know it does not perform well if the markers are small.
In v1.15 we will add support for apriltags. This should eliminate a lot of these "bad detection" cases (given that you use apriltags in your recording)
@user-0fde83 Did you create a Github issue for this already? Please add the files there + the above description of the problem.
I will try to add a summary of my findings as well
Up to now I have not; I will try to when I can, but I am still getting familiar with Github
Ok, good night!
Good night
Hello everyone, I plan to code a piece of software to synchronize Pupil Labs data with other data. I see that you use ZMQ. I looked at the connection.cs code from hmd-eyes to see how to code the receiving part, but I don't understand everything (especially because I don't know how the messages are formatted). Do you have some docs explaining how to write a receiving part (or some documented code)? Thx
@user-ee433b Are you bound to use c#?
Not sure, I'll have to work with some other people and I don't know which language they prefer
(As I'm the one "bringing" Pupil Labs, I'm in charge of the communication with Pupil Labs)
Understood
I think they'll use C++, but it's just a guess
You came to the correct place to communicate
🙂
After reading that, the hmd-eyes code should make more sense to you.
I think I've already read that. To help me, do you have a doc explaining the format of the messages broadcast by Capture?
You mean which fields each type of message body has?
There is no complete overview over that. Gaze data has different fields depending on e.g. if it was mapped binocularly or monocularly
For example: what is the data from a msgType: frame.eye.0?
So messages published by the frame publisher are special, since they are the only case where the message has three instead of two frames
frame.eye.0
The format can be selected either in the frame publisher UI or via the start arguments when started via notification
Ok, so the images from the cam can be streamed alongside the results of the "pupil processing"
correct
Nearly everything that Capture produces as data can be streamed.
great!
I am trying to understand connection.cs: it seems that the only time you extract data from a msg is in 'public void InitializeSubscriptionSocket(string topic)'. I guess I am missing something, because where do you read all the received data afterwards? I guess it's with the event/delegate 'OnReceiveData' in 'PupilTools', used in 'PupilDemo' for example, where you get the data from the dictionary. But where do you build this dictionary from the msg?
Connection is only about the connection itself, not about the semantics of the data transmitted through it
@user-ee433b check out gaze and gaze listener for gaze data
There are separate classes for separate types of messages.
moonshot question here @papr... but does Pupil have any sets in stock it would be willing to lend to a recent grad getting into UX research? I found online that Pupil is one of the most affordable and consumer-friendly headsets, but it's still too expensive for me :(. Just moved to Berlin and was pleasantly surprised that Pupil is based out of here! Anyway, moonshot question
Hi @papr , can I ask a couple of questions? 1. I saw in the document that the pupil radius error is 0.01mm from a true value of 2mm. However, in 2d mode the pupil size unit is not mm but pixels. What is the accuracy of the pupil diameter measurement in 2d mode?
@papr , related to the last question, which of these files are necessary to ensure the accuracy of gaze, fixation, blink and pupil size data?
Dear papr, on Thursday night you had a longer conversation with @user-0fde83, a colleague of mine working with me on my master's thesis project. We are both a bit clueless about how to get reliable and valid data out of my experiment, given the issue of unreproducible data. I want to measure the number and mean duration of fixations within specified AOIs, using the surface tracker plugin. Do you have any proposal for how we can cope with this problem in an efficient way? Thank you very much!
@papr good day! Is it possible to open recordings which were interrupted by a Windows error / the laptop discharging / sleep mode onset? The recording folder contains all the files, but Pupil Player can't open it correctly. It shows only a grey screen, as if the world video were not available, but at the same time it shows the dynamics of the confidence of eye 0/1 (screenshot attached)
@here We are pleased to announce the latest release of Pupil software v1.14! We highly recommend downloading the latest application bundles: https://github.com/pupil-labs/pupil/releases/tag/v1.14
hi @papr I've just installed Pupil Capture (latest version) on a brand new laptop running Ubuntu 18.04. When I load the software it fails to find any of the eye-tracker cameras (I just get three blank gray screens). One of the error messages on the world cam screen is "init failed. Capture started in ghost mode...". Is there something else I need to install for the cameras to be recognized? Thanks!
@user-e7102b are they listed as unknown in the UVC manager menu?
If so, you need to add your user to the plugdev group.
Afterwards, restart the computer and try again.
@user-99f716 Please contact info@pupil-labs.com in this regard
@user-e2056a 1. Please also check out "A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000": https://peerj.com/articles/7086/ 2. gaze, fixation, blink and pupil size data can be generated after-the-effect in Player as long as the world and eye videos and their timestamp files were correctly saved.
@user-bc5d02 Could you share the recording with data@pupil-labs.com ? I can check what can be recovered. But I am afraid that the video is most likely not recoverable.
@user-07d4db I still have to look into the reason why the gaze data on surfaces is not reproducible.
@user-07d4db @user-0fde83 regarding the exports above that have different amounts of data: do you still have the export_info.csv file for both?
If yes, please share them. If not, please try to reproduce the case where one export is a subset of the other. Please be aware that there was an issue in v1.13 that caused fewer surface detections than in versions < v1.13, resulting in a subset of mapped gaze data.
If you want, we can try to rerun our reproduction experiment on your own dataset, instead of the Pupil Labs demo surface tracking experiment.
Dear @papr, thank you very much for your response! Which export_info.csv file do you mean? The ones you sent us, or the ones we produced with our data? I think @user-0fde83 did the experiment with you with Player version v1.12, so according to your explanation there shouldn't be the issue with the reduced surface detection. Yes, it would be great if we could rerun the experiment with our data and you had a look at it! Which Player version should I use? The newest one, v1.14?
@user-07d4db The files that we have exchanged so far did not include an export_info.csv. We have only exchanged the surfaces subfolder of the actual export result. The export result includes the export_info.csv file
And yes, please use the newly released v1.14 for the experiment.
Okay! Unfortunately @user-0fde83 has the only complete data of the experiment you did, and he is currently on vacation, so I cannot send you the original export_info.csv files. I will tell him to send them as soon as I can!
Great! I will download the Player and retry the experiment. Shall I also send you one of our data folders to use for the experiment?
I was not referring to the export_info.csv file from our experiment, but to the other uploads (Data1.zip and Data2.zip), which have a different amount of exported gaze samples. If you do not have access right now, that is fine.
Yes, I need one of your recordings
And I have a further question: you proposed the following steps to us: 1. Use the surface tracker to map gaze to surfaces 2. Use the fixation detector to detect fixations 3. Find all gaze that belongs to a fixation 4. Find the surface mapping results for these samples 5. Calculate the mean of the mapped gaze norm_pos values. Could this be a solution as well? Or only if gaze_positions_on_surface<name>.csv is correct?
It should contain existing surface definitions
This is only a solution if the gaze samples are close enough to each other. Missing samples break the approach
Okay! Thank you! I am sorry to ask, but how can I give you access to the complete data folder of a participant, as it is so big?
@user-07d4db see my direct message
thanks! Just give me a minute to install everything and send you the folder
Do you need the eye and world videos to have a look at the output? Because they need a lot of space, and I cannot send more than 2 GB for free
world video yes, eye no
okay
@papr Thanks, adding the user to plugdev brought the cameras to life
Hey @papr , I have another question. @user-dfeeb9 and I wrote a Pupil middleman script last year that we shared with the Pupil community to allow Pupil Capture to be controlled by MATLAB. When we wrote it, the version of Capture was ~v1.5. I recently installed a newer version of Capture on a different machine and noticed that all commands seemed to be working (e.g. start/stop recording) except that annotations were not being logged in Capture. Presumably this is due to the annotation format change in v1.9? I'd like to update the script so that annotations work with the newer versions of Capture. Given that annotations are no longer special types of notifications, would it just be a case of sending annotations the same way we currently send other commands (e.g. start/stop rec)? Thanks
@user-e7102b you basically just need to remove the notify. prefix from the message topic
There is an example Python script on how to send remote annotations in the pupil-helpers repository. You might also need to request a different port.
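For illustration, the message body for a post-v1.9 annotation could be built like this (field names based on the pupil-helpers remote annotations example; the dict still has to be msgpack-serialised and sent as a [topic, payload] multipart message over the IPC backbone):

```python
# Sketch: build an annotation message for Capture >= v1.9.
# The topic is a plain "annotation.<label>" topic, not a
# "notify."-prefixed notification as in older versions.
def make_annotation(label, timestamp, duration=0.0):
    return {
        "topic": "annotation." + label,
        "label": label,
        "timestamp": timestamp,  # Pupil time, not Unix time
        "duration": duration,    # seconds
    }
```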
Hi, I'm trying to activate my Pupil device on a Windows 10 machine. The cameras are classified under the 'Cameras' section in Device Manager instead of the new category, "libusbK Usb Devices". As a result, the cameras are not recognized and I can't record videos with the device. I followed the "Windows driver troubleshooting" section - https://docs.pupil-labs.com/#troubleshooting - including manual installation with Zadig (https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md), but the drivers still appear under 'Cameras', except for the 'integrated_webcam_hd', which appears under 'libusbK Usb Devices'. Thanks in advance.
@user-b37f66 please contact info@pupil-labs.com with this information.
papr, I'm modifying blink_detection.py and I would like some clarification on the following code:
```python
activity = np.fromiter((pp["confidence"] for pp in self.history), dtype=float)
blink_filter = np.ones(filter_size) / filter_size
blink_filter[filter_size // 2 :] *= -1

if filter_size % 2 == 1:  # make filter symmetrical
    blink_filter[filter_size // 2] = 0.0

# The theoretical response maximum is +-0.5
# Response of +-0.45 seems sufficient for a confidence of 1.
filter_response = activity @ blink_filter / 0.45

if (
    -self.offset_confidence_threshold
    <= filter_response
    <= self.onset_confidence_threshold
):
    return  # response cannot be classified as blink onset or offset
elif filter_response > self.onset_confidence_threshold:
    blink_type = "onset"
else:
    blink_type = "offset"

confidence = min(abs(filter_response), 1.0)  # clamp conf. value at 1.
```
Hello, I noticed that the 3d model is fairly stable at 400x400 pixels (120Hz); however, when increasing the frame rate to 200Hz (192x192 pixels), the model becomes very unstable and changes often, even though the headgear has not moved. It would be a feature request, but I think it would be important to be able to 'lock' the model when using real-time measurements, since in a controlled condition we can fairly safely assume the model shouldn't change significantly; lower-resolution jitter shouldn't change the model every few (5-10) seconds. Is there any other way to decrease the model instability? Thanks!
@user-7890df This is already on our todo list
Hey there, I am facing a problematic situation right now. During recording, the world camera disconnects and reconnects (this has only happened to me on Windows 10). If I open the file in a video player, it plays normally and I can see that there is not too much missing data. If I open it in Pupil Player, the first 50% of the recording is greyed out and the other 50% is ALL of the recording.
@user-34688d which version of Player do you use?
1.12.17 on windows 10
Your issue is actually different from @user-bc5d02 's. Please upgrade to v1.14 which includes a work-around for your issue.
ok will do. btw I updated the issue on github, the fps drop was due to the computer I was using being too slow.
(I posted it a few days ago)
turning off online surface tracking also helped a lot.
I thought I could run it from a laptop, but it turns out you need some serious single-core performance and a good hard drive to run everything at full speed.
@papr minimizing the ID0 window crashes the process in Capture 1.14.
The issue seems to be:
```
launchables/eye.py, line 795, in eye
shared_modules/gl_utils/utils.py, line 96, in make_coord_system_pixel_based
OpenGL/error.py, line 232, in glCheckError
OpenGL.error.GLError: GLError(err=1281, description=b'invalid value',
    baseOperation=glOrtho, cArguments=(0, 0, 0, 0, -1, 1))
```
We recently purchased the Pupil Core headset. I've used it about 3 times and it worked as expected; however, today the two eye cams have stopped working. The world cam works, but the eye cams don't appear under the sources using the 'Local USB' manager. I have tried two other computers using the most recent Pupil Capture software (v1.14) and the eye cams do not load on either computer.
When I turn off eye0 and eye1 and turn them back on, it says "init failed" and they start in ghost mode. If I run 'system_profiler SPUSBDataType' on all three computers, I only see Cam1 and not Eye0 or Eye1. I have tried a different USB-C and USB cable without luck. I have also confirmed the white connectors for both eye cams are secure.
Does anyone have any suggestions on troubleshooting the two eye cams further?
@user-34688d You can use Capture v1.12 and Player v1.14 btw
@user-365094 You wrote an email to info@pupil-labs.com about the same issue, is that correct?
@papr How will it deal with surfaces? I thought they were updated between version 1.12 and 1.13, and I get a message that they are deprecated.
@user-34688d Just run the surface detection offline and use Capture only for recording the video. Or do you need the surface data in real time?
And do I understand it correctly, that the crash-on-minimizing only happens in Capture v1.14, and not in v1.12?
@papr about the crash yes. v1.12.17 does not crash when minimizing, v1.14.6 does. Both on Windows 10, same hardware config.
@papr yes, that's correct
@user-365094 You should receive a response via email shortly
@papr Answering your earlier question, we do not need online tracking right now, but it was very convenient for defining our surfaces. I'll try to do as you said. Record with 1.12.17, process with player v1.14.6
If you want to save further CPU during recording, you can disable realtime pupil detection and run it offline in Player
@papr Is there a way to have surface definitions carried over between measurement sessions?
You should be able to set up the surface definitions in one recording, and copy the surface_definitions file to other recordings
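That copy step can be scripted; a minimal sketch, assuming the standard Pupil recording folder layout where `surface_definitions` sits at the top level of each recording directory (the function name and paths here are illustrative, not part of the Pupil software):

```python
import shutil
from pathlib import Path

def copy_surface_definitions(template_rec, target_recs):
    """Copy the surface_definitions file from one Pupil recording
    directory into other recording directories so the same surfaces
    are available in each of them."""
    src = Path(template_rec) / "surface_definitions"
    for rec in target_recs:
        shutil.copy(src, Path(rec) / "surface_definitions")
```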
ok will give it a try
dear @papr, please see my private message
@papr Copy-pasting the surface_definitions file did not work. New measurements with the copied surface_definitions file cause Pupil Player to stop responding after loading the file.
@user-34688d Mmh, this is unfortunate. Please create a new github issue requesting this feature
@papr ok, and in the meanwhile do you have a suggestion on how to: 1. record with 1.12.17 2. use player v1.14.6 without having to redefine all surfaces?
I would run Capture v1.14, define the surfaces, turn off surface tracker, start a recording without minimizing the eye window
Not tested though
yes we tried doing this, but it crashed after entering a surface name and pressing enter
should I open a new issue for that too?
one sec
We cannot reproduce the surface rename issue
Could you share the capture.log file after Capture crashing?
hmm it seems to be working right now.
I'll not minimize the eye window then and we are good.
ok. Just make sure to backup the log files when something crashes
Hi, do you have documentation on CE conformity for your eye tracking cameras with illumination? I am trying to get ethics approval for upcoming experiments and they are very picky. Thanks in advance.
I could not find anything on the webpage.
@papr I have some further problems with sending remote annotations to Pupil capture. When I send the remote annotations while running Pupil Capture it seems to be working fine, and I see correct timestamps in capture. However, as mentioned before, I cannot export the annotations from the recording with pupil player. Now you helped me to read the annotation.pldata file manually, and I get the different annotations there, however, they all have the same timestamps which is incorrect.
My set-up is a bit complicated, as I am communicating from one computer to a second computer via an internet socket; the second computer is running Pupil Capture and sends the annotations upon receiving input from the first computer. I adapted the remote_annotations.py script in order to do this, which still includes the time synchronisation, and I receive the message that the time sync is successful. Do you know what might be going wrong?
Here is my script:
@user-14d189 Please check my private message
Pupil Player - Does anyone know how I can add annotations with a duration?
Sorry if this is a FAQ, but what's the eye camera image size that was used for development of the 2D tracker. I just noticed that if I downscale my 800x600 frames to 400x200 or even 300x150 the tracking results get dramatically better
I think this is because discontinuities in the pupil edge outlines increase in scale as the image scales, but I haven't figured out how to scale them for a new resolution. But I'm happy to just downscale the frames as well
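As an illustration of the downscaling described above, a minimal NumPy sketch of 2x2 block averaging (cv2.resize with INTER_AREA is the usual alternative; this is just the simplest form of the idea, not part of the Pupil code):

```python
import numpy as np

def downscale_2x(gray):
    """Average 2x2 pixel blocks of a grayscale eye image.
    Height and width must be even."""
    h, w = gray.shape
    return gray.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```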
@user-3e42aa Just curious here. When you say tracking results, are you talking of the tracking confidence?
The confidence yes, but with the bigger resolution it doesn't often even give an estimate (and the confidence gets to 0)
@user-bb9207 unfortunately, this is not possible
@user-3e42aa That might be due to better default values for the pupil-min and pupil-max values in the lower resolution settings. Please change the eye windows mode to "algorithm", activate the higher resolution , and try to make a screenshot of a situation that yields bad detection results.
@papr It's not the pupil-min and pupil-max. And I'm actually using directly the C++ interface to Detector2D
@user-3e42aa Ok, but this does not exclude the possibility that the defaults of these variables affect the result
I did control for them
A sec
But generally, yes, it might be possible, that the detector performs better on lower resolution images
@user-3e42aa don't worry, I believe you if you say that you controlled for it
Just wanted to state the defaults might be a reason for the performance differences.
This gives .99 confidence when 400x300
This one 0.0 with same settings (apart from adjusted min/max pupil for resolution)
That's full 800x600
Excuse the ugly OpenCV gui
Could you try turning off the coarse detection option?
I'm not using it, as it's done cython side
@user-3e42aa Ah correct, I was just looking at it
Then I cannot tell you what exactly is causing this issue. I mean the screenshot shows that the pupil edges were detected well. For some reason the ellipse fitting fails
I think it's a discontinuity in the curves
But there should be at least an attempt at fitting an ellipse to it
That the result is no ellipse at all is what I am confused about
I don't fully understand how the algorithm works, but my hunch is that it follows the contours and gives up due to a too-large discontinuity
@user-3e42aa We use cv::fitEllipse
to fit ellipses on the contour candidates (light blue lines)
There's quite a bit of other processing going on also, so at least for me it's hard to say where the resolution-dependency stems from
cv::findContours oddly takes no parameters, so there may be some dependency already there. In the larger resolution image the contour is broken at least in the bottom
OpenCV may actually be quite difficult to make resolution independent. E.g. the canny detector aperture is capped at some quite low value. Also the morphological operations are defined in absolute pixel units. I think the way to go is to keep to lower resolution eye videos
@user-3e42aa which is more efficient in many ways anyway
Hi, I am trying to export heatmaps with the Surface Tracker Offline plugin in Pupil Player v1.14 (Windows). The data was recorded with Pupil Capture v1.12. I specified the surfaces and added widths and heights, e.g., 200.0 x 500.0. Heatmap Smoothness is set to 0.5. In the preview with the Show Heatmap option on, it looks all fine. But the exported PNGs just have 1-3 KB and resolutions of 31 x 77 px or even less. From the discussion above, I understand that I have to adjust the Heatmap Smoothness parameter to change the number of bins in the histogram which seems to influence the resolution of the heatmaps. I tried smoothness values of 0, 0.5 and 1.0. In all cases the exported heatmaps are tiny. What can I do to get heatmaps with reasonable resolutions, e.g., 200 x 500 px?
@user-f7f0ad I think @marc can answer this
@user-f7f0ad If you set the smoothness to smaller values you should get large heatmaps. E.g. at zero you should get a heatmap with width 1000 px, which is the largest possible output. The resolution of the heatmap is, as you noted, equal to the number of bins of the histogram. If you choose to have a coarse heatmap with few bins, you will get a small heatmap. You can increase the resolution of the heatmap image (using nearest-neighbor interpolation) to increase the size of the heatmap for easier presentation.
Since the current scheme for setting resolution and smoothness of the heatmap does not seem to be intuitive for many users we are considering a change to allow users to explicitly specify the resolution in terms of bins as well as the resolution in terms of image pixels for the output image.
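The nearest-neighbor upscaling mentioned above amounts to repeating each bin's pixel; a small NumPy sketch (cv2.resize with INTER_NEAREST would do the same):

```python
import numpy as np

def upscale_nearest(heatmap, factor):
    """Enlarge a small heatmap (one pixel per bin) by an integer factor
    without smoothing, by repeating rows and columns."""
    return np.repeat(np.repeat(heatmap, factor, axis=0), factor, axis=1)
```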
@marc Thanks for the feedback. When I run the export with a smoothness of 0.0, the PNGs have a resolution of 1 x 1 px. Do I have to change the width and height settings?
@user-f7f0ad The size of the surface must be greater than zero, but you should not be able to set it to zero in the first place using the UI. I am not able to reproduce this behavior with my recordings. Could you upload an example recording that shows this behavior somewhere and send a link to [email removed] so I can check what happens in detail?
@marc I will try to provide an example recording and send you the link. Thanks.
Hi!!! Under what conditions can there be a fixation in an area but no gaze points for the same specific time?
Also, I found there is a difference between the 'world_index' from gaze_positions_on_surface_<name>.csv and the 'start_frame_index' from fixations.csv for the same timestamp; sometimes there is a one-frame difference. To sync, would it be better to use the frame index or the timestamp?
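One common approach for a sync question like this is to match on timestamps rather than frame indices, since a sample near a frame boundary can be assigned to either frame. A stdlib sketch of nearest-timestamp matching, assuming the timestamp column is sorted ascending:

```python
import bisect

def nearest_timestamp_index(sorted_ts, t):
    """Index of the entry in sorted_ts whose timestamp is closest to t."""
    i = bisect.bisect_left(sorted_ts, t)
    if i == 0:
        return 0
    if i == len(sorted_ts):
        return len(sorted_ts) - 1
    # pick whichever neighbor is closer
    return i - 1 if t - sorted_ts[i - 1] <= sorted_ts[i] - t else i
```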
Hi, I have a hardware question about the Pupil Core. Could a development board be used to stream videos over wifi (instead of a phone). If yes what is the smallest board we could use?
Hi, I have a hardware question. My Pupil Core shows right eye upside-down. is there any setting to correct this?
Hello, I have a bit of a software and hardware question. When running normal calibration procedures (screen marker, manual marker, etc.) we will often get accuracy estimations of around 2-3 degrees, with a precision metric of around 0.05-0.01. On further review of recorded video we see that error (difference between estimated gaze point and actual gaze point) increases in magnitude as gaze moves further away from the center of the world camera FOV - even within the region originally calibrated for. Any idea why this may be? We are using the wide-angle world camera lens that comes with Pupil Core, but we have the same issue with the high speed world camera lens. Re-estimation of camera intrinsics has not helped.
@user-22adbc the right eye image on Pupil Core is flipped because the sensor is physically upside down. You can leave this as is; it does not affect the pupil detection algorithm or gaze estimation. However, if you'd like to flip the image, you can do so from the eye window's general settings (flip eye image)
@user-6ec304 monocular or binocular system?
@wrp binocular.
What is the ideal distance between eye and eye camera to get the most accurate tracking? Maybe the camera's FOV is also important?
@wrp if it would be useful, I could provide a short recording of the world and eye videos documenting this issue
Hey guys, I have a quick question on the proper care of a Pupil Core system. Would alcohol wipes/microfiber cloths be recommended for cleaning both the world camera and 200hz eye cameras used? If not, are there any alternatives?
@user-6ec304 sure, please upload a short recording that demonstrates this behavior and share the link with data@pupil-labs.com
@user-94ac2a Ideally the camera captures all movements of the eye, capturing the entire eye region but excluding eyebrows if possible. Small differences in depth should not have a negative impact on the quality of the eye image from the algorithm standpoint
@user-48e1d0 microfiber cleaning cloths for lenses can be used for the world camera and eye camera lenses. But please be gentle when cleaning lenses. Also a small puff of compressed air is sometimes the best thing for getting dust off of a lens.
Performance Question: what do you think would be the maximum frame rate for the software? So for example if one of the new USB-C 500Hz or 1000Hz cameras could be used, would the software be able to keep up (i.e. algorithm <1ms per eye position)?
Hi! I found a common error in the files exported from Pupil Player. I have a recording of around 2 and a half minutes. The gaze-positions-on-surface file only contains part of the data. However, the fixation data covers the whole time. How can I get the whole gaze position data? Is this a bug?
Also I am pretty sure that my gaze is on the surface.
And I noticed a notification in Player saying "error dc", which I don't know the meaning of.
Error file is:
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=85 x=13
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=17 x=70
Export eye0 Video - [ERROR] libav.mjpeg: overread 8
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=64 x=43
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=78 x=48
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=4 x=44
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=66 x=28
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=44 x=45
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=66 x=30
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=42 x=10
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=71 x=61
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=48 x=34
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=8 x=48
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=54 x=42
Export World Video - [ERROR] libav.mjpeg: error dc
Export World Video - [ERROR] libav.mjpeg: error y=80 x=35
Another question is about confidence. According to the docs, data with confidence below 0.6 is ignored, but the gaze-on-surface file still contains the low-confidence entries. If I filter them out myself, will that influence the fixations? I mean, will the number of fixations on the surface be reduced, since the gaze timestamps need to be mapped?
I have another question with regards to "variable frame rates" of the camera system. Is this a limitation of the camera, software or both? Is there any way to have a fixed frame rate (i.e. have the software poll at a fixed interval) as would be desired for reproducible scientific applications? What would be the downsides of doing it? Thank you very much!
hi all, does anyone know if a certain type of router is preferred/better when using wifi mode on Pupil Mobile on the MotoZ3 Play?
hi guys, does anybody know whether we could export the gaze mapping and intensity map for a certain duration from Player and how?
With regards to visualization of saccades in Pupil Player and binocular recordings (I have the 200Hz binocular system): fixations appear singular, but during fast eye movements it looks like it's alternating between two paths. Is that the left and right eye calibration mismatch that is plotted?
Also I've noted a bug in Pupil Player: when repeating pupil detections and calibrations with different "trim video" brackets, two different gaze positions are plotted, alternating even during fixation. Here are the resulting stills from a fixation and then from saccades. I tried to delete the "offline calibration" folder but cannot reverse this behavior. Is there a way to start from scratch? These recordings were done with the recent pupil_capture 1.13.29 on Windows 10 Pro. At some point in the playback, the fixation becomes "split":
Then every subsequent saccade appears to alternate between these two calibrations (is it each eye, or two different calibrations plotting simultaneously?):
Hello, I'm currently having an issue playing the eye video in Pupil Player (1.13.29). When I go to play the video, the world camera video works fine; however, the eye video is frozen in place. I have looked at the mp4 file for the eye and it works fine, just not in Pupil Player. I did six separate recordings and while the first worked fine, the rest are all having this issue. Any help on how to fix this and/or to prevent this from occurring in the future would be greatly appreciated. Thanks
Hello I am new here, my name is arshad. I am from Mauritius . I am considering using pupil as part of my Mphil/PhD research
I downloaded the software but cannot find any sample or dummy file to try along
can someone help please? I haven't yet purchased the equipment. Just wanted to try the software, familiarize myself first
@papr @fxlange at ECEM?
Ecem? There is a pupil labs table here.
@user-8779ef yes there are some members of the Pupil Labs team at ECEM - https://pupil-labs.com/news/ecem-2019
Yep! Just met them. Good group! The Invisible is very impressive.
Hi, guys! It's a real emergency. I cannot get all the gaze data, but I can get all the fixation data from Player. And I found there may be something wrong with the marker cache. Can anyone tell me how to fix it?
@user-c87bad Could you please provide a bit more description about the behavior you are observing and steps to reproduce this behavior?
Hi, I cannot get the fixation data from Player. Also, when I open the image for the surface heatmap, it is empty. Can anyone help me with that?
@wrp I have a recording of around 2 minutes from Pupil Capture. After loading it into Pupil Player, I found there is some red color on the marker cache line, which means no surface was detected. However, when playing the video I found this happened suddenly: the audience didn't move and markers were detected before, which means markers should have been detected at that time. Then I tested with recordings of around 1, 2 and 3 minutes, but after dragging the files into Pupil Player, markers suddenly could not be detected for roughly the last 10 seconds. Then I used larger markers to test again with 1, 2 and 3 minutes. The gaze data on the surface still wasn't complete (because the markers suddenly couldn't be detected).
Hello, would anybody be able to tell me what coordinate system is for the angular output, theta/phi?
Is this just regular spherical polar coordinates, and how is this orientated relative to the Cartesian axes?
The manual is very unclear on this
Hi @user-3ca244 Are you talking about the raw data export?
Hi @user-c5fb8b, yes, I think so. I'm talking about the theta and phi output keys when using the 3D gaze mapping mode
@user-3ca244 Theta and Phi are indeed spherical coordinates of the normal of the eye sphere. The corresponding components of the normal vector can be found as 'circle_3d_normal_x', 'circle_3d_normal_y' and 'circle_3d_normal_z' in the export. For conversion we use:
```python
import numpy as np

# x, y, z: the circle_3d_normal components from the export
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
theta = np.arccos(y / r)
phi = np.arctan2(z, x)  # exported under the key 'phi'
```
Hope that helps!
The normal vector is already normalized, so r is always 1
Thanks @user-c5fb8b, is theta =0, phi=0 when the eye is looking directly forwards (or looking straight at the camera), or if it were to be somehow pointing at the sky?
@user-3ca244
May I ask for what purpose you need this information? I feel like you might be following a wrong lead. The theta and phi values, as well as the circle_3d_normal_*
values are outputs of the 3d detector. They refer to camera coordinates, for every eye separately. This is actually not information about where people are looking! At least not directly, but rather data from a previous processing step.
If you want to have information about whether someone is looking forward, you should take a look at the gaze_normal0_*
and gaze_normal1_*
(where * is x/y/z) values, which are gaze direction vectors already mapped to world coordinates.
You can find some more information on all exported variables in the docstring of the Raw_Data_Exporter class here: https://github.com/pupil-labs/pupil/blob/9484d32b8420fab725d2bfc70da2e7b543ed92ae/pupil_src/shared_modules/raw_data_exporter.py#L27-L102
Hello! My name is Ana Clara and I'm new here. I am doing a pilot study with the Pupil and learning how to use it. I would like to know if there is an acceptable data loss limit after calibration so that I can consider the data collection as valid. In the experiment, the participant should look at some images in a 660*550cm frame, 150cm away from her. Thanks!
Hi @user-ec4bd6 - welcome to the Pupil community! I would suggest that you use the accuracy measurement as reported after calibration. Please see: https://docs.pupil-labs.com/#notes-on-calibration-accuracy
@papr hi, when I increase the max duration of the fixation detector, the number of fixations drops and some fixations (which were present at the lower duration) don't exist anymore. Why?
@user-c5fb8b
@user-8fd8f6 This is expected when two consecutive short fixations are detected as one longer fixation.
@papr I know what you mean. But sometimes one fixation for a certain frame doesn't exist any more. (It was in the lower duration but is not in the longer one)
@papr: for example (attached picture), the left one is max duration 220 and the right one is max duration 600. I highlighted some of the disappeared fixations.
Hi, @user-c5fb8b , Thanks for getting back to me. I am trying to get the pointing direction of the eye (or each eye individually) in the head coordinate system, with the centre of the eyeball at the origin. I'm not actually trying to find out where people are looking in world coordinates, therefore I think this information is what I need.
I think from the documentation and your explanation that the circle_3d_normal_x/y/z or the theta and phi values are what we want...? I now don't know which way the x/y/z axes are oriented relative to the camera or eyeball. I'm also not sure what the centre of rotation is, but I don't think this matters; however, presumably circle_3d_normal_x/y/z are vectors starting from sphere_center_x/y/z?
Thank you very much for your help so far ๐
@user-3ca244 In this case you should really use the gaze_normal0_x/y/z and gaze_normal1_x/y/z values as I described above, as they represent the view direction of the eyes in the coordinate system of the world camera. So they are both in the same coordinate system.
The world coordinate system is the same as the opencv coordinate system: positive x: right positive y: down positive z: forward
You can get the positions of the eyes in the world coordinate system with eye_center0_3d_x/y/z and eye_center1_3d_x/y/z.
I recommend not using circle_3d_normal and sphere_center as they are in eye-camera coordinates, so the coordinate systems are different for eye0 and eye1 and you cannot directly compare those to each other!
> I am trying to get the pointing direction of the eye (or each eye individually) in the head coordinate system, with the centre of the eyeball at the origin.
Maybe I am misunderstanding you, but as the gaze_normal is just a directional vector, it does not matter where the origin is located.
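Since the gaze normals are plain direction vectors, angles between them (e.g. the angle between the two eyes' gaze_normal vectors, or between one of them and the camera's forward axis) can be computed directly from the exported components, independent of any origin. A stdlib sketch:

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3D direction vectors.
    Origin-independent: only the directions matter."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```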
@user-c5fb8b We have purchased a Pupil Core with no world camera and only one eye camera at the moment, as this is sufficient for our research. We do intend to add a world camera and perhaps a second eye camera in future.
I'd therefore like to get the coordinates relative to the persons head. I suppose the direction of axes must be determined by the orientation of the camera and will not necessarily be aligned with a purely vertical or purely horizontal axis of rotation of the eyeball.
Regarding the direction of the gaze_normal vector, you're right, it doesn't matter where the origin is. I was worried that it might matter where the origin is for the axes around which the theta and phi angles are measured, but I suppose this should not matter either as I think that the orientation of a rotated object does not depend upon the centre of rotation....
@wrp Thanks! I would like to ask something else. I looked at the output files and couldn't find (or don't know if I could find) such calibration information. Is this data stored somewhere after closing the recording, or can I only see it right after calibration?
@user-ec4bd6 It is shown in the Accuracy Visualizer
menu directly after calibration. Alternatively, you can calculate it post hoc using Offline Calibration. https://www.youtube.com/watch?v=aPLnqu26tWI&list=PLi20Yl1k_57rlznaEfrXyqiF0sUtZMMLh&index=2
Has anyone experienced the issue of capture not recording when the monitor is turned off? When turning the monitor back on, the timing of the recording session acts as if it has been recording, but when stopped, it looks like collection has halted after the initial turn off of the monitor.
@user-8fd8f6 please create a bug report on Github for this
Hi I'm a little confused on the exported data that I get. what is the difference between Diameter and Diameter_3D?
Hi @user-4ef728 You can find more specific information about the data in the documentation: https://docs.pupil-labs.com/#detailed-data-format What you find there:
- diameter is the pupil diameter in pixels in the image
- diameter_3d is the estimated real diameter in mm
Hi, I have a problem with Player: some of the recordings don't open. Can anyone help me with that?
@user-deafd0 Does Player crash or does it show grey frames instead of the recording? In case of crashes, can you share the player.log
file after trying to open a recording in Player and it having crashed?
@papr thank you for your reply ,
This is what shows on my screen when I try to open the recording
@user-deafd0 It looks like you have an eye video but no eye timestamps. Can you share the info.csv file of this recording?
@papr
@user-deafd0 Could you post a screenshot of the recording directory's content?
@papr
@user-deafd0 for some reason the videos do not have any timestamp files. This is an indication that the recording was not terminated properly. Do the videos open in vlc?
Hi! I get this error when dropping the recording folder into Pupil Player. Is there any chance to recover it? Thank you so much
@user-c1220d this is a problem with this version of Pupil Player. Please upgrade.
hi, I have a question: how can I launch the Pupil software without the GUI? I want to launch it on a development board which doesn't have an X server running, so an error about opening a glfw window pops up if I simply start it in the console.
@user-5e6759 there is no windowless Pupil software. You can use Pupil Service, but this still has glfw windows, so not sure this meets your constraints.
so will it be okay if I comment out all lines related to the glfw windows?
@user-5e6759 I believe it's going to be a more involved set of changes than just commenting out a few lines of code.
Hi all, we are conducting research on gaze behaviour in sport. We used to have SMI ETG2 eye trackers, but we are not happy with them at all, and recently the smart recorder has stopped working. We are looking into the pupil trackers as an alternative. Does someone have some information for me in terms of their application to sport (accuracy, usability, etc...) Thanks
Hi @user-1e0eb8 there are a number of people in the community using Pupil Labs hardware + software in sports research. I'm not 100% sure if this research has been published yet but you might want to check the citation list: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing You might also want to take a look at some research that Kyle Lindley (and Driveline Baseball) has been doing with Pupil Core with baseball players: https://twitter.com/kylelindley_
Thanks mate.
Where can I download the Core software for MacOS?
@user-daa4af hi, unfortunately, we do not have a macOS version for the v1.15 release. Please check the v1.14 release and download its macOS version
@user-daa4af We have just uploaded the v1.15 macOS release. Please give it a try.
hi pupil community, first time poster here. I was wondering if anyone might be available to help me with an experiment i am conducting
i'm using a pupil eye tracker and software to track my students' eye movements during test taking (I'm an English teacher), but am having trouble calibrating my device (monocular) to get the accuracy I need
any help would be greatly appreciated
@papr I tried the v1.15 version, but apparently it crashes when I activate the "eye overlay" plugin
Pupil Player does start, but only visualizes the world camera recording
@user-c1220d Looks like you are missing timestamps for eye0. Could you paste a screenshot of the recording directory?
here it is
@user-c1220d Yeah, it looks like the eye0 video was somehow corrupted. The video file is empty (0 bytes) and does not have a corresponding timestamp file. Remove the eye0 file and retry the overlay feature.
@papr eye1 is also 0 bytes for @user-c1220d s recording!
yeah, for some reason it didn't record properly
thank you for the help, unfortunately the recording failed
Oh yeah, and there are actually timestamp files. I read the screenshot column-wise instead of row-wise. Oops.
@user-c1220d But yes, it looks like the eye videos were not properly recorded. Could you share the info.csv file with us?
yes of course
@user-c1220d This is a very old version of Pupil Mobile. Have you considered upgrading?
yeah, maybe i should, does it have any complications with different android versions?
@user-c1220d There was a problem with detecting the world cameras on Android 9. But all older versions of Pupil Mobile have this problem. We released a work-around in the beta release.
ok thanks so much
Hi everyone, I'm trying to install all dependencies on Ubuntu 16.04 and I'm facing many errors installing libuvc, as described in this github issue https://github.com/pupil-labs/pyuvc/issues/53, but I'm not able to fix it.
@user-c5dc99 please post your exact error in the issue above
Any idea why we are getting IndexError from _convert_frame_index in pupil player while doing offline pupil detection?
@user-3e42aa No not really. Could you share that recording with data@pupil-labs.com ?
It's rather huge
@user-3e42aa ok, then let's start with the complete traceback of the exception
The eye videos are 3.6 gigs a piece
Ok, a sec
@user-3e42aa you can compress them via ffmpeg -i original.mp4 compressed.mp4
Ok. I actually did do that, but wanted to verify with the originals
Do you need the scene camera video as well?
@user-3e42aa yes
Could you please share the traceback nonetheless?
Coming up, had to get the laptop online
eye1 - [ERROR] launchables.eye: Process Eye1 crashed with trace:
Traceback (most recent call last):
  File "launchables/eye.py", line 664, in eye
  File "shared_modules/video_capture/file_backend.py", line 281, in run_func
  File "shared_modules/video_capture/file_backend.py", line 454, in recent_events_own_timing
  File "shared_modules/video_capture/file_backend.py", line 281, in run_func
  File "shared_modules/video_capture/file_backend.py", line 405, in get_frame
  File "shared_modules/video_capture/file_backend.py", line 377, in _convert_frame_index
IndexError: index 0 is out of bounds for axis 0 with size 0
@user-3e42aa which version of player do you use?
And please share the info.csv file of the recording.
v1.15-4-gbfa1cd9_linux_x64
It was recorded with another version though
Hi! Can anyone tell me how to fix it?
I have a recording of around 2 minutes from Pupil Capture. After loading it into Pupil Player, I found there is some red color on the marker cache line, which means no surface was detected. However, when playing the video I found this happened suddenly: the audience didn't move and markers were detected before, which means markers should have been detected at that time. Then I tested with recordings of around 1, 2 and 3 minutes, but after dragging the files into Pupil Player, markers suddenly could not be detected for roughly the last 10 seconds. Then I used larger markers to test again with 1, 2 and 3 minutes. The gaze data on the surface still wasn't complete (because the markers suddenly couldn't be detected).
@user-c87bad Which version of Player do you use and on which operating system?
Hi @papr can you tell me how to get the video from world camera in a range of QR code only?
@user-40621b could you please explain. What does in a range of QR code only
mean? Does this mean crop the video based on the surface defined by markers? If so, this is not supported out of the box by Pupil software, but it should be doable post hoc (or in real time) by subscribing to the surface
topic in the network API and cropping the video frames based on the position of the surface.
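As a rough sketch of the cropping step, assuming you have the surface's corner positions in normalized scene-image coordinates with origin at the bottom-left (Pupil's norm_pos convention; the exact fields of the surface messages you receive are an assumption to check):

```python
def surface_crop_box(norm_corners, frame_w, frame_h):
    """Pixel bounding box (x0, y0, x1, y1) for cropping a video frame to a
    surface, given corner positions in normalized image coordinates
    (origin bottom-left)."""
    xs = [c[0] * frame_w for c in norm_corners]
    ys = [(1.0 - c[1]) * frame_h for c in norm_corners]  # flip y into image row coords
    return (int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys)))
```

The returned box can then be used to slice each frame, e.g. `frame[y0:y1, x0:x1]`.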
Yes that's what i mean @wrp . What is network API? How to get it? Thanks.
Hi @user-40621b please see 1. https://docs.pupil-labs.com/#interprocess-and-network-communication 2. Examples of how to communicate with Pupil software with simple python scripts: https://github.com/pupil-labs/pupil-helpers/tree/master/python
Okay @wrp I'll read first. Thanks
@user-c87bad Hey, I have seen that we did not properly respond to your question the first time that you asked it. I also saw that there is another open issue regarding your time measurements. We are currently in the process of handling open issues more systematically. You should receive a response to the time measurement issue next week.
Regarding the surface detection issue: Please share the recording with data@pupil-labs.com such that we can investigate the issue.
@papr Hi there! I'm also trying to integrate Pupil + PsychoPy and I must ask: is it possible to give Pupil Capture the order to start recording from a few lines in PsychoPy? Thank you very much!
@user-9cbfb2 yes it is, but only from the coder view.
@user-9cbfb2 https://github.com/pupil-labs/pupil-helpers/blob/master/python/pupil_remote_control.py#L47
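The linked helper essentially sends single-letter commands to the Pupil Remote plugin over a ZeroMQ REQ socket. A minimal sketch, assuming pyzmq is installed and Pupil Capture is running with Pupil Remote enabled on its default port 50020:

```python
# Command letters understood by the Pupil Remote plugin.
REMOTE_COMMANDS = {
    "start_recording": "R",
    "stop_recording": "r",
    "start_calibration": "C",
    "stop_calibration": "c",
    "get_timestamp": "t",
}

def send_command(name, host="127.0.0.1", port=50020, timeout_ms=2000):
    """Send one command to Pupil Remote and return its reply string."""
    import zmq  # imported lazily; requires pyzmq

    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.RCVTIMEO = timeout_ms  # don't block forever if Capture isn't running
    sock.connect(f"tcp://{host}:{port}")
    sock.send_string(REMOTE_COMMANDS[name])
    reply = sock.recv_string()  # Pupil Remote acknowledges every request
    sock.close()
    return reply
```

From PsychoPy's coder view you would call `send_command("start_recording")` at the start of a trial and `send_command("stop_recording")` at the end.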
@papr I'll try it tomorrow, thank you very much!