Folks, where is the documentation on your coordinate systems again? Trying to decipher theta and phi in pupil_gaze_positions_info.txt
I understand that it's the normal extending from the eyeball sphere center to the pupil center
...will theta and phi be 0,0 when the norm is pointed directly at the camera?
I thought this stuff used to be in the docs
I imagine that the axes of theta (azimuth) and phi (polar/elevation) have to be oriented along the axes defined by the camera's up vector and horizontal vector
...unless they take the calibration data into account
If you compare it to this definition, it is slightly different, though: https://en.wikipedia.org/wiki/Spherical_coordinate_system#Cartesian_coordinates
Theta is based on y instead of z, while phi is based on z/x instead of y/x.
Yeahhh.... you guys flip phi and theta!
I just noticed that
Ok, no biggie, as long as we are aware
You also return phi, theta instead of theta, phi
So, had I not looked at the code, I might not have even noticed
You also return phi, theta instead of theta, phi
It is exposed correctly in Detector3D: https://github.com/pupil-labs/pye3d-detector/blob/2d8d9da87458fceb4e1d027e0159c788f84d104a/pye3d/detector_3d.py#L571-L574
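For readers trying to reproduce the convention described above, here is a minimal sketch (not the pye3d source itself) of how a unit gaze normal would map to theta/phi if theta is measured against y and phi against z/x, as discussed:

```python
import numpy as np

def cart_to_pupil_spherical(x, y, z):
    """Sketch of the convention discussed above: theta is the polar angle
    measured against the y-axis, phi is the azimuth in the z/x plane.
    This reflects my reading of the discussion, not the library's own code."""
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(y / r)   # based on y instead of z
    phi = np.arctan2(z, x)     # based on z/x instead of y/x
    return theta, phi

# Example: a normal pointing along the camera's -z axis
theta, phi = cart_to_pupil_spherical(0.0, 0.0, -1.0)
print(theta, phi)  # theta = pi/2, phi = -pi/2 for this input
```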
Thanks, papr.
I'm getting an error message that says "zlib isn't found" while trying to run Pupil Capture. It used to work previously. How do I fix it? @papr
Could you please provide the full context of the error message?
world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables/world.py", line 140, in world
  File "/home/pupillabs/.pyenv/versions/3.6.10/envs/pupil/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 623, in exec_module
  File "shared_modules/methods.py", line 21, in <module>
ImportError: /opt/pupil_capture/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16)
world - [INFO] launchables.world: Process shutting down.
Do you know which version of Capture that is? @user-764f72 please try to reproduce this issue with the latest Core release on linux
@papr this is the error that I'm getting
I installed v3.3
I'm following this: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md and during pip install -r requirements.txt, the pyre @ https://github.com/zeromq/pyre/archive/master.zip line fails.
Please replace it with [email removed] This is a temporary incompatibility.
Thx!
Still doesn't work. What am I missing?
Please delete the pyre and ndsi lines, and install pyndsi via the corresponding wheel from this link https://github.com/pupil-labs/pyndsi/suites/2951661456/artifacts/66406208
The link is dead. 404
Permission problem? Just drop the file into Discord? Or just wait until it fixes itself on Monday? :)
The file is too big for Discord. One sec.
Thx
I don't get it. Was ndsi not in those wheels? I tried to install all of the Windows ones.
I gave you the ndsi wheels by accident. But you would have needed those, too.
Maybe let's postpone it to next week. Could you please push a fix for the dependencies to the main repo next week? Thanks a lot for your time. Have a nice weekend.
Yes, no problem. Have a nice weekend
You too
Hi all, i've been trying to follow the instructions here for streaming gaze data using LSL: https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md
Hi, then it is likely that pylsl is not correctly linked/installed
I'm currently stuck at Step 2: there doesn't seem to be any Pupil Capture LSL Relay plugin listed in the Plugin Manager
ok, so pylsl needs to be installed into the pupil capture application somehow?
Steps 1-2 from the Installation
section, yes
i'm currently running the pupil capture app from the installer
ok, thanks, somehow i was confused whether pylsl was used to consume or emit the data
In this specific case, it is used to emit data. The Lab Recorder is then able to receive the data.
thanks, it's working perfectly now
ok, it seems like Pupil Core cannot handle all the 3 cameras running at the same time on Windows (eye0, eye1, world)
the process crashes soon after I turn on the second eye camera, and seems like frame rate is hit right away
Could you share the C:\Users\<username>\pupil_capture_settings\capture.log file such that we can check what causes the crash?
this part seems relevant:
Process eye1:
Traceback (most recent call last):
File "launchables\eye.py", line 744, in eye
File "zmq_tools.py", line 174, in send
File "msgpack\__init__.py", line 35, in packb
File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "launchables\eye.py", line 814, in eye
File "launchables\eye.py", line 44, in __exit__
File "logging\__init__.py", line 1337, in error
File "logging\__init__.py", line 1444, in _log
File "logging\__init__.py", line 1454, in handle
File "logging\__init__.py", line 1516, in callHandlers
File "logging\__init__.py", line 865, in handle
File "zmq_tools.py", line 46, in emit
File "zmq_tools.py", line 167, in send
File "msgpack\__init__.py", line 35, in packb
File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.
This is the first time I have seen this error. Could you check if you have sufficient free RAM/working memory on your computer? Please also try restarting and checking if this solves the issue.
huh, that seems to be it, turns out that having 10 instances of visual studio open eats up more than 3 gigs of ram
Did you have a chance to push an update for requirements.txt? Also, we need to synchronize the eye tracker to other hardware by TTL, so I added some LPT code to eye.py on each frame capture. I was wondering if there is already a plugin for that, so I could stay up to date with the Pupil Labs software and use a precompiled version instead. What would be a better solution compared to modifying eye.py?
requirements.txt is updated
Did you have a chance to push an update for requirements.txt? Working on that as we speak.
A related question is whether using the LSL plugin provides reasonable synchronisation guarantees? And if so, what is the upper bound of error?
yeah, that would require a 2nd software reading the LSL stream and then sending the TTL impulse, which induces more latency. In our setup we work with 10+ devices which need to synchronize their time base: EEG, MRT, CT, screen, PC, lasers and so on. All of them are research solutions with limited production numbers, custom PCs, racks, software drivers and so on. Some run on 15-year-old Linux versions, and some of the PCs you just can't mess with on the software level for patient safety. So we settled on communicating time, synchronization of clocks and events on the hardware level via trigger lines and a National Instruments PXIe FPGA chassis to merge it all on the smallest common denominator, so to speak. I believe those TTLs are the common ground in lab setups. It would be really appreciated to have an as-precise-as-possible ready-made solution for that in the eye tracker. For me it was just 5 lines of code in eye.py, but it's difficult to keep that updated. I would prefer that the people I hand this over to could just update to the newest version of Pupil Labs instead of me having to build everything from scratch every few months.
One might even consider putting some BNCs next to the eye tracker's USB plug and sending a toggle for each frame captured by each camera, as an additional product idea for a less laggy/jittery solution. It would get rid of the latency induced by the non-real-time OS.
@user-588603 we're completely on the same page. We're trying to use Pupil Labs for neuroscience experiments with high-speed behaviour readouts, EMG and EEG, and exactly the same constraints apply. And of course if we were going for depth recordings it would get even more strict; in that case you really need those TTLs
I would love to see the same kind of GPIO connectors found in industrial cameras that allow you to send out strobe TTLs or frame trigger pulses; those allow both syncing multiple devices and logging accurate timestamps for acquisition
Agreed LSL doesn't seem ideal, but from reading the Pupil API it looks like there is an option to send an event to the system to get timestamped during frame acquisition? is this done in the python software? if that's the case it's not so great, but not sure whether there is bidirectional communication with the raw imaging firmware using libuvc
@user-6ef9ea @user-588603 We are currently working on improving our time sync documentation. You can find a pre-rendered preview here https://github.com/pupil-labs/pupil-docs/blob/3ef6e46c88341717bf0ebfed8725a552ded3a26e/src/core/best-practices.md#sychronization
I think the important point to note is: Trigger delay is not a problem, as long as it is roughly constant. This way you can synchronize clocks and send a local timestamp to Capture.
See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py and https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py (edit: please refresh this second link as I have just updated the underlying file)
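For reference, here is a condensed sketch of what the linked helpers do: sending an annotation stamped with a locally measured event time. It assumes Capture runs locally with Pupil Remote on its default port 50020 and the Annotation plugin enabled; the label my_trigger and the one-shot clock sync are simplifications (see simple_realtime_time_sync.py above for a proper round-trip-aware offset estimate).

```python
import time

import msgpack
import zmq

# Connect to Pupil Remote (Capture's default REQ/REP port).
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Capture for its clock once so local event times can be expressed in pupil time.
local_t0 = time.monotonic()
pupil_remote.send_string("t")
pupil_t0 = float(pupil_remote.recv_string())

# Annotations are published on the IPC backbone; ask for the PUB port first.
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub = ctx.socket(zmq.PUB)
pub.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the subscription a moment to register

# Send one annotation stamped with a locally measured event time, converted to pupil time.
event_local_time = time.monotonic()
annotation = {
    "topic": "annotation.my_trigger",
    "label": "my_trigger",
    "timestamp": pupil_t0 + (event_local_time - local_t0),
    "duration": 0.0,
}
pub.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub.send(msgpack.packb(annotation, use_bin_type=True))
```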
This file is BUGGY!
In case that helps / is of interest, here's how I worked with the LPT. It's minimal changes to eye.py; search for "tulip" for all changes in the file. This file is from a 2- or 3-year-old Pupil Labs version and there is a bug which I'm currently trying to fix. I am not sure if it is in my 10 lines of code or in the old Pupil Labs version, which is why I wanted to update Pupil Labs first. The bug is that the resulting recording tables contain more frames of the eye0 camera than there are trigger toggles in the connected EEG data. It seems line 643 is not called as many times as there are frames of eye0 saved to the tables; not sure why.
Which one are you talking about? Please note that we have just updated the remote_annotations.py example
pupil labs version?
this file is very old - maybe 2 or 3 years. I'm coming back to update Pupil Labs and fix the above-mentioned bug atm.
Please try hard-refreshing the webpage. (The year specification in the license header is still old.)
The bug is that the resulting recording tables contain more frames of the eye0 camera than there are trigger toggles in the connected EEG data
Are you using a custom TTL script?
There is no problem with the current version related to the above file. I just posted it because @user-6ef9ea said: "Agreed LSL doesn't seem ideal, but from reading the Pupil API it looks like there is an option to send an event to the system to get timestamped during frame acquisition? is this done in the python software? if that's the case it's not so great". In this file you can see in line 645 where the timestamp is made, I guess, and how I added an LPT to it, just as an example. It needs to be adapted to the newest Pupil Labs version. Yes, it's my custom modification of the eye.py file.
Oh, ok, sorry for the misunderstanding.
I'll check if the newest version of Pupil Labs now adds the dependencies without hassles. Thanks @papr
Please be aware that we are not planning on adding any TTL libraries or similar that is specific to non-Pupil-Labs hardware.
Nonetheless, you no longer need to change eye.py. Instead, you can write eye-process plugins that can be installed as simple python files in the plugin-directory and use the pre-compiled Pupil Core software
ah, very good! I'll try to check that out.
Regarding the delay, and jitter being more of a problem: that is true, but it always depends on the application and what you want to lock to. In our case it's often, e.g., the phase of a 40 Hz signal. That's a wavelength of 25 ms; if we have just ±5 ms jitter, that's 360°/5 = ±72° phase resolution. It really depends on what you lock to; often it's irrelevant, yes. A single GPIO on the camera itself would get rid of the USB lag, the non-real-time-capable Windows accessing the USB whenever it feels like it, and the delay to the LPT, which is also sluggish and old even without all the rest. Many people would appreciate a TTL output option for the device, I'm sure, even if it's just because it's plug-in-a-BNC-and-go instead of having to think about it.
I am not familiar with some of the abbreviations. Could you let me know what LPT and BNC stand for?
LPT is the old 25-pin D-sub connector used for printers. It can emit 8 bits of TTL pulses. It's from the 70s or so. It's a cheap way to emit a 0 V to 5 V signal from a PC.
And how does this help get better timestamps from common usb cameras?
BNC is another type of connector. The 8 bits of TTL signals are often routed with BNC cables if you need just a single bit or line.
So the current workaround is to have the Pupil Labs software toggle a bit on the LPT instead of relying on the system clock of a non-real-time operating system. I feed that 0 V to 5 V logic signal into an FPGA with a 200 MHz time resolution and just record the event. There are 10 other devices in the lab and they all need the same time base for us to be able to merge the data correctly afterwards. They are on different operating systems, drivers and so on. 95% of lab equipment has TTL inputs or outputs in some way, like oscilloscopes; it's the most general signal type, I'd say. If you would dedicate one of the GPIOs of the microprocessor on each camera (I guess there is one with custom code?), you could toggle that GPIO every time a frame is passed to the PC via USB. You have a 5 V, 500 mA power supply over the USB, so there is no additional power supply unit needed; maybe just an additional transistor to make the current the GPIO can drive a bit stronger. So I would route that GPIO to a transistor base and then to a BNC plug. That skips all the delay from the USB hardware, the operating system accessing the USB driver, Python, Python moving it over a network socket and LSL to somewhere else, or Python moving it to an LPT. The BNC on the GPIO would replace the LPT of the PC and be more accurate.
Because the microcontroller on the camera is real-time capable and clocked at something like 8 MHz. There is no buffering through a serial USB connection or similar involved. Much more direct, much more precise.
So when you record Pupil Labs data and the data of the system that reads this digital line from the BNC, you can just count: I have 100 trigger toggles on my digital input where the BNC is connected and I have 100 entries in the Pupil Labs xml table. They occurred at the same time. I just ignore the PC-time-base timestamp and assume the 200 MHz FPGA clock was correct. Data merged.
Following up after looking at your modified eye.py:
- "100 entries in the pupil labs xml table" - Could you specify which file you are talking about exactly?
- You only seem to send out TTLs for one camera. Why is that? Do you have a monocular headset?
Ok, that makes sense to me. So your current implementation from above definitively includes some delay then. Unfortunately, I do not know how easy it would be to change the camera hardware. This will likely require changing the firmware as well. I will talk with our hardware development team about this, but my feeling is that these changes are unlikely to happen.
ok nice, thx! I'm not sure what exactly is on your camera. I would imagine it's the camera with something like an SPI connection to a microcontroller, and on that there is firmware that moves the camera data to the USB. On that you would just need to add "toggle this GPIO" whenever the frame is captured; relatively easy code. And you could market it as a high-precision (maybe you reach 1 MHz accuracy) product. For sure interesting for the market with little effort imo. We would buy 3 or 4 of them :).
"only seem to send out TTLs for one camera" - there are multiple instances of eye.py - its multithreaded i guess. i cant open the same device in all of them so i opted for eye0. if i have triggers for the timestamps of eye0 i can interpolate where the events on world and eye1 camera took place. i dont have any data file with me atm.
If you need to interpolate anyway, then it would probably be best to create a custom world plugin that stores the timestamps from the time point at which the TTL was generated, instead of using the eye timestamps, which include a delay between exposure and TTL.
It's the xml in which you can find the events for each camera, the timestamps and a lot of additional info, like whether the pupil could be detected in that frame and so on.
The thing is, Pupil Capture does not record any xml files. That is why I would like to know the exact name of the file. This way I might help lift the mystery of why there are more entries than TTL triggers.
ok, I'll get a file sent to me.
oh, ok, so it is possible that there is some inbetween-processing that is going wrong, too. Maybe your earlier TTL script was not wrong then?
Ok, here is the only snapshot I could get right now. I asked my coworker to get that data as files in a readable format as well, but that won't happen today. He said that the file he used from Pupil Labs is a .csv file which is generated by the Pupil Labs viewer when dropping the recording in; sorry, that's as precise as I could get it atm. I asked him to send the files in the next days.
What we see on the plot as a thin purple line rising all the way to the top, staying high and then dropping at the end is Bit 0 of the LPT, which I set to high in line 522 when the recording starts and which is reset to low in line 549, as recorded by the system that reads those TTLs and the rest of our bio data.
In red we see entries of timestamps of eye0 from the Pupil Labs generated .csv
In blue we see events of Bit 1 of the LPT as recorded by the biosignal system. It's toggled in eye.py in line 643 with the intention to toggle once each frame.
the 4th and lowest colored events in the plot (orange maybe) can be ignored - its some irrelevant information.
so the number of red events (.csv entries) is less than the number of blue events (digital input toggles).
The theory is that there are frames missing in the .csv, because we couldn't explain why line 643 should be executed if no frame is recorded - but maybe I don't understand the code correctly. It was unclear to us whether the first or last frames (or some in between) are missing, so what my coworker did is align the events based on their timing imperfections. If you record at, say, a 60 Hz frame rate, there is no dt of exactly 1/60 s between each two consecutive frames; it's more that the average dt is ~1/60 s. We used that information (those imperfections) to match the blue events to the red events and thus figured out that the last frames resulted in LPT toggles but not in .csv entries (instead of the first few or some missing in between).
But it might all have been fixed already, as this was done with a Pupil Labs version that is 3 years old.
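A rough sketch of that matching-by-timing-imperfection idea (hypothetical helper, assuming the csv events form a contiguous block somewhere inside the longer TTL event stream):

```python
import numpy as np

def align_by_jitter(ts_short, ts_long):
    """Find the offset at which the inter-sample intervals of the shorter
    event stream best match a window of the longer one, using the natural
    frame-to-frame timing jitter as a fingerprint."""
    dt_s = np.diff(ts_short)
    dt_l = np.diff(ts_long)
    n = len(dt_s)
    errors = [np.sum((dt_l[i:i + n] - dt_s) ** 2) for i in range(len(dt_l) - n + 1)]
    return int(np.argmin(errors))  # index into ts_long where ts_short[0] lines up

# e.g. offset = align_by_jitter(csv_timestamps, ttl_timestamps)
```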
Ok, I think I can explain this phenomenon. Pupil Player (the "viewer" application) exports data in the range of the scene camera frames. Since the world process stops recording slightly earlier than the eye processes, there are always some more eye frames recorded until the stop-recording signal reaches the processes. These are not exported by Player but recorded as TTL signals in your own code.
I think the more accurate approach would be to:
1. Create a mapping between TTL signals and the recorded raw timestamps in eye0_timestamps.npy, and then
2. look them up in pupil_positions.csv (the Pupil Player exported csv file your colleague is probably referring to)
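A possible sketch of those two steps in Python (file names are placeholders; column names assume a recent Pupil Player export, and the rounding is just one way to avoid exact-float-match issues):

```python
import numpy as np
import pandas as pd

# Step 1: raw eye0 timestamps (one per recorded eye frame) and the TTL times
# recorded by the external system; "ttl_times.txt" is a hypothetical export.
eye0_ts = np.load("eye0_timestamps.npy")
ttl_times = np.loadtxt("ttl_times.txt")
assert len(eye0_ts) == len(ttl_times), "frame and trigger counts should match"
ttl_by_pupil_time = dict(zip(np.round(eye0_ts, 4), ttl_times))

# Step 2: look up each exported pupil datum by its raw timestamp.
pupil = pd.read_csv("pupil_positions.csv")
eye0 = pupil[pupil["eye_id"] == 0].copy()
eye0["ttl_time"] = np.round(eye0["pupil_timestamp"], 4).map(ttl_by_pupil_time)
print(eye0["ttl_time"].isna().sum(), "rows without a matching trigger")
```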
Very good, I'll check that. Will take a while. Thanks for your time! Have to go for now - kids. Have a nice day! Thanks a lot!
Ok, have a nice day, too!
The new Python wheels installed nicely. My coworker said the mismatch in frames vs triggers is about 10 s - that seems a bit high, unless the eye thread was running towards a buffer overflow when it finally got stopped. We are currently getting a new dataset with the current Pupil Labs build from GitHub and will see if that helps. Else I'll be back with the resulting data of both systems, I guess. Here is my old eye.py modification in the current version of eye.py:
Solved. The number of timestamps in eye0_timestamps.npy equals the number of recorded trigger events. For the first 3 out of 30k timestamps in pupil_positions.csv I could not find the corresponding timestamp in eye0_timestamps.npy, but I guess that's a startup hiccup. It's clear which trigger belongs to which pupil_positions.csv row now.
Nice!
thanks a lot for your time!
Hey, so I'm using the Vive add-on, and in my C# code I'm making a lot of calls to TimeSync.GetPupilTimeStamp(), but I eventually ran into this:
FiniteStateMachineException: Req.XSend - cannot send another request
I'm trying to save not just eye data, but game-related data (player positions/rotation over time, events, etc), and I need to have everything synced to one time measure(?). When I used Time.realtimeSinceStartup and timeSync.ConvertToUnityTime() I didn't get time stamps that match up at all, even after calling UpdateTimeSync(), and that's why I resorted to calling GetPupilTimeStamp(), but if it's going to give me that error after a while, it's not really a viable option in my case....
Any advice?
Instead of using GetPupilTimestamp(), use UpdateTimeSync() in the beginning, and then use ConvertToPupilTime(Time.realtimeSinceStartup) regularly
hey, so I tried this, and there's still a discrepancy between gazeData.pupilTimeStamp and timeSync.ConvertToPupilTime(Time.realtimeSinceStartup)
Alright, I'll use that. Also, in receiveGaze(), should I also use that or stick to gazeData.timestamp?
gazeData.timestamp should already be in pupil time
ah, yeah that's right
Hey all,
I have some exported data that I'd like to merge together for an analysis. I would ideally like to merge together the gaze_positions.csv and pupil_positions.csv, but I'm realizing the row counts and timestamps (at the millisecond level) vary slightly, so merging will not be trivial. Has anyone developed a robust way of doing this? I can imagine writing some algorithm that will parse through and try to match the closest timestamps... but that seems like a lot of work, so I'm wondering whether this has already been done / if anyone knows an easier way. I could just conduct my analysis using two different datasets, but I think it'd just be more convenient to have it all in one---so no big deal if this type of merge is actually difficult. Thanks!
Hi, each gaze datum is based on one or two pupil datums. Check out the base_data column in gaze_positions. It has one or two <eyeid>-<eyetimestamp> entries that you can use to match the exact pupil datum used for that specific gaze datum
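A possible pandas sketch of that matching (column names assume a recent Pupil Player export; the base_data token order described above should be checked against your own export, as some versions may write the timestamp first):

```python
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")
pupil = pd.read_csv("pupil_positions.csv")

# Each gaze row lists the pupil datum(s) it was built from as space-separated
# "<eyeid>-<eyetimestamp>" tokens, per the message above.
links = gaze["base_data"].str.split().explode().str.split("-", n=1, expand=True)
links.columns = ["eye_id", "pupil_timestamp"]
links = links.astype({"eye_id": int, "pupil_timestamp": float})
links["gaze_timestamp"] = gaze["gaze_timestamp"].reindex(links.index).to_numpy()

# Attach the matching pupil rows; use a nearest-timestamp merge instead if
# exact float equality fails due to csv rounding.
merged = links.merge(pupil, on=["eye_id", "pupil_timestamp"], how="left")
print(merged.head())
```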
Yeah, that is expected, as you are comparing the time at which the gaze data was recorded/generated and the time at which it is arriving in unity
can i sort my data by the time stamp, or does it not matter?
What do you mean? Which purpose do you have in mind? Subscribing to gaze gives you three streams, 2 monocular and 1 binocular one. Each stream is monotonically increasing in time. But when mixed, they are not. Binocular data tends to be a bit delayed since the pupil data had to be cached for the pairing
sorry, let me give further context; I need to continuously collect game data (position, rotation, events, etc) from the start of the game, as well as eye data, but eye data may not always be available. I'm letting ReceiveData() handle the event that the eye tracker has data available, but if there's nothing available, I let FixedUpdate() handle it for me (it just saves a bunch of zeroes)
I do realize this is a pretty bad way of trying to save the data but I kinda don't have any other ideas...
I can line up the data in post but there's other data that already has to be lined up as well and things could get messy / complicated so I want to try to consolidate as much data as possible
I suggest saving only data that is actually there, i.e. don't generate empty data in fixedupdate. The same with the game data. Post-hoc you can linearly interpolate the gaze data at the time intervals of your game data samples.
Also, store the timestamps of when the data is generated, not when it is received, because the latter contains a delay
yeah, I'm making sure to generate the timestamp when the data is generated. The only thing is that I know I will always get my game data whenever I call for it (it's always available), so I'm not sure how to line up the gaze data with the game data, or vice versa
Aha, thanks, that should be enough to get me going.
Since you will be analysing gaze in the context of game data, and the game data is always there, I suggest aligning gaze to the game data. For that, use the interpolation function of your programming language, fit it to the gaze data, and evaluate it at the game data timestamps. I might be able to get you a simple example in Python if this is helpful to you.
yeah, a python implementation would be helpful if you already have it, but don't worry about it if you still need to write it
I do not have it ready to go. If it would still be of use early next week, I can work on the example on Tue
if you don't mind, i'd still appreciate if you could send it whenever you can
https://nbviewer.jupyter.org/gist/papr/7cf3f328c89327e8285345a40e690ec9
I attached a small gaze_positions.csv file (200 samples) and a Jupyter notebook that reads the data and interpolates norm_pos_x at different time points
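For completeness, the core of that notebook boils down to something like the following sketch (game_timestamps.npy is a hypothetical file holding your game-data times, already expressed in pupil time):

```python
import numpy as np
import pandas as pd

# Evaluate gaze x at the game sample times via linear interpolation.
gaze = pd.read_csv("gaze_positions.csv").sort_values("gaze_timestamp")
game_timestamps = np.load("game_timestamps.npy")  # hypothetical; must be in pupil time

gaze_x_at_game_time = np.interp(
    game_timestamps,
    gaze["gaze_timestamp"].to_numpy(),
    gaze["norm_pos_x"].to_numpy(),
)
```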
thank you so much!
Following up on the discussion whether or not it would be an improvement to add a trigger output to the microcontroller on each camera vs. emitting that trigger via software. Red is the delta time between two measured eye0 camera samples of the EEG recording system reading the digital line of the software-controlled parallel port. Blue is the delta time of the Pupil Labs timestamps. Green is the difference between blue and red. So at a sample rate of 120 Hz (with 8.333 ms optimal dt) we see jitter in an order of magnitude comparable to the time between two consecutive samples (std of green is ~2 ms, RMS is ~2 ms as well). The histogram of green is in the 2nd figure.
Red is the delta time between two measured eye0 camera samples of the EEG recording system reading the digital line of the software-controlled parallel port
I do not understand what you have done here. Could you explain that in a bit more detail?
So the software triggering method basically jitters in the order of magnitude of the 120 Hz signal. And that doesn't even consider the absolute offset of the software-to-parallel-port delay, which I would guess is in the order of ~10 ms. So the real inaccuracy is likely 10 ms plus/minus what we see in green in the first plot or purple in the 2nd plot.
a hardware emitted trigger from the microcontroller would negate all that.
Also, are you collecting the pupil timestamps on Windows or Unix? Because that makes a difference, too. (not sure if we talked about hardware vs software timestamps yet)
Blue is the delta time of pupil timestamps collected on Windows. In each frame I emit one trigger via LPT, as we discussed earlier. Red is the delta time between two timestamps of the real-time system that reads the TTL output of this LPT. The difference of red and blue (green) tells how much jitter is induced between Python calling the LPT hardware and the hardware finally responding with a 5 V signal.
The plots don't include the absolute offset in time. It could be that the Python-to-LPT output signal actually has a delay of 1 minute (a much exaggerated example) plus the jitter of ±2 ms on average which we see here in green.
But as the jitter alone is already in the order of magnitude of the sample rate, it's ..... not ideal, not even mentioning the delay we don't know about.
afk 10mins
So yeah, the jitter of the blue line alone is what you can see in just the Pupil Labs timestamps; adding the LPT jitter obviously adds to it (and we can't get around adding that if we want to sync to external hardware). It's in the order of magnitude of the 120 Hz signal and could be improved by adding a hardware TTL output to each camera, is what I'm saying. It's not a problem for now, it's what we get; I just wanted to quantify it a bit as we talked about it earlier.
@user-588603 So this is the order in which things happen:
1. eye camera records frame
2. eye camera transmits frame to computer
3. Capture receives frames; creates pupil timestamp
4. eye process plugins run, including pupil detection
5. you send out TTL
So, I think the variability (green) that you are seeing comes from the processing time in 4.
And yes of course, we are in agreement that a hardware trigger would give more accurate timestamps. There might still be some variability as the cameras do not run at a fixed framerate.
Also, have you quantified how much time the sending of the TTL takes?
"There might still be some variability as the cameras do not run at a fixed framerate." - yes. it would be a lot more accurate related to when was the frame taken, but frames would still not occur at exact 120Hz.
Regarding 4, I was not aware of that. I was under the impression that the timestamp is taken in line 782 of eye.py, which is why I placed the "send TTL" directly next to it.
But maybe that's just for the GUI display of CPU load or processing time or something, and not the actual timestamp that reaches the npy, respectively the csv.
where would be the best point in the code to send the TTL to be as close as possible to "3. Capture receives frames; creates pupil timestamp"?
This plot is based on the current version of Pupil Labs btw: https://discord.com/channels/285728493612957698/446977689690177536/857574858157326337 In the meantime we also compared to older recordings taken by the aforementioned 3-year-old Pupil Labs software, and back then there was apparently less jitter:
this is data from the old version
Yes, earlier versions used hardware timestamps which had less variance. But we saw a large number of support cases in which the hardware timestamps were no longer in sync. As this breaks the binocular mapping completely, we decided to use software timestamps (arrival timestamp - fixed offset) as a compromise. The change was introduced in this release https://github.com/pupil-labs/pupil/releases/v1.16
We are currently investigating if the new version is producing more (too much) CPU load due to the online 3d pupil detection (which was not available in the old version) and if that's the cause, and we are debating if we should upgrade the CPU or downgrade the Pupil Labs software - though that doesn't seem to be a good solution, as a system we can't upgrade anymore is problematic.
You can disable the 2d and/or 3d detection if you want and run it post-hoc.
So it seems to me that "4. eye process plugins run, including pupil detection" increased in CPU load in the meantime (between code versions), and thus we should first try to skip that as much as possible and place "5. you send out TTL" as close to "3. Capture receives frames; creates pupil timestamp" as possible.
I think I can help you out with that.
No, this line just accesses the already created timestamp. The actual timestamping happens in the uvc backend. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L490
@user-588603 Please note that in step 3, we try to correct for the delay of step 2 (+ self.ts_offset; Windows only). These offsets are fixed and based on the camera name: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L270 The delay in step 2 is truly dependent on the resolution, though.
This is a user plugin that you can place in your corresponding plugin directory https://gist.github.com/papr/72dbe95a67ebeab4e678c0ac8d9aedc1
I annotated the places for your necessary TTL changes. Without changes, it prints the delay between the frame timestamp and the pupil time at which the plugin is executed. Please mind that this delay is at least as long as the offset correction on Windows.
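For orientation, an eye-process plugin of that kind might look roughly like the sketch below (not the linked gist; send_ttl_pulse() is a hypothetical stand-in for the actual LPT/TTL call):

```python
from plugin import Plugin


class TTLOnEyeFrame(Plugin):
    """Rough sketch only: react to each new eye frame as early as the plugin
    API allows and report the remaining delay relative to the frame timestamp."""

    uniqueness = "by_class"

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is None:
            return
        # Place the LPT/TTL call here; send_ttl_pulse() is hypothetical.
        # send_ttl_pulse()
        delay = self.g_pool.get_timestamp() - frame.timestamp
        print(f"delay between frame timestamp and plugin execution: {delay * 1e3:.2f} ms")
```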
Thanks a lot! I have to implement this on Monday.
For gazeData eye centers and normals, how are the axes aligned?
see 3d camera space https://docs.pupil-labs.com/core/terminology/#coordinate-system
If you need these in unity coordinates, you need an additional transformation
Sorry, but would you mind pointing me in the right direction to figure out these transformations?
@user-d1efa8 https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space
Vector3 directionInWorldSpace = gazeOrigin.TransformDirection(localGazeDirection);
thank you so much
Hey folks, can you please let me know where the calibration files are serialized? What format is a *.plcal file?
Oh, and while typing that I thought to search the repo for *.plcal
Found calibration_storage.py. Sometimes you just need to say it out loud, or type it out...
Should be a simple msgpack file
Yep
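So reading one back should be as simple as something like this sketch (the file name is just an example):

```python
import msgpack

# A .plcal calibration file should be plain msgpack, per the answer above.
with open("Default_Calibration.plcal", "rb") as f:
    calibration = msgpack.unpack(f, raw=False)

print(type(calibration))
print(calibration.keys() if isinstance(calibration, dict) else calibration)
```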
I am getting this error whenever I try to run the source code. Any way someone can help?
Solved: for future reference, if anyone gets stuck at a similar DLL error, download ffmpeg-release-full-shared.7z from https://www.gyan.dev/ffmpeg/builds/
@papr as you answered, I am trying to develop a plugin that changes the Plugin.g_pool.min_calibration_confidence value to a value of my choosing, however I keep getting stuck. Would you point me in the direction of how I should get started, or maybe give me some examples of similar plugins that were already developed?
I can give you an example on Monday
ok thank you. I have changed all the minimum calibration threshold values to 0.4, however it still says "Dismissing pupil data due to confidence < 0.80"
0.4 is extremely low. There must be something going wrong with the pupil detection if you need the limit to be that low.
I was just doing it for testing purposes, but I would definitely want it to be higher when actually working with Pupil.
https://gist.github.com/papr/04b9e4b9c1758c3701bf260dfa67f83f This is the plugin. See this on how to install it https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin
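The gist above is the authoritative example; conceptually it boils down to something like this sketch (not the gist itself), installed in the plugin directory per the docs link:

```python
from plugin import Plugin


class CustomMinCalibrationConfidence(Plugin):
    """Rough sketch: overwrite the shared min_calibration_confidence value
    with a value of your choosing."""

    uniqueness = "by_class"

    def __init__(self, g_pool, min_calibration_confidence=0.6):
        super().__init__(g_pool)
        # g_pool is the namespace shared between plugins in this process.
        g_pool.min_calibration_confidence = min_calibration_confidence
```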
Hi there, I'm trying to install pupil-detectors==1.1.1 on an RPi 4 and I'm getting errors because of missing Ceres files (ceres/jet.h: no such file or directory). How can I go about solving this? I looked at pupil-detectors' code and apparently there are many other files mentioned under Ceres, and I can't find any of them under src/shared_cpp/include/ceres. Thanks in advance
Ceres is primarily an external dependency. You will need to install it based on older commits in this folder https://github.com/pupil-labs/pupil/tree/master/docs (select the file fitting for your OS and look at the commit history for removed c++/ceres dependencies)
I'm using the RPi OS, do you have any advice on which OS's instructions I should follow?
ubuntu I guess
ok I'll try Thank you
one more question: by removing ceres dependencies do I lose the Detector3D module?
The old 3d detector requires ceres, yes