💻 software-dev


user-b14f98 03 June, 2021, 15:52:39

Folks, where is the documentation on your coordinate systems again? Trying to decipher theta and phi in pupil_gaze_positions_info.txt

user-b14f98 03 June, 2021, 15:52:59

I understand that it's the normal extending from the eyeball sphere center to the pupil center

user-b14f98 03 June, 2021, 15:53:29

...will theta and phi be 0,0 when the norm is pointed directly at the camera?

user-b14f98 03 June, 2021, 15:55:24

I thought this stuff used to be in the docs

user-b14f98 03 June, 2021, 15:57:05

I imagine that the axes of theta (azimuth) and phi (polar/elevation) have to be oriented along the axes defined by the camera's up vector and horizontal vector

user-b14f98 03 June, 2021, 15:58:04

...unless they take the calibration data into account

papr 03 June, 2021, 15:59:48

If you compare it to this definition (https://en.wikipedia.org/wiki/Spherical_coordinate_system#Cartesian_coordinates), it is slightly different: theta is based on y instead of z, while phi is based on z/x instead of y/x.
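
For illustration, a minimal sketch of that convention (the function name and the unit-vector handling are mine; this is not the verbatim pye3d code):

import numpy as np

def cart2sph_pupil(v):
    """Convert a 3D gaze normal to Pupil-style (theta, phi).

    Standard spherical coordinates would use theta = arccos(z/r) and
    phi = atan2(y, x); per the description above, Pupil measures theta
    from the y axis and phi in the z/x plane instead.
    """
    x, y, z = v / np.linalg.norm(v)  # normalize, just in case
    theta = np.arccos(y)             # polar angle, measured from y
    phi = np.arctan2(z, x)           # azimuth, based on z and x
    return theta, phi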

user-b14f98 03 June, 2021, 16:00:03

Yeahhh.... you guys flip phi and theta!

user-b14f98 03 June, 2021, 16:00:12

I just noticed that

user-b14f98 03 June, 2021, 16:00:34

Ok, no biggie, as long as we are aware

user-b14f98 03 June, 2021, 16:00:47

You also return phi, theta instead of theta, phi

user-b14f98 03 June, 2021, 16:00:56

So, had I not looked at the code, I might not have even noticed

papr 03 June, 2021, 16:02:16

"You also return phi, theta instead of theta, phi"
It is exposed correctly in Detector3D: https://github.com/pupil-labs/pye3d-detector/blob/2d8d9da87458fceb4e1d027e0159c788f84d104a/pye3d/detector_3d.py#L571-L574

user-b14f98 03 June, 2021, 17:02:01

Thanks, papr.

user-c09cdd 09 June, 2021, 10:18:06

I'm getting an error message that says "zlib isn't found" while trying to run Pupil Capture. It used to work previously. How do I fix it? @papr

papr 09 June, 2021, 10:20:20

Could you please provide the full context of the error message?

user-c09cdd 09 June, 2021, 10:22:52

world - [ERROR] launchables.world: Process Capture crashed with trace:
Traceback (most recent call last):
  File "launchables/world.py", line 140, in world
  File "/home/pupillabs/.pyenv/versions/3.6.10/envs/pupil/lib/python3.6/site-packages/PyInstaller/loader/pyimod03_importers.py", line 623, in exec_module
  File "shared_modules/methods.py", line 21, in <module>
ImportError: /opt/pupil_capture/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16)

world - [INFO] launchables.world: Process shutting down.

papr 09 June, 2021, 10:28:11

Do you know which version of Capture that is? @user-764f72 please try to reproduce this issue with the latest Core release on Linux

user-c09cdd 09 June, 2021, 10:23:05

@papr this is the error that I'm getting

user-c09cdd 09 June, 2021, 10:28:49

I installed v3.3

user-588603 11 June, 2021, 11:29:16

I'm following this: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md and during pip install -r requirements.txt, the pyre @ https://github.com/zeromq/pyre/archive/master.zip dependency fails.

papr 11 June, 2021, 11:29:57

Please replace it with [email removed]. This is a temporary incompatibility.

user-588603 11 June, 2021, 11:30:06

Chat image

user-588603 11 June, 2021, 11:30:16

Thx!

user-588603 11 June, 2021, 12:03:33

Still doesn't work. What am I missing?

Chat image

papr 11 June, 2021, 12:19:13

Please delete the pyre and ndsi lines, and install pyndsi via the corresponding wheel from this link https://github.com/pupil-labs/pyndsi/suites/2951661456/artifacts/66406208

user-588603 11 June, 2021, 12:31:19

The link is dead. 404

user-588603 11 June, 2021, 12:34:59

Permission problem? Just drop the file into Discord? Or just wait until it fixes itself on Monday? :)

papr 11 June, 2021, 12:35:58

The file is too big for Discord. One sec.

user-588603 11 June, 2021, 12:39:31

Thx

user-588603 11 June, 2021, 12:58:37

Chat image

user-588603 11 June, 2021, 12:58:54

Chat image

user-588603 11 June, 2021, 12:59:27

I don't get it. Was ndsi not in those wheels? I tried to install all of the win-version ones.

papr 11 June, 2021, 13:23:30

I gave you the ndsi wheels by accident. 😄 But you would have needed those, too.

user-588603 11 June, 2021, 13:00:30

Maybe let's postpone it to next week. Could you please push a fix for the dependencies to the main git next week? Thanks a lot for your time. Have a nice weekend.

papr 11 June, 2021, 13:23:44

Yes, no problem. Have a nice weekend

user-588603 11 June, 2021, 13:45:11

You too

user-6ef9ea 15 June, 2021, 09:56:58

Hi all, I've been trying to follow the instructions here for streaming gaze data using LSL: https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/README.md

papr 15 June, 2021, 09:58:21

Hi, then it is likely that pylsl is not correctly linked/installed

user-6ef9ea 15 June, 2021, 09:57:33

I'm currently stuck at Step 2: there doesn't seem to be any Pupil Capture LSL Relay plugin listed in the Plugin Manager

user-6ef9ea 15 June, 2021, 10:00:04

ok, so pylsl needs to be installed into the pupil capture application somehow?

papr 15 June, 2021, 10:02:02

Steps 1-2 from the Installation section, yes

user-6ef9ea 15 June, 2021, 10:01:16

I'm currently running the Pupil Capture app from the installer

user-6ef9ea 15 June, 2021, 10:03:35

Ok, thanks. Somehow I was confused about whether pylsl was used to consume or emit the data.

papr 15 June, 2021, 10:04:13

In this specific case, it is used to emit data. The Lab Recorder is then able to receive the data.

user-6ef9ea 15 June, 2021, 10:09:52

thanks, it's working perfectly now 👍

user-6ef9ea 15 June, 2021, 11:05:41

Ok, it seems like Pupil Core cannot handle all 3 cameras running at the same time on Windows (eye0, eye1, world)

user-6ef9ea 15 June, 2021, 11:06:15

The process crashes soon after I turn on the second eye camera, and it seems like the frame rate takes a hit right away.

papr 15 June, 2021, 11:07:45

Could you share the C:\Users\<username>\pupil_capture_settings\capture.log file such that we can check what causes the crash?

user-6ef9ea 15 June, 2021, 11:08:18

this part seems relevant:

Process eye1:
Traceback (most recent call last):
  File "launchables\eye.py", line 744, in eye
  File "zmq_tools.py", line 174, in send
  File "msgpack\__init__.py", line 35, in packb
  File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 258, in _bootstrap
  File "multiprocessing\process.py", line 93, in run
  File "launchables\eye.py", line 814, in eye
  File "launchables\eye.py", line 44, in __exit__
  File "logging\__init__.py", line 1337, in error
  File "logging\__init__.py", line 1444, in _log
  File "logging\__init__.py", line 1454, in handle
  File "logging\__init__.py", line 1516, in callHandlers
  File "logging\__init__.py", line 865, in handle
  File "zmq_tools.py", line 46, in emit
  File "zmq_tools.py", line 167, in send
  File "msgpack\__init__.py", line 35, in packb
  File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.

papr 15 June, 2021, 11:25:44

This is the first time I have seen this error. Could you check if you have sufficient free RAM/working memory on your computer? Please also try restarting and checking if this solves the issue.

user-6ef9ea 15 June, 2021, 11:11:53

capture.log

user-6ef9ea 15 June, 2021, 12:34:19

Huh, that seems to be it. Turns out that having 10 instances of Visual Studio open eats up more than 3 gigs of RAM.

user-588603 16 June, 2021, 06:20:43

Did you have a chance to push an update for requirements.txt? Also, we need to synchronize the eyetracker to other hardware by TTL, so I added some LPT code to eye.py on each frame capture. I was wondering if there is already a plugin for that, so I could stay up to date with the Pupil Labs software and could use a precompiled version instead. What would be a better solution compared to modifying eye.py?

papr 16 June, 2021, 13:57:08

requirements.txt is updated

papr 16 June, 2021, 11:51:16

"Did you have a chance to push an update for requirements.txt?"
Working on that as we speak.

user-6ef9ea 16 June, 2021, 06:53:22

A related question is whether the LSL plugin provides reasonable synchronisation guarantees. And if so, what is the upper bound on the error?

user-588603 16 June, 2021, 09:36:36

Yeah, that would require a second piece of software reading the LSL stream and then sending the TTL impulse, which induces more latency. In our setup we work with 10+ devices which need to synchronize their time base: EEG, MRT, CT, screen, PC, lasers and so on. All of them are limited-production-number research solutions with custom PCs, racks, software, drivers and so on. Some run on 15-year-old Linux versions; some of the PCs you just can't mess with on the software level, for patient safety. So we settled on communicating time, synchronization of clocks, and events on the hardware level - trigger lines via a National Instruments PXIe FPGA chassis - to merge it all on the smallest common denominator, so to speak. I believe those TTLs are the common ground in lab setups. It would be really appreciated to have an as-precise-as-possible, ready-made solution for that in the eyetracker. For me it was just 5 lines of code in eye.py, but it's difficult to keep that updated - I would prefer that the people I hand this over to could just update to the newest version of Pupil Labs instead of me having to build everything from scratch every few months.

user-588603 16 June, 2021, 09:47:03

One might even consider putting some BNCs next to the eyetracker's USB plug and sending a toggle for each frame captured by each camera, as an additional product idea for a less laggy/jittery solution. It would get rid of the latency induced by the non-real-time OS.

user-6ef9ea 16 June, 2021, 11:01:07

@user-588603 we're completely on the same page 🙂 We're trying to use Pupil Labs for neuroscience experiments with high-speed behaviour readouts, EMG and EEG, and exactly the same constraints apply. And of course if we were going for depth recordings it would get even stricter; in that case you really need those TTLs.

user-6ef9ea 16 June, 2021, 11:01:57

I would love to see the same kind of GPIO connectors found in industrial cameras that allow you to send out strobe TTLs or frame trigger pulses; those allow both syncing multiple devices and logging accurate timestamps for acquisition.

user-6ef9ea 16 June, 2021, 11:05:45

Agreed, LSL doesn't seem ideal, but from reading the Pupil API it looks like there is an option to send an event to the system to get timestamped during frame acquisition? Is this done in the Python software? If that's the case it's not so great, but I'm not sure whether there is bidirectional communication with the raw imaging firmware using libuvc.

papr 16 June, 2021, 11:28:21

@user-6ef9ea @user-588603 We are currently working on improving our time sync documentation. You can find a pre-rendered preview here https://github.com/pupil-labs/pupil-docs/blob/3ef6e46c88341717bf0ebfed8725a552ded3a26e/src/core/best-practices.md#sychronization

I think the important point to note is: Trigger delay is not a problem, as long as it is roughly constant. This way you can synchronize clocks and send a local timestamp to Capture.

See also https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py and https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py (edit: please refresh this second link as I have just updated the underlying file)
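
In the spirit of simple_realtime_time_sync.py, here is a minimal Python sketch of the clock-offset measurement (it assumes Pupil Remote is enabled on its default port 50020; halving the round trip is the usual approximation):

import time
import zmq

ctx = zmq.Context()
remote = ctx.socket(zmq.REQ)
remote.connect("tcp://127.0.0.1:50020")  # Pupil Remote default address

t0 = time.monotonic()
remote.send_string("t")                  # ask Pupil Remote for the current pupil time
pupil_time = float(remote.recv_string())
t1 = time.monotonic()

# Assume the reply arrived half-way through the round trip.
offset = pupil_time - (t0 + (t1 - t0) / 2)

def local_to_pupil_time(local_ts):
    """Map a local monotonic timestamp into pupil time."""
    return local_ts + offset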

user-588603 16 June, 2021, 11:39:23

This file is BUGGY!

In case it helps/is of interest, here's how I worked with the LPT. It's minimal changes to eye.py; search for "tulip" to find all changes in the file. The file is from a 2- or 3-year-old Pupil Labs version, and there is a bug I'm currently trying to fix - not sure if it's in my 10 lines of code or in the old Pupil Labs version, which is why I wanted to update Pupil Labs first. The bug is that the resulting recording tables contain more frames from the eye0 camera than there are trigger toggles in the connected EEG data. It seems line 643 is not called as many times as there are eye0 frames saved to the tables - not sure why.

eye.py

papr 16 June, 2021, 11:40:21

Which one are you talking about? Please note that we have just updated the remote_annotations.py example

user-588603 16 June, 2021, 11:40:37

pupil labs version?

user-588603 16 June, 2021, 11:41:15

This file is very old - maybe 2 or 3 years. I'm coming back to update Pupil Labs and fix the above-mentioned bug atm.

papr 16 June, 2021, 11:41:45

Please try hard-refreshing the webpage. (the year specification in the license header is still old 😬 )

papr 16 June, 2021, 11:44:33

"The bug is that the resulting recording tables contain more frames of eye0 camera than there are trigger toggles in the connected EEG data"
Are you using a custom TTL script?

user-588603 16 June, 2021, 11:44:49

There is no problem with the current version related to the above file. I just posted it because @user-6ef9ea said: "Agreed, LSL doesn't seem ideal, but from reading the Pupil API it looks like there is an option to send an event to the system to get timestamped during frame acquisition? Is this done in the Python software? If that's the case it's not so great." In this file you can see in line 645 where the timestamp is made - I guess - and how I added an LPT to it, just as an example. It needs to be adapted to the newest Pupil Labs version. Yes, it's my custom modification of the eye.py file.

papr 16 June, 2021, 11:45:33

Oh, ok, sorry for the misunderstanding.

user-588603 16 June, 2021, 11:45:50

I'll check if the newest version of Pupil Labs now installs the dependencies without hassle 🙂 thanks @papr

papr 16 June, 2021, 11:48:11

Please be aware that we are not planning on adding any TTL libraries or similar that are specific to non-Pupil-Labs hardware.

Nonetheless, you no longer need to change eye.py. Instead, you can write eye-process plugins that can be installed as simple Python files in the plugin directory, and use the pre-compiled Pupil Core software.
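
To illustrate the plugin route, a rough sketch of what such a user plugin might look like (dropped as a .py file into pupil_capture_settings/plugins; send_ttl() is a placeholder for your own LPT code, and the hook names should be double-checked against papr's annotated plugin example further below):

from plugin import Plugin

class TTLOnEyeFrame(Plugin):
    """Toggle a TTL line for every eye frame, without modifying eye.py."""

    order = 0.01  # run early, close to frame arrival

    def recent_events(self, events):
        frame = events.get("frame")
        if frame is not None:
            # frame.timestamp is the pupil timestamp assigned by the backend
            self.send_ttl(frame.timestamp)

    def send_ttl(self, timestamp):
        pass  # placeholder: replace with your parallel-port toggle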

user-588603 16 June, 2021, 11:50:05

Ah, very good! I'll try to check that out.

user-588603 16 June, 2021, 11:51:54

Regarding the delay, and jitter being more of a problem: that is true, but it always depends on the application and what you want to lock to. In our case it's often, e.g., the phase of a 40Hz signal. That's a wavelength of 25ms - i4f we just have ±5ms jitter, that's 360°/5 = ±72° phase resolution. It really depends on what you lock to; often it's irrelevant, yes. A single GPIO on the camera itself would get rid of the USB lag, of the non-real-time-capable Windows accessing the USB whenever it feels like it, and of the delay to the LPT, which is also sluggish and old even without all the rest. Many people would appreciate a TTL output option for the device, I'm sure - even if it's just because it's plug-in-a-BNC-and-go instead of having to think about it.

papr 16 June, 2021, 11:54:16

I am not familiar with some of the abbreviations. Could you let me know what i4f, LPT and BNC stand for?

user-588603 16 June, 2021, 12:01:30

LPT is the old 25-pin D-sub connector used for printers. It can emit 8 bits of TTL pulses. It's from the 70s or so - a cheap way to emit a 0V-to-5V signal from a PC.

user-588603 16 June, 2021, 12:01:37

Chat image

papr 16 June, 2021, 12:02:17

And how does this help get better timestamps from common USB cameras?

user-588603 16 June, 2021, 12:02:23

BNC is another type of connector. The 8 bits for TTL signals are often routed with BNC cables if you need just a single bit or line.

user-588603 16 June, 2021, 12:02:23

https://en.wikipedia.org/wiki/BNC_connector

user-588603 16 June, 2021, 12:07:22

So the current workaround is to have the Pupil Labs software toggle a bit on the LPT instead of relying on the system clock of a non-real-time operating system. I feed that 0V-to-5V logic signal into an FPGA with a 200MHz time resolution and just record the event. There are 10 other devices in the lab and they all need the same timebase for us to be able to merge the data correctly afterwards. They are on different operating systems, drivers and so on. 95% of lab equipment has TTL inputs or outputs in some way - like oscilloscopes. It's the most general signal type, I'd say. If you would dedicate one of the GPIOs of the microprocessor on each camera (I guess there is one with custom code?), you could toggle that GPIO every time a frame is passed to the PC via USB. You have a 5V, 500mA power supply over the USB, so there is no additional power supply unit needed - maybe just an additional transistor to make the current the GPIO can drive a bit stronger. So I would route that GPIO to a transistor base and then to a BNC plug. That skips all the delay from the USB hardware, the operating system accessing the USB driver, Python, Python moving it over a network socket and LSL to somewhere else, or Python moving it to an LPT. The BNC on the GPIO would replace the LPT of the PC and be more accurate.

user-588603 16 June, 2021, 12:07:56

Because the microcontroller on the camera is real-time capable and clocked at like 8MHz or something, there is no buffering through a serial USB connection or similar involved. Much more direct, much more precise.

user-588603 16 June, 2021, 12:11:09

So when you record Pupil Labs data and data from the system that reads this digital line from the BNC, you can just count: I have 100 trigger toggles on my digital input where the BNC is connected, and I have 100 entries in the Pupil Labs xml table; they occurred at the same time. I just ignore the PC-timebase timestamp and assume the 200MHz FPGA clock was correct. Data merged.

papr 16 June, 2021, 12:49:41

Following up after looking at your modified eye.py:
- "100 entries in the pupil labs xml table" - could you specify which file you are talking about exactly?
- You only seem to send out TTLs for one camera. Why is that? Do you have a monocular headset?

papr 16 June, 2021, 12:22:14

Ok, that makes sense to me. So your current implementation from above definitely includes some delay then. Unfortunately, I do not know how easy it would be to change the camera hardware. This will likely require changing the firmware as well. I will talk with our hardware development team about this, but my feeling is that these changes are unlikely to happen.

user-588603 16 June, 2021, 12:48:20

Ok, nice. Thx! I'm not sure what's exactly on your camera. I would imagine it's the camera with something like an SPI connection to a microcontroller, and on that there is firmware that moves the camera data to the USB. There you would just need to add "toggle this GPIO" whenever the frame is captured - relatively easy code. And you could market it as a high-precision (maybe you reach 1MHz accuracy) product. For sure interesting for the market with little effort, imo. We would buy 3 or 4 of them :).

user-588603 16 June, 2021, 12:51:32

"only seem to send out TTLs for one camera" - there are multiple instances of eye.py - its multithreaded i guess. i cant open the same device in all of them so i opted for eye0. if i have triggers for the timestamps of eye0 i can interpolate where the events on world and eye1 camera took place. i dont have any data file with me atm.

papr 16 June, 2021, 13:00:56

If you need to interpolate anyway, then it would probably be best to create a custom world plugin that stores the timestamps from the time point at which the TTL was generated, instead of using the eye timestamps, which include a delay between exposure and TTL.

user-588603 16 June, 2021, 12:52:30

It's the xml in which you can find the events for each camera, the timestamps, and a lot of additional info - like whether the pupil could be detected in that frame and so on.

papr 16 June, 2021, 12:53:26

The thing is, Pupil Capture does not record any xml files. That is why I would like to know the exact name of the file. This way I might help lift the mystery of why there are more entries than TTL triggers.

user-588603 16 June, 2021, 12:53:45

Ok, I'll get a file sent to me.

papr 16 June, 2021, 12:55:26

Oh, ok, so it is possible that there is some in-between processing going wrong, too. Maybe your earlier TTL script was not wrong then?

user-588603 16 June, 2021, 13:23:17

Chat image

user-588603 16 June, 2021, 13:23:19

Ok, here is the only snapshot I could get right now. I asked my coworker to get that data as files in a readable format as well, but that won't happen today. He said that the file he used from Pupil Labs is a .csv file which is generated by the Pupil Labs viewer when dropping the recording in - sorry, that's as precise as I could get it atm. I asked him to send the files in the next days.

What we see on the plot as a thin purple line rising all the way to the top, staying high, and then dropping at the end is Bit 0 of the LPT, which I set to high in line 522 when the recording starts and which is reset to low in line 549, as recorded by the system that reads those TTLs and the rest of our bio data.

In red we see entries of timestamps of eye0 from the Pupil Labs-generated .csv.

In blue we see events of Bit 1 of the LPT as recorded by the biosignal system. It's toggled in eye.py in line 643, with the intention to toggle once each frame.

The 4th and lowest colored events in the plot (orange, maybe) can be ignored - it's some irrelevant information.

So the number of red events (.csv entries) is less than the number of blue events (digital input toggles).

The theory is that there are frames missing in the .csv, because we couldn't explain why line 643 should be executed if no frame is recorded - but maybe I don't understand the code correctly. It was unclear to us whether the first or last frames (or some in between) are missing, so what my coworker did is align the events based on their timing imperfections. If you record at, say, a 60Hz frame rate, there is no dt of exactly 1/60s between each 2 consecutive frames - it's more that the average dt is ~1/60s. We used that information (those imperfections) to match the blue events to the red events, and thus figured out that the last frames resulted in LPT toggles but not in .csv entries (instead of the first few, or some missing in between).
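
That matching-by-jitter step could look roughly like this (a sketch; csv_ts and ttl_ts are hypothetical 1-D timestamp arrays from the .csv and from the trigger recording):

import numpy as np

def best_lag(csv_ts, ttl_ts):
    """Find where the csv timestamp train lines up inside the TTL train
    by comparing the inter-event intervals (the timing imperfections)."""
    d_csv = np.diff(csv_ts)
    d_ttl = np.diff(ttl_ts)
    n = len(d_csv)
    errors = [
        np.sum((d_ttl[k:k + n] - d_csv) ** 2)
        for k in range(len(d_ttl) - n + 1)
    ]
    return int(np.argmin(errors))  # index of the first matching TTL event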

user-588603 16 June, 2021, 13:27:50

But it might all have been fixed already, as this was done with a 3-year-old Pupil Labs version.

papr 16 June, 2021, 13:30:06

Ok, I think I can explain this phenomenon. Pupil Player (the "viewer" application) exports data in the range of the scene camera frames. Since the world process stops recording slightly earlier than the eye processes, there are always some more eye frames recorded until the stop-recording signal reaches the processes. These are not exported by Player but recorded as TTL signals in your own code.

papr 16 June, 2021, 13:31:29

I think the more accurate approach would be to:
1. Create a mapping between TTL signals and the recorded raw timestamps in eye0_timestamps.npy, and then
2. look them up in pupil_positions.csv (the Pupil Player-exported csv file your colleague is probably referring to).
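
A sketch of that two-step lookup (assuming a standard recording folder and Player export; since the csv may round the timestamps, a nearest-neighbour match is safer than exact equality):

import numpy as np
import pandas as pd

eye0_ts = np.load("eye0_timestamps.npy")  # raw timestamps; TTL i <-> eye0_ts[i]
pupil = pd.read_csv("exports/000/pupil_positions.csv")
pupil = pupil[pupil["eye_id"] == 0].copy()

# For each exported row, find the index of the nearest raw timestamp,
# which is also the index of the corresponding TTL toggle.
exported = pupil["pupil_timestamp"].to_numpy()
idx = np.searchsorted(eye0_ts, exported).clip(1, len(eye0_ts) - 1)
nearer_left = exported - eye0_ts[idx - 1] < eye0_ts[idx] - exported
pupil["ttl_index"] = np.where(nearer_left, idx - 1, idx)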

user-588603 16 June, 2021, 13:34:44

very good. ill check that. will take a while. thanks for your time! have to go for now - kids. have a nice day! thanks a lot!

papr 16 June, 2021, 13:35:14

Ok, have a nice day, too!

user-588603 17 June, 2021, 07:55:06

New Python wheels installed nicely. My coworker said the mismatch in frames vs. triggers is about 10s - that seems a bit high, unless the eye thread was running towards a buffer overflow when it finally got stopped. We are currently getting a new dataset with the current Pupil Labs build from GitHub to see if that helps; else I'll be back with the resulting data of both systems, I guess. Here is my old eye.py modification, ported into the current version of eye.py:

user-588603 17 June, 2021, 07:55:10

eye.py

user-588603 17 June, 2021, 08:04:20

Chat image

user-588603 17 June, 2021, 11:18:20

Solved. The number of timestamps in eye0_timestamps.npy equals the number of recorded trigger events. For the first 3 out of 30k timestamps in pupil_positions.csv I could not find the corresponding timestamp in eye0_timestamps.npy, but I guess that's a startup hiccup. It's clear which trigger belongs to which pupil_positions.csv row now.

papr 17 June, 2021, 11:18:43

Nice!

user-588603 17 June, 2021, 11:18:27

thanks a lot for your time!

user-d1efa8 17 June, 2021, 20:27:07

Hey, so I'm using the Vive add-on, and in my C# code I'm making a lot of calls to TimeSync.GetPupilTimeStamp(), but I eventually ran into this:

FiniteStateMachineException: Req.XSend - cannot send another request

I'm trying to save not just eye data but also game-related data (player position/rotation over time, events, etc.), and I need to have everything synced to one time measure(?). When I used Time.realtimeSinceStartup and timeSync.ConvertToUnityTime() I didn't get timestamps that match up at all, even after calling UpdateTimeSync(), and that's why I resorted to calling GetPupilTimeStamp(). But if it's going to give me that error after a while, it's not really a viable option in my case....

Any advice?

papr 17 June, 2021, 20:30:50

Instead of using GetPupilTimestamp(), use UpdateTimeSync() in the beginning, and then use ConvertToPupilTime(Time.realtimeSinceStartup) regularly.

user-d1efa8 18 June, 2021, 15:47:17

hey, so I tried this, and there's still a discrepancy between gazeData.pupilTimeStamp and timeSync.ConvertToPupilTime(Time.realtimeSinceStartup)

user-d1efa8 17 June, 2021, 20:40:42

Alright, I’ll use that. Also, in receiveGaze(), should I also use that or stick to gazeData.timestamp?

papr 17 June, 2021, 20:41:32

gazeData.timestamp should already be in pupil time

user-d1efa8 18 June, 2021, 11:27:50

ah, yeah that's right

user-90000e 18 June, 2021, 15:41:22

Hey all, I have some exported data that I'd like to merge together for an analysis. I would ideally like to merge together the gaze_positions.csv and pupil_positions.csv, but I'm realizing the row counts and timestamps (at the millisecond level) vary slightly, so merging will not be trivial. Has anyone developed a robust way of doing this? I can imagine writing some algorithm that will parse through and try to match the closest timestamps... but that seems like a lot of work, so I'm wondering whether this has already been done / if anyone knows an easier way. I could just conduct my analysis using two different datasets, but I think it'd just be more convenient to have it all in one - so no big deal if this type of merge is actually difficult. Thanks!

papr 18 June, 2021, 15:43:31

Hi, each gaze datum is based on one or two pupil datums. Check out the base_data column in gaze_positions. It has one or two <eyeid>-<eyetimestamp> entries that you can use to match the exact pupil datum used for that specific gaze datum
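
A sketch of that matching in pandas (assuming standard Player exports and, per the description above, space-separated <eyeid>-<eyetimestamp> tokens in base_data):

import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")
pupil = pd.read_csv("pupil_positions.csv")

# One row per contributing pupil datum: split base_data into its tokens.
links = gaze.assign(base=gaze["base_data"].str.split()).explode("base")
parts = links["base"].str.split("-", expand=True)
links["eye_id"] = parts[0].astype(int)
links["pupil_timestamp"] = parts[1].astype(float)

# Join against the pupil export; if the float match fails due to csv
# rounding, round both timestamp columns to the same precision first.
merged = links.merge(pupil, on=["eye_id", "pupil_timestamp"], how="left",
                     suffixes=("_gaze", "_pupil"))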

papr 18 June, 2021, 15:48:03

Yeah, that is expected, as you are comparing the time at which the gaze data was recorded/generated and the time at which it arrives in Unity.

user-d1efa8 18 June, 2021, 15:49:13

Can I sort my data by the timestamp, or does it not matter?

papr 18 June, 2021, 15:53:23

What do you mean? Which purpose do you have in mind? Subscribing to gaze gives you three streams: 2 monocular and 1 binocular. Each stream is monotonically increasing in time. But when mixed, they are not. Binocular data tends to be a bit delayed, since the pupil data had to be cached for the pairing.

user-d1efa8 18 June, 2021, 16:03:29

Sorry, let me give further context: I need to continuously collect game data (position, rotation, events, etc.) from the start of the game, as well as eye data, but eye data may not always be available. I'm letting ReceiveData() handle the event that the eyetracker has data available, but if there's nothing available, I let FixedUpdate() handle it for me (it just saves a bunch of zeroes).

I do realize this is a pretty bad way of trying to save the data, but I kinda don't have any other ideas...

user-d1efa8 18 June, 2021, 16:04:42

I can line up the data in post, but there's other data that already has to be lined up as well, and things could get messy/complicated, so I want to try to consolidate as much data as possible.

papr 18 June, 2021, 16:07:11

I suggest saving only data that is actually there, i.e. don't generate empty data in FixedUpdate. The same with the game data. Post-hoc, you can linearly interpolate the gaze data at the time intervals of your game data samples.

papr 18 June, 2021, 16:07:57

Also, store the timestamps of when the data is generated, not when it is received, because the latter contains a delay

user-d1efa8 18 June, 2021, 16:25:20

Yeah, I'm making sure to generate the timestamp when the data is generated. The only thing is that I know I will always get my game data whenever I call for it (it's always available), so I'm not sure how to line up the gaze data with the game data, or vice versa.

user-90000e 18 June, 2021, 16:15:37

Aha, thanks, that should be enough to get me going.

papr 18 June, 2021, 16:35:28

Since you will be analysing gaze in the context of game data, and the game data is always there, I suggest aligning gaze to the game data. For that, use the interpolation function of your programming language, fit it to the gaze data, and evaluate it at the game data timestamps. I might be able to get you a simple example in Python if this is helpful to you.

user-d1efa8 18 June, 2021, 17:31:19

Yeah, a Python implementation would be helpful if you already have it, but don't worry about it if you still need to write it.

papr 18 June, 2021, 17:33:30

I do not have it ready to go. If it would still be of use early next week, I can work on the example on Tue

user-d1efa8 21 June, 2021, 12:35:59

If you don't mind, I'd still appreciate it if you could send it whenever you can.

papr 21 June, 2021, 16:34:38

https://nbviewer.jupyter.org/gist/papr/7cf3f328c89327e8285345a40e690ec9

I attached a small gaze_positions.csv file (200 samples) and a Jupyter notebook that reads the data and interpolates norm_pos_x at different time points.
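
The core of that notebook presumably boils down to something like the following (a sketch assuming the standard gaze_positions.csv columns; game_ts stands in for your game-data timestamps, already expressed in pupil time):

import numpy as np
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")

# Stand-in for the real game timestamps:
game_ts = np.linspace(gaze["gaze_timestamp"].min(),
                      gaze["gaze_timestamp"].max(), 100)

# np.interp evaluates a piecewise-linear fit of norm_pos_x over the
# (monotonically increasing) gaze timestamps at the game timestamps.
gaze_x_at_game_ts = np.interp(game_ts,
                              gaze["gaze_timestamp"],
                              gaze["norm_pos_x"])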

user-d1efa8 22 June, 2021, 00:30:29

thank you so much!

user-588603 24 June, 2021, 10:56:19

Following up on the discussion of whether or not it would be an improvement to add a trigger output to the microcontroller on each camera vs. emitting that trigger via software: red is the delta time between 2 consecutive eye0 camera samples as measured by the EEG recording system reading the digital line of the software-controlled parallel port. Blue is the delta time of the Pupil Labs timestamps. Green is the difference between blue and red. So at a sample rate of 120Hz (with an optimal dt of 8.333ms) we see jitter of an order of magnitude comparable to the time between two consecutive samples (std of green is ~2ms; RMS is ~2ms as well). A histogram of green is in the 2nd figure.

Chat image
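
For reference, the numbers above can be reproduced with a few lines (the file names are hypothetical; red and blue hold one timestamp per frame, aligned one-to-one):

import numpy as np

red = np.load("eeg_ttl_timestamps.npy")   # TTL toggles, EEG-system clock
blue = np.load("pupil_timestamps.npy")    # pupil timestamps, Capture clock

green = np.diff(blue) - np.diff(red)      # per-frame timing disagreement
print("std:", green.std(), "rms:", np.sqrt((green ** 2).mean()))  # both ~2 ms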

papr 24 June, 2021, 10:58:56

"red is the delta time between 2 consecutive eye0 camera samples as measured by the EEG recording system reading the digital line of the software-controlled parallel port"
I do not understand what you have done here. Could you explain that in a bit more detail?

user-588603 24 June, 2021, 10:56:46

Chat image

user-588603 24 June, 2021, 10:58:17

So the software-triggering method basically jitters on the order of magnitude of the 120Hz signal. And that doesn't even consider the absolute offset of the software-to-parallel-port delay, which I would guess is on the order of ~10ms. So the real inaccuracy is likely 10ms ± what we see in green in the first plot, or purple in the 2nd plot.

user-588603 24 June, 2021, 10:58:30

A hardware-emitted trigger from the microcontroller would negate all that.

papr 24 June, 2021, 11:00:19

Also, are you collecting the pupil timestamps on Windows or Unix? Because that makes a difference, too. (not sure if we talked about hardware vs software timestamps yet)

user-588603 24 June, 2021, 11:02:22

Blue is the delta time of pupil timestamps collected on Windows. In each frame I emit one trigger via LPT, as we discussed earlier. Red is the delta time between two timestamps of the real-time system that reads the TTL output of this LPT. The difference of red and blue (green) tells how much jitter is induced between Python calling the LPT hardware and the hardware finally responding with a 5V signal.

user-588603 24 June, 2021, 11:03:27

The plots don't include the absolute offset in time. It could be that the Python-to-LPT output signal actually has a delay of 1 minute (a much-exaggerated example) plus the jitter of ±2ms on average, which we see here in green.

user-588603 24 June, 2021, 11:04:06

But as the jitter is already on the order of magnitude of the sample rate, it's... not ideal, not even mentioning the delay we don't know about.

user-588603 24 June, 2021, 11:04:13

afk 10mins

user-588603 24 June, 2021, 11:15:20

So yea. The jitter of the blue line alone is what you can see in just the Pupil Labs timestamps; adding the LPT jitter obviously adds to it (and we can't get around adding that if we want to sync to external hardware). It's on the order of magnitude of the 120Hz signal and could be improved by adding a hardware TTL output to each camera, is what I'm saying. It's not a problem for now - it's what we get - just wanted to quantify it a bit, as we talked about that earlier.

papr 24 June, 2021, 11:25:09

@user-588603 So this is the order in which things happen:
1. eye camera records frame
2. eye camera transmits frame to computer
3. Capture receives frames; creates pupil timestamp
4. eye process plugins run, including pupil detection
5. you send out TTL

So, I think the variability (green) that you are seeing comes from the processing time in 4.

papr 24 June, 2021, 11:28:45

And yes of course, we are in agreement that a hardware trigger would give more accurate timestamps. There might still be some variability as the cameras do not run at a fixed framerate.

Also, have you quantified how much time the sending of the TTL takes?

user-588603 24 June, 2021, 14:44:46

"There might still be some variability as the cameras do not run at a fixed framerate." - yes. it would be a lot more accurate related to when was the frame taken, but frames would still not occur at exact 120Hz.

user-588603 24 June, 2021, 14:45:10

Regarding 4: I was not aware of that. I was under the impression that the timestamp is taken in line 782 of eye.py, which is why I placed the "send TTL" directly next to it.

user-588603 24 June, 2021, 14:45:26

eye.py

user-588603 24 June, 2021, 14:45:59

But maybe that's just for the GUI display of CPU load or processing time or something, and not the actual timestamp that reaches the npy and, respectively, the csv.

user-588603 24 June, 2021, 14:46:25

Where would be the best point in the code to send the TTL, to be as close as possible to "3. Capture receives frames; creates pupil timestamp"?

user-588603 24 June, 2021, 14:50:02

This plot is based on the current version of Pupil Labs btw: https://discord.com/channels/285728493612957698/446977689690177536/857574858157326337 In the meantime we also compared to older recordings taken by the aforementioned 3-year-old Pupil Labs software, and back then there was apparently less jitter:

user-588603 24 June, 2021, 14:50:14

this is data from the old version

Chat image

papr 24 June, 2021, 16:18:23

Yes, earlier versions used hardware timestamps which had less variance. But we saw a large number of support cases in which the hardware timestamps were no longer in sync. As this breaks the binocular mapping completely, we decided to use software timestamps (arrival timestamp - fixed offset) as a compromise. The change was introduced in this release https://github.com/pupil-labs/pupil/releases/v1.16

user-588603 24 June, 2021, 14:51:52

We are currently investigating whether the new version is producing more (too much) CPU load due to the online 3d pupil detection (which was not available in the old version) and whether that's the cause, and are debating if we should upgrade the CPU or downgrade the Pupil Labs software - though that doesn't seem to be a good solution, as a system we can't upgrade anymore is problematic.

papr 24 June, 2021, 15:18:45

You can disable the 2d and/or 3d detection if you want and run it post-hoc.

user-588603 24 June, 2021, 14:54:33

So it seems to me that "4. eye process plugins run, including pupil detection" increased in CPU load in the meantime (between code versions), and thus we should first try to skip that as much as possible and place "5. you send out TTL" as close to "3. Capture receives frames; creates pupil timestamp" as possible.

papr 24 June, 2021, 15:19:17

I think I can help you out with that.

papr 24 June, 2021, 15:17:51

No, this line just accesses the already created timestamp. πŸ™‚ The actual timestamping happens in the uvc backend. https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L490

papr 24 June, 2021, 16:14:06

@user-588603 Please note that in step 3, we try to correct for the delay of step 2 (+ self.ts_offset; Windows only). These offsets are fixed and based on the camera name https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/video_capture/uvc_backend.py#L270 The delay in step 2 is truly dependent on the resolution, though.

This is a user plugin that you can place in your corresponding plugin directory: https://gist.github.com/papr/72dbe95a67ebeab4e678c0ac8d9aedc1

I annotated the places for your necessary TTL changes. Without changes, it prints the delay between the frame timestamp and the pupil time at which the plugin is executed. Please mind that this delay is at least as long as the offset correction on Windows.

user-588603 24 June, 2021, 16:52:25

Thanks a lot! I have to implement this on Monday.

user-d1efa8 24 June, 2021, 16:55:08

For gazeData eye centers and normals, how are the axes aligned?

papr 24 June, 2021, 16:56:00

see 3d camera space https://docs.pupil-labs.com/core/terminology/#coordinate-system

papr 24 June, 2021, 16:57:29

If you need these in Unity coordinates, you need an additional transformation.

user-d1efa8 24 June, 2021, 17:06:12

Sorry, but would you mind pointing me in the right direction to figure out these transformations?

papr 24 June, 2021, 17:07:06

@user-d1efa8 https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#map-gaze-data-to-vr-world-space

Vector3 directioninWorldSpace = gazeOrigin.TransformDirection(localGazeDirection);
user-d1efa8 24 June, 2021, 17:12:19

thank you so much

user-b14f98 25 June, 2021, 13:30:41

Hey folks, can you please let me know where the calibration files are serialized? What format is a *.plcal file?

user-b14f98 25 June, 2021, 13:31:12

Oh, and while typing that I thought to search the repo for *.plcal

user-b14f98 25 June, 2021, 13:31:30

Found calibration_storage.py. Sometimes you just need to say it out loud, or type it out...

user-b14f98 25 June, 2021, 13:32:59

Yep

user-d230c0 25 June, 2021, 16:20:23

Chat image

user-d230c0 25 June, 2021, 16:20:59

I am getting this error whenever I try to run the source code. Any way someone can help?

user-d230c0 25 June, 2021, 17:37:17

Solved: for future reference, if anyone gets stuck at a similar DLL error, download ffmpeg-release-full-shared.7z from https://www.gyan.dev/ffmpeg/builds/

user-6cbd99 26 June, 2021, 15:51:07

@papr as you answered, I am trying to develop a plugin that changes the Plugin.g_pool.min_calibration_confidence value to a value of my choosing; however, I keep getting stuck. Would you point me in the direction of how I should get started, or maybe give me some examples of similar plugins that were already developed?

papr 26 June, 2021, 15:55:31

I can give you an example on Monday
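
For the archive, a guess at the minimal shape of such a plugin (hypothetical; not the example papr ended up sending, and the attribute name is taken from the question above):

from plugin import Plugin

class MinCalibrationConfidenceOverride(Plugin):
    """Set g_pool.min_calibration_confidence to a custom value on load."""

    def __init__(self, g_pool, min_confidence=0.6):
        super().__init__(g_pool)
        g_pool.min_calibration_confidence = min_confidence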

user-6cbd99 26 June, 2021, 15:57:00

Ok, thank you. I have changed all the minimum calibration threshold values to 0.4; however, it still says "Dismissing pupil data due to confidence < 0.80".

papr 26 June, 2021, 17:31:52

0.4 is extremely low. There must be something going wrong with the pupil detection if you need the limit to be that low. 😕

user-6cbd99 27 June, 2021, 10:10:50

I was just doing it for testing purposes, but I would definitely want it to be higher when actually working with Pupil.

user-573a28 28 June, 2021, 17:38:10

Hi there, I'm trying to install pupil-detectors==1.1.1 on an RPi 4 and I'm getting errors because of some missing Ceres files (ceres/jet.h: no such file or directory). How can I go about solving this? I looked at pupil-detectors' code and apparently there are many other files mentioned under Ceres, and I can't find any of them under src/shared_cpp/include/ceres. Thanks in advance.

papr 28 June, 2021, 18:39:12

Ceres is primarily an external dependency. You will need to install it based on older commits in this folder https://github.com/pupil-labs/pupil/tree/master/docs (select the file fitting for your OS and look at the commit history for removed c++/ceres dependencies)

user-573a28 28 June, 2021, 19:25:37

I'm using the RPi OS, do you have any advice on which OS's instructions I should follow?

papr 28 June, 2021, 19:40:42

Ubuntu, I guess

user-573a28 28 June, 2021, 19:42:16

Ok, I'll try. Thank you

user-573a28 28 June, 2021, 20:21:33

One more question: by removing the Ceres dependencies, do I lose the Detector3D module?

papr 28 June, 2021, 20:27:59

The old 3d detector requires ceres, yes

End of June archive