Hello @nmt
I had some questions regarding surface data:
Hi @user-50567f. Responses below:
1. The duration is the length of the classified fixation, yes.
2. Would you be able to clarify what you mean by 'relative' time - do you mean from the start of the recording, or something else?
3. That's impossible for me to tell just from the image you've shared. However, you can calculate the sampling frequency by examining the gaze or pupil data. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1039493801741389904
@nmt
So the current unix time is 1683160673. In the screenshot of the csv the world_timestamp is 8196.802. How do I translate 8196.802 to unix time?
You can follow this guide to achieve that: https://docs.pupil-labs.com/developer/core/overview/#convert-pupil-time-to-system-time
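(For illustration, a minimal Python sketch of the conversion described in that guide. It assumes Pupil Capture is still running with the same clock base and that Pupil Remote is reachable on its default port 50020; if the clock was reset since the recording, this offset approach won't apply.)

```python
# Hedged sketch: estimate the offset between Pupil time and Unix time once,
# then add it to recorded timestamps such as world_timestamp.
import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")   # default Pupil Remote address

pupil_remote.send_string("t")                   # ask Pupil Remote for the current Pupil time
pupil_time_now = float(pupil_remote.recv_string())
offset = time.time() - pupil_time_now           # Unix time minus Pupil time

world_timestamp = 8196.802                      # value from the exported csv
print("approx. unix time:", world_timestamp + offset)
```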
Is the DIY tracker (with 1080p logi c615 world camera) good enough to use as a warp pointer in cursor control?
This really depends on how good a calibration you get. In theory it should be possible even with Pupil DIY but your mileage may vary
And, has anyone used surface tracking to calculate head pose (position + orientation)? Seems rather straightforward but I don't see it mentioned.
Yes indeed - check out the Head Pose Tracker Plugin: https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
Nice!
Haven't started building it yet. Need to figure out if it fits my use case first
~~The Logitech works with my Linux box~~
~~I'll download Pupil and check but I don't expect any problems~~
From what I've read slippage spoils the calibration over time. But pye3d in theory can compensate. Is it good enough that one calibration will last a session? Would having a tracker for both eyes help?
~~Oh, by calibration, did you mean for using the tracker or did you mean configuring the camera~~
~~The Logitech C615 I've got is the recommended world camera for the DIY build so I hope Pupil has decent parameters for it preset (ditto the Microsoft 6000).~~
Oh, you were talking to someone else.
So, for it to work well I would need it to be well calibrated. Does slippage make recalibration very frequent, or can the model compensate? Would it help if I tracked both eyes?
Tracking both eyes will definitely improve gaze estimation accuracy. Slippage is, of course, highly dependent on headset fit, environment, task, etc. There are certainly limits to what the model can do, and the only way to really know what it's going to be like for your situation is to test it under your task conditions.
One last thing: would it be possible to use newer cameras for the DIY? I'd like to swap out the 30 Hz eye cameras for something faster if possible. I skimmed the paper behind pye3d, and it seems like a higher frame rate would be crucial for gaze estimation accuracy if I move my head at all. Is that correct?
Pupil Capture uses Python bindings for libuvc to capture frames, so anything compatible with that should work. I imagine this would cover any consumer-grade cameras, but if you're considering something specialized, I'd look for compatibility with libuvc.
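(As a quick, unofficial check of libuvc compatibility, you could list devices with the same pyuvc bindings; a minimal sketch:)

```python
# If a camera shows up here, it is reachable through libuvc/pyuvc.
import uvc

for device in uvc.device_list():
    print(device["name"], device["uid"])
```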
Ok, thank you!
Is it possible to extract the diameter of the blue circle from the data file generated in 3d tracking? Is there a code method for doing this? I want to do this to normalize the diameters in my plot as the circle changes size.
Hey @user-3578ef! May I ask why you want to export and normalise the diameter of the blue circle, which represents the eyeball model, rather than pupil diameter itself?
Sorry, I meant to say: I want to normalize the diameter of the pupil, which varies because our dive mask is flexing, and pupil diameter seems to be inversely proportional to the blue circle diameter. The flexing would be due to, among other things, pressure changes from breathing.
@nmt Just to be clearer, I'm wrestling with the sudden transition near the right side of the diameter plot that happens when the Pye3d tracker rescales the blue circle. Scrubbing the video file reveals that the pupil diameter is not changing much, but the result in millimeters presented in the graph jumps up when the circle diameter drops down. Notice that most of the change is instantaneous and then converges to a new, larger diameter. This eye image also helps illustrate how the dive mask can compress closer to the face and suddenly increase magnification, so the situation is far from ideal for the off-the-shelf Pupil Core software, but we haven't yet wanted to tinker with internals. So we are just doing analysis directly from the recording files.
I want to get only the "latest" pupil or gaze position using MATLAB. The pupil_helpers example filter_messages.m does not deal with how to "flush" the sub message queue. I do not want to loop through all messages to get to the last one, since in a tight closed loop (e.g. a 120 Hz refresh rate leaves 8 ms to draw stimuli and do all other processing) every ms is precious to avoid dropping frames. What is the most efficient way to do this? I didn't see a clear answer in the docs or the ZeroMQ guide...
Hey @user-3578ef. Thanks for providing an overview of your setup and the issues you're seeing. Pupil size estimates are generated by pye3d, which is our 3D eye model. There are a few factors that can lead to erroneous pupil size estimates with pye3d. The first is model adjustment error, and the second is headset slippage (like the flexing of your goggles). It's likely that you're encountering both. You can read about these topics here: https://docs.pupil-labs.com/core/best-practices/#pupillometry Can you share a recording of the eye video? That will give me a clearer picture of the situation.
Thank you, but I have read the docs and studied many recordings, and I do see that Pye3d occasionally undergoes what one might call a "phase change" where it resets its scale to get a better grip on the orb, if you will. So the question for the moment is really just: can I get access to this internal scale change in some way? That is, can I know the relative diameter of the blue circle, so I can run some confirming experiments generating new adjusted pupil diameters? I ask because we know the pupil cannot physically change diameter this way. This line of thought also led me to wonder whether some of the smaller variations in pupil diameter reported in the Player graph (not saccades) are caused by small changes in this internal scale maintained by Pye3d.
Hi @user-3578ef!
I have liaised with our R&D team and we think that what you're referring to are jumps of the eye model (although seeing a recording would help us to say concretely). Such jumps are caused by model errors and headset slippage, e.g. from flexing of your goggles. Jumps in the eye model will coincide with a change in the blue circle rendering, as that is the silhouette of the modelled eyeball, and a jump in estimated pupil size.
Note that the jump in estimated pupil size will be roughly proportional to the pupil-camera distance; a 10% increase in pupil-camera distance will cause a 10% increase in pupil size.
It might be possible to use this information to roughly correct your pupil size measurement. A good place to start would be the eyeball centre (unit: mm in eye camera space) that's reported in the raw export. You could try to analyse this data to find out where discrete updates of the eyeball centre occurred, and in principle, use it to approximately re-scale pupil size.
However, and this is a big caveat, this assumes a slippage directly along the optical axis of the eye camera. In reality there will be horizontal/vertical movements, which will add further complexity to the situation, and this has ruled out the method in previous cases.
I hope this helps!
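(For illustration, a rough sketch of the re-scaling idea described above, working from pupil_positions.csv in the raw export. The pye3d method filter, the use of only the z component of the sphere centre, and the 0.5 mm jump threshold are assumptions for illustration, not a validated procedure.)

```python
# Treat changes in the modelled eyeball centre (sphere_center_z) as changes in
# pupil-camera distance and divide them out of diameter_3d. This only holds for
# slippage along the eye camera's optical axis (see the caveat above).
import pandas as pd

df = pd.read_csv("pupil_positions.csv")
df = df[df["method"].str.contains("pye3d")]      # keep 3d datums only

z = df["sphere_center_z"]                        # mm, eye camera coordinates
baseline_z = z.iloc[0]

# Estimated pupil size scales roughly with pupil-camera distance, so normalise it out.
df["diameter_3d_rescaled"] = df["diameter_3d"] * (baseline_z / z)

# Discrete model re-fits show up as step changes in the eyeball centre.
refits = z.diff().abs() > 0.5                    # 0.5 mm threshold is illustrative
print("candidate model re-fits at rows:", df.index[refits].tolist())
```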
I still haven't solved how to get only the last message (I will try ZMQ_CONFLATE or the high-water-mark options to force the queue to be smaller), but a related problem is that I timed the recv_message.m function that gets the eye position from the pub/sub channel (using a modified filter_messages.m from the pupil_helpers sample code repo). I am using tic/toc to time recv_message.m
and plotting the histogram of time in milliseconds for 500 message receives. The mean (left plot) is 5.1 ms, which is not ideal for closed-loop experiments, but even worse is the long tail. My experiment's inter-frame interval is 8 ms (120 Hz display), and this timing characteristic will result in dropped frames. I tried to simplify the receive code so it was just topic = char(zmq.core.recv(socket)); payload = zmq.core.recv(socket, bufferLength), removing the msgpack and the warning checks, but the mean and long tail were still present. For EyeLink / Tobii / iRec we use TCP and can get reliable closed-loop eye position data in around 1 ms or less on the same system. How can we improve closed-loop performance?
I tried the same thing using the Python filter_messages.py and see very similar timing.
As I understand it, since the frame rate varies, the variance in these timings may be governed by the blocking nature of the receives. I can set ZMQ_DONTWAIT, but how does that interact with the fact that each message requires two recv operations, one for the header and one for the payload?
OK, so this was the effect of blocking. Setting ZMQ_DONTWAIT brings the latency down.
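(For reference, a minimal Python sketch of the "drain the queue, keep only the latest" pattern being discussed. The socket setup is assumed to follow the pupil_helpers examples; this is not an official recipe.)

```python
# Non-blocking receives empty the SUB queue and keep only the newest message,
# so a tight stimulus loop never waits on stale data.
import msgpack
import zmq

def latest_datum(sub_socket):
    latest = None
    while True:
        try:
            topic, payload = sub_socket.recv_multipart(flags=zmq.NOBLOCK)
        except zmq.Again:                 # queue is empty
            break
        latest = msgpack.unpackb(payload)
    return latest                         # None if nothing arrived since the last call
```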
In MATLAB, without parsemsgpack
parsemsgpack (which is MATLAB-only interpreted code) does cause a delay, ~2 ms for a 2D message and ~4 ms for a 3D message, so I will probably need to try to optimise this. In Python, msgpack seems to have a negligible effect, so it is probably a compiled binary implementation.
(thanks to @user-92dca7 for the input)
Thanks Neil, that does help. My plan now is probably to add a corrective concave lens onto our encapsulated camera stalk, instead of the sapphire window we waterproof with. This will help keep the eyeball in the field of view when the mask slips. This relates to the eyeball falling into the lower edge of the image: while the algorithm does seem to work well even in that condition, it can suddenly reject that state and try to re-converge at a new diameter. Lower magnification that keeps us away from the edge might give us the expected Pye3d performance during such slippage, at a small cost in diameter accuracy.
No problem! Do let us know how you get on. In fact, we'd be interested in seeing more of your dive mask with integrated eye tracking, if you can share.
Hello, unfortunately I do not understand how the fixation detector works. I have the minimum duration threshold set to 80 ms and the maximum duration to 220 ms (Pupil's defaults). In my exported csv I have values from zero ms to 220 ms (e.g. 0.8 ms, 3 ms, 28 ms), so I doubt I understand the given explanation correctly. Could anyone be so kind as to explain this to me? Thanks in advance!
Hello, I have not worked with Pupil for 2 years, i.e. I still have the old version 3.1.0. In the latest version I can't find the option to adjust the maximum and minimum fixation duration in the fixation detector. Where can I find these options now?
Hi, @user-7a8fe7 - in Pupil Player, open the plugin manager and make sure the Fixation Detector is enabled. Then, in the Fixation Detector settings, you'll find the min and max duration options.
@nmt Hi Neil. I've been playing with file_methods.py and circle_detector.py, etc., but haven't succeeded in getting the projected eyeball center or radius yet. Can you tell me more about how to do this - which methods would be sufficient?
The value you need is sphere_centre_*. Post-hoc, that's found in pupil_positions.csv, and it's available on the IPC backbone in real-time: https://docs.pupil-labs.com/developer/core/network-api/#reading-from-the-ipc-backbone
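(A minimal sketch of the real-time route, assuming Pupil Capture is running locally with Pupil Remote on its default port 50020; see the linked docs for the full pattern.)

```python
import msgpack
import zmq

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe("pupil.")                       # pupil datums; only 3d ones carry the eye model

while True:
    topic, payload = sub.recv_multipart()
    datum = msgpack.unpackb(payload)
    if "sphere" in datum:                     # skip 2d-only datums
        print(datum["sphere"]["center"])      # eyeball centre in eye camera space (mm)
        break
```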
@nmt I have been aware of its presence in the csv files, but my workflow was to avoid generating the csv and operate in batch mode on the numerous recordings we collect from subjects. In that mode I can process a lot of recordings very quickly and create a 10 x 10 grid of subplots for a visual check of performance. Can I run the IPC backbone post-hoc from a recordings folder?
In that case, you can just extract the data from pupil.pldata. Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1097858677207212144
Yes, that's exactly what I have been trying to do but was unable to extract the centre, I'll check that out now. Tnx.
Ok, I see from the load_pldata.py example how to get the sequence of operations on .pldata to yield eye_center_3d now.
Is it possible to order Pupil Core eye camera replacements with a 90 deg FOV? I believe the standard ones are 50 deg?
Yay! eye0 sphere diameter = 20.78 mm in frame 5000
path = "d:\Recordings\P07\004" get_data = load_pldata_file(path, "gaze") data_gaze = get_data.data print(f"eye0 sphere diameter = {data_gaze[5000]['base_data'][0]['sphere']['radius']*2:6.2f} mm")
Well, even that's not correct: the sphere radius it returns is always the same, for all frames of all my recordings. Rats!
The radius is based on an anthropomorphic average because of the single-camera scale ambiguity, as noted in the docs: https://docs.pupil-labs.com/core/software/pupil-player/#pupil-positions-csv. I recommend looking at the sphere centre as per my previous message.
The 20.78 mm number was taken as twice the radius value in the sphere entry:
'sphere': {'center': (1.1757230800994654, -7.01442322638656, 31.619273671542608), 'radius': 10.392304845413264}
Am I looking at the wrong sphere center...? My code derived from your load_pldata.py:
path = "d:\Desktop\DiveMask\Recordings\P09\004" get_data = load_pldata_file(path, "gaze") data_gaze = get_data.data data_ts = get_data.timestamps i=0 ts0 = data_ts[i] for gaze in data_gaze: radius = data_gaze[i]['base_data'][0]['sphere']['radius'] print(f"{i}: sphere diameter[{(data_ts[i]-ts0)/60:.4f}] ={2*radius:6.2f} mm") i+=1
@user-3578ef, I'm not sure I fully understand what you're attempting with this. The sphere radius is a fixed value; we have to make this assumption as described in my last message. If you want to explore the possibility of correcting pupil size using slippage data, you'd need to look at movement of the sphere centre as described here: https://discord.com/channels/285728493612957698/446977689690177536/1106614536431214694. I would also recommend working with pupil data as opposed to gaze data - pupil data is relative to eye camera coordinates. Specifically, sphere_center_* found in pupil.pldata.
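(A small sketch of reading those values post-hoc from pupil.pldata, assuming the load_pldata_file helper from Pupil's file_methods.py is importable; the path and the topic filter are illustrative and may need adjusting per recording version.)

```python
from file_methods import load_pldata_file

pupil = load_pldata_file(r"d:\Recordings\P07\004", "pupil")   # illustrative path

for datum, ts, topic in zip(pupil.data, pupil.timestamps, pupil.topics):
    if topic.endswith("3d"):                   # topic naming differs between versions
        print(ts, datum["sphere"]["center"])   # step changes here indicate model re-fits
```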
Ok, to clarify, let's go back to my original question (which might be misguided). I have observed that the diameter of the blue circle (the projection of the eyeball sphere found by Pye3d) changes as the scene shifts due to slippage. And as that diameter increases, the reported pupil diameter changes even though the physical pupil diameter hasn't. If I can find the diameter of the blue circle, I might be able to correct the reported pupil diameter to compensate for the camera shifts that cause these magnification changes. So I'm hoping to find a way to get the radius of the sphere being projected onto the eye image. Yes, I did try working with pupil data, but the radius there is the small radius of the pupil, I think - I'll have another go at that.
@nmt Hi, is it possible to extract surface data using a Python script rather than using Pupil Player?
Hey, I don't know if any lab-streaming-layer and machine-learning-minded folks will catch this, but I've been creating an LSL-based brain-computer-interface Python pipeline to test and train sklearn and TensorFlow models live on LSL streams. There's an example with the Pupil Core here: https://github.com/LMBooth/pybci/tree/main/pybci/Examples/PupilLabsRightLeftEyeClose, with main package install instructions here: https://github.com/LMBooth/pybci/tree/main
It would be great to get some feedback or input from other capable device holders.
Hi @nmt, is it possible to extract surface data using a Python script rather than using Pupil Player?
That depends.
If the surface data already exists, you can extract it with this: https://gist.github.com/papr/87157c5da93d838012444f4f6ece6bcc
If the surface data needs to be computed, then no, there's no easy way to do that without Player.
When I run examples, there is always this issue. May I know how to solve it? Thank you~
Hi, @user-04c6a4 - it looks like you're trying to install from source, but may be missing some build steps outlined here: https://pye3d-detector.readthedocs.io/en/stable/install_from_source.html
Having said that, building pye3d from source is really only useful if you are modifying pye3d itself. If you're just using pye3d, we recommend installing the pre-built version from pypi using pip install pye3d
@user-cdcab0
I have now installed Eigen3 through CMake and set the system variables. May I know what to do with the Eigen3 build directory you mentioned?
Did you build Eigen following the steps on https://pye3d-detector.readthedocs.io/en/stable/install_from_source.html? If so, you should have a folder named build and the Eigen3_DIR environment variable already set in your current PowerShell session.
Using that same PowerShell window, the final step from that guide is to cd to your pye3d folder and run python -m pip install .
@user-cdcab0
I'm trying to write a C++ script to open both eye cameras on the Core platform, based on the vis_two_cameras example. Am I correct in understanding that the Capture class creates two uvc contexts, one for each camera?
Hi @user-1ba94f - I dipped my toes into the pyuvc and libuvc code, and yes - it does appear that fresh memory is allocated for the UVC context on each instance of the Capture class.
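(An illustrative Python counterpart using pyuvc: each uvc.Capture instance sets up its own UVC context internally, so the two eye cameras are just two independent Capture objects. The "ID0"/"ID1" name filter assumes Pupil Core's eye camera naming and is only for illustration.)

```python
import uvc

# Pupil Core eye cameras typically carry "ID0"/"ID1" in their names (assumption).
eye_devices = [d for d in uvc.device_list() if "ID0" in d["name"] or "ID1" in d["name"]]

captures = [uvc.Capture(d["uid"]) for d in eye_devices]   # one UVC context per instance
print(f"Opened {len(captures)} eye camera(s)")

for cap in captures:
    cap.close()                                           # release each camera and its context
```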
@nmt I think I now understand why I cannot find and extract the current eyeball diameter (or radius) at each iteration. I think it doesn't exist as a separate entity but is computed and drawn inside OpenGL as a component of LeGrandEye.
I believe my use of a VM was the issue. Running the same script outside of the VM allowed it to run
Good job troubleshooting that. That sounds like one of those issues that can take 5 minutes, 5 hours, or 5 days to sort out!
We don't hear too frequently from people doing what you are, but it's always nice when someone shares their solution, so thanks for posting back!
Will you continue troubleshooting your VM config or just develop directly in your host OS now?
Since I'm developing something for the Raspberry Pi, my plan is to write in the VM, then compile and run on the Pi. Unfortunately I'm on a bit of a time crunch and am lacking in the knowledge department to understand and debug what the issue with the VM is.
Hi @user-9a0705. Would you be able to elaborate on your intended use case? Are you trying to run the fixation detector source code in a standalone manner?
Hi Neil, thank you so much for getting back to me. Yes - when I call the function in my code (it is really basic code, just the communication and recording) it didn't let me use it. So I opened fixations_detector.py and tried to run it as is, and a lot of errors appeared because I was lacking various libraries. This is the last error I got and I haven't been able to get past it. I'm really new to Python, so I didn't have anything installed apart from the defaults it brings once you download it.
The code you've been attempting to run is actually part of the Pupil software and has a lot of dependencies that prevent it from running standalone. May I ask why you're trying to run bits of the code standalone? A lot can be done using our Network and/or Plugin APIs. If you can elaborate on your specific use case, we can offer some concrete guidance.
@nmt Hi, I still have no success running the code from source. May I ask: if I were to port pye3d to C++, would it be feasible? In addition, I want to understand this code better - where should I start, and do I need to use every code file here?
Greetings @user-04c6a4. Porting everything over to C++ is of course conceptually possible, but it would be quite a bit of work. I recommend checking out all of the dependencies to get a sense of just how much work it'd be.
Hello, I would like to write an app that provides a plot showing the evolution of blink rate, saccade rate, and fixation duration. Is it possible to access these measures while recording?
Hey @user-068eb9! Which product are you using?
Hi, I have the Neon glasses. I would like to parse the raw files of the IMU that are stored on the phone. Would you be able to share how you encoded the data in the binary format? The file I am referring to is extimu ps1.raw. I see there is the protobuf specification imu.proto, but there is some parsing to be done on the raw binary file before I can extract the IMU data. Would you be able to help?
Hello, I created a bash script to boot up main.py capture; however, when running the script I get this error:
import packaging.version
ModuleNotFoundError: No module named 'packaging'
I have already attempted to pip install packaging. I am using the Anaconda prompt; any help would be greatly appreciated.
I don't use Anaconda, but shouldn't you prefer conda install packaging instead of pip?
Hey @user-2e75f4! Is there a specific reason you need to run from source? Most use-case requirements can be met using the standalone bundles in combination with our Network and Plugin APIs:
- https://docs.pupil-labs.com/developer/core/network-api/#network-api
- https://docs.pupil-labs.com/developer/core/plugin-api/#plugin-api
Using Python 3.10