Kalman Filter — Intuition, Math, and Practical Implementation
A concise yet complete guide to the Kalman filter: intuition, state-space model, core update equations, implementation tips, and common applications.
The Kalman filter is the optimal state estimator for linear Gaussian systems. It alternates between Predict and Update to estimate hidden states (e.g., position, velocity, attitude) from noisy sensor measurements.
This guide covers: (1) Intuition and modeling, (2) Core equations, (3) Implementation notes, and (4) Applications and extensions.
1. Intuition and Modeling
State-Space Model
We model the system in discrete time with a linear Gaussian state-space:
$$x_k = A x_{k-1} + B u_k + w_k, \qquad z_k = H x_k + v_k$$
- $x_k$: hidden state at time $k$ (e.g., position/velocity)
- $u_k$: control input (e.g., throttle/steering)
- $z_k$: observation/measurement (e.g., a GPS position)
- $A$, $B$, $H$: state transition, control input, and measurement matrices
- $w_k \sim \mathcal{N}(0, Q)$ and $v_k \sim \mathcal{N}(0, R)$: process and measurement noise
Goal: at each step, combine the history and the current observation $z_k$ to estimate $x_k$ with minimum mean-square error.
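As a concrete illustration (not from the original text), a 1D constant-velocity tracker with state [position, velocity] and a position-only sensor might use matrices like these; all numbers are placeholders:
// Hypothetical 1D constant-velocity model: state = [position, velocity]
const dt = 0.1;                      // sample period in seconds (assumed)
const A = [[1, dt], [0, 1]];         // position advances by velocity * dt
const B = [[0.5 * dt * dt], [dt]];   // effect of an acceleration command
const H = [[1, 0]];                  // the sensor observes position only
const Q = [[1e-4, 0], [0, 1e-4]];    // process noise: how rough the model is
const R = [[0.25]];                  // measurement noise: sensor variance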
2. Core Algorithm
Let the previous posterior estimate be $\hat{x}_{k-1|k-1}$ with covariance $P_{k-1|k-1}$.
Predict
$$\hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_k, \qquad P_{k|k-1} = A P_{k-1|k-1} A^\top + Q$$
Update
$$K_k = P_{k|k-1} H^\top \left(H P_{k|k-1} H^\top + R\right)^{-1}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left(z_k - H \hat{x}_{k|k-1}\right), \qquad P_{k|k} = (I - K_k H) P_{k|k-1}$$
The Kalman gain $K_k$ balances model and measurement. A larger $R$ (noisier sensor) shrinks the gain and increases reliance on the model; a larger $Q$ (rougher dynamics) inflates the predicted covariance and increases reliance on measurements.
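To see the balance numerically, consider the scalar case with $H = 1$, where the gain reduces to $K = P_{k|k-1} / (P_{k|k-1} + R)$. A quick sketch with made-up numbers:
// Scalar gain: K = P_pred / (P_pred + R) when H = 1
const gain = (pPred: number, r: number): number => pPred / (pPred + r);
console.log(gain(1, 9)); // 0.1 -> noisy sensor (large R): lean on the model
console.log(gain(9, 1)); // 0.9 -> rough dynamics (large Q inflates P_pred): lean on the measurement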
3. Implementation Notes and Parameter Tuning
- Initialization: choose $\hat{x}_0$ and $P_0$. Use larger diagonal variances in $P_0$ if the initial state is uncertain.
- Choosing $Q$ and $R$:
- $Q$ reflects model imperfection. Increase it when dynamics are aggressive or the model is crude.
- $R$ reflects sensor unreliability. Increase it for noisy or jittery measurements.
- Numerical stability:
- Keep $P$ symmetric positive semidefinite; symmetrize via $P \leftarrow (P + P^\top)/2$ if needed (see the sketch after this list).
- Prefer Cholesky or linear solves over explicit matrix inverses.
- Prevent covariance blow-up with minimum noise floors or adaptive tuning.
- Dimensions: ensure $A$, $B$, $H$, $Q$, and $R$ match the state/control/measurement dimensions.
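A minimal sketch of the two stability tricks above, assuming plain number[][] matrices and the same helper functions (add, sub, mul, transpose, identity) as the pseudocode below:
// Enforce symmetry: P <- (P + P^T) / 2, done element-wise on plain arrays
function symmetrize(P: number[][]): number[][] {
  return P.map((row, i) => row.map((v, j) => 0.5 * (v + P[j][i])));
}
// Joseph-form covariance update, more robust to round-off than (I - K H) P:
// P <- (I - K H) P (I - K H)^T + K R K^T
function josephUpdate(P: number[][], K: number[][], H: number[][], R: number[][]): number[][] {
  const IKH = sub(identity(P.length), mul(K, H));
  return add(mul(mul(IKH, P), transpose(IKH)), mul(mul(K, R), transpose(K)));
}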
Minimal pseudocode (TypeScript-like)
// Assumes small linear-algebra helpers: add, sub, mul (matrix-matrix and
// matrix-vector), transpose, inv, identity, zerosLike.
type Vec = number[];
type Mat = number[][];

function predict(x: Vec, P: Mat, A: Mat, Q: Mat, B?: Mat, u?: Vec) {
  // x_pred = A x + B u (the control term is optional)
  const x_pred = add(mul(A, x), B && u ? mul(B, u) : zerosLike(x));
  // P_pred = A P A^T + Q
  const P_pred = add(mul(mul(A, P), transpose(A)), Q);
  return { x_pred, P_pred };
}

function update(x_pred: Vec, P_pred: Mat, z: Vec, H: Mat, R: Mat) {
  // Innovation covariance: S = H P_pred H^T + R
  const S = add(mul(mul(H, P_pred), transpose(H)), R);
  // Kalman gain: K = P_pred H^T S^-1 (prefer a linear solve over inv in practice)
  const K = mul(mul(P_pred, transpose(H)), inv(S));
  // Innovation: y = z - H x_pred
  const y = sub(z, mul(H, x_pred));
  const x_new = add(x_pred, mul(K, y));
  // Posterior covariance: P_new = (I - K H) P_pred
  const I = identity(P_pred.length);
  const P_new = mul(sub(I, mul(K, H)), P_pred);
  return { x_new, P_new };
}
Use a robust linear algebra library in production; avoid implementing matrix inversion by hand.
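A hypothetical driver loop tying the two functions together; the model matrices (A, Q, H, R) and the measurements array stand in for a real system:
// Filtering loop over a stream of measurements (all inputs are placeholders)
let x: Vec = [0, 0];              // initial state guess
let P: Mat = [[10, 0], [0, 10]];  // large initial uncertainty
for (const z of measurements) {   // measurements: Vec[] from a sensor
  const { x_pred, P_pred } = predict(x, P, A, Q);
  const { x_new, P_new } = update(x_pred, P_pred, z, H, R);
  x = x_new;
  P = P_new;
}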
4. Applications and Extensions
- Tracking: GPS + IMU fusion for mobile robots or UAVs.
- Sensor fusion: complementary strengths across sensors (e.g., camera + IMU).
- Finance: smoothing and latent-state estimation for volatile series.
- Attitude estimation: fusing gyro/accelerometer/magnetometer.
- AR/VR and IoT: reduce jitter and stabilize interaction and measurement signals.
Extensions
- EKF: linearize nonlinear systems via Jacobians at the current estimate (see the sketch after this list).
- UKF: propagate sigma points; often more stable than EKF linearization.
- RTS smoother: offline smoothing (forward KF + backward pass).
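A rough sketch of the EKF item above, assuming nonlinear functions f and h plus their Jacobians F and Hj evaluated at the current estimate (all names are illustrative), reusing the helpers from the pseudocode:
// EKF predict: propagate through the nonlinear model, linearize the covariance
function ekfPredict(x: Vec, P: Mat, f: (x: Vec) => Vec, F: Mat, Q: Mat) {
  const x_pred = f(x);                                  // nonlinear state propagation
  const P_pred = add(mul(mul(F, P), transpose(F)), Q);  // F = df/dx at x
  return { x_pred, P_pred };
}
// EKF update: same gain algebra as the linear filter, but the innovation uses h(x)
function ekfUpdate(x_pred: Vec, P_pred: Mat, z: Vec,
                   h: (x: Vec) => Vec, Hj: Mat, R: Mat) {
  const S = add(mul(mul(Hj, P_pred), transpose(Hj)), R);
  const K = mul(mul(P_pred, transpose(Hj)), inv(S));
  const y = sub(z, h(x_pred));                          // Hj = dh/dx at x_pred
  const x_new = add(x_pred, mul(K, y));
  const P_new = mul(sub(identity(P_pred.length), mul(K, Hj)), P_pred);
  return { x_new, P_new };
}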
5. Takeaways
The essence of the Kalman filter is modeling uncertainty and balancing prediction against measurement. Master the linear–Gaussian assumptions and covariance propagation, and the equations fall into place. A follow-up can cover EKF/UKF and practical parameter tuning.