okay..so you might not be a complete goof but I am..so I’m writing this post to my future self..because he forgets a lot and needs his memory refreshed every once in a while..go grab a pen and paper.
K.F. - It is an iterative mathematical process that uses a set of equations and consecutive data inputs to quickly estimate the true value (of position, velocity, etc.), when the measured values contain unpredictable or random error, uncertainty or variation.
say we need to measure temperature..but the value from the sensor jumps around a lot..what we can do is take an average of the readings..which is an obvious approach, but it takes memory to store previous data samples, plus it’s slow and not really an innovative technique..so averaging things is a big no for us.
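just to see the cost..here’s a quick sketch of the averaging approach we just rejected (the temperature readings and window size are made up)..notice we have to keep a buffer of past samples around:

```python
from collections import deque

# made-up noisy temperature readings
readings = [72.1, 68.4, 74.9, 70.2, 71.8, 69.5, 73.0]

window = deque(maxlen=5)   # buffer of past samples -- this is the memory cost
averages = []
for r in readings:
    window.append(r)
    averages.append(sum(window) / len(window))

print(averages[-1])
```

the kalman filter gets rid of that buffer entirely..it only ever carries the previous estimate and its error.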
here’s how it really goes
so the above image sums up the whole process..but here are a few add-ons..starting from time t=0
read data samples (for example, if we’re trying to estimate velocity we’d need, say, accelerometer data and GPS data).
Initialize the filter..at time t=0 there’d be no value in previous estimate..so we need to initialize the previous estimate with the measured value.
then the process starts with 2. Calculate current estimate.
keep in mind that steps 1, 2 and 3 are the major part of the process.
What we need to calculate the kalman gain-
- error in estimated value.
- error in observed or measured value.
What we need to calculate the current estimate (which is the closest possible to the true value)-
- previous estimate.
- measured value.
- kalman gain.
What we need to calculate error in current estimate-
- previous error in estimate
- kalman gain
if you’ve been following everything with a pen and paper this should not be so hard to follow
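here’s the whole three-step loop as a tiny sketch in Python..all the numbers (initial estimate, errors, readings) are made up for illustration:

```python
# scalar kalman filter following the three steps above:
# 1. kalman gain, 2. current estimate, 3. new error in estimate

def kalman_step(prev_est, err_est, measurement, err_meas):
    kg = err_est / (err_est + err_meas)             # step 1: kalman gain
    est = prev_est + kg * (measurement - prev_est)  # step 2: current estimate
    new_err_est = (1 - kg) * err_est                # step 3: error in estimate
    return est, new_err_est

# initialize with a measured value, as the text says
est, err_est = 68.0, 2.0   # initial estimate and its error (made up)
err_meas = 4.0             # sensor error (assumed constant)
for m in [75.0, 71.0, 70.0, 74.0]:
    est, err_est = kalman_step(est, err_est, m, err_meas)
    print(round(est, 2), round(err_est, 3))
```

run it and you’ll see the error in estimate shrink on every iteration..only the previous estimate and its error are carried forward, no buffer of samples.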
if KG is large this implies the error in measurement must have been very low relative to the error in estimate.
if KG is small this implies the error in estimate must have been very low relative to the error in measurement.
in case 1 above-
observation: error in estimate at time t got drastically reduced.
intuition: since the error in measured value is low and the error in estimated value is high..the estimated value should be less useful to us..and the measured value more valuable.
But that’s not what happened when we calculated the new error in estimate..instead of increasing the error, the update decreased it..why? (and as a result the kalman gain got cut down to half.)
in case 2 above-
observation: error in estimate at time t reduced by a tiny bit.
in this case the intuition stands correct..since the error in estimate is low..the new error in estimate at time t got further reduced by a small amount..
which in turn led the new kalman gain to be reduced by a very small amount.
summed observation: in both cases error in estimate got reduced.
reality: the filter actually never reaches stage 1, i.e. very high error in estimate (unless the filter started off in that stage).
because the whole point of the process is to reduce the error in estimate by making observations..or using measured values. that is exactly why the error in estimate got reduced: the filter corrected itself using the measured value.
when we integrate acceleration to find an estimate of velocity..it drifts..errors accumulate over time and it keeps on drifting forever..since we don’t have any real reference.
so drift in estimated_vel increases the error in estimated_vel, and measurements help us reduce that error and bring estimated_vel closer to the true value.
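a little sketch of that drift story..the bias, the numbers and the gain are all made up, and a fixed blend factor stands in for the real kalman gain:

```python
# true acceleration is zero, but the accelerometer has a constant bias
dt = 0.1
bias = 0.2
true_vel = 5.0

vel_integrated = 5.0   # pure integration, no reference
vel_filtered = 5.0     # corrected toward a GPS-like measurement
kg = 0.3               # fixed blend factor standing in for the kalman gain

for _ in range(100):
    accel_reading = 0.0 + bias            # sensor reports only the bias
    vel_integrated += accel_reading * dt  # error piles up forever
    predicted = vel_filtered + accel_reading * dt
    gps_vel = true_vel                    # pretend GPS measures velocity
    vel_filtered = predicted + kg * (gps_vel - predicted)

print(abs(vel_integrated - true_vel))  # keeps growing with time
print(abs(vel_filtered - true_vel))    # stays small and bounded
```

the dead-reckoned velocity walks away from the truth while the corrected one settles near it..that’s the measurement acting as the reference we were missing.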
badass equations/ matrix form
starting from zero
X is our state matrix which holds the things we want to estimate..which could be velocity or position, temperature or some angle..and it could be of any dimension depending upon the no. of states we are processing.
P is the process covariance matrix..fancy name for the error in the estimate, or error in the process.
X naught and P naught on the far left are the initial state matrix and initial process covariance matrix.
X(k-1) and P(k-1) are previous state estimate and previous process covariance estimate.
X(k)(p) and P(k)(p) are the prior estimates of the state and covariance..prior meaning rough predicted estimates.
X(k) and P(k) are estimate of state and covariance matrix which are closest to the true value.
when an iteration ends we end up with these two matrices..which are then used in the next iteration to find new estimates based on new incoming data..so for the next iteration these become X(k-1) and P(k-1).
so the process loop starts at t=0 with X naught and P naught..which should hold the initial state and the initial error in those states.
let’s start with finding X(k)(p) and P(k)(p) from X naught and P naught.
box on the upper right side says
X (k)(p)= A * X (k-1) + B * U(k) + w(k)
P(k)(p)= A * P(k-1) * TRANSPOSE(A) + Q(k)
so the above representation in text is pretty bad..but hell, it works for me..
A* X(k-1) -> Some matrix A multiplied with previous estimate of states.
example: X(k-1) = [position; velocity]..which is a 2 by 1 column matrix
in this case matrix A would make a time update..for this 2 by 1 state the usual choice is A = [1 Δt; 0 1], so A * X(k-1) gives [position + Δt * velocity; velocity]
B * U(k) -> some matrix B multiplied by U(k)..which holds the control variable, in this case acceleration..here B = [½Δt²; Δt]
A * X(k-1) + B * U(k) then comes out to be [position + Δt * velocity + ½Δt² * acceleration; velocity + Δt * acceleration]
which is quite legit..it’s just the equations of motion.
and w(k) is the predicted noise matrix..we’ll settle for just the name for now.
so we calculated the prior state using matrices A and B and U(k)..these matrices vary with the situation, the no. of elements in the state matrix etc...but they should be chosen such that they update the past state into a new state based on some sensor value.
P(k)(p)= A * P(k-1) * TRANSPOSE(A) + Q(k)
the same way we can update the process covariance matrix using matrix A, A transpose and Q(k)..where Q(k) is the process noise covariance matrix..again we’ll settle for just the name..the explanation will come soon enough.
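both prediction equations sketched with numpy..A and B here are the standard choices for a [position; velocity] state, and every number is made up for illustration:

```python
import numpy as np

dt = 1.0
# state is [position; velocity]..A advances it one time step and
# B folds in the control input (acceleration)
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2],
              [dt]])

x_prev = np.array([[50.0],   # previous position estimate
                   [5.0]])   # previous velocity estimate
u = np.array([[2.0]])        # control input: measured acceleration
P_prev = np.array([[4.0, 0.0],
                   [0.0, 1.0]])
Q = np.eye(2) * 0.01         # process noise covariance (assumed tiny)

# X(k)(p) = A * X(k-1) + B * U(k)  ..w(k) is zero-mean so it drops out here
x_prior = A @ x_prev + B @ u
# P(k)(p) = A * P(k-1) * TRANSPOSE(A) + Q(k)
P_prior = A @ P_prev @ A.T + Q

print(x_prior.ravel())
print(P_prior)
```

position advances by v*Δt + ½a*Δt² and velocity by a*Δt..exactly the time update we talked about.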
Just like there’s a predicted state matrix there is a measured state matrix, denoted by
Y(k), which stores the measured values of our state..say GPS giving us position and velocity.
Y(k)= C * X(k)(m) + z(k)
where X(k)(m) is the measured state and C is some matrix that brings it into the same format as X(k)(p).
and z(k) is the measurement noise.
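a tiny numpy sketch of Y(k) = C * X(k)(m) + z(k)..here I assume the sensor reports position only, just to show what C does (numbers are made up):

```python
import numpy as np

# C maps the 2-element state down to what the sensor actually reports..
# assumed here: position only
C = np.array([[1.0, 0.0]])
x_measured = np.array([[56.3],   # raw measured position
                       [7.1]])   # raw measured velocity (dropped by C)
z = np.array([[0.0]])            # measurement noise, kept at zero for clarity

# Y(k) = C * X(k)(m) + z(k)
Y = C @ x_measured + z
print(Y)
```

so C is just the bookkeeping that lines the measurement up against the predicted state.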
covariance matrices, you ask?