BLUE estimator

Why BLUE:

We discussed the Minimum Variance Unbiased Estimator (MVUE) in one of the previous articles. The following points should be considered when applying the MVUE to an estimation problem:

  • The MVUE is the optimal estimator.
  • Finding an MVUE requires full knowledge of the PDF (Probability Density Function) of the underlying process.
  • Even if the PDF is known, finding an MVUE is not guaranteed.
  • If the PDF is unknown, it is impossible to find an MVUE using techniques like the Cramer-Rao Lower Bound (CRLB).
  • In practice, the PDF of the underlying process is often unknown.

Considering all the points above, the practical solution is to settle for a sub-optimal estimator. When we resort to a sub-optimal estimator:

  • We may not know how much performance we have lost, since we cannot find the MVUE for benchmarking (the underlying PDF of the process is unavailable).
  • We can live with it if the variance of the sub-optimal estimator is well within the specification limits.

Common approach for finding a sub-optimal estimator:

  • Restrict the estimator to be linear in data
  • Find the linear estimator that is unbiased and has minimum variance
  • This leads to the Best Linear Unbiased Estimator (BLUE)
  • To find a BLUE, full knowledge of the PDF is not needed. Just the first two moments (mean and variance) of the PDF are sufficient for finding the BLUE

Definition of BLUE:

Consider a data set $\mathbf{x} = \{ x[0], x[1], \ldots, x[N-1] \}$ whose parameterized PDF $p(\mathbf{x};\theta)$ depends on the unknown parameter $\theta$. As the BLUE restricts the estimator to be linear in the data, the estimate of the parameter can be written as a linear combination of the data samples with some weights $a_n$:

$$\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] = \mathbf{a}^T \mathbf{x} \quad \cdots \quad (1)$$

Here, $\mathbf{a} = [a_0, a_1, \ldots, a_{N-1}]^T$ is a vector of constants whose values we seek to find in order to meet the design specifications. Thus, the entire estimation problem boils down to finding the vector of constants $\mathbf{a}$. Equation (1) may lead to multiple solutions for the vector $\mathbf{a}$; however, we need to choose the set of values of $\mathbf{a}$ that provides estimates that are unbiased and have minimum variance.
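
For a concrete feel of the linearity restriction (an illustrative aside), note that the familiar sample mean is one such linear estimator, obtained by choosing equal weights $a_n = 1/N$:

$$\hat{\theta}_{mean} = \sum_{n=0}^{N-1} \frac{1}{N} x[n], \qquad \mathbf{a} = \left[ \tfrac{1}{N}, \tfrac{1}{N}, \ldots, \tfrac{1}{N} \right]^T$$

Different choices of the weights give different linear estimators; the BLUE is the particular choice that is unbiased and has the smallest variance.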

Thus, the set of values of $\mathbf{a}$ sought for a BLUE estimator that provides minimum variance must satisfy the following two constraints:

  1. The estimator must be linear in data
  2. The estimate must be unbiased

Constraint 1: Linearity Constraint:

The linearity constraint was already given above in equation (1); it is repeated here for convenience.

$$\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] = \mathbf{a}^T \mathbf{x} \quad \cdots \quad (1)$$

Constraint 2: Constraint for unbiased estimates:

For the estimate to be considered unbiased, the expectation (mean) of the estimate must be equal to the true value of the parameter.

Thus,

$$E[\hat{\theta}] = \theta \quad \cdots \quad (2)$$

Expressing the estimate in terms of the weights and the data samples using (1), this requires

$$\sum_{n=0}^{N-1} a_n E\left( x[n] \right) = \theta \quad \cdots \quad (3)$$

Combining both the constraints (1) and (2) or (3),

$$E[\hat{\theta}] = E\left[ \sum_{n=0}^{N-1} a_n x[n] \right] = \sum_{n=0}^{N-1} a_n E\left( x[n] \right) = \theta \quad \cdots \quad (4)$$

Now, the million dollar question is: “When can we meet both the constraints?”. We can meet both the constraints only when the observations are linear in the parameter. That is, $x[n]$ is of the form $x[n] = s[n] \theta + w[n]$, where $\theta$ is the unknown parameter that we wish to estimate.

Consider a data model, as shown below, where the observed samples are linear with respect to the parameter to be estimated:

$$x[n] = s[n] \theta + w[n], \qquad n = 0, 1, \ldots, N-1 \quad \cdots \quad (5)$$

Here, $w[n]$ is zero-mean process noise whose PDF can take any form (uniform, Gaussian, colored, etc.). The mean of the above equation is given by

$$E\left( x[n] \right) = s[n] \theta \quad \cdots \quad (6)$$

Substituting (6) in (4),

$$E[\hat{\theta}] = \sum_{n=0}^{N-1} a_n E\left( x[n] \right) = \sum_{n=0}^{N-1} a_n s[n] \theta = \theta \quad \cdots \quad (7)$$

Looking at the last equality,

$$\theta \sum_{n=0}^{N-1} a_n s[n] = \theta$$

The above equality can be satisfied only if

$$\sum_{n=0}^{N-1} a_n s[n] = 1 \quad \Leftrightarrow \quad \mathbf{a}^T \mathbf{s} = 1 \quad \cdots \quad (8)$$
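
As a concrete illustration (an assumed special case, used only for intuition), consider estimating a DC level $A$ in noise, i.e. $x[n] = A + w[n]$, so that $\theta = A$ and $s[n] = 1$ for all $n$. The unbiasedness constraint (8) then simply requires the weights to sum to one:

$$\sum_{n=0}^{N-1} a_n = 1$$

The sample mean ($a_n = 1/N$) satisfies this, but so do infinitely many other weight vectors; the minimum-variance requirement below singles out one of them.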

Given that this condition is met, the next step is to minimize the variance of the estimate. The variance of the linear estimate is

$$var(\hat{\theta}) = E\left[ \left( \mathbf{a}^T \mathbf{x} - E\left[ \mathbf{a}^T \mathbf{x} \right] \right)^2 \right] = \mathbf{a}^T \mathbf{C} \mathbf{a} \quad \cdots \quad (9)$$

where $\mathbf{C}$ is the covariance matrix of the noise vector $\mathbf{w} = [w[0], w[1], \ldots, w[N-1]]^T$.

Finding BLUE:

As discussed above, in order to find a BLUE estimator for a given set of data, two constraints – linearity & unbiased estimates – must be satisfied and the variance of the estimate should be minimum. Thus, the goal is to minimize the variance of $\hat{\theta}$, given by $\mathbf{a}^T \mathbf{C} \mathbf{a}$, subject to the constraint $\mathbf{a}^T \mathbf{s} = 1$. This is a typical Lagrangian multiplier problem, which can be cast as minimizing the following function with respect to $\mathbf{a}$ (remember, this is what we would like to find):

$$J = \mathbf{a}^T \mathbf{C} \mathbf{a} + \lambda \left( \mathbf{a}^T \mathbf{s} - 1 \right) \quad \cdots \quad (10)$$

Minimizing $J$ with respect to $\mathbf{a}$ is equivalent to setting the first derivative of $J$ w.r.t $\mathbf{a}$ to zero:

$$\frac{\partial J}{\partial \mathbf{a}} = 2 \mathbf{C} \mathbf{a} + \lambda \mathbf{s} = \mathbf{0} \quad \cdots \quad (11)$$

$$\Rightarrow \quad \mathbf{a} = -\frac{\lambda}{2} \mathbf{C}^{-1} \mathbf{s} \quad \cdots \quad (12)$$

Substituting (12) in the unbiasedness constraint (8),

$$\mathbf{a}^T \mathbf{s} = -\frac{\lambda}{2} \mathbf{s}^T \mathbf{C}^{-1} \mathbf{s} = 1 \quad \Rightarrow \quad -\frac{\lambda}{2} = \frac{1}{\mathbf{s}^T \mathbf{C}^{-1} \mathbf{s}} \quad \cdots \quad (13)$$

Finally, from (12) and (13), the coefficients of the BLUE estimator (the vector of constants that weights the data samples) are given by

$$\mathbf{a} = \frac{\mathbf{C}^{-1} \mathbf{s}}{\mathbf{s}^T \mathbf{C}^{-1} \mathbf{s}} \quad \cdots \quad (14)$$
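
As a quick sanity check (again an assumed special case), for the DC-level model with white noise, $\mathbf{C} = \sigma^2 \mathbf{I}$ and $\mathbf{s} = [1, 1, \ldots, 1]^T$, equation (14) reduces to equal weights:

$$\mathbf{a} = \frac{\sigma^{-2} \mathbf{s}}{\sigma^{-2} \mathbf{s}^T \mathbf{s}} = \frac{\mathbf{s}}{N} \quad \Rightarrow \quad a_n = \frac{1}{N}$$

That is, the BLUE coincides with the ordinary sample mean when the noise is white; the weights deviate from $1/N$ only when the noise samples have unequal variances or are correlated.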

The BLUE estimate and the variance of the estimate are as follows:

$$\hat{\theta} = \mathbf{a}^T \mathbf{x} = \frac{\mathbf{s}^T \mathbf{C}^{-1} \mathbf{x}}{\mathbf{s}^T \mathbf{C}^{-1} \mathbf{s}} \quad \cdots \quad (15)$$

$$var(\hat{\theta}) = \frac{1}{\mathbf{s}^T \mathbf{C}^{-1} \mathbf{s}} \quad \cdots \quad (16)$$
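
The closed-form expressions above are easy to verify numerically. The following Python sketch (a minimal illustration, assuming a DC-level model $x[n] = A + w[n]$ with correlated Gaussian noise of known covariance; the variable names are my own) computes the BLUE weights from (14), forms the estimate (15), and compares the empirical variance of the estimates with the theoretical value (16):

```python
import numpy as np

# Assumed example: estimate a DC level A from N samples in correlated noise
N = 10
A_true = 5.0                        # unknown parameter (DC level) to estimate
s = np.ones(N)                      # observation model x = s*A + w, with s[n] = 1

# Known noise covariance (the first two moments are all the BLUE needs):
# here an AR(1)-like structure, C[i,j] = sigma^2 * rho^|i-j|
sigma2, rho = 1.0, 0.7
C = sigma2 * rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

C_inv = np.linalg.inv(C)
a = C_inv @ s / (s @ C_inv @ s)     # BLUE weights, eq. (14)
var_theory = 1.0 / (s @ C_inv @ s)  # theoretical variance, eq. (16)

# Monte Carlo check of the estimator
rng = np.random.default_rng(0)
L = np.linalg.cholesky(C)           # to generate noise with covariance C
estimates = []
for _ in range(10000):
    w = L @ rng.standard_normal(N)  # correlated zero-mean noise
    x = s * A_true + w              # observed data, eq. (5)
    estimates.append(a @ x)         # BLUE estimate, eq. (15)

estimates = np.array(estimates)
print(f"mean of estimates   : {estimates.mean():.4f} (true value {A_true})")
print(f"variance (empirical): {estimates.var():.4f}")
print(f"variance (theory)   : {var_theory:.4f}")
```

Note that the weights depend only on $\mathbf{C}$ and $\mathbf{s}$, not on the full PDF of the noise, so the same weights would be the BLUE for uniform or any other zero-mean noise with the same covariance.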



