A least-squares problem is an optimisation problem with no constraints and an objective of the following form:

minimise

$$f_0(x) = \|Ax - b\|_2^2 = \sum_{i=1}^{k} (a_i^T x - b_i)^2$$

The objective function is a sum of squares of terms of the form

$$a_i^T x - b_i,$$

where $a_i^T$ is the $i$-th row of $A \in \mathbb{R}^{k \times n}$ and $x \in \mathbb{R}^n$ is the optimisation variable.
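To make the two forms concrete, here is a minimal sketch (the data below is arbitrary, chosen only for illustration) checking numerically that the norm form and the sum-of-squares form of the objective agree:

```python
import numpy as np

# Arbitrary illustrative data: k = 5 terms, n = 3 variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

# f0(x) = ||Ax - b||_2^2
norm_form = np.linalg.norm(A @ x - b) ** 2

# f0(x) = sum_i (a_i^T x - b_i)^2, with a_i^T the i-th row of A
sum_form = sum((a_i @ x - b_i) ** 2 for a_i, b_i in zip(A, b))

print(np.isclose(norm_form, sum_form))  # True
```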

 

The solution can be reduced to solving a set of linear equations. Expanding the objective,

$$f(x) = \|Ax - b\|_2^2 = (Ax - b)^T (Ax - b)$$
$$= \left((Ax)^T - b^T\right)(Ax - b)$$
$$= x^T A^T A x - b^T A x - x^T A^T b + b^T b$$
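The expansion can be verified numerically in the same spirit (again with arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

lhs = (A @ x - b) @ (A @ x - b)  # (Ax - b)^T (Ax - b)
rhs = x @ A.T @ A @ x - b @ A @ x - x @ A.T @ b + b @ b
print(np.isclose(lhs, rhs))  # True
```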

 

If $x$ is a global minimum of the objective function, then the gradient of $f$ at $x$ is the zero vector; because $f$ is convex, this condition is also sufficient.

$$\nabla f(x) = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right)$$

Computing the gradient of each term with respect to $x_1, \ldots, x_n$ gives:

$$\nabla(x^T A^T A x) = 2 A^T A x, \qquad \nabla(b^T A x) = A^T b, \qquad \nabla(x^T A^T b) = A^T b$$

Thus, the gradient of the objective function is

$$\nabla f(x) = 2 A^T A x - A^T b - A^T b = 2 A^T A x - 2 A^T b$$
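As a sanity check, this analytic gradient can be compared against a central finite-difference approximation (a sketch with made-up data; the step size `h` is a typical but arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

f = lambda v: np.linalg.norm(A @ v - b) ** 2

# Analytic gradient from the derivation above.
grad = 2 * A.T @ A @ x - 2 * A.T @ b

# Central finite differences along each coordinate direction.
h = 1e-6
num_grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])

print(np.allclose(grad, num_grad, atol=1e-5))  # True
```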

To find the least squares solution, we can solve

$$\nabla f(x) = 0$$

Or equivalently, the normal equations:

$$A^T A x = A^T b$$

So, assuming $A^T A$ is invertible (i.e. $A$ has full column rank), we have the analytical solution:

$$x = (A^T A)^{-1} A^T b$$
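A minimal sketch (with illustrative data) that solves the normal equations directly and compares the result against NumPy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 3))  # tall matrix; full column rank with probability 1
b = rng.standard_normal(10)

# Solve A^T A x = A^T b rather than forming the inverse explicitly.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Reference solution from NumPy's least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # True
```

Note that forming $A^T A$ squares the condition number of $A$, so for ill-conditioned problems solvers based on QR or SVD factorisations (`np.linalg.lstsq` is SVD-based) are generally preferred over the explicit normal equations.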

 

To recognise an optimisation problem as a least-squares problem, we only need to verify that the objective is a quadratic function, and then check that the associated quadratic form is positive semidefinite (a sketch of this check follows below).
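For an objective written as $f(x) = x^T P x + q^T x + r$, the quadratic form is positive semidefinite exactly when all eigenvalues of the symmetric matrix $P$ are non-negative. A sketch of that check (the matrix $P$ below is constructed for illustration):

```python
import numpy as np

# Illustrative quadratic form: P = A^T A is positive semidefinite by construction.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))
P = A.T @ A

# A symmetric matrix is PSD iff all its eigenvalues are >= 0
# (a small tolerance absorbs floating-point error).
eigvals = np.linalg.eigvalsh(P)
print(np.all(eigvals >= -1e-10))  # True
```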
