Identification Example: Least Squares Method to Fit the Standard Equation of a Circle


The following uses circle fitting to demonstrate an example of second-order least squares identification. The standard equation of a circle is

\begin{equation}
(x-a)^2 + (y-b)^2 = r^2
\end{equation}
Since the least squares method cannot identify the equation directly in this form, we first need to expand and rearrange it:

\begin{equation}
\begin{split}
(x-a)^2 + (y-b)^2 &= r^2 \\
x^2 - 2ax + a^2 + y^2 - 2by + b^2 &= r^2 \\
x^2 + y^2 - 2ax - 2by + a^2 + b^2 - r^2 &= 0
\end{split}
\end{equation}
In the above formula, $a^2$, $b^2$, and $r^2$ are constants, so the combination $a^2 + b^2 - r^2$ can be regarded as a single constant term $c$. The equation to be identified is then

\begin{equation}
f(a,b,c) = x^2 + y^2 - 2ax - 2by + c
\end{equation}
Here, the above formula is rearranged into the matrix form of the least squares problem as follows:

\begin{equation}
\begin{split}
Z_m &= \begin{bmatrix} -2x & -2y & E \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} \\
&= H_m \hat{\theta}
\end{split}
\end{equation}
In the formula, $Z_m = -(x^2 + y^2)$ is the vector of measured values, $E$ is a column vector of ones, and the other letters are matrices or vectors whose dimensions must be consistent (each of $x$, $y$, $Z_m$, and $E$ has one entry per data point).
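Concretely, stacking the $N$ measured points $(x_i, y_i)$ gives the following explicit form (spelled out here only for clarity; it follows directly from the equation above):

\begin{equation}
Z_m = \begin{bmatrix} -(x_1^2 + y_1^2) \\ \vdots \\ -(x_N^2 + y_N^2) \end{bmatrix}, \qquad
H_m = \begin{bmatrix} -2x_1 & -2y_1 & 1 \\ \vdots & \vdots & \vdots \\ -2x_N & -2y_N & 1 \end{bmatrix}, \qquad
\hat{\theta} = \begin{bmatrix} \hat a \\ \hat b \\ \hat c \end{bmatrix}
\end{equation}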
It is then solved using the least squares solution formula

\begin{equation}
\hat{\theta} = \begin{bmatrix} \hat a \\ \hat b \\ \hat c \end{bmatrix} = (H_m^T H_m)^{-1} H_m^T Z_m
\end{equation}
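As a quick illustration of how this formula is evaluated in practice (a minimal sketch with arbitrary placeholder points; the full construction from noisy data appears in the script further below), the normal-equation form and MATLAB's backslash operator return the same estimate:

% Minimal sketch: two equivalent ways to compute the least squares estimate
% (px, py are arbitrary placeholder sample coordinates, not the noisy data below)
px = [ 3  0 -3  0  2]';
py = [ 0  3  0 -3  2]';
H  = [-2*px, -2*py, ones(5,1)];   % data matrix H_m
zm = -(px.^2 + py.^2);            % measurement vector Z_m
theta_ne = (H'*H)\(H'*zm);        % normal-equation form (H_m^T H_m)^{-1} H_m^T Z_m
theta_bs = H\zm;                  % backslash solves the same least squares problem
disp([theta_ne, theta_bs]);       % the two columns agree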
The solution gives $\hat a$ and $\hat b$, the coordinates of the center of the identified circle. The radius $r$, however, was merged into the constant term $c$, so further processing is needed to recover it, as follows:

\begin{equation}
\begin{split}
a^2 + b^2 - r^2 &= c \\
r^2 &= a^2 + b^2 - c \\
r &= \sqrt{a^2 + b^2 - c}
\end{split}
\end{equation}
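As a quick check with the true parameters used below ($a = b = 0$, $r = 3$): $c = a^2 + b^2 - r^2 = -9$, so $r = \sqrt{0 + 0 - (-9)} = 3$, as expected.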

Here the true parameters of the circle are taken as center $(0,0)$ and radius $r = 3$ m. The following is the MATLAB code corresponding to this example:

% Generate trajectory points of a circle without noise
t = linspace(0, 2*pi, 100);
r = 3;
x = r*cos(t);
y = r*sin(t);
figure;
plot(x, y); hold on;
axis equal
% Collect coordinate data with additive Gaussian noise (simulated measurements)
z_x = x + 0.2*randn(1, length(t));
z_y = y + 0.2*randn(1, length(t));
plot(z_x, z_y, 'r.');
% Construct the least squares form Z_m = H_m * theta
zm = -(z_x.^2 + z_y.^2)';                       % measurement vector Z_m
H  = [-2*z_x', -2*z_y', ones(length(z_x), 1)];  % data matrix H_m
% Least squares solution (backslash solves the overdetermined system)
hat = H\zm;
x_hat = hat(1);                                 % estimated center coordinate a
y_hat = hat(2);                                 % estimated center coordinate b
r_hat = sqrt(hat(1)^2 + hat(2)^2 - hat(3));     % recover r from c = a^2 + b^2 - r^2
fprintf("The center of the circle is at: (%f, %f)\n", x_hat, y_hat);
fprintf("The radius of the circle is: %f m\n", r_hat);
% Plot the fitted circle for verification
x_fit = x_hat + r_hat*cos(t);
y_fit = y_hat + r_hat*sin(t);
plot(x_fit, y_fit, 'r-');
legend('real trajectory', 'acquisition data', 'fitted trajectory', 'Location', 'best')


From the results we can see that:

  • Least squares is not an unbiased method: it spreads the measurement errors over all of the collected data points and therefore only has a filter-like smoothing effect. For parameters that must be accurate to two or more decimal places, plain least squares may not meet the accuracy requirement (the Monte Carlo sketch after this list illustrates the spread of the estimates).
  • Least squares is the simplest identification method. It is based on the assumption that the noise is white, Gaussian (normally distributed) noise; under this assumption it gives good results. If the noise is not white, more advanced identification methods such as Kalman filtering, neural network identification, particle swarm identification, or particle filtering may be required.
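As an optional check of the first point, the following minimal Monte Carlo sketch (assuming the same true circle, noise level of 0.2, and estimator as the script above) repeats the noisy fit many times and reports the mean and spread of the estimated radius, which makes any systematic offset of the estimates visible:

% Monte Carlo sketch: repeat the noisy fit many times and inspect the estimates
% (assumes the same true circle, noise level, and estimator as the script above)
t = linspace(0, 2*pi, 100);
r_true = 3;
n_runs = 1000;
r_est = zeros(n_runs, 1);
for k = 1:n_runs
    zx = r_true*cos(t) + 0.2*randn(1, length(t));
    zy = r_true*sin(t) + 0.2*randn(1, length(t));
    zm = -(zx.^2 + zy.^2)';
    H  = [-2*zx', -2*zy', ones(length(zx), 1)];
    th = H\zm;
    r_est(k) = sqrt(th(1)^2 + th(2)^2 - th(3));
end
fprintf("Mean estimated radius over %d runs: %f m (true value %f m)\n", ...
        n_runs, mean(r_est), r_true);
fprintf("Standard deviation of the radius estimates: %f m\n", std(r_est));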