To find the maxima and minima of f(x) we solve f'(x) = 0. We can apply the Newton method, just as we did to find roots. By analogy, if a maximum or a minimum is close to x0, then a better approximation is found by:

x1 = x0 - f'(x0)/f''(x0)

The specified interval is explored for changes in the sign of f'(x) to obtain an initial approximation, and the method is applied to obtain a sequence of approximations x0, x1, x2, ..., xn. The value xn is accepted as correct if it differs from the previous value in the sequence by less than the specified error. The way in which the sign of f'(x) changes indicates whether the point is a maximum or a minimum: if f'(x) goes from - to + the point is a minimum, and if it goes from + to - it is a maximum.
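As a minimal sketch of this procedure (not the package's actual implementation), the Python code below scans an interval for sign changes of f'(x) and refines each bracketed point with the Newton step above; the sample function, step count and tolerance are illustrative assumptions.

```python
# Sketch: locate extrema of f(x) by applying Newton's method to f'(x) = 0.
# The sample function, scanning step and tolerance are illustrative choices.

def find_extrema(fp, fpp, a, b, steps=1000, tol=1e-10, max_iter=50):
    """Scan [a, b] for sign changes of fp (= f') and refine with Newton."""
    h = (b - a) / steps
    results = []
    for i in range(steps):
        x0, x1 = a + i * h, a + (i + 1) * h
        if fp(x0) * fp(x1) > 0:            # no sign change of f' here
            continue
        kind = "minimum" if fp(x0) < 0 else "maximum"   # - to +: min, + to -: max
        x = 0.5 * (x0 + x1)                # initial approximation
        for _ in range(max_iter):
            x_new = x - fp(x) / fpp(x)     # x_{n+1} = x_n - f'(x_n)/f''(x_n)
            converged = abs(x_new - x) < tol
            x = x_new
            if converged:                  # accept when successive values agree
                break
        results.append((x, kind))
    return results

if __name__ == "__main__":
    import math
    # f(x) = x*sin(x):  f'(x) = sin(x) + x*cos(x),  f''(x) = 2*cos(x) - x*sin(x)
    for x, kind in find_extrema(lambda x: math.sin(x) + x * math.cos(x),
                                lambda x: 2 * math.cos(x) - x * math.sin(x),
                                0.5, 10.0):
        print(f"{kind} near x = {x:.6f}")
```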
MAXIMA AND MINIMA OF MULTIVARIABLE FUNCTIONS

Suppose that, instead of a one-variable function, we have a function of several variables and we wish to find its relative extrema. We can do this using a generalization of the Newton method. To illustrate, let's say we want to find the relative maxima and minima of F(x,y,z). A necessary (though not sufficient) condition for the point (a,b,c) to be an extremum is that all three equations

Fx = 0 ; Fy = 0 ; Fz = 0     (I)

be satisfied, where

Fx = ∂F/∂x ; Fy = ∂F/∂y ; Fz = ∂F/∂z

Assume (x0,y0,z0) is close to an extremum. Now expand all three equations into their Taylor series about (x0,y0,z0). Neglecting terms of order higher than the first, we get:

Fxx*(x-x0) + Fxy*(y-y0) + Fxz*(z-z0) = - Fx(x0,y0,z0)
Fyx*(x-x0) + Fyy*(y-y0) + Fyz*(z-z0) = - Fy(x0,y0,z0)
Fzx*(x-x0) + Fzy*(y-y0) + Fzz*(z-z0) = - Fz(x0,y0,z0)
where Fx, Fy and Fz are as defined above, but evaluated at (x0,y0,z0), and Fxx = ∂²F/∂x², Fxy = ∂²F/∂x∂y, Fxz = ∂²F/∂x∂z are second partial derivatives evaluated at (x0,y0,z0); similarly for Fyx, Fyy, Fyz, Fzx, Fzy and Fzz. If we solve this last set of linear equations for (x-x0), (y-y0) and (z-z0), we obtain corrections to the initial approximation. That is, if the solutions of the system are Δx, Δy and Δz, where

x - x0 = Δx ; y - y0 = Δy ; z - z0 = Δz

then a closer approximation to the extremum is

x1 = x0 + Δx ; y1 = y0 + Δy ; z1 = z0 + Δz

We now use x1, y1 and z1 as initial approximations, again set up a system of linear equations and solve it to obtain x2, y2 and z2. This process is continued until all the variables in the current approximation differ from the values of the previous approximation by less than the specified error.
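The following Python function is an illustrative sketch of this iteration (not the software's own routine); it estimates the first and second partials by central differences, solves the linear system for the corrections, and repeats until every variable changes by less than the tolerance. The finite-difference step and tolerance are assumed values.

```python
import numpy as np

def newton_stationary_point(F, x0, h=1e-5, tol=1e-8, max_iter=50):
    """Approach a stationary point of F near x0 by the generalized Newton method.

    Each step solves  H * d = -g  for the corrections d = (Dx, Dy, Dz, ...),
    where g holds the first partials and H the second partials of F at the
    current point (both estimated here by central differences).
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        g = np.empty(n)                       # first partials Fx, Fy, Fz, ...
        for i in range(n):
            e = np.zeros(n); e[i] = h
            g[i] = (F(x + e) - F(x - e)) / (2 * h)
        H = np.empty((n, n))                  # second partials Fxx, Fxy, ...
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = h
                ej = np.zeros(n); ej[j] = h
                H[i, j] = (F(x + ei + ej) - F(x + ei - ej)
                           - F(x - ei + ej) + F(x - ei - ej)) / (4 * h * h)
        d = np.linalg.solve(H, -g)            # corrections Dx, Dy, Dz, ...
        x = x + d                             # closer approximation
        if np.max(np.abs(d)) < tol:           # every variable changed by < tol
            break
    return x

if __name__ == "__main__":
    # F(x,y,z) = (x-1)^2 + (y+2)^2 + z^2 has a single minimum at (1, -2, 0).
    F = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + v[2] ** 2
    print(newton_stationary_point(F, [0.0, 0.0, 0.5]))
```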
Points satisfying (I) above are called stationary, critical or singular points. They may be points of local maximum or minimum, or neither (saddle points). To identify a stationary point as a local minimum or a local maximum, we set up the matrix of second partial derivatives and establish whether it is positive definite, negative definite or neither. This is a generalization of concavity to higher dimensions: recall that a one-variable function has a minimum at a point where its graph is concave upward (positive second derivative), and a maximum where it is concave downward (negative second derivative). Let's illustrate using a 4-variable case. Suppose the function is F(x,y,z,t). The matrix of second partials we need to consider is the symmetric matrix

[A] = | Fxx  Fxy  Fxz  Fxt |
      | Fyx  Fyy  Fyz  Fyt |
      | Fzx  Fzy  Fzz  Fzt |
      | Ftx  Fty  Ftz  Ftt |

which is called the HESSIAN MATRIX of F, where Fxx = ∂²F/∂x², Fxy = ∂²F/∂x∂y, Fxz = ∂²F/∂x∂z, Fxt = ∂²F/∂x∂t, and so on for the remaining entries, all evaluated at the stationary point.
Since we're assuming that the function and all of its 1st and 2nd order partial derivatives are continuous at the point we're testing, the matrix is symmetric, because Fxy = Fyx, Fxz = Fzx, etc. [A] is positive or negative definite if the product [x y z t][A][x y z t]^T is positive or negative, respectively, for every nontrivial vector [x y z t]. Clearly, we can't test that product for all possible [x y z t], but there are other, equivalent tests. One is to check the signs of the eigenvalues of [A]: if they are all positive or all negative, the matrix is positive or negative definite, respectively. Another test is to compute the determinants A1, A2, A3 and A4 of the leading principal submatrices:

A1 = det| Fxx |
A2 = det| Fxx  Fxy |
        | Fyx  Fyy |

A3 = det| Fxx  Fxy  Fxz |
        | Fyx  Fyy  Fyz |
        | Fzx  Fzy  Fzz |

A4 = det| Fxx  Fxy  Fxz  Fxt |
        | Fyx  Fyy  Fyz  Fyt |
        | Fzx  Fzy  Fzz  Fzt |
        | Ftx  Fty  Ftz  Ftt |

all of which are evaluated at the stationary point. If the point is a minimum then [A] is positive definite:

Ai > 0 for i = 1, 2, 3, 4 (all i)

If the point is a maximum then [A] is negative definite:
Ai < 0 for i odd (i = 1, 3) and Ai > 0 for i even (i = 2, 4)

The matrix may also be positive or negative semidefinite, or indefinite.
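As a small illustration of these tests (a sketch assuming the Hessian is available as a numpy array, not the program's own routine), the code below classifies a stationary point by the signs of the eigenvalues and also prints the leading principal minors A1, ..., A4 used in the determinant test.

```python
import numpy as np

def classify_stationary_point(H, eps=1e-12):
    """Classify a stationary point from its (symmetric) Hessian matrix H."""
    H = np.asarray(H, dtype=float)
    eigvals = np.linalg.eigvalsh(H)          # eigenvalue test
    minors = [np.linalg.det(H[:k, :k])       # A1, A2, ... (determinant test)
              for k in range(1, H.shape[0] + 1)]

    if np.all(eigvals > eps):
        kind = "local minimum (positive definite: all Ai > 0)"
    elif np.all(eigvals < -eps):
        kind = "local maximum (negative definite: Ai alternate in sign, A1 < 0)"
    elif np.all(eigvals >= -eps) or np.all(eigvals <= eps):
        kind = "semidefinite: the test is inconclusive"
    else:
        kind = "indefinite: saddle point"
    return kind, minors

if __name__ == "__main__":
    # Hessian of F(x,y,z,t) = x^2 + 2y^2 + 3z^2 + 4t^2 at its stationary point.
    H = np.diag([2.0, 4.0, 6.0, 8.0])
    kind, minors = classify_stationary_point(H)
    print(kind)     # local minimum ...
    print(minors)   # [2.0, 8.0, 48.0, 384.0]
```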
Copyright © 2001-2010 Numerical Mathematics. All rights reserved.