
Minimization of Functions with Two Independent Variables

Implementing a two-dimensional minimization program is more complicated because the positions and first derivatives are now vectors. Also, instead of a single second derivative, a second partial derivative matrix (Hessian) with four components must be calculated when using the Newton-Raphson method:

H = | ∂²E/∂x²   ∂²E/∂x∂y |
    | ∂²E/∂x∂y  ∂²E/∂y²  |
The prescription for updating positions can be nicely expressed in matrix notation, where g is the gradient vector and H is the Hessian matrix.
For the steepest descent:
r(new) = r(old) − step · g
For the Newton-Raphson:
r(new) = r(old) − H⁻¹ · g
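The two update prescriptions can be sketched as short Python functions (a minimal sketch using NumPy; the names sd_step and nr_step are our own):

```python
import numpy as np

def sd_step(position, gradient, step=0.1):
    """One steepest descent update: move downhill along minus the gradient."""
    return position - step * gradient

def nr_step(position, gradient, hessian):
    """One Newton-Raphson update: step by minus the inverse Hessian times the gradient."""
    return position - np.linalg.inv(hessian) @ gradient

# Example: E = 2x^2 + y^2, starting at (-1, -1)
r = np.array([-1.0, -1.0])
g = np.array([4 * r[0], 2 * r[1]])        # gradient of E
H = np.array([[4.0, 0.0], [0.0, 2.0]])    # Hessian of E (constant for a quadratic)
print(sd_step(r, g))      # one small step toward the minimum
print(nr_step(r, g, H))   # lands exactly at the minimum (0, 0) for a quadratic surface
```

Note that for a purely quadratic surface a single Newton-Raphson step reaches the minimum exactly, while steepest descent approaches it gradually.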

Efficient handling of vectors and matrices in Python is possible using array functions provided by the Numerical Python extensions. Three implementations exist: the original Numeric, the somewhat obsolete numarray, and the more recent NumPy; all provide similar functionality for handling vectors and matrices. Specifically, vectors can be represented as one-dimensional arrays and matrices as two-dimensional arrays. For example, the two-component gradient vector that is needed by the steepest descent and Newton-Raphson minimizers can be created from the partial derivatives using the array() function. Assuming that the partial derivatives dE_dx and dE_dy have been calculated from the appropriate formulas, the following piece of code will create the gradient vector:

Gradient = array((dE_dx, dE_dy))

Similarly, the Hessian can be formed from known second partial derivatives as:

Hessian = array(( (d2E_dx2, d2E_dxdy) , (d2E_dxdy, d2E_dy2) ))

The matrix inversion is carried out by the inv() function. The array() and inv() functions must be imported from an appropriate Numerical Python module before they can be used. To import these functions from NumPy, use:

from numpy import array
from numpy.linalg import inv
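Putting the pieces together, a minimal sketch of one Newton-Raphson step for the quadratic surface E = 2x² + y² might look like this (the variable names follow the text; the derivative formulas are written out by hand):

```python
from numpy import array, dot
from numpy.linalg import inv

x, y = -1.0, -1.0                       # starting point

# first and second partial derivatives of E = 2x^2 + y^2
dE_dx, dE_dy = 4 * x, 2 * y
d2E_dx2, d2E_dxdy, d2E_dy2 = 4.0, 0.0, 2.0

Gradient = array((dE_dx, dE_dy))
Hessian = array(((d2E_dx2, d2E_dxdy), (d2E_dxdy, d2E_dy2)))

# Newton-Raphson update: r(new) = r(old) - H^-1 g
x, y = array((x, y)) - dot(inv(Hessian), Gradient)
print(x, y)   # the quadratic surface is minimized in a single step: 0.0 0.0
```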

A more complete tutorial on using Numerical Python (based on the old Numeric module) is available from O'Reilly. The Numerical Python extensions and a related SciPy library offer many science and engineering modules that facilitate rapid program development in Python.

The implementation of vectors and matrices differs somewhat in other languages. For example, the code below illustrates a two-dimensional steepest descent minimization in Mathematica©. A more detailed explanation of this code is given in the accompanying Notebook file. Note that we take advantage of the Mathematica© function Grad to obtain the vector of partial derivatives. However, the syntax is more complicated than in Python and requires good knowledge of the Mathematica© program.

<< Calculus`VectorAnalysis`
SetCoordinates[Cartesian[x,y,z]];
Energy = 2 x^2 + y^2;
Step = 0.1;
StartPoint = {-1,-1};
EXY = Energy /. {x -> StartPoint[[1]], y -> StartPoint[[2]]};
GXY = Grad[Energy][[{1,2}]] /. {x -> StartPoint[[1]], y -> StartPoint[[2]]};
MaxForce = Max[Abs[GXY]];
Print["  X  ", "  Y  ", "Energy", " ", "Force"]
Print[StartPoint, " ", EXY, " ", MaxForce]
NextPoint = StartPoint;
For[i=1, i<50, i=i+1,
  NextPoint = NextPoint - Step*GXY;
  EXY = Energy /. {x -> NextPoint[[1]], y -> NextPoint[[2]]};
  GXY = Grad[Energy][[{1,2}]] /. {x -> NextPoint[[1]], y -> NextPoint[[2]]};
  MaxForce = Max[Abs[GXY]];
  Print[NextPoint, " ", EXY, " ", MaxForce];
  If[MaxForce < 0.1, Break[]]
]
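For comparison, a line-by-line Python counterpart of the Mathematica© loop above might look like this (a sketch only; the gradient of E = 2x² + y² is coded explicitly rather than obtained symbolically):

```python
import numpy as np

step = 0.1
point = np.array([-1.0, -1.0])            # StartPoint

def gradient(p):
    # gradient of E = 2x^2 + y^2, written out by hand
    return np.array([4 * p[0], 2 * p[1]])

energy = 2 * point[0]**2 + point[1]**2
print("  X  ", "  Y  ", "Energy", "Force")
print(point, energy, np.max(np.abs(gradient(point))))

for i in range(50):
    point = point - step * gradient(point)
    energy = 2 * point[0]**2 + point[1]**2
    max_force = np.max(np.abs(gradient(point)))
    print(point, energy, max_force)
    if max_force < 0.1:
        break
```

With the 0.1 step size the loop converges (maximum force below 0.1) well within the 50-iteration limit, mirroring the Mathematica© run.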


Materials by Dr. Kalju Kahn, Department of Chemistry and Biochemistry, UC Santa Barbara. ©2006-2010