1. Normalization and Convergence

1.1. Convergence Criteria

The convergence parameter is called Desired final accuracy (denoted by \(\delta\) in the following). In NLPQLP, several quantities must be less than \(\delta\) (or some function of \(\delta\)) for the algorithm to be considered converged:

  • The absolute value of the Lagrangian function

  • The sum of constraint violations

  • The norm of the gradient of the Lagrangian function

This means that a suitable value of \(\delta\) depends strongly on the magnitudes of the cost function, the optimization variables, and the constraints. It is therefore recommended to normalize the optimization problem according to the procedure given below; this is the default behaviour. Normalization can be turned off, but doing so is likely to cause convergence problems.
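To make the role of \(\delta\) concrete, the sketch below tests the three quantities listed above against a single tolerance. This is only an illustration of the idea, not NLPQLP's internal code: the function name is invented, and NLPQLP may compare some quantities against a function of \(\delta\) rather than \(\delta\) itself.

```python
import numpy as np

def satisfies_tolerance(lagrangian_value, constraint_violation_sum,
                        lagrangian_gradient, delta):
    """Illustrative check of the three delta-based criteria.

    A single tolerance delta is used for all three tests here purely
    to show why the check is scale-dependent; the actual solver may
    use a function of delta for some of them.
    """
    return (abs(lagrangian_value) < delta                     # |L(x, u)|
            and constraint_violation_sum < delta              # sum of violations
            and np.linalg.norm(lagrangian_gradient) < delta)  # ||grad L||
```

The point is that each test compares an absolute magnitude against \(\delta\), so a problem whose cost function or constraints are orders of magnitude away from one will behave very differently from a normalized one for the same \(\delta\).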

1.2. Normalization of the Optimization Problem

The option Automatic normalization under Calculation parameters is on by default.

An optimization variable \(x\) is normalized as follows:

\[\tilde{x} = \frac{x - x_l}{x_u-x_l},\]

where \(\tilde{x}\) is the normalized optimization variable, and \(x_u\) and \(x_l\) are the upper and lower limits of \(x\), respectively.
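A direct translation of this mapping, together with its inverse, might look as follows; the function names and the NumPy vectorization are illustrative choices, not part of NLPQLP's API.

```python
import numpy as np

def normalize_vars(x, x_l, x_u):
    """Map each component of x from [x_l, x_u] onto [0, 1]."""
    return (x - x_l) / (x_u - x_l)

def denormalize_vars(x_tilde, x_l, x_u):
    """Inverse mapping: recover x from the normalized variables."""
    return x_l + x_tilde * (x_u - x_l)
```

For example, with bounds \(x_l = -10\) and \(x_u = 10\), the point \(x = 5\) maps to \(\tilde{x} = (5 - (-10))/(10 - (-10)) = 0.75\).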

The cost function \(f=f(x)\) is normalized as:

\[\tilde{f}(x) = \frac{f(x)}{|f(x_0)| + \varepsilon},\]

where \(\tilde{f}\) is the normalized cost function, \(x_0\) is the initial value of the optimization variables, and \(\varepsilon\) is a small constant.
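The same formula can be expressed as a wrapper around the original cost function. The value \(\varepsilon = 10^{-8}\) below is an assumption; the text only requires a small constant.

```python
def normalized_cost(f, x0, eps=1e-8):
    """Return f_tilde with f_tilde(x) = f(x) / (|f(x0)| + eps).

    The scale is fixed at the initial point x0, so |f_tilde(x0)| is
    close to 1 regardless of the original magnitude of f. The eps
    term guards against division by zero when f(x0) is zero.
    """
    scale = abs(f(x0)) + eps
    return lambda x: f(x) / scale
```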

A constraint \(g=g(x)\) is normalized as:

\[\tilde{g}(x) = \frac{g(x)}{|g(x_0)| + \varepsilon},\]

where \(\tilde{g}\) is the normalized constraint.
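Constraints can be wrapped in the same way; for a vector-valued \(g\), each component gets its own scale factor so that every normalized constraint starts out of order one. Again, the function name and the \(\varepsilon\) value are illustrative assumptions.

```python
import numpy as np

def normalized_constraints(g, x0, eps=1e-8):
    """Return g_tilde with g_tilde(x) = g(x) / (|g(x0)| + eps), componentwise.

    Each constraint is scaled by its own magnitude at the initial
    point x0; eps prevents division by zero for constraints that
    happen to vanish there.
    """
    scale = np.abs(np.asarray(g(x0), dtype=float)) + eps
    return lambda x: np.asarray(g(x), dtype=float) / scale
```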