SciPy Tutorial

SciPy - Optimize

The scipy.optimize package provides several commonly used optimization algorithms. This module contains the following aspects −

  1. Unconstrained and constrained minimization of multivariate scalar functions (minimize()) using a variety of algorithms (e.g. BFGS, Nelder-Mead simplex, Newton Conjugate Gradient, COBYLA or SLSQP); a minimal constrained example is sketched after this list

  2. Global (brute-force) optimization routines (e.g., anneal(), basinhopping())

  3. Least-squares minimization (leastsq()) and curve fitting (curve_fit()) algorithms

  4. Scalar univariate function minimizers (minimize_scalar()) and root finders (newton())

  5. Multivariate equation system solvers (root()) using a variety of algorithms (e.g. hybrid Powell, Levenberg-Marquardt or large-scale methods such as Newton-Krylov)
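Constrained minimization is listed above but not demonstrated later in this section. The following is a minimal sketch of minimize() with method = 'SLSQP'; the quadratic objective, the bounds and the inequality constraints are illustrative assumptions rather than examples from this tutorial.

import numpy as np
from scipy.optimize import minimize

# Illustrative objective: a simple quadratic in two variables
def objective(x):
   return (x[0] - 1)**2 + (x[1] - 2.5)**2

# Inequality constraints are given in the form g(x) >= 0
cons = ({'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})

# Keep both variables non-negative
bnds = ((0, None), (0, None))

res = minimize(objective, (2, 0), method='SLSQP', bounds=bnds, constraints=cons)
print(res.x)   # converges to approximately [1.4, 1.7]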

Unconstrained & Constrained minimization of multivariate scalar functions

The minimize() function provides a common interface to unconstrained and constrained minimization algorithms for multivariate scalar functions in scipy.optimize. To demonstrate the minimization function, consider the problem of minimizing the Rosenbrock function of N variables −

f(x) = \sum_{i = 1}^{N-1} \:100(x_{i+1} - x_i^{2})^{2} + (1 - x_i)^{2}

The minimum value of this function is 0, which is achieved when x_i = 1.

Nelder–Mead Simplex Algorithm

In the following example, the minimize() routine is used with the Nelder-Mead simplex algorithm (method = 'Nelder-Mead') (selected through the method parameter). Let us consider the following example.

import numpy as np
from scipy.optimize import minimize

def rosen(x):
   # The Rosenbrock function
   return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead', options={'xtol': 1e-8})

print(res.x)

The above program will generate the following output.

[1. 1. 1. 1. 1.]

The simplex algorithm is probably the simplest way to minimize a fairly well-behaved function. It requires only function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.

Another optimization algorithm that needs only function calls to find the minimum is Powell's method, which is available by setting method = 'powell' in the minimize() function.
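As a minimal sketch (reusing the rosen function defined above, with an illustrative xtol tolerance), Powell's method is invoked in the same way −

import numpy as np
from scipy.optimize import minimize

def rosen(x):
   # The Rosenbrock function, as defined above
   return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='powell', options={'xtol': 1e-8})
print(res.x)   # converges to approximately [1. 1. 1. 1. 1.]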

Least Squares

Solve a nonlinear least-squares problem with bounds on the variables. Given the residuals f(x) (an m-dimensional real function of n real variables) and the loss function rho(s) (a scalar function), least_squares finds a local minimum of the cost function F(x). Let us consider the following example.

In this example, we find a minimum of the Rosenbrock function without bounds on the independent variables.

import numpy as np
from scipy.optimize import least_squares

# Rosenbrock function, expressed through its residuals
def fun_rosenbrock(x):
   return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])

x0 = np.array([2, 2])
res = least_squares(fun_rosenbrock, x0)

print(res)

Notice that we only provide the vector of the residuals. The algorithm constructs the cost function as a sum of squares of the residuals, which gives the Rosenbrock function. The exact minimum is at x = [1.0, 1.0].

The above program will generate the following output.

active_mask: array([ 0.,  0.])
       cost: 9.8669242910846867e-30
        fun: array([ 4.44089210e-15,  1.11022302e-16])
       grad: array([ -8.89288649e-14,  4.44089210e-14])
        jac: array([[-20.00000015,  10.],
                    [ -1.,  0.]])
    message: '`gtol` termination condition is satisfied.'
       nfev: 3
       njev: 3
 optimality: 8.8928864934219529e-14
     status: 1
    success: True
          x: array([ 1.,  1.])
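The example above is unconstrained, but least_squares also accepts a bounds argument, as mentioned earlier. The following sketch keeps the same residuals and adds an illustrative lower bound of 1.5 on x[1]; the bound value is an assumption chosen for demonstration −

import numpy as np
from scipy.optimize import least_squares

def fun_rosenbrock(x):
   return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])

x0 = np.array([2, 2])

# bounds = (lower, upper); here x[1] must stay at or above 1.5
res = least_squares(fun_rosenbrock, x0, bounds=([-np.inf, 1.5], np.inf))
print(res.x)   # the minimum moves onto the boundary, near [1.22, 1.5]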

Root finding

Let us understand how root finding helps in SciPy.

Scalar functions

If one has a single-variable equation, there are four different root-finding algorithms that can be tried. Each of these algorithms requires the endpoints of an interval in which a root is expected (because the function changes sign). In general, brentq is the best choice, but the other methods may be useful in certain circumstances or for academic purposes.
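As a minimal sketch, brentq needs a bracket [a, b] over which the function changes sign; the function x**2 - 2 below is an illustrative choice −

from scipy.optimize import brentq

def f(x):
   return x**2 - 2   # changes sign on [0, 2], so a root is bracketed

root = brentq(f, 0, 2)
print(root)   # approximately 1.41421356, the square root of 2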

Fixed-point solving

A problem closely related to finding the zeros of a function is the problem of finding a fixed point of a function. A fixed point of a function is the point at which evaluation of the function returns the point: g(x) = x. Clearly, the fixed point of g is the root of f(x) = g(x) - x. Equivalently, the root of f is the fixed point of g(x) = f(x) + x. The routine fixed_point provides a simple iterative method using Aitken's sequence acceleration to estimate the fixed point of g, if a starting point is given.
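As a minimal sketch, the classic example g(x) = cos(x) (an illustrative choice) has a fixed point near 0.739 −

import numpy as np
from scipy.optimize import fixed_point

def g(x):
   return np.cos(x)   # cos(x) = x holds near x = 0.739

print(fixed_point(g, 0.5))   # approximately 0.7390851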

Sets of equations

Finding a root of a set of non-linear equations can be achieved using the root() function. Several methods are available, amongst which hybr (the default) and lm respectively use the hybrid method of Powell and the Levenberg-Marquardt method from MINPACK.

The following example considers the single-variable transcendental equation.

2x + 2cos(x) = 0

A root of which can be found as follows −

import numpy as np
from scipy.optimize import root

def func(x):
   return x * 2 + 2 * np.cos(x)

sol = root(func, 0.3)
print(sol)

The above program will generate the following output.

    fjac: array([[-1.]])
     fun: array([ 2.22044605e-16])
 message: 'The solution converged.'
    nfev: 10
     qtf: array([ -2.77644574e-12])
       r: array([-3.34722409])
  status: 1
 success: True
       x: array([-0.73908513])
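The root() function also handles systems of equations directly. The following sketch solves a small two-equation system with the default hybr method; the system itself is an illustrative assumption chosen for demonstration −

import numpy as np
from scipy.optimize import root

# Illustrative system:
#    x0 + 0.5*(x0 - x1)**3 = 1
#    0.5*(x1 - x0)**3 + x1 = 0
def system(x):
   return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
           0.5 * (x[1] - x[0])**3 + x[1]]

sol = root(system, [0, 0], method='hybr')
print(sol.x)   # approximately [0.8411639, 0.1588361]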