package scipy

type tag = [
  | `NonlinearConstraint
]
type t = [ `NonlinearConstraint | `Object ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val create : ?jac:[ `T2_point | `Callable of Py.Object.t | `Cs | `T3_point ] -> ?hess: [ `T2_point | `HessianUpdateStrategy of Py.Object.t | `Cs | `Callable of Py.Object.t | `T3_point | `None ] -> ?keep_feasible:Py.Object.t -> ?finite_diff_rel_step:[> `Ndarray ] Np.Obj.t -> ?finite_diff_jac_sparsity:[> `ArrayLike ] Np.Obj.t -> fun_:Py.Object.t -> lb:Py.Object.t -> ub:Py.Object.t -> unit -> t

Nonlinear constraint on the variables.

The constraint has the general inequality form::

lb <= fun(x) <= ub

Here the vector of independent variables ``x`` is passed as an ndarray of shape (n,) and ``fun`` returns a vector with m components.

It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
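For illustration, a minimal sketch against the underlying ``scipy.optimize`` API that this binding wraps, with made-up constraint functions, showing how the bounds select interval, one-sided and equality constraints::

    import numpy as np
    from scipy.optimize import NonlinearConstraint

    # Interval constraint: 0.5 <= x[0]**2 + x[1]**2 <= 2.0
    interval = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, 0.5, 2.0)

    # One-sided constraint: x[0] * x[1] <= 1.0 (lower bound is -inf)
    one_sided = NonlinearConstraint(lambda x: x[0] * x[1], -np.inf, 1.0)

    # Equality constraint: x[0] + x[1] == 1.0 (lb == ub)
    equality = NonlinearConstraint(lambda x: x[0] + x[1], 1.0, 1.0)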

Parameters
----------
fun : callable
    The function defining the constraint. The signature is ``fun(x) -> array_like, shape (m,)``.
lb, ub : array_like
    Lower and upper bounds on the constraint. Each array must have the shape (m,) or be a scalar; in the latter case the bound will be the same for all components of the constraint. Use ``np.inf`` with an appropriate sign to specify a one-sided constraint. Set components of `lb` and `ub` equal to represent an equality constraint. Note that you can mix constraints of different types (interval, one-sided or equality) by setting different components of `lb` and `ub` as necessary.
jac : {callable, '2-point', '3-point', 'cs'}, optional
    Method of computing the Jacobian matrix (an m-by-n matrix, where element (i, j) is the partial derivative of f[i] with respect to x[j]). The keywords '2-point', '3-point' and 'cs' select a finite difference scheme for the numerical estimation. A callable must have the following signature: ``jac(x) -> {ndarray, sparse matrix}, shape (m, n)``. Default is '2-point'.
hess : {callable, '2-point', '3-point', 'cs', HessianUpdateStrategy, None}, optional
    Method for computing the Hessian matrix. The keywords '2-point', '3-point' and 'cs' select a finite difference scheme for numerical estimation. Alternatively, objects implementing the `HessianUpdateStrategy` interface can be used to approximate the Hessian. Currently available implementations are:

      • `BFGS` (default option)
      • `SR1`

    A callable must return the Hessian matrix of ``dot(fun, v)`` and must have the following signature: ``hess(x, v) -> {LinearOperator, sparse matrix, array_like}, shape (n, n)``. Here ``v`` is an ndarray with shape (m,) containing the Lagrange multipliers.
keep_feasible : array_like of bool, optional
    Whether to keep the constraint components feasible throughout iterations. A single value sets this property for all components. Default is False. Has no effect for equality constraints.
finite_diff_rel_step : None or array_like, optional
    Relative step size for the finite difference approximation. Default is None, which selects a reasonable value automatically depending on the finite difference scheme.
finite_diff_jac_sparsity : {None, array_like, sparse matrix}, optional
    Defines the sparsity structure of the Jacobian matrix for finite difference estimation; its shape must be (m, n). If the Jacobian has only a few non-zero elements in *each* row, providing the sparsity structure will greatly speed up the computations. A zero entry means that the corresponding element of the Jacobian is identically zero. If provided, forces the use of the 'lsmr' trust-region solver. If None (default), dense differencing will be used.
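As a sketch of how these parameters combine in the underlying ``scipy.optimize`` API (the constraint function, Jacobian and bounds below are made up for illustration)::

    import numpy as np
    from scipy.optimize import NonlinearConstraint, BFGS

    # Constraint with m = 1 component on n = 2 variables.
    fun = lambda x: x[0]**2 + x[1]
    jac = lambda x: np.array([[2.0 * x[0], 1.0]])   # analytic 1-by-2 Jacobian

    nlc = NonlinearConstraint(fun, -np.inf, 1.0,
                              jac=jac,              # callable Jacobian
                              hess=BFGS(),          # quasi-Newton Hessian approximation
                              keep_feasible=True)   # keep this component feasible at all iterates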

Notes
-----
Finite difference schemes '2-point', '3-point' and 'cs' may be used for approximating either the Jacobian or the Hessian. We do not, however, allow their use for approximating both simultaneously. Hence whenever the Jacobian is estimated via finite differences, we require the Hessian to be estimated using one of the quasi-Newton strategies.

The scheme 'cs' is potentially the most accurate, but requires the function to correctly handle complex inputs and to be analytically continuable to the complex plane. The scheme '3-point' is more accurate than '2-point' but requires twice as many operations.
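For example, the permitted combination of a finite-difference Jacobian with a quasi-Newton Hessian looks as follows (the constraint function is a placeholder)::

    import numpy as np
    from scipy.optimize import NonlinearConstraint, SR1

    # Jacobian estimated by '3-point' finite differences; Hessian approximated by
    # the SR1 quasi-Newton update, since finite differences may not be used for both.
    nlc = NonlinearConstraint(lambda x: x[0] * x[1] - 1.0, -np.inf, 0.0,
                              jac='3-point', hess=SR1())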

Examples
--------
Constrain ``x[0] < sin(x[1]) + 1.9``

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> con = lambda x: x[0] - np.sin(x[1])
>>> nlc = NonlinearConstraint(con, -np.inf, 1.9)
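The resulting constraint can then be passed to ``minimize`` via the ``constraints`` argument of the 'trust-constr' method; the quadratic objective and starting point below are illustrative only:

>>> from scipy.optimize import minimize
>>> res = minimize(lambda x: x[0]**2 + x[1]**2, x0=[0.5, 0.5],
...                method='trust-constr', constraints=[nlc])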

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Stdlib.Format.formatter -> t -> unit

Pretty-print the object to a formatter.