least_squares — SciPy v1.14.0 Manual


scipy.optimize.least_squares(fun, x0, jac='2-point', bounds=(-inf, inf), method='trf', ftol=1e-08, xtol=1e-08, gtol=1e-08, x_scale=1.0, loss='linear', f_scale=1.0, diff_step=None, tr_solver=None, tr_options={}, jac_sparsity=None, max_nfev=None, verbose=0, args=(), kwargs={})

Solve a nonlinear least-squares problem with bounds on the variables.

Given the residuals f(x) (an m-D real function of n real variables) and the loss function rho(s) (a scalar function), least_squares finds a local minimum of the cost function F(x):

minimize F(x) = 0.5 * sum(rho(f_i(x)**2), i = 0, ..., m - 1)
subject to lb <= x <= ub

The purpose of the loss function rho(s) is to reduce the influence of outliers on the solution.

Parameters:
fun : callable

Function which computes the vector of residuals, with the signature fun(x, *args, **kwargs), i.e., the minimization proceeds with respect to its first argument. The argument x passed to this function is an ndarray of shape (n,) (never a scalar, even for n=1). It must allocate and return a 1-D array_like of shape (m,) or a scalar. If the argument x is complex or the function fun returns complex residuals, it must be wrapped in a real function of real arguments, as shown at the end of the Examples section.
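For illustration, a residual function with this signature might look like the following minimal sketch (the name fun_example and the exponential model are assumptions for this example; the data arrays t and y would be passed via the args parameter described below):

>>> import numpy as np
>>> def fun_example(x, t, y):
...     # x has shape (n,); the returned residual vector has shape (m,) == t.shape
...     return x[0] + x[1] * np.exp(x[2] * t) - y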

x0 : array_like with shape (n,) or float

Initial guess on independent variables. If float, it will be treated as a 1-D array with one element. When method is ‘trf’, the initial guess might be slightly adjusted to lie sufficiently within the given bounds.

jac : {‘2-point’, ‘3-point’, ‘cs’, callable}, optional

Method of computing the Jacobian matrix (an m-by-n matrix, where element (i, j) is the partial derivative of f[i] with respect to x[j]). The keywords select a finite difference scheme for numerical estimation. The scheme ‘3-point’ is more accurate, but requires twice as many operations as ‘2-point’ (default). The scheme ‘cs’ uses complex steps, and while potentially the most accurate, it is applicable only when fun correctly handles complex inputs and can be analytically continued to the complex plane. Method ‘lm’ always uses the ‘2-point’ scheme. If callable, it is used as jac(x, *args, **kwargs) and should return a good approximation (or the exact value) for the Jacobian as an array_like (np.atleast_2d is applied), a sparse matrix (csr_matrix preferred for performance) or a scipy.sparse.linalg.LinearOperator.

bounds : 2-tuple of array_like or Bounds, optional

There are two ways to specify bounds:

  1. Instance of Bounds class

  2. Lower and upper bounds on independent variables. Defaults to no bounds. Each array must match the size of x0 or be a scalar, in the latter case a bound will be the same for all variables. Use np.inf with an appropriate sign to disable bounds on all or some variables. Both forms are sketched below.
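A minimal sketch of both forms, assuming a two-variable problem (the numeric bounds are arbitrary):

>>> import numpy as np
>>> from scipy.optimize import Bounds
>>> bounds_obj = Bounds([0.0, -np.inf], [2.0, 10.0])    # form 1: Bounds instance
>>> bounds_tup = ([0.0, -np.inf], [2.0, 10.0])          # form 2: (lb, ub) 2-tuple
>>> # either form can be passed, e.g.
>>> # least_squares(fun_example, x0, bounds=bounds_obj, args=(t, y))  # hypothetical usage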

method : {‘trf’, ‘dogbox’, ‘lm’}, optional

Algorithm to perform minimization.

  • ‘trf’ : Trust Region Reflective algorithm, particularly suitable for large sparse problems with bounds. Generally robust method.

  • ‘dogbox’ : dogleg algorithm with rectangular trust regions, typical use case is small problems with bounds. Not recommended for problems with rank-deficient Jacobian.

  • ‘lm’ : Levenberg-Marquardt algorithm as implemented in MINPACK. Doesn’t handle bounds and sparse Jacobians. Usually the most efficient method for small unconstrained problems.

Default is ‘trf’. See Notes for more information.

ftol : float or None, optional

Tolerance for termination by the change of the cost function. Default is 1e-8. The optimization process is stopped when dF < ftol * F, and there was an adequate agreement between a local quadratic model and the true model in the last step.

If None and ‘method’ is not ‘lm’, the termination by this condition is disabled. If ‘method’ is ‘lm’, this tolerance must be higher than machine epsilon.

xtol : float or None, optional

Tolerance for termination by the change of the independent variables. Default is 1e-8. The exact condition depends on the method used:

  • For ‘trf’ and ‘dogbox’ : norm(dx) < xtol * (xtol + norm(x)).

  • For ‘lm’ : Delta < xtol * norm(xs), where Delta is a trust-region radius and xs is the value of x scaled according to x_scale parameter (see below).

If None and ‘method’ is not ‘lm’, the termination by this condition is disabled. If ‘method’ is ‘lm’, this tolerance must be higher than machine epsilon.

gtol : float or None, optional

Tolerance for termination by the norm of the gradient. Default is 1e-8. The exact condition depends on the method used:

  • For ‘trf’ : norm(g_scaled, ord=np.inf) < gtol, where g_scaled is the value of the gradient scaled to account for the presence of the bounds [STIR].

  • For ‘dogbox’ : norm(g_free, ord=np.inf) < gtol, where g_free is the gradient with respect to the variables which are not in the optimal state on the boundary.

  • For ‘lm’ : the maximum absolute value of the cosine of angles between columns of the Jacobian and the residual vector is less than gtol, or the residual vector is zero.

If None and ‘method’ is not ‘lm’, the termination by this condition is disabled. If ‘method’ is ‘lm’, this tolerance must be higher than machine epsilon.

x_scale : array_like or ‘jac’, optional

Characteristic scale of each variable. Setting x_scale is equivalent to reformulating the problem in scaled variables xs = x / x_scale. An alternative view is that the size of a trust region along the jth dimension is proportional to x_scale[j]. Improved convergence may be achieved by setting x_scale such that a step of a given size along any of the scaled variables has a similar effect on the cost function. If set to ‘jac’, the scale is iteratively updated using the inverse norms of the columns of the Jacobian matrix (as described in [JJMore]).
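For example, when the variables differ by orders of magnitude, their characteristic scales can be passed explicitly, or x_scale='jac' can be used instead. A minimal sketch, with a hypothetical residual function whose two parameters are of order 1e3 and 1e-2:

>>> import numpy as np
>>> from scipy.optimize import least_squares
>>> def fun_scaled(x, t, y):                      # hypothetical residual function
...     return x[0] * 1e-3 + x[1] * 1e2 * t - y   # solution near x = [1000, 0.05]
>>> t = np.linspace(0, 1, 20)
>>> y = 1.0 + 5.0 * t
>>> res = least_squares(fun_scaled, x0=[1.0, 1.0], x_scale=[1e3, 1e-2],
...                     args=(t, y))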

loss : str or callable, optional

Determines the loss function. The following keyword values are allowed:

  • ‘linear’ (default) : rho(z) = z. Gives a standard least-squares problem.

  • ‘soft_l1’ : rho(z) = 2 * ((1 + z)**0.5 - 1). The smooth approximation of l1 (absolute value) loss. Usually a good choice for robust least squares.

  • ‘huber’ : rho(z) = z if z <= 1 else 2*z**0.5 - 1. Works similarly to ‘soft_l1’.

  • ‘cauchy’ : rho(z) = ln(1 + z). Severely weakens the influence of outliers, but may cause difficulties in the optimization process.

  • ‘arctan’ : rho(z) = arctan(z). Limits the maximum loss on a single residual, has properties similar to ‘cauchy’.

If callable, it must take a 1-D ndarray z=f**2 and return an array_like with shape (3, m) where row 0 contains function values, row 1 contains first derivatives and row 2 contains second derivatives. Method ‘lm’ supports only ‘linear’ loss.
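As an illustration of the callable interface, the built-in ‘soft_l1’ loss could equivalently be supplied as a function returning rho(z) and its first two derivatives (this is only a sketch; passing loss='soft_l1' is simpler in practice):

>>> import numpy as np
>>> def soft_l1_loss(z):
...     rho = np.empty((3, z.size))
...     t = 1 + z
...     rho[0] = 2 * (t**0.5 - 1)    # rho(z)
...     rho[1] = t**-0.5             # rho'(z)
...     rho[2] = -0.5 * t**-1.5      # rho''(z)
...     return rho
>>> # e.g. least_squares(fun_example, x0, loss=soft_l1_loss, args=(t, y))  # hypothetical usage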

f_scale : float, optional

Value of soft margin between inlier and outlier residuals, default is 1.0. The loss function is evaluated as follows rho_(f**2) = C**2 * rho(f**2 / C**2), where C is f_scale, and rho is determined by loss parameter. This parameter has no effect with loss='linear', but for other loss values it is of crucial importance.

max_nfev : None or int, optional

Maximum number of function evaluations before the termination. If None (default), the value is chosen automatically:

  • For ‘trf’ and ‘dogbox’ : 100 * n.

  • For ‘lm’ : 100 * n if jac is callable and 100 * n * (n + 1) otherwise (because ‘lm’ counts function calls in Jacobian estimation).

diff_step : None or array_like, optional

Determines the relative step size for the finite difference approximation of the Jacobian. The actual step is computed as x * diff_step. If None (default), then diff_step is taken to be a conventional “optimal” power of machine epsilon for the finite difference scheme used [NR].

tr_solver : {None, ‘exact’, ‘lsmr’}, optional

Method for solving trust-region subproblems, relevant only for ‘trf’ and ‘dogbox’ methods.

  • ‘exact’ is suitable for not very large problems with dense Jacobian matrices. The computational complexity per iteration is comparable to a singular value decomposition of the Jacobian matrix.

  • ‘lsmr’ is suitable for problems with sparse and large Jacobian matrices. It uses the iterative procedure scipy.sparse.linalg.lsmr for finding a solution of a linear least-squares problem and only requires matrix-vector product evaluations.

If None (default), the solver is chosen based on the type of Jacobian returned on the first iteration.

tr_options : dict, optional

Keyword options passed to trust-region solver.

  • tr_solver='exact': tr_options are ignored.

  • tr_solver='lsmr': options for scipy.sparse.linalg.lsmr. Additionally, method='trf' supports the ‘regularize’ option (bool, default is True), which adds a regularization term to the normal equation that improves convergence if the Jacobian is rank-deficient [Byrd] (eq. 3.4); a sketch of passing these options is shown below.
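A minimal sketch of passing these options (the residual function and the tolerance values are illustrative; see scipy.sparse.linalg.lsmr for the accepted keys):

>>> import numpy as np
>>> from scipy.optimize import least_squares
>>> def fun_small(x):                             # hypothetical residual function
...     return np.array([x[0] - 1, x[1] - 2, x[0] + x[1] - 3])
>>> res = least_squares(fun_small, x0=[0.0, 0.0], tr_solver='lsmr',
...                     tr_options={'regularize': True, 'atol': 1e-10, 'btol': 1e-10})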

jac_sparsity : {None, array_like, sparse matrix}, optional

Defines the sparsity structure of the Jacobian matrix for finite difference estimation, its shape must be (m, n). If the Jacobian has only few non-zero elements in each row, providing the sparsity structure will greatly speed up the computations [Curtis]. A zero entry means that a corresponding element in the Jacobian is identically zero. If provided, forces the use of ‘lsmr’ trust-region solver. If None (default), then dense differencing will be used. Has no effect for ‘lm’ method.

verbose : {0, 1, 2}, optional

Level of algorithm’s verbosity:

  • 0 (default) : work silently.

  • 1 : display a termination report.

  • 2 : display progress during iterations (not supported by ‘lm’ method).

args, kwargs : tuple and dict, optional

Additional arguments passed to fun and jac. Both empty by default. The calling signature is fun(x, *args, **kwargs) and the same for jac.

Returns:
result : OptimizeResult

OptimizeResult with the following fields defined:

x : ndarray, shape (n,)

Solution found.

cost : float

Value of the cost function at the solution.

fun : ndarray, shape (m,)

Vector of residuals at the solution.

jac : ndarray, sparse matrix or LinearOperator, shape (m, n)

Modified Jacobian matrix at the solution, in the sense that J^T J is a Gauss-Newton approximation of the Hessian of the cost function. The type is the same as the one used by the algorithm.

grad : ndarray, shape (n,)

Gradient of the cost function at the solution.

optimality : float

First-order optimality measure. In unconstrained problems, it is always the uniform norm of the gradient. In constrained problems, it is the quantity which was compared with gtol during iterations.

active_mask : ndarray of int, shape (n,)

Each component shows whether a corresponding constraint is active (that is, whether a variable is at the bound):

  • 0 : a constraint is not active.

  • -1 : a lower bound is active.

  • 1 : an upper bound is active.

Might be somewhat arbitrary for ‘trf’ method as it generates a sequence of strictly feasible iterates and active_mask is determined within a tolerance threshold.

nfev : int

Number of function evaluations done. Methods ‘trf’ and ‘dogbox’ do not count function calls for numerical Jacobian approximation, as opposed to ‘lm’ method.

njev : int or None

Number of Jacobian evaluations done. If numerical Jacobian approximation is used in ‘lm’ method, it is set to None.

status : int

The reason for algorithm termination:

  • -1 : improper input parameters status returned from MINPACK.

  • 0 : the maximum number of function evaluations is exceeded.

  • 1 : gtol termination condition is satisfied.

  • 2 : ftol termination condition is satisfied.

  • 3 : xtol termination condition is satisfied.

  • 4 : Both ftol and xtol termination conditions are satisfied.

message : str

Verbal description of the termination reason.

success : bool

True if one of the convergence criteria is satisfied (status > 0).

See also

leastsq

A legacy wrapper for the MINPACK implementation of the Levenberg-Marquardt algorithm.

curve_fit

Least-squares minimization applied to a curve-fitting problem.

Notes

Method ‘lm’ (Levenberg-Marquardt) calls a wrapper over least-squares algorithms implemented in MINPACK (lmder, lmdif). It runs the Levenberg-Marquardt algorithm formulated as a trust-region type algorithm. The implementation is based on the paper [JJMore]; it is very robust and efficient with a lot of smart tricks. It should be your first choice for unconstrained problems. Note that it doesn’t support bounds. Also, it doesn’t work when m < n.

Method ‘trf’ (Trust Region Reflective) is motivated by the process of solving a system of equations, which constitute the first-order optimality condition for a bound-constrained minimization problem as formulated in [STIR]. The algorithm iteratively solves trust-region subproblems augmented by a special diagonal quadratic term and with trust-region shape determined by the distance from the bounds and the direction of the gradient. These enhancements help to avoid making steps directly into bounds and to efficiently explore the whole space of variables. To further improve convergence, the algorithm considers search directions reflected from the bounds. To obey theoretical requirements, the algorithm keeps iterates strictly feasible. With dense Jacobians trust-region subproblems are solved by an exact method very similar to the one described in [JJMore] (and implemented in MINPACK). The difference from the MINPACK implementation is that a singular value decomposition of the Jacobian matrix is done once per iteration, instead of a QR decomposition and a series of Givens rotation eliminations. For large sparse Jacobians a 2-D subspace approach of solving trust-region subproblems is used [STIR], [Byrd]. The subspace is spanned by a scaled gradient and an approximate Gauss-Newton solution delivered by scipy.sparse.linalg.lsmr. When no constraints are imposed the algorithm is very similar to MINPACK and has generally comparable performance. The algorithm works quite robustly in unbounded and bounded problems, thus it is chosen as the default algorithm.

Method ‘dogbox’ operates in a trust-region framework, but considers rectangular trust regions as opposed to conventional ellipsoids [Voglis]. The intersection of a current trust region and initial bounds is again rectangular, so on each iteration a quadratic minimization problem subject to bound constraints is solved approximately by Powell’s dogleg method [NumOpt]. The required Gauss-Newton step can be computed exactly for dense Jacobians or approximately by scipy.sparse.linalg.lsmr for large sparse Jacobians. The algorithm is likely to exhibit slow convergence when the rank of the Jacobian is less than the number of variables. The algorithm often outperforms ‘trf’ in bounded problems with a small number of variables.

Robust loss functions are implemented as described in [BA]. The idea is to modify a residual vector and a Jacobian matrix on each iteration such that the computed gradient and Gauss-Newton Hessian approximation match the true gradient and Hessian approximation of the cost function. Then the algorithm proceeds in a normal way, i.e., robust loss functions are implemented as a simple wrapper over standard least-squares algorithms.

Added in version 0.17.0.

References

[STIR]

M. A. Branch, T. F. Coleman, and Y. Li, “A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems,” SIAM Journal on Scientific Computing, Vol. 21, Number 1, pp. 1-23, 1999.

[NR]

William H. Press et al., “Numerical Recipes. The Art of Scientific Computing. 3rd edition”, Sec. 5.7.

[Byrd]

R. H. Byrd, R. B. Schnabel and G. A. Shultz, “Approximate solution of the trust region problem by minimization over two-dimensional subspaces”, Math. Programming, 40, pp. 247-263, 1988.

[Curtis]

A. Curtis, M. J. D. Powell, and J. Reid, “On the estimation of sparse Jacobian matrices”, Journal of the Institute of Mathematics and its Applications, 13, pp. 117-120, 1974.

[JJMore]

J. J. More, “The Levenberg-Marquardt Algorithm: Implementation and Theory,” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.

[Voglis]

C. Voglis and I. E. Lagaris, “A Rectangular Trust Region Dogleg Approach for Unconstrained and Bound Constrained Nonlinear Optimization”, WSEAS International Conference on Applied Mathematics, Corfu, Greece, 2004.

[NumOpt]

J. Nocedal and S. J. Wright, “Numerical Optimization, 2nd edition”, Chapter 4.

[BA]

B. Triggs et al., “Bundle Adjustment - A Modern Synthesis”, Proceedings of the International Workshop on Vision Algorithms: Theory and Practice, pp. 298-372, 1999.

Examples

In this example we find a minimum of the Rosenbrock function without bounds on independent variables.

>>> import numpy as np
>>> def fun_rosenbrock(x):
...     return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])

Notice that we only provide the vector of the residuals. The algorithm constructs the cost function as a sum of squares of the residuals, which gives the Rosenbrock function. The exact minimum is at x = [1.0, 1.0].

>>> from scipy.optimize import least_squares
>>> x0_rosenbrock = np.array([2, 2])
>>> res_1 = least_squares(fun_rosenbrock, x0_rosenbrock)
>>> res_1.x
array([ 1.,  1.])
>>> res_1.cost
9.8669242910846867e-30
>>> res_1.optimality
8.8928864934219529e-14

We now constrain the variables, in such a way that the previous solution becomes infeasible. Specifically, we require that x[1] >= 1.5, and x[0] left unconstrained. To this end, we specify the bounds parameter to least_squares in the form bounds=([-np.inf, 1.5], np.inf).

We also provide the analytic Jacobian:

>>> def jac_rosenbrock(x):
...     return np.array([
...         [-20 * x[0], 10],
...         [-1, 0]])

Putting this all together, we see that the new solution lies on the bound:

>>> res_2 = least_squares(fun_rosenbrock, x0_rosenbrock, jac_rosenbrock,
...                       bounds=([-np.inf, 1.5], np.inf))
>>> res_2.x
array([ 1.22437075,  1.5       ])
>>> res_2.cost
0.025213093946805685
>>> res_2.optimality
1.5885401433157753e-07

Now we solve a system of equations (i.e., the cost function should be zero at a minimum) for a Broyden tridiagonal vector-valued function of 100000 variables:

>>> def fun_broyden(x):
...     f = (3 - x) * x + 1
...     f[1:] -= x[:-1]
...     f[:-1] -= 2 * x[1:]
...     return f

The corresponding Jacobian matrix is sparse. We tell the algorithm to estimate it by finite differences and provide the sparsity structure of the Jacobian to significantly speed up this process.

>>> from scipy.sparse import lil_matrix
>>> def sparsity_broyden(n):
...     sparsity = lil_matrix((n, n), dtype=int)
...     i = np.arange(n)
...     sparsity[i, i] = 1
...     i = np.arange(1, n)
...     sparsity[i, i - 1] = 1
...     i = np.arange(n - 1)
...     sparsity[i, i + 1] = 1
...     return sparsity
...
>>> n = 100000
>>> x0_broyden = -np.ones(n)
...
>>> res_3 = least_squares(fun_broyden, x0_broyden,
...                       jac_sparsity=sparsity_broyden(n))
>>> res_3.cost
4.5687069299604613e-23
>>> res_3.optimality
1.1650454296851518e-11

Let’s also solve a curve fitting problem using a robust loss function to take care of outliers in the data. Define the model function as y = a + b * exp(c * t), where t is a predictor variable, y is an observation and a, b, c are parameters to estimate.

First, define the function which generates the data with noise and outliers, define the model parameters, and generate data:

>>> from numpy.random import default_rng
>>> rng = default_rng()
>>> def gen_data(t, a, b, c, noise=0., n_outliers=0, seed=None):
...     rng = default_rng(seed)
...
...     y = a + b * np.exp(t * c)
...
...     error = noise * rng.standard_normal(t.size)
...     outliers = rng.integers(0, t.size, n_outliers)
...     error[outliers] *= 10
...
...     return y + error
...
>>> a = 0.5
>>> b = 2.0
>>> c = -1
>>> t_min = 0
>>> t_max = 10
>>> n_points = 15
...
>>> t_train = np.linspace(t_min, t_max, n_points)
>>> y_train = gen_data(t_train, a, b, c, noise=0.1, n_outliers=3)

Define the function for computing residuals and an initial estimate of the parameters.

>>> def fun(x, t, y):
...     return x[0] + x[1] * np.exp(x[2] * t) - y
...
>>> x0 = np.array([1.0, 1.0, 0.0])

Compute a standard least-squares solution:

>>> res_lsq = least_squares(fun, x0, args=(t_train, y_train))

Now compute two solutions with two different robust loss functions. The parameter f_scale is set to 0.1, meaning that inlier residuals should not significantly exceed 0.1 (the noise level used).

>>> res_soft_l1 = least_squares(fun, x0, loss='soft_l1', f_scale=0.1,
...                             args=(t_train, y_train))
>>> res_log = least_squares(fun, x0, loss='cauchy', f_scale=0.1,
...                         args=(t_train, y_train))

And, finally, plot all the curves. We see that by selecting an appropriate loss we can get estimates close to optimal even in the presence of strong outliers. But keep in mind that generally it is recommended to try ‘soft_l1’ or ‘huber’ losses first (if at all necessary) as the other two options may cause difficulties in the optimization process.

>>> t_test = np.linspace(t_min, t_max, n_points * 10)
>>> y_true = gen_data(t_test, a, b, c)
>>> y_lsq = gen_data(t_test, *res_lsq.x)
>>> y_soft_l1 = gen_data(t_test, *res_soft_l1.x)
>>> y_log = gen_data(t_test, *res_log.x)
...
>>> import matplotlib.pyplot as plt
>>> plt.plot(t_train, y_train, 'o')
>>> plt.plot(t_test, y_true, 'k', linewidth=2, label='true')
>>> plt.plot(t_test, y_lsq, label='linear loss')
>>> plt.plot(t_test, y_soft_l1, label='soft_l1 loss')
>>> plt.plot(t_test, y_log, label='cauchy loss')
>>> plt.xlabel("t")
>>> plt.ylabel("y")
>>> plt.legend()
>>> plt.show()
[Figure: training data with outliers, the true curve, and the fitted curves for the ‘linear’, ‘soft_l1’ and ‘cauchy’ losses.]

In the next example, we show how complex-valued residual functions of complex variables can be optimized with least_squares(). Consider the following function:

>>> def f(z):
...     return z - (0.5 + 0.5j)

We wrap it into a function of real variables that returns real residuals by simply handling the real and imaginary parts as independent variables:

>>> def f_wrap(x):
...     fx = f(x[0] + 1j*x[1])
...     return np.array([fx.real, fx.imag])

Thus, instead of the original m-D complex function of n complex variables we optimize a 2m-D real function of 2n real variables:

>>> from scipy.optimize import least_squares
>>> res_wrapped = least_squares(f_wrap, (0.1, 0.1), bounds=([0, 0], [1, 1]))
>>> z = res_wrapped.x[0] + res_wrapped.x[1]*1j
>>> z
(0.49999999999925893+0.49999999999925893j)