4.1.9. statsmodels.base._penalties

A collection of smooth penalty functions.

Penalties on vectors take a vector argument and return a scalar penalty. The gradient of the penalty is a vector with the same shape as the input vector.
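
As an illustration of this contract, the following is a minimal NumPy sketch of an L2-style vector penalty; the class and method names are illustrative, not part of the module:

    import numpy as np

    class VectorL2Sketch:
        """Vector penalty: scalar value, gradient with the same shape
        as the input parameter vector."""

        def __init__(self, wts=1.0):
            self.wts = wts

        def func(self, params):
            # Scalar penalty: weighted sum of squared parameters.
            return np.sum(self.wts * np.asarray(params) ** 2)

        def deriv(self, params):
            # Gradient has the same shape as params.
            return 2.0 * self.wts * np.asarray(params)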

Penalties on covariance matrices take two arguments: the matrix and its inverse, both in unpacked (square) form. The returned penalty is a scalar, and the gradient is returned as a vector that contains the gradient with respect to the free elements in the lower triangle of the covariance matrix.
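
A sketch of a covariance-matrix penalty following this contract is shown below; it uses a log-determinant barrier purely for illustration and is not the module's PSD implementation:

    import numpy as np

    class LogDetBarrierSketch:
        """Covariance penalty: takes the matrix and its inverse in square
        form, returns a scalar, and returns the gradient with respect to
        the free elements of the lower triangle."""

        def __init__(self, wt=1.0):
            self.wt = wt

        def func(self, mat, mat_inv):
            # -wt * log det(mat) grows to +infinity as mat approaches the
            # boundary of the positive definite cone.
            sign, logdet = np.linalg.slogdet(mat)
            if sign <= 0:
                return np.inf
            return -self.wt * logdet

        def deriv(self, mat, mat_inv):
            # d(-wt * log det V)/dV = -wt * V^{-1}; off-diagonal free
            # elements of a symmetric matrix appear twice, so those
            # entries are doubled before taking the lower triangle.
            g = -self.wt * (2.0 * mat_inv - np.diag(np.diag(mat_inv)))
            i, j = np.tril_indices(mat.shape[0])
            return g[i, j]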

All penalties are subtracted from the log-likelihood, so greater penalty values correspond to a greater degree of penalization.

The penalties should be smooth so that they can be subtracted from log-likelihood functions and optimized using standard gradient-based methods (so, for example, L1 penalties do not belong here).
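
To make this convention concrete, the sketch below builds a penalized objective by subtracting a smooth L2 penalty from a simple Gaussian log-likelihood (the data and weight are illustrative, not part of the module) and optimizes it with a standard gradient-based method:

    import numpy as np
    from scipy.optimize import minimize

    # Toy regression data with unit error variance (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, 0.0, -2.0]) + rng.normal(size=100)

    wt = 5.0  # penalty weight, chosen arbitrarily for the example

    def loglike(beta):
        resid = y - X @ beta
        return -0.5 * np.sum(resid ** 2)

    def penalty(beta):
        # Smooth L2 penalty; an L1 penalty would not be differentiable here.
        return wt * np.sum(beta ** 2)

    # The penalty is subtracted from the log-likelihood; minimize its negative.
    def objective(beta):
        return -(loglike(beta) - penalty(beta))

    res = minimize(objective, x0=np.zeros(3), method="BFGS")
    print(res.x)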

4.1.9.1. Classes

CovariancePenalty(wt)
L2([wts]) The L2 (ridge) penalty.
PSD(wt) A penalty that converges to +infinity as the argument matrix approaches the boundary of the domain of symmetric, positive definite matrices.
Penalty(wts) A class for representing a scalar-valued penalty.
PseudoHuber(dlt[, wts]) The pseudo-Huber penalty.
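
Since statsmodels.base._penalties is a private module, direct use is mainly for experimentation. A sketch of such use, assuming the constructor signatures listed above and a func method returning the scalar penalty (the gradient method is named grad or deriv depending on the statsmodels version), might look like:

    import numpy as np
    from statsmodels.base import _penalties

    params = np.array([0.5, -1.0, 2.0])

    # L2 (ridge) penalty; the weight argument is passed positionally to
    # match the L2([wts]) signature listed above.
    ridge = _penalties.L2(np.array([1.0, 1.0, 0.0]))
    print(ridge.func(params))   # scalar penalty value

    # Pseudo-Huber penalty with threshold dlt (PseudoHuber(dlt[, wts])).
    ph = _penalties.PseudoHuber(0.5)
    print(ph.func(params))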