Module light_labyrinth.hyperparams.regularization

The light_labyrinth.hyperparams.regularization module includes Regularization classes that can be used for training Light Labyrinth models. The regularization term added to the loss function prevents the model from overfitting.
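
A minimal usage sketch tying the three classes together; the LightLabyrinthClassifier constructor below follows the class examples later on this page:

>>> from light_labyrinth.hyperparams.regularization import (
...     RegularizationL1, RegularizationL2, RegularizationNone)
>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> # pick one of the three variants and pass it to the model
>>> model = LightLabyrinthClassifier(3, 3, regularization=RegularizationL2(0.01))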

Expand source code
"""
The `light_labyrinth.hyperparams.regularization` module includes `Regularization` classes
that can be used for training Light Labyrinth models. The regularization term added to the
loss function prevents the model from overfitting.
"""

from ._regularization import (RegularizationL1, RegularizationL2,
                              RegularizationNone)

__all__ = ["RegularizationL1", "RegularizationL2", "RegularizationNone"]

Classes

class RegularizationL1 (lambda_factor=0.01)

L1 regularization – at each iteration of the learning process a sum of the absolute values (first norm) of the model's weights is added to the error function. This stops the weights from getting too big or too small and in effect prevents (to some extent) overfitting.

The optimized error function with L1 regularization can be written as \[\xi(W, X, y) = \lambda \|W\|_1 + \sum_{i=0}^{n-1} \sum_{j=0}^{k-1} err(y_{ij}, \hat{y}_{ij})\] where \(\lambda > 0\) is the regularization factor.
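
To make the penalty concrete, here is a minimal NumPy sketch (NumPy and the flattened weight vector are illustrative assumptions, not part of this library) of the L1 term and the subgradient it contributes at each iteration:

>>> import numpy as np
>>> W = np.array([0.5, -1.2, 0.0, 3.0])          # example model weights
>>> lambda_factor = 0.01
>>> penalty = lambda_factor * np.sum(np.abs(W))  # lambda * ||W||_1
>>> subgrad = lambda_factor * np.sign(W)         # added to the error gradient
>>> print(f"{penalty:.3f}")
0.047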

Parameters


lambda_factor : float, default=0.01
The regularization factor, which controls the importance of regularization. The higher it is, the less the model will overfit. Note, however, that too high a regularization factor may prevent the model from fitting at all.

Attributes


lambda_factor : float
The regularization factor.

See Also

RegularizationNone
No regularization
RegularizationL2
L2 regularization

Examples

>>> from light_labyrinth.hyperparams.regularization import RegularizationL1
>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> model = LightLabyrinthClassifier(3, 3,
...                             regularization=RegularizationL1(0.001))
Expand source code
class RegularizationL1(_RegularizationBase):
    """
    L1 regularization -- at each iteration of the learning process a sum
    of the absolute values (first norm) of the model's weights is added to the
    error function. This stops the weights from getting too big or too
    small and in effect prevents (to some extent) overfitting.

    The optimized error function with L1 regularization can be written as
    \\[\\xi(W, X, y) = \\lambda \\|W\\|_1 + \\sum_{i=0}^{n-1} \\sum_{j=0}^{k-1} err(y_{ij}, \\hat{y}_{ij})\\]
    where \\(\\lambda > 0\\) is the regularization factor.

    Parameters
    ----------
    lambda_factor : float, default=0.01
        The regularization factor which controls the importance of regularization.
        The higher it is, the less the model will overfit. Note, however, that too high
        a regularization factor may prevent the model from fitting at all.

    Attributes
    ----------
    lambda_factor : float
        The regularization factor.

    See Also
    --------
    light_labyrinth.hyperparams.regularization.RegularizationNone : No regularization
    light_labyrinth.hyperparams.regularization.RegularizationL2 : L2 regularization

    Examples
    --------
    >>> from light_labyrinth.hyperparams.regularization import RegularizationL1
    >>> from light_labyrinth.dim2 import LightLabyrinthClassifier
    >>> model = LightLabyrinthClassifier(3, 3,
    ...                             regularization=RegularizationL1(0.001))
    """

    def __init__(self, lambda_factor=0.01):
        super().__init__("L1", [lambda_factor])
        self._lambda_factor = lambda_factor

    @property
    def lambda_factor(self):
        return self._lambda_factor

Ancestors

  • light_labyrinth.hyperparams.regularization._regularization._RegularizationBase

Instance variables

var lambda_factor
Expand source code
@property
def lambda_factor(self):
    return self._lambda_factor
class RegularizationL2 (lambda_factor=0.01)

L2 regularization – at each iteration of the learning process a sum of squared values (second norm) of the model's weights is added to the error function. This stops the weights from getting too big or too small and in effect prevents (to some extent) overfitting.

The optimized error function with L2 regularization can be written as \[\xi(W, X, y) = \frac{\lambda}{2} \|W\|_2^2 + \sum_{i=0}^{n-1} \sum_{j=0}^{k-1} err(y_{ij}, \hat{y}_{ij})\] where \(\lambda > 0\) is the regularization factor.
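
As with L1, a minimal NumPy sketch (NumPy and the flattened weight vector are illustrative assumptions, not part of this library) shows the penalty term; the 1/2 factor makes its gradient simply \(\lambda W\):

>>> import numpy as np
>>> W = np.array([0.5, -1.2, 0.0, 3.0])           # example model weights
>>> lambda_factor = 0.01
>>> penalty = lambda_factor / 2 * np.sum(W ** 2)  # (lambda/2) * ||W||_2^2
>>> grad = lambda_factor * W                      # added to the error gradient
>>> print(f"{penalty:.5f}")
0.05345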

Parameters


lambda_factor : float, default=0.01
The regularization factor, which controls the importance of regularization. The higher it is, the less the model will overfit. Note, however, that too high a regularization factor may prevent the model from fitting at all.

Attributes


lambda_factor : float
The regularization factor.

See Also

RegularizationNone
No regularization
RegularizationL1
L1 regularization

Examples

>>> from light_labyrinth.hyperparams.regularization import RegularizationL2
>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> model = LightLabyrinthClassifier(3, 3,
...                             regularization=RegularizationL2(0.001))
Expand source code
class RegularizationL2(_RegularizationBase):
    """
    L2 regularization -- at each iteration of the learning process a sum
    of squared values (second norm) of the model's weights is added to the
    error function. This stops the weights from getting too big or too
    small and in effect prevents (to some extent) overfitting.

    The optimized error function with L2 regularization can be written as
    \\[\\xi(W, X, y) = \\frac{\\lambda}{2} \\|W\\|_2^2 + \\sum_{i=0}^{n-1} \\sum_{j=0}^{k-1} err(y_{ij}, \\hat{y}_{ij})\\]
    where \\(\\lambda > 0\\) is the regularization factor.

    Parameters
    ----------
    lambda_factor : float, default=0.01
        The regularization factor which controls the importance of regularization.
        The higher it is, the less the model will overfit. Note, however, that too high
        a regularization factor may prevent the model from fitting at all.

    Attributes
    ----------
    lambda_factor : float
        The regularization factor.

    See Also
    --------
    light_labyrinth.hyperparams.regularization.RegularizationNone : No regularization
    light_labyrinth.hyperparams.regularization.RegularizationL1 : L1 regularization

    Examples
    --------
    >>> from light_labyrinth.hyperparams.regularization import RegularizationL2
    >>> from light_labyrinth.dim2 import LightLabyrinthClassifier
    >>> model = LightLabyrinthClassifier(3, 3,
    ...                             regularization=RegularizationL2(0.001))
    """

    def __init__(self, lambda_factor=0.01):
        super().__init__("L2", [lambda_factor])
        self._lambda_factor = lambda_factor

    @property
    def lambda_factor(self):
        return self._lambda_factor

Ancestors

  • light_labyrinth.hyperparams.regularization._regularization._RegularizationBase

Instance variables

var lambda_factor
Expand source code
@property
def lambda_factor(self):
    return self._lambda_factor
class RegularizationNone

No regularization – a placeholder to be used when no regularization term is needed.

See Also

RegularizationL1
L1 regularization
RegularizationL2
L2 regularization

Examples

>>> from light_labyrinth.hyperparams.regularization import RegularizationNone
>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> model = LightLabyrinthClassifier(3, 3,
...                             regularization=RegularizationNone())
Expand source code
class RegularizationNone(_RegularizationBase):
    """
    No regularization -- a placeholder to be used when no regularization term is needed.

    See Also
    --------
    light_labyrinth.hyperparams.regularization.RegularizationL1 : L1 regularization
    light_labyrinth.hyperparams.regularization.RegularizationL2 : L2 regularization

    Examples
    --------
    >>> from light_labyrinth.hyperparams.regularization import RegularizationNone
    >>> from light_labyrinth.dim2 import LightLabyrinthClassifier
    >>> model = LightLabyrinthClassifier(3, 3,
    ...                             regularization=RegularizationNone())
    """

    def __init__(self):
        super().__init__("None", [])

Ancestors

  • light_labyrinth.hyperparams.regularization._regularization._RegularizationBase