Module light_labyrinth.dim2

The light_labyrinth.dim2 module includes 2-dimensional Light Labyrinth models.

[Figure (2dclassifier.svg): a 2-dimensional Light Labyrinth classifier. An input vector x = [x1 x2 … xk] enters a grid of nodes with weights w00 … w21, which split the incoming light toward the Class 0 and Class 1 outputs; edge labels show example light intensities between 0 and 1.]

Expand source code
"""
The `light_labyrinth.dim2` module includes 2-dimensional Light Labyrinth models.
.. include:: ../../html_utils/2dclassifier.svg
"""

from ._LightLabyrinthClassifier import LightLabyrinthClassifier
from ._LightLabyrinthRegressor import LightLabyrinthRegressor

from ._LightLabyrinthDynamicClassifier import LightLabyrinthDynamicClassifier
from ._LightLabyrinthDynamicRegressor import LightLabyrinthDynamicRegressor

from ._LightLabyrinthRandomClassifier import LightLabyrinthRandomClassifier
from ._LightLabyrinthRandomRegressor import LightLabyrinthRandomRegressor

__all__ = ["LightLabyrinthClassifier", "LightLabyrinthRegressor", \
           "LightLabyrinthDynamicClassifier", "LightLabyrinthDynamicRegressor", \
           "LightLabyrinthRandomClassifier", "LightLabyrinthRandomRegressor"]

Classes

class LightLabyrinthClassifier (height, width, bias=True, activation=ReflectiveIndexCalculator.sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth model.

It is meant for k-class classification. Note that k cannot be greater than min(width, height).

    X
    |__ __ __ __ __ y0
    |__|__|__|__ y1
    |__|__|__ y2
    |__|__ y3

An example of a height = 4 by width = 5 model with k = 4 outputs.
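For instance, the model above can separate at most min(5, 4) = 4 classes. A minimal sketch of constructing such a model (hypothetical sizes, untrained):

>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> clf = LightLabyrinthClassifier(height=4, width=5)  # supports up to k = 4 classes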

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
bias : bool, default=True
Whether to use bias in each node.
activation : ReflectiveIndexCalculator, default=ReflectiveIndexCalculator.sigmoid_dot_product

Activation function applied to each node's output.

-sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k}, before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm (see the sketch after this parameter list).

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.
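The optimizer and regularization parameters take configured objects rather than strings. A hedged sketch of constructing them explicitly, assuming the positional arguments implied by the defaults above (GradientDescent(0.01), RegularizationL1(0.01)):

>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> from light_labyrinth.hyperparams.optimization import GradientDescent
>>> from light_labyrinth.hyperparams.regularization import RegularizationL2
>>> clf = LightLabyrinthClassifier(height=3, width=3,
...                                optimizer=GradientDescent(0.01),
...                                regularization=RegularizationL2(0.01))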

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
trainable_params : int
Number of trainable parameters.
weights : ndarray of shape (height-1, width-1, n_features + bias)
Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X. If bias is set to True, n_features is increased by 1 (see the sketch after this list).
history : LightLabyrinthLearningHistory
Learning history including accuracy and error on training and (if provided) validation sets.
bias : bool
Boolean value indicating whether the model was trained with bias.
activation : ReflectiveIndexCalculator
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.
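Given the weights shape documented above, trainable_params presumably equals the product of its dimensions. A small sanity-check sketch with hypothetical sizes (20 features, bias enabled):

>>> height, width, n_features, bias = 3, 3, 20, True
>>> (height - 1) * (width - 1) * (n_features + bias)  # expected trainable_params
84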

Notes


LightLabyrinth trains iteratively. At each time step the partial derivatives of the loss function with respect to the model parameters are computed to update the weights.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.

See Also

LightLabyrinthDynamicClassifier
2-dimensional Light Labyrinth classifier trained with dynamic algorithm.
LightLabyrinth3DClassifier
3-dimensional Light Labyrinth classifier.
LightLabyrinthRegressor
2-dimensional Light Labyrinth regressor.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthClassifier
>>> from light_labyrinth.hyperparams.regularization import RegularizationL1
>>> from light_labyrinth.hyperparams.optimization import Adam
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> X, y = make_classification(n_samples=100, random_state=1)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
>>> clf = LightLabyrinthClassifier(width=3, height=3, 
...                                optimizer=Adam(0.1),
...                                regularization=RegularizationL1(0.1))
>>> hist = clf.fit(X_train, y_train, epochs=15)
>>> y_pred = clf.predict(X_test)
>>> accuracy_score(y_test, y_pred)
0.84
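Continuing the example, probability estimates and the recorded learning history can be inspected as well; a short sketch using the attributes documented above (outputs omitted):

>>> proba = clf.predict_proba(X_test)  # shape (n_samples, n_classes)
>>> errs = hist.errs_train             # error recorded at every checked epoch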
Expand source code
class LightLabyrinthClassifier(LightLabyrinth):
    """A 2-dimensional Light Labyrinth model.

        It is meant for k-class classification. 
        Note that `k` cannot be greater than `min(width, height)`.

        ```
            X
            |__ __ __ __ __ y0
            |__|__|__|__ y1
            |__|__|__ y2
            |__|__ y3
        ```

        An example of a `height = 4` by `width = 5` model with `k = 4` outputs.

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        bias : bool, default=True
            Whether to use bias in each node.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator.sigmoid_dot_product`
            Activation function applied to each node's output.

            -`sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            \tOutput values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\), before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------

        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        trainable_params : int
            Number of trainable parameters.

        weights : ndarray of shape (height-1, width-1, n_features + bias)
            Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X.
            If bias is set to True, n_features is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including accuracy and error on training and (if provided) validation sets.

        bias : bool
            Boolean value indicating whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinth trains iteratively. At each time step
        the partial derivatives of the loss function with respect to the model
        parameters are computed to update the weights.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.

        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthDynamicClassifier : 2-dimensional Light Labyrinth classifier trained with dynamic algorithm.
        light_labyrinth.dim3.LightLabyrinth3DClassifier : 3-dimensional Light Labyrinth classifier.
        light_labyrinth.dim2.LightLabyrinthRegressor : 2-dimensional Light Labyrinth regressor.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthClassifier
        >>> from light_labyrinth.hyperparams.regularization import RegularizationL1
        >>> from light_labyrinth.hyperparams.optimization import Adam
        >>> from sklearn.datasets import make_classification
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import accuracy_score
        >>> X, y = make_classification(n_samples=100, random_state=1)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
        >>> clf = LightLabyrinthClassifier(width=3, height=3, 
        ...                                optimizer=Adam(0.1),
        ...                                regularization=RegularizationL1(0.1))
        >>> hist = clf.fit(X_train, y_train, epochs=15)
        >>> y_pred = clf.predict(X_test)
        >>> accuracy_score(y_test, y_pred)
        0.84
        """

    def __init__(self, height, width, bias=True,
                 activation=ReflectiveIndexCalculator.sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        super().__init__(height, width, bias,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
            The target values (class labels in classification, real numbers in
            regression).

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers given either as portion of 
            samples (float) or the exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of epochs to not meet ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            accs_train, accs_val, errs_train, errs_val.
        """
        self._encoder = _SmartOneHotEncoder()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None
            
        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the Light Labyrinth classifier.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples,) or (n_samples, n_classes)
            The predicted classes.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def predict_proba(self, X):
        """Probability estimates.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y_prob : ndarray of shape (n_samples, n_classes)
            The predicted probability of the sample for each class in the
            model.
        """
        return super().predict(X)

    def __del__(self):
        super().__del__()

Ancestors

  • light_labyrinth._bare_model.LightLabyrinth
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers given either as portion of samples (float) or the exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of epochs to not meet stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: accs_train, accs_val, errs_train, errs_val.
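A hedged sketch combining the convergence-related parameters above, reusing the classifier and arrays from the class-level example:

>>> hist = clf.fit(X_train, y_train, epochs=100,
...                batch_size=32,      # exact mini-batch size; 0.32 would mean 32% of samples
...                stop_change=1e-3,   # minimum improvement that counts as progress
...                n_iter_check=5,     # stop after 5 consecutive checks without improvement
...                epoch_check=1,      # check for convergence every epoch
...                X_val=X_test, y_val=y_test)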
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
        The target values (class labels in classification, real numbers in
        regression).

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers given either as portion of 
        samples (float) or the exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of epochs to not meet ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        accs_train, accs_val, errs_train, errs_val.
    """
    self._encoder = _SmartOneHotEncoder()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None
        
    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
def predict(self, X)

Predict using the Light Labyrinth classifier.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples,) or (n_samples, n_classes)
The predicted classes.
Expand source code
def predict(self, X):
    """Predict using the Light Labyrinth classifier.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples,) or (n_samples, n_classes)
        The predicted classes.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
def predict_proba(self, X)

Probability estimates.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y_prob : ndarray of shape (n_samples, n_classes)
The predicted probability of the sample for each class in the model.
Expand source code
def predict_proba(self, X):
    """Probability estimates.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y_prob : ndarray of shape (n_samples, n_classes)
        The predicted probability of the sample for each class in the
        model.
    """
    return super().predict(X)
class LightLabyrinthDynamicClassifier (height, width, bias=True, activation=ReflectiveIndexCalculator.sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth model trained with dynamic "one node at a time" algorithm.

It is meant for k-class classification. Note that k has to be greater than or equal to min(width, height).

    X
    |
    d1__c1__b1__a1__ ___ y0
    c2__b2__a2__|___ y1
    b3__a3__|___ y2
    |___|___ y3

An example of a height = 4 by width = 5 model with k = 4 outputs. Nodes are trained in order: a1, a2, a3, b1, b2, b3, c1, c2, d1.

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
bias : bool, default=True
Whether to use bias in each node.
activation : ReflectiveIndexCalculator, default=ReflectiveIndexCalculator.sigmoid_dot_product

Activation function applied to each node's output.

-sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k}, before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm.

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
trainable_params : int
Number of trainable parameters.
weights : ndarray of shape (height-1, width-1, n_features + bias)
Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X. If bias is set to True, n_features is increased by 1.
history : LightLabyrinthLearningHistory
Learning history including accuracy and error on training and (if provided) validation sets.
bias : bool
Boolean value indicating whether the model was trained with bias.
activation : ReflectiveIndexCalculator
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.

Notes


LightLabyrinthDynamic, unlike the standard LightLabyrinth, trains each node separately. Learning starts with nodes in the last layer. Once all the nodes in a given layer are trained, they are "frozen" and learning of the next layer begins. Training each node is an iterative process since at each time step the partial derivatives of the loss function with respect to the node parameters are computed to update the weights.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.

See Also

LightLabyrinthClassifier
2-dimensional Light Labyrinth classifier trained with the standard algorithm.
LightLabyrinth3DClassifier
3-dimensional Light Labyrinth classifier.
LightLabyrinthDynamicRegressor
2-dimensional Light Labyrinth regressor trained with dynamic algorithm.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthDynamicClassifier
>>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
>>> from light_labyrinth.hyperparams.regularization import RegularizationL1
>>> from light_labyrinth.hyperparams.optimization import RMSprop
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> X, y = make_classification(n_samples=100, random_state=1)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
>>> clf = LightLabyrinthDynamicClassifier(width=3, height=3,
...                                       optimizer=RMSprop(0.05),
...                                       regularization=RegularizationL1(0.15),
...                                       weights_init=LightLabyrinthWeightsInit.Zeros)
>>> hist = clf.fit(X_train, y_train, epochs=10, batch_size=0.1)
>>> y_pred = clf.predict(X_test)
>>> accuracy_score(y_test, y_pred)
1.0
Expand source code
class LightLabyrinthDynamicClassifier(LightLabyrinthDynamic):
    """A 2-dimensional Light Labyrinth model trained with dynamic "one node at a time" algorithm.

        It is meant for k-class classification. 
        Note that `k` has to be greater than or equal to `min(width, height)`.

        ```
            X
            |
            d1__c1__b1__a1__ ___ y0
            c2__b2__a2__|___ y1
            b3__a3__|___ y2
            |___|___ y3
        ```

        An example of a `height = 4` by `width = 5` model with `k = 4` outputs.
        Nodes are trained in order: a1, a2, a3, b1, b2, b3, c1, c2, d1.

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        bias : bool, default=True
            Whether to use bias in each node.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator.sigmoid_dot_product`
            Activation function applied to each node's output.

            -`sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            \tOutput values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\), before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------
        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        trainable_params : int
            Number of trainable parameters.

        weights : ndarray of shape (height-1, width-1, n_features + bias)
            Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X.
            If bias is set to True, n_features is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including accuracy and error on training and (if provided) validation sets.

        bias : bool
            Boolean value indicating whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinthDynamic, unlike the standard LightLabyrinth, trains each node separately.
        Learning starts with nodes in the last layer. Once all the nodes in a given layer
        are trained, they are "frozen" and learning of the next layer begins.
        Training each node is an iterative process since at each time step
        the partial derivatives of the loss function with respect to the node
        parameters are computed to update the weights.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.


        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthClassifier : 2-dimensional Light Labyrinth classifier trained with the standard algorithm.
        light_labyrinth.dim3.LightLabyrinth3DClassifier : 3-dimensional Light Labyrinth classifier.
        light_labyrinth.dim2.LightLabyrinthDynamicRegressor : 2-dimensional Light Labyrinth regressor trained with dynamic algorithm.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthDynamicClassifier
        >>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
        >>> from light_labyrinth.hyperparams.regularization import RegularizationL1
        >>> from light_labyrinth.hyperparams.optimization import RMSprop
        >>> from sklearn.datasets import make_classification
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import accuracy_score
        >>> X, y = make_classification(n_samples=100, random_state=1)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
        >>> clf = LightLabyrinthDynamicClassifier(width=3, height=3,
        ...                                       optimizer=RMSprop(0.05),
        ...                                       regularization=RegularizationL1(0.15),
        ...                                       weights_init=LightLabyrinthWeightsInit.Zeros)
        >>> hist = clf.fit(X_train, y_train, epochs=10, batch_size=0.1)
        >>> y_pred = clf.predict(X_test)
        >>> accuracy_score(y_test, y_pred)
        1.0
        """

    def __init__(self, height, width, bias=True,
                 activation=ReflectiveIndexCalculator.sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        super().__init__(height, width, bias,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
            The target values (class labels in classification, real numbers in
            regression).

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers given either as portion of 
            samples (float) or the exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of epochs to not meet ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            accs_train, accs_val, errs_train, errs_val.
        """
        self._encoder = _SmartOneHotEncoder()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None
            
        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the dynamic Light Labyrinth classifier.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples,) or (n_samples, n_classes)
            The predicted classes.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def predict_proba(self, X):
        """Probability estimates.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y_prob : ndarray of shape (n_samples, n_classes)
            The predicted probability of the sample for each class in the
            model.
        """
        return super().predict(X)

    def __del__(self):
        super().__del__()

Ancestors

  • light_labyrinth._bare_model.LightLabyrinthDynamic
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers given either as portion of samples (float) or the exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of epochs to not meet stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: accs_train, accs_val, errs_train, errs_val.
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
        The target values (class labels in classification, real numbers in
        regression).

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers given either as portion of 
        samples (float) or the exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of epochs to not meet ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        accs_train, accs_val, errs_train, errs_val.
    """
    self._encoder = _SmartOneHotEncoder()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None
        
    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
def predict(self, X)

Predict using the dynamic Light Labyrinth classifier.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples,) or (n_samples, n_classes)
The predicted classes.
Expand source code
def predict(self, X):
    """Predict using the dynamic Light Labyrinth classifier.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples,) or (n_samples, n_classes)
        The predicted classes.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
def predict_proba(self, X)

Probability estimates.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y_prob : ndarray of shape (n_samples, n_classes)
The predicted probability of the sample for each class in the model.
Expand source code
def predict_proba(self, X):
    """Probability estimates.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y_prob : ndarray of shape (n_samples, n_classes)
        The predicted probability of the sample for each class in the
        model.
    """
    return super().predict(X)
class LightLabyrinthDynamicRegressor (height, width, bias=True, activation=ReflectiveIndexCalculator.sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth model trained with dynamic "one node at a time" algorithm.

It is meant for regression.

    X
    |
    d1__c1__
    c2__b1__|
    b2__a1__|___ y
    |___|___*

An example of a height = 4 by width = 3 model. The lower output is omitted. Nodes are trained in order: a1, b1, b2, c1, c2, d1.

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
bias : bool, default=True
Whether to use bias in each node.
activation : ReflectiveIndexCalculator, default=ReflectiveIndexCalculator.sigmoid_dot_product

Activation function applied to each node's output.

-sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k}, before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm.

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
trainable_params : int
Number of trainable parameters.
weights : ndarray of shape (height-1, width-1, n_features + bias)
Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X. If bias is set to True, n_features is increased by 1.
history : LightLabyrinthLearningHistory
Learning history including error on training and (if provided) validation sets.
bias : bool
Boolean value indicating whether the model was trained with bias.
activation : ReflectiveIndexCalculator
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.

Notes


LightLabyrinthDynamic, unlike the standard LightLabyrinth, trains each node separately. Learning starts with nodes in the last layer. Once all the nodes in a given layer are trained, they are "frozen" and learning of the next layer begins. Training each node is an iterative process since at each time step the partial derivatives of the loss function with respect to the node parameters are computed to update the weights.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.

See Also

LightLabyrinthRegressor
2-dimensional Light Labyrinth regressor trained with the standard algorithm.
LightLabyrinthRandomRegressor
2-dimensional randomized Light Labyrinth regressor.
LightLabyrinthDynamicClassifier
2-dimensional Light Labyrinth classifier trained with dynamic algorithm.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthDynamicRegressor
>>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
>>> from light_labyrinth.hyperparams.regularization import RegularizationL1
>>> from light_labyrinth.hyperparams.optimization import RMSprop
>>> from sklearn.datasets import make_regression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_samples=100, random_state=1)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
>>> model = LightLabyrinthDynamicRegressor(width=3, height=3, 
...                                optimizer=RMSprop(0.05),
...                                regularization=RegularizationL1(0.15),
...                                weights_init=LightLabyrinthWeightsInit.Zeros)
>>> hist = model.fit(X_train, y_train, epochs=10, batch_size=0.1)
>>> y_pred = model.predict(X_test)
>>> r2_score(y_test, y_pred)
0.75
Expand source code
class LightLabyrinthDynamicRegressor(LightLabyrinthDynamic):
    """A 2-dimensional Light Labyrinth model trained with dynamic "one node at a time" algorithm.

        It is meant for regression. 

        ```
            X
            |
            d1__c1__
            c2__b1__|
            b2__a1__|___ y
            |___|___*
        ```

        An example of `height = 4` by `width = 3` model. The lower output is omitted.
        Nodes are trained in order: a1, b1, b2, c1, c2, d1.

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        bias : bool, default=True
            Whether to use bias in each node.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator.sigmoid_dot_product`
            Activation function applied to each node's output.

            -`sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            Output values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\), before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------
        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        trainable_params : int
            Number of trainable parameters.

        weights : ndarray of shape (height-1, width-1, n_features + bias)
            Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X.
            If bias is set to True, n_features is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including error on training and (if provided) validation sets.

        bias : bool
            Whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinthDynamic, unlike the standard LightLabyrinth, trains each
        node separately. Learning starts with the nodes in the last layer.
        Once all the nodes in a given layer are trained, they are "frozen" and
        learning of the next layer begins. Training each node is an iterative
        process: at each time step, the partial derivatives of the loss
        function with respect to the node's parameters are computed to update
        the weights.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.


        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthRegressor : 2-dimensional Light Labyrinth regressor trained with the standard algorithm.
        light_labyrinth.dim2.LightLabyrinthRandomRegressor : 2-dimensional randomized Light Labyrinth regressor.
        light_labyrinth.dim2.LightLabyrinthDynamicClassifier : 2-dimensional Light Labyrinth classifier trained with dynamic algorithm.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthDynamicRegressor
        >>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
        >>> from light_labyrinth.hyperparams.regularization import RegularizationL1
        >>> from light_labyrinth.hyperparams.optimization import RMSprop
        >>> from sklearn.datasets import make_regression
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import r2_score
        >>> X, y = make_regression(n_samples=100, random_state=1)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
        >>> model = LightLabyrinthDynamicRegressor(width=3, height=3, 
        ...                                optimizer=RMSprop(0.05),
        ...                                regularization=RegularizationL1(0.15),
        ...                                weights_init=LightLabyrinthWeightsInit.Zeros)
        >>> hist = model.fit(X_train, y_train, epochs=10, batch_size=0.1)
        >>> y_pred = model.predict(X_test)
        >>> r2_score(y_test, y_pred)
        0.75
        """

    def __init__(self, height, width, bias=True,
                 activation=ReflectiveIndexCalculator.sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        super().__init__(height, width, bias,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples, 1)
            The target values.

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers, given either as a portion
            of the samples (float) or as an exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of consecutive epochs allowed without at least ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples, 1), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            errs_train, errs_val.
        """
        self._encoder = _LightLabyrinthOutputTransformer()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None
            
        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the dynamic Light Labyrinth regressor.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples, 1)
            The predicted values.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def __del__(self):
        super().__del__()

Ancestors

  • light_labyrinth._bare_model.LightLabyrinthDynamic
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples, 1)
The target values.
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers, given either as a portion of the samples (float) or as an exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of consecutive epochs allowed without at least stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples, 1), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: errs_train, errs_val.
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples, 1)
        The target values.

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers, given either as a portion
        of the samples (float) or as an exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of consecutive epochs allowed without at least ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples, 1), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        errs_train, errs_val.
    """
    self._encoder = _LightLabyrinthOutputTransformer()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None
        
    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
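
The `batch_size` rule quoted above (`max(1, int(batch_size * n_samples))` for floats) is easy to sanity-check in isolation. The helper below is a hypothetical restatement of that one line, not library code:

```
# Hypothetical helper mirroring the documented rule for `batch_size`.
def resolve_batch_size(batch_size, n_samples):
    if isinstance(batch_size, float):
        return max(1, int(batch_size * n_samples))
    return batch_size  # ints are used as given

print(resolve_batch_size(0.1, 750))  # 75 samples per mini-batch
print(resolve_batch_size(1.0, 750))  # 750 -> full-batch training
print(resolve_batch_size(32, 750))   # 32
```
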
def predict(self, X)

Predict using the dynamic Light Labyrinth regressor.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples, 1)
The predicted values.
Expand source code
def predict(self, X):
    """Predict using the dynamic Light Labyrinth regressor.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples, 1)
        The predicted values.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
class LightLabyrinthRandomClassifier (height, width, features, bias=True, indices=None, activation=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth with a randomized subset of features used at each node.

It is meant for k-class classification. Note that k cannot be greater than min(width, height).

    X
    !__.__,__ __,__ y0
    |__|__!__!__|__ y1

An example of height = 2 by width = 5 model with k = 2 outputs.

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
features : int or float
Portion/number of features to be used in each node. If a float is given, it should be within the range (0.0, 1.0]. If an int is given, it should not be greater than n_features.
bias : bool, default=True
Whether to use bias in each node.
indices : ndarray, optional, default=None
An array of shape (height, width, n_indices + bias) including indices to be used at each node. If None, indices will be selected randomly.
activation : ReflectiveIndexCalculatorRandom, default=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product

Activation function applied to each node's output.

-random_sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k}, before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm.

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
features : int
Number of features used in each node (excluding bias).
trainable_params : int
Number of trainable parameters.
indices : ndarray of shape (height, width, n_indices + bias)
Indices used in each node (including bias if used).
weights : ndarray of shape (height-1, width-1, n_indices + bias)
Array of weights optimized during training. If bias is set to False, n_indices is equal to the number of features in the training set X. If bias is set to True, n_indices is increased by 1.
history : LightLabyrinthLearningHistory
Learning history including accuracy and error on training and (if provided) validation sets.
bias : bool
Whether the model was trained with bias.
activation : ReflectiveIndexCalculatorRandom
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.

Notes


LightLabyrinthRandom, unlike the standard LightLabyrinth, includes only a subset of the features in the splitting criterion at each node. This subset is selected randomly (unless the parameter indices is specified). It can be used as part of an ensemble - the randomness should lower the model's variance, just like in the Random Forest model.

LightLabyrinthRandomClassifier is used in the RandomMaze2DClassifier.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.
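
To make the per-node feature sub-sampling concrete, below is a hedged numpy sketch of how index sets of the documented shape could be drawn - one random subset of features per node. This is illustrative only; the library's actual sampling and bias handling may differ.

```
import numpy as np

# Illustrative only: draw `features` distinct feature indices for each
# of the height x width nodes, giving an array of shape
# (height, width, features) - cf. the `indices` parameter above.
rng = np.random.default_rng(0)
height, width, n_features, features = 2, 5, 20, 5

indices = np.stack([
    np.stack([rng.choice(n_features, size=features, replace=False)
              for _ in range(width)])
    for _ in range(height)
])
print(indices.shape)  # (2, 5, 5) -> one random index set per node
```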

See Also

LightLabyrinthClassifier
2-dimensional Light Labyrinth classifier.
LightLabyrinth3DRandomClassifier
3-dimensional random Light Labyrinth classifier.
LightLabyrinthRandomRegressor
2-dimensional random Light Labyrinth regressor.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthRandomClassifier
>>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
>>> from light_labyrinth.hyperparams.regularization import RegularizationNone
>>> from light_labyrinth.hyperparams.optimization import RMSprop
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> X, y = make_classification(n_samples=1000, n_classes=4, n_informative=3)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
>>> clf = LightLabyrinthRandomClassifier(5, 5, features=5,
...                                optimizer=RMSprop(0.05),
...                                regularization=RegularizationNone(),
...                                weights_init=LightLabyrinthWeightsInit.Zeros)
>>> hist = clf.fit(X_train, y_train, epochs=10, batch_size=0.1)
>>> y_pred = clf.predict(X_test)
>>> accuracy_score(y_test, y_pred)
0.73
Expand source code
class LightLabyrinthRandomClassifier(RandomLightLabyrinth):
    """A 2-dimensional Light Labyrinth with a randomized subset of features used at each node.

        It is meant for k-class classification. 
        Note that `k` cannot be greater than `min(width, height)`.

        ```
            X
            !__.__,__ __,__ y0
            |__|__!__!__|__ y1
        ```

        An example of `height = 2` by `width = 5` model with `k = 2` outputs.

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        features : int or float
            Portion/number of features to be used in each node.
            If a float is given, it should be within the range (0.0, 1.0].
            If an int is given, it should not be greater than n_features.

        bias : bool, default=True
            Whether to use bias in each node.

        indices : ndarray, optional, default=None
            An array of shape (height, width, n_indices + bias) including indices
            to be used at each node. If `None`, indices will be selected randomly.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product`
            Activation function applied to each node's output.

            -`random_sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            Output values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\), before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------
        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        features : int
            Number of features used in each node (excluding bias).

        trainable_params : int
            Number of trainable parameters.

        indices : ndarray of shape (height, width, n_indices + bias)
            Indices used in each node (including bias if used).

        weights : ndarray of shape (height-1, width-1, n_indices + bias)
            Array of weights optimized during training. If bias is set to False, n_indices is equal to the number of features in the training set X.
            If bias is set to True, n_indices is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including accuracy and error on training and (if provided) validation sets.

        bias : bool
            Whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinthRandom, unlike the standard LightLabyrinth, includes
        only a subset of the features in the splitting criterion at each node.
        This subset is selected randomly (unless the parameter `indices` is
        specified). It can be used as part of an ensemble - the randomness
        should lower the model's variance, just like in the Random Forest
        model.

        LightLabyrinthRandomClassifier is used in the RandomMaze2DClassifier.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.


        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthClassifier : 2-dimensional Light Labyrinth classifier.
        light_labyrinth.dim3.LightLabyrinth3DRandomClassifier : 3-dimensional random Light Labyrinth classifier.
        light_labyrinth.dim2.LightLabyrinthRandomRegressor : 2-dimensional random Light Labyrinth regressor.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthRandomClassifier
        >>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
        >>> from light_labyrinth.hyperparams.regularization import RegularizationNone
        >>> from light_labyrinth.hyperparams.optimization import RMSprop
        >>> from sklearn.datasets import make_classification
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import accuracy_score
        >>> X, y = make_classification(n_samples=1000, n_classes=4, n_informative=3)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
        >>> clf = LightLabyrinthRandomClassifier(5, 5, features=5,
        ...                                optimizer=RMSprop(0.05),
        ...                                regularization=RegularizationNone(),
        ...                                weights_init=LightLabyrinthWeightsInit.Zeros)
        >>> hist = clf.fit(X_train, y_train, epochs=10, batch_size=0.1)
        >>> y_pred = clf.predict(X_test)
        >>> accuracy_score(y_test, y_pred)
        0.73
        """

    def __init__(self, height, width, features, bias=True, indices=None,
                 activation=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        super().__init__(height, width, features, bias, indices,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
            The target values (class labels in classification, real numbers in
            regression).

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers, given either as a portion
            of the samples (float) or as an exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of consecutive epochs allowed without at least ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            accs_train, accs_val, errs_train, errs_val.
        """
        # overwrite the number of features to be used in each node (if it was given by float)
        if isinstance(self._features, float):
            self._features = max(1, int(X.shape[1] * self._features))

        self._encoder = _SmartOneHotEncoder()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None

        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the random Light Labyrinth classifier.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples,) or (n_samples, n_classes)
            The predicted classes.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def predict_proba(self, X):
        """Probability estimates.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y_prob : ndarray of shape (n_samples, n_classes)
            The predicted probability of the sample for each class in the
            model.
        """
        return super().predict(X)

    def __del__(self):
        super().__del__()

Ancestors

  • light_labyrinth._bare_model.RandomLightLabyrinth
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers, given either as a portion of the samples (float) or as an exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of consecutive epochs allowed without at least stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: accs_train, accs_val, errs_train, errs_val.
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
        The target values (class labels in classification, real numbers in
        regression).

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers, given either as a portion
        of the samples (float) or as an exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of consecutive epochs allowed without at least ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples,) or (n_val_samples, n_outputs), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        accs_train, accs_val, errs_train, errs_val.
    """
    # overwrite the number of features to be used in each node (if it was given by float)
    if isinstance(self._features, float):
        self._features = max(1, int(X.shape[1] * self._features))

    self._encoder = _SmartOneHotEncoder()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None

    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
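
The float-to-int conversion of `features` at the top of `fit` above can be checked directly; for instance, with 20 input features:

```
# The same expression as in `fit` above, evaluated for a few values.
print(max(1, int(20 * 0.4)))   # features=0.4  -> 8 features per node
print(max(1, int(20 * 1.0)))   # features=1.0  -> all 20 features
print(max(1, int(20 * 0.01)))  # tiny fractions are clamped to 1
```
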
def predict(self, X)

Predict using the random Light Labyrinth classifier.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples,) or (n_samples, n_classes)
The predicted classes.
Expand source code
def predict(self, X):
    """Predict using the random Light Labyrinth classifier.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples,) or (n_samples, n_classes)
        The predicted classes.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
def predict_proba(self, X)

Probability estimates.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y_prob : ndarray of shape (n_samples, n_classes)
The predicted probability of the sample for each class in the model.
Expand source code
def predict_proba(self, X):
    """Probability estimates.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y_prob : ndarray of shape (n_samples, n_classes)
        The predicted probability of the sample for each class in the
        model.
    """
    return super().predict(X)
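
A short usage sketch contrasting the two prediction methods, assuming `clf` and `X_test` come from the 4-class Examples section above (the exact shapes depend on the split):

```
proba = clf.predict_proba(X_test)   # shape (n_samples, n_classes)
labels = clf.predict(X_test)        # one class label per sample
print(proba.shape, labels.shape)    # e.g. (250, 4) (250,)
```
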
class LightLabyrinthRandomRegressor (height, width, features, bias=True, indices=None, activation=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth with a randomized subset of features used at each node.

It is meant for regression.

    X
    !__ __,__.
    |__!__|__!
    |__!__|__!
    !__|__!__|__ y
    |__!__|__ *

An example of height = 5 by width = 4 model. The lower output is omitted.

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
features : int or float
Portion/number of features to be used in each node. If a float is given, it should be within the range (0.0, 1.0]. If an int is given, it should not be greater than n_features.
bias : bool, default=True
Whether to use bias in each node.
indices : ndarray, optional, default=None
An array of shape (height, width, n_indices + bias) including indices to be used at each node. If None, indices will be selected randomly.
activation : ReflectiveIndexCalculatorRandom, default=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product

Activation function applied to each node's output.

-random_sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k}, before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm.

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
features : int
Number of features used in each node (excluding bias).
trainable_params : int
Number of trainable parameters.
indices : ndarray of shape (height, width, n_indices + bias)
Indices used in each node (including bias if used).
weights : ndarray of shape (height-1, width-1, n_indices + bias)
Array of weights optimized during training. If bias is set to False, n_indices is equal to the number of features in the training set X. If bias is set to True, n_indices is increased by 1.
history : LightLabyrinthLearningHistory
Learning history including error on training and (if provided) validation sets.
bias : bool
Whether the model was trained with bias.
activation : ReflectiveIndexCalculatorRandom
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.

Notes


LightLabyrinthRandom, unlike the standard LightLabyrinth, includes only a subset of the features in the splitting criterion at each node. This subset is selected randomly (unless the parameter indices is specified). It can be used as part of an ensemble - the randomness should lower the model's variance, just like in the Random Forest model.

LightLabyrinthRandomRegressor is used in the RandomMazeRegressor.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.
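
As a sketch of the ensemble use mentioned above (the packaged RandomMazeRegressor builds on the same idea), one can train several randomized labyrinths with different random states and average their predictions. The snippet below is illustrative, not the ensemble's actual implementation:

```
import numpy as np
from light_labyrinth.dim2 import LightLabyrinthRandomRegressor

def ensemble_predict(models, X):
    # Average the members' predictions, as in bagging.
    return np.mean([m.predict(X) for m in models], axis=0)

models = [
    LightLabyrinthRandomRegressor(height=3, width=3, features=0.4,
                                  random_state=seed)
    for seed in (1, 2, 3)
]
# After fitting each member (see the Examples below for fit arguments):
# for m in models: m.fit(X_train, y_train, epochs=20, batch_size=30)
# y_pred = ensemble_predict(models, X_test)
```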

See Also

LightLabyrinthRegressor
2-dimensional Light Labyrinth regressor.
LightLabyrinthDynamicRegressor
2-dimensional Light Labyrinth regressor trained with dynamic algorithm.
LightLabyrinthRandomClassifier
2-dimensional random Light Labyrinth classifier.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthRandomRegressor
>>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
>>> from light_labyrinth.hyperparams.regularization import RegularizationL1
>>> from light_labyrinth.hyperparams.optimization import RMSprop
>>> from sklearn.datasets import make_regression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_samples=1000)
>>> y = y.reshape(-1,1)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
>>> clf = LightLabyrinthRandomRegressor(width=3, height=3, features=0.4,
...                                optimizer=RMSprop(0.05),
...                                regularization=RegularizationL1(0.15),
...                                weights_init=LightLabyrinthWeightsInit.Zeros)
>>> hist = clf.fit(X_train, y_train, epochs=20, batch_size=30)
>>> y_pred = clf.predict(X_test)
>>> r2_score(y_test, y_pred)
0.49
Expand source code
class LightLabyrinthRandomRegressor(RandomLightLabyrinth):
    """A 2-dimensional Light Labyrinth with a randomized subset of features used at each node.

        It is meant for regression.

        ```
            X
            !__ __,__.
            |__!__|__!
            |__!__|__!
            !__|__!__|__ y
            |__!__|__ *
        ```

        An example of `height = 5` by `width = 4` model. The lower output is omitted.

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        features : int or float
            Portion/number of features to be used in each node.
            If a float is given, it should be within the range (0.0, 1.0].
            If an int is given, it should not be greater than n_features.

        bias : bool, default=True
            Whether to use bias in each node.

        indices : ndarray, optional, default=None
            An array of shape (height, width, n_indices + bias) including indices
            to be used at each node. If `None`, indices will be selected randomly.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product`
            Activation function applied to each node's output.

            -`random_sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            Output values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\), before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------
        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        features : int
            Number of features used in each node (excluding bias).

        trainable_params : int
            Number of trainable parameters.

        indices : ndarray of shape (height, width, n_indices + bias)
            Indices used in each node (including bias if used).

        weights : ndarray of shape (height-1, width-1, n_indices + bias)
            Array of weights optimized during training. If bias is set to False, n_indices is equal
            to the number of features used in each node (see the `features` attribute).
            If bias is set to True, n_indices is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including error on training and (if provided) validation sets.

        bias : bool
            Boolean value whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculatorRandom`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinthRandom, unlike the standard LightLabyrinth, includes only a
        subset of features in the splitting criterion at each node. This subset
        is selected randomly (unless the `indices` parameter is specified).
        It can be used as part of an ensemble - randomness should lower the model's
        variance, just like in the Random Forest model.

        LightLabyrinthRandomRegressor is used in `light_labyrinth.ensemble.RandomMazeRegressor`.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.


        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthRegressor : 2-dimensional Light Labyrinth regressor.
        light_labyrinth.dim2.LightLabyrinthDynamicRegressor : 2-dimensional Light Labyrinth regressor trained with dynamic algorithm.
        light_labyrinth.dim2.LightLabyrinthRandomClassifier : 2-dimensional random Light Labyrinth classifier.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthRandomRegressor
        >>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
        >>> from light_labyrinth.hyperparams.regularization import RegularizationL1
        >>> from light_labyrinth.hyperparams.optimization import RMSprop
        >>> from sklearn.datasets import make_regression
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import r2_score
        >>> X, y = make_regression(n_samples=1000)
        >>> y = y.reshape(-1,1)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
        >>> clf = LightLabyrinthRandomRegressor(width=3, height=3, features=0.4,
        ...                                optimizer=RMSprop(0.05),
        ...                                regularization=RegularizationL1(0.15),
        ...                                weights_init=LightLabyrinthWeightsInit.Zeros)
        >>> hist = clf.fit(X_train, y_train, epochs=20, batch_size=30)
        >>> y_pred = clf.predict(X_test)
        >>> r2_score(y_test, y_pred)
        0.49
        """

    def __init__(self, height, width, features, bias=True, indices=None,
                 activation=ReflectiveIndexCalculatorRandom.random_sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        if isinstance(features, float):
            self._float_features = features
        else:
            self._float_features = None
        super().__init__(height, width, features, bias, indices,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples, 1)
            The target values.

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers, given either as a portion
            of samples (float) or an exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of epochs to not meet ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples, 1), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            errs_train, errs_val.
        """
        # overwrite the number of features to be used in each node (if it was given by float)
        if self._float_features:
            self._features = max(1, int(X.shape[1] * self._float_features))

        self._encoder = _LightLabyrinthOutputTransformer()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None

        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the random Light Labyrinth regressor.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples, 1)
            The predicted values.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def __del__(self):
        super().__del__()
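
As the `fit` source above shows, a float `features` argument is resolved against the width of the training data at fit time. A minimal illustrative sketch of that resolution (the helper name is ours, not part of the library):

```
import numpy as np

def resolve_features(features, n_features):
    # Float in (0.0, 1.0] -> portion of n_features (at least 1 feature);
    # int -> used as-is (must not exceed n_features).
    if isinstance(features, float):
        return max(1, int(n_features * features))
    return features

X = np.random.rand(100, 10)               # 100 samples, 10 features
print(resolve_features(0.4, X.shape[1]))  # -> 4 features per node
print(resolve_features(3, X.shape[1]))    # -> 3 features per node
```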

Ancestors

  • light_labyrinth._bare_model.RandomLightLabyrinth
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples, 1)
The target values.
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers, given either as a portion of samples (float) or an exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of epochs to not meet stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples, 1), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: errs_train, errs_val.
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples, 1)
        The target values.

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers, given either as a portion
        of samples (float) or an exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of epochs to not meet ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples, 1), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        errs_train, errs_val.
    """
    # overwrite the number of features to be used in each node (if it was given by float)
    if self._float_features:
        self._features = max(1, int(X.shape[1] * self._float_features))

    self._encoder = _LightLabyrinthOutputTransformer()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None

    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
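
Note that `batch_size` follows the same float-or-int convention as `features`, using the formula quoted in the parameter description. A hedged sketch (illustrative helper, not library code):

```
def resolve_batch_size(batch_size, n_samples):
    # Float in (0.0, 1.0] -> portion of the training set (at least 1 sample);
    # int -> exact mini-batch size.
    if isinstance(batch_size, float):
        return max(1, int(batch_size * n_samples))
    return batch_size

assert resolve_batch_size(1.0, 750) == 750   # full batch (the default)
assert resolve_batch_size(0.1, 750) == 75    # 10% mini-batches
assert resolve_batch_size(30, 750) == 30     # explicit size
```
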
def predict(self, X)

Predict using the random Light Labyrinth regressor.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples, 1)
The predicted values.
Expand source code
def predict(self, X):
    """Predict using the random Light Labyrinth regressor.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples, 1)
        The predicted values.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
class LightLabyrinthRegressor (height, width, bias=True, activation=ReflectiveIndexCalculator.sigmoid_dot_product, error=ErrorCalculator.mean_squared_error, optimizer=None, regularization=None, weights=None, weights_init=LightLabyrinthWeightsInit.Default, random_state=0)

A 2-dimensional Light Labyrinth model.

It is meant for regression.

    X
    |__ 
    |__|
    |__|
    |__|__ y
    |__ *

An example of height = 5 by width = 2 model. The lower output is omitted.

Parameters


height : int
Height of the Light Labyrinth. Note that height > 1.
width : int
Width of the Light Labyrinth. Note that width > 1.
bias : bool, default=True
Whether to use bias in each node.
activation : ReflectiveIndexCalculator, default=ReflectiveIndexCalculator.sigmoid_dot_product

Activation function applied to each node's output.

-sigmoid_dot_product - logistic function over dot product of weights and input light for a given node.

error : ErrorCalculator, default=ErrorCalculator.mean_squared_error

Error function optimized during training.

-mean_squared_error - Mean Squared Error can be used for any classification or regression task.

-cross_entropy - Cross Entropy Loss is meant primarily for classification tasks, but it can be used for regression as well.

-scaled_mean_squared_error - Adaptation of MSE meant primarily for multi-label classification. Output values of consecutive pairs of output nodes are scaled to add up to \frac{1}{k} before applying MSE.

optimizer : object, default=GradientDescent(0.01)

Optimization algorithm.

-GradientDescent - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

-RMSprop - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

-Adam - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

-Nadam - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

regularization : object, default=RegularizationL1(0.01)

Regularization technique - either L1, L2, or None.

RegularizationNone - No regularization.

RegularizationL1 - L1 regularization: \lambda\sum|W|, default: lambda_factor=0.01

RegularizationL2 - L2 regularization: \frac{\lambda}{2}\sum||W||, default: lambda_factor=0.01

weights : ndarray, optional, default=None
Initial weights. If None, weights are set according to weights_init parameter.
weights_init : LightLabyrinthWeightsInit, default=LightLabyrinthWeightsInit.Default

Method for weights initialization.

-LightLabyrinthWeightsInit.Default - default initialization.

-LightLabyrinthWeightsInit.Random - weights are initialized randomly.

-LightLabyrinthWeightsInit.Zeros - weights are initialized with zeros.

random_state : int, optional, default=0
Initial random state. If 0, initial random state will be set randomly.

Attributes


height : int
Height of the LightLabyrinth.
width : int
Width of the LightLabyrinth.
trainable_params : int
Number of trainable parameters.
weights : ndarray of shape (height-1, width-1, n_features + bias)
Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X. If bias is set to True, n_features is increased by 1.
history : LightLabyrinthLearningHistory
Learning history including error on training and (if provided) validation sets.
bias : bool
Boolean value whether the model was trained with bias.
activation : ReflectiveIndexCalculator
Activation function used for training.
error_function : ErrorCalculator
Error function used for training.
optimizer : object
Optimization algorithm used for training, including its parameters.
regularization : object
Regularization used during training, including its parameters.
random_state : int
Random state passed during initialization.
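
The weights shape above also implies the value reported by trainable_params; a short sketch under that assumption (the helper is illustrative, not part of the library):

```
def trainable_params(height, width, n_features, bias=True):
    # Implied by the documented weights shape (height-1, width-1, n_features + bias).
    return (height - 1) * (width - 1) * (n_features + int(bias))

print(trainable_params(height=4, width=6, n_features=100))  # -> 1515
```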

Notes


LightLabyrinth trains iteratively since at each time step the partial derivatives of the loss function with respect to the model parameters are computed to update the weights.

It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays of floating point values.

See Also

LightLabyrinthDynamicRegressor
2-dimensional Light Labyrinth regressor trained with dynamic algorithm.
LightLabyrinthRandomRegressor
2-dimensional randomized Light Labyrinth regressor.
LightLabyrinthClassifier
2-dimensional Light Labyrinth classifier.

Examples

>>> from light_labyrinth.dim2 import LightLabyrinthRegressor
>>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
>>> from light_labyrinth.hyperparams.regularization import RegularizationNone
>>> from light_labyrinth.hyperparams.optimization import GradientDescent
>>> from light_labyrinth.hyperparams.error_function import ErrorCalculator
>>> from sklearn.datasets import make_regression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import r2_score
>>> X, y = make_regression(n_samples=1000)
>>> y = y.reshape(-1,1)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
>>> clf = LightLabyrinthRegressor(height=4, width=6,
...                            error=ErrorCalculator.mean_squared_error,
...                            optimizer=GradientDescent(1.5),
...                            regularization=RegularizationNone(),
...                            weights_init=LightLabyrinthWeightsInit.Zeros)
>>> hist = clf.fit(X_train, y_train, epochs=20, batch_size=30)
>>> y_pred = clf.predict(X_test)
>>> r2_score(y_test, y_pred)
0.99
Expand source code
class LightLabyrinthRegressor(LightLabyrinth):
    """A 2-dimensional Light Labyrinth model.

        It is meant for regression.

        ```
            X
            |__ 
            |__|
            |__|
            |__|__ y
            |__ *
        ```

        An example of `height = 5` by `width = 2` model. The lower output is omitted. 

        Parameters
        ----------
        height : int 
            Height of the Light Labyrinth. Note that `height > 1`.

        width : int
            Width of the Light Labyrinth. Note that `width > 1`.

        bias : bool, default=True
            Whether to use bias in each node.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`, default=`light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator.sigmoid_dot_product`
            Activation function applied to each node's output.

            -`sigmoid_dot_product` - logistic function over dot product of weights and input light for a given node.

        error : `light_labyrinth.hyperparams.error_function.ErrorCalculator`, default=`light_labyrinth.hyperparams.error_function.ErrorCalculator.mean_squared_error`
            Error function optimized during training.

            -`mean_squared_error` - Mean Squared Error can be used for any classification or regression task.

            -`cross_entropy` - Cross Entropy Loss is meant primarily for classification tasks, but it can be used for regression as well.

            -`scaled_mean_squared_error` - Adaptation of MSE meant primarily for multi-label classification.
            Output values of consecutive pairs of output nodes are scaled to add up to \\(\\frac{1}{k}\\) before applying MSE.

        optimizer : object, default=`light_labyrinth.hyperparams.optimization.GradientDescent(0.01)`
            Optimization algorithm. 

            -`light_labyrinth.hyperparams.optimization.GradientDescent` - Standard Gradient Descent with constant learning rate, default: learning_rate=0.01

            -`light_labyrinth.hyperparams.optimization.RMSprop` - RMSprop optimization algorithm, default: learning_rate=0.01, rho=0.9, momentum=0.0, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Adam` - Adam optimization algorithm, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6

            -`light_labyrinth.hyperparams.optimization.Nadam` - Adam with Nesterov momentum, default: learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-6


        regularization : object, default=`light_labyrinth.hyperparams.regularization.RegularizationL1(0.01)`
            Regularization technique - either L1, L2, or None.

            `light_labyrinth.hyperparams.regularization.RegularizationNone` - No regularization.

            `light_labyrinth.hyperparams.regularization.RegularizationL1` - L1 regularization: \\(\\lambda\\sum|W|\\), default: lambda_factor=0.01

            `light_labyrinth.hyperparams.regularization.RegularizationL2` - L2 regularization: \\(\\frac{\\lambda}{2}\\sum||W||\\), default: lambda_factor=0.01

        weights: ndarray, optional, default=None
            Initial weights. If `None`, weights are set according to weights_init parameter.

        weights_init: `light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit`, default=`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default`
            Method for weights initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Default` - default initialization.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Random` - weights are initialized randomly.

            -`light_labyrinth.hyperparams.weights_init.LightLabyrinthWeightsInit.Zeros` - weights are initialized with zeros.

        random_state: int, optional, default=0
            Initial random state. If 0, initial random state will be set randomly.

        Attributes
        ----------
        height : int
            Height of the LightLabyrinth.

        width : int
            Width of the LightLabyrinth.

        trainable_params : int
            Number of trainable parameters.

        weights : ndarray of shape (height-1, width-1, n_features + bias)
            Array of weights optimized during training. If bias is set to False, n_features is equal to the number of features in the training set X.
            If bias is set to True, n_features is increased by 1.

        history : `light_labyrinth.utils.LightLabyrinthLearningHistory`
            Learning history including error on training and (if provided) validation sets.

        bias : bool
            Boolean value whether the model was trained with bias.

        activation : `light_labyrinth.hyperparams.activation.ReflectiveIndexCalculator`
            Activation function used for training.

        error_function : `light_labyrinth.hyperparams.error_function.ErrorCalculator`
            Error function used for training.

        optimizer : object
            Optimization algorithm used for training, including its parameters.

        regularization : object
            Regularization used during training, including its parameters.

        random_state : int
            Random state passed during initialization.

        Notes
        -----
        LightLabyrinth trains iteratively since at each time step
        the partial derivatives of the loss function with respect to the model
        parameters are computed to update the weights.

        It can also have a regularization term added to the loss function
        that shrinks model parameters to prevent overfitting.

        This implementation works with data represented as dense numpy arrays
        of floating point values.


        See Also
        --------
        light_labyrinth.dim2.LightLabyrinthDynamicRegressor : 2-dimensional Light Labyrinth regressor trained with dynamic algorithm.
        light_labyrinth.dim2.LightLabyrinthRandomRegressor : 2-dimensional randomized Light Labyrinth regressor.
        light_labyrinth.dim2.LightLabyrinthClassifier : 2-dimensional Light Labyrinth classifier.

        Examples
        --------
        >>> from light_labyrinth.dim2 import LightLabyrinthRegressor
        >>> from light_labyrinth.hyperparams.weights_init import LightLabyrinthWeightsInit
        >>> from light_labyrinth.hyperparams.regularization import RegularizationNone
        >>> from light_labyrinth.hyperparams.optimization import GradientDescent
        >>> from light_labyrinth.hyperparams.error_function import ErrorCalculator
        >>> from sklearn.datasets import make_regression
        >>> from sklearn.model_selection import train_test_split
        >>> from sklearn.metrics import r2_score
        >>> X, y = make_regression(n_samples=1000)
        >>> y = y.reshape(-1,1)
        >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
        >>> clf = LightLabyrinthRegressor(height=4, width=6,
        ...                            error=ErrorCalculator.mean_squared_error,
        ...                            optimizer=GradientDescent(1.5),
        ...                            regularization=RegularizationNone(),
        ...                            weights_init=LightLabyrinthWeightsInit.Zeros)
        >>> hist = clf.fit(X_train, y_train, epochs=20, batch_size=30)
        >>> y_pred = clf.predict(X_test)
        >>> r2_score(y_test, y_pred)
        0.99
        """

    def __init__(self, height, width, bias=True,
                 activation=ReflectiveIndexCalculator.sigmoid_dot_product,
                 error=ErrorCalculator.mean_squared_error,
                 optimizer=None,
                 regularization=None,
                 weights=None,
                 weights_init=LightLabyrinthWeightsInit.Default,
                 random_state=0):
        super().__init__(height, width, bias,
                         activation,
                         error,
                         optimizer,
                         regularization,
                         weights,
                         weights_init,
                         random_state)

    def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
        """Fit the model to data matrix X and target(s) y.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        y : ndarray of shape (n_samples, 1)
            The target values.

        epochs : int
            Number of iterations to be performed. The solver iterates until convergence
            (determined by `stop_change`, `n_iter_check`) or this number of iterations.

        batch_size : int or float, default=1.0
            Size of mini-batches for stochastic optimizers, given either as a portion
            of samples (float) or an exact number (int).
            When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

        stop_change : float, default=1e-4
            Tolerance for the optimization. When the loss or score is not improving
            by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
            convergence is considered to be reached and training stops.

        n_iter_check : int, default=0
            Maximum number of epochs to not meet ``stop_change`` improvement.
            When set to 0, exactly ``epochs`` iterations will be performed.

        epoch_check : int, default=1
            Determines how often the condition for convergence is checked.
            `epoch_check = i` means that the condition will be checked every i-th iteration.
            When set to 0 the condition will not be checked at all and the learning history will be empty.

        X_val : ndarray of shape (n_val_samples, n_features), default=None
            The validation data. 
            If `X_val` is given, `y_val` must be given as well.

        y_val : ndarray of shape (n_val_samples, 1), default=None
            Target values of the validation set. 
            If `y_val` is given, `X_val` must be given as well.

        verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
            Verbosity level.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

            -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

        Returns
        -------
        hist : object
            Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
            errs_train, errs_val.
        """
        self._encoder = _LightLabyrinthOutputTransformer()
        y_transformed = self._encoder.fit_transform(y)
        y_val_transformed = self._encoder.transform(
            y_val) if y_val is not None else None
            
        return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)

    def predict(self, X):
        """Predict using the Light Labyrinth regressor.

        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            The input data.

        Returns
        -------
        y : ndarray of shape (n_samples, 1)
            The predicted values.
        """
        y_pred = super().predict(X)
        return self._encoder.inverse_transform(y_pred)

    def __del__(self):
        super().__del__()
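
For intuition about the regularization options documented above, a hedged NumPy sketch of the penalties added to the loss (we read the L2 formula in the usual squared-norm sense suggested by the 1/2 factor; this is an illustration, not the library's implementation):

```
import numpy as np

def l1_penalty(W, lambda_factor=0.01):
    # L1: lambda * sum(|w|) over all weights
    return lambda_factor * np.sum(np.abs(W))

def l2_penalty(W, lambda_factor=0.01):
    # L2: (lambda / 2) * sum(w^2), the conventional reading of the 1/2 factor
    return lambda_factor / 2.0 * np.sum(W ** 2)

W = np.random.randn(3, 5, 101)  # e.g. shape (height-1, width-1, n_features + bias)
print(l1_penalty(W), l2_penalty(W))
```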

Ancestors

  • light_labyrinth._bare_model.LightLabyrinth
  • light_labyrinth._bare_model._LightLabyrinthModel

Methods

def fit(self, X, y, epochs, batch_size=1.0, stop_change=0.0001, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing)

Fit the model to data matrix X and target(s) y.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.
y : ndarray of shape (n_samples, 1)
The target values.
epochs : int
Number of iterations to be performed. The solver iterates until convergence (determined by stop_change, n_iter_check) or this number of iterations.
batch_size : int or float, default=1.0
Size of mini-batches for stochastic optimizers, given either as a portion of samples (float) or an exact number (int). When type is float, batch_size = max(1, int(batch_size * n_samples)).
stop_change : float, default=1e-4
Tolerance for the optimization. When the loss or score is not improving by at least stop_change for n_iter_check consecutive iterations, convergence is considered to be reached and training stops.
n_iter_check : int, default=0
Maximum number of epochs to not meet stop_change improvement. When set to 0, exactly epochs iterations will be performed.
epoch_check : int, default=1
Determines how often the condition for convergence is checked. epoch_check = i means that the condition will be checked every i-th iteration. When set to 0 the condition will not be checked at all and the learning history will be empty.
X_val : ndarray of shape (n_val_samples, n_features), default=None
The validation data. If X_val is given, y_val must be given as well.
y_val : ndarray of shape (n_val_samples, 1), default=None
Target values of the validation set. If y_val is given, X_val must be given as well.
verbosity : LightLabyrinthVerbosityLevel, default=LightLabyrinthVerbosityLevel.Nothing

Verbosity level.

-LightLabyrinthVerbosityLevel.Nothing - No output is printed.

-LightLabyrinthVerbosityLevel.Basic - Display logs about important events during the learning process.

-LightLabyrinthVerbosityLevel.Full - Detailed output from the learning process is displayed.

Returns


hist : object
Returns a LightLabyrinthLearningHistory object with fields: errs_train, errs_val.
Expand source code
def fit(self, X, y, epochs, batch_size=1.0, stop_change=1e-4, n_iter_check=0, epoch_check=1, X_val=None, y_val=None, verbosity=LightLabyrinthVerbosityLevel.Nothing):
    """Fit the model to data matrix X and target(s) y.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    y : ndarray of shape (n_samples, 1)
        The target values.

    epochs : int
        Number of iterations to be performed. The solver iterates until convergence
        (determined by `stop_change`, `n_iter_check`) or this number of iterations.

    batch_size : int or float, default=1.0
        Size of mini-batches for stochastic optimizers, given either as a portion
        of samples (float) or an exact number (int).
        When type is float, `batch_size = max(1, int(batch_size * n_samples))`.

    stop_change : float, default=1e-4
        Tolerance for the optimization. When the loss or score is not improving
        by at least ``stop_change`` for ``n_iter_check`` consecutive iterations,
        convergence is considered to be reached and training stops.

    n_iter_check : int, default=0
        Maximum number of epochs to not meet ``stop_change`` improvement.
        When set to 0, exactly ``epochs`` iterations will be performed.

    epoch_check : int, default=1
        Determines how often the condition for convergence is checked.
        `epoch_check = i` means that the condition will be checked every i-th iteration.
        When set to 0 the condition will not be checked at all and the learning history will be empty.

    X_val : ndarray of shape (n_val_samples, n_features), default=None
        The validation data. 
        If `X_val` is given, `y_val` must be given as well.

    y_val : ndarray of shape (n_val_samples, 1), default=None
        Target values of the validation set. 
        If `y_val` is given, `X_val` must be given as well.

    verbosity: `light_labyrinth.utils.LightLabyrinthVerbosityLevel`, default=`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing`
        Verbosity level.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Nothing` - No output is printed.

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Basic` - Display logs about important events during the learning process. 

        -`light_labyrinth.utils.LightLabyrinthVerbosityLevel.Full` - Detailed output from the learning process is displayed.

    Returns
    -------
    hist : object
        Returns a `light_labyrinth.utils.LightLabyrinthLearningHistory` object with fields: 
        errs_train, errs_val.
    """
    self._encoder = _LightLabyrinthOutputTransformer()
    y_transformed = self._encoder.fit_transform(y)
    y_val_transformed = self._encoder.transform(
        y_val) if y_val is not None else None
        
    return super().fit(X, y_transformed, epochs, batch_size, stop_change, n_iter_check, epoch_check, X_val, y_val_transformed, verbosity)
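
The convergence controls combine as described above; a hedged usage sketch continuing the earlier example (parameter values are illustrative):

```
# Check convergence every 5th epoch; stop early once the tracked error fails
# to improve by at least 1e-3 for 3 consecutive iterations, or after 200 epochs.
hist = clf.fit(X_train, y_train,
               epochs=200, batch_size=0.1,
               stop_change=1e-3, n_iter_check=3, epoch_check=5,
               X_val=X_test, y_val=y_test)
```
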
def predict(self, X)

Predict using the Light Labyrinth regressor.

Parameters


X : ndarray of shape (n_samples, n_features)
The input data.

Returns


y : ndarray of shape (n_samples, 1)
The predicted values.
Expand source code
def predict(self, X):
    """Predict using the Light Labyrinth regressor.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        The input data.

    Returns
    -------
    y : ndarray of shape (n_samples, 1)
        The predicted values.
    """
    y_pred = super().predict(X)
    return self._encoder.inverse_transform(y_pred)
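
The history returned by fit exposes the documented errs_train and errs_val fields, so learning curves can be inspected directly. A minimal sketch continuing the earlier example (assumes matplotlib is installed; it is not a dependency of this module):

```
import matplotlib.pyplot as plt

hist = clf.fit(X_train, y_train, epochs=20, batch_size=30,
               X_val=X_test, y_val=y_test)
plt.plot(hist.errs_train, label="train")       # error on the training set
plt.plot(hist.errs_val, label="validation")    # error on the validation set
plt.xlabel("epoch")
plt.ylabel("error")
plt.legend()
plt.show()
```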