Welcome to LassoNet’s documentation!¶
Installation¶
pip install lassonet
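Once installed, a minimal sketch of the regressor looks like this (assuming scikit-learn is available for a toy dataset; the dataset choice is purely illustrative):

# Minimal sketch: fit LassoNetRegressor on a toy regression dataset.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

from lassonet import LassoNetRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LassoNetRegressor(hidden_dims=(100,), verbose=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data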
API¶
- class lassonet.LassoNetRegressor(*, hidden_dims=(100,), lambda_start='auto', lambda_seq=None, gamma=0.0, gamma_skip=0.0, path_multiplier=1.02, M=10, groups=None, dropout=0, batch_size=None, optim=None, n_iters=(1000, 100), patience=(100, 10), tol=0.99, backtrack=False, val_size=None, device=None, verbose=1, random_state=None, torch_seed=None, class_weight=None, tie_approximation=None)¶
Use LassoNet as regressor
- Parameters
hidden_dims (tuple of int, default=(100,)) – Shape of the hidden layers.
lambda_start (float, default='auto') – First value on the path. Leave ‘auto’ to estimate it automatically.
lambda_seq (iterable of float) – If specified, the model will be trained on this sequence of values, until all coefficients are zero. The dense model will always be trained first. Note: lambda_start and path_multiplier will be ignored.
gamma (float, default=0.0) – l2 penalization on the network
gamma_skip (float, default=0.0) – l2 penalization on the skip connection
path_multiplier (float, default=1.02) – Multiplicative factor (\(1 + \epsilon\)) to increase the penalty parameter over the path
M (float, default=10.0) – Hierarchy parameter.
groups (None or list of lists) – Use group LassoNet regularization. groups is a list of lists such that groups[i] contains the indices of the features in the i-th group (see the sketch after this parameter list).
dropout (float, default=0) – Dropout rate applied to the hidden layers.
batch_size (int, default=None) – If None, does not use batches. Batches are shuffled at each epoch.
optim (torch optimizer or tuple of 2 optimizers, default=None) – Optimizer(s) for initial training and path computation. Default is Adam(lr=1e-3) for initial training and SGD(lr=1e-3, momentum=0.9) for the path.
n_iters (int or pair of int, default=(1000, 100)) – Maximum number of training epochs for initial training and path computation. This is an upper bound on the effective number of epochs, since the model uses early stopping.
patience (int or pair of int or None, default=(100, 10)) – Number of epochs to wait without improvement during early stopping.
tol (float, default=0.99) – Minimum improvement for early stopping: new objective < tol * old objective.
backtrack (bool, default=False) – If True, ensures that the objective function decreases.
val_size (float, default=None) – Proportion of the data to use for early stopping. 0 means that the training data is used. To disable early stopping, set patience=None. Default is 0.1 for all models except Cox, for which the training data is used. If X_val and y_val are given during training, val_size is ignored.
device (torch device, default=None) – Device on which to train the model using PyTorch. Default: GPU if available else CPU
verbose (int, default=1) – Verbosity level.
random_state – Random state for the validation split.
torch_seed – Torch seed for random model initialization.
class_weight (iterable of float, default=None) – If specified, weights for different classes in training. There must be one number per class.
tie_approximation (str) – Tie approximation for the Cox model, must be one of (“breslow”, “efron”).
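A short sketch of how the groups argument can be laid out; the indices below are purely illustrative:

# Hypothetical grouping: features 0-2 form one group, 3-4 another, 5 stands alone.
from lassonet import LassoNetRegressor

groups = [[0, 1, 2], [3, 4], [5]]
model = LassoNetRegressor(groups=groups)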
- fit(X, y, *, X_val=None, y_val=None)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, X_val=None, y_val=None, lambda_seq=None, lambda_max=inf, return_state_dicts=True, callback=None) → List[lassonet.interfaces.HistoryItem]¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore. callback will be called at each step on (model, history)
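A sketch of computing a path and inspecting it; the HistoryItem attributes used below (lambda_ and selected) are assumptions about lassonet.interfaces.HistoryItem rather than something documented on this page:

# Sketch: walk the regularization path and count selected features at each step.
from sklearn.datasets import load_diabetes

from lassonet import LassoNetRegressor

X, y = load_diabetes(return_X_y=True)
model = LassoNetRegressor(verbose=0)
path = model.path(X, y)
for item in path:
    # item.lambda_: penalty value; item.selected: assumed boolean mask of kept features
    print(item.lambda_, int(item.selected.sum()))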
- score(X, y, sample_weight=None)¶
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – \(R^2\) of self.predict(X) w.r.t. y.
- Return type
float
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lassonet.LassoNetClassifier(*, hidden_dims=(100,), lambda_start='auto', lambda_seq=None, gamma=0.0, gamma_skip=0.0, path_multiplier=1.02, M=10, groups=None, dropout=0, batch_size=None, optim=None, n_iters=(1000, 100), patience=(100, 10), tol=0.99, backtrack=False, val_size=None, device=None, verbose=1, random_state=None, torch_seed=None, class_weight=None, tie_approximation=None)¶
Use LassoNet as classifier
- Parameters
hidden_dims (tuple of int, default=(100,)) – Shape of the hidden layers.
lambda_start (float, default='auto') – First value on the path. Leave ‘auto’ to estimate it automatically.
lambda_seq (iterable of float) – If specified, the model will be trained on this sequence of values, until all coefficients are zero. The dense model will always be trained first. Note: lambda_start and path_multiplier will be ignored.
gamma (float, default=0.0) – l2 penalization on the network
gamma_skip (float, default=0.0) – l2 penalization on the skip connection
path_multiplier (float, default=1.02) – Multiplicative factor (\(1 + \epsilon\)) to increase the penalty parameter over the path
M (float, default=10.0) – Hierarchy parameter.
groups (None or list of lists) – Use group LassoNet regularization. groups is a list of lists such that groups[i] contains the indices of the features in the i-th group.
dropout (float, default=0) – Dropout rate applied to the hidden layers.
batch_size (int, default=None) – If None, does not use batches. Batches are shuffled at each epoch.
optim (torch optimizer or tuple of 2 optimizers, default=None) – Optimizer(s) for initial training and path computation. Default is Adam(lr=1e-3) for initial training and SGD(lr=1e-3, momentum=0.9) for the path.
n_iters (int or pair of int, default=(1000, 100)) – Maximum number of training epochs for initial training and path computation. This is an upper bound on the effective number of epochs, since the model uses early stopping.
patience (int or pair of int or None, default=(100, 10)) – Number of epochs to wait without improvement during early stopping.
tol (float, default=0.99) – Minimum improvement for early stopping: new objective < tol * old objective.
backtrack (bool, default=False) – If True, ensures that the objective function decreases.
val_size (float, default=None) – Proportion of the data to use for early stopping. 0 means that the training data is used. To disable early stopping, set patience=None. Default is 0.1 for all models except Cox, for which the training data is used. If X_val and y_val are given during training, val_size is ignored.
device (torch device, default=None) – Device on which to train the model using PyTorch. Default: GPU if available else CPU
verbose (int, default=1) – Verbosity level.
random_state – Random state for the validation split.
torch_seed – Torch seed for random model initialization.
class_weight (iterable of float, default=None) – If specified, weights for different classes in training. There must be one number per class.
tie_approximation (str) – Tie approximation for the Cox model, must be one of (“breslow”, “efron”).
- fit(X, y, *, X_val=None, y_val=None)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
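A fitting sketch for the classifier with an illustrative class_weight (one weight per class; the values are arbitrary, not a recommendation):

# Sketch: binary classification with class weights.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from lassonet import LassoNetClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LassoNetClassifier(class_weight=[1.0, 2.0], verbose=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy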
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, X_val=None, y_val=None, lambda_seq=None, lambda_max=inf, return_state_dicts=True, callback=None) → List[lassonet.interfaces.HistoryItem]¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore. callback will be called at each step on (model, history)
- score(X, y, sample_weight=None)¶
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – Mean accuracy of self.predict(X) w.r.t. y.
- Return type
float
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lassonet.LassoNetCoxRegressor(*, hidden_dims=(100,), lambda_start='auto', lambda_seq=None, gamma=0.0, gamma_skip=0.0, path_multiplier=1.02, M=10, groups=None, dropout=0, batch_size=None, optim=None, n_iters=(1000, 100), patience=(100, 10), tol=0.99, backtrack=False, val_size=None, device=None, verbose=1, random_state=None, torch_seed=None, class_weight=None, tie_approximation=None)¶
Use LassoNet for Cox regression
- Parameters
hidden_dims (tuple of int, default=(100,)) – Shape of the hidden layers.
lambda_start (float, default='auto') – First value on the path. Leave ‘auto’ to estimate it automatically.
lambda_seq (iterable of float) – If specified, the model will be trained on this sequence of values, until all coefficients are zero. The dense model will always be trained first. Note: lambda_start and path_multiplier will be ignored.
gamma (float, default=0.0) – l2 penalization on the network
gamma_skip (float, default=0.0) – l2 penalization on the skip connection
path_multiplier (float, default=1.02) – Multiplicative factor (\(1 + \epsilon\)) to increase the penalty parameter over the path
M (float, default=10.0) – Hierarchy parameter.
groups (None or list of lists) – Use group LassoNet regularization. groups is a list of lists such that groups[i] contains the indices of the features in the i-th group.
dropout (float, default=0) – Dropout rate applied to the hidden layers.
batch_size (int, default=None) – If None, does not use batches. Batches are shuffled at each epoch.
optim (torch optimizer or tuple of 2 optimizers, default=None) – Optimizer(s) for initial training and path computation. Default is Adam(lr=1e-3) for initial training and SGD(lr=1e-3, momentum=0.9) for the path.
n_iters (int or pair of int, default=(1000, 100)) – Maximum number of training epochs for initial training and path computation. This is an upper bound on the effective number of epochs, since the model uses early stopping.
patience (int or pair of int or None, default=(100, 10)) – Number of epochs to wait without improvement during early stopping.
tol (float, default=0.99) – Minimum improvement for early stopping: new objective < tol * old objective.
backtrack (bool, default=False) – If True, ensures that the objective function decreases.
val_size (float, default=None) – Proportion of the data to use for early stopping. 0 means that the training data is used. To disable early stopping, set patience=None. Default is 0.1 for all models except Cox, for which the training data is used. If X_val and y_val are given during training, val_size is ignored.
device (torch device, default=None) – Device on which to train the model using PyTorch. Default: GPU if available else CPU
verbose (int, default=1) – Verbosity level.
random_state – Random state for the validation split.
torch_seed – Torch seed for random model initialization.
class_weight (iterable of float, default=None) – If specified, weights for different classes in training. There must be one number per class.
tie_approximation (str) – Tie approximation for the Cox model, must be one of (“breslow”, “efron”).
- fit(X, y, *, X_val=None, y_val=None)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, X_val=None, y_val=None, lambda_seq=None, lambda_max=inf, return_state_dicts=True, callback=None) → List[lassonet.interfaces.HistoryItem]¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore. callback will be called at each step on (model, history)
- score(X_test, y_test)¶
Concordance index
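A sketch of Cox regression on synthetic data; the two-column (time, event) layout of y is an assumption and should be checked against the package's Cox examples:

# Sketch: Cox regression with Efron tie handling on synthetic survival data.
import numpy as np

from lassonet import LassoNetCoxRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
time = rng.exponential(scale=1.0, size=200)   # synthetic survival times
event = rng.integers(0, 2, size=200)          # 1 = event observed, 0 = censored
y = np.column_stack([time, event])            # assumed (time, event) layout

model = LassoNetCoxRegressor(tie_approximation="efron", verbose=0)
model.fit(X, y)
print(model.score(X, y))  # concordance index on the training data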
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lassonet.LassoNetRegressorCV(cv=None, **kwargs)¶
See BaseLassoNet for the parameters
- cv (int, cross-validation generator or iterable, default=None) – Determines the cross-validation splitting strategy. Default is 5-fold cross-validation. See https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.check_cv.html
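A sketch of the cross-validated regressor; cv=5 just makes the default 5-fold splitting explicit, and the extra keyword argument is assumed to be forwarded to the underlying estimator through **kwargs:

# Sketch: cross-validated LassoNet regression.
from sklearn.datasets import load_diabetes

from lassonet import LassoNetRegressorCV

X, y = load_diabetes(return_X_y=True)
model = LassoNetRegressorCV(cv=5, verbose=0)  # verbose forwarded via **kwargs
model.fit(X, y)
print(model.score(X, y))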
- fit(X, y)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, return_state_dicts=True)¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore.
- score(X, y, sample_weight=None)¶
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – \(R^2\) of self.predict(X) w.r.t. y.
- Return type
float
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lassonet.LassoNetClassifierCV(cv=None, **kwargs)¶
See BaseLassoNet for the parameters
- cv (int, cross-validation generator or iterable, default=None) – Determines the cross-validation splitting strategy. Default is 5-fold cross-validation. See https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.check_cv.html
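A sketch passing a scikit-learn splitter as cv; any object accepted by sklearn.model_selection.check_cv should work, per the link above:

# Sketch: cross-validated classification with an explicit splitter.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold

from lassonet import LassoNetClassifierCV

X, y = load_breast_cancer(return_X_y=True)
model = LassoNetClassifierCV(cv=StratifiedKFold(n_splits=3), verbose=0)
model.fit(X, y)
print(model.score(X, y))  # mean accuracy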
- fit(X, y)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, return_state_dicts=True)¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore.
- score(X, y, sample_weight=None)¶
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – Mean accuracy of self.predict(X) w.r.t. y.
- Return type
float
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lassonet.LassoNetCoxRegressorCV(cv=None, **kwargs)¶
See BaseLassoNet for the parameters
- cv (int, cross-validation generator or iterable, default=None) – Determines the cross-validation splitting strategy. Default is 5-fold cross-validation. See https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.check_cv.html
- fit(X, y)¶
Train the model. Note that if lambda_ is not given, the trained model will most likely not use any feature.
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- path(X, y, *, return_state_dicts=True)¶
Train LassoNet on a lambda_ path. The path is defined by the class parameters: start at lambda_start and increment according to path_multiplier. The path will stop when no feature is being used anymore.
- score(X_test, y_test)¶
Concordance index
- set_params(**params)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- lassonet.plot_path(model, path, X_test, y_test, *, score_function=None)¶
Plot the evolution of the model on the path, namely:
- lambda
- number of selected variables
- score
- Parameters
model (LassoNetClassifier or LassoNetRegressor) –
path – output of model.path
X_test (array-like) –
y_test (array-like) –
score_function (function or None) – If None, model.score is used. score_function must take X_test and y_test as input.
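A plotting sketch; it assumes matplotlib is installed (plot_path draws with it), and plt.show() is only needed outside interactive environments:

# Sketch: compute a path on training data and plot it against a test split.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

from lassonet import LassoNetRegressor, plot_path

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LassoNetRegressor(verbose=0)
path = model.path(X_train, y_train)
plot_path(model, path, X_test, y_test)
plt.show()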
- lassonet.lassonet_path(X, y, task, *, X_val=None, y_val=None, **kwargs)¶
- Parameters
X (array-like of shape (n_samples, n_features)) – Training data
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Target values
task (str, must be "classification" or "regression") – Task
X_val (array-like of shape (n_samples, n_features)) – Validation data
y_val (array-like of shape (n_samples,) or (n_samples, n_outputs)) – Validation values
See BaseLassoNet for the other parameters.
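A sketch of the functional interface; the extra keyword arguments are assumed to forward BaseLassoNet parameters:

# Sketch: compute a path for a regression task without building an estimator by hand.
from sklearn.datasets import load_diabetes

from lassonet import lassonet_path

X, y = load_diabetes(return_X_y=True)
path = lassonet_path(X, y, "regression", hidden_dims=(100,), verbose=0)
print(len(path))  # number of points on the lambda_ path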