mlektic.logistic_reg package
Submodules
mlektic.logistic_reg.logistic_regression_archt module
- class mlektic.logistic_reg.logistic_regression_archt.LogisticRegressionArcht(iterations: int = 50, use_intercept: bool = True, verbose: bool = True, regularizer: Callable = None, optimizer: Optional[Tuple[Optimizer, str, int]] = None, method: str = 'logistic', metric: str = 'accuracy')[source]
Bases: object
Logistic regression model class supporting several training methods, including the default 'logistic' method, batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. A short usage sketch follows the attribute list below.
- iterations
Number of training iterations.
- Type:
int
- use_intercept
Whether to include an intercept in the model.
- Type:
bool
- verbose
Whether to print training progress.
- Type:
bool
- weights
Model weights.
- Type:
tf.Variable
- cost_history
History of cost values during training.
- Type:
list
- metric_history
History of metric values during training.
- Type:
list
- n_features
Number of features in the input data.
- Type:
int
- regularizer
Regularization function.
- Type:
callable
- optimizer
Optimizer for gradient descent.
- Type:
tf.optimizers.Optimizer
- method
Training method to use.
- Type:
str
- metric
Evaluation metric to use.
- Type:
str
- num_classes
Number of classes in the target variable.
- Type:
int
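A minimal usage sketch based on the constructor signature and attributes documented above. The synthetic data, the one-hot target encoding, and the chosen hyperparameter values are illustrative assumptions, not requirements verified against the implementation:

    import numpy as np
    from mlektic.logistic_reg.logistic_regression_archt import LogisticRegressionArcht

    # Illustrative 3-class problem: 200 samples, 4 features.
    rng = np.random.default_rng(0)
    x_train = rng.normal(size=(200, 4))
    y_train = np.eye(3)[rng.integers(0, 3, size=200)]  # one-hot targets (assumed encoding)

    # Default 'logistic' method with accuracy tracking, per the signature above.
    model = LogisticRegressionArcht(iterations=100, use_intercept=True,
                                    verbose=False, metric='accuracy')
    model.train((x_train, y_train))  # train() returns the fitted instance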
- eval(test_set: Tuple[ndarray, ndarray], metric: str) → float [source]
Evaluates the model on a test set using the specified metric.
- Parameters:
test_set (tuple) – Tuple containing test input data (np.ndarray) and output data (np.ndarray).
metric (str) – Metric to use for evaluation. Options are ‘categorical_crossentropy’, ‘accuracy’, ‘precision’, ‘recall’, ‘f1_score’, ‘confusion_matrix’.
- Returns:
Evaluation result.
- Return type:
float
- Raises:
ValueError – If the specified metric is not supported.
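Continuing the sketch above, evaluation on a held-out split (the test arrays are assumed to have the same shapes and encoding as the training data):

    x_test = rng.normal(size=(50, 4))
    y_test = np.eye(3)[rng.integers(0, 3, size=50)]

    test_accuracy = model.eval((x_test, y_test), metric='accuracy')
    test_f1 = model.eval((x_test, y_test), metric='f1_score')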
- get_cost_history() → list [source]
Returns the history of cost values during training.
- Returns:
List of cost values.
- Return type:
list
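For example, the recorded costs can be plotted to check convergence; matplotlib is used here only for illustration:

    import matplotlib.pyplot as plt

    costs = model.get_cost_history()
    plt.plot(costs)
    plt.xlabel('iteration')
    plt.ylabel('cost')
    plt.show()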
- get_intercept() → Optional[ndarray] [source]
Returns the model intercept.
- Returns:
Intercept value if use_intercept is True, else None.
- Return type:
Union[np.ndarray, None]
- get_metric_history() → list [source]
Returns the history of metric values during training.
- Returns:
List of metric values.
- Return type:
list
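Continuing the sketch above, the tracked metric can be inspected the same way as the cost history:

    metric_values = model.get_metric_history()
    # Entries are assumed to be scalar-convertible (floats or 0-d tensors).
    print('final', model.metric, float(metric_values[-1]))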
- get_parameters() → ndarray [source]
Returns the model parameters (weights).
- Returns:
Array of model parameters.
- Return type:
np.ndarray
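Continuing the sketch, the fitted coefficients and intercept can be read back; the exact shapes depend on the implementation's weight layout:

    weights = model.get_parameters()   # np.ndarray of model weights
    intercept = model.get_intercept()  # np.ndarray, or None when use_intercept=False
    print(weights.shape, intercept)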
- load_model(filepath: str) → None [source]
Loads the model from a file.
- Parameters:
filepath (str) – Path to the file from which the model will be loaded.
- predict(input_new: Union[ndarray, list, float]) → ndarray [source]
Predicts output for new input data.
- Parameters:
input_new (Union[np.ndarray, list, float]) – New input data for prediction.
- Returns:
Predicted probabilities.
- Return type:
np.ndarray
- Raises:
ValueError – If the input does not have the expected number of features.
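Continuing the sketch, prediction on a single new sample; the four feature values are arbitrary and match the assumed training data above:

    probs = model.predict([0.5, -1.2, 0.3, 2.0])  # predicted class probabilities
    predicted_class = int(np.argmax(probs))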
- save_model(filepath: str) → None [source]
Saves the model to a file.
- Parameters:
filepath (str) – Path to the file where the model will be saved.
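A save-and-reload round trip might look like the following; the file name is illustrative, and it is assumed here that load_model restores state onto a freshly constructed instance (neither the on-disk format nor the reload requirements are specified above):

    model.save_model('logreg_model.json')  # illustrative path; format is implementation-defined

    restored = LogisticRegressionArcht(verbose=False)
    restored.load_model('logreg_model.json')
    restored.predict([0.5, -1.2, 0.3, 2.0])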
- train(train_set: Tuple[ndarray, ndarray]) → LogisticRegressionArcht [source]
Trains the model on the provided training set.
- Parameters:
train_set (tuple) – Tuple containing training input data (np.ndarray) and output data (np.ndarray).
- Returns:
The trained model instance.
- Return type:
LogisticRegressionArcht
mlektic.logistic_reg.logreg_utils module
- mlektic.logistic_reg.logreg_utils.calculate_accuracy(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the accuracy between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
Accuracy.
- Return type:
tf.Tensor
- mlektic.logistic_reg.logreg_utils.calculate_categorical_crossentropy(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the categorical cross-entropy loss between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
Categorical cross-entropy loss.
- Return type:
tf.Tensor
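A small sketch of the two utilities above, using hand-built one-hot labels and probability tensors. The import uses the full module path documented here; whether a shorter import path exists is not specified:

    import tensorflow as tf
    from mlektic.logistic_reg.logreg_utils import (
        calculate_accuracy, calculate_categorical_crossentropy)

    y_true = tf.constant([[1., 0., 0.],
                          [0., 1., 0.],
                          [0., 0., 1.]])
    y_pred = tf.constant([[0.8, 0.1, 0.1],
                          [0.2, 0.7, 0.1],
                          [0.3, 0.3, 0.4]])

    print(calculate_accuracy(y_true, y_pred))                  # 1.0 for this toy example
    print(calculate_categorical_crossentropy(y_true, y_pred))  # cross-entropy; reduction is implementation-defined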
- mlektic.logistic_reg.logreg_utils.calculate_confusion_matrix(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the confusion matrix between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
Confusion matrix of shape (2, 2).
- Return type:
tf.Tensor
- mlektic.logistic_reg.logreg_utils.calculate_f1_score(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the F1 score between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
F1 score.
- Return type:
tf.Tensor
- mlektic.logistic_reg.logreg_utils.calculate_precision(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the precision between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
Precision.
- Return type:
tf.Tensor
- mlektic.logistic_reg.logreg_utils.calculate_recall(y_true: Tensor, y_pred: Tensor) → Tensor [source]
Calculates the recall between true labels and predicted probabilities.
- Parameters:
y_true (tf.Tensor) – True labels, one-hot encoded. Shape should be (n_samples, num_classes).
y_pred (tf.Tensor) – Predicted probabilities. Shape should be (n_samples, num_classes).
- Returns:
Recall.
- Return type:
tf.Tensor
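Reusing y_true and y_pred from the previous sketch, the remaining metric utilities are called the same way:

    from mlektic.logistic_reg.logreg_utils import (
        calculate_precision, calculate_recall, calculate_f1_score,
        calculate_confusion_matrix)

    precision = calculate_precision(y_true, y_pred)
    recall = calculate_recall(y_true, y_pred)
    f1 = calculate_f1_score(y_true, y_pred)
    cm = calculate_confusion_matrix(y_true, y_pred)  # documented above as shape (2, 2)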