Random Hyperboxes with Hyper-parameter Optimisation for Base Learners

This example shows how to use a random hyperboxes classifier, in which each base hyperbox-based model is trained on a subset of features and a subset of samples using random search-based hyper-parameter tuning and k-fold cross-validation.

While the original random hyperboxes model in the class RandomHyperboxesClassifier uses the same hyperparameters for all base learners, the cross-validation random hyperboxes model in the class CrossValRandomHyperboxesClassifier allows each base learner to use hyperparameters suited to its own training data, found by performing a random search over candidate combinations for each base learner.
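
The sketch below illustrates, for a single base learner, the mechanism this class automates: sample hyperparameter settings at random, score each setting by stratified k-fold cross-validation on the learner's own subset of data, and refit with the winner. It is only an illustration of the idea, not hbbrain's internal code; the helper tune_one_base_learner is hypothetical.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import ParameterSampler, StratifiedKFold
from hbbrain.numerical_data.incremental_learner.onln_gfmm import OnlineGFMM

def tune_one_base_learner(X, y, param_distributions, n_iter=20, k_fold=5, random_state=0):
    # Randomly sample n_iter hyperparameter settings from the given ranges
    # and keep the one with the best stratified k-fold CV accuracy.
    skf = StratifiedKFold(n_splits=k_fold, shuffle=True, random_state=random_state)
    best_score, best_params = -np.inf, None
    for params in ParameterSampler(param_distributions, n_iter=n_iter, random_state=random_state):
        fold_scores = []
        for train_idx, val_idx in skf.split(X, y):
            model = OnlineGFMM(**params)
            model.fit(X[train_idx], y[train_idx])
            fold_scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))
        if np.mean(fold_scores) > best_score:
            best_score, best_params = np.mean(fold_scores), params
    # Refit on the learner's whole subset with the winning setting
    best_model = OnlineGFMM(**best_params)
    best_model.fit(X, y)
    return best_model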

[1]:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from hbbrain.numerical_data.ensemble_learner.cross_val_random_hyperboxes import CrossValRandomHyperboxesClassifier
from hbbrain.numerical_data.incremental_learner.onln_gfmm import OnlineGFMM

Load dataset.

This example will use the breast cancer dataset available in sklearn to demonstrate how to use this ensemble classifier.

[2]:
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler
[3]:
df = load_breast_cancer()
X = df.data
y = df.target
[4]:
# Normalise data into the range [0, 1] as hyperbox-based models only work in the unit cube
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
[5]:
# Split data into training, validation and testing sets
Xtr_val, X_test, ytr_val, y_test = train_test_split(X, y, train_size=0.8, random_state=0)
Xtr, X_val, ytr, y_val = train_test_split(Xtr_val, ytr_val, train_size=0.75, random_state=0)

This example will use the GFMM classifier with the original online learning algorithm as the base learner. However, any type of hyperbox-based learning algorithm in this library can also be used to train the base learners.
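
For example, to use the GFMM classifier trained with the improved incremental learning algorithm instead, one could swap the base learner as below (the module path is assumed from this library's layout):

from hbbrain.numerical_data.incremental_learner.iol_gfmm import ImprovedOnlineGFMM
base_estimator = ImprovedOnlineGFMM()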

1. Using random subsampling to generate training sets for various base learners

a. The number of features used in each base learner is different and is bounded by a maximum number of features

Training

[6]:
# Initialise parameters
n_estimators = 20 # number of base learners
max_samples = 0.5 # sampling rate for samples
max_features = 0.5 # sampling rate to generate the maximum number of features
class_balanced = False # do not use the class-balanced sampling mode
feature_balanced = False # use different numbers of features for base learners
n_jobs = 4 # number of processes used to build base learners
n_iter = 20 # Number of parameter settings that are randomly sampled to choose the best combination of hyperparameters
k_fold = 5 # Number of folds to conduct Stratified K-Fold cross-validation for hyperparameter tuning
[7]:
# Init a hyperbox-based model used to train base learners
# Using the GFMM classifier with the original online learning algorithm
base_estimator = OnlineGFMM()
[8]:
# Init ranges for hyperparameters of base learners to perform a random search process for hyperparameter tuning
base_estimator_params = {
    'theta': np.arange(0.05, 1.01, 0.05),  # maximum hyperbox size
    'theta_min': [1],  # theta_min >= theta means no adaptive reduction of theta
    'gamma': [0.5, 1, 2, 4, 8, 16],  # speed of decrease of the membership function
}
[9]:
cross_val_rh_subsampling_diff_num_features_clf = CrossValRandomHyperboxesClassifier(
    base_estimator=base_estimator, base_estimator_params=base_estimator_params,
    n_estimators=n_estimators, max_samples=max_samples, max_features=max_features,
    class_balanced=class_balanced, feature_balanced=feature_balanced,
    n_iter=n_iter, k_fold=k_fold, n_jobs=n_jobs, random_state=0)
cross_val_rh_subsampling_diff_num_features_clf.fit(Xtr, ytr)
[9]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   max_features=0.5, n_estimators=20, n_iter=20,
                                   n_jobs=4, random_state=0)
[10]:
print("Training time: %.3f (s)"%(cross_val_rh_subsampling_diff_num_features_clf.elapsed_training_time))
Training time: 37.453 (s)
[11]:
print('Total number of hyperboxes from all base learners = %d'%cross_val_rh_subsampling_diff_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners = 1110

Prediction

[12]:
y_pred = cross_val_rh_subsampling_diff_num_features_clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print(f'Testing accuracy = {acc * 100: .2f}%')
Testing accuracy =  93.86%
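
The ensemble's predict method runs every base learner on its own feature subset and combines the per-learner predictions. A hedged sketch of majority voting over the base learners, which is how the random hyperboxes model aggregates class labels (estimators_ and feature_subsets are illustrative names, not necessarily the library's attributes):

import numpy as np
from scipy import stats

def ensemble_predict(estimators_, feature_subsets, X):
    # Each base learner sees only the columns it was trained on
    per_learner_preds = np.array([est.predict(X[:, feats])
                                  for est, feats in zip(estimators_, feature_subsets)])
    # Majority vote over the base learners for every test sample
    return stats.mode(per_learner_preds, axis=0, keepdims=False).mode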

Apply pruning for base learners
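
Conceptually, simple pruning scores every hyperbox by its accuracy over the validation samples it classifies and removes the low-scoring ones. A rough sketch of the idea, assuming per-box tallies box_correct and box_total have already been collected on the validation set (illustrative only, not hbbrain's internals):

import numpy as np

def simple_prune(V, W, C, box_correct, box_total, acc_threshold=0.5, keep_empty_boxes=False):
    # Accuracy of each hyperbox over the validation samples it predicted
    scores = np.divide(box_correct, box_total,
                       out=np.zeros_like(box_correct, dtype=float), where=box_total > 0)
    keep = scores >= acc_threshold
    if keep_empty_boxes:
        keep |= (box_total == 0)  # retain boxes that never joined a prediction
    # V, W hold the minimum/maximum points of the hyperboxes; C holds their class labels
    return V[keep], W[keep], C[keep]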

[13]:
acc_threshold = 0.5 # minimum accuracy on the validation set for a hyperbox to be retained
keep_empty_boxes = False # False: hyperboxes that take no part in any prediction during pruning are also removed
cross_val_rh_subsampling_diff_num_features_clf.simple_pruning_base_estimators(X_val, y_val, acc_threshold, keep_empty_boxes)
[13]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   max_features=0.5, n_estimators=20, n_iter=20,
                                   n_jobs=4, random_state=0)
[14]:
print('Total number of hyperboxes from all base learners after pruning = %d'%cross_val_rh_subsampling_diff_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners after pruning = 671

Prediction after pruning

[15]:
y_pred_2 = cross_val_rh_subsampling_diff_num_features_clf.predict(X_test)
acc_pruned = accuracy_score(y_test, y_pred_2)
print(f'Testing accuracy (after pruning) = {acc_pruned * 100: .2f}%')
Testing accuracy (after pruning) =  95.61%
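
Here pruning removes 439 of the 1110 hyperboxes and lifts the testing accuracy from 93.86% to 95.61%, so the ensemble becomes both smaller and more accurate.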

b. The number of features used in each base learner is the same and is equal to the given maximum number of features

[16]:
# Initialise parameters
n_estimators = 20 # number of base learners
max_samples = 0.5 # sampling rate for samples
max_features = 0.5 # sampling rate to generate the maximum number of features
class_balanced = False # do not use the class-balanced sampling mode
# use the same number of features for every base learner, equal to the given maximum number of features
feature_balanced = True
n_jobs = 4 # number of processes used to build base learners
n_iter = 20 # Number of parameter settings that are randomly sampled to choose the best combination of hyperparameters
k_fold = 5 # Number of folds to conduct Stratified K-Fold cross-validation for hyperparameter tuning
[17]:
# Init a hyperbox-based model used to train base learners
# Using the GFMM classifier with the original online learning algorithm
base_estimator = OnlineGFMM()
[18]:
# Init ranges for hyperparameters of base learners to perform a random search process for hyperparameter tuning
base_estimator_params = {'theta': np.arange(0.05, 1.01, 0.05), 'theta_min':[1], 'gamma':[0.5, 1, 2, 4, 8, 16]}
[19]:
cross_val_rh_subsampling_same_num_features_clf = CrossValRandomHyperboxesClassifier(
    base_estimator=base_estimator, base_estimator_params=base_estimator_params,
    n_estimators=n_estimators, max_samples=max_samples, max_features=max_features,
    class_balanced=class_balanced, feature_balanced=feature_balanced,
    n_iter=n_iter, k_fold=k_fold, n_jobs=n_jobs, random_state=0)
cross_val_rh_subsampling_same_num_features_clf.fit(Xtr, ytr)
[19]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   feature_balanced=True, max_features=0.5,
                                   n_estimators=20, n_iter=20, n_jobs=4,
                                   random_state=0)
[20]:
print("Training time: %.3f (s)"%(cross_val_rh_subsampling_same_num_features_clf.elapsed_training_time))
Training time: 45.047 (s)
[21]:
print('Total number of hyperboxes from all base learners = %d'%cross_val_rh_subsampling_same_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners = 973

Prediction

[22]:
y_pred = cross_val_rh_subsampling_same_num_features_clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print(f'Testing accuracy = {acc * 100: .2f}%')
Testing accuracy =  93.86%

Apply pruning for base learners

[23]:
acc_threshold = 0.5 # minimum accuracy on the validation set for a hyperbox to be retained
keep_empty_boxes = False # False: hyperboxes that take no part in any prediction during pruning are also removed
cross_val_rh_subsampling_same_num_features_clf.simple_pruning_base_estimators(X_val, y_val, acc_threshold, keep_empty_boxes)
[23]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   feature_balanced=True, max_features=0.5,
                                   n_estimators=20, n_iter=20, n_jobs=4,
                                   random_state=0)

Prediction after pruning

[24]:
y_pred_2 = cross_val_rh_subsampling_same_num_features_clf.predict(X_test)
acc_pruned = accuracy_score(y_test, y_pred_2)
print(f'Testing accuracy (after pruning) = {acc_pruned * 100: .2f}%')
Testing accuracy (after pruning) =  94.74%

2. Using random undersampling to generate class-balanced training sets for various base learners
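
In this mode, the subset used to train each base learner contains the same number of samples from every class, obtained by randomly undersampling the larger classes. A rough sketch of drawing one such class-balanced subset (illustrative only; the exact subset size used by the library may differ):

import numpy as np

rng = np.random.default_rng(0)
classes = np.unique(ytr)
n_per_class = int(max_samples * len(ytr)) // len(classes)  # assumed sizing rule
idx = np.concatenate([
    rng.choice(np.where(ytr == c)[0],
               size=min(n_per_class, np.sum(ytr == c)), replace=False)
    for c in classes
])
X_subset, y_subset = Xtr[idx], ytr[idx]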

a. The number of features used in each base learner is different and is bounded by a maximum number of features

Training

[25]:
# Initialise parameters
n_estimators = 20 # number of base learners
max_samples = 0.5 # sampling rate for samples
max_features = 0.5 # sampling rate to generate the maximum number of features
class_balanced = True # use the class-balanced sampling mode
feature_balanced = False # use different numbers of features for base learners
n_jobs = 4 # number of processes used to build base learners
n_iter = 20 # Number of parameter settings that are randomly sampled to choose the best combination of hyperparameters
k_fold = 5 # Number of folds to conduct Stratified K-Fold cross-validation for hyperparameter tuning
[26]:
# Init a hyperbox-based model used to train base learners
# Using the GFMM classifier with the original online learning algorithm
base_estimator = OnlineGFMM()
[27]:
# Init ranges for hyperparameters of base learners to perform a random search process for hyperparameter tuning
base_estimator_params = {'theta': np.arange(0.05, 1.01, 0.05), 'theta_min':[1], 'gamma':[0.5, 1, 2, 4, 8, 16]}
[28]:
cross_val_rh_class_balanced_diff_num_features_clf = CrossValRandomHyperboxesClassifier(
    base_estimator=base_estimator, base_estimator_params=base_estimator_params,
    n_estimators=n_estimators, max_samples=max_samples, max_features=max_features,
    class_balanced=class_balanced, feature_balanced=feature_balanced,
    n_iter=n_iter, k_fold=k_fold, n_jobs=n_jobs, random_state=0)
cross_val_rh_class_balanced_diff_num_features_clf.fit(Xtr, ytr)
[28]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   class_balanced=True, max_features=0.5,
                                   n_estimators=20, n_iter=20, n_jobs=4,
                                   random_state=0)
[29]:
print("Training time: %.3f (s)"%(cross_val_rh_class_balanced_diff_num_features_clf.elapsed_training_time))
Training time: 33.372 (s)
[30]:
print('Total number of hyperboxes from all base learners = %d'%cross_val_rh_class_balanced_diff_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners = 1123

Prediction

[31]:
y_pred = cross_val_rh_class_balanced_diff_num_features_clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print(f'Testing accuracy = {acc * 100: .2f}%')
Testing accuracy =  92.11%

Apply pruning for base learners

[32]:
acc_threshold = 0.5 # minimum accuracy on the validation set for a hyperbox to be retained
keep_empty_boxes = False # False: hyperboxes that take no part in any prediction during pruning are also removed
cross_val_rh_class_balanced_diff_num_features_clf.simple_pruning_base_estimators(X_val, y_val, acc_threshold, keep_empty_boxes)
[32]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   class_balanced=True, max_features=0.5,
                                   n_estimators=20, n_iter=20, n_jobs=4,
                                   random_state=0)
[33]:
print('Total number of hyperboxes from all base learners after pruning = %d'%cross_val_rh_class_balanced_diff_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners after pruning = 663

Prediction after pruning

[34]:
y_pred_2 = cross_val_rh_class_balanced_diff_num_features_clf.predict(X_test)
acc_pruned = accuracy_score(y_test, y_pred_2)
print(f'Testing accuracy (after pruning) = {acc_pruned * 100: .2f}%')
Testing accuracy (after pruning) =  94.74%

b. The number of features used in each base learner is the same and is equal to the given maximum number of features

[35]:
# Initialise parameters
n_estimators = 20 # number of base learners
max_samples = 0.5 # sampling rate for samples
max_features = 0.5 # sampling rate to generate the maximum number of features
class_balanced = True # use the class-balanced sampling mode
# use the same number of features for every base learner, equal to the given maximum number of features
feature_balanced = True
n_jobs = 4 # number of processes used to build base learners
n_iter = 20 # Number of parameter settings that are randomly sampled to choose the best combination of hyperparameters
k_fold = 5 # Number of folds to conduct Stratified K-Fold cross-validation for hyperparameter tuning
[36]:
# Init a hyperbox-based model used to train base learners
# Using the GFMM classifier with the original online learning algorithm
base_estimator = OnlineGFMM()
[37]:
# Init ranges for hyperparameters of base learners to perform a random search process for hyperparameter tuning
base_estimator_params = {'theta': np.arange(0.05, 1.01, 0.05), 'theta_min':[1], 'gamma':[0.5, 1, 2, 4, 8, 16]}
[38]:
cross_val_rh_class_balanced_same_num_features_clf = CrossValRandomHyperboxesClassifier(
    base_estimator=base_estimator, base_estimator_params=base_estimator_params,
    n_estimators=n_estimators, max_samples=max_samples, max_features=max_features,
    class_balanced=class_balanced, feature_balanced=feature_balanced,
    n_iter=n_iter, k_fold=k_fold, n_jobs=n_jobs, random_state=0)
cross_val_rh_class_balanced_same_num_features_clf.fit(Xtr, ytr)
[38]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   class_balanced=True, feature_balanced=True,
                                   max_features=0.5, n_estimators=20, n_iter=20,
                                   n_jobs=4, random_state=0)
[39]:
print("Training time: %.3f (s)"%(cross_val_rh_class_balanced_same_num_features_clf.elapsed_training_time))
Training time: 30.501 (s)
[40]:
print('Total number of hyperboxes from all base learners = %d'%cross_val_rh_class_balanced_same_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners = 1623

Prediction

[41]:
y_pred = cross_val_rh_class_balanced_same_num_features_clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print(f'Testing accuracy = {acc * 100: .2f}%')
Testing accuracy =  91.23%

Apply pruning for base learners

[42]:
acc_threshold = 0.5 # minimum accuracy on the validation set for a hyperbox to be retained
keep_empty_boxes = False # False: hyperboxes that take no part in any prediction during pruning are also removed
cross_val_rh_class_balanced_same_num_features_clf.simple_pruning_base_estimators(X_val, y_val, acc_threshold, keep_empty_boxes)
[42]:
CrossValRandomHyperboxesClassifier(base_estimator=OnlineGFMM(C=array([], dtype=float64),
                                                             V=array([], dtype=float64),
                                                             W=array([], dtype=float64)),
                                   base_estimator_params={'gamma': [0.5, 1, 2,
                                                                    4, 8, 16],
                                                          'theta': array([0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5 , 0.55,
       0.6 , 0.65, 0.7 , 0.75, 0.8 , 0.85, 0.9 , 0.95, 1.  ]),
                                                          'theta_min': [1]},
                                   class_balanced=True, feature_balanced=True,
                                   max_features=0.5, n_estimators=20, n_iter=20,
                                   n_jobs=4, random_state=0)
[43]:
print('Total number of hyperboxes from all base learners after pruning = %d'%cross_val_rh_class_balanced_same_num_features_clf.get_n_hyperboxes())
Total number of hyperboxes from all base learners after pruning = 1234

Prediction after pruning

[44]:
y_pred_2 = cross_val_rh_class_balanced_same_num_features_clf.predict(X_test)
acc_pruned = accuracy_score(y_test, y_pred_2)
print(f'Testing accuracy (after pruning) = {acc_pruned * 100: .2f}%')
Testing accuracy (after pruning) =  95.61%
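
To summarise the four configurations on this train/test split:

Sampling scheme            Features per base learner   Accuracy    Accuracy after pruning
Random subsampling         varying, bounded by max     93.86%      95.61%
Random subsampling         fixed, equal to max         93.86%      94.74%
Class-balanced sampling    varying, bounded by max     92.11%      94.74%
Class-balanced sampling    fixed, equal to max         91.23%      95.61%

In every configuration, pruning on the validation set improves the testing accuracy while removing a substantial share of the hyperboxes.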