7.9. Examples
In this section, we use Gaussian process regression and Gaussian process classification to briefly demonstrate how scikit-learn is applied in practice.
We use the following toy and real-world datasets provided by scikit-learn:
diabetes dataset for single-output regression
California housing dataset for multi-output regression
breast cancer dataset for binary classification
handwritten digits dataset for multi-class classification
The scikit-learn website provides detailed information on the number of samples, the features, the labels, and the specific task for each dataset.
Each dataset is split into training and test data. The model is then constructed from the training data, and finally the model quality is measured on the test data with a suitable metric, as sketched below.
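A minimal, runnable sketch of this recurring workflow; the kernel is left at the scikit-learn default here, so no hand-picked hyperparameters are involved yet:
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score
# minimal sketch of the recurring workflow: split, scale, fit, evaluate
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()  # fit the scaler on the training data only
model = GaussianProcessRegressor()  # default kernel, for illustration only
model.fit(scaler.fit_transform(X_train), y_train)
# the full examples below additionally scale the labels and tune a custom kernel
print(r2_score(y_test, model.predict(scaler.transform(X_test))))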
7.9.1. Diabetes Progression
The data contains information on 442 diabetes patients (a short inspection snippet follows the list):
10 features are given (age, sex, BMI, average blood pressure, and six blood serum measurements)
the label is a quantitative measure of disease progression one year after baseline
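These properties can be checked directly on the loaded dataset object:
from sklearn.datasets import load_diabetes
# inspect the dataset properties listed above
dataset = load_diabetes()
print(dataset["data"].shape)     # (442, 10)
print(dataset["feature_names"])  # age, sex, bmi, bp, s1, ..., s6
print(dataset["DESCR"][:500])    # beginning of the full dataset description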
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
# load data
diabetes_dataset = load_diabetes()
X = diabetes_dataset["data"]
y = diabetes_dataset["target"].reshape(-1, 1)
# split data (80% train and 20% test)
# training: 353 samples, test: 89 samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# scale data
scalerX, scalery = StandardScaler(), StandardScaler()
X_train_ = scalerX.fit_transform(X_train)
y_train_ = scalery.fit_transform(y_train)
# construct GPR model
# scaled, anisotropic RBF kernel with noise
kernel = ConstantKernel() * RBF(length_scale=10 * [1.0], length_scale_bounds=(1e-03, 1e6))
kernel += WhiteKernel(noise_level=1e-3, noise_level_bounds=(1e-05, 5))
model = GaussianProcessRegressor(kernel)
model.fit(X_train_, y_train_)
# print optimized hyperparameter values
# kernel_ is Sum(Product(ConstantKernel, RBF), WhiteKernel): in get_params(),
# k1 is the product kernel (its k1 = ConstantKernel, its k2 = RBF) and k2 is the WhiteKernel
params = model.kernel_.get_params()
noise_level = params["k2"].get_params()["noise_level"]
scaling_factor = params["k1"].get_params()["k1"].get_params()["constant_value"]
length_scales = list(params["k1"].get_params()["k2"].get_params()["length_scale"])
print("scaling factor = {:.2f}, noise level = {:.2f}".format(scaling_factor, noise_level))
print("length scales = {}".format([np.round(l, 2) for l in length_scales]))
# test model
y_pred = model.predict(scalerX.transform(X_test))
y_pred = scalery.inverse_transform(y_pred)
y_pred = np.rint(y_pred) # labels should be integer valued
r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
print("\nr2 = {:.4f}, mse = {:.2f}, mae = {:.2f}".format(r2, mse, mae))
scaling factor = 1.29, noise level = 0.47
length scales = [6.29, 7.5, 5.02, 7.18, 15.81, 402.33, 7.71, 389.87, 3.13, 105.5]
r2 = 0.5233, mse = 2525.46, mae = 39.48
7.9.2. House Values and Incomes
The dataset contains housing information for 20640 areas (block groups) in California:
7 features (median house age, average number of rooms per household, average number of bedrooms per household, block group population, average number of household members, block group latitude, block group longitude)
2 labels (median house value, median income)
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel, WhiteKernel
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
# load data
housing_dataset = fetch_california_housing()
# use first feature (median income) as additional label
X = housing_dataset["data"][:, 1:]
y = np.stack((housing_dataset["target"], housing_dataset["data"][:, 0]), axis=1)
# split data (50% train and 50% test)
# training: 10320 samples, test: 10320 samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
# scale data
scalerX, scalery = StandardScaler(), StandardScaler()
X_train_ = scalerX.fit_transform(X_train)
y_train_ = scalery.fit_transform(y_train)
# decorrelate the two labels with PCA; the GPR below shares one kernel across
# both outputs and predicts them independently, which suits decorrelated targets
pca = PCA()
y_train_ = pca.fit_transform(y_train_)
# construct GPR model
# scaled, isotropic Matern 1.5 kernel with noise
kernel = ConstantKernel() * Matern(length_scale=1.0, length_scale_bounds=(1e-03, 1e6))
kernel += WhiteKernel(noise_level=1e-3, noise_level_bounds=(1e-05, 5))
model = GaussianProcessRegressor(kernel)
model.fit(X_train_, y_train_)
# print optimized hyperparameter values
params = model.kernel_.get_params()
noise_level = params["k2"].get_params()["noise_level"]
scaling_factor = params["k1"].get_params()["k1"].get_params()["constant_value"]
length_scale = params["k1"].get_params()["k2"].get_params()["length_scale"]
print("scaling factor = {:.2f}, noise level = {:.2f}".format(scaling_factor, noise_level))
print("length scale = {:.2f}".format(length_scale))
# test model
y_pred = model.predict(scalerX.transform(X_test))
y_pred = pca.inverse_transform(y_pred)
y_pred = scalery.inverse_transform(y_pred)
y_pred = np.rint(y_pred) # round predictions to integers (a coarse step, since both labels are continuous)
r2 = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
print("\nr2 = {:.4f}, mse = {:.2f}, mae = {:.2f}".format(r2, mse, mae))
scaling factor = 8.24, noise level = 0.20
length scale = 3.06
r2 = 0.7079, mse = 0.65, mae = 0.58
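Since two labels are predicted at once, it can also be instructive to score each output separately; continuing the session above (this step is not part of the original example):
# per-output scores: column 0 is the median house value, column 1 the median income
r2_each = r2_score(y_test, y_pred, multioutput="raw_values")
print("r2 (house value) = {:.4f}, r2 (income) = {:.4f}".format(*r2_each))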
7.9.3. Breast Cancer Diagnostic Analysis
The data contains information on 569 biopsy results from breast cancer examinations. The features describe characteristics of the extracted cell nuclei. Each sample is labeled as benign (357 in total) or malignant (212 in total).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern, ConstantKernel, WhiteKernel
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.metrics import ConfusionMatrixDisplay
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning) # silence RuntimeWarnings for cleaner output
# load data
breast_cancer_dataset = load_breast_cancer()
X = breast_cancer_dataset["data"]
y = breast_cancer_dataset["target"]
# split data (80% train and 20% test)
# training: 455 samples, test: 114 samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# scale data
scalerX = StandardScaler()
X_train_ = scalerX.fit_transform(X_train)
# construct GPC model
# scaled, anisotropic absolute exponential kernel with noise
kernel = ConstantKernel() * Matern(length_scale=30 * [1.0], length_scale_bounds=(1e-03, 1e20), nu=0.5)
kernel += WhiteKernel(noise_level=1e-3, noise_level_bounds=(1e-08, 5))
model = GaussianProcessClassifier(kernel)
model.fit(X_train_, y_train)
# print optimized hyperparameter values
params = model.kernel_.get_params()
noise_level = params["k2"].get_params()["noise_level"]
scaling_factor = params["k1"].get_params()["k1"].get_params()["constant_value"]
length_scales = list(params["k1"].get_params()["k2"].get_params()["length_scale"])
print("scaling factor = {:.2f}, noise level = {:.2f}".format(scaling_factor, noise_level))
print("length scales = {}".format([np.round(l, 2) for l in length_scales]))
# test model
y_pred = model.predict(scalerX.transform(X_test))
acc = accuracy_score(y_test, y_pred) * 100
f1 = f1_score(y_test, y_pred) * 100
precision = precision_score(y_test, y_pred) * 100
recall = recall_score(y_test, y_pred) * 100
print("\nacc = {:.2f}, f1 = {:.2f}".format(acc, f1))
print("precision = {:.2f}, recall = {:.2f}".format(precision, recall))
cm = ConfusionMatrixDisplay.from_predictions(y_test, y_pred)
scaling factor = 674.72, noise level = 0.00
length scales = [22220447.59, 48253.54, 25698107.08, 5461902.08, 139165.69, 221358.06, 77872.28, 32.85, 170944.6, 819012.1, 46.76, 83544.88, 1338096.18, 4372.21, 18749.19, 111.88, 1052014304.07, 25936.76, 6140.74, 20726331.75, 176378.45, 53.46, 25661.23, 20.01, 126.78, 151977.18, 124839808.72, 60.84, 71.55, 1009261.16]
acc = 98.25, f1 = 98.61
precision = 97.26, recall = 100.00
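GaussianProcessClassifier is a probabilistic classifier, so besides hard labels it also provides class membership probabilities via predict_proba; continuing the session above (not part of the original example):
# class probabilities for the first three test samples; each row sums to 1
# (column 0 = malignant, column 1 = benign)
proba = model.predict_proba(scalerX.transform(X_test))
print(np.round(proba[:3], 3))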
7.9.4. Handwritten Digits
The dataset contains 1797 images of handwritten digits with 8 x 8 pixels each (see image below), labeled with the corresponding digit.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# load data
digits_dataset = load_digits()
X = digits_dataset["data"]
y = digits_dataset["target"]
# split data (80% train and 20% test)
# training: 1437 samples, test: 360 samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# plot the first ten digits of the dataset
fig = plt.figure(figsize=(4, 4))
for i in range(10):
    image = digits_dataset.images[i]
    ax = fig.add_subplot(5, 2, i + 1)
    ax.imshow(image, cmap="binary")
    ax.axis("off")
plt.tight_layout()
# scale data
scalerX = StandardScaler()
X_train_ = scalerX.fit_transform(X_train)
# construct GPC model
# scaled, isotropic RBF kernel with noise
kernel = ConstantKernel() * RBF(length_scale=1.0, length_scale_bounds=(1e-03, 1e3))
kernel += WhiteKernel(noise_level=1e-3, noise_level_bounds=(1e-07, 5))
model = GaussianProcessClassifier(kernel)
model.fit(X_train_, y_train)
# print optimized hyperparameter values
params = model.kernel_.get_params()
print("10 models in total (one-vs-rest)")
for i in range(10):
print('\nmodel {}'.format(i+1))
params_tmp = params["kernels"][i].get_params()
noise_level = params_tmp["k2"].get_params()["noise_level"]
scaling_factor = params_tmp["k1"].get_params()["k1"].get_params()["constant_value"]
length_scale = params_tmp["k1"].get_params()["k2"].get_params()["length_scale"]
print("scaling factor = {:.2f}, noise level = {:.2f}, length scale = {:.2f}".format(scaling_factor,
noise_level, length_scale))
# test model
y_pred = model.predict(scalerX.transform(X_test))
acc = accuracy_score(y_test, y_pred) * 100
f1 = f1_score(y_test, y_pred, average=None, labels=range(10)) * 100
print("\nacc = {:.2f}".format(acc))
print("f1 = {}".format(f1))
cm = confusion_matrix(y_test, y_pred)
cm = ConfusionMatrixDisplay(cm)
fig, ax = plt.subplots(figsize=(8, 8))
cm_plot = cm.plot(ax=ax)
10 models in total (one-vs-rest)
model 1
scaling factor = 11662.99, noise level = 0.00, length scale = 48.31
model 2
scaling factor = 806.02, noise level = 0.00, length scale = 12.78
model 3
scaling factor = 60834.97, noise level = 0.00, length scale = 87.88
model 4
scaling factor = 1825.51, noise level = 0.00, length scale = 18.07
model 5
scaling factor = 14733.96, noise level = 0.00, length scale = 49.48
model 6
scaling factor = 9271.52, noise level = 0.00, length scale = 43.64
model 7
scaling factor = 9826.90, noise level = 0.00, length scale = 41.69
model 8
scaling factor = 4168.60, noise level = 0.00, length scale = 33.30
model 9
scaling factor = 394.03, noise level = 0.00, length scale = 10.18
model 10
scaling factor = 1912.99, noise level = 0.00, length scale = 21.01
acc = 97.78
f1 = [100. 98.24561404 100. 97.05882353 100.
95.74468085 97.14285714 98.50746269 96.66666667 95. ]
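The ten binary models stem from the default one-versus-rest scheme of GaussianProcessClassifier. Its multi_class parameter alternatively offers a one-versus-one scheme, which trains one binary model per class pair (45 models for 10 classes) and does not support predict_proba. A brief sketch, continuing the session above (not part of the original example):
# alternative multi-class scheme: one-vs-one (45 binary classifiers here);
# n_jobs=-1 parallelizes the fits over the binary subproblems
model_ovo = GaussianProcessClassifier(kernel, multi_class="one_vs_one", n_jobs=-1)
model_ovo.fit(X_train_, y_train)
print(accuracy_score(y_test, model_ovo.predict(scalerX.transform(X_test))) * 100)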