In [1]:
%matplotlib inline
============================ Underfitting vs. Overfitting ============================
This example shows how underfitting and overfitting arise when using polynomial regression to approximate a nonlinear function, \(y = \cos(1.5 \pi x)\).
The plots show the true function \(y(x)\) and the curves estimated by polynomials of different degrees.
We observe the following:
- The linear function (a polynomial of degree 1) is not flexible enough to fit the training samples; this is underfitting.
- A polynomial of degree 4 approximates the true function almost perfectly and gives the smallest MSE.
- For higher degrees (10 and 15), the model overfits the training data: the mean squared error (MSE) on the validation folds becomes very large because the model is learning the noise in the training data.
We evaluate underfitting and overfitting quantitatively by using cross-validation and computing the mean squared error (MSE) on the validation set. The higher the validation MSE, the less likely the model is to generalize correctly beyond the training data.
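One detail about the scoring convention used below: scikit-learn's cross_val_score returns the negated MSE when scoring="neg_mean_squared_error" (so that larger scores always mean better models), which is why the scores are negated again before being reported. Here is a minimal sketch of that convention on toy data of our own (the names X_toy and y_toy are hypothetical, not part of the original example):

# Sketch of the scoring convention (hypothetical toy data):
# cross_val_score returns the *negated* MSE for
# scoring="neg_mean_squared_error", so we flip the sign to
# recover a positive error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X_toy = rng.rand(30, 1)  # 30 points in [0, 1)
y_toy = 2.0 * X_toy.ravel() + rng.randn(30) * 0.1

neg_mse = cross_val_score(LinearRegression(), X_toy, y_toy,
                          scoring="neg_mean_squared_error", cv=10)
print("validation MSE: {:.2e}".format(-neg_mse.mean()))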
In [2]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
def true_fun(X):
    return np.cos(1.5 * np.pi * X)

np.random.seed(0)

n_samples = 30
degrees = [1, 4, 10, 15]

# Sample noisy training data from the true function
X = np.sort(np.random.rand(n_samples))
y = true_fun(X) + np.random.randn(n_samples) * 0.1

plt.figure(figsize=(14, 10))
for i in range(len(degrees)):
    ax = plt.subplot(2, 2, i + 1)
    plt.setp(ax, xticks=(), yticks=())

    # Fit a polynomial of the given degree: polynomial feature
    # expansion followed by linear regression
    polynomial_features = PolynomialFeatures(degree=degrees[i],
                                             include_bias=False)
    linear_regression = LinearRegression()
    pipeline = Pipeline([("polynomial_features", polynomial_features),
                         ("linear_regression", linear_regression)])
    pipeline.fit(X[:, np.newaxis], y)

    # Evaluate the models using cross-validation
    scores = cross_val_score(pipeline, X[:, np.newaxis], y,
                             scoring="neg_mean_squared_error", cv=10)

    X_test = np.linspace(0, 1, 100)
    plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="Model")
    plt.plot(X_test, true_fun(X_test), label="True function")
    plt.scatter(X, y, edgecolor='b', s=20, label="Samples")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.xlim((0, 1))
    plt.ylim((-2, 2))
    plt.legend(loc="best")
    plt.title("Degree {}\nMSE = {:.2e} (+/- {:.2e})".format(
        degrees[i], -scores.mean(), scores.std()))
plt.show()
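To turn the visual comparison into a numeric one, the same cross-validation loop can be reused to pick the degree with the lowest validation MSE directly. Below is a minimal sketch reusing X, y, and degrees from the cell above; the variable names cv_mse and best_degree are ours, not part of the original example.

In [3]:
# Hedged sketch: select the polynomial degree with the lowest
# cross-validated MSE, reusing X, y, and degrees from the cell above.
cv_mse = []
for degree in degrees:
    model = Pipeline([
        ("polynomial_features", PolynomialFeatures(degree=degree,
                                                   include_bias=False)),
        ("linear_regression", LinearRegression()),
    ])
    scores = cross_val_score(model, X[:, np.newaxis], y,
                             scoring="neg_mean_squared_error", cv=10)
    cv_mse.append(-scores.mean())  # flip the sign back to a positive MSE

best_degree = degrees[int(np.argmin(cv_mse))]
print("Best degree by cross-validated MSE:", best_degree)

With the settings above, this is expected to select degree 4, matching the observations listed earlier.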