I am currently running a program in a Jupyter notebook to classify the MNIST dataset. I am trying to do this with a KNN classifier, and it takes over an hour to run. I am new to classifiers and hyperparameters, and there doesn't seem to be any decent tutorial on how to implement one properly. Could anyone give me some tips on how to use hyperparameters for this classification? I have searched around and seen GridSearchCV and RandomizedSearchCV. From their examples it looks like they pick different attribute names and change them to whatever their code needs, and I don't understand how to do that for the MNIST dataset when the data is just handwritten digits. Since there are only digits, are hyperparameters even needed in this case? Here is the code I am currently running. Thanks for any help you can give.
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
def save_fig(fig_id, tight_layout=True):
    image_dir = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
    if not os.path.exists(image_dir):
        os.makedirs(image_dir)
    path = os.path.join(image_dir, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
def sort_by_target(mnist):
    reorder_train = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[:60000])]))[:, 1]
    reorder_test = np.array(sorted([(target, i) for i, target in enumerate(mnist.target[60000:])]))[:, 1]
    mnist.data[:60000] = mnist.data[reorder_train]
    mnist.target[:60000] = mnist.target[reorder_train]
    mnist.data[60000:] = mnist.data[reorder_test + 60000]
    mnist.target[60000:] = mnist.target[reorder_test + 60000]
try:
    from sklearn.datasets import fetch_openml
    mnist = fetch_openml('mnist_784', version=1, cache=True)
    mnist.target = mnist.target.astype(np.int8)  # fetch_openml() returns targets as strings
    sort_by_target(mnist)  # fetch_openml() returns an unsorted dataset
except ImportError:
    from sklearn.datasets import fetch_mldata
    mnist = fetch_mldata('MNIST original')
mnist["data"], mnist["target"]
mnist.data.shape
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
#select and display some digit from the dataset
import matplotlib
import matplotlib.pyplot as plt
some_digit_index = 7201
some_digit = X[some_digit_index]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=matplotlib.cm.binary,
           interpolation="nearest")
plt.axis("off")
save_fig("some_digit_plot")
plt.show()
#print some digit's label
print('The ground truth label for the digit above is: ',y[some_digit_index])
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
#random shuffle
import numpy as np
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)
from sklearn.metrics import f1_score
f1_score(y_multilabel, y_train_knn_pred, average="macro")
The most popular hyperparameter for KNN is n_neighbors, i.e. the number of nearest neighbours considered when assigning a label to a new point. By default it is set to 5, which may well not be the best choice, so it is usually worth finding the value that works best for your particular problem.
This is how you would find the best hyperparameter for your example:
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
param_grid = {"n_neighbors" : [3,5,7]}
KNN=KNeighborsClassifier()
grid=GridSearchCV(KNN, param_grid = param_grid , cv = 5, scoring = 'accuracy', return_train_score = False)
grid.fit(X_train,y_train)
What this does is compare the performance of the KNN model across the different values of n_neighbors you listed. Then, when you run:
print(grid.best_score_)
print(grid.best_params_)
it will show you the best performance score obtained and which parameter choice achieved it.
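Once the search has finished (with the default refit=True), the best model is refit on the whole training set and exposed as grid.best_estimator_, so you can evaluate it on the test split you already created. A minimal sketch building on the variables above:
best_knn = grid.best_estimator_  # KNN refit with the best n_neighbors found
test_accuracy = best_knn.score(X_test, y_test)  # accuracy on the held-out test set
print("Test accuracy with tuned n_neighbors:", test_accuracy)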
None of this is specific to the fact that you are working with MNIST data. You can use the same approach for any other classification task, as long as you think KNN might be a sensible choice for that task (which is debatable for image classification). The only thing that changes from one task to another is the optimal value of the hyperparameter.
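Since you also mentioned RandomizedSearchCV: it works the same way as GridSearchCV, but samples a fixed number of parameter combinations instead of trying every one, which can save a lot of time when the grid is large. A minimal sketch (the candidate values and n_iter here are only an illustration, not MNIST-specific recommendations):
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier
param_distributions = {
    "n_neighbors": [3, 5, 7, 9, 11],      # candidate neighbour counts to sample from
    "weights": ["uniform", "distance"],   # uniform vs. distance-weighted voting
}
rnd_search = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_distributions=param_distributions,
    n_iter=5,            # try only 5 random combinations instead of the full grid
    cv=3,                # fewer folds to keep the runtime down
    scoring="accuracy",
    n_jobs=-1,           # use all CPU cores; KNN on MNIST is slow otherwise
    random_state=42,
)
rnd_search.fit(X_train, y_train)
print(rnd_search.best_score_)
print(rnd_search.best_params_)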
PS: I would advise against the name y_multilabel, because it usually refers to a specific kind of classification task in which each data point can carry several labels at once, which is not the case in MNIST (each image represents exactly one digit at a time).
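If all you need is the plain ten-class digit classifier, you can fit the KNN on y_train itself instead of the two-column multilabel target, for example:
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier()      # n_neighbors defaults to 5 until you tune it
knn_clf.fit(X_train, y_train)         # one label per image: the digit 0-9
print(knn_clf.predict([some_digit]))  # predicted digit for the sample shown earlier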