I'm very new to machine learning, so I'm just playing around with some examples and the like. The image size specified in the code is (28, 28), but for some reason I keep getting the same ValueError and I can't figure out why. The code is as follows:
import pandas as pd
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
# input image dimensions
img_rows, img_cols = 28, 28
batch_size = 128 # Number of images used in each optimization step
nb_classes = 10 # One class per digit
nb_epoch = 35 # Number of times the whole data is used to learn
# Read the train and test datasets
train = pd.read_csv("../input/train.csv").values
test = pd.read_csv("../input/test.csv").values
# Reshape the data to be used by a Theano CNN. Shape is
# (nb_of_samples, nb_of_color_channels, img_width, img_height)
X_train = train[:, 1:].reshape(train.shape[0], 1, img_rows, img_cols)
X_test = test.reshape(test.shape[0], 1, img_rows, img_cols)
y_train = train[:, 0] # The first column is the label (excluded from X_train above)
# Make the value floats in [0;1] instead of int in [0;255]
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices (ie one-hot vectors)
Y_train = np_utils.to_categorical(y_train, nb_classes)
#Display the shapes to check if everything's ok
print('X_train shape:', X_train.shape)
print('Y_train shape:', Y_train.shape)
print('X_test shape:', X_test.shape)
model = Sequential()
# For an explanation on conv layers see http://cs231n.github.io/convolutional-networks/#conv
# By default the stride/subsample is 1
# border_mode "valid" means no zero-padding.
# If you want zero-padding add a ZeroPadding layer or, if stride is 1 use border_mode="same"
model.add(Convolution2D(12, 5, 5, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
# For an explanation on pooling layers see http://cs231n.github.io/convolutional-networks/#pool
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))
model.add(Convolution2D(24, 5, 5))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))
# Flatten the 3D output to 1D tensor for a fully connected layer to accept the input
model.add(Flatten())
model.add(Dense(180))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes)) #Last layer with one output per class
model.add(Activation('softmax')) # We want a score similar to a probability for each class
# The function to optimize is the cross entropy between the true label and the output (softmax) of the model
# We will use adadelta to do the gradient descent see http://cs231n.github.io/neural-networks-3/#ada
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])
# Make the model learn
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
# Predict the label for X_test
yPred = model.predict_classes(X_test)
# Save prediction in file for Kaggle submission
np.savetxt('mnist-pred.csv', np.c_[range(1,len(yPred)+1),yPred], delimiter=',', header = 'ImageId,Label', comments = '', fmt='%d')
So the problem lies in the convolution sizes being used. Convolution operations normally reduce the spatial dimensions of the image, and likewise every pooling operation shrinks it further. You have very small images yet applied a model architecture designed for much larger ones, so at some point, after one of the convolutions/poolings, you actually end up with an output image that is smaller than the following filter size, which is an ill-defined operation.
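To make that concrete: with border_mode='valid' (no zero-padding) and stride 1, a convolution shrinks each spatial side from n to n - kernel + 1, and a non-overlapping 2x2 pooling halves it. A rough sketch for tracing the sizes through your layers (the helper names here are made up purely for illustration):

def conv_valid(side, kernel):
    # output side length of a 'valid' convolution with stride 1
    return side - kernel + 1

def pool2(side):
    # non-overlapping 2x2 max pooling halves the side length
    return side // 2

side = 28                           # starting from a 28x28 image
side = pool2(conv_valid(side, 5))   # after a 5x5 conv and a 2x2 pool
print(side)
# Before stacking another Convolution2D(..., k, k) with border_mode='valid',
# check that side >= k; if the feature map is smaller than the filter,
# the operation is not well defined.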
To get around this for now, remove the second convolution and max-pooling layers, since those operations (with the parameters given) cannot be performed on such small data. In general, you should first understand how convolution works rather than apply someone else's model, because the parameters are crucial for good performance: if you apply transformations that reduce the resolution too far, you will not be able to learn anything. Once you have some intuition for how convolution works, you can go back and try different architectures, but there is no "magic" formula for deriving an architecture, so I cannot give you parameters that will "just work". Start by removing the extra convolution and pooling, and come back to try other possibilities once you understand your data and model better.
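As a minimal sketch of that suggestion (same imports and constants as in your code; I'm not claiming these filter counts or dense sizes are well tuned, they are simply kept from your original model), the simplified network would look something like this:

model = Sequential()
model.add(Convolution2D(12, 5, 5, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.15))
# the second Convolution2D / MaxPooling2D / Dropout block is removed here
model.add(Flatten())
model.add(Dense(180))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])

Once this trains end to end, you can reintroduce a second convolution/pooling block and adjust the kernel and pooling sizes, checking the spatial dimensions along the way as shown above.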