I'm considering using cross-validation to select hyperparameters (e.g. regularization), or perhaps training multiple initializations of a model and then picking the one with the highest cross-validation accuracy. Implementing k-fold CV myself is simple but tedious/annoying (especially if I'm trying to train different models on different CPUs, GPUs, or even different machines). I would hope that a library like TensorFlow implements something like this for its users, so that we don't all have to write the same code a hundred times. So: does TensorFlow have a library or some feature that can help me do cross-validation?
As an update, it seems it may be possible to do this with scikit-learn or something similar. If that's the case, it would be great if someone could provide a simple example of training a neural network and cross-validating it with scikit-learn! I'm not sure whether that would work across multiple CPUs, GPUs, clusters, etc.
As already discussed, TensorFlow does not provide its own way to cross-validate a model. The recommended approach is to use KFold. It's a bit tedious, but doable. Here's a complete example of cross-validating an MNIST model with tensorflow and KFold:
from sklearn.model_selection import KFold
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Parameters
learning_rate = 0.01
batch_size = 500

# TF graph
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
pred = tf.nn.softmax(tf.matmul(x, W) + b)
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()

mnist = input_data.read_data_sets("data/mnist-tf", one_hot=True)
train_x_all = mnist.train.images
train_y_all = mnist.train.labels
test_x = mnist.test.images
test_y = mnist.test.labels

def run_train(session, train_x, train_y):
    print("\nStart training")
    session.run(init)
    for epoch in range(10):
        total_batch = int(train_x.shape[0] / batch_size)
        for i in range(total_batch):
            batch_x = train_x[i * batch_size:(i + 1) * batch_size]
            batch_y = train_y[i * batch_size:(i + 1) * batch_size]
            _, c = session.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
            if i % 50 == 0:
                print("Epoch #%d step=%d cost=%f" % (epoch, i, c))

def cross_validate(session, split_size=5):
    results = []
    kf = KFold(n_splits=split_size)
    for train_idx, val_idx in kf.split(train_x_all, train_y_all):
        train_x = train_x_all[train_idx]
        train_y = train_y_all[train_idx]
        val_x = train_x_all[val_idx]
        val_y = train_y_all[val_idx]
        run_train(session, train_x, train_y)
        results.append(session.run(accuracy, feed_dict={x: val_x, y: val_y}))
    return results

with tf.Session() as session:
    result = cross_validate(session)
    print("Cross-validation result: %s" % result)
    print("Test accuracy: %f" % session.run(accuracy, feed_dict={x: test_x, y: test_y}))
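The key piece above is that kf.split simply yields a pair of NumPy index arrays per fold, so the same pattern works with any model and any framework. A minimal standalone sketch (the toy data and variable names here are my own, not from the example above):

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy dataset: 10 samples, 3 features
X = np.arange(30).reshape(10, 3)

kf = KFold(n_splits=5)
fold_sizes = []
for train_idx, val_idx in kf.split(X):
    # Each fold holds out 10/5 = 2 samples for validation
    fold_sizes.append((len(train_idx), len(val_idx)))
    # Within a fold, train and validation indices never overlap
    assert set(train_idx).isdisjoint(val_idx)
```

Each of the 5 folds trains on 8 samples and validates on the remaining 2, and every sample appears in exactly one validation set across the folds.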
Cross-validation gets more expensive as the dataset grows, and in deep learning we usually work with large datasets, so simple (hold-out) training is generally sufficient. TensorFlow has no built-in CV mechanism because CV is not commonly used for neural networks. With neural networks, performance depends mainly on the dataset, the number of epochs, and the learning rate.
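The "simple training" described above usually means a single hold-out split rather than rotating through k folds. A sketch using scikit-learn's train_test_split (the toy data and the 20% split are my own choices for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy dataset: 100 samples, 2 features
X = np.arange(200).reshape(100, 2)
y = np.arange(100)

# Hold out 20% once for validation instead of training k times
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)
```

This trains the model once on 80% of the data, which for a large dataset gives a validation estimate nearly as reliable as k-fold CV at a fraction of the compute.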
I have used CV in sklearn; you can check the link: https://github.com/hackmaster0110/Udacity-Data-Analyst-Nano-Degree-Projects/
In that repo, go to Identify fraud from enron data (in the Project folder).
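For plain scikit-learn estimators (as in that project), cross_val_score wraps the whole fold loop in one call. A small sketch with a toy dataset and estimator of my own choosing:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# Fits the estimator on each of 5 folds and returns one accuracy per fold
scores = cross_val_score(clf, X, y, cv=5)
```

This is the convenience the original question was asking for, but it only works out of the box for estimators that implement the scikit-learn fit/predict interface, not for a raw TensorFlow graph like the one above.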