PC freezes while training a CNN on the CIFAR-10 dataset



I am training on the CIFAR-10 dataset using TensorFlow. When I run the training loop, my PC freezes.

# forward propagation
# convolution layer 1
c1 = tf.nn.conv2d(x_train, w1, strides=[1, 1, 1, 1], padding='SAME')
# activation function for c1: ReLU
r1 = tf.nn.relu(c1)
# max pooling
p1 = tf.nn.max_pool(r1, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')
print('p1 shape: ', p1.shape)
# convolution layer 2
c2 = tf.nn.conv2d(p1, w2, strides=[1, 1, 1, 1], padding='SAME')
# activation function for c2: ReLU
r2 = tf.nn.relu(c2)
# max pooling
p2 = tf.nn.max_pool(r2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
print('p2 shape: ', p2.shape)
# flatten the pooled feature map
l1 = tf.contrib.layers.flatten(p2)
# fully connected output layer
final = tf.contrib.layers.fully_connected(l1, 10, activation_fn=None)
print('output layer shape: ', final.shape)
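The printed shapes can be checked by hand. Assuming the usual 32×32 CIFAR-10 inputs, `'SAME'` padding makes each layer's output spatial size `ceil(input / stride)`, so only the stride-2 pooling in `p2` halves the feature map; the filter counts in `w1`/`w2` are not shown in the question, so the channel dimension is left out of this sketch:

```python
import math

def same_out(size, stride):
    # 'SAME' padding: output spatial size = ceil(input / stride)
    return math.ceil(size / stride)

h = 32               # CIFAR-10 images are 32x32
h = same_out(h, 1)   # conv1, stride 1 -> 32
h = same_out(h, 1)   # pool1, stride 1 -> 32 (p1 keeps the full size)
h = same_out(h, 1)   # conv2, stride 1 -> 32
h = same_out(h, 2)   # pool2, stride 2 -> 16
print(h)             # → 16, so p2 is 16x16 and flatten gives 16*16*channels features
```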

I am using softmax cross-entropy and the Adam optimizer:

# training and optimization
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = final, labels = y_train))
# using adam optimizer 
optimize = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
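For reference, `softmax_cross_entropy_with_logits` computes, per example, `-sum(labels * log_softmax(logits))`, and `reduce_mean` averages that over the batch. A small NumPy check of the same formula on a single made-up example:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable log-softmax: shift by the row max first
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross-entropy per example against one-hot (or soft) labels
    return -(labels * log_softmax).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy(logits, labels).mean())  # ≈ 0.417
```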

This is where it freezes:

# creating tensorflow session
se = tf.Session()
# initializing variables
se.run(tf.global_variables_initializer())
# training the graph
for i in range(1000):
    x_batch, y_batch = mini_batch(x_train, y_train, 110)
    se.run(optimize, {x: x_batch, y: y_batch})
cost = se.run(cross_entropy, {x: x_train, y: y_train})
print(cost)
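`mini_batch` is not shown in the question. A plausible implementation, sampling a random batch of the requested size with NumPy (an assumption; the real helper may shuffle or iterate differently):

```python
import numpy as np

def mini_batch(x_train, y_train, batch_size):
    # Sample a random mini-batch without replacement (assumed behavior;
    # the question does not show the actual helper)
    idx = np.random.choice(len(x_train), size=batch_size, replace=False)
    return x_train[idx], y_train[idx]

# usage with dummy data shaped like CIFAR-10 inputs and one-hot labels
x = np.zeros((500, 32, 32, 3), dtype=np.float32)
y = np.zeros((500, 10), dtype=np.float32)
xb, yb = mini_batch(x, y, 110)
print(xb.shape, yb.shape)  # → (110, 32, 32, 3) (110, 10)
```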

Well, it would help if you also mentioned your PC configuration. That said, the program you are running is neither computationally heavy nor one that contains an infinite loop, so I suspect the problem is on the PC side: if you are running many other applications at the same time, the Python process may not be able to get a large enough allocation, and the freeze/hang follows from that. Given that this code runs fine on my 2012 MacBook Pro, it is not necessarily a code-related problem.
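One memory-related suspect is the final `se.run(cross_entropy, {x: x_train, y: y_train})`, which feeds the entire training set in a single pass. If the freeze is memory pressure, evaluating the loss in fixed-size chunks and averaging keeps the peak allocation small. A framework-agnostic sketch of that averaging, where `loss_fn` stands in for one `se.run` call per chunk (a hypothetical helper, not part of the question's code):

```python
import numpy as np

def chunked_mean_loss(loss_fn, x, y, chunk=1000):
    # Average a per-batch mean loss over the full dataset in fixed-size
    # chunks, weighting each chunk by its size, instead of one giant feed
    total, n = 0.0, len(x)
    for i in range(0, n, chunk):
        xb, yb = x[i:i + chunk], y[i:i + chunk]
        total += loss_fn(xb, yb) * len(xb)
    return total / n

# usage with a stand-in loss: mean squared value of the inputs
x = np.arange(10, dtype=np.float64).reshape(10, 1)
y = np.zeros((10, 1))
mse = lambda xb, yb: float(np.mean(xb ** 2))
print(chunked_mean_loss(mse, x, y, chunk=4))  # → 28.5, same as np.mean(x ** 2)
```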
