Simple convolutional neural network can't reduce the cost



I built my first CNN with TensorFlow. The goal is to recognize whether a 100x100-pixel black-and-white photo contains a dog or a cat. The problem is some strange behavior in the network's output: the first output looks OK, but the others are far too extreme. I suspect the raw outputs (the last values before the softmax) are much too large, but I can't find a way to reduce them. Things I have already tried without success:

  • changing the weights and biases from random_normal to 1 and 0
  • greatly reducing the learning rate
  • using tf.image.per_image_standardization() (a sketch of how I applied it is below)
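For reference, this is roughly how I applied the standardization (a minimal sketch; placing it between the rawInput placeholder and the reshape is my assumption about where it belongs in the graph below):

img = tf.reshape(rawInput, [100, 100, 1])          # 3-D tensor: [height, width, channels]
img_std = tf.image.per_image_standardization(img)  # zero mean, unit variance per image
input = tf.reshape(img_std, [-1, 100, 100, 1])     # back to a batch of one for conv2d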

My code:

from PIL import Image
import tensorflow as tf
import numpy as np
import os, random

nClasses = 2
iHeight = 100
iWidth = 100
ds = []

w = {
    "w1": tf.Variable(tf.ones([5, 5, 1, 32], dtype=float)),
    "w2": tf.Variable(tf.ones([5, 5, 32, 64], dtype=float)),
    "w3": tf.Variable(tf.ones([90 * 90 * 64, 50], dtype=float)),
    "w4": tf.Variable(tf.ones([50, 2], dtype=float)),
}
b = {
    "b1": tf.Variable(tf.zeros([32], dtype=float)),
    "b2": tf.Variable(tf.zeros([64], dtype=float)),
    "b3": tf.Variable(tf.zeros([50], dtype=float)),
    "b4": tf.Variable(tf.zeros([2], dtype=float)),
}

def loadImage(path):
    img = Image.open(path)
    data = np.asarray(img, dtype=float)
    return data

for file in os.listdir("dataset/cat/"):
    ds.append({"input": loadImage("dataset/cat/" + file), "output": np.array([[1, 0]], dtype=float)})
for file in os.listdir("dataset/dog/"):
    ds.append({"input": loadImage("dataset/dog/" + file), "output": np.array([[0, 1]], dtype=float)})

rawInput = tf.placeholder(tf.float32, [iWidth, iHeight], name="input")
output = tf.placeholder(tf.float32, [None, nClasses], name="iClass")
input = tf.reshape(rawInput, [-1, 100, 100, 1])

# First convolution and pooling
conv1 = tf.nn.conv2d(input, w["w1"], strides=[1, 1, 1, 1], padding="VALID")
conv1_b = tf.nn.bias_add(conv1, b["b1"])
conv1_a = tf.nn.relu(conv1_b)
pool1 = tf.nn.max_pool(conv1_a, [1, 2, 2, 1], strides=[1, 1, 1, 1], padding="VALID")

# Second convolution and pooling
conv2 = tf.nn.conv2d(pool1, w["w2"], strides=[1, 1, 1, 1], padding="VALID")
conv2_b = tf.nn.bias_add(conv2, b["b2"])
conv2_a = tf.nn.relu(conv2_b)
pool2 = tf.nn.max_pool(conv2_a, [1, 2, 2, 1], strides=[1, 1, 1, 1], padding="VALID")

# Transforming to the first fully connected layer
pool2_fcl = tf.reshape(pool2, [1, 90 * 90 * 64])
fcl1 = tf.matmul(pool2_fcl, w["w3"])
fcl1_b = tf.nn.bias_add(fcl1, b["b3"])
fcl1_a = tf.nn.relu(fcl1_b)

# Second fully connected layer
fcl2 = tf.matmul(fcl1, w["w4"])
fcl2_b = tf.nn.bias_add(fcl2, b["b4"])
pred = tf.nn.softmax(fcl2_b)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=fcl2_b, labels=output))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for x in range(100):
        y = random.randint(0, len(ds) - 1)
        optimizerVal, rawOutput, predVal, labelVal, costVal = sess.run(
            [optimizer, fcl2_b, pred, output, cost],
            feed_dict={rawInput: ds[y]["input"], output: ds[y]["output"]})
        print("Raw Output (fcl2_b): %s\nPred: %s\nLabel: %s\nCost: %s\n\n" % (rawOutput, predVal, labelVal, costVal))
        print(b["b1"].eval())

Output:

Raw Output (fcl2_b): [[1.0071293e+14 1.0071293e+14]]
Pred: [[0.5 0.5]]
Label: [[1. 0.]]
Cost: 0.6931472

Raw Output (fcl2_b): [[1.0680586e+14 1.0680579e+14]]
Pred: [[1. 0.]]
Label: [[1. 0.]]
Cost: 0.0

Raw Output (fcl2_b): [[6.8944283e+13 6.8944161e+13]]
Pred: [[1. 0.]]
Label: [[0. 1.]]
Cost: 121634820.0

Raw Output (fcl2_b): [[5.6959584e+13 5.6959487e+13]]
Pred: [[1. 0.]]
Label: [[1. 0.]]
Cost: 0.0

Raw Output (fcl2_b): [[8.481456e+13 8.481446e+13]]
Pred: [[1. 0.]]
Label: [[0. 1.]]
Cost: 92274690.0

Thanks for your help!

Try changing the weight initialization:

tf.Variable(tf.random_normal([5,5,1,32]), dtype=tf.float32)

The biases are fine, but you should break the symmetry of the weights: if every weight starts with the same value, every unit in a layer computes the same output and receives the same gradient, so the units never learn different features. See the example here.
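Concretely, a minimal sketch of symmetry-breaking initialization for all four weight tensors (the stddev of 0.1 is an assumed starting point, not a tuned value; a variance-scaling scheme such as Xavier initialization is the usual refinement):

# Random initial weights break the symmetry; the all-zero biases can stay as they are.
w = {
    "w1": tf.Variable(tf.random_normal([5, 5, 1, 32], stddev=0.1)),
    "w2": tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.1)),
    "w3": tf.Variable(tf.random_normal([90 * 90 * 64, 50], stddev=0.1)),
    "w4": tf.Variable(tf.random_normal([50, 2], stddev=0.1)),
}

Scaling the pixel values down (for example with the per-image standardization you already tried) should also keep the raw logits from blowing up into the e+14 range.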
