Function approximation with a neural network - loss 0



I'm trying to create a NN to approximate functions (sin, cos, custom...), but I'm struggling with the format: I don't want to use input-label pairs, but input-output pairs. How can I change that?

I am following this tutorial.

import tensorflow as tf
import random
from math import sin
import numpy as np

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_inputs = 1   # CHANGES HERE
n_outputs = 1  # CHANGES HERE

batch_size = 100

x = tf.placeholder('float', [None, n_inputs])   # CHANGES HERE
y = tf.placeholder('float', [None, n_outputs])  # CHANGES HERE


def neural_network_model(data):
    hidden_layer_1 = {'weights': tf.Variable(tf.random_normal([n_inputs, n_nodes_hl1])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}  # CHANGES HERE
    hidden_layer_2 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hidden_layer_3 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                      'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_outputs])),
                    'biases': tf.Variable(tf.random_normal([n_outputs]))}  # CHANGES HERE

    l1 = tf.add(tf.matmul(data, hidden_layer_1['weights']), hidden_layer_1['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_layer_2['weights']), hidden_layer_2['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden_layer_3['weights']), hidden_layer_3['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'])
    return output


def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optmizer = tf.train.AdamOptimizer().minimize(cost)

    epochs = 10

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(epochs):
            loss = 0
            for _ in range(batch_size^2):  # CHANGES HERE
                batch_x, batch_y = generate_input_output(batch_size)  # CHANGES HERE
                a, c = sess.run([optmizer, cost], feed_dict={x: batch_x, y: batch_y})
                loss += c
            print("Epoch:", epoch+1, "out of", epochs, "- Loss:", loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))

        test_x, test_y = generate_input_output(batch_size)  # CHANGES HERE
        print('Accuracy', accuracy.eval({x: test_x, y: test_y}))


def desired_function(x):  # CHANGES HERE
    return sin(x)


def generate_input_output(batch_size):  # CHANGES HERE
    batch_x = [random.uniform(-10, 10) for _ in range(batch_size)]
    batch_y = [desired_function(x) for x in batch_x]
    batch_x = np.reshape(batch_x, (-1, 1))
    batch_y = np.reshape(batch_y, (-1, 1))
    return batch_x, batch_y


train_neural_network(x)

Your solution seems very verbose to me. I'm going to post a much simplified one; could you point out which part of it you don't understand?

import tensorflow as tf
import numpy as np

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500
batch_size = 100

x = tf.placeholder('float', [None, 1])
y = tf.placeholder('float', [None, 1])

x1 = tf.contrib.layers.fully_connected(x, n_nodes_hl1)
x2 = tf.contrib.layers.fully_connected(x1, n_nodes_hl2)
x3 = tf.contrib.layers.fully_connected(x2, n_nodes_hl3)
result = tf.contrib.layers.fully_connected(x3, 1,
                                           activation_fn=None)

loss = tf.nn.l2_loss(result - y)

train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(10000):
        xpts = np.random.rand(batch_size) * 10
        ypts = np.sin(xpts)

        _, loss_result = sess.run([train_op, loss],
                                  feed_dict={x: xpts[:, None],
                                             y: ypts[:, None]})

        print('iteration {}, loss={}'.format(i, loss_result))

It seems to me that your solution was intended for classification and you haven't fully rewritten it for regression, since there are still leftovers like softmax_cross_entropy_with_logits in there, which you certainly don't want in a regression network.

I haven't tried it myself, so there may be more things to change to get the model running, but you definitely want to change this line:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))

to something more like:

cost = tf.reduce_sum(tf.square(prediction - y))

Basically, your cost function is much simpler in this case... you just want to minimize the sum of squared differences between the network's output and the expected y values.
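
Putting that together, the end of your train_neural_network could look roughly like the sketch below (untested; it reuses neural_network_model and generate_input_output from your code, and replaces the argmax-based accuracy, which is meaningless for a single continuous output, with a mean-absolute-error check of my own):

def train_neural_network(x):
    prediction = neural_network_model(x)
    # squared-error cost instead of softmax cross entropy
    cost = tf.reduce_sum(tf.square(prediction - y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    epochs = 10

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(epochs):
            loss = 0
            # note: '^' in your original loop is bitwise XOR in Python; '**' is exponentiation
            for _ in range(batch_size ** 2):
                batch_x, batch_y = generate_input_output(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
                loss += c
            print("Epoch:", epoch + 1, "out of", epochs, "- Loss:", loss)

        # mean absolute error on a fresh batch instead of classification accuracy
        mae = tf.reduce_mean(tf.abs(prediction - y))
        test_x, test_y = generate_input_output(batch_size)
        print('Mean absolute error:', mae.eval({x: test_x, y: test_y}))

Whether you use reduce_sum or reduce_mean of the squared error only changes the scale of the cost, not what the optimizer converges to.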
