Why does my TensorFlow linear regression implementation on the Ames Housing dataset converge extremely slowly?



I am trying to run linear regression on the Ames Housing dataset available on Kaggle.

I did some manual cleaning of the data, mainly by dropping a lot of the features up front.
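Roughly, the cleaning went something like this (a pandas sketch for context; the dropped column names are just examples of sparse columns, not my exact list):

import pandas as pd
import numpy as np

df = pd.read_csv('train.csv')

# Drop a handful of mostly-empty columns (placeholder names, not my exact list).
df = df.drop(columns=['Id', 'Alley', 'PoolQC', 'Fence', 'MiscFeature'])

# Keep only the numeric columns and replace missing values with 0.
df = df.select_dtypes(include=[np.number]).fillna(0)

# 'SalePrice' (the last column in the Kaggle file) is the label.
x_all = df.drop(columns=['SalePrice']).values.astype(np.float64)
y_all = df[['SalePrice']].values.astype(np.float64)

# Simple 60/20/20 split into train / validation / test.
n = len(x_all)
x_train, y_train = x_all[:int(0.6 * n)], y_all[:int(0.6 * n)]
x_valid, y_valid = x_all[int(0.6 * n):int(0.8 * n)], y_all[int(0.6 * n):int(0.8 * n)]
x_test, y_test = x_all[int(0.8 * n):], y_all[int(0.8 * n):]

I then trained with the following implementation: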

train_size = np.shape(x_train)[0]
valid_size = np.shape(x_valid)[0]
test_size = np.shape(x_test)[0]
num_features = np.shape(x_train)[1]

graph = tf.Graph()
with graph.as_default():
    # Input
    tf_train_dataset = tf.constant(x_train)
    tf_train_labels = tf.constant(y_train)
    tf_valid_dataset = tf.constant(x_valid)
    tf_test_dataset = tf.constant(x_test)
    # Variables
    weights = tf.Variable(tf.truncated_normal([num_features, 1]))
    biases = tf.Variable(tf.zeros([1]))
    # Loss Computation
    train_prediction = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.losses.mean_squared_error(tf_train_labels, train_prediction)
    # Optimizer
    # Gradient descent optimizer with learning rate = alpha
    alpha = tf.constant(0.000000003, dtype=tf.float64)
    optimizer = tf.train.GradientDescentOptimizer(alpha).minimize(loss)
    # Predictions
    valid_prediction = tf.matmul(tf_valid_dataset, weights) + biases
    test_prediction = tf.matmul(tf_test_dataset, weights) + biases

Here is how I run the graph:

num_steps = 10001

def accuracy(prediction, labels):
    return ((prediction - labels) ** 2).mean(axis=None)

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 1000 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), y_valid))
    t_pred = test_prediction.eval()
    print('Test accuracy: %.1f%%' % accuracy(t_pred, y_test))

Here is what I have tried:

  1. I tried raising the learning rate. However, if I increase it beyond the value I am using now, the model no longer converges, i.e. the loss blows up to infinity (see the toy sketch after this list).

  2. I increased the number of iterations to 10,000,000. The longer I iterate, the more slowly the loss decreases (which is understandable), but I am still nowhere near a reasonable value. The loss is usually a 10-digit number.
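For reference, here is a tiny NumPy-only sketch (entirely made-up numbers, not the real Ames features) that reproduces the same pattern I am seeing in point 1: with two features on very different scales, a learning rate around the one I am using stays finite but leaves the loss huge, while a slightly larger one blows up to infinity.

import numpy as np

# Two made-up features on very different scales (placeholders, not real Ames columns):
# something like square footage (~thousands) and something like bathroom count (~1-4).
rng = np.random.RandomState(0)
n = 200
area = rng.uniform(500, 4000, size=(n, 1))
baths = rng.uniform(1, 4, size=(n, 1))
x = np.hstack([area, baths])
y = x @ np.array([[100.0], [20000.0]])          # synthetic "price"

def final_loss(lr, steps=10000):
    # Plain gradient descent on mean squared error; returns the final loss.
    w = np.zeros((2, 1))
    for _ in range(steps):
        grad = 2.0 / n * x.T @ (x @ w - y)      # gradient of the MSE
        w -= lr * grad
    return float(np.mean((x @ w - y) ** 2))

print(final_loss(3e-9))   # stays finite, but the loss is still enormous after 10k steps
print(final_loss(1e-6))   # slightly larger rate: the loss blows up to inf/nan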

Am I doing something wrong in my graph? Or is linear regression simply a poor choice here, and should I try another algorithm? Any help and suggestions would be greatly appreciated!

Working code

import csv
import tensorflow as tf
import numpy as np

with open('train.csv', 'rt') as f:
    reader = csv.reader(f)
    your_list = list(reader)

def toFloatNoFail(data):
    try:
        return float(data)
    except:
        return 0

# Skip the header row and coerce every field to float (non-numeric fields become 0).
data = [[toFloatNoFail(x) for x in row] for row in your_list[1:]]
data = np.array(data).astype(float)
x_train = data[:, :-1]
print(x_train.shape)
y_train = data[:, -1:]
print(y_train.shape)

num_features = np.shape(x_train)[1]
# Input
tf_train_dataset = tf.constant(x_train, dtype=tf.float32)
tf_train_labels = tf.constant(y_train, dtype=tf.float32)
# Variables
weights = tf.Variable(tf.truncated_normal([num_features, 1], dtype=tf.float32))
biases = tf.Variable(tf.constant(0.0, dtype=tf.float32))
train_prediction = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(tf.square(tf.log(tf_train_labels) - tf.log(train_prediction)))
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

num_steps = 10001

def accuracy(prediction, labels):
    return ((prediction - labels) ** 2).mean(axis=None)

with tf.Session() as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 1000 == 0):
            print('Loss at step %d: %f' % (step, l))
Explanation of the key change

Your loss function was not scaled with the price. The loss function above takes into account that you are really only interested in the error on the price relative to the original price. So being off by $5,000 on a million-dollar house should not be as bad as being off by $5,000 on a $5,000 house.

The new loss function is:

loss = tf.reduce_mean( tf.square( tf.log(tf_train_labels) - tf.log(train_prediction) ))
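A quick numeric check of that claim (using a $10,000 prediction for the $5,000 house, since log(0) is undefined):

import numpy as np

# $5,000 miss on a $1,000,000 house ...
big = (np.log(1000000.0) - np.log(995000.0)) ** 2
# ... versus a $5,000 miss on a $5,000 house.
small = (np.log(5000.0) - np.log(10000.0)) ** 2

print(big)    # ~2.5e-05
print(small)  # ~0.48 -- the same absolute error costs far more on the cheap house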

Latest update