RNN model is not learning: the predicted and actual values in the attached plot are a flat line



I have rescaled, normalized, re-initialized, and added dropout to my time-series forecasting RNN model, but it still is not learning. The predicted and actual values shown in the figure are a flat line at zero. I cannot figure out what is wrong with my model. Your input would be greatly appreciated. Here is the entire code:

[Predicted and actual value graph][1]
# imports needed to make the snippet runnable
import math
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# split into train and test sets
train_size = int(len(dataset) * 0.80)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]

# scale both splits using statistics from the training set only
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler = scaler.fit(train)
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

# reshape into X=t and Y=t+1
look_back = 40
train_X, train_y = create_dataset(train_scaled, look_back)
test_X, test_y = create_dataset(test_scaled, look_back)

# reshape input to be [samples, time steps, features]
train_X = np.expand_dims(train_X, axis=1)
test_X = np.expand_dims(test_X, axis=1)
print('Train_X shape', train_X.shape, 'Test_X shape', test_X.shape)

model = Sequential()
model.add(SimpleRNN(64, input_shape=(1, look_back),
                    kernel_initializer='random_normal',
                    bias_initializer='random_normal'))
#model.add(Dropout(0.2))
model.add(Dense(32, kernel_initializer='random_normal',
                bias_initializer='random_normal'))
#model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))

optimizer = keras.optimizers.Adam(lr=0.1)
model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])

# fit network
history = model.fit(train_X, train_y, epochs=2, batch_size=32,
                    validation_data=(test_X, test_y), verbose=1, shuffle=False)

def predict_and_score(model, X, Y):
    # Make predictions on the original scale of the data.
    pred = scaler.inverse_transform(model.predict(X))
    # Prepare Y data to also be on the original scale for interpretability.
    orig_data = scaler.inverse_transform([Y])
    # Calculate RMSE.
    score = math.sqrt(mean_squared_error(orig_data[0], pred[:, 0]))
    return score, pred

rmse_train, train_predict = predict_and_score(model, train_X, train_y)
rmse_test, test_predict = predict_and_score(model, test_X, test_y)
print("Training data score: %.2f RMSE" % rmse_train)
print("Test data score: %.2f RMSE" % rmse_test)

Training data score: 151198.09 RMSE
Test data score: 125302.35 RMSE

One obvious problem is the data: its range is absurdly large. While a neural network can in theory learn "anything", I would say it is practically impossible for it to learn to model an output that is (1) zero most of the time and (2) then ranges up to about 1e14.

While (1) may be inherent to your domain, (2) does not seem necessary.
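For illustration only (the answer does not prescribe a particular transform): one common way to compress such a range is a log transform applied before any scaling. The sketch below assumes the raw target is a non-negative NumPy array named dataset, as in the question's code.

import numpy as np

# Hypothetical sketch: dataset is assumed non-negative, with values up to ~1e14.
dataset_log = np.log1p(dataset)   # log1p(1e14) is roughly 32.2, a far friendlier range

# Train on dataset_log instead of dataset; after predicting, undo the transform:
# predictions = np.expm1(predictions_log)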

Start by scaling the data down to the range [0, 1]; then at least something can be learned.
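A minimal sketch of that rescaling, assuming the same train/test arrays as in the question's code (only the variable names are reused; this is not the asker's original pipeline):

from sklearn.preprocessing import MinMaxScaler

# Fit on the training split only, then map both splits to [0, 1].
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(train)
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)

# After prediction, report errors on the original scale:
# predictions = scaler.inverse_transform(model.predict(test_X))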
