Keras - val_accuracy stuck at 0.0000e+00 in an LSTM model



Below is the Keras model I use to predict cryptocurrency prices. The problem is that although loss and val_loss keep decreasing, accuracy gets stuck at a fixed value (2.5840e-04) and val_accuracy stays at 0.0000e+00. I have carefully checked every one of my inputs and found nothing wrong. Is there a problem with my model?

(Here is a link to my notebook, if you need it.)

Here is my data preparation (ethusd.csv is a dataset in a standard stock format; you can see it in my notebook via the link above):

DATASET_PATH = "../input/crifier/ethusd.csv"
import math
import matplotlib.pyplot as plt
import tensorflow.keras as keras
import pandas as pd
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import *
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
from sklearn.utils import shuffle
df=pd.read_csv(DATASET_PATH)
df = df[-100000:-90000]
WINDOW = 400
TRAIN_RATIO = 0.9
LEN_DF = len(df)
LEN_TRAIN = int(LEN_DF * TRAIN_RATIO)
LEN_TEST = int(LEN_DF - LEN_TRAIN)
training_set = df.iloc[:LEN_TRAIN, 1:2].values
test_set = df.iloc[LEN_TRAIN:, 1:2].values
def reshaper(dataset):
    # Scale to [0, 1] and build sliding windows of WINDOW time-steps with 1 output
    sc = MinMaxScaler(feature_range = (0, 1))
    dataset_scaled = sc.fit_transform(dataset)
    X_scaled = []
    y_scaled = []
    for i in range(0, len(dataset) - WINDOW):
        X_scaled.append(dataset_scaled[i:i + WINDOW, 0])
        y_scaled.append(dataset_scaled[i + WINDOW, 0])
    X_scaled, y_scaled = np.array(X_scaled), np.array(y_scaled)
    X_scaled = np.reshape(X_scaled, (X_scaled.shape[0], X_scaled.shape[1], 1))
    return X_scaled, y_scaled, sc
X_train, y_train, sc_train = reshaper(training_set)
X_train, y_train = shuffle(X_train, y_train)
X_test, y_test, sc_test = reshaper(test_set)

Here is my model:

model = Sequential()
#Adding the first LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
model.add(LSTM(units = 50))
model.add(Dropout(0.2))
# Adding the output layer
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer = Adam(learning_rate=0.001), loss = 'mean_squared_error',metrics=['accuracy'])
model.fit(X_train, y_train, validation_split=0.1, epochs = 100, batch_size = 1000, verbose=2)

I keep getting the following accuracy and val_accuracy:

Epoch 1/100
8/8 - 3s - loss: 0.0991 - accuracy: 1.2920e-04 - val_loss: 0.0046 - val_accuracy: 0.0000e+00
Epoch 2/100
8/8 - 2s - loss: 0.0217 - accuracy: 2.5840e-04 - val_loss: 0.0068 - val_accuracy: 0.0000e+00
Epoch 3/100
8/8 - 2s - loss: 0.0132 - accuracy: 2.5840e-04 - val_loss: 0.0047 - val_accuracy: 0.0000e+00
.
.
.
Epoch 28/100
8/8 - 2s - loss: 0.0036 - accuracy: 2.5840e-04 - val_loss: 0.0011 - val_accuracy: 0.0000e+00
Epoch 29/100
8/8 - 2s - loss: 0.0034 - accuracy: 2.5840e-04 - val_loss: 0.0012 - val_accuracy: 0.0000e+00
Epoch 30/100
8/8 - 2s - loss: 0.0033 - accuracy: 2.5840e-04 - val_loss: 0.0011 - val_accuracy: 0.0000e+00

Your model is a regression model (you are predicting a float, not a category). Accuracy does not apply here.

When computing accuracy, Keras compares your output against the label. For example, if the label is 0.5 and your output is 0.4999999999, it still gives you zero accuracy, because the values are not exactly equal.
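
If you want something meaningful to monitor, a common choice for regression is an error metric such as MAE or RMSE rather than accuracy. Below is a minimal sketch of how the compile/fit calls could look, using only the built-in Keras metric classes (MeanAbsoluteError, RootMeanSquaredError); the rest of your model stays unchanged.

# Compile for regression: monitor error metrics instead of 'accuracy'
model.compile(optimizer = Adam(learning_rate=0.001),
              loss = 'mean_squared_error',
              metrics = [tf.keras.metrics.MeanAbsoluteError(name='mae'),
                         tf.keras.metrics.RootMeanSquaredError(name='rmse')])
model.fit(X_train, y_train, validation_split=0.1, epochs = 100, batch_size = 1000, verbose=2)

To see why 'accuracy' reports essentially zero here, note that the plain Accuracy metric only counts exact matches between prediction and label:

# Near misses do not count: Accuracy checks exact equality
acc = tf.keras.metrics.Accuracy()
acc.update_state([0.5], [0.4999999])
print(acc.result().numpy())   # 0.0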
