I have done some preprocessing on my dataset and am trying to fit my ML model, but it returns the following error message: "Failed to convert a NumPy array to a Tensor (Unsupported object type tensorflow.python.framework.ops.EagerTensor)".
The original dataset looks like this:
Size Version Count TimeStamp Amount Label
319 1 31 2019-03-21 321 1
482 2 91 2014-09-28 912 2
The Label column is the target (y) and the remaining columns are the features (X). I standardized the Size, Count, and Amount columns before feeding the data into the model. Here is my code:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
from sklearn.model_selection import train_test_split

numeric_features = ['Size', 'Count', 'Amount']
train_df, test_df = train_test_split(df, test_size=0.3) # train-test split
# create normalizer based on train data
normalizer = preprocessing.Normalization()
normalizer.adapt(train_df[numeric_features].values)
# Apply normalizer
train_df[numeric_features] = normalizer(train_df[numeric_features].values)
train_X = train_df.iloc[:, :-1]
train_Y = pd.get_dummies(train_df.Label, prefix='Label')
# reshape X data
train_X = np.array(train_X)
train_X = np.reshape(train_X, (train_X.shape[0],1,train_X.shape[1]))
train_Y = np.array(train_Y)
# Create a model
model_cnn = keras.Sequential()
model_cnn.add(layers.Conv1D(64, 1, activation='relu'))
model_cnn.add(layers.GlobalMaxPooling1D())
model_cnn.add(layers.Dense(2, activation='softmax'))
model_cnn.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.categorical_crossentropy,
                  metrics=['accuracy'])
# Train the model
epochs = 25
batch_size = 16
# This line throws an error.
history = model_cnn.fit(train_X, train_Y, epochs=epochs, batch_size=batch_size)
I can't tell which part is causing the error. I suspect it is the normalization: after I normalize the numeric columns, each cell holds a value of this type: <tf.Tensor: shape=(), dtype=float32, numpy=-0.12194966>.
The main problem is that you are working with a DataFrame, so you end up with arrays of arrays. Calling .to_numpy() on the DataFrame solves part of this, but in some cases it still gives you a NumPy array of NumPy arrays, so I suggest calling np.asarray() instead of np.array() in the final preparation step.
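A minimal sketch of that final step, using a toy two-row frame with the same columns as train_df (the values are illustrative only; the commented line refers to the asker's Normalization layer and is not executed here):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for train_df; values are illustrative only.
train_df = pd.DataFrame({
    'Size': [319.0, 482.0],
    'Version': [1, 2],
    'Count': [31.0, 91.0],
    'Amount': [321.0, 912.0],
    'Label': [1, 2],
})

numeric_features = ['Size', 'Count', 'Amount']

# Keep the normalized values as plain NumPy instead of EagerTensors,
# e.g. by calling .numpy() on the layer's output:
#   train_df[numeric_features] = normalizer(train_df[numeric_features].values).numpy()

# Then build the feature matrix with an explicit float dtype, so no
# object-dtype array of arrays can reach model.fit():
train_X = np.asarray(train_df.iloc[:, :-1].to_numpy(), dtype=np.float32)
train_X = train_X.reshape(train_X.shape[0], 1, train_X.shape[1])
print(train_X.dtype, train_X.shape)  # float32 (2, 1, 4)
```

With the cells stored as plain floats, the reshape and model.fit() calls in the question receive a homogeneous float32 array instead of an object array of EagerTensors.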