ValueError when fitting a Conv-Net model



I am trying to build a CNN model for a dataset that looks like this: it has 11 feature columns and one label column, where the label is 0 or 1.

Program:

import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import pandas as pd

DATA_PATH = "testt.csv"


def load_data(data_path):
    """Loads training dataset from CSV file.

    :param data_path (str): Path to CSV file containing data
    :return X (ndarray): Inputs
    :return y (ndarray): Targets
    """
    data = pd.read_csv(data_path)

    y = np.array(data["Label"])
    X = np.array(data.drop(labels=["Label"], axis=1))

    return X, y


def prepare_datasets(test_size, validation_size):
    """Loads data and splits it into train, validation and test sets.

    :param test_size (float): Value in [0, 1] indicating percentage of data set to allocate to test split
    :param validation_size (float): Value in [0, 1] indicating percentage of train set to allocate to validation split
    :return X_train (ndarray): Input training set
    :return X_validation (ndarray): Input validation set
    :return X_test (ndarray): Input test set
    :return y_train (ndarray): Target training set
    :return y_validation (ndarray): Target validation set
    :return y_test (ndarray): Target test set
    """
    # load data
    X, y = load_data(DATA_PATH)

    # create train, validation and test split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)
    X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=validation_size)

    # add an axis to input sets
    X_train = X_train[..., np.newaxis]
    X_validation = X_validation[..., np.newaxis]
    X_test = X_test[..., np.newaxis]

    return X_train, X_validation, X_test, y_train, y_validation, y_test


def build_model(input_shape):
    """Generates CNN model.

    :param input_shape (tuple): Shape of input set
    :return model: CNN model
    """
    # build network topology
    model = keras.Sequential()

    # 1st conv layer
    model.add(keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(keras.layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same'))
    model.add(keras.layers.BatchNormalization())

    # 2nd conv layer
    model.add(keras.layers.Conv2D(32, (3, 3), activation='relu'))
    model.add(keras.layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same'))
    model.add(keras.layers.BatchNormalization())

    # 3rd conv layer
    model.add(keras.layers.Conv2D(32, (2, 2), activation='relu'))
    model.add(keras.layers.MaxPooling2D((2, 2), strides=(2, 2), padding='same'))
    model.add(keras.layers.BatchNormalization())

    # flatten output and feed it into dense layer
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(64, activation='relu'))
    model.add(keras.layers.Dropout(0.3))

    # output layer
    model.add(keras.layers.Dense(10, activation='softmax'))

    return model


if __name__ == "__main__":
    # get train, validation, test splits
    X_train, X_validation, X_test, y_train, y_validation, y_test = prepare_datasets(0.25, 0.2)

    # create network
    input_shape = (X_train.shape[1], X_train.shape[2], 1)
    model = build_model(input_shape)

    # compile model
    optimiser = keras.optimizers.Adam(learning_rate=0.0001)
    model.compile(optimizer=optimiser,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()

    # train model
    history = model.fit(X_train, y_train, validation_data=(X_validation, y_validation), batch_size=32, epochs=30)

When I try to run this, I get this error:

InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1852   try:
-> 1853     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1854   except errors.InvalidArgumentError as e:

InvalidArgumentError: '{{node conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d/Conv2D/ReadVariableOp)' with input shapes: [?,11,1,1], [3,3,32].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
18 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1854   except errors.InvalidArgumentError as e:
   1855     # Convert to ValueError for backwards compatibility.
-> 1856     raise ValueError(str(e))
   1857
   1858   return c_op

ValueError: '{{node conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d/Conv2D/ReadVariableOp)' with input shapes: [?,11,1,1], [3,3,32].

To clarify: each datapoint has features of size [11] and a label of size [1]. So your whole dataset has dimensions X = [n_examples, 11] and Y = [n_examples, 1].

Conv2D is not the right operation in this case. Conv2D expects a batch of 2D feature planes as input (i.e. it expects data of size [n_examples, n_feature_planes, width, height]). It is typically used for RGB images, where each colour is one dimension along n_feature_planes.
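To illustrate the shape requirement, here is a minimal Keras sketch (the 28x28 image shape is a hypothetical example, not from the question): a 3x3 kernel fits an input with real spatial extent, but not the [11, 1, 1] shape the question's code produces, which is exactly what triggers the ValueError above.

```python
import tensorflow.keras as keras

# A 3x3 Conv2D fits an input with real spatial extent, e.g. a 28x28 greyscale image:
image_in = keras.layers.Input(shape=(28, 28, 1))
image_out = keras.layers.Conv2D(32, (3, 3), activation='relu')(image_in)
print(image_out.shape)  # (None, 26, 26, 32) -- a 3x3 VALID convolution shrinks 28 to 26

# The tabular data reshaped to (11, 1, 1) has a width of only 1, so a 3x3 kernel
# cannot fit, and building the layer raises the ValueError from the question:
tab_in = keras.layers.Input(shape=(11, 1, 1))
try:
    keras.layers.Conv2D(32, (3, 3), activation='relu')(tab_in)
except ValueError as e:
    print("ValueError:", e)
```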

If you expect each example's size-[11] data vector to be spatially correlated (e.g. a 1D audio vector would be spatially correlated), you can use Conv1D, which in that case takes input of size [n_examples, n_feature_vectors, length_of_feature_vector] (here, length_of_feature_vector is 11). So you can expand your data from size [n_examples, 11] to size [n_examples, 1, 11] and then use Conv1D.
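A minimal sketch of that idea with Keras (hypothetical layer sizes and random stand-in data). Note that Keras's Conv1D defaults to a channels_last layout of [batch, steps, channels], so to convolve along the 11 features they become 11 steps of 1 channel, i.e. shape [n_examples, 11, 1]:

```python
import numpy as np
import tensorflow.keras as keras

X = np.random.rand(100, 11).astype("float32")  # hypothetical dataset: 100 examples, 11 features
X_seq = X[..., np.newaxis]                     # -> shape (100, 11, 1): 11 steps, 1 channel each

model = keras.Sequential([
    keras.layers.Input(shape=(11, 1)),
    keras.layers.Conv1D(32, 3, activation='relu'),  # size-3 kernel slides along the 11 steps
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(1, activation='sigmoid'),    # single sigmoid unit for the 0/1 label
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

preds = model.predict(X_seq, verbose=0)
print(preds.shape)  # (100, 1)
```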

However, if your features are not spatially correlated (e.g. they are distinct attributes), you should not use convolutional layers.
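For spatially unrelated tabular features, a plain fully connected network is the usual alternative. A minimal sketch (hypothetical layer sizes and random stand-in data; a single sigmoid unit with binary_crossentropy matches the 0/1 label better than the question's 10-way softmax):

```python
import numpy as np
import tensorflow.keras as keras

model = keras.Sequential([
    keras.layers.Input(shape=(11,)),              # 11 raw feature columns, no extra axis needed
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),  # one sigmoid unit for a 0/1 label
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# hypothetical data with the question's shapes: X = [n_examples, 11], y = [n_examples]
X = np.random.rand(100, 11).astype("float32")
y = np.random.randint(0, 2, size=100)
model.fit(X, y, batch_size=32, epochs=1, verbose=0)
print(model.predict(X, verbose=0).shape)  # (100, 1)
```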
