How to expand the dimensions of each batch in a TensorFlow dataset



I created a tf.data dataset; however, when I try to use it to fit my Sequential CNN model, I keep running into this error:

ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 28, 28]

Currently, my training dataset has the form (x_train, y_train), where each batch in x_train has shape [32, 28, 28] and each batch in y_train has shape (32,). How can I change the shape of each x_train batch to [32, 28, 28, 1] without changing the shape of the batches in y_train?

Here is my full code:

#imports
import tarfile
import numpy as np
import pandas as pd
import matplotlib
import tensorflow as tf
# Get Data

def get_images():
    """Get the fashion-mnist images.

    Returns
    -------
    (x_train, x_test) : tuple of uint8 arrays
        Grayscale image data with shape (num_samples, 28, 28)
    (y_train, y_test) : tuple of uint8 arrays
        Labels (integers in range 0-9) with shape (num_samples,)

    Examples
    --------
    >>> from reader import get_images
    >>> (x_train, y_train), (x_test, y_test) = get_images()

    Notes
    -----
    The data is split into train and test sets as described in the
    original paper [1].

    References
    ----------
    1. Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: a Novel Image Dataset for
       Benchmarking Machine Learning Algorithms. CoRR [Internet].
       2017;abs/1708.07747. Available from: http://arxiv.org/abs/1708.07747
    """
    with tarfile.open('data.tar.gz', 'r') as f:
        f.extractall()
    df_train = pd.read_csv('fashion_mnist_train.csv')
    df_test = pd.read_csv('fashion_mnist_test.csv')
    x_train = df_train.drop('label', axis=1).to_numpy(np.uint8)
    y_train = df_train['label'].to_numpy(np.uint8)
    x_test = df_test.drop('label', axis=1).to_numpy(np.uint8)
    y_test = df_test['label'].to_numpy(np.uint8)
    return (x_train, y_train), (x_test, y_test)
(x_train, y_train), (x_test, y_test) = get_images()
clothing = ['top', 'trouser', 'pullover', 'dress', 'coat', 'sandal',
            'shirt', 'sneaker', 'bag', 'ankle boot']
BUFFER_SIZE = 1000
BATCH_SIZE = 32
# Reshape x_train and x_test, and scale them to the range [0, 1]
new_x_train = []
new_x_test = []
for i, train in enumerate(x_train):
    # print(np.shape(train))
    arr = np.reshape(x_train[i], (28, 28))
    arr = arr / 255.0
    new_x_train.append(arr)

for i, test in enumerate(x_test):
    arr = np.reshape(x_test[i], (28, 28))
    arr = arr / 255.0
    new_x_test.append(arr)

train_dataset = tf.data.Dataset.from_tensor_slices(
    (new_x_train, y_train)).shuffle(BUFFER_SIZE).batch(
        BATCH_SIZE, drop_remainder=True)
from tensorflow.keras.layers import LeakyReLU
CNN_model = tf.keras.Sequential()
#CNN_model.add(tf.keras.layers.Lambda(tf.py_function(expand_dims)))
CNN_model.add(tf.keras.layers.Conv2D(
    20, (5, 5), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))
#CNN_model.add(tf.keras.layers(tf.keras.layers.Lambda(
#   function)

CNN_model.add(LeakyReLU(alpha=0.05))

CNN_model.add(tf.keras.layers.MaxPool2D(
    pool_size=(2, 2), strides=None, padding='valid'))
CNN_model.add(tf.keras.layers.Conv2D(
    50, (3, 3), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))
CNN_model.add(LeakyReLU(alpha=0.05))
CNN_model.add(tf.keras.layers.MaxPool2D(
    pool_size=(2, 2), strides=None, padding='valid'))

CNN_model.add(tf.keras.layers.Conv2D(
    10, (1, 1), strides=(1, 1), padding='valid',
    kernel_initializer='glorot_uniform'))
CNN_model.add(LeakyReLU(alpha=0.05))
CNN_model.add(tf.keras.layers.GlobalAveragePooling2D())
CNN_model.add(tf.keras.layers.Softmax(axis=-1))

CNN_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])
CNN_history = CNN_model.fit(train_dataset, epochs=10)
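For reference, a quick way to confirm the batch shapes the pipeline above actually yields (a minimal sketch, assuming TF 2.x, where tf.data datasets expose element_spec):

# Inspect the (images, labels) structure of the batched dataset.
print(train_dataset.element_spec)
# Or pull one batch and print its shapes; with the code above this shows
# (32, 28, 28) and (32,), i.e. the channel axis Conv2D expects is missing.
for images, labels in train_dataset.take(1):
    print(images.shape, labels.shape)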

You could try using this

arr = np.reshape(x_test[i], (1, 28, 28))

instead of this

arr = np.reshape(x_test[i], (28, 28))

If you are using channels-last (the Keras default), put the 1 as the third dimension instead, i.e. (28, 28, 1).
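A minimal sketch of that channels-last variant, applied to the preprocessing loop from the question (assuming the default data_format='channels_last'):

new_x_train = []
for i, train in enumerate(x_train):
    # Each flat 784-pixel sample becomes (height, width, channels) = (28, 28, 1).
    arr = np.reshape(x_train[i], (28, 28, 1))
    arr = arr / 255.0
    new_x_train.append(arr)

After batching, each x batch then has shape (32, 28, 28, 1) while the label batches keep shape (32,).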

You can do a simple expand_dims:

import numpy as np
x_train = np.expand_dims(x_train, axis=-1)

But out of curiosity, may I ask how you are loading the data? With a generator function?
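Alternatively, since the title asks about expanding dimensions inside the tf.data dataset itself, the same thing can be done with Dataset.map and tf.expand_dims; a minimal sketch, assuming the already-batched train_dataset from the question:

# Add a trailing channel axis to each image batch; labels pass through.
# (32, 28, 28) -> (32, 28, 28, 1), while the (32,) labels are unchanged.
train_dataset = train_dataset.map(
    lambda images, labels: (tf.expand_dims(images, axis=-1), labels))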
