The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() in model.fit()



I am attempting image-to-image mapping. When keras.utils.Sequence is used as the input, the y argument is not supported, so I had to create a custom data generator. I found this data generator created by someone else and modified it to fit my needs. Now I am getting the error below, and I don't know why.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.layers import Input, Dropout, Flatten, Dense
from keras.layers import Convolution2D
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import UpSampling2D
from keras.models import Model
from tensorflow.keras.layers import BatchNormalization
import os
import cv2

class image_gen():
    def __init__(self, sdir, batch_size, input_size, shuffle):
        self.batch_index = 0
        self.sdir = sdir              # directory containing input images
        self.batch_size = batch_size  # batch size is the number of samples in a batch
        self.input_size = input_size  # tuple (width, height) for target image
        self.shuffle = shuffle        # set to True to shuffle images, False for no shuffle
        self.label_list = []          # initialize list to hold sequential list of total labels generated
        self.image_list = []          # initialize list to hold sequential list of total image filenames generated
        self.s_list = os.listdir(self.sdir)  # list of images in directory

    def get_images(self):  # gets a batch of input images, resizes input image to make target images
        input_image_batch = []           # initialize list to hold a batch of input images
        sample_count = len(self.s_list)  # determine total number of images available
        for i in range(self.batch_index * self.batch_size, (self.batch_index + 1) * self.batch_size):  # iterate for a batch
            j = i % sample_count     # cycle j value over range of available images
            k = j % self.batch_size  # cycle k value over batch size
            if self.shuffle:  # if shuffle, select a random integer between 0 and sample_count-1 to pick the image-label pair
                m = np.random.randint(low=0, high=sample_count - 1, size=None, dtype=int)
            else:
                m = j  # no shuffle
            path_to_img = os.path.join(self.sdir, self.s_list[m])  # define the path to the m th image
            input_image = cv2.imread(path_to_img)
            # create the target image from the input image
            input_image_batch.append(input_image)

        input_image_array = np.array(input_image_batch)
        self.batch_index = self.batch_index + 1
        yield input_image_array

batch_size = 32
target_shape = (3402, 3401,)
sdir = r"D:Input\train"  # set this to your image directory
shuffle = False
gen = image_gen(sdir, batch_size, target_shape, shuffle)  # instantiate an instance of the class
input_images = next(gen.get_images())  # get a batch of input images

batch_size = 32
target_shape = (3402, 3401,)
sdir = r"D:\Validation\valid"  # set this to your image directory
shuffle = False
gen = image_gen(sdir, batch_size, target_shape, shuffle)  # instantiate an instance of the class
Valid_images = next(gen.get_images())  # get a batch of validation input images

batch_size = 32
target_shape = (3402, 3401,)
sdir = r"D:\Output\train"  # set this to your image directory
shuffle = False
gen = image_gen(sdir, batch_size, target_shape, shuffle)  # instantiate an instance of the class
output_images = next(gen.get_images())  # get a batch of target images

base_model = tf.keras.applications.ResNet50(
    include_top=False,
    weights="imagenet",
    input_shape=(3402, 3401, 3),
    pooling=None,
)
for layer in base_model.layers[:]:
    layer.trainable = False

model = Sequential()
model.add(base_model)
model.add(Convolution2D(3, 9, activation='relu', padding='same'))
model.add(UpSampling2D())
model.add(UpSampling2D())
model.add(BatchNormalization())
model.add(Convolution2D(3, 9, activation='relu', padding='same'))
model.build((None, 3402, 3401, 3))
model.summary()
model.compile(optimizer="adam", loss='mean_squared_error', metrics=['mean_squared_error'])
model.fit(input_images, output_images, validation_data=Valid_images, epochs=100)

Error:

ValueError                                Traceback (most recent call last)
<ipython-input-20-41ebbbad4a4a> in <module>
----> 1 model.fit(input_images,output_images,validation_data = Valid_images,epochs=100)
~\anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1095               (x, y, sample_weight), validation_split=validation_split))
1096 
-> 1097     if validation_data:
1098       val_x, val_y, val_sample_weight = (
1099           data_adapter.unpack_x_y_sample_weight(validation_data))
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

It looks like model.fit expects a different type for the validation_data argument than the one you are passing in.

The Keras documentation for model.fit states:

validation_data:

Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Note that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split.

validation_data could be:

  • A tuple (x_val, y_val) of NumPy arrays or tensors.
  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
  • A tf.data.Dataset.
  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
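For example, the tuple form from the first bullet would look roughly like this; a minimal, self-contained sketch with made-up shapes and random data, not your actual pipeline:

import numpy as np
import tensorflow as tf

# hypothetical toy data: 8 training and 4 validation RGB images, mapped to images of the same shape
x_train = np.random.rand(8, 32, 32, 3).astype("float32")
y_train = np.random.rand(8, 32, 32, 3).astype("float32")
x_val = np.random.rand(4, 32, 32, 3).astype("float32")
y_val = np.random.rand(4, 32, 32, 3).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(3, 3, activation="relu", padding="same", input_shape=(32, 32, 3)),
])
model.compile(optimizer="adam", loss="mean_squared_error")

# validation_data is an (x_val, y_val) tuple, not a single array
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=1)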

What type of validation data is going into your model.fit() call?

Regarding the error message: if an if variable: statement is used and variable is a NumPy array, it is ambiguous whether it should evaluate to True.

For example:

arr = np.array([True, True, False])
if arr:
    ...

Should this evaluate to True?

arr = np.array([True, True, True])
if arr:
    ...

Or should it only evaluate to True when all of the elements are True?

If you want the former to evaluate to True, you can use arr.any(). If all of the elements need to be True for the array as a whole to evaluate to True, you should use arr.all(). When neither is chosen, this exception is raised.
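A quick sketch of the difference, using nothing beyond NumPy itself:

import numpy as np

arr = np.array([True, True, False])

print(arr.any())   # True  - at least one element is True
print(arr.all())   # False - not every element is True

try:
    if arr:        # ambiguous: neither any() nor all() was chosen
        pass
except ValueError as e:
    print(e)       # "The truth value of an array with more than one element is ambiguous..."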

Now, this could be a bug in the keras module, but I think it is more likely that model.fit() expects a different input than the one you are giving it.
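In your code, validation_data=Valid_images is a single NumPy array, which ends up in Keras' if validation_data: check (visible in your traceback) and raises exactly this error. Assuming you also have, or can generate, a batch of validation target images, the call would need a tuple instead. The directory and the valid_output_images name below are placeholders that do not exist in your code yet:

# hypothetical: a second generator pointed at a validation-target directory
# (the path here is an assumption; adjust it to wherever your validation targets live)
valid_gen = image_gen(r"D:\Output\valid", batch_size, target_shape, shuffle)
valid_output_images = next(valid_gen.get_images())

model.fit(
    input_images,
    output_images,
    validation_data=(Valid_images, valid_output_images),  # an (inputs, targets) tuple, not a bare array
    epochs=100,
)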

Update