How to implement Dice loss in a TensorFlow segmentation model



I'm new to TensorFlow and I'm trying to implement Dice loss in my image segmentation model. The problem is that every tutorial I've found only shows what the loss function itself looks like, i.e.:

def dice_loss(inputs, target):
    num = target.size(0)
    inputs = inputs.reshape(num, -1)
    target = target.reshape(num, -1)
    smooth = 1.0
    intersection = (inputs * target)
    dice = (2. * intersection.sum(1) + smooth) / (inputs.sum(1) + target.sum(1) + smooth)
    dice = 1 - dice.sum() / num
    return dice
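Note that the snippet above is actually PyTorch-style code (`.size(0)`, `.reshape(...)`, and `.sum(1)` as tensor methods are not the TensorFlow API). To make the arithmetic itself concrete, here is the same computation as a NumPy sketch (the name `dice_loss_np` is mine, not from any tutorial):

```python
import numpy as np

def dice_loss_np(inputs, target, smooth=1.0):
    # flatten each sample in the batch to a vector
    num = target.shape[0]
    inputs = inputs.reshape(num, -1)
    target = target.reshape(num, -1)
    intersection = inputs * target
    # per-sample Dice coefficient, smoothed to avoid division by zero
    dice = (2. * intersection.sum(1) + smooth) / (inputs.sum(1) + target.sum(1) + smooth)
    # loss = 1 - mean Dice over the batch
    return 1 - dice.sum() / num

# a perfect prediction gives a loss of 0
y = np.ones((2, 4, 4))
print(dice_loss_np(y, y))  # → 0.0
```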

but none of them shows how to actually call it from the model, i.e.:

OUTPUT_CLASSES = 2
inputs = tf.keras.layers.Input((256, 256, 3))
myTransformer = unet_model(output_channels=OUTPUT_CLASSES)
myTransformer.compile(optimizer='Adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])

I have tried the code below, but obviously there is no variable target in my code:

OUTPUT_CLASSES = 2
inputs = tf.keras.layers.Input((256, 256, 3))
myTransformer = unet_model(output_channels=OUTPUT_CLASSES)
myTransformer.compile(optimizer='Adam',
                      loss=dice_loss(inputs, outputs),
                      metrics=['accuracy'])

My question is: how do I call it, and what arguments do I pass?

I tried wrapping it in a class like this:

class DiceLoss(tf.keras.losses.Loss):
    def __init__(self):
        super().__init__()

    def call(self, y_true, y_pred):
        inputs = y_pred
        target = tf.cast(y_true, y_pred.dtype)   # match dtypes before multiplying
        num = tf.shape(inputs)[0]                # batch size
        inputs = tf.reshape(inputs, (num, -1))   # flatten each sample
        target = tf.reshape(target, (num, -1))
        smooth = 1.0
        intersection = inputs * target
        dice = (2. * tf.reduce_sum(intersection, axis=1) + smooth) / (
            tf.reduce_sum(inputs, axis=1) + tf.reduce_sum(target, axis=1) + smooth)
        dice = 1 - tf.reduce_sum(dice) / tf.cast(num, dice.dtype)
        return dice

and then called it like this at compile time:

OUTPUT_CLASSES = 2
inputs = tf.keras.layers.Input((256, 256, 3))
myTransformer = unet_model(output_channels=OUTPUT_CLASSES)
myTransformer.compile(optimizer='Adam',
                      loss=DiceLoss(),
                      metrics=['accuracy'])
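For what it's worth, subclassing tf.keras.losses.Loss is not strictly required: compile() also accepts any plain callable with the signature (y_true, y_pred), which Keras invokes itself on each batch. A sketch of the same loss written with TensorFlow ops (unet_model comes from the question and is assumed to exist, so the compile call is shown as a comment):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.cast(y_true, y_pred.dtype)     # match dtypes before multiplying
    num = tf.shape(y_pred)[0]                  # batch size
    y_pred = tf.reshape(y_pred, (num, -1))     # flatten each sample
    y_true = tf.reshape(y_true, (num, -1))
    intersection = tf.reduce_sum(y_true * y_pred, axis=1)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_pred, axis=1) + tf.reduce_sum(y_true, axis=1) + smooth)
    return 1.0 - tf.reduce_mean(dice)

# Keras passes (y_true, y_pred) to the loss itself, so only the
# function object is handed to compile — it is not called here:
# myTransformer.compile(optimizer='Adam', loss=dice_loss, metrics=['accuracy'])
```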
