How is total loss calculated over multiple classes in Keras?



Suppose I have a network with the following parameters:

  1. a fully convolutional network for semantic segmentation
  2. loss = weighted binary cross entropy (but it could be any loss function, it doesn't matter)
  3. 5 classes - the inputs are images and the ground truths are binary masks
  4. batch size = 16

Now, I know that the loss is calculated in the following manner: binary cross entropy is applied to each pixel in the image with regard to each class. So essentially, each pixel will have 5 loss values.

What happens after this step?

When I train my network, it prints only a single loss value per epoch. Many levels of loss accumulation have to happen to produce a single value, and how that happens is not clear at all from the docs/code.

  1. What gets combined first - (1) the loss values of the classes (for instance, the 5 values (one per class) get combined per pixel) and then all the pixels in the image, or (2) all the pixels in the image for each individual class, and then all the class losses are combined?
  2. How exactly do these different pixel combinations happen - where is it summed / where is it averaged?
  3. Keras's binary_crossentropy averages over axis=-1. So is that an average of all the pixels per class, an average over all the classes, or both?

Put differently: how are the losses for the different classes combined to produce a single loss value for an image?
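
To make the question concrete, here is a minimal NumPy sketch (shapes taken from the setup above, values random, purely for illustration) contrasting the two candidate reduction orders:

import numpy as np

# hypothetical per-pixel, per-class loss tensor: (batch, height, width, classes)
pixel_class_losses = np.random.rand(16, 256, 256, 5)

# order (1): combine the 5 class losses per pixel first, then all pixels
per_pixel = pixel_class_losses.mean(axis=-1)      # shape (16, 256, 256)
loss_order_1 = per_pixel.mean()                   # scalar

# order (2): combine all pixels per class first, then the class losses
per_class = pixel_class_losses.mean(axis=(1, 2))  # shape (16, 5)
loss_order_2 = per_class.mean()                   # scalar

# for plain unweighted means the two orders coincide; they only diverge
# once sums, weights or masks enter the picture
print(np.isclose(loss_order_1, loss_order_2))     # True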

This isn't explained anywhere in the docs, and it would be very helpful for anyone doing multi-class prediction with Keras, regardless of the type of network. Here is a link to the start of the Keras code where the loss function is first passed in.

The closest thing I could find to an explanation is:

loss: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses

from the Keras documentation. So does that mean the losses for each class in the image are simply summed?

Here is example code for someone to try it out. Below is a basic implementation borrowed from Kaggle and modified for multi-label prediction:

# Build U-Net model
import tensorflow as tf
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Lambda, Conv2D, Conv2DTranspose, MaxPooling2D, concatenate

num_classes = 5
IMG_DIM = 256
IMG_CHAN = 3
weights = {0: 1, 1: 1, 2: 1, 3: 1, 4: 1000}  # chose an extreme value just to check for any reaction

def weighted_loss(weightsList):
    def lossFunc(true, pred):
        axis = -1  # if channels last
        # axis = 1  # if channels first
        # build a per-pixel weight map by picking the weight of each pixel's true class
        classSelectors = K.argmax(true, axis=axis)
        classSelectors = [K.equal(tf.cast(i, tf.int64), tf.cast(classSelectors, tf.int64))
                          for i in range(len(weightsList))]
        classSelectors = [K.cast(x, K.floatx()) for x in classSelectors]
        weights = [sel * weightsList[i] for i, sel in enumerate(classSelectors)]
        weightMultiplier = weights[0]
        for i in range(1, len(weights)):
            weightMultiplier = weightMultiplier + weights[i]
        # BCE_loss and dice_coef are defined externally (see the link below)
        loss = BCE_loss(true, pred) - (1 + dice_coef(true, pred))
        loss = loss * weightMultiplier
        return loss
    return lossFunc

inputs = Input((IMG_DIM, IMG_DIM, IMG_CHAN))
s = Lambda(lambda x: x / 255) (inputs)
c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (s)
c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (c1)
p1 = MaxPooling2D((2, 2)) (c1)
c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (p1)
c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (c2)
p2 = MaxPooling2D((2, 2)) (c2)
c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (p2)
c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (c3)
p3 = MaxPooling2D((2, 2)) (c3)
c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (p3)
c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (c4)
p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (p4)
c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (c5)
u6 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (u6)
c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (c6)
u7 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (u7)
c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (c7)
u8 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (u8)
c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (c8)
u9 = Conv2DTranspose(8, (2, 2), strides=(2, 2), padding='same') (c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (u9)
c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (c9)
outputs = Conv2D(num_classes, (1, 1), activation='sigmoid') (c9)

model = Model(inputs=[inputs], outputs=[outputs])
# mean_iou is also defined externally
model.compile(optimizer='adam', loss=weighted_loss(weights), metrics=[mean_iou])
model.summary()

The actual BCE-DICE loss function can be found here.
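
Since that link isn't reproduced here, the following is only a sketch of a common BCE-Dice formulation (the names BCE_loss and dice_coef match the code above, but this is not necessarily the exact Kaggle kernel):

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice coefficient over the flattened masks; smooth avoids division by zero
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def BCE_loss(y_true, y_pred):
    # per-pixel binary cross entropy, averaged over the class axis so the result
    # has shape (batch, img_dim, img_dim) and can be multiplied by the per-pixel
    # weight map built in lossFunc above
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)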

Motivation for the question: based on the above code, the network's total validation loss after 20 epochs is ~1%; however, the mean intersection-over-union scores for the first 4 classes are each above 95%, while for the last class it is 23%, clearly indicating that the 5th class isn't doing well at all. However, this loss in accuracy isn't reflected in the loss at all. That means the individual losses of the samples are being combined in a way that completely negates the huge loss we see for the 5th class, and so when the per-sample losses are combined over the batch, the total is still really low. I'm not sure how to reconcile this information.
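
To see how plain averaging can mask one failing class, consider a toy calculation (the numbers are invented purely for illustration):

# hypothetical mean per-class loss contributions after training
per_class_loss = [0.001, 0.001, 0.001, 0.001, 0.05]  # the 5th class is 50x worse

# averaging over the class axis, as K.mean(..., axis=-1) effectively does
total = sum(per_class_loss) / len(per_class_loss)
print(total)  # 0.0108 -- still looks "low" even though class 5 is failing badly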

Although I have mentioned part of this answer in a related answer, let's inspect the source code step by step, with more details, to find the answer concretely.

First, let's feedforward(!): there is a call to the weighted_loss function, which takes y_true, y_pred, sample_weight and mask as inputs:

weighted_loss = weighted_losses[i]
# ...
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)

weighted_loss is actually an element of a list which contains all the (augmented) loss functions passed to the fit method:

weighted_losses = [
    weighted_masked_objective(fn) for fn in loss_functions]

The word "augmented" I used is important here. That's because, as you can see above, the actual loss function is wrapped by another function called weighted_masked_objective, which is defined as follows:

def weighted_masked_objective(fn):
    """Adds support for masking and sample-weighting to an objective function.

    It transforms an objective function `fn(y_true, y_pred)`
    into a sample-weighted, cost-masked objective function
    `fn(y_true, y_pred, weights, mask)`.

    # Arguments
        fn: The objective function to wrap,
            with signature `fn(y_true, y_pred)`.

    # Returns
        A function with signature `fn(y_true, y_pred, weights, mask)`.
    """
    if fn is None:
        return None

    def weighted(y_true, y_pred, weights, mask=None):
        """Wrapper function.

        # Arguments
            y_true: `y_true` argument of `fn`.
            y_pred: `y_pred` argument of `fn`.
            weights: Weights tensor.
            mask: Mask tensor.

        # Returns
            Scalar tensor.
        """
        # score_array has ndim >= 2
        score_array = fn(y_true, y_pred)
        if mask is not None:
            # Cast the mask to floatX to avoid float64 upcasting in Theano
            mask = K.cast(mask, K.floatx())
            # mask should have the same shape as score_array
            score_array *= mask
            # the loss per batch should be proportional
            # to the number of unmasked samples.
            score_array /= K.mean(mask)
        # apply sample weighting
        if weights is not None:
            # reduce score_array to same ndim as weight array
            ndim = K.ndim(score_array)
            weight_ndim = K.ndim(weights)
            score_array = K.mean(score_array,
                                 axis=list(range(weight_ndim, ndim)))
            score_array *= weights
            score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))
        return K.mean(score_array)
    return weighted

So there is a nested function, weighted, that actually calls the real loss function fn in the line score_array = fn(y_true, y_pred). Now, to be concrete, in the case of the example the OP provided, fn (i.e. the loss function) is binary_crossentropy. Therefore we need to take a look at the definition of binary_crossentropy() in Keras:
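
Putting the two pieces together, the wiring is roughly the following (a sketch assuming Keras 2.2.x, where the helper lives in keras.engine.training_utils; the module path varies across versions):

from keras import losses
from keras.engine.training_utils import weighted_masked_objective

# fn is the plain loss; the "augmented" version adds weights and mask arguments
weighted_bce = weighted_masked_objective(losses.binary_crossentropy)

# what Keras effectively calls per output during training:
# output_loss = weighted_bce(y_true, y_pred, sample_weight, mask)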

def binary_crossentropy(y_true, y_pred):
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)

which, in turn, calls the backend function K.binary_crossentropy(). When using Tensorflow as the backend, K.binary_crossentropy() is defined as follows:

def binary_crossentropy(target, output, from_logits=False):
    """Binary crossentropy between an output tensor and a target tensor.

    # Arguments
        target: A tensor with the same shape as `output`.
        output: A tensor.
        from_logits: Whether `output` is expected to be a logits tensor.
            By default, we consider that `output`
            encodes a probability distribution.

    # Returns
        A tensor.
    """
    # Note: tf.nn.sigmoid_cross_entropy_with_logits
    # expects logits, Keras expects probabilities.
    if not from_logits:
        # transform back to logits
        _epsilon = _to_tensor(epsilon(), output.dtype.base_dtype)
        output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
        output = tf.log(output / (1 - output))
    return tf.nn.sigmoid_cross_entropy_with_logits(labels=target,
                                                   logits=output)

tf.nn.sigmoid_cross_entropy_with_logits returns:

A Tensor of the same shape as logits with the componentwise logistic losses.

Now, let's backpropagate(!): considering the above note, the output shape of K.binary_crossentropy is the same as that of y_pred (or y_true). As the OP mentioned, y_true has shape (batch_size, img_dim, img_dim, num_classes). Therefore, K.mean(..., axis=-1) is applied over a tensor of shape (batch_size, img_dim, img_dim, num_classes), resulting in an output tensor of shape (batch_size, img_dim, img_dim). So the loss values of all the classes are averaged for each pixel in the image. Hence, the shape of score_array in the weighted function mentioned above is (batch_size, img_dim, img_dim). There is one more step: the return statement of the weighted function takes the average again, i.e. return K.mean(score_array). So how does it compute that average? If you look at the definition of the mean backend function, you will find that its axis argument is None by default:
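
This shape bookkeeping can be verified directly with random tensors (a quick sanity check using the shapes from the example above):

import numpy as np
from keras import backend as K

y_true = K.constant(np.random.randint(0, 2, (16, 256, 256, 5)).astype('float32'))
y_pred = K.constant(np.random.rand(16, 256, 256, 5).astype('float32'))

elementwise = K.binary_crossentropy(y_true, y_pred)  # (16, 256, 256, 5)
per_pixel = K.mean(elementwise, axis=-1)             # (16, 256, 256)
scalar = K.mean(per_pixel)                           # ()

print(K.int_shape(elementwise), K.int_shape(per_pixel), K.int_shape(scalar))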

def mean(x, axis=None, keepdims=False):
    """Mean of a tensor, alongside the specified axis.

    # Arguments
        x: A tensor or variable.
        axis: A list of integer. Axes to compute the mean.
        keepdims: A boolean, whether to keep the dimensions or not.
            If `keepdims` is `False`, the rank of the tensor is reduced
            by 1 for each entry in `axis`. If `keepdims` is `True`,
            the reduced dimensions are retained with length 1.

    # Returns
        A tensor with the mean of elements of `x`.
    """
    if x.dtype.base_dtype == tf.bool:
        x = tf.cast(x, floatx())
    return tf.reduce_mean(x, axis, keepdims)

It calls tf.reduce_mean(), which, given axis=None, takes the mean over all the axes of the input tensor and returns one single value. Therefore the mean of the whole tensor of shape (batch_size, img_dim, img_dim) is computed, which translates to averaging over all the labels in the batch and over all their pixels, and it is returned as a single scalar value which represents the loss. This loss value is then reported back by Keras and used for optimization.
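
A tiny example of that default behaviour (illustrative values):

import tensorflow as tf

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
tf.reduce_mean(x)          # axis=None -> mean over every element -> 2.5
tf.reduce_mean(x, axis=0)  # per-column means -> [2.0, 3.0]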


Bonus: what if our model has multiple output layers, and hence multiple loss functions are used?

Remember the first piece of code I mentioned in this answer:

weighted_loss = weighted_losses[i]
# ...
output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)

As you can see, there is a variable i which is used for indexing the array. You may have guessed correctly: it is actually part of a loop which computes the loss value for each output layer using its designated loss function, and then takes the (weighted) sum of all these loss values to compute the total loss:

# Compute total loss.
total_loss = None
with K.name_scope('loss'):
    for i in range(len(self.outputs)):
        if i in skip_target_indices:
            continue
        y_true = self.targets[i]
        y_pred = self.outputs[i]
        weighted_loss = weighted_losses[i]
        sample_weight = sample_weights[i]
        mask = masks[i]
        loss_weight = loss_weights_list[i]
        with K.name_scope(self.output_names[i] + '_loss'):
            output_loss = weighted_loss(y_true, y_pred,
                                        sample_weight, mask)
        if len(self.outputs) > 1:
            self.metrics_tensors.append(output_loss)
            self.metrics_names.append(self.output_names[i] + '_loss')
        if total_loss is None:
            total_loss = loss_weight * output_loss
        else:
            total_loss += loss_weight * output_loss
    if total_loss is None:
        if not self.losses:
            raise ValueError('The model cannot be compiled '
                             'because it has no loss to optimize.')
        else:
            total_loss = 0.

    # Add regularization penalties
    # and other layer-specific losses.
    for loss_tensor in self.losses:
        total_loss += loss_tensor
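
So, for instance, a hypothetical two-output model compiled as below (not from the OP's code) would be optimized against total_loss = 1.0 * loss_1 + 0.2 * loss_2, plus any regularization penalties:

model.compile(optimizer='adam',
              loss=['binary_crossentropy', 'mse'],
              loss_weights=[1.0, 0.2])
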
1) What gets combined first - (1) the loss values of the class (for instance 10 values (one for each class) get combined per pixel) and then all the pixels in the image, or (2) all the pixels in the image for each individual class, and then all the class losses are combined?

2) How exactly do these different pixel combinations happen - where is it summed / where is it averaged?

My answer to (1): when training on a batch of images, an array consisting of pixel values is trained by computing the non-linear function, the loss, and optimizing (updating the weights). The loss is not calculated for each individual pixel value; rather, it is calculated for each image.

The pixel values (X_train), weights, and bias (b) are used in a sigmoid (for the simplest example of non-linearity) to compute the predicted y values. These, along with y_train (one batch at a time), are used to compute the loss, which is optimized using one of the optimization methods such as SGD, momentum, Adam, etc., to update the weights and biases.
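
As a minimal sketch of that forward pass (shapes and data invented for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X_train = np.random.rand(16, 10)   # a batch of 16 examples with 10 features each
W = np.random.randn(10, 1)         # weights
b = np.zeros(1)                    # bias

y_pred = sigmoid(X_train @ W + b)  # predictions, compared against y_train in the loss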

My answer to (2): during the non-linearity operation, the pixel values (X_train) are combined with the weights (through a dot product) and added to the bias to form the predicted target values.

In a batch, there may be training examples belonging to different classes. The corresponding target values (for each class) are compared with the corresponding predicted values to compute the loss. It is therefore perfectly fine to sum up all the losses.

It really doesn't matter whether they belong to one class or multiple classes, as long as you compare each prediction with the corresponding target of the correct class. Makes sense?
