Loss function for class imbalanced binary classifier in TensorFlow



I am trying to apply deep learning to a binary classification problem with high class imbalance between the target classes (500k vs. 31k examples). I want to write a custom loss function which should be like: minimize(100 - ((predicted_smallerclass)/(total_smallerclass)) * 100)

Any pointers on how I can build this logic would be appreciated.

You can add class weights to the loss function by multiplying the logits. Regular cross-entropy loss is this:

loss(x, class) = -log(exp(x[class]) / (sum_j exp(x[j])))
               = -x[class] + log(sum_j exp(x[j]))

In the weighted case it becomes:

loss(x, class) = weights[class] * -x[class] + log(sum_j exp(weights[class] * x[j]))

So by multiplying the logits, you rescale the predictions of each class by its class weight.

For example:

ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([ratio, 1.0 - ratio])
logits = ...  # shape [batch_size, 2]
weighted_logits = tf.multiply(logits, class_weight)  # shape [batch_size, 2]
xent = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=weighted_logits, name="xent_raw")

There is now a standard loss function that supports weights per batch:

tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)

where the weights should be transformed from class weights to a weight per example (with shape [batch_size]). See the documentation.
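
As a minimal sketch of that conversion (the weight values here are hypothetical), each example's weight can be looked up from its integer label with tf.gather:

class_weights = tf.constant([1.0, 16.0])    # hypothetical weights for class 0 and class 1
weights = tf.gather(class_weights, labels)  # shape [batch_size], one weight per example
loss = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits, weights=weights)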

The code you propose seems wrong to me. The loss should be multiplied by the weight, I agree.

But if you multiply the logits by the class weights, you end up with:

weights[class] * -x[class] + log( sum_j exp(x[j] * weights[class]) )

The second term is not equal to:

weights[class] * log(sum_j exp(x[j]))

To see this, we can rewrite the latter as:

log( (sum_j exp(x[j]))^weights[class] )
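
A quick numerical check (a sketch with arbitrary values) makes the inequality concrete:

import numpy as np

x = np.array([1.0, 2.0])  # arbitrary logits
w = 3.0                   # arbitrary class weight

lhs = np.log(np.sum(np.exp(w * x)))  # log(sum_j exp(x[j] * weights[class]))
rhs = w * np.log(np.sum(np.exp(x)))  # weights[class] * log(sum_j exp(x[j]))
print(lhs, rhs)  # ~6.05 vs ~6.94 -- the two terms differ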

So this is the code I propose:

ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([[ratio, 1.0 - ratio]])
logits = ...  # shape [batch_size, 2]
labels = ...  # one-hot float labels, shape [batch_size, 2]
# the weight for each datapoint, depending on its label
weight_per_label = tf.transpose(
    tf.matmul(labels, tf.transpose(class_weight)))  # shape [1, batch_size]
xent = tf.multiply(weight_per_label,
                   tf.nn.softmax_cross_entropy_with_logits(
                       labels=labels, logits=logits, name="xent_raw"))  # shape [1, batch_size]
loss = tf.reduce_mean(xent)  # scalar

Use tf.nn.weighted_cross_entropy_with_logits() and set pos_weight to 1 / (expected ratio of positives).
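
A minimal sketch, taking the positive ratio from the question's class counts (variable names are assumptions; note that older TF 1.x versions call the first argument targets rather than labels):

pos_ratio = 31.0 / (500.0 + 31.0)  # expected ratio of positives
pos_weight = 1.0 / pos_ratio       # ~17.1

# labels and logits have the same shape, labels in {0.0, 1.0}
per_example_loss = tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=pos_weight)
loss = tf.reduce_mean(per_example_loss)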

You can check out the guides in the TensorFlow documentation: https://www.tensorflow.org/api_guides/python/contrib.losses

While specifying a scalar loss rescales the loss over the entire batch, we sometimes want to rescale the loss per batch sample. For example, if we have certain examples that matter more to us to get correct, we might want to have a higher loss than for other samples whose mistakes matter less. In this case, we can provide a weight vector of length batch_size, which results in the loss for each sample in the batch being scaled by the corresponding weight element. For example, consider the case of a classification problem where we want to maximize our accuracy but are especially interested in obtaining high accuracy for a specific class:

inputs, labels = LoadData(batch_size=3)
logits = MyModelPredictions(inputs)
# Ensures that the loss for examples whose ground truth class is `3` is 5x
# higher than the loss for all other examples.
weight = tf.multiply(4.0, tf.cast(tf.equal(labels, 3), tf.float32)) + 1
onehot_labels = tf.one_hot(labels, depth=5)
tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=weight)

I had to work with a similarly unbalanced dataset of multiple classes, and this is how I worked through it; hope it helps somebody looking for a similar solution:

This goes inside your training module:

import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

# use class weights for handling the unbalanced dataset
if mode == 'INFER':  # test/dev mode, don't weigh the loss in test mode
    sample_weights = np.ones(labels.shape)
else:
    sample_weights = compute_sample_weight(class_weight='balanced', y=labels)

This goes inside your model class definition:

# an extra placeholder for sample weights
# (assuming you already have a batch_size tensor)
self.sample_weight = tf.placeholder(dtype=tf.float32, shape=[None],
                                    name='sample_weights')
cross_entropy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=self.label, logits=logits,
    name='cross_entropy_loss')
cross_entropy_loss = tf.reduce_sum(
    cross_entropy_loss * self.sample_weight) / tf.cast(batch_size, tf.float32)
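
At training time, the sample weights computed above are then fed through the placeholder; a minimal sketch (sess, train_op, and the input/label placeholders are assumptions about the surrounding code):

_, loss_val = sess.run(
    [train_op, cross_entropy_loss],
    feed_dict={model.inputs: batch_inputs,
               model.label: batch_labels,
               model.sample_weight: sample_weights})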

The operation tf.nn.weighted_cross_entropy_with_logits() for two classes:

classes_weights = tf.constant([0.1, 1.0])
cross_entropy = tf.nn.weighted_cross_entropy_with_logits(
    logits=logits, targets=labels, pos_weight=classes_weights)
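
For this vector form to broadcast correctly, labels and logits are assumed to be float tensors of shape [batch_size, 2], with labels one-hot encoded (label_ids below is a hypothetical int tensor); the length-2 pos_weight then applies one coefficient per class column:

label_ids = ...  # int tensor of shape [batch_size], values in {0, 1}
labels = tf.one_hot(label_ids, depth=2, dtype=tf.float32)  # shape [batch_size, 2]
cross_entropy = tf.nn.weighted_cross_entropy_with_logits(
    logits=logits, targets=labels, pos_weight=classes_weights)
loss = tf.reduce_mean(cross_entropy)  # reduce per-example losses to a scalar
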
""" Weighted binary crossentropy between an output tensor and a target tensor.
# Arguments
    pos_weight: A coefficient to use on the positive examples.
# Returns
    A loss function supposed to be used in model.compile().
"""
def weighted_binary_crossentropy(pos_weight=1):
    def _to_tensor(x, dtype):
        """Convert the input `x` to a tensor of type `dtype`.
        # Arguments
            x: An object to be converted (numpy array, list, tensors).
            dtype: The destination type.
        # Returns
            A tensor.
        """
        return tf.convert_to_tensor(x, dtype=dtype)
  
  
    def _calculate_weighted_binary_crossentropy(target, output, from_logits=False):
        """Calculate weighted binary crossentropy between an output tensor and a target tensor.
        # Arguments
            target: A tensor with the same shape as `output`.
            output: A tensor.
            from_logits: Whether `output` is expected to be a logits tensor.
                By default, we consider that `output`
                encodes a probability distribution.
        # Returns
            A tensor.
        """
        # Note: tf.nn.sigmoid_cross_entropy_with_logits
        # expects logits, Keras expects probabilities.
        if not from_logits:
            # transform back to logits
            _epsilon = _to_tensor(K.epsilon(), output.dtype.base_dtype)
            output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
            output = tf.math.log(output / (1 - output))
        target = tf.dtypes.cast(target, tf.float32)
        return tf.nn.weighted_cross_entropy_with_logits(labels=target, logits=output, pos_weight=pos_weight)

    def _weighted_binary_crossentropy(y_true, y_pred):
        return K.mean(_calculate_weighted_binary_crossentropy(y_true, y_pred), axis=-1)
    
    return _weighted_binary_crossentropy

Usage:

pos = ...  # count of positive class
neg = ...  # count of negative class
total = pos + neg
weight_for_0 = (1 / neg) * total / 2.0
weight_for_1 = (1 / pos) * total / 2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
model = ...  # your model
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=weighted_binary_crossentropy(weight_for_1),
    metrics=[tf.keras.metrics.Precision(name='precision')]
)
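
Note that the class_weight dict built above is not consumed by compile(); if you prefer Keras's built-in re-weighting over the custom loss, it can be passed to fit() instead (x_train and y_train are assumed training arrays):

model.fit(x_train, y_train,
          batch_size=32, epochs=10,
          class_weight=class_weight)  # each example's loss is scaled by its class weight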
