How to implement a modified cross-entropy loss function



I'm currently working on a change-detection project for a university course and have been writing a custom loss function. I know I have to use a function closure to access data from the model's layers, but I don't know enough TensorFlow/Keras to write code that works.

[image: the modified cross-entropy loss equation]

Above is the modified cross-entropy loss equation I'm trying to turn into code. The loss needs a weight matrix W, which I have to compute from the model's inputs X1 and X2. This is what I have so far:

import tensorflow as tf

def cmg_loss(X1, X2):
    """Closure: the inner loss function can see the model inputs X1 and X2."""
    def loss(y_true, y_pred):
        if X1.shape[0] is None:
            # Batch dimension unknown at trace time: treat as a single sample.
            x1 = tf.reshape(X1, (224, 224, 3))
            x2 = tf.reshape(X2, (224, 224, 3))
            cmm = [get_cmm(x1, x2)]
        else:
            cmm = [get_cmm(X1[i], X2[i]) for i in range(X1.shape[0])]

        N_val = y_true.shape[0]
        if N_val is None:
            # get_cmgloss already applies the minus sign and normalisation.
            return get_cmgloss(y_true[0], y_pred[0], cmm[0])

        total = tf.convert_to_tensor(0.0)
        for i in range(N_val):
            total = tf.math.add(total, get_cmgloss(y_true[i], y_pred[i], cmm[i]))
        N = tf.convert_to_tensor(N_val, dtype=tf.float32)
        return tf.math.divide(total, N)
    return loss

def get_cmgloss(y_true, y_pred, W):
    y_true = tf.cast(y_true, dtype=tf.float32)
    y_pred = tf.cast(y_pred, dtype=tf.float32)
    # findbetaminus expects the ground-truth mask, not the prediction
    # (tensors have no .astype; use tf.cast instead).
    betaminus = tf.cast(findbetaminus(y_true), dtype=tf.float32)
    betaplus = 1.0 - betaminus
    N = tf.cast(tf.size(y_true), dtype=tf.float32)  # number of pixels
    eps = 1e-7  # avoid log(0)
    # Scalars broadcast against the (224, 224) masks, so no fill matrices needed.
    first_term = betaminus * y_true * tf.math.log(y_pred + eps)
    second_term = betaplus * (1.0 - y_true) * tf.math.log(1.0 - y_pred + eps)
    loss = tf.math.reduce_sum(W * (first_term + second_term))
    return tf.math.divide(-loss, N)

def findbetaminus(gt):
    # Fraction of "changed" (value 1) pixels in the ground-truth mask.
    count_1 = tf.math.count_nonzero(gt == 1, dtype=tf.float32)
    size = tf.cast(tf.size(gt), tf.float32)
    return count_1 / size
def get_cmm(x1, x2):
    # Per-pixel Euclidean distance across the three bands.
    sum_3bands = tf.reduce_sum(tf.math.squared_difference(x1, x2), axis=-1)
    cmm = tf.math.sqrt(sum_3bands)

    cmm_bar = cmm / tf.reduce_max(cmm)  # normalise to [0, 1]
    mean_cmm_bar = tf.reduce_mean(cmm_bar)
    # Raise every value below the mean up to the mean.
    return tf.where(cmm_bar < mean_cmm_bar, mean_cmm_bar, cmm_bar)
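For a quick sanity check, the weighting helper can be run on random images (a self-contained copy of get_cmm is repeated below so the snippet runs on its own; the 224×224×3 shape matches the question):

```python
import tensorflow as tf

def get_cmm(x1, x2):
    # Per-pixel Euclidean distance across the bands, normalised and clamped.
    cmm = tf.math.sqrt(tf.reduce_sum(tf.math.squared_difference(x1, x2), axis=-1))
    cmm_bar = cmm / tf.reduce_max(cmm)
    mean_cmm_bar = tf.reduce_mean(cmm_bar)
    return tf.where(cmm_bar < mean_cmm_bar, mean_cmm_bar, cmm_bar)

x1 = tf.random.uniform((224, 224, 3))
x2 = tf.random.uniform((224, 224, 3))
w = get_cmm(x1, x2)
# After normalising and clamping, every weight lies between the mean and 1.
```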

It would be a great help if you could guide me on how to develop a loss function that uses data from other layers and calls several helper functions in its computation.

If you want to use a more advanced loss function like this, you have to write the training step yourself with tf.GradientTape instead of using the fit method. You can find plenty of examples online and in the TensorFlow documentation. It is more work, but much more powerful: your custom model's call method can essentially return a list of tensors, from which you compute the desired loss and then choose which parameters to update.
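The approach above can be sketched as follows. This is a minimal example, not the author's architecture: the two-input Conv2D model and the simple |X1 − X2|-based pixel weight are stand-ins for the real network and the get_cmm/get_cmgloss helpers, and the 8×8 shapes are only there to keep the toy run fast.

```python
import tensorflow as tf

H = 8  # small spatial size for illustration; the question uses 224

# Stand-in model: concatenate the two input images, predict a change mask.
x1_in = tf.keras.Input(shape=(H, H, 3))
x2_in = tf.keras.Input(shape=(H, H, 3))
feats = tf.keras.layers.Concatenate()([x1_in, x2_in])
mask = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(feats)
model = tf.keras.Model([x1_in, x2_in], mask)

optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(x1, x2, y_true):
    with tf.GradientTape() as tape:
        y_pred = model([x1, x2], training=True)
        # Because the loss lives inside the training step, it can freely use
        # the raw inputs x1/x2 (e.g. to build the weight matrix W) and call
        # any number of helper functions -- no closure tricks needed.
        w = tf.math.sqrt(tf.reduce_sum(
            tf.math.squared_difference(x1, x2), axis=-1, keepdims=True))
        bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)[..., None]
        loss = tf.reduce_mean(w * bce)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# One toy training step on random data.
x1 = tf.random.uniform((2, H, H, 3))
x2 = tf.random.uniform((2, H, H, 3))
y = tf.cast(tf.random.uniform((2, H, H, 1)) > 0.5, tf.float32)
loss_value = train_step(x1, x2, y)
```

From here, train_step is called once per batch inside an ordinary Python loop over a tf.data.Dataset, replacing model.fit.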
