Loss function for changing a classification network into a regression network



I am trying to change a neural network that classifies point clouds into 40 different categories into a regression network that predicts a specific property of the point cloud as three values (x, y, z). For the loss function, I replaced this:

loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)  
classify_loss = tf.reduce_mean(loss)   
tf.summary.scalar('classify loss', classify_loss)

with this:

mse = tf.keras.losses.MeanSquaredError()
regression_loss = mse(label, pred)  # Keras losses take (y_true, y_pred)
regression_loss = tf.cast(regression_loss, tf.float32)
tf.summary.scalar('regression loss', regression_loss)

I only changed the last layer's output size from 40 to 3, plus a few small things so the network would actually start training. I used tf.cast because another loss, computed in the middle of the network, is multiplied by 0.001 and added to this one to produce the total loss; before the cast it kept raising an error that x (regression_loss) was int and y (middle_loss) was float, so they could not be added, which was itself strange. The label values are small, near zero, but the loss stays at zero almost all the time, except at random moments. I would appreciate any help finding the problem or suggesting a replacement loss function. Here is the log:
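As a quick sanity check (a minimal NumPy sketch, independent of the TensorFlow graph; the sample values are made up), an MSE between predictions and labels that are small but non-zero should itself be small but non-zero, so a constant 0.000000 in the log points to a problem upstream of the loss:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error over all elements, computed in float32
    y_true = np.asarray(y_true, dtype=np.float32)
    y_pred = np.asarray(y_pred, dtype=np.float32)
    return float(np.mean((y_true - y_pred) ** 2))

# Hypothetical near-zero labels and predictions, like those described above
label = [[0.12, -0.03, 0.40]]
pred = [[0.10, -0.02, 0.31]]
print(mse(label, pred))  # small, but clearly non-zero
```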

**** EPOCH 000 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
Model saved in file: log/model.ckpt
**** EPOCH 001 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 002 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.019608
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.015625
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 003 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 004 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 005 ****
----0-----
mean loss: 0.019608
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 006 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 007 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.015625
----3-----
mean loss: 0.015625
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 008 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 009 ****
----0-----
mean loss: 0.015625
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 010 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.015625
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
Model saved in file: log/model.ckpt
**** EPOCH 011 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 012 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 013 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.000000
----2-----
mean loss: 0.000000
----3-----
mean loss: 0.000000
----4-----
mean loss: 0.000000
----0-----
----1-----
eval mean loss: 0.000000
**** EPOCH 014 ****
----0-----
mean loss: 0.000000
----1-----
mean loss: 0.015625
----2-----

I considered multiplying the regression loss by 100, but I don't think that would make much difference.

I found the cause that same day :) In case anyone else runs into this kind of problem, here is what I did wrong:

In the placeholder for the labels, I had forgotten to change tf.int32 to tf.float32, so the labels were all converted to int32 and the loss kept evaluating to zero!

labels_pl = tf.placeholder(tf.float32, shape=(batch_size,3))
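The effect is easy to reproduce outside TensorFlow: casting small near-zero float labels to int32 truncates them all to 0, so the network only ever sees zero targets (a plain NumPy illustration; the label values are made up):

```python
import numpy as np

# Labels near zero, like the ones described above
labels = np.array([0.12, -0.03, 0.40, 0.007])

# Feeding them through an int32 placeholder truncates toward zero
truncated = labels.astype(np.int32)
print(truncated)  # every label collapses to 0
```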
