Object Detection with YOLOv5: An Introduction to the Loss Functions
1. BCEWithLogitsLoss
1.1 Relevant code from the PyTorch source
class BCEWithLogitsLoss(_Loss):
    def __init__(self, weight: Optional[Tensor] = None, size_average=None, reduce=None,
                 reduction: str = 'mean', pos_weight: Optional[Tensor] = None) -> None:
        super(BCEWithLogitsLoss, self).__init__(size_average, reduce, reduction)
        self.register_buffer('weight', weight)
        self.register_buffer('pos_weight', pos_weight)
        self.weight: Optional[Tensor]
        self.pos_weight: Optional[Tensor]

    def forward(self, input: Tensor, target: Tensor) -> Tensor:
        # input is raw logits; the sigmoid is applied inside the functional call
        return F.binary_cross_entropy_with_logits(input, target,
                                                  self.weight,
                                                  pos_weight=self.pos_weight,
                                                  reduction=self.reduction)
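The weight and pos_weight arguments above are optional. As a minimal sketch of my own (the values are illustrative, not from the original post), pos_weight rescales the loss of the positive (target = 1) terms; YOLOv5 relies on this argument to apply its cls_pw and obj_pw hyperparameters.

import torch
import torch.nn as nn

logits = torch.tensor([[0.5, -1.0, 2.0]])   # raw scores, no sigmoid applied
target = torch.tensor([[1.0, 0.0, 1.0]])

plain = nn.BCEWithLogitsLoss()
weighted = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([2.0]))  # illustrative value

print(plain(logits, target))      # unweighted loss
print(weighted(logits, target))   # positive (target = 1) terms are weighted 2x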
1.2 Mathematical background
BCEWithLogitsLoss fuses BCELoss (BCE: binary cross-entropy) with a sigmoid layer, so the explicit sigmoid step can be omitted.
The BCELoss formula (per sample, averaged over the N samples by default) is:

$$\ell_n = -\big[\,y_n \log x_n + (1 - y_n)\log(1 - x_n)\,\big],\qquad \mathrm{BCELoss} = \frac{1}{N}\sum_{n=1}^{N}\ell_n$$

where $x_n$ is the predicted probability and $y_n$ the target label.
For three binary-classification training samples, the computation goes as follows:
import torch
import torch.nn as nn

input = torch.randn(3, 3)   # raw scores (logits)
target = torch.FloatTensor([[0, 1, 1], [0, 0, 1], [1, 0, 1]])

loss = nn.BCELoss()
m = nn.Sigmoid()
input_m = m(input)          # BCELoss expects probabilities, so apply the sigmoid first
result = loss(input_m, target)
Result: result = tensor(1.0224) (the exact value varies from run to run, since input is random).
Using BCEWithLogitsLoss instead, the sigmoid is applied internally, so the raw input is passed directly:

loss_1 = nn.BCEWithLogitsLoss()
result = loss_1(input, target)

The result is again result = tensor(1.0224), matching the BCELoss value above.
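To tie the formula and the code together, here is a small sketch of my own that evaluates the BCE expression element by element and checks it against nn.BCEWithLogitsLoss (a fixed seed makes the comparison reproducible):

import torch
import torch.nn as nn

torch.manual_seed(0)  # fixed seed so both computations see the same input
input = torch.randn(3, 3)
target = torch.FloatTensor([[0, 1, 1], [0, 0, 1], [1, 0, 1]])

p = torch.sigmoid(input)                                         # predicted probabilities
manual = -(target * torch.log(p) + (1 - target) * torch.log(1 - p)).mean()
auto = nn.BCEWithLogitsLoss()(input, target)                     # sigmoid applied internally

print(manual, auto)  # the two values match up to floating-point error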
2. FocalLoss
2.1 PyTorch source code (the FocalLoss wrapper used in YOLOv5)
class FocalLoss(nn.Module):
    # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super(FocalLoss, self).__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # required to apply FL to each element

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        # p_t = torch.exp(-loss)
        # loss *= self.alpha * (1.000001 - p_t) ** self.gamma  # non-zero power for gradient stability

        # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
        pred_prob = torch.sigmoid(pred)  # prob from logits
        p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = (1.0 - p_t) ** self.gamma
        loss *= alpha_factor * modulating_factor

        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        else:  # 'none'
            return loss
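A short usage sketch of my own (the gamma/alpha values are simply the defaults above): the wrapper takes an nn.BCEWithLogitsLoss instance and, like it, expects raw logits. This mirrors how YOLOv5 switches focal loss on when its fl_gamma hyperparameter is set above 0.

import torch
import torch.nn as nn

criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)

pred = torch.randn(4, 80)     # raw class logits for 4 samples, 80 classes
true = torch.zeros(4, 80)
true[:, 3] = 1.0              # mark one positive class per sample (illustrative)

print(criterion(pred, true))  # scalar loss; reduction follows the wrapped loss ('mean')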
2.2 Mathematical background
Focal Loss was designed by Kaiming He and colleagues to handle the extreme foreground/background class imbalance (e.g. 1:1000) encountered when training one-stage object detectors. It is a modification of the binary cross-entropy loss:

$$\mathrm{FL}(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t)$$

Here α and γ are both tunable hyperparameters, and p is the model's predicted probability, with a value in (0, 1).
When y = 1, p → 1 indicates an easy positive, and its contribution to the weight, (1 - p_t)^γ, tends to 0;
when y = 0, p → 0 indicates an easy negative, and its contribution to the weight likewise tends to 0.
Focal Loss therefore down-weights the background class while also down-weighting easy positives and easy negatives;
γ controls how strongly this down-weighting is applied.
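As a quick numerical illustration (my own numbers), the modulating factor (1 - p_t)^γ shrinks the loss of easy examples far more than that of hard ones:

# Modulating factor (1 - p_t) ** gamma for an easy vs. a hard example, with gamma = 1.5
easy_pt, hard_pt, gamma = 0.95, 0.3, 1.5
print((1 - easy_pt) ** gamma)   # ~0.011: the easy example is almost silenced
print((1 - hard_pt) ** gamma)   # ~0.586: the hard example keeps most of its loss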
Deriving Focal Loss from the standard cross-entropy:
The standard cross-entropy is

$$\mathrm{CE}(p, y) = \begin{cases} -\log(p) & \text{if } y = 1 \\ -\log(1 - p) & \text{otherwise} \end{cases}$$

where p is the model's predicted probability that the sample belongs to class y = 1. For notational convenience, define

$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$$

so the cross-entropy can be rewritten as $\mathrm{CE}(p_t) = -\log(p_t)$.
α-balanced cross-entropy:
One way to address class imbalance is to introduce a weighting factor α ∈ [0, 1]: use α when y = 1 and 1 - α when y = 0. As α increases, the weight on the background class decreases, which mitigates the effect of the overwhelming number of background samples on training. Analogously to p_t, the α-balanced cross-entropy can be written as

$$\mathrm{CE}(p_t) = -\alpha_t\,\log(p_t)$$

Finally, multiplying in the modulating factor (1 - p_t)^γ recovers the Focal Loss given above, $\mathrm{FL}(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t)$.
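As a worked instance with numbers of my own choosing, take an easy background sample with y = 0 and p = 0.05 (so p_t = 0.95), with α = 0.25 and γ = 2:

$$\mathrm{CE}(p_t) = -\log(0.95) \approx 0.051,\qquad \mathrm{FL}(p_t) = -0.75 \cdot (1 - 0.95)^{2} \cdot \log(0.95) \approx 9.6\times 10^{-5}$$

so this easy negative contributes roughly 500 times less loss than it would under plain cross-entropy.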
Other links:
PyTorch BCELoss and BCEWithLogitsLoss explained (豪哥的博客, CSDN)