Project author: CoinCheung

Project description:
label-smooth, amsoftmax, focal-loss, triplet-loss, lovasz-softmax. Maybe useful
Language: Python
Repository: git://github.com/CoinCheung/pytorch-loss.git
Created: 2019-04-10T07:48:03Z
Project page: https://github.com/CoinCheung/pytorch-loss

License: MIT License



pytorch-loss

My implementations of label-smooth, amsoftmax, partial-fc, focal-loss, dual-focal-loss, triplet-loss, giou/diou/ciou-loss/func, affinity-loss, pc_softmax_cross_entropy, ohem-loss (softmax-based online hard example mining loss), large-margin-softmax (BMVC 2019), lovasz-softmax-loss, and dice-loss (both generalized soft dice loss and batch soft dice loss). Maybe this will be useful in my future work.
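As a rough illustration of one of the losses above, here is a framework-agnostic sketch of the binary focal-loss formula, FL(p) = -alpha_t * (1 - p_t)^gamma * log(p_t). The function name and default parameters are my own choices for illustration, not this repo's API:

```python
import math

def binary_focal_loss(p, target, alpha=0.25, gamma=2.0):
    """Focal loss for a single binary prediction.

    p      -- predicted probability of the positive class, in (0, 1)
    target -- ground-truth label, 0 or 1

    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples so training focuses on the hard ones.
    """
    p_t = p if target == 1 else 1.0 - p           # prob. of the true class
    alpha_t = alpha if target == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 1 this reduces to plain cross-entropy, which makes an easy sanity check.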

I have also implemented the swish, hard-swish (hswish), and mish activation functions.
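For reference, the three activations follow the formulas below. This is a plain-Python sketch of the math only; the repo's SwishVx/HSwishVx/MishVx classes wrap the same formulas as pytorch modules with different backward strategies:

```python
import math

def swish(x):
    # swish(x) = x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def hswish(x):
    # hard-swish(x) = x * relu6(x + 3) / 6, a cheap piecewise
    # approximation of swish used in MobileNetV3
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

def mish(x):
    # mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x)
    return x * math.tanh(math.log1p(math.exp(x)))
```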

Additionally, a cuda-based one-hot function is included (with support for label smoothing).
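The idea behind a label-smoothing-aware one-hot encoding can be sketched in plain Python. The repo's cuda version fuses this work on the GPU; the smoothing convention below (1 - smooth on the true class, the rest spread evenly over the other classes) is one common choice and may differ from the repo's exact formula:

```python
def one_hot_smooth(labels, n_classes, smooth=0.1):
    """Encode integer labels as smoothed one-hot rows.

    The true class gets 1 - smooth and the remaining probability mass
    is spread evenly over the other classes, so each row sums to 1.
    """
    on = 1.0 - smooth
    off = smooth / (n_classes - 1)
    return [[on if c == lb else off for c in range(n_classes)]
            for lb in labels]
```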

I have also added an “Exponential Moving Average (EMA)” operator.
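An EMA operator keeps a shadow copy of each parameter, updated as shadow = decay * shadow + (1 - decay) * param after every optimizer step; at evaluation time the shadow values are swapped into the model. A minimal dict-based sketch (the class and method names are mine, not the repo's API):

```python
class SimpleEMA:
    """Minimal exponential moving average over a dict of named scalars."""

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)   # start the shadow at the current values

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current value
        d = self.decay
        for name, value in params.items():
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * value
```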

Convolution ops are also included, such as coord-conv2d and dynamic-conv2d (dy-conv2d).
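The key idea of coord-conv is simply to append two normalized coordinate channels (x and y in [-1, 1]) to the input before an ordinary convolution, so the kernel can condition on position. The channel construction can be sketched without any framework (a hypothetical helper, not the repo's signature; assumes height and width are at least 2):

```python
def add_coord_channels(image):
    """image: a list of channels, each an H x W nested list of floats.

    Returns the original channels plus an x- and a y-coordinate
    channel, each linearly spaced over [-1, 1].
    """
    h, w = len(image[0]), len(image[0][0])
    xs = [[2.0 * x / (w - 1) - 1.0 for x in range(w)] for _ in range(h)]
    ys = [[2.0 * y / (h - 1) - 1.0 for _ in range(w)] for y in range(h)]
    return image + [xs, ys]
```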

Some operators are implemented as pytorch cuda extensions, so you need to compile them first:

  $ python -m pip install .

After installing, you can pick what you need and import the losses or ops like one of these:

  from pytorch_loss import SwishV1, SwishV2, SwishV3
  from pytorch_loss import HSwishV1, HSwishV2, HSwishV3
  from pytorch_loss import MishV1, MishV2, MishV3
  from pytorch_loss import convert_to_one_hot, convert_to_one_hot_cu, OnehotEncoder
  from pytorch_loss import EMA
  from pytorch_loss import TripletLoss
  from pytorch_loss import SoftDiceLossV1, SoftDiceLossV2, SoftDiceLossV3
  from pytorch_loss import PCSoftmaxCrossEntropyV1, PCSoftmaxCrossEntropyV2
  from pytorch_loss import LargeMarginSoftmaxV1, LargeMarginSoftmaxV2, LargeMarginSoftmaxV3
  from pytorch_loss import LabelSmoothSoftmaxCEV1, LabelSmoothSoftmaxCEV2, LabelSmoothSoftmaxCEV3
  from pytorch_loss import GIOULoss, DIOULoss, CIOULoss
  from pytorch_loss import iou_func, giou_func, diou_func, ciou_func
  from pytorch_loss import FocalLossV1, FocalLossV2, FocalLossV3
  from pytorch_loss import Dual_Focal_loss
  from pytorch_loss import GeneralizedSoftDiceLoss, BatchSoftDiceLoss
  from pytorch_loss import AMSoftmax
  from pytorch_loss import AffinityFieldLoss, AffinityLoss
  from pytorch_loss import OhemCELoss, OhemLargeMarginLoss
  from pytorch_loss import LovaszSoftmaxV1, LovaszSoftmaxV3
  from pytorch_loss import TaylorCrossEntropyLossV1, TaylorCrossEntropyLossV3
  from pytorch_loss import InfoNceDist
  from pytorch_loss import PartialFCAMSoftmax
  from pytorch_loss import TaylorSoftmaxV1, TaylorSoftmaxV3
  from pytorch_loss import LogTaylorSoftmaxV1, LogTaylorSoftmaxV3
  from pytorch_loss import CoordConv2d, DY_Conv2d

Note that some losses or ops have 3 versions, such as LabelSmoothSoftmaxCEV1, LabelSmoothSoftmaxCEV2, and LabelSmoothSoftmaxCEV3. Here V1 means an implementation with pure pytorch ops that relies on torch.autograd for the backward computation, V2 means pure pytorch ops with a self-derived formula for the backward computation, and V3 means an implementation with a cuda extension. Generally speaking, the V3 ops are faster and more memory efficient, since I have tried to squeeze everything into one cuda kernel function, which in most cases incurs less overhead than a combination of pytorch ops.

For those who happen to find this repo: if you spot errors in my code, feel free to open an issue to correct me.