Project author: zh217

Project description:
Auto Segmentation Criterion (ASG) implemented in pytorch
Primary language: C++
Project address: git://github.com/zh217/torch-asg.git
Created: 2019-03-26T02:58:48Z
Project community: https://github.com/zh217/torch-asg

License: GNU General Public License v3.0

Auto Segmentation Criterion (ASG) for pytorch

This repo contains a pytorch implementation of the auto segmentation criterion (ASG), introduced in the paper
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System by Facebook.

As mentioned in this blog post by Daniel Galvez, ASG is an alternative to the connectionist temporal classification
(CTC) criterion widely used in deep learning. Compared with CTC, it has the advantage of being a globally normalized
model without CTC's conditional independence assumption, and it has the potential to integrate better with
WFST frameworks.

Unfortunately, Facebook’s implementation in its official
wav2letter++ project is based on the ArrayFire C++ framework, which
makes experimentation rather difficult. Hence we have ported the ASG implementation in wav2letter++ to pytorch as
C++ extensions.

Our implementation should produce the same results as Facebook’s, but the two implementations are completely different.
For example, their code performs only an alpha recursion during the forward pass and then brute-forces the
back-propagation during the backward pass, whereas we do a proper alpha-beta recursion during the forward pass so that
no recursion at all is needed during the backward pass. This gives our implementation a much higher potential for
parallelism. Another difference is that we try to use pytorch’s native functions as much as possible, whereas
Facebook’s implementation is essentially one gigantic piece of hand-written C-style code operating on raw arrays.
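
To illustrate the idea, here is a simplified, single-sequence sketch in plain pytorch of an alpha-beta recursion over
the fully connected graph used in ASG's normalizer. This is not the batched C++/CUDA code in this repo; it only assumes
per-frame scores of shape (T, C) and the transition convention described in the example further below. Once alpha and
beta are both stored, the per-frame marginals that drive the gradients come out of one elementwise expression, with no
recursion left for the backward pass.

  import torch

  def full_graph_sketch(emissions, transition):
      # Illustrative sketch only: single sequence, no batching, not the code used in this repo.
      # emissions: (T, C) per-frame label scores; transition[i, j]: score of moving from label j to label i
      T, C = emissions.shape
      alpha = emissions.new_zeros(T, C)
      beta = emissions.new_zeros(T, C)
      alpha[0] = emissions[0]
      for t in range(1, T):  # alpha recursion (left to right)
          alpha[t] = emissions[t] + torch.logsumexp(alpha[t - 1].unsqueeze(0) + transition, dim=1)
      for t in range(T - 2, -1, -1):  # beta recursion (right to left), also done during the forward pass
          beta[t] = torch.logsumexp(beta[t + 1] + emissions[t + 1] + transition.t(), dim=1)
      log_z = torch.logsumexp(alpha[-1], dim=0)  # log partition function of the fully connected graph
      # Per-frame posterior marginals, i.e. d log_z / d emissions[t, i]: no recursion needed here
      marginals = torch.exp(alpha + beta - log_z)
      return log_z, marginals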

In the doc folder, you can find the maths derivation of our implementation.

Project status

  • CPU (openmp) implementation
  • GPU (cuda) implementation
  • testing
  • performance tuning and comparison
  • Viterbi decoders
  • generalization to better integrate with general WFSTs decoders

Using the project

Ensure pytorch 1.0.1 or later is installed, clone the project, and in a terminal run:

  cd torch_asg
  pip install .

Tested with python 3.7.1. You need to have a suitable C++ toolchain installed. For GPU support, you need an NVIDIA card
with compute capability >= 6.0.
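
After installation, a quick smoke test along these lines (a minimal sketch) confirms that the C++ extension compiled
and can be imported:

  import torch
  from torch_asg import ASGLoss

  # If the import above succeeds, the C++ extension was built and loaded correctly
  loss_fn = ASGLoss(num_labels=5)
  print(loss_fn)
  print('CUDA available:', torch.cuda.is_available())  # relevant if you plan to use the GPU implementation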

Then in your python code:

  import torch
  from torch_asg import ASGLoss

  def test_run():
      num_labels = 7
      input_batch_len = 6
      num_batches = 2
      target_batch_len = 5
      asg_loss = ASGLoss(num_labels=num_labels,
                         reduction='mean',  # mean (default), sum, none
                         gpu_no_stream_impl=False,  # see below for explanation
                         forward_only=False  # see below for explanation
                         )
      for i in range(1):
          # Note that the inputs follow the CTC convention, i.e. the batch dimension is dimension 1 instead of 0,
          # in order to have a more efficient GPU implementation
          inputs = torch.randn(input_batch_len, num_batches, num_labels, requires_grad=True)
          targets = torch.randint(0, num_labels, (num_batches, target_batch_len))
          input_lengths = torch.randint(1, input_batch_len + 1, (num_batches,))
          target_lengths = torch.randint(1, target_batch_len + 1, (num_batches,))
          loss = asg_loss.forward(inputs, targets, input_lengths, target_lengths)
          print('loss', loss)
          # You can get the transition matrix if you need it.
          # transition[i, j] is the transition score from label j to label i.
          print('transition matrix', asg_loss.transition)
          loss.backward()
          print('transition matrix grad', asg_loss.transition.grad)
          print('inputs grad', inputs.grad)

  test_run()

There are two options in the loss constructor that warrant further explanation:

  • gpu_no_stream_impl: by default, when running on GPU we use a highly concurrent implementation that does some rather
    involved CUDA stream manipulation. You can turn this concurrent implementation off by setting this parameter to
    True, in which case CUDA kernels are launched serially. This is useful for debugging.
  • forward_only: by default, our implementation does quite a lot of work concurrently during the forward pass that is
    only needed for computing gradients. If you don’t need gradients, setting this parameter to True gives a further
    speed boost. Note that forward-only mode is automatically active when your model is in evaluation mode.
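
For instance, a loss instance meant purely for scoring (no gradients) might be set up as follows; this is a sketch
that uses only the constructor arguments shown above:

  import torch
  from torch_asg import ASGLoss

  # Sketch: scoring only, so forward_only=True skips the extra bookkeeping needed for gradients
  scoring_loss = ASGLoss(num_labels=7, reduction='sum', forward_only=True)

  with torch.no_grad():  # no gradients are needed, matching forward_only=True
      inputs = torch.randn(6, 2, 7)            # (input_batch_len, num_batches, num_labels)
      targets = torch.randint(0, 7, (2, 5))    # (num_batches, target_batch_len)
      input_lengths = torch.full((2,), 6, dtype=torch.long)
      target_lengths = torch.full((2,), 5, dtype=torch.long)
      print('score', scoring_loss(inputs, targets, input_lengths, target_lengths))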

Compared to Facebook’s implementation, we have also omitted scaling based on input/output lengths. If you need it, you
can do it yourself by using the 'none' reduction and scaling the individual scores before summing/averaging.
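
For example, a per-target-length normalization might look like this (a sketch; how you scale is up to you):

  import torch
  from torch_asg import ASGLoss

  num_labels, input_batch_len, num_batches, target_batch_len = 7, 6, 2, 5
  loss_fn = ASGLoss(num_labels=num_labels, reduction='none')  # one score per batch element

  inputs = torch.randn(input_batch_len, num_batches, num_labels, requires_grad=True)
  targets = torch.randint(0, num_labels, (num_batches, target_batch_len))
  input_lengths = torch.randint(1, input_batch_len + 1, (num_batches,))
  target_lengths = torch.randint(1, target_batch_len + 1, (num_batches,))

  per_sample = loss_fn(inputs, targets, input_lengths, target_lengths)
  # e.g. divide each sample's score by its target length before averaging
  scaled = (per_sample / target_lengths.to(per_sample.dtype)).mean()
  scaled.backward()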