1. 01 Sep, 2021 (2 commits)
  2. 31 Aug, 2021 (3 commits)
  3. 30 Aug, 2021 (1 commit)
  4. 21 Aug, 2021 (1 commit)
  5. 17 Jul, 2021 (3 commits)
    • Added more fusion and vectorized kernel for transducer (#1125) · 0c2c6eea
      Nan Zheng authored
      * Added support for fused ReLU and dropout into transducer joint
      
      * Reorganized code selection path in transducer joint fwd
      * Added support for fused ReLU+dropout into transducer joint
      
      * Vectorize transducer loss backward with fused softmax (#3)
      
      * Nanz/transducer loss (#4)
      
      * Vectorize transducer loss backward with fused softmax
      
      * Added a predicate to avoid potential IMA
      
      * Nanz/transducer loss (#5)
      
      * Vectorize transducer loss backward with fused softmax
      
      * Added a predicate to avoid potential IMA
      
      * Added more predicates to avoid IMAs
      
      * Updated documentation for newly added features.
      
      * Fixed an error in transducer.py (a hedged sketch of the fused ReLU + dropout idea follows below)
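      The snippet below is a minimal, unfused PyTorch reference for what "fused ReLU + dropout in the transducer joint" computes. The function name and the (B, T, H)/(B, U, H) shapes are illustrative assumptions; the actual apex.contrib.transducer code fuses these steps into a single CUDA kernel rather than running them as separate ops.

      import torch
      import torch.nn.functional as F

      def joint_relu_dropout_reference(f, g, dropout_p=0.1, training=True):
          # f: (B, T, H) encoder output; g: (B, U, H) predictor output.
          # Broadcast-add to form the joint tensor of shape (B, T, U, H),
          # then apply ReLU and dropout as separate (unfused) operations.
          joint = f.unsqueeze(2) + g.unsqueeze(1)
          joint = F.relu(joint)
          joint = F.dropout(joint, p=dropout_p, training=training)
          return joint

      # Illustrative shapes only.
      B, T, U, H = 2, 5, 3, 8
      out = joint_relu_dropout_reference(torch.randn(B, T, H), torch.randn(B, U, H))
      print(out.shape)  # torch.Size([2, 5, 3, 8])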
    • Adds small-batch kernels (#1126) · ed719967
      yjk21 authored
    • local_rank fix (#1129) · c1378e6f
      X Wang authored
      * local_rank and install CUDA version fix
  6. 15 Jun, 2021 (2 commits)
  7. 26 May, 2021 (1 commit)
  8. 17 May, 2021 (1 commit)
  9. 20 Apr, 2021 (1 commit)
  10. 17 Apr, 2021 (3 commits)
  11. 16 Apr, 2021 (1 commit)
  12. 15 Apr, 2021 (3 commits)
  13. 24 Mar, 2021 (2 commits)
  14. 23 Feb, 2021 (1 commit)
  15. 10 Feb, 2021 (1 commit)
  16. 20 Jan, 2021 (1 commit)
  17. 18 Dec, 2020 (2 commits)
  18. 04 Dec, 2020 (3 commits)
  19. 02 Dec, 2020 (1 commit)
  20. 01 Dec, 2020 (1 commit)
  21. 20 Oct, 2020 (1 commit)
    • Optimize the sync batchnorm by batching the communication (#980) · 8a1ed9e8
      lly-zero-one authored
      In this PR, we mainly optimize the performance of SyncBatchNorm and also fix one potential issue in the welford_parallel kernel implementation.
      
      For the performance improvement, we batch the mean/var/count all_gather communication together and send it once in the forward path.
      We also batch the all_reduce calls in the backward path.
      We add a contiguous() call on the input of the welford_parallel kernel.
      If there is a standard perf benchmark, I would be happy to run it.
      (A hedged sketch of the batched all_gather idea follows below.)
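      Below is a minimal sketch of the batching idea described above: the per-rank mean/var/count statistics are packed into one contiguous buffer so a single all_gather replaces three separate collectives. The function name, the packing layout, and the assumption of an already-initialized torch.distributed process group are illustrative; this is not the actual apex SyncBatchNorm implementation.

      import torch
      import torch.distributed as dist

      def gather_bn_stats(mean, var, count):
          # mean, var: (C,) per-channel statistics; count: scalar tensor.
          # Pack all three into one contiguous buffer so one all_gather
          # replaces three separate collectives in the forward path.
          world_size = dist.get_world_size()
          packed = torch.cat([mean, var, count.reshape(1)]).contiguous()
          gathered = [torch.empty_like(packed) for _ in range(world_size)]
          dist.all_gather(gathered, packed)  # single collective per forward pass
          # Unpack back into per-rank mean/var/count.
          stacked = torch.stack(gathered)    # (world_size, 2*C + 1)
          C = mean.numel()
          return stacked[:, :C], stacked[:, C:2 * C], stacked[:, 2 * C]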
  22. 29 Sep, 2020 (1 commit)
  23. 16 Sep, 2020 (1 commit)
  24. 15 Sep, 2020 (2 commits)
  25. 15 Aug, 2020 (1 commit)