1. 07 Feb, 2020 4 commits
  2. 06 Feb, 2020 4 commits
  3. 05 Feb, 2020 1 commit
  4. 04 Feb, 2020 1 commit
  5. 03 Feb, 2020 1 commit
  6. 01 Feb, 2020 3 commits
  7. 31 Jan, 2020 5 commits
  8. 30 Jan, 2020 4 commits
    • Let pairwise_iou_rotated support large number of boxes (#376) · 746bee3b
      Yuxin Wu authored
      Summary:
      Fix https://github.com/facebookresearch/detectron2/issues/757
      Pull Request resolved: https://github.com/fairinternal/detectron2/pull/376
      
      Test Plan:
      ```
      python -m unittest  tests.test_rotated_boxes.TestRotatedBoxesLayer.test_iou_too_many_boxes_cuda
      ```
      
      Reviewed By: rbgirshick
      
      Differential Revision: D19617036
      
      Pulled By: ppwwyyxx
      
      fbshipit-source-id: 93df1ee7b3f7d939de76a1faeb6c1e10f051758c
    • do not run evaluation at the end of the trainer · 6137cdbc
      Yuxin Wu authored
      Summary:
      With D19319879, evaluation will run even if the training fails. This is not a reasonable default.
      This reverts D19319879.
      
      The issue that D19319879 tried to solve: when a verification failed, the resumed job would not run evaluation again,
      so the result verification was skipped in the resumed job (due to the lack of eval metrics).
      This caused our integration tests to pass even though the verification had actually failed (as a result, we did not catch an accuracy regression).
      This issue can be addressed differently by enforcing the existence of evaluation metrics whenever `cfg.TEST.EXPECTED_RESULTS` is not empty, as done in this diff.
      
      A cleaner solution would require making more objects "checkpointable" so they know where they are after resume.
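      The enforcement can be sketched as follows (a minimal sketch: `cfg` is a plain dict standing in for detectron2's config, and the `(task, metric, value, tolerance)` tuple format is an assumption about the shape of `cfg.TEST.EXPECTED_RESULTS`):

```python
def verify_results(cfg, results):
    """Fail loudly if expected results are configured but no metrics exist.

    `cfg` is a simplified stand-in for detectron2's config; `results` is a
    dict like {"bbox": {"AP": 39.2}} produced by evaluation.
    """
    expected = cfg.get("TEST", {}).get("EXPECTED_RESULTS", [])
    if not expected:
        return True  # nothing to verify
    if not results:
        # Previously this case silently passed; now it is an error.
        raise RuntimeError(
            "cfg.TEST.EXPECTED_RESULTS is set but no evaluation metrics "
            "were produced; evaluation may not have run."
        )
    ok = True
    for task, metric, value, tolerance in expected:
        actual = results[task][metric]
        ok = ok and abs(actual - value) <= tolerance
    return ok
```

      With this check, a resumed job that skips evaluation fails instead of spuriously passing verification.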
      
      Reviewed By: rbgirshick
      
      Differential Revision: D19617328
      
      fbshipit-source-id: 105e7648ef406cf7171ae88e9ac7df983a5bc39b
    • Fix deformable conv backward for latest pytorch · 707063be
      Yuxin Wu authored
      Summary:
      PyTorch changed the behavior of `copy_` to preserve memory layout,
      in the stack of https://github.com/pytorch/pytorch/pull/30089.
      
      Our kernel needs to adapt to this change.
      
      Reviewed By: rbgirshick
      
      Differential Revision: D19617086
      
      fbshipit-source-id: 992a0e957aa86c48152ffdcea1e9c184b3a47be2
    • Back out "do post-processing on the same device as the model" · eacad541
      Luka Sterbic authored
      Summary: Original commit changeset: f45acba6d502
      
      Reviewed By: JanEbbing
      
      Differential Revision: D19619669
      
      fbshipit-source-id: 0cc0f01fde1ca3c500d43b1a145d8cddbc659b6c
  9. 29 Jan, 2020 2 commits
    • Make application of norm weight decay more robust · e29680b5
      Ross Girshick authored
      Summary: The current logic for finding feature normalization modules is based on fragile string matching. For models that include norm modules in, for example, an `nn.Sequential`, the logic fails to detect the module (because its parameters appear as `0.weight`, `0.bias`, ...). This diff makes the matching more robust by matching against a set of normalization module types. The downside is that this set needs to be maintained if new normalization types that do not inherit from any of these are added.
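      The type-based matching might look like the sketch below (illustrative only: the classes are stand-ins for `torch.nn` normalization modules, since name-based matching breaks once a parameter is only visible as `0.weight` inside a `Sequential`):

```python
# Stand-ins for torch.nn normalization classes (illustrative only).
class BatchNorm2d: pass
class SyncBatchNorm(BatchNorm2d): pass
class GroupNorm: pass
class Linear: pass

# Matching by module type instead of by fragile parameter-name strings:
# this works regardless of how the module is named or nested, and also
# catches subclasses of the listed types.
NORM_MODULE_TYPES = (BatchNorm2d, GroupNorm)

def is_norm_module(module):
    return isinstance(module, NORM_MODULE_TYPES)
```

      The trade-off mentioned above is visible here: a new norm type that inherits from neither listed class must be added to `NORM_MODULE_TYPES` by hand.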
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19393513
      
      fbshipit-source-id: e81fe36df7e09ec33addbca515cefdfd86c69fb9
    • do post-processing on the same device as the model · b92760ba
      Yanghan Wang authored
      Summary:
      Previously during inference, `ProtobufModel.forward` is numpy->numpy, which doesn't carry device information. This diff changes it to torch.Tensor->torch.Tensor so it can return tensors on the proper device.
      
      It is worth mentioning that it's possible to return outputs with mixed devices (some on cpu and some on gpu), so `infer_device_type` is called to analyze output device types given the known input device types. `_wrapped_model.device` is stored to figure out the input device types.
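      The idea behind propagating known input device types to outputs can be sketched like this (a hypothetical simplification: the op tuples and copy-op names stand in for the real per-op analysis of the Caffe2 net):

```python
def infer_output_devices(op_list, input_devices):
    """Propagate device labels ("cpu"/"gpu") through a linear list of ops.

    Each op is (name, inputs, outputs). Copy ops switch device; every
    other op is assumed to keep its first input's device. This is a
    hypothetical sketch, not detectron2's actual infer_device_type.
    """
    devices = dict(input_devices)
    for name, inputs, outputs in op_list:
        if name == "CopyGPUToCPU":
            out_dev = "cpu"
        elif name == "CopyCPUToGPU":
            out_dev = "gpu"
        else:
            out_dev = devices[inputs[0]]
        for out in outputs:
            devices[out] = out_dev
    return devices
```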
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19568765
      
      fbshipit-source-id: f45acba6d502c6100a2d8d29508f01c550c1c43d
  10. 28 Jan, 2020 1 commit
  11. 26 Jan, 2020 1 commit
  12. 25 Jan, 2020 1 commit
  13. 24 Jan, 2020 3 commits
    • fix typo · d8148a37
      xmyqsh authored
      Summary: Pull Request resolved: https://github.com/facebookresearch/detectron2/pull/751
      
      Differential Revision: D19554580
      
      Pulled By: ppwwyyxx
      
      fbshipit-source-id: 131472171a62708942e7ef48ab84c889417d03d4
    • update docs · 5e04cffd
      Yuxin Wu authored
      Summary: Pull Request resolved: https://github.com/fairinternal/detectron2/pull/373
      
      Differential Revision: D19553213
      
      Pulled By: ppwwyyxx
      
      fbshipit-source-id: 285396f9344c10758048a1de3101017931b0f98f
    • exporting to Caffe2 on cuda (gpu) device · 85cfd95b
      Yanghan Wang authored
      Summary:
      A few changes to support exporting to GPU device.
      - update `to_device` to directly use exposed copy ops.
      - call `_assign_device_option` during `export_caffe2_detection_model`.
      - (optional) fuse unnecessary copy ops if possible.
      
      Now the exported model will match `MODEL.DEVICE`.
      
      There's no need to change the inference code, because it seems Caffe2 can now automatically convert cpu input to the corresponding device (the fetched output is always on cpu as numpy).
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19504424
      
      fbshipit-source-id: 752a4fd275c64c1d3e08f04b7ed6c41710ba1e47
  14. 23 Jan, 2020 1 commit
    • Fix bug in build_optimizer · dfb400e0
      Sam Pepose authored
      Summary:
      The SGD optimizer should always use `cfg.SOLVER.BASE_LR` as its learning rate.
      
      Bug:
      If the last named param is a bias term and `cfg.SOLVER.BIAS_LR_FACTOR != 1.0`, then the lr will be set to an incorrect multiple.
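      The bug pattern is easy to reproduce in isolation: a loop variable `lr` is mutated per parameter, and whatever value it held for the *last* parameter leaks into the optimizer's default. A sketch (the function names and the exact structure are illustrative, not detectron2's actual `build_optimizer`):

```python
def build_param_groups_buggy(named_params, base_lr, bias_lr_factor):
    params = []
    lr = base_lr
    for name, p in named_params:
        lr = base_lr
        if name.endswith("bias"):
            lr = base_lr * bias_lr_factor
        params.append({"params": [p], "lr": lr})
    # BUG: `lr` still holds the last parameter's value. If the last param
    # was a bias and bias_lr_factor != 1.0, the default lr is a wrong multiple.
    default_lr = lr
    return params, default_lr

def build_param_groups_fixed(named_params, base_lr, bias_lr_factor):
    params = []
    for name, p in named_params:
        lr = base_lr * (bias_lr_factor if name.endswith("bias") else 1.0)
        params.append({"params": [p], "lr": lr})
    # Fix: the optimizer default is always cfg.SOLVER.BASE_LR.
    return params, base_lr
```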
      
      Reviewed By: rbgirshick
      
      Differential Revision: D19517514
      
      fbshipit-source-id: 1eacb70753df93dcd0b731548322493807d20877
  15. 21 Jan, 2020 2 commits
  16. 17 Jan, 2020 1 commit
  17. 16 Jan, 2020 3 commits
    • added version check for isort · 13fb72f0
      Vasil Khalidov authored
      Summary: Added an `isort` version check to ensure that the produced formatting is compliant.
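      Such a check can be as simple as comparing dotted version strings numerically (a sketch; the function name and any pinned version are assumptions, not the actual lint script):

```python
def check_version(tool, actual, minimum):
    """Raise if `actual` < `minimum`, comparing dotted versions numerically.

    Numeric comparison matters: as plain strings, "4.10.2" < "4.3.21".
    """
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    if to_tuple(actual) < to_tuple(minimum):
        raise RuntimeError(
            f"{tool} {actual} found, but >= {minimum} is required "
            "for consistent formatting."
        )
```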
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19421825
      
      fbshipit-source-id: 51b8c113e09a1cc233f631f3cd4b1bdc485c4652
    • Make BoxMode JSON serializable · 38c53575
      Sam Pepose authored
      Summary: `BoxMode`, an `Enum`, is not JSON-serializable. This diff changes it to an `IntEnum` which is serializable.
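      The difference is easy to demonstrate with the standard library (the member names follow detectron2's `BoxMode`, but the sketch holds for any enum):

```python
import json
from enum import Enum, IntEnum

class BoxModeEnum(Enum):      # plain Enum: json.dumps raises TypeError
    XYXY_ABS = 0
    XYWH_ABS = 1

class BoxMode(IntEnum):       # IntEnum is an int subclass: serializes as int
    XYXY_ABS = 0
    XYWH_ABS = 1

def serialize(mode):
    return json.dumps({"bbox_mode": mode})
```

      Because `IntEnum` members are real `int`s, the default JSON encoder emits them as plain numbers with no custom encoder needed.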
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19403320
      
      fbshipit-source-id: 2f2b7352eafa0d983b007f5554861a952f5ba063
    • added GPSm evaluation mode · bcd919d9
      Vasil Khalidov authored
      Summary:
      Added evaluation based on the GPSm metric, which uses the geometric mean of geodesic point similarity (GPS) and mask intersection over union (IoU) as a proximity measure. Compared to the previous GPS metric, it favors good mask estimates and penalizes situations where all pixels are estimated as foreground.
      
      Created separate files for different test types (following general detectron2 structure). This way it's more convenient to run selected tests.
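      The combination described above reduces to a geometric mean (a sketch; the exact per-instance definitions of GPS and mask IoU live in the DensePose evaluation code):

```python
import math

def gpsm(gps, mask_iou):
    """Geometric mean of GPS and mask IoU, each assumed to lie in [0, 1].

    An all-foreground mask prediction drags mask_iou (and hence GPSm)
    toward zero even when GPS alone is high.
    """
    return math.sqrt(gps * mask_iou)
```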
      
      Reviewed By: MarcSzafraniec
      
      Differential Revision: D19375107
      
      fbshipit-source-id: bec54a897a09b9e43f3332a2e4ada19417b9ef08
  18. 15 Jan, 2020 1 commit
    • rcnn meta arch, hide visualizer import · 08aaa2de
      Vasil Khalidov authored
      Summary:
      Importing the visualizer at the top level of the general RCNN module implies importing `matplotlib` at every run (tests, train, eval, etc.). Currently matplotlib generates quite a few warnings, which clutter the outputs.
      
      I suggest moving the visualizer import into the dedicated method, since most of the time one would not need any visualization.
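      The pattern is a standard deferred import (a runnable stand-in: `Trainer` and the use of `statistics` in place of matplotlib are made up for illustration):

```python
class Trainer:
    def run(self, data):
        # Normal path: no plotting dependency is ever touched.
        return sum(data)

    def visualize(self, data):
        # Deferred import: the heavy plotting module is imported only when
        # visualization is actually requested, so plain train/eval runs pay
        # no import cost and see none of its warnings. (`statistics` stands
        # in for matplotlib to keep the sketch self-contained.)
        import statistics
        return statistics.mean(data)
```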
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D19389988
      
      fbshipit-source-id: e6e6a00e38d084e5d19ee02400f9741186e1bb00
  19. 14 Jan, 2020 1 commit