From 455a0b8dbfdede7ca53597155a7ae063f9bef6e1 Mon Sep 17 00:00:00 2001
From: impiga <xjtu_lyt@163.com>
Date: Tue, 13 Apr 2021 00:25:27 +0800
Subject: [PATCH] Update README.md

---
 README.md       | 217 ++++++++++++++++++------------------------------
 README_zh-CN.md | 166 ------------------------------------
 2 files changed, 81 insertions(+), 302 deletions(-)
 delete mode 100644 README_zh-CN.md

diff --git a/README.md b/README.md
index 07ee0859..a52917c7 100644
--- a/README.md
+++ b/README.md
@@ -1,166 +1,111 @@
-<div align="center">
-  <img src="resources/mmdet-logo.png" width="600"/>
-</div>
+# Swin Transformer for Object Detection
 
-**News**: We released the technical report on [ArXiv](https://arxiv.org/abs/1906.07155).
+This repo contains the code and configuration files needed to reproduce the object detection results of [Swin Transformer](https://arxiv.org/pdf/2103.14030.pdf). It is based on [mmdetection](https://github.com/open-mmlab/mmdetection).
 
-Documentation: https://mmdetection.readthedocs.io/
+## Updates
 
-## Introduction
+***04/12/2021*** Initial commits
 
-English | [Simplified Chinese](README_zh-CN.md)
-
-MMDetection is an open source object detection toolbox based on PyTorch. It is
-a part of the [OpenMMLab](https://openmmlab.com/) project.
+## Results and Models
 
-The master branch works with **PyTorch 1.3+**.
-The old v1.x branch works with PyTorch 1.1 to 1.4, but v2.0 is strongly recommended for faster speed, higher performance, better design and more friendly usage.
+### Mask R-CNN
 
-![demo image](resources/coco_test_12510.jpg)
+| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs | config | log | model |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---: |
+| Swin-T | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G | [config](configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/mask_rcnn_swin_tiny_patch4_window7.log.json)/[baidu](https://pan.baidu.com/s/1Te-Ovk4yaavmE4jcIOPAaw) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/mask_rcnn_swin_tiny_patch4_window7.pth)/[baidu](https://pan.baidu.com/s/1YpauXYAFOohyMi3Vkb6DBg) |
+| Swin-S | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G | [config](configs/swin/mask_rcnn_swin_small_patch4_window7_mstrain_480-800_adamw_3x_coco.py) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/mask_rcnn_swin_small_patch4_window7.log.json)/[baidu](https://pan.baidu.com/s/1ymCK7378QS91yWlxHMf1yw) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/mask_rcnn_swin_small_patch4_window7.pth)/[baidu](https://pan.baidu.com/s/1V4w4aaV7HSjXNFTOSA6v6w) |
 
-### Major features
+### Cascade Mask R-CNN
 
-- **Modular Design**
+| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs | config | log | model |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---: |
+| Swin-T | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G | [config](configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_tiny_patch4_window7.log.json)/[baidu](https://pan.baidu.com/s/1GW_ic617Ak_NpRayOqPSOA) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_tiny_patch4_window7.pth)/[baidu](https://pan.baidu.com/s/1i-izBrODgQmMwTv6F6-x3A) |
+| Swin-S | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G | [config](configs/swin/cascade_mask_rcnn_swin_small_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_small_patch4_window7.log.json)/[baidu](https://pan.baidu.com/s/17Vyufk85vyocxrBT1AbavQ) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_small_patch4_window7.pth)/[baidu](https://pan.baidu.com/s/1Sv9-gP1Qpl6SGOF6DBhUbw) |
+| Swin-B | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G | [config](configs/swin/cascade_mask_rcnn_swin_base_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_base_patch4_window7.log.json)/[baidu](https://pan.baidu.com/s/1UZAR39g-0kE_aGrINwfVHg) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.2/cascade_mask_rcnn_swin_base_patch4_window7.pth)/[baidu](https://pan.baidu.com/s/1tHoC9PMVnldQUAfcF6FT3A) |
 
-  We decompose the detection framework into different components and one can easily construct a customized object detection framework by combining different modules.
+### RepPoints V2
 
-- **Support of multiple frameworks out of box**
+| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| Swin-T | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
 
-  The toolbox directly supports popular and contemporary detection frameworks, *e.g.* Faster RCNN, Mask RCNN, RetinaNet, etc.
+### Mask RepPoints V2
 
-- **High efficiency**
+| Backbone | Pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| Swin-T | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
 
-  All basic bbox and mask operations run on GPUs. The training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
+**Notes**: 
 
-- **State of the art**
+- **Pre-trained models can be downloaded from [Swin Transformer for ImageNet Classification](https://github.com/microsoft/Swin-Transformer)**.
+- The access code for the `baidu` links is `swin`.
 
-  The toolbox stems from the codebase developed by the *MMDet* team, who won [COCO Detection Challenge](http://cocodataset.org/#detection-leaderboard) in 2018, and we keep pushing it forward.
+## Usage
 
-Apart from MMDetection, we also released a library [mmcv](https://github.com/open-mmlab/mmcv) for computer vision research, which is heavily depended on by this toolbox.
+### Installation
 
-## License
+Please refer to [get_started.md](https://github.com/impiga/SwinTransformer-object-detection/blob/main/docs/get_started.md) for installation and dataset preparation.
 
-This project is released under the [Apache 2.0 license](LICENSE).
-
-## Changelog
-
-v2.11.0 was released in 01/04/2021.
-Please refer to [changelog.md](docs/changelog.md) for details and release history.
-A comparison between v1.x and v2.0 codebases can be found in [compatibility.md](docs/compatibility.md).
-
-## Benchmark and model zoo
-
-Results and models are available in the [model zoo](docs/model_zoo.md).
-
-Supported backbones:
-
-- [x] ResNet (CVPR'2016)
-- [x] ResNeXt (CVPR'2017)
-- [x] VGG (ICLR'2015)
-- [x] HRNet (CVPR'2019)
-- [x] RegNet (CVPR'2020)
-- [x] Res2Net (TPAMI'2020)
-- [x] ResNeSt (ArXiv'2020)
-
-Supported methods:
-
-- [x] [RPN (NeurIPS'2015)](configs/rpn)
-- [x] [Fast R-CNN (ICCV'2015)](configs/fast_rcnn)
-- [x] [Faster R-CNN (NeurIPS'2015)](configs/faster_rcnn)
-- [x] [Mask R-CNN (ICCV'2017)](configs/mask_rcnn)
-- [x] [Cascade R-CNN (CVPR'2018)](configs/cascade_rcnn)
-- [x] [Cascade Mask R-CNN (CVPR'2018)](configs/cascade_rcnn)
-- [x] [SSD (ECCV'2016)](configs/ssd)
-- [x] [RetinaNet (ICCV'2017)](configs/retinanet)
-- [x] [GHM (AAAI'2019)](configs/ghm)
-- [x] [Mask Scoring R-CNN (CVPR'2019)](configs/ms_rcnn)
-- [x] [Double-Head R-CNN (CVPR'2020)](configs/double_heads)
-- [x] [Hybrid Task Cascade (CVPR'2019)](configs/htc)
-- [x] [Libra R-CNN (CVPR'2019)](configs/libra_rcnn)
-- [x] [Guided Anchoring (CVPR'2019)](configs/guided_anchoring)
-- [x] [FCOS (ICCV'2019)](configs/fcos)
-- [x] [RepPoints (ICCV'2019)](configs/reppoints)
-- [x] [Foveabox (TIP'2020)](configs/foveabox)
-- [x] [FreeAnchor (NeurIPS'2019)](configs/free_anchor)
-- [x] [NAS-FPN (CVPR'2019)](configs/nas_fpn)
-- [x] [ATSS (CVPR'2020)](configs/atss)
-- [x] [FSAF (CVPR'2019)](configs/fsaf)
-- [x] [PAFPN (CVPR'2018)](configs/pafpn)
-- [x] [Dynamic R-CNN (ECCV'2020)](configs/dynamic_rcnn)
-- [x] [PointRend (CVPR'2020)](configs/point_rend)
-- [x] [CARAFE (ICCV'2019)](configs/carafe/README.md)
-- [x] [DCNv2 (CVPR'2019)](configs/dcn/README.md)
-- [x] [Group Normalization (ECCV'2018)](configs/gn/README.md)
-- [x] [Weight Standardization (ArXiv'2019)](configs/gn+ws/README.md)
-- [x] [OHEM (CVPR'2016)](configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py)
-- [x] [Soft-NMS (ICCV'2017)](configs/faster_rcnn/faster_rcnn_r50_fpn_soft_nms_1x_coco.py)
-- [x] [Generalized Attention (ICCV'2019)](configs/empirical_attention/README.md)
-- [x] [GCNet (ICCVW'2019)](configs/gcnet/README.md)
-- [x] [Mixed Precision (FP16) Training (ArXiv'2017)](configs/fp16/README.md)
-- [x] [InstaBoost (ICCV'2019)](configs/instaboost/README.md)
-- [x] [GRoIE (ICPR'2020)](configs/groie/README.md)
-- [x] [DetectoRS (ArXix'2020)](configs/detectors/README.md)
-- [x] [Generalized Focal Loss (NeurIPS'2020)](configs/gfl/README.md)
-- [x] [CornerNet (ECCV'2018)](configs/cornernet/README.md)
-- [x] [Side-Aware Boundary Localization (ECCV'2020)](configs/sabl/README.md)
-- [x] [YOLOv3 (ArXiv'2018)](configs/yolo/README.md)
-- [x] [PAA (ECCV'2020)](configs/paa/README.md)
-- [x] [YOLACT (ICCV'2019)](configs/yolact/README.md)
-- [x] [CentripetalNet (CVPR'2020)](configs/centripetalnet/README.md)
-- [x] [VFNet (ArXix'2020)](configs/vfnet/README.md)
-- [x] [DETR (ECCV'2020)](configs/detr/README.md)
-- [x] [CascadeRPN (NeurIPS'2019)](configs/cascade_rpn/README.md)
-- [x] [SCNet (AAAI'2021)](configs/scnet/README.md)
-
-Some other methods are also supported in [projects using MMDetection](./docs/projects.md).
-
-## Installation
+### Inference
+```
+# single-gpu testing
+python tools/test.py <CONFIG_FILE> <DET_CHECKPOINT_FILE> --eval bbox segm
 
-Please refer to [get_started.md](docs/get_started.md) for installation.
+# multi-gpu testing
+tools/dist_test.sh <CONFIG_FILE> <DET_CHECKPOINT_FILE> <GPU_NUM> --eval bbox segm
+```
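+
+For example, to evaluate the Mask R-CNN Swin-T model from the table above on a single GPU, a command along the following lines can be used; the checkpoint path is an assumption, so point it to wherever the downloaded weights were saved.
+```
+# usage sketch, assuming the released checkpoint sits in the working directory
+python tools/test.py configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py mask_rcnn_swin_tiny_patch4_window7.pth --eval bbox segm
+```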
 
-## Getting Started
+### Training
 
-Please see [get_started.md](docs/get_started.md) for the basic usage of MMDetection.
-We provide [colab tutorial](demo/MMDet_Tutorial.ipynb), and full guidance for quick run [with existing dataset](docs/1_exist_data_model.md) and [with new dataset](docs/2_new_data_model.md) for beginners.
-There are also tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new dataset](docs/tutorials/new_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), [customizing models](docs/tutorials/customize_models.md), [customizing runtime settings](docs/tutorials/customize_runtime.md) and [useful tools](docs/useful_tools.md).
+To train a detector with pre-trained models, run:
+```
+# single-gpu training
+python tools/train.py <CONFIG_FILE> --cfg-options model.pretrained=<PRETRAIN_MODEL> [model.backbone.use_checkpoint=True] [other optional arguments]
 
-Please refer to [FAQ](docs/faq.md) for frequently asked questions.
+# multi-gpu training
+tools/dist_train.sh <CONFIG_FILE> <GPU_NUM> --cfg-options model.pretrained=<PRETRAIN_MODEL> [model.backbone.use_checkpoint=True] [other optional arguments] 
+```
+For example, to train a Cascade Mask R-CNN model with a `Swin-T` backbone on 8 GPUs, run:
+```
+tools/dist_train.sh configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_coco.py 8 --cfg-options model.pretrained=<PRETRAIN_MODEL> 
+```
 
-## Contributing
+**Note:** `use_checkpoint` enables gradient checkpointing in the backbone to save GPU memory. Please refer to [this page](https://pytorch.org/docs/stable/checkpoint.html) for more details.
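+
+As a concrete sketch of combining these options, the command below would train Mask R-CNN with a `Swin-T` backbone on 8 GPUs with activation checkpointing enabled; the pre-trained weight filename is an assumption, so substitute the ImageNet-1K checkpoint actually downloaded.
+```
+# sketch only: swin_tiny_patch4_window7_224.pth stands in for the downloaded ImageNet-1K backbone weights
+tools/dist_train.sh configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py 8 --cfg-options model.pretrained=swin_tiny_patch4_window7_224.pth model.backbone.use_checkpoint=True
+```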
 
-We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
 
-## Acknowledgement
-
-MMDetection is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedbacks.
-We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors.
-
-## Citation
-
-If you use this toolbox or benchmark in your research, please cite this project.
+### Apex (optional)
+We use apex for mixed precision training by default. To install apex, run:
+```
+git clone https://github.com/NVIDIA/apex
+cd apex
+pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
+```
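+A quick, optional sanity check (not part of the official steps) is to confirm the package imports cleanly:
+```
+# minimal check that apex and its amp module can be imported
+python -c "from apex import amp; print('apex installed')"
+```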
+If you would like to disable apex, change the runner type to `EpochBasedRunner` and comment out the following code block in the [configuration files](configs/swin):
+```
+# do not use mmdet version fp16
+fp16 = None
+optimizer_config = dict(
+    type="DistOptimizerHook",
+    update_interval=1,
+    grad_clip=None,
+    coalesce=True,
+    bucket_size_mb=-1,
+    use_fp16=True,
+)
+```
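+For reference, a minimal sketch of what that part of a config might look like once apex is disabled; the exact runner line differs per config, and `max_epochs=36` (the 3x schedule) plus the default `optimizer_config` are assumptions here.
+```
+# sketch only: standard mmdet runner and optimizer hook in place of the apex-based ones
+runner = dict(type="EpochBasedRunner", max_epochs=36)
+optimizer_config = dict(grad_clip=None)
+```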
 
+## Citing Swin Transformer
 ```
-@article{mmdetection,
-  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
-  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
-             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
-             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
-             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
-             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
-             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
-  journal= {arXiv preprint arXiv:1906.07155},
-  year={2019}
+@article{liu2021Swin,
+  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
+  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
+  journal={arXiv preprint arXiv:2103.14030},
+  year={2021}
 }
 ```
 
-## Projects in OpenMMLab
-
-- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
-- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
-- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
-- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
-- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+## Other Links
+
+> **Image Classification**: See [Swin Transformer for Image Classification](https://github.com/microsoft/Swin-Transformer).
+
+> **Semantic Segmentation**: See [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation).
diff --git a/README_zh-CN.md b/README_zh-CN.md
deleted file mode 100644
index e84e1f45..00000000
--- a/README_zh-CN.md
+++ /dev/null
@@ -1,166 +0,0 @@
-<div align="center">
-  <img src="resources/mmdet-logo.png" width="600"/>
-</div>
-
-**News**: We released the technical report on [ArXiv](https://arxiv.org/abs/1906.07155).
-
-Documentation: https://mmdetection.readthedocs.io/
-
-## Introduction
-
-[English](README.md) | Simplified Chinese
-
-MMDetection is an open source object detection toolbox based on PyTorch. It is part of the [OpenMMLab](https://openmmlab.com/) project.
-
-The master branch works with PyTorch 1.3+.
-
-The legacy v1.x branch works with PyTorch 1.1 to 1.4, but we strongly recommend the new 2.x versions, which are faster, perform better, have a cleaner design and are easier to use.
-
-![demo image](resources/coco_test_12510.jpg)
-
-### Major features
-
-- **Modular design**
-
-  MMDetection decomposes the detection framework into different modular components; by combining them, users can easily build customized detection models.
-
-- **Rich set of plug-and-play algorithms and models**
-
-  MMDetection supports many popular and up-to-date detection algorithms, *e.g.* Faster R-CNN, Mask R-CNN, RetinaNet, etc.
-
-- **High efficiency**
-
-  All basic bbox and mask operations run on GPUs. The training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
-
-- **State of the art**
-
-  The toolbox stems from the codebase developed by the *MMDet* team, who won the COCO Detection Challenge in 2018, and we keep improving it.
-
-Apart from MMDetection, we have also released the computer vision foundation library [MMCV](https://github.com/open-mmlab/mmcv), which MMDetection heavily depends on.
-
-## License
-
-This project is released under the [Apache 2.0 license](LICENSE).
-
-## Changelog
-
-The latest monthly release, v2.11.0, was released on 01/04/2021.
-Please refer to [changelog.md](docs/changelog.md) for details and release history.
-A detailed comparison between the 1.x and 2.0 versions is provided in [compatibility.md](docs/compatibility.md).
-
-## Benchmark and model zoo
-
-Results and models are available in the [model zoo](docs/model_zoo.md).
-
-Supported backbones:
-
-- [x] ResNet (CVPR'2016)
-- [x] ResNeXt (CVPR'2017)
-- [x] VGG (ICLR'2015)
-- [x] HRNet (CVPR'2019)
-- [x] RegNet (CVPR'2020)
-- [x] Res2Net (TPAMI'2020)
-- [x] ResNeSt (ArXiv'2020)
-
-Supported methods:
-
-- [x] [RPN (NeurIPS'2015)](configs/rpn)
-- [x] [Fast R-CNN (ICCV'2015)](configs/fast_rcnn)
-- [x] [Faster R-CNN (NeurIPS'2015)](configs/faster_rcnn)
-- [x] [Mask R-CNN (ICCV'2017)](configs/mask_rcnn)
-- [x] [Cascade R-CNN (CVPR'2018)](configs/cascade_rcnn)
-- [x] [Cascade Mask R-CNN (CVPR'2018)](configs/cascade_rcnn)
-- [x] [SSD (ECCV'2016)](configs/ssd)
-- [x] [RetinaNet (ICCV'2017)](configs/retinanet)
-- [x] [GHM (AAAI'2019)](configs/ghm)
-- [x] [Mask Scoring R-CNN (CVPR'2019)](configs/ms_rcnn)
-- [x] [Double-Head R-CNN (CVPR'2020)](configs/double_heads)
-- [x] [Hybrid Task Cascade (CVPR'2019)](configs/htc)
-- [x] [Libra R-CNN (CVPR'2019)](configs/libra_rcnn)
-- [x] [Guided Anchoring (CVPR'2019)](configs/guided_anchoring)
-- [x] [FCOS (ICCV'2019)](configs/fcos)
-- [x] [RepPoints (ICCV'2019)](configs/reppoints)
-- [x] [Foveabox (TIP'2020)](configs/foveabox)
-- [x] [FreeAnchor (NeurIPS'2019)](configs/free_anchor)
-- [x] [NAS-FPN (CVPR'2019)](configs/nas_fpn)
-- [x] [ATSS (CVPR'2020)](configs/atss)
-- [x] [FSAF (CVPR'2019)](configs/fsaf)
-- [x] [PAFPN (CVPR'2018)](configs/pafpn)
-- [x] [Dynamic R-CNN (ECCV'2020)](configs/dynamic_rcnn)
-- [x] [PointRend (CVPR'2020)](configs/point_rend)
-- [x] [CARAFE (ICCV'2019)](configs/carafe/README.md)
-- [x] [DCNv2 (CVPR'2019)](configs/dcn/README.md)
-- [x] [Group Normalization (ECCV'2018)](configs/gn/README.md)
-- [x] [Weight Standardization (ArXiv'2019)](configs/gn+ws/README.md)
-- [x] [OHEM (CVPR'2016)](configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py)
-- [x] [Soft-NMS (ICCV'2017)](configs/faster_rcnn/faster_rcnn_r50_fpn_soft_nms_1x_coco.py)
-- [x] [Generalized Attention (ICCV'2019)](configs/empirical_attention/README.md)
-- [x] [GCNet (ICCVW'2019)](configs/gcnet/README.md)
-- [x] [Mixed Precision (FP16) Training (ArXiv'2017)](configs/fp16/README.md)
-- [x] [InstaBoost (ICCV'2019)](configs/instaboost/README.md)
-- [x] [GRoIE (ICPR'2020)](configs/groie/README.md)
-- [x] [DetectoRS (ArXix'2020)](configs/detectors/README.md)
-- [x] [Generalized Focal Loss (NeurIPS'2020)](configs/gfl/README.md)
-- [x] [CornerNet (ECCV'2018)](configs/cornernet/README.md)
-- [x] [Side-Aware Boundary Localization (ECCV'2020)](configs/sabl/README.md)
-- [x] [YOLOv3 (ArXiv'2018)](configs/yolo/README.md)
-- [x] [PAA (ECCV'2020)](configs/paa/README.md)
-- [x] [YOLACT (ICCV'2019)](configs/yolact/README.md)
-- [x] [CentripetalNet (CVPR'2020)](configs/centripetalnet/README.md)
-- [x] [VFNet (ArXix'2020)](configs/vfnet/README.md)
-- [x] [DETR (ECCV'2020)](configs/detr/README.md)
-- [x] [CascadeRPN (NeurIPS'2019)](configs/cascade_rpn/README.md)
-- [x] [SCNet (AAAI'2021)](configs/scnet/README.md)
-
-Some other supported methods are listed in [projects using MMDetection](./docs/projects.md).
-
-## Installation
-
-Please refer to [get_started.md](docs/get_started.md) for installation.
-
-## Getting Started
-
-Please see [get_started.md](docs/get_started.md) for the basic usage of MMDetection.
-We provide a [colab tutorial](demo/MMDet_Tutorial.ipynb), as well as full guidance for beginners on running [with existing datasets](docs/1_exist_data_model.md) and [with new datasets](docs/2_new_data_model.md).
-
-There are also advanced tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new datasets](docs/tutorials/new_dataset.md), [designing data pipelines](docs/tutorials/data_pipeline.md), [customizing models](docs/tutorials/customize_models.md), [customizing runtime settings](docs/tutorials/customize_runtime.md) and [useful tools](docs/useful_tools.md).
-
-Please refer to the [FAQ](docs/faq.md) if you run into problems.
-
-## Contributing
-
-We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
-
-## Acknowledgement
-
-MMDetection is an open source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new detectors.
-
-## Citation
-
-If you use this toolbox or benchmark in your research, please cite MMDetection using the following BibTeX entry.
-
-```
-@article{mmdetection,
-  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
-  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
-             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
-             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
-             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
-             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
-             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
-  journal= {arXiv preprint arXiv:1906.07155},
-  year={2019}
-}
-```
-
-## Projects in OpenMMLab
-
-- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
-- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
-- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation video understanding toolbox and benchmark.
-- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
-- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.