# Tutorial 1: Inference, testing, and training with predefined models and standard datasets
Welcome to MMDetection's tutorial.
MMDetection provides hundreds of predefined and pretrained detection models in the [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html), and supports multiple standard datasets, including Pascal VOC, COCO, CityScapes, LVIS, etc. This tutorial will show how to perform common tasks on these pretrained models and standard datasets, including:
- Use existing models to run inference on given images.
- Test pretrained models on standard datasets.
- Train predefined models on standard datasets.
## Inference with pretrained models
By inference, we mean using trained models to detect objects on images. In MMDetection, the model structure is defined by a Python [configuration file]() and pretrained model parameters are saved in a PyTorch checkpoint file, usually with a `.pth` extension.
To start with, we recommend [Faster RCNN](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn) with this [configuration](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) and this [checkpoint](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth). It is recommended to save the checkpoint file to the `checkpoints/` directory.
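As a quick illustration, the snippet below is a minimal sketch of running inference with MMDetection's high-level Python API (mmdet 2.x is assumed); the config path, the downloaded checkpoint path, and the bundled `demo/demo.jpg` image are assumptions you may need to adapt to your setup:

```python
from mmdet.apis import inference_detector, init_detector, show_result_pyplot

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'

# Build the model from the config file and load the downloaded weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')  # or device='cpu'

# Run inference on a single image; the result is a per-class list of bounding boxes.
result = inference_detector(model, 'demo/demo.jpg')

# Visualize the detections, or save them with model.show_result(..., out_file='result.jpg').
show_result_pyplot(model, 'demo/demo.jpg', result, score_thr=0.3)
```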
...
## Test existing models on standard datasets
To evaluate a model's accuracy, one usually tests the model on some standard datasets.
MMDetection supports multiple public datasets including COCO, Pascal VOC, CityScapes, and [more](https://github.com/open-mmlab/mmdetection/tree/master/configs/_base_/datasets).
This section shows how to test existing models on these supported datasets.
### Prepare datasets
Standard datasets like Pascal VOC and COCO are available from official websites or mirrors.
It is recommended to symlink the dataset root to `$MMDETECTION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
```plain
mmdetection
├── mmdet
├── tools
...
```
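If your folder structure differs from the tree above, the paths to change live in the dataset configs under `configs/_base_/datasets` (for COCO, `coco_detection.py`). The trimmed sketch below shows roughly how those fields look in mmdet 2.x (pipelines and dataloader settings omitted); adjust `data_root` and the annotation/image paths to match your layout:

```python
# Trimmed sketch of configs/_base_/datasets/coco_detection.py (pipelines omitted).
dataset_type = 'CocoDataset'
data_root = 'data/coco/'  # point this at your dataset root if it is not under $MMDETECTION/data

data = dict(
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/'),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/'),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/'))
```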
...
- `--show-dir`: If specified, detection results will be plotted on the images and saved to the specified directory. It is only applicable to single-GPU testing and is used for debugging and visualization. You do NOT need a GUI available in your environment to use this option.
- `--show-score-thr`: If specified, detections with scores below this threshold will be removed.
#### Examples
Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.
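As a complement to the testing tools, the following hedged sketch evaluates the downloaded Faster RCNN checkpoint on the COCO validation split directly from Python (mmdet 2.x is assumed; builder and config details vary slightly across versions, so treat this as a sketch rather than the canonical test script):

```python
from mmcv import Config
from mmcv.parallel import MMDataParallel
from mmcv.runner import load_checkpoint

from mmdet.apis import single_gpu_test
from mmdet.datasets import build_dataloader, build_dataset
from mmdet.models import build_detector

cfg = Config.fromfile('configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py')
cfg.model.pretrained = None  # the downloaded checkpoint already contains all weights

# Build the test split declared in the config and wrap it in a dataloader.
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
    dataset,
    samples_per_gpu=1,
    workers_per_gpu=cfg.data.workers_per_gpu,
    dist=False,
    shuffle=False)

# Build the detector, load the downloaded weights, and wrap it for single-GPU testing.
model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
load_checkpoint(model, 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth',
                map_location='cpu')
model = MMDataParallel(model, device_ids=[0])

# Run inference over the whole test set and report the COCO bbox metrics.
outputs = single_gpu_test(model, data_loader)
print(dataset.evaluate(outputs, metric='bbox'))
```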