Commit 0613aedb authored by Kai Chen

update readme and version

parent f713757f
@@ -34,6 +34,10 @@ This project is released under the [Apache 2.0 license](LICENSE).
## Updates
v0.5.2 (21/10/2018)
- Add support for custom datasets.
- Add a script to convert PASCAL VOC annotations to the expected format.

v0.5.1 (20/10/2018)
- Add BBoxAssigner and BBoxSampler; the `train_cfg` field in config files is restructured.
- `ConvFCRoIHead` / `SharedFCRoIHead` are renamed to `ConvFCBBoxHead` / `SharedFCBBoxHead` for consistency.
@@ -209,6 +213,48 @@ Expected results in WORK_DIR:
> 1. We recommend using distributed training with NCCL2 even on a single machine, which is faster. Non-distributed training is for debugging or other purposes.
> 2. The default learning rate is for 8 GPUs. If you use fewer or more than 8 GPUs, set the learning rate proportionally to the number of GPUs, e.g., 0.01 for 4 GPUs or 0.04 for 16 GPUs.
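
A minimal sketch of this linear scaling rule; the base lr of 0.02 for 8 GPUs follows from the examples above, and the GPU count below is a placeholder:

```
# Linear learning-rate scaling: the base lr of 0.02 is defined for 8 GPUs.
base_lr, base_gpus = 0.02, 8
num_gpus = 4  # placeholder: set to the number of GPUs you actually use
lr = base_lr * num_gpus / base_gpus
print(lr)  # 0.01 for 4 GPUs, 0.04 for 16 GPUs
```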
### Train on custom datasets
We define a simple annotation format.
The annotation of a dataset is a list of dicts, and each dict corresponds to one image.
There are 3 fields for testing: `filename` (relative path), `width` and `height`,
and an additional field `ann` for training. `ann` is also a dict containing at least 2 fields:
`bboxes` and `labels`, both of which are numpy arrays. Some datasets provide
annotations such as crowd/difficult/ignored bboxes; we use `bboxes_ignore` and `labels_ignore`
to cover them.
Here is an example.
```
[
    {
        'filename': 'a.jpg',
        'width': 1280,
        'height': 720,
        'ann': {
            'bboxes': <np.ndarray> (n, 4),
            'labels': <np.ndarray> (n, ),
            'bboxes_ignore': <np.ndarray> (k, 4),
            'labels_ignore': <np.ndarray> (k, ) (optional field)
        }
    },
    ...
]
```
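
For concreteness, here is a hedged sketch that builds such a list with NumPy and dumps it to a pickle file for the offline conversion route described below. The file name, box coordinates, and labels are placeholders.

```
import mmcv
import numpy as np

# Illustrative annotation list for a single image; the file name, box
# coordinates and labels are placeholders. Boxes are float arrays of shape
# (n, 4), labels are integer arrays of shape (n, ).
annotations = [
    {
        'filename': 'a.jpg',
        'width': 1280,
        'height': 720,
        'ann': {
            'bboxes': np.array([[100.0, 120.0, 300.0, 400.0]], dtype=np.float32),
            'labels': np.array([1], dtype=np.int64),
            # use empty arrays if there are no ignored regions
            'bboxes_ignore': np.zeros((0, 4), dtype=np.float32),
            'labels_ignore': np.zeros((0, ), dtype=np.int64)
        }
    }
]

# Save for the offline conversion route below (the standard pickle module
# works as well as mmcv's dump helper).
mmcv.dump(annotations, 'my_annotations.pkl')
```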
There are two ways to work with custom datasets.

- online conversion

  You can write a new Dataset class that inherits from `CustomDataset`, and override two methods
  `load_annotations(self, ann_file)` and `get_ann_info(self, idx)`, like [CocoDataset](mmdet/datasets/coco.py).
  A sketch is given after this list.

- offline conversion

  You can convert the annotations to the expected format above and save them to
  a pickle file, like [pascal_voc.py](tools/convert_datasets/pascal_voc.py).
  Then you can simply use `CustomDataset`.
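
Below is a minimal sketch of the online-conversion route referenced in the first bullet. It assumes the raw annotation file already stores the list-of-dicts format above, that the import path is `mmdet.datasets`, and that `CustomDataset` keeps the result of `load_annotations` in `self.img_infos` (as `CocoDataset` does); the class name and loading logic are placeholders for your own parsing code.

```
import mmcv
from mmdet.datasets import CustomDataset


class MyDataset(CustomDataset):
    """Hypothetical dataset whose annotation file already follows the
    list-of-dicts format above (e.g. a pickle produced offline)."""

    def load_annotations(self, ann_file):
        # Replace this with your own parsing code if the raw annotations
        # are stored in a different format.
        return mmcv.load(ann_file)

    def get_ann_info(self, idx):
        # Assumes CustomDataset stores the loaded list as self.img_infos.
        return self.img_infos[idx]['ann']
```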
## Technical details
Some implementation details and project structures are described in the [technical details](TECHNICAL_DETAILS.md).
@@ -271,4 +271,4 @@ class CustomDataset(Dataset):
         data = dict(img=imgs, img_meta=img_metas)
         if self.proposals is not None:
             data['proposals'] = proposals
-        return data
\ No newline at end of file
+        return data
@@ -12,7 +12,7 @@ def readme():
 MAJOR = 0
 MINOR = 5
-PATCH = 1
+PATCH = 2
 SUFFIX = ''
 SHORT_VERSION = '{}.{}.{}{}'.format(MAJOR, MINOR, PATCH, SUFFIX)