Unverified commit 42f5b6c6, authored by Haian Huang(深度眸) and committed by GitHub

Add Tag to config (#4426)

parent 6cdb0a37
Showing 38 additions and 0 deletions
# Probabilistic Anchor Assignment with IoU Prediction for Object Detection
[ALGORITHM]
## Results and Models
We provide config files to reproduce the object detection results in the
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@inproceedings{liu2018path,
author = {Shu Liu and
......
# PASCAL VOC Dataset
[DATASET]
## Results and Models
| Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@inproceedings{cao2019prime,
title={Prime sample attention in object detection},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@InProceedings{kirillov2019pointrend,
title={{PointRend}: Image Segmentation as Rendering},
......
......@@ -2,6 +2,8 @@
## Introduction
[BACKBONE]
We implement RegNetX and RegNetY models in detection systems and provide their first results on Mask R-CNN, Faster R-CNN and RetinaNet.
The pre-trained models are converted from the [model zoo of pycls](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md).
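A hypothetical sketch of how such a backbone might be plugged into an existing detector through the usual mmdetection `_base_` override mechanism is shown below; the architecture name, stage widths, and pretrained-weight alias are illustrative placeholders rather than values verified against this repo's configs.

```python
# Illustrative config sketch only; names and channel widths are assumptions.
_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'

model = dict(
    pretrained='open-mmlab://regnetx_3.2gf',   # assumed alias for converted pycls weights
    backbone=dict(
        _delete_=True,                         # drop the inherited ResNet-50 settings
        type='RegNet',
        arch='regnetx_3.2gf',
        out_indices=(0, 1, 2, 3)),
    neck=dict(
        type='FPN',
        in_channels=[96, 192, 432, 1008],      # assumed stage widths of RegNetX-3.2GF
        out_channels=256,
        num_outs=5))
```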
......
......@@ -7,6 +7,8 @@ We provide code support and configuration files to reproduce the results in the
## Introduction
[ALGORITHM]
**RepPoints**, initially described in [arXiv](https://arxiv.org/abs/1904.11490), is a new representation method for visual objects, on which visual understanding tasks are typically centered. Visual object representation, aiming at both geometric description and appearance feature extraction, is conventionally achieved by `bounding box + RoIPool (RoIAlign)`. The bounding box representation is convenient to use; however, it provides only a rectangular localization of objects that lacks geometric precision and may consequently degrade feature quality. Our new representation, RepPoints, models objects by a `point set` instead of a `bounding box`, whose points learn to adaptively position themselves over an object in a manner that circumscribes the object’s `spatial extent` and enables `semantically aligned feature extraction`. This richer and more flexible representation maintains the convenience of bounding boxes while facilitating various visual understanding applications. This repo demonstrates the effectiveness of RepPoints for COCO object detection.
Another feature of this repo is the demonstration of an `anchor-free detector`, which can be as effective as state-of-the-art anchor-based detection methods. The anchor-free detector can utilize either `bounding box` or `RepPoints` as the basic object representation.
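For intuition, the sketch below shows one simple way a learned point set can be reduced to a pseudo box for evaluation, using a min-max enclosure as described in the paper. It is written for this note rather than taken from the repo; the function name and tensor shapes are assumptions.

```python
import torch

def points_to_pseudo_bbox(points: torch.Tensor) -> torch.Tensor:
    """Enclose a point set of shape (num_points, 2) with an axis-aligned box."""
    x_min, y_min = points.min(dim=0).values
    x_max, y_max = points.max(dim=0).values
    return torch.stack([x_min, y_min, x_max, y_max])  # (x1, y1, x2, y2)

# Example: nine representative points for one object.
pts = torch.rand(9, 2) * 100
print(points_to_pseudo_bbox(pts))
```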
......
......@@ -2,6 +2,8 @@
## Introduction
[BACKBONE]
We propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer.
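As a rough illustration of those hierarchical connections, the following sketch splits the input channels into subsets and lets each 3x3 convolution also see the output of the previous subset. It is written for this note under assumed channel sizes and `scales=4`, not copied from the mmdetection backbone.

```python
import torch
import torch.nn as nn

class Res2NetSplitConv(nn.Module):
    """Hierarchical residual-like connections inside one block (illustrative)."""

    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per subset except the first, which is passed through.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in range(scales - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        outputs = [splits[0]]                  # x1 is kept as-is
        prev = None
        for i, conv in enumerate(self.convs, start=1):
            inp = splits[i] if prev is None else splits[i] + prev
            prev = conv(inp)                   # hierarchical connection
            outputs.append(prev)
        return torch.cat(outputs, dim=1)       # multi-scale features

feat = torch.rand(1, 64, 32, 32)
print(Res2NetSplitConv(64)(feat).shape)        # torch.Size([1, 64, 32, 32])
```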
| Backbone | Params. | GFLOPs | top-1 err. | top-5 err. |
......
......@@ -2,6 +2,8 @@
## Introduction
[BACKBONE]
```latex
@article{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@inproceedings{lin2017focal,
title={Focal loss for dense object detection},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@inproceedings{ren2015faster,
title={Faster r-cnn: Towards real-time object detection with region proposal networks},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
We provide config files to reproduce the object detection results in the ECCV 2020 Spotlight paper for [Side-Aware Boundary Localization for More Precise Object Detection](https://arxiv.org/abs/1912.04260).
```latex
......
......@@ -2,6 +2,8 @@
## Introduction
[OTHERS]
```latex
@article{he2018rethinking,
title={Rethinking imagenet pre-training},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@article{Liu_2016,
title={SSD: Single Shot MultiBox Detector},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@InProceedings{li2019scale,
title={Scale-Aware Trident Networks for Object Detection},
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
**VarifocalNet (VFNet)** learns to predict the IoU-aware classification score which mixes the object presence confidence and localization accuracy together as the detection score for a bounding box. The learning is supervised by the proposed Varifocal Loss (VFL), based on a new star-shaped bounding box feature representation (the features at nine yellow sampling points). Given the new representation, the object localization accuracy is further improved by refining the initially regressed bounding box. The full paper is available at: [https://arxiv.org/abs/2008.13367](https://arxiv.org/abs/2008.13367).
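For reference, here is a minimal sketch of the Varifocal Loss written directly from the formula in the paper; it is not the mmdetection implementation, and the default `alpha`/`gamma` values and tensor shapes are assumptions for illustration.

```python
import torch

def varifocal_loss(pred_score: torch.Tensor,
                   target_iou: torch.Tensor,
                   alpha: float = 0.75,
                   gamma: float = 2.0) -> torch.Tensor:
    """pred_score: sigmoid scores in (0, 1); target_iou: IoU for positives, 0 for negatives."""
    pos = target_iou > 0
    bce = torch.nn.functional.binary_cross_entropy(
        pred_score, target_iou, reduction='none')
    # Positives are weighted by the target IoU q, negatives by alpha * p^gamma.
    weight = torch.where(pos, target_iou, alpha * pred_score.pow(gamma))
    return (weight * bce).sum()
```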
<div align="center">
......
# WIDER Face Dataset
[DATASET]
To use the WIDER Face dataset you need to download it
and extract it to the `data/WIDERFace` folder. Annotations in the VOC format
can be found in this [repo](https://github.com/sovrasov/wider-face-pascal-voc-annotations.git).
......
# **Y**ou **O**nly **L**ook **A**t **C**oefficien**T**s
[ALGORITHM]
```
██╗ ██╗ ██████╗ ██╗ █████╗ ██████╗████████╗
╚██╗ ██╔╝██╔═══██╗██║ ██╔══██╗██╔════╝╚══██╔══╝
......
......@@ -2,6 +2,8 @@
## Introduction
[ALGORITHM]
```latex
@misc{redmon2018yolov3,
title={YOLOv3: An Incremental Improvement},
......