
OA-MIL

This repository includes the official implementation of the paper:

Robust Object Detection With Inaccurate Bounding Boxes

European Conference on Computer Vision (ECCV), 2022

Chengxin Liu¹, Kewei Wang¹, Hao Lu¹, Zhiguo Cao¹, and Ziming Zhang²

¹Huazhong University of Science and Technology, China

²Worcester Polytechnic Institute, USA

Paper | Supplementary

Highlights

  • Robust: OA-MIL is robust to inaccurate box annotations and remains effective on clean data;
  • Generic: Our formulation is general and applicable to both one-stage and two-stage detectors;
  • No extra parameters: OA-MIL does not introduce extra model parameters.

Installation


  • Set up environment
# create and activate the environment
conda create -n oamil python=3.7
conda activate oamil

# install pytorch
conda install pytorch==1.10.0 torchvision==0.11.0 -c pytorch -c conda-forge
  • Install
# clone 
git clone https://github.com/cxliu0/OA-MIL.git
cd OA-MIL

# install dependencies
pip install -r requirements/build.txt

# install mmcv (compilation may take a while)
cd mmcv
MMCV_WITH_OPS=1 pip install -e . 

# install OA-MIL
cd ..
pip install -e .
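
Optionally, a quick import check can confirm that the installation succeeded. This is a sanity check of ours, not part of the repository's documented setup:

# optional sanity check: verify that torch, mmcv, and mmdet import cleanly
import torch
import mmcv
import mmdet

print(torch.__version__, torch.cuda.is_available())  # expect 1.10.0 and True on a GPU machine
print(mmcv.__version__, mmdet.__version__)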

Data Preparation

  • Download VOC2007 and COCO datasets. We expect the directory structure to be as follows:
OA-MIL
├── data
│    ├── VOCdevkit
│    │    ├── VOC2007
│    │        ├── Annotations
│    │        ├── ImageSets
│    │        ├── JPEGImages
│    ├── coco
│        ├── train2017
│        ├── val2017
│        ├── annotations
│            ├── instances_train2017.json
│            ├── instances_val2017.json
├── configs
├── mmcv
├── ...
  • Generate noisy annotations (a conceptual sketch of the box perturbation follows this list):
# generate noisy VOC2007 (e.g., 40% noise)
python ./utils/gen_noisy_voc.py --box_noise_level 0.4

# generate noisy COCO (e.g., 40% noise)
python ./utils/gen_noisy_coco.py --box_noise_level 0.4
  • Alternatively, the noisy annotation files we used for the COCO dataset are available on Google Drive.
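
For intuition, box noise at a given level is typically simulated by shifting each ground-truth box coordinate by a random offset proportional to the box size; box_noise_level 0.4 would then mean offsets of up to 40% of the box width/height. The sketch below illustrates this idea only; it is an assumption about, not a copy of, what gen_noisy_voc.py / gen_noisy_coco.py do:

# illustrative box perturbation, NOT the repo's exact implementation
import numpy as np

def perturb_box(box, box_noise_level, rng):
    # box is (x1, y1, x2, y2); each offset is uniform in
    # [-box_noise_level, box_noise_level], scaled by box width/height
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    dx1, dx2 = rng.uniform(-box_noise_level, box_noise_level, size=2) * w
    dy1, dy2 = rng.uniform(-box_noise_level, box_noise_level, size=2) * h
    return [x1 + dx1, y1 + dy1, x2 + dx2, y2 + dy2]

rng = np.random.default_rng(0)
print(perturb_box([100.0, 100.0, 200.0, 260.0], box_noise_level=0.4, rng=rng))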

Training

All models of OA-MIL are trained with a total batch size of 16.

  • To train OA-MIL on VOC2007, run
sh train_voc07.sh

Please refer to faster_rcnn_r50_fpn_voc_oamil.py for the model configuration.

  • To train OA-MIL on COCO, run
sh train_coco.sh

Please refer to faster_rcnn_r50_fpn_coco_oamil.py for the model configuration.
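
For reference, mmdetection configs control the batch size via samples_per_gpu, so with 8 GPUs a value of 2 yields the total batch size of 16 used above. A minimal sketch in mmdetection's config style (the specific values are assumptions, not copied from the repo's configs):

# mmdetection-style config excerpt (assumed values, for illustration only)
data = dict(
    samples_per_gpu=2,   # images per GPU; 2 x 8 GPUs = total batch size 16
    workers_per_gpu=2,   # dataloader workers per GPU
)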

Inference

Modify the following placeholders in test.sh:

/path/to/model_config: set this to the path of the model config, e.g., ./configs/faster_rcnn/faster_rcnn_r50_fpn_1x_voc_oamil.py

/path/to/model_checkpoint: set this to the path of the model checkpoint

  • Run
sh test.sh
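
Alternatively, since OA-MIL builds on mmdetection, a trained checkpoint can be run on a single image through mmdetection's high-level Python API. A minimal sketch, assuming mmdetection's standard mmdet.apis interface; the checkpoint path and image file are placeholders:

# single-image inference via mmdetection's high-level API (illustrative)
from mmdet.apis import init_detector, inference_detector

config_file = './configs/faster_rcnn/faster_rcnn_r50_fpn_1x_voc_oamil.py'
checkpoint_file = '/path/to/model_checkpoint'  # placeholder
model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # list of per-class box arrays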

FAQ

  • Is OA-MIL applicable to clean data?

    Yes, OA-MIL is applicable to clean data. Here we show some results on the clean VOC2007 and COCO datasets:

    • VOC2007

      Method              mAP@0.5
      FasterRCNN          77.2
      OA-MIL FasterRCNN   78.6

    • COCO

      Method              AP     AP50   AP75
      FasterRCNN          37.9   58.1   40.9
      OA-MIL FasterRCNN   38.1   58.1   41.4
  • Where are the noisy annotation files used in the paper?

    • The noisy annotation files for the COCO dataset are available on Google Drive;
    • For the GWHD dataset, please refer to this issue.

Citation

If you find this work or code useful for your research, please consider citing:

@inproceedings{liu2022oamil,
  title={Robust Object Detection With Inaccurate Bounding Boxes},
  author={Liu, Chengxin and Wang, Kewei and Lu, Hao and Cao, Zhiguo and Zhang, Ziming},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}

Acknowledgement

This repository is based on mmdetection.