Project author: bismex

Project description:
[CVPR 2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Language: Python
Repository: git://github.com/bismex/MetaBIN.git
Created: 2020-09-08T06:40:54Z
Project page: https://github.com/bismex/MetaBIN


Feel free to visit my homepage and my awesome person re-ID GitHub page.


Meta Batch-Instance Normalization for Generalizable Person Re-Identification (MetaBIN), [CVPR 2021]


<Illustration of unsuccessful generalization scenarios and our framework>

  • (a) Under-style-normalization happens when a trained BN model fails to distinguish identities on unseen domains.
  • (b) Over-style-normalization happens when a trained IN model removes even ID-discriminative information.
  • (c) Our key idea is to generalize BIN layers by simulating the preceding failure cases in a meta-learning pipeline. By learning to overcome these harsh situations, our model avoids overfitting to source styles.
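
As a concrete illustration of the BN/IN trade-off above, here is a minimal, dependency-free sketch of a batch-instance normalization (BIN) step: each channel blends a batch-normalized and an instance-normalized version of the features through a per-channel balancing parameter rho (in MetaBIN, these balancing parameters are what the meta-learning stage updates). The nested-list layout and function names here are illustrative only, not the repository's actual API.

```python
import statistics

def _normalize(values, mean, var, eps=1e-5):
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

def bin_forward(x, rho, eps=1e-5):
    """Blend BN and IN outputs: rho * BN(x) + (1 - rho) * IN(x).

    x:   nested list indexed as [sample][channel][spatial position]
    rho: per-channel balancing parameters in [0, 1]
    """
    n_samples, n_channels = len(x), len(x[0])
    out = [[None] * n_channels for _ in range(n_samples)]
    for c in range(n_channels):
        # BN statistics: pooled over the whole batch and all spatial positions.
        pooled = [v for s in range(n_samples) for v in x[s][c]]
        bn_mean = statistics.fmean(pooled)
        bn_var = statistics.pvariance(pooled)
        for s in range(n_samples):
            # IN statistics: per sample and channel (per-sample "style" is removed here).
            in_mean = statistics.fmean(x[s][c])
            in_var = statistics.pvariance(x[s][c])
            bn = _normalize(x[s][c], bn_mean, bn_var, eps)
            inorm = _normalize(x[s][c], in_mean, in_var, eps)
            out[s][c] = [rho[c] * b + (1 - rho[c]) * i for b, i in zip(bn, inorm)]
    return out

x = [[[1.0, 2.0, 3.0]],      # sample 1, one channel
     [[11.0, 12.0, 13.0]]]   # sample 2: same pattern with a +10 "style" offset
pure_in = bin_forward(x, rho=[0.0])  # pure IN: the per-sample offset is removed
pure_bn = bin_forward(x, rho=[1.0])  # pure BN: sample 2 stays above the batch mean
```

With rho = 0 the output is style-invariant but may discard identity cues (over-style-normalization); with rho = 1 the per-sample style survives (under-style-normalization). MetaBIN learns rho between these extremes.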

MetaBIN

git clone https://github.com/bismex/MetaBIN.git

1) Prerequisites

  • Ubuntu 18.04
  • Python 3.6
  • Pytorch 1.7+
  • NVIDIA GPU (>= 8,000 MiB of memory)
  • Anaconda 4.8.3
  • CUDA 10.1 (optional)
  • A recent GPU driver (AMP support required [link])

2) Preparation

  1. conda create -n MetaBIN python=3.6
  2. conda activate MetaBIN
  3. conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch
  4. pip install tensorboard
  5. pip install Cython
  6. pip install yacs
  7. pip install termcolor
  8. pip install tabulate
  9. pip install scikit-learn
  10. pip install h5py
  11. pip install imageio
  12. pip install openpyxl
  13. pip install matplotlib
  14. pip install pandas
  15. pip install seaborn

3) Test only

  • Download our model [link] to MetaBIN/logs/Sample/DG-mobilenet

    MetaBIN/logs/Sample/DG-mobilenet/
    ├── last_checkpoint
    ├── model_0099999.pth
    └── result.png
  • Download test datasets [link] to MetaBIN/datasets/

    MetaBIN/datasets/
    ├── GRID
    ├── prid_2011
    ├── QMUL-iLIDS
    └── viper
  • Execute run_file
    cd MetaBIN/
    sh run_evaluate.sh

  • You should get the following results:

Datasets                   Rank-1  Rank-5  Rank-10  mAP     mINP    TPR@FPR=0.0001  TPR@FPR=0.001  TPR@FPR=0.01
ALL_GRID_average           49.68%  67.52%  76.80%   58.10%  58.10%  0.00%           0.00%          46.35%
ALL_GRID_std                2.30%   3.56%   3.14%    2.58%   2.58%  0.00%           0.00%          26.49%
ALL_VIPER_only_10_average  56.90%  76.71%  82.03%   65.98%  65.98%  0.00%           0.00%          50.97%
ALL_VIPER_only_10_std       2.97%   2.11%   2.06%    2.35%   2.35%  0.00%           0.00%           8.45%
ALL_PRID_average           72.50%  88.20%  91.30%   79.78%  79.78%  0.00%           0.00%          91.00%
ALL_PRID_std                2.20%   2.60%   2.00%    1.88%   1.88%  0.00%           0.00%           1.47%
ALL_iLIDS_average          79.67%  93.33%  97.33%   85.51%  85.51%  0.00%           0.00%          56.13%
ALL_iLIDS_std               4.40%   2.47%   2.26%    2.80%   2.80%  0.00%           0.00%          15.77%
all_average                64.69%  81.44%  86.86%   72.34%  72.34%  0.00%           0.00%          61.11%
  • Other models [link]

Advanced (train new models)

4) Check the repository structure below

  MetaBIN/
  ├── configs/
  ├── datasets/ (*download and connect via symbolic links [see section 5]; check each folder name*)
  │   ├── *cuhk02
  │   ├── *cuhk03
  │   ├── *CUHK-SYSU
  │   ├── *DukeMTMC-reID
  │   ├── *GRID
  │   ├── *Market-1501-v15.09.15
  │   ├── *prid_2011
  │   ├── *QMUL-iLIDS
  │   └── *viper
  ├── demo/
  ├── fastreid/
  ├── logs/
  ├── pretrained/
  ├── tests/
  └── tools/
  '*' marks symbolic links that you create (see the sections below)

5) Download datasets and connect them

  • Download dataset

    • For single-source DG
      • Download Market1501 and DukeMTMC-reID [see sections 8-1 and 8-2]
    • For multi-source DG
      • Training: Market1501, DukeMTMC-reID, CUHK02, CUHK03, CUHK-SYSU [see sections 8-1 to 8-5]
      • Testing: GRID, PRID, QMUL i-LIDS, VIPeR [see sections 8-6 to 8-9]
  • Symbolic link (recommended)

    • Check symbolic_link_dataset.sh
    • Modify each dataset path to match your system
    • cd MetaBIN
    • bash symbolic_link_dataset.sh
  • Direct connect (not recommended)

    • If you don't want to create symbolic links, move each dataset folder into ./datasets/
    • Check the folder name of each dataset
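
A sketch of what symbolic_link_dataset.sh might contain, assuming the raw datasets live under a hypothetical /data/reid root (the DATA_ROOT path is an assumption; edit it, and verify against the script actually shipped in the repository):

```shell
#!/bin/bash
# Hypothetical raw-dataset root -- edit DATA_ROOT to match your machine.
DATA_ROOT=/data/reid

mkdir -p datasets
# Link every dataset folder into ./datasets/ under the name MetaBIN expects.
for name in cuhk02 cuhk03 CUHK-SYSU DukeMTMC-reID GRID \
            Market-1501-v15.09.15 prid_2011 QMUL-iLIDS viper; do
  ln -sfn "$DATA_ROOT/$name" "datasets/$name"
done
```

Symbolic links keep the large raw datasets in one place, so the same data can back several code checkouts without copies.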

6) Create pretrained and logs folders

  • Symbolic link (recommended)

    • Make ‘MetaBIN(logs)’ and ‘MetaBIN(pretrained)’ folders outside MetaBIN

      ├── MetaBIN/
      │   ├── configs/
      │   ├── ....
      │   └── tools/
      ├── MetaBIN(logs)/
      └── MetaBIN(pretrained)/
    • cd MetaBIN
    • bash symbolic_link_others.sh
    • Download the pretrained models and rename them
      • mobilenetv2_x1_0: [link]
      • mobilenetv2_x1_4: [link]
      • Rename the files to mobilenetv2_1.0.pth and mobilenetv2_1.4.pth
    • Or download the pretrained models [link]
  • Direct connect (not recommended)

    • Create ‘pretrained’ and ‘logs’ folders inside MetaBIN
    • Move the pretrained models into pretrained/

7) Train

  • If you run the code in PyCharm

    • tools/train_net.py -> Edit configuration
    • Working directory: your folder/MetaBIN/
    • Parameters: --config-file ./configs/Sample/DG-mobilenet.yml
  • Single GPU

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml

  • Single GPU (specific GPU)

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml MODEL.DEVICE "cuda:0"

  • Resume (model weights are loaded automatically from the last_checkpoint file in logs)

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml --resume

  • Evaluation only

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml --eval-only

8) Datasets

  • (1) Market1501

    • Create a directory named Market-1501-v15.09.15
    • Download the dataset to Market-1501-v15.09.15 from link and extract the files.
    • The data structure should look like
      Market-1501-v15.09.15/
      ├── bounding_box_test/
      ├── bounding_box_train/
      ├── gt_bbox/
      ├── gt_query/
      └── query/
  • (2) DukeMTMC-reID

    • Create a directory called DukeMTMC-reID
    • Download DukeMTMC-reID from link and extract the files.
    • The data structure should look like
      DukeMTMC-reID/
      ├── bounding_box_test/
      ├── bounding_box_train/
      └── query/
  • (3) CUHK02

    • Create cuhk02 folder
    • Download the data from link and put it under cuhk02.
      • The data structure should look like
        cuhk02/
        ├── P1/
        ├── P2/
        ├── P3/
        ├── P4/
        └── P5/
  • (4) CUHK03

    • Create cuhk03 folder
    • Download dataset to cuhk03 from link and extract “cuhk03_release.zip”, resulting in “cuhk03/cuhk03_release/”.
    • Download the new split (767/700) from person-re-ranking. What you need are “cuhk03_new_protocol_config_detected.mat” and “cuhk03_new_protocol_config_labeled.mat”. Put these two mat files under cuhk03.
    • The data structure should look like
      cuhk03/
      ├── cuhk03_release/
      ├── cuhk03_new_protocol_config_detected.mat
      └── cuhk03_new_protocol_config_labeled.mat
  • (5) Person Search (CUHK-SYSU)

    • Create a directory called CUHK-SYSU
    • Download CUHK-SYSU from link and extract the files.
    • Cropped images can be created with my MATLAB script make_cropped_image.m (included in the datasets folder)
    • The data structure should look like
      CUHK-SYSU/
      ├── annotation/
      ├── Image/
      ├── cropped_image/
      └── make_cropped_image.m (my MATLAB script)
  • (6) GRID

    • Create a directory called GRID
    • Download GRID from link and extract the files.
    • Split sets (splits_single_shot.json) can be created by the Python script grid.py
    • The data structure should look like
    GRID/
    ├── gallery/
    ├── probe/
    └── splits_single_shot.json (created by `grid.py` in `fastreid/data/datasets/`)
  • (7) PRID

    • Create a directory called prid_2011
    • Download prid_2011 from link and extract the files.
    • Split sets (splits_single_shot.json) can be created by the Python script prid.py
    • The data structure should look like
    prid_2011/
    ├── single_shot/
    ├── multi_shot/
    └── splits_single_shot.json (created by `prid.py` in `fastreid/data/datasets/`)
  • (8) QMUL i-LIDS

    • Create a directory called QMUL-iLIDS
    • Download the dataset from link and extract the files.
    • The data structure should look like
    QMUL-iLIDS/
    ├── images/
    └── splits.json (created by `iLIDS.py` in `fastreid/data/datasets/`)
  • (9) VIPeR

    • Create a directory called viper
    • Download VIPeR from link and extract the files.
    • Split sets can be created with my MATLAB script make_split.m (included in the datasets folder)
    • The data structure should look like
    • The data structure should look like
      viper/
      ├── cam_a/
      ├── cam_b/
      ├── make_split.m (my MATLAB script)
      ├── split_1a  # Train: split1, Test: split2 ([query]cam1 -> [gallery]cam2)
      ├── split_1b  # Train: split2, Test: split1 (cam1 -> cam2)
      ├── split_1c  # Train: split1, Test: split2 (cam2 -> cam1)
      ├── split_1d  # Train: split2, Test: split1 (cam2 -> cam1)
      ├── ...
      ├── split_10a
      ├── split_10b
      ├── split_10c
      └── split_10d

9) Code structure

  • Our code is based on fastreid [link]

  • fastreid/config/defaults.py: default settings (parameters)

  • fastreid/data/datasets/: about datasets

  • tools/train_net.py: Main code (train/test/tsne/visualize)

  • fastreid/engine/defaults.py: build dataset, build model
    • fastreid/data/build.py: build datasets (base model/meta-train/meta-test)
    • fastreid/data/samplers/triplet_sampler.py: data sampler
    • fastreid/modeling/meta_arch/metalearning.py: build model
      • fastreid/modeling/backbones/mobilenet_v2.py or resnet.py: backbone network
      • fastreid/heads/metalearning_head.py: head network (bnneck)
    • fastreid/solver/build.py: build optimizer and scheduler
  • fastreid/engine/train_loop.py: main train code
    • run_step_meta_learning1(): update base model
    • run_step_meta_learning2(): update balancing parameters (meta-learning)
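
The two-step loop above can be caricatured with scalars: step 1 takes a gradient step on a base parameter using a meta-train loss, and step 2 updates the balancing parameter on a meta-test loss evaluated with the freshly updated base parameter (a one-step lookahead). The quadratic loss and hand-derived gradients below are a toy stand-in for the real networks and objectives, not the repository's code.

```python
def loss(w, rho, target):
    # Toy stand-in for both the meta-train and meta-test objectives.
    return (rho * w - target) ** 2

def train_step(w, rho, train_x, test_x, lr=0.1):
    # Step 1 (run_step_meta_learning1-style): update the base parameter w
    # on the meta-train batch; the balancing parameter rho stays frozen.
    grad_w = 2 * (rho * w - train_x) * rho
    w = w - lr * grad_w
    # Step 2 (run_step_meta_learning2-style): update rho on the meta-test
    # batch, evaluated with the just-updated w.
    grad_rho = 2 * (rho * w - test_x) * w
    rho = rho - lr * grad_rho
    return w, rho

w, rho = 0.0, 0.5
for _ in range(500):
    w, rho = train_step(w, rho, train_x=1.0, test_x=1.0)
# After training, the prediction rho * w approaches the target.
```

The point of the split is that rho is never fitted to the same batch as w: it is judged only on held-out (meta-test) data, which is what pushes the balancing parameters toward settings that generalize rather than overfit to source styles.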

10) Handling errors

  • AMP
    • If your GPU driver is too old, you cannot use AMP (automatic mixed precision).
    • In that case, set the AMP option to False in /MetaBIN/configs/Sample/DG-mobilenet.yml
    • Memory usage will then increase.
  • Fastreid evaluation
    • If a compile error occurs in fastreid, run the following command.
    • cd fastreid/evaluation/rank_cylib; make all
  • No such file or directory ‘logs/Sample’
    • Please check logs (section 3)
  • No such file or directory ‘pretrained’
    • Please check pretrained (section 6)
  • No such file or directory ‘datasets’
    • Please check datasets (section 8)
  • RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
    • Check that your installed PyTorch build matches the CUDA version supported by your GPU and driver.

Citation

  @InProceedings{choi2021metabin,
      title     = {Meta Batch-Instance Normalization for Generalizable Person Re-Identification},
      author    = {Choi, Seokeon and Kim, Taekyung and Jeong, Minki and Park, Hyoungseob and Kim, Changick},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2021}
  }