Contents
- SRCNN Description
- Model Architecture
- Dataset
- Environment Requirements
- Quick Start
- Script Description
- Model Description
- ModelZoo Homepage
SRCNN Description
SRCNN learns an end-to-end mapping between low- and high-resolution images, with little extra pre/post-processing beyond the optimization. Despite its lightweight structure, SRCNN achieves performance superior to the state-of-the-art methods.
Paper: Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang. Image Super-Resolution Using Deep Convolutional Networks. 2014.
Model Architecture
The overall SRCNN network is a three-layer convolutional model: patch extraction and representation, non-linear mapping, and reconstruction.
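The repository's network definition is in src/srcnn.py. The snippet below is only a minimal MindSpore sketch of the paper's basic 9-1-5 setting (9x9, 1x1 and 5x5 kernels with 64 and 32 feature maps), assuming single-channel input; the actual implementation may differ in kernel sizes and channel count.

```python
import mindspore.nn as nn


class SRCNN(nn.Cell):
    """Sketch of SRCNN: patch extraction, non-linear mapping, reconstruction."""

    def __init__(self, num_channels=1):
        super(SRCNN, self).__init__()
        # 9x9 patch extraction and representation, 64 feature maps
        self.conv1 = nn.Conv2d(num_channels, 64, kernel_size=9, pad_mode='same', has_bias=True)
        # 1x1 non-linear mapping to 32 feature maps (the paper's 9-1-5 setting)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1, pad_mode='same', has_bias=True)
        # 5x5 reconstruction back to the image space
        self.conv3 = nn.Conv2d(32, num_channels, kernel_size=5, pad_mode='same', has_bias=True)
        self.relu = nn.ReLU()

    def construct(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        return self.conv3(x)
```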
Dataset
- Training Dataset
  - ILSVRC2013_DET_train: 395918 images, 200 classes
- Evaluation Dataset
  - Set5: 5 images
  - Set14: 14 images
  - Set5 & Set14 download url: http://vllab.ucmerced.edu/wlai24/LapSRN/results/SR_testing_datasets.zip
  - BSDS200: 200 images
  - BSDS200 download url: http://vllab.ucmerced.edu/wlai24/LapSRN/results/SR_training_datasets.zip
- Data format: RGB images.
- Note: Data will be processed in src/dataset.py (see the illustrative preprocessing sketch after this list).
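SRCNN's training pairs come from bicubically downscaling each image by the configured scale and upscaling it back, then cutting aligned low/high-resolution patches (patch_size and stride are set in src/config.py). The following is only an illustrative sketch of that degradation step using Pillow and NumPy, not the repository's actual code in src/dataset.py:

```python
import numpy as np
from PIL import Image


def make_lr_hr_pairs(image_path, scale=2, patch_size=33, stride=33):
    """Bicubically degrade an image and cut aligned (LR, HR) patch pairs."""
    hr = Image.open(image_path).convert('RGB')
    w, h = hr.size
    w, h = (w // scale) * scale, (h // scale) * scale  # crop to a multiple of the scale
    hr = hr.crop((0, 0, w, h))

    # Degradation: downscale then upscale back with bicubic interpolation
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC).resize((w, h), Image.BICUBIC)

    hr_np, lr_np = np.asarray(hr), np.asarray(lr)
    pairs = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            pairs.append((lr_np[top:top + patch_size, left:left + patch_size],
                          hr_np[top:top + patch_size, left:left + patch_size]))
    return pairs
```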
Environment Requirements
- Hardware (GPU)
  - Prepare the hardware environment with a GPU processor.
- Framework
  - MindSpore
- For more information, please check the resources below:
Script Description
Script and sample code
.
└─srcnn
  ├─README.md
  ├─scripts
  │ ├─run_distribute_train_gpu.sh  # launch distributed training on the GPU platform
  │ └─run_eval_gpu.sh              # launch evaluation on the GPU platform
  ├─src
  │ ├─config.py                    # parameter configuration
  │ ├─dataset.py                   # data preprocessing
  │ ├─metric.py                    # accuracy metric
  │ ├─utils.py                     # commonly used helper functions
  │ └─srcnn.py                     # network definition
  ├─create_dataset.py              # generate the MindRecord training dataset
  ├─eval.py                        # evaluation script
  ├─requirements.txt               # python dependencies
  └─train.py                       # training script
Script Parameters
Parameters for both training and evaluation can be set in config.py.
'lr': 1e-4, # learning rate
'patch_size': 33, # training patch size
'stride': 99, # patch sampling stride
'scale': 2, # super-resolution scale factor
'epoch_size': 20, # total number of training epochs
'batch_size': 16, # input batch size
'save_checkpoint': True, # whether to save checkpoint files
'keep_checkpoint_max': 10, # maximum number of checkpoints to keep
'save_checkpoint_path': 'outputs/' # checkpoint save path
Training Process
Dataset
To create the dataset, first download the training dataset and then convert it to MindRecord files as follows.
python create_dataset.py --src_folder=/dataset/ILSVRC2013_DET_train --output_folder=/dataset/mindrecord_dir
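Conceptually, the conversion writes low/high-resolution patch pairs into MindRecord files with MindSpore's FileWriter. The sketch below shows the general shape of such a conversion; the field names and shapes are illustrative, not the schema create_dataset.py actually uses:

```python
import numpy as np
from mindspore.mindrecord import FileWriter


def write_pairs_to_mindrecord(pairs, output_file):
    """Write (lr_patch, hr_patch) numpy pairs of shape (33, 33, 3) to a MindRecord file."""
    writer = FileWriter(file_name=output_file, shard_num=1)
    schema = {"lr": {"type": "float32", "shape": [33, 33, 3]},
              "hr": {"type": "float32", "shape": [33, 33, 3]}}
    writer.add_schema(schema, "srcnn_patch_pairs")
    records = [{"lr": lr.astype(np.float32), "hr": hr.astype(np.float32)}
               for lr, hr in pairs]
    writer.write_raw_data(records)
    writer.commit()
```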
Usage
GPU:
sh run_distribute_train_gpu.sh DEVICE_NUM VISIBLE_DEVICES(0,1,2,3,4,5,6,7) DATASET_PATH
Launch
# distributed training example(8p) for GPU
sh run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 /dataset/train
# standalone training example for GPU
sh run_distribute_train_gpu.sh 1 0 /dataset/train
You can find the checkpoint files together with the training results in the log.
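At its core, the training that train.py performs is the SRCNN network optimized with MSE loss and Adam, with checkpointing controlled by the config values above. The sketch below illustrates that setup with the high-level MindSpore Model API; the real script additionally handles dataset loading and distributed initialization, and the `build_and_train` wrapper is hypothetical:

```python
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.callback import CheckpointConfig, LossMonitor, ModelCheckpoint


def build_and_train(net, train_dataset, cfg):
    """Train the SRCNN network with MSE loss, Adam, and checkpoint callbacks."""
    loss = nn.MSELoss()
    opt = nn.Adam(net.trainable_params(), learning_rate=cfg.lr)
    model = Model(net, loss_fn=loss, optimizer=opt)

    ckpt_cfg = CheckpointConfig(keep_checkpoint_max=cfg.keep_checkpoint_max)
    callbacks = [LossMonitor(),
                 ModelCheckpoint(prefix="srcnn",
                                 directory=cfg.save_checkpoint_path,
                                 config=ckpt_cfg)]
    model.train(cfg.epoch_size, train_dataset, callbacks=callbacks)
```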
Evaluation Process
Usage
# Evaluation
sh run_eval_gpu.sh DEVICE_ID DATASET_PATH CHECKPOINT_PATH
Launch
# Evaluation with checkpoint
sh run_eval_gpu.sh 1 /dataset/val /ckpt_dir/srcnn-20_*.ckpt
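Internally, evaluation amounts to restoring the checkpoint and running the network over the test set with a PSNR metric (the repository's metric class is in src/metric.py). A minimal sketch of that flow, where the network, dataset, and metric objects are assumed to be constructed elsewhere:

```python
import mindspore.nn as nn
from mindspore import Model, load_checkpoint, load_param_into_net


def evaluate(net, eval_dataset, ckpt_path, psnr_metric):
    """Restore a checkpoint and report PSNR over the evaluation dataset."""
    load_param_into_net(net, load_checkpoint(ckpt_path))
    model = Model(net, loss_fn=nn.MSELoss(), metrics={"PSNR": psnr_metric})
    result = model.eval(eval_dataset)
    print("result", result)  # e.g. result {'PSNR': 36.72...}
```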
Result
Evaluation results will be stored in the scripts path. There you can find results like the following in the log.
result {'PSNR': 36.72421418219669}
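The reported PSNR is derived from the mean squared error between the super-resolved output and the ground truth. The repository computes it in src/metric.py; an equivalent sketch, assuming pixel values normalized to [0, 1]:

```python
import numpy as np


def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10((max_val ** 2) / mse)
```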
Model Description
Performance
Training Performance
Parameters | SRCNN |
---|---|
Resource | NV PCIE V100-32G |
Uploaded Date | 03/02/2021 |
MindSpore Version | master |
Dataset | ImageNet2013 (scale: 2) |
Training Parameters | src/config.py |
Optimizer | Adam |
Loss Function | MSELoss |
Loss | 0.00179 |
Total time | 1h (8pcs) |
Checkpoint for Fine tuning | 671 KB (.ckpt file) |
Inference Performance
Parameters | SRCNN |
---|---|
Resource | NV PCIE V100-32G |
Uploaded Date | 03/02/2021 |
MindSpore Version | master |
Dataset | Set5 / Set14 / BSDS200 (scale: 2) |
batch_size | 1 |
PSNR | 36.72/32.58/33.81 |
ModelZoo Homepage
Please check the official homepage.