fixed the bad links

caojiewen 2021-04-09 12:22:07 +08:00
parent b50db3eba2
commit fc986f98e0
4 changed files with 26 additions and 24 deletions

View File

@ -17,7 +17,9 @@ If you find our work useful in your research or publication, please cite our work
}
## Model architecture
-### The overall network architecture of IPT is shown as below:
+### The overall network architecture of IPT is shown as below
![architecture](./image/ipt.png)
## Dataset
@ -27,12 +29,9 @@ The benchmark datasets can be downloaded as follows:
For super-resolution:
Set5,
[Set14](https://sites.google.com/site/romanzeyde/research-interests),
[B100](https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/),
-[Urban100](https://sites.google.com/site/jbhuang0604/publications/struct_sr).
+Urban100.
For denoising:
@ -47,11 +46,15 @@ The result images are converted into YCbCr color space. The PSNR is evaluated on
## Requirements
### Hardware (GPU)
> Prepare hardware environment with GPU.
### Framework
> [MindSpore](https://www.mindspore.cn/install/en)
### For more information, please check the resources below
[MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
[MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
@ -61,7 +64,7 @@ The result images are converted into YCbCr color space. The PSNR is evaluated on
### Scripts and Sample Code
-```
+```bash
IPT
├── eval.py # inference entry
├── image
@ -95,23 +98,25 @@ IPT
## Evaluation
### Evaluation Process
> Inference example:
> For SR x4:
-```
+```bash
python eval.py --dir_data ../../data/ --data_test Set14 --nochange --test_only --ext img --chop_new --scale 4 --pth_path ./model/IPT_sr4.ckpt
```
> Or one can run the following script for all tasks.
-```
+```bash
sh scripts/run_eval.sh
```
### Evaluation Result
The results are evaluated by PSNR (Peak Signal-to-Noise Ratio), in the following format.
-```
+```bash
result: {"Mean psnr of Set5 x4 is 32.68"}
```
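PSNR figures like the one above can be reproduced in spirit with a short generic sketch (plain NumPy, not the repository's own metric code, which additionally restricts evaluation to the Y channel of YCbCr as noted earlier):

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two images of equal shape."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Two flat 8x8 images differing by 10 gray levels -> MSE = 100
print(psnr(np.zeros((8, 8)), np.full((8, 8), 10.0)))  # ~28.13 dB
```

Higher is better; identical images yield infinite PSNR.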
@ -144,4 +149,4 @@ Derain results:
## ModelZoo Homepage
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

View File

@ -37,18 +37,18 @@ Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for `reduce precision`.
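The FP32-to-FP16 hand-off described above can be illustrated with a minimal NumPy sketch (NumPy stands in here; this is not MindSpore's API, but the cast mirrors what happens when an FP16 operator consumes an FP32 input):

```python
import numpy as np

x32 = np.float32(1.0 + 1e-4)  # FP32 resolves the 1e-4 increment
x16 = np.float16(x32)         # cast to half precision, as an FP16 operator would

print(x32 != np.float32(1.0))  # True: FP32 keeps the increment
print(x16 == np.float16(1.0))  # True: FP16 rounds it away (spacing near 1.0 is ~0.001)
```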
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU/CPU)
-- Prepare hardware environment with Ascend、GPU or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Prepare hardware environment with Ascend, GPU or CPU processor.
- Framework
-- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
+- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script description](#contents)

View File

@ -76,10 +76,8 @@ Dataset used: [COCO2017](https://cocodataset.org/)
# [Environment Requirements](#contents)
- Hardware (Ascend)
-- Prepare hardware environment with Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Prepare hardware environment with Ascend processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)

View File

@ -25,8 +25,7 @@ An effective and efficient architecture performance evaluation scheme is essenti
# [Dataset](#contents)
-- - Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
+- Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
- Dataset size: 60000 color images in 10 classes
- Train: 50000 images
- Test: 10000 images
@ -37,18 +36,18 @@ An effective and efficient architecture performance evaluation scheme is essenti
## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for `reduce precision`.
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU/CPU)
-- Prepare hardware environment with Ascend、GPU or CPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
+- Prepare hardware environment with Ascend, GPU or CPU processor.
- Framework
-- [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
+- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below
- [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
-- [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
+- [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Script description](#contents)