forked from mindspore-Ecosystem/mindspore

update unet README

parent cac91018ad · commit 1e0f164c23

@@ -28,7 +28,7 @@

## [Unet Description](#contents)

-Unet Medical model for 2D image segmentation. This implementation follows the original paper [UNet: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597). In the 2015 ISBI cell tracking competition, Unet won many of the top awards. The paper proposes a network model for medical image segmentation, along with a data augmentation method that makes effective use of annotated data to address the shortage of annotated data in the medical field. A U-shaped network structure is used to extract context and location information.
+Unet for 2D image segmentation. This implementation follows the original paper [UNet: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597). In the 2015 ISBI cell tracking competition, Unet won many of the top awards. The paper proposes a network model for medical image segmentation, along with a data augmentation method that makes effective use of annotated data to address the shortage of annotated data in the medical field. A U-shaped network structure is used to extract context and location information.

UNet++ is a neural architecture for semantic and instance segmentation with re-designed skip pathways and deep supervision.

@@ -71,7 +71,8 @@ After installing MindSpore via the official website, you can start training and

- Select the network and dataset to use

-Refer to `src/config.py`. We provide some parameter configurations for a quick start. You can set `'model'` to `'unet_medical'`, `'unet_nested'` or `'unet_simple'` to select which net to use. We support two datasets, `ISBI` and `Cell_nuclei`; you can set `'dataset'` to `'Cell_nuclei'` to use the `Cell_nuclei` dataset. The default is `ISBI`.
+1. Select `cfg_unet` in `src/config.py`. We support unet and unet++, and we provide some parameter configurations for a quick start (see the sketch after this list).
+2. If you want other parameters, please refer to `src/config.py`. You can set `'model'` to `'unet_nested'` or `'unet_simple'` to select which net to use. We support two datasets, `ISBI` and `Cell_nuclei`; you can set `'dataset'` to `'Cell_nuclei'` to use the `Cell_nuclei` dataset. The default is `ISBI`.
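The selection step can be pictured with a minimal sketch of `src/config.py`. The dictionary names and keys below match the ones this commit touches, but the values are abbreviated, so treat it as an illustration rather than the full file:

```python
# Abbreviated sketch of src/config.py; the real file defines more keys
# (run_eval, eval_resize, resume, transfer_training, ...).
cfg_unet_simple = {
    'model': 'unet_simple',   # or 'unet_nested' for unet++
    'dataset': 'ISBI',        # set to 'Cell_nuclei' to switch datasets
    'img_size': [576, 576],
    'lr': 0.0001,
    'epochs': 400,            # total training epochs when run 1p
    'repeat': 400,            # dataset repeats per epoch (added in this commit)
    'batchsize': 16,
}

# Training and evaluation read whichever configuration cfg_unet points to.
cfg_unet = cfg_unet_simple
```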
- Run on Ascend
@@ -157,6 +158,7 @@ Parameters for both training and evaluation can be set in config.py
  'name': 'Unet',             # model name
  'lr': 0.0001,               # learning rate
  'epochs': 400,              # total training epochs when run 1p
+ 'repeat': 400,              # number of dataset repeats per epoch
  'distribute_epochs': 1600,  # total training epochs when run 8p
  'batchsize': 16,            # training batch size
  'cross_valid_ind': 1,       # cross-validation index

@@ -185,6 +187,7 @@ Parameters for both training and evaluation can be set in config.py
  'img_size': [96, 96],       # image size
  'lr': 3e-4,                 # learning rate
  'epochs': 200,              # total training epochs when run 1p
+ 'repeat': 10,               # number of dataset repeats per epoch
  'distribute_epochs': 1600,  # total training epochs when run 8p
  'batchsize': 16,            # batch size
  'num_classes': 2,           # number of classes in the dataset

@@ -206,6 +209,8 @@ Parameters for both training and evaluation can be set in config.py
  'eval_interval': 1          # evaluation interval when run_eval is True
```
+
+*Note: the number of steps per epoch is floor(epochs / repeat). Because the unet dataset is usually small, we repeat the dataset to avoid dropping too many images when batching.*
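To make the repeat trick concrete, here is a small sketch using MindSpore's dataset API; the 30-sample `NumpySlicesDataset` source is hypothetical and stands in for the real ISBI loader:

```python
import mindspore.dataset as ds

# Hypothetical toy source standing in for the real ISBI loader.
data = ds.NumpySlicesDataset(list(range(30)), column_names=["img"], shuffle=False)

repeat, batch_size = 10, 16
# Without repeating: 30 samples yield only 1 full batch of 16 per pass,
# and 14 images are dropped because drop_remainder=True.
# Repeating first: 30 * 10 = 300 samples -> 18 full batches per pass.
data = data.repeat(repeat).batch(batch_size, drop_remainder=True)
```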

## [Training Process](#contents)

### Training

@@ -269,7 +274,7 @@ You can add `run_eval` to the start shell and set it to True if you want to evaluate while training.

Before running the command below, please check the checkpoint path used for evaluation, and set it to an absolute path, e.g., "username/unet/ckpt_unet_medical_adam-48_600.ckpt".

```shell
-python eval.py --data_url=/path/to/data/ --ckpt_path=/path/to/checkpoint/ > eval.log 2>&1 &
+python eval.py --data_url=/path/to/data/ --ckpt_path=/path/to/unet.ckpt > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval.sh [DATASET] [CHECKPOINT]
```

@@ -278,7 +283,7 @@ The above python command will run in the background. You can view the results through the file eval.log.

```shell
# grep "Cross valid dice coeff is:" eval.log
-============== Cross valid dice coeff is: {'dice_coeff': 0.9085704886070473}
+============== Cross valid dice coeff is: {'dice_coeff': 0.9111}
```

## [Model Description](#contents)
@@ -292,17 +297,16 @@ The above python command will run in the background. You can view the results through the file eval.log.

| Model Version | Unet |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 |
| Uploaded Date | 09/15/2020 (month/day/year) |
-| MindSpore Version | 1.0.0 |
+| MindSpore Version | 1.2.0 |
| Dataset | ISBI |
| Training Parameters | 1pc: epoch=400, total steps=600, batch_size = 16, lr=0.0001 |
| | 8pc: epoch=1600, total steps=300, batch_size = 16, lr=0.0001 |
-| Optimizer | ADAM |
+| Optimizer | Adam |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 0.22070312 |
-| Speed | 1pc: 267 ms/step; 8pc: 280 ms/step |
-| Total time | 1pc: 2.67 mins; 8pc: 1.40 mins |
+| Speed | 1pc: 267 ms/step |
+| Total time | 1pc: 2.67 mins |
| Parameters (M) | 93M |
| Checkpoint for Fine tuning | 355.11M (.ckpt file) |
| Scripts | [unet script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/unet) |

@@ -345,7 +349,7 @@ Set option `resume` to True in `config.py`, and set `resume_ckpt` to the path of the checkpoint.

```python
  'resume': True,
- 'resume_ckpt': 'ckpt_0/ckpt_unet_medical_adam_1-1_600.ckpt',
+ 'resume_ckpt': 'ckpt_0/ckpt_unet_sample_adam_1-1_600.ckpt',
  'transfer_training': False,
  'filter_weight': ["final.weight"]
```

@@ -356,7 +360,7 @@ Do the same thing as resuming training above. In addition, set `transfer_training` to True.

```python
  'resume': True,
- 'resume_ckpt': 'ckpt_0/ckpt_unet_medical_adam_1-1_600.ckpt',
+ 'resume_ckpt': 'ckpt_0/ckpt_unet_sample_adam_1-1_600.ckpt',
  'transfer_training': True,
  'filter_weight': ["final.weight"]
```
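For intuition, here is a rough sketch of what resuming with `transfer_training` and `filter_weight` amounts to. It assumes MindSpore's `load_checkpoint`/`load_param_into_net` serialization functions; the helper itself is illustrative, not the repo's actual code:

```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def load_resume_ckpt(net, ckpt_path, transfer_training=False, filter_weight=()):
    """Illustrative helper: load a checkpoint, optionally dropping filtered weights."""
    param_dict = load_checkpoint(ckpt_path)
    if transfer_training:
        # Dropping e.g. "final.weight" lets the backbone be reused on a
        # dataset with a different number of classes.
        for name in [k for k in param_dict if k in filter_weight]:
            del param_dict[name]
    load_param_into_net(net, param_dict)

# e.g. load_resume_ckpt(net, 'ckpt_0/ckpt_unet_sample_adam_1-1_600.ckpt',
#                       transfer_training=True, filter_weight=["final.weight"])
```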
@@ -31,7 +31,7 @@

## U-Net Description

-The U-Net medical model is for 2D image segmentation. The implementation follows the paper [UNet: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597). In the 2015 ISBI cell tracking competition, U-Net won many of the top awards. The paper proposes a network model for medical image segmentation and a data augmentation method that makes effective use of annotated data, addressing the shortage of annotated data in the medical field. A U-shaped network structure is also used to extract context and location information.
+The U-Net model is for 2D image segmentation. The implementation follows the paper [UNet: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597). In the 2015 ISBI cell tracking competition, U-Net won many of the top awards. The paper proposes a network model for medical image segmentation and a data augmentation method that makes effective use of annotated data, addressing the shortage of annotated data in the medical field. A U-shaped network structure is also used to extract context and location information.

UNet++ is an enhanced version of U-Net that uses re-designed skip connections and deep supervision, and can be used for semantic and instance segmentation.

@@ -75,7 +75,8 @@ UNet++ is an enhanced version of U-Net that uses re-designed skip connections and deep supervision

- Select the model and dataset

-We provide some network and dataset parameter configurations in `src/config.py` for a quick start. You can set `'model'` to `'unet_medical'`, `'unet_nested'` or `'unet_simple'` to choose the network structure. We support the `ISBI` and `Cell_nuclei` datasets; `ISBI` is used by default, and you can set `'dataset'` to `'Cell_nuclei'` to use the `Cell_nuclei` dataset.
+1. In `src/config.py`, assign the corresponding configuration to `cfg_unet`. unet and unet++ are currently supported, and we provide some network and dataset parameter configurations in `src/config.py` for a quick start.
+2. For other parameters, also refer to `src/config.py`. You can set `'model'` to `'unet_nested'` or `'unet_simple'` to choose the network structure. We support the `ISBI` and `Cell_nuclei` datasets; `ISBI` is used by default, and you can set `'dataset'` to `'Cell_nuclei'` to use the `Cell_nuclei` dataset.

- Run on Ascend

@@ -161,6 +162,7 @@ bash scripts/docker_start.sh unet:20.1.0 [DATA_DIR] [MODEL_DIR]
  'name': 'Unet',             # model name
  'lr': 0.0001,               # learning rate
  'epochs': 400,              # total training epochs when running on 1p
+ 'repeat': 400,              # number of dataset repeats per epoch
  'distribute_epochs': 1600,  # total training epochs when running on 8p
  'batchsize': 16,            # training batch size
  'cross_valid_ind': 1,       # cross-validation index

@@ -182,6 +184,7 @@ bash scripts/docker_start.sh unet:20.1.0 [DATA_DIR] [MODEL_DIR]
  'img_size': [96, 96],       # input image size
  'lr': 3e-4,                 # learning rate
  'epochs': 200,              # total training epochs when running on 1p
+ 'repeat': 10,               # number of dataset repeats per epoch
  'distribute_epochs': 1600,  # total training epochs when running on 8p
  'batchsize': 16,            # training batch size
  'num_classes': 2,           # number of classes in the dataset

@@ -199,6 +202,8 @@ bash scripts/docker_start.sh unet:20.1.0 [DATA_DIR] [MODEL_DIR]
  'filter_weight': ['final1.weight', 'final2.weight', 'final3.weight', 'final4.weight'] # parameter names filtered out for transfer learning
```
+
+Note: the actual number of steps per epoch is floor(epochs / repeat). Because unet datasets are usually small, the dataset is repeated within each epoch to avoid dropping too many images when batching.

## Training Process

### Usage

@@ -262,7 +267,7 @@ step: 300, loss is 0.18949677, fps is 57.63118508760329

Before running the command below, check the checkpoint path used for evaluation, and set it to an absolute path, e.g., "username/unet/ckpt_unet_medical_adam-48_600.ckpt".

```shell
-python eval.py --data_url=/path/to/data/ --ckpt_path=/path/to/checkpoint/ > eval.log 2>&1 &
+python eval.py --data_url=/path/to/data/ --ckpt_path=/path/to/unet.ckpt > eval.log 2>&1 &
OR
bash scripts/run_standalone_eval.sh [DATASET] [CHECKPOINT]
```

@@ -271,7 +276,7 @@ step: 300, loss is 0.18949677, fps is 57.63118508760329

```shell
# grep "Cross valid dice coeff is:" eval.log
-============== Cross valid dice coeff is: {'dice_coeff': 0.9085704886070473}
+============== Cross valid dice coeff is: {'dice_coeff': 0.9111}
```

## Model Description

@@ -285,11 +290,10 @@ step: 300, loss is 0.18949677, fps is 57.63118508760329

| Model Version | U-Net |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755GB; OS Euler2.8 |
| Uploaded Date | 2020-9-15 |
-| MindSpore Version | 1.0.0 |
+| MindSpore Version | 1.2.0 |
| Dataset | ISBI |
| Training Parameters | 1pc: epoch=400, total steps=600, batch_size = 16, lr=0.0001 |
| | 8pc: epoch=1600, total steps=300, batch_size = 16, lr=0.0001 |
-| Optimizer | ADAM |
+| Optimizer | Adam |
| Loss Function | Softmax Cross Entropy |
| Outputs | Probability |
| Loss | 0.22070312 |

@@ -19,6 +19,7 @@ cfg_unet_medical = {
    'img_size': [572, 572],
    'lr': 0.0001,
    'epochs': 400,
+   'repeat': 400,
    'distribute_epochs': 1600,
    'batchsize': 16,
    'cross_valid_ind': 1,

@@ -44,6 +45,7 @@ cfg_unet_nested = {
    'img_size': [576, 576],
    'lr': 0.0001,
    'epochs': 400,
+   'repeat': 400,
    'distribute_epochs': 1600,
    'batchsize': 16,
    'cross_valid_ind': 1,

@@ -73,6 +75,7 @@ cfg_unet_nested_cell = {
    'img_size': [96, 96],
    'lr': 3e-4,
    'epochs': 200,
+   'repeat': 10,
    'distribute_epochs': 1600,
    'batchsize': 16,
    'cross_valid_ind': 1,

@@ -101,6 +104,7 @@ cfg_unet_simple = {
    'img_size': [576, 576],
    'lr': 0.0001,
    'epochs': 400,
+   'repeat': 400,
    'distribute_epochs': 1600,
    'batchsize': 16,
    'cross_valid_ind': 1,

@@ -120,7 +124,7 @@ cfg_unet_simple = {
    'eval_resize': False
}

-cfg_unet = cfg_unet_medical
+cfg_unet = cfg_unet_simple
if not ('dataset' in cfg_unet and cfg_unet['dataset'] == 'Cell_nuclei') and cfg_unet['eval_resize']:
    print("ISBI dataset not support resize to original image size when in evaluation.")
    cfg_unet['eval_resize'] = False

@@ -80,7 +80,7 @@ def train_net(args_opt,
    else:
        criterion = CrossEntropyWithLogits()
    if 'dataset' in cfg and cfg['dataset'] == "Cell_nuclei":
-        repeat = 10
+        repeat = cfg['repeat']
        dataset_sink_mode = True
        per_print_times = 0
        train_dataset = create_cell_nuclei_dataset(data_dir, cfg['img_size'], repeat, batch_size,

@@ -90,7 +90,7 @@ def train_net(args_opt,
                                                   eval_resize=cfg["eval_resize"], split=0.8,
                                                   python_multiprocessing=False)
    else:
-        repeat = epochs
+        repeat = cfg['repeat']
        dataset_sink_mode = False
        per_print_times = 1
        train_dataset, valid_dataset = create_dataset(data_dir, repeat, batch_size, True, cross_valid_ind,
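
Read together, the two train.py hunks replace both hard-coded repeat counts with the new `'repeat'` config key. A condensed sketch of the resulting branch logic, with names taken from the diff and the surrounding function body elided:

```python
# Condensed from the train_net branches above; not the full function.
if 'dataset' in cfg and cfg['dataset'] == "Cell_nuclei":
    repeat = cfg['repeat']    # was hard-coded: repeat = 10
    dataset_sink_mode = True
    per_print_times = 0
else:
    repeat = cfg['repeat']    # was tied to epochs: repeat = epochs
    dataset_sink_mode = False
    per_print_times = 1
```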