mod comment

changzherui 2022-03-15 21:56:44 +08:00
parent a60ed1a957
commit cebfbdd723
8 changed files with 12 additions and 9 deletions


@@ -12,7 +12,7 @@ mindspore.export
**Parameters:**
- **net** (Cell) – MindSpore network structure.
- - **inputs** (Tensor) – Inputs of the network. If the network has multiple inputs, combine the tensors into a tuple.
+ - **inputs** (Union[Tensor, Dataset]) – Inputs of the network. If the network has multiple inputs, pass them in together. When the type passed in is `Dataset`, the data preprocessing behavior is saved along with the model; in that case the batch size of the dataset needs to be adjusted manually. Currently only the `image` column of the `Dataset` can be obtained.
- **file_name** (str) – File name of the exported model.
- **file_format** (str) – MindSpore currently supports exporting models in "AIR", "ONNX" and "MINDIR" formats.


@@ -10,7 +10,7 @@
**Parameters:**
- **per_print_times** (int) - Number of steps between each loss print. Default: 1.
- **has_trained_epoch** (int) - Number of epochs already trained. If this is set, LossMonitor will monitor the loss values of the epochs after this value. Default: 0.
**Raises:**
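The interaction of these two parameters can be sketched in plain Python. This is a hypothetical `SimpleLossMonitor`, not the MindSpore implementation, and it assumes "epochs after this value" means epochs numbered at or above `has_trained_epoch`:

```python
class SimpleLossMonitor:
    """Hypothetical stand-in for LossMonitor: records the loss every
    `per_print_times` steps, skipping epochs before `has_trained_epoch`."""

    def __init__(self, per_print_times=1, has_trained_epoch=0):
        self.per_print_times = per_print_times
        self.has_trained_epoch = has_trained_epoch
        self.printed = []  # (epoch, step, loss) triples actually reported

    def on_step_end(self, epoch, step, loss):
        # Epochs below the already-trained count are not monitored.
        if epoch < self.has_trained_epoch:
            return
        # Report once every `per_print_times` steps.
        if step % self.per_print_times == 0:
            self.printed.append((epoch, step, loss))


monitor = SimpleLossMonitor(per_print_times=2, has_trained_epoch=1)
for epoch in range(3):
    for step in range(1, 5):
        monitor.on_step_end(epoch, step, loss=0.5)
# Epoch 0 is skipped; in epochs 1 and 2 only steps 2 and 4 are recorded.
```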


@@ -6,6 +6,7 @@
.. note::
In the distributed training scenario, please specify a different directory for each training process to save the checkpoint file; otherwise, the training may fail.
+ If this callback is used in the `model` function, the parameters of the optimizer will be saved to the checkpoint file by default.
**Parameters:**


@@ -24,7 +24,7 @@
#include "debug/env_config_parser.h"
#include "mindspore/core/utils/log_adapter.h"
- const int maxNameLength = 32;
+ const int maxNameLength = 64;
namespace mindspore {
class BaseRecorder {
public:


@@ -344,6 +344,8 @@ class ModelCheckpoint(Callback):
Note:
In the distributed training scenario, please specify different directories for each training process
to save the checkpoint file. Otherwise, the training may fail.
+ If this callback is used in the `model` function, the parameters of the optimizer will be saved to
+ the checkpoint file by default.
Args:
prefix (str): The prefix name of checkpoint files. Default: "CKP".
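The per-process directory scheme that the note asks for can be sketched as below. The helper is hypothetical (in a real distributed job the rank id would come from the communication library's rank query, not a hard-coded loop):

```python
import os
import tempfile


def checkpoint_dir_for_rank(base_dir, rank_id):
    # One subdirectory per training process, so concurrent processes
    # never write checkpoint files into the same directory.
    path = os.path.join(base_dir, "rank_{}".format(rank_id))
    os.makedirs(path, exist_ok=True)
    return path


base = tempfile.mkdtemp()
# Four simulated training processes, four distinct directories.
dirs = [checkpoint_dir_for_rank(base, r) for r in range(4)]
```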


@@ -36,7 +36,7 @@ class LambdaCallback(Callback):
Examples:
>>> import numpy as np
>>> import mindspore.dataset as ds
- >>> from mindspore.train.callback import History
+ >>> from mindspore.train.callback import LambdaCallback
>>> from mindspore import Model, nn
>>> data = {"x": np.float32(np.random.rand(64, 10)), "y": np.random.randint(0, 5, (64,))}
>>> train_dataset = ds.NumpySlicesDataset(data=data).batch(32)
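The idea behind `LambdaCallback`, wrapping plain functions as training hooks, can be sketched in plain Python. `SimpleLambdaCallback` is a hypothetical stand-in, not the MindSpore class:

```python
class SimpleLambdaCallback:
    """Hypothetical stand-in: each hook is an optional function that is
    called at the matching point of the training loop."""

    def __init__(self, on_epoch_begin=None, on_epoch_end=None):
        self.on_epoch_begin = on_epoch_begin or (lambda epoch: None)
        self.on_epoch_end = on_epoch_end or (lambda epoch: None)


log = []
cb = SimpleLambdaCallback(
    on_epoch_begin=lambda epoch: log.append("begin {}".format(epoch)),
    on_epoch_end=lambda epoch: log.append("end {}".format(epoch)),
)

# A toy two-epoch loop invoking the hooks in order.
for epoch in range(2):
    cb.on_epoch_begin(epoch)
    cb.on_epoch_end(epoch)
```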


@@ -172,7 +172,7 @@ class Model:
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> # For details about how to build the dataset, please refer to the function `create_dataset` in tutorial
>>> # document on the official website:
- >>> # https://www.mindspore.cn/tutorials/zh-CN/master/quick_start.html
+ >>> # https://www.mindspore.cn/tutorials/zh-CN/master/beginner/quick_start.html
>>> dataset = create_custom_dataset()
>>> model.train(2, dataset)
"""
@@ -953,8 +953,8 @@ class Model:
if context.get_context("device_target") == "CPU" and dataset_sink_mode:
dataset_sink_mode = False
- logger.warning("CPU cannot support dataset sink mode currently."
-                "So the evaluating process will be performed with dataset non-sink mode.")
+ logger.info("CPU cannot support dataset sink mode currently."
+             "So the evaluating process will be performed with dataset non-sink mode.")
with _CallbackManager(callbacks) as list_callback:
if dataset_sink_mode:
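The guard above can be sketched as a standalone helper (hypothetical function, using the standard `logging` module in place of MindSpore's logger):

```python
import logging

logger = logging.getLogger("sketch")


def resolve_sink_mode(device_target, dataset_sink_mode):
    # On CPU, dataset sink mode is unsupported: fall back to non-sink
    # mode and say so at info level (the diff lowers it from warning).
    if device_target == "CPU" and dataset_sink_mode:
        logger.info("CPU cannot support dataset sink mode currently. "
                    "So the evaluating process will be performed with "
                    "dataset non-sink mode.")
        return False
    return dataset_sink_mode
```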


@@ -785,8 +785,8 @@ def export(net, *inputs, file_name, file_format='AIR', **kwargs):
Args:
net (Cell): MindSpore network.
- inputs (Union[Tensor, tuple(Tensor), Dataset]): While the input type is Tensor, it represents the inputs
-     of the `net`, if the network has multiple inputs, incoming tuple(Tensor). While its type is Dataset,
+ inputs (Union[Tensor, Dataset]): When the input type is Tensor, it represents the inputs
+     of the `net`; if the network has multiple inputs, pass them in together. When its type is Dataset,
it represents the preprocess behavior of the `net`; data preprocess operations will be serialized.
In the second situation, you should adjust the batch size of the dataset script manually, which will impact
the batch size of the `net` input. Only parsing the "image" column from the dataset is supported currently.
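The two accepted input forms can be sketched with a type dispatch in plain Python. The `Tensor`/`Dataset` classes and the `describe_export_inputs` helper below are minimal hypothetical stand-ins, not the MindSpore API:

```python
class Tensor:
    """Minimal stand-in for a tensor input."""


class Dataset:
    """Minimal stand-in for a dataset carrying preprocess operations."""


def describe_export_inputs(*inputs):
    # Mirrors the documented behavior: Tensor inputs are the network
    # inputs; a single Dataset input additionally carries the data
    # preprocessing to serialize (only the "image" column is parsed).
    if len(inputs) == 1 and isinstance(inputs[0], Dataset):
        return "serialize preprocess from Dataset ('image' column only)"
    if inputs and all(isinstance(i, Tensor) for i in inputs):
        return "export with {} tensor input(s)".format(len(inputs))
    raise TypeError("inputs must be Tensor(s) or a single Dataset")
```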