forked from mindspore-Ecosystem/mindspore
Input parameter calibration v1
parent 9a47dce7e5
commit 1d48f7387b
@@ -1,7 +1,7 @@
 mindspore.dataset.text.JiebaMode
 =================================
 
-.. py:class:: mindspore.dataset.text.JiebaMode(value, names=None, *, module=None, qualname=None, type=None, start=1)
+.. py:class:: mindspore.dataset.text.JiebaMode
 
     Enumeration values for :class:`JiebaTokenizer`.

@@ -1,7 +1,7 @@
 mindspore.dataset.text.NormalizeForm
 =====================================
 
-.. py:class:: mindspore.dataset.text.NormalizeForm(value, names=None, *, module=None, qualname=None, type=None, start=1)
+.. py:class:: mindspore.dataset.text.NormalizeForm
 
     Enumeration values for :class:`NormalizeUTF8`.

@@ -1,7 +1,7 @@
 mindspore.dataset.text.SPieceTokenizerLoadType
 ===============================================
 
-.. py:class:: mindspore.dataset.text.SPieceTokenizerLoadType(value, names=None, *, module=None, qualname=None, type=None, start=1)
+.. py:class:: mindspore.dataset.text.SPieceTokenizerLoadType
 
     Enumeration values for the load type of :class:`SentencePieceTokenizer`.

@@ -1,7 +1,7 @@
 mindspore.dataset.text.SPieceTokenizerOutType
 ==============================================
 
-.. py:class:: mindspore.dataset.text.SPieceTokenizerOutType(value, names=None, *, module=None, qualname=None, type=None, start=1)
+.. py:class:: mindspore.dataset.text.SPieceTokenizerOutType
 
     Enumeration values for the output type of :class:`SentencePieceTokenizer`.

@@ -1,7 +1,7 @@
 mindspore.dataset.text.SentencePieceModel
 ==========================================
 
-.. py:class:: mindspore.dataset.text.SentencePieceModel(value, names=None, *, module=None, qualname=None, type=None, start=1)
+.. py:class:: mindspore.dataset.text.SentencePieceModel
 
     Enumeration class of the `SentencePiece` tokenization methods.

@@ -1,7 +1,7 @@
 mindspore.dataset.text.SentencePieceVocab
 ==========================================
 
-.. py:class:: mindspore.dataset.text.SentencePieceVocab(cde.SentencePieceVocab)
+.. py:class:: mindspore.dataset.text.SentencePieceVocab
 
     SentencePiece object that is used for tokenization.

@@ -28,7 +28,7 @@ mindspore.dataset.text.transforms.JiebaTokenizer
         - **TypeError** - Parameters `hmm_path` and `mp_path` are not of type string.
         - **TypeError** - Parameter `with_offsets` is not of type bool.
 
-    .. py:method:: add_word(self, word, freq=None)
+    .. py:method:: add_word(word, freq=None)
 
         Add a user-defined word to the dictionary of JiebaTokenizer.

@@ -37,7 +37,7 @@ mindspore.dataset.text.transforms.JiebaTokenizer
             - **word** (str) - The word to be added to the JiebaTokenizer dictionary. Note that words added through this interface are not written to the local model file.
             - **freq** (int, optional) - The frequency of the word to be added. The higher the frequency, the greater the chance that the word will be segmented out. Default: None, use the default frequency.
 
-    .. py:method:: add_dict(self, user_dict)
+    .. py:method:: add_dict(user_dict)
 
         Add user-defined words to the dictionary of JiebaTokenizer.

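
As a hedged illustration of the two methods above (not part of the diff): a minimal sketch of adding custom words before tokenizing a text dataset. The dictionary paths and the corpus file are placeholders, and the input column name "text" is the one produced by TextFileDataset.

    import mindspore.dataset as ds
    import mindspore.dataset.text as text
    from mindspore.dataset.text import JiebaMode

    # Placeholder paths: JiebaTokenizer needs the jieba HMM and MP dictionary files.
    HMM_FILE = "/path/to/hmm_model.utf8"
    MP_FILE = "/path/to/jieba.dict.utf8"

    tokenizer = text.JiebaTokenizer(HMM_FILE, MP_FILE, mode=JiebaMode.MP)
    tokenizer.add_word("MindSpore")                        # single word, default frequency
    tokenizer.add_dict({"深度学习": 10, "分布式训练": 5})   # several words with frequencies

    dataset = ds.TextFileDataset("corpus.txt")             # placeholder corpus
    dataset = dataset.map(operations=tokenizer, input_columns=["text"])
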
@@ -1,7 +1,7 @@
 mindspore.dataset.vision.py_transforms.ToTensor
 ===============================================
 
-.. py:class:: mindspore.dataset.vision.py_transforms.ToTensor(output_type=numpy.float32)
+.. py:class:: mindspore.dataset.vision.py_transforms.ToTensor(output_type=np.float32)
 
     Convert the input PIL or numpy.ndarray image to a numpy.ndarray image of the specified data type. The pixel values change from the range [0, 255] to [0.0, 1.0], and the image shape changes from (H, W, C) to (C, H, W).

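
A minimal sketch of the conversion described above, applied eagerly to a random HWC uint8 image (the image data is synthetic):

    import numpy as np
    from mindspore.dataset.vision import py_transforms

    # Fake 32x32 RGB image in (H, W, C) layout with pixel values in [0, 255].
    img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

    to_tensor = py_transforms.ToTensor(output_type=np.float32)
    out = to_tensor(img)

    print(out.shape)  # (3, 32, 32): (H, W, C) -> (C, H, W)
    print(out.dtype)  # float32, values rescaled to [0.0, 1.0]
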
@@ -1,5 +1,5 @@
-.. py:class:: mindspore.mindrecord.FileWriter(file_name, shard_num=1)
+.. py:class:: mindspore.mindrecord.FileWriter(file_name, shard_num=1, overwrite=False)
 
     Class to convert user-defined data into a MindRecord format dataset.

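
A hedged sketch of writing a small MindRecord file with the new `overwrite` parameter; the schema, field names, and sample data below are made up for illustration:

    import numpy as np
    from mindspore.mindrecord import FileWriter

    # overwrite=True lets the writer replace an existing file of the same name
    # (behaviour assumed from the added parameter above).
    writer = FileWriter(file_name="demo.mindrecord", shard_num=1, overwrite=True)

    schema = {"label": {"type": "int32"}, "data": {"type": "bytes"}}
    writer.add_schema(schema, "demo_schema")

    samples = [{"label": i, "data": np.random.bytes(16)} for i in range(10)]
    writer.write_raw_data(samples)
    writer.commit()
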
@@ -162,7 +162,7 @@ Boost can automatically accelerate the network, e.g. by reducing BN, gradient freezing, gradient accumulation, and so on.
         - **ValueError** - The mode of Boost is not in the list ["auto", "manual", "enable_all", "disable_all"].
 
-    .. py:method:: network_auto_process_train()
+    .. py:method:: network_auto_process_train(network, optimizer)
 
         Train with the Boost algorithm.

@@ -171,7 +171,7 @@ Boost can automatically accelerate the network, e.g. by reducing BN, gradient freezing, gradient accumulation, and so on.
         - network (Cell) - The training network.
         - optimizer (Union[Cell]) - The optimizer used to update the weights.
 
-    .. py:method:: network_auto_process_eval()
+    .. py:method:: network_auto_process_eval(network)
 
         Run inference with the Boost algorithm.

@@ -1,7 +1,7 @@
 mindspore.Tensor
 ================
 
-.. py:class:: mindspore.Tensor(input_data=None, dtype=None, shape=None, init=None)
+.. py:class:: mindspore.Tensor(input_data=None, dtype=None, shape=None, init=None, internal=False)
 
     Tensor, a data structure for storing an n-dimensional array.

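
A small sketch of constructing tensors with the parameters documented above (the new `internal` flag is left at its default):

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor

    # From a NumPy array with an explicit dtype.
    x = Tensor(np.arange(6).reshape(2, 3), dtype=ms.float32)

    # From a Python scalar; the dtype is inferred.
    y = Tensor(1.5)

    print(x.shape, x.dtype)  # (2, 3) Float32
    print(y)                 # 1.5
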
@@ -1,7 +1,7 @@
 mindspore.nn.BatchNorm2d
 =========================
 
-.. py:class:: mindspore.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW')
+.. py:class:: mindspore.nn.BatchNorm2d(num_features, eps=1e-5, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW')
 
     Batch Normalization layer applied over a four-dimensional input.

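
A minimal sketch applying the layer above to a random NCHW input:

    import numpy as np
    from mindspore import Tensor, nn

    bn = nn.BatchNorm2d(num_features=3, eps=1e-5, momentum=0.9)
    x = Tensor(np.random.randn(2, 3, 4, 4).astype(np.float32))
    out = bn(x)
    print(out.shape)  # (2, 3, 4, 4)
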
@@ -1,7 +1,7 @@
 mindspore.nn.ConfusionMatrix
 ============================
 
-.. py:class:: mindspore.nn.ConfusionMatrix(num_classes, normalize='no_norm', threshold=0.5)
+.. py:class:: mindspore.nn.ConfusionMatrix(num_classes, normalize='NO_NORM', threshold=0.5)
 
     Computes the confusion matrix, which is commonly used to evaluate the performance of classification models, including binary and multi-class scenarios.

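
A hedged sketch of the usual clear/update/eval metric cycle with made-up predictions, assuming the standard mindspore.nn metric interface:

    import numpy as np
    from mindspore import Tensor, nn

    metric = nn.ConfusionMatrix(num_classes=2, threshold=0.5)
    metric.clear()

    y_pred = Tensor(np.array([1, 0, 1, 1]))  # predicted class indices
    y = Tensor(np.array([1, 0, 0, 1]))       # ground-truth labels
    metric.update(y_pred, y)

    print(metric.eval())  # 2x2 confusion matrix as a numpy array
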
@@ -1,7 +1,7 @@
 mindspore.nn.Dice
 ==================
 
-.. py:class:: mindspore.nn.Dice(smooth=1e-05)
+.. py:class:: mindspore.nn.Dice(smooth=1e-5)
 
     A set similarity metric.

@@ -1,7 +1,7 @@
 mindspore.nn.DiceLoss
 ======================
 
-.. py:class:: mindspore.nn.DiceLoss(smooth=1e-05)
+.. py:class:: mindspore.nn.DiceLoss(smooth=1e-5)
 
     The Dice coefficient is a set-similarity loss used to compute the similarity between two samples. The Dice coefficient is 1 when the segmentation result is best and 0 when it is worst.

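
A small sketch computing the loss on made-up logits and one-hot labels:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, nn

    loss_fn = nn.DiceLoss(smooth=1e-5)
    logits = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]), ms.float32)
    labels = Tensor(np.array([[0, 1], [1, 0], [0, 1]]), ms.float32)
    loss = loss_fn(logits, labels)
    print(loss)  # scalar Tensor; smaller values mean more overlap
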
@@ -1,7 +1,7 @@
 mindspore.nn.InstanceNorm2d
 ============================
 
-.. py:class:: mindspore.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros')
+.. py:class:: mindspore.nn.InstanceNorm2d(num_features, eps=1e-5, momentum=0.1, affine=True, gamma_init='ones', beta_init='zeros')
 
     Instance Normalization layer applied over a four-dimensional input.

@@ -1,7 +1,7 @@
 mindspore.nn.LSTMCell
 ======================
 
-.. py:class:: mindspore.nn.LSTMCell(input_size, hidden_size, has_bias=True)
+.. py:class:: mindspore.nn.LSTMCell(*args, **kwargs)
 
     Long Short-Term Memory (LSTM) network cell.

@@ -1,7 +1,7 @@
 mindspore.nn.LayerNorm
 =======================
 
-.. py:class:: mindspore.nn.LayerNorm(normalized_shape, begin_norm_axis=-1, begin_params_axis=-1, gamma_init='ones', beta_init='zeros', epsilon=1e-07)
+.. py:class:: mindspore.nn.LayerNorm(normalized_shape, begin_norm_axis=-1, begin_params_axis=-1, gamma_init='ones', beta_init='zeros', epsilon=1e-7)
 
     Applies Layer Normalization over a mini-batch of inputs.

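
A minimal sketch normalizing the last (feature) dimension of a batch of vectors:

    import numpy as np
    from mindspore import Tensor, nn

    # Normalize over the trailing dimension of size 8.
    ln = nn.LayerNorm(normalized_shape=(8,), begin_norm_axis=-1, begin_params_axis=-1, epsilon=1e-7)
    x = Tensor(np.random.randn(4, 8).astype(np.float32))
    out = ln(x)
    print(out.shape)  # (4, 8)
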
@@ -1,7 +1,7 @@
 mindspore.nn.OneHot
 ====================
 
-.. py:class:: mindspore.nn.OneHot(axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype=mindspore.float32)
+.. py:class:: mindspore.nn.OneHot(axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype=mstype.float32)
 
     Returns a one-hot Tensor.

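
A small sketch of the documented behaviour with three class indices and depth 4:

    import numpy as np
    import mindspore as ms
    from mindspore import Tensor, nn

    onehot = nn.OneHot(axis=-1, depth=4, on_value=1.0, off_value=0.0)
    indices = Tensor(np.array([0, 2, 3]), ms.int32)
    print(onehot(indices))
    # [[1. 0. 0. 0.]
    #  [0. 0. 1. 0.]
    #  [0. 0. 0. 1.]]
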
@@ -1,4 +1,4 @@
-.. py:class:: mindspore.train.callback.LossMonitor(per_print_times=1)
+.. py:class:: mindspore.train.callback.LossMonitor(per_print_times=1, has_trained_epoch=0)
 
     Monitors the loss during training.

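
A hedged end-to-end sketch attaching the callback to a tiny synthetic training run; `has_trained_epoch` follows the new signature above and may not exist in every release, in which case the argument should simply be dropped:

    import numpy as np
    import mindspore.dataset as ds
    from mindspore import Model, nn
    from mindspore.train.callback import LossMonitor

    # Tiny synthetic regression task so the example is self-contained.
    x = np.random.randn(32, 4).astype(np.float32)
    y = np.random.randn(32, 1).astype(np.float32)
    train_ds = ds.NumpySlicesDataset({"data": x, "label": y}, shuffle=False).batch(8)

    net = nn.Dense(4, 1)
    loss = nn.MSELoss()
    opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

    model = Model(net, loss_fn=loss, optimizer=opt)
    model.train(2, train_ds, callbacks=[LossMonitor(per_print_times=1, has_trained_epoch=0)])
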
@@ -1,4 +1,4 @@
-.. py:class:: mindspore.nn.transformer.MoEConfig(expert_num=1, capacity_factor=1.1, aux_loss_factor=0.05, num_experts_chosen=1, noisy_policy=None, noisy_epsilon=1e-2)
+.. py:class:: mindspore.nn.transformer.MoEConfig(expert_num=1, capacity_factor=1.1, aux_loss_factor=0.05, num_experts_chosen=1)
 
     Configuration for MoE (Mixture of Experts).

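
A hedged configuration sketch using only the parameters kept in the new signature; the values are illustrative, and the config is normally passed to the transformer layers through their `moe_config` argument (assumption):

    from mindspore.nn.transformer import MoEConfig

    moe_config = MoEConfig(expert_num=4, capacity_factor=1.1,
                           aux_loss_factor=0.05, num_experts_chosen=1)
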
@@ -1,4 +1,4 @@
-.. py:class:: mindspore.nn.transformer.TransformerOpParallelConfig(data_parallel=1, model_parallel=1, pipeline_stage=1, micro_batch_num=1, recompute=False, optimizer_shard=False, gradient_aggregation_group=4, vocab_emb_dp=True)
+.. py:class:: mindspore.nn.transformer.TransformerOpParallelConfig(data_parallel=1, model_parallel=1, expert_parallel=1, pipeline_stage=1, micro_batch_num=1, recompute=default_transformer_recompute_config, optimizer_shard=False, gradient_aggregation_group=4, vocab_emb_dp=True)
 
     TransformerOpParallelConfig for setting parallel configurations such as data parallelism and model parallelism.

@@ -1,5 +1,4 @@
-.. py:class:: mindspore.nn.transformer.TransformerRecomputeConfig(recompute=False, parallel_optimizer_comm_recompute=False,
-                                                                  mp_comm_recompute=True, recompute_slice_activation=False)
+.. py:class:: mindspore.nn.transformer.TransformerRecomputeConfig(recompute=False, parallel_optimizer_comm_recompute=False, mp_comm_recompute=True, recompute_slice_activation=False)
 
     The recomputation configuration interface for Transformer.

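
A hedged sketch tying the last two configs together, since the new TransformerOpParallelConfig signature takes a recompute configuration object as its `recompute` argument; the parallel degrees are illustrative only:

    from mindspore.nn.transformer import (TransformerOpParallelConfig,
                                          TransformerRecomputeConfig)

    recompute_config = TransformerRecomputeConfig(recompute=True,
                                                  parallel_optimizer_comm_recompute=False,
                                                  mp_comm_recompute=True,
                                                  recompute_slice_activation=False)

    # 8-way data parallel, 2-way model parallel; the remaining fields keep their defaults.
    parallel_config = TransformerOpParallelConfig(data_parallel=8,
                                                  model_parallel=2,
                                                  expert_parallel=1,
                                                  recompute=recompute_config,
                                                  vocab_emb_dp=True)
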