fix issues in release
parent f40e159dd1
commit fc3948d43c
@@ -8,8 +8,8 @@
#### PyNative
-- The default mode of MindSpore is switched to PyNative. If you want to manually set the mode, please refer to https://www.mindspore.cn/tutorials/zh-CN/master/advanced/compute_graph.html.
-- Support dynamic shape without padding, three networks are supported as demos: Transformer-GPU, YOLOV5-GPU, ASR-Ascend. Transformer-GPU and YOLOV5-GPU can be downloaded from https://gitee.com/mindspore/models/tree/dynamic_shape. Only the following operators are available on Ascend backend:Add、Assign、BatchMatMul、BiasAdd、BiasAddGrad、Cast、Conv2D、Conv2DBackpropFilter、Conv2DBackpropInput、CTCLoss、Div、Dropout、DropoutDoMask、Equal、ExpandDims、Gather、GetNext、LayerNorm、LayerNormGrad、LessEqual、Load、Log、LogicalAnd、LogicalNot、LogicalOr、LogSoftmax、LogSoftmaxGrad、MatMul、Maximum、Mul、Neg、NotEqual、NPUAllocFloatStatus、NPUClearFloatStatus、OneHot、RealDiv、Reciprocal、ReduceMean、ReduceSum、ReLU、ReluGrad、Reshape、Select、Softmax、StridedSlice、Sub、Tile、Transpose、UnsortedSegmentSum、ZerosLike。The remaining operators have not been fully verified, please use them as appropriate.
+- The default mode of MindSpore is switched to PyNative. If you want to manually set the mode, please refer to [Computational Graph](https://www.mindspore.cn/tutorials/en/r2.0.0-alpha/advanced/compute_graph.html).
+- Support dynamic shape without padding; three networks are provided as demos: Transformer-GPU, YOLOV5-GPU, and ASR-Ascend. Transformer-GPU and YOLOV5-GPU can be downloaded from [models](https://gitee.com/mindspore/models/tree/dynamic_shape). Only the following operators are available on the Ascend backend: Add, Assign, BatchMatMul, BiasAdd, BiasAddGrad, Cast, Conv2D, Conv2DBackpropFilter, Conv2DBackpropInput, CTCLoss, Div, Dropout, DropoutDoMask, Equal, ExpandDims, Gather, GetNext, LayerNorm, LayerNormGrad, LessEqual, Load, Log, LogicalAnd, LogicalNot, LogicalOr, LogSoftmax, LogSoftmaxGrad, MatMul, Maximum, Mul, Neg, NotEqual, NPUAllocFloatStatus, NPUClearFloatStatus, OneHot, RealDiv, Reciprocal, ReduceMean, ReduceSum, ReLU, ReluGrad, Reshape, Select, Softmax, StridedSlice, Sub, Tile, Transpose, UnsortedSegmentSum, ZerosLike. The remaining operators have not been fully verified; please use them as appropriate.
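Taken together, the two items above mean that PyNative is now the default, graph mode must be requested explicitly, and dynamic shapes are declared rather than padded. A minimal sketch, assuming the MindSpore 2.0 Python API (`mindspore.set_context` and `Cell.set_inputs`); the network and shapes are illustrative only:

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

# PyNative is now the default; request graph mode explicitly if needed.
ms.set_context(mode=ms.GRAPH_MODE)

net = nn.Dense(16, 8)

# Declare a dynamic batch dimension with None instead of padding inputs.
net.set_inputs(Tensor(shape=[None, 16], dtype=ms.float32))

# Any batch size now reuses the same compiled graph.
out = net(Tensor(np.random.randn(4, 16), ms.float32))
print(out.shape)  # (4, 8)
```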

#### DataSet
@@ -28,7 +28,7 @@
Scatter Operators: ScatterAdd, ScatterDiv, ScatterMax, ScatterMul, ScatterNdAdd, ScatterNdSub, ScatterNdUpdate, ScatterSub, TensorScatterAdd, TensorScatterDiv, TensorScatterMax, TensorScatterMul, TensorScatterUpdate.
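As a quick illustration of the scatter family listed above, a small sketch with `ops.TensorScatterAdd`; the shapes and values are illustrative, and the other scatter primitives follow the same indices/updates pattern:

```python
import numpy as np
import mindspore as ms
from mindspore import ops, Tensor

# Add `updates` into a copy of `x` at the positions given by `indices`.
x = Tensor(np.zeros((3, 3), np.float32))
indices = Tensor(np.array([[0, 0], [1, 1]], np.int32))
updates = Tensor(np.array([2.0, 5.0], np.float32))

scatter_add = ops.TensorScatterAdd()
print(scatter_add(x, indices, updates))
# [[2. 0. 0.]
#  [0. 5. 0.]
#  [0. 0. 0.]]
```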
-- Add new apis `transform_checkpoints` and `transform_checkpoint_by_rank` to transfer the distributed checkpoint files by strategy files. Please refer to https://www.mindspore.cn/tutorials/experts/en/master/parallel/resilience_train_and_predict.html。
+- Add new APIs `transform_checkpoints` and `transform_checkpoint_by_rank` to transform distributed checkpoint files according to the given strategy files. Please refer to [Distributed Resilience Training and Inference](https://www.mindspore.cn/tutorials/experts/en/r2.0.0-alpha/parallel/resilience_train_and_predict.html).
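A hedged sketch of how the checkpoint-transformation API might be invoked. The directory layout, checkpoint prefix, and keyword names below are assumptions based on the linked tutorial rather than a verified signature (the API change list later in these notes files it under `mindspore.parallel`); consult the API reference before use:

```python
import mindspore as ms

# Transform checkpoints saved under one parallel strategy so they can be
# loaded under another (all paths and the prefix are illustrative).
ms.transform_checkpoints(
    src_checkpoints_dir="./src_checkpoints",
    dst_checkpoints_dir="./dst_checkpoints",
    ckpt_prefix="checkpoint_",
    src_strategy_file="./src_strategy.ckpt",
    dst_strategy_file="./dst_strategy.ckpt",
)
```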
### API Change

@@ -8,8 +8,8 @@
#### PyNative
-- The default mode of MindSpore is switched to PyNative. To set the mode manually, refer to the documentation: https://www.mindspore.cn/tutorials/zh-CN/master/advanced/compute_graph.html
-- Refactored the dynamic shape execution scheme to improve backward graph construction performance and to support dynamic shape network programming without padding. The networks verified so far are Transformer-GPU, YOLOV5-GPU, and ASR-Ascend; Transformer-GPU and YOLOV5-GPU can be obtained from https://gitee.com/mindspore/models/tree/dynamic_shape. Limited by operator coverage, the Ascend backend only supports the following operators: Add, Assign, BatchMatMul, BiasAdd, BiasAddGrad, Cast, Conv2D, Conv2DBackpropFilter, Conv2DBackpropInput, CTCLoss, Div, Dropout, DropoutDoMask, Equal, ExpandDims, Gather, GetNext, LayerNorm, LayerNormGrad, LessEqual, Load, Log, LogicalAnd, LogicalNot, LogicalOr, LogSoftmax, LogSoftmaxGrad, MatMul, Maximum, Mul, Neg, NotEqual, NPUAllocFloatStatus, NPUClearFloatStatus, OneHot, RealDiv, Reciprocal, ReduceMean, ReduceSum, ReLU, ReluGrad, Reshape, Select, Softmax, StridedSlice, Sub, Tile, Transpose, UnsortedSegmentSum, ZerosLike. The remaining operators have not been fully verified; please use them as appropriate.
+- The default mode of MindSpore is switched to PyNative. To set the mode manually, refer to [Computational Graph](https://www.mindspore.cn/tutorials/zh-CN/r2.0.0-alpha/advanced/compute_graph.html).
+- Refactored the dynamic shape execution scheme to improve backward graph construction performance and to support dynamic shape network programming without padding. The networks verified so far are Transformer-GPU, YOLOV5-GPU, and ASR-Ascend; Transformer-GPU and YOLOV5-GPU can be obtained from [models](https://gitee.com/mindspore/models/tree/dynamic_shape). Limited by operator coverage, the Ascend backend only supports the following operators: Add, Assign, BatchMatMul, BiasAdd, BiasAddGrad, Cast, Conv2D, Conv2DBackpropFilter, Conv2DBackpropInput, CTCLoss, Div, Dropout, DropoutDoMask, Equal, ExpandDims, Gather, GetNext, LayerNorm, LayerNormGrad, LessEqual, Load, Log, LogicalAnd, LogicalNot, LogicalOr, LogSoftmax, LogSoftmaxGrad, MatMul, Maximum, Mul, Neg, NotEqual, NPUAllocFloatStatus, NPUClearFloatStatus, OneHot, RealDiv, Reciprocal, ReduceMean, ReduceSum, ReLU, ReluGrad, Reshape, Select, Softmax, StridedSlice, Sub, Tile, Transpose, UnsortedSegmentSum, ZerosLike. The remaining operators have not been fully verified; please use them as appropriate.

#### DataSet
@@ -26,7 +26,7 @@
Math operators: SquaredDifference, Erfinv, MaskedFill, SplitV, Gamma, KLDivLoss, LinSpace. Scatter operators: ScatterAdd, ScatterDiv, ScatterMax, ScatterMul, ScatterNdAdd, ScatterNdSub, ScatterNdUpdate, ScatterSub, TensorScatterAdd, TensorScatterDiv, TensorScatterMax, TensorScatterMul, TensorScatterUpdate.
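As an illustration of the math operators above, a small sketch with the `MaskedFill` primitive; the input values are illustrative, and `Erfinv`, `SplitV`, and the others are used in the same way as ordinary `ops` primitives:

```python
import numpy as np
import mindspore as ms
from mindspore import ops, Tensor

# Replace the masked positions of `x` with a fill value.
x = Tensor(np.array([1.0, 2.0, 3.0], np.float32))
mask = Tensor(np.array([True, False, True]))
value = Tensor(-1.0, ms.float32)

masked_fill = ops.MaskedFill()
print(masked_fill(x, mask, value))  # [-1.  2. -1.]
```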
-- Added the `transform_checkpoints` and `transform_checkpoint_by_rank` interfaces. Given the strategy files before and after the conversion, distributed weights can be transformed accordingly. For details, refer to: https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/resilience_train_and_predict.html
+- Added the `transform_checkpoints` and `transform_checkpoint_by_rank` interfaces. Given the strategy files before and after the conversion, distributed weights can be transformed accordingly. For details, refer to [Distributed Resilience Training and Inference](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/parallel/resilience_train_and_predict.html).
### API Change
@@ -91,8 +91,8 @@
- [STABLE] Add the `mindspore.ops.Trace` operator primitive.
- [STABLE] Add the `mindspore.ops.UpsampleNearest3D` operator primitive.
- [STABLE] Add the `mindspore.ops.UpsampleTrilinear3D` operator primitive.
-- [STABLE]Add the distributed weight conversion interface `mindspore.parallel.transform_checkpoints`.
-- [STABLE]Add the distributed weight conversion interface `mindspore.parallel.transform_checkpoint_by_rank`.
+- [STABLE] Add the distributed weight conversion interface `mindspore.parallel.transform_checkpoints`.
+- [STABLE] Add the distributed weight conversion interface `mindspore.parallel.transform_checkpoint_by_rank`.
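A minimal sketch exercising one of the new primitives above, `mindspore.ops.Trace`, which sums the main diagonal of a 2-D tensor; the input values are illustrative:

```python
import numpy as np
import mindspore as ms
from mindspore import ops, Tensor

x = Tensor(np.array([[1.0, 2.0], [3.0, 4.0]], np.float32))

trace = ops.Trace()
print(trace(x))  # 1.0 + 4.0 = 5.0
```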

#### Incompatible Changes