forked from mindspore-Ecosystem/mindspore
fix error words.
This commit is contained in:
parent
5debb583e2
commit
e7bc7798ea
57 RELEASE.md
@@ -10,44 +10,44 @@
- [BETA] Add CV models on Ascend: midas_V2, attgan, FairMOT, CenterNet_resnet101, SEResNext, YOLOV3-tiny, RetinaFace
- [STABLE] Add CV models on GPU: ssd_mobilenetv1_fpn, shufflenetv1, tinyDarkNet, CNN-CTC, unet++, DeepText, SqueezeNet
- [STABLE] Add NLP models on GPU: GRU, GNMT2, Bert-Squad
- [STABLE] Add recommendation models on GPU: NCF
- [BETA] Add CV models on GPU: FaceAttribute, FaceDetection, FaceRecognition, SENet
- [BETA] Add Audio models on GPU: DeepSpeech2
- [STABLE] `model_zoo` has been separated into an individual repository `models`

#### FrontEnd

- [STABLE] Support `while`, `break`, and `continue` statements in training networks in `GRAPH_MODE`.
- [BETA] Support exporting a MindIR file after model training on the cloud side and evaluating on the edge side by importing the MindIR file.
- [STABLE] Support the forward-mode auto-diff interface Jvp (Jacobian-Vector-Product).
- [STABLE] Support the backward-mode auto-diff interface Vjp (Vector-Jacobian-Product); see the sketch after this list.

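A minimal sketch of how the new auto-diff interfaces might be called. It assumes `Jvp` and `Vjp` are exposed under `mindspore.nn` and that each wrapper returns the network output together with the corresponding product; treat the import path, call signature, and return values as assumptions rather than documented behavior.

```python
import numpy as np
from mindspore import Tensor, nn


class Square(nn.Cell):
    """Element-wise square, so the Jacobian is diag(2 * x)."""
    def construct(self, x):
        return x * x


x = Tensor(np.array([1.0, 2.0, 3.0], np.float32))
v = Tensor(np.array([1.0, 1.0, 1.0], np.float32))  # tangent / cotangent vector

# Forward mode: push the tangent v through the network (assumed API).
out, jvp = nn.Jvp(Square())(x, v)   # expected: 2 * x * v

# Backward mode: pull the cotangent v back through the network (assumed API).
out, vjp = nn.Vjp(Square())(x, v)   # expected: 2 * x * v
```
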
#### Auto Parallel

- [STABLE] Support distributed pipeline inference.
- [STABLE] Add an implementation of sparse attention and its distributed operator.
- [STABLE] Add distributed operator implementations for Conv2d/Conv2dTranspose/Conv2dBackpropInput/Maxpool/Avgpool/Batchnorm/Gatherd.
- [STABLE] Support configuring the dataset strategy in distributed training and inference mode (see the sketch after this list).
- [STABLE] Add a high-level API for the Transformer module.

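As an illustration of the dataset-strategy item, the hedged sketch below assumes the feature is exposed through a `dataset_strategy` keyword of `set_auto_parallel_context`; the keyword name and the accepted values are assumptions, so confirm them against the auto-parallel documentation.

```python
from mindspore import context
from mindspore.context import ParallelMode

# Assumed interface: choose how the dataset is fed to the devices
# ("data_parallel" shards the batch across devices, "full_batch" replicates it).
context.set_auto_parallel_context(
    parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
    device_num=8,
    dataset_strategy="data_parallel",
)
```
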
#### Executor

- [STABLE] Support the AlltoAll operator.
- [STABLE] Optimize the CPU Adam operator, improving its performance by 50%.
- [BETA] Support the Adam offload feature, reducing the static memory usage of the Pangu large model by 50%.
- [STABLE] The MindSpore Ascend backend supports configuring the cache path for operator generation and loading.
- [STABLE] The MindSpore Ascend backend supports lazy build in PyNative mode, improving compilation performance by 10 times.
- [STABLE] A function or Cell decorated by ms_function supports gradient calculation in PyNative mode (see the sketch after this list).
- [STABLE] The outermost network supports parameters of non-tensor types in PyNative mode.

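A minimal sketch of taking a gradient through an `ms_function`-decorated function in PyNative mode. `ms_function` and `GradOperation` are existing MindSpore interfaces, but the toy function and its use here are only an illustration of the release note, not official sample code.

```python
import numpy as np
from mindspore import Tensor, context, ms_function, ops

context.set_context(mode=context.PYNATIVE_MODE)


@ms_function  # the decorated function is compiled into a graph
def square(x):
    return x * x


grad_fn = ops.GradOperation()(square)      # gradient of the compiled function
x = Tensor(np.array([1.0, 2.0, 3.0], np.float32))
print(grad_fn(x))                          # expected: 2 * x
```
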
#### DataSet

- [BETA] Add a new method to the Model class to support automatic data preprocessing in the Ascend 310 inference scenario.
- [STABLE] Add a new drawing tool to visualize detection/segmentation datasets.
- [STABLE] Add a new tensor operation named ConvertColor to support color space transforms of images (see the sketch after this list).
- [STABLE] Enhance the following tensor operations to handle multiple columns simultaneously: RandomCrop, RandomHorizontalFlip, RandomResize, RandomResizedCrop, RandomVerticalFlip.
- [STABLE] Support electromagnetic simulation dataset loading and data augmentation.
- [STABLE] Optimize the error logs of Dataset to make them more user-friendly.

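A hedged sketch of using ConvertColor in a dataset pipeline. The import paths, the `ConvertMode` enum location, and the dataset path are assumptions made for illustration, so check the dataset API reference for the exact interface.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.vision.utils import ConvertMode  # assumed location of the enum

# Placeholder path; assumed usage: convert decoded images between color spaces
# (here BGR to RGB) inside the data pipeline.
dataset = ds.ImageFolderDataset("/path/to/images", decode=True)
dataset = dataset.map(
    operations=[c_vision.ConvertColor(ConvertMode.COLOR_BGR2RGB)],
    input_columns=["image"],
)
```
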
#### Federated Learning

@@ -68,7 +68,6 @@
Configuring the recomputation of the communication operations generated by model parallel and optimizer parallel to save memory on the devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompute` to enable the recomputation of the communication operations, as sketched below.

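The two flags named above map naturally onto `Cell.recompute`. The sketch below assumes that is the entry point and that both flags are booleans; treat it as an illustrative configuration under those assumptions rather than the canonical usage.

```python
from mindspore import nn

dense = nn.Dense(1024, 1024)

# Recompute this cell's forward pass during backprop, and also recompute the
# communication operators introduced by model parallel and optimizer parallel
# (flag names taken from the note above; their semantics are assumed).
dense.recompute(mp_comm_recompute=True, parallel_optimizer_comm_recompute=True)
```
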
### Bug fixes

#### FrontEnd

@@ -77,9 +76,9 @@ devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompu
#### Executor

- Fix RunTask failure when parameter_broadcast is enabled in PyNative mode. ([!23255](https://gitee.com/mindspore/mindspore/pulls/23255))
- Fix an illegal memory access encountered in the dynamic shape net on GPU.
- Fix tune failure for DynamicRnn. ([!21081](https://gitee.com/mindspore/mindspore/pulls/21081))

#### Dataset

@@ -97,7 +96,7 @@ devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompu
2. Support dynamic filter Convolution.
3. Support serializing float32 weights into float16 weights to reduce the size of the model file.
4. Provide a unified runtime API for developers to reuse their code between the cloud side and the end side.
5. Developers can now configure built-in passes as custom passes.
6. Users can now specify the format and shape of model inputs while converting the model.
7. Support multi-device inference, including CPU, NPU, and GPU. Users can set devices in mindspore::Context.
8. Support mixed precision inference. Users can set the inference precision via the LoadConfig API.