# MindSpore 1.5.0
## MindSpore 1.5.0 Release Notes
### Major Features and Improvements
#### New Models
- [STABLE] Add CV model on Ascend: Fast-SCNN
- [BETA] Add CV models on Ascend: midas_V2, attgan, FairMOT, CenterNet_resnet101, SEResNext, YOLOV3-tiny, RetinaFace
- [STABLE] Add CV models on GPU: ssd_mobilenetv1_fpn, shufflenetv1, tinyDarkNet, CNN-CTC, unet++, DeepText, SqueezeNet
- [STABLE] Add NLP models on GPU: GRU, GNMT2, Bert-Squad
- [STABLE] Add recommendation models on GPU: NCF
- [BETA] Add CV models on GPU: FaceAttribute, FaceDetection, FaceRecognition, SENet
- [BETA] Add Audio models on GPU: DeepSpeech2
- [STABLE] `model_zoo` has been separated into an individual repository `models`.
#### FrontEnd
* [STABLE] Support `while`, `break`, and `continue` statements in training networks in `GRAPH_MODE`.
* [BETA] Support exporting a MindIR file after model training on the cloud side, and evaluating on the edge side by importing the MindIR file.
* [STABLE] Support the forward-mode auto-diff interface Jvp (Jacobian-vector product).
* [STABLE] Support the backward-mode auto-diff interface Vjp (vector-Jacobian product); see the sketch after this list.
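
A minimal sketch of the two interfaces, assuming the `nn.Jvp`/`nn.Vjp` wrappers take a network plus a tangent/cotangent vector and return the primal output together with the product (the toy network below is illustrative, not from the notes):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

class CubeNet(nn.Cell):
    """Toy network y = x**3, used only to illustrate the interfaces."""
    def construct(self, x):
        return x ** 3

x = Tensor(np.array([1.0, 2.0], np.float32))
v = Tensor(np.array([1.0, 1.0], np.float32))

# Forward mode: one pass returns the primal output and J(x) @ v.
out, jvp = nn.Jvp(CubeNet())(x, v)
# Backward mode: the primal output and v^T @ J(x).
out, vjp = nn.Vjp(CubeNet())(x, v)
```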
#### Auto Parallel
* [STABLE] Support distributed pipeline inference.
* [STABLE] Add an implementation of sparse attention and its distributed operator.
* [STABLE] Add distributed-operator implementations for Conv2d/Conv2dTranspose/Conv2dBackpropInput/Maxpool/Avgpool/Batchnorm/Gatherd.
* [STABLE] Support configuring the dataset strategy in distributed training and inference mode.
* [STABLE] Add a high-level API for the Transformer module (see the sketch after this list).
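
A minimal sketch of constructing the module, assuming a `Transformer` class under `mindspore.parallel.nn` with the constructor arguments shown; all sizes below are illustrative:

```python
from mindspore.parallel.nn import Transformer

# Build a small encoder-decoder Transformer; argument names follow the
# mindspore.parallel.nn documentation, and the sizes are placeholders.
model = Transformer(batch_size=32,
                    encoder_layers=1,
                    decoder_layers=1,
                    hidden_size=512,
                    ffn_hidden_size=2048,
                    src_seq_length=16,
                    tgt_seq_length=16)
```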
#### Executor
* [STABLE] Support AlltoAll operator.
* [STABLE] Optimize the CPU Adam operator, improving its performance by 50%.
* [BETA] Support the Adam offload feature, reducing the static memory usage of the PanGu large model by 50%.
* [STABLE] The MindSpore Ascend backend supports configuring the cache path for operator generation and loading.
* [STABLE] The MindSpore Ascend backend supports lazy build in PyNative mode, improving compilation performance by 10 times.
* [STABLE] Functions and Cells decorated with `ms_function` support gradient calculation in PyNative mode (see the sketch after this list).
* [STABLE] The outermost network supports parameters of non-tensor types in PyNative mode.
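
A minimal sketch of taking a gradient through an `ms_function`-compiled function in PyNative mode (the function itself is illustrative):

```python
import numpy as np
import mindspore.ops as ops
from mindspore import Tensor, context, ms_function

context.set_context(mode=context.PYNATIVE_MODE)

@ms_function
def square(x):
    # Compiled as a graph even though the surrounding code runs eagerly.
    return x * x

x = Tensor(np.array([3.0], np.float32))
grad_fn = ops.GradOperation()
print(grad_fn(square)(x))  # d(x*x)/dx at x=3.0, expected: [6.]
```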
#### DataSet
* [BETA] Add a new method to class `Model` to support automatic data preprocessing in the Ascend 310 inference scenario.
* [STABLE] Add a new drawing tool to visualize detection/segmentation datasets.
* [STABLE] Support a new tensor operation named `ConvertColor` for color-space conversion of images (see the sketch after this list).
* [STABLE] Enhance the following tensor operations to handle multiple columns simultaneously: RandomCrop, RandomHorizontalFlip, RandomResize, RandomResizedCrop, RandomVerticalFlip.
* [STABLE] Support electromagnetic simulation dataset loading and data augmentation.
* [STABLE] Optimize the error logs of Dataset to make them more user-friendly.
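
A minimal sketch of the `ConvertColor` operation, assuming it lives in `mindspore.dataset.vision.c_transforms` with a `ConvertMode` enum; the generator source is a stand-in for a real dataset:

```python
import numpy as np
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.vision.utils import ConvertMode

def gen():
    # Stand-in source yielding HWC uint8 BGR images (illustrative only).
    for _ in range(4):
        yield (np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8),)

dataset = ds.GeneratorDataset(gen, column_names=["image"])
# Convert each image from BGR to RGB on the fly.
dataset = dataset.map(operations=c_vision.ConvertColor(ConvertMode.COLOR_BGR2RGB),
                      input_columns=["image"])
```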
#### Federated Learning
#### Running Data Recorder
- [STABLE] RDR saves collected data files in directories named by rank ID during distributed training on Ascend, GPU, and CPU.
#### GraphKernel Fusion
### API Change
#### Backwards Incompatible Change
##### Python API
###### New Recomputation Configuration for AutoParallel and SemiAutoParallel Scenarios
Configure recomputation of the communication operations generated by model parallel and optimizer parallel to save memory on the devices. Users can pass `mp_comm_recompute` and `parallel_optimizer_comm_recompute` to enable recomputation of these communication operations.
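
A minimal sketch of enabling both flags on a cell, assuming `Cell.recompute` accepts them as keyword arguments; the block and layer sizes below are illustrative:

```python
import mindspore.nn as nn

class FeedForward(nn.Cell):
    """Toy block used only to illustrate the recompute flags."""
    def __init__(self):
        super(FeedForward, self).__init__()
        self.dense = nn.Dense(1024, 1024)
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.dense(x))

block = FeedForward()
# Recompute this block in the backward pass, and also recompute the
# communication operators inserted by model parallel and optimizer
# parallel instead of holding their outputs in device memory.
block.recompute(mp_comm_recompute=True,
                parallel_optimizer_comm_recompute=True)
```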
### Bug fixes
#### FrontEnd
- Fix the bug of too many subgraphs being generated when the network includes a `for` statement. ([!23669](https://gitee.com/mindspore/mindspore/pulls/23669))
#### Executor
* Fix RunTask failure when `parameter_broadcast` is enabled in PyNative mode. ([!23255](https://gitee.com/mindspore/mindspore/pulls/23255))
* Fix an illegal memory access in dynamic-shape networks on GPU.
* Fix tuning failure for DynamicRnn. ([!21081](https://gitee.com/mindspore/mindspore/pulls/21081))
#### Dataset
- Optimize thread monitoring to solve the problem of running multiple multiprocessing workers on Windows. ([!23232](https://gitee.com/mindspore/mindspore/pulls/23232))
- Fix bugs of Dataset tensor operations in lite mode. ([!21999](https://gitee.com/mindspore/mindspore/pulls/21999))
- Fix memory growth when using `create_dict_iterator` in a `for` loop. ([!22529](https://gitee.com/mindspore/mindspore/pulls/22529))
## MindSpore Lite
### Major Features and Improvements
#### Converter and runtime
1. Optimize TDNN-like streaming models by reusing the result of the last inference.
2. Support dynamic-filter convolution.
3. Support serializing float32 weights into float16 weights to reduce the model file size.
4. Provide a unified runtime API so developers can reuse code between the cloud side and the device side.
5. Developers can now configure built-in passes as custom passes.
6. Users can now specify the format and shape of model inputs when converting a model.
7. Support inference on multiple devices, including CPU, NPU, and GPU. Users can set devices in `mindspore::Context`.
8. Support mixed-precision inference. Users can set the inference precision via the `LoadConfig` API.
9. Support custom operator registration and enable inference on third-party hardware.
#### ARM backend optimization
1. Support the NCHW data format for some operators, such as Conv and InstanceNorm. The performance of some models converted from ONNX and Caffe is greatly improved.
2. Fix memory-leak bugs on NPU.
#### Post quantization
1. Weight quantization supports mixed bit quantization.
2. Full quantization supports data pre-processing.
3. Move the quantization parameters from the command line to the configuration file.
#### Training on Device
1. Unify the Lite external API with MindSpore.
2. Implement a static memory allocator and a common workspace for TOD (training on device), saving 10-20% of memory.
3. Provide getgradients and setgradients interfaces, and get/set optimizer params interfaces, to support the MoE model.
4. Support user-specified output nodes when exporting an IOD model.
5. Support more text networks (tinybert, albert) and operators.
#### Codegen
1. Support kernel registration for custom ops; third-party hardware such as NNIE can be accessed through it.
### API Change
#### API Incompatible Change
##### C++ API
### Contributors
Thanks goes to these wonderful people:
Adel, AGroupofProbiotocs, anthonyaje, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, eric, Eric, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Islam Amin, Jesse, Jiabin Liu, jianghui58, jiangzhiwen, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, Jonathan, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, laiyongqiang, leonwanghui, Li, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, luoyang, lvchangquan, lvliang, lz, mahdi, Mahdi, maning202007, Margaret_wangrui, mayang, mengyuanli, Ming_blue, nhussain, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangnan39@huawei.com, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xulei2020, Xun, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghaibo5@huawei.com, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, Zhenglong Li, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Zirui, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
# MindSpore 1.3.0
## MindSpore 1.3.0 Release Notes