update r1.8 release note

This commit is contained in:
XianglongZeng 2022-07-15 17:35:19 +08:00
parent 600a7b14dc
commit 82ee9bc6a6
2 changed files with 198 additions and 0 deletions

@@ -2,6 +2,112 @@
[查看中文](./RELEASE_CN.md)
# MindSpore 1.8.0
## MindSpore 1.8.0 Release Notes
### Major Features and Improvements
#### OS
- [STABLE] Support Python 3.8 (Linux/Windows/Mac).
- [STABLE] Improved installation, with a more detailed install guide and automated shell scripts.
- [STABLE] Support multi-threaded operator computing on Windows.
- [STABLE] Compatible with GCC versions 7.3 through 9.x.
#### FrontEnd
- [BETA] Add the `mindspore.Model.fit` API, and add the `mindspore.callback.EarlyStopping` and `mindspore.callback.ReduceLROnPlateau` callbacks.
- [BETA] Support custom operators implemented with the MindSpore Hybrid DSL.
- [BETA] Support custom operators implemented in Julia.
- [STABLE] The export() interface supports exporting a model with a user-defined encryption algorithm, and the load() interface supports importing a model with a user-defined decryption algorithm.
- [BETA] [Unified_Dynamic_and_Static_Graphs] [Usability] Constant-type data (tuple/list/dict) can be set to mutable during graph compilation.
- [BETA] [Unified_Dynamic_and_Static_Graphs] JIT fallback is used to support control flow in graph mode constant scenarios.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python raise statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python assert statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The Python print statement is supported in the graph mode constant scenario.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The str.format() method is supported in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] The slice method can be used to assign a value to the list in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Creating and calling instances of custom classes is supported in graph mode.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Obtaining class attributes from Cell arrays and custom class arrays is supported.
- [STABLE] [Unified_Dynamic_and_Static_Graphs] Expanded the `isinstance` capability in graph mode.
- [STABLE] Renamed the custom operator decorator `ms_hybrid` to `ms_kernel`.
- [BETA] Support custom operators implemented with the Hybrid DSL on the CPU backend.
- [BETA] Support custom operator scheduling intrinsics on Ascend via the new AKG polyhedral scheduler.
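As a minimal illustration of the new early-stopping callback listed above, the sketch below reproduces the decision rule such a callback implements: stop training once the monitored metric has not improved for `patience` epochs. This is plain Python for illustration only, not the MindSpore implementation; the class name and defaults are hypothetical.

```python
class EarlyStoppingSketch:
    """Illustrative early-stopping rule (monitoring a loss: lower is better)."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.wait = 0

    def update(self, metric):
        """Record one epoch's metric; return True when training should stop."""
        if metric < self.best - self.min_delta:
            self.best = metric
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience
```

`ReduceLROnPlateau` follows the same "no improvement for `patience` epochs" pattern, but scales the learning rate down instead of stopping.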
#### PyNative
- [STABLE] Implement the AdamWeightDecay operator to replace the original combination of small operators.
- [STABLE] In PyNative mode, use `ms_function` to decorate the optimizer.
- [STABLE] Optimize the execution performance of the PyNative bprop graph and `ms_function`.
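The fused AdamWeightDecay operator computes, in a single kernel, the update that was previously assembled from many small operators (mul/add/sqrt/div). The sketch below shows one common AdamW formulation for a scalar parameter; the exact MindSpore kernel may differ in details such as bias correction.

```python
import math

def adam_weight_decay_step(param, grad, m, v, lr=1e-3,
                           beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.01):
    """One AdamWeightDecay update for a scalar parameter (illustrative sketch).

    Returns the updated (param, m, v) triple.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    update = m / (math.sqrt(v) + eps) + weight_decay * param
    return param - lr * update, m, v
```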
#### Auto Parallel
- [STABLE] Support the AllToAll operator in the KernelByKernel execution mode.
- [STABLE] Support launching graph mode with MPI.
- [STABLE] The initialization of model weights can be configured by seed. If you do not set a random seed via `mindspore.set_seed`, the seed used to initialize each parameter is determined by its current shard index. If a random seed is configured, weights with the same shape and the same sharding strategy are initialized identically.
- [STABLE] HCCL hides internal full-mesh and non-full-mesh connections, allowing both fully-connected AllToAllv and hierarchical AllToAllv within one training session.
- [BETA] CPU optimizer fusion: multiple optimizer operators are fused by data type through cross-parameter fusion, improving performance. This has been verified on the CPU AdamWeightDecay optimizer. You can enable this feature with the `flatten_weights` method of the network Cell class.
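The seeding rule for parallel weight initialization described above can be sketched as follows. This is an illustration of the rule, not MindSpore code; the function name and the uniform initializer are hypothetical.

```python
import random

def init_weight(shape, shard_index, global_seed=None):
    """Sketch of the parallel seeding rule:

    - no global seed set: each shard seeds its RNG with its shard index,
      so different shards get different (but reproducible) weights;
    - global seed set: shards with the same shape and sharding strategy
      derive the same seed, so initialization is identical.
    """
    seed = shard_index if global_seed is None else global_seed
    rng = random.Random(seed)
    n = 1
    for d in shape:
        n *= d
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]
```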
#### Executor
- [STABLE] Provide southbound APIs for integrating new chips.
- [STABLE] Multi-actor fusion execution to optimize runtime execution performance.
- [STABLE] Eliminate the execution of NopOp operators (e.g. Reshape).
- [STABLE] Switched the embedding cache architecture to the unified distributed runtime.
- [STABLE] Switched Parameter Server training to the unified distributed runtime.
- [STABLE] Support Parameter Server mode training on CPU.
#### DataSet
- [STABLE] When the map operation is used on a dataset object with `num_parallel_workers` > 1 and `python_multiprocessing=True`, the multiprocess mechanism is optimized so that data channels and child processes are mapped one to one, avoiding excessive file-handle usage; the `close_pool` interface is also removed.
- [STABLE] Support a batch of Vision, Text and Audio data augmentation operations.
- [STABLE] Fix a bug where the flat_map method of the Dataset class does not flatten the result.
- [STABLE] Unified the import paths of dataset augmentation APIs to provide an easier way to use them; refer to the [latest API usage](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.vision.html).
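The flat_map fix listed above restores the expected semantics: apply a function to each element and flatten the resulting iterables into a single stream. A plain-Python illustration of that behavior (not the Dataset implementation):

```python
def flat_map(elements, func):
    """Apply `func` to each element and flatten the results into one list.

    Illustrates the semantics the fixed Dataset.flat_map method follows:
    func returns an iterable per element, and the iterables are concatenated
    rather than nested.
    """
    out = []
    for item in elements:
        out.extend(func(item))
    return out
```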
#### GraphKernel Fusion
#### Federated Learning
#### Debug
### API Change
#### Backwards Incompatible Change
##### Python API
- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead as recommended in [mindspore doc](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.html).
- Add `mindspore.ms_class` interface, as class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
- Deprecate `mindspore.SparseTensor` and use `mindspore.COOTensor` instead. ([!28505]())
- Add Tensor init arg `internal` for internal use.
- The DVPP simulation algorithm is no longer supported; removed the `mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg` and `mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg` interfaces.
- MindSpore's QAT feature is being refactored, and the corresponding interfaces under the `mindspore.compression` package have been removed ([!31364]()). A new QAT feature based on MindSpore Rewrite will be provided in r1.8; it is currently in a demo state ([!30974]()).
- Add `on_train_epoch_end` method in LossMonitor to print metric information when LossMonitor is used with `mindspore.Model.fit`.
- Add "train" or "eval" mode to the printed content of TimeMonitor.
- The `filter_prefix` argument of the `mindspore.load_checkpoint` interface no longer supports the empty string (""), and the matching rule is changed from strong matching to fuzzy matching.
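The `filter_prefix` change above can be illustrated with a small sketch. The exact matching rule is defined by `mindspore.load_checkpoint`; here "fuzzy" is assumed to mean substring matching, which is an assumption for illustration only, as is the function name.

```python
def filter_params(param_names, filter_prefix):
    """Illustrative sketch of filtering parameters out of a checkpoint load.

    Names matching `filter_prefix` are excluded. Fuzzy matching is assumed
    to mean substring matching here (the precise rule is defined by the
    mindspore.load_checkpoint interface). Empty-string patterns are
    rejected, mirroring the 1.8 change.
    """
    if filter_prefix == "":
        raise ValueError("filter_prefix no longer accepts an empty string")
    return [n for n in param_names if filter_prefix not in n]
```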
## MindSpore Lite
### Major Features and Improvements
#### API
- [STABLE] Added C++ and Python APIs for model conversion.
- [STABLE] Added Python APIs for model inference.
#### Post Training Quantization
- [STABLE] Support per-layer quantization, with built-in CLE (cross-layer equalization) to optimize per-layer quantization accuracy.
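Per-layer (per-tensor) quantization shares one scale across a whole layer, computed from the layer's largest absolute weight. A minimal symmetric int8 sketch of the idea; MindSpore Lite's quantizer (and its built-in CLE cross-layer equalization) is more involved than this.

```python
def quantize_per_layer(weights):
    """Symmetric int8 per-layer quantization sketch.

    One scale for the whole layer: scale = max|w| / 127. Returns the
    quantized integers, the scale, and the dequantized approximation.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    dequant = [v * scale for v in q]
    return q, scale, dequant
```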
### Contributors
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, 
shu-kun-zhang.
Contributions of any kind are welcome!
## MindSpore 1.7.0 Release Notes
### Major Features and Improvements


@@ -2,6 +2,98 @@
[View English](./RELEASE.md)
# MindSpore 1.8.0
## MindSpore 1.8.0 Release Notes
### Major Features and Improvements
#### FrontEnd
- [BETA] Add the `mindspore.Model.fit` API, and add two callbacks: `mindspore.callback.EarlyStopping` and `mindspore.callback.ReduceLROnPlateau`.
- [BETA] Support custom operators implemented in Julia.
- [STABLE] The export() interface supports exporting a model with a user-defined encryption algorithm, and the load() interface supports importing a model with a user-defined decryption algorithm.
- [BETA] [Unified Dynamic and Static Graphs] [Usability] Constant-type data can be set to mutable during graph compilation (tuple/list/dict supported in version 1.8).
- [BETA] [Unified Dynamic and Static Graphs] JIT fallback supports control flow in constant scenarios.
- [STABLE] [Unified Dynamic and Static Graphs] Support the Python raise statement in graph mode constant scenarios.
- [STABLE] [Unified Dynamic and Static Graphs] Support the Python assert statement in graph mode constant scenarios.
- [STABLE] [Unified Dynamic and Static Graphs] Support the Python print statement in graph mode constant scenarios.
- [STABLE] [Unified Dynamic and Static Graphs] Support the str.format() method in graph mode.
- [STABLE] [Unified Dynamic and Static Graphs] Support assigning values to lists with the slice method in graph mode.
- [STABLE] [Unified Dynamic and Static Graphs] Support creating and calling instances of custom classes in graph mode.
- [STABLE] [Unified Dynamic and Static Graphs] Support obtaining class attributes from Cell arrays and custom class arrays.
- [STABLE] [Unified Dynamic and Static Graphs] Expanded the scenarios supported by isinstance in graph mode.
- [STABLE] Renamed the custom operator decorator 'ms_hybrid' to 'ms_kernel'.
- [BETA] The custom operator Hybrid DSL supports the CPU backend.
- [BETA] Add custom scheduling primitive syntax for custom operators on the Ascend backend.
#### PyNative
- [STABLE] Implement the AdamWeightDecay operator to replace the original combination of small operators.
- [STABLE] In PyNative mode, execute the optimizer with a combination of dynamic and static graphs.
- [STABLE] Optimize the execution performance of the PyNative backward graph and ms_function.
#### Auto Parallel
- [STABLE] Support the AllToAll single-operator mode: AllToAll operator calls are supported in the KernelByKernel execution mode.
- [STABLE] Whole-graph sinking supports MPI launch: in whole-graph sinking mode, training can be launched via MPI.
- [STABLE] Provide a parallel interface to configure the seed for model weight initialization. If the user does not set a random seed via mindspore.set_seed, the random seed used to initialize each parameter is determined by its current shard index. If a random seed is configured, weights with the same shape and the same sharding strategy are initialized identically.
- [STABLE] HCCL hides internal full-mesh/non-full-mesh connections, allowing both fully-connected AllToAllv and hierarchical AllToAllv within one training session.
- [BETA] CPU optimizer fusion: multiple optimizer operators are fused by data type through cross-parameter fusion, improving performance. This has been verified on the CPU AdamWeightDecay optimizer. Users can enable this feature with the flatten_weights method of the network Cell class.
#### Executor
- [STABLE] Open southbound APIs for integrating new chips.
- [STABLE] Multi-actor fusion execution to improve runtime execution performance.
- [STABLE] Eliminate the execution of NopOp operators (e.g. Reshape).
- [STABLE] Switched the Embedding Cache architecture to the unified distributed runtime.
- [STABLE] Switched Parameter Server training to the unified distributed runtime.
- [STABLE] Support Parameter Server mode training on CPU.
#### DataSet
- [STABLE] When the map operation is used on a dataset object with num_parallel_workers > 1 and python_multiprocessing=True, the multiprocess mechanism is optimized so that data channels and child processes are mapped one to one, avoiding excessive file-handle usage; the close_pool interface is also removed.
- [STABLE] Add a batch of Vision, Text and Audio data augmentation operations.
- [STABLE] Fix a bug where the flat_map method of the Dataset class did not flatten the result.
- [STABLE] Unified the import paths of dataset augmentation APIs to provide an easier way to use them; refer to the [latest API usage](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.vision.html).
### API Change
#### Backwards Incompatible Change
##### Python API
- Changed the gradient return value type of the hook corresponding to register_backward_hook: gradient return values are now uniformly tuples. ([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`, because its import path is too deep and overly dependent on the Python directory structure. Use the officially recommended `import mindspore.dataset as ds` instead; see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.html) for details.
- Add the `mindspore.ms_class` interface as a class decorator for user-defined classes, so that MindSpore can identify user-defined classes and access their attributes and methods. ([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
- Deprecate the `mindspore.SparseTensor` interface; the new interface is `mindspore.COOTensor`. ([!28505]())
- Add a Tensor init argument `internal` for internal framework use.
- The DVPP simulation algorithm is no longer supported; removed the `mindspore.dataset.vision.c_transforms.SoftDvppDecodeRandomCropResizeJpeg` and `mindspore.dataset.vision.c_transforms.SoftDvppDecodeResizeJpeg` interfaces.
- Add the `on_train_epoch_end` method to LossMonitor to print epoch-level metric information when it is used with `mindspore.Model.fit`.
- Changed the printed content of TimeMonitor: "train" or "eval" is added to distinguish the training and inference phases.
- The `filter_prefix` argument of the load_checkpoint interface no longer supports the empty string (""), and the matching rule is changed from strong matching to fuzzy matching.
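The register_backward_hook change listed above unifies hook gradient return values into tuples. A plain-Python sketch of such a normalization (illustrative only, not the MindSpore implementation; the function name is hypothetical):

```python
def normalize_hook_output(grads):
    """Normalize a backward hook's gradient return value to a tuple.

    A single gradient becomes a 1-tuple, a list becomes a tuple, and
    tuples pass through unchanged, so downstream code can rely on a
    uniform tuple type.
    """
    if isinstance(grads, tuple):
        return grads
    if isinstance(grads, list):
        return tuple(grads)
    return (grads,)
```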
## MindSpore Lite
### Major Features and Improvements
#### API
- [STABLE] Added C++ and Python APIs for model conversion.
- [STABLE] Added Python APIs for model inference.
#### Post Training Quantization
- [STABLE] Support per-layer quantization, with built-in CLE to optimize per-layer quantization accuracy.
### Contributors
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, , Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking, 
shu-kun-zhang.
Contributions of any kind are welcome!
## MindSpore 1.7.0 Release Notes
### Major Features and Improvements