update releasenotes
commit 25363f5f21 (parent d981193d3e)

RELEASE.md
@@ -1,3 +1,94 @@
[View Chinese](./RELEASE_CN.md)
# MindSpore 1.7.0

## MindSpore 1.7.0 Release Notes

### Major Features and Improvements

#### OS

- [STABLE] Support Python 3.8 (Linux/Windows/Mac).
- [STABLE] Improve installation with a more detailed install guide and automated shell scripts.
- [STABLE] Support multi-threaded operator computing on Windows.
- [STABLE] Compatible with GCC from version 7.3 to 9.x.

#### FrontEnd
- [STABLE] Support dynamic weight decay for optimizers; that is, the weight decay value changes with the global step during training (see the first sketch after this list).
- [STABLE] Add four methods to create Tensors: `mindspore.numpy.rand()`, `mindspore.numpy.randn()`, `mindspore.numpy.randint()`, and `mindspore.ops.arange()` (see the second sketch after this list).
- [STABLE] Add `mindspore.callback.History` and `mindspore.callback.LambdaCallback` in Callback (see the third sketch after this list).
- [BETA] Support custom operators implemented by MindSpore Hybrid DSL.
- [BETA] Support custom operators implemented by Julia.
- [STABLE] Support accessing attributes and methods of user-defined classes through the `mindspore.ms_class` decorator (see the fourth sketch after this list).
- [STABLE] Support training when a network has side-effect operations and control flow statements at the same time.
- [STABLE] Support more complex control flow syntax, such as a `for` loop statement in the body of a `while` loop.
- [STABLE] The performance of networks with complex control flow statements is improved by decreasing the number of subgraphs.
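For the dynamic weight decay entry above, a minimal sketch assuming the optimizer accepts a `Cell` whose `construct` maps the global step to a decay value; the `ExponentialWeightDecay` schedule and its constants are illustrative, not part of the release:

```python
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

class ExponentialWeightDecay(nn.Cell):
    """Hypothetical schedule: the decay value shrinks as training progresses."""

    def __init__(self, weight_decay=0.01, decay_rate=0.5, decay_steps=10000.0):
        super().__init__()
        self.weight_decay = weight_decay
        self.decay_rate = ms.Tensor(decay_rate, ms.float32)
        self.decay_steps = decay_steps
        self.pow = ops.Pow()

    def construct(self, global_step):
        # Re-evaluated every optimizer step; `global_step` is supplied by the framework.
        p = global_step / self.decay_steps
        return self.weight_decay * self.pow(self.decay_rate, p)

net = nn.Dense(16, 8)
optimizer = nn.AdamWeightDecay(net.trainable_params(),
                               weight_decay=ExponentialWeightDecay())
```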
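For the Tensor-creation entry above, the four new methods in use, assuming they mirror their NumPy counterparts (`numpy.random.rand`/`randn`/`randint` and `numpy.arange`):

```python
import mindspore.numpy as mnp
import mindspore.ops as ops

x = mnp.rand(2, 3)               # uniform samples from [0, 1)
y = mnp.randn(2, 3)              # samples from the standard normal distribution
z = mnp.randint(0, 10, (2, 3))   # random integers from [0, 10)
r = ops.arange(0, 12, 2)         # evenly spaced values: [0, 2, 4, 6, 8, 10]
```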
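For the callback entry above, a sketch of how the two callbacks might be wired up. The `history` attribute and the `on_train_epoch_end` keyword are assumptions modeled on the Keras equivalents, and the import is shown from the `mindspore.train.callback` module; check both against the r1.7 API docs:

```python
from mindspore.train.callback import History, LambdaCallback

history_cb = History()  # assumed to record per-epoch losses/metrics into history_cb.history
log_cb = LambdaCallback(
    on_train_epoch_end=lambda run_context: print("epoch finished")
)
# Passed alongside other callbacks when launching training, e.g.:
# model.train(epoch=3, train_dataset=train_ds, callbacks=[history_cb, log_cb])
```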
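For the `ms_class` entry above, a small sketch: a plain Python class is decorated so its attributes and methods become usable inside a `Cell` (`Config` is a made-up example class):

```python
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

@ms.ms_class
class Config:
    def __init__(self):
        self.scale = 2.0

    def scaled(self, x):
        return x * self.scale

class Net(nn.Cell):
    def __init__(self, cfg):
        super().__init__()
        self.cfg = cfg

    def construct(self, x):
        # Attributes and methods of the decorated class are visible here.
        return self.cfg.scaled(x)

net = Net(Config())
out = net(Tensor(3.0, ms.float32))
```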
#### PyNative
- [STABLE] Add hook functions in PyNative mode, including `register_forward_pre_hook`, `register_forward_hook`, and `register_backward_hook` (see the sketch after this list).
- [STABLE] Optimize execution of PyNative mode: front-end and back-end execution now run in parallel.
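A sketch of registering a forward hook in PyNative mode; the hook signature shown (with a leading `cell_id` argument) and the removable handle are assumptions to verify against the r1.7 API docs:

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, context

context.set_context(mode=context.PYNATIVE_MODE)  # hooks take effect in PyNative mode

def forward_hook(cell_id, inputs, output):
    # Inspect (or replace) the activation produced by the cell.
    print("forward pass through:", cell_id)

net = nn.Dense(4, 4)
handle = net.register_forward_hook(forward_hook)
net(Tensor(np.ones((2, 4), np.float32)))
handle.remove()  # detach the hook when it is no longer needed
```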
#### Auto Parallel
- [STABLE] Support Top-K routing, data parallel, and optimizer state parallel when MoE is enabled.
- [STABLE] Support AllGather/ReduceScatter communication operator fusion. Support AllReduce fusion by data volume size in data parallel mode.
- [STABLE] Support `ops.clip_by_global_norm` (see the sketch after this list).
- [STABLE] Support the AdaSum optimizer in parallel mode.
- [STABLE] Support automatic optimizer state parallel.
- [STABLE] Support configurable AlltoAll. Support automatically adding the VirtualDataset cell.
- [STABLE] Support automatically inferring trainable parameters in pipeline training.
- [STABLE] Support clusters where the device number is not a power of 2.
- [STABLE] Support sharding propagation in auto parallel mode.
- [STABLE] Support optimizer offload under the unified runtime.
- [STABLE] Support the Adafactor operator on CPU.
- [STABLE] Support sharding at the H/W axis for the Conv2d/Conv2DTranspose operators. Support operators such as ResizeBilinear, ROIAlign, CropAndResize, BoundingBoxEncode, IOU, and RandomChoiceWithMask.
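Example usage of `ops.clip_by_global_norm`, which rescales a tuple of gradients so that their global norm does not exceed `clip_norm`:

```python
import numpy as np
import mindspore.ops as ops
from mindspore import Tensor

grads = (Tensor(np.array([3.0, 4.0], np.float32)),
         Tensor(np.array([6.0, 8.0], np.float32)))
# The global norm here is sqrt(3^2 + 4^2 + 6^2 + 8^2); every gradient is
# scaled down proportionally so the clipped global norm equals 1.0.
clipped = ops.clip_by_global_norm(grads, clip_norm=1.0)
```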
#### Executor
- [BETA] [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/en/r1.7/parallel/train_gpu.html#%E5%AE%B9%E7%81%BE%E6%81%A2%E5%A4%8D): support automatic failure recovery under the data parallel training mode.
- [BETA] Support automatic search for the optimal number of CPU threads. The search takes 50 steps, after which performance becomes stable and optimal, so benchmark data should be taken after step 50.
#### DataSet
- [STABLE] Add dataset operation mappings between the TensorFlow.data module and the MindSpore.dataset module; see the [mapping list](https://www.mindspore.cn/docs/en/r1.7/note/api_mapping/tensorflow_api_mapping.html#tf-data).
- [STABLE] Optimize Python multiprocessing and make worker processes exit normally.
- [STABLE] Support [Dataset Autotune](https://www.mindspore.cn/tutorials/experts/en/r1.7/debug/dataset_autotune.html) for automatically tuning the speed of the dataset pipeline (see the first sketch after this list).
- [BETA] [Dataset Offload](https://www.mindspore.cn/docs/en/r1.7/design/dataset_offload.html) supports new operations: RandomColorAdjust, RandomSharpness, and TypeCast.
- When the `__getitem__`/`__next__` methods of GeneratorDataset return a single NumPy object, the corresponding output is a single data column (see the second sketch after this list).
- Specifying too many processes or threads for loading a dataset may cause `RuntimeError: can't start new thread`; use `ulimit -u 10240` to increase the number of threads/processes available to the current user.
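For the Dataset Autotune entry above, a minimal sketch assuming autotune is switched on through the global dataset config (`set_enable_autotune`) before the pipeline is built:

```python
import mindspore.dataset as ds

# Enable autotune before constructing the dataset pipeline so the
# pipeline's worker/queue settings can be adjusted automatically.
ds.config.set_enable_autotune(True)
```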
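For the single-column entry above, an illustration: `__getitem__` returns one NumPy array rather than a tuple, so the dataset exposes exactly one data column:

```python
import numpy as np
import mindspore.dataset as ds

class SingleColumnData:
    def __init__(self):
        self.data = np.random.rand(5, 3).astype(np.float32)

    def __getitem__(self, index):
        return self.data[index]  # a single NumPy object -> a single column

    def __len__(self):
        return len(self.data)

dataset = ds.GeneratorDataset(SingleColumnData(), column_names=["data"])
for row in dataset.create_dict_iterator(output_numpy=True):
    print(row["data"].shape)  # (3,)
```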
### API Change
#### Backwards Incompatible Change
##### Python API
- When using the `register_backward_hook` interface and returning a new gradient from the backward hook function, the return format has changed to `tuple` ([!31876](https://gitee.com/mindspore/mindspore/pulls/31876)).
- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead, as recommended in the [MindSpore documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.html) (see the first sketch after this list).
- Add the `mindspore.ms_class` interface as a class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods ([!30855](https://gitee.com/mindspore/mindspore/pulls/30855)).
- Deprecate `mindspore.SparseTensor` and use `mindspore.COOTensor` instead ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505)) (see the second sketch after this list).
- Add the Tensor init arg `internal` for internal use.
- MindSpore's QAT feature is being refactored, and the corresponding interfaces under the `mindspore.compression` package have been removed ([!31364](https://gitee.com/mindspore/mindspore/pulls/31364)). We will re-provide MindSpore's QAT feature based on MindSpore Rewrite in version r1.8, which is currently in the demo state ([!30974](https://gitee.com/mindspore/mindspore/pulls/30974)).
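For the deprecated-import entry above, the recommended style side by side with the old one (the `get_num_parallel_workers` call is just a harmless smoke test):

```python
# Deprecated: reaches into the internal package layout.
# import mindspore.dataset.engine.datasets as ds

# Recommended: the public top-level package exposes the same functionality.
import mindspore.dataset as ds

print(ds.config.get_num_parallel_workers())
```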
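For the `COOTensor` entry above, a migration sketch; the constructor takes the nonzero indices, their values, and the dense shape:

```python
import numpy as np
from mindspore import Tensor, COOTensor

indices = Tensor(np.array([[0, 1], [1, 2]], dtype=np.int32))  # positions of nonzeros
values = Tensor(np.array([1.0, 2.0], dtype=np.float32))       # one value per index row
shape = (3, 4)                                                # dense shape

coo = COOTensor(indices, values, shape)  # replaces the deprecated mindspore.SparseTensor
```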
## MindSpore Lite
### Major Features and Improvements
#### Post-Training Quantization

- [STABLE] Post-training quantization supports dynamic quantization.
- [BETA] Support running post-training quantized models on NVIDIA GPUs.
### Contributors
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!
# MindSpore 1.6.0
## MindSpore 1.6.0 Release Notes
RELEASE_CN.md
@@ -0,0 +1,88 @@
[View English](./RELEASE.md)
# MindSpore 1.7.0
## MindSpore 1.7.0 Release Notes
### Major Features and Improvements
#### OS
- [STABLE] Support Python 3.8 (Linux/Windows/Mac).
- [STABLE] Simplify installation, with a detailed install guide and automated install scripts.
- [STABLE] The Windows version supports multi-threaded operators.
- [STABLE] Compatible with GCC versions 7.3 through 9.x.
#### FrontEnd
- [STABLE] Optimizers support dynamic weight decay, i.e., the weight decay value changes as the global step increases during training.
- [STABLE] Add four methods to create Tensors: `mindspore.numpy.rand()`, `mindspore.numpy.randn()`, `mindspore.numpy.randint()`, and `mindspore.ops.arange()`.
- [STABLE] Add two callback methods: `mindspore.callback.History` and `mindspore.callback.LambdaCallback`.
- [BETA] Custom operators support implementation in Julia.
- [STABLE] Support accessing attributes and methods of user-defined classes through the `mindspore.ms_class` class decorator.
- [STABLE] Support training networks that contain both side-effect operators and control flow statements.
- [STABLE] Support more complex control flow syntax, such as a `for` statement inside the body of a `while` loop.
- [STABLE] Improve the performance of networks with complex control flow syntax by reducing the number of subgraphs.
#### PyNative
- [STABLE] Support hook functions in PyNative mode, including the forward hook interfaces `register_forward_pre_hook` and `register_forward_hook`, and the backward hook interface `register_backward_hook`.
- [STABLE] Optimize PyNative mode execution performance by running the Python front end and the C++ back end in parallel.
#### Auto Parallel
- [STABLE] Support Top-K routing, data parallelism, and optimizer sharding in MoE scenarios.
- [STABLE] Support AllGather/ReduceScatter communication operator fusion; support AllReduce fusion by data volume size in DATA_PARALLEL mode.
- [STABLE] Support `ops.clip_by_global_norm` in parallel mode.
- [STABLE] Support the AdaSum optimizer in parallel mode.
- [STABLE] Support automatic optimizer sharding.
- [STABLE] Support configurable AlltoAll. Support automatically inserting VirtualDatasetCell.
- [STABLE] Support automatically inferring trainable parameters in pipeline parallel training.
- [STABLE] Support clusters whose device number is not a power of 2.
- [STABLE] Support sharding strategy propagation in auto parallel mode.
- [STABLE] Support heterogeneous training under the unified runtime.
- [STABLE] Support the Adafactor operator on CPU.
- [STABLE] Support sharding at the H/W axes for the Conv2d/Conv2DTranspose operators. Support distributed operators such as ResizeBilinear, ROIAlign, CropAndResize, BoundingBoxEncode, IOU, and RandomChoiceWithMask.
#### Executor
- [BETA] [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/parallel/train_gpu.html#%E5%AE%B9%E7%81%BE%E6%81%A2%E5%A4%8D): support failure recovery for multi-device data parallel training.
- [BETA] Support searching for the optimal number of CPU threads for execution. The search takes 50 steps, after which overall performance reaches a stable state, so performance measurements should use data from after step 50.
#### DataSet
- [STABLE] Add a difference document for data processing APIs that compares some operators of TensorFlow.data and MindSpore.dataset; see the [comparison document](https://www.mindspore.cn/docs/zh-CN/r1.7/note/api_mapping/tensorflow_api_mapping.html#tf-data).
- [STABLE] Optimize the Python multiprocessing logic to ensure normal exit in different exception scenarios.
- [STABLE] Support [Dataset Autotune](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/debug/dataset_autotune.html), which adaptively tunes the execution speed of the data processing pipeline.
- [BETA] [Dataset Offload](https://www.mindspore.cn/docs/zh-CN/r1.7/design/dataset_offload.html) supports new data augmentation operations: RandomColorAdjust, RandomSharpness, and TypeCast.
- When the `__getitem__`/`__next__` methods of a GeneratorDataset custom dataset return a single NumPy object, the corresponding output is a single data column.
- If too many processes or threads are specified for data preprocessing, the error `RuntimeError: can't start new thread` may occur; use `ulimit -u 10240` to increase the number of threads/processes available to the current user.
### API Change
#### Backwards Incompatible Change
##### Python API
- Change the gradient return type for hooks registered with `register_backward_hook`; returned gradients are now uniformly of type `tuple`. ([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
- Deprecated import usage: `import mindspore.dataset.engine.datasets as ds`, because its import path is too deep and depends excessively on the Python directory structure. Use the officially recommended `import mindspore.dataset as ds` instead; for details, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.html).
- Add the `mindspore.ms_class` interface as a class decorator for user-defined classes, enabling MindSpore to identify user-defined classes and access their attributes and methods. ([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
- `mindspore.SparseTensor` is deprecated; the corresponding new interface is `mindspore.COOTensor`. ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505))
- Add a new Tensor init arg `internal` for internal framework use.
## MindSpore Lite
### Major Features and Improvements
#### Post-Training Quantization
- [STABLE] Post-training quantization supports the dynamic quantization algorithm.
- [BETA] Post-training quantized models support inference on NVIDIA GPUs.
### Contributors
Thanks goes to these wonderful people:
AGroupofProbiotocs, anzhengqi, askmiao, baihuawei, baiyangfan, bai-yangfan, bingyaweng, BowenK, buxue, caifubi, CaoJian, caojian05, caozhou, Cathy, changzherui, chenbo116, chenfei, chengxianbin, chenhaozhe, chenjianping, chenzomi, chenzupeng, chujinjin, cj, cjh9368, Corleone, damon0626, danish, Danish, davidmc, dayschan, doitH, dong-li001, fary86, fuzhiye, Gaoxiong, GAO_HYP_XYJ, gengdongjie, Gogery, gongdaguo, gray0v0, gukecai, guoqi, gzhcv, hangq, hanhuifeng2020, Harshvardhan, He, heleiwang, hesham, hexia, Hoai, HuangBingjian, huangdongrun, huanghui, huangxinjing, huqi, huzhifeng, hwjiaorui, Jiabin Liu, jianghui58, Jiaqi, jin-xiulang, jinyaohui, jjfeing, John, jonyguo, JulyAi, jzg, kai00, kingfo, kingxian, kpy, kswang, liuyongqi, laiyongqiang, leonwanghui, liangchenghui, liangzelang, lichen_101010, lichenever, lihongkang, lilei, limingqi107, ling, linqingke, Lin Xh, liubuyu, liuwenhao4, liuxiao78, liuxiao93, liuyang_655, liuzhongkai, Lixia, lixian, liyanliu, liyong, lizhenyu, luopengting, lvchangquan, lvliang, lz, maning202007, Margaret_wangrui, mengyuanli, Ming_blue, ms_yan, ougongchang, panfengfeng, panyifeng, Payne, Peilin, peixu_ren, Pengyongrong, qianlong, qianjiahong, r1chardf1d0, riemann_penn, rmdyh, Sheng, shenwei41, simson, Simson, Su, sunsuodong, tao_yunhao, tinazhang, VectorSL, Wan, wandongdong, wangdongxu, wangmin, wangyue01, wangzhe, wanyiming, Wei, wenchunjiang, wilfChen, WilliamLian, wsc, wudenggang, wukesong, wuweikang, wuxuejian, Xiao Tianci, Xiaoda, xiefangqi, xinyunfan, xuanyue, xuyongfei, yanghaitao, yanghaitao1, yanghaoran, YangLuo, yangruoqi713, yankai, yanzhenxiang2020, yao_yf, yepei6, yeyunpeng, Yi, yoni, yoonlee666, yuchaojie, yujianfeng, yuximiao, zengzitao, Zhang, zhanghuiyao, zhanghui_china, zhangxinfeng3, zhangyihui, zhangz0911gm, zhanke, zhanyuan, zhaodezan, zhaojichen, zhaoting, zhaozhenlong, zhengjun10, zhiqwang, zhoufeng, zhousiyi, zhouyaqiang, zhouyifengCode, Zichun, Ziyan, zjun, ZPaC, wangfengwfwf, zymaa, gerayking.
Contributions of any kind are welcome!