fix format of release

Signed-off-by: Ting Wang <kathy.wangting@huawei.com>
Ting Wang 2022-04-24 17:28:09 +08:00
parent 9c255de251
commit 2083014879
3 changed files with 23 additions and 23 deletions

OWNERS

@@ -27,7 +27,7 @@ files:
     - guoqi1024
     - baochong
-  ".*\.md$":
+  ".*\\.md$":
     approvers:
       - gemini524
       - Hanshize
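For background on this one-character change: in a double-quoted YAML scalar, `\\` is the escape for a literal backslash, so `".*\\.md$"` loads as the intended regex `.*\.md$`, while `\.` is not a recognized escape and strict parsers reject it. A minimal sketch of both cases, assuming PyYAML is installed:

```python
import yaml

# New pattern: "\\" is the YAML escape for one literal backslash,
# so the loaded value is the intended regex  .*\.md$
print(yaml.safe_load('pattern: ".*\\\\.md$"'))  # {'pattern': '.*\\.md$'}

# Old pattern: "\." is not a valid escape in a double-quoted scalar,
# so a strict parser such as PyYAML raises an error.
try:
    yaml.safe_load('pattern: ".*\\.md$"')
except yaml.YAMLError as err:
    print("rejected:", type(err).__name__)
```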

RELEASE.md

@@ -17,7 +17,7 @@
 - [STABLE] Support dynamic weight decay for optimizers, that is weight decay value will change according to the increasing step during training.
 - [STABLE] Add four methods to create Tensor, which are `mindspore.numpy.rand()`, `mindspore.numpy.randn()`, `mindspore.numpy.randint()`, and `mindspore.ops.arange()`.
-- [STABLE] Add `mindspore.callback.History` in Callback.
+- [STABLE] Add `mindspore.train.callback.History` in Callback.
 - [BETA] Support custom operator implemented by Julia operator.
 - [STABLE] Support accessing attributes and methods of user-defined classes through `mindspore.ms_class` class decorator.
 - [STABLE] Support training when a network has side effect operations and control flow statements at the same time.
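The hunk above corrects the import path of the `History` callback. A minimal sketch of the APIs it names, assuming the 1.7-era signatures (the commented `model.train` call is illustrative):

```python
import mindspore as ms
import mindspore.numpy as mnp
from mindspore.train.callback import History  # corrected path

# The four new Tensor-creation helpers.
x = mnp.rand(2, 3)               # uniform samples in [0, 1)
y = mnp.randn(2, 3)              # standard-normal samples
z = mnp.randint(0, 10, (2, 3))   # random integers in [0, 10)
r = ms.ops.arange(0, 10, 2)      # evenly spaced values, numpy.arange-style

# History records per-epoch loss/metric values during training:
history = History()
# model.train(epoch=3, train_dataset=dataset, callbacks=[history])
```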
@@ -46,7 +46,7 @@
 #### Executor

-- [BETA] [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html#%E5%AE%B9%E7%81%BE%E6%81%A2%E5%A4%8D) Support auto failure recovery under data parallel training mode.
+- [BETA] [Failure Recovery Under Data Parallel Training](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html#%E5%AE%B9%E7%81%BE%E6%81%A2%E5%A4%8D) Support auto failure recovery under data parallel training mode.
 - [BETA] Support searching for the number of threads under the CPU to obtain the optimal number of threads for execution. The entire search process takes 50 steps, and the overall performance will reach a stable state after 50 steps. When testing performance, data after 50 steps need to be used as a standard.

 #### DataSet
@@ -55,8 +55,8 @@
 - [STABLE] Python multiprocessing optimization and make processes exit normally.
 - [STABLE] Support [Dataset Autotune](https://www.mindspore.cn/tutorials/experts/en/master/debug/dataset_autotune.html) for tuning the speed of dataset pipeline automatically.
 - [BETA] [Dataset Offload](https://www.mindspore.cn/docs/en/master/design/dataset_offload.html) support new data augmentation operations: RandomColorAdjust, RandomSharpness, TypeCast.
-- Output a single data column when __getitem__/__next__ methods of GeneratorDataset return a single NumPy object.
-- Use ulimit -u 10240 to increase the number of threads/processes available to the current user when specify too many processes or threads for loading dataset may cause RuntimeError: can't start new thread.
+- Output a single data column when `__getitem__/__next__` methods of GeneratorDataset return a single NumPy object.
+- Use `ulimit -u 10240` to increase the number of threads/processes available to the current user when specify too many processes or threads for loading dataset may cause RuntimeError: can't start new thread.

 ### API Change
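The `__getitem__/__next__` bullet above is easiest to see with a tiny custom dataset: when `__getitem__` returns one NumPy object, GeneratorDataset emits one data column. A sketch, with an illustrative column name and shapes:

```python
import numpy as np
import mindspore.dataset as ds

class MyDataset:
    """Random-access source whose __getitem__ returns a single NumPy object."""
    def __init__(self):
        self.data = np.arange(12, dtype=np.float32).reshape(4, 3)

    def __getitem__(self, index):
        return self.data[index]   # one NumPy array -> one output column

    def __len__(self):
        return len(self.data)

dataset = ds.GeneratorDataset(MyDataset(), column_names=["data"])
for row in dataset.create_tuple_iterator(output_numpy=True):
    print(row)   # a 1-tuple holding the single "data" column
```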
@@ -65,9 +65,9 @@
 ##### Python API

 - Modify the gradient return value type of the hook corresponding to the register_backward_hook function, and change the gradient return value to the tuple type uniformly.([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
-- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead as recommended in [mindspore doc](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html).
+- Deprecated usage: `import mindspore.dataset.engine.datasets as ds`. Use `import mindspore.dataset as ds` instead as recommended in [mindspore doc](https://www.mindspore.cn/docs/en/master/api_python/mindspore.dataset.html).
 - Add `mindspore.ms_class` interface, as class decorator for user-defined classes. It allows MindSpore to identify user-defined classes and access their attributes and methods([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
-- Deprecate `mindspore.SparseTensor` and use `mindspore.COOTensor` instead. ([!28505]())
+- Deprecate `mindspore.SparseTensor` and use `mindspore.COOTensor` instead. ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505))

 ## MindSpore Lite
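Two bullets in the hunk above lend themselves to short sketches. First, the `SparseTensor` deprecation, assuming `COOTensor` keeps the `(indices, values, shape)` constructor of its predecessor:

```python
import mindspore as ms

indices = ms.Tensor([[0, 1], [1, 2]], dtype=ms.int32)  # coordinates of nonzeros
values = ms.Tensor([1.0, 2.0], dtype=ms.float32)       # values at those coordinates
shape = (3, 4)                                         # dense shape

# Before: sparse = ms.SparseTensor(indices, values, shape)   # deprecated
sparse = ms.COOTensor(indices, values, shape)
print(sparse.shape)
```

Second, the `mindspore.ms_class` decorator; the `Config` class here is a hypothetical example:

```python
import mindspore as ms

@ms.ms_class
class Config:
    """User-defined class whose attributes MindSpore can read in graphs."""
    def __init__(self):
        self.scale = 2.0

cfg = Config()
print(cfg.scale)  # attributes and methods are accessible as usual
```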

RELEASE_CN.md

@@ -9,7 +9,7 @@
 #### OS

 - [STABLE] 支持Python 3.8版本Linux/Windows/Mac
-- [STABLE] 安装简化,提供详细安装指南以及自动化安装脚本。
+- [STABLE] 简化安装,提供详细安装指南和自动化安装脚本。
 - [STABLE] Windows版本支持算子多线程。
 - [STABLE] GCC兼容7.3到9.x版本。
@@ -17,7 +17,7 @@
 - [STABLE] 优化器支持动态权重衰减即训练期间权重衰减值随着step的增加而变化。
 - [STABLE] 增加四种创建Tensor的方法分别是`mindspore.numpy.rand()`、`mindspore.numpy.randn()`、`mindspore.numpy.randint()`和`mindspore.ops.arange ()`。
-- [STABLE] 增加一种callback方法 `mindspore.callback.History`。
+- [STABLE] 增加一种callback方法 `mindspore.train.callback.History`。
 - [BETA] 自定义算子支持Julia算子。
 - [STABLE] 通过 `mindspore.ms_class` 类装饰器,支持获取用户自定义类的属性和方法。
 - [STABLE] 支持同时存在副作用算子和控制流语句的网络的训练。
@@ -26,8 +26,8 @@
 #### PyNative

-- [STABLE] 在PyNative模式下支持hook函数功能包括前向hook接口register_forward_pre_hook, register_forward_hook和反向hook接口register_backward_hook。
-- [STABLE] 优化PyNative模式执行性能将前端Python与后端C++进行并行执行
+- [STABLE] 在PyNative模式下支持hook函数功能包括前向hook接口register_forward_pre_hook、register_forward_hook和反向hook接口register_backward_hook。
+- [STABLE] 优化PyNative模式执行性能并行执行前端Python与后端C++

 #### Auto Parallel
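The PyNative hunk above names three hook interfaces. A sketch of registering them, assuming the 1.x-documented callback signatures (`cell_id` plus input/output or gradient tuples; the tuple form is the !31876 change noted later in this diff):

```python
import mindspore as ms
import mindspore.nn as nn
from mindspore import context

context.set_context(mode=context.PYNATIVE_MODE)  # hooks take effect in PyNative mode

def forward_pre_hook(cell_id, inputs):
    print("entering:", cell_id)

def forward_hook(cell_id, inputs, outputs):
    print("leaving:", cell_id)

def backward_hook(cell_id, grad_input, grad_output):
    # Gradients now arrive uniformly as tuples (see !31876).
    print("grad tuples:", len(grad_input), len(grad_output))

net = nn.ReLU()
net.register_forward_pre_hook(forward_pre_hook)
net.register_forward_hook(forward_hook)
net.register_backward_hook(backward_hook)

x = ms.Tensor([[-1.0, 2.0]], ms.float32)
out = net(x)                        # fires the forward hooks
ms.ops.GradOperation()(net)(x)      # fires the backward hook
```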
@@ -36,7 +36,7 @@
 - [STABLE] 在并行模式下支持ops.clip_by_global_norm。
 - [STABLE] 在并行模式下支持AdaSum优化器。
 - [STABLE] 支持自动优化器切分。
-- [STABLE] 支持AllltoAll可配置开启.支持自动插入VirtualDatasetCell。
+- [STABLE] 支持AlltoAll可配置开启。支持自动插入VirtualDatasetCell。
 - [STABLE] 在流水线并行训练中,支持自动推断可训练的参数。
 - [STABLE] 支持集群的设备数目不为2的幂次方。
 - [STABLE] 在自动并行模式中支持策略传播。
@@ -47,16 +47,16 @@
 #### Executor

 - [BETA] [数据并行训练容灾](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_gpu.html#%E5%AE%B9%E7%81%BE%E6%81%A2%E5%A4%8D) 支持多卡数据并行训练容灾恢复。
-- [BETA] 支持在cpu下的线程数搜索获取最优线程数进行执行。整个搜索过程需要耗时50个steps整体的性能会在50个steps后达到稳定的状态。在测试性能的时候需要以50个steps之后的数据作为标准。
+- [BETA] 支持在CPU下的线程数搜索获取最优线程数来执行。整个搜索过程需要耗时50个steps整体的性能会在50个steps后达到稳定的状态。在测试性能的时候需要以50个steps之后的数据作为标准。

 #### DataSet

-- [STABLE] 增加了数据处理API的差异文档比较TensorFlow.data与MindSpore.dataset部分算子的差异详见 [对比文档](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/tensorflow_api_mapping.html#tf-data).
+- [STABLE] 增加了数据处理API的差异文档比较TensorFlow.data与MindSpore.dataset部分算子的差异详见 [对比文档](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/tensorflow_api_mapping.html#tf-data)。
 - [STABLE] Python多进程逻辑优化保证不同异常场景的正常退出。
 - [STABLE] 支持[自动数据加速](https://www.mindspore.cn/tutorials/experts/zh-CN/master/debug/dataset_autotune.html),可以自适应调节数据处理管道的执行速度。
-- [BETA] [数据处理异构加速](https://www.mindspore.cn/docs/zh-CN/master/design/dataset_offload.html) 支持了新的数据增强操作: RandomColorAdjust, RandomSharpness, TypeCast。
-- GeneratorDataset加载自定义数据集时当__getitem__/__next__方法返回单个NumPy对象对应会输出单个数据列。
-- 用户在数据预处理中使用过多的进程数/线程数情况下会出现错误RuntimeError: can't start new thread可以通过 ulimit -u 10240 增加当前用户可用的线程/进程数解决。
+- [BETA] [数据处理异构加速](https://www.mindspore.cn/docs/zh-CN/master/design/dataset_offload.html) 支持了新的数据增强操作: RandomColorAdjust、RandomSharpness和TypeCast。
+- GeneratorDataset加载自定义数据集时`__getitem__/__next__`方法返回单个NumPy对象对应会输出单个数据列。
+- 用户在数据预处理中使用过多的进程数/线程数情况下会出现错误RuntimeError: can't start new thread可以通过 `ulimit -u 10240` 增加当前用户可用的线程/进程数解决。

 ### API变更
@@ -65,9 +65,9 @@
 ##### Python API

 - 修改register_backward_hook功能对应hook的梯度返回值类型将梯度返回值统一改成tuple类型。([!31876](https://gitee.com/mindspore/mindspore/pulls/31876))
-- 弃用的import用法 `import mindspore.dataset.engine.datasets as ds` 因其import目录过深且过度依赖Python目录结构。推荐使用官方推荐用法 `import mindspore.dataset as ds` ,更多参考详见 [API文档](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html).
+- 弃用的import用法 `import mindspore.dataset.engine.datasets as ds` 因其import目录过深且过度依赖Python目录结构。推荐使用 `import mindspore.dataset as ds` ,更多参考详见 [API文档](https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore.dataset.html)。
 - 新增`mindspore.ms_class` 接口作为用户自定义类的类装饰器使得MindSpore能够识别用户自定义类并且获取这些类的属性和方法。([!30855](https://gitee.com/mindspore/mindspore/pulls/30855))
-- `mindspore.SparseTensor`接口废弃使用,对应新接口为`mindspore.COOTensor`. ([!28505]())
+- `mindspore.SparseTensor`接口废弃使用,对应新接口为`mindspore.COOTensor`。 ([!28505](https://gitee.com/mindspore/mindspore/pulls/28505))
 - Tensor新增一个入参`internal`,作为框架内部使用。

 ## MindSpore Lite
@@ -76,8 +76,8 @@
 #### 后量化

-- [STABLE] 后量化支持动态量化算法.
-- [BETA] 后量化模型支持在英伟达GPU推理.
+- [STABLE] 后量化支持动态量化算法。
+- [BETA] 后量化模型支持在英伟达GPU上执行推理。

 ### 贡献者