update releasenote for master
This commit is contained in:
parent
ac7c9ab0ee
commit
dd4298bb03
@@ -20,7 +20,7 @@

 #### Inference

-- [BETA] Large model inference has been upgraded to a unified training-and-inference architecture that unifies scripts, distributed strategies, and runtime. The period from training to inference deployment for typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.
+- [DEMO] Large model inference has been upgraded to a unified training-and-inference architecture that unifies scripts, distributed strategies, and runtime. The period from training to inference deployment for typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.

 #### AutoParallel
@@ -77,13 +77,13 @@

 - [BETA] mindspore.ops.TopK now supports the second input k as an int32 type tensor.

-#### Bug fixes
+### Bug Fixes

 - [#I92H93] Fixed the "Launch kernel failed" error raised by the Print operator when printing string objects on the Ascend platform.
 - [#I8S6LY] Fixed the "RuntimeError: Attribute dyn_input_sizes of Default/AddN-op1 is [const vector]{}, of which size is less than 0" error raised by variable-length input operators, such as AddN or Concat, during dynamic shape processing in graph mode on the Ascend platform.
 - [#I9ADZS] Fixed the data timeout issue in network training caused by inefficient dataset recovery in the fault recovery scenario.

-#### Contributors
+### Contributors

 Thanks go to these wonderful people:
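The TopK change noted above (accepting the second input k as an int32 tensor rather than only a Python int) can be illustrated with a stand-in sketch. This is not MindSpore code: NumPy substitutes for the framework, and the `topk` helper is a hypothetical name used only to show the semantics.

```python
# Hedged sketch of TopK semantics where k arrives as an int32 scalar
# tensor (here a NumPy int32 stands in for a framework tensor).
# The topk() helper is illustrative, not a MindSpore API.
import numpy as np

def topk(values, k):
    # k may be an int32 zero-dim tensor / scalar, not just a Python int
    k = int(np.asarray(k, dtype=np.int32))
    order = np.argsort(values)[::-1][:k]  # indices of the k largest values
    return values[order], order

vals = np.array([1.0, 5.0, 3.0, 4.0])
k = np.int32(2)                 # tensor-like int32 k
top_vals, top_idx = topk(vals, k)
# top_vals -> [5.0, 4.0], top_idx -> [1, 3]
```

In MindSpore itself the equivalent call would go through mindspore.ops.TopK; the point here is only that k no longer needs to be a host-side Python integer.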
@@ -20,7 +20,7 @@

 #### Inference

-- [BETA] Large model inference has been upgraded to a unified training-and-inference architecture that unifies scripts, distributed strategies, and runtime. The period from training to inference deployment for typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.
+- [DEMO] Large model inference has been upgraded to a unified training-and-inference architecture that unifies scripts, distributed strategies, and runtime. The period from training to inference deployment for typical large models is reduced to days. Fused large operators reduce inference latency and effectively improve network throughput.

 #### AutoParallel
@@ -68,7 +68,7 @@

 - [BETA] Users can set CANN options, which fall into two categories: global and session. They can be configured via mindspore.set_context(ascend_config={"ge_options": {"global": {"global_option": "option_value"}, "session": {"session_option": "option_value"}}}).

-#### API Change
+#### API变更

 - Added the mindspore.hal interface, which exposes stream, event, and device management capabilities.
 - Added the mindspore.multiprocessing interface, which provides the capability to create multiple processes.
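The CANN options entry above can be sketched as follows. The dictionary shape is taken directly from the release note; the option names ("global_option", "session_option") are placeholders from that note, not real CANN options, and the set_context call is shown only as a comment since it requires an Ascend environment.

```python
# Hedged sketch: building the ascend_config described in the release note.
# "global_option"/"session_option" are placeholder names from the note,
# not actual CANN options.
ascend_config = {
    "ge_options": {
        "global": {"global_option": "option_value"},    # process-wide options
        "session": {"session_option": "option_value"},  # per-session options
    }
}

# On a machine with MindSpore and the Ascend software stack installed,
# this would be applied as:
#   import mindspore
#   mindspore.set_context(ascend_config=ascend_config)
```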
@@ -77,7 +77,7 @@

 - [BETA] mindspore.ops.TopK now supports the second input k as an int32 type tensor.

-#### Bug fixes
+### 问题修复

 - [#I92H93] Fixed the "Launch kernel failed" error raised by the Print operator when printing string objects on the Ascend platform.
 - [#I8S6LY] Fixed the "RuntimeError: Attribute dyn_input_sizes of Default/AddN-op1 is [const vector]{}, of which size is less than 0" error raised by variable-length input operators, such as AddN or Concat, during dynamic shape processing in graph mode on the Ascend platform.