Author | Commit | Message | Date
liubuyu | 43c79eb853 | mindspore path adjust | 2020-07-14 18:07:28 +08:00
yao_yf | f0bf438a55 | reshape strategy search | 2020-05-09 17:00:37 +08:00
Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. | 2020-04-14 11:39:31 +08:00
buxue | 5841fe010e | Support pow's second input could be tensor and fix bug in bprop of pow | 2020-04-11 16:16:58 +08:00
yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 2020-04-11 12:11:22 +08:00
c00425699 | b413638f23 | refactor OperatorCostPtr in OperatorInfo | 2020-04-09 20:37:52 +08:00
mindspore-ci-bot | 2e6e94b2b6 | !177 prelu operator support parallel on the channel. Merge pull request !177 from yao_yf/fix_auto_parallel_prelu | 2020-04-09 14:08:41 +08:00
yao_yf | b5e3fa9593 | fix auto parallel prelu | 2020-04-08 20:45:08 +08:00
Xiaoda Zhang | a153fad874 | This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. | 2020-04-08 11:52:19 +08:00
c00425699 | 3bb48ffee1 | use std::vector instead of std::list to promote performance for parallel module | 2020-03-31 15:16:00 +08:00
zhunaipan | 930a1fb0a8 | initial version. Signed-off-by: leonwanghui <leon.wanghui@huawei.com> | 2020-03-27 22:54:54 +08:00