Commit Graph

334 Commits

Author SHA1 Message Date
yangzhenzhang af0d28de48 add parallel op for batchnorm 2021-06-17 15:43:24 +08:00
i-robot 85d860e6a2 !16457 [AutoParallel]pipeline_split_adapt_master
Merge pull request !16457 from lichen/pipeline_split_adapt_master
2021-06-11 11:37:40 +08:00
lichenever db5d508356 pipeline_split_adapt_master 2021-06-10 20:17:33 +08:00
yangzhenzhang 7a40741048 add parallel operator for conv2d 2021-06-10 14:58:51 +08:00
Ziyan 95ac0f6d58 fix optimizer weight shard config 2021-06-08 10:44:17 +08:00
chenhaozhe 9da8534396 change _Loss to Loss 2021-06-03 15:26:59 +08:00
mindspore-ci-bot 1c8fda25ef !16478 handle load op in step parallel
From: @gong_zi_yan
Reviewed-by: @yangzhenzhang,@stsuteng
Signed-off-by: @stsuteng
2021-05-29 09:32:35 +08:00
mindspore-ci-bot b45b63fc58 !17239 add parallel gathernd test case
From: @hanyang001
Reviewed-by: @stsuteng,@yangzhenzhang
Signed-off-by: @stsuteng
2021-05-29 09:11:25 +08:00
Wan Hanyang c51dff2634 add parallel gathernd test case 2021-05-28 17:21:11 +08:00
Wan Hanyang 3ce521d78f add parallel layernorm test case 2021-05-28 17:20:04 +08:00
Ziyan 4b17493e52 handle load in step parallel 2021-05-28 09:16:40 +08:00
yangzhenzhang d711d98f07 clean duplicate code 2021-05-25 17:25:34 +08:00
yao_yf 732d13ccff parallel dropout support repeated compute 2021-05-20 19:42:13 +08:00
yangzhenzhang 6aa3859131 modify check strategy for scatter update 2021-05-10 11:22:00 +08:00
Ziyan 2a752f24bf enable not fully use opt shard 2021-05-07 15:32:48 +08:00
yao_yf e967f1939b parallel envs variable check 2021-05-07 09:12:53 +08:00
mindspore-ci-bot 78fcdbc7c9 !15790 modify scatter update op
From: @yangzhenzhang
Reviewed-by: @kisnwang,@stsuteng
Signed-off-by: @stsuteng
2021-04-28 10:48:19 +08:00
yangzhenzhang 075f680a42 modify scatter update op 2021-04-27 20:23:14 +08:00
Xiaoda Zhang aa52399200 Make the Tile operator support more parallel strategies 2021-04-27 11:36:40 +08:00
yao_yf 093ef784de don't insert virtualoutput for scalar 2021-04-26 19:55:16 +08:00
mindspore-ci-bot 3cfd58e8e0 !15643 insert virtual div only for first input of dropout do mask
From: @yangzhenzhang
Reviewed-by: @stsuteng,@kisnwang
Signed-off-by: @stsuteng
2021-04-26 09:21:00 +08:00
mindspore-ci-bot 49d6c029a6 !15542 split axis and batch for gather
From: @yangzhenzhang
Reviewed-by: @kisnwang,@stsuteng
Signed-off-by: @stsuteng
2021-04-25 19:33:09 +08:00
yangzhenzhang 5828973978 fix bug for dropout do mask 2021-04-25 16:47:44 +08:00
yao_yf 21276408b8 parallel virtual_out_ops 2021-04-25 11:18:54 +08:00
yangzhenzhang 213922574e split axis and batch for gatherv2 2021-04-23 16:59:35 +08:00
yangzhenzhang c2ca2232c5 add select op 2021-04-20 09:23:22 +08:00
mindspore-ci-bot 1c9d3c0aa0 !15353 add parallel operator for scatter update
From: @yangzhenzhang
Reviewed-by: @kisnwang,@stsuteng
Signed-off-by: @stsuteng
2021-04-20 09:03:08 +08:00
mindspore-ci-bot 0fd1726e79 !15172 Clean GraphKernel's codes from frontend
From: @dayschan
Reviewed-by: @gaoxiong1,@dylangeng
Signed-off-by: @dylangeng
2021-04-19 09:34:35 +08:00
yangzhenzhang 9cdd70433f add scatterupdate op 2021-04-17 17:32:48 +08:00
yangzhenzhang d070af122f add topk op 2021-04-17 14:12:49 +08:00
dayschan 771e3f61f3 Clean GraphKernel's codes from frontend
1. set class GraphKernel as deprecated, and treat it as a Cell
2. set class InplaceAssign as deprecated; suggest using Assign instead
3. set op_selector as deprecated; removed _selected_ops and _selected_grad_ops, replacing them with real operations
4. removed the two GraphKernel passes from the frontend
5. removed GraphKernel's code from other modules
2021-04-17 11:03:34 +08:00
yangzhenzhang f9f5df368e add gathernd op 2021-04-16 14:29:03 +08:00
yangzhenzhang bcd2ecc403 check layouts for shared parameter 2021-04-15 10:39:05 +08:00
yao_yf a83fb3316b fix parallel timeout 2021-04-02 20:19:26 +08:00
yao_yf 4d0635eabe set parallel communication init flag in parallel ut 2021-03-30 11:21:57 +08:00
dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 2021-03-18 19:02:28 +08:00
mindspore-ci-bot 7454ac8ecd !13382 [PipelineSplit]change pipeline key word
From: @lichen666
Reviewed-by: @kisnwang,@zhunaipan
Signed-off-by: @zhunaipan
2021-03-16 20:20:32 +08:00
lichenever a2b2727ba8 change_pipeline_key_word 2021-03-16 14:02:35 +08:00
LianLiguang 17b9758543 unify range ops 2021-03-16 10:47:42 +08:00
mindspore-ci-bot 7ba21f8d8c !12900 Add communication parallel mode.
From: @liujunzhu
Reviewed-by: @zhoufeng54,@guoqi1024
Signed-off-by: @guoqi1024
2021-03-06 15:55:49 +08:00
liujunzhu 6541b96c40 Add communication parallel mode. 2021-03-05 21:36:03 +08:00
Ziyan ec9793861f fix grad accu 2021-03-05 10:28:10 +08:00
mindspore-ci-bot 7ff2b3b499 !12781 fix bug of amp bn cast
From: @jojobugfree
2021-03-04 10:14:51 +08:00
caifubi a6959c2a13 fix bn cast bug 2021-03-02 17:24:47 +08:00
yangzhenzhang a70d616841 mini step grad accumulation 2021-03-01 10:20:12 +08:00
wangshuide2020 72e938eb06 change dimension of input for FusedBatchNormEx from 2D to 4D in test_two_matmul_batchnorm_ex. 2021-02-25 09:37:59 +08:00
He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad
The basic idea: exploit data dependencies to control the execution order
of side-effect operations, while keeping the semantics of ANF unchanged.

The ControlDepend primitive is removed and two new primitives are added:

1. UpdateState:
```
  a = Assign(para, value)
```
becomes:
```
  a = Assign(para, value, u)
  u = UpdateState(u, a)
```

2. Load:
```
  x = Add(para, value)
```
becomes:
```
  p = Load(para, u)
  x = Add(p, value)
  u = UpdateState(u, p)
```
2021-02-08 09:01:15 +08:00
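The auto-monad transformation above can be illustrated with a small plain-Python sketch (the `assign`, `load`, and `update_state` functions here are toy stand-ins, not MindSpore APIs): by threading a state token `u` through every side-effect op, ordinary data dependency alone fixes the execution order, with no ControlDepend needed.

```python
# Toy model of the auto-monad idea: side-effect ops take and return a
# state token, so data dependency orders them.

trace = []  # records the order in which side effects actually run

def assign(para, value, u):
    """Side-effect op: writes `para`; takes the state token as an extra input."""
    para["value"] = value
    trace.append("assign")
    return ("assign_out", u)

def update_state(u, op_output):
    """Produces a new token depending on both the old token and the op's
    output, so later side-effect ops cannot be reordered before it."""
    return ("u", u, op_output)

def load(para, u):
    """Reads `para` under the current state token."""
    trace.append("load")
    return para["value"]

# a = Assign(para, value, u); u = UpdateState(u, a)
para = {"value": 0}
u = ("u0",)
a = assign(para, 1, u)
u = update_state(u, a)

# p = Load(para, u); x = Add(p, value); u = UpdateState(u, p)
p = load(para, u)
x = p + 5
u = update_state(u, p)
```

Because `load` consumes the token produced after `assign`, the read is guaranteed to observe the write, which is exactly the ordering the real UpdateState/Load primitives enforce in the graph.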
jinyaohui 30a27b2adb modify Gelu and FastGelu to GeLU and FastGeLU 2021-02-05 17:19:52 +08:00
mindspore-ci-bot 74652eb942 !12044 modify pack to stack
From: @jinyaohui
2021-02-04 21:02:09 +08:00
jinyaohui 8022f9a6ed modify pack to stack 2021-02-04 18:54:31 +08:00