Commit Graph

12325 Commits

Author SHA1 Message Date
mindspore-ci-bot 84a61bd015 !27 add log for kernel runtime in order to trace performance
Merge pull request !27 from shibeiji/master
2020-03-31 11:13:27 +08:00
lvliang b3a306489d auto enable dynamic mem pool 2020-03-31 10:30:01 +08:00
leonwanghui 976af212e9 Revert 'Pull Request !17 : [AutoParallel]Fix bug in the case of two cast' 2020-03-31 10:27:40 +08:00
mindspore-ci-bot 140a15924c !17 [AutoParallel]Fix bug in the case of two cast
Merge pull request !17 from lichen/fix_cast_bug
2020-03-31 10:25:04 +08:00
mindspore-ci-bot 02a25407c4 !30 use string::find instead of equal to distinguish training graph
Merge pull request !30 from chenhaozhe/enable-variable-acc-for-training-graph
2020-03-31 10:18:06 +08:00
lianliguang 9d5890d9b9 fix bug of got a error transdata's dest format 2020-03-31 10:04:34 +08:00
zhaozhenlong b12e6ff780 add operator diag and diag_part 2020-03-31 09:44:56 +08:00
mindspore-ci-bot c1c8fef9ca !24 Change strategy for structure output
Merge pull request !24 from 步学/structure-output
2020-03-31 09:38:44 +08:00
mindspore-ci-bot 4f5755003a !29 Add some prompt information for ease of use
Merge pull request !29 from jonyguo/add_more_log_info_and_testcase
2020-03-30 22:27:19 +08:00
seatea 840280e784 Correct the comments for `RandomChoiceWithMask` op. 2020-03-30 22:17:32 +08:00
mindspore-ci-bot 062b744b19 !12 Fix dtype bug for loss_scale and weight_decay
Merge pull request !12 from seatea/dynamic-loss-scale
2020-03-30 22:06:35 +08:00
mindspore-ci-bot 3ab402e110 !7 adapt relu6grad
Merge pull request !7 from zhaozhenlong/adapte-relu6grad
2020-03-30 21:58:39 +08:00
lichenever b4d34973bc fix_cast_bug 2020-03-30 21:13:45 +08:00
chenhaozhe cab5503280 use find instead of equal to distinguish training graph 2020-03-30 20:10:56 +08:00
mindspore-ci-bot 44cd0c1f90 !13 Check input shape for `NMSWithMask` op
Merge pull request !13 from seatea/NMSWithMask-check-shape
2020-03-30 19:52:41 +08:00
buxue 0da0bdcf40 Fix bug in structure output when outputs contain a depend whose first input is constant 2020-03-30 19:49:46 +08:00
Ziyan 4cbcd8e907 enable using float type learning rate in lars optimizer 2020-03-30 19:05:33 +08:00
zhaozhenlong 9862dea3cf adapt relu6grad to graphengine modifications 2020-03-30 19:01:03 +08:00
shibeiji 468e257a14 add log for kernel runtime in order to trace performance 2020-03-30 17:55:28 +08:00
jonyguo 34e42bd6f9 1. add more log info for dataset & mindrecord, 2. add two new testcases for MindDataset 2020-03-30 17:24:24 +08:00
seatea 6c03542eec Fix dtype bug for loss_scale and weight_decay.
1. Change dtype of scale to dtype of grad in loss_scale.py;
2. Change dtype of weight_decay to dtype of weight in optimizer.py.
2020-03-30 17:14:52 +08:00
seatea 7b7a6a45a0 Check if the shape of the input of NMSWithMask is (N, 5). 2020-03-30 12:10:35 +08:00
jjfeing 86f5c69995 change parallel compile num: 32->16 2020-03-30 10:54:53 +08:00
GinFung 468dbc3557 Add matmul biasadd fusion pass 2020-03-30 10:12:50 +08:00
zhunaipan 930a1fb0a8 initial version
Signed-off-by: leonwanghui <leon.wanghui@huawei.com>
2020-03-27 22:54:54 +08:00