zhanghuiyao
07ee4054c5
fix some network built-in function problems.
2021-05-28 16:58:10 +08:00
mindspore-ci-bot
56bb6468df
!16800 convert python to C++ in BiasAddGrad operator
...
From: @shen_jingxing
Reviewed-by: @ginfung,@zh_qh
Signed-off-by: @zh_qh
2021-05-28 16:57:10 +08:00
mindspore-ci-bot
bfeb511100
!16967 clean code check
...
From: @Margaret_wangrui
Reviewed-by: @ginfung
Signed-off-by:
2021-05-28 16:55:00 +08:00
mindspore-ci-bot
36d3c59c23
!17108 clean pclint check
...
From: @Margaret_wangrui
Reviewed-by: @ginfung,@zh_qh
Signed-off-by: @zh_qh
2021-05-28 16:54:50 +08:00
zhaoting
e938898336
add fp16 support for CPU Adam
2021-05-28 16:52:08 +08:00
累到崴脚
59935cd7d9
update
...
update run_eval.sh
'update'
'update'
'update'
'update readme'
'fix'
update eval.py
'update'
update README
fix README
fix README
fix_help
to_research
fix
fix network
fix
2021
fix
fix
fix
2021-05-28 16:45:09 +08:00
mindspore-ci-bot
0d8ec8db88
!17143 clean code of probability
...
From: @bingyaweng
Reviewed-by: @wang_zi_dong,@sunnybeike
Signed-off-by: @sunnybeike
2021-05-28 16:27:47 +08:00
mindspore-ci-bot
f96e149e1e
!16977 ascend 310&910 joint compile, and ascend 310 package lose weight
...
From: @nicholas_yhr
Reviewed-by: @zhoufeng54,@zhoufeng54,@xsmq
Signed-off-by: @zhoufeng54,@xsmq
2021-05-28 16:18:45 +08:00
zengzitao
07752d7aaf
fix warpctc bug when the graph_kernel flag is enabled
2021-05-28 16:14:49 +08:00
huangbo77
eaadd8746a
fixes improper usage of built-in function _Loss
2021-05-28 16:13:25 +08:00
mindspore-ci-bot
ea93cc380a
!17000 clear code check warnings
...
From: @shibeiji
Reviewed-by: @wuxuejian,@wuxuejian,@liangchenghui,@wuxuejian
Signed-off-by: @wuxuejian,@liangchenghui
2021-05-28 16:04:36 +08:00
mindspore-ci-bot
ae5adc2986
!17155 add st model of micro
...
From: @yangjie159
Reviewed-by: @wangchengyuan,@hangangqiang
Signed-off-by: @wangchengyuan,@hangangqiang
2021-05-28 15:31:51 +08:00
mindspore-ci-bot
ecbdb6a853
!17063 maskrcnn_mobilenetv1 export
...
From: @zhangxiaoxiao16
Reviewed-by: @c_34,@oacjiewen
Signed-off-by: @c_34
2021-05-28 15:29:59 +08:00
mindspore-ci-bot
f5fb195f04
!17189 check input model buffer size
...
From: @hangangqiang
Reviewed-by: @zhanghaibo5,@zhang_xue_tong
Signed-off-by: @zhanghaibo5,@zhang_xue_tong
2021-05-28 15:17:54 +08:00
VectorSL
03210aee81
clean code2
2021-05-28 15:15:45 +08:00
YangLuo
ac319b3a79
Fix mindrecord UTs: existing files cause write exception
2021-05-28 15:13:48 +08:00
caifubi
8056d9709c
fix gpu pynative allreduce fail
2021-05-28 15:12:00 +08:00
Su Teng
54eeb3e789
Add sparse attention related ops
2021-05-28 15:10:40 +08:00
mindspore-ci-bot
40ca285ab3
!17052 added scale out
...
From: @anancds
Reviewed-by: @cristoval,@limingqi107
Signed-off-by:
2021-05-28 15:05:39 +08:00
wangshuide2020
0f9745bd20
add default value of optional attribute for nllloss.
2021-05-28 14:56:50 +08:00
dayschan
2ac8c65327
Add GraphKernelPassManager to manage the passes of GraphKernel
...
Refactor the original "PassManager" class, and derive the "GraphKernelPassManager"
GraphKernel's ir files are dumped into a new sub-directory "graph_kernel" in the original "verbose_ir_files"
All GraphKernel passes are divided into 3 levels and controlled by the flag "opt_level" by default.
When the opt_level is greater than or equal to the pass's level, the pass will run.
The default "opt_level" is 2 when GraphKernel is enabled.
Levels:
1. Basic features, like cluster, splitter, and some preprocess, postprocess.
2. All stable features, mainly includes the optimization passes.
3. Experimental features, like stitch-fusion, parallel-fusion.
The two flags "enable_pass" and "disable_pass" are available in this commit.
Users can manually enable a pass that is disabled by "opt_level", or disable an enabled pass,
by specifying the pass in the format "stage_id.pass_id" or "stage_name.pass_name"; multiple passes are separated by commas (",").
The stage/pass index and stage/pass name can be found in the ir filename.
e.g. "--enable_pass=cluster.graph_kernel_expander,1.1,1.2"
Others:
1. the pass "tensor_promotion" is not useful, remove it.
2. put the pass "InsertPadOps" before "ArithmeticSimplify".
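The gating rule above can be sketched in a few lines. This is a hypothetical illustration, not MindSpore's actual implementation: the function name `should_run` and the `pass_keys` set are invented for the sketch; only the rule itself (run when opt_level >= the pass's level, with enable_pass/disable_pass overriding by "stage_id.pass_id" or "stage_name.pass_name") comes from the commit message.

```python
def should_run(pass_level, opt_level, pass_keys, enable_pass, disable_pass):
    """Decide whether a pass runs.

    pass_keys holds both identifiers of the pass, e.g.
    {"1.2", "cluster.graph_kernel_expander"} (index form and name form).
    """
    # An explicit disable_pass entry always wins.
    if pass_keys & set(disable_pass):
        return False
    # An explicit enable_pass entry overrides the opt_level gate.
    if pass_keys & set(enable_pass):
        return True
    # Default: the pass runs when opt_level covers its level.
    return opt_level >= pass_level

# At the default opt_level of 2, a level-3 (experimental) pass is off ...
assert not should_run(3, 2, {"3.1", "stitch.fusion"}, [], [])
# ... but can be force-enabled by naming it in enable_pass:
assert should_run(3, 2, {"3.1", "stitch.fusion"}, ["stitch.fusion"], [])
```

Here "stitch.fusion" and the index "3.1" are made-up placeholders standing in for whatever stage/pass names appear in the ir filenames.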
2021-05-28 14:50:37 +08:00
ling
58d30d0c65
mindrt modify include .h
...
device_type
2021-05-28 14:42:46 +08:00
mindspore-ci-bot
9dcc495c24
!17109 [MD] fix code check
...
From: @liyong126
Reviewed-by: @liucunwei,@jonyguo
Signed-off-by: @liucunwei
2021-05-28 14:32:09 +08:00
simson
8d701f17d7
add check of undetermined type in addn infer
2021-05-28 14:27:07 +08:00
mindspore-ci-bot
30be164856
!16767 ci format checking issues fix
...
From: @zhangzhaoju
Reviewed-by: @hwhewei,@ginfung,@zh_qh
Signed-off-by: @zh_qh
2021-05-28 14:16:48 +08:00
mindspore-ci-bot
932ec6e643
!17034 revert convback and add convtranspose
...
From: @changzherui
Reviewed-by: @zhoufeng54,@liangchenghui
Signed-off-by: @liangchenghui
2021-05-28 14:16:41 +08:00
zuochuanyong
8302820597
switch implementation of BiasAdd op to nnacl
2021-05-28 14:07:10 +08:00
mindspore-ci-bot
14cf33a6df
!17197 Revert 'Pull Request !17026 : add relative-position-attention in mindspore lite & nnacl'
...
From: @xsmq
Reviewed-by: @zhoufeng54,@guoqi1024
Signed-off-by: @guoqi1024
2021-05-28 13:42:04 +08:00
gongxiaoqing
f26891df3a
Revert 'Pull Request !17026 : add relative-position-attention in mindspore lite & nnacl'
2021-05-28 12:53:40 +08:00
liuyihong
143009a51b
enable graph kernel
2021-05-28 12:40:03 +08:00
yanghaoran
8b4ff8708f
ascend 310/910 joint compile, and ascend 310 package lose weight
2021-05-28 12:11:20 +08:00
hangangqiang
8e1b5f8220
check input model buffer size
2021-05-28 11:39:33 +08:00
looop5
68f55e1e93
expand conv2d when input format is DefaultFormat but attr format is NHWC
2021-05-28 11:38:32 +08:00
wanyiming
6dfb903988
clean cpu code
2021-05-28 11:37:16 +08:00
mindspore-ci-bot
9fdc0035a0
!17046 add modelzoo network retinaface_resnet50 testcase
...
From: @anzhengqi
Reviewed-by: @c_34,@wuxuejian
Signed-off-by: @c_34
2021-05-28 11:09:58 +08:00
yuzhenhua
1a69ef0a3c
ascend310 inference for xception
2021-05-28 11:09:37 +08:00
mindspore-ci-bot
63ba14b5c6
!17026 add relative-position-attention in mindspore lite & nnacl
...
From: @hangangqiang
Reviewed-by: @zhanghaibo5,@zhang_xue_tong
Signed-off-by: @zhang_xue_tong
2021-05-28 10:54:24 +08:00
baihuawei
8665faac0a
clear warning
2021-05-28 10:43:29 +08:00
yangjie159
5cf7037ecd
add st models
2021-05-28 10:05:59 +08:00
mindspore-ci-bot
f5a23ddf26
!17168 drop test_kron
...
From: @jachua
Reviewed-by: @wmzheng2020,@xsmq
Signed-off-by: @xsmq
2021-05-28 10:01:01 +08:00
huangmengxi
0a588f1f44
drop test kron
2021-05-28 09:44:43 +08:00
mindspore-ci-bot
615a22b071
!16791 Comprehensive API changes part 2
...
From: @dinglinhe123
Reviewed-by: @liangchenghui,@liangchenghui
Signed-off-by: @liangchenghui,@liangchenghui
2021-05-28 09:37:59 +08:00
Yang Jiao
f9aefb24d5
enable tinybert graph kernel
2021-05-28 09:35:36 +08:00
shen_jingxing
87175b9d22
BiasAddGrad
2021-05-28 09:22:50 +08:00
Ziyan
4b17493e52
handle load in step parallel
2021-05-28 09:16:40 +08:00
liyong
322c342979
fix codecheck
2021-05-28 09:14:29 +08:00
zhangxinfeng3
d86465602a
clean code of probability
2021-05-28 08:58:40 +08:00
hangangqiang
1329355b33
add relative-position-attention in mindspore lite & nnacl
2021-05-28 08:52:32 +08:00
changzherui
d9e2da299d
Revert "!16599 c++ infer for conv2dbackpropfilter and conv2dbackpropinput"
...
This reverts commit 3be79efd80, reversing changes made to cf4479756a.
2021-05-27 22:24:13 +08:00
mindspore-ci-bot
f22e0522fe
!16947 fix precision error of ml_location_lane_counter.onnx
...
From: @wangyanling10
Reviewed-by: @zhanghaibo5,@jpc_chenjianping
Signed-off-by: @jpc_chenjianping
2021-05-27 21:16:20 +08:00