forked from mindspore-Ecosystem/mindspore
!30330 Add SparseTensor attr
Merge pull request !30330 from wangrao124/add_sparse_attr
This commit is contained in:
commit 55c16c6ed8

@ -8,6 +8,8 @@ mindspore
:toctree: mindspore

mindspore.Tensor
mindspore.COOTensor
mindspore.CSRTensor
mindspore.RowTensor
mindspore.SparseTensor
@ -0,0 +1,96 @@
mindspore.COOTensor
===================

.. py:class:: mindspore.COOTensor(indices=None, values=None, shape=None)

    A sparse representation of a set of nonzero elements from a tensor at given indices.

    **Parameters:**

    - **indices** (Tensor) - A 2-D integer Tensor of shape `[N, ndims]`, where N and ndims are the number of `values` and the number of dimensions of the COOTensor, respectively.
    - **values** (Tensor) - A 1-D Tensor of shape `[N]`, which supplies the values for each element in `indices`.
    - **shape** (tuple(int)) - An integer tuple of size ndims, which specifies the dense shape of the sparse tensor.
    - **coo_tensor** (COOTensor) - A COOTensor object, used to initialize a new COOTensor.

    **Outputs:**

    COOTensor, composed of `indices`, `values` and `shape`.

    .. py:method:: indices
        :property:

        Return the indices of the COOTensor.

    .. py:method:: values
        :property:

        Return the non-zero values of the COOTensor.

    .. py:method:: shape
        :property:

        Return the dense shape of the sparse tensor.

    .. py:method:: dtype
        :property:

        Return the data type of the non-zero values.

    .. py:method:: size
        :property:

        Return the number of non-zero values.

    .. py:method:: itemsize
        :property:

        Return the length of one non-zero element in bytes.

    .. py:method:: ndim
        :property:

        Return the number of dense dimensions of the sparse tensor.

    .. py:method:: to_csr()

        Convert the COOTensor to a CSRTensor.

        **Returns:**

        CSRTensor.

    .. py:method:: to_dense()

        Convert the COOTensor to a dense Tensor.

        **Returns:**

        Tensor.

    .. py:method:: to_tuple()

        Return the indices, values and shape of the COOTensor as a tuple.

        **Returns:**

        tuple(Tensor, Tensor, tuple(int)).

    .. py:method:: abs()

        Return the absolute value of each non-zero element in a new COOTensor.

        **Returns:**

        COOTensor.

    .. py:method:: astype(dtype)

        Return a copy of the COOTensor, cast to a specified data type.

        **Parameters:**

        - **dtype** (`mindspore.dtype`) - The designated data type.

        **Returns:**

        COOTensor.
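As a sketch of the COO layout this class documents (plain Python, independent of MindSpore; helper name is illustrative): each row of `indices` addresses exactly one stored value in the dense matrix.

```python
def coo_to_dense(indices, values, shape):
    """Materialize a COO-format sparse matrix as nested lists."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v  # each index pair owns exactly one stored value
    return dense

# indices=[[0, 1], [1, 2]], values=[1.0, 2.0], shape=(3, 4)
dense = coo_to_dense([[0, 1], [1, 2]], [1.0, 2.0], (3, 4))
```

This mirrors what `to_dense()` produces: a `[3, 4]` matrix with 1.0 at position (0, 1) and 2.0 at position (1, 2).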
@ -0,0 +1,127 @@
mindspore.CSRTensor
===================

.. py:class:: mindspore.CSRTensor(indptr=None, indices=None, values=None, shape=None)

    A sparse representation of a set of nonzero elements from a tensor at given indices.

    **Parameters:**

    - **indptr** (Tensor) - A 1-D integer Tensor of shape `[M]`, where M equals `shape[0] + 1`; it indicates where the non-zero values of each row start and end in `values`.
    - **indices** (Tensor) - A 1-D integer Tensor of shape `[N]`, where N equals the number of non-zero values; it gives the column index of each value.
    - **values** (Tensor) - A 1-D Tensor of shape `[N]`, which supplies the values for the given indices.
    - **shape** (tuple(int)) - An integer tuple of size ndims, which specifies the dense shape of the sparse tensor.
    - **csr_tensor** (CSRTensor) - A CSRTensor object, used to initialize a new CSRTensor.

    **Outputs:**

    CSRTensor, composed of `indptr`, `indices`, `values` and `shape`.

    .. py:method:: indptr
        :property:

        Return the row offsets of the CSRTensor.

    .. py:method:: indices
        :property:

        Return the column indices of the CSRTensor.

    .. py:method:: values
        :property:

        Return the non-zero values of the CSRTensor.

    .. py:method:: shape
        :property:

        Return the dense shape of the sparse tensor.

    .. py:method:: dtype
        :property:

        Return the data type of the non-zero values.

    .. py:method:: size
        :property:

        Return the number of non-zero values.

    .. py:method:: itemsize
        :property:

        Return the length of one non-zero element in bytes.

    .. py:method:: ndim
        :property:

        Return the number of dense dimensions of the sparse tensor.

    .. py:method:: to_coo()

        Convert the CSRTensor to a COOTensor.

        **Returns:**

        COOTensor.

    .. py:method:: to_dense()

        Convert the CSRTensor to a dense Tensor.

        **Returns:**

        Tensor.

    .. py:method:: to_tuple()

        Return the row offsets, column indices, values and shape of the CSRTensor as a tuple.

        **Returns:**

        tuple(Tensor, Tensor, Tensor, tuple(int)).

    .. py:method:: abs()

        Return the absolute value of each non-zero element in a new CSRTensor.

        **Returns:**

        CSRTensor.

    .. py:method:: astype(dtype)

        Return a copy of the CSRTensor, cast to a specified data type.

        **Parameters:**

        - **dtype** (`mindspore.dtype`) - The designated data type.

        **Returns:**

        CSRTensor.

    .. py:method:: mv(dense_vector)

        Sparse matrix-vector multiplication: multiply the CSRTensor by a dense vector on the right.
        A CSRTensor of shape `[M, N]` requires a dense vector of shape `[N, 1]`, and produces a dense vector of shape `[M, 1]`.

        **Parameters:**

        - **dense_vector** (Tensor) - A dense Tensor of shape `[N, 1]`, where N equals the number of columns of the CSRTensor.

        **Returns:**

        Tensor.

    .. py:method:: sum(axis)

        Sum the CSRTensor along a given axis.

        **Parameters:**

        - **axis** (int) - The axis to sum along.

        **Returns:**

        Tensor.
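A plain-Python sketch of the CSR layout described above (hypothetical helper names, no MindSpore dependency): `indptr[i]:indptr[i + 1]` slices out the values stored for row `i`, and `mv` falls directly out of that slicing.

```python
def csr_to_dense(indptr, indices, values, shape):
    """Expand CSR arrays into a dense nested-list matrix."""
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for k in range(indptr[r], indptr[r + 1]):  # values stored for row r
            dense[r][indices[k]] = values[k]
    return dense

def csr_mv(indptr, indices, values, shape, vector):
    """Multiply a CSR matrix of shape [M, N] by a length-N dense vector."""
    out = [0.0] * shape[0]
    for r in range(shape[0]):
        for k in range(indptr[r], indptr[r + 1]):
            out[r] += values[k] * vector[indices[k]]
    return out

# indptr=[0, 1, 2], indices=[0, 1], values=[2.0, 1.0], shape=(2, 4)
result = csr_mv([0, 1, 2], [0, 1], [2.0, 1.0], (2, 4), [1.0, 1.0, 1.0, 1.0])
```

With these inputs the matrix is `[[2, 0, 0, 0], [0, 1, 0, 0]]`, so the product against an all-ones vector is `[2.0, 1.0]`, matching the `mv` example later in this change.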
@ -101,7 +101,8 @@ const mindspore::HashSet<std::string> sparse_op_set = {{prim::kSparseTensorDense
  {prim::kCSRMV},
  {prim::kCSRMul},
  {prim::kCSRGather},
  {prim::kCSR2COO}};
  {prim::kCSR2COO},
  {prim::kCSRDiv}};

COMMON_EXPORT bool IsCustomCSROP(const AnfNodePtr &cnode);
}  // namespace mindspore
@ -214,19 +214,27 @@ BuiltInTypeMap &GetMethodMap() {
  {
    {"__add__", prim::kPrimRowTensorAdd},  // P.row_tensor_add
  }},
  {kObjectTypeJTagged, {}},
  {kObjectTypeSymbolicKeyType, {}},
  {kObjectTypeEnvType, {}},
  {kObjectTypeCOOTensorType,
   {
     {"to_csr", std::string("coo_to_csr")},
     {"to_dense", std::string("coo_to_dense")},
   }},
  {kObjectTypeCSRTensorType,
   {
     {"to_coo", std::string("csr_to_coo")},
     {"to_dense", std::string("csr_to_dense")},
   }}};
     {"astype", std::string("csr_astype")},      // C.csr_astype
     {"abs", std::string("csr_abs")},            // C.csr_abs
     {"sum", std::string("csr_sum")},            // C.csr_sum
     {"mv", std::string("csr_mv")},              // C.csr_mv
     {"to_tuple", std::string("csr_to_tuple")},  // C.csr_to_tuple
     {"to_coo", std::string("csr_to_coo")},      // C.csr_to_coo
     {"to_dense", std::string("csr_to_dense")},  // C.csr_to_dense
   }},
  {kObjectTypeCOOTensorType,
   {
     {"astype", std::string("coo_astype")},      // C.coo_astype
     {"abs", std::string("coo_abs")},            // C.coo_abs
     {"to_tuple", std::string("coo_to_tuple")},  // C.coo_to_tuple
     {"to_csr", std::string("coo_to_csr")},      // C.coo_to_csr
     {"to_dense", std::string("coo_to_dense")},  // C.coo_to_dense
   }},
  {kObjectTypeJTagged, {}},
  {kObjectTypeSymbolicKeyType, {}},
  {kObjectTypeEnvType, {}}};
  return method_map;
}
@ -254,6 +262,10 @@ BuiltInTypeMap &GetAttrMap() {
  {"values", prim::kPrimCOOTensorGetValues},    // F.coo_tensor_get_values
  {"indices", prim::kPrimCOOTensorGetIndices},  // F.coo_tensor_get_indices
  {"shape", prim::kPrimCOOTensorGetDenseShape}, // F.coo_tensor_get_dense_shape
  {"dtype", std::string("dtype_")},             // C.dtype_
  {"size", std::string("sparse_size_")},        // C.sparse_size_
  {"ndim", std::string("sparse_ndim_")},        // C.sparse_ndim_
  {"itemsize", std::string("itemsize_")},       // C.itemsize_
  }},
  {kObjectTypeCSRTensorType,
  {

@ -261,6 +273,10 @@ BuiltInTypeMap &GetAttrMap() {
  {"values", prim::kPrimCSRTensorGetValues},    // F.csr_tensor_get_values
  {"indices", prim::kPrimCSRTensorGetIndices},  // F.csr_tensor_get_indices
  {"shape", prim::kPrimCSRTensorGetDenseShape}, // F.csr_tensor_get_shape
  {"dtype", std::string("dtype_")},             // C.dtype_
  {"size", std::string("sparse_size_")},        // C.sparse_size_
  {"ndim", std::string("sparse_ndim_")},        // C.sparse_ndim_
  {"itemsize", std::string("itemsize_")},       // C.itemsize_
  }},
  };
  return attr_map;
@ -320,8 +320,8 @@ size_t CountValueNum(const ValueTuplePtr &value_tuple) {

bool IsCustomCSROP(const AnfNodePtr &cnode) {
  MS_EXCEPTION_IF_NULL(cnode);
  const PrimitiveSet prims{prim::kPrimCSRReduceSum, prim::kPrimCSRMul, prim::kPrimCSRMV,
                           prim::kPrimCSRGather, prim::kPrimCSR2COO, prim::kPrimCOO2CSR};
  const PrimitiveSet prims{prim::kPrimCSRReduceSum, prim::kPrimCSRMul, prim::kPrimCSRMV, prim::kPrimCSRGather,
                           prim::kPrimCSR2COO, prim::kPrimCOO2CSR, prim::kPrimCSRDiv};
  return IsOneOfPrimitiveCNode(cnode, prims);
}
}  // namespace mindspore
@ -161,6 +161,8 @@ AbstractBasePtr InferImplCOOTensorGetDenseShape(const AnalysisEnginePtr &, const
                                                const AbstractBasePtrList &args_spec_list);
AbstractBasePtr InferImplCSRMul(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
                                const AbstractBasePtrList &args_spec_list);
AbstractBasePtr InferImplCSRDiv(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
                                const AbstractBasePtrList &args_spec_list);
AbstractBasePtr InferImplCSRMV(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
                               const AbstractBasePtrList &args_spec_list);
AbstractBasePtr InferImplCSRReduceSum(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
@ -454,6 +454,40 @@ AbstractBasePtr InferImplCSRMul(const AnalysisEnginePtr &, const PrimitivePtr &p
  return ret;
}

AbstractBasePtr InferImplCSRDiv(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
                                const AbstractBasePtrList &args_spec_list) {
  // Inputs: a sparse tensor and a dense tensor.
  constexpr auto kCSRDivInputsNum = 2;
  constexpr auto kCSRDivShapeSize = 2;
  const std::string op_name = primitive->name();
  CheckArgsSize(op_name, args_spec_list, kCSRDivInputsNum);
  auto sparse = CheckArg<AbstractCSRTensor>(op_name, args_spec_list, 0);
  auto dense = CheckArg<AbstractTensor>(op_name, args_spec_list, 1);
  MS_EXCEPTION_IF_NULL(sparse);
  MS_EXCEPTION_IF_NULL(sparse->shape());
  MS_EXCEPTION_IF_NULL(sparse->values());
  MS_EXCEPTION_IF_NULL(sparse->indices());
  MS_EXCEPTION_IF_NULL(dense);

  auto sparse_shape = sparse->shape()->shape();
  auto dense_shape = dense->shape()->shape();
  if (sparse_shape.size() != kCSRDivShapeSize || dense_shape.size() != kCSRDivShapeSize) {
    MS_EXCEPTION(ValueError) << "Currently, only " << kCSRDivShapeSize << "-D inputs are supported, "
                             << "but the sparse tensor has " << sparse_shape.size() << " dimensions "
                             << "and the dense tensor has " << dense_shape.size() << " dimensions.";
  }
  auto ret = sparse->values()->Broaden();

  MS_EXCEPTION_IF_NULL(sparse->indices()->shape());
  auto nnz_vec = sparse->indices()->shape()->shape();
  int csr_avg_rows = nnz_vec[0] / dense_shape[0];
  primitive->set_attr(kCSRAvgRows, MakeValue(csr_avg_rows));
  primitive->set_attr(kCSRDenseShape, MakeValue(sparse_shape));
  primitive->set_attr(kIsCSR, MakeValue(true));

  return ret;
}

AbstractBasePtr InferImplCSRMV(const AnalysisEnginePtr &, const PrimitivePtr &primitive,
                               const AbstractBasePtrList &args_spec_list) {
  // Inputs: a sparse tensor and a dense tensor.
@ -235,6 +235,7 @@ PrimitiveEvalImplMap &GetPrimitiveToEvalImplMap() {
  {prim::kPrimCSRTensorGetIndices, R{InferImplCSRTensorGetIndices, nullptr, true}},
  {prim::kPrimCSRTensorGetDenseShape, R{InferImplCSRTensorGetDenseShape, nullptr, true}},
  {prim::kPrimCSRMul, R{InferImplCSRMul, nullptr, true}},
  {prim::kPrimCSRDiv, R{InferImplCSRDiv, nullptr, true}},
  {prim::kPrimCSRMV, R{InferImplCSRMV, nullptr, true}},
  {prim::kPrimCSRReduceSum, R{InferImplCSRReduceSum, nullptr, true}},
  {prim::kPrimCSRGather, R{InferImplCSRGather, nullptr, true}},
@ -167,6 +167,7 @@ constexpr auto kCSRMul = "CSRMul";
constexpr auto kCSRGather = "CSRGather";
constexpr auto kCSR2COO = "CSR2COO";
constexpr auto kCOO2CSR = "COO2CSR";
constexpr auto kCSRDiv = "CSRDiv";

// Meta Function Graph
constexpr auto kJ = "J";
@ -612,6 +613,7 @@ GVAR_DEF(PrimitivePtr, kPrimCSRMul, std::make_shared<Primitive>(kCSRMul));
GVAR_DEF(PrimitivePtr, kPrimCSRGather, std::make_shared<Primitive>(kCSRGather));
GVAR_DEF(PrimitivePtr, kPrimCSR2COO, std::make_shared<Primitive>(kCSR2COO));
GVAR_DEF(PrimitivePtr, kPrimCOO2CSR, std::make_shared<Primitive>(kCOO2CSR));
GVAR_DEF(PrimitivePtr, kPrimCSRDiv, std::make_shared<Primitive>(kCSRDiv));

// TensorList
GVAR_DEF(PrimitivePtr, kPrimTensorListFromTensor, std::make_shared<Primitive>("TensorListFromTensor"));
@ -438,6 +438,7 @@ GVAR_DEF(TypePtr, kTensorTypeFP16, std::make_shared<TensorType>(std::make_shared
GVAR_DEF(TypePtr, kTensorTypeFP32, std::make_shared<TensorType>(std::make_shared<Float>(32)));
GVAR_DEF(TypePtr, kTensorTypeFP64, std::make_shared<TensorType>(std::make_shared<Float>(64)));
GVAR_DEF(TypePtr, kCSRTensorType, std::make_shared<CSRTensorType>());
GVAR_DEF(TypePtr, kCOOTensorType, std::make_shared<COOTensorType>());
}  // namespace mindspore

#endif  // MINDSPORE_CORE_IR_DTYPE_H_
@ -37,12 +37,12 @@ ValuePtr DTypeInferValue(const PrimitivePtr &primitive, const std::vector<Abstra
  if (type->isa<TensorType>()) {
    const std::set<TypePtr> valid_types = {kTensorType};
    return CheckAndConvertUtils::CheckTensorTypeValid("input_x", type, valid_types, op_name);
  } else if (type->isa<CSRTensorType>()) {
    const std::set<TypePtr> valid_types = {kCSRTensorType};
    return CheckAndConvertUtils::CheckCSRTensorTypeValid("input_x", type, valid_types, op_name);
  } else {
    const std::set<TypePtr> valid_types = {kCSRTensorType, kCOOTensorType};
    return CheckAndConvertUtils::CheckSparseTensorTypeValid("input_x", type, valid_types, op_name);
  }
  MS_EXCEPTION(TypeError) << "For Primitive[" << op_name << "], the input argument[input_x] "
                          << "must be a Tensor or CSRTensor but got " << type->ToString() << ".";
                          << "must be a Tensor, CSRTensor or COOTensor, but got " << type->ToString() << ".";
  return nullptr;
}
@ -552,16 +552,22 @@ TypePtr CheckAndConvertUtils::CheckTensorTypeValid(const std::string &type_name,
  return CheckTensorSubClass(type_name, type, check_list, prim_name);
}

TypePtr CheckAndConvertUtils::CheckCSRTensorTypeValid(const std::string &type_name, const TypePtr &type,
                                                      const std::set<TypePtr> &check_list,
                                                      const std::string &prim_name) {
TypePtr CheckAndConvertUtils::CheckSparseTensorTypeValid(const std::string &type_name, const TypePtr &type,
                                                         const std::set<TypePtr> &check_list,
                                                         const std::string &prim_name) {
  MS_EXCEPTION_IF_NULL(type);
  if (!type->isa<CSRTensorType>()) {
  if (!type->isa<CSRTensorType>() && !type->isa<COOTensorType>()) {
    MS_EXCEPTION(TypeError) << "For Primitive[" << prim_name << "], the input argument[" << type_name
                            << "] must be a CSRTensor but got " << type->ToString() << ".";
                            << "] must be a CSRTensor or COOTensor, but got " << type->ToString() << ".";
  }
  TypePtr element = nullptr;
  if (type->isa<CSRTensorType>()) {
    auto csr_tensor_type = type->cast<CSRTensorTypePtr>();
    element = csr_tensor_type->element();
  } else if (type->isa<COOTensorType>()) {
    auto coo_tensor_type = type->cast<COOTensorTypePtr>();
    element = coo_tensor_type->element();
  }
  auto csr_tensor_type = type->cast<CSRTensorTypePtr>();
  auto element = csr_tensor_type->element();
  MS_EXCEPTION_IF_NULL(element);
  return element;
}
@ -239,8 +239,8 @@ class MS_CORE_API CheckAndConvertUtils {
                                      const std::string &prim_name);
  static TypePtr CheckTensorTypeValid(const std::string &type_name, const TypePtr &type,
                                      const std::set<TypePtr> &check_list, const std::string &prim_name);
  static TypePtr CheckCSRTensorTypeValid(const std::string &type_name, const TypePtr &type,
                                         const std::set<TypePtr> &check_list, const std::string &prim_name);
  static TypePtr CheckSparseTensorTypeValid(const std::string &type_name, const TypePtr &type,
                                            const std::set<TypePtr> &check_list, const std::string &prim_name);
  static TypePtr CheckSubClass(const std::string &type_name, const TypePtr &type,
                               const std::set<TypePtr> &template_types, const std::string &prim_name);
  static TypePtr CheckScalarOrTensorTypesSame(const std::map<std::string, TypePtr> &args,
@ -1894,3 +1894,62 @@ def filter_(fun, iter_):
        if fun(elem):
            result.append(elem)
    return result


##################
# Sparse methods #
##################


def csr_astype(x, dtype):
    """Implementation of `astype` for CSRTensor."""
    data = F.cast(x.values, dtype)
    return F.make_csr_tensor(x.indptr, x.indices, data, x.shape)


def csr_sum(x, axis):
    """Implementation of `sum` for CSRTensor."""
    return F.csr_reduce_sum(x, axis)


def csr_abs(x):
    """Implementation of `abs` for CSRTensor."""
    data = F.absolute(x.values)
    return F.make_csr_tensor(x.indptr, x.indices, data, x.shape)


def csr_mv(x, dense_vector):
    """Implementation of `mv` for CSRTensor."""
    return F.csr_mv(x, dense_vector)


def csr_to_tuple(x):
    """Implementation of `to_tuple` for CSRTensor."""
    res = (x.indptr, x.indices, x.values, x.shape)
    return res


def coo_astype(x, dtype):
    """Implementation of `astype` for COOTensor."""
    data = F.cast(x.values, dtype)
    return F.make_coo_tensor(x.indices, data, x.shape)


def coo_to_tuple(x):
    """Implementation of `to_tuple` for COOTensor."""
    return x.indices, x.values, x.shape


def coo_abs(x):
    """Implementation of `abs` for COOTensor."""
    data = F.absolute(x.values)
    return F.make_coo_tensor(x.indices, data, x.shape)


################
# Sparse Attrs #
################


def sparse_size_(x):
    """
    Return the size of SparseTensor.values, i.e. the number of non-zero values in the SparseTensor.
    """
    return size_(x.values)


def sparse_ndim_(x):
    """
    Return the ndim of the SparseTensor, according to its dense shape.
    """
    return F.tuple_len(x.shape)
@ -2424,6 +2424,7 @@ class COOTensor(COOTensor_):
            supplies the values for each element in `indices`.
        shape (tuple(int)): An integer tuple of size `ndims`,
            which specifies the dense_shape of the sparse tensor.
        coo_tensor (COOTensor): A COOTensor object.

    Returns:
        COOTensor, composed of `indices`, `values`, and `shape`.
|
|||
return tensor_operator_registry.get("tensor_scatter_update")(
|
||||
zeros_tensor, self.indices, self.values)
|
||||
|
||||
@property
|
||||
def dtype(self):
|
||||
"""Return the dtype of the values of COOTensor (:class:`mindspore.dtype`)."""
|
||||
return self._dtype
|
||||
|
||||
@property
|
||||
def size(self):
|
||||
"""Return the number of non-zero values."""
|
||||
return self.values.size
|
||||
|
||||
@property
|
||||
def itemsize(self):
|
||||
"""Return the length of one tensor element in bytes."""
|
||||
return self.values.itemsize
|
||||
|
||||
@property
|
||||
def ndim(self):
|
||||
"""Return the number of tensor dimensions."""
|
||||
return len(self.shape)
|
||||
|
||||
def astype(self, dtype):
|
||||
"""
|
||||
Return a copy of the COOTensor, cast its values to a specified type.
|
||||
|
||||
Args:
|
||||
dtype (class:`mindspore.dtype`): Designated tensor dtype.
|
||||
|
||||
Returns:
|
||||
COOTensor.
|
||||
|
||||
Supported Platforms:
|
||||
``Ascend`` ``GPU`` ``CPU``
|
||||
|
||||
Examples:
|
||||
>>> import mindspore as ms
|
||||
>>> from mindspore import Tensor, COOTensor
|
||||
>>> indices = Tensor([[0, 1], [1, 2]])
|
||||
>>> values = Tensor([1, 2], dtype=ms.float32)
|
||||
>>> shape = (3, 4)
|
||||
>>> x = COOTensor(indices, values, shape)
|
||||
>>> print(x.astype(ms.float64).dtype)
|
||||
Float64
|
||||
"""
|
||||
data = self.values.astype(dtype)
|
||||
return COOTensor(self.indices, data, self.shape)
|
||||
|
||||
def to_tuple(self):
|
||||
"""Return indices, values and shape as a tuple."""
|
||||
return self.indices, self.values, self.shape
|
||||
|
||||
def abs(self):
|
||||
"""Return absolute value element-wisely."""
|
||||
data = self.values.abs()
|
||||
return COOTensor(self.indices, data, self.shape)
|
||||
|
||||
|
||||
class CSRTensor(CSRTensor_):
|
||||
"""
|
||||
|
@ -2565,6 +2621,13 @@ class CSRTensor(CSRTensor_):
        res = tensor_operator_registry.get('csr_mul')(self, other)
        return CSRTensor(self.indptr, self.indices, res, self.shape)

    def __div__(self, other):
        res = tensor_operator_registry.get('csr_div')(self, other)
        return CSRTensor(self.indptr, self.indices, res, self.shape)

    def __truediv__(self, other):
        return self.__div__(other)

    @property
    def indptr(self):
        return Tensor(self._indptr)
@ -2581,18 +2644,128 @@ class CSRTensor(CSRTensor_):
    def shape(self):
        return self._shape

    @property
    def dtype(self):
        """Return the dtype of the values of CSRTensor (:class:`mindspore.dtype`)."""
        return self._dtype

    @property
    def size(self):
        """Return the number of non-zero values."""
        return self.values.size

    @property
    def itemsize(self):
        """Return the length of one tensor element in bytes."""
        return self.values.itemsize

    @property
    def ndim(self):
        """Return the number of tensor dimensions."""
        return len(self.shape)

    def to_tuple(self):
        """Return indptr, indices, values and shape as a tuple."""
        return self.indptr, self.indices, self.values, self.shape

    def to_coo(self):
        """Return a COOTensor."""
        row_indices = tensor_operator_registry.get("csr2coo")(self.indptr, self.values.shape[0])
        coo_indices = tensor_operator_registry.get("stack")(1)((row_indices, self.indices))
        return COOTensor(coo_indices, self.values, self.shape)
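The `to_coo` conversion above expands `indptr` into one explicit row index per stored value (the job of the `csr2coo` kernel) and stacks those row indices with the column indices. A plain-Python sketch of that expansion (illustrative helper names, no MindSpore dependency):

```python
def expand_indptr(indptr):
    """Expand CSR row offsets into one row index per stored value."""
    row_indices = []
    for row in range(len(indptr) - 1):
        # indptr[row + 1] - indptr[row] values live in this row
        row_indices.extend([row] * (indptr[row + 1] - indptr[row]))
    return row_indices

def csr_indices_to_coo(indptr, col_indices):
    """Pair each expanded row index with its column index."""
    return [[r, c] for r, c in zip(expand_indptr(indptr), col_indices)]

# indptr=[0, 1, 2], indices=[0, 1] -> COO indices [[0, 0], [1, 1]]
coo_indices = csr_indices_to_coo([0, 1, 2], [0, 1])
```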
    def to_dense(self):
        """Return a dense Tensor."""
        coo_tensor = self.to_coo()
        return coo_tensor.to_dense()

    def astype(self, dtype):
        """
        Return a copy of the CSRTensor, cast its values to a specified type.

        Args:
            dtype (:class:`mindspore.dtype`): Designated tensor dtype.

        Returns:
            CSRTensor.

        Supported Platforms:
            ``Ascend`` ``GPU`` ``CPU``

        Examples:
            >>> import mindspore as ms
            >>> from mindspore import Tensor, CSRTensor
            >>> indptr = Tensor([0, 1, 2])
            >>> indices = Tensor([0, 1])
            >>> values = Tensor([1, 2], dtype=ms.float32)
            >>> shape = (2, 4)
            >>> csr_tensor = CSRTensor(indptr, indices, values, shape)
            >>> print(csr_tensor.astype(ms.float64).dtype)
            Float64
        """
        data = self.values.astype(dtype)
        return CSRTensor(self.indptr, self.indices, data, self.shape)

    def mv(self, dense_vector):
        """
        Sparse matrix-vector multiplication.

        Args:
            dense_vector (Tensor): A dense Tensor.

        Returns:
            Tensor.

        Supported Platforms:
            ``GPU``

        Examples:
            >>> from mindspore import Tensor, CSRTensor
            >>> from mindspore import dtype as mstype
            >>> indptr = Tensor([0, 1, 2])
            >>> indices = Tensor([0, 1])
            >>> values = Tensor([2, 1], dtype=mstype.float32)
            >>> dense_shape = (2, 4)
            >>> csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
            >>> dense = Tensor([[1], [1], [1], [1]], dtype=mstype.float32)
            >>> print(csr_tensor.mv(dense))
            [[2.]
             [1.]]
        """
        return tensor_operator_registry.get("csr_mv")(self, dense_vector)

    def sum(self, axis):
        """
        Reduces a dimension of a CSRTensor by summing all elements in the dimension.

        Args:
            axis (int): The dimension to reduce.

        Returns:
            Tensor, the dtype is the same as `sparse_tensor.values`.

        Supported Platforms:
            ``GPU``

        Examples:
            >>> from mindspore import Tensor, CSRTensor
            >>> from mindspore import dtype as mstype
            >>> indptr = Tensor([0, 1, 2])
            >>> indices = Tensor([0, 1])
            >>> values = Tensor([2, 1], dtype=mstype.float32)
            >>> dense_shape = (2, 4)
            >>> csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
            >>> print(csr_tensor.sum(1))
            [[2.]
             [1.]]
        """
        return tensor_operator_registry.get("csr_reduce_sum")(self, axis)
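Summing a CSR matrix along axis 1, as `sum` does in the docstring example above, only has to walk each row's slice of `values`; a plain-Python sketch of that reduction (no MindSpore dependency):

```python
def csr_row_sums(indptr, values):
    """Sum a CSR matrix along axis 1: one partial sum per row."""
    return [sum(values[indptr[r]:indptr[r + 1]]) for r in range(len(indptr) - 1)]

# indptr=[0, 1, 2], values=[2.0, 1.0] -> row sums [2.0, 1.0]
row_sums = csr_row_sums([0, 1, 2], [2.0, 1.0])
```

Column indices never enter the computation, which is why reducing over axis 1 is the cheap direction for the CSR layout.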
    def abs(self):
        """Return absolute value element-wise."""
        data = self.values.abs()
        return CSRTensor(self.indptr, self.indices, data, self.shape)


def _vm_compare(*args):
    """Implement `vm_compare` for tensor."""
@ -28,4 +28,5 @@ from .csr_mul import _csr_mul_akg
from .csr_gather import _csr_gather_akg
from .csr2coo import _csr2coo_akg
from .coo2csr import _coo2csr_akg
from .csr_div import _csr_div_akg
# Please insert op register in lexicographical order of the filename.
@ -0,0 +1,36 @@
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""CSRDiv op"""
from mindspore.ops.op_info_register import op_info_register, AkgGpuRegOp, DataType

csr_div_op_info = AkgGpuRegOp("CSRDiv") \
    .fusion_type("OPAQUE") \
    .input(0, "indptr") \
    .input(1, "indices") \
    .input(2, "values") \
    .input(4, "dense_tensor") \
    .output(0, "output0") \
    .dtype_format(DataType.I64_Default, DataType.I64_Default, DataType.F32_Default, \
                  DataType.F32_Default, \
                  DataType.F32_Default) \
    .dtype_format(DataType.I32_Default, DataType.I32_Default, DataType.F32_Default, \
                  DataType.F32_Default, \
                  DataType.F32_Default) \
    .get_op_info()


@op_info_register(csr_div_op_info)
def _csr_div_akg():
    """CSRDiv AutoDiff register"""
    return
@ -18,6 +18,7 @@
from . import _compile_utils as utils
from ...composite import base
from ... import functional as F
from ....common import CSRTensor


div = base.MultitypeFuncGraph("div", True)

@ -26,6 +27,17 @@ div is a metafuncgraph object which will div two objects according to input type
using ".register" decorator
"""

@div.register("CSRTensor", "Tensor")
def _csrtensor_div_tensor(x, y):
    """
    Returns x / y where x is CSRTensor and y is Tensor.

    Outputs:
        CSRTensor, equal to x / y.
    """
    data = F.csr_div(x, y)
    return CSRTensor(x.indptr, x.indices, data, x.shape)


@div.register("Number", "Number")
def _div_scalar(x, y):
@ -150,6 +150,7 @@ scatter_nd_update = P.ScatterNdUpdate()
stack = P.Stack()

csr_mul = _csr_ops.CSRMul()
csr_div = _csr_ops.CSRDiv()
csr_mv = _csr_ops.CSRMV()
csr_reduce_sum = _csr_ops.CSRReduceSum()
csr_gather = _csr_ops.CSRGather()

@ -676,6 +677,9 @@ tensor_operator_registry.register('floor', floor)
tensor_operator_registry.register('csr_mul', csr_mul)
tensor_operator_registry.register('csr2coo', csr2coo)
tensor_operator_registry.register('coo2csr', coo2csr)
tensor_operator_registry.register('csr_div', csr_div)
tensor_operator_registry.register('csr_mv', csr_mv)
tensor_operator_registry.register('csr_reduce_sum', csr_reduce_sum)
tensor_operator_registry.register('narrow', narrow)
tensor_operator_registry.register('sort', sort)
tensor_operator_registry.register('zeros', zeros)
@ -36,12 +36,13 @@ class CSRReduceSum(PrimitiveWithInfer):
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor, CSRTensor, ops
        >>> from mindspore import Tensor, CSRTensor
        >>> from mindspore.ops.operations import _csr_ops
        >>> from mindspore import dtype as mstype
        >>> class Net(nn.Cell):
        ...     def __init__(self):
        ...         super(Net, self).__init__()
        ...         self.op = ops.CSRReduceSum()
        ...         self.op = _csr_ops.CSRReduceSum()
        ...
        ...     def construct(self, indptr, indices, values, dense_shape, axis):
        ...         csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
@ -83,12 +84,13 @@ class CSRMV(PrimitiveWithInfer):
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor, CSRTensor, ops
        >>> from mindspore import Tensor, CSRTensor
        >>> from mindspore.ops.operations import _csr_ops
        >>> from mindspore import dtype as mstype
        >>> class Net(nn.Cell):
        ...     def __init__(self):
        ...         super(Net, self).__init__()
        ...         self.op = ops.CSRMV()
        ...         self.op = _csr_ops.CSRMV()
        ...
        ...     def construct(self, indptr, indices, values, dense_shape, dense):
        ...         csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
@ -135,12 +137,13 @@ class CSRMul(PrimitiveWithInfer):
    Examples:
        >>> import mindspore
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor, CSRTensor, ops
        >>> from mindspore import Tensor, CSRTensor
        >>> from mindspore.ops.operations import _csr_ops
        >>> from mindspore import dtype as mstype
        >>> class Net(nn.Cell):
        ...     def __init__(self):
        ...         super(Net, self).__init__()
        ...         self.op = ops.CSRMul()
        ...         self.op = _csr_ops.CSRMul()
        ...
        ...     def construct(self, indptr, indices, values, dense_shape, dense):
        ...         csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
@ -184,12 +187,13 @@ class CSRGather(PrimitiveWithInfer):

    Examples:
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor, ops
        >>> from mindspore import Tensor
        >>> from mindspore.ops.operations import _csr_ops
        >>> from mindspore import dtype as mstype
        >>> class Net(nn.Cell):
        ...     def __init__(self):
        ...         super(Net, self).__init__()
        ...         self.op = ops.CSRGather()
        ...         self.op = _csr_ops.CSRGather()
        ...
        ...     def construct(self, indptr, indices, dense, sparse_shape):
        ...         return self.op(indptr, indices, dense, sparse_shape)
@ -228,11 +232,12 @@ class CSR2COO(PrimitiveWithInfer):

    Examples:
        >>> import mindspore.nn as nn
        >>> from mindspore import Tensor, ops
        >>> from mindspore import Tensor
        >>> from mindspore.ops.operations import _csr_ops
        >>> class Net(nn.Cell):
        ...     def __init__(self):
        ...         super(Net, self).__init__()
        ...         self.op = ops.CSR2COO()
        ...         self.op = _csr_ops.CSR2COO()
        ...
        ...     def construct(self, indptr, nnz):
        ...         return self.op(indptr, nnz)
@@ -267,12 +272,13 @@ class COO2CSR(PrimitiveWithInfer):

     Examples:
         >>> import mindspore.nn as nn
-        >>> from mindspore import Tensor, ops
+        >>> from mindspore import Tensor
+        >>> from mindspore.ops.operations import _csr_ops
         >>> from mindspore import dtype as mstype
         >>> class Net(nn.Cell):
         ...     def __init__(self):
         ...         super(Net, self).__init__()
-        ...         self.op = ops.COO2CSR()
+        ...         self.op = _csr_ops.COO2CSR()
         ...
         ...     def construct(self, row_indices, height):
         ...         return self.op(row_indices, height)
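The index bookkeeping that `CSR2COO` and `COO2CSR` perform above can be sketched with NumPy. These helpers (`csr_to_coo_rows`, `coo_to_csr_indptr`) are illustrative names, not part of this patch or of MindSpore's API:

```python
import numpy as np

def csr_to_coo_rows(indptr, nnz):
    # Expand CSR row pointers into one COO row index per stored value:
    # row i contributes indptr[i + 1] - indptr[i] entries.
    return np.repeat(np.arange(len(indptr) - 1), np.diff(indptr))[:nnz]

def coo_to_csr_indptr(row_indices, height):
    # Count stored values per row, then prefix-sum the counts
    # to rebuild the CSR row-pointer array.
    counts = np.bincount(row_indices, minlength=height)
    return np.concatenate(([0], np.cumsum(counts)))

rows = csr_to_coo_rows(np.array([0, 1, 2]), nnz=2)      # COO row indices
indptr = coo_to_csr_indptr(rows, height=2)              # back to CSR indptr
```

Round-tripping through both helpers recovers the original `indptr`, which is the invariant the two ops maintain.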
@@ -286,3 +292,52 @@ class COO2CSR(PrimitiveWithInfer):
     def __init__(self):
         """Initialize COO2CSR"""
         self.init_prim_io_names(inputs=['row_indices', 'height'], outputs=['output'])
+
+
+class CSRDiv(PrimitiveWithInfer):
+    """
+    Element-wise division of a CSRTensor by a dense tensor.
+
+    Note:
+        The op outputs a 1-D dense tensor with the same shape and values as the input `CSRTensor.values`.
+        If a CSRTensor output is expected, use the `/` operator directly, e.g. `x / y`, which returns a CSRTensor.
+
+    Inputs:
+        - **sparse_tensor** (CSRTensor) - A CSRTensor.
+        - **dense_tensor** (Tensor) - A Tensor.
+
+    Outputs:
+        Tensor, whose dtype and shape are the same as `sparse_tensor.values`.
+
+    Supported Platforms:
+        ``GPU``
+
+    Examples:
+        >>> import mindspore
+        >>> import mindspore.nn as nn
+        >>> from mindspore import Tensor, CSRTensor
+        >>> from mindspore.ops.operations import _csr_ops
+        >>> from mindspore import dtype as mstype
+        >>> class Net(nn.Cell):
+        ...     def __init__(self):
+        ...         super(Net, self).__init__()
+        ...         self.op = _csr_ops.CSRDiv()
+        ...
+        ...     def construct(self, indptr, indices, values, dense_shape, dense):
+        ...         csr_tensor = CSRTensor(indptr, indices, values, dense_shape)
+        ...         return self.op(csr_tensor, dense)
+        >>> indptr = Tensor([0, 1, 2])
+        >>> indices = Tensor([0, 1])
+        >>> values = Tensor([2, 1], dtype=mstype.float32)
+        >>> dense_shape = (2, 4)
+        >>> dense = Tensor([[1., 1, 1, 1], [1, 1, 1, 1]], dtype=mstype.float32)
+        >>> out = Net()(indptr, indices, values, dense_shape, dense)
+        >>> print(out)
+        [2. 1.]
+    """
+
+    @prim_attr_register
+    def __init__(self):
+        """Initialize CSRDiv"""
+        self.init_prim_io_names(inputs=['indptr', 'indices', 'values', 'dense_shape', 'dense_tensor'],
+                                outputs=['output'])
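The note in the docstring above (CSRDiv returns a 1-D tensor matching `values`, not a full dense tensor) can be checked against a small NumPy reference. This `csr_div_values` helper is a sketch of the semantics, not the operator's actual implementation:

```python
import numpy as np

def csr_div_values(indptr, indices, values, dense):
    # Divide each stored CSR element at (row, col) by dense[row, col];
    # the result is 1-D with the same shape as `values`.
    rows = np.repeat(np.arange(len(indptr) - 1), np.diff(indptr))
    return values / dense[rows, indices]

# Same inputs as the docstring example: values [2, 1], all-ones dense tensor.
out = csr_div_values(np.array([0, 1, 2]), np.array([0, 1]),
                     np.array([2., 1.], dtype=np.float32),
                     np.ones((2, 4), dtype=np.float32))
print(out)  # [2. 1.]
```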
@@ -19,6 +19,7 @@ import numpy as np

 from mindspore import Tensor, COOTensor, ms_function, nn, context
 from mindspore.common import dtype as mstype
+from mindspore.ops import functional as F


 context.set_context(mode=context.GRAPH_MODE)
@@ -71,7 +72,7 @@ def test_coo_tensor_in_while():
     """
     class COOTensorWithControlWhile(nn.Cell):
         def __init__(self, shape):
-            super().__init__()
+            super(COOTensorWithControlWhile, self).__init__()
             self.shape = shape

         @ms_function
@@ -127,3 +128,82 @@ def test_coo_method():
     to_dense_expect = np.array(
         [[0., 1., 0., 0.], [0., 0., 2., 0.], [0., 0., 0., 0.]], dtype=np.float32)
     assert np.allclose(to_dense_output.asnumpy(), to_dense_expect)
+
+
+@pytest.mark.level0
+@pytest.mark.platform_x86_gpu_training
+@pytest.mark.env_onecard
+def test_dtype_coo_tensor():
+    """
+    Feature: Test F.dtype with COOTensor.
+    Description: Test: F.dtype(x), x.dtype.
+    Expectation: Success.
+    """
+    indices = Tensor([[0, 1], [1, 2]])
+    values = Tensor([1, 2], dtype=mstype.float32)
+    shape = (3, 4)
+
+    def pynative_test():
+        x = COOTensor(indices, values, shape)
+        return F.dtype(x), x.dtype
+    graph_test = ms_function(pynative_test)
+
+    out1, out2 = pynative_test()
+    out3, out4 = graph_test()
+    assert out1 in [mstype.float32]
+    assert out2 in [mstype.float32]
+    assert out3 in [mstype.float32]
+    assert out4 in [mstype.float32]
+
+
+@pytest.mark.level0
+@pytest.mark.platform_arm_ascend_training
+@pytest.mark.platform_x86_ascend_training
+@pytest.mark.platform_x86_gpu_training
+@pytest.mark.platform_x86_cpu
+@pytest.mark.env_onecard
+def test_coo_attr():
+    """
+    Feature: Test COOTensor GetAttr in Graph and PyNative.
+    Description: Test COOTensor.indices, COOTensor.values, COOTensor.shape.
+    Expectation: Success.
+    """
+    indices = Tensor([[0, 1], [1, 2]])
+    values = Tensor([1, 2], dtype=mstype.float32)
+    shape = (3, 4)
+    coo = COOTensor(indices, values, shape)
+
+    def test_pynative_1():
+        return coo.indices, coo.values, coo.shape
+
+    def test_pynative_2():
+        return coo.astype(mstype.int32)
+
+    def test_pynative_3():
+        return coo.to_tuple()
+
+    test_graph_1 = ms_function(test_pynative_1)
+    test_graph_2 = ms_function(test_pynative_2)
+    test_graph_3 = ms_function(test_pynative_3)
+
+    py_indices, py_values, py_shape = test_pynative_1()
+    py_coo = test_pynative_2()
+    py_tuple = test_pynative_3()
+
+    g_indices, g_values, g_shape = test_graph_1()
+    g_coo = test_graph_2()
+    g_tuple = test_graph_3()
+
+    coo1 = COOTensor(py_indices, py_values, py_shape)
+    coo2 = COOTensor(g_indices, g_values, g_shape)
+    # check coo attr
+    compare_coo(coo1, coo2)
+    # check astype
+    compare_coo(py_coo, g_coo)
+    # check to_tuple
+    assert len(py_tuple) == len(g_tuple)
+    for i, _ in enumerate(py_tuple):
+        if isinstance(py_tuple[i], Tensor):
+            assert (py_tuple[i].asnumpy() == g_tuple[i].asnumpy()).all()
+        else:
+            assert py_tuple[i] == g_tuple[i]
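The `to_dense_expect` array in `test_coo_method` above encodes the scatter semantics of `COOTensor.to_dense()`. A NumPy sketch of that conversion (an illustrative helper, not the MindSpore implementation):

```python
import numpy as np

def coo_to_dense(indices, values, shape):
    # Scatter each stored value into a zero-filled dense array
    # at its (row, col) position from the N x 2 indices tensor.
    dense = np.zeros(shape, dtype=values.dtype)
    dense[indices[:, 0], indices[:, 1]] = values
    return dense

# Same data as the tests above: values 1 and 2 at (0, 1) and (1, 2).
dense = coo_to_dense(np.array([[0, 1], [1, 2]]),
                     np.array([1., 2.], dtype=np.float32), (3, 4))
```

With these inputs `dense` matches the `to_dense_expect` array used in the test.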
@@ -79,17 +79,48 @@ def test_csr_attr():
     indices = Tensor([0, 1])
     values = Tensor([1, 2], dtype=mstype.float32)
     shape = (2, 6)
-    def test_pynative():
-        csr = CSRTensor(indptr, indices, values, shape)
-        return csr.indptr, csr.indices, csr.values, csr.shape
-    test_graph = ms_function(test_pynative)
+    csr = CSRTensor(indptr, indices, values, shape)

-    csr1_tuple = test_pynative()
-    csr2_tuple = test_graph()
+    def test_pynative_1():
+        return csr.indptr, csr.indices

-    csr1 = CSRTensor(*csr1_tuple)
-    csr2 = CSRTensor(*csr2_tuple)
+    def test_pynative_2():
+        return csr.values, csr.shape
+
+    def test_pynative_3():
+        return csr.astype(mstype.int32)
+
+    def test_pynative_4():
+        return csr.to_tuple()
+
+    test_graph_1 = ms_function(test_pynative_1)
+    test_graph_2 = ms_function(test_pynative_2)
+    test_graph_3 = ms_function(test_pynative_3)
+    test_graph_4 = ms_function(test_pynative_4)
+
+    py_indptr, py_indices = test_pynative_1()
+    py_values, py_shape = test_pynative_2()
+    py_csr = test_pynative_3()
+    py_tuple = test_pynative_4()
+
+    g_indptr, g_indices = test_graph_1()
+    g_values, g_shape = test_graph_2()
+    g_csr = test_graph_3()
+    g_tuple = test_graph_4()
+
+    csr1 = CSRTensor(py_indptr, py_indices, py_values, py_shape)
+    csr2 = CSRTensor(g_indptr, g_indices, g_values, g_shape)
+    # check csr attr
     compare_csr(csr1, csr2)
+    # check astype
+    compare_csr(py_csr, g_csr)
+    # check to_tuple
+    assert len(py_tuple) == len(g_tuple)
+    for i, _ in enumerate(py_tuple):
+        if isinstance(py_tuple[i], Tensor):
+            assert (py_tuple[i].asnumpy() == g_tuple[i].asnumpy()).all()
+        else:
+            assert py_tuple[i] == g_tuple[i]


 @pytest.mark.level0
@@ -123,7 +154,7 @@ def test_csr_tensor_in_while():

     class CSRTensorWithControlWhile(nn.Cell):
         def __init__(self, shape):
-            super().__init__()
+            super(CSRTensorWithControlWhile, self).__init__()
             self.op1 = CSRTensorValuesDouble()
             self.op2 = CSRTensorValuesAdd2()
             self.shape = shape
@@ -193,7 +224,7 @@ def test_csr_tensor_in_while_cpu():

     class CSRTensorWithControlWhile(nn.Cell):
         def __init__(self, shape):
-            super().__init__()
+            super(CSRTensorWithControlWhile, self).__init__()
             self.op1 = CSRTensorValuesDouble()
             self.op2 = CSRTensorValuesAdd2()
             self.shape = shape
@@ -241,28 +272,38 @@ def test_csr_ops():
     dense_vector = Tensor([[1.], [1], [1], [1]], dtype=mstype.float32)
     csr_tensor = CSRTensor(indptr, indices, values, dense_shape)

-    def test_ops_pynative():
+    def test_ops_pynative_dense():
         dense1 = csr_reducesum(csr_tensor, 1)
         dense2 = csrmv(csr_tensor, dense_vector)
+        return dense1, dense2
+
+    def test_ops_pynative_sparse():
         sparse1 = csr_tensor * dense_tensor
         sparse2 = dense_tensor * csr_tensor
-        return dense1, dense2, sparse1, sparse2
+        sparse3 = csr_tensor / dense_tensor
+        return sparse1, sparse2, sparse3

-    test_ops_graph = ms_function(test_ops_pynative)
+    test_ops_graph_dense = ms_function(test_ops_pynative_dense)
+    test_ops_graph_sparse = ms_function(test_ops_pynative_sparse)

-    pynative_res = test_ops_pynative()
-    graph_res = test_ops_graph()
+    pynative_res_dense = test_ops_pynative_dense()
+    graph_res_dense = test_ops_graph_dense()
     expect1 = np.array([[2.], [1.]], dtype=np.float32)
     expect2 = np.array([[2.], [1.]], dtype=np.float32)
+    assert np.allclose(pynative_res_dense[0].asnumpy(), expect1)
+    assert np.allclose(pynative_res_dense[1].asnumpy(), expect2)
+    assert np.allclose(graph_res_dense[0].asnumpy(), expect1)
+    assert np.allclose(graph_res_dense[1].asnumpy(), expect2)
+
+    pynative_res_sparse = test_ops_pynative_sparse()
+    graph_res_sparse = test_ops_graph_sparse()
     expect3 = np.array([2., 1.], dtype=np.float32)
-    assert np.allclose(pynative_res[0].asnumpy(), expect1)
-    assert np.allclose(pynative_res[1].asnumpy(), expect2)
-    assert np.allclose(pynative_res[2].values.asnumpy(), expect3)
-    assert np.allclose(pynative_res[3].values.asnumpy(), expect3)
-    assert np.allclose(graph_res[0].asnumpy(), expect1)
-    assert np.allclose(graph_res[1].asnumpy(), expect2)
-    assert np.allclose(graph_res[2].values.asnumpy(), expect3)
-    assert np.allclose(graph_res[3].values.asnumpy(), expect3)
+    assert np.allclose(pynative_res_sparse[0].values.asnumpy(), expect3)
+    assert np.allclose(pynative_res_sparse[1].values.asnumpy(), expect3)
+    assert np.allclose(pynative_res_sparse[2].values.asnumpy(), expect3)
+    assert np.allclose(graph_res_sparse[0].values.asnumpy(), expect3)
+    assert np.allclose(graph_res_sparse[1].values.asnumpy(), expect3)
+    assert np.allclose(graph_res_sparse[2].values.asnumpy(), expect3)


 @pytest.mark.level0
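The `expect1`/`expect2` values `[[2.], [1.]]` in `test_csr_ops` above follow from a plain CSR matrix-vector product. A dense NumPy reference for the CSRMV semantics (a sketch under the same test data, not MindSpore's kernel):

```python
import numpy as np

def csr_mv(indptr, indices, values, shape, vector):
    # Reference CSR matrix-vector product: for each row, multiply every
    # stored value by the matching vector entry and accumulate.
    out = np.zeros((shape[0], 1), dtype=values.dtype)
    for row in range(shape[0]):
        for k in range(indptr[row], indptr[row + 1]):
            out[row, 0] += values[k] * vector[indices[k], 0]
    return out

# Same CSR data as the test: values [2, 1] against an all-ones vector.
res = csr_mv(np.array([0, 1, 2]), np.array([0, 1]),
             np.array([2., 1.], dtype=np.float32), (2, 4),
             np.ones((4, 1), dtype=np.float32))
print(res)  # [[2.] [1.]]
```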
@@ -279,7 +320,7 @@ def test_csrtensor_export_and_import_mindir():
     """
     class TestCSRTensor(nn.Cell):
         def __init__(self, shape):
-            super().__init__()
+            super(TestCSRTensor, self).__init__()
             self.shape = shape

         def construct(self, indptr, indices, values):
@@ -317,7 +358,7 @@ def test_csrops_export_and_import_mindir():
     """
     class TestCSRNet(nn.Cell):
         def __init__(self, shape):
-            super().__init__()
+            super(TestCSRNet, self).__init__()
             self.shape = shape
             self.csr_reducesum = _csr_ops.CSRReduceSum()
             self.csr_mv = _csr_ops.CSRMV()
@@ -421,13 +462,15 @@ def test_dtype_csr_tensor():

     def pynative_test():
         x = CSRTensor(indptr, indices, values, shape)
-        return F.dtype(x)
+        return F.dtype(x), x.dtype
     graph_test = ms_function(pynative_test)

-    out1 = pynative_test()
-    out2 = graph_test()
+    out1, out2 = pynative_test()
+    out3, out4 = graph_test()
     assert out1 in [mstype.float32]
     assert out2 in [mstype.float32]
+    assert out3 in [mstype.float32]
+    assert out4 in [mstype.float32]


 @pytest.mark.level0