!33669 [MS][ops] adjust bitwise ops to new interface
Merge pull request !33669 from KXiong/master
This commit is contained in:
commit
3945caf231
@ -777,3 +777,81 @@ mindspore.Tensor
**Returns:**

Tensor, with the same dimensions as the input `shape`.

.. py:method:: bitwise_and(x)

    Computes the value of :math:`x & y` element-wise.

    .. math::

        out_i = x_{i} \wedge y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.

.. py:method:: bitwise_or(x)

    Computes the value of :math:`x | y` element-wise.

    .. math::

        out_i = x_{i} \mid y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.

.. py:method:: bitwise_xor(x)

    Computes the value of :math:`x ^ y` element-wise.

    .. math::

        out_i = x_{i} \oplus y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.
@ -0,0 +1,8 @@
mindspore.ops.BitwiseAnd
========================

.. py:class:: mindspore.ops.BitwiseAnd()

    Computes the value of :math:`x & y` element-wise.

    Refer to :func:`mindspore.ops.bitwise_and` for more details.
@ -0,0 +1,8 @@
mindspore.ops.BitwiseOr
=======================

.. py:class:: mindspore.ops.BitwiseOr()

    Computes the value of :math:`x | y` element-wise.

    Refer to :func:`mindspore.ops.bitwise_or` for more details.
@ -0,0 +1,8 @@
mindspore.ops.BitwiseXor
========================

.. py:class:: mindspore.ops.BitwiseXor()

    Computes the value of :math:`x ^ y` element-wise.

    Refer to :func:`mindspore.ops.bitwise_xor` for more details.
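For orientation, a small usage sketch of the primitive classes documented above; the expected outputs are taken from the Tensor docstring examples added later in this commit, and the import style is an assumption rather than part of these doc files:

import numpy as np
import mindspore as ms
from mindspore import Tensor
from mindspore.ops import operations as P

# Illustrative values; outputs mirror the docstring examples in tensor.py below.
a = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), ms.int16)
b = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), ms.int16)
print(P.BitwiseAnd()(a, b))  # [ 0 0 1 -1 1 0 1]
print(P.BitwiseOr()(a, b))   # [ 0 1 1 -1 -1 3 3]
print(P.BitwiseXor()(a, b))  # [ 0 1 0 0 -2 3 2]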
@ -0,0 +1,28 @@
mindspore.ops.bitwise_and
=========================

.. py:function:: mindspore.ops.bitwise_and(x, y)

    Computes the value of :math:`x & y` element-wise.

    .. math::

        out_i = x_{i} \wedge y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.
@ -0,0 +1,28 @@
mindspore.ops.bitwise_or
========================

.. py:function:: mindspore.ops.bitwise_or(x, y)

    Computes the value of :math:`x | y` element-wise.

    .. math::

        out_i = x_{i} \mid y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.
@ -0,0 +1,29 @@
mindspore.ops.bitwise_xor
=========================

.. py:function:: mindspore.ops.bitwise_xor(x, y)

    Computes the value of :math:`x ^ y` element-wise.

    .. math::

        out_i = x_{i} \oplus y_{i}

    .. note::
        - Inputs `x` and `y` comply with the `implicit type conversion rules <https://www.mindspore.cn/docs/zh-CN/master/note/operator_list_implicit.html>`_ to keep the data types consistent.
        - The inputs must be two Tensors.

    **Parameters:**

    - **x** (Tensor) - The first input, a Tensor whose data type is uint8, uint16, uint32, uint64, int8, int16, int32 or int64.
    - **y** (Tensor) - The second input, a Tensor with the same data type as `x`.

    **Returns:**

    Tensor, with the same data type as `x`.

    **Raises:**

    - **TypeError** - If `x` or `y` is not a Tensor.
    - **RuntimeError** - If `x` and `y` do not comply with the parameter type conversion rules.
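The functional interface documented in these three files can be exercised the same way the new tests do; a minimal sketch (values are illustrative, and the `F` alias for `mindspore.ops.functional` is an assumption taken from the test file added below):

import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F

x = Tensor(np.array([1, 2, 3], dtype=np.int32))
y = Tensor(np.array([3, 2, 1], dtype=np.int32))
# Element-wise AND / OR / XOR, matching np.bitwise_* on the same data.
print(F.bitwise_and(x, y))  # [1 2 1]
print(F.bitwise_or(x, y))   # [3 2 3]
print(F.bitwise_xor(x, y))  # [2 0 2]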
@ -181,6 +181,9 @@ BuiltInTypeMap &GetMethodMap() {
{"transpose", std::string("transpose")}, // P.transpose
{"flatten", std::string("flatten")}, // P.reshape(,-1)
{"reshape", std::string("reshape")}, // P.reshape()
{"bitwise_and", std::string("bitwise_and")}, // P.BitwiseAnd()
{"bitwise_or", std::string("bitwise_or")}, // P.BitwiseOr()
{"bitwise_xor", std::string("bitwise_xor")}, // P.BitwiseXor()
{"ravel", std::string("ravel")}, // P.reshape(,(-1,))
{"swapaxes", std::string("swapaxes")}, // P.transpose()
{"narrow", std::string("narrow")}, // narrow()
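These entries map the `bitwise_and`/`bitwise_or`/`bitwise_xor` method names to the Python fallback implementations, which is what lets the new Tensor methods be used inside graph mode. A hedged sketch (the cell and values are illustrative and not part of this patch):

import numpy as np
import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor

class BitwiseAndCell(nn.Cell):
    """Illustrative cell that calls the Tensor method resolved through the map above."""
    def construct(self, x, y):
        return x.bitwise_and(y)

context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
x = Tensor(np.array([5, 6, 7], dtype=np.int32))
y = Tensor(np.array([3, 3, 3], dtype=np.int32))
print(BitwiseAndCell()(x, y))  # [1 2 3]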
@ -18,12 +18,8 @@
#include <string>
#include <vector>
#include <cmath>
#include <type_traits>
#include <unordered_map>
#include <memory>
#include <map>
#include <functional>
#include <algorithm>
#include <utility>
@ -35,66 +31,87 @@ namespace kernel {
namespace {
const size_t kBitwiseInputsNum = 2;
const size_t kBitwiseOutputsNum = 1;
} // namespace

template <typename T>
class BitwiseCpuTypeFunc : public CpuKernelFunc {
public:
BitwiseCpuTypeFunc() = default;
~BitwiseCpuTypeFunc() override = default;
bool RunFunc(const std::vector<AddressPtr> &inputs, const std::vector<AddressPtr> &,
const std::vector<AddressPtr> &outputs) override {
const auto *input1 = reinterpret_cast<T *>(inputs[0]->addr);
const auto *input2 = reinterpret_cast<T *>(inputs[1]->addr);
auto *output = reinterpret_cast<T *>(outputs[0]->addr);
compute_func_(this, input1, input2, output);
return true;
bool BitwiseCpuKernelMod::Init(const BaseOperatorPtr &base_operator, const std::vector<KernelTensorPtr> &inputs,
const std::vector<KernelTensorPtr> &outputs) {
if (!base_operator) {
MS_LOG(ERROR) << "For " << kernel_type_ << ", cast " << kernel_type_ << " ops failed!";
return false;
}
kernel_name_ = base_operator->name();
if (inputs.size() != kBitwiseInputsNum || outputs.size() != kBitwiseOutputsNum) {
MS_LOG(ERROR) << "For " << kernel_name_ << ": input and output size should be " << kBitwiseInputsNum << " and "
<< kBitwiseOutputsNum << ", but got " << inputs.size() << " and " << outputs.size();
return false;
}
input_type_1_ = inputs[0]->GetDtype();
input_type_2_ = inputs[1]->GetDtype();
if (input_type_1_ != input_type_2_) {
MS_LOG(ERROR) << "For '" << kernel_name_ << "', input1 and input2 must have the same type. But got input1 type "
<< input_type_1_ << ", input2 type " << input_type_2_;
return false;
}

void InitFunc(const CNodePtr &kernel_node) override {
MS_EXCEPTION_IF_NULL(kernel_node);
kernel_name_ = common::AnfAlgo::GetCNodeName(kernel_node);
size_t input_num = common::AnfAlgo::GetInputTensorNum(kernel_node);
CHECK_KERNEL_INPUTS_NUM(input_num, kBitwiseInputsNum, common::AnfAlgo::GetCNodeName(kernel_node));
size_t output_num = common::AnfAlgo::GetOutputTensorNum(kernel_node);
CHECK_KERNEL_OUTPUTS_NUM(output_num, kBitwiseOutputsNum, common::AnfAlgo::GetCNodeName(kernel_node));
input_type_1_ = AnfAlgo::GetInputDeviceDataType(kernel_node, 0);
input_type_2_ = AnfAlgo::GetOutputDeviceDataType(kernel_node, 0);
if (input_type_1_ != input_type_2_) {
MS_LOG(EXCEPTION) << "For '" << kernel_name_
<< "', input1 and input2 must have the same type. But got input1 type " << input_type_1_
<< ", input2 type " << input_type_2_;
}
input_shape_1_ = common::AnfAlgo::GetPrevNodeOutputInferShape(kernel_node, 0);
input_shape_2_ = common::AnfAlgo::GetPrevNodeOutputInferShape(kernel_node, 1);
output_shape_ = common::AnfAlgo::GetOutputInferShape(kernel_node, 0);
auto kernel_attr = GetKernelAttrFromTensors(inputs, outputs);
auto [is_match, index] = MatchKernelAttr(kernel_attr, GetOpSupport());
if (!is_match) {
MS_LOG(ERROR) << kernel_name_ << " does not support this kernel data type: " << kernel_attr;
return false;
}
kernel_func_ = func_list_[index].second;
return true;
}

static const std::unordered_map<std::string, TypeComputeFunc> bitwise_func_map{
{prim::kPrimBitwiseAnd->name(), &BitwiseCpuTypeFunc<T>::BitwiseCompute},
{prim::kPrimBitwiseOr->name(), &BitwiseCpuTypeFunc<T>::BitwiseCompute},
{prim::kPrimBitwiseXor->name(), &BitwiseCpuTypeFunc<T>::BitwiseCompute}};
if (bitwise_func_map.find(kernel_name_) == bitwise_func_map.end()) {
MS_LOG(EXCEPTION) << "For '" << kernel_name_ << "', only supports operators in "
<< Unorderedmap2Str(bitwise_func_map) << ", but got " << kernel_name_;
}
compute_func_ = bitwise_func_map.at(kernel_name_);
bool BitwiseCpuKernelMod::Resize(const BaseOperatorPtr &base_operator, const std::vector<KernelTensorPtr> &inputs,
const std::vector<KernelTensorPtr> &outputs,
const std::map<uint32_t, tensor::TensorPtr> &others) {
if (!NativeCpuKernelMod::Resize(base_operator, inputs, outputs, others)) {
MS_LOG(WARNING) << kernel_name_ << " reinit failed.";
return false;
}
std::vector<int64_t> input_shape_1 = inputs[0]->GetShapeVector();
std::vector<int64_t> input_shape_2 = inputs[1]->GetShapeVector();
std::vector<int64_t> output_shape = outputs[0]->GetShapeVector();
auto in_shape_size_1 = input_shape_1.size();
auto in_shape_size_2 = input_shape_2.size();
auto output_shape_size = output_shape.size();
if (in_shape_size_1 != in_shape_size_2 || in_shape_size_1 != output_shape_size) {
MS_LOG(ERROR) << "For '" << kernel_name_ << "', input shape is invalid, input0 shape size should be the same as "
<< "input1 shape size and output shape size but got input0 shape size " << in_shape_size_1
<< ", input1 shape size " << in_shape_size_2 << ", output shape size " << output_shape_size;
return false;
}

private:
std::string kernel_name_;
TypeId input_type_1_{kTypeUnknown};
TypeId input_type_2_{kTypeUnknown};
std::vector<size_t> input_shape_1_;
std::vector<size_t> input_shape_2_;
std::vector<size_t> output_shape_;
if (output_shape.size() > max_dims_) {
MS_LOG(EXCEPTION) << "For '" << kernel_name_
<< "', the dimension of output should be less than or equal to 7, but got " << output_shape.size()
<< ".";
}
input_shape_1_.resize(input_shape_1.size(), 1);
input_shape_2_.resize(input_shape_2.size(), 1);
output_shape_.resize(output_shape.size(), 1);
for (size_t i = 0; i < input_shape_1.size(); i++) {
input_shape_1_[i] = static_cast<size_t>(input_shape_1[i]);
}
for (size_t i = 0; i < input_shape_2.size(); i++) {
input_shape_2_[i] = static_cast<size_t>(input_shape_2[i]);
}
for (size_t i = 0; i < output_shape.size(); i++) {
output_shape_[i] = static_cast<size_t>(output_shape[i]);
}

void BitwiseCompute(const T *input1, const T *input2, T *output);

using TypeComputeFunc = std::function<void(BitwiseCpuTypeFunc *, const T *, const T *, T *)>;
TypeComputeFunc compute_func_{nullptr};
};
return true;
}

template <typename T>
void BitwiseCpuTypeFunc<T>::BitwiseCompute(const T *input1, const T *input2, T *output) {
bool BitwiseCpuKernelMod::LaunchKernel(const std::vector<kernel::AddressPtr> &inputs,
const std::vector<kernel::AddressPtr> &outputs) {
CHECK_KERNEL_INPUTS_NUM(inputs.size(), kBitwiseInputsNum, kernel_name_);
CHECK_KERNEL_OUTPUTS_NUM(outputs.size(), kBitwiseOutputsNum, kernel_name_);
T *input1 = reinterpret_cast<T *>(inputs[0]->addr);
T *input2 = reinterpret_cast<T *>(inputs[1]->addr);
T *output = reinterpret_cast<T *>(outputs[0]->addr);
if (output_shape_.size() == 0) {
(void)output_shape_.insert(output_shape_.begin(), 1);
}
@ -107,115 +124,45 @@ void BitwiseCpuTypeFunc<T>::BitwiseCompute(const T *input1, const T *input2, T *
auto iter = base_iter;
iter.SetPos(start);
for (size_t i = start; i < end; i++) {
T y_val = (input2[iter.GetInputPosB()]);
T bit_val = static_cast<T>(sizeof(T) * 8 - 1);
if (y_val > bit_val) {
y_val = bit_val;
}
if (this->kernel_name_.compare(prim::kPrimBitwiseAnd->name()) == 0) {
output[i] = static_cast<T>(input1[iter.GetInputPosA()] & y_val);
output[i] = static_cast<T>(input1[iter.GetInputPosA()] & input2[iter.GetInputPosB()]);
} else if (this->kernel_name_.compare(prim::kPrimBitwiseOr->name()) == 0) {
output[i] = static_cast<T>(input1[iter.GetInputPosA()] | y_val);
output[i] = static_cast<T>(input1[iter.GetInputPosA()] | input2[iter.GetInputPosB()]);
} else if (this->kernel_name_.compare(prim::kPrimBitwiseXor->name()) == 0) {
output[i] = static_cast<T>(input1[iter.GetInputPosA()] ^ y_val);
output[i] = static_cast<T>(input1[iter.GetInputPosA()] ^ input2[iter.GetInputPosB()]);
} else {
MS_LOG(EXCEPTION) << "For '" << this->kernel_name_ << "', kernel name must be '" << this->kernel_name_
MS_LOG(EXCEPTION) << "For '" << this->kernel_name_ << "', kernel name should be '" << this->kernel_name_
<< "', but got " << this->kernel_name_;
}
iter.GenNextPos();
}
};
ParallelLaunchAutoSearch(task, output_size_, this, &parallel_search_info_);
return true;
}

template <typename T>
std::shared_ptr<CpuKernelFunc> SpecializeBitwiseFunc() {
return std::make_shared<BitwiseCpuTypeFunc<T>>();
}
using BitwiseCpuFuncCreator = std::function<std::shared_ptr<CpuKernelFunc>()>;
static std::map<std::string, std::vector<std::pair<KernelAttr, BitwiseCpuFuncCreator>>> kernel_attr_lists = {
{prim::kPrimBitwiseAnd->name(),
{{KernelAttr().AddInputAttr(kNumberTypeInt8).AddInputAttr(kNumberTypeInt8).AddOutputAttr(kNumberTypeInt8),
SpecializeBitwiseFunc<int8_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt16).AddInputAttr(kNumberTypeInt16).AddOutputAttr(kNumberTypeInt16),
SpecializeBitwiseFunc<int16_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt32).AddInputAttr(kNumberTypeInt32).AddOutputAttr(kNumberTypeInt32),
SpecializeBitwiseFunc<int>},
{KernelAttr().AddInputAttr(kNumberTypeInt64).AddInputAttr(kNumberTypeInt64).AddOutputAttr(kNumberTypeInt64),
SpecializeBitwiseFunc<int64_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt8).AddInputAttr(kNumberTypeUInt8).AddOutputAttr(kNumberTypeUInt8),
SpecializeBitwiseFunc<uint8_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt16).AddInputAttr(kNumberTypeUInt16).AddOutputAttr(kNumberTypeUInt16),
SpecializeBitwiseFunc<uint16_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt32).AddInputAttr(kNumberTypeUInt32).AddOutputAttr(kNumberTypeUInt32),
SpecializeBitwiseFunc<uint32_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt64).AddInputAttr(kNumberTypeUInt64).AddOutputAttr(kNumberTypeUInt64),
SpecializeBitwiseFunc<uint64_t>}}},
{prim::kPrimBitwiseOr->name(),
{{KernelAttr().AddInputAttr(kNumberTypeInt8).AddInputAttr(kNumberTypeInt8).AddOutputAttr(kNumberTypeInt8),
SpecializeBitwiseFunc<int8_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt16).AddInputAttr(kNumberTypeInt16).AddOutputAttr(kNumberTypeInt16),
SpecializeBitwiseFunc<int16_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt32).AddInputAttr(kNumberTypeInt32).AddOutputAttr(kNumberTypeInt32),
SpecializeBitwiseFunc<int>},
{KernelAttr().AddInputAttr(kNumberTypeInt64).AddInputAttr(kNumberTypeInt64).AddOutputAttr(kNumberTypeInt64),
SpecializeBitwiseFunc<int64_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt8).AddInputAttr(kNumberTypeUInt8).AddOutputAttr(kNumberTypeUInt8),
SpecializeBitwiseFunc<uint8_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt16).AddInputAttr(kNumberTypeUInt16).AddOutputAttr(kNumberTypeUInt16),
SpecializeBitwiseFunc<uint16_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt32).AddInputAttr(kNumberTypeUInt32).AddOutputAttr(kNumberTypeUInt32),
SpecializeBitwiseFunc<uint32_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt64).AddInputAttr(kNumberTypeUInt64).AddOutputAttr(kNumberTypeUInt64),
SpecializeBitwiseFunc<uint64_t>}}},
{prim::kPrimBitwiseXor->name(),
{{KernelAttr().AddInputAttr(kNumberTypeInt8).AddInputAttr(kNumberTypeInt8).AddOutputAttr(kNumberTypeInt8),
SpecializeBitwiseFunc<int8_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt16).AddInputAttr(kNumberTypeInt16).AddOutputAttr(kNumberTypeInt16),
SpecializeBitwiseFunc<int16_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt32).AddInputAttr(kNumberTypeInt32).AddOutputAttr(kNumberTypeInt32),
SpecializeBitwiseFunc<int>},
{KernelAttr().AddInputAttr(kNumberTypeInt64).AddInputAttr(kNumberTypeInt64).AddOutputAttr(kNumberTypeInt64),
SpecializeBitwiseFunc<int64_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt8).AddInputAttr(kNumberTypeUInt8).AddOutputAttr(kNumberTypeUInt8),
SpecializeBitwiseFunc<uint8_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt16).AddInputAttr(kNumberTypeUInt16).AddOutputAttr(kNumberTypeUInt16),
SpecializeBitwiseFunc<uint16_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt32).AddInputAttr(kNumberTypeUInt32).AddOutputAttr(kNumberTypeUInt32),
SpecializeBitwiseFunc<uint32_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt64).AddInputAttr(kNumberTypeUInt64).AddOutputAttr(kNumberTypeUInt64),
SpecializeBitwiseFunc<uint64_t>}}}};
} // namespace

void BitwiseCpuKernelMod::InitKernel(const CNodePtr &kernel_node) {
MS_EXCEPTION_IF_NULL(kernel_node);

kernel_name_ = common::AnfAlgo::GetCNodeName(kernel_node);
if (kernel_name_ != kernel_type_) {
MS_LOG(EXCEPTION) << "For '" << kernel_name_ << "', kernel type must be '" << kernel_name_ << "', but got "
<< kernel_type_;
}

auto kernel_attr = GetKernelAttrFromNode(kernel_node);
auto [is_match, index] = MatchKernelAttr(kernel_attr, GetOpSupport());
if (!is_match) {
MS_LOG(EXCEPTION) << "'" << kernel_name_ << "' does not support this kernel data type: " << kernel_attr;
}

func_obj_ = kernel_attr_lists[kernel_name_][index].second();
func_obj_->InitFunc(kernel_node);
}
std::vector<std::pair<KernelAttr, BitwiseCpuKernelMod::BitwiseLaunchFunc>> BitwiseCpuKernelMod::func_list_ = {
{KernelAttr().AddInputAttr(kNumberTypeInt8).AddInputAttr(kNumberTypeInt8).AddOutputAttr(kNumberTypeInt8),
&BitwiseCpuKernelMod::LaunchKernel<int8_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt16).AddInputAttr(kNumberTypeInt16).AddOutputAttr(kNumberTypeInt16),
&BitwiseCpuKernelMod::LaunchKernel<int16_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt32).AddInputAttr(kNumberTypeInt32).AddOutputAttr(kNumberTypeInt32),
&BitwiseCpuKernelMod::LaunchKernel<int32_t>},
{KernelAttr().AddInputAttr(kNumberTypeInt64).AddInputAttr(kNumberTypeInt64).AddOutputAttr(kNumberTypeInt64),
&BitwiseCpuKernelMod::LaunchKernel<int64_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt8).AddInputAttr(kNumberTypeUInt8).AddOutputAttr(kNumberTypeUInt8),
&BitwiseCpuKernelMod::LaunchKernel<uint8_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt16).AddInputAttr(kNumberTypeUInt16).AddOutputAttr(kNumberTypeUInt16),
&BitwiseCpuKernelMod::LaunchKernel<uint16_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt32).AddInputAttr(kNumberTypeUInt32).AddOutputAttr(kNumberTypeUInt32),
&BitwiseCpuKernelMod::LaunchKernel<uint32_t>},
{KernelAttr().AddInputAttr(kNumberTypeUInt64).AddInputAttr(kNumberTypeUInt64).AddOutputAttr(kNumberTypeUInt64),
&BitwiseCpuKernelMod::LaunchKernel<uint64_t>}};

std::vector<KernelAttr> BitwiseCpuKernelMod::GetOpSupport() {
auto iter = kernel_attr_lists.find(kernel_type_);
if (iter == kernel_attr_lists.end()) {
MS_LOG(EXCEPTION) << "For '" << kernel_name_ << "', kernel type must be '" << kernel_name_ << "', but got "
<< kernel_type_;
}

std::vector<KernelAttr> support_list;
(void)std::transform(iter->second.begin(), iter->second.end(), std::back_inserter(support_list),
[](const std::pair<KernelAttr, BitwiseCpuFuncCreator> &pair) { return pair.first; });
(void)std::transform(func_list_.begin(), func_list_.end(), std::back_inserter(support_list),
[](const std::pair<KernelAttr, BitwiseLaunchFunc> &pair) { return pair.first; });

return support_list;
}
@ -23,6 +23,8 @@
#include <iostream>
#include <string>
#include <complex>
#include <map>
#include <utility>

#include "plugin/device/cpu/kernel/cpu_kernel.h"
#include "plugin/factory/ms_factory.h"
@ -30,23 +32,44 @@
namespace mindspore {
namespace kernel {
class BitwiseCpuKernelMod : public DeprecatedNativeCpuKernelMod {
class BitwiseCpuKernelMod : public NativeCpuKernelMod {
public:
BitwiseCpuKernelMod() = default;
explicit BitwiseCpuKernelMod(const std::string &kernel_type) : kernel_type_(kernel_type) {}
~BitwiseCpuKernelMod() override = default;
void InitKernel(const CNodePtr &kernel_node) override;

bool Launch(const std::vector<AddressPtr> &inputs, const std::vector<AddressPtr> &workspace,
const std::vector<AddressPtr> &outputs) override {
return func_obj_->RunFunc(inputs, workspace, outputs);
return kernel_func_(this, inputs, outputs);
}

bool Init(const BaseOperatorPtr &base_operator, const std::vector<KernelTensorPtr> &inputs,
const std::vector<KernelTensorPtr> &outputs) override;

bool Resize(const BaseOperatorPtr &base_operator, const std::vector<KernelTensorPtr> &inputs,
const std::vector<KernelTensorPtr> &outputs,
const std::map<uint32_t, tensor::TensorPtr> &others = std::map<uint32_t, tensor::TensorPtr>()) override;

protected:
std::vector<KernelAttr> GetOpSupport() override;

private:
std::shared_ptr<CpuKernelFunc> func_obj_;
template <typename T>
bool LaunchKernel(const std::vector<kernel::AddressPtr> &inputs, const std::vector<kernel::AddressPtr> &outputs);

using BitwiseLaunchFunc = std::function<bool(BitwiseCpuKernelMod *, const std::vector<kernel::AddressPtr> &,
const std::vector<kernel::AddressPtr> &)>;
static std::vector<std::pair<KernelAttr, BitwiseLaunchFunc>> func_list_;
BitwiseLaunchFunc kernel_func_;

std::string kernel_type_{"Unknown"};
std::string kernel_name_;
TypeId input_type_1_{kTypeUnknown};
TypeId input_type_2_{kTypeUnknown};
std::vector<size_t> input_shape_1_;
std::vector<size_t> input_shape_2_;
std::vector<size_t> output_shape_;
const size_t max_dims_{7};
};
} // namespace kernel
} // namespace mindspore
@ -31,7 +31,6 @@ from ...ops.composite.multitype_ops import _compile_utils as compile_utils
from ...ops.operations._inner_ops import Format
from ...ops.primitive import constexpr


__all__ = ['MultitypeFuncGraph', 'env_get', 'hyper_add', 'zeros_like', 'ones_like']

shape_ = P.Shape()
@ -184,7 +183,7 @@ def strides_(x):
    return strides


def astype(x, dtype, copy=True): # pylint: disable=redefined-outer-name
def astype(x, dtype, copy=True):  # pylint: disable=redefined-outer-name
    """
    Return a copy of the tensor, casted to a specified type.
@ -391,10 +390,10 @@ def swapaxes(x, axis1, axis2):
    new_perm = None
    if axis2 + 1 < x.ndim:
        new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
                   perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1] + perm[axis2 + 1:]
            perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1] + perm[axis2 + 1:]
    else:
        new_perm = perm[0:axis1] + perm[axis2:axis2 + 1] + \
                   perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1]
            perm[axis1 + 1:axis2] + perm[axis1:axis1 + 1]

    return F.transpose(x, new_perm)
@ -590,7 +589,7 @@ def copy(x):
    return x


def max(x, axis=None, keepdims=False, initial=None, where=True): # pylint: disable=redefined-builtin
def max(x, axis=None, keepdims=False, initial=None, where=True):  # pylint: disable=redefined-builtin
    """
    Returns the maximum of a tensor or maximum along an axis.
@ -635,7 +634,7 @@ def max(x, axis=None, keepdims=False, initial=None, where=True): # pylint: disab
                       axis=axis, keepdims=keepdims, initial=initial, where=where)


def min(x, axis=None, keepdims=False, initial=None, where=True): # pylint: disable=redefined-builtin
def min(x, axis=None, keepdims=False, initial=None, where=True):  # pylint: disable=redefined-builtin
    """
    Returns the minimum of a tensor or minimum along an axis.
@ -779,11 +778,11 @@ def diagonal(x, offset=0, axis1=0, axis2=1):
    e = e.astype(mstype.float32)
    if offset > 0:
        e_left = F.fill(dtype, (n, offset), 0)
        e_right = e[..., 0:m-offset:1]
        e_right = e[..., 0:m - offset:1]
        e = P.Concat(1)((e_left, e_right)).astype(dtype)
    elif offset < 0:
        e_upper = F.fill(dtype, (-offset, m), 0)
        e_lower = e[0:n+offset:1, ...]
        e_lower = e[0:n + offset:1, ...]
        e = P.Concat(0)((e_upper, e_lower)).astype(dtype)
    e = P.BroadcastTo(shape)(e)
@ -791,7 +790,7 @@ def diagonal(x, offset=0, axis1=0, axis2=1):
    res = F.reduce_sum(prod.astype(mstype.float32), -1)

    begin = ()
    for i in range(ndim-2):
    for i in range(ndim - 2):
        begin += (0,)
    last_dim_begin = max_(0, -offset)
    begin += (last_dim_begin,)
@ -1031,7 +1030,7 @@ def searchsorted(x, v, side='left', sorter=None):
    sort_range = F.make_range(get_log2_size(F.shape_mul(a.shape) + 1))
    for _ in sort_range:
        mid = (i - F.neg_tensor(j))//2
        mid = (i - F.neg_tensor(j)) // 2
        mask = less_op(v, F.gather_nd(a, mid.reshape(mid.shape + (1,))))
        i = F.select(mask, i, mid)
        j = F.select(mask, mid, j)
@ -1271,7 +1270,7 @@ def std(x, axis=None, ddof=0, keepdims=False):
    return F.tensor_pow(x_var, 0.5)


def sum(x, axis=None, dtype=None, keepdims=False, initial=None): # pylint: disable=redefined-builtin
def sum(x, axis=None, dtype=None, keepdims=False, initial=None):  # pylint: disable=redefined-builtin
    """
    Return sum of array elements over a given axis.
@ -1537,6 +1536,21 @@ def view(x, *shape):
    return F.reshape(x, shape)


def bitwise_and(x, y):
    """Returns bitwise `and` of two tensors element-wise."""
    return F.bitwise_and(x, y)


def bitwise_or(x, y):
    """Returns bitwise `or` of two tensors element-wise."""
    return F.bitwise_or(x, y)


def bitwise_xor(x, y):
    """Returns bitwise `xor` of two tensors element-wise."""
    return F.bitwise_xor(x, y)


def while_cond(x):
    """For while condition, if the condition is a tensor, the loop will not be unrolled"""
    if F.issubclass_(F.typeof(x), F.typeof(mstype.tensor)):
@ -1756,6 +1770,7 @@ class SequenceIterator:
    Iterator to use for sequences like List, Array.
    """

    def __init__(self, idx, seq):
        self.idx = idx
        self.seq = seq
@ -1810,6 +1825,7 @@ def list_insert(self_, index, obj):
    """Insert into list"""
    return _insert(self_, index, obj)


#################
# Array methods #
#################
@ -1828,6 +1844,7 @@ def filter_(fun, iter_):
            result.append(elem)
    return result


##################
# Sparse methods #
##################
@ -1877,6 +1894,7 @@ def coo_abs(x):
    data = F.absolute(x.values)
    return F.make_coo_tensor(x.indices, data, x.shape)


################
# Sparse Attrs #
################
@ -636,6 +636,78 @@ class Tensor(Tensor_):
            shape = shape[0]
        return tensor_operator_registry.get('reshape')()(self, shape)

    def bitwise_and(self, x):
        """
        Returns bitwise `and` of two tensors element-wise.

        Refer to :func:`mindspore.ops.bitwise_and` for more details.

        Args:
            x (Tensor): The input tensor.

        Returns:
            Tensor, has the same type as `x`.

        Examples:
            >>> import mindspore
            >>> import numpy as np
            >>> from mindspore import Tensor
            >>> a = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
            >>> b = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
            >>> output = a.bitwise_and(b)
            >>> print(output)
            [ 0 0 1 -1 1 0 1]
        """
        self._init_check()
        return tensor_operator_registry.get('bitwise_and')(self, x)

    def bitwise_or(self, x):
        """
        Returns bitwise `or` of two tensors element-wise.

        Refer to :func:`mindspore.ops.bitwise_or` for more details.

        Args:
            x (Tensor): The input tensor.

        Returns:
            Tensor, has the same type as `x`.

        Examples:
            >>> import mindspore
            >>> import numpy as np
            >>> from mindspore import Tensor
            >>> a = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
            >>> b = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
            >>> output = a.bitwise_or(b)
            >>> print(output)
            [ 0 1 1 -1 -1 3 3]
        """
        self._init_check()
        return tensor_operator_registry.get('bitwise_or')(self, x)

    def bitwise_xor(self, x):
        """
        Returns bitwise `xor` of two tensors element-wise.

        Refer to :func:`mindspore.ops.bitwise_xor` for more details.

        Args:
            x (Tensor): The input tensor.

        Returns:
            Tensor, has the same type as `x`.

        Examples:
            >>> import mindspore
            >>> import numpy as np
            >>> from mindspore import Tensor
            >>> a = Tensor(np.array([0, 0, 1, -1, 1, 1, 1]), mindspore.int16)
            >>> b = Tensor(np.array([0, 1, 1, -1, -1, 2, 3]), mindspore.int16)
            >>> output = a.bitwise_xor(b)
            >>> print(output)
            [ 0 1 0 0 -2 3 2]
        """
        self._init_check()
        return tensor_operator_registry.get('bitwise_xor')(self, x)

    def expand_as(self, x):
        """
        Expand the dimension of target tensor to the dimension of input tensor.
@ -83,8 +83,12 @@ def _handle_broadcasting(x, x_shape, y_shape):
@vmap_rules_getters.register(P.NotEqual)
@vmap_rules_getters.register(P.LogicalOr)
@vmap_rules_getters.register(P.LogicalAnd)
@vmap_rules_getters.register(P.BitwiseAnd)
@vmap_rules_getters.register(P.BitwiseOr)
@vmap_rules_getters.register(P.BitwiseXor)
def get_broadcast_binary_op_vmap_rule(prim, axis_size):
    """VmapRule for binary operations with broadcasting, such as `Add` and `Sub`."""

    def vmap_rule(x_bdim, y_bdim):
        is_all_none, result = vmap_general_preprocess(prim, x_bdim, y_bdim)
        if is_all_none:
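With these decorators the three bitwise primitives reuse the generic broadcasting vmap rule. A small sketch of exercising it, mirroring the vmap test added at the end of this change (axes and values are illustrative):

import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import operations as P
from mindspore.ops.functional import vmap

context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
op = P.BitwiseAnd()

def batched_and(x, y):
    return op(x, y)

# x is batched along axis 0, y along axis 1; each batch element is a length-2 vector.
x = Tensor(np.array([[1, 2], [3, 4], [5, 6]], dtype=np.int32))
y = Tensor(np.array([[-3, -2, -1], [3, 2, 1]], dtype=np.int32))
out = vmap(batched_and, in_axes=(0, 1), out_axes=0)(x, y)
print(out.shape)  # (3, 2)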
@ -142,6 +142,7 @@ def pack(x):
                        ".")
    return stack(x)


partial = P.Partial()
# depend: mount a node to another node
depend = P.Depend()
@ -970,6 +971,9 @@ tensor_operator_registry.register('broadcast_to', P.BroadcastTo)
tensor_operator_registry.register('matmul', P.MatMul)
tensor_operator_registry.register('argmax', P.Argmax)
tensor_operator_registry.register('cumsum', P.CumSum)
tensor_operator_registry.register('bitwise_and', bitwise_and)
tensor_operator_registry.register('bitwise_or', bitwise_or)
tensor_operator_registry.register('bitwise_xor', bitwise_xor)
tensor_operator_registry.register('reduce_max', P.ReduceMax)
tensor_operator_registry.register('reduce_min', P.ReduceMin)
tensor_operator_registry.register('maximum', P.Maximum)
@ -0,0 +1,295 @@
# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

import numpy as np
import pytest

import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore.ops.functional import vmap


class OpNetWrapper(nn.Cell):
    """OpNetWrapper"""

    def __init__(self, op):
        """__init__"""
        super(OpNetWrapper, self).__init__()
        self.op = op

    def construct(self, *inputs):
        """construct"""
        return self.op(*inputs)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_and(shape, dtype):
    """
    Feature: BitwiseAnd cpu kernel.
    Description: test the correctness of the BitwiseAnd cpu kernel.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseAnd()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_and(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_and(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_or(shape, dtype):
    """
    Feature: BitwiseOr cpu kernel.
    Description: test the correctness of the BitwiseOr cpu kernel.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseOr()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_or(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_or(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_xor(shape, dtype):
    """
    Feature: BitwiseXor cpu kernel.
    Description: test the correctness of the BitwiseXor cpu kernel.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseXor()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_xor(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_xor(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('op', [P.BitwiseAnd(), P.BitwiseOr(), P.BitwiseXor()])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_vmap(op, dtype):
    """
    Feature: Bitwise cpu kernel.
    Description: test the correctness of the Bitwise vmap feature.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')

    def test_add(x, y):
        return op(x, y)

    x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]).astype(dtype))
    y = Tensor(np.array([[-3, -2, -1], [3, 2, 1]]).astype(dtype))
    outputs = vmap(test_add, in_axes=(0, 1), out_axes=0)(x, y)

    x_manual = np.array([[1, 2], [3, 4], [5, 6]]).astype(dtype)
    y_manual = np.array([[-3, 3], [-2, 2], [-1, 1]]).astype(dtype)

    def manually_batched(xs, ws):
        output = []
        for i in range(xs.shape[0]):
            output.append(test_add(Tensor(xs[i]), Tensor(ws[i])).asnumpy())
        return np.stack(output)

    expect = manually_batched(x_manual, y_manual)
    assert np.allclose(outputs.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
@pytest.mark.parametrize('mode', [context.GRAPH_MODE, context.PYNATIVE_MODE])
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
def test_bitwise_and_tensor_interface(dtype, mode, shape):
    """
    Feature: BitwiseAnd cpu kernel.
    Description: test the correctness of the BitwiseAnd tensor interface.
    Expectation: Success.
    """
    context.set_context(mode=mode, device_target='CPU')
    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = Tensor(x_np).bitwise_and(Tensor(y_np))
    expect = np.bitwise_and(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
@pytest.mark.parametrize('mode', [context.GRAPH_MODE, context.PYNATIVE_MODE])
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
def test_bitwise_or_tensor_interface(dtype, mode, shape):
    """
    Feature: BitwiseOr cpu kernel.
    Description: test the correctness of the BitwiseOr tensor interface.
    Expectation: Success.
    """
    context.set_context(mode=mode, device_target='CPU')
    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = Tensor(x_np).bitwise_or(Tensor(y_np))
    expect = np.bitwise_or(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
@pytest.mark.parametrize('mode', [context.GRAPH_MODE, context.PYNATIVE_MODE])
@pytest.mark.parametrize('shape', [(2,), (4, 5), (3, 4, 5, 6), (3, 4, 5, 6, 2)])
def test_bitwise_xor_tensor_interface(dtype, mode, shape):
    """
    Feature: BitwiseXor cpu kernel.
    Description: test the correctness of the BitwiseXor tensor interface.
    Expectation: Success.
    """
    context.set_context(mode=mode, device_target='CPU')
    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*shape) * prop).astype(dtype)
    y_np = (np.random.randn(*shape) * prop).astype(dtype)
    outputs = Tensor(x_np).bitwise_xor(Tensor(y_np))
    expect = np.bitwise_xor(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('xshape', [(2, 3)])
@pytest.mark.parametrize('yshape', [(1, 1), (1, 3), (2, 1)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_and_broadcast(xshape, yshape, dtype):
    """
    Feature: BitwiseAnd cpu kernel.
    Description: test the correctness of BitwiseAnd cpu kernel broadcasting.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseAnd()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*xshape) * prop).astype(dtype)
    y_np = (np.random.randn(*yshape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_and(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_and(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('xshape', [(2, 3)])
@pytest.mark.parametrize('yshape', [(1, 1), (1, 3), (2, 1)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_or_broadcast(xshape, yshape, dtype):
    """
    Feature: BitwiseOr cpu kernel.
    Description: test the correctness of BitwiseOr cpu kernel broadcasting.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseOr()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*xshape) * prop).astype(dtype)
    y_np = (np.random.randn(*yshape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_or(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_or(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)


@pytest.mark.level0
@pytest.mark.platform_x86_cpu
@pytest.mark.env_onecard
@pytest.mark.parametrize('xshape', [(2, 3)])
@pytest.mark.parametrize('yshape', [(1, 1), (1, 3), (2, 1)])
@pytest.mark.parametrize('dtype', [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64])
def test_bitwise_xor_broadcast(xshape, yshape, dtype):
    """
    Feature: BitwiseXor cpu kernel.
    Description: test the correctness of BitwiseXor cpu kernel broadcasting.
    Expectation: Success.
    """
    context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    op = P.BitwiseXor()
    op_wrapper = OpNetWrapper(op)

    prop = 100 if np.random.random() > 0.5 else -100
    x_np = (np.random.randn(*xshape) * prop).astype(dtype)
    y_np = (np.random.randn(*yshape) * prop).astype(dtype)
    outputs = op_wrapper(Tensor(x_np), Tensor(y_np))
    outputs_functional = F.bitwise_xor(Tensor(x_np), Tensor(y_np))
    expect = np.bitwise_xor(x_np, y_np)

    assert np.allclose(outputs.asnumpy(), expect)
    assert np.allclose(outputs_functional.asnumpy(), expect)