[MS][LITE] code docs lite python api fix v3

This commit is contained in:
luoyuan 2022-08-15 11:08:54 +08:00
parent 58d6725764
commit 05c9ac7da9
11 changed files with 154 additions and 85 deletions


@ -24,18 +24,23 @@ mindspore_lite.Context
- **enable_parallel** (bool, optional) - Whether to enable parallel model inference or parallel training. Default: False.
Raises:
- **TypeError** - `thread_num` is not an int or None.
- **TypeError** - `inter_op_parallel_num` is not an int or None.
- **TypeError** - `thread_affinity_mode` is not an int or None.
- **TypeError** - `thread_affinity_core_list` is not a list or None.
- **TypeError** - `thread_affinity_core_list` is a list, but the elements are not int or None.
- **TypeError** - `thread_num` is neither an int nor None.
- **TypeError** - `inter_op_parallel_num` is neither an int nor None.
- **TypeError** - `thread_affinity_mode` is neither an int nor None.
- **TypeError** - `thread_affinity_core_list` is neither a list nor None.
- **TypeError** - `thread_affinity_core_list` is a list, but the elements are neither int nor None.
- **TypeError** - `enable_parallel` is not a bool.
- **ValueError** - `thread_num` is less than 0.
- **ValueError** - `inter_op_parallel_num` is less than 0.
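As a plain-Python sketch (not mindspore_lite itself), the Raises list above amounts to an argument check along these lines; the function name and messages are illustrative:

```python
# Illustrative sketch of the Context argument checks listed above.
# Not mindspore_lite code; the name and messages are hypothetical.
def check_context_args(thread_num=None, inter_op_parallel_num=None,
                       thread_affinity_mode=None, thread_affinity_core_list=None,
                       enable_parallel=False):
    if thread_num is not None and not isinstance(thread_num, int):
        raise TypeError("thread_num must be int or None")
    if isinstance(thread_num, int) and thread_num < 0:
        raise ValueError("thread_num must not be less than 0")
    if inter_op_parallel_num is not None and not isinstance(inter_op_parallel_num, int):
        raise TypeError("inter_op_parallel_num must be int or None")
    if isinstance(inter_op_parallel_num, int) and inter_op_parallel_num < 0:
        raise ValueError("inter_op_parallel_num must not be less than 0")
    if thread_affinity_mode is not None and not isinstance(thread_affinity_mode, int):
        raise TypeError("thread_affinity_mode must be int or None")
    if thread_affinity_core_list is not None:
        if not isinstance(thread_affinity_core_list, list):
            raise TypeError("thread_affinity_core_list must be list or None")
        # per the docs, list elements may be int or None
        if any(not isinstance(c, (int, type(None))) for c in thread_affinity_core_list):
            raise TypeError("thread_affinity_core_list elements must be int or None")
    if not isinstance(enable_parallel, bool):
        raise TypeError("enable_parallel must be bool")
```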
.. py:method:: append_device_info(device_info)
Set the data type of the tensor.
Append one user-defined device info to the context.
.. note::
After GPU device info is added, CPU device info must be added before the context is used, because when an op is not supported on the GPU, the system will try whether the CPU supports it; at that time, it needs to switch to the context with CPU device info.
After Ascend device info is added, CPU device info must be added before the context is used, because when an op is not supported on Ascend, the system will try whether the CPU supports it; at that time, it needs to switch to the context with CPU device info.
Parameters:
- **device_info** (DeviceInfo) - The instantiated device info.
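The fallback rule in the note can be pictured with a toy dispatch loop (hypothetical op/support tables, not mindspore_lite internals): each op is tried on the devices in the order their device infos were appended, so CPU device info must come last to catch ops the accelerator lacks.

```python
# Toy illustration of accelerator-first, CPU-fallback dispatch.
# The op names and support tables below are invented for the sketch.
SUPPORTED = {
    "gpu": {"conv2d", "matmul"},
    "cpu": {"conv2d", "matmul", "topk"},
}

def pick_device(op, device_list):
    # try devices in the order they were appended to the context
    for device in device_list:
        if op in SUPPORTED.get(device, ()):
            return device
    raise RuntimeError(f"op {op!r} unsupported on all devices")
```

With `["gpu", "cpu"]`, an op the GPU lacks falls through to the CPU; without the trailing CPU entry, it fails outright.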


@ -11,11 +11,11 @@ mindspore_lite.Converter
Parameters:
- **fmk_type** (FmkType) - Framework type of the input model. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX | FmkType.MINDIR | FmkType.TFLITE.
- **model_file** (str) - Path of the input model file. e.g. "/home/user/model.prototxt". Options: TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite".
- **output_file** (str) - Path of the output model file. No suffix is needed; the .ms suffix is generated automatically. e.g. "/home/user/model.prototxt" generates a model named model.prototxt.ms under /home/user/.
- **output_file** (str) - Path of the output model file. The .ms suffix is generated automatically. e.g. "/home/user/model.prototxt" generates a model named model.prototxt.ms under /home/user/.
- **weight_file** (str, optional) - Input model weight file. Required only when the framework type of the input model is FmkType.CAFFE. e.g. "/home/user/model.caffemodel". Default: "".
- **config_file** (str, optional) - Path of the configuration file for post-training quantization or offline split-operator parallelism, for disabling the operator fusion ability and setting the plugin to the .so path. Default: "".
- **weight_fp16** (bool, optional) - Serialize constant tensors in Float16 data type; only valid for constant tensors in Float32 data type. Default: "".
- **input_shape** (dict{string:list[int]}, optional) - Set the dimensions of the model inputs; the order of the input dimensions is consistent with the original model. For some models the model structure can be further optimized, but the converted model may lose the dynamic-shape feature. e.g. {"inTensor1": [1, 32, 32, 32], "inTensor2": [1, 1, 32, 32]}. Default: "".
- **input_shape** (dict{str: list[int]}, optional) - Set the dimensions of the model inputs; the order of the input dimensions is consistent with the original model. For some models the model structure can be further optimized, but the converted model may lose the dynamic-shape feature. e.g. {"inTensor1": [1, 32, 32, 32], "inTensor2": [1, 1, 32, 32]}. Default: "".
- **input_format** (Format, optional) - Specify the input format of the exported model. Only valid for four-dimensional inputs. Options: Format.NHWC | Format.NCHW. Default: Format.NHWC.
- **input_data_type** (DataType, optional) - Data type of the input tensors; by default it is the same as the type defined in the model. Default: DataType.FLOAT32.
- **output_data_type** (DataType, optional) - Data type of the output tensors; by default it is the same as the type defined in the model. Default: DataType.FLOAT32.
@ -23,7 +23,7 @@ mindspore_lite.Converter
- **decrypt_key** (str, optional) - The key used to decrypt the file, expressed in hexadecimal characters. Only valid when fmk_type is FmkType.MINDIR. Default: "".
- **decrypt_mode** (str, optional) - Decryption method for the MindIR file. Only valid when decrypt_key is set. Options: "AES-GCM" | "AES-CBC". Default: "AES-GCM".
- **enable_encryption** (bool, optional) - Whether to export an encrypted model. Default: False.
- **encrypt_key** (str, optional) - The key used to encrypt the file, expressed in hexadecimal characters. Only AES-GCM is supported, and the key length is 16. Default: "".
- **encrypt_key** (str, optional) - The key used to encrypt the file, expressed in hexadecimal characters. Only supported when decrypt_mode is "AES-GCM"; the key length is 16. Default: "".
- **infer** (bool, optional) - Whether to do pre-inference after conversion. Default: False.
- **train_model** (bool, optional) - Whether the model will be trained on device. Default: False.
- **no_fusion** (bool, optional) - Avoid fusion optimization; fusion optimization is allowed by default. Default: False.
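The output_file naming rule described above can be captured in one line (a sketch of the rule; the converter itself performs this when writing the model):

```python
# Sketch of the documented naming rule: the converter appends ".ms"
# to whatever path is passed as output_file.
def converted_model_path(output_file):
    return output_file + ".ms"
```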
@ -34,12 +34,11 @@ mindspore_lite.Converter
- **TypeError** - `output_file` is not a str.
- **TypeError** - `weight_file` is not a str.
- **TypeError** - `config_file` is not a str.
- **TypeError** - `config_info` is a dict, but the keys are not str.
- **TypeError** - `config_info` is a dict, but the values are not str.
- **TypeError** - `weight_fp16` is not a bool.
- **TypeError** - `input_shape` is not a dict or None.
- **TypeError** - `input_shape` is a dict, but the values are not list.
- **TypeError** - `input_shape` is a dict, the values are list, but the elements of the values are not int.
- **TypeError** - `input_shape` is neither a dict nor None.
- **TypeError** - `input_shape` is a dict, but the keys are not str.
- **TypeError** - `input_shape` is a dict, the keys are str, but the values are not list.
- **TypeError** - `input_shape` is a dict, the keys are str, the values are list, but the elements of the values are not int.
- **TypeError** - `input_format` is not a Format.
- **TypeError** - `input_data_type` is not a DataType.
- **TypeError** - `output_data_type` is not a DataType.
@ -68,6 +67,9 @@ mindspore_lite.Converter
Get the config info of the conversion. Used together with the set_config_info method for online inference scenarios. Please assign values with the set_config_info method before calling get_config_info.
Returns:
dict{str: dict{str: str}}, the config info which has been set in the conversion.
.. py:method:: set_config_info(section, config_info)
Set the config info for conversion. Used together with the get_config_info method for online inference scenarios.
@ -81,12 +83,14 @@ mindspore_lite.Converter
- "mixed_bit_weight_quant_param": Mixed-bit weight quantization parameter section. One of the quantization configuration parameters.
- "full_quant_param": Full quantization parameter section. One of the quantization configuration parameters.
- "data_preprocess_param": Data preprocessing parameter section. One of the quantization configuration parameters.
- "registry": Extension configuration parameter section. One of the quantization configuration parameters.
- "registry": Extension configuration parameter section. One of the extension configuration parameters.
- **config_info** (dict{str}, optional) - List of configuration parameters. Together with section, it sets individual parameters of the config file. e.g. for section "common_quant_param", config_info is {"quant_type":"WEIGHT_QUANT"}. Default: None.
- **config_info** (dict{str: str}, optional) - List of configuration parameters. Together with section, it sets individual parameters of the config file. e.g. for section "common_quant_param", config_info is {"quant_type":"WEIGHT_QUANT"}. Default: None.
For the configuration parameters of post-training quantization, see `quantization <https://www.mindspore.cn/lite/docs/zh-CN/master/use/post_training_quantization.html>`_.
For the configuration parameters of the extension, see `extension <https://www.mindspore.cn/lite/docs/zh-CN/master/use/nnie.html#%E6%89%A9%E5%B1%95%E9%85%8D%E7%BD%AE>`_.
Raises:
- **TypeError** - `section` is not a str.
- **TypeError** - `config_info` is not a dict.
- **TypeError** - `config_info` is a dict, but the keys are not str.
- **TypeError** - `config_info` is a dict, the keys are str, but the values are not str.
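The section/config_info contract of set_config_info can be sketched in plain Python (illustrative names, not the real Converter; the section list is taken from the documentation above):

```python
# Sketch of the set_config_info checks described above: section must be
# one of the documented section names, and config_info must be a dict
# with str keys and str values. Not mindspore_lite code.
VALID_SECTIONS = {
    "common_quant_param",
    "mixed_bit_weight_quant_param",
    "full_quant_param",
    "data_preprocess_param",
    "registry",
}

def check_config_info(section, config_info):
    if not isinstance(section, str):
        raise TypeError("section must be str")
    if not isinstance(config_info, dict):
        raise TypeError("config_info must be dict")
    for key, value in config_info.items():
        if not isinstance(key, str):
            raise TypeError("config_info keys must be str")
        if not isinstance(value, str):
            raise TypeError("config_info values must be str")
    # report whether the section is one the docs list
    return section in VALID_SECTIONS
```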


@ -38,7 +38,7 @@ mindspore_lite.DataType
* **Usage**
Since `mindspore_lite.Tensor` in the Python API is a direct wrapper of the C++ API using pybind11, "DataType" has a one-to-one correspondence between the Python API and the C++ API, and the ways to modify `DataType` are the set and get methods of the `tensor` class.
Since `mindspore_lite.Tensor` in the Python API is a direct wrapper of the C++ API using pybind11, `DataType` has a one-to-one correspondence between the Python API and the C++ API, and the ways to modify `DataType` are the set and get methods of the `tensor` class.
- `set_data_type`: Query `data_type_py_cxx_map` with the `DataType` of the Python API as the key to get the `DataType` of the C++ API, and pass it to the `set_data_type` method of the C++ API.
- `get_data_type`: Get the `DataType` of the C++ API through the `get_data_type` method of the C++ API, query `data_type_cxx_py_map` with the `DataType` of the C++ API as the key, and return the `DataType` of the Python API.
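The two-way lookup described above can be sketched with ordinary dicts; the enum members and values below are stand-ins, not the real Python/C++ DataType values:

```python
# Minimal sketch of the py<->cxx DataType mapping pattern described above.
# Enum names and the integer value are illustrative placeholders.
from enum import Enum

class PyDataType(Enum):
    FLOAT32 = "FLOAT32"

class CxxDataType(Enum):
    kNumberTypeFloat32 = 43  # placeholder value for the sketch

data_type_py_cxx_map = {PyDataType.FLOAT32: CxxDataType.kNumberTypeFloat32}
data_type_cxx_py_map = {CxxDataType.kNumberTypeFloat32: PyDataType.FLOAT32}

def set_data_type(py_type):
    # look up the C++ DataType keyed by the Python DataType, then hand it on
    return data_type_py_cxx_map[py_type]

def get_data_type(cxx_type):
    # look up the Python DataType keyed by the C++ DataType
    return data_type_cxx_py_map[cxx_type]
```

The same pattern applies to `Format` with `format_py_cxx_map` and `format_cxx_py_map` in the next section.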


@ -43,7 +43,7 @@ mindspore_lite.Format
* **Usage**
Since `mindspore_lite.Tensor` in the Python API is a direct wrapper of the C++ API using pybind11, "Format" has a one-to-one correspondence between the Python API and the C++ API, and the ways to modify `Format` are the set and get methods of the `tensor` class.
Since `mindspore_lite.Tensor` in the Python API is a direct wrapper of the C++ API using pybind11, `Format` has a one-to-one correspondence between the Python API and the C++ API, and the ways to modify `Format` are the set and get methods of the `tensor` class.
- `set_format`: Query `format_py_cxx_map` with the `Format` of the Python API as the key to get the `Format` of the C++ API, and pass it to the `set_format` method of the C++ API.
- `get_format`: Get the `Format` of the C++ API through the `get_format` method of the C++ API, query `format_cxx_py_map` with the `Format` of the C++ API as the key, and return the `Format` of the Python API.


@ -15,7 +15,7 @@ mindspore_lite.ModelParallelRunner
Raises:
- **TypeError** - `model_path` is not a str.
- **TypeError** - `runner_config` is not a RunnerConfig or None.
- **TypeError** - `runner_config` is neither a RunnerConfig nor None.
- **RuntimeError** - The file path of `model_path` does not exist.
- **RuntimeError** - Failed to init the model parallel runner.


@ -10,12 +10,12 @@ mindspore_lite.RunnerConfig
Parameters:
- **context** (Context, optional) - Define the context used to store options during execution. Default: None.
- **workers_num** (int, optional) - The number of workers. Default: None.
- **config_info** (dict{str, dict{str, str}}, optional) - Nested map for passing the model weight file path. e.g. {"weight": {"weight_path": "/home/user/weight.cfg"}}. Default: None. The key currently supports ["weight"]; the value is in dict format, its key currently supports ["weight_path"], and its value is the path of the weight, e.g. "/home/user/weight.cfg".
- **config_info** (dict{str: dict{str: str}}, optional) - Nested map for passing the model weight file path. e.g. {"weight": {"weight_path": "/home/user/weight.cfg"}}. Default: None. The key currently supports ["weight"]; the value is in dict format, its key currently supports ["weight_path"], and its value is the path of the weight, e.g. "/home/user/weight.cfg".
Raises:
- **TypeError** - `context` is not a Context or None.
- **TypeError** - `workers_num` is not an int or None.
- **TypeError** - `config_info` is not a dict or None.
- **TypeError** - `context` is neither a Context nor None.
- **TypeError** - `workers_num` is neither an int nor None.
- **TypeError** - `config_info` is neither a dict nor None.
- **TypeError** - `config_info` is a dict, but the keys are not str.
- **TypeError** - `config_info` is a dict, the keys are str, but the values are not dict.
- **TypeError** - `config_info` is a dict, the keys are str, the values are dict, but the keys of the values are not str.
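The nested config_info layout and the key checks above can be sketched in plain Python (illustrative name, not the real RunnerConfig):

```python
# Sketch of the RunnerConfig config_info checks described above:
# an optional dict whose keys are str and whose values are dicts
# with str keys. Not mindspore_lite code.
def check_runner_config_info(config_info):
    if config_info is None:
        return None
    if not isinstance(config_info, dict):
        raise TypeError("config_info must be dict or None")
    for key, value in config_info.items():
        if not isinstance(key, str):
            raise TypeError("config_info keys must be str")
        if not isinstance(value, dict):
            raise TypeError("config_info values must be dict")
        for inner_key in value:
            if not isinstance(inner_key, str):
                raise TypeError("keys of config_info values must be str")
    return config_info

# the shape the docs describe: {"weight": {"weight_path": ...}}
weight_cfg = {"weight": {"weight_path": "/home/user/weight.cfg"}}
```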


@ -9,7 +9,7 @@ mindspore_lite.Tensor
- **tensor** (Tensor, optional) - The data to be stored in a new tensor. It can be another Tensor. Default: None.
Raises:
- **TypeError** - `tensor` is not a Tensor or None.
- **TypeError** - `tensor` is neither a Tensor nor None.
.. py:method:: get_data_size()
@ -51,7 +51,7 @@ mindspore_lite.Tensor
Get the shape of the tensor.
Returns:
list[int], the shape of the tensor.
list[int], the shape of the tensor.
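The relation between shape, element count, and data size (FLOAT32 = 4 bytes per element) that shows up in this commit's Tensor examples can be checked with a small sketch:

```python
# Back-of-envelope sketch of how a tensor's data size relates to its
# shape, matching the Tensor examples later in this commit
# (e.g. shape [1, 2, 2, 3] -> 12 elements -> 48 bytes for FLOAT32).
from math import prod

def element_num(shape):
    # an empty shape denotes a scalar with one element
    return prod(shape) if shape else 1

def data_size(shape, bytes_per_element=4):
    return element_num(shape) * bytes_per_element
```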
.. py:method:: get_tensor_name()


@ -47,18 +47,18 @@ class Context:
Default: False.
Raises:
TypeError: `thread_num` is not an int or None.
TypeError: `inter_op_parallel_num` is not an int or None.
TypeError: `thread_affinity_mode` is not an int or None.
TypeError: `thread_affinity_core_list` is not a list or None.
TypeError: `thread_affinity_core_list` is a list, but the elements are not int or None.
TypeError: `thread_num` is neither an int nor None.
TypeError: `inter_op_parallel_num` is neither an int nor None.
TypeError: `thread_affinity_mode` is neither an int nor None.
TypeError: `thread_affinity_core_list` is neither a list nor None.
TypeError: `thread_affinity_core_list` is a list, but the elements are neither int nor None.
TypeError: `enable_parallel` is not a bool.
ValueError: `thread_num` is less than 0.
ValueError: `inter_op_parallel_num` is less than 0.
Examples:
>>> import mindspore_lite as mslite
>>> context = mslite.Context(thread_num=1, inter_op_parallel_num=1, thread_afffinity_mode=1,
>>> context = mslite.Context(thread_num=1, inter_op_parallel_num=1, thread_affinity_mode=1,
... enable_parallel=False)
>>> print(context)
thread_num: 1,
@ -66,7 +66,7 @@ class Context:
thread_affinity_mode: 1,
thread_affinity_core_list: [],
enable_parallel: False,
device_list: 0, .
device_list: .
"""
def __init__(self, thread_num=None, inter_op_parallel_num=None, thread_affinity_mode=None, \
@ -107,6 +107,15 @@ class Context:
"""
Append one user-defined device info to the context.
Note:
After GPU device info is added, CPU device info must be added before the context is used.
Because when an op is not supported on the GPU, the system will try whether the CPU supports it.
At that time, it needs to switch to the context with CPU device info.
After Ascend device info is added, CPU device info must be added before the context is used.
Because when an op is not supported on Ascend, the system will try whether the CPU supports it.
At that time, it needs to switch to the context with CPU device info.
Args:
device_info (DeviceInfo): the instance of device info.
@ -118,8 +127,8 @@ class Context:
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
>>> print(context)
thread_num: 2,
inter_op_parallel_num: 1,
thread_num: 0,
inter_op_parallel_num: 0,
thread_affinity_mode: 0,
thread_affinity_core_list: [],
enable_parallel: False,
@ -158,6 +167,13 @@ class CPUDeviceInfo(DeviceInfo):
enable_fp16: True.
>>> context = mslite.Context()
>>> context.append_device_info(cpu_device_info)
>>> print(context)
thread_num: 0,
inter_op_parallel_num: 0,
thread_affinity_mode: 0,
thread_affinity_core_list: [],
enable_parallel: False,
device_list: 0, .
"""
def __init__(self, enable_fp16=False):
@ -194,11 +210,11 @@ class GPUDeviceInfo(DeviceInfo):
enable_fp16: False.
>>> cpu_device_info = mslite.CPUDeviceInfo(enable_fp16=False)
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo(gpu_device_info))
>>> context.append_device_info(mslite.CPUDeviceInfo(cpu_device_info))
>>> context.append_device_info(gpu_device_info)
>>> context.append_device_info(cpu_device_info)
>>> print(context)
thread_num: 2,
inter_op_parallel_num: 1,
thread_num: 0,
inter_op_parallel_num: 0,
thread_affinity_mode: 0,
thread_affinity_core_list: [],
enable_parallel: False,
@ -233,7 +249,7 @@ class GPUDeviceInfo(DeviceInfo):
>>> device_info = mslite.GPUDeviceInfo(device_id=1, enable_fp16=True)
>>> rank_id = device_info.get_rank_id()
>>> print(rank_id)
1
0
"""
return self._device_info.get_rank_id()
@ -276,8 +292,8 @@ class AscendDeviceInfo(DeviceInfo):
>>> context.append_device_info(ascend_device_info)
>>> context.append_device_info(cpu_device_info)
>>> print(context)
thread_num: 2,
inter_op_parallel_num: 1,
thread_num: 0,
inter_op_parallel_num: 0,
thread_affinity_mode: 0,
thread_affinity_core_list: [],
enable_parallel: False


@ -75,7 +75,7 @@ class Converter:
Options: "AES-GCM" | "AES-CBC". Default: "AES-GCM".
enable_encryption (bool, optional): Whether to export the encryption model. Default: False.
encrypt_key (str, optional): The key used to encrypt the file, expressed in hexadecimal characters.
Only support AES-GCM and the key length is 16. Default: "".
Only supported when decrypt_mode is "AES-GCM"; the key length is 16. Default: "".
infer (bool, optional): Whether to do pre-inference after convert. Default: False.
train_model (bool, optional): Whether the model is going to be trained on device. Default: False.
no_fusion (bool, optional): Avoid fusion optimization; fusion optimization is allowed by default. Default: False.
@ -86,12 +86,11 @@ class Converter:
TypeError: `output_file` is not a str.
TypeError: `weight_file` is not a str.
TypeError: `config_file` is not a str.
TypeError: `config_info` is a dict, but the keys are not str.
TypeError: `config_info` is a dict, but the values are not str.
TypeError: `weight_fp16` is not a bool.
TypeError: `input_shape` is not a dict or None.
TypeError: `input_shape` is a dict, but the values are not list.
TypeError: `input_shape` is a dict, the values are list, but the value's elements are not int.
TypeError: `input_shape` is neither a dict nor None.
TypeError: `input_shape` is a dict, but the keys are not str.
TypeError: `input_shape` is a dict, the keys are str, but the values are not list.
TypeError: `input_shape` is a dict, the keys are str, the values are list, but the value's elements are not int.
TypeError: `input_format` is not a Format.
TypeError: `input_data_type` is not a DataType.
TypeError: `output_data_type` is not a DataType.
@ -110,19 +109,22 @@ class Converter:
RuntimeError: `config_file` is not "", but `config_file` does not exist.
Examples:
>>> # Download the model package and extract it, model download link:
>>> # https://download.mindspore.cn/model_zoo/official/lite/quick_start/micro/mobilenetv2.tar.gz
>>> import mindspore_lite as mslite
>>> converter = mslite.Converter(mslite.FmkType.TFLITE, "mobilenetv2.tflite", "mobilenetv2.tflite")
>>> converter = mslite.Converter(mslite.FmkType.kFmkTypeTflite, "./mobilenetv2/mobilenet_v2_1.0_224.tflite",
... "mobilenet_v2_1.0_224.tflite")
>>> print(converter)
config_file: ,
config_info: ,
config_info: {},
weight_fp16: False,
input_shape: {},
input_format: Format.NHWC,
input_data_type: DataType.FLOAT32,
output_data_type: DataType.FLOAT32,
export_mindir: MINDIR_LITE,
export_mindir: ModelType.MINDIR_LITE,
decrypt_key: ,
decrypt_mode: ,
decrypt_mode: AES-GCM,
enable_encryption: False,
encrypt_key: ,
infer: False,
@ -253,10 +255,15 @@ class Converter:
Raises:
TypeError: `section` is not a str.
TypeError: `config_info` is not a dict.
TypeError: `config_info` is a dict, but the keys are not str.
TypeError: `config_info` is a dict, the keys are str, but the values are not str.
Examples:
>>> # Download the model package and extract it, model download link:
>>> # https://download.mindspore.cn/model_zoo/official/lite/quick_start/micro/mobilenetv2.tar.gz
>>> import mindspore_lite as mslite
>>> converter = mslite.Converter(mslite.FmkType.TFLITE, "mobilenetv2.tflite", "mobilenetv2.tflite")
>>> converter = mslite.Converter(mslite.FmkType.kFmkTypeTflite, "./mobilenetv2/mobilenet_v2_1.0_224.tflite",
... "mobilenet_v2_1.0_224.tflite")
>>> section = "common_quant_param"
>>> config_info = {"quant_type":"WEIGHT_QUANT"}
>>> converter.set_config_info(section, config_info)
@ -272,11 +279,14 @@ class Converter:
Please use set_config_info method before get_config_info.
Returns:
dict{str, dict{str, str}, the config info which has been set in converter.
dict{str, dict{str, str}}, the config info which has been set in converter.
Examples:
>>> # Download the model package and extract it, model download link:
>>> # https://download.mindspore.cn/model_zoo/official/lite/quick_start/micro/mobilenetv2.tar.gz
>>> import mindspore_lite as mslite
>>> converter = mslite.Converter(mslite.FmkType.TFLITE, "mobilenetv2.tflite", "mobilenetv2.tflite")
>>> converter = mslite.Converter(mslite.FmkType.kFmkTypeTflite, "./mobilenetv2/mobilenet_v2_1.0_224.tflite",
... "mobilenet_v2_1.0_224.tflite")
>>> section = "common_quant_param"
>>> config_info_in = {"quant_type":"WEIGHT_QUANT"}
>>> converter.set_config_info(section, config_info_in)
@ -294,9 +304,13 @@ class Converter:
RuntimeError: converter model failed.
Examples:
>>> # Download the model package and extract it, model download link:
>>> # https://download.mindspore.cn/model_zoo/official/lite/quick_start/micro/mobilenetv2.tar.gz
>>> import mindspore_lite as mslite
>>> converter = mslite.Converter(mslite.FmkType.TFLITE, "mobilenetv2.tflite", "mobilenetv2.tflite")
>>> converter = mslite.Converter(mslite.FmkType.kFmkTypeTflite, "./mobilenetv2/mobilenet_v2_1.0_224.tflite",
... "mobilenet_v2_1.0_224.tflite")
>>> converter.converter()
CONVERT RESULT SUCCESS:0
"""
ret = self._converter.converter()
if not ret.IsOk():


@ -86,6 +86,7 @@ class Model:
RuntimeError: `model_path` does not exist.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -125,6 +126,7 @@ class Model:
ValueError: The size of the elements of `inputs` is not equal to the size of the elements of `dims` .
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -182,27 +184,29 @@ class Model:
RuntimeError: predict model failed.
Examples:
>>> # predict which indata is from file
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> # in_data download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/input.bin
>>> # 1. predict which indata is from file
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> model = mslite.Model()
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
>>> model.build_from_file("mobilenetv2.ms", mslite.ModelType.MINDIR_LITE, context)
>>> inputs = model.get_inputs()
>>> outputs = model.get_outputs()
>>> in_data = np.fromfile("mobilenetv2.ms.bin", dtype=np.float32)
>>> in_data = np.fromfile("input.bin", dtype=np.float32)
>>> inputs[0].set_data_from_numpy(in_data)
>>> model.predict(inputs, outputs)
>>> for output in outputs:
... data = output.get_data_to_numpy()
... print("outputs: ", data)
outputs: [[8.9401474e-05 4.4536911e-05 1.0089713e-04 ... 3.2687691e-05
3.6021424e-04 8.3650106e-05]]
>>> # predict which indata is numpy array
...
outputs: [[1.0227193e-05 9.9270510e-06 1.6968443e-05 ... 6.6909502e-06
2.1626458e-06 1.2400946e-04]]
>>> # 2. predict which indata is numpy array
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> model = mslite.Model()
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
@ -212,16 +216,16 @@ class Model:
>>> for input in inputs:
... in_data = np.arange(1 * 224 * 224 * 3, dtype=np.float32).reshape((1, 224, 224, 3))
... input.set_data_from_numpy(in_data)
...
>>> model.predict(inputs, outputs)
>>> for output in outputs:
... data = output.get_data_to_numpy()
... print("outputs: ", data)
...
outputs: [[0.00035889 0.00065501 0.00052926 ... 0.00018387 0.00148318 0.00116824]]
>>> # predict which indata is new mslite tensor with numpy array
>>> # 3. predict which indata is new mslite tensor with numpy array
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> model = mslite.Model()
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
@ -234,15 +238,16 @@ class Model:
... input_tensor.set_data_type(input.get_data_type())
... input_tensor.set_shape(input.get_shape())
... input_tensor.set_format(input.get_format())
... input_tensor.set_tensor_name(input.get_data_name())
... input_tensor.set_tensor_name(input.get_tensor_name())
... in_data = np.arange(1 * 224 * 224 * 3, dtype=np.float32).reshape((1, 224, 224, 3))
... input_tensor.set_data_from_numpy(in_data)
... input_tensors.append(input_tensor)
...
>>> model.predict(input_tensors, outputs)
>>> for output in outputs:
... data = output.get_data_to_numpy()
... print("outputs: ", data)
...
outputs: [[0.00035889 0.00065501 0.00052926 ... 0.00018387 0.00148318 0.00116824]]
"""
if not isinstance(inputs, list):
@ -274,6 +279,7 @@ class Model:
list[Tensor], the inputs tensor list of the model.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -294,6 +300,7 @@ class Model:
list[Tensor], the outputs tensor list of the model.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -321,6 +328,7 @@ class Model:
RuntimeError: get input by tensor name failed.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -356,6 +364,7 @@ class Model:
RuntimeError: get output by tensor name failed.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> model = mslite.Model()
>>> context = mslite.Context()
@ -393,9 +402,9 @@ class RunnerConfig:
value of it is the path of weight, e.g. "/home/user/weight.cfg".
Raises:
TypeError: `context` is not a Context or None.
TypeError: `workers_num` is not an int or None.
TypeError: `config_info` is not a dict or None.
TypeError: `context` is neither a Context nor None.
TypeError: `workers_num` is neither an int nor None.
TypeError: `config_info` is neither a dict nor None.
TypeError: `config_info` is a dict, but the key is not str.
TypeError: `config_info` is a dict, the key is str, but the value is not dict.
TypeError: `config_info` is a dict, the key is str, the value is dict, but the key of value is not str.
@ -477,11 +486,12 @@ class ModelParallelRunner:
Raises:
TypeError: `model_path` is not a str.
TypeError: `runner_config` is not a RunnerConfig or None.
TypeError: `runner_config` is neither a RunnerConfig nor None.
RuntimeError: `model_path` does not exist.
RuntimeError: ModelParallelRunner's init failed.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
@ -519,6 +529,8 @@ class ModelParallelRunner:
RuntimeError: predict model failed.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> # in_data download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/input.bin
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
@ -526,13 +538,14 @@ class ModelParallelRunner:
>>> model_parallel_runner = mslite.ModelParallelRunner()
>>> model_parallel_runner.init(model_path="mobilenetv2.ms", runner_config=runner_config)
>>> inputs = model_parallel_runner.get_inputs()
>>> in_data = np.fromfile("mobilenetv2.ms.bin", dtype=np.float32)
>>> in_data = np.fromfile("input.bin", dtype=np.float32)
>>> inputs[0].set_data_from_numpy(in_data)
>>> outputs = model_parallel_runner.get_outputs()
>>> model_parallel_runner.predict(inputs, outputs)
>>> for output in outputs:
... data = output.get_data_to_numpy()
... print("outputs: ", data)
...
outputs: [[8.9401474e-05 4.4536911e-05 1.0089713e-04 ... 3.2687691e-05
3.6021424e-04 8.3650106e-05]]
"""
@ -567,6 +580,7 @@ class ModelParallelRunner:
list[Tensor], the inputs tensor list of the model.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
@ -588,6 +602,7 @@ class ModelParallelRunner:
list[Tensor], the outputs tensor list of the model.
Examples:
>>> # model download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/mobilenetv2.ms
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())


@ -159,7 +159,7 @@ class Tensor:
tensor(Tensor): The data to be stored in a new tensor. It can be another Tensor. Default: None.
Raises:
TypeError: `tensor` is not a Tensor or None.
TypeError: `tensor` is neither a Tensor nor None.
Examples:
>>> import mindspore_lite as mslite
@ -171,7 +171,7 @@ class Tensor:
shape: [],
format: Format.NCHW,
element_num: 1,
data_size: 0.
data_size: 4.
"""
def __init__(self, tensor=None):
@ -214,7 +214,7 @@ class Tensor:
>>> tensor = mslite.Tensor()
>>> tensor.set_tensor_name("tensor0")
>>> tensor_name = tensor.get_tensor_name()
>>> print(tenser_name)
>>> print(tensor_name)
tensor0
"""
return self._tensor.get_tensor_name()
@ -379,22 +379,37 @@ class Tensor:
RuntimeError: The data size of `numpy_obj` is not equal to the data size of the tensor.
Examples:
>>> # data is from file
>>> # in_data download link: https://download.mindspore.cn/model_zoo/official/lite/quick_start/input.bin
>>> # 1. set tensor data which is from file
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> tensor = mslite.Tensor()
>>> tensor.set_shape([1, 224, 224, 3])
>>> tensor.set_data_type(mslite.DataType.FLOAT32)
>>> in_data = np.fromfile("mobilenetv2.ms.bin", dtype=np.float32)
>>> in_data = np.fromfile("input.bin", dtype=np.float32)
>>> tensor.set_data_from_numpy(in_data)
>>> # data is numpy arrange
>>> print(tensor)
tensor_name: ,
data_type: DataType.FLOAT32,
shape: [1, 224, 224, 3],
format: Format.NCHW,
element_num: 150528,
data_size: 602112.
>>> # 2. set tensor data which is numpy arange
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> tensor = mslite.Tensor()
>>> tensor.set_shape([1, 2, 2, 3])
>>> tensor.set_data_type(mslite.DataType.FLOAT32)
>>> in_data = np.arrange(1 * 2 * 2 * 3, dtype=np.float32)
>>> in_data = np.arange(1 * 2 * 2 * 3, dtype=np.float32)
>>> tensor.set_data_from_numpy(in_data)
>>> print(tensor)
tensor_name: ,
data_type: DataType.FLOAT32,
shape: [1, 2, 2, 3],
format: Format.NCHW,
element_num: 12,
data_size: 48.
"""
if not isinstance(numpy_obj, numpy.ndarray):
raise TypeError(f"numpy_obj must be numpy.ndarray, but got {type(numpy_obj)}.")
@ -430,11 +445,11 @@ class Tensor:
Examples:
>>> import mindspore_lite as mslite
>>> import numpy ad np
>>> import numpy as np
>>> tensor = mslite.Tensor()
>>> tensor.set_shape([1, 2, 2, 3])
>>> tensor.set_data_type(mslite.DataType.FLOAT32)
>>> in_data = np.arrange(1 * 2 * 2 * 3, dtype=np.float32)
>>> in_data = np.arange(1 * 2 * 2 * 3, dtype=np.float32)
>>> tensor.set_data_from_numpy(in_data)
>>> data = tensor.get_data_to_numpy()
>>> print(data)