[MSLITE] Python API of converter supports PyTorch models

This commit is contained in:
wang_shaocong 2022-09-16 14:45:22 +08:00
parent 3ebaecdbb1
commit c4ccf29371
5 changed files with 11 additions and 6 deletions

@@ -9,8 +9,8 @@ mindspore_lite.Converter
A parameter default value of None means the parameter is not set.
Parameters:
- **fmk_type** (FmkType) - Framework type of the input model. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX | FmkType.MINDIR | FmkType.TFLITE.
- **model_file** (str) - Path of the input model file. e.g. "/home/user/model.prototxt". Options: TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite".
- **fmk_type** (FmkType) - Framework type of the input model. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX | FmkType.MINDIR | FmkType.TFLITE | FmkType.PYTORCH.
- **model_file** (str) - Path of the input model file. e.g. "/home/user/model.prototxt". Options: TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite" | PYTORCH: "\*.pt or \*.pth".
- **output_file** (str) - Path of the output model file. The suffix .ms can be generated automatically. e.g. "/home/user/model.prototxt", it will generate a model named model.prototxt.ms under /home/user/.
- **weight_file** (str, optional) - Input model weight file. Required only when the input model framework type is FmkType.CAFFE. e.g. "/home/user/model.caffemodel". Default: "".
- **config_file** (str, optional) - Path of the configuration file used for post-training quantization or offline operator-splitting parallelism, for disabling operator fusion and setting the plugin to an .so path. Default: "".

@@ -26,4 +26,5 @@ mindspore_lite.FmkType
``FmkType.ONNX`` ONNX model's framework type, and the model uses .onnx as suffix
``FmkType.MINDIR`` MindSpore model's framework type, and the model uses .mindir as suffix
``FmkType.TFLITE`` TensorFlow Lite model's framework type, and the model uses .tflite as suffix
``FmkType.PYTORCH`` PyTorch model's framework type, and the model uses .pt or .pth as suffix
=========================== ====================================================

@@ -43,7 +43,7 @@ Converter
``FmkType.ONNX`` ONNX model's framework type, and the model uses .onnx as suffix
``FmkType.MINDIR`` MindSpore model's framework type, and the model uses .mindir as suffix
``FmkType.TFLITE`` TensorFlow Lite model's framework type, and the model uses .tflite as suffix
``FmkType.PYTORCH`` PYTORCH model's framework type, and the model uses .pt or .pth as suffix
``FmkType.PYTORCH`` PyTorch model's framework type, and the model uses .pt or .pth as suffix
=========================== ============================================================================
.. autosummary::

@@ -36,6 +36,7 @@ class FmkType(Enum):
ONNX = 2
MINDIR = 3
TFLITE = 4
PYTORCH = 5
class Converter:
@@ -47,9 +48,10 @@ class Converter:
Args:
fmk_type (FmkType): Input model framework type. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX |
FmkType.MINDIR | FmkType.TFLITE.
FmkType.MINDIR | FmkType.TFLITE | FmkType.PYTORCH.
model_file (str): Path of the input model. e.g. "/home/user/model.prototxt". Options:
TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite".
TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite" |
PYTORCH: "\*.pt or \*.pth".
output_file (str): Path of the output model. The suffix .ms can be automatically generated.
e.g. "/home/user/model.prototxt", it will generate the model named model.prototxt.ms in /home/user/
weight_file (str, optional): Input model weight file. Required only when fmk_type is FmkType.CAFFE.
@@ -175,6 +177,7 @@ class Converter:
FmkType.ONNX: _c_lite_wrapper.FmkType.kFmkTypeOnnx,
FmkType.MINDIR: _c_lite_wrapper.FmkType.kFmkTypeMs,
FmkType.TFLITE: _c_lite_wrapper.FmkType.kFmkTypeTflite,
FmkType.PYTORCH: _c_lite_wrapper.FmkType.kFmkTypePytorch,
}
self._converter = _c_lite_wrapper.ConverterBind(fmk_type_py_cxx_map.get(fmk_type), model_file, output_file,
weight_file)
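The hunks above extend the `FmkType` enum with `PYTORCH = 5` and route it through the converter's Python-to-C++ mapping. The sketch below illustrates the suffix-to-framework convention the docstring describes (".pt" or ".pth" for PyTorch). It is a minimal, self-contained illustration, not part of the mindspore_lite API: the `FmkType` values mirror the enum in the diff, while `_SUFFIX_TO_FMK` and `infer_fmk_type` are hypothetical helpers.

```python
from enum import Enum
from pathlib import Path

class FmkType(Enum):
    # Values mirror the enum shown in the diff above.
    TF = 0
    CAFFE = 1
    ONNX = 2
    MINDIR = 3
    TFLITE = 4
    PYTORCH = 5

# Hypothetical helper table: model-file suffix -> framework type.
_SUFFIX_TO_FMK = {
    ".pb": FmkType.TF,
    ".prototxt": FmkType.CAFFE,
    ".onnx": FmkType.ONNX,
    ".mindir": FmkType.MINDIR,
    ".tflite": FmkType.TFLITE,
    ".pt": FmkType.PYTORCH,   # both PyTorch suffixes
    ".pth": FmkType.PYTORCH,  # map to the same framework type
}

def infer_fmk_type(model_file: str) -> FmkType:
    """Return the framework type matching the model file's suffix."""
    suffix = Path(model_file).suffix.lower()
    if suffix not in _SUFFIX_TO_FMK:
        raise ValueError(f"unsupported model suffix: {suffix!r}")
    return _SUFFIX_TO_FMK[suffix]
```

In the actual API the caller passes `fmk_type` explicitly to `Converter`; this table only shows which suffix the docstring pairs with each `FmkType` option.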

@@ -27,7 +27,8 @@ void ConverterPyBind(const py::module &m) {
.value("kFmkTypeCaffe", converter::FmkType::kFmkTypeCaffe)
.value("kFmkTypeOnnx", converter::FmkType::kFmkTypeOnnx)
.value("kFmkTypeMs", converter::FmkType::kFmkTypeMs)
.value("kFmkTypeTflite", converter::FmkType::kFmkTypeTflite);
.value("kFmkTypeTflite", converter::FmkType::kFmkTypeTflite)
.value("kFmkTypePytorch", converter::FmkType::kFmkTypePytorch);
py::class_<Converter, std::shared_ptr<Converter>>(m, "ConverterBind")
.def(py::init<converter::FmkType, const std::string &, const std::string &, const std::string &>())