diff --git a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst
index f575be1597e..3a22c17c98c 100644
--- a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst
+++ b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.Converter.rst
@@ -9,8 +9,8 @@ mindspore_lite.Converter
     A parameter whose default value is None means the parameter is not set.

     Args:
-    - **fmk_type** (FmkType) - Input model framework type. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX | FmkType.MINDIR | FmkType.TFLITE.
-    - **model_file** (str) - Path of the input model file, e.g. "/home/user/model.prototxt". Options: TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite".
+    - **fmk_type** (FmkType) - Input model framework type. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX | FmkType.MINDIR | FmkType.TFLITE | FmkType.PYTORCH.
+    - **model_file** (str) - Path of the input model file, e.g. "/home/user/model.prototxt". Options: TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite" | PYTORCH: "\*.pt or \*.pth".
     - **output_file** (str) - Path of the output model file. The .ms suffix can be generated automatically, e.g. for "/home/user/model.prototxt" a model named model.prototxt.ms is generated under /home/user/.
     - **weight_file** (str, optional) - Input model weight file. Required only when the input model framework type is FmkType.CAFFE, e.g. "/home/user/model.caffemodel". Default: "".
     - **config_file** (str, optional) - Path of the configuration file, used for post-training quantization or offline split-operator parallelism, for disabling the operator fusion feature, and for setting a plugin .so path. Default: "".
diff --git a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.FmkType.rst b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.FmkType.rst
index a54772329a6..459f66aa54d 100644
--- a/docs/api/lite_api_python/mindspore_lite/mindspore_lite.FmkType.rst
+++ b/docs/api/lite_api_python/mindspore_lite/mindspore_lite.FmkType.rst
@@ -26,4 +26,5 @@ mindspore_lite.FmkType
     ``FmkType.ONNX``    Framework type of ONNX models, which use .onnx as the suffix
     ``FmkType.MINDIR``  Framework type of MindSpore models, which use .mindir as the suffix
     ``FmkType.TFLITE``  Framework type of TensorFlow Lite models, which use .tflite as the suffix
+    ``FmkType.PYTORCH`` Framework type of PyTorch models, which use .pt or .pth as the suffix
     =========================== ====================================================
diff --git a/docs/api/lite_api_python_en/mindspore_lite.rst b/docs/api/lite_api_python_en/mindspore_lite.rst
index 41155261ed4..77caf213b92 100644
--- a/docs/api/lite_api_python_en/mindspore_lite.rst
+++ b/docs/api/lite_api_python_en/mindspore_lite.rst
@@ -43,7 +43,7 @@ Converter
     ``FmkType.ONNX``    ONNX model's framework type, and the model uses .onnx as suffix
     ``FmkType.MINDIR``  MindSpore model's framework type, and the model uses .mindir as suffix
     ``FmkType.TFLITE``  TensorFlow Lite model's framework type, and the model uses .tflite as suffix
-    ``FmkType.PYTORCH`` PYTORCH model's framework type, and the model uses .pt or .pth as suffix
+    ``FmkType.PYTORCH`` PyTorch model's framework type, and the model uses .pt or .pth as suffix
     =========================== ============================================================================

     .. autosummary::
diff --git a/mindspore/lite/python/api/converter.py b/mindspore/lite/python/api/converter.py
index c2e494bf37f..9fbbf237a19 100644
--- a/mindspore/lite/python/api/converter.py
+++ b/mindspore/lite/python/api/converter.py
@@ -36,6 +36,7 @@ class FmkType(Enum):
     ONNX = 2
     MINDIR = 3
     TFLITE = 4
+    PYTORCH = 5


 class Converter:
@@ -47,9 +48,10 @@ class Converter:
     Args:
         fmk_type (FmkType): Input model framework type. Options: FmkType.TF | FmkType.CAFFE | FmkType.ONNX |
-            FmkType.MINDIR | FmkType.TFLITE.
+            FmkType.MINDIR | FmkType.TFLITE | FmkType.PYTORCH.
         model_file (str): Path of the input model. e.g. "/home/user/model.prototxt". Options:
-            TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite".
+            TF: "\*.pb" | CAFFE: "\*.prototxt" | ONNX: "\*.onnx" | MINDIR: "\*.mindir" | TFLITE: "\*.tflite" |
+            PYTORCH: "\*.pt or \*.pth".
         output_file (str): Path of the output model. The suffix .ms can be automatically generated.
             e.g. "/home/user/model.prototxt", it will generate the model named model.prototxt.ms in /home/user/
         weight_file (str, optional): Input model weight file. Required only when fmk_type is FmkType.CAFFE.
@@ -175,6 +177,7 @@ class Converter:
             FmkType.ONNX: _c_lite_wrapper.FmkType.kFmkTypeOnnx,
             FmkType.MINDIR: _c_lite_wrapper.FmkType.kFmkTypeMs,
             FmkType.TFLITE: _c_lite_wrapper.FmkType.kFmkTypeTflite,
+            FmkType.PYTORCH: _c_lite_wrapper.FmkType.kFmkTypePytorch,
         }
         self._converter = _c_lite_wrapper.ConverterBind(fmk_type_py_cxx_map.get(fmk_type), model_file,
                                                         output_file, weight_file)
diff --git a/mindspore/lite/python/src/converter_pybind.cc b/mindspore/lite/python/src/converter_pybind.cc
index 7898b0bcb8c..56f1b35fc3b 100644
--- a/mindspore/lite/python/src/converter_pybind.cc
+++ b/mindspore/lite/python/src/converter_pybind.cc
@@ -27,7 +27,8 @@ void ConverterPyBind(const py::module &m) {
     .value("kFmkTypeCaffe", converter::FmkType::kFmkTypeCaffe)
     .value("kFmkTypeOnnx", converter::FmkType::kFmkTypeOnnx)
     .value("kFmkTypeMs", converter::FmkType::kFmkTypeMs)
-    .value("kFmkTypeTflite", converter::FmkType::kFmkTypeTflite);
+    .value("kFmkTypeTflite", converter::FmkType::kFmkTypeTflite)
+    .value("kFmkTypePytorch", converter::FmkType::kFmkTypePytorch);
   py::class_>(m, "ConverterBind")
     .def(py::init())
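
The Python-side effect of this patch can be sketched without the native `_c_lite_wrapper`. The enum below mirrors the patched `FmkType` (the `TF = 0` and `CAFFE = 1` values are inferred, since the hunk only shows members from `ONNX = 2` onward), and `check_model_file` is a hypothetical helper, not part of the patch, that encodes the suffix table from the docstrings — including the new detail that PYTORCH is the first framework type accepting two suffixes:

```python
from enum import Enum


class FmkType(Enum):
    """Mirror of the patched enum in mindspore/lite/python/api/converter.py."""
    TF = 0        # assumed; not shown in the hunk
    CAFFE = 1     # assumed; not shown in the hunk
    ONNX = 2
    MINDIR = 3
    TFLITE = 4
    PYTORCH = 5   # added by this patch


# Hypothetical helper table (illustration only): framework type -> accepted
# model-file suffixes, as documented in the Converter docstring above.
_SUFFIXES = {
    FmkType.TF: (".pb",),
    FmkType.CAFFE: (".prototxt",),
    FmkType.ONNX: (".onnx",),
    FmkType.MINDIR: (".mindir",),
    FmkType.TFLITE: (".tflite",),
    FmkType.PYTORCH: (".pt", ".pth"),  # the only type with two valid suffixes
}


def check_model_file(fmk_type, model_file):
    """Return True when model_file ends with a suffix valid for fmk_type."""
    # str.endswith accepts a tuple, so the two PyTorch suffixes need no
    # special-casing.
    return model_file.endswith(_SUFFIXES[fmk_type])


print(check_model_file(FmkType.PYTORCH, "/home/user/model.pth"))   # True
print(check_model_file(FmkType.PYTORCH, "/home/user/model.onnx"))  # False
```

A check of this shape would be a natural place for the converter to reject a mismatched `fmk_type`/`model_file` pair before handing off to `ConverterBind`.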