forked from mindspore-Ecosystem/mindspore
lenet and mobilenetv2 add 310infer
This commit is contained in: parent 1e3c89c1b3, commit 98a6fa0c6b
@@ -12,9 +12,14 @@
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Inference Process](#inference-process)
- [Export MindIR](#export-mindir)
- [Infer on Ascend310](#infer-on-ascend310)
- [result](#result)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)

## [LeNet Description](#contents)
@@ -82,7 +87,9 @@ sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
├── lenet
    ├── README.md                          // descriptions about lenet
    ├── requirements.txt                   // package needed
    ├── ascend310_infer                    // application for 310 inference
    ├── scripts
    │   ├──run_infer_310.sh               // infer on Ascend 310
    │   ├──run_standalone_train_cpu.sh    // train on CPU
    │   ├──run_standalone_train_gpu.sh    // train on GPU
    │   ├──run_standalone_train_ascend.sh // train on Ascend
@@ -90,11 +97,13 @@ sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
    │   ├──run_standalone_eval_gpu.sh     // evaluate on GPU
    │   ├──run_standalone_eval_ascend.sh  // evaluate on Ascend
    ├── src
    │   ├──aipp.cfg                       // aipp config
    │   ├──dataset.py                     // creating dataset
    │   ├──lenet.py                       // lenet architecture
    │   ├──config.py                      // parameter configuration
    ├── train.py                           // training script
    ├── eval.py                            // evaluation script
    ├── postprocess.py                     // postprocess script
```

## [Script Parameters](#contents)
@@ -157,6 +166,38 @@ You can view the results through the file "log.txt". The accuracy of the test da
'Accuracy': 0.9842
```
## [Inference Process](#contents)

### Export MindIR

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The `ckpt_file` parameter is required, and `FILE_FORMAT` must be chosen from ["AIR", "MINDIR"].

### Infer on Ascend310

Before performing inference, the MindIR file must be exported by the `export.py` script. We only provide an example of inference using a MINDIR model.
Currently, batch_size can only be set to 1.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
```

- `DVPP` is mandatory and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. LeNet performs inference on images of size [32, 32]. The DVPP hardware requires the width to be divisible by 16 and the height by 2; the network input conforms to this standard, so images can be pre-processed through DVPP.
- `DEVICE_ID` is optional; the default value is 0.
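The DVPP alignment rule above can be checked mechanically. A minimal sketch (the helper name `dvpp_ok` is illustrative, not part of these scripts):

```python
def dvpp_ok(width: int, height: int) -> bool:
    # DVPP requires the width to be divisible by 16 and the height by 2.
    return width % 16 == 0 and height % 2 == 0

# LeNet's [32, 32] input satisfies both constraints.
print(dvpp_ok(32, 32))  # True
print(dvpp_ok(30, 32))  # False: width is not a multiple of 16
```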
### result

Inference results are saved in the current path; you can find results like the following in the acc.log file.

```bash
'Accuracy': 0.9843
```
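The accuracy above is a plain top-1 match rate; the postprocess script added by this commit computes it essentially like this:

```python
def calcul_acc(labels, preds):
    # Fraction of predictions that exactly match their labels (top-1 accuracy).
    return sum(1 for x, y in zip(labels, preds) if x == y) / len(labels)

print(calcul_acc([1, 2, 3, 4], [1, 2, 0, 4]))  # 0.75
```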
## [Model Description](#contents)

### [Performance](#contents)
@@ -179,6 +220,20 @@ You can view the results through the file "log.txt". The accuracy of the test da
| Checkpoint for Fine tuning | 482k (.ckpt file) |
| Scripts | [LeNet Script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/lenet) |
#### Inference Performance

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | LeNet                       |
| Resource            | Ascend 310; CentOS 3.10     |
| Uploaded Date       | 07/05/2021 (month/day/year) |
| MindSpore Version   | 1.2.0                       |
| Dataset             | Mnist                       |
| batch_size          | 1                           |
| outputs             | Accuracy                    |
| Accuracy            | Accuracy=0.9843             |
| Model for inference | 482K (.ckpt file)           |
## [Description of Random Situation](#contents)
In dataset.py, we set the seed inside the `create_dataset` function.
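Fixing the seed makes the shuffle order reproducible across runs. A minimal stdlib sketch of the idea (illustrative; the actual code uses MindSpore's dataset APIs):

```python
import random

def shuffled_order(n, seed):
    # A fixed seed yields the same shuffle order on every run.
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    return order

# Two runs with the same seed produce identical orders.
print(shuffled_order(5, 0) == shuffled_order(5, 0))  # True
```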
@@ -15,9 +15,14 @@
- [Training](#训练)
- [Evaluation Process](#评估过程)
- [Evaluation](#评估)
- [Inference Process](#推理过程)
- [Export MindIR](#导出mindir)
- [Infer on Ascend310](#在ascend310执行推理)
- [result](#结果)
- [Model Description](#模型描述)
- [Performance](#性能)
- [Evaluation Performance](#评估性能)
- [Inference Performance](#推理性能)
- [Description of Random Situation](#随机情况说明)
- [ModelZoo Homepage](#modelzoo主页)
@@ -86,19 +91,23 @@ sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
├── lenet
    ├── README.md                          // descriptions about LeNet
    ├── requirements.txt                   // required packages
    ├── ascend310_infer                    // application for Ascend 310 inference
    ├── scripts
    │   ├──run_standalone_train_cpu.sh    // train on CPU
    │   ├──run_infer_310.sh               // infer on Ascend 310
    │   ├──run_standalone_train_gpu.sh    // train on GPU
    │   ├──run_standalone_train_ascend.sh // train on Ascend
    │   ├──run_standalone_eval_cpu.sh     // evaluate on CPU
    │   ├──run_standalone_eval_gpu.sh     // evaluate on GPU
    │   ├──run_standalone_eval_ascend.sh  // evaluate on Ascend
    ├── src
    │   ├──aipp.cfg                       // aipp configuration file
    │   ├──dataset.py                     // create dataset
    │   ├──lenet.py                       // LeNet architecture
    │   ├──config.py                      // parameter configuration
    ├── train.py                           // training script
    ├── eval.py                            // evaluation script
    ├── postprocess.py                     // postprocess script for Ascend 310 inference
```

## Script Parameters
@@ -159,6 +168,38 @@ sh run_standalone_eval_ascend.sh Data ckpt/checkpoint_lenet-1_1875.ckpt
'Accuracy':0.9842
```

## Inference Process

### Export MindIR

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The `ckpt_file` parameter is required; `FILE_FORMAT` must be chosen from ["AIR", "MINDIR"].

### Infer on Ascend310

Before performing inference, the MindIR file must be exported by the `export.py` script. The following shows an example of inference using a MINDIR model.
Currently, only inference with batch_size 1 is supported.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
```

- `DVPP` is mandatory and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. LeNet performs inference on images of size [32, 32]. The DVPP hardware requires the width to be divisible by 16 and the height by 2; the network input conforms to this standard, so images can be pre-processed through DVPP.
- `DEVICE_ID` is optional; the default value is 0.

### result

Inference results are saved in the current path where the script is executed; you can find the following accuracy results in acc.log.

```bash
'Accuracy':0.9843
```
## Model Description

## Performance
@@ -181,6 +222,20 @@ sh run_standalone_eval_ascend.sh Data ckpt/checkpoint_lenet-1_1875.ckpt
| Checkpoint for fine-tuning | 482k (.ckpt file) |
| Script | [LeNet script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/lenet) |
### Inference Performance

| Parameters          | Ascend                  |
| ------------------- | ----------------------- |
| Model Version       | LeNet                   |
| Resource            | Ascend 310; CentOS 3.10 |
| Uploaded Date       | 2021-05-07              |
| MindSpore Version   | 1.2.0                   |
| Dataset             | Mnist                   |
| batch_size          | 1                       |
| outputs             | Accuracy                |
| Accuracy            | Accuracy=0.9843         |
| Model for inference | 482K (.ckpt file)       |
## Description of Random Situation

In dataset.py, we set the seed inside the `create_dataset` function.
@@ -0,0 +1,14 @@
cmake_minimum_required(VERSION 3.14.1)
project(Ascend310Infer)
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -g -std=c++17 -Werror -Wall -fPIE -Wl,--allow-shlib-undefined")
set(PROJECT_SRC_ROOT ${CMAKE_CURRENT_LIST_DIR}/)
option(MINDSPORE_PATH "mindspore install path" "")
include_directories(${MINDSPORE_PATH})
include_directories(${MINDSPORE_PATH}/include)
include_directories(${PROJECT_SRC_ROOT})
find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)

add_executable(main src/main.cc src/utils.cc)
target_link_libraries(main ${MS_LIB} ${MD_LIB} gflags)
@@ -0,0 +1,23 @@
#!/bin/bash
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

if [ ! -d out ]; then
    mkdir out
fi
cd out || exit
cmake .. \
    -DMINDSPORE_PATH="`pip show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`"
make
@@ -0,0 +1,32 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef MINDSPORE_INFERENCE_UTILS_H_
#define MINDSPORE_INFERENCE_UTILS_H_

#include <sys/stat.h>
#include <dirent.h>
#include <vector>
#include <string>
#include <memory>
#include "include/api/types.h"

std::vector<std::string> GetAllFiles(std::string_view dirName);
DIR *OpenDir(std::string_view dirName);
std::string RealPath(std::string_view path);
mindspore::MSTensor ReadFileToTensor(const std::string &file);
int WriteResult(const std::string& imageFile, const std::vector<mindspore::MSTensor> &outputs);
#endif
@@ -0,0 +1,159 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include <sys/time.h>
#include <gflags/gflags.h>
#include <dirent.h>
#include <iostream>
#include <string>
#include <algorithm>
#include <iosfwd>
#include <vector>
#include <fstream>
#include <sstream>
#include <map>

#include "include/api/model.h"
#include "include/api/context.h"
#include "include/api/types.h"
#include "include/api/serialization.h"
#include "include/dataset/vision_ascend.h"
#include "include/dataset/execute.h"
#include "include/dataset/vision.h"
#include "inc/utils.h"

using mindspore::Context;
using mindspore::Serialization;
using mindspore::Model;
using mindspore::Status;
using mindspore::ModelType;
using mindspore::GraphCell;
using mindspore::kSuccess;
using mindspore::MSTensor;
using mindspore::dataset::Execute;
using mindspore::dataset::TensorTransform;
using mindspore::dataset::vision::Resize;
using mindspore::dataset::vision::HWC2CHW;
using mindspore::dataset::vision::Normalize;
using mindspore::dataset::vision::Decode;

DEFINE_string(mindir_path, "", "mindir path");
DEFINE_string(dataset_path, ".", "dataset path");
DEFINE_int32(device_id, 0, "device id");
DEFINE_string(aipp_path, "", "aipp path");
DEFINE_string(cpu_dvpp, "", "cpu or dvpp process");
DEFINE_int32(image_height, 32, "image height");
DEFINE_int32(image_width, 32, "image width");

int main(int argc, char **argv) {
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  if (RealPath(FLAGS_mindir_path).empty()) {
    std::cout << "Invalid mindir" << std::endl;
    return 1;
  }

  auto context = std::make_shared<Context>();
  auto ascend310 = std::make_shared<mindspore::Ascend310DeviceInfo>();
  ascend310->SetDeviceID(FLAGS_device_id);
  ascend310->SetBufferOptimizeMode("off_optimize");
  context->MutableDeviceInfo().push_back(ascend310);
  mindspore::Graph graph;
  Serialization::Load(FLAGS_mindir_path, ModelType::kMindIR, &graph);
  if (FLAGS_cpu_dvpp == "DVPP") {
    if (RealPath(FLAGS_aipp_path).empty()) {
      std::cout << "Invalid aipp path" << std::endl;
      return 1;
    } else {
      ascend310->SetInsertOpConfigPath(FLAGS_aipp_path);
    }
  }

  Model model;
  Status ret = model.Build(GraphCell(graph), context);
  if (ret != kSuccess) {
    std::cout << "ERROR: Build failed." << std::endl;
    return 1;
  }

  auto all_files = GetAllFiles(FLAGS_dataset_path);
  std::map<double, double> costTime_map;
  size_t size = all_files.size();

  for (size_t i = 0; i < size; ++i) {
    struct timeval start = {0};
    struct timeval end = {0};
    double startTimeMs;
    double endTimeMs;
    std::vector<MSTensor> inputs;
    std::vector<MSTensor> outputs;
    std::cout << "Start predict input files:" << all_files[i] << std::endl;
    if (FLAGS_cpu_dvpp == "DVPP") {
      std::shared_ptr<TensorTransform> decode(new Decode());
      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
      Execute composeDecode({decode, resize});
      auto imgDvpp = std::make_shared<MSTensor>();
      composeDecode(ReadFileToTensor(all_files[i]), imgDvpp.get());
      inputs.emplace_back(imgDvpp->Name(), imgDvpp->DataType(), imgDvpp->Shape(),
                          imgDvpp->Data().get(), imgDvpp->DataSize());
    } else {
      std::shared_ptr<TensorTransform> decode(new Decode());
      std::shared_ptr<TensorTransform> hwc2chw(new HWC2CHW());
      std::shared_ptr<TensorTransform> normalize(
          new Normalize({123.675, 116.28, 103.53}, {58.395, 57.120, 57.375}));
      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
      auto resizeShape1 = {1, FLAGS_image_height};
      std::shared_ptr<TensorTransform> reshape_one_channel(new Resize(resizeShape1));
      Execute composeDecode({decode, resize, normalize, hwc2chw, reshape_one_channel});
      auto img = MSTensor();
      auto image = ReadFileToTensor(all_files[i]);
      composeDecode(image, &img);
      std::vector<MSTensor> model_inputs = model.GetInputs();
      inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
                          img.Data().get(), img.DataSize());
    }

    gettimeofday(&start, nullptr);
    ret = model.Predict(inputs, &outputs);
    gettimeofday(&end, nullptr);
    if (ret != kSuccess) {
      std::cout << "Predict " << all_files[i] << " failed." << std::endl;
      return 1;
    }
    startTimeMs = (1.0 * start.tv_sec * 1000000 + start.tv_usec) / 1000;
    endTimeMs = (1.0 * end.tv_sec * 1000000 + end.tv_usec) / 1000;
    costTime_map.insert(std::pair<double, double>(startTimeMs, endTimeMs));
    WriteResult(all_files[i], outputs);
  }
  double average = 0.0;
  int inferCount = 0;

  for (auto iter = costTime_map.begin(); iter != costTime_map.end(); iter++) {
    double diff = iter->second - iter->first;
    average += diff;
    inferCount++;
  }
  average = average / inferCount;
  std::stringstream timeCost;
  timeCost << "NN inference cost average time: " << average << " ms of infer_count " << inferCount << std::endl;
  std::cout << "NN inference cost average time: " << average << " ms of infer_count " << inferCount << std::endl;
  std::string fileName = "./time_Result" + std::string("/test_perform_static.txt");
  std::ofstream fileStream(fileName.c_str(), std::ios::trunc);
  fileStream << timeCost.str();
  fileStream.close();
  costTime_map.clear();
  return 0;
}
@@ -0,0 +1,130 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include "inc/utils.h"

#include <climits>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <algorithm>
#include <iostream>

using mindspore::MSTensor;
using mindspore::DataType;

std::vector<std::string> GetAllFiles(std::string_view dirName) {
  struct dirent *filename;
  DIR *dir = OpenDir(dirName);
  if (dir == nullptr) {
    return {};
  }
  std::vector<std::string> res;
  while ((filename = readdir(dir)) != nullptr) {
    std::string dName = std::string(filename->d_name);
    if (dName == "." || dName == ".." || filename->d_type != DT_REG) {
      continue;
    }
    res.emplace_back(std::string(dirName) + "/" + filename->d_name);
  }
  std::sort(res.begin(), res.end());
  for (auto &f : res) {
    std::cout << "image file: " << f << std::endl;
  }
  return res;
}

int WriteResult(const std::string& imageFile, const std::vector<MSTensor> &outputs) {
  std::string homePath = "./result_Files";
  for (size_t i = 0; i < outputs.size(); ++i) {
    size_t outputSize;
    std::shared_ptr<const void> netOutput;
    netOutput = outputs[i].Data();
    outputSize = outputs[i].DataSize();
    int pos = imageFile.rfind('/');
    std::string fileName(imageFile, pos + 1);
    fileName.replace(fileName.find('.'), fileName.size() - fileName.find('.'), ".bin");
    std::string outFileName = homePath + "/" + fileName;
    FILE *outputFile = fopen(outFileName.c_str(), "wb");
    fwrite(netOutput.get(), outputSize, sizeof(char), outputFile);
    fclose(outputFile);
    outputFile = nullptr;
  }
  return 0;
}

mindspore::MSTensor ReadFileToTensor(const std::string &file) {
  if (file.empty()) {
    std::cout << "Pointer file is nullptr" << std::endl;
    return mindspore::MSTensor();
  }

  std::ifstream ifs(file);
  if (!ifs.good()) {
    std::cout << "File: " << file << " does not exist" << std::endl;
    return mindspore::MSTensor();
  }

  if (!ifs.is_open()) {
    std::cout << "File: " << file << " open failed" << std::endl;
    return mindspore::MSTensor();
  }

  ifs.seekg(0, std::ios::end);
  size_t size = ifs.tellg();
  mindspore::MSTensor buffer(file, mindspore::DataType::kNumberTypeUInt8, {static_cast<int64_t>(size)}, nullptr, size);

  ifs.seekg(0, std::ios::beg);
  ifs.read(reinterpret_cast<char *>(buffer.MutableData()), size);
  ifs.close();

  return buffer;
}

DIR *OpenDir(std::string_view dirName) {
  if (dirName.empty()) {
    std::cout << " dirName is null ! " << std::endl;
    return nullptr;
  }
  std::string realPath = RealPath(dirName);
  struct stat s;
  lstat(realPath.c_str(), &s);
  if (!S_ISDIR(s.st_mode)) {
    std::cout << "dirName is not a valid directory !" << std::endl;
    return nullptr;
  }
  DIR *dir = opendir(realPath.c_str());
  if (dir == nullptr) {
    std::cout << "Can not open dir " << dirName << std::endl;
    return nullptr;
  }
  std::cout << "Successfully opened the dir " << dirName << std::endl;
  return dir;
}

std::string RealPath(std::string_view path) {
  char realPathMem[PATH_MAX] = {0};
  char *realPathRet = realpath(path.data(), realPathMem);

  if (realPathRet == nullptr) {
    std::cout << "File: " << path << " does not exist." << std::endl;
    return "";
  }

  std::string realPath(realPathMem);
  std::cout << path << " realpath is: " << realPath << std::endl;
  return realPath;
}
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""export checkpoint file into air, mindir models"""

from src.model_utils.config import config
from src.model_utils.device_adapter import get_device_id
@@ -0,0 +1,47 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""post process for 310 inference"""
import os
import argparse
import numpy as np

batch_size = 1
parser = argparse.ArgumentParser(description="lenet acc calculation")
parser.add_argument("--result_path", type=str, required=True, help="result files path.")
parser.add_argument("--img_path", type=str, required=True, help="image file path.")
args = parser.parse_args()


def calcul_acc(labels, preds):
    return sum(1 for x, y in zip(labels, preds) if x == y) / len(labels)


def get_result(result_path, img_path):
    files = os.listdir(img_path)
    preds = []
    labels = []
    for file in files:
        file_name = file.split('.')[0]
        label = int(file_name.split('_')[-1])
        labels.append(label)
        resultPath = os.path.join(result_path, file_name + '.bin')
        output = np.fromfile(resultPath, dtype=np.float32)
        preds.append(np.argmax(output, axis=0))
    acc = calcul_acc(labels, preds)
    print("accuracy: {}".format(acc))


if __name__ == '__main__':
    get_result(args.result_path, args.img_path)
@ -0,0 +1,105 @@
|
||||||
|
#!/bin/bash
|
||||||
|
# Copyright 2021 Huawei Technologies Co., Ltd
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
# ============================================================================
|
||||||
|
|
||||||
|
if [[ $# -lt 3 || $# -gt 4 ]]; then
|
||||||
|
echo "Usage: bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
|
||||||
|
DVPP is mandatory, and must choose from [DVPP|CPU], it's case-insensitive
|
||||||
|
DEVICE_ID is optional, it can be set by environment variable device_id, otherwise the value is zero"
exit 1
fi

get_real_path(){
    if [ "${1:0:1}" == "/" ]; then
        echo "$1"
    else
        echo "$(realpath -m $PWD/$1)"
    fi
}

model=$(get_real_path $1)
data_path=$(get_real_path $2)
DVPP=${3^^}

device_id=0
if [ $# == 4 ]; then
    device_id=$4
fi

echo "mindir name: "$model
echo "dataset path: "$data_path
echo "image process mode: "$DVPP
echo "device id: "$device_id

export ASCEND_HOME=/usr/local/Ascend/
if [ -d ${ASCEND_HOME}/ascend-toolkit ]; then
    export PATH=$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/atc/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/ascend-toolkit/latest/atc/lib64:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
    export TBE_IMPL_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe
    export PYTHONPATH=${TBE_IMPL_PATH}:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/python/site-packages:$PYTHONPATH
    export ASCEND_OPP_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp
else
    export PATH=$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/atc/lib64:$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
    export PYTHONPATH=$ASCEND_HOME/atc/python/site-packages:$PYTHONPATH
    export ASCEND_OPP_PATH=$ASCEND_HOME/opp
fi

function compile_app()
{
    cd ../ascend310_infer || exit
    bash build.sh &> build.log
}

function infer()
{
    cd - || exit
    if [ -d result_Files ]; then
        rm -rf ./result_Files
    fi
    if [ -d time_Result ]; then
        rm -rf ./time_Result
    fi
    mkdir result_Files
    mkdir time_Result
    if [ "$DVPP" == "DVPP" ]; then
        ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --device_id=$device_id --cpu_dvpp=$DVPP --aipp_path=../src/aipp.cfg --image_height=32 --image_width=32 &> infer.log
    elif [ "$DVPP" == "CPU" ]; then
        ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --cpu_dvpp=$DVPP --device_id=$device_id --image_height=32 --image_width=32 &> infer.log
    else
        echo "image process mode must be in [DVPP|CPU]"
        exit 1
    fi
}

function cal_acc()
{
    # run in the foreground so the exit status check below reflects the result
    python3.7 ../postprocess.py --result_path=./result_Files --img_path=$data_path &> acc.log
}

compile_app
if [ $? -ne 0 ]; then
    echo "compile app code failed"
    exit 1
fi
infer
if [ $? -ne 0 ]; then
    echo "execute inference failed"
    exit 1
fi
cal_acc
if [ $? -ne 0 ]; then
    echo "calculate accuracy failed"
    exit 1
fi
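The `get_real_path` helper above passes absolute paths through unchanged and resolves relative paths against the current directory (via `realpath -m`, which does not require the file to exist). A minimal Python sketch of the same behavior, for reference only (the function name mirrors the shell helper and is not part of the repository):

```python
import os

def get_real_path(path):
    # Absolute paths pass through unchanged, mirroring the shell helper.
    if path.startswith("/"):
        return path
    # Relative paths are resolved against the current working directory.
    return os.path.realpath(os.path.join(os.getcwd(), path))
```

Like `realpath -m`, `os.path.realpath` happily resolves paths that do not exist yet.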
@@ -0,0 +1,13 @@
aipp_op {
    aipp_mode : static
    input_format : RGB888_U8
    related_input_rank : 0
    csc_switch : false
    rbuv_swap_switch : false
    mean_chn_0 : 124
    mean_chn_1 : 117
    mean_chn_2 : 104
    var_reci_chn_0 : 0.0171247538316637
    var_reci_chn_1 : 0.0175070028011204
    var_reci_chn_2 : 0.0174291938997821
}
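AIPP multiplies rather than divides, so the `var_reci_chn_*` fields above store the reciprocals of the per-channel standard deviations used by the CPU preprocessing path (`Normalize({123.675, 116.28, 103.53}, {58.395, 57.120, 57.375})` in main.cc), and `mean_chn_*` holds the corresponding means as integers. A quick check of the reciprocals:

```python
# Per-channel std deviations used by the CPU Normalize() path in main.cc.
stds = [58.395, 57.120, 57.375]

# AIPP stores 1/std so it can normalize with a multiply instead of a divide.
var_reci = [1.0 / s for s in stds]
for v in var_reci:
    print(v)
```

The printed values match `var_reci_chn_0..2` in the config above.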
@@ -10,11 +10,15 @@
    - [Script and Sample Code](#script-and-sample-code)
- [Training Process](#training-process)
- [Evaluation Process](#eval-process)
- [Inference Process](#inference-process)
    - [Export MindIR](#export-mindir)
    - [Infer on Ascend310](#infer-on-ascend310)
    - [result](#result)
- [Model Description](#model-description)
- [Performance](#performance)
    - [Training Performance](#training-performance)
    - [Evaluation Performance](#evaluation-performance)
    - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

@@ -64,12 +68,15 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
```python
├── MobileNetV2
  ├── README.md              # descriptions about MobileNetV2
  ├── ascend310_infer        # application for 310 inference
  ├── scripts
  │   ├──run_infer_310.sh    # shell script for 310 infer
  │   ├──run_train.sh        # shell script for train, fine_tune or incremental learn with CPU, GPU or Ascend
  │   ├──run_eval.sh         # shell script for evaluation with CPU, GPU or Ascend
  │   ├──cache_util.sh       # a collection of helper functions to manage cache
  │   ├──run_train_nfs_cache.sh  # shell script for train with NFS dataset and leverage caching service for better performance
  ├── src
  │   ├──aipp.cfg            # aipp config
  │   ├──args.py             # parse args
  │   ├──config.py           # parameter configuration
  │   ├──dataset.py          # creating dataset
@@ -81,6 +88,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
  ├── eval.py                # evaluation script
  ├── export.py              # export mindir script
  ├── mindspore_hub_conf.py  # mindspore hub interface
  ├── postprocess.py         # postprocess script
```

## [Training process](#contents)
@@ -226,13 +234,37 @@ CPU: sh run_train_nfs_cache.sh CPU [TRAIN_DATASET_PATH]
> With cache enabled, a standalone cache server will be started in the background to cache the dataset in memory. However, please make sure the dataset fits in memory (around 120GB of memory is required for caching the ImageNet train dataset).
> Users can choose to shut down the cache server after training or leave it running for future use.

## [Inference process](#contents)

### Export MindIR

```shell
python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EXPORT_FORMAT]
```

The ckpt_file parameter is required, and `EXPORT_FORMAT` must be chosen from ["AIR", "MINDIR"].

### Infer on Ascend310

Before performing inference, the MindIR file must be exported by the `export.py` script. We only provide an example of inference using a MINDIR model.
Currently, batch_size can only be set to 1.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
```

- `DVPP` is mandatory and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. MobileNetV2 performs inference on images of size [224, 224]. The DVPP hardware requires the image width to be divisible by 16 and the height to be divisible by 2; the network meets this standard, so it can preprocess images through DVPP.
- `DEVICE_ID` is optional, default value is 0.

### result

The inference result is saved in the current path; you can find a result like the following in the acc.log file.

```bash
'Accuracy': 0.71654
```

# [Model description](#contents)
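The DVPP constraint mentioned above (width divisible by 16, height divisible by 2) can be checked with a one-liner; the function below is illustrative only, not part of the repository:

```python
def dvpp_compatible(width, height):
    # DVPP hardware constraint: width % 16 == 0 and height % 2 == 0.
    return width % 16 == 0 and height % 2 == 0

print(dvpp_compatible(224, 224))  # True: MobileNetV2's 224x224 input satisfies the constraint
print(dvpp_compatible(225, 224))  # False: width not divisible by 16
```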
@@ -258,6 +290,20 @@ python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EX
| Checkpoint for Fine tuning | 27.3 M | 27.3 M |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/mobilenetv2)|

### Inference Performance

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | MobilenetV2                 |
| Resource            | Ascend 310; CentOS 3.10     |
| Uploaded Date       | 11/05/2021 (month/day/year) |
| MindSpore Version   | 1.2.0                       |
| Dataset             | ImageNet                    |
| batch_size          | 1                           |
| outputs             | Accuracy                    |
| Accuracy            | Accuracy=0.71654            |
| Model for inference | 27.3M (.ckpt file)          |

# [Description of Random Situation](#contents)

<!-- In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py. -->
@@ -17,10 +17,14 @@
    - [Usage](#用法-1)
    - [Launch](#启动-1)
    - [Result](#结果-1)
- [Inference Process](#inference-process)
    - [Export MindIR](#export-mindir)
    - [Infer on Ascend310](#infer-on-ascend310)
    - [Result](#result)
- [Model Description](#model-description)
- [Performance](#性能)
    - [Training Performance](#训练性能)
    - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo主页)

@@ -70,12 +74,15 @@ The overall network architecture of MobileNetV2 is as follows:
```python
├── MobileNetV2
  ├── README.md              # descriptions about MobileNetV2
  ├── ascend310_infer        # application for 310 inference
  ├── scripts
  │   ├──run_train.sh        # shell script for training, fine-tuning or incremental learning with CPU, GPU or Ascend
  │   ├──run_eval.sh         # shell script for evaluation with CPU, GPU or Ascend
  │   ├──cache_util.sh       # helper functions for using the cache
  │   ├──run_train_nfs_cache.sh  # shell script for training with an NFS dataset, accelerated by the caching service
  │   ├──run_infer_310.sh    # shell script for inference with DVPP or CPU operators
  ├── src
  │   ├──aipp.cfg            # aipp config
  │   ├──args.py             # parse args
  │   ├──config.py           # parameter configuration
  │   ├──dataset.py          # creating dataset
@@ -86,7 +93,9 @@ The overall network architecture of MobileNetV2 is as follows:
  │   ├──utils.py            # load ckpt_file for fine-tuning or incremental learning
  ├── train.py               # training script
  ├── eval.py                # evaluation script
  ├── export.py              # export script
  ├── mindspore_hub_conf.py  # MindSpore Hub interface
  ├── postprocess.py         # postprocess script for inference
```

## Training Process
@@ -232,13 +241,37 @@ CPU: sh run_train_nfs_cache.sh CPU [TRAIN_DATASET_PATH]
> With cache enabled, a standalone cache server is started in the background to cache the dataset in memory. Before using the cache, make sure there is enough memory to hold the images in the dataset (caching the ImageNet training set requires about 120GB of memory).
> After training, you can choose to shut down the cache server or leave it running to serve future training jobs.

## Inference Process

### Export MindIR

```shell
python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EXPORT_FORMAT]
```

The ckpt_file parameter is required, and `EXPORT_FORMAT` must be chosen from ["AIR", "MINDIR"].

### Infer on Ascend310

Before performing inference, the MindIR file must be exported by the `export.py` script. The following shows an example of running inference with a MindIR model.
Currently, only batch_size 1 is supported.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
```

- `DVPP` is mandatory and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. MobileNetV2 performs inference on images of size [224, 224]. The DVPP hardware requires the image width to be divisible by 16 and the height to be divisible by 2; the network meets this standard, so it can preprocess images through DVPP.
- `DEVICE_ID` is optional, default value is 0.

### Result

The inference result is saved in the current path of script execution; you can find the following accuracy result in the acc.log file.

```bash
'Accuracy': 0.71654
```

# Model Description
@@ -264,6 +297,20 @@ python export.py --platform [PLATFORM] --ckpt_file [CKPT_PATH] --file_format [EX
| Checkpoint for fine-tuning | 27.3M | 27.3M |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/mobilenetv2)|

### Inference Performance

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | MobilenetV2                 |
| Resource            | Ascend 310; OS CentOS 3.10  |
| Uploaded Date       | 2021-05-11                  |
| MindSpore Version   | 1.2.0                       |
| Dataset             | ImageNet                    |
| batch_size          | 1                           |
| outputs             | Accuracy                    |
| Accuracy            | Accuracy=0.71654            |
| Model for inference | 27.3M (.ckpt file)          |

# Description of Random Situation

<!-- In dataset.py, the seed is set inside the "create_dataset" function; a random seed is also used in train.py. -->

@@ -0,0 +1,14 @@
cmake_minimum_required(VERSION 3.14.1)
project(Ascend310Infer)
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -g -std=c++17 -Werror -Wall -fPIE -Wl,--allow-shlib-undefined")
set(PROJECT_SRC_ROOT ${CMAKE_CURRENT_LIST_DIR}/)
option(MINDSPORE_PATH "mindspore install path" "")
include_directories(${MINDSPORE_PATH})
include_directories(${MINDSPORE_PATH}/include)
include_directories(${PROJECT_SRC_ROOT})
find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)

add_executable(main src/main.cc src/utils.cc)
target_link_libraries(main ${MS_LIB} ${MD_LIB} gflags)
@@ -0,0 +1,23 @@
#!/bin/bash
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

if [ ! -d out ]; then
    mkdir out
fi
cd out || exit
cmake .. \
    -DMINDSPORE_PATH="`pip show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`"
make
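The `cmake` invocation in build.sh locates the installed MindSpore package by parsing `pip show mindspore-ascend` output with `grep`/`awk`. A small sketch of what that parsing does (function name and sample output are illustrative, not part of the repository):

```python
def mindspore_path(pip_show_output):
    # Find the "Location:" line and append /mindspore,
    # like `grep Location | awk '{print $2"/mindspore"}'`.
    for line in pip_show_output.splitlines():
        if line.startswith("Location"):
            return line.split()[1] + "/mindspore"
    return ""

sample = "Name: mindspore-ascend\nLocation: /usr/local/lib/python3.7/site-packages"
print(mindspore_path(sample))  # /usr/local/lib/python3.7/site-packages/mindspore
```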
@@ -0,0 +1,32 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef MINDSPORE_INFERENCE_UTILS_H_
#define MINDSPORE_INFERENCE_UTILS_H_

#include <sys/stat.h>
#include <dirent.h>
#include <vector>
#include <string>
#include <memory>
#include "include/api/types.h"

std::vector<std::string> GetAllFiles(std::string_view dirName);
DIR *OpenDir(std::string_view dirName);
std::string RealPath(std::string_view path);
mindspore::MSTensor ReadFileToTensor(const std::string &file);
int WriteResult(const std::string& imageFile, const std::vector<mindspore::MSTensor> &outputs);
#endif
@@ -0,0 +1,161 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#include <sys/time.h>
#include <gflags/gflags.h>
#include <dirent.h>
#include <iostream>
#include <string>
#include <algorithm>
#include <iosfwd>
#include <vector>
#include <fstream>
#include <sstream>
#include <map>     // for costTime_map
#include <memory>  // for std::shared_ptr / std::make_shared

#include "include/api/model.h"
#include "include/api/context.h"
#include "include/api/types.h"
#include "include/api/serialization.h"
#include "include/dataset/vision_ascend.h"
#include "include/dataset/execute.h"
#include "include/dataset/vision.h"
#include "inc/utils.h"

using mindspore::Context;
using mindspore::Serialization;
using mindspore::Model;
using mindspore::Status;
using mindspore::ModelType;
using mindspore::GraphCell;
using mindspore::kSuccess;
using mindspore::MSTensor;
using mindspore::dataset::Execute;
using mindspore::dataset::TensorTransform;
using mindspore::dataset::vision::Resize;
using mindspore::dataset::vision::HWC2CHW;
using mindspore::dataset::vision::Normalize;
using mindspore::dataset::vision::Decode;
using mindspore::dataset::vision::CenterCrop;

DEFINE_string(mindir_path, "", "mindir path");
DEFINE_string(dataset_path, ".", "dataset path");
DEFINE_int32(device_id, 0, "device id");
DEFINE_string(aipp_path, "../../scripts/aipp.cfg", "aipp path");
DEFINE_string(cpu_dvpp, "", "cpu or dvpp process");
DEFINE_int32(image_height, 224, "image height");
DEFINE_int32(image_width, 224, "image width");

int main(int argc, char **argv) {
  gflags::ParseCommandLineFlags(&argc, &argv, true);
  if (RealPath(FLAGS_mindir_path).empty()) {
    std::cout << "Invalid mindir" << std::endl;
    return 1;
  }

  auto context = std::make_shared<Context>();
  auto ascend310 = std::make_shared<mindspore::Ascend310DeviceInfo>();
  ascend310->SetDeviceID(FLAGS_device_id);
  ascend310->SetBufferOptimizeMode("off_optimize");
  context->MutableDeviceInfo().push_back(ascend310);
  mindspore::Graph graph;
  Serialization::Load(FLAGS_mindir_path, ModelType::kMindIR, &graph);
  if (FLAGS_cpu_dvpp == "DVPP") {
    if (RealPath(FLAGS_aipp_path).empty()) {
      std::cout << "Invalid aipp path" << std::endl;
      return 1;
    } else {
      ascend310->SetInsertOpConfigPath(FLAGS_aipp_path);
    }
  }

  Model model;
  Status ret = model.Build(GraphCell(graph), context);
  if (ret != kSuccess) {
    std::cout << "ERROR: Build failed." << std::endl;
    return 1;
  }

  auto all_files = GetAllFiles(FLAGS_dataset_path);
  std::map<double, double> costTime_map;
  size_t size = all_files.size();

  for (size_t i = 0; i < size; ++i) {
    struct timeval start = {0};
    struct timeval end = {0};
    double startTimeMs;
    double endTimeMs;
    std::vector<MSTensor> inputs;
    std::vector<MSTensor> outputs;
    std::cout << "Start predict input files:" << all_files[i] << std::endl;
    if (FLAGS_cpu_dvpp == "DVPP") {
      // DVPP path: AIPP handles normalization, so only decode and resize here.
      std::shared_ptr<TensorTransform> decode(new Decode());
      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
      Execute composeDecode({decode, resize});
      auto imgDvpp = std::make_shared<MSTensor>();
      composeDecode(ReadFileToTensor(all_files[i]), imgDvpp.get());
      inputs.emplace_back(imgDvpp->Name(), imgDvpp->DataType(), imgDvpp->Shape(),
                          imgDvpp->Data().get(), imgDvpp->DataSize());
    } else {
      // CPU path: full decode/resize/crop/normalize/layout pipeline.
      std::shared_ptr<TensorTransform> decode(new Decode());
      std::shared_ptr<TensorTransform> hwc2chw(new HWC2CHW());
      std::shared_ptr<TensorTransform> normalize(
          new Normalize({123.675, 116.28, 103.53}, {58.395, 57.120, 57.375}));
      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
      std::vector<int32_t> crop_size = {224, 224};
      std::shared_ptr<TensorTransform> center_crop(new CenterCrop(crop_size));
      Execute transform({decode, resize, center_crop, normalize, hwc2chw});
      auto img = MSTensor();
      auto image = ReadFileToTensor(all_files[i]);
      transform(image, &img);
      std::vector<MSTensor> model_inputs = model.GetInputs();
      inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
                          img.Data().get(), img.DataSize());
    }

    gettimeofday(&start, nullptr);
    ret = model.Predict(inputs, &outputs);
    gettimeofday(&end, nullptr);
    if (ret != kSuccess) {
      std::cout << "Predict " << all_files[i] << " failed." << std::endl;
      return 1;
    }
    startTimeMs = (1.0 * start.tv_sec * 1000000 + start.tv_usec) / 1000;
    endTimeMs = (1.0 * end.tv_sec * 1000000 + end.tv_usec) / 1000;
    costTime_map.insert(std::pair<double, double>(startTimeMs, endTimeMs));
    WriteResult(all_files[i], outputs);
  }
  double average = 0.0;
  int inferCount = 0;

  for (auto iter = costTime_map.begin(); iter != costTime_map.end(); iter++) {
    double diff = 0.0;
    diff = iter->second - iter->first;
    average += diff;
    inferCount++;
  }
  average = average / inferCount;
  std::stringstream timeCost;
  timeCost << "NN inference cost average time: " << average << " ms of infer_count " << inferCount << std::endl;
  std::cout << "NN inference cost average time: " << average << " ms of infer_count " << inferCount << std::endl;
  std::string fileName = "./time_Result" + std::string("/test_perform_static.txt");
  std::ofstream fileStream(fileName.c_str(), std::ios::trunc);
  fileStream << timeCost.str();
  fileStream.close();
  costTime_map.clear();
  return 0;
}
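The latency bookkeeping at the end of main.cc stores one (start, end) millisecond pair per image and reports the mean difference. A minimal Python sketch of the same computation (the timestamp values are illustrative):

```python
# map of start-time -> end-time in milliseconds, one entry per image
cost_time_map = {0.0: 12.0, 20.0: 30.0, 45.0: 51.0}

# average per-image wall-clock latency, as in the C++ loop over costTime_map
diffs = [end - start for start, end in cost_time_map.items()]
average = sum(diffs) / len(diffs)
print("NN inference cost average time: %s ms of infer_count %d" % (average, len(diffs)))
```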
@@ -0,0 +1,130 @@
/**
 * Copyright 2021 Huawei Technologies Co., Ltd
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include "inc/utils.h"

#include <climits>  // PATH_MAX
#include <cstdlib>  // realpath
#include <fstream>
#include <algorithm>
#include <iostream>

using mindspore::MSTensor;
using mindspore::DataType;

std::vector<std::string> GetAllFiles(std::string_view dirName) {
  struct dirent *filename;
  DIR *dir = OpenDir(dirName);
  if (dir == nullptr) {
    return {};
  }
  std::vector<std::string> res;
  while ((filename = readdir(dir)) != nullptr) {
    std::string dName = std::string(filename->d_name);
    if (dName == "." || dName == ".." || filename->d_type != DT_REG) {
      continue;
    }
    res.emplace_back(std::string(dirName) + "/" + filename->d_name);
  }
  std::sort(res.begin(), res.end());
  for (auto &f : res) {
    std::cout << "image file: " << f << std::endl;
  }
  return res;
}

int WriteResult(const std::string& imageFile, const std::vector<MSTensor> &outputs) {
  std::string homePath = "./result_Files";
  for (size_t i = 0; i < outputs.size(); ++i) {
    size_t outputSize;
    std::shared_ptr<const void> netOutput;
    netOutput = outputs[i].Data();
    outputSize = outputs[i].DataSize();
    int pos = imageFile.rfind('/');
    std::string fileName(imageFile, pos + 1);
    fileName.replace(fileName.find('.'), fileName.size() - fileName.find('.'), ".bin");
    std::string outFileName = homePath + "/" + fileName;
    FILE *outputFile = fopen(outFileName.c_str(), "wb");
    fwrite(netOutput.get(), outputSize, sizeof(char), outputFile);
    fclose(outputFile);
    outputFile = nullptr;
  }
  return 0;
}

mindspore::MSTensor ReadFileToTensor(const std::string &file) {
  if (file.empty()) {
    std::cout << "Pointer file is nullptr" << std::endl;
    return mindspore::MSTensor();
  }

  std::ifstream ifs(file);
  if (!ifs.good()) {
    std::cout << "File: " << file << " does not exist" << std::endl;
    return mindspore::MSTensor();
  }

  if (!ifs.is_open()) {
    std::cout << "File: " << file << " open failed" << std::endl;
    return mindspore::MSTensor();
  }

  ifs.seekg(0, std::ios::end);
  size_t size = ifs.tellg();
  mindspore::MSTensor buffer(file, mindspore::DataType::kNumberTypeUInt8, {static_cast<int64_t>(size)}, nullptr, size);

  ifs.seekg(0, std::ios::beg);
  ifs.read(reinterpret_cast<char *>(buffer.MutableData()), size);
  ifs.close();

  return buffer;
}

DIR *OpenDir(std::string_view dirName) {
  if (dirName.empty()) {
    std::cout << " dirName is null ! " << std::endl;
    return nullptr;
  }
  std::string realPath = RealPath(dirName);
  struct stat s;
  lstat(realPath.c_str(), &s);
  if (!S_ISDIR(s.st_mode)) {
    std::cout << "dirName is not a valid directory !" << std::endl;
    return nullptr;
  }
  DIR *dir;
  dir = opendir(realPath.c_str());
  if (dir == nullptr) {
    std::cout << "Can not open dir " << dirName << std::endl;
    return nullptr;
  }
  std::cout << "Successfully opened the dir " << dirName << std::endl;
  return dir;
}

std::string RealPath(std::string_view path) {
  char realPathMem[PATH_MAX] = {0};
  char *realPathRet = nullptr;
  realPathRet = realpath(path.data(), realPathMem);

  if (realPathRet == nullptr) {
    std::cout << "File: " << path << " does not exist.";
    return "";
  }

  std::string realPath(realPathMem);
  std::cout << path << " realpath is: " << realPath << std::endl;
  return realPath;
}
@@ -27,7 +27,7 @@ parser.add_argument("--device_id", type=int, default=0, help="Device id")
parser.add_argument("--batch_size", type=int, default=1, help="batch size")
parser.add_argument("--ckpt_file", type=str, required=True, help="Checkpoint file path.")
parser.add_argument("--file_name", type=str, default="mobilenetv2", help="output file name.")
parser.add_argument("--file_format", type=str, choices=["AIR", "MINDIR"], default="AIR", help="file format")
parser.add_argument('--platform', type=str, default="Ascend", choices=("Ascend", "GPU", "CPU"),
                    help='run platform, only support GPU, CPU and Ascend')
args = parser.parse_args()

@@ -0,0 +1,52 @@

# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""post process for 310 inference"""
import os
import argparse
import numpy as np

batch_size = 1
parser = argparse.ArgumentParser(description="mobilenetv2 acc calculation")
parser.add_argument("--result_path", type=str, required=True, help="result files path.")
parser.add_argument("--label_path", type=str, required=True, help="label path.")
args = parser.parse_args()


def calcul_acc(labels, preds):
    return sum(1 for x, y in zip(labels, preds) if x == y) / len(labels)


def get_result(result_path, label_path):
    files = os.listdir(result_path)
    preds = []
    labels = []
    label_dict = {}
    with open(label_path, 'r') as f:  # open for reading ('w' would truncate the label file and fail on readlines)
        lines = f.readlines()
        for line in lines:
            label_dict[line.split(',')[0]] = line.split(',')[1]
    for file in files:
        file_name = file.split('.')[0]
        label = int(label_dict[file_name + '.JPEG'])  # '.JPEG' matches ImageNet file names (was misspelled '.JEPG')
        labels.append(label)
        result_file = os.path.join(result_path, file)
        output = np.fromfile(result_file, dtype=np.float32)
        preds.append(np.argmax(output, axis=0))
    acc = calcul_acc(labels, preds)
    print("accuracy: {}".format(acc))


if __name__ == '__main__':
    get_result(args.result_path, args.label_path)
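As a quick illustration of the data layout `get_result()` assumes, each line of the label file is a `<image name>,<class id>` pair, and each result file holds the model's raw float32 scores, recovered with `argmax`. The file names and class count below are hypothetical examples, not values from the commit:

```python
import numpy as np

# Hypothetical label file contents: "<image name>,<class id>" per line.
lines = ["ILSVRC2012_val_00000001.JPEG,65\n", "ILSVRC2012_val_00000002.JPEG,970\n"]

label_dict = {}
for line in lines:
    label_dict[line.split(',')[0]] = line.split(',')[1]

# A result file holds raw float32 scores; argmax gives the predicted class.
logits = np.zeros(1000, dtype=np.float32)  # assuming 1000 ImageNet classes
logits[65] = 1.0
pred = int(np.argmax(logits, axis=0))

print(int(label_dict["ILSVRC2012_val_00000001.JPEG"]) == pred)
```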
@@ -0,0 +1,105 @@

#!/bin/bash
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

if [[ $# -lt 3 || $# -gt 4 ]]; then
    echo "Usage: bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
    DVPP is mandatory, and must choose from [DVPP|CPU], it's case-insensitive
    DEVICE_ID is optional, it can be set by environment variable device_id, otherwise the value is zero"
    exit 1
fi

get_real_path(){
    if [ "${1:0:1}" == "/" ]; then
        echo "$1"
    else
        echo "$(realpath -m $PWD/$1)"
    fi
}

model=$(get_real_path $1)
data_path=$(get_real_path $2)
DVPP=${3^^}

device_id=0
if [ $# == 4 ]; then
    device_id=$4
fi

echo "mindir name: "$model
echo "dataset path: "$data_path
echo "image process mode: "$DVPP
echo "device id: "$device_id

export ASCEND_HOME=/usr/local/Ascend/
if [ -d ${ASCEND_HOME}/ascend-toolkit ]; then
    export PATH=$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/atc/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/ascend-toolkit/latest/atc/lib64:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
    export TBE_IMPL_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe
    export PYTHONPATH=${TBE_IMPL_PATH}:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/python/site-packages:$PYTHONPATH
    export ASCEND_OPP_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp
else
    export PATH=$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/atc/lib64:$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
    export PYTHONPATH=$ASCEND_HOME/atc/python/site-packages:$PYTHONPATH
    export ASCEND_OPP_PATH=$ASCEND_HOME/opp
fi

function compile_app()
{
    cd ../ascend310_infer || exit
    bash build.sh &> build.log
}

function infer()
{
    cd - || exit
    if [ -d result_Files ]; then
        rm -rf ./result_Files
    fi
    if [ -d time_Result ]; then
        rm -rf ./time_Result
    fi
    mkdir result_Files
    mkdir time_Result
    if [ "$DVPP" == "DVPP" ]; then
        ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --device_id=$device_id --cpu_dvpp=$DVPP --aipp_path=../src/aipp.cfg --image_height=256 --image_width=256 &> infer.log
    elif [ "$DVPP" == "CPU" ]; then
        ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --cpu_dvpp=$DVPP --device_id=$device_id --image_height=256 --image_width=256 &> infer.log
    else
        echo "image process mode must be in [DVPP|CPU]"
        exit 1
    fi
}

function cal_acc()
{
    # run in the foreground so the exit-status check below is meaningful
    # (a trailing '&' here would background the job and always report success)
    python3.7 ../postprocess.py --result_path=./result_Files --label_path=../label.txt &> acc.log
}

compile_app
if [ $? -ne 0 ]; then
    echo "compile app code failed"
    exit 1
fi
infer
if [ $? -ne 0 ]; then
    echo "execute inference failed"
    exit 1
fi
cal_acc
if [ $? -ne 0 ]; then
    echo "calculate accuracy failed"
    exit 1
fi
@@ -0,0 +1,20 @@

aipp_op {
    aipp_mode : static
    input_format : RGB888_U8
    related_input_rank : 0
    csc_switch : false
    rbuv_swap_switch : false
    src_image_size_w : 256
    src_image_size_h : 256
    crop : true
    load_start_pos_w : 16
    load_start_pos_h : 16
    crop_size_w : 224
    crop_size_h : 224
    mean_chn_0 : 124
    mean_chn_1 : 117
    mean_chn_2 : 104
    var_reci_chn_0 : 0.0171247538316637
    var_reci_chn_1 : 0.0175070028011204
    var_reci_chn_2 : 0.0174291938997821
}
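The aipp `mean_chn_*` and `var_reci_chn_*` constants implement per-channel normalization, `dst = (src - mean) * var_reci`. The `var_reci` values appear to be reciprocals of the usual ImageNet per-channel stds scaled to the 0-255 range (0.229, 0.224, 0.225 × 255) — an inference from the numbers, not something the config states. A quick check:

```python
# Verify that var_reci_chn_* == 1/std for the common ImageNet stds
# (0.229, 0.224, 0.225) scaled by 255 -> (58.395, 57.12, 57.375).
stds = [0.229 * 255, 0.224 * 255, 0.225 * 255]
var_reci = [0.0171247538316637, 0.0175070028011204, 0.0174291938997821]

for s, v in zip(stds, var_reci):
    print(f"1/{s:.3f} = {1.0 / s:.16f}  (config: {v})")
```

The means 124, 117, 104 are similarly the rounded values of (0.485, 0.456, 0.406) × 255.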