!9322 Add Posenet Demo README

From: @liuxiao78
Reviewed-by: @zhang_xue_tong, @zhanghaibo5
Signed-off-by: @zhang_xue_tong
mindspore-ci-bot 2020-12-03 10:35:21 +08:00 committed by Gitee
commit 6258f7f19a
11 changed files with 576 additions and 158 deletions


@ -1,13 +1,12 @@
# Demo of Image Classification
The following describes how to use the MindSpore Lite C++ APIs (Android JNI) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most likely classification result on the application's image preview screen.
### Running Dependencies
- Android Studio 3.2 or later (version 4.0 or later is recommended)
- Native development kit (NDK) 21.3
- [CMake](https://cmake.org/download) 3.10.2
- Android software development kit (SDK) 26 or later
- JDK 1.8 or later
@ -21,9 +20,7 @@ The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) an
![start_sdk](images/sdk_management.png)
(Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3), then specify the NDK location in `Android NDK location` of `Project Structure`.
![project_structure](images/project_structure.png)
If you encounter any Android Studio configuration problem while trying this demo, refer to the troubleshooting table in item 4 to resolve it.
2. Connect to an Android device and run the image classification application.
@ -39,13 +36,24 @@ The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) an
![result](images/app_result.jpg)
4. Solutions to Android Studio configuration problems:

| | Warning | Solution |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
| 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify its directory in `Project Structure` - `Android NDK location`. You can refer to the figure below. |
| 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio via `Help` > `Check for Updates`. |
| 4 | SSL peer shut down incorrectly | Rebuild and run the demo again. |
![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program
This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html).
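To make the layering concrete, the following is a minimal sketch of what the JNI boundary between the two layers can look like. The class and method names here are illustrative assumptions for this README, not the demo's actual symbols; the real entry points are implemented in `src/main/cpp/MindSporeNetnative.cpp`.

```cpp
#include <jni.h>

// Hypothetical JNI entry points (names are illustrative, not the demo's actual
// symbols). The Java layer passes the model buffer once to build the inference
// session, then passes each camera frame as a Bitmap and receives a label string.
extern "C" JNIEXPORT jlong JNICALL
Java_com_mindspore_himindsporedemo_gallery_classify_Classifier_loadModel(
    JNIEnv *env, jobject thiz, jobject model_buffer, jint num_thread);

extern "C" JNIEXPORT jstring JNICALL
Java_com_mindspore_himindsporedemo_gallery_classify_Classifier_runNet(
    JNIEnv *env, jobject thiz, jlong net_env, jobject bitmap);
```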
### Sample Program Structure
```text
app
├── src/main
@ -58,12 +66,12 @@ app
│ | └── MindSporeNetnative.h # header file
│ |
│ ├── java # application code at the Java layer
│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
│ │ └── widget # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
│ └── AndroidManifest.xml # Android configuration file
@ -84,7 +92,7 @@ Note: if the automatic download fails, please manually download the relevant lib
mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz [Download link](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz)
```text
android{
defaultConfig{
externalNativeBuild{
@ -93,7 +101,7 @@ android{
}
}
ndk{
abiFilters 'armeabi-v7a', 'arm64-v8a'
}
}
@ -102,7 +110,7 @@ android{
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```text
# ============== Set MindSpore Dependencies. =============
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
@ -120,7 +128,7 @@ set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
# --- mindspore ---
@ -132,7 +140,7 @@ target_link_libraries(
### Downloading and Deploying a Model File
In this example, the `download.gradle` file automatically downloads `mobilenetv2.ms` and places it in the `app/libs/arm64-v8a` directory.
Note: if the automatic download fails, please manually download the model file and put it in the corresponding location.
@ -142,11 +150,11 @@ mobilenetv2.ms [mobilenetv2.ms]( https://download.mindspore.cn/model_zoo/officia
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load a model file. Create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
@ -154,24 +162,24 @@ The inference code process is as follows. For details about the complete code, s
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
- Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
// Create context.
mindspore::lite::Context *context = new mindspore::lite::Context;
context->thread_num_ = num_thread;
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
delete(context);
```
- Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char* modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context* ctx)
@ -183,7 +191,7 @@ The inference code process is as follows. For details about the complete code, s
}
```
2. Convert the input image into the Tensor format of the MindSpore model.
Convert the image data to be detected into the Tensor format of the MindSpore model.
@ -230,9 +238,9 @@ The inference code process is as follows. For details about the complete code, s
inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
```
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
```cpp
// After the model and image tensor data is loaded, run inference.
@ -305,5 +313,5 @@ The inference code process is as follows. For details about the complete code, s
}
return categoryScore;
}
```
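The execution call elided in the snippet above is essentially the following (a minimal sketch based on the session built earlier; the full error handling is in `src/cpp/MindSporeNetnative.cpp`):

```cpp
// Run the compiled graph on the input tensors filled in the previous step.
auto status = mSession->RunGraph();
if (status != mindspore::lite::RET_OK) {
    MS_PRINT("MindSpore run net error.");
    return NULL;
}
```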


@ -2,18 +2,17 @@
This sample program demonstrates how to use the MindSpore Lite C++ APIs (Android JNI) and a MindSpore Lite image classification model to perform on-device inference, classify the content captured by the device camera, and display the most likely classification result on the application's image preview screen.
### Running Dependencies
- Android Studio 3.2 or later (version 4.0 or later is recommended)
- NDK 21.3
- [CMake](https://cmake.org/download) 3.10.2
- Android SDK 26 or later
- JDK 1.8 or later
### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio installs it automatically.)
![start_home](images/home.png)
@ -21,15 +20,13 @@
![start_sdk](images/sdk_management.png)
(Optional) If an NDK version issue occurs during installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) (the version used in the sample code is 21.3), then specify the NDK location in `Android NDK location` of `Project Structure`.
![project_structure](images/project_structure.png)
If any Android Studio configuration problem occurs during use, refer to the troubleshooting table in item 4 to resolve it.
2. Connect to an Android device and run the image classification application.
Connect the Android device via USB for debugging and click `Run 'app'` to run the sample project on your device.
> During the build, Android Studio automatically downloads MindSpore Lite, the model file, and other dependencies; please wait patiently for the build to complete.
![run_app](images/run_app.PNG)
@ -45,6 +42,16 @@
![result](images/app_result.jpg)
4. Solutions to Android Studio configuration problems:

| | Warning | Solution |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
| 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) and specify its directory in `Project Structure` - `Android NDK location`. You can refer to the figure below. |
| 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio via `Help` > `Check for Updates`. |
| 4 | SSL peer shut down incorrectly | Rebuild the project. |
![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program
@ -54,7 +61,7 @@
### Sample Program Structure
```text
app
├── src/main
│ ├── assets # resource files
@ -68,12 +75,12 @@ app
| | └── MsNetWork.cpp # MindSpore API wrapper
│ |
│ ├── java # application code at the Java layer
│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
│ │ └── widget # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
│ └── AndroidManifest.xml # Android configuration file
@ -96,13 +103,13 @@ When the Android JNI layer calls MindSpore C++ APIs, related library files are required. They can
In this example, the build process automatically downloads the MindSpore Lite library files via the `download.gradle` script and places them in the `app/src/main/cpp/` directory.
> If the automatic download fails, manually download the relevant library files, decompress them, and put them in the corresponding location:
mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz [Download link](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz)
Configure CMake compilation support and `arm64-v8a` compilation support in the app's `build.gradle` file, as shown below:
```text
android{
defaultConfig{
externalNativeBuild{
@ -111,7 +118,7 @@ android{
}
}
ndk{
abiFilters 'arm64-v8a'
}
}
@ -120,7 +127,7 @@ android{
Create links to the `.so` library files in the `app/CMakeLists.txt` file, as shown below:
```text
# ============== Set MindSpore Dependencies. =============
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
@ -138,7 +145,7 @@ set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
# --- mindspore ---
@ -152,41 +159,43 @@ target_link_libraries(
Download the model file from the MindSpore Model Hub. The on-device image classification model used in this sample program is `mobilenetv2.ms`, which is automatically downloaded by the `download.gradle` script during app building and placed in the `app/src/main/assets` project directory.
* 注若下载失败请手动下载模型文件mobilenetv2.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)。
> 若下载失败请手动下载模型文件mobilenetv2.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)。
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load a model file: create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
jlong bufferLen = env->GetDirectBufferCapacity(buffer);
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
- Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
// Create context.
lite::Context *context = new lite::Context;
context->thread_num_ = numThread; //Specify the number of threads to run inference
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, context);
delete(context);
```
- Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char* modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context* ctx)
{
@ -196,11 +205,11 @@ target_link_libraries(
int ret = session->CompileGraph(model);
}
```
2. Convert the input image into the Tensor format of the MindSpore model.
Convert the image data to be detected into the Tensor format that is fed to the MindSpore model.
```cpp
if (!BitmapToLiteMat(env, srcBitmap, &lite_mat_bgr)) {
MS_PRINT("BitmapToLiteMat error");
@ -243,8 +252,8 @@ target_link_libraries(
memcpy(inTensor->MutableData(), dataHWC,
inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
```
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
@ -254,6 +263,7 @@ target_link_libraries(
```
- Obtain the output data.
```cpp
auto names = mSession->GetOutputTensorNames();
std::unordered_map<std::string,mindspore::tensor::MSTensor *> msOutputs;
@ -264,8 +274,9 @@ target_link_libraries(
std::string resultStr = ProcessRunnetResult(::RET_CATEGORY_SUM,
::labels_name_map, msOutputs);
```
- Post-process the output data.
```cpp
std::string ProcessRunnetResult(const int RET_CATEGORY_SUM, const char *const labels_name_map[],
std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs) {
@ -318,5 +329,5 @@ target_link_libraries(
}
return categoryScore;
}
```


@ -16,13 +16,11 @@ The following section describes how to build and execute an on-device object det
### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)
![start_home](images/home.png)
Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.
![start_sdk](images/sdk_management.png)
If you encounter any Android Studio configuration problem while trying this demo, refer to the troubleshooting table in item 4 to resolve it.
2. Connect to an Android device and run the object detection application.
@ -36,6 +34,16 @@ The following section describes how to build and execute an on-device object det
![result](images/object_detection.png)
4. Solutions to Android Studio configuration problems:

| | Warning | Solution |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
| 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify its directory in `Project Structure` - `Android NDK location`. You can refer to the figure below. |
| 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio via `Help` > `Check for Updates`. |
| 4 | SSL peer shut down incorrectly | Rebuild and run the demo again. |
![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program
@ -51,7 +59,7 @@ Note: if the automatic download fails, please manually download the relevant lib
mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz [Download link](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz)
```text
android{
defaultConfig{
externalNativeBuild{
@ -60,7 +68,7 @@ android{
}
}
ndk{
abiFilters 'arm64-v8a'
}
}
@ -69,7 +77,7 @@ android{
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```text
# Set MindSpore Lite Dependencies.
set(MINDSPORELITE_VERSION mindspore-lite-1.0.1-runtime-arm64-cpu)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
@ -80,7 +88,7 @@ set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
# Link target library.
target_link_libraries(
...
mindspore-lite
@ -91,23 +99,21 @@ target_link_libraries(
### Downloading and Deploying a Model File
In this example, the `download.gradle` file automatically downloads `ssd.ms` and places it in the `app/libs/arm64-v8a` directory.
Note: if the automatic download fails, please manually download the model file and put it in the corresponding location.
ssd.ms [ssd.ms](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms)
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load a model file. Create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
@ -115,26 +121,26 @@ The inference code process is as follows. For details about the complete code, s
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
- Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
// Create context.
lite::Context *context = new lite::Context;
context->device_ctx_.type = lite::DT_CPU;
context->thread_num_ = numThread; //Specify the number of threads to run inference
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
delete(context);
```
- Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char* modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context* ctx)
@ -146,12 +152,12 @@ The inference code process is as follows. For details about the complete code, s
}
```
2. Pre-process the image data and convert the input image into the Tensor format of the MindSpore model.
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
LiteMat lite_mat_bgr,lite_norm_mat_cut;
if (!BitmapToLiteMat(env, srcBitmap, lite_mat_bgr)){
MS_PRINT("BitmapToLiteMat error");
return NULL;
@ -166,7 +172,7 @@ The inference code process is as follows. For details about the complete code, s
inputDims.channel =lite_norm_mat_cut.channel_;
inputDims.width = lite_norm_mat_cut.width_;
inputDims.height = lite_norm_mat_cut.height_;
// Get the MindSpore inference environment which was created in loadModel().
void **labelEnv = reinterpret_cast<void **>(netEnv);
if (labelEnv == nullptr) {
@ -174,17 +180,17 @@ The inference code process is as follows. For details about the complete code, s
return NULL;
}
MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);
auto mSession = labelNet->session;
if (mSession == nullptr) {
MS_PRINT("MindSpore error, Session is a nullptr.");
return NULL;
}
MS_PRINT("MindSpore get session.");
auto msInputs = mSession->GetInputs();
auto inTensor = msInputs.front();
float *dataHWC = reinterpret_cast<float *>(lite_norm_mat_cut.data_ptr_);
// copy input Tensor
memcpy(inTensor->MutableData(), dataHWC,
@ -219,7 +225,7 @@ The inference code process is as follows. For details about the complete code, s
```
4. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
Perform graph execution and on-device inference.
@ -243,14 +249,14 @@ The inference code process is as follows. For details about the complete code, s
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
The model outputs the object category scores (1:1917:81) and the object detection location offsets (1:1917:4). The offsets are combined with the default boxes computed in the `getDefaultBoxes` function to recover the object locations.
```cpp
void SSDModelUtil::getDefaultBoxes() {
float fk[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
std::vector<struct WHBox> all_sizes;
struct Product mProductData[19 * 19] = {0};
for (int i = 0; i < 6; i++) {
fk[i] = config.model_input_height / config.steps[i];
}
@ -260,36 +266,36 @@ The inference code process is as follows. For details about the complete code, s
for (int i = 0; i < sizeof(config.num_default) / sizeof(int); i++) {
scales[i] = config.min_scale + scale_rate * i;
}
for (int idex = 0; idex < sizeof(config.feature_size) / sizeof(int); idex++) {
float sk1 = scales[idex];
float sk2 = scales[idex + 1];
float sk3 = sqrt(sk1 * sk2);
struct WHBox tempWHBox;
all_sizes.clear();
if (idex == 0) {
float w = sk1 * sqrt(2);
float h = sk1 / sqrt(2);
tempWHBox.boxw = 0.1;
tempWHBox.boxh = 0.1;
all_sizes.push_back(tempWHBox);
tempWHBox.boxw = w;
tempWHBox.boxh = h;
all_sizes.push_back(tempWHBox);
tempWHBox.boxw = h;
tempWHBox.boxh = w;
all_sizes.push_back(tempWHBox);
} else {
tempWHBox.boxw = sk1;
tempWHBox.boxh = sk1;
all_sizes.push_back(tempWHBox);
for (int j = 0; j < sizeof(config.aspect_ratios[idex]) / sizeof(int); j++) {
float w = sk1 * sqrt(config.aspect_ratios[idex][j]);
float h = sk1 / sqrt(config.aspect_ratios[idex][j]);
@ -300,21 +306,21 @@ The inference code process is as follows. For details about the complete code, s
tempWHBox.boxh = w;
all_sizes.push_back(tempWHBox);
}
tempWHBox.boxw = sk3;
tempWHBox.boxh = sk3;
all_sizes.push_back(tempWHBox);
}
for (int i = 0; i < config.feature_size[idex]; i++) {
for (int j = 0; j < config.feature_size[idex]; j++) {
mProductData[i * config.feature_size[idex] + j].x = i;
mProductData[i * config.feature_size[idex] + j].y = j;
}
}
int productLen = config.feature_size[idex] * config.feature_size[idex];
for (int i = 0; i < productLen; i++) {
for (int j = 0; j < all_sizes.size(); j++) {
struct NormalBox tempBox;
@ -546,4 +552,3 @@ The inference code process is as follows. For details about the complete code, s
return result;
}
```
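The excerpt above elides how the predicted offsets are combined with the default boxes to recover actual box coordinates (the demo does this in its `ssd_boxes_decode` step). Below is a minimal sketch of the standard SSD decoding; the struct layout and the variance constants 0.1/0.2 are common-default assumptions for illustration, not values confirmed by this README.

```cpp
#include <cmath>

// Center-form box: center coordinates plus height and width (layout assumed).
struct CenterBox { float cy, cx, h, w; };

// Standard SSD decode: shift the default box center by the scaled predicted
// offset, and scale its size exponentially (variances 0.1/0.2 assumed).
CenterBox DecodeBox(const CenterBox &prior, const CenterBox &offset) {
  CenterBox out;
  out.cy = offset.cy * 0.1f * prior.h + prior.cy;
  out.cx = offset.cx * 0.1f * prior.w + prior.cx;
  out.h = std::exp(offset.h * 0.2f) * prior.h;
  out.w = std::exp(offset.w * 0.2f) * prior.w;
  return out;
}
```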


@ -2,7 +2,6 @@
This sample program demonstrates how to use the MindSpore Lite C++ APIs (Android JNI) and a MindSpore Lite object detection model to perform on-device inference, detect the content captured by the device camera or selected from the gallery, and display continuous object detection results on the application's image preview screen.
### Running Dependencies
- Android Studio 3.2 or later (version 4.0 or later is recommended)
@ -12,7 +11,7 @@
### Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio installs it automatically.)
![start_home](images/home.png)
@ -20,14 +19,12 @@
![start_sdk](images/sdk_management.png)
(Optional) If an NDK version issue occurs during installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) (the version used in the sample code is 21.3), then specify the NDK location in `Android NDK location` of `Project Structure`.
![project_structure](images/project_structure.png)
If any Android Studio configuration problem occurs during use, refer to the troubleshooting table in item 4 to resolve it.
2. Connect to an Android device and run the object detection application.
Connect the Android device via USB for debugging and click `Run 'app'` to run the sample project on your device.
> During the build, Android Studio automatically downloads MindSpore Lite, the model file, and other dependencies; please wait patiently for the build to complete.
![run_app](images/run_app.PNG)
@ -41,6 +38,16 @@
![result](images/object_detection.png)
4. Solutions to Android Studio configuration problems:

| | Warning | Solution |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
| 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) and specify its directory in `Project Structure` - `Android NDK location`. You can refer to the figure below. |
| 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio via `Help` > `Check for Updates`. |
| 4 | SSL peer shut down incorrectly | Rebuild the project. |
![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program
@ -50,7 +57,7 @@
### Sample Program Structure
```text
app
|
├── libs # library files built from the demo's JNI layer
@ -67,12 +74,12 @@ app
│ | |
| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│ ├── java # application code at the Java layer
│ │ └── com.huawei.himindsporedemo
│ │ ├── help # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
│ │ └── obejctdetect # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
│ └── AndroidManifest.xml # Android configuration file
@ -95,13 +102,13 @@ When the Android JNI layer calls MindSpore C++ APIs, related library files are required. They can
In this example, the build process automatically downloads the MindSpore Lite library files via the `download.gradle` script and places them in the `app/src/main/cpp/` directory.
> If the automatic download fails, manually download the relevant library files, decompress them, and put them in the corresponding location:
mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz [Download link](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz)
Configure CMake compilation support and `arm64-v8a` compilation support in the app's `build.gradle` file, as shown below:
```text
android{
defaultConfig{
externalNativeBuild{
@ -110,7 +117,7 @@ android{
}
}
ndk{
abiFilters 'arm64-v8a'
}
}
@ -119,7 +126,7 @@ android{
Create links to the `.so` library files in the `app/CMakeLists.txt` file, as shown below:
```text
# Set MindSpore Lite Dependencies.
set(MINDSPORELITE_VERSION mindspore-lite-1.0.1-runtime-arm64-cpu)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
@ -130,7 +137,7 @@ set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
# Link target library.
target_link_libraries(
...
mindspore-lite
@ -143,61 +150,63 @@ target_link_libraries(
Download the model file from the MindSpore Model Hub. The object detection model used in this sample program is `ssd.ms`, which is automatically downloaded by the `download.gradle` script during app building and placed in the `app/src/main/assets` project directory.
* 注若下载失败请手动下载模型文件ssd.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms)。
> 若下载失败请手动下载模型文件ssd.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms)。
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load a model file: create and configure the context for model inference.
```cpp
// Buffer is the model data passed in by the Java layer
jlong bufferLen = env->GetDirectBufferCapacity(buffer);
char *modelBuffer = CreateLocalModelBuffer(env, buffer);
```
- Create a session.
```cpp
void **labelEnv = new void *;
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
// Create context.
lite::Context *context = new lite::Context;
context->cpu_bind_mode_ = lite::NO_BIND;
context->device_ctx_.type = lite::DT_CPU;
context->thread_num_ = numThread; //Specify the number of threads to run inference
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
delete context;
```
- Load the model file and build a computational graph for inference.
```cpp
void MSNetWork::CreateSessionMS(char* modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context* ctx)
{
session = mindspore::session::LiteSession::CreateSession(ctx);
auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
int ret = session->CompileGraph(model); // Compile Graph
}
```
2. Convert the input image into the Tensor format of the MindSpore model.
Convert the image data to be detected into the Tensor format that is fed to the MindSpore model.
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
LiteMat lite_mat_bgr,lite_norm_mat_cut;
if (!BitmapToLiteMat(env, srcBitmap, lite_mat_bgr)){
MS_PRINT("BitmapToLiteMat error");
return NULL;
@ -220,24 +229,24 @@ target_link_libraries(
return NULL;
}
MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);
auto mSession = labelNet->session;
if (mSession == nullptr) {
MS_PRINT("MindSpore error, Session is a nullptr.");
return NULL;
}
MS_PRINT("MindSpore get session.");
auto msInputs = mSession->GetInputs();
auto inTensor = msInputs.front();
float *dataHWC = reinterpret_cast<float *>(lite_norm_mat_cut.data_ptr_);
// copy input Tensor
memcpy(inTensor->MutableData(), dataHWC,
inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
delete[] (dataHWC);
```
3. Before model inference, the input tensor must be in NHWC format with shape 1:300:300:3 and RGB channel order; normalize the input tensor as follows.
```cpp
@ -255,15 +264,15 @@ target_link_libraries(
MS_PRINT("ConvertTo error");
return false;
}
float means[3] = {0.485, 0.456, 0.406};
float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
SubStractMeanNormalize(lite_mat_convert_float, lite_norm_mat_cut, means, vars);
return true;
}
```
4. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
@ -273,6 +282,7 @@ target_link_libraries(
```
- Obtain the output data.
```cpp
auto names = mSession->GetOutputTensorNames();
typedef std::unordered_map<std::string,
@ -285,15 +295,15 @@ target_link_libraries(
}
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
- The model has two outputs. Output 1 is the category confidence of each object, with dimensions 1:1917:81; output 2 is the rectangle coordinate offsets of each object, with dimensions 1:1917:4. To obtain the actual rectangles, the box positions must be computed from the offsets; this is implemented in `getDefaultBoxes`.
```cpp
void SSDModelUtil::getDefaultBoxes() {
float fk[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
std::vector<struct WHBox> all_sizes;
struct Product mProductData[19 * 19] = {0};
for (int i = 0; i < 6; i++) {
fk[i] = config.model_input_height / config.steps[i];
}
@ -303,36 +313,36 @@ target_link_libraries(
for (int i = 0; i < sizeof(config.num_default) / sizeof(int); i++) {
scales[i] = config.min_scale + scale_rate * i;
}
for (int idex = 0; idex < sizeof(config.feature_size) / sizeof(int); idex++) {
float sk1 = scales[idex];
float sk2 = scales[idex + 1];
float sk3 = sqrt(sk1 * sk2);
struct WHBox tempWHBox;
all_sizes.clear();
if (idex == 0) {
float w = sk1 * sqrt(2);
float h = sk1 / sqrt(2);
tempWHBox.boxw = 0.1;
tempWHBox.boxh = 0.1;
all_sizes.push_back(tempWHBox);
tempWHBox.boxw = w;
tempWHBox.boxh = h;
all_sizes.push_back(tempWHBox);
tempWHBox.boxw = h;
tempWHBox.boxh = w;
all_sizes.push_back(tempWHBox);
} else {
tempWHBox.boxw = sk1;
tempWHBox.boxh = sk1;
all_sizes.push_back(tempWHBox);
for (int j = 0; j < sizeof(config.aspect_ratios[idex]) / sizeof(int); j++) {
float w = sk1 * sqrt(config.aspect_ratios[idex][j]);
float h = sk1 / sqrt(config.aspect_ratios[idex][j]);
@ -343,21 +353,21 @@ target_link_libraries(
tempWHBox.boxh = w;
all_sizes.push_back(tempWHBox);
}
tempWHBox.boxw = sk3;
tempWHBox.boxh = sk3;
all_sizes.push_back(tempWHBox);
}
for (int i = 0; i < config.feature_size[idex]; i++) {
for (int j = 0; j < config.feature_size[idex]; j++) {
mProductData[i * config.feature_size[idex] + j].x = i;
mProductData[i * config.feature_size[idex] + j].y = j;
}
}
int productLen = config.feature_size[idex] * config.feature_size[idex];
for (int i = 0; i < productLen; i++) {
for (int j = 0; j < all_sizes.size(); j++) {
struct NormalBox tempBox;
@ -373,9 +383,9 @@ target_link_libraries(
}
}
```
- Use non-maximum suppression (NMS) to keep the outputs with high category confidence; a sketch of the IoU test this step relies on follows the snippet below.
```cpp
void SSDModelUtil::nonMaximumSuppression(const YXBoxes *const decoded_boxes,
const float *const scores,
@ -402,9 +412,9 @@ target_link_libraries(
}
}
```
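The IoU computation that the suppression loop relies on is elided above. Here is a self-contained sketch; the struct layout is illustrative, not the demo's exact `YXBoxes` definition.

```cpp
#include <algorithm>

// Corner-form box (layout assumed for illustration).
struct Box { float ymin, xmin, ymax, xmax; };

// Intersection-over-union of two boxes. NMS drops a candidate whose IoU with an
// already-kept, higher-scoring box exceeds the threshold (0.3 in this demo).
float IoU(const Box &a, const Box &b) {
  float ih = std::min(a.ymax, b.ymax) - std::max(a.ymin, b.ymin);
  float iw = std::min(a.xmax, b.xmax) - std::max(a.xmin, b.xmin);
  if (ih <= 0.0f || iw <= 0.0f) return 0.0f;  // no overlap
  float inter = ih * iw;
  float area_a = (a.ymax - a.ymin) * (a.xmax - a.xmin);
  float area_b = (b.ymax - b.ymin) * (b.xmax - b.xmin);
  return inter / (area_a + area_b - inter);
}
```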
- For each class, after the rectangles whose probability exceeds the threshold are selected by the NMS algorithm, the output rectangles must be mapped back to the original image size.
```cpp
std::string SSDModelUtil::getDecodeResult(float *branchScores, float *branchBoxData) {
std::string result = "";
@ -414,7 +424,7 @@ target_link_libraries(
float scoreWithOneClass[1917] = {0};
int outBoxNum = 0;
YXBoxes decodedBoxes[1917] = {0};
// Copy branch outputs box data to tmpBox.
for (int i = 0; i < 1917; ++i) {
tmpBox[i].y = branchBoxData[i * 4 + 0];
@ -422,14 +432,14 @@ target_link_libraries(
tmpBox[i].h = branchBoxData[i * 4 + 2];
tmpBox[i].w = branchBoxData[i * 4 + 3];
}
// Copy branch outputs score to mScores.
for (int i = 0; i < 1917; ++i) {
for (int j = 0; j < 81; ++j) {
mScores[i][j] = branchScores[i * 81 + j];
}
}
ssd_boxes_decode(tmpBox, decodedBoxes);
const float nms_threshold = 0.3;
for (int i = 1; i < 81; i++) {
@ -496,5 +506,3 @@ target_link_libraries(
return result;
}
```


@ -0,0 +1,386 @@
# MindSpore Lite Skeleton Detection Demo (Android)

This sample program demonstrates how to use the MindSpore Lite API and a MindSpore Lite skeleton detection model to perform on-device inference, detect the content captured by the device camera, and display continuous detection results on the application's image preview screen.
## Running Dependencies
- Android Studio 3.2 or later (version 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android SDK 26 or later
## Building and Running
1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio installs it automatically.)
![start_home](images/home.png)
After Android Studio starts, click `File->Settings->System Settings->Android SDK` and select the corresponding SDK. As shown in the figure below, select an SDK and click `OK`; Android Studio then installs the SDK automatically.
![start_sdk](images/sdk_management.png)
If any Android Studio configuration problem occurs during use, refer to the troubleshooting table in item 4 to resolve it.
2. Connect to an Android device and run the skeleton detection application.
Connect the Android device via USB for debugging and click `Run 'app'` to run the sample project on your device.
> During the build, Android Studio automatically downloads MindSpore Lite, the model file, and other dependencies; please wait patiently for the build to complete.
![run_app](images/run_app.PNG)
For details about how to connect an Android device for debugging in Android Studio, see <https://developer.android.com/studio/run/device?hl=zh-cn>.
3. On the Android device, tap "Continue installation". After the installation is complete, you can view the content captured by the device camera and the inference result.
![install](images/install.jpg)
The output of the skeleton detection model is shown below:
The blue points mark the detected facial features and the skeleton trends of the upper and lower limbs. The confidence score of this inference is 0.98/1, and the inference latency is 66.77 ms.
![sult](images/posenet_detection.png)
4. Solutions to Android Studio configuration problems:

| | Warning | Solution |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | Gradle sync failed: NDK not configured. | Specify the installed NDK directory in `local.properties`: `ndk.dir={NDK installation directory}` |
| 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) and specify its directory in `Project Structure` - `Android NDK location`. You can refer to the figure below. |
| 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Update Android Studio via `Help` > `Check for Updates`. |
| 4 | SSL peer shut down incorrectly | Rebuild the project. |
![project_structure](images/project_structure.png)
## Detailed Description of the Sample Program
The skeleton detection Android sample program uses the Android Camera 2 API at the Java layer to enable the camera to obtain image frames and perform the corresponding image processing; the model inference process is completed in [Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html).
### Sample Program Structure
```text
├── app
│   ├── build.gradle # Other Android configuration file
│   ├── download.gradle # Automatically downloads the dependent library files and model file from the Huawei server during app building
│   ├── proguard-rules.pro
│   └── src
│       ├── main
│       │   ├── AndroidManifest.xml # Android configuration file
│       │   ├── java # Application code at the Java layer
│       │   │   └── com
│       │   │       └── mindspore
│       │   │           └── posenetdemo # Image processing and inference flow implementation
│       │   │               ├── CameraDataDealListener.java
│       │   │               ├── ImageUtils.java
│       │   │               ├── MainActivity.java
│       │   │               ├── PoseNetFragment.java
│       │   │               ├── Posenet.java # Inference implementation
│       │   │               └── TestActivity.java
│       │   └── res # Resource files related to Android
│       └── test
└── ...
```
### Downloading and Deploying a Model File
Download the model file from the MindSpore Model Hub. The skeleton detection model used in this sample program is `posenet_model.ms`, which is automatically downloaded by the `download.gradle` script during app building and placed in the `app/src/main/assets` project directory.
> If the download fails, manually download the model file posenet_model.ms [Download link](https://download.mindspore.cn/model_zoo/official/lite/posenet_lite/posenet_model.ms).
### Compiling On-Device Inference Code
In the skeleton detection demo, the Java API is used to implement on-device inference. Compared with the C++ API, the Java API can be called directly in Java classes without implementing JNI-layer code, and is therefore more convenient.
- The demo implements skeleton detection by identifying body features such as the nose and eyes, obtaining the positions of these features, and computing confidence scores for the results.
```java
public enum BodyPart {
NOSE,
LEFT_EYE,
RIGHT_EYE,
LEFT_EAR,
RIGHT_EAR,
LEFT_SHOULDER,
RIGHT_SHOULDER,
LEFT_ELBOW,
RIGHT_ELBOW,
LEFT_WRIST,
RIGHT_WRIST,
LEFT_HIP,
RIGHT_HIP,
LEFT_KNEE,
RIGHT_KNEE,
LEFT_ANKLE,
RIGHT_ANKLE
}
public class Position {
int x;
int y;
}
public class KeyPoint {
BodyPart bodyPart = BodyPart.NOSE;
Position position = new Position();
float score = 0.0f;
}
public class Person {
List<KeyPoint> keyPoints;
float score = 0.0f;
}
```
The inference code flow of the skeleton detection demo is as follows. For details about the complete code, see `src/main/java/com/mindspore/posenetdemo/Posenet.java`.
1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
- Load the model: read the MindSpore Lite model from the file system and parse it.
```java
// Load the .ms model.
model = new Model();
if (!model.loadModel(mContext, "posenet_model.ms")) {
Log.e("MS_LITE", "Load Model failed");
return false;
}
```
- Create a configuration context: create the configuration context `MSConfig` to save some basic configuration parameters required by the session, which guide graph compilation and graph execution.
```java
// Create and init config.
msConfig = new MSConfig();
if (!msConfig.init(DeviceType.DT_CPU, NUM_THREADS, CpuBindMode.MID_CPU)) {
Log.e("MS_LITE", "Init context failed");
return false;
}
```
- Create a session: create `LiteSession` and call the `init` method to configure the `MSConfig` obtained in the previous step into the session.
```java
// Create the MindSpore lite session.
session = new LiteSession();
if (!session.init(msConfig)) {
Log.e("MS_LITE", "Create session failed");
msConfig.free();
return false;
}
msConfig.free();
```
- Load the model file and build a computational graph for inference.
```java
// Compile graph.
if (!session.compileGraph(model)) {
Log.e("MS_LITE", "Compile graph failed");
model.freeBuffer();
return false;
}
// Note: after model.freeBuffer() is called, the model cannot be compiled into a graph again.
model.freeBuffer();
```
2. Input data: the Java API currently supports two data types, `byte[]` and `ByteBuffer`, for setting the input tensor's data.
- Before setting the input data, the Bitmap that stores the image information must be interpreted, analyzed, and converted.
```java
/**
* Scale the image to a byteBuffer of [-1,1] values.
*/
private ByteBuffer initInputArray(Bitmap bitmap) {
final int bytesPerChannel = 4;
final int inputChannels = 3;
final int batchSize = 1;
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(
batchSize * bytesPerChannel * bitmap.getHeight() * bitmap.getWidth() * inputChannels
);
inputBuffer.order(ByteOrder.nativeOrder());
inputBuffer.rewind();
final float mean = 128.0f;
final float std = 128.0f;
int[] intValues = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
int pixel = 0;
for (int y = 0; y < bitmap.getHeight(); y++) {
for (int x = 0; x < bitmap.getWidth(); x++) {
int value = intValues[pixel++];
inputBuffer.putFloat(((float) (value >> 16 & 0xFF) - mean) / std);
inputBuffer.putFloat(((float) (value >> 8 & 0xFF) - mean) / std);
inputBuffer.putFloat(((float) (value & 0xFF) - mean) / std);
}
}
return inputBuffer;
}
```
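With the mean and standard deviation of 128 used above, each 8-bit RGB channel value is mapped from [0, 255] into approximately [-1, 1]:

```latex
x_{\text{norm}} = \frac{x - 128}{128}, \quad x \in [0, 255] \;\Rightarrow\; x_{\text{norm}} \in [-1, 1)
```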
- Feed the input data through `ByteBuffer`.
```java
long estimationStartTimeNanos = SystemClock.elapsedRealtimeNanos();
ByteBuffer inputArray = this.initInputArray(bitmap);
List<MSTensor> inputs = session.getInputs();
if (inputs.size() != 1) {
return null;
}
Log.i("posenet", String.format("Scaling to [-1,1] took %.2f ms",
1.0f * (SystemClock.elapsedRealtimeNanos() - estimationStartTimeNanos) / 1_000_000));
MSTensor inTensor = inputs.get(0);
inTensor.setData(inputArray);
long inferenceStartTimeNanos = SystemClock.elapsedRealtimeNanos();
```
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Use `runGraph` to perform model inference.
```java
// Run graph to infer results.
if (!session.runGraph()) {
Log.e("MS_LITE", "Run graph failed");
return null;
}
lastInferenceTimeNanos = SystemClock.elapsedRealtimeNanos() - inferenceStartTimeNanos;
Log.i(
"posenet",
String.format("Interpreter took %.2f ms", 1.0f * lastInferenceTimeNanos / 1_000_000)
);
```
- Obtain the inference result through the output tensors.
```java
// Get output tensor values.
List<MSTensor> heatmaps_list = session.getOutputsByNodeName("Conv2D-27");
if (heatmaps_list == null) {
return null;
}
MSTensor heatmaps_tensors = heatmaps_list.get(0);
float[] heatmaps_results = heatmaps_tensors.getFloatData();
int[] heatmapsShape = heatmaps_tensors.getShape(); //1, 9, 9 ,17
float[][][][] heatmaps = new float[heatmapsShape[0]][][][];
for (int x = 0; x < heatmapsShape[0]; x++) { // heatmapsShape[0] =1
float[][][] arrayThree = new float[heatmapsShape[1]][][];
for (int y = 0; y < heatmapsShape[1]; y++) { // heatmapsShape[1] = 9
float[][] arrayTwo = new float[heatmapsShape[2]][];
for (int z = 0; z < heatmapsShape[2]; z++) { //heatmapsShape[2] = 9
float[] arrayOne = new float[heatmapsShape[3]]; //heatmapsShape[3] = 17
for (int i = 0; i < heatmapsShape[3]; i++) {
int n = i + z * heatmapsShape[3] + y * heatmapsShape[2] * heatmapsShape[3] + x * heatmapsShape[1] * heatmapsShape[2] * heatmapsShape[3];
arrayOne[i] = heatmaps_results[n]; // flattened index into the 1*9*9*17 output
}
arrayTwo[z] = arrayOne;
}
arrayThree[y] = arrayTwo;
}
heatmaps[x] = arrayThree;
}
List<MSTensor> offsets_list = session.getOutputsByNodeName("Conv2D-28");
if (offsets_list == null) {
return null;
}
MSTensor offsets_tensors = offsets_list.get(0);
float[] offsets_results = offsets_tensors.getFloatData();
int[] offsetsShapes = offsets_tensors.getShape();
float[][][][] offsets = new float[offsetsShapes[0]][][][];
for (int x = 0; x < offsetsShapes[0]; x++) {
float[][][] offsets_arrayThree = new float[offsetsShapes[1]][][];
for (int y = 0; y < offsetsShapes[1]; y++) {
float[][] offsets_arrayTwo = new float[offsetsShapes[2]][];
for (int z = 0; z < offsetsShapes[2]; z++) {
float[] offsets_arrayOne = new float[offsetsShapes[3]];
for (int i = 0; i < offsetsShapes[3]; i++) {
int n = i + z * offsetsShapes[3] + y * offsetsShapes[2] * offsetsShapes[3] + x * offsetsShapes[1] * offsetsShapes[2] * offsetsShapes[3];
offsets_arrayOne[i] = offsets_results[n];
}
offsets_arrayTwo[z] = offsets_arrayOne;
}
offsets_arrayThree[y] = offsets_arrayTwo;
}
offsets[x] = offsets_arrayThree;
}
```
- Process the data of the output nodes to obtain the skeleton detection demo's return value `person` and implement the feature.
In `Conv2D-27`, `heatmaps` stores three parameters, `height`, `width`, and `numKeypoints`, which are used to compute the `keypointPosition` location information.
In `Conv2D-28`, `offsets` represents the offsets of the position coordinates; combining them with `keypointPosition` yields the `confidenceScores` confidence values, which are used to judge the model inference results.
`person.keyPoints` and `person.score` are then obtained from `keypointPosition` and `confidenceScores`, giving the model's return value `person`.
```java
int height = ((Object[]) heatmaps[0]).length; //9
int width = ((Object[]) heatmaps[0][0]).length; //9
int numKeypoints = heatmaps[0][0][0].length; //17
// Finds the (row, col) locations of where the keypoints are most likely to be.
Pair[] keypointPositions = new Pair[numKeypoints];
for (int i = 0; i < numKeypoints; i++) {
keypointPositions[i] = new Pair(0, 0);
}
for (int keypoint = 0; keypoint < numKeypoints; keypoint++) {
float maxVal = heatmaps[0][0][0][keypoint];
int maxRow = 0;
int maxCol = 0;
for (int row = 0; row < height; row++) {
for (int col = 0; col < width; col++) {
if (heatmaps[0][row][col][keypoint] > maxVal) {
maxVal = heatmaps[0][row][col][keypoint];
maxRow = row;
maxCol = col;
}
}
}
keypointPositions[keypoint] = new Pair(maxRow, maxCol);
}
// Calculating the x and y coordinates of the keypoints with offset adjustment.
int[] xCoords = new int[numKeypoints];
int[] yCoords = new int[numKeypoints];
float[] confidenceScores = new float[numKeypoints];
for (int i = 0; i < keypointPositions.length; i++) {
Pair position = keypointPositions[i];
int positionY = (int) position.first;
int positionX = (int) position.second;
yCoords[i] = (int) ((float) positionY / (float) (height - 1) * bitmap.getHeight() + offsets[0][positionY][positionX][i]);
xCoords[i] = (int) ((float) positionX / (float) (width - 1) * bitmap.getWidth() + offsets[0][positionY][positionX][i + numKeypoints]);
confidenceScores[i] = sigmoid(heatmaps[0][positionY][positionX][i]);
}
Person person = new Person();
KeyPoint[] keypointList = new KeyPoint[numKeypoints];
for (int i = 0; i < numKeypoints; i++) {
keypointList[i] = new KeyPoint();
}
float totalScore = 0.0f;
for (int i = 0; i < keypointList.length; i++) {
keypointList[i].position.x = xCoords[i];
keypointList[i].position.y = yCoords[i];
keypointList[i].score = confidenceScores[i];
totalScore += confidenceScores[i];
}
person.keyPoints = Arrays.asList(keypointList);
person.score = totalScore / numKeypoints;
return person;
```
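The `sigmoid` helper applied to the raw heatmap values above (to obtain `confidenceScores`) is not shown in this excerpt; it is the standard logistic function:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}}
```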

Six binary image files (the figures referenced by the READMEs above) are added in this commit and are not shown here.