forked from mindspore-Ecosystem/mindspore
!10301 Add english version
From: @liuxiao78 Reviewed-by: @hangangqiang,@zhanghaibo5 Signed-off-by: @hangangqiang
This commit is contained in:
commit e06048bfe0
@ -1,15 +1,15 @@
-## MindSpore Lite On-Device Object Detection Demo (Android)
+# MindSpore Lite On-Device Object Detection Demo (Android)

This sample application demonstrates how to use the MindSpore Lite C++ API (Android JNI) and a MindSpore Lite object detection model to perform on-device inference, detect the content captured from the gallery or the device camera, and display the continuous object detection result on the image preview screen of the app.

-### Running Dependencies
+## Running Dependencies

- Android Studio >= 3.2 (version 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android SDK >= 26

-### Building and Running
+## Building and Running

1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)
@ -0,0 +1,388 @@
# MindSpore Lite Skeleton Detection Demo (Android)

This sample application demonstrates how to use the MindSpore Lite API and a skeleton detection model to perform on-device inference, detect the content captured by the device camera, and display the continuous skeleton detection result on the image preview screen of the app.

## Running Dependencies

- Android Studio 3.2 or later (version 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android software development kit (SDK) 26 or later

## Building and Running

1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

    ![start_home](images/home.png)

    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.

    ![start_sdk](images/sdk_management.png)

    If an Android Studio configuration error occurs, refer to the solution table in item 4 below.
2. Connect to an Android device and run the skeleton detection sample application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    > During the building, Android Studio automatically downloads dependencies related to MindSpore Lite and model files. Please wait.

    ![run_app](images/run_app.PNG)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.

3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

    ![install](images/install.jpg)

    The following figure shows the output of the skeleton detection model.

    The blue points mark facial features and the movement trends of the limb bones. The confidence score of this inference is 0.98/1, and the inference delay is 66.77 ms.

    ![sult](images/posenet_detection.png)
4. The following table lists solutions to Android Studio configuration errors.

    | | Error | Solution |
    | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | 1 | Gradle sync failed: NDK not configured. | Specify the NDK installation directory in the local.properties file: ndk.dir={NDK installation directory} |
    | 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify the SDK location in the `Android NDK location` field (see the following figure). |
    | 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Choose `Help` > `Checkout for Updates` on the toolbar to update the version. |
    | 4 | SSL peer shut down incorrectly | Rebuild. |

    ![project_structure](images/project_structure.png)
## Detailed Description of the Sample Application

The skeleton detection sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, and uses [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) to complete model inference.

### Sample Application Structure
```text
├── app
│   ├── build.gradle            # Android build configuration file.
│   ├── download.gradle         # During app building, this .gradle file automatically downloads the dependent library files and model files from the Huawei server.
│   ├── proguard-rules.pro
│   └── src
│       ├── main
│       │   ├── AndroidManifest.xml # Android configuration file.
│       │   ├── java            # Application code at the Java layer.
│       │   │   └── com
│       │   │       └── mindspore
│       │   │           └── posenetdemo # Image processing and inference process implementation.
│       │   │               ├── CameraDataDealListener.java
│       │   │               ├── ImageUtils.java
│       │   │               ├── MainActivity.java
│       │   │               ├── PoseNetFragment.java
│       │   │               ├── Posenet.java # Model loading and inference implementation.
│       │   │               └── TestActivity.java
│       │   └── res             # Resource files related to Android.
│       └── test
└── ...
```
### Downloading and Deploying the Model File

Download the model file from MindSpore Model Hub. The skeleton detection model file used in this sample application is `posenet_model.ms`, which is automatically downloaded during app building using the `download.gradle` script and stored in the `app/src/main/assets` project directory.

> If the download fails, manually download the model file [posenet_model.ms](https://download.mindspore.cn/model_zoo/official/lite/posenet_lite/posenet_model.ms).

### Writing On-Device Inference Code

In the skeleton detection demo, the Java API is used to implement on-device inference. Compared with the C++ API, the Java API can be called directly from a Java class without implementing the related code at the JNI layer, and is therefore more convenient.
- The following example identifies body features such as the nose and eyes, obtains their locations, and calculates the confidence score to implement skeleton detection; a hypothetical usage sketch follows the definitions.

    ```java
    public enum BodyPart {
        NOSE,
        LEFT_EYE,
        RIGHT_EYE,
        LEFT_EAR,
        RIGHT_EAR,
        LEFT_SHOULDER,
        RIGHT_SHOULDER,
        LEFT_ELBOW,
        RIGHT_ELBOW,
        LEFT_WRIST,
        RIGHT_WRIST,
        LEFT_HIP,
        RIGHT_HIP,
        LEFT_KNEE,
        RIGHT_KNEE,
        LEFT_ANKLE,
        RIGHT_ANKLE
    }

    public class Position {
        int x;
        int y;
    }

    public class KeyPoint {
        BodyPart bodyPart = BodyPart.NOSE;
        Position position = new Position();
        float score = 0.0f;
    }

    public class Person {
        List<KeyPoint> keyPoints;
        float score = 0.0f;
    }
    ```
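    For illustration, a hypothetical consumer of these structures might filter keypoints by confidence before drawing them. This is only a sketch: `Canvas` and `Paint` come from `android.graphics`, and the threshold value is an assumption, not part of the demo.

    ```java
    // Hypothetical usage sketch: draw only the keypoints whose confidence
    // exceeds a threshold. The canvas and paint objects are assumed to be
    // provided by the surrounding Android drawing code.
    private static final float MIN_CONFIDENCE = 0.5f;

    void drawPerson(Person person, Canvas canvas, Paint paint) {
        if (person.score < MIN_CONFIDENCE) {
            return; // Skip low-confidence detections entirely.
        }
        for (KeyPoint keyPoint : person.keyPoints) {
            if (keyPoint.score >= MIN_CONFIDENCE) {
                canvas.drawCircle(keyPoint.position.x, keyPoint.position.y, 8.0f, paint);
            }
        }
    }
    ```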
The inference process of the skeleton detection demo is as follows. For details about the complete code, see `src/main/java/com/mindspore/posenetdemo/Posenet.java`.

1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
    - Loading a model: Read a MindSpore Lite model from the file system and parse it.

        ```java
        // Load the .ms model.
        model = new Model();
        if (!model.loadModel(mContext, "posenet_model.ms")) {
            Log.e("MS_LITE", "Load Model failed");
            return false;
        }
        ```
    - Creating a configuration context: Create the configuration context `MSConfig` and save some basic configuration parameters required by the session for guiding graph building and execution.

        ```java
        // Create and init config.
        msConfig = new MSConfig();
        if (!msConfig.init(DeviceType.DT_CPU, NUM_THREADS, CpuBindMode.MID_CPU)) {
            Log.e("MS_LITE", "Init context failed");
            return false;
        }
        ```
    - Creating a session: Create `LiteSession` and call the `init` method to configure the session with the `MSConfig` obtained in the previous step.

        ```java
        // Create the MindSpore lite session.
        session = new LiteSession();
        if (!session.init(msConfig)) {
            Log.e("MS_LITE", "Create session failed");
            msConfig.free();
            return false;
        }
        msConfig.free();
        ```
    - Load the model file and build a computational graph for inference.

        ```java
        // Compile graph.
        if (!session.compileGraph(model)) {
            Log.e("MS_LITE", "Compile graph failed");
            model.freeBuffer();
            return false;
        }

        // Note: after model.freeBuffer() is called, the model cannot be compiled again.
        model.freeBuffer();
        ```
2. Input data. Currently, Java supports two types of data: `byte[]` and `ByteBuffer`. Set the data of the input tensor.

    - Before data is input, the bitmap that stores the image information needs to be interpreted, analyzed, and converted.

        ```java
        /**
         * Scale the image to a byteBuffer of [-1,1] values.
         */
        private ByteBuffer initInputArray(Bitmap bitmap) {
            final int bytesPerChannel = 4;
            final int inputChannels = 3;
            final int batchSize = 1;
            ByteBuffer inputBuffer = ByteBuffer.allocateDirect(
                    batchSize * bytesPerChannel * bitmap.getHeight() * bitmap.getWidth() * inputChannels
            );
            inputBuffer.order(ByteOrder.nativeOrder());
            inputBuffer.rewind();

            final float mean = 128.0f;
            final float std = 128.0f;
            int[] intValues = new int[bitmap.getWidth() * bitmap.getHeight()];
            bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());

            int pixel = 0;
            for (int y = 0; y < bitmap.getHeight(); y++) {
                for (int x = 0; x < bitmap.getWidth(); x++) {
                    int value = intValues[pixel++];
                    inputBuffer.putFloat(((float) (value >> 16 & 0xFF) - mean) / std);
                    inputBuffer.putFloat(((float) (value >> 8 & 0xFF) - mean) / std);
                    inputBuffer.putFloat(((float) (value & 0xFF) - mean) / std);
                }
            }
            return inputBuffer;
        }
        ```
    - Input data through `ByteBuffer`.

        ```java
        long estimationStartTimeNanos = SystemClock.elapsedRealtimeNanos();
        ByteBuffer inputArray = this.initInputArray(bitmap);
        List<MSTensor> inputs = session.getInputs();
        if (inputs.size() != 1) {
            return null;
        }

        Log.i("posenet", String.format("Scaling to [-1,1] took %.2f ms",
                1.0f * (SystemClock.elapsedRealtimeNanos() - estimationStartTimeNanos) / 1_000_000));

        MSTensor inTensor = inputs.get(0);
        inTensor.setData(inputArray);
        long inferenceStartTimeNanos = SystemClock.elapsedRealtimeNanos();
        ```
3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.

    - Use `runGraph` for model inference.

        ```java
        // Run graph to infer results.
        if (!session.runGraph()) {
            Log.e("MS_LITE", "Run graph failed");
            return null;
        }

        lastInferenceTimeNanos = SystemClock.elapsedRealtimeNanos() - inferenceStartTimeNanos;
        Log.i(
                "posenet",
                String.format("Interpreter took %.2f ms", 1.0f * lastInferenceTimeNanos / 1_000_000)
        );
        ```
    - Obtain the inference result from the output tensor.

        ```java
        // Get output tensor values.
        List<MSTensor> heatmaps_list = session.getOutputsByNodeName("Conv2D-27");
        if (heatmaps_list == null) {
            return null;
        }
        MSTensor heatmaps_tensors = heatmaps_list.get(0);

        float[] heatmaps_results = heatmaps_tensors.getFloatData();
        int[] heatmapsShape = heatmaps_tensors.getShape();  // 1, 9, 9, 17

        float[][][][] heatmaps = new float[heatmapsShape[0]][][][];
        for (int x = 0; x < heatmapsShape[0]; x++) {  // heatmapsShape[0] = 1
            float[][][] arrayThree = new float[heatmapsShape[1]][][];
            for (int y = 0; y < heatmapsShape[1]; y++) {  // heatmapsShape[1] = 9
                float[][] arrayTwo = new float[heatmapsShape[2]][];
                for (int z = 0; z < heatmapsShape[2]; z++) {  // heatmapsShape[2] = 9
                    float[] arrayOne = new float[heatmapsShape[3]];  // heatmapsShape[3] = 17
                    for (int i = 0; i < heatmapsShape[3]; i++) {
                        int n = i + z * heatmapsShape[3] + y * heatmapsShape[2] * heatmapsShape[3] + x * heatmapsShape[1] * heatmapsShape[2] * heatmapsShape[3];
                        arrayOne[i] = heatmaps_results[n];  // Flattened index into the 1*9*9*17 output.
                    }
                    arrayTwo[z] = arrayOne;
                }
                arrayThree[y] = arrayTwo;
            }
            heatmaps[x] = arrayThree;
        }
        ```
        ```java
        List<MSTensor> offsets_list = session.getOutputsByNodeName("Conv2D-28");
        if (offsets_list == null) {
            return null;
        }
        MSTensor offsets_tensors = offsets_list.get(0);
        float[] offsets_results = offsets_tensors.getFloatData();
        int[] offsetsShapes = offsets_tensors.getShape();

        float[][][][] offsets = new float[offsetsShapes[0]][][][];
        for (int x = 0; x < offsetsShapes[0]; x++) {
            float[][][] offsets_arrayThree = new float[offsetsShapes[1]][][];
            for (int y = 0; y < offsetsShapes[1]; y++) {
                float[][] offsets_arrayTwo = new float[offsetsShapes[2]][];
                for (int z = 0; z < offsetsShapes[2]; z++) {
                    float[] offsets_arrayOne = new float[offsetsShapes[3]];
                    for (int i = 0; i < offsetsShapes[3]; i++) {
                        int n = i + z * offsetsShapes[3] + y * offsetsShapes[2] * offsetsShapes[3] + x * offsetsShapes[1] * offsetsShapes[2] * offsetsShapes[3];
                        offsets_arrayOne[i] = offsets_results[n];
                    }
                    offsets_arrayTwo[z] = offsets_arrayOne;
                }
                offsets_arrayThree[y] = offsets_arrayTwo;
            }
            offsets[x] = offsets_arrayThree;
        }
        ```
    - Process the output node data to obtain the return value `person` of the skeleton detection demo.

        In `Conv2D-27`, the `height`, `width`, and `numKeypoints` parameters stored in `heatmaps` can be used to obtain the `keypointPosition` information.

        In `Conv2D-28`, `offsets` indicates the position coordinate offsets, which can be used together with `keypointPosition` to obtain `confidenceScores` and determine the model inference result.

        Use `keypointPosition` and `confidenceScores` to compute `person.keyPoints` and `person.score`, and obtain the model's return value `person`.
        ```java
        int height = ((Object[]) heatmaps[0]).length;  // 9
        int width = ((Object[]) heatmaps[0][0]).length;  // 9
        int numKeypoints = heatmaps[0][0][0].length;  // 17

        // Finds the (row, col) locations of where the keypoints are most likely to be.
        Pair[] keypointPositions = new Pair[numKeypoints];
        for (int i = 0; i < numKeypoints; i++) {
            keypointPositions[i] = new Pair(0, 0);
        }

        for (int keypoint = 0; keypoint < numKeypoints; keypoint++) {
            float maxVal = heatmaps[0][0][0][keypoint];
            int maxRow = 0;
            int maxCol = 0;
            for (int row = 0; row < height; row++) {
                for (int col = 0; col < width; col++) {
                    if (heatmaps[0][row][col][keypoint] > maxVal) {
                        maxVal = heatmaps[0][row][col][keypoint];
                        maxRow = row;
                        maxCol = col;
                    }
                }
            }
            keypointPositions[keypoint] = new Pair(maxRow, maxCol);
        }

        // Calculating the x and y coordinates of the keypoints with offset adjustment.
        int[] xCoords = new int[numKeypoints];
        int[] yCoords = new int[numKeypoints];
        float[] confidenceScores = new float[numKeypoints];
        for (int i = 0; i < keypointPositions.length; i++) {
            Pair position = keypointPositions[i];
            int positionY = (int) position.first;
            int positionX = (int) position.second;

            yCoords[i] = (int) ((float) positionY / (float) (height - 1) * bitmap.getHeight() + offsets[0][positionY][positionX][i]);
            xCoords[i] = (int) ((float) positionX / (float) (width - 1) * bitmap.getWidth() + offsets[0][positionY][positionX][i + numKeypoints]);
            confidenceScores[i] = sigmoid(heatmaps[0][positionY][positionX][i]);
        }

        Person person = new Person();
        KeyPoint[] keypointList = new KeyPoint[numKeypoints];
        for (int i = 0; i < numKeypoints; i++) {
            keypointList[i] = new KeyPoint();
        }

        float totalScore = 0.0f;
        for (int i = 0; i < keypointList.length; i++) {
            keypointList[i].position.x = xCoords[i];
            keypointList[i].position.y = yCoords[i];
            keypointList[i].score = confidenceScores[i];
            totalScore += confidenceScores[i];
        }
        person.keyPoints = Arrays.asList(keypointList);
        person.score = totalScore / numKeypoints;

        return person;
        ```
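        The `sigmoid` helper called above is outside this excerpt; a minimal sketch of the standard logistic function it presumably implements:

        ```java
        // Standard logistic function: maps a raw heatmap score to a
        // confidence value in (0, 1).
        private float sigmoid(float x) {
            return (float) (1.0f / (1.0f + Math.exp(-x)));
        }
        ```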
@ -0,0 +1,289 @@
# MindSpore Lite Scene Detection Demo (Android)

This sample application demonstrates how to use the MindSpore Lite C++ API (Android JNI) and a MindSpore Lite scene detection model to perform on-device inference, detect the content captured by the device camera, and display the continuous scene detection result on the image preview screen of the app.

## Running Dependencies

- Android Studio 3.2 or later (version 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android software development kit (SDK) 26 or later

## Building and Running

1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

    ![start_home](images/home.png)

    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.

    ![start_sdk](images/sdk_management.png)

    If an Android Studio configuration error occurs, solve it by referring to the following table.
    | | Error | Solution |
    | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | 1 | Gradle sync failed: NDK not configured. | Specify the NDK installation directory in the local.properties file: ndk.dir={NDK installation directory} |
    | 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify the SDK location in the `Android NDK location` field (see the following figure). |
    | 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Choose `Help` > `Checkout for Updates` on the toolbar to update the version. |
    | 4 | SSL peer shut down incorrectly | Rebuild. |

    ![project_structure](images/project_structure.png)
2. Connect to an Android device and run the scene detection sample application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    > During the building, Android Studio automatically downloads dependencies related to MindSpore Lite and model files. Please wait.

    ![run_app](images/run_app.PNG)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.

3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

    ![install](images/install.jpg)
## Detailed Description of the Sample Application

The scene detection sample application on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images (drawing frames based on the inference result). At the JNI layer, the model inference process is completed in [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html).

> The following describes the JNI layer implementation of the sample application. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic knowledge of Android development.

### Sample Application Structure
```text
app
├── libs                          # Library files built by the demo JNI layer
│   └── arm64-v8a
│       └── libmlkit-label-MS.so  # Library file built by the demo JNI layer
│
├── src/main
│   ├── assets                    # Resource file
│   │   └── mobilenetv2.ms        # Model file
│   │
│   ├── cpp                       # Main logic encapsulation classes for model loading and prediction
│   │   ├── mindspore-lite-x.x.x-mindata-arm64-cpu # Calling package built from the MindSpore source code, including the library files and related header files on which the demo JNI layer depends
│   │   │   └── ...
│   │   │
│   │   ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│   ├── java                      # Application code at the Java layer
│   │   └── com.huawei.himindsporedemo
│   │       ├── help              # Implementation related to image processing and MindSpore JNI calling
│   │       │   └── ...
│   │       └── obejctdetect      # Implementation related to camera enabling and drawing
│   │           └── ...
│   │
│   ├── res                       # Resource files related to Android
│   └── AndroidManifest.xml       # Android configuration file
│
├── CMakeLists.txt                # CMake compilation entry file
│
├── build.gradle                  # Android build configuration file
├── download.gradle               # During app building, this .gradle file automatically downloads the dependent library files and model files from the Huawei server.
└── ...
```
### Configuring MindSpore Lite Dependencies

When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can refer to [Building MindSpore Lite](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library file package (including the `libmindspore-lite.so` library file and related header files) and decompress it. Use the build option that includes the image preprocessing module.

> version: version number in the output file, which is the same as the version number of the built branch code.
>
> device: the value can be cpu (built-in CPU operator) or gpu (built-in CPU and GPU operator).
>
> os: operating system to be deployed in the output file.

In this example, the MindSpore Lite library package is automatically downloaded by the `download.gradle` file during the build process and stored in the `app/src/main/cpp/` directory.

> If the automatic download fails, manually download the library file [mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz](https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.0.1/lite/android_aarch64/mindspore-lite-1.0.1-runtime-arm64-cpu.tar.gz), then decompress and save it to that directory.
In the `build.gradle` file of the app, configure the build support of both CMake and `arm64-v8a`:

```text
android{
    defaultConfig{
        externalNativeBuild{
            cmake{
                arguments "-DANDROID_STL=c++_shared"
            }
        }

        ndk{
            abiFilters 'arm64-v8a'
        }
    }
}
```
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:

```text
# Set MindSpore Lite Dependencies.
set(MINDSPORELITE_VERSION mindspore-lite-1.0.1-runtime-arm64-cpu)
include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
add_library(mindspore-lite SHARED IMPORTED)
add_library(minddata-lite SHARED IMPORTED)
set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
        ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)

# Link target library.
target_link_libraries(
    ...
    mindspore-lite
    minddata-lite
    ...
)
```
### Downloading and Deploying the Model File

Download the model file from MindSpore Model Hub. The scene detection model file used in this sample application is `mobilenetv2.ms`, which is automatically downloaded during app building using the `download.gradle` script and stored in the `app/src/main/assets` project directory.

> If the download fails, manually download the model file [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms).
### Writing On-Device Inference Code

Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.

The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.

1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.

    - Load a model file.

        ```cpp
        jlong bufferLen = env->GetDirectBufferCapacity(model_buffer);
        if (0 == bufferLen) {
            MS_PRINT("error, bufferLen is 0!");
            return (jlong) nullptr;
        }

        char *modelBuffer = CreateLocalModelBuffer(env, model_buffer);
        if (modelBuffer == nullptr) {
            MS_PRINT("modelBuffer create failed!");
            return (jlong) nullptr;
        }
        ```
    - Create a session.

        ```cpp
        void **labelEnv = new void *;
        MSNetWork *labelNet = new MSNetWork;
        *labelEnv = labelNet;

        mindspore::lite::Context *context = new mindspore::lite::Context;
        context->thread_num_ = num_thread;
        context->device_list_[0].device_info_.cpu_device_info_.cpu_bind_mode_ = mindspore::lite::NO_BIND;
        context->device_list_[0].device_info_.cpu_device_info_.enable_float16_ = false;
        context->device_list_[0].device_type_ = mindspore::lite::DT_CPU;

        labelNet->CreateSessionMS(modelBuffer, bufferLen, context);
        delete context;
        ```
    - Load the model file and build a computational graph for inference.

        ```cpp
        void
        MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen, mindspore::lite::Context *ctx) {
            session_ = mindspore::session::LiteSession::CreateSession(ctx);
            if (session_ == nullptr) {
                MS_PRINT("Create Session failed.");
                return;
            }

            // Compile model.
            model_ = mindspore::lite::Model::Import(modelBuffer, bufferLen);
            if (model_ == nullptr) {
                ReleaseNets();
                MS_PRINT("Import model failed.");
                return;
            }

            int ret = session_->CompileGraph(model_);
            if (ret != mindspore::lite::RET_OK) {
                ReleaseNets();
                MS_PRINT("CompileGraph failed.");
                return;
            }
        }
        ```
2. Convert the input image into the tensor format of the MindSpore model.

    ```cpp
    // Convert the Bitmap image passed in from the Java layer to LiteMat for image processing.
    LiteMat lite_mat_bgr, lite_norm_mat_cut;

    if (!BitmapToLiteMat(env, srcBitmap, lite_mat_bgr)) {
        MS_PRINT("BitmapToLiteMat error");
        return NULL;
    }
    int srcImageWidth = lite_mat_bgr.width_;
    int srcImageHeight = lite_mat_bgr.height_;
    if (!PreProcessImageData(lite_mat_bgr, lite_norm_mat_cut)) {
        MS_PRINT("PreProcessImageData error");
        return NULL;
    }
    ImgDims inputDims;
    inputDims.channel = lite_norm_mat_cut.channel_;
    inputDims.width = lite_norm_mat_cut.width_;
    inputDims.height = lite_norm_mat_cut.height_;

    // Get the MindSpore inference environment created in loadModel().
    void **labelEnv = reinterpret_cast<void **>(netEnv);
    if (labelEnv == nullptr) {
        MS_PRINT("MindSpore error, labelEnv is a nullptr.");
        return NULL;
    }
    MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);

    auto mSession = labelNet->session;
    if (mSession == nullptr) {
        MS_PRINT("MindSpore error, Session is a nullptr.");
        return NULL;
    }
    MS_PRINT("MindSpore get session.");

    auto msInputs = mSession->GetInputs();
    auto inTensor = msInputs.front();

    float *dataHWC = reinterpret_cast<float *>(lite_norm_mat_cut.data_ptr_);
    // Copy the input tensor data.
    memcpy(inTensor->MutableData(), dataHWC,
           inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
    delete[] (dataHWC);
    ```
3. Perform inference on the input tensor based on the model to obtain the output tensor.

    - Perform graph execution and on-device inference.

        ```cpp
        // After the model and image tensor data are loaded, run inference.
        auto status = mSession->RunGraph();

        if (status != mindspore::lite::RET_OK) {
            MS_PRINT("MindSpore run net error.");
            return NULL;
        }
        ```
    - Obtain the output data.

        ```cpp
        /**
         * Get the MindSpore inference results.
         * Return the map of output node names and MindSpore Lite MSTensor.
         */
        auto names = mSession->GetOutputTensorNames();
        std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
        for (const auto &name : names) {
            auto temp_dat = mSession->GetOutputByTensorName(name);
            msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
        }
        ```
@ -0,0 +1,309 @@
# MindSpore Lite Style Transfer Demo (Android)

This sample application demonstrates how to use the MindSpore Lite API and a MindSpore Lite style transfer model to perform on-device inference, replace the art style of the target image based on a built-in standard image in the demo, and display the result on the image preview screen of the app.

## Running Dependencies

- Android Studio 3.2 or later (version 4.0 or later is recommended)
- NDK 21.3
- CMake 3.10
- Android software development kit (SDK) 26 or later

## Building and Running

1. Load the sample source code to Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio automatically installs the SDK.)

    ![start_home](images/home.png)

    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.

    ![start_sdk](images/sdk_management.png)

    If an Android Studio configuration error occurs, refer to the solution table in item 4 below.
2. Connect to an Android device and run the style transfer sample application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    > During the building, Android Studio automatically downloads dependencies related to MindSpore Lite and model files. Please wait.

    ![run_app](images/run_app.PNG)

    For details about how to connect Android Studio to a device for debugging, see <https://developer.android.com/studio/run/device>.

3. Continue the installation on the Android device. After the installation is complete, you can view the inference result.

    ![install](images/install.jpg)

    When using the style transfer demo, you can import or take a photo, select a built-in style to obtain a new photo after inference, and then restore or save the new photo.

    Before style transfer:

    ![sult](images/style_transfer_demo.png)

    After style transfer:

    ![sult](images/style_transfer_result.png)
4. The following table lists solutions to Android Studio configuration errors.

    | | Error | Solution |
    | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | 1 | Gradle sync failed: NDK not configured. | Specify the NDK installation directory in the local.properties file: ndk.dir={NDK installation directory} |
    | 2 | Requested NDK version did not match the version requested by ndk.dir | Manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) and specify the SDK location in the `Android NDK location` field (see the following figure). |
    | 3 | This version of Android Studio cannot open this project, please retry with Android Studio or newer. | Choose `Help` > `Checkout for Updates` on the toolbar to update the version. |
    | 4 | SSL peer shut down incorrectly | Rebuild. |

    ![project_structure](images/project_structure.png)
## Detailed Description of the Sample Application

The style transfer sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, and uses [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) to complete model inference.

### Sample Application Structure
```text
├── app
│   ├── build.gradle            # Android build configuration file.
│   ├── download.gradle         # During app building, this .gradle file automatically downloads the dependent library files and model files from the Huawei server.
│   ├── proguard-rules.pro
│   └── src
│       ├── main
│       │   ├── AndroidManifest.xml # Android configuration file.
│       │   ├── java            # Application code at the Java layer.
│       │   │   └── com
│       │   │       └── mindspore
│       │   │           └── posenetdemo # Image processing and inference process implementation.
│       │   │               ├── CameraDataDealListener.java
│       │   │               ├── ImageUtils.java
│       │   │               ├── MainActivity.java
│       │   │               ├── PoseNetFragment.java
│       │   │               ├── Posenet.java # Model loading and inference implementation.
│       │   │               └── TestActivity.java
│       │   └── res             # Resource files related to Android.
│       └── test
└── ...
```
### Downloading and Deploying the Model File

Download the model files from MindSpore Model Hub. The style transfer model files used in this sample application are `style_predict_quant.ms` and `style_transfer_quant.ms`, which are automatically downloaded during app building using the `download.gradle` script and stored in the `app/src/main/assets` project directory.

> If the download fails, manually download the model files [style_predict_quant.ms](https://download.mindspore.cn/model_zoo/official/lite/style_lite/style_predict_quant.ms) and [style_transfer_quant.ms](https://download.mindspore.cn/model_zoo/official/lite/style_lite/style_transfer_quant.ms).

### Writing On-Device Inference Code

In the style transfer demo, the Java API is used to implement on-device inference. Compared with the C++ API, the Java API can be called directly from a Java class without implementing the related code at the JNI layer, and is therefore more convenient.

The inference process of the style transfer demo is as follows. For details about the complete code, see `src/main/java/com/mindspore/styletransferdemo/StyleTransferModelExecutor.java`.
1. Load the MindSpore Lite model files and build the context, sessions, and computational graphs for inference.

    - Loading the models: Read the two MindSpore Lite models from the file system and parse them.

        ```java
        // Load the .ms models.
        style_predict_model = new Model();
        if (!style_predict_model.loadModel(mContext, "style_predict_quant.ms")) {
            Log.e("MS_LITE", "Load style_predict_model failed");
        }

        style_transform_model = new Model();
        if (!style_transform_model.loadModel(mContext, "style_transfer_quant.ms")) {
            Log.e("MS_LITE", "Load style_transform_model failed");
        }
        ```
    - Creating a configuration context: Create the configuration context `MSConfig` and save some basic configuration parameters required by the session for guiding graph building and execution.

        ```java
        msConfig = new MSConfig();
        if (!msConfig.init(DeviceType.DT_CPU, NUM_THREADS, CpuBindMode.MID_CPU)) {
            Log.e("MS_LITE", "Init context failed");
        }
        ```
    - Creating sessions: Create two `LiteSession` instances and call the `init` method to configure each session with the `MSConfig` obtained in the previous step.

        ```java
        // Create the MindSpore lite sessions.
        Predict_session = new LiteSession();
        if (!Predict_session.init(msConfig)) {
            Log.e("MS_LITE", "Create Predict_session failed");
            msConfig.free();
        }

        Transform_session = new LiteSession();
        if (!Transform_session.init(msConfig)) {
            Log.e("MS_LITE", "Create Transform_session failed");
            msConfig.free();
        }
        msConfig.free();
        ```
    - Load the model files and build the computational graphs for inference.

        ```java
        // Compile graphs.
        if (!Predict_session.compileGraph(style_predict_model)) {
            Log.e("MS_LITE", "Compile style_predict graph failed");
            style_predict_model.freeBuffer();
        }
        if (!Transform_session.compileGraph(style_transform_model)) {
            Log.e("MS_LITE", "Compile style_transform graph failed");
            style_transform_model.freeBuffer();
        }

        // Note: after model.freeBuffer() is called, the model cannot be compiled again.
        style_predict_model.freeBuffer();
        style_transform_model.freeBuffer();
        ```
2. Input data. Currently, Java supports two types of data: `byte[]` and `ByteBuffer`. Set the data of the input tensors.

    - Convert a float array to a byte array before data is input.

        ```java
        public static byte[] floatArrayToByteArray(float[] floats) {
            ByteBuffer buffer = ByteBuffer.allocate(4 * floats.length);
            buffer.order(ByteOrder.nativeOrder());
            FloatBuffer floatBuffer = buffer.asFloatBuffer();
            floatBuffer.put(floats);
            return buffer.array();
        }
        ```
    - Input data through `ByteBuffer`. `contentImage` is the image provided by the user, and `styleBitmap` is the built-in style image. (A sketch of the `bitmapToByteBuffer` helper follows the snippet.)

        ```java
        public ModelExecutionResult execute(Bitmap contentImage, Bitmap styleBitmap) {
            Log.i(TAG, "running models");

            fullExecutionTime = SystemClock.uptimeMillis();
            preProcessTime = SystemClock.uptimeMillis();

            ByteBuffer contentArray =
                    ImageUtils.bitmapToByteBuffer(contentImage, CONTENT_IMAGE_SIZE, CONTENT_IMAGE_SIZE, 0, 255);
            ByteBuffer input = ImageUtils.bitmapToByteBuffer(styleBitmap, STYLE_IMAGE_SIZE, STYLE_IMAGE_SIZE, 0, 255);
        ```
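        `ImageUtils.bitmapToByteBuffer` is outside this excerpt. A minimal sketch, assuming it scales the bitmap to the target size and normalizes each RGB channel with `(value - mean) / std` (here mean 0 and std 255, mapping pixel values to [0, 1]); the demo's actual implementation may differ in details:

        ```java
        // Sketch under the assumptions stated above, not the demo's exact code.
        public static ByteBuffer bitmapToByteBuffer(Bitmap bitmapIn, int width, int height,
                                                    float mean, float std) {
            Bitmap bitmap = Bitmap.createScaledBitmap(bitmapIn, width, height, true);
            ByteBuffer inputImage = ByteBuffer.allocateDirect(4 * width * height * 3);
            inputImage.order(ByteOrder.nativeOrder());
            int[] intValues = new int[width * height];
            bitmap.getPixels(intValues, 0, width, 0, 0, width, height);
            for (int value : intValues) {
                // Unpack ARGB and normalize each of the three color channels.
                inputImage.putFloat((((value >> 16) & 0xFF) - mean) / std);
                inputImage.putFloat((((value >> 8) & 0xFF) - mean) / std);
                inputImage.putFloat(((value & 0xFF) - mean) / std);
            }
            inputImage.rewind();
            return inputImage;
        }
        ```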
3. Perform inference on the input tensors based on the models, obtain the output tensors, and perform post-processing.

    - Use `runGraph` to perform model inference on the built-in style image and obtain the result `Predict_results`.

        ```java
        List<MSTensor> Predict_inputs = Predict_session.getInputs();
        if (Predict_inputs.size() != 1) {
            return null;
        }
        MSTensor Predict_inTensor = Predict_inputs.get(0);
        Predict_inTensor.setData(input);

        preProcessTime = SystemClock.uptimeMillis() - preProcessTime;
        stylePredictTime = SystemClock.uptimeMillis();
        ```
        ```java
        if (!Predict_session.runGraph()) {
            Log.e("MS_LITE", "Run Predict_graph failed");
            return null;
        }
        stylePredictTime = SystemClock.uptimeMillis() - stylePredictTime;
        Log.d(TAG, "Style Predict Time to run: " + stylePredictTime);

        // Get output tensor values.
        List<String> tensorNames = Predict_session.getOutputTensorNames();
        Map<String, MSTensor> outputs = Predict_session.getOutputMapByTensor();

        float[] Predict_results = null;
        for (String tensorName : tensorNames) {
            MSTensor output = outputs.get(tensorName);
            if (output == null) {
                Log.e("MS_LITE", "Can not find Predict_session output " + tensorName);
                return null;
            }
            Predict_results = output.getFloatData();
        }
        ```
    - Perform model inference on the user image based on the previous result to obtain the style transfer result `transform_results`.

        ```java
        List<MSTensor> Transform_inputs = Transform_session.getInputs();
        // The transform model has two input tensors: tensor0 is 1*1*1*100, tensor1 is 1*384*384*3.
        MSTensor Transform_inputs_inTensor0 = Transform_inputs.get(0);
        Transform_inputs_inTensor0.setData(floatArrayToByteArray(Predict_results));

        MSTensor Transform_inputs_inTensor1 = Transform_inputs.get(1);
        Transform_inputs_inTensor1.setData(contentArray);
        ```
        ```java
        styleTransferTime = SystemClock.uptimeMillis();

        if (!Transform_session.runGraph()) {
            Log.e("MS_LITE", "Run Transform_graph failed");
            return null;
        }

        styleTransferTime = SystemClock.uptimeMillis() - styleTransferTime;
        Log.d(TAG, "Style apply Time to run: " + styleTransferTime);

        postProcessTime = SystemClock.uptimeMillis();

        // Get output tensor values.
        List<String> Transform_tensorNames = Transform_session.getOutputTensorNames();
        Map<String, MSTensor> Transform_outputs = Transform_session.getOutputMapByTensor();

        float[] transform_results = null;
        for (String tensorName : Transform_tensorNames) {
            MSTensor output1 = Transform_outputs.get(tensorName);
            if (output1 == null) {
                Log.e("MS_LITE", "Can not find Transform_session output " + tensorName);
                return null;
            }
            transform_results = output1.getFloatData();
        }
        ```
    - Process the output node data to obtain the final inference result.

        ```java
        float[][][][] outputImage = new float[1][][][];  // 1 * 384 * 384 * 3
        for (int x = 0; x < 1; x++) {
            float[][][] arrayThree = new float[CONTENT_IMAGE_SIZE][][];
            for (int y = 0; y < CONTENT_IMAGE_SIZE; y++) {
                float[][] arrayTwo = new float[CONTENT_IMAGE_SIZE][];
                for (int z = 0; z < CONTENT_IMAGE_SIZE; z++) {
                    float[] arrayOne = new float[3];
                    for (int i = 0; i < 3; i++) {
                        int n = i + z * 3 + y * CONTENT_IMAGE_SIZE * 3 + x * CONTENT_IMAGE_SIZE * CONTENT_IMAGE_SIZE * 3;
                        arrayOne[i] = transform_results[n];
                    }
                    arrayTwo[z] = arrayOne;
                }
                arrayThree[y] = arrayTwo;
            }
            outputImage[x] = arrayThree;
        }
        ```
        ```java
        Bitmap styledImage =
                ImageUtils.convertArrayToBitmap(outputImage, CONTENT_IMAGE_SIZE, CONTENT_IMAGE_SIZE);
        postProcessTime = SystemClock.uptimeMillis() - postProcessTime;

        fullExecutionTime = SystemClock.uptimeMillis() - fullExecutionTime;
        Log.d(TAG, "Time to run everything: " + fullExecutionTime);

        return new ModelExecutionResult(styledImage,
                preProcessTime,
                stylePredictTime,
                styleTransferTime,
                postProcessTime,
                fullExecutionTime,
                formatExecutionLog());
        ```
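        `ImageUtils.convertArrayToBitmap` is also outside this excerpt. A sketch, assuming the transform output is RGB float data in [0, 1] laid out as `[1][height][width][3]`; the demo's actual implementation may differ:

        ```java
        // Sketch under the stated layout assumption; clamping guards against
        // values slightly outside [0, 1].
        public static Bitmap convertArrayToBitmap(float[][][][] image, int width, int height) {
            Bitmap styledImage = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int r = (int) (Math.min(Math.max(image[0][y][x][0], 0.0f), 1.0f) * 255);
                    int g = (int) (Math.min(Math.max(image[0][y][x][1], 0.0f), 1.0f) * 255);
                    int b = (int) (Math.min(Math.max(image[0][y][x][2], 0.0f), 1.0f) * 255);
                    styledImage.setPixel(x, y, Color.rgb(r, g, b));
                }
            }
            return styledImage;
        }
        ```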
@ -21,7 +21,7 @@

If an Android Studio configuration error occurs, refer to item 4 for the solution.

-2. Connect to an Android device and run the skeleton application.
+2. Connect to an Android device and run the application.

    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.

    > During the building, Android Studio automatically downloads MindSpore Lite, model files, and other dependencies. Please wait patiently.