fix urls suitable to docs structure for master

lvmingfu 2021-07-21 16:13:30 +08:00
parent 609ac1140a
commit 84b6756df3
35 changed files with 61 additions and 61 deletions

View File

@@ -44,7 +44,7 @@ enrichment of the AI software/hardware application ecosystem.
 <img src="https://gitee.com/mindspore/mindspore/raw/master/docs/MindSpore-architecture.png" alt="MindSpore Architecture" width="600"/>
-For more details please check out our [Architecture Guide](https://www.mindspore.cn/doc/note/en/master/design/mindspore/architecture.html).
+For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/programming_guide/en/master/architecture.html).
 ### Automatic Differentiation
@@ -240,7 +240,7 @@ please check out [docker](https://gitee.com/mindspore/mindspore/blob/master/dock
 ## Quickstart
-See the [Quick Start](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html)
+See the [Quick Start](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html)
 to implement the image classification.
 ## Docs

View File

@@ -41,7 +41,7 @@ MindSpore provides friendly design and efficient execution to improve data science
 <img src="https://gitee.com/mindspore/mindspore/raw/master/docs/MindSpore-architecture.png" alt="MindSpore Architecture" width="600"/>
-For more details, please check out our [Overall Architecture](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/architecture.html).
+For more details, please check out our [Overall Architecture](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/architecture.html).
 ### Automatic Differentiation
@@ -236,7 +236,7 @@ MindSpore Docker images are hosted on [Docker Hub](https://hub.docker.com/r/mindspore
 ## Quick Start
-See the [Quick Start](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html) to implement image classification.
+See the [Quick Start](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/quick_start/quick_start.html) to implement image classification.
 ## Docs

View File

@@ -81,7 +81,7 @@
 ###### `mindspore.dataset.Dataset.device_que` interface removes unused parameter `prefetch_size`([!18973](https://gitee.com/mindspore/mindspore/pulls/18973))
-Previously, we had a parameter `prefetch_size` in `device_que` to define the number of records to prefetch ahead of the user's request. However, this parameter was never actually used, so it had no effect. Therefore, we removed it in 1.3.0, and users can set this configuration through [mindspore.dataset.config.set_prefetch_size](https://www.mindspore.cn/docs/api/zh-CN/r1.3/api_python/mindspore.dataset.config.html#mindspore.dataset.config.set_prefetch_size).
+Previously, we had a parameter `prefetch_size` in `device_que` to define the number of records to prefetch ahead of the user's request. However, this parameter was never actually used, so it had no effect. Therefore, we removed it in 1.3.0, and users can set this configuration through [mindspore.dataset.config.set_prefetch_size](https://www.mindspore.cn/docs/api/en/r1.3/api_python/mindspore.dataset.config.html#mindspore.dataset.config.set_prefetch_size).
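The before/after comparison table below is truncated by this hunk, so here is a hedged sketch of the new configuration path described above. The pipeline, dataset path, and prefetch value are placeholders for illustration only and are not part of the diff.

```python
# Illustrative sketch only: set the prefetch depth globally instead of per device_que call.
import mindspore.dataset as ds

ds.config.set_prefetch_size(16)                         # 1.3.0+: global prefetch configuration
dataset = ds.Cifar10Dataset("./cifar-10-batches-bin")   # placeholder pipeline
dataset = dataset.batch(32)
dataset.device_que()                                    # no prefetch_size argument anymore
# Pre-1.3.0 form (removed): dataset.device_que(prefetch_size=16)
```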
 <table>
 <tr>
@@ -136,7 +136,7 @@ thor(net, learning_rate, damping, momentum, weight_decay=0.0, loss_scale=1.0, ba
 ##### Dump Config
-Previously, we could only dump tensor data for one or all steps. To make the dump feature easier to use, we changed the dump configuration format and dump structure. View the [New Dump Tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/r1.3/advanced_use/dump_in_graph_mode.html#dump).
+Previously, we could only dump tensor data for one or all steps. To make the dump feature easier to use, we changed the dump configuration format and dump structure. View the [New Dump Tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dump_in_graph_mode.html#dump).
 | 1.2.1 | 1.3.0 |
 | ------------------------------------------------------ | ------------------------------------------------------------------------------------------- |
@@ -520,7 +520,7 @@ However, currently MindSpore Parser cannot parse numpy.ndarray in JIT-graph. To
 ###### mindspore.numpy interfaces remove support for keyword arguments `out` and `where`([!12726](https://gitee.com/mindspore/mindspore/pulls/12726))
-Previously, we had incomplete support for the keyword arguments `out` and `where` in mindspore.numpy interfaces; the `out` argument was only functional when the `where` argument was also provided, and `out` could not be used to pass a reference to numpy functions. Therefore, we removed these two arguments to avoid any confusion. Their original functionality can be found in [np.where](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/numpy/mindspore.numpy.where.html#mindspore.numpy.where)
+Previously, we had incomplete support for the keyword arguments `out` and `where` in mindspore.numpy interfaces; the `out` argument was only functional when the `where` argument was also provided, and `out` could not be used to pass a reference to numpy functions. Therefore, we removed these two arguments to avoid any confusion. Their original functionality can be found in [np.where](https://www.mindspore.cn/docs/api/en/master/api_python/numpy/mindspore.numpy.where.html#mindspore.numpy.where)
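As a hedged illustration of the replacement pattern (the arrays and condition below are made-up examples, not taken from the release note):

```python
# Illustrative only: emulate the removed `where`/`out` behaviour with mnp.where.
import mindspore.numpy as mnp

a = mnp.full((2, 3), 2.0)
b = mnp.ones((2, 3))
cond = a > b                                   # boolean mask
# Removed form: mnp.add(a, b, out=buf, where=cond)
res = mnp.where(cond, mnp.add(a, b), b)        # keep a + b where cond holds, else fall back to b
```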
 <table>
 <tr>
@@ -819,7 +819,7 @@ MSTensor::DestroyTensorPtr(tensor);
 ###### `nn.MatMul` is now deprecated in favor of `ops.matmul` ([!12817](https://gitee.com/mindspore/mindspore/pulls/12817))
-[ops.matmul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) follows the API of [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) as closely as possible. As a function interface, [ops.matmul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) is applied without instantiation, as opposed to `nn.MatMul`, which should only be used as a class instance.
+[ops.matmul](https://www.mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) follows the API of [numpy.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) as closely as possible. As a function interface, [ops.matmul](https://www.mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.matmul.html#mindspore.ops.matmul) is applied without instantiation, as opposed to `nn.MatMul`, which should only be used as a class instance.
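Since the comparison table that follows is truncated here, a minimal sketch of the functional form, with made-up shapes:

```python
# Illustrative only: ops.matmul is called directly, without instantiating a Cell.
import mindspore.numpy as mnp
from mindspore import ops

x = mnp.ones((2, 3))
y = mnp.ones((3, 4))
out = ops.matmul(x, y)          # shape (2, 4); mirrors numpy.matmul semantics
# Deprecated form: out = nn.MatMul()(x, y)
```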
 <table>
 <tr>

View File

@@ -644,7 +644,7 @@ def set_context(**kwargs):
 suffix to the file. Default: ''.
 enable_sparse (bool): Whether to enable sparsity feature. Default: False.
 For details of sparsity and sparse tensor, please check
-`<https://www.mindspore.cn/doc/programming_guide/zh-CN/master/tensor.html>`_.
+`<https://www.mindspore.cn/docs/programming_guide/zh-CN/master/tensor.html>`_.
 max_call_depth (int): Specify the maximum depth of function call. Must be positive integer. Default: 1000.
 env_config_path (str): Config path for DFX.
 auto_tune_mode (str): The mode of auto tune when op building, get the best tiling performance,
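For orientation, a hypothetical call touching the parameters documented in this docstring hunk; the values are placeholders, not recommendations from the diff.

```python
# Illustrative set_context call using the parameters described above.
from mindspore import context

context.set_context(enable_sparse=False,   # sparsity feature off (see the linked tensor guide)
                    max_call_depth=1000)   # maximum depth of function calls in graph compilation
```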

View File

@@ -26,8 +26,8 @@ class DatasetCache:
 """
 A client to interface with tensor caching service.
-For details, please check `Tutorial <https://www.mindspore.cn/tutorial/training/en/master/advanced_use/
-enable_cache.html>`_, `Programming guide <https://www.mindspore.cn/doc/programming_guide/en/master/cache.html>`_.
+For details, please check `Tutorial <https://www.mindspore.cn/docs/programming_guide/en/master/enable_cache.html>`_,
+`Programming guide <https://www.mindspore.cn/docs/programming_guide/en/master/cache.html>`_.
 Args:
 session_id (int): A user assigned session id for the current pipeline.
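A hedged usage sketch for the class this docstring belongs to; the session id, cache size, and dataset path below are placeholders, and a running cache server (started with `cache_admin --start`) is assumed.

```python
# Illustrative only: attach a DatasetCache to a dataset pipeline.
import mindspore.dataset as ds

some_cache = ds.DatasetCache(session_id=1, size=0, spilling=False)        # size=0: no memory cap
dataset = ds.Cifar10Dataset("./cifar-10-batches-bin", cache=some_cache)   # cached source dataset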

View File

@@ -42,7 +42,7 @@ MDP requires MindSpore version 0.7.0-beta or later. MDP is actively evolving. In
 ### Bayesian Neural Network
-1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the tutorials.
+1. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorials.
 2. Define a Bayesian neural network. The Bayesian LeNet is used in this example.
@@ -227,7 +227,7 @@ optimizer = nn.Adam(params=vae.trainable_params(), learning_rate=0.001)
 net_with_loss = nn.WithLossCell(vae, net_loss)
 ```
-3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the tutorials.
+3. Process the required dataset. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorials.
 4. Use the SVI interface to train the VAE network. vi.run returns the trained network; get_train_loss returns the loss after training.
 ```python
@@ -437,7 +437,7 @@ if __name__ == "__main__":
 The uncertainty estimation toolbox is based on MindSpore Deep Probabilistic Programming (MDP), and it is suitable for mainstream deep learning models, such as regression, classification, and target detection. In the inference stage, with the uncertainty estimation toolbox, developers only need to pass in the trained model and training dataset and specify the task and the samples to be estimated; they can then obtain the aleatoric uncertainty and epistemic uncertainty. Based on the uncertainty information, developers can understand the model and the dataset better.
-In a classification task, for example, the model is the LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html) in the tutorials. To evaluate the uncertainty of test examples, use the toolbox as follows:
+In a classification task, for example, the model is the LeNet model. The MNIST dataset is used in the example. Data processing is consistent with [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/en/master/quick_start/quick_start.html) in the tutorials. To evaluate the uncertainty of test examples, use the toolbox as follows:
 ```python
 from mindspore.nn.probability.toolbox.uncertainty_evaluation import UncertaintyEvaluation

View File

@@ -399,7 +399,7 @@ def check_version_and_env_config():
 except OSError:
 logger.warning(
 "Pre-Load Library libgomp.so.1 failed, this might cause the 'cannot allocate TLS memory' problem, "
-"if so, find the solution in the FAQ at https://www.mindspore.cn/doc/faq/en/master/index.html.")
+"if so, find the solution in the FAQ at https://www.mindspore.cn/docs/faq/en/master/index.html.")
 elif __package_name__.lower() == "mindspore-gpu":
 env_checker = GPUEnvChecker()
 else:

View File

@@ -549,7 +549,7 @@ python export.py --config_path [CONFIG_PATH]
 ### Inference
-If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/docs/programming_gui/zh-CN/master/multi_platform_inference.html). Below is an example of the procedure:
+If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html). Below is an example of the procedure:
 - Running in an Ascend processor environment

View File

@@ -73,7 +73,7 @@ As an advanced detector, YOLOv5 is faster than all available alternative detectors (FP
 - Framework
 - [MindSpore](https://www.mindspore.cn/install)
 - For more information about MindSpore, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # [Quick Start](#目录)

View File

@@ -64,7 +64,7 @@ The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) an
 ## Detailed Description of the Sample Program
-This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html).
+This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html).
 ### Sample Program Structure
@@ -100,7 +100,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
 In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` package via the `app/download.gradle` file and saves it in the `app/src/main/cpp` directory.

View File

@@ -107,7 +107,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
 > version: the version number in the output file, which matches the version of the built branch code.
 >

View File

@@ -66,7 +66,7 @@ The following describes how to use the MindSpore Lite JAVA APIs and MindSpore Li
 ## Detailed Description of the Sample Program
-This image segmentation sample program on the Android device is implemented through Java. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. The Java API is then called to perform inference in [Runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html).
+This image segmentation sample program on the Android device is implemented through Java. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. The Java API is then called to perform inference in [Runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html).
 ### Sample Program Structure
@@ -99,7 +99,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore Java APIs are called, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
+When MindSpore Java APIs are called, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
 In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` package via the `app/download.gradle` file and saves it in the `app/src/main/cpp` directory.

View File

@@ -104,7 +104,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore Java APIs are called on Android, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
+When MindSpore Java APIs are called on Android, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
 > version: the version number in the output file, which matches the version of the built branch code.
 >

View File

@@ -70,7 +70,7 @@ This object detection sample program on the Android device includes a Java layer
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the MindSpore Lite version. In this case, you need to use the build command that generates the image preprocessing module.
 In this example, the build process automatically downloads the `mindspore-lite-1.0.1-runtime-arm64-cpu` package via the `app/download.gradle` file and saves it in the `app/src/main/cpp` directory.

View File

@@ -69,7 +69,7 @@
 ## Detailed Description of the Sample Program
-This on-device object detection Android sample program consists of a Java layer and a JNI layer. The Java layer mainly uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing (such as drawing boxes around inference results); the JNI layer completes the model inference process in [Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html).
+This on-device object detection Android sample program consists of a Java layer and a JNI layer. The Java layer mainly uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing (such as drawing boxes around inference results); the JNI layer completes the model inference process in [Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html).
 > This section details the JNI-layer implementation of the sample program. The Java layer uses the Android Camera 2 API to turn on the device camera and process image frames; readers are expected to have basic Android development knowledge.
@@ -110,7 +110,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
 > version: the version number in the output file, which matches the version of the built branch code.
 >

View File

@@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite API and skele
 ## Detailed Description of the Sample Application
-The skeleton detection sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) to complete model inference.
+The skeleton detection sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html) to complete model inference.
 ### Sample Application Structure

View File

@@ -69,7 +69,7 @@
 ## Detailed Description of the Sample Program
-The skeleton detection Android sample program uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing, and completes the model inference process in [Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html).
+The skeleton detection Android sample program uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing, and completes the model inference process in [Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html).
 ### Sample Program Structure

View File

@@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite C++ API (Andr
 ## Detailed Description of the Sample Application
-The scene detection sample application on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images (drawing frames based on the inference result). At the JNI layer, the model inference process is completed in [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html).
+The scene detection sample application on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images (drawing frames based on the inference result). At the JNI layer, the model inference process is completed in [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html).
 > The following describes the JNI layer implementation of the sample application. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic Android development knowledge.
@@ -107,7 +107,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can refer to [Building MindSpore Lite](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library file package (including the `libmindspore-lite.so` library file and related header files) and decompress it. The following example uses the build command with the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can refer to [Building MindSpore Lite](https://www.mindspore.cn/lite/docs/en/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library file package (including the `libmindspore-lite.so` library file and related header files) and decompress it. The following example uses the build command with the image preprocessing module.
 > version: version number in the output file, which is the same as the version number of the built branch code.
 >

View File

@@ -106,7 +106,7 @@ app
 ### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html) to generate the `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz` library package (including the `libmindspore-lite.so` library file and related header files) and decompress it. In this example, you need to use the build command that generates the image preprocessing module.
 > version: the version number in the output file, which matches the version of the built branch code.
 >

View File

@@ -66,7 +66,7 @@ This sample application demonstrates how to use the MindSpore Lite API and MindS
 ## Detailed Description of the Sample Application
-The style transfer sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/tutorial/lite/en/master/use/runtime.html) to complete model inference.
+The style transfer sample application on the Android device uses the Android Camera 2 API to enable a camera to obtain image frames and process images, as well as using [runtime](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html) to complete model inference.
 ### Sample Application Structure

View File

@@ -69,7 +69,7 @@
 ## Detailed Description of the Sample Program
-The style transfer Android sample program uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing, and completes the model inference process in [Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/runtime.html).
+The style transfer Android sample program uses the Android Camera 2 API to obtain image frames from the camera and perform the corresponding image processing, and completes the model inference process in [Runtime](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime.html).
 ### Sample Program Structure

View File

@@ -37,8 +37,8 @@ Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Quick start](#contents)

View File

@@ -49,8 +49,8 @@ Original dataset is from Human Protein Atlas (www.proteinatlas.org). After post-
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Quick start](#contents)

View File

@@ -116,8 +116,8 @@ EDSR first passes through one convolutional layer, then 32 chained residual blocks, then another convolutional layer, and finally
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # Quick Start

View File

@@ -50,7 +50,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Script description](#contents)

View File

@@ -113,8 +113,8 @@ DIV2K
 - Framework
 - [MindSpore](https://www.mindspore.cn/install)
 - For more information, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # Script Description

View File

@@ -51,8 +51,8 @@ The process of training SRGAN needs a pretrained VGG19 based on Imagenet.
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Script Description](#contents)

View File

@@ -58,8 +58,8 @@ Dataset used: [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 ## [Quick Start](#contents)

View File

@@ -46,7 +46,7 @@ Dataset used: ETH, CalTech, MOT17, CUHK-SYSU, PRW, CityPerson
 ## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for "reduce precision".
+The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data formats, while maintaining the network accuracy achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for "reduce precision".
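As a rough, hedged illustration of what enabling mixed precision looks like in a training script (the tiny network, loss, and optimizer below are placeholders; the real model and dataset of this repository are not shown in the diff):

```python
# Illustrative only: enable automatic mixed precision when building the Model.
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)                                            # placeholder network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")   # "O2"/"O3": cast most ops to FP16
# model.train(epoch=10, train_dataset=train_ds)                   # train_ds: a mindspore.dataset pipeline
```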
 # [Environment Requirements](#contents)
@@ -70,8 +70,8 @@ To run the python scripts in the repository, you need to prepare the environment
 - mindspore 1.2.0
 - pycocotools 2.0
 - For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Quick Start](#contents)

View File

@@ -71,8 +71,8 @@ The overall network architecture of GhostNet is described at the following [link](https://arxiv.org/pdf/1911.11907.
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # Quick Start

View File

@@ -72,7 +72,7 @@
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_gui/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks while maintaining the network accuracy that single-precision training can achieve. Mixed precision training increases computing speed and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks while maintaining the network accuracy that single-precision training can achieve. Mixed precision training increases computing speed and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces the precision to process the data. Users can enable the INFO log and search for "reduce precision" to view operators with reduced precision.
 # Environment Requirements

View File

@@ -54,7 +54,7 @@ The overall network architecture of Midas is as follows:
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks while maintaining the network accuracy that single-precision training can achieve. Mixed precision training increases computing speed and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks while maintaining the network accuracy that single-precision training can achieve. Mixed precision training increases computing speed and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces the precision to process the data. Users can enable the INFO log and search for "reduce precision" to view operators with reduced precision.
 # Environment Requirements
@@ -64,8 +64,8 @@ The overall network architecture of Midas is as follows:
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # Quick Start

View File

@@ -49,7 +49,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore tutorials](https://www.mindspore.cn/tutorial/en/master/index.html)
+- [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
 - [MindSpore API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Script description](#contents)

View File

@@ -57,7 +57,7 @@ Dataset used: [CIFAR10](https://www.kaggle.com/c/cifar-10)
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for "reduce precision".
 # [Environment Requirements](#contents)
@@ -67,8 +67,8 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 # [Quick Start](#contents)

View File

@@ -116,8 +116,8 @@ The WDSR network is mainly composed of several basic modules, including convolutional and pooling layers.
 - Framework
 - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-- [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 # Quick Start