update README

meng_chunyang 2020-08-31 09:49:38 +08:00
parent 433eaab225
commit e256877be9
3 changed files with 22 additions and 1 deletion

@@ -56,7 +56,6 @@
* Add 93 TFLite ops.
* Add 24 Caffe ops.
* Add 62 ONNX ops.
* Add support for Windows.
* Add 11 optimization passes, including fusion and constant folding.
* Support quantization-aware training and post-training quantization.
* CPU

@@ -54,3 +54,14 @@ For more details please check out our [MindSpore Lite Architecture Guide](https:
Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html) is the process of running input data through the model to obtain output.
MindSpore provides a series of pre-trained models that can be deployed on mobile devices; see the [example](#TODO).
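The snippet below is a minimal C++ sketch of this load-and-run flow, assuming the r0.7-era MindSpore Lite runtime API described in the linked runtime tutorial (`mindspore::lite::Model::Import`, `mindspore::session::LiteSession::CreateSession`, `CompileGraph`, `RunGraph`); the model file name, the 4-thread context, and the input handling are illustrative assumptions, not part of this README. Consult the runtime tutorial for the authoritative usage.

```cpp
// Minimal sketch: load a converted .ms model and run one inference.
// Error handling and ownership details are simplified; API names assume the
// r0.7-era MindSpore Lite C++ headers shipped under include/.
#include <fstream>
#include <iostream>
#include <vector>

#include "include/context.h"
#include "include/lite_session.h"
#include "include/model.h"

int main() {
  // Read the converted model file into memory (file name is illustrative).
  std::ifstream ifs("mobilenet_v2_10_224.ms", std::ios::binary | std::ios::ate);
  size_t size = static_cast<size_t>(ifs.tellg());
  std::vector<char> buf(size);
  ifs.seekg(0);
  ifs.read(buf.data(), size);

  // Parse the model buffer.
  auto *model = mindspore::lite::Model::Import(buf.data(), size);

  // Create a session with a 4-thread CPU context (matching the benchmark setting).
  mindspore::lite::Context context;
  context.thread_num_ = 4;
  auto *session = mindspore::session::LiteSession::CreateSession(&context);

  // Compile the graph, fill the input tensor, and run inference.
  session->CompileGraph(model);
  auto inputs = session->GetInputs();
  auto *in_data = reinterpret_cast<float *>(inputs[0]->MutableData());
  (void)in_data;  // ... copy preprocessed input data into in_data here ...
  session->RunGraph();

  // Retrieve the output tensors.
  auto outputs = session->GetOutputs();
  std::cout << "inference finished, output tensors: " << outputs.size() << std::endl;

  delete session;
  delete model;
  return 0;
}
```

To build such a program, link against the MindSpore Lite runtime library from the release package (typically `libmindspore-lite.so`); the exact package layout depends on the release you download.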
## MindSpore Lite benchmark test results
Based on MindSpore r0.7, we tested several common networks on a HUAWEI Mate30 (HiSilicon Kirin990) mobile phone; the results below are provided for your reference.
| Network             | Thread Number | Average Run Time (ms) |
| ------------------- | ------------- | --------------------- |
| basic_squeezenet | 4 | 9.10 |
| inception_v3 | 4 | 69.361 |
| mobilenet_v1_10_224 | 4 | 7.137 |
| mobilenet_v2_10_224 | 4 | 5.569 |
| resnet_v2_50 | 4 | 48.691 |

@@ -64,3 +64,14 @@ MindSpore Lite is MindSpore's device-cloud synergistic, lightweight, high-performance AI inference…
Mainly performs model inference, i.e., loads the model and completes all model-related computation. [Inference](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html) is the process of running input data through the model to obtain predictions.
MindSpore provides a series of [examples](#TODO) of deploying pre-trained models on smart terminals.
## MindSpore Lite benchmark results
Based on MindSpore r0.7, we tested the performance of a set of common on-device networks on a HUAWEI Mate30 (HiSilicon Kirin990) phone; the results are provided below for your reference:
| Network             | Thread Number | Average Run Time (ms) |
| ------------------- | ------------- | --------------------- |
| basic_squeezenet | 4 | 9.10 |
| inception_v3 | 4 | 69.361 |
| mobilenet_v1_10_224 | 4 | 7.137 |
| mobilenet_v2_10_224 | 4 | 5.569 |
| resnet_v2_50 | 4 | 48.691 |