diff --git a/docs/MindSpore-Lite-architecture.png b/docs/MindSpore-Lite-architecture.png
new file mode 100644
index 00000000000..abf28796690
Binary files /dev/null and b/docs/MindSpore-Lite-architecture.png differ
diff --git a/mindspore/lite/README.md b/mindspore/lite/README.md
index fde5c6bcb95..25a21b74439 100644
--- a/mindspore/lite/README.md
+++ b/mindspore/lite/README.md
@@ -1,6 +1,56 @@
-MindSpore design goals
-1 Device-cloud integration: a unified IR, so models trained on the cloud can be retrained on the device; cloud-side mixed-precision training cooperates with device-side mixed-precision inference to improve inference performance;
-2 High performance and lightweight: the high-performance kernel computing library nnacl supports multiple convolution algorithms such as sliding window, im2col+gemm, and Winograd, with deep assembly-level optimization; the online runtime module has no third-party dependencies, keeping the framework lightweight;
-3 Fast deployment: supports converting models from TensorFlow Lite, Caffe, ONNX, and other formats; provides quantization, image data processing, and other features that ease deployment;
-4 Full-scenario coverage: Lite & Micro cover mobile phones, IoT, and other smart devices
-
\ No newline at end of file
+[查看中文](./README_CN.md)
+
+## What Is MindSpore Lite
+
+MindSpore Lite is a high-performance, lightweight, open-source inference framework that meets the needs of AI applications on mobile devices. MindSpore Lite focuses on deploying AI technology effectively on devices. It has been integrated into HMS (Huawei Mobile Services) to provide inference for applications such as image classification, object detection, and OCR. MindSpore Lite will promote the development and enrichment of the AI software/hardware application ecosystem.
+
+![MindSpore Lite Architecture](../../docs/MindSpore-Lite-architecture.png)
+
+For more details please check out our [MindSpore Lite Architecture Guide](https://www.mindspore.cn/lite/docs/en/master/architecture.html).
+
+### MindSpore Lite features
+
+1. Cooperative work with MindSpore training
+    - Provides training, optimization, and deployment in one workflow.
+    - A unified IR enables integrated device-cloud AI applications.
+
+2. Lightweight
+    - Provides model compression, which reduces model size and can also improve performance.
+    - Provides the ultra-lightweight inference solution MindSpore Micro to meet deployment requirements in extreme environments such as smart watches and headphones.
+
+3. High-performance
+    - The built-in high-performance kernel computing library NNACL supports multiple convolution optimization algorithms such as sliding window, im2col+gemm, and Winograd.
+    - Assembly-level optimization improves the performance of kernel operators. Heterogeneous scheduling across CPU, GPU, and NPU is supported.
+
+4. Versatility
+    - Supports iOS and Android.
+    - Supports LiteOS.
+    - Supports mobile devices, smart screens, tablets, and IoT devices.
+    - Supports third-party models such as TensorFlow Lite, Caffe, and ONNX.
+
+## MindSpore Lite AI deployment procedure
+
+1. Model selection and personalized training
+
+    Select a new model or use an existing model for incremental training with labeled data. When designing a model for mobile devices, consider the model size, accuracy, and computational cost.
+
+    The MindSpore team provides a series of pre-trained models for image classification and object detection. You can use these pre-trained models in your application.
+
+    The pre-trained models provided by MindSpore include: [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/) and [Object Detection](https://download.mindspore.cn/model_zoo/official/lite/). More models will be provided in the future.
+
+    MindSpore allows you to retrain pre-trained models to perform other tasks. For example, a pre-trained image classification model can be retrained to recognize new image types. See [Retraining](https://www.mindspore.cn/lite/tutorial/zh-CN/master/advanced_use/retraining_of_quantized_network.html).
+
+2. Model converter and optimization
+
+    If you use a MindSpore or third-party model, you need to use the [MindSpore Lite Model Converter Tool](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html) to convert it into a MindSpore Lite model. The converter tool turns TensorFlow Lite, Caffe, and ONNX models into the MindSpore Lite format, and operator fusion and quantization can be applied during conversion.
+
+    MindSpore also provides a tool that converts models running on IoT devices into C code.
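+
+    As a rough illustration, converting a TensorFlow Lite model with the converter tool might look like the sketch below; the file names are placeholders, and the full option list is documented in the converter tool tutorial linked above.
+
+    ```bash
+    # Convert a TensorFlow Lite model into a MindSpore Lite model (model.ms).
+    ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model
+    ```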
+
+3. Model deployment
+
+    This stage covers model deployment, including model management, deployment, and operation and maintenance monitoring.
+
+4. Inference
+
+    Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html) is the process of running input data through the model to obtain the output.
+
+    MindSpore provides a series of pre-trained models that can be deployed on mobile devices; see the [example](#TODO).
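+
+    As a minimal sketch of this step, the C++ fragment below loads a converted `.ms` model and runs it with the MindSpore Lite runtime. It assumes the `LiteSession` C++ API described in the Runtime tutorial linked above; header paths and method names may differ between releases, and error handling and preprocessing are omitted.
+
+    ```cpp
+    #include <fstream>
+    #include <string>
+    #include "include/context.h"
+    #include "include/lite_session.h"
+    #include "include/model.h"
+
+    int main() {
+      // Read the converted .ms model into memory.
+      std::ifstream ifs("model.ms", std::ios::binary | std::ios::ate);
+      std::string buf(static_cast<size_t>(ifs.tellg()), '\0');
+      ifs.seekg(0);
+      ifs.read(&buf[0], buf.size());
+
+      // Parse the model and compile it into a session (CPU context by default).
+      auto *model = mindspore::lite::Model::Import(buf.data(), buf.size());
+      mindspore::lite::Context context;
+      auto *session = mindspore::session::LiteSession::CreateSession(&context);
+      session->CompileGraph(model);
+
+      // Fill the first input tensor with preprocessed data, then run the graph.
+      auto inputs = session->GetInputs();
+      // ... copy image data into inputs[0]->MutableData() ...
+      session->RunGraph();
+
+      // Retrieve the output tensors for post-processing.
+      auto outputs = session->GetOutputs();
+      delete session;
+      return 0;
+    }
+    ```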
diff --git a/mindspore/lite/README_CN.md b/mindspore/lite/README_CN.md
new file mode 100644
index 00000000000..d2051cae3be
--- /dev/null
+++ b/mindspore/lite/README_CN.md
@@ -0,0 +1,66 @@
+
+[View English](./README.md)
+
+## Introduction to MindSpore Lite
+
+MindSpore Lite is a device-cloud collaborative, lightweight, high-performance AI inference framework from MindSpore that meets the growing demand for on-device AI applications. MindSpore Lite focuses on deploying and running AI technology on device-side hardware, and it is already widely used in Huawei HMS and smart terminals for applications such as image classification, object detection, face recognition, and text recognition. Going forward, MindSpore Lite will work with the MindSpore AI community to enrich the AI software/hardware application ecosystem.
+
+
+![MindSpore Lite Architecture](../../docs/MindSpore-Lite-architecture.png)
+
+For more details, see the [MindSpore Lite Architecture Guide](https://www.mindspore.cn/lite/docs/zh-CN/master/architecture.html).
+
+## MindSpore Lite Features
+
+1. Device-cloud collaboration for one-stop training and inference
+
+    - Provides an end-to-end flow of model training, model conversion and optimization, deployment, and inference.
+    - A unified IR enables integrated device-cloud AI applications.
+
+2. Ultra-lightweight
+
+    - Supports model quantization and compression, making models smaller and faster.
+    - Provides the ultra-lightweight inference solution MindSpore Micro to meet deployment requirements in extreme environments such as smart watches and headphones.
+
+3. High performance
+
+    - The built-in high-performance kernel computing library NNACL supports multiple convolution optimization algorithms such as sliding window, Im2Col+GEMM, and Winograd.
+    - Assembly-level optimization with heterogeneous scheduling across CPU, GPU, and NPU maximizes hardware compute power while minimizing inference latency and power consumption.
+
+4. Broad coverage
+
+    - Supports mobile operating systems such as iOS and Android.
+    - Supports the LiteOS embedded operating system.
+    - Supports AI applications on a variety of smart devices, including mobile phones, large screens, tablets, and IoT devices.
+    - Supports MindSpore, TensorFlow Lite, Caffe, and ONNX models for quick deployment.
+
+## MindSpore Lite AI deployment process
+
+1. Model selection and personalized training
+
+    Select a new model, or take an existing model and perform incremental training with labeled data. When designing a model for devices, consider the model size, accuracy, and computational cost.
+
+    The MindSpore team provides a series of pre-trained models for learning tasks such as image classification and object detection. You can use the device models derived from these pre-trained models in your application.
+
+    The pre-trained models provided by MindSpore include [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/) and [Object Detection](https://download.mindspore.cn/model_zoo/official/lite/). The MindSpore team will add more preset models over time.
+
+    MindSpore allows you to retrain pre-trained models to perform other tasks. For example, a pre-trained image classification model can be retrained to recognize new image types. See [Retraining](https://www.mindspore.cn/lite/tutorial/zh-CN/master/advanced_use/retraining_of_quantized_network.html).
+
+2. Model conversion/optimization
+
+    If you use a model trained with MindSpore or a third-party framework, you need to convert it into the MindSpore Lite format using the [MindSpore Lite model converter tool](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html). Besides converting TensorFlow Lite, Caffe, ONNX, and other model formats into the MindSpore Lite format, the tool also provides operator fusion, quantization, and other functions.
+
+    MindSpore also provides a tool that converts models running on IoT devices into C code.
+
+    After these two steps, you have a model that can be deployed on the device.
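+
+    As an illustration of the conversion step, a typical converter invocation might look like the following sketch; the file names are placeholders, and all options are documented in the converter tool tutorial linked above.
+
+    ```bash
+    # Convert a TensorFlow Lite model into a MindSpore Lite model (model.ms).
+    ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model
+    ```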
+
+3. Model deployment
+
+    This stage covers model deployment, including model management, deployment, and operation and maintenance monitoring.
+
+4. Model inference
+
+    This stage performs the actual inference: the model is loaded and all model-related computation is carried out. [Inference](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html) is the process of running input data through the model to obtain predictions.
+
+    MindSpore provides [examples](#TODO) of deploying a series of pre-trained models on smart terminals.
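+
+    A minimal C++ sketch of this step is shown below, assuming the `LiteSession` runtime API from the Runtime tutorial linked above; header paths and method names may differ across releases, and error handling and preprocessing are omitted.
+
+    ```cpp
+    #include <fstream>
+    #include <string>
+    #include "include/context.h"
+    #include "include/lite_session.h"
+    #include "include/model.h"
+
+    int main() {
+      // Read the converted .ms model into memory.
+      std::ifstream ifs("model.ms", std::ios::binary | std::ios::ate);
+      std::string buf(static_cast<size_t>(ifs.tellg()), '\0');
+      ifs.seekg(0);
+      ifs.read(&buf[0], buf.size());
+
+      // Parse the model and compile it into a session (CPU context by default).
+      auto *model = mindspore::lite::Model::Import(buf.data(), buf.size());
+      mindspore::lite::Context context;
+      auto *session = mindspore::session::LiteSession::CreateSession(&context);
+      session->CompileGraph(model);
+
+      // Fill the first input tensor with preprocessed data, then run the graph.
+      auto inputs = session->GetInputs();
+      // ... copy image data into inputs[0]->MutableData() ...
+      session->RunGraph();
+
+      // Retrieve the output tensors for post-processing.
+      auto outputs = session->GetOutputs();
+      delete session;
+      return 0;
+    }
+    ```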