forked from OSSInnovation/mindspore
update version to 0.5
This commit is contained in:
parent 08ff9099ed
commit 16976b5f68

README.md (15 changed lines)
@@ -29,7 +29,7 @@ enrichment of the AI software/hardware application ecosystem.

<img src="docs/MindSpore-architecture.png" alt="MindSpore Architecture" width="600"/>

-For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/en/0.3.0-alpha/architecture.html).
+For more details please check out our [Architecture Guide](https://www.mindspore.cn/docs/en/0.5.0-beta/architecture.html).

### Automatic Differentiation
@@ -66,7 +66,6 @@ MindSpore offers build options across multiple backends:

| Ascend910 | Ubuntu-x86 | ✔️ |
| | EulerOS-x86 | ✔️ |
| | EulerOS-aarch64 | ✔️ |
-| GPU CUDA 9.2 | Ubuntu-x86 | ✔️ |
| GPU CUDA 10.1 | Ubuntu-x86 | ✔️ |
| CPU | Ubuntu-x86 | ✔️ |
| | Windows-x86 | ✔️ |
@@ -76,7 +75,7 @@ For installation using `pip`, take `CPU` and `Ubuntu-x86` build version as an ex

1. Download whl from [MindSpore download page](https://www.mindspore.cn/versions/en), and install the package.

    ```
-   pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.3.0-alpha/MindSpore/cpu/ubuntu_x86/mindspore-0.3.0-cp37-cp37m-linux_x86_64.whl
+   pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.5.0-beta/MindSpore/cpu/ubuntu_x86/mindspore-0.5.0-cp37-cp37m-linux_x86_64.whl
    ```

2. Run the following command to verify the install.
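The verification snippet for step 2 sits outside this hunk and is not changed by the commit; as an illustrative stand-in (not the README's exact code), a minimal check that the 0.5.0 wheel installed correctly might be:

```python
# Hypothetical smoke test: a clean import and a version string of 0.5.0
# indicate the wheel installed above is usable.
import mindspore

print(mindspore.__version__)  # expected: 0.5.0
```
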
@@ -133,8 +132,8 @@ currently the containerized build options are supported as follows:

For `CPU` backend, you can directly pull and run the latest stable image using the below command:

    ```
-   docker pull mindspore/mindspore-cpu:0.3.0-alpha
-   docker run -it mindspore/mindspore-cpu:0.3.0-alpha /bin/bash
+   docker pull mindspore/mindspore-cpu:0.5.0-beta
+   docker run -it mindspore/mindspore-cpu:0.5.0-beta /bin/bash
    ```

* GPU
@@ -151,8 +150,8 @@ currently the containerized build options are supported as follows:

Then you can pull and run the latest stable image using the below command:

    ```
-   docker pull mindspore/mindspore-gpu:0.3.0-alpha
-   docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.3.0-alpha /bin/bash
+   docker pull mindspore/mindspore-gpu:0.5.0-beta
+   docker run -it --runtime=nvidia --privileged=true mindspore/mindspore-gpu:0.5.0-beta /bin/bash
    ```

To test if the docker image works, please execute the python code below and check the output:
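The Python test code referred to above lies outside the changed lines; a sketch of the kind of check that README section performs (the `TensorAdd` op and `context` API names are assumed from the 0.5-era interface) is:

```python
# Illustrative container smoke test; use device_target="CPU" inside the CPU image.
import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import operations as P

context.set_context(mode=context.GRAPH_MODE, device_target="GPU")

x = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
y = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
print(P.TensorAdd()(x, y))  # expect a 1x3x3x4 tensor filled with 2.0
```
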
@@ -187,7 +186,7 @@ please check out [docker](docker/README.md) repo for the details.

## Quickstart

-See the [Quick Start](https://www.mindspore.cn/tutorial/en/0.3.0-alpha/quick_start/quick_start.html)
+See the [Quick Start](https://www.mindspore.cn/tutorial/en/0.5.0-beta/quick_start/quick_start.html)
to implement the image classification.

## Docs
RELEASE.md (71 changed lines)

@@ -1,3 +1,74 @@
# Release 0.5.0-beta

## Major Features and Improvements

### Ascend 910 Training and Inference Framework
* New models
    * ResNext50: a simple, highly modularized network architecture using aggregated residual transformations for image classification on the ImageNet 2012 dataset.
    * MASS: a pre-training method for sequence-to-sequence language generation tasks, applied to text summarization and conversational response generation using the News Crawl 2007-2017 dataset, the Gigaword corpus, and the Cornell Movie Dialog corpus.
    * Transformer: a neural network architecture for language understanding on the WMT 2014 English-German dataset.
    * GCN: graph convolutional networks for node classification on the Cora and Citeseer datasets.
    * GAT: an attention-based graph neural network for node classification on the Cora and Citeseer datasets.
* Frontend and user interface
    * Support tensor value retrieval and assignment with mixed tensor indices in graph mode.
    * Support tensor comparison, the `len` operator, `constexpr` syntax, and tensor-index value retrieval and assignment in PyNative mode.
    * Support converting MindSpore IR to pb format for inference models.
    * Support the print operator writing data directly to disk.
    * Add a double recursive programming solution for very fast parallel-strategy search in auto parallel.
* User interface change log
    * Allow the learning rate of AdamWeightDecayDynamicLR and Lamb to be 0 ([!1826](https://gitee.com/mindspore/mindspore/pulls/1826))
    * Restrict the entire network's input parameters to be Tensors ([!1967](https://gitee.com/mindspore/mindspore/pulls/1967))
    * Turn shape and dtype into attributes instead of interfaces ([!1919](https://gitee.com/mindspore/mindspore/pulls/1919))
    * Delete multitypefungraph ([!2116](https://gitee.com/mindspore/mindspore/pulls/2116))
    * Refactor the callback module in an encapsulated way; use _CallbackManager instead of _build_callbacks ([!2236](https://gitee.com/mindspore/mindspore/pulls/2236))
    * Delete EmbeddingLookup ([!2163](https://gitee.com/mindspore/mindspore/pulls/2163))
    * Add model_type to checkpoint ([!2517](https://gitee.com/mindspore/mindspore/pulls/2517))
* Executor and performance optimization
    * Heterogeneous execution on CPU and Ascend devices is supported and verified on the Wide&Deep model.
    * Quantization training of MobileNetV2, LeNet, and ResNet50 on Ascend 910 is supported.
    * Support a new fusion architecture that performs fusion optimization across graphs and kernels to improve execution speed.
* Data processing, augmentation, and save format
    * Support data processing pipeline performance profiling.
    * Support public dataset loading, such as CLUE and COCO.
    * Support more text processing, such as more tokenizers and vocab data.
    * Support MindRecord padded data.
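As a hedged illustration of the public-dataset loading noted above (the `CLUEDataset` constructor and its `task`/`usage` keywords, as well as the file path, are assumptions not confirmed by this diff):

```python
# Hypothetical example of loading a CLUE task with the dataset API.
import mindspore.dataset as ds

clue_files = ["/path/to/afqmc/train.json"]  # placeholder path
dataset = ds.CLUEDataset(clue_files, task="AFQMC", usage="train")

for row in dataset.create_dict_iterator():
    print(row)  # inspect one record, then stop
    break
```
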
### Other Hardware Support
* GPU platform
    * New models supported: Bert and Wide&Deep.
    * Support setting max device memory.
* CPU platform
    * New model supported: LSTM.
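A sketch of the max-device-memory setting mentioned for the GPU platform (the `max_device_memory` keyword and the "3.5GB" string format are assumptions about the `context.set_context` API, not shown in this diff):

```python
# Hypothetical usage: cap how much GPU memory MindSpore may allocate.
import mindspore.context as context

context.set_context(device_target="GPU", max_device_memory="3.5GB")
```
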
## Bugfixes
* Models
    * Bert: moved from `example` to `model_zoo`, network optimized for better performance. ([!1902](https://gitee.com/mindspore/mindspore/pulls/1902))
    * VGG16: moved from `example` to `model_zoo`, network optimized for better accuracy. ([!2645](https://gitee.com/mindspore/mindspore/pulls/2645))
    * AlexNet: modified parameter settings to improve accuracy. ([!1364](https://gitee.com/mindspore/mindspore/pulls/2370))
* Python API
    * Fix bug in auto cast ([!1766](https://gitee.com/mindspore/mindspore/pulls/1766))
    * Fix bug in register_backward_hook ([!2148](https://gitee.com/mindspore/mindspore/pulls/2148))
    * Fix bug with tuple args in PyNative mode ([!1878](https://gitee.com/mindspore/mindspore/pulls/1878))
    * Fix bug in checking the numbers of arguments and graph parameters ([!1701](https://gitee.com/mindspore/mindspore/pulls/1701))
* Executor
    * Fix bug of loading input data repeatedly in PyNative mode ([!1966](https://gitee.com/mindspore/mindspore/pulls/1966))
    * Fix bug where a list could not be used as input in PyNative mode ([!1765](https://gitee.com/mindspore/mindspore/pulls/1765))
    * Fix bug in kernel selection ([!2103](https://gitee.com/mindspore/mindspore/pulls/2103))
    * Fix bug in pattern matching for batchnorm fusion under auto mixed precision ([!1851](https://gitee.com/mindspore/mindspore/pulls/1851))
    * Fix bug in generating HCCL kernel info ([!2393](https://gitee.com/mindspore/mindspore/pulls/2393))
* GPU platform
    * Fix bug where the summary feature was invalid ([!2173](https://gitee.com/mindspore/mindspore/pulls/2173))
* Data processing
    * Fix bug in CIFAR dataset reading ([!2096](https://gitee.com/mindspore/mindspore/pulls/2096))
    * Fix bug in C++ behavior of RandomCropAndResize ([!2026](https://gitee.com/mindspore/mindspore/pulls/2026))
    * Fix bug in MindRecord shuffle ([!2420](https://gitee.com/mindspore/mindspore/pulls/2420))
## Contributors
Thanks goes to these wonderful people:

Alexey Shevlyakov, avakh, baihuawei, BowenK, buxue, caifubi, caojian05, Cathy Wong, changzherui, chenfei, chengxianbin, chenhaozhe, chenjianping, chentingting, chenzomi, chujinjin, Danish Farid, dayschan, dengwentao, dinghao, etone-chan, fangzehua, fary86, geekun, Giancarlo Colmenares, gong chen, gukecai, guohongzilong, hangangqiang, heleiwang, hesham, He Wei, hexia, hongxing, huangdongrun, huanghui, islam_amin, Jamie Nisbet, Jesse Lee, jiangjinsheng, jiangzhiwen, jinyaohui, jjfeing, jojobugfree, Jonathan Yan, jonyguo, Junhan Hu, Kang, kingfo, kouzhenzhong, kpy, kswang, laiyongqiang, leopz, liangzelang, lichenever, lihongkang, Li Hongzhang, lilei, limingqi107, lirongzhen1, liubuyu, liuchongming74, liuwenhao4, liuxiao, Lixia Chen, liyanliu, liyong, lizhenyu, lvliang, Mahdi, Margaret_wangrui, meixiaowei, ms_yan, nhussain, ougongchang, panfengfeng, panyifeng, peilinwang, Peilin Wang, pkuliuliu, qianlong, rick_sanchez, shibeiji, Shida He, shijianning, simson, sunsuodong, suteng, Tinazhang, Tron Zhang, unknown, VectorSL, wandongdong, wangcong, wangdongxu, wangdongxu6, wanghua, wangnan39, Wei Luning, wenchunjiang, wenkai, wilfChen, WilliamLian, wukesong, Xian Weizhao, Xiaoda Zhang, xiefangqi, xulei2020, xunxue, xutianchun, Yang, yanghaitao, yanghaitao1, yanghaoran, yangjie, yangjie159, YangLuo, Yanjun Peng, yankai, yanzhenxiang2020, yao_yf, Yi Huaijie, yoonlee666, yuchaojie, yujianfeng, zhangzhongpeng, zhangdengcheng, Zhang Qinghua, zhangyinxia, zhangz0911gm, zhaojichen, zhaoting, zhaozhenlong, zhoufeng, zhouneng, zhousiyi, Zirui Wu, Ziyan, zjun, ZPaC, lihongzhang, wangdongxu

Contributions of any kind are welcome!

# Release 0.3.0-alpha

## Major Features and Improvements
build.sh (4 changed lines)

@@ -461,9 +461,9 @@ build_predict()
cd "${BASEPATH}/predict/output/"
|
||||
if [[ "$PREDICT_PLATFORM" == "x86_64" ]]; then
|
||||
tar -cf MSPredict-0.3.0-linux_x86_64.tar.gz include/ lib/ --warning=no-file-changed
|
||||
tar -cf MSPredict-0.5.0-linux_x86_64.tar.gz include/ lib/ --warning=no-file-changed
|
||||
elif [[ "$PREDICT_PLATFORM" == "arm64" ]]; then
|
||||
tar -cf MSPredict-0.3.0-linux_aarch64.tar.gz include/ lib/ --warning=no-file-changed
|
||||
tar -cf MSPredict-0.5.0-linux_aarch64.tar.gz include/ lib/ --warning=no-file-changed
|
||||
fi
|
||||
echo "success to build predict project!"
|
||||
}
|
||||
|
|
|
New file (67 lines): Dockerfile for the MindSpore CPU image

@@ -0,0 +1,67 @@
FROM ubuntu:18.04

MAINTAINER leonwanghui <leon.wanghui@huawei.com>

# Set env
ENV PYTHON_ROOT_PATH /usr/local/python-3.7.5
ENV PATH /usr/local/bin:$PATH

# Install base tools
RUN apt update \
    && DEBIAN_FRONTEND=noninteractive apt install -y \
    vim \
    wget \
    curl \
    xz-utils \
    net-tools \
    openssh-client \
    git \
    ntpdate \
    tzdata \
    tcl \
    sudo \
    bash-completion

# Install compile tools
RUN DEBIAN_FRONTEND=noninteractive apt install -y \
    gcc \
    g++ \
    zlibc \
    make \
    libgmp-dev \
    patch \
    autoconf \
    libtool \
    automake \
    flex

# Set bash
RUN echo "dash dash/sh boolean false" | debconf-set-selections
RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash

# Install python (v3.7.5)
RUN apt install -y libffi-dev libssl-dev zlib1g-dev libbz2-dev libncurses5-dev \
    libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libsqlite3-dev \
    && cd /tmp \
    && wget https://github.com/python/cpython/archive/v3.7.5.tar.gz \
    && tar -xvf v3.7.5.tar.gz \
    && cd /tmp/cpython-3.7.5 \
    && mkdir -p ${PYTHON_ROOT_PATH} \
    && ./configure --prefix=${PYTHON_ROOT_PATH} \
    && make -j4 \
    && make install -j4 \
    && rm -f /usr/local/bin/python \
    && rm -f /usr/local/bin/pip \
    && ln -s ${PYTHON_ROOT_PATH}/bin/python3.7 /usr/local/bin/python \
    && ln -s ${PYTHON_ROOT_PATH}/bin/pip3.7 /usr/local/bin/pip \
    && rm -rf /tmp/cpython-3.7.5 \
    && rm -f /tmp/v3.7.5.tar.gz

# Set pip source
RUN mkdir -pv /root/.pip \
    && echo "[global]" > /root/.pip/pip.conf \
    && echo "trusted-host=mirrors.aliyun.com" >> /root/.pip/pip.conf \
    && echo "index-url=http://mirrors.aliyun.com/pypi/simple/" >> /root/.pip/pip.conf

# Install MindSpore cpu whl package
RUN pip install --no-cache-dir https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.5.0-beta/MindSpore/cpu/ubuntu_x86/mindspore-0.5.0-cp37-cp37m-linux_x86_64.whl
New file (83 lines): Dockerfile for the MindSpore GPU image

@@ -0,0 +1,83 @@
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04

MAINTAINER leonwanghui <leon.wanghui@huawei.com>

# Set env
ENV PYTHON_ROOT_PATH /usr/local/python-3.7.5
ENV OMPI_ROOT_PATH /usr/local/openmpi-3.1.5
ENV PATH ${OMPI_ROOT_PATH}/bin:/usr/local/bin:$PATH
ENV LD_LIBRARY_PATH ${OMPI_ROOT_PATH}/lib:$LD_LIBRARY_PATH

# Install base tools
RUN apt update \
    && DEBIAN_FRONTEND=noninteractive apt install -y \
    vim \
    wget \
    curl \
    xz-utils \
    net-tools \
    openssh-client \
    git \
    ntpdate \
    tzdata \
    tcl \
    sudo \
    bash-completion

# Install compile tools
RUN DEBIAN_FRONTEND=noninteractive apt install -y \
    gcc \
    g++ \
    zlibc \
    make \
    libgmp-dev \
    patch \
    autoconf \
    libtool \
    automake \
    flex \
    libnccl2=2.4.8-1+cuda10.1 \
    libnccl-dev=2.4.8-1+cuda10.1

# Set bash
RUN echo "dash dash/sh boolean false" | debconf-set-selections
RUN DEBIAN_FRONTEND=noninteractive dpkg-reconfigure dash

# Install python (v3.7.5)
RUN apt install -y libffi-dev libssl-dev zlib1g-dev libbz2-dev libncurses5-dev \
    libgdbm-dev libgdbm-compat-dev liblzma-dev libreadline-dev libsqlite3-dev \
    && cd /tmp \
    && wget https://github.com/python/cpython/archive/v3.7.5.tar.gz \
    && tar -xvf v3.7.5.tar.gz \
    && cd /tmp/cpython-3.7.5 \
    && mkdir -p ${PYTHON_ROOT_PATH} \
    && ./configure --prefix=${PYTHON_ROOT_PATH} \
    && make -j4 \
    && make install -j4 \
    && rm -f /usr/local/bin/python \
    && rm -f /usr/local/bin/pip \
    && ln -s ${PYTHON_ROOT_PATH}/bin/python3.7 /usr/local/bin/python \
    && ln -s ${PYTHON_ROOT_PATH}/bin/pip3.7 /usr/local/bin/pip \
    && rm -rf /tmp/cpython-3.7.5 \
    && rm -f /tmp/v3.7.5.tar.gz

# Set pip source
RUN mkdir -pv /root/.pip \
    && echo "[global]" > /root/.pip/pip.conf \
    && echo "trusted-host=mirrors.aliyun.com" >> /root/.pip/pip.conf \
    && echo "index-url=http://mirrors.aliyun.com/pypi/simple/" >> /root/.pip/pip.conf

# Install openmpi (v3.1.5)
RUN cd /tmp \
    && wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.5.tar.gz \
    && tar -xvf openmpi-3.1.5.tar.gz \
    && cd /tmp/openmpi-3.1.5 \
    && mkdir -p ${OMPI_ROOT_PATH} \
    && ./configure --prefix=${OMPI_ROOT_PATH} \
    && make -j4 \
    && make install -j4 \
    && rm -rf /tmp/openmpi-3.1.5 \
    && rm -f /tmp/openmpi-3.1.5.tar.gz

# Install MindSpore cuda-10.1 whl package
RUN pip install --no-cache-dir https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.5.0-beta/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-0.5.0-cp37-cp37m-linux_x86_64.whl
setup.py (2 changed lines)

@@ -23,7 +23,7 @@ from setuptools import setup, find_packages
from setuptools.command.egg_info import egg_info
from setuptools.command.build_py import build_py

-version = '0.3.0'
+version = '0.5.0'

backend_policy = os.getenv('BACKEND_POLICY')
commit_id = os.getenv('COMMIT_ID').replace("\n", "")