From df8bedd299df3b0e6800b384b06ff17f8f7b6b02 Mon Sep 17 00:00:00 2001
From: zhanke
Date: Mon, 24 Aug 2020 10:35:45 +0800
Subject: [PATCH] gat modelzoo

---
 model_zoo/official/gnn/gat/README.md          | 247 ++++++++++--------
 ...ess_data.sh => run_process_data_ascend.sh} |   4 +-
 .../{run_train.sh => run_train_ascend.sh}     |   2 +-
 model_zoo/official/gnn/gat/src/dataset.py     |  13 +-
 4 files changed, 153 insertions(+), 113 deletions(-)
 rename model_zoo/official/gnn/gat/scripts/{run_process_data.sh => run_process_data_ascend.sh} (91%)
 mode change 100755 => 100644
 rename model_zoo/official/gnn/gat/scripts/{run_train.sh => run_train_ascend.sh} (95%)

diff --git a/model_zoo/official/gnn/gat/README.md b/model_zoo/official/gnn/gat/README.md
index 0c46aebbaf..90c7258d17 100644
--- a/model_zoo/official/gnn/gat/README.md
+++ b/model_zoo/official/gnn/gat/README.md
@@ -3,100 +3,116 @@
 - [Graph Attention Networks Description](#graph-attention-networks-description)
 - [Model architecture](#model-architecture)
 - [Dataset](#dataset)
-  - [Data Preparation](#data-preparation)
 - [Features](#features)
   - [Mixed Precision](#mixed-precision)
 - [Environment Requirements](#environment-requirements)
-- [Structure](#structure)
-  - [Parameter configuration](#parameter-configuration)
-- [Running the example](#running-the-example)
-  - [Usage](#usage)
-  - [Result](#result)
+- [Quick Start](#quick-start)
+- [Script Description](#script-description)
+  - [Script and Sample Code](#script-and-sample-code)
+  - [Script Parameters](#script-parameters)
+  - [Training Process](#training-process)
+    - [Training](#training)
+- [Model Description](#model-description)
+  - [Performance](#performance)
+    - [Evaluation Performance](#evaluation-performance)
+    - [Inference Performance](#inference-performance)
 - [Description of random situation](#description-of-random-situation)
-- [Others](#others)
+- [ModelZoo Homepage](#modelzoo-homepage)
 
-# Graph Attention Networks Description
+# [Graph Attention Networks Description](#contents)
 
 Graph Attention Networks (GAT) was proposed in 2017 by Petar Veličković et al. By leveraging masked self-attentional layers to address the shortcomings of prior graph-based methods, GAT achieved or matched state-of-the-art performance on both transductive datasets such as Cora and inductive datasets such as PPI. This is an example of training GAT on the Cora dataset in MindSpore.
 
 [Paper](https://arxiv.org/abs/1710.10903): Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., & Bengio, Y. (2017). Graph attention networks. arXiv preprint arXiv:1710.10903.
 
-# Model architecture
-
-An illustration of multi- head attention (with K = 3 heads) by node 1 on its neighborhood can be found below:
-
-![](https://camo.githubusercontent.com/4fe1a90e67d17a2330d7cfcddc930d5f7501750c/68747470733a2f2f7777772e64726f70626f782e636f6d2f732f71327a703170366b37396a6a6431352f6761745f6c617965722e706e673f7261773d31)
+# [Model architecture](#contents)
 
 Note that depending on whether an attention layer is the output layer of the network, the node update function either concatenates or averages the outputs of the K attention heads, as the sketch below illustrates.
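+
+To make the concatenate-versus-average rule concrete, below is a minimal NumPy sketch of combining K attention-head outputs for one node. All names here (such as `aggregate_heads`) are illustrative assumptions and are not taken from the model zoo implementation.
+
+```python
+import numpy as np
+
+def aggregate_heads(head_outputs, is_output_layer):
+    """Combine the K per-head feature vectors of one node.
+
+    head_outputs: list of K arrays, each of shape (F',), one per attention head.
+    Hidden layers concatenate the heads; the output layer averages them.
+    """
+    if is_output_layer:
+        return np.mean(head_outputs, axis=0)    # shape (F',)
+    return np.concatenate(head_outputs)         # shape (K * F',)
+
+heads = [np.random.randn(8) for _ in range(3)]  # K = 3 heads, F' = 8
+print(aggregate_heads(heads, is_output_layer=False).shape)  # (24,)
+print(aggregate_heads(heads, is_output_layer=True).shape)   # (8,)
+```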
-# Dataset
-Statistics of dataset used are summerized as below:
+# [Dataset](#contents)
+- Dataset size:
+  Statistics of the dataset used are summarized below:
 
-| | Cora | Citeseer |
-| ------------------ | -------------: | -------------: |
-| Task | Transductive | Transductive |
-| # Nodes | 2708 (1 graph) | 3327 (1 graph) |
-| # Edges | 5429 | 4732 |
-| # Features/Node | 1433 | 3703 |
-| # Classes | 7 | 6 |
-| # Training Nodes | 140 | 120 |
-| # Validation Nodes | 500 | 500 |
-| # Test Nodes | 1000 | 1000 |
+  |                    |           Cora |       Citeseer |
+  | ------------------ | -------------: | -------------: |
+  | Task               |   Transductive |   Transductive |
+  | # Nodes            | 2708 (1 graph) | 3327 (1 graph) |
+  | # Edges            |           5429 |           4732 |
+  | # Features/Node    |           1433 |           3703 |
+  | # Classes          |              7 |              6 |
+  | # Training Nodes   |            140 |            120 |
+  | # Validation Nodes |            500 |            500 |
+  | # Test Nodes       |           1000 |           1000 |
 
-## Data Preparation
-Download the dataset Cora or Citeseer provided by /kimiyoung/planetoid from github.
+- Data Preparation
+  Download the Cora or Citeseer dataset provided by kimiyoung/planetoid on GitHub.
+  > Place the dataset in any path you want; the folder should include the following files (we use the Cora dataset as an example):
 
-> Place the dataset to any path you want, the folder should include files as follows(we use Cora dataset as an example):
-
-```
-.
-└─data
-  ├─ind.cora.allx
-  ├─ind.cora.ally
-  ├─ind.cora.graph
-  ├─ind.cora.test.index
-  ├─ind.cora.tx
-  ├─ind.cora.ty
-  ├─ind.cora.x
-  └─ind.cora.y
-```
+  ```
+  .
+  └─data
+    ├─ind.cora.allx
+    ├─ind.cora.ally
+    ├─ind.cora.graph
+    ├─ind.cora.test.index
+    ├─ind.cora.tx
+    ├─ind.cora.ty
+    ├─ind.cora.x
+    └─ind.cora.y
+  ```
 
-> Generate dataset in mindrecord format for cora or citeseer.
->> Usage
-```buildoutcfg
-cd ./scripts
-# SRC_PATH is the dataset file path you downloaded, DATASET_NAME is cora or citeseer
-sh run_process_data.sh [SRC_PATH] [DATASET_NAME]
-```
+  > Generate the dataset in mindrecord format for cora or citeseer.
+  >> Usage
+  ```bash
+  cd ./scripts
+  # SRC_PATH is the dataset file path you downloaded, DATASET_NAME is cora or citeseer
+  sh run_process_data_ascend.sh [SRC_PATH] [DATASET_NAME]
+  ```
 
->> Launch
-```
-#Generate dataset in mindrecord format for cora
-./run_process_data.sh ./data cora
-#Generate dataset in mindrecord format for citeseer
-./run_process_data.sh ./data citeseer
-```
+  >> Launch
+  ```bash
+  # Generate dataset in mindrecord format for cora
+  ./run_process_data_ascend.sh ./data cora
+  # Generate dataset in mindrecord format for citeseer
+  ./run_process_data_ascend.sh ./data citeseer
+  ```
 
-# Features
+# [Features](#contents)
 
 ## Mixed Precision
 
 To utilize the strong computation power of the Ascend chip and accelerate the training process, mixed precision training is used. MindSpore can cope with FP32 inputs and FP16 operators. In the GAT example, the model is set to FP16 mode except for the loss calculation part.
 
-# Environment Requirements
+# [Environment Requirements](#contents)
 
-- Hardward (Ascend)
-- Install [MindSpore](https://www.mindspore.cn/install/en).
+- Hardware (Ascend)
+- Framework
+  - [MindSpore](https://www.mindspore.cn/install/en)
+- For more information, please check the resources below:
+  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
+  - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
 
-# Structure
+# [Quick Start](#contents)
+
+After installing MindSpore via the official website and generating the dataset as described above, you can start training and evaluation as follows.
+
+- running on Ascend
+
+  ```bash
+  # run training example with cora dataset, DATASET_NAME is cora
+  sh run_train_ascend.sh [DATASET_NAME]
+  ```
+
+# [Script Description](#contents)
+
+## [Script and Sample Code](#contents)
 
 ```shell
 .
 └─gat
   ├─README.md
   ├─scripts
-  | ├─run_process_data.sh  # Generate dataset in mindrecord format
-  | └─run_train.sh         # Launch training
+  | ├─run_process_data_ascend.sh  # Generate dataset in mindrecord format
+  | └─run_train_ascend.sh         # Launch training
   |
   ├─src
   | ├─config.py  # Training configurations
@@ -107,60 +123,73 @@ To ultilize the strong computation power of Ascend chip, and accelerate the trai
   └─train.py     # Train net
 ```
 
-## Parameter configuration
+## [Script Parameters](#contents)
 
-Parameters for training can be set in config.py.
-
-```
-"learning_rate": 0.005, # Learning rate
-"num_epochs": 200, # Epoch sizes for training
-"hid_units": [8], # Hidden units for attention head at each layer
-"n_heads": [8, 1], # Num heads for each layer
-"early_stopping": 100, # Early stop patience
-"l2_coeff": 0.0005 # l2 coefficient
-"attn_dropout": 0.6 # Attention dropout ratio
-"feature_dropout":0.6 # Feature dropout ratio
-```
+Parameters for both training and evaluation can be set in config.py.
 
-# Running the example
-## Usage
-After Dataset is correctly generated.
-```
-# run train with cora dataset, DATASET_NAME is cora
-sh run_train.sh [DATASET_NAME]
-```
+- config for GAT, CORA dataset
 
-## Result
-
-Training result will be stored in the scripts path, whose folder name begins with "train". You can find the result like the followings in log.
+  ```python
+  "learning_rate": 0.005,   # Learning rate
+  "num_epochs": 200,        # Epoch sizes for training
+  "hid_units": [8],         # Hidden units for attention head at each layer
+  "n_heads": [8, 1],        # Num heads for each layer
+  "early_stopping": 100,    # Early stop patience
+  "l2_coeff": 0.0005,       # l2 coefficient
+  "attn_dropout": 0.6,      # Attention dropout ratio
+  "feature_dropout": 0.6,   # Feature dropout ratio
+  ```
 
-
-```
-Epoch:0, train loss=1.98498 train acc=0.17143 | val loss=1.97946 val acc=0.27200
-Epoch:1, train loss=1.98345 train acc=0.15000 | val loss=1.97233 val acc=0.32600
-Epoch:2, train loss=1.96968 train acc=0.21429 | val loss=1.96747 val acc=0.37400
-Epoch:3, train loss=1.97061 train acc=0.20714 | val loss=1.96410 val acc=0.47600
-Epoch:4, train loss=1.96864 train acc=0.13571 | val loss=1.96066 val acc=0.59600
-...
-Epoch:195, train loss=1.45111 train_acc=0.56429 | val_loss=1.44325 val_acc=0.81200
-Epoch:196, train loss=1.52476 train_acc=0.52143 | val_loss=1.43871 val_acc=0.81200
-Epoch:197, train loss=1.35807 train_acc=0.62857 | val_loss=1.43364 val_acc=0.81400
-Epoch:198, train loss=1.47566 train_acc=0.51429 | val_loss=1.42948 val_acc=0.81000
-Epoch:199, train loss=1.56411 train_acc=0.55000 | val_loss=1.42632 val_acc=0.80600
-Test loss=1.5366285, test acc=0.84199995
-...
-```
+## [Training Process](#contents)
 
-Results on Cora dataset is shown by table below:
+### Training
 
-| | MindSpore + Ascend910 | Tensorflow + V100 |
-| ------------------------------------ | --------------------: | ----------------: |
-| Accuracy | 0.830933271 | 0.828649968 |
-| Training Cost(200 epochs) | 27.62298311s | 36.711862s |
-| End to End Training Cost(200 epochs) | 39.074s | 50.894s |
+- running on Ascend
+
+  ```bash
+  sh run_train_ascend.sh [DATASET_NAME]
+  ```
+
+  Training results will be stored in the scripts path, in a folder whose name begins with "train". You can find results like the
+  following in the log.
+
+  ```text
+  Epoch:0, train loss=1.98498 train acc=0.17143 | val loss=1.97946 val acc=0.27200
+  Epoch:1, train loss=1.98345 train acc=0.15000 | val loss=1.97233 val acc=0.32600
+  Epoch:2, train loss=1.96968 train acc=0.21429 | val loss=1.96747 val acc=0.37400
+  Epoch:3, train loss=1.97061 train acc=0.20714 | val loss=1.96410 val acc=0.47600
+  Epoch:4, train loss=1.96864 train acc=0.13571 | val loss=1.96066 val acc=0.59600
+  ...
+  Epoch:195, train loss=1.45111 train_acc=0.56429 | val_loss=1.44325 val_acc=0.81200
+  Epoch:196, train loss=1.52476 train_acc=0.52143 | val_loss=1.43871 val_acc=0.81200
+  Epoch:197, train loss=1.35807 train_acc=0.62857 | val_loss=1.43364 val_acc=0.81400
+  Epoch:198, train loss=1.47566 train_acc=0.51429 | val_loss=1.42948 val_acc=0.81000
+  Epoch:199, train loss=1.56411 train_acc=0.55000 | val_loss=1.42632 val_acc=0.80600
+  Test loss=1.5366285, test acc=0.84199995
+  ...
+  ```
+
+# [Model Description](#contents)
+
+## [Performance](#contents)
+
+| Parameters                           | GAT                                        |
| ------------------------------------ | ------------------------------------------ |
+| Resource                             | Ascend 910                                 |
+| Uploaded Date                        | 06/16/2020 (month/day/year)                |
+| MindSpore Version                    | 0.5.0-beta                                 |
+| Dataset                              | Cora/Citeseer                              |
+| Training Parameters                  | epoch=200                                  |
+| Optimizer                            | Adam                                       |
+| Loss Function                        | Softmax Cross Entropy                      |
+| Accuracy                             | 83.0% / 72.5%                              |
+| Speed                                | 0.195s/epoch                               |
+| Total time                           | 39s                                        |
+| Scripts                              | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/gat |
+
+# [Description of random situation](#contents)
 
-# Description of random situation
 The GAT model contains many dropout operations. If you want to disable dropout, set attn_dropout and feature_dropout to 0 in src/config.py (a sketch of this change is appended at the end of this README). Note that doing so will cause the accuracy to drop to approximately 80%.
 
-# Others
-GAT model is verified on Ascend environment, not on CPU or GPU.
\ No newline at end of file
+# [ModelZoo Homepage](#contents)
+
+Please check the official [homepage](http://gitee.com/mindspore/mindspore/tree/master/model_zoo).
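+
+As a concrete reference for the [Description of random situation](#description-of-random-situation) note above, below is a minimal sketch of the change in src/config.py that disables dropout. Only the two field names come from the Script Parameters section; the surrounding layout of config.py is an assumption here, not the literal file contents.
+
+```python
+# Hedged sketch: the two config entries to change when disabling dropout.
+# Both default to 0.6 (see Script Parameters); setting them to 0 removes
+# the random dropout masks at the cost of roughly 3 points of accuracy.
+"attn_dropout": 0.0,     # disable attention dropout
+"feature_dropout": 0.0,  # disable feature dropout
+```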
diff --git a/model_zoo/official/gnn/gat/scripts/run_process_data.sh b/model_zoo/official/gnn/gat/scripts/run_process_data_ascend.sh
old mode 100755
new mode 100644
similarity index 91%
rename from model_zoo/official/gnn/gat/scripts/run_process_data.sh
rename to model_zoo/official/gnn/gat/scripts/run_process_data_ascend.sh
index 3bf6672301..a775ee7d11
--- a/model_zoo/official/gnn/gat/scripts/run_process_data.sh
+++ b/model_zoo/official/gnn/gat/scripts/run_process_data_ascend.sh
@@ -16,7 +16,7 @@
 
 if [ $# != 2 ]
 then
-    echo "Usage: sh run_train.sh [SRC_PATH] [DATASET_NAME]"
+    echo "Usage: sh run_process_data_ascend.sh [SRC_PATH] [DATASET_NAME]"
     exit 1
 fi
 
@@ -42,7 +42,7 @@ MINDRECORD_PATH=`pwd`/data_mr
 
 rm -f $MINDRECORD_PATH/*
 
-cd ../../utils/graph_to_mindrecord || exit
+cd ../../../../utils/graph_to_mindrecord || exit
 
 python writer.py --mindrecord_script $DATASET_NAME \
 --mindrecord_file "$MINDRECORD_PATH/$DATASET_NAME" \
diff --git a/model_zoo/official/gnn/gat/scripts/run_train.sh b/model_zoo/official/gnn/gat/scripts/run_train_ascend.sh
similarity index 95%
rename from model_zoo/official/gnn/gat/scripts/run_train.sh
rename to model_zoo/official/gnn/gat/scripts/run_train_ascend.sh
index 3e9213712d..b07998e067 100644
--- a/model_zoo/official/gnn/gat/scripts/run_train.sh
+++ b/model_zoo/official/gnn/gat/scripts/run_train_ascend.sh
@@ -16,7 +16,7 @@
 
 if [ $# != 1 ]
 then
-    echo "Usage: sh run_train.sh [DATASET_NAME]"
+    echo "Usage: sh run_train_ascend.sh [DATASET_NAME]"
     exit 1
 fi
diff --git a/model_zoo/official/gnn/gat/src/dataset.py b/model_zoo/official/gnn/gat/src/dataset.py
index 0d0b544514..7636bfb74d 100644
--- a/model_zoo/official/gnn/gat/src/dataset.py
+++ b/model_zoo/official/gnn/gat/src/dataset.py
@@ -12,7 +12,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ============================================================================
-"""Preprocess data obtained for training"""
+"""
+Preprocess data obtained for training.
+Cora and Citeseer datasets are supported by our example; the original versions of these datasets are as follows:
+@inproceedings{nr,
+  title={The Network Data Repository with Interactive Graph Analytics and Visualization},
+  author={Ryan A. Rossi and Nesreen K. Ahmed},
+  booktitle={AAAI},
+  url={http://networkrepository.com},
+  year={2015}
+}
+In this example, we use the dataset splits provided by https://github.com/kimiyoung/planetoid (Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov, [Revisiting Semi-Supervised Learning with Graph Embeddings](https://arxiv.org/abs/1603.08861), ICML 2016).
+"""
 import numpy as np
 import mindspore.dataset as ds
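
Note on consuming the generated data: assuming run_process_data_ascend.sh has written mindrecord files into ./scripts/data_mr as shown above, the sketch below loads them with mindspore.dataset.GraphData. The node and feature type ids (0, 1, 2) mirror the convention used in src/dataset.py and are assumptions here, not a documented API contract.

```python
import numpy as np
import mindspore.dataset as ds

# Open the mindrecord graph written by run_process_data_ascend.sh.
graph = ds.GraphData("./scripts/data_mr/cora")

# Node type 0 is assumed to hold all Cora nodes; feature types 1 and 2 are
# assumed to be the input feature vectors and the labels, respectively.
nodes = graph.get_all_nodes(node_type=0)
features, labels = graph.get_node_feature(nodes, [1, 2])

print(np.asarray(nodes).shape)     # expected (2708,) for Cora
print(np.asarray(features).shape)  # expected (2708, 1433) for Cora
```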