!17907 Add PINNs (Navier-Stokes) model for GPU and update PINNs (Schrodinger)

Merge pull request !17907 from yuyiyang/pinns_ns
This commit is contained in:
i-robot 2021-06-10 09:09:17 +08:00 committed by Gitee
commit af3d54c35b
25 changed files with 1292 additions and 189 deletions

View File

@ -84,7 +84,7 @@ In order to facilitate developers to enjoy the benefits of MindSpore framework,
- [GOMO](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/ocean_model/README.md)
- [Molecular_Dynamics](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/molecular_dynamics/README.md)
- [SPONGE](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/sponge/README.md)
- [PINNs](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/pinns/README.md)
- [Community](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/community)
## Announcements

View File

@ -84,7 +84,7 @@
- [GOMO](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/ocean_model/README.md)
- [Molecular Dynamics](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/molecular_dynamics/README.md)
- [SPONGE](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/hpc/sponge/README.md)
- [PINNs](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/pinns/README.md)
- [Community](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/community)
## Announcements

View File

@ -2,9 +2,11 @@
[查看中文](./README_CN.md)
- [Contents](#contents)
- [PINNs Description](#pinns-description)
- [Model Architecture](#model-architecture)
- [Schrodinger equation](#schrodinger-equation)
- [Navier-Stokes equation](#navier-stokes-equation)
- [Dataset](#dataset)
- [Features](#features)
- [Mixed Precision](#mixed-precision)
@ -16,19 +18,21 @@
- [Training Process](#training-process)
- [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Evaluation of Schrodinger equation scenario](#evaluation-of-schrodinger-equation-scenario)
- [Evaluation of Navier-Stokes equation scenario](#evaluation-of-navier-stokes-equation-scenario)
- [Inference Performance](#inference-performance)
- [Inference of Schrodinger equation scenario](#inference-of-schrodinger-equation-scenario)
- [Inference of Navier-Stokes equation scenario](#inference-of-navier-stokes-equation-scenario)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [PINNs Description](#contents)
PINNs (Physics-Informed Neural Networks) is a neural network approach proposed in 2019. PINNs provide a new way to solve partial differential equations with neural networks. Partial differential equations are often used to model physical, biological, and engineering systems. Such systems differ significantly from most problems in machine learning: (1) the cost of data acquisition is high, so the amount of data is usually small; (2) a large amount of prior knowledge, such as established physical laws, is hard for machine learning systems to exploit.
In PINNs, the prior knowledge in the form of a partial differential equation is first introduced as a regularization term through proper construction of the network. By exploiting this prior knowledge, the network can then be trained to very good results with very little data. The effectiveness of PINNs has been verified in various scenarios such as quantum mechanics and hydrodynamics.
[Paper](https://www.sciencedirect.com/science/article/pii/S0021999118307125): Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." *Journal of Computational Physics* 378 (2019): 686-707.
@ -38,7 +42,11 @@ Pinns is a new framework of constructing neural network for solving partial diff
## [Schrodinger equation](#contents)
The Schrodinger equation is the basic equation of quantum mechanics and describes the wave function of a particle. The PINNs model for the Schrodinger equation can be divided into two parts. First, a neural network composed of five fully connected layers is used to fit the wave function to be solved (i.e., the solution of the Schrodinger equation for the quantum mechanical system described by the dataset). The network has two outputs, representing the real and imaginary parts of the wave function. These two outputs are then followed by derivative operations; by properly combining the derivative results, the Schrodinger equation can be expressed and used as a constraint term of the network. The outputs of the whole network are the real part, the imaginary part, and some related partial derivatives of the wave function.
## [Navier-Stokes equation](#contents)
The Navier-Stokes equation describes incompressible Newtonian fluids in hydrodynamics. The PINNs model for the Navier-Stokes equation can be divided into two parts. First, a neural network composed of nine fully connected layers is used to fit a latent function and the pressure; the derivatives of the latent function are related to the velocity field. These two outputs are then followed by derivative operations, and by properly combining the derivative results, the Navier-Stokes equation can be expressed and used as a constraint term of the network.
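Concretely, writing the latent function as $\psi$ and the pressure as $p$, the velocity components are taken as $u = \psi_y$ and $v = -\psi_x$, and the residuals of the momentum equations serve as the constraint terms (this is the formulation of the referenced paper, as implemented in src/NavierStokes/net.py, with $\lambda_1$ and $\lambda_2$ the coefficients to be identified):

$f_u = u_t + \lambda_1 (u u_x + v u_y) + p_x - \lambda_2 (u_{xx} + u_{yy})$

$f_v = v_t + \lambda_1 (u v_x + v v_y) + p_y - \lambda_2 (v_{xx} + v_{yy})$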
# [Dataset](#contents)
@ -52,6 +60,14 @@ Dataset used: [NLS](https://github.com/maziarraissi/PINNs/tree/master/main/Data)
- Data format: mat files
- Note: This dataset is used in the Schrodinger equation scenario. Data will be processed in src/Schrodinger/dataset.py.
Dataset used: [cylinder nektar wake](https://github.com/maziarraissi/PINNs/tree/master/main/Data), see the [paper](https://www.sciencedirect.com/science/article/pii/S0021999118307125)
- Dataset size: 23MB, 1,000,000 points sampled from a two-dimensional incompressible fluid
- Train: 5,000 data points
- Test: all 1,000,000 data points of the dataset
- Data format: mat files
- Note: This dataset is used in the Navier-Stokes equation scenario. Data will be processed in src/NavierStokes/dataset.py (see the loading sketch below).
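For orientation, a minimal loading sketch for this dataset (assuming `scipy` is installed; the variable names are the keys read by src/NavierStokes/dataset.py, and the local path is a placeholder):

```python
# Minimal sketch: inspect the cylinder wake dataset used in the Navier-Stokes scenario.
import scipy.io as scio

data = scio.loadmat('./Data/cylinder_nektar_wake.mat')
X_star = data['X_star']  # (N, 2) spatial coordinates (x, y)
t_star = data['t']       # (T, 1) time steps
U_star = data['U_star']  # (N, 2, T) velocity components (u, v)
p_star = data['p_star']  # (N, T) pressure
print(X_star.shape, t_star.shape, U_star.shape, p_star.shape)
```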
# [Features](#contents)
## [Mixed Precision](#contents)
@ -61,7 +77,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
# [Environment Requirements](#contents)
- Hardware (GPU)
- Prepare hardware environment with GPU processor.
- Framework
- [MindSpore](https://www.mindspore.cn/install/en)
@ -88,6 +104,21 @@ After installing MindSpore via the official website, you can start training and
bash scripts/run_standalone_Schrodinger_eval.sh [CHECKPOINT_PATH] [DATASET_PATH]
```
- Navier-Stokes equation scenario running on GPU
```shell
# Running training example
export CUDA_VISIBLE_DEVICES=0
python train.py --scenario=NavierStokes --datapath=[DATASET_PATH] --noise=[NOISE] > train.log
OR
bash scripts/run_standalone_NavierStokes_train.sh [DATASET] [NOISE]
# Running evaluation example
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=NavierStokes --datapath=[DATASET_PATH] > eval.log
OR
bash scripts/run_standalone_NavierStokes_eval.sh [CHECKPOINT] [DATASET]
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
@ -100,17 +131,31 @@ After installing MindSpore via the official website, you can start training and
├── scripts
│ ├──run_standalone_Schrodinger_train.sh // shell script for Schrodinger equation scenario training on GPU
| ├──run_standalone_Schrodinger_eval.sh // shell script for Schrodinger equation scenario evaluation on GPU
| ├──run_standalone_NavierStokes_train.sh // shell script for Navier-Stokes equation scenario training on GPU
| ├──run_standalone_NavierStokes_eval.sh // shell script for Navier-Stokes equation scenario evaluation on GPU
├── src
| ├──Schrodinger // Schrodinger equation scenario
│ | ├──dataset.py // creating dataset
│ | ├──net.py // PINNs (Schrodinger) architecture
│ | ├──loss.py // PINNs (Schrodinger) loss function
│ | ├──train_sch.py // PINNs (Schrodinger) training process
│ | ├──eval_sch.py // PINNs (Schrodinger) evaluation process
│ | ├──export_sch.py // export PINNs (Schrodinger) model
| ├──NavierStokes // Navier-Stokes equation scenario
│ | ├──dataset.py // creating dataset
│ | ├──net.py // PINNs (Navier-Stokes) architecture
│ | ├──loss.py // PINNs (Navier-Stokes) loss function
│ | ├──train_ns.py // PINNs (Navier-Stokes) training process
│ | ├──eval_ns.py // PINNs (Navier-Stokes) evaluation process
│ | ├──export_ns.py // export PINNs (Navier-Stokes) model
│ ├──config.py // parameter configuration
├── train.py // training script
├── eval.py // evaluation script
├── export.py // export checkpoint files into mindir
├── requirements // additional packages required to run PINNs networks
```
## [Script Parameters](#contents)
Parameters for both training and evaluation can be set in config.py
@ -128,10 +173,26 @@ Parameters for both training and evaluation can be set in config.py
'ck_path':'./ckpoints/' # path to save checkpoint files (.ckpt)
```
- config for Navier-Stokes equation scenario
```python
'epoch':18000 # number of epochs in training
'lr': 0.01 # learning rate
'n_train':5000 # amount of training data
'path':'./Data/cylinder_nektar_wake.mat' # data set path
'noise':0.0 # noise intensity
'num_neuron':20 # number of neurons in fully connected hidden layer
'ck_path':'./navier_ckpoints/' # path to save checkpoint files (.ckpt)
'seed':0 # random seed
'batch_size':500 # batch size
```
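For reference, these entries correspond to a plain Python dictionary in `src/config.py`. A sketch of what it may look like is shown below (the dictionary name `config_navier` is taken from its use in `eval.py`; the actual file may differ slightly):

```python
# Sketch of the Navier-Stokes configuration dictionary (values from the listing above).
config_navier = {
    'epoch': 18000,                             # number of epochs in training
    'lr': 0.01,                                 # learning rate
    'n_train': 5000,                            # amount of training data
    'path': './Data/cylinder_nektar_wake.mat',  # dataset path
    'noise': 0.0,                               # noise intensity
    'num_neuron': 20,                           # neurons per fully connected hidden layer
    'ck_path': './navier_ckpoints/',            # path to save checkpoint files (.ckpt)
    'seed': 0,                                  # random seed
    'batch_size': 500,                          # batch size
}
```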
For more configuration details, please refer to the script `config.py`.
## [Training Process](#contents)
Schrodinger equation scenario
- Running Schrodinger equation scenario on GPU
```bash
@ -140,6 +201,27 @@ For more configuration details, please refer the script `config.py`.
- The python command above will run in the background; you can view the results through the file `train.log`.
Navier-Stokes equation scenario
- Running Navier-Stokes equation scenario on GPU
```bash
python train.py --scenario='NavierStokes' --datapath=[DATAPATH] --noise=[NOISE] > train.log 2>&1 &
```
- The python command above will run in the background; you can view the results through the file `train.log`.
The loss values can be obtained as follows:
```bash
# grep "loss is " train.log
epoch: 1 step: 10, loss is 0.36841542
epoch time: 24938.602 ms, per step time: 2493.86 ms
epoch: 2 step: 10, loss is 0.21505485
epoch time: 985.929 ms, per step time: 98.593 ms
...
```
The loss values can be obtained as follows:
```bash
@ -155,9 +237,9 @@ For more configuration details, please refer the script `config.py`.
## [Evaluation Process](#contents)
- Evaluation of Schrodinger equation scenario when running on GPU
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path.
```bash
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=Schrodinger --datapath=[DATASET_PATH] > eval.log
@ -170,6 +252,22 @@ For more configuration details, please refer the script `config.py`.
evaluation error is: 0.01207
```
- Evaluation of Navier-Stokes equation scenario when running on GPU
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path.
```bash
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=NavierStokes --datapath=[DATASET_PATH] > eval.log
```
The above python command will run in the background. You can view the results through the file "eval.log". The error of evaluation is as follows:
```bash
# grep "Error of lambda 1" eval.log
Error of lambda 1 is 0.2698
Error of lambda 2 is 0.8558
```
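For reference, these are relative errors of the identified coefficients against the reference values used in `src/NavierStokes/eval_ns.py`, $\lambda_1 = 1.0$ and $\lambda_2 = 0.01$: the reported numbers correspond to $|\hat{\lambda}_1 - 1.0| \times 100\%$ and $|\hat{\lambda}_2 - 0.01| / 0.01 \times 100\%$.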
# [Model Description](#contents)
## [Performance](#contents)
@ -195,6 +293,43 @@ For more configuration details, please refer the script `config.py`.
| Parameters | 32K |
| Checkpoint for Fine tuning | 363K (.ckpt file) |
#### [Evaluation of Navier-Stokes equation scenario](#contents)
| Parameters | GPU |
| -------------------------- | ------------------------------------------------------------ |
| Model Version | PINNs (Navier-Stokes), noiseless version |
| Resource | NV Tesla V100-32G |
| uploaded Date | 6/7/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | cylinder nektar wake |
| Training Parameters | epoch=18000, lr=0.01, batch size=500. See src/config.py for details |
| Optimizer | Adam |
| Loss Function | src/NavierStokes/loss.py |
| outputs | the velocity field (x and y components), pressure, and the fitting of the Navier-Stokes equation (x and y components) |
| Loss | 0.0007302 |
| Speed | 99ms/step |
| Total time | 4.9431 hours |
| Parameters | 3.1K |
| Checkpoint for Fine tuning | 39K (.ckpt file) |
| Parameters | GPU |
| ------------------------------------ | ------------------------------------------------------------ |
| Model Version | PINNs (Navier-Stokes), noisy version |
| Resource | NV Tesla V100-32G |
| uploaded Date | 6/7/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | cylinder nektar wake |
| Noise intensity of the training data | 0.01 |
| Training Parameters | epoch=18000, lr=0.01, batch size=500. See src/config.py for details |
| Optimizer | Adam |
| Loss Function | src/NavierStokes/loss.py |
| outputs | the velocity field (x and y components), pressure, and the fitting of the Navier-Stokes equation (x and y components) |
| Loss | 0.001309 |
| Speed | 100ms/step |
| Total time | 5.0084 hours |
| Parameters | 3.1K |
| Checkpoint for Fine tuning | 39K (.ckpt file) |
### [Inference Performance](#contents)
#### [Inference of Schrodinger equation scenario](#contents)
@ -209,10 +344,35 @@ For more configuration details, please refer the script `config.py`.
| outputs | real and imaginary parts of the wave function |
| mean square error | 0.01323 |
#### [Inference of Navier-Stokes equation scenario](#contents)
| Parameters | GPU |
| -------------------------------- | ------------------------------------------------------------ |
| Model Version | PINNs (Navier-Stokes), noiseless version |
| Resource | NV Tesla V100-32G |
| uploaded Date | 6/7/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | cylinder nektar wake |
| outputs | undetermined coefficients $\lambda_1$ and $\lambda_2$ of the Navier-Stokes equation |
| error percentage of $\lambda_1$ | 0.2698% |
| error percentage of $\lambda_2$ | 0.8558% |
| Parameters | GPU |
| ------------------------------------ | ------------------------------------------------------------ |
| Model Version | PINNs (Navier-Stokes), noisy version |
| Resource | NV Tesla V100-32G |
| uploaded Date | 6/7/2021 (month/day/year) |
| MindSpore Version | 1.2.0 |
| Dataset | cylinder nektar wake |
| Noise intensity of the training data | 0.01 |
| outputs | undetermined coefficients $\lambda_1$ and $\lambda_2$ of the Navier-Stokes equation |
| error percentage of $\lambda_1$ | 0.3655% |
| error percentage of $\lambda_2$ | 2.3851% |
# [Description of Random Situation](#contents)
We use a random seed in train.py, which can be reset in src/config.py.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

View File

@ -3,9 +3,10 @@
[View English](./README.md)
- [Contents](#contents)
- [PINNs Description](#pinns-description)
- [Model Architecture](#model-architecture)
- [Schrodinger equation](#schrodinger-equation)
- [Navier-Stokes equation](#navier-stokes-equation)
- [Dataset](#dataset)
- [Features](#features)
- [Mixed Precision](#mixed-precision)
@ -16,19 +17,20 @@
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Evaluation of Schrodinger equation scenario](#evaluation-of-schrodinger-equation-scenario)
- [Evaluation of Navier-Stokes equation scenario](#evaluation-of-navier-stokes-equation-scenario)
- [Inference Performance](#inference-performance)
- [Inference of Schrodinger equation scenario](#inference-of-schrodinger-equation-scenario)
- [Inference of Navier-Stokes equation scenario](#inference-of-navier-stokes-equation-scenario)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [PINNs Description](#contents)
PINNs (Physics-Informed Neural Networks) is a neural network approach proposed in 2019. It provides a new way to solve partial differential equations with neural networks. Partial differential equations are often used to model physical, biological, and engineering systems. Such problems differ from most machine learning problems in two significant ways: (1) the cost of data acquisition is high, so the amount of data is usually small; (2) a large body of prior research results, such as physical laws, exists for these problems but is hard for machine learning systems to exploit. PINNs first introduce the prior knowledge, in the form of a partial differential equation, as a regularization constraint through proper construction of the network; by exploiting the strong constraint imposed by this prior knowledge, the network can be trained to very good results with very little data. PINNs have been successfully verified in scenarios such as quantum mechanics and hydrodynamics, where the network can be trained with very little data to model the corresponding physical system.
[Paper](https://www.sciencedirect.com/science/article/pii/S0021999118307125): Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." *Journal of Computational Physics* 378 (2019): 686-707.
@ -38,7 +40,11 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
## [Schrodinger equation](#contents)
The Schrodinger equation is the basic equation of quantum mechanics and describes the physical law governing a particle's wave function. The PINNs model for the Schrodinger equation consists of two parts. First, a neural network composed of five fully connected layers is used to fit the wave function to be solved (i.e., the solution of the Schrodinger equation for the quantum mechanical system described by the dataset). The network has two outputs, representing the real and imaginary parts of the wave function. These outputs are then followed by derivative operations; by properly combining the derivative results, the Schrodinger equation can be expressed and used as a constraint term of the network. The real part, the imaginary part, and some related partial derivatives of the wave function are the outputs of the whole network.
## [Navier-Stokes equation](#contents)
The Navier-Stokes equation describes viscous Newtonian fluids in hydrodynamics. The PINNs model for the Navier-Stokes equation consists of two parts. First, a neural network composed of nine fully connected layers is constructed; it has two outputs, representing a latent function and the pressure. The derivatives of the latent function are related to the velocity field. These outputs are then followed by derivative operations; by properly combining the derivative results, the Navier-Stokes equation can be expressed and used as a constraint term of the network. The outputs of the whole network are the velocity field, the pressure, and the constraint terms produced by the Navier-Stokes equation.
# [Dataset](#contents)
@ -52,6 +58,14 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
- Data format: mat files
- Note: This dataset is used in the Schrodinger equation scenario. Data will be processed in src/Schrodinger/dataset.py.
Dataset used: [cylinder nektar wake](https://github.com/maziarraissi/PINNs/tree/master/main/Data), see the [paper](https://www.sciencedirect.com/science/article/pii/S0021999118307125)
- Dataset size: 23MB, 1,000,000 points sampled from a two-dimensional incompressible fluid
- Train: 5,000 data points
- Test: all 1,000,000 data points of the dataset
- Data format: mat files
- Note: This dataset is used in the Navier-Stokes equation scenario. Data will be processed in src/NavierStokes/dataset.py.
# [Features](#contents)
## [Mixed Precision](#contents)
@ -80,12 +94,27 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
export CUDA_VISIBLE_DEVICES=0
python train.py --scenario=Schrodinger --datapath=[DATASET_PATH] > train.log
OR
bash scripts/run_standalone_Schrodinger_train.sh [DATASET_PATH]
# Running evaluation example
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=Schrodinger --datapath=[DATASET_PATH] > eval.log
OR
bash scripts/run_standalone_Schrodinger_eval.sh [CHECKPOINT_PATH] [DATASET_PATH]
```
- Running the Navier-Stokes equation scenario on GPU
```shell
# Running training example
export CUDA_VISIBLE_DEVICES=0
python train.py --scenario=NavierStokes --datapath=[DATASET_PATH] --noise=[NOISE] > train.log
OR
bash scripts/run_standalone_NavierStokes_train.sh [DATASET] [NOISE]
# Running evaluation example
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=NavierStokes --datapath=[DATASET_PATH] > eval.log
OR
bash scripts/run_standalone_NavierStokes_eval.sh [CHECKPOINT] [DATASET]
```
# [Script Description](#contents)
@ -100,13 +129,26 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
├── scripts
│ ├──run_standalone_Schrodinger_train.sh // shell script for Schrodinger equation scenario training on GPU
| ├──run_standalone_Schrodinger_eval.sh // shell script for Schrodinger equation scenario evaluation on GPU
| ├──run_standalone_NavierStokes_train.sh // shell script for Navier-Stokes equation scenario training on GPU
| ├──run_standalone_NavierStokes_eval.sh // shell script for Navier-Stokes equation scenario evaluation on GPU
├── src
| ├──Schrodinger // Schrodinger equation scenario
│ | ├──dataset.py // creating dataset
│ | ├──net.py // PINNs (Schrodinger) architecture
│ | ├──loss.py // PINNs (Schrodinger) loss function
│ | ├──train_sch.py // PINNs (Schrodinger) training process
│ | ├──eval_sch.py // PINNs (Schrodinger) evaluation process
│ | ├──export_sch.py // export PINNs (Schrodinger) model
│ ├──config.py // parameter configuration
| ├──NavierStokes // Navier-Stokes equation scenario
│ | ├──dataset.py // creating dataset
│ | ├──net.py // PINNs (Navier-Stokes) architecture
│ | ├──loss.py // PINNs (Navier-Stokes) loss function
│ | ├──train_ns.py // PINNs (Navier-Stokes) training process
│ | ├──eval_ns.py // PINNs (Navier-Stokes) evaluation process
│ | ├──export_ns.py // export PINNs (Navier-Stokes) model
├── train.py // training script
├── eval.py // evaluation script
├── export.py // export checkpoint files into mindir
├── requirements // additional packages required to run PINNs networks
```
@ -129,16 +171,52 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
'ck_path':'./ckpoints/' # path to save checkpoint files (.ckpt)
```
- config for Navier-Stokes equation scenario
```python
'epoch':18000 # number of epochs in training
'lr': 0.01 # learning rate
'n_train':5000 # amount of training data
'path':'./Data/cylinder_nektar_wake.mat' # data set path
'noise':0.0 # noise intensity
'num_neuron':20 # number of neurons in fully connected hidden layer
'ck_path':'./navier_ckpoints/' # path to save checkpoint files (.ckpt)
'seed':0 # random seed
'batch_size':500 # batch size
```
For more configuration details, please refer to the script `config.py`.
## [Training Process](#contents)
Schrodinger equation scenario
- Running the Schrodinger equation scenario on GPU
```bash
python train.py --scenario=Schrodinger --datapath=[DATASET_PATH] > train.log 2>&1 &
```
Navier-Stokes equation scenario
- Running the Navier-Stokes equation scenario on GPU
```bash
python train.py --scenario='NavierStokes' --datapath=[DATAPATH] --noise=[NOISE] > train.log 2>&1 &
```
- The python command above will run in the background; you can view the results through the file `train.log`.
The loss values can be obtained as follows:
```bash
# grep "loss is " train.log
epoch: 1 step: 10, loss is 0.36841542
epoch time: 24938.602 ms, per step time: 2493.86 ms
epoch: 2 step: 10, loss is 0.21505485
epoch time: 985.929 ms, per step time: 98.593 ms
...
```
- The python command above will run in the background; you can view the results through the file `train.log`.
The loss values can be obtained as follows:
@ -158,7 +236,7 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
- Evaluation of the Schrodinger equation scenario when running on GPU
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path.
```bash
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=Schrodinger --datapath=[DATASET_PATH] > eval.log
@ -167,10 +245,26 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
The above python command will run in the background. You can view the results through the file "eval.log". The evaluation error is as follows:
```bash
# grep "evaluation error" eval.log
evaluation error is: 0.01207
```
- Evaluation of the Navier-Stokes equation scenario when running on GPU
Before running the command below, please check the checkpoint path used for evaluation. Please set the checkpoint path to be the absolute full path.
```bash
python eval.py --ckpoint_path=[CHECKPOINT_PATH] --scenario=NavierStokes --datapath=[DATASET_PATH] > eval.log
```
The above python command will run in the background. You can view the results through the file "eval.log". The evaluation error is as follows:
```bash
# grep "Error of lambda 1" eval.log
Error of lambda 1 is 0.2698
Error of lambda 2 is 0.8558
```
# [Model Description](#contents)
## [Performance](#contents)
@ -196,6 +290,43 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
| Parameters | 32K |
| Checkpoint for Fine tuning | 363K (.ckpt file) |
#### [Evaluation of Navier-Stokes equation scenario](#contents)
| Parameters                 | GPU                                                          |
| -------------------------- | ------------------------------------------------------------ |
| Model Version              | PINNs (Navier-Stokes), noiseless version                     |
| Resource                   | NV Tesla V100-32G                                            |
| uploaded Date              | 2021-6-7                                                     |
| MindSpore Version          | 1.2.0                                                        |
| Dataset                    | cylinder nektar wake                                         |
| Training Parameters        | epoch=18000, lr=0.01, batch size=500. See src/config.py for details |
| Optimizer                  | Adam                                                         |
| Loss Function              | src/NavierStokes/loss.py                                     |
| outputs                    | the velocity field (x and y components), pressure, and the fitting of the Navier-Stokes equation (x and y components) |
| Loss                       | 0.0007302                                                    |
| Speed                      | 99ms/step                                                    |
| Total time                 | 4.9431 hours                                                 |
| Parameters                 | 3.1K                                                         |
| Checkpoint for Fine tuning | 39K (.ckpt file)                                             |
| Parameters                           | GPU                                                          |
| ------------------------------------ | ------------------------------------------------------------ |
| Model Version                        | PINNs (Navier-Stokes), noisy version                         |
| Resource                             | NV Tesla V100-32G                                            |
| uploaded Date                        | 2021-6-7                                                     |
| MindSpore Version                    | 1.2.0                                                        |
| Dataset                              | cylinder nektar wake                                         |
| Noise intensity of the training data | 0.01                                                         |
| Training Parameters                  | epoch=18000, lr=0.01, batch size=500. See src/config.py for details |
| Optimizer                            | Adam                                                         |
| Loss Function                        | src/NavierStokes/loss.py                                     |
| outputs                              | the velocity field (x and y components), pressure, and the fitting of the Navier-Stokes equation (x and y components) |
| Loss                                 | 0.001309                                                     |
| Speed                                | 100ms/step                                                   |
| Total time                           | 5.0084 hours                                                 |
| Parameters                           | 3.1K                                                         |
| Checkpoint for Fine tuning           | 39K (.ckpt file)                                             |
### [Inference Performance](#contents)
#### [Inference of Schrodinger equation scenario](#contents)
@ -210,10 +341,35 @@ PINNs是针对偏微分方程问题构造神经网络的思路具体的模型
| outputs | real and imaginary parts of the wave function |
| mean square error | 0.01323 |
#### [Inference of Navier-Stokes equation scenario](#contents)
| Parameters                      | GPU                                                  |
| ------------------------------- | ---------------------------------------------------- |
| Model Version                   | PINNs (Navier-Stokes), noiseless version             |
| Resource                        | NV Tesla V100-32G                                    |
| uploaded Date                   | 2021-6-7                                             |
| MindSpore Version               | 1.2.0                                                |
| Dataset                         | cylinder nektar wake                                 |
| outputs                         | undetermined coefficients $\lambda_1$ and $\lambda_2$ of the Navier-Stokes equation |
| error percentage of $\lambda_1$ | 0.2698%                                              |
| error percentage of $\lambda_2$ | 0.8558%                                              |
| Parameters                           | GPU                                                  |
| ------------------------------------ | ---------------------------------------------------- |
| Model Version                        | PINNs (Navier-Stokes), noisy version                 |
| Resource                             | NV Tesla V100-32G                                    |
| uploaded Date                        | 2021-6-7                                             |
| MindSpore Version                    | 1.2.0                                                |
| Dataset                              | cylinder nektar wake                                 |
| Noise intensity of the training data | 0.01                                                 |
| outputs                              | undetermined coefficients $\lambda_1$ and $\lambda_2$ of the Navier-Stokes equation |
| error percentage of $\lambda_1$      | 0.3655%                                              |
| error percentage of $\lambda_2$      | 2.3851%                                              |
# [Description of Random Situation](#contents)
We use a random seed in train.py, which can be reset in src/config.py.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).

View File

@ -15,61 +15,24 @@
"""Eval"""
import argparse
import numpy as np
from src import config
from src.NavierStokes.eval_ns import eval_PINNs_navier
from src.Schrodinger.eval_sch import eval_PINNs_sch
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Evaluate PINNs for Schrodinger equation scenario')
parser.add_argument('--ckpoint_path', type=str, help='model checkpoint(ckpt) filename')
# supported scenarios: 'Schrodinger' and 'NavierStokes'
parser.add_argument('--scenario', type=str, help='scenario for PINNs', default='Schrodinger')
parser.add_argument('--datapath', type=str, help='path for dataset', default='')
args_opt = parser.parse_args()
f_name = args_opt.ckpoint_path
pinns_scenario = args_opt.scenario
data_path = args_opt.datapath
if pinns_scenario in ['Schrodinger', 'Sch', 'sch', 'quantum']:
conf = config.config_Sch
hidden_size = conf['num_neuron']
if data_path == '':
@ -77,5 +40,13 @@ if __name__ == '__main__':
else:
dataset_path = data_path
mse_error = eval_PINNs_sch(f_name, hidden_size, dataset_path)
elif pinns_scenario in ['ns', 'NavierStokes', 'navier', 'Navier']:
conf = config.config_navier
hidden_size = conf['num_neuron']
if data_path == '':
dataset_path = conf['path']
else:
dataset_path = data_path
error = eval_PINNs_navier(f_name, dataset_path, hidden_size)
else:
print(f'{pinns_scenario} is not supported in PINNs evaluation for now')

View File

@ -12,59 +12,50 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""export checkpoint file into mindir models"""
import argparse
from src import config
from src.Schrodinger.export_sch import export_sch
from src.NavierStokes.export_ns import export_ns
parser = argparse.ArgumentParser(description='PINNs export')
parser.add_argument('--ckpoint_path', type=str, help='model checkpoint(ckpt) filename')
parser.add_argument('--file_name', type=str, help='export file name')
parser.add_argument('--scenario', type=str, help='scenario for PINNs', default='Schrodinger')
parser.add_argument('--datapath', type=str, help='path for dataset', default='')
parser.add_argument('--batch_size', type=int, help='batch size', default=0)
if __name__ == '__main__':
args_opt = parser.parse_args()
ck_file = args_opt.ckpoint_path
file_format = 'MINDIR'
file_name = args_opt.file_name
pinns_scenario = args_opt.scenario
dataset_path = args_opt.datapath
b_size = args_opt.batch_size
if pinns_scenario in ['Schrodinger', 'Sch', 'sch', 'quantum']:
conf = config.config_Sch
num_neuron = conf['num_neuron']
N0 = conf['N0']
Nb = conf['Nb']
Nf = conf['Nf']
export_sch(num_neuron, N0=N0, Nb=Nb, Nf=Nf, ck_file=ck_file,
export_format=file_format, export_name=file_name)
elif pinns_scenario in ['ns', 'NavierStokes', 'navier', 'Navier']:
conf = config.config_navier
num_neuron = conf['num_neuron']
if dataset_path != '':
path = dataset_path
else:
path = conf['path']
if b_size <= 0:
batch_size = conf['batch_size']
else:
batch_size = b_size
export_ns(num_neuron, path=path, ck_file=ck_file, batch_size=batch_size,
export_format=file_format, export_name=file_name)
else:
print(f'{pinns_scenario} scenario in PINNs is not supported to export for now')

View File

@ -0,0 +1,45 @@
#!/bin/bash
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
get_real_path() {
if [ "${1:0:1}" == "/" ]; then
echo "$1"
else
echo "$(realpath -m $PWD/$1)"
fi
}
if [ $# != 2 ] && [ $# != 3 ]
then
echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash scripts/run_standalone_NavierStokes_eval.sh [CHECKPOINT] [DATASET] [DEVICE_ID](optional, default is 0)"
echo "for example: bash scripts/run_standalone_NavierStokes_eval.sh ckpoints/checkpoint_PINNs_NavierStokes-18000_10.ckpt Data/cylinder_nektar_wake.mat 0"
echo "=============================================================================================================="
exit 1
fi
PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd)
export DEVICE_ID=0
if [ $# == 3 ];
then
export DEVICE_ID=$3
fi
ck_path=$(get_real_path $1)
data_set_path=$(get_real_path $2)
nohup python ${PROJECT_DIR}/../eval.py --ckpoint_path=$ck_path --scenario=NavierStokes --datapath=$data_set_path > eval.log 2>&1 &

View File

@ -0,0 +1,45 @@
#!/bin/bash
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
get_real_path() {
if [ "${1:0:1}" == "/" ]; then
echo "$1"
else
echo "$(realpath -m $PWD/$1)"
fi
}
if [ $# != 2 ] && [ $# != 3 ]
then
echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash scripts/run_standalone_NavierStokes_train.sh [DATASET] [NOISE] [DEVICE_ID](optional, default is 0)"
echo "for example: bash scripts/run_standalone_NavierStokes_train.sh cylinder_nektar_wake.mat 0.01 0"
echo "=============================================================================================================="
exit 1
fi
PROJECT_DIR=$(cd "$(dirname "$0")" || exit; pwd)
data_set_path=$(get_real_path $1)
coef_noise=$2
export DEVICE_ID=0
if [ $# == 3 ];
then
export DEVICE_ID=$3
fi
nohup python ${PROJECT_DIR}/../train.py --datapath=$data_set_path --scenario=NavierStokes --noise=$coef_noise > train.log 2>&1 &

View File

@ -27,7 +27,7 @@ then
echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash scripts/run_standalone_Schrodinger_eval.sh [CHECKPOINT] [DATASET] [DEVICE_ID](optional, default is 0)"
echo "for example: bash scripts/run_standalone_Schrodinger_eval.sh ckpoints/checkpoint_PINNs_Schrodinger-50000_1.ckpt Data/NLS.mat 0"
echo "=============================================================================================================="
exit 1
fi
@ -42,4 +42,4 @@ fi
ck_path=$(get_real_path $1)
data_set_path=$(get_real_path $2)
nohup python ${PROJECT_DIR}/../eval.py --ckpoint_path=$ck_path --scenario=Schrodinger --datapath=$data_set_path > eval.log 2>&1 &

View File

@ -41,4 +41,4 @@ then
export DEVICE_ID=$2
fi
nohup python ${PROJECT_DIR}/../train.py --datapath=$data_set_path --scenario=Schrodinger > train.log 2>&1 &

View File

@ -0,0 +1,14 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

View File

@ -0,0 +1,109 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Create dataset for training or evaluation"""
import mindspore.dataset as ds
import numpy as np
import scipy.io as scio
class data_set_navier_stokes:
"""
Training set for PINNs(Navier-Stokes)
Args:
n_train (int): amount of training data
path (str): path of dataset
noise (float): noise intensity, 0 for noiseless training data
train (bool): True for training set, False for evaluation set
"""
def __init__(self, n_train, path, noise, train=True):
data = scio.loadmat(path)
self.n_train = n_train
self.noise = noise
# load data
X_star = data['X_star'].astype(np.float32)
t_star = data['t'].astype(np.float32)
U_star = data['U_star'].astype(np.float32)
N = X_star.shape[0] # number of data points per time step
T = t_star.shape[0] # number of time steps
XX = np.tile(X_star[:, 0:1], (1, T))
YY = np.tile(X_star[:, 1:2], (1, T))
TT = np.tile(t_star, (1, N)).T
UU = U_star[:, 0, :]
VV = U_star[:, 1, :]
x = XX.flatten()[:, None]
y = YY.flatten()[:, None]
t = TT.flatten()[:, None]
u = UU.flatten()[:, None]
v = VV.flatten()[:, None]
self.lb = np.array([np.min(x), np.min(y), np.min(t)], np.float32)
self.ub = np.array([np.max(x), np.max(y), np.max(t)], np.float32)
if train:
idx = np.random.choice(N*T, n_train, replace=False) # sampled data points
self.noise = noise
self.x = x[idx, :]
self.y = y[idx, :]
self.t = t[idx, :]
u_train = u[idx, :]
self.u = u_train + noise*np.std(u_train)*np.random.randn(u_train.shape[0], u_train.shape[1])
v_train = v[idx, :]
self.v = v_train + noise*np.std(v_train)*np.random.randn(v_train.shape[0], v_train.shape[1])
else:
self.x = x
self.y = y
self.t = t
self.u = u
self.v = v
P_star = data['p_star'].astype(np.float32)
PP = P_star
self.p = PP.flatten()[:, None]
def __getitem__(self, index):
ans_x = self.x[index]
ans_y = self.y[index]
ans_t = self.t[index]
ans_u = self.u[index]
ans_v = self.v[index]
input_data = np.hstack((ans_x, ans_y, ans_t)).astype(np.float32)
label = np.hstack((ans_u, ans_v, np.array([0.]))).astype(np.float32)  # third column is the zero target for the PDE residuals
return input_data, label
def __len__(self):
return self.n_train
def generate_training_set_navier_stokes(batch_size, n_train, path, noise):
"""
Generate training set for PINNs (Navier-Stokes)
Args:
batch_size (int): amount of training data per batch
n_train (int): amount of training data
path (str): path of dataset
noise (float): noise intensity, 0 for noiseless training data
"""
s = data_set_navier_stokes(n_train, path, noise, True)
lb = s.lb
ub = s.ub
dataset = ds.GeneratorDataset(source=s, column_names=['data', 'label'], shuffle=True)
dataset = dataset.batch(batch_size)
return dataset, lb, ub
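# Usage sketch (illustrative; mirrors the call made in src/NavierStokes/train_ns.py). The dataset
# path is a placeholder and batch_size/n_train follow the defaults documented in the README.
if __name__ == '__main__':
    training_set, lb, ub = generate_training_set_navier_stokes(
        batch_size=500, n_train=5000, path='./Data/cylinder_nektar_wake.mat', noise=0.0)
    print('lower bound (x, y, t):', lb, 'upper bound (x, y, t):', ub)
    for data, label in training_set.create_tuple_iterator():
        print(data.shape, label.shape)  # data columns: (x, y, t); label columns: (u, v, 0)
        break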

View File

@ -0,0 +1,46 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Evaluate PINNs for Navier-Stokes equation scenario"""
import numpy as np
from mindspore import context, load_checkpoint, load_param_into_net
from src.NavierStokes.dataset import generate_training_set_navier_stokes
from src.NavierStokes.net import PINNs_navier
def eval_PINNs_navier(ck_path, path, num_neuron=20):
"""
Evaluation of PINNs for Navier-Stokes equation scenario.
Args:
ck_path (str): path of the checkpoint file for the Navier-Stokes equation scenario
path (str): path of the dataset for Navier-Stokes equation
num_neuron (int): number of neurons for fully connected layer in the network
"""
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [3, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron,
num_neuron, 2]
_, lb, ub = generate_training_set_navier_stokes(10, 10, path, 0)
n = PINNs_navier(layers, lb, ub)
param_dict = load_checkpoint(ck_path)
load_param_into_net(n, param_dict)
lambda1_pred = n.lambda1.asnumpy()
lambda2_pred = n.lambda2.asnumpy()
error_lambda_1 = np.abs(lambda1_pred - 1.0)*100
error_lambda_2 = np.abs(lambda2_pred - 0.01)/0.01 * 100
print(f'Error of lambda 1 is {error_lambda_1[0]:.6f}%')
print(f'Error of lambda 2 is {error_lambda_2[0]:.6f}%')
return error_lambda_1, error_lambda_2
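# Usage sketch (illustrative; the checkpoint and dataset paths are placeholders that follow the
# naming used by the training script and the README):
if __name__ == '__main__':
    err_l1, err_l2 = eval_PINNs_navier(
        ck_path='./navier_ckpoints/checkpoint_PINNs_NavierStokes-18000_10.ckpt',
        path='./Data/cylinder_nektar_wake.mat', num_neuron=20)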

View File

@ -0,0 +1,48 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Export PINNs (Navier-Stokes) model"""
import numpy as np
import mindspore.common.dtype as mstype
from mindspore import (Tensor, context, export, load_checkpoint,
load_param_into_net)
from src.NavierStokes.dataset import generate_training_set_navier_stokes
from src.NavierStokes.net import PINNs_navier
def export_ns(num_neuron, path, ck_file, batch_size, export_format, export_name):
"""
export PINNs for Navier-Stokes model
Args:
num_neuron (int): number of neurons for fully connected layer in the network
path (str): path of the dataset for Navier-Stokes equation
ck_file (str): path for checkpoint file
batch_size (int): batch size
export_format (str): file format to export
export_name (str): name of exported file
"""
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [3, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron,
num_neuron, 2]
_, lb, ub = generate_training_set_navier_stokes(10, 10, path, 0)
n = PINNs_navier(layers, lb, ub)
param_dict = load_checkpoint(ck_file)
load_param_into_net(n, param_dict)
inputs = Tensor(np.ones((batch_size, 3)), mstype.float32)
export(n, inputs, file_name=export_name, file_format=export_format)

View File

@ -0,0 +1,46 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Loss function for PINNs (Navier-Stokes)"""
from mindspore import nn
class PINNs_loss_navier(nn.Cell):
"""
Loss of PINNs (Navier-Stokes). Loss = mse loss + regularizer term from the PDE.
"""
def __init__(self):
super(PINNs_loss_navier, self).__init__()
self.mse = nn.MSELoss(reduction='mean')
def construct(self, pred, target):
"""
pred: prediction of PINNs (Navier-Stokes), pred = (u, v, p, fu, fv)
target: targeted value of (u, v)
"""
u_pred = pred[0]
u_target = target[:, 0:1]
v_pred = pred[1]
v_target = target[:, 1:2]
fu_pred = pred[3]
fv_pred = pred[4]
f_target = target[:, 2:3]
mse_u = self.mse(u_pred, u_target)
mse_v = self.mse(v_pred, v_target)
mse_fu = self.mse(fu_pred, f_target)
mse_fv = self.mse(fv_pred, f_target)
ans = mse_u + mse_v + mse_fu + mse_fv
return ans
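# Note on the target layout: the network returns pred = (u, v, p, fu, fv), while the dataset
# (src/NavierStokes/dataset.py) packs each label row as (u, v, 0). The shared zero column is the
# regression target for both PDE residuals fu and fv, so minimizing this loss simultaneously fits
# the measured velocities and drives the Navier-Stokes residuals toward zero.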

View File

@ -0,0 +1,266 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Define the PINNs network for the Navier-Stokes equation."""
import numpy as np
import mindspore.common.dtype as mstype
from mindspore import Parameter, Tensor, nn, ops
from mindspore.common.initializer import TruncatedNormal, Zero, initializer
from mindspore.ops import constexpr
@constexpr
def _generate_ones(batch_size):
arr = np.ones((batch_size, 1), np.float32)
return Tensor(arr, mstype.float32)
@constexpr
def _generate_zeros(batch_size):
arr = np.zeros((batch_size, 1), np.float32)
return Tensor(arr, mstype.float32)
class neural_net(nn.Cell):
"""
Neural net to fit the latent function and the pressure
Args:
layers (list(int)): num of neurons for each layer
lb (np.array): lower bound (x, y, t) of domain
ub (np.array): upper bound (x, y, t) of domain
"""
def __init__(self, layers, lb, ub):
super(neural_net, self).__init__()
self.layers = layers
self.concat = ops.Concat(axis=1)
self.lb = Tensor(lb, mstype.float32)
self.ub = Tensor(ub, mstype.float32)
self.tanh = ops.Tanh()
self.add = ops.Add()
self.matmul = ops.MatMul()
self.w0 = self._init_weight_xavier(0)
self.b0 = self._init_biase(0)
self.w1 = self._init_weight_xavier(1)
self.b1 = self._init_biase(1)
self.w2 = self._init_weight_xavier(2)
self.b2 = self._init_biase(2)
self.w3 = self._init_weight_xavier(3)
self.b3 = self._init_biase(3)
self.w4 = self._init_weight_xavier(4)
self.b4 = self._init_biase(4)
self.w5 = self._init_weight_xavier(5)
self.b5 = self._init_biase(5)
self.w6 = self._init_weight_xavier(6)
self.b6 = self._init_biase(6)
self.w7 = self._init_weight_xavier(7)
self.b7 = self._init_biase(7)
self.w8 = self._init_weight_xavier(8)
self.b8 = self._init_biase(8)
def construct(self, x, y, t):
"""Forward propagation"""
X = self.concat((x, y, t))
X = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
X = self.tanh(self.add(self.matmul(X, self.w0), self.b0))
X = self.tanh(self.add(self.matmul(X, self.w1), self.b1))
X = self.tanh(self.add(self.matmul(X, self.w2), self.b2))
X = self.tanh(self.add(self.matmul(X, self.w3), self.b3))
X = self.tanh(self.add(self.matmul(X, self.w4), self.b4))
X = self.tanh(self.add(self.matmul(X, self.w5), self.b5))
X = self.tanh(self.add(self.matmul(X, self.w6), self.b6))
X = self.tanh(self.add(self.matmul(X, self.w7), self.b7))
X = self.add(self.matmul(X, self.w8), self.b8)
return X[:, 0:1], X[:, 1:2]
def _init_weight_xavier(self, layer):
"""
Initialize weight for the ith layer
"""
in_dim = self.layers[layer]
out_dim = self.layers[layer+1]
std = np.sqrt(2/(in_dim + out_dim))
name = 'w' + str(layer)
return Parameter(default_input=initializer(TruncatedNormal(std), [in_dim, out_dim], mstype.float32),
name=name, requires_grad=True)
def _init_biase(self, layer):
"""
Initialize biase for the ith layer
"""
name = 'b' + str(layer)
return Parameter(default_input=initializer(Zero(), self.layers[layer+1], mstype.float32),
name=name, requires_grad=True)
class Grad_2_1(nn.Cell):
"""
Net has 3 inputs and 2 outputs. Using the first output to compute gradient.
"""
def __init__(self, net):
super(Grad_2_1, self).__init__()
self.net = net
self.grad = ops.GradOperation(get_all=True, sens_param=True)
def construct(self, x, y, t):
sens_1 = _generate_ones(x.shape[0])
sens_2 = _generate_zeros(x.shape[0])
return self.grad(self.net)(x, y, t, (sens_1, sens_2))
class Grad_2_2(nn.Cell):
"""
Net has 3 inputs and 2 outputs. Using the second output to compute gradient.
"""
def __init__(self, net):
super(Grad_2_2, self).__init__()
self.net = net
self.grad = ops.GradOperation(get_all=True, sens_param=True)
def construct(self, x, y, t):
sens_1 = _generate_zeros(x.shape[0])
sens_2 = _generate_ones(x.shape[0])
return self.grad(self.net)(x, y, t, (sens_1, sens_2))
class Grad_3_1(nn.Cell):
"""
Net has 3 inputs and 3 outputs. Using the first output to compute gradient.
"""
def __init__(self, net):
super(Grad_3_1, self).__init__()
self.net = net
self.grad = ops.GradOperation(get_all=True, sens_param=True)
self.gradop = self.grad(self.net)
def construct(self, x, y, t):
sens_1 = _generate_ones(x.shape[0])
sens_2 = _generate_zeros(x.shape[0])
sens_3 = _generate_zeros(x.shape[0])
return self.grad(self.net)(x, y, t, (sens_1, sens_2, sens_3))
class Grad_3_2(nn.Cell):
"""
Net has 3 inputs and 3 outputs. Using the second output to compute gradient.
"""
def __init__(self, net):
super(Grad_3_2, self).__init__()
self.net = net
self.grad = ops.GradOperation(get_all=True, sens_param=True)
def construct(self, x, y, t):
sens_1 = _generate_zeros(x.shape[0])
sens_2 = _generate_ones(x.shape[0])
sens_3 = _generate_zeros(x.shape[0])
return self.grad(self.net)(x, y, t, (sens_1, sens_2, sens_3))
class PINNs_navier(nn.Cell):
"""
PINNs for the Navier-Stokes equation.
"""
def __init__(self, layers, lb, ub):
super(PINNs_navier, self).__init__()
self.lambda1 = Parameter(default_input=initializer(Zero(), 1, mstype.float32),
name='lambda1', requires_grad=True)
self.lambda2 = Parameter(default_input=initializer(Zero(), 1, mstype.float32),
name='lambda2', requires_grad=True)
self.mul = ops.Mul()
self.add = ops.Add()
self.nn = neural_net(layers, lb, ub)
# first order gradient
self.dpsi = Grad_2_1(self.nn)
self.dpsi_dv = Grad_2_1(self.nn)
self.dpsi_duy = Grad_2_1(self.nn)
self.dpsi_dvy = Grad_2_1(self.nn)
self.dp = Grad_2_2(self.nn)
# second order gradient
self.du = Grad_3_2(self.dpsi)
self.du_duy = Grad_3_2(self.dpsi_duy)
self.dv = Grad_3_1(self.dpsi_dv)
self.dv_dvy = Grad_3_1(self.dpsi_dvy)
# third order gradient
self.dux = Grad_3_1(self.du)
self.duy = Grad_3_2(self.du_duy)
self.dvx = Grad_3_1(self.dv)
self.dvy = Grad_3_2(self.dv_dvy)
def construct(self, X):
"""forward propagation"""
x = X[:, 0:1]
y = X[:, 1:2]
t = X[:, 2:3]
ans_nn = self.nn(x, y, t)
p = ans_nn[1]
# first order gradient
d_psi = self.dpsi(x, y, t)
v = -d_psi[0]
u = d_psi[1]
d_p = self.dp(x, y, t)
px = d_p[0]
py = d_p[1]
# second order gradient
d_u = self.du(x, y, t)
ux = d_u[0]
uy = d_u[1]
ut = d_u[2]
d_v = self.dv(x, y, t)
vx = -d_v[0]
vy = -d_v[1]
vt = -d_v[2]
# third order gradient
d_ux = self.dux(x, y, t)
uxx = d_ux[0]
d_uy = self.duy(x, y, t)
uyy = d_uy[1]
d_vx = self.dvx(x, y, t)
vxx = -d_vx[0]
d_vy = self.dvy(x, y, t)
vyy = -d_vy[1]
# regularizer of the PDE (Navier-Stokes)
fu1 = self.add(self.mul(u, ux), self.mul(v, uy))
fu1 = self.mul(self.lambda1, fu1)
fu2 = self.add(uxx, uyy)
fu2 = self.mul(self.lambda2, fu2)
fu2 = self.mul(fu2, -1.0)
fu = ut + fu1 + px + fu2
fv1 = self.add(self.mul(u, vx), self.mul(v, vy))
fv1 = self.mul(self.lambda1, fv1)
fv2 = self.add(vxx, vyy)
fv2 = self.mul(self.lambda2, fv2)
fv2 = self.mul(fv2, -1.0)
fv = vt + fv1 + py + fv2
return u, v, p, fu, fv
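# Note on the Grad_* helper cells above: ops.GradOperation(get_all=True, sens_param=True) returns
# the gradients with respect to every input, weighted by the sensitivity passed for each output.
# A one-hot sensitivity tuple such as (ones, zeros) therefore picks out the gradient of a single
# output of the multi-output cell; this is how u = dpsi/dy, v = -dpsi/dx and the higher-order
# derivatives of the residuals are assembled. Minimal standalone sketch (illustrative only):
if __name__ == '__main__':
    x = Tensor(np.ones((2, 1), np.float32), mstype.float32)
    y = Tensor(np.full((2, 1), 3.0, np.float32), mstype.float32)
    t = Tensor(np.ones((2, 1), np.float32), mstype.float32)
    net = neural_net([3, 20, 20, 20, 20, 20, 20, 20, 20, 2],
                     np.array([0.0, 0.0, 0.0], np.float32), np.array([1.0, 1.0, 1.0], np.float32))
    grads_psi = Grad_2_1(net)(x, y, t)  # gradients of the first output (the latent function)
    print(grads_psi[0].shape, grads_psi[1].shape, grads_psi[2].shape)  # each (2, 1)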

View File

@ -0,0 +1,65 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Train PINNs for Navier-Stokes equation scenario"""
import numpy as np
from mindspore import Model, context, nn
from mindspore.common import set_seed
from mindspore.train.callback import (CheckpointConfig, LossMonitor,
ModelCheckpoint, TimeMonitor)
from src.NavierStokes.dataset import generate_training_set_navier_stokes
from src.NavierStokes.loss import PINNs_loss_navier
from src.NavierStokes.net import PINNs_navier
def train_navier(epoch, lr, batch_size, n_train, path, noise, num_neuron, ck_path, seed=None):
"""
Train PINNs for Navier-Stokes equation
Args:
epoch (int): number of epochs
lr (float): learning rate
batch_size (int): amount of data per batch
n_train (int): amount of training data
path (str): path of dataset
noise (float): noise intensity, 0 for noiseless training data
num_neuron (int): number of neurons for fully connected layer in the network
ck_path (str): path to store the checkpoint file
seed (int): random seed
"""
if seed is not None:
np.random.seed(seed)
set_seed(seed)
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [3, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron, num_neuron,
num_neuron, 2]
training_set, lb, ub = generate_training_set_navier_stokes(batch_size, n_train, path, noise)
n = PINNs_navier(layers, lb, ub)
opt = nn.Adam(n.trainable_params(), learning_rate=lr)
loss = PINNs_loss_navier()
# callback configuration
loss_print_num = 1 # print loss per loss_print_num epochs
# save model
config_ck = CheckpointConfig(save_checkpoint_steps=1000, keep_checkpoint_max=20)
ckpoint = ModelCheckpoint(prefix="checkpoint_PINNs_NavierStokes", directory=ck_path, config=config_ck)
model = Model(network=n, loss_fn=loss, optimizer=opt)
model.train(epoch=epoch, train_dataset=training_set,
callbacks=[LossMonitor(loss_print_num), ckpoint, TimeMonitor(1)], dataset_sink_mode=True)
print('Training complete')
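# A minimal invocation sketch; the hyperparameter values mirror config_navier in src/config.py
# from this commit, and the dataset path is an assumption about the local data layout:
#
#   train_navier(epoch=18000, lr=0.01, batch_size=500, n_train=5000,
#                path='./Data/cylinder_nektar_wake.mat', noise=0.0, num_neuron=20,
#                ck_path='./navier_ckpoints/', seed=0)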

View File

@ -29,6 +29,7 @@ class PINNs_training_set:
Nf (int): number of sampled training data points for the collocation points
lb (np.array): lower bound (x, t) of domain
ub (np.array): upper bound (x, t) of domain
path (str): path of dataset
"""
def __init__(self, N0, Nb, Nf, lb, ub, path='./Data/NLS.mat'):
data = scio.loadmat(path)

View File

@ -0,0 +1,52 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Evaluate PINNs for Schrodinger equation scenario"""
import numpy as np
import mindspore.common.dtype as mstype
from mindspore import Tensor, context, load_checkpoint, load_param_into_net
from src.Schrodinger.dataset import get_eval_data
from src.Schrodinger.net import PINNs
def eval_PINNs_sch(ckpoint_name, num_neuron=100, path='./Data/NLS.mat'):
"""
Evaluation of PINNs for Schrodinger equation scenario.
Args:
ckpoint_name (str): model checkpoint file name
num_neuron (int): number of neurons for fully connected layer in the network
path (str): path of the dataset for Schrodinger equation
"""
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [2, num_neuron, num_neuron, num_neuron, num_neuron, 2]
lb = np.array([-5.0, 0.0])
ub = np.array([5.0, np.pi/2])
n = PINNs(layers, lb, ub)
param_dict = load_checkpoint(ckpoint_name)
load_param_into_net(n, param_dict)
X_star, _, _, h_star = get_eval_data(path)
X_tensor = Tensor(X_star, mstype.float32)
pred = n(X_tensor)
u_pred = pred[0].asnumpy()
v_pred = pred[1].asnumpy()
h_pred = np.sqrt(u_pred**2 + v_pred**2)
error_h = np.linalg.norm(h_star-h_pred, 2)/np.linalg.norm(h_star, 2)
print(f'evaluation error is: {error_h}')
return error_h
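# A minimal invocation sketch; the checkpoint file name is a placeholder (ModelCheckpoint
# typically names files '<prefix>-<epoch>_<step>.ckpt'):
#
#   eval_PINNs_sch('./ckpoints/checkpoint_PINNs_Schrodinger-50000_1.ckpt',
#                  num_neuron=100, path='./Data/NLS.mat')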

View File

@ -0,0 +1,52 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Export PINNs (Schrodinger) model"""
import numpy as np
import mindspore.common.dtype as mstype
from mindspore import (Tensor, context, export, load_checkpoint,
load_param_into_net)
from src.Schrodinger.net import PINNs
def export_sch(num_neuron, N0, Nb, Nf, ck_file, export_format, export_name):
"""
Export the PINNs (Schrodinger) model.
Args:
num_neuron (int): number of neurons for fully connected layer in the network
N0 (int): number of data points sampled from the initial condition,
0<N0<=256 for the default NLS dataset
Nb (int): number of data points sampled from the boundary condition,
0<Nb<=201 for the default NLS dataset. Size of training set = N0+2*Nb
Nf (int): number of collocation points, collocation points are used
to calculate the regularizer for the network from the Schrodinger equation.
0<Nf<=51456 for the default NLS dataset
ck_file (str): path for checkpoint file
export_format (str): file format to export
export_name (str): name of exported file
"""
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [2, num_neuron, num_neuron, num_neuron, num_neuron, 2]
lb = np.array([-5.0, 0.0])
ub = np.array([5.0, np.pi/2])
n = PINNs(layers, lb, ub)
param_dict = load_checkpoint(ck_file)
load_param_into_net(n, param_dict)
batch_size = N0 + 2*Nb + Nf
inputs = Tensor(np.ones((batch_size, 2)), mstype.float32)
export(n, inputs, file_name=export_name, file_format=export_format)
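# A minimal invocation sketch; the checkpoint path and export name are placeholders, and
# 'MINDIR' is one of the file formats accepted by mindspore.export:
#
#   export_sch(num_neuron=100, N0=50, Nb=50, Nf=20000,
#              ck_file='./ckpoints/checkpoint_PINNs_Schrodinger-50000_1.ckpt',
#              export_format='MINDIR', export_name='pinns_schrodinger')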

View File

@ -13,13 +13,13 @@
# limitations under the License.
# ============================================================================
"""Loss function for PINNs (Schrodinger)"""
from mindspore import nn, ops
import mindspore.common.dtype as mstype
from mindspore import nn, ops
class PINNs_loss(nn.Cell):
"""
Loss of the PINNs network, only works with full-batch training. Training data are organized in
Loss of the PINNs network (Schrodinger), only works with full-batch training. Training data are organized in
the following order: initial condition points ([0:n0]), boundary condition points ([n0:(n0+2*nb)]),
collocation points ([(n0+2*nb)::])
"""

View File

@ -14,10 +14,10 @@
# ============================================================================
"""Define the PINNs network for the Schrodinger equation."""
import numpy as np
import mindspore.common.dtype as mstype
from mindspore import Parameter, Tensor, nn, ops
from mindspore.common.initializer import TruncatedNormal, Zero, initializer
from mindspore.ops import constexpr
import mindspore.common.dtype as mstype
@constexpr
@ -37,7 +37,7 @@ class neural_net(nn.Cell):
Neural net to fit the wave function
Args:
layers (int): num of neurons for each layer
layers (list(int)): num of neurons for each layer
lb (np.array): lower bound (x, t) of domain
ub (np.array): upper bound (x, t) of domain
"""
@ -64,7 +64,7 @@ class neural_net(nn.Cell):
self.b4 = self._init_biase(4)
def construct(self, x, t):
"""forward propagation"""
"""Forward propagation"""
X = self.concat((x, t))
X = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
@ -98,7 +98,7 @@ class neural_net(nn.Cell):
class Grad_1(nn.Cell):
"""
Using the first output to compute gradient.
Net has 2 inputs and 2 outputs. Using the first output to compute gradient.
"""
def __init__(self, net):
super(Grad_1, self).__init__()
@ -113,7 +113,7 @@ class Grad_1(nn.Cell):
class Grad_2(nn.Cell):
"""
Using the second output to compute gradient.
Net has 2 inputs and 2 outputs. Using the second output to compute gradient.
"""
def __init__(self, net):
super(Grad_2, self).__init__()

View File

@ -0,0 +1,73 @@
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Train PINNs for Schrodinger equation scenario"""
import numpy as np
from mindspore.common import set_seed
from mindspore import context, nn, Model
from mindspore.train.callback import (CheckpointConfig, LossMonitor,
ModelCheckpoint, TimeMonitor)
from src.Schrodinger.dataset import generate_PINNs_training_set
from src.Schrodinger.net import PINNs
from src.Schrodinger.loss import PINNs_loss
def train_sch(epoch=50000, lr=0.0001, N0=50, Nb=50, Nf=20000, num_neuron=100, seed=None,
path='./Data/NLS.mat', ck_path='./ckpoints/'):
"""
Train PINNs network for Schrodinger equation
Args:
epoch (int): number of epochs
lr (float): learning rate
N0 (int): number of data points sampled from the initial condition,
0<N0<=256 for the default NLS dataset
Nb (int): number of data points sampled from the boundary condition,
0<Nb<=201 for the default NLS dataset. Size of training set = N0+2*Nb
Nf (int): number of collocation points, collocation points are used
to calculate the regularizer for the network from the Schrodinger equation.
0<Nf<=51456 for the default NLS dataset
num_neuron (int): number of neurons for fully connected layer in the network
seed (int): random seed
path (str): path of the dataset for Schrodinger equation
ck_path (str): path to store checkpoint files (.ckpt)
"""
if seed is not None:
np.random.seed(seed)
set_seed(seed)
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [2, num_neuron, num_neuron, num_neuron, num_neuron, 2]
lb = np.array([-5.0, 0.0])
ub = np.array([5.0, np.pi/2])
training_set = generate_PINNs_training_set(N0, Nb, Nf, lb, ub, path=path)
n = PINNs(layers, lb, ub)
opt = nn.Adam(n.trainable_params(), learning_rate=lr)
loss = PINNs_loss(N0, Nb, Nf)
# callback configuration
loss_print_num = 1 # print loss per loss_print_num epochs
# save model
config_ck = CheckpointConfig(save_checkpoint_steps=1000, keep_checkpoint_max=50)
ckpoint = ModelCheckpoint(prefix="checkpoint_PINNs_Schrodinger", directory=ck_path, config=config_ck)
model = Model(network=n, loss_fn=loss, optimizer=opt)
model.train(epoch=epoch, train_dataset=training_set,
callbacks=[LossMonitor(loss_print_num), ckpoint, TimeMonitor(1)], dataset_sink_mode=True)
print('Training complete')
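# A minimal invocation sketch; the values mirror config_Sch in src/config.py from this commit:
#
#   train_sch(epoch=50000, lr=0.0001, N0=50, Nb=50, Nf=20000, num_neuron=100,
#             seed=2, path='./Data/NLS.mat', ck_path='./ckpoints/')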

View File

@ -20,3 +20,7 @@ Network config setting
# config for Schrodinger equation scenario
config_Sch = {'epoch': 50000, 'lr': 0.0001, 'N0': 50, 'Nb': 50, 'Nf': 20000, 'num_neuron': 100,
'seed': 2, 'path': './Data/NLS.mat', 'ck_path': './ckpoints/'}
# config for Navier-Stokes equation scenario
config_navier = {'epoch': 18000, 'lr': 0.01, 'n_train': 5000, 'path': './Data/cylinder_nektar_wake.mat',
'noise': 0.0, 'num_neuron': 20, 'ck_path': './navier_ckpoints/', 'seed': 0, 'batch_size': 500}

View File

@ -12,84 +12,43 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""train PINNs"""
"""Train PINNs"""
import argparse
import numpy as np
from mindspore import Model, context, nn
from mindspore.train.callback import (CheckpointConfig, LossMonitor,
ModelCheckpoint, TimeMonitor)
from mindspore.common import set_seed
from src import config
from src.Schrodinger.dataset import generate_PINNs_training_set
from src.Schrodinger.loss import PINNs_loss
from src.Schrodinger.net import PINNs
def train_sch(epoch=50000, lr=0.0001, N0=50, Nb=50, Nf=20000, num_neuron=100, seed=None,
path='./Data/NLS.mat', ck_path='./ckpoints/'):
"""
Train PINNs network for Schrodinger equation
Args:
epoch (int): number of epochs
lr (float): learning rate
N0 (int): number of data points sampled from the initial condition,
0<N0<=256 for the default NLS dataset
Nb (int): number of data points sampled from the boundary condition,
0<Nb<=201 for the default NLS dataset. Size of training set = N0+2*Nb
Nf (int): number of collocation points, collocation points are used
to calculate the regularizer for the network from the Schrodinger equation.
0<Nf<=51456 for the default NLS dataset
num_neuron (int): number of neurons for fully connected layer in the network
seed (int): random seed
path (str): path of the dataset for Schrodinger equation
ck_path (str): path to store checkpoint files (.ckpt)
"""
if seed is not None:
np.random.seed(seed)
set_seed(seed)
context.set_context(mode=context.GRAPH_MODE, device_target='GPU')
layers = [2, num_neuron, num_neuron, num_neuron, num_neuron, 2]
lb = np.array([-5.0, 0.0])
ub = np.array([5.0, np.pi/2])
training_set = generate_PINNs_training_set(N0, Nb, Nf, lb, ub, path=path)
n = PINNs(layers, lb, ub)
opt = nn.Adam(n.trainable_params(), learning_rate=lr)
loss = PINNs_loss(N0, Nb, Nf)
# callback configuration
loss_print_num = 1 # print loss per loss_print_num epochs
# save model
config_ck = CheckpointConfig(save_checkpoint_steps=1000, keep_checkpoint_max=50)
ckpoint = ModelCheckpoint(prefix="checkpoint_PINNs_Schrodinger", directory=ck_path, config=config_ck)
model = Model(network=n, loss_fn=loss, optimizer=opt)
model.train(epoch=epoch, train_dataset=training_set,
callbacks=[LossMonitor(loss_print_num), ckpoint, TimeMonitor(1)], dataset_sink_mode=True)
print('Training complete')
from src.NavierStokes.train_ns import train_navier
from src.Schrodinger.train_sch import train_sch
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Train PINNs')
#only support 'Schrodinger' for now
parser.add_argument('--scenario', type=str, help='scenario for PINNs', default='Schrodinger')
parser.add_argument('--datapath', type=str, help='path for dataset', default='')
parser.add_argument('--noise', type=float, help='noise intensity, Navier-Stokes only; negative values keep the config default', default=-0.5)
parser.add_argument('--epoch', type=int, help='number of epochs for training', default=0)
args_opt = parser.parse_args()
pinns_scenario = args_opt.scenario
data_path = args_opt.datapath
if pinns_scenario == 'Schrodinger':
epoch_num = args_opt.epoch
if pinns_scenario in ['Schrodinger', 'Sch', 'sch', 'quantum']:
conf = config.config_Sch
if data_path != '':
conf['path'] = data_path
if epoch_num > 0:
conf['epoch'] = epoch_num
train_sch(**conf)
elif pinns_scenario in ['ns', 'NavierStokes', 'navier', 'Navier']:
conf = config.config_navier
if data_path != '':
conf['path'] = data_path
noise = args_opt.noise
if noise >= 0:
conf['noise'] = noise
if epoch_num > 0:
conf['epoch'] = epoch_num
train_navier(**conf)
else:
print(f'{pinns_scenario} is not supported in PINNs training for now')
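# Example command lines for this entry script (paths are assumptions about the local data
# layout; leaving --epoch and --noise at their defaults keeps the values from src/config.py):
#
#   python train.py --scenario=Schrodinger --datapath=./Data/NLS.mat
#   python train.py --scenario=NavierStokes --datapath=./Data/cylinder_nektar_wake.mat --noise=0.0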