# Welcome to the Model Zoo for MindSpore

To help developers enjoy the benefits of the MindSpore framework and Huawei chips, we will continue to add typical networks and models. If you need a model that is not yet in the zoo, please file an issue on Gitee or MindSpore and we will consider it in a timely manner. The model zoo offers:
- SOTA models using the latest MindSpore APIs
- The best benefits from MindSpore and Huawei chips
- Officially maintained and supported
## Table of Contents

- [Announcements](#announcements)
- [Models and Implementations](#models-and-implementations)
  - [Computer Vision](#computer-vision)
  - [Natural Language Processing](#natural-language-processing)

## Announcements

Date | News |
---|---|
May 31, 2020 | Support MindSpore v0.3.0-alpha |

## Models and Implementations

### Computer Vision

#### Image Classification

##### GoogleNet

Parameters | GoogleNet |
---|---|
Published Year | 2014 |
Paper | Going Deeper with Convolutions |
Resource | Ascend 910 |
Features | • Mixed Precision • Multi-GPU training support with Ascend |
MindSpore Version | 0.3.0-alpha |
Dataset | CIFAR-10 |
Training Parameters | epoch=125, batch_size=128, lr=0.1 |
Optimizer | Momentum |
Loss Function | Softmax Cross Entropy |
Accuracy | 1pc: 93.4%; 8pcs: 92.17% |
Speed | 79 ms/step |
Loss | 0.0016 |
Params (M) | 6.8 |
Checkpoint for Fine tuning | 43.07M (.ckpt file) |
Model for inference | 21.50M (.onnx file), 21.60M (.geir file) |
Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/googlenet |
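
The table's hyperparameters map directly onto a standard MindSpore training setup. The sketch below shows one way to wire them together; `GoogLeNet`, the dataset path, and `momentum=0.9` are illustrative assumptions (the table only fixes epoch, batch size, and learning rate), and the API follows recent MindSpore releases rather than 0.3.0-alpha. The authoritative version is the script linked above.

```python
# Minimal sketch, not the official script: training with the
# hyperparameters from the table above (epoch=125, batch_size=128,
# lr=0.1, Momentum, softmax cross entropy, CIFAR-10, mixed precision).
import mindspore.nn as nn
import mindspore.dataset as ds
from mindspore import context, Model

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# CIFAR-10 in binary form; preprocessing (resize, normalize) omitted.
train_ds = ds.Cifar10Dataset("/path/to/cifar-10-batches-bin").batch(128)

net = GoogLeNet(num_classes=10)  # hypothetical network class
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
# momentum=0.9 is an assumption; the table only specifies lr=0.1.
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" turns on the mixed-precision feature listed above.
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"},
              amp_level="O2")
model.train(125, train_ds)
```
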

##### ResNet50

Parameters | ResNet50 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Accuracy | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### ResNet101

Parameters | ResNet101 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Accuracy | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### VGG16

Parameters | VGG16 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Accuracy | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### AlexNet

Parameters | AlexNet |
---|---|
Published Year | 2012 |
Paper | ImageNet Classification with Deep Convolutional Neural Networks |
Resource | Ascend 910 |
Features | Support for Ascend and GPU |
MindSpore Version | 0.5.0-beta |
Dataset | CIFAR10 |
Training Parameters | epoch=30, batch_size=32 |
Optimizer | Momentum |
Loss Function | SoftmaxCrossEntropyWithLogits |
Accuracy | 88.23% |
Speed | 1481 fps |
Loss | 0.108 |
Params (M) | 61.10 |
Checkpoint for Fine tuning | 445 MB (.ckpt file) |
Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/alexnet |
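
The 445 MB checkpoint above can seed further training. A minimal sketch of loading it for fine-tuning, assuming a recent MindSpore API; the `AlexNet` class, the file name, and the learning-rate values are placeholders:

```python
# Minimal sketch: loading the AlexNet checkpoint for fine-tuning.
import mindspore.nn as nn
from mindspore import load_checkpoint, load_param_into_net

net = AlexNet(num_classes=10)  # hypothetical network class
param_dict = load_checkpoint("alexnet.ckpt")  # the .ckpt file above
load_param_into_net(net, param_dict)

# Resume with the table's optimizer family (Momentum) at a reduced lr;
# both values here are assumptions, not from the table.
opt = nn.Momentum(net.trainable_params(), learning_rate=0.002, momentum=0.9)
```
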

##### LeNet

Parameters | LeNet |
---|---|
Published Year | 1998 |
Paper | Gradient-Based Learning Applied to Document Recognition |
Resource | Ascend 910 |
Features | Support for Ascend, GPU, and CPU |
MindSpore Version | 0.5.0-beta |
Dataset | MNIST |
Training Parameters | epoch=10, batch_size=32 |
Optimizer | Momentum |
Loss Function | SoftmaxCrossEntropyWithLogits |
Accuracy | 98.52% |
Speed | 18680 fps |
Loss | 0.004 |
Params (M) | 0.06 |
Checkpoint for Fine tuning | 483 KB (.ckpt file) |
Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/lenet |
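
To reproduce a number like the 98.52% accuracy above, the usual pattern is `Model.eval` on the MNIST test split. A sketch, with `LeNet5`, the checkpoint name, and the dataset path as placeholders and input preprocessing omitted:

```python
# Minimal sketch: evaluating a trained LeNet checkpoint on MNIST.
import mindspore.nn as nn
import mindspore.dataset as ds
from mindspore import Model, load_checkpoint, load_param_into_net

net = LeNet5(num_classes=10)  # hypothetical network class
load_param_into_net(net, load_checkpoint("lenet.ckpt"))

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss, metrics={"Accuracy": nn.Accuracy()})

# Preprocessing (resize to 32x32, normalize, cast) omitted for brevity.
test_ds = ds.MnistDataset("/path/to/MNIST/test").batch(32)
print(model.eval(test_ds))  # e.g. {'Accuracy': 0.98...}
```
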

#### Object Detection and Segmentation

##### YOLOv3

Parameters | YOLOv3 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Mean Average Precision (mAP@0.5) | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### MobileNetV2

Parameters | MobileNetV2 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Mean Average Precision (mAP@0.5) | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### MobileNetV3

Parameters | MobileNetV3 |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Mean Average Precision (mAP@0.5) | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

##### SSD

Parameters | SSD |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
Mean Average Precision (mAP@0.5) | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

### Natural Language Processing

#### BERT

Parameters | BERT |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
GLUE Score | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

#### MASS

Parameters | MASS |
---|---|
Published Year | |
Paper | |
Resource | |
Features | |
MindSpore Version | |
Dataset | |
Training Parameters | |
Optimizer | |
Loss Function | |
ROUGE Score | |
Speed | |
Loss | |
Params (M) | |
Checkpoint for Fine tuning | |
Model for inference | |
Scripts |

#### Transformer

Parameters | Transformer |
---|---|
Published Year | 2017 |
Paper | Attention Is All You Need |
Resource | Ascend 910 |
Features | • Multi-GPU training support with Ascend |
MindSpore Version | 0.5.0-beta |
Dataset | WMT English-German |
Training Parameters | epoch=52, batch_size=96 |
Optimizer | Adam |
Loss Function | Softmax Cross Entropy |
BLEU Score | 28.7 |
Speed | 410 ms/step (8pcs) |
Loss | 2.8 |
Params (M) | 213.7 |
Checkpoint for inference | 2.4 GB (.ckpt file) |
Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/nlp/transformer |
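
Several tables above list .onnx and .geir artifacts under "Model for inference" (GEIR was renamed AIR in later MindSpore releases). Converting a trained checkpoint into such a format is a one-call export. The sketch below targets ONNX; the network class, checkpoint name, and input shape are illustrative assumptions:

```python
# Minimal sketch: exporting a trained checkpoint for inference.
import numpy as np
from mindspore import Tensor, export, load_checkpoint, load_param_into_net

net = GoogLeNet(num_classes=10)  # hypothetical network class
load_param_into_net(net, load_checkpoint("googlenet.ckpt"))

# Dummy input matching the training shape (assumed 1x3x224x224 here).
dummy = Tensor(np.zeros((1, 3, 224, 224), dtype=np.float32))
export(net, dummy, file_name="googlenet", file_format="ONNX")
```
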