Jittor: a Just-in-time (JIT) deep learning framework

Quickstart | Install | Tutorial | Chinese

Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. The whole framework and meta-operators are compiled just-in-time. A powerful op compiler and tuner are integrated into Jittor, allowing it to generate high-performance code specialized for your model.

The front-end language is Python. Module design, the most popular design for deep learning framework interfaces, is used in the front-end. The back-end is implemented in high-performance languages such as CUDA and C++.

The following example shows how to model a two-layer neural network step by step and train it from scratch in a few lines of Python code.

import jittor as jt
from jittor import Module
from jittor import nn
import numpy as np
class Model(Module):
    def __init__(self):
        self.layer1 = nn.Linear(1, 10)
        self.relu = nn.Relu() 
        self.layer2 = nn.Linear(10, 1)
    def execute(self, x):
        x = self.layer1(x)
        x = self.relu(x)
        x = self.layer2(x)
        return x

def get_data(n): # generate random data for training test.
    for i in range(n):
        x = np.random.rand(batch_size, 1)
        y = x*x
        yield jt.float32(x), jt.float32(y)

n = 1000          # number of training batches (illustrative value)
batch_size = 50   # samples per batch (illustrative value)

model = Model()
learning_rate = 0.1
optim = nn.SGD(model.parameters(), learning_rate)

for i,(x,y) in enumerate(get_data(n)):
    pred_y = model(x)
    loss = ((pred_y - y)**2)
    loss_mean = loss.mean()
    optim.step(loss_mean)
    print(f"step {i}, loss = {loss_mean.data.sum()}")

Contents

  • Quickstart
  • Install
  • Tutorial
  • Contributing
  • Contact Us
  • The Team
  • License

Quickstart

We provide some Jupyter notebooks to help you get started with Jittor quickly.

Install

Jittor is written in Python and C++. It requires a compiler for JIT compilation. Currently, the following compilers are supported:

  • CPU compiler (requires at least one of the following)
    • g++ (>=5.4.0)
    • clang (>=8.0), recommended
  • GPU compiler (optional)
    • nvcc (>=10.0)

Jittor environment requirements:

  • System: Ubuntu >= 16.04
  • Python version >= 3.7
  • C++ compiler (g++ or clang)

Jittor offers three ways to install: pip, script or manual.

Pip install

sudo apt install python3.7-dev libomp-dev
sudo python3.7 -m pip install git+https://github.com/Jittor/jittor.git
# if you cannot access github, please download code from our website:
#     wget https://cg.cs.tsinghua.edu.cn/jittor/assets/build/jittor.tgz
#     mkdir -p jittor && tar -xvf ./jittor.tgz -C jittor
#     sudo pip install ./jittor
python3.7 -m jittor.test.test_example

single line script install

We provide a single-line command to quickly install the latest version of Jittor (Ubuntu>=16.04):

# install with clang and cuda
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install.sh | with_clang=1 with_cuda=1 bash
# install with clang
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install.sh | with_clang=1 bash
# install with g++ and cuda
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install.sh | with_gcc=1 with_cuda=1 bash
# install with g++
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install.sh | with_gcc=1 bash

After execution, the script will show some environment variables you need to export.

If you use Jittor for CPU computing, we strongly recommend clang (>=8.0) as the back-end compiler of Jittor, because it enables some customized optimizations.

manual install

We will show how to install Jittor on Ubuntu 16.04 step by step; other Linux distributions may use similar commands.

Step 1: Choose your back-end compiler

# g++
sudo apt install g++ build-essential libomp-dev

# OR clang++-8
wget -O - https://apt.llvm.org/llvm.sh > /tmp/llvm.sh
bash /tmp/llvm.sh 8

Step 2: Install Python and python-dev

Jittor needs Python version >= 3.7.

sudo apt install python3.7 python3.7-dev

Step 3: Run Jittor

The whole framework is compiled just-in-time. Let's install Jittor via pip:

git clone https://github.com/Jittor/jittor.git
sudo pip3.7 install ./jittor
export cc_path="clang++-8"
# if other compiler is used, change cc_path
# export cc_path="g++"
# export cc_path="icc"

# run a simple test
python3.7 -m jittor.test.test_example

If the test passes, your Jittor is ready.

Optional Step 4: Enable CUDA

Using CUDA in Jittor is very simple; just set the environment variable nvcc_path:

# replace this var with your nvcc location 
export nvcc_path="/usr/local/cuda/bin/nvcc" 
# run a simple cuda test
python3.7 -m jittor.test.test_cuda 

If the test passes, you can use Jittor with CUDA by setting the use_cuda flag.

import jittor as jt
jt.flags.use_cuda = 1
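
As a quick sanity check, the following minimal sketch runs a small computation with the flag enabled (it assumes nvcc_path was configured as in Optional Step 4 and uses only operations shown earlier in this README):

import jittor as jt
jt.flags.use_cuda = 1  # assumes nvcc_path is set as described above

a = jt.float32([1, 2, 3])
b = jt.float32([4, 5, 6])
print((a * b).data)  # element-wise product; expected output: [ 4. 10. 18.]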

Optional Step 5: Run full tests

To check the integrity of Jittor, you can run full tests.

python3.7 -m jittor.test -v

If any tests fail, please report bugs to us, and feel free to contribute ^_^

Tutorial

In the tutorial section, we will briefly explain the basic concepts of Jittor.

To train your model with Jittor, there are only two main concepts you need to know:

  • Var: the basic data type of Jittor
  • Operations: Jittor's operators are similar to numpy's

Var

First, let's get started with Var. Var is the basic data type of Jittor. Computation in Jittor is asynchronous for optimization. If you want to access the data, Var.data can be used for synchronous data access.

import jittor as jt
a = jt.float32([1,2,3])
print(a)
print(a.data)
# Output: float32[3,]
# Output: [ 1. 2. 3.]

And we can give the variable a name.

a.name('a')
print(a.name())
# Output: a

Operations

Jittor's operators are similar to numpy's. Let's try some operations. We create Var a and b via the operation jt.float32, then multiply them. Printing those variables shows that they have the same shape and dtype.

import jittor as jt
a = jt.float32([1,2,3])
b = jt.float32([4,5,6])
c = a*b
print(a,b,c)
print(type(a), type(b), type(c))
# Output: float32[3,] float32[3,] float32[3,]
# Output: <class 'jittor_core.Var'> <class 'jittor_core.Var'> <class 'jittor_core.Var'>

Besides that, all the operators we use as jt.xxx(Var, ...) have an alias Var.xxx(...). For example:

c.max() # alias of jt.max(c)
c.add(a) # alias of jt.add(c, a)
c.min(keepdims=True) # alias of jt.min(c, keepdims=True)

If you want to know all the operations which Jittor supports, try help(jt.ops). All the operations you find in jt.ops.xxx can be used via the alias jt.xxx.

help(jt.ops)
# Output:
#   abs(x: core.Var) -> core.Var
#   add(x: core.Var, y: core.Var) -> core.Var
#   array(data: array) -> core.Var
#   binary(x: core.Var, y: core.Var, op: str) -> core.Var
#   ......
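
For instance, jt.ops.add, its top-level alias jt.add, and the Var method form all produce the same result. The snippet below is a small sketch built only from operations already shown in this tutorial:

import jittor as jt

a = jt.float32([1, 2, 3])
b = jt.float32([4, 5, 6])
print(jt.ops.add(a, b).data)  # the op under jt.ops; expected output: [5. 7. 9.]
print(jt.add(a, b).data)      # jt.xxx alias, same result
print(a.add(b).data)          # Var.xxx(...) alias, same result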

More

If you want to know more about Jittor, please check out our notebooks.

These notebooks can be started on your own computer with python3.7 -m jittor.notebook.

Contributing

Jittor is still young. It may contain bugs and issues. Please report them in our bug tracking system. Contributions are welcome. Besides, if you have any ideas about Jittor, please let us know.

You can help Jittor in the following ways:

  • Citing Jittor in your paper
  • Recommending Jittor to your friends
  • Contributing code
  • Contributing tutorials and documentation
  • Filing an issue
  • Answering Jittor-related questions
  • Lighting up the stars
  • Keeping an eye on Jittor
  • ......

Contact Us

Website: http://cg.cs.tsinghua.edu.cn/jittor/

Email: jittor@qq.com

File an issue: https://github.com/Jittor/jittor/issues

The Team

Jittor is currently maintained by Dun Liang, Guo-Ye Yang, Guo-Wei Yang, Wen-Yang Zhou, and others from the Tsinghua CSCG Group. If you are also interested in Jittor and want to improve it, please join us!

License

Jittor is Apache 2.0 licensed, as found in the LICENSE.txt file.