vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Discussions |


Latest News 🔥

  • [2023/07] Added support for LLaMA-2! You can run and serve 7B/13B/70B LLaMA-2 models on vLLM with a single command!
  • [2023/06] Serving vLLM On any Cloud with SkyPilot. Check out a 1-click example to start the vLLM demo, and the blog post for the story behind vLLM development on the clouds.
  • [2023/06] We officially released vLLM! FastChat-vLLM integration has powered LMSYS Vicuna and Chatbot Arena since mid-April. Check out our blog post.

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Optimized CUDA kernels

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more (see the sketch after this list)
  • Tensor parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
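
As an illustration of the decoding options above, here is a minimal sketch using vLLM's offline Python API. It is a sketch only: gpt2 is just a small example checkpoint, and the parameter names (n, use_beam_search) follow the current SamplingParams API, which requires temperature=0 for beam search.

from vllm import LLM, SamplingParams

llm = LLM(model="gpt2")
prompts = ["The capital of France is"]

# Parallel sampling: n independent completions per prompt.
outputs = llm.generate(prompts, SamplingParams(n=3, temperature=0.8, max_tokens=32))

# Beam search: deterministic; the current API requires temperature=0.
outputs = llm.generate(prompts, SamplingParams(n=3, use_beam_search=True, temperature=0.0, max_tokens=32))

for request_output in outputs:
    for completion in request_output.outputs:
        print(completion.text)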

vLLM seamlessly supports many Hugging Face models, including the following architectures (a loading sketch follows the list):

  • Baichuan (baichuan-inc/Baichuan-7B, baichuan-inc/Baichuan-13B-Chat, etc.)
  • BLOOM (bigscience/bloom, bigscience/bloomz, etc.)
  • Falcon (tiiuae/falcon-7b, tiiuae/falcon-40b, tiiuae/falcon-rw-7b, etc.)
  • GPT-2 (gpt2, gpt2-xl, etc.)
  • GPT BigCode (bigcode/starcoder, bigcode/gpt_bigcode-santacoder, etc.)
  • GPT-J (EleutherAI/gpt-j-6b, nomic-ai/gpt4all-j, etc.)
  • GPT-NeoX (EleutherAI/gpt-neox-20b, databricks/dolly-v2-12b, stabilityai/stablelm-tuned-alpha-7b, etc.)
  • LLaMA & LLaMA-2 (meta-llama/Llama-2-70b-hf, lmsys/vicuna-13b-v1.3, young-geng/koala, openlm-research/open_llama_13b, etc.)
  • MPT (mosaicml/mpt-7b, mosaicml/mpt-30b, etc.)
  • OPT (facebook/opt-66b, facebook/opt-iml-max-30b, etc.)
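
Each of these is loaded by its Hugging Face model ID. A hedged sketch of distributed inference via tensor parallelism, assuming two local GPUs and using one of the listed checkpoints:

from vllm import LLM

# Shard the model across 2 GPUs with tensor parallelism.
llm = LLM(model="openlm-research/open_llama_13b", tensor_parallel_size=2)
print(llm.generate("Hello, my name is")[0].outputs[0].text)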

Install vLLM with pip or from source:

pip install vllm
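
Once installed, the OpenAI-compatible server mentioned above can be started and queried over plain HTTP. A minimal sketch, assuming the server was launched via the vllm.entrypoints.openai.api_server module on its default port 8000, again with gpt2 as an example model:

import requests

# Assumes the server was started separately, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model gpt2
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={"model": "gpt2", "prompt": "San Francisco is a", "max_tokens": 32},
)
print(response.json()["choices"][0]["text"])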

Getting Started

Visit our documentation to get started.

Performance

vLLM outperforms Hugging Face Transformers (HF) by up to 24x and Text Generation Inference (TGI) by up to 3.5x in terms of throughput. For details, check out our blog post.


Figure: Serving throughput when each request asks for 1 output completion.

Figure: Serving throughput when each request asks for 3 output completions.

Contributing

We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.