vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Discord | Twitter/X |


vLLM, AMD, Anyscale Meet & Greet at Ray Summit 2024 (Monday, Sept 30th, 5-7pm PT) at Marriott Marquis San Francisco

We are excited to announce our special vLLM event in collaboration with AMD and Anyscale. Join us to learn about recent advances in vLLM on AMD's MI300X. Register here and be a part of the event!


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention (see the sketch after this list)
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graphs
  • Quantization: GPTQ, AWQ, INT4, INT8, and FP8
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill
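
PagedAttention manages each sequence's KV cache in fixed-size blocks and uses a per-sequence block table to map logical token positions to physical cache blocks, much as an OS maps virtual pages to physical frames. Below is a minimal conceptual sketch of that mapping; it is not vLLM's actual implementation, and BLOCK_SIZE, BlockTable, and the free-list handling are illustrative.

from typing import List

BLOCK_SIZE = 16  # tokens per KV-cache block (illustrative)

class BlockTable:
    """Maps a sequence's logical KV-cache blocks to physical blocks."""

    def __init__(self) -> None:
        self.blocks: List[int] = []  # index = logical block, value = physical block

    def append_token(self, token_idx: int, free_blocks: List[int]) -> None:
        # A new physical block is allocated only when the previous one fills,
        # so per-sequence waste is bounded by one partially filled block.
        if token_idx % BLOCK_SIZE == 0:
            self.blocks.append(free_blocks.pop())

    def lookup(self, token_idx: int) -> int:
        # Physical slot holding this token's KV entry.
        physical_block = self.blocks[token_idx // BLOCK_SIZE]
        return physical_block * BLOCK_SIZE + token_idx % BLOCK_SIZE

free_blocks = list(range(1024))   # pool of physical blocks shared by all sequences
table = BlockTable()
for i in range(20):               # decode 20 tokens for one sequence
    table.append_token(i, free_blocks)
print(table.lookup(17))           # token 17 lives in the sequence's second block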

Performance benchmark: we include a benchmark that compares vLLM against other LLM serving engines (TensorRT-LLM, text-generation-inference, and lmdeploy).

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor parallelism and pipeline parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the query sketch after this list)
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPUs, and AWS Neuron
  • Prefix caching support
  • Multi-LoRA support
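
As an example, once the OpenAI-compatible server is running, any standard OpenAI client can query it. A minimal sketch, assuming the server was started on its default port 8000; the model name is illustrative:

# Start the server first, for example:
#   python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m
from openai import OpenAI

# The API key is a placeholder; vLLM only checks it if one was set at launch.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="facebook/opt-125m",
    prompt="San Francisco is a",
    max_tokens=32,
)
print(completion.choices[0].text)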

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral)
  • Embedding models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Getting Started

Install vLLM with pip or from source:

pip install vllm
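
Once installed, a minimal offline-inference sketch looks like this (the model name is illustrative; any supported Hugging Face model works):

from vllm import LLM, SamplingParams

# Download and load a model from the Hugging Face Hub.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Continuous batching processes the whole prompt list in a single call.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)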

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.

Sponsors

vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!

  • a16z
  • AMD
  • Anyscale
  • AWS
  • Crusoe Cloud
  • Databricks
  • DeepInfra
  • Dropbox
  • Google Cloud
  • Lambda Lab
  • NVIDIA
  • Replicate
  • Roblox
  • RunPod
  • Sequoia Capital
  • Skywork AI
  • Trainy
  • UC Berkeley
  • UC San Diego
  • ZhenFund

We also have an official fundraising venue through OpenCollective. We plan to use the fund to support the development, maintenance, and adoption of vLLM.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub issues or discussions.
  • For discussions with fellow users, please use Discord.
  • For security disclosures, please use GitHub's security advisory feature.
  • For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.