vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Discord | Twitter/X | Developer Slack |


Latest News 🔥

  • [2025/01] We are excited to announce the alpha release of vLLM V1: A major architectural upgrade with 1.7x speedup! Clean code, optimized execution loop, zero-overhead prefix caching, enhanced multimodal support, and more. Please check out our blog post here.
  • [2025/01] We hosted the eighth vLLM meetup with Google Cloud! Please find the meetup slides from vLLM team here.
  • [2024/12] vLLM joins the PyTorch ecosystem! Easy, Fast, and Cheap LLM Serving for Everyone!
  • [2024/11] We hosted the seventh vLLM meetup with Snowflake! Please find the meetup slides from vLLM team here, and Snowflake team here.
  • [2024/10] We have just created a developer slack (slack.vllm.ai) focusing on coordinating contributions and discussing features. Please feel free to join us there!
  • [2024/10] Ray Summit 2024 held a special track for vLLM! Please find the opening talk slides from the vLLM team here. Learn more from the talks from other vLLM contributors and users!
  • [2024/09] We hosted the sixth vLLM meetup with NVIDIA! Please find the meetup slides here.
  • [2024/07] We hosted the fifth vLLM meetup with AWS! Please find the meetup slides here.
  • [2024/07] In partnership with Meta, vLLM officially supports Llama 3.1 with FP8 quantization and pipeline parallelism! Please check out our blog post here.
  • [2024/06] We hosted the fourth vLLM meetup with Cloudflare and BentoML! Please find the meetup slides here.
  • [2024/04] We hosted the third vLLM meetup with Roblox! Please find the meetup slides here.
  • [2024/01] We hosted the second vLLM meetup with IBM! Please find the meetup slides here.
  • [2023/10] We hosted the first vLLM meetup with a16z! Please find the meetup slides here.
  • [2023/08] We would like to express our sincere gratitude to Andreessen Horowitz (a16z) for providing a generous grant to support the open-source development and research of vLLM.
  • [2023/06] We officially released vLLM! FastChat-vLLM integration has powered LMSYS Vicuna and Chatbot Arena since mid-April. Check out our blog post.

About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graph
  • Quantization: GPTQ, AWQ, INT4, INT8, and FP8 (see the sketch after this list)
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill
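
As a sketch of how one of these features is enabled from Python, a quantized checkpoint is loaded by naming the quantization backend; the model ID below is illustrative, and any AWQ checkpoint on Hugging Face works the same way:

from vllm import LLM

# Load an AWQ-quantized checkpoint; the quantization argument selects the
# kernel backend. The model ID is an example, not a requirement.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")
print(llm.generate("What is PagedAttention?")[0].outputs[0].text)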

Performance benchmark: We include a performance benchmark at the end of our blog post. It compares the performance of vLLM against other LLM serving engines (TensorRT-LLM, SGLang, and LMDeploy). The implementation is under the nightly-benchmarks folder, and you can reproduce the benchmark with our one-click runnable script.

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor parallelism and pipeline parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the sketch after this list)
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPUs, and AWS Neuron
  • Prefix caching support
  • Multi-LoRA support
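
For the OpenAI-compatible server, a minimal sketch of a client call; the server is started separately (e.g., vllm serve <model>), listens on port 8000 by default, and ignores the API key unless one is configured. The model ID here is an assumption:

# Query a running vLLM OpenAI-compatible server with the official openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whichever model the server was started with
    messages=[{"role": "user", "content": "Summarize PagedAttention in one sentence."}],
)
print(response.choices[0].message.content)

Distributed serving is then a one-flag change, e.g. vllm serve <model> --tensor-parallel-size 4 to shard the model across four GPUs.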

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, Deepseek-V2 and V3)
  • Embedding models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA; see the sketch below)

Find the full list of supported models here.
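
As a sketch of the multimodal path, an image is passed alongside the text prompt; the model and prompt template follow the LLaVA example in the vLLM docs, and the image path is a placeholder:

from PIL import Image

from vllm import LLM

# Multimodal generation: the <image> placeholder in the prompt is
# model-specific, and the image itself is supplied via multi_modal_data.
llm = LLM(model="llava-hf/llava-1.5-7b-hf")
image = Image.open("example.jpg")  # placeholder path
outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is shown in this image? ASSISTANT:",
    "multi_modal_data": {"image": image},
})
print(outputs[0].outputs[0].text)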

Getting Started

Install vLLM with pip or from source:

pip install vllm
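
Once installed, a minimal offline-inference script looks like the following (mirroring the quickstart in the docs; facebook/opt-125m is just a small example model):

from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
# Sampling settings from the quickstart example.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Download (if needed) and load the model, then generate the whole batch at once.
llm = LLM(model="facebook/opt-125m")
for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r} -> Completion: {output.outputs[0].text!r}")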

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.

Sponsors

vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!

Cash Donations:

  • a16z
  • Dropbox
  • Sequoia Capital
  • Skywork AI
  • ZhenFund

Compute Resources:

  • AMD
  • Anyscale
  • AWS
  • Crusoe Cloud
  • Databricks
  • DeepInfra
  • Google Cloud
  • Lambda Lab
  • Nebius
  • Novita AI
  • NVIDIA
  • Replicate
  • Roblox
  • RunPod
  • Trainy
  • UC Berkeley
  • UC San Diego

Slack Sponsor: Anyscale

We also have an official fundraising venue through OpenCollective. We plan to use the fund to support the development, maintenance, and adoption of vLLM.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub issues or discussions.
  • For discussing with fellow users, please use Discord.
  • For coordinating contributions and development, please use Slack.
  • For security disclosures, please use GitHub's security advisory feature.
  • For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.

Media Kit