Commit Graph

890 Commits

Author SHA1 Message Date
Simon Mo 7d648418b8
Update Ray version requirements (#2636) 2024-01-28 14:27:22 -08:00
Murali Andoorveedu 89be30fa7d
Small async_llm_engine refactor (#2618) 2024-01-27 23:28:37 -08:00
Woosuk Kwon f8ecb84c02
Speed up Punica compilation (#2632) 2024-01-27 17:46:56 -08:00
Woosuk Kwon 5f036d2bcc
[Minor] Fix warning on Ray dependencies (#2630) 2024-01-27 15:43:40 -08:00
Hanzhi Zhou 380170038e
Implement custom all reduce kernels (#2192) 2024-01-27 12:46:35 -08:00
Xiang Xu 220a47627b
Use head_dim in config if exists (#2622) 2024-01-27 10:30:49 -08:00
Casper beb89f68b4
AWQ: Up to 2.66x higher throughput (#2566) 2024-01-26 23:53:17 -08:00
Philipp Moritz 390b495ff3
Don't build punica kernels by default (#2605) 2024-01-26 15:19:19 -08:00
dakotamahan-stability 3a0e1fc070
Support for Stable LM 2 (#2598)
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-01-26 12:45:19 -08:00
Hongxia Yang 6b7de1a030
[ROCm] add support to ROCm 6.0 and MI300 (#2274) 2024-01-26 12:41:10 -08:00
Vladimir 5265631d15
use a correct device when creating OptionalCUDAGuard (#2583) 2024-01-25 23:48:17 -08:00
Junyang Lin 2832e7b9f9
fix names and license for Qwen2 (#2589) 2024-01-24 22:37:51 -08:00
Simon Mo 3a7dd7e367
Support Batch Completion in Server (#2529) 2024-01-24 17:11:07 -08:00
LastWhisper 223c19224b
Fix the syntax error in the doc of supported_models (#2584) 2024-01-24 11:22:51 -08:00
Federico Galatolo f1f6cc10c7
Added `include_stop_str_in_output` and `length_penalty` parameters to OpenAI API (#2562) 2024-01-24 10:21:56 -08:00
Nikola Borisov 3209b49033
[Bugfix] fix crash if max_tokens=None (#2570) 2024-01-23 22:38:55 -08:00
Simon Mo 1e4277d2d1
lint: format all python files instead of just source code (#2567) 2024-01-23 15:53:06 -08:00
Antoni Baum 9b945daaf1
[Experimental] Add multi-LoRA support (#1804)
Co-authored-by: Chen Shen <scv119@gmail.com>
Co-authored-by: Shreyas Krishnaswamy <shrekris@anyscale.com>
Co-authored-by: Avnish Narayan <avnish@anyscale.com>
2024-01-23 15:26:37 -08:00
Erfan Al-Hossami 9c1352eb57
[Feature] Simple API token authentication and pluggable middlewares (#1106) 2024-01-23 15:13:00 -08:00
Jason Zhu 7a0b011dd5
Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553) 2024-01-22 14:47:25 -08:00
Harry Mellor 63e835cbcc
Fix progress bar and allow HTTPS in `benchmark_serving.py` (#2552) 2024-01-22 14:40:31 -08:00
Junyang Lin 94b5edeb53
Add qwen2 (#2495) 2024-01-22 14:34:21 -08:00
Philipp Moritz ab7e6006d6
Fix https://github.com/vllm-project/vllm/issues/2540 (#2545) 2024-01-22 19:02:38 +01:00
Cade Daniel 18bfcdd05c
[Speculative decoding 2/9] Multi-step worker for draft model (#2424) 2024-01-21 16:31:47 -08:00
Jannis Schönleber 71d63ed72e
migrate pydantic from v1 to v2 (#2531) 2024-01-21 16:05:56 -08:00
Nick Hill d75c40734a
[Fix] Keep `scheduler.running` as deque (#2523) 2024-01-20 22:36:09 -08:00
Junda Chen 5b23c3f26f
Add `group` as an argument in broadcast ops (#2522) 2024-01-20 16:00:26 -08:00
Simon Mo 00efdc84ba
Add benchmark serving to CI (#2505) 2024-01-19 20:20:19 -08:00
Roy 91a61da9b1
[Bugfix] fix load local safetensors model (#2512) 2024-01-19 16:26:16 -08:00
Zhuohan Li ef9b636e2d
Simplify broadcast logic for control messages (#2501) 2024-01-19 11:23:30 -08:00
Harry Mellor 2709c0009a
Support OpenAI API server in `benchmark_serving.py` (#2172) 2024-01-18 20:34:08 -08:00
Simon Mo dd7e8f5f64
refactor completion api for readability (#2499) 2024-01-18 16:45:14 -08:00
ljss d2a68364c4
[BugFix] Fix abort_seq_group (#2463) 2024-01-18 15:10:42 -08:00
Nikola Borisov 7e1081139d
Don't download both safetensor and bin files. (#2480) 2024-01-18 11:05:53 -08:00
Liangfu Chen 18473cf498
[Neuron] Add an option to build with neuron (#2065) 2024-01-18 10:58:50 -08:00
zspo 4df417d059
fix: fix some args desc (#2487) 2024-01-18 09:41:44 -08:00
Jason Zhu 5d80a9178b
Minor fix in prefill cache example (#2494) 2024-01-18 09:40:34 -08:00
YingchaoX 8a25d3a71a
fix stablelm.py tensor-parallel-size bug (#2482) 2024-01-18 09:39:46 -08:00
shiyi.c_98 d10f8e1d43
[Experimental] Prefix Caching Support (#1669)
Co-authored-by: DouHappy <2278958187@qq.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-01-17 16:32:10 -08:00
FlorianJoncour 14cc317ba4
OpenAI Server refactoring (#2360) 2024-01-16 21:33:14 -08:00
Hyunsung Lee e1957c6ebd
Add StableLM3B model (#2372) 2024-01-16 20:32:40 -08:00
Simon Mo 8cd5a992bf
ci: retry on build failure as well (#2457) 2024-01-16 12:51:04 -08:00
Simon Mo 947f0b23cc
CI: make sure benchmark script exit on error (#2449) 2024-01-16 09:50:13 -08:00
Chenhui Zhang f780504d12
fix weight loading for GQA with TP (#2379) 2024-01-15 15:43:59 -08:00
Simon Mo bfc072addf
Allow buildkite to retry build on agent lost (#2446) 2024-01-15 15:43:15 -08:00
Woosuk Kwon 2a18da257c
Announce the second vLLM meetup (#2444) 2024-01-15 14:11:59 -08:00
Simon Mo 6e01e8c1c8
[CI] Add Buildkite (#2355) 2024-01-14 12:37:58 -08:00
Roy 9f659bf07f
[Minor] Optimize cuda graph memory usage (#2437) 2024-01-14 18:40:51 +01:00
Woosuk Kwon 35c4bc20d9
[Minor] Fix err msg (#2431) 2024-01-12 14:02:52 -08:00
陈序 218dc2ccda
Aligning `top_p` and `top_k` Sampling (#1885)
* Align top_p and top_k with huggingface
* remove _get_prompt_and_output_tokens
* rename _apply_top_p_top_k
* compare top_p top_k with hf
* fix test errors
2024-01-12 22:51:03 +01:00