vllm/tests

Latest commit 54be8a0be2: Fix assertion failure in Qwen 1.5 with prefix caching enabled (#3373)
Author: 陈序
Co-authored-by: Cade Daniel <edacih@gmail.com>
Date: 2024-03-14 13:56:57 -07:00
Name                            Date                         Last commit
async_engine/                   2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
basic_correctness/              2024-02-18 16:44:50 -08:00   [Test] Add basic correctness test (#2908)
core/                           2024-03-13 00:30:08 +00:00   Fixes #1556 double free (#3347)
distributed/                    2024-02-18 16:44:50 -08:00   [Test] Add basic correctness test (#2908)
engine/                         2024-03-07 16:37:28 -08:00   Fix auto prefix bug (#3239)
entrypoints/                    2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
kernels/                        2024-03-13 13:45:26 -07:00   Add batched RoPE kernel (#3095)
lora/                           2024-03-13 12:18:25 -07:00   Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350)
metrics/                        2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
models/                         2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
prefix_caching/                 2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
prompts/                        2023-12-13 12:28:13 -08:00   [BugFix] Fix input positions for long context with sliding window (#2088)
samplers/                       2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
spec_decode/                    2024-03-10 19:49:14 -07:00   Re-enable the 80 char line width limit (#3305)
worker/                         2024-03-08 23:32:46 -08:00   [Speculative decoding 3/9] Worker which speculates, scores, and applies rejection sampling (#3103)
__init__.py                     2023-10-31 15:39:38 -07:00   [Small] Formatter only checks lints in changed files (#1528)
conftest.py                     2024-03-01 12:47:51 -08:00   Integrate Marlin Kernels for Int4 GPTQ inference (#2497)
test_cache_block_hashing.py     2024-03-07 23:03:22 +00:00   Possible fix for conflict between Automated Prefix Caching (#2762) and multi-LoRA support (#1804) (#3263)
test_config.py                  2024-03-14 13:56:57 -07:00   Fix assertion failure in Qwen 1.5 with prefix caching enabled (#3373)
test_regression.py              2024-02-14 22:17:44 -08:00   [BugFix] Fix GC bug for `LLM` class (#2882)
test_sampling_params.py         2024-01-23 22:38:55 -08:00   [Bugfix] fix crash if max_tokens=None (#2570)
test_sequence.py                2024-03-08 23:32:46 -08:00   [Speculative decoding 3/9] Worker which speculates, scores, and applies rejection sampling (#3103)