vllm/tests
Latest commit: 3123f15138 by Tao He, "Fixes the incorrect argument in the prefix-prefill test cases (#3246)", 2024-03-15 20:58:10 -07:00
Directories:
  async_engine/       Asynchronous tokenization (#2879)  2024-03-15 23:37:01 +00:00
  basic_correctness/  [Test] Add basic correctness test (#2908)  2024-02-18 16:44:50 -08:00
  core/               Fixes #1556 double free (#3347)  2024-03-13 00:30:08 +00:00
  distributed/        [Test] Add basic correctness test (#2908)  2024-02-18 16:44:50 -08:00
  engine/             Asynchronous tokenization (#2879)  2024-03-15 23:37:01 +00:00
  entrypoints/        Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  kernels/            Fixes the incorrect argument in the prefix-prefill test cases (#3246)  2024-03-15 20:58:10 -07:00
  lora/               Asynchronous tokenization (#2879)  2024-03-15 23:37:01 +00:00
  metrics/            Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  models/             Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  prefix_caching/     Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  prompts/            [BugFix] Fix input positions for long context with sliding window (#2088)  2023-12-13 12:28:13 -08:00
  samplers/           Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  spec_decode/        Re-enable the 80 char line width limit (#3305)  2024-03-10 19:49:14 -07:00
  tokenization/       Asynchronous tokenization (#2879)  2024-03-15 23:37:01 +00:00
  worker/             [Speculative decoding 3/9] Worker which speculates, scores, and applies rejection sampling (#3103)  2024-03-08 23:32:46 -08:00
Files:
  __init__.py                   [Small] Formatter only checks lints in changed files (#1528)  2023-10-31 15:39:38 -07:00
  conftest.py                   Asynchronous tokenization (#2879)  2024-03-15 23:37:01 +00:00
  test_cache_block_hashing.py   Possible fix for conflict between Automated Prefix Caching (#2762) and multi-LoRA support (#1804) (#3263)  2024-03-07 23:03:22 +00:00
  test_config.py                Fix assertion failure in Qwen 1.5 with prefix caching enabled (#3373)  2024-03-14 13:56:57 -07:00
  test_regression.py            [BugFix] Fix GC bug for `LLM` class (#2882)  2024-02-14 22:17:44 -08:00
  test_sampling_params.py       [Bugfix] fix crash if max_tokens=None (#2570)  2024-01-23 22:38:55 -08:00
  test_sequence.py              [Speculative decoding 3/9] Worker which speculates, scores, and applies rejection sampling (#3103)  2024-03-08 23:32:46 -08:00