vllm/tests
Latest commit cafb8e06c5 by Yuan: [CI/BUILD] enable intel queue for longer CPU tests (#4113), 2024-06-03 10:39:50 -07:00
Name | Last commit message | Last commit date
---- | ------------------- | ----------------
async_engine | [BUGFIX] [FRONTEND] Correct chat logprobs (#5029) | 2024-05-30 02:52:14 -07:00
basic_correctness | [Scheduler] Warning upon preemption and Swapping (#4647) | 2024-05-13 23:50:44 +09:00
core | [Core] Avoid the need to pass `None` values to `Sequence.inputs` (#5099) | 2024-05-29 16:05:01 -07:00
distributed | [Core][Optimization] remove vllm-nccl (#5091) | 2024-05-29 05:13:52 +00:00
engine | [Core] Avoid the need to pass `None` values to `Sequence.inputs` (#5099) | 2024-05-29 16:05:01 -07:00
entrypoints | [BUGFIX] [FRONTEND] Correct chat logprobs (#5029) | 2024-05-30 02:52:14 -07:00
fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00
kernels | [Kernel] Pass a device pointer into the quantize kernel for the scales (#5159) | 2024-06-03 09:52:30 -07:00
lora | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00
metrics | [CI/Build] Move `test_utils.py` to `tests/utils.py` (#4425) | 2024-05-13 23:50:09 +09:00
model_executor | [CI/Build] Move `test_utils.py` to `tests/utils.py` (#4425) | 2024-05-13 23:50:09 +09:00
models | [CI/BUILD] enable intel queue for longer CPU tests (#4113) | 2024-06-03 10:39:50 -07:00
multimodal | [Core] Support image processor (#4197) | 2024-06-02 22:56:41 -07:00
prefix_caching | [Bugfix / Core] Prefix Caching Guards (merged with main) (#4846) | 2024-05-27 15:18:17 -07:00
prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00
quantization | [Feature][Kernel] Support bitsandbytes quantization and QLoRA (#4776) | 2024-06-01 14:51:10 -06:00
samplers | Update test_ignore_eos (#4898) | 2024-06-02 02:21:53 +00:00
spec_decode | [Core] Support image processor (#4197) | 2024-06-02 22:56:41 -07:00
tensorizer_loader | [Frontend] [Core] perf: Automatically detect vLLM-tensorized model, update `tensorizer` to version 2.9.0 (#4208) | 2024-05-13 14:57:07 -07:00
tokenization | [Core] Support image processor (#4197) | 2024-06-02 22:56:41 -07:00
worker | [Core][2/N] Model runner refactoring part 2. Combine prepare prefill / decode to a single API (#4681) | 2024-05-15 14:00:10 +09:00
__init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00
conftest.py | [CI/BUILD] enable intel queue for longer CPU tests (#4113) | 2024-06-03 10:39:50 -07:00
test_cache_block_hashing.py | [Core] Avoid the need to pass `None` values to `Sequence.inputs` (#5099) | 2024-05-29 16:05:01 -07:00
test_config.py | [Bugfix / Core] Prefix Caching Guards (merged with main) (#4846) | 2024-05-27 15:18:17 -07:00
test_inputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00
test_logger.py | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00
test_logits_processor.py | [Misc] Remove unnecessary ModelRunner imports (#4703) | 2024-05-09 00:17:17 -07:00
test_regression.py | [BugFix] Fix GC bug for `LLM` class (#2882) | 2024-02-14 22:17:44 -08:00
test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00
test_sequence.py | [CI/Build] Move `test_utils.py` to `tests/utils.py` (#4425) | 2024-05-13 23:50:09 +09:00
test_sharded_state_loader.py | [Core] Implement sharded state loader (#4690) | 2024-05-15 22:11:54 -07:00
test_utils.py | [Bugfix][CI/Build] Fix test and improve code for `merge_async_iterators` (#5096) | 2024-05-29 16:02:25 -07:00
utils.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00